How to Create Maintainable Application – Part 3 – Improvement

In the previous posts, we explored what maintainability means and how to measure it. This section explores a few ways to improve maintainability in an application.

These are approaches that I have found effective in improving maintainability.

Design Patterns

Using design patterns correctly allows others to modify and extend the program within the scope and intention of the designer. 

Let’s assume two similarly complex functions in a program. The first function was created with an extension point via the strategy pattern. The extension point allows others to use it and extend the program within the scope and boundary that is inherently defined. When the business requests an extension to the functionality, the maintainer only needs to use the extension point, as in the sketch below. Several unit tests may already exist for the current behaviour, so the maintainer only needs to add a few new unit tests for the new extension.
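As a minimal illustration of such an extension point, here is a small strategy-pattern sketch. The names (discount strategies and an order processor) are purely illustrative and not from any particular codebase:

// Illustrative extension point: the behaviour that is expected to change is
// pulled out behind a strategy interface.
public interface IDiscountStrategy
{
    decimal Apply(decimal orderTotal);
}

public class NoDiscount : IDiscountStrategy
{
    public decimal Apply(decimal orderTotal) => orderTotal;
}

public class SeasonalDiscount : IDiscountStrategy
{
    public decimal Apply(decimal orderTotal) => orderTotal * 0.9m;
}

public class OrderProcessor
{
    private readonly IDiscountStrategy _discount;

    public OrderProcessor(IDiscountStrategy discount) => _discount = discount;

    // The complex function stays closed; extending the behaviour means
    // supplying a new strategy, not reopening this method.
    public decimal CalculateTotal(decimal orderTotal) => _discount.Apply(orderTotal);
}

A new business requirement then becomes a new IDiscountStrategy implementation plus a few unit tests for it, while the existing tests for OrderProcessor and the other strategies stay valid.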

The second function has no extension point. When the time comes to extend the functionality, the developer has to open the function and modify it directly. The original intention of the first writer dwindles a little every time this happens, and there is no way to know confidently that the function still works as it did. The maintainer has to assess the new function on its own terms, no longer within the scope the previous developer defined. Let’s assume some unit tests exist for the existing functionality. Because the maintainer has to open and modify that functionality, the existing unit tests are no longer valid and have to be replaced: the maintainer must rewrite the tests covering the previous behaviour as well as write tests for the new addition.

Knowing when to use a pattern is not an easy feat; it takes a lot of practice, trial and error. The second scenario above is the one most likely to occur. The first scenario can only happen by intention and may not always succeed. When it does succeed, however, the pay-off is usually worth the effort. I have previously blogged about some design patterns such as the cascade pattern and the command handler pattern.

Readability

The aim of good readability is to help the maintainer understand the context and intention of the code. It reduces the time required to debug, modify and extend that part of the application. The writer has to convey information and ensure that their intention is clear. A highly skilled code reader can usually discern hidden intention and information from any code; they can read code and understand the writer’s thought process at a deeper level. Not every maintainer, however, is a highly skilled code reader.

Proper Naming

To ensure that code in an application is readable, it is important to have a consistent way of naming classes, functions and variables, as in the small example below. There are already many guidelines on how to do this, such as the naming convention article on Wikipedia. What is just as important is that the team as a whole follows a standard they have agreed upon. Having a code review process also helps to reduce naming issues and increase readability.
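For illustration only, here is a hypothetical before-and-after of the same method, showing how much intent descriptive names can carry:

// Before: unclear names force the reader to guess the intent.
public decimal Calc(decimal a, int d) => a - (a * d / 100);

// After: consistent, descriptive names convey the intent directly.
public decimal ApplyDiscount(decimal price, int discountPercentage)
    => price - (price * discountPercentage / 100);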

Cyclomatic Complexity Rating

The cyclomatic complexity rating can be a good indicator of how readable a part of a function is. A good rating indicates that the code contains only a modest number of decision points, which increases readability. Jason Roberts from dontcodetired has blogged about this in more detail, with code samples.

Unit Testing

The aim of unit testing is to instill confidence when code changes during maintenance activity. By having “some” degree of unit testing, the maintainer gains some degree of confidence that they can modify existing code without breaking existing functionality. A high level of confidence is the key to continuous integration, which in turn leads to continuous deployment.

Code Coverage

When asked how much coverage is enough, I find that “Testivus on Test Coverage” by Alberto Savoia resonates with me and answers this nicely.

In summary, when there are no unit tests at all, aim simply to start writing some. There will come a point where we have an overload of unit tests; when an application reaches that point, it is time to start assessing their value and how we are creating them. At the end of the day, as a rule of thumb, it is generally good to have around 80% code coverage.

Code coverage is the insurance for the future at the cost of maintainability.

Quality Unit Test

Whilst a good degree of coverage should be the aim of every application, it is also necessary to aim for quality unit tests. Creating quality unit tests takes practice, and there are already many guides on the subject. In general, good unit tests have the following attributes (a short example follows the list):

  • Fast: As unit tests are meant to be run repeatedly, they also need to run quickly. Developers should be able to run the unit tests several times in a few minutes to confirm they have not broken existing functionality.
  • Reliable: Each test must be deterministic. When a unit test runs, it must produce exactly the same output every time. This is the key to instilling confidence in unit tests.
  • Independent: A unit test should test only one thing at a time and should not rely on other tests.
  • Readable: A unit test should be easy to understand and its intent should be clear. If the test fails, it needs to convey which behaviour of the application has failed.
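As a small sketch of these attributes in practice, the xUnit tests below (the DiscountCalculator class is hypothetical) each check one behaviour, share no state, do no I/O, and have names that tell us exactly which behaviour failed:

// DiscountCalculator is a made-up class used only to illustrate the attributes above.
public class DiscountCalculatorTests
{
    [Fact]
    public void ApplyDiscount_Returns_Reduced_Price_For_Valid_Percentage()
    {
        var calculator = new DiscountCalculator();

        var result = calculator.ApplyDiscount(price: 100m, discountPercentage: 10);

        Assert.Equal(90m, result);
    }

    [Fact]
    public void ApplyDiscount_Throws_When_Percentage_Is_Negative()
    {
        var calculator = new DiscountCalculator();

        Assert.Throws<ArgumentOutOfRangeException>(
            () => calculator.ApplyDiscount(price: 100m, discountPercentage: -5));
    }
}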

Final Words

Knowing how to measure and improve maintainability is a good start for anyone working to improve their application. However, there are usually further challenges when a person or a group attempts to improve software maintainability. The impact of these attempts can be negative and hinder the next attempt to improve maintainability.

In the next article we will be exploring these challenges in more detail.

References

How to Create Maintainable Application – Part 2 – Measurement

In the previous section, we explored the definition of maintainability and why it is critical to a company.

When given a task to review or extend current functionality in a codebase, the first thing I like to do is assess the risk of side effects on the application. This usually gives decent insight into how the application was developed and how mature the development practices are. A more mature team will have a decent codebase with a style that is followed in unison; such codebases are usually easier to understand because they follow a consistent style even though they are written by different developers. In the end, it is a risk assessment activity: how much of a risk is it to modify or extend the current functionality?

These are a few ways to measure maintainability.


Lines of Code

The number of lines of code is a good indicator of how big a class is. In the C# and .NET world, a class is usually written in a single file. I have seen a program with classes of more than 10,000 lines of code. When a class grows that big, it is an indicator of a broken SOLID principle: the class has too much responsibility and, at the same time, crosses different boundaries of the business domain. I have previously blogged about how to look for a broken Liskov Substitution Principle.

It is difficult to say what number is acceptable, as it differs case by case. Measuring lines of code can be the first test that prompts a deeper look into maintainability issues in an application. 10,000 lines of code in a class is usually too much; 100 lines may be too little, but could also be enough.

Cyclomatic Complexity

Cyclomatic complexity measures the number of decision points in a program. It indicates complexity by counting the number of linearly independent paths through a program’s source code.

A low cyclomatic complexity means a higher probability of a maintainable application. A higher number means it will take more time to modify the application, as there is a higher probability that it contains more paths and use cases to consider. An analysis using cyclomatic complexity can pinpoint areas of an application that can be improved. A small illustration follows.
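As an illustrative example (the method below is made up), cyclomatic complexity can be thought of as the number of decision points plus one:

// Three decision points (two ifs plus the && condition) give a cyclomatic
// complexity of 4, i.e. four linearly independent paths that need testing.
public string ClassifyOrder(decimal total, bool isVipCustomer)
{
    if (total <= 0)
        return "Invalid";

    if (isVipCustomer && total > 1000)
        return "Priority";

    return "Standard";
}

A method with dozens of decision points has far more paths than anyone can reason about or test, which is exactly what the metric flags.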

Deployment Frequency

A high deployment frequency is an indicator of good development practice. The only way to achieve it in the first place is to have a good, maintainable application. The longer a program stays out of production, the higher the probability that it will have a lower degree of maintainability when it finally reaches production.

I have worked on an application whose last release to production was more than two years earlier. The team had to modify the application and re-release it. The production code and what was in the repository were completely different; the code in the repository was buggy and broken, and could not be used.

Many things can happen before code reaches production: scope creep, the feature becoming obsolete before it ships, or unknown and unforeseen bugs caused by environment issues. All of these factors add up and lower the probability of the application being maintainable. A higher deployment frequency reduces this probability.

The Next Developer Test

The next developer has a broader meaning here. It could literally mean the next developer who has to maintain the code, or the developer sitting next to you. It could also mean the same developer who wrote the code, six months later, when a new feature request requires the existing code to be extended or modified.

The next developer test happens by default when doing code review. However, the judgement here is not whether the code conforms to the team’s coding standard or framework, but whether the code conveys enough information about the programmer’s intention and the business requirement for the application. The aim is to have a higher level of confidence in the next developer when they have to modify that part of the application.

Final Words

There is no silver bullet for determining the maintainability of an application; all of the measurements described above must be used together. With enough practice, an individual can, at best, sharpen their intuition, get a better feel for these degrees of maintainability, and assess the risk when they need to modify or extend the application.

In the next section, we will look into a few methods on how to improve maintainability.

How to Create Maintainable Application – Part 1 – Definition

An application can live for as long as the enterprise itself. A five-year-old startup will have a five-year-old system, and a twenty-year-old enterprise may have a batch job written twenty years ago that still runs on one of its servers.

The cost of maintaining these applications grows as time passes. The technology becomes outdated, the original developer gets promoted, and in time nobody remembers why the application was created in the first place. At best, the future maintainer is left with a lengthy document that explains some of the context but not all of it; at worst, they are left with a guessing game.

Maintaining these applications is the everyday challenge of people working in IT.

This is a four-part series. In the first part we explore what maintainability is and why we need it. The second part explores ways to measure maintainability. The third part looks at how to improve maintainability, and the fourth part looks at the challenges we often encounter, and how to mitigate them, when attempting to create maintainable software.

Let’s look into maintainability.

What Is Maintainability? 

The Systems Engineering Body of Knowledge (SEBoK) defines maintainability as the probability that a system or system element can be repaired in a defined environment within a specified period of time. Increased maintainability implies shorter repair times.

The IEEE defines maintainability as the ease with which a software system or component can be modified to correct faults, improve performance or other attributes, or adapt to a changed environment.

Maintainability is the ease with which a software system can be repaired and extended within a specified period of time. It has a direct impact on the cost of keeping the software running. An application with poor maintainability takes the developer more time to repair and extend, and when a change happens, the maintainer has lower confidence that they can make modifications without side effects. A good, maintainable application instills higher confidence that modifications can be made without side effects.

Maintainability is the ease with which a software system can be repaired and extended within a specified period of time

Why do we need maintainability?

Software must keep adapting to business requirements to satisfy user needs. As companies become more reliant on software to stay competitive, the volatile landscape of a changing business itself becomes the mandate for change. Maintainable software allows people to accommodate and adapt to these requirements at a much faster rate, with fewer side effects.

As companies become more reliant on software to stay competitive, the volatile landscape of a changing business itself becomes the mandate for change

What is also important to remember is that the degree of maintainability is a function of time. The longer a system exists in a company without being touched, the more difficult it becomes to maintain. The people who created the application forget and move on, and the undocumented reasons the system was created are lost, leaving everyone else to guess at them. Often this results in the new maintainer rewriting the application, because they find the old application unmanageable and it is easier just to do a rewrite.

In the next part of the series, we will explore a few ways that I have used to measure maintainability in an application.

References

  • Reliability, Availability, and Maintainability – Wikipedia
  • Why Is It Important to Measure Maintainability and What Are the Best Ways to Do It? – IEEE Xplore

Simple Service Collection Extensions For Microsoft Dependency Injection

On numerous occasions I have needed to register all implemented interfaces using Microsoft dependency injection. Previously, with Autofac, this functionality came out of the box.

These are a few extensions that I use to help register generics with Microsoft dependency injection.

Adding multiple generic interfaces

Suppose we have an IAnimal interface with multiple implementations, Cat, Dog and Parrot, and we would like to be able to do the following:

services.AddEnumerableInterfaces<IAnimal>(mainAssembly);

The extension for the above is simply:

public static void AddEnumerableInterfaces<T>(this IServiceCollection services, Assembly assembly, ServiceLifetime serviceLifeTime = ServiceLifetime.Scoped)
{
    // Find all concrete types in the assembly that implement T.
    var allTypes = assembly
        .GetTypes()
        .Where(x =>
            !x.IsAbstract &&
            !x.IsInterface &&
            x.GetInterfaces()
                .Any(i => i == typeof(T))).ToList();
    foreach (var t in allTypes)
    {
        // Register with the requested lifetime. Note the else-if chain:
        // without it, a scoped registration would also fall through to the singleton branch.
        if (serviceLifeTime == ServiceLifetime.Scoped)
            services.TryAddEnumerable(ServiceDescriptor.Scoped(typeof(T), t));
        else if (serviceLifeTime == ServiceLifetime.Transient)
            services.TryAddEnumerable(ServiceDescriptor.Transient(typeof(T), t));
        else
            services.TryAddEnumerable(ServiceDescriptor.Singleton(typeof(T), t));
    }
}

Add all implemented interfaces with a class name that ends with a particular word

Suppose we have multiple implementations of IRepository, with class names such as AnimalRepository, CustomerRepository and so on.

We would like to be able to do:

services.AddImplementedInterfacesNameEndsWith(mainAssembly, "Repository");

The code for the above is as follows:

public static void AddImplementedInterfacesNameEndsWith(this IServiceCollection services, Assembly assembly, string endsWith, ServiceLifetime serviceLifeTime = ServiceLifetime.Scoped)
{
    var allAssemblies = _getAllReferencedAssemblies(assembly);
    // Find all concrete types implementing an interface whose name ends with the given suffix.
    var allTypes = allAssemblies
        .SelectMany(a => a.GetTypes()
            .Where(x =>
                !x.IsAbstract &&
                !x.IsInterface &&
                x.GetInterfaces().Any(i => i.Name.EndsWith(endsWith))))
        .ToList();
    foreach (var t in allTypes)
    {
        var interfaceType = t.GetInterfaces().First(i => i.Name.EndsWith(endsWith));
        // Register with the requested lifetime (else-if so only one registration is made per type).
        if (serviceLifeTime == ServiceLifetime.Scoped)
            services.TryAddEnumerable(ServiceDescriptor.Scoped(interfaceType, t));
        else if (serviceLifeTime == ServiceLifetime.Transient)
            services.TryAddEnumerable(ServiceDescriptor.Transient(interfaceType, t));
        else
            services.TryAddEnumerable(ServiceDescriptor.Singleton(interfaceType, t));
    }
}

Here, the _getAllReferencedAssemblies code is as follows:

private static List<Assembly> _getAllReferencedAssemblies(Assembly mainAssembly)
{
    // Only load assemblies that belong to our own solution.
    var assembliesName = mainAssembly
        .GetReferencedAssemblies()
        .Where(a => a.Name.Contains("MyApplicationNamespace"))
        .ToList();
    var loadedAssemblies = assembliesName.Select(a => Assembly.Load(a)).ToList();
    loadedAssemblies.Add(mainAssembly);
    return loadedAssemblies;
}

The _getAllReferencedAssemblies helper ensures that when we register the interfaces above, we include all of the assemblies required, and that all of those assemblies have been loaded at runtime.

The Hunt of Broken Liskov Substitution Principle

Over the course of my programming experience, I’ve met plenty of code that breaks this principle. In this article, we will recap what the principle is about, look at a common symptom, and discuss what it means when we find a codebase with a broken LSP.

The Principle

The Liskov Substitution Principle (LSP) is one of the most critical object-oriented programming principles. It was first introduced by Barbara Liskov, an American computer scientist and one of the first women to be granted a doctorate in computer science. It is the ‘L’ in Uncle Bob’s famous SOLID.

Understanding LSP is critical to creating good object composition in programming. It defines an inherent rule that exists in object inheritance. In their original paper, Barbara Liskov and Jeannette Wing describe the principle as follows:

“Subtype requirement: Let o(x) be a property provable about objects x of type T. Then o(y) should be true for objects y of type S where S is a subtype of T”

Wikipedia explains LSP further: if S is a subtype of T, then objects of type T may be replaced with objects of type S without altering any of the desirable properties of the program. In simpler words, LSP states that objects in a parent-child relationship must be substitutable for each other without any consequences for the correctness of the application.

Consequence

The principle, once understood, is simple. It is, however, difficult to follow in practice. Let’s look at the consequences of breaking it.

In object-oriented programming, violating this rule means creating a parent-child class relationship whose contract does not hold, even in the context it was created in. A caller of the parent class must know the behaviour of the children; consequently, when a new child class is created, the caller class must be changed to accommodate the new child’s behaviour.

“This is the very definition of a distributed monolith.”

Any change to a child class must be accompanied by a full regression test of everything that affects the caller. This is the very definition of a distributed monolith: a small change to the system requires changes in other places that are not necessarily relevant. Change becomes a risky exercise as application deployments become tangled with one another.

Example

One of the most common symptoms of a broken LSP is the use of a switch like the following:

public void ProcessCustomer(Customer cust)
{
    switch (cust)
    {
        case Lawyer lawyer:
            ProcessLawyerCustomer(lawyer);
            break;
        case Doctor doctor:
            ProcessDoctorCustomer(doctor);
            break;
        case Police police:
            ProcessPoliceCustomer(police);
            break;
        default:
            throw new Exception("Unknown Customer Job Type");
    }
}

The innocent snippet above is a symptom of a broken LSP that developers need to keep an eye out for. When we follow the code down, the broken principle usually becomes more apparent. Quite likely, we would find something similar to the following code:

public void ProcessDoctorCustomer(Doctor doctor) 
{
    var specialty = doctor.GetSpecialty();
    ...
}

public void ProcessLawyerCustomer(Lawyer lawyer) 
{
    var lawyerType = lawyer.GetType();
    ...
}

public void ProcessPoliceCustomer(Police police) 
{
    var rank = police.GetRank();
    ...
}

The ProcessCustomer method is a consumer of the parent class Customer. With the code above, whenever we add a new customer type, ProcessCustomer must be modified as well. This is a direct violation of both LSP and the Open-Closed Principle (OCP) in SOLID.

Another way of handling the different object types is to use a generic factory to resolve a processor for the type. With this approach, the caller does not require any change when a new customer type is created.

The caller code becomes:

public class CustomerProcessor
{
    private readonly ICustomerJobProcessorFactory _customerJobProcessorFactory;

    public CustomerProcessor(ICustomerJobProcessorFactory customerJobProcessorFactory)
    { … }

    public void ProcessCustomer(Customer cust)
    {
        _customerJobProcessorFactory.CreateProcessor(cust).Process();
    }
}

ICustomerJobProcessorFactory interface is as follows:

public interface ICustomerJobProcessorFactory
{
    CustomerJobProcessor CreateProcessor<T>(T customer) where T : Customer;
}

By abstracting the processing out into a separate class, CustomerJobProcessor becomes extendable. If a new customer type is added, we just need to create a new CustomerJobProcessor; the code now conforms to OCP. A sketch of what the processor classes might look like follows.
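The processor classes themselves are not shown above, so here is a minimal sketch of one possible shape. The base class members and the DoctorCustomerProcessor name are assumptions for illustration; only the type names from the snippets above are taken from the article:

// Assumed shape of the processor abstraction referenced above.
public abstract class CustomerJobProcessor
{
    public abstract void Process();
}

public class DoctorCustomerProcessor : CustomerJobProcessor
{
    private readonly Doctor _doctor;

    public DoctorCustomerProcessor(Doctor doctor) => _doctor = doctor;

    public override void Process()
    {
        var specialty = _doctor.GetSpecialty();
        // ... doctor-specific processing lives here, not in the caller
    }
}

The factory can then map each concrete customer type to its processor, for example through DI container registrations, so adding a lawyer or police processor never touches ProcessCustomer.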

To make the code more robust, the Customer, Lawyer, Doctor and Police classes must be restructured further. Customer and its children become:

public interface ICustomer 
{
    string GetJobDescription();
}

public class Doctor : ICustomer 
{
    public string GetJobDescription() { … }
}

public class Lawyer : ICustomer 
{
    public string GetJobDescription() { … }
}

public class Police : ICustomer 
{
    public string GetJobDescription() { … }
}

If there is a method that needs to exist across all child classes, we can create an abstract Customer class. We can also force methods that every child class must override. Let’s force the child classes to override the GetJobDescription() method.

public abstract class AbstractCustomer : ICustomer
{
    public abstract string GetJobDescription();
}

We then change the child classes to inherit from the abstract class.

public class Police : AbstractCustomer
{
    public override string GetJobDescription()
    {
        return _getRank();
    }
    ...
}

Using these simple inheritance techniques, we have addressed both the OCP and LSP issues; the solution now conforms to both principles. We have also (tried to) address the distributed monolith problem: adding a customer job type does not necessarily require a full regression of the other job types.

But hold on! Is the new solution above actually better? Implementing a generic factory and restructuring the code bears an inherent cost in the form of extra layers. Whether we should take these further steps depends on whether we truly need modularity in our codebase. In a smaller codebase, the risk of change is not that high, so modularity is not that important. In a larger codebase, modularity plays a critical part in reducing the cost of change.

Wider Issues

A broken LSP, when found, is usually felt in conjunction with other problems. It is an indication of wider issues in software development.

Abuse of Yagni

Over the course of my experience, one of the usual culprits is improper usage of the ‘You Aren’t Gonna Need It’ (YAGNI) principle.

“The YAGNI principle sometimes translates to: don’t create extra layers.”

The YAGNI principle sometimes translates to: don’t create extra layers. This, however, is a short-sighted interpretation. As the example above suggests, conforming to the principles requires extra layers, and these layers, in turn, allow the application to be modular.

As developers, we must be able to balance short-term and long-term gain. This takes both experience and skill: the experience to know when to start thinking about the long-term gain, and the skill to make the changes themselves.

Too Late To Fix

When a broken LSP is identified, as the example suggests, it usually breaks other principles too. Refactoring the code becomes quite costly and, possibly, a destructive exercise.

“Refactoring the code becomes quite costly and, possibly, a destructive exercise.”

The symptom is usually only felt when the entanglement has become too great. When the application requires a change, several other classes need to be modified as well, making the change itself a risky and costly exercise.

Once a broken LSP has been identified, if it is not tackled in a responsive manner, it may never be fixed at all. The cost of fixing it becomes substantial and can no longer be accommodated with a simple fix; a rewrite of the entire codebase becomes cheaper.

Final Words

A symptom of a broken LSP is a key to wider issues. It indicates that the application may already be a distributed monolith: a simple change in the application requires regression testing and deployment of several other parts of the application, and the change becomes a risky exercise.

Understanding the LSP is the first step; being able to follow it is another. The ability to adhere to the principle, identify where it is broken and fix it is critical to creating a modular, robust and maintainable application for the long term.

Enhancing Your Unit Tests with Pressius

As developers, we need to write unit tests regularly. They insure our code against future changes while acting as a living document of the present. The value of unit tests often cannot be measured in the present; it is realised when the unexpected happens in the future.

When we write unit tests, we often need to create mock objects that act as replacements for our models. Here is an oversimplified example of a mock customer object that we may need to create. It has only 4 attributes; in reality, we may have more than 40 or even 50.

var customer = new Customer()
{
    Id = 1,
    FirstName = "Bruce",
    LastName = "Wayne",
    Occupation = "Entrepreneur"
};

The customer above is a valid customer. As good unit testers, we need to test all permutations of valid values for all of the attributes. Can the FirstName and LastName attributes accept non-alphabetical values? What are the valid characters for FirstName and LastName? What are the expected outputs? If the occupation is an enum but is saved into the database as a string, how will our application behave given other values?

The way we would usually tackle the scenarios above is by creating a series of customer objects, and maybe a test suite for a set of scenarios. With the xUnit framework, below is a small example of what we would normally create. The sample indicates which values are accepted as valid by the application; it includes null, non-alphabetical characters for the first and last name, and some values specific to the occupation.

[Theory]
[InlineData(1312, "James", "Warden", "Cashier")]
[InlineData(998, "12345", "W4'[]2", "Tree Cutter")]
[InlineData(9999, "James", "The 3rd", "Royalty")]
[InlineData(1231, null, null, null)]
public void Should_Create_Valid_Customer(int id, string firstname, string lastname, string occupation)
{
    // Prepare for test
    var customer = new Customer()
    {
        Id = id,
        FirstName = firstname,
        LastName = lastname,
        Occupation = occupation
    };
    var result = _service.CreateCustomer(customer);
    result.Id.ShouldBeGreaterThan(0);
    result.FirstName.ShouldBe(firstname);
}

Whilst the sample above is a valid test case, it is not an exhaustive list of the permutations. The non-alphabetical input for the first name and last name is tested in the same test, and all of the null values are tested together. The test does not cater for valid scenarios where the first name and last name contain alphabetical characters but the occupation is null, or where the last name is null but the first name and occupation contain values. There are many more permutations that the test above does not cover.

In addition to that drawback, the sample values that make up a “valid customer” are lost and cannot be reused in other unit tests that require a valid customer object. For example, suppose there is a function in which a valid customer creates an order, and the order creation takes a valid customer object as input. As the attribute values are not saved anywhere, creating permutations of another valid customer with the same attribute values is not possible (it would involve a lot of copying and pasting). How would the create-order function behave if the supposedly valid customer object does not have a first name or last name?

The intricacy of managing permutations of object models in our unit tests is an art in itself. A good design is necessary to create useful, robust, valid and evolvable unit tests.

Introducing Pressius

Pressius is an open source NuGet library that helps create permutations of objects. It generates a list of the target object by permutating either its attributes or its constructor parameters. Let’s look at how we would tackle the concerns above with Pressius.

The simplest way to create a permutation with Pressius is simply to call it:

var customerList = Permutor.Generate<Customer>();

By default, Pressius uses its default permutation value list; the default values can be found at https://github.com/LeonSutedja/Pressius. The command above results in the following attribute permutations of the customer object:

10 The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
10 The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog 1234567890 Cozy lummox gives smart squid who asks for job pen
10 The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
10 The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog ~!@#$%&*()_+=-`\][{}|;:,./?><'" The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
10  The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
10 xxx... The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
-2147483648 The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
2147483647 The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog

The default permutation covers several scenarios for string and integer attribute types. To further refine a valid customer object, we need to provide custom attribute values that define a valid customer. Let’s define a set of valid values for the first name attribute.

public class ValidFirstName : DefaultParameterDefinition
{
    public override List<object> InputCatalogues =>
        new List<object> {
            "John",
            "Anastasia",
            ""
        };

    public override ParameterTypeDefinition TypeName =>
        new ParameterTypeDefinition("FirstName");

    public override bool CompareParamName => true;
}

Let’s break down the ValidFirstName class. InputCatalogues contains the list of values that we consider valid for the first name. By default, Pressius attaches to a particular type (e.g. int, string and others); in this case we want to attach to an attribute name instead. The target attribute name is “FirstName”, and to enable comparison against the attribute name we set CompareParamName to true.

Next, we change the calling method to the following:

var permutor = new Permutor();
var pressiusTestObjectList = permutor
    .AddParameterDefinition(new ValidFirstName())
    .GeneratePermutation<Customer>();

The result will be the customer object with the following attributes:

10 John The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
10 John The quick brown fox jumps over the lazy dog 1234567890 Cozy lummox gives smart squid who asks for job pen
10 John The quick brown fox jumps over the lazy dog
10 John The quick brown fox jumps over the lazy dog ~!@#$%&*()_+=-`\][{}|;:,./?><'" The quick brown fox jumps over the lazy dog
10 John  The quick brown fox jumps over the lazy dog
10 John xxx... The quick brown fox jumps over the lazy dog
10 Anastasia The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
10  The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
-2147483648 John The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog
2147483647 John The quick brown fox jumps over the lazy dog The quick brown fox jumps over the lazy dog

It looks better already. Let’s finish up by setting the valid values for the last name and occupation.

public class ValidLastName : DefaultParameterDefinition
{
    public override List<object> InputCatalogues =>
        new List<object> {
            "Wick",
            "Laluna",
            ""
        };

    public override ParameterTypeDefinition TypeName =>
        new ParameterTypeDefinition("LastName");

    public override bool CompareParamName => true;
}

public class ValidOccupation : DefaultParameterDefinition
{
    public override List<object> InputCatalogues =>
        new List<object> {
            "Entrepreneur",
            "Car Dealer",
            "Death Maker"
        };

    public override ParameterTypeDefinition TypeName =>
        new ParameterTypeDefinition("Occupation");

    public override bool CompareParamName => true;
}

To complete our model, we also need to ensure that the Id field is treated as an id. The final result is as follows:

var permutor = new Permutor();
var pressiusTestObjectList = permutor
    .AddParameterDefinition(new ValidFirstName())
    .AddParameterDefinition(new ValidLastName())
    .AddParameterDefinition(new ValidOccupation())
    .WithId("Id")
    .GeneratePermutation<Customer>();

The .WithId input specifies which attribute name is an id. The output is as below:

1 John Wick Entrepreneur
2 John Wick Car Dealer
3 John Wick Death Maker
4 John Laluna Entrepreneur
5 John  Entrepreneur
6 Anastasia Wick Entrepreneur
7  Wick Entrepreneur
8 John Wick Entrepreneur
9 John Wick Entrepreneur

By creating a set of valid values with Pressius, we can ensure that the customer instances are repeatable across our test suites, and we can reuse the same definitions elsewhere. If the rules for a valid customer change, such as a new occupation being added or the first name attribute now accepting null, we simply update the attribute definition classes. A sketch of how the generated list can be reused in a test follows.
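As a sketch of the reuse this enables, the same definitions could feed a different test suite. The snippet below assumes the fluent calls shown earlier return an enumerable of Customer objects; _orderService and its CreateOrder method are hypothetical stand-ins for whatever service consumes a valid customer:

[Fact]
public void Should_Create_Order_For_Every_Valid_Customer()
{
    // Reuse the same valid-customer definitions in a different test suite.
    var permutor = new Permutor();
    var validCustomers = permutor
        .AddParameterDefinition(new ValidFirstName())
        .AddParameterDefinition(new ValidLastName())
        .AddParameterDefinition(new ValidOccupation())
        .WithId("Id")
        .GeneratePermutation<Customer>();

    foreach (var customer in validCustomers)
    {
        // _orderService is a hypothetical service under test.
        var order = _orderService.CreateOrder(customer);
        order.CustomerId.ShouldBe(customer.Id);
    }
}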

Pressius is a tool that can help create consistency in our unit test suites by simplifying object model permutation. More information can be found at https://github.com/LeonSutedja/Pressius, and the NuGet package at https://www.nuget.org/packages/Pressius.

What Programmers Can Learn From Musicians

Take the Coltrane

Music has always been a part of my life in one way or another. I have had piano lessons, performed in a band on various occasions and been involved in music events. There are values and practices that are important to musicians and very much relevant and applicable in software development, especially if we also consider what we do in IT as an art form.

Practice Practice Practice

You’ve got to learn your instrument. Then, you practice, practice, practice. And then, when you finally get up there on the bandstand, forget all that and just wail
~ Charlie Parker

Anyone who plays an instrument knows they cannot skip practice. The more practice they have, the more confident they will be when they play. An instrument is a tool to convey feeling and thought, and at the same time to create nuance for the audience. Practice is the repeatable process musicians use to correct their mistakes in a safe environment. When musicians practice, they aim for perfection, because they know practice is all they can lean on when they finally get up there on the bandstand.

Just as Charlie Parker said, once we have learnt our instrument, we need to practice, practice and practice. In IT, practice is a process we often neglect. The most important goal of practice is to correct mistakes in a safe environment whilst aiming for perfection.

To do this, we need to be able to identify our mistakes and what could have been done better. We apply our solution, reassess whether it works, and then rinse and repeat the process.

The agile concept of ‘Fail Often, Fail Fast’ is very much aligned with this notion. The ability of an IT team to quickly fail, recover, apply a solution and fail again is imperative to moving forward.

The other critical element of practice is being able to do it in a safe environment: an environment where we can make mistakes without fear of criticism, and without side effects on other parts of the system.

To achieve this, we need a combination of a culture that celebrates mistakes and an infrastructure environment that is separate from production. The bottleneck in most places usually resides in the culture rather than the environment. Build the culture so we can practice, practice and practice.

Fail often, fail fast and create a culture that celebrates mistakes.

Live Environment is not Practice Environment

There’s nothing to compare to live music, there just isn’t anything
~ Gloria Gaynor

Any musician knows that nothing compares to a live performance. It does not matter how much they have practised; in a live performance everything is different. The cool breeze of the wind, the view of the audience from the elevated stage, the blinding spotlights, and the nuance and energy of an excited, yelling crowd are all different. Even for a seasoned musician, no two live performances will ever be the same.

This is the equivalent of the production run in IT. Every application deployment is different, and no two infrastructures are ever the same. Whether it is a certain port that is open, or an obscure library that was installed many moons ago, they all contribute to the production run in one way or another. And just as Dirk Gently said, everything is connected.

Rather than alienating production, it is much better to make its acquaintance. After all, production will always be part of our software development, at least if we plan for our software to be available to other people at all. To achieve this, a strategy for getting acquainted with the production environment is required. Involvement from all parties and stakeholders is paramount, as there are costs associated with it. The production environment must be able to be replicated in a safe environment at any point in time.

Luckily for us, a newer branch of IT was created to help with this: DevOps. The benefits of DevOps practices as described by Amazon are the characteristics we would like to achieve in the software development process. The ‘one button click’ automation methodology described by my colleague, Rolf Shroff, is a practice that can help achieve this.

Make an acquaintance with the production and practice, practice, practice.

Create a Legacy

As an artist, I never want to be a moment. I want to be a legacy, and I want my music to touch people for years to come
~ Khalid

When we think about musical legacies, we think of the great music left by our ancestors, such as Mozart, Bach and Handel. In jazz we have Charlie Parker, Art Tatum, Coltrane and the great Oscar Peterson. In modern and pop music we have icons such as The Beatles, Queen and, of course, Michael Jackson.

When we hear the term ‘legacy’ in IT, we often cringe. We think of spaghetti code, technical debt and possibly an obscure technology that we may not be able to replicate with modern technology. The reality is that any application we create will become a legacy. The only question is: what legacy do we want to leave?

The negative connotation of legacy in IT culture comes from our bad experiences with older applications and technology. As technology moves fast, any code written more than five years ago will by default become legacy. Any application that is ten years old or more may have the problem that modern technology can no longer support it.

The best legacy we could give to the future is the ability not to be dependent on the past: the foundation and structure to move forward without having to worry about side effects.

In the infrastructure landscape, the microservices architecture described by Fowler and Lewis is one solution. The microservice characteristic of componentisation allows a properly constructed system to be modified with fewer side effects.

To reduce side effects in the code, we could use the command handler pattern described in my previous blog. The command handler pattern forces the user to think and code in a testable manner, which gives them more freedom in how they manage their code. The practices described in SOLID and in Uncle Bob’s Clean Code are also aligned with this notion.

Creating a legacy in IT requires thinking ahead to the future.

Create Our Music

Create a safe practice environment and have a repeatable process for improvement. Have a strategy for production and create our legacy for the future. I believe it is the responsibility of everyone who works in IT, including us programmers, to work towards this. These idealistic concepts remain only a dream if we do not strive to achieve them.

Shoot for the moon, and if you miss, you’ll land among the stars.

 

Other Related Articles:

Business Value for Developer

Whenever we create an application, a very strong statement is always thrown around:

“We want to provide as much business value for the business”

There is nothing wrong with this statement. But when we ask further, “How?” and what do we mean by “value”, nobody seems able to give a straight answer.

When someone says “business value” in business, they usually mean the dollar.

For a project to be approved and move forward to production, someone needs to justify how it would benefit the company. These benefits are usually expressed in terms of how much it would save the company in the long run, or how much the company would make by embarking on the new venture. At the end of each financial year, this is how companies are usually assessed on their performance.

Let’s look at the basics of business.

Scenario 1: Simple buying and selling.

If I buy 10 apples at $1.00 each, the cost of the apples is $10.00.

If I sell the apples for $2.00 each and I manage to sell all of them, my gross revenue is $20.00 and my profit is $10.00.

Let’s add another variable to it.

If my salary is $100.00/hr and it takes me 2 hours to buy and sell those apples, the extra cost to the company of buying and selling them is $200.00.

If I buy only 10 apples, the company’s profit on those apples is -$190.00.

For the company to at least break even on the apple business while paying my wage, I would need to buy and sell at least 200 apples every 2 hours. This, however, does not answer the question of whether it was justifiable to hire me to buy and sell apples in the first place.

Let’s see how this scenario might evolve when we develop software.

Scenario 2: A project to make more money by enabling the company to offer more to a particular targeted market segment.

Salary of the developer: $100.00 / hr.

The developer estimates it will take them 3 months to deliver the project, which equates to 12 weeks. With enough knowledge of project management, the developer adds a 20% contingency to those 12 weeks, adding 2.4 more weeks. Let’s round that up to 3 weeks, making it a 15-week project.

Assuming the developer works 7.5 hours per day, a 15-week project is 562.5 hours, and it costs the company $56,250.00 to pay the developer for the software. Here, of course, we assume the developer being hired is a skilful one, so the application will be finished and launched to production by the end of those 15 weeks, and at the earliest after 12 weeks.

If the new application is estimated to make a new profit of $2,000.00 (after tax) per month (because making money is harder than spending it…), it will take the company 28.125 months (just over 2 years) to cover the cost of the developer.

Adding the 15 weeks it took to develop, the company only starts seeing any profit after roughly two and a half years.

The project was a success: it met the budget and was on time. But should the application be considered a success?

What does this mean?

As software developers, we usually do not need to make justifications such as the above, because someone has already made them. But this is all the more reason why collaboration between the business and IT in any project is necessary.

Any business that develops software is a software company

Businesses that fail to understand this will ultimately fail at managing their IT. Businesses run by taking risks: they create contingencies, mitigate risks and plan for the best. They compete in the market with others and try to win the war by winning smaller battles. IT, however, is trained to make complex things more complex by thinking of the edge cases. The art of any good IT individual is to make complex things simple whilst not making them simpler than they are.

As developers, it is critical that we understand the business we are in. This drives the application development and effectively becomes the value of the application itself. The value of any application created for the business is measured and driven by how well the business performs after the application is in use. Only by understanding this, combined with the integration and collaboration of IT within the business, will IT stop being a cost department and start being a revenue maker.

DRY-ing code with Command Handler

Keeping code DRY is a challenge for any developer. When we were studying programming, we were taught that duplication is bad, and in some cases our scores were measured by the amount of duplicated code. Working in the industry, we hear the idea that “it’s ok to duplicate”. We are happy with this idea for a while, and so our code now has duplication; we accept it as part of programming while convincing ourselves that we will refactor it when it grows. Let’s face it: some of the code we create will probably never be refactored. It has grown too much, and we only wrote tests for some of it, not all.

Now, let me get this straight: I don’t have a problem with duplication. I have a problem with duplication that happens because we are too lazy to do it the right way. I like to think of duplication as a ‘flag’ that says “please pay attention to me”. When there is only one place where the code is duplicated, it is still maintainable and easy to find. But when we start having to copy and paste the code into more places, and changing one thing becomes a treasure hunt, it means there is an abstraction missing that was supposed to exist.

Let’s consider the following snippet:

BookAppointmentInputDto.cs

public class BookAppointmentInputDto
{
    [Required]
    public int PatientId { get; set; }

    [Required]
    public int RoomId { get; set; }

    [Required]
    public int ScheduleId { get; set; }
}

AppointmentService.cs

public ServiceResponse BookAppointment(BookAppointmentInputDto input)
{
    var isRoomFree = _scheduleRepository.IsRoomScheduleFree(input.RoomId, input.ScheduleId);
    if (!isRoomFree)
        return ServiceResponse.Failed("Room is not free. Please select other time.");

    var isPatientFree = _scheduleRepository.IsPatientFree(input.ScheduleId, input.PatientId);
    if (!isPatientFree)
        return ServiceResponse.Failed("Patient already has an appointment at this time. Please book other time.");

    // Process appointment booking
    try
    {
        var newAppointment = new Appointment();
        newAppointment.PatientId = input.PatientId;
        newAppointment.RoomId = input.RoomId;
        newAppointment.ScheduleId = input.ScheduleId;

        // Persist into the db
        var newAppointmentId = _appointmentRepository.Insert(newAppointment);
        _context.SaveChanges();

        // Log the action.
        Logger.Log("Appointment booked");

        // return
        return ServiceResponse.Success(newAppointmentId);
    }
    catch (Exception e)
    {
        return ServiceResponse.Failed(e);
    }
}

The code above is an example of a typical service class method with exception catching and several validations. Whilst it is perfectly valid and ok, the code will become a maintenance nightmare when the file grows.

These are a few sources of code duplication that we can see in the code above:

  1. Validation Rules
  2. Error/Exceptions Handling
  3. Logging

Imagine adding a few more methods such as “RescheduleAppointment”, “CancelAppointment” and “PatientFailedToAttendAppointment” to the service class. Then multiply this by the 3 other service classes you will be adding within the next week.

Any developer coding the service layer must inherently think about how they would like to tackle this code smell. Trying to keep it DRY becomes a chore. This usually results in an inconsistent code state, where some parts have logging or error and exception handling and some don’t, and some parts have their own error and exception handling, making it even more confusing to maintain. This means we cannot ensure quality and consistency throughout the code.

What we want instead is to focus on the value. The value in the function above lies in the validations that happen prior to processing the request, and in processing the request itself.

Compare to the following snippet:

BookAppointmentCommand.cs

public class BookAppointmentCommand : ICommand
{
    [Required]
    public int PatientId { get; set; }
    [Required]
    public int RoomId { get; set; }
    [Required]
    public int ScheduleId { get; set; }

    public class CreateAppointmentMapper : ICreateCommandMapper<BookAppointmentCommand, Appointment>
    {
        public Appointment Create(BookAppointmentCommand command)
        {
            var newAppointment = Appointment.Create(command.PatientId, command.RoomId, command.ScheduleId);
            return newAppointment;
        }
    }

    public class RoomMustBeAvailable : ICommandBusinessRuleValidation<BookAppointmentCommand, Appointment>
    {
        private readonly IScheduleRepository _scheduleRepository;

        public RoomMustBeAvailable(IScheduleRepository scheduleRepository)
        {
            _scheduleRepository = scheduleRepository;
        }

        public ValidationResult Validate(BookAppointmentCommand command)
        {
            var isRoomFree = _scheduleRepository.IsRoomScheduleFree(command.RoomId, command.ScheduleId);
            if (!isRoomFree)
                return ValidationResult.Failed("Room is not free. Please select other time.");
            return ValidationResult.Success();
        }
    }

    public class PatientMustBeAvailable : ICommandBusinessRuleValidation<BookAppointmentCommand, Appointment>
    {
        private readonly IScheduleRepository _scheduleRepository;

        public PatientMustBeAvailable( … ) { … }

        public ValidationResult Validate(BookAppointmentCommand command)
        {
            var isPatientFree = _scheduleRepository.IsPatientFree(command.ScheduleId, command.PatientId);
            if (!isPatientFree)
                return ValidationResult.Failed("Patient already has an appointment at this time. Please book other time.");
            return ValidationResult.Success();
        }
    }
}

And the service method becomes:

AppointmentService.cs

public ServiceResponse BookAppointment(BookAppointmentCommand command) {
    var handler = _handlerFactory(command);
    return handler.Handle(command);
}

The alternative snippet with a command handler focuses on BookAppointmentCommand.cs. Everything we really need out of the BookAppointment method is inside BookAppointmentCommand.cs. All the other details about logging and exception handling are dealt with elsewhere, usually by a decorator around the handler (a sketch follows below). The service class becomes a true facade class. We reap even more benefits when multiple places need to expose different services.
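The interfaces behind the handler factory are not shown in this post, so here is a minimal sketch of the decorator idea, assuming a simple ICommandHandler abstraction (the names below are illustrative, not the actual framework types):

// Assumed handler abstraction used by the handler factory.
public interface ICommandHandler<TCommand>
{
    ServiceResponse Handle(TCommand command);
}

// Cross-cutting concerns live once, in a decorator, instead of being repeated
// in every service method.
public class LoggingAndExceptionHandlingDecorator<TCommand> : ICommandHandler<TCommand>
{
    private readonly ICommandHandler<TCommand> _inner;

    public LoggingAndExceptionHandlingDecorator(ICommandHandler<TCommand> inner)
        => _inner = inner;

    public ServiceResponse Handle(TCommand command)
    {
        try
        {
            var response = _inner.Handle(command);
            Logger.Log($"{typeof(TCommand).Name} handled");
            return response;
        }
        catch (Exception e)
        {
            Logger.Log($"{typeof(TCommand).Name} failed: {e.Message}");
            return ServiceResponse.Failed(e);
        }
    }
}

Wrapped around every handler, for example via the DI container, this keeps logging and exception handling out of both the commands and the service facade.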

There are downsides to this pattern as well. Designing a command can be a challenge in itself, as ideally a command should not do too much. The pattern introduces many classes into the system, and for developers who aren’t used to it yet, it can feel like a nightmare compared with the first, more straightforward snippet.

I find that the benefits of the command handler usually outweigh its costs in a modern web application context, especially when used in conjunction with the CQRS pattern. I prefer a fast, quick development turnaround and throwaway code (if you don’t need the command, delete it; if you don’t need the validation, delete it) over chunky code, at any time, alongside logging and exceptions being handled properly.

Related articles:

Do you agree with the article? Do you use the command handler pattern?
What other code smells or duplication do you usually see, and how do you tackle the problem?

3 Code POV Secret for Better Code


The 1st POV is Your Self
The 2nd POV is Your Team
The 3rd POV is Your Stakeholder

Following my previous post about seeing code as an art, let’s look at the different POVs a piece of code can have.

Code for Yourself

The focus of this code is the result: having the correct output. A lack of documentation is the most common symptom here; there was no need for documentation because the coder codes for themselves. Complex algorithms can be written, and maintained easily, by that individual.

In enterprise systems, we sometimes see this type of code as part of a ‘legacy’ system. It may also be part of a system that has not yet been cleaned up and refactored. The code works and does what it is supposed to do, but when we need to modify or change it, extra caution is needed. It may look confusing (if you are not the coder) and be difficult to understand, change and modify. But it works and produces the correct output.

The problem with code in this view is that it does not communicate much to others. Even though it works correctly, if you are not the coder it may be difficult to maintain and extend in the long run. In this case, it is added to the technical debt (also see: technical debt is not a Code Mess!).

This view is the quickest way to get a working solution and confirm that the solution is correct. The very first person who needs to understand the code is the coder themselves.

Code for Your Team

In this POV, the code is more maintainable and generally easier for other developers to understand. Patterns and code intentions are more recognisable, and documentation exists in one form or another to help communicate them.

We see this code more often as part of a system we are currently working on, and we often see it in production as well. Code in this POV is more visible in a team that does code reviews regularly. Debugging becomes easier, and any developer in the team can usually understand the code. With design patterns and documentation in place, it can also be extended easily by other developers.

The code usually does not yet entirely reflect the business or the stakeholder’s intention. This is because the code is usually created when the developer does not fully engage with or understand the domain; only partial business knowledge is reflected in the code.

Quite often, this type of code may be enough for the company to move forward. Technical debt accumulates at a reasonable pace, and the code already has a certain degree of extensibility and is easily understood by other developers.

The side effect of this POV is that changes in the business may take longer to implement. This becomes more apparent when we start saying “we can’t do it in this timeframe” for a business rule change that makes the stakeholder think, “that should be simple”.

Code for The Stakeholder

The code in this POV can be difficult to understand at a glance, because it reflects the domain and may use the language and rules of the domain itself. The gap in understanding the code is actually in business knowledge rather than programming skill. The code describes the behaviour that the domain intends. In Domain-Driven Design this is part of the ubiquitous language, and Martin Fowler’s Domain Specific Language technique also helps to achieve it. A small hypothetical example follows.
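As an entirely made-up illustration (the domain, class and rule below are invented for this post), code written in the language of the domain can be read back to the stakeholder almost verbatim:

public class InsurancePolicy
{
    public DateTime ExpiryDate { get; private set; }

    // "A policy can be renewed up to 30 days after it expires" - a made-up
    // business rule, expressed in the same words the stakeholder would use.
    public bool IsEligibleForRenewal(DateTime today)
        => today <= ExpiryDate.AddDays(30);

    public void Renew(int months)
        => ExpiryDate = ExpiryDate.AddMonths(months);
}

A caller then reads like the rule itself: if (policy.IsEligibleForRenewal(today)) policy.Renew(months: 12); and when the grace period or renewal length changes, the change lands exactly where the rule lives.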

A software developer cannot create code for the stakeholder alone (with a few exceptions, for example if the developer owns the business themselves).

Code created for the stakeholder contains the stakeholder’s problem knowledge. Producing this code is time-consuming and depends on the developer’s skill at absorbing the business itself. It takes collaboration between the stakeholder and the developer; without that collaboration, creating this code may be nearly impossible.

The trademark of this type of code is that, because it follows the business, it is very easy to change along with the business. When a business rule changes, the code and implementation can change accordingly and confidently. With code in this POV, even other developers in the team can make the change confidently, without fear of side effects, as the change should only affect the context it is in.

The skill of transferring business knowledge into code takes years to practise and a lifetime to master. The aim shifts from just creating working code to business and domain modelling. The microservices approach described in Martin Fowler’s blog is a good example of how this can be achieved.

Conclusion

Developing an application is a complicated thing. Developing an application that just works is easy, but it will not last long and will soon become a ‘legacy’ that needs to be replaced. An application for a particular business must follow the business growth strategy. To be able to do that, the code must be written in a way that can follow change in the business.

Code exists to assist the business, not the other way around. In fact, in most discussions with the stakeholder, it actually helps the business to question its own processes and the procedures that must be followed. As developers, it is our job to identify how to support the business to expand and grow quicker from the technology POV, not to limit it.