Creating Chaos from Simplicity

Software is complex, often too complex for the human mind to comprehend.

As IT professionals exposed to this complexity day in and day out, we create mental models, models that simplify problems down to a level that our puny minds can handle. Design Patterns as popularised by the Gang of Four are a response to the inherent complexity in software. Here we have a set of patterns or models that 9 times out of 10 provide solutions to software architecture challenges. Patterns provide a common language that we can use to describe complex concepts in a consistent way.

Ah, consistency, something that software developers are very familiar with. Consistency is drilled into us at programming school – encapsulate the commonality once and then reuse often. This is what frameworks are all about. Who writes a string class from scratch these days?

So the thing that has amazed me enough to spur me to write this post is why do these fundamentals fall by the wayside when we find ourselves under pressure? Why do people re-invent the wheel when a simple, well known pattern would suffice?

Let me explain a very simple pattern. This is a pattern we are applying in a service-based architecture. The idea is simple:

  • Each service exposes one or more units of work.
  • Each unit of work is invoked by a Command message.
  • The Command message relates directly to the service's business domain. It is a command to invoke a business process or operation.
  • Once the unit of work is complete, either successfully or not, the outcome is reported by way of an Event message.

The pseudo code for this might be

public void Handle(MortgageApproveCommand message)
{
    ValidateMessage(message);
    var mortgageApproval = Transform(message);

    var result = Mortgage.Approve(mortgageApproval);

    if (result.OK)
    {
        PublishEvent(new MortgageApprovedEvent
        {
            Result = result
        });
    }
    else
    {
        PublishEvent(new MortgageApprovalFailedEvent
        {
            Result = result
        });
    }
}

I’m not suggesting this is perfect, but it demonstrates how to keep the implementation clean by preventing the logic of other services from creeping in, and how the Transform step provides an anti-corruption layer between the service interface and the internal business logic. I also like the simplicity of the command -> process -> event pattern.

I recently reviewed some code implemented by a project team and found the following. I’ve mapped the code onto a very simple service model (Road and Car) so that the complexity of the real business domains doesn’t cause confusion.

public void Handle(AccelerateCarCommand message)
{
    Road road = message.CurrentRoad;
    if (road == null)
    {
        throw new ArgumentNullException("No road");
    }

    if (road.SpeedLimit == null)
    {
        throw new ArgumentNullException("No speed limit");
    }

    if (road.TrafficCondition == null)
    {
        throw new ArgumentNullException("No traffic");
    }

    var actualSpeed = Car.AccelerateToOptimalSpeed(road);

    if (actualSpeed < road.SpeedLimit)
    {
        road.State = Busy;
    }
    else
    {
        road.State = Clear;
    }

    PublishEvent(new RoadStateChangedEvent
    {
        Road = road
    });
}

So what are the issues with this?

1) Passing Service objects/interface structures into external services

Road road = message.CurrentRoad;
if (road == null)
{
    throw new ArgumentNullException("No road");
}
…

The Car service is passed an instance of a Road. This couples the Road and Car services together. A change to the structure of the Road object now impacts the Car. Ideally the Car would have exposed a command that allowed the caller to pass the Speed Limit and Traffic Conditions without passing the entire state that comprises a Road. Both services do need a shared understanding of these concepts but that does not mean they have to share a representation.
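As a hedged sketch, a command owned by the Car service might carry just those two values (the type and property names here are illustrative, not taken from the reviewed code):

// Command contract owned by the Car service: callers supply the values the
// Car needs, rather than the Road service's internal representation.
public class AccelerateCarCommand
{
    public int SpeedLimit { get; set; }
    public string TrafficCondition { get; set; }
}

A change to the internal structure of Road would then no longer ripple into the Car service.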

2) Passing Interface objects into the business logic

var actualSpeed = Car.AccelerateToOptimalSpeed(road);

Not only does this compound the previous issue, but now there is no separation between the service interface and the internal logic. If the interface changes (which it will, by the way, when you least want it to) it has an impact across the implementation, not just at the perimeter. There must be an anti-corruption layer between the interface and the implementation to allow them to change at different rates. I see this as very similar to the Model-View-ViewModel (MVVM) pattern you see in many web applications.
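Building on the slimmer command sketched above, a minimal sketch of such a layer, in the spirit of the Transform step in the first example (the internal DrivingConditions type is illustrative):

// Internal model owned by the Car service's implementation, free to change
// without touching the command contract.
internal class DrivingConditions
{
    public int SpeedLimit { get; set; }
    public string TrafficCondition { get; set; }
}

public void Handle(AccelerateCarCommand message)
{
    ValidateMessage(message);

    // Anti-corruption layer: map the external contract onto the internal model.
    var conditions = Transform(message);

    // The business logic never sees the interface types.
    var actualSpeed = Car.AccelerateToOptimalSpeed(conditions);
    // ...
}

private DrivingConditions Transform(AccelerateCarCommand message)
{
    return new DrivingConditions
    {
        SpeedLimit = message.SpeedLimit,
        TrafficCondition = message.TrafficCondition
    };
}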

3) Changing the state of data you don’t own

if (actualSpeed < road.SpeedLimit)
{
    road.State = Busy;
}
else
{
    road.State = Clear;
}

Here we are changing the state of Road in the Car service.

With the previous points there are ways to refactor our way out of a corner. However, this point and the following one have much worse smells, and they are clues to what was going on in the developer’s mind during implementation.

What I see here is the developer making assumptions about what other services will do. They have changed the state of an object they don’t own because they know how the Road and Car services will coordinate to realise an end-to-end process. This is just another form of coupling… “implementation coupling”. We are using a nice interface, but the services on each side of it are making assumptions about what the other is doing. Change one implementation and you break the other.

4) Raising Domain Events the service doesn’t own

PublishEvent( new RoadStateChangedEvent
{
    Road = road
});

This is the second piece of the implementation coupling jigsaw. Now that we have changed the state of the Road, we need the Road service to commit the change. So let’s forget the problems associated with implementation coupling for a moment and think about how we would deal with this. I would reach for a Command Message where I called the Road service to commit my change. The command message is a point-to-point message that is only consumed by the Road service.
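As a hedged sketch, that might look something like this (the SendCommand helper and the command’s shape are illustrative, mirroring the PublishEvent convention used above):

// A point-to-point command consumed only by the Road service. The Road
// service decides whether to accept it and commits any state change itself.
SendCommand(new UpdateRoadStateCommand
{
    RoadId = message.RoadId,
    ActualSpeed = actualSpeed
});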

So what has the developer done? They are publishing an event!

In this system an Event message is a convention for using a Publisher/Subscriber pattern. By definition the publisher has no knowledge of the message subscribers, nor does the publisher know when the message is consumed. There are absolutely no guarantees that the message will reach the Road service, and even if it does, the service may not be able to process it.

If you are following a domain event model, a domain service should only raise events related to its business domain. Here we are raising an event from the Car service that should be owned by the Road service. Any other service could be subscribing to this event, and they will assume it was raised in response to a state change in the Road service. They may receive the event before the Road service has had a chance to commit the change, and it is entirely possible that the change could be rejected. Any one of the subscribing services may kick off another business process, which in turn may trigger further events. A rejection by the Road service would leave all the services that had consumed the event in an inconsistent state.
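By contrast, a hedged sketch of an event the Car service could legitimately raise (the event name and fields are made up):

// An event owned by the Car service's own business domain. It reports what
// the Car did and leaves the Road service to publish its own state changes.
PublishEvent(new CarAcceleratedEvent
{
    CarId = message.CarId,
    ActualSpeed = actualSpeed
});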

What a mess!

The implementation that spurred this example emerged from good intentions. What I have presented here was the result of a series of incremental design choices, each one on its own not too bad, but the result took us a long way from where we needed to be. Going back to the introduction, the real problem is that the pattern for building these units of work was not well understood, and where it was known there was a lack of buy-in. Delivery pressure meant that the team had no time to step back and understand the pattern or the concepts it is built on. Instead they forged ahead, reinventing the wheel as they went.

So we are left with coupling and chaos where we should be enjoying simplicity and consistency.

Story Kick-Offs

I can sense the fear from my colleagues when I walk into a story kick off.

“He is so picky and pedantic!”
“This is going to take all day!”
“All the planning I’ve done will be wasted”

Yes, some story kick-offs I have attended have been very painful for all involved. But why?

We do story kick-offs to ensure that everyone is on the same page when a story is picked up by the development team. All the key stakeholders have a chat which should take around 10 minutes. The objectives are to:

  • Make sure everyone understands the story and its scope
  • Check that there are no ambiguities or inconsistencies
  • Ensure everyone agrees the acceptance criteria

Sitting in on story kick-offs can tell you something about the effectiveness of the team. If there are conflicts, that might be a sign that people are pulling in different directions.

There is a famous quote

No battle plan survives exposure to the enemy

In Agile circles this could be paraphrased as

No story written in isolation survives exposure to the development team

Yet I have seen people try to do just that. On larger projects it might seem sensible to have feature stories created by business analysts on behalf of the Product Owner. Those features are split into related stories, and acceptance criteria are created. Stories are then allocated to future product increments and sprints.

There is an important component missing here… the development team!

“But they are busy delivering the current sprint and should not be disturbed”

“And we have a team of business analysts to do this sort of thing”

If the development team aren’t involved it is highly likely that something will be missed in the story definition. Waiting until the story kick-off to discover this is too late. This is often the catalyst for conflict, and a 10-minute chat becomes an hour-long argument.

A good question to ask in story kick-offs is “how will you know when the story is done?” The response will indicate whether you have a chance of completing the work or whether this will be another never-ending story that demoralises all involved. If you see the latter, the team must have the courage to recommend changes to the story.

Often there will be resistance to changing the story because of the time and effort invested in creating it in the first place. Good luck if you discover there is a different way to carve up a feature into stories, either to better align with the application architecture or to deliver business value earlier. It sometimes seems like the “story creation” team want to get shot of the story, problems and all, and for the development team to “just get on with delivering” it, resolving all the problems and carrying all the risk as they go.

Two teams working in this way are not demonstrating Shared Ownership. The story is a means to deliver business value – it is a means to an end, and not the end itself.

Writing good stories is hard and shouldn’t be underestimated. It is everyone’s responsibility to create good stories, and everyone needs to be aware that writing good stories takes practice. The development team must be part of the story creation process and will know when a story is ready for development.

And if you find yourself in a story kick-off and nobody is calling out the obvious issues … you should step up and do it, whether people like it or not.

 

 

Using Azure Table Storage

On a recent project I recommended the use of Azure Table Storage.

In one scenario, there was a need to import large volumes of data from an external system quickly and reliably. In another, we wanted to provide a read-only API that represented the state of business domain entities changing over time. It was an API that needed to be fast under load. In each case the primary focus was the volume of the data and the speed of the storage. The structure of the data and querying flexibility were less of a priority.

Another thing at the back of my mind was the tight coupling a database schema can impose on a solution. I have been on many projects where a strongly defined schema became an inhibitor rather than an enabler. Once you have invested effort in a comprehensive common schema it becomes one big service interface. Subsequently, simple changes to the database can have a large impact across applications… and this was something I wanted to avoid.

In my mind a NoSQL database looked a good fit and given that we were targeting Azure, Table Storage looked like the way to go. At the time Azure DocumentDB had just been announced and didn’t look mature enough for our needs.

So wind forward to today… We are taking Azure Table Storage out of the solution… why?

Querying

Table storage has very limited querying support. Essentially you get one index per table, based on the Partition and Row keys. This may be okay if your querying needs are limited, but you quickly become unstuck if you have requirements for ad-hoc, “what if?” querying. As it turned out, a limited logging implementation meant that the only reliable way to determine what was happening in the system was to query the data sources.
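For illustration, here is a minimal sketch using the WindowsAzure.Storage SDK we were on at the time (the entity, table and key values are made up). A point lookup on the Partition and Row keys is fast; anything else is a filtered scan with no secondary index to help.

// Entities derive from TableEntity; PartitionKey and RowKey form the only index.
public class OrderEntity : TableEntity
{
    public string Status { get; set; }
}

var connectionString = "UseDevelopmentStorage=true"; // local storage emulator, for illustration
var account = CloudStorageAccount.Parse(connectionString);
var table = account.CreateCloudTableClient().GetTableReference("orders");

// Fast: a point lookup on the Partition and Row keys.
var lookup = TableOperation.Retrieve<OrderEntity>("customer-123", "order-456");
var order = (OrderEntity)table.Execute(lookup).Result;

// Everything else: a filtered scan across the table.
var query = new TableQuery<OrderEntity>().Where(
    TableQuery.GenerateFilterCondition("Status", QueryComparisons.Equal, "Failed"));
var failedOrders = table.ExecuteQuery(query);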

Issue #1!

Backup and Restore

Backing up and restoring SQL databases is bread and butter for most organisations. Adding Azure Table Storage into the mix, however, resulted in much head scratching.

Azure Table Storage’s partitioning and data duplication (across data centres if geo-redundancy is enabled) mean it is highly available and reliable. Data loss or corruption due to system failure is unlikely. So backups aren’t required, right?

Backup and restore solve other use cases. When you think about it, you need to guard against accidental data loss, whether caused by a user or by application bugs, and you might need a means to load data into test environments. Out of the box, Azure Table Storage does not provide a good story here.

We looked at dealing with these situations by recreating data from source. This became error-prone and time-consuming. We lacked querying capability too, so it was difficult to establish whether “all the expected data was available” in the destination system. Later we looked at third-party tools such as AzCopy or those provided by Cherry Safe or Cerebrata. In the end, all of these options represented additional time and effort to solve an unexpected problem, one that had an easy solution if only we had been using a SQL database!

Issue #2!

Building Custom Tooling

To solve these and other problems we tended to build custom tooling. Although we were running an Agile project, we did have critical business milestones, and on this project story points burnt building tooling were not seen as story points delivering business value. I’ll probably elaborate on that in a future post!

Issue #3!

So…

These issues became an insurmountable challenge for table storage. I started hearing phrases such as “not fit for purpose” and “inadequate” from a number of stakeholders. The hearts and minds of the team were lost, and as a pragmatist there was little point fighting against the tide.

This is not a failure, however. I have learnt something, and I hope members of the project team have too. In Agile and Lean approaches you need to be able to experiment, get rapid feedback and change course often and quickly. In the past it would have been almost impossible to change technology two-thirds of the way into a project. Yet we are doing so. We had put incremental releases live with Table Storage and realised that it wasn’t right for us. All stakeholders understand the value that will come because they have felt the pain of not making this change.

Setting up Hot Towel with Visual Studio 2015

In my last post I highlighted the need for architects to be producers, which, amongst other things, means architects need to be comfortable with their sleeves rolled up, writing code. Sometimes this is a challenge because there is not enough time in the day to learn every new technology. I’m always looking for shortcuts.

Over the last year it became apparent that my knowledge of JavaScript libraries and SPA (Single Page Application) architecture was limited. I was going through one of the PluralSight courses, Building Web Apps & Services with Entity Framework and Web API, and the Hot Towel SPA accelerator was mentioned. This seemed like a good starting point for my learning.

Hot Towel provides Visual Studio templates for rapidly creating SPAs based on a number of JavaScript libraries and frameworks. I wanted to get this up and running on Visual Studio 2015 but I hit a few snags. Looking at the blog and the GitHub repository I could see that there had been no activity for some time, which meant it was very unlikely the project had been tested with VS 2015. In particular, the VSIX project templates did not work. Undeterred, I forged ahead with my attempt to get the template application up and running. What follows are the steps I took.

Getting Started

Without the project templates you have to start from an MVC application. So create a new ASP.NET Web Application and select the MVC template.

Installing via NuGet Package Manager

If you search for HotTowel in NuGet you’ll get several hits. The one I used was HotTowel.Angular.Breeze.

[Screenshot: NuGet search results for HotTowel]
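From the Package Manager Console the install is a one-liner:

Install-Package HotTowel.Angular.Breeze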

After installing this package you’ll notice that a folder called app has been added to your web project. This is the client side JavaScript SPA.

[Screenshot: the app folder added to the web project]

Starting the application

Set the project’s start page to Index.html in the root folder and run the project. You’ll see the following

[Screenshot: the application stuck on its start page]

This isn’t right!

The application gets stuck on this page because some components are missing. Behind the scenes there are JavaScript errors which point you to the missing bits. Hit F12 in your browser to pull up the development tools.

[Screenshot: browser developer tools showing the ngAnimate error]

The error points to the module ‘ngAnimate’ being unavailable, which means the Angular Animate package is missing. You’ll find that if you correct this there will be more errors. To cut a long story short, three packages are missing:

  • AngularJS.Animate
  • AngularJS.Route
  • AngularJS.Sanitize

[Screenshot: the AngularJS packages in NuGet showing upgrade arrows]

Back in NuGet you’ll notice all three are present but they have the blue upgrade arrow next to them. Update all three packages and rerun the application.
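If you prefer the Package Manager Console, something like the following should do it:

Update-Package AngularJS.Animate
Update-Package AngularJS.Route
Update-Package AngularJS.Sanitize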

[Screenshot: the working Hot Towel SPA]

You should now have a working Hot Towel based SPA with all the necessary packages managed through NuGet. Time to start exploring.

What being an architect means to me

When I started working in architecture rather than development roles 8 years ago, it was rare for architects to have development IDEs such as Visual Studio or Eclipse on their laptops. Not only were architects barred from accessing live and production systems, they also didn’t have access to test and development environments. Complex design concepts and ideas were communicated through Visio, PowerPoint or tools such as Sparx Enterprise Architect, and the primary audience for these deliverables was the client, not the development team building the solution. The architecture and design of a system was a static set of documents.

So as an architect I was being actively discouraged from rolling up my sleeves and getting into the detail. I had 10 years of development experience behind me, so it seemed counter-intuitive to actively limit my ability to bring those skills to bear.

As an architect I am a technical leader. A key skill of this role is the ability to communicate a solution to all stakeholders regardless of their technical background. Whilst a hands-off approach works well with business stakeholders, it is not sufficient to lead a technical team.

In my story I went with the hands-off approach. This enabled me to develop my client-facing skills whilst neglecting my technical skills. It was encouraged through a development plan that focused on architectural certification and leadership programmes rather than technical training and conferences.

However my inner techie would not be silenced, and it often jumped to the forefront when fires needed extinguishing or when technical finger-pointing erupted between suppliers. A couple of years of this left a bad taste in my mouth: how much stress had been caused because I was not proactively dealing with technical risks at source? How much of the client’s budget had been burnt (or how much of my employer’s profit lost) when projects overran due to “unforeseen technical issues” that, in hindsight, would have been uncovered much earlier if prototyping had been done earlier in the project?

The recent resurgence of Agile on my career radar has cemented my belief that the classic hands-off, BUFD (Big Up Front Design) architect is a problem rather than a solution. I have come to this conclusion because the classic architect role is not a producer role; it is a management role.

http://techbeacon.com/how-agile-killing-management

Large corporations have huge numbers of managers whose job is to guide producers rather than be producers. But companies that have applied the principles of agile to management favor products over process, and therefore favor software engineers over managers.

Architecture is still important in modern software development but it is clearly becoming a role rather than a person. People with architectural experience and aspirations can only be credible in modern software development if they are producers. Therefore they need to be able to code.

I’m not going to let my technical skills lose their relevance. It will be a challenge, one I’m dealing with by listening to technical podcasts such as DotNetRocks and SE Radio on my commute, and I have a Pluralsight playlist I’m working through. Most importantly, I have Visual Studio and Eclipse installed on my work laptop, and they are not going anywhere.