Premature Optimisation in Software Teams

Back in developer school you are taught to simplify a problem. You are encouraged to look for commonalities and abstract them, and to look for repeated patterns and reuse them.

When you wind forward to a large software project, these ingrained skills turn into statements that follow this sort of pattern:

We will implement complex solution X 
because it will be easier 
if we ever have to implement Y

I am torn when I hear this. I am a developer at heart, so I know the person is saying this for the right reasons and with the best intentions. Developers are efficient problem solvers, and this is a sign of an attempt to find the most elegant solution to a problem.

But I’m torn because there was an “if” in the statement. Do something if something else happens. So the Agile part of my brain is saying “the if might never happen”. The team are building something without any requirements. No-one making design decisions is doing so based on fact, so any decisions from that point are guesses. If those guesses are incorrect, not only has effort been wasted implementing the wrong thing, you may also have made it harder to implement the right thing.

Agile projects are typically arranged into sprints and each sprint contains a number of stories. It’s quite common for a coherent feature to be delivered by several stories, and those stories may be split across a number of sprints: some in one sprint, some in the next and so on.

Along with many other purposes, stories provide a way to deliver incremental business value and also allow the team members working on the story to focus on a very small dimension of a bigger problem. Most “good story guides” encourage you to create stories that represent a single dimension of a feature. Dimensions vary. They could be screens, user journeys, requirements, business rules or anything else. I like to use this model to give me ideas for different ways to break up a story.

When you have stories that are part of the same feature in a single sprint there is an argument for optimising the design and delivering them all at once. The benefit is that the design is efficient, but the risk is that if the work takes longer than planned the whole thing can’t be finished within the sprint. Depending on your circumstances that could risk a major release milestone. So you could say the team has been very efficient but it has been less effective because it has failed to meet a release commitment.

Sometimes I see teams optimise the design based on stories that exist in the backlog, and not always the ones at the top. Stories in the backlog are possible work. There is no commitment to deliver this work until they are in an active sprint. Priorities may change. As time passes more information might be uncovered that completely changes their purpose. They may no longer be required.

I have worked with a number of teams that have struggled with this. Somewhere deep inside many developers is a desire to tackle a big problem with a superb solution. Talk about incremental delivery of the software is met with retorts such as “it would take much longer that way”, “we’d be duplicating effort” and my favourite “that approach is inefficient”. All of these responses are judged against a potential product and a probable solution. However, one of the main reasons for doing Agile is to enable rapid changes of course, so why build in unnecessary rigidity? So what if the team takes longer and effort might be duplicated, as long as they build what the customer needs when they need it. The team may be less efficient but are they more effective as a whole? They definitely can be.

I liken this to Premature Optimisation. Creating a software solution to cater for something that might happen is just another form of this. This quote from Donald Knuth is often misquoted:

We should forget about small efficiencies, say about 97% of the time: 
premature optimization is the root of all evil. 
Yet we should not pass up our opportunities in that critical 3%.

People tend to focus on just this bit:

premature optimization is the root of all evil.

I still think the full quote applies to software design. We probably spend far too much time thinking about optimisations we could make that have little positive value.

Getting lost in Azure

When you start working with Azure, you start with one subscription. It tends to be private for your own use. All is good.

Over time you gain experience working with Azure and perhaps you start working with a client. They give you access to their corporate subscription. Now you have to remember to switch between subscriptions and this can become a burden.

In time not only have you amassed more clients but they’ll also be putting their Azure projects into production. You might have access to these production subscriptions. Now the impact of deploying a change to the wrong place becomes larger and mistakes become inevitable.

All that power sits behind a single Windows Live account. Single Sign On is useful in many situations but maybe not in this case!

In Azure your Live account is added to a directory. Directories can relate to zero, one or more subscriptions. In both the old and new Azure portals, when you select the subscriptions option or switch between accounts, what you are actually doing is switching directories. You are not switching subscriptions directly. As far as I know a subscription cannot be related to multiple directories.

[Image azure1: Selecting a directory – Old Portal]
[Image azure2: Selecting a directory – Old Portal]

When you switch directories, and therefore subscriptions, you want to know at a glance which one you are in. The old portal doesn’t give many options here, so, as most of us have switched to the new portal, the rest of this post will focus there.

The first option is to change the theme for each directory. You can do this by using the Settings option on the top right of the screen. However, you are limited to four options and they are quite similar. Changing the theme is the first step of customising the portal experience. I recommend that you configure the portal dashboard for each directory to reflect its purpose. That should give you a visual clue as to whether you have the correct directory selected.

It seems as though the Azure portal remembers the directory and subscription you were last using and starts there the next time you open the portal. That may or may not be useful. I have noticed, though, that you can pass the name of a directory as part of the portal URL to force the portal to start in a given directory.

For example

https://portal.azure.com/signin/index/{directoryName}

With this knowledge you can create a group of shortcuts with descriptive names to ensure you are always in the correct place.
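
For instance, a pair of bookmarks along these lines (the directory names here are made up for illustration) puts each directory one click away:

https://portal.azure.com/signin/index/contosoclient.onmicrosoft.com
https://portal.azure.com/signin/index/mypersonaldev.onmicrosoft.com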

But perhaps the simplest thing to do is add a markdown widget to the dashboard. This can contain anything but it can be used to clearly identify where you are working.
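
Something as simple as the following in that widget makes it hard to miss where you are (the wording is just an illustration):

# CLIENT X – PRODUCTION
Changes made here affect live services.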

That is all well and good if you are using the portal. But what happens if you are staring into the blue abyss that is the PowerShell command prompt? How do you ensure that you are working in the right place then?

[Image: The dark blue abyss…]

This requires a bit more discipline and care. It is quite easy to make mistakes. The classic Azure model had the concept of a default subscription. This article explains how that works in PowerShell. However, this is not so straightforward with the newer Resource Manager model. You’d like to think it was just a case of adding RM to the appropriate parts of the command, so

 Get-AzureSubscription

Becomes

Get-AzureRmSubscription

Whilst this works for some commands, the results are not exactly the same.

If you are using the RM model you use the following command to log in

Login-AzureRmAccount

This gives you access to all your subscriptions. You can see them all with

Get-AzureRmSubscription

This gives you the name, subscriptionId and tenant Id. Many subscriptions can exist in a single tenant. You then select a subscription with

Select-AzureRmSubscription -SubscriptionID $subscriptionId

So you can select the subscription, but how do you find out what is currently selected? The following command comes to your assistance

(Get-AzureRmContext).Subscription

For a belt and braces approach you can combine these commands at the top of your scripts to ensure that you only ever work on the intended subscription. A small price to pay for stress-free Azure scripting.
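
A minimal sketch of that preamble might look like the following (the subscription id is a placeholder, and the property used for the check varies slightly between AzureRM module versions):

$targetSubscriptionId = "00000000-0000-0000-0000-000000000000"   # replace with the intended subscription id

Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId $targetSubscriptionId

# Depending on the AzureRM version the id property is SubscriptionId or Id
$current = (Get-AzureRmContext).Subscription
if (@($current.SubscriptionId, $current.Id) -notcontains $targetSubscriptionId) {
    throw "Not working in the intended subscription - aborting"
}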

Azure Resource Manager Templates

I tend to find that I have to learn a new technology concept under a bit of time pressure. The requirement is suddenly presented to me and so is an imminent deadline. This was the case when I was asked to create an environment in Azure and ensure that it could be recreated easily. I was aware of Azure Resource Manager (ARM) templates, so I decided to head off in that direction. The first problem is that my awareness of something doesn’t equate to experience, so I would be learning on the job. The rest of this post covers some of the things I learned on this journey.

What is an ARM Template?

An ARM template is a JSON file that describes all the resources in an Azure resource group. The syntax is JSON but extended with expressions and functions that allow it to describe resources. See Authoring Azure Resource Manager templates for more details about the template syntax. Templates can have variables (which simply act as global variables across the template file) and parameters, which allow values to be plugged in from an external source. This makes your templates flexible: they can be designed to provision an environment, while the parameters can be used to vary server names, customise domain names or public IP addresses, and so on. In many of the examples you will find two JSON files, one being the template and the other being values for its parameters.

Getting started

Talking of examples, the best place to find some is in this repository on GitHub. Microsoft has created templates for a huge number of situations so you should find something that is close to what you need. You are free to use them verbatim, plugging in your own parameter values, or they can be used as the basis of your own templates. Whilst customising a template is as simple as hacking around the JSON with your favourite text editor, in practice it is not that obvious what you need to do. I found that a better approach was to use the template to set up its bits in Azure. I would then tailor the resource group that was created for my own needs manually via the Azure portal. Then it is possible to export the changes as a customised template using the automation script option. The download button gives you the template files that you can then start editing.

Customisation

But why do you need to edit the files at all? They provide a (usually) complete description of the resource group. I have found that although the templates that are exported do represent the resource group, they need to be whipped into shape if they are going to be reused. Firstly, the export process parameterises everything. Most of these parameters are not normally things you’d want to change, so it is better if they were converted to variables in the template. You might also want to derive the names of resources from other things: the name of the resource group might be resources_rg and you want the public IP to be named resources_ip and the network security group to be resources_nsg. Therefore, you’ll want one parameter and have the template construct the values for three variables. Secondly, if your resource group contains three servers behind a load balancer the exported template will have three virtual machine configurations in it. If you want the number of servers in the template to be configurable then you’ll be back editing the template.

Testing

Once you start editing the file, that is when your problems start. The only way to be sure that your template will work is by deploying it to Azure. The first type of problem you’ll encounter is syntactical errors in the form of malformed JSON. Incorporating a JSON validator into your workflow will minimise the amount of hair you’ll lose dealing with this type of problem. You can also upload the template into Azure via the Templates preview feature. However, at the time of writing I didn’t feel that this offered much, particularly as you have to copy and paste your template and/or edit it in place. Instead I chose to work on my templates in a richer text editor. Once I was happy with my changes I then test-deployed the template to an Azure subscription. Hold on, I have missed a step: I have not explained how you deploy templates to Azure.
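
As a rough sketch of that workflow in PowerShell (the file and resource group names are placeholders), you can catch malformed JSON locally and then ask Azure to validate the template against an existing resource group without actually deploying anything:

# Catch malformed JSON before going anywhere near Azure
Get-Content .\azuredeploy.json -Raw | ConvertFrom-Json | Out-Null

# Ask Azure to validate the template and parameters without deploying them
Test-AzureRmResourceGroupDeployment `
    -ResourceGroupName "my-test-rg" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json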

Deployment

Luckily, if you have exported the template from Azure you have already got everything you need to deploy the template using a number of languages. If you are using PowerShell you’ll find a deploy script called deploy.ps1. It can be launched as follows:

.\Deploy.ps1 -subscriptionId <your_sub_id> -resourceGroupName <rg-to-be-created> -resourceGroupLocation <region> -deploymentName <name>
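
If you prefer to call the cmdlets yourself rather than use the generated script, the deployment boils down to something like this (the names, location and paths are placeholders):

Login-AzureRmAccount
Select-AzureRmSubscription -SubscriptionId "<your_sub_id>"

# Create the resource group if it does not already exist
New-AzureRmResourceGroup -Name "my-environment-rg" -Location "North Europe" -Force

# Deploy the template, supplying values via the parameters file
New-AzureRmResourceGroupDeployment `
    -Name "my-deployment" `
    -ResourceGroupName "my-environment-rg" `
    -TemplateFile .\azuredeploy.json `
    -TemplateParameterFile .\azuredeploy.parameters.json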

You can also deploy your efforts via the previously mentioned Azure Templates feature: you simply select the deploy option. However, this method doesn’t allow you to test whether your parameters file (if you have one) is applied correctly.

Finally, you could take a leaf out of Microsoft’s book by allowing any templates you happen to have in GitHub to be invoked directly. If you add the following to your readme.md file you’ll get a nice little button that does just that.

 <a href="https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2F<MyRepoUrl>%2Fazuredeploy.json" target="_blank">
 <img src="http://azuredeploy.net/deploybutton.png"/>
 </a>

Again the downside is that it does not use a parameters file. I have also found that it is hard to get this method to target the correct subscription if you happen to have many.

What to do when things go wrong?

The Azure resource group model is pretty good at keeping you informed about ongoing deployments. If you click on the resource group that you are interested in within the portal, you’ll be able to see its deployment status on the summary screen. It will say whether there is a deployment in progress. It will also tell you if a deployment has failed. This is where to go to troubleshoot the inevitable problems you’ll have when designing and deploying your own templates.

Microsoft and Integration – A history and a prediction

I have spent much of my IT career working with Microsoft technologies. I have also found myself dealing with integrating the Microsoft stack with applications built in many other technologies.

In the dim and distant past, integrating applications built in Microsoft technologies with something else was a significant challenge. Put simply, Microsoft technology didn’t play nicely. I remember when being able to connect a Microsoft application to an Oracle database was a big deal. That opened the door to sharing data between Microsoft and non-Microsoft applications via a database intermediary. This shared-database integration solved many technical challenges at the time but it is now considered to be an anti-pattern.

The next significant event on the Microsoft integration roadmap was the release of BizTalk Server. This product grew its capabilities over a couple of releases, but a fundamental architectural change suddenly put it on many organisations’ radars. The product was capable and Microsoft priced it aggressively to ensure many people looked up and took notice.

Whilst the product was capable it was complex. Setting it up took much effort and getting it tuned for what you were asking it to do took a lot of expertise. If you could accept this initial setup cost, you ended up with a product that acted as a message broker and process manager and had adapters that allowed data to be extracted from or pushed into systems you might never have thought possible before.

However, there was another problem. Microsoft had built a mature integration product, but the nature of integration made it challenging for average Microsoft developers of the time to build integration solutions. You have to understand that the Microsoft walled garden made developers very productive because there were only so many ways to skin a cat: most problems had clear and obvious solutions. This way of working didn’t scale up when you were dealing with integrating with an AS400 system, or having to get your head around compensating for a partial failure of a long-running transaction. So this meant getting good BizTalk developers was hard. It was not a natural progression for a Microsoft developer, and people who did have the right mindset were most often to be found working in lucrative positions on other stacks.

I was recently taken through Microsoft’s current integration roadmap and technology offerings. I was surprised that BizTalk is still a primary product. In the latest release you’ll be able to run the BizTalk product on premise or using VMs on the Azure IaaS offering. There is also a PaaS service called Azure Logic Apps that enables some of the integration patterns supported by BizTalk without the need to be running your own BizTalk infrastructure, whether that be on premise or IaaS on Azure.

What I have seen with projects more recently is that they will take the path of least resistance to solve problems. So when project teams encounter an Integration challenge for the first time I can picture them reaching for Azure Logic Apps. There will be costs associated with that action but they are unlikely to be huge. The team will build on this until they find something that cannot be solved with Azure Logic Apps.

Microsoft’s answer to this is that they should use BizTalk. However, I struggle to swallow that. Even in its modern incarnation BizTalk requires some effort to stand up, run and develop solutions for. There will be a not insignificant setup cost and all the historic challenges of finding BizTalk developers have not gone away. Also I can’t see how Azure Logic Apps could be ported to BizTalk so you’ll end up with two Integration solutions.

Would it even be possible to build a case to implement BizTalk for a small number of integration scenarios? If you spot these potential problems early and decide that BizTalk is the better solution for all your integration needs from the outset, I can see many people being unconvinced that the solution couldn’t be delivered on Azure Logic Apps. On the other hand, if you are incrementally building your solution and suddenly find you need BizTalk later during delivery, it is not very Agile to wait while you source good BizTalk developers, build a BizTalk environment and change your support model.

So the approach feels a bit disjointed for me.

If you have an existing investment in BizTalk you can get more from it, and you can move it to the cloud if you want to. You can use the other capabilities in Azure and decide on a case-by-case basis whether to build in BizTalk or in Azure Logic Apps.

However, if you don’t already have that investment, the case for starting from scratch with BizTalk is reducing. So if you want to use Microsoft you will use Logic Apps. Which means that when you find things it doesn’t do, the natural alternative will be to code around the problem. How hard can it be? Well, in my experience the answer is “easy when you start, and extremely hard when you least want it to be”.

I am not sure what will be added to the PaaS stack, but I would be surprised if it wasn’t brought even closer to what BizTalk does, making BizTalk the solution that offers more control and Azure Logic Apps the solution that offers more productivity. At least the current situation is likely to keep people like me busy.