Distributed Retrospectives

Looking at past performance, understanding what was good and what wasn't, and acting on that knowledge is a fundamental activity in high-performing teams. It is also the basis of intelligence – behaviour is adapted based on previous experience. Put another way, some say the definition of insanity is repeating the same action and expecting a different outcome.

Lean talks about continuous improvement. This emerged from the manufacturing industry identifying opportunities for streamlining work and reducing waste. As Agile software development arose from Lean thinking, it comes as no surprise that this value carries forward. The Agile Manifesto repackages continuous improvement in the following principle:

At regular intervals, the team reflects on how to become more effective, then tunes and adjusts its behaviour accordingly.

This definition is more explicit. It talks about the behaviours that a team should exhibit. In effect, it is equating effectiveness to the team adjusting its behaviours.

Over time, frameworks and processes have been layered on top of the Agile Manifesto, but the concept of continuous improvement remains. XP, or eXtreme Programming, has a rule that states “Fix XP when it breaks”. The creators of XP do not assume that their framework is a one-stop shop for all eventualities. In fact, the rule’s definition specifically calls out the use of the word “when” rather than “if”. XP will have to be adapted on a continuous basis.

Likewise, Scrum prescribes a regular ceremony, the Sprint Retrospective, which further formalises the process of inspecting past behaviour and adapting it to make the team more effective.

The problem

Whether you follow Scrum to the letter, performing sprint retrospectives, or do something different, it is the team that is continuously improving. Most of the techniques for making these activities successful assume the team is in the same physical location at the same time. So, what do you do if your team happens to be distributed, in different countries and working different hours?

The principles of a ‘retrospective’, or ‘inspect and adapt’, or whatever you happen to call it, are generally the same:

  • Gather points for discussion about past performance
  • Build a shared understanding of these within the team and form a collective view of the most important things to act on
  • Formalise the activities to be undertaken to improve team behaviours, performance and effectiveness

These points hold true whether you are one team in the same room or distributed across the world in different time zones.

Information Gathering

This should be the easiest thing to do in a distributed team. After all, in a sprint retrospective with a team sat around the same table there are still periods of time when each team member gathers their thoughts individually, ready to present back.

The time when a distributed team can come together is precious, so one option is to do the information gathering offline. Hopefully retrospectives come at regular intervals, so they are not a surprise. The team can be gathering thoughts all the time and adding them to some sort of shared bucket, so they are not lost and are visible to other team members. No set period needs to be put aside for this – it can be a continuous process. Having said this, in most situations it is also useful to remind your team members of upcoming retrospectives and to put aside some time to collect their thoughts.

Collective Understanding

While it is possible to discuss the points being raised on messaging systems such as Slack, this should not be relied upon entirely. The team can discuss points as they are raised, but there will come a point where an interactive conversation is necessary. You want to ensure that the whole team has the opportunity to speak and to encourage the shy ones to contribute. You also want the team to be focused on this activity and not be distracted. Asynchronous messaging is good for some things, but this is not one of them.

The specific format of this session is down to the team. Some may be happy with a conference call; others will prefer video conferencing. Keep the session interactive so that points can be clarified, misunderstandings are reduced and all team members get a share of the “air time”. It requires effort to make this work: no one likes sitting on a conference call just listening, and it is an easy place to hide, so the team need to put in the effort to make it successful.

The group session should be scheduled at a time that minimises disruption. If there is no convenient time for the entire team, then you may have to rotate the schedule so the inconvenience is shared across all geographic regions.

Deciding which points are the most important and what to act on is a team activity. Individuals taking the lead and prioritising some actions over others must be discouraged. Neither should the team be tempted to identify the highest-priority actions offline. Not only is it likely that some team members will not understand what they are being asked, but the opportunity for the team to seek clarifications will have been missed.

Acting as a team

The final step is to formalise the actions so the team can act on them. Ideally there should be a predefined arrangement of how many actions the team should address and how they go about dealing with them. In a distributed team, it is important that the actions and owners are documented somewhere that can be accessed by everyone. It is not so important that they are written up as a group, but it is a good idea to determine action owners collaboratively. This often surfaces further misunderstandings, so if you do this as a group you might be able to catch problems early.

Whichever way you choose to implement your continuous improvement, the process itself is not exempt from improvement and the elimination of waste. In fact, as the primary vehicle for effecting change, it is arguably the most important thing to inspect. Therefore, it should not be neglected, and neither should it be assumed that once set up it can never be changed. Furthermore, it should be recognised that it requires effort from all involved. No amount of written process or facilitation guarantees success when the team aren’t willing to self-improve.

A small note on tooling

The number one piece of advice I can give to organisations trying to build effective distributed teams is:

Invest in communications

This means the obvious stuff, like high-speed Internet access that supports reliable audio and video conferencing, but it also covers establishing a working environment where using this technology is seamless and frictionless. Finally, this investment should include the right tooling, so all the team’s effort goes into the task in hand and not into wrestling with the technology.

Google Docs Retrospective Template

[Image: Google Docs retrospective template]

If you have ever been involved in a face-to-face retrospective, think of this Google Docs template as a means of turning the post-its and pens into electronic equivalents. It doesn’t hold your hand: whatever level of facilitation your team would require in the same room, the same level will be needed when working remotely. This means that if the team already has a commonly agreed approach to managing retrospectives, this template will allow them to apply it remotely. It won’t help much with teams just starting out on a continuous improvement journey. Therefore, I would recommend this approach to teams that already have a productive retrospective mindset but are now branching out to work remotely.

Retrium

Retrium is an online tool that guides teams through the process of undertaking a retrospective. It is appropriate for both co-located and remote teams. What I like about this tool is that it allows you to choose from a number of well-known facilitation techniques, such as “Start, Stop, Continue”, “Mad, Sad, Glad” and “Lean Coffee”, rather than forcing the team into one style. It also means that each retrospective doesn’t have to be the same as the last. Finally, it formalises the Think, Group, Vote & Discuss stages.

[Image: Retrium]

Whilst this is good in principle, care has to be taken when working remotely. There may be a feeling that the first three stages can be done offline. However, it is the Group stage, where the team discusses whether items can be grouped, that often elicits many of the valuable conversations. When using this tool for the first few times, it is better to do all four stages collaboratively and then adapt the process as necessary.

Build Once Deploy Anywhere – Configuring the Deployment Side

My last few posts have covered my experiences of creating a build once deploy anywhere release pipeline for Azure Cloud Services using TeamCity and Octopus Deploy. My first post covered some of the various deploy/release models and why you might want to try a build once deploy anywhere model. My second post went into a bit more detail about how I packaged my solution so that it could be consumed by Octopus Deploy. This post covers what I found when configuring Octopus Deploy to deploy those packages.

If you are interested in learning more about Octopus Deploy, I can recommend this podcast from Scott Hanselman: Orchestrating and automating deployments with Octopus Deploy and Damian Brady.

The first thing to mention is that a lot of the hurdles I encountered and lessons I learnt were due to the use of Azure Cloud Services. Whilst the discussion that follows includes a lot of details that are specific to this type of deployment, I hope that there are commonalities that are useful in many different situations.

Deploying a Cloud Service

Deploying a Cloud Service in Octopus Deploy should be straightforward. After all, there is an out-of-the-box deployment step designed for this very purpose. The thing you have to be aware of is how Octopus Deploy applies configuration transforms and setting changes to an already packaged Azure deployment. Essentially, it unpacks it, applies transformations to the files and repackages it. So far, so simple.

The first thing I tried was getting the step to swap out the packaged ServiceConfiguration.cscfg file for the environment-specific one. This is something the deployment step is advertised to do. However, this became the first obstacle to clear.

Whilst you can add rules to tell Octopus Deploy how to transform files in your package by enabling the configuration transforms feature, this does not apply to the ServiceConfiguration.cscfg file by default. In fact, it is not really doing a transform at all. Instead, it swaps the *.cscfg that the build process places in the package for the one you actually want for the environment. Octopus Deploy looks in the root of the package for a file called ServiceConfiguration.{ENV}.cscfg, where ENV is the name of the environment being deployed to, or for one called ServiceConfiguration.Cloud.cscfg. If it does not find one, the deployment fails.
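To make the convention concrete, for an environment named BETA the deployment step would expect to find one of the following in the package root (the names here are illustrative):

ServiceConfiguration.BETA.cscfg
ServiceConfiguration.Cloud.cscfg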

[Image: deploy1]

So this was the start of the changes needed to the packaging on the build side. Originally I had:

<file src="..\MyWebRole\ServiceConfiguration.*.cscfg"
      target="ServiceConfiguration" />

Instead I needed:

<file src="..\MyWebRole\ServiceConfiguration.*.cscfg"
      target="" />

The next problem I had was that my environments in Octopus Deploy did not match the names of my Azure target profiles, so even when it found the files in the root of the package the deployment still failed. The only solution I found in the time I had was to ensure the names matched; there may be other ways around this, but I couldn’t find any.

The next stage involved the transformation of the web.config file. This proved to be the biggest challenge.

Up until now, all of the files we were swapping or transforming lived outside of the Azure Cloud Service package. This time the web.config file was buried deep inside a folder structure defined by the Azure package itself, while within the outer NuGet package I had copied the transform files into a folder called WebConfigTransforms. Octopus Deploy could not handle this structure automatically. This was hard to spot because the deployment was not failing; only a missing entry in a log acted as a clue.

Octopus Deploy allows you to define custom transformation rules. These amount to expressions that describe where the configuration file is relative to the transform file. To write one, I would have had to understand the file structure that results from Octopus Deploy unpacking the Azure package. Whilst it was possible to discover this, it didn’t feel right to bake that folder structure into the deployment step. What if the folder structure were to change? Unlikely, but it might.

The configuration transforms feature screen hints that you can do something like this:

Web.BETA.config => Web.config

However, this doesn’t work either. The deployment continues, with only the following in the log to tell you something has gone wrong:

The transform pattern Web.BETA.config => Web.config was not performed due to a missing file or overlapping rule

It was Googling this error that finally surfaced this post from the Octopus Deploy help forum.

http://help.octopusdeploy.com/discussions/questions/5449-cloudservice-nuget-package-structure-for-config-files

This post indicated that the simplest solution was to include the transforms in the Azure package by setting the ‘Copy to Output Directory’ value of the web.config transforms to ‘Copy always’.

[Image: ‘Copy to Output Directory’ set to ‘Copy always’]

Doing this puts the web.config and all of its transforms into the same folder, and the configuration transforms feature could now apply the transforms automatically. It was then time to remove the web.config transform files from the *.nuspec file, as there was no longer any need to add them to the NuGet package explicitly.

[Image: deploy3]

The next stage was to apply the necessary transforms to the ServiceDefinition.csdef file. This can be achieved with the configuration transforms feature, but it requires a little work: the feature doesn’t deal with this type of file automatically, so you need a custom transform rule like this one:

ServiceDefinitionProfiles\ServiceDefinition.#{Octopus.Environment.Name}.config
=> ServiceDefinition.csdef

#{Octopus.Environment.Name} is a system variable that gives you the name of the environment you are deploying to.
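So, for an environment named BETA, the rule above resolves at deployment time to:

ServiceDefinitionProfiles\ServiceDefinition.BETA.config => ServiceDefinition.csdef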

At this point I had a functioning deployment with all the replacements and transformations working. However, there was one other aspect of my deployment worth discussing, as it highlights something you’ll commonly do as part of your release cycle but might need to think about differently in a build once deploy anywhere model: running database migrations.

Database Migration

As discussed previously, the build step is environment agnostic. Migrations, on the other hand, are applied to a specific environment. Therefore, in a build once deploy anywhere model it does not make sense to have the build step run migrations.

In order for the deployment step to run migrations, a number of things must be in place:

  • The assemblies containing the migrations must be packaged and accessible to the deployment server
  • Any scripts used to run the database migrations must be available to the deployment server
  • Any tools required to run the database migrations must be available to the deployment server

If you read the second post in this series, you might realise that the answer lies in packaging: to make everything available to the deployment server, it must be added to the NuGet package.

But first, a comment about how Octopus Deploy handles deployment scripts. In the deployment process there are four stages:

  • Pre Deploy
  • Deploy
  • Post Deploy
  • Deploy Failed

If the root of the NuGet package contains PowerShell scripts called PreDeploy.ps1, Deploy.ps1, PostDeploy.ps1 and DeployFailed.ps1, Octopus Deploy will automatically run them at the corresponding stage. This help article explains it in more detail. To have Octopus Deploy run a migration script after a successful deployment of a Cloud Service, add the migration script to the root of the package and call it PostDeploy.ps1.

The script is not the end of the story. You’ll also need the migration assemblies, and if you are using something like FluentMigrator you’ll need that tooling too. I ensured all of this was in the package. I don’t rely on the tool being in a specific location on the deployment server: if you do that, you are coupling the script and the server configuration together, and you may as well hard-code the script in the deployment project configuration. If you want the script to be part of the deployment package, it should be standalone. Package the tooling in a known location and have the script refer to the tool in relative terms.

<!-- Package deployment script - Must be on the root -->
<file src="DeployScripts\Migrations\*.*" target="" />

<!-- Add migration DLLs in the package so they are available on the Deploy Server -->
<file src="Migrations\bin\$ConfigurationName$\Migrations.dll" target="migrationdlls" />

<!-- Bring across the Migration Tool to ensure that we can run the migration -->
<file src="packages\FluentMigrator.Tools.1.6.0\tools\AnyCPU\40\*.*" target="migrationtool" />

The PowerShell script itself has a line like this in it:

$migrationExePath = (join-path $currentDirectory "migrationtool\Migrate.exe")
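Putting the pieces together, here is a minimal sketch of what such a PostDeploy.ps1 might look like. The Octopus variable name and the Migrate.exe switches are assumptions – check the help output of your FluentMigrator version for the exact arguments:

# PostDeploy.ps1 – run automatically by Octopus Deploy after a successful deployment
# Locate this script's folder; the tool and assemblies were packaged relative to it
$currentDirectory = Split-Path -Parent $MyInvocation.MyCommand.Path

$migrationExePath = (Join-Path $currentDirectory "migrationtool\Migrate.exe")
$migrationDllPath = (Join-Path $currentDirectory "migrationdlls\Migrations.dll")

# The connection string comes from an Octopus variable scoped to the target
# environment (the variable name here is hypothetical)
$connectionString = $OctopusParameters["DatabaseConnectionString"]

# Run the migrations against the target environment's database
& $migrationExePath --provider sqlserver --connectionString "$connectionString" --target "$migrationDllPath" --task migrate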

This is what it took to create a working release pipeline. I have simplified or skipped over things to keep these posts brief, but they should give a flavour of the typical pain points. I hope you find them useful.

Build Once Deploy Anywhere – Configuring the Build Side

In my last post I discussed various build and deployment models, and in particular build once deploy anywhere. Rather than leaving it there, I thought I should make this more concrete by drawing on some experience of actually doing it for real.

I was using TeamCity for the build part of the model and Octopus Deploy for deployment, but this isn’t a post about how to use these technologies. Instead, I hope to highlight some of the things you need to think about when implementing this model. My intention is that this is applicable to any build and deployment technology, but having a concrete example is a useful reference point.

It is important to understand what you want before configuring the technology to work that way. However, you should not forget to consider how the technology wants you to work.

For example, using Octopus Deploy did end up influencing what the end result looked like. Octopus Deploy works through conventions – if you use them, the product does much of the heavy lifting without much help. If you don’t, you are likely to have your work cut out. The very fact that you are using the conventions Octopus Deploy expects has impacts that ripple out to your build server and even your project or solution structure.

In this setup, the build server (TeamCity in my case) was responsible for the following:

  • Getting the code from source control
  • Compiling the code
  • Running unit tests
  • Packaging the solution
  • Kicking off an automated release into a Dev/Test environment

The package is the key interface between the build and deployment server. As an interface it is the coupling point – it is the place where the build has to understand what the deployment server is expecting.

Environment configuration is the hot spot for build once deploy anywhere models. Here the build cannot be relied upon to apply environment-specific settings.

Once, build servers would be configured to compile a particular build configuration in a Visual Studio solution. Each configuration mapped to a particular environment, and the act of compiling the solution ensured that all the necessary configuration transforms were applied. The preconfigured solution was packaged and then dropped without change into an environment. The primary drawback was that the solution and the build server needed to know exactly how many environments you had. To some people this might not be an issue, but to others it represents an unnecessary overhead.
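For reference, these transforms are plain XDT files. A minimal Web.BETA.config might look something like this (the appSettings key and value are invented for illustration):

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- Overwrite the matching appSetting with the BETA environment's value -->
    <add key="ApiUrl" value="https://beta.example.com"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>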

Coming from a developer background, I want the configuration settings for all environments sitting alongside my code in my source control system. If I came from an infrastructure background, maybe I’d think differently: maybe I’d be happy having all the configuration settings available to the deployment server in a different way, separate from the source code. However, I’m not, so the primary challenge when setting up a build once deploy anywhere release pipeline is how to get all the necessary configuration settings from source control to the deployment stage. This is a question of how to package the solution.

Octopus Deploy wants NuGet packages, so the build process has to create and publish them. Depending on what you are deploying, you may be creating a NuGet package of a package – for example, Azure Cloud Services need to be packaged first and then placed in a NuGet wrapper. When you look at a Cloud Service package there are a number of configuration points. Firstly, there are the web and worker roles themselves, so there will be at least one app.config or web.config file lying around. Next come the service configuration and the service definition files.

When working with a build once deploy anywhere model, the first thing to remember is that it doesn’t really matter which configuration you apply to the solution at the build stage. Okay, you might need to consider the configuration settings required to run your test suite, but these settings are not going to be deployed to, or at least used in, any target environment. Instead, the deployment stage will transform or replace your configuration as necessary for the particular environment it is deploying to. In other words, the build stage is environment agnostic – it is completely unaware of which environments you have, and it is not impacted if you add or even remove environments.

[Image: nuget12]

To create NuGet packages you need a *.nuspec file. This is an XML-based manifest for the package, describing all the files necessary to create it. The next thing you need is a tool to interpret the *.nuspec file and turn it into a package. There are various tools that can do this; I chose OctoPack as I was targeting Octopus Deploy – it seemed the obvious choice. OctoPack itself is a NuGet package, so using it is as simple as adding it, through the NuGet package manager, to the projects in your solution that you want to package, and running a command such as:

msbuild MySolution.sln /t:Build /p:RunOctoPack=true

Now we know how to create a package, but how is that going to solve the problem of getting all the necessary configuration to the deployment server? The key is what we put in the *.nuspec file.
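As a reminder of the shape of the manifest, the <file> elements shown below all live inside the <files> section of a *.nuspec that looks something like this (the metadata values are placeholders):

<?xml version="1.0"?>
<package xmlns="http://schemas.microsoft.com/packaging/2010/07/nuspec.xsd">
  <metadata>
    <id>MyWebRole</id>
    <version>1.0.0</version>
    <authors>MyTeam</authors>
    <description>Deployment package for MyWebRole</description>
  </metadata>
  <files>
    <!-- <file src="..." target="..." /> elements go here -->
  </files>
</package>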

In the case of an Azure Cloud Service web role, we need the web.config and its environment-specific transforms, the service definition file and its transforms, and the environment-specific service configuration files. This can be achieved like this:

<file src="..\MyWebRole\bin\$ConfigurationName$\app.publish\*.*" 
       target="" />

<file src="web.*.config" target="WebConfigTransforms" /> 

<file src="..\MyWebRole\ServiceConfiguration.*.cscfg" 
        target="ServiceConfiguration" />

<file src="..\MyWebRole\bin\$ConfigurationName$\ServiceDefinition.csdef" 
        target="" />
<file src="..\MyWebRole\ServiceDefinitionProfiles\ServiceDefinition.*.config" 
        target="ServiceDefinitionTransforms" />

Here I’m doing the following:

  1. Packaging the output of the Cloud Service
  2. Adding the Web.config transforms into a folder called “WebConfigTransforms” in the package
  3. Adding the environment specific ServiceConfiguration.cscfg files into a folder called “ServiceConfiguration” in the package
  4. Adding the primary ServiceDefinition file to the root of the package
  5. Adding transforms for the ServiceDefinition into a folder called “ServiceDefinitionTransforms” in the package

It should be noted that the web.config file is already part of the web role, and therefore it is automatically included in the Azure cloud package. The first line of the snippet above adds that package. The result is the *.cspkg file in the NuGet package below. This is just a zip file, so you can explore its contents in your favourite zip tool.

[Image: the *.cspkg file inside the NuGet package]
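If you want to see for yourself, a couple of lines of PowerShell will do it (the file names here are illustrative):

# Expand-Archive insists on a .zip extension, so copy the package first
Copy-Item MyWebRole.1.0.0.nupkg MyWebRole.zip
Expand-Archive MyWebRole.zip -DestinationPath .\package-contents
Get-ChildItem .\package-contents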

This represents my first attempt at packaging my solution. In the next post I’ll walk through what I needed to do on the deployment side to consume this package and, as it turned out, what I needed to change in the packaging step to align with what Octopus Deploy wanted.

Build Once Deploy Anywhere

It wasn’t long ago that deploying a new solution into Live was a major undertaking. There were weeks of planning the release, which resulted in a tome of a release plan: the detailed step-by-step instructions to be followed. It included pre-release activities, things to do on the night and then all the clean-up tasks. These steps were very detailed – they were the exact commands a person (yes, a person!) would execute on the servers – with timings down to the nearest 30 minutes so the release activities could be scheduled.

If you were lucky, you got to try these steps out in at least one environment before going live. The result was numerous changes – after all, how were you supposed to determine all the steps in minute detail when you’d never performed them before? The release plan was revised time and again until the release night was upon you.

I say release night; it was more like a release weekend. I worked on many multi-night releases where sleep was a nice-to-have. I always thought this strange: a live release is often the most tense and stressful of times, yet the general consensus was to get the team high on Red Bull and sleep-deprived – increasing the chance of a mistake – rather than planning enough time in the release window to keep stress levels manageable.

I do remember on one occasion spending hours through the night trying to determine why something had not deployed successfully, only to find out the release engineer had remoted into the wrong server, so we were looking in the wrong place!

Despite all the pain points, we did do something sensible: the code was built once, packaged and then deployed to many environments. In the Microsoft space it was common to manually create MSIs that were deployed by the release engineer into each environment. The code that we had seen running and tested in earlier environments was exactly the same code that we were now deploying to Live.

[Image: build1]

With the onset of Continuous Integration and Continuous Delivery, teams became proficient at automating builds. So long as the input was the same, an automated process would ensure that the result was the same. We created automated builds and deployments that could take the source code, create a release and deploy it into Live without any intervention. I do remember some nervousness from my testing colleagues when working on projects doing this. Initially there was mistrust that the automated processes were really recreating exactly what they had been testing in earlier environments. In some cases teams were asked to rein back the automation and retain manual deployment of packages to keep the testers happy.

[Image: build2]

Writing automated build and deployment scripts was hard and error-prone. We tested the software we were putting into production, but who was testing the scripts that got it there? Luckily the tooling caught up, so we no longer have to do all the heavy lifting ourselves.

[Image: build3]

Modern deployment tools such as Octopus Deploy and Visual Studio Release Management use the original model of build once deploy anywhere. You are encouraged to have a build that creates a package once; that package is then promoted through a number of environments. This creates its own set of challenges:

  1. The build can’t simply select a build configuration for a particular environment to apply web.config and other transforms
  2. The deployment server has to have access to the configuration needed for each environment
  3. If you want to manage environment configuration alongside your source code how should you package this in such a way that it is available to the deployment server?
  4. How do you manage sensitive configuration information so that developers don’t have access to settings they shouldn’t have?

These are not insurmountable problems but they require thinking about.