It wasn’t long ago that deploying a new solution into Live was a major undertaking. Weeks of planning went into the release, resulting in a tome of a release plan: the detailed step-by-step instructions to be followed. It covered pre-release activities, the tasks for the night itself and then all the clean-up afterwards. The steps were very detailed — they were the exact commands a person (yes, a person!) would execute on the servers — and they had timings down to the nearest 30 minutes so the release activities could be scheduled.
If you were lucky you got to try these steps out in at least one environment before going live. The result was numerous changes; after all, how were you supposed to determine every step in minute detail when you’d never performed them before? The release plan was revised time and again until release night was upon you.
I say release night; it was more like a release weekend. I worked on many multiple-night releases where sleep was a nice-to-have. I always thought this strange: a live release is often the most tense and stressful of times, yet the consensus seemed to be to get the team high on Red Bull and sleep deprived, increasing the chance of a mistake, rather than planning enough time into the release window to keep the team’s stress levels manageable.
I remember on one occasion spending hours through the night trying to determine why something had not deployed successfully, only to find that the release engineer had remoted into the wrong server, so we were looking in the wrong place!
Despite all the pain points, we did do one thing sensibly: the code was built once, packaged, and then deployed to many environments. In the Microsoft space it was common to create MSIs manually, which the release engineer deployed in each environment. The code we had seen running and tested in earlier environments was exactly the same code we were now deploying to Live.
With the onset of Continuous Integration and Continuous Delivery, teams became proficient at automating builds. This meant that, so long as the input was the same, an automated process would produce the same result. We created automated builds and deployments that could take the source code, create a release and deploy it into Live without any intervention. I remember some nervousness among my testing colleagues on projects that did this. Initially there was mistrust that the automated processes were really recreating exactly what they had been testing in earlier environments. In some cases the teams were asked to rein back the automation and retain manual deployment of packages to keep the testers happy.
Writing automated build and deployment scripts was hard and error-prone. We tested the software we were putting into production, but who was testing the scripts that got it there? Luckily the tooling caught up, so we no longer have to do all the heavy lifting ourselves.
Modern deployment tools such as Octopus Deploy and Visual Studio Release Management use the original model of build once, deploy anywhere. You are encouraged to have a build that creates a package once; that package is then promoted through a number of environments. This creates its own set of challenges:
- The build can’t simply select a build configuration for each target environment to apply web.config and other transforms
- The deployment server has to have access to the configuration needed for each environment
- If you want to manage environment configuration alongside your source code how should you package this in such a way that it is available to the deployment server?
- How do you manage sensitive configuration information so that developers don’t have access to settings they shouldn’t have?
These are not insurmountable problems, but they do require thought.
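To make the build-once model concrete, here is a minimal sketch of deploy-time token replacement (in Python purely for illustration; the environment names, `#{Token}` syntax and settings are all invented for this example). The package carries a single tokenised configuration template, per-environment values are held by the deployment server rather than baked in at build time, and sensitive values are resolved from the deployment server’s environment variables so they never live alongside the source:

```python
import os

# A single tokenised config template ships inside the package,
# built once and promoted unchanged through every environment.
TEMPLATE = "ApiUrl=#{ApiUrl};DbPassword=#{DbPassword}"

# Per-environment values live with the deployment server, not the package.
# (Hypothetical environments and values for illustration.)
ENVIRONMENTS = {
    "Test": {"ApiUrl": "https://test.example.com"},
    "Live": {"ApiUrl": "https://www.example.com"},
}

def render_config(template: str, environment: str) -> str:
    """Replace #{Token} placeholders with the target environment's values.

    Sensitive settings (here, DbPassword) are read from environment
    variables on the deployment server, so developers never need access
    to the Live secrets themselves.
    """
    values = dict(ENVIRONMENTS[environment])
    values["DbPassword"] = os.environ.get(
        f"{environment.upper()}_DB_PASSWORD", ""
    )
    result = template
    for token, value in values.items():
        result = result.replace("#{" + token + "}", value)
    return result
```

The same package is deployed everywhere; only the substitution step differs per environment, which is essentially what the variable-substitution features of tools like Octopus Deploy provide.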