Products not Projects

I have spent a long time delivering different IT projects. Whilst the details vary, it is not hard to spot commonalities.

There is the part at the start where a need is identified. This might be building a business case or identifying a sales opportunity. The project springs into life if the need is of a high enough priority and someone is willing to invest.

Next comes the start-up. This includes understanding what needs to be done and how it will be achieved. Whilst project delivery methodologies differ, they all have a concept for this start-up stage: project initiation, inception or sprint 0.

The bit in the middle makes up the bulk of the project – delivery. It is where the main portion of the work occurs. It might be sequential; it might be iterative; either way this stage can take from a few months to years.

And finally, there is a ramp down. The work is coming to an end and people are moving on to other challenges. Sometimes there is a big bang release and other times there is a gradual reduction of new features and a settling into a BAU style of working.

What is clear is that there is a start, a middle and an end. This is obvious in waterfall-style projects, but it is often the case in more agile projects too. There can be the initial creation of the backlog, delivery of the high-risk stories, working towards the minimum viable product, knocking off the remaining stories and then stabilising the solution.

This is project-centric thinking. However, no-one actually asks for a project. They want the result. So why do we think in terms of projects? Instead we should be thinking about products. Whilst a project and a product may look the same initially, many projects stop just when the product is becoming interesting: when it is being used by real customers.

For this discussion, I classify a product as something that is live and being used by customers. For the team that owns the product there isn’t a start, a middle or an end – just what is next. There are new features to be built, support incidents to investigate and resolve, and the team has access to real customer feedback.

When the product is initially being built, prioritising the work is relatively straightforward. The backlog order is defined by the product owner representing the potential customer. No matter how well this is done, it is still a potential customer. Most agile delivery efforts will be striving to get the product live to real customers, to get real feedback, but more often than not we will be shipping early increments to stakeholders who are not the real customer.

Once the product is live, in the hands of real customers, the fun starts. You’ll actually see what they really want, maybe by interpreting usage metrics or by hearing what they say to your support team. Perhaps you use UserVoice to capture feedback and new feature requests. Now you have work coming from multiple sources. You have a backlog of new features, but you also have features the customer is requesting. You’ll have live incidents to deal with, and you may identify technical improvements through application performance monitoring tools like New Relic.

The prioritisation of these different streams of work can become a challenge. When working with small product teams it is likely that the team will be responsible for all aspects of the product, so knowing where to focus at a given point in time can be difficult. A team has a fixed capacity to do work – in other words, they can do a predictable amount of work in a fixed period. If they are focused on delivering a feature, it can be very disruptive to have to switch to a serious support incident that needs immediate attention.

Therefore, the team needs to put in place structures and ways of working to deal with these scenarios. Being able to react to a problem and fix it immediately is of no value unless you have a means to get that fix in front of customers quickly. So focusing on foundations such as Continuous Integration and Continuous Delivery is just as important as finding heroes in the team who will drop whatever they are doing at a moment’s notice.

However, even the best CI/CD technology stack in the world can’t help if you can’t ensure that your latest code is stable when you need it to be. There needs to be automated testing at all levels. You need a branching strategy that keeps unfinished features out of the main release branch, or you need to be building new features on the mainline hidden behind toggles.

Often, during project-focused efforts, you find the people who manage the purse strings challenging you on why there needs to be investment in these areas. Whilst they are useful, are they really needed to get the product out of the door? We know, however, that they are essential for keeping the product going throughout its lifetime, whether you are involved at that stage or not. And people will remember the product for its quality and flexibility, not the project that was used to deliver it.

Learning Docker – Moving on to Linux

Last time I walked through my experience of Docker on Windows. Now it is time to see if any of that knowledge is transferable to Docker on Linux.

But before that I wanted to call out a seemingly obvious statement for people equating containers to virtual machines. With virtual machines you can run a Linux guest on a Windows host and vice versa, because the hypervisor is pretending to be a bare computer with no OS running on it. This is not the case with containers, as the host operating system is shared with the software running in the container. Therefore you can only run Windows containers on a Windows machine and Linux containers on a Linux machine. Containers can do a lot of things, but they are not magic. And as ninety-nine percent of the available containers are Linux ones… you’ll still be doing a lot of work with Bash!

In order to translate the experience I have had understanding Docker on Windows to Linux, I thought I would try setting up something I was already familiar with. So I went about trying to set up TeamCity. The one thing I decided to do to help my understanding was to set up the main TeamCity server and one build agent in separate containers. Whilst in this write-up they share the same host machine, having the two components in separate containers should allow you to scale up with ease.

So where to start? Obviously you could go the whole hog and install a flavour of Linux into a VM, but I went for a simpler solution. I decided to use one of the Amazon Web Services pre-baked Linux EC2 instances. You can do this for free if you select the right instance size. So sign up for an AWS account and create a new Amazon Linux T2-Micro EC2 instance. The T2-Micro size is part of the free tier so you won’t be charged.

It is relatively straightforward to set up Docker on an Amazon Linux instance. Use this guide to get you started.
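The guide covers it properly, but for reference the steps on the stock Amazon Linux AMI were roughly along these lines (a sketch rather than a definitive set of instructions):

sudo yum update -y
sudo yum install -y docker
sudo service docker start
sudo usermod -a -G docker ec2-user

Log out and back in after the usermod command so that ec2-user picks up the docker group and can run docker without sudo.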

We are going to set up TeamCity, but where to start? If you were doing this outside of the container world you would obtain the installation package, install it on the target machine and then configure it. However, you won’t be surprised to learn that with containers it is easier.

Docker maintains a hub of all public repositories at hub.docker.com. Whilst it is possible to search and browse it, you really need to know what you are looking for. So for this post we’ll be using the jetbrains/teamcity-server and jetbrains/teamcity-agent images.
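If you prefer the command line, you can get a rough idea of what is available without leaving the terminal; docker search lists matching repositories, although the hub website gives far more detail:

docker search teamcity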

The next step is to get these container images onto your machine. In the terminal window, issue the following two commands

docker pull jetbrains/teamcity-server
docker pull jetbrains/teamcity-agent

You may notice one of the benefits of containers at this point. Whilst the first command downloaded a number of container layers, the second gets away with downloading only a smaller subset – the teamcity-agent image shares many of its base layers with teamcity-server.

[Screenshot: output of the two docker pull commands, showing the shared layers]
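If you want to see this for yourself, the standard commands are enough; docker images lists what has been downloaded and its size, and docker history shows the layers that make up an image:

docker images
docker history jetbrains/teamcity-agent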

Last time, when I started containers, I had to map the ports exposed by the container to ports exposed by the host machine. With these TeamCity container images you have to do that and more: not only do you need to expose ports, you also need to map file locations.

TeamCity adapts its behaviour based on a number of configuration settings, which you tweak as you see fit. These need to be located on the host and mapped into the container, because if they weren’t you would lose all of your changes each time the container started. The same goes for the logs output by TeamCity: if you don’t map the container’s log path to somewhere on the host, you’ll lose them when the container is stopped. We’ll need to do the same for the agent container.

Therefore create the directory structure on the host that we will map to the containers

sudo mkdir -p /data/teamcity_server/datadir
sudo mkdir -p /opt/teamcity/logs
sudo mkdir -p /data/teamcity_agent/conf

Then we can start the teamcity-server container with the following command. We expose ports through the -p argument and map volumes through the -v arguments. It is useful to start in interactive mode the first time; that way you’ll see the TeamCity start-up process and can look for any problems.

docker run -it --name teamcity-server-instance \
    -v /data/teamcity_server/datadir:/data/teamcity_server/datadir \
    -v /opt/teamcity/logs:/opt/teamcity/logs \
    -p 8111:8111 \
    jetbrains/teamcity-server

As it starts up it wants to ask you a number of questions through the web site. Open a browser and navigate to port 8111 on the host’s IP address. You did remember to set up your security group in AWS EC2 correctly, didn’t you? If you did, you’ll see the screen below. If not, ensure you have a security group rule allowing inbound traffic on port 8111.

[Screenshot: the TeamCity first-start page]
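If you prefer to add that rule from the command line, the AWS CLI can do it. The group id below is a placeholder for your instance’s security group, and opening the port to the whole internet is only sensible for a throwaway experiment like this one:

aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8111 --cidr 0.0.0.0/0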

The initial start-up process builds the configuration files that are stored on the host at /data/teamcity_server/datadir. Once TeamCity has finished setting itself up, exit the container with [Ctrl]+C – remember, it is running interactively. If you look at that location you’ll find various directories and files; they will be used to configure the teamcity-server container the next time it runs.

Starting the container again is a bit easier. Run docker ps -a to find the container id, then run docker start [containerId]. You can switch back to the web site to see it loading. A T2-Micro is not the most powerful machine so the start-up process may take some time.
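As a pair of commands that looks like this – and because we named the container when we first ran it, you can skip the lookup and start it by name instead:

docker ps -a
docker start teamcity-server-instance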

Now it is time to start the agent. The process is very similar. This command starts the agent in detached mode.

docker run -d -e SERVER_URL="http://[host_ip]:8111/" \
    -v /data/teamcity_agent/conf:/data/teamcity_agent/conf \
    jetbrains/teamcity-agent

The thing to note here is -e SERVER_URL="http://[host_ip]:8111/". The -e argument allows you to pass custom settings into the container; in this case you are passing in the public URL of the TeamCity server so the agent knows where to connect.

You can check whether this has worked by looking in the TeamCity server website.

[Screenshot: the TeamCity Agents page showing an unauthorised agent]

You should see an unauthorised agent. This can be authorised to become a normal TeamCity build agent. That’s TeamCity running in containers.

Learning Docker – Getting up and Running in Windows

In my field, you can’t go anywhere without hearing about Docker. It is everywhere.

  • Architecture & Design – Use Docker containers to host your microservices
  • Development – Run your production topology on a developer’s workstation by having all your components running in Docker containers.
  • DevOps – Avoid environment specific configuration by building everything in Docker containers and then pulling them into target environments.

So, I thought it was time to have a look at what this means to me. As I find the best way to learn something is to get my hands dirty, it was time to get technical…

The first challenge: Docker has grown out of the Linux world and most of my background is on the Microsoft stack. Until recently these two communities have been like oil and water – no mixing. My learning curve therefore looked like this:

[Image: a learning curve that looks more like a cliff]

In order to keep the number of learning challenges I had to a minimum, I started looking at the Docker support in Windows Server 2016. This way I wouldn’t have to trip over anything specific to Linux. Generally, if I encounter a hiccup on the Windows stack I can usually find a way through it.

This article is excellent for understanding how to get started with Docker on Windows. It details how to install and set up Docker on a Windows Server 2016 Core image, which means you need to work with the command line for everything. You might think, “why do I need to do this – Windows Server 2016 has a perfectly good UI built into it?” There are a number of reasons for this.

Docker containers are built in layers. A container that hosts a custom website might be layered on top of a generic web server image, which in turn is layered on top of an OS image. At the time of writing there are two OS images that you can use for Windows-based containers

  • Windows Server 2016 Core
  • Windows Server 2016 Nano Server

Both of these run headless, so you’ll have to get used to working at the command line. Need to install IIS? Then you need to know the command to do this. Need to download files from a Git repo? It’s scripting all the way.
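For example – and this is purely illustrative, the download URL is a placeholder – installing IIS and fetching a file from PowerShell look something like this:

Install-WindowsFeature -Name Web-Server
Invoke-WebRequest -Uri https://example.com/content.zip -OutFile C:\content.zip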

Docker itself is command line only, so even if you have a nice UI you won’t be able to use it much of the time. Containers in general are lightweight in nature, and it seems counter-intuitive to wrap all of this lightweight goodness in a full-fat Windows installation. Finally, working headless mirrors a typical Linux workflow, so what you are learning should be transferable, to some extent, to Linux.

When I was looking at this, it was possible to use Windows Server 2016 Core through virtualization but there weren’t any suitable images to use in either Microsoft Azure or Amazon Web Services. Only the full fat versions were available, so this is something to bear in mind if you go that way. Hopefully this will change in the near future.

I should also draw your attention to the official Microsoft documentation here. This takes you through some of the concepts and gets you up and running on both Windows Server 2016 and Windows 10.

After spending some time with these articles, I got to the following point

  • I had Windows based containers running on both Windows Server 2016 and Windows 10
  • I had realised that Windows 10 only supports Hyper-V containers
  • I had a basic grasp of the Docker command basics, e.g., listing images and containers and starting and stopping them

However, this stuff is moving fast and some things in the documentation are already out of date. I want to document the steps I took to get a basic website running in a container. I don’t want to rehash the articles I have previously linked to, but instead highlight some of the things that tripped me up along the way.

The first thing you want to do after installing Docker is get the IIS image if you haven’t got it already. You can check what images you have by running

docker images

[Screenshot: docker images output]

If it isn’t listed, you need to pull down the IIS image from the Docker repository.

docker pull microsoft/iis

This is large for a container at 4GB, so it might take some time. Once it is downloaded you can start it. This is where I have seen some inconsistencies. Some of the documentation states that you can run the container in interactive mode and then run a command prompt inside it to start exploring and changing things. More recently I think the base container has been changed so this doesn’t work as expected. This isn’t a huge problem, because most of the time you’ll want the container to run in the background; it is just a pain when setting up new ones.

In order for the IIS container to work there needs to be a process continuously running that Docker can monitor to ensure the container is up. At the time of writing this process prevents you from accessing the container interactively.

You can try to start the IIS container interactively and ask it to run PowerShell with this command

docker run -it -p 81:80 microsoft/iis powershell

The -p argument tells docker to expose the container’s port 80 on the host’s port 81. Running this command results in the following screen

[Screenshot: the IIS container starting with no PowerShell prompt]

No PowerShell prompt! If you do the same thing with the nanoserver image you get a PowerShell prompt where you can execute commands

docker run -it microsoft/nanoserver powershell

[Screenshot: a PowerShell prompt inside the nanoserver container]

So what happened with the IIS container? It is running, and you can confirm this, but you can’t get access to the console.

You can confirm that the container is working correctly by browsing to the website hosted on it. There were a few pieces of useful information that helped me at this point.

  • Using loopback to access the site won’t work.
  • I’ve not had much luck accessing the website from the machine running the container in general, although I’ve not yet worked out why (a possible workaround is sketched after this list).
  • If you are running your docker container server in Azure or AWS you can access the site from your local machine, so make sure you have set up the security groups properly.
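One workaround worth trying – an assumption on my part rather than something I have verified on every setup – is to ask Docker for the container’s own IP address on the default nat network and browse to that directly from the host:

docker inspect -f "{{.NetworkSettings.Networks.nat.IPAddress}}" [containerId]

Browsing to http://[containerIp] from the host will often work where http://localhost:81 does not.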

At the moment it would seem that with the current IIS image the only way to extend it is with a Dockerfile, like this one.

FROM microsoft/iis

RUN echo "Hello World - Dockerfile" > c:\inetpub\wwwroot\index.html

What does this file do? The first line says that we want to create a new container image, or layer, based on the Microsoft IIS image. The next line runs a simple command to create an index.html file with a message of our choice. Obviously you can make this file as simple or as complex as you like by adding commands. For most scenarios PowerShell is the best source of commands for building up your image, so you can add the following after the FROM line

SHELL ["powershell", "-command"]
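Putting those pieces together, a slightly fuller Dockerfile might look like the sketch below. The Set-Content line is just an illustration of using a PowerShell cmdlet once SHELL has been switched; it is not part of the original example.

FROM microsoft/iis
SHELL ["powershell", "-command"]
RUN Set-Content -Path 'c:\inetpub\wwwroot\index.html' -Value 'Hello World - Dockerfile'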

The final piece of the jigsaw is to tell docker to create a new container image from the Dockerfile. Run this from the directory containing the Dockerfile; the trailing dot tells docker to use that directory as the build context.

docker build -t [dockerUserName]/basiciis .

dockerUserName will come in handy when you want to start sharing container images via the docker repository. Running the build executes the commands in the Dockerfile on top of the FROM image and then creates a new container image from the result.
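When you do get to the sharing stage it is the usual pair of commands, assuming you have a Docker Hub account under that username:

docker login
docker push [dockerUserName]/basiciis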

docker images should list your new container image. Your new container can be started in the same way as before. Notice that I have sneakily switched from -it, which is interactive mode, to -d, which is detached or background mode.

docker run -d -p 81:80 [dockerUserName]/basiciis

You can stop the container by first running

docker ps

to obtain the container id, then running docker kill [containerId]. This kills the running container (docker rm [containerId] will remove it completely), but the image stays intact.

The final thing I want to show is one of the things that makes containers different to VMs. Run three instances of your container, remembering to specify different host ports.

docker run -d -p 80:80 [dockerUserName]/basiciis
docker run -d -p 81:80 [dockerUserName]/basiciis
docker run -d -p 82:80 [dockerUserName]/basiciis
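Assuming ports 80 to 82 are open in your security group, each instance should now serve the same page. From a remote machine – remember that access from the host itself was unreliable – something like this will confirm it:

curl http://[host_ip]:80
curl http://[host_ip]:81
curl http://[host_ip]:82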

I mentioned earlier that it is not currently possible to run the IIS container, or any container extended from it, in interactive mode. This is because a process called ServiceMonitor runs in the container as a foreground process for Docker to monitor. Open up Task Manager on the machine running docker and sort by process name.

[Screenshot: Task Manager showing three ServiceMonitor processes]

You should see three instances of the ServiceMonitor process. They are not running under a user account because they are running within containers. This shows that containers do not give you the isolation you would expect from virtual machines: the processes the containers run are visible on the host, and they will compete for resources on the host as any other process would.

Musings about Velocity

When you choose to work within Agile frameworks such as SCRUM you are doing so to receive certain benefits. Two of these are predictability and stability. In order to achieve this SCRUM defines fixed time boundaries called sprints and encourages the team to fix the amount of work to be achieved in this period. Effectively it is turning two of the aspects that are often variables in traditional software development, time and scope, into constants.

The rate at which work is completed within the fixed boundary of a sprint is called the team’s velocity. Velocity is a characteristic of how the team works together. It depends on how the work is sized, how well the team members collaborate, how good they are at actually completing work versus managing work in progress, and many other aspects. It is therefore very personal to the team, and this is just one of the reasons why the velocities of different teams should not be compared.

[Chart: Velocity after 1 sprint]

Velocity changes too. The team may get better at sizing work, or they may simply get more effective; this can cause the velocity to go up. The team make-up may change, or the definition of completed work could be expanded; this might make the velocity go down.

It can tell you other things too. When looking at the trend of velocity over a number of sprints, a sudden dip in an otherwise stable trend may tell me that the team had a particularly difficult sprint. Perhaps a story was badly defined and the team ended up thrashing around on it, not getting it done. Maybe something outside the team’s control, such as technical issues, meant that work could not be tested properly.

[Chart: Velocity after many sprints. Here the third sprint highlights a shared environmental problem]

I said previously that you shouldn’t be tempted to compare the velocity for different teams. However sometimes the urge gets too much and we do so anyway.

Recently a few of us working on an Agile project did just that, looking for trends that might be enlightening. We were comparing two teams with different people and skill sets, working on different things. The velocities were different, but they showed the same general pattern. There was a clear upward trend, and when one team had a difficult sprint, so did the other one. So, question 1 was

Is the shared pattern a coincidence or something else?

Next we wanted to see if there was anything interesting when comparing the velocity of the work done across a wider period, in this case several sprints. When we did this, we noticed something unexpected. Whilst the velocity was clearly different in each sprint, the average velocity over the longer time period was almost exactly the same. So, the second question was

Why was the average velocity the same for different teams?

Going back to the first question, I had these thoughts. Firstly, we were coaching both teams in the same way at the same time and they were improving at a similar rate, so both teams having a similar upward trend was to be expected. Secondly, the teams were getting work from the same backlog, and there was a set of circumstances during one sprint which meant the stories simply weren’t good enough, so both teams suffered the same set of consequences.

My feeling about the second result is purely speculation so read into it what you will.

I think this is an indication that the teams are capable of much more – by much more, I mean a higher velocity. If you were looking at the bottom of two drain pipes, one wide and one thin, and water was flowing out of both at an identical rate, you might start to think that something was restricting the flow upstream. After all, the pipes are obviously capable of a higher flow rate. Is there something in the environment, something to do with the work coming into the teams, that is limiting how effective they can be?

In summary, velocity is very personal to a team because it is a function of how that team works. Similar velocity trends across multiple teams and sprints may highlight environmental changes that are affecting all teams. And velocity shouldn’t be compared between teams – unless it is, and it tells you something interesting.