In my field, you can’t go anywhere without hearing about Docker. It is everywhere.
- Architecture & Design – Use Docker containers to host your microservices
- Development – Run your production topology on a developer’s workstation by having all your components running in Docker containers.
- DevOps – Avoid environment-specific configuration by building everything in Docker containers and then pulling them into target environments.
So, I thought it was time to have a look at what this means to me. As I find the best way to learn something is to get my hands dirty, it was time to get technical…
The first challenge: Docker has grown out of the Linux world and most of my background is on the Microsoft stack. Until recently these two communities have been like oil and water – no mixing. My learning curve therefore looked like this:
In order to keep the number of learning challenges I had to a minimum, I started looking at the Docker support in Windows Server 2016. This way I wouldn’t have to trip over anything specific to Linux. Generally, if I encounter a hiccup on the Windows stack I can usually find a way through it.
This article is excellent for understanding how to get started with Docker on Windows. It details how to install and set up Docker on a Windows Server 2016 Core image, which means you need to work with the command line for everything. You might think, “why do I need to do this – Windows Server 2016 has a perfectly good UI built into it?” There are a number of reasons for this.
Docker containers are built in layers. A container that hosts a custom website might be layered on top of a generic web server container, which in turn is layered on top of an OS image. At the time of writing there are two OS images that you can use for Windows-based containers:
- Windows Server 2016 Core
- Windows Server 2016 Nano Server
Both of these run headless so you’ll have to get used to working at the command line. Need to install IIS? Then you need to know the command to do this. Need to download files from a Git repo? It’s scripting all the way.
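For example, installing IIS on a headless Server Core image is a one-liner in PowerShell (a sketch, assuming the ServerManager module that ships with Windows Server):

```powershell
# Install the IIS web server role from the command line (Server Core has no UI)
Install-WindowsFeature -Name Web-Server

# Verify the role is now installed
Get-WindowsFeature -Name Web-Server
```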
Docker itself is command line only, so even if you have a nice UI you won’t be able to use it much of the time. Containers in general are lightweight in nature, and it seems counterintuitive to wrap all of this lightweight goodness in a full-fat Windows UI. Finally, working headless mirrors a typical Linux workflow, so what you are learning should be transferable, to some extent, to Linux.
When I was looking at this, it was possible to use Windows Server 2016 Core through virtualization, but there weren’t any suitable images to use in either Microsoft Azure or Amazon Web Services. Only the full-fat versions were available, so this is something to bear in mind if you go that way. Hopefully this will change in the near future.
I should also draw your attention to the official Microsoft documentation here. This takes you through some of the concepts and gets you up and running on both Windows Server 2016 and Windows 10.
After spending some time with these articles, I got to the following point:
- I had Windows based containers running on both Windows Server 2016 and Windows 10
- I had realised that Windows 10 only supports Hyper-V containers
- I had a basic grasp of the core Docker commands, e.g., listing images and containers and starting and stopping them
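Those basics boil down to a handful of commands (the container id here is illustrative):

```shell
docker images               # list the images available locally
docker ps                   # list running containers
docker ps -a                # list all containers, including stopped ones
docker stop [containerid]   # stop a running container
docker start [containerid]  # start a stopped container
```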
However, this stuff is moving fast and there are some things in the documentation that are already out of date. I want to document the steps I took to get a basic website running in a container. I don’t want to rehash the articles I have previously linked to, but instead highlight some of the things that tripped me up along the way.
The first thing you want to do after installing Docker is get the IIS image if you haven’t got it already. You can check what images you have by running
docker images
If you haven’t got it already you need to pull down the IIS image from the Docker repository.
docker pull microsoft/iis
This is large for a container at 4GB, so it might take some time. Once it is downloaded you can start it. This is where I have seen some inconsistencies. Some of the documentation states that you can run the container in interactive mode and then run a command prompt inside it to start exploring and changing things. More recently I think the base container has been changed so this doesn’t work as expected. This isn’t a huge problem because most of the time you’ll want the container to run in the background. It is just a pain when setting up new ones.
In order for the IIS container to work there needs to be a process continuously running that Docker can monitor to ensure the container is up. At the time of writing this process stops you from accessing the container interactively.
You can try to start the IIS container interactively and ask it to run PowerShell with this command:
docker run -it -p 81:80 microsoft/iis powershell
The -p argument tells Docker to expose the container’s port 80 on the host’s port 81. Running this command results in the following screen:
No PowerShell prompt! If you do the same thing with the nanoserver image you get the PowerShell prompt, where you can execute commands:
docker run -it microsoft/nanoserver powershell
So what happened with the IIS container? It is running, and you can confirm this, but you can’t get access to the console.
You can confirm that the container is working correctly by browsing to the website hosted on it. There were a few pieces of useful information that helped me at this point.
- Using loopback to access the site won’t work
- I’ve not had much luck accessing the website from the machine running the container in general, although I’ve not yet worked out why.
- If you are running your Docker host in Azure or AWS you can access the site from your local machine, so make sure you have set up the security groups properly.
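One workaround for the loopback limitation, as a sketch that assumes the default nat network Windows containers attach to, is to browse to the container’s internal IP address instead, which you can pull out with docker inspect (container id illustrative):

```shell
# Find the container's IP on the default Windows "nat" network
docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" [containerid]

# Then browse to http://<that-ip>:80 from the host
```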
At the moment it would seem that with the current IIS image the only way to extend it is with a Dockerfile, like this one.
FROM microsoft/iis
RUN echo "Hello World - Dockerfile" > c:\inetpub\wwwroot\index.html
What does this file do? The first line says that we want to create a new container image, or layer, based on the Microsoft IIS image. The next line runs a simple command to create an index.html file with a message of our choice. Obviously you can make this file as simple or as complex as you like by adding commands. For most scenarios PowerShell is the best source of commands to build up your image, so you can add the following after the FROM line:
SHELL ["powershell", "-command"]
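Putting that together, a slightly richer Dockerfile might look like this (a sketch; the page content is illustrative):

```dockerfile
FROM microsoft/iis

# Run all subsequent RUN instructions through PowerShell
SHELL ["powershell", "-command"]

# Replace the default site with our own page
RUN Set-Content -Path 'c:\inetpub\wwwroot\index.html' -Value '<h1>Hello from a container</h1>'
```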
The final piece of the jigsaw is to tell Docker to create a new container image from the Dockerfile, running the command from the directory that contains it.
docker build -t [dockerUserName]/basiciis .
The [dockerUserName] part will come in handy when you want to start sharing container images via the Docker repository. Running this executes the commands in the Dockerfile against the FROM image and then creates a new container image from the result.
docker images should list your new container image. Your new container can be started in the same way as before. Notice that I have sneakily switched from -it, which is interactive mode, to -d, which is detached or background mode.
docker run -d -p 81:80 [dockerUserName]/basiciis
You can stop the container by first running
docker ps
to obtain the container id. Then
docker kill [containerid]
stops the container, and docker rm [containerid] removes it. Either way, the image stays intact.
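As a shortcut, the two steps can be combined: docker rm -f stops and removes a running container in one go (container id illustrative):

```shell
# Force-remove a running container in one step (equivalent to kill + rm)
docker rm -f [containerid]

# The image is untouched; confirm with
docker images
```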
The final thing I want to show is one of the things that makes containers different to VMs. Run three instances of your container, remembering to specify different host ports.
docker run -d -p 80:80 [dockerUserName]/basiciis
docker run -d -p 81:80 [dockerUserName]/basiciis
docker run -d -p 82:80 [dockerUserName]/basiciis
I mentioned earlier that it is not currently possible to run the IIS container, or any container extended from it, in interactive mode. This is because a process called ServiceMonitor is running in the container as a foreground task for Docker to monitor. Open up Task Manager on the machine running Docker and sort by process name.
You should see three instances of the ServiceMonitor process. They are not running under a user account because they are running within containers. This shows there is not the isolation you would expect from virtual machines: the processes the containers run are visible on the host, and they will compete for resources on the host as any other process would.
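You can do the same check from PowerShell on the host (a sketch; it assumes the process name matches the ServiceMonitor binary in Microsoft's IIS image):

```powershell
# List the host-visible ServiceMonitor processes spawned by the containers
Get-Process -Name ServiceMonitor

# Count them - with three containers running you would expect one each
(Get-Process -Name ServiceMonitor).Count
```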