Boxstarter and Chocolatey

Coincidentally, a very appropriate Easter post.

The first task for a new developer joining a project, after the usual introductions and being shown where the toilet and coffee machine are, is to get their development environment set up. How well this goes can be a good indication of the team the developer is joining. Is there a checklist to follow, and can team members provide support when the instructions don't work? Often teams are too busy to keep setup instructions up to date, and team members barely have time for their own work, never mind helping newbies get started.

Luckily there are tools available that allow you to automate environment setup. First I want to look at Chocolatey.

Chocolatey

Chocolatey is like apt-get for Windows. I'm not going to cover all the details; there are plenty available here and here.

The reason Chocolatey is useful is that you can provide a one-line command to silently install anything you need on a development machine.

You need Notepad++? No problem!

choco install notepadplusplus.install

What about version 4.2.2 of NodeJS?

choco install nodejs.install -version 4.2.2

The Azure SDK 2.7 for VS 2015?

choco install VWDOrVs2015AzurePack.2.7 -source webpi

And the IIS Web Server Role?

choco install IIS-WebServerRole -source WindowsFeatures

Behind the scenes, Chocolatey fetches each installation from a NuGet feed: every Chocolatey package is a NuGet package.
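If you're not sure of a package name, you can query the feed from the same command line; for example:

choco search nodejs

Packages ending in .install conventionally wrap the native installer, while .portable packages are self-contained.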

You might be wondering how this helps with automating environment builds. Well, read on.

Boxstarter

Boxstarter provides the automation goodness around the solid foundation of Chocolatey. Boxstarter allows you to chain Chocolatey commands into scripts, and it handles the restart logic needed to ensure that the whole script eventually completes.

So your Chocolatey script might look like this:

choco install app1Noreboot

choco install appRequiresReboot

choco install app2NoReboot

Boxstarter will use Chocolatey to install the first app and then start on the second. When that completes, it will detect that a reboot is required and handle the restart process. When the machine starts back up, it will prompt for your credentials so that subsequent reboots can be handled without intervention (this behaviour can be changed if it makes you nervous). Boxstarter then starts the script from the beginning, but this time detects that the first two applications are already installed and finishes by installing the third. With more complex scripts requiring multiple reboots, Boxstarter effectively re-runs your script until it is sure that everything is installed correctly.
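A minimal sketch of kicking such a script off (DevSetup.ps1 is a made-up name; -Credential is what lets Boxstarter log back in and resume after each reboot):

Install-BoxstarterPackage -PackageName .\DevSetup.ps1 -Credential (Get-Credential)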

Boxstarter has a number of other useful features. It can configure Windows Explorer to your tastes by showing or hiding system files, trigger Windows updates, and pin installed applications to the taskbar. It can be run on the target machine via a single command, installing Boxstarter automatically and fetching its Chocolatey script from a GitHub Gist, Dropbox or a file share. It can also be run against a remote machine. I haven't tried that myself, but all of the details can be found at Boxstarter.org.
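As a sketch, a Boxstarter script using some of those helpers might look like this (the package name and path are examples, not prescriptions):

# DevSetup.ps1 (hypothetical): Explorer tweaks, an install, a taskbar pin, Windows updates
Set-WindowsExplorerOptions -EnableShowHiddenFilesFoldersDrives -EnableShowFileExtensions
choco install notepadplusplus.install -y
Install-ChocolateyPinnedTaskBarItem "$env:ProgramFiles\Notepad++\notepad++.exe"
Install-WindowsUpdate -AcceptEula

And the single-command bootstrap on a fresh machine, pointing Boxstarter at a script hosted in a Gist (the Gist URL is a placeholder):

. { iwr -useb https://boxstarter.org/bootstrapper.ps1 } | iex; Get-Boxstarter -Force
Install-BoxstarterPackage -PackageName "https://gist.githubusercontent.com/<you>/<gist-id>/raw/DevSetup.ps1" -Credential (Get-Credential)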

Boxstarter and Chocolatey in the field

As with Chocolatey, you can find lots of articles to get you started combining the two tools. This and this are examples. However, when you get into this for real, some of the details can be quite elusive.

The example below installs Windows features (cinst is an alias for choco install, by the way):

cinst IIS-WebServerRole -source windowsfeatures

and you'll find Chocolatey examples that use the Web Platform Installer:

cinst VWDOrVs2015AzurePack.2.7 -source webpi

When you are configuring your own scripts you will soon realise how useful this is, but just as quickly you will stumble over the question: what can I actually install as a Windows feature, or from the Web Platform Installer?

Let's start with Windows features. These are the things you get to from Control Panel -> Programs and Features -> Turn Windows Features On and Off. I'm writing this on Windows 10, so the exact path may differ on other versions of Windows. From here I can see what I need, but how do I know that IIS-WebServerRole maps to Internet Information Services -> World Wide Web Services? That took me quite some time to work out, but it won't take you as long, because I'm telling you how to do it below.

The key is the Deployment Image Servicing and Management (DISM) command line. The feature we need is its ability to enumerate the features available on a running instance of Windows. So if you run the following on your target machine:

dism /online /get-features

You will get something like this

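Feature Name : IIS-WebServerRole
State : Disabled

Feature Name : Microsoft-Hyper-V
State : Enabled

Feature Name : TelnetClient
State : Disabled

(Representative entries only; the exact list depends on your Windows version and what is already enabled.)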

You get quite a lot of output, so it makes sense to redirect it to a file.
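For example (the file name is arbitrary):

dism /online /get-features > features.txt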

Anything that shows up in the Feature Name field can be used as an input to the Chocolatey install command.

With the Web Platform Installer you follow a similar process, and once you know the right command things get a lot easier:

webpicmd /list /ListOption:available

The command above gets you a list of what is available through the installer. You'll need to ensure the Web Platform Installer command line (webpicmd) is installed first; you can get it here.

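The output is a simple two-column listing, broadly along these lines (representative entries only; the feed changes over time):

ID                         Title
--                         -----
UrlRewrite2                URL Rewrite 2.0
VWDOrVs2015AzurePack.2.7   Microsoft Azure SDK 2.7 for VS 2015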

Anything in the ID column can be an input to a Chocolatey install.

— oOo —

Hopefully this and the links I have provided will give you enough to get started and to get past some of the initial learning curve. By automating your environment build you bring another part of your system under control. Rather than relying on a human reading a Word document, you have a machine reading a script. The script can be version controlled, with all the benefits that brings. If you get into the habit of setting up all environments via automated means, you can guarantee that all machines are configured the same. You rule out those sysadmin changes that never get documented and are impossible to replicate across many machines, and you start thawing out the snowflakes in your environment.

When the Fallacies of Distributed Computing don’t apply.

When I first encountered the Fallacies of Distributed Computing I felt relieved. At last, all of the thoughts that had been swirling around my head about the problems I'd hit when building numerous distributed systems were there in a neat list. Articles that build on the basic list, such as this one by Arnon Rotem-Gal-Oz, provide such clarity that you would think it impossible to make the same mistakes again.

So why is it that, time and time again, people design and build distributed systems as though the whole thing will be self-contained on a single server? The reason you'll often hear is "this system is too simple to have to worry about those things".

So let's look at these typically "simple" systems:

  • Client Server: This may be desktop machines talking to a remote server, or web applications where users connect over the Internet. Multiple nodes have to talk to one another.
  • N-Tier: Redundancy is likely a concern so a logical tier may be made up of multiple physical servers. Even more nodes need to talk to one another.
  • Shared Database: In all but the most trivial situations data will be shared between users via a common database. This is a separate node in a distributed application. More nodes.
  • Storage: Storage itself can be a further networked node. When high performance databases use SANs there are network connections between the database servers and SAN devices. Even more nodes.

This list is quite an historical look at things. Today, all modern applications are distributed:

  • Classic dynamic web applications where users connect to servers over HTTP
  • Single page applications where client side libraries are regularly communicating to the server to update page fragments
  • Mobile applications where the network between client and server is unreliable
  • Microservices based applications where almost all activity is made up of many services coordinating over a network.

Building modern applications is complex, so as humans we have a natural tendency to create simple mental models. The models we make need to seem reasonable to the individual and are heavily based on previous experience. It is no coincidence that early remote procedure call frameworks attempted to simulate making in-process calls: it was a model that was well understood.

An interesting side effect of waterfall projects is that often the developer who built the system wasn't the person who actually had to get the thing working once it reached a staging or production environment with a realistic number of servers. The developer happily rolled off the project with a mental model that had never been tested against reality.

And life is not always rosy in Agile projects…

The same mistakes can be made under the banner of YAGNI (You Aren't Gonna Need It). I worry about YAGNI because it contains a kernel of truth, but it is often expressed in a manner that makes it very easy to get into trouble, particularly when it's paired with other mantras like "the simplest thing that could possibly work". You hear this a lot when delivery pressure increases, and when that happens, consideration of the fallacies is one of the first things to go. The team think only of the happy path, where latency is zero, the network is reliable and bandwidth is infinite… Oops.
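To make the happy-path trap concrete, here is a hedged PowerShell sketch (the URL, timeout and retry count are placeholders): the first call silently assumes a reliable, zero-latency network, while the second at least bounds the wait and retries transient failures.

# Happy path: assumes the network is reliable and latency is zero
$orders = Invoke-RestMethod -Uri "https://internal-service/orders"

# Acknowledging the fallacies: time out, retry a bounded number of times, back off
$orders = $null
foreach ($attempt in 1..3) {
    try {
        $orders = Invoke-RestMethod -Uri "https://internal-service/orders" -TimeoutSec 5
        break
    }
    catch {
        Start-Sleep -Seconds (2 * $attempt)  # linear backoff between attempts
    }
}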

The saving grace of Agile projects is that you will see your mistakes early. The people who introduced the problems have to fix them, and they will throw away their old mental models and replace them with ones that are fit for purpose in the modern era of software applications.

Now, if only I could speed up time…

Microsoft Cloud Roadshow

I'm writing this during registration at the Microsoft Cloud Roadshow at the ExCeL centre in London at the end of February 2016.

I have been working on an Azure-based project for 15 months, and I am well aware that some of the concepts the project was based on all those months ago are feeling decidedly old. It has become obvious to me that one of the major challenges of working with Azure is keeping up to date. Microsoft is fully bought into regular incremental delivery, which is a positive because you can rely on getting new features quickly, but as a technology consultant you have your work cut out trying to understand how to leverage new features while also keeping a handle on product renaming and repositioning. I get limited opportunity for training, so I am hoping the next two days can bring me up to speed.

The rest of this post is going to cover the key highlights for me.

Keynote

The key message from the keynote was that Cloud transformation is happening: businesses can lead on this or be dragged along. If you are not transforming, you'll find that in time you'll struggle to stay competitive. Cloud transformation comes in three stages:

  • Efficiency – The Cloud allows you to deliver value early or test your assumptions with your customers, allowing faster feedback. Its pay-as-you-go model keeps costs under control and avoids hardware sitting underutilised in your data centre.
  • Agility – Services can be spun up on demand, utilised and removed with ease. You don't have to worry about getting your sizing maths right up front; you can change it later. Azure provides a number of sophisticated rule-based elastic scale controls that allow you to flex your environment dynamically and manage your costs, all within well-defined constraints.
  • Differentiation – The Cloud allows for business models that are not possible or cost-effective with traditional architectures. The canonical example used throughout the sessions was IoT.

Something I hadn't thought about was the way in which organisations start with the Cloud. One way is to start with SaaS services such as Office 365 and Salesforce. The other is through virtualisation, hosting infrastructure in the Cloud rather than on premise. Once your transformation has started you begin using higher-level services and may end up moving development onto the Cloud.

Microservices – Containers and Service Fabric

One session was focused on Azure Service Fabric. Apparently Service Fabric underpins many other Azure services, including pretty heavy ones such as Azure SQL Database. Service Fabric provides a means of hosting lightweight services that can be stateful or stateless, and it also provides support for the Actor model. Under the hood, VMs are added to a cluster which Service Fabric manages. Service Fabric controls which host each service runs on and deals with reliability and failover automatically.

The presenter gave a demo of a simple stateful service incrementing a counter. All the code did was increment a number stored in a special data type, a reliable collection. As the service was incrementing the counter on screen, the speaker ripped nodes out of the fabric. As nodes came out the counter stopped briefly and then continued on another node seamlessly without losing state.

The take-away here: this was all for free. There was very little Azure configuration required and no need for anything like SQL Server. This is really something to consider when building Microservices on the Microsoft stack.

Other sessions covered Azure's support for Docker containers. Whilst Microsoft has a good story for running Docker containers on Azure, their strategy still doesn't seem coherent. All they really have is an alternative home for Linux shops running Docker. But why would a Linux shop move to Azure for that capability?

I think there is a long way to go before Microsoft developers really get containers. Containers have lived in the Linux ecosystem for some time, whereas there is nothing really equivalent in the Microsoft space. Maybe there will be more traction once Windows Server 2016 provides native container support.

[Diagram: hosting options for Microservices, from Service Fabric (most productivity, least control) to Docker on your own VMs or the Windows Container Service (most control, least productivity)]

Microsoft uses the diagram above to differentiate the various ways of hosting Microservices. Service Fabric provides lots of productivity but limited control, whereas hosting Docker on your own VMs or using the Windows Container Service (basically containers for Windows 2016) provides lots of control but at the cost of limited productivity.

Build Always-On, Hyper-Scalable Microservice-based Cloud Services

Automate application deployment and orchestration using Docker and containers

Azure Websites

It is pretty clear that Azure Websites is the preferred way to host websites, mobile back ends and API applications on Azure. The "legacy" option of Web Role Cloud Services didn't get a look-in. Officially, Azure Websites provides greater productivity while Web Roles provide more control, but I would struggle to make a strong case for using anything other than Azure Websites.

The stand-out feature for me is the concept of website slots, which amongst other things provide a target for deployments. For example, you could have testing, staging and production slots on the same website. Swapping staging into production is straightforward, and it is possible to have "sticky" configuration which ensures that production still points at the production database whilst allowing any configuration needed to support new features to move into production with them. It is also possible to split traffic between slots. You might do this as a form of A/B testing; splitting traffic between staging and production allows new features in staging to be tested with real customers.
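As a sketch with the Azure PowerShell cmdlets of the era (the site and slot names are placeholders), a swap looks something like this:

Switch-AzureWebsiteSlot -Name "mysite" -Slot1 "staging" -Slot2 "production"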

Integrate, deliver, and deploy continuously with cloud DevOps

Mobile Story

The announcement that Microsoft was acquiring Xamarin emerged while the roadshows were running, and surprisingly there was no spotlight on it, but I think this is a key strategic move for Microsoft. Up to this point Microsoft did not have a coherent strategy for cross-platform mobile development.

You could build a responsive JavaScript application targeting the mobile browsers on Windows Phone, Android or iOS, but this doesn't let the application access the native features of the device. If you need to access the hardware you'd use Apache Cordova which, via plugins, can reach the underlying hardware.

However, for the best performance and experience you need to build native applications. In the past, to do this you were either limited to targeting Windows Phone or building Universal applications, or you left Microsoft's development stack and used more specialised tools. Xamarin plugs this gap: it provides a single project containing all the common code, with lightweight satellite projects containing the specialised code for each target device.

Azure DevTest Labs

The final thing I wanted to draw out was Azure DevTest Labs.

This was touched on as part of a session on DevOps and Visual Studio Team Services (VSTS) tooling, and it looked very powerful. It effectively allows developers and testers to spin up preconfigured development and test labs hosted on Azure. It can be hooked into a VSTS release pipeline, allowing environments to be provisioned as part of the release cycle. Administrators create templates that define the environment, server and software configuration. Resources and costs can be managed by enforcing quotas and policies, and the VMs can be configured to shut down automatically when they are not being used. It was still in preview at the time of publishing, but it represents an interesting addition to Microsoft's DevOps story.

Personality Traits in Agile Software Development

The Agile Activist cannot see beyond their chosen framework, whether that be SCRUM or something else. They will talk about delivering business value, but they only really care about the rate of delivery. For them it's about JFDI rather than quality.

These people are easy to spot; just suggest improving the non-functional characteristics of the system:

Me: We need to invest time building in this logging framework so the solution can be monitored better in production
Agile Activist (AA): That is not business value

Me: We need to understand security protocol X for when we integrate with the other system
AA: We can do without security

Me: We should implement patterns that create loosely coupled components and minimise dependencies on external systems
AA: That sounds like up front design. Just get the two project teams in a room and we will sort it out.

On paper these exchanges look crazy, but they happen. The justification is that the project is ticking off high-level business features and hitting all the key milestones. The downside is that technical debt mounts up, causing team velocity to flatline or even fall.

Craftsmen use their experience to deal with the task at hand. A carpenter will use power tools when manipulating large panels of wood, but small, light hand tools for detailed work.

Software engineers must be allowed to build secure software that is easy to manage and is independently releasable. This provides as much value as any new business feature does.

— oOo —

When organisations are transitioning to Agile, Product Owners come in many shapes and sizes. Some will be open minded to Agile ways, others, Project Manager POs, will have one foot in the old ways, the command and control ways, the time and budget micromanagement ways.

In SCRUM, Product Owners are authoritative. They have the position and stature to make decisions and arbitrate disputes. However, they don't know everything and should rely on their team in many situations, particularly when a technical decision needs to be made or a complex argument needs resolving. It is a question of shared ownership and trust throughout the team.

Project Manager POs often reach for the old ways of cost and scope control without understanding the implications. The complexity in a technical discussion flies over their heads, and they select the option that looks cheapest, or does not expand the project's scope, or looks like it will be delivered on schedule. Sometimes they simply make the decision that makes the problem go away.

Hopefully you'll be on a strong team and can support your PO in these decisions. If you're not, then let's hope those decisions don't come back to bite you, the project or the organisation you are working for.

— oOo —

It is possible for someone with little technical experience to find themselves in a senior technical role. This may be due to the Peter Principle, or they may simply have been at the same company, on the same project, for a long time. Once these Institutionalised Technical Leads are taken out of their comfort zone, either by working on a new project or by their long-standing project being ported to a new technology stack, they are exposed.

The currency of Institutionalised Technical Leads is experience and knowledge of their domain. In the past they always knew more than newer members of the project team, and they often used that to their advantage. Institutionalised Technical Leads are often good in a crisis; they are excellent firefighters. But dropping them into a new project or a new business domain, or asking them to use new technology, resets the levels across the team.

When presented with a new way of doing something, one that may be well understood in the industry, their confirmation bias tells them that the approach they are hearing about is overkill and unnecessary. It sounds complex to them, even though it is simple to the person doing the explaining.

  • I don’t need REST and JSON. SOAP and XML does everything we need.
  • Why do we need messaging when I can simply have all of my code share a common database schema?
  • Implementing SRP by creating many classes is much more complex than sticking all the logic in one class. One thing is simpler than 100s of things, right?

I have worked with Institutionalised Technical Leads like this many times in the past. They can cause disruption in a couple of key ways.

One way they have impacted projects is through a general resistance to anything "different". This results in moaning and complaining that can have a negative impact on team morale. Another manifestation is long sessions in which relatively well-known patterns have to be described and justified in detail whilst they are deconstructed and challenged.

The latter manifestation can be very damaging. Not only does it suck up a lot of time, it also creates a perception that the team are not aligned which often reduces the confidence of key stakeholders.

Don't get me wrong, buy-in is important, but it works both ways. It is important to point Institutionalised Technical Leads to examples of how modern software is built, but it is also their responsibility as software professionals to keep up to speed with what is happening in their chosen trade by understanding the pros and cons of the various ways software is designed.