Effective Teams – Euro 2016 Edition

I’m writing this as the EURO 2016 football tournament has finished the group stages and the knockout phase is about to start. Watching some of the games and comparing teams’ actual performance against their predicted performance has parallels with the things you see when building teams.

The Portuguese team can boast one of the best players in the world in the form of Cristiano Ronaldo. However, they scraped through their group in 3rd place. This was a group they might have expected to win. They didn’t win a game; they drew all three fixtures. On paper Portugal are a good team. They have a number of good players, many of whom play for successful league teams. So what is going on?

In the first two games their performance was lacklustre. They were operating as individuals and expecting Ronaldo to create something from nothing. And it wasn’t as though he wasn’t trying. The UEFA stats indicate that he had 33 goal attempts across the three games, which is more than the total attempts of 8 teams in the tournament. An average of 11 attempts per game is not to be sniffed at. In the final group game things seemed better: Ronaldo scored two of Portugal’s three goals, which would have been perfect, if only their opponents Hungary hadn’t also scored three.

What you have here is a team with an obvious overachiever. In a development team this might be your rock star developer or a firefighting hero. These people stand alone as technical experts in their field. They have a track record of results and their reputation precedes them. But having them in your team does not guarantee results. Success is measured in terms of team effectiveness, not an individual’s performance. The rock star developer cannot carry the team, just as Ronaldo can’t guarantee a win by scoring two goals if his team’s defence has let in three.

This happens in many teams, not just development teams. My post, When Efficient isn’t Effective, covers some of my thoughts on this. The crux is that you can’t just throw people together and simply expect the team to work at or above the level of the highest performer.

If you look at some of the so-called lesser teams, the teams that held Portugal to a draw or the Republic of Ireland who beat Italy (past Euro runners-up and World Champions), you see something different. Here you see players working as a unit. Whilst they have good players, none are at the same level as Ronaldo.

The old adage that the whole is greater than the sum of its parts rings true.

Update: As it turned out, Portugal, the team with the overachiever, the team being carried by one player, went on to win the tournament. So does that mean that this is the most effective team model? If you watched Portugal’s final against France, something interesting happened. Within the first 10 minutes Ronaldo was obviously injured in a challenge. Despite trying to carry on, he was substituted midway through the first half.

The star player was gone and the team looked shocked. For some time it seemed that there was no plan B. Portugal went on the defensive (something that had worked well for them over the course of the tournament), possibly hoping to absorb the pressure and force the game to penalties. Penalties have nothing to do with the team and everything to do with the individual.

France had their fair share of attacks, but Portugal’s defence held strong, and when France did break through, luck was not on their side. As the game went on Portugal’s confidence grew and they looked more like a complete side, forming their own attacks and not just relying on defence. And then, late into extra time, with penalties looming, Portugal created a chance and scored. This was enough to see them through to the end of the game and to lift the trophy.

So what happened?

The team lost their star player and were forced to re-form. Not only did their team members change, but so did their tactics. They focused on what they were good at and realised that this was a strong foundation to build on. Without the star, other team members came to the fore and they self-organised into an effective unit. This reorganised team went on to meet their objective and win the game.

It is hard to say whether Portugal would still have won if Ronaldo had stayed on the field. But what this game did highlight is that sometimes unexpected bad things happen, and teams can still be effective if they have the ability to self-organise.

 

What is a Service anyway?

The current architecture style “du jour” is Microservices. You can’t get away from articles that mention them. People write about when to use them and when not to, how big they should be, what they help you achieve and how they will cause problems. If you are working on projects using Microservices these debates will rage, opinions will conflict and sometimes, just sometimes, something useful will be delivered.

Whilst these conversations are going on, one thing is often missing, something that can be the key to the success of a project: a common understanding of the term “Service”.

Once you identify that a common understanding is required, the next problem is to agree on one. If you ask three different people to define a service you get three different opinions, three different perspectives. I’m not going to write about how you reach a consensus; that is a different subject. What I will describe is my understanding of services… take it or leave it.

When I visualise a service I see an independent unit of software with discrete and clear boundaries. I home in on the now elderly “Services are Autonomous” tenet from the original SOA Four Tenets. Don Box is credited with coming up with the four tenets. When he defined “Services are Autonomous” he meant that services should be built, deployed and versioned independently from each other. People jumped on this bandwagon and duly built systems this way, but they often turned out to be brittle.

These systems delivered a network of services with many runtime dependencies. Situations could arise where many services shared dependencies on a small number of core services. A failure of any one of these services, which had been built, versioned and deployed independently, brought the whole system down. It was not just one application that was affected. The organisations that had fully embraced SOA were sharing services across multiple applications, so now a single failure had the ability to disrupt a whole suite of business applications.

To understand why this happened you need to go back to the Fallacies of Distributed Computing. The approach described above introduces temporal coupling, where all services are assumed to be up and responsive at all times. The fallacies tell us that, well, this is a fallacy. Instead your architecture should expect regular failures in a network of services.
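To make that coupling concrete, here is a minimal sketch. The service names and functions are hypothetical, and in-process calls stand in for remote ones, but the shape is the same: every hop in a synchronous chain must be up and responsive, so one outage cascades straight back to the caller.

```python
# A hypothetical order/pricing pair of services, modelled as in-process calls.
# When pricing is down, the whole user-facing operation fails with it.

class ServiceUnavailable(Exception):
    pass


def pricing_service(sku: str) -> float:
    # Stands in for a remote call; it fails here to simulate an outage.
    raise ServiceUnavailable("pricing service is down")


def order_service(sku: str, quantity: int) -> dict:
    # Synchronous, blocking dependency: no order can be taken while pricing
    # is unavailable, even though the price could have been resolved later.
    price = pricing_service(sku)
    return {"sku": sku, "quantity": quantity, "total": price * quantity}


if __name__ == "__main__":
    try:
        print(order_service("ABC-123", 2))
    except ServiceUnavailable as err:
        print("checkout failed:", err)  # one outage, cascading failure
```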

More recently people (notably Udi Dahan) have advocated that the “Services are Autonomous” tenet should be extended to also stipulate that services should be able to function autonomously at run time. That is, they should be able to function as much as possible without direct influence or dependency from or on other services. That is not to say that services live in a vacuum. Services must of course be able to interact with other services, but now they do so using asynchronous one-way messaging.
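As a rough illustration of the difference this makes, here is a minimal sketch with hypothetical order and billing services, where an in-memory queue stands in for real messaging infrastructure: the sender publishes an event and carries on, and the receiver being down does not break anything.

```python
# The order service publishes a one-way "OrderPlaced" event and returns
# immediately; the billing service consumes events on its own schedule.
# If billing were down, events would simply wait on the queue.

import queue

order_events: queue.Queue = queue.Queue()


def order_service(sku: str, quantity: int) -> None:
    # Fire-and-forget: no blocking call to, or knowledge of, the consumer.
    order_events.put({"type": "OrderPlaced", "sku": sku, "quantity": quantity})
    print("order accepted, event published")


def billing_service() -> None:
    # Drains whatever has arrived since it last ran.
    while not order_events.empty():
        print("billing processed:", order_events.get())


if __name__ == "__main__":
    order_service("ABC-123", 2)   # succeeds even though billing is not running
    billing_service()             # catches up afterwards
```

In a real system the queue would be durable messaging infrastructure rather than an in-process object, but the shape of the interaction is the same.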

I am firmly in this camp. My service architectures aim to minimise temporal coupling and therefore require some form of asynchronous communication. Depending on the situation this may be pseudo-asynchronous patterns built on top of HTTP or it may be a fully fledged messaging system. I often face the challenge that this type of approach is too complex or just too hard; surely it is as simple as Service A synchronously calling Service B?

I smile and then refer the challenger to this post.

When efficient isn’t effective

A good developer can get a lot of stuff done when they work on their own. Everything is in their head and everything is under their control. There are no distractions and no-one to disagree with. Back in the real world, though, there are very few one-person projects. To start with you have at least one customer, and as the team grows that is where the trouble starts.

When you have one developer they have the capacity to be 100% efficient. When I say efficient, I mean there is very little waste and they are approaching the task in hand in the most optimal way. But they may not be effective. How can that be? Well, the definition of effective in this context is that the developer is doing the right task and achieving all of their goals. In the one-developer scenario being efficient helps you be effective… but it does not guarantee it. There simply might not be enough hours in the day to achieve the goals that have been set out. Hence the developer is as efficient as they can be, but they are still not effective.

So the team grows. More developers join. Maybe these developers are also very efficient. Maybe they approach each task in the most optimal way. Being very efficient in a team situation means that there isn’t time for debate, experimentation and challenge. This might mean your team works as individuals with very little communication, and there will be no feedback. Without feedback there can be no guarantee that the right task is being done, so even though you now have a team of efficient developers, the combined team is no more effective, or is even less effective, than the single developer working on their own.

You are looking for the team to be effective and not necessarily 100% efficient. The team has to “gel”. They need to communicate well, understand their combined strengths and weaknesses and help each other out. In this example it is important to build an effective team before that team can become more efficient. This gives you a foundation from which to scale the team out as required. This is the essence of Tuckman’s stages of group development.

It is a lot harder to scale a team to make it more effective when it is already highly efficient but ineffective. The reason is that the activity required to scale the team will negatively affect efficiency, which is the primary metric the team values. In Tuckman’s model, scaling the team is re-forming it. However, if the team is efficient, or otherwise busy, but under pressure to be more effective, it will counter this by trying to be even more efficient. Its efficiency is a positive; the team believes it brings results, so it wants more of it. Bringing in new tooling or processes will require training, which reduces efficiency. On-boarding new team members requires knowledge transfer, which takes time and reduces efficiency. So for a time the team is still not effective enough, and you have just reduced its efficiency too. If the team is under delivery pressure, which it will be if it is being asked to be more effective, then you see resistance, which seems strange when the changes are being made precisely to reduce that pressure in the first place.

There are no easy answers to this in my experience but you should be able to recognise these situations when they occur, understand their characteristics and guide the team in the right direction.

The first important point to realise is that any drive to make a team more effective usually comes with the added context that they are probably not meeting objectives. The pressure is already on. Any change that is introduced at this point is likely to reduce effectiveness in the short term. There is not much that can be done about this so all stakeholders need to be aware of its impact. The team will be moving through Tuckman’s stages so this simply needs to be accepted. It also probably means you should not try to implement many changes incrementally over long periods of time. Each change is likely to revert the team to forming and if this keeps happening it will stop the team from reaching the performing stage.

One of the most likely changes a team will be subjected to is new people. Therefore, care must be taken over the skillset of the people who are introduced at the start. Whilst the project team may be crying out for people with specialisms in particular areas to “help solve an urgent problem”, the bigger problem is the lack of effectiveness. You should not exacerbate the problem by bringing more individualists into the team. Instead you should be looking for people with the soft skills that enable them to quickly adapt, skills that allow them to coach the team whilst also having the ability to see the bigger picture. These first people need to be self-starters, stay calm and provide helpful contributions in a crisis and, importantly, be able to operate with very little guidance from the existing team. So when selecting candidates for these projects I’m looking for the right person, not a series of skills listed on a profile.

Risk and Architecture

When you review the definition of the role of an Architect you often see references to “managing” or “owning” technical risk. Technical risks might include:

  • Does the solution perform in the prescribed way under typical load?
  • Is the solution secure?
  • Can the solution be integrated with all the necessary external systems?

So what does managing or owning this risk look like?

Most projects manage a risk log. This is a log of all the identified risks, the likelihood of their occurrence, the likely impact if the risk is realised, and mitigations to minimise the chance of the risk occurring or its impact if it does. Sometimes people focus their efforts on risk discovery, the process of identifying the risks and mitigations. Much effort is spent on this process and the results are neatly compiled into a spreadsheet. This becomes a project document and is tightly controlled. Adding or removing risks suddenly becomes a chore and the document instantly becomes a historical record of the project’s risks at the time it was compiled. It no longer represents the current status. The project team have missed the point about managing the risk.

The project or solution landscape is constantly changing. If you are undertaking mitigating actions, you should be reducing the likelihood and impact of the risks. You may even be eliminating them. Your actions may have side effects, causing new risks to emerge. External factors could introduce new risks and turn risks into issues. Risks are not static.

One of the best pieces of advice I have received is to use a technical risk log as the basis for constructing a daily task list. This means that reducing risks, mitigating them and discovering new ones becomes part of your daily routine, driving your daily activity. This helps the risk log come alive rather than being a static list. You are constantly asking what is being done about technical risks rather than simply looking at them in a document.

This might not sound appropriate for agile projects, but it can be. Even if Architecture is a role rather than a person, there is value in addressing risks regularly. This may involve reviewing the risk log with the PO and helping them understand what the risks are and what their impact might mean for the project and the solution. This can be fed into the prioritisation process so that stories which address technical risk are delivered at the appropriate times. The risk log itself is lightweight, with each risk given a score or a weighting. As risks are mitigated these weightings are reviewed regularly. The total of these weightings is recorded at regular intervals (perhaps at sprint reviews) and the total weighting slowly burns down over time. This should be made visible along with other metrics on the project radiator, demonstrating to all stakeholders and other interested parties that the project is reducing its risk profile over time.
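As a rough illustration, here is a minimal sketch of that kind of lightweight risk log, with entirely hypothetical risks and weightings: each risk carries a weight that is re-scored at reviews, and the total is captured each sprint so the burndown can go on the project radiator.

```python
# A lightweight technical risk log: each risk has a weight (a rough blend of
# likelihood and impact) that is reviewed as mitigations land, and the total
# weight is recorded each sprint to build a risk burndown.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Risk:
    description: str
    weight: int  # higher = more likely / bigger impact; re-scored at each review


risk_log: List[Risk] = [
    Risk("Solution may not meet response times under typical load", 8),
    Risk("Integration contract with the external payments system unconfirmed", 5),
    Risk("Authentication approach not yet security reviewed", 3),
]


def total_weight(risks: List[Risk]) -> int:
    return sum(risk.weight for risk in risks)


burndown: List[Tuple[str, int]] = []
burndown.append(("Sprint 4", total_weight(risk_log)))  # recorded at sprint review

# Mitigation work in the next sprint reduces the performance risk, so it is re-scored.
risk_log[0].weight = 4
burndown.append(("Sprint 5", total_weight(risk_log)))

for sprint, total in burndown:
    print(sprint, total)  # 16, then 12: the risk profile is visibly reducing
```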