Connecting Web Apps to External Services – Virtual Appliance

Last time I proposed a solution which enables Web Apps hosted in Azure App Services to communicate with services running in a private network. That solution required the configuration of a Site to Site VPN, which requires network configuration both in Azure and on the private network. Sometimes that is not possible, so alternative options should be considered. Below I restate the problem we are trying to solve and then describe an alternative approach.

Problem:

Code hosted in an App Service needs access to a web service endpoint hosted on an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Deploy Virtual Appliance

In this scenario, there is no private connection between the Web App and the private network. Instead, all traffic destined for services on the private network is routed onto an Azure VNET and then through a Virtual Appliance. The Virtual Appliance acts as a Firewall and an Outbound Proxy: it forwards traffic over the Internet to the external system endpoints. The Public IP of the appliance would need to be whitelisted on those systems.

Challenges

The software that runs on a Virtual Appliance is “Enterprise Class” which means it can be difficult to understand and configure correctly. It may also require additional effort to support over the course of the solution’s lifetime.

Context

The following diagram explains the configuration we are trying to achieve. This time there is no need to have an understanding of the private network topology. The only things that need to be shared are the IP address of the outbound traffic originating from the Virtual Appliance and the public IP address of the service we are connecting to.

[Diagram: Virtual Appliance configuration (no IP addresses shown)]

The following list highlights the differences from the Site to Site VPN solution.

Azure VNet: The main difference with the VNET is that User Defined Routes (held in a route table) must be created and maintained to ensure the correct traffic is routed through the Virtual Appliance.
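
To make that concrete, here is a minimal sketch using the Azure CLI. The resource group, VNET, subnet and address values are placeholders, and the appliance's private IP is assumed to be 10.0.1.4; adjust them to your own environment.

    # Create a route table to hold the user defined routes
    az network route-table create \
      --resource-group my-rg \
      --name appliance-routes

    # Route traffic for the external service's public address via the appliance
    az network route-table route create \
      --resource-group my-rg \
      --route-table-name appliance-routes \
      --name to-external-service \
      --address-prefix 203.0.113.10/32 \
      --next-hop-type VirtualAppliance \
      --next-hop-ip-address 10.0.1.4

    # Associate the route table with whichever subnet's traffic should be forced through the appliance
    az network vnet subnet update \
      --resource-group my-rg \
      --vnet-name my-vnet \
      --name BackendSubnet \
      --route-table appliance-routes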

Frontend Subnet: Acts as the perimeter subnet for the Azure VNet. Having this subnet contain only the virtual appliance makes routing simpler.

Barracuda F-Series: This is the software providing the Virtual Appliance. It is a fully featured Firewall and as such requires some investment to understand it properly. Not only do you need to know how to operate it, you also need to understand how to secure it properly.

In Azure, this is a preconfigured VM which is provisioned with a Network Security Group and a Public IP.

It must be licensed; you can operate it on a Pay As You Go basis or you can pre-purchase a license. By default, the Virtual Appliance is a single point of failure. If the firewall were to go down in production, at best all connectivity to the private network would be lost and at worst all Internet connectivity from your Web App would be lost (depending on how the Azure VNET and Point to Site VPN are configured).
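
The appliance itself is deployed from the Azure Marketplace. As a rough sketch, you can browse the available images and accept the marketplace terms from the CLI; the publisher name below is an assumption, so check what your subscription actually offers.

    # List Barracuda images available in the Marketplace (publisher name assumed)
    az vm image list --publisher barracudanetworks --all --output table

    # Marketplace images normally require their terms to be accepted once per subscription
    az vm image terms accept --urn <publisher:offer:sku:version>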

Test Endpoint: The test endpoint only needs to be Internet accessible for this configuration to work. In real world scenarios, the public endpoint is likely to be exposed through some sort of perimeter network and your originating IP address will need to be whitelisted before access can be established.
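
The address that needs whitelisting is the public IP attached to the Virtual Appliance. A small sketch, assuming the public IP resource is named appliance-pip:

    # Show the appliance's outbound public IP so it can be whitelisted on the external system
    az network public-ip show \
      --resource-group my-rg \
      --name appliance-pip \
      --query ipAddress \
      --output tsv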

Connecting Web Apps to External Services – Site to Site VPN

Last time I set the scene for a common scenario when using Web Apps hosted on Azure App Services. How do I connect to services hosted on a private network? This time I’ll walk through the first potential solution option.

Problem:

Code hosted in an App Service needs access to a web service endpoint hosted in an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Site to Site VPN

Build a private network segment in Azure as an Azure VNET. Connect the App Service to the private Azure VNET using a Point to Site VPN. This acts as a private connection between your application, hosted in the Azure multi-tenanted App Service infrastructure, and the Azure VNET, allowing it to access resources routable via the VNET. Resources on the VNET are not able to access the application. The on premise network is connected to the Azure VNET via a Site to Site VPN. This effectively extends the on premise network to the cloud, allowing bi-directional communication between resources hosted on premise and those hosted in Azure via private network addressing.

Challenges

Network configuration is required within the on premise network to enable the VPN connection to function. This includes setting up either VPN software or an appliance and configuring network routing to ensure that traffic destined for Azure is routed through the VPN.

The network in Azure must be designed with the on premise network in mind. As a minimum, you need to understand the on premise network design well enough to avoid address conflicts when creating the Azure VNET. In addition, any design principles in play on the on premise network are likely to extend to the cloud hosted network.

What this means in practice is that there needs to be collaboration and coordination between the people managing the on premise network and yourself. Depending on the situation this may not be desirable or even possible.

Context

The following diagram explains the configuration we are trying to achieve.

[Diagram: Site to Site VPN configuration (no IP addresses shown)]

The main components are:

Azure App Services: When setting up the Point to Site VPN you must define a network range. This is the range of addresses that Azure will select from as the outbound IP addresses that the App Service hosted application presents onto the Azure VNET. Whilst you might assume that this is the IP address of the server hosting your application, it is not quite that straightforward, as Azure is working under the covers to make this all work. However, you can assume that traffic from your application will always originate from this range of addresses, so if you make it sufficiently small it is suitable for whitelisting in firewalls, etc. without compromising security.

Azure VNET: Represents your virtual networking space in Azure. You define an address space in which all of your subnets and resources will live.
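
As an illustrative sketch (names and address ranges are placeholders, and exact flag names vary slightly between CLI versions), the VNET and a first subnet can be created with the Azure CLI:

    # Create the VNET with an address space large enough for all of its subnets
    az network vnet create \
      --resource-group my-rg \
      --name my-vnet \
      --address-prefixes 10.0.0.0/16 \
      --subnet-name BackendSubnet \
      --subnet-prefixes 10.0.2.0/24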

GatewaySubnet: This is created automatically when you create the VPN gateway in Azure. From experience, it is better to leave it alone. If you add a virtual machine or other networkable devices into this subnet, routing becomes more of a challenge. Consider this subnet to be the place where external traffic enters and leaves the Azure VNET. The gateway subnet exists inside your Azure VNET, so its address range must sit entirely within the Azure VNET address space.
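
For reference, a hedged sketch of creating the gateway subnet and the VPN gateway follows; the SKU and names are illustrative, and provisioning the gateway typically takes a long time (around half an hour or more).

    # The subnet must be called GatewaySubnet for Azure to use it for the gateway
    az network vnet subnet create \
      --resource-group my-rg \
      --vnet-name my-vnet \
      --name GatewaySubnet \
      --address-prefixes 10.0.0.0/27

    # The gateway needs its own public IP
    az network public-ip create --resource-group my-rg --name vpn-gateway-pip

    # Create a route based VPN gateway (supports both Point to Site and Site to Site)
    az network vnet-gateway create \
      --resource-group my-rg \
      --name vpn-gateway \
      --vnet my-vnet \
      --public-ip-address vpn-gateway-pip \
      --gateway-type Vpn \
      --vpn-type RouteBased \
      --sku VpnGw1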

Backend Subnet: This is an optional subnet. Its primary purpose in this walkthrough is testing. It is relatively simple to add a VM to the subnet so you can check whether traffic is propagating correctly. For instance, you can verify that the Point to Site VPN is working if an App Service application can hit an endpoint exposed on the VM. Similarly, you can verify that your Site to Site VPN is working if a VM on this subnet can connect to an endpoint on a machine on your on premise network via its private IP address. The subnet must have an address range within that of the Azure VNET and must not clash with any other subnet. In practice, this subnet can be the location for any Azure resource that needs to be network connected. For example, if you wanted to use Service Fabric, a VM Scale Set is required. That scale set could be connected to the backend subnet, which means it is accessible to applications hosted as App Services. In this configuration, it has a two-way connection into the on premise network but a one-way connection from Azure App Service to resources on the backend subnet.
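
A quick way to get a test endpoint onto the backend subnet is to drop a small Linux VM onto it, for example (image alias and size depend on your CLI version and region, so treat these values as placeholders):

    # Create a test VM on the backend subnet with no public IP
    az vm create \
      --resource-group my-rg \
      --name test-vm \
      --vnet-name my-vnet \
      --subnet BackendSubnet \
      --image Ubuntu2204 \
      --size Standard_B1s \
      --admin-username azureuser \
      --generate-ssh-keys \
      --public-ip-address ""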

On Premise: This represents your internal network. For demonstration purposes, you should try to build something isolated from your primary Azure subscription. This builds confidence that you have everything configured correctly and that you understand why things are working, rather than it being a case of Azure “magic”. You could set this up on something completely different from Azure, such as Amazon Web Services, and in a later post I’ll walk through how to do that. However, if you are using Azure, ensure that your representation of the on premise network is isolated from the Azure resources you are testing. The IP address space of the on premise network and the Azure VNET must not overlap.
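
On the Azure side, the on premise network is represented by a local network gateway, and a connection ties it to the VPN gateway. A sketch, with the on premise VPN device's public IP, its address space and the shared key all placeholders:

    # Describe the on premise network and its VPN device to Azure
    az network local-gateway create \
      --resource-group my-rg \
      --name onprem-gateway \
      --gateway-ip-address 198.51.100.20 \
      --local-address-prefixes 192.168.0.0/24

    # Create the Site to Site connection; the same shared key must be configured on premise
    az network vpn-connection create \
      --resource-group my-rg \
      --name azure-to-onprem \
      --vnet-gateway1 vpn-gateway \
      --local-gateway2 onprem-gateway \
      --shared-key <pre-shared-key>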

Connecting Web Apps to external services – Overview

When starting out building websites with Azure, it is likely that you’ll start by deploying a Web App to Azure App Services. This is a great way to get going but, as with all things, as you invest more time and your solution grows up you might experience some growing pains.

All credit to Microsoft. You can build a sophisticated solution with Azure Web Apps and you can have it connect to an Azure SQL instance without really thinking about the underlying infrastructure. It just works.

The growing pains start when you want to connect this Web App to something else. Perhaps you need to connect to some other service hosted in Azure. Or perhaps you need to leverage a third-party system that does not allow connections over the Internet.

Some development focused organisations stumble here. They have chosen Azure for its PaaS capability so they don’t have to think about infrastructure. They can code, then click deploy in Visual Studio – job done. Unfortunately, breaking out of this closed world requires some different skills – some understanding of basic networking is required.

Getting through this journey is not hard but it requires breaking the problem down into more manageable pieces. Once these basics are understood they become a foundation for more sophisticated solutions. Over the next few posts I’m going to go through some of these foundation elements. They allow a Web App running on Azure App Services (or any other type of App Service) to break out, first to leverage other resources running in your Azure subscription, such as databases or web services running on VMs, and then out to services running on other infrastructure, whether cloud hosted or private.

Over this series of posts I’ll be addressing the following scenario.

The code running in a Web App hosted on Azure App Services needs to call a Web Service endpoint hosted in a private network behind a firewall. The organisation says that they’ll only open the firewall to enable access to IP addresses that you own.

This discounts opening the firewall for the range of outbound IP addresses exposed by Azure App Services, as there is no guarantee that you have exclusive use of them.
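
You can see these shared outbound addresses for yourself; they belong to the App Service scale unit rather than to your application, which is why whitelisting them alone is not enough. For example:

    # List the outbound IP addresses your Web App may use; they are shared with other tenants
    az webapp show \
      --resource-group my-rg \
      --name my-web-app \
      --query outboundIpAddresses \
      --output tsv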

So the approach will be to build a network in Azure to which the Web App can connect. Then connect the Azure network to the private network by way of a private connection or by way of a connection over the Internet where traffic is routed through a network appliance whose outbound IP is one controlled by you.


Backlog Black Hole – Agile Anti Patterns

If I create a story and add it to the backlog, it will be lost forever and will never get done

A backlog is a prioritised list of work that needs to be done. The important stuff is at the top and the least important stuff is at the bottom. If you find that work is “disappearing” in your backlog, what could be the cause?

  1. The backlog is not being maintained. The backlog is a living thing and as such needs feeding and watering; by that I mean it needs near constant refinement. When new work is discovered it gets added to the backlog, but what is happening to the existing stuff? All stories need to be reviewed, not just the new ones. They need updating based on current knowledge. That might mean that a story is no longer required and should be removed. Some people go so far as to purge stories that have not been delivered based on their age. The thinking is that if a story gets to be 3 to 6 months old without being delivered, the chances are it will not be delivered in its current form at all.
  2. The newest work is the highest priority. Just because you have thought of the next killer feature, it doesn’t automatically mean delivering that work is the highest priority. It should be assessed against all the work in the backlog. If new work is always added to the top it starts to push older work down, often meaning the team never gets a chance to work on it.
  3. The work is not well defined. In order for someone to understand the work involved in a story it must be clear. If you are going to the trouble of adding work to the backlog that you think needs to be done, you should also put in some effort to describe it. I’m not saying that you need to write “War and Peace”, but you do need to be able to represent the work to the Product Owner in ceremonies such as backlog refinement. In some circumstances, there are benefits in having a triage process for new work. This provides a chance for the work to be reviewed by the necessary parties to ensure that it is understood, prioritised and actually needed.
  4. You don’t have the right tools. A small team might get away with managing their backlog with sticky notes on a board. Larger teams may need some tooling. Tooling can be an inhibitor as well as an enabler, so perhaps a tool has been implemented that is hard to use or that requires the team to be trained on it. This might make it hard to find stories when you need them. Often it is possible to configure tools to provide reports of stories added in the last week, or to enable integration with messaging tools such as Slack so you have a constant stream of messages indicating new work entering the backlog.

Up to now this discussion has focused on the negative position that it is a bad thing that work is “being lost” in the backlog. However, when you think about it this may be a sign that you are doing the right thing. The work coming in may be aspirational or simply a wish list which is not what your customers really need. If you have an effective feedback loop you’ll be reacting to your customer’s needs rather than focusing on the things that they don’t care about.

Therefore, if you are the one coming up with the ideas that are not making it into the system you need to understand why. You can’t be precious about the work because it was “your idea”. This is looking at the product you’re building from a personal point of view and not considering how the product is used in reality. Perhaps you don’t understand the product as well as you think you do.

Finally, it is worth making a point around continuous technical improvement. My point of view is that for a product to be successful over a long period of time the technology it is built with needs to continuously evolve. Whether you call this technical debt or something else the point is that there will always be technical work that needs to be done that may not have direct value for the customer. The value is actually to your business as you’ll be able to continue to serve your customers in the future.

How you deal with this depends on the organisation. Often people implement a capacity tax which says that a given percentage of the team’s capacity goes towards technical improvement. This way the team are not asking for permission to improve things, but there is still a need to document and prioritise the technical work that needs to be done. This is still a backlog. In other situations, where the product owner is technically savvy and understands the relative value of delivering new features vs technical improvement, technical stories can be treated like any other work in the backlog.

Whichever way you look at this, it boils down to the fact that there is a pile of work that needs to be done. The work needs to be prioritised, and each work item will have a different potential value to your customer and your business. And there needs to be a way to make this work visible and transparent in an efficient manner.