Connecting Web Apps to external services – Point to Site VPN Walkthrough


This post is the first step of breaking out of your Web App hosted in Azure App Services into the outside world. It involves connecting it to a virtual network (VNET) running in Azure. To make this real we’ll have it connect to a Web Service running on a Virtual Machine connected to the VNET. In doing so we will have proven that a network connection can be initiated from the Web App, that traffic routes onto the virtual network, and that it is directed to an endpoint connected to the VNET.

I have updated the diagram from the overview to include the network address ranges we’ll be using. For this part of the walk through you can ignore the on premise network and the Site to Site VPN connection. We’ll be dealing with that and the virtual appliance alternative option in a future post.


The point to site VPN feature in Azure is designed to let individuals connect their PCs and laptops to a network hosted in Azure, in a similar way to how you might use VPN software to connect from home to your corporate network. Connecting App Services to an Azure VNET piggy-backs on this technology, so the official Azure Point to Site VPN documentation makes no mention of App Services. Instead this article provides better coverage of what needs to be achieved when using Point to Site VPN with App Services. The article describes allowing Azure to create the VNET and the VPN gateway and then hooking it up to the App Service application. In this walk through, by contrast, the components will be built up from scratch to aid understanding of their roles.

Follow along with the steps below to configure a Point to Site VPN connection.

  1. Create a resource group that will contain all of the network components. Keeping the networking and the resources that connect to the network separated in resource groups allows them to have different lifecycles.
  2. Create a Virtual Network in the resource group created in step 1) and give it an address space. You will also have to create a subnet at this point. Call it “backend” and give it a subnet address range within the VNET address space. If you are not using this subnet you can always remove it later. I have not created the GatewaySubnet at this point as it is treated as a special subnet by Azure and is created automatically when you create the Virtual Network Gateway in the next step.
  3. Create a Virtual Network Gateway in the resource group created in step 1). Select Gateway Type: VPN, VPN Type: Route Based & SKU: Basic. Select the Virtual Network created in step 2) and enter a subnet range; this is the address range for the GatewaySubnet. The gateway needs a public IP address so you can create one at this stage. It can take up to 45 minutes for Azure to provision the Virtual Network Gateway.
  4. The next step is to configure the Point to Site VPN. This is achieved by selecting the Virtual Network Gateway you created in step 3) and selecting the “Point-to-Site configuration” option. The screen that appears can seem confusing. There is mention of an Address Pool and Root Certificates. However most of this can be ignored. All you need to configure is the range of IP addresses that will be allocated to clients as they connect over the VPN. This is the address pool mentioned on this screen and you define it using CIDR notation. These addresses must exist outside of your VNET address space and must not clash with any other connected networks.
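The constraint in step 4, that the client address pool must sit outside the VNET address space, can be checked up front. Here is a minimal sketch using Python’s standard `ipaddress` module; the ranges shown are hypothetical examples, not values from this walkthrough:

```python
import ipaddress

def pool_is_valid(client_pool, other_spaces):
    """Return True if the Point to Site client address pool sits entirely
    outside every address space it must not clash with."""
    pool = ipaddress.ip_network(client_pool)
    return not any(pool.overlaps(ipaddress.ip_network(s)) for s in other_spaces)

# Hypothetical ranges: a 10.0.0.0/16 VNET and a candidate client pool.
print(pool_is_valid("172.16.0.0/24", ["10.0.0.0/16"]))  # True: no overlap
print(pool_is_valid("10.0.200.0/24", ["10.0.0.0/16"]))  # False: clashes with the VNET
```

The same check can be extended with the address spaces of any other connected networks (on premise ranges, peered VNETs) by adding them to the list.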


Think of the Virtual Network Gateway as an appliance rather than a network component. It will manage all external connections to and from your network. You configure the gateway as either an Express Route gateway or a VPN gateway. For this example, we consider only the VPN gateway. A Point to Site VPN is not restricted to one connection. It is not a one to one relationship, it is many to one. In theory, you could authorise numerous external endpoints to connect to your network through a single VPN gateway. Likewise, you might create multiple site to site connections through a VPN gateway that connect multiple on premise networks into your Azure VNET. When you create multiple connections this way they all share the bandwidth that is available to the gateway.

This is enough to configure the Point to Site VPN, although it isn’t useful yet because nothing is connected to it. The next steps connect a pre-existing application hosted in an App Service to the Azure VNET via a Point to Site VPN.

Think of the Point to Site VPN address pool as analogous to the address pool managed by a DHCP server. As clients connect they are allocated an address from the pool. If the pool is exhausted new clients cannot connect.
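The pool analogy above can be sketched in a few lines. This is a toy model with a hypothetical range, not how the gateway actually implements allocation:

```python
import ipaddress

class AddressPool:
    """Toy model of a P2S client address pool: hand out host addresses
    until the pool is exhausted, as a DHCP server would."""
    def __init__(self, cidr):
        self._free = list(ipaddress.ip_network(cidr).hosts())

    def allocate(self):
        if not self._free:
            raise RuntimeError("address pool exhausted; new clients cannot connect")
        return self._free.pop(0)

pool = AddressPool("172.16.0.0/30")  # tiny hypothetical pool: two usable addresses
print(pool.allocate())  # 172.16.0.1
print(pool.allocate())  # 172.16.0.2
# A third call would raise: the pool is exhausted.
```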

You cannot simply connect any App Service to your VNET through a Point to Site VPN. Only App Services running in Standard or Premium service plans have access to this feature.

  1. In the Azure Portal select the App Service you want to connect and then select the Networking option. Under VNET Integration, click “Setup” and then select the network you created earlier. If the Point to Site VPN connection is not configured on that network then it will not be selectable. When the connection process is complete you’ll see something like this.
  2. Clicking “Click here to configure” takes you to a screen which is a good summary of how all the networking is configured.

At this point the App Service is connected to the Azure VNET. Now is a good time to test it. This can be achieved by deploying a VM into the backend subnet. This can host a simple API endpoint that can be called from the App Service.

I have a trivial .NET application that does that here.  Essentially it is the MVC starter application with a slight modification. The client application should be deployed to an App Service where it will call the Server application (a simple Web Api project) which should be hosted on a VM connected to the Azure VNET. All this Web Api server application does is construct a string containing the IP address of the server it is hosted on and the IP address reported by the caller. You’ll need to remember to override the client application’s BaseApiUrl application setting in the App Service blade in the Azure portal to reflect your environment.
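For reference, the behaviour of that server application can be sketched using only the Python standard library. This is a hypothetical stand-in for the linked .NET Web Api, not the author’s code; it returns a string containing the server’s view of its own address and the caller’s address:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WhoAmIHandler(BaseHTTPRequestHandler):
    """Respond to any GET with the server address and the caller's address
    as seen on the server side of the connection."""
    def do_GET(self):
        server_ip, _ = self.server.server_address
        caller_ip, _ = self.client_address
        body = f"Server: {server_ip}; Caller: {caller_ip}".encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# To run on the VM:
#   HTTPServer(("0.0.0.0", 8080), WhoAmIHandler).serve_forever()
```

Hitting this endpoint from the App Service shows you the caller IP presented onto the VNET, which is the piece of information the test below relies on.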

If this all works you know the Point to Site VPN is functioning and that traffic is being routed onto the VNET from the App Service correctly. You’ll also be able to see the IP address that is being presented to the VM by the App Service. This should be in the address pool range of the Point to Site VPN connection. I like to test this step before moving on to the on premise connection because it makes troubleshooting at that stage simpler.

So, what have we achieved? We have connected an App Service hosted in Azure to a VM also hosted in Azure using a Point to Site VPN. One thing that is important to realise is that whilst on the face of it you have a simple connection from one Azure resource to another, by definition the Point to Site VPN is routing traffic via the Internet. Remember your VPN gateway has a public IP address, so the VPN tunnel from the App Service is routing onto the Internet and back again. It is also sharing the bandwidth available to the VPN Gateway, so you must consider this in your designs. While this configuration will be suitable for some scenarios, in others you may be better off using Express Route to provide a more predictable connectivity solution.


Connecting Web Apps to External Services – Virtual Appliance


Last time I proposed a solution which enables Web Apps hosted in Azure App Services to communicate with services running in a private network. That solution required the configuration of a Site to Site VPN, which requires network configuration both in Azure and on the private network. Sometimes that is not possible, so alternative options should be considered. Below I have restated the problem we are trying to solve and then go on to describe an alternative approach.


Code hosted in an App Service needs access to a web service endpoint hosted on an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Deploy Virtual Appliance

In this scenario, there is no private connection between the Web App and the private network. Instead all traffic destined for services on the private network is routed onto an Azure VNET and from there through a Virtual Appliance. The Virtual Appliance acts as a Firewall and an Outbound Proxy. It forwards traffic over the Internet to the external system endpoints. The public IP of the appliance would need to be whitelisted on those systems.


The software that runs on a Virtual Appliance is “Enterprise Class” which means it can be difficult to understand and configure correctly. It may also require additional effort to support over the course of the solution’s lifetime.


The following diagram explains the configuration we are trying to achieve. This time there is no need to have an understanding of the private network topology. The only thing that will be shared is the IP Address of the outbound traffic originating from the Virtual Appliance and the public IP address of the service we are connecting to.


The following list highlights the differences from the Site to Site VPN solution.

Azure VNet: The main difference with the VNET is that User Defined Routes via a routing table must be created and maintained in order to ensure the correct traffic is routed through the Virtual Appliance.
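As an illustration of what a user defined route table does, the sketch below applies longest-prefix matching to pick a next hop, so that traffic bound for the private network goes via the appliance while everything else keeps a default Internet route. The routes and address ranges are hypothetical:

```python
import ipaddress

# Hypothetical user defined route table. The most specific matching
# prefix wins, which is how route selection behaves.
ROUTES = [
    ("192.168.0.0/16", "VirtualAppliance"),  # on premise address space
    ("10.0.0.0/16",    "VnetLocal"),         # the Azure VNET itself
    ("0.0.0.0/0",      "Internet"),          # default route
]

def next_hop(destination):
    """Pick the next hop for a destination IP by longest-prefix match."""
    ip = ipaddress.ip_address(destination)
    candidates = [(ipaddress.ip_network(cidr), hop) for cidr, hop in ROUTES]
    best = max((c for c in candidates if ip in c[0]), key=lambda c: c[0].prefixlen)
    return best[1]

print(next_hop("192.168.10.5"))  # VirtualAppliance
print(next_hop("8.8.8.8"))       # Internet
```

If the first route were missing from the table, private-network traffic would silently fall through to the Internet route, which is why these routes must be created and maintained.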

Frontend Subnet: Acts as the perimeter subnet for the Azure VNet. Having this subnet contain only the virtual appliance makes routing simpler.

Barracuda F-Series: This is the software providing the Virtual Appliance. It is a fully featured Firewall and as such requires some investment to understand it properly. Not only do you need to know how to operate it, you also need to understand how to secure it properly.

In Azure, this is a preconfigured VM which is provisioned with a Network Security Group and a Public IP.

It must be licensed and you can operate it on a Pay As You Go basis or you can pre-purchase a license. By default, the Virtual Appliance is a single point of failure. If the firewall were to go down in production, at best all connectivity to the private network would be lost and at worst all Internet connectivity from your Web App would be lost (depending on how the Azure VNET and Point to Site VPN are configured).

Test Endpoint: The test endpoint only needs to be Internet accessible for this configuration to work. In real world scenarios, the public endpoint is likely to be exposed through some sort of perimeter network and your originating IP address will need to be whitelisted before access can be established.
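The whitelisting described above amounts to a simple source-address check at the perimeter. A sketch, using a hypothetical appliance IP drawn from the 203.0.113.0/24 documentation range:

```python
import ipaddress

# Hypothetical public IP of the Virtual Appliance; in practice this is
# the outbound IP the appliance presents to the Internet.
ALLOWED = ipaddress.ip_network("203.0.113.10/32")

def allow(source_ip):
    """Admit traffic only if it originates from the whitelisted appliance IP."""
    return ipaddress.ip_address(source_ip) in ALLOWED

print(allow("203.0.113.10"))  # True: traffic via the appliance
print(allow("198.51.100.7"))  # False: anything else is rejected
```

Because all of the Web App’s outbound traffic exits through the appliance’s single public IP, the on premise side only needs to trust that one address.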

Connecting Web Apps to External Services – Site to Site VPN


Last time I set the scene for a common scenario when using Web Apps hosted on Azure App Services. How do I connect to services hosted on a private network? This time I’ll walk through the first potential solution option.


Code hosted in an App Service needs access to a web service endpoint hosted in an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Site to Site VPN

Build a private network segment in Azure as an Azure VNET. Connect the App Service to the private Azure VNET using a Point to Site VPN. This acts as a private connection between your application hosted in the Azure multi tenanted App Service infrastructure, allowing it to access resources routable via the Azure VNET. Resources on the VNET are not able to access the Application. The on premise network is connected to the Azure VNET via a Site to Site VPN. This effectively extends the on premise network to the cloud allowing bi-directional communication between resources hosted on premise and those hosted in Azure via private network addressing.


Network configuration is required within the on premise network to enable the VPN connection to function. This includes setup of either VPN software or an appliance and configuring network routing to ensure that traffic destined for Azure is routed through the VPN.

The network in Azure must be designed with the on premise network in mind. As a minimum, you need to understand the on premise network design enough to avoid address conflicts when creating the Azure VNET. More likely, any design principles in play on the on premise network are likely to extend to the cloud hosted network.

What this means in practice is that there needs to be collaboration and coordination between the people managing the on premise network and yourself. Depending on the situation this may not be desirable or even possible.


The following diagram explains the configuration we are trying to achieve.


The main components are:

Azure App Services: When setting up the point to site VPN you must define a network range. This is a range of addresses from which Azure will select the outbound IP addresses that the App Service hosted application presents onto the Azure VNET. Whilst you might assume that this is the IP address of the server hosting your application, it is not quite that straightforward as Azure is working under the covers to make this all work. However, you can assume that traffic from your application will always originate from this range of addresses, so if you make it sufficiently small it is suitable for whitelisting in firewalls, etc. without compromising security.

Azure VNET: Represents your virtual networking space in Azure. You define an address space in which all of your subnets and resources will live.

GatewaySubnet: This is created automatically when you create the VPN gateway in Azure. From experience, it is better to leave it alone. If you add a virtual machine or other networkable devices into this network, routing becomes more of a challenge. Consider this subnet to be the place where external traffic enters and leaves the Azure VNET. The gateway subnet exists inside your Azure VNET so its address range must exist entirely within the Azure VNET address space.

Backend Subnet: This is an optional subnet. Its primary purpose in this walkthrough is testing. It is relatively simple to add a VM to the subnet so you can test whether traffic is propagating correctly. For instance, you can test that a Point to Site VPN is working if an App Service application can hit an endpoint exposed on the VM. Additionally, you can test that your Site to Site VPN is working if a VM on this subnet can connect to an endpoint on a machine on your on premise network via its private IP address. The subnet must have an address range within that of the Azure VNET and must not clash with any other subnet. In practice, this subnet can be the location for any Azure resource that needs to be network connected. For example, if you wanted to use Service Fabric, a VM Scale Set is required. That scale set could be connected to the backend subnet, which means it is accessible to applications hosted as App Services. In this configuration, it has a two-way connection into the on premise network but a one-way connection from Azure App Service to resources on the backend subnet.
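The two subnet rules just described, that each subnet sits inside the VNET address space and that subnets do not clash with each other, can be validated with a short helper. The ranges below are hypothetical examples:

```python
import ipaddress

def validate_subnets(vnet_cidr, subnet_cidrs):
    """Raise ValueError if any subnet falls outside the VNET address
    space or if any two subnets overlap."""
    vnet = ipaddress.ip_network(vnet_cidr)
    subnets = [ipaddress.ip_network(c) for c in subnet_cidrs]
    for s in subnets:
        if not s.subnet_of(vnet):
            raise ValueError(f"{s} is outside the VNET {vnet}")
    for i, a in enumerate(subnets):
        for b in subnets[i + 1:]:
            if a.overlaps(b):
                raise ValueError(f"{a} clashes with {b}")

# Hypothetical layout: a gateway subnet plus a backend subnet in a /16 VNET.
validate_subnets("10.0.0.0/16", ["10.0.255.0/27", "10.0.1.0/24"])
print("subnet layout OK")
```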

On Premise: This represents your internal network. For demonstration purposes, you should try to build something isolated from your primary Azure subscription. This builds confidence that you have everything configured correctly and you understand why things are working rather than it being a case of Azure “magic”. You could set this up in something completely different from Azure such as in Amazon Web Services and in a later post I’ll walk through how to do that. However, if you are using Azure ensure that your representation of your on premise network is isolated from the Azure resources you are testing. The IP address space of the on premise network and the Azure VNET must not overlap.

Connecting Web Apps to external services – Overview


When starting out building websites with Azure, it is likely that you’ll start by deploying a Web App to Azure App Services. This is a great way to get going but, as with all things, as you invest more time and your solution grows up you might experience some growing pains.

All credit to Microsoft. You can build a sophisticated solution with Azure Web Apps and you can have it connect to an Azure SQL instance without really thinking about the underlying infrastructure. It just works.

The growing pains start when you want to connect this Web App to something else. Perhaps you need to connect to some other service hosted in Azure. Or perhaps you need to leverage a third-party system that does not allow connections over the Internet.

Some development focused organisations stumble here. They have chosen Azure for its PaaS capability so they don’t have to think about infrastructure. They can code, then click deploy in Visual Studio – job done. Unfortunately for them, breaking out of this closed world requires some different skills – some understanding of basic networking is required.

Getting through this journey is not hard but it requires breaking the problem down into more manageable pieces. Once these basics are understood they become a foundation for more sophisticated solutions. Over the next few posts I’m going to go through some of these foundation elements that allow you to break out of a Web App running on Azure App Services (or any other type of App Service), first to leverage other resources running in your Azure subscription, such as databases or web services running on VMs, and then out into services running in other infrastructure, whether cloud hosted or on private infrastructure.

Over this series of posts I’ll be addressing the following scenario.

The code running in a Web App hosted on Azure App Services needs to call a Web Service endpoint hosted in a private network behind a firewall. The organisation says that they’ll only open the firewall to enable access to IP addresses that you own.

This discounts opening the firewall for the range of outbound IP addresses exposed by Azure App Services as there is no guarantee that you have exclusive use of them.

So the approach will be to build a network in Azure to which the Web App can connect. Then connect the Azure network to the private network by way of a private connection or by way of a connection over the Internet where traffic is routed through a network appliance whose outbound IP is one controlled by you.



Backlog Black Hole – Agile Anti Patterns


If I create a story and add it to the backlog, it will be lost forever and will never get done

A backlog is a prioritised list of work that needs to be done. The important stuff is at the top and the least important stuff at the bottom. If you find that work is “disappearing” in your backlog, what could be the cause?

  1. The backlog is not being maintained. The backlog is a living thing and as such needs feeding and watering. By that I mean it needs near constant refinement. As new work is discovered it gets added to the backlog. But what is happening to the existing stuff? All stories need to be reviewed, not just the new ones. They need updating based on current knowledge. That might mean that a story is no longer required and should be removed. Some people go so far as purging stories that have not been delivered based on their age. The thinking is that if a story gets to be 3 to 6 months old without being delivered then the chances are it will not be delivered in its current form at all.
  2. The newest work is the highest priority. Just because you have thought of the next killer feature it doesn’t automatically mean delivering that work is the highest priority. It should be assessed against all the work in the backlog. If new work is always added to the top this starts to push older work down, often meaning the team never get a chance to work on it.
  3. The work is not well defined. In order for someone to understand the work involved in a story it must be clear. If you are going to the trouble of adding work to the backlog that you think needs to be done, you should also put in some effort to describe it. I’m not saying that you need to write “War and Peace” but you do need to represent the work to the Product Owner in ceremonies such as backlog refinement. In some circumstances, there are benefits to be found by having a triage process for new work. This provides a chance for the work to be reviewed by the necessary parties to ensure that it is understood, prioritised and actually needed.
  4. You don’t have the right tools. A small team might get away with managing their backlog with sticky notes on a board. Large teams may need some tooling. Tooling can be an inhibitor as well as an enabler. So perhaps a tool has been implemented that is hard to use or that requires the team to be trained on it. This might make it hard to find stories when you need them. Often it is possible to configure tools to provide reports of stories added in the last week, or to enable integration with messaging tools such as Slack so you have a constant stream of messages indicating new work entering the backlog.

Up to now this discussion has focused on the negative position that it is a bad thing that work is “being lost” in the backlog. However, when you think about it this may be a sign that you are doing the right thing. The work coming in may be aspirational or simply a wish list which is not what your customers really need. If you have an effective feedback loop you’ll be reacting to your customer’s needs rather than focusing on the things that they don’t care about.

Therefore, if you are the one coming up with the ideas that are not making it into the system you need to understand why. You can’t be precious about the work because it was “your idea”. This is looking at the product you’re building from a personal point of view and not considering how the product is used in reality. Perhaps you don’t understand the product as well as you think you do.

Finally, it is worth making a point around continuous technical improvement. My point of view is that for a product to be successful over a long period of time the technology it is built with needs to continuously evolve. Whether you call this technical debt or something else the point is that there will always be technical work that needs to be done that may not have direct value for the customer. The value is actually to your business as you’ll be able to continue to serve your customers in the future.

How you deal with this depends on the organisation. Often people implement a capacity tax that says a given percentage of the team’s capacity goes towards technical improvement. This way the team are not asking for permission to improve things, but there is still a need to document and prioritise the technical work that needs to be done. This is still a backlog. In other situations, where the product owner is technically savvy and understands the relative value between delivering new features and technical improvement, technical stories can be treated as any other work in the backlog.

Whichever way you look at this, it boils down to the fact that there is a pile of work that needs to be done. The work needs to be prioritised, and each work item will have a different potential value to your customer and your business. And there needs to be a way to make this work visible and transparent in an efficient manner.