Connecting Web Apps to External Services – Virtual Appliance

Last time I proposed a solution which enables Web Apps hosted in Azure App Services to communicate with services running in a private network. That solution required the configuration of a Site to Site VPN, which requires network configuration both in Azure and on the private network. Sometimes that is not possible, so alternative options should be considered. Below I restate the problem we are trying to solve and then describe an alternative approach.

Problem:

Code hosted in an App Service needs access to a web service endpoint hosted in an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Deploy Virtual Appliance

In this scenario, there is no private connection between the Web App and the private network. Instead, all traffic destined for services on the private network is routed onto an Azure VNET and subsequently through a Virtual Appliance. The Virtual Appliance acts as a Firewall and an Outbound Proxy, forwarding traffic over the Internet to the external system endpoints. The public IP of the appliance would need to be whitelisted on those systems.

Challenges

The software that runs on a Virtual Appliance is “Enterprise Class” which means it can be difficult to understand and configure correctly. It may also require additional effort to support over the course of the solution’s lifetime.

Context

The following diagram explains the configuration we are trying to achieve. This time there is no need to understand the private network topology. The only things that need to be shared are the IP address of the outbound traffic originating from the Virtual Appliance and the public IP address of the service we are connecting to.

[Diagram: Virtual Appliance configuration (IP addresses omitted)]

The following list highlights the differences from the Site to Site VPN solution.

Azure VNet: The main difference with the VNET is that User Defined Routes, via a route table, must be created and maintained in order to ensure the correct traffic is routed through the Virtual Appliance (see the sketch after this list).

Frontend Subnet: Acts as the perimeter subnet for the Azure VNet. Having this subnet contain only the virtual appliance makes routing simpler.

Barracuda F-Series: This is the software providing the Virtual Appliance. It is a fully featured Firewall and as such requires some investment to understand. Not only do you need to know how to operate it, you also need to understand how to secure it properly.

In Azure, this is a preconfigured VM which is provisioned with a Network Security Group and a Public IP.

It must be licensed; you can operate it on a Pay As You Go basis or pre-purchase a license. By default, the Virtual Appliance is a single point of failure. If the firewall were to go down in production then, at best, all connectivity to the private network would be lost and, at worst, all Internet connectivity from your Web App would be lost (depending on how the Azure VNET and Point to Site VPN are configured).

Test Endpoint: The test endpoint only needs to be Internet accessible for this configuration to work. In real world scenarios, the public endpoint is likely to be exposed through some sort of perimeter network and your originating IP address will need to be whitelisted before access can be established.
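
To make the User Defined Routes concrete, here is a minimal sketch using the AzureRM PowerShell module. The resource group, VNET and subnet names, the address prefixes and the appliance’s internal IP (10.1.2.4) are illustrative assumptions, not values from this walkthrough.

# Route traffic bound for the external service's public range through the appliance.
$route = New-AzureRmRouteConfig -Name "ToExternalService" `
    -AddressPrefix "203.0.113.0/24" `
    -NextHopType VirtualAppliance `
    -NextHopIpAddress "10.1.2.4"

$routeTable = New-AzureRmRouteTable -ResourceGroupName "MyResourceGroup" `
    -Location "UK South" -Name "BackendRoutes" -Route $route

# Attach the route table to the subnet whose outbound traffic must pass the appliance.
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyVNet"
Set-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "Backend" `
    -AddressPrefix "10.1.3.0/24" -RouteTable $routeTable | Set-AzureRmVirtualNetwork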

Connecting Web Apps to External Services – Site to Site VPN

Last time I set the scene for a common scenario when using Web Apps hosted on Azure App Services. How do I connect to services hosted on a private network? This time I’ll walk through the first potential solution option.

Problem:

Code hosted in an App Service needs access to a web service endpoint hosted in an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Site to Site VPN

Build a private network segment in Azure as an Azure VNET. Connect the App Service to the private Azure VNET using a Point to Site VPN. This acts as a private connection from your application, hosted in the multi tenanted App Service infrastructure, into the Azure VNET, allowing it to access resources routable via the VNET. Resources on the VNET are not able to access the application. The on premise network is connected to the Azure VNET via a Site to Site VPN. This effectively extends the on premise network into the cloud, allowing bi-directional communication between resources hosted on premise and those hosted in Azure via private network addressing.
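
For orientation, the following is a hedged sketch of the Azure side of this configuration using the AzureRM PowerShell module. All names, address ranges, the on premise device’s public IP and the shared key are illustrative assumptions; the Point to Site configuration and the on premise device setup are additional steps not shown here.

# Assumes the VNET already has its GatewaySubnet (see the GatewaySubnet note below).
$vnet = Get-AzureRmVirtualNetwork -ResourceGroupName "MyResourceGroup" -Name "MyVNet"
$gwSubnet = Get-AzureRmVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "GatewaySubnet"

$gwPip = New-AzureRmPublicIpAddress -Name "MyVpnGatewayIp" -ResourceGroupName "MyResourceGroup" `
    -Location "UK South" -AllocationMethod Dynamic
$ipConfig = New-AzureRmVirtualNetworkGatewayIpConfig -Name "gwIpConfig" `
    -SubnetId $gwSubnet.Id -PublicIpAddressId $gwPip.Id

# The VPN gateway; RouteBased is required if Point to Site will share the gateway.
$gateway = New-AzureRmVirtualNetworkGateway -Name "MyVpnGateway" -ResourceGroupName "MyResourceGroup" `
    -Location "UK South" -IpConfigurations $ipConfig -GatewayType Vpn -VpnType RouteBased

# Represents the on premise side: its VPN device's public IP and address space.
$onPrem = New-AzureRmLocalNetworkGateway -Name "OnPremNetwork" -ResourceGroupName "MyResourceGroup" `
    -Location "UK South" -GatewayIpAddress "198.51.100.10" -AddressPrefix "192.168.0.0/16"

New-AzureRmVirtualNetworkGatewayConnection -Name "SiteToSite" -ResourceGroupName "MyResourceGroup" `
    -Location "UK South" -VirtualNetworkGateway1 $gateway -LocalNetworkGateway2 $onPrem `
    -ConnectionType IPsec -SharedKey "a-strong-shared-key"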

Challenges

Network configuration is required within the on premise network to enable the VPN connection to function. This includes setting up either VPN software or an appliance, and configuring network routing to ensure that traffic destined for Azure is routed through the VPN.

The network in Azure must be designed with the on premise network in mind. As a minimum, you need to understand the on premise network design well enough to avoid address conflicts when creating the Azure VNET. In practice, any design principles in play on the on premise network are likely to extend to the cloud hosted network.

What this means in practice is that there needs to be collaboration and coordination between you and the people managing the on premise network. Depending on the situation, this may not be desirable or even possible.

Context

The following diagram explains the configuration we are trying to achieve.

[Diagram: Site to Site VPN configuration (IP addresses omitted)]

The main components are:

Azure App Services: When setting up the Point to Site VPN you must define a network range. This is a range of addresses from which Azure will select the outbound IP addresses that the App Service hosted application presents into the Azure VNET. Whilst you might assume that this is the IP address of the server hosting your application, it is not quite that straightforward, as Azure is working under the covers to make this all work. However, you can assume that traffic from your application will always originate from this range of addresses, so if you make it sufficiently small it is suitable for whitelisting in firewalls, etc. without compromising security.

Azure VNET: Represents your virtual networking space in Azure. You define an address space in which all of your subnets and resources will live (see the sketch after this list).

GatewaySubnet: This is created automatically when you create the VPN gateway in Azure. From experience, it is better to leave it alone. If you add a virtual machine or other networkable devices into this subnet, routing becomes more of a challenge. Consider this subnet to be the place where external traffic enters and leaves the Azure VNET. The gateway subnet exists inside your Azure VNET, so its address range must sit entirely within the Azure VNET address space.

Backend Subnet: This is an optional subnet. Its primary purpose in this walkthrough is testing. It is relatively simple to add a VM to the subnet so you can test whether traffic is propagating correctly. For instance, you can confirm that the Point to Site VPN is working if an App Service application can hit an endpoint exposed on the VM. Additionally, you can confirm that your Site to Site VPN is working if a VM on this subnet can connect to an endpoint on a machine on your on premise network via its private IP address. The subnet must have an address range within that of the Azure VNET and must not clash with any other subnet. In practice, this subnet can be the location for any Azure resource that needs to be network connected. For example, if you wanted to use Service Fabric, a VM Scale Set is required. That scale set could be connected to the backend subnet, which means it is accessible to applications hosted as App Services. In this configuration, it has a two-way connection into the on premise network but a one-way connection from Azure App Service to resources on the backend subnet.

On Premise: This represents your internal network. For demonstration purposes, you should try to build something isolated from your primary Azure subscription. This builds confidence that you have everything configured correctly and you understand why things are working rather than it being a case of Azure “magic”. You could set this up in something completely different from Azure, such as Amazon Web Services, and in a later post I’ll walk through how to do that. However, if you are using Azure, ensure that your representation of your on premise network is isolated from the Azure resources you are testing. The IP address space of the on premise network and the Azure VNET must not overlap.
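
To ground the components above, here is a minimal sketch of creating the Azure VNET and Backend Subnet with the AzureRM PowerShell module; the names and address ranges are illustrative assumptions, chosen so the VNET space does not overlap the on premise network.

# The VNET address space must not overlap the on premise range (e.g. 192.168.0.0/16).
$backendSubnet = New-AzureRmVirtualNetworkSubnetConfig -Name "Backend" -AddressPrefix "10.1.3.0/24"
New-AzureRmVirtualNetwork -Name "MyVNet" -ResourceGroupName "MyResourceGroup" `
    -Location "UK South" -AddressPrefix "10.1.0.0/16" -Subnet $backendSubnet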

Connecting Web Apps to external services – Overview

When starting out building websites with Azure, it is likely that you’ll start by deploying a Web App to Azure App Services. This is a great way to get going but, as with all things, as you invest more and more time and your solution grows up, you might experience some growing pains.

All credit to Microsoft. You can build a sophisticated solution with Azure Web Apps and you can have it connect to an Azure SQL instance without really thinking about the underlying infrastructure. It just works.

The growing pains start when you want to connect this Web App to something else. Perhaps you need to connect to some other service hosted in Azure. Or perhaps you need to leverage a third-party system that does not allow open connections over the Internet.

Some development focused organisations stumble here. They have chosen Azure for its PaaS capability so that they don’t have to think about infrastructure. They can code, then click deploy in Visual Studio – job done. Unfortunately, breaking out of this closed world requires some different skills – some understanding of basic networking is required.

Getting through this journey is not hard but it requires breaking the problem down into more manageable pieces. Once these basics are understood they become a foundation for more sophisticated solutions. Over the next few posts I’m going to go through some of these foundation elements that allow you to break out of a Web App running on Azure App Services (or any other type of App Service): first to leverage other resources running in your Azure subscription, such as databases or web services running on VMs, and then out into services running on other infrastructure, whether cloud hosted or private.

Over this series of posts I’ll be addressing the following scenario.

The code running in a Web App hosted on Azure App Services needs to call a Web Service endpoint hosted in a private network behind a firewall. The organisation says that they’ll only open the firewall to enable access to IP addresses that you own.

This discounts opening the firewall for the range of outbound IP addresses exposed by Azure App Services, as there is no guarantee that you have exclusive use of them.

So the approach will be to build a network in Azure to which the Web App can connect. Then connect the Azure network to the private network, either by way of a private connection or by way of a connection over the Internet, where traffic is routed through a network appliance whose outbound IP is one controlled by you.

Painting the Forth Bridge

They say that it takes so long to paint the Forth Bridge in Edinburgh that, by the time the painting team have worked their way across the bridge, the paint at the start needs renewing, so they have to start again. This is, of course, a myth, but if it were true the workers painting the bridge would have a job for life.

Sometimes software projects are like this. They are in a state of perpetual rewrite. The rewrite may be needed because the wrong JavaScript framework was selected at the start, so the team are moving to framework N, which, when completed, will solve all problems. Or the application is considered a monolith, so the team are “doing the right thing” by breaking the solution up into Microservices. The rewrite is done with the best intentions but the outcome is often the same. The rewrite takes so long that the IT world has moved on, and now the goal the team is working towards is old fashioned and out of date. Fresh thinking is required, which triggers the next big rewrite, and so the cycle continues, much like painting the proverbial bridge.

As professional techies, developers like solving the hard problems. They like using new technologies and the latest frameworks. However, it is a fact of life that most development work isn’t sexy or glamorous. Often developers spend a lot of time grinding out “business logic” or fixing bugs. The work can become repetitive and boring. There is often a tension between the motivation to keep software development simple and predictable through standardisation and the desire of the technical team to keep their skills fresh on the job.

For freelancers or developers working for software consultancies, getting stuck in one technical stack for a single project is not a problem. The next one is never too far away and it is likely to be very different. Change doesn’t come so frequently for developers working in software houses. Typically they will be working longer term on a smaller portfolio of projects and products. For software houses the economics are straightforward: ship more products – make more money! Investing time in rewrites is a big challenge. Redirecting effort into large scale technical changes means they are not fixing bugs, nor are they delivering as many features.

But if a product isn’t changing in pace with the technology landscape it is in danger of stagnating and becoming irrelevant. The software used to build it becomes out of date. The development team start to feel deskilled and may start to leave the business, taking critical knowledge with them. It becomes harder to replace them as your technology stack is no longer attractive to the job market. Before you know it, all the innovation that took you from a start up to a mature software house in the first place has leaked away.

As with everything there is a balance to be found. The development team need to be able to stay current but the organisation still needs to pay the bills. Here are a few things to look at to ensure that this balance is maintained.

Be aware of technical debt and pay it down frequently. This is simple really. The best way to avoid big changes in the first place is to fix problems soon after they occur. If they are left to mount up over time they become much harder to fix. Therefore, ensure that the team have the opportunity to fix things as part of the development process.

Ensure that the business value of large technical changes is understood. All work the development team does should have a business value, so ensure that this is understood when it comes to technical changes. There are often valid business reasons for changing from framework X to framework Y, but they are often hard to articulate. There is a temptation to avoid identifying the business value because it is hard to do, and instead the change is delivered as a side project or, worse, as someone’s pet project. Avoid this temptation: the term “side project” implies a lower priority, so it is likely to be pushed to the side when your important customers are hammering down your door asking for the next great feature. Technical changes and evolving architecture are just as important as new features, so all the work should be in the same pot. The Product Owner must be given the hard problem of deciding whether to improve the system itself or deliver new features.

Ensure that large technical changes are delivered as a series of steps as part of a roadmap. Agile development is based on short feedback loops. This is no different when it comes to technical changes. Therefore, a big change should be broken into a roadmap. At the end of the roadmap is a goal and a vision, and at the beginning are the next few steps to get there. The idea is that you don’t create a detailed plan. You might only define the next few steps. This approach also allows the goals to change with little impact. It should be easy to get started as there is no long planning exercise, which also means there is no temptation to follow through on a now invalid plan simply because too much cost has been sunk into the planning exercise.

Speeding up Azure Cloud Service deployment in Octopus Deploy

This post is a brain dump of something I discovered working with Azure Cloud Services, specifically when deploying them to Azure with Octopus Deploy.

In the beginning.

Cloud Service deployments have been designed by Microsoft to provide a seamless upgrade experience. If your cloud service infrastructure comprises multiple cloud service instances, then the fabric controller in Azure, which controls deployments, will perform a rolling upgrade. The underlying instances are gradually upgraded until all are done. When you are deploying to a production slot this all makes sense. You want to avoid downtime and minimise impact for your customers. However, this convenience comes at a cost – time. If you have a large number of Cloud Service instances and web roles, the process seems to take forever. That is the last thing you want if you are watching over a Live release in an evening.

What other choices are there?

In some deployment scenarios you might deploy to the staging slot of the Cloud Service, do all of your testing and then perform a slot swap to get that version into Live. In this case you don’t want to incur the cost of a rolling upgrade, as customers don’t use the staging slot.

The Cloud Service upgrade documentation mentions a deployment mode called Simultaneous. Unfortunately there is not a lot of documentation describing what it does. This Stack Overflow question highlights that simultaneous mode is referred to as BlastUpgrade in the topologyChangeDiscovery attribute in the Cloud Service’s Service Definition file. What I determined by experimenting is that in this mode the fabric controller ignores all upgrade domains, meaning all instances are upgraded at once. This was a lot quicker and exactly what I wanted when deploying to Staging slots.

So the obvious answer would be to update the service definition file? Wrong! This didn’t work with Octopus Deploy, so I was faced with looking for other options. This led to an investigation of how Octopus Deploy actually deploys Cloud Services. I found that it uses the following script by default.

# Upgrades the deployment if one already exists in the target slot;
# otherwise creates a new deployment.
function CreateOrUpdate()
{
    $deployment = Get-AzureDeployment -ServiceName $OctopusAzureServiceName -Slot $OctopusAzureSlot -ErrorVariable a -ErrorAction silentlycontinue
    if (($a[0] -ne $null) -or ($deployment.Name -eq $null))
    {
        CreateNewDeployment
        return
    }
    UpdateDeployment
}

function UpdateDeployment()
{
    Write-Verbose "A deployment already exists in $OctopusAzureServiceName for slot $OctopusAzureSlot. Upgrading deployment..."
    Set-AzureDeployment -Upgrade -ServiceName $OctopusAzureServiceName -Package $OctopusAzurePackageUri -Configuration $OctopusAzureConfigurationFile -Slot $OctopusAzureSlot -Mode Auto -label $OctopusAzureDeploymentLabel -Force
}

function CreateNewDeployment()
{

    Write-Verbose "Creating a new deployment..."
    New-AzureDeployment -Slot $OctopusAzureSlot -Package $OctopusAzurePackageUri -Configuration $OctopusAzureConfigurationFile -label $OctopusAzureDeploymentLabel -ServiceName $OctopusAzureServiceName
}

# Note: this simply queries the deployment for its ID. It returns as soon as the
# management API responds, not when every instance has finished upgrading
# (see the warning at the end of this post).
function WaitForComplete()
{
    $completeDeployment = Get-AzureDeployment -ServiceName $OctopusAzureServiceName -Slot $OctopusAzureSlot
    $completeDeploymentID = $completeDeployment.DeploymentId
    Write-Host "Deployment complete; Deployment ID: $completeDeploymentID"
}

CreateOrUpdate
WaitForComplete

You’ll notice calls to Set-AzureDeployment where the Mode parameter is set to Auto. However, the documentation for this PowerShell cmdlet states that the optional Mode argument can be set to Simultaneous. How do you get Octopus to do something different? Luckily, if you drop a PowerShell script called DeployToAzure.ps1 into the root of the package you are deploying, Octopus will use your script rather than its own. Therefore you can adjust the script to look like this.

if ($UseSimultaneousUpgradeMode -eq "True")
{
    Write-Verbose "Using Simultaneous Upgrade Mode"
    Set-AzureDeployment -Upgrade -ServiceName $OctopusAzureServiceName -Package $OctopusAzurePackageUri -Configuration $OctopusAzureConfigurationFile -Slot $OctopusAzureSlot -Mode Simultaneous -label $OctopusAzureDeploymentLabel -Force
}
else
{
    Set-AzureDeployment -Upgrade -ServiceName $OctopusAzureServiceName -Package $OctopusAzurePackageUri -Configuration $OctopusAzureConfigurationFile -Slot $OctopusAzureSlot -Mode Auto -label $OctopusAzureDeploymentLabel -Force
}

Where $UseSimultaneousUpgradeMode is an Octopus variable that can be used to control which mode is used for a given deployment.

One word of warning. You see the function in the script called WaitForComplete()? This is used by Octopus to determine when the release is complete. It works by querying the relevant Azure deployment. I have found that this reports back as complete before the Cloud Service instances have finished upgrading. And if you were to swap from Staging to Production whilst they were still upgrading… oops, you have a temporary outage. So if you are doing this, remember to check the status of the staging deployment before swapping.
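
If you need the deployment step to genuinely wait, one option is to poll the role instances yourself. The following is a hedged sketch, not part of the default Octopus script: it assumes the classic Azure PowerShell module and the same Octopus variables used above, and loops until every instance reports ReadyRole.

do
{
    Start-Sleep -Seconds 30
    $deployment = Get-AzureDeployment -ServiceName $OctopusAzureServiceName -Slot $OctopusAzureSlot
    # RoleInstanceList reports the status of each instance; anything other than
    # ReadyRole means the upgrade is still in progress.
    $pending = @($deployment.RoleInstanceList | Where-Object { $_.InstanceStatus -ne "ReadyRole" })
    Write-Host "Instances still upgrading: $($pending.Count)"
} while ($pending.Count -gt 0)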

Never ending stories – Agile Anti Patterns

The NeverEnding Story was a 1984 fantasy film about a boy who reads a magical book that tells the story of a young warrior whose task is to stop a dark storm called the Nothing from engulfing a fantasy world. Apparently it was quite good, but all I can remember about it is the large white flying dog-like creature.

The never-ending stories I am more familiar with are those in Agile projects that start out as something assumed to be quite straightforward but then generate much more work than was expected and, before you know it, not only are you carrying the story forwards into the next sprint but it starts recurring across multiple sprints. So, what are the characteristics of a never-ending story?

A story really represents a feature and as such becomes a bucket for all areas of functionality related to that feature

Agile is about reducing feedback loops in order to build confidence that you are always working on the most important thing at any given moment. Often never-ending stories internalise that feedback loop. Maybe the story represents a new area of a solution. You don’t want to do too much up front design or suffer from analysis paralysis, so you have a story that enables you to try a few things out. This is reviewed with the Product Owner and the rest of the team, which generates new ideas – and new work.

There is nothing wrong with this description so far; however, a never-ending story emerges when that new work is added to the original story. Often this is coupled with a feeling that no one really knows what good looks like. The team starts to feel like they are working towards a moving target. It will only be a matter of time before the “When will you be done?” questions start. Think for a minute about how it is possible to formulate an answer. Every time the team asks for feedback, more work is generated and so the goal has changed. Yet they are asked to say when work, which they may not even know about yet, will be complete.

Work is blocked by external dependencies

Sometimes someone outside the team has a stake in when a story is complete. It may be that you are integrating with a third party and they have onboarding activities. Perhaps there is an external customer that has the final say on whether the work has been delivered to the necessary standard. The important thing to realise is that you cannot control this decision making and it is highly likely that your timescales will not align. The best thing to do is to accept it and then put in controls to minimise the impact on you.

There are a few things you can do to take control.

  1. Understand the requirements of the third parties upfront and ensure that these are factored in when creating the stories in the first place. This may feel like up front design, but you are doing just enough to mitigate the risk of a delay in the future.
  2. Don’t “leave the bonnet open” on work whilst it is outside your control. Treat feedback, whether good or bad, as new work coming into the backlog and deal with it on a priority basis.
  3. Minimise external dependencies by only having them when you absolutely can’t avoid them or where they add value to solutions you deliver.

Quality defects stopping work being completed

In this situation, the work has been done and the team’s tester is working on it, only to find a quality issue. The story goes back to the developer to be fixed. Subsequently the tester picks up the work, only to find more issues, and repeat. When this has been happening for some time, e.g. it is a recurring theme in multiple stand ups, the team needs to try to understand why things are bouncing around between two team members. Are the developer and tester working together on the story or is the developer “throwing the work over the wall”? Are the defects being raised related to the changes being made under the story in question or are they coincidental? Are the developer and tester on the same page as to the quality metrics for the story?

So what?…

Never-ending stories are bad because they harm your team’s predictability. They are a black hole: they consume effort and people. No one in your team is really sure when they will be complete. If you are working on one it can be very demoralising. Personally, I am motivated by finishing things, but never-ending stories can feel like a ball and chain, never allowing you to finish and never enabling you to move on to other things.

When you look at the examples I have given, there are a couple of ways to avoid never-ending stories.

Firstly, ensure that you are aware of the work that is being created. Whilst it is normal to create some new work when delivering stories, you need to decide the point at which to call it out, surfacing it as new work in the backlog and letting the Product Owner prioritise it rather than absorbing it. If you are not doing this during a sprint, then the point at which the story is carried over at a sprint boundary is a good time to ask whether it should be broken down.

Secondly, many of the problems with defining the scope and boundary of a story can be resolved by investing time in defining acceptance criteria at the start. You may use a Story Kick-Off for this. The acceptance criteria describe how the story should function, define the quality criteria for the story and capture the expectations of third parties. And let’s not forget that we should be aiming to avoid large stories, breaking them down along their natural seams in order to keep your team predictable and high performing.

Azure, Cloud Service & Reserved IPs

Azure is pretty good at getting you up and running quickly. You can get from nothing to a solution in production very quickly. Whilst this approach definitely reduces time to market, it can introduce growing pains along the way. Let’s consider Cloud Services as a specific example of how growing pains might manifest themselves.

When you create a Cloud Service you get two IP addresses, one for each slot, Staging and Production. These are allocated from a huge range Azure manages for each region and you have no guarantee of which IP you’ll get. When you were setting up your Cloud Service you probably didn’t worry about that. As time passed and the solution matured, you may have used those IP addresses to create firewall rules to your databases in Azure, and perhaps even given them to third parties in order for them to be whitelisted to allow your application to access another service.

The specific IP addresses that were allocated at random by Azure are now critical to the success of your solution. And guess what, those IPs are not as permanent as you might think. If for whatever reason your Cloud Service is deallocated, the IP addresses will be lost. When you recreate the Cloud Service it will be allocated a new IP address. All of your firewall rules now don’t work. That might not be a major problem for your own rules, which you can hopefully change rapidly, but it might be a problem if you are working with a supplier that has a two week turnaround SLA for “minor changes”.
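
As a quick check, the classic PowerShell module will show you which IPs a Cloud Service currently holds; the deployment object returned by Get-AzureDeployment exposes them (the service name here is a placeholder).

# Lists the current public VIP(s) of the Production slot.
(Get-AzureDeployment -ServiceName "MyBrilliantService" -Slot Production).VirtualIPs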

This is where Reserved IPs come in. They are a means to control the lifespan of an IP address by effectively taking ownership of it in your Azure subscription. Azure will never reclaim an IP address whilst it is reserved in your subscription. The following PowerShell command creates a new reserved IP address.

New-AzureReservedIP -ReservedIPName very-important-ip -Location "UK South"

And this command associates the IP address with an existing Cloud Service.

Set-AzureReservedIPAssociation -ReservedIPName very-important-ip -ServiceName MyBrilliantService

However, you might already have a Cloud Service whose IP address has become important, and changing it would cause unacceptable problems. Luckily, it is possible to create a reserved IP from an existing cloud service. The following PowerShell command creates the reserved IP using the IP address of the Staging slot of a Cloud Service and also creates the association between the reserved IP and the Cloud Service.

New-AzureReservedIP -ReservedIPName very-important-ip -Location "UK South" -ServiceName MyBrilliantService -Slot Staging

A few notes about these commands:

  1. The -Location is the region in which the reserved IP will be created
  2. The -Slot is an optional argument on these commands. It lets you target either the Staging or Production Cloud Service deployment slots. Production is the default.
  3. Reserved IPs are a “classic” Azure feature. As such, resource groups are a meaningless concept for them. You’ll see all your reserved IPs deployed to a Default resource group.
  4. The first five reserved IPs are free, but be aware that holding more than this is not. You are charged based on the time you hold on to an IP address, which is, in effect, to dissuade you from holding on to a large number of publicly routable IPv4 addresses, which are increasingly becoming a limited resource. https://azure.microsoft.com/en-us/pricing/details/ip-addresses/

Let’s talk about deallocation. Over the lifetime of your solution your architecture will evolve. You might need to move an IP address from one resource to another. You may want to release IP addresses that you no longer use. Once you have an IP address reserved, you own it until the reserved IP resource is deleted. The only way it can be used is by creating an association. To use it elsewhere it must be deallocated from the original resource and associated with the new resource. It is important to note that during this process the original resource will receive a new IP address from Azure’s pool.
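
As a hedged sketch of that move using the classic module (both service names are placeholders):

# Release the reserved IP from the old service; it remains reserved in the
# subscription, and the old service is given a new IP from Azure's pool.
Remove-AzureReservedIPAssociation -ReservedIPName very-important-ip -ServiceName MyOldService -Force

# Associate the same reserved IP with the new service.
Set-AzureReservedIPAssociation -ReservedIPName very-important-ip -ServiceName MyNewService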

When playing around with reserved IPs I noticed a couple of behaviours that are worth noting.

Firstly, once a Cloud Service has a reserved IP, you must specify its name when deploying the Cloud Service. Remember that the name of the IP should map to the one used for the particular slot. You do this by adding a NetworkConfiguration section to the service’s ServiceConfiguration file.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="My Service" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="4" osVersion="*" schemaVersion="2015-04.2.6">
  <Role name="MyRole">
  …
  </Role>
  <NetworkConfiguration>
    <AddressAssignments>
      <ReservedIPs>
        <ReservedIP name="very-important-ip"/>
      </ReservedIPs>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>

I found that when the reserved IP was not referenced, I received the following error when deploying. I believe that by not specifying the IP, the deployment process assumes you are changing the IP, which is not allowed.

Set-AzureDeployment : BadRequest: A reserved IP cannot be added, removed or changed during deployment update or upgrade.

Secondly, if you add a reserved IP to one slot of your Cloud Service, you must also add one to the other if you want to be able to swap the deployment slots. You’ll get this error if you forget.

Move-AzureDeployment : BadRequest: Cannot swap VIPs when only one deployment has a Reserved IP.

Finally, as the number of Cloud Services in a particular environment grows and the number of environments increases, the management overhead for the individual reserved IPs increases greatly. Let’s say you have 6 cloud services in 4 environments. That is:

6 cloud services * 2 deployment slots * 4 environments = 48 reserved IPs

In that case it might be better in the long run to build a VNET with a subnet for each environment and then have a Virtual Network Appliance presenting these networks to the Internet on a smaller range of IPs.

For further reading on Reserved IPs take a look at the following links.