Connecting Web Apps to external services – Virtual Appliance Walkthrough


For the purposes of this walkthrough it is assumed that the steps to establish a working Point to Site VPN have been completed. If you have also established a working Site to Site VPN, following this walkthrough will stop it working correctly. The walkthrough uses the Barracuda NextGen Firewall F-Series as the virtual network appliance.

  1. Create a subnet in your VNET with the following settings. Name: frontend, Address Range (CIDR Block): 10.160.3.0/24.
  2. Using the Azure Marketplace, create an instance of the virtual network appliance. This is a “Barracuda NextGen Firewall F-Series (PAYG)”. You will be charged for its use, so you’ll want to ensure that these costs are factored into any planning. You are asked a number of things when setting this up. The important things for this walkthrough are that it is in its own resource group and that it is connected to the frontend subnet you just created. Make a note of the password you use – it will be needed later.
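The frontend subnet can also be created from Azure PowerShell. This is a sketch using the AzureRM cmdlets; the VNET and resource group names are placeholders for whatever you used in the earlier posts:

```powershell
# Placeholder names – substitute your own VNET and resource group
$vnet = Get-AzureRmVirtualNetwork -Name "my-vnet" -ResourceGroupName "my-network-rg"

# Add the frontend subnet with the address range from step 1
Add-AzureRmVirtualNetworkSubnetConfig -Name "frontend" -VirtualNetwork $vnet `
    -AddressPrefix "10.160.3.0/24"

# Push the updated configuration back to Azure
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```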

Notice that you will be charged for the Barracuda license outside of any Azure credit you might have.


  3. The Virtual Network Appliance needs to act as a network gateway, capturing traffic that is not addressed directly to it and forwarding it on to its destination. To do this, IP forwarding needs to be enabled on the Network Interface of the Virtual Network Appliance.
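If you prefer scripting to the portal, enabling IP forwarding can be sketched with the AzureRM cmdlets (the NIC and resource group names are placeholders):

```powershell
# Placeholder names – use the NIC created alongside the Barracuda appliance
$nic = Get-AzureRmNetworkInterface -Name "barracuda-nic" -ResourceGroupName "firewall-rg"

# Let the NIC accept traffic that is not addressed directly to it
$nic.EnableIPForwarding = $true
Set-AzureRmNetworkInterface -NetworkInterface $nic
```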


  4. Create a Route Table that will route all Internet-bound traffic to the virtual network appliance. In this example, I’m controlling the public IP so I’ll restrict the rule to just this IP. This means that I’ll still be able to connect to my VM over the Internet without issues. In production scenarios, you’ll have to think a bit harder about how the routes should be set up. Ensure that the route table is created in the network resource group.
  5. In the newly created route table, create a route for the traffic that needs to go through the virtual network appliance. Use the following settings – Next Hop Type: Virtual Appliance and Next Hop Address: the private IP address of the virtual network appliance. If you are using a single IP, remember you’ll still need to use CIDR notation. That means appending /32 to the end of the address.
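Creating the route table and its route can be sketched in Azure PowerShell. The route table name, the external endpoint (203.0.113.10) and the appliance’s private IP (10.160.3.4) below are all placeholders:

```powershell
# Create the route table in the network resource group (names are placeholders)
$rt = New-AzureRmRouteTable -Name "internet-via-fw" -ResourceGroupName "my-network-rg" `
    -Location "UK South"

# Route the single external endpoint (note the /32) through the appliance
Add-AzureRmRouteConfig -Name "external-endpoint" -RouteTable $rt `
    -AddressPrefix "203.0.113.10/32" -NextHopType VirtualAppliance `
    -NextHopIpAddress "10.160.3.4"
Set-AzureRmRouteTable -RouteTable $rt
```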


  6. Assign the route table to the GatewaySubnet and the Backend subnets.
  7. Through the service plan for the App Service, navigate to the networking option and then click to Manage the VNET Integration. Under the VNET Integration you need to add an entry to the IP ADDRESSES ROUTED TO VNET section. For this walkthrough you can set the start and end address to that of the external endpoint you are connecting to. For production scenarios use an appropriate address range. Take into consideration other services that the application might use, such as SQL Azure and Redis Cache. By default all these connections go over the Internet, so if you are not careful you can route traffic for these connections into your VNET.
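Assigning the route table to a subnet looks something like this in AzureRM PowerShell. The backend subnet’s name and address range (10.160.2.0/24) are assumed from the VPN configuration used in this series; repeat for the GatewaySubnet:

```powershell
# Placeholder names – substitute your own VNET, route table and resource group
$vnet = Get-AzureRmVirtualNetwork -Name "my-vnet" -ResourceGroupName "my-network-rg"
$rt = Get-AzureRmRouteTable -Name "internet-via-fw" -ResourceGroupName "my-network-rg"

# Re-apply the subnet configuration with the route table attached
Set-AzureRmVirtualNetworkSubnetConfig -Name "backend" -VirtualNetwork $vnet `
    -AddressPrefix "10.160.2.0/24" -RouteTable $rt
Set-AzureRmVirtualNetwork -VirtualNetwork $vnet
```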


Configuring the Firewall

When the Firewall is set up it will be secure by default and as such it will not know what to do with traffic destined for your external endpoint. Therefore, it will need to be configured to allow this traffic to flow. The Barracuda Firewall runs on Linux so connecting to it is a bit different to connecting to a Windows machine. Luckily Barracuda provides a Windows desktop application that takes away much of the pain.

  1. Download the Barracuda NextGen Admin application from the Barracuda website. You’ll need to create an account to do this. https://login.barracudanetworks.com/account/
  2. Once logged in, go to Support -> Download and then select the Barracuda NextGen Firewall project. On the resulting page select NextGen Admin as the Type and select the most recent version of the Barracuda NextGen Admin tool.


  3. Run the NGAdmin tool and enter the public IP of the Virtual Network Appliance assigned to it by Azure. Then use root as the username and enter the password you used when creating the appliance.


  4. The operation of the firewall software is broadly split into two roles: one for monitoring the system and the other for configuring it. If you click on the Firewall tab at the top you’ll enter a screen that allows you to monitor firewall operations. From this screen you can do things such as seeing live and historic traffic crossing the firewall and seeing what is allowed and what is blocked. Clicking on Forwarding Rules shows how those access rules are defined. Luckily there is already a rule called LAN-2-INTERNET that manages access from the internal LAN (the Azure VNET in our case) to the Internet. The main problem is that the software as it stands does not know the address ranges that represent our network. To change that we’ll have to use the configuration part of the software.


  5. Click on the Configuration tab. The options to change the forwarding rules are very well hidden. Navigate to Box / Virtual Servers / S1 () [xxx] / Assigned Services / NGFW (Firewall) / Forwarding Rules.


  6. On the resulting screen, you need to change the definition of the LAN-2-INTERNET rule. In the Source field you’ll find “Ref: Trusted LAN”. If you drill into that by double clicking you’ll find that this in turn is defined as “Ref: Trusted LAN Networks” and “Ref: Trusted Next Hop Networks”. Again, by drilling in you’ll find that only “Ref: Trusted LAN Networks” is defined, as 127.0.0.0/24. This is not sufficient for our needs.


  7. This rule needs to allow traffic that originates both from our VNET and from the VPN clients used to establish the Point to Site VPN from the App Services to the VNET. In production scenarios it would make sense to create groups to represent these address ranges and add them to the list of references defined by “Ref: Trusted LAN”. However, to keep things simple the rule can be updated directly.
  8. Add the address ranges 10.10.1.0/24 (Point to Site VPN clients) and 10.160.0.0/16 (VNET) as sources. Note that the software locks down the UI to avoid mistakes creeping in, so you must first unlock the UI (by clicking Lock, bizarrely).


  9. Activate the changes by clicking “Send Changes”, then clicking “Activation Pending” and finally the “Activate” button.
  10. This should be enough to browse from a VM on the backend subnet to the external resource. You can see traffic flowing from the Firewall tab under History. If the firewall is blocking traffic you’ll also see it here.


Connecting Web Apps to external services – Building a Simulated On Premise Network


I mentioned last time that to test out the system we have been building up over the last few posts you need a simulated on premise network. I briefly outlined that it was possible to copy many of the steps taken to build up the cloud network to act as an on-premise network.

However, when I did this for real I was learning Amazon Web Services (AWS), so this was the perfect opportunity to test out what I had learnt. The rest of the post is a walk-through of what I set up.

I’m not going to cover how to set up an Amazon account, so I assume you have already done this. Amazon is slightly less forgiving when it comes to accruing costs, so it is your responsibility to ensure that you choose free or cheap resources and that you delete things when you are done.

Secondly, the walk-through builds up an IaaS based implementation. The reason I do this is that it is closer to what you’ll find when integrating with an on-premise network for real. It is often useful to have enough of an understanding of the moving parts so that you can have productive conversations with the engineers working with the on-premise systems whose help you’ll need.

This walk-through will configure an EC2 instance running Windows Server on a VPC in AWS. The Windows Server will be running Remote Access Services (RAS) configured to act as a VPN endpoint. I use a T2 Micro sized EC2 instance to keep within the Free Tier in AWS. Before you can complete these steps you need two things from Azure: the public IP address of your VPN Gateway and the shared secret you used when setting up the Site to Site VPN in Azure.

AWS Configuration

  1. Log into the AWS console and open up the VPC options.
  2. Use the “Start VPC Wizard” and create a “VPC with a Single Public Subnet”. Note that a public rather than private subnet is used to keep the network configuration simple and to allow RDP access to the EC2 instance over the Internet. Once the VPN is set up, communication will be via a private IP address.
  3. For the IPv4 CIDR block use 10.100.1.0/24. Give the VPC a name and use the same address range for the Public Subnet’s address range. The rest of the options can be left at their defaults.

Notice how similar this is to setting up an Azure VNET. AWS VPCs and Azure VNETs are equivalent. What the AWS VPC wizard does in the background is create an Internet Gateway and network routing which allows traffic from this subnet out on to the Internet.

Using the same address range for the VPC and the subnet is not something you’d do for real but it is enough for this demo.
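For reference, what the wizard does behind the scenes can be sketched with the AWS Tools for PowerShell. This is illustrative rather than a tested recipe; the route table lookup and variable names are assumptions:

```powershell
# Create the VPC and its single public subnet
$vpc = New-EC2Vpc -CidrBlock "10.100.1.0/24"
$subnet = New-EC2Subnet -VpcId $vpc.VpcId -CidrBlock "10.100.1.0/24"

# Create and attach an Internet Gateway, as the wizard does in the background
$igw = New-EC2InternetGateway
Add-EC2InternetGateway -InternetGatewayId $igw.InternetGatewayId -VpcId $vpc.VpcId

# Send Internet-bound traffic through the gateway
$rtb = Get-EC2RouteTable -Filter @{ Name = "vpc-id"; Values = $vpc.VpcId }
New-EC2Route -RouteTableId $rtb.RouteTableId -DestinationCidrBlock "0.0.0.0/0" `
    -GatewayId $igw.InternetGatewayId
```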

  4. Open up the EC2 page and select Launch Instance.
  5. From the list of Amazon Machine Images (AMIs) select Microsoft Windows Server 2016 Base.
  6. On the instance type page ensure t2.micro is selected, and click “Next: Configure Instance Details”.
  7. On the Configure Instance Details page ensure that you change the network and subnet to the ones created in Step 3. You also want to set Auto-assign Public IP to Enable so we have the ability to RDP to the instance over the Internet. Everything else can be left at its default settings.
  8. Remember to either create or use an existing key pair in order to be able to get the EC2 instance’s Admin password.

It will take a few minutes for the instance to start and be at a state where you’ll be able to obtain the admin password. Once you have the password you’ll be able to RDP into it using the public IP it was assigned at startup.

  9. Your EC2 instance will be acting as a network gateway, allowing network traffic destined for other resources to flow through it. AWS doesn’t allow that by default, but it can be set up by disabling source/destination checking.
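The same change can be made from the AWS Tools for PowerShell; the instance id below is a placeholder:

```powershell
# Turn off source/destination checking so the instance can forward traffic
Edit-EC2InstanceAttribute -InstanceId "i-0123456789abcdef0" -SourceDestCheck $false
```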


  10. Open RDP and connect to your Windows EC2 instance as Administrator.
  11. There is a script available that will install and configure RRAS on your server. It mentions Windows Server 2012 but it also works on Windows Server 2016. It requires a few changes for the demo setup, so the updated script is included below.

# Windows Azure Virtual Network

# This configuration template applies to Microsoft RRAS running on Windows Server 2012 R2.

# It configures an IPSec VPN tunnel connecting your on-premise VPN device with the Azure gateway.

# !!! Please notice that we have the following restrictions in our support for RRAS:
# !!! 1. Only IKEv2 is currently supported
# !!! 2. Only route-based VPN configuration is supported.
# !!! 3. Admin privileges are required in order to run this script

Function Invoke-WindowsApi(
    [string] $dllName,
    [Type] $returnType,
    [string] $methodName,
    [Type[]] $parameterTypes,
    [Object[]] $parameters
    )
{
    ## Begin to build the dynamic assembly
    $domain = [AppDomain]::CurrentDomain
    $name = New-Object Reflection.AssemblyName 'PInvokeAssembly'
    $assembly = $domain.DefineDynamicAssembly($name, 'Run')
    $module = $assembly.DefineDynamicModule('PInvokeModule')
    $type = $module.DefineType('PInvokeType', "Public,BeforeFieldInit")

    $inputParameters = @()

    for($counter = 1; $counter -le $parameterTypes.Length; $counter++)
    {
        $inputParameters += $parameters[$counter - 1]
    }

    $method = $type.DefineMethod($methodName, 'Public,HideBySig,Static,PinvokeImpl', $returnType, $parameterTypes)

    ## Apply the P/Invoke constructor
    $ctor = [Runtime.InteropServices.DllImportAttribute].GetConstructor([string])
    $attr = New-Object Reflection.Emit.CustomAttributeBuilder $ctor, $dllName
    $method.SetCustomAttribute($attr)

    ## Create the temporary type, and invoke the method.
    $realType = $type.CreateType()

    $ret = $realType.InvokeMember($methodName, 'Public,Static,InvokeMethod', $null, $null, $inputParameters)

    return $ret
}

Function Set-PrivateProfileString(
    $file,
    $category,
    $key,
    $value)
{
    ## Prepare the parameter types and parameter values for the Invoke-WindowsApi script
    $parameterTypes = [string], [string], [string], [string]
    $parameters = [string] $category, [string] $key, [string] $value, [string] $file

    ## Invoke the API
    [void] (Invoke-WindowsApi "kernel32.dll" ([UInt32]) "WritePrivateProfileString" $parameterTypes $parameters)
}

# Install RRAS role
Import-Module ServerManager
Install-WindowsFeature RemoteAccess -IncludeManagementTools
Add-WindowsFeature -name Routing -IncludeManagementTools

# !!! NOTE: A reboot of the machine might be required here after which the script can be executed again.

# Install S2S VPN
Import-Module RemoteAccess
if ((Get-RemoteAccess).VpnS2SStatus -ne "Installed")
{
    Install-RemoteAccess -VpnType VpnS2S
}

# Add and configure S2S VPN interface

Add-VpnS2SInterface -Protocol IKEv2 -AuthenticationMethod PSKOnly -NumberOfTries 3 -ResponderAuthenticationMethod PSKOnly -Name 51.140.107.124 -Destination 51.140.107.124 -IPv4Subnet @("10.160.1.0/24:100", "10.160.2.0/24:100", "10.10.1.0/24:100") -SharedSecret 1234567890ABC

Set-VpnServerIPsecConfiguration -EncryptionType MaximumEncryption

Set-VpnS2Sinterface -Name 51.140.107.124 -InitiateConfigPayload $false -Force

# Set S2S VPN connection to be persistent by editing the router.pbk file (requires admin privileges)
Set-PrivateProfileString $env:windir\System32\ras\router.pbk "51.140.107.124" "IdleDisconnectSeconds" "0"
Set-PrivateProfileString $env:windir\System32\ras\router.pbk "51.140.107.124" "RedialOnLinkFailure" "1"

# Restart the RRAS service
Restart-Service RemoteAccess

# Dial-in to Azure gateway
Connect-VpnS2SInterface -Name 51.140.107.124

It is surprisingly difficult to highlight within a code block in WordPress, so review the IP addresses in the calls to Add-VpnS2SInterface, Set-VpnS2Sinterface and Set-PrivateProfileString carefully.

This script installs the RRAS feature. It then configures an interface which will allow traffic into the VPN. You need to define where the VPN will connect to, which is the public IP address of your Virtual Network Gateway in Azure. You then need to define all the subnets that can be routed to via the VPN. In this case, we define the address ranges for the Gateway and Backend subnets. We also define the address pool for the Point to Site VPN. This will allow traffic that entered the on premise network from the App Services to flow back again. Finally, we use the same shared secret that was set up on the Azure side.

  12. Once the script has run you can confirm its status via the PowerShell command Get-VpnS2SInterface -name 51.140.107.124 | Format-List. The result should be something like this. Note that the ConnectionState will remain Disconnected until the Azure side is set up.


  13. You’ll need to set up routing rules that allow network traffic to flow correctly from the AWS VPC through the VPN connection. Open up the Route Table associated with the subnet you created and add the following routes. The routes tell the AWS VPC to route traffic destined for the Azure VNET, and for the App Services sitting at the end of the Point to Site VPN, through the EC2 instance running the AWS side of the VPN Gateway.
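The same routes can be sketched with the AWS Tools for PowerShell; the route table and instance ids below are placeholders:

```powershell
# Azure-side ranges: Gateway subnet, Backend subnet and the Point to Site client pool
$destinations = "10.160.1.0/24", "10.160.2.0/24", "10.10.1.0/24"
foreach ($cidr in $destinations)
{
    # Route each range through the EC2 instance running RRAS
    New-EC2Route -RouteTableId "rtb-0123456789abcdef0" -DestinationCidrBlock $cidr `
        -InstanceId "i-0123456789abcdef0"
}
```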


If you attempt to simulate an on-premise network in Azure by creating another VNET and VPN gateway and connecting that to the other side of the Site to Site VPN you also need equivalent routes.

At this point if you have completed the Site to Site VPN configuration on the Azure side you should be set. Check that the Azure side VPN connection is reporting Connected and rerun Get-VpnS2SInterface -name 51.140.107.124 to see if the AWS side is happy.


Sometimes the RRAS service does not start correctly, so if you are having problems run the command Connect-VpnS2SInterface -Name 51.140.107.124 again.

Connecting Web Apps to External Services – Site to Site VPN


Last time I set the scene for a common scenario when using Web Apps hosted on Azure App Services. How do I connect to services hosted on a private network? This time I’ll walk through the first potential solution option.

Problem:

Code hosted in an App Service needs access to a web service endpoint hosted in an on premise private network. It must be possible for the on premise endpoint to identify traffic from your application hosted on App Services in Azure and only allow access to that traffic.

Solution Option – Site to Site VPN

Build a private network segment in Azure as an Azure VNET. Connect the App Service to the private Azure VNET using a Point to Site VPN. This acts as a private connection from your application, hosted in the multi-tenanted Azure App Service infrastructure, into the VNET, allowing it to access resources routable via the Azure VNET. Resources on the VNET are not able to access the application. The on premise network is connected to the Azure VNET via a Site to Site VPN. This effectively extends the on premise network to the cloud, allowing bi-directional communication between resources hosted on premise and those hosted in Azure via private network addressing.

Challenges

Network configuration is required within the on premise network to enable the VPN connection to function. This includes setting up either VPN software or an appliance and configuring network routing to ensure that traffic destined for Azure is routed through the VPN.

The network in Azure must be designed with the on premise network in mind. As a minimum, you need to understand the on premise network design enough to avoid address conflicts when creating the Azure VNET. More likely, any design principles in play on the on premise network are likely to extend to the cloud hosted network.

What this means in practice is that there needs to be collaboration and coordination between the people managing the on premise network and yourself. Depending on the situation this may not be desirable or even possible.

Context

The following diagram explains the configuration we are trying to achieve.

site2site(no ips)

The main components are:

Azure App Services: When setting up the point to site VPN you must define a network range. This is a range of addresses that Azure will select as the outbound IP addresses that the App Service hosted application presents into the Azure VNET. Whilst you might assume that this is the IP address of the server hosting your application, it is not quite that straightforward as Azure is working under the covers to make this all work. However, you can assume that traffic from your application will always originate from this range of addresses, so if you make it sufficiently small it is suitable for whitelisting in firewalls, etc. without compromising security.

Azure VNET: Represents your virtual networking space in Azure. You define an address space in which all of your subnets and resources will live.

GatewaySubnet: This is created automatically when you create the VPN gateway in Azure. From experience, it is better to leave it alone. If you add a virtual machine or other networkable devices into this network, routing becomes more of a challenge. Consider this subnet to be the place where external traffic enters and leaves the Azure VNET. The gateway subnet exists inside your Azure VNET so its address range must exist entirely within the Azure VNET address space.

Backend Subnet: This is an optional subnet. Its primary purpose in this walkthrough is testing. It is relatively simple to add a VM to the subnet so you can test whether traffic is propagating correctly. For instance, you can test that a Point to Site VPN is working if an App Service application can hit an endpoint exposed on the VM. Additionally, you can test that your Site to Site VPN is working if a VM on this subnet can connect to an endpoint on a machine on your on premise network via its private IP address. The subnet must have an address range within that of the Azure VNET and must not clash with any other subnet. In practice, this subnet can be the location for any Azure resource that needs to be network connected. For example, if you wanted to use Service Fabric, a VM Scale Set is required. That scale set could be connected to the backend subnet, which means it is accessible to applications hosted as App Services. In this configuration, it has a two-way connection into the on premise network but a one-way connection from Azure App Service to resources on the backend subnet.

On Premise: This represents your internal network. For demonstration purposes, you should try to build something isolated from your primary Azure subscription. This builds confidence that you have everything configured correctly and you understand why things are working rather than it being a case of Azure “magic”. You could set this up in something completely different from Azure such as in Amazon Web Services and in a later post I’ll walk through how to do that. However, if you are using Azure ensure that your representation of your on premise network is isolated from the Azure resources you are testing. The IP address space of the on premise network and the Azure VNET must not overlap.

Connecting Web Apps to external services – Overview


When starting out building websites with Azure, it is likely that you’ll start by deploying a Web App to Azure App Services. This is a great way to get going, but as with all things, as you invest more and more time and your solution grows up, you might experience some growing pains.

All credit to Microsoft. You can build a sophisticated solution with Azure Web Apps and you can have it connect to an Azure SQL instance without really thinking about the underlying infrastructure. It just works.

The growing pains start when you want to connect this Web App to something else. Perhaps you need to connect to some other service hosted in Azure. Or perhaps you need to leverage a third-party system that does not allow connections over the Internet.

Some development focused organisations stumble here. They have chosen Azure for its PaaS capability so they don’t have to think about infrastructure. They can code, then click deploy in Visual Studio – job done. Unfortunately for them, breaking out of this closed world requires some different skills – some understanding of basic networking is required.

Getting through this journey is not hard but it requires breaking the problem down into more manageable pieces. Once these basics are understood they become a foundation for more sophisticated solutions. Over the next few posts I’m going to go through some of these foundation elements that allow you to break out of a Web App running on Azure App Services (or any other type of App Service), first to leverage other resources running in your Azure subscription, such as databases or web services running on VMs, and then out into services running on other infrastructure, whether cloud hosted or on private infrastructure.

Over this series of posts I’ll be addressing the following scenario.

The code running in a Web App hosted on Azure App Services needs to call a Web Service endpoint hosted in a private network behind a firewall. The organisation says that they’ll only open the firewall to enable access to IP addresses that you own.

This discounts opening the firewall for the range of outbound IP addresses exposed by Azure App Services as there is no guarantee that you have exclusive use of them.

So the approach will be to build a network in Azure to which the Web App can connect. Then connect the Azure network to the private network by way of a private connection or by way of a connection over the Internet where traffic is routed through a network appliance whose outbound IP is one controlled by you.

 

 

Azure, Cloud Service & Reserved IPs


Azure is pretty good at getting you up and running quickly. You can get from nothing to a solution in production very quickly. Whilst this approach definitely reduces time to market, it can introduce growing pains along the way. Let’s consider Cloud Services as a specific example of how growing pains might manifest themselves.

When you create a Cloud Service you get two IP addresses, one for each slot, Staging and Production. These are allocated from a huge range Azure manages for each region and you have no guarantee of which IP you’ll get. When you’re setting up your Cloud Service you probably didn’t worry about that. As time passes and the solution matures you may have used those IP addresses to create firewall rules for your databases in Azure, and perhaps even given them to third parties in order for them to be whitelisted to allow your application to access another service.

Now the specific IP addresses that were allocated at random by Azure are critical to the success of your solution. And guess what, those IPs are not as secure as you might think. If for whatever reason your Cloud Service is deallocated, the IP addresses will be lost. When you recreate the Cloud Service it will be allocated a new IP address. All of your firewall rules now don’t work. That might not be a major problem for your own rules, which you can hopefully change rapidly, but it might be a problem if you are working with a supplier that has a two-week turnaround SLA for “minor changes”.

This is where Reserved IPs come in. They are a means to control the life span of an IP address by effectively taking ownership of it in your Azure subscription. Now Azure will never reclaim an IP Address whilst it is reserved in your subscription. The following PowerShell command will create a new reserved IP address.

New-AzureReservedIP -ReservedIPName very-important-ip -Location "UK South"

And this command associates the IP address with an existing Cloud Service.

Set-AzureReservedIPAssociation -ReservedIPName very-important-ip -ServiceName MyBrilliantService

However, we might already have a Cloud Service whose IP address has become important. Changing it would cause unacceptable problems. Luckily it is possible to create a reserved IP from an existing cloud service. The following PowerShell command creates the reserved IP using the IP address of the Staging slot of a Cloud Service and also creates the association between the reserved IP and the Cloud Service.

New-AzureReservedIP -ReservedIPName very-important-ip -Location "UK South" -ServiceName MyBrilliantService -Slot Staging

A couple of notes about these commands:

  1. The -Location is the region in which the reserved IP will be created.
  2. The -Slot is an optional argument on these commands. It lets you target either the Staging or Production Cloud Service deployment slot. Production is the default.
  3. Reserved IPs are a “classic” Azure feature. As such, resource groups are a meaningless concept – you’ll see all your reserved IPs deployed to a Default resource group.
  4. The first five reserved IPs are free, but you should be aware that holding more than this is not. You are charged based on the time you hold on to an IP address, which is, in effect, designed to dissuade you from holding on to a large number of publicly routable IPv4 addresses, which are increasingly becoming a limited resource. https://azure.microsoft.com/en-us/pricing/details/ip-addresses/

Let’s talk about deallocation. Over the lifetime of your solution your architecture will evolve. You might need to move an IP address from one resource to another. You may want to release IP addresses that you no longer use. Once you have an IP address reserved you own it until the reserved IP resource is deleted. The only way it can be used is by creating an association. To use it elsewhere it must be dissociated from the original resource and associated with the new resource. It is important to note that during this process the original resource will receive a new IP address from Azure’s pool.
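Moving a reserved IP between Cloud Services can be sketched with the classic PowerShell cmdlets (“MyNewService” is a placeholder for the target service):

```powershell
# Release the reserved IP from its current Cloud Service;
# that service then receives a new IP from Azure's pool
Remove-AzureReservedIPAssociation -ReservedIPName very-important-ip `
    -ServiceName MyBrilliantService -Force

# Associate the reserved IP with the new Cloud Service
Set-AzureReservedIPAssociation -ReservedIPName very-important-ip `
    -ServiceName MyNewService
```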

When playing around with reserved IPs I noticed a couple of behaviours that are worth noting.

Firstly, once a Cloud Service has a reserved IP you must specify its name when deploying the Cloud Service. Remember that the name of the IP should map to the one used for the particular slot. You do this by adding a NetworkConfiguration section to the service’s ServiceConfiguration file.

<?xml version="1.0" encoding="utf-8"?>
<ServiceConfiguration serviceName="My Service" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration" osFamily="4" osVersion="*" schemaVersion="2015-04.2.6">
  <Role name="MyRole">
  …
  </Role>
  <NetworkConfiguration>
    <AddressAssignments>
      <ReservedIPs>
        <ReservedIP name="very-important-ip"/>
      </ReservedIPs>
    </AddressAssignments>
  </NetworkConfiguration>
</ServiceConfiguration>

I found that when the reserved IP was not referenced I received the following error when deploying. I believe that by not specifying the IP, the deployment process assumes you are changing the IP, which is not allowed.

Set-AzureDeployment : BadRequest: A reserved IP cannot be added, removed or changed during deployment update or upgrade.

Secondly, if you add a reserved IP to one slot of your Cloud Service you must also add one to the other if you want to be able to swap the deployment slots. You’ll get this error if you forget.

Move-AzureDeployment : BadRequest: Cannot swap VIPs when only one deployment has a Reserved IP.

Finally, as the number of Cloud Services in a particular environment grows and the number of environments increases the management overhead for the individual reserved IPs increases greatly. Let’s say you have 6 cloud services in 4 environments. That is:

6 cloud services * 2 deployment slots * 4 environments = 48 reserved IPs

In that case it might be better in the long run to build up a VNET with a subnet for each environment and then have a Virtual Network Appliance presenting these networks to the Internet on a smaller range of IPs.

For further reading on Reserved IPs take a look at the following links.

The state of “Not Invented Here Syndrome” in 2017


Development teams often build up high levels of trust internally due to the nature of the constant collaboration between team members. Whilst that internal trust grows, it can cause a lack of trust of outsiders, whether that be third parties or even other internal teams. So, when there is a genuine case for reuse there is often a strong argument against it. A common one is that the high-quality standards of the team can only be assured if code is written in house.

And why not? Developers like writing code, so given the chance they will write “all the code”. But code has a cost in terms of maintaining a solution over time. And we will have to support the solution, because software isn’t written once then forgotten about; it continuously evolves. And let’s not forget that writing scalable, reliable and adaptable distributed systems is hard. Who really wants to be debugging a custom load balancing solution when your system is on its knees and customers are beating down your door? Why invest the next couple of months building yet another custom security solution when your competitors seem to be releasing new features every few weeks?

The IT industry is seeing trends that will hopefully consign that old insular mindset to the history books.

Cloud computing offers, amongst many other advantages, the opportunity of offloading complexity on to some other party. Why worry about heating and air conditioning in a custom data centre when all you really need to do is build a website? Economies of scale means that costs are substantially reduced but you need to remember that cloud offerings are built for the masses and if you don’t fit then you may not get the benefits you expected. Cloud solutions such as Azure and Amazon Web Services practically offer a menu of services that you pick based on your requirements for ease of use vs the flexibility and control that you need. At the extreme, serverless computing promises that you can deploy and run code in the cloud without ever worrying about how the underlying infrastructure will be scaled to meet demand.

There is a trend of companies reinventing themselves as tech companies – Netflix and Amazon are just two examples of companies that, in order to be disruptive in their particular marketplaces, transformed themselves into technology companies. Over the last few years this has reached a tipping point, and now many organisations are trying the same thing and expecting the same results. Whilst it is true that IT is fundamental to many business models, and that being technically savvy as an organisation has a key role to play, it is unlikely that everyone needs to code their IT from the ground up.

By looking at the first movers in that space you see technologies being developed in house to solve a particular problem and then shared back to the community. Google created AngularJS and Facebook created the Cassandra NoSQL database. Today anyone can pick up these projects for their own use and, perhaps more importantly, contribute to them, allowing the projects to evolve independently.

So, my vision of a team that is successfully avoiding NIH Syndrome in 2017 is one that

  • Has a wide understanding of the technology landscape
  • Does not exhibit siloed thinking about technology stacks, particular products or architectures
  • Has the time and space to try new things
  • And is encouraged to contribute back into the community that it takes from.

Reusing open source software is not like picking apples from somebody else’s orchard. It is a two-way proposition. You use an open source project to enhance your own product – usually to save cost and time. Therefore, you should invest some of that time back, even if it is simply to fix bugs or answer questions on Stack Overflow. And herein lies the challenge. Many organisations do not yet see the value of reinvesting in the community that bootstrapped them to where they are today, and are so single-minded that they cannot see beyond their own immediate business pressure to deliver more features. Whether this approach is sustainable, I’m not sure. But as more and more companies transform into technology companies, the cream of the development world will come to expect certain values from their employers, and as you know, the cream rises to the top.


Traffic Manager Profiles – Custom Domains


Last time I walked through a basic traffic manager setup. As with most walkthroughs, that should get you up and running, but it doesn’t cover some of the things you need to consider to make this a real-world solution. So, this time I’m going to delve into some of those considerations. I’ll cover the Azure side of setting up a custom domain to give your site a realistic presence for your customers. To better understand this, I’ll look at what traffic manager profiles are actually doing under the covers with requests coming from clients.

After setting up a traffic manager profile if you look at the custom domains for your site you’ll see something like this.

part 2.1

You’ll see that Azure has added a custom domain under azurewebsites.net, so you have a means of accessing the site even if you do nothing else. It is greyed out because you cannot remove it. In the screen grab I have also added a custom domain in order to give the site a friendly name. To get this to work you need to set up the relevant DNS A or CNAME records, whether that is in Azure itself or via a third party. Azure will only allow you to add the custom domain after verifying that the DNS records are correctly registered.
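As a sketch, the records at your DNS provider might look like the following zone fragment. The domain name and the azurewebsites.net hostname here are hypothetical placeholders, not values from the walkthrough:

```
; Hypothetical zone fragment: a CNAME maps the friendly name to the
; App Service default hostname. Alternatively, an A record can point
; at the site's public IP address.
www.hamersmith.space.  3600  IN  CNAME  mysite.azurewebsites.net.
```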

If you set up a traffic manager profile and add your web site as an endpoint, when you look at the custom domains again you’ll notice a change. An entry for the traffic manager profile has been added, but why? To understand that, you need to look at what the traffic manager profile is really doing.

part 2.2

When a client makes a request for tmpprofile1233.trafficmanager.net, a DNS lookup is required to resolve the address. Normally this would return the same IP address (in the case of an A record) or the same domain name (in the case of a CNAME record) for every lookup. If the result is a domain name, the process is repeated until an IP address is returned. The client then uses this IP address to talk to the web application directly. Traffic is not routed through the DNS infrastructure for each request, nor is a lookup done each time. The client holds on to the IP address for a set period of time, called the Time To Live (TTL), and only looks the address up again when this time has expired.
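To make the chain concrete, here is a minimal Python sketch that simulates the CNAME chasing a resolver performs. The records dictionary is a stand-in for real DNS data; the trafficmanager.net, hamersmith.space and azurewebsites.net names and the IP address are illustrative only:

```python
# Minimal sketch of how a recursive DNS lookup chases a CNAME chain.
# All names and the IP address below are hypothetical examples.

RECORDS = {
    # The traffic manager profile answers with a CNAME for an endpoint
    "tmpprofile1233.trafficmanager.net": ("CNAME", "uksouth-dev.hamersmith.space"),
    # The endpoint's domain is itself a CNAME to the app's default hostname
    "uksouth-dev.hamersmith.space": ("CNAME", "mysite-uksouth.azurewebsites.net"),
    # ...which finally resolves to an IP address via an A record
    "mysite-uksouth.azurewebsites.net": ("A", "51.140.0.10"),
}

def resolve(name: str) -> str:
    """Follow CNAME records until an A record yields an IP address."""
    rtype, value = RECORDS[name]
    while rtype == "CNAME":
        rtype, value = RECORDS[value]
    return value

print(resolve("tmpprofile1233.trafficmanager.net"))  # 51.140.0.10
```

The client caches the final answer for the TTL, then repeats the whole chase, which is what gives traffic manager its chance to hand out a different endpoint next time.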

Traffic manager profiles provide a set of rules so that different domain names are returned based on the routing method and the endpoint configuration, e.g. the number of endpoints, their weights and priorities. You also define a TTL, typically lower than normal, to ensure that address lookups occur more regularly. This ensures that clients are not disrupted for too long in the case of a failure.
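The interplay between routing rules and the TTL can be sketched as follows. This is not how traffic manager is implemented, just an approximation of a weighted routing method plus client-side TTL caching at the DNS layer; the endpoint names, weights and TTL value are made up for illustration:

```python
# Sketch: weighted endpoint selection with client-side TTL caching.
# Endpoint names, weights and the TTL are hypothetical examples.
import random
import time

ENDPOINTS = [
    ("uksouth-dev.hamersmith.space", 3),  # (domain, weight)
    ("ukwest-dev.hamersmith.space", 1),
]
TTL_SECONDS = 30  # deliberately short so a failed endpoint is dropped quickly

_cache = {}  # profile name -> (cached answer, expiry time)

def lookup(name: str) -> str:
    """Return the cached answer until the TTL expires, then re-pick."""
    answer, expiry = _cache.get(name, (None, 0.0))
    if time.monotonic() >= expiry:
        domains = [d for d, _ in ENDPOINTS]
        weights = [w for _, w in ENDPOINTS]
        answer = random.choices(domains, weights=weights, k=1)[0]
        _cache[name] = (answer, time.monotonic() + TTL_SECONDS)
    return answer
```

Within the TTL window every lookup returns the cached answer; once it expires a fresh weighted pick is made, so an endpoint that has been removed after a failure stops being handed out within roughly one TTL.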

part 2.3

Based on its rules, traffic manager will return the domain of one of your endpoints, such as uksouth-dev.hamersmith.space. The client will then resolve that to an IP address and talk to it directly. This explains why trafficmanager.net addresses show up in each of your app services’ custom domains lists. It is also why you configure a shared domain name such as dev.hamersmith.space at each site and not in the traffic manager profile itself.
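Putting that together, one plausible DNS layout pairs a short-TTL CNAME for the shared name with longer-lived per-site names. The record values below are hypothetical, reusing the example names from above:

```
; Shared name: resolved via the traffic manager profile, short TTL
dev.hamersmith.space.          60    IN  CNAME  tmpprofile1233.trafficmanager.net.
; Per-site name: bypasses traffic manager, useful for direct testing
uksouth-dev.hamersmith.space.  3600  IN  CNAME  mysite-uksouth.azurewebsites.net.
```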

part 2.4

In the screen grab above I have a local, pretty domain, xxx-dev1.hamersmith.space, that routes clients directly to this web site. This is useful for testing purposes, as it bypasses any traffic manager policies. You’ll also see the shared domain name xxx-dev.hamersmith.space, which is needed to ensure that the site works correctly when it is picked by the traffic manager policy.

It took a while to get my head around this when I first used traffic manager, but once you walk through what it is doing it starts to make more sense.