
Securing workloads in Azure: Part 2, making connections private

Read up on part 1 of securing workloads to get the background on the introduction of the WAF component. This post will again be heavy on the why. The hows are coming, I promise!

The situation is the same as before; we're looking to secure this deployment:

deployment schematic

We have already introduced the WAF in the previous post. Enter the next step: locking down the network.

Private connections

One of the main asks of compliance and security policies is to segment your networks and place workloads in one of those segments. The typical setup from many moons ago was that none of your servers were reachable unless you were on the company network. Nothing was accessible from the outside unless it passed through the firewall, reverse proxy, load balancer, jump box, and so on.

In terms of security and compliance stance in the cloud, the mindset has not changed. Security in depth and zero-trust environments require you to secure both the identity landscape (RBAC roles in Azure) and the network perimeter (how connections and data flow in your Azure environment). A design that answers those security requirements:

hub spoke networking architecture

What’s shown here are three subscriptions, each with its own VNET. There is a management VNET (the VWAN VNET) that takes care of routing, security, and so on. The other VNETs are connected to that management VNET, and all traffic, or only a subset of it, passes through or originates from that management VNET.
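
To make the hub-and-spoke idea a bit more tangible, here is a minimal sketch of connecting a spoke VNET to a hub VNET using classic VNet peering with the Azure SDK for Python. All names and IDs are hypothetical, and note that the Virtual WAN setup in the picture uses hub connections rather than peerings; the underlying idea is the same, though.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# Hypothetical names and IDs, purely for illustration.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
HUB_RG, HUB_VNET = "rg-hub", "vnet-hub"
SPOKE_RG, SPOKE_VNET = "rg-app", "vnet-app"

credential = DefaultAzureCredential()
network = NetworkManagementClient(credential, SUBSCRIPTION_ID)

hub_vnet_id = network.virtual_networks.get(HUB_RG, HUB_VNET).id

# Peer the spoke to the hub; a matching peering is needed on the hub side as well.
network.virtual_network_peerings.begin_create_or_update(
    SPOKE_RG,
    SPOKE_VNET,
    "peer-app-to-hub",
    {
        "remote_virtual_network": {"id": hub_vnet_id},
        "allow_virtual_network_access": True,
        "allow_forwarded_traffic": True,   # accept traffic forwarded by the hub (e.g. a firewall)
        "use_remote_gateways": False,      # set to True if the hub hosts the VPN/ExpressRoute gateway
    },
).result()
```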

There are many variations on this design, but they are outside the scope of this post. I’ll ask Chris to write up some posts about the many options in the land of network control. It’s a broad and interesting topic.

In any case, what does that mean for us? Let’s go back to our earlier design of the WAF + app service.

hub spoke and our application: connecting to the VWAN VNET

We must first consider that our app service is not in the network, and neither is our database. Even with the app gateway sitting between the public internet and our app service, there is still a DNS record (like https://myappservice.azurewebsites.net) that points straight to the app service, and public access is still open on the app service itself. Visually, this is the reality at this point:

unsecured request paths
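
You can see this for yourself: as long as public access is open, a plain request to the default *.azurewebsites.net hostname still answers, bypassing the WAF entirely. A quick check (the hostname is made up):

```python
import requests

# Hypothetical default hostname; replace it with your own app service.
resp = requests.get("https://myappservice.azurewebsites.net", timeout=10)

# A 200 here means the app still answers on its public endpoint,
# so the app gateway/WAF can simply be walked around.
print(resp.status_code)
```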

The first thing you will want to do is stop traffic from going directly to the app service, so that the WAF can do its job. Shielding the app service from outside requests can be done with a private endpoint on the app service.

secured request paths

What Azure Private Endpoint and Azure App Service achieve together, according to the docs, is full isolation of the app service.

From the docs:

App Service + Private endpoints: security perspective MS Docs

From a design perspective, what we’ve done is this:

App service + Private endpoint

We’ve brought the app service into the network and shielded it from outside access; the app service is now successfully fronted by a WAF. A side effect is that the AppGw and the app service are on the network, so the IT management team can secure outgoing traffic if they choose to. Getting incoming traffic secured by the management network as well would require some more configuration changes. That is possible, but it is out of the scope of this article. I hope by now the idea is clear.
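
To make that step a bit more concrete, here is a minimal sketch of what creating such a private endpoint looks like with the Azure SDK for Python. All names, the region, and the subnet ID are hypothetical; the proper walkthrough follows in a later post.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.web import WebSiteManagementClient

# Hypothetical names, purely for illustration.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RG = "rg-app"
APP_NAME = "myappservice"
SUBNET_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RG}"
    "/providers/Microsoft.Network/virtualNetworks/vnet-app/subnets/snet-endpoints"
)

credential = DefaultAzureCredential()
web = WebSiteManagementClient(credential, SUBSCRIPTION_ID)
network = NetworkManagementClient(credential, SUBSCRIPTION_ID)

app = web.web_apps.get(RG, APP_NAME)

# The private endpoint gets a NIC with a private IP in the chosen subnet
# and connects it to the app service ('sites' is the App Service sub-resource).
network.private_endpoints.begin_create_or_update(
    RG,
    f"pe-{APP_NAME}",
    {
        "location": "westeurope",
        "subnet": {"id": SUBNET_ID},
        "private_link_service_connections": [
            {
                "name": f"plsc-{APP_NAME}",
                "private_link_service_id": app.id,
                "group_ids": ["sites"],
            }
        ],
    },
).result()
```

With the endpoint in place, the public check from earlier should come back with a 403 from the App Service front end instead of a 200. You would normally also link the privatelink.azurewebsites.net private DNS zone to the VNET so the app's hostname resolves to the private IP, but that is exactly the kind of detail the follow-up posts will cover.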

At this point, we’ve made big steps towards being compliant with networking requirements.

We’ll have to do the same for the other resources, and then the final design becomes something along the lines of:

All services behind PE

One of the intended side effects is that the app service is no longer accessible from the public network. If we enable a private endpoint and configure the firewall on the SQL server and the Key Vault as well, the same happens for those services: no longer accessible from the public internet. In terms of network security and compliance, this is a big step towards security in depth. It goes beyond identity security and extends into network security. What's not to like?
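
As a hedged sketch of that "same for those services" step, this is roughly what switching off public network access on the SQL server and the Key Vault looks like with the Azure management SDKs for Python. The names are hypothetical and the exact properties depend on the SDK and API versions you use; each service then also gets its own private endpoint, created the same way as for the app service above.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.sql import SqlManagementClient
from azure.mgmt.keyvault import KeyVaultManagementClient

# Hypothetical names, purely for illustration.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RG = "rg-app"

credential = DefaultAzureCredential()
sql = SqlManagementClient(credential, SUBSCRIPTION_ID)
keyvault = KeyVaultManagementClient(credential, SUBSCRIPTION_ID)

# SQL server: refuse connections that do not come in over a private endpoint.
sql.servers.begin_update(
    RG,
    "sql-myapp",
    {"public_network_access": "Disabled"},
).result()

# Key Vault: deny public traffic; data-plane access then only works via the private endpoint.
# (public_network_access requires a recent API version of the Key Vault management plane.)
keyvault.vaults.update(
    RG,
    "kv-myapp",
    {
        "properties": {
            "public_network_access": "Disabled",
            "network_acls": {"default_action": "Deny", "bypass": "AzureServices"},
        }
    },
)
```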

What's next?

The problem: deployments

Let’s consider the use of a CI/CD tool for a minute, like Azure DevOps. ADO has two options for build servers: Microsoft-hosted and self-hosted. Microsoft’s version runs in their cloud, not in your network. Well, that basically means those servers won’t be able to reach the app service anymore.

If we want to be able to deploy to the app service, we need to resort to hosting our own ADO build servers. The full setup could then become something along the lines of the following design:

Three VNETs: one for the VWAN, one for build and DNS tooling, and one for the application

Now, because you wanted to be more secure in Azure, you have the added burden of managing your own ADO build agents, either through Docker or through VMs. Depending on the size of your organization and the technology choices your teams have made, this could become a big task. Imagine having to maintain Java, Python, Ruby, .NET Framework, .NET Core, PowerShell, Bash, … toolchains on a VM, just because you want your Azure services to be more secure. Luckily, the ADO team publishes the code they use to build their VMs. We can reuse their work to help minimize ours.

If all this is of interest to you, stay tuned. I’ll be walking you through setting things up in Azure, covering both the development aspect and the DevOps engineer aspect.