An Introduction to PenTesting Azure

6 August 2021 - Articles

I recently wrote an introduction to PenTesting an AWS Environment. A sensible place to start, given that, as I noted there, in Q1 of 2018 Amazon held a 33% market share in cloud whereas Microsoft held only 13%. However, I did want to add a few notes here that are specific to PenTesting within Azure environments.

Many of the concepts are the same; however, in my AWS article I broke the perspective a penetration tester could take of a cloud environment down into testing “on the cloud”, “in the cloud”, and “testing the cloud console”. That concept remains the same, which is:

When testing cloud environments, there are different perspectives the assessment could consider, some are very similar to external infrastructure/web application assessments and some are different.

I’ll separate the things that are the same from the things that are different to traditional penetration testing by considering the following types of cloud testing and then breaking each one down into the kinds of testing that could take place:

  • Testing on the Cloud: testing traditional systems which are simply hosted within a cloud environment. For example, this can be virtualised systems that have been moved from on-premises to the cloud (e.g. “lift and shift”) or it could be web applications which are hosted on the cloud where only the applications themselves are considered in scope for the assessment and not the supporting infrastructure.
  • Testing in the Cloud: testing systems within the cloud that are not exposed publicly. For example, this could be testing the server hosting an application, or testing systems which are hosted on the cloud but have a firewall preventing direct access and are instead accessed through a bastion host. Additionally, we would consider the risks associated with a compromised application that allows a threat actor access to the backend infrastructure.
  • Testing the Cloud Console: testing the configuration of the cloud console (sometimes referred to as the portal) itself, such as looking at the user accounts which have been set up, their permissions, the access-control lists which have been configured, etc. This is effectively a configuration review and could well be compliance driven – however there are still several things in this category to consider as a penetration tester in case access to a cloud console is gained during a penetration test. It’s also most likely an efficient way to determine potential paths of privilege escalation.

The first difference you would notice, if reading the AWS article and this one side-by-side, is that Microsoft no longer requires prior approval for Penetration Testing activity. That’s documented here. You are still “encouraged” to notify Microsoft, but it’s not a requirement, and of course there are still rules of engagement. The rules of engagement restrict the types of things you would expect – for example, denial-of-service testing is forbidden, as is attempting to access data that is not wholly your own. There are a few areas which you might be surprised that Microsoft “encourages”, such as attempting to break out of a shared service container, and load testing your application.

For the first, Microsoft sets the limitation that if you are successful in breaking out of a shared service container you must stop testing immediately and report the success to Microsoft. For the latter, Microsoft differentiates between “load testing” and “denial-of-service” attacks. Further explanation states that by this they mean testing surge capacity is acceptable, whereas aiming to degrade the services of others or impact assets outside of your own environment is not. Still not a lawyer, terms and conditions still apply, and of course no batteries are ever included.

Microsoft also notes that they perform their own Penetration Testing and Red Teaming of their cloud infrastructure, services, and applications, as documented here.

Testing on the Cloud

As with AWS, testing systems and applications that are simply lifted-and-shifted to the cloud is likely to be no different from testing an application that was hosted on-premises. However, one thing to consider is Azure data storage. It’s common to use an MS-SQL database within Azure for storage and there are several security features which apply here, such as database firewalls, data masking, and “Always Encrypted” databases. Additionally, Azure offers a Web Application Firewall.

Regarding Azure SQL firewalls, they are always on and cannot be disabled; however, it is possible to set up an allow rule spanning the entire address range (0.0.0.0 to 255.255.255.255, i.e. 0.0.0.0/0) which effectively disables it. Whilst this may be considered unlikely, it should be remembered that the SQL firewall configuration may be done at the server level, not the database level, so a “testing” rule may have granted wider than expected access.
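As a sketch of how you might spot this misconfiguration, the snippet below checks an exported rule list for a rule spanning the whole IPv4 range. The JSON here is invented for illustration, but the startIpAddress/endIpAddress fields mirror the shape of the output of `az sql server firewall-rule list`.

```shell
# Hypothetical export of SQL server firewall rules; the rule names are
# made up, but the field names follow the Azure CLI output format.
cat > rules.json <<'EOF'
[
  {"name": "office-vpn", "startIpAddress": "203.0.113.0", "endIpAddress": "203.0.113.255"},
  {"name": "testing", "startIpAddress": "0.0.0.0", "endIpAddress": "255.255.255.255"}
]
EOF

# A rule starting at 0.0.0.0 and ending at 255.255.255.255 spans the
# entire IPv4 space -- the firewall is effectively disabled.
grep -q '"endIpAddress": "255.255.255.255"' rules.json \
  && echo "overly permissive rule present"
```

In a real review you would pull the live rule set with the Azure CLI or portal rather than a local file, and check each server, since a single server-level rule covers every database on it.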

One interesting detail I found whilst researching this post was a section within “Pentesting Azure Applications” by Matt Burrough which states:

“One final possible weakness is that SQL firewall rules are configured at the server level, not per database. So, if a server has 20 databases, each used by different teams, one rule set is applied to all of them.”

However this may be misleading, as it is possible to restrict access at a database-level, as documented here. Although I would note, just because a security feature is available doesn’t mean that it’s being used, or is correctly deployed, so it’s always worth checking!

Finally, in regards to Azure’s Web Application Firewall, it comes preconfigured with the OWASP Core Rule Set (CRS) 3.0, although it also supports 2.2.9. Information on configuring the WAF for defenders can be found here, but attackers might prefer to take a look at the ruleset documentation (and even grab a copy of the ruleset for testing) here.

Testing in the Cloud

As with my AWS article, by “testing in the cloud” I mean effectively having the perspective of a system running within the customer’s cloud environment, such as a compromised web server.

Here it’s likely a good idea to introduce the Azure Deployment Models. Really there is the old model and the new model. These models control how systems are deployed into an Azure environment. The legacy model is known as “Azure Service Management” (sometimes called “Azure Classic”, or ASM) and the new system is known as “Azure Resource Manager” (ARM), which includes role-based access control. The previous ASM portal (the “Classic Portal”) was shut down on 8th January 2018; however, you will still see many references to it in Microsoft documentation. If you’re torn apart by curiosity to learn about ASM then information can be found here.

The simplest kinds of testing you could do would be to deploy a scanning engine into the environment to allow for vulnerability scans. With Amazon there are Amazon Machine Images which allow scanners to be deployed – there’s an equivalent in Azure: if you log in to your Azure Portal and take a look in the Marketplace you can find scanners such as Nexpose, Qualys, or Nessus.

Testing in the cloud could be achieved where a system is compromised during a Penetration Test (such as a web server being vulnerable to command injection) or it may be provided by a client to allow testing of this eventuality to take place without the prerequisite vulnerability.

A Penetration Tester could be given secure access to an instance within the environment, or alternatively access can be granted by means of a VPN to allow access more directly. The number of deployed systems, complexity of the environment, and desired level of assurance will likely dictate which is the best approach. Testing with this type of context is likely going to be similar for a Penetration Tester as an internal corporate network, including known weaknesses such as servers with remote desktop protocol (terminal services) exposed with self-signed certificates.

Previously, under ASM, it was possible to authenticate using “Management Certificates”; under the newer ARM these have been replaced with “Service Principals”. However, I will explain both here.

Whilst access may be granted through a username and password (and hopefully multi-factor authentication!) it’s possible to grant access to ASM (that’s the legacy one we talked about earlier) through Management Certificates. These certificates could be contained within files such as .pfx and .cspkg. If you’re unfamiliar with these a PFX file is a PKCS#12 archive which (may be password protected and) may contain a private key. A CSPKG file is a cloud services package which is effectively a zip file created by Visual Studio to allow applications to be deployed to Azure, however they may contain management certificates.
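If you do recover a .pfx file, OpenSSL can unpack it to see whether a private key is inside. The sketch below fabricates a throwaway certificate purely so there is something to inspect; the filenames and the password are made up for illustration.

```shell
# Fabricate a throwaway PFX so there is something to inspect -- in
# practice you would start from a recovered .pfx file instead.
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out cert.pem \
  -days 1 -nodes -subj "/CN=demo" 2>/dev/null
openssl pkcs12 -export -inkey key.pem -in cert.pem -out found.pfx -passout pass:secret

# Dump the archive: if a private key is present, the certificate can be
# used to authenticate, not merely to identify.
openssl pkcs12 -in found.pfx -nodes -passin pass:secret | grep -c "BEGIN PRIVATE KEY"
```

If the archive is password protected and you don't have the password, the usual offline cracking approaches apply before any of this is possible.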

The newer ARM does not use Management Certificates but instead uses service principals. Service principals are more like service accounts in on-premises deployments: they do not allow portal access but allow more restricted access to resources. Authenticating as a service principal can be configured to be done either by username and password, or by certificate. If during a Penetration Test you find yourself some service principal credentials lying around, take a look here for information about authenticating with them.

In short, it can be achieved with PowerShell such as:

$pscredential = Get-Credential
Connect-AzureRmAccount -ServicePrincipal -ApplicationId "http://my-app" -Credential $pscredential -TenantId $tenantid

Storage and Storage Keys

Azure Storage can be accessed in several ways: with storage account keys, user credentials, or a shared access signature. The usefulness of user credentials should be limited if a company has deployed multi-factor authentication. Shared access signature (SAS) tokens are used to grant a small set of permissions to files in storage and are formatted as URLs. SAS tokens are often not particularly useful due to the high level of restrictions placed on them (such as read-only access to a single file). However, storage keys are a great find.
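To make those restrictions concrete, here is the anatomy of a made-up SAS URL; splitting the query string shows exactly what a found token grants. The account, container, blob, and signature below are all invented.

```shell
# A made-up SAS URL -- account, container, blob and signature are invented.
SAS='https://myaccount.blob.core.windows.net/backups/db.bak?sv=2020-08-04&st=2021-08-01T00%3A00%3A00Z&se=2021-08-02T00%3A00%3A00Z&sr=b&sp=r&sig=REDACTED'

# Split the query string into one parameter per line:
#   sp = granted permissions (r = read only)
#   sr = resource scope (b = a single blob)
#   st / se = validity start and expiry times
#   sig = signature binding the other fields to the storage key
echo "${SAS#*\?}" | tr '&' '\n'
```

A token scoped like this one (read-only, single blob, short validity window) is the common case, which is why SAS finds are usually low impact compared with a storage key.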

Storage accounts have two keys, a primary and a secondary. They’re not user defined but are instead 64-byte values (usually base64-encoded) and they are supported by the Azure storage utilities as well as the storage APIs. For a quick look at accessing data within a container in Azure Blob storage (Azure data storage, somewhat similar to AWS S3) you can use AzCopy and access a Blob like:

AzCopy /Source:https://myaccount.blob.core.windows.net/mycontainer /Dest:C:\myfolder /SourceKey:key /S
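Because the keys have a fixed size, a quick sanity check on a candidate string found during a test is to base64-decode it and count the bytes. The key below is random, generated purely for illustration.

```shell
# Generate a random stand-in for a storage account key -- real keys are
# 64 random bytes, handed out base64-encoded (88 characters of base64).
KEY=$(head -c 64 /dev/urandom | base64 -w0)

# A plausible storage key decodes back to exactly 64 bytes.
printf '%s' "$KEY" | base64 -d | wc -c
```

Anything that doesn't decode cleanly to 64 bytes is unlikely to be a storage account key, which helps triage the strings you pull out of config files and source code.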

Testing the Cloud Console

Finally, by testing the cloud itself, I am referring to connecting to the Azure Portal and taking a look at the security of the configuration of the console. However, remember that the portal is used for ARM and that ASM had the “Classic Portal”. As of January 8 2018 the classic portal has been “sunsetted” and the old address https://manage.windowsazure.com/ will simply redirect to the new address https://portal.azure.com/.

With full access to all aspects of the Azure environment it is possible to perform a full security review, which would likely work more like a “configuration review”. It’s sensible to start with basic things such as checking accounts for a lack of MFA, determining how secrets are stored within the environment, and finding weak network security groups (NSGs). If you’ve not come across NSGs before, they’re effectively firewall filters.

I’ll run through some of the security features of Azure which should be considered. Additionally, it’s worth taking a look at Azurite (here), which assists in the enumeration and visualisation of an Azure environment and may assist a security review. This Azurite, an enumeration tool written by MWR, is not to be confused with Microsoft’s tool of the same name, a clone of Azure storage for local development (here).

Azure Security Features

Azure Security Center – https://azure.microsoft.com/en-gb/services/security-center/ – announces itself as “unified security management and advanced threat protection across hybrid cloud workloads”, which sounds really good, but what is it? It’s an agent-based security tool which works on Windows and Linux. It looks for security features which are not enabled, enforces security policies, supports application whitelisting (in enforcement or audit modes), and flags missing protections such as a WAF not being enabled, NSGs not being enabled, and missing operating system updates.

Key Vault – https://azure.microsoft.com/en-gb/services/key-vault/ – is a feature to encrypt keys and passwords within a hardware security module. It can even handle requesting and renewing TLS certificates. Authentication is handled by Active Directory. It is designed in such a way that Microsoft cannot access the keys in storage.

Multi-factor Authentication – https://azure.microsoft.com/en-gb/services/multi-factor-authentication/ – I mentioned MFA within the post itself, however it’s worth bringing it up again here as it’s such a huge benefit to the security of an Azure environment.

Operations Management Suite – https://www.microsoft.com/en-gb/cloud-platform/operations-management-suite – includes four main services: Log Analytics, Automation, Backup, and Site Recovery. Log Analytics enables you to explore and take action on log events, Automation allows you to ensure consistent deployment and compliance, and Backup protects your data should a failure occur.
