In many scenarios, companies opt against enclosing their Azure storage accounts within a virtual network (VNet), instead favouring the simplicity and flexibility of IP whitelisting for access control. Yet what seems like a straightforward solution often turns into a labyrinth of challenges, particularly when implementing granular access controls through automation tools like Terraform. The dreaded "Status=403 Code=AuthorizationFailure" error becomes a familiar foe, leaving administrators searching for an effective remedy.
Typical online solutions involve temporary 'hack' fixes, such as briefly opening the storage account to all networks or dynamically adding build agent IPs to the whitelist during the deployment pipeline. While these workarounds address the immediate problem, they are inherently temporary and hard to sustain.
But what if there's a smarter, more streamlined way to achieve the same level of security and control?
In this blog, I will delve into a solution that I've discovered to be more effective, requiring no temporary changes to the firewall during the creation of storage accounts. This approach tackles the common challenges faced by companies relying on IP whitelisting for Azure storage access control, offering a smoother path towards resolution.
Introducing the Terraform Provider and Resources
Before we dive in, let's introduce two important Terraform building blocks: the azurerm provider and its azurerm_storage_container resource. These are commonly used to manage Azure resources, including storage accounts and their containers.
The azurerm provider covers a wide range of Azure services, while the azurerm_storage_container resource focuses specifically on storage containers.
For example:
terraform {
  required_version = "=1.7.4"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# The azurerm provider requires a (possibly empty) features block.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "Australia East"
}

resource "azurerm_storage_account" "example" {
  name                     = "examplesa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azurerm_storage_container" "example" {
  name                  = "examplecontainer"
  storage_account_name  = azurerm_storage_account.example.name
  container_access_type = "private"
}
Now that we have all the context we need and are familiar with the commonly used resources, let's dive in.
The Problem with Workarounds
When administrators encounter the "Status=403 Code=AuthorizationFailure" error in Azure storage containers while enforcing IP whitelisting, they often turn to quick fixes found online. These makeshift solutions, though seemingly effective initially, come with their own set of challenges and limitations:
1. Temporarily Allowing All Network Access
One common workaround involves temporarily granting all network access to the Azure storage account. While this may solve the immediate issue by allowing Terraform deployments to proceed without authorisation failures, it compromises security. By opening the storage account to unrestricted access, organisations risk exposing sensitive data to security breaches and unauthorised access, leading to potential compliance issues.
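To make the trade-off concrete, here is roughly what that change amounts to when expressed in Terraform. In practice it is often done by hand in the portal or via a pipeline script; the resource below is purely an illustration, reusing the storage account from the earlier example:

resource "azurerm_storage_account_network_rules" "allow_all" {
  storage_account_id = azurerm_storage_account.example.id

  # "Allow" opens the account to traffic from all networks,
  # which is exactly the exposure described above.
  default_action = "Allow"
}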
2. Dynamically Tracking and Adding Build Agent IPs
Dynamically tracking and adding IPs from the DevOps pipeline to the whitelist streamlines the process but raises timing and compliance concerns. This typically involves querying a service like https://api.ipify.org?format=json to retrieve the build agent's IP, then updating the storage account's network rules with PowerShell or the Azure CLI within the pipeline.
Azure can be slow to register network rule changes, which can cause pipeline failures, and auditors may prefer static whitelists, potentially causing approval delays. Because the whitelist now changes on every run, it also becomes harder to distinguish legitimate updates from unauthorised ones, requiring careful monitoring and control to preserve reliability and integrity.
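For illustration, here is a minimal sketch of that workaround written in Terraform itself rather than a pipeline script. It assumes the resources from the earlier example plus the hashicorp/http provider (v3 or later) declared in required_providers; the static IP is a documentation-range placeholder, not a real whitelist entry:

# Look up the build agent's current public IP.
data "http" "agent_ip" {
  url = "https://api.ipify.org?format=json"
}

resource "azurerm_storage_account_network_rules" "agent_whitelist" {
  storage_account_id = azurerm_storage_account.example.id
  default_action     = "Deny"

  # Append the agent's IP to the organisation's static whitelist.
  ip_rules = concat(
    ["203.0.113.10"], # placeholder for the real static whitelist
    [jsondecode(data.http.agent_ip.response_body).ip]
  )
}

Every run now mutates the network rules, which is precisely the churn that makes auditors nervous.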
The Crux of the Issue
The heart of the matter lies in how Terraform's azurerm provider implements the azurerm_storage_container resource: it creates and manages containers through Azure's data plane. The data plane sits behind the storage account's firewall rules, meaning any attempt to create or modify resources there, such as storage containers, must satisfy those rules.
The current behaviour aligns with this setup: if the Azure DevOps (ADO) build agent is not on the whitelist, encountering a "Status=403 Code=AuthorizationFailure" error is expected. What we ideally want is the ability to create containers not within the constraints of the data plane but through the control plane instead.
In essence, while Terraform's azurerm provider is excellent at configuring resources through the data plane, our aim is to transcend this limitation and interact directly with Azure's control plane, the Azure Resource Manager API. This shift would let us keep IP whitelisting and other access controls in place without hitting the roadblocks posed by the data plane's firewall rules.
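Concretely, the two planes sit behind different endpoints, and the storage account firewall only guards the first. A rough illustration (paths abbreviated, subscription ID omitted):

# Data plane (subject to the storage account's firewall rules):
#   https://examplesa.blob.core.windows.net/examplecontainer
#
# Control plane, i.e. Azure Resource Manager (not subject to those rules):
#   https://management.azure.com/subscriptions/<subscription-id>
#     /resourceGroups/example-rg/providers/Microsoft.Storage
#     /storageAccounts/examplesa/blobServices/default/containers/examplecontainer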
Create an Azure storage container with Terraform
Fortunately, the fix for this issue is quite simple. By replacing the azurerm_storage_container resource with the azapi provider's azapi_resource, we can call the Azure Resource Manager API directly. This lets us bypass the constraints of the data plane and create the container through the control plane instead.
The updated Terraform to create an Azure Storage container would look something like this:
terraform {
  required_version = "=1.7.4"

  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
    azapi = {
      source  = "Azure/azapi"
      version = "=1.13.1"
    }
  }
}

# The azurerm provider requires a (possibly empty) features block.
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "example-rg"
  location = "Australia East"
}

resource "azurerm_storage_account" "example" {
  name                     = "examplesa"
  resource_group_name      = azurerm_resource_group.example.name
  location                 = azurerm_resource_group.example.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

resource "azapi_resource" "example" {
  # Control plane call: ARM creates the container, so the storage
  # account's firewall rules never come into play.
  type      = "Microsoft.Storage/storageAccounts/blobServices/containers@2023-01-01"
  name      = "examplecontainer"
  parent_id = "${azurerm_storage_account.example.id}/blobServices/default"

  body = jsonencode({
    properties = {}
  })
}
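A quick note on the body: an empty properties object creates the container with the API's defaults, which, per the Microsoft.Storage REST API, means no public access, the equivalent of container_access_type = "private" in the earlier example. If you need different behaviour, settings such as publicAccess can be supplied inside properties.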
Conclusion & Closing Thoughts
So there you have it! With the help of the azapi provider and its azapi_resource, we've managed to sidestep the hurdles posed by Azure's data plane and streamline our access control processes in the Azure environment. It's like finding a shortcut in a maze: suddenly, everything becomes a lot smoother and more efficient.
As we navigate the ever-evolving landscape of cloud computing, it's essential to stay open to new solutions and approaches. By embracing tools like azapi, we can not only overcome current challenges but also pave the way for even more innovative solutions in the future.
But hey, we're not done yet! We'd love to hear your thoughts, experiences, or questions about the solutions we've explored in this blog post. Have you encountered similar challenges? Found different workarounds? Or maybe you have some tips to share? Whatever it is, we're all ears!
Let's keep the conversation going and continue exploring ways to make our Azure deployments smoother, more efficient, and more secure. Your insights could be just the thing someone else needs to solve their own cloud conundrums. Can't wait to hear from you!