Deploy an Azure API Management API with Terraform

We have an ever-growing number of APIs hosted in Azure API Management, in an ever-growing number of environments.

Most of our APIs follow the same back-end pattern: a number of Azure Function Apps or Logic Apps that talk to either our back-end systems or those of a third party.

Deploying APIs manually is a real time sink, not to mention unutterably tedious and prone to transcription errors. I believe that our operations team has had one member of staff doing nothing other than configuring API-M for several weeks.

Every other part of our deployments is scripted and carried out by DevOps pipelines, so it has been a constant source of irritation that we have not, until now, been able to eliminate this last manual step.

I have previously tried to use ARM templates to automate our deployments, but I ran out of time on that; towards the end it was just bogged down in the complexity and lack of clarity which ARM brings to the process.

Having recently introduced myself to Terraform, and noticing that it seemed to have a lot of support for API Management, I thought I would put some time aside to revisit APIM with Terraform.

In short, it's far easier to use.

What I wanted was to be able to define an entire API, including its back-end connections to logic apps or function apps, policy files, etc., as a set of variables, hand that to Terraform and let it manage creating and updating the APIs from then on.

The best way of doing that looked to be a Terraform module, so that is what I did; you can access the module here.
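To give a flavour of the support, here is a minimal sketch of the core azurerm resource such a module ends up wrapping. The names, path and import URL below are placeholders for illustration, not my module's actual variables:

```hcl
resource "azurerm_api_management_api" "example" {
  name                = "example-api"
  resource_group_name = azurerm_resource_group.example.name
  api_management_name = azurerm_api_management.example.name
  revision            = "1"
  display_name        = "Example API"
  path                = "example"
  protocols           = ["https"]

  # Optionally import the API definition from an OpenAPI/Swagger document
  import {
    content_format = "swagger-link-json"
    content_value  = "https://example.com/swagger.json"
  }
}
```

Compare that with the equivalent ARM template and you can see why I found Terraform the easier road.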

Deploy an Azure Function App with Terraform

A lot of my work lately revolves around creating new applications that have a fairly similar structure in Azure.

I have a resource group to contain the new functionality, one or more function apps, to do some work, a key vault for the function apps to store their secrets and an application insights for monitoring.

We use Microsoft DevOps so the resource group and associated resources is generally contained within a single repo and has a single pipeline.

The pipeline for deploying the above with ARM Templates is tedious and long winded, I wondered if I could simplify it using Terraform.

I wanted to have the resource group and its resources defined purely in variables and I wanted as much of the gubbins to be fairly reusable, or capable of being used with further Terraform, or even ARM scripts for the cases when we need more stuff in the resource group.

On the whole it was fairly easy to build a Terraform module to achieve all of the above goals. Some parts, such as including Key Vault keys in the function apps configuration required a bit of thinking about.

The source code for the module is available here and also via the Terraform Registry.

Function App Group as Variables

I’ll start at the end, the following module configures my function app group.

module "terraform-functionapp-group" {
    source  = "JoeAtRest/functionapp-group/azurerm"

    subscription_prefix = "dev"
    location_prefix = "uks"
    location = "uksouth"
    app_name = "mytestgroup"

    keyvault_secrets = { "secret-name" : "the secret", "secret-name2" : "squirrel" }
    tags = { "Solution" : "Test" }

    functionapps = [{
        name              = "fa-1"    
        zip_path          = "local/fa1"
        ip_restrictions   = ["192.168.1.23","200.32.29.4"]
        settings          = { "NameInFunctionApp" = "https://some-url" ,  "OtherThingInFunctionApp" = "false" }
        key_settings      = [{ name = "mysecret", secret = "secret1" },{name = "myothersecret", secret = "secret2"}]
    }]

    access_policies = []
}

We have a naming convention for our Azure components to denote which subscription and region they're in; the top few variables take care of those, so I only need one script for all regions and subscriptions.

The app_name is the name of the resource group.
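For illustration, inside the module the naming pieces might combine like this. The exact local names here are assumptions, but the pattern matches the function app naming you will see later in the module:

```hcl
locals {
  # With the example variables this gives something like "dev-uks-fa-fa-1"
  functionapp_name_prefix = "${var.subscription_prefix}-${var.location_prefix}-fa"
}
```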

When the key vault is created I want to preload it with the various secrets for the environment. In real life these would not be defined in the main.tf but would be held in Azure Secure Files that the pipeline copies into the Terraform folder.

Tags: it's good to tag things.

   functionapps = [{
        name              = "fa-1"
        zip_path          = "local/fa1"
        ip_restrictions   = ["192.168.1.23","200.32.29.4"]
        settings          = { "NameInFunctionApp" = "https://some-url" ,  "OtherThingInFunctionApp" = "false" }
        key_settings      = [{ name = "mysecret", secret = "secret1" },{name = "myothersecret", secret = "secret2"}]
    }]

This is an array of function apps which I want to create; in this instance it's just a single function app.

I am deploying the function app using the WEBSITE_RUN_FROM_PACKAGE setting, which means I build the code, zip it up and store the zip file in an Azure storage blob. I then use the SAS key in the function app settings to tell it where to run from.

This raised the first issue I faced with the Terraform process: if I always provide Terraform with a file that has the same name as the last version of the function app then it will take no action, because it thinks nothing has changed.

I suspect you can tell the resource not to behave like that and to always redeploy; I didn't know that three days ago when I made this module and pipeline.

Instead I always give my zip file a random name and pass that into the module in zip_path.
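An alternative I have not tried, so treat this as an untested sketch, would be to derive the blob name from the file's content hash using Terraform's filemd5 function, so a changed build automatically produces a new blob without the pipeline having to invent a random name:

```hcl
resource "azurerm_storage_blob" "function_storageblob" {
  for_each = { for fa in var.functionapps : fa.name => fa }

  # A new build changes the hash, which changes the name, which forces a new blob
  name                   = "${each.value.name}-${filemd5(each.value.zip_path)}.zip"
  storage_account_name   = azurerm_storage_account.function-storageaccount.name
  storage_container_name = azurerm_storage_container.function_storagecontainer.name
  type                   = "Block"
  source                 = each.value.zip_path
}
```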

Our OPSEC team likes us to restrict function apps to talk only to API-M; the ip_restrictions list limits the function app to talking only to the IP addresses in it.

A function app needs settings; you see them in the Configuration blade in the Azure portal. There are a number of different kinds of settings, all lumped into the one place.

First (my settings map) there are the settings the function app expects in order to determine how it runs: flags, URLs of things it's talking to, etc.

Secondly there are sensitive settings which you would rather store in key vault; you put a long Key Vault reference into the configuration and Azure automatically provides your function app with the relevant value.
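For reference, a resolved Key Vault setting ends up looking like this in the configuration. The vault name, secret name and version here are made up for illustration:

```hcl
settings = {
  "MySecretUrl" = "@Microsoft.KeyVault(SecretUri=https://my-vault.vault.azure.net/secrets/mysecret/0123456789abcdef)"
}
```

Azure resolves the reference at runtime, so the secret value itself never appears in the function app configuration.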

So that is enough configuration to create the resource group, a function app, a key vault and an application insights. The ARM equivalent is, it goes without saying, utterly hideous.

How Does That All Work?

I am going to skip completely over the configuration of key vault and app insights; it's a case of copying and pasting the relevant sections from the Terraform azurerm pages.

Storage

The function app needs some storage in which to keep its zip file.

resource "azurerm_storage_account" "function-storageaccount" {
  name                     = local.storage_account
  resource_group_name      = azurerm_resource_group.app-rg.name
  location                 = azurerm_resource_group.app-rg.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
  tags                     = var.tags
}

resource "azurerm_storage_container" "function_storagecontainer" {
  name                  = local.storage_container
  storage_account_name  = azurerm_storage_account.function-storageaccount.name
  container_access_type = "private"
}

resource "azurerm_storage_blob" "function_storageblob" {
    for_each = { for functionapp in var.functionapps : functionapp.name => functionapp }

  name                   = each.value.zip_path
  storage_account_name   = azurerm_storage_account.function-storageaccount.name
  storage_container_name = azurerm_storage_container.function_storagecontainer.name
  type                   = "Block"
  source                 = each.value.zip_path
}

data "azurerm_storage_account_sas" "function_sas" {
  connection_string = azurerm_storage_account.function-storageaccount.primary_connection_string
  https_only        = false
  resource_types {
    service   = false
    container = false
    object    = true
  }
  services {
    blob  = true
    queue = false
    table = false
    file  = false
  }
  start  = "2018-03-21"
  expiry = "2028-03-21"
  permissions {
    read    = true
    write   = false
    delete  = false
    list    = false
    add     = false
    create  = false
    update  = false
    process = false
  }
}

Ta da! Now it has storage.

The only bits of note here are the creation of the storage blob, the loading of the zip file into it, and the generation of the SAS key which allows the function app to access its zip file.

First, the blob …

resource "azurerm_storage_blob" "function_storageblob" {
    for_each = { for functionapp in var.functionapps : functionapp.name => functionapp }

  name                   = each.value.zip_path
  storage_account_name   = azurerm_storage_account.function-storageaccount.name
  storage_container_name = azurerm_storage_container.function_storagecontainer.name
  type                   = "Block"
  source                 = each.value.zip_path
}

The cool part here is the for_each: it will create a blob for each function app in my configuration (which was a set, so there can be multiple). Each resource can then be identified elsewhere in the scripts like this:

something = azurerm_storage_blob.function_storageblob["fa-1"].id

The source attribute then instructs Terraform to load the zip file into the blob.

Function App

The actual function app creation is very straightforward, which was just as well, as it left plenty of time to work out how to get the settings set.

locals {
  app_settings = {
    "FUNCTIONS_WORKER_RUNTIME" : "dotnet",
    "FUNCTIONS_EXTENSION_VERSION" : "~3",
    "APPINSIGHTS_INSTRUMENTATIONKEY" : azurerm_application_insights.rg.instrumentation_key       
  }    
}

# App Service Plan
resource "azurerm_app_service_plan" "app_app_service_plan" {
  name                = local.app_service_plan
  location            = azurerm_resource_group.app-rg.location
  resource_group_name = azurerm_resource_group.app-rg.name
  kind                = "FunctionApp"
  sku {
    tier = "Dynamic"
    size = "Y1"
  }
  tags = var.tags
}

resource "azurerm_function_app" "app_functionapp" {
  for_each = { for functionapp in var.functionapps : functionapp.name => functionapp }

  name                      = "${var.subscription_prefix}-${var.location_prefix}-fa-${each.value.name}"
  location                  = azurerm_resource_group.app-rg.location
  resource_group_name       = azurerm_resource_group.app-rg.name
  app_service_plan_id       = azurerm_app_service_plan.app_app_service_plan.id
  storage_connection_string = azurerm_storage_account.function-storageaccount.primary_connection_string
  app_settings              = merge(
    local.app_settings,
    each.value.settings,
    {"APPINSIGHTS_INSTRUMENTATIONKEY" : azurerm_application_insights.rg.instrumentation_key}, 
    {"WEBSITE_RUN_FROM_PACKAGE" : "https://${azurerm_storage_account.function-storageaccount.name}.blob.core.windows.net/${azurerm_storage_container.function_storagecontainer.name}/${azurerm_storage_blob.function_storageblob[each.value.name].name}${data.azurerm_storage_account_sas.function_sas.sas}"},    
    {"HASH" : filebase64sha256(each.value.zip_path) },
    zipmap(each.value.key_settings[*].name, [for s in each.value.key_settings[*].secret: "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault.app_keyvault.vault_uri}secrets/${azurerm_key_vault_secret.app_secret[s].name}/${azurerm_key_vault_secret.app_secret[s].version})"]) 
    )
  version = "~3"
  identity {
    type = "SystemAssigned"
  }
  
  tags = var.tags
  
  site_config {
    dynamic "ip_restriction" {
      for_each = each.value.ip_restrictions
      
      content {
        ip_address  = "${ip_restriction.value}/32"        
      }
    }    
  }
}

The settings: there are three categories. First there's the boilerplate function app settings, which I have hard-coded into a local variable.

locals {
  app_settings = {
    "FUNCTIONS_WORKER_RUNTIME" : "dotnet",
    "FUNCTIONS_EXTENSION_VERSION" : "~3",
    "APPINSIGHTS_INSTRUMENTATIONKEY" : azurerm_application_insights.rg.instrumentation_key       
  }    
}

Then there are the function app settings from the variables, and finally the settings which I want to configure to use Key Vault.

All three of these need to be brought together into the single settings map in the function app configuration.

Happily, Terraform provides the merge function, to merge many maps into a single map.

merge(
    local.app_settings,
    each.value.settings,
    {"APPINSIGHTS_INSTRUMENTATIONKEY" : azurerm_application_insights.rg.instrumentation_key}, 
    {"WEBSITE_RUN_FROM_PACKAGE" : "https://${azurerm_storage_account.function-storageaccount.name}.blob.core.windows.net/${azurerm_storage_container.function_storagecontainer.name}/${azurerm_storage_blob.function_storageblob[each.value.name].name}${data.azurerm_storage_account_sas.function_sas.sas}"},    
    {"HASH" : filebase64sha256(each.value.zip_path) },
    zipmap(each.value.key_settings[*].name, [for s in each.value.key_settings[*].secret: "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault.app_keyvault.vault_uri}secrets/${azurerm_key_vault_secret.app_secret[s].name}/${azurerm_key_vault_secret.app_secret[s].version})"]) 
    )
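A toy example of how merge behaves; note that when the same key appears in more than one map, the later map wins:

```hcl
locals {
  merged = merge(
    { "A" = "1", "B" = "2" },
    { "B" = "3" }, # overrides the earlier "B"
    { "C" = "4" }
  )
  # local.merged is { "A" = "1", "B" = "3", "C" = "4" }
}
```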

I don't mind admitting, this bit took me way longer to work out than is feasible. Largely because I had messed up the data structure in each.value.settings whilst I was trying to get zipmap to work; the errors I thought were telling me I had got the zipmap bit wrong were actually not about that at all.

Many hours later, I realised that. So this may not be the best, or even the most sensible, way of doing this. It is the way which worked after many hours, so I am sticking with it.

zipmap(each.value.key_settings[*].name, [for s in each.value.key_settings[*].secret: "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault.app_keyvault.vault_uri}secrets/${azurerm_key_vault_secret.app_secret[s].name}/${azurerm_key_vault_secret.app_secret[s].version})"]) 

So, I want the setting name as the map's key, and the value to be a string generated from the secret name and the key vault I created earlier (invisibly, so far as you are concerned). zipmap takes its first parameter as a list of keys, and its second as a list of values.

each.value.key_settings[*].name

key_settings is a set of objects. Each object has the properties name and secret. The splat (*) generates a list of all of the names in the set of objects.

[for s in each.value.key_settings[*].secret: "@Microsoft.KeyVault(SecretUri=${azurerm_key_vault.app_keyvault.vault_uri}secrets/${azurerm_key_vault_secret.app_secret[s].name}/${azurerm_key_vault_secret.app_secret[s].version})"]

I have my doubts about this bit, but it works. So far as I can tell, it loops through each secret in the set of objects and creates a list where each secret has been transformed into the string after the colon.

Handily that is the exact string required for Azure to automatically provide the function app with the latest version of that secret.
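Putting the three pieces together on the example data from earlier, with a made-up vault URI standing in for the real Key Vault references:

```hcl
locals {
  key_settings = [
    { name = "mysecret",      secret = "secret1" },
    { name = "myothersecret", secret = "secret2" },
  ]

  # Splat: ["mysecret", "myothersecret"]
  names = local.key_settings[*].name

  # For expression: one Key Vault reference string per secret
  refs = [
    for s in local.key_settings[*].secret :
    "@Microsoft.KeyVault(SecretUri=https://example.vault.azure.net/secrets/${s}/)"
  ]

  # zipmap stitches keys and values into the final settings map
  key_vault_settings = zipmap(local.names, local.refs)
}
```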

Key Vault and Managed Identity

I'm not sure what the best strategy is with key vault, whether to have one key vault or lots of them. We seem to have opted for lots of them, one per resource group.

The key vault is there to do two things:

  • Securely hold keys and secrets for the application
  • Allow access to those people or things who should have access and prevent access for everything else

There are several things which require varying degrees of access to the key vault:

  • The DevOps pipeline needs to be able to add and remove keys and secrets and also, if I use terraform destroy, the ability to delete the key vault
  • The function apps need to be able to access their secrets
  • Our operations team needs to be able to see and change the keys or secrets
  • In pre-production environments the development and test teams need to be able to access the keys and secrets

Access for the pipeline and access for the function apps, via a managed identity, is handled automatically by the module whilst the other cases are configurable using the access_policies parameter.

   access_policies = [{
        tenant_id          = "tenant id of azure subscription"
        object_id          = "object id of resource requiring access"
        key_permissions    = ["create", "get"]
        secret_permissions = ["set", "get", "delete"]
   }]

There are various kinds of permissions you can assign to both keys and secrets; the example shows only a few of them.

Managed Identity

The function app is given an Azure managed identity when it's created, through this property:

identity {
  type = "SystemAssigned"
}

Then the key vault assigns permissions to that managed identity like this:

resource "azurerm_key_vault_access_policy" "app_keyvault_functionapps" {
  for_each = { for functionapp in var.functionapps : functionapp.name => functionapp }

  key_vault_id = azurerm_key_vault.app_keyvault.id

  tenant_id = azurerm_function_app.app_functionapp[each.value.name].identity[0].tenant_id
  object_id = azurerm_function_app.app_functionapp[each.value.name].identity[0].principal_id

  key_permissions = [    
    "get",
  ]

  secret_permissions = [    
    "get",    
  ]
}

First of all, there may be more than one function app, so the for_each will apply this to all of them.

The tenant_id and object_id can then be retrieved from the resource.

Allow Access to Live Helper Chat REST API

Currently, we use Live Helper Chat from our main corporate website, to allow customers to chat directly to our customer services team.

We are in the process of developing a mobile app and a new microsite for our customers. We want to provide chat functionality from these new platforms.

Live Helper Chat comes with a fairly comprehensive REST API, which the mobile app can use, to allow the customer to chat with customer services.

Unfortunately, out of the box, our Bitnami installation of Live Helper Chat breaks access to the REST API.

API calls require an Authorization header, and Apache filters this header out, so all of our calls to the API resulted in an “Authorization header is missing!” response.

The solution is very simple, once you have spent ages working out what it is.

Adding this line

SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1

to the bitnami.conf file, within the virtual host section, allows the Authorization header to be passed to the API:

<VirtualHost _default_:80>
  SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1

The bitnami.conf file is here:

/opt/bitnami/apache2/conf/bitnami/bitnami.conf

Fix Bitnami Query String Stripping

If you want to use the Live Helper Chat API, and you are using the Bitnami version, there is another problem which you will need to fix.

By default, the htaccess file rewrite rules will strip out any query strings from URLs. Many calls to the API rely on query strings, and they will not work with the default Bitnami rewrite rules.

Look for the htaccess.conf file, which should be here:

/opt/bitnami/apps/livehelperchat/conf/htaccess.conf

and change this line

RewriteRule ^(.*)?$ index.php?/$1 [L]

to this

RewriteRule ^(.*)?$ index.php?/$1 [L,QSA]

Terraform / Azure APIM

Can you use it to incorporate Azure APIM deployments in a pipeline?

I have an existing API Management instance which now hosts many different APIs, all of which have been created and configured manually.

I need to be able to manage individual APIs without affecting anything pre-existing and, in the first instance, without having to bring those APIs into an automated process.

You cannot do this in any sensible manner with ARM templates; I have tried. Terraform does seem to contain the necessary azurerm resources to make a better job of it.

For another project I have created a containerised DevOps build agent with Terraform installed upon it, so I can use that for this project.

Desired Process

I want to deploy individual APIM APIs from the repositories that they use. For the most part, an API is a front end to a collection of logic/function apps. I want the API to be controlled and managed as a part of that project/repository and its build pipelines.

I don't want any individual API deployment to affect other APIs or the APIM framework. The deployment and configuration of APIM itself is currently manual but will eventually also be automated.

Can I Do This ?

I don’t know yet, I expect so. I will post here as the journey unfolds.

Terraform / Azure VM

We have recently added new chat functionality to our company's website using an open source project called Live Helper Chat, an excellent piece of software which allows customers looking at our website to talk directly to our Customer Service Team.

This project was undertaken at pace; the project manager made use of the fact that there is a Bitnami virtual machine image in Azure for Live Helper Chat that could be installed with a single click directly into the production environment.

This enabled us to get the new chat functionality up and running very quickly. In the longer term it did create some challenges:

  • Various configuration changes were made to the virtual machine to allow it to meet our security requirements
  • Various configuration changes were made to the application to provide the functionality we require for our customers and customer services team
  • The VM runs MySQL as well as the web application, and yet our systems architecture team calls for it to be highly available and resilient
  • The IG team require the database to be backed up
  • Future work is envisioned which may require changes to the configuration or code
  • The Live Helper Chat application is updated frequently and we want to keep up with the latest updates where possible

The first challenge is that we had no way of immediately replicating the production set-up anywhere else, short of cloning the VM into other environments. The lack of the system in any other environment makes it impossible to test changes.

Using Terraform / Ansible to Deploy An Azure VM

We use DevOps to deploy our solutions for everything else, so I wanted a pipeline in DevOps which could do the following:

  • Create an instance of the Bitnami Live Helper Chat in a designated subscription/resource group
  • Provide the network related artefacts which are normally created when you create a Bitnami VM in Azure
  • Apply the post deployment configuration to the Bitnami VM
  • Import a baseline MySQL database to provide the application with enough functionality for it to be usable: user accounts, etc.
  • Deploy a test website which incorporates the Live Helper Chat widget so that people can use it as they would use the production system, for testing and demonstration purposes primarily

There are a variety of tools I could have used; these are the ones I have chosen:

  • Terraform to configure the Azure Resource Groups and create an instance of the VM and associated network artefacts
  • Ansible to apply the post-deployment configuration to the VM and import the baseline database
  • Azure Storage Account web hosting for the test website
  • DevOps Build/Release pipelines to orchestrate the above
  • DevOps git repository to hold all of the code for this process