This is part 3 in a series of posts about writing service brokers in .NET Core. All posts in the series:
- Marketplace and service provisioning
- Service binding
- Provisioning an Azure Storage account (this post)
- Binding an Azure Storage account
In the previous posts we implemented a service catalog, service (de)provisioning and service (un)binding. Both provisioning and binding were blocking operations that happened in-memory. In this post we will give some body to the implementation by provisioning an actual backend service: an Azure Storage account.
All source code for this post can be found here.
Azure, Azure Storage and Azure Active Directory
If you don’t know anything about Azure or Azure Storage, here’s a (very) short conceptual introduction to help explain the remainder of the post.
- Azure Active Directory (AAD) is Microsoft’s identity service in the cloud. It stores user identities and service principals and implements the OAuth 2.0 and OpenID Connect protocols. We will use the OAuth 2.0 client credentials grant flow to authorize the service broker to perform the necessary operations on Azure.
- An Azure Subscription is the billing unit that contains all the Azure resources you work with. If you want to do anything with Azure you need a subscription with payment details (a credit card for example).
- An Azure Resource Group is a container for your Azure resources. Usually you group resources that belong together (e.g.: for one application) into one resource group. A resource group is also a security boundary in the sense that you can authorize principals to perform certain operations on the resource group and the resources within.
- An Azure Storage account gives access to Azure Blob Storage, File Storage, Table Storage and Queues.
What are we building?
The service broker we are developing will use the OAuth 2.0 client credentials grant flow to obtain a token that authorizes the bearer to perform the necessary Azure operations. A custom role will be defined that gives the service broker exactly the set of permissions required.
Inside Cloud Foundry we have the concept of orgs and spaces as security boundaries. Azure Subscriptions and Resource Groups are at the same abstraction level. However, creating a new subscription from the service broker and linking credit card details would add too much complexity for now, so we take the following approach:
- Provisioning:
  - When a request comes in to provision a new Azure Storage account, we take the org/space combination and create a resource group with the name <org_guid>_<space_guid> (for example: 109718b6-e892-41e7-8993-09ace9544385_7e5f5bc3-1da9-4f14-8827-d88c09affe02). If the resource group already exists we do nothing.
  - Inside the resource group, we create a new storage account whose name derives from the service instance id (Azure Storage account names have a maximum length of 24 characters and service instance ids in PCF are GUIDs with a length of 32).
- Deprovisioning:
  - We remove the storage account.
  - If no other resources are provisioned inside the resource group, we delete the resource group.
- Binding:
  - We retrieve the storage connection string and return it inside the credentials object.
- Unbinding:
  - This is a no-op; nothing needs to happen on the Azure side.
Custom Azure role
Following the principle of least privilege we want to give our service broker the minimum set of permissions required to perform the task at hand. So it should be able to create, list and delete resource groups and create, list and delete storage accounts. In addition, the service broker should be able to read storage connection strings during bind operations.
This leads us to the following role definition:
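The role definition file itself (service-broker-role.json) is not reproduced here, but based on the permissions described above it looks roughly like the following sketch. The action names come from the Azure resource provider operations list; the exact set in the original file may differ slightly:

{
  "Name": "Azure Storage Service Broker",
  "IsCustom": true,
  "Description": "Can manage resource groups and storage accounts and read storage account keys.",
  "Actions": [
    "Microsoft.Resources/subscriptions/resourceGroups/read",
    "Microsoft.Resources/subscriptions/resourceGroups/write",
    "Microsoft.Resources/subscriptions/resourceGroups/delete",
    "Microsoft.Storage/storageAccounts/read",
    "Microsoft.Storage/storageAccounts/write",
    "Microsoft.Storage/storageAccounts/delete",
    "Microsoft.Storage/storageAccounts/listkeys/action"
  ],
  "NotActions": [],
  "AssignableScopes": ["/subscriptions/4c70a177-b978-43f9-9fc0-1e50dd20271f"]
}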
With this role definition we can create the role in our Azure subscription using the Azure CLI:
az login
az configure --defaults location=westeurope
az account set --subscription 4c70a177-b978-43f9-9fc0-1e50dd20271f
az role definition create --role-definition service-broker-role.json
A short inspection in the Azure portal tells us that our role has been created:
If you wonder where the action names (e.g. Microsoft.Storage/storageAccounts/read) come from, you can find the complete list here.
Azure AD application
The next step is to create an Azure AD application and service principal so that our service broker can obtain an access token for the required operations. The service principal will be assigned to the role we just defined.
I chose to create the AAD application from the Azure portal and the result is an application named Azure Storage Service Broker with client id b2213c77-9d93-474b-9b7f-89a1f0040162:
Next we generate a client secret that, together with the client id, allows the service broker to authenticate for this AD application using the standard OAuth 2.0 client credentials grant flow.
Finally we assign the service principal that corresponds to the Azure AD application to the role we created earlier:
az ad sp list --display-name 'Azure Storage Service Broker' | jq '.[0].objectId'
az role assignment create \
    --assignee-object-id 5afa5a58-fa38-4122-a114-34b989ed88b4 \
    --role 'Azure Storage Service Broker'
First we list all service principals with the name Azure Storage Service Broker and get the object id of the first result. Next we assign the Azure Storage Service Broker role to this principal.
We have now done all the preparatory work on the Azure side, back to our service broker application.
Azure REST API authorization
The first thing we need to worry about is getting the proper authorization for performing all desired operations. For this we use the Microsoft Authentication Library for .NET (MSAL). MSAL lets us acquire tokens from Azure AD using the OAuth 2.0 client credentials flow via the ConfidentialClientApplication class:
private static readonly TokenCache AppTokenCache = new TokenCache();
private readonly IConfidentialClientApplication _clientApplication;
public AzureAuthorizationHandler(IOptions<AzureRMAuthOptions> azureRMAuthOptions)
{
var azureRMAuth = azureRMAuthOptions.Value;
_clientApplication = new ConfidentialClientApplication(
azureRMAuth.ClientId,
$"{azureRMAuth.Instance}{azureRMAuth.TenantId}",
$"https://{azureRMAuth.ClientId}",
new ClientCredential(azureRMAuth.ClientSecret),
null,
AppTokenCache);
}
We need a number of settings, most of which are defined in the Azure AD app we created earlier. Here’s an overview of them (from a Cloud Foundry user-provided service which we will use later):
The following settings are necessary to be able to get an authorization token (via client credentials flow) from Azure AD that grants the bearer the permissions we defined earlier in our custom role:
- client_id: the id of the Azure AD application (OAuth 2.0 Client Identifier)
- client_secret: a secret shared between Azure AD and our service broker (OAuth 2.0 Client Password)
- instance and tenant id: together these form the base URL for the OAuth 2.0 token endpoint, in this case https://login.microsoftonline.com/e402c5fb-58e9-48c3-b567-741c4cef0b96/oauth2/v2.0/token (OAuth 2.0 Token Endpoint)
- redirect_uri: part of the OAuth 2.0 spec, but not relevant for the client credentials flow, so we can enter any valid URI we like here (null is not accepted)
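For reference, the options class that the AzureAuthorizationHandler constructor binds these settings to could look something like this. This is a sketch based on the properties used in the constructor above; the actual class lives in the accompanying source code:

public class AzureRMAuthOptions
{
    // Base URL of the Azure AD instance, e.g. https://login.microsoftonline.com/
    public string Instance { get; set; }

    // The Azure AD tenant (directory) id.
    public string TenantId { get; set; }

    // Client id and secret of the Azure AD application (OAuth 2.0 client credentials).
    public string ClientId { get; set; }
    public string ClientSecret { get; set; }
}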
Azure REST API operations
Every Azure operation has a corresponding REST API call. For the purpose of our service broker I wrote a small Azure REST API client library containing the operations we need. I made use of IHttpClientFactory to create typed HTTP clients, as described here.
The library has one entry point, AddAzureServices, for adding all client middleware dependencies:
public static class ServiceCollectionExtensions
{
public static IServiceCollection AddAzureServices(
this IServiceCollection services,
Action<AzureOptions> configureAzureOptions,
Action<AzureADAuthOptions> configureAzureADAuthOptions)
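In Startup.ConfigureServices the registration can then be wired up along these lines. This is a sketch: the configuration keys are illustrative and I assume AzureADAuthOptions exposes the same properties we saw on the authorization handler above:

services.AddAzureServices(
    azure =>
    {
        // Which Azure subscription the broker provisions resources in.
        azure.SubscriptionId = Configuration["azure:subscriptionId"];
    },
    auth =>
    {
        // Settings for the OAuth 2.0 client credentials flow against Azure AD.
        auth.Instance = Configuration["azureAuth:instance"];
        auth.TenantId = Configuration["azureAuth:tenantId"];
        auth.ClientId = Configuration["azureAuth:clientId"];
        auth.ClientSecret = Configuration["azureAuth:clientSecret"];
    });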
One example dependency that is added to the service collection is a typed HTTP client for accessing Azure Storage:
services
.AddHttpClient<IAzureStorageClient, AzureStorageClient>((serviceProvider, client) =>
{
var azureConfig = serviceProvider.GetRequiredService<AzureOptions>();
client.BaseAddress =
new Uri($"https://management.azure.com/subscriptions/{azureConfig.SubscriptionId}/resourceGroups/");
})
.AddHttpMessageHandler<AzureAuthorizationHandler>();
We add a typed HTTP client that implements the interface IAzureStorageClient and set the base address for accessing the Azure REST API. In addition, we add a DelegatingHandler implementation that fetches an authorization token and sets it on every request.
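A sketch of what that handler might look like, continuing the AzureAuthorizationHandler from above (the _clientApplication field is the one initialized in the constructor shown earlier). The .default scope for the Azure Resource Manager API and the 2.x-style MSAL call AcquireTokenForClientAsync are assumptions that depend on the MSAL version in use:

public class AzureAuthorizationHandler : DelegatingHandler
{
    // Request an app-only token for the Azure Resource Manager API.
    private static readonly string[] Scopes = { "https://management.azure.com/.default" };

    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        // Acquire a token via the client credentials flow; MSAL caches it in AppTokenCache.
        var authResult = await _clientApplication.AcquireTokenForClientAsync(Scopes);

        // Attach the bearer token to the outgoing Azure REST API request.
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", authResult.AccessToken);

        return await base.SendAsync(request, cancellationToken);
    }
}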
Back to the service broker
With all the plumbing out of the way we can finally implement a service broker that provisions Azure Storage accounts. Let’s take the provisioning step as an example. All code samples below are from the ServiceInstanceBlocking.ProvisionAsync method (see the first blog post for details on this method).
public async Task<ServiceInstanceProvision> ProvisionAsync(ServiceInstanceContext context, ServiceInstanceProvisionRequest request)
{
LogContext(_log, "Provision", context);
LogRequest(_log, request);
var orgId = request.OrganizationGuid;
var spaceId = request.SpaceGuid;
var resourceGroupName = $"{orgId}_{spaceId}";
The first step is to determine the name of the resource group, a combination of the org and space GUIDs. Next, we create the resource group if it does not exist yet:
// Create resource group if it does not yet exist.
var exists = await _azureResourceGroupClient.ResourceGroupExists(resourceGroupName);
if (exists)
{
_log.LogInformation($"Resource group {resourceGroupName} exists");
}
else
{
_log.LogInformation($"Resource group {resourceGroupName} does not exist: creating");
var resourceGroup = await _azureResourceGroupClient.CreateResourceGroup(new ResourceGroup
{
Name = resourceGroupName,
Location = "westeurope",
Tags = new Dictionary<string, string>
{
{ "cf_org_id", orgId },
{ "cf_space_id", spaceId }
}
});
_log.LogInformation($"Resource group {resourceGroupName} created: {resourceGroup.Id}");
}
Note that we apply some tags to the resource group to be able to link it back to our Cloud Foundry environment. The final step is to create the Azure Storage account itself. A lot of the properties are hard-coded for now: the location is always westeurope, the SKU is Standard_LRS, etc. In a later blog post we will see how to parameterize these properties.
// Create storage account.
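// Azure Storage account names are limited to 24 characters; a GUID without dashes is 32 characters, so we take the first 24.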
var storageAccountName = context.InstanceId.Replace("-", "").Substring(0, 24);
await _azureStorageClient.CreateStorageAccount(
resourceGroupName,
new StorageAccount
{
Name = storageAccountName,
Kind = StorageKind.StorageV2,
Location = "westeurope",
Properties = new StorageAccountProperties
{
AccessTier = StorageAccessTier.Hot,
Encryption = new StorageEncryption
{
KeySource = StorageEncryptionKeySource.Storage,
Services = new StorageEncryptionServices
{
Blob = new StorageEncryptionService { Enabled = true },
File = new StorageEncryptionService { Enabled = true },
Table = new StorageEncryptionService { Enabled = true },
Queue = new StorageEncryptionService { Enabled = true }
}
},
SupportsHttpsTrafficOnly = true
},
Sku = new StorageSku
{
Name = StorageSkuName.Standard_LRS,
Tier = StorageSkuTier.Standard
},
Tags = new Dictionary<string, string>
{
{ "cf_org_id", orgId },
{ "cf_space_id", spaceId },
{ "cf_service_instance_id", context.InstanceId }
}
});
Again we provide some tags that we use to link Azure resources to CF service instances.
Service broker configuration
The new service broker needs a bit of configuration to be able to authorize and perform operations. There are a number of ways to provide this configuration:
- in appsettings.<env>.json, but now we have to push stuff to source control that probably varies per environment
- directly from the environment by using cf set-env, as we did with the basic authentication password in the first post, but the number of settings has grown so this becomes a bit cumbersome
- via a user-provided service instance
I opted for the last approach by defining two user-provided service instances: one for settings concerning authorization and one for settings concerning the Azure subscription we target. The user-provided service instance for the authorization settings is created by providing a JSON object with these settings.
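With the cf CLI that looks roughly like this (the service instance name azure-rm-auth and the secret placeholder are illustrative; the tenant and client ids are the ones we saw earlier):

cf create-user-provided-service azure-rm-auth -p '{
  "instance": "https://login.microsoftonline.com/",
  "tenant_id": "e402c5fb-58e9-48c3-b567-741c4cef0b96",
  "client_id": "b2213c77-9d93-474b-9b7f-89a1f0040162",
  "client_secret": "<client secret>"
}'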
Next we bind the user-provided service to our rwwilden-broker app:
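With the cf CLI this is a one-liner (again assuming the illustrative azure-rm-auth name from above); after binding, the app needs a restage for the new environment to take effect:

cf bind-service rwwilden-broker azure-rm-auth
cf restage rwwilden-broker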
After binding the service we show the environment for the app. You can see that the credentials are available in the VCAP_SERVICES environment variable.
Steeltoe
As you can see from the last screenshot, we have one VCAP_SERVICES environment variable with our settings buried deep within. We could use some help parsing this. Lucky for us, a library exists that can help us do just that: Steeltoe. Part of the Steeltoe set of libraries is Steeltoe.Extensions.Configuration.CloudFoundryCore, which helps provide settings from VCAP_SERVICES in a more readable format via the CloudFoundryServicesOptions class.
This is in many ways still a dictionary of properties, so we need to perform some translation to get to the AzureRMAuthOptions class that the small Azure library we wrote expects. You can check out the Startup class to see how that works.
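As a rough sketch of that wiring (the exact Steeltoe API surface differs a bit between versions, so treat the details as illustrative):

// Program.cs: add VCAP_APPLICATION/VCAP_SERVICES as configuration sources via Steeltoe.
public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((context, config) => config.AddCloudFoundry())
        .UseStartup<Startup>()
        .Build();

// Startup.ConfigureServices: make the parsed VCAP_SERVICES available as
// IOptions<CloudFoundryServicesOptions>, ready to be translated into AzureRMAuthOptions.
services.ConfigureCloudFoundryOptions(Configuration);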
Testing
We now have a new version of the service broker running inside Pivotal Cloud Foundry that actually provisions a backend resource: an Azure Storage account inside a resource group. The service broker receives its configuration from two user-provided service instances and has exactly the set of permissions required to do its job.
Now let’s see if all this works. Maybe you remember from the previous posts that the service is named rwwilden (not that good a name anymore, but alas). There is one service plan called basic, so we can create a service instance as follows:
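For example, with an illustrative instance name and the shell’s time builtin to capture how long the call takes:

time cf create-service rwwilden basic my-azure-storage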
Note that I introduced timing information to show how long it takes before the command returns. In this case it takes about 27s. Remember that we implemented a blocking version of service instance creation so somewhere a thread is blocked for 27s. Not the worst for these one-off operations but we could do better (which is the topic of a next post).
Let’s check the Azure portal to see if a resource group is created with a storage account:
I underlined the interesting parts:
- the name of the resource group is a combination of the CF org and space ids
- the resource group has two tags: the cf_org_id and the cf_space_id
- the resource group contains one resource, a storage account whose name is the first 24 characters of the service instance id
So it seems all our efforts paid off and our service broker can provision Azure Storage accounts! Let’s open the Storage account itself:
As you can see it has the three tags we defined and the hard-coded properties we specified. Now let’s create another service instance in the same org/space. The expected behavior is a new Storage account in the same resource group:
As you can see this takes about the same amount of time. A quick check in the Azure portal reveals that a second storage account is created inside the resource group:
Now let’s see if deprovisioning also works by deleting the two service instances:
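For example, assuming the two instances were named my-azure-storage and my-azure-storage-2 (illustrative names):

cf delete-service my-azure-storage -f
cf delete-service my-azure-storage-2 -f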
Both operations succeed and a check in the Azure portal reveals that both Storage accounts and the Resource Group they were a part of have disappeared.
Conclusion
In this (long) post we added a small Azure service library, implemented a custom Azure role for our service broker and configured the service broker to get an authorization token for performing a number of Azure operations. The primary goal for this exercise was to gain some experience implementing a real service broker. Staying with the in-memory version of the previous blog posts would not expose us to the problems we might encounter in the real world.
For this post we just implemented service provisioning and deprovisioning. The next post will handle binding and unbinding.
After that, we will turn our attention to asynchronous provisioning and binding.
The original article was posted on: ronaldwildenberg.com