Saturday, March 18, 2017

ARM Template Updates for Managed Disks

Recently, Azure announced managed disks.  The premise behind these disks is that you no longer have to manage disks yourself, from a couple of different points of view.  The first is the IOPS limit on storage accounts.  Traditionally, you get 20,000 IOPS from a standard storage account, which, of course, can only support so many VM disks.  As you get to larger deployments this can become a hassle, so much so that many people were starting to deploy a single storage account per VM.  The second is the availability of the storage.  In the old system, you were not in control of the storage stamp that your disks were placed on.  While you could place the VMs in an availability set, this did not guarantee that the storage was also placed on separate stamps, still leaving deployments with a single point of failure.

You can read more about managed disks here.

Managed disks fix all of these issues, and I just wanted to chat a little bit about the ARM template changes required to make managed disks work.

Firstly, from an availability standpoint, managed disks can only be used with managed availability sets.


In order to set this, you need to target one of the newer API versions for compute.   I've found that 2016-04-30-preview is the one to target.  (Github).  Some examples on the internet, particularly https://social.technet.microsoft.com/Forums/Azure/en-US/c07c2f1c-d70d-4182-a918-0309897e2163/arm-template-example-managed-disks?forum=windowsazuredata, mention that you need to add a SKU parameter with the value of "Aligned".  I've found this not to be true.  Here is my version:

    {
      "comments": "Availability set for the cluster servers",
      "type": "Microsoft.Compute/availabilitySets",
      "apiVersion": "2016-04-30-preview",
      "name": "[parameters('availabilitySetName')]",
      "tags": {
        "displayName": "Cluster AS"
      },
      "location": "[resourceGroup().location]",
      "properties": {
        "platformFaultDomainCount": "2",
        "platformUpdateDomainCount": "5",
        "managed": true
      }
    },
You'll notice above that the "managed" property under properties is what needs to be set.  This is also detailed in the schema.

As managed disks now exist at the region level and are either Standard or Premium, you do not need to specify a storage account or create one ahead of time.  I've found different templates on the internet where sometimes a disk resource ID is specified and sometimes just the type of storage.  I've yet to play around with it enough to understand the differences between the approaches.
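For illustration, here is a sketch of both forms side by side in a dataDisks array.  The first entry lets the platform create the disk implicitly from a storage type; the second attaches a managed disk that already exists as its own Microsoft.Compute/disks resource (the disk name "existingDataDisk" is a placeholder I made up for this example):

```json
"dataDisks": [
  {
    "lun": 0,
    "name": "server01-Disk1",
    "createOption": "Empty",
    "managedDisk": {
      "storageAccountType": "Standard_LRS"
    },
    "diskSizeGB": 32
  },
  {
    "lun": 1,
    "createOption": "Attach",
    "managedDisk": {
      "id": "[resourceId('Microsoft.Compute/disks', 'existingDataDisk')]"
    }
  }
]
```

With the explicit id form the disk's size and type come from the existing disk resource, which would explain why templates that use it don't also specify storageAccountType.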

Here is what I've done for OS disks.

          "osDisk": {
            "name": "[concat(parameters('serverNamePrefix'),'0',copyindex(1),'-OS')]",
            "createOption": "FromImage",
            "caching": "ReadWrite",
            "managedDisk": {
              "storageAccountType": "Standard_LRS"
            },
            "diskSizeGB": 64
          },

And for data disks:

          "dataDisks": [
            {
              "lun": 0,
              "name": "[concat(parameters('serverNamePrefix'),'0',copyindex(1),'-Disk1')]",
              "createOption": "Empty",
              "managedDisk": {
                "storageAccountType": "Standard_LRS"
              },
              "caching": "None",
              "diskSizeGB": 32
            }
          ],
Remember that size now matters for billing, so try to keep those values in mind.  We are currently in promo pricing at 50%, but we all know that won't last!

Enjoy.



Saturday, March 11, 2017

An intro look at logging for Azure Storage


As I am sure you know by now, Azure storage is implemented as a service.  Because of this, Azure storage is accessible over the internet from any location in the world.  Given sufficient authentication (i.e., an Azure storage key or SAS token) anyone can access a storage account.  There is no way to make this communication completely private, and therefore most "prevention"-type security controls are not applicable to this type of deployment.  The goal of this post is to chat a little bit about Azure storage logs and how we can use them to gain some understanding of what is going on with our storage accounts.

The key questions I would like to understand are the following:
  • Can I determine when my keys are being recycled?
  • Can I determine who is accessing my storage account?
  • Can I determine what is being accessed from my storage account?
  • Can I determine how my storage account is being accessed?

Before we dive into answering those questions, let's talk a little bit about the logs that are available within Azure storage.

The first log that we can look at is the activity log for the Azure storage account.  This log captures all operations that were executed on a storage account, essentially representing the control-plane log for a given resource.  For more information on these logs, please see https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs.  In this log I would expect to see CRUD operations on settings/configuration relating to the particular resource.  Specifically, I would probably want to be looking at these logs to assist with understanding my key management operations.
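As a quick sketch (assuming the AzureRM PowerShell modules and an existing login), the same activity log can also be pulled programmatically with Get-AzureRmLog; the subscription, resource group, and account names in the resource ID below are placeholders:

```powershell
# Pull the last 7 days of control-plane events for a single storage account.
# Replace the placeholder IDs/names with your own values.
$resourceId = "/subscriptions/<sub-id>/resourceGroups/myRg/providers/Microsoft.Storage/storageAccounts/mystorageacct"

Get-AzureRmLog -ResourceId $resourceId -StartTime (Get-Date).AddDays(-7) |
    Select-Object EventTimestamp, OperationName, Caller, Status
```

This surfaces the same operation/initiator information discussed below, just in a form you can filter and script against.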

Here is an example of the activity log for a given Azure storage account:

[Image: activity log entries for a storage account]
As you can see from the image above, you can very quickly identify the operation type and who initiated the event.  Clicking on a particular event gives more detailed information that conforms to the following schema: https://docs.microsoft.com/en-us/azure/monitoring-and-diagnostics/monitoring-overview-activity-logs#event-schema.  Of particular note are fields such as httpRequest, which contains the client IP address for the action, and correlationId/eventId, which can be used for further troubleshooting.

The second log that we can look at is the Azure storage diagnostics log.  This log, when enabled, can capture metrics on the storage account as well as transaction-level details of actions performed against it.  This log represents the actions conducted against a storage account at the data plane level.  For more information on these logs, please see https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/enabling-storage-logging-and-accessing-log-data.  In this log, I would expect to see information about CRUD actions against resources within the Azure storage account.  It is important to note that these logs are stored inside a special container within the Azure storage account, and can be accessed by downloading them (via Azure Storage Explorer) to your desktop and analyzing them.  For information on the format of these logs, please see https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/storage-analytics-log-format.
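As a minimal sketch of working with these logs once downloaded: each line in a storage analytics log is a semicolon-delimited record, so you can pull out fields of interest by position.  The file path below is a placeholder, and the field indexes follow my reading of the version 1.0 layout in the format document linked above, so verify them against the version number at the start of your own log lines:

```powershell
# Parse a downloaded $logs file; the path is a placeholder.
# Assumed v1.0 field positions (verify against the format doc):
#   2 = operation-type, 7 = authentication-type,
#   11 = request-url, 12 = requested-object-key, 15 = requester-ip-address
Get-Content "C:\temp\storagelog.log" | ForEach-Object {
    $fields = $_ -split ';'
    [pscustomobject]@{
        Operation = $fields[2]
        AuthType  = $fields[7]
        Url       = $fields[11]
        ObjectKey = $fields[12]
        ClientIp  = $fields[15]
    }
}
```

Something like this makes it much easier to answer the who/what/how questions below across a large number of log entries.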

Okay, now that we have a brief understanding of the logging options in Azure storage, let's have a look at answering the questions posed at the beginning of this post.

Can I determine when my keys are being recycled?

As this is an action against the resource itself, we can turn to the Azure activity log to see this event.  Here is a snippet of what this event looks like.

[Image: activity log detail for the key regeneration event]
From the snippet above we can quickly see the event that occurred, the date/time, who initiated it, the scope of the authorization, and so on.  One key piece of information that is missing is which key was actually regenerated.  This is most likely because the regenerate key action takes the "keyType" parameter as a body element rather than on the query string.  Here is a snippet from PowerShell:

[Image: PowerShell output showing the regenerate key request]
From the MSDN docs (https://msdn.microsoft.com/en-us/library/azure/dn495112.aspx) you can see that the KeyType parameter can be either primary or secondary.
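For reference, the regeneration itself can be done from PowerShell with New-AzureRmStorageAccountKey; the resource group and account names below are placeholders:

```powershell
# Regenerate one of the account keys.
# "key1" selects the primary key, "key2" the secondary.
New-AzureRmStorageAccountKey -ResourceGroupName "myRg" `
                             -Name "mystorageacct" `
                             -KeyName "key1"
```

Since the key choice travels in the request body, this is presumably why the activity log entry doesn't show which key was rotated.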

Can I determine who is accessing my storage account?

To answer this question, we can turn to blob diagnostic logging.  In the log, there is a field for the IP address of the client requesting the blob.

[Image: diagnostic log entry showing the requester IP address]
Can I determine what is being accessed from my storage account?

Once again, blob diagnostic logging reveals this information via the request-url and requested-object-key fields of the log.

[Image: diagnostic log entry showing the request-url and requested-object-key fields]
Can I determine how my storage account is being accessed?

Once again, the blob diagnostic log captures this information in the authentication-type field.

[Image: diagnostic log entry showing the authentication-type field]
The issue here again is that there is no reference to which storage key was used, just a record with the word "authenticated" in it.

In conclusion, between the activity log and the diagnostic log, one can put together a picture of key events in the system and start to better understand the access and usage of a storage account.

Thursday, March 9, 2017

Azure Storage Keys

Azure storage is the bedrock of many of the services in the Azure platform.  While there are a host of controls that can be put in place to protect, secure, and monitor Azure storage, we need to remember that it is inherently a public-facing service and there is not much we can do to change that.  Given a storage account name and one of the two storage account keys, anyone can access your Azure storage account, from anywhere.  The goal of this post is to chat a little bit more about Azure storage keys.

It is important to note that there are two types of keys in Azure storage: Azure Storage Keys (ASK) and Shared Access Signatures (SAS).  This article focuses on ASKs rather than SAS tokens.  One interesting thing is that SAS tokens are actually signed by one of the ASKs, so regenerating an ASK will invalidate the SAS tokens that were generated with it.
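As a sketch of that relationship (account name, key value, and container name below are placeholders): a SAS token is generated from a storage context that was built with an ASK, so it is bound to that key.

```powershell
# Build a storage context from one of the account keys...
$ctx = New-AzureStorageContext -StorageAccountName "mystorageacct" `
                               -StorageAccountKey "<key1-value>"

# ...and sign a read-only, time-limited SAS for a container with it.
# If the key used here is later regenerated, this token stops working.
New-AzureStorageContainerSASToken -Name "mycontainer" `
                                  -Permission r `
                                  -ExpiryTime (Get-Date).AddHours(4) `
                                  -Context $ctx
```

Note that the SAS is computed locally from the key; nothing is stored server-side, which is why validity follows the key itself.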

Azure storage keys are easily accessible via the Azure REST API, and this has been incorporated into all the major access methods (CLI, portal, PowerShell, etc.).  From a portal perspective, simply navigate to the storage account in question and click on "Access keys".

[Image: the Access keys blade for a storage account in the Azure portal]

From a PowerShell perspective, you can access the keys by running the Get-AzureRmStorageAccountKey cmdlet (https://docs.microsoft.com/en-us/powershell/resourcemanager/azurerm.storage/v2.3.0/get-azurermstorageaccountkey).  It is important to note that this command will simply print the keys to the console, so please use it with caution.

I've created a little script using this technique that you can use to get storage account keys.

param(
    [Parameter(Mandatory=$true,HelpMessage="Subscription ID to target")]
    [string]$subscriptionId,
    [Parameter(Mandatory=$true,HelpMessage="Storage account to target")]
    [string]$storageAccountName
)

Write-Host "Authenticating to Azure..." -ForegroundColor Cyan
try
{
    # -ErrorAction Stop ensures a missing context lands in the catch block
    $context = Get-AzureRmContext -ErrorAction Stop
    if ($context.Subscription.SubscriptionId -ne $subscriptionId){
        throw "Not logged into the correct subscription"
    }
}
catch
{
    Login-AzureRmAccount -SubscriptionId $subscriptionId
}

# Locate the storage account anywhere in the subscription so the
# resource group name doesn't need to be passed as a parameter
$storageAccountReference = Find-AzureRmResource -ResourceNameEquals $storageAccountName `
                                                -ResourceType "Microsoft.Storage/storageAccounts"

if (-not $storageAccountReference){
    throw "Could not find $storageAccountName in $subscriptionId"
}

$keys = Get-AzureRmStorageAccountKey -ResourceGroupName $storageAccountReference.ResourceGroupName `
                                     -Name $storageAccountReference.Name

# Note: this writes the key values to the console
Write-Output $keys






Azure storage keys are used to provide remote access to the storage account, and it is important to note that these keys grant full permission to the account.  This access method predates SAS tokens, and most Azure documentation and services now point to using SAS tokens for access rather than using the ASKs directly.

Lastly, in order to be able to access the storage account keys, you need to have the required permissions on the resource itself.  Based on the official documentation, users looking to access the keys need Contributor or Owner permissions on the resource in question.
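As a quick way to check that (the subscription, resource group, and account names below are placeholders), you can list the role assignments scoped to the storage account:

```powershell
# List who holds roles (e.g. Owner, Contributor) on this storage account.
$scope = "/subscriptions/<sub-id>/resourceGroups/myRg/providers/Microsoft.Storage/storageAccounts/mystorageacct"

Get-AzureRmRoleAssignment -Scope $scope |
    Select-Object DisplayName, RoleDefinitionName
```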