Saturday, July 22, 2017

Static Code Analysis for ARM Templates?

In a previous post I discussed the fact that ARM templates have an interesting flow when it comes to dealing with passwords.  The recommendation was that the ARM templates should be reviewed on each check-in.  This got me thinking that maybe we need a static code analysis tool for ARM templates.  The goal of this post is to discuss this in more detail.

Code analysis tools (particularly static ones) have long been in use for many managed and unmanaged languages.  The practice is popular enough that code analysis is built into Visual Studio 2017, and security guidance generally recommends the use of these tools.  Taking this further, many development shops use this type of analysis not only to enforce code quality but also to enforce style guides.  As ARM templates are "infrastructure-as-code", it only makes sense to extend these processes to those artifacts as well.

I think there are two avenues where static analysis could assist with ARM templates.  The first is identifying common security concerns in these templates.  The most obvious one is how passwords are handled.  As my previous post discussed, you can pass in passwords via securestring parameters, or you can use keyvault references.  The better method is to use keyvault, so we could write a test that ensures no parameters of type "securestring" exist.  This is just one example.

The second use would be around style.  One example is the use of description attributes on parameter elements.  You could issue a warning (for example) if a parameter is missing this attribute.  This could help enforce good design guidelines at scale.

I know that Microsoft has been working on expanding Azure Resource Policies, and while that is a step in the right direction, resource policies focus more on what can be created in Azure (and what properties can/should/must be set) than on how a template is written.  I see a need for rules that are enforced in Azure (via resource policies) and also enforced in a build step.  The build-step aspect would provide rapid feedback to developers on policy violations (usability, security, or otherwise).
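
To make the build-step idea concrete, here is a rough sketch of such a gate.  It assumes the analysis script (like the sample further down) is saved as Analyze-ArmTemplate.ps1 and is extended to exit with a non-zero code when a test fails; both the script name and the exit-code behaviour are my assumptions, not something the sample below actually implements.

# Hypothetical build step: run the rule script against every ARM template in the repo
# and fail the build if any template violates a rule.  Assumes Analyze-ArmTemplate.ps1
# calls "exit 1" when a test fails.
$failed = $false
Get-ChildItem -Path .\templates -Filter *.json -Recurse | ForEach-Object {
    & .\Analyze-ArmTemplate.ps1 -filePath $_.FullName
    if ($LASTEXITCODE -ne 0) { $failed = $true }
}
if ($failed) {
    Write-Error "One or more ARM templates failed static analysis"
    exit 1
}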

Ultimately, coming up with a full set of tests is outside the scope of this blog post.  I'm really looking for some feedback from the community on the viability of something like this.  For completeness, here is some example code that could be used to write tests.


#
# Script.ps1
#

param(
 [Parameter(Mandatory=$true)]
 [string]$filePath
)

# Describes a single template parameter (name, type, and metadata description)
class ARMParameter{
 [string]$name
 [string]$type
 [string]$metadata
 ARMParameter([string]$name,[string]$type,[string]$metadata){
  $this.name = $name
  $this.type = $type
  $this.metadata = $metadata
 }
 [string] ToString(){
  return "{0}-{1}" -f $this.name,$this.type
 }
}

class ARMResource{
}

# Wraps a parsed ARM template and exposes its parameters
class ARMTemplate{
 hidden [PSCustomObject]$json
 hidden [ARMParameter[]]$parameters

 # Walk the template's "parameters" object and build ARMParameter instances
 hidden [Void] ParseParameters(){
  Write-Verbose "Entering ParseParameters"
  $parsedParameters = @()
  $parametersRaw = $this.json.parameters
  foreach ($parameterRaw in $parametersRaw.psobject.Properties){
   Write-Verbose "parsing $parameterRaw"
   $name = $parameterRaw.Name
   $type = ""
   $metadata = ""
   foreach ($parameterRawContent in $parameterRaw.value.psobject.Properties){
    if ($parameterRawContent.Name -eq "type"){
     $type = $parameterRawContent.value
    }

    if ($parameterRawContent.Name -eq "metadata"){
     $metadata = $parameterRawContent.value.description
    }
   }
   $parsedParameters += [ARMParameter]::new($name,$type,$metadata)
  }
  $this.parameters = $parsedParameters
 }

 # Load the template from disk and parse its parameters
 ARMTemplate([string]$filePath){
  Write-Verbose "Creating ARMTemplate Object"
  $this.json = ConvertFrom-Json -InputObject ((Get-Content -Path $filePath) -Join "`n")
  $this.ParseParameters()
 }

 [PsCustomObject[]] GetParametersByType([string]$type){
  return $this.parameters | Where-Object {$_.type -eq $type}
 }

 [PsCustomObject[]] GetParameters(){
  return $this.parameters
 }
}


if (!(Test-Path $filePath)){
 Write-Error ("$filePath does not exist")
 exit
}
$armTemplate = [ARMTemplate]::new($filePath)

Write-Host "Test 1: Parameters with securestring" -ForegroundColor Cyan
$parameterWithTypeSecureString = $armTemplate.GetParametersByType("securestring")
if ($parameterWithTypeSecureString.length -eq 0){
 Write-Host "PASSED: No parameters with securestring type found" -ForegroundColor Green
} else {
 Write-Host "FAILED: Review Required: Paramters found with securestring type" -ForegroundColor Red
 Write-Host $parameterWithTypeSecureString
@"
Notes
===========
 Since these securestring parameters are passed in via the template, a reviewer should ensure that the pipeline for deployment
 handles the password in a secure manner.  An example would be a build server that retrieves the password from a secret store.
 Pipelines that give the developer access to the password for deployment purposes, or that store the credentials in plain-text
 should be avoided.
"@  
 
}


Write-Host "Test 2: All parameters should have metadata flag" -ForegroundColor Cyan
$parameters = $armTemplate.GetParameters()
$parametersWithNoMetadata = $parameters.Where({[string]::IsNullOrEmpty($_.metadata)})
if ($parametersWithNoMetadata.length -gt 0){
 Write-Host "Failed: Review Required: Metadata description missing from parameter fields" -ForegroundColor Red
 Write-Host "The following properties should have a metadata description added"
 Write-Host $parametersWithNoMetadata
@"
Notes
===========
 All parameters should contain a metadata tag with a description tag that defines the purpose of the property and where it 
 is used in the template.
"@  
} else {
 Write-Host "Passed" -ForegroundColor Green
}


So what are some thoughts around this concept?  Is it worth creating something like this, where a team could define "standards" from both a usability and a security perspective and have them enforced (likely as part of a build pipeline)?  Does something like this already exist in the marketplace and I am simply unaware of it?




Wednesday, July 19, 2017

Enterprise Cloud Strategy Part 2 - Experimentation

So far in our tour of Microsoft's Enterprise Cloud Strategy book, we've discussed the 5 R's as a methodology for deciding how to "modernize" your applications.  The next section of the book discusses three common steps in a cloud migration and then focuses on the concept of experimentation.

What is experimentation in this context?

In the context of the book, the authors describe experimentation as a key step to cloud adoption.  In this step, "the engineers and others create the IT department's first cloud applications, with the objective of learning what the cloud is all about...".  The goal, of course, is to give IT the opportunity to learn about the cloud and the various aspects of building applications that live in the cloud.  One interesting aspect is how the book defines the principles of "a culture of experimentation".


My Thoughts

In my opinion, safe-to-fail experiments are key to the success of any IT project.  While I'd like to think that cloud computing does not meet the traditional definition of a complex system, the act of designing solutions in the cloud can be extremely difficult.  Moving business targets, an ever-changing cloud capability landscape, and endless possible effective solutions all decrease the predictability of design outcomes, even for similar projects.  Safe-to-fail experiments are key to testing architectures that push the boundaries of the cloud platform, make use of the latest enhancements/improvements, and provide targeted feedback for lessons learned.

The second point I'd like to make is that the book presents experimentation as a first step in an organization's adoption of the cloud.  I would argue that this step is important not only in the initial phases of an overall roadmap, but also for every single project that is in scope.  When I am working with my clients, I always push for a proof-of-concept phase as part of a cloud migration. What this allows us to do is:

  • Put an appropriate amount of design work in, without having to define everything up front
  • Experiment/test with multiple architectures to test for best fit
  • Put focus on delivering repeatable steps via automation
This type of approach has many benefits in my mind.  Firstly, proof-of-concept phases can be treated as safe-to-fail experiments, allowing delivery teams flexibility in approach and output.  Proof-of-concept phases are short in duration, which allows for quick iteration on target architectures.  The focus on automation not only increases the velocity of future work, but also supports a more agile delivery approach.  From a business perspective, proof-of-concept phases help drive down uncertainty and risk in the rest of the project delivery, while increasing the accuracy of estimates around the time and effort needed for project completion.

In conclusion, I'm a big believer in experimentation as an important aspect of project delivery, big or small.  In the context of the book, experimentation is a phase where IT is given the chance to learn more about the various aspects of the cloud and how to run applications within it.  It fits well with a "crawl, walk, run" approach, and it can pay dividends in the long run.  My addition would be that experimentation is fit not only for the start of a cloud journey, but also for every project within that program.



Saturday, July 15, 2017

Enterprise Cloud Strategy Part 1 - The 5 R's

Microsoft has released the 2nd edition of its Enterprise Cloud Strategy book, and it has a ton of good content for companies looking to make the jump to the cloud.  The beauty of this type of book is that it really is "cloud agnostic".  While specific examples and references are made to the Azure platform, you can use the concepts here to plan your cloud strategy regardless of the target cloud.

I wanted to do a review of this book, but I quickly realized that it would be a large undertaking.  There are so many core concepts discussed that it would be hard to capture all of them in one post.  I've decided to write a short series on the different aspects covered and my thoughts/experiences in dealing with clients in these specific areas.

Part 1, this post, will chat about the 5 R's of modernization.

When enterprises make the move to the cloud, there is generally a business reason to do so.  The business derives value from the applications that IT hosts/maintains/builds for it, and therefore a lot of cloud migration discussion focuses on how to get applications "to the cloud".  We are not all fortunate enough to work on green-field cloud apps, and the 5 R's represent a set of actions one could take to migrate existing applications.

The first option is retire.  This is an often overlooked option when reviewing applications for a cloud migration.  Simply put, there is always the option to mark an application for retirement and not consider it for a cloud migration at all.  Start working with the business to decommission the application and determine a best fit for the areas of capability that are still required.

The second option is replace.  From experience, I find that very few companies are interested in the replace option as a first step, but it is important to consider as part of the process.  With the pace of technological innovation, especially in the SaaS space, there is a good chance that the legacy application in scope has an acceptable alternative in the marketplace.  I always suggest a short period where business SMEs and a cloud architect investigate options in the field.  One important note is that this is generally only viable when the application is non-differentiating.  Applications that act as the 'secret sauce' separating your client from its competition are generally not a good fit for a pre-canned SaaS app.  Another con to this approach is integration: chaining SaaS applications together to deliver business value can be a difficult task.

The third option is to retain, wrap, and expand.  When considering this option, one must understand why the particular application is moving to the cloud.  Retain/wrap strategies are good when changes cannot be made to the application; in my experience, I have rarely seen legacy applications written to accommodate such a pattern.  Data replication patterns seem to be the best fit here, allowing you to isolate the application (and then move it) while still making the data accessible to other downstream systems.  Generally this method is used to save costs, so one must understand the ingress/egress and storage costs involved.

Expanding an application is an interesting one.  One use case I have seen is batched operations, particularly for simulations.  Cloud services can be used to provide on-demand capacity to existing processes, and many Azure batch capabilities work with existing HPC techniques and toolsets.

The next option is rehost.  I always think of this as one of the "boring" approaches to cloud migration.  In most cases, you can mimic your existing environment in the cloud and simply move the hosting of the application there.  One thing that is often overlooked in this approach is the cost of integration with downstream systems, security systems, etc.  I've rarely found an isolated application (of value) that did not involve some degree of conversation around integration when this approach is used.  Further, integration methods that work great on-premises can have large latency/bandwidth implications in the cloud.

The last option is reenvision.  Many companies without large in-house development teams tend to shy away from this approach.  Others have their development teams engaged on new features and not available to completely redesign the application.  While this approach may seem daunting, it can often be justified from a cost perspective.  The development time spent not only allows a legacy system to better meet current business requirements, but a modern architecture also helps the application scale and integrate better, allowing for easier enhancements in the future.

In conclusion, there are pros and cons to every approach from a cloud migration perspective.  One must really understand the main business drivers for the overall cloud strategy, as well as for the particular application, before choosing a suitable method.  Cost, time, and resource concerns all play into picking a successful migration strategy.

Sunday, July 9, 2017

Azure ARM Templates and Passwords

Password management in CI/CD pipelines is an important part of creating a secure DevOps workflow.   The goal of this post is to chat a little bit about ARM templates and how passwords are used within them.

How are passwords passed in?

Regardless of the keyvault integration options in ARM templates, the base template itself accepts a password as a securestring and works with it in that form throughout the deployment.

Here is an example of an Azure SQL server being deployed via a template.

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "password": {
      "type": "securestring"
    }
  },
  "variables": {
    "blahName": "[concat('blah', uniqueString(resourceGroup().id))]"
  },
  "resources": [
    {
      "name": "[variables('blahName')]",
      "type": "Microsoft.Sql/servers",
      "location": "[resourceGroup().location]",
      "apiVersion": "2014-04-01-preview",
      "dependsOn": [ ],
      "tags": {
        "displayName": "blah"
      },
      "properties": {
        "administratorLogin": "blah",
        "administratorLoginPassword": "[parameters('password')]"
      },
      "resources": [
        {
          "name": "AllowAllWindowsAzureIps",
          "type": "firewallrules",
          "location": "[resourceGroup().location]",
          "apiVersion": "2014-04-01-preview",
          "dependsOn": [
            "[resourceId('Microsoft.Sql/servers', variables('blahName'))]"
          ],
          "properties": {
            "startIpAddress": "0.0.0.0",
            "endIpAddress": "0.0.0.0"
          }
        }
      ]
    }

  ],
  "outputs": {
    "password": {
      "type": "string",
      "value": "[parameters('password')]"
    }
  }
}

Can I directly output the password, in its unsecured form?

You bet.  If you notice, in the template above I pass in a securestring parameter but output the same value as a plain string, so the plaintext password shows up directly in the deployment outputs.
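
As a rough illustration (the resource group and file names are made up), deploying the template and reading its outputs returns the value in plain text:

# Hypothetical deployment of the template above; names are examples only
$securePassword = Read-Host -Prompt "SQL admin password" -AsSecureString

$deployment = New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "blah-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{ password = $securePassword }

# The 'password' output defined in the template comes back as a plain string
$deployment.Outputs["password"].Value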


It is important to note that this is possible regardless of whether keyvault references are used or not.

Do the passwords show up in debug mode?

ARM template deployments have a debug log level that you can set, and you can ask it to log the request content, the response content, or both.  When a securestring password is passed in, it is converted to plaintext before being transmitted across the wire, and as such it shows up in the debug logs.
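
As a rough sketch with the AzureRM cmdlets of the era (the resource group name, the $securePassword variable from the previous snippet, and the deployment name are assumptions on my part):

# Deploy with full debug logging enabled
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "blah-rg" `
    -TemplateFile ".\azuredeploy.json" `
    -TemplateParameterObject @{ password = $securePassword } `
    -DeploymentDebugLogLevel All

# Pull the logged operations; the deployment name defaults to the template file name.
# The logged request content includes administratorLoginPassword in plain text.
Get-AzureRmResourceGroupDeploymentOperation `
    -ResourceGroupName "blah-rg" `
    -DeploymentName "azuredeploy" |
    Select-Object -ExpandProperty Properties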


How do I do this securely?

Good question.  Ultimately, because even the keyvault integration hands the value to the ARM template as a securestring, it is hard to build a fully secured pipeline.  Here are some thoughts:

Deployment pipeline separation

Essentially, I think ARM template deployments should go through a secured build/release pipeline that does not allow developers to edit the deployment process.  This removes the need for developers to have direct access to the secrets (either in the pipeline or in keyvault) and protects against developers using the debug features to pull the requests containing the unsecured password.

ARM Template separation

At this point, there are many ways to leak confidential data from an ARM template.  I showed one above, using the output feature.  It is important to note that this would probably still work even with a separate deployment pipeline, since the pipeline itself would log the outputs and the developer probably has access to the deployment logs for "debugging purposes".  I think the solution here is more of a process one than a technical one.  Here are some suggestions:

  • Separate project for ARM templates
  • Code review on each check-in
  • Non-developer resource to modify build/release pipeline to use new version of the template
In conclusion, password management in raw ARM templates leaves much to be desired.  The fact that passwords can be leaked in many ways requires firm processes in the build/release pipeline to protect secrets.  Your process may still require multiple people to ensure requirements such as segregation of duties are met appropriately.


Sunday, July 2, 2017

Azure Elastic Pool Databases and ARM Templates

Recently, I've been working on dynamically creating elastic pool databases based on customer need/demand.  The goal of this post is to discuss some aspects of that process.

What are elastic pools?

Essentially, elastic pools are an effective way of managing many databases that have varying usage/demand profiles.  Instead of thick-provisioning compute capacity on a per-database basis, you can create a "pool" of resources and then deploy many databases on top of that pool.  For more information, here is a link to the documentation.

What is the architecture of elastic pools?

Logically, Azure has the concept of a "SQL server" that can host many elastic pools and/or regular SQL databases.  At the server level, authentication/backup/audit are configured, and each logical resource deployed on the server inherits those properties.  At the pool level, performance targets are set (in eDTUs), diagnostics can be configured, and IAM policies can be set.  If you are deploying a pool, you can then deploy several databases on top of it.  These databases are the logical containers for data, handling authentication/access concerns at that level.
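
As a rough PowerShell sketch of that hierarchy (the logical server is assumed to already exist, and all names and sizes below are made up):

# Create a pool on an existing logical server, then place a database inside it
New-AzureRmSqlElasticPool -ResourceGroupName "shared-sql-rg" `
                          -ServerName "sharedsql01" `
                          -ElasticPoolName "pool-small" `
                          -Edition "Standard" `
                          -Dtu 100

New-AzureRmSqlDatabase -ResourceGroupName "shared-sql-rg" `
                       -ServerName "sharedsql01" `
                       -DatabaseName "customer-a" `
                       -ElasticPoolName "pool-small"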

Elastic pools suffer from the noisy neighbours problem, so due care should be taken with the architecture.

Okay, let's build the server

For my deployment, I decided to create a separate SQL server ARM template to handle that concern.  I am big into PowerShell orchestration, and my scripts already use it as the deployment technology, so it is easy for me to separate these concerns while keeping a clean process for the end user.

Here is the ARM template that I have used.

    {
      "comments": "The sql server",
      "type": "Microsoft.Sql/servers",
      "name": "[parameters('sqlServerName')]",
      "location": "[resourceGroup().location]",
      "apiVersion": "2015-05-01-preview",
      "dependsOn": [],
      "tags": {
        "displayName": "shared sql"
      },
      "properties": {
        "administratorLogin": "[parameters('sqlServerLogin')]",
        "administratorLoginPassword": "[parameters('sqlServerPassword')]",
        "version": "12.0"
      },
      "identity": {
        "type": "SystemAssigned"
      },
      "resources": [
        {
          "name": "AllowAllWindowsAzureIps",
          "type": "firewallRules",
          "location": "[resourceGroup().location]",
          "apiVersion": "2014-04-01",
          "dependsOn": [
            "[parameters('sqlServerName')]"
          ],
          "properties": {
            "startIpAddress": "0.0.0.0",
            "endIpAddress": "0.0.0.0"
          }
        }
      ]
    }

At the server level, there are essentially two things to configure: the default username/password and the SQL firewall.  For the password, I chose to pass it in via a securestring parameter.  As this script is designed to dynamically build servers and pools, I need a way to handle the password concern without human intervention.  Here is a PowerShell snippet that shows the code used to integrate with Azure KeyVault.


# Generates a random password of the requested length from printable ASCII characters
function GetTempPassword([int]$length) {
 $ascii = $NULL; For ($a=33; $a -le 126; $a++) { $ascii += ,[char][byte]$a }
 $TempPassword = ""   # start with an empty string so += builds a string
 for ($loop=1; $loop -le $length; $loop++) {
  $TempPassword += ($ascii | Get-Random)
 }
 return $TempPassword
}

# Stores the given plaintext password in KeyVault as a secret
function StorePasswordIn($keyVaultName,$secretName,$password){
    "Converting to secure string"
    $securePassword = ConvertTo-SecureString -String $password -AsPlainText -Force
    "Setting in keyvault $secretName"
    Set-AzureKeyVaultSecret -VaultName $keyVaultName `
                            -Name $secretName `
                            -SecretValue $securePassword 
}

# Returns the secret from KeyVault, generating and storing a new password if it does not yet exist
function GetStoredPassword($keyVaultName,$secretName){
 $secrets = Get-AzureKeyVaultSecret -VaultName $keyVaultName
 if ($secrets.Name -notcontains $secretName){
  $unsecuredPassword = (GetTempPassword -length 30)
  StorePasswordIn -keyVaultName $keyVaultName -secretName $secretName -password $unsecuredPassword
 }
 return (Get-AzureKeyVaultSecret -VaultName $keyVaultName -Name $secretName).SecretValue
}

Basically, I needed to handle the case where the script is run against an already-created server.  This is important as I want to be able to incrementally add elastic pools to my existing server as required.
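
Here is roughly how these helpers could feed the server deployment; the vault, secret, and file names are hypothetical:

# Fetch (or generate and store) the admin password, then hand it to the ARM deployment
$sqlPassword = GetStoredPassword -keyVaultName "my-keyvault" -secretName "sqlAdminPassword"

New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "shared-sql-rg" `
    -TemplateFile ".\sqlserver.json" `
    -TemplateParameterObject @{
        sqlServerName     = "sharedsql01"
        sqlServerLogin    = "sqladmin"
        sqlServerPassword = $sqlPassword
    }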

I feel like a better approach than passing the password in as a securestring parameter is to use the keyvault integration in ARM parameter files.  See this link.  A planned upgrade for sure!

Now for the elastic pools

Here is my template for an elastic pool:

    {
      "comments": "Resource for each elastic pool",
      "name": "[concat(parameters('sqlServerName'),'/',parameters('elasticPoolNames')[copyIndex()])]",
      "type": "Microsoft.Sql/servers/elasticPools",
      "location": "[resourceGroup().location]",
      "apiVersion": "2014-04-01",
      "tags": {
        "displayName": "elastic pools"
      },
      "properties": {
        "edition": "[parameters('editions')[copyIndex()]]",
        "dtu": "[parameters('dtus')[copyIndex()]]"
      },
      "copy": {
        "name": "elasticPoolCopy",
        "count": "[length(parameters('elasticPoolNames'))]"
      },
      "dependsOn": [
      ]
    },

The interesting part here is being able to dynamically add pools.  Essentially, I pass three arrays into this template that contain the required information.


    "elasticPoolNames": {
      "type": "array",
      "metadata": {
        "description": "The names of the pools to create"
      }
    },
    "editions": {
      "type": "array",
      "metadata": {
        "description": "The edition of the pools"
      }
    },
    "dtus": {
      "type": "array",
      "metadata": {
        "description": "The DTUs for the pools"
      }
    },

This way, my script can be used either to (a) scale existing pools as required or (b) create new pools as requirements dictate.
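
For completeness, here is a hypothetical invocation passing the three parallel arrays (the server, file, and pool names are examples only):

# Each position across the three arrays describes one pool
New-AzureRmResourceGroupDeployment `
    -ResourceGroupName "shared-sql-rg" `
    -TemplateFile ".\elasticpools.json" `
    -TemplateParameterObject @{
        sqlServerName    = "sharedsql01"
        elasticPoolNames = @("pool-small", "pool-large")
        editions         = @("Standard", "Premium")
        dtus             = @(100, 250)
    }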

Add some alerts for good measure

Essentially, we are creating our own "shared infrastructure" for the various applications/clients that run on a given pool.  At a certain point, decisions need to be made as to whether the solution should be scaled vertically or horizontally.  In order to support reactive scaling, I've created alerts on the following metrics:
1) dtu_consumption_percent
2) storage_percent
3) sessions_percent

The first two are directly linked to the size of the elastic pool and could trigger a vertical or horizontal scaling decision.  The session limit, on the other hand, is fixed across sizes, so reaching it could trigger a horizontal scaling decision.

Here is the ARM template for that:

   {
      "comments": "Adding Session Alerts",
      "name": "[concat(parameters('elasticPoolNames')[copyIndex('elasticPool')],'_','sessions_percent')]",
      "type": "Microsoft.Insights/alertrules",
      "location": "[resourceGroup().location]",
      "apiVersion": "2016-03-01",
      "tags": {
        "displayName": "session alert"
      },
      "properties": {
        "name": "[concat(parameters('elasticPoolNames')[copyIndex('elasticPool')],'_','sessions_percent')]",
        "description": "an alert rule",
        "isEnabled": true,
        "condition": {
          "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
          "dataSource": {
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
            "resourceUri": "[concat(resourceId('Microsoft.Sql/servers', parameters('sqlServerName')), '/elasticPools/',parameters('elasticPoolNames')[copyIndex()])]",
            "metricName": "sessions_percent"
          },
          "threshold": 90,
          "windowSize": "PT10M"
        }
      },
      "dependsOn": [
        "[concat(resourceId('Microsoft.Sql/servers', parameters('sqlServerName')), '/elasticPools/',parameters('elasticPoolNames')[copyIndex()])]"
      ],
      "copy": {
        "name": "elasticPool",
        "count": "[length(parameters('elasticPoolNames'))]"
      }
    },
    {
      "comments": "Adding DTU Alerts",
      "name": "[concat(parameters('elasticPoolNames')[copyIndex('elasticPool')],'_','dtu_consumption_percent')]",
      "type": "Microsoft.Insights/alertrules",
      "location": "[resourceGroup().location]",
      "apiVersion": "2016-03-01",
      "tags": {
        "displayName": "dtu alert"
      },
      "properties": {
        "name": "[concat(parameters('elasticPoolNames')[copyIndex('elasticPool')],'_','dtu_consumption_percent')]",
        "description": "an alert rule",
        "isEnabled": true,
        "condition": {
          "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
          "dataSource": {
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
            "resourceUri": "[concat(resourceId('Microsoft.Sql/servers', parameters('sqlServerName')), '/elasticPools/',parameters('elasticPoolNames')[copyIndex()])]",
            "metricName": "dtu_consumption_percent"
          },
          "threshold": 90,
          "windowSize": "PT10M"
        }
      },
      "dependsOn": [
        "[concat(resourceId('Microsoft.Sql/servers', parameters('sqlServerName')), '/elasticPools/',parameters('elasticPoolNames')[copyIndex()])]"
      ],
      "copy": {
        "name": "elasticPool",
        "count": "[length(parameters('elasticPoolNames'))]"
      }
    },
    {
      "comments": "Adding Storage Alerts",
      "name": "[concat(parameters('elasticPoolNames')[copyIndex('elasticPool')],'_','storage_percent')]",
      "type": "Microsoft.Insights/alertrules",
      "location": "[resourceGroup().location]",
      "apiVersion": "2016-03-01",
      "tags": {
        "displayName": "storage alert"
      },
      "properties": {
        "name": "[concat(parameters('elasticPoolNames')[copyIndex('elasticPool')],'_','storage_percent')]",
        "description": "an alert rule",
        "isEnabled": true,
        "condition": {
          "odata.type": "Microsoft.Azure.Management.Insights.Models.ThresholdRuleCondition",
          "dataSource": {
            "odata.type": "Microsoft.Azure.Management.Insights.Models.RuleMetricDataSource",
            "resourceUri": "[concat(resourceId('Microsoft.Sql/servers', parameters('sqlServerName')), '/elasticPools/',parameters('elasticPoolNames')[copyIndex()])]",
            "metricName": "storage_percent"
          },
          "threshold": 90,
          "windowSize": "PT10M"
        }
      },
      "dependsOn": [
        "[concat(resourceId('Microsoft.Sql/servers', parameters('sqlServerName')), '/elasticPools/',parameters('elasticPoolNames')[copyIndex()])]"
      ],
      "copy": {
        "name": "elasticPool",
        "count": "[length(parameters('elasticPoolNames'))]"
      }
    }
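
When one of these alerts fires, a vertical scale can be as simple as raising the pool's eDTU target (or re-running the template with a larger value in the dtus array).  A hypothetical example with the AzureRM cmdlets:

# Bump an existing pool from 100 to 200 eDTUs; names are examples only
Set-AzureRmSqlElasticPool -ResourceGroupName "shared-sql-rg" `
                          -ServerName "sharedsql01" `
                          -ElasticPoolName "pool-small" `
                          -Dtu 200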



In conclusion, we covered some considerations for elastic pool architecture and shared ARM templates for the dynamic creation of both SQL servers and elastic pools. Lastly, we looked at alerts that can be used to support vertical/horizontal scaling decisions.