Saturday, October 14, 2017

Tapping into Azure resource group events with Event Grid

At this point in Azure's evolution, there are a couple of different ways you could tap into events occurring in your resource groups.  One of the newest services (still in preview) is Azure Event Grid.  The goal of this post is to use an event subscription to get access to Azure events on a target resource group in real time.

After creating a resource group in Azure (to target), you can head on over to "Event Subscriptions" in the menu bar.

Click on the "+ Event Subscription" button and follow the form to create an event subscription for your target Azure resource group.


A couple of notes:
  • The name has to be unique within the subscription, so you might want to be fairly descriptive here
  • There are 7 default event types (covering the operations you can perform on a resource), and they are generic in nature.  Remember that an event will be fired for all of them, so choose carefully so you don't overload any downstream systems
  • The prefix filter can be used to narrow this down to only the specific events you want to target.
After you have all of this set up, you can create some events in your target resource group and watch them show up at your destination.  For my purposes, I executed an Azure Backup of a VM that I had already created.  The two events I was looking for were the restore point creation event and the restore point deletion event.

Based on the RequestBin output, the Event Grid service calls out to the target URL (the subscriber endpoint above) with a POST.  The body of the POST is essentially the same as what you would see in the activity log for that event.
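To make the shape of that POST body concrete, here is a minimal Python sketch that pulls the interesting fields out of a delivery.  The field names follow the published Event Grid event schema, but the IDs, subject, and data values below are made up for illustration, not a real captured payload.

```python
import json

# A single Event Grid delivery is a JSON array of event envelopes.
# Field names follow the Event Grid event schema; the values here
# are illustrative placeholders, not a real captured payload.
sample_body = """[{
  "id": "00000000-0000-0000-0000-000000000000",
  "topic": "/subscriptions/<sub-id>/resourceGroups/my-rg",
  "subject": "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.RecoveryServices/vaults/rp1",
  "eventType": "Microsoft.Resources.ResourceWriteSuccess",
  "eventTime": "2017-10-14T12:00:00Z",
  "data": {"status": "Succeeded"}
}]"""

def summarize(body: str):
    """Return (eventType, subject) pairs for each event in a delivery."""
    return [(e["eventType"], e["subject"]) for e in json.loads(body)]

for event_type, subject in summarize(sample_body):
    print(event_type, subject)
```

The `eventType` plus the `subject` resource path is usually all a downstream system needs to decide whether it cares about a given event.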

Of course, the endpoint could have just as easily been an Azure function or an Automation runbook.
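If you do roll your own endpoint instead of using RequestBin, keep the documented subscription validation handshake in mind: Event Grid sends a Microsoft.EventGrid.SubscriptionValidationEvent and expects the validationCode echoed back.  Here is a hedged, framework-agnostic Python sketch of that logic; the function name and the response shape for normal deliveries are my own choices.

```python
import json

def handle_event_grid_post(body: str) -> dict:
    """Return the JSON-serializable response an endpoint should send back.

    Event Grid POSTs an array of events. A subscription-validation event
    must be answered with its validationCode; for a normal delivery, a
    200 response is enough to acknowledge (here we also echo the types).
    """
    events = json.loads(body)
    for event in events:
        if event.get("eventType") == "Microsoft.EventGrid.SubscriptionValidationEvent":
            # Echo the code back so Event Grid will activate the subscription.
            return {"validationResponse": event["data"]["validationCode"]}
    return {"received": [e.get("eventType") for e in events]}
```

Wire this into whatever HTTP stack you like (an Azure Function, a Flask route, etc.) and return the dict as the JSON response body.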

In conclusion, the goal of this post was to tap into Azure Resource Group events via Azure Event Grid, a new service currently in preview.  We walked through a basic setup and passed our event directly to RequestBin.  Happy event tapping!

Wednesday, October 11, 2017

Course Review: Managing IaC With Terraform

As you have probably seen from previous posts, Terraform as a technology has piqued my interest.  Luckily for me, Safari Books Online put on a three-hour course on this very topic.  The course was presented by Yevgeniy Brikman, and it was very well done!

What I liked

The presenter was absolutely awesome.  He was very knowledgeable, and I liked that the presentation moved quickly from the basics to some of the more advanced Terraform concepts.  He talked about some issues that I ran up against when I was using Terraform.  For example, he touched on how he does credential management from both local servers and build servers.  He also talked about how he organizes his .tf files and how he integrates modules into his workflow.  Finally, I really liked the project he showed at the end for using Go to "test" your Terraform scripts in an automated fashion.

What I didn't like

The O'Reilly platform requires Flash to get the audio.  Wow, #howisthisstillathing?

I am really liking Terraform and its ability to deploy resources cross-platform.  I am hoping to experiment more with it to see how I can bring this technology to my clients.  If you are interested in the course, I think there is another session coming up in a few weeks.  Check it out for more info.

Sunday, October 8, 2017

Performing an Azure SQL Security Audit with AzSDK

I have really been enjoying being an MVP.  One of the great perks is access to mailing lists where all the MVPs can discuss topics relevant to the different Microsoft areas in scope.  One email thread got me to look at the Azure AzSDK project on GitHub.  This project seems to have a bunch of Microsoft contributors, and is focused on building scripts to both report on and remediate security baselines in Azure.  The goal of this post is to show how to apply this project against your Azure SQL resources.

The first thing to cover here is that the AzSDK is focused on the Azure side of any given resource.  That is to say, it will investigate (via the APIs) how your resource is configured in Azure against recommended best practices.  In the case of SQL, for example, it will not "log in" to the server to do any checks at the SQL level.

Step 1 in the process is obviously to install the module.  You can find detailed instructions in their posted Installation Guide.  What I like about the guide is its focus on not having to use elevated permissions in PowerShell to get the project up and running.  One key note here is to ensure you have the correct version of the AzureRM modules installed.

After installation, you can simply run the built-in set of cmdlets against your Azure resources.  As always, I recommend reading the code before running it to get an idea of what it is doing.  There is a lot of bootstrapping code in the modules, which eventually targets JSON files that define all the rules to apply.
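As a rough illustration of what targeting those rule files looks like, here is a Python sketch that filters a rules document for controls that are enabled and automated.  The structure mirrors the SQL rule example in this post, but the control IDs and the exact file layout inside the AzSDK module are assumptions for illustration.

```python
import json

# Shape mirrors the AzSDK control JSON; the ControlID values below are
# made up for illustration, not taken from the real rule files.
rules_doc = """{
  "Controls": [
    {"ControlID": "SQLDatabase_Audit_Example", "Automated": "Yes", "Enabled": true},
    {"ControlID": "SQLDatabase_Manual_Review_Example", "Automated": "No", "Enabled": true},
    {"ControlID": "SQLDatabase_Deprecated_Example", "Automated": "Yes", "Enabled": false}
  ]
}"""

def automated_controls(doc: str):
    """Return IDs of controls that are enabled and can run unattended."""
    controls = json.loads(doc)["Controls"]
    return [c["ControlID"] for c in controls
            if c["Enabled"] and c["Automated"] == "Yes"]

print(automated_controls(rules_doc))
```

Reading the rule files this way is also a quick sanity check before a scan: you can see exactly which controls will fire against your resources.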

Here is an example of one of the SQL rules:

      "ControlID": "Azure_SQLDatabase_Audit_Enable_Logging_and_Monitoring_DB",
      "Description": "Enable SQL Database audit with selected event types and retention period of minimum $($this.ControlSettings.SqlServer.AuditRetentionPeriod_Min) days",
      "Id": "SQLDatabase140",
      "ControlSeverity": "Medium",
      "Automated": "Yes",
      "MethodName": "CheckSqlDatabaseAuditing",
      "Rationale": "Auditing enables log collection of important system events pertinent to security. Regular monitoring of audit logs can help to detect suspicious and malicious activities early enough.",
      "Recommendation": "Run command  Set-AzureRmSqlDatabaseAuditingPolicy -ResourceGroupName '{ResourceGroupName}' -ServerName '{ServerName}' -DatabaseName '{DatabaseName}' -StorageAccountName '{StorageAccountName}' -EventType 'All' -AuditType 'Blob' -RetentionInDays $($this.ControlSettings.SqlServer.AuditRetentionPeriod_Min). Refer:",
      "Tags": [
      "Enabled": true,
      "FixControl": {
         "FixMethodName": "EnableDatabaseAuditingPolicy",
         "FixControlImpact": "Low",
         "Parameters": {
            "StorageAccountName": ""

I like that it is extensible, and allows you to potentially add your own rules if required.

Running the scan is pretty easy.  You can follow the instructions here and simply target the resource group you would like to evaluate.  Here is the output from the command on one of my resources in production.

Oh my, that is a lot of failures.  You can find detailed reports in a CSV that is stored on disk (see the last line of the screenshot above).  Upon review, it seems like all of my failures were due to auditing not being turned on at the database level.  Of course, auditing is turned on at the server level, and, as per the docs, this covers all databases.  Seems like this test could use some improvement.
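Since the detailed results land in a CSV on disk, it is easy to slice them programmatically, for example to pull out just the failed controls.  A small Python sketch; the column names here are my guess at the report layout rather than the exact AzSDK schema, so check the actual header row before relying on them.

```python
import csv
import io

# Column names are assumptions for illustration; verify against the
# header row of the actual AzSDK CSV report before using.
report = """ControlID,Status,ResourceName
Azure_SQLDatabase_Audit_Enable_Logging_and_Monitoring_DB,Failed,mydb
Azure_SQLDatabase_Some_Other_Control,Passed,myserver
"""

def failed_controls(csv_text: str):
    """Return the ControlIDs of rows whose Status is Failed."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["ControlID"] for row in reader if row["Status"] == "Failed"]

print(failed_controls(report))
```

In a pipeline, a non-empty result from a check like this would be the signal to fail the build.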

In any event, I really like where this project is going and will be following it closely. I can see adding this type of automated check in a build/release pipeline to ensure at least the baselines are covered.  There is a LOT more that this project can do, so be sure to check it out if you are interested.