Thursday, December 24, 2015

Exposing Azure Storage Account Monitoring/Logs Via PowerBI

Monitoring Azure services is a complex task that is constantly evolving.  Not only are the features themselves changing (Azure Security Center, Operational Insights, Application Insights), but the view they provide into a particular application can be spotty at best.  This is clearly an area that Microsoft is working on heavily, and recent announcements regarding OMS are strides in this direction.  The goal, of course, is to provide a central, unified view into an application and the various components that support it.

One service that is core to almost everything provisioned in Azure is storage.  The storage team has worked on providing several metrics/monitoring/logging tools to help customers.  That is a good step one: access to the data.  Visualizing this data, however, leaves some room for improvement.

Currently:

1) You can use the old portal
The old portal provides a monitoring page that you can access.  You can add metrics to it, and view those metrics over a pre-defined period.  You can learn more about this toolset here.

2) You can use the new portal
The new portal is constantly under development.  Right now, you can select your storage blades and add a tile from a pre-set list of visualizations.  This is a pretty cool feature, and combining it with the "dashboard" feature of the new portal will lead to some interesting monitoring capabilities.

The ultimate problem with the above two solutions is that they are currently Azure specific.  In this day and age, it is rare to have an entire solution living in one single environment.  The visualizations provided are also limited to what Microsoft offers.

In comes PowerBI

PowerBI is almost like the Swiss Army knife of reporting.  It combines some really cool visualizations, a versatile query language, and the ability to add multiple data sources to a single report/dashboard.  This really allows us to start building application-wide monitoring dashboards.

From an Azure storage perspective, the team has done a great job by making the monitoring/logging data accessible to everyone.  You can read more about this here.  In short, monitoring metrics are stored in non-visible tables in Azure Table Storage on the storage account for which you enabled monitoring.  As such, you can use these tables as a data source in PowerBI.
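
If you have not yet enabled metrics on the account, you can do that from PowerShell as well.  Here is a minimal sketch using the Azure.Storage cmdlets; the account name and key are placeholders, and parameter details may vary with your module version.

# Build a storage context for the target account
$context = New-AzureStorageContext -StorageAccountName "<<storage account name>>" `
                                   -StorageAccountKey "<<storage account key>>"

# Enable hourly, per-API metrics for the blob service with 7 days of retention;
# this is what populates the hidden $MetricsHour* tables
Set-AzureStorageServiceMetricsProperty -ServiceType Blob `
                                       -MetricsType Hour `
                                       -MetricsLevel ServiceAndApi `
                                       -RetentionDays 7 `
                                       -Context $context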

Unfortunately, at time of writing, PowerBI does not support Azure Tables as a data source directly from the web interface.  So, for this you will have to use PowerBI desktop.

You can read this post to learn how to add Azure Table Storage as a data source in your PowerBI dashboards.  The one caveat here is that the tables for monitoring will not be returned from any list command (which is what PowerBI uses to get the list of tables).  As such, you need to use the following code snippet to access your monitoring tables:

    // These two steps go at the start of the "let" block in your query;
    // this hidden table holds the hourly blob transaction metrics
    Source = AzureStorage.Tables("<<storage account name>>"),
    HourlyMetricsTable = Source{[Name="$MetricsHourPrimaryTransactionsBlob"]}[Data],

If you put that at the start of your query (see this link for information on how to get to the advanced query editor), you will be able to access the tables and perform any post-processing you require.

You can find a list of the monitoring tables and schema information here.

Once you have established access, you can create whatever your heart desires (or whatever your client desires, for that matter).  Here is an example I hacked up.


I personally plan to use this type of reporting for performance baseline/monitoring purposes.  Have fun!

Sunday, November 22, 2015

Course Review: Security in a Cloud-Enabled World

I decided to spend some time watching the just-released "Security in a Cloud-Enabled World" on MVA.

Overall I thought it was a pretty good course, although not what I expected it to be.  The course was broken down into two sections, the first focusing on Microsoft's role as a trusted cloud provider and the second presenting a set of roadmaps that clients should consider when choosing a cloud provider to host their solutions.

Here are a few points I noted about the course:

1)  It is good to get validation of what I am currently doing.  When I engage as an SA on a project, I review many aspects of the roadmaps outlined in this course.  This is good validation that I am on the right path.

2)  If you want to skip several hours of boring content, just read the poster and do the quizzes. 

3)  I am not a big fan of using "user reviews" when judging how secure a cloud provider or solution is.  The second module makes many references to how users perceive the security/availability of their solutions in the cloud.  Most, as you might expect, were favorable toward the cloud.  While interesting material, it has been well documented that security is a lemons market.  While I am not saying that Azure's security stance is bad, I do think that ultimately it is very difficult for customers or end users to make even an educated guess on the subject.

4)  There was an inherent lack of focus on how to do things in Azure.  While I guess that wasn't the point of the course, I think this material needs to be covered somewhere.  In one module, the presenter talks at length about access to the administrative consoles.  Some info is provided on MFA and on how to configure subscriptions for security, but nothing is presented on how to audit these admin accounts, control them, tie them into PAM toolsets, etc.  I think there is a lot of room for content like this.

Overall it was a good course.  It was well structured, and provides a good framework for review when designing cloud solutions.

Tuesday, November 10, 2015

Similarities between Medical teams and Agile IT Teams


From my own knowledge and reading, and from being married to a health-care professional, I've come to see many parallels between how the field of medicine tackles teamwork and how it is done in IT.

Parallel 1: Team Based Approach

Over the past several years, health-care has become more of a team sport.  Rather than having one individual tend to patient needs, these responsibilities have been spread out to a team of health-care professionals.  This team is meant to act as one: gaining a shared mental model of the situation, working together while taking the lead from a central actor, and surfacing each member's specialist knowledge at the right time.

To me, this is essentially what an agile team has become.  Long gone are the days when a single developer/architect could hold the entire complexity of a solution in their head.  We now have a clear distinction between front-end and back-end developers.  UX is super important, and generally we have a specialist that deals only with that area.  Integration is a different beast than standard storage options, and generally someone with competency in that area is required.  Not to mention other areas such as security, performance, and testing.  Even when those specialists don't exist, someone has to play that role.  In agile teams, these roles are distributed within the team in a "best-fit" format.  Or short-straw, depending on how your scrums go. 

Parallel 2: Roles are sorta-clearly defined in a dynamic way

From offices to the emergency room, medical teams are created (sometimes on the fly) to attend to patient needs.  Generally in these teams, there is a doctor who essentially is the lead.  Their job is not only to lead, but also to know the most.  Of course we know that isn't possible, and hence there are specialists that also make up the team.  Depending on the situation there may be an array of specialized nurses, technicians, or other disciplines.  With recent developments in patient-centric care, a lot of literature on the subject also includes the patient in the team and defines roles and responsibilities for that individual.

This parallels quite well with how agile works.  Regardless of the size of the agile team, there is generally a team lead or scrum master, several team members with varying skill and specialty, the product owner and the stakeholders.  Agile is about a team lead working closely with a product owner to deliver success in a project.

In both cases above, the roles on the team shift depending on who is on the team and what skill sets they bring in.  For example, you may have a technically weak scrum master who is great at communicating and keeping on top of things.  This person may share the role of "lead" with that of a technical guru also on the team.  The same can happen (although with less frequency) in the medical world.  In the event of an emergency, what happens when there is no doctor there to lead the charge?  Somebody has to play that role.

Parallel 3: Teams form, disband, and re-form with different configurations

At an almost breathtaking pace compared to IT, medical teams form to deal with a specific case and then re-form to deal with other cases.  This is essentially what happens in agile teams as they transition between projects.  The main difference here is the speed at which this occurs in the medical arena. 

While there are probably more parallels that can be drawn, I think that it is safe to say that there is a lot of overlap between how medical teams operate and how agile IT teams operate.  The medical community has been trying to tackle these concepts for quite some time now.  What interests me the most is the discussion on competency.

Sunday, November 8, 2015

Making the switch to Azure DNS

One service that has been on my radar for some time is Azure DNS.  Released to preview in May of this year, Azure DNS is yet another offering to compete with already established services from Amazon and Google.

From an IT perspective, I like seeing these services added to Azure.  It allows for the creation of a one-stop shop for hosting IT services, provides a single point for billing, and, by utilizing the Resource Manager deployment model, lets you create strong RBAC controls around who can manage and maintain the service.

Getting started with Azure DNS is pretty easy, and is detailed quite well in the following Microsoft blog posts:

Getting Started with Azure DNS using Powershell

Create DNS Records

A couple of things I noted during the process:

1)  Some of the operations are offline: they modify a local copy of the record set, and nothing is committed until the corresponding "set" command runs.  These are clearly marked in the documentation, but keep in mind that the "set" commands are required. 

2)  You need to create record sets for everything, even things with only 1 record.  This is an interesting design decision, and adds to the initial setup (see the sketch after this list).

3)  It is deployed only via PowerShell and Resource Manager.  So standard rules/considerations apply around the lifetime of the resource, RBAC considerations, etc.
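
To make points 1 and 2 concrete, here is a rough sketch of creating a zone and a single-record record set.  I'm using the cmdlet names from the preview posts linked above; the resource group and IP are placeholders, and cmdlet names have shifted between Azure PowerShell releases, so treat this as illustrative.

# Create the zone in an existing resource group
New-AzureDnsZone -Name "shamirc.com" -ResourceGroupName "MyResourceGroup"

# Even a single A record needs a record set to hold it
$rs = New-AzureDnsRecordSet -Name "www" -RecordType "A" -Ttl 3600 `
                            -ZoneName "shamirc.com" -ResourceGroupName "MyResourceGroup"

# "Add" only modifies the local copy of the record set...
Add-AzureDnsRecordConfig -RecordSet $rs -Ipv4Address "203.0.113.10"

# ...and nothing is committed until the "set" command runs
Set-AzureDnsRecordSet -RecordSet $rs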

Making the switch took about an hour or so one night.  Previously, I was using mydomain to host my DNS for shamirc.com.  This has now switched over to Azure DNS.

Follow this link for a pingdom report on the DNS configuration:  Pingdom

Some interesting things for future investigation

1) DNS Performance testing from around the world (and in comparison to Amazon / Google)
2) Actual cost for a production site
3) From a security perspective, EDoS (economic denial of service).  All these services charge per million queries, and I wonder what protection mechanisms are in place against a flood of queries against the service
4) DNSSEC Support


Sunday, October 4, 2015

The Extended Domain Concept

One interesting problem that comes up during the course of security architecture is the concept of domains and how those domains interact with each other.  A simple example is shown in the figure below.


[Figure: nested policy domains (domain 1, domain 2, domain 3) and their associated security policies (SA1, SA2, SA3)]

The figure depicts a few security policies (SA1, SA2, and SA3) along with some policy domains (domain 1, domain 2, domain 3).  Users A and B belong to domain 1, and users X and Y belong to domain 2.  For the above scenario, the following is true:

  • If user A wants to talk to user B, they must conform to security policy 1
  • If user X wants to talk to user Y, they must conform to security policy 2
  • domain 1 and domain 2 are sub-domains of domain 3
  • If a user in domain 1 wants to talk to a user in domain 2, they must conform to security policy 3

Domains can generally interact with their environment in several different ways.  For example, a domain could follow the example above, where inter-domain communication is governed by a policy of the super-domain.  In other cases, domain communication could be facilitated by a trusted third party.  This is similar to the current certificate authority system (if clients were also using certificates to validate their identity). 

One concept that I like in this space is that of the extended domain.  This approach is generally used when facilitating communication between a domain with a stricter policy and one with a weaker policy.  In this case, the domain with the stronger policy extends itself into the weaker domain, and only allows communication to be facilitated via that extension.

Being a Canadian citizen, a nice way to illustrate this is to compare and contrast airport customs between Canada and the US.  When I travel to the US from Canada, I pass US customs in Canada.  This is a great example of the extended domain concept at work.  The US, believing its security requirements are much greater than Canada's, will not let you board a plane destined for the US without first authorizing you.

Canada, on the other hand, acts more like a traditional inter-domain policy association.  Canada enforces its controls at its own border.

In the business world, I think that the extended domain concept is an important one.  One use case would be a corporate entity that owns several subsidiaries.  Using the extended domain model, the entity could enforce strict control over communication into its network by deploying trusted agents in each subsidiary to facilitate communication.

 **Image taken from the book Enterprise Security Architecture by Nicholas A. Sherwood

Friday, July 24, 2015

SABSA Chartered Security Architect - Foundation Certificate Achieved!

A few months ago, I decided to attend SABSA training.  For a while, it had been something on the radar.  I wanted to find a good, recognized certification that spanned both architecture and security. SABSA fits the bill quite perfectly.

The course I attended was taught in Winnipeg of all places, and led by the great Michael Legary.  Due to some administrative problems on a client's end, there ended up being only 2 of us in the course.  This worked out great, as we were able to explore the various sections in more detail and really work to apply the concepts to our current positions.  From a professional services perspective, I was interested in how to apply these concepts to our project delivery.  SABSA's focus on creating controls/solutions that are both traceable and justifiable in a business context is, in my opinion, critical to the success of any project.

In case, at this point, you are wondering what SABSA actually is, please allow me to fill in some details.

SABSA stands for Sherwood Applied Business Security Architecture.  It is a methodology for developing business-driven, risk- and opportunity-focused enterprise security & information assurance architectures.  It is composed of a number of frameworks, models, methods, and processes.

The SABSA methodology focuses on delivering the following features:

- It is business-driven in nature
- It is risk focused (both from a threat and opportunity standpoint)
- It is comprehensive (and thus can be scaled from point areas to enterprise wide)
- It is modular (you don't have to big-bang this approach)
- It is open source (well, kinda ;) )
- It is auditable (this is the entire point, justify what you are doing)
- It is transparent (two-way traceability)

There is a ton to SABSA. If you are interested in finding out more, please take a gander at the SABSA Whitepaper (registration required).

One thing I will say: it was a TOUGH exam.  I think the levels of abstraction dealt with in enterprise architecture are hard to grasp over the course of 5 days.  I look forward to spending a significant amount of time digesting the course material and integrating it into my day job.



Tuesday, June 9, 2015

Am I now a starfish?


In his book, Management 3.0, Jurgen Appelo makes a startling comparison between starfish and managers.

"For example, the ancestors of brainless starfish had a brain. But starfish don’t, and nobody knows why…. (Some believe the same applies to managers.)"

While he was probably just trying to make a joke, my experience suggests that the best jokes are generally ones based in reality.  When I look at my personal career, I am fortunate enough to boast that I have had mostly good managers.  I have been able to relate to my managers.  Most of them have put some effort into my personal and professional development.  They have always been able to provide timely advice and guide me in the right direction (or maybe in their self-image?). 

During all this time, however, my view of management has always been along a singular theme, the theme of the starfish.  How do these people get into these positions?  Why don't they use their brains?  Why does everything seem so disjointed?  Why are they not addressing the problems that really matter?

Reading this book has caused me to spend a fair bit of time self-reflecting.  When I look back at years past, I always wanted things to be orderly.  I wanted to be able to explain what was going on in simple terms.  After all, if you can't break down a problem into simple components, then you don't really understand it… right?  The book would classify what I was attempting to do as reductionism.

"The approach of deconstructing systems into their parts and analyzing how these parts interact to make up the whole is called reductionism".

What I learned is that while these concepts are good for understanding how an airplane works, they do little to help explain how complex systems such as corporations work.  Things are really not that simple, and can't always be explained in simple terms.

In a lot of ways, I am really, REALLY glad that the author took the time to write this book.  He approaches the concepts of management like I would (or at least, would hope to).  He spends a lot of time talking about systems theory, explaining just enough of the core concepts to get his point across.  He relates management of people to that of complex systems.  And luckily, work has been done over the years into how to describe, manage, and interact with complex systems.  So why not try and apply those concepts to management?

The application of those concepts, combined with years of experience I'm sure, has led to what is termed the "Management 3.0 model".  There are six views to this model:

  1. Align Constraints
  2. Develop Competence
  3. Empower Teams
  4. Grow Structure
  5. Energize People
  6. Improve Everything

The last chapter is really the icing on the cake for this book, and probably puts it among the top ones that I have read.  The author goes out of his way to claim that his model is probably incorrect.  The point, ultimately, is that there is no one way to view/manage complex systems.  The system functions as it will; all you can do, as management, is hope to contribute a little to its direction.  If you adopt agile approaches, focus on using appropriate (for your environment) tools and processes, and have a little bit of luck, you might just be successful at it!

As always, I highly recommend you check out this book.  You can start at the author's website.

Recently, I have been promoted to Manager of the Application Infrastructure group at Hitachi Solutions Canada.  I look forward to the new challenge.

Saturday, April 4, 2015

Book Review: The Talent Code

In The Talent Code, author Daniel Coyle tells us that "Greatness isn't born.  It's grown.  Here's how," and I must say, the book really does deliver.

The premise of the book is simple.  Traditionally, there has been a misconception that certain people are born to do certain things.  When we look at a leader in any given field, we think that they were born to do this, that it was in their genes.  The book aims to prove this notion false, providing instead an alternate explanation: that we are all myelin beings.

Myelin is an insulating material grown around axons in the brain, essentially ensuring that electrical signals travel quickly and without loss of amplitude.  The explanation follows that the more we practice something, the more myelin forms around the neurons that drive that action.  The more myelin that forms, the faster and more accurately the brain can transmit that signal.  This is essentially the difference between the top performers in a given field and the average.

There are three elements needed to grow myelin: deep practice, ignition, and master coaching.  Combine these three, and you are destined for greatness in any field that you choose.

Deep practice is the concept that one must practice to be good at something, but it can't just be "any" type of practice.  The practice that helps grow myelin is "deep".  There are basically two components to deep practice.

1)  Practice at the edge of your ability
Essentially, practice is only effective if it pushes you. You have to practice something that makes you use your brain, makes you concentrate on what you are doing.

2)  Break down complex tasks into their core components, and practice those over and over

When practicing at the edge, you have to work to break the complex task that you are doing into its core components.  By doing that, you can then create exercises to practice each of those core components.  The more you practice, the more myelin will coat the electrical path in your brain.

Ignition is basically the passion behind what you are doing.  The author has many stories about leaders in their field and the various events that "triggered" their ignition.  Essentially, ignition is a switch: it is either on or off.  Further, you can trigger ignition using primal cues.  Most of these cues are tied to the words that we use. 

The last ingredient in talent is master coaching.  The best description of master coaching is from a quote in the book.  "Great teachers focus on what the student is saying or doing ... and are able, by being so focused and by their deep knowledge of the subject matter, to see and recognize the inarticulate stumbling, fumbling effort of the student who's reaching toward mastery, and then connect to them with a targeted message." 

There are four virtues to a master coach.

1)  The matrix
2)  Perceptiveness
3)  The GPS Reflex
4)  Theatrical Honesty

Summary

This was a great book and a great read.  What I liked about it is how it confirmed a lot of what I already thought was true about talent.  Master coaches are required because they have already broken down a complex problem into its parts, and are experts at teaching those parts and, further, the synthesis of those parts into the whole.  Because they have already practiced doing this with the art they are teaching, they have also applied this ability to the act of teaching itself.  They know to watch for cues, can adapt teaching patterns to the student, and are genuine people.  Ignition is the passion that keeps the student going forward.  For me, it was my uncle showing me how to switch the background color on my C64 using BASIC.  After that, I was hooked!  Lastly, there is the deep practice.  If you are passionate about something, you are always going to strive to improve at what you are doing.  You can only do that by practicing at the edge.

I think my only criticism of the book is how "easy" the author makes this sound.  I guess after the years of research he has put into it, he sees it clearly and can articulate it well.  I, however, still find the combination of master coaching, ignition, and deep practice to be a rare thing. 


Sunday, February 8, 2015

Book Review: The Five Dysfunctions of a Team

Notes


  • Dysfunction 1: Absence of Trust
    • "is the confidence among team members that their peers' intentions are good, and that there is no reason to be protective or careful around the group"
    • Basically, team members should feel comfortable being vulnerable with one another
    • Tools To Address
      • "Get to know each other games"
        • Builds a personal connection that relationships can build on
      • Evaluate shared models
        • Express what each person feels they contribute / could contribute better to the team
        • Discuss
      • My Thoughts
        • Running evaluations such as Strength finder might provide valuable insight
    • Leader
      • Take the first step!
  • Dysfunction 2: Fear of Conflict
    • "All great relationships require productive conflict in order to grow"
      • Should be productive ideological conflict
        • Maintain respect... somehow!
        • Focus on ideas, not on people
    • Healthy conflict is actually more efficient than no conflict
    • Tools To Address
      • Make it part of the culture
      • Moderate as much as possible, but don't protect
  • Dysfunction 3: Lack of Commitment
    • "Commitment is a function of two things: clarity and buy-in"
      • This does not mean consensus!
      • Better to make a bold move and change course than to waffle
    • Tools To Address
      • End the meetings with a review of key points
      • Establish clear deadlines and responsibilities
    • Leader Role
      • Take a chance and be prepared to be wrong
  • Dysfunction 4: Avoidance of accountability
    • "the willingness of team members to call their peers on performance or behaviors that might hurt the team"
      • Peer pressure works!
    • Tools To Address
      • Publish goals and standards
      • Team rewards
    • Leader
      • You can't do this alone, create the right culture!
  • Dysfunction 5: Inattention to results
    • "the tendency of members to care about something other than he collective goals of the group"
      • Team status and individual status vs company goals
    • Tools To Address
      • Rewards that address good behavior

Summary

This is actually the second time I've read this book.  The first was during a previous job.  I was involved in leadership training, and my wife recommended that I read this book in conjunction with that training.  At that time, I noted that the company I worked for actually possessed several of these dysfunctions, but didn't even know it.  One particularly vivid memory comes to mind.

I was talking with a manager, and they commented about how the 360 review (that they had self-selected to do) was a stunning success.  That person commented that nobody had anything negative to share during the process, and that this was a good mark on their record as a manager.  If you get a chance to read this book, you'll note that this is more likely a result of dysfunction than of stellar management techniques.  There is no way to please everybody, and there is no way that everyone "just agrees" with the approach you are taking.

This book is actually quite well written, and I highly suggest picking it up.  The "fable" approach is quite nice, and allows for a lot of color to be added to help describe the finer points of the dysfunctions noted above. Further, the dysfunctions are actually broken out and discussed in a chapter near the end, which makes for a great reference at any time!

Saturday, January 3, 2015

Scripting a basic network in Azure

Azure has a strong PowerShell API that allows for scripting of all components within its infrastructure-as-a-service offering.  In this blog post, I am going to use parts of this API to create a basic network that I can build on later.

Here is a diagram of what I am trying to build.



In the diagram above, the Azure virtual network is the base.  While configuring the virtual network, you will need to know some basic information such as the address space you intend to use, the subnets you would like to configure, DNS servers that will be used for the environment, and S2S or P2S connectivity details.  For my purposes, I am going to use the following configuration:

Address Space:  10.0.0.0/8
Server Subnet: 10.1.0.0/24
DMZ Subnet: 10.2.0.0/24
Secure Subnet: 10.3.0.0/24
Client Subnet: 10.4.0.0/24

DC1 - 10.1.0.4
DC2 - 10.1.0.5

There are two commands in the Azure PowerShell API for interacting with vnet configuration.  Get-AzureVNetConfig allows you to export the existing configuration to a file.  Set-AzureVNetConfig takes an XML configuration file and applies that config to the subscription.  One interesting quirk is that these commands control all virtual network, DNS, and local network configurations at once.  If you, say, used the get command to fetch the XML, deleted a virtual network config from it, and then used the set command to apply it, the corresponding virtual network would be deleted (or Azure would attempt to delete it).

This quirk throws a slight complication into my script.  In reality, I want to add a new virtual network, and if it already exists, skip to further steps down the line.  It turns out that PowerShell makes interacting with XML quite easy, and creating the corresponding add scripts is actually quite simple.

Here is the code I use to add my virtual network configuration to an existing configuration file.


# Work against a temporary copy of the subscription's network configuration
$networkConfigurationPath = [System.IO.Path]::GetTempFileName()

# Target region for the new vnet (assumed here; set this to your own region)
$location = "East US"

Write-Host $networkConfigurationPath

# Export the current subscription network config to the temp file
Get-AzureVNetConfig -ExportToFile $networkConfigurationPath

# Add the two domain controllers as DNS servers in the config
AppendDNSTo -filePath $networkConfigurationPath -dnsServerName "PIA-DC1" -dnsServerIP "10.1.0.4"
AppendDNSTo -filePath $networkConfigurationPath -dnsServerName "PIA-DC2" -dnsServerIP "10.1.0.5"

$subnets = @{
    "Server" = "10.1.0.0/24";
    "DMZ" = "10.2.0.0/24";
    "Secure" = "10.3.0.0/24";
    "Client" = "10.4.0.0/24";
}

$vnetName = "PIA"

AppendVNetTo -filePath $networkConfigurationPath    -dnsServerRefs @("PIA-DC1","PIA-DC2") `
                                                    -vnetName $vnetName `
                                                    -vnetLocation $location `
                                                    -addressSpace "10.0.0.0/8" `
                                                    -subnets $subnets


# Apply the modified configuration back to the subscription, then clean up
Set-AzureVNetConfig -ConfigurationPath $networkConfigurationPath

Remove-Item -Path $networkConfigurationPath


The code above basically goes through two steps: first, add the DNS servers that you wish to reference in the vnet; second, add the vnet itself.  Here is a reference for the AppendDNSTo function (a sketch of the companion AppendVNetTo helper follows it).


function AppendDNSTo{
    <#
    .SYNOPSIS
    Helper method to add DNS to an existing Azure network config file
    .DESCRIPTION
    Appends a DNS entry to an existing config.  If it already exists, will continue without error
    .PARAMETER filePath
    The path to an already existing configuration
    .PARAMETER dnsServerName
    The Name of the new DNS Server
    .PARAMETER dnsServerIP
    The IP of the new DNS Server
    .PARAMETER failIfExists
    Instructs program to throw error if DNS already exists
    #>

    param(
    [Parameter(Mandatory=$true)]
    [string]$filePath,
    [Parameter(Mandatory=$true)]
    [string]$dnsServerName,
    [Parameter(Mandatory=$true)]
    [string]$dnsServerIP,
    [bool]$failIfExists = $false
    )

    TestGivenPath -filePath $filePath   # helper that validates the path (sketched below)

    $xml = [xml] (Get-Content $filePath)

    $dnsServers = $xml.NetworkConfiguration.VirtualNetworkConfiguration.Dns.DnsServers

    if ($dnsServers.ChildNodes | ? {$_.name -eq $dnsServerName}){
        Write-Debug "DNS Server already exists"
        if ($failIfExists){
            throw "DNS Server Name provided already exists in config file"
        }
    }
    else{
        $dnsServerToAdd = $xml.CreateElement("DnsServer",$xml.DocumentElement.NamespaceURI)
        $dnsServerToAdd.SetAttribute("name",$dnsServerName)
        $dnsServerToAdd.SetAttribute("IPAddress",$dnsServerIP)
        $dnsServers.AppendChild($dnsServerToAdd) | Out-Null
        $xml.Save($filePath)
    }

}
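
The script also calls AppendVNetTo and a TestGivenPath helper, which I haven't shown above.  Here is a sketch of what they could look like, following the same XML pattern as AppendDNSTo and assuming the exported config already contains a VirtualNetworkSites element; consider the element names (VirtualNetworkSite, AddressSpace, Subnets, DnsServersRef) standard network config schema, and the code itself illustrative rather than exact.

function TestGivenPath{
    # Hypothetical helper: fail early if the config file is missing
    param(
    [Parameter(Mandatory=$true)]
    [string]$filePath
    )

    if (-not (Test-Path $filePath)){
        throw "Network configuration file not found at $filePath"
    }
}

function AppendVNetTo{
    # Sketch: appends a VirtualNetworkSite element to an exported config file
    param(
    [Parameter(Mandatory=$true)]
    [string]$filePath,
    [Parameter(Mandatory=$true)]
    [string[]]$dnsServerRefs,
    [Parameter(Mandatory=$true)]
    [string]$vnetName,
    [Parameter(Mandatory=$true)]
    [string]$vnetLocation,
    [Parameter(Mandatory=$true)]
    [string]$addressSpace,
    [Parameter(Mandatory=$true)]
    [hashtable]$subnets
    )

    TestGivenPath -filePath $filePath

    $xml = [xml] (Get-Content $filePath)
    $ns = $xml.DocumentElement.NamespaceURI
    $sites = $xml.NetworkConfiguration.VirtualNetworkConfiguration.VirtualNetworkSites

    # Skip the add if the vnet is already present (mirrors the DNS behavior)
    if ($sites.ChildNodes | ? {$_.name -eq $vnetName}){
        Write-Debug "VNet already exists"
        return
    }

    $site = $xml.CreateElement("VirtualNetworkSite",$ns)
    $site.SetAttribute("name",$vnetName)
    $site.SetAttribute("Location",$vnetLocation)

    # <AddressSpace><AddressPrefix>10.0.0.0/8</AddressPrefix></AddressSpace>
    $space = $xml.CreateElement("AddressSpace",$ns)
    $prefix = $xml.CreateElement("AddressPrefix",$ns)
    $prefix.InnerText = $addressSpace
    $space.AppendChild($prefix) | Out-Null
    $site.AppendChild($space) | Out-Null

    # One <Subnet> element per entry in the hashtable
    $subnetsElement = $xml.CreateElement("Subnets",$ns)
    foreach ($subnetName in $subnets.Keys){
        $subnet = $xml.CreateElement("Subnet",$ns)
        $subnet.SetAttribute("name",$subnetName)
        $subnetPrefix = $xml.CreateElement("AddressPrefix",$ns)
        $subnetPrefix.InnerText = $subnets[$subnetName]
        $subnet.AppendChild($subnetPrefix) | Out-Null
        $subnetsElement.AppendChild($subnet) | Out-Null
    }
    $site.AppendChild($subnetsElement) | Out-Null

    # Reference the DNS servers added earlier by AppendDNSTo
    $refs = $xml.CreateElement("DnsServersRef",$ns)
    foreach ($dnsName in $dnsServerRefs){
        $ref = $xml.CreateElement("DnsServerRef",$ns)
        $ref.SetAttribute("name",$dnsName)
        $refs.AppendChild($ref) | Out-Null
    }
    $site.AppendChild($refs) | Out-Null

    $sites.AppendChild($site) | Out-Null
    $xml.Save($filePath)
}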

Now that the basic vnet has been created, we need to create the "firewall" between each zone as depicted in the diagram.  With the recent addition of multiple NICs, I could see a few different ways to do this, potentially using 3rd-party solutions.  However, Azure also provides the concept of Network Security Groups, which can be used here.  These are basically stateful firewalls that can be applied to either a VM or a subnet.  What is also interesting is that they come with a default ruleset that is easy to work with.  After installing mine, here is what the rules came out to.



While not complete for my purposes, it is a good base that allows you to get started with network security groups quickly.
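
Building on that base, adding a custom rule looks something like the following sketch; the NSG name, priority, and rule are illustrative (allow HTTPS from the DMZ subnet into the Secure subnet).

# Fetch the NSG and add an inbound rule on top of the defaults
Get-AzureNetworkSecurityGroup -Name "Secure-NSG" |
    Set-AzureNetworkSecurityRule -Name "AllowHttpsFromDMZ" `
                                 -Type Inbound `
                                 -Priority 200 `
                                 -Action Allow `
                                 -SourceAddressPrefix "10.2.0.0/24" `
                                 -SourcePortRange "*" `
                                 -DestinationAddressPrefix "10.3.0.0/24" `
                                 -DestinationPortRange "443" `
                                 -Protocol "TCP"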

Here is a function I wrote to help with adding NSGs to the subnets.  It is pretty simple: create one, and associate it with the subnet.


function AddNetworkSecurityGroupAndAssociate{
    param(
    [Parameter(Mandatory=$true)]
    [string]$nsgName,
    [Parameter(Mandatory=$true)]
    [string]$nsgLabel,
    [Parameter(Mandatory=$true)]
    [string]$subnetName,
    [Parameter(Mandatory=$true)]
    [string]$vnetName,
    [Parameter(Mandatory=$true)]
    [string]$location
    )

    New-AzureNetworkSecurityGroup -Name $nsgName -Location $location -Label $nsgLabel
    Get-AzureNetworkSecurityGroup -Name $nsgName | Set-AzureNetworkSecurityGroupToSubnet -VirtualNetworkName $vnetName -SubnetName $subnetName
}
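
Calling the helper for each zone in the diagram is then a simple loop, reusing the $location value from the vnet script above (the NSG naming convention here is just my choice):

foreach ($subnetName in @("Server","DMZ","Secure","Client")){
    AddNetworkSecurityGroupAndAssociate -nsgName "$($subnetName)-NSG" `
                                        -nsgLabel "NSG for the $subnetName subnet" `
                                        -subnetName $subnetName `
                                        -vnetName "PIA" `
                                        -location $location
}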

While I don't have anything in this virtual network to test with, I was able to quite simply confirm the configuration.  I now have a script to build out a basic network in any subscription.  Cool!