Saturday, June 24, 2017

Book Review: Blockchain Basics: A Non-Technical Introduction in 25 Steps

Blockchain is all the rage these days, and I'm hoping to get more involved with projects involving this technology.  Not to mention, Azure has a service that can be used to create corporate-ready blockchains.  I've read a lot about blockchain over the years, and I wanted to ensure that I had all the core concepts in check before continuing down this journey.  My search for a good intro book led me to Blockchain Basics: A Non-Technical Introduction in 25 Steps. I feel like I knew most of the concepts coming in, but this book was wonderful at grounding my knowledge and will make an excellent reference going forward.

What is the book about?

As the book title suggests, the author aims, through 25 steps, to build up the fundamental knowledge required to understand what the blockchain is, what applications can run on it, and some of the challenges/limitations of current implementations.  Each step generally provides:
  • A metaphor to root the concept in "real-life"
  • A description of the overall goal of that step
  • A non-technical description of the step
  • An "outlook" and a "summary" section that relate the steps to each other

What did I like?

  • The book is extremely well laid out; the "steps" approach seems novel and really helps the author build up fundamental knowledge before getting into the more advanced steps
  • The outlook and summary sections make for great review areas
  • The book is logically organized, and the use of metaphors accomplishes the intended goal
  • The book covers advanced concepts and the limitations of current implementations, which is great for advanced readers
What did I dislike?

I think the only constructive criticism I could provide is that many of the words used in the non-technical descriptions are themselves quite technical.  Take, for example, the idea of "trust".  While the author does attempt to define such terms, "trust" remains a very technical term with very technical nuances that are hard to describe non-technically.  The general feeling I got while reading the book was that I still needed a pretty deep technical understanding of these words to fully grasp the concepts.  Maybe this was just me....

What did I learn?

Honestly, I learned a lot.  But if I had to pick one specific thing, I'd say it was how the blockchain selects a "transaction history" (Step 19).  Two approaches are described: the "longest-chain" criterion and the "heaviest-chain" criterion.  In hindsight, it makes complete sense that blocks could be added to the blockchain and then subsequently abandoned as consensus forms.  Being abandoned also forfeits the reward granted for creating the block, which can be devastating depending on the complexity of the hash puzzle that was solved.

What is even more elegant is the idea that these orphaned transactions can come back, as they are placed back in the inbox for reprocessing.  The way the blockchain achieves and maintains consistency is quite fascinating.
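
To make the two selection rules concrete, here is a toy Python sketch (this is an illustration of the idea, not how any real blockchain client is implemented; the block structure is invented): the longest-chain rule counts blocks, while the heaviest-chain rule weights each block by the difficulty of the hash puzzle it solved.

```python
# Toy model of fork selection (Step 19). Each fork is a list of blocks,
# and each block records the difficulty of the hash puzzle it solved.

def longest_chain(forks):
    # Pick the fork containing the most blocks.
    return max(forks, key=len)

def heaviest_chain(forks):
    # Pick the fork with the greatest cumulative difficulty.
    return max(forks, key=lambda fork: sum(b["difficulty"] for b in fork))

fork_a = [{"difficulty": 1}, {"difficulty": 1}, {"difficulty": 1}]
fork_b = [{"difficulty": 5}, {"difficulty": 5}]

print(longest_chain([fork_a, fork_b]) is fork_a)   # True: 3 blocks beat 2
print(heaviest_chain([fork_a, fork_b]) is fork_b)  # True: difficulty 10 beats 3
```

Note how the two rules can disagree on the same set of forks, which is exactly why blocks on the losing fork end up abandoned.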

I found this book quite a delightful read, something you could probably work through in a few minutes before bed every night.  The author's attempt to make it as "non-technical" as possible gives it an easy-read feel, and the layout of the steps lets you pick up where you left off if you need to, and/or re-read sections as needed.

Sunday, June 18, 2017

Pondering Azure Subscription Architectures

I recently had a conversation with a Microsoft Cloud Solutions Architect regarding subscription architecture for a large client of mine.  He provided some compelling points of view, and the goal of this post is to capture some of my thoughts around subscription architecture.

First things first, what is an Azure subscription anyways?  Put simply, a subscription forms part of the digital agreement between end-users (companies or otherwise) and Microsoft for use of Azure services.  There are a few logical constructs in play here, and a subscription allows you to segment your Azure deployments along the axes of billing and management.  Guidance on the use of subscriptions has varied over the years.

So how did we get here?  In the beginning, there was Azure Service Management (ASM).  If you go into the portal today, you can still deploy many services in ASM; they are generally denoted as "classic".  The advice I would generally give my clients was to create new subscriptions based on security/management delineations.  At the time, the only way to grant someone access to the portal was to make them a co-admin.  This presented several security challenges for many organizations, and multiple subscriptions were often the only way to go.

Multiple-subscription architectures can be complex to deploy.  While most services are generally unaffected, networking, and specifically VPN Gateway architecture, can get quite complex.  Organizations looking for hybrid deployments need to make several decisions, sometimes greatly increasing cost in order to achieve a desired level of security.  A lot of customers simply accepted the risk and moved on.

Azure Resource Manager (ARM), and specifically Role-Based Access Control, fixed a lot of these issues.  One could now create management roles and assign users only the access they required.  Networking could still pose a challenge in a multi-subscription architecture, and so the pendulum swung the other way.  My general advice at the time was to go with one subscription (where possible).  I'm not the only one who thinks this way, as a quick Google search will reveal.

So what changed?  To understand this, let's walk through some considerations for subscription layouts.

1. Size of Company

I think that this is always going to be a consideration.  Larger companies with more complex organizational layouts will want a subscription architecture that reflects how they currently do business.  Subscriptions are still a great way to segment management and billing concerns.  Depending on the maturity of processes within the organization, effective subscription management can make business units/departments directly accountable for their IT spend while granting them the ability to securely deploy resources as required, without IT involvement.

2. Dev/Test Scenarios

Microsoft has always had the concept of MSDN accounts in Azure.  An MSDN subscription could get you as much as $150 of free Azure spend per month.  This led to many small (and unmanageable) Azure subscriptions that were loosely associated with an organization (or that could contain organizational data).  Microsoft then came out with the concept of an Enterprise Dev/Test subscription, and this is still a great reason to have a separate sub.

3. Billing

Billing, and the ability to surface IT spend in dynamic ways, has always been a concern.  The cloud makes a lot of this easier to do.  I generally did not recommend creating new subs solely for billing purposes, as the networking complexity, in my mind, far outweighed the benefits of separate billing.  This became even more true as ARM released features around tagging, with the ability to sort bills by tags and resource groups.  An effective resource group layout and tagging strategy can solve most billing concerns.
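
As a toy illustration of why tags work for this (this is a simple group-by sketch, not the actual Azure billing export format; the resource names and tags below are made up), surfacing spend per department from tagged line items is trivial:

```python
# Hypothetical billing line items; in Azure these would come from a
# usage/billing export, with tags applied to each resource.
line_items = [
    {"resource": "vm-web-01", "cost": 120.0, "tags": {"dept": "marketing"}},
    {"resource": "vm-sql-01", "cost": 300.0, "tags": {"dept": "finance"}},
    {"resource": "storage01", "cost": 45.0,  "tags": {"dept": "marketing"}},
]

def cost_by_tag(items, tag_key):
    # Sum cost per tag value, e.g. per department.
    totals = {}
    for item in items:
        key = item["tags"].get(tag_key, "untagged")
        totals[key] = totals.get(key, 0.0) + item["cost"]
    return totals

print(cost_by_tag(line_items, "dept"))  # {'marketing': 165.0, 'finance': 300.0}
```

The same grouping works along any tag axis (environment, cost center, project), which is why a consistent tagging strategy can replace separate subscriptions for most billing concerns.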

4. Management Concerns

This is described above.  ASM used to have only the concept of a co-admin, and no granular controls.  ARM fixes many of these issues and allows for flexible single subscription deployment.

5. Subscription Limits

Each subscription has an associated set of limits (for example, default quotas on cores per region); hitting those limits can force the creation of an additional subscription.

But that doesn't answer the question!  You're right, it doesn't!  But it was as good a segue as any to chat about considerations at the subscription level.  Reading above, you'll note that my main concern with a multi-subscription architecture was networking.  Creating complex mesh topologies just didn't seem worth the effort in most cases.  Luckily, late last year, Microsoft announced VNET peering.
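
To see why those mesh topologies hurt, a quick sketch: connecting n VNETs pairwise requires one link per pair, so the connection count grows quadratically.

```python
def mesh_connections(n):
    # A full mesh between n VNETs needs one connection per pair: n*(n-1)/2.
    return n * (n - 1) // 2

print(mesh_connections(3))   # 3
print(mesh_connections(10))  # 45
```

Three subscriptions are manageable; ten are not, especially when each link used to mean a VPN gateway connection to configure and pay for.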

What is VNET peering?  The link provided goes into a lot of technical detail on what exactly it is, but the easiest way to understand it is as a function of software-defined networking.  Essentially, this service allows you to route traffic between VNETs with different address spaces, with no gateway overhead and no throughput restrictions.

VNET peering solves an important problem for hybrid networking.  I can now easily connect VNETs in the same region and in different subscriptions without having to create complex mesh topologies.


In conclusion, I used to typically recommend keeping it simple and staying with one subscription where possible.  While this is still good advice, VNET peering solves a lot of the technical challenges with multi-subscription hybrid deployments in Azure.  We are now freer to fit subscription architecture to company requirements without the downsides of a more complex networking topology.

Wednesday, June 7, 2017

Video Review: Cloud Post Exploitation Techniques

I just watched an interesting video from the recent Infiltratecon titled Cloud Post Exploitation Techniques.  You can watch the video by clicking here.  The talk was put on by a couple of members of the Azure red team, who focus on trying to break into the public cloud service and feed information on how to do it to blue-team defenders.

What I really liked about this talk was how it reinforced my thinking on the subject of cloud security.  One of my favorite points in security discussions with customers is the idea that storage in Azure is a public-facing service.  Put as many firewalls in place as you want; I don't actually need to bypass those devices to get access to your data.  It really is a change from the traditional way of thinking about security.

Here are some points from the presentation:
  • Think services, not servers
As mentioned above, this is a fundamental tenet of cloud security.  Everything is a service, so you need to switch from the prevention techniques of the past to the detection/response techniques of the future.  The need to audit is greater in the cloud, as mistakes automatically open services to the internet.
  • Subscription Admins are the new Domain admins
This, over and over again!  I remember when Azure introduced the VMAccess extension, which allowed you to reset RDP credentials from the portal.  While I agree you have to balance functionality with security, this one step grants your subscription admins full access to all of your VMs.  It can be dangerous, and it also factors into how RBAC needs to be deployed in Azure to ensure you are keeping with your segregation-of-duties requirements.
  • Using the cloud to pivot
This is every security guy's worst nightmare.  Effective strategies need to be in place to understand the access the cloud environment has to your corporate environment, what tools you can use to defend against that north/south traffic, and what security controls should be in place.

So now what?

As the talk was given at an offensive-security convention, there wasn't much time put into effective mitigation strategies.  It is also important to note that all the attacks shown relied on access to a subscription admin account.  So, what can we do to help mitigate attacks on subscription admins?

The first thing to do is ensure that all your subscription admins have two-factor authentication turned on.  Actually, I would probably extend this to anyone who has access to the management APIs for any service in Azure.  Here is a link for more information:

The second thing to take a look at is Role-Based Access Control in the Azure portal.  With this service you can limit privileges to the management APIs.  This is a great technique for reducing the attack surface, and you should probably be employing separate admin accounts in Azure, much like you would have done on-premises.

Along the lines of reducing the attack surface of your Azure components, Azure Resource Policy can help control what types of services are deployed.  Controlling the types allows the blue teamers to use their "lists" effectively.

Our solution wouldn't be complete without some monitoring and threat detection.  Two particular components come to mind.  The first is Azure Active Directory Identity Protection, whose goal is to provide security intelligence on your logins and user accounts.  The second is Azure Monitor and, more specifically, the Azure Activity Log.  Using this service, you can create alerts to be informed of key events occurring in your Azure subscription.

In conclusion, there are techniques in Azure that can help defend against the threats shown in this video.  The crown jewels in Azure are subscription admins, and there are things we can do to help mitigate threats against those accounts.

Sunday, June 4, 2017

Azure Bot Service: A basic timer bot

So far in this process, we have created a bot that responds to user input via the prompt dialog process and then sends a request to Azure Automation via a webhook.  The next step to address in our bot is how we inform the user who made the request about progress or, should the need arise, any errors.  The goal of this post is to build a very basic timer bot that uses some of the concepts behind proactive messages.  I want to better understand the concepts in play, the data required, and the user experience.

The code I built below is derived from this example on GitHub.  Here is the test code I wrote:

using Microsoft.Bot.Builder.Dialogs;
using Microsoft.Bot.Connector;
using System;
using System.Threading;
using System.Threading.Tasks;

namespace TestBot.Dialogs
{
    [Serializable]
    public class TimerDialog : IDialog<object>
    {
        private string fromId;
        private string fromName;
        private string toId;
        private string toName;
        private string serviceUrl;
        private string channelId;
        private string conversationId;
        private string message;

        public Task StartAsync(IDialogContext context)
        {
            // Wait for the first message from the user.
            context.Wait(MessageReceivedAsync);
            return Task.CompletedTask;
        }

        private async Task MessageReceivedAsync(IDialogContext context, IAwaitable<IMessageActivity> result)
        {
            var activity = await result;

            // Capture the addressing data we need to message this user later.
            this.toId = activity.From.Id;
            this.toName = activity.From.Name;
            this.fromId = activity.Recipient.Id;
            this.fromName = activity.Recipient.Name;
            this.serviceUrl = activity.ServiceUrl;
            this.channelId = activity.ChannelId;
            this.conversationId = activity.Conversation.Id;

            PromptDialog.Number(context, this.AfterNumberGiven, "Set a timer for how many seconds?");
        }

        public async Task AfterNumberGiven(IDialogContext context, IAwaitable<long> result)
        {
            var amountOfTime = await result;
            var t = new Timer(new TimerCallback(TimerEvent));
            t.Change(amountOfTime * 1000, Timeout.Infinite);
            this.message = $"Your {amountOfTime} second timer is up.  It started {DateTime.Now}";
            await context.PostAsync($"I will contact you in {amountOfTime} seconds");
        }

        public void TimerEvent(object target)
        {
            var userAccount = new ChannelAccount(this.toId, this.toName);
            var botAccount = new ChannelAccount(this.fromId, this.fromName);
            var connector = new ConnectorClient(new Uri(serviceUrl));

            var message = Activity.CreateMessageActivity();
            message.From = botAccount;
            message.Conversation = new ConversationAccount(id: this.conversationId);
            message.Text = this.message;
            message.Locale = "en-US";

            // Send the proactive message back into the existing conversation.
            connector.Conversations.SendToConversationAsync((Activity)message);
        }
    }
}
So the above code is actually quite simple.  Whereas the example on GitHub went through the trouble of storing all the data in a static object, I simply stored it in the dialog itself.  Based on my understanding of how state works, this dialog is essentially serialized into memory between each request.  Here is what the interaction looks like in the emulator.

You will note that I was able to kick off multiple timers and have them respond independently.  Okay, so here are some notes I made:

1)  I wanted to know more about the data required to continue a conversation.

Here is a snippet from the debugger of my interaction.  Please note that these values were captured from the emulator.


The key pieces above are the channelId and the serviceUrl.  These probably vary greatly based on the channel selected, and I wonder what they look like in Slack, etc.

2)  What is the minimum set of information I need to issue a response?

In my use case, someone may send a request to my bot to kick off an automation job in a private message, but I might still want to broadcast that to the general channel in Slack so that everyone knows the progress of jobs.  You'll note in the code sample above that I actually don't use toId/toName in my message response.  So it seems, at least in the emulator, you don't need to specify those fields.

When I tried removing the message.From property, the emulator generated a 400 error message.

When I decided to create a new direct conversation, the emulator essentially erased my previous history and replaced it with the new conversation.  This is good to know: essentially, I do not NEED to keep the conversation id, although losing it may lead to weird user behavior as state will be lost.
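
To summarize the minimum set of fields, here is a hedged sketch in Python of the activity payload a proactive message would POST to the channel's serviceUrl (this follows the Bot Connector v3 REST shape; the IDs and URL below are hypothetical, and the "required" notes reflect only what I observed in the emulator):

```python
def build_proactive_message(service_url, conversation_id, bot_id, bot_name, text):
    """Build a minimal Bot Connector v3 activity for a proactive message.

    Observed in the emulator: 'from' and 'conversation' are required
    (omitting 'from' produced a 400), while 'recipient' was not needed.
    The payload would be POSTed to:
      f"{service_url}/v3/conversations/{conversation_id}/activities"
    """
    return {
        "type": "message",
        "from": {"id": bot_id, "name": bot_name},
        "conversation": {"id": conversation_id},
        "text": text,
        "locale": "en-US",
    }

# Hypothetical values, mirroring what the emulator capture might contain.
msg = build_proactive_message("http://localhost:9002", "abc123",
                              "bot-id", "TestBot", "Your timer is up.")
```

Whether this truly is the minimum for real channels like Slack is something I still need to verify.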

Ultimately, I suspect some of this behavior will be channel-specific, and I'll need to invest further time with Slack on all the different options.

This was a good first step and a very basic example of playing around with proactive messages.  Now that I have some understanding of what is involved, I will have to add some external storage and a method for Azure Automation to communicate back to the user.  A future post for sure!