Windows Azure Retrospective (Year of Azure – Week 13)

Hey all! Another short post this week, sorry. I’ve been busy playing catch-up and just don’t have time to get done what I set out to do. Partially because I was speaking at the Minnesota Developers Conference this week and still had to finish my presentation, “Windows Azure Roadmap,” and partially because I’m trying to submit a couple of last-minute session abstracts for CodeMash: one about Mango/Win8 with a cloud back end and another on PHP+Azure.

My “roadmap” session was more of a retrospective than a roadmap, due largely to there being no big announcements at the BUILD conference earlier this month. But it was fun pulling this together and realizing exactly how far Azure has come, as well as shedding some light on its early days.

All this said, I’m going to take today to start something I’ve been intending to do for months now but simply never made the time to start: sharing my “roadmap” presentation, complete with my rather lacking presenter notes. Please feel free to take and modify it, just give credit where it’s due, either to me or to the people I “borrowed” content from. I tried to highlight them when I knew who the author was.

Part of the reason for finally getting to this, even if only in a small way, is that today marks the end of my first year as a Microsoft MVP. Tomorrow I may or may not receive an email telling me I’ve been renewed. Being part of the Microsoft MVP program has been one of the most rewarding things I’ve ever done, and I’ll forever be grateful for it. I’ve learned so much that I can’t help but feel a sense of obligation to give back. I’ll be posting all my other presentations online as well, as soon as I can figure out how to get WordPress to format a new page in a way that doesn’t completely suck. Might have to dust off my raw HTML coding skills.

So until next time…

Leveraging the RoleEntryPoint (Year of Azure – Week 12)

So the last two weeks have been fairly “code lite,” my apologies. Work has had me slammed for the last six weeks or so, and it was finally catching up with me. I took this week off (aka minimal conference calls/meetings), and today my phone is off and I have NOTHING on my calendar. So I finally wanted to turn my hand to a more technical blog post.

When not architecting cloud solutions or writing code for clients, I’ve been leading Azure training. Something that usually comes up, and that I make a point of diving into fairly well, is how to be aware of changes in a service’s environment. In the pre-1.3 SDK days, the default role entry points always included a method to handle configuration changes. But since that went away, and we now have the ability to use startup scripts, not as much attention gets paid to these things. So today we’ll review them and call out a few examples.

Yay for code samples!

Methods and Events

There are two groups of hooks that allow us to respond to events or changes in role state/status: methods declared in the RoleEntryPoint class and events in the RoleEnvironment class. But before we dive into these two, we should understand the lifecycle of a role instance.

According to an excellent post by the Azure team from earlier this year, the sequence of events in a role instance that we can respond to is: OnStart, Changing, Changed, Stopping, and OnStop. I’ll add two items to this: Run, which follows OnStart, and StatusCheck, which is used by the Azure agent to determine if the instance is “ready” to receive requests from the load balancer or is “busy”.

So let’s walk through these one by one.

OnStart is where it all begins. When a role instance is started, the Azure Agent will reflect over the role’s primary assembly and, upon finding a class that inherits from RoleEntryPoint, call that class’s OnStart method. By default, that method will usually look like this:

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

    return base.OnStart();
}

And if we created a WorkerRole, we’ll also have a default Run method that looks like this:

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");

    }
}

Run will be called after OnStart completes. But here’s a curveball: we can also add a Run method to a web role, and it will be called by the Azure Agent.
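Here’s a minimal sketch of what that might look like in a web role. This isn’t from the default template; the trace message and sleep interval are just placeholders, and you’d need the usual Microsoft.WindowsAzure.ServiceRuntime, System.Diagnostics, System.Net, and System.Threading namespaces. Just remember that if Run ever returns, the role instance gets recycled.

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Set the maximum number of concurrent connections
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }

    // Even in a web role, the Azure Agent will call Run after OnStart returns.
    public override void Run()
    {
        while (true)
        {
            // Background work that runs alongside IIS serving requests
            Trace.WriteLine("WebRole1 background work", "Information");
            Thread.Sleep(10000);
        }
    }
}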

Next up, we have the OnStop method.

public override void OnStop()
{
    try
    {
        // Add code here that runs when the role instance is to be stopped
    }
    catch (Exception e)
    {
        Trace.WriteLine("Exception during OnStop: " + e.ToString());
        // Take other action as needed.
    }
}

This method is a great place to try and allow our instance to shut down in a controlled and graceful manner. The catch is that we can’t take more than 30 seconds or the instance will be shut down hard. So anything we’re going to do, we’ll need to do quickly.
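As a rough sketch of what that might look like (the onStopCalled/returnedFromRun flags are my own, not part of the SDK), we can signal the Run loop to finish its current unit of work and then let OnStop wait, briefly, for it to wind down:

private volatile bool onStopCalled = false;
private volatile bool returnedFromRun = false;

public override void Run()
{
    while (!onStopCalled)
    {
        // Do one unit of work, then check the flag again
        Trace.WriteLine("Working", "Information");
        Thread.Sleep(1000);
    }

    returnedFromRun = true;
}

public override void OnStop()
{
    onStopCalled = true;

    // Wait for Run to exit its loop - but remember, the whole shutdown
    // only gets about 30 seconds before the hard stop
    while (!returnedFromRun)
    {
        Thread.Sleep(500);
    }
}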

We do have another opportunity to start handling shutdown: the RoleEnvironment.Stopping event. This is called once the instance has been taken out of the load balancer rotation, but isn’t called when the guest VM is rebooted. Because this is an event, we have to create not just the event handler, but also wire it up (typically in OnStart):

RoleEnvironment.Stopping += RoleEnvironmentStopping;

private void RoleEnvironmentStopping(object sender, RoleEnvironmentStoppingEventArgs e)
{
    // Add code that is run when the role instance is being stopped
}

Also related to the load balancer is another event we can handle: StatusCheck. This can be used to tell the Agent whether or not the role instance should get requests from the load balancer.

RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;

// Use the busy object to indicate that the status
// of the role instance must be Busy
private volatile bool busy = false;

private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
   if (this.busy)
      e.SetBusy();
}
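To put that busy flag to work, you’d flip it around any processing where the instance shouldn’t be taking traffic, and the next status check will report Busy until you clear it. A quick sketch (PerformLocalMaintenance is a made-up placeholder for your own long-running work):

// Take this instance out of the load balancer rotation while we work,
// and make sure it goes back in even if the work throws.
busy = true;
try
{
    PerformLocalMaintenance();
}
finally
{
    busy = false;
}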

But we’re not done yet…

Handling Environment Changes

Now there are two more events we can handle: Changing and Changed. These events are ideal for handling changes to the service configuration. We can optionally decide to restart our role instance by setting RoleEnvironmentChangingEventArgs.Cancel to true during the Changing event.

RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
    // Raised before the change is applied; set e.Cancel = true to recycle the instance
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    // Raised after the change has been applied to the running instance
}

The real value in both these is detecting and handling changes. If we just want to iterate through changes, we can put in a code block like this:

// Get the list of configuration changes
var settingChanges = e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>();

foreach (var settingChange in settingChanges)
{
    var message = "Setting: " + settingChange.ConfigurationSettingName;
    Trace.WriteLine(message, "Information");
}

If you wanted to only handle Topology changes (say a role instance being added or removed), you would use a snippet like this:

// topology changes
var changes = from ch in e.Changes.OfType<RoleEnvironmentTopologyChange>()
              where ch.RoleName == RoleEnvironment.CurrentRoleInstance.Role.Name
              select ch;
if (changes.Any())
{
    // Topology change occurred in the current role
}
else
{
    // Topology change occurred in a different role
}

Lastly, there are times when you may only be updating a configuration setting. If you want to test for this (and recycle the instance so the change is applied on a fresh start), we’d use a snippet like this:

if (e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange))
{
    e.Cancel = true; // setting Cancel to true recycles (restarts) this role instance
}
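And once a change has actually been applied, the Changed event is where you’d pick up the new values. A quick sketch of filling in that handler (nothing here is specific to my project; it just echoes each updated setting):

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
    foreach (var settingChange in e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>())
    {
        // By the time Changed fires, this returns the new value
        var newValue = RoleEnvironment.GetConfigurationSettingValue(settingChange.ConfigurationSettingName);
        Trace.WriteLine("Setting " + settingChange.ConfigurationSettingName + " is now: " + newValue, "Information");
    }
}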

Discovery of the RoleEntryPoint

This all said, there are two common questions that come up: how does Azure find the entry point, and can I set up a common entry point to be used by multiple roles? I’ll address the latter first.

The easiest way I’ve found to create a shared RoleEntryPoint is to set it up in its own class library, then add a reference to that library in each role project. In each role, change the default RoleEntryPoint to inherit from the shared class. It’s simple to set up (takes less than 5 minutes) and easy for anyone used to object-oriented programming to wrap their head around.
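Here’s a minimal sketch of the idea (class names are my own, just for illustration): a shared entry point in its own library doing the common wiring, and a role’s entry point inheriting from it.

// In the shared class library
public class SharedRoleEntryPoint : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Common wiring every role gets for free
        RoleEnvironment.Changing += (sender, e) =>
            Trace.WriteLine("Configuration changing", "Information");
        RoleEnvironment.Stopping += (sender, e) =>
            Trace.WriteLine("Instance stopping", "Information");

        return base.OnStart();
    }
}

// In the role project, inherit from the shared class instead of RoleEntryPoint
public class WorkerRole : SharedRoleEntryPoint
{
    public override bool OnStart()
    {
        // Role-specific setup, then let the shared class do its thing
        ServicePointManager.DefaultConnectionLimit = 12;
        return base.OnStart();
    }
}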

The first question, about discovery, is a bit more complicated. If you read through that entire thread, you’ll find references to a “targets” file and the cloud service project. Prior to the new 1.5 SDK, that was the story. But 1.5 introduced an update to the Web and Worker role schemas: a new element, NetFxEntryPoint, that we can use to specify the assembly to be searched for the RoleEntryPoint. Using this, you can point directly at an assembly that contains the RoleEntryPoint.
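If I recall the 1.5 schema correctly, the element goes under the role’s Runtime/EntryPoint node in the service definition file, something along these lines (the assembly name here is just a placeholder, so double-check against the schema documentation):

<WorkerRole name="WorkerRole1">
  <Runtime>
    <EntryPoint>
      <NetFxEntryPoint assemblyName="MySharedEntryPoint.dll" targetFrameworkVersion="v4.0" />
    </EntryPoint>
  </Runtime>
</WorkerRole>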

Both approaches work, so use the one that best fits your needs.

And now we exit…

Everything I’ve put in this post is available on MSDN, so I haven’t really built anything new here. But what I have done is create a new Cloud Service project that contains all of these methods and events, and that even demonstrates the inheritance approach for a shared entry point. It’s a nice reference project you can copy/paste from when you need examples without having to hunt through MSDN. You can download it from here.

That’s all for this week. So until next time!

BUILD/Windows Summary (Year of Azure–Week 11)

When I posted my last update, I had planned on doing some live blogging for week 11 from the BUILD conference. Unfortunately, as a consultant, personal desires are “subject to the demands of the service,” as they say in one of my favorite book series. So instead of being in LA, I spent the week in Tampa working long hours with a great team to prepare a proposal for a client. Sadly, this left me with almost no time to scan news from the conference.

To top off the lack of sleep from this week, I have a good friend coming into town today and we have plans that will consume the entire day. So this post will unfortunately not be what I had originally hoped. With all that said, on to the news…

The Team Blog has been busy with updates:

There were also some great Azure sessions at the conference, with recordings now available. I’d recommend Mark Russinovich’s “Inside Windows Azure”, “What’s new in Windows Azure” with James Conard, and “Building Windows 8 and Windows Azure Apps” by Steve Marx. But there are lots of great sessions, so be sure to check out the latest.

Well, I need to get ready for company, so until next time!

The Virtual DMZ (Year of Azure Week 10)

Hey folks, super short post this week. I’m busy beyond belief and don’t have time for much. So what I want to call out is something that’s not really new, but something that I believe hasn’t been mentioned enough: securing services hosted in Windows Azure so that only the parties I want connecting to them can.

In a traditional infrastructure, we’d use mutual authentication and certificates. Both communicating parties would have to have a certificate installed, and they’d exchange/validate those when establishing a connection. If you only share certificates with people you trust, this makes for a fairly easy way to secure a service. The challenge, however, is that if you have 10,000 customers you connect with in this manner, you now have to coordinate any change in the certificates with all 10,000.

Well, if we add in the Azure AppFabric Access Control Service, we can mitigate some of those challenges: set up a rule that will take multiple certificates and issue a single standardized token. I’d heard of this approach a while back but never had time to explore it or create a working demo. Well, I needed one recently, so I sent out some calls to my network, and fortunately a colleague down in Texas found something on MSDN that I’d never run across: How To: Authenticate with a Client Certificate to a WCF Service Protected by ACS.

I’ve taken lately to referring to this approach as the creation of a “virtual DMZ.” You have one or more publicly accessible services running in Windows Azure with input endpoints. You then have another set of “private” services, also with input endpoints, secured by certificates and the ACS.

It’s a powerful option, and by using the ACS it isn’t overly complex to set up or manage. Yes, there’s an initial hit on calls to the secured services because callers first need to get a token from the ACS before calling the service, but they can then cache that token until it expires so subsequent calls aren’t impacted as badly.

So there we have it. Short and sweet this week, and sadly sans any code (again). So until next time… send me any suggestions for code samples. :)

Displaying a List of Blobs via your browser (Year of Azure Week 9)

Sorry folks, you get a short and simple one again this week. And with no planning whatsoever, it continues the theme of the last two items: Azure Storage blobs.

So in the demo I did last week I showed how to get a list of blobs in a container via the storage client. Well today my inbox received the following message from a former colleague:

Hey Brent, do you know how to get a list of files that are stored in a container in Blob storage? I can’t seem to find any information on that.  At least any that works.

Well, I pointed out the line of code I used last week, container.ListBlobs(), and he said he was after an approach he’d seen where you could just point a URI at the container and have it work. I realized then that he was talking about the REST API.

Well, as it turns out, the REST API’s List Blobs operation is just a simple GET, so we can execute it from any browser. We just need a URI that looks like this:

http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list

All you need to do is replace the underlined values (the storage account and container names). Well, almost all. If you try this from a browser (which makes an anonymous request), you’ll also need to set the container-level access policy to allow full public read access. If you don’t, you may only be allowing public read access for the blobs in the container, in which case the URI above will fail in a browser.
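If you’d rather set that access level from code than from a storage management tool, here’s a rough sketch using the 1.x StorageClient library (the container name and connection string are placeholders):

var account = CloudStorageAccount.Parse("<your storage connection string>");
var client = account.CreateCloudBlobClient();
var container = client.GetContainerReference("mycontainer");

// Full public read access: anonymous callers can list the blobs, not just read them
container.SetPermissions(new BlobContainerPermissions
{
    PublicAccess = BlobContainerPublicAccessType.Container
});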

Now if you’re successful, your browser should display a nice little chunk of XML that you can show off to your friends. Something like this…

[Screenshot: the XML blob listing returned by the List Blobs request]

Unfortunately, that’s all I have time for this week. So until next time!
