Long Running Queue Processing Part 2 (Year of Azure–Week 20)

So back in July I published a post on doing long running queue processing. In that post we put together a nice sample app that inserted some messages into a queue, then read and processed them one at a time, taking 30 seconds per message. The processing was done in a background thread so that we could monitor it.

This approach was all well and good, but it hinged on us knowing the maximum amount of time it would take to process a message. Well, fortunately for us, in the latest 1.6 version of the Azure Tools (aka SDK) the storage client was updated to take advantage of the new “update message” functionality introduced to queues by an earlier service update. So I figured it was time to update my sample.

UpdateMessage

Fortunately for me, given the upcoming holiday (which doesn’t leave me much time for blogging, since my family lives in “the boonies” and hasn’t yet opted for an internet connection, much less broadband), updating a message is SUPER simple.

myQueue.UpdateMessage(aMsg, new TimeSpan(0, 0, 30), MessageUpdateFields.Visibility);

All we need is the message we read (which contains the pop receipt the underlying API uses to update the invisible message), the new timespan, and finally a flag to tell the API if we’re updating the message content/payload or its visibility. In the sample above we are, of course, setting its visibility.

Ok, time for turkey and dressing! Oh wait, you want the updated project?

QueueBackgroundProcess w/ UpdateMessage

Alright, so I took exactly the same code we used before. It inserts 5 messages into a queue, then reads and processes each individually. The outer processing loop looks like this:

while (true)
{
    // read messages from queue and process one at a time...
    CloudQueueMessage aMsg = myQueue.GetMessage(new TimeSpan(0, 0, 30)); // 30 second timeout
    // trap no message.
    if (aMsg != null)
    {
        Trace.WriteLine("got a message, '" + aMsg.AsString + "'", "Information");

        // start processing of message
        Work workerObject = new Work();
        workerObject.Msg = aMsg;
        Thread workerThread = new Thread(workerObject.DoWork);
        workerThread.Start();

        while (workerThread.IsAlive)
        {
            myQueue.UpdateMessage(aMsg, new TimeSpan(0, 0, 30), MessageUpdateFields.Visibility);
            Trace.WriteLine("Updating message expiry");
            Thread.Sleep(15000); // sleep for 15 seconds
        }

        if (workerObject.isFinished)
            myQueue.DeleteMessage(aMsg.Id, aMsg.PopReceipt); // I could just pass the message; using the Id and pop receipt to illustrate a point
        else
        {
            // here, we should check the message's dequeue count
            // and move the msg to a poison message queue
        }
    }
    else
        Trace.WriteLine("no message found", "Information");

    Thread.Sleep(1000);
    Trace.WriteLine("Working", "Information");
}

The while loop is the processing loop of the worker role this all runs in. I decreased the initial visibility timeout from 2 minutes to 30 seconds, changed how often we check on the background processing thread from every 1/10th of a second to every 15 seconds, and added the call that updates the message’s visibility timeout.

The inner process was also upped from 30 seconds to 1 minute. Now here’s where the example kicks in! Since the original read only specified a 30 second visibility timeout, and my background process will take one minute, it’s important that I update the visibility timeout or the message would fall back into view. So I’m updating it with another 30 seconds every 15 seconds, thus keeping it invisible.

Ta-da! Here’s the project if you want it.

So unfortunately that’s all I have time for this week. I hope all of you in the US enjoy your Thanksgiving holiday weekend (I’ll be spending it with family and not working thankfully). And we’ll see you next week!


Enhanced Visual Studio Publishing (Year of Azure–Week 19)

With the latest 1.6 SDK (ok, now it’s actually called Azure Authoring Tools), Scott Guthrie’s promise of a better developer publishing experience has landed. Building upon the multiple cloud configuration options that were delivered back in September with the Visual Studio tools update, we have an even richer experience.

Now the first thing you’ll notice is that the publish dialog has changed. The first time you run it, you’ll need to sign in and get things set up.

[screenshot: the new publish dialog]

Clicking the “Sign in to download credentials” link will send you to the windows.azure.com website, where a publish-settings file will be generated for you to download. Following the instructions, you’ll download the file, then import it into the publishing window shown above. Then you can choose a subscription from the populated drop-down and proceed.

A wee bit of warning on this though. If you have access to multiple subscriptions (ones you own or are a co-admin on), the creation of a publish-settings file will install the new certificate in each subscription. Additionally, if you click the “Sign in to download” link more than once, you will end up with multiple certs. These aren’t things to be horrified about, I just wanted to make sure I gave a heads up.

Publish Settings

Next up are the publication settings. Here we can select a service to deploy to or create a new one (YEAH!). You can also easily set the environment (production or staging), the build configuration, and the service configuration file to be used. Setting up remote desktop is also as easy as a checkbox.

[screenshot: the publish settings page]

In the end, these settings get captured into a ‘profile’ that is saved and can then be reused. Upon completion, the cloud service project will get a new folder, “Profiles”. In this folder you will find an XML file with the extension .azurePubxml that contains the publication settings.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <AzureCredentials>my subscription</AzureCredentials>
    <AzureHostedServiceName>service name</AzureHostedServiceName>
    <AzureHostedServiceLabel>service label</AzureHostedServiceLabel>
    <AzureSlot>Production</AzureSlot>
    <AzureEnableIntelliTrace>False</AzureEnableIntelliTrace>
    <AzureEnableProfiling>False</AzureEnableProfiling>
    <AzureEnableWebDeploy>False</AzureEnableWebDeploy>
    <AzureStorageAccountName>bmspublic</AzureStorageAccountName>
    <AzureStorageAccountLabel>bmspublic</AzureStorageAccountLabel>
    <AzureDeploymentLabel>newsdk</AzureDeploymentLabel>
    <AzureSolutionConfiguration>Release</AzureSolutionConfiguration>
    <AzureServiceConfiguration>Cloud</AzureServiceConfiguration>
    <AzureAppendTimestampToDeploymentLabel>True</AzureAppendTimestampToDeploymentLabel>
    <AzureAllowUpgrade>True</AzureAllowUpgrade>
    <AzureEnableRemoteDesktop>False</AzureEnableRemoteDesktop>
  </PropertyGroup>
</Project>

This file contains a reference to a storage account, and when I looked at the account I noticed that there was a new container in there called “vsdeploy”. Now the container was empty, but I’m betting this is where the cspkg was sent before being deployed and subsequently deleted. I only wish there was an option to leave the package there after deployment. I love having old packages in the cloud to easily reference.

If we go back into the publish settings again (you may have to click “previous” a few times to get back to the “settings” section) and select “advanced”, you can set some of the other options in this file. Here we can set the storage account to be used as well as enable IntelliTrace and profiling.

The new experience does this using a management certificate that was created for us at the beginning of this process. If you open up the publish-settings file we downloaded at the beginning, you’ll find it’s an XML document with an encoded string representing the management certificate to be used. Hopefully in a future edition, I’ll be able to poke around at these new features a bit more. It appears we may have one or more new APIs at work as well as some new options to help with service management and build automation.

What next?

There’s additional poking around I need to do with these new features. But there’s some great promise here. Out of the box, developers managing one or two accounts are going to see HUGE benefits. Devs in large, highly structured, security-restricted shops are more likely to keep to their existing mechanisms, or look at leveraging this to enhance their existing automated processes.

Meanwhile, I’ll keep poking at this a little bit as well as the other new features of this SDK and report back when I have more.

But that will have to wait until next time. Smile

Windows Azure In-place Upgrades (Year of Azure – Week 16)

On Wednesday, Windows Azure unveiled yet another substantial service improvement, enhancements to in-place upgrades. Before I dive into these enhancements and why they’re important, I want to talk first about where we came from.

PS – I say “in-place upgrade” because the button on the Windows Azure portal is labeled “upgrade”. But the latest blog post calls this an “update”. As far as I’m concerned, these are synonymous.

Inside Windows Azure

If you haven’t already, I encourage you to set aside an hour, turn off your phone, email, and yes even twitter so you can watch Mark Russinovich’s “Inside Windows Azure” presentation. Mark does an excellent job of explaining that within the Windows Azure datacenter, we have multiple clusters. When you select an affinity group, this tells the Azure Fabric Controller to try to put all resources aligned with that affinity group into the same cluster. Within a cluster, you have multiple server racks, each with multiple servers, each in turn with multiple cores.

Now these resources are divided up essentially into slots, with each slot being the space necessary for a small Windows Azure instance (one 1.6 GHz core and 1.75 GB of RAM). When you deploy your service, the Azure Fabric will allocate these slots (1 for a small, 2 for a medium, etc…) and provision a guest virtual machine that claims those resources. It also sets up the VHD that will be mounted into that VM for any local storage you’ve requested, and configures the firewall and load balancers for any endpoints you’ve defined.

These parameters, the instance size, endpoints, local storage… are what I’ve taken to calling the Windows Azure service signature.

Now if this signature wasn’t changing, you had the option of deploying new bits to your cloud service using the “upgrade” option. This allowed you to take advantage of upgrade domains to do a rolling update and deploy functional changes to your service. The advantage of the in-place upgrade was that you didn’t “restart the clock” on your hosting costs (the hourly billing for Azure works like cell phone minutes), and it was also faster since the provisioning of resources was a bit more streamlined. I’ve seen a single developer deploying a simple service eat through a couple hundred compute hours in a day just by deleting and redeploying. So this was an important feature to take advantage of whenever possible.

If we needed to change this service signature, we were forced to either stop/delete/redeploy our services, or deploy to another slot (staging or a separate service) and perform either a VIP or DNS swap. The reason was that, in the case of a change in size, the instance might have to be moved to a new set of “slots” to get the resources you wanted. For the firewall/load balancer changes, I’m not quite sure what the limitation was. But this was life as we’ve known it in Azure for the last (dang, has it really been this long?)… 2+ years now. With this update, much of that limitation has been removed.

What’s new?

With the new enhancements we can basically forget about the service signature. The gloves are officially off! We will need the 1.5 SDK to take advantage of changes to size, local storage, or endpoints, but that’s a small price to pay. Especially since the management API already supports these changes.

The downside is that the Visual Studio tools do not currently take advantage of this feature. However, with Scott “the Gu” Guthrie at the helm of the Azure tools, I expect this won’t be the case for long.

I’d dive more into exactly how to use this new feature, but honestly the team blog has done a great job and I can’t see myself wanting to add anything (aside from the backstory I already have). So that’s all for this week.

Until next time!

My Presentations Posted (Year of Azure – Week 15)

Ok, barely getting this one in. It could be because it has been a week full of multiple priorities, or simply distractions (I did finally pick up Gears of War 3). Regardless, I have two separate posts I’m working on but neither is ready yet. So instead of code, I finally made the time to update the resource page with some additional PowerPoints and other media.

I have a few more, but I’ve left out ones that were created specifically for clients. So enjoy!

Meanwhile, one quick tip to make up for the lack of a “real” update. Under isolated circumstances, RoleEnvironment.IsAvailable may not return the proper result when running in the 1.4 SDK’s development fabric. I’ve seen it happen where it won’t return the proper result if you call it anywhere but inside the RoleEntryPoint-based class. However, upgrading to the 1.5 SDK can quickly fix this.

Configuration in Azure (Year of Azure–Week 14)

Another late post, and one that isn’t nearly what I wanted to do. I’m about a quarter of the way through this year of weekly updates and frankly, I’m not certain I’ll be able to complete it. Things continue to get busy with more and more distractions lined up. Anyways…

So my “spare time” this week has been spent looking into configuration options.

How do I know where to load a configuration setting from?

So you’ve sat through some Windows Azure training and they explained that you have the service configuration, that you should use it instead of the web.config, and they covered using RoleEnvironment.GetConfigurationSettingValue. But how do you know which location to load a setting from? This is where RoleEnvironment.IsAvailable comes into play.

Using this value, we can write code that will pull from the proper source depending on the environment our application is running in. Like the snippet below:

if (RoleEnvironment.IsAvailable)
    return RoleEnvironment.GetConfigurationSettingValue("mySetting");
else
    return ConfigurationManager.AppSettings["mySetting"].ToString();

Take this a step further and you can put this logic into a property so that all your code can just reference the property. Simple!
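For example, here’s a minimal sketch of that helper idea (my own illustration, not code from the original post; the setting name is just a placeholder):

using System.Configuration;
using Microsoft.WindowsAzure.ServiceRuntime;

public static class ConfigHelper
{
    // Pull from the service configuration when running under the Azure fabric,
    // otherwise fall back to the app.config/web.config appSettings section.
    public static string GetSetting(string name)
    {
        if (RoleEnvironment.IsAvailable)
            return RoleEnvironment.GetConfigurationSettingValue(name);

        return ConfigurationManager.AppSettings[name];
    }
}

// usage: string value = ConfigHelper.GetSetting("mySetting");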

But what about CloudStorageAccount?

Ok, but CloudStorageAccount has methods that automatically load from the service configuration. If I’ve written code to take advantage of those, am I stuck? Well, not necessarily. Now you may have seen a code snippet like this before:

CloudStorageAccount.SetConfigurationSettingPublisher(
    (configName, configSetter) =>
        configSetter(RoleEnvironment.GetConfigurationSettingValue(configName))
);

This is the snippet you need to call to avoid the “SetConfigurationSettingPublisher needs to be called before FromConfigurationSetting can be used.” error message. But what is really going on here is that we are setting a handler for retrieving configuration settings. In this case, RoleEnvironment.GetConfigurationSettingValue.

But as is illustrated by a GREAT post from Windows Azure MVP Steven Nagy, you can set your own handler, and in that handler you can roll your own provider that looks something like this:

public static Action<string, Func<string, bool>> GetConfigurationSettingPublisher()
{
    if (RoleEnvironment.IsAvailable)
        return (configName, configSetter) =>
            configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
    return (configName, configSetter) =>
        configSetter(ConfigurationManager.AppSettings[configName]);
}
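
And for completeness, here’s roughly how I’d expect that provider to get wired up (a sketch on my part; “DataConnectionString” is just an example setting name):

// typically done once in OnStart, before any FromConfigurationSetting calls
CloudStorageAccount.SetConfigurationSettingPublisher(GetConfigurationSettingPublisher());

// now this works the same whether we're in the cloud or running outside the fabric
CloudStorageAccount account = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");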

Flexibility is good!

Where to next?

Keep in mind that these two examples both focus on pulling from configuration files already available to us. There’s nothing stopping us from creating methods that pull from other sources. There’s nothing stopping us from creating methods that can take a single string configuration setting that is an XML document and hydrate it. We can pull settings from another source, be it persistent storage or perhaps even another service. The options are up to us.
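As a quick illustration of that last idea (purely a sketch of my own, with made-up property names), a setting whose value is itself an XML document could be hydrated like this:

using System.IO;
using System.Xml.Serialization;

public class MySettings
{
    public string SmtpServer { get; set; }
    public int RetryCount { get; set; }
}

public static MySettings HydrateSettings(string settingXml)
{
    // the configuration setting's value is a small XML document;
    // deserialize it into a strongly typed settings object
    var serializer = new XmlSerializer(typeof(MySettings));
    using (var reader = new StringReader(settingXml))
    {
        return (MySettings)serializer.Deserialize(reader);
    }
}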

Next week, I hope (time available of course) to put together a small demo of how to work with encrypted settings. So until then!

PS – yes, I was renewed as an Azure MVP for another year! #geekgasm

Windows Azure Retrospective (Year of Azure – Week 13)

Hey all! Another short post this week. Sorry. Been busy playing catch-up and just don’t have time to get done what I set out to do. Partially because I was speaking at the Minnesota Developer’s Conference this week and still had to finish my presentation, “Windows Azure Roadmap”, and partially because I’m trying to submit a couple of last-minute session abstracts for CodeMash: one about Mango/Win8 with a cloud back end and another on PHP+Azure.

My “roadmap” session was more of a retrospective than a roadmap, due largely to there being no big announcements at the BUILD conference earlier this month. But it was fun pulling this together and realizing exactly how far Azure has come, as well as shedding some light on its early days.

All this said, I’m going to take today to start something I’ve been intending to do for months now but simply never made the time to start: I want to share my “roadmap” presentation, complete with my rather lacking presenter notes. Please feel free to take it and modify it, just give credit where it’s due, either to me or the people I “borrowed” content from. I tried to highlight them when I knew who the author was.

Part of the reason for finally getting to this, even if only in a small way, is that today marks the end of my first year as a Microsoft MVP. Tomorrow I may or may not receive an email telling me I have been renewed. The experience has been one of the most rewarding things I’ve ever done. Being part of the Microsoft MVP program has been incredible and I’ll forever be grateful for being a part of it. I’ve learned so much that I can’t help but feel a sense of obligation to give back. I’ll be posting all my other presentations online as well, as soon as I can figure out how to get WordPress to format a new page in a way that doesn’t completely suck. Might have to dust off my raw HTML coding skills.

So until next time…

Leveraging the RoleEntryPoint (Year of Azure – Week 12)

So the last two weeks have been fairly “code lite”, my apologies. Work has had me slammed the last 6 weeks or so and it was finally catching up with me. I took this week off (aka minimal conference calls/meetings), but today my phone is off, and I have NOTHING on my calendar. So I finally wanted to turn my hand to a more technical blog post.

When not architecting cloud solutions or writing code for clients, I’ve been leading Azure training. Something that usually comes up, and that I make a point of diving into fairly well, is how to be aware of changes in a service’s environment. In the pre-1.3 SDK days, the default role entry points always had a method in them to handle configuration changes. But since that has gone away and we have the ability to use startup scripts, not as much attention gets paid to these things. So today we’ll review it and call out a few examples.

Yeah for code samples!

Methods and Events

There are two groups of hooks that allow us to respond to events or changes in role state/status: methods declared in the RoleEntryPoint class and events in the RoleEnvironment class. But before we dive into these two, we should understand the lifecycle of a role instance.

According to an excellent post by the Azure team from earlier this year, the sequence of events in a role instance that we can respond to is: OnStart, Changing, Changed, Stopping, and OnStop. I’ll add two items to this: Run, which follows OnStart, and StatusCheck, which is used by the Azure agent to determine if the instance is “ready” to receive requests from the load balancer, or is “busy”.

So let’s walk through these one by one.

OnStart is where it all begins. When a role instance is started the Azure Agent will reflect over the role’s primary assembly and upon finding a class that inherits from RoleEntryPoint, it will call that class’s OnStart method. Now by default, that method will usually look like this:

public override bool OnStart()
{
    // Set the maximum number of concurrent connections
    ServicePointManager.DefaultConnectionLimit = 12;

    // For information on handling configuration changes
    // see the MSDN topic at http://go.microsoft.com/fwlink/?LinkId=166357.

    return base.OnStart();
}

And if we created a WorkerRole, we’ll also have a default Run method that looks like this:

public override void Run()
{
    // This is a sample worker implementation. Replace with your logic.
    Trace.WriteLine("WorkerRole1 entry point called", "Information");

    while (true)
    {
        Thread.Sleep(10000);
        Trace.WriteLine("Working", "Information");

    }
}

Run will be called after OnStart. But here’s a curveball: we can add a Run method to a web role and it will be called by the Azure Agent.

Next up, we have the OnStop method.

public override void OnStop()
{
    try
    {
        // Add code here that runs when the role instance is to be stopped
    }
    catch (Exception e)
    {
        Trace.WriteLine("Exception during OnStop: " + e.ToString());
        // Take other action as needed.
    }
}

This method is a great place to try and allow our instance to shut down in a controlled and graceful manner. The catch is that we can’t take more than 30 seconds or the instance will be shut down hard. So anything we’re going to do, we’ll need to do quickly.
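As a rough sketch of what “quick and graceful” could look like (mine, not from the SDK templates), the worker’s Run loop can watch a flag that OnStop flips, and then be given a few seconds to finish its current item:

private volatile bool keepRunning = true;

public override void Run()
{
    while (keepRunning)
    {
        // placeholder for one small unit of work; keeping iterations short
        // means a stop request is noticed quickly
        Thread.Sleep(1000);
    }
}

public override void OnStop()
{
    // signal the Run loop to exit and give it a moment to wrap up;
    // everything here has to finish well inside the 30 second window
    keepRunning = false;
    Thread.Sleep(5000);

    base.OnStop();
}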

We do have another opportunity to start handling shutdown: the RoleEnvironment.Stopping event. This is called once the instance has been taken out of the load balancer, but isn’t called when the guest VM is rebooted. Because this is an event, we have to create not just the event handler, but also wire it up:

RoleEnvironment.Stopping += RoleEnvironmentStopping;

private void RoleEnvironmentStopping(object sender, RoleEnvironmentStoppingEventArgs e)
{
    // Add code that is run when the role instance is being stopped
}

Now, also related to the load balancer, another event we can handle is StatusCheck. This can be used to tell the Agent whether the role instance should or shouldn’t get requests from the load balancer.

RoleEnvironment.StatusCheck += RoleEnvironmentStatusCheck;

// Use the busy object to indicate that the status
// of the role instance must be Busy
private volatile bool busy = false;

private void RoleEnvironmentStatusCheck(object sender, RoleInstanceStatusCheckEventArgs e)
{
   if (this.busy)
      e.SetBusy();
}

But we’re not done yet…

Handling Environment Changes

Now there are two more events we can handle, Changing and Changed. These events are ideal for handling changes to the service configuration. We can optionally decide to restart our role instance by setting the event’s RoleEnvironmentChangingEventArgs.Cancel property to true during the Changing event.

RoleEnvironment.Changing += RoleEnvironmentChanging;

private void RoleEnvironmentChanging(object sender, RoleEnvironmentChangingEventArgs e)
{
}

RoleEnvironment.Changed += RoleEnvironmentChanged;

private void RoleEnvironmentChanged(object sender, RoleEnvironmentChangedEventArgs e)
{
}

The real value in both these is detecting and handling changes. If we just want to iterate through changes, we can put in a code block like this:

// Get the list of configuration changes
var settingChanges = e.Changes.OfType<RoleEnvironmentConfigurationSettingChange>();

foreach (var settingChange in settingChanges)
{
    var message = "Setting: " + settingChange.ConfigurationSettingName;
    Trace.WriteLine(message, "Information");
}

If you wanted to only handle Topology changes (say a role instance being added or removed), you would use a snippet like this:

// topology changes
var changes = from ch in e.Changes.OfType<RoleEnvironmentTopologyChange>()
              where ch.RoleName == RoleEnvironment.CurrentRoleInstance.Role.Name
              select ch;
if (changes.Any())
{
    // Topology change occurred in the current role
}
else
{
    // Topology change occurred in a different role
}

Lastly, there are times where you may only be updating a configuration setting. If you want to test for this, then we’d use a snippet like this:

if ((e.Changes.Any(change => change is RoleEnvironmentConfigurationSettingChange)))
{
    e.Cancel = true; // setting this to true recycles (restarts) the role instance to apply the change
}

Discovery of the RoleEntryPoint

This all said, there are two common questions that come up: how does Azure find the entry point, and can I set up a common entry point to be used by multiple roles? I’ll address the latter first.

The easiest way I’ve found to create a shared RoleEntryPoint is to set it up in its own class library, then add a reference to that library to all the roles. In each role, change its default RoleEntryPoint to inherit from the shared class. Simple enough to set up (takes less than 5 minutes) and easy for anyone used to doing object oriented programming to wrap their heads around.
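A bare-bones sketch of what I mean (the class and member names are mine, purely for illustration):

using Microsoft.WindowsAzure.ServiceRuntime;

// in the shared class library
public class SharedRoleEntryPoint : RoleEntryPoint
{
    public override bool OnStart()
    {
        // common startup work (diagnostics, event wiring, etc.) goes here
        RoleEnvironment.Changing += (sender, e) => { /* shared change handling */ };
        return base.OnStart();
    }
}

// in each web or worker role project
public class WebRole : SharedRoleEntryPoint
{
    public override bool OnStart()
    {
        // role-specific startup, then fall through to the shared logic
        return base.OnStart();
    }
}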

The first question, about discovery, is a bit more complicated. If you read through that entire thread, you’ll find references to a “targets” file and the cloud service project. Prior to the new 1.5 SDK, this was true. But 1.5 introduced an update to the web and worker role schemas: a new element, NetFxEntryPoint, that we can use to specify the assembly to be searched for the RoleEntryPoint. Using this, you can point directly to an assembly that contains the RoleEntryPoint.

Both approaches work, so use the one that best fits your needs.

And now we exit…

Everything I’ve put in this post is available on MSDN. So I haven’t really built anything new here. But what I have done is create a new Cloud Service project that contains all the methods and events and even demonstrates the inheritance approach for a shared entry point. It’s a nice reference project that you can copy/paste from when you need examples without having to hunt through MSDN. You can download it from here.

That’s all for this week. So until next time!

BUILD/Windows Summary (Year of Azure–Week 11)

When I posted my last update, I had planned on doing some live blogging for week 11 from the BUILD conference. Unfortunately, as a consultant, personal desires are “subject to the demands of the service”, as they say in one of my favorite book series. So instead of being in LA, I spent the week in Tampa working long hours with a great team to prepare a proposal for a client. Sadly, this left me with almost no time to scan news from the conference.

To top off the lack of sleep from this week, I have a good friend coming into town today and we have plans that will consume the entire day. So this post will unfortunately not be what I had originally hoped. With all that said, on to the news…

The Team Blog has been busy with updates:

There were also some great Azure sessions at the conference, with recordings now available. I’d recommend Mark Russinovich’s “Inside Windows Azure”, “What’s new in Windows Azure” with James Conard, and “Building Windows 8 and Windows Azure Apps” by Steve Marx. But there are lots of great sessions, so be sure to check out the latest.

Well, I need to get ready for company, so until next time!

The Virtual DMZ (Year of Azure Week 10)

Hey folks, super short post this week. I’m busy beyond belief and don’t have time for much. So what I want to call out is something that’s not really new, but something that I believe hasn’t been mentioned enough: securing services hosted in Windows Azure so that only the parties I want to connect to them can.

In a traditional infrastructure, we’d use Mutual Authentication and certificates. Both communicating parties would have to have a certificate installed, and exchange/validate them when establishing a connection. If you only share/exchange certificates with people you trust, this makes for a fairly easy way to secure a service. The challenge, however, is that if you have 10,000 customers you connect with in this manner, you now have to coordinate a change in the certificates with all 10,000.

Well, if we add in the Azure AppFabric’s Access Control Service, we can help mitigate some of those challenges: set up a rule that will take multiple certificates and issue a single standardized token. I’d heard of this approach a while back but never had time to explore it or create a working demo of it. Well, I needed one recently, so I sent out some calls to my network to track down a demo, and fortunately a colleague down in Texas found something ON MSDN that I’d never run across: How To: Authenticate with a Client Certificate to a WCF Service Protected by ACS.

I’ve taken lately to referring to this approach as the creation of a “virtual DMZ”. You have one or more publicly accessible services running in Windows Azure with input endpoints. You then have another set of “private” services, also with input endpoints, secured by certificates and the ACS.

A powerful option, and one that, by using the ACS, isn’t overly complex to set up or manage. Yes, there’s an initial hit with calls to the secured services because they first need to get a token from the ACS before calling the service, but they can then cache that token until it expires to make sure subsequent calls aren’t impacted as badly.

So there we have it. Short and sweet this week, and sadly sans any code (again). So until next time… send me any suggestions for code samples. Smile

Displaying a List of Blobs via your browser (Year of Azure Week 9)

Sorry folks, you get a short and simple one again this week. And with no planning whatsoever, it continues the theme of the last two posts… Azure Storage blobs.

So in the demo I did last week I showed how to get a list of blobs in a container via the storage client. Well today my inbox received the following message from a former colleague:

Hey Brent, do you know how to get a list of files that are stored in a container in Blob storage? I can’t seem to find any information on that.  At least any that works.

Well I pointed out the line of code I used last week, container.ListBlobs(), and he said he was after an approach he’d seen that you could just point a URI at it and have it work. I realized then he was talking about the REST API.

Well, as it turns out, the REST API’s List Blobs operation just needs a simple GET request. So we can execute it from any browser. We just need a URI that looks like this:

http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list

All you need to do is replace the underlined values. Well, almost all. If you try this with a browser (which makes an anonymous request), you’ll also need to set the container-level access policy to allow full public read access. If you don’t, you may only be allowing public read access for the blobs in the container, in which case a browser request with the URI above will fail.

Now if you’re successful, your browser should display a nice little chunk of XML that you can show off to your friends. Something like this…

[screenshot: the XML listing returned by the List Blobs request]
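
And if you’d rather pull the same listing from code instead of the browser, a quick sketch (the account and container names are placeholders) might look like this:

using System;
using System.Net;

class ListBlobsDemo
{
    static void Main()
    {
        // the same List Blobs REST call the browser makes; it's anonymous,
        // so the container must allow full public read access
        string uri = "http://myaccount.blob.core.windows.net/mycontainer?restype=container&comp=list";

        using (var client = new WebClient())
        {
            string xml = client.DownloadString(uri);
            Console.WriteLine(xml); // the raw XML listing of the blobs
        }
    }
}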

Unfortunately, that’s all I have time for this week. So until next time!