Uploading Image Blobs–Stream vs Byte Array (Year of Azure Week 4)

Ok, I promised when I started this Year of Azure project that some of these posts would be short. It's been a busy week, so I'm going to give you a quick one based on a question posted on the MSDN Azure forums.

The question centered around getting the following code snippet to work (I've paraphrased it a bit).

  1. MemoryStream streams = new MemoryStream();
  2.  
  3. // create storage account
  4. var account = CloudStorageAccount.DevelopmentStorageAccount;
  5. // create blob client
  6. CloudBlobClient blobStorage = account.CreateCloudBlobClient();
  7.  
  8. CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
  9. container.CreateIfNotExist(); // adding this for safety
  10.  
  11. string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
  12.  
  13. CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
  14. blob.Properties.ContentType = "image\\jpeg";
  15.  
  16. System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
  17.  
  18. imgs.Save(streams, ImageFormat.Jpeg);
  19.  
  20. byte[] imageBytes = streams.GetBuffer();
  21.  
  22. blob.UploadFromStream(streams);
  23.  
  24. imgs.Dispose();
  25. streams.Close();

Now the crux of the problem was that the resulting image in storage was empty (zero bytes). As Steve Marx correctly pointed out, the key thing missing is resetting the stream back to position zero before uploading. The corrected code looks like this; note the addition of line 22, which fixes things just fine.

  1. MemoryStream streams = new MemoryStream();
  2.  
  3. // create storage account
  4. var account = CloudStorageAccount.DevelopmentStorageAccount;
  5. // create blob client
  6. CloudBlobClient blobStorage = account.CreateCloudBlobClient();
  7.  
  8. CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
  9. container.CreateIfNotExist(); // adding this for safety
  10.  
  11. string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
  12.  
  13. CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
  14. blob.Properties.ContentType = "image\\jpeg";
  15.  
  16. System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
  17.  
  18. imgs.Save(streams, ImageFormat.Jpeg);
  19.  
  20. byte[] imageBytes = streams.GetBuffer();
  21.  
  22. streams.Seek(0, SeekOrigin.Begin);
  23. blob.UploadFromStream(streams);
  24.  
  25. imgs.Dispose();
  26. streams.Close();

But the issue I still have here is that the original code sample pulls a byte array and then never does anything with it. A byte array is, however, a perfectly valid way to upload the image, so I reworked the sample a bit to use it.

  1. MemoryStream streams = new MemoryStream();
  2.  
  3. // create storage account
  4. var account = CloudStorageAccount.DevelopmentStorageAccount;
  5. // create blob client
  6. CloudBlobClient blobStorage = account.CreateCloudBlobClient();
  7.  
  8. CloudBlobContainer container = blobStorage.GetContainerReference("guestbookpics");
  9. container.CreateIfNotExist(); // adding this for safety
  10.  
  11. string uniqueBlobName = string.Format("image_{0}.jpg", Guid.NewGuid().ToString());
  12.  
  13. CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
  14. blob.Properties.ContentType = "image/jpeg"; // corrected MIME type (forward slash)
  15.  
  16. System.Drawing.Image imgs = System.Drawing.Image.FromFile("waLogo.jpg");
  17.  
  18. imgs.Save(streams, ImageFormat.Jpeg);
  19.  
  20. byte[] imageBytes = streams.GetBuffer();
  21. blob.UploadByteArray(imageBytes);
  22.  
  23. imgs.Dispose();
  24. streams.Close();

Here we pulled out the reset of the stream and replaced UploadFromStream with UploadByteArray.

Funny part is that while both samples work, the resulting blobs are different sizes. And since these aren't the only ways to upload files, there might be other sizes out there too. But I'm short on time, so maybe we'll explore that a bit further another day. The mysteries of Azure never cease.
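The likely culprit, for what it's worth, isn't Azure at all but MemoryStream: GetBuffer() returns the stream's entire internal buffer, unused capacity and all, while UploadFromStream only sends the bytes that were actually written. A quick standalone sketch shows the difference, along with the ToArray() alternative that copies just the written bytes:

    MemoryStream ms = new MemoryStream();
    byte[] data = new byte[10];
    ms.Write(data, 0, data.Length);

    Console.WriteLine(ms.Length);             // 10 - the bytes actually written
    Console.WriteLine(ms.GetBuffer().Length); // the full internal buffer, larger than 10
    Console.WriteLine(ms.ToArray().Length);   // 10 - a copy of just the written bytes

So swapping streams.GetBuffer() for streams.ToArray() in the byte array version should produce a blob whose size matches the one from UploadFromStream.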

Next time!

Long Running Queue Processing (Year of Azure Week 3)

Sitting in Denver International Airport as I write this. I doubt I'll have time to finish it before I have to board, and my portable "Azure Appliance" (what a few of us have started calling my 8 lb, 17" wide-screen laptop) simply isn't good for working on the plane. I had hoped to write this while working in San Diego this week, but simply didn't have the time.

And no, I wasn't at Comic-Con. I was in town visiting a client for a few days of cloud discovery meetings. The closest I came was being a couple of blocks away and bumping into two young women in my hotel dressed as the "Black Swan" and Sookie from True Blood. But I'm rambling now… fatigue, I guess.

Anyways, a message came across a DL I'm on about long-running processing of Azure Storage queue messages. And while unfortunately there's no way to renew the "lease" on a message, it did get me thinking that this week my post would be about one of the many options for processing a long-running queue message.

My approach will only work if the long-running process takes less than the maximum visibilitytimeout value allowed for a queue message, namely 2 hours (thanks to Mike Collier for pointing out my typo). Essentially, I'll spin the message processing off onto a background thread and then monitor that thread from the foreground. This approach would also let me multi-thread the processing of my messages (which I won't demo this week for lack of time). It also allows me to monitor the processing more easily, since I'm doing it from outside of the process.

Setting up the Azure Storage Queue

It's been a long time since I've published an Azure Storage queue sample, and the last time I messed with them I was hand-coding against the REST API. This time we'll use the StorageClient library.

    // create storage account and client
    var account = CloudStorageAccount.FromConfigurationSetting("AzureStorageAccount");
    CloudQueueClient qClient = account.CreateCloudQueueClient();
    // create a queue object to use to manipulate the queue
    CloudQueue myQueue = qClient.GetQueueReference("tempqueue");
    // make sure our queue exists
    myQueue.CreateIfNotExist();

Pretty straightforward code here. We get our configuration setting and create a CloudQueueClient that we'll use to interact with our Azure Storage service. I then use that client to create a CloudQueue object, which we'll use to interact with our specific queue. Lastly, we create the queue if it doesn't already exist.

Note: I wish the Azure AppFabric queues had this “Create If Not Exist” method. *sigh*

Inserting some messages

Next up we insert some messages into the queue so we’ll have something to process. That’s even easier….

    // insert a few (5) messages
    int iMax = 5;
    for (int i = 1; i <= iMax; i++)
    {
        CloudQueueMessage tmpMsg = new CloudQueueMessage(string.Format("Message {0} of {1}", i, iMax));
        myQueue.AddMessage(tmpMsg);
        Trace.WriteLine("Wrote message to queue.", "Information");
    }

I insert 5 messages numbered 1-5 using the CloudQueue AddMessage method. I should really trap for exceptions here, but this works for demonstration purposes. Now for the fun part…
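One step I'm glossing over is pulling a message back off the queue to process. That would look something like this (the aMsg variable is what the monitoring code further down works with, and hiding the message for the full 2 hours gives our long-running work the maximum window before it reappears for someone else):

    // grab a single message and keep it invisible to other consumers for the
    // maximum visibility timeout (2 hours); if we never delete it, it simply
    // reappears on the queue once that window expires
    CloudQueueMessage aMsg = myQueue.GetMessage(TimeSpan.FromHours(2));
    if (aMsg == null)
        return; // queue is empty right now, nothing to process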

Setting up our background work process

Ok, now we have to set up an object that we can use to run the processing of our message. Stealing some code from the MSDN help files and making a couple of minor mods, we end up with this…

    class Work
    {
        public void DoWork()
        {
            try
            {
                int i = 0;

                while (!_shouldStop && i <= 30)
                {
                    Trace.WriteLine("worker thread: working…");
                    i++;
                    Thread.Sleep(1000); // sleep for 1 sec
                }
                Trace.WriteLine("worker thread: terminating gracefully.");
                isFinished = true;
            }
            catch
            {
                // we should do something here
                isFinished = false;
            }
        }

        public CloudQueueMessage Msg;
        public bool isFinished = false;

        public void RequestStop()
        {
            _shouldStop = true;
        }

        // Volatile is used as a hint to the compiler that this data
        // member will be accessed by multiple threads.
        private volatile bool _shouldStop;
    }

The key here is the DoWork method. This one is set up to take about 30 seconds to "process" a message. We also have the ability to abort processing using the RequestStop method. It's not ideal, but this will get our job done. We really need something more robust in the catch block, but at least we'd catch any errors and indicate that processing failed.

Monitoring Processing

Next up, we need to launch our processing and monitor it.

    // start processing of the message
    // (aMsg is the CloudQueueMessage we pulled off the queue earlier)
    Work workerObject = new Work();
    workerObject.Msg = aMsg;
    Thread workerThread = new Thread(workerObject.DoWork);
    workerThread.Start();

    while (workerThread.IsAlive)
    {
        Thread.Sleep(100);
    }

    if (workerObject.isFinished)
        myQueue.DeleteMessage(aMsg.Id, aMsg.PopReceipt); // I could just use the message, illustrating a point
    else
    {
        // here, we should check the dequeue count
        // and move the msg to a poison message queue
    }

Simply put, we create our worker object, give it a parameter (our message), and create/launch a thread to process the message. We then use a while loop to monitor the thread. When it's complete, we check isFinished to see if processing completed successfully.
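If you want to harden this a bit, you could bound that monitoring loop (putting the RequestStop method to use) and flesh out the else branch with the poison message handling I hinted at. A rough sketch, reusing the objects from above, might look like this:

    // give up if processing gets close to the 2 hour visibility timeout
    TimeSpan processingLimit = TimeSpan.FromMinutes(110);
    DateTime started = DateTime.UtcNow;

    while (workerThread.IsAlive)
    {
        if (DateTime.UtcNow - started > processingLimit)
        {
            workerObject.RequestStop(); // ask the worker to terminate gracefully
            workerThread.Join(5000);    // give it a few seconds to wind down
            break;
        }
        Thread.Sleep(100);
    }

    if (workerObject.isFinished)
    {
        myQueue.DeleteMessage(aMsg);
    }
    else if (aMsg.DequeueCount >= 3)
    {
        // the message has failed too many times; park it in a poison queue
        CloudQueue poisonQueue = qClient.GetQueueReference("tempqueue-poison");
        poisonQueue.CreateIfNotExist();
        poisonQueue.AddMessage(new CloudQueueMessage(aMsg.AsString));
        myQueue.DeleteMessage(aMsg);
    }
    // otherwise do nothing; the message becomes visible again and gets retried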

Next Steps

This example is pretty basic, but I hope you get the gist of it. In reality, I'd probably be spinning up multiple workers and throwing them into a collection so that I can monitor multiple processing threads. As more messages come in, I can put them into the collection in place of threads that have finished. I could also have the worker process itself do the deletion if the message was successfully processed. There are lots of options.
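A rough sketch of that multi-worker idea, reusing the Work class and the myQueue/qClient objects from earlier (and needing usings for System.Collections.Generic and System.Linq), might look something like this:

    // keep a small pool of worker threads busy; as each one finishes, clean up
    // its message and pull the next one off the queue
    var activeWorkers = new List<KeyValuePair<Thread, Work>>();
    const int maxWorkers = 5;

    while (true)
    {
        // sweep out finished workers, handling their messages
        foreach (var pair in activeWorkers.Where(p => !p.Key.IsAlive).ToList())
        {
            if (pair.Value.isFinished)
                myQueue.DeleteMessage(pair.Value.Msg);
            // else: check DequeueCount and move to a poison queue as shown above
            activeWorkers.Remove(pair);
        }

        // top the pool back up while messages are available
        while (activeWorkers.Count < maxWorkers)
        {
            CloudQueueMessage msg = myQueue.GetMessage(TimeSpan.FromHours(2));
            if (msg == null)
                break; // queue is empty for the moment

            Work worker = new Work { Msg = msg };
            Thread thread = new Thread(worker.DoWork);
            thread.Start();
            activeWorkers.Add(new KeyValuePair<Thread, Work>(thread, worker));
        }

        Thread.Sleep(100);
    }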

But I need to get rolling, so we'll have to call this a day. I've posted the code for this sample if you want it; just click the link below. So enjoy! And if we ever get the ability to renew the "lease" on a message, I'll try to swing back by this and put in some updates.

https://skydrive.live.com/embedicon.aspx/.Public/Year%20of%20Azure/YOA%20-%20Week%203.zip?cid=61aea8168d26ea6b&sc=documents

A rant about the future of Windows Azure

Hello, my name is Brent and I'm a Microsoft fanboy. More pointedly, I'm a Windows Azure fanboy. I even blogged my personal feelings about why Windows Azure represents the future of cloud computing.

Well, this week at the Worldwide Partner Conference, we finally got to see more info on MSFT's private cloud solution. And unfortunately, I believe MSFT is missing the mark. While they are still talking about the Azure Appliance, their "private cloud" is really just virtualization on top of Hyper-V, a.k.a. the Hyper-V cloud.

I won't get into debating definitions of what is and isn't a cloud; instead I want to focus on what Windows Azure brings to the table: namely, a stateless, role-based application architecture model, automated deployment, failover/upgrade management, and reduced infrastructure management.

The Hyper-V cloud doesn't deliver any of these (at least not well). And this, IMHO, is the opportunity MSFT is currently missing. Regardless of the underlying implementations, there is an opportunity here for, lacking a better term, a common application server model: a way for me to take my Azure roles and deploy them either on premises or in the cloud.

I realize I'd still need to manage the Hyper-V cloud's infrastructure. But it would seem to me that there has to be a happy middle ground where that cloud can automate the provisioning and configuration of VMs and then automate the deployment of my roles onto them. I don't necessarily need WAD monitoring my apps (I should be able to use System Center).

Additionally, having this choice of deployment locations, with the benefits of the scale and failover management, would be a HUGE differentiator for Microsoft. It's something neither Google nor Amazon has. Outside of a handful of smallish ISV startups, I think VMware and Cisco are the only other outfits that would be able to touch something like this.

I'm certain someone at MSFT has been thinking about this, so why I'm not seeing it on the radar yet just floors me. It is my firm belief that we need a solution for on-premises PaaS, not just another infrastructure management tool. And don't get me wrong, I'm still an Azure fanboy. But I also believe that the benefits Azure brings as a PaaS solution shouldn't be limited to just the public cloud.

Windows Azure Accelerator for Web Roles (Year of Azure Week 2)

Tired of it taking 10-30 minutes to deploy updates to your website? Want a faster solution that doesn’t introduce issues like losing changes if the role instance recycles? Then, you need to look into the newly released Windows Azure Accelerator for Web Roles.

The short version is that this accelerator allows you to easily host and quickly update multiple web sites in Windows Azure. The accelerator hosts the web sites for us and also manages the host headers for us. I will admit I'm excited to start using this. We're locking down details for a new client project and this will fit RIGHT into what we're intending to do, so the release of this accelerator couldn't be more timely. At least for me.

Installation

I couldn't be more pleased with the setup process for this accelerator. It uses the Web Platform Installer, complete with dependency checking. After downloading the latest version of the Web Platform Installer (I'd recently rebuilt my development machine), I was able to just run the accelerator's StartHere.cmd file. It prompted me to pull down a dependency (like MVC3, hello!) and we were good to go. The other dependency, the Windows Azure Tools for VS2010 v1.4, I already had.

I responded to a couple of y/n prompts and things were all set.

Well, almost. You will need a hosted Windows Azure Storage account and a hosted service namespace. If you don't already have one, take advantage of my Azure Pass option (to the right). If you want to fully use this, you'll also need a custom domain and the ability to create a forwarding entry (a CNAME record, e.g. pointing www.yourdomain.com at yourservice.cloudapp.net) for the new web sites.

Creating our first accelerated site

I fired up Visual Studio and kicked off a new cloud project. Note that we now have a new option, a "Windows Azure Web Deploy Host", as shown below.

[Screenshot: the new "Windows Azure Web Deploy Host" project option]

Now to finish this, you'll also need your Azure Storage account credentials. Personally, I set up a new storage account specifically to hold any web deploys I may want to do.

Next, you'll be asked to enter a username and password to be used for administering the accelerator via its web UI. A word of warning: there do not appear to be any password strength criteria, so please enter a solid, strong password.

Once done, the template will generate a cloud service project that contains a single web role for our web deploy host/admin site. The template will also launch a Readme.htm file that explains how to go about deploying our service. Make sure not to miss the steps for setting up remote desktop. Unfortunately, this initial service deployment will take just as long as always. It also deploys 2 instances of the admin portal, so if you want to save a couple of bucks, you may want to back this off to 1 instance before deploying.

You'll also want to make sure, when publishing the web deploy host, that it's set up for remote desktop. If you haven't done this previously, I recommend looking at Using Remote Desktop with Windows Azure Roles on MSDN.

NOTE: Before moving on to actually doing a web deploy, I do want to toss out another word of caution. The Windows Azure storage credentials and admin login we entered are both stored in the service configuration for the admin site role, and they are stored in CLEAR TEXT. So if you have concerns about the security of these sorts of things, you may want to make some minor customizations here.

Deploying our web site

There are two steps to setting up a new site that we'll manage via the accelerator: we have to define the site in the accelerator admin portal, and also create our web site project in Visual Studio. It's important to note that it's not required that this site be a Windows Azure Web Role, but not using that template will limit you a bit. Namely, if you plan to leverage Windows Azure specific features, you will have a little extra work to do. It could be as simple as adding assembly references manually, or as involved as writing some custom code. So pick what's right for your needs.

I started by following the accelerator's guidance and defining my web site in the accelerator host project I just deployed. I entered a name and description and just kept the rest of the fields at their defaults.

A quick check of its status shows that it's been created, but isn't yet ready for use. You'll also see one row on the 'info' page for each instance of our service (in my case only one, because I'm cheap *grin*).

One thing I don’t get about the info page is that the title reads “sync status”. I would guess this is because it shows me the status of any deployments being sync’d. I agree with the theory of this, but I think the term could mislead folks. Anyways… moving on, we have a site to create.

I fire up Visual Studio 2010 and do a File -> New -> Project. Unlike the accelerator documentation, I'm going to create a web role instead (I use the MVC2 template). Next, we build and publish it. You will get prompted for a username and password, so use the remote desktop credentials you used when publishing the service host initially.

Things should go fairly smoothly, but in my case I got a 500 error (internal server error) when trying to access MSDEPLOYAGENTSERVICE. So I RDP'd into the service to see if I could figure out what went wrong. I'm still not certain what was wrong, but eventually it started working. It may have been me using the Windows Azure portal to re-configure RDP. I could previously RDP into the box just fine, but I couldn't seem to get the publish to work; I just kept getting an internal server error (500). Oh well.

Once everything is ok, you'll see that the site's status has changed to "deployed".

[Screenshot: the accelerator admin portal showing the site's status as "deployed"]

It's fast, but so what?

So, within 30 seconds, we should see the new site up and running. This is impressive, but is that all? Actually, no. What the accelerator is doing is managing the IIS host header configuration, making it MUCH easier to run multiple web sites. This could be just managing your own blog and a handful of other sites. But say you have a few, or a hundred, web sites that you need to deploy to Azure; this can make it pretty easy. ISVs would eat this up.

I could even see myself using this for doing demos.

Meanwhile, feel free to reverse engineer the service host project. There are certainly some useful gems hidden in that code.

Kicking off a year of Azure–Week 1

Yesterday I received an email asking me to gather up data to be used in consideration of the renewal of my Microsoft MVP award. As I set about this, I realized that I'm simply not satisfied with the amount of technical blogging on Windows Azure that I've done over the last year (12 posts since 10/1, only 4 of them directly technical). So I decided to set myself a stretch goal: to write one technical blog post per week for a full year.

Mind you, these will mostly be short tips and tricks. But at least once a month I plan to do a more comprehensive dive into something. I'll pull from personal experiences where possible (client confidentiality and all that), but I think there's also some value to be culled from areas like the MSDN Azure forums. I figure not only can I explore some common tasks beyond just a couple of quick examples on a message board, but it will also help me hone my Azure skills further. Sorta like regular exercise (another item I need to get better about doing).

So without further delay, off to episode 1! Hopefully it will be better than the Star Wars movie.

BTW, if you’re new to working with Azure Tables, you may want to check out this hands on lab.

Getting the last 9 seconds of rows from an Azure Table

Just yesterday a question was posted on the Windows Azure forums regarding the best way to query an Azure Table for rows that occurred in the last 9 seconds. Steve Marx and I both responded with some quick suggestions, but for this week's tip I wanted to give more of a working example.

First, I'm going to define my table using a partial date/time value for the partition key and a full timestamp for the row key. The payload we'll make a string, just to round out the example. Here's my table row represented as a class.

    public class TableRow : Microsoft.WindowsAzure.StorageClient.TableServiceEntity
    {
        public string Payload { get; set; }

        // required parameterless constructor
        public TableRow() : this(string.Empty) { }

        // overloaded version that sets the payload property of the row
        public TableRow(string _payload)
        {
            // PartitionKey goes to the nearest minute
            DateTime tmpDT1 = DateTime.UtcNow; // capture single value to use for subsequent calls
            // format as ticks and set partition key
            PartitionKey =
                new DateTime(tmpDT1.Year, tmpDT1.Month, tmpDT1.Day, tmpDT1.Hour, tmpDT1.Minute, 0).Ticks.ToString("d19");
            // use original value for row key, but append a guid to help keep it unique
            RowKey = String.Format("{0} : {1}", tmpDT1.Ticks.ToString("d19"), Guid.NewGuid().ToString());
            // set the payload
            Payload = _payload;
        }
    }

Note that I've overloaded the constructor so that it's a tad easier to insert new rows by simply creating one with a payload value. Not necessary, but I like the convenience. I also built out a simple table service context class, which you can do by following the hands-on lab link I posted above.
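For reference, a minimal version of that context class, consistent with how it's used in the samples below (the SampleTableContext name, the "Rows" table name, and the AddRow method all come from that later code; it also needs a using for System.Linq), would look something like this:

    public class SampleTableContext : Microsoft.WindowsAzure.StorageClient.TableServiceContext
    {
        public SampleTableContext(string baseAddress, Microsoft.WindowsAzure.StorageCredentials credentials)
            : base(baseAddress, credentials) { }

        // the entity set the queries below run against
        public IQueryable<TableRow> Rows
        {
            get { return this.CreateQuery<TableRow>("Rows"); }
        }

        // insert a single row with the given payload
        public void AddRow(string payload)
        {
            this.AddObject("Rows", new TableRow(payload));
            this.SaveChanges();
        }
    }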

Now I'm going to use a worker role (this is Windows Azure, after all) to insert two messages every second. This will use the TableRow class and its associated service context. Here's how my Run method looks…

    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("WorkerRole1 entry point called", "Information");

        // create storage account
        var account = CloudStorageAccount.FromConfigurationSetting("AzureStorageAccount");
        // dynamically create the tables
        CloudTableClient.CreateTablesFromModel(typeof(SampleTableContext),
                                    account.TableEndpoint.AbsoluteUri, account.Credentials);
        // create a context for us to work with the table
        SampleTableContext sampleTable = new SampleTableContext(account.TableEndpoint.AbsoluteUri, account.Credentials);

        int iCounter = 0;
        while (true)
        {
            // there really should be some exception handling in here…
            sampleTable.AddRow(string.Format("Message: {0}", iCounter.ToString()));
            iCounter++; // increment so each row gets a distinct payload
            Thread.Sleep(500); // this allows me to insert two messages every second
            Trace.WriteLine("Working", "Information");
        }
    }

Ok, now all that remains is to have something that will query the rows. I'll create a web role and display the last 9 seconds of rows in a grid. I use the default ASP.NET web role template and a GridView on the Default.aspx page to display my results. I took the easy route and just plugged some code into the Page_Load event as follows:

    // create storage account
    var account = CloudStorageAccount.FromConfigurationSetting("AzureStorageAccount");
    // dynamically create the tables
    CloudTableClient.CreateTablesFromModel(typeof(SampleTableContext),
                                account.TableEndpoint.AbsoluteUri, account.Credentials);
    // create a context for us to work with the table
    SampleTableContext tableContext = new SampleTableContext(account.TableEndpoint.AbsoluteUri, account.Credentials);

    // straight up query
    var rows =
        from sampleTable in tableContext.CreateQuery<ClassLibrary1.TableRow>("Rows")
        where sampleTable.RowKey.CompareTo((DateTime.UtcNow - TimeSpan.FromSeconds(9)).Ticks.ToString("d19")) > 0
        select sampleTable;

    GridView1.DataSource = rows;
    GridView1.DataBind();

This does the job, but if you wanted to leverage our per-minute partition keys to help avoid a full-table scan, the query would look more like this:

    // partition enhanced query
    DateTime tmpDT1 = DateTime.UtcNow; // capture single value to use for subsequent calls
    DateTime tmpPartitionKey = new DateTime(tmpDT1.Year, tmpDT1.Month, tmpDT1.Day, tmpDT1.Hour, tmpDT1.Minute, 0);

    var rows =
        from sampleTable in tableContext.CreateQuery<ClassLibrary1.TableRow>("Rows")
        where
            (sampleTable.PartitionKey == tmpPartitionKey.AddMinutes(-1).Ticks.ToString("d19") || sampleTable.PartitionKey == tmpPartitionKey.Ticks.ToString("d19"))
            && sampleTable.RowKey.CompareTo((DateTime.UtcNow - TimeSpan.FromSeconds(9)).Ticks.ToString("d19")) > 0
        select sampleTable;

So there you have it! I want to come back and visit this topic again soon, as I have some questions about the efficiency of the LINQ-based queries to Azure storage that this generates. But I simply don't have the bandwidth today to spend on it.
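In the meantime, one quick way to peek at what these queries actually send over the wire is to cast the query to a DataServiceQuery and inspect the request URI it builds, since the table service LINQ provider translates the where clause into an OData $filter. Something like:

    // requires: using System.Data.Services.Client;
    var query = (DataServiceQuery<ClassLibrary1.TableRow>)rows;
    Trace.WriteLine(query.RequestUri.ToString(), "Information");
    // expect a filter roughly along the lines of:
    //   $filter=(PartitionKey eq '...' or PartitionKey eq '...') and RowKey gt '...'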

Anyways, I’ve posted the code from my example if you’d like to take a look. Enjoy!
