Kicking off a year of Azure–Week 1

Yesterday I received an email asking me to gather up data to be used in consideration of the renewal of my Microsoft MVP award. As I set about this, I realized that I’m simply not satisfied with the amount of technical blogging on Windows Azure that I’ve done over the last year (12 posts since 10/1, only 4 of them directly technical). So I decided to set myself a stretch goal: to write one technical blog post per week for a full year.

Mind you, these will mostly be short tips/tricks. But at least once a month I plan to do a more comprehensive dive into something. I’ll pull from personal experiences where possible (client confidentiality and all that), but I think there’s also some value to be culled from areas like the MSDN Azure forums. I figure not only can I explore some common tasks beyond just a couple quick examples on a message board, but it would also help me hone my Azure skills further. Sorta like regular exercise (another item I need to get better about doing).

So without further delay, off to episode 1! Hopefully it will be better than the Star Wars movie.

BTW, if you’re new to working with Azure Tables, you may want to check out this hands-on lab.

Getting the last 9 seconds of rows from an Azure Table

Just yesterday a question was posted on the Windows Azure forums regarding the best way to query an Azure Table for rows that occurred in the last 9 seconds. Steve Marx and I both responded with some quick suggestions, but for this week’s tip I wanted to give more of a working example.

First I’m going to define my table using a partial date/time value for the partition key, and a full timestamp for the row key. We’ll make the payload a string just to round out the example. Here’s my table row represented as a class.

    public class TableRow : Microsoft.WindowsAzure.StorageClient.TableServiceEntity
    {
        public string Payload { get; set; }

        // required parameterless constructor
        public TableRow() : this(string.Empty) { }

        // overloaded version that sets the payload property of the row
        public TableRow(string _payload)
        {
            // capture a single value to use for subsequent calls
            DateTime tmpDT1 = DateTime.UtcNow;

            // PartitionKey goes to the nearest minute, formatted as ticks
            PartitionKey =
                new DateTime(tmpDT1.Year, tmpDT1.Month, tmpDT1.Day, tmpDT1.Hour, tmpDT1.Minute, 0).Ticks.ToString("d19");

            // use the original value for the row key, appending a guid to help keep it unique
            RowKey = String.Format("{0} : {1}", tmpDT1.Ticks.ToString("d19"), Guid.NewGuid().ToString());

            // set the payload
            Payload = _payload;
        }
    }

Note that I’ve overloaded the constructor so that it’s a tad easier to insert new rows by simply constructing one with a payload value. Not necessary, but I like convenience. I also built out a simple table class, which you can do by following the hands-on lab link I posted above.
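For completeness, here’s a rough sketch of what that service context might look like. The class name and AddRow method match what the worker role calls, but the exact shape is my assumption based on the hands-on lab pattern, not code pulled from the original project:

```csharp
using System.Linq;
using Microsoft.WindowsAzure;
using Microsoft.WindowsAzure.StorageClient;

// hypothetical sketch of the SampleTableContext class referenced above;
// the real one built from the hands-on lab may differ in detail
public class SampleTableContext : TableServiceContext
{
    public SampleTableContext(string baseAddress, StorageCredentials credentials)
        : base(baseAddress, credentials) { }

    // exposes the "Rows" table for querying
    public IQueryable<TableRow> Rows
    {
        get { return this.CreateQuery<TableRow>("Rows"); }
    }

    // convenience method to insert a new row with the given payload
    public void AddRow(string payload)
    {
        this.AddObject("Rows", new TableRow(payload));
        this.SaveChanges();
    }
}
```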

Now I’m going to use a worker role (this is Windows Azure after all) to insert two messages every second. This will use the TableRow class and its associated service context. Here’s how my Run method looks:

    public override void Run()
    {
        // This is a sample worker implementation. Replace with your logic.
        Trace.WriteLine("WorkerRole1 entry point called", "Information");

        // create the storage account
        var account = CloudStorageAccount.FromConfigurationSetting("AzureStorageAccount");

        // dynamically create the tables
        CloudTableClient.CreateTablesFromModel(typeof(SampleTableContext),
                                    account.TableEndpoint.AbsoluteUri, account.Credentials);

        // create a context for us to work with the table
        SampleTableContext sampleTable = new SampleTableContext(account.TableEndpoint.AbsoluteUri, account.Credentials);

        int iCounter = 0;
        while (true)
        {
            // there really should be some exception handling in here...
            sampleTable.AddRow(string.Format("Message: {0}", iCounter++));
            Thread.Sleep(500); // this allows me to insert two messages every second
            Trace.WriteLine("Working", "Information");
        }
    }

OK, now all that remains is something to query the rows. I’ll create a web role and display the last 9 seconds of rows in a grid. I used the default ASP.NET web role template and a GridView on the default.aspx page to display my results. I took the easy route and just plugged some code into the Page_Load event as follows:

    // create the storage account
    var account = CloudStorageAccount.FromConfigurationSetting("AzureStorageAccount");

    // dynamically create the tables
    CloudTableClient.CreateTablesFromModel(typeof(SampleTableContext),
                                account.TableEndpoint.AbsoluteUri, account.Credentials);

    // create a context for us to work with the table
    SampleTableContext tableContext = new SampleTableContext(account.TableEndpoint.AbsoluteUri, account.Credentials);

    // straight-up query
    var rows =
        from sampleTable in tableContext.CreateQuery<ClassLibrary1.TableRow>("Rows")
        where sampleTable.RowKey.CompareTo((DateTime.UtcNow - TimeSpan.FromSeconds(9)).Ticks.ToString("d19")) > 0
        select sampleTable;

    GridView1.DataSource = rows;
    GridView1.DataBind();

This does the job, but if you want to leverage our per-minute partition keys to help the table service avoid a full-table scan, the query would look more like this:

    // partition-enhanced query
    DateTime tmpDT1 = DateTime.UtcNow; // capture a single value to use for subsequent calls
    DateTime tmpPartitionKey = new DateTime(tmpDT1.Year, tmpDT1.Month, tmpDT1.Day, tmpDT1.Hour, tmpDT1.Minute, 0);

    var rows =
        from sampleTable in tableContext.CreateQuery<ClassLibrary1.TableRow>("Rows")
        where
            (sampleTable.PartitionKey == tmpPartitionKey.AddMinutes(-1).Ticks.ToString("d19") || sampleTable.PartitionKey == tmpPartitionKey.Ticks.ToString("d19"))
            && sampleTable.RowKey.CompareTo((DateTime.UtcNow - TimeSpan.FromSeconds(9)).Ticks.ToString("d19")) > 0
        select sampleTable;

So there you have it! I want to revisit this topic again soon, as I have some questions about the efficiency of the LINQ-based queries to Azure storage that get generated here. But I simply don’t have the bandwidth today to spend on it.
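In the meantime, one quick way to see what the LINQ provider actually sends over the wire is to cast the query to DataServiceQuery (from System.Data.Services.Client, which the storage client builds on) and look at the request URI. This is a sketch, assuming the same “rows” query from the examples above:

```csharp
// sketch: inspect the REST request the table service client will issue
// (assumes "rows" is the LINQ query built in the examples above)
var query = rows as System.Data.Services.Client.DataServiceQuery<ClassLibrary1.TableRow>;
if (query != null)
{
    // ToString() on a DataServiceQuery returns the request URI,
    // including the $filter expression generated from the where clause
    Trace.WriteLine(query.ToString(), "Information");
}
```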

Anyways, I’ve posted the code from my example if you’d like to take a look. Enjoy!


3 Responses to Kicking off a year of Azure–Week 1

  1. Pingback: Windows Azure and Cloud Computing Posts for 7/8/2011+ - Windows Azure Blog

  2. Why did you have to append the ticks to the row key? Isn’t it the partition key + row key TOGETHER that defines a unique key?

    • Brent says:

      I didn’t necessarily have to, but including it allows me to query just by the row key to get a timeframe (the first query example). However, if you’re always going to include the partition key (the second query example), you don’t necessarily need the to-the-second value in the row key.
