Azure Automation And SQL DB

Author's Note: The version of this script on GitHub was updated in May of 2016 to include the Azure AD tenant ID where the credential exists, as well as the Subscription ID, to make sure the proper subscription is selected. This should help situations where there are multiple subscriptions. You will also need to include these variable values when setting up your job schedule.

What a difference two months makes. Two months ago, I helped an old friend out with a customer by working up a sample of using Azure Automation to resize an Azure SQL DB. At the time, importing modules wasn't as seamless an experience as anyone would have liked. So I intended to blog about it, but simply didn't get around to it.

Until today! With our recent addition of the Azure Automation module gallery, this has just gotten easier. Given that someone invested in helping build that functionality out, I figured I couldn't slack off and put off writing about this any longer. So here we go…

Resizing Azure SQL DBs

First off, we need to address a bit of a misconception. While Azure SQL DB prices are stated in terms of "per month", the reality is that you are billed "per hour" (see the FAQ). You can adjust the service tier, performance level, or eDTUs as often as you want. Within a given hour, the price you pay will be based on the highest option you exercised during that hour. For example, if you bump a database from S0 up to S2 for even a few minutes and then drop it back down, that hour is billed at the S2 rate.

This means that if you either have predictable performance needs (highs or lows), or are actively monitoring your database utilization (see my previous blog post), you can tweak your database capacity accordingly.

To take advantage of this, we can use Azure Automation and some PowerShell.

Preparing the Database for resizing

Before I dive into setting up the automation itself, let's discuss a few details. My approach will leverage Azure's Role Based Access Control (RBAC) features. So manipulating the database will be done by an identity that I'm setting up specifically for Automation, and that identity will have access to the database(s) I want to manipulate.

Presuming you have access as an Administrator to the Azure AD instance that's associated with your subscription's management, locate Active Directory in the portal (search or browse) and select it. This should redirect you to the old portal (at the time of this writing, Azure AD is not yet available in the new portal). Once there, add a new user.

image

You can use your own identity for this, but I'm much more a fan of setting up "service" identities for these things. If you don't have administrative access to the AD instance in your subscription (not uncommon if it's federated with your on-premises domain), then you'll need to work with whoever manages your Active Directory domain to get the service identity created.

With the identity created, we can now go to the Azure SQL Database server that's hosting our database and add the service identity as a user with the "Owner" role.

image

This grants the service identity the ability to resize (among other things) all the databases on this server.
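If you'd rather script this step than click through the portal, the AzureRM cmdlets can handle it as well. This is just a sketch; the sign-in name, resource group, and server name below are placeholders you'd swap for your own values.

# Sign in with an account that can manage access on the subscription
Login-AzureRmAccount

# Grant the service identity the "Owner" role on the SQL server (placeholder names)
New-AzureRmRoleAssignment -SignInName "automation-svc@yourtenant.onmicrosoft.com" `
    -RoleDefinitionName "Owner" `
    -ResourceGroupName "YourResourceGroup" `
    -ResourceName "yourdbserver" `
    -ResourceType "Microsoft.Sql/servers"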

Setting up our Automation

Now we need to set up an Automation Account in our Azure subscription. It's this account that will be the container for all our automation tasks, their assets, and any schedules we have them operate under. In the new Azure portal, click on "+ New", then select "Management". Alternatively, you can enter "Automation" into the search box and follow that path.

image

Give your Automation account a globally unique name, and make sure it's associated with the proper subscription, resource group, and region (Automation is only available in a handful of regions currently).

With the account created, we'll first add the service identity we created earlier to the Automation account as a Credential asset. This Credential asset will be used by our automation script to execute the commands to resize the database. Other assets include PowerShell modules, certificates, pre-set variables, etc… This also allows you to have multiple scripts that reuse the same Credential asset, without each needing to be updated when the credential changes (say, for periodic password changes).

When I first did this a couple months ago, I had to manually add in another PowerShell module. But they've started importing more of them by default, so that's no longer necessary (unless you are tied to specific modules that are not available by default, or you need a newer/older version of something that's already included).

With the credential in place, we can now add the Runbook. I've created one for you already, so just download that, and then in the Azure Automation account, import an existing runbook.

image

Set the type to "PowerShell Workflow", give it a meaningful name (one will likely default), and give it a description. The import should only take a few seconds, then you can select the new Runbook. Once in, you can go into "edit" mode to either make changes or test it. I highly recommend a test run this way first. To execute a test, you'll provide the parameters called out in the script (a rough sketch of what the runbook does follows the parameter list)…

CredentialAssetName – the name we gave the Credential asset we created

ResourceGroupName – my script works on the new Resource Manager based Azure SQL DBs, so we need to know which resource group the database is in

ServerName – the name of the database server (just the first part, leave out the '.database.windows.net'). This will need to be in all lower case.

DatabaseName – the name of the database to be resized

NewEdition – the edition we're resizing to (Basic, Standard, etc…)

NewPricingTier – the performance level we're moving to (S1, S2, etc.; leave blank if the edition is Basic)
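Before running the test, it may help to see roughly what the runbook does with those parameters. The sketch below is not the actual script from GitHub, just the general shape of the approach: grab the Credential asset, log in as that identity, and call Set-AzureRmSqlDatabase with the values supplied.

workflow Resize-AzureSqlDatabase
{
    param
    (
        [Parameter(Mandatory=$true)]  [string] $CredentialAssetName,
        [Parameter(Mandatory=$true)]  [string] $ResourceGroupName,
        [Parameter(Mandatory=$true)]  [string] $ServerName,
        [Parameter(Mandatory=$true)]  [string] $DatabaseName,
        [Parameter(Mandatory=$true)]  [string] $NewEdition,
        [Parameter(Mandatory=$false)] [string] $NewPricingTier
    )

    # Pull the service identity out of the Automation account's Credential assets
    $credential = Get-AutomationPSCredential -Name $CredentialAssetName

    # Log in as the service identity we granted the "Owner" role earlier
    Add-AzureRmAccount -Credential $credential

    # Resize the database; Basic has no separate performance level to specify
    if ($NewPricingTier)
    {
        Set-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
            -DatabaseName $DatabaseName -Edition $NewEdition -RequestedServiceObjectiveName $NewPricingTier
    }
    else
    {
        Set-AzureRmSqlDatabase -ResourceGroupName $ResourceGroupName -ServerName $ServerName `
            -DatabaseName $DatabaseName -Edition $NewEdition
    }
}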

Fill in each appropriately and “Start” the test. Depending on the size of your database, this could take some time, so go get something to drink, check email, etc…

Done? Good! Hopefully the test ran well, so let's now publish our runbook. Publishing is important because it allows us to edit and test one version of the runbook while another is used by any scheduled occurrences.

image

With the publish complete, our final step is to schedule this to run. In my case, I know that each afternoon I want to set the database back down to "Basic", because I won't be using it in the evening (it's just a dev database). I've got the option of creating a new schedule, or associating the job with an existing one. This allows multiple runbooks to be executed on a single schedule entry.

image

With the schedule set, we just need to set the parameters, much like we did when we tested the script.

image

Click "OK" a couple times, and our runbook will be all set to run on our schedule. We can also execute the runbook "on demand" by selecting it in the portal and clicking on "Start".
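By the way, if you'd rather wire the schedule up (or kick the runbook off) from PowerShell instead of the portal, the AzureRM.Automation cmdlets cover that too. A rough sketch, with placeholder account, runbook, and parameter names throughout:

# Create a daily schedule (pick a start time at least a few minutes in the future)
New-AzureRmAutomationSchedule -AutomationAccountName "YourAutomationAccount" `
    -ResourceGroupName "YourResourceGroup" -Name "DailyScaleDown" `
    -StartTime (Get-Date "17:00") -DayInterval 1

# Tie the runbook (and its parameters) to that schedule
Register-AzureRmAutomationScheduledRunbook -AutomationAccountName "YourAutomationAccount" `
    -ResourceGroupName "YourResourceGroup" -RunbookName "Resize-AzureSqlDatabase" `
    -ScheduleName "DailyScaleDown" `
    -Parameters @{ CredentialAssetName = "AutomationSqlCredential"; ResourceGroupName = "YourResourceGroup"; ServerName = "yourdbserver"; DatabaseName = "yourdatabase"; NewEdition = "Basic"; NewPricingTier = "" }

# Or just run it on demand
Start-AzureRmAutomationRunbook -AutomationAccountName "YourAutomationAccount" `
    -ResourceGroupName "YourResourceGroup" -Name "Resize-AzureSqlDatabase" `
    -Parameters @{ CredentialAssetName = "AutomationSqlCredential"; ResourceGroupName = "YourResourceGroup"; ServerName = "yourdbserver"; DatabaseName = "yourdatabase"; NewEdition = "Basic"; NewPricingTier = "" }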

Note: Once created, the portal does not currently let us edit the schedule. You'll need to remove and re-add it. But you can turn schedules on/off.

Monitoring our Runbook

Each time the runbook is executed, whether on demand or by the schedule, that execution is referred to as a job. Either in the Automation Account or in the Runbook itself, we can select "Jobs" and view any jobs that have run recently. For each job, you can look at the output as well as the logs that were written.

image

And there we have it!

So what’s next?

For me, next up is lunch. For you, hopefully this will work out of the box for you and help you save a few bucks here and there. I would also recommend you check out some of the scripts in the gallery and think of other interesting uses for Azure Automation.

Until next time!

PS – HUGE thanks to the team that has open sourced Live Writer. Works like a dream!


SQL Azure Throttling & Error Codes (Year of Azure–Week 18)

Ok, last week was weak. Hopefully my grammar is correct. So I want to make it up to you by giving you some meat this week. I've been involved lately in several discussions regarding SQL Azure capacity limits (performance, not storage) and throttling. Admittedly, this little topic, often glanced over, has bitten the butts of many a Windows Azure project.

So let's look at it a little more closely. Shall we?

Types of Throttling

Now if you go read that link I posted last week on SQL Azure Error Messages, you’ll see that there are two types of throttling that can occur:

Soft Throttling – “kick in when machine resources such as, CPU, IO, storage, and worker threads exceed predefined thresholds”

Hard Throttling – “happens when the machine is out of a given resource”

Now SQL Azure is a multi-tenant system. This means that multiple tenants (databases) will sit on the same physical server. There could be several hundred databases that share the same hardware. SQL Azure uses soft throttling to try to make sure that all these tenants get a minimum share of resources. When a tenant starts pushing those limits, soft throttling kicks in and the SQL Azure fabric will try to move tenants around to rebalance the load.

Hard throttling means no new connections. You've maxed things out (storage space, worker processes, CPU) and drastic steps should be taken to free those resources up.

SQL Azure Resources

Also in that article, we find the various resource types that we could get throttled on:

  • Physical Database Space
  • Physical Log Space
  • LogWriteIODelay
  • DataReadIODelay
  • CPU
  • Database Size
  • Internal
  • SQL Worker Threads
  • Internal
When an error message is returned, you'll get a throttling type for one or more of these resources. That type could be "no throttling", "soft", or "hard".

Throttling Modes

Now if all this wasn't confusing enough, we have three throttling modes. Well, technically four if you count "no throttling".

Update/Insert – you can't insert, update, or create anything. But you can still drop tables, delete rows, truncate tables, and read rows.

All Writes – all you can do is read. You can’t even drop/truncate tables.

Reject All – you can’t do anything except contact Windows Azure Support for help.

Now unlike the throttling type, only one mode is returned in the reason code.

But what can we do to stop the errors?

Now honestly, I'd love to explain Andrew's code for deciphering the codes to you. But I was never good with bitwise operations. So instead, I'll just share the code along with a sample usage. I'll leave it to folks that are better equipped to explain exactly how it works.

The real question is: can we help control throttling? Well, if you spend enough time iterating through differing loads on your SQL Azure database, you'll be able to really understand your limits and gain a certain degree of predictability. But the sad thing is that as long as SQL Azure remains a shared, multi-tenant environment, there will always be situations where you get throttled. However, those instances should be fairly isolated and controllable via some retry logic.
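To make "retry logic" a little more concrete, here's a minimal sketch of the kind of wrapper I have in mind, written in PowerShell just to keep it short. The error numbers used are a couple of the codes SQL Azure uses for transient/busy conditions (40501 and 40197); treat the whole thing as illustrative rather than production-ready.

# Run a SQL command, retrying a few times when SQL Azure says it is busy
function Invoke-SqlWithRetry([string]$connectionString, [string]$commandText, [int]$maxAttempts = 3)
{
    for ($attempt = 1; $attempt -le $maxAttempts; $attempt++)
    {
        $connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
        try
        {
            $connection.Open()
            $command = $connection.CreateCommand()
            $command.CommandText = $commandText
            return $command.ExecuteNonQuery()
        }
        catch
        {
            # Dig the SqlException out of the error record (it may be wrapped)
            $sqlEx = $_.Exception
            if ($sqlEx -isnot [System.Data.SqlClient.SqlException]) { $sqlEx = $sqlEx.InnerException }

            $transient = ($sqlEx -is [System.Data.SqlClient.SqlException]) -and ($sqlEx.Number -in 40501, 40197)
            if ($transient -and ($attempt -lt $maxAttempts))
            {
                Start-Sleep -Seconds (10 * $attempt)   # simple linear back-off before trying again
            }
            else
            {
                throw
            }
        }
        finally
        {
            $connection.Close()
        }
    }
}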

SQL Azure is a great solution, but you can't assume the dedicated resources you have with an on-premises SQL Server solution. You need to account for variations and make sure your application is robust enough to handle intermittent failures. But now we're starting down the path of trying to design to exceed SLAs. And that's a topic for another day.

SQL Azure Error Codes (Year of Azure–Week 17)

Ok, I'm not counting last week as a 'Year of Azure' post. I could, but I feel it was even too much of a softball for me to bear. Unfortunately, I was even less productive this week in getting a new post out. I started with a new client, and the first weeks, especially when travelling, are horrible for me doing anything except going back to the hotel and sleeping.

However, I have spent time the last few weeks working over a most difficult question: the challenges of SQL Azure throttling behaviors and error reporting.

Reference Materials

Now, on the surface SQL Azure is a perfectly wonderful relational database solution. However, when you begin subjecting it to a significant load, its limitations start becoming apparent. And when this happens, you'll find you get back various error codes that you have to decode.

Now, I could dive into an hours-long discussion regarding architectural approaches for creating scalable SQL Azure data stores. A discussion, mind you, which would be completely enjoyable, very thought provoking, and for which I'm less well equipped than many folks (databases just aren't my key focus, I leave those to better… er… more interested people *grin*). For a nice overview of this, be sure to check out the TechEd 2011 video on the subject.

Anyways…

Deciphering the code

So if you read the link on error codes, you'll find that there are several values that need to be decoded. Fortunately for me, while I have been fairly busy, I have access to a resource that wasn't: one fairly brilliant Andrew Espenes. Andrew was kind enough to take on the task for me and look at deciphering the code. And in a show of skill that demonstrates I'm becoming far older than I would like to believe, he pulled together some code that I wanted to share, code that leverages a technique I haven't used since my college days of developing basic assembly language (BAL) code. Yes, I am that old.

So let's fast forward down the extensive link I gave you earlier to the "Decoding Reason Codes" section. Our first stop will actually be adjusting the reason code into something usable. The MSDN article says to apply modulo 4 to the reason code:

ThrottlingMode = (AzureReasonDecoder.ThrottlingMode)(codeValue % 4);

Next determine the resource type (data space, CPU, Worker Threads, etc…):

int resourceCode = (codeValue / 256);

And finally, we’ll want to know the throttling type (hard vs. soft):

int adjustedResourceCode = resourceCode & 0x30000;          // keep bits 16-17 of the resource code
adjustedResourceCode = adjustedResourceCode >> 4;            // shift those bits down toward the lower word
adjustedResourceCode = adjustedResourceCode | resourceCode;  // combine them with the original resource code
resourceCode = adjustedResourceCode & 0xFFFF;                // mask down to the low 16 bits
ResourcesThrottled = (ResourceThrottled)resourceCode;        // cast onto the ResourceThrottled enum

Next Time

Now, I warned you that I was short on time, and while I have some items I'm working on for future updates, I do want to spend some time this weekend with family. So I need to hold some of this until next week, when I'll post a class Andrew created for using these values, along with some samples for leveraging them.

Until next time!

Introduction to SQL Azure

Yet another simple attempt to document my continuing adventures with the Windows Azure Platform. In today's edition, our hero attempts to move a simple on-premises database to SQL Azure and access it from an on-premises application.

Create our SQL Azure Server/Database

I've covered getting Windows Azure platform benefits in another post. So we'll presume you have already either purchased a subscription for SQL Azure or claimed any benefits you're entitled to. My MSDN Premium subscription benefits include three 1 GB SQL Azure databases. Today I'm going to set up my SQL Azure server and create one of those instances.

I start by pointing my favorite browser to https://sql.azure.com and signing in with the Live ID that is associated with my subscription. Once signed in, you should be at the SQL Azure Developer Portal's landing page (as seen below), with the list of "projects" (aka subscriptions) associated with my Live ID displayed.

clip_image002

Click on a project and, if prompted, accept the terms of service. Since this is my first time logging into my SQL Azure subscription, I need to start by creating an SA (system admin) account and specifying a region for my database. The region selection here is important for reasons I'll get to later in this article.

clip_image004

So fill in the form, select a location (aka datacenter), and click on "Create Server". After a few seconds, our server has been provisioned and we have a master database already created within it. The only thing lacking here is that I really would have liked to have been able to provide a unique name for my server. Oh well. 🙂

clip_image006

At this point we need to go to the "Firewall Settings" tab and adjust the security settings for our database. If we don't, we won't be able to connect to it. These settings will apply to all database instances within our SQL Azure server. For now, I'm going to set it to allow any IP to connect. This is not something I would generally recommend. It would be a good practice to add rules that only allow connections from our application, our network's subnet, or maybe even only from Microsoft hosted services. But I'm feeling a touch lazy today.

Once we’ve given our firewall settings a few minutes to be applied, select the “master” database instance and test connectivity. Once verified, we’re ready to get connected.

Connecting to SQL Azure

There are already many blog posts available that explain how to get SQL Server 2008 Management Studio to connect to SQL Azure. However, if you grab the SQL Server Management Studio 2008 R2 CTP, you'll find this less problematic. Using this version, it's as simple as putting your server name (available from the portal and in the form of <somevalue>.database.windows.net) into the login box along with your Administrator username and password.

If you can’t use the R2 CTP, you can find alternatives to get SQL Server 2008 Management Studio working in various blog articles.

Once that connection is established, I can run a couple of exported SQL scripts I already have to create my database. I generated these scripts from a local SQL database. In my case, the scripts required only two minor changes before they ran successfully.

Connecting to it from the Visual Studio 2010 RC's Server Explorer is just as simple. The same goes for having your applications connect: you just use a connection string and you're in. The one thing all these methods have in common is that port 1433 needs to be open for outbound traffic and the IP address you are connecting from has to be allowed via the SQL Azure firewall settings.
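If you just want a quick connectivity sanity check outside of Management Studio, a few lines of PowerShell will do it. This is only a sketch; swap in your own server name, administrator login, and password (note the user@server format SQL Azure expects for the login):

# Placeholder values; substitute your own server and administrator credentials
$connectionString = "Server=tcp:yourserver.database.windows.net,1433;" +
                    "Database=master;User ID=youradmin@yourserver;" +
                    "Password=yourpassword;Encrypt=True;"

$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()

# List the databases on the server as a simple round-trip test
$command = $connection.CreateCommand()
$command.CommandText = "SELECT name FROM sys.databases"
$reader = $command.ExecuteReader()
while ($reader.Read()) { $reader["name"] }

$connection.Close()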

I started by creating a new windows forms project and adding a data source to it. I could have set this using a connection string I generated from the portal or via any preconfigured data connections. I then added that data source to my form and launched the project. Simple as pie! Without writing a single line of code I could connect to and update my SQL Azure database from an on-premise application.

Why was that region selection important?

I said I’d get back to this and I have.

The reason the region selection is so important is that you don't get charged for bandwidth within a datacenter (region). Additionally, by locating your database within the same datacenter as your application, you reduce connection latency. Yeah, we could host the DB in Asia and run the app in the US, but that's only going to slow down performance AND cost us more. So there's no reason we should do it. This is also why you may want to run a local copy of the database when doing development. No sense paying bandwidth costs for accessing a hosted server when a local copy of SQL Express will do the job nicely.

But wait, what about the infamous "smoking hole" disaster recovery scenario? Does picking another region affect my disaster recovery plan? The short answer is "no". People with greater insight into how SQL Azure is built tell us that three copies of your database are automatically created and maintained, and that at least one of those should be geographically diversified. If your database crashes, or the datacenter itself is destroyed by a giant radiation-mutated lizard, a backup copy will be activated and the logs applied.

In fact, you may not even notice if a simple database crash takes place. We’ll leave the question of “but don’t I need to know” for another day.

Some closing thoughts

In going through this exercise, a couple of things became apparent to me. First off, there are going to be some who will dismiss SQL Azure as being too "dumbed down". I'm admittedly not a DB guy. I know enough to stay away from bad practices, create the stored procedures/functions I need to do my job, and tell the difference between a physical and logical database schema. Hell, I've even been known to help debug a query performance issue on rare occasion. However, I am by no means someone who enjoys spending their time optimizing databases and monitoring their performance. As such, SQL Azure does a good job of meeting my basic needs. In its current state, it may not be an enterprise-level solution. But you know, I think it makes a pretty decent operational data store that would serve a simple application pretty well.

For those that are really into RDBMS systems and love tweaking and tuning them like a hot rod prepping for the quarter mile, they are likely to be disappointed by SQL Azure. However, I'm confident that the SQL Azure team is committed to this product. We've already seen the 1.1 version released with changes based directly on feedback they've received from the community. I also believe that the current size limits are based more on ensuring they have a stable service offering than on any real limitation. It wouldn't surprise me if we see SSRS and 25 GB+ instances available before the end of the year. And I don't think it's too far-fetched to predict that 1 TB+ instances will happen eventually.

I'm also getting more of an idea of what MSFT may be thinking when it comes to the online subscriptions. This is admittedly a fairly new area for them (compared to license-based software distribution) and as such is subject to change, but it really strikes me that they are trying to encourage folks to tie a given application to a specific subscription rather than have multiple projects within a single subscriber account. It could go either way, but for enterprises that will be interested in billing charges back, it makes sense to have a subscription for each "project" for simplicity.

I guess only the future will really tell the tale.