Managing Azure Resource Manager Storage Accounts Through Azure Management Studio

Over this weekend we released a new version of Azure Management Studio (AMS). This release adds support for managing Azure Resource Manager (ARM) storage accounts, one of the most requested features in AMS. This blog post talks about this enhancement and what we are working on right now.

As you may already know, connecting to ARM resources requires you to sign in to your Azure Active Directory (Azure AD). So first we will talk about how you can connect to your Azure AD using AMS, authorize the application, and add your Azure Subscription in AMS.

Add Subscription

Adding an Azure Subscription to AMS is super easy! First and foremost you will need the name of the Azure AD your user account is associated with. Finding the Azure Active Directory name is a three-step process as described here. Pick the Azure AD name which has the subscriptions you want to work on.

Once you have the Azure AD name, the second step is to run Azure Management Studio, right-click the “Subscriptions” node under the “Connection Group” panel, and choose the “Add Subscription Connection” option.

image

When you click this option, a new window appears with two ways to add an Azure Subscription. Pick the first one (“Use Azure Resource Manager API”) and then specify the name of your Azure AD. AMS still supports managing your Azure Subscription using X509 certificates (the classic way); if you want to connect to your Azure Subscription that way, choose the second option. Please note that you can only manage classic resources (classic storage accounts, Cloud Services, etc.) if you choose this option.

image

Once you click “Next” you will be taken to your Azure AD for authentication & authorization. You can sign in using your “Work or School” account or “Microsoft” account. Please ensure that the user account you sign in with is associated with the Azure AD you specified above.

image

After you have signed in successfully, Azure AD will ask you to authorize the application. Click “Accept” to continue.

image

Now AMS will bring up a list of the Azure Subscriptions you have access to in the Azure AD you just signed in to.

image

Simply select the subscriptions you wish to manage through AMS. If you want to give them a different name, you can change the value in the “Friendly name” field. You can also put these Azure Subscriptions in different connection groups as required.

Now you may ask how to distinguish between subscriptions added this way and the classic way. Well, it’s super easy! Subscriptions added through the Azure Resource Manager API are shown with a green icon and the latter with a blue icon, as shown below.

image

Managing Azure Resource Manager (ARM) Storage Accounts

Next, let’s talk about how you can manage ARM storage accounts using AMS. Once you have connected an Azure Subscription using the ARM API, you will see a “Storage Accounts” node there. Double-clicking it fetches all the ARM storage accounts in that Azure Subscription.

image

Please note that AMS only fetches “ARM” storage accounts and not “Classic” storage accounts. You can manage “Classic” storage accounts by adding an Azure Subscription using X509 certificate.

Using AMS, you can create a new ARM storage account, edit its properties, and even delete it. Because ARM is backed by role-based access control (RBAC), you need the proper permissions to perform these operations.

New Storage Account

Creating a new storage account is super simple! Simply right click on the “Storage Accounts” node and select “New Storage Account…” option.

image

On the subsequent window, specify the parameters like account name, resource group, account type etc. and you should be good to go!

image

Edit Storage Account

Editing a storage account’s properties is again super simple! Simply right-click the storage account and select the “Properties…” option to view and change the properties.

image

From this screen you can manage storage account keys, change the storage account redundancy type (Standard GRS to Standard RAGRS, for example), change the access tier for blob storage accounts (Hot to Cool, for example), and more.

image

Delete Storage Account

Similarly, you can delete a storage account by clicking the “Delete” option in the storage account’s context menu. The next screen asks you to confirm the deletion; press “OK” and you are done.

image

Explore Storage Account

To explore storage account contents, simply double click on the storage account. When you do that, AMS will fetch the account key for the storage account and open the nodes. From there, you can manage blobs, tables & queues. Please note that you must have permissions to fetch the storage account keys in order to explore the storage account contents.

What’s Next

This release was quite important for us for many reasons. First, we implemented one of the most requested features in AMS. Next, this release paved the way for the next set of features related to management of ARM resources. Now that we have implemented connecting to an Azure Subscription using the ARM API, it becomes easier for us to implement ARM resource management, so expect to see those features show up in AMS in upcoming releases.

On our immediate roadmap is a feature to help you with license key management in AMS. This has been the number one request when it comes to customer service, and we want to ease your pain there.

Feedback

As always, your feedback is really important to us. Please continue to send it in; we will do our best to act on it as soon as possible! You can submit a feature request or vote for a feature on our UserVoice page at http://cerebrata.uservoice.com or send us an email at support@cerebrata.com.


Cerebrata Is Now Part of Cynapta Software

It gives me great pleasure to announce that Cynapta has acquired Cerebrata from Redgate. In this post, I will talk about who we are (to set the context) and what this acquisition means for you as a user of Cerebrata products. We will also talk about the future direction we’re taking.

Who We Are

You may already know this, but I founded Cerebrata back in 2007 and we built Azure tools there. Cerebrata was acquired by Redgate in 2011, and after working with the amazing team there for 18 months we parted ways, as I wanted to do something different. I set up Cynapta to explore more opportunities for tools to help Azure developers. After trying a number of things for the next year or so, I started building Cloud Portam, and I’m proud to say that it is one of the leading tools for managing Azure resources. Both Cerebrata and Cloud Portam are mature and robust tools designed with one simple goal: make your life as an Azure developer easy! I would like to believe that we’re doing a good job at it (though you as a user are the final judge of that).

What does this acquisition mean for you?

At Cynapta we are solely focused on making your job as an Azure developer easier by providing best-of-breed utilities. Azure Management Studio and Cloud Portam are live examples of that.

This sort of change can be disruptive for a product – I remember some people had concerns when Redgate took over Cerebrata, and I’m sure some people will have some concerns now.

I will assure you of one thing: as a Cerebrata user, things have not changed one bit for you as a result of this acquisition.

There are a few things I would like to highlight in case you are concerned about that.
Firstly, we intend to invest substantial time and effort into the development of AMS. Most of the team that was part of Cerebrata development remains with the product, and these team members have been with Cerebrata for quite some time now. The team is coming to the end of some work on big underlying changes, which I’ll discuss more below, so you should actually see the rate at which we add new features improve!

If you’re concerned about the level of support you will get, please don’t be. We’re entirely dedicated to providing you the best Azure tools, and that includes the quality of our support. The original team is still in place, and we will do everything we can to address any issues you report as quickly as possible. Furthermore, because I have been involved with Azure for many years now, you now have someone who speaks your language (in a manner of speaking) :).

If you still have some concerns, please feel free to reach out to me. I can be reached at gmantri @ cerebrata.com. I will try my best to alleviate your concerns.

Azure Management Studio (AMS) & Cloud Portam

I am pleased to announce that all AMS paid users get one year of complimentary access to the Personal Edition of Cloud Portam. I would strongly encourage you to check out Cloud Portam.

We want you to have the best tooling available for managing your Azure resources. There are certain scenarios where AMS is a better fit (dealing with local resources, cloud services, etc.) and others where Cloud Portam makes more sense (it is browser-based, so you can manage your Azure resources from anywhere). Furthermore, some functionality is present in only one of the tools. We believe that having access to both AMS and Cloud Portam will make you more productive when it comes to managing your Azure resources.

With unfettered access to both of them, I’m confident that you will use whichever tool makes sense for a particular scenario. I am really looking forward to having you use both products.

The way this complimentary access works is that you sign up for the Personal Edition of Cloud Portam using your Microsoft Account or your Azure AD/Office 365 account. You get a fully functional 15-day trial when you sign up. At any point during your trial, please reach out to me to convert your trial account into a complimentary account. When you do, please share your AMS license key so that we can verify it. In the next few days, we will be introducing functionality in Cloud Portam that lets you input your AMS license key and have your trial account converted into a complimentary one from inside Cloud Portam. Once that is in place, you will not need to email me to get your account switched.

If you’re an existing Cloud Portam user, I encourage you to read the blog post about this acquisition on Cloud Portam’s blog as well: http://blog.cloudportam.com/cerebrata-is-now-part-of-cynapta-software.

What’s next for Azure Management Studio (AMS)?

You may be wondering about the future of AMS at this point. Let me assure you that AMS is very much alive and kicking.

I agree that there have not been many public-facing changes recently, but the team has been diligently working on support for ARM storage accounts. This has taken longer than we intended, but development is now finished and it’s almost ready to be released.

Now that this work is complete, we will be going through all the pending issues in UserVoice and planning development for those items. I must say that not all features requested there will be implemented in AMS, but wherever possible you will get the feature in either AMS or Cloud Portam.

What about Azure Management Cmdlets (AMC)?

Unfortunately, I cannot say the same about AMC :(. The last update to this product was some time ago, and Microsoft has all but replaced it with their own excellent PowerShell cmdlets. With that in mind, we have decided to shut down the AMC product.

Please note that going forward you will not be able to download the product from our website. If you have purchased a license, you will still be able to use the product, but no support will be provided for it.

Please reach out to me if you have any questions or concerns about us shutting down AMC.

And what’s the deal with Azure Explorer (AE)?

Redgate has decided to hold on to AE. Nothing will be changing on that front, so you can continue to use AE as before.

Future Direction

Now let me take a moment and talk about our vision of where we want to be.

If you have been doing Azure development for some time, I believe you will agree with me when I say that Azure is evolving at breakneck speed. From just three or four services in 2008, Azure now has more than 50 services you can use. Ideally we would want you to be able to manage all of them through our tools. It may take us some time to get there, but we will. So expect to see new features light up in either of our tools.

One interesting thing that has happened with Azure (and with Microsoft) is that it is no longer just about Windows (Microsoft changing the name from “Windows Azure” to “Microsoft Azure” is living testament to that). They are embracing open source and other platforms. As a tool vendor building for Azure, I strongly believe we need to do the same. With that in mind, we’re planning to build a new set of tools based on the current direction Azure is taking (especially Azure Resource Manager and the role-based access control mechanism). These tools will have to be cross-platform so that you’re not restricted to Windows. Our responsibility (and it’s a big one) is to enable you on the platform of your choice.

One thing we have realized is that often there are certain “things” you would want your tool vendor to do for you. A good example is backing up your storage accounts: you want to delegate that task to someone as long as you are sure it will get done. We want to be that “someone” for you. With products like Service Fabric, Functions, and Azure Batch, Microsoft has made our job rather simple, so expect us to venture into that area in the near future.

Now, I don’t want to set any false expectations. I do realize that our goals are ambitious and it will take us some time to get there, but one thing is certain: we will get there. We are also counting on your support in a big way during this journey. We want to work on things that make your job easier, and we hope we will have your support all the way.

Summary

To summarize this post, let me reiterate (one last time): for you as a Cerebrata user, things have not changed. In fact, we hope that this acquisition will result in better things for you as far as managing your Azure resources is concerned. We have grand plans for our tools and we will need your support to achieve our goals. If you have any questions or concerns about this acquisition, our plans for existing products, or the future roadmap, please (actually, pretty please with a cherry on top) reach out to me at gmantri at cerebrata.com.

I want to take this moment to thank you for being a Cerebrata user, and I hope you choose to continue our relationship while we build tools and services that help make you productive when working with Azure.


Accessing your geo-redundant endpoint

“Defcon zero. An entire Azure data center has been wiped out, billions of files have been lost.”

But not to worry, Azure will just fail over to another data center right? It’s automatic and totally invisible.

Well, not entirely. A failover doesn’t happen instantly so there’ll certainly be some downtime. There may also be more local connectivity concerns outside of Microsoft’s control that prevent connection. In these circumstances you might want to be able to access your replicated data until things are working properly again.

In December 2013 Microsoft previewed read-access geo-redundant replication for storage accounts, which went generally available in May 2014. This means blobs, tables and queues are available for read access from a secondary endpoint at any time. Fortunately, third-party tooling and configuration scripts won’t need a complete rewrite to support it, since the only thing you really need to do is use a different host for API requests.

Twice the bandwidth

Those who expect high performance from their Azure storage may already be limiting reporting and other non-production operations. An additional benefit of the replicated data is that you can divert all lower-priority traffic to it, reducing the burden on the primary. Depending on the boldness of your assumptions, you could double the throughput of your storage by directing unessential, ad hoc requests to the secondary endpoint.

Configuration

Replication can be configured in the Azure Management Portal to one of three modes: off, on, and on with read access. Officially these three modes are called:

  • Locally redundant. Data is replicated three times within the same data center.
  • Geo redundant. Replication is made to an entirely separate data center, many miles away.
  • Read access geo redundant. Replication is geo redundant and an additional second API endpoint is available for use at any time, not just after an emergency failover.

What can’t be configured is the choice of secondary location. Each data center is ‘paired’ with another – for example, North Europe is paired with West Europe, and West US is paired with East US. This also keeps the data within the same geo-political boundary (the exception being the new region in Brazil, which has its secondary in South Central US).

Behavioural matters

In a simple usage scenario, it’s unlikely you’ll run into issues with consistency between your primary and secondary storage. For small files you might only see a latency of a few seconds. Whilst Microsoft has not issued an SLA guarantee at this time, they state that replication should not be more than 15 minutes behind. For reporting purposes, you may well not care about such a small delay. In any case, you can query the secondary endpoint to find out when the last synchronisation checkpoint was made.
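That checkpoint is exposed by the Get Blob Service Stats operation, which is only valid against the secondary endpoint. A minimal sketch of the request URL (the account name here is hypothetical, and the request still needs to be signed like any other storage API call):

```python
def service_stats_url(account: str) -> str:
    """Build the Get Blob Service Stats request URL for an RA-GRS secondary.

    The XML response carries a <GeoReplication> element whose <LastSyncTime>
    is the last synchronisation checkpoint mentioned above.
    """
    return (f"https://{account}-secondary.blob.core.windows.net/"
            "?restype=service&comp=stats")

print(service_stats_url("robinanderson"))
# → https://robinanderson-secondary.blob.core.windows.net/?restype=service&comp=stats
```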

It’s worth pointing out that transactions may not be replicated in the order they were made. The only operations guaranteed to be replicated in order are those relating to a specific blob, table partition key, or individual queue. Replication does respect the atomicity of batch operations on Azure Tables, though: a batch will be replicated consistently.

Accessing the endpoint

Accessing the replicated data is done with the same credentials and API conventions, except that ‘-secondary’ is appended to the subdomain for your account.

For example, if the storage account ordinarily has an endpoint for blob access such as https://robinanderson.blob.core.windows.net then the replicated endpoint will be https://robinanderson-secondary.blob.core.windows.net. Note that this DNS entry won’t even be registered unless read access geo redundant replication is enabled. This does mean that if someone knows your storage account name, they can tell if you have this mode enabled by trying to ping your secondary endpoint, for all the good it will do them.
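Since only the host changes, deriving the secondary URL is a one-liner. Here is a small sketch of the transformation (pure string manipulation; no Azure SDK is assumed):

```python
from urllib.parse import urlparse

def secondary_endpoint(primary_url: str) -> str:
    """Append '-secondary' to the account subdomain of a storage endpoint."""
    parsed = urlparse(primary_url)
    labels = parsed.hostname.split(".")
    labels[0] += "-secondary"  # the first label is the storage account name
    return f"{parsed.scheme}://{'.'.join(labels)}"

print(secondary_endpoint("https://robinanderson.blob.core.windows.net"))
# → https://robinanderson-secondary.blob.core.windows.net
```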

When connecting to the secondary endpoint, authentication is performed using the same keys as for the primary. Any delegated access (for example, SAS) will also work, since it is validated using those keys.

Analytics

If monitoring metrics are enabled for blob, table or queue access, then those metrics will also be enabled for the secondary endpoint. This means there are twice as many metrics visible on the secondary, as the primary ones are replicated over as well.

Simply replace the word ‘Primary’ with ‘Secondary’ in the table name to access the equivalent metric, thus $MetricsHourlyPrimaryBlobTransactions becomes $MetricsHourlySecondaryBlobTransactions.
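In code that reads the analytics tables, the swap is a trivial string substitution, for example:

```python
def secondary_metrics_table(primary_table: str) -> str:
    # Replace the 'Primary' segment of an analytics table name with 'Secondary'
    return primary_table.replace("Primary", "Secondary", 1)

print(secondary_metrics_table("$MetricsHourlyPrimaryBlobTransactions"))
# → $MetricsHourlySecondaryBlobTransactions
```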

 

rageo-analytics-metrics

At the time of writing, there is no equivalent for the $logs blob container. Ordinarily, you can audit all read, write and delete operations made to your storage account. So whilst the aggregate monitoring analytics mentioned above are available for the secondary endpoint, you won’t know specifically which source IP addresses are issuing reads (though it’s unlikely you’d care).

Support for secondary storage in Azure Management Studio

Accessing the replicated data in AMS is fairly trivial if you’ve already got the original storage account registered – just right click and choose ‘Connect to geo-redundant secondary copy’ from the storage account context menu and a second, rather similar, storage account will be visible next to the first. It will behave entirely as if it were an ordinary storage account, except that it will be read-only and will display the last synchronisation time in the status bar.

rageo-quick-connect

 

Alternatively, there’s a checkbox on the ‘Add storage account’ dialog that allows you to specify access via the secondary endpoint, if you’ve not already registered the primary. Either way, once you’re looking at your data you can use the same UI features to search, query and download.

rageo-connect

To try out this new feature download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).

 

Drag and drop improvements – Azure Management Studio 1.4

We’ve now added better support for drag and drop in the latest version of Azure Management Studio (AMS). In this version you can drag block blobs both into and out of the AMS folder views.

So, for example, in the pictures below I drag a single selected file across AMS onto the desktop.

When you start the dragging, you cannot drop to start with (as you’d be copying the file into the folder that contains it), so the no drop annotation is displayed. We use the Windows shell to get a suitable graphic to display next to the cursor, so here we see its representation of a png image file.

Drag drop 1

Once the cursor is above a target that does support a drop, the drop description changes to reflect the action that will happen if you release the mouse and start the drop.

Drag drop 2

Of course, we don’t just support drag and drop inside the tool, but also allow other applications to accept the drop. In particular the shell is happy to take the drop of a data stream that we offer it.

Drag drop 3

When the user elects to drop onto a folder, this will make AMS fetch the blob content and stream it to the shell as a byte stream. The shell can use the name information included within the transfer object to create a file of the correct name which it can then fill with the content.

Drag drop 4

Of course, we don’t just want to be able to drag blobs out of AMS. We have also improved AMS so that it can handle more types of items that are dragged on to it. Drag and drop is a little complicated, and we’ll try to give a better overview of it below, but essentially the drop target (AMS) can look at the formats of the data which the source offers. Typically a source may offer a list of files on the local file system, and we have been able to handle this kind of source for a long time in AMS. If you drag a file out of a zip file though, this is offered to the target as a byte stream (plus some metadata) and AMS now knows how to handle this kind of information.

When you drag one or more of the files contained in a .zip file:

Drag drop 5

AMS happily accepts that as a drop target:

Drag drop 6

And dropping leads to a transfer executing, which is logged in the transfer panel:

Drag drop 7

As we’ll discuss in a moment, copy and paste uses a fairly similar mechanism behind the scenes so will work in the same way.

 

So how does Drag and Drop work then?

Drag and drop has been around since the old days and relies on COM interfaces to do its work. It revolves around the IDataObject interface, which essentially describes a dictionary which interested parties can both query for various properties (that correspond to different renderings of the data), and also set properties on to reflect the progress of any data transfer that is happening.

When a drag operation is started, the source makes an instance of this class, populates it with relevant data and then calls into a shell helper method, passing the DataObject as one of the arguments. This shell helper method then takes care of executing the drag as the cursor moves across the screen, interacting with the drop targets that are passed over in the process, until the drop happens on a particular target or it is cancelled (by pressing the Escape key). If you drag from AMS then we put at least two renderings of the data into the DataObject – one is a serialized .NET object that only AMS understands, which it will use if you drag from AMS into itself; the second format offers the data as a stream. In this second format, the data is offered as a set of metadata about the name of the item together with an OLE stream which the target can use to pull the data in blocks of bytes.

The DataObject is also used to reflect the semantics of the action itself. The target can set values to reflect whether it wants the action to be a move or a copy, and it will also set a value to say whether the drop was successful and whether the source needs to carry out the delete part of any Move. It will also populate the DataObject with the drag image which is shown by a window that the shell creates next to the cursor when you are dragging, and potentially a piece of description text describing the operation.

When the cursor moves over a potential drop target, this target gets a callback and can then freely interrogate the DataObject to determine if it contains suitable data for it to process. It can return a result back to the shell, which can use this to determine which cursor it displays – one showing that the drop is available or the no entry sign which reflects that the target isn’t able to handle the data that is being dragged. The target is also free to change the displayed text.

How do I do it then?

There are many useful blog posts out there that cover the rather arcane methods involved.

One ends up working at the level of COM, which is supported fairly well inside .NET. The only missing feature (as far as I know) is a way to detect that the COM object is no longer being used by external parties via COM… in C++ one can keep an eye on the reference count, but in the .NET world there is no way to see if the .NET-created CCW (COM callable wrapper) is still in use, and so the only way to detect that the object is no longer used is to add a finalizer to its type.

You also go back to the days of managing your own memory, needing to call GlobalLock and GlobalUnlock, and to allocate using Marshal.AllocHGlobal.

There are also a few extra interfaces you might want to implement – IAsyncOperation, for example, which allows the Shell to do a data transfer without blocking.

Getting all these parts to work together took some effort, and was helped a fair amount by a working implementation of some of this inside the Azure Explorer tool that we have made freely available for some time. We started with the Azure Explorer implementation and then merged in bits and pieces from various blog posts as we needed more functionality.

The good news is that you almost get cut-and-paste for free after you’ve done the work of implementing drag and drop, as this transfer process is also centred on the idea of a DataObject. The key difference is that you place the DataObject on the clipboard for other applications to find, and in order to enable your paste menu you may need to subscribe to clipboard change events to see if the clipboard contains a suitable format.

Was it worth it?

When you are dealing with the file system on your local machine and something like blob storage, which is typically displayed using a folders-and-files metaphor, it feels more natural to drag and drop files around and have the system interpret this as a series of transfer operations.

Hopefully, our users will find it useful.

To try out this new feature download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).

New! Azure Management Studio 1.4 – geo-redundancy, improved drag and drop, and more

Today we’ve released version 1.4 of Azure Management Studio. We’ve added many highly requested new features to this release, including improvements to the drag and drop functionality, and support for accessing files from the secondary of a geo-redundant storage account.

This release also includes many other features to make users’ lives easier, such as improvements to blob search and the ability to kill role instances. Find out more below.

 

Added support for accessing a geo-redundant secondary

Azure Management Studio (AMS) now supports accessing files from the secondary of a geo-redundant storage account, enabling you to inspect storage without impacting the performance of the primary. The secondary accounts can be added to the Storage Account section of the tree by selecting the check box “Access via geo-redundant backup”:

Geo Redundancy

The read-only account will be added to the list of storage accounts in the tree.

Geo Redundancy 2

Find out more about this new feature.

 

Improved drag and drop support

You can now drag block blobs both into and out of the folder views in Azure Management Studio. So if you have log files stored in a Blob Container, you can drag them out of the file explorer and drop them onto your desktop, or into Windows Explorer, to download them.

We have also improved AMS so that it can handle more types of items that are dragged into it. So you can now drop “virtual” files (for example, the files inside a .zip file) into AMS and have them uploaded to Blob Storage. Find out more about this new feature.

Drag drop

 

Improved blob search

Blob search in AMS has been improved to include the ability to search the blob metadata. To search the metadata, type either of the following into the search box in the tool:

  • metadataname:NameSearchText
    This will search all the child blobs and list any that have metadata with a name containing the text “NameSearchText”
  • metadatavalue:ValueSearchText
    This will search all the child blobs and list any that have metadata with a value containing the text “ValueSearchText”

Blob search 1
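To illustrate the semantics of the two query prefixes, here is a client-side sketch with made-up blob names and metadata (this mimics the behaviour described above, not AMS’s actual implementation):

```python
# Hypothetical blobs as (name, metadata) pairs
BLOBS = [
    ("report.csv", {"department": "finance", "owner": "alice"}),
    ("photo.png", {"owner": "bob"}),
    ("log.txt", {"retention": "30d"}),
]

def search_metadata(blobs, query):
    """Filter blobs the way a metadataname:/metadatavalue: query would."""
    kind, _, text = query.partition(":")
    text = text.lower()
    results = []
    for name, metadata in blobs:
        if kind == "metadataname" and any(text in k.lower() for k in metadata):
            results.append(name)
        elif kind == "metadatavalue" and any(text in v.lower() for v in metadata.values()):
            results.append(name)
    return results

print(search_metadata(BLOBS, "metadataname:owner"))
# → ['report.csv', 'photo.png']
```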

To help find specific blobs there is also new functionality to filter the list of blobs returned. You can filter the file list to show:

  • Page blobs or block blobs
  • Blobs which have a lease taken out on them
  • Blobs that have been modified within a certain time frame
  • Blobs within a range of sizes

Blob search 2

 

Added ability to kill role instances

If you wish to kill an individual instance of a role in a hosted service you can click the Delete button in the Operations menu. Alternatively, you can find it in the right click menu on an instance.

Roll instances

 

Added ability to create A8 and A9 sized Virtual Machines

In the Create Virtual Machine dialog you can now create the new A8 and A9 sizes of Virtual Machines.

A8 A9

 

Copy a connection string from the storage account node

You can now get a connection string for a Storage Account by right-clicking on the Storage Account.

Connection string

Copy Blob URL works with multiple selections

If you select several blobs in the Blob Explorer, you can right-click and select Copy Blob URL from the menu to put a list of URLs onto the clipboard.

 

Menu item to view the release notes

You can now view the release notes for Azure Management Studio from the “Release Notes” menu item on the Help menu.

 

To try out all these new features, download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).

We hope you enjoy trying out the new features – as always, we’d love to hear your feedback in the comments below.

 

 
