Announcing the launch of “Cerebrata Cerulean” in Public Preview

It gives me immense pleasure to launch our latest product, Cerebrata Cerulean, in public preview. In this blog post I will talk about what this product is, its features, and our future roadmap for it.

What is Cerebrata Cerulean?

Cerebrata Cerulean (or simply Cerulean) is the latest offering from Cerebrata. In short, it is a cross-platform desktop tool for managing your Azure resources. Built using GitHub’s Electron and Facebook’s ReactJS frameworks, the tool works on Windows, Mac, and Linux.

Azure Services Supported

Next, let’s talk about the Azure services you can manage using Cerulean. As of today, you can manage 3 services using this tool: Azure Cosmos DB (DocumentDB), Azure Redis Cache, and Azure Search. Support for more services will be added soon.

Now let’s talk about the features supported for each of these services in detail.

Azure Cosmos DB

As of this writing, Cerulean has full support for the DocumentDB API of Cosmos DB. We will be adding the other APIs (Tables, Graph, Mongo) very soon. When it comes to DocumentDB, using Cerulean you can:

  • Connect to one or more of your Azure Cosmos DB accounts. On the Windows platform, you can also connect to the DocumentDB emulator.
  • Manage DocumentDB databases in your Cosmos DB accounts. You can list, create and delete DocumentDB databases.
  • Manage collections in your DocumentDB databases. When it comes to managing collections:
    • You can list, create, update and delete collections.
    • When creating collections, you can specify request throughput, configure the partitioning strategy, configure indexing policies, and set a default time-to-live (TTL) policy.
    • You can update request throughput on the fly. When updating request throughput, you can change Request Units/Second (RU/s) for that collection as well as enable/disable Request Units/Minute (RUPM).
    • You can update the indexing policies of a collection on the fly.
    • You can change the document time-to-live (TTL) policy for a collection.
    • You can view collection statistics.
    • You can view a collection’s system properties.
  • Manage users in your DocumentDB databases. You can list, create, update and delete users.
  • Manage documents in a collection. When it comes to managing documents:
    • You can query for, create, update and delete documents.
    • When querying for documents in a partitioned collection, you can specify the partition key value along with your query. The tool allows you to specify all kinds of partition key values (string, number, boolean, null). You can also execute queries that span partitions.
    • You can save your most frequently used queries in the application itself so that you don’t have to type them over and over again.
  • Manage document attachments. When it comes to managing document attachments:
    • You can list, create, update and delete document attachments.
    • When creating attachments, you can either select a file from your local machine or specify the URL of a publicly available resource on the Internet.
  • Manage stored procedures in a collection. When it comes to managing stored procedures:
    • You can list, create, update and delete stored procedures.
    • You can execute stored procedures using the tool itself.
    • You can even update stored procedures in a partitioned collection (not natively supported by the DocumentDB API).
  • Manage triggers in a collection. When it comes to managing triggers:
    • You can list, create, update and delete triggers.
    • You can even update triggers in a partitioned collection (not natively supported by the DocumentDB API).
  • Manage user defined functions in a collection. When it comes to managing user defined functions:
    • You can list, create, update and delete user defined functions.
    • You can even update user defined functions in a partitioned collection (not natively supported by the DocumentDB API).
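To give a sense of how a partitioned-collection query like the ones above looks in code, here is a minimal sketch using the azure-cosmos Python SDK. The account URL, key, database and container names are hypothetical placeholders, not values from the product.

```python
# A hedged sketch of a parameterized DocumentDB query against a
# partitioned collection. Only build_query() runs here; the SDK calls
# below are shown commented out because they need a live account.

def build_query(city):
    """Build a parameterized DocumentDB SQL query for a given city value."""
    query = "SELECT * FROM c WHERE c.city = @city"
    parameters = [{"name": "@city", "value": city}]
    return query, parameters

if __name__ == "__main__":
    query, parameters = build_query("Seattle")
    print(query)

    # With a live account (hypothetical names), the query could be run as:
    # from azure.cosmos import CosmosClient
    # client = CosmosClient("https://myaccount.documents.azure.com:443/",
    #                       credential="<account-key>")
    # container = (client.get_database_client("mydb")
    #                    .get_container_client("mycoll"))
    # items = container.query_items(
    #     query=query,
    #     parameters=parameters,
    #     partition_key="Seattle",              # single-partition query
    #     # enable_cross_partition_query=True,  # or span all partitions
    # )
```

Parameterized queries keep the partition key value out of the query string itself, which is what lets a tool switch between single-partition and cross-partition execution.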

To learn more about these features, please visit

Azure Redis Cache

When it comes to managing your Azure Redis Cache accounts, using Cerulean you can:

  • Connect to one or more of your Azure Redis Cache accounts.
  • You can view the list of databases in your account.
  • You can manage keys in a database. When it comes to managing keys:
    • You can create new keys in a database. All key types are supported.
    • You can search for keys in a database. Partial and wild-card searches are supported.
    • You can edit key values.
    • You can delete one or more keys from a database.
    • You can rename a key.
    • You can change a key’s expiry.
  • You can delete all keys in a database with a single click (the FLUSHDB command).
  • You can delete all keys from all databases with a single click (the FLUSHALL command).
  • You can view the list of clients connected to your Azure Redis Cache account. You can kill client connections from the tool itself.
  • You can view slow log entries. You can also clear slow log entries.
  • You can monitor the performance of your Azure Redis Cache account through interactive charts. We have included over 250 monitoring elements including % CPU utilization, % memory utilization etc.
  • We have also included a basic cache terminal using which you can directly execute Redis commands against your Redis Cache account.
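For readers curious what these operations map to at the protocol level, here is a small sketch using the redis-py client. The cache name and access key are hypothetical; the connection settings reflect the usual Azure Redis layout (SSL on port 6380), which is an assumption stated here rather than anything Cerulean-specific.

```python
# A hedged sketch of connecting to an Azure Redis Cache account with
# redis-py. Only the settings helper runs here; the live commands are
# shown commented out because they need a reachable cache.

def azure_redis_kwargs(account_name, access_key, db=0):
    """Connection settings for an Azure Redis Cache account (assumed layout)."""
    return {
        "host": f"{account_name}.redis.cache.windows.net",
        "port": 6380,          # Azure Redis SSL port
        "password": access_key,
        "ssl": True,
        "db": db,
    }

if __name__ == "__main__":
    kwargs = azure_redis_kwargs("mycache", "<access-key>")
    print(kwargs["host"])

    # With a live cache (not run here):
    # import redis
    # r = redis.Redis(**kwargs)
    # r.flushdb()       # delete all keys in the current database
    # r.slowlog_get()   # view slow log entries
    # r.client_list()   # list connected client connections
```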

To learn more about these features, please visit

Azure Search Service

When it comes to managing your Azure Search Service accounts, using Cerulean you can:

  • Connect to one or more of your Azure Search Service accounts.
  • You can manage indexes in your account. When it comes to managing indexes:
    • You can list, create, update or delete indexes.
    • You can manage index fields in an index. When creating an index field, you can specify index data type, attributes and analyzers.
    • You can manage index scoring profile. You can define scoring profile functions as well as weights.
    • You can manage CORS policies for an index.
    • You can manage “Char Filters” for an index.
    • You can manage “Token Filters” for an index.
    • You can manage “Tokenizers” for an index.
    • You can manage “Analyzers” for an index. You can create custom analyzers as well using this tool.
  • You can manage data sources in your account. When it comes to managing data sources:
    • You can list, create, update or delete data sources.
    • Azure SQL, Azure Blob, Azure Table and DocumentDB type data sources are supported.
    • You can also specify data change detection and data deletion detection policies.
  • You can manage indexers in your account. When it comes to managing indexers:
    • You can list, create, update or delete indexers.
    • You can schedule an indexer or run the indexer on demand from within the application.
    • You can check the status of an indexer.
    • You can reset an indexer.
    • You can disable/enable an indexer.
  • You can manage documents in an index. When it comes to managing documents:
    • You can query for documents. Using this tool, you can define advanced queries as well.
    • You can create, update or delete documents.
    • You can search documents by keys.
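Under the hood, document queries like these go to the Azure Search REST API. The sketch below builds the GET URL for a simple search; the service name, index name, and api-version value are placeholders, and a real request would also carry an `api-key` header.

```python
from urllib.parse import urlencode

# A hedged sketch of the Azure Search document-search request URL.
# "mysearch", "hotels", and the api-version shown are assumptions.

def search_url(service, index, search_text, api_version="2016-09-01"):
    """Build the GET URL for a simple document search (assumed REST layout)."""
    base = f"https://{service}.search.windows.net/indexes/{index}/docs"
    qs = urlencode({"search": search_text, "api-version": api_version})
    return f"{base}?{qs}"

if __name__ == "__main__":
    print(search_url("mysearch", "hotels", "wifi"))
    # The request would also need an "api-key" header with a query key.
```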

To learn more about these features, please visit

As you can see, we have been extremely thorough in supporting the Azure services we have included in the tool. I don’t think there’s a single tool out there that provides all of these features for these services.


The next obvious question is how Cerulean will be priced. There have been some changes, and I want to talk about those here.

First, Cerulean is offered as a subscription-based product. There’s no license key; instead, you purchase a subscription.

If you are an individual developer, we offer the flexibility of either a monthly or an annual personal subscription. Not only are these very reasonably priced, but they also let you pay for the tool only for as long as you want to use it. You can cancel your subscription at any time, no questions asked (though we will be sad pandas to see you go).

From our experience selling other Cerebrata products, we found that licenses are often purchased in bulk by a manager for her/his team or by a reseller. We obviously want to support that. For such users, we are offering a professional subscription. Again, not only are these very reasonably priced, but a subscription can also be transferred to another user on the team. Today that happens by raising a support request, but eventually we will build an account management portal through which you will be able to manage your subscriptions without ever reaching out to support.

Another big advantage of the subscription-based approach is that you can install and use Cerulean on as many machines as you like (unlike Azure Management Studio, where you can only use the software on two machines).

For pricing, please visit

Future Roadmap

We have been working on this product for the last 8–9 months, and it has been in closed private beta for the last 4 months or so with lots of users. We have been iterating and improving the product like crazy, and what you see today is the result of feedback from our private beta users (we can’t thank them enough!).

But to be honest, we have just scratched the surface (or not even that). Azure has grown by leaps and bounds, and we intend to support management of all the major services in Cerulean. Our intention is to bring the ease of managing your Azure resources to your desktop. Is it going to happen overnight? No! Will it happen? Yes, it will! We will need your continued support and encouragement while we undertake this journey. We are hoping that you will be a part of it by providing us constant feedback: good, bad, or ugly; we can take it all.

On our immediate roadmap after this is support for Azure Service Bus (work is already underway on that) and the remaining Azure Cosmos DB APIs.

Azure Management Studio

Now that we have this new product, I am sure a lot of you will have a question or two about Azure Management Studio. Azure Management Studio is supported today and will be supported for a long time to come. We realize that a lot of users like you depend on Azure Management Studio for managing Azure Storage, Diagnostics and Subscriptions, and it would be a grave mistake on our part to stop supporting that product. Our goal is to bring all the goodness available in Azure Management Studio into Cerulean, and until that happens, Azure Management Studio will be completely supported. However, I will be honest in saying that we will not be adding support for any new services in Azure Management Studio. So if you want to see support for a new Azure service, by all means send your request to us; we will be incorporating it in Cerulean only.

I would be more than happy to answer any questions/concerns that you may have on Azure Management Studio. Please feel free to reach out to me directly. I can be reached at gmantri @ I will try my best to answer your queries and concerns.


All in all, I am quite excited about this development and what lies ahead of us. Needless to say, your support and encouragement are required for us to be successful in providing you with the best Azure management tooling. I am quite hopeful that you will give Cerulean a try and make it the tool of your choice for managing your Azure resources.

Let me close by saying that providing you with the best Azure tooling is our sole goal!

As always, if you have any comments or thoughts please do share with us. If you need to reach out to me, I can be reached at gmantri @

Thanks for reading and have a great day!


Managing Azure Resource Manager Storage Accounts Through Azure Management Studio

Over this weekend we released a new version of Azure Management Studio (AMS). In this release we included support for managing Azure Resource Manager (ARM) storage accounts (one of the most requested features in AMS). This blog post talks about this enhancement and what we are working on right now.

As you may already know, connecting to ARM resources requires you to sign in to your Azure Active Directory (Azure AD). So first we will talk about how you can connect to your Azure AD using AMS, authorize the application, and add your Azure subscription in AMS.

Add Subscription

Adding an Azure subscription to AMS is super easy! First and foremost, you will need the Azure AD name your user account is associated with. Finding the Azure Active Directory name is a 3-step process as described here. Pick the Azure AD name which has the subscriptions you want to work on.

Once you have the Azure AD name, the second step is to run Azure Management Studio, right-click on the “Subscriptions” node under the “Connection Group” panel, and navigate to the “Add Subscription Connection” option.


When you click on this option, a new window is shown which gives you two options to add an Azure subscription. Pick the first one (“Use Azure Resource Manager API”) and then specify the name of your Azure AD. AMS still supports managing your Azure subscription using X509 certificates (the classic way); if you want to connect to your Azure subscription that way, choose the second option. Please note that you can only manage classic resources (classic storage accounts, Cloud Services, etc.) if you choose this option.


Once you click “Next” you will be taken to your Azure AD for authentication & authorization. You can sign in using your “Work or School” account or “Microsoft” account. Please ensure that the user account you sign in with is associated with the Azure AD you specified above.


After you have signed in successfully, Azure AD will ask you to authorize the application. Click “Accept” to continue.


Now AMS will bring up the list of Azure subscriptions you have access to that are part of the Azure AD you just signed in to.


Simply select the subscriptions you wish to manage through AMS. If you want to give them a different name, you can do so by changing the value in “Friendly name” field. Also you can put these Azure Subscriptions in different connection groups as per your requirement.

Now you may ask how to distinguish between subscriptions added this way and the classic way. Well, it’s super easy! Subscriptions added through the Azure Resource Manager API are shown with a green icon and the latter with a blue icon, as shown below.


Managing Azure Resource Manager (ARM) Storage Accounts

Next, let’s talk about how you can manage ARM Storage accounts using AMS. Once you have connected an Azure Subscription using ARM API, you will see “Storage Accounts” node there. Double clicking on it will fetch all the ARM storage accounts in that Azure Subscription.


Please note that AMS only fetches “ARM” storage accounts and not “Classic” storage accounts. You can manage “Classic” storage accounts by adding an Azure Subscription using X509 certificate.

Using AMS, you can create a new ARM storage account, edit its properties, and even delete storage accounts. Because ARM is backed by role-based access control (RBAC), you need the proper permissions to perform these operations.

New Storage Account

Creating a new storage account is super simple! Simply right click on the “Storage Accounts” node and select “New Storage Account…” option.


On the subsequent window, specify the parameters like account name, resource group, account type etc. and you should be good to go!


Edit Storage Account

Editing a storage account’s properties is again super simple! Simply right-click on the storage account and select the “Properties…” option to view and change the properties.


From this screen itself you can manage storage account keys, change storage account redundancy type (Standard GRS to Standard RAGRS for example), change access tier for blob storage accounts (Hot to Cool for example) and more.


Delete Storage Account

Similarly, you can delete a storage account by clicking on the “Delete” option in the storage account’s context menu. The next screen asks you to confirm the deletion; press “OK” to confirm and you are done.


Explore Storage Account

To explore storage account contents, simply double click on the storage account. When you do that, AMS will fetch the account key for the storage account and open the nodes. From there, you can manage blobs, tables & queues. Please note that you must have permissions to fetch the storage account keys in order to explore the storage account contents.

What’s Next

This release was quite important for us for many reasons. First, we implemented one of the most requested features in AMS. Next, this release paved the way for the next set of features related to management of ARM resources. Now that we have implemented connecting to an Azure subscription using the ARM API, it becomes somewhat easier for us to implement ARM resource management. So please expect to see those features show up in AMS in upcoming releases.

On our immediate roadmap we have a feature that will help you with management of your license key in AMS. This has been the number one request when it comes to customer service, and we want to ease your pain there.


As always your feedback is really important to us. Please continue to send in your feedback. We will try our best to take care of it at the earliest! You can submit a feature request or vote for a feature on our UserVoice page at or send us an email at


Cerebrata Is Now Part of Cynapta Software

It gives me great pleasure to announce that Cynapta has acquired Cerebrata from Redgate. In this post, I will talk about who we are (to set the context) and what this acquisition means for you as a user of Cerebrata products. We will also talk about the future direction we’re taking.

Who We Are

You may already know this, but I founded Cerebrata back in 2007 and we built Azure tools there. Cerebrata was acquired by Redgate in 2011, and after working with the amazing team there for 18 months we parted ways, as I wanted to do something different. So I set up Cynapta to explore more opportunities for tools to help Azure developers. After trying a number of things for the next year or so, I started building Cloud Portam, and I’m proud to say that it is one of the leading tools for managing Azure resources. Both Cerebrata and Cloud Portam are mature and robust tools designed with one simple goal: make your life as an Azure developer easy! I would like to believe that we’re doing a good job at it (though you as a user are the final judge of that).

What does this acquisition mean to you

At Cynapta we are solely focused on making your job as an Azure developer easier by providing best-of-breed utilities. Azure Management Studio and Cloud Portam are live examples of that.

This sort of change can be disruptive for a product – I remember some people had concerns when Redgate took over Cerebrata, and I’m sure some people will have some concerns now.

I will assure you of one thing: as a Cerebrata user, things have not changed one bit for you as a result of this acquisition.

There are a few things I would like to highlight in case you are concerned about that.
Firstly, we intend to invest substantial time and effort into the development of AMS. Most of the team that was part of Cerebrata development remains with the product and these team members have been with Cerebrata for quite some time now. The team is coming to the end of some work on big underlying changes which I’ll discuss more below, so you should actually see the rate at which we add new features improve!

If you’re concerned about the level of support you will get, please don’t. We’re entirely dedicated to providing you the best Azure tools, and that includes the quality of our support. The original team is still in place, and we will do everything we can to address any issues you report as quickly as possible. Furthermore because I have been involved with Azure for many years now, you now have someone who speaks your language (in a manner of speaking) :).

If you still have some concerns, please feel free to reach out to me. I can be reached at gmantri @ I will try my best to alleviate your concerns.

Azure Management Studio (AMS) & Cloud Portam

I am pleased to announce that all AMS paid users get a 1 year complimentary access to Personal Edition of Cloud Portam. I would strongly encourage you to check out Cloud Portam.

We want you to have the best tooling available for managing your Azure resources. There are certain scenarios where AMS is a better fit (e.g. dealing with local resources, cloud services, etc.) while there are scenarios where using Cloud Portam makes sense (e.g. it is browser-based, so you can manage your Azure resources from anywhere). Furthermore, there is some functionality present in only one of the tools. We believe that having access to both AMS and Cloud Portam will enable you to be more productive when it comes to managing your Azure resources.

With unfettered access to both of them, I’m confident that you will use the tool which makes sense for a particular scenario. I am really looking forward to having you use both products.

The way this complimentary access works is that you sign up for the Personal Edition of Cloud Portam using your Microsoft account or your Azure AD/Office 365 account. You get a fully functional 15-day trial when you sign up. At any point during your trial, please reach out to me to convert your trial account into a complimentary account. When you do, please share your AMS license key so that we can verify it. In the next few days, we will be introducing functionality in Cloud Portam wherein you will be able to enter your AMS license key and get your trial account converted into a complimentary one from inside Cloud Portam; once that is in place, you won’t need to email me to get your account switched.

If you’re an existing Cloud Portam user, I encourage you to read the blog post about this acquisition on Cloud Portam’s blog as well:

What’s next for Azure Management Studio (AMS)?

You may be wondering about the future of AMS at this point. Let me assure you that AMS is very much alive and kicking.

I agree that there have not been many public-facing changes recently, but the team has been diligently working on support for ARM storage accounts. This has taken longer than we intended, but development is now finished and it’s almost ready to be released.

Now that this work is complete, we will go through all the pending issues on UserVoice and plan development for those items. I must say that not all features requested there will be implemented in AMS, but wherever possible, you will get the feature in either AMS or Cloud Portam.

What about Azure Management Cmdlets (AMC)?

Unfortunately I cannot say the same about AMC :(. The last update to this product was some time ago, and Microsoft has all but replaced it with their own excellent PowerShell cmdlets. With those things in mind, we have decided to shut down the AMC product.

Please note that going forward you will not be able to download the product from our website. If you have purchased a license, you will still be able to use the product, but no support will be provided for it.

Please reach out to me if you have any questions or concerns about us shutting down AMC.

And what’s the deal with Azure Explorer (AE)?

Redgate has decided to hold on to AE. Nothing will be changing on that front, so you can continue to use AE as before.

Future Direction

Now let me take a moment and talk about our vision of where we want to be.

If you have been doing Azure development for some time now, I believe you will agree with me when I say that Azure is evolving at breakneck speed. From just 3-4 services in 2008, Azure now has over 50 services that you can use. Ideally we would want you to manage all of those services through our tools. It may take us some time to get there, but we will get there. So expect to see new features light up in either of our tools.

One interesting thing that has happened with Azure (and also with Microsoft) is that it is no longer about Windows (Microsoft changing the name from “Windows Azure” to “Microsoft Azure” is a living testament to that). They are embracing open source and other platforms. As a tool vendor building for Azure, I strongly believe we need to do the same. With that in mind, we’re planning on building a new set of tools based on the current direction Azure is taking (especially Azure Resource Manager and the role-based access control mechanism). These tools will have to be cross-platform so that you’re not restricted to Windows only. Our responsibility (and it’s a big one) is to enable you on the platform of your choice.

One thing we have realized is that oftentimes there are certain “things” you would want your tool vendor to do for you. A good example would be backing up your storage accounts. You would want to delegate that task to someone, as long as you are sure the task will be done. We want to be that “someone” for you. With products like Service Fabric, Functions, and Azure Batch in Azure, Microsoft has made our job rather simple. So expect us to venture into that area in the near future.

Now, I don’t want to set any false expectations. I do realize that our goals are ambitious and it will take us some time to get there, but one thing is certain: we will get there. We are also counting on your support big time during this journey. We want to work on things which make your job easier, and we hope that we will have your support all the way.


To summarize this post, let me reiterate (one last time): for you as a Cerebrata user, things have not changed. In fact, we hope that this acquisition will result in better things for you as far as managing your Azure resources is concerned. We have grand plans for our tools and we will need your support to achieve our goals. If you have any questions or concerns about this acquisition, our plans for existing products, or the future roadmap, please (actually pretty please with a cherry on top) reach out to me at gmantri at

I want to take this moment to thank you for being a Cerebrata user, and I hope that you choose to continue our relationship while we build tools and services that will help make you productive when working with Azure.


Accessing your geo-redundant endpoint

“Defcon zero. An entire Azure data center has been wiped out, billions of files have been lost.”

But not to worry, Azure will just fail over to another data center, right? It’s automatic and totally invisible.

Well, not entirely. A failover doesn’t happen instantly so there’ll certainly be some downtime. There may also be more local connectivity concerns outside of Microsoft’s control that prevent connection. In these circumstances you might want to be able to access your replicated data until things are working properly again.

In Dec 2013 Microsoft previewed read-access geo redundant replication for storage accounts – which went Generally Available in May 2014. This means blobs, tables and queues are available for read access from a secondary endpoint at any time. Fortunately, third-party tooling and configuration scripts won’t need a complete re-write to support it, since the only thing you really need to do is use a different host for API requests.

Twice the bandwidth

Those who expect high performance from their Azure storage may already be limiting reporting and other non-production operations. An additional benefit of the replicated data is that you can divert all lower-priority traffic to it, thus reducing the burden on the primary. Depending on the boldness of your assumptions, you could double the throughput of your storage by directing unessential, ad hoc requests to the secondary endpoint.


Replication can be configured in the Azure Management Portal to one of three modes: off, on, and on with read access. Officially these three modes are called:

  • Locally redundant. Data is replicated three times within the same data center.
  • Geo redundant. Replication is made to an entirely separate data center, many miles away.
  • Read access geo redundant. Replication is geo redundant and an additional second API endpoint is available for use at any time, not just after an emergency failover.

What can’t be configured is the choice of secondary location. Each data center is ‘paired’ with another – for example, North Europe is paired with West Europe, and West US is paired with East US. This also keeps the data within the same geo-political boundary (the exception being the new region in Brazil, which has its secondary in South Central US).

Behavioural matters

In a simple usage scenario, it’s unlikely you’ll run into issues with consistency between your primary and secondary storage. For small files you might only see a latency of a few seconds. Whilst Microsoft has not issued an SLA guarantee at this time, they state that replication should not be more than 15 minutes behind. For reporting purposes, such a small lag may not matter at all. In any case, you can query the secondary endpoint to find out when the last synchronisation checkpoint was made.
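That last-sync query maps to the Get Blob Service Stats REST operation, which is only valid against the secondary endpoint. A minimal sketch of the request URL, with a hypothetical account name:

```python
# A hedged sketch of the Get Blob Service Stats request. The account
# name is a placeholder; the (authenticated) GET returns XML that
# includes a <LastSyncTime> element for the replicated data.

def service_stats_url(account):
    """URL for Get Blob Service Stats, served only by the secondary endpoint."""
    return (f"https://{account}-secondary.blob.core.windows.net/"
            "?restype=service&comp=stats")

if __name__ == "__main__":
    print(service_stats_url("myaccount"))
```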

It’s worth pointing out that transactions may not be replicated in the order that they were made. The only operations guaranteed to be replicated in order are those relating to a specific blob, table partition key, or individual queue. Replication does respect the atomicity of batch operations on Azure Tables, though, so a batch will be replicated consistently.

Accessing the endpoint

Accessing the replicated data is done with the same credentials and API conventions, except that ‘-secondary’ is appended to the subdomain for your account.

For example, if the storage account ordinarily has an endpoint for blob access such as then the replicated endpoint will be Note that this DNS entry won’t even be registered unless read access geo redundant replication is enabled. This does mean that if someone knows your storage account name, they can tell if you have this mode enabled by trying to ping your secondary endpoint, for all the good it will do them.
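Since the change is purely a host-name substitution, tooling can derive the secondary endpoint mechanically. A small sketch, with a hypothetical account name:

```python
# Deriving the read-only secondary endpoint: append "-secondary" to the
# account subdomain and leave the rest of the host untouched.

def secondary_host(primary_host):
    """Turn a primary storage endpoint host into its secondary twin."""
    account, rest = primary_host.split(".", 1)
    return f"{account}-secondary.{rest}"

if __name__ == "__main__":
    print(secondary_host("myaccount.blob.core.windows.net"))
```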

When connecting to the secondary endpoint authentication is performed using the same keys as for the primary. Any delegated access (for example, SAS) will also work since these are validated using these keys.


If monitoring metrics are enabled for blob, table or queue access, then those metrics will also be enabled for the secondary endpoint. This means there are twice as many metrics visible on the secondary, as the primary ones are replicated over as well.

Simply replace the word ‘Primary’ with ‘Secondary’ in the table name to access the equivalent metric, thus $MetricsHourlyPrimaryBlobTransactions becomes $MetricsHourlySecondaryBlobTransactions.
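The mapping between primary and secondary metrics tables really is a literal word substitution in the table name:

```python
# Map a primary analytics table name to its secondary counterpart by
# replacing "Primary" with "Secondary" in the name.

def secondary_metrics_table(primary_table):
    """Secondary analytics table name for a given primary table name."""
    return primary_table.replace("Primary", "Secondary", 1)

if __name__ == "__main__":
    print(secondary_metrics_table("$MetricsHourlyPrimaryBlobTransactions"))
```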



At the time of writing, there is no equivalent for the $logs blob container. Ordinarily, you can audit all read, write and delete operations made to your storage account. So whilst the aggregate monitoring analytics mentioned above are available for the secondary endpoint, you won’t know specifically which source IP addresses are issuing reads (though it’s unlikely you’d care).

Support for secondary storage in Azure Management Studio

Accessing the replicated data in AMS is fairly trivial if you’ve already got the original storage account registered – just right click and choose ‘Connect to geo-redundant secondary copy’ from the storage account context menu and a second, rather similar, storage account will be visible next to the first. It will behave entirely as if it were an ordinary storage account, except that it will be read-only and will display the last synchronisation time in the status bar.



Alternatively, there’s a checkbox on the ‘Add storage account’ dialog that allows you to specify access via the secondary endpoint, if you’ve not already registered the primary. Either way, once you’re looking at your data you can use the same UI features to search, query and download.


To try out this new feature download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).


Drag and drop improvements – Azure Management Studio 1.4

We’ve now added better support for drag and drop in the latest version of Azure Management Studio (AMS). In this version you can drag block blobs both into and out of the AMS folder views.

So, for example, in the pictures below I drag a single selected file across AMS onto the desktop.

When you start dragging, you cannot drop immediately (as you’d be copying the file into the folder that already contains it), so the no-drop annotation is displayed. We use the Windows shell to get a suitable graphic to display next to the cursor, so here we see its representation of a PNG image file.

Drag drop 1

Once the cursor is above a target that does support a drop, the drop description changes to reflect the action that will happen if you release the mouse button to complete the drop.

Drag drop 2

Of course, we don’t just support drag and drop inside the tool; we also allow other applications to accept the drop. In particular, the Windows shell is happy to accept the data stream that we offer it.

Drag drop 3

When the user elects to drop onto a folder, AMS fetches the blob content and streams it to the shell as a byte stream. The shell uses the name information included in the transfer object to create a file with the correct name, which it then fills with the content.
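The fetch-and-stream step boils down to a chunked copy loop. The sketch below is a simplified model of how a consumer pulls bytes from the offered stream in fixed-size blocks (the function name and chunk size are our own choices, not anything from the shell API):

```python
import io

def stream_copy(source, dest, chunk_size=64 * 1024):
    # Pull the content in fixed-size blocks, the way a drop target pulls
    # bytes from an OLE stream, writing each block to the destination.
    total = 0
    while True:
        block = source.read(chunk_size)
        if not block:
            break
        dest.write(block)
        total += len(block)
    return total
```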

Drag drop 4

Of course, we don’t just want to be able to drag blobs out of AMS. We have also improved AMS so that it can handle more types of items that are dragged onto it. Drag and drop is a little complicated, and we’ll try to give a better overview of it below, but essentially the drop target (AMS) can look at the formats of the data which the source offers. Typically a source may offer a list of files on the local file system, and AMS has been able to handle this kind of source for a long time. If you drag a file out of a zip file, though, it is offered to the target as a byte stream (plus some metadata), and AMS now knows how to handle this kind of information.

When you drag one or more of the files contained in a .zip file:

Drag drop 5

AMS happily accepts that as a drop target:

Drag drop 6

And dropping leads to a transfer executing, which is logged in the transfer panel:

Drag drop 7

As we’ll discuss in a moment, copy and paste uses a fairly similar mechanism behind the scenes so will work in the same way.


So how does Drag and Drop work then?

Drag and drop has been around since the early days of Windows and relies on COM interfaces to do its work. It revolves around the IDataObject interface, which essentially describes a dictionary that interested parties can both query for various properties (corresponding to different renderings of the data) and set properties on to reflect the progress of any data transfer that is happening.

When a drag operation starts, the source creates an instance of this interface, populates it with relevant data and then calls into a shell helper method, passing the DataObject as one of the arguments. This shell helper method then takes care of executing the drag as the cursor moves across the screen, interacting with the drop targets that are passed over in the process, until the drop happens on a particular target or the drag is cancelled (by pressing the Escape key). If you drag from AMS then we put at least two renderings of the data into the DataObject – one that is a serialized .NET object that only AMS understands, which it will use if you drag from AMS into itself, and a second data format that offers the data as a stream. In this second format, the data is offered as a set of metadata about the name of the item together with an OLE stream which the target can use to pull the data in blocks of bytes.
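A toy model of the idea, with Python standing in for the COM plumbing (the class and the format names here are entirely made up for illustration):

```python
class ToyDataObject:
    """Dictionary-like stand-in for IDataObject: format name -> rendering."""

    def __init__(self):
        self._renderings = {}

    def set_data(self, fmt, rendering):
        self._renderings[fmt] = rendering

    def has_format(self, fmt):
        return fmt in self._renderings

    def get_data(self, fmt):
        return self._renderings[fmt]

# The source offers the same item in two renderings:
dobj = ToyDataObject()
dobj.set_data("ams/internal", {"container": "backups", "blob": "db.bak"})  # private AMS-only format
dobj.set_data("file/stream", b"<blob content bytes>")                      # generic byte stream

# A target walks its preference list and takes the richest format it understands:
fmt = next(f for f in ("ams/internal", "file/stream") if dobj.has_format(f))
```

A target that only understands generic streams would simply skip past "ams/internal" and fall through to "file/stream".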

The DataObject is also used to reflect the semantics of the action itself. The target can set values to indicate whether it wants the action to be a move or a copy, and to record whether the drop was successful and whether the source needs to carry out the delete part of any move. The source also populates the DataObject with the drag image, which is shown by a window that the shell creates next to the cursor while you are dragging, and potentially a piece of description text describing the operation.

When the cursor moves over a potential drop target, that target gets a callback and can then freely interrogate the DataObject to determine whether it contains suitable data to process. It returns a result to the shell, which uses this to decide which cursor to display – one showing that the drop is available, or the no-entry sign showing that the target can’t handle the data being dragged. The target is also free to change the displayed text.

How do I do it then?

There are many useful blog posts out there that cover the rather arcane methods involved.

One ends up working at the level of COM, which is supported fairly well inside .NET. The only missing feature (as far as I know) is a way to detect that the COM object is no longer being used by external parties… in C++ one can keep an eye on the reference count, but in the .NET world there is no way to see whether the .NET-created CCW (COM callable wrapper) is still in use, so the only way to detect that the object is no longer needed is to add a finalizer to its type.

You also go back to the days of managing your own memory, having to call GlobalLock and GlobalUnlock yourself and allocate with Marshal.AllocHGlobal.

There are also a few extra interfaces you might want to implement – IAsyncOperation, for example, which allows the Shell to do a data transfer without blocking.

Getting all these parts to work together took some effort, and was helped a fair amount by a working implementation of some of this inside the Azure Explorer tool that we have made freely available for some time. We started with the Azure Explorer implementation and then merged in bits and pieces from various blog posts as we needed more functionality.

The good news is that you get cut-and-paste almost for free once you’ve done the work of implementing drag and drop, as that transfer process is also centred on the idea of a DataObject. The key difference is that you place the DataObject on the clipboard for other applications to find, and in order to enable your paste menu you may need to subscribe to clipboard change events to see if the clipboard contains a suitable format.
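The clipboard side can be sketched the same way. This is a deliberately simplified model – real code registers clipboard formats with the shell and listens for change notifications – with a plain dictionary standing in for the DataObject on the clipboard:

```python
clipboard = {}  # format name -> rendering, standing in for a DataObject on the clipboard

def copy_to_clipboard(renderings):
    # Copy replaces whatever was on the clipboard with our set of renderings.
    clipboard.clear()
    clipboard.update(renderings)

def can_paste(accepted_formats):
    # Enable the Paste menu item only if the clipboard offers a format we understand.
    return any(fmt in clipboard for fmt in accepted_formats)
```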

Was it worth it?

When you are dealing with the local file system and something like blob storage, which is typically displayed using a folder-and-files metaphor, it feels natural to drag and drop files around and have the system interpret this as a series of transfer operations.

Hopefully, our users will find it useful.

To try out this new feature download your free trial of Azure Management Studio now. Existing users can get the latest version from within Azure Management Studio (go to Help – Check for Updates).