Azure as part of a development process

To enable any proper development process, multiple deployment environments are essential. Historically, the biggest problems I have encountered are those related to creating, maintaining and sharing these environments. Although many of these problems can be addressed through a private cloud, our dev team has found Azure to be a great tool in the development lifecycle. I’ll be sharing some of the benefits, tips and lessons learnt with Azure.

The pains of non-cloud, developer environments

Mark works for a typical dev house. In his dev team, when a new project gets kicked off, the team’s first step is to organize the required servers for all the environments (typically dev, QA, UAT and production). Jill in IT is then requested to create these environments. Often, this turns into a long drawn-out process, with Jill asking questions such as “What specs are required for the servers?”. The question is impossible to answer at this point, because even the customer isn’t quite sure what they need yet. So Mark makes an educated guess and multiplies it by 2 just to be safe. He then informs an unimpressed Jill that the servers will also need access for the external developers and customers, and the process complicates further. After the environments are up, Jill’s burden continues with the maintenance and support of these environments, on top of the continuing support of Mark’s previous project’s servers. Worse yet, after all this energy and cost is expended at the beginning, Mark’s POC shows that the customer will be better off with Excel and the project is scrapped… just in time for Jill to have the first server ready.

Benefits of Azure

In response to similar pains, our development environments have recently all been created in Azure. In addition to fixing many obvious problems, this gave us the following benefits:

  • Typically we have 3 environments (dev, QA, UAT) up within an hour
  • The costs are spread through the project, rather than all upfront
  • It’s very easy to tear down and re-create servers to quickly resolve big issues e.g. corrupt servers
  • It’s simple to enable access for the customer and offsite developers
  • We were able to quickly and cheaply simulate various server configurations. These could then be used to predict production environment requirements
  • Table storage is great for logging
  • Remote monitoring of the environments becomes easier
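To illustrate the logging point: a common pattern for Azure Table storage logs is to partition by day and give rows a reverse-tick key so the newest entries sort first. The helper below is a minimal sketch of building such an entity; the names, key scheme and constants are illustrative assumptions, not our exact implementation.

```python
import datetime

# Illustrative sketch of a log entity for Azure Table storage.
# Partitioning by day keeps each day's logs in one partition; a
# "reverse-tick" RowKey makes the newest entries sort first.
MAX_TICKS = 10**19  # arbitrary large constant for the reverse-tick trick

def make_log_entity(source, level, message, now=None):
    now = now or datetime.datetime.now(datetime.timezone.utc)
    ticks = int(now.timestamp() * 10**7)  # 100ns "ticks", roughly .NET-style
    return {
        "PartitionKey": now.strftime("%Y%m%d"),  # one partition per day
        "RowKey": str(MAX_TICKS - ticks),        # newest-first ordering
        "Source": source,
        "Level": level,
        "Message": message,
    }

# The resulting dict is what you would hand to a Table storage client
# (e.g. create_entity in the Azure SDK for your platform).
entity = make_log_entity("web-role-0", "ERROR", "Payment gateway timeout",
                         now=datetime.datetime(2024, 1, 15, 12, 0, 0))
```

Because queries within a partition return rows in RowKey order, this layout makes “show me today’s most recent errors” a cheap range query.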

Using Azure for the demo environment also helps our customers better understand Azure and the benefits of a cloud-hosted solution. This often sways the customer towards using Azure for production, overcoming the reluctance some customers have about the cloud.

Our approach

We are still busy tweaking our process with the Azure development environments, so we have a couple of issues to iron out, e.g. we currently aren’t using our build server in this process, and deployments are scripted rather than automated.

Our process is currently as follows: after the initial kick-off meeting with the developers, all 3 basic environments are created in Azure. We are then ready for the first iteration. Throughout each day, the shared developer server is updated by each developer – this process needs improvement, and our planned re-introduction of the build server will certainly help. Twice per week, successful builds are packaged and deployed to the QA environment by the developer assigned the “Deployment manager” role. After the deployment has been successfully tested in the QA environment, the build is deployed to the UAT environment, which is used in the next iteration’s customer demo. Our iterations are typically 2 weeks. In addition to the above, we also create ad-hoc QA environments to simulate multi-server production configuration scenarios.
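The promotion flow above can be sketched as a simple gate: every packaged build goes to QA, and only a QA-verified build is promoted to UAT for the demo. The helper functions below are hypothetical stand-ins for our deployment scripts, not a real Azure API.

```python
# Illustrative sketch of the QA -> UAT promotion gate described above.
# deploy() stands in for the scripted deployment of a package to an
# Azure-hosted environment; here it just records what happened.

def deploy(package, environment, log):
    log.append(f"deployed {package} to {environment}")

def promote(package, qa_tests_passed, log):
    """Deploy to QA; only a QA-verified build reaches UAT for the demo."""
    deploy(package, "qa", log)
    if qa_tests_passed:
        deploy(package, "uat", log)
    return log

# A passing build flows through both environments.
log = promote("build-42.cspkg", qa_tests_passed=True, log=[])
```

The value of the gate is that UAT only ever sees builds that survived QA, which is what keeps the fortnightly customer demo stable.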

The above approach has led to very productive use of developer time, easy adaptation of the infrastructure to changing agile requirements, and stable builds for the UAT environment.

Gotchas, tips and tricks

It’s critical to understand that Azure isn’t exactly the same as your usual in-house environment. Some architectures work much better in Azure than others, and having your servers in Europe or the USA will have some impact. Luckily, most of these issues can be mitigated once they are understood.

For starters, apps that are chatty with the DB are going to have performance issues, so avoid architectures that encourage this. Our South African bandwidth is sometimes an issue, but wasn’t as disruptive as we expected, with the exception of uploading large customer data sets, which is both time-consuming and costly.

If the production solution won’t be hosted in Azure, the security options can get complicated. Try to stick to security configurations that will work both in Azure and in the customer’s environment. Check out https://www.windowsazure.com/en-us/develop/net/best-practices/security/ for Azure security guidance.

As our entire developer environment is now moving over to Azure, it’s critical that our Azure account is managed correctly. Early on, we had an incident where our account was suspended, resulting in a very stressed team for a few hours. A well-defined process for turning off inactive servers is a must, as mismanagement of the account can get expensive and wasteful.
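One way to make the “turn off inactive servers” process concrete is a small policy check that flags running servers past an idle threshold. The sketch below is illustrative; the threshold, server records and field names are assumptions, and a real version would feed the list from the Azure management API and then stop the flagged instances.

```python
import datetime

# Illustrative "stop idle servers" policy. IDLE_LIMIT and the record
# shape are assumptions, not taken from a real Azure API response.
IDLE_LIMIT = datetime.timedelta(hours=48)

def servers_to_stop(servers, now):
    """Return names of running servers idle for longer than IDLE_LIMIT."""
    return [
        s["name"]
        for s in servers
        if s["running"] and now - s["last_activity"] > IDLE_LIMIT
    ]

now = datetime.datetime(2024, 1, 10, 12, 0)
servers = [
    {"name": "proj-a-qa", "running": True,
     "last_activity": now - datetime.timedelta(hours=72)},
    {"name": "proj-b-dev", "running": True,
     "last_activity": now - datetime.timedelta(hours=2)},
]
stale = servers_to_stop(servers, now)
```

Running a check like this on a schedule, with the output reviewed by whoever owns the account, is enough to stop forgotten QA servers from quietly burning budget.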

In summary

The usual steep learning curve and pain associated with introducing radical changes and new tech wasn’t there with Azure. Azure is very intuitive and it all just works. Using it for our internal dev environments is a great way to solve internal IT problems and skill up the team for the coming wave, as enterprise customers begin adopting the platform.

For more detailed/technical information, contact Intervate Information Systems & Architecture consulting at craigh@intervate.com.

Why can I find anything on the internet… but I can’t find my office HR policy?

Like all IT nerds, I always have my smartphone within arm’s reach to answer any question. “How powerful was the computer on the first lunar lander?”, “What was the original Metallica line-up?” or “How does an MRI machine work?” – within 30 seconds I’ll have an answer. Yet if we have a question about the content generated by our very own organizations, we know we’re in for a long, painful process that may not yield results. As a consultant, I see this problem at almost every customer. In the age of knowledge, how can a problem this fundamental be so widespread?

The primary cause of the problem comes down to organizations’ failure to understand and embrace search as a primary method of navigating content and data. Search is often a second-class citizen, as most organizations focus on hierarchies to navigate their content. A content hierarchy is the folder structure on your computer, the site structure on your intranet or the menu/sub-menu structure on your site. For the amount of content in a typical company, a content hierarchy simply doesn’t scale as a navigation mechanism. There was once a time when even the internet was navigated in hierarchies: Yahoo’s big draw-card was its category/sub-category listing of websites. Ultimately, content becomes unmanageable this way. We’ve all experienced not being able to find a file on our own computers using the folder structure that we ourselves set up. Tagging is a step up from a pure folder structure, but ultimately fails for similar reasons.

This is where search steps in. The modern user is comfortable with search as their primary method of navigation – I even navigate my phone contacts this way. It’s worth noting, though, that simple search also starts failing with enough content. Modern search engines actually use more than just your search text: the search is done within a specific context of who you are, where you are, your previous searches etc., to ensure better results for you.

In contrast to organizational search, it’s worth taking a quick (simplified) look at what makes Google and Bing so effective at finding the content you are looking for.

1. You create a new blog or site

2. Google bots crawling the web index your content. This index records information such as the words associated with your content, and where they occur.

3. Google estimates your page authority, based primarily on the number and authority of links to your site.

Now, when a user performs a search, Google first determines which content matches the search text. It then ranks these results based on the text-match score and, more importantly, the page authority.
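That two-stage process – filter by text match, then rank by a blend of match score and authority – can be sketched in a few lines. The toy index, the crude occurrence-count matching and the weights below are all illustrative assumptions, nothing like the real Google pipeline.

```python
# Simplified sketch of match-then-rank. Pages that contain the query
# at all are kept, then ordered by a weighted mix of text-match score
# and page authority (weights are arbitrary for illustration).

def rank(query, index):
    q = query.lower()
    scored = []
    for page in index:
        text_score = page["text"].lower().count(q)  # crude text match
        if text_score:
            scored.append((0.3 * text_score + 0.7 * page["authority"],
                           page["url"]))
    scored.sort(reverse=True)
    return [url for _, url in scored]

index = [
    {"url": "a", "text": "azure azure", "authority": 0.1},
    {"url": "b", "text": "azure", "authority": 5.0},
    {"url": "c", "text": "unrelated", "authority": 9.0},
]
results = rank("azure", index)
```

Note how page "b" outranks "a" despite matching the text less often: its authority dominates, which is exactly the behaviour the paragraph above describes.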

Now, contrast this with enterprise search. The search crawls the content… but estimating page authority gets a little trickier. As it turns out, links to your content aren’t an effective way of scoring content within an organization. So what is needed to effectively rank those text matches, which could run to thousands of documents? This next step is where many enterprise search configurations fail. The answer is that the indexer needs to categorize the content and relate it to an author in a department. When a user performs a search, the engine must rank the results based not only on where the author sits in the organizational chart, but also on who performed the search. In other words, content that is closer to me in the organization (e.g. from my manager, subordinates or team) must rank higher. In addition, content that matches my interests or projects should also rank higher. This can only be achieved by purchasing a suitably powerful enterprise search solution and ensuring it’s configured and managed correctly.
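To make the personalisation idea concrete, here is a minimal sketch of scoring a document by organizational proximity and interest overlap. The toy org chart, the distance measure via the management chain, and the weights are all illustrative assumptions, not how any particular enterprise search product works internally.

```python
# Illustrative personalised ranking: documents from authors close to
# the searcher in the org chart, or matching her interests, score higher.

ORG = {  # employee -> manager (toy org chart)
    "alice": "carol", "bob": "carol", "carol": "dave", "erin": "dave",
}

def org_distance(a, b):
    """Steps between two people via their management chains."""
    def chain(person):
        path = [person]
        while person in ORG:
            person = ORG[person]
            path.append(person)
        return path
    ca, cb = chain(a), chain(b)
    common = set(ca) & set(cb)
    if not common:
        return len(ca) + len(cb)  # unrelated branches: large distance
    return min(ca.index(x) + cb.index(x) for x in common)

def score(doc, user, text_score, interests):
    proximity = 1.0 / (1 + org_distance(doc["author"], user))
    interest_bonus = len(set(doc["tags"]) & set(interests))
    return text_score + 2.0 * proximity + 0.5 * interest_bonus
```

With equal text scores, a document written by alice’s teammate bob outranks one by erin in another team, and shared tags push a document up further – the behaviour the paragraph above asks of a well-configured enterprise search.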

Every day, countless people spend hours of their lives creating knowledge and content for their enterprises: policy documents, tender responses, technical specifications, strategies etc. – only to have this content lost in a sea of noise the moment it’s saved to the intranet. Seems like a bit of a waste not to let that relevant content show up in your search, doesn’t it?