Enabling the mobile workforce

The short history of the mobile device in the enterprise has been a turbulent one. Consumer innovation in this space has happened at a pace that has left most enterprises uncomfortable at best and paralyzed at worst. Luckily, most organizations have started accepting this intrusive technology into their thinking. Some have even started successfully exposing their LOB systems to these new interfaces. Others have implemented marketing-driven apps for their customers. What’s missing is that few enterprises have embraced the inherent features of this technology to enhance and change the way they do business internally by enabling their mobile workforce.

Most businesses have a portion of their workforce that is mobile or out in the field: the field engineers, the salespeople, the fleet… the people whose work isn’t behind a desktop. These staff typically have mobile devices or can now be cheaply equipped with them. With the right apps and software, these devices can not only enhance productivity but often enable entirely new models of working that weren’t possible in the past. These new models of work can be radical market disrupters.

Mobile technology enables use cases that simply weren’t possible before. More specifically, solutions using this technology can:

  • track the time and location covered by the field agent, enabling new metrics
  • capture photographs and video to supplement traditional form capture
  • enable offline work in areas of low connectivity, syncing again when back online
  • navigate the agent to specific locations, e.g. broken infrastructure
  • capture data live when augmented with other sensors

Ultimately, every task and its resolution can carry supporting evidence from the GPS and camera.
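As a rough illustration of what that evidence might look like, here is a minimal sketch of a task record a field app could capture and queue for sync. The schema, names and values are entirely hypothetical, not a reference to any particular product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TaskEvidence:
    """Hypothetical record a field app might attach to a resolved task."""
    task_id: str
    agent_id: str
    latitude: float
    longitude: float
    captured_at: datetime
    photo_refs: list = field(default_factory=list)  # references to captured media
    synced: bool = False  # stays False while the device is offline

# A task resolved in the field, queued for upload when connectivity returns
evidence = TaskEvidence(
    task_id="T-1042",
    agent_id="agent-7",
    latitude=-33.92,
    longitude=18.42,
    captured_at=datetime.now(timezone.utc),
    photo_refs=["photos/broken-valve.jpg"],
)
print(evidence.synced)  # → False (not yet uploaded)
```

The point is simply that GPS, timestamp and media evidence all travel with the task itself, so the offline-and-sync model falls out naturally.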

Although these solutions can be ground-breaking, an organization’s strategy should address the risks that come with bleeding-edge solutions. A mobile-oriented solution needs to be rolled out with a device management product to ensure functionality such as remote wiping.

The mobile solution also needs to be sufficiently integrated. A phone with an app is hardly useful without appropriate integration into the LOB system. This integration can be costly with legacy systems, where integration often isn’t simple and clean. Currently, the market also has a diverse selection of devices, which can mean costly development as apps require redevelopment and customization for each platform. Although HTML5 is a way around this, it comes at a loss of functionality, e.g. apps that need to function without connectivity aren’t best done with HTML5.

The reality is that these market-disrupter mobile solutions are becoming commonplace. Many businesses have unexpectedly been put in jeopardy by some small, unknown start-up. It is critical that an organization’s mobile strategy goes beyond simply managing BYOD. However, given these potential pitfalls, a knee-jerk app strategy will be a waste of resources. A prudent first move is a simple proof of concept (POC) focused on a few high-value use cases. Take the first step… but make it lean and do it soon.

The science of debugging

The perception is that software development is developers creating features by writing code. The reality is different: as developers, we actually spend most of our time finding and fixing bugs. Feature development is just what we do in those little breaks between debugging.
For something that occupies most of our time, you’d think we’d be very good at it… but that’s often not the case. The sad thing is that we often don’t use humanity’s best tool for acquiring knowledge: the scientific method. When people say science, thoughts often go to beakers, lab coats and weird hair, when actually science is just a series of steps used to acquire knowledge. Each step is critical in the process.
The steps are as follows (shortened for relevance):
  1. Ask a question – obviously, we can’t find an answer if we don’t have a question.
  2. Do background research – someone may have already answered our question or a similar one.
  3. Construct a hypothesis – a hypothesis is a potential answer. The hypothesis must be unpacked into a series of predictions.
  4. Test the predictions with an experiment – critically, we test to prove them WRONG. The sooner a hypothesis is proved wrong, the sooner we stop wasting time on it.
  5. Analyse the data and draw a conclusion – spend time on the results; nothing wastes more time than misunderstood results.
  6. Communicate the results – this allows others to validate the methodology and test the conclusions.
  7. …and repeat.
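The steps above can be sketched as a loop. This is only an illustrative caricature in code form (all the names and the toy hypotheses are made up), but it shows the key mechanic: check for an existing answer first, then try to kill each hypothesis via its predictions:

```python
def debug_scientifically(question, research, hypotheses, experiment):
    """Minimal sketch of the loop above.
    research(question) returns a known answer or None (step 2);
    hypotheses yields (hypothesis, predictions) pairs (step 3);
    experiment(prediction) returns True if the prediction holds (step 4)."""
    known = research(question)
    if known is not None:
        return known  # someone already solved it: stop here
    for hypothesis, predictions in hypotheses:
        # Try to prove the predictions WRONG; one failure kills the hypothesis
        if all(experiment(p) for p in predictions):
            return hypothesis  # step 5: the results support this answer
    return None  # step 7: repeat with fresh hypotheses

# Toy run with two made-up hypotheses, one of which survives its test
cause = debug_scientifically(
    "why does login time out?",
    research=lambda q: None,  # nothing found on the issue tracker
    hypotheses=[
        ("DNS misconfigured", [lambda: False]),        # prediction fails
        ("connection pool exhausted", [lambda: True]), # prediction holds
    ],
    experiment=lambda p: p(),
)
print(cause)  # → connection pool exhausted
```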
Back to debugging…
When we have a bug, we have a question: “what is causing this sh*t to break?!” The scientific method is the most effective route to this knowledge if applied as a simple heuristic. The first step is background research: Google the error code, the method and other relevant information. This is done right up front; we shouldn’t waste time on a problem that someone has already solved.
Based on our findings (or lack thereof), we construct a hypothesis – a falsifiable hypothesis with testable predictions. If our hypothesis is unfalsifiable, it is too abstract and should be decomposed into other hypotheses. Importantly, all assumptions are just camouflaged hypotheses. If our hypothesis is both unfalsifiable and can’t be decomposed, it’s an answer that isn’t worth wasting further energy on, as there’s no knowledge to be gained. For example: as nice as it feels to vent about it being Windows’ fault, that’s not a particularly useful hypothesis.
Next, we run an experiment, i.e. prod the application in a way that tests the hypothesis and its predictions, and then analyse the results. Don’t argue with the results; let them lead us. If the application didn’t behave as expected, we’ve just learned something valuable about our assumptions.
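A classic experiment of this kind is a bisection: the hypothesis “the bug was introduced by a single revision” predicts that some revision is the first to fail a test, and a binary search finds it. This is a generic sketch of the idea (the revision numbers and the failure predicate are made up for illustration), assuming the failure persists once introduced:

```python
def bisect_first_bad(revisions, is_bad):
    """Binary-search the first revision where a test starts failing.
    Assumes revisions are ordered and the bug, once introduced, persists."""
    lo, hi = 0, len(revisions) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if is_bad(revisions[mid]):
            hi = mid          # bug already present: look earlier
        else:
            lo = mid + 1      # still good: bug introduced later
    return revisions[lo]

# Toy example: ten revisions, bug introduced at revision 6
first_bad = bisect_first_bad(list(range(10)), lambda r: r >= 6)
print(first_bad)  # → 6
```

Each probe is an experiment whose result we simply accept and let guide the next probe, which is exactly the discipline described above.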
Lastly, communicate the results. Most aha moments come from a simple chat with a fellow developer who challenges our biases.
It’s science, bitches!

Azure as part of a development process

To enable any proper development process, multiple deployment environments are essential. Historically, the biggest problems I have encountered here are those related to creating, maintaining and sharing these environments. Although many of these problems can be addressed through a private cloud, our dev team has found Azure to be a great tool in the development lifecycle. I’ll share some of the benefits, tips and lessons learnt with Azure.

The pains of non-cloud, developer environments

Mark works for a typical dev house. In his dev team, when a new project is kicked off, the first step is to organize the required servers for all the environments (typically dev, QA, UAT and production). Jill in IT is then asked to create these environments. Often, this results in a long drawn-out process, with Jill asking questions such as “What specs are required for the servers?”. This question is ridiculous at this point, because even the customer isn’t quite sure what they need yet. So Mark makes an educated guess and multiplies it by two just to be safe. He then informs an unimpressed Jill that the servers will also need access for his external developers and customers, and the process gets more complicated still. After the environments are up, Jill’s burden continues with the maintenance and support of these environments, on top of the continuing support of Mark’s previous projects’ servers. Worse yet, after all this energy and cost is expended at the beginning, Mark’s POC shows that the customer would be better off with Excel and the project is scrapped… just in time for Jill to have the first server ready.

Benefits of Azure

In response to similar pains, our development environments have recently all been created in Azure. In addition to fixing many of the obvious pains, we saw the following benefits:

  • Typically we have 3 environments (dev, QA, UAT) up within an hour
  • The costs are spread through the project, rather than all upfront
  • It’s very easy to tear down and re-create servers to quickly resolve big issues e.g. corrupt servers
  • It’s simple to enable access for the customer and offsite developers
  • We were able to quickly and cheaply simulate various server configurations. These could then be used to predict production environment requirements
  • Table storage is great for logging
  • Remote monitoring of the environments becomes easier

Using Azure for the demo environment also helps our customers better understand Azure and the benefits of a cloud-hosted solution. This often sways the customer into using Azure for production, overcoming the reluctance some customers have towards the cloud.

Our approach

We are still tweaking our process with the Azure development environments, so we have a couple of issues to iron out, e.g. we currently aren’t using our build server in this process, and deployments aren’t automated but are scripted.

Our process is currently as follows. After the initial kick-off meeting with the developers, all 3 basic environments are created in Azure, and we are ready for the first iteration. Throughout each day, the shared developer server is updated by each developer – this process needs improvement, and our planned re-introduction of the build server will certainly help. Successful builds are packaged and deployed to the QA environment by the developer assigned the “deployment manager” role; this happens twice per week. After the deployment has been successfully tested in the QA environment, the build is deployed to the UAT environment, which is used in the next iteration’s customer demo. Our iterations are typically 2 weeks. In addition to the above, we also create ad-hoc QA environments to simulate multi-server production configurations.
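The promotion rule underneath that process is simple enough to sketch. This is a toy model of the dev → QA → UAT flow, not our actual scripts; the function and stage names are illustrative:

```python
# Hypothetical sketch of the build-promotion flow: a build only moves to
# the next environment once it has been tested in the current one.
PIPELINE = ["dev", "qa", "uat"]

def promote(build, current_env, tests_passed):
    """Return the environment the build should live in next."""
    idx = PIPELINE.index(current_env)
    if not tests_passed:
        return current_env          # stay put: fix and redeploy
    if idx + 1 < len(PIPELINE):
        return PIPELINE[idx + 1]    # advance one stage
    return current_env              # already in UAT, shown at the iteration demo

print(promote("build-17", "qa", tests_passed=True))   # → uat
print(promote("build-17", "dev", tests_passed=False)) # → dev
```

The ad-hoc QA environments mentioned above would simply be extra entries sitting outside this linear pipeline.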

The above approach has led to very productive use of developer time, easy adaptation of the infrastructure to agile requirements, and stable builds for the UAT environment.

Gotchas, tips and tricks

It’s critical to understand that Azure isn’t exactly the same as your usual in-house environment. Some architectures work much better in Azure than others, and having your servers in Europe or the USA will have some impact. Luckily, understanding the issues mostly mitigates them.

For starters, apps that are chatty with the DB are going to have performance issues, so avoid architectures that encourage this. Our South African bandwidth is sometimes an issue, but it wasn’t as disruptive as we expected, with the exception of uploading large customer data sets, which is both time-consuming and costly.
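The “chatty” anti-pattern is easy to see in miniature. The sketch below uses an in-memory SQLite table purely for illustration: the row-at-a-time loop issues one query per record, while the set-based version fetches the same data in a single round trip. When the database sits in a remote data centre, every round trip adds latency, so the difference compounds quickly:

```python
import sqlite3

# Toy table standing in for a remote database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, customer TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, f"customer-{i % 3}") for i in range(9)])

# Chatty: one query (one round trip) per id
chatty = [conn.execute("SELECT customer FROM orders WHERE id = ?",
                       (i,)).fetchone()[0]
          for i in range(9)]

# Set-based: one query for the whole result set
batched = [row[0] for row in
           conn.execute("SELECT customer FROM orders WHERE id < 9 ORDER BY id")]

assert chatty == batched  # same data, a fraction of the round trips
```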

If the production solution won’t be hosted in Azure, the security options can get complicated. Try to keep to security configurations that will work in both Azure and the customer’s environment. Check out https://www.windowsazure.com/en-us/develop/net/best-practices/security/ for Azure security guidance.

As our entire developer environment is now moving over to Azure, it’s critical that the Azure account is managed correctly. Early on, we had an incident where our account was suspended, resulting in a very stressed team for a few hours. A well-defined process for turning off inactive servers is a must, as mismanagement of the account gets expensive and wasteful.

In summary

The usual steep learning curve and pain associated with introducing radical change and new tech weren’t there with Azure. Azure is very intuitive and it all just works. Using it for our internal dev environments is a great way to solve internal IT problems and skill up the team for the coming wave as enterprise customers begin adopting this platform.

For more detailed/technical information, contact Intervate Information Systems & Architecture consulting at craigh@intervate.com.

Why can I find anything on the internet… but I can’t find my office HR policy?

Like all IT nerds, I keep my smartphone within arm’s reach to answer any question. “How powerful was the computer on the first lunar lander?”, “What was the original Metallica line-up?” or “How does an MRI machine work?” Within 30 seconds I’ll have an answer. Yet if we have a question about the content generated by our very own organizations, we know we’re in for a long, painful process that may not yield results. As a consultant, I see this problem at almost every customer. In the age of knowledge, how can a problem this fundamental be so widespread?

The primary cause of the problem comes down to organizations’ failure to understand and embrace search as a primary method of navigating content and data. Search is often a second-class citizen for navigating the organization’s content, as most organizations focus on hierarchies instead. A content hierarchy is the folder structure on your computer, the site structure on your intranet or the menu/sub-menu structure on your site. For the amount of content in a typical company, a content hierarchy simply doesn’t scale as a navigation mechanism. There was once a time when even the internet was navigated in hierarchies: Yahoo’s big draw-card was its category/sub-category listing of websites. Ultimately, content becomes unmanageable this way. We’ve all experienced not being able to find a file on our own computer using the folder structure that we ourselves set up. Tagging is a step up from a pure folder structure but ultimately fails for similar reasons.

This is where search steps in. The modern user is comfortable with search as their primary method of navigation – I even navigate my phone contacts this way. It’s worth noting, though, that simple search also starts failing with enough content. Modern search actually uses more than just your search text. The search is done within a specific context – who you are, where you are, your previous searches etc. – to ensure better results for you.

In contrast to organizational search, it’s worth taking a quick (simplified) look at what makes Google and Bing so effective at finding the content you are looking for.

1. You create a new blog or site

2. Google’s bots crawling the web index your content. This index contains information such as which words and locations are associated with your content.

3. Google estimates your page authority, based primarily on the number and authority of links to your site.

Now, when a user performs a search, Google first determines which content matches the search text. It then ranks these results based on the score of the text match and, more importantly, the page authority.
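In toy form, that two-stage ranking looks something like the sketch below. The scores and authority values are made up for illustration; real engines combine hundreds of signals, but the shape of the computation is the same:

```python
def rank(results, authority):
    """Toy two-stage ranking: results have already passed the text-match
    filter; order them by match score weighted by page authority."""
    return sorted(results,
                  key=lambda r: r["text_score"] * authority[r["url"]],
                  reverse=True)

# Illustrative pages: the blog matches the text slightly better,
# but the heavily linked-to docs site has far more authority
pages = [{"url": "blog.example", "text_score": 0.9},
         {"url": "docs.example", "text_score": 0.7}]
authority = {"blog.example": 0.1, "docs.example": 0.8}

ranked = rank(pages, authority)
print([p["url"] for p in ranked])  # docs.example first: 0.56 beats 0.09
```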

Now, contrast this with enterprise search. The search engine crawls the content… but estimating page authority gets a little trickier. As it turns out, links to your content aren’t an effective way of scoring content within an organization. So what is needed to effectively rank those text matches, which could run to thousands of documents? This next step is where many enterprise search configurations fail. The answer is that the indexer needs to categorize the content and relate it to an author in a department. When a user performs a search, the engine must rank the results based not only on where the author sits in the organizational chart but also on who performed the search. In other words, content closer to me in the organization (e.g. from my manager, subordinates or team) must rank higher. In addition, content that matches my interests or projects should also rank higher. This can only be achieved by purchasing a suitably powerful enterprise search solution and ensuring it’s configured and managed correctly.
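The organizational-proximity idea can be sketched the same way. This is a hedged caricature, not how any specific enterprise search product scores results: the field names and weights are invented, and a real engine would use the full org-chart distance rather than a flat same-department boost:

```python
def enterprise_rank(results, searcher_dept, searcher_interests):
    """Toy ranking: boost content whose author sits close to the searcher
    in the org chart, or whose topics match the searcher's interests."""
    def score(doc):
        s = doc["text_score"]
        if doc["author_dept"] == searcher_dept:
            s += 0.5  # same department/team ranks higher
        s += 0.2 * len(searcher_interests & doc["topics"])  # shared interests
        return s
    return sorted(results, key=score, reverse=True)

docs = [
    {"title": "HR policy", "text_score": 0.6,
     "author_dept": "HR", "topics": {"policy"}},
    {"title": "Old memo", "text_score": 0.7,
     "author_dept": "Finance", "topics": set()},
]

# An HR employee interested in policy sees the HR policy first,
# even though the memo matched the search text slightly better
top = enterprise_rank(docs, searcher_dept="HR",
                      searcher_interests={"policy"})[0]
print(top["title"])  # → HR policy
```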

Every day, countless people spend hours of their lives creating knowledge and content for their enterprises: policy documents, tender responses, technical specifications, strategies etc., only to have this content lost in a sea of noise the moment it’s saved to the intranet. Seems like a bit of a waste not to let their relevant content show up in your search, doesn’t it?