Why is Application Management Still So Hard?

Welcome to Virtualization and Beyond

Contributed by Michael Thompson, Principal, Product Marketing Management, SolarWinds

Computers and applications, along with some form of management, have been in general use by businesses for well over 30 years. In that time we have invented the Internet, GPS and the modern mobile phone, but we still haven't figured out how to make sure our applications are always available and performing well.

So, why is it so hard? Well, many people might lay much of the blame on Microsoft's operating systems, but it isn't as if people running Linux or iOS never have problems. In reality, there are a million ways to answer the question, but at a high level I'd say we are victims of our own success.

Within IT there has been intense innovation and extreme competition from thousands of vendors and stakeholders. This extremely dynamic environment has produced rapid advances in technology, but often in proprietary silos. By the time people start figuring out how to make the pieces work together, the technology has moved on.

This leads to the next logical question: What is the best approach today to drive toward better and easier application availability and performance? There is no one answer, but there are a few categories of approaches that are probably the most likely routes to progress, although almost all of them have substantial drawbacks. Let's take a closer look.

Single vendor: A good model for this approach is probably Apple, albeit in the consumer electronics space. By maintaining tight control of all aspects of their environment, they have been able to provide a seamless and simplified user experience. Microsoft is probably the vendor closest to providing such an end-to-end solution for business, but others like Citrix or Amazon can also claim some aspects of this approach, even if on a more limited scope. Implementing as homogeneous an environment as possible clearly has the ability to reduce complexity and improve integration. The obvious downside, however, is vendor lock-in and the fact that you will often be stuck with non-best-of-breed capabilities.

Industry standards/open source: In some instances, such as with the SNMP standard in the networking realm, industry standards have worked very well to help increase simplification and standardization across vendors and technologies. Unfortunately, this is the exception rather than the rule, and even then it remains fairly siloed. Open source initiatives aren't exactly the same as industry standards, but they can function in a similar way to provide a common approach across technologies and vendors. For example, Linux and OpenStack provide alternatives that allow a form of standardization across vendors and technologies. The drawback is that this approach typically requires a relatively high skill level, and you are usually on your own to put the pieces together or correct any problems that come up.
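To make the value of a standard concrete, here is a minimal sketch (it assumes the open source pysnmp library, and the hostname and community string are placeholders) that reads the standard sysUpTime object defined in SNMPv2-MIB. Because that object is part of the standard, the same few lines work against switches, routers or servers from any vendor that speaks SNMP:

    # Minimal SNMP GET of the standard sysUpTime object (SNMPv2-MIB).
    # Assumes: pip install pysnmp; host and community string are placeholders.
    from pysnmp.hlapi import (
        SnmpEngine, CommunityData, UdpTransportTarget, ContextData,
        ObjectType, ObjectIdentity, getCmd,
    )

    def get_uptime(host, community="public"):
        error_indication, error_status, _, var_binds = next(
            getCmd(
                SnmpEngine(),
                CommunityData(community, mpModel=1),         # SNMP v2c
                UdpTransportTarget((host, 161), timeout=2),
                ContextData(),
                ObjectType(ObjectIdentity("SNMPv2-MIB", "sysUpTime", 0)),
            )
        )
        if error_indication or error_status:
            raise RuntimeError(str(error_indication or error_status))
        return var_binds[0][1]  # uptime in hundredths of a second

    print(get_uptime("192.0.2.10"))

The point isn't the particular library; it's that a vendor-neutral standard lets one piece of code (or one monitoring tool) cover many different devices.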

Technology simplification: This approach involves using technology to drive simplicity into the user experience. By hiding the underlying complexity and ensuring that the most important information is front and center, it is possible to have a simpler experience. Integrating or working between silos also becomes much easier. This is in part the approach companies like SolarWinds and even many software as a service (SaaS) companies like Salesforce use to provide advanced capabilities with greater ease of use. As with anything, though, there are some tradeoffs. Simplification often requires a focus on the major use cases in a given area by concentrating on the stuff 80-90 percent of the people need and ignoring the remaining 10-20 percent. That's great if you are in the 80-90 percent, but not as good if you are in the 10-20 percent that have extremely specialized requirements. In the case of SaaS, there can also be a concern around control as your applications and data sit in someone else's data center.

Standardization and redundancy at scale: This is the broad approach I'd say some of the mass-scale technology providers, such as Google or eBay, tend to use. You have extremely standardized, off-the-shelf components that are completely redundant. If a blade breaks, take it out and put another one in. Forget trying to fix it online. It really is amazing the availability and uptime these vendors have achieved with this approach. Unfortunately, not every credit union, regional health care system or manufacturing company can achieve the scale and business economics to implement this approach.
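In software terms, the idea looks something like the following sketch: identical, interchangeable backends behind a trivial health check, where a dead node is simply skipped (and eventually replaced) rather than repaired in place. The hostnames are placeholders, and a real load balancer obviously does far more than this:

    # Minimal sketch: route requests only to healthy, interchangeable nodes.
    # Hostnames are placeholders; a failed node is skipped, not fixed in place.
    import random
    import socket

    BACKENDS = ["app-01.example.com", "app-02.example.com", "app-03.example.com"]

    def is_healthy(host, port=80, timeout=1.0):
        """Treat a node as healthy if it accepts a TCP connection."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def pick_backend():
        healthy = [h for h in BACKENDS if is_healthy(h)]
        if not healthy:
            raise RuntimeError("no healthy backends available")
        return random.choice(healthy)

    print(pick_backend())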

With each of these approaches having its own benefits and drawbacks, it's easy to see why no single approach has really come to the forefront. So, what is the ordinary IT organization supposed to do? While most IT teams can't completely solve the problem with just one approach, a hybrid approach can prove valuable. By picking one or two of these approaches and implementing them to the degree feasible, it is possible to get improved application availability and performance without a huge increase in staff or expense. The reality is you'll probably have to accept the 80/20 rule previously mentioned: you typically won't be able to implement one, or even a combination of two, approaches to 100 percent, but if you can get to 80 percent, you'll have made significant progress.

Do you have any insights to share as a result of your successes or failures in implementing any of these approaches? Are there other approaches I didn't mention that you've had success with? If so, please leave a comment.

##

If you missed it, make sure to read the original post announcing the Virtualization and Beyond series.

About the Author

Michael Thompson is a principal for product marketing management at SolarWinds. Prior to this role, he served as director of business strategy for virtualization and storage. Michael has worked in the IT management industry for more than 11 years, including leading product management teams and portfolios in the storage and virtualization/cloud spaces for IBM. He holds a master of business administration and a bachelor's degree in chemical engineering. 

Published Thursday, February 20, 2014 6:39 AM by David Marshall