How to Succeed with Legacy Hardware

 

Written by Oren Eini, CEO, Hibernating Rhinos

Legacy is an interesting term. In general usage, it refers to something of value that has been passed down through the generations. In technology, legacy is also something that was passed down to us, but it is usually neither wanted nor welcome.

One of the hardest parts of working with legacy systems is that the infrastructure is often poorly understood and usually neglected, despite running the most critical parts of an organization. For example, COBOL (the programming language whose heyday ended in the early eighties) still runs most of the world's financial transactions, and IBM mainframes remain the beating heart and guiding hand of many organizations.

There is no need to look far in search of legacy systems and hardware. It used to be the case that by the time you purchased a new computer, it was already obsolete. Moore's law, commonly paraphrased as the performance of a central processing unit (CPU) doubling every 18 months, held for decades, roughly from 1975 to 2012. That led to challenges for organizations large and small trying to budget their computing needs.
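
To put that doubling rate in perspective, here is a quick back-of-the-envelope calculation (a sketch in Python, using the 18-month figure and the 1975-2012 window quoted above as assumptions):

    # Back-of-the-envelope: cumulative speedup if performance doubles every 18 months.
    # The dates and the doubling period come from the paraphrase of Moore's law above;
    # treat the result as an illustration of the trend, not a benchmark.
    years = 2012 - 1975              # 37 years of the doubling era
    doublings = years / 1.5          # one doubling every 18 months
    speedup = 2 ** doublings
    print(f"{doublings:.1f} doublings -> roughly {speedup:,.0f}x faster")

Whatever the exact numbers, the compounding is what mattered: the hardware you bought was outclassed within a few years simply by waiting.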

Effectively, businesses had to replace all their machines on a three-year cycle. Otherwise, nothing modern could - or would - really work. This also bred a certain kind of laziness in developers. Just by waiting for the passage of time, machines would be updated and software would run faster. Writing efficient software is hard; it takes more time and effort than doing the minimum amount of work that is absolutely necessary and letting the hardware handle the rest.

Moore's law eventually ran into the limits of physics, and it is no longer the case that you can double your performance simply by buying new hardware. Today, the difference between a new machine and one that is five years old is minuscule compared with the leap you would see between the specs of a machine from 2000 and one from 2005.

The New Rules of Performance

If raw computing power is at the end of the road, how do you continue to improve performance? You need to design and build systems that are efficient in their use of resources. This costs more and requires greater expertise, but decades of relying on hardware getting faster mean that there are a lot of performance optimizations available if you are willing to invest the time. And why would you want to dedicate the time and effort to building more complex (albeit efficient) systems?

Because you no longer expect to throw away perfectly good machines once they are past some arbitrary due date. You can reuse existing systems and hardware for much longer periods of time, greatly reducing your expenditure. This requires a shift in what you demand from your software - a push toward efficiency.
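
What does that push toward efficiency look like in practice? Here is a minimal, hypothetical sketch (not taken from any particular product) of the kind of trade-off involved - doing a little more work up front so the program needs far fewer resources at runtime:

    # Hypothetical illustration: two ways to count error lines in a large log file.
    # The wasteful version holds the whole file in memory at once; the streaming
    # version reads one line at a time and runs in constant memory, so it stays
    # comfortable on a much smaller (or older) machine.

    def count_errors_wasteful(path):
        lines = open(path).readlines()       # entire file loaded into memory
        return len([l for l in lines if "ERROR" in l])

    def count_errors_streaming(path):
        count = 0
        with open(path) as f:                # one line in memory at a time
            for line in f:
                if "ERROR" in line:
                    count += 1
        return count

The streaming version takes slightly more care to write, but its memory footprint stays flat no matter how large the input grows - and that is exactly the property that lets software keep running well on hardware you would otherwise have retired.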

Being efficient means making maximum use of the resources available to your organization. It's about getting more out of the older hardware you already have by no longer dedicating a full machine to a single application, but instead running many on it. The good news is that there is more efficient software available for these scenarios today than ever before.

Modern software allows us to host applications and services on smaller machines. These don't have to be physical machines, of course. Server-side software typically gets the best hardware in the organization (and suffers less pressure to be optimized as a result); to be efficient there, consider taking one big server and running many small virtual machines (VMs) on it at much higher density (and thus at reduced cost).
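
As a rough illustration of that density argument, here is a hypothetical capacity calculation (the host and VM sizes below are made-up assumptions, not sizing guidance):

    # Hypothetical consolidation math: how many small VMs fit on one big server?
    # All figures are illustrative assumptions, not recommendations.
    server = {"cores": 64, "ram_gb": 512}
    vm     = {"cores": 2,  "ram_gb": 8}      # a small, efficient application VM
    overhead = 0.10                          # reserve ~10% for the hypervisor

    usable_cores = server["cores"] * (1 - overhead)
    usable_ram   = server["ram_gb"] * (1 - overhead)

    vms_by_cpu = usable_cores // vm["cores"]
    vms_by_ram = usable_ram // vm["ram_gb"]
    print(f"Fits roughly {int(min(vms_by_cpu, vms_by_ram))} VMs on one host")

Real-world sizing would also account for CPU overcommit, storage and network, but even this crude arithmetic shows why one well-specified host can replace a rack of machines that were each dedicated to a single application.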

From my own experience, I'm constantly looking to build more efficient software in order to gain better performance at higher scale. It can take up to two years with a dedicated team rooting out inefficiencies in the software and in practices to get to a comfortable place. However, the work pays off when you're able to gain a ten-fold increase in performance across the board (with some operations showing an even higher boost), which brings several unintended, but very welcome, consequences.

With more efficient software, you're able to run on smaller and less powerful machines; a Raspberry Pi, or an equivalent industrial system-on-chip board, literally fits in the palm of your hand. As an added bonus, they're inexpensive, almost to the point where you can consider these machines disposable.

And, of course, the ability to run software with fewer resources (and still with better performance than before) translates directly into reduced operational costs for users. The more efficiently the software runs, the less memory and computing power it needs - and, just like legacy systems, those are limited resources you have to make the most of.


About the Author

 

Oren Eini, CEO and founder of Hibernating Rhinos, has more than 20 years of experience in the development world with a strong focus on the Microsoft and .NET ecosystem. Recognized as one of Microsoft's Most Valuable Professionals since 2007, Oren is also the author of "DSLs in Boo: Domain Specific Languages in .NET." He frequently speaks at industry conferences such as DevTeach, JAOO, QCon, Oredev, NDC, Yow! and Progressive.NET. An avid blogger, he can also be found writing under his pseudonym, Ayende Rahien, at http://www.ayende.com/Blog/.

Published Monday, August 13, 2018 7:33 AM by David Marshall