By Tony Perez, Cloud Solutions Architect at Skytap
During my day job, I talk with many customers
who have high hopes for moving their IT infrastructure to the cloud, but
believe it's impossible to include legacy workloads in that migration. They
believe that these applications, which are typically running on IBM Power
server hardware, are stuck in the data center. When I ask why these
applications can't be moved to the cloud, I usually get one of the following
answers:
- "My application has hard-coded IP addresses compiled into the source code."
- "My application is based on IBM's AS/400 (more recently called IBM i) or IBM AIX."
- "There is no longer anyone around who knows about the code or applications that are still running."
While these factors do make the migration
process more complicated, they're not the deal-breakers many people assume they
are. Let's go through each one and discuss the alternatives and workarounds
that make the cloud possible even for IBM Power applications.
## "My application has hard-coded IP addresses compiled into the source code."
This issue, which is surprisingly common, is a
holdover from the days when no one thought a server could ever run anywhere
except in the data center. Today, applications referencing other servers or
services don't hard-code those references into the application code. Instead,
they use DNS or a similar naming service to connect to other servers, or use a
programmatic variable in the code that reads the actual external reference from
a data source that can be updated without changing the application code itself.
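As a minimal sketch of that modern pattern (the environment-variable name `INVENTORY_HOST` and the fallback host are my own illustrative choices, not taken from any particular application):

```python
import os
import socket

# A modern alternative to compiled-in IP addresses: resolve the
# dependency's address at startup from configuration plus DNS.
# "INVENTORY_HOST" is a hypothetical setting name used for illustration.
def resolve_dependency(env_var: str, default_host: str) -> str:
    host = os.environ.get(env_var, default_host)   # configuration layer
    return socket.gethostbyname(host)              # naming-service layer

if __name__ == "__main__":
    # When INVENTORY_HOST isn't set, fall back to localhost for the demo.
    print(resolve_dependency("INVENTORY_HOST", "localhost"))
```

Because the address is looked up at runtime, moving the dependency to a new subnet means updating a DNS record or an environment variable, not recompiling the application.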
But some legacy applications have a code base that is 30+ years old. When that code was written, no one used naming services or built variable abstraction into the core logic. If one server had to talk to another
server, the IP address of that external server was baked right into the source
code and then compiled into the executable object that became the running
application. When everything ran on-prem, this solution worked fine. But all
those connections break when moved to the cloud.
This is a difficult problem, but there are
workarounds that allow these applications to be moved to the cloud
successfully. Here's an example from one of my recent customers. The customer's corporate data center was shutting down in favor of Microsoft Azure, but it had a substantial application running on AIX that contained hard-coded IP addresses. The application was based on a software package from a vendor that no longer exists. The customer could not do a "big bang" migration and magically move all the application components in a single maintenance window. In fact, the migration would need to take weeks.
The customer ended up using a VXLAN solution that extended some of the existing subnets so they existed and were active in both Azure and on-prem simultaneously. That allowed the team to incrementally move LPARs with hard-coded IP addresses from on-prem to "the cloud" without having to change anything. VXLAN-style subnet extension is not recommended for long-term production usage, but as a temporary stop-gap measure to facilitate a migration to the cloud, it worked perfectly. Processes like this can enable applications to be carefully migrated to the cloud with hard-coded IP addresses intact.
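For a rough idea of what a Layer-2 extension involves, here's how a Linux-based VXLAN tunnel endpoint might be configured with the standard iproute2 tools. Every name, ID, and address below is an illustrative placeholder, and a real migration would more likely use the cloud provider's or a network vendor's subnet-extension appliance than hand-built tunnels like this:

```shell
#!/bin/sh
# Sketch of a Layer-2 subnet extension using VXLAN (iproute2).
# All interface names, IDs, and addresses are illustrative placeholders.

# Create a VXLAN interface that tunnels Ethernet frames over UDP port
# 4789 between the on-prem endpoint and its peer in the cloud.
ip link add vxlan100 type vxlan id 100 \
    local 10.0.0.5 remote 203.0.113.10 dstport 4789

# Bridge the tunnel with the physical NIC that carries the legacy
# subnet, so machines on either side see one contiguous L2 segment.
ip link add br-legacy type bridge
ip link set eth1 master br-legacy
ip link set vxlan100 master br-legacy

ip link set up dev vxlan100
ip link set up dev br-legacy
```

Because both sides now share one broadcast domain, an LPAR keeps its hard-coded IP address when it moves; the tunnel is torn down once the migration completes.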
## "My application is based on IBM's AS/400 or IBM AIX"
Until a few years ago, these applications simply couldn't be run in the cloud because the major public cloud providers use a different chipset and architecture (x86 rather than IBM Power) than what IBM i (AS/400) and AIX run on. But now Microsoft Azure, IBM, Google, and a host of smaller players all offer some type of infrastructure service that allows IBM i or AIX applications to be lifted and shifted to the cloud unchanged. Each provider offers a slightly different range of capabilities, but in the end, all of them enable users to move legacy applications based on IBM i or AIX to the cloud without "substantially" changing their original architecture. What counts as "substantial" might vary from vendor to vendor, but they all support a "lift and shift" approach rather than re-architecting.
The service I'm most familiar with is Microsoft Azure and its ability to host IBM Power workloads inside an Azure data center. My default presumption is that it's possible to move what is
known as an "LPAR" (an IBM Power Virtual Machine) to the cloud and not have to
change the application architecture. It is true that certain operational
techniques must change in the cloud. For instance, there is no such thing as a
"physical tape drive" in the cloud. But as far as the applications go, the default mindset is "lift and shift," not rewrite.
## "There is no longer anyone around who knows about the code or applications that are still running."
This is another challenging issue, and it will only get more acute as more engineers who specialize in IBM Power retire.
Organizations are afraid to migrate legacy applications or make any changes to
their code without an expert on hand. In this case, rebuilding the entire
application from scratch isn't viable. Another option to decrease the
complexity and risk of a migration project is to apply a "strangler" pattern.
As discussed earlier, it's possible to lift
and shift complicated IBM Power-based applications to the cloud. Once in the
cloud, they exist in the same physical data center as other new applications
and services that you are creating. This means that the legacy applications
will have low latency talking to the other pieces of the application landscape.
Now that everything is under one roof, it releases a little bit of the pressure
to deal with legacy components right away. The team can slowly replace legacy
application services piece by piece (this is the "Strangler Fig" pattern described by Martin Fowler). You can also switch commodity services to leverage what the cloud offers. For instance, if a legacy IBM i application written in COBOL or RPG leveraged a file server to store documents, you can intercept that file-server path and mount an NFS location backed by Azure Files, Azure Blob Storage, or a comparable service in Google Cloud. The team can chip away at the legacy application in a low-risk,
low-cost model rather than attempting a total rewrite that is high risk and
high expense. This low-risk process means that the lack of specialized IBM
talent doesn't matter as much.
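To make the file-server interception concrete, here's a minimal sketch of a path-redirecting shim. The legacy share name and the cloud mount point are hypothetical; in practice the redirect often happens at the mount level rather than in code, but the mapping logic is the same:

```python
from pathlib import PurePosixPath

# Hypothetical path mapping for a strangler-style migration: the legacy
# application keeps asking for its old file-server path, while a thin
# shim redirects access to the new cloud-backed mount.
LEGACY_ROOT = PurePosixPath("/qfileserver/docs")    # old file share (illustrative)
CLOUD_ROOT = PurePosixPath("/mnt/azurefiles/docs")  # NFS mount of Azure Files (illustrative)

def redirect(legacy_path: str) -> str:
    p = PurePosixPath(legacy_path)
    try:
        # Re-root paths under the migrated share onto the cloud mount.
        return str(CLOUD_ROOT / p.relative_to(LEGACY_ROOT))
    except ValueError:
        # Paths outside the migrated share keep working unchanged.
        return legacy_path

print(redirect("/qfileserver/docs/invoices/2024.pdf"))
# -> /mnt/azurefiles/docs/invoices/2024.pdf
```

Each commodity service the legacy application touches can be swapped out this way, one seam at a time, while the core application keeps running untouched.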
Even in these three cases, it's possible to move IBM Power applications based on IBM i or AIX to the cloud; it just requires a little imagination and technical creativity. In the case of a data
center exit, I strongly advise IT teams to try everything they can think of to
move those legacy applications without disturbing how they are architected or operated. Lift and shift into a low-latency cloud environment alongside other modern application components, then chip away at the legacy pieces.
Either let them "run forever" in an "as is" mode, or apply strangler techniques
and slowly migrate them to modern technology over time.
## ABOUT THE AUTHOR
Tony Perez is a Cloud Solution Architect at cloud infrastructure provider Skytap. He has deep experience as a solution architect in the information technology and services industry, with engineering roles spanning Sales, Customer Success, Cloud, Monitoring Performance, Mobile Applications, Professional Services, and Automated Software Testing. He began his career at Sequent Computer Systems and Oracle and has since worked at Netscape, Mercury Interactive, Argogroup, and Keynote.