Our take on modernizing legacy applications at speed

A case for legacy modernization

Everyone has at some point in their career stumbled across this kind of application: a custom-built behemoth based on an aging software stack, with code strung together by developers who have since left the company and documentation that ranges from sparse to non-existent. Legacy applications of this kind tend to share a common set of issues:

Loss of productivity

Developers waste 23% of their working time wrestling with technical debt [1]. Moreover, productivity declines by 50% in the most complex regions of the code [2].

Code no man’s land

Legacy application code is full of “no-go” zones, which no one wants to touch. According to studies, code with weak ownership tends to have 6x more bugs [3].

Inability to move to the cloud

These applications are based on legacy technology that cannot be easily moved to the cloud, hindering the reduction of your data center footprint.

Rising costs

Updating a legacy application is typically much more expensive than updating a modern one. Longer development times translate into more staff and contractor hours, not to mention the cost of missed business opportunities or system outages [5].

Difficulties in retention & recruitment

Developers working in the most complex regions of the code have a 10x greater probability of leaving the firm [2]. Moreover, developers want to perform impactful work and avoid legacy code [4].

Security concerns

The absence of continual updates from vendors means vulnerabilities in older software and hardware go unaddressed, making such systems prime targets for cyber-attacks. OWASP positions outdated software as the number one insider threat today [6].

Lack of support

As technology progresses, support for older systems dwindles. Software vendors prioritise newer systems, gradually making patches and updates scarce for legacy ones.

Modernization of such a system is usually held off for as long as possible. Indeed, these applications typically still do what they are supposed to. As Marianne Bellotti put it in her provocatively titled book “Kill It with Fire: Manage Aging Computer Systems and Future-Proof Modern Ones”:

Legacy technology exists only if it is successful. Technology that isn’t used doesn’t survive decades.

Still, at some point, the heroics of the maintenance team will no longer be enough and the risk of maintaining the status quo will outweigh the risk of implementing change. This is when you will need to embark on an application modernization journey.

Our approach to modernizing legacy applications

Here at Exerizon, we have developed a structured approach to application modernization, which can be summarized as follows:

It all starts with an assessment phase, which allows us to review the application through multiple lenses: business, technology, and cost. Based on this information we can formulate a modernization strategy for the application (as per the practical taxonomy defined by Gartner), as well as a roadmap highlighting the various stages of the project. Running the assessment takes extra effort; however, it adds predictability and helps avoid the sunk costs of making the wrong strategic decision at the project’s outset.

The roadmap itself is based on applying incremental improvements to the system. Application modernization can be a daunting endeavour, so we recommend that our customers never dive head-first into a “big bang” transformation.

To highlight the value of evolutionary modernization and showcase how this can be achieved in practice, let us assume we want to modernize a hypothetical on-premises application based on .NET Framework and SQL Server. Let us also assume we are using AWS as our go-to cloud provider. Here is how such a project could be split into phases:

1 - Lift & shift phase

Strategy: Leverage a mixture of managed (e.g. AWS MGN) and native (e.g. Always On Availability Groups for migrating SQL Server) migration techniques to bring the application to the cloud with as little effort as possible. Upgrade the environment to the newest Windows Server version and leverage the End-of-Support Migration Program (EMP) for Windows Server in case of any application incompatibilities with the new environment.

Stage outcome:

  • Application in the cloud in a matter of weeks - running on right-sized, more performant hardware (Nitro platform), as well as a modern OS receiving the newest security patches.

  • Access to the automation capabilities of the cloud, like the ability to launch ephemeral testing environments.

  • Local data center footprint reduced (source hardware can be divested).

2 - Platform setup phase

Strategy: Migrate the database to the newest version of SQL Server on RDS (e.g. using AWS DMS). Move the application and its dependencies to .NET Core. Instrument the application for observability, using the features of the new runtime. Change the underlying OS to Amazon Linux. Containerize the application and build a CI/CD process around it, including automated test coverage.
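
As an illustration of the observability part of this phase, here is a minimal sketch of how the migrated ASP.NET Core application could be instrumented. It assumes the OpenTelemetry .NET packages (OpenTelemetry.Extensions.Hosting plus the ASP.NET Core, HttpClient, and Runtime instrumentation packages and the OTLP exporter); the service name and collector endpoint are placeholders, and your exporter choice may differ:

```csharp
// Program.cs - minimal observability wiring for the migrated ASP.NET Core app (sketch).
using OpenTelemetry.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Trace;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetry()
    .ConfigureResource(resource => resource.AddService("legacy-app")) // placeholder service name
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()   // incoming HTTP requests
        .AddHttpClientInstrumentation()   // outgoing HTTP calls
        .AddOtlpExporter())               // ship traces to a collector (endpoint configured via env vars)
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddRuntimeInstrumentation()      // CPU, GC, and thread-pool metrics from the new runtime
        .AddOtlpExporter());

var app = builder.Build();
app.MapGet("/health", () => Results.Ok("healthy")); // simple readiness probe for the container platform
app.Run();
```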

Stage outcome:

  • A decent foundation in place that includes CI/CD, service templates, automated testing, and release management – allowing for safe and efficient code refactoring, as well as faster delivery of new features.

  • End-to-end observability is available for the application, making it possible to pinpoint any CPU, memory, or network issues that might be introduced while implementing changes to the system.

  • License cost savings through moving away from Windows Server.

3 - Modularization phase

Strategy: Put in place a modular monolith approach to encapsulate business logic into modules focused on specific domains. Transition module ownership to separate product teams. Appoint a platform team (SRE, DevOps, infrastructure, DevEx) that will manage the technical foundations on behalf of the product teams.
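
As an illustration (with hypothetical domain and type names), a module in such a modular monolith exposes a narrow public contract while keeping its internals inaccessible to other modules; in C#, assembly boundaries and the `internal` access modifier can enforce this:

```csharp
// Billing module (sketch) - other modules may only reference the public contracts.
using System;
using System.Threading.Tasks;

namespace Billing.Contracts
{
    public record InvoiceRequest(Guid OrderId, decimal Amount);

    public interface IBillingModule
    {
        Task<Guid> CreateInvoiceAsync(InvoiceRequest request);
    }
}

namespace Billing.Internal
{
    using Billing.Contracts;

    // 'internal' keeps the implementation invisible outside the Billing assembly,
    // so other modules cannot take a dependency on it by accident.
    internal sealed class BillingModule : IBillingModule
    {
        public Task<Guid> CreateInvoiceAsync(InvoiceRequest request)
        {
            // Persist the invoice within the module's own tables and publish an
            // integration event instead of calling other modules directly.
            return Task.FromResult(Guid.NewGuid());
        }
    }
}
```

Because each module is wired up behind its own contract, lifting a module into a standalone service later becomes largely a deployment concern rather than a rewrite.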

Stage outcome:

  • A deep team understanding of domain isolation boundaries, gained through trial and error (the cost of bad decisions is low at this point, compared to going “all in” on microservices from the start).

  • Identification of all deeply rooted dependencies that might hinder migration towards microservices in the future (e.g. strong consistency needs, distributed transactions, WCF/MSMQ).

  • An architecture allowing for parallel development of business features and refactoring of modules by independent teams.

  • A platform team is in place to reduce toil and automate any undifferentiated heavy lifting for individual teams.

4 - Decomposition phase

Strategy: Incrementally carve out application modules into microservices, independently deployable to EKS. Use AWS Migration Hub Refactor Spaces to implement the “strangler pattern” on top of the application and allow for evolutionary architecture changes. Build the first version of an Internal Developer Platform, using cloud services like AWS Proton.
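
To make the strangler pattern concrete, here is a minimal, hypothetical ASP.NET Core sketch of the routing idea. AWS Migration Hub Refactor Spaces provides this facade as a managed service; the sketch only illustrates the principle, with placeholder URLs and GET-only forwarding for brevity:

```csharp
// Strangler facade sketch: requests for already-extracted functionality are proxied to the
// new microservice; everything else still falls through to the legacy monolith's endpoints.
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddHttpClient("orders", client =>
    client.BaseAddress = new Uri("http://orders.internal.example.com")); // placeholder: extracted Orders service

var app = builder.Build();

app.Use(async (context, next) =>
{
    if (context.Request.Path.StartsWithSegments("/api/orders"))
    {
        // Forward the call to the extracted Orders microservice.
        var client = context.RequestServices
            .GetRequiredService<IHttpClientFactory>()
            .CreateClient("orders");
        var response = await client.GetAsync(context.Request.Path.Value + context.Request.QueryString.Value);
        context.Response.StatusCode = (int)response.StatusCode;
        await response.Content.CopyToAsync(context.Response.Body);
        return;
    }
    await next(); // not extracted yet: let the legacy monolith handle it
});

app.MapFallback(() => Results.Ok("handled by the legacy monolith")); // stand-in for legacy routing
app.Run();
```

In practice, Refactor Spaces (or an API gateway) takes over exactly this routing concern, so the team can move one endpoint at a time instead of attempting a big-bang cutover.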

Stage outcome:

  • The application is split into independent microservices, each owned by a single cross-functional team – allowing for maximum business agility.

  • Ability to introduce modern technologies within the limited scope of a single microservice, e.g. switching from SQL Server to DynamoDB.

  • Self-service capabilities are available for product teams via an Internal Developer Platform.

While there is no “one size fits all” approach and boundaries between phases are seldom that clear-cut, the above example still shows the value of evolutionary modernization. Each phase of the process focuses on a manageable set of goals, introduces tangible value for the business, and lays the groundwork for the next phase. As a result, the modernization effort becomes far less overwhelming and far more predictable.

Modernization best practices

Here are other application modernization best practices to keep in mind:

  • Define measurable goals for the migration initiative and track progress against these metrics as you move along – a structure we have found lends itself well here is OKRs (Objectives and Key Results), which allow us to tie the modernization effort to business goals.

  • Don’t lose sight of documentation during the modernization effort. An incremental approach to building up application documentation along with new code will pay dividends in the future. Document every architecture decision as well.

  • Define budget limits in the cloud early. Set up alarms to pinpoint anomalies in cloud costs from the get-go, especially if your team is new to cloud services when embarking on the modernization journey.

  • Start with the most problematic parts of the codebase by finding “hot” areas that combine high complexity with high developer activity. Mining the git log for this purpose can go a long way – a minimal sketch of this follows this list.

  • Leverage software intelligence tools to further reason about the legacy codebase and support refactoring. Depending on your budget, these tools can range from open-source helper solutions (e.g. Porting Assistant for .NET from AWS or .NET Upgrade Assistant) to enterprise-grade platforms like CAST Imaging. Remember never to trust tools blindly though.

  • Approach automated testing wisely. Having automated regression tests in place is pivotal for supporting the refactoring effort. Invest in end-to-end tests first: there are tools available to quickly generate these kinds of tests based on application behaviour, and a black-box approach is much easier to implement at this point, since legacy applications have rarely been designed with isolated testing in mind (a sketch of such a test also follows this list). As you progress, shift the onus to white-box testing, ideally following the Test Pyramid when building out your test suite.

  • Create a platform team at the right time. On average, developers report 22% of their time being wasted due to obstacles and inefficiencies in their work environment. Creating a dedicated team to remove this burden from developers will speed up the modernization process and spare product teams from reinventing the wheel.

  • Understand gaps in team competencies – make sure the right skills are in place before starting the migration (architecture, DevOps, UI/UX, QA, cloud). Upskill your team through tailored training programs or extend it by cooperating with a specialized modernization partner, like Exerizon.
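
As promised above, here is a minimal, hypothetical C# sketch of the hotspot idea: it mines the git log for per-file change frequency (a churn proxy) and combines it with file size as a crude complexity proxy. The repository path, time window, and ranking formula are all placeholders to adapt; dedicated software intelligence tools do this far more thoroughly.

```csharp
// Hotspot sketch: combine change frequency from the git log with a crude size-based
// complexity proxy (lines of code) to find files worth refactoring first.
using System.Diagnostics;

var repoPath = args.Length > 0 ? args[0] : "."; // path to the legacy repository (placeholder)

// Ask git for the files touched by each commit over the last year.
var psi = new ProcessStartInfo("git", "log --since=1.year --name-only --pretty=format:")
{
    WorkingDirectory = repoPath,
    RedirectStandardOutput = true
};
using var git = Process.Start(psi)!;
var output = git.StandardOutput.ReadToEnd();
git.WaitForExit();

// Count how often each file changed (churn).
var churn = output
    .Split('\n', StringSplitOptions.RemoveEmptyEntries | StringSplitOptions.TrimEntries)
    .GroupBy(file => file)
    .ToDictionary(g => g.Key, g => g.Count());

// Rank by churn x size; the top of the list is a good candidate set of refactoring targets.
var hotspots = churn
    .Where(kv => File.Exists(Path.Combine(repoPath, kv.Key)))
    .Select(kv => new
    {
        File = kv.Key,
        Changes = kv.Value,
        Lines = File.ReadLines(Path.Combine(repoPath, kv.Key)).Count()
    })
    .OrderByDescending(h => h.Changes * h.Lines)
    .Take(20);

foreach (var h in hotspots)
    Console.WriteLine($"{h.File}: {h.Changes} changes, {h.Lines} lines");
```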
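
And here is what a first black-box, end-to-end regression test could look like, sketched with xUnit against a deployed instance of the hypothetical application. The URL, endpoint, and expected payload are placeholders to adapt to your own system; the point is that the test only observes external behaviour, so it keeps passing while internals are refactored.

```csharp
// Black-box smoke test sketch: exercises the running application over HTTP only,
// with no knowledge of its internals - a first safety net before refactoring.
using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using Xunit;

public class LegacyAppSmokeTests
{
    // Points at a test environment of the application (placeholder URL).
    private static readonly HttpClient Client = new()
    {
        BaseAddress = new Uri("https://legacy-app.test.example.com")
    };

    [Fact]
    public async Task Product_list_endpoint_returns_ok_and_expected_shape()
    {
        var response = await Client.GetAsync("/api/products");

        Assert.Equal(HttpStatusCode.OK, response.StatusCode);

        var body = await response.Content.ReadAsStringAsync();
        // Assert on observable behaviour only (status code and payload shape),
        // not on implementation details.
        Assert.Contains("\"sku\"", body);
    }
}
```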

Why Exerizon?

Application modernization initiatives are complex and can be overwhelming for teams. A legacy modernization project takes months at a minimum, and usually years. It requires a specialized (and quite niche) skill set to resolve technical challenges and avoid common pitfalls. The Exerizon modernization SWAT team can help you hit the ground running here. We can augment your team, assess the system in question, and build critical momentum for the project. By design, you will spend only a short time with us (up to 1 year), but you will be left with all the necessary tools and processes to handle the modernization effort going forward. Please reach out to us if you want to find out more.

In need of additional inspiration? Take a look at some case studies from our team:

New SDK to manage 50+ applications

An organization we worked with had built 50+ products, but under the hood their technology was getting stale (roughly 10-15 years old). A unified technology stack is a clever idea, but not upgrading software versions for 10+ years started to create issues. The challenge in modernizing, however, was the large number of deeply rooted dependencies between the products (e.g. distributed transactions), which caused breaking changes in almost every new major version. Upgrading everything at once was not a practical choice.

The solution we proposed was to build 150+ cross-product libraries and remove these deeply rooted dependencies. This streamlined work across teams and reduced the migration of a single product from an average of 2 months to 1 day. It also enabled the creation of templates for every new service used to build products, cutting the provisioning time of a new product from 10 days to 3 minutes.

From spaghetti code to 50+ domains

In this example of a 10+ year application modernization initiative, the organization migrated away from 5M lines of spaghetti code that followed very few coding standards and included services where the UI layer issued SQL queries directly to fetch data. We helped split the codebase into 50+ domains by using Domain-Driven Design (DDD), Command Query Responsibility Segregation (CQRS), and message queues. The whole ecosystem is still being rewritten, but it is being done step by step: with every new business feature developed, everything that is needed from the old ecosystem is ported into the new one. This adds a 30-50% time overhead, but it allows the client to fund the whole project without resorting to a risky “big bang” approach.
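
To illustrate the kind of seam introduced during such a rewrite, here is a minimal, hypothetical C# sketch of the CQRS split: the UI no longer issues SQL directly but dispatches commands and queries to domain handlers. The names and types are purely illustrative, not the client's actual code:

```csharp
// CQRS sketch: reads and writes go through separate, explicitly modelled messages,
// so the UI never talks to the database directly.
using System;
using System.Threading.Tasks;

// Query side: read models tailored to what the UI needs.
public record GetCustomerOrders(Guid CustomerId);
public record OrderSummary(Guid OrderId, decimal Total);

public interface IQueryHandler<TQuery, TResult>
{
    Task<TResult> HandleAsync(TQuery query);
}

// Command side: state changes expressed as intents.
public record PlaceOrder(Guid CustomerId, Guid ProductId, int Quantity);

public interface ICommandHandler<TCommand>
{
    Task HandleAsync(TCommand command);
}

public class PlaceOrderHandler : ICommandHandler<PlaceOrder>
{
    public async Task HandleAsync(PlaceOrder command)
    {
        // Validate, persist within the Orders domain, then publish an
        // integration event to a message queue for other domains to react to.
        await Task.CompletedTask;
    }
}
```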

New release process that increased production deployment frequency 30x across 3 teams

This was a side effect of migrating a .NET solution to AWS. Releasing only once a month and having just a year to complete the project meant the release process had to change. The company and most of its products relied on Java, Python, and Bash; products built on .NET were something new. The major changes involved reducing manual work and adjusting company policies to enable automated deployments in a regulated industry. We managed to shift the mindset of 300+ engineers by demonstrating, in a live presentation, an automated release process all the way to production.

 