After two decades of working with highly regulated organisations, we’ve developed some strong opinions about legacy systems, some of which run counter to prevailing industry narratives. Whilst other consultancies and vendors may push their “proven frameworks” and “transformational roadmaps,” we’ve watched well-intentioned modernisation initiatives stumble over the same fundamental misunderstandings about what legacy systems actually are and why they persist.
The conversation about legacy systems has become dominated by what we might charitably call “vendor-driven oversimplification.” Many will know Gartner’s Five Rs (Rehost, Refactor, Rearchitect, Rebuild, Replace). We also have a lot of time for Martin Fowler’s Strangler Fig pattern. These are useful approaches that help structure thinking. But in some cases they’ve become substitutes for thinking rather than tools to support it.
Here’s what we’ve learned about the uncomfortable realities that these frameworks rarely address.
What the Frameworks Don’t Tell You
The “Kick the Can” Political Reality
The thorny issue with legacy systems isn’t just technical debt; it’s “political debt”. We’ve seen organisations where everyone acknowledges that legacy systems need attention, but nobody wants to be the leader who champions a multi-year, high-risk, low-visibility modernisation programme.
The brutal arithmetic: Legacy modernisation rarely delivers quick wins that make careers. It’s expensive, disruptive, and the benefits are often invisible to users (systems that work vs. systems that work slightly better). Meanwhile, the political cost of failure is enormous.
This challenge is compounded by what might be called the “archaeological problem”. Legacy systems often lack comprehensive documentation, and the people who built them may have long since moved on. As Mari recently noted, project teams frequently become “tech archaeologists,” trying to understand decades-old systems where no one remaining has complete knowledge of how everything works.
The maintenance contract trap: Many organisations find themselves in a vicious cycle where spiralling maintenance costs consume increasing portions of IT budgets, leaving little room for modernisation investment. We’ve seen departments spending 60-70% of their technology budget on keeping legacy systems operational, then wondering why they can’t fund replacement programmes.
This creates a particularly cruel irony: organisations that most need modernisation are often least able to afford it, whilst those with healthy technology budgets may not face sufficient pressure to justify transformation risks.
The Myth of Rational Decision-Making
The industry frameworks assume that legacy modernisation decisions are primarily technical and economic. In our experience, they’re also political and emotional. We’ve seen technically sound modernisation business cases die because of stakeholder relationships, and technically questionable projects succeed because the right sponsor championed them.
Reality check: If your modernisation strategy doesn’t account for the deputy director who’s terrified of change, the vendor relationship that’s been in place for fifteen years, or the team that built the original system and takes criticism of it personally, you’re planning in a fantasy world.
The “Big Bang” Delusion Persists
Despite decades of evidence that wholesale system replacement rarely works, organisations continue to attempt it. Why? Because the Strangler Fig pattern, whilst intellectually appealing, requires a level of architectural sophistication and organisational patience that many organisations simply don’t possess.
Our observation: The organisations most likely to need legacy modernisation are often the least equipped to execute incremental approaches. They lack the architectural maturity, the integration capabilities, and the change management discipline that gradual migration requires. But this doesn’t mean they should avoid modernisation; it means they need to be more strategic about it.
As my colleague Mari explored in “How is de-risking a legacy modernisation project different?”, legacy modernisation projects face unique challenges that greenfield developments don’t encounter, including complex dependencies, incomplete system knowledge, and the need for continuous regression testing whilst building new capabilities.
The False Economy of “Do Nothing”
Whilst we’re advocates for pragmatic thinking about legacy systems, we’ve also seen the genuine costs of indefinite postponement. Legacy systems don’t improve with age, and the “do nothing” approach often proves more expensive than controlled modernisation.
The compounding cost problem: Every year of deferred modernisation typically makes eventual replacement more difficult and expensive. Skills become scarcer, vendor support diminishes, integration challenges multiply, and the gap between legacy and modern capabilities widens.
The security reality: Legacy systems in regulated environments aren’t just maintenance headaches; they’re increasingly security liabilities. We’ve seen organisations where patching and security management consumes more effort than replacement would require.
The sustainability reality: Legacy systems can run on comparatively inefficient hardware and architectures that consume disproportionate energy for the value delivered. However, wholesale replacement isn’t automatically more sustainable: new systems built with a ‘brute force’ approach to computing can be equally wasteful. Embracing patterns such as auto-scaling, asynchronous queuing and low-carbon hosting locations is key.
The quality assurance challenge: Legacy modernisation projects face unique testing and quality challenges that greenfield projects simply don’t encounter. The need for comprehensive regression testing, managing complex dependencies, and ensuring system integrity whilst building new capabilities requires fundamentally different approaches from clean slate development.
The AI Modernisation Trap
The latest variant of modernisation hype suggests that AI adoption necessitates wholesale legacy replacement. This is demonstrably false and potentially ruinous. Some of the most effective AI implementations we’ve delivered have been built around existing legacy systems, using event-driven architectures to stream data without requiring core system changes.
Controversial stance: Organisations rushing to replace legacy systems “for AI” are often solving the wrong problem. Poor data quality, inadequate integration patterns, and weak governance will sink AI initiatives regardless of underlying system age. However, organisations that never modernise their integration capabilities will struggle to capitalise on technological advances.
What Actually Works: Pragmatic Modernisation
The goal isn’t to modernise everything or to modernise nothing; it’s to modernise strategically. This means acknowledging both the value that legacy systems continue to deliver and the genuine costs of indefinite postponement.
Start with Integration, Not Replacement
Rather than beginning with system replacement plans, start by making your legacy systems more accessible. Modern integration patterns (APIs, event streaming, data virtualisation) can unlock tremendous value from existing systems without requiring replacement.
As we’ve explored extensively in our “Open Your Eyes to APIs” white paper, APIs done properly can be significant strategic enablers, allowing organisations to construct modular, reusable capabilities rather than monolithic single-use solutions.
Practical example: AI fraud detection can be added to legacy payment systems by implementing Change Data Capture (CDC) to stream transaction events in real time. The legacy payment processing logic remains unchanged and proven, whilst AI systems analyse patterns and trigger interventions based on the event streams.
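To make that concrete, here’s a minimal sketch of what the consuming side might look like, assuming a Debezium-style CDC topic and the kafka-python client. The topic name, field names and scoring function are purely illustrative, not a prescription.

```python
# A minimal sketch of the CDC pattern described above, assuming a
# Debezium-style change topic and the kafka-python client. Topic name,
# field names and the scoring function are illustrative assumptions.
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Subscribe to the change stream emitted by the CDC tool; the legacy
# payment system is never touched, we only observe its committed changes.
consumer = KafkaConsumer(
    "legacy.payments.transactions",          # hypothetical CDC topic
    bootstrap_servers="kafka:9092",
    group_id="fraud-detection",
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

def score_transaction(txn: dict) -> float:
    """Placeholder for the AI/ML scoring step (e.g. a call to a hosted model)."""
    return 0.99 if txn.get("amount", 0) > 10_000 else 0.01

for message in consumer:
    change = message.value
    txn = change.get("after", {})            # Debezium puts the new row state in "after"
    if score_transaction(txn) > 0.9:
        # Trigger an intervention downstream (alert, hold, manual review);
        # the legacy processing path continues untouched.
        print(f"Flagging transaction {txn.get('id')} for review")
```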
Event-Driven Architecture as a Legacy Bridge
One of our strongest recommendations for legacy modernisation is implementing event-driven data streaming as a bridge between legacy systems and modern capabilities. This approach offers several advantages that the traditional frameworks underplay:
- Loose coupling: Legacy systems continue operating unchanged whilst modern services consume event streams
- Real-time capabilities: AI and analytics systems can react to business events as they occur
- Audit trails: Event streams provide complete histories for regulatory compliance and AI explainability
- Scalable integration: Multiple modern services can consume the same event streams without impacting legacy performance
As we discussed in our “Beyond the Hype” podcast episode on Event-Driven Architecture, EDA has matured beyond architectural buzzword into a practical strategy for organisations navigating legacy systems whilst enabling AI and cloud capabilities.
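To illustrate the “bridge” idea, the sketch below polls an existing audit/outbox table in the legacy database and republishes rows as business events. It assumes such a table exists, and uses sqlite3 as a stand-in for the legacy database and kafka-python for publishing; the table, column and topic names are ours, not a prescription. Dedicated CDC tooling that tails the transaction log is usually preferable, but the shape of the bridge is the same.

```python
# A sketch of a legacy-to-event bridge: poll an existing audit/outbox table
# in the legacy database and republish rows as events. sqlite3 stands in for
# the real legacy database; table, column and topic names are assumptions.
import json
import sqlite3
import time
from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="kafka:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Reading from a read-only replica keeps load off the core legacy system.
conn = sqlite3.connect("legacy_replica.db")
last_seen_id = 0

while True:
    rows = conn.execute(
        "SELECT id, event_type, payload FROM audit_log WHERE id > ? ORDER BY id",
        (last_seen_id,),
    ).fetchall()
    for row_id, event_type, payload in rows:
        # Publish each legacy change as a business event; modern services
        # (AI, analytics, notifications) subscribe without touching the legacy system.
        producer.send(
            "legacy.business-events",
            {"type": event_type, "data": json.loads(payload)},  # assumes JSON payloads
        )
        last_seen_id = row_id
    producer.flush()
    time.sleep(5)  # simple polling interval; real CDC tools tail the transaction log instead
```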
The Strangler Fig Pattern: Buyer Beware
Whilst Martin Fowler’s Strangler Fig pattern is conceptually sound, in our experience it’s far more difficult to execute than the literature suggests. The pattern assumes:
- Architectural boundaries that may not exist: Legacy monoliths often lack the clean interfaces that gradual replacement requires
- Organisational patience: Business stakeholders rarely tolerate the extended timelines that incremental replacement demands
- Technical sophistication: Successfully implementing façades, proxies, and dual-write patterns requires skills that many teams lack
Major challenges include:
- Data synchronisation: Avoiding data loss and the accidental use of stale data
- Routing complexity: Deciding how and when to route between new and old systems can quickly become complex and brittle
- Integration with legacy code: Sometimes you’ll need to make changes to legacy code, which can be hard, especially if test coverage is non-existent
- Performance bottlenecks: New and old systems are often not optimised to co-exist, and slowdowns follow
- Testing complexity: Mixed environments make automated testing harder due to evolving integration points
Our recommendation: Use Strangler Fig concepts selectively, focusing on discrete functional areas rather than attempting organisation-wide application. The pattern works best when applied to well-bounded system components with clear interfaces, not as a wholesale approach to legacy transformation.
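To show why the routing concern above deserves respect, here’s a deliberately simplified sketch of the decision at the heart of a Strangler Fig façade. The client interfaces and operation names are hypothetical; the point is that routing state needs to be explicit, auditable and reversible.

```python
# A simplified sketch of the routing decision inside a Strangler Fig facade.
# legacy_client, modern_client and the operation names are hypothetical.
class StranglerFacade:
    def __init__(self, legacy_client, modern_client, migrated_operations):
        self.legacy = legacy_client
        self.modern = modern_client
        # Operations that have been fully migrated and verified against legacy behaviour.
        self.migrated = set(migrated_operations)

    def handle(self, operation: str, payload: dict):
        if operation in self.migrated:
            try:
                return self.modern.call(operation, payload)
            except Exception:
                # Falling back keeps the service running, but risks the stale-data
                # and dual-write problems described above; log loudly and reconcile later.
                return self.legacy.call(operation, payload)
        # Everything not yet strangled continues to route to the legacy system.
        return self.legacy.call(operation, payload)

# Example: only "get_customer" has been strangled so far.
# facade = StranglerFacade(legacy_soap_client, new_rest_client, {"get_customer"})
```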
We’ve seen success with desktop interoperability platforms that gradually migrate user interfaces whilst leaving backend systems unchanged, providing immediate user experience improvements whilst avoiding backend disruption.
The Five Rs: What They Don’t Tell You
Gartner’s Five Rs provide useful categories, but they obscure some critical realities:
Encapsulate: Often overlooked in the standard Five Rs, this involves wrapping legacy systems with modern interfaces (APIs, web services, microservices facades) without changing the underlying code. It’s frequently the most pragmatic starting point for legacy modernisation, allowing you to expose legacy functionality through modern integration patterns whilst preserving proven business logic. However, encapsulation can become a technical debt trap if used as a permanent solution rather than a stepping stone. You’re essentially building modern complexity on top of legacy complexity, which can compound maintenance challenges over time. Best used when you need to integrate legacy systems with modern applications quickly, but shouldn’t be seen as a substitute for addressing fundamental architectural issues.
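As an illustration of encapsulation, the sketch below wraps a hypothetical legacy account lookup in a modern REST interface using FastAPI. The legacy_lookup adapter is a stand-in for whatever access path the real system offers (stored procedure, message queue, batch extract); nothing here changes the legacy code itself.

```python
# A minimal encapsulation sketch: expose a legacy lookup through a modern
# REST interface without changing the legacy code. legacy_lookup is a
# hypothetical adapter around the real system's existing access path.
from fastapi import FastAPI, HTTPException

app = FastAPI(title="Legacy account facade")

def legacy_lookup(account_id: str) -> dict | None:
    """Stand-in for the legacy access path (stored procedure, MQ call, batch extract)."""
    return {"account_id": account_id, "status": "ACTIVE"}  # illustrative response

@app.get("/accounts/{account_id}")
def get_account(account_id: str) -> dict:
    record = legacy_lookup(account_id)
    if record is None:
        raise HTTPException(status_code=404, detail="Account not found")
    # The facade translates legacy structures into a stable, documented contract
    # that modern consumers can rely on, decoupled from the legacy internals.
    return record
```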
Retain & Rejuvenate (the hidden sixth R): Often the most cost-effective approach, but it requires admitting that your legacy system might actually be meeting business needs effectively. This is psychologically difficult for organisations that have spent years criticising their technology estate. AI-based coding tools are making this option more viable (although they aren’t silver bullets). Scarce skills for languages such as COBOL are often a barrier to maintaining and rejuvenating legacy code bases; in those cases the encapsulate option above is often more viable.
Rehost (“Lift and Shift”): Works well for infrastructure modernisation but does nothing to address application-level technical debt. Don’t expect business process improvements from rehosting alone. Bear in mind that operating costs can actually increase going down this route; the classic example is moving from on-premises physical servers to IaaS.
Additionally, as we’ve explored in our Conscientious Computing series, moving from efficient on-premises infrastructure to cloud locations with higher carbon intensity electricity grids can increase your environmental footprint: the ‘hardware is cheap’ mentality often overlooks these broader impacts.
Refactor: Often seen as the “safe” modernisation option, refactoring involves improving code structure whilst preserving functionality. However, refactoring requires understanding business logic that may be poorly documented and embedded in institutional knowledge that’s walking out the door. The process frequently uncovers deeper architectural problems that can’t be solved by incremental improvements alone. What starts as “cleaning up the code” can rapidly spiral into touching half the codebase. Unlike modern applications built with clear separation of concerns, legacy systems often have tangled dependencies that make isolated refactoring nearly impossible. The approach works best when applied to well-understood, bounded components rather than attempting to refactor entire legacy systems incrementally.
Rearchitect: Often necessary for genuinely transformational change, but requires significant architectural vision and technical leadership. Demands careful planning and often benefits from bringing in specialist expertise or investing in upskilling existing teams. Success depends on having clear architectural goals and the organisational commitment to see the change journey through.
Replace: The most expensive and risky approach, but sometimes genuinely necessary. Often chosen when patience for incremental approaches has been exhausted, though occasionally the right choice for systems that have become genuine barriers to organisational objectives.
Breaking the Maintenance Contract Trap
One practical challenge that frameworks rarely address is escaping expensive maintenance arrangements that consume modernisation budgets. Some strategies we’ve seen work:
Selective modernisation: Rather than wholesale replacement, modernise specific high-maintenance components that drive disproportionate support costs.
Vendor renegotiation with alternatives: Use modernisation planning as leverage in maintenance contract discussions. Vendors often prefer reduced margins to complete customer loss.
Hybrid support models: Combine reduced vendor support with internal capability building, gradually shifting maintenance responsibilities whilst reducing costs.
Beyond the Hype: Practical Recommendations
1. Assess Legacy Value, Not Just Legacy Risk
Most legacy assessments focus on technical debt, security vulnerabilities, and maintenance costs. Fewer organisations systematically assess the business value that legacy systems continue to deliver. Before planning replacement, understand what you might lose and honestly assess what you’re sacrificing by standing still.
Hard questions: If your legacy system handles millions of transactions reliably, what’s the cost of that reliability becoming unreliable? If your maintenance costs are consuming 70% of your IT budget, what innovations are you forgoing?
2. Calculate the True Cost of Delay
Create honest financial models that compare modernisation costs against the compounding costs of legacy maintenance, lost opportunities, and increasing technical debt. Include hard-to-quantify factors like recruitment challenges (talented people don’t want to work on obsolete technology) and competitive disadvantage. Include sustainability considerations in your cost models: legacy systems may be energy-inefficient, but rushing to cloud-first solutions without considering carbon intensity and right-sizing can actually increase your environmental footprint. As we’ve discussed in our work on sustainable software, the goal is conscientious computing that balances performance, cost, and environmental impact.
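As a starting point, even a back-of-the-envelope model makes the compounding effect visible. The figures below are illustrative assumptions, not benchmarks; substitute your own estimates and layer the harder-to-quantify factors on top.

```python
# A back-of-the-envelope comparison of "do nothing" versus modernising, to make
# the compounding cost of delay explicit. All figures are illustrative
# assumptions, not benchmarks.
def do_nothing_cost(years: int, annual_maintenance: float, growth_rate: float) -> float:
    """Maintenance that compounds as skills scarcity and vendor pressure bite."""
    return sum(annual_maintenance * (1 + growth_rate) ** y for y in range(years))

def modernise_cost(years: int, programme_cost: float, new_run_cost: float) -> float:
    """One-off programme cost plus a (lower) steady-state run cost."""
    return programme_cost + new_run_cost * years

horizon = 5
legacy = do_nothing_cost(horizon, annual_maintenance=4_000_000, growth_rate=0.10)
modern = modernise_cost(horizon, programme_cost=10_000_000, new_run_cost=1_500_000)
print(f"Do nothing over {horizon} years: £{legacy:,.0f}")
print(f"Modernise over {horizon} years: £{modern:,.0f}")
```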
3. Find Politically Viable Starting Points
Look for modernisation opportunities that provide visible benefits quickly whilst building organisational confidence for larger changes. User interface improvements, integration capabilities, and data accessibility often deliver immediate value whilst establishing foundations for deeper transformation.
4. Fix Data Before Fixing Systems
Poor data quality will sink AI initiatives regardless of underlying system modernity. Instead of replacing systems to “enable AI,” focus on data governance, quality management, and accessibility. Event streaming and CDC can often liberate data from legacy systems without requiring wholesale replacement.
5. Plan for Hybrid Architectures
The binary choice between “legacy” and “modern” is false. Plan for architectures that combine legacy systems (for proven business logic) with modern capabilities (for integration, user experience, and analytics). This hybrid approach reduces risk whilst enabling innovation.
6. Invest in Integration Capabilities
Modern organisations are distinguished not by having only modern systems, but by having excellent integration between systems of different vintages. APIs, event platforms, and data pipelines are often better investments than wholesale system replacement.
As we’ve discussed in our “Beyond the Hype” podcast, including episodes on technology strategies and observability, success comes from aligning technology investments with genuine business value rather than following architectural fashion.
The Uncomfortable Reality
Legacy systems persist for complex reasons that combine technical, economic, and political factors. Whilst some legacy systems continue to deliver genuine business value at acceptable cost, others represent accumulated technical debt that compounds daily. The challenge is distinguishing between these cases and making strategic decisions accordingly.
Balanced perspective: The goal is to modernise strategically, not to modernise all or nothing. This means preserving legacy systems that continue to deliver value whilst addressing those that have become genuine barriers to progress. Sometimes the most radical thing you can do is decide not to modernise. Other times, the most responsible thing is to bite the bullet and fund necessary transformation.
A Framework-Free Approach
Rather than applying predetermined frameworks, we recommend starting with honest questions:
· What specific problems are we trying to solve? “Modernisation” isn’t a problem statement.
· What value do current systems deliver, and what are we sacrificing by maintaining them? Consider both sides of the equation.
· What are our actual constraints? Budget, skills, risk tolerance, and regulatory requirements matter more than architectural purity.
· What’s our real timeline, and what’s the cost of delay? Transformation takes longer than anyone expects, but indefinite postponement also has costs.
· Where can we start that provides visible value whilst building towards larger goals? Find politically viable entry points that create momentum for deeper change.
Conclusion: Strategic Technology Decisions in Complex Environments
The legacy modernisation industry has evolved sophisticated frameworks, proven patterns, and compelling case studies. These tools can be valuable, but they’re not substitutes for contextual thinking, honest assessment, and pragmatic decision-making that acknowledges both the costs of change and the costs of standing still.
Legacy systems become problematic when they prevent organisations from meeting their objectives; but those objectives may include maintaining service continuity, managing transformation risk, and operating within realistic budget constraints. The most effective modernisation strategies acknowledge these tensions rather than ignoring them.
The real transformation challenge isn’t technical; it’s building organisational capability to make strategic technology decisions that balance competing priorities: preserving what works whilst addressing what doesn’t, managing risk whilst enabling innovation, and working within political and financial realities whilst advancing long-term objectives. This includes considering the environmental impact of our technology decisions. Legacy systems may be inefficient, but modern solutions that prioritise speed over sustainability can be equally wasteful. The goal is conscientious computing that acknowledges the real-world impacts of our architectural choices.
In our experience, the organisations that succeed at legacy modernisation are those that treat it as an ongoing strategic capability rather than a one-off project, and that make decisions based on evidence and context rather than industry hype or vendor marketing.