Written by Daniel Bienkowski
We live in an ever-changing world, and the pace of change in technology has accelerated. Every day brings new achievements, discoveries and innovations in one of the many areas of technology, so it is inevitable that technology becomes obsolete. One of the greatest areas of change is web development, where frameworks and libraries are regularly tweaked and the major browser vendors release new versions every six weeks. This pace has contributed to the now-popular saying that by the time a framework is learned and mastered, it is already outdated.
It is no different with the .NET Framework. In 2019, Microsoft announced that .NET Framework 4.8 would be the last major release of the framework and that it would be superseded by .NET Core – which means the .NET Framework has become legacy technology, and no one should be building new solutions on it anymore.
But wait – what about projects that started before the wave of .NET Core popularity and still rely on the .NET Framework? This is certainly a substantial number: in the 2021 Stack Overflow survey, the .NET Framework and .NET 5 / .NET Core placed first and second respectively in popularity. First of all, forced migration seems inevitable, because eventually the old technology will no longer be supported. Additionally, new technology tends to be – though not always – better designed and more efficient, and everyone would like to use it. For these reasons, many organisations will face a migration challenge in the near future.
How do you ensure your mission-critical legacy applications support your business objectives, whilst reducing the impact on customer service and revenue?
One of the first options that comes to mind is to rewrite the application completely. Once the entire system has been rewritten, the legacy system is retired and the new one fully deployed in a single cut-over. Hence the name of this approach – the Big Bang.
Whilst this approach appears to be straightforward, the practical reality is quite different.
In particularly large and complex projects, this approach can be very time-consuming and difficult to manage. In addition, all the benefits are back-loaded, i.e. you carry all the risks and costs associated with the legacy application all the way through the project.
The cut-over period is relatively long, as you need to ensure the target system passes all your functional and non-functional tests, and your business leaders and product owners cannot afford for these mission-critical applications to be offline for days. There is also a risk that, whilst you are developing the target system, funding or the availability of resources becomes an issue. As a result, your target system may never be finished, and your legacy system continues to represent a material risk to your business.
At the same time, to grow your business there is a constant need to implement new and innovative ideas, features and business change requests. During a rewrite these must be duplicated – added to the old project as well as to the new one – which dramatically increases the complexity of the project and makes planning even more difficult.
Fortunately, the following concept comes to the rescue – the “Strangler Pattern”.
Applying the Strangler Pattern allows you to transition from baseline to target in a way that reduces risk. The pattern results in a gradual transition from a legacy architecture to a modern architecture, in a way that is almost invisible to the outside observer.
How does it work?
The name of the pattern comes from an article by Martin Fowler, who observed the behaviour of plants in tropical forests that live in trees and slowly “strangle” them. The analogy is apt: just as such a plant grows on a host tree and gradually overpowers it, the pattern gradually replaces the legacy system with the target system.
The main construct of this pattern is a proxy or façade, which redirects requests from a front-end or any external system to either the legacy or the target system. The proxy or façade represents the entry points to the existing system. What the outside observer does not see is the transformation of the legacy system’s services into a new set of services. When a new service comes online, the proxy or façade is modified so that calls which previously went to the service on the legacy system are routed to the new service. Eventually, the services in the old system are “strangled” in favour of the new services.
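To make the mechanism concrete, the routing behaviour of such a façade can be sketched in a few lines. This is a minimal illustrative sketch (in Python for brevity); the paths, service names and handler functions are all hypothetical, not part of any specific system.

```python
# Minimal sketch of a strangler facade: every request enters through
# route(), and a routing table decides whether the legacy system or a
# new service handles it. All names here are hypothetical.

def legacy_system(request):
    return f"legacy system handled {request}"

def new_orders_service(request):
    return f"new orders service handled {request}"

# Initially, every entry point routes to the legacy system.
routes = {
    "/orders": legacy_system,
    "/invoices": legacy_system,
}

def route(path, request):
    # The facade is the single entry point for external callers;
    # they never know which system actually served them.
    return routes[path](request)

# When the new orders service comes online, only the routing table
# changes - callers of route() are completely unaffected.
routes["/orders"] = new_orders_service
```

In a real .NET migration this role is typically played by a reverse proxy or an API gateway rather than an in-process table, but the principle is identical: the cut-over is a routing change, not a client change.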
The pattern has many advantages in the context of legacy software. The ability to iterate over time and replace features gradually delivers greater flexibility and control, as well as higher efficiency and availability. Its incremental nature reduces risk, as existing legacy components run in parallel with the newly developed components that will replace them, and new components and features can be released as soon as they are ready. This also makes any new change quick to reverse. Undoubtedly, a great advantage of this concept is the opportunity to reduce the technical debt that has accumulated in the legacy code. Two areas need further explanation:
- During the process of replacing old components, new ones will be created in a way that eliminates unnecessary complexity that may have accumulated over time. This significantly improves code quality, reduces errors, and makes diagnosing and resolving any problems that arise easier. New features and larger pieces of code, such as components or services, will be developed from scratch, free from the constraints of legacy code.
- New technology stacks and paradigms can be adopted earlier in the development process. Code can be written using DDD (Domain-Driven Design) and TDD (Test-Driven Development), leveraging DevOps best practice. A completely new architectural concept can support the transformation from a monolithic system to one based on microservices, where each microservice has its own independent database. This scenario has become popular in recent years as it offers the ability to redesign, rewrite or replace without affecting the underlying infrastructure, fits modern practices such as loose coupling and scaling, and has a positive impact on system maintainability.
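The claim above that any new change is quick to reverse can be illustrated with a per-feature toggle guarding each cut-over. The sketch below is hypothetical (the flag name, billing functions and VAT figures are invented for illustration), but it shows the shape of the idea: rolling back to the legacy path is a one-line flag change, not a redeployment.

```python
# Sketch of a reversible cut-over: a per-feature flag decides whether
# the facade calls the newly developed component or falls back to the
# legacy one. All names and numbers are illustrative only.

FLAGS = {"use_new_billing": False}

def legacy_billing(amount):
    # Old code path: VAT rate hard-coded, as legacy code often is.
    return round(amount * 1.20, 2)

def new_billing(amount, vat=0.20):
    # Rewritten path: behaviour preserved, but now configurable.
    return round(amount * (1 + vat), 2)

def bill(amount):
    if FLAGS["use_new_billing"]:
        return new_billing(amount)
    return legacy_billing(amount)
```

Flipping `FLAGS["use_new_billing"]` to `True` routes traffic to the new component; if a problem surfaces, flipping it back instantly reverts the change while both systems are still running in parallel.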
The obvious disadvantage of the pattern is the need to run both the legacy and target systems in parallel. This may increase costs for a period of time and requires people with the skills to support both. Complexity peaks mid-migration; once the migration is complete, it should be much lower. If only we lived in a world where change freezes were like ice ages and there were no change requests for months – but in the real world, change is constant.
You should consider using the Strangler Fig pattern to incrementally migrate your legacy systems, gradually replacing specific pieces of functionality with new applications and services.
NashTech has the expertise and experience to help you modernise your legacy systems, ensuring they are scalable, secure, performant and manageable.