Modernising Legacy Applications



In our whitepaper “Old Software. New Experience.”, we discuss modernising applications by decoupling the new website or mobile app (often called the ‘front end’) from the existing business applications we would like to modernise.

The industry standard for decoupling the 'front-end' from the 'back-end' is a digital contract between both sides that states how each party will communicate with the other. This is known as an application programming interface, or 'API'. By designing the new user experience first and then working with the development team to build an API for the existing business system, that contract is informed by the future state of the system, rather than the current state of the system.

The challenge we face, however, is that there are many different kinds of 'legacy' system, each requiring a different strategy to decouple to the point where an API can be provided. In this post we are going to look at a set of common legacy system design patterns and how you can tackle modernising them.

Mainframe

So, the irony of this article is that the oldest technology in the group actually supports RESTful APIs natively using modern interconnect technologies - at least IBM mainframes do, using z/OS Connect. Take a look at this example documentation and code showing how to get z/OS, the operating system of IBM's popular Z-Series mainframes, to connect to modern applications relatively easily.



You can see pretty quickly from the diagram above that the z/OS Connect approach introduces an intermediation layer between the existing application and the outside world. This is precisely how you should approach developing an API to modernise the experience of an existing system: put a modern layer in front of your existing software and focus on making that layer simple and secure.

Of course, the problem developers face with z/OS is that its closed-source nature and high complexity make correct configuration of the mainframe environment itself a huge challenge.

On-Premises

Twenty years ago, a reliable inter-office network for medium-sized businesses was generally delivered via ISDN connections and made today's mobile broadband charges look like a bargain. A very common pattern in mid-market software developed in the 1990s and 2000s was to install a database in each office due to poor inter-office network performance. There would usually be some kind of 'fat client' desktop application speaking to this quasi-central database. By keeping the database server close to the users of the software, users could be productive within an office environment while still sharing data within that office.

As businesses grew and more offices were connected, vendors provided all sorts of data synchronisation solutions, but the systems generally followed the same pattern of keeping database servers located within the physical network of the office. One alternative was Microsoft Remote Desktop solutions, which attempted to move the entire desktop, software and database into a remote environment. The problem was that this did nothing about the experience of using the often ugly fat-client software. If anything, it made the experience even more awkward and annoying than before.

The challenge with modernising this kind of application is that it is distributed. It makes sense to move the application’s data and underlying business logic to a centralised hosting environment like Amazon Web Services, while also building a modern API contract for modern software to leverage.

There are a few strategies that may work in this scenario.

Strategy #1: Aggregate data as read-only without migrating the business logic

The use-case for this strategy is developing services that add value without needing to write information back to the legacy systems. A perfect example of this use-case is building reporting dashboards.

Strategy #2: Aggregate data and also sync it back without migrating the business logic

The use-case for this strategy is generally some kind of enhanced feature integration that needs to update the original data. An example we have delivered previously is an online portal for self-service invoice approvals on top of older strata management solutions.

Strategy #3: Aggregate data and replicate pieces of business logic

The use-case for this strategy is pushing features of the system into an online portal or mobile application of some kind. This is very similar to Strategy #2, except that some of the 'processing' needs to be replicated in the new aggregated environment.

Executing any of these strategies requires well-designed solutions to merge and potentially clean (homogenise) data from the many different offices, as well as potentially reverse-engineering business logic from the legacy application. One approach we have used is to deploy custom-written 'agents' that sit on the legacy systems and siphon data off to a secure cloud service - the only problem is that this is not an easy piece of engineering to undertake.
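As a rough sketch of what such an agent might look like - the `invoices` table, its columns, and the upload mechanism here are all hypothetical stand-ins, not the actual implementation - it periodically extracts rows the cloud side has not yet seen and pushes them up:

```python
import json
import sqlite3


def extract_changed_rows(conn, since_id):
    """Read rows from the legacy office database newer than the last sync point."""
    cur = conn.execute(
        "SELECT id, client_name, amount FROM invoices WHERE id > ? ORDER BY id",
        (since_id,),
    )
    return [{"id": r[0], "client_name": r[1], "amount": r[2]} for r in cur]


def sync_to_cloud(rows, upload):
    """Push each row to the cloud aggregation service; return the highest id synced."""
    last_id = 0
    for row in rows:
        upload(json.dumps(row))  # e.g. an HTTPS POST to the secure cloud service
        last_id = row["id"]
    return last_id
```

In practice the agent would persist `last_id` as a watermark so it can resume after a restart, and would need to handle upload failures and retries - which is exactly where the engineering gets hard.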

Modern-ish

It's hard to imagine, but web apps have been in development since the late 1990s - which means there are applications out there with twenty-year-old code, experiences and security!

One of the biggest issues with older applications is that they 'bake' business logic into the code that drives the user interface. Over the past decade there has been a fundamental shift to decouple the 'logic' from the 'interface' using what is commonly termed 'model-view-controller' or 'MVC'. To understand MVC, think of the model as the information you want, the view as a way of looking at that information, and the controller as the brains of the operation that knows what to do with the information. If you mix the way you see the information with the brains behind the operation, you always need to see the information the same way to know what to do with it. In practical terms: if you want to see the same information in a mobile application and a website, you don't want to duplicate the rules in the mobile app AND the website - you want to separate them from the way the information is presented and write those rules once.
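The separation can be illustrated with a minimal sketch - the client record and the two views here are invented for illustration only:

```python
import json


# Model: the information itself, with no knowledge of how it is displayed.
def get_client(client_id):
    return {"id": client_id, "name": "Acme Pty Ltd", "balance": 1250.00}


# Views: two different ways of looking at the same information.
def json_view(client):
    return json.dumps(client)


def text_view(client):
    return f"{client['name']} owes ${client['balance']:.2f}"


# Controller: the brains - fetches the model and picks a view for it.
def show_client(client_id, fmt="json"):
    client = get_client(client_id)
    view = json_view if fmt == "json" else text_view
    return view(client)
```

Because the business rules live behind `get_client` and `show_client`, a website and a mobile app can each ask for whichever view suits them without duplicating any logic.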

So, modernising a 'modern-ish' web application is best done using the following steps:

1. Design the Experience

Go through an experience-design workflow (see link to whitepaper) to understand how you want to make a user-centric application.

2. Design the Prototype

Design the experience from the perspective of the end user - that is, make an interactive prototype of the finished application so you can identify all the little interactions required to bring it to life. To find out more about designing a user-centred experience, download this whitepaper.

3. Design the API

Once you understand the full experience, you can start to understand how your experience needs to interact with the legacy systems. Design the contract between the experience and the legacy system to be simple and terse. Don't over-engineer it, but whatever you do - make it truly RESTful.

RESTful Design Cheat Sheet

RESTful design uses HTTP verbs. The HTTP method should be used to express the action to be performed on a resource (which is expressed by the URI path). The HTTP verbs are a standard interface for communicating via a REST service:

GET - This retrieves a resource or list of resources, and does not modify them.

POST - Generally used to create a new resource.

PUT - Replaces a resource with updated information.

DELETE - Removes a resource.
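As an illustration, here is a minimal sketch of how those verbs might map onto actions against a single in-memory 'clients' resource - the store and handler here are hypothetical, standing in for a real framework's routing and persistence:

```python
# A minimal in-memory 'clients' resource showing how each verb maps to an action.
store = {}
next_id = 1


def handle(verb, resource_id=None, body=None):
    global next_id
    if verb == "GET":
        return 200, list(store.values())      # retrieve, never modify
    if verb == "POST":
        body = dict(body, id=next_id)
        store[next_id] = body
        next_id += 1
        return 201, body                      # create a new resource
    if verb == "PUT":
        store[resource_id] = dict(body, id=resource_id)
        return 200, store[resource_id]        # replace with updated information
    if verb == "DELETE":
        store.pop(resource_id, None)
        return 204, None                      # remove the resource
    return 405, None                          # method not allowed
```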

In RESTful design, HTTP verbs must be idempotent except HTTP POST

In theory the GET, PUT, and DELETE verbs should be idempotent, meaning you can call them over and over without any additional side effect. For example, you should be able to send a DELETE to a URI over and over without breaking the system. Even if the resource has already been deleted, you should send back a status code, not throw an exception or crash.

Often when designing REST interfaces, you send a 200 (OK) when a resource is deleted, then on subsequent deletes send a 204 or 404 so the caller knows the resource is no longer present. With a PUT, even if the caller sends the same information over and over, they'll get the same response: the resource is updated with whatever information is sent.
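That DELETE behaviour can be sketched as follows, using a plain dictionary as a stand-in for the resource store:

```python
def delete_resource(store, resource_id):
    """Idempotent DELETE: removing an absent resource is not an error."""
    if resource_id in store:
        del store[resource_id]
        return 200  # OK - the resource existed and has been deleted
    return 404      # already gone - the caller learns it is no longer present
```

The caller can safely retry the same DELETE after a timeout or network failure; the second call simply reports that the resource is no longer there.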

In RESTful design, services should be representation oriented Each service should be addressable through a specific URI and representations are exchanged between the client and service. With a GET operation you are receiving a representation of the current state of that resource. A PUT or POST passes a representation of the resource to the server so that the underlying resource's state can change.

An endpoint URI path should not contain verbs like `searchbymobile` or `add`; instead it should be organised in a hierarchical structure reflecting the resources (and potentially resource relationships) through the use of URIs. URIs are standardised and well-known. Using a unique URI to identify each of your services makes each of your resources linkable.

Example:

Non-RESTful URI: GET /searchbymobile?value=xxxxx

RESTful URI: GET /clients?mobile=xxxxx or GET /clients?q=xxxxx&type=mobile

From the above example, we can see that we are expecting to get clients, and we are searching by mobile.

Non-RESTful URI: GET /clientInvoices?clientid=xxx

RESTful URI: GET /clients/xxxxxx/invoices

From the above example, we can see that we are expecting to get client invoices, and we are searching by client id.
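Resource-oriented URIs like these might be matched by a small routing table, as in the sketch below - the handler names are hypothetical, and a real framework would provide this matching for you:

```python
import re

# Hypothetical resource-oriented routes for the client examples.
ROUTES = [
    (re.compile(r"^/clients$"), "list_clients"),                      # GET /clients?mobile=...
    (re.compile(r"^/clients/(?P<client_id>\w+)/invoices$"), "client_invoices"),
]


def resolve(path):
    """Return the handler name and any captured resource ids for a URI path."""
    for pattern, handler in ROUTES:
        match = pattern.match(path)
        if match:
            return handler, match.groupdict()
    return None, {}
```

Note that search criteria such as the mobile number travel in the query string, while the resource hierarchy (`/clients/{id}/invoices`) lives in the path.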



4. Build the API layer

This may well be the most challenging part. Depending on the legacy technology you have, it may make more sense to use a more modern intermediate layer as the actual API on top of the old code than to write the 'veneer' of that functionality in the legacy code. In most cases it makes sense not to surface the legacy application as a public API, since updating security and encryption requirements in a legacy code-base can be troublesome. It may be easier to put the legacy API under a matching modern API layer that takes care of the data and security plumbing with a lot less effort. This will depend on the language, framework and application-server stack you're using for the legacy application.
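A rough sketch of such a 'veneer' is below - the legacy call, its command name, and the verbose record format are all hypothetical assumptions. The modern layer handles authorisation and reshapes the legacy response into the terse representation the new experience expects:

```python
def modern_invoices_endpoint(client_id, legacy_call, is_authorised):
    """Thin modern layer: handles auth, then reshapes the legacy response."""
    if not is_authorised:
        return 401, None  # security plumbing lives here, not in the legacy code
    # The legacy system might return a verbose, flat record set...
    legacy_rows = legacy_call("GETINVOICES", client_id)
    # ...which the modern layer reshapes into a terse RESTful representation.
    body = [{"id": row["INVNO"], "amount": row["AMT"]} for row in legacy_rows]
    return 200, body
```

Because the legacy system is only ever reached through this layer, its own authentication and encryption shortcomings never need to be exposed publicly.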

We often find that using an interactive prototype of the new experience provides a great starting point for understanding the interactions between the client and the server - in essence, we find using the prototype informs us how best to design a terse and secure API that works.

If you want to find out more about Interactive Prototypes, download the Old Software, New Experience white paper.

Conclusions

There are many paths to get a modern experience on top of legacy platforms no matter what kind of legacy systems you have. There are potential skills gaps for a developer working on the legacy technology versus developing new secure interfaces for modern experiences, but updating those skills is a worthwhile investment.

The real key to success in modernising an application is having clarity on how you want to execute your new experience over the top of the legacy system, and then decoupling that new experience from the legacy system as much as possible.
