Wrapping an API around it

Using integration APIs as a way of decoupling legacy systems from modern implementations.

Today, mobile apps, desktop apps, and even most websites are essentially standalone applications that connect to hosted services through communications contracts known as application programming interfaces (APIs). These APIs provide a standardised set of services that are built once to serve a wide array of potential ‘clients’ – such as mobile apps across multiple operating systems, desktop applications, websites, and even other services. This is the most effective way to integrate with partners that may want to access systems for effective supply chain integration or value-added services. By using an API, the contract is defined once, so multiple implementations can simply use the same contract for whatever it is they need to do.

The problem many organisations face is that their legacy applications struggle to cope with this modern world of APIs.

In this article, we are going to explore the ‘big six’ challenges of developing an API to expose legacy services. You may be wrapping up an already developed legacy API, or you may need to figure out how to expose existing business logic in a modern API. Whatever your mission, there are practical ways of dealing with each of these challenges.

The Big Six Challenges
1- API Scope
2- Service Aggregation
3- User Authorisation
4- API Contract Consistency
5- Performance
6- Testing and Regression

If you are delivering new digital services, apps or websites on top of existing legacy systems, you want to be able to take these six challenges in your stride and have solutions at the ready.

Challenge #1: API Scope
The first issue is that APIs are often designed in isolation from any consumer of ‘the contract’. The military adage “No plan survives first contact with the enemy” comes to mind. It is all too easy to ‘overcook’ or ‘undercook’ an API without having a clear objective – too many features and too much data that nobody ever ends up using to its potential, or missing features that are essential for using the service.

The easy solution is not to start with the API at all – instead, redesign the user experience, model that experience down to a user journey and finally map the journey to a user interface prototype. From the user journey and prototype, it is possible to model out the API calls that will be required to achieve the result.
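As a minimal illustration of this approach, the journey-to-endpoint mapping can be sketched in code. The journey steps and endpoint names below are hypothetical; the point is that the API scope falls out of the union of what the journey actually needs, rather than being designed in a vacuum.

```python
# Hypothetical user journey for an order-tracking app: each step is
# mapped to the API call(s) it requires. The endpoints are made up
# for illustration, not taken from any real contract.
journey = {
    "view_order_history": ["GET /orders"],
    "inspect_order": ["GET /orders/{id}"],
    "reorder_item": ["GET /orders/{id}", "POST /orders"],
}

def derive_api_scope(journey):
    """Collect the distinct endpoints the journey actually requires."""
    endpoints = set()
    for step_endpoints in journey.values():
        endpoints.update(step_endpoints)
    return sorted(endpoints)
```

Anything the legacy system offers that never appears in this derived scope is a candidate to leave out of the first API release.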

This does not mean the API will be perfect, but having a design reference makes the development scope for that API much clearer.

Challenge #2: Service Aggregation
Not all legacy systems can keep up with the demands of modern environments: out-of-date, insecure encryption, and inconsistent methods of communication – batch files, EDI, SOAP, REST – all with different authentication mechanisms. There is a compelling argument to be made for service aggregation.

Aggregation of services is the bringing together of multiple legacy services under a single authentication mechanism and communication contract. There are actually two patterns for service aggregation like this – one approach is to wrap up multiple systems under a single API and embed the legacy service orchestration into this layer; the other is to wrap up individual services into separate API contracts and then create intermediate server-side API ‘orchestration’ calls that provide combined services as a single endpoint. Either way, we want to have a consistent, modern authentication mechanism for our services, and we want modern clients to be able to use those services in a consistent way.
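The second pattern can be sketched as follows. The two legacy calls are stand-ins (in practice they might hide SOAP or batch-file plumbing), and the orchestration function is the single modern endpoint the client sees.

```python
def fetch_customer(customer_id):
    # Stand-in for a wrapped legacy CRM call (e.g. SOAP behind the scenes).
    return {"id": customer_id, "name": "Acme Pty Ltd"}

def fetch_open_orders(customer_id):
    # Stand-in for a wrapped legacy order-system call (e.g. an EDI extract).
    return [{"order_id": 101, "status": "open"}]

def customer_summary(customer_id):
    """Orchestration endpoint: one modern JSON-shaped response
    assembled from two separately wrapped legacy services."""
    return {
        "customer": fetch_customer(customer_id),
        "open_orders": fetch_open_orders(customer_id),
    }
```

The client makes one call and never learns that two legacy systems, with two different protocols, sit behind it.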

Aggregation of legacy services simplifies the integration effort for modern applications that expect to communicate using Representational State Transfer (REST) over HTTPS and JavaScript Object Notation (JSON). This is usually provided through API middleware. There are a lot of choices when it comes to API middleware – heavyweight solutions like Mulesoft or APIgee are used by many of our larger clients for this task; however, we most often use ACT.Framework for this purpose as it is quick to integrate, extremely fast in production, and able to be deployed at very low cost.

Challenge #3: User Authorisation
Just because I am allowed into a system (i.e. authenticated), that does not mean I should be able to access everything within that system or perform every available action. For example, I may want payroll processing users to access data on specific employees but not others, or to be able to add pay records, but not delete them. This is called user authorisation or permissions – and it is different from authentication.

Just as modern, consistent authentication mechanisms are essential for a modern API, simple ways of managing service authorisation are just as important. Many legacy services don’t have consistent ways of managing authorisation, and there is no industry standard for creating or enforcing complex permissions that we can just adopt. However, we can handle authorisation errors using standardised RESTful approaches through HTTP status codes – this provides your modern API with consistent error-handling behaviour that consumers of the API can leverage.
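The standard status-code convention is: 401 when the caller has not authenticated at all, 403 when they are authenticated but lack the required permission. A minimal sketch, with an illustrative (not standardised) permission model:

```python
def authorise(user, required_permission):
    """Map authorisation outcomes onto standard HTTP status codes."""
    if user is None:
        return 401  # Unauthorized: no valid credentials presented
    if required_permission not in user.get("permissions", set()):
        return 403  # Forbidden: authenticated, but not permitted
    return 200      # OK: proceed to the legacy service

# Hypothetical payroll user: may read and add pay records, never delete.
payroll_clerk = {"name": "sam", "permissions": {"payroll:read", "payroll:add"}}
```

Whatever the legacy system's internal permission scheme looks like, the middleware can translate its refusals into these consistent codes.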

Challenge #4: API Contract Consistency
As soon as a legacy system is being used by modern clients in production, there is an explicit agreement defined by the API as to how and what should be provided to and from the system. This means that changes to that agreement should be taken very seriously. If at all possible, changes to the legacy service that affect data should be versioned into API releases, allowing downstream systems that use the API to update to the newer version of the API over time.

Contract consistency can be very hard to enforce in a legacy system so often implementing the actual versions and handling differences in API version behaviour is offloaded to an API middleware layer instead of to the legacy system itself.
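One common way the middleware handles this is with per-version adapters: the legacy response stays unchanged, and each API version shapes it into whatever that version of the contract promised. The field names and versions below are invented for illustration.

```python
def legacy_employee(emp_id):
    # Stand-in for the unversioned legacy service response.
    return {"emp_id": emp_id, "full_name": "Jo Citizen"}

# One adapter per published API contract version.
ADAPTERS = {
    "v1": lambda rec: {"id": rec["emp_id"], "name": rec["full_name"]},
    "v2": lambda rec: {"employeeId": rec["emp_id"], "displayName": rec["full_name"]},
}

def get_employee(version, emp_id):
    """Route e.g. /{version}/employees/{emp_id} through the matching adapter."""
    return ADAPTERS[version](legacy_employee(emp_id))
```

Old clients keep calling v1 and see the old shape; new clients get v2; the legacy system itself never has to know versions exist.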

Challenge #5: Performance
Very often legacy systems exhibit poor performance. A benefit of wrapping up a legacy system behind API middleware is that this creates new opportunities to improve performance. How can adding middleware improve performance? Most often, this is achieved through caching.

Caching is a process of retrieving commonly used data from a slow system and retaining it temporarily in a fast system – in the case of API middleware, we would store common requests or frequently used common datasets in the API middleware layer and reference those instead of calling the legacy service. This approach requires understanding the design and usage of the data being cached to perform well.

It is also worth noting that caching can introduce security vulnerabilities, so it is important to ensure that requests that use caches still apply the same authentication and authorisation rules as the legacy platform. Depending on whether cached data is shared by all users, by a role, or by a single user, these could be thought of as ‘global caching’, ‘role-based caching’ and ‘profile-based caching’.

When it comes to implementing caching mechanisms, it is supremely important to understand the relationships between the cached data and which API calls make that data no longer up to date – known as a ‘dirty cache’. Mapping out this relationship can provide your legacy applications with high performance while retaining system consistency.

Challenge #6: Testing and Regression
Older systems can have old ways of thinking about testing and quality. This can manifest as regression issues every time a new version of the legacy software is deployed. It can also manifest as very long release cycles. Either way, testing and regression of the legacy system need to be considered when providing an API over the top of the service(s). There is always the risk that gaps in the testing and release process of the legacy platform will flow through to the API and the clients that consume the API.

The benefit of having an API wrapper around the legacy system is this provides you with a service contract against which you can perform functional testing. Functional testing is the process of performing a test on a function or feature of the system – end to end. This is precisely what the API is providing you – a way of calling one feature ‘end to end’.
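A functional test against the contract can be very small. The API stub below is hypothetical (in practice it would be an HTTPS call into the wrapper), but the shape of the test – one feature, end to end, asserting status code and required fields – is the point.

```python
def api_get_employee(emp_id):
    # Stand-in for an HTTPS call to the legacy wrapper API.
    return {"status": 200, "body": {"id": emp_id, "name": "Jo Citizen"}}

def test_get_employee_contract():
    """One feature, end to end: the contract promises a 200 response
    containing at least an id and a name."""
    resp = api_get_employee(42)
    assert resp["status"] == 200
    assert {"id", "name"} <= resp["body"].keys()
    assert resp["body"]["id"] == 42
```

A suite of these, run against every legacy release, turns the API contract into a regression safety net.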

Using a functional testing framework like the ACT.Framework’s Declarative Testing system allows the developer of the API not only to test the API, but also to very quickly develop a comprehensive set of regression tests for the legacy system. These tests provide assurance, upstream of the legacy application, as to the source and nature of regression problems before going into production, while dramatically reducing release cycle times.


Taking into account the ‘big six’ challenges in your technology modernisation efforts can provide the foundations you need to establish the security, consistency, agility, quality and performance that modern consumers expect.


©2003-2020 Pixolut Pty Ltd trading as Thinking.Studio
All Rights Reserved