By Robert Greenwood (Guest Author)
On my API journey one thing has become increasingly obvious: without the ability to let consumers try out your APIs in an environment as close to production as possible, you are essentially just hoping they implement them correctly. They don't.
The utopia is that your API consumers simply get credentials, build and test their app against your API contract, and go live. They are, after all, building against your well-thought-out, standardised contract, right?
I’ve found that the reality is quite different.
Many APIs present the façade of a deeply rooted, front-to-back digital organisation; often this simply isn't the case. Behind that shiny new API sit legacy systems and relational databases whose data was never intended to be shared beyond an internal corporate system.
Those systems are more than likely full of business rules that have evolved over years and were never intended to support any tenant other than your corporate on-premises users. Can you really risk letting unknown API consumers come and “play” in this production environment? Of course not!
So what’s the answer? Simple: use pre-prod, I hear you cry!
But as you start to delve into environment access, more worries surface. Not only are you trying to keep your consumers in their own tenancy in a system never designed for it, but they are now potentially creating data on your pre-prod system that looks an awful lot like real data: personal details, sensitive information. They want it to act just like production, and they want 24/7 access to it. Your pre-prod environment has just become production-like; it needs to be available whenever your consumers need it, and it needs to be GDPR compliant…
So what do we do? Create a suite of mocks, deploy virtual assets, remove “try me”, wait until it goes pop or the regulator is at the door?
Having identified this challenge, our approach has been to create a sandbox built from a mixture of mock responders (virtual assets), algorithms that match contract definitions, and machine-learning techniques that learn what good and bad API traffic looks like.
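To make the idea concrete, here is a minimal sketch of the "learn bad traffic" part: request shapes and the responses the real system gave them are harvested from test-suite runs, then replayed by the sandbox. All class and endpoint names here are illustrative assumptions, not the actual implementation described in this post.

```python
class LearnedResponder:
    """Sketch: learn request -> response pairs from harvested test-suite
    traffic, then replay the matching canned response in the sandbox."""

    def __init__(self):
        # signature -> (status, body) learned from observed traffic
        self.patterns = {}

    @staticmethod
    def signature(method, path, body_fields):
        # Normalise a request to a coarse shape: method, path, and the
        # sorted set of body field names (field values are ignored).
        return (method, path, tuple(sorted(body_fields)))

    def learn(self, method, path, body_fields, status, response_body):
        # Harvested from a test run: remember what the real system returned.
        self.patterns[self.signature(method, path, body_fields)] = (status, response_body)

    def respond(self, method, path, body_fields):
        # Replay a learned response; fall back to a generic 400 for
        # request shapes the sandbox has never seen.
        return self.patterns.get(
            self.signature(method, path, body_fields),
            (400, {"error": "unrecognised request shape"}),
        )


responder = LearnedResponder()
# Learned from test traffic: a POST missing the mandatory 'surname' is a 422.
responder.learn("POST", "/customers", ["forename"], 422,
                {"error": "surname is mandatory"})
status, body = responder.respond("POST", "/customers", ["forename"])
# status == 422, without anyone hand-writing that 4xx mock
```

A real implementation would obviously need fuzzier matching than exact field-name sets, which is where the contract-matching algorithms and ML mentioned above come in; the point is only that the 4xx/5xx knowledge is mined from traffic rather than hand-authored.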
Sounds complex, but in reality it’s not. Delivery teams are great at creating mock responses for the happy-path 200 and 201 cases, but maybe not so great at mimicking the many 4xx business rules and 5xx responses. If we can “learn” these bad-request patterns from data harvested from test suites and respond accordingly, the effort our delivery teams spend creating 4xx and 5xx responses is slashed, if not removed altogether.

If we also add transient object stores for those risky PUT and POST requests, tagging each object with the consumer’s key, then bingo: instant multi-tenancy where there was none before. And now that we have a multi-tenanted object store, we can apply purge and refresh rules to that data to reduce GDPR and data-protection risks automatically. Phew!
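The transient object store is simpler than it sounds. A sketch, under the assumption that every sandbox write is tagged with the calling consumer's API key and that a TTL-based purge rule is enough (the names below are mine, not the product's):

```python
import time
import uuid


class TransientStore:
    """Sketch of a transient object store for sandbox PUT/POST data,
    keyed by the consumer's API key. Illustrative, not production code."""

    def __init__(self, ttl_seconds=3600):
        self.ttl = ttl_seconds
        # (consumer_key, object_id) -> (created_at, payload)
        self.objects = {}

    def put(self, consumer_key, payload):
        # Tag every stored object with the consumer's key.
        object_id = str(uuid.uuid4())
        self.objects[(consumer_key, object_id)] = (time.time(), payload)
        return object_id

    def get(self, consumer_key, object_id):
        # A consumer can only ever read objects tagged with its own key,
        # which is what gives the sandbox its multi-tenancy.
        entry = self.objects.get((consumer_key, object_id))
        return entry[1] if entry else None

    def purge_expired(self):
        # Purge rule: drop anything older than the TTL, limiting how long
        # personal-looking test data lingers (the GDPR/DP concern above).
        cutoff = time.time() - self.ttl
        self.objects = {k: v for k, v in self.objects.items()
                        if v[0] >= cutoff}


store = TransientStore(ttl_seconds=60)
oid = store.put("consumer-a", {"name": "Test User"})
store.get("consumer-a", oid)   # visible to its owner
store.get("consumer-b", oid)   # None: other tenants can't see it
```

The tenancy boundary falls out of the composite key, and the purge rule runs on a schedule rather than relying on anyone remembering to clean up.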
An API sandbox that protects your organisation from the risk of accepting and managing “try out” data in real systems, and that can learn an API’s good and bad behaviours from real traffic analysis, is in my opinion going to be key to the success of your organisation’s API adoption, especially if you have a technology legacy. Why build mock responders for every evolving business rule when you can just learn them?