It is essentially an attitude and value problem. See Torvalds' email titled "WE DON'T BREAK USER SPACE" and Rich Hickey's talk "Spec-ulation" on YouTube.
Consequently, the fix is to move to another vendor.
There is no sure-fire technical solution. So you name and shame, far and wide, until it affects their bottom line.
We're both in a really niche market and the other vendors don't seem much better!
This is not a problem that has a technical solution. This requires a business solution—stop doing business with that vendor. Whatever service agreement exists between your companies is either not being enforced or was negotiated by a drunken mule.
Appreciate the input. You aren't wrong!
Really depends on your infrastructure, but I'd set up some snapshot tests that just make calls to the APIs with known responses, and run that in a cronjob and have it alert you if it fails.
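A rough sketch of what that could look like in Python with requests, run from cron; the endpoint list, the webhook URL, and the snapshot files are placeholders, not anything from the actual vendor:

```python
# Minimal sketch of a cron-driven snapshot check. ENDPOINTS, the webhook
# URL, and the snapshot files are placeholders, not the vendor's real ones.
import json
import sys
import requests

ENDPOINTS = {
    "devices": "https://vendor.example.com/api/devices",  # hypothetical
}
ALERT_WEBHOOK = "https://hooks.example.com/alerts"  # hypothetical


def check(name, url):
    """Return a list of problems found for one endpoint."""
    problems = []
    resp = requests.get(url, timeout=30)
    if resp.status_code != 200:
        return [f"{name}: HTTP {resp.status_code}"]
    try:
        body = resp.json()
    except ValueError:
        return [f"{name}: response is not valid JSON"]
    # Compare the keys of the first record against a saved "known good" list.
    with open(f"snapshots/{name}.json") as f:
        expected_keys = set(json.load(f))
    if isinstance(body, list) and body:
        missing = expected_keys - set(body[0].keys())
        if missing:
            problems.append(f"{name}: missing keys {sorted(missing)}")
    else:
        problems.append(f"{name}: empty or unexpected body")
    return problems


if __name__ == "__main__":
    failures = [p for name, url in ENDPOINTS.items() for p in check(name, url)]
    if failures:
        requests.post(ALERT_WEBHOOK, json={"text": "\n".join(failures)}, timeout=30)
        sys.exit(1)
```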
They haven't so far broken the historical data, so I can't directly compare a response to a known good, sadly.
Don't these clowns version their API?
Not that I've seen! No endpoint tells me anything about the API or its version. Would that be in the response headers, maybe? I'll check, but they're bad at change control, and they use slightly different versions of their systems for each customer, so there's not really a unified version number anyway.
edit: Nothing in the headers.
I mean.... We version ours in the url.
/api/v1/some_endpoint
That way if, for whatever reason, you need to roll a breaking change, you do it in a new version mapped to a new url.
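As a toy illustration of that layout (the framework and field names here are stand-ins, not the vendor's actual stack), a breaking change lands under a new prefix while the old one keeps its contract:

```python
# Toy illustration only; Flask and the field names are stand-ins, not the
# vendor's actual stack. The old route keeps its contract.
from flask import Flask, jsonify

app = Flask(__name__)


@app.get("/api/v1/devices")
def devices_v1():
    # v1 contract: callers rely on "hostId"
    return jsonify([{"hostId": 1, "name": "printer-01"}])


@app.get("/api/v2/devices")
def devices_v2():
    # Breaking rename lives only under the new prefix
    return jsonify([{"deviceId": 1, "name": "printer-01"}])
```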
I'm sorry for what you're going through, I've been there before.
They've never rolled out a breaking change INTENTIONALLY, which is a fun distinction!
You can compare the status to a 500 or a 404 though, to see if it’s running?
When it breaks, you’ll know.
I do that, at least. Most recent problem was one endpoint returning [] instead of a bunch of JSON, still with a 200.
Oh duh, I should have known someone would return an empty object/collection or a string or something (“Error”) and 200!
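A minimal check for that failure mode might treat a 200 with an empty or non-JSON body as a failure instead of trusting the status code alone (sketch only):

```python
# Sketch: treat a 200 with an empty or non-JSON body as a failure instead of
# trusting the status code alone.
import requests


def healthy(url):
    resp = requests.get(url, timeout=30)
    if resp.status_code != 200:
        return False
    try:
        body = resp.json()
    except ValueError:
        return False  # e.g. a bare "Error" string that isn't JSON
    # An empty list/dict from an endpoint that normally returns data is a failure.
    return bool(body)
```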
I feel like sometimes monitoring is a bit like whack-a-mole.
That's my feeling, too.
I'm just thrown by you saying you have a vendor that sucks donkey balls. If you only have one that sucks donkey balls, that seems unreal to me.
My group supports around 65 applications, and I'd find it a hell of a lot easier to list the vendors that don't suck donkey balls.
I think there's one. Maybe.
You might be losing more money using this one than you would by switching to a more expensive but competent provider.
I have only come across one provider that we couldn't replace, and in that case we got them to export their data directly instead of wasting time using their awful API.
Luckily it's not up to me, but I agree.
I've been complaining about the API for their main custom application, but they also have a ton of data in Salesforce, and they screwed up when they set it up, so it's not multi-tenanted or anything. I can't have access to the API because I would be able to see and modify every customer's data.
They're awesome.
Synthetics. The big question is how often to run the checks and how many you'll need to make for your use cases.
In my last place of work we just used a small Perl script for such monitoring. You recursively parse the whole body, save which paths exist and what type of data they have into a DB, and when something changes it posts an alert to a webhook. Your case is a bit more complicated, but not by much.
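A rough Python equivalent of that approach, assuming read-only GET endpoints; file names are illustrative and the alerting hookup is left out:

```python
# Rough Python equivalent: walk the JSON body, record every key path and its
# value type, and diff against the previous run. File names are illustrative.
import json
import requests


def fingerprint(value, path="$"):
    """Yield (path, type_name) pairs for every node in a JSON document."""
    yield path, type(value).__name__
    if isinstance(value, dict):
        for key, child in value.items():
            yield from fingerprint(child, f"{path}.{key}")
    elif isinstance(value, list) and value:
        # Sampling the first element is usually enough to catch shape changes.
        yield from fingerprint(value[0], f"{path}[0]")


def diff_against_saved(url, saved_file):
    """Return a list of schema changes since the last run, then update the file."""
    new = dict(fingerprint(requests.get(url, timeout=30).json()))
    try:
        with open(saved_file) as f:
            old = json.load(f)
    except FileNotFoundError:
        old = {}
    changes = [f"added {p} ({t})" for p, t in new.items() if p not in old]
    changes += [f"removed {p}" for p in old if p not in new]
    changes += [f"type change at {p}: {old[p]} -> {t}"
                for p, t in new.items() if p in old and old[p] != t]
    with open(saved_file, "w") as f:
        json.dump(new, f, indent=2)
    return changes
```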
I'm not sure what you mean on the first part. I've read that you should be able to sort of walk through a RESTful API via references to other "tables", but this API doesn't work like that. There's no endpoint that lists endpoints.
All of the responses are dozens to hundreds of lines of JSON, often with a few of the fields for each entry present or absent depending on the record.
Do they use OpenAPI or Swagger or something? If so, you should be able to do something like point changedetection.io at their Swagger docs page.
They generate a Swagger file for me on request, with a lag time of usually weeks, but for only one of the APIs. The others are documented in emails, basically. This is a B2B type of thing; they are not publicly available APIs.
Ask them to generate a schema file that you can download from the API, or at least an endpoint that returns a hash of the current API schema file. That's cheap versioning that tells you when something changes.
You can always use the Swagger schema to verify the API. Ask some basic questions about what should always be true and put that into validation scripts. If they use a framework, HEAD requests usually tell you some things.
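The hash idea is cheap to sketch; the schema URL here is a placeholder for wherever the vendor exposes (or emails) the file:

```python
# Sketch of the hash idea; the schema URL is a placeholder for wherever the
# vendor exposes the file.
import hashlib
import requests

SCHEMA_URL = "https://vendor.example.com/api/openapi.json"  # hypothetical
LAST_HASH_FILE = "schema.sha256"


def schema_changed():
    """Return True if the published schema differs from the last run."""
    digest = hashlib.sha256(requests.get(SCHEMA_URL, timeout=30).content).hexdigest()
    try:
        with open(LAST_HASH_FILE) as f:
            previous = f.read().strip()
    except FileNotFoundError:
        previous = ""
    with open(LAST_HASH_FILE, "w") as f:
        f.write(digest)
    return bool(previous) and digest != previous
```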
The last really bad vendor had an OpenAPI page that listed the endpoints, but the API wouldn't adhere to the details given there. I discovered that their website used the API all the time, and by surfing it I was able to discover which parameters were required, etc.
The last idea is statistics. Grab any count data you can get, like from pagination metadata, and create a baseline of available data over time. That gives you an expected count, and you can detect significant divergences.
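A sketch of that baseline idea, assuming the pagination metadata exposes a total count ("totalCount" and the file name here are guesses, not the vendor's fields):

```python
# Sketch of the baseline idea, assuming pagination metadata exposes a total
# count; "totalCount" and the file name are guesses, not the vendor's fields.
import csv
import datetime
import statistics
import requests


def record_and_check(url, history_file="counts.csv", threshold=0.5):
    """Append today's count and return True if it diverges sharply from the median."""
    total = requests.get(url, timeout=30).json().get("totalCount", 0)
    with open(history_file, "a", newline="") as f:
        csv.writer(f).writerow([datetime.date.today().isoformat(), total])
    with open(history_file, newline="") as f:
        history = [int(row[1]) for row in csv.reader(f)][-30:]
    if len(history) < 5:
        return False  # not enough baseline yet
    baseline = statistics.median(history[:-1])
    return baseline > 0 and abs(total - baseline) / baseline > threshold
```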
I tend to show up at the vendor's IT guys in person and bribe them into helping me behind their bosses' backs. Chocolate, coffee, and some banter can do wonders.
I'm 3,500 miles from the vendor's devs, sadly.
Asking them to put the Swagger file itself behind the API is a good idea. Their dev backlog is 3-24 months.
I used the same trick to determine the required headers and parameters - I checked their website which uses the same API.
The source of their delays is that different devs or teams "own" different endpoints and make their changes without documenting. It's annoying, stuff like the same data being in field "hostId" on one endpoint but "deviceId" on another.
Are any of their APIs a GET that returns lists? I create a lot of automated API tests. You might be able to GET a list of users (or whatever), then pick a random 10 user_ids and query another API, say user_addresses, passing in each ID one at a time and verifying a proper result. You don't have to verify the data itself, just that the values you care about are not empty and the keys exist.
You can dynamically test a lot this way, and if a key gets changed from 'street' to 'street_address' your failing tests should let you know.
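Something like this sketch, with the base URL, endpoint paths, and key names standing in for the real ones:

```python
# Sketch only; the base URL, endpoint paths, and key names stand in for the
# real ones.
import random
import requests

BASE = "https://vendor.example.com/api"  # hypothetical


def check_user_addresses(sample_size=10):
    """Sample some users and verify the detail endpoint has the keys we rely on."""
    failures = []
    users = requests.get(f"{BASE}/users", timeout=30).json()
    for user in random.sample(users, min(sample_size, len(users))):
        detail = requests.get(f"{BASE}/user_addresses/{user['user_id']}",
                              timeout=30).json()
        for key in ("street", "city", "postal_code"):
            if not detail.get(key):
                failures.append(f"user {user['user_id']}: missing or empty '{key}'")
    return failures
```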
Unfortunately, on the main API I use of theirs, there's one endpoint with a list of objects and their IDs, and those IDs are used everywhere else, but the rest of the endpoints aren't connected. I can't walk e.g. school > students > student > grades or anything like that.
I made my career out of automated testing with a focus on APIs. I'm not aware of any easy tool to do what you want. The easiest way to quickly whip up basic API tests that I've found is Python/pytest with requests. You can parameterize lots of inputs, run tests in parallel, easily add new endpoints as you go, benchmark the APIs for response times, etc. It'll take a lot of work in the beginning, then save you a lot of work in the end.
Now, AI will be able to make the process go faster. If you give it a sample input and output it can do 95% of a pytest in 10s. But beware that last 5%.
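For reference, a bare-bones pytest + requests version of that pattern; the endpoints and required keys are placeholders to fill in from whatever documentation exists:

```python
# Bare-bones pytest + requests sketch; endpoints and required keys are
# placeholders, not the vendor's real ones.
import pytest
import requests

BASE = "https://vendor.example.com/api"  # hypothetical

CASES = [
    ("/devices", {"hostId", "name", "status"}),
    ("/schools", {"id", "name"}),
]


@pytest.mark.parametrize("path,required_keys", CASES)
def test_endpoint_shape(path, required_keys):
    resp = requests.get(BASE + path, timeout=30)
    assert resp.status_code == 200
    body = resp.json()
    assert isinstance(body, list) and body, "empty or non-list body"
    missing = required_keys - set(body[0])
    assert not missing, f"missing keys: {sorted(missing)}"
```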
Yeah I would use python and pytest, probably.
You need to decide what you expect to be a passing case. Known keys are all there? All values in acceptable range? Do you have anything where you know exactly what the response should be?
How many endpoints are there?
A couple of approaches: one is setting up a batch process on a frequent interval to call the API and run tests against the responses; another is to have the service consumer publish events to a message bus and monitor the events. It depends on things like whether I own both the service and the client or just the client, whether I can make changes to the client or only add monitoring externally, and whether I can run test requests without creating/updating/destroying data (like a read-only service) or need real requests to observe.
The main one I have issues with is a read only API. I guess I make it harder on myself from this perspective by not maintaining one big client, but lots of separate single-purpose tools.
Yeah, then I would set up a call or set of calls on an interval to test the responses: if a critical test fails, send an alert; treat less critical failures as warnings and send a report periodically. In either case I'd log and archive all of it, so if they're bullshitting or violating contract SLAs I'll have some data to reference.
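The archival piece can be as simple as keeping timestamped copies of every probe so there's a paper trail for SLA disputes (the directory layout here is a placeholder):

```python
# Sketch of the archival piece: timestamped copies of every probe as a paper
# trail for SLA disputes. Directory layout is a placeholder.
import datetime
import pathlib
import requests

ARCHIVE = pathlib.Path("api_probes")


def probe_and_archive(name, url):
    """Fetch the endpoint and save the raw response with a UTC timestamp."""
    resp = requests.get(url, timeout=30)
    stamp = datetime.datetime.now(datetime.timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    out = ARCHIVE / name / f"{stamp}_{resp.status_code}.json"
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(resp.text)
    return resp.status_code
```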
They do have an API Accuracy SLA but it's not defined anywhere so we do our best. They've only avoided penalties a few months out of the last several years!
Oof, that is a rough one. If they are just absorbing the penalties, it sounds like the penalties need to be increased so there's a real financial incentive to actually do the work, but in the meantime I'd just collect and report on as much data as I could.
Check out Semantic Versioning if they use it.
It's very nice.
No, they don't have version numbers and they don't provide release notes when they change things intentionally. The more common problem for me is when they break it and don't notice.