> (E.g. I can't see a client going "oh!, there's a new business function I haven't seen yet, let me invoke that!".)
Of course not, nobody thinks that. That notion does not exist.
> With the rels/links, you're just moving the coupling away from explicit URLs to the names/identities of rels/links in the response.
I suppose, but that's a much looser coupling than the alternative, i.e. writing in the documentation "the comments URL is http://example.com/comments". With that approach you can't rearrange your URL structure, you can't start using a different domain (e.g. a CDN) for comments, existing clients can't use other sites that implement the same API, etc.
HATEOAS is about building general protocols rather than site-specific APIs. That it makes it easier to change your own URLs is just a bonus.
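To make the looser coupling concrete, here's a minimal sketch (the URLs and the `links` array shape are invented for illustration, not any particular standard): the client looks the comments URL up by rel at runtime instead of baking it in, so the server is free to move comments to a CDN without breaking anything.

```javascript
// Hypothetical response body: the client never hard-codes the comments URL;
// it discovers it by rel name in the representation it receives.
const response = {
  title: "Some article",
  links: [
    { rel: "self", href: "https://example.com/articles/42" },
    { rel: "comments", href: "https://cdn.example.com/articles/42/comments" },
  ],
};

// Resolve a link by its rel name; returns undefined if the server
// no longer advertises that relation.
function resolveLink(resource, rel) {
  const link = (resource.links || []).find((l) => l.rel === rel);
  return link && link.href;
}
```

If the server later serves comments from a different host, only the `href` in the response changes and the client keeps working.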
I'm pretty sure Fielding does, see "improved on-the-fly":
"The transitions may be determined (or limited by) the client’s knowledge of media types and resource communication mechanisms, both of which may be improved on-the-fly (e.g., code-on-demand)."
Hypermedia is great for intelligent clients (e.g. humans) who can adapt to, say, a webpage changing and new fields suddenly showing up in the hypermedia (HTML) that are now required.
However, for an application, it's going to be hard-coded to do either:
1) POST /employee with name=foo&age=1, GET /employee?id=1
Or
2) GET /hateoas-entry-point, select "new employee" link, fill out the 2 fields (and only 2 fields) it knew about when the client was programmed (name, age), post it to the "save employee link", go back to "/hateoas-entry-point", select "get employee" link, fill in the "id=1". (...or something like that).
In either scenario, the non-human client is just as hard-coded as the other--it's either jumping to external URLs or jumping to internal links. Either way those URLs/links (or link ids) can't change and the functionality is just as frozen.
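To put the objection in code, here's roughly what scenario 2 looks like (the rel names, document shape, and fields are all hypothetical). Notice that the rel name and the two fields are baked in at build time, exactly as a hard-coded URL would be:

```javascript
// Look up a link by rel in a hypermedia document (shape assumed).
function findRel(doc, rel) {
  const link = doc.links.find((l) => l.rel === rel);
  if (!link) throw new Error(`rel not advertised: ${rel}`);
  return link.href;
}

// Build the request the client would send. The "new-employee" rel and the
// two fields (name, age) are frozen into the client, just like a URL.
function buildCreateRequest(entryDoc, name, age) {
  return {
    method: "POST",
    url: findRel(entryDoc, "new-employee"),
    fields: { name, age },
  };
}
```

The client is still coupled to something; the question is whether coupling to rel names is meaningfully looser than coupling to URLs.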
Perhaps the benefits of hypermedia would be more obvious if Fielding built a toy example or two that we could all touch and feel instead of just dream up. But so far there seem to be a lot of non-HATEOAS REST APIs that are doing just fine sans hypermedia.
I think it's amazing that you're the first person I've ever seen make this very obvious point, which was the first thing that popped into my head when I first started reading about REST and HATEOAS APIs (or as I would call them, navigational APIs*). It's always seemed to me that REST tutorials and evangelism ought to address this basic objection upfront, if they do have good counterarguments, but I've never seen them do so.
(A good HATEOAS client would take the form of a graph navigator - this style of programming is one I associate more with "AI" than with typical web programming patterns. Which doesn't make it bad necessarily, but the REST material I've seen doesn't actually get into the navigational client programming side of things, which is the actual interesting part.)
This has been my feeling for a long time, and I have never seen a HATEOAS proponent address it to my satisfaction. Frankly I think it is a pretty important point. Are we expecting automated consumers of a REST API to be curious and spontaneous the way human users of the web are?
For me, the 2 things that set off my bullshit detector about HATEOAS are these:
1. Everybody who buys into it, including Roy Fielding himself, has to constantly say "no, that's not what I meant" and "you're not doing it right" if a REST service doesn't use HATEOAS. It reminds me of the response you get when you point out to college kids how badly Communism works out in the real world (see the USSR, North Korea, et al.): "No no no, it could really work, just nobody has done it right yet."
2. All the arguments eventually come back to an Appeal to Authority: "Well, Roy Fielding's dissertation says...". I sense that the real argument eventually comes down to whether a service can be called RESTful according to Fielding's dissertation, vs whether or not HATEOAS is actually a good idea.
I use HATEOAS because I derive very specific benefits from the client/server decoupling it enables. I have used it to build a large business application.
I agree that far too many people who talk about HATEOAS have never really used it on real projects, and that is unfortunate. However, I suggest you avoid throwing out the concept just because the messengers are inadequate.
With the exception of spiders and other AI-like things, no, we are not expecting clients to spontaneously consume RESTful services in meaningful ways. REST clients are generic. They don't know anything about specific services, thus allowing those services to evolve independently. A client that is coupled to a specific service is not RESTful, nor is any API that can only be consumed by such a client.
Your comment is correct, despite the downvotes. The hypermedia isn't there so AI or spiders can navigate; it is there to reduce coupling.
--
A Rant follows. Disclaimer: I'm not an expert on the subject so feel free to correct me or anything.
The WWW itself is "RESTful". Even Hacker News is RESTful. You don't care about the URLs that show in your address bar. You don't construct them based on an ID-number on the side of each post. You just click around and submit forms.
With hypermedia, your consumer doesn't need to know about those specifics of URL construction. URLs are transparent. You just query for Resources and follow links.
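A tiny sketch of what "just query for Resources and follow links" means for a client (the in-memory "site" and the `links` shape are stand-ins for real HTTP responses; everything here is invented for illustration):

```javascript
// A toy in-memory "site" standing in for HTTP fetches (URLs hypothetical).
const site = {
  "/": { links: [{ rel: "posts", href: "/posts" }] },
  "/posts": { links: [{ rel: "latest", href: "/posts/7" }] },
  "/posts/7": { title: "hello", links: [] },
};
const fetchDoc = (url) => site[url];

// Walk a path of rels from an entry document. The client knows only the
// entry URL and the rel names; every other URL is discovered on the way.
function follow(fetchDoc, entryUrl, rels) {
  let doc = fetchDoc(entryUrl);
  const trail = [entryUrl];
  for (const rel of rels) {
    const link = doc.links.find((l) => l.rel === rel);
    doc = fetchDoc(link.href);
    trail.push(link.href);
  }
  return { doc, trail };
}
```

The URLs in `trail` are opaque to the client; the server could rename them all tomorrow.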
YES, absolutely ZERO coupling is MUCH easier with boring CRUD-like stuff, such as Microsoft's OData, GData or AtomPub.
Maybe when you're writing a good consumer tuned for usability (like a Twitter client) you, the app builder, will need to know some details of the service beforehand, but even then, having hypermedia would be cool. With a nice and pretty RESTful API, Twitter could just roll out new features, such as 'people who recently retweeted you', and you'd have a new section show up in your app instantly, since it discovers resources. Maybe you could even, like, load icons for new sections via links on the API itself...
Another example: I used to work on an app that used hypermedia: an Enterprise Content Manager with an OData API. Enterprise people used a decoupled enterprise client called Excel to connect to our Enterprise server and build random reports themselves.
So say you want to get data from a RESTful web API, do you have to customize your generic REST client? Because everything I've ever written that called an external API had to know what it was looking for in advance. Like to interact with Twitter's API, I went to their documentation page and read up on what URLs to call for the information I needed.
If you want a client coupled to a specific service then you don't want REST, you want a classic client-server architecture, which is more or less the antithesis of REST. But everyone insists on calling it REST when it goes over HTTP, then they complain that the apple tastes nothing like an orange.
> both of which may be improved on-the-fly (e.g., code-on-demand).
It's funny that you're using this quote to prove your point, when it actually identifies the perfect example that you're looking for.
Imagine that we have a relatively "dumb" client that can only understand our media type and follow URLs to the next resource. We already agree that this is useful when you have an intelligent actor (a human), so let's move on to the part that you're interested in: "improved on-the-fly".
Let's take our dumb client and add one feature: A Javascript engine. This gives us the "code-on-demand" that Fielding referenced. You can now improve your dumb little client, by adding application logic that can be executed on-the-fly. Your client can now be upgraded to understand new media types, or to change the behavior of interaction with existing media types. And yes, this means the client can be upgraded even when there is no human interaction.
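Here's a toy sketch of that upgrade path (the handler registry and media type names are invented for illustration): the client ships knowing one media type, and the server can teach it a new one by sending code along with the data.

```javascript
// The client ships with one built-in media type handler.
const handlers = { "application/json": (body) => JSON.parse(body) };

// Code-on-demand: install a handler for a new media type at runtime.
// In a browser this would be a script the server served; the Function
// constructor keeps the sketch self-contained.
function installHandler(mediaType, handlerSource) {
  handlers[mediaType] = new Function("body", handlerSource);
}

// Interpret a response body using whatever handlers are installed.
function interpret(mediaType, body) {
  const h = handlers[mediaType];
  if (!h) throw new Error(`no handler for ${mediaType}`);
  return h(body);
}
```

The client's built-in knowledge never changed; its capabilities did.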
Want a real world example?
I recently wrote a javascript application that automatically runs a set of comprehensive tests on HTTP requests (and their caching properties) and collects the data. I was able to turn the dumb client into a smarter one (for my purposes). It has a single entry point and programmatically follows many different kinds of links. My tests run on approximately 1 billion clients, and I didn't have to update any client software to make that happen. If I want to check the cache behavior of resources behind my business partner's proxy server, guess what: I can upgrade his client (his browser) on the fly to do the testing for me.
Want another real world example?
We used javascript in a webview (dumb client) on mobile platforms to create unit tests for some native functionality. Our dumb client happens to have a bridge to the native code on the mobile device. This allows us to write one set of unit tests in javascript and run it on multiple mobile platforms. This is a great example that falls outside of the standard desktop web browser examples.
So let's take a look at your example.
> In either scenario, the non-human client is just as hard-coded as the other--it's either jumping to external URLs or jumping to internal links.
We were smart enough to develop a dumb client that can execute code on demand. Now let's upgrade our client to validate the input fields _before_ POSTing back to the server. Bam, we just improved our client on-the-fly, without modifying the client software.
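A rough sketch of that on-the-fly validation (the form document shape and the rule vocabulary are assumptions, not any particular standard): the server ships validation rules with the form, and the generic client runs them before POSTing.

```javascript
// Hypothetical form description delivered by the server alongside the
// resource; the client has no built-in knowledge of these fields.
const form = {
  action: "/employees",
  fields: [
    { name: "name", required: true },
    { name: "age", required: true, min: 0 },
  ],
};

// Validate user input against the server-supplied rules before POSTing.
function validate(form, input) {
  const errors = [];
  for (const f of form.fields) {
    const v = input[f.name];
    if (f.required && (v === undefined || v === "")) {
      errors.push(`${f.name} is required`);
    } else if (f.min !== undefined && Number(v) < f.min) {
      errors.push(`${f.name} below ${f.min}`);
    }
  }
  return errors;
}
```

Add a field or a rule on the server, and every deployed client enforces it on the next fetch, with no client release.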
So you may be thinking: But how useful is this to me? If I'm developing a new network-based business analysis tool for my company, it's probably not going to run on 1 billion clients. Do I really need to consider hypermedia as the engine of application state and on-the-fly updates? Well no, of course not; that would be overkill. As Fielding put it:
"REST is intended for long-lived network-based applications that span multiple organizations. If you don’t see a need for the constraints, then don’t use them."
I would not necessarily expect my client application to invoke these functions automatically. It can be very helpful when writing client apps to have a human making these decisions.
One of the primary benefits of HATEOAS is that state management of your resources becomes easier. My client does not need to burden itself with interrogating the state of the resource, knowing the available business functions and when they should be invoked, knowing the location of these business functions, etc. This is managed on the server via the API, and the client can stick to "following the links."
e.g.
"Create Blog Post" is a REL that I have discovered at application root. After invoking this function and creating the post, I receive a 201 with a post resource that looks like below:
PostResource, SomeData, Link -> "/Modify Blog Post", Link -> "/Delete Blog Post", etc.
In the above example, my client application can display the links it is given and as a user I will modify or delete the post by simply selecting these functions. The client app is acting as a state machine for my user, and they are given the option to transition to various states based on the links returned with each response from the API.
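Sketching that state machine idea in code (the 201 body shape and rel names here are made up for illustration): the client's job reduces to listing whatever transitions the current response advertises.

```javascript
// Hypothetical 201 response body after creating the post: the server
// decides which transitions are valid from this state.
const created = {
  data: { title: "Hello HATEOAS" },
  links: [
    { rel: "modify", href: "/posts/1/modify" },
    { rel: "delete", href: "/posts/1/delete" },
  ],
};

// The client never decides which business functions exist; it just
// presents the rels the current state advertises.
function availableActions(resource) {
  return resource.links.map((l) => l.rel);
}
```

If the server later decides published posts can't be deleted, it simply stops returning the "delete" link, and every client's UI follows suit.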
There are many other benefits to HATEOAS, of course, one of which is that it allows your application/API to grow and change over time (maybe I want to change a non-core URI down the road?). This can be done without a lot of pain points by leveraging HATEOAS.
TLDR: HATEOAS constrained APIs are meant to provide direction (in the form of links) to consumers. This can lead to lightweight client agents that do not need to worry about application state.
The assumption here seems to be that a user is interacting directly with the client. In that case I can see where HATEOAS has some benefits. I see far less benefit in using HATEOAS for systems that are primarily used within lower layers of a system where the primary consumer is another computer, yet REST purists generally don't seem to acknowledge the difference. So tell me, why should I use HATEOAS in an API designed for non-human consumers?
A web app is a browser for your business domain. When implemented on top of a REST/HATEOAS service, it should be a layer that knows how to interpret and present interactions to work with the business objects coming across the wire.
URIs and hypermedia just turn out to be a really good way to structure this kind of architecture.
(E.g. I can't see a client going "oh!, there's a new business function I haven't seen yet, let me invoke that!".)
With the rels/links, you're just moving the coupling away from explicit URLs to the names/identities of rels/links in the response.
Unless you anticipate changing your URLs often, I don't see this as being terribly useful.