Hacker News
Introspected REST: An Alternative to REST and GraphQL (introspected.rest)
190 points by zaiste on Nov 10, 2018 | hide | past | favorite | 59 comments


It looks like the meat starts at section 9.

- Sections 1-8 are a summary of REST, HATEOAS, their problems, and reflections on the problem that every endpoint is `application/json` and not something more specific to the intended use case.

- Sections 9-11 discuss a variant of REST that doesn't use Media Type differentiation for capabilities, but rather composable nuggets of semantics called MicroTypes. The MicroType schemata are accessible through an introspection interface; one suggestion is to use (cacheable) OPTIONS. Clients would transmit their desired MicroTypes to an endpoint using the Accept-Type header.

If I'm reading this correctly, the author is suggesting the clients be allowed to specify the middleware chain used by the server to compose the response data (cf pagination, query, filter MicroTypes), and that these programmable chains are attached to every traditional REST endpoint collection. In one sense, this is an attempt to marry the resolver chain concept from GraphQL into the world of RESTish JSON APIs.
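The idea can be sketched in a few lines. This is not from the article; it is a hypothetical model of MicroTypes as composable response transformations, where the client names the chain and the server applies it in order:

```python
# Sketch (not from the spec): MicroTypes modeled as composable response
# transformations; the client names the chain, the server applies it in order.
MICROTYPES = {
    "filter": lambda items, p: [i for i in items if i.get(p["field"]) == p["value"]],
    "paginate": lambda items, p: items[p["offset"]:p["offset"] + p["limit"]],
}

def resolve(items, chain):
    """Apply each client-requested MicroType like a middleware stack."""
    for name, params in chain:
        items = MICROTYPES[name](items, params)
    return items

cars = [{"id": 1, "color": "red"}, {"id": 2, "color": "blue"}, {"id": 3, "color": "red"}]
page = resolve(cars, [("filter", {"field": "color", "value": "red"}),
                      ("paginate", {"offset": 0, "limit": 1})])
print(page)  # [{'id': 1, 'color': 'red'}]
```

The performance concern above falls out directly: the server executes whatever chain the client asks for, so an unconstrained chain is effectively a custom query.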

I think you could pull this off only with a dedicated server library because I don't see real world developers using this technique successfully on the current crop of HTTP webserver APIs. There's also the very real performance issues that come up when clients are allowed to control resolution; we see companies using GraphQL in production locking down the ability to do custom queries.


I think the best argument the author could have made is an example. If a proposal like this cannot be illustrated in an example (like it can be done for REST or GraphQL), its likelihood of adoption is going to be low...


What I got out of it... REST as practiced now is bad. To find out how to make it good, read this 40 page manual. I prefer to use the system that doesn’t require long form specification reading.


> reflections on the problem that every endpoint is `application/json` and not something more specific to the intended use case.

This is a feature, not a bug. If a JSON decoder can interpret the format, then it is by definition `application/json`. The alternative is to go back to myriads of incompatible data exchange formats, with each dev writing their own buggy parser or installing yet another dependency into their application.
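For instance, one stock decoder handles every JSON payload alike, whether the API advertises plain `application/json` or a vendor suffix type like `application/vnd.example+json` (the documents below are made up for illustration; the first is HAL-style):

```python
import json

# One generic decoder works for any JSON API, regardless of which
# media type name the server advertises.
hal_doc = json.loads('{"_links": {"self": {"href": "/orders/1"}}, "total": 30}')
plain_doc = json.loads('{"total": 30}')

print(hal_doc["total"] == plain_doc["total"])  # True
```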


Not sure if this allows clients to compose a server-provided selection of middleware, or if it allows them to provide completely custom middleware?

If it's the latter, as well as performance I'd be deeply concerned about security.


There is some good stuff in here, but it misses the biggest pain point developers face when trying to make a robust hypermedia API: tooling. There is a lack of tooling on the server side and almost no tooling on the client side. Serving a RESTful API requires a lot of upfront investment. Then convincing the client developers to take advantage of the hypermedia affordances requires a lot of time and energy. If one does manage to convince them, the client tooling requires even more upfront investment. The only attempt at client tooling I have seen is the work done by Mike Amundsen.

One thing GraphQL got right was focusing on the client tooling. If the client developers are bought in to the protocol/specification then the server developers will naturally come along. The reverse has not been true in my case.

This opinion is based on my experience building hypermedia APIs, consuming hypermedia APIs and helping with the HAL hypermedia spec.


This tooling-first approach is basically what we're going for with http://www.hyperfiddle.net/ – the requirements of the killer app are what drive the hypermedia API, the mechanics of which are extremely innovative and weird. One key difference is that Hyperfiddle's I/O layer is decoupled from transport; it is not limited to HTTP or client/server, which opens a whole spectrum of I/O configurations with different performance characteristics, including "ship the API definition over there so it can run near that secure database", the way HTML/JavaScript layer apps are shipped over the wire.

We think Hyperfiddle can emit Siren-compliant representations (or any other general-purpose hypermedia mimetype), though Siren cannot express the entire continuum of I/O and data ownership that Hyperfiddle's protocol can (and thus Hyperfiddle probably cannot be built directly on Siren – the tools must come first).

We solve the caching problems with an immutable database (Datomic), which permits idealized caching of everything at every layer. If the constraint of an immutable database sounds like a dealbreaker today, it probably is; but it unlocks a whole new frontier of capabilities that apps ten and twenty years from now will require. Why are we designing new protocols for the requirements of yesterday?


This is definitely interesting and I will spend some more time checking this out. This tooling looks more advanced than pretty much anything else out there, but from what I can tell it is a server-side driven approach. I have been trying to imagine what tooling that is client-side first looks like. It should not matter _how_ a server implements something, such as Siren, only that it properly follows the semantics of the Siren protocol.


It can be server driven, but it doesn't have to be. It depends where the data is, what the permissible access patterns are, and which process is responsible for enforcing them. These days the data that matters is in server-side databases with tightly controlled access patterns, so it is pretty weird for the client to be in charge. Hyperfiddle's data protocols are sufficiently abstract to run anywhere in the continuum of data ownership [1], but so far we've only bothered to implement the parts of it that matter to today-era businesses. Are we thinking about this the same way or have I missed the mark? [1] http://www.dustingetz.com/:urbit-continuum-of-data-ownership...


Wow, this is really interesting. Wish I worked on it!

What tech and languages do you guys use? And are you hiring?


Just would like to say that being able to leverage an immutable DB at work is amazing. Just give it a time value (timestamp) and you can roll back to any point in the DB's history. Has saved so much heartache between debugging and undoing changes.


which database?


Hi, author here. That was a one-person publication, so it was tough to come up with tooling as well :)

I think we just wanted to show that it is possible to come up with a different (better, we think) architecture than REST, and that's a big thing I think. Relevant answer: https://news.ycombinator.com/item?id=18425581


Also submitted a couple months ago. Only got a few comments back then: https://news.ycombinator.com/item?id=15211604

The first sentence of TFA captures one of the most annoying things about REST discussions: "In this manifesto, we will give a specific definition of what REST is, according to Roy, and see the majority of APIs and API specs (JSONAPI, HAL etc) fail to follow this model."

At this point I've read or heard dozens of claims like this, that almost everyone is doing REST wrong. It's well past time to stop blaming all the REST implementers in the world for being too dumb to understand Fielding's brilliant vision. If most software developers can't get REST right, then either proponents have consistently done a crappy job explaining the idea, or it's not as great an idea as they think.

Skimming the table of contents, it looks like the authors have thought deeply about the problems with REST and come up with some well-reasoned solutions. So they're answering the "REST vs Introspected REST" question. But much more relevant to me is the "Introspected REST vs GraphQL" question. What would make someone choose this over GraphQL? Introspected REST has a lot of catching up to do to match GraphQL's tooling and market share.


Author here: The problem with GraphQL is that it has to re-invent everything on top of HTTP. Introspected REST reuses HTTP properties and architecture by default, making it more robust and compatible with existing clients. Also related: https://news.ycombinator.com/item?id=18425581


Well, the problem with revolutionary (take this adjective literally) ideas is that most others won't understand their essential differences and bend them back into familiar perspectives.

In the case of REST, it has been mistreated as SOA-WS for JSON or an RPC for SPAs... hardly anyone bothers with resource links.

I would add that hardly anyone bothers with RDF JSON, while hurriedly reinventing the XML Schema horror movie ;)


The developers who would hypothetically adopt and implement a new paradigm like this would probably love to just see some straightforward examples of client and server usage.


Can we just have back SOAP/WSDL and return to rational design and interoperability, or at least limit "REST" to a web facade? I think after 10-15 years of mucking around with "REST" (or what people think it is, as rightfully pointed out in TFA) it's very clear that there's not going to be a common understanding, let alone standard for it. As a freelancer having worked on maintaining many "REST" trainwrecks, I can tell you that naive REST spaghetti is absolutely much worse than any SOA design ever was. The technical debt and high maintenance might not be apparent while you happily code away your new "microservice"; but I can assure you you've just traded a tiny bit of upfront design for a long-term puzzle you're leaving behind.

Did you know WSDL has supported "REST"-like encoding of parameters in URLs and operations as HTTP verbs since 2001?


Even assuming we'd want XML as the message/schema format back (with all the security, performance and readability issues it entails), SOAP/WSDL/WS-* was hardly any better than REST in standardization back in the day.

For instance, there were multiple different ways you could format your message (RPC/encoded, RPC/literal, Document/encoded, Document/literal, Document/literal wrapped) and different implementations supported different formats. There were all kinds of extra features that were never supported across board like multi-part messages etc.

Before WS-I there was practically zero guarantee that two SOAP implementations would ever be able to interoperate (and please remember, while REST has no standard in practice, it's dead simple to implement REST by hand - the same couldn't be said about SOAP!). WS-I only came out in 2004 or so, and by then it was already too late. I'd say the SOAP ecosystem is the definitive case study for trainwrecks.

If you really want a standard method for RPC then you're much better off with a modern implementation like gRPC or Thrift or Cap'n Proto. Please don't go back to the nightmare called SOAP.


I feel that the point is not about the transfer representations but about the interaction model. REST makes sense as long as you can sanely map your interactions onto simple modifications of something that can be meaningfully described as a "resource". For a typical application that implements non-trivial business processes this means that you either expose low-level implementation details (i.e. how you internally represent the progress of some process) in your API, or you implement a "REST" API that is sufficiently far from what REST is supposed to mean that it stops making sense to use that moniker.

On the other hand SOAP is a good match for such applications because it is simply an RPC mechanism, albeit with unnecessarily complex marshalling and transport layers underneath.


It's almost like interoperability is hard or something.


The worst developer hell I've ever been through was fighting through figuring out poorly documented SOAP APIs.

Why the hell would you wish that pain on anyone.


> The worst developer hell I've ever been through was fighting through figuring out poorly documented SOAP APIs.

Except for undocumented "REST" APIs with totally arbitrary encodings of parameters into URL path steps and query params, and needless excessive network roundtrips. Frameworks such as JAX-RS, Spring, and Swagger/OpenAPI even invite you to use "subresources" as out-of-band agreed URL schemes ("lousy coupling"), totally missing the entire point of REST: that you dynamically learn interactions on "resources" via hyperlink URLs as a means for loose coupling.
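The "dynamically learn interactions via hyperlink URLs" point can be sketched in a few lines (the `_links` shape below is HAL-style and the order resource is made up for illustration): the client asks the representation for a link relation instead of baking in a path template.

```python
import json

# Loose coupling: the client discovers URLs from link relations in the
# response instead of hardcoding an out-of-band path scheme.
order = json.loads("""{
  "id": 42,
  "status": "open",
  "_links": {
    "self":    {"href": "/orders/42"},
    "payment": {"href": "/orders/42/payment"}
  }
}""")

def follow(doc, rel):
    """Resolve a link relation; the server may change its URL layout freely."""
    return doc["_links"][rel]["href"]

print(follow(order, "payment"))  # /orders/42/payment
```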


I had to support a SOAP-based API (PayPal v1) and it was horrific.


OTOH, I've worked with EBICS and other ISO 20022 stuff, and while very intense and XML-heavy, it felt adequate and robust in a way that JSON and schemaless simply don't for the task at hand (talking about representing very complex inter-banking business on stocks/derivatives with multi-leg deals, foreign exchange and currencies, IBANs, ISINs, sub-second trading dates, etc.). Even used SOAP/MTOM in 2017, and it worked well enough.


OpenAPI is gaining momentum.

3GPP for example have adopted it for several interfaces inside the 5G core network.

https://en.m.wikipedia.org/wiki/OpenAPI_Specification


This looks a lot like OData - a REST-ful API standard with schema introspection, patterns for defining and traversing resource relationships, well-defined guidance around mechanics, tooling support. In particular, the "Microtypes" concept resembles how entities work in OData - rich query support for collections (eg. sorts, filters, order by), "expansions" on related resources, even inheritance semantics (ie. being able to request a derived entity as its parent type).


The fact that OData calls itself not only RESTful, but literally "the best way to REST", while using requests like this:

    GET serviceRoot/People('russellwhyte')/Microsoft.OData.SampleService.Models.TripPin.GetFavoriteAirline()
is an absolute insult to REST and the target developer audience.

https://www.odata.org/getting-started/basic-tutorial/#bounde...


I think this is a bit cherrypicked. The example you're using is a fully qualified bound function. OData support for actions and functions explicitly exist to provide affordances for how to do RPC within OData. You can easily model this API in OData without requiring a function (eg. having a navigation property reference called "favoriteAirline"). Moreover, you can typically invoke functions without a fully qualified prefix (save cases where there is ambiguity).

For the most part, OData does a good job at letting folks opt into complexity, allowing integrators to make full use of APIs without needing to know anything about $metadata, inheritance mechanics, functions, etc.


Semantic URL routing isn’t part of REST.


I never understood why people love REST. Just use some kind of JSON-RPC, or something like that. In my experience most so-called "REST" interfaces are just poorly written RPC. Some people even think that REST means HTTP + JSON. It's extremely rare to encounter a true REST interface with e.g. HATEOAS, proper caching, etc. And if many developers can't utilize a technology, probably the technology is not good enough to be commonly used. Web Services with their WSDL were the best thing. They were too complex, they used XML which is apparently out of fashion today, but the idea was solid; we just need something simpler but with good enough tooling.


REST aligns _very_ well with CRUD. Most software is, basically, CRUD.

So by saying "we're going to go with REST", you can determine about 90% of your API design more or less instantly, and it rarely gets in the way.

You still have to get your domain modeling right, and sometimes you need to make some extra resources. But you have a design that, basically, works.
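The "90% of your API design" claim is concrete: once you pick REST/CRUD, the route table for any resource writes itself (the helper and resource name below are illustrative):

```python
# The conventional REST/CRUD mapping that settles most of the API design
# up front, for any collection you model as a resource.
def crud_routes(collection):
    return {
        ("GET",    f"/{collection}"):     "list",
        ("POST",   f"/{collection}"):     "create",
        ("GET",    f"/{collection}/:id"): "read",
        ("PUT",    f"/{collection}/:id"): "update",
        ("DELETE", f"/{collection}/:id"): "delete",
    }

routes = crud_routes("cars")
print(routes[("POST", "/cars")])  # create
```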


I disagree that most apps are CRUD. To the contrary, most apps starting out as naive CRUD have complex implicit constraints related to eg. in what state you can modify resources in a particular way.

Even for CRUD apps with simple master/detail data relationships it doesn't make sense to tie your domain design to network requests.


If you are talking to devices that you can't even access/control, REST is a better way of interacting with the sensors. RPC is out of the question because it ties the implementation to the API. However, if you talk to a client that you can control, it's up to you; if you want to use RPC then it's fine.


> but the idea was solid, we just need something simpler but with good enough tooling.

You mean like GraphQL? ;)


I always used to think it was terrible, just based on the name.

Then I read the actual documentation, and I felt like it was what I had been missing before.

Now I just need to find a decent implementation.


REST is maximally interoperable and long lived.

E.g. you can stick an off-the-shelf caching reverse proxy in front of a server.

E.g. you can split data centers and direct your hypermedia links there.

I think there are often more important things than just interoperability and flexibility, but those were the guiding principles of REST.


HTTP JSON-RPC can do this, though. You don't need the weird hypermedia or pseudo-OO ideas that are usually implied by "REST"; you just need to POST when you need to POST, GET when you need to GET, and set your headers properly.
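A minimal sketch of that style: the envelope fields follow the JSON-RPC 2.0 spec, while the "add" method and dispatch table are hypothetical. The body would simply travel in an HTTP POST.

```python
import json

# Server-side dispatch for JSON-RPC 2.0 bodies arriving via plain HTTP POST.
METHODS = {"add": lambda p: p["a"] + p["b"]}

def handle(raw):
    """Dispatch one JSON-RPC 2.0 request and build the response envelope."""
    req = json.loads(raw)
    result = METHODS[req["method"]](req["params"])
    return json.dumps({"jsonrpc": "2.0", "result": result, "id": req["id"]})

reply = handle('{"jsonrpc": "2.0", "method": "add", "params": {"a": 2, "b": 3}, "id": 1}')
print(reply)  # {"jsonrpc": "2.0", "result": 5, "id": 1}
```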

Most of the benefit of REST over SOAP was the fact that it used HTTP correctly instead of implementing a redundant protocol on top of it. Having machine-readable API contracts is still extremely valuable; we just found better ways of doing that.


Right. HTTP (i.e. REST sans hypermedia) gets you interoperability.

It doesn't get you the "flexibility" of hypermedia/HATEOAS, but it depends on your application whether you need that. Most projects don't call for 15+ year APIs.


Generally speaking, I think a lot of developers get into trouble by not really putting the effort into grasping the ideology of architectural frameworks like REST, 12factor, and React.

For example, our API needed a way to serve a different view of an existing model. How do I get the API to know which view to serve? When I asked around, they said the best way was to do /cars/1/prices, which I didn't like because I feel it breaks REST. There's no price model to the car, prices are fields of cars, at least for now.

I had to think for a few seconds before coming up with just using a query parameter to set which view to serve, /cars/1?view=pricelist, preserving REST. But most coders just take the first thing that comes to mind, and then wonder why their applications are so messy after a few years.
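That approach can be sketched like this (the car fields and view names are made up; only the `/cars/1?view=pricelist` shape comes from the comment): the resource stays one resource, and the query parameter only selects a representation.

```python
from urllib.parse import urlparse, parse_qs

# One resource, several representations selected by a query parameter.
car = {"id": 1, "model": "Coupe", "msrp": 39000, "dealer_price": 36500}

VIEWS = {
    "default":   lambda c: {"id": c["id"], "model": c["model"]},
    "pricelist": lambda c: {"id": c["id"], "msrp": c["msrp"], "dealer_price": c["dealer_price"]},
}

def serve(url):
    """Pick the view named in the query string, falling back to the default."""
    qs = parse_qs(urlparse(url).query)
    view = qs.get("view", ["default"])[0]
    return VIEWS[view](car)

print(serve("/cars/1?view=pricelist"))  # {'id': 1, 'msrp': 39000, 'dealer_price': 36500}
```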

Coders seem to not want to bother learning how existing solutions are supposed to work before jumping to a half-baked newer solution just because it seems more intuitive. If you understand REST, then you can see how GraphQL can improve certain aspects of API interaction.

But it's not a panacea any more than React is. If you understood how HTML, CSS, and Javascript are supposed to work, then you can see how React improves on it. But if you can't then your React applications will be just as horrendous as your jQuery ones were.


REST allows for clients and servers to evolve independently. If you can constrain that you can probably design something significantly simpler. That’s not a shortcoming of REST, though. The channel between your mobile client and your backend would probably be better served with something more like RPC than academic REST.


That's our proposal/challenge here: come up with a model that tries to marry the best of both worlds (RPC and REST) but still adheres to HTTP semantics and allows clients/servers to evolve independently.


Kudos to the author for going the extra length of trying to maintain semantic interoperability and reuse by showing how to be backwards compatible with e.g. JSON-LD. This makes it possible to continue to build upon all the vocabularies already created.


Thanks! That was the real challenge!


Hi, author here. AMA. But I would like to make a couple of points.

For starters, a lot of people are looking for a TL;DR. There is no such thing. Same with tooling.

This is an open publication and should be considered as such. The intention was to come up with a better architectural design than REST, reusing existing Internet architecture. And that was the real challenge, because REST and HTTP were designed largely by the same person. Bending existing Internet architecture to fit another architectural style for networked services is extremely difficult, but apparently not impossible.

GraphQL, for instance, uses HTTP just as a transport layer. That's a big assumption to make there; anyone can build very flexible stuff if they're willing to re-design everything HTTP gives you, on top of HTTP.

Another note: the reason it takes so much time to get to the actual model (section 9) is to make sure the reader understands what REST is and where REST fails. And I haven't really seen any other document/publication explaining REST so extensively. So for those complaining that no one gives a definition of REST: go through sections 1-6 and you should have it.

Last but not least, Introspected REST is compatible with existing REST architecture; it just makes it more robust and flexible. So for the tiny hello-world example, it doesn't really make any difference other than exposing some metadata through the OPTIONS endpoint, like the (JSON) schema and the linking. But for complex APIs it should give huge advantages compared to REST.
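To make the hello-world case concrete, a server might return something like this for an OPTIONS request while leaving the GET body as plain JSON. All field names below are my guesses for illustration, not the spec's:

```python
import json

# Hypothetical introspection document served at OPTIONS /users/1.
introspection = {
    "microtypes": ["json-schema", "linking"],
    "json-schema": {
        "type": "object",
        "properties": {"id": {"type": "integer"}, "name": {"type": "string"}},
    },
    "linking": {"self": "/users/1", "friends": "/users/1/friends"},
}

# The GET response itself stays untouched, plain application/json.
resource = {"id": 1, "name": "Ada"}

print(json.dumps(introspection["microtypes"]))  # ["json-schema", "linking"]
```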

And again: this is an open publication. From that to actual implementation there are many steps needed to be taken (for starters defining the necessary microtypes).


TLDR?


I'm 11 sections in and haven't found an example other than a worryingly-complex media type specification.


TL;DR: An argument for (with an included example spec) a protocol that solves many of the problems that GraphQL solves, but at the REST layer (rather than on top of the REST layer).

(As a personal aside: The difference between this spec and GraphQL is relatively minor. They seem equally complex in implementation details; correctly implementing a GraphQL server/resolvers is no cakewalk, and correctly implementing either without much hand-written code would require a large amount of meta-programming, something this spec doesn't necessarily solve, or even hint at, yet uses as an argument in its favor. The only real improvements are that this one sits a layer lower (theoretically one less layer of abstraction/indirection improves performance and simplicity) and is not owned by Facebook.)


Yeah seriously... +1 to this. Even just a handful of examples of client and server usage would be far more useful for determining if it’s worth digging deeper


This is a formal proposition. If it catches on I expect others to explain it in various levels of complexity but this document is intended to be dense and deeply descriptive.


Sure, but couldn't it begin with an executive summary?

If you take RFCs as an example, many start with a simplified description or problem statement before then proceeding to get into the details.


For me, it's basically adding support for metadata in REST, by using things like JSON-LD to describe the layout of data, to be competitive with what GraphQL is offering.

Of course there isn't a common definition of data and metadata, so it's trying to define a common standard.

It's funny how in the end this will end up reinventing something like protobuf, but in JSON: much more verbose, and not really readable by a human anyway.

GraphQL at least does all this while being human-friendly, and allows what's needed to be defined 'by hand' or by a machine. (But I also don't think this is a great thing to use if the communication is internal and M2M anyway; it's a great solution for serving an API to a bigger crowd of heterogeneous clients.)


> TLDR?

Some guy tries to make a case regarding web API design by being both overly pedantic and opinionated about what REST is and how everyone is doing it wrong, proceeds to assert that REST done according to the author's opinion is also wrong, and from that point on (which drags through a dozen sections) presents "a manifesto": the author's opinion on how web APIs should be designed, with a convoluted definition of a strategy that solves nothing but increases complexity.

Some assertions are baffling at best, such as claiming that JSON is somehow not a media type but a message type, which is just wrong. The silliness continues with other baseless assertions, such as claiming that developers are not aware that document types such as JSON are used to define JSON-based document formats, or that "Creating a new Media Type for our API is generally considered bad practice", which is just plain wrong, or for some reason conflating the query part of a URL with the media type, an assertion that raises some questions.


ok I think I'm missing something here


I wasn't able to read this entirely; it's a very dense spec.

That said, even if the arguments in this spec are solid, the entire industry has sold itself to GraphQL.

GraphQL has won and it won't change.


> the entire industry has sold itself to GraphQL

Which industry? I presume you don't mean software development, which has largely ignored GraphQL? BTW I speak as an early adopter who has used GraphQL in production.


2007: SOAP has won and it won't change

2012: REST has won and it won't change

2018: GraphQL has won and it won't change

2023: ...


Hahahahahaha.

Perfect.

Maybe 2023: gRPC has won and it won't change


That sounds like a pretty absurd statement. Can you qualify this further?



