
In reality PostgREST sucks... it's fine for simple apps, but for anything bigger it's a pain in the butt.

* There is no column-level security – e.g. I want to show payment_total to admin, but not to user (granted, a feature for Postgres likely).
* With the above, you need to create separate views for each role or maintain complex functions that either render a column or return nothing.
* Then when you update views, you need to write SQL... and you can't just write SQL, restart the server and see it applied. You need to execute that SQL, meaning you probably need a migration file for the prod system.
* With each new migration it's very easy to lose context of what's going on and who changed what.
* My colleague and I have been making changes to the same view and we would override each other's changes, because our changes would get lost in the migration history – again, it's not one file which we can edit.
* Writing functions in PL/pgSQL is plain awful; testing them is even harder.

I wish there would be some tool w/ DDL that you can use to define schemas, functions and views which would automatically sync changes to a staging environment and then properly apply these changes on production.

When you can have a Flask kind of app built in whatever metalanguage, w/ the ability to easily write tests – then and only then would PostgREST be useful for large-scale systems.

For us, it's just easier to build factories that generate collection/item endpoints w/ a small config change.



>I wish there would be some tool w/ DDL that you can use to define schemas, functions and views which would automatically sync changes to a staging environment and then properly apply these changes on production.

A long time ago, I joined a company where the "DB Migration Script" was an ever-growing set of migrations that took close to an hour to run. There were the same issues of lack of context, history, etc.

Since we were a Microsoft shop, I took the approach of using Visual Studio DB projects and the SQL Server SqlPackage tool. Every table, stored procedure, schema, user, index, etc. was represented by a single file in the project, so it had full git history, and DB changes and the code that relied on them would be in the same commit. (Data migrations still had to be stored separately in up/down migration files.)

The "build" was to create the SQLPackage dacpac file from the DB project, and deploy was to apply the dacpac to the target database and then run data migrations (which were rare). Since the dacpac represented the desired end state of the database, it didn't require a DB to be in a specific state first, and it allowed the same deploy to run as part of CI, manual testing, staging deploy, and production deploys. It also generally took less than 5 seconds.


Why wouldn't the "script" be all of the necedssary commands to create the entire database?

If any migration was necessary to transform one table structure to another, that wouldn't be useful to keep around long term, nor interesting once the new table is established. It might be kept as a historical artifact, but why would you on average care beyond what the current schema is now, along with its documentation?


> Why wouldn't the "script" be all of the necedssary commands to create the entire database?

That pre-dated me, so I have no idea. It's also why I simply jettisoned the schema migration script altogether, since the dacpac covered both migrations and creating new databases.

Some other "fun" things in the database were "expansion columns" just in case new columns were needed. Many tables had "int_1", "int_2", "int_3" etc that were not used, but ready to be used should the need arise.


> If any migration was necessary to transform one table structure to another, that wouldn't be useful to keep around long term, nor interesting once the new table is established.

This bears repeating.

Migrations are transient. You run them once, you update your backups, and you get rid of them. That's it.

The only scenario where I ever needed to keep multiple migration scripts at hand was when working on client-side databases, and we had to support scenarios where users running old versions of their clients were prompted to upgrade while skipping multiple releases. Nevertheless, for that scenario I also had a "delete and start from scratch" path in place.


> * With each new migration it's very easy to lose context of what's going on and who changed what.

> * My colleague and I have been making changes to the same view and we would override each other's changes, because our changes would get lost in the migration history – again, it's not one file which we can edit.

> I wish there would be some tool w/ DDL that you can use to define schemas, functions and views which would automatically sync changes to a staging environment and then properly apply these changes on production.

Declarative schema management solves this, and allows you to manage your database definitions (SQL CREATE statements) just like any other codebase: https://www.skeema.io/blog/2023/10/24/stored-proc-deployment...

With a declarative system, the tooling is responsible for diff'ing the CREATE statements between your current state and desired state, and then generating the appropriate DDL to transition between those states.
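
To make that concrete, a made-up sketch (table name invented): the repo contains only the desired end state, and the tool works out the DDL to get there.

  -- In the repo, the only thing you ever edit:
  CREATE TABLE invoices (
    id            bigint PRIMARY KEY,
    user_id       bigint NOT NULL,
    payment_total numeric(12,2) NOT NULL DEFAULT 0
  );

  -- What the tool generates and applies after diffing against the live DB,
  -- e.g. if payment_total was just added to the file above:
  ALTER TABLE invoices
    ADD COLUMN payment_total numeric(12,2) NOT NULL DEFAULT 0;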

My tool Skeema is specific to MySQL/MariaDB, but I've linked some solutions for Postgres here: https://news.ycombinator.com/item?id=39236111


> there is no column level security – e.g. I want to show payment_total to admin, but not to user (granted a feature for postgres likely).

It's a feature Postgres has.

  GRANT SELECT 
    ON public_table
    TO webuser;
  REVOKE SELECT (payment_total)
    ON public_table
    FROM webuser;
Not sure why you think it doesn't exist or doesn't work with PostgREST.


The Postgres documentation seems to disagree:

> A user may perform SELECT, INSERT, etc. on a column if they hold that privilege for either the specific column or its whole table. Granting the privilege at the table level and then revoking it for one column will not do what one might wish: the table-level grant is unaffected by a column-level operation.


That doesn't mean there isn't column-level security. It just means that `grant` and `revoke` alone are not the way to do it. The response to this criticism is the same as the response to many other criticisms among these comments: use views.

https://dba.stackexchange.com/a/239656/228983
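
A minimal sketch of the view approach (table/column names invented, reusing the webuser role from the earlier GRANT example):

  -- webuser never touches the base table, only a view that omits payment_total
  CREATE VIEW user_invoices AS
    SELECT id, user_id, status
      FROM invoices;

  REVOKE ALL ON invoices FROM webuser;
  GRANT SELECT ON user_invoices TO webuser;
  GRANT SELECT ON invoices TO admin;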


You're right, it's a feature I don't use, and I didn't read enough of the docs to get the right grants.


This is exactly why I've never used postgrest


This is also how permissions work in most other systems (eg with Unix permissions, chmod u-r matters only if there isn't a group or world read permission).


While the OP was wrong on that, the problem I found is that Postgrest threw only slightly helpful errors when someone would try to access a column that they didn't have permission to, rather than exclude that column from the result set and maybe throw a warning, which meant that you had to maintain multiple different versions of queries. This was a while ago, but coupled with my dislike of the query string syntax and some other stylistic choices I migrated to Hasura.


So you have different queries per access level/role? Like userProducts, managerProducts, supportProducts, adminProducts...?


Yep, if the products table had columns that were only accessible to some users, SELECT * would throw an error in those cases.


In my case I use Postgraphile. If you select a column that you can't access, you get an error - I don't see a problem there.

PostgREST doesn't support column selection?


It does, but if you `SELECT *` and you don't have access to all the columns, you will get an error. You need to select only the columns that you have access to. I think it's the same in Postgraphile, no?


So either the web app has one policy as a whole, or you have to pass authentication to the database?


I don’t think passing auth to the database is crazy. It’s almost an exact parallel to filesystems, which everyone is fine with.

What’s far weirder to me is that the accepted status quo is to run transactions as a user that can read anything, and then try to have the app filter out stuff the user shouldn’t see.

Just make the transaction run as a user that only has permissions to read what you want, and stop worrying about whether your auth system handles edge cases.

This status quo is like having a filesystem where everything is owned by root, then writing a FUSE driver that lets users read those files by trying to infer permissions from the path. It’s a weird thing.

I’ve always assumed the reason databases had such poor access control (or little usage of the good access control) was because it was too slow in ye olden days when disks were slow, clock cycles were limited, and read replicas were not yet common.

In the modern world, I don’t know why anyone would prefer that their database can leak data in SQL injection attacks. The costs seem low, and the benefits seem high.


I'd love to hear an explanation of how a modern web or mobile app backend could reasonably be expected to create a new native DB role for every new end user and manage their DB access credentials securely, such that the overall risk of a SQL injection attack is lessened compared to the single-role-per-app model.


> reasonably be expected to create a new native DB role for every new end user and manage their DB access credentials securely, such that the overall risk of a SQL injection attack is lessened compared to the single-role-per-app model.

They don't have to be native DB roles. Row security policies extend far beyond an "owner".

A naive version would use SQL to set a local variable to the user's ID, with row level security policies on each table that check that local variable. This is still very vulnerable to SQL injection, though, because if attackers can execute arbitrary SQL then they can also set that variable.

A less naive version would involve setting a local variable to a JWT or other client-side secret, and having RLS validate access against a stored hash of that JWT or other secret. The app's DB account has no access to anything other than SELECTs on the JWT hash -> user ID mapping table, with RLS on all the other tables. JWTs would need to be generated by another service using its own DB account that only has access to check passwords and write into the JWT table, so that an attacker can't use the compromised connection to just generate a new JWT they can hash themselves.

That login service can be written to be virtually immune to SQL attacks because it doesn't have to handle generalized queries. Ban all the SQL control characters in usernames/passwords and 400 any request that contains them. Hash and base64 both the username and password so they get converted to something where SQL injections are impossible because of the character set used. There's a bunch of options; this doesn't have to handle general-purpose SQL queries, so it can afford to do stuff that would break normal queries like b64'ing the inputs.

You end up with a system where you need a valid JWT to hit the API, but you also need to include a valid JWT in any SQL injection attacks for them to work. There's no point in SQL injection attacks; one of the prerequisites is "a valid authentication token for the user", at which point the attacker could just connect to the API as them.
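
Roughly, and heavily simplified (table names and the app.token setting are invented, sha256() needs Postgres 11+, real code should salt/scope the hashes):

  CREATE TABLE sessions (
    token_hash text PRIMARY KEY,   -- written only by the login service
    user_id    bigint NOT NULL
  );
  GRANT SELECT ON sessions TO webuser;   -- app account can read, never write

  ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;
  CREATE POLICY own_rows ON invoices USING (
    user_id = (SELECT s.user_id
                 FROM sessions s
                WHERE s.token_hash = encode(sha256(convert_to(
                        current_setting('app.token', true), 'UTF8')), 'hex'))
  );

  -- Per request, inside its transaction, the app runs:
  --   SET LOCAL app.token = '<the caller''s JWT or secret>';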


> Ban all the SQL control characters in usernames/passwords and 400 any request that contains them. Hash and base64 both the username and password so they get converted to something where SQL injections are impossible because of the character set used.

The problem that this tries to solve has been solved by every SQL database for a long time. Bind-parameters in queries are your friend, never build a query using string concatenation.
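
In plain SQL terms (table name invented), this is the same idea drivers expose as placeholders:

  -- The query text is fixed; user input is only ever bound as data.
  PREPARE find_user(text) AS
    SELECT id, name FROM users WHERE username = $1;

  EXECUTE find_user('alice');
  -- Even a value like 'alice''; DROP TABLE users;--' stays a plain string here.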


Yeah, and I use them, but I still get paranoid. Maybe that's due to my lack of understanding of them, my mental model is that it still resembles string concatenation on the backend. Now that I type it out, that sounds wrong, so I probably need to take a look at that.


You definitely need to spend a little time with them, they are safe and don't require all the crazy workarounds you've detailed to solve a non-issue.


This feels like a custom solution, which will require some code in the database. It might work, and it might even be a good idea if the database is shared between many apps, but if the DB serves one app, I'd rather write code in a coding language than in (PL/)SQL.


> That login service can be written to be virtually immune to SQL attacks because it doesn't have to handle generalized queries. Ban all the SQL control characters in usernames/passwords

Why? If that login service is a third-party provider, what does this accomplish?


I tend to think of the database less as a filesystem and more as a block store. The filesystem layer in a real system is in fact implemented by the driver (and OS), and it might be implemented in terms of primitives that are persisted in the block device, but the block device doesn’t have a magical internal scripting language or query engine that the driver delegates to. The driver is in charge, and the persistence layer stores and loads what the driver tells it to.


> I'd love to hear an explanation of how a modern web or mobile app backend could reasonably be expected to create a new native DB role for every new end user and manage their DB access credentials securely.

Create all the individual user roles as NOLOGIN and then grant a single authenticator role used by the app the rights to assume the user roles.

It's exactly as secure as the “single app account with authz managed by the app” model, but lets the DB deal with authz. In either case, authn is on the app side, except for authn of the app to its DB account.

But it also means that, since the app role itself has virtually no access and can only transition into a user role, you can't compromise multiple users worth of data at a time via an injection attack.
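
A sketch of that setup (role names illustrative):

  CREATE ROLE alice NOLOGIN;
  CREATE ROLE bob   NOLOGIN;
  CREATE ROLE authenticator LOGIN NOINHERIT PASSWORD '...';

  GRANT alice TO authenticator;
  GRANT bob   TO authenticator;

  -- After the app has authenticated the end user, each request does:
  SET ROLE alice;   -- the session is now limited to whatever alice was granted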


I responded to the other user, but you can get around the app role being able to transition to other roles by passing the session auth token to the DB via a local variable in the transaction/query.

Create an extra session table with a column that stores the hashes of session auth tokens (JWT, shared secret, whatever) and another column that stores the user ID that owns it. Give the app's DB account access to read from, but not write to, that table.

Then set up RLS on each table with a policy that reads that JWT/etc local variable, hashes it, looks up the hash in that table and compares the user ID to the "owner" of the row.

The app account is unable to create JWTs, so the attacker will need either a valid JWT or a hash collision (which means you have bigger problems anyway).

Then isolate the "login service" into it's own application with its own DB account that has access to check passwords and write sessions, and that's it. Make it super picky about what it will run; it only handles usernames and passwords, so it can probably just 4xx any requests that contain SQL characters or just base64 all of its inputs so the character set doesn't even include injection as a possibility.

It ends up with an app account that basically only has CONNECT permissions unless the attacker gives it a valid JWT, at which point they could just impersonate the user anyways.


You'd need another role to create the user roles and grant them to the app role. It should be possible, though you'd need to use a separately authenticated connection if not a separate app to ensure proper isolation.


I'm getting lost. How do the DB and app know about each other in this scenario?


Not OP, but I'm presuming they mean something like adding a "db_account_password" column to the users table to store the password for that user's database account (i.e. not their password, but the password the app uses to log in to the database as them), so the app can get the credentials to log in to a user's database account before running queries as them.

You'd configure it like a normal webapp with the host, port, username, etc. It wouldn't be something you could just add to the config for an existing app, it would take custom code.


No app I've ever deployed does passthrough authz to the filesystem; the app itself runs in the context of a particular user and manipulates files as its own principal, which is basically the same as what happens with databases.


I have, fairly frequently in systems that are designed to be able to propagate auth like NTLM. The app itself runs as a user that has virtually no permissions, and attaches the auth context to file access/database queries/HTTP calls/etc.

Last I checked, it was also the suggested auth implementation for "platform as a service" stuff like Firebase/Supabase. Saving people from having to write their own authz is a huge selling point for those platforms.


+1 to this


What happens when I’m allowed to view a column for some entities but not others?

In the real world authorization logic can get very complicated.

And who wants to migrate this kind of stuff once you’ve implemented it one way?

Honestly it’s best to keep your database doing what it does best: storing data.


> What happens when I’m allowed to view a column for some entities but not others?

Column-level rules are a DB design smell, because if some people have a reason to view a table that is coherent without a column, that suggests that you've got at least two different fact shapes communicated by the table that should be normalized into separate tables.

Each of those tables then needs appropriate RLS rules. Sometimes you do a join (OUTER on one side) of these tables, so that they look like one, and this also solves the combined row/column security challenge that comes up when you have the less normalized table.
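
As a sketch (names invented; security_invoker on the view needs Postgres 15+):

  CREATE TABLE orders (
    id      bigint PRIMARY KEY,
    user_id bigint NOT NULL,
    status  text
  );
  CREATE TABLE order_payments (
    order_id      bigint PRIMARY KEY REFERENCES orders(id),
    payment_total numeric(12,2)
  );

  ALTER TABLE order_payments ENABLE ROW LEVEL SECURITY;
  CREATE POLICY admins_only ON order_payments
    USING (pg_has_role(current_user, 'admin', 'member'));

  -- Looks like the old wide table; payment_total just comes back NULL
  -- for roles the policy filters out.
  CREATE VIEW orders_full WITH (security_invoker = true) AS
    SELECT o.*, p.payment_total
      FROM orders o
      LEFT OUTER JOIN order_payments p ON p.order_id = o.id;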


Views seem much more appropriate a tool for customizing/restricting column access but I'm no DBA.


I'm looking at a table with columns intentionally removed because they're none of my business, but with calculations I'd want to do on those columns provided in a separate table as anonymity allows. Column-level rules seem like they'd make generating this export fairly easy, no?


The first is an option, but the second isn't if you choose not to do the first, because PostgREST is designed to handle this:

https://postgrest.org/en/v12/references/auth.html#jwt-impers...


I admit, I am partially wrong.

I was working with supabase and in there you can't set a role for the user, it's always `authenticated`.

I was looking for something more like

```
create policy "No payment_total for user" on todos
  for select (payment_total, invoice_number, payment_hash)
  using (
    -- Dynamic code example
    (select auth.uid()) = user_id
    AND get_user_role(auth.uid()) != 'admin'
  );
```


Ok, this is not provided in the UI but why don't you use something like this?

- https://github.com/point-source/supabase-tenant-rbac
- https://github.com/vvalchev/supabase-multitenancy-rbac


There's this extension https://postgresql-anonymizer.readthedocs.io/en/stable/ which lets you do this, e.g. `MASKED WITH NULL` – though I don't have much hands-on experience with it / haven't looked into its implementation details, so I'm not sure what trade-offs it might come with.

My general feeling is that it's an extension you'd apply to your human users, whilst exempting your machine users, but it feels like it could work for both in the context of something like postgrest


> there is no column level security – e.g. I want to show payment_total to admin, but not to user (granted a feature for postgres likely)

So don't put it in the same table.

> My colleague and I have been making changes to the same view and we would override each other's changes

You don't use version control and a test suite?


> So don't put it in the same table.

Nope, this suggestion sucks. It means I have to come up w/ a table structure that caters to the front-end API. A better idea is to create schemas, e.g. `public_users`, and create views inside.

> You don't use version control and a test suite?

that's what I'm saying... we have like 1k migration files already. What I want instead is a `my-function.sql` that always has the current code and will automatically generate the necessary migrations and shenanigans, so I don't have to worry about it.


> Nope, this suggestion sucks. Meaning I have to come up w/ table structure that caters to front-end API.

Yes, custom schemas are preferable, but I think you're missing the point. If the only real cost of not having to write the front-end manually at all is massaging the db schema a little bit to partition access by role, that's still a pretty great tradeoff for a lot of scenarios. The system is simple to understand/audit and fast to create; the only apparent downside is that it rubs our sense of aesthetics the wrong way.

> we have like 1k migration files already. What I want instead is a `my-function.sql` that always has the current code and will automatically generate the necessary migrations and shenanigans, so I don't have to worry about it.

I use entity framework a lot which does this sort of schema migration based on class definitions. It works well like 80% of the time, but when it doesn't it's annoying.

In any case, what I was suggesting to address the specific problem from your original post is that if you have lots of conflicting schema updates, then your table schema should be versioned and any migration should specify what schema version it depends on, eg. have a TableVersions table that names all of your tables and associates a version number with each. Your migration scripts then specify what version they were written against and fail if the number differs. It's the same basic optimistic concurrency control we use elsewhere in databases, just applied to schemas.
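
A minimal sketch of that check (names invented), using the version bump itself as the guard:

  CREATE TABLE TableVersions (table_name text PRIMARY KEY, version int NOT NULL);

  -- A migration written against version 7 of "invoices" bumps the version first
  -- and aborts the whole transaction if someone else migrated before it.
  BEGIN;
  DO $$
  BEGIN
    UPDATE TableVersions SET version = 8
     WHERE table_name = 'invoices' AND version = 7;
    IF NOT FOUND THEN
      RAISE EXCEPTION 'invoices is no longer at schema version 7';
    END IF;
  END $$;
  ALTER TABLE invoices ADD COLUMN due_date date;
  COMMIT;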


I understand what you mean, however, I still think this is a bad DX overall.

There are a lot of good suggestions in these threads about possible tools and approaches, but it just misses the point that these solutions are bad DX and require maintenance.

I think the general idea of PostgREST is good, it's just not ready yet. There needs to be a framework that handles views/functions in Postgres as if it's a web app and then applies changes as if it's Terraform. All while taking backups in case something goes wrong.

Otherwise it's just more maintenance, and it's easier to build APIs w/ something else.


> but it just misses the point that these solutions are bad DX and require maintenance.

Everything requires maintenance, that's not an argument that it requires more maintenance than anything else. There are fewer moving parts overall, and fewer moving parts generally means less maintenance.


> * there is no column level security – e.g. I want to show payment_total to admin, but not to user

I think what you want is dynamic data masking. RDBMSes like IBM DB2[1] and SQL Server[2] offer it out of the box.

For PostgreSQL, there's the postgresql_anonymizer[3] extension. For which you could do something like:

  SECURITY LABEL FOR user ON COLUMN invoice.payment_total
  IS 'MASKED WITH VALUE $$CONFIDENTIAL$$';
Last time I tried it though, it didn't play well with transaction variables (`current_setting(my.username)`), so you could not combine it with RLS logic and application users[4]. I'll be exploring an alternative on https://github.com/steve-chavez/pg_masking.

To be fair, this discussion is more related to a PostgreSQL feature than PostgREST. PostgREST relies on PostgreSQL for all authorization logic by design.

[1]: https://www.ibm.com/docs/en/db2-for-zos/12?topic=statements-...

[2]: https://learn.microsoft.com/en-us/sql/relational-databases/s...

[3]: https://postgresql-anonymizer.readthedocs.io/en/latest/decla...

[4]: https://www.2ndquadrant.com/en/blog/application-users-vs-row...


When I used PostgREST for a readonly b2b API I recall that the recommended way (from the PostgREST docs) to employ it was creating postgresql schemas for different use cases: you create a schema that has views for tables in another schema and PostgREST exposes only the views in that schema. The views defined in the schema can, naturally, control what columns are visible, or provide whatever other abstractions one might want.

I suspect a lot of people ignore this documentation and expose tables directly, and then wonder where/how to control such things.

Yes, my memory is correct: from the PostgREST "Schema Isolation" documentation:

    "A PostgREST instance exposes all the tables, views, and stored procedures of a single PostgreSQL schema (a namespace of database objects). 
    This means private data or implementation details can go inside different private schemas and be invisible to HTTP clients.

    It is recommended that you don’t expose tables on your API schema. Instead expose views and stored procedures which insulate the internal 
    details from the outside world. This allows you to change the internals of your schema and maintain backwards compatibility. It also keeps 
    your code easier to refactor, and provides a natural way to do API versioning."
This has all been embiggened since I last used it. Now you can configure multiple schemas to be exposed. The active user ROLE (as determined by auth) controls access to exposed schemas, and a header in HTTP requests can specify the desired schema.

Given all of this, it is entirely possible to achieve precise control over column visibility, version control, and surface whatever abstraction you can imagine. However, you are expected to fully internalize PostgreSQL schemas, roles and other database affordances. This has always been the mentality of PostgREST: it's supposed to be a "function" that exposes some subset of a database without any magic beyond the database itself. Implicit in this is the need for adequate CI/CD tools to control database objects: you're doomed the minute you have >1 developer involved without such tools.
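
For anyone who hasn't seen it, a bare-bones sketch of that layout (schema, table, and role names invented):

  CREATE SCHEMA api;        -- the only schema PostgREST exposes
  CREATE SCHEMA private;    -- real tables live here

  CREATE TABLE private.users (
    id            bigint PRIMARY KEY,
    email         text NOT NULL,
    password_hash text NOT NULL
  );

  -- The public surface: pick columns, rename, version, etc.
  CREATE VIEW api.users AS
    SELECT id, email FROM private.users;

  GRANT USAGE ON SCHEMA api TO webuser;
  GRANT SELECT ON api.users TO webuser;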


Thanks for your response.

You are right that care needs to be taken in the authentication layer.

And yes, developing SQL in a shared environment is a complex task and needs to be accompanied by proper tooling.


> In reality PostgREST sucks... it's fine for simple apps, but for anything bigger it's a pain in the butt.

In reality REST sucks. It should only be used for files, like WebDAV or S3.


I hear this sentiment echoed a fair amount without really saying why it sucks. And sometimes I wonder if the writer omitted it for brevity, or is actually just echoing something they read on the internet without really believing or understanding it themselves, because saying something sucks where clearly other people don’t know that it sucks (or else why would it be so popular) positions the writer in a dialogue as mysteriously knowledgeable. Hard to argue against points you don’t even know the other person is making and all.


> without really saying why it sucks

I think you've heard quite a lot. There is a reason why people invent GraphQL or BFF

In a nutshell, I think today's Web APIs can't simply be described as a bunch of "resources" or "states" (which Roy Fielding used in his PhD dissertation about REST). A database system like PG is much more complex. The SQL itself is an API and comes with a data transfer protocol. You get a leaky abstraction by mapping it to REST.


> There is a reason why people invent GraphQL or BFF

Sure, it was because creating servers--REST or otherwise--with code (looking at you, Spring, Django, and RoR) is such a slow, laborious task that rapidly iterating frontend teams got tired of waiting around for ill-fitting backends to catch up, so they built their own abstraction layers: BFF and then GraphQL. That wouldn't be a problem if backend teams were as nimble. Things like PostgREST, Prisma, Hasura, etc. allow them to be that nimble. Ironically, that brings folks like Wundergraph full circle, where they relegate GraphQL to being a development language for defining REST endpoints. If you're going to use a query language like GraphQL to define your REST services, then you might as well use the OG query language SQL to define your REST services, if circumstances allow. That's precisely what PostgREST does.

So, yes, I have heard a lot of arguments for why REST sucks. I just don't think they apply anymore (if they ever did).


so why not just pipeline sql n' everything inside h2? Or else you have to deal with shit like

> Essential for the architecture is the conceptual split between data retrieval via HTTP GET and data-modifying requests which use the HTTP-verbs POST, PUT, PATCH, DELETE

The whole argument from OP's article is about the goddamn HTTP verbs; I totally blame REST for this.

Just use the CONNECT or PROXY verb for the sake of it and move on.


But surely that abstraction, leaky or otherwise, is there to prevent (hopefully) the security issues related to SQL?


Yeah, he omitted it for brevity alright & you would just have to deal with it. Some day you might, too, become Mysterio Mister Knowledgeable


what are you supposed to use instead, JSON-RPC?


"I wish there would be some tool w/ DDL that you can use to define schemas, functions and views which would automatically sync changes to staging environment and then properly these changes on production."

There is. Liquibase (or Flyway, or...) Add it to your build/CICD/deployment process.

I have even used it for TDD-style automated integration testing of stored procedures, plus CI/CD deployment of SQL DML/DDL alongside regular .jar (JVM/Scala) code, for a serious production app.


We solved some of this by having a "src" folder with subfolders: "functions", "triggers" and "views". Then an update-src.sql script drops all of those and recreates them from the source files. This way we can track history with git and ensure a database has the latest version of them by running the script and tests (using pgtap and pg_prove).
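
For anyone picturing it, a stripped-down sketch of what such an update-src.sql might look like when run through psql (object names invented):

  -- Drop everything that lives under src/, then recreate it from the
  -- checked-in definitions.
  DROP VIEW IF EXISTS active_users;
  DROP FUNCTION IF EXISTS set_updated_at() CASCADE;  -- CASCADE drops dependent triggers

  \i src/functions/set_updated_at.sql
  \i src/triggers/users_updated_at.sql
  \i src/views/active_users.sql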


Many of the migration tools I've worked with include the concept of scripts that are run every time migrations are applied, in addition to your standard migration files. So things like views, functions, SPs can go in these "every time" files rather than being duplicated in migrations every time they change.


The biggest issue with PostgREST for me is that it doesn't support transactions well. You can't do 2 inserts and keep a consistent state if one fails. That alone is a deal breaker.


You have explicit control over transactions with functions https://postgrest.org/en/latest/references/api/functions.htm....

I think this sentiment stems from users of postgrest-js[1], which is a JS library that gives an ORM feel to PostgREST requests. Under that abstraction, users don't realize they're using a REST API, instead of a direct postgres connection.

So in this case users are really asking for "client-side transactions"[2], which are not supported in PostgREST.

[1]: https://github.com/supabase/postgrest-js

[2]: https://github.com/PostgREST/postgrest/issues/286
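
For the two-insert case specifically, a sketch of the function approach (table and function names invented); PostgREST would expose it as a single POST /rpc/create_order_with_item call, and the whole body runs in one transaction:

  CREATE FUNCTION create_order_with_item(p_user_id bigint, p_sku text)
  RETURNS bigint
  LANGUAGE plpgsql
  AS $$
  DECLARE
    v_order_id bigint;
  BEGIN
    INSERT INTO orders (user_id) VALUES (p_user_id) RETURNING id INTO v_order_id;
    INSERT INTO order_items (order_id, sku) VALUES (v_order_id, p_sku);
    RETURN v_order_id;  -- if either insert fails, both are rolled back
  END;
  $$;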


Sure you can. Just write a function.


You posted a lot of comments in response to complaints or critiques of postgrest (almost every one at the time of my reading).

Most of them are very terse, rude/dismissive, and in my view fall on the wrong side of Hacker News etiquette. E.g., "Sure you can. Just X" is not an educational or persuasive construction, and neither are several other comments you've made on this submission.

If nothing else, I'd encourage you to respond to the best version (steelman) of these critiques, which will improve the quality of discussion and be more persuasive.


I encourage you to read the HN guidelines and then read the comments that I've replied to, because many of them could with considerable justification be regarded as unkind, snarky, and smug.

I would also encourage you not to make and take things personally, and instead stick to the substance. Every one of my comments has been about the technological approach under consideration. If they're terse, it's because I don't want to waste people's time. If they're "not educational or persuasive" to you then you're free to disregard them, or say why exactly they fail. Plenty of the comments here, in my view, are "not educational or persuasive", so I said so and gave my reasons for having that view. I would encourage you to do the same.


Yeah, I fully agree. The tooling for putting that much logic into the database is just not great. I've been decently happy with Sqitch[0] for DB change management, but even with that you don't really get a good basis for testing some of the logic you could otherwise test in isolation in app code.

I've also tried to rely heavily on the database handling security and authorization, but as soon as you start to do somewhat non-trivial attribute-/relationship-based authorization (as you would find in many products nowadays), it really isn't fun anymore, and you spend a lot of the time you saved by not manually building backend routes on trying to fit your authz model into those basic primitives (and avoiding performance bottlenecks). Especially compared to other modern authz solutions like OPA[1] or oso[2], it really doesn't stack up.

[0]: https://github.com/sqitchers/sqitch

[1]: https://www.openpolicyagent.org

[2]: https://www.osohq.com


REST itself is crap, at least when it comes to "APIs" serving things besides webpages. There's quite literally no obvious reason that an API actually should be RESTful besides that a lot of web developers believe that it's easier to understand. If REST is so great, then why do you keep finding yourself relying on documentation in order to use it?


I won't say REST is perfect, but I much prefer it to an unstructured api where anything goes. You didn't suggest that, but you really didn't suggest any alternative.

What's the alternative to relying on documentation? Is relying on documentation even a bad thing?


I didn't suggest an alternative because, while I have more specific opinions on that matter, almost any alternative a person pulls out of a hat would be superior to REST.

> I much prefer it to an unstructured api where anything goes.

You're entitled to your opinion, and while I'm sure you didn't mean it to be a straw man, it's essentially the type of straw man I hear a lot when I broach this subject.

Whether an API is "unstructured" doesn't depend that much on what said API is advertised to be acting like. Plenty of RESTful APIs in the wild don't completely adhere to REST or supplemental "standards" like JSON:API. My point about documentation is that, because using a REST API inevitably means reading documentation, and because assumptions about a REST API cannot always be made, then one might as well abandon REST and build an API that doesn't include the extras of REST that are rarely necessary. This doesn't imply unstructuredness. Most programmers don't like building things that don't have a useful amount of predictability to them, so to me the worry about structure is actually concern over very junior programmers doing very junior programmery things. I'm just not interested in that problem, and I don't think most programmers need to be.

So let's just say a programmer, or a team of programmers, implement an API that uses their own convention that they invented, and they provide extremely readable and thorough documentation. Where's the problem?

Documentation is a necessity. One of my arguments against REST is that it implies a high amount of intuitiveness that it can only even attempt to possess with extremely simplistic data. As soon as it makes sense to have a POST endpoint that acts more like an RPC than a REST API, that throws the entire decision to adhere to REST under question, and that sort of thing is not uncommon.



