Dangerous Web Security Features (tunetheweb.com)
87 points by based2 on April 20, 2019 | 40 comments


Disagree with HSTS being 'dangerous' in 2019. There are not really any good excuses left to have any parts of your website (new/different subdomains included) unable to use https. On the other hand, HPKP is a lot easier to mess up and is more situational, but HSTS should be standard by now.
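For reference, a minimal sketch of what "standard" HSTS looks like, hand-rolled here with Python's standard library (the port and the two-year max-age are illustrative; in a real deployment the header is set at the TLS terminator):

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Once a browser sees this over HTTPS, it refuses plain HTTP
            # for this host (and all subdomains) until max-age expires.
            self.send_header("Strict-Transport-Security",
                             "max-age=63072000; includeSubDomains; preload")
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"hello over TLS\n")

    HTTPServer(("", 8443), Handler).serve_forever()  # TLS wrapping omitted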

The author's recommendations are still good (If everyone tried to set up strict HPKP+CSP on their websites, I can imagine how many would break), but I view things like "If you've a sub-site that you never got round to securing (e.g. http://blog.example.com), then you've just forced yourself to upgrade with this policy." as a positive, not a negative (hence the word 'upgrade').


I definitely agree there's no excuse not to use TLS anymore.

I recently ran into this on a city of Vancouver website for voting information. While any page that had a form used HTTPS, all other pages forced the user to use HTTP. Like, for those pages, it redirected to unencrypted even if you typed in https://. Including the page with polling location information.

So any malicious actor in a privileged position, like a public WiFi network operator, could have effectively prevented people from getting accurate poll location information, effectively DoSing prospective voters.

I tried to bring this to their attention, but I got a response telling me their IT guy says it's not a problem because no user data is submitted on those pages. Never mind that it was probably more work to selectively enable TLS per URL and leaves important content vulnerable to manipulation. Incredible!

It's like the old "all of our download links use HTTPS", but the downloads page is served unencrypted. Frustrating.
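For what it's worth, the lazy-correct alternative (a blanket 301 from HTTP to HTTPS, rather than per-URL exceptions) is only a few lines; a sketch with Python's standard library, host and port being placeholders:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Redirect every plain-HTTP request to the HTTPS equivalent,
            # with no per-URL exceptions to get wrong.
            self.send_response(301)
            self.send_header("Location",
                             "https://" + self.headers["Host"] + self.path)
            self.end_headers()

    HTTPServer(("", 80), RedirectHandler).serve_forever()  # port 80 needs privileges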


Maybe all they can afford is a really old or weak server machine, and they're afraid of the extra computational load that enabling https on all pages would incur.


Elliptic curve based keys are now supported by all major browsers and require much less processing power and overhead than traditional RSA keys.
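As a rough illustration (assuming the third-party "cryptography" package is installed; numbers vary by machine), signing with P-256 is typically several times faster than with RSA-2048:

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec, padding, rsa

    ec_key = ec.generate_private_key(ec.SECP256R1())
    rsa_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    msg = b"x" * 1024

    t0 = time.perf_counter()
    for _ in range(100):
        ec_key.sign(msg, ec.ECDSA(hashes.SHA256()))
    t1 = time.perf_counter()
    for _ in range(100):
        rsa_key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
    t2 = time.perf_counter()
    print(f"100 ECDSA signs: {t1 - t0:.3f}s, 100 RSA signs: {t2 - t1:.3f}s")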


> there's no excuse not to use TLS anymore

I have a personal site that consists entirely of low-value static assets:

http://flownet.com/ron/

Why should I use TLS for that?


Anyone on the path between your server and clients can change the content however they want. Inject ads, mining scripts, redirect to a phishing website, try to push a virus to your client etc.

There are even ISPs that do that: https://security.stackexchange.com/questions/157828/my-isp-b...

https://thenextweb.com/insights/2017/12/11/comcast-continues...


HPKP as people know it is being withdrawn from browsers, so there is no need to discuss it further.

Source: https://chromestatus.com/feature/5903385005916160


Caching. If Netflix wants to let company or university X cache their shows, they can just distribute encrypted blocks of content over HTTP. The same goes for package distribution (although those are usually verified via signing).
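A sketch of that model: the manifest (block hashes and any decryption keys) is assumed to come over HTTPS, and the bulk blocks can then come from any untrusted HTTP cache, because each block is verified before use (URL and hash below are placeholders):

    import hashlib
    import urllib.request

    def fetch_block(url: str, expected_sha256: str) -> bytes:
        # The block may pass through any HTTP cache on the path; integrity
        # comes from checking it against a hash obtained over HTTPS.
        data = urllib.request.urlopen(url).read()
        if hashlib.sha256(data).hexdigest() != expected_sha256:
            raise ValueError("tampered or corrupted block: " + url)
        return data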


SXG would allow that without breaking security: https://wicg.github.io/webpackage/draft-yasskin-http-origin-...

And Netflix and similar already handle caching in a much better way: they effectively hand CDN nodes to ISPs, and then direct requests specifically to the right CDN.


Netflix can't really be cached anyway because of DRM + rotating keys. You'd need to cache for every device (type/model) and get the big wigs from Hollywood to accept allowing data storage in caches everywhere.

Also, I don't want my employer to know what movies I'm watching. I'd personally opt out (or not use Netflix) if this became an option.


Well, I think Netflix has cache/edge servers that they deploy to ISPs instead. That said, last I looked at it, the model described (keys through https, enc. chunks through http) was used by BAMTech for example.


That's true about Netflix, large networks can apply for boxes with Netflix caches for free (or at least they used to be free, maybe there's a small cost today) if they want to reduce load on their peering/network.


Your employer doesn’t just preload a root cert and use a middlebox to log everything anyway?


Actually doing this costs $$$ (and is probably less useful than many people think it will be).

Most employers are always looking for opportunities to cut costs, and so the $$ product that is superficially similar gets chosen, not the $$$ product that will "log everything".

The $$$ solution is to literally MITM every TLS session. It builds two sessions, one in which it pretends to be the server and one in which it pretends to be the client, and logs everything as it copies it from one to the other. You need a fair amount of horsepower (CPU or a specialist ASIC) and somewhere to keep all those logs; neither is cheap.
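A stripped-down sketch of that two-session relay in Python (cert paths and the log file are placeholders; a real appliance mints a per-site certificate from the enterprise root on the fly):

    import socket, ssl, threading

    def pipe(src, dst, log):
        while True:
            data = src.recv(4096)
            if not data:
                break
            log.write(data)      # the expensive part: logging at line rate
            dst.sendall(data)

    def handle(client_sock, upstream_host):
        # Session 1: pretend to be the server, using a cert the client
        # trusts only because the enterprise root is preloaded on it.
        srv_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        srv_ctx.load_cert_chain("mitm-cert.pem", "mitm-key.pem")
        client_tls = srv_ctx.wrap_socket(client_sock, server_side=True)

        # Session 2: pretend to be the client to the real server.
        cli_ctx = ssl.create_default_context()
        upstream_tls = cli_ctx.wrap_socket(
            socket.create_connection((upstream_host, 443)),
            server_hostname=upstream_host)

        with open("session.log", "ab") as log:
            t = threading.Thread(target=pipe,
                                 args=(upstream_tls, client_tls, log))
            t.start()
            pipe(client_tls, upstream_tls, log)
            t.join()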

The $$ solution may have the option to resort to that, but it will lack the horsepower to do it at line rate (e.g. you may have an appliance on a 100Mbps network that can't do this faster than 5Mbps) or the storage to log at line rate (100Mbps is about 12.5MB/s, or roughly 1TB per day), so unless you're being tracked specifically it's probably only doing this:

1. When you make a new TLS session, it stalls you to see who the far end is. Up to TLS 1.2, servers present their certificate in plain text, so the middlebox can read it without needing to actually MITM you. Unless the target is blocked you're allowed through, and probably not logged, to save storage.

2. When a session resumption happens, the middlebox has no idea who you're talking to, but it presumes that you must be talking to someone it previously allowed, so, it makes sense to just allow this and not inspect it beyond maybe noting which IP you connected to and when.

There are a lot of ways to screw this up that destroy security, but that's the nature of cheap security appliances, you pay money, you feel better, you probably don't get any actual security.

This all gets even more amusing for TLS 1.3, notice how in bullet point 2 these boxes let through all resumptions? TLS 1.3 deliberately looks exactly like a TLS 1.2 session resumption except with a "Ssh! This is actually TLS 1.3" marker so that a TLS 1.3 server knows what's really going on. For plenty of people TLS 1.3 "just worked" because their "security" appliance doesn't actually deliver any security.


My employer MITMs all content using the first approach. I don't think they have the storage to log all content, certainly not all downloads, but they do scan it for DLP, e.g., uploading stuff that looks like source code. It seems genuinely useful for things like preventing accidental pushes to GitHub (which is a thing I've seen happen multiple times).

Also, my employer provides a guest wifi for personal devices that doesn't do MITMing at all, and if for some reason I were watching Netflix at work, I'd presumably watch it on a personal device anyway. (Even if I was projecting to a conference room at work, I'd still plug in a personal device to the projector.)


(2015), edited (2018).

I've had week-long HSTS on my personal website for a few years (a max-age short enough that most clients effectively ignore it) out of an abundance of caution/FUD, and it hasn't really been a problem. I have had periods where my cert expired (for complicated reasons, I renew Let's Encrypt certs manually every three months, and sometimes I don't get around to it in time), but I didn't remove the regular HTTP 301 to HTTPS during that time. So I don't think permanent / preloaded HSTS would have been a problem.

On other sites I've set up since then, I've built them on top of hosting that assumes reliable HTTPS and renews it for me, e.g., Twisted with txacme or AWS Cloudfront with Amazon's CA. So I've been able to assume working HTTPS from day one.
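For concreteness, the week-long policy above is just a small max-age (a hypothetical header, with no includeSubDomains or preload flags):

    # 7 days * 24 hours * 3600 seconds = 604800
    HSTS_WEEK = "Strict-Transport-Security: max-age=604800"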

In October 2017, Google announced plans to deprecate self-service HPKP for exactly the reasons outlined in this article, and the deprecation took effect in late 2018. See https://developers.google.com/web/updates/2018/04/chrome-67-... and the links provided. If you're a major site, you really, really know what you're doing, and you're confident you can manage the risks outlined in this article, you can still get a hard-coded HPKP entry in the browser source code.


As others have pointed out, this post is from 2015 and a lot has changed since then. I've added an updated section at the end, to clarify my thoughts since then (mostly unchanged to be honest, except for CSP): https://www.tunetheweb.com/blog/dangerous-web-security-featu...


> The impact of an incorrect CSP policy, or browser issue could vary from a "Tweet This" button not loading (no big deal), to ads not loading (hurting your income), to stylesheets not loading (basically your whole website is broken).

I really don't like the sentiment that you shouldn't add a security feature because it might be difficult. Any change comes with risk of regression, but CSP isn't even domain-level like the other ones, it only affects the resources it's attached to. It shouldn't be any scarier than making any other change to your site/app.

Compare: "the impact of replacing your MD5'd passwords with properly hashed ones... basically your whole website is broken". Of course, that's a reductio ad absurdum in some cases, but if CSP protects you from XSS, it might be a good analogy.
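One way to make it no scarier in practice, as a sketch: ship the policy in Report-Only mode first, so nothing can break while you watch the reports (the CDN host and report endpoint below are placeholders):

    # Attach per-response, e.g. response.headers[...] = CSP in your framework.
    CSP = (
        "default-src 'self'; "
        "script-src 'self' https://cdn.example.com; "
        "report-uri /csp-reports"
    )
    # Observe first, enforce later:
    #   Content-Security-Policy-Report-Only: <CSP>
    #   Content-Security-Policy: <CSP>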


OP here and I disagree. Implementing a CSP for a page is hard (given the many different browsers), implementing it for a site is really hard! And yes it does pretty much need to be "domain level" to be effective.

It's easy to test if a password algorithm change fails, not so much for CSP. And the reporting options are next to useless because they are so noisy.

That's not to say people shouldn't implement CSP - it's a great option (now - less so in 2015 when this post was written). But they shouldn't just copy and paste a CSP policy from a random blog post they found, get an A+ on a security scanning tool and feel proud, without realising that they may have broken part of their website or implemented a pointless CSP. That was the intention of this post, and apologies if it read as "don't use them because they are hard".


> It's easy to test if a password algorithm change fails, not so much for CSP.

Probably a bad example, because the former is server-side. But why is CSP harder to test than any other client-side change, like rewriting your login page/component?

> And yes it does pretty much need to be "domain level" to be effective.

I meant to say that you can add it as a XSS prevention to example.com/app/ and not worry about example.com/static/ or example.com/blog/


> But why is CSP harder to test than any other client-side change, like rewriting your login page/component?

Maybe not so much now. But when I wrote that post there were lots of bugs and missing features across the various browsers (examples in the post). It was early days for CSP and it's got better since (hence why I now do recommend CSP), but regardless CSP is a complex technology and tough to get right. Each page and each browser might have its own CSP requirements (e.g. when polyfills are included on a page).

> I meant to say that you can add it as a XSS prevention to example.com/app/ and not worry about example.com/static/ or example.com/blog/

Then your cookies are at risk. Yes cookies can be scoped to a path, but few do that. And they can be made HttpOnly which is more common, but still not used anywhere near enough (8.31% of cookies - https://github.com/mikewest/http-state-tokens/blob/master/RE...). Additionally if you have a vulnerability on /static/ then you can hijack the app link or login link to send users to badexample.com or similar when they want to login to the app.
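For illustration, the two mitigations mentioned, path scoping and HttpOnly, are just cookie attributes (the name and value here are hypothetical):

    # Path= limits which URLs the cookie is sent to; HttpOnly keeps it
    # away from JavaScript; Secure keeps it off plain HTTP.
    SET_COOKIE = "session=abc123; Path=/app/; Secure; HttpOnly"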

It will still offer some protection if only on /app/ and is better than nothing, but still preferable to have it on the whole domain. It’s like HTTPS - having it on just one page is an anti-pattern that shouldn’t be used anymore.


Can we add minimum password complexity requirements to this list? There is nothing more annoying than having to adjust a password that already has 128 bits of entropy because the website feels I need a special character. Plus, now hackers have a guide for what the password looks like.


NIST 800-63B actually recommends against character class requirements[1] in favor of a minimum length requirement and blacklists of breached passwords and other obvious passwords. Sites that require special characters are not following current best practice.

[1]: https://pages.nist.gov/800-63-3/sp800-63b.html
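A sketch of that recommendation in code: a minimum length plus a breached-password blacklist, and no character-class rules (the blacklist file name is a placeholder):

    def password_acceptable(password: str, blacklist: set) -> bool:
        # NIST 800-63B: at least 8 characters for memorized secrets...
        if len(password) < 8:
            return False
        # ...and reject known-breached or otherwise obvious passwords.
        return password.lower() not in blacklist

    with open("breached-passwords.txt") as f:
        blacklist = {line.strip().lower() for line in f}

    print(password_acceptable("correct horse battery staple", blacklist))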


Isn't any obvious password already in the list of breached passwords? ;)


And then we come to the most dangerous item, because you have least control over it: preloading HSTS right into the browser.

I can't say specifically why, but there's something about a browser that treats a certain list of sites specially, by default, that just doesn't sit well with me. I've had this feeling ever since I heard about the feature. Not exactly net neutrality, but somewhat reminds me of it.


Really, this solution exists because the inverse would be too extreme. We cannot yet say "https by default, except for this list", so instead we go with a list of https-only sites. Really though, most traffic should move to https.

Moreover, this solves a real problem. We want sites like paypal, facebook, and gmail to really demand HTTPS. There should not be a race to MitM fresh browser installs.


Sounds similar to web browsers coming with their own CA certificates instead of using system-wide ones, leading to poor integration and inconsistencies. Though a centralized database of rules for websites sounds awkward on its own.


> Sounds similar to web browsers coming with their own CA certificates instead of using system-wide ones, leading to poor integration and inconsistencies.

Well, the only notable browser which does this is Mozilla's Firefox, and not coincidentally the only root trust store where you can actually see how the sausage is made is Mozilla's. All the other big trust stores (Apple, Microsoft, Google) are black boxes. Presumably they have one or more employees dedicated to this stuff, but since we're not shown their working it might equally be the product of an intern throwing darts at a list.

Right now for example Mozilla is discussing Certinomis, a small CA which doesn't seem to be very good at the technical aspects of their job, issuing certificates for DNS names with spaces in them, typo'ing dates, filling parameters out incorrectly, nothing that screams "evil" but certainly more clumsy than we'd prefer. Are other trust stores thinking about Certinomis? You'll only find out if one of them announces a change.



This article is probably much more harmful than the security features it describes as dangerous...


I still very strongly oppose HSTS's "No user recourse" policy, and don't deploy it on my sites purely for that reason.

I get the reasoning, but it's still unethical to the user.

(as an aside, hsts applies to all ports with no option to disable this, something to keep in mind.)


I agree with you. That is also why I hate HSTS and will not use it. The user must be given permission to override anything (for any reason they want, including ones you don't know about), and must have enough rope to hang themselves, plus a few more lengths just in case.

HSTS is terrible, even if you support HTTPS (which you probably should).


> I hate HSTS and will not use it.

HSTS is just a flag. What you "hate" is that a piece of software under ~YOUR~ control (the web browser) happens to be correctly enforcing the intent of that flag. If you don't like HSTS, it is ~YOUR~ choice whether you use a web browser that will follow your instructions or someone else's. If your web browser does not offer a method to disable HSTS like Firefox does, switch to a browser that is willing to follow your instructions.

> HSTS is terrible

That is an absurd opinion. HSTS makes the internet objectively safer by requiring trust between your web browser and web servers. Who would want optional trust between themselves and their online bank to be the default state of affairs? If a website deals with money (or other items of value) and it is not HSTS preloaded, it is being stupidly and dangerously unsafe.


I know that about the flag, and I managed to hack Firefox so that HSTS won't work. (I also wrote a document on how to write a better web browser program, but it isn't implemented.)

Trust between whatever (and other options about what features to enable/disable/alter, such as what fonts to use, any kind of URL redirections, etc) should be defined by the user. There are several reasons you may wish to alter the settings. (Anyways, you can configure cookies for secure connections only; the user should also be allowed to configure whether or not to use key pinning for secure cookies (individually per domain).)

(I also think that both HTTP and HTTPS are bad for banking anyways, and that a specialized kind of bank/money protocol that can be worked over SSH might be working better (preferably that can be used with text, so that even without specialized software you can still do some stuff with it; IRC does that too and that is what makes IRC good). This is independent of the above stuff, though.)


All of the above assumes that the broader population are well informed about the nuances of digital security. The reality is most people have better things to do.

As for your third paragraph—I would agree that banks should act more like APIs and less like their own countries. But that is independent from the use of a protocol. Arguing between TLS (HTTPS) and SSH borders on bikeshedding.

The reality is more than a billion people are successfully using HTTPS to safely and securely manage their money, and HSTS is improving the guarantees around that security. I'm sure you've solved every security and trust risk with this hypothetical bank-over-IRC-over-SSH protocol with non-mandatory security... but I'm not yet convinced.


You are correct about that; HTTPS (or otherwise TLS) would also work to define such an API; it does not have to be SSH.

(Actually, now that I consider it, HTTP requests (with HTTPS) might be better than SSH anyway for many of the things being done, such as downloading a bank statement. So it may well work better. However, the common authentication protocols are better in SSH; similar kinds of authentication protocols are probably still possible with HTTPS too, they just do not seem to be as commonly used, though perhaps they should be.)


On Chrome, you can type `thisisunsafe` to skip it.


How is it unethical to protect your users from MITM attacks?

Why would any end user need “recourse” to connect to your server incorrectly?

If you’re no longer serving content over HTTP anyway, what opportunity is lost?


I'm not an end user, and I should have the ability to bypass expired/invalid cert errors on my own sites, as well as on any other site out of curiosity. Sure, lock it behind a flag or other hidden setting; that even covers enterprise clients, because they already disable setting those on their domain computers. But it should be possible nonetheless to bypass the error.


That's an objection to the exposed features in your web browser, not the normal use of HSTS by websites.

Did you even try a Google search?[0] If you use Firefox (which you should) it is possible to bypass that setting, using a feature locked behind "a flag or other hidden setting". Specifically, it is the about:config setting security.mixed_content.use_hsts

My suggestion is to use Firefox Developer Edition and modify the security setting there—allowing you to bypass HSTS whenever you like without any accidental security risk for your normal browsing.

[0] https://campus.barracuda.com/product/websecurityagent/doc/73...



