> (Sidebar: For some reason it is a common misconception in the programming world that the adage "it's a poor craftsman that blames his tools" means that someone blaming their tools proves they are a poor craftsman. The true meaning of the phrase is that it is a poor craftsman who has bad tools in the first place. A skilled craftsman does not quietly put up with a dull saw; they get or make a sharp one, without complaint. A craftsman who uses poor tools without complaint is even poorer than the one who at least complains!)
No mention of the obvious answer here: coders have jobs, employers and lives. Hence a lot of the time code has to be written quickly, and the company usually doesn't give a toss whether something's been done the 'right' way... so long as it works.
If you've got a deadline in thirty minutes, you're probably not going to try to do everything the best possible way. Same deal if you've got family or friends to get back to, an event to go to, or anything else that requires you to get things done quickly.
You're rewarded more for getting things done quickly than you are for getting them done properly or securely. When your client wants their site or program done as quickly as possible for a low cost, or your startup needs to 'move fast and break things'... security will usually suffer in the process.
That's what I call the 'happy path fallacy'. If code performs along the happy path, then great: move fast, break things and release. Never mind that outside the happy path there are tons of issues, bugs and things never even looked at until they suddenly turn out to be the root cause of a major breach. Those parts don't get the attention they need because there is no direct economic incentive to give it to them.
Software is not at all done when it 'works'. It's only done when it works well for all input and there are no unknown paths of execution that lead to unexpected results. It's pretty rare for a codebase with more than a few hundred lines to be known completely, to the level where there isn't some way to make it do something the author did not intend. Complexity is a very bad enemy in that respect, and code tends to get extremely complex and hard to reason about if you don't have iron discipline during the design process.
That's precisely it. And it's been the same reason for the last few decades. The level of security in software won't change significantly until the cost of writing insecure software exceeds the cost of writing secure software.
You missed the whole point of this essay. The fastest code to write is often insecure because languages, APIs and so on are almost always insecure by default.
If you write code fast because you have a life at home to get to, and I write code fast because my employer needs it right now, and someone else writes code fast because they have a large ego to maintain, what really is the difference? If the simplest thing to do is also reasonably secure, the hope is that a 30-minute deadline will not lead to an accidental security flaw.
I've been at multiple places where requests to fix what was broken in the production system were shot down repeatedly because there "wasn't money in the budget to fix that system".
Yet if anyone had added up the man-hours it took to fix the same production issues that kept popping up, it would clearly have been less expensive to fix the recurring bugs.
The first thing to fix is bad tooling, yes, but it's not the whole picture.
Once all your queries are safe by default, your languages have memory safety, your templates XSS-encode by default, etc. (this is basically the situation at most sane workplaces), there are still security issues.
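A concrete example of that "safe by default" template layer is Go's html/template, which escapes interpolated values contextually unless you deliberately opt out. A minimal sketch, not taken from the thread:

```go
package main

import (
	"html/template"
	"os"
)

func main() {
	// html/template escapes interpolated values by default, so the payload
	// below is rendered as inert, escaped text rather than as markup.
	t := template.Must(template.New("page").Parse(`<p>Hello, {{.}}</p>`))
	if err := t.Execute(os.Stdout, `<script>alert("xss")</script>`); err != nil {
		panic(err)
	}
	// Prints the <script> payload HTML-escaped, e.g. &lt;script&gt;...
}
```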
There are fewer security issues left, but they're gnarlier. What we see now are application-logic and business-logic issues. Overflow and language-level sharp edges still exist, but they're a small proportion of the mistakes developers (web application developers in particular) seem to be making.
Situations where the server unintentionally signs user-controlled data, bad OAuth2 flows, missing or incorrect authorisation checks, accepting data from the client as "validated" - these are much more common in my experience.
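As a hypothetical illustration of the missing-authorisation-check class (every name and the storage layer below are made up for the sketch): the framework can route the request and encode the response for you, but nothing in it knows that an invoice should only be visible to its owner.

```go
package main

import (
	"encoding/json"
	"net/http"
)

// Invoice is a stand-in record type for the sketch.
type Invoice struct {
	ID      string `json:"id"`
	OwnerID string `json:"owner_id"`
	Total   int    `json:"total"`
}

// A toy in-memory store; a real app would hit a database here.
var invoices = map[string]Invoice{
	"inv-1": {ID: "inv-1", OwnerID: "alice", Total: 100},
	"inv-2": {ID: "inv-2", OwnerID: "bob", Total: 250},
}

// currentUserID stands in for real session/auth middleware; trusting a plain
// header like this is itself insecure and is only here to keep the sketch short.
func currentUserID(r *http.Request) string { return r.Header.Get("X-User") }

func getInvoice(w http.ResponseWriter, r *http.Request) {
	inv, ok := invoices[r.URL.Query().Get("id")]
	if !ok {
		http.Error(w, "not found", http.StatusNotFound)
		return
	}
	// This is the check no framework default writes for you: without it,
	// any authenticated user can read any invoice just by guessing IDs.
	if inv.OwnerID != currentUserID(r) {
		http.Error(w, "forbidden", http.StatusForbidden)
		return
	}
	_ = json.NewEncoder(w).Encode(inv)
}

func main() {
	http.HandleFunc("/invoice", getInvoice)
	_ = http.ListenAndServe(":8080", nil)
}
```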
The other interesting thing I've noticed is that when developers who are used to "safe by default" frameworks have to step outside the framework for any reason (e.g. the front-end is 90% Angular, but 10% "bespoke" JS), they will make mistakes with very high likelihood.
The solution is tooling, yes, but also education and process.
First you fix and raise awareness about the biggest cause of problems. It takes a long time. Then you do it again with the remaining biggest cause of problems. Etc.
There are tools and methods to address the "first world problems" of application-logic vulnerabilities too. Maybe in another 20 years those will be at the forefront.
I think such security issues are similar to the ones the author described. They, too, require decent language-level abstractions to disappear: some sort of lightweight process for dealing with trust boundaries explicitly and easily.
What you'd need is almost a dynamic, live proof checker hooked into your IDE, capable of tracking the effects of code changes and refactoring, linked to test cases that themselves carry proofs of testing the spec completely.
How else will you be guaranteed to catch Goto Fail and similar?
What's worse is when you hit the issue of "composability" in cryptography: two servers run different algorithms making different assumptions, and when they interact the assumptions fail to translate and neither provides the guarantee it should. Like cross-protocol attacks, such as when one server becomes a signing oracle for another.
Edit: and beyond that, we have cross-architecture disagreement on the results of calculations intended to be deterministic. Like how SPARC Bitcoin Qt binaries would previously have had the Bitcoin reward schedule loop every 255 years, going back to 50 coins per block and restarting the countdown.
There is actually a very simple answer to this question: we have hundreds of implementations of roughly the same thing instead of just one implementation that is made bulletproof. Every new framework, every new HTTP server, every new language causes another round of re-implementation, and with it yet another round of the same old bugs and security problems.
That's definitely a valid objection and I have absolutely no idea what could be done about that. But as it is, the bug that takes down one machine today will take down other machines tomorrow, and yet others five years from now.
So for entirely new bugs your objection stands: they could (and likely would) be disastrous. Even today a 'zero day' exploit for a major platform can be dealt with though, and I don't see why we would not be able to deal with such exploits in a scenario where there is only one implementation of something. It's not as if we currently use the other implementations to keep things running; it's mostly a matter of the impact landing all at once rather than being spread out over time and repeating indefinitely.
> Even today a 'zero day' exploit for a major platform can be dealt with though
But they can only be dealt with once they're discovered by people with an incentive to fix them. The NSA says they weren't using Heartbleed[1], but I can't think of a single reason they wouldn't lie about it. In any case, that was a massive security flaw that could conceivably have been exploited for years before it was disclosed. If it affects 30 percent of systems instead of 90 percent, that seems like worthwhile hedging of bets.
Afterthought: if we really had only one implementation of something, all our energy could go into making that one thing really good, which might even push the quality beyond the point where zero-days are an issue.
What does being bulletproof even mean? Let's talk through this with an example not drawn from programming: stopping actual bullets. When Operation Iraqi Freedom started, US troops were issued flak jackets, which do what they suggest - catch flak, i.e. shrapnel from bombs. They were not, despite what one might think, bulletproof vests. As OIF continued and forces transitioned from fighting conventional forces to fighting an insurgency, it was decided that actual bulletproofing was required. So everyone was issued ceramic plates that inserted into the front and rear of the flak jacket. They were ceramic, unlike the metal plating on an armored vehicle, because as it turns out, simply stopping a bullet that's travelling at over 2,000 feet per second is a great way to break a ribcage. So the ceramic plates are designed to shatter on impact, spreading the force of impact over a larger surface area. Bulletproofing a person is a different task than bulletproofing a tank. And there are downsides to bulletproofing a person - they can't run as fast with 15 pounds of antiballistic ceramic strapped to them, and it turns out that ceramic doesn't breathe as well as, say, cotton. So your bulletproofed servicemember is now more vulnerable to heatstroke, which is a real problem in Iraq in the summer, when daytime temperatures can routinely exceed 120 degrees.
Bulletproofing, either in the literal or the metaphorical sense, is a series of tradeoffs and compromises. The idea that there's one right set of compromises for every HTTP server out there, not just in terms of safety but in a myriad of design decisions, is simply wrong.
The problem with "the same thing that's made bulletproof": what defines bulletproof, and how is that "perfect" solution going to work for all the different business types?
Bulletproof would at least mean that we no longer have this infinite series of regressions: that what is fixed stays fixed.
And 'obligatory xkcd's rarely are, and I wish people would stop posting them as a way to settle an argument without being receptive to the core idea of that argument.
The word 'standard' never was on my mind, merely a way to get things implemented properly once rather than re-implemented over and over again.
How many HTTP server implementations are there, in C alone? Then all the Java ones, the Ruby ones, the Go ones and all the other languages. Then all the crypto libraries and all their re-implementations, and so on.
There has to be a better way to do code re-use and to avoid the NIH syndrome that seems to be one of the major drivers behind all these re-implementations of roughly the same thing.
But it's so much easier to start again, rather than to delve into an existing code base, extend it properly and document it. There is little glory in a minor contribution to a much larger project. Starting over means it's your project, you get to be the big wheel and when you lose interest we have one more Swiss cheese to contend with.
Of course there is a better way... but the issue is: How are you going to make others use your way? How are you going to fund replacing the DECADES of existing tech?
The XKCD puts it perfectly... you create a better way - and God knows, there have to be many better ways than, say, JavaScript or PHP - how are you going to MAKE people switch?
Your better way - assuming it is perfect - is still crowded out and unable to gain any traction.
I mean... ANYTHING that gets made gets a "format war". What makes you think Google, IBM, Microsoft, Apple, etc would all agree on something long enough to make that happen? We can't even get a new DVD format without years of fighting...
What's perfect for Google (which revolves around Search) won't necessarily work for Microsoft (which revolves around Server and Office)... Or Apple (which revolves around... Magic?)... Or...
What language should they all agree on? What "bullet proof" product will meet all their needs? What style of programming does everything everyone needs?
You can't create that "perfect" solution, because people and companies have such massively different wants, needs and goals...
That's why things such as TDD, modular code, code reviews, etc. all matter so much more than the specific platform you are on...
So the SafeAddInt8 in the blogpost may not be correct on architectures that use really strange representations.
Lots of C developers who wanted to make their code safer ran into a related trap in C: there, signed overflow is completely undefined, and the compiler may throw the check out altogether.
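For what it's worth, the check can be written so that it never relies on post-overflow behaviour at all. The classic C trap is testing after the fact with something like `if (a + b < a)`, which for signed integers is exactly the undefined-behaviour pattern compilers are entitled to delete; testing against the type's bounds before the addition sidesteps both that and the strange-representation concern. A minimal Go sketch in the spirit of the post's SafeAddInt8, not its actual code:

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// safeAddInt8 checks the bounds of the type *before* adding, so the result
// never depends on what the hardware or the language does on overflow.
func safeAddInt8(a, b int8) (int8, error) {
	if b > 0 && a > math.MaxInt8-b {
		return 0, errors.New("int8 overflow")
	}
	if b < 0 && a < math.MinInt8-b {
		return 0, errors.New("int8 underflow")
	}
	return a + b, nil
}

func main() {
	fmt.Println(safeAddInt8(100, 27))   // 127 <nil>
	fmt.Println(safeAddInt8(100, 28))   // 0 int8 overflow
	fmt.Println(safeAddInt8(-100, -29)) // 0 int8 underflow
}
```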
I wonder why Go doesn't just define signed integers to be in 2's complement representation. Are there any architectures out there that use something else for integers?
More so, using 'JO' (jump if overflow) after every signed operation isn't that expensive speed-wise (branch prediction for the win), but it does make the code larger. http://boston.conman.org/2015/09/07.1
It would, but the x86 doesn't trap on signed overflow. There is an INTO instruction (interrupt on signed overflow) but you have to include it after every signed instruction, and when I tested that, it was apparent that INTO is not a "fast instruction".
Also, under Linux, when INTO fires, it is interpreted as a SIGSEGV. Not what I was expecting.
> Whether you want an exception or an error or some sort of Either/Option type or whatever depends on your language, but something ought to twitch here.
The problem with handling integer overflow is that the mechanisms for it are clunky. Is there a programming language that handles integer overflow using a better mechanism than those listed? (I mean without resorting to a BigNum.)
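To make the clunkiness concrete, here is a small Go sketch (reusing a safeAddInt8 like the one above; all names are illustrative): a one-line arithmetic expression turns into a chain of error checks.

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// safeAddInt8 as sketched further up the thread.
func safeAddInt8(a, b int8) (int8, error) {
	if (b > 0 && a > math.MaxInt8-b) || (b < 0 && a < math.MinInt8-b) {
		return 0, errors.New("int8 overflow")
	}
	return a + b, nil
}

// What would otherwise be written as a + b + c turns into a chain of checks,
// with an error branch for every intermediate result.
func sumInt8(a, b, c int8) (int8, error) {
	ab, err := safeAddInt8(a, b)
	if err != nil {
		return 0, err
	}
	return safeAddInt8(ab, c)
}

func main() {
	fmt.Println(sumInt8(50, 50, 20))  // 120 <nil>
	fmt.Println(sumInt8(100, 20, 20)) // 0 int8 overflow
}
```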
This is fantastic.