
Honestly, I think we as programmers have the opposite problem. We value simplicity so much that we neglect planning for complexity, because we think our simple, elegant solutions will last forever.

It's really fascinating to study modern CPU design. Modern high-performance CPUs are horrendously complex: superscalar execution and deep pipelining all but guarantee a complexity explosion. Yet, as far as we know, there is no way around this. You need pipelining and superscalar execution to extract adequate instruction-level parallelism from real-world code. Unless you want to use microcontrollers for everything, that complexity must exist.
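
To make the ILP point concrete, here is a minimal C sketch (my own illustration, not anything from the comment above): both loops sum the same array, but the first is one long dependency chain, while the second exposes four independent chains that a superscalar, pipelined core can actually overlap.

    #include <stddef.h>

    /* One dependency chain: every add waits on the previous one, so
     * throughput is bounded by add latency no matter how wide the core is. */
    double sum_serial(const double *a, size_t n) {
        double s = 0.0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Four independent chains: a superscalar core can keep several adds
     * in flight per cycle and the pipeline stays full. */
    double sum_ilp(const double *a, size_t n) {
        double s0 = 0.0, s1 = 0.0, s2 = 0.0, s3 = 0.0;
        size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += a[i];
            s1 += a[i + 1];
            s2 += a[i + 2];
            s3 += a[i + 3];
        }
        for (; i < n; i++)
            s0 += a[i];
        return (s0 + s1) + (s2 + s3);
    }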

Compilers are another example. Many people who go to implement compilers read the Dragon Book and think that all the complexity in GCC and LLVM is needless bloat. Then they quickly discover that they can't compete with them in performance.
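
As a rough illustration of what that "bloat" buys (my example, not the commenter's): a textbook code generator emits one scalar multiply-add per iteration for the loop below, while GCC and Clang at -O3 (e.g. gcc -O3 -march=native) will typically unroll and auto-vectorize it with SIMD instructions, often several times faster on the same hardware. Compiler Explorer makes the difference in emitted assembly easy to see.

    /* Simple enough for a Dragon Book compiler, but production compilers
     * typically turn it into unrolled, vectorized machine code. */
    void saxpy(float *restrict y, const float *restrict x, float a, int n) {
        for (int i = 0; i < n; i++)
            y[i] = a * x[i] + y[i];
    }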

It is of course desirable to avoid complexity where possible. But all too often, when we as engineers discover that a difficult problem requires a complex solution, the response is to become defensive, stick our heads in the sand, and stand by our simple "solutions". This is how we ended up with crufty Unix APIs, the security problems of C and C++, the pain of shell scripts, and so forth. We need to learn to accept when complexity is necessary and focus on managing that complexity.
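
For one concrete case of the C security problem being alluded to, a hypothetical snippet of my own: strcpy() has no idea how big the destination is, so a long, attacker-controlled string simply overflows the stack buffer.

    #include <stdio.h>
    #include <string.h>

    void greet(const char *name) {
        char dst[16];
        strcpy(dst, name);                /* no bounds check: classic overflow */
        /* a bounded alternative: snprintf(dst, sizeof dst, "%s", name); */
        printf("hello, %s\n", dst);
    }

    int main(void) {
        greet("this string is far longer than sixteen bytes");  /* undefined behavior */
        return 0;
    }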



But don't we already have microcontrollers in everything? My keyboard has a microcontroller in it. The hard drive has a microcontroller. USB has a microcontroller (or two, or three). Heck, even our CPUs now come with microcontrollers embedded in them!


Not in my experience. I see far more over-engineered, overly complex solutions to simple problems than the opposite.


And then people say: let's rewrite it, the old solution is overly complex and has a lot of legacy code that no one uses, we can do better. They imagine simple and perfect solutions because they underestimate the complexity of real-world problems, just like the parent said.


I would say rewriting is OK if you are eliminating code while keeping the same functionality.

Obviously that is a generalization and there will be exceptions, but as someone else said in this thread, "code is a liability, not an asset".


I’m a fan of KISS and I spend an inordinate amount of time getting people to reframe problems in a manner that meets the needs more directly (straightforward is on the road to real simplicity, and is a good place to stop if you can’t reach the destination).

But I can’t agree with this observation. There are a lot of smart people who are bored and, to keep from going nuts, solve a problem we never had, but the solution is so complicated it drags everybody else down.

But there are also a lot of people out there who think that if you pretend hard enough that our problems are simple, then a simple solution can be used.

If you oversimplify the problem enough, everything looks easy.


Clearly, the complexity of modern-ish CPUs does push up instructions/cycle — and thus single-threaded performance. That said, I did an awful lot on machines running at hundreds, rather than thousands, of MIPS (and quite a bit at single digits...). If the “big core” superscalar designs hadn’t happened, it’s not at all clear to me that the world would be a worse place. Perhaps we’d have more transputer-like designs, for instance (which I admit bring their own complexity — but could at least be rather more transparent than the current mess of speculation and complex cache hierarchies).


You don't necessarily need pipelining, or branch prediction, or microcode loop caches, or whatever else. Those are what you reach for when you're optimizing for single-core performance above everything else and want to squeeze as much as you can out of the processor. I don't believe GPUs do branch prediction at all, for example.
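
A rough sketch (mine, under the usual SIMT description of GPUs) of why a GPU can get away without branch prediction: on a divergent branch, the lanes of a warp step through both sides, and a per-lane mask decides whose results are kept, so there is nothing to predict. Plain C stand-in for one 32-lane warp:

    #define WARP 32

    /* Every lane computes both arms; the per-lane mask selects which one it keeps. */
    void warp_abs(const float *x, float *out) {
        for (int lane = 0; lane < WARP; lane++) {
            int take_then = (x[lane] < 0.0f);   /* per-lane predicate/mask */
            float then_val = -x[lane];          /* "then" side, always computed */
            float else_val = x[lane];           /* "else" side, always computed */
            out[lane] = take_then ? then_val : else_val;
        }
    }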



