
I'll second this. Unfortunately, experience is a double-edged sword. I deal with a lot of old programmers and I continually run into a "fact" that was true in 1999 but is false today.

For example, I started in DOS (as did many older folk). Memory was severely constrained. So we developed habits to use memory (and disk space) very efficiently.

Like using bits in a byte, or a byte as an ID field. Or quibbling over the length of a Name field.

All those habits proved to be bad in the long run. And today memory (and disk space) are abundant. But these old habits are hard to break.
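For illustration, the bit-packing habit looked something like this (a hypothetical status byte; the flag names are made up, not from any real codebase):

```python
# Old DOS-era habit: pack boolean flags into a single status byte
# instead of spending a whole byte (or more) per flag.
FLAG_ACTIVE = 0b0000_0001
FLAG_ADMIN  = 0b0000_0010
FLAG_LOCKED = 0b0000_0100

status = 0
status |= FLAG_ACTIVE | FLAG_ADMIN   # set two flags

assert status & FLAG_ACTIVE          # test a flag
assert not (status & FLAG_LOCKED)

status &= ~FLAG_ADMIN                # clear a flag
assert not (status & FLAG_ADMIN)
```

It saves seven bytes per record and costs you every future reader a moment of bitwise head-scratching.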

And that's before we talk about "modern" techniques, like version control etc.

Experience is great. I'm a fan. But all too often experience is also "there was a bug in Windows 95, so that API call is unreliable."



>many older folk...like using bits in a byte, or a byte as an ID field. Or quibbling over the length of a Name field. All those habits proved to be bad in the long run.

I'll bet you the people inventing these modern new architectures are among the more experienced engineers at the companies,

who I would criticize by saying: we used to think it was a good idea to code around the limitations of old architectures, and now we're accused of coding for PDP-11s when processors aren't PDP-11s any more. So what I'm told we should do now is code very specifically around the idiosyncrasies of the new PDP-11,000s, and that is what marks progress. Caches were invented to invisibly and silently make things faster, and that was a brilliant idea. Now cache designers analyze what we used to do and insist we change how we code so they can cache what they want to cache, not what we want to do, and all younger programmers talk about these days is coding for the cache (which is itself a cache of a cache of a cache). This is not progress; it's coding for the timing of the modern storage drum.
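To make the "coding for the cache" idea concrete, the canonical example is traversal order over a 2-D array: row-by-row touches memory sequentially, column-by-column strides across it. (Sketched here in Python, where interpreter overhead mostly hides the effect; in C the difference can be dramatic.)

```python
# Row-major 2-D data: each row is stored contiguously, so row-order
# traversal is sequential (cache-friendly), while column-order
# traversal jumps between rows (cache-hostile in a language like C).
N = 256
grid = [[row * N + col for col in range(N)] for row in range(N)]

def sum_row_major(g):
    # Visit elements in storage order.
    return sum(x for row in g for x in row)

def sum_col_major(g):
    # Visit one column at a time, striding across rows.
    n = len(g)
    return sum(g[r][c] for c in range(n) for r in range(n))

# Same answer either way; only the memory access pattern differs.
assert sum_row_major(grid) == sum_col_major(grid)
```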


You can still obtain a ton of performance by minimizing the size of the right things; processor caches are still relatively small. Even something like the order of columns in a Postgres table can, at worst, multiply your table size through alignment padding.
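The alignment effect behind the Postgres column-order point can be sketched with C-style struct layout via Python's `struct` module (the field ordering here is a made-up example):

```python
import struct

# A char, an 8-byte integer, and another char: in native layout ("@"),
# the 8-byte field must be aligned, so padding is inserted after the
# first char. Exact sizes are platform-dependent; the inequality is not.
wasteful = struct.calcsize("@cqc")   # char, int64, char
compact  = struct.calcsize("@qcc")   # int64 first, chars packed after

# Same fields, different order, different size -- the same reason
# declaring wide columns before narrow ones can shrink a Postgres row.
assert wasteful > compact
```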


Yes, but we went from a time when these optimizations were absolutely unavoidable (so unavoidable that programmers would sacrifice almost anything else for them) to one where they may be a nice-to-have, but for most organizations the maintainability of the code is much more crucial.

As mentioned before, there are exceptions, like embedded programming and anything that needs to finish within predictable time (game code, graphics, DSP code, etc.). But even if you write something for a company website today, I wouldn't dare say whether making the code 10% less readable to make it use 10% less memory is a good trade-off. Not that readability and memory use are necessarily mutually exclusive goals, but if you highly optimize code for one dimension, others will suffer at some point. And if you come from a generation where memory usage was king, you may make these trade-offs out of habit or principle even though they may no longer align with the goals of the project you're working on.

Anybody who is good at their craft will choose how to deal with trade-offs depending on the needs of the project, not based on remembered tradition.
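One small, concrete instance of that trade-off in Python (my own example, not from the thread): `__slots__` cuts per-instance memory but gives up the flexibility of a per-instance `__dict__`.

```python
class PointFlexible:
    """Plain class: each instance carries a __dict__, so you can
    attach ad-hoc attributes, at a memory cost per instance."""
    def __init__(self, x, y):
        self.x, self.y = x, y

class PointCompact:
    """Slots class: fixed attribute set, no per-instance __dict__,
    noticeably smaller when you have millions of instances."""
    __slots__ = ("x", "y")
    def __init__(self, x, y):
        self.x, self.y = x, y

p = PointCompact(1, 2)
assert not hasattr(p, "__dict__")   # the memory saving comes from this

q = PointFlexible(1, 2)
q.label = "ad hoc"                  # flexible, but pays for a dict per instance
```

Whether the saving is worth the lost flexibility depends entirely on the project, which is the commenter's point.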


With 20 years of experience in website development, I have led various site speed optimization teams. Because of that background, I pay particular attention to assets, JS profiling, and related aspects whenever I work on a web app. These practices are so ingrained that I perform them automatically and transparently; they have been part of my process from the start.

The same thing might happen to these engineers when working on existing or new projects and writing C/C++ code; these optimizations will be part of their process.


I was born in the 90s, so memory has been plentiful for my whole programmer's journey. Until I started doing more embedded signal processing stuff.

It is good to know how to use memory efficiently, even today. But it is also important to know that if you optimize for memory usage or speed, you may be paying in another dimension. And in my experience a certain type of old programmer can have a total lack of awareness that, in some cases, readability, maintainability, ease of use for developers, display latency, etc. can be dimensions that are prioritized over the others for really good reasons.

The admin equivalent of that is somebody who provisions the same hard disk space for a server today as they did in the 2000s and then has the machine run out of disk space on every fourth kernel upgrade. It is good to use only the necessary resources, but not if you can't or won't handle the ugly consequences that may come with it.


> today memory (and disk space) are abundant.

It still has a cost though, which needs to be taken into account in many situations. At our company the cost of compute, RAM, and storage are some of the biggest ongoing concerns, which can make the difference between profitability and bankruptcy, and we’ve done a lot of work on it.


Especially in the cloud, where RAM is extremely expensive and you're encouraged to adopt very RAM-inefficient horizontal scaling architectures.


Until the cost of storage is 0, this is all still a thing.



