Why do software nowadays take so much more memory and processing power (2020) (dietpi.com)
34 points by teleforce on Sept 16, 2023 | 68 comments


You hear this all the time: "software on old machines used to be blazing fast, and now software on new machines is slow."

I was there for most of the old machines, and my recollection is that they were always slow, which is why there was a relentless race to upgrade to a faster machine and why, for a very long time, people replaced their computer once a year. If software on old machines were so snappy, there would have been no reason to buy a new computer, and the industry would have stalled in 1984.

This trope of "how did things get so slow, they were fast in ye olde dayes" is just an incorrect, blurred memory of the way things were.

AND if the software was faster in Ye Olde Dayes, it is because it did less.

Of course it's not possible to generalise, but for the most part this is true - old software did less than modern software. The pervasive experience of using old computers was waiting.

My M1 Mac running 2 IDEs, 3 web browsers, email, terminals and several other apps is ridiculously, mind blowingly, insanely fast - so fast that anyone back in 1990 would have drooled themselves dry to have that speed.


My first computer had a 33MHz CPU and 16MB of RAM, and I believe the disk spun at a paltry 4200rpm. My pipe out to the Internet was provided via a 56k modem, attached to the serial port.

This machine had a physical power switch. You could not put it to sleep, so it required a cold boot every time. At boot, it would run a _very slow_ self test of all 16MB of memory, after which it would spend another minute loading drivers under DOS before spending another minute booting Windows.

If you wanted to go online, you spent another minute waiting for the modem to dial out, and 10-15sec for a well-optimized page to load in Netscape. In many cases, like wanting to look up information on some subject, it was faster to put the Encarta CD in the drive, wait for it to spin up and for the program to launch, and search from that.

If you wanted to do _both at the same time_, you could do that! But be prepared for the machine to become slow as molasses as it swapped out to that 4200rpm drive.

Fast forward to today. My laptop is in standby 80% of the time. It's usually reconnected to Wi-Fi before I can enter my password, and I can look up an article on Wikipedia within 15 seconds of picking the laptop up.

I disagree wholeheartedly that computing has become slower. I can achieve the same results in a fraction of the time it used to take.


I share your observations on the evolution of specs and of user experience. But I disagree with the conclusion. My observation is that the more powerful experience today is not proportional to the compute, memory and storage resources we are deploying.

Looking at the bigger picture: OSes nowadays keep apps lurking in the background rather than killing them, apps constantly update and ping servers, etc. All of that means we now have a much bigger, more bloated footprint.


Jesus Christ, Encarta, now there’s something which was deeply nested in the cache of my mind


Modern computers have more bandwidth and more latency. They are faster and also feel slower.

http://danluu.com/input-lag/


A computer is a sophisticated machine. Of course you will be able to point to this or that case where input lag is worse or whatever, but the general concept that "older was faster" is wrong.


I am fully aware that my MacBook Pro is overall faster at compiling code than my Commodore 64 was ever capable of. I am also aware that getting a keystroke from a polled USB keyboard up through the many layers and context switches to get it to a screen takes more time than it took a much older system.


Those results show no clear correlation with age at all. They show that low latency was possible in the 80s and is still possible now. But throughout computer history there have been plenty of terrible high-latency devices.

I agree with OP. Generally computers today are much faster than they were in the 90s.


Indeed. As I pointed out, they are faster at compiling code and slower at processing keystrokes.


Sure, computers are faster today, but I think the point is that with all of this available horsepower we have gotten quite lazy about writing performant software. GUI apps really only feel slightly faster than they did 20 years ago, but you'd think that, given the orders of magnitude more computing performance, the time it takes to do things like open applications, open files, save, etc. would have progressed at a similar pace. Clearly there are inefficiencies when it takes over 10 seconds to open Microsoft Word on a blazingly powerful M1 CPU. Using the CLI on an M1, sure, everything follows as expected, with lightning-quick tooling at that level, but it's in the GUI software, the software most people use day to day, that the inefficiencies are exposed.


This isn't my experience. I just went to a computer museum and they had a TRS-80 from the 1980s. The keyboard latency was incredibly low, and text editing (typing programs in BASIC) felt a lot more responsive than using Google Docs, Microsoft Word 365 or a modern IDE.

A lot of things used to be slow, but they involved doing a lot of math or loading data from external storage. The computer "staples" of simple spreadsheets, text editing, using a calculator, etc. had a faster user experience than they do on today's computers.

But you are actually right that software "did less". The thing is that most applications don't need to "do a lot". What a lot of users need from computers has changed little, but we have added a ridiculous amount of overhead to it.


There was stuff that was always slow, stuff that involved processing a whole lot of data. Like building, or archiving, or generating reports etc. People wanted faster computers for that. UI was not slow, text editing was not slow, networking was not slow (WWW was, of course, because loading GIFs on dialup took time, but telnet connections into a box with less compute and less memory than my home router has nowadays were snappy). User interaction used to be real-time, non-interactive data-crunching tasks were really slow.


Reduce your screen resolution to 640x480 and most software will feel a whole lot snappier, I bet. Pushing all those extra pixels to a 4k monitor adds a lot of overhead.


Not at all. I went from 1080p to 4k and back, and there is no effect:

Teams still takes the same amount of time to load.

It takes me 25 minutes to scroll back 3 years in a large chat in the WhatsApp desktop client.

It takes me like 2 minutes to scroll back the same amount in the Telegram desktop client.


Not if it's GPU-accelerated. If it's software-rendered (or a bad GPU), then yes, you should see a big difference.


Probably because old software (e.g. emacs) feels a lot faster than new software (e.g. vscode) on the same hardware, that and nostalgia


Honestly vscode is quite snappy and I often seem to run into issues with Emacs performance.

For a long time printing a largeish string to a buffer would crash Emacs, and it still lags vscode in opening large files in my experience.


That's true, I remember starting applications and running to grab a snack from the fridge while they loaded. It was not unusual to have to be careful while using certain applications to not overwhelm them with work or they'd run out of memory and crash or freeze. I recall reading a discussion specifically on this point, suggesting ways that software should elegantly handle user input to avoid this problem instead of blaming the user for being too aggressive.


They were slow, and now they should be fast in theory, though they aren't in practice.


Anecdote alert: I remember using Winamp in the olden days and everything about it was instant, no hint of lag. Then it died. Then it was resurrected, and it is as laggy as everything else is these days.


A quote from Henry Petroski that has stuck with me:

“The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.”


Petroski was completely wrong about this. It is just that all of the gains from hardware have gone to enabling additional features rather than making the same feature set faster. Things like ripgrep make it more than obvious how fast simple software could be on modern hardware, but incentives are what they are and for most of the apps out there more features will be more profitable than more speed.


While there is an element of truth to that, we have also sacrificed performance for ease of development, especially in cross-platform use cases.

(Often a reasonable trade off, imho)


Oh my god ripgrep is fast.

I was astounded when I first used ripgrep.


For sure. The first time I used it I doubted I had used it correctly because I could not believe it had actually grepped through all the ~34k files in my Rails app in under 30 ms.



I mean, these days I summon a super compute cluster for autocomplete in my IDE and chat with another with a 100k context window to pretty print large JSONs.


Because we write extensible instead of composable software. We do not distribute functions, but apps.

How many instructions do you need in assembly to compute `1+1`? How many instructions do you need to open your search bar, get the calculator and do the calculation?
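
To make that comparison concrete, here's a toy sketch in Python rather than assembly (the function name is just for illustration):

    # The arithmetic itself is next to nothing: CPython folds 1 + 1 into a
    # single constant, and a native compiler would emit one ADD instruction.
    # Compare that with launching a calculator app, its window, its renderer...
    import dis

    def add():
        return 1 + 1

    dis.dis(add)  # prints only a handful of bytecode ops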

We also make software responsible for way too much. Cross-platform is a joke. How can we expect developers to consider every platform? Obviously they will switch to some other magical solution that promises support (and delivers overhead).

Software lacking composability also means that if the developer doesn't provide a feature, users will most likely never have it. It is therefore encouraged to over-feature everything in case somebody somewhere wants it.


It seems like sometime around the late 2000s we literally gave up on the idea of composability as a profession.

I think economics has a lot to do with it. People buy apps and services, not functions and components. All the money is at the tail end of the development process when something becomes product, and that involves a lot of things that are at odds with composability.


It is also due to how unstable software is. Is it possible nowadays to write a simple calculator function and ensure that nobody will ever need to care again for the rest of the century?

Making software stable (including, for example, the web spec) would go a long way toward improving efficiency.


A few factors influence this. One is increased screen resolution.

A 640x480 display requires about 1.2MB of memory to hold all its pixels.

A 3840x2160 display requires about 33MB of pixel data. That's a 27x increase. It also means that app assets like icons have grown in size, and because larger screens fit more of them, we use more of them too.
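
Back-of-envelope for those numbers, assuming 4 bytes (32-bit colour) per pixel:

    def framebuffer_bytes(width, height, bytes_per_pixel=4):
        # One full-screen buffer at 32 bits per pixel.
        return width * height * bytes_per_pixel

    small = framebuffer_bytes(640, 480)    # 1,228,800 bytes, ~1.2MB
    large = framebuffer_bytes(3840, 2160)  # 33,177,600 bytes, ~33MB
    print(large // small)                  # 27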

Next, storage sizes increased. Therefore we tend to keep data around longer even if it's unused. More files on storage increase file system and SSD fragmentation, seek times, etc., and a lot of file systems don't scale very well. Try creating a directory with a million files or removing a node.js directory with, say, 100,000 packages installed.

The internet connection speeds increased, so we keep more memory reserved for network buffers so the network stack can service full bandwidth if needed, and because of higher bandwidth we start serving more data in unoptimized formats like JSON because bandwidth is perceived as cheap.

Next up is bigger memory which allows a lot of slack when implementing non-performance critical pieces of code or applications.

When looked at in isolation, those things often aren't a big deal, but when we put the pieces together we get bloat. This gets compounded when applications start fighting for resources on the same machine and you get unanticipated interactions.


This is quintupled by so many applications treating memory and CPU utilization as if they were free. Electron is an outstanding example of this: let's just ship an interpreter and use JavaScript for "native" applications. So of course performance frequently sucks. But it's cheap to implement, so there we go.

When everything is a webpage, everything incurs the overhead associated with it. Steam takes up ~250 megs. By contrast, its webhelper is chewing through ~2.5 gigs for me right now.

The bloat is real.

I still program against targets with < 1 meg of RAM, but that's fairly uncommon these days professionally.


> removing a node.js directory with, say, 100,000 packages installed.

Pain. Just pain, on any system.


> This gets compounded when applications start fighting for resources on the same machine and you get unanticipated interactions.

One strategy for dealing with this is to use something like k8s to run the various components in different pods. That way you can size each pod such that there are always enough resources set aside for that component, and if it starts getting too big for its britches, you know about it before the whole thing comes crashing down (because they're the britches for just that component).

Seems innocent enough, but bearing in mind that internet traffic is bursty, you end up with a situation where your pods are 90% empty 90% of the time because they're all sized for the worst-case scenario. The compute you're paying for goes unused most of the time, but AWS will still be billing you for 100% of it.
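
Rough math on that waste, with made-up numbers for a single pod sized to its worst-case burst:

    # Hypothetical pod: reserved for the worst case, idling near its average.
    requested_cores = 4.0   # what the pod asks for (and what you pay for)
    average_cores = 0.4     # what it actually uses most of the time

    idle_fraction = 1 - average_cores / requested_cores
    print(f"{idle_fraction:.0%} of the reserved compute sits idle")  # 90%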

It's a sort of cancer of the request/response world that we live in. I think we need to go back to the 90s and make a different choice about the web. Pub/sub, maybe. The cases where we can't tolerate a few minutes of latency are quite small, provided that the data appears out-of-date during that time instead of the app being unusable, which is how we're handling it now.


It's infeasible on the desktop because each application now carries a management memory overhead: the container subsystem in the operating system eats up memory for bookkeeping data structures. Now imagine 1000 apps running and memory being eaten up by 1000 containers on your desktop.


I don't disagree, but I think that container overhead is the smaller part of the problem. It was recently recommended to me that we should just allocate double the max we've ever seen it consume--switch from 99% unutilized to 99.9% unutilized. Better to be conservative than to get paged in the middle of the night.


Re: your point about resolutions, you are correct. One should also add higher color depth to this: we had 16 or 256 colors; now we have 24- or 32-bit color, which triples or quadruples the amount of memory required to specify any pixel's color.


Trusting 100,000 packages?

Shudders.


I know. Yet most folks take packages at face value, as if they were part of some standard library.


cries in npm


One reason for more memory is that we have largely given up on the DLL dream in favour of container-level abstractions (which includes the way desktop apps are going). The idea of using the operating system to paint UI widgets is positively quaint, at least on the desktop.

We are also 64 bit now. Every pointer in every app is twice as wide as twenty years ago, and we love pointers, especially in higher level languages.
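
A quick way to see that doubling from Python; ctypes reports the platform's native pointer size:

    import ctypes

    # 8 bytes on a 64-bit build, 4 on a 32-bit one: every pointer-heavy
    # data structure roughly doubles in size when you go 64-bit.
    print(ctypes.sizeof(ctypes.c_void_p))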

The CPU usage is a lot more mysterious. We now casually do things like audio and video decoding, which previously used a lot of CPU, and where they are not offloaded to DSPs today they can be offloaded to inevitably underused spare cores. GPUs and associated acceleration are widespread enough that the pixel hit is felt mainly on the GPU. Hitting CPU limits tends to be the result of using arbitrarily complex, pointer-laden data structures to represent ideas such as what passes for CSS and the DOM. Those are not what you would invent if you cared about performance, which is a major factor in why native mobile UIs feel so much faster.


> We are also 64 bit now. Every pointer in every app is twice as wide as twenty years ago, and we love pointers, especially in higher level languages.

I really wish x32 and arm64ilp32 had taken off.


Short answer:

A lot of desktop applications are built on web frameworks that ship with their own individual instance of Chrome to run.

On Windows 95 they were C/C++ (or equivalent, Delphi, etc.) and executed directly on the chip.


It is a very simple reason: abstraction.

The more abstract software is, the less efficient it is. But it is also more flexible, easier to understand and modularise, and especially much cheaper and faster to develop.

Of all of the above, cheaper and faster to develop is the main reason, by far. It is also safer, with fewer bugs.

Software engineers are very expensive: the harder something is to develop, the longer it takes and the higher the economic risk.

To make something the "old way" takes years of work. People are not willing to wait or pay for that, and the low-level technologies will change over time, so you could get trapped in a never-ending rat race.


Let me take a tangent to this: a couple of decades ago I was using Delphi and it was marvellous. A couple of years ago I tried to do some equivalent front-end building backed by a database, in Visual Studio, and it was a bloody nightmare. Complexity beyond my understanding, little or no documentation, the automatically generated code was broken... It cost me several days and then I gave up.

So to anyone saying that modern software is slower because it does more, better: well, I give you Visual Studio as proof that's not always true.


Application framework bloat is a thing. The typical OS today is filled with so many layers of garbage that it consumes free resources. If you look at Mac OS X, you will see that Tiger apps could run in the low tens of MBs, but now it's hundreds of MB. Things nowadays are wrappers around the Electron framework, which is a total memory suck and as slow as your imagination. Electron is the gutter of software development, but all the app developers are moving their apps to build on top of it, or already have.


A lot of it is to do with abstraction and reusability. E.g. a third-party lib uses a hashmap for its implementation, but for my use case an array would have sufficed (like a small number of integers over a specific range). The third-party lib is essentially an implementation for the union of many scenarios, whereas I need only a specific subset of it. This results in inevitable bloat.

The flip side is to write all of the layers of your software yourself and control the bloat. But that has become an expensive option.
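
A toy sketch of that trade-off, with a made-up 0..999 key range:

    # General-purpose: a set/hashmap accepts any hashable key, paying for
    # hashing and per-entry overhead.
    seen_general = set()

    # Specialised: keys known to be small ints in [0, 1000) fit in a flat
    # array of flags, with no hashing at all.
    KEY_RANGE = 1000
    seen_flags = [False] * KEY_RANGE

    def mark(key):
        seen_general.add(key)    # generic path
        seen_flags[key] = True   # specialised path

    mark(42)
    print(42 in seen_general, seen_flags[42])  # True True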


Well, another thing is that old UIs used to have no heavy animations, which feel slow because the user's actions are not triggered instantly. You clicked on something and BAM, it was there, without any complex animations. Frustrating animations came in later Windows versions. IIRC there were switch-offable minimize and maximize animations, and that was all.


This has very little to do with animations, as most OS use of complex animations tends to be handled very effectively (e.g. Mission Control on the Mac or the task view on Windows 11). They do all of that while also handling complex translucency effects and not breaking a sweat, even on $500 hardware.

When you launch a program and it doesn't do something immediately, that has far more to do with networking, external layers like antivirus pre-scanning, an infected device, or apps hogging the available resources.


Graphics.

Everything else is a pittance and mostly the wrong answer. When people say "browsers" they're also wrong, because a more accurate answer is "browser compositors."

The backing layers for the windows currently open on your operating system are orders of magnitude larger than entire operating systems, multiple times over, actively sitting in memory by comparison.

So, I think that's the real answer. Everything else seems like a pittance until you're doing some desktop computing or web browsing that requires large data structures.

The two largest and most compute-expensive processes running on the system you're using to read this post are probably your OS's window server and the browser you're reading HN with, and that's because your browser is largely also managing its own compositing system.

That's why.


Yeah I mean the computing world, especially on desktop, is more about UX and things _feeling_ good instead of speed.

Imo it’s a fine trade off 90% of the time.


It is more true of the Internet. More and more web pages use more bloated libraries and more complex solutions. There is more data, more communication, more patterns, more services, more styles.

If you want a blog, there is probably no need for fancy libraries and fancy frameworks.

I think this can be applied to the whole software/hardware situation.

Do you need a fancy desktop? Doesn't GNOME look better than XFCE? But you want a shiny desktop with nice animations and nice icons.

The vendor also requires the product to be more complicated and fragile. Now you cannot have offline software. There needs to be telemetry. Therefore there need to be internet logins. Therefore hackers are a bigger threat. Why does the calculator on my Xiaomi phone need permission to gather private data? I know, maybe to select some units, but come on!

https://idlewords.com/talks/website_obesity.htm

https://www.youtube.com/watch?v=iYpl0QVCr6U

https://thewebisfucked.com/


Very simple: as computers get faster, the incentive to reduce software's resource needs decreases. You used to need to optimise, otherwise the software would be unusable or wouldn't fit on a floppy disk. You don't need to do that anymore, which is a good thing, as it makes software cheaper and faster to write. Software is more cost-efficient because it needs to be less resource-efficient.


I recently made a WPF GUI tool. It's very simple, just a dozen controls total. It's all default textboxes, buttons, and flat colors, not even any images.

It weighs EIGHTY MEGABYTES, I have to ship a dozen files, AND it still requires .NET installed on the client machine.

However, my main job is embedded firmware. Honestly, I prefer working with such constrained systems. Kilobytes of flash and RAM, CPU so slow I have to actually consider what it does with each cycle. It's so much more fun than working with desktop software.


> It weighs EIGHTY MEGABYTES,

And had you put in a bit more time and effort you could have gotten it down to 8MB, and with even more effort, 800KB. But you didn't, because it worked, and it doesn't matter, and no one cares, and you had more important things to do with your time. Multiply this by the entire software industry and you have your answer.


Because software is more complicated. And more complicated software (in addition to the primary effect of being complicated) is further slowed by abstraction and indirection, which slows performance but doesn't increase functionality.

Software that ran on DOS or Windows 3.1 WAS fast. But it also didn't have much in the way of UI.


In addition to UI and all that entails (and I assume people not working with graphics or UIs often forget or underestimate the ungodly amount that entails), there are safer memory models, encryption, multitasking, etc.


Because the end-users are paying for the costs of running software. So their resources are given zero value.

More interesting is just how expensive the server side is... It seems that things really need to get to a pretty bad point before someone cares about costs...


I concur there's obvious bloat and unnecessary cruft but also:

- Most text is now UTF-8/Unicode, which requires at least 2x the memory for the same text length in comparison with ASCII/ISO-8859.

- As others have said, code is 64-bit now, which multiplies the space and memory which used to be required by 32- or 16-bit apps.

- The relentless, unyielding and unforgiving pressure to produce results quickly under insane constraints discourages or forbids any kind of optimization. Now you just ship your interpreted code with an Electron platform and call it an app.


> Most text is now UTF-8/Unicode, which requires at least 2x the memory for the same text length in comparison with ASCII/ISO-8859.

No? Are you thinking of UTF-16?


> No?

Or maybe the author's native language is not English. :D UTF-8 does require twice as many bytes for Cyrillic, for example.

Besides, there are a lot of other operations that happen in a modern OS that make text editing more expensive. E.g. modern applications need to account for bidirectional text, so things like "determining the correspondence between a character on screen and in memory" suddenly require table lookups to determine whether the character is RTL or LTR.
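
A quick check of the Cyrillic point with Python's built-in codecs:

    # The same six-character Cyrillic word: 2 bytes per character in UTF-8,
    # 1 byte per character in a legacy single-byte code page like cp1251.
    word = "привет"
    print(len(word), len(word.encode("utf-8")), len(word.encode("cp1251")))
    # 6 12 6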


Your conjecture is correct. I should have said that UTF-8 could require more memory. My native language is not English. Apologies for any inaccuracies.

Also, I concur with your point re: complex text editing.


Cyrillic cannot be expressed in ASCII/ISO-8859, so comparing the sizes doesn't make sense.


Well, it can be expressed with a one-byte encoding like cp1251, cp866 or KOI8-R, which are commonly (albeit incorrectly) called ASCII.


Software does more things, with more graphics, and there are more and more layers of abstraction which sometimes can make development much faster.


Thank you, teleforce, for necroing this in the middle of yet another massive RAM and storage price crash.

Are we going back to 256-colour images, 320x240 textures in games, and super-low-quality MP3s sounding worse than audio tapes? Going back to terminals?


It's rather simple: hardware is cheaper than the developer time required to optimize software. And there's also unprecedented pressure to continually ship new features; everything else takes a back seat.


Reminds me of this [1] great talk from Casey Muratori.

[1]: https://www.youtube.com/watch?v=kZRE7HIO3vk



