>> “This code should last 20 years” should, for most people, be fairly low on the list of desires for a technology stack.
>> But let’s not kid ourselves that the stuff we’re writing is even intended to last a long time.
Well, it depends. If you write custom software for enterprises, they very much see it as a long-term investment. The software grows with the company and is embedded in it. Nobody wants to pay for complete rewrites every five years.
Imagine if all the wealth now spent on battling climate effects had been spent in the past on preventing them instead. What a world we could have. A dreamer, I know. I hope it is not too late and we still have a chance.
> The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used.[4][5][6][7] The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables[a] or slab, before the term byte became common.
> The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256.[8] The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte.[9] Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively.
Linux desktops definitely are usable but there are still rough edges.
E.g. in GNOME, when I open a media file with VLC, the focus remains on the Files app even if the video opens fullscreen over the top. So when I press space to pause, I instead open a second preview of the same file in the background.
Or I use the GNOME media player and it turns subtitles back on every time I use it, despite me having previously turned them off.
Or the sleep system doesn't suspend to disk after an hour of suspend-to-RAM, so my battery is flat when I return later.
Little stuff like that is an unnecessary pain, just for a lack of polish. I persevere with Linux because of all the little things it does better but it's a hard sell to friends and family.
Linux isn't perfect, but neither is Windows, and it's actually easier to get people to switch than you'd think, because so many things suck really hard on Windows (the update process, for instance!) and people are often tired of it.
In my experience, the switch is hardest for people who are savvy with Windows and who'd need to re-learn everything in order to be as proficient with Linux. Fortunately, Windows changing menus and settings at every release can help make the switch (that was my case: I used to be a Windows XP power user, got a little bit pissed off when moving to Windows 7, and then transitioned to Linux instead of Windows 8).
Seconded. The Windows power users are the worst to land in Linux. Most users do just fine on Linux (KDE or GNOME), or on Mac, assuming someone installed it for them and takes 15 minutes showing them the right apps. Surprisingly, a lot of them seem to like it better than Windows.
But the people who know every Windows nook and cranny suffer the most. They have so much to learn and unlearn, and they have work to do on a system that fights them. They'll be the loudest, grumpiest customers in a switch. If they want to give it a chance and put in the effort, it seems to take about a month before they reach equal productivity. If they don't want to give a different OS a chance, I'd rather see them keep running Windows.
Windows has focus issues too. E.g. when I launch Windows Terminal, whether from the Start menu or the pinned icon in the taskbar, the window renders focused at first, but as soon as the shell finishes initializing, it loses focus.
Do you think so? My hardware vendor is Dell and I bought my XPS 13 as a Developer Edition with Linux out of the box.
Everything I've read suggests that this is just a fiddly config issue and that setting up the correct script combination would solve it. I just don't have the time or patience to get to the bottom of it right now.
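From what I've read, on systemd-based distros the "suspend for a while, then hibernate" behaviour should mostly be a couple of config lines rather than a custom script; roughly something like this (a sketch from my notes, assuming hibernation and swap already work, so don't take it as gospel):

    # /etc/systemd/sleep.conf
    [Sleep]
    HibernateDelaySec=3600        # hibernate after ~1 hour asleep

    # /etc/systemd/logind.conf
    [Login]
    HandleLidSwitch=suspend-then-hibernate

plus a reboot (or restarting systemd-logind) to pick it up. Whether it actually works depends on the firmware and on having enough swap to hibernate into, which is exactly the fiddly part.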
Yes. If they shipped it with Linux, they should support it with Linux. At least the version they shipped you. If it fails to suspend properly, have them fix it. System integration is their job, not yours.
I use Linux, I like it, and I won't be switching. But the other day my wifi broke because swapping my GPU changed the device name of the NIC.
I think it's got a ways to go. I know plenty of people who would be stumped by something like that, even though it can be solved with some basic internet searching.
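As far as I understand it, the usual fix is to pin the interface name to the MAC address with a systemd .link file, something along these lines (the file name, the name 'wifi0' and the MAC are all placeholders):

    # /etc/systemd/network/10-wifi.link
    [Match]
    MACAddress=aa:bb:cc:dd:ee:ff    # the NIC's real MAC address

    [Link]
    Name=wifi0                      # stable name, independent of PCI slot

But knowing that this is the knob to turn is exactly the kind of thing that would stump them.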
Well, not in my experience. I installed Fedora, which I think is a fairly friendly distro, although the moment I write "Fedora" someone is going to say "try XXX instead". To those people I would say: I want to use Linux, not distro-hop.
I installed Fedora to actually stay there and ditch Windows.
And I had several errors with this chip on my motherboard: the Realtek ALC892. Mind you, asking in the Fedora forums was an extremely unpleasant experience; I was told my reports were useless. Which they probably were, because I am not a Linux expert and I do not know which command they want me to run to get the output they need.
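For what it's worth, the kind of output people usually ask for on audio problems comes from a handful of commands like these (I'm only guessing at what they actually wanted from me):

    lspci | grep -i audio                      # which audio controller the kernel sees
    aplay -l                                   # ALSA playback devices it found
    journalctl -b | grep -iE 'alsa|snd|sof'    # sound-related lines from the boot log

Whether that is exactly what they wanted, I still don't know.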
That chipset is a decade old, extremely common, and not yet supported. The solution is to buy and use a USB sound card.
The second problem is related to Wayland/Xorg. I was using KDE Plasma with an Nvidia card and the proprietary drivers. The very latest driver made my screen flash, which made it useless.
I had to run a myriad of commands to install a previous version and keep it from being updated, which, again, is not what I would consider ready for the desktop. I should be able to go to the software center, select any previous version (you can't), and click an option that says "do not update this, please".
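For the record, the "myriad of commands" boils down to a downgrade plus a version lock, roughly something like this; the exact package name depends on where the driver came from (akmod-nvidia is the RPM Fusion one, so treat it as a placeholder):

    # roll back to an older build of the driver package
    sudo dnf downgrade akmod-nvidia

    # stop dnf from upgrading it again (needs the versionlock plugin)
    sudo dnf install python3-dnf-plugin-versionlock
    sudo dnf versionlock add akmod-nvidia

None of which a regular desktop user should ever have to type.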
So I managed to make them work, but they only worked fine with Xorg, not with Wayland. And then other apps stopped working, with blank screens, because they are only supported on Wayland (but I can't use Wayland with my Nvidia 1050 Ti card). If you report a bug, they close it with "not our bug", and so it is.
Honestly, the experience was horrible, and I did try to make it work as best I could and get involved with the community. Even here I can already see my comment at -4 points.
But that does not change the fact that the desktop experience with Linux is not yet ready, and closing your eyes and saying it is, because you manage to use it, will not help Linux at all in the future.
These are NVIDIA and Realtek issues. Linux works fine on supported hardware. Closing your eyes and saying "Linux is not ready" will not help.
I've experienced issues with the GMA500 that could have been resolved by Intel but never were. I've pinned package versions and patched drivers - that's development work and should not be exposed in a GUI. I switched to supported hardware: no issues.
No, those are my (the user's) issues, because I (the user) have an Nvidia card and a Realtek chip, and I (the user) would like to use Linux but can't because of those issues.
Telling the user (me) to buy something else and not to use certain hardware, when there is not even a database of supported hardware, is not desktop-ready.
Imagine I want to buy an AM5 motherboard that is fully supported by Linux. Where can I get that information?
Windows 11 does not run on older hardware: "not desktop ready". macOS supports much less hardware than Linux: "not desktop ready". Android is "not smartphone ready" because drivers are not pushed upstream by hardware vendors.
The hardware you own does not respect your freedom to switch OS. Nothing stops NVIDIA or Realtek from supporting Linux the way Intel and AMD do.
It would be nice if there were a way to tell which users have the mobile app installed and which don't, and to be charged only for the users who will actually use the feature we are paying for.
+1 make it per user
This decision works against Zulip long term; now we have to think before inviting/adding users to our server.
Third rug pulled from under us in a year: ZeroTier, rport, and now Zulip.
No, I don't care. I removed myself from Facebook before Instagram was even a thing. The only thing I'd be pissed about would be ISP censorship, or someone taking away my domain, but there are already strict rules around that, so I don't worry.
Anything else is pretty much just an inconvenience and the cost of doing business.