vouaobrasil's comments | Hacker News

On the other hand, subpixel rendering is absent from macOS, which makes it very difficult to use regular ol' 1920x1080 screens with modern macOS. Yes, those Retina displays look nice, but it's a shame that lower-res screens don't, because they work perfectly fine except for the font rendering.


My first (and last) 1920x1080 monitor was a 50 lb CRT I picked up on the side of the road in 2003.

I haven't owned a smartphone with a screen resolution that low in over 10 years.

I think it's an amazing feat of marketing, by display companies, that people still put up with such low resolutions.


It's still a perfectly serviceable resolution.

Of course 16:9 pushed down display costs, leading to the demise of 1920x1200, which is unforgivable ;-)

Those 120 pixels were sorely missed.


You can still get 16:10; they're just classed as "business professional" models with a matching price tag.


Buy them refurbished instead, then.


Hm... I am reading this on the 1600x900 screen of my T420s Frankenpad while sitting at dusk at a German campsite. I ordered the screen some 10 years ago off Alibaba or something, and it is exactly the resolution and brightness I need. I hope I will die before this Frankenpad, because contemporary laptops are awful in so many aspects.

You know... as you age, you really can't read all those tiny characters anyway.


It sounds like you have a proper computer anyway; do you really care about non-fixed-width fonts? Those are office suite and web browser fonts.

If something needed to be rendered in some particular way, it should have been a PDF. For everything else there’s vim.


> I think it's an amazing feat of marketing, by display companies, that people still put up with such low resolutions.

Stereo audio is still fine even though 5.1 exists

300 dpi printers are still useable even though 2400 dpi printers exist

double-glass windows are still fine even though triple-glass windows exist

2-wheel drive cars are still fine even though 4-wheel drive cars exist

Just because something new appears on the market doesn't mean it needs to displace all of its predecessors when those predecessors are good enough for the intended purpose, especially not when the new thing comes with costs of its own: power use and higher demands on GPUs, in the case of displays with higher resolution than really needed.


And feet are fine, even though shoes exist.

Fire is fine, even though ovens exist.

We're animals that are perfectly fine living naked in the wild (some still do today). It's all complete excess. Feel free to abandon the progression of tech, but I challenge you to use a modern panel for a couple of months, then try to go back to 1080p. It's like the console players who claimed 30fps was serviceable, fine. Sure, but nobody wants to go back to 30fps after they've used 60Hz or 144Hz for a non-negligible amount of time.

I also use a 1080p display from time to time. It's serviceable, but it's not comfortable, and it provides a far, far inferior experience.


It's true that they aren't interested. But I still like such screens. I used one quite recently, and it worked just fine for my needs.


A Full HD CRT from the roadside in 2003? As if this was just a thing people had happen to them? Is this some elaborate joke I'm missing?

> I haven't owned a smartphone with a screen resolution that low

Smartphone in italics, because smartphones are known for their low pixel densities, right? What?

Did you own a smartphone at all in the past 10 years? Just double checking.

> I think it's an amazing feat of marketing, by display companies, that people still put up with such low resolutions.

And how did you reach that conclusion? Did you somehow miss display companies selling and pushing 1440p and 4K monitors left and right for more than a handful of years at this point, and yet the Steam Hardware Survey still bringing out 1080p monitors as the king month to month?

Sometimes I really do wonder if I live in a different world to others.


> As if this was just a thing people had happen to them?

No, literally, on the roadside, out for trash. Disposing of CRTs has always been expensive since they can't fit in the trash, and taking them to the dump carries a fee for all the lead. At the transition to LCD, they were all over the place, along with projection TVs. There was also a lot of churn when "slimmer" versions came out, which mostly halved the depth required. Again, it was literally 50 lbs and about 2 ft deep. It took up my whole desk. It was worthless to almost anyone.

> Smartphone in italics, because smartphones are known for their low pixel densities, right? What?

Over 10 years ago I had an iPhone 6 plus, with 1080p resolution. All my phones after have been higher. Their pixel densities (DPI) are actually pretty great, but since they're small, their pixel counts are on the lower side. There's nothing different about smartphone displays. The display manufacturers use the same process for all of them, with the same densities.


I think the italics are because it's so weird that most people have more pixels on the 6" display in their pocket than on the 24" display on their desk.


CRT resolution was limited more by GPUs than by the monitor itself. They don't have fixed pixels like LCD/OLED.


> GPUs than the monitor itself.

No, it was limited by the bandwidth of the beam-driving system, which the manufacturers obviously tried to maximize. This limit is what set the shadow mask and the widths of the RGB subpixels/strips. The electron beam couldn't produce different colors itself, so differently colored phosphor patches were used.

But since bandwidth is mostly resolution * refresh, you could trade between the two: more refresh, less resolution; more resolution, less refresh. Early on, you had to download a "driver" for the monitor, which had a list of the supported resolutions and refresh rates. Eventually a protocol was created to query the supported resolutions straight from the monitor. But you could also just make your own list (still can) and do funky resolutions and refresh rates, as long as the drive circuit could accommodate them.

This monitor could do something like 75Hz at 800x600, and I think < 60 at 1080p.
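
A rough back-of-the-envelope sketch of that trade-off, in Python (my own simplification: treat the limit as a fixed pixel rate of width x height x refresh, ignoring blanking intervals, which eat another 20-30% in practice):

    # Take the 800x600@75 mode as the monitor's rough pixel-rate budget.
    budget = 800 * 600 * 75  # ~36 Mpixels/s

    # See what refresh rate that same budget allows at other resolutions.
    for w, h in [(640, 480), (800, 600), (1024, 768), (1920, 1080)]:
        print(f"{w}x{h}: ~{budget / (w * h):.0f} Hz")

    # 640x480:   ~117 Hz
    # 800x600:   ~75 Hz
    # 1024x768:  ~46 Hz
    # 1920x1080: ~17 Hz

Which is at least consistent with the "< 60 at 1080p" recollection above, if anything on the generous side.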


I got a 21" Hitachi Superscan Elite or Supreme around that time from a gamer.

Because that thing could only do the BIOS text modes, and standard VGA at 640x480 at 60 or 70Hz. Anything else just showed OUT OF SYNC on the OSD, and then switched off.

Except when you fed it 800x600@160Hz, 1024x768@144Hz, 1280@120Hz and 1600x1200@70 to 80Hz, or anything weird in between.

I could easily do that under XFree86 or early X.Org. A gamer under DOS/Windows rather couldn't, not even with SciTech Display Doctor, because most games at that time used the hardware directly, with only a few standard options to choose from.

OUT-OF-SYNC zing

Was nice for viewing 2 DIN A4 pages side by side at original size :-)

Fortunately a Matrox I had could drive that, as could a later Voodoo3 which also had excellent RAMDACs and X support.


That sounds weird and fun, although I can't seem to find the pattern that would have resulted in those numbers. 1024×768@144 (8bpc) works out to 340 MB/s, while 800×600@160 (8bpc) works out to just 230 MB/s; it should have been able to refresh even faster. Or is it some other limitation that's not bandwidth? [0]

Was a bit surprised by that double-A4 thing btw, but I did the math and it checks out: paper is just surprisingly small compared to desktop displays, both size- and resolution-wise (1612×1209 would have put you right up to 96 ppi, with regular 1600×1200 being pretty close too). It's kind of ironic even: the typical 24" 1080p 60 Hz LCD spec that's been with us for decades now just barely fits an A4 height-wise, and has a slightly lower ppi. Does have some extra space for sidebars at least, I guess.

[0] Update: ah right, it wasn't the pixel clock being run to its limit there, but the horizontal frequency.
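
For the curious, a quick sketch of both calculations in Python: the byte-bandwidth figures above, plus the horizontal scan frequency from the footnote (the ~5% vertical blanking overhead is my own rough assumption):

    VBLANK = 1.05  # assume ~5% extra scan lines for vertical blanking (rough guess)

    for w, h, hz in [(800, 600, 160), (1024, 768, 144), (1600, 1200, 75)]:
        mb_s = w * h * hz * 3 / 1e6      # 8 bits per channel, 3 channels
        khz  = h * hz * VBLANK / 1e3     # horizontal scan frequency
        print(f"{w}x{h}@{hz}: {mb_s:.0f} MB/s, ~{khz:.0f} kHz horizontal")

    # 800x600@160:  230 MB/s, ~101 kHz
    # 1024x768@144: 340 MB/s, ~116 kHz
    # 1600x1200@75: 432 MB/s,  ~94 kHz

The byte bandwidths are all over the place, but the horizontal frequencies cluster around 95-116 kHz, which fits the footnote's conclusion.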


Those are the resolutions and frequencies I remember having tested without trouble. Indeed I could go even faster on the lower ones, but didn't dare to for long, because they sometimes produced very weird high-frequency noises, sometimes blurriness, and I didn't want to break that wonderful piece of kit.

I mostly cared about 1600x1200 at 75Hz back then. All the other stuff was just for demonstration purposes for other people coming by, or for watching videos fullscreen in PAL.

It seemed to be really made for that resolution at a reasonable frequency, with the BIOS & VGA modes implemented just so you could see startup and change settings, and all the rest just a 'side effect'.


Yeah, DDC and EDID were standardized in '94, and were widely available and working well by '98 - if you were on Windows at least, running fresh hardware.

> This monitor could do something like 75Hz at 800x600, and I think < 60 at 1080p.

Assuming both modes were meant with 24-bit color ("true color"), that'd mean 17.36 Hz tops then for the FHD mode, ignoring video timing requirements. I don't think you were using that monitor at 17 Hz.

Even if you fell back to 16 bit color, that's still at most 26 Hz, which is miserable enough on a modern sample-and-hold style display, let alone on a strobed one from back in the day. And that is to say nothing of the mouse input feel.
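
A minimal sketch reproducing those numbers, under the same assumption that the limit scales with bytes per second rather than pixels per second:

    budget = 800 * 600 * 75 * 3      # the 800x600@75 true-color mode, ~108 MB/s
    fhd = 1920 * 1080                # pixels per FHD frame

    for name, bytes_per_px in [("24-bit", 3), ("16-bit", 2)]:
        print(f"{name}: {budget / (fhd * bytes_per_px):.2f} Hz tops")

    # 24-bit: 17.36 Hz tops
    # 16-bit: 26.04 Hz tops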


They still had very real limitations in terms of the signal they accepted, and color CRTs specifically had discrete color patches forming discrete, fixed number of triads.


That does not imply that future progress is a good thing, nor does it imply that future progress will even be useful to the majority. It boggles my mind how some people make the logical inference that "all progress is good" based on "some past progress was useful".


Do you think that having fast, distraction-free access to relevant information is not useful?


It is useful, but it assumes the information is trustworthy. At least with websites you could discern some credibility based on the organization publishing the information. With AI summaries, it's just "Google's latest model seems to think this is the answer, good luck!"


You can always follow the source links... or ask for them...


I think it's too much information to handle, and that information can interfere with the enjoyment of what we already have. It might be useful in an economic sense, but its usefulness to life is questionable.

Personally, I think it's not such a good thing after all.


What's interesting is that:

0. Internet is initially pretty good.

1. Google introduces search algorithm that is pretty good.

2. SEO becomes a thing because of Google, and makes the web bad.

3. AI (including Google's AI) bypasses that.

4. The web is eradicated, and Google/other AI companies are the only place where you can get information...

Seems like the web would have been better off without Google.


You are forgetting:

0.5 User 1,563,018 puts their credit card details in to make the world’s first online transaction!

0.50001 The web is filled with spam and unsearchable for real information

0.9 Some smart nerds figure out an algorithm to find the signal in the noise

1.9 Google throws out “don’t be evil” because search is a cost, ads are money

4.1 Google and the rest of the AIs subvert human decision-making in ways that marketers could only ever dream of

∞.∞ Humans toil in slavery to the whims of our corporate overlords

I wanted a different dystopia.


The best thing is to turn off recommendations and history, which can be done with YouTube. You can also use uBlock Origin to block even more of the controls (on YouTube and on other websites too) that make websites unintuitive to use.


That, and the government support of big business in many other ways as well.


Not really. Renting will be forced via iterations of the prisoner's dilemma, and all it needs is a proportion of people who will rent at some point to keep the system going. A few standouts that make a break from renting will just be a mantlepiece that gives us the illusion of choice.


Well that's interesting and true. Looking at the OS landscape, the previous battleground, the software rent model has made a number of companies very rich. And yet, the few standouts gave us Linux and OSS. Do we need more than a few to achieve the same here? I'm personally not paying for Claude, Perplexity, Anthropic, OpenAI, etc... I prefer to obtain open-source weights and run them on my local GPU.


Even if that were true, they should still get less on principle. Making the elite richer gives them even more power that they don't need.


On what principle? Why should anyone be barred from negotiating the best deal they can get in a fair exchange?


The keyword is fair. They have the power to influence the government through bribes, so it's never fair. And they can use their wealth to take advantage of the commons as well. They should be sent to prison, the majority of them.


All that matters here is that the CEO and company are negotiating without one side being able to apply some sort of external pressure to force the other to capitulate to their demands.

Unless of course you're claiming that CEOs are bribing "the government" to force companies to increase their pay, which would be quite the theory.

So again, what is the principle that says CEOs should be paid less than a company is willing to pay them to work in that role? Should all workers decline to maximize their compensation, or just those you deem to have too much money? If the latter, what is the threshold?


That was already a trend in software for a long time, unfortunately. Smartphones were largely responsible for this trend because they promoted disposable devices and vendor lock-in with endless new features that few people needed (I mean, needed in the sense of actually improving how they go about their daily life). And what about the web? Sophisticated JavaScript that prevents scrolling and loads hundreds of megabytes into your browser, so that now running a browser needs a new computer.

Software respecting user's time was a thing when computers were used by a minority, because the audience was discerning about such things. Nowadays, software is too complicated for the masses to understand, and so any advance will be adopted, whether it is ultimately a good thing or bad.

I feel like we need a second revolution in software, sort of like how open source provided the first.


> The United States is in a race to achieve global dominance in artificial intelligence. Whoever has the largest AI ecosystem will set the global standards and reap broad economic and security benefits

Correction. Whoever has the largest AI will be the first to play the "defect" strategy in the Prisoner's Dilemma, and will thus be the first to usher in a new era of an arms race where everyone loses except those who are at the very top and can take advantage of the losers.


> and will thus be the first to usher in a new era of an arms race where everyone loses except those who are at the very top and can take advantage of the losers.

Unless the AI is stupid, in which case the leader pushes the button and finds that the AI either hallucinates a solution that plain doesn't work, or turns out to be acting according to a completely different set of moral values than they wanted, values which can be the straw-est of straw-man versions of either your own or your opponent's policies, which nobody would ever actually expect to meet in a real political takeover scenario.


True, but the arms race does not have to be solely about the leader's use of AI. It is mainly the forced acceleration of economic development, which will lead to superficial changes in the long run that won't really benefit anyone, and use a lot of energy in the meantime.


Does that change the urgency? Or just the perceived morality? TBH just because it's a question of strategy and survival doesn't change its importance.


Realizing it changes the urgency with which we should find a workaround that avoids the prisoner's dilemma altogether. We must go beyond thinking about strategy and "survival", because we have restricted our thinking of "survival" to a narrow economic domain. And that domain is the continued enrichment of the rich, which we call "strengthening the economy". However, we must find new ways of thinking that go beyond this narrow view, which, although seemingly good in the short term, is leading to long-term disaster.

That will involve finding a new sort of courage to stop playing the arms race and re-defining the terms of the game so that economic supremacy is not the prime goal, as it is leading to global instability, climate change, inequality, and eventual technological dystopia.

I mean, do we really want to live in a world where everyone is an AI button-pusher, staying on the level of the superficial, which is rather meaningless in the grand scheme of things?


That sounds like a complicated way of wishing that if everyone would just take on selfless values, the world could be a better place. But those are a product of very specific cultures and you can't even extrapolate them to the majority of the world's population.

Reminds me of the plot twist in the Firefly movie.


Which is why I still code and write with Sublime Text!


i use nvim and arch btw


I hand write all my code directly in binary.


I do that too, and then execute it by hand as well, looking up every instruction on paper and changing the registers on paper as needed.


> i use nvim and arch btw

^ Has C# projects on github. Don't think that statement is true.

I use vim, kitty, and arch btw


> ^ Has C# projects on github. Don't think that statement is true.

Why?


I fucking love Sublime!


Yeah! Fast, good looking, no AI integration, ability to turn off blinking cursor and customize, and paying for it supports a small company in Australia rather than a huge, soulless megacorp.

