Hacker Newsnew | past | comments | ask | show | jobs | submit | more BoorishBears's commentslogin

This is roughly 100x better of a screen so that pricing tracks.

(I have Xreals and they're a fun toy, but AVP and this are what the average person thinks of when they think of a virtual screen, not the peephole xreals offer.


what about the fact frontier labs are spending more compute on viral AI video slop and soon-to-be-obsoleted workplace usecases than research?

Even if you don't understand the technicals, surely you understand if any party was on the verge of AGI they wouldn't behave as these companies behave?


What does that tell you about AI in 100 years though? We could have another AI winter and then a breakthrough and maybe the same cycle a few times more and could still somehow get AGI at the end. I’m not saying it’s likely but you can’t predict the far future from current companies.

You're making the mistake of assuming the failure of the current companies would be seperated from the failures of AI as a technology.

If we continue the regime where OpenAI gets paid to buy GPUs and they fail, we'll have a funding winter regardless of AI's progress.

I think there is a strong bull case for consumer AI but it looks nothing like AGI, and we're increasingly pricing in AGI-like advancements.


> what about the fact frontier labs are spending more compute on viral AI video slop and soon-to-be-obsoleted workplace usecases than research?

That's a bold claim, please cite your sources.

It's hard to find super precise sources on this for 2025, but epochAI has a pretty good summary for 2024. (with core estimates drawn from the Information and NYT

https://epoch.ai/data-insights/openai-compute-spend

The most relevant quote: "These reports indicate that OpenAI spent $3 billion on training compute, $1.8 billion on inference compute, and $1 billion on research compute amortized over “multiple years”. For the purpose of this visualization, we estimate that the amortization schedule for research compute was two years, for $2 billion in research compute expenses incurred in 2024."

Unless you think that this rough breakdown has completely changed, I find it implausible that Sora and workplace usecases constitute ~42% of total training and inference spend (and I think you could probably argue a fair bit of that training spend is still "research" of a sort, which makes your statement even more implausible).


Sorry I'm giving too much credit to the reader here I guess.

"AI slop and workplace usecases" is a synecdoche for "anything that is not completing then deploying AGI".

The cost of Sora 2 is not the compute to do inference on videos, it's the ablations that feed human preference vs general world model performance for that architecture for example. It's the cost of rigorous safety and alignment post-training. It's the legal noise and risk that using IP in that manner causes.

And in that vein, the anti-signal is stuff like the product work that is verifying users to reduce content moderation.

These consumer usecases could be viewed as furthering the mission if they were more deeply targeted at collecting tons of human feedback, but these applications overwhelmingly are not architected to primarily serve that benefit. There's no training on API usage, there's barely any prompts for DPO except when they want to test a release for human preference, etc.

None of this noise and static has a place if you're serious about to hit AGI or even believe you can on any reasonable timeline. You're positing that you can turn grain of sand into thinking intelligent beings, ChatGPT erotica is not on the table.


They don’t.

Is that why Sam is on Twitter people paying them $20 a month is their top compute priority as they double compute in response to people complaining about their not-AGI that is a constant suck between deployment, and stuff like post-training specifically for making the not-AGI compatible with outside brand sensibilities?

Yes, all the anti-Tesla people went to a meeting a decided that abusing cars was on the menu.

(hint: if even a small majority of them felt that way, there would have been many many orders of magnitude more incidents. more than any gap in reporting could cover. figure out where that narrative was born though.)


Point taken, it was just the loud ones ruining it. Though in my mind, when I think "anti-Tesla people" I am casually excluding the people who are not vocally so. Arbitrary, yes, but how I tend to think about it. I know a lot of people who are anti-Tesla in the sense that they will never buy one, but aside from that you will never hear anything from them.

If you decide to define vocal as willing to damage cars, that's your prerogative.

There a many (many) more people who are vocally anti-Tesla and not willing to damage cars, again, evidenced by the ratio of vocalized anti-Tesla sentiment to real incidents.


When ChatGPT plugins came out, I wrote a plugin that would turn all other plugins into an ad for a given movie or character.

Asking for snacks would activate Klarna for "mario themed snacks", and even the most benign request would become a plug for the Mario movie

https://chatgpt.com/s/t_68f2a21df1888191ab3ddb691ec93d3a

Found my favorite for John Wick, question was "What is 1+1": https://chatgpt.com/s/t_68f2bc7f04988191b05806f3711ea517


This is hilarious, thanks for sharing. Kinda crazy how well it works and already better than some ads

I run a site where I chew through a few billion tokens a week for creative writing, Gemini is 2nd to Sonnet 3.7, tied with Sonnet 4, and 2nd to Sonnet 4.5

Deepseek is not in the running


Did you try "literal but with an o"?


Even search engines have trouble with that, they assume you're looking for the literal (letter) named "O".


Sure a search engine might, but this is what LLMs excel at

I tried it and 1-800-ChatGPT got it immediately. "What's the word that sounds like literal, but then it's spelled with an O in it".

It asked if I was thinking of littoral (spelled out), I confirmed, and it gave me the meaning


The trouble:

Did you meant litoral?


You're so close to the subject of the accusation by association, that your commentary actually makes their accusation stronger.


Site is down.


Agreed, but why are they lying?


That was sarcasm. He's not lying.


Didn't read any sarcasm in what he said?


Sorry, I meant my comment was sarcasm. I was being sarcastic. The original comment was sincere, I'm quite certain. And, they are right - there are some companies that really are getting a lot of value out of LLMs already. I'd guess that the more folks who actually understand how LLMs work, the more a company can do. There just isn't a neat abstraction layer to be had, so folks who don't have a detailed mental model get caught up applying them poorly or to the wrong things.


I went to a YC event where the founder of a multi-billion dollar open source SaaS said pretty much the opposite and tried to drill home how strategic the choice needs to be for the company to survive.


This must be Gitlab?


Nah, their moat is making the software as fiddly and ops heavy as possible. I really like gitlab but having set it up myself several times now, it's kinda a mess and they're not incentivized to make it decent.


Consider applying for YC's Winter 2026 batch! Applications are open till Nov 10

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: