netdevphoenix's comments | Hacker News

Love it when the media starts reflecting the bubble fears HN has described for at least 2 years. OpenAI might not crash yet, but talk of bubbles is likely to tighten AI companies' access to investor money imo. The days of investors blindly showering money on anything AI look to be coming to an end. Actual RoI of existing AI systems is likely to come under increasing scrutiny.

> We now have high speed internet access everywhere

This is such an HN comment, illustrating how little your average HN user knows of the world beyond their tech bubble. If you'd said internet access everywhere, you might have something of a point. But "high speed internet access everywhere" sounds like "I haven't travelled much in my life".


> The current generation of LLM's have convinced me that we already have the compute and the data needed for AGI, we just likely need a new architecture

This is likely true, but not for the reasons you think. It was arguably true 10 years ago too. A human brain uses approx 100 watts per day and, unlike most models out there, the brain is ALWAYS in training mode. It has about 2 petabytes of storage.

In terms of raw capabilities, we have been there for a very long time.

The real challenge is finding the point where we can build something AGI-level with the stuff we have. Right now we might have the compute and data needed for AGI, yet lack the tools needed to build a system that efficient. It's like a little dog trying to get into a fenced house: the geometrically shortest path between the dog and the house might not be accessible to the dog given its current capabilities (short legs, no ability to jump high or push through the fence in between), so a longer path can actually be the quickest way to reach the house.

In case it's not obvious, AGI is the house, we are the little dog, and the fence represents the current challenges of building AGI.


The notion that the brain uses less energy than an incandescent lightbulb and can store less data than YouTube does not mean we have had the compute and data needed to make AGI "for a very long time".

The human brain is not a 20-watt computer ("100 watts per day" is not right) that learns from scratch on 2 petabytes of data. State manipulations performed in the brain can be more efficient than what we do in silicon. More importantly, its internal workings are the result of billions of years of evolution, and continue to change over the course of our lives. The learning a human does over its lifetime is assisted greatly by the reality of the physical body and the ability to interact with the real world to the extent that our body allows. Even then, we do not learn from scratch. We go through a curriculum that has been refined over millennia, building on knowledge and skills that were cultivated by our ancestors.

An upper bound of compute needed to develop AGI that we can take from the human brain is not 20 watts and 2 petabytes of data, it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.


> it is 4 billion years of evolution in a big and complex environment at molecular-level fidelity. Finding a tighter upper bound is left as an exercise for the reader.

You have great points there and I agree. The only issue I take is with your remark above. Surely, by your own definition, this is not true: evolution by natural selection is not a deterministic process, so 4 billion years is just one of many possible lengths of time it could have taken, not necessarily the longest or the shortest.

Also, re "The human brain is not a 20-watt computer ("100 watts per day" is not right)", I was merely saying that there exist an intelligence that consumes 20 watts per day. So it is possible to run an intelligence on that much energy per day. This and the compute bit do not refer to the training costs but to the running costs after all, it will be useless to hit AGI if we do not have enough energy or compute to run it for longer than half a millisecond or the means to increase the running time.

Obviously, the path to designing and training AGI is going to take much more than that, just as it did for the human brain. But given that the path to the emergence of the human brain wasn't the most efficient one, due to the inherent randomness of evolution by natural selection, there is no need to pretend that all the circumstances around the development of the human brain apply to us: our process isn't random at all, nor is it parallel at a global scale.


> Evolution by natural selection is not a deterministic process so 4 billion years is just one of many possible periods of time needed but not necessarily the longest or the shortest.

That's why I say that is an upper bound - we know that it _has_ happened under those circumstances, so the minimum time needed is not more than that. If we reran the simulation it could indeed very well be much faster.

I agree that 20 watts can be enough to support intelligence and if we can figure out how to get there, it will take us much less time than a billion years. I also think that on the compute side for developing the AGI we should count all the PhD brains churning away at it right now :)


"watts per day" is just not a sensible metric. watts already has the time component built in. 20 watts is a rate of energy usage over time.

This looks quite concerning imo. We all know this is a bubble. When the transformer implosion happens, you can be sure that OpenAI will be ground zero. All these investors feeding OpenAI and all these adjacent companies exposing themselves to OpenAI will suffer huge losses. Everyone is chasing growth so hard that they are making questionable choices regarding returns from a far future that may never come. And let's be clear, the future that is going to pay this off is a future where this tech or a direct successor to this tech brings about a level of general learning skills and autonomy that should be pretty close to a third revolution. Anything else is massive loves for all of these companies.

Nah, this is a repeat of Google's early days. They built storage at such a scale that it was hard for anyone else to compete in anything that required storage, like email.

OpenAI is doing the same with compute. They're going to have more compute than everyone else combined. It will give them the scale and war chest to drive everyone else out. Every AI company is going to end up being a wrapper around them. And OpenAI will slowly take that value too, either via acquisition or by cloning successful products.


But is OpenAI building that compute or are they renting it?

OpenAI and Anthropic are signing large deals with Google and Amazon for compute resources, but ultimately it means that Google and Amazon will own a ton of compute. Is OpenAI paying Amazon's cap ex just so Amazon can invest and end up owning what OpenAI needs over the long term?

For those paying Google, are they giving Google the money Google needs to further invest in their TPUs giving them a huge advantage?


Practically, it doesn't matter, just like it didn't matter for Google that storage got many orders of magnitude cheaper. By the time training a novel LLM and serving it to a billion users is as trivial as providing 1GB of email storage is today, there will be other moats. They'll have decades of user history and a monetization framework that will be hard to overcome.

Google is a viable competitor here.

Everyone else is missing part of the puzzle. They theoretically could compete but they're behind with no obvious way of catching up.

Amazon specifically is in a position similar to where they were with mobile. They put out a competing phone, but with no clear advantage it flopped. They could put out their own LLM, but they're late. They'd have to put out a product that is sufficiently better to overcome consumer inertia. They have no real edge or advantage over OpenAI/Google to make that happen.

Theoretically they could back a competitor like Anthropic, but what's the point? They look like an also-ran these days, and ultimately who wins doesn't affect Amazon's core businesses.


FB seems to have finally figured it out, and their stock took a huge hit for the infra investment. Also, despite being behind on SOTA models and despite huge human-capital investments in research, I believe they are benefiting greatly from OAI and the like.

Every image/video/text post on a meta app is essentially subsidized by oai/gemini/anthropic as they are all losing money on inference. Meta is getting more engagement and ad sales through these subsidized genai image content posts.

Long term they need to catch up, and training/inference costs need to drop enough that each genai post costs less than the net profit on the ads, but they're in a great position to bridge the gap.

The endgame of all of this is ad sales, and Google and Meta are still the leaders there. OpenAI needs a social engagement platform, or it is only going to take a slice of Google's business.


> Meta is getting more engagement and ad sales through these subsidized genai image content posts.

Do you have any sources backing this? As in "more engagement and ad sales" relative to what they would get with no genai content


How is Anthropic an also-ran when they lead the enterprise market?

Do they? Don't big corporations just buy Copilot from Microsoft, where they already have licenses for Office, Teams, GitHub, Visual Studio, Azure, etc.?


You know Menlo is an Anthropic investor, right? The report is likely biased imo.

While I can see Anthropic or another lab leading on API usage, it is unlikely that Anthropic leads in terms of raw consumer usage, as Microsoft has the Office AI integration market locked down.


> OpenAI is doing the same with compute.

No, it’s Amazon that’s doing this. OpenAI is paying Amazon for the compute services, but it’s Amazon that’s building the capacity.


Pretty sure this "compute is the new oil" thesis fell flat when OAI failed to deliver on GPT-5 hype, and all the disappointments since.

It's still all about the (yet to be collected) data and advancements in architecture, and OAI doesn't have anything substantial there.


It's absolutely no longer about the data. We produce millions of new humans a year who wind up better at reasoning than these models, but who don't need to read the entire contents of the Internet to do it.

A relatively localized, limited lived experience apparently conveys a lot that LLM input does not - there's an architecture problem (or a compute constraint).


AI having societally useful impact is 100% about the data and overall training process (and robotics...), of which raw compute is a relatively trivial and fungible part.

No amount of reddit posts and H200s will result in a model that can cure cancer or drive high-throughput waste filtering or precision agriculture.


I think GPT-5 is pretty good. My use case is VS Code Copilot, and the GPT-5 Codex model and the 5 mini model are a lot better than 4.1. o4-mini was pretty good too.

It's slow as balls as of late though, so I use a lot of Sonnet 4.5 just because it doesn't involve all this waiting, even though I find Sonnet kinda lazy.


Sure, GPT-5 is pretty good. So are a dozen other models. It's nowhere near the "scary good" proto-AGI that Altman was fundraising on prior to its inevitable release.

More to the point, where is the model that beats GPT-5? A level that fell this flat should have been easy to jump over if the scaling narratives were holding.

Google, too, has a lot of compute. Not to mention the chips to power the compute.

And they own the compute, as opposed to renting some of it. And they have the engineers to utilise that compute.

If only everyone in the world had compute in their pockets or on their desk…

> Every AI company is going to end up being a wrapper around them.

the race is for sure on: https://menlovc.com/perspective/2025-mid-year-llm-market-upd...


seems like a flawed assumption when the cost of tokens -> 0

Like in politics, all they care about is getting out before shtf and passing the bag to the next sucker while making $$$ in the meantime.

Are we at the Pets.com stage of the bubble yet?

I started working in 1997 at the height of the dot com bubble. I thought it would go on forever but the second half of 2000 and 2001 was rough.

I know a lot of people designing AI accelerator chips. Everyone over 45 thinks we are in an AI bubble. It's the younger people that think growth is infinite.

I told them to diversify from their company stock but we'll see if they have listened after the bubble pops


You are stating a lot of things as fact that aren't really supported. We don't know this is a bubble, we don't know that there will be a transformer implosion (whatever that means), we don't know that OpenAI would be ground zero if this is a bubble and it pops, etc.

No one ever knows before these things happen. These predictions are obviously always conjecture; they can't be stated as fact, ever. At best you can give some supporting evidence, often based on similar prior art.

loves is typo for losses I assume?

Yes, sorry. I can't edit the post anymore

Sounds like for most implementations of DTs you have to go all in, which is likely overkill for many LoB apps doing CRUD with some custom logic on top. The ideal would be to be able to delegate some modules to a separate system using DTs and write the rest in your good old OOP.

There are some analogies between your good old OOP and dependent types, in that a derived object in OOP has a "type" (which is reflected in its methods dictionary - it's not merely a runtime "tag") that varies at runtime based on program logic. You can implement some of this with typestate, but for full generality you need to allow for downcasts that might only become statically checkable when you do have full dependent types.
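To make the typestate point concrete, here is a minimal sketch in Python (the ClosedFile/OpenFile names and the handle function are hypothetical, purely for illustration): each state gets its own class, transitions return a value of the next state's type, and an isinstance check plays the role of the downcast that a full dependent type system could sometimes discharge statically.

    # Minimal typestate sketch: the object's state is encoded in its type.
    # ClosedFile/OpenFile are hypothetical names used for illustration.
    from __future__ import annotations


    class ClosedFile:
        def __init__(self, path: str) -> None:
            self.path = path

        def open(self) -> OpenFile:
            # Transition: the return *type* tells you the new state.
            return OpenFile(self.path)


    class OpenFile:
        def __init__(self, path: str) -> None:
            self.path = path

        def read(self) -> str:
            return f"contents of {self.path}"

        def close(self) -> ClosedFile:
            return ClosedFile(self.path)


    def handle(f: ClosedFile | OpenFile) -> str:
        # The static type here is a union; a runtime check (the "downcast")
        # recovers the precise state, which full dependent types could
        # sometimes prove without the check.
        if isinstance(f, OpenFile):
            return f.read()
        return handle(f.open())


    print(handle(ClosedFile("notes.txt")))

The design choice is the usual typestate trade-off: invalid transitions become type errors, at the cost of threading the new state value through your code.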

Given that we don't really have a precise definition of "alive", it should not be surprising that we are unable to tell the precise moment a person dies.

Miracle Max gave us a clear definition, if I recall: you die when you are "all dead"; as long as you are mostly dead, you are slightly alive...

I'll let myself out now.


“If we were not perfectly convinced that Hamlet's Father died before the play began, there would be nothing more remarkable in his taking a stroll at night, in an easterly wind, upon his own ramparts, than there would be in any other middle-aged gentleman rashly turning out after dark.”

My main question with all of this LLM application is: is there any way to track the hallucination rate, or to make them improve when they inevitably hallucinate?

> They definitely did, Intel existing is probably an issue of national security at this point, if Intel fell then there'd be the risk of some other nation's company being part of the duopoly.

Mind elaborating? Who are the players in the duopoly?


We currently have an all-American oligopoly in the CPU market - Intel, AMD, Apple (ARM) and Qualcomm (ARM).

There are hardly any non-American CPU designers out there.


I'm not sure why Arm is in parentheses twice, when it's a full-blown, non-American CPU designer on whose coat-tails Apple and Qualcomm have been riding.

RISC-V moved its HQ to be a non-American CPU designer, but perhaps you don't find them credible (yet).


Apple and Qualcomm only use the ARM ISA at this point.

And no, Apple and Qualcomm are the standard setters in ARM these days. Should they drop ARM for something else... ARM will be on the same trajectory that MIPS ended up on.

RISC-V is just an ISA standard, the standard body is not a CPU designer in any shape or form.


Presumably referring to the logic foundry business where TSMC is the monopoly power and Intel, Samsung and SMIC are looking to turn it into a duopoly.

Or they could be referring to the Wintel monopoly (Windows+Intel), or the x86 duopoly (Intel+AMD), or the FPGA duopoly (Altera=>Intel + Xilinx=>AMD)...

Let's not forget GloFo, although they are more interested in bulk at this point.

GlobalFoundries sent their EUV machine back (and paid a fat restocking fee to do it); they've stopped trying to compete at the leading edge of logic processes.

SMIC has a DUV multi-patterning 7 nm node which is already economically uncompetitive with EUV 7 nm nodes (except for PRC subsidies) and the economics of DUV only get worse further down, but at least they're trying and will certainly be the first client to use the Chinese EUV machines, whenever those come online.


Took me a bit of scrolling to find this. I believe most of the other folks are functional devs or something. The 5 functions on a single line wouldn't pass code review in most .NET/Java shops.

The rule I was raised with was: you write the code once and someone in the future (even your future self) reads it 100 times.

You win nothing by having it all smashed together like sardines in a tin. Make it work, make it efficient and make it readable.


If you get an exception, you might not know where it comes from unless you get a stack trace. The code looks nice but it's not practical imo.
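As a toy illustration of the trade-off (in Python rather than the Clojure under discussion, with hypothetical data; the point is language-agnostic): the one-liner fuses several steps, while the split version gives each intermediate a name you can inspect, and a failure is easier to pin to a specific step.

    words = ["Apple", "", "banana", "APPLE"]   # hypothetical input

    # One-liner: four operations fused together on a single line.
    result = sorted(set(map(str.lower, filter(None, words))))

    # Same logic, one named step per line: each intermediate can be
    # inspected, and an error in any step is easier to localise.
    non_empty = filter(None, words)
    lowered = map(str.lower, non_empty)
    unique = set(lowered)
    result_split = sorted(unique)

    print(result)        # ['apple', 'banana']
    print(result_split)  # ['apple', 'banana']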

I use Clojure all the time and I haven't noticed the gripe you've got, but these are built-in features of (somewhat) popular programming languages. Might not be for you, but functional programming isn't for everyone.
