I spent 9 months at IBM in 1999. At that time, Lou’s legacy had already been solidified. He saved IBM. While not everyone agreed with his decisions, there was a culture of both honesty to customers and innovation that permeated the company. In contrast, look what happened to HP without such great leadership. Once a shining light in Silicon Valley, it turned into a shell of its former self.
He's right. I’ve seen poverty in India, but I misunderstood it.
There was a 12-year-old kid who guided our boat down the Narmada after we spread my Dad's ashes. He was not in school because he wanted money.
I told him I’d pay him double for his day's work, and keep paying him, if he’d go back to school during the day and only row boats at night.
He said no. Just give me what you owe me.
He had no hope that education in the government schools would meaningfully change anything for him. Poverty is not a single static state. It’s a self-reinforcing loop that requires systemic change to escape.
My experience with the kids working in Laos was similar but I have a different interpretation.
Your answer was through the eyes of an adult. 12-year-olds don't have a concept of money, of what a lot of money is, or of why they need to go to school.
Asking a 12-year-old to understand the value of money and education and life is not fair to the child.
What’s actually going on, at least in Laos, is the parents are directing the kids to do these jobs. The kids don’t understand why, but it’s what their mom wants them to do, and it makes her happy when they do it.
I think to address these problems, the better solution is to help parents be better parents. Get them jobs, get them educated, get them skills.
It may also be that, at 12 years of age, that kid had already had enough harsh life lessons, like being promised something instead of being rewarded on the spot, only to find out later that he was being tricked. I imagine something like that may have had a much more direct impact on the decision OP described, with that hard-earned "street smarts" keeping him grounded in his (undesirable) reality.
I watched a 13-to-16-year-old Lao girl come to that realization in a very upset Facebook story. While I don’t fully understand the context, she was crying and venting that her mother doesn’t love her.
Prior to this FB story, the girl’s mother had sold the girl's older 16-year-old sister to a Chinese guy for marriage. And she frequently leaves her daughter to sleep on the street, because the girl isn’t able to get home by herself.
Despite all of that, she still felt her mother loved her, and only just then did she realize it? I don’t know.
Unfortunately you're correct, but it also takes years for these kids to process these realities. There just isn’t one moment where it’s like "maybe my parents don’t love me, time to change my behavior."
Sometimes, a visible example does that. I know of numerous people in Mumbai who work as servants whose kids have gone to engineering schools and gotten corporate jobs. This sort of story - first in my family to go to college - needs to be prominent for a parent to aspire to that for their kids.
Doesn't work for the absolutely destitute of course.
> He had no hope that education in the government schools would meaningfully change anything for him.
I have over 14 years of education in developed countries, and out of those, maybe 1 year combined meaningfully helped me in my jobs/career in terms of skills.
Everything else was self-taught/learned.
There's an enormous disconnect in educational systems between what skills will get people out of poverty, what skills are great for wealthy navel-gazing students and what skills some bureaucrat decided "everyone" should have (but no one does, because no one pays attention in those classes).
And when people lose faith in the public educational system, you get a society that is dysfunctional for the majority of its citizens.
> I have over 14 years of education in developed countries, and out of those, maybe 1 year combined meaningfully helped me in my jobs/career in terms of skills.
I think you're underestimating the effect of 14 years of daily training in literacy and numeracy.
Most of the folks in this thread are focused on Meta and Yann’s departure. But I’m seeing something different.
This is the weirdest technology market that I’ve seen. Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word; now it gets rewarded with billions of dollars in valuation.
“It was the most absurd pitch meeting,” one investor who met with Murati said. “She was like, ‘So we’re doing an AI company with the best AI people, but we can’t answer any questions.’”
Despite that vagueness, Murati raised $2 billion in funding...
From a certain angle, this is the market correcting towards the abstraction.
Between inflation, fiscal capture, and the inane plethora of ridiculous financial vehicles that are used to move capital around these days, the argument could be made that the money was already funny. This is just the drop of the final veil, saying "well it's not like these numbers mean anything anymore. I do have enough yachts. Fuck it, see what you can do with it".
If you have N startups, and expect at least 1 of them to return more than N times what you invest in each, then investing that amount in each one of them still has a positive expected ROI. If none of them "hits it big" you lose all the investment money, but if any of them grows enormously you profit. Trying to pick winners and losers in advance is much more difficult; investing in an entire business sector is the VC version of an index fund.
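A toy simulation of that portfolio logic, with purely illustrative numbers (the hit rate, multiple, and check size here are assumptions, not real fund data):

```python
import random

def portfolio_multiple(n_startups=100, check_size=1.0,
                       hit_rate=0.01, hit_multiple=150.0):
    """Invest check_size in each of n_startups; each one 'hits big'
    with probability hit_rate and returns hit_multiple * check_size,
    otherwise it returns nothing."""
    invested = n_startups * check_size
    returned = sum(hit_multiple * check_size
                   for _ in range(n_startups)
                   if random.random() < hit_rate)
    return returned / invested

# Expected multiple = hit_rate * hit_multiple = 1.5x, a positive ROI,
# even though most individual startups (and some whole runs) return 0.
runs = [portfolio_multiple() for _ in range(10_000)]
print(sum(runs) / len(runs))  # ~1.5
```

The index-fund point is that picking which startup hits is hard, but as N grows the variance of the portfolio multiple shrinks while the expected value stays the same.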
That's fine, but none of them expected to invest 2 BILLION to get a half-baked IaaS product as the result. That would be fine if the investment had been $10M, but the math isn't mathing with that level of funding.
That's been true for the last year or two, but it feels like we're at an inflection point. All of the announcements from OpenAI for the last couple of months have been product focused - Instant Checkout, AgentKit, etc. Anthropic seems 100% focused on Claude Code. We're not hearing as much about AGI/Superintelligence (thank goodness) as we were earlier this year, in fact the big labs aren't even talking much about their next model releases. The focus has pivoted to building products from existing models (and building massive data centers to support anticipated consumption).
A lot of them left within their first days on the job. I guess they saw what they were going to work on and peaced out. No one wants to work on AI slop and the mental abuse of children on social media.
I don't understand how an intelligent person could accept a job offer from Facebook in 2025 and not understand what company they just agreed to work for.
With the amount of money Facebook was offering I could see them having a hard time refusing. If someone offered me 100 million dollars to work on AI I know I would have a hard time refusing.
Stated with no more evidence than the figure of $100M of compensation, which was stated by Sam Altman on his brother's podcast. But surprisingly, everyone seems entirely fine with this wild claim and isn't asking for proof.
Anthropic, frankly, needs to in ways the other big names don't.
It gets lost on people in techcentric fields because Claude's at the forefront of things we care about, but Anthropic is basically unknown among the wider populace.
Last I looked a few months ago, Anthropic's brand awareness was in the middle single digits; OpenAI/ChatGPT was somewhere around 80% for comparison. MS/Copilot and Gemini were somewhere between the two, but closer to OpenAI than Anthropic.
tl;dr - Anthropic has a lot more to gain from awareness campaigns than the other major model providers do.
Claude is ChatGPT done right. It's just better by any metric.
Of course, OpenAI has tons of money and can branch off in all kinds of directions (image, video, an n8n clone, now RAG as a service).
In the end I think they will all be good enough, and both Anthropic's and OpenAI's leads will evaporate.
Google will be left to win because they already have all the customers with GSuite, and OpenAI will be absorbed at a massive loss into Microsoft, which is already selling to all the Azure customers.
>Anthropic feels like a one trick pony as most users dont need or want anthropic products.
I don't see what the basis for this is that wouldn't be equally true for OpenAI.
Anthropic's edge is that they very arguably have some of the best technology available right now, despite operating at a fraction of the scale of their direct competitors. They have to start building mind and marketshare if they're going to hold that position, though, which is the point of advertising.
If Claude Code is Anthropic’s main focus why are they not responding to some of the most commented issues on their GitHub? https://github.com/anthropics/claude-code/issues/3648 has people begging for feedback and saying they’re moving to OpenAI, has been open since July and there are similar issues with 100+ comments.
Hey, Boris from the Claude Code team here. We try hard to read through every issue, and respond to as many issues as possible. The challenge is we have hundreds of new issues each day, and even after Claude dedupes and triages them, practically we can’t get to all of them immediately.
The specific issue you linked is related to the way Ink works, and the way terminals use ANSI escape codes to control rendering. When building a terminal app there is a tradeoff between (1) visual consistency between what is rendered in the viewport and scrollback, and (2) scrolling and flickering which are sometimes negligible and sometimes a really bad experience. We are actively working on rewriting our rendering code to pick a better point along this tradeoff curve, which will mean better rendering soon. In the meantime, a simple workaround that tends to help is to make the terminal taller.
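For those curious about the mechanics, here is a minimal sketch of the general in-place repaint technique (illustrative only, not our actual rendering code), using standard ANSI escape codes:

```python
import sys, time

CURSOR_UP = "\x1b[{n}A"  # CSI n A: move the cursor up n lines
ERASE_LINE = "\x1b[2K"   # CSI 2 K: erase the entire current line

def repaint(lines, prev_height):
    """Redraw a block of lines in place by moving the cursor back up
    over what was drawn last frame and overwriting it."""
    if prev_height:
        sys.stdout.write(CURSOR_UP.format(n=prev_height))
    for line in lines:
        sys.stdout.write("\r" + ERASE_LINE + line + "\n")
    sys.stdout.flush()
    return len(lines)

height = 0
for i in range(5):
    height = repaint([f"step {i}", "status: working..."], height)
    time.sleep(0.2)
```

The tradeoff shows up once the repainted block grows taller than the visible terminal: cursor movement can't reach rows that have scrolled into scrollback, so you either re-render scrollback (risking flicker and scroll artifacts) or leave stale content behind when the user scrolls up.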
It’s surprising to hear this get chalked up to "it's the way our TUI library works" while, e.g., opencode is going to the lowest level and writing their own TUI backend. I get that we can't expect everyone to reinvent the wheel, but it feels symptomatic of something that folks are willing to chalk their issues up as an unfortunate and unavoidable symptom of a library they use, rather than deeming that unacceptable and going to the lowest level.
CC is one of the best and most innovative pieces of software of the last decade. Anthropic has so much money. No judgment, just curious, do you have someone who’s an expert on terminal rendering on the team? If not, why? If so, why choose a buggy / poorly designed TUI library — or why not fix it upstream?
We started by using Ink, and at this point it’s our own framework due to the number of changes we’ve made to it over the months. Terminal rendering is hard, and it’s less that we haven’t modified the renderer, and more that there is this pretty fundamental tradeoff with terminal rendering that we have been navigating.
Other terminal apps make different tradeoffs: for example Vim virtualizes scrolling, which has tradeoffs like the scroll physics feeling non-native and lines getting fully clipped. Other apps do what Claude Code does but don’t re-render scrollback, which avoids flickering but means the UI is often garbled if you scroll up.
As someone who's used Claude Code daily since the day it was released, the sentiment back then (sooo many months ago) was that the agentic coding TUIs were kind of experimental proofs of concept. We have seen them be incredibly effective, and the CC team has continued to add features.
Tech debt isn't something that even experienced large teams are immune to. I'm not a huge TypeScript fan, so their choice to run the app on Node felt to me like a trade-off: development speed, given the experience the team had, at the expense of long-term growth and performance. I regularly experience pretty intense flickering, rendering issues, high CPU usage, and even crashes, but that doesn't stop me from finding the product incredibly useful.
Developing good software, especially in a format that is relatively revolutionary, takes time to get right, and I'm sure whatever efforts they have internally to push a refactor forward will be worth it. But, just like in any software development, refactors are prone to timeline slips and scope creep. A company having tons of money doesn't change the nature of problem-solving in software development.
That issue is the fourth most-reacted issue overall, and the third most-reacted open issue. And the two things above it are feature requests. It seems like you should at the very least have someone pop in to say "working on it" if that's what you're doing, instead of letting it sit there for 4 months?
Thanks for the reply (and for Claude Code!). I've seen improvement on this particular issue already with the last major release, to the extent that it's not a day to day issue for me. I realise Github issues are not the easiest comms channel especially with 100s coming in a day, but occasional updates on some of the top 10 commented issues could perhaps be manageable and beneficial.
How about giving us the basic UX stuff that all other AI products have? I've been posting this ever since I first tried Claude: Let us:
* Sign in with Apple on the website
* Buy subscriptions from iOS In App Purchases
* Remove our payment info from our account before the inevitable data breach
* Give paying subscribers an easy way to get actual support
As a frequent traveller, I'm not sure if some of those features are gated by region, because some people said they can do some of those things; but if that is true, it still makes the UX worse than the competitors'.
It's entirely possible they don't have the ability in house to resolve it. Based on the report this is a user interface issue. It could just be some strange setting they enabled somewhere. But it's also possible it's the result of some dependency 3 or 4 levels removed from their product. Even worse, it could be the result of interactions between multiple dependencies that are only apparent at runtime.
>It's entirely possible they don't have the ability in house to resolve it.
I've started breathing a little easier about the possibility of AI taking all our software engineering jobs after using Anthropic's dev tools.
If the people making the models and tools that are supposed to take all our jobs can't even fix their own issues in a dependable and expedient manner, then we're probably going to be ok for a bit.
This isn't a slight against Anthropic, I love their products and use them extensively. It's more a recognition of the fact that the more difficult aspects of engineering are still quite difficult, and in a way LLMs just don't seem well suited for.
Seems these users are hitting it in VS Code, while I am getting the exact same thing when using Claude Code on a Linux server over SSH from Windows Terminal. At this point their app has to be the only thing in common?
That's certainly an interesting observation. I wonder if they produce one client that has some kind of abstraction layer for the user interface & that abstraction layer has hidden or obscured this detail?
The novelty of LLMs is wearing off, people are beginning to understand them for what they are and what they are capable of, and performance has been plateauing. I think that's why people are starting to worry that the AI bubble is a repeat of the dotcom bubble, which accompanied a similar technological revolution.
> Researchers are getting rewarded with VC money to try what remains a science experiment. That used to be a bad word
I’ve worked for multiple startups and I’ve watched startup job boards most of my career.
A lot of VC-backed startups have a founder with a research background and are focused on proving out some hypothesis. I don’t see anything uncommon about this arrangement.
If you live near a University that does a lot of research it’s very common to encounter VC backed startups that are trying to prove out and commercialize some researcher’s experiment. It’s also common for those founders to spend some time at a FAANG or similar firm before getting VC funded.
Certainly research has made it into product with the help of the innovators that created the research. The dial is turned further here where the research ideas have yet to be tried and vetted. The research begins in the startup. Even in the dotcom era, the research prototypes were vetted in the conferences and journals before taking the risk to build production systems. This is no longer the case. The experiments have yet to be run.
Yeah, but Sutskever and Murati wouldn't even tell investors what they were working on, and LeCun only has a long-term research direction - not any breakthrough or prototype to commercialize.
I personally see this as a positive trend. VC in its earliest form was concerned with experiments that had high technology risk. I am thinking of companies like Genentech and scientists like biochemist Herbert Boyer, who had pioneered recombinant DNA technology.
After that, VC had become more like PE, investing in stuff that was working already but needed money to scale.
This is VCs FOMOing as global-economy-threatening levels of leverage are being bet on an AI transformation that, by even the most optimistic estimates, cannot achieve a tiny portion of the required ROI in the required time.
Yeah, there has been some lamenting that all the money being thrown at technology hasn't gone to anything truly game-changing, basically just variations of full-stack apps. A few failed moonshots might at least be more interesting.
I agree, if anything spending money on high technology risk is Silicon Valley going back to its roots.
Nobody had a way to do silicon transistor manufacturing at scale until the traitorous eight flipped Shockley the bird and took a $1.4M seed investment from Sherman Fairchild.
Big bets on uncertain technology are what tech is supposed to be about.
> This is the weirdest technology market that I’ve seen.
You must not have lived through the dot-com boom. Almost everything under the sun was being sold on a website that started with an "e": ePets, ePlants, eStamps, eUnderwear, eStocks, eCards, eInvites...
Those things all worked, and all of those products still exist in one form or another. It was a business question of who would provide it, not a technology question.
It's funny that the Netherlands seems to still live in the dotcom boom to this day. Want to adopt a pet? verhuisdieren.nl. Want to buy wall art? wall-art.nl. Need cat5 cable? kabelshop.nl. 8/10 times there is a (legit) online store for whatever you need, to the point where one of the local e-commerce giants (Coolblue) buys this type of domain and aliases them to their main site.
I was making commentary about the niche/independent nature of these online retailers (another example: graszaaddirect.nl, specialized in grass seeds), not that e-commerce itself survived the bubble.
Having a dense country where you reach any opposite end in <3 hours is probably a major factor. You don't really care where it's coming from (sometimes it's Germany) as delivery time is the same. That would not be the case for the US, you'd require a web of distributors.
Pretty funny, looks like it works in France too! animaux.fr redirects to a pet adoption service, cable.fr looks like a cable-selling shop. artmural.fr exists but looks like a personal blog from a wall artist, rather than a shop.
It did make sense, though. ePlants could have cornered the online nursery market, and that is a valuable market. I think people were just too early; payment and logistics hadn't been figured out yet.
Agree on weirdness but not on the idea of funding science experiments:
>> away from long-term research toward commercial AI products and large language models - LLMs
This feels more like what I see every day: the people in charge desperately looking for some way - any way - to capitalize on the frenzy. They're not looking to fund research; they just want to get even richer. It's pets.ai this time.
This doesn’t feel that new or surprising to me, although I suppose it depends what you consider the line between “science experiment” and “engineering R&D” to be.
Biotech has been a YC darling. Was Ginkgo Bioworks not doing science experiments?
Clean energy was a big YC fad roughly 15 years ago. Billions were invested towards scientific research into biofuels, solar, etc.
I can’t help but wonder: if we had poured the same amount of money into fusion energy research and development, how far might we have come in just three short years?
The minimum cost of capital just to run fusion experiments is probably $100m. And the power bills are probably almost as high as the ones from OpenAI, which is to say, they are the highest power bills in the history of mankind ...
If a "science experiment" has the chance to displace most labor then whoever's successful at the experiment wins the economy, period. There's nothing weird or surprising about the logic of them obsessively chasing it. They all have to, it's a prisoner's dilemma.
Fusion power has the chance to displace most power generation, and whoever is successful at the experiment wins the energy economy, period. However, given the long timelines, high cost of research, and the unanswered technical questions around materials that can withstand neutron flux, the total 2024 investment into fusion is only around $10B, versus AI's $250B+.
I think there are two reasons. First, with AI, you get to see intermediate successes and, in theory, can derive profit from them. ChatGPT may not be profitable right now, but in the longer run users will pay whatever they have to pay for it because they are addicted to using it. So it makes sense to try to get as many users as you can into your ecosystem as early as possible, even if that means losses. With fusion, you won't see profitability for a very, very long time.
The second reason is how much better it will be in the end. Fusion has to compete with hydro, nuclear, solar, and wind. It makes exactly the same energy, so the upside is already capped, unlike with AI, which brings something disruptive.
The nice thing is you don't need to be cost-competitive with other energy sources if there isn't enough energy to go around. Also, every form of energy is already subsidized like crazy. So as long as fusion is demonstrated to be reliable and scalable, it can be left to the politicians to figure out how to allocate resources.
But I agree with the article on some points, though I wouldn't say proliferation and tritium are much of an issue.
People are unsophisticated and see how convincing LLM output looks on the surface. They think it's already intelligent, or that intelligence is just around the corner. Or that its ability to displace labor, if not intelligence, is imminent.
If consumption of slop turns out to be a novelty that goes away and enough time goes by without a leap to truly useful intelligence, the AI investment will go down.
The calculator didn't make people better at math; it led to a society of people who can't do math without a calculator. And as a result, math doesn't get done in many casual situations where it would be helpful, because people don't go to the trouble of pulling out the calculator.
So it's made it easier for people to be taken advantage of at the grocery store etc.
Most people are equally bad at math with or without a calculator. The problem for the average person isn’t that they can’t add two numbers, it’s that they can’t tell which numbers they should be adding in the first place.
I'd argue it's a failure of education or general lack of intelligence. The existence of a tool to speed the process up doesn't preclude people understanding the process.
I don't think this relates as closely to AI as you seem to think. I'm simply better at building things, and doing things, with AI than without. Not just faster: better. If that's not true for you, you're either using it wrong or maybe you already knew how to do everything - if so, good for you!
Technology know-how spreads rapidly, so no need to be first. Look how fast Google caught up with Gemini when they chose to, or how fast X.ai developed Grok.
Maybe it's cheap insurance to invest in, say, LeCun just in case JEPA or the animal intelligence approach takes off, but if it does show significant signs of progress there'd also be opportunity to invest later, or in one of the dozen copycats that will emerge. In the end it'll be the giants like Google and Microsoft that will win.
This looks more like a return to form than anything.
The first ventures were funding voyages to a New World thousands of miles away, essentially a different planet as far as the people then were concerned.
Venture capital for a new B2B application is playing it safe as far as risk capital goes.
Yeah, that's quite unusual. Business was always terrible at being innovative; it always dared to take only the safest and most minute of bets, and the progress of technology was always paid for by the taxpayers. Business usually stepped in only later, when the technology was ready, and did what it does best: optimize manufacturing and put it in the hands of as many consumers as possible, raking in billions.
I wonder what changed. Does AI look like a safe bet? Or does every other bet seem to not have any reasonable return?
If you think about Theranos, Magic Leap, OpenAI, and Anthropic, they are all the same: one idea that's kinda plausible (well, if you don't look too closely), a slick demo, and well-connected founders.
Much as a lot of people dislike LeCun (just look at the Blind posts about him), he did set up and run a very successful team inside Meta, nominally at least.
Magic Leap delivered AR glasses running SLAM. It oversold the market for it, but it didn't lie about whether the technology would work, and it didn't test on patients seeking medical care. You sound very uninformed. Theranos's founders are serving prison sentences. Big difference.
You're right to feel like you're seeing something different. You are. But you're mistaking the symptom for the disease.
That's because you're trying to make sense of it as a technology market. It's not. It's a resource extraction market, and the VCs are the ones running the logging operation. Their sole mission is to find a dependable way to strip a forest bare, and they've been using the same playbook for decades.
Those "science experiments" you're talking about? They aren't the product. They're the story, the sizzle. They are the disposable lighter used to start the fire; the VCs have no intention of keeping it lit forever. The real tool is the chainsaw, and the "science experiment" is the brand name printed on the side.
Think of it as clear-cutting. The dot-com bubble was one forest. The story then was that a company losing millions selling pet food online was a "new economy" giant because it had "eyeballs." That was the sales pitch for the chainsaw. VCs funded hundreds of these operations, created a frenzy, and took the most plausible-sounding ones public. The IPO wasn't a milestone; it was the moment they sold the timber and exited the forest, leaving the stumps and worthless pulp for the pension funds and retail investors.
The "long-term" part of their strategy isn't about the health of any single tree or company. It's about finding the next forest to clear-cut. After dot-coms, it was social media. Now, it's the AI forest. They aren't betting on AI; they're betting on their ability to sell the world on the idea that this particular forest is magical and will grow forever.
So you're right, what you're seeing is weird. But it's not a new kind of weirdness. It's the oldest story in finance. A bubble being inflated so the smart money can cash out, leaving everyone else to marvel at the fancy new chainsaw after the forest is already gone.
> Researchers are getting rewarded with VC money to try what remains a science experiment.
That's not all that new. Commercial fusion power startups are an example. I think the first one was General Fusion, founded in 2002. Today, there are around 50 of them. Every single one of those "remains a science experiment", and probably has much lower chance of success than some of the AI science experiments.
Of course, fusion startups have apparently "only" received about $10 bn in funding to date, so pale in comparison to the overall AI market. But if you just look at the AI "science experiments", it's possible the amounts would be comparable.
Having raised more than $100M myself, I’m not sure I would call VC money a reward. However, VC money should be allocated in part to massive upside science experiments. PE money is focused on things already figured out.
It makes sense, it’s a simple expected value calculation.
There are trillions of labor dollars that can be replaced by software. The US alone has almost $12 trillion of labor annually.
If an AI company has a 10% shot of developing a product that can replace 10% of it, they are worth $120 billion in expected value. (These numbers are obviously just for illustration).
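Spelled out, with the same illustrative numbers (a back-of-the-envelope sketch, not a valuation model):

```python
labor_market = 12e12   # ~ $12T of annual US labor (figure from above)
share_replaced = 0.10  # the product replaces 10% of that labor
p_success = 0.10       # 10% chance the product works at all

expected_value = p_success * share_replaced * labor_market
print(f"${expected_value / 1e9:.0f}B")  # prints: $120B
```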
The unprecedented numbers are a simple function of the unprecedented market size. Nobody has ever had a chance of creating trillions of dollars of economic value in a handful of years before.
>If an AI company has a 10% shot of developing a product that can replace 10% of it, they are worth $120 billion in expected value.
that's not how profits work. Companies don't get paid for the value they create but for the value they can capture, otherwise the ffmpeg people would already be trillionaires.
If you have a dozen companies making the same general purpose technology, not product, your only hope is being able to slap ads on top of it, which is why they're so keen on targeting consumers rather than trying to automate jobs.
Yeah, it was intentionally oversimplified to illustrate a point. Not everybody knows what the phrase expected value means or realizes the size of the market that is being addressed here.
This is the same game of poker investors have been playing forever, there just are a few more zeros on the chips.
Has someone done a survey asking devs how much they are getting done vs. what their managers expect with AI? I've had conversations with multiple devs in big orgs telling me that managers' and devs' expectations are seriously out of sync. Basically it's:
Manager: Now you "have" AI, release 10 features instead of 1 in the next month.
Devs: Spending 50% more working hours to make AI code "work" and deliver 10.
I think that's a good thing, and it's VC getting back to its roots. I'm glad that scientists doing AI are getting big money without knowing exactly what the product will be, rather than some business person with a slick deck and hockey-stick charts.
If a science experiment that works and is transformational can be worth a trillion dollars, how much is it worth if it has a 5% chance of being transformational?
Get the popcorn ready for when that all implodes. Most of these folks getting funding don’t have the slightest clue on how to build a sustainable business.
When the bubble pops, and it's very close to popping, there are going to be a lot of burning piles of cash with no viable path to recovering that money.
If it ever feels weird, just watch the show Silicon Valley again.
"Revenue? No no no no. Why would you go after revenue? If you show people revenue, they’ll ask ‘how much?’. And it will never be enough. The company that was the 100x-er or the 1000x-er becomes the 2x dog. But if you have no revenue, you can say you’re pre-revenue and you’re a potential pure play.”
We took it now from no revenue to no actual product, or even a concept of a product.
Because when the recipe is open and public, the product's success depends on Distribution (which has been cornered by MS, Google, Apple). This is good for the ecosystem but not sure how those particular VCs will get exits.
Very few startup products depend on distribution by Microsoft / Google / Apple. You're really just talking about a limited set of mobile or desktop apps there. Everything else is wide open. Kailera Therapeutics isn't going to live or die based on what the tech giants do.
Yes - I had similar thoughts when I saw the word "startup" used alongside something so far-out (the same critique should apply to Fei-Fei Li's World Labs - https://www.worldlabs.ai). These are VC-funded research labs (and there is nothing wrong with that). Calling them "startups", as if they are already working on an MVP on top of an unproven (and frankly nonexistent) technology, seems a little disingenuous to me.
This site is a stroll down memory lane for me. I had one of the calculator watches as a kid. Used it everywhere, especially when shopping or going out to eat with parents. It was cool to cross check totals and check tax calculations. I was a nerdy kid and craved math.
This smacks so much of a Silicon Valley episode. “Pre-idea individuals” … Sounds like they want people with no opinions. Next we will say stuff like “No thought personas”
I like this post. I see this confusion all the time! What’s the difference between ChatGPT and gpt-5 or gpt-4o, and so on. OpenAI’s carefully crafted naming schemes don’t help. Though, I come from AWS so glass houses.
Anyway, agents are control systems that use planning, tools, and a collection of underlying models. ChatGPT is an agent. What kind? The kind optimized for the general user looking to do work with public knowledge. That’s the best definition I can come up with.
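To make that concrete, here's a minimal sketch of the control-loop view of an agent; the planner stub and toy tool are hypothetical stand-ins, not any real product's API:

```python
def model_plan(goal: str, history: list) -> dict:
    """Stub for the underlying model's planning step: a real agent
    would call an LLM here; this one plans a single tool call."""
    if not history:
        return {"type": "tool", "tool": "calculator", "input": "2 + 2"}
    return {"type": "finish", "answer": f"done after {len(history)} step(s)"}

TOOLS = {
    "calculator": lambda expr: eval(expr),  # toy tool; never eval untrusted input
}

def run_agent(goal: str, max_steps: int = 10):
    history = []
    for _ in range(max_steps):
        action = model_plan(goal, history)               # planning (the model)
        if action["type"] == "finish":
            return action["answer"]
        result = TOOLS[action["tool"]](action["input"])  # tool use
        history.append((action, result))                 # observation fed back in
    return None  # step budget exhausted

print(run_agent("add some numbers"))  # done after 1 step(s)
```

The "control system" part is just this loop: the model proposes, the environment (the tools) responds, and the growing transcript is the state being fed back.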
Anyway, let’s make sure people understand the difference between AI systems and AI models. The former is where a lot of startup activity will be for a decade. The latter will be in the hands of a few well funded behemoths.
I’m surprised by the argument. It’s not wrong that you need more data, but that presumes the task is to pre-train on data. Additional compute is also useful for unearthing tacit capabilities in the models. This requires inference-time scaling and post-training, usually on specific downstream tasks using RL. Sure, that generates data, but it’s not the same as the Internet, and it can be scaled.
I took 6.034 from Winston and still have the lecture notes and book. Though dated, they remind me of what was great about MIT. The constant change, upheaval, search for scientific truth, and desire to help humankind. RIP Patrick Winston.
1. Don’t engage in public with an antagonistic or upset user or reviewer.
2. The thread will unroll itself, and the immaterial ones will die out on their own.