Hacker News | snigsnog's comments

I don't know anything about the field but apparently AlphaFold having "solved the problem of protein folding" is overhyped? https://old.reddit.com/r/bioinformatics/comments/1e0s55e/did...

It's not worthless, it's just not world-changing as-is, even in the fields where it's most useful, like programming. If the trajectory changes and we reach AGI then this changes too, but right now it's just a way to

- fart out demos that you don't plan on maintaining, or want to use as a starting place

- generate first-draft unit tests/documentation

- generate boilerplate without too much functionality

- refactor in a very well covered codebase

It's very useful for all of the above! But it doesn't even replace a junior dev at my company in its current state. It's too agreeable, makes subtle mistakes that it can't permanently correct (GEMINI.md isn't a magic bullet, telling it to not do something does not guarantee that it won't do it again), and you as the developer submitting LLM-generated code for review need to review it closely before even putting it up (unless you feel like offloading this to your team) to the point that it's not that much faster than having written it yourself.


The internet and smartphones were immediately useful in a million different ways for almost every person. AI is not even close to that level. It's somewhat to very useful in some fields (like programming), but the average person can easily get through their day without using AI.

The most wide-appeal possibility is people loving 100%-AI-slop entertainment like that AI Instagram Reels product. Maybe I'm just too disconnected from normies, but I don't see this taking off. It's fun as a novelty, like those Ring cam vids, but I would never spend all day watching AI-generated media.


> The internet and smartphones were immediately useful in a million different ways for almost every person. AI is not even close to that level.

Those are some very rosy glasses you've got on there. The nascent Internet took forever to catch on. It was "for weird nerds at universities and it'll never catch on", but here we are.


Well until the weird nerds at uni created things like Google, Facebook and so on...

The early internet and smartphones (the Japanese ones, not the iPhone) were definitely not "immediately" adopted by the masses, unlike LLMs.

If "immediate" usefulness is the metric we measure, then the internet and smartphones are pretty insignificant inventions compared to LLMs.

(Of course it's not a meaningful metric, as there is no clear line between a dumb phone and a smartphone, or between a moderately sized language model and an LLM.)


Yeah, the internet kind of started with ARPANET in 1969 and didn't really get going with the public till around 1999, so thirty years on.

Here's a graph of internet takeoff with Krugman's famous quote of 1998 that it wouldn't amount to much being maybe the end of the skepticism https://www.contextualize.ai/mpereira/paul-krugmans-poor-pre...

In common with AI, there was probably a long period when the hardware wasn't really good enough for it to be useful to most people. I remember 300 baud modems and the rubber acoustic couplers you pressed your telephone handset into back in the 80s.


That's all irrelevant. Is/was there tremendous value to be had in being able to transport data? Of course. No doubt about it. Everything else got figured out, and investments were made, because of that.

The same line of thinking does not hold with LLMs given their non-deterministic nature. Time will tell where things land.


There's value in intelligence too.

Intelligence? No. Get the wording right. It’s driven by probability.

Calling intelligence “just probability” is like calling music “just vibrations” and thinking you said something deep.

ChatGPT has roughly 800 million weekly active users. Almost everyone around me uses it daily. I think you are underestimating the adoption.

How many pay? And of those, how many are willing to pay enough to at least cover the inference costs (not loss leading)?

Outside the verifiable domains I think the impact is more assistance/augmentation than outright disruption (i.e. a novelty which is still nice). A little tiny bit of value sprinkled over a very large user base but each person deriving little value overall.

Even as they use it as search, it is at best an incremental improvement on what they used to do, not life-changing.


Usage plunges on the weekends and during the summer, suggesting that a significant portion of users are students using ChatGPT for free or at heavily subsidized rates to do homework (i.e., extremely basic work that is extraordinarily well-represented in the training data). That usage will almost certainly never be monetizable, and it suggests nothing about the trajectory of the technology’s capability or popularity. I suspect ChatGPT, in particular, will see its usage slip considerably as the education system (hopefully) adapts.

The summer slump was a thing in 2023 but apparently didn't repeat in 2024: https://www.similarweb.com/blog/insights/ai-news/chatgpt-bea...

The weekend slumps could equally suggest people are using it at work.


Interesting, thank you for that. I’d be curious to see the data for 2025. I was basing my take off Google trends data - the kind of person who goes to ChatGPT by googling “chatGPT” seems to be using it less in the summer.

“Almost everyone will use it at free or effectively subsidized prices” and “It delivers utility which justifies its variable costs + fixed costs amortized over useful lifetime” are not the same thing, and it's not clear how much of the use is tied to novelty, such that if the regular cadence of new and progressively-more-expensive-to-train releases dropped off, usage, even at subsidized prices, would too.

Even my mom and aunts are using it frequently for all sorts of things, and it took a long time for them to hop onto the internet and smartphones at first.

The adoption is just so weird to me. I cannot for the life of me get LLM chatbot to work for me. Every time I try I get into an argument with the stupid thing. They are still wrong constantly, and when I'm wrong they won't correct me.

I have great faith in AI in e.g. medical equipment, or otherwise as something built in, working on a single problem in the background, but the chat interface is terrible.


> AI is not even close to that level

Kagi’s Research Assistant is pretty damn useful, particularly when I can have it poll different models. I remember when the first iPhone lacked copy-paste. This feels similar.

(And I don’t think we’re heading towards AGI.)


… the internet was not immediately useful in a million different ways for almost every person.

Even if you skip ARPAnet, you’re forgetting the Gopher days and even if you jump straight to WWW+email==the internet, you’re forgetting the mosaic days.

The applications that became useful to the masses emerged a decade+ after the public internet and even then, it took 2+ decades to reach anything approaching saturation.

Your dismissal is not likely to age well, for similar reasons.


The "usefulness" excuse is irrelevant, and the claim that phones/the internet were "immediately useful" is just a post hoc rationalization. It's basically trying to find a reasonable-sounding reason why opposition to AI is valid and not just self-interest.

The opposition to AI is from people who feel threatened by it, either because it threatens their livelihood (or that of family/friends), or because they feel they are unable to benefit from AI in the same way as they did from the internet/mobile phones.


The usefulness of mobile phones was identifiable immediately, and it is absolutely not 'post hoc rationalization'. The issue was the cost: once low-cost mobile telephones were produced they almost immediately became ubiquitous (see Nokia's share price from the release of the Nokia 6110 onwards, for example).

This barrier does not exist for current AI technologies which are being given away free. Minor thought experiment - just how radical would the uptake of mobile phones have been if they were given away free?


It's only low cost for general usage chat users. If you are using it for anything beyond that, you are paying or sitting in a long queue (likely both).

You may just be a little early to the renaissance. What happens when the models we have today run on a mobile device?

The Nokia 6110 was released 15 years after the first commercial cell phone.


Yes although even those people paying are likely still being subsidized and not currently paying the full cost.

Interesting thought about current SOTA models running on my mobile device. I've given it some thought and I don't think it would change my life in any way. Can you suggest some way that it would change yours?


It will open up access to LLMs for developers in the same way smartphones opened up access to mobile general computing.

I really think most everyone misses the actual potential of LLMs. They aren't an app but an interface.

They are the new UI everyone has known they wanted going back as long as we've had computers. People wanted to talk to the computer and get results.

Think of the people already using them instead of search engines.

To me, and likely you, it doesn't add any value. I can get the same information at about the same speed as before with the same false positives to weed through.

To the person that couldn't use a search engine and filled the internet with easily answered questions before, it's a godsend. They can finally ask the internet in plain ole whatever language they use and get an answer. It can be hard to see, but this is the majority of people on this planet.

LLMs raise the floor of information access. When they become ubiquitous and basically free, people will forget they ever had to use a mouse or hunt for the right pixel to click a button on a tiny mobile device touch screen.


I think that's a nice reply and these products becoming the future of user computer interface is possible.

I can imagine them generating digital reality on the fly for users - no more dedicated applications, just pure creation on demand ('direct me via turn by turn 3d navigation to x then y and z', 'replay that goal that just was scored and overlay the 3 most recent similar goals scored like that in the bottom right corner of the screen', 'generate me a 3D adventure game to play in the style of zelda, but make it about gnomes').

I suspect the only limitation for a product like this is energy and compute.


Eh, quite the contrary. A lot of anti-AI people genuinely wanted to use AI but ran into the factual reality of the limitations of the software. It's not that it's going to take my job; it's that I was told it would redefine how I work and is exponentially improving, only to find out that it just kind of sucks and hasn't gotten much better this year.

> Very to somewhat useful in some fields (like programming) but the average person will easily be able to go through their day without using AI.

I know a lot of "normal" people who have completely replaced their search engine with AI. It's increasingly a staple for people.

Smartphones were absolutely NOT immediately useful in a million different ways for almost every person, that's total revisionist history. I remember when the iPhone came out, it was AT&T only, it did almost nothing useful. Smartphones were a novelty for quite a while.


I agree with most points but as a tech enthusiast, I was using a smart phone years before the iPhone, and I could already use the internet, make video calls, email etc around 2005. It was a small flip phone but it was not uncommon for phones to do that already at that time, at least in Australia and parts of Asia (a Singaporean friend told me about the phone).

A year after the iPhone came out… it didn’t have an App Store, could barely play video, and barely had enough battery to last a day. You just don’t remember, or were not around for it.

A year after llms came out… are you kidding me?

Two years?

10 years?

Today, adding an MCP server to wrap the same API that’s been around forever for some system makes the users of that system prefer the NLI over the GUI almost immediately.
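To make the shape of that concrete, here's a minimal sketch of the pattern (this is deliberately not the real MCP SDK, and the `lookup_order` API, tool names, and return values are all made up): you wrap an existing API call as a named "tool" with a small schema, and the model calls the tool instead of the user clicking through a GUI.

```python
# Hypothetical sketch of wrapping an existing API function as a model-callable
# "tool". The registry + dispatch shape mirrors what MCP-style tool servers do,
# but all names here are invented for illustration.

def lookup_order(order_id: str) -> dict:
    # Stand-in for the pre-existing API call the GUI already uses.
    return {"order_id": order_id, "status": "shipped"}

# Tool registry: name -> description, parameter schema, and implementation.
TOOLS = {
    "lookup_order": {
        "description": "Fetch an order's status by ID.",
        "params": {"order_id": "string"},
        "fn": lookup_order,
    }
}

def dispatch(tool_call: dict) -> dict:
    """The model emits {'name': ..., 'arguments': {...}}; route it to the API."""
    tool = TOOLS[tool_call["name"]]
    return tool["fn"](**tool_call["arguments"])

# The user says "where's my order A17?"; the model translates that into:
result = dispatch({"name": "lookup_order", "arguments": {"order_id": "A17"}})
```

The point is that the underlying API is unchanged; the only new code is the thin tool description, which is what lets a natural-language interface sit in front of a system that previously required its GUI.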


You paraphrased it incorrectly


… so presenting it as a paraphrase is misleading.


>Systems marketed for "solving crimes" get used for immigration enforcement

So for solving crimes.

I'm in favor, then!


I think you don’t have to look far to find warrantless arrests or illegal detentions under the guise of “immigration enforcement.” I also think you’d be hard pressed to point to a crime in those instances.


The ideal amount of mistakes is non-zero.

We should compensate those who are improperly arrested and quickly correct these violations, attempt to prevent them in the future, reprimand those involved if necessary, but absolutely keep pushing ahead at full steam on law enforcement efforts otherwise.

Hot take: some small number of unlawful arrests aren't the "neener neener neener, you can't stop illegal immigration" that folks seem to think they are.


> The ideal amount of mistakes is non-zero.

Why? And separately, do you believe that people wrongly arrested in the US are being compensated accordingly? The justice system in the US isn’t known for being easy or cheap to navigate, and I don’t think getting a warrant before detaining people is that huge of an ask.


Because these are human systems involving humans: there will always be mistakes. Advocating for the elimination of 100% of mistakes is a typical "rules for radicals" method of backdoor legislation through increased bureaucracy.

I'm not advocating to "move fast and break things," but that it's very easy and cheap for illegal immigration maximalists to advocate that society should "move never so nothing breaks." This type of obstruction is actually a form of conservative policy, but "it's for the causes I like so it's okay."

> don’t think getting a warrant before detaining people is that huge of an ask

The law doesn't require a warrant before detaining people - and shouldn't. This doesn't even make sense: "Hold on Mr. Bank Robber - I'm not detaining you, but pretty please don't go anywhere, I gotta go get a warrant first!"


Hey, I'm all for accounting for human error. But what we've been seeing in the news is not "hold on Mr. Robber, I need a warrant" (also, you don't need a warrant for that), nor is it "oops, I arrested you by accident." It's people being taken off the street because of vague determinations about their identity, the types of jobs they're working, etc. That's not probable cause, and that's certainly not human error. That's an extrajudicial decision made intentionally to have a chilling effect.


> The ideal amount of mistakes is non-zero.

I’ve heard this argument in the context of capital punishment, and I find it incredibly unconvincing.


> I’ve heard this argument in the context of capital punishment, and I find it incredibly unconvincing.

This is more or less a false dichotomy.

Capital punishment is by definition irreversible, so mistakes aren't tolerable.

Being arrested is legally and practically far more correctable, with few lasting consequences: we can absorb these mistakes in the rare event that they occur.


Any law-enforcement mistake is also non-reversible. Do false positives get their years of life back? No. And there is far less scrutiny on that (see DA deals and all that).

Capital punishment just takes all of a life instead of a few to tens of percent of it (often the most valuable years).


"Years of their life back" - I'm confused: how does a mistaken arrest result in "years of life" being lost in an immigration enforcement snafu?

You do realize that due process exists after an arrest?


Absolutely agree. Mistakes should be corrected immediately, protocol revised, and those responsible punished if malicious acts are found. Otherwise, enforcement should be full steam ahead. Illegal immigration has hurt the US enormously and it's time that we enforce our laws.

