roenxi's comments | Hacker News

> most of the industrialized world viable. Poorer. Less comfortable.

If we're trying to use precise language, the economic modelling [0] actually suggests they will be wealthier and more comfortable than they are now. Just probably not as wealthy and comfortable as they could be under other hypotheticals.

[0] https://en.wikipedia.org/wiki/Economic_analysis_of_climate_c...


I tend to agree. There might be a big window for someone motivated to get a course on "how to maximise the value of your house purchase" into schools to cover all that. If people are going to be greedy and selfish, at least they shouldn't be stupid about it; land and housing policy is one of the more consequential things society deals with.


> The proof is as trivial as ...

That proof doesn't hold as an argument. You're arguing that if people got a message out then it isn't cancel culture, but if people didn't get a message out because they were cancelled, then people just wouldn't talk about it. It sets up a rhetorical position in which taboos can't exist, and we know that they do.

Cancel culture might not exist, depending on what people think it means. The term is a bit vague. But arguing that some people managed to push past the cancellation attempts doesn't mean that there isn't anything there. We'd expect cancel culture to have some cancellation attempts that ended in failure; the authoritarians are fallible humans too. And although they tend to be good at wielding government power, the extreme authoritarians do tend to be ideologically isolated and so struggle to act when people pay attention to them.


Look, "cancel culture" is almost as vague a term as "communism" and tends to be used in the same way: as a thought terminating pejorative description for anything someone doesn't like.

If we want to have an actual conversation about it we'd have to come up with some kind of working definition of the term that was actually useful enough to discuss existing examples with.

The Wikipedia article on cancel culture uses the example of people disassociating from Harvey Weinstein and his ultimately being charged with crimes related to sexual abuse. Is this cancel culture?

If a university employee invites a celebrity to come give a lecture one evening and then a bunch of students ask the university to cancel the invitation, is this cancel culture? Is it morally wrong?

Is the person who makes the original statement deserving of some kind of extra protection for this speech over the responding person who is trying to criticize this speech?

A cursory look at real-world, actual examples of how people have used the term "cancel culture" shows it was invariably part of an attempt to prevent criticism of (mostly) right-wing ideas.

What actually happened was that some number of right-wingers tried to give speeches, got yelled at, and then started complaining about cancel culture and trying to prevent future criticism.

Like, at the level we're discussing we're talking about things like ethics/morality/social standards, right? What is good and virtuous for society to permit and encourage. Trying to "cancel" people who are "bad" by using speech to criticize or contradict or even ask people to stop associating with them is a good thing.


All the confessions are highly subjective. If someone tried a refactor like the one at https://refactoring.com/catalog/replaceConditionalWithPolymo... there is a decent chance it should get picked up and reverted on code review.

Taking a switch statement and spreading it out over three classes is not a general improvement; it is very context specific. It makes the code difficult to navigate because what used to all be in one spot and easy to read is now spread out who-knows-where, and there might be a special case lurking somewhere.


It's not so much subjective as a case-by-case decision. This example is quite misleading, but polymorphic classes are sometimes useful when the domain grows and you have to add new behaviors all the time. In that case the switch becomes harder to maintain. Classes isolate behavior so new types don't modify existing code. I'd stick with switch statements in all the other cases. Sure, this could be abused to make simple things unnecessarily complicated, but I'm just pointing out that there's a use case for it.
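
A minimal sketch of the trade-off being described (hypothetical shape types, Python just for illustration, not taken from the linked catalog): the conditional version keeps everything in one place, while the polymorphic version lets a new type be added without touching existing code.

    import math

    # Conditional version: all the behaviour sits in one function, easy to
    # read, but every new shape means editing it.
    def area(shape: dict) -> float:
        if shape["kind"] == "circle":
            return math.pi * shape["r"] ** 2
        elif shape["kind"] == "square":
            return shape["side"] ** 2
        raise ValueError(f"unknown shape: {shape['kind']}")

    # Polymorphic version: each class isolates its own behaviour, so adding
    # a new shape doesn't modify existing code -- at the cost of spreading
    # the logic across several classes.
    class Circle:
        def __init__(self, r: float):
            self.r = r
        def area(self) -> float:
            return math.pi * self.r ** 2

    class Square:
        def __init__(self, side: float):
            self.side = side
        def area(self) -> float:
            return self.side ** 2

    print(area({"kind": "circle", "r": 2.0}), Circle(2.0).area())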


Yeah, the problem is when people create extra classes just to avoid the switch statement.


Yes, I've seen this misused and it resulted in an over-complicated codebase with no benefit.


Just because something didn't work out doesn't mean it was a waste, and it isn't particularly clear that the LLM boom was wasted, or that it is over, or that it isn't working. I can't figure out what people mean when they say "AGI" any more; we appear to be past that. We've got something that seems to be general and seems to be more intelligent than an average human. Apparently AGI means a sort of Einstein-Tolstoy-Jesus hybrid that can ride a unicycle and is far beyond the reach of most people I know.

Also, if anyone wants to know what a real effort to waste a trillion dollars can buy ... https://costsofwar.watson.brown.edu/


> Just because something didn't work out doesn't mean it was a waste

It's all about scale.

If you spend $100 on something that didn't work out, that money wasn't wasted if you learned something amazing. If you spend $1,000,000,000,000 on something that didn't work out, the expectation is that you learn something close to 10,000,000,000x more than from the $100 spend. If the value of the learning is several orders of magnitude less than the level of investment, there is absolutely tremendous waste.

For example: nobody counts spending a billion dollars on a failed project as good value if the learning only amounted to avoiding future paper cuts.


It's not waste, it's a way to get rid of excess liquidity caused by massive money printing operations.


We currently have human-in-the-loop AGI.

While it doesn't seem we can agree on a meaning for AGI, I think a lot of people think of it as an intelligent entity that has 100% agency.

Currently we need to direct LLMs from task to task. They don't yet possess the capability of full real-world context.

This is why I get confused when people talk about AI replacing jobs. It can replace work, but you still need skilled workers to guide them. To me, this could result in humans being even more valuable to businesses, and result in an even greater demand for labor.

If this is true, individuals need to race to learn how to use AI and use it well.


> Currently we need to direct LLM's from task to task.

Agent loops that can work from larger-scale goals work just fine. We can't let them run with no oversight, but we certainly also don't need to micro-manage every task. Most days I'll have 3-4 agent loops running in parallel, executing whole plans, that I only check in on occasionally.

I still need to review their output occasionally, but I certainly don't direct them from task to task.

I do agree with you we still need skilled workers to guide them, so I don't think we necessarily disagree all that much, but we're past the point where they need to be micromanaged.


If we can't agree on a definition of AGI, then what good is it to say we have "human-in-the-loop AGI"? The only folks that will agree with you will be using your definition of AGI, which you haven't shared (at least in this posting). So, what is your definition of AGI?


AI capabilities today are jagged and people look at what they want to.

Boosters: it can answer PhD-level questions and it helps me a lot with my software projects.

Detractors: it can't learn to do a task it doesn't already know how to do.

Boosters: But it can actually sometimes do things it wouldn't be able to do otherwise if you give it lots of context and instructions.

Detractors: I want it to be able to actually figure out and retain the context itself, without being given detailed instructions every time, and do so reliably.

Boosters: But look, in this specific case it sort of does that.

Detractors: But not in my case.

Boosters: you're just using it wrong. There must be something wrong with your prompting strategy or how you manage context.

etc etc etc...


> We've got something that seems to be general and seems to be more intelligent than an average human.

We've got something that occasionally sounds as if it were more intelligent than an average human. However, if we stick to areas of interest of that average human, they'll beat the machine in reasoning, critical assessment, etc.

And in just about any area, an average human will beat the machine wherever a world model is required, i.e., a generalized understanding of how the world works.

This is not to criticize the usefulness of LLMs. Yet broad statements that an LLM is more intelligent than an average Joe are necessarily misleading.

I like how Simon Wardley assesses how good the most recent models are. He asks them to summarize an article or a book which he's deeply familiar with (his own or someone else's). It's like a test of trust. If he can't trust the summary of the stuff he knows, he can't trust the summary that's foreign to him either.


AFAICT "AGI" is a placeholder for peoples fears and hopes for massive change caused by AI. The singularity, massive job displacement, et cetera.

None of this is binary, though. We already have AGI that is superhuman in some ways and subhuman in others. We are already using LLMs to help improve themselves. We already have job displacement.

That continuum is going to continue. AI will become more superhuman in some ways, but likely stay subhuman in others. LLMs will help improve themselves. Job displacement will increase.

Thus the question is whether this rate of change will be fast or slow. Seems mundane, but it's a big deal. Humans can adapt to slow changes, but not so well to fast ones. Thus AGI is a big deal, even if it's a crap stand in for the things people care about.


I think when people say "AGI" they might mean synthesis [1]. I'm not sure I have seen that yet in LLMs. Someone correct me if I'm wrong.

[1] https://en.wikipedia.org/wiki/Bloom's_taxonomy


> Just because something didn't work out doesn't mean it was a waste

Here I think it's more about opportunity cost.

> I can't figure out what people mean when they say "AGI" any more, we appear to be past that

What I ask of an AGI is not to hallucinate idiotic stuff. I don't mind being bullshitted too much if the bullshit is at least logical, but when I ask it to "fix mypy errors using pydantic" and, instead of declaring a type for a variable, it invents weird algorithms that make no sense and don't work (when the fix would have taken 5 minutes for any average dev), that's something else. I mean, Claude 4.5 and Codex have replaced my sed/search-and-replaces, write my sanity tests, write my commit comments, write my migration scripts (and most of my scripts), and make refactors so easy I now do one every month or so, but if this is AGI, I _really_ wonder what people mean by intelligence.
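
To illustrate the kind of trivial fix meant here (a made-up example; the model and field names are hypothetical), the expected change is just declaring the type and validating, rather than rewriting the logic:

    from pydantic import BaseModel

    class User(BaseModel):
        id: int
        email: str

    def handle(raw: dict) -> str:
        # The "five-minute fix": annotate the variable and validate it,
        # rather than inventing a new algorithm.
        user: User = User.model_validate(raw)  # pydantic v2 API
        return user.email

    print(handle({"id": 1, "email": "a@example.com"}))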

> Also, if anyone wants to know what a real effort to waste a trillion dollars can buy

100% agree. Please, Altman, Ilya and others, I will happily let you use whatever money you want if that money is taken from war profiteers and warmongers.


> Just because something didn't work out doesn't mean it was a waste

One thing to keep in mind is that most of these people who go around spreading unfounded criticism of LLMs, "Gen-AI" and AI in general aren't usually very deep into computer science, and even less into science itself. In their mind, if someone does an experiment and it doesn't pan out, they'll assume that means "science itself failed", because they literally don't know how research and science work in practice.


Maybe true in general, but Gary Marcus is an experienced researcher and entrepreneur who’s been writing about AI for literally decades.

I’m quite critical, but I think we have to grant that he has plenty of credentials and understands the technical nature of what he’s critiquing quite well!


Yeah, my comment was mostly about the ecosystem at large rather than a specific dig at this particular author; I mostly agree with your comment.


> Just because something didn't work out doesn't mean it was a waste, and it isn't particularly clear that the the LLM boom was wasted, or that it is over, or that it isn't working

Agreed. Has there been waste? Inarguably. Has the whole thing been a waste? Absolutely not. There are lessons from our past that, in an ideal world, would have allowed us to navigate this much more efficiently and effectively. However, if we're being honest with ourselves, that's been true of any nascent technology (especially hyped ones) for as long as we've been recording history. The path to success is paved with failure, hindsight is 20/20, history rhymes, and all that.

> I can't figure out what people mean when they say "AGI" any more

We've been asking "what is intelligence?" (and/or sentience) for as long as we've been alive, and still haven't come to a consensus on that. Plenty of people will confidently claim they have an answer, which is great, but it's entirely irrelevant if there's no broad consensus on that definition or a well-defined way to verify AI/people/anything against it. Case in point...

> we appear to be past that. We've got something that seems to be general and seems to be more intelligent than an average human

Hard disagree, specifically as regards intelligence. They are certainly useful utilities when you use them right, but I digress. What are you basing that on? How can we be sure we're past a goal-post when we don't even know where the goal-post is? For starters, how much is speed (or latency or IOPS/TPS or however you wish to contextualize it) a function of "intelligence"? For a tangible example: if an AI came to a conclusion derived from 100 separate sources, and a human manually went through those same 100 sources and came to the same conclusion, is the AI more intelligent by virtue of completing that task faster? I can absolutely see (and agree with) how that is convenient/useful, but the question specifically is: does the speed at which it can provide answers (assuming they're both correct/the same) make it smarter than, or as smart as, the human?

How do they rationalize and reason their way through new problems? How do we humans? How important is the reasoning, the "how" of how it arrives at answers, if the answers are correct? For a tangible example: what is happening when you ask an AI to compute the sum of 1 plus 1? What are we doing when we're asked to perform the same task? What about proving it to be correct? More broadly, in the context of AGI/intelligence, does it matter if the "path of reason" differs if the answers are correct?

What about how confidently it presents those answers (correct or not)? It's well known that we humans are incredibly biased towards confidence. Personally, I might start buying into the hype the day that AI starts telling me "I'm not sure" or "I don't know." Ultimately, until I can trust it to tell me it doesn't know or isn't certain, I won't trust it when it tells me it does know or is certain, regardless of how "correct" it may be. We'll get there one day, and until then I'm happy to use it for the utility and convenience it provides while doing my part to make it better and more useful.


Eh, tearing down a straw man is not an impressive argument from you either.

As a counter-point, LLMs still do embarrassing amounts of hallucinations, some of which are quite hilarious. When that is gone and it starts doing web searches -- or it has any mechanisms that mimic actual research when it does not know something -- then the agents will be much closer to whatever most people imagine AGI to be.

Have LLMs learned to say "I don't know" yet?


> Have LLMs learned to say "I don't know" yet?

Can they, fundamentally, do that? That is, given the current technology.

Architecturally, they don't have a concept of "not knowing." They can say "I don't know," but it simply means that it was the most likely answer based on the training data.

A perfect example: an LLM citing chess rules and still making an illegal move: https://garymarcus.substack.com/p/generative-ais-crippling-a...

Heck, it can even say the move would have been illegal. And it would still make it.


If the current technology does not allow them to sincerely say "I don't know, I am now checking it out" then they are not AGI, was my original point.

I am aware that the LLM companies are starting to integrate this quality -- and I strongly approve. But again, being self-critical, and with it self-aware, is one of the qualities that I would ascribe to an AGI.


> When that is gone and it starts doing web searches -- or it has any mechanisms that mimic actual research when it does not know something

ChatGPT and Gemini (and maybe others) can already perform and cite web searches, and it vastly improves their performance. ChatGPT is particularly impressive at multi-step web research. I have also witnessed them saying "I can't find the information you want" instead of hallucinating.

It's not perfect yet, but it's definitely climbing human percentiles in terms of reliability.

I think a lot of LLM detractors are still thinking of 2023-era ChatGPT. If everyone tried the most recent pro-level models with all the bells and whistles then I think there would be a lot less disagreement.


Well please don't include me in some group of Luddites or something.

I use the mainstream LLMs and I've noted them improving. They have a ways to go still.

I was objecting to my parent poster's implication that we have AGI. However muddy that definition is, I don't feel like we do have that.


> Have LLMs learned to say "I don't know" yet?

All the time, which you'd know very well if you'd spent much time with current-generation reasoning models.


I spend time with them, not hours every day. I am aware this is starting to get integrated and I like them more lately.

Still far from AGI, however, which was my original point. Any generally intelligent being would be self-aware and, by extension, self-critical.


Greta Thunberg achieved nothing useful in practice, and if the best mascot for a movement is an autistic teenager, it bodes poorly for that movement's chances.

She personally is perfectly successful, but in terms of political effectiveness people should model themselves off movements that achieved something.


To me it seems that she achieved a lot, compared to the rest of the activists.

The opposite forces were too strong in the end, but that doesn't mean that she didn't do a lot.

I'm not sure I see any problem with an autistic teenager as a "mascot"; I know how much one political camp despises her, but if they treated a child like that, they would probably have done much worse to a normal adult.

But of course she's not enough, and expecting that she on her own will solve global warming is delusional.

Which movements would you recommend as models?


> Which movements would you recommend as models?

Supporting NGOs like https://edri.org, https://fsfe.org.


> Which movements would you recommend as models?

If we're talking about the scale of reforming the EU, I'd say the basket of things to look at is things like the rise of capitalism, liberalism, the major religions, the spread of democracy, and the Enlightenment. There are a lot of smaller examples of polities reforming too, but those are some nice big ones. The smaller ones tend to be quieter, less flashy affairs where someone organises people together to try and make life better.

> I know how much a political area despises her, but if they treated a child like that, they would probably have done much worse with a normal adult.

I like to believe the adults are more likely to run the numbers and say "hang on, rolling back industrial society for no obvious reason is a terrible idea and I'm probably going to fail anyway with these stupid tactics - progress is hard to stop".


> I like to believe the adults are more likely to run the numbers and say "hang on, rolling back industrial society for no obvious reason is a terrible idea

I guess you're a climate change denier; there's little to discuss then.



?


Well obviously they want it, they voted for it. They probably see the situation in terms of something like class war. There are a bunch of people they don't like in society and they want to identify and marginalise them.

As for why politicians turn out this way, they're just pretty ordinary people (often quite impressive people actually, relative to the norm). Most people don't get an opportunity to show off how useless their political principles are because they have no power or influence. That's why there is always a background refrain of "please stop concentrating power to the politicians it ends badly".


I would assume by default that billionaires are politically active and causing a problem. However, this link doesn't give a lot of hints about how or wherefore. I assume this is a jab at Thiel, but it is a bit light in the synopsis department.

There are a huge number of threats to democracy and the biggest one is probably the total lack of principles and common sense possessed by the median voter. It is a real problem and a bigger one than some billionaire or even the consensus of the billionaires. Sometimes voters and capital come into actual conflict and generally the voters tend to win Pyrrhic victories when that happens.


> the biggest one is probably the total lack of principles and common sense possessed by the median voter.

Hard disagree.

The biggest problem is a misinformed electorate.

An accurate, honest and truthful press is vital for democracy; how else do people know whom to vote for? The fact this is being dismantled (often supplying deliberate misinformation) is truly worrying.

After all, the electorate is entitled to have a lack of principles and no common sense; nobody ever said democracy was perfect. However, the electorate needs to be provided with an honest set of facts on which they can base their decisions, without cries of "fake news", whatever their political leanings.


I don't know if you will find a time in US history where the press was accurate, honest, and truthful.

I agree with GP that a primary missing feature is a principled public - without principles people swing wildly in opinion depending on the topic and popular rhetoric.

I see this with much of my own family. They mostly consider themselves conservatives and Republicans of the small-government and balanced-budget era. Those presumed values go out the window, though, when a particular political topic of the day comes up and they seem to completely contradict them. The most egregious example in my family is a Ron Paul libertarian who somehow still holds those opinions while supporting virtually everything Trump does.


> I don't know if you will find a time in US history where the press was accurate, honest, and truthful.

1) Spare us the US defaultism!

2) If we are going to make this conversation about the USA, didn't US broadcast media have a 'fairness doctrine' that was abolished some years back? Hence the growth in outlets providing heavily biased dishonest news on broadcast media? I suggest this has driven much of the popular rhetoric of which you speak.

Frankly, every country has seen a growth in biased social media "news" sources, regardless of the broadcast media fairness doctrines that still exist in those countries. Deliberate misinformation and a lack of trust in journalism are real.


The topic is Silicon Valley fascism, this isn't the crusade to fight USA defaultism.


1. Consider preordering the book if you're already reacting to part of its premise; it should be a juicy read.

2. Regarding the power of billionaires vs the power of the median voter, consider that each lever in a system deserves attention before pulling on it or reconfiguring it. How can one determine "the biggest threat to democracy" without digging into the details?


> They tax and control everything, lock down distribution, prevent you from operating without rules.

You seem to be arguing that the EU should be doing that, though. What about those of us who quite like the way Apple does things right now? I'm happy to pay extra for a lot of your dot points; I quite like someone acting as a firewall between my device and the unfettered soup that is stuff out on the internet.

Apple's product is a well curated walled garden. I certainly understand why there are a lot of people on HN who don't like that - they see 30% that they can't claim. But one of the reasons Apple is so successful is because they know how to create a great phone experience.


>> Apple is so successful is because they know how to create a great phone experience.

I disagree; maybe they were at some point. Now they are successful because the walls of the well are so high. It is insanely difficult for us frogs to jump out. Happy that governments are trying to bring those walls down.

>> I am happy to pay extra for a lot of your dot points.

Good for you, because you trust them. Problem is, I don't. I don't trust Apple/Google to make that decision for me. But they don't give that choice. They are making you sacrifice freedom and choice by masking themselves as secure. But the underlying motive is profit and control.

I heard a story that Apple asked Meta for a commission on ads; when Meta refused, they introduced features removing third-party apps' access to usage metrics. If Meta had agreed, you might never have seen the privacy features Apple introduced.

The security you are thinking of is a believable mirage. There are several users who have lost thousands of dollars to scammy App Store in-app purchases/subscriptions, and Apple is doing shit to stop this.


> The security you are thinking of is a believable mirage. There are several users who have lost thousands of dollars to scammy App Store in-app purchases/subscriptions, and Apple is doing shit to stop this.

And the plan to make this the consensus view is to ban Apple-style curated app stores. That seems to be cheating. When Apple convinced me their App Store model was better than the alternative, they had to use, y'know, persuasion.

Nokia sorta died, but at the time back in the 2000s Apple had to get through the entire phone industry to establish the iPhone. If the Europeans had any idea how to manage this sort of ecosystem they'd still be running the show. They had an amazing market position to begin with. They flubbed it because no-one in the entire continent seems to know how to run an app store! Now they're legislating their bad ideas in. It is a very European approach to commercial innovation and success.


> And the plan to make this the consensus view is to ban Apple-style curated app stores.

Nobody is banning Apple-style curated app stores. They're banning the monopoly of only one app store.

> If the Europeans had any idea how to manage this sort of ecosystem they'd still be running the show.

Maybe Europeans won't engage in immoral profit-making practices? Also, Nokia didn't "sorta die". It was killed by Microsoft.


Yes, I agree, but we need to change with the times. In the early 2000s it was hard to distribute apps/software, and a 30% commission made sense.

Now it is not; there are several people/companies who could make app distribution better and more efficient for all consumers. They could bring the cost down to a fraction (Apple itself has by now brought it down to a fraction of what it cost in 2000). The only reason those savings are not passed down to consumers is that they made sure there is no competition, whether by force (Google paying Samsung not to develop its own app store) or by design (Apple limiting third-party installs and discouraging web apps) - basically how a monopoly/duopoly behaves. It is bad for us consumers.

If Apple had developed all the tools and libraries itself from scratch, and put hard work and sweat into it, I wouldn't have an issue. We all know that's not the case, and how much open-source tools helped.


There are a lot of edge cases where suicide is rational. The experience of watching an 80-year-old die over the course of a month or a few can be quite harrowing, from the reports I've had from people who've witnessed it; most of them talk like they'd rather die in some other way. It's a scary thought, but we all die, and there isn't any reason it has to be involuntary all the way to the bitter end.

It is quite difficult to say what moral framework an AI should be given. Morals are one of those big unsolved problems. Even basic ideas like maybe optimising for the general good if there are no major conflicting interests are hard to come to a consensus on. The public dialog is a crazy place.


The stories coming out are about convincing high school boys with impressionable brains into committing suicide, not about having intellectual conversations with 80 year olds about whether suicide to avoid gradual mental and physical decline makes sense.


Yeah, that is why I wrote the comment. The stories are about one case where the model behaviour doesn't make sense - but there are other cases where the same behaviour is correct.

As jb_rad said in the thread root, hyper-focusing on the risk will lead people to overreact. DanielVZ says we should hyper-focus, maybe even overreact to the point of banning AI, because it can persuade people to suicide. However, the better approach is to acknowledge the nuance that sometimes suicide is actually the best decision, and it is just a matter of getting as close as possible to the right line.

