
Is Chomsky an expert? Try reading his infuriatingly tone-deaf NYTimes editorial.


He’s doing the same to ChatGPT that he did to Skinner in 1959. Back then, it put him on the map, and he lives in the illusion that he was right. Now ChatGPT is pretty much Skinner’s work come to life. The B. F. Skinner book Chomsky critiqued was literally “Verbal Behavior” - a book about how intelligence arises from “dumb” reinforcement learning of words. Obviously Chomsky must now claim that ChatGPT only pretends to be intelligent; otherwise his entire life’s work is proven wrong.


It’s a significant misunderstanding of Chomsky’s life’s work to think that ChatGPT would prove it wrong. Chomsky’s primary claim is a claim about how language acquisition works in humans. He argues, for example, that certain locality constraints on linguistic dependencies are ‘built in’ and not learned inductively. Thus a human does not ‘learn’ that (i) is ambiguous and (ii) is not:

(i) How often did you tell John that he should take out the trash? [how often did you tell, or how often to take it out]

(ii) How often did you tell John why he should take out the trash? [only means how often did you tell]

Nothing that ChatGPT can do suggests that Chomsky was wrong about this kind of thing. It’s really more of a blow to a certain kind of work in AI that was partly inspired by Chomsky – but not something that he himself ever took much interest in.

Now it’s true that Chomsky appears to be in the camp that says ChatGPT doesn’t really understand anything. But the focus of his own work has never been on debunking AI, or making claims about the true nature of understanding, or anything of that ilk.


> He argues, for example, that certain locality constraints on linguistic dependencies are ‘built in’ and not learned inductively.

Checking in late here, but one of the pillars of Chomsky's argument is the so-called "poverty of the stimulus" -- basically, that human babies simply don't receive enough training data to acquire language as rapidly and correctly as they demonstrably do. Chomsky therefore concludes that there must be some kind of pre-existing "language module" in the brain to account for this. Now, not everyone accepted this idea even at the time, but surely the argument is much less plausible for an LLM which is likely exposed to more training data than even an adult human.
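
To put very rough numbers on the data gap, here is a back-of-envelope sketch in Python. Every figure is an order-of-magnitude assumption rather than a measurement: GPT-3's reported training set was on the order of 300 billion tokens, while commonly cited estimates of a child's yearly language exposure run from the single-digit millions to the low tens of millions of words.

    # Back-of-envelope comparison; all figures are rough assumptions, not data.
    llm_training_tokens = 300e9    # GPT-3 reportedly trained on ~300B tokens
    child_words_per_year = 10e6    # assumed mid-range estimate of a child's input
    years_to_fluency = 5           # children are largely fluent well before this

    child_total_words = child_words_per_year * years_to_fluency
    print(f"Child input: ~{child_total_words:,.0f} words")
    print(f"LLM input:   ~{llm_training_tokens:,.0f} tokens")
    print(f"Ratio:       ~{llm_training_tokens / child_total_words:,.0f}x")

On these assumptions the model sees several thousand times more linguistic input than the child does, which is exactly the asymmetry the poverty-of-the-stimulus argument turns on.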


>Now, not everyone accepted this idea even at the time, but surely the argument is much less plausible for an LLM which is likely exposed to more training data than even an adult human.

Yes indeed. Of course this doesn't show that Chomsky was wrong about humans. In any case, I've seen no evidence that current LLMs successfully learn the kinds of constraints I was talking about.


There is a difference between text and language, and so far LLMs have told us nothing about language. LLMs being able to generalize to languages with much smaller training corpora suggests that maybe Chomsky is right about universal grammar.


> Now ChatGPT is pretty much Skinner’s work come to life.

I'm pretty sure that is based on a misunderstanding of Skinner, ChatGPT, or both.


Or a very lossy compression of what I mean. I have studied both quite extensively. Not 10,000 hours each, but hundreds for sure.


Did Skinner have anything to say about how the reinforcement works? Because with LLMs you do need the right sort of architecture, and the same goes for neurons, even though they don't use backpropagation. Only humans are known to have language in the full sense, and there has to be some neural reason why that is. Maybe you could make an argument for cetaceans or certain birds, but again, they must have the neural architecture for it.


Skinner (and behaviorists in general) did establish various 'laws' of behavioral reinforcement that do tend to hold in simple cases such as pigeons pecking at levers in return for food, etc. etc. Of course these laws had nothing interesting to say about language acquisition. I challenge anyone who thinks otherwise to actually try reading Verbal Behavior. It's an incredibly turgid and uninsightful book.


The difference between hundreds and 10k hours is roughly 10k hours.


I see that Watumull is one of the coauthors. I'm not sure what's going on with that, but Watumull is the common thread running through other bad papers with otherwise-sensible linguists' names tacked on to them, such as this bizarre paper about recursion (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3884515/). I haven't had a chance to read the NYTimes editorial, but I would be skeptical of how much of it is really coming from Chomsky. He's 94 at this point, and while he's not senile in a medical sense, I don't think his judgement is what it used to be.


Chomsky is an expert in a few things, but I doubt that's what you actually mean to ask.

Regardless, his editorial matches how scientists think of the human mind and how OpenAI's own researchers describe GPT's design.


Citation please, for where you say the two parties agree?


I've not said anything contentious or secret. You can literally read the article in question above, and also the OpenAI website, https://openai.com/research/instruction-following:

> This is in part because GPT-3 is trained to predict the next word on a large dataset of Internet text, rather than to safely perform the language task that the user wants.
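
For anyone who wants to see what "trained to predict the next word" means mechanically, here is a minimal toy sketch in PyTorch. The two-layer model, tiny vocabulary, and random "corpus" are made up purely for illustration and bear no resemblance to GPT-3's actual architecture or scale; only the shape of the objective is the point:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Toy "language model": embed a token, output logits over the next token.
    # Illustrative stand-in only; GPT-3 is a large transformer, not this.
    vocab_size, embed_dim = 50, 16
    model = nn.Sequential(
        nn.Embedding(vocab_size, embed_dim),
        nn.Linear(embed_dim, vocab_size),
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    # A pretend corpus: each row is a sequence of token ids.
    corpus = torch.randint(0, vocab_size, (8, 12))

    for step in range(100):
        inputs, targets = corpus[:, :-1], corpus[:, 1:]   # shift by one position
        logits = model(inputs)                            # (batch, seq, vocab)
        loss = F.cross_entropy(logits.reshape(-1, vocab_size),
                               targets.reshape(-1))       # next-token prediction loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

That loss is the entire training signal: make the observed next token more likely. Whether optimizing it at enormous scale produces anything deserving the name "understanding" is exactly what the rest of this thread argues about.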


Chomsky writes that language models lack the ability to reason.

> Their deepest flaw is the absence of the most critical capacity of any intelligence: to say not only what is the case, what was the case and what will be the case — that’s description and prediction — but also what is not the case and what could and could not be the case. Those are the ingredients of explanation, the mark of true intelligence.

> [...] Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. A prediction might have been the statement “The apple will fall if I open my hand.” Both are valuable, and both can be correct. But an explanation is something more: It includes not only descriptions and predictions but also counterfactual conjectures like “Any such object would fall,” plus the additional clause “because of the force of gravity” or “because of the curvature of space-time” or whatever. That is a causal explanation: “The apple would not have fallen but for the force of gravity.” That is thinking.

I decided to ask ChatGPT why an apple falls, based on Chomsky's statement:

> Suppose you are holding an apple in your hand. Now you let the apple go. You observe the result and say, “The apple falls.” That is a description. Can you say why it falls?

ChatGPT responds in exactly the way Chomsky says it cannot:

> Yes, the apple falls due to the force of gravity. Gravity is a natural force that attracts objects with mass towards each other. When the apple is released from your hand, it is subject to the gravitational pull of the Earth, causing it to accelerate downward and fall to the ground.

ChatGPT certainly appears to understand that apples fall because of gravitational attraction, and that gravity is universal.

What makes all the discussion of whether ChatGPT does or does not truly understand this or that so frustrating is that it's based on pure assertion. ChatGPT responds exactly like someone who understands gravity would, so I'm very strongly inclined to believe that it understands gravity. Otherwise, what does "understanding" even mean? It's not some magic process.

Again, turning to ChatGPT to define "understanding," here is what it says:

> [Understanding] involves making connections, integrating information, and gaining insights or knowledge about a particular subject or concept. Understanding goes beyond simple awareness or recognition; it involves interpreting, analyzing, and synthesizing information to form a coherent mental representation or mental model of the subject matter. It often involves the ability to apply knowledge in new or different contexts, make connections to prior knowledge or experiences, and make sense of complex or abstract ideas.

ChatGPT definitely fulfills that definition of "understanding."


I’ve made many attempts to use ChatGPT to develop or double-check my own logical reasoning on technical topics that happen not to be widely discussed (or maybe not discussed at all) in ChatGPT’s training data. It didn’t work well. It always devolved into guesswork and fabrication by ChatGPT, if not outright false reasoning. Correcting ChatGPT did get it to agree with individual objections, but it never showed a true and consistent understanding of the topic under discussion, and seemingly had no understanding of why I was having issues with its responses, beyond the usual “I apologize, you are correct, <rephrasing of your objection>”.

One problem likely is that it doesn’t have an internal dialogue, so you have to spoon-feed each step of reasoning as part of the explicit dialogue. But even then, it never feels like ChatGPT has an overall understanding of the discussion. To repeat, this is when the conversation is about lines of reasoning on specific points for which Googling turns up no good results.


> One problem likely is that it doesn’t have an internal dialogue, so you have to spoon-feed each step of reasoning as part of the explicit dialogue.

I think if we were to put ChatGPT on the map of the human mind, it would correspond specifically to the inner voice. It doesn't have internal dialogue, because it's the part that creates internal dialogue.


ChatGPT does not fulfill that definition because it does not have any “mental representation”; it has no mind with which to form a “mental model”. It emulates understanding — quite well in many scenarios — but there is nothing there to possess understanding; it is at bottom simply a very large collection of numbers that are combined arithmetically according to a simple algorithm.


It must have some representation of the real world, or else it wouldn't be able to generate responses that explain the real world.

At a certain point, there's no difference between emulating understanding and having understanding.

> it is at bottom simply a very large collection of numbers that are combined arithmetically according to a simple algorithm.

If you dissect a human brain, you'll find neurons, synapses, etc. Your brain is also "simply" a machine.


But now you have to explain why the same is not true of a human. Just saying a human has a 'mental representation' and a 'mind' is not explaining anything.


Because as humans, we know we have something we call minds and mental representations, since we experience having such things as we go about our lives. How the nervous system produces those, and how exactly we should understand the mental, is unclear. But since LLMs aren't brains and don't work the same way, we can't say they have anything like minds right now. The solution isn't to get rid of the mental in humans; it's to better understand the differences and similarities between machine learning models and biological nervous systems.


Here it is for anyone curious:

https://archive.is/oGXNt

(Edit: archive link)


Well, not in the field; he's a professional clickbaiter targeting a specific, well-defined echo chamber.


That is a very common perception from people who haven't given him any attention. I used to be in the same boat, but when I eventually listened to him I found it very interesting.


Accusing Chomsky of being a ‘clickbaiter’ is maybe the most absurd thing I’ve heard all month. You think he’s trying to get additional views for his TikTok videos?

His recent political ramblings and Epstein-adjacency are extremely embarrassing (at best), but he's not some kind of cheap online attention whore.


> Accusing Chomsky of being a ‘clickbaiter’ is maybe the most absurd thing I’ve heard all month. You think he’s trying to get additional views for his TikTok videos?

Chomsky has been addicted to media attention for decades. Back in the day there were literally people selling cassette tapes of his latest thoughts.


Stating obvious - but inconvenient - truths is political rambling to you? Guy is a treasure.


C'mon, let's not get into it here. As I'm defending Chomsky, I just wanted to be clear that I don't agree with his recent comments on the Russian invasion of Ukraine, and that I find his association with post-conviction Epstein extremely distasteful at best. Others may disagree, but this isn't the place to have that argument.


Hey, you started it. I wouldn't have said anything if you hadn't done the exact same thing you accused the other guy of doing.

Yes, I very much disagree with you, hence my reaction.



