I agree that "proof of thought" is a misleading name, but this whole "computers can't think" thing is making LLM skepticism seem very unscientific. There is no universally agreed upon objective definition of what it means to be able to "think" or how you would measure such a thing. The definition that these types of positions seem to rely upon is "a thing that only humans can do", which is obviously a circular one that isn't useful.
If you believe computers can think, then you must be able to explain why a chain of dominoes is also thinking when I convert an LLM from transistor switches into the domino equivalent. If you don't fall for the marketing hype & study both the philosophical & mathematical literature on computation, then it is obvious that computers (or any mechanical gadget, for that matter) cannot qualify for any reasonable definition of "thinking" unless you agree that all functionally equivalent manifestations of arithmetic must be considered "thinking", including cascading dominoes that correspond to the arithmetic operations in an LLM.
>If you believe computers can think then you must be able to explain why a chain of dominoes is also thinking when I convert an LLM from transistor relay switches into the domino equivalent.
Sure, but if you assume that physical reality can be simulated by a Turing machine, then (computational practicality aside) one could do the same thing with a human brain.
Unless you buy into some notion of magical thinking as pertains to human consciousness.
No magic is necessary to understand that carbon & silicon are not equivalent. The burden of proof is on those who think silicon can be a substitute for carbon & all that it entails. I don't buy into magical thinking like Turing machines being physically realizable b/c I have studied enough math & computer science to not be confused by abstractions & their physical realizations.
I recently wrote a simulation of water molecules & got really confused when the keyboard started getting water condensation on it. I concluded that simulating water was equivalent to manifesting it in reality & immediately stopped the simulation b/c I didn't want to short-circuit the CPU.
That isn’t a definition or even a coherent attempt.
For starters, what kind of cognition or computation can’t be implemented with either logic or arithmetic?
What is or is not “cognition” is going to be a higher-level property than whichever basic universally capable substrate is used, given that such substrates can easily simulate each other and be substituted for one another.
Even digital and analog systems can be used to implement each other to arbitrary accuracy.
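For one direction of that equivalence, here is a minimal sketch (in Python, with made-up component values, purely for illustration) of a digital program approximating an analog RC low-pass filter as closely as the chosen step size allows:

    # Toy sketch: digitally simulating an analog RC low-pass filter.
    # R, C, the input voltage, and the step size are made-up values;
    # shrinking dt makes the digital approximation arbitrarily close
    # to the continuous circuit's behaviour.
    R, C = 1_000.0, 1e-6               # ohms, farads (illustrative only)
    dt = 1e-6                          # seconds per simulation step
    v_in, v_out = 1.0, 0.0             # step input, initial output voltage

    for _ in range(5_000):
        # Euler step of dv_out/dt = (v_in - v_out) / (R * C)
        v_out += dt * (v_in - v_out) / (R * C)

    print(round(v_out, 4))             # approaches v_in as the capacitor charges

The accuracy is limited only by dt and word length, which is the sense in which "arbitrary accuracy" is meant above.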
The jury may be out on how to judge what 'thought' actually is. However, what it is not is perhaps easier to perceive. My digital thermometer does not think when it tells me the temperature.
My paper-and-pen version of the latest LLM (quite a large bit of paper and certainly a lot of ink, I might add) also does not think.
I am surprised so many in the HN community have so quickly taken to assuming as fact that LLMs think or reason, even anthropomorphising LLMs to this end.
For a group inclined to quickly call out 'God of the gaps', they have rather quickly invented their very own 'emergence'.
Lots of people consider company valuations evidence of a singularity right around the corner, but it requires a very specific kind of mindset to buy into that as "proof" of anything other than very compelling hype by people who have turned financial scams into an art form.
I understand computers, software, & the theory of computation well enough to know that there is no algorithm or even a theoretical algorithmic construction that can be considered thought. Unless you are willing to concede that thinking is nothing more than one of the many models equivalent to a Turing machine, e.g. lambda calculus, Post systems, context-sensitive grammars, carefully laid out dominoes, permutations of bit strings, etc., you must admit that computers are not thinking. If you believe computers are thinking, then you must also admit dominoes are thinking when falling in a cascading chain.
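To make the "it's all just arithmetic" point concrete, here is a minimal sketch (in Python, with made-up weights and inputs, not any real model's parameters) of a single neuron from an LLM-style feed-forward layer reduced to nothing but multiplication and addition, the same operations a domino layout or a pen-and-paper calculation could carry out:

    # Toy sketch: one neuron of a transformer-style feed-forward layer,
    # written as bare multiply-accumulate steps. The weights, inputs,
    # and bias below are invented for illustration.
    inputs = [0.5, -1.2, 0.3]          # activations from the previous layer
    weights = [0.8, 0.1, -0.4]         # this neuron's learned weights
    bias = 0.05

    acc = bias
    for x, w in zip(inputs, weights):
        acc += x * w                   # multiply-accumulate, nothing more

    output = acc if acc > 0 else 0.0   # ReLU: one comparison and a selection
    print(output)

Whether billions of such multiply-accumulates amount to "thinking" is exactly the disagreement in this thread.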
We're already at the point where LLMs can pass the Turing test. If we define thinking as something only humans can do, then we can't decide whether anyone is thinking at all just by talking to them through text, because we can't tell if they're human any more.
Animals can also think. It's not restricted to one specific type of primate physiology. But it seems like you think you're nothing more than a cascade of falling dominoes, in which case we don't really have much to discuss. Your metaphysical assumptions are fundamentally at odds with what I consider a reasonable stance on computation & reality.
The void created by modernity must be filled somehow so it might as well be the great programmer in the great beyond. Just as childish as religions of pre-modernity but very useful if you're a technocrat building data centers & trying to pump the valuations of companies that can benefit from all that buildout w/ promises of forthcoming utopias approximating the palace of the great programmer in the great beyond. Just a few more nuclear power plants & a few more GPU clusters is all that's needed.
Ideally it is filled with curiosity and continued exploration.
Not manufactured stop gaps or generic cynicism.
There is no reason more GPUs can't contribute to further understanding, as one of many tools that have already assisted with relevant questions and problems.
Opt out of serious inquiry, no excuse needed, if you wish. Reframing others' efforts is not necessary to do that.
The people who think enough nuclear reactors & silicon chips w/ the right incantation of 0s & 1s will deliver them to an abundant utopia don't leave much room in their ideology for any doubt about the eschatological objective of their quest & mission in life. These people are definitely not on the religious side of any religious vs. non-religious divide.
Sure thing buddy, I'm the confused one in this entire millenarian frenzy.