
> On two occasions I have been asked, 'Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?' I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.


An Artificial Intelligence on that level would be able to easily figure out what you actually want. We should maybe go one step further and get rid of the inputs? They just add all of that complexity when they're not even needed.


At some point we just need to acknowledge that such speculation is like asking for a magic genie in a bottle rather than discussing literal technology.


"Magic genie in a bottle" is a very good way of framing what an AGI would (will) be if implemented, complete with all the obvious failure modes that end disastrously for the heroes in magic genie stories.


That's basically the whole premise of Bostrom's "Superintelligence"


Also consider that you won’t know whether the answer relates to what you want, because you have no reference to help you distinguish between reality and hallucination.

And have been conditioned to accept LLM responses as reality.


Right? The AI can just predict what we'll want and then create it for us.


And if it wasn't what you wanted, well maybe you are wrong. An AI that intelligent is always right after all.

Seriously though, at some point AI does actually need an input from the user, if we are imagining ourselves still as the users.

If instead we just let AI synthesise everything, including our demands and desires, then why bother being involved at all? Take the humans out of the loop. Their experience of the AI's outputs is actually a huge bottleneck; if we skip showing the humans, we can exist so much faster.


Well, 'create it' at least. Leaving the humans in place would just mess things up.


Isn't that the whole premise behind 'The Matrix'? An imaginary world created as a simulation by machines.


Yea, but the world they created wasn't one anyone wanted or asked for, any more than our own reality caters to us; it was intended to portray life as it was in the nineties. To accept this as ideal is to accept that we currently live in an ideal world, which is extremely difficult to accept.


In one of the movies it's actually explained that the machines originally created a utopia for humanity, but it was bad for "engagement" and "retention", so they had to pivot to the nineties simulator - which was better accepted.


Ok, sure. In neither case was such a world a product of imagination.


Or it could figure out that “perception is reality” and make the necessary adjustments to what we think we want.


Y'all starting to sound awfully like religious types. Is that satire?


They were simply asking if Babbage had prepared the machine to always give the answers to the questions Babbage knew he was going to enter - i.e. whether he was a fraud.

Enter 2+2. Receive 4. That's the right answer. If you enter 1+1, will you still receive 4? It's easy to make a machine that always says 4.
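
A toy sketch of that test (hypothetical code, just to make the point concrete): a rigged machine passes the rehearsed demo question but fails any other input.

    # A rigged "calculating engine" vs. a real one (illustrative only).
    def rigged_engine(a, b):
        return 4          # ignores the inputs; always gives the rehearsed answer

    def real_engine(a, b):
        return a + b      # actually computes

    print(rigged_engine(2, 2), real_engine(2, 2))  # 4 4 -- both pass the demo
    print(rigged_engine(1, 1), real_engine(1, 1))  # 4 2 -- only the real engine survives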


Mr. Babbage apparently wasn't familiar with the idea of error correction. I suppose it's only fair; most of the relevant theory was derived in the 20th century, AFAIR.


No, error correction in general is a different concept than GIGO. Error correction requires someone, at some point, to have entered the correct figures. GIGO tells you that it doesn't matter if your logical process is infallible, your conclusions will still be incorrect if your observations are wrong.
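
A toy illustration of that distinction (my own sketch, not Babbage's): a repetition code can repair a flipped bit only because the correct figure was supplied three times to begin with; hand it three wrong copies and it "corrects" its way straight to garbage.

    from collections import Counter

    def encode(bit):
        # Redundancy: someone has to supply the correct figure up front.
        return [bit, bit, bit]

    def decode(received):
        # Majority vote repairs a single corrupted copy.
        return Counter(received).most_common(1)[0][0]

    good = encode(1)
    good[2] = 0              # one copy corrupted in transit
    print(decode(good))      # 1 -- corrected, because two correct copies survived

    garbage = [0, 0, 0]      # nobody ever entered the right figure
    print(decode(garbage))   # 0 -- confidently wrong: garbage in, garbage out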


GIGO is an overused idea that's mostly meaningless in the real world. The only way for the statement to be true in the general sense is if your input is uniformly random. Anything else carries some information. In practice, for Babbage's quip to hold, the interlocutor can't merely supply any wrong figures; they need to supply ones specifically engineered to be uncorrelated with the right figures.

Again, in the general sense. Software engineers are too used to computers being fragile wrt. inputs. Miss a semicolon and the program won't compile (or worse, if it's JavaScript). But this level of strictness wrt. inputs is a choice in program design.

Error correction was just one example anyway. Programmers may be afraid of garbage in their code, but for everyone else, a lot of software is meant to sift through garbage, identifying and amplifying desired signals in noisy inputs. In other words, they're producing right figures in the output out of wrong ones in the input.
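
A minimal sketch of what I mean (assuming simple Gaussian noise, nothing fancier): every individual figure going into the program is wrong, yet the output comes out right.

    import random

    TRUE_VALUE = 42.0
    # Every single input figure is "wrong" -- each reading is corrupted by noise.
    readings = [TRUE_VALUE + random.gauss(0, 5.0) for _ in range(10_000)]

    estimate = sum(readings) / len(readings)
    print(estimate)  # ~42.0 -- a right figure out of thousands of wrong ones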


I don't think machines should rely on opaque logic to assume and "correct errors" in user input. It's more accurate to "fail" than to hand out an assumed output.

And also:

> they need to supply ones specifically engineered to be uncorrelated with the right figures.

I assume most people (including me) will understand it this way when told to "input wrong figures".


> In other words, they're producing right figures in the output out of wrong ones in the input.

This does not refute the concept of GIGO, nor does it have anything to do with it. You appear to have missed the point of Babbage's statement. I encourage you to meditate upon it more thoroughly. It has nothing to do with the statistical correlation of inputs to outputs, and nothing to do with information theory. If Babbage were around today, he would still tell you the same thing, because nothing has changed regarding his statement, because nothing can change, because it is a fundamental observation about the limitations of logic.


I don't know what the point of Babbage's statement was; it makes little sense other than as a quip, or - as 'immibis suggests upthread[0] - as a case of Babbage not realizing he was being tested by someone worried he's a fraud.

I do usually know what the point is of any comment quoting that Babbage statement, and in such cases, including this one, I almost always find it wanting.

--

[0] - https://news.ycombinator.com/item?id=43893270


I suppose spell checking is a sort of literal error correction. Of course this does require a correct list of words, and it requires the misspellings not to be on that list.
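
Something like this rough sketch with Python's difflib (one way among many): the correction only works because the word list is right, and a wrong word that happens to be on the list can't be fixed at all.

    import difflib

    dictionary = ["figure", "machine", "answer"]  # toy list of known-good words

    def correct(word):
        if word in dictionary:
            return word  # looks valid, so nothing gets corrected
        matches = difflib.get_close_matches(word, dictionary, n=1)
        return matches[0] if matches else word

    print(correct("machne"))  # "machine" -- fixed against the known-good list
    print(correct("answer"))  # "answer" -- even if you meant "figure", it stays as-is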


Honestly I see this as being not about error correction but about divining, with perfect accuracy, what you want. And when you say it that way it starts sounding like a machine for predicting the future.


Well yea, as soon as someone starts acting like humans communicate well, let alone like they're capable of the mind-reading behavior you're describing, I'm completely lost.


What's 1+1?

Exactly, it's 5. You just have to correct the error in the input.


Yes, with sufficient context, that's what I do every day, as presentation authors, textbook authors and the Internet commentariat alike all keep making typos and grammar errors.

You can't deal with humans without constantly trying to guess what they mean and use it to error-correct what they say.

(This is a big part of what makes LLMs work so well on a wide variety of tasks where previous NLP attempts failed.)


I often wish LLMs would tell me outright the assumptions they make about what I mean. For example, if I accidentally put “write an essay on reworkable energy”, it should start by saying “I'm gonna assume you mean renewable energy”. It greatly upsets me that I can't get it to do that, just because other people who are not me seem to find that response rude for reasons I can't fathom, and so it was heavily trained out of the model.


Huh, I'd expect it to do exactly what you want, or some equivalent of it. I've never noticed LLMs silently make assumptions about what I meant wrt. anything remotely significant; they do a stellar job of being oblivious to typos, bad grammar and other fuckups of ESL people like me, and (thankfully) they don't comment on that, but otherwise they've always restated my requests and highlighted when they're deviating from a direct/literal understanding.

Case in point, I recently had ChatGPT point out, mid-conversation, that I'm incorrectly using "disposable income" to mean "discretionary income", and correctly state this must be the source of my confusion. It did not guess that from my initial prompt; it took my "wrong figures" at face value and produced answers that I countered with some reasoning of my own; only then, it explicitly stated that I'm using the wrong term because what I'm saying is correct/reasonable if I used "discretionary" in place of "disposable", and proceeded to address both versions.

IDK, but one mistake I see people keep making even today is telling the models to be succinct, concise, or to otherwise minimize the length of their answer. For LLMs, that directly cuts into their "compute budget", making them dumber. Incidentally, that could also be why one would see the model make more assumptions silently - those are among the first things to go when you're trying to write concisely. "Reasoning" models are more resistant to this, fortunately, as the space between the <thinking> tags is literally the "fuck off user, this is my scratchpad, I shall be as verbose as I like" space, so one can get their succinct answers without compromising the model too badly.


That’s right, it goes in the square hole!



