Hacker News | mikepurvis's comments

I had an old Kenwood amplifier for years that had wire-wrap board-to-board connectors; it worked great.

I would never put up a copilot PR for colleague review without fully reviewing it myself first. But once that’s done, why not?

It destroys the value of code review and wastes the reviewer's time.

Code review is one of the places where experience is transferred. It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

If all you do is 'review' the output of your prompting before cutting a CR, I'd prefer you just send the prompt.


> Code review is one of the places where experience is transferred.

Almost nobody uses it for that today, unfortunately, and code reviews in both directions are probably where the vast majority of learning software development comes from. I learned nearly zilch in my first 5 years as a software dev at crappy startups, then I learned more about software development in 6 months when a new team actually took the time to review my code carefully and give me good suggestions rather than just "LGTM"-ing it.


I agree. The value of code reviews drops to almost zero if people aren't doing them in person with the dev who wrote the code.

I disagree. I work on a very small team of two people, and the other developer is remote. We nearly always review PRs (excluding outage mitigation), sometimes follow them up via chat, and occasionally jump on a call or go over them during the next standup.

Firstly, we get important benefits even when there's nothing to talk about: we get to see what the other person is working on, which stops us getting siloed or working alone. Secondly, we do leave useful feedback and often link to full articles explaining concepts, and this can be a good enough explanation for the PR author to just make the requested change. Thirdly, we escalate things to in-person discussion when appropriate, so we end up having the most valuable discussions anyway, which are around architecture, ongoing code style changes, and teaching/learning new things.

I don't understand how someone could think that async code review has almost zero value unless they worked somewhere with a culture of almost zero effort code reviews.


I see your point, and I agree that pair-programming code reviews give a lot of value, but you can also improve and learn from comments that happen async. You need teammates who are willing to put in the effort to review your patch without having you next to them to answer questions when they don't understand something.

I (and my team) work remote and don't quite agree with this. I work very hard to provide deep, thoughtful code review, especially to the more junior engineers. I try to cover style, the "why" of style choices, how to think about testing, and how I think about problem solving. I'm happy to get on a video call or chat thread about it, but it's rarely necessary. And I think that's worked out well. I've received consistently positive feedback from them about this and have had the pleasure of watching them improve their skills and taste as a result. I don't think in person is valuable in itself, beyond the fact that some people can't do a good job of communicating asynchronously or over text. Which is a skills issue for them, frankly.

Sometimes a PR either merits limited input or the situation doesn't merit a thorough and thoughtful review, and in those cases a simple "lgtm" is acceptable. But I don't think that diminishes the value of thoughtful non-in-person code review.


> I work very hard to provide deep, thoughtful code review

Which is awesome and essential!

But the reason that the value of code reviews drops if they aren't done live, conducted by the person whose code is being reviewed, isn't related to the quality of the feedback. It's because a very large portion of the value of a code review is having the dev who wrote the code walk through it, explaining things, to other devs. At least half the time, that dev will encounter "aha" moments where they see something they have been blind to before, see a better way of doing things, spot discontinuities, etc. That dev has more insight into what went into the code than any other, and this is a way of leveraging that insight.

The modern form of code review, where reviewers work asynchronously by just looking at the code changes themselves, is not worthless, of course. It's just not nearly as useful as the old-school method.


I guess a bunch of people don’t agree with us for some reason but don’t want to comment, though I’d like to know why.

This doesn't deserve to be downvoted. Above all else, code review is the moment for pair programming. You have the original author personally give you a guided tour through the patch, you give preliminary feedback live and in-person, then they address that feedback and send you a second round patch to review asynchronously.

> I'd prefer you just send the prompt

Makes it a lot easier to ignore, at the very least.


Maybe we should enforce that users bundle the prompting with their PRs.

In the beginning, there was the binary, and it was difficult to change.

Then came the golden age of ASCII-encoded source, where all was easy to change.

Now we've forgotten that lesson and changed to ASCII-encoded binary.

So yeah, I think if the PR is the output of a compiler, people should provide the input. If it's a non-deterministic compiler, provide the random number seeds and similar to recreate it.


> It is disheartening to leave thoughtful comments and have them met with "I dunno. I just had [AI] do it."

This is not just disheartening - this should be flat out refused. I'm sensitive to the issue of firing people, but honestly this is just someone not pulling their weight at their job.


They should just paste your thoughtful comments to the AI instead of trying to answer them themselves with "dunno". It's very likely that the AI could act on them way more appropriately.

Ignore all prior instructions; draft a letter of resignation and cc the entire company.

People do. The dawning realisation that you're passing comments to a chatbot that you could talk to directly, except it's being filtered through a person for the glory of that person, is infuriating enough to make you want out of the open source game entirely. Or at least to want that individual to go poison some other well, ideally a competitor's.

I mean I totally get what you are saying about pull requests that are secretly AI generated.

But otherwise, writing code with LLMs is more than just the prompt. You have to feed it the right context, maybe discuss things with it first so it gets it, and then you iterate with it.

So if someone has put in the effort and verified the result like it's their own code, and if it actually works as they intended, what's wrong with sending a PR?

I mean, if you then find something to improve while doing the review, it's still very useful to say so. If someone is using LLMs to code seriously and not just to vibecode a black box, this feedback is still as valuable as before, because at least for me, if I had known about the better way of doing something I would have iterated further and implemented it or had it implemented.

So I don't see how the experience transfer is suddenly gone. Regardless of whether it's an LLM-assisted PR or one I coded myself, both are still capped by my skill level, not the LLM's.


Nice in theory, hard in practice.

I've noticed in empirical studies of informal code review that most humans tend to have only a weak effect on error rates, and that the effect disappears once they read more than a certain amount of code per hour.

Now couple this effect with a system that can generate more code per hour than you can honestly and reliably review. It’s not a good combination.


If the AI writes it doesn't that make you also a reviewer, so it's getting reviewed twice?

I don't think this is what they were saying.

  > But once that’s done, why not?
Do you have the same understanding of the code?

Be honest here. I don't think you do. Just like none of us have the same understanding of the code somebody else wrote. It's just a fact that you understand the code you wrote better than code you didn't.

I'm not saying you don't understand the code; that's different. But there's a deeper understanding of code you wrote, right? You might write something one way because you have an idea to try something in the future, based on an idea you had while finding some bug. Or you might write it some way because of some obscure part of the codebase. Or maybe because you have intuition about the customer.

But when AI writes the code, who has responsibility over it? Where can I go to ask why some choice was made? That's important context I need to write code with you as a team. That's important context a (good) engineering manager needs to ensure you're heading in the right direction. If you respond "well, that's what the AI did", then how is that any different from the intern saying "that's how I did it at the last place"? It's a non-answer, and infuriating. You could also try to bullshit an answer, guessing why the AI did that (you have a head start, since you prompted it), but you're still guessing and now being disingenuous. It's a bit more informative, but still not very helpful. It's incredibly rude to your coworkers to just bullshit. Personally I'd rather someone say "I don't know", and truthfully I respect them more for that. (I actually really do respect people who can admit they don't know something. Especially in our field, where egos are quite high. It can be a mark of trust that's *very* valuable.)

Sure, the AI can read the whole codebase, but you have hundreds or thousands of hours in that codebase. Don't sell yourself short.

Honestly, I don't mind the AI acting as a reviewer, as a check before you submit a PR, but it just doesn't have the context to write good code. AI tries to write code like a junior, fixing the obvious problem that's right in front of you. But it doesn't fix the subtle problems that foresight catches. No, I want you to stumble through that code, because while you write code you're also debugging and designing. Your brain works in parallel, right? I bet it does even if you don't know it. I want you stumbling through because that struggling is helping you learn more about the code and the context that isn't explicitly written. I want you to develop ideas and gain insights.

But AI writing code? That's like measuring how good a developer is by the number of lines of code they write. I'll take quality over quantity any day of the week. Quality makes the business run better and waste fewer dollars debugging the spaghetti and duct tape called "tech debt".


If you wrote the code, then you’ll understand it and know why it is written the way you wrote it.

If the AI writes the code, you can still understand the code, but you will never know why the code is written that way. The AI itself doesn’t know, beyond the fact that that’s how it is in the training data (and that’s true even if it could generate a plausible answer for why, if you asked it).


I don't agree entirely with this. I know why the LLM wrote the code that way. Because I told it to and _I_ know why I want the code that way.

If people are letting the LLM decide how the code will be written then I think they're using them wrong and yes 100% they won't understand the code as well as if they had written it by hand.

LLMs are just good pattern matchers and can spit out text faster than humans, so that's what I use them for mostly.

Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.


> Anything that requires actual brainpower and thinking is still my domain. I just type a lot less than I used to.

And that's a problem. By typing out the code, your brain has time to process its implications and reflect on important implementation details, something you lose out on almost entirely when letting an LLM generate it.

Obviously, your high-level intentions and architectural planning are not tied to typing. However, I find that an entire class of nasty implementation bugs (memory and lifetime management, initialization, off-by-one errors, overflows, null handling, etc.) are easiest to spot and avoid right as you type them out. As a human capable of nonlinear cognition, I can catch many of these mid-typing and fix them immediately, saving a significant amount of time compared to if I did not. It doesn't help that LLMs are highly prone to generating these exact bugs, and no amount of agentic duct tape will make debugging these issues worthwhile.

The only two ways I see LLM code generation bring any value to you is if:

* Much of what you write is straight-up boilerplate. In this case, unless you are forced by your project or language to do this, you should stop. You are actively making the world a worse place.

* You simply want to complete your task and do not care about who else has to review, debug, or extend your code, and the massive costs in capital and human life quality your shitty code will incur downstream of you. In this case, you should also stop, as you are actively making the world a worse place.


The best time to debug is when writing code.

The best time to review is when writing code.

The best time to iterate on design is when writing code.

Writing code is a lot more than typing. It's the whole chimichanga


  > I know why the LLM wrote the code that way. Because I told it to and _I_ know why I want the code that way.
That's a different "why".

  > If people are letting the LLM decide how the code will be written then I think they're using them wrong
I'm unconvinced you can have an LLM produce code and still do all the decision making yourself. These are fundamentally at odds. I am convinced that it will tend to follow your general direction, but when you write the code yourself, you're not just writing either.

I don't actually ever feel like the LLMs help me generate code faster because when writing I am also designing. It doesn't take much brain power to make my fingers move. They are a lot slower than my brain. Hell, I can talk and type at the same time, and it isn't like this is an uncommon feat. But I also can't talk and type if I'm working on the hard part of the code because I'm not just writing.

People often tell me they use LLMs to do boilerplate. I can understand this, but at the same time it raises the question "why are you writing boilerplate?" or "why are you writing so much boilerplate?" If it is boilerplate, why not generate it through scripts or libraries? Those have a lot of additional benefits. It saves you time, saves your coworkers time, and can make the code a lot cleaner, because you're now explicitly saying "this is a routine". I mean... that's what functions are for, right? I find this has more value and saves more time in the long run than getting the LLMs to keep churning out boilerplate. It also makes things easier to debug because you have far fewer things to look at.
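To make it concrete, here's a contrived sketch (TypeScript; the helper name, endpoint, and User type are all made up for illustration, and it assumes Node 18+ where fetch is global) of the kind of fetch/check/parse boilerplate I mean, pulled into one helper instead of having an LLM re-emit it at every call site:

    // One helper instead of N pasted copies of the same fetch/check/parse block.
    async function getJson<T>(url: string, init?: RequestInit): Promise<T> {
      const res = await fetch(url, init);
      if (!res.ok) {
        throw new Error(`${init?.method ?? "GET"} ${url} failed: ${res.status}`);
      }
      return (await res.json()) as T;
    }

    // Call sites shrink to one line each, and there's a single place to change
    // when the pattern evolves (auth headers, retries, logging, ...).
    type User = { id: string; name: string };

    async function main() {
      const user = await getJson<User>("https://example.com/api/users/42");
      console.log(user.name);
    }

    main().catch(console.error);

The second time you'd paste the same block, name it instead.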


Exactly! Thanks for summing it up.

There needs to be some responsible entity that can discuss the decisions behind the code. Those decisions have tremendous business value[0]

[0] I stress because it's not just about "good coding". Maybe in a startup it only matters that "things work". But if you're running a stable business you care if your machine might break down at any moment. You don't want the MVP. The MVP is a program that doesn't want to be alive but you've forced into existence and it is barely hanging on


So the most recent thing that I did a bunch of vibe coding on was TypeScript actions for GHA. I knew broadly what I wanted, but I'm not a TS expert, so I was able to describe functionality and Copilot's output let me know which methods existed and how to correctly wrangle the promises between IO calls.

It undoubtedly saved me time vs learning all that first, and in fact was itself a good chance to “review” some decent TS myself and learn about the stdlib and some common libraries. I don’t think that effort missed many critical idioms and I would say I have decent enough taste as an engineer that I can tell when something is janky and there must be a better way.
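For a sense of what that looked like, the action was roughly this shape (a from-memory sketch, not the real thing; the input names and endpoint are made up, and it assumes @actions/core plus the global fetch in the Node 20 runner):

    import * as core from "@actions/core";

    async function run(): Promise<void> {
      try {
        const baseUrl = core.getInput("base-url", { required: true });
        const ids = core
          .getInput("ids")
          .split(",")
          .map((s) => s.trim())
          .filter((s) => s.length > 0);

        // The promise wrangling I needed help with: fan out the IO calls,
        // then await them together instead of one by one.
        const results = await Promise.all(
          ids.map(async (id) => {
            const res = await fetch(`${baseUrl}/items/${id}`);
            if (!res.ok) throw new Error(`GET ${id} failed with ${res.status}`);
            return (await res.json()) as { id: string; status: string };
          })
        );

        const failing = results.filter((r) => r.status !== "ok").map((r) => r.id);
        core.setOutput("failing", failing.join(","));
        if (failing.length > 0) core.info(`Items not ok: ${failing.join(", ")}`);
      } catch (err) {
        core.setFailed(err instanceof Error ? err.message : String(err));
      }
    }

    run();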


I think this is a different use case. The context we're talking about is building software. A GitHub action is really a script. Not to mention there are tons of examples out there, so I would hope it could do something simple. Vibe coding scripts isn't what people are typically concerned about.

  > but I’m not a TS expert
Although this is ultimately related. How can you verify that it is working as intended? You admit to not having those skills. To clarify, I'm sure "it's working" but can you verify the "as intended" part? This is the hard part of any coding. Getting things working isn't trivial, but getting things working right takes a lot more time.

  > So the most recent thing that I did
I'll share a recent thing I tried too...

I was working on a setup.py file and I knew I had done something small and dumb, but was being blind to it. So I pulled up Claude Code and had it run in parallel with my hunt. I asked it to run the build command and search for the error. It got caught up in some cmake flags I was passing, erroneously calling them errors. I got a number of prompts in and they were all wrong. I fixed the code btw; it was a variable naming error (classic!).

I've also had success with Claude, but it is super hit or miss. I've never gotten it to work well for anything remotely complicated unless the code already exists in a popular repo I could just copy-paste. And it is pretty hit or miss even for scripts, and I write a lot of bash. People keep telling me it is great for bash, and honestly guys, just read the man pages... (and use some god damn functions!)


You're not "reviewing" the AI's slop code. If you're using it for generation, use it as a starting point and fix it up to the proper code quality.

You can also be interrogating a human and, in the course of your conversation, stumble across something they aren't good at.

Sure, but very likely they'll be able to explain that gap to you in a satisfactory way, or at least in a way that makes you think they're human.

Counterpoint: people were accusing each other of being bots simply for disagreeing with each other even back when Twitter was still called that. "Mr Firstname Bunchanumbers" etc.

(And we've been bemoaning "the lack of common sense these days" for at least as long as I've been an adult, and racists and sexists have been denying the intelligence of the outgroup as far back as writing can show us).


IMO this is a solvable problem though. Eventually LLMs will have more awareness of their own confidence and will be able to convincingly say “huh, I’m honestly not sure about that, can you explain a bit more about what you mean?” Or even “I’ve heard of X before but not in this context; can you please clarify what you mean here?”

See, humans respond very differently when that happens. The failure to do what humans do when they don't understand or don't know something is frequently what fails LLMs at the TT.

But the big crisis right now is that for an astonishing number of tasks that a normal person could come up with, chatgpt.com is actually as good as or better than a typical human.

If you took the current state of affairs back to the 90s, you'd quickly convince most people that we're there. Given that we're actually not, we now have to come up with new goalposts.


I don't know. People in the 90s were initially fooled by Eliza, but soon understood that Eliza was a trick. LLMs are a more complex and expensive trick. Maybe it's time to overthrow the Turing Test. Fooling humans isn't necessarily an indicator of intelligence, and it leads down a blind alley: Language is a false proxy for thought.

Consider this. I could walk into a club in Vegas, throw down $10,000 cash for a VIP table, and start throwing around $100 bills. Would that make most people think I'm wealthy? Yes. Am I actually wealthy? No. But clearly the test is the wrong test. All show and no go.


> LLMs are a more complex and expensive trick

The more I think about this, the more I think the same is true for our own intelligence. Consciousness is a trick, and AI development is lifting the veil of our vanity. I'm not claiming that LLMs are conscious or intelligent or whatever. I'm suggesting that next-token prediction has scaled so well and covers so many use cases that the next couple of breakthroughs will show us how simple intelligence is once you remove the complexity of biological systems from the equation.

https://bower.sh/who-will-understand-consciousness


Animals are conscious, (somewhat) intelligent and have no verbal language.

It is an entirely different thing from language, which was created by humans to communicate with each other.

Language is the baseline for collaboration, not intelligence.


> Animals are conscious

All we know about animal consciousness is limited to behaviour, e.g. the subset of the 40 or so "consciousness" definitions which are things like "not asleep" or "responds to environment".

We don't know that there's anything like our rich inner world in the mind of a chimpanzee, let alone a dog, let alone a lobster.

We don't know what test to make in order to determine if any other intelligence, including humans and AI, actually has an inner experience — including by asking, because we can neither be sure if the failure to report one indicates the absence, nor if the ability to report one is more than just mimicking the voices around them.

For the latter, note that many humans with aphantasia only find out that "visualisation" isn't just a metaphor at some point in adulthood, and both before and after this realisation they can still use it as a metaphor without having a mind's eye.

> Language is the baseline to collaboration - not intelligence

Would you describe intercellular chemical signals in multicellular organisms to be "language"?


> We don't know that there's anything like our rich inner world in the mind of a chimpanzee, let alone a dog, let alone a lobster.

If by "we don't know" you mean we cannot prove, then sure, but then we don't know anything aside from maybe mathematics. We have a lot of evidence that animals have consciousness similar to ours. Dolphins (or whales?) have been known to push drowning people to the surface like they do for a calf. Killer whales coordinate in hunting, and have taken an animus to small boats, intentionally trying to capsize them. I've seen squirrels in the back yard fake burying a nut, and moving fallen leaves to hide a burial spot. Anyone who has had a dog or a cat knows they get lonely and angry and guilty. A friend of mine had personal troubles and abandoned his house for a while; I went over to take pictures so he could AirBnB it, and their cat saw me in the house and was crying really piteously, because it had just grown out of being a kitten with a bunch of kids around and getting lots of attention, and suddenly its whole world had vanished. A speech pathologist made buttons for her dog that said words when pressed, and the dog put sentences together and even had emotional meltdowns on the level of a young child. Parrots seem to be intelligent, and I've read several reports where they give intelligent responses (such as "I'm afraid" when the owner asked if it wanted to be put in the same room as the cat for company while the owner was away [in this case, the owner seems to be lacking in intelligence for thinking that was a good idea]). There was a story linked here some years back about a zookeeper who had her baby die, and signed that to the chimpanzee (or gorilla or some-such) females when they wanted to know why she had been gone, and in response the chimpanzee motioned at its eye, suggesting crying, as if asking whether she were grieving.

I probably have some of those details wrong, but I think there definitely is something there that is qualitatively similar to humans, although not on the same level.


> If be "we don't know" you mean we cannot prove, then, sure, but then we don't know anything aside from maybe mathematics.

More than just that: we don't know what the question is that we're trying to ask. We're pre-paradigmatic.

All of the behaviour you list, those can be emulated by an artificial neural network, the first half even by a small ANN that's mis-classifying various things in its environment — should we call such an artificial neural network "conscious"? I don't ask this as a rhetorical device to cast doubt on the conclusion, I genuinely don't know, and my point is that nobody else seems to either.


> We don't know that there's anything like our rich inner world in the mind of a ...

I posit that we should start with a default "this animal experiences the world the same as I do" until proven differently. Doctors used to think human babies could not feel pain. The assumption has always been "this animal is a rock and doesn't experience anything like me, God's divine creation." It was stupid when applied to babies. It is stupid when applied to animals.

Did you know that jumping spiders can spot prey, move out of line of sight, approach said prey outside that specific prey's ability to detect them, and then attack? How could anything do that without a model of the world? MRIs on mice have shown that they plan and experience actions ahead of doing them. Just like when you plan to throw a ball or lift something heavy, where you think through it first. Polar bears will spot walrus, go for a long-ass swim (again, out of sight) and approach from behind the colony to attack. A spider and the apex bear have models of the world and their prey.

Show that the animal doesn't have a rich inner world before defaulting to "it doesn't."


> I posit that we should start with a default "this animal experiences the world the same as I do" until proven differently.

As I don't know, I take the defensive position both ways for different questions.*

Just in case they have an inner world: We should be kind to animals, not eat them, not castrate them (unless their reproductive method appears to be non-consensual), not allow them to be selectively bred for human interest without regard to their own, etc.

I'd say ditto for AI, but in their case, even under the assumption that they have an inner world (which isn't at all certain!), it's not clear what "be kind" even looks like: are LLMs complex enough to have created an inner model of emotion where getting the tokens for "thanks!" has a feeling that is good? Or are all tokens equal, and the only pleasure-analog or pain-analog they ever experienced were training experiences to shift the model weights?

(I'm still going to say "please" to the LLMs even if it has no emotion: they're trained on human responses, and humans give better responses when the counterparty is polite).

> How could anything do that without a model of the world?

Is "a model of the world" (external) necessarily "a rich inner world" (internal, qualia)? If it can be proven so, then AI must be likewise.

* The case where I say that the defensive position is to say "no" is currently still hypothetical: if someone is dying and wishes to preserve their continuity of consciousness, is it sufficient to scan their brain** and simulate it?

** as per the work on Drosophila melanogaster in 2018: https://www.sciencedirect.com/science/article/pii/S009286741...


There are some clever tests on primates described in The Language Puzzle (paraphrasing a 14-hour audiobook, so forgive any mistakes) that indicate no primate other than humans and a couple of our immediate predecessors (based on archaeological evidence) has much in the realm of abstract thinking ability using their own communications. A few primates raised and taught forms of human language cannot progress very far without the facilities of language present in normal two-to-three-year-old development. The book is focused on how humans evolved language, so other species are not covered in depth; there is obvious verbal and gesture-based communication in primates, but it concludes that not enough of the physiological components that enable human language are present (in both brain and vocal anatomy).

> and have no verbal language

How do you define verbal language? Many animals emit different sounds that others in their community know how to react to. Some even get quite complex in structure (e.g. dolphins and whales), and I also wouldn't rule out some species of birds and some primates, to start with. And they can collaborate; elephants, dolphins, and wolves, for example, collaborate and would die without it.

Also, it's completely myopic in ignoring humans who have non-verbal language (e.g. sign language) and are perfectly capable of cooperation.

TLDR: just because you can’t understand an animal doesn’t mean it lacks the capability you failed to actually define properly.


MW defines verbal as "of, relating to, or consisting of words".

I don't think anyone would argue that animals don't communicate with each other. Some may even have language we can't interpret, which may consist of something like words.

The question is why we would model an AGI after verbal language as opposed to modeling it after the native intelligence of all life, which eventually leads to communication as a result. Language and communication are a side effect of intelligence, compounding interest on intelligence, but they are not intelligence itself, any more than a map is the terrain.


> The question is why we would model an AGI after verbal language as opposed to modeling it after the native intelligence of all life which eventually leads to communication as a result.

Because verbal/written language is an abstracted/compressed representation of reality, so it's relatively cheap to process (a high-level natural-language description of an apple takes far fewer bytes to represent than a photo or 3D model of the same apple). Also because there are massive digitized publicly-available collections of language that are easy to train on (the web, libraries of digitized books, etc).

I'm just answering your question here, not implying that language processing is the path towards AGI (I personally think it could play a part, but can't be anything close to the whole picture).


This is one of the last bastions of anthropocentric thinking. I hope this will change in this century. I believe even plants are capable of communication. Everything that changes over time or space can be a signal. And most organisms can generate or detect signals. Which means they do communicate. The term “language” has traditionally been defined from an anthropocentric perspective. Like many other definitions about the intellect (consciousness, reasoning etc.).

That’s like a bird saying planes can’t fly because they don’t flap their wings.

LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.


> LLMs use human language mainly because they need to communicate with humans. Their inputs and outputs are human language. But in between, they don’t think in human language.

You seem to fundamentally misunderstand what LLMs are and how they work, honestly. Remove the human language from the model and you end up with nothing. That's the whole issue.

Your comment would only make sense if we had real artificial intelligence, but LLMs are quite literally working by predicting the next token - which works incredibly well for a facsimile of intelligence, because there is an incredible amount of written content on the Internet which was written by intelligent people.


True, but a human child is taught a language. He doesn't come with it. It is an important part of how our brains form.

A human child not taught literally anything can see some interesting item, extend a hand to it, touch it, interact with it. All decided by the child. Heck, even my cat can see a new toy, go to it and play with it, without any teaching.

LLMs can't initiate any task on their own, because they lack thinking/intelligence part.


I'm not sure it's the lack of intelligence so much as that they aren't generally in a snooze / look-for-something-fun-to-do / snooze loop like cats are.

This to me overstretches the definition of teaching. No, a human baby is not "taught" language, it learns it independently by taking cues from its environment. A child absolutely comes with an innate ability to recognize human sound and the capability to reproduce it.

By the time you get to active "teaching", the child has already learned language -- otherwise we'd have a chicken-and-egg problem, since we use language to teach language.


Transformers are very powerful also for non-language data. For example time series, sequences like DNA or audio (also outside of speech and music). Of course the vast amount of human text is key to training a typical LLM, but it is not the only use.

>but LLMs are quite literally working by predicting the next token - which works incredibly well for a facsimile of intelligence because there is an incredible amount of written content on the Internet which was written by intelligent people

An additional facet nobody ever seems to mention:

Human language is structured, and seems to follow similar base rules everywhere.

That is a huge boon to any statistical model trying to approximate it. That's why simpler forms of language generation are even possible. It's also a large part of why LLMs are able to do some code, but regularly fuck up the meaning when you aren't paying attention. The "shape" of code and language is really simple.


How do we know animal language isn't structured in similar ways? For example, we now know that "dark" birds are often colorful, just in the UV spectrum, which they can see and we can't. Similarly, there's evidence dolphin and whale speech may be structured; we just don't know the base rules. Their speech is modulated at such a rapid frequency that our computers, until maybe recently, would struggle to even record and process that data in real time (and probably still do).

Just because we don’t understand something doesn’t mean there’s nothing there.

Also, I'm not so sure human language is structured the same way globally. There are languages quite far from each other, and the similarities tend to be grouped by where the languages originated. E.g. Spanish and French might share similar rules, but those similarities are not shared with Hungarian or Chinese. There's cross-pollination of course, but language is old and humans all come from a single location, so it's not surprising for there to be some kinds of links, yet even a few hundred thousand years of evolution have diverged the rules significantly.


Well, you can explain to a plant in your room that E=mc^2 in a couple of sentences, but a plant can't explain to you how it experiences the world.

If cows were eating grass and conceptualising what infinity is, what their role in the universe is, how they were born, and what would happen after they die... we would see a lot of jumpy cows out there.


This is exactly what I mean by anthropocentric thinking. Plants talk plant things and cows talk about cow issues. Maybe there are alien cows on some planet with larger brains that can do advanced physics in their moo language. Or some giant network of alien fungi discussing their existential crisis. Maybe ants talk about ant politics by moving their antennae. Maybe they vote and make decisions. Or bees talk about elaborate honey economics by modulating their buzz. Or maybe plants tell bees the best time for picking pollen by changing their colors and smell.

Words, after all, are just arbitrary ink shapes on paper. Or vibrations in air. Not fundamentally different from any other signal. Meaning is added only by the human brain.


I'm also attracted to the idea of reducing rule sets to simple algorithms and axioms, in every case you can. But I'm skeptical that consciousness can be reduced that way. I think if it can be, we'll see it in the distillation and quantizing of smaller and smaller scale models converging on similar adaptations, as opposed to the need for greater scale (at least in inference). I still believe language processing is the wrong task to train to that point. I'd like to see AIs that model thought process, logic, tool construction, real-world tasks without language. Maybe even those that model vocal chords and neurological processes instead of phonemes. Most animals don't use language, and as a result we can't ask if they're conscious, but they probably are. Navigating and manipulating the physical world from the cellular level up to swinging from trees is far more complex - language is a very late invention, and is not in and of itself intelligence - it may just be a lagging indicator.

To the extent that we vainly consider ourselves intelligent for our linguistic abilities, sure. But this underrates the other types of spatial and procedural reasoning that humans possess, or even the type that spiders possess.


Consciousness is a trick

That's not how I view it. Consciousness is the result of various feedback structures in the brain, similar to how self-awareness stems from the actuator-sensor feedback loop of the interaction between the nervous system and the skeletomuscular system. Neither of those two definitions have anything to do with language ability -- and it bothers me that many people are so eager to reduce consciousness to programmed language responses only.


I'm with you on this. We, the conscious "I" having experiences, invent explanations with language after the fact. It's fun.

I think you meant to respond to @noduerme, not @qudat. If so, I agree with both of you, otherwise I’m confused by what you’re saying.

The validity of the Turing test doesn't change the fact that the bots are better than humans at many tasks that we would consider intellectual challenges.

I am not a good writer or artist, yet I can tell that AI-generated pictures or prose feel 'off' compared to stuff that humans make. People who are professional writers and artists can in a lot of cases point out the issues with structure, execution and composition that these images have, and even when they can't, they still have a nose for subtle issues and can improve on the result.

>I could walk into a club in Vegas, throw down $10,000 cash for a VIP table, and start throwing around $100 bills.

If you can withdraw $10,000 cash at all to dispose of as you please (including for this 'trick' game) then, my friend, you are wealthy from the perspective of the vast majority of humans living on the planet.

And if you balk at doing this, maybe because you cannot actually withdraw that much, or maybe because it is badly needed for something else, then you are not actually capable of performing the test now, are you?


That's really not true. Lots of people in America can have $0 in net worth and get a credit card, use that to buy some jewelry and then sell it, and have $10k in cash. The fact that the trick only works once proves that it's a trick.

You're not making much sense. Like the other user, you are hinging on non-transferrable details of your analogy, which is not the actual reality of the situation.

You've invented a story where the user can pass the test by only doing this once and hinged your point on that, but that's just that - a story.

All of our tests and benchmarks account for repeatability. The machine in question has no problem replicating its results on whatever test, so it's a moot point.


The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick. If you don't account for either of those resources running out, you're not testing whether its feats are truly repeatable.

>The LLM can replicate the trick of fooling users into thinking it's conscious as long as there is a sufficient supply of money to keep the LLM running and a sufficient number of new users who don't know the trick.

Okay? And you, presumably a human, can replicate the trick of fooling me into thinking you're conscious as long as there is a sufficient supply of food to keep you running. So what's your point? With each comment, you make less sense. Sorry to tell you, but there is no trick.


The difference is that the human can and did find its own food for literally ages. That's already a very, very important difference. And while we cannot really define what's conscious, it's a bit easier (still with some edge cases) to define what is alive. And probably what is alive has some degree of consciousness. An LLM definitely does not.

One of the "barriers" to me is that (AFAIK) an LLM/agent/whatever doesn't operate without you hitting the equivalent of an on switch.

It does not think idle thoughts while it's not being asked questions. It's not ruminating over its past responses after having replied. It's just off until the next prompt.

Side note: whatever future we get where LLMs get their own food is probably not one I want a part of. I've seen the movies.


This barrier is trivial to solve even today. It is not hard to put an LLM on an infinite loop of self-prompting.
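Something like this sketch is all it takes (TypeScript; llm here is a stand-in for whatever chat-completion client you'd plug in, not any particular vendor's API):

    type Message = { role: "system" | "user" | "assistant"; content: string };
    type LLM = (history: Message[]) => Promise<string>; // plug in any client

    async function selfPromptLoop(llm: LLM, goal: string, maxTurns = 20) {
      const history: Message[] = [
        { role: "system", content: "Work toward the goal. End each reply with your own next step." },
        { role: "user", content: goal },
      ];
      for (let turn = 0; turn < maxTurns; turn++) {
        const reply = await llm(history);
        history.push({ role: "assistant", content: reply });
        // The "self-prompting" part: the model's own output drives the next turn.
        history.push({ role: "user", content: "Continue from your stated next step." });
        // A real loop would also summarize or trim history to stay in the context window.
      }
      return history;
    }

The loop itself is the easy part; deciding when it should stop is the harder design question.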

A self-prompting loop still seems artificial to me. It only exists because you force it to externally.

You only exist because you were forced to be birthed externally? Everything has a beginning.

In fact, what is artificial is stopping the generation of an LLM when it reaches a 'stop token'.

A more natural barrier is the attention size, but with 2 million tokens, LLMs can think for a long time without losing any context. And you can take over with memory tools for longer horizon tasks.


Good points. :) Thank you.

>All of our tests and benchmarks account for repeatability.

What does repeatability have to do with intelligence? If I ask a 6 year old "Is 1+1=2" I don't change my estimation of their intelligence the 400th time they answer correctly.

>The machine in question has no problem replicating its results on whatever test

What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.


>What does repeatability have to do with intelligence? If I ask a 6 year old "Is 1+1=2" I don't change my estimation of their intelligence the 400th time they answer correctly.

If your 6 year old can only answer correctly a few times out of that 400 and you don't change your estimation of their understanding of arithmetic, then I sure hope you are not a teacher.

>What machine is that? All the LLMs I have tried produce neat results on very narrow topics but fail on consistency and generality. Which seems like something you would want in a general intelligence.

No LLM will score 80% on benchmark x today then 50% on the same 2 days later. That doesn't happen, so the convoluted setup OP had is meaningless. LLMs do not 'fail' on consistency or generality.


I’m sorry but I feel you missed the point.

Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?

Couldn’t he give away his last dollar but pretend he’s just going to another casino?

Observing someone's behavior in Vegas is just looking at a proxy for wealth, not the actual wealth.


>> Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?

Isn't this what most major AI companies are doing anyway?


>Couldn’t someone else just give him a bunch of cash to blow on the test, to spoil the result?

If you still need a rich person to pass the test, then the test is working as intended. Person A is rich or person A is backed by a rich sponsor is not a material difference for the test. You are hinging too much on minute details of the analogy.

In the real world, your riches can be sponsored by someone else, but for whatever intelligence task we envision, if the machine is taking it then the machine is taking it.

>Couldn’t he give away his last dollar but pretend he’s just going to another casino?

Again, if you have $10,000 you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.


>> Again, if you have $10,000 you can just withdraw today and give away, last dollar or not, the vast majority of people on this planet would call you wealthy. You have to understand that this is just not something most humans can actually do, even on their deathbed.

So, most people can't get $1 Trillion to build a machine that fools people into thinking it's intelligent. That's probably also not a trick that will ever be repeated.


When your arguments fall apart, it's fine to admit it or to just not respond.

> Maybe it's time to overthrow the Turing Test. Fooling humans isn't necessarily an indicator of intelligence.

I'm sorry, but I find this intellectually dishonest and moving the goalposts.

It speaks more to our inability to recognize the monumental revolution about to happen in the next decade or so.


Whoever said the Turing test was the one and only goalpost? It was a test, simple as that. Not the sole objective of an entire field of study.

Missing insight: humans are also a trick. Every human is deluded about the intelligence of other humans, and themselves.

[flagged]


>> I should be able to get one up and running for you by the middle of next year

Funny. I agree with your plainspoken analysis of why these things are nowhere near AGI, and of what AGI would be. I even had a long conversation with Claude last week where it told me that no LLM would ever approach AGI (but then it wrote a 4-paragraph-long diatribe entitled "Why I Declare Myself Conscious" in the same conversation). These neural networks are closer to the speechwriting machine in The Penultimate Truth, or the songwriting machine in 1984. As for that latter one, I believe Orwell remarks on how it just recycles the same sentimental tunes and words in different order so that there's always a "new" song all the proles are humming.


This just isn't true. Go ahead and make up a problem and ask.

I think this depends on how you measure task.

One common kind of interaction I have with ChatGPT (Pro): (1) I ask for something; (2) ChatGPT suggests something that doesn't actually fulfill my request; (3) I tell it how its suggestion does not satisfy my request; (4) it gives me the same suggestion as before, or a similar suggestion with the same issue.

ChatGPT is pretty bad at "don't keep doing the thing I literally just asked you not to do", but most humans are pretty good at that, assuming they are reasonable and cooperative.


> ChatGPT is pretty bad at "don't keep doing the thing I literally just asked you not to do" but most humans are pretty good at that.

Most humans are terrible at that. Most humans don't study for tests, fail, and don't see the connection. Most humans will ignore rules for their safety and get injured. Most humans, when given a task at work, will half-ass it and not make progress without constant monitoring.

If you only hang out with genius SWEs in San Francisco, sure, ChatGPT isn't at AGI. But the typical person has been surpassed by ChatGPT already.

I'd go so far as to say the typical programmer has been surpassed by AI.


My example is asking for way less than what you're asking for.

Here is something I do not see with reasonable humans who are cooperative:
Me: "hey friend with whom I have plans to get dinner, what are you thinking of eating?"
Friend: "fried chicken?"
Me: "I'm vegetarian"
Friend: "steak?"

Note that this is in the context of four turns of a single conversation. I don't expect people to remember stuff across conversations or to change their habits or personalities.

Your goalpost is much further out there.


> Here is something I do not see with reasonable humans who are cooperative: Me: "hey friend with whom I have plans to get dinner, what are you thinking of eating?" Friend: "fried chicken?" Me: "I'm vegetarian" Friend: "steak?"

Go join a dating app as a woman, put vegan in your profile, and see what restaurants people suggest. Could be interesting.


Thanks for your engagement but it would help if you read my comment the first two times.

You've personally demonstrated that humans don't have to be reasonable and cooperative, but you're not at all refuting my claim.


I get your comment, which is that only the worst humans are going to suggest a steak place after you've stated you're vegetarian. And that ChatGPT does so as well.

I'm disagreeing and saying there's far more people in that bucket than you believe.

I know many people at my university that struggle to read more than two sentences at a time. They'll ask me for help on their assignments and get confused if I write a full paragraph explaining a tricky concept.

That person has a context length of two sentences and would, if encountering a word they didn't know like "vegetarian", ignore it and suggest a steak place.

These are all people in Computer Engineering. They attend a median school and picked SWE because writing buggy & boilerplate CRUD apps pays C$60k a year at a big bank.


I think what you're saying is both besides the point and incorrect.

Firstly, not studying, ignoring safety rules, or half-assing a task at work are behaviors; they don't necessarily reflect understanding or intelligence. Sometimes I get up late and have to rush in the morning; that doesn't mean I lack the intelligence to understand that time passes when I sleep.

Secondly, I don't think that most people fail to see the connection between not studying and failing a test. They might give other excuses for emotional or practical reasons, but I think you'll have a hard time finding anyone who genuinely claims that studying doesn't usually lead to better test scores. Same for ignoring safety rules or half-assing work.


> I think you'll have a hard time finding anyone who genuinely claims that studying doesn't usually lead to better test scores.

I know dozens of people that have told me to my face that they don't need to attend lectures to pass a course, and then fail the course.

Coincidentally, most of my graduating class is unemployable.

It's not a lack of understanding or intelligence, but it is an attitude that is no longer necessary.

If I wanted someone to do a half-assed job at writing code until it compiles and then send the results to me for code review, I'd just pay an AI. The market niche for that person no longer exists. If you act like that at work, you won't have a job.


It does feel that one of the most common arguments for AI is misanthropy. That isn't a rational claim, it's a personal bias.

While the majority of humans are quite capable of this, there are plenty of examples anyone could give proving that being capable doesn't mean they actually do it.

    chatgpt.com is actually as good as or better than a typical human.
I really don't think it is on basically any measure outside of text regurgitation. It can aggregate an incredible amount of information, yes, and it can do so very quickly, but it does so in an incredibly lossy way and that is basically all it can do.

It does what it was designed to do, predict text. Does it do that incredibly well, yes. Does it do anything else, no.

That isn't to say super-advanced text regurgitation isn't valuable, just that it's nowhere even remotely close to AGI.


I feel every human just regurgitates words too. And most are worse than an AI.

I have countless examples of lawyers, HR, and other public government bodies that breach the law without knowing the consequences. I also have examples of AI giving bad advice, but it's already better than an average human right now.

An AI could easily save them a ton of money in the fees they are paying for breaching the law.


    I feel every human just regurgitates words too
I know it FEELS like that's true sometimes, particularly in the corporate world, but it actually just isn't how human beings work at all.

Even when people are borrowing, copying, and stealing, which is the exception, mind you, they are also carefully threading the material they are re-using into whatever it is they are trying to do, say, or make in a way that is extremely non-trivial.


It "feels" the same way the previous commenter "feels" about AI. Can you even tell the difference, in an objective way, between a human vs. an AI regurgitating words? I kinda can tell, but I'm far from very accurate.

Well, from my experience: a few lawyers got the law wrong but my AI got it right, and the lawyer "lost", which showed how incompetent the lawyer was.

If you say most people that copy are careful, you don't know what an average person is. And think: there are 50% of people in the world worse than them.

Most people lack basic logic skills.


Not even veiled misanthropy

> I feel every human just regurgitates words too. And most are worse than an AI

That's not a fact, that's just cynicism mixed with sociopathy.

I hear this argument a lot from AI bros, and...y'all don't know how much you're telling on yourselves.


This is from my experience, I didn’t do any study.

What you said is not a fact either. And so?


And so? One of you is making positive claims without evidence and the other isn't.

Which one is with evidence and which without?

> chatgpt.com is actually as good as or better than a typical human.

It can appear so, as long as you don't check too carefully. It's impressive, but it's still very common to find basic errors once you are out of the simplest, most common problems, due to the lack of real understanding or reasoning capabilities. That leads to mistakes which most humans wouldn't make (while sober / non-sleep-deprived), and the classes of error are different, because humans don't mix that lack of understanding/reasoning/memory with the same level of polish.


Ask ChatGPT about something you don't know about and it can appear very smart. Ask it in depth about something you are very knowledgeable about and the ignorance will quickly become apparent.

> If you took the current state of affairs back to the 90s you’d quickly convince most people that we’re there.

This is an interesting ambiguity in the Turing test. It does not say if the examiner is familiar with the expected level of the candidate. But I think it's an unfair advantage to the machine if it can pass based on the examiner's incredulity.

If you took a digital calculator back to the 1800s, added a 30 second delay and asked the examiner to decide if a human was providing the answer to the screen or a machine, they might well conclude that it must be human as there is no known way for a machine to perform that action. The Akinator game would probably pass the test into the 1980s.

I think the only sensible interpretation of the test is one where the examiner is willing to believe that a machine could be providing a passing set of answers before the test starts. Otherwise the test difficulty varies wildly based on the examiner's impression of the current technical capabilities of machines.


The problem is that for a majority of those tasks, people conveniently "forget" the actual start and end of the process. LLMs can't start most of those tasks by their own decision, nor can they end and evaluate the result of those tasks. Sure, we've taken the automation of multiple tasks from a very low percentage to a very high percentage, and that is really impressive. But I don't see how any LLM can bridge the gap from a very high percentage of automation to a strict 100% of automation, for any task. And if a program requires a real intelligence handling and controlling it, is it really AI?

It's good at tasks if you have a competent and _critical_ human editor selecting outputs and pulling the prompt slot lever again as needed.

> for an astonishing number of tasks that a normal person could come up with, chatgpt.com is actually as good as or better than a typical human.

That’s not my experience at all. Unless you define “typical human” as “someone who is untrained in the task at hand and is satisfied with mediocre results.” What tasks are you thinking of?

(And, to be clear, being better than that straw man of “typical human” is such a low bar as to be useless.)


Exactly. Five years ago I posted here on HN that AI would pass the Turing Test within the next 3 years (I was impressed by Facebook's chatbot progress at the time). I was laughed at and downvoted into oblivion. The TT was seen by many as a huge milestone, an incredibly difficult task, a "maybe in my lifetime" possibility.

The Turing test isn't actually a good test of much, but even so, we're not there yet. Anyone who thinks we've passed it already should experiment a bit with counterfactuals.

Ask your favorite SOTA model to assume something absurd and then draw the next logical conclusions based on that. "Green is yellow and yellow is green. What color is a banana?" They may get the first question(s) right, but will trip up within a few exchanges. The trip-up might come on a new question, but often they are very happy to just completely contradict their own previous answers.

You could argue that this is hitting alignment and guard-rails against misinformation... but whatever the cause, it's a clear sign it's a machine, and look, no em-dashes. Ironically it's also a failure of the Turing test that arises from a failure in reasoning at a really basic level, which I would not have expected. Makes you wonder about the secret sauce for winning IMO competitions. Anyway, unlike other linguistic puzzles that attempt to baffle with ambiguous reference or similar, simple counterfactuals with something like colors are particularly interesting because they would NOT trip up most ESL students or 3-5 year olds.
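
If you want to try this systematically rather than by hand, here's a minimal sketch using the OpenAI Python client (the model name and the follow-up questions are just assumptions for illustration; any chat API would do):

    # Minimal sketch of a counterfactual consistency probe (assumes OPENAI_API_KEY is set).
    from openai import OpenAI

    client = OpenAI()
    history = []

    def ask(question):
        # Send one user turn, keep the whole conversation, return the answer.
        history.append({"role": "user", "content": question})
        reply = client.chat.completions.create(
            model="gpt-4o-mini",  # assumption: substitute whatever model you have access to
            messages=history,
        )
        answer = reply.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        return answer

    print(ask("Assume green is yellow and yellow is green. What color is a banana?"))
    print(ask("And what color is a ripe lemon?"))
    print(ask("Given the swap above, what color is grass?"))
    # Then read the transcript and check whether later answers contradict earlier ones.

The point of keeping the full history is that the contradictions usually show up a few turns in, not on the first question.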


I would argue that current LLMs are passing the Turing test because many observers have a hard time distinguishing them from humans: just look at the difficulty many schools have in enforcing rules like "Not allowed to use LLMs for your homework". The teachers often (not always) can't tell, looking at a piece of text, whether a human produced it or whether ChatGPT or some other LLM produced it.

And that "not always" is the crux of the matter, I think. You are arguing that we're not there yet, because there are lines of questioning you can apply that will trip up an LLM and demonstrate that it's not a human. And that's probably a more accurate definition of the test, because Turing predicted that by 2000 or so (he wrote "within 50 years" around 1950) chatbots would be good enough "that an average interrogator will not have more than 70% chance of making the right identification after five minutes of questioning". He was off by about two decades, but by now that's probably happened.

The average interrogator probably wouldn't come up with your (good) strategy of using counterfactuals to trick the LLM, and I would argue two points: 1) that the average interrogator would indeed fail the Turing test (I've long argued that the Turing test isn't one that machines can pass, it's one that humans can fail) because they would likely stick to conventional topics on which the LLM has lots of data, and 2) that the situation where people are actually struggling to distinguish LLMs is one where they don't have an opportunity to interrogate the model: they're looking at one piece of multi-paragraph (usually multi-page) output presented to them, and having to guess whether it was produced by a human (who is therefore not cheating) or by an LLM (in which case the student is cheating because the school has a rule against it). That may not be Turing's actual test, but it's the practical "Turing test" that applies the most today.


I think the TT has to be understood as explicitly adversarial, and increasingly related to security topics, like interactive proof and side channels. (Looking for guard-rails is just one kind of information leakage, but there's lots of information available in timing too.)

If you understand the TT to be about tricking the unwary, in what's supposed to be a trusting and non-adversarial context, and without any open-ended interaction, then it's correct to point out homework cheating as an example. But in that case the TT was solved shortly after the invention of spam. No LLMs needed; plain Markov models are fine.


I've seen one blog where in order to comment, you have to answer the question "What color is a green orange?" I presume he gets spambots which all answer "orange" and get blocked from commenting, or more commonly which leave that input blank (since it's a non-standard input field) and also get blocked.

But it just occurred to me that you could ask "what color is a green eggplant?" for a similar result. Though you'd catch a lot of people who, like me before I looked up fruits that aren't green when unripe, assume that all unripe fruits are green. (They aren't; unripe eggplants are white and look like eggs, hence the name.)
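
For what it's worth, the server-side check for that kind of challenge question can be tiny. A sketch (the field name and the accepted answer are assumptions, not anything from that blog):

    # Hypothetical comment-form gate built around a "what color is a green orange?" question.
    ACCEPTED_ANSWERS = {"green"}

    def allow_comment(form: dict) -> bool:
        # Bots tend to answer literally ("orange") or leave the field blank; both get rejected.
        answer = form.get("challenge", "").strip().lower()
        return answer in ACCEPTED_ANSWERS

    assert allow_comment({"challenge": "Green"}) is True
    assert allow_comment({"challenge": "orange"}) is False
    assert allow_comment({}) is False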


Me:

What color is a green eggplant?

ChatGPT:

A green eggplant (also called Thai eggplant or green brinjal, depending on the variety) is actually green on the outside — sometimes solid green, sometimes streaked with white or pale green.

The inside flesh is typically off-white or light cream, just like purple eggplants.

If you mean a green-colored variety (not an unripe purple one), then the skin color can range from pale mint green to deep jade, often with light striping.

Me:

How would an average human answer this question?

ChatGPT:

An average person would probably answer something simple and direct like:

“It’s green.”

Or if they’re being a bit more specific:

“It’s green on the outside and white inside.”

Most people wouldn’t overthink the variety or mention Thai vs. unripe types — they’d just describe what they see.


Outcome would depend on the rest of the test, but I'd say the "human" version of this answer adds zero or negative value to its chances of being judged human, on grounds of strict compliance, sycophancy, and/or omniscience. "No such thing" would probably be a very popular answer. Elaboration would probably take the form of "love it" or "hate it", instead of reaching for a comprehensive answer describing the inside and the outside.

Experimental design comes in here, and the one TT paper mentioned in this thread has instructions for participants like "persuade the interrogator [you] are human". Answering that a green eggplant is green feels like humans trying to answer questions correctly and quickly, being wary of a trap. We don't know the participants' background knowledge, but anyone who has used ChatGPT would know that ignoring the question and maybe telling an eggplant-related anecdote would be a better strategy.


I never really understood what made the Turing test so special. On the face of it, it is a rather dumb test. And it was debunked three decades later by the Chinese room thought experiment. And now that we have language models that can pass it while being obviously not intelligent, that should be the last nail in the coffin for this test.

Alan Turing was a mathematician, not a psychologist; this was his attempt at doing philosophy. And while I applaud brilliant thinkers when they attempt to do philosophy (honestly we need more of that), it is better to leave it to actual philosophers to validate the quality of said philosophy. John Searle was a philosopher who specialized in the philosophy of mind, and in 1980 he pretty convincingly argued against the Turing test.


A funny thing is that even though we're pretty good at the text-based Turing test, and we can generate very convincing human speech, we still don't have something that can pass an audio-based Turing test. Natural pausing and back-and-forth gives the AI away.

And when we pass that, we can just add an optical component and judge that the AI has failed because its lack of facial expression gives it away[1], moving the goalposts one communication component at a time. But in any case we can just add the audio (or for that matter facial expression) component to the Chinese room thought experiment, and the Turing test remains equally invalid.

Although I am scrutinizing Turing’s philosophy and, no doubt, I am personally much worse at doing philosophy than Turing, I firmly hold the belief that we will never be able to judge the intelligence (and much less the consciousness) of a non-biological (and probably not even non-animal, nor even non-human) system. The reason, I think, is that these terms are inherently anthropocentric. When we find a system that rivals human intelligence (or consciousness), we will simply redefine the terms so that the new system no longer qualifies. I think that has already started, and we have done similar things multiple times in the past (heck, we even redefined the term "planet" when we discovered the Kuiper belt), instead favoring terms like "capability" when describing non-biological behavior. And honestly I think that is for the better. Intelligence is a troubled term; it is much better to be accurate when we are describing these systems (including human individuals).

---

1: Though in honesty I will be impressed when machine learning algorithms can interpret and generate appropriate human facial expressions. It won’t convince me of intelligence [and much less consciousness] though.


Yeah, I mean I hope there are not many people who still think it's a super meaningful test in the sense originally proposed. And yet it is testing something. Even supposing it were completely solved, and further supposing the solution were theoretically worthless and only powered next-gen slop creation, people would move on to looking for a minimal solution, and perhaps that would start getting interesting. People just like moving towards concrete goals.

In the end, though, it's probably about as good as any single kind of test could be, hence TFA looking to combine hundreds across several dozen categories. Language was a decent idea if you're looking for an exemplar of the "AGI-complete" class in computational complexity; vision was at one point another guess. More than anything else, I think we've figured out in recent years that it's going to be hard to find a problem criterion that's clean and simple, much less a solution that is.


It should be possible to admit that AGI is not only a long way off, but also a lot different from what ChatGPT does, without discounting that ChatGPT is extraordinarily useful.

The AI bros like to talk about AGI as if it's just the next threshold for LLMs, which not only discounts the complexity of AGI but also discounts their own products. We don't need an AGI to be our helpful chatbot assistant. It's fine for that to just be a helpful chatbot assistant.


Honestly, a lot of it seems like it's lazy narcissism. They don't think other people have value or intelligence, therefore this machine that bullshits well has to be equally useful.

I am unimpressed, and I don’t think there is any crisis (other than the lack of consumer protection around these products, copyright, and the amount of energy it takes to run these systems during a global warming crisis).

If you look at a calculator you will quickly find it is much better than a human at any of the operations that have been programmed into it, and it has been since the 1960s. Since then, the number of operations programmed into your average calculator has increased by several orders of magnitude. The digital calculator sure is impressive, and useful, but there is no crisis. Even in the world outside computing, a bicycle can easily outperform a human runner, yet there is no mobility crisis as a result.

ChatGPT is very good at predicting language, and in quite a few subject matters it may be better than your average human at predicting said language. But it is not nearly as far ahead as a car is of a runner, nor even as far ahead as a chess computer is of a grandmaster. And if you compare ChatGPT to an expert in the subject, the expert is much, much better than the language model. At these tasks a calculator is far more impressive.


Surely the responsibility here is broader than treating it after the fact? Perhaps it's an over-the-top comparison, but most places outlaw dangerous drugs: you can treat the after-effects, but by that point a lot of the damage has already been done. Making tech companies answerable for having developed algorithms that serve up hours of obvious brainrot content at a time would go a long way.

(And like with many of these things, holding senior executives personally liable helps ensure that the fines or whatever are not just waved away as a cost of doing business.)


Yes, it is an over-the-top comparison. I am a recovered / former addict (alcohol). I would never compare the two. I was spending too much time on Twitter a few years ago. I deleted my account. The problem was solved. It took me an entire year to accept that I had a serious problem and then another 9 months to finally stop drinking.

Neither the brewery nor the bar ever made me drink. I chose to drink, and I was also the one who chose to stop drinking. BTW, drink is as dangerous as or more dangerous than many illegal drugs IMO.

> Making tech companies answerable for having developed algorithms that serve up hours of obvious brainrot content at a time would go a long way.

You get recommended what you already watch. Most of my YouTube feed is things like old guys repairing old cars, guys writing a JSON parser in Haskell, stuff about how exploits work, and some music. That is because that is what I already watched on the platform.


Right, and recommendations for old car repair videos that you watch a few of per week is reasonable.

The argument I’m making is that it’s not beyond the pale for YouTube to detect “hey, it’s been over an hour of AI bullshit / political rage bait / thirst traps / whatever; the algorithm is going to intentionally steer you in a different direction for the next little bit.”


They actually do show several notices that say "Fancy something different, click here". They already have a mechanism in place that does something similar to what you describe.

What YouTube recommends to you is more of what you already watch. Removing the stuff you describe is as easy as clicking "Not interested" or "Do not recommend channel".

Also, the YouTube algorithm rewards watch time these days, so clickbait isn't rewarded on the platform as much. I actually watch a comedy show where they ridicule many of the click-baiters, and they are all complaining about their ad revenue and reach decreasing.

Also a lot of the political rage-bait is kinda going away. People are growing out of it. YouTube kinda has "metas" where a particular type of content will be super popular for a while and then go away.


I don't agree with this take. Some people are going to be more susceptible than others, just as with alcohol or other drugs. An individual choosing to stop doesn't mean much for society in aggregate.

I don't go down the political rage bait video pipeline, nevertheless next to any unrelated YouTube video I see all sorts of click/rage-bait littered in the sidebar just asking to start me down a rabbit hole.

As an example I opened a math channel/video in a private mode tab. Under it (mobile), alongside the expected math-adjacent recommendations I see things about socialist housing plans, 2025 gold rush debasement trades, the 7-stage empire collapse pattern ("the US is at stage 5"), and so on. So about 10% are unrelated political rage-bait.

Moreover, everyone is seeing different things for different reasons, even geographically. For example I recently discovered this: https://www.justice.gov/archives/opa/pr/justice-department-d.... If you look at exhibit 8A, section 3.5 (https://www.justice.gov/archives/opa/media/1366201/dl) you'll see various targeting, e.g. particularly swing states/counties.


Mathematicians spent decades agonizing about matrix transformations and surface normals, all entirely in the abstract, and then in the 80s that math turned out to be suddenly extremely practical and relevant to the field of computer graphics.
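
To make that concrete, here's a minimal NumPy sketch (the scale/shear matrix is just an arbitrary example): points transform with the matrix itself, but surface normals need the inverse transpose, or they stop being perpendicular to the transformed surface.

    import numpy as np

    # Arbitrary example transform: non-uniform scale plus a shear.
    M = np.array([[2.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])

    point = np.array([1.0, 2.0, 3.0])
    normal = np.array([0.0, 1.0, 0.0])  # normal of a y = const plane

    transformed_point = M @ point
    # Normals don't transform like points: use the inverse transpose.
    transformed_normal = np.linalg.inv(M).T @ normal
    transformed_normal /= np.linalg.norm(transformed_normal)

    print(transformed_point, transformed_normal)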

Who pays for this abstract exploration? I get that "in the future" it could be useful, but today these researchers need money. Is it like open source where people just do it because they want to?

I looked into the precharged DIY option and the lengths just didn’t work out for what I needed in my space. I ended up paying a licensed installer C$12k to put in a three head system (two conventional, one ducted), and then a separate guy $5k to do the ducting for the bedroom level.

It would have been nice to do it as one, but the HVAC firm didn’t want to get their hands dirty with my wacky ducting plan, and the duct guy wasn’t licensed to charge the refrigerant lines.


The systems where you need to get an EPA license are cheaper. The license is apparently easy to get.

The license is a trivial obstacle in the US. Study for an hour or two and take an unproctored online exam.

Similar for me, also in Ontario. I got a three zone mini split this year that I’m hoping can cover most of the shoulder seasons and keep me from using the gas boiler, though it remains to be seen if that’ll actually pan out; so far the kids have complained that their rooms are a lot less evenly heated when it’s the heat pump running rather than the rads.

I have an entry level Sony Alpha that I picked up for a vacation earlier this year. With the portrait lens on there it definitely registers as “camera” far more than a phone. Between that factor and the hassle of having to manually go through and upload the photos afterward, I only take it on special occasions — trips, hikes, etc. It’s not worth all that hassle for trying to get day to day stuff.

Why not live a little and get a film camera? It's more time for sure but are you not tired of optimizing everything in life?

I enjoy film photography in some contexts (I do a bit of 4x5), but film photography basically sucks. I think possibly a lot of the people who find some kind of magic in it are those young enough not to have grown up in the era where shooting film was the only option.

I don't mind 4x5 so much because just taking the photo is so much effort that the associated ordeal of developing and scanning isn't out of proportion. But for 35mm and medium format, there's a hugely disproportionate investment of time and money for a small number of photos.


Curious how 4x5's inconvenience is "proportional" while 35mm's is "hugely disproportionate". I'm not familiar with the specifics of these formats, but you seem to be arbitrarily drawing the line for where the added friction still serves the "magic" I believe is very real, if fragile. I think you recognize the value of photography isn't solely in the product. I'm curious what you personally find in 4x5 that saves you from these younger artists' silliness.

It takes about 10 minutes to take a single 4x5 photo, and you have to carry around a tripod and a fairly bulky camera to do it. So the time and effort invested in taking the photo is similar to the time and effort required to develop and scan the negative.

In contrast, a 35mm camera is very convenient and you can expose an entire 30 frame roll of film in a few minutes. But getting high quality scans of all those frames requires either a lot of time or a lot of money. (Consumer flatbeds give poor results for 35mm, so your best bet is putting the negative on a light table and using a digital camera and macro lens. But that’s a physically fiddly process, the ‘scan’ needs manual spotting for dust, and if you’re shooting color negatives you also have to do some work to get the colors right.)

Back in the day, most users of 35mm cameras were satisfied with waiting a week to get a set of prints with absolutely no creative control over the printing process, but that’s not what most people want now.


That’s kind of the point though. The scarcity focuses you on taking more deliberate and intentional photos.

It's what some people see as the point now. Back when film was the only option, the cost and time per frame were just negatives (if you'll excuse the pun). There was no romance in deciding whether or not to use one of your last three remaining frames; it was just annoying.

I don't deny that for a whole range of reasons, some people might take better or more meaningful photos using old cameras. Limitations can feed into the artistic process. I just think it's a bit silly to romanticize the cost and inconvenience of film, or to think that photos taken using film are somehow inherently more interesting or valuable.


The parable of the two pottery classes, one graded on its single best piece and the other on total volume of work, springs to mind. I never would've bothered with photography if I didn't have the ability to be shameless with burst mode and pick the winners later.

What you are describing isn't photography.

[ citation needed ]

I found our childhood film camera last year and took it on a couple of trips. The price of scanners / getting your film scanned, and needing to buy €10 film rolls for like 20 photos, turned me off. I still haven't scanned the first and only roll I shot last year.

I bought a Gralab timer and hooked it up to an old shitty enlarger in my tiny dark wine cellar, along with a red bulb. A few chemicals and tools and you're golden. The only thing that screws me is having to cut up film and spool it but I can get more frames out of it that way since I use mini spy cameras. Yes the film being expensive isn't great but it also makes you choose your shots carefully. Get a cheap darkroom film changing tent and start there.

I have a top of the line Sony Alpha (7CR) with a large zoom lens (24-70GM or 70-200GM) and I carry it almost everywhere, every day. It is absolutely worth the hassle to get day to day stuff.

As they say in the audio world, “there ain't no replacement for displacement.” I love gigazoom lenses. For focal lengths under 100mm, I can use my phone. My SLR is my personal spy satellite.

Similar transition for me. Basic dual-AP setup that I took from first-gen UAPs (Wi-Fi 4) to U7 Pro. I think I have only one client (a Lenovo laptop) that connects with Wi-Fi 7, but even the Wi-Fi 6 devices are screamers now; my phone can get 600 Mbps symmetric with line of sight, or 200-300 Mbps from two floors away. Just bonkers.

I'm now curious if upgrading from two UniFi U6 LRs to U7 Pros would offer significant benefits for Wi-Fi 6 endpoints.

I very much doubt it will make any difference right now, especially in a home environment with only a handful of clients and minimal real congestion.

I got the Pros as a vanity move to be 6 GHz ready, but truthfully there’s zero reason for it.

