OK, so we are 100% in agreement then? I absolutely don't believe the marginal value of programmers will fall to zero by 2030 (though, to clarify, the way you phrased your original sentence, I thought an LLM had made that assertion, not some random VC dudes). I also highlighted in my posts that I use AI as an aid to my process: "That is, for me, the output of ChatGPT or other AI tools is the starting point of my investigation, not the end output. Yes, if you just blindly paste the output from an AI tool you're going to have a bad time, but we also standardize code reviews into the human code-writing process - this isn't that different."
Also, I think the coding example in that substack highlights that one of the most important characteristics of good programmers has always been clarifying requirements. I had to read the phrase "remove all ASCII emojis except the one for shrugs" a couple of times because it wasn't immediately clear to me what was meant by "ASCII emojis". I think this example also shows what happens when two "VC bros" who don't know what they're talking about praise the "clever" nature of what ChatGPT did, because it is totally wrong. Still, I'd easily bet that I could write a much clearer prompt, give it to ChatGPT, get better results, and still have it save me time in writing the boilerplate structure for my code. There's a sketch of what I mean below.
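For instance, a clearer spec might be "remove all non-ASCII characters, but leave the shrug kaomoji intact". Here's a minimal Python sketch of the kind of thing I'd expect back (the strip_non_ascii_keep_shrug name and the NUL-byte placeholder trick are just my own illustration, and it assumes the input never contains NUL bytes):

    SHRUG = r"¯\_(ツ)_/¯"  # not ASCII: contains U+00AF (macron) and U+30C4 (katakana TSU)

    def strip_non_ascii_keep_shrug(text: str) -> str:
        """Drop all non-ASCII characters, but leave the shrug kaomoji intact."""
        placeholder = "\x00SHRUG\x00"  # NUL is ASCII, so the placeholder survives the filter
        text = text.replace(SHRUG, placeholder)
        text = text.encode("ascii", errors="ignore").decode("ascii")
        return text.replace(placeholder, SHRUG)

    print(strip_non_ascii_keep_shrug("héllo ¯\\_(ツ)_/¯ wörld 🤷"))
    # prints: hllo ¯\_(ツ)_/¯ wrld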
You asked for an example and I provided one that I thought illustrated the mistakes GPT makes in a vivid way -- mistakes that are already leading people astray. The fact that this particular example was coupled with a silly prediction is just gravy.
In short, I don't know if we "agree", but I think OP is/was correct that GPT generates lots of subtle mistakes. I'd go so far as to say that the folks filling this thread with "I don't see any problems!" comments are probably revealing that they're not very critical readers of the output.
Now for a wild prediction of my own: maybe the rise of GPT will finally mean the end of these absurd leetcode interview problems. The marginal value of remembering leetcode solutions is falling to zero; the marginal value of detecting an error in code is shooting up. Completely different skills.
Getting back to that example from that post, though: thinking about it more, "remove all ASCII emojis except the one for shrugs" makes absolutely no sense, because you can't represent a shrug in ASCII at all, neither as the Unicode "person shrugging" emoji nor as the kaomoji version from that code sample, which uses a Japanese katakana character. So yes, asking an LLM a nonsensical question is likely to get you a nonsensical response, and it's important to know when you're asking a nonsensical question.
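For the record, a quick Python check (str.isascii() needs Python 3.7+) makes the contradiction concrete; neither form of the shrug survives an ASCII test:

    shrug_emoji = "\U0001F937"     # PERSON SHRUGGING (U+1F937)
    shrug_kaomoji = r"¯\_(ツ)_/¯"  # U+00AF (macron) and U+30C4 (katakana TSU)

    for s in (shrug_emoji, shrug_kaomoji):
        # both print False: neither is representable in ASCII
        print(repr(s), s.isascii())
        # list the offending code points (everything above 0x7F)
        print([hex(ord(c)) for c in s if ord(c) > 127])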
Well, explain it however you like, but the point is that GPT is more than happy to confidently emit gibberish, and if you don't know enough to write the code yourself (or you're outsourcing your thinking to it), then you're going to get fooled.
I'd possibly argue that knowing how to ask the right question is tantamount to knowing the answer.