AGI is the most successful scam in human history, one Sam Altman came up with to get the insane investment and hype they are getting. They have intentionally not defined what it is or when it will be achieved, making it a never-reachable goal that keeps the money flowing. "We will be there in a couple of years" and "this feels like AGI" were said at every fucking GPT release.
It's in every AI lab's best interest to keep this lie going.
They are not stupid; they know it can't be reached with the current state-of-the-art techniques, transformers, even with recent groundbreaking additions like reasoning, and I think we are not even close.
It's so much easier to build a mental model of a codebase with LLMs. You just ask specific questions about a subsystem and they point you at files and code snippets, explain the underlying idea, etc.
I only recently took the time to understand how the GIL actually works in CPython: I asked a couple of questions about it, and Claude showed me the relevant API and where to find it. I looked it up in the CPython codebase and all of a sudden it clicked.
The huge difference was that it cost me MINUTES. I hadn't even bothered to dig in before, because I can't read C perfectly, the CPython codebase is huge, and it would have taken me a really long time to understand everything.
Not even close; quite the opposite. An agentic tool can be fully autonomous, while an IDE like Cursor is, well, "just" an editor. Sure, it does some heavy lifting too, but the user still writes the code. They have started to add fully agentic tools and modes, but they work nowhere near as well as Claude Code does.
Putting the review into git notes might have worked better. It's not attached to the lines directly, but to the commit, and it can stay as part of the repo.
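For anyone who hasn't used them, a minimal sketch of the git-notes approach (the note text and remote name are illustrative, not from the original discussion):

```shell
# Demo in a throwaway repo; assumes plain git with no extra tooling
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "reviewer@example.com"
git config user.name "reviewer"
git commit -q --allow-empty -m "initial"

# Attach a review note to the commit; notes live under refs/notes/commits,
# not in the commit object itself, so history stays untouched
git notes add -m "review: consider extracting a helper here"

# Read it back; 'git log' also shows notes inline by default
git notes show HEAD

# Notes are a normal ref, so they can travel with the repo if shared explicitly:
#   git push origin refs/notes/commits
#   git fetch origin refs/notes/commits:refs/notes/commits
```

The catch is that notes refs are not pushed or fetched by default, so a team has to agree to sync them for the reviews to really "stay part of the repo".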
Not at all. Good documentation for humans works well for models too, but models need so much more detail and context than humans do to be reliable that it calls for a different style of description.
This needs to contain things that you would never write for humans.
They also do stupid things that need to be corrected through these descriptions.
I was thinking about this too, but the problem is that different models need to be prompted differently for better performance.
Claude is the best model for tool calling; less reliable models may need to be prompted differently. Prompt engineering is really hard, and a single shared context for all models will never be the best, IMO.
This is why Claude Code is so much better than any other agentic coding tool: they know the model very well, and an insane amount of prompt engineering went into it.
I tried GPT-5 with OpenCode, thinking it would be just as good, but it was terrible.
Model-specific prompt engineering makes a huge difference!
The original title is "Objects should shut the fuck up". I don't like unnecessary cursing either, but here it emphasizes his frustration, and it's cursing at objects, not people.
Renaming the title just loses information for no reason.