
I'm confident you have not used Cursor Composer + Claude 3.5 Sonnet. I'd say the level of bugs is no higher than that of a typical engineer - maybe even lower.


There's no LLM for which that is true or we'd all be fired.


In my experience it is true, but only for relatively small pieces of a system at a time. LLMs have to be orchestrated by a knowledgeable human operator to build a complete system any larger than a small library.


In the long term, sure. Short term, when that happens, we're going to be running on Wile E. Coyote physics, keeping up until we look down and notice the absence of ground.


If all you bring to the table is the ability to reimplement simple web apps to spec, then sooner or later you probably will be fired.


It's only as good as its training data.

Step outside of building basic web/CRUD apps and its accuracy drops off substantially.

Also, almost every library it uses is old and insecure.


Yet most work seems to be CRUD-related, and most SaaS businesses starting up mainly need exactly those things.


That last point represents the biggest problem this technology will leave us with. Nobody's going to train LLMs on new libraries or frameworks when writing original code takes an order of magnitude longer than generating code for the 2023 stack.


With LLMs like Gemini, which have massive context windows, you can just drop the full documentation for anything into the context window. It dramatically improves the output.
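
As a rough sketch of that pattern (assuming the google-generativeai Python SDK; the API key, model name, file path, and prompt below are placeholders, not anything from this thread):

    import google.generativeai as genai

    # Placeholder key and model name; any long-context Gemini model works.
    genai.configure(api_key="YOUR_API_KEY")
    model = genai.GenerativeModel("gemini-1.5-pro")

    # Read the library's current documentation and prepend it to the question,
    # so the model answers against the real API instead of stale training data.
    with open("library_docs.md", "r", encoding="utf-8") as f:
        docs = f.read()

    prompt = (
        "Here is the complete documentation for the library:\n\n"
        + docs
        + "\n\nUsing only the API described above, write a function that ..."
    )

    response = model.generate_content(prompt)
    print(response.text)

The same idea works with any long-context model: load the up-to-date docs, prepend them to the prompt, then ask the question.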


I use phind, which runs searches to provide additional context.


I am confident you didn't understand my comment. I didn't say anything about "level of bugs".



