
A formalized form of this is the red-green-refactor pattern common in TDD.
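For anyone unfamiliar, the cycle is: write a failing test (red), make it pass with the simplest code (green), then clean up with the test as a safety net (refactor). A minimal sketch in Python; the slugify example is made up:

    # RED: write a failing test first
    def test_slugify_replaces_spaces():
        assert slugify("Hello World") == "hello-world"

    # GREEN: the simplest implementation that passes
    def slugify(text):
        return text.lower().replace(" ", "-")

    # REFACTOR: clean up under the protection of the test,
    # e.g. also collapse runs of whitespace
    import re

    def slugify(text):
        return re.sub(r"\s+", "-", text.strip()).lower()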

Self-created or formalized methods work, but they need habits or practices in place that prevent disengagement and complacency.

With LLMs there is the problem of automation bias, which affects almost all human endeavors.

Unfortunately that will only become more problematic as the tools improve, so make sure to stay engaged and skeptical. That is the only successful strategy I have found, with support from fields like human factors research.

NASA and the FAA are good sources of information if you want to develop your own practices.



This is what I came here to comment. I'm seeing this more and more on HN of all places: commenters essentially describing TDD as the way they use AI, without seeming to realize that is what they are doing. (Without the tests, though.)

Maybe I am more of a Leet coder than I think?


In my opinion TDD is antithetical to this process.

The primary reason is that what you are rapidly refactoring in these early prototypes/revisions is the meta-structure and the contracts.

Before AI, the cost of putting tests in place from the beginning, i.e. TDD, slowed your iteration speed dramatically.

In the early prototypes what you are figuring out is the actual shape of the problem, the best division of responsibilities, and how to fit the pieces together to match the vision for how the code will be required to evolve.

Now with AI, you can let the AI build test harnesses at little velocity cost, but TDD is still not the general approach.
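As an example of the kind of harness I mean, a cheap characterization test the AI can regenerate whenever the contracts shift (load_fixture and build_report are made-up names):

    import json

    from app.reports import build_report  # hypothetical module under test

    def load_fixture(name):  # hypothetical test helper
        with open(f"tests/fixtures/{name}") as f:
            return json.load(f)

    def test_report_snapshot():
        # Pins current behavior only; throw it away and
        # regenerate when the contract deliberately changes.
        expected = load_fixture("expected_report.json")
        assert build_report(load_fixture("orders.json")) == expected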


There are multiple schools of TDD; it sounds like you were exposed to the kind that aims for coverage rather than domain behavior.

Like any framework, they all have costs, benefits, places where they work, and places where they don't.

If you mean the schools of thought that target writing all the tests up front, even implementation-detail tests, without taking the time to figure out your inputs and expected outputs, I would agree with you.

But if you can stay focused on writing inputs vs. expected outputs, especially during a spike, I need to take prompt engineering classes from you.
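To make the distinction concrete, a rough sketch of the two styles (price_for and PriceCalculator are made-up names):

    from pricing import price_for, PriceCalculator  # hypothetical module

    # Behavior-focused: pins inputs to expected outputs and
    # survives refactoring of the internals
    def test_applies_bulk_discount():
        assert price_for(quantity=100, unit_price=2.00) == 180.00

    # Implementation-detail-focused: breaks the moment you
    # restructure, even if observable behavior is unchanged
    def test_bulk_discount_internals():
        calc = PriceCalculator()
        calc._load_discount_table()
        assert calc._discount_rate(100) == 0.10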


And you have to pay special attention to the tests written by LLMs. I catch them mocking things they shouldn't, claiming tests pass when they don't, etc.
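A typical over-mocking pattern, illustrative rather than from a real codebase, using pytest-mock's mocker fixture:

    from app.sync import sync_users  # hypothetical module under test

    # What the LLM tends to write: it mocks out every collaborator,
    # so nothing real is exercised and the test can only ever pass.
    def test_sync_users_vacuous(mocker):
        mocker.patch("app.sync.fetch_users", return_value=[])
        mocker.patch("app.sync.save_users")
        sync_users()

    # What you actually wanted: mock only the network edge and
    # assert on the real transformation logic.
    def test_sync_users(mocker):
        mocker.patch("app.sync.fetch_users",
                     return_value=[{"id": 1, "name": " Ada "}])
        saved = mocker.patch("app.sync.save_users")
        sync_users()
        saved.assert_called_once_with([{"id": 1, "name": "Ada"}])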


> With LLMs there is the problem of automation bias, which affects almost all human endeavors.

Yep, and I believe that one will be harder to overcome.

Nudging an LLM into the right direction of debugging is a very different skill from debugging a problem yourself, and the better the LLMs get, the harder it will be to consciously switch between these two modes.



