
Because when people use LLMs, they are getting the tool to do the work for them, not using the tool to do the work. LLMs are not calculators, nor are they the internet.

A good rule of thumb is to simply reject any work produced with the involvement of an LLM, and ignore any communication written by an LLM (even for EFL speakers, I'd much rather have your "bad" English than whatever ChatGPT says for you).

I suspect that as the serious problems with LLMs become ever more apparent, this will become standard policy across the board. Certainly I hope so.



Well, no, a good rule of thumb is to expect people to write good code, no matter how they do it. Why would you mandate what tool they can use to do it?


Because it pertains to the quality of the output - I can't validate every line of code, or test every edge case. So if I need a certain level of quality, I have to verify the process of producing it.

This is standard for any activity where accuracy / safety is paramount - you validate the process. Hence things like maintenance logs for airplanes.


> So if I need a certain level of quality, I have to verify the process of producing it

Precisely this, and it's hardly a requirement unique to software. Process audits are everywhere in engineering. Previously you could infer the process behind some code simply by reading the patch, and that would generally tell you quite a bit about the author. Using advanced and niche concepts would imply a solid process with experience behind it, which would in turn imply that certain contextual bugs are unlikely, so you could skip looking for them.

My premise in the blog is basically that "Well, now I have to go do a full review no matter what the code itself tells me about the author."


> My premise in the blog is basically that "Well, now I have to go do a full review no matter what the code itself tells me about the author."

Which IMO is the correct approach - or alternatively, if you do actually trust the author, you shouldn't care if they used LLMs or not because you'd trust them to check the LLM output too.


The false assumption here is that humans will always write better code than LLMs, which is certainly not the case for all humans nor all LLMs.


I’m not seeing a lot of discussion about verification or a stronger quality control process anywhere in the comments here. Is that some kind of unsolvable problem for software? I think if the standard of practice is to use author reputation as a substitute for a robust quality control process, then I wouldn’t be confident that the current practice is much better than AI code-babel.


> Because when people use LLMs, they are getting the tool to do the work for them, not using the tool to do the work.

You can say that for pretty much any sort of automation or anything that makes things easier for humans. I'm pretty sure people were saying that about doing math by hand around when calculators became mainstream too.


I think the main issue is people using LLMs to do things that they don't know how to do themselves. There's actually a similar problem with calculators; it's just a much smaller one. If you never learn how to add or multiply numbers by hand and use calculators for everything all the time, you may sometimes make absurd mistakes like tapping 44 * 3 instead of 44 * 37 and not bat an eye when your calculator tells you the result (132 instead of 1628) is a whole order of magnitude less than what you should have expected. Because you don't really understand how it works. You haven't developed the intuition.

There's nothing wrong with using LLMs to save time on trivial stuff you know how to do yourself and can check very easily. The problem is that (very lazy) people are using them to do things they aren't competent at themselves. They can't check the output, they won't learn, and the LLM is essentially their skill ceiling. This is very bad: what added value are you supposed to bring over something you don't understand? AGI won't have to improve from the current baseline to surpass humans if we're just going to drag ourselves down to its level.


>Because when people use LLMs, they are getting the tool to do the work for them, not using the tool to do the work.

What? How on god's green earth could you even pretend to know how all people are using these tools?

> LLMs are not calculators, nor are they the internet.

Umm, okay? How does that make them less useful?

I'm going to give you a concrete example of something I just did and let you try and do whatever mental gymnastics you have to do to tell me it wasn't useful:

Medicare requires that all new patients receiving home health treatment go through a form that's 100+ questions long. This form changes yearly, and it's my job to implement it in our existing EMR. Well, part of that is creating a printable version. Guess what I did? I uploaded the entire PDF to Claude and asked it to create a print-friendly template using Cottle as the templating language in C#. It generated the 30-page print preview in a minute, and it took me about 10 more minutes to clean up.
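For anyone unfamiliar with Cottle, the C# plumbing side of this is tiny. Here's a minimal sketch of the rendering step; the template text and field names below are made up for illustration, not the actual Medicare form:

    using System;
    using System.Collections.Generic;
    using Cottle;

    // A tiny stand-in for the generated print template (the real one is
    // ~30 pages of Cottle markup produced from the uploaded PDF).
    const string template =
        "<h1>{form_title}</h1>" +
        "<p><b>Q1. Primary diagnosis:</b> {primary_diagnosis}</p>" +
        "<p><b>Q2. Start of care date:</b> {start_of_care}</p>";

    // Parse the template; DocumentOrThrow surfaces syntax errors up front.
    var document = Document.CreateDefault(template).DocumentOrThrow;

    // Bind values pulled from the EMR (hypothetical field names).
    var context = Context.CreateBuiltin(new Dictionary<Value, Value>
    {
        ["form_title"] = "Home Health Assessment",
        ["primary_diagnosis"] = "I10",
        ["start_of_care"] = "2025-01-15"
    });

    // Render to the HTML string that feeds the print preview.
    Console.WriteLine(document.Render(context));

The LLM's job was generating the 30 pages of template markup, not this plumbing; the plumbing is a few lines.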

> I suspect that as the serious problems with LLMs become ever more apparent, this will become standard policy across the board. Certainly I hope so.

The irony is that they're getting better by the day. That's not to say people don't use them for the wrong applications, but the idea that this tech is going to be banned is absurd.

> A good rule of thumb is to simply reject any work produced with the involvement of an LLM

Do you have any idea how ridiculous this sounds to people who actually use the tools? Are you going to be able to hunt down the single React component in which I asked it to convert the MUI styles to tailwind? How could you possibly know? You can't.


You’re being unfairly downvoted. There is a plague of well-groomed incoherence in half of the business emails I receive today. You can often tell that the author never wrestled with the text to figure out what they wanted to say; they're acting as a kind of stochastic parrot.

This is okay for platitudes, but for emails that really matter, having this messy watercolor kind of writing totally destroys the clarity of the text and confuses everyone.

To your point, I’ve asked everyone on my team to refrain from writing words (not code) with ChatGPT or other tools, because the LLM invariably produces output that's more convoluted than the author just badly, but authentically, expressing themselves in the text.


I find the idea of using LLMs for emails confusing.

Surely it's less work to put the words you want to say into an email, rather than craft a prompt to get the LLM to say what you want to say, and iterate until the LLM actually says it?


My own opinion, which is admittedly too harsh, is that they don't really know what they want to say. That is, the prompt they write is very short, along the lines of `ask when this will be done` or `schedule a followup`, and they give the LLM output a cursory review before copy-pasting it.


Still funny to me.

`ask when this will be done` -> ChatGPT -> paste answer into email

vs

type: "when will this be done?" Send.


Greetings Jimbokun,

I trust this message finds you well.

I am writing to inquire about the projected completion timeline for the HackerNews initiative. In order to optimize our downstream workflows and ensure all dependencies are properly aligned, an estimated delivery date would be highly valuable.

Could you please provide an updated forecast on when we might anticipate the project's conclusion? This data will assist in calibrating our subsequent operational parameters.

Thank you for your continued focus and effort on this task. Please advise if any additional resources or support from my end could help expedite the process.

Best regards, Sebmellen


Yep, I have come to really dislike LLMs for documentation, as the output just reads wrong to me and, I find, so often misses the point entirely. There is so much nuance tied up in documentation, and much of it is in what is NOT said as much as what is said.

The LLMs struggle with both but REALLY struggle with figuring out what NOT to say.


I definitely see where you're coming from, though I have a slightly different perspective.

I agree that LLMs often fall short when it comes to capturing the nuanced reasoning behind implementations—and when used in an autopilot fashion, things can easily go off the rails. Documentation isn't just about what is said, but also what’s not said, and that kind of judgment is something LLMs do struggle with.

That said, when there's sufficient context and structure, I think LLMs can still provide a solid starting point. It’s not about replacing careful documentation but about lowering the barrier to getting something down—especially in environments where documentation tends to be neglected.

In my experience, that neglect can stem from a few things: personal preference, time pressure, or more commonly, language barriers. For non-native speakers, even when they fully understand the material, writing clear and fluent documentation can be a daunting and time-consuming task. That alone can push it to the back burner. Add in the fact that docs need to evolve alongside the code, and it becomes a compounding issue.

So yes, if someone treats LLM output as the final product and walks away, that’s a real problem. And honestly, this ties into my broader skepticism around the “vibe coding” trend—it often feels more like “fire and forget” than responsible tool usage.

But when approached thoughtfully, even a 60–90% draft from an LLM can be incredibly useful—especially in situations where the alternative is having no documentation at all. It’s not perfect, but it can help teams get unstuck and move forward with something workable.


I wonder if this is to a large degree also because when we communicate with humans, we take cues from more than just the text. The personality of the author will project into the text they write, and assuming you know this person at least a little bit, these nuances will give you extra information.


> A good rule of thumb is to simply reject any work produced with the involvement of an LLM,

How are you going to know?


That's sort of the problem, isn't it? There is no real way to know, so we just have to assume every bit of work involves LLMs now, and take a much closer look at everything.


We already treat spam and unsolicited commercial email that way.



