
A junior using an (open source, open weight, locally hosted) LLM to learn a well-established, well-understood industry skill I could tolerate, provided I can see they are clearly improving in ways I can easily validate when pairing and interacting with them while the LLMs are not available.

That said, for me personally, almost all of the work I do is on things that have never been done before in security engineering and supply chain security, and the entire body of relevant public research an LLM could have trained on comes from perhaps ten people, most of whom I am in frequent touch with and whose work I know well.

LLMs in general are very, very bad at threat modeling and security engineering because there is so little training data on the -right- way to do things. They can often produce code that -works-, but most users would be unable to spot when it is wildly insecure.

There are many, many cases where the way things are overwhelmingly done in open source training data is completely wrong, and that is exactly what an LLM will reproduce on average.

Honestly, the only way I could maybe see using an LLM to teach the type of security engineering and auditing work I do is having LLMs generate code examples, and training humans to spot the security flaws the LLM confidently overlooked because it cannot reason or threat model.
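To make the exercise concrete, here is a minimal sketch (the function names are hypothetical, not from any real codebase) of the kind of snippet an LLM routinely emits: it passes every functional test, yet a trained reviewer should flag it immediately.

```python
import hmac

# Hypothetical API-key check of the kind commonly seen in training data.
# It "works" -- but `==` short-circuits on the first differing byte,
# leaking timing information an attacker can use to recover the key.
def check_key_naive(supplied: str, expected: str) -> bool:
    return supplied == expected  # timing side channel


# The fix most public code omits: a constant-time comparison.
def check_key_safe(supplied: str, expected: str) -> bool:
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

Both functions return identical results for any input, which is exactly why the flaw survives review: the vulnerability is invisible to testing and only visible to threat modeling.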




