We've all been waiting for the other shoe to drop. Everyone points out that reviewing code is more difficult than writing it. The natural question is, if AI is generating thousands of lines of code per day, how do you keep up with reviewing it all?
The answer: you don't!
It seems like this reality will become increasingly justified and embraced in the months to come. Really, though, it feels like a natural progression of the package-manager-driven "dependency hell" style of development, except now it's your literal business logic that is essentially a dependency that has never been reviewed.
My process is probably more robust than simply reviewing each line of code. But hey, I am not against doing that, if that is your policy. I worked the old-fashioned way for over 15 years, so I know exactly which pitfalls to watch out for.
And this, my friends, is why software engineering is going down the drain. We've made our profession a joke. Can you imagine an architect or civil engineer speaking like this?
People like this make me want to switch to a completely different discipline.
Strong feelings are fair, but the architect analogy cuts the other way. Architects and civil engineers do not eyeball every rebar or hand compute every load. They probably use way more automation than you would think.
I do not claim this is vibe coding, and I do not ship unreviewed changes to safety-critical systems (in case that's what people think). I claim that in 2025, reviewing every single changed line is not the only way to achieve quality at the scale that AI codegen enables. The unit of review is shifting from lines to specifications.
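To make "reviewing specifications instead of lines" concrete, here is a minimal sketch. `sort_events()` is a hypothetical stand-in for generated code under review: the reviewer signs off on the properties the function must satisfy, and a randomized check enforces them, instead of the reviewer reading every line of the body.

```python
# Hedged sketch: review the spec, not the lines. sort_events() is a
# hypothetical stand-in for an AI-generated helper; check_spec() is the
# artifact a human actually reviews.
import random

def sort_events(events):
    # Pretend this body was AI-generated and never read line by line.
    return sorted(events, key=lambda e: e["ts"])

def check_spec(trials=200):
    for _ in range(trials):
        n = random.randint(0, 50)
        events = [{"ts": random.randint(0, 10**6)} for _ in range(n)]
        out = sort_events(events)
        # Property 1: output is ordered by timestamp.
        assert all(a["ts"] <= b["ts"] for a, b in zip(out, out[1:]))
        # Property 2: output is a permutation of the input.
        assert sorted(e["ts"] for e in out) == sorted(e["ts"] for e in events)
    return True

print(check_spec())  # True if every random trial satisfies the spec
```

Property-based testing libraries like Hypothesis do this more rigorously, but the review artifact is the same: a short, human-checkable specification rather than thousands of generated lines.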
Since you've continued to break the site guidelines right after we asked you to stop, I've banned the account.
If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.
You were never an engineer. I'm 18 years into my career on the web and in games, and I was never an engineer either. It's blind people leading blind people, and you're somewhere in the middle, having gotten to this point on 2013 patterns plus 2024 advancements called "Vibe Coding", and you get paid $$ to make it work.
Building a bridge from steel that lasts 100 years and carries real living people in the tens or hundreds of thousands per day without failing under massive weather spikes is engineering.
If you're intelligent in your work but completely clueless when it comes to society, information gathering, and independent thinking, and you just regurgitate whatever your orange-tanned Cheeto says, then no, you're not smart. You have just been able to condition your brain to do something over and over again. Intelligence and smartness aren't about doing one thing well.
Bin Laden did achieve what he set out to do: hurting the US, not just the CIA. You could argue that today's problems actually started with Bush and his cronies, Cheney et al. And while he poked the monster that killed him one too many times, America is not what it used to be.
And you underestimate people who are not of your own.
Tesla is going down the shitter, and he is trying to fool everyone into believing it is suddenly an AI company, lol, with a disastrous rollout for his taxis. Waymo is going to eat them for lunch. Driverless taxis with people overseeing things in the car, lol. Wow. Such autonomy. He also didn't even create the company; he basically took it from the guys who actually founded Tesla and built its early stages.
He doesn't run and isn't capable of running SpaceX. Its president and technical lead is the person who runs the business and is actually knowledgeable about the space industry and space engineering. Elon? Oh, he's just there for the launches.
His Neuralink and xAI? lol, OK. Yes, I'm sure we will see a lot coming out of those businesses, with most governments and people now shunning his businesses and him personally. Oh, and a new version of a Nazi LLM. Can't wait to use it. And Twitter. Wow, so much great discourse and sensible conversation that it competes with Truth Social.
Yes, he is doing remarkably well because he has money. Just like Pablo Escobar had money. The leaders of Enron were also doing remarkably well for a while. What about the guy who ran that Ponzi scheme? Madoff. Yes, he was also doing remarkably well, since no one knew the bullshit he was generating. Elon is a fraud like all these other "successful" people who may have created businesses but hide the bullshit well, for now. One day, though, it will all come crashing down. Then you and all the other sheep will look like greater fools than you do now. You still have time to come to your senses. Just don't be a sheep and glorify any man or exalt him above others. It's quite simple. He is no genius. He is someone who takes advantage of and exploits others for his personal gain, and he is more destructive to society today than he has ever been. People like you are contributing to it, so congrats on screwing over other people.
ITT: people who think LLMs are AGI and can produce output that the LLM has come up with out of thin air or by doing research. Go speak with someone who is actually an expert in this field about how LLMs work and why the training data is so important. I'm amazed that people in the CS industry talk like they know everything about a technology after merely using it, without ever having written a line of code for an LLM. Our industry is doomed with people like this.
This isn't about being AGI or not, and it's not "out of thin air".
Modern implementations of LLMs can "do research" by performing searches (whose results are fed into the context), or in many code editors/plugins, the editor will index the project codebase/docs and feed relevant parts into the context.
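The mechanism is simple enough to sketch. This is a hedged, dependency-free illustration of "search results fed into the context": `web_search()` is a stand-in for a real search API, and the key point is that retrieved text is concatenated into the prompt, so the model can answer from material that was never in its training data.

```python
# Minimal sketch of search-augmented prompting. web_search() is a
# hypothetical stand-in for a real search/indexing backend; the foolib
# snippets below are invented for illustration.

def web_search(query):
    # A real implementation would call a search API or a codebase index.
    return [
        "Release notes: foolib 3.2 renamed connect() to open_session().",
        "Docs: open_session() accepts a timeout keyword argument.",
    ]

def build_prompt(question):
    snippets = web_search(question)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using ONLY the sources below.\n\n"
        f"Sources:\n{context}\n\n"
        f"Question: {question}\n"
    )

prompt = build_prompt("How do I connect in foolib 3.2?")
print(prompt)
```

The model never needed "foolib 3.2" in its weights; the prompt carries the facts, and the weights supply the language and reasoning to use them.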
My guess is they were either using the LLM from a code editor, or one of the many LLMs that do web searches automatically (i.e. all of the popular ones).
They are already answering non-Stack-Overflow questions every day.
Yeah, doing web searches could be called research, but that's not what we are talking about. Read the parent of the parent. It's about being able to answer questions that aren't in its training data. People are talking about LLMs making scientific discoveries that humans haven't. A ridiculous take. It's not possible, and with the current state of the tech it never will be. I know what LLMs are trained on. That's not the topic of conversation.
A large part of research is just about creatively rearranging symbolic information, and LLMs are great at this kind of research, for example, discovering relevant protein sequences.
> It's about being able to answer questions that aren't in its training data.
This happens all the time via RAG. The model “knows” certain things via its weights, but it can also inject much more concrete post-training data into its context window via RAG (e.g. web searches for documentation), from which it can usefully answer questions about information that may be “not in its training data”.
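The retrieval step of RAG can be sketched in a few lines. Production systems rank documents by vector-embedding similarity; plain token overlap stands in here so the sketch runs with no dependencies. The corpus sentences are invented for illustration.

```python
# Toy RAG retrieval: rank documents by token overlap with the query
# and inject the best match into the prompt. Real systems use vector
# embeddings and a vector store; the shape of the pipeline is the same.

def score(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d)

corpus = [
    "The staging database password rotates every 30 days.",
    "Deploys to production require two approvals.",
    "The 2024 audit found the VPN config out of date.",
]

def retrieve(query, docs, k=1):
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

question = "How often does the staging database password rotate?"
context = retrieve(question, corpus)[0]
prompt = f"Context: {context}\n\nQuestion: {question}"
print(prompt)
```

The "30 days" fact reaches the model through the context window, not its weights, which is exactly the "not in its training data" case under discussion.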
I think the time has come to stop meaning just LLMs when we talk about AI. An agent with web access can do so much more and hallucinates far less than "just" the model. We should start seeing the model as one building block of an AI system.
However, many of my issues with CISA are based on my own professional work in security, and on accomplished professors like J. Alex Halderman and Matt Blaze saying our election infrastructure is insecure.
We’ve been saying the same thing in hackerdom for 30 years!
If my career has been completely about the security of federal & military systems, then some lawyer like Krebs saying our infrastructure is secure when it’s running Windows 7 is a giant slap in the face, particularly given all of the censorship.
You wanted evidence. Here goes:
The censorship & viewpoint discrimination pressure CISA was bringing to bear has been over the top.
At the same time Krebs was talking about how secure our election infrastructure was, prominent professors such as Matt Blaze and J. Alex Halderman, who have researched election security, said the opposite.
This has historically been a bipartisan and academic issue, with Dems, Repubs, and academics alike supporting claims of insecurity.
Those of us in security are convinced that all this unpatched Windows 7 usage is crazy, and that Chris Krebs, in lying about election security, isn't being open and truthful with the American people.
- NBC News revealed in 2020 that ES&S installed modems in voting machines, making them susceptible to hacking. [Note: the January 2020 NBC News article is not directly linked here; you may need to look it up for the precise link.]
- Vox highlighted in 2016 that voting machines on Windows XP and voter databases online were vulnerable to hacking. https://www.vox.com/policy-and-politics/2016/11/7/134 educed/hackers-election-day-voting-machines
- Senators Warren, Klobuchar, Wyden, and Pocan sent letters in 2019 to the private equity firms that own voting machine companies about security concerns. [Note: the direct link to the letters is not provided here; search for the original source, possibly on a senator's website.]
- A 2019 compilation of media articles detailed election system vulnerabilities over four years post-2016 election.