
AI-driven businesses will be cheaper and faster. Whether they're "better" will be a case-by-case question. One could argue that a competitor who combines their skills, expertise, AND AI would have a market advantage over you.


And my bet is that they will not, long term.

If you use "AI" to increase productivity, you're doing it by reviewing the code from the "AI" less than you would review your own. That lack of review and testing will slow your velocity over time as the software becomes unmaintainable, unmanageable, and too complicated to understand.

Meanwhile, my initially slow velocity will buy a solid design and little technical debt. Eventually my velocity will increase, and I'll overtake my competitors.

"A little bit of slope makes up for a lot of y-intercept." [1]

[1]: https://gist.github.com/gtallen1187/e83ed02eac6cc8d7e185
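
To make the aphorism concrete, here's a toy sketch in Python with made-up numbers (the functions and figures are mine, purely illustrative): a team with a head start but shallow improvement gets overtaken by a team that starts from zero but improves faster.

    # Toy numbers, purely illustrative: "output" per week for two teams.
    fast_start = lambda week: 100 + 5 * week   # big y-intercept, shallow slope
    steep_slope = lambda week: 0 + 20 * week   # zero head start, steep slope

    # Crossover where 100 + 5w = 20w, i.e. w = 100/15 ~= 6.7 weeks.
    for week in range(11):
        leader = "head start" if fast_start(week) > steep_slope(week) else "steep slope"
        print(f"week {week:2}: {leader} leads")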


There exists a world outside of programming. I've been using "AI" every day in my work for some time, and it's an essential tool by now. It's proven.


What is your work?

I bet your work gets worse the more you rely on it.


I use AI to communicate with and close sales with clients in languages I don't speak. It would be impossible for me to learn every language, and too expensive and slow to hire professional translators. The AI simply works and has been reliable for years now.


I think you're describing the difference between AI creating code and developing _with_ AI, which keeps humans firmly in charge while enabling them to be many times more efficient.


They'll only be more efficient if they don't review the code. Otherwise, these tools are just typing helpers, and typing is a small part of the job.


Not sure I agree, but to be fair, my perspective is informed by this talk I watched not too long ago:

https://www.youtube.com/watch?v=qmJ4xLC1ObU


I've now watched that video, and I'm going to say what I disagree with.

First, he claims that Copilot/GPT only needs more data and more compute to get better. I disagree. It needs both, for sure, but I think it needs more than that.

Also, it won't get more useful data! As LLMs are used more and more, the text fed back into training will look more and more like what the models would generate anyway, which will lead them to overfit on their own output and get worse.
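
Here's a toy simulation of that feedback loop (my own sketch, fitting a Gaussian to its own samples; it says nothing about real LLM training): each generation fits a distribution to samples drawn from the previous fit, and the fitted spread drifts toward collapse.

    import random
    import statistics

    random.seed(0)

    def chain(generations=30, n=10):
        """Repeatedly fit a Gaussian to samples drawn from the previous fit."""
        mu, sigma = 0.0, 1.0
        for _ in range(generations):
            samples = [random.gauss(mu, sigma) for _ in range(n)]
            mu = statistics.fmean(samples)
            sigma = statistics.pstdev(samples)  # MLE estimate, biased low
        return sigma

    # Average final spread over many runs: well below the starting 1.0.
    finals = [chain() for _ in range(200)]
    print(f"mean final sigma: {statistics.fmean(finals):.3f} (started at 1.000)")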

He claims that bots make mistakes quickly and that allows iteration. This is true, but the iteration will probably be more like bogosort than anything intentional. (Bogosort is famously very expensive.)

Why is it like bogosort? Because even good "prompt engineers" are more or less creating incantations on the fly to coax information out of a black box. To me, that seems like a more or less random search. Random search, random shuffling: hence, bogosort.
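
For anyone who hasn't met it, bogosort is "shuffle until sorted." A minimal Python sketch (mine, just to show why the expected cost is roughly n! shuffles):

    import random

    def is_sorted(xs):
        return all(a <= b for a, b in zip(xs, xs[1:]))

    def bogosort(xs):
        """Shuffle until sorted; expected ~n! shuffles for n distinct items."""
        shuffles = 0
        while not is_sorted(xs):
            random.shuffle(xs)
            shuffles += 1
        return shuffles

    # 8 distinct items => 8! = 40320 orderings, so expect tens of
    # thousands of shuffles before stumbling on the sorted one.
    print(bogosort(random.sample(range(100), 8)), "shuffles")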

He claims that, in his experience, reviewing code is 100x faster than writing it. Well, yes, because you don't dive in as deeply as the original author did. Reviewers only have so much time, so they spend only as much time as they have. They could spend more and catch more bugs.

He claims that the AI will (eventually) take instructions from you and run them directly. He says they won't generate the code; they will be the code.

They can only do this if they are Turing-complete, and the typical neural nets I've seen are not, because data can only flow one way through them. Turing-completeness requires data to flow conditionally, forwards, backwards, and in any combination. These models cannot do that. They would also need recursion.
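
A sketch of the structural point, in my own framing (not from the talk): a feedforward pass is a fixed, finite chain of steps whose length is known before the input arrives, while Turing-completeness needs loops that can run an unbounded, input-dependent number of times.

    def feedforward(x, layers):
        # A fixed number of steps, known in advance; always halts.
        for layer in layers:
            x = layer(x)
        return x

    def collatz_steps(n):
        # An input-dependent, unbounded loop; no fixed-depth pipeline
        # can express this by itself without an outer loop or recursion.
        steps = 0
        while n != 1:
            n = 3 * n + 1 if n % 2 else n // 2
            steps += 1
        return steps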

He claims they have reasoning capabilities. I claim they only have the appearance of reasoning, borrowed from the reasoning capabilities of the humans that wrote the material used in the training sets.

His example of the cards is good, but not that impressive. It didn't exercise any Turing-completeness.

Those are my thoughts as I watched it. It was pretty good though. It was convincing. I know I'm weird in that I just cannot be convinced.


Interesting summary and rebuttal. Appreciate you taking the time to watch the video I linked.

> He claims they have reasoning capabilities.

I think here you touch on the crux of the LLM conversation. In my limited experience with GPT, it does appear to have some basic reasoning ability, but that could just be that it's very good at regurgitating its training data, so it merely appears to reason. I think over time we'll be able to sort this question out.


> I think over time we'll be able to sort this question out.

I hope so.



