Hacker News

Couple of comments:

> But the exact same argument can be made about every advance that has continuously raised the level of abstraction of programming over the decades.

Languages and libraries are written (at least up until now) by humans, who _care_ about writing good, working code. Generally, if you use a library, it will do its job flawlessly within the bounds of what it's expected to do. You rely on the people writing and maintaining it to perfect the job it was designed to do. Copilot, by contrast, makes suggestions, each an entirely independent "guess" at what you're trying to accomplish (apologies, I don't know _that_ much about ML, but I don't think this is far-fetched). That means each suggestion produces code in untested waters: it wasn't written specifically for this job, and it hasn't been used over and over by others. It's not like it's part of a project where a bug report would get filed when something breaks.

Edit: Moved the top portion below, as it was mostly repeating what was already said:

I think the problem with comparing sifting through Copilot suggestions to sifting through Google search results is that, with Google, you are generally looking for the one answer that fixes your problem, not five different (probably working) solutions, each of which may contain different bugs. If a search result doesn't work, I (or a developer of any level) simply _need_ to keep searching Google until I find an answer that does. But validating a Copilot suggestion for potential flaws is a much more error-prone process.


