
I wouldn't say they were dismissive of AI, just that they are unwilling to merge code that they don't have the time or motivation to review.

If you want AI code merged, make it small so it's an easy review.

That being said, I completely understand being unwilling to merge AI code at all.


Why would you be unwilling to merge AI code at all?

Consider my other PR against the Zig compiler [1]... I was careful to make it small and properly document it but there's a strict anti-AI policy for Zig and they closed the PR.

Why?

Is it not small? Not carefully documented? Is there no value in it?

I'm not complaining or arguing for justice. I'm genuinely interested in how people think in this instance. If the sausage looks good and tastes great, and was made observing the proper health standards, do you still care how the sausage was made?!

[1] https://github.com/joelreymont/zig/pull/1 [2] https://ziggit.dev/t/bug-wrong-segment-ordering-for-macos-us...


Personally, I'm skeptical that it's a real bug, and that even if it is, that's the proper fix. For all I know, the LLM hallucinated the whole thing, terminal prompts, output, "fix" and all.

It takes time to review these things, and when you haven't shown yourself to be acting responsibly, there's no reason to give you the benefit of the doubt and spend time even checking if the damn alleged bug is real. It doesn't even link to an existing issue, which I'm pretty sure would exist for something as basic as this.

How do you know it's an issue? I think you're letting the always-confident LLM trick you into thinking it's doing something real and useful.


> Why would you be unwilling to merge AI code at all?

Because structurally it's a flag for being highly likely to waste extremely scarce time. It's sort of like avoiding bad neighborhoods, not because everyone there is bad, but because there is enough bad there that it's not worth bothering with.

What sticks out for me in these cases is that the AI use sticks out like a sore thumb. Go ahead and use AI, but it's as if the low-effort nature of AI sets users on a course of low effort throughout the whole cycle of whatever it is they're trying to accomplish.

The AI shouldn't look like AI. The proposed contributions shouldn't stand out from the norm. This includes the entire process, not just the provided code. It's just a bad aesthetic, and for most people it screams "low effort."


I can't even reproduce your supposed "issue" regarding the Zig compiler "bug". I have an Apple Silicon Mac and tried your reproducer and zig compiled and ran the program just fine.

Honestly, I really suggest reading up on what self-reflection means. I read through your various PRs, and the fact that you can't even answer why a random author name shows up in your PR means the code can't be trusted. It's not just about attribution (although that's important). It's that it's such a simple thing that you can't even reason through.

You may claim you have written loads of tests, but that means literally nothing. How do you know they are testing the important parts? Also, you haven't demonstrated that it "works" other than in the simplest use cases.


Check the 2nd PR, the one in my repo and not the one that was rejected.

> Why would you be unwilling to merge AI code at all?

Are you leaving the third-party aspect out of your question on purpose?

Not GP, but for me it pretty much boils down to the comment from Mason [0]: "If I wanted an LLM to generate [...] unreviewed code [...], I could do it myself."

To put it bluntly, everybody can generate code via LLMs, and writing code no longer defines the dominant work of an existing project, as the write/verify balance shifts to become verify-heavy. Who's better equipped to verify generated code than the maintainers themselves?

Instead of prompting LLMs for a feature, one could request the desired feature from the maintainers in the issue tracker and let them decide whether they want to generate the code via LLMs or not, discuss strategies etc. Whether the maintainers will use their time for reviews should remain their choice, and their choice only - anyone besides the maintainers should have no say in this.

There's also the cultural problem that review efforts are un- or underrepresented in any contemporary VCS, and the amount of merged code grants higher authority over a repository than any time spent doing reviews or verification (the Linux kernel might be an exception here?). We might need to rethink that approach moving forward.

[0]: https://discourse.julialang.org/t/ai-generated-enhancements-...


I'm strictly talking about the 10-line Zig PR above.

Well-documented and tested.


That's certainly a way to avoid questions... I mean sure, but everybody else is talking about how your humongous PRs are a burden to review.

Which is something I agreed with and apologized for, and admitted was somewhat of a PR stunt.

Now, what's your question?


> admitted was somewhat of a PR stunt.

You should be blocked, banned, and ignored.

> Now, what's your question?

Your attitude stinks. So does your complete lack of consideration for others.


You are admitting to wasting people’s time on purpose and then can’t understand why they don’t want to deal with you or give you the benefit of the doubt in the future?

It's worth asking yourself something: people have written substantial responses to your questions in this thread. Here you answered four paragraphs with two fucking lines referencing and repeating what you've already said. How do you expect someone to react? How can you expect anybody to take seriously anything you say, write, or commit when you obviously have so little ability, or willingness, to engage with others in a manner that shows respect and thought?

I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.

You need to stop 'contributing' to public projects and stop talking to people in forums until you figure this stuff out.


> I really, truly don't understand. This isn't just about manners, mores, or self-reflection. The inability or unwillingness to think about your behavior or its likely reception is stupefying.

Shower thought: what does a typical conversation with an LLM look like? You ask it a question, or you give a command. The model spends some time writing a large wall of text, or performing some large amount of work, and probably asks some follow-up questions. Most of the output is repetitive slop, so the user scans for the direct answer to the question, or checks whether the tests work, promptly ignores the follow-ups, and proceeds to the next task.

Then the user goes to an online forum and carries on behaving the same way: all posts are instrumental, all of the replies are just directing, shepherding, shaping and cajoling the other users to his desired end (giving him recognition and a job).

I'm probably reading too much into this one dude, but perhaps daily interaction with LLMs also changes how one interacts with other text-based entities in their lives.


I'll gladly discuss at length things that are near and dear to my heart.

Facing random people in the public court of opinion is not one of them!

Also, there's long-form writing in my blog posts, Twitter and Reddit.


Remind me please, when did I sign up to meet your expectations?

> Why would you be unwilling to merge AI code at all?

Because AI code cannot be copyrighted. It is not anyone's IP. That matters when you're creating IP.

edit: Assuming this is a real person I'm responding to, and this isn't just a marketing gimmick, having seen the trail you've left on the internet over the past few weeks, it strikes me as mania, possibly chatbot-induced. I don't know what I can say that could help, so I'm dropping out of this conversation and wish you the best.


This position seems about as unenforceable as the position that AI can't be trained on code whose copyright owners have not given consent.

The main reason for being unwilling to merge AI code is going to be that it sets a precedent that AI code is acceptable. Suddenly, maintainers need to be able to make judgement calls on a case-by-case basis of what constitutes an acceptable AI contribution, and AI is going to be able to generate far more slop than people will ever have the time to review and agree upon.


> This position seems about as unenforceable as the position that AI can't be trained on code whose copyright owners have not given consent.

This depends on what the courts find; at least one non-precedent-setting case found model training on basically everyone's IP without permission to be fair use. If it's fair use, consent isn't needed, licenses don't matter, and the only way to prevent training on your content is to withhold it and gate it behind contracts that forfeit your clients' rights to fair use.

But that is beside the point. Even if what you claim were the case, my point is that AI output isn't property. It's not property whether its training corpus was full of licensed or unlicensed content. This is what the US Copyright Office determined.

If you include AI output in your product, that part of it isn't yours. It isn't anybody's, so anyone can copy it and anyone can do whatever they want with it, including the AI/cloud providers you allowed your code to get slurped up to as context to LLMs.

You want to own your IP, you don't want to say "we own 30% of the product we wrote, but 70% of it is non-property that anyone can copy/use/sell with no strings attached, even open source licenses". This matters if you're building a business or open source project. If you include AI code in your open source project, that part of the project isn't covered by your license. LLMs can't sign CLAs and they can't produce intellectual property that can be licensed or owned. The more of your project that is developed by AI, the more it is not yours, and the more of it cannot be covered by your open source license of choice.


> This is what the US Copyright Office determined.

There are hundreds of countries in the world. Whatever the "US Copyright Office" determines, applies to only one of them.


Find me a jurisdiction where AI output is the IP of the prompter.

> This position seems about as unenforceable as the position that AI can't be trained on code whose copyright owners have not given consent.

Stares at Facebook stealing terabytes of copyrighted content to train their models.

Also, even if models were trained only on FLOSS-approved licenses, GPL-based ones have some caveats that would disqualify many projects from including the code.


If poo-flinging monkeys are making the sausage, people don't care how good the sausage is.

This is really cool.


I would be more interested if it were using something like Servo as the driving engine instead of Blink.


Why?


80% of all devices on the internet are already using Blink.


AI isn't going anywhere. It won't magically disappear; it's just that businesses are trying to use AI in situations that are unsustainable and unneeded.


How do you learn how and when to use AI if you never use it? Just wait another decade to see what others do? Like Kodak and digital photography?


It's not impossible to vibe code (I hate that term) good software, or well-designed software, but if the authors don't come from a UX or software background, you can generally tell when you look at their product documentation.

Lots of emojis, too many emojis. Lots of flow charts, too many flow charts.


He didn't even criticize Kirk. The criticism was levied against Trump / the Republican Party.


one and the same


I actually tried asking the model about that, then I asked ChatGPT; both times, they just said that it was marketing speak.

I was like no. It is false advertising.


I sure wouldn't hate a lobsters invite...


Wave was a solution without a problem. Neat though.


I am still mad that Facebook mostly abandoned the Open Graph protocol on their own sites.
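
For anyone who never touched it: Open Graph is just og:* properties in <meta> tags in a page's <head>, so consuming it is trivial. Here's a rough stdlib-only sketch of pulling the tags out (the sample HTML below is made up for illustration, not taken from any real page):

    from html.parser import HTMLParser

    class OpenGraphParser(HTMLParser):
        """Collects og:* <meta> properties from an HTML document."""
        def __init__(self):
            super().__init__()
            self.og = {}

        def handle_starttag(self, tag, attrs):
            if tag != "meta":
                return
            a = dict(attrs)
            prop = a.get("property", "")
            if prop.startswith("og:") and "content" in a:
                self.og[prop] = a["content"]

    # Hypothetical sample document:
    sample = """
    <head>
      <meta property="og:title" content="Example Article" />
      <meta property="og:type" content="article" />
      <meta property="og:url" content="https://example.com/article" />
    </head>
    """

    parser = OpenGraphParser()
    parser.feed(sample)
    print(parser.og)  # {'og:title': 'Example Article', 'og:type': 'article', ...}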


for me, when both Facebook and Google rejected Jabber/XMPP federation :(

but yeah, in general, what happened to the dream of true Data Portability?


> for me, when both Facebook and Google rejected Jabber/XMPP federation :(

I agree with you in principle, but this is only half-true. You're right that Facebook's XMPP was always just a gateway into their proprietary messaging system, but Google did support XMPP federation. What Google did not support was server-to-server TLS, and thus it was “us” who killed Google XMPP federation.

In late 2013 there was an XMPP community manifesto calling for mandatory TLS (even STARTTLS) for XMPP server-to-server communication by a drop-dead date in mid 2014: https://github.com/stpeter/manifesto/blob/master/manifesto.t...

"The schedule we agree to is:

- January 4, 2014: first test day requiring encryption

- February 22, 2014: second test day

- March 22, 2014: third test day

- April 19, 2014: fourth test day

- May 19, 2014: permanent upgrade to encrypted network, coinciding with Open Discussion Day <http://opendiscussionday.org/>"

Well-intentioned for sure, but the one XMPP provider with an actual critical mass of users (Google Talk) still didn't support server-to-server TLS, all Google Talk users dropped off the federated XMPP network after May 2014, and so XMPP effectively ceased to matter. I'm sure Google were very happy to let us do this.
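
The requirement itself is easy to probe even today: per RFC 6120, a server advertises STARTTLS as a <starttls/> child of <stream:features> in the urn:ietf:params:xml:ns:xmpp-tls namespace. A rough sketch of checking that advertisement on the s2s port (this is not the manifesto's test tooling; the host below is a placeholder, and real s2s also involves SRV lookups, dialback/SASL, etc.):

    import socket

    def s2s_offers_starttls(host, port=5269, timeout=5.0):
        # Open an XMPP server-to-server stream and read the features stanza.
        header = (
            "<?xml version='1.0'?>"
            "<stream:stream xmlns='jabber:server' "
            "xmlns:stream='http://etherx.jabber.org/streams' "
            "to='%s' version='1.0'>" % host
        )
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.settimeout(timeout)
            sock.sendall(header.encode())
            data = b""
            try:
                while b"</stream:features>" not in data:
                    chunk = sock.recv(4096)
                    if not chunk:
                        break
                    data += chunk
            except socket.timeout:
                pass
        # RFC 6120: STARTTLS support is signalled via this namespace.
        return b"urn:ietf:params:xml:ns:xmpp-tls" in data

    print(s2s_offers_starttls("example.net"))  # placeholder host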


> what happened to the dream of true Data Portability?

It got muddled into the privacy/security debate and then we all got distracted.


As other posters have said - capitalism.

But also privacy - it would be amazing to just be able to connect to any app or service you want, interact and react to stuff that's happening _over there_.

However, do you want any old app or service connecting to _your_ data, siphoning it and selling it on (and, at best, burying their use of your data in a huge terms of service document that no-one reads, at worst, lying about what they do with that information)? So you have to add access controls that are either intrusive and/or complex, or, more likely, just ignored. Then the provider gets sued for leaking data and we're in a situation where no-one dares open up.


Capitalism happened. You can't extract value if your user base can flow away from your site like water.


Capitalism happened. My hope is on regulation - I don't see any other force being capable of prying these moat cans open.

