mplewis9z's comments | Hacker News

Primordial Hack Bloles?


Maybe he's just poisoning the scrapers regarding pointy-haired bosses?


I’m in fairly good shape from biking pretty much every day and my average _cycling_ speed on my hybrid bike is below that, my goodness. If I’m on a completely flat trail, carrying nothing else with me, and I’m really pushing it I can average about 16 mph for an hour but to know that there’s someone out there running as fast as I bike is absurd.


16 > 13.8 no?
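For the record, the quip checks out - a throwaway sketch of the arithmetic in the min/mile framing runners tend to use (13.8 mph being the running average under discussion, 16 mph the cycling one):

```shell
# Convert both average speeds (mph) to min/mile and compare.
awk 'BEGIN {
  run = 13.8; bike = 16.0
  printf "run:  %.2f min/mile\n", 60 / run
  printf "bike: %.2f min/mile\n", 60 / bike
  print ((bike > run) ? "the cyclist still averages faster" : "the runner averages faster")
}'
```

So the 13.8 mph runner covers a mile in about 4:21, while the 16 mph cyclist does it in 3:45.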


I think this is an extreme take - they've only had those mass surveillance tools since the start of the internet, and every other method of communication (phone calls, physical mail) required warrants individualized to specific people to tap. But somehow the internet is excluded from all those privacy protections, and now that there's technology available to ratchet us back to where we used to be, law enforcement agencies are throwing a tantrum about not being able to constantly violate our privacy.

In my mind, it’s pretty simple: if you want to surveil someone, get an individualized warrant to access their devices and data. If they refuse or wipe their data, treat it like destroying evidence in a case and throw the book at them. There’s zero excuse for what law enforcement and intelligence agencies have done to our privacy rights since 9/11.


These (mass surveillance) programs go back to the '60s; blanket surveillance was already prevalent before the internet was widespread, and the internet itself was under blanket surveillance well before now. Moreover, this is not limited to the internet per se: phone calls and any form of unencrypted communication are probably actively monitored for signals intelligence. We're not seeing laws about this because the mechanisms are probably already in place.

So, I'm keeping my stance of "They want their tools back, because they had them before".


There are very strict laws against wiretapping on calls within the US. Warrants are required before the call can be recorded. That’s why there was so much controversy over blanket metadata collection.


How to achieve total pervasive surveillance? One step at a time, where each step is not quite enough to cause rioting and revolution. Outrage has a very short attention span.


Marcan certainly can be abrasive (I mean lol, so can Linus), but all the things he points out in the message below are 100% valid - I highly recommend that anyone here try to contribute something even very small and logical to the Linux kernel or git (which use similar processes). It's an eye-opening experience that's incredibly unapproachable, frustrating, and demoralizing.

https://lore.kernel.org/rust-for-linux/208e1fc3-cfc3-4a26-98...


Having read through the email thread, I think both vocal people are basically in the wrong here. There is a way to constructively disagree and the DMA maintainer did not do that. The Rust maintainer should not have brigaded using social media.

The “in hindsight” version of how this should have gone without ego:

* Patch adds Rust bindings to C API

* Maintainer has concerns about increased maintenance cost and clarifies policy about C changes and Rust abstraction if unsure.

* Since R4L is approved, the binding is allowed to exist in a form that doesn’t inhibit changes to C API. C API is allowed to break Rust (I assume, otherwise the entire effort is moot).

* Any concerns about tooling etc which DON’T exhibit this property (and can demonstrably show that merging this Rust code will make C code harder to change) are brought up.

* These are resolved as a tooling issue before the change is merged (I don’t think there are any in this case).

All the discussion about multi-language projects etc is for the project as a whole to decide, which happened when R4L was approved and the breakage policy was decided (might need to be properly formalised).

If the maintainer (or anyone) is unreasonable, then the only approach is to have someone with more authority weigh in and make the decision to bypass their objections or sustain them (which is sort of the direction this was going before the diatribes).


Both were wrong, but only one was corrected.

> If the maintainer (or anyone) is unreasonable, then the only approach is to have someone with more authority weigh in and make the decision to bypass their objections or sustain them (which is sort of the direction this was going before the diatribes).

While they were arguing, Linus said nothing. While the maintainer was issuing ultimatums, Linus said nothing. Linus only said something when social media forced his hand. This is the real issue.


You’re right - add insufficient leadership to the list as well.

IMO, it seems inconsistent to green light R4L and not declare a clear policy for Rust code interacting with C code without adding a hard dependency (and if it WAS declared, not enforcing it).

The only benefit of doubt I can give is that there wasn’t enough time for Linus etc to weigh in before the thread got sidetracked (and the decision became much more politically charged). It’s unclear what would have happened if only the maintainer was unreasonable.


Apparently GregH(?) had already stepped in earlier to resolve the issue before it blew up again. But I’ve not been following it closely.


>Both were wrong, but only one was corrected.

People are wrong in LKML often.

This time, somebody was wrong in a much worse way than usual.


While I never submitted a patch personally, I had once conferred with some of the input devs to add a trackpad to the synaptics driver... they were queueing up an update to add other trackpads, and they said they would add mine... 5 years later, it's still not there. It was just a one-liner, and I'm not really sure why it never got added...

On the other hand, I once ran into an issue with uboot where a bad update knocked out my emmc, usb and sata controllers... found an email address of someone developing the dtb files and got in touch with them, and it was fixed in under a week.

At the end of the day, people are weird sometimes. I wish all the best for marcan.


I tried once to contribute a fix to be able to use the track-pad on my laptop, many years ago. But it was not accepted, as the maintainer claimed it was a problem in userspace that did not process out-of-order events correctly - despite the fact that none of the other drivers sent events out of order. I had no intention of fixing the problem in X11 (the only userspace for this at the time), so I used the patched kernel driver locally until I stopped using that laptop. https://bugzilla.kernel.org/show_bug.cgi?id=43591 https://lore.kernel.org/all/1340829375-4995-1-git-send-email...


FWIW, I have submitted a couple small patches to get the display and gamepad for my Lenovo Legion Go recognized correctly - probably similar levels of complexity to your change. One was to input and one was to display quirks.

They did take months to finally land, and the whole process of getting my corp email address to be compatible with their email-based PR system was way more of a faff than it had any right to be, but they did land. You can install mainline Linux on a Legion Go now and the display and controller will behave as expected, out-of-the-box.


Thanks!


> 5 years later, it's still not there. It was just a one-liner, and I'm not really sure why it never got added.

I think they expect people who want things to advocate harder than just mentioning it once. If no one brings it up again, then they assume that no one cares.


this seems very inefficient and the opposite of what I assumed. repeated requests take up time on both sides and are not a very good measure of how important something is.


This is how most open-source development works. There are many projects with thousands of issues and PRs. Those that get the most attention typically get prioritized.


This is even how closed source development works. If you throw issues in a backlog and never follow up to advocate for it then it will never get done.


it's not perfect but it works.


well, apparently it doesn't.


Well, apparently nobody even noticed for 5 years, so that's 5 years that nobody had to even think about that code.


Nah, people noticed, and then they thought "Linux always has these kind of issues, I'm going back to [whatever other OS]" because 99.9% of users will never even TRY to report a bug.


People just think "that's Linux, it's buggy and we have to live with it"


why bother inferring such intent when the obvious answer - that they simply forgot about it with no ill intentions - is right there?


Requiring people to advocate for their changes is not ill-intent. It handles all cases such as forgetting/missing a patch, and disagreement whether something is needed. The point is there's no system in place to track which patches "should ideally be included but weren't for some reason", it's up for the people who need them to push for them.


Quite the opposite. If you “pester” for something, they’ll explicitly reject it.


I didn't say pester, I said advocate for it.


It's the same thing.


People forget things etc.

Should probably have just asked again, or sent a small one-line patch. It's "mention something on Slack" vs "creating a GitHub issue/PR"


Then you get stories like Greg in the linked mail thread, who emailed to check after not hearing anything and got told that now he'd been annoying and it would never be done.


A story about a 17000 line patch with seemingly no discussion before dumping it on the kernel maintainers. Understandable that no one felt like reviewing it.


Which sounds inefficient and exactly the sort of problem that doesn't happen with a Github issue/PR.


But Github, being a platform, is a nonstarter.

Have there been any recent popular developments on a similar workflow that is as robust as e-mail?


They "just" need to settle on a platform. GitLab sounds good and is used by a lot of important Open Source projects.


Did you miss the part where being a platform was most of the issue?

I'm not sure how long GitLab can be trusted either - and git is becoming a bit too synonymous with GitHub...


> exactly the sort of problem that doesn't happen with a Github issue/PR

What? PRs or issues being forgotten happens all the time, especially for large projects.


It would still be easier to track the progress (or lack thereof) with a proper ticketing system.


Yeah instead the issue will be auto-closed by some bot. Yay.


Having contributed a few times, I'd rate it as similar to (sometimes much easier than!) contributing to Firefox and Chromium. That is to say it is indeed extremely time-consuming and frustrating, but compared to projects of the same scale it does not necessarily come out as more time-consuming or more frustrating - this will never be a small team collaborating on a random Github repo. A simple "swap out X workflow for Y" does not fix these annoyances, and false dichotomies and peer pressure are not a way to cooperate.

I cannot claim to have felt the effects on the maintainer-side of this workflow in large-scale projects though.


It's way more painful to contribute to the kernel than contribute to Firefox, at least, unless things have changed since I was involved with Firefox.

Suppose you find a bug in the kernel and come up with a patch. You email the patch to some kernel mailing list and ask for feedback. Typically, you will receive no response whatsoever, because no-one is responsible for responding. You can try emailing random developers and eventually maybe one of them will have mercy on you.

In Firefox and I think Chromium, you can file a bug, attach your patch, request review from someone (the UI will help you choose a suitable person), and it's their job to respond.


In my experience it's the opposite - the email patch usually gets dealt with within a week or two, Firefox and Chromium dragged out because it wasn't whatever Mozilla or Google prioritized right now. Or worse, it might go against an internal corporate KPI.

In Firefox you have to fiddle with Mercurial, Phabricator, and their homegrown CI. In Chromium it's Gerrit and their homegrown CI, and oh btw you touched code that lacked tests, so tag, you're it.


"The email patch usually gets dealt with within a week or two" is absolutely not my experience dealing with the kernel.

Firefox and Chromium's bespoke tools have their pluses and minuses, but they're a lot easier to deal with than the kernel "workflow".


That your experience is not shared suggests that the other "workflows" are not in fact objectively easier to deal with than the kernel workflow, but instead that there's a high variability in the frustration experienced across all three workflows.


I haven't touched Gecko in a decade, but your second paragraph sounds like my experience. My best record was something like a single character bug fix taking months (might have been years?). Yes, the review flag was set to the right person.

I still remember the story where some other guys had to meet some Mozilla folks for lunch and nag them for reviews…


I'm sorry you had a bad experience with someone, but at least you know who wasn't doing their job. On the kernel side, you don't.


get_maintainer.pl gives you the list. There's no equivalent in Firefox or Chromium to flag which Mozilla/Red Hat/Google/... manager does not consider your ticket an area of focus.


A kernel maintainer can completely ignore any submission with no repercussions even in principle. And they often do.

In Firefox, in my era at least, a reviewer who simply ignores a review request indefinitely was not doing their job and would get yelled at by someone --- me, if it came to my attention.


Besides the current drama, I'm glad someone of his stature agrees with and can call out the horrible processes and tooling involved in the kernel. Using email and a pile of hacks to mess around with patches just sounds nuts and makes it so much harder to understand or contribute. I don't think decentralized necessitates such a terrible workflow - you can run a local website with a distributed DB, distributed git forges exist, you can use federated chats instead of email, there has to be a better way than email.


I don’t think there is enough demonstrable benefit to sacrifice the ubiquity and flexibility of email for a newer solution, especially for existing maintainers who are probably very comfortable with the current workflow.

Being harder to understand and contribute to is bad, but unless there is a proposal for a similarly flexible system with minimal downsides and massive advantages, the preference of existing maintainers should dominate over potential future contributors - especially factoring in how expensive a migration would be.


I can understand this mindset, but I also think this is how communities die. They go to great lengths to avoid inconveniencing existing members while neglecting to welcome new ones. In the short term, the better choice is always to favor the existing contributors right up until they start dropping out and there's no one left to replace them.

Linux is so ubiquitous and important that this might never happen; maybe it will just become increasingly captured by corporate contributors who can at least build long-lasting repositories of knowledge and networks of apprenticeship to help onboard newbies. Open source in name only.


I really like the way sourcehut integrates mailing list patches with a good UI. I’d like to see that become more common in some of these “classic” open source projects.


Afaik Linus tried Github in the past, but had several significant complaints about it hiding information, messing with basic git operations, generating bad commit messages, etc. So it is not as if they wouldn't use something better; there just isn't anything that has feature parity with a workflow they have been optimizing for decades.


That optimization includes things like email filters and email client customization that is individualized to longtime contributors, not to mention that it is just what Linus and others are used to. And the long time contributors have had years, or decades to incrementally set up their tools, and become familiar with the workflow. The problem is that new contributors and maintainers don't have that, and learning the workflow, and setting up tools so that the email based workflow is manageable is daunting and takes a lot of time.

I won't contest that there are advantages to the linux Kernel's workflow, but there are downsides too, and a major one is that it scares off potential contributors.

That said GitHub definitely is far from perfect as well, and has different strengths and weaknesses from email based flows. As do any other options.

But just because there isn't currently anything that is unilaterally better doesn't mean things can't be improved. There is clearly a problem with onboarding new developers to the linux workflow. That should be acknowledged, and a solution sought. That solution doesn't have to be switching to GitHub or similar. Maybe there just needs to be better documentation on how to set up the necessary tools, that is oriented towards developers used to the Github process. Maybe there needs to be better tooling. Maybe the mailing lists need to be organized better, or have the mailing list automatically add metadata in a standard, machine-readable format to emails. Etc.
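On the machine-readable-metadata point: patch emails already carry a fair amount of structure in standard headers, which existing tooling keys off. A toy illustration (the message below is entirely invented, not a real patch):

```shell
# Write an invented mbox-style patch email and pull out the headers that
# tooling uses for threading and series tracking.
cat > /tmp/patch-demo.eml <<'EOF'
From: Jane Dev <jane@example.com>
Subject: [PATCH v2 1/3] input: demo: fix out-of-order events
Message-ID: <20240101120000.1234-1-jane@example.com>
In-Reply-To: <20240101115900.1200-1-jane@example.com>

---
 drivers/input/mouse/demo.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
EOF
grep -E '^(Subject|Message-ID|In-Reply-To):' /tmp/patch-demo.eml
```

The `[PATCH vN M/K]` subject convention plus `Message-ID`/`In-Reply-To` threading is what lets tools reassemble a series; the complaint above is that anything richer than this is ad hoc.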


> [..] a workflow they have been optimizing for decades.

it sounds the opposite of optimized to me. Unless we're optimizing for something other than developer experience and efficiency?


Once you realize the kernel workflow is optimized for Linus’s (and a few other top honchos) efficiency it all begins to make sense.

Any change introduces anomalies and those cause all sorts of hell.


> Unless we're optimizing for something other than developer experience and efficiency?

It's almost like there are more important goals wrt software development.


> a workflow they have been optimizing for decades

obligatory reminder about breaking someone's workflow https://xkcd.com/1172/


Every time I have to interact with mailing list based projects I feel like I must be missing some secret set of tools and workflows to make it easier.

Then I talk to the old timers and they act like I just need to get used to it.


I always thought it was a pretty blatant "vibe check" to filter out people who are so uncomfortable with software that they can't customize their environment to create an email workflow that works for them.


That sounds about right - the medium is the message. If you can't stand the clunky-but-working, decades-old, patch process, you probably won't stand the clunky-but-working decades-old code.

I'm grateful the kernel still supports MIPS, which means an old appliance of mine still works perfectly fine and is upgradable. I would be very sad if someone were to rip out support for an old MIPS arch just because it's old and unwieldy.


I've contributed to a couple of projects that use email-based workflows. I can customize my environment, but it takes a lot of time, and I would rather do something else than figure out how to filter the firehose of a mailing list down to the few emails I actually care about, or learn how to use a new email client that is slightly better at handling patches.

The first few times, it took me longer to figure out how to send a patch than it did to fix the bug I was writing a patch for.


But you only have to figure that out once. Amortized over many contributions the cost is essentially nothing.


But the initial cost is what determines whether the first patch will ever be sent, so the amortization may never happen.


I guess technically that’s true, but it cannot possibly take long to learn how to use `git format-patch`, and everyone should already know how to attach a file to an email. Even if you have to spend half an hour reading the entire output of `git format-patch --help`, is that really enough to prevent you from sending your first patch?
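For anyone curious what that first step actually looks like, here is a minimal end-to-end sketch. The repo, file, and commit message are made up, and the actual sending step is left commented out since it needs SMTP configuration:

```shell
set -e
# Throwaway repo standing in for a real checkout (everything here is invented).
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "you@example.com"
git config user.name "You"
echo "fix" > driver.c
git add driver.c
git commit -qm "input: demo: add trackpad PNP id"

# Turn the last commit into an email-formatted patch:
git format-patch -1 --stdout > 0001-demo.patch
head -n 6 0001-demo.patch

# In a real kernel tree you would then find whom to mail and send it:
#   ./scripts/get_maintainer.pl 0001-demo.patch
#   git send-email --to=<address from get_maintainer> 0001-demo.patch
```

The formatted file is a complete email message, which is why any mail setup capable of sending it verbatim works.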


Yes.


Ok, let me get this straight. You diagnosed a problem in the Linux kernel. You debugged it and figured out how to fix it. You edited the kernel source code, recompiled it, and tested your fix. After all that, if you have to read a man page you’ll just give up?

That’s seriously lame.


My time is more valuable than that, and I don't owe it to anyone.


By that same token, there’s no reason for you to expect kernel developers to adopt a different way of working either. Their time is even more valuable than yours.


We only have so many hours in our sadly finite lives.


Can't or won't? Surely what you just read would make you reconsider?


shrug maybe "won't" is a false positive, maybe a true positive, I dunno man, not my vibe check.


As someone who has never used mailing lists before (for software development): how much harder/less advantageous is it to migrate to an issues- or thread-based approach, like with Github?

And why not?


The short version is:

- Distributing patches via email is more scalable than web hosting. Even GitHub could not host the level of activity happening on the LKML mailing list

- Web hosting has a variety of access and legal problems when working with international teams; email crosses borders more easily

- Email is a decentralized and distributed system that prevents any single company from controlling a project's development infrastructure (release infrastructure is another story, but multiple companies will generally manage their own release process anyway)


It's not Wikipedia, right? Getting the maximum number of contributors isn't a stated goal? I'm a C programmer with a fair bit of kernel experience, and they don't want me, I'm pretty sure, and I'm completely fine with that.


Wikipedia has plenty of gatekeeping too. I once had to submit a single edit three times before the moderators safeguarding the article begrudgingly accepted it.


They do, but they have a stated goal of maximizing contribution. Linux does not, right? I'm asking.


"Maximum number", perhaps not, but Greg KH did at one point want new contributors: https://old.reddit.com/r/linux/comments/2ny1lz/im_greg_kroah...

Q: What would make you even more happy with Linux? GregKH: If you contribute to it.


The kernel dev process is more pathological than what I deal with at $DAYJOB.

Why the hell would I wish that upon myself?


A stable career where you can move to any of the companies who have a dependency on your subsystem.


But I already have that


happy 4 u


Different Wikipedia communities have different governance policies. In the math wikis there's generally a rule that small fixes are not allowed. This stops people from arguing whether slightly better explained sentences are the right edits.


> Marcan certainly can be abrasive (I mean lol, so can Linus)

My impression of a few glancing online interactions is that they're both abrasive but marcan is quite unwise in a way that Linus has had beaten out of him


I'm tired of anaphoras.

And he's not just abrasive. He's a troublemaker. Seriously, a code of conduct violation? It was perfectly clear what Hellwig meant by "cancer".


In my opinion, calling the well-intentioned hard work of others "cancer" is undeniably hyperbolic and dismissive. It is clear that Hellwig used it in this way. To interpret it differently requires bending the mind. Most people would also consider it rude, but I'll grant that rudeness is more subjective.

There is an argument that being hyperbolic, dismissive, and maybe a bit rude isn't as bad as some people make it out to be - that it is a fine way to hash out disagreements and arrive at the best solution, and simply a result of the passion of exceptional engineers. There has historically been much of it in kernel development. But it seems that, as the background and culture of kernel maintainers has broadened, a more measured and positive approach to communication is more constructive.

It doesn't seem like marcan exemplifies this very well either. It is a loss for someone so skilled to abandon collaboration on the kernel, and seems like the unfortunate result of someone being dismissive and rude, and someone else taking that harder than is warranted or healthy.


"To interpret it differently requires bending the mind."

Strange, I think interpreting it your way requires bending the mind. Hellwig clearly used it to describe what he sees as the ill effects of multiple languages in the kernel. It was not used to describe either Rust the language or this particular submission specifically.


> And I also do not want another maintainer. If you want to make Linux impossible to maintain due to a cross-language codebase do that in your driver so that you have to do it instead of spreading this cancer to core subsystems. (where this cancer explicitly is a cross-language codebase and not rust itself, just to escape the flameware brigade).

It was used to describe the Rust for Linux project, as well as any other potential efforts to bring other languages into the kernel, of which there are none. It is clear why someone working on the Rust for Linux project would feel that "this cancer" refers to the project that they are working on.

I'm not trying to pull out pitchforks, I don't want anyone to burn. I just want people to collaborate effectively and be happy, and I think it is empirically clear that calling something that grows/spreads and that you think is bad "cancer" is not useful, and only inflames things. It is not an illuminating metaphor.


I agree with Vegenoid that using diseases for labeling poorly written code is at the very least highly unprofessional. This practice not only diminishes the seriousness of illnesses like cancer when used so casually, but it also cannot provide helpful constructive feedback.

Instead of providing helpful advice like outlining the current situation and suggesting specific improvements (action A, task B, and goal C) to reach the goal, it feels rude and offensive.


There is no specific improvement if the problem is fundamental. There is no "better/right" way to spread a cancer. (I'm not saying it is, just that that is the argument, and in that context, there is no such thing as a common goal to reach some better way. Everyone does not actually have to agree that all goals are valid and should be reached.)

The only helpful advice, which they did give, is don't even start doing this because it's fundamentally wrong.

The linux kernel is like a house where everyone is a vegan. Marcan believes that incorporating some meat in the diet is important, and better than being a vegan. He may even be right. But so what? He makes his pitch, the family says that's nice but no thanks. He then demands that they eat this chicken because he wants to live in the house and wants to eat chicken while living in the vegan house?

I don't see how he has any right to what he wants, and I don't see an existing kernel dev's refusal to cooperate, or even entertain cooperating, as automatically wrong or unreasonable.


> The linux kernel is like a house where everyone is a vegan. Marcan believes that incorporating some meat in the diet is important, and better than being a vegan. He may even be right. But so what? He makes his pitch, the family says that's nice but no thanks. He then demands that they eat this chicken because he wants to live in the house and wants to eat chicken while living in the vegan house?

While I think this a dumb metaphor, it's also incorrect in this context. The Linux kernel explicitly supports C and Rust code, and there are very clear parameters to allow for Rust code to be integrated into parts of the kernel.

Or in other words, the decision has already been made to allow meat into the vegan household, and now one maintainer is explicitly blocking a package of meat from entering the building, even though it has already been decided from on high that meat should be allowed in.

This isn't quite accurate, though, because of the unnecessary metaphor thing. Reading the original mailing list chain all the way through and talking about these events directly is completely sufficient here. The patch was reasonable within the parameters set out for the R4L project. The maintainer of this subsystem blocked it explicitly because they disagree with the idea of R4L in general (calling it a cancer).

The question is not whether or not R4L is a good thing or a bad thing - anyone can have their own opinion on that. R4L is part of Linux, and will be for the foreseeable future, until it either clearly demonstrates its use, or clearly demonstrates its failure. The question (at least as regards the "cancer" comment) is whether it is okay for a maintainer to describe another team's work as cancer, and to publicly block it as much as they can.


Of course it's ok to block something they judge to be harmful, as much as they can. Making exactly that type of judgement is their explicit job as maintainer.

If they are overstepping, then Linus will make that known. Until then, apparently they are not overstepping.

And he can use that image if it communicates the concept he wants to communicate.

It sounds like a valid image to me to apply to the concept of polyglot.

He is saying that "If there is really no way for a rust driver to exist all by itself without any of the c code having to do anything special to accommodate it, then so be it, I guess rust doesn't fit here after all."

rust devs are saying "you're not even helping a tiny bit!". I am saying, no, they're not, so what? They don't have to. They did not request what rust devs are trying to do.

The concession rust devs got to proceed with attempting to use rust in the kernel at all promises almost nothing beyond "well, you can try". It does not promise to facilitate that try at all, really.


No, the rust devs are saying “you do not need to accommodate” and he’s saying “I say no anyway.”


It's like the trope of the hyper emotional significant other that turns practically any statement into "You just called me a dog!!??".


Even if you put that aside, the problem is you offer Hellwig two solutions and he NACKs them both.

  H: I don't want to support multilanguage codebase
  R: We'll have a maintainer verify R4L is behaving properly.
  H: I solved issues because they were unified.
  R: Rust will be mirror of whatever C is, and you're free to break it, R4L will maintain it.
  H: No.


I'll bite and play devil's advocate here - both of those are not a solution to his problem. Ultimately he's the maintainer and he gets the emails if X driver is broken, so he doesn't want to rely on another group to maintain the 'Rust half' of his part of the code. It's also a system that works until it doesn't; the biggest rule of the kernel is no breaking userspace - at some point in the future it won't matter if it's his C changes breaking the Rust drivers, it's still his changes that have to be rolled back if the Rust code isn't updated.

And to clarify I'm not saying he's right or wrong or acting good or bad. I have however expected R4L to ultimately fall apart because of this exact issue, the maintainers have never been on board with maintaining Rust code and that hasn't changed. While that remains the case the project is going to be stuck at a wall - to the point that if they're confident they can maintain the Rust code themselves they should just fork it and do that. If it works well enough they'll eventually be too popular to ignore with people choosing to write their new modules in Rust instead.


That is not a problem. Christoph does not have the right to gatekeep who can use DMA. If he tried to veto an Nvidia graphics driver from using DMA using the excuse that "it might create more work for me", everyone would rightfully tell him to f-off, because that's not his call.


What is the interpretation of "cancer" in this context that isn't rude, offensive, or hostile to the R4L project?


He meant to say "the Rust code will spread everywhere [like cancer]".

I agree it's rude, offensive, and hostile, but there are degrees of things and context matters. "You are cancer" would be much worse. I feel we should try and interpret things in good faith and maintain some perspective. For a single word like this: you can just read over it (which is also what the other Rust people did).

Certainly outright removing Hellwig from the Linux project, as Marcan suggested, is bizarrely draconian.

As I argued a few days ago: part of "being nice" is accepting that people aren't perfect and dealing with that – https://news.ycombinator.com/item?id=42940591


He wasn't talking about Rust specifically; he was referring to codebases in any other language.

He said: “And I also do not want another maintainer. If you want to make Linux impossible to maintain due to a cross-language codebase do that in your driver so that you have to do it instead of spreading this cancer to core subsystems. (where this cancer explicitly is a cross-language codebase and not rust itself, just to escape the flameware brigade).”


Mixing codebases in the Linux core.

I think the conversation is more about people equating R4L as validation for rust or even themselves.


That is not what is happening! Nobody is mixing languages in the Linux core...


> It was perfectly clear what Hellwig meant by "cancer".

No, it is not perfectly clear.

The generous interpretation is that they meant it is "viral", spreading through dependencies across the codebase (which I don't think is technically accurate as long as CONFIG_RUST=n exists).

The less generous way to interpret it is "harmful". The later messages in the thread suggest that this is more likely.

But I'm left guessing here, and I'm willing to give some benefit of the doubt.

That said, the response to this comment was incredibly bad as well.


And Linus’ immediate reply

https://lore.kernel.org/rust-for-linux/CAHk-=wi=ZmP2=TmHsFSU...

(not taking either side, just interesting to read the reply)


This part of the reply exemplifies one of the big problems in the kernel community:

> You think you know better. But the current process works.

Regardless of how badly broken the kernel development process is, Linus and others observe that Linux continues to dominate and conclude that therefore the development process "works". Success in the market blinds the successful vendor to their defects. Sound familiar?

If Linux carries on down this path then eventually something else that has been subject to more evolutionary pressure will displace Linux, and we'll all be telling stories about how Linux got complacent and how it was obvious to everyone they were heading for a fall.

And honestly, with maintainers like Hellwig maybe that's what needs to happen.


Or worse: Linux is so widespread, and has managed to practically kill most Unix alternatives, that progress in OS development is slowed down globally. I would strongly prefer Linux being an OS with a lot of progress to stagnation and possibly no alternative in the next decades.


I find this reply interesting. Linus says that what matters is technical stuff, but even before the social media brigading, the whole thread was nothing but non-technical drama. So why is Linus focused only on that and not Hellwig's behavior?


You have to be pretty clueless not to understand that Marcan is wrong here. He and the rest of the Rust bozos he cliques with should have been kicked out of the kernel the minute they started with their social media drama... of course, drama and Rust are just bound to go hand in hand.


Definitely interesting to read both sides. I think they both present compelling arguments. There's a need to ensure stability with the kernel and avoid interference with outside forces. I suppose balancing that principle with eventual change is an inevitable difficulty.


Up until the point that he tried to leverage social media to get his way in a kernel maintainer dispute? That's just fundamentally not acceptable.

Linus was right to reprimand him for the suggestion.


I don't think there's "the point" when it was pretty much modus operandi for years.


I agree with you, I'm just not personally familiar with their behavior and was trying to be as charitable as possible.


I've contributed here and there over the years, and even got something merged that broke Linus's printer driver. It really isn't unapproachable, frustrating, or demoralizing.


You broke Linus’ printer driver and you’re still alive to post? WOW! ;)


Looks like @marcan deleted his existence on mastodon? Does anyone have a copy of what he said on there?


The tweet he got called out for on the thread was

"Thinking of literally starting a Linux maintainer hall of shame. Not for public consumption, but to help new kernel contributors know what to expect.

Every experienced kernel submitter has this in their head, maybe it should be finally written down."


The person who called him out for that made some testy social media comments of her own this morning.

Personally, seeing

> Being toxic on the right side of an argument is still toxic, [...]

written unironically, on social media, immediately after that person wrote @marcan

> and if that then causes you to ragequit, because you can't actually deal with the heat you've been dishing out coming back around the corner: fuck off

leaves me feeling more sympathetic to marcan's argument about the kernel being full of toxic attitudes, not less. Maybe public shaming isn't the answer but there's a problem here. Maybe don't make comments like that on social media if you want to criticize people for leaning on social media in kernel disputes.


> Maybe don't make comments like that on social media if you want to criticize people for leaning on social media in kernel disputes.

This seems like a tu quoque fallacy. The feedback is either applicable or not, regardless of who said it. They're absolutely correct that being toxic on the right side of an argument is still toxic.

Even if there is hypocrisy (whether judged by you personally or someone else), it wouldn't invalidate the point.


I disagree. The behavior of the person who complains about decorum is relevant because it may indicate a double standard in the community. If a community (such as the Linux kernel maintainer community) is ganging up on an outsider using arguments of decorum, it's highly relevant that the already-accepted members of that community themselves act in a way which would've been deemed unacceptable if they were outsiders.


Of course it doesn't invalidate the point, but it's hard to blame being so exasperated by the incredible hypocrisy that you ragequit at that point.


I can also see him quitting because he was unhappy with people pointing out the toxicity of some of his posts.


Kind of reminds me of Kenneth Reitz somehow


> and if that then causes you to ragequit, because you can't actually deal with the heat you've been dishing out coming back around the corner: fuck off

Certainly in context, this seems fairly reasonable: https://chaos.social/@sima/113961283260455876

Yeah this isn't about "being civil" or "friendly" and even less about "don't call out". This is about calling out in an ineffective and imprecise way, so that the people actually trying to change things are busy patching up your collateral damage instead of implementing real changes, while all you've achieved is airing your frustration for internet drama points.

When you're that harmful with your calling out, eventually I'm going to be fed up, and you get a live round shot across your bow.

And if that then causes you to ragequit, because you can't actually deal with the heat you've been dishing out coming back around the corner: fuck off.

Or as Dave Airlie put it upthread in this conversation: "Being toxic on the right side of an argument is still toxic, [...]"

So please do call out broken things, do change things, I've been doing it for well over a decade in the linux kernel. But do it in a way that you're not just becoming part of the problem and making it bigger.

---

And this is not the first time something like this has happened with Marcan. He may be tired of the Linux devs, but many of them are also tired of him (including some of the people working on Rust, it seems).

And this is part of a conversation on what went wrong here, not an attempt to rally the troops. You really can't compare it to Marcan's stuff. This kind of (selective) demand for absolute perfection is really not great.


> Every experienced kernel submitter has this in their head, maybe it should be finally written down

If true, then it sounds like there are some “missing stairs”[1] (the professionally difficult kind, hopefully not the other kind) in Kernel development.

[1]: https://en.wikipedia.org/wiki/Missing_stair


You can use the waybackmachine:

https://web.archive.org/web/20250204162031/https://social.tr...

However, it seems you need to disable JS as soon as the content loads, or it will be overwritten by a 404.


As a follower of him on Mastodon this makes me sad. Hector posted a lot of valuable, very informative toots that I learned a lot from.


Haha it’s funny that this stuff is still going on. The difficulty of getting things into mainline is why Con Kolivas stopped developing his interactivity-prioritizing schedulers for Linux some 20 years ago. It’s just how the project works.


I agree contributing code to the kernel is by no means approachable or easygoing, but it's not self-evident that this alone is a sign of bad things™, unlike the more specific examples that build up that picture. Are there things and ways I think it could be improved? For sure! I just don't necessarily think they imply the resulting process would be quick and painless.


`apt` (the program) is a relatively recent addition to the APT (Advanced Package Tool) ecosystem - until not that long ago, `apt-get` was the way to install packages, and `apt` is now a "cleaner" way of interacting with APT.


You’ve been able to do that with Thunderbolt 4 for a while (with Display Stream Compression) - I currently drive an 8k ultrawide (7680x2160, or two 4K side-by-side) at 120 Hz off a single Thunderbolt 4 port.


What model?


Pretty sure he's talking about our Samsung Odyssey G9 57"

I too get 7680x2160x120Hz via TB4


> how is it different to Apples integration with Safari?

It’s not, other than Google has a way larger market share (especially if you count Edge/Opera/Brave/etc.) and has been (ab)using that position to push web standards in a direction that favors their business and that other browser vendors have to follow to keep up.

If Safari had Chrome’s market share and was throwing their weight around like Google does and Microsoft did with IE, it’d be the same argument and I’d also personally support forcing them to divest it.


Safari is the #2 browser behind Chrome. It's about 55% to 30%, so while Chrome has a larger market share, it's not an order of magnitude larger.

Really the main difference is that Apple has a captive audience on iOS and no incentives to improve so they don't do anything with it.


18.5% for Safari vs. 65% for Chrome + 5% for Edge = 70%.

It is a magnitude higher.


I think you mean order of magnitude, which means 10x. Magnitude just means size. Chrome's market share is not an order of magnitude higher than Safari.
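To put numbers on it (using the approximate share figures quoted upthread, which are themselves disputed):

```python
# Approximate market-share percentages quoted upthread.
safari = 18.5
chrome = 65.0

ratio = chrome / safari
print(round(ratio, 1))  # 3.5 -- well short of the 10x an "order of magnitude" implies
```

Even folding Edge's ~5% into Chrome's column only gets the ratio to about 3.8x.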


In the business world it is, because going from 18% to 65% market share is much more than a 4X improvement. Market share progress is highly non-linear in cost/investment/strategy. There are network effects at play favoring a winner-takes-all.


A (truly) clever argument! Def seems like a stretch though, especially if you're hoping to save GP's comment by suggesting that this is what they had in mind :-)


No, that might be the word origin but not how it is actually used. Just like "decimate" nowadays does not require a factor 10.

So instead of "10x" substitute "by a large enough factor or margin to make a significant difference". That is totally true globally speaking. Locally, in the US, you could however argue that Apple abuses its iPhone market share to sabotage competition (e.g. streaming, web standards, etc.). That just means you should sue both, not neither.


Decimate used to mean 10% fewer (1 out of ten gone); nowadays folks use it to mean about 90% fewer (9 out of ten gone).


Nothing more evil than pushing standards and even sharing the source code. How dare they...


So why are those standards impossible to keep up with, and why do we already see plenty of sites break under Firefox? Which, by the way, is the only independent browser remaining in the game; even goddamn Microsoft has left the field behind.


Because development costs money. Your "impossible to keep up" here is easily explained by Google simply investing more money in development and thus being able to "innovate" faster. The only way to compete is to invest more, but where do you get that money from?

The easy fix is to make them slow down development, but I fail to see how that's a good thing.


Sure. Continuing my analogy: the British Empire's rule over the seas surely also resulted in technological improvements, but that is not the only way to achieve them.

For a more practical example, Linux is also developed mostly by paid employees, but they are from many different companies and thus improvements can't be weaponized as easily.


Maybe if Mozilla spent more money on development and less money trying to be an NGO they could keep up... Mozilla gets more than enough revenue (from Google ironically), they just spend it poorly.

Or they could do what Brave, Vivaldi and others do and simply use Chromium as a base.


> Or they could do what Brave, Vivaldi and others do and simply use Chromium as a base.

Don't you even see the problem?!

Even Microsoft dropped out from developing a web browser, it literally has a larger scope than a whole OS.

But sure, enjoy your Chrome OS proprietary "open" web.


Again, how can it be a proprietary web if everything is open source and available to every other vendor?

Not sure if you remember all the "native" applets that actually were proprietary before Chrome came on the scene and made JS fast enough to kill them... ActiveX, Flash, Java... Those were the dark ages; because of Google, the web is more open and better than ever...


More like there were actually multiple vendors that would have to agree on a common thing, but they died out so the single leftover can do whatever it wants...


Here's a list of members of the W3C:

https://www.w3.org/membership/list/

It's a lot more than just Google...


As a long-time FF user, what is one website that breaks on FF?

My ad-blockers ruin plenty of websites; I've never met a site that was broken due to FF itself.


Miro boards are simply unusably slow, but plenty of other commonly used websites have annoying breakages, like login screens not actually logging in, and others.


Facebook Messenger video chat. It straight up doesn't work. I have to open a Chromium instance to chat with my family.


Good example, but sounds like Facebook or Firefox are to blame for this, not Google.


The parent asked for examples of sites that don't work on Firefox. I gave him one. No one was talking about who to blame.


I see them all the time on internal websites. Corporate frontend devs favor Chrome and those sites aren’t automatically tested.


A modern word document file (.docx) is literally just a Zip archive with a special folder structure, so unless your company is scanning word document contents I can’t imagine there’s any issue.
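A rough sketch of this in Python's standard library (the two entries here are a minimal stand-in, not the full OOXML layout):

```python
import io
import zipfile

# A .docx is just a ZIP archive with a specific folder layout.
# Build a minimal stand-in in memory to show the idea:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("[Content_Types].xml", "<Types/>")
    z.writestr("word/document.xml", "<w:document/>")

# Reading it back works with any ordinary ZIP tool:
with zipfile.ZipFile(buf) as z:
    print(z.namelist())  # ['[Content_Types].xml', 'word/document.xml']
```

Renaming a real .docx to .zip and opening it shows the same kind of structure, which is why it should pass any scanner that handles ZIP files.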


“Stronger” generally just means “stronger for a given weight”, so manufacturers would likely reduce the final vehicle weight while maintaining the same strength (~safety), or improve strength while keeping weight the same.


Since there's already a binding for Swift that's listed on Tree-sitter's site (https://github.com/ChimeHQ/SwiftTreeSitter), it'd be great to list in your ReadMe how your implementation differs/is better than that one!


Seems to me this is a case of the Sesame Street song, "Which one of these things is not like the other ones?"

There are bindings for Swift, parsers for Swift source, and this utility kit for Swift which seems more focused:

- Tree-Sitter Bindings for Swift provides the foundational tools to use tree-sitter’s parsing capabilities in Swift: https://github.com/ChimeHQ/SwiftTreeSitter

- Tree-Sitter Parser for Swift is a specific implementation that allows tree-sitter to parse Swift code: https://github.com/alex-pinkus/tree-sitter-swift

- Tree-Sitter Kit is a higher-level toolkit that simplifies creating and using tree-sitter parsers in Swift, providing a more integrated and Swift-friendly approach to defining and working with grammars and parsed data structures: https://github.com/daspoon/tree-sitter-kit

This Tree-Sitter Kit looks like a convenience layer on top of the tree-sitter system, designed to work smoothly within Swift, making the process of creating and using parsers more straightforward and idiomatic within the Swift language itself.


That's a good point, thanks.

My understanding is that work exposes nearly the full tree-sitter runtime API, but relies on tree-sitter's standard tech for converting javascript grammar specifications to separately compiled C code.

This work instead exposes a minimal subset of tree-sitter functionality, but enables defining parsers entirely in Swift -- eliminating the need for javascript and mixed-language targets, and streamlining the build process.


You can write the grammars in Swift now, not JavaScript.

