> so is still a bug and should still get a CVE

It's a bug, sure. The V in CVE is for "vulnerability", which is why people treat CVEs as more than just bugs.

If every bug got a CVE, practically every commit would get one and they'd be even less useful than they are now.

At that point, why not just use commit hashes for CVEs and get rid of the system entirely if we're going to say every bug should get a CVE?

> Re: Harassment - Can't the project release a statement saying that the bug writeup is low quality and unable to be reproduced?

If your suggested response to a human DoS is "why can't the humans just do more work and write more difficult-to-word-correctly communication", then you're not understanding the problem.


If you are wasting time wording communication, then aren't you doing it wrong?

I imagine the response would be looking at it briefly, seeing if it looks dangerous or reproducible and getting an AI to return a templated "PoC or GTFO" response.

The mere existence of a CVE doesn't tell anyone whether a bug is valid or not, and security reports should be handled in the same way regardless of whether one exists. For some odd reason people have attached value to having their name logged beside CVEs, despite it not telling you anything.


"human communication is easy, just have an AI say 'buzz off' and the conversation partner and other strangers will always respond respectfully, I don't know why so many people complain about lack of spoons or other social issues".

Thanks doctor, you just solved my anxiety.

I broadly agree that having templates does lower the amount of human effort and emotional labor required, but trust me, it's not a silver bullet, even hitting someone with a template takes spoons.

I don't really care that CVEs in theory are apparently entirely without meaning and can be created for nonexistent bugs; we're talking about the reality of how they're perceived and used.

Like, I'm saying "Issuing garbage such that 100 people have to read it and then figure out what to do is bad, we should instead have a higher bar for the initial issuing part so 1 or 2 people have to actually read it, and 100 people can save some time. We should call out issuing garbage as bad behavior to hopefully reduce it in the future".

You're apparently disagreeing with that and saying "But reading is easy, and the thing is meaningless anyway so this real harm that actually happens is totally fine. We should keep issuing as much garbage as we can, the numbers don't mean anything. It's better to make a pile of garbage and stress the entire system such that no one values or trusts it than to add any amount of vetting or criticism over creating garbage"

idk, I guess we're probably actually on the same page and you're just arguing for arguing's sake because you think you can be a pedant and be technically correct about CVEs. Tell me if I got a wrong read there and you have a more concrete point I'm missing?


But that's not what happened here. These are memory corruption bugs. Probably not meaningful ones, but in the subset of bugs that are generally considered vulnerabilities.

It's more complicated than that though. For security, the whole context has to be considered.

Like for example, look at the linked CVE-2025-12200, "NULL pointer dereference parsing config file"...

Please, explain a single dnsmasq setup where someone is constructing a config file from untrusted input such that this NULL dereference is the difference between being secure and being DoSd or otherwise insecure. If you can conjure up even a plausible hypothetical way this could happen, I'd love to hear it, because it seems impossible to me.

This seems firmly in the realm of issuing CVEs for "post quantum crypto may not be safe from unknown alien attacks"


CVE-2025-1312 bash and sudo privilege escalation

sudo may be exploited to obtain full root privilege when the shell receives attacker-controlled input

to reproduce: execute this shell script and authorize sudo when prompted


If someone can template in data, it's a lot easier to just set "dhcp-script=/arbitrary/code"

If the person templating isn't validating data, then it's already RCE to let someone template into this config file without careful validation.
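
To make that concrete, here's a hypothetical templated config (paths and values made up) where letting untrusted data become whole config lines is already game over, crash or no crash:

    # /etc/dnsmasq.conf produced by some naive templating layer
    dhcp-range=192.168.1.50,192.168.1.150,12h
    # attacker-controlled line inserted verbatim; dnsmasq will run
    # this script on every DHCP lease change
    dhcp-script=/tmp/attacker-owned.sh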

... Also, this is a segfault; the chance anyone can get an RCE out of '*r = 0' with r being slightly out of bounds is close to nil. You'd need an actively malicious compiler.
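
For illustration, the crash class here is roughly the following (a made-up minimal sketch, not the actual dnsmasq code):

    /* Made-up minimal sketch of the crash class, not dnsmasq code. */
    int main(void) {
        /* a near-NULL, "slightly out of bounds" address, e.g. a field
           offset applied to a pointer that ended up NULL */
        char *r = (char *)8;
        *r = 0;   /* faults on the unmapped zero page: a crash, not code execution */
        return 0;
    }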

While CVEs in theory are "just a number to coordinate with no real meaning", in practice a "Severity: High" CVE will trigger a bunch of work for people, so it's obviously not ideal to issue garbage ones.


Maybe we should issue a CVE for company vulnerability response processes that blindly take CVSS scoring as input without evaluating the vulnerability.

> blindly take CVSS scoring as input without evaluating the vulnerability.

Evaluating the CVSS score in your own context is the work I'm talking about.

It does no one any good to have a CVE that says "may lead to remote code execution", when in fact it cannot, and if the reporter did more work, then you wouldn't need hundreds of people to independently do that work to determine this is garbage.


People being able to collectively analyze a vulnerability instead of having to all do it independently is pretty much the whole reason for having a CVE database, so I'm glad we agree.

I mean, I'm fine with the complaint about vulnerabilities that ambiguously refer to possible code execution, but that is a problem that long predates CVE.

Like I said, it depends on the configuration field. But people saying "you have to be root to change this configuration" are missing the point.

If the argument is "CVSS is a complete joke", I think basically every serious practitioner in the field agrees with that.


I identify with this, though I'm further along the path.

Coding was incredibly fun until working in capitalist companies got involved. It was then still fairly fun, but tinged by some amount of "the company is just trying to make money, it doesn't care that the pricing sucks and it's inefficient, it's more profitable to make mediocre software with more features than to really nail and polish any one part"

Adding AI on top impacts how fun coding is for me exactly how they say, and that compounds with companies' misaligned incentives.

... I do sometimes think maybe I'm just burned out though, and I'm looking for ways to rationalize it, rather than doing the healthy thing and quitting my job to join a cult-like anti-technology commune.


I resonate.

For me I'm vaguely but persistently thinking about a career change, wondering if I can find something of more tangible "real world" value. An essential basis for that is the question of whether any given tech job just doesn't hold much apparent "real world" value.


You have to push the signing as far out as possible.

The light sensor must have a key built into the hardware at the factory, and that sensor must attest that it hasn't detected any tampering, that gets input into the final signature.

We must petition God to start signing photons, and the camera sensor must also incorporate the signature of every photon input to it, and verify each photon was signed by God's private key.

God isn't currently signing photons, but if he could be convinced to, it would make this problem a lot easier, so I'm sure he'll listen to reason soon.


It's open source, you can find this trivially yourself in less than a minute.

https://github.com/immich-app/devtools/tree/a9257b33b5fb2d30...


If anyone's got questions about this setup I'd be happy to chat about it!

I’m curious about basically all of it. It seems like such a powerful tool.

I seem to have irritated the parallel commenters tremendously by asking, but it seemed implausible I’d understand the design considerations by just skimming the CI config.

Top of mind would be:

1. How do y'all think about mitigating the risk of somebody launching malicious or spammy PR sites? Is there a limiting factor on whose PRs trigger a launch?

2. Have you seen resource constraint issues or impact to how PRs are used by devs? It seems like Immich is popular enough that it could easily have a ton of inflight PR dev (and thus a ton of parallel PR instances eating resources)

3. Did you borrow this pattern from elsewhere / do you think the current implementation of CI hooks into k8s would be generalizable? I’ve seen this kind of PR preview functionality in other repos that build assets (like CLI tools) or static content (like docs sites), but I think this is the first time I’ve seen it for something that’s a networked service.


1. It only works at all for internal PRs, not for forks. That is a limitation we'd like to lift if we could figure out a way to do it safely though.

2. It's running on a pretty big machine, so I haven't seen it approach any limits yet. We also only create an instance when requested (with a PR label).

3. I've of course been inspired by other examples, but I think the current pattern is mostly my own, if largely just one of the core uses of the flux-operator ResourceSet APIs [1]. It's absolutely generalizable - the main 'loop' [2] just templates whatever Kubernetes resources you like based on the existence of a PR; you could put absolutely anything in there.

[1]: https://fluxcd.control-plane.io/operator/resourcesets/github...

[2]: https://github.com/immich-app/devtools/blob/main/kubernetes/...
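
Very roughly, from memory (the names, label, and field details here are illustrative approximations, see [1] for the actual schema), the shape of it is an input provider watching labeled PRs plus a ResourceSet that stamps out resources per PR:

    # Simplified sketch from memory; see the flux-operator docs [1]
    # for the real ResourceSet / ResourceSetInputProvider schema.
    apiVersion: fluxcd.controlplane.io/v1
    kind: ResourceSetInputProvider
    metadata:
      name: immich-prs              # illustrative name
    spec:
      type: GitHubPullRequest
      url: https://github.com/immich-app/immich
      filter:
        labels:
          - "preview"               # only labeled PRs get an environment
    ---
    apiVersion: fluxcd.controlplane.io/v1
    kind: ResourceSet
    metadata:
      name: immich-pr-previews
    spec:
      inputsFrom:
        - kind: ResourceSetInputProvider
          name: immich-prs
      resources:
        # everything below is templated once per matching PR; in practice
        # it's the whole app stack, not just a namespace
        - apiVersion: v1
          kind: Namespace
          metadata:
            name: pr-<< inputs.id >>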


Wow. What a rude way to answer.

Sometimes it is also rude to ask without looking in the obvious place yourself. It is about signaling that ”my” time is more precious than ”your” time, so I let them do that check for me if I can use someone else's time.

I think we might have hit the inflection point where being rude is more polite. It's not that I want people to be rude to me, it's that I don't want to talk to AI when I intend to be talking to a person, and anyone engaging with me via AI is infinitely more disrespectful than any curse word or rudeness.

These days, when I get a capitalized, grammatically correct sentence, with proper punctuation to boot, there is an unfortunate chance it was written using an AI and I am not engaging fully with a human.

its when my covnersation partner makes human mistakes, like not capitalizing things, or when they tell me i'm a bonehead, that i know i'm talking to a real human not a bot. it makes me feel happier and more respected. i want to interact with humans dammit, and at this point rude people are more likely to be human than polite ones on the internet.

i know you can prompt AIs to make releaistic mistakes too, the arms race truly never ends


If I went to a lot that had a sign at the entrance saying "Open Source Cars, feel free to open the hood and look to learn stuff. No warranty implied. Some may not function. All free to duplicate, free to take parts from, and free to take home", and then took a car from the lot and drove it home, no I would not be surprised if it fell apart before getting out of the lot.

When you purchase a car, you pay actual money, and that adds liability, so if it implodes I feel like I can at least get money back, or sue the vendor for negligence. OSS is not like that. You get something for free and there is a big sign saying "lol have fun", and it's also incredibly well known that software is all buggy and bad with like maybe 3 exceptions.

> If you bought a car and your dealer had you sign an EULA with that sentence in it (pertaining specifically to the security features of your car)

If the security features are implemented in software, like "iOS app unlock", no I would not expect it to actually be secure.

It is well known that while the pure engineering disciplines, those that make cars and planes and boats, mostly know what they're doing... the software engineering industry knows how to produce code that constantly needs updates and still manages to segfault at so much as a strong breeze, even though memory safety has been a well understood problem for longer than most developers have been alive.


> then took a car from the lot and drove it home, no I would not be surprised if it fell apart before getting out of the lot.

Congrats, the brakes failed, you caused bodily damage to an innocent bystander. Do you take full responsibility for that? I guess you do.

Now build a security solution that you sell to millions of users. Have their private data exposed to attackers because you used a third party library that was not properly audited. Do you take any responsibility, beyond the barebones "well I installed their security patches"?

> It is well known that while the pure engineering disciplines, those that make cars and planes and boats, mostly know what they're doing... the software engineering industry knows how to produce code that constantly needs updates and still manages to segfault at so much as a strong breeze, even though memory safety has been a well understood problem for longer than most developers have been alive.

We're aligned there. In a parallel universe, somehow we find a way to converge. Judging by the replies and downvotes, not in this universe.


Even better than earlyoom is systemd-oomd[0] or oomd[1].

systemd-oomd and oomd use the kernel's PSI[2] information which makes them more efficient and responsive, while earlyoom is just polling.

earlyoom keeps getting suggested, even though we have PSI now, just because people are used to using it and recommending it from back before the kernel had cgroups v2.

[0]: https://www.freedesktop.org/software/systemd/man/latest/syst...

[1]: https://github.com/facebookincubator/oomd

[2]: https://docs.kernel.org/accounting/psi.html
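
For anyone curious what "using PSI" looks like in practice, here's a minimal sketch (not code from either project, just the kernel interface documented in [2]): you write a trigger into /proc/pressure/memory and then block in poll() until the kernel reports the stall threshold was crossed, instead of waking up on a timer to re-read statistics.

    /* Minimal sketch of a PSI trigger, per [2]; not systemd-oomd/oomd code.
     * Sleep until the kernel reports memory pressure instead of waking
     * up on a fixed interval. */
    #include <fcntl.h>
    #include <poll.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        int fd = open("/proc/pressure/memory", O_RDWR | O_NONBLOCK);
        if (fd < 0) { perror("open"); return 1; }

        /* Notify when tasks are stalled on memory for more than 150ms
         * within any 1-second window. */
        const char trig[] = "some 150000 1000000";
        if (write(fd, trig, strlen(trig) + 1) < 0) { perror("write"); return 1; }

        struct pollfd pfd = { .fd = fd, .events = POLLPRI };
        for (;;) {
            /* Blocks, using no CPU, until the kernel raises the event. */
            if (poll(&pfd, 1, -1) < 0) { perror("poll"); return 1; }
            if (pfd.revents & POLLERR) { fprintf(stderr, "trigger lost\n"); return 1; }
            if (pfd.revents & POLLPRI)
                printf("memory pressure threshold crossed; act here\n");
        }
    }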


Do you have any insight into why this isn't included by default in distros like Ubuntu? It's kind of bewildering that the default behavior on Ubuntu is to just lock up the whole system on OOM.

I'm pretty sure systemd-oomd is enabled by default in Fedora and Ubuntu desktop.

I think it's off on the server variants.


Is there any way to get something like oomd or zram that works on GPU memory? I run into GPU memory leaks more often. It seems to be Electron, usually.

The GPU memory model is quite different from the CPU memory model, with application-level explicit synchronization and coherency and so on. I don't think transparent compression would be possible, and even if it were, it would surely carry a drastic perf downside.

Kubuntu LTS definitely didn't have it by default. And there are no system settings exposing it (or ZRAM).

"earlyoom is just polling"?

> systemd-oomd periodically polls PSI statistics for the system and those cgroups to decide when to take action.

It's unclear if the docs for systemd-oomd are incorrect or misleading; I do see from the kernel.org link that the recommended usage pattern is to use the `poll` system call, which in this context would mean "not polling", if I understand correctly.


systemd-oomd, oomd, and earlyoom all do poll for when to actually take action on OOM conditions.

What I was trying to say is that the actual information on when there's memory pressure is more accurate for systemd-oomd / oomd because they use PSI, which the kernel itself is updating over time, and they just poll that, while earlyoom is also internally making its own estimates at a lower granularity than the kernel does.


Unrelated to the topic, it seems awfully unintuitive to name a function ‘poll’ if the result is ‘not polling.’ I’m guessing there’s some history and maybe backwards-compatible rewrites?

Specifically, earlyoom's README says it repeatedly checks ("periodically polls") the memory pressure, using CPU each time even when there is no change. The "poll" system call waits for the kernel to notify the process that the file has changed, using no CPU until the call resolves. It's unclear what systemd-oomd does, because it uses the phrase "periodically polls".

The "poll" system call does not wait until a file changes.

s/the file has changed/it has published new data to the file descriptor/

See https://docs.kernel.org/accounting/psi.html


Poll takes a timeout parameter. ‘Not polling’ is just a really long timeout

"Let the underlying platform do the polling and return once the condition is met"

Thanks, I will try that out.

Did you know if you open a page in a private browser window, once you close that page all the cookies vanish? It's even better than a button which might not even work.

It's purely out of principle, because in a proper cookie popup rejecting everything should take the same number of clicks as accepting everything does.

I do understand that this is one of those generic ones (I've seen it many times) which the original creator of the website just slapped on.


https://mail.tarsnap.com/tarsnap-announce/msg00050.html

> Following my ill-defined "Tarsnap doesn't have an SLA but I'll give people credits for outages when it seems fair" policy, I credited everyone's Tarsnap accounts with 50% of a month's storage costs.

So in this case the downtime was roughly 26 hours, and the refund was for 50% of a month, so that's more than a 1-1 downtime refund.


I believe you are making a technical statement and the parent poster is making a legal one. You're both right I guess

The parent poster is not making a legal statement. They copied/pasted the first line of the Readme. I made the clarification that the note is a legal disclaimer, not a technical requirement, so people, including the parent poster, are not confused.
