unrelated to phones: a lot of (more professional) malware has moved to not persisting itself in root space (or at all) so as not to leave traces; instead it just relies on being able to regain root access as needed after every reboot, with all the juicy parts living in memory only (as in: how often do you even reboot your phone?)
I think (but am not fully sure) this also applies to phone malware.
I.e., no, it doesn't work.
Not unless you:
- ban usage of all old phones (which don't get security updates)
- ban usage of all cheap phones/phones with unreliable vendors
- have CHERI-like protections in all phones and, in general, somehow magically have no reliable root privilege escalations anymore
Oh, and advanced toolkits sometimes skip root-level persistence and go directly into firmware components of all kinds.
Furthermore, proper 2FA is what is supposed to make online banking secure, not make-pretend 2FA where both factors live on the same device (your phone).
And even without proper 2FA, it is fully sufficient to, e.g., classify rooted phones as higher risk and limit how much money can be transferred/handled with them (the limit should exempt ongoing long-term automated recurring transactions, like rent). There really is no reason to ban it.
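A minimal sketch of what such a risk-tiered limit could look like (all names, tiers, and amounts here are made up for illustration):

```rust
// Hypothetical risk-based transfer cap: rooted (or outdated) devices get a
// much lower limit, while previously authorized recurring payments bypass it.
enum DeviceRisk {
    Normal,
    Rooted, // could also cover: no security updates, unknown vendor, ...
}

struct Transfer {
    amount_cents: u64,
    is_recurring_standing_order: bool, // e.g. rent, authorized earlier
}

fn transfer_allowed(risk: &DeviceRisk, t: &Transfer) -> bool {
    // Standing orders were authorized before the device was (possibly)
    // compromised, so the per-device cap shouldn't block the rent.
    if t.is_recurring_standing_order {
        return true;
    }
    let cap_cents = match risk {
        DeviceRisk::Normal => 1_000_000, // 10,000.00 -- illustrative number
        DeviceRisk::Rooted => 50_000,    //    500.00 -- illustrative number
    };
    t.amount_cents <= cap_cents
}

fn main() {
    let rent = Transfer { amount_cents: 120_000, is_recurring_standing_order: true };
    let adhoc = Transfer { amount_cents: 900_000, is_recurring_standing_order: false };
    assert!(transfer_allowed(&DeviceRisk::Rooted, &rent)); // rent still goes through
    assert!(!transfer_allowed(&DeviceRisk::Rooted, &adhoc)); // large ad-hoc transfer blocked
}
```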
- effective monetizability of a lot of AI products seems questionable
- so AI costs are strongly subsidized in all kinds of ways
- which is causing all kinds of strange dynamics and is very much incompatible with "free market self-regulation" (hence why a company long-term running on investor money _and_ under-pricing any competition which isn't subsidized is theoretically not legal in the US. Not that the US seems to care about actually running a functioning self-regulating free market, going back at least as far as Amazon. Turns out moving from "state subsidized" to "subsidized by the rich" somehow makes it no longer problematic / anti-free-market / non-liberal... /s)
In general "failing to run (successfully)" should per-see been seen as a bad signal.
It might still be:
- the closest to a correct solution the model can produce
- helpful for finding out what is wrong
- intended (e.g. in a typical very short red->green unit-test dev loop you want to generate code which doesn't run correctly _just yet_; tests for newly found bugs are supposed to fail until the bug is fixed; etc.); see the sketch after this list
- and if "making it run" means removing sanity checks, doing something semantically completely different, or similar, then, like the OP author said, it's one of the worst outcomes
it hinders your long-term decision making and in turn makes it more likely that you make risky decisions which could end badly for you (because you are slightly less risk-averse)
but that is _very_ different from making decisions with the intent to kill yourself
you always need a different source for that, which here seems to have been ChatGPT
also, how do you think he ended up believing he needed to take that level of testosterone, or testosterone at all? A common source of that is absurd body ideals, often propagated by doctored pictures. Or the kind of unrealistic pictures ChatGPT tends to produce for certain topics.
and we also know that people with mental health issues have gone basically psychotic due to AI chats without taking any additional drugs...
but overall this is irrelevant
what is relevant is that they are hiding evidence which makes them look bad in a (self) murder case, likely with the intent to avoid any form of legal liability/investigation
that says a lot about a company, or about how likely the company thinks it is that they might be found at least partially liable
if that really were a nothingburger they would have nothing to risk, and could even profit from such a lawsuit by setting precedent in their favor
Who, exactly, are you trying to argue against? Because nowhere in my comment did I absolve OpenAI of anything; I explicitly said multiple things can be a factor.
And, no, I don’t buy for a second the mental gymnastics you went through to pretend testosterone wasn’t a huge factor in this.
well, in suicide cases it's hardly possible to separate co-factors from main factors, but we do know that mentally ill people have gotten into what is more or less psychosis from AI usage _without consuming any additional drugs_.
but this is overall irrelevant
what matters is that OpenAI selectively hid evidence in a murder case (suicide is still self-murder)
now, the context of "hiding" here is... complicated, as it seems to be more about hiding from the family (potentially in the hope of avoiding anyone investigating their involvement) than hiding from a law-enforcement request
but that is still super bad, "people have gone to prison for this kind of stuff" levels of bad, the kind that deeply damages trust in a company which, if it reaches its goal, either needs to be very trustworthy or forcefully nationalized, as anything else would be an extreme risk to the sovereignty and well-being of both the US population and the US as a nation... (which might sound like a pretty extreme opinion, but AGI is overall on the threat level of intercontinental atomic weapons, and I think most people would agree that if a private company were the first to invent, build, and sell atomic weapons, it would either be nationalized or regulated to the point where it's more or less "as if" nationalized (as in: the state has full insight into everything, a veto right on all decisions, the company can't refuse to work with it, etc.)).
They are playing a very dangerous game there (unless Sam Altman assumes that the US gets fully converted into an autocratic oligarchy with him as one of the oligarchs, in which case I guess it wouldn't matter).
please don't (replace your typical eBPF filter with it, but do replace your custom kernel modules with it where viable ;))
rust's type system is not a security mechanism
it's a mechanism to avoid bugs which can become security issues, not a way to enforce well-behavedness across a kernel boundary
as an example, the current rust compiler has some known bugs where it accepts unsound programs; they are not treated as super high priority because you most likely won't run into them by accident. If rust were a verification system enforcing security at a kernel boundary, these would be severe CVEs...
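One long-standing example of such a soundness hole is rust-lang/rust#25860 (implied bounds on nested references); a condensed version of the well-known reproducer, which compiles in entirely safe Rust:

```rust
// rust-lang/rust#25860: contravariance plus an implied bound lets safe code
// stretch any lifetime to 'static; the compiler accepts this unsound program.
static UNIT: &&() = &&();

fn foo<'a, 'b, T>(_: &'a &'b (), v: &'b T) -> &'a T { v }

fn bad<'a, T>(x: &'a T) -> &'static T {
    let f: fn(_, &'a T) -> &'static T = foo; // unsound coercion, accepted anyway
    f(UNIT, x)
}

fn main() {
    let dangling: &'static String;
    {
        let s = String::from("oops");
        dangling = bad(&s);
    } // `s` is dropped here, but `dangling` still points at it
    println!("{dangling}"); // use-after-free, in 100% safe code
}
```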
also eBPF verification checks more properties, e.g. that a program will deterministically terminate and can't take too long to do so, which is very important for the kind of thing eBPF is(1).
and eBPF programs are also not supposed to do anything overly complex or computationally expensive; instead they should only do "simple" checks and accounting, and potentially delegate some parts to a user-space helper program. So most of the nice benefits Rust has aren't really that useful there.
In the end there is a huge gap between the kind of "perfect verification" you need for something like eBPF and "type checking to avoid nasty bugs". The latter defends against mistakes, the former against malicious code.
To be fair, if your use case doesn't fit into eBPF at all and your choice is between rex-rs and a full kernel driver, rex-rs seems a far better choice than a fully custom Rust driver in a lot of ways.
IMHO it would be great if Rust verification could at some point become good enough to reliably protect against malicious code, with extensions for enforcing guaranteed termination/a maximum execution budget. But Rust isn't anywhere close to that, and it's also not a core goal Rust development is focused on.
(1): In case anyone is wondering how that works given that the halting problem is undecidable: the halting problem applies to arbitrary programs, but there are subsets of programs which can be proven to halt (or not halt). E.g. `return 0` is trivially proven to halt and `while True: pass` trivially proven to not halt (though `while(1){}` is UB in C++ and hence might be compiled into a program which halts; it's still an endless loop in C).
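To sketch the difference (Rust here purely for illustration; the eBPF verifier does this kind of reasoning at the bytecode level, e.g. by requiring bounded loops):

```rust
// Provably terminates: the loop bound is a compile-time constant, so a
// verifier can simply count the (at most 16) iterations.
fn bounded_sum(xs: &[u32; 16]) -> u32 {
    let mut acc = 0u32;
    for &x in xs.iter() {
        acc = acc.wrapping_add(x);
    }
    acc
}

// No known termination proof: whether this halts for every input is the
// (still open) Collatz conjecture, so a verifier would have to reject it.
fn collatz_steps(mut n: u64) -> u64 {
    let mut steps = 0;
    while n != 1 {
        // may overflow for huge n; ignored for the sake of the sketch
        n = if n % 2 == 0 { n / 2 } else { 3 * n + 1 };
        steps += 1;
    }
    steps
}
```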
> which is very important for the kind of thing eBPF is(1)
The question is, going into 2026, what kind of thing is eBPF? It seems like all hope of it being a security boundary has been thwarted by micro-architectural vulnerabilities to the extent that you can no longer load eBPF programs as non-root. So, is it a security boundary? That's an honest question that I've not been able to find an answer to in the kernel documentation or recent mailing list posts.
If it's not a security boundary, what is it? There are a few other nice properties enforced by the verifier, like prototypes for a subset of kernel functions, which provide some load-time validation that you've built against a compatible kernel. That's something that's lost here, so we don't get the same compile-once, run-everywhere properties eBPF has. One might argue this is a big loss, but in the branch where eBPF is not a security subsystem, it's worth asking whether these are strictly necessary checks that need to be enforced, or whether they're niceties that bring a higher hope of stability and reduce the burden of code review, and are perfectly fine to bypass given those caveats.
IMO eBPF is best viewed as a mechanism that allows you to load "arbitrary" code in specific kernel paths, while guaranteeing that the kernel won't hang or crash.
That's it. Though I said "arbitrary" because the program has to pass the verifier, which limits valid programs to ones where it can make the stability guarantees.
> If AI use produces obviously inferior work, how did it win in the first place?
they used some AI placeholders during development, as that can majorly speed up/unblock the dev loop while not really having any ethical issues (you still hire artists to produce all the final assets), and in some corner case they forgot to replace the placeholder
also, some of the tooling they might have used might technically count as gen AI. E.g. way before LLMs became big I dabbled a bit in gen AI, and there were some decent line-work smoothing algorithms and similar with none of the ethical questions: tools which help remove some dumb, annoying overhead for artists but don't replace "creative work", yet which technically are gen AI anyway...
I think this mainly shows that a blanket ban on "gen AI", instead of one on, idk, "gen AI used in ways which replace artists", is kinda tone-deaf/unproductive.
> AI placeholders during development as it can majorly speed up/unblock
Zero-effort placeholders have existed for decades without GenAI, and were better at the job. The ideal placeholder gives an idea of what needs to go there, while also being obvious that it needs to be replaced. This [1] is an example of an ideal placeholder, and it was made without GenAI. It's bad, and that's good!
A GenAI placeholder fails at both halves of what a placeholder needs to do. There's no benefit for a placeholder to be good enough to fly under the radar unless you want it to be able to sneak through.
it's not better, as such obvious placeholders fundamentally fail to capture the atmosphere and look of a scene
this means that for some use cases (early QA, 3D design tweaks before the final graphic is available, etc.) they are fully useless
it's both viable and strongly preferable to track placeholders in some consistent way unrelated to their looks (e.g. a bool property associated with each placeholder asset), or else you might overlook some rarely seen corner-case textures when doing the final cleanup
so no, placeholders don't need to be obvious at all, and, like mentioned, them looking out of place can be an issue for some usages. Having something resembling the final design is better _iff_ it's cheap to do.
so no, they aren't failing, they are succeeding, if you have proper tooling and don't rely on a crutch like "I will surely notice them because they look bad"
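A minimal sketch of that kind of tooling (the asset/pipeline structure here is made up for illustration):

```rust
// Placeholders are tracked by explicit metadata, not by how they look,
// so the final-cleanup gate can't miss a stand-in that looks "too good".
struct Asset {
    path: String,
    is_placeholder: bool, // set by whatever tool produced the stand-in
}

/// Release gate: fail loudly if any placeholder survived into the final build.
fn check_no_placeholders(assets: &[Asset]) -> Result<(), Vec<String>> {
    let leftovers: Vec<String> = assets
        .iter()
        .filter(|a| a.is_placeholder)
        .map(|a| a.path.clone())
        .collect();
    if leftovers.is_empty() {
        Ok(())
    } else {
        Err(leftovers) // print these and abort the release build
    }
}
```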
I've actually considered hiring artists to help me out a few times too under sort of comparable circumstances? I could use AI to generate basic assets, and then hire artists for the real work! More work for artists, better quality for me. Unfortunately, I fear I'd get yelled at (possibly as a traitor to both sides?)
Frankly, in the wider debate, I think engagement algorithms are partially to blame. Nuanced approaches don't get engagement, so on every topic everyone is split into two or more tribes yelling at each other. Folks in the middle who just want to get along have a hard time.
(Present company excepted of course. Dang is doing a fine job!)
it's an extreme (maybe the most extreme) example of something which is "gen AI" but not problematic
and as such, a naive rule saying "no gen AI at all" is pretty bad competition-rule design
Reductio ad absurdum is a form of logic which takes an argument to its logical conclusion in order to demonstrate that it is absurd if it were to be taken on its face. Whether or not anyone applies it that way in reality is irrelevant.
some pretty decent "line smoothing" algorithms are technically gen AI but have none of the ethical issues
You'd have to cite an actual example of something this ridiculous. Non gen-AI algorithms have been line smoothing just fine for 2 decades now for less than a trillionth of the resources required to use gen AI for the same task.
yes, and here is a fun fact: most of the push for mass surveillance comes from the European Council, which literally is "just" the locally elected leaders...
not some vague, far-away "the EU" (personified) thing
which also means you can apply pressure on them locally
furthermore, the EU's top court(s) may have hindered mass-surveillance laws in member states more often than the Council has pushed for them...
and if we speak as of "now", not just the UK but also the US and probably many other states have far more mass surveillance than the EU has "in general".
so yeah, the whole "the EU is at fault for everything" sentiment makes little sense. I guess in some cases it's an excuse from people who have given up on politics. But given how often EU decisions are presented severely out of context, I guess some degree of anti-EU propaganda is in there, too.
> most of the push for mass surveillance comes from the European Council, which literally is "just" the locally elected leaders...
Factually incorrect.
The European Parliament is elected. The Council is appointed, so there is no direct democratic incentive for the Council to act on and no direct electorate to please.
On top of that, the actually elected European Parliament can only approve (or turn down) directives authored by the Council. It has no authority to draft policies on its own.
To make matters even worse, the European Council, which drafts the policies, has no public minutes to inspect, which obviously makes it ripe for corruption. Which evidently there is a lot of!
Looking at the complete picture, the EU looks like a construct designed intentionally to superficially appear democratic while in reality being the opposite. The more you look at how it actually works, the worse it looks. Sadly.
In short, there are three core institutions: the "technocratic" European Commission, the European Parliament elected by direct popular vote, and the Council ("of the EU"/"of ministers") made up of the relevant (in terms of subject matter) ministers of the sitting national governments. The law-making procedures depend on the policy area etc., but usually, in the policy areas where the EU is fully competent, the Commission (the democratically least accountable of the three bodies) by default makes the initiatives and negotiates/mediates them further along with the Parliament and the Council. Only the last two together really have the power to finally approve actual legislation, usually either Regulations (directly applicable in member states as such, and so an increasingly preferred instrument of near-full harmonisation) or Directives (requiring separate national transposition/implementation and usually leaving more room for national-level discretion otherwise as well).
While not fully comparable to nation-state parliaments, the powers of the EU Parliament have been strengthened vis-à-vis both the Commission and the Council, and it's certainly long been a misrepresentation to say that they, e.g., only have the power to "approve or turn down" proposals of the Commission and/or the Council.