Windows prioritizes phoning home and data collection over UX. If you have a corporate install you’ll also have negligent EDR software killing your boot times.
You can get fast boot times on Linux if you care to tweak things.
I’ve always just blamed the extreme bloat of the web and the lack of design for poor connections for the lack of performance on a two-bar signal. HN usually works fine on it, but that’s about it for sites I visit.
Rust technically isn’t a memory safe language the second you use “unsafe”. Rust advocates tend to pretend they can have their cake and eat it too when comparing it to other low level languages. No, just because you have the word unsafe next to the scary parts doesn’t make it okay.
I’ve written a good chunk of low-level/bare-metal Rust; unsafe was everywhere and extremely unergonomic. The safety guarantees of Rust are also much weaker in such situations, which is why I find Zig very interesting.
No OOB access, no wacky type coercion, no null pointers: that solves a huge portion of my issues with C. All I have to do is prove my code doesn’t have UAF (or not, if the program isn’t critical) and I’m basically on par with Rust with much less complexity.
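To make that concrete, here’s a minimal sketch of what those checks look like (recent Zig syntax; the values are made up):

const std = @import("std");

pub fn main() void {
    const buf = [_]u32{ 1, 2, 3 };

    // No wacky coercion: narrowing a u64 to usize takes an explicit @intCast.
    const big: u64 = 2;
    var i: usize = @intCast(big);
    i += 3; // i is now 5, a runtime value

    // OOB access is a safety-checked panic in Debug/ReleaseSafe builds,
    // not a silent read of adjacent memory as in C.
    std.debug.print("{}\n", .{buf[i]});
}

In Debug/ReleaseSafe the last line trips a bounds check instead of reading whatever happens to sit past the buffer.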
The point of unsafe is that you have small bubbles of unsafe which you can verify rigorously, or check with tools like Miri to make sure their invariants are upheld, and you build safe abstractions on top of that unergonomic part. Looking at embedded-hal, and at the extreme embassy, you can see the value of it. If you don't do any abstraction, I definitely agree Rust is not fun to write at all.
The safety guarantees of Rust the language around unsafe are just as good as C or Zig if you use the appropriate facilities (raw pointers, MaybeUninit, UnsafeCell/Cell, Option for nullability, Pin<> etc). Sometimes this is made unnecessarily difficult by standard library code that expects all the guarantees of ordinary Safe Rust instead of accepting more lenient input (e.g. freely aliasable &Cell<T>'s), but such cases can be addressed as they're found.
My point is that it’s easier to write correct Zig code than correct unsafe Rust. Raw pointers can be null in Rust, so you should use NonNull<T>, but there are aliasing rules that are easy to mess up. And there’s the difficulty with the stdlib, as you mentioned.
I don’t actually mind Rust when I’m able to write in safe userland, but for embedded projects I’ve had a much better time with Zig.
Programs aren’t text that you run on a computer though. Programs are text that describe an abstract syntax tree which encodes the operational semantics of the thing you’re computing.
Maybe (likely) you could come up with a more convenient set of operations, but I don’t really see how expressing that AST as plain text is really holding things back.
Consider that when doing 3D modeling you usually do not work on the mesh data itself but on a visual representation of it. Sometimes you have to go under the hood of this representation to write e.g. shaders. But you'd like not to have to do that at all for the daily work.
In particular, the syntax tree is also Just Another Representation of the functionality... But it's still way overspecified, compared to your intent, since it has lots of implementation details encoded in it. Actually it is the tests that get closer to an exact representation of what you intend (but still, not very close). (This is also why I love React and declarative programming: because it lets me code in a way which is closer to the model of what I intend that I hold in my head. Although still not that close).
So, programming seems similar to the mesh data for a model to me. The more you can get a representation which is faithful to the programming intent, the more powerful you are. LLMs demonstrate that natural language sorta does this... but not really, or at least, not when the 'compiler' is a stochastic parrot. On the flip side, it gets you part of the way, and then you can iterate from there by other methods.
Incidentally, coming from a half-baked physics background: to me this feels very similar to how, as physics moved closer to the fundamental theories of GR and QFT, it was forced to adopt the mathematical framework of representation theory[1], which is to say, to reckon with the fact that
(a) a mathematical model like a group is a representation of a physical concept, not the concept itself
(b) this process of representing things by mathematical models has some properties that are inescapable, for instance the model must factor over the ways you can decompose the system into parts
(c) in particular there is some intrinsic coordinate-freedom to your choice of model. In physics, this could be the choice of say coordinate frame or a choice of algebraic system (matrices vs complex numbers vs whatever); in programming the choice of programming language or implementation detail or whatever else
(d) the coordinate-freedom is forced to align at interfaces between isolated systems. In physics this corresponds to the concept of particles (particularly gauge bosons like photons; less sure about fermions...); in programming it corresponds to APIs and calling conventions and user interfaces: you can have all the freedom you want in the details, but the boundaries are fixed by how they interop with each other.
All very hand-wavey, since I understand neither side well... but I like to imagine that someday there will be a "representation theory of software" class in the curriculum (which would not be dissimilar from the formal-language concepts of denotational/operational semantics, but maybe the overlaps with physics could be exploited somehow to share some language?)... it seems to me like things mathematically kinda have to go in something like this direction.
It’s semantics. Zig can still have dangling references/UAF. You can do something like ‘var foo: *Bar = @intToPtr(*Bar, 0x0)’, but in order to “properly” use the zero address to represent state you have to use ‘var foo: ?*Bar = null’, which is a different type than ‘*Bar’ that the compiler will force you to check before accessing.
It’s the whole “make it easy to write good code, not impossible to write incorrect code” philosophy of the language.
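As a rough sketch of the difference (Bar and the function names are hypothetical):

const Bar = struct { n: u32 };

fn readOptional(maybe: ?*const Bar) u32 {
    // A '?*const Bar' cannot be dereferenced directly; the compiler
    // forces an explicit unwrap, e.g. via if-capture or 'orelse'.
    if (maybe) |bar| return bar.n;
    return 0;
}

fn readPlain(bar: *const Bar) u32 {
    // A plain pointer can never hold null, so no check is needed here.
    return bar.n;
}

pub fn main() void {
    const b = Bar{ .n = 7 };
    _ = readPlain(&b); // readPlain(null) would be a compile error
    _ = readOptional(&b); // '&b' coerces to '?*const Bar'
    _ = readOptional(null); // fine: null is a valid '?*const Bar'
}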
Judging from the article, Zig would have prevented the CVE.
> This includes memory allocations of type NV01_MEMORY_DEVICELESS which are not associated with any device and therefore have the pGpu field of their corresponding MEMORY_DESCRIPTOR structure set to null
This does look like the type of null deref that Zig does prevent.
Looking at the second issue in the chain, I believe standard Zig would have prevented that as well.
The C code had an error that caused the call to free to be skipped:
threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);
status = rmapiMapWithSecInfo(/*…*/); // null deref here
threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
Zig’s use of ‘defer’ would ensure that free is called even if an error occurred:
threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);
defer threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
status = try rmapiMapWithSecInfo(/*…*/); // null deref here
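Here’s a runnable toy version of that pattern, with the names borrowed from the snippet above and the failing call simulated:

const std = @import("std");

fn rmapiMapWithSecInfo() !u32 {
    // Stand-in for the real call: always fails, to exercise the error path.
    return error.MappingFailed;
}

fn mapWithCleanup() !u32 {
    std.debug.print("threadStateInit\n", .{});
    // The deferred call runs when this scope exits, including when 'try'
    // below propagates an error, so the free can't be skipped.
    defer std.debug.print("threadStateFree\n", .{});
    return try rmapiMapWithSecInfo();
}

pub fn main() void {
    _ = mapWithCleanup() catch 0;
}

Running it prints threadStateInit followed by threadStateFree, even though the mapping call failed.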
Nothing can prevent a sufficiently belligerent programmer from writing bad code. Not even Rust, which I assume you’re advocating for without having read the greater context of this thread.
Especially in the case of GP, I'd say Rust is not the main recommendation, although it is one. I would concur that Rust is only one of many decent languages (for memory safety or otherwise).
Still, there are languages with guardrails, and then there are languages with guardrails, and the order for memory safety is probably something like C < C++ < Zig < Rust < managed (GC) languages.
No, the solutions I spoke about were language features that make the mistakes trivial to avoid or impossible to make.
If your bar for mistakes is “what if you forget to add literally the next line of code in the incredibly common pattern”, I don’t really care to have a discussion about programming languages anymore.
You can forget to increment a loop variable and have your program not terminate, so why don’t you program in a language of exclusively primitive recursive functions?
You won't get anywhere with people who just like to argue.
Note that the mention of Zig that I responded to was in reference to Tony Hoare's "billion dollar mistake", which was making null a valid value of a pointer type, not failing to free after use, which is a quite different issue. As I noted, the mistake doesn't occur in Zig because null is not a valid value for a pointer, only for an optional pointer, which must be unwrapped with an explicit null test.
I do think it's a bit too easy to forget a deferred free, although it's possible for tools to detect them. Unfortunately Andrew Kelley is prone to being extremely opinionated about language design (GingerBill is another of that sort) and so macros are forever banned from Zig, but macros are the only mechanism for encapsulating a scoped feature like defer.
> You won't get anywhere with people who just like to argue.
Yeah not really sure why I bother. I think I just get bothered that Rust gets touted everywhere as a silver bullet.
> Tony Hoare's "billion dollar mistake", which was making null a valid value of a pointer type
It’s funny how we got stuck with his biggest mistake for decades, while his (probably not entirely his) algebraic types / tagged unions have only now started to get first-class support.
You were correct about the lack of the billion dollar mistake in Zig. Once I'd decided to list some "C replacement" languages, not just C and C++, I should have either checked that they all make exactly this mistake (Odin does, Zig does not) or removed that part of my comment.
However, in practice Zig's "defer" is just irrelevant for this nVidia bug, which is why nVidia's "fix" doesn't attempt the closest C-equivalent strategy and instead now performs a heap allocation (and thus free) on the happy path.
There's a kernel Oops, likely in someone else's code. When that happens our stack goes away. In Rust they can (I don't happen to know if they do in Rust for Linux, but it is commonly used in some types of application) recover from disasters and unwind the stack before it's gone, such as by removing the threadState from that global state. In Zig that's prohibited by the language design: all panics are fatal.
A kernel oops isn’t a panic, at least not as Zig or Rust defines a panic, so what Zig says about panics doesn’t apply here.
Rust fails here in exactly the same way if drop semantics aren’t upheld (they aren’t, AFAIK). Also, Rust’s soundness goes immediately out the window if UB happens in unsafe code, so the moment a kernel Oops happens, safety is a moot point.
I’m not sure if Zig has a clean way to kill a thread, unwind the stack, and run deferred code. Zig is a pre-1.0 language after all so it’s allowed to be missing features.
30ms is pretty close to noticeable for anything that responds to user input. 30ms startup + 20-70ms processing would probably bump you into the noticeable latency range.
Yeah, I don’t think 30ms on its own is very noticeable, but say you have a CLI tool with another 20-70ms of processing to bump you up to 50-100ms: your tool will have noticeable latency. Death by a thousand cuts.
Water attenuates (reduces the power of) signals significantly, and more so at higher frequencies. The HF (3-30MHz) band is definitely not what you’d want to pick (sonar is in the kHz range). The sub probably still used 27MHz because of the FCC regulations, though, just with higher power and a better antenna.
I’m still regularly getting on projects and moving C89 variable declarations from the start of functions to where they’re initialized, but I guess it’s not the kids doing it.
I only declare variables at the beginning of a block, not because I would need C89 compatibility, but because I find it clearer to establish the working set of variables upfront. This doesn't restrict me in any way, because I just start a new block when I feel the need. I also try to keep the scope of a variable as small as possible.