
With how it’s going, I feel Zig 1.0 won’t be a thing until my retirement in 37 years.


I’m willing to bet $5 it happens in 4 years or fewer.


That’s pretty low confidence, if we measure confidence in dollars you’re willing to risk.


It’s 100% of my annual betting budget though!


Do you have a track record we can look at to see how good you are at predicting this for programming languages? Say, some 2020 predictions you made for languages that would or would not ship 1.0 by 2025?


I made a set of predictions for Rust in 2022, nearly all of which turned out to be correct. And I was publicly confident Go and Rust would be massive when they reached 1.0. I was right on both counts.

But I will also admit I don’t follow developments in Zig as closely as Rust. I’ve never written any Zig. And in any case, past performance isn’t indicative of future performance.

I could be wrong about this prediction, but I don’t think I will be. From what I’ve seen, Andy Kelley is a perfectionist who could work on point releases forever. But his biggest users (TigerBeetle and Bun especially) will only be taken seriously once Zig is 1.0. They’ll nudge him towards 1.0. They can wait a few years, but not forever. That’s why I guessed 4 years.


> But his biggest users (TigerBeetle and Bun especially) will only be taken seriously once Zig is 1.0.

TB is only 5 years old but is already migrating some of the largest brokerages, exchanges, and wealth managers in their respective jurisdictions.

Zig’s quality, for us here, holds up under some pretty extreme fuzzing (a fleet of 1,000 dedicated CPU cores), Deterministic Simulation Testing, and Jepsen auditing (TB did 4x the typical audit engagement duration), and this is orthogonal to 1.0 backwards compatibility.

Zig version upgrades for our team are no big deal, compared to the difficulty of the consensus and local storage engine challenges we work on, and we vendor most of our std lib usage in stdx.

> They’ll nudge him towards 1.0.

On the contrary, we want Andrew to take his time and get it right on the big decisions, because the half life of these projects can be decades.

We’re in no rush. For example, TigerBeetle is designed to power the next 30 years of transaction processing, and Zig’s trajectory is what’s important here.

That said, Zig and Zig’s toolchain today are already better, at least for our purposes, than anything else we considered using.


I stand corrected. I fear I may lose my $5 now.

If you don’t mind my asking, did TB ever add support for transaction metadata? I’ve seen the anti-pattern of a map<string, string> associated with each transaction. Far from ideal, but useful. Last I checked, TB didn’t support it because it would need dynamic memory allocation. Does it support it now, or will it in the future?


Haha! You could double down and up the stakes.

It’s not that it would need dynamic memory allocation (it could be done with static allocation); rather, it’s not essential to performance. You could use any KV or OLGP database for additional “user data”, since that isn’t the hot, contended part.

To keep consistency, the principle is (see the sketch after the list):

- write your data dependencies if any, then “write to TB last as your system of record”,

- “read from TB first”, and if it’s there your data dependencies will be too.
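
A minimal sketch of that ordering (Python, with plain dicts standing in for the KV store and for TB itself; the function names are illustrative, not the real TigerBeetle client API):

  kv = {}      # any KV/OLGP store, holds free-form "user data"
  ledger = {}  # stand-in for TigerBeetle, the system of record

  def record_transfer(transfer_id, amount, metadata):
      # 1. Write data dependencies first (metadata keyed by transfer id).
      kv[f"transfer-meta/{transfer_id}"] = metadata
      # 2. Write to TB last, as the system of record. A failure here
      #    leaves an orphaned KV row that readers never observe,
      #    because every read starts from TB.
      ledger[transfer_id] = amount

  def read_transfer(transfer_id):
      # 3. Read from TB first: if the transfer exists there, its
      #    dependencies were written before it, so the KV lookup hits.
      if transfer_id not in ledger:
          return None
      return ledger[transfer_id], kv[f"transfer-meta/{transfer_id}"]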


That makes sense, but it seems to add latency? Let’s say we’re currently reading the transactions from Cassandra with latency t. If TB is the source of truth, but we need an additional read from Cassandra for every transaction, the read latency is strictly worse now. Similarly with writes, especially since you recommend writing to TB last.

If what I’m saying is correct, we won’t actually see any performance benefits, only possible regressions. And if we’re already happy with Cassandra’s record keeping, what does TB add here?

Correct me if I’m missing something though! Like maybe we could rework how we write to Cassandra.


Fair question!

If you have any contention in your workload, which is typical for OLTP workloads [1], then, per Amdahl’s Law, that portion of the work will dominate, and TB will be orders of magnitude faster.

For example, if your KV store can do high-throughput writes at low latency (and most can), you should be looking at 20-50ms P100s all combined, per batch of 8K transactions, when running at 500K TPS, even with extreme contention of up to 90%. That’s very hard, if not impossible, without TB as part of your stack. At least not with the durability and strict serializability guarantees that TB gives.
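
To make the Amdahl’s Law point concrete, here’s a back-of-envelope sketch (Python; the contention figures are illustrative, not TB benchmarks):

  # Amdahl's Law: the contended (serial) fraction of the workload caps
  # speedup, no matter how many workers handle the parallel part.
  def max_speedup(serial_fraction, workers):
      return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

  for contention in (0.01, 0.10, 0.90):
      print(f"{contention:.0%} contention -> {max_speedup(contention, 64):.1f}x")
  # 1% contention  -> ~39x with 64 workers
  # 90% contention -> ~1.1x: the hot, contended accounts are the whole
  # game, and that contended part is what TB is built to make fast.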

[1] Worth watching! https://youtu.be/yKgfk8lTQuE


Both Go and Rust had substantial dedicated corporate support and dollars behind them. Zig, not so much. With that in view, its advancement is pretty remarkable.


Wouldn’t you think AI advances in 4 years would make this a safe bet?


No, unless you’re riding the hype train so hard that you’re excited to colonize Mars thanks to AI advances.


The language itself won't change much. The standard library is what is still in flux.



