Zero-knowledge proofs are basically a way to trust code execution without re-running it yourself.
Compile C# to a minimal RISC-V runtime. You run the program once, and instead of shipping all the outputs and logs, you generate a zk proof—a tiny math receipt that says "this execution was correct." Anyone can verify that receipt in milliseconds.
It's a bit like TEEs (Intel SGX, AMD SEV), where you outsource compute to someone else and rely on hardware to prove they ran it faithfully. The difference is zk proofs don't depend on trusting special chips or vendors - it's just math.
Implications:
* Offload heavy workloads to untrusted machines but still verify correctness
* Lightweight sync and validation in distributed systems
* New trust models for cloud and datacenter compute
I'm not familiar with how these zk proofs work, but for a PoW scheme I was working with, the binary proofs were over 60 KB - and they were sample-based to decrease the probability of cheating, not an absolute proof without full replay.
Do you have some info/resource to describe how these proofs work and can be so small?
There are different proof constructions, but many depend on recursive SNARKs. You basically have an execution harness prover (which proves that the block of VM instructions and inputs were correct in producing the output), and then a folding circuit prover (which proves the execution harness behaved correctly), recursively folding over the outer circuit down to a smaller size. In the Ethereum world, a lot of the SNARKs use a trusted setup - the assumption is that as long as one contributor to the ceremony was honest (and there wasn't a flaw in the ceremony itself), the setup can be trusted. The outsized benefit of the trusted-setup approach is that it lets you shift the computational hardness assumption over to the statistical improbability of forging proof outputs for desired inputs. This, of course, assumes the trusted setup was safe, and that quantum computers won't be able to break dlog any time soon.
The end of Dennard scaling was the performance breakdown. It meant chip frequencies couldn't be cranked higher and higher, as heat dissipation became more and more of an issue: https://en.wikipedia.org/wiki/Dennard_scaling
C# (and Go) have adapted goto to ensure consistent scoping, avoid undefined variables, and keep control flow reducible. So it's a much safer form of goto than what's exposed in C/C++, where you can still do weird things without being warned by the language.
Why they're depegging: people are selling for under a dollar on Coinbase (perhaps in panic), and Alameda Research (re: the FTX exchange), which normally makes a ton of money doing the arbitrage between dollars and tether, is having liquidity problems, so it isn't doing that arbitrage currently.