So the security of a blockchain now depends on the security of the SGX enclave? What could possibly go wrong...
> Unfortunately, this proposal suffers from a critical security economics issue: node maintainers here have a strong incentive to break into their own SGX chips. If an adversary managed to compromise their SGX, they could win the leader election at every round by setting the timeout to 0. The more valuable the network, the stronger the incentive to compromise your own platform.
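To make the quoted attack concrete, here is a minimal Python sketch of a PoET-style election (the function names are illustrative, not Sawtooth's actual API): each node draws a random wait time from its enclave, and the node whose timer expires first leads the round. An enclave its owner controls can simply report zero.

```python
import random

def elect_leader(nodes, enclave_wait):
    """PoET-style round: the lowest enclave-reported wait time wins."""
    waits = {node: enclave_wait(node) for node in nodes}
    return min(waits, key=waits.get)

def honest_wait(node):
    # Honest enclave: random wait drawn inside trusted hardware.
    return random.expovariate(1.0)

def compromised_wait(node):
    # Broken SGX: the attacker's enclave always reports zero.
    return 0.0 if node == "attacker" else honest_wait(node)

nodes = ["attacker", "node1", "node2", "node3"]
print(elect_leader(nodes, compromised_wait))  # "attacker", every round
```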
I wouldn't call controlling my own SGX "compromising my own platform". It is my own platform after all, why should I not control it as I please?
Part of it is community agreement. In order to mutually trust what we do on each other's machines, we give up some rights, including the ability to lie about what we executed on our own machines. It's the implicit agreement behind Folding@home and many other community computation projects, only this is better enforced.
Peer-to-peer computation is hard to implement because of a quite hairy social aspect: it requires a trust root that is outside the direct control of the equipment's owner.
> In order to mutually trust what we do on each others' machines, we give up some rights
I trust what people do on their machines to the extent they can cryptographically prove it to me. Anything else is, for me, an unacceptable compromise.
That may sound like a worthwhile goal, but it's actually ripe for abuse by exacerbating existing power imbalances. For example, right now we can just laugh at websites that insist on imposing nonsensical requirements on end users (client-side form validation, insistence on using a particular browser, disabled copy/paste, anti-adblock, etc.). Imagine if they had the power to actually enforce all of that.
Furthermore, the actual implementation isn't likely to use a narrow proof that the running javascript hasn't been tampered with, but rather a blunt proof over the entire software environment. The outcome would basically be putting decades of personal computing freedom back in the box. Imagine needing to run Windows on your bona fide desktop and not being able to virtualize it or even use a headless box via RDP.
A small sandbox isn't the full threat in the manner I laid out, but it has the same owner-is-hostile dynamic.
If attestation keys were rooted only in the processor itself (i.e. not signed by Intel/AMD) and users could load their own, the worthwhile properties of hardened hardware would be preserved without making the owner an enemy.
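As a toy illustration of that model (stdlib HMAC stands in for the asymmetric quote signatures real attestation uses, and all names here are hypothetical): verification is parameterized by whatever root the relying party chose to trust, so an owner-loaded root works exactly like a vendor-issued one.

```python
import hmac, hashlib

def verify_quote(quote: bytes, tag: bytes, trust_root: bytes) -> bool:
    """Accept a quote only if it authenticates under a root the verifier
    chose to trust; owner-loaded or vendor-issued, the check is the same."""
    expected = hmac.new(trust_root, quote, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# The owner enrolls their own root key; no Intel/AMD key in the chain.
owner_root = b"owner-provisioned-root"
quote = b"measurement-of-enclave-code"
tag = hmac.new(owner_root, quote, hashlib.sha256).digest()

assert verify_quote(quote, tag, owner_root)          # owner's root: accepted
assert not verify_quote(quote, tag, b"vendor-root")  # other root: rejected
```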
The assumption behind PoET is precisely that you don't control a part of your machine (the SGX enclave). I agree that that's an unreasonable assumption which is why we developed RRR.
> I agree that that's an unreasonable assumption which is why we developed RRR.
Out of curiosity, I would've hoped that the consensus in the cryptography community is that "secure enclaves" are pretty much snake oil to begin with... is this not the case?
(I mean... for a secure enclave to actually work, you need "perfect" physical security, "perfect" software implementation inside, and "perfect" design of the cryptographic systems around it... that sounds rather infeasible. I always assumed cryptographers would agree and that it's mostly engineers and lawyers trying to "ship the impossible" to do things like DRM.)
As the article states at the end, no one should assume that enclaves give you perfect security. They are most useful when used as a defense in depth mechanism: a way to increase the cost for a successful attack.
There is no such thing as perfect security - it is always a risk assessment exercise. Secure enclaves in general are not snake oil. Secure elements and TPMs, for example, are evaluated by independent testing bodies to high levels of security certification. These are examples of secure enclaves. Secure enclaves in this context, SGX, are an on-CPU mechanism that have weaker isolation properties than other enclaves. However, SGX has a different set of advantages, and may be acceptable to use based on the value of the asset being protected.
You could apply the same logic to cryptographic systems themselves; we have very few absolute proofs of security properties for the cryptography we use for TLS and the like, so they are “imperfect” and may be vulnerable and therefore are snake oil. However I doubt you’d feel indifferent about whether a website you are sending your credit card number to uses HTTPS and stores your payment information encrypted at rest.
I'd place much more trust in software in general (and TLS-based crypto in particular) than in hardware devices.
TLS vs. SGX is a particularly bad comparison. SGX's internal design was never even published, let alone reviewed, and it has already suffered multiple bad exploits.
The TLS design and code have been reviewed by multiple cryptographers, and the protocol itself (as opposed to particular implementations) remains unbroken.
You have a very limited understanding of what "works" and what is "impossible", it seems. Actual security engineering occurs in hostile environments all the time, in extremely non-perfect environments, with non-perfect tools, and many of its principles (and practices) noticeably make security better for users, operators, and engineers. Similarly, DRM is not only NOT "impossible", it's very possible, very real, and actively used against people today. I suspect if you truly think DRM is "impossible" you have a vastly more limited understanding of security than you think you do.
Computer people have absolutely got to get out of this mindset where "impossible" means "according to my made-up set of Nerd rules" rather than the rules actually in place in the world around them.
No, I have a very good understanding of how most of my users, i.e. "normal" people, operate. I would suggest you pick a random relative outside a STEM field and discuss what their security requirements for a blockchain wallet are.
I haven't "made up my set of Nerd rules." I have learned what the general expectations of my users are.
Yeah, you can only use SGX-like systems when the reward for breaking them is limited, since they can all be broken given sufficient resources (or even just insider knowledge).
For example, it might be fine as a way to guarantee the integrity of things like Folding@Home where disruption provides no gain other than the satisfaction of successful vandalism, but not for securing a popular cryptocurrency.
It's not "your own platform", that's what the article gets wrong. The entire purpose of SGX is that it's my platform (that I can control/be assured that you can't), running on your hardware.
It's not my platform either, it's Intel's platform and I'm simply trusting them as to what code is running on SGX. Might as well trust Amazon or any other cloud provider.
Except then you trust Amazon and Intel, not just Amazon (presuming you run on x86). The nice thing about SGX is that you don't need to trust the whole cloud provider software stack.
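A hand-wavy sketch of why (ToyEnclave and the byte strings are stand-ins, not the SGX SDK): with remote attestation, trust attaches to the measurement of the enclave code, so the host OS, hypervisor, and the rest of the provider's stack drop out of the trust calculation.

```python
import hashlib

class ToyEnclave:
    """Stand-in for an SGX enclave: reports a hash ("measurement")
    of the code it is actually running."""
    def __init__(self, code: bytes):
        self._code = code
    def measurement(self) -> str:
        return hashlib.sha256(self._code).hexdigest()

AUDITED_CODE = b"enclave binary we reviewed and built ourselves"
EXPECTED = hashlib.sha256(AUDITED_CODE).hexdigest()

def willing_to_send_secrets(enclave: ToyEnclave) -> bool:
    # Trust follows the measurement, not whoever operates the host.
    return enclave.measurement() == EXPECTED

print(willing_to_send_secrets(ToyEnclave(AUDITED_CODE)))      # True
print(willing_to_send_secrets(ToyEnclave(b"host-tampered")))  # False
```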