I don't get these crypto types that use plain http for the download page [1] and then make a show of "You have to verify downloaded tarballs integrity and authenticity to be sure that you retrieved trusted and untampered software." And if you go to the "alternate resources" links [2][3], you get "Error code: sec_error_cert_signature_algorithm_disabled."
If your primary threat model is an active attacker in proximity to the server, rather than the client, SSL provides _NO_ security: the attacker can intercept the plaintext communication to your domain used to show control over the domain (the domain-validation check a CA performs) and trivially obtain a certificate.
If your threat model is adversarial nation states that control a CA or ten and are willing to create some bad issuance drama... again no security added by browser SSL.
If your threat model is an attacker who will compromise your webserver (because no one can keep up with the flood of new vulnerabilities), and the only way you can keep a private key private is to keep it offline, which can't be done with SSL, then again, no joy.
Some people believe the use of HTTPS in these cases creates a false sense of security and reduces the likelihood that people will check using other mechanisms. I am pretty confident(*) that they are wrong and that they've not actually measured the effect. But it's not a crazy position to take.
(Especially when you mix in how easy it is for https snafus to result in giving users scary warnings that make them blind to scary warnings)
(*: confidence due to religiously verifying packages and keys, and finding _frequently_ that they are unverifiable even for major, high-profile targets like big Linux distros or crypto libraries, e.g. signed with a key that is signed by no one else and exists only in the same directory as the binary; if people were actually checking I wouldn't find so many messed-up cases. I'm also confident from watching the number of .sig downloads on my own software: no one checks.)
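For what it's worth, the check itself is cheap. Here is a minimal sketch that shells out to gpg; the file names are hypothetical, and it assumes the signer's key is already in your keyring and was trusted via some out-of-band channel:

```python
# Verify a detached OpenPGP signature by shelling out to gpg.
# File names are illustrative; the signing key must already be imported
# and trusted for the result to mean anything.
import subprocess

result = subprocess.run(
    ["gpg", "--verify", "release.tar.xz.sig", "release.tar.xz"],
    capture_output=True, text=True)

if result.returncode == 0:
    # gpg reports "Good signature from ..." on stderr
    print("good signature:", result.stderr.strip().splitlines()[-1])
else:
    print("VERIFICATION FAILED:\n", result.stderr)
```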
> The attacker can intercept the plaintext communication to your domain used to show control over the domain and trivially obtain a certificate.
Could you explain? Are you talking about DNS spoofing / hijacking or protocol downgrade attacks? There are answers to those, so I'm not following.
> If your threat model is an attacker who will compromise your webserver (because no one can keep up with the flood of new vulnerabilities), and the only way you can keep a private key private is to keep it offline, which can't be done with SSL, then again, no joy.
If you are important enough to worry about 0days, then you need to invest heavily in monitoring and tools like Appcanary that can autoupdate your packages. If you detect that your key is compromised then issue a new one and move on with your life.
HTTPS does not create a false sense of security, MD5 sums for packages delivered over non-TLS do. HTTPS isn't perfect but not using it because it won't stop some very well funded actors is silly. I'm worried about hacked wifi routers at my cafe, not about state level actors stealing HTTPS certificates.
But OpenPGP detached signatures are isolated from, and do not depend on, the transport protocol (TLS), and they defend you from all of those MitM attacks, because they are end-to-end (directly from developer to end user), without depending on any third party (a CA issuing TLS certificates, DNSSEC providers, intermediate DNS proxies that must not strip DNSSEC, and so on).
Exactly. Even the famed homakov's company delivers keys via HTTP: http://sakurity.com/contact . This BS about HTTPS providing the illusion of security is nonsense. It's much harder to even pull off a protocol downgrade attack (and we have HSTS preload lists for those!) than it is to replace a single endpoint or key on an HTTP connection.
For example, the http://www.cypherpunks.ru/pygost/Download.html page contains instructions for retrieving the key. You can get it via the mailing list, the website, DNS, or keyservers, and you can use various DNS servers and transport routes, including Tor. There are plenty of options (see the sketch at the end of this comment for cross-checking them). And this key is signed with another one that carries many signatures. Of course there are no absolute guarantees, but at least you have to do this only once and can then verify tarballs conveniently. With TLS you have to do it every time, on every visit and connection to the server.
Moreover, how can you transfer trust to other people? If you proxy or hand the tarball to someone else, how can you prove that you did not tamper with it? Again, with detached signatures anyone who knows the public key can authenticate it, without even connecting to the Internet. With TLS there is only a single distribution point (the TLS website), which cannot transfer trust to anyone else.
Which CA should be used for issuing the certificate? A paid one? Not an option if you do not want to support the PKI business model (it is a business, not security). CAcert.org? Modern browsers and operating systems do not include its certificate either, so you would have to obtain its public key somehow anyway.
So TLS has the same problem of obtaining a public key, and it is less convenient in use: it requires a TLS-aware webserver (instead of cheap static-page hosting) and gives you no way to transfer trust (send a signature separately) to someone else. OpenPGP keys (for the www.cypherpunks.ru websites), compared to CA-issued certificates, can be retrieved from several (!) keyservers (many of which replicate among themselves) and several (!) DNS servers (listed as NS records), through various transports (VPN, proxy, Tor), from any of the webservers (listed as A/AAAA records).
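A rough sketch of the cross-checking idea mentioned above: fetch the same public key over independent channels and require that they agree before importing it. The URLs here are illustrative, not the project's real distribution points, and comparing full OpenPGP fingerprints with gpg would be the proper check; this just compares the raw downloads.

```python
# Fetch one public key over independent channels and make sure they agree.
import hashlib, urllib.request

sources = {  # illustrative URLs only
    "website": "http://www.cypherpunks.ru/pygost/PyGOST.asc",
    "mirror":  "http://mirror.example.org/keys/PyGOST.asc",
}

digests = {name: hashlib.sha256(urllib.request.urlopen(url).read()).hexdigest()
           for name, url in sources.items()}

if len(set(digests.values())) == 1:
    print("all channels agree:", next(iter(digests.values())))
else:
    print("MISMATCH, do not trust this key:", digests)
```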
That was my point. How do you know the pub key is not tampered with? Come to think of it, is meeting in person the only reliable way to exchange keys?
PKI is a business model. That download page suggests you verify downloaded tarballs with an OpenPGP key, or visit the Git repository and look for signed (OpenPGP again) tags there. Of course you have to establish some kind of trust for verifying keys. If your browser shows you that kind of error, then it seems you do not trust CAcert.org, which was used to create the certificate. You may retrieve the OpenPGP keys and find a signature you trust. PKI (HTTPS) is a single point of trust; OpenPGP provides many more.
End-to-end security is much better than the in-flight protection TLS gets you, and even that only as far as you trust the half-assed PKI. I imagine most crypto experts don't put much faith in the integrity of their hosting providers and are not in the business of opsec themselves. Of course, end-to-end only helps if people actually perform the check...
TL;DR They probably think TLS in software distribution = false sense of security.
Some projects, such as Gentoo, use multiple hashing algorithms in parallel to protect against potential collision attacks while verifying package sources. Adding Streebog for diversity may be a good idea.
Looks like SHA256, SHA512 + Whirlpool to me. Apparently the SHA algorithms have FIPS (US) and NESSIE (EU) certification. Whirlpool has NESSIE (EU) certification. Streebog has GOST (Russian) certification.
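A minimal sketch of the parallel-digest idea using Python's hashlib. Whirlpool and Streebog are only exposed via hashlib.new() when the underlying OpenSSL build provides them, so this sticks to the two SHA-2 digests; the file name is hypothetical.

```python
# Compute several digests of the same file, the way a multi-hash manifest
# does: an attacker has to produce a collision in all of them at once.
import hashlib

def manifest_digests(path: str) -> dict:
    algos = ["sha256", "sha512"]  # always available in hashlib
    hashers = {name: hashlib.new(name) for name in algos}
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            for h in hashers.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashers.items()}

print(manifest_digests("package-1.0.tar.xz"))  # hypothetical file name
```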
You're right, I thought this was deprecated with thin manifests.
Still I don't see a point in this. We should discuss cryptographic algorithms based on technical arguments, not on algorithm origin.
The price is not mainly computational, but in complexity. You need a library implementing that algorithm, you need to maintain it, you need to make sure it has no security flaws...
And by the way, these Gentoo manifests, if you're worried about their cryptographic security there's something much bigger to worry about: For most users they're transmitted unprotected and unsigned via rsync. There are non-default ways to improve that, but the default is insecure. (It pains me to say this, because I'm a long time Gentoo dev, but it's a nasty truth about Gentoo's lack of security.)
If you want to improve Gentoo's cryptographic integrity this is the first thing that should be worked on (either through a working signing system that would be acceptable by default or by switching to an authenticated transmission mechanism like git over https). This would be much more helpful than adding an obscure hash algorithm.
The current sync recommendation is emerge-webrsync:
# emerge-webrsync
Fetching most recent snapshot ...
Trying to retrieve 20161021 snapshot from http://mirror.com/gentoo ...
Fetching file portage-20161021.tar.xz.md5sum ...
Fetching file portage-20161021.tar.xz.gpgsig ...
Fetching file portage-20161021.tar.xz ...
GPG is enough cryptographic assurance for me.
(Edit: WTF! Either past-midnight has zapped my brain, or this is a total community pants down moment. emerge-webrsync doesn't actually verify the GPG signature by default, despite downloading it, and no warning is issued! One must follow the obtuse, well-buried instructions @ https://wiki.gentoo.org/wiki/Handbook:AMD64/Working/Features... to get it to actually verify ... I've added another bug @ https://bugs.gentoo.org/show_bug.cgi?id=597800 about this... serious misfeature! Looks like for all intents and purposes if you want to own the average Gentoo box, having MITM on sync + emerge is enough! NSA must have been using that...)
It was very sane in the 1990s, when hash combiners were invented and Paul Kocher designed one into SSL 3.0. The hashes we had to work with then were basically first-generation designs, preceding the first wave of major cryptanalytic results against hashes. An MD5/SHA1 hash combiner made sense.
It is less sane now, I agree. With the possible exception of PQ schemes, cascades of all kinds are a silly idea.
Well, I'd say we can pretty clearly pinpoint when cascading schemes make sense: when we have different algorithms with different security properties and we can't have them all in one algorithm, cascading makes sense.
This is the case for newhope + x25519, because one is based on well-known crypto but is not quantum-safe, while the other is based on less well-known crypto but is (maybe) quantum-safe. You could make similar arguments in the past about hash functions, when the analysis around their safety was much more fuzzy. (Which is why I can also understand why Gentoo decided way back to combine sha2+whirlpool.)
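As a concrete illustration of the concatenation-combiner idea, here is a minimal sketch with Python's hashlib. SHA-256 and SHA-512 stand in for two independent designs; this is not the SSL 3.0 construction itself, just the general shape of it.

```python
# Illustrative hash combiner: concatenate digests from two designs, so an
# attacker needs a simultaneous collision in both.
import hashlib

def combined_digest(data: bytes) -> bytes:
    # Gentoo's SHA2 + Whirlpool pairing follows the same idea with a
    # second hash family swapped in for the second component.
    return hashlib.sha256(data).digest() + hashlib.sha512(data).digest()

print(combined_digest(b"release-tarball-bytes").hex())
```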
At Actor.im we double-encrypt all of our traffic with AES+Kuznechik and SHA256+Streebog. We modified the Signal protocol to handle such encryption, while keeping curve25519-only for public-key cryptography, as Russia doesn't have any kind of standard for PKI.
The main issue is performance. AES and SHA256 usually have hardware optimizations in ARM and x64 processors, but the Russian algorithms don't.
The second thing is, I think this is not actually required, as AES and Kuznechik are built on very similar ideas, just combined slightly differently. Also, AES is not cracked and is not going to be in the near future.
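For illustration only, a minimal sketch of the cascade idea: two layers with independent keys, so an attacker has to break both. This is not Actor.im's actual protocol; both layers here are AES-GCM from the Python `cryptography` package, and in a real GOST cascade the inner layer would be Kuznechik from a GOST library.

```python
# Cascade encryption sketch: two independent layers with independent keys.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

inner_key, outer_key = AESGCM.generate_key(256), AESGCM.generate_key(256)

def cascade_encrypt(plaintext: bytes) -> bytes:
    inner_nonce, outer_nonce = os.urandom(12), os.urandom(12)
    # Inner layer (stands in for Kuznechik in this sketch).
    inner = inner_nonce + AESGCM(inner_key).encrypt(inner_nonce, plaintext, None)
    # Outer layer: AES wrapping the already-encrypted payload.
    return outer_nonce + AESGCM(outer_key).encrypt(outer_nonce, inner, None)

def cascade_decrypt(blob: bytes) -> bytes:
    outer_nonce, outer_ct = blob[:12], blob[12:]
    inner = AESGCM(outer_key).decrypt(outer_nonce, outer_ct, None)
    inner_nonce, inner_ct = inner[:12], inner[12:]
    return AESGCM(inner_key).decrypt(inner_nonce, inner_ct, None)

assert cascade_decrypt(cascade_encrypt(b"hello")) == b"hello"
```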
Maybe I misunderstood you, but VKO 34.10-2001 (http://www.cypherpunks.ru/gost/enVKO.html) is the ECDH analogue. It uses two elliptic-curve keypairs on 256- or 512-bit curves to derive a common shared 256-bit key. It is Diffie-Hellman, like curve25519, with at least a 128-bit security margin.
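For comparison, a minimal curve25519 key-agreement sketch with the Python `cryptography` package; VKO 34.10 plays the analogous role over GOST curves, with the shared point then fed through a KDF.

```python
# Plain X25519 Diffie-Hellman: both sides derive the same shared secret.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

alice = X25519PrivateKey.generate()
bob = X25519PrivateKey.generate()

alice_shared = alice.exchange(bob.public_key())
bob_shared = bob.exchange(alice.public_key())
assert alice_shared == bob_shared  # 32-byte secret, run through a KDF in practice
```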
Statements of the form "if you do X, then Y is secure" only hold if there are no other dimensions to the problem, e.g., an airgapped network in a Faraday cage is secure from data exfiltration via radio emissions, but not necessarily via optical or acoustic emissions.
That said, the NSA recommends multiple encryption, for instance using "inner" and "outer" VPN gateways:
I think you're missing the implication in the original comment, that the NSA has put a backdoor into NIST and the Russian equivalent has put a backdoor into GOST, but neither can use the other's backdoor.
I cannot tell you how silly this is. If you're using GOST, you're no longer building NIST-compliant crypto. If you're using NIST, you're no longer building GOST-compliant crypto. For Christ's sake, just stop using standardized crypto if you're worried about backdoors like this. Use an eSTREAM portfolio cipher for bulk crypto, use Blake2 as your hash, and use Curve448 for key agreement and signatures.
You can just use the Noise protocol framework to accomplish this, which was designed to use all of these components.
It's still NIST compliant crypto on the outside regardless of the inner contents. Assuming NIST(GOST(plaintext)), NIST would be terribly broken if using GOST ciphertext as the payload weakened the security.
It's crypto 101 that, to anyone without the key, a ciphertext should be indistinguishable from random data of the same length.
I'm shocked that you think the plaintext contents would have an effect on whether or not something is NIST compliant.
I don't follow this objection even a little bit, but I'm also not very motivated to try, so: no need to clarify. I'm just going to reiterate.
I am making a very simple point. If you don't trust NIST standards because you think they're backdoored, but won't run Russian standards because you think they might be too, the answer isn't to compose the two flawed standards.
Instead: just use a crypto stack composed of well-reviewed, well-regarded components that are neither NIST nor GOST standards.
Nobody in the world thinks Curve25519 is backdoored, or that Chapoly is, or that Blake2 is.
In fact: this is what I think you should do anyways. Maybe, just maybe, you should keep using AES because it will be more performant --- but the cycles/byte cost of bulk encryption is so low that I'm skeptical that this matters. Otherwise: avoid crypto standards like NIST and GOST. Standards processes produce crypto that is at best ungainly and at worst actively harmful. Standards are evil.
I am, of course, addressing this advice to the very, very limited subset of engineers who should be working with crypto directly. Everyone else should just use Nacl.
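For the "everyone else" case, a minimal PyNaCl sketch (Curve25519 + XSalsa20-Poly1305 under the hood); the key names and message are just illustrative.

```python
# Authenticated public-key encryption with NaCl's Box construction.
from nacl.public import PrivateKey, Box

alice_sk = PrivateKey.generate()
bob_sk = PrivateKey.generate()

# Alice encrypts to Bob's public key; the Box also authenticates the sender.
ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"attack at dawn")

# Bob decrypts with his secret key and Alice's public key.
plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
assert plaintext == b"attack at dawn"
```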
If you think standard A is good except for a potential backdoor with the key held by entity X and you think standard B is good except for a potential backdoor held by entity Y and you assume entity X and entity Y do not cooperate, then composing A and B is completely reasonable.
It's the same thing as having 3 computers vote on the space shuttle control signals. You could follow your argument and claim, "If the software has a bug in it that would produce output different from the other 2, then don't use it!" The problem is that we don't know if there is an issue or not, so we go the safer route with multiple implementations.
You also did not address the main issue I have with your comment. You made this assertion: "If you're using GOST, you're no longer building NIST-compliant crypto.", which implies that the contents of the plaintext determine if the crypto is NIST compliant. This is completely false.
It's like claiming that uploading an AES encrypted file over an HTTPS connection is less secure than uploading via HTTP.
Considering only the secure channel problem and not the entire systems problem (which might motivate encrypting clientside in anticipation of the file being stored), encrypting before sending on a secure channel is indeed pointless, which is the reason you'll find very few soundly designed cryptosystems that do this.
The point is again simple: there are far better options to untrustworthy standards than composing them in the hopes of mitigating their flaws. It's for the same reason that we used to use hash combiners to handle MD5 and SHA1, but now we use HKDF over SHA-2.
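For reference, a minimal HKDF-over-SHA-256 sketch with the Python `cryptography` package; the input secret, salt, and info values here are illustrative.

```python
# Derive a 32-byte key from shared secret material with HKDF/SHA-256.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

shared_secret = b"\x00" * 32  # e.g. the output of an X25519 exchange

key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=None,                  # optional; a random salt strengthens extraction
    info=b"example handshake",  # context / domain-separation string
).derive(shared_secret)
```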
>which is the reason you'll find very few soundly designed cryptosystems that do this.
Nearly every secure system I've dealt with (on the military side) encrypted at the network layer (VPN), and they sent encrypted files over that channel.
Yes, because (as I just said), encrypting files mitigates systems problems outside the scope of the secure channel problem. A secure channel doesn't help you if the bag of bits you send down it ends up persisted on an exported, unencrypted filesystem.
That doesn't mean that redundant clientside encryption of files is a sensible feature for a secure channel to have.
It's nearly impossible to predict when someone will find vulnerabilities in crypto primitives (or whether they already have in secret; Bletchley Park, anyone?), and the problem gets compounded when we use untested crypto primitives such as those highlighted in this article.
AES has been around since 2001 and researchers haven't gotten past 7 of its 10 rounds, so that significantly improves my confidence in its ability not to crumble under the simplest cryptanalysis.
Here's an interesting video by the author of one of the attacks on the inner round of SHA-3 explaining why public analysis is exceptionally important.
https://www.youtube.com/watch?v=uT4hrWkbBxM
My point is that gaining popularity may be good, because more researchers may find vulnerabilities, but until these primitives are proven it's probably not a good idea to use them in any real-world application.
> "Why those algorithms could be interesting and great worth alternative to foreign ones? Because they are obviously not worse, in some places are much better and have high serious security margin."
Is there a reason these algorithms aren't formally introduced as NIST standards? Are they copyrighted? Couldn't anyone submit them?
Lack of interest on behalf of the author, lack of interest on behalf of the standards body, implicit political pride, lack of internationally-recognized, systemic cryptanalysis.
It'd be an entirely different matter if NIST posted a new competition and someone entered an algorithm that just happened to be an existing Russian standard.
Altogether, this isn't an unpleasant status quo. These algos are standards in their sphere of influence; other spheres of influence (like NIST) aren't showing much interest in evaluating their fitness for use because it's not necessary for them -- they already have comparable standards.
However, it's more of an issue for someone like IETF who tries to promulgate standards with a global reach. They are simultaneously trying to be accommodating while trying not to make recommendations that are objectively bad (e.g. recommending a broken algorithm, despite it being a national standard). There is a now-expired draft RFC for some GOST algos in TLS [1], and there's an RFC for supporting the Japanese-origin Camellia block cipher in TLS [2], although all major browsers have since removed support for it, mostly born out of a blog post about reducing supported TLS ciphersuites but also citing lack of use [3].
* The kind, named "AES", that are implemented in hardware on a bunch of different mainstream platforms.
* Ciphers that were designed to be fast on general-purpose CPUs (for instance, ones with good multipliers) and are thus cycles/byte competitive. Salsa/ChaCha is the best-known example here, and currently the only one with widespread use (this used to be the bucket RC4 was in).
* Ciphers that nobody uses unless a government mandates it.
There might be a case for having two bulk cipher specs, one for hardware acceleration and one for software. That's how eSTREAM did it.
There's really not much of a case for standardizing, or really even adopting, other ciphers.
Things are different in signature-land, or key-agreement-land, or even hash-land. They're definitely different in AEAD-land. But for bulk encryption primitives, we may be at a happy place already.
NIST usually doesn't standardize algorithms just because. Which is reasonable, because having more algorithms doing the same thing only creates confusion and more sources for error. New algorithms should only be introduced when they serve a need or when the existing algorithms have weaknesses.
The likely next algorithm standardization process NIST is going to start is about Post Quantum cryptography. However NIST has lost a bit of its relevance with the IETF taking over after questions about NSA influence over NIST.
Btw, there is no requirement for NIST algs to be US-based. E.g. AES was developed in Belgium.
Belgium is an America-loving country though, like all of Western Europe as far as I know, and Americans are usually quite aware of who the friends and 'terrorists' are. I wonder whether AES would have had much chance at all if it had been from, I don't know, Kenya or Kazakhstan or Bolivia (though I'm not up to date on the exact America-sentiment in each of those countries; we hardly hear about them in Western news).
Rijndael, the Belgian cipher in question, famously won a very well-regarded cryptographic contest to obtain its role as AES. "Belgium" has nothing to do with it.
I know it was a Belgian cryptographer, not Belgium the state. I still think the country makes a difference in how people regard an entry in the contest.
Of course, but do you think an algorithm from a state that most Americans see as "behind" or even a "terrorist state" has a perfectly equal chance of winning? The judges might even explicitly try to give each an equal chance but at least subconsciously, I would expect there to be some bias.
The submitters were not countries but (typically groups of) practitioners. Only one proposal was made by a group consisting entirely of Americans and it did not win. Like all such processes, I'm sure this particular one was imperfect and could be made better. You're making it sound like the judging of an Olympic Ice Dancing event, though, and I think all available evidence suggests that it wasn't.
Yes, but NIST runs the selection processes for crypto standards that are used widely around the world. As far as I know there's no requirement that the algorithm be created in the United States to be considered, but I'm not entirely sure about the process.
It seems to me that if the standard were better, faster, and more secure, then it would become the greater standard on merit. Unfortunately there's been a shadow on NIST since Dual_EC_DRBG was "pushed" through. I would like to think its greater mission is still intact.
Rijndael (which is better known by the name AES) was developed by Belgian cryptographers, working in Europe. I think the same team developed Keccak (SHA-3).
Vincent Rijmen and Joan Daemen are the co-designers of AES. Later, they collaborated with Gilles Van Assche and Michaël Peeters to design NOEKEON for the NESSIE competition [1]; simultaneously Rijmen worked with Paulo S. L. M. Barreto to design the WHIRLPOOL hash function [2], which became a NESSIE recommendation.
Daemen, Van Assche, and Peeters were joined by Guido Bertoni to create the hash function RadioGatún [3] from Daemen and Craig Clapp's old stream cipher PANAMA [4]. The same team refined the design into Keccak [5], a 'sponge function' primitive they invented [6] that forms the basis of SHA-3.
The uncertainty around their origin (a foreign country that until a few years ago was hostile to the US) would preclude their use by US government agencies (which depend on NIST for the stamp of approval).
Is that true for an open-source algorithm? I'm sure that the Russian Government still uses things like SHA256, even though it's an approved NIST standard.
I would imagine so. Particularly in cryptography, being able to view the source is not the same as being able to (easily) verify that it doesn't contain any backdoors. See ECDSA as a good example: we can see the entire standard and yet we can't be sure that it hasn't been weakened/backdoored by its creators, despite it looking secure on the surface. See the start of [0], the bit about the standard pushing state leaking through secret-dependent operations.
Edit: my point here is that America would have to trust that there was no way that the Russians had added backdoors. Given their history and the current political climate I can't see that trust coming soon.
You can go a long way towards showing a standard hasn't been backdoored by showing the parameters were generated from "nothing up your sleeve" numbers. E.g. start deriving candidates from SHA-256 of 1, and keep incrementing until the requirements are met.
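A toy sketch of that approach; the acceptance test here is a placeholder for whatever requirements the real standard imposes (primality, curve order, and so on).

```python
# Derive candidate parameters from SHA-256 of an incrementing counter, so
# the designer cannot cherry-pick a value with hidden structure.
import hashlib

def is_acceptable(candidate: int) -> bool:
    # Placeholder requirement; a real standard would check e.g. primality.
    return candidate % 4 == 3

counter = 1
while True:
    candidate = int.from_bytes(
        hashlib.sha256(counter.to_bytes(8, "big")).digest(), "big")
    if is_acceptable(candidate):
        break
    counter += 1

print(f"counter={counter}, parameter={candidate:#x}")
```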
Incorrect. A lot of people don't know it, but it was NATO and the United States that were actually actively hostile to Russia. (Long family history of military folks who had EYES ONLY access.)
Why, exactly, would NIST standardize GOST? What other ciphers should it standardize? So far, it's done DES and, to replace DES, AES. Why GOST and not Salsa20 --- which people in the real world actually use?
Standardizing multiple ciphers defeats NIST's whole purpose. The whole point is to have one standard block cipher. That, incidentally, is also the point of GOST.
[1]: http://www.cypherpunks.ru/gogost/Download.html#Download
[2]: https://lists.cypherpunks.ru/mailman/listinfo/gost
[3]: https://git.cypherpunks.ru/cgit.cgi/gogost.git/