But OpenPGP detached signatures, which are self-contained and do not depend on transport protocols (like TLS), defend you from that whole class of MitM attacks, because they are point-to-point (directly from developer to end-user) and do not depend on any third party (a CA issuing TLS certificates, DNSSEC providers, intermediate DNS proxies that must not strip DNSSEC off, and so on).
Exactly. Even the famed homakov's company delivers keys over plain HTTP: http://sakurity.com/contact This talk about HTTPS providing only an illusion of security is nonsense. It's much harder to pull off even a protocol downgrade attack (and we have HSTS preload lists for those!) than it is to replace a single endpoint or key delivered over an HTTP connection.
For example, the http://www.cypherpunks.ru/pygost/Download.html page contains instructions on how to retrieve the key. You can get it via the mailing list, the website, DNS, or keyservers, and you can use various DNS servers and transport routes, for example via Tor. There are plenty of options. And this key is signed by another one carrying many signatures. Of course there are no full guarantees, but at least you only have to do this once and can then conveniently verify tarballs. With TLS you have to do it every time, on every visit and connection to the server.
Moreover, how can you "transfer" trust to other people? If you proxy or hand the tarball to someone else, how can you prove that you did not tamper with it? Again, with detached signatures anyone who knows the public key can authenticate it, without even connecting to the Internet. With TLS there is only a single distribution point (the TLS website), which cannot transfer trust to anyone else.
What CA should be used for certificate issuance? A paid one? Not an option if you do not want to support the PKI business model (it is a business, not security). CAcert.org? Modern browsers and operating systems do not include its certificate either, so you have to obtain its public key somehow anyway.
So TLS has the same problem of obtaining the public key, and it is less convenient in use: it requires a TLS-aware webserver (instead of cheap static-page hosting) and offers no way to transfer trust (send a signature separately) to someone else. OpenPGP keys (for the www.cypherpunks.ru websites), compared to CA ones, can be retrieved from several (!) keyservers (many of which replicate among themselves) and several (!) DNS servers (listed as NS records), through various transports (VPN, proxy, Tor) to any of the webservers (listed as A/AAAA records).
That was my point. How do you know the public key has not been tampered with? Come to think of it, is meeting in person the only reliable way to exchange keys?
Maybe I misunderstood you, but http://www.cypherpunks.ru/gost/enVKO.html VKO 34.10-2001 is an ECDH analogue. It uses two elliptic-curve keypairs (256- or 512-bit) to derive a common shared 256-bit key. It is Diffie-Hellman, like curve25519, with at least a 128-bit security margin.
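The agreement principle VKO applies over GOST curves is the same as any Diffie-Hellman. A minimal sketch of it, with stand-ins I chose for illustration (toy finite-field parameters instead of GOST elliptic curves, SHA-256 instead of the GOST key-derivation step; deliberately insecure sizes):

```python
import hashlib
import secrets

# Toy parameters: a small Mersenne prime and generator, insecure on purpose.
p = 2**127 - 1
g = 3

a = secrets.randbelow(p - 2) + 1   # Alice's private scalar
b = secrets.randbelow(p - 2) + 1   # Bob's private scalar
A = pow(g, a, p)                   # Alice's public value, sent to Bob
B = pow(g, b, p)                   # Bob's public value, sent to Alice

# Each side combines its own private scalar with the other's public
# value and arrives at the same shared secret.
shared_a = pow(B, a, p)
shared_b = pow(A, b, p)
assert shared_a == shared_b

# Like VKO deriving a 256-bit symmetric key from the shared point,
# hash the shared secret down to 32 bytes (SHA-256 as a stand-in KDF).
key = hashlib.sha256(shared_a.to_bytes(16, "big")).digest()
print(len(key) * 8, "bit key")     # 256 bit key
```

The real VKO additionally mixes in a UKM (user keying material) value and uses curve point multiplication, but the shape of the exchange is exactly this.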
PKI is a business model. That download page suggests verifying the downloaded tarballs with an OpenPGP key, or visiting the Git repository and looking for signed (again, OpenPGP) tags there. Of course you have to set up some kind of trust for verifying the keys. If your browser shows you such errors, then it seems you do not trust CAcert.org, which was used to create the certificate. You can retrieve the OpenPGP keys and look for a signature you trust. PKI (HTTPS) is a single point of trust; OpenPGP provides many more.
syncer works at the block level, with raw byte sequences. It knows nothing about filesystems. Everything is limited by the sequential read/write speeds of your hard drives.
It allocates one blocksize-sized memory buffer per CPU in your system. I have 4 CPUs and work with 2 MiB blocks, so the program takes 8 MiB of RAM.
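Under the model stated above (one buffer per CPU, nothing else scaling with drive size), the footprint is just a multiplication; a quick sketch:

```python
import os

# One blocksize-sized buffer per CPU, per the comment above.
blocksize = 2 * 1024 * 1024            # 2 MiB blocks
cpus = os.cpu_count() or 1
buffers_ram = cpus * blocksize
print(f"~{buffers_ram // (1024 * 1024)} MiB of block buffers")
# With 4 CPUs and 2 MiB blocks: 4 * 2 MiB = 8 MiB.
```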
I answered in another comment on that question. The main point, of course, is not the language. ZFS incremental snapshots and its send/receive are great, but my task is to create a binary-identical image of a hard drive, with bootloaders, partitions and so on. I have an SSD that can break, and a slow USB-connected HDD. I want to sync them several times a day and be able to trivially swap them in case of failure. ZFS is only a filesystem: to use ZFS send/receive I would need to prepare the hard drive with bootloaders, and I do not want to. Raw dd is exactly what is needed, but it is slow; that is the only reason this utility was written.
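The core idea that makes such a tool faster than dd can be sketched in a few lines: read both devices block by block and rewrite only the blocks that differ, so a mostly-unchanged drive costs reads but few writes. This is only my illustrative sketch of the compare-and-copy approach, not syncer's actual implementation (paths and block size are hypothetical):

```python
def sync_blocks(src_path: str, dst_path: str,
                blocksize: int = 2 * 1024 * 1024) -> int:
    """Copy differing blocks from src to dst; return how many were rewritten."""
    rewritten = 0
    with open(src_path, "rb") as src, open(dst_path, "r+b") as dst:
        while True:
            src_block = src.read(blocksize)
            if not src_block:
                break                      # end of source
            pos = dst.tell()
            dst_block = dst.read(len(src_block))
            if dst_block != src_block:
                dst.seek(pos)              # rewind and rewrite only this block
                dst.write(src_block)
                rewritten += 1
    return rewritten
```

On real block devices you would open e.g. /dev/ada0 and /dev/da0 the same way; sequential read speed of both drives then becomes the bottleneck, as noted above.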
> ZFS incremental snapshots and its send/receive are great, but my task is to create a binary-identical image of a hard drive, with bootloaders, partitions and so on.
If you use UEFI, the bootloader is just a normal file on a normal (FAT) partition, so no special magic is needed, but fair enough.
ZFS typically handles partitioning its own way, so there is no need to sync that.
But really: if you just want to keep two full drives in sync at the block level, so that either can at any point be a stand-in replacement for the other, it sounds like what you want is RAID0. Isn't that what you want?
And with UEFI you can boot those RAID volumes, even though they are soft-RAID volumes.
Not saying your need doesn't exist, just that existing solutions which are more standardized and require less administration seemingly already exist.
Forgive me, it's Monday morning and I haven't yet had my coffee, but it sounds like you're describing a block-level copy between two drives, something which RAID0 is most definitely not.
Unless I'm misinterpreting, I think you were going for RAID1.
It should be enough to copy the partition table and boot loader only once. After that, it's safer and probably faster to back up with consistent btrfs/zfs snapshots. Copying raw blocks from a mounted, writable filesystem will lead to inconsistent backups.
Agreed, that is the preferred way, after raw dd of an unmounted filesystem. But I need to be prepared for this: I am a human, I can forget some steps. Here the command takes only two arguments (src and dst); simplicity is reliability against human mistakes. Some data will be corrupted, but with journaling filesystems, or something like ZFS, this is not dangerous (unfinished transactions will be ignored).