Run a normal filesystem on top of S3! (Non-shared.)
In theory it might have really good performance, since your kernel caches blocks and files and you get 25Gbit throughput to S3. That depends, of course, on your instance being in EC2, in the right region, and big enough to get the 25Gbit network.
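One way to do that, as a rough sketch: use something like s3backer, which presents the bucket as a single block-device-style file you can format with ext4, so the kernel's page cache sits in front of S3. Bucket name, sizes, and mount points below are placeholders:

    import subprocess

    BUCKET = "my-fs-bucket"           # placeholder bucket and mount points
    BACKER_MNT = "/mnt/s3backer"      # s3backer exposes one big virtual "file" here
    DATA_MNT = "/mnt/data"            # where the real filesystem ends up

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # 1. Present the bucket as a 100 GiB virtual block device backed by S3 objects.
    run(["s3backer", "--blockSize=1M", "--size=100G", BUCKET, BACKER_MNT])

    # 2. Put an ordinary filesystem on it (-F because it's a regular file, not a device).
    run(["mkfs.ext4", "-F", f"{BACKER_MNT}/file"])

    # 3. Loop-mount it; from here on reads and writes go through the kernel page cache.
    run(["mount", "-o", "loop", f"{BACKER_MNT}/file", DATA_MNT])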
I tried AWS EFS and found the performance very sad. Like 100Mbit even with a 25Gbit instance and the highest-specced EFS filesystem.
EFS performance was tied to how much storage you were using, at a rate of 50KiB/s of baseline throughput per GiB stored, with the ability to burst to 100MiB/s based on a credit system. A file system storing 256GiB can sustain 12.5MiB/s and burst to 100MiB/s for up to 180 minutes per day.
If you like, you can now specifically provision throughput for EFS at the rate of about $6 per MiB/s (8Mbps) per month.
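For what it's worth, the burst math and the provisioned pricing above hang together; a quick back-of-the-envelope check:

    # Back-of-the-envelope check of the EFS numbers above.
    stored_gib = 256                 # example filesystem size
    baseline_per_gib_kib_s = 50      # baseline throughput per GiB stored
    burst_mib_s = 100                # burst ceiling

    baseline_mib_s = stored_gib * baseline_per_gib_kib_s / 1024
    print(f"baseline: {baseline_mib_s:.1f} MiB/s")               # 12.5 MiB/s

    # Credits accrue at the baseline rate and drain at the burst rate, so the
    # sustainable burst budget per day is just the ratio of the two rates.
    burst_minutes = (baseline_mib_s / burst_mib_s) * 24 * 60
    print(f"burst budget: {burst_minutes:.0f} min/day")          # 180 min

    # Provisioned throughput at ~$6 per MiB/s-month: matching the 100 MiB/s
    # burst ceiling around the clock would cost roughly $600/month.
    print(f"provisioned 100 MiB/s: ~${burst_mib_s * 6}/month")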
I provisioned throughput on the EFS filesystem to the largest allowed value and still got crappy performance. Maybe it was the access pattern of the data, but large bulk file transfer performance was no higher than 100Mbit on a 25Gbit instance...
I switched to Syncthing, with the main desktop backed up to S3 with zbackup (encrypted). In my case Syncthing works really well (and faster!) for keeping laptops and desktops in sync.
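As a rough sketch of that kind of setup (assuming zbackup for the encrypted, deduplicated archive and the AWS CLI to push the repository to S3; repo path, bucket, and password file are placeholders):

    import subprocess

    REPO = "/backups/zbackup-repo"            # placeholder paths and bucket
    PASSWORD_FILE = "/root/.zbackup-pass"
    BUCKET = "s3://my-backup-bucket/zbackup-repo"

    def run(cmd, **kw):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True, **kw)

    # One-time: create an encrypted zbackup repository.
    run(["zbackup", "init", "--password-file", PASSWORD_FILE, REPO])

    # Each run: stream a tar of the data into a new named backup. zbackup
    # deduplicates and encrypts the chunks it writes into the repository.
    tar = subprocess.Popen(["tar", "-cf", "-", "/home/me"], stdout=subprocess.PIPE)
    run(["zbackup", "backup", "--password-file", PASSWORD_FILE,
         f"{REPO}/backups/home-latest"], stdin=tar.stdout)
    tar.wait()

    # The repository is already ciphertext, so shipping it to S3 is just a sync.
    run(["aws", "s3", "sync", REPO, BUCKET])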
The one killer feature I wish Syncthing had was untrusted peers like Resilio has. There has been an issue[0] open since 2014 but I guess there are still major challenges to this feature.
It would be great for me to be able to specify a certain node as encrypted and read-only like I can in Resilio. If a friend wanted to store his files on my machine and vice-versa, we could do so without being forced to give access to each other's plaintext files. Likewise, if I wanted to spin up a VPS and host an untrusted node to help facilitate syncing but didn't want my unencrypted files to be sitting on the disk, I could do so easily.
Personally I was a Spideroak customer for 2 years or so and moved to Tresorit after the canary thing. They offer very similar features but their apps are a lot better. Spideroak always felt quite clunky. Also, the servers are based in Europe.
I don't think either is great to use with lots of data as upload and download seem quite slow, but for my use case of just storing documents it is fine.
SpiderOak has some serious performance issues if you have a lot of data. I just wanted to back up a few hundred GB of data but the client wouldn't do more than ~10Mbit/s on my gigabit line.
Also worth noting their "zero-knowledge" stuff is broken as soon as you log in to the website or use one of the mobile apps, at which point the server has your key.
If you want to have encryption with Dropbox, we just launched FileSafe at Standard Notes. It encrypts files client side then uploads to Dropbox (or any WebDAV server).
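Not FileSafe's actual code, but the general pattern (encrypt locally, then push only ciphertext over WebDAV) looks roughly like this, using Python's cryptography library and a hypothetical WebDAV endpoint:

    import requests
    from cryptography.fernet import Fernet

    # Hypothetical endpoint and credentials -- substitute your own WebDAV server.
    WEBDAV_URL = "https://dav.example.com/remote.php/webdav/"
    AUTH = ("user", "app-password")

    # The key never leaves the client; the server only ever sees ciphertext.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    def upload_encrypted(local_path, remote_name):
        with open(local_path, "rb") as f:
            ciphertext = fernet.encrypt(f.read())
        # A WebDAV upload is just an HTTP PUT to the target path.
        resp = requests.put(WEBDAV_URL + remote_name, data=ciphertext, auth=AUTH)
        resp.raise_for_status()

    upload_encrypted("taxes-2017.pdf", "taxes-2017.pdf.enc")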
SpiderOak looks interesting. The biggest features for me missing from other "sync a folder to the cloud" type services are selective syncing on a machine-by-machine basis and generating public links for my files. These are the only two things keeping me away from iCloud Drive.
Their main business is personal backup, so they would be remiss to not have a good restore feature.
Each of your computers is associated with a "deleted items bin" where files go when the backup finds you delete them locally. They never get automatically deleted (even when you've reached your backup quota? the website isn't clear).
Does this work for mounting Dropbox for Business folders? If so, this solves a significant problem for Linux users of Dropbox (whose official Linux client does not support account switching).
If you are talking about having multiple Dropbox accounts on Linux, here's how it goes. You may not see two systray icons, but both Dropbox daemons work in the background.
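A rough sketch of one common way to do it, assuming the standard dropboxd daemon and a second, alternate HOME directory for the second account (all paths here are placeholders):

    import os
    import subprocess

    # The second account gets its own HOME so it keeps separate config and state.
    ALT_HOME = os.path.expanduser("~/.dropbox-alt")                 # placeholder
    DAEMON = os.path.join(ALT_HOME, ".dropbox-dist", "dropboxd")    # assumes a second install here

    os.makedirs(ALT_HOME, exist_ok=True)

    # First run will ask you to link the second account; after that it just syncs
    # in the background alongside the primary daemon.
    subprocess.Popen([DAEMON], env=dict(os.environ, HOME=ALT_HOME))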
In light of what Storj.io is about to bring to market, I'm curious to see if this could be modified to work with it. I'm a huge fan of Dropbox and have used it almost every day for the past 7 years, but if a decentralized option comes along with feature parity and comparable pricing, I'd switch in a heartbeat.