The issue here is not refusing to use a foreign third party. That makes sense.

The issue is mandating the use of remote storage and not backing it up. That’s insane. It’s like the most basic amount of preparation you do. It’s recommended to even the smallest of companies specifically because a fire is a risk.

That’s gross mismanagement.



This. Speaking specifically from the IT side of things, an employer or customer refusing to do backups is the biggest red flag I can get, an immediate warning to run the fuck away before you get blamed for their failure, stego-tech kind of situation.

That being said, I can likely guess where this ends up going:

* Current IT staff and management are almost certainly scapegoated for “allowing this to happen”, despite the program in question (G-DRIVE) existing since 2017 in some capacity.

* Nobody in government will question sufficiently what technical reason is/was given to justify the lack of backups and why that was never addressed, why the system went live with such a glaring oversight, etc, because that would mean holding the actual culprits accountable for mismanagement

* Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

* The major cloud providers will likely win several contracts for “temporary services” that in actuality strip the sovereignty the government had in managing its own system, even if they did so poorly

* Other countries will use this to justify outsourcing their own sovereign infrastructure to private enterprise

This whole situation sucks ass because nothing good is likely to come of this, other than maybe a handful of smart teams led by equally competent managers using this to get better backup resources for themselves.


> * Everyone involved is unlikely to find work again anytime soon once names are bandied about in investigations

They might (MIGHT) get fired from their government jobs, but I'll bet they land in consulting shops because of their knowledge of how the government's IT teams operate.

I'll also bet the internal audit team slides out of this completely unscathed.


> I'll also bet the internal audit team slides out of this completely unscathed.

They really, really shouldn't. However, if they were shouted down by management (an unfortunately common experience) then it's on management.

The trouble is that you can either be effective at internal audit or popular, and lots of CAEs choose the wrong option (but then, people like having jobs, so I dunno).


Likely it wasn't even (direct) management, but the budgeting handled by politicians and/or political appointees.


Which begs the question: does N Korea have governmental whistle-blower laws and/or services?

Also, internal audit isn't supposed to be the only audit; it's effectively pre-audit prep for the external audit. And the first thing an external auditor should do is ask them probing questions about their systems and processes.


Wrong Korea, this is South Korea


I have never been to the DPRK, but based on what I've read, I wouldn't even press the "report phishing" button in my work email or do any task at work I was not absolutely required to do, much less go out of my way to be a whistleblower.


That's true, but by their nature, external audits are rarer so one would have expected the IA people to have caught this first.


I abhor the general trend of governments outsourcing everything to private companies, but in this case, a technologically advanced country’s central government couldn’t even muster up the most basic of IT practices, and as you said, accountability will likely not rest with the people actually responsible for this debacle. Even a nefarious cloud services CEO couldn’t dream up a better sales case for the wholesale outsourcing of such infrastructure in the future.


I'm with you. It's really sad that this provides such a textbook case of why not to own your own infrastructure.

Practically speaking, I think a lot of what is offered by Microsoft, Google, and the other big companies selling into this space is vastly overpriced and way too full of lock-in, but taking this stuff in-house without sufficient know-how and maturity is even more foolish.

It's like not hiring professional truck drivers, but instead of at least hiring people who can basically drive a truck, hiring someone who doesn't even know how to drive a car.


If this is true, every government should subsidize competitors in their own country to drive down costs.


Aside from data sovereignty concerns, I think the best rebuttal to that would be to point out that every major provider contractually disclaims liability for maintaining backups.

Now, sure, there is AWS Backup and Microsoft 365 Backup. Nevertheless, those are backups in the same logical environment.

If you’re a central government, you still need to be maintaining an independent and basically functional backup that you control.

I own a small business of three people and we still run Veeam for 365 and keep backups in multiple clouds, multiple regions, and on disparate hardware.


One side effect of the outsourcing strategy is underfunding internal tech teams, which then makes them less effective at both competing against and managing outsourced capabilities.


There's a pretty big possibility it comes down to acquisition and cost saving by the politicians in charge of the purse strings. I can all but guarantee that the systems administrators and even technical managers suggested, recommended, and all but begged for the resources for a redundant/backup system in a separate physical location, and were denied because it would double the expense.

This isn't to preclude major ignorance in terms of those in the technology departments themselves. Having worked in/around govt projects a number of times, you will see some "interesting" opinions and positions. Especially around (mis)understanding security.


By definition if one department is given a hard veto, then there will always be a possibility that all the combined work of all other departments can amount to nothing, or even have a net negative impact.

The real question then is more fundamental.


I mean - it should be part of the due diligence of any competent department trying to use this G-Drive. If it says there are no backups, that means it could only be used as temporary storage, maybe as a backup destination.

It's negligence all the way, not just by the G-Drive designers, but by the customers as well.


Backups should be far away, too. Apparently some companies lost everything on 9/11 because their backups were in the other tower.


Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.


Jersey City was still fine, and 50 miles can be problematic for certain types of backup (failover) protocols. Regular tape backups would be fine, but secondary databases couldn't be that far away (at least not at the time). I remember my boss at WFC saying that the most traffic over the data lines was in the middle of the night due to backups, not when everybody was in the office.


Companies big enough will lay the fibre. 50-100 miles of fibre isn't much if you are a billion-dollar business. Even companies like BlackRock, who had their own datacenters, have since taken up Azure. 50 miles of latency is negligible, even for databases.


The WTC attacks were in the 90s and early 00s, and back then 50 miles of latency was anything but negligible, and Azure didn't exist.

I know this because I was working on online systems back then.

I also vividly remember 9/11 and the days that followed. We had a satellite dish with multiple receivers (which wasn't common back then) so had to run a 3rd party Linux box to descramble the signal. We watched 24/7 global news on a crappy 5:4 CRT running Windows ME during the attack. Even in the UK, it was a somber and sobering experience.


For backups, latency is far less an issue than bandwidth.

Latency is defined by physics (speed of light, through specific conductors or fibres).

Bandwidth is determined by technology, which has advanced markedly in the past 25 years.

Even a quarter century ago, the bandwidth of a station wagon full of tapes was pretty good, even if the latency was high. Physical media transfer to multiple distant points remains a viable back-up strategy should you happen to be bandwidth-constrained in realtime links. The media themselves can be rotated / reused multiple times.
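
Rough back-of-the-envelope for the station-wagon point; every figure below is an illustrative assumption, not a real number for any particular setup:

    # Rough sneakernet-vs-fibre comparison; all inputs are illustrative assumptions.
    TAPE_CAPACITY_TB = 18       # roughly an LTO-9 cartridge, native
    TAPES_IN_VEHICLE = 500      # assume a station wagon / van full of tapes
    DRIVE_HOURS = 6             # assumed door-to-door transport time

    payload_bits = TAPES_IN_VEHICLE * TAPE_CAPACITY_TB * 1e12 * 8
    effective_gbps = payload_bits / (DRIVE_HOURS * 3600) / 1e9

    print(f"Payload: {TAPES_IN_VEHICLE * TAPE_CAPACITY_TB / 1000:.0f} PB")
    print(f"Effective bandwidth: {effective_gbps:,.0f} Gbit/s, latency: {DRIVE_HOURS} h")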

Various cloud service providers have offered such services, effectively a datacentre-in-a-truck, which loads up current data and delivers it, physically, to an off-site or cloud location. A similar current offering from AWS is data transfer terminals: <https://techcrunch.com/2024/12/01/aws-opens-physical-locatio...>. HN discussion: <https://news.ycombinator.com/item?id=42293969>.

Edit to add: from the above HN discussion Amazon retired their "snowmobile" truck-based data transfer service in 2024: <https://www.datacenterdynamics.com/en/news/aws-retires-snowm...>.


I’ve covered those points already in other responses. It’s probably worth reading them before assuming I don’t know the differences between the most basic of networking terms.

I was also specifically responding to the GP's point about latency for DB replication. For backups, one wouldn't have used live replication back then (nor even now, outside of a few enterprise edge cases).

Snowmobile and its ilk were hugely expensive services, by the way. I've spent a fair amount of time migrating broadcasters and movie studios to AWS, and it was always cheaper and less risky to upload petabytes from the data centre than it was to ship HDDs to AWS. So after conversations with our AWS account manager and running the numbers, we always ended up just uploading the stuff ourselves.

I’m sure there was a customer who benefited from such a service, but we had petabytes and it wasn’t us. And anyone I worked with who had larger storage requirements didn’t use vanilla S3, so I can’t see how Snowmobile would have worked for them either.


The laws of physics haven't changed since the early 00s though; we could build very low latency point-to-point links back then too.


Switching gear was slower and laying new fibre wasn't an option for your average company. Particularly not point-to-point between your DB server and your replica.

So if real-time synchronization isn't practical, you are then left to do out-of-hours backups and there you start running into bandwidth issues of the time.


Never underestimate the potential packet loss of a Concorde filled with DVDs.


Plus long distance was mostly fibre already. And even regular electrical wires aren't really much slower than fibre in terms of latency. Parent probably meant bandwidth.


Copper doesn't work over these kinds of distances without powered switches, which adds latency. And laying fibre over several miles would be massively expensive. Well outside the realm of all but the largest of corporations. There's a reason buildings with high bandwidth constraints huddle near internet backbones.

What used to happen (and still does as far as I know, but I've been out of the networking game for a while now) is you'd get fibre laid between yourself and your ISP. So you're then subject to the latency of their networking stack. And that becomes a huge problem if you want to do any real-time work like DB replicas.

The only way to do automated off-site backups was via overnight snapshots. And you're then running into the bandwidth constraints of the era.

What most businesses ended up doing was tape backups and then physically driving them to another site -- ideally then storing them in a fireproof safe. Only the largest companies could afford to push it over fibre.


To be fair, tape backups are very much OK as a disaster recovery solution. It's cheap once you have the tape drive. Bandwidth is mostly fine if you read them sequentially. It's easy to store and handle and fairly resistant.

It's "only" poor if you need to restore some files in the middle or want your backup to act as a failover solution to minimise unavailability. But as a last-resort solution in case of total destruction, it's pretty much unbeatable cost-wise.

G-Drive was apparently storing less than 1PB of data. That's less than 100 tapes. I guess some files were fairly stable, so it would be completely manageable with a dozen tape drives, delta storage and proper rotation. We are talking about a budget of what, $50k to $100k. That's peanuts for a project of this size. Plus the tech has existed for ages and I guess you can find plenty of former data center employees with experience handling this kind of setup. They really have no excuse.
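
A rough sizing sketch of that estimate, with assumed cartridge capacity and street prices (not quotes):

    # Rough sizing for backing up ~1 PB to tape; capacities and prices are
    # illustrative assumptions, not quotes.
    import math

    DATA_TB = 1000              # ~1 PB, per the figure above
    TAPE_NATIVE_TB = 18         # assumed native capacity per cartridge
    TAPE_PRICE_USD = 100        # assumed street price per cartridge
    DRIVES = 12                 # the "dozen tape drives" above
    DRIVE_PRICE_USD = 5_000     # assumed price per drive

    tapes = math.ceil(DATA_TB / TAPE_NATIVE_TB)
    media = tapes * TAPE_PRICE_USD
    hardware = DRIVES * DRIVE_PRICE_USD
    print(f"Cartridges for one full copy: {tapes}")
    print(f"Media ${media:,} + drives ${hardware:,} = ${media + hardware:,} "
          f"(before rotation and off-site copies)")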


The suits are stingy when it's not an active emergency. A former employer declined my request for $2K for a second NAS to replicate our company's main data store. This was just days after a harrowing recovery of critical data from a failing WD Green that was never backed up. Once the data was on a RAID mirror and accessible to employees again, there was no active emergency, and the budget dried up.


I don't know. I guess that for all intents and purposes I'm what you would call a suit nowadays. I'm far from a big shot at my admittedly big company, but $50k is pretty much pocket change on this kind of project. My cloud bill has more yearly fluctuation than that. Next to the cost of employees, it's nothing.


> There's a reason buildings with high bandwidth constraints huddle near internet backbones.

Yeah because interaction latency matters and legacy/already buried fiber is expensive to rent so you might as well put the facility in range of (not-yet-expensive) 20km optics.

> Copper doesn't work over these kinds of distances without powered switches, which adds latency.

You need a retimer, which adds on the order of 5~20 bits of latency.

> And that becomes a huge problem if you want to do any real-time work like DB replicas.

Almost no application would actually require "zero lost data", so you could get away with streaming a WAL or other form of reliably-replayable transaction log and capping it to an acceptable number of milliseconds of data-loss window before applying blocking back pressure. Usually it'd be easy to tolerate a window of around 3 RTTs, which is what you'd really want to keep anyway to cover all usual packet loss without triggering back pressure.
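
A minimal sketch of that idea, assuming hypothetical send/ack transport hooks rather than any particular database's replication API:

    # Minimal sketch of bounded-lag log shipping with back pressure.
    # send_record / latest_acked_lsn are hypothetical transport hooks,
    # not any real database's replication API.
    import time

    MAX_LOSS_WINDOW_S = 0.050   # accept at most ~50 ms of unacknowledged log

    class BoundedLagShipper:
        def __init__(self, send_record, latest_acked_lsn):
            self.send_record = send_record            # ships one log record to the replica
            self.latest_acked_lsn = latest_acked_lsn  # returns highest LSN the replica confirmed
            self.unacked = []                         # (lsn, send_time) of in-flight records

        def on_commit(self, lsn, record):
            """Called on the primary for every committed transaction."""
            self.send_record(lsn, record)
            self.unacked.append((lsn, time.monotonic()))
            self._drop_acked()
            # Back pressure: block new commits while the oldest unacked record
            # is older than the acceptable data-loss window.
            while self.unacked and time.monotonic() - self.unacked[0][1] > MAX_LOSS_WINDOW_S:
                time.sleep(0.001)
                self._drop_acked()

        def _drop_acked(self):
            acked = self.latest_acked_lsn()
            self.unacked = [(l, t) for (l, t) in self.unacked if l > acked]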

Sure, such a setup isn't cheap, but it's (for a long while now) cheaper than manually fixing the data from the day your primary burned down.


Yes but good luck trying to get funding approval. There is a funny saying that wealthy people don't become wealthy by giving their wealth away. I think it applies to companies even more.


In the US, dark fiber will run you around $100k/mile. That's expensive for anyone, even if they can afford it. I worked in HFT for 15 years and we had tons of it.


DWDM per-wavelength costs are way, way lower than that, and, with the optional addition of encryption, perfectly secure and fast enough for disk replication for most storage farms. I've been there and done it.


Assuming that dark fiber is actually dark (without amplifiers/repeaters), I'd wonder how they'd justify the 4 orders of magnitude (99.99%!) profit margin on said fiber. That already includes one order of magnitude between the 12th-of-a-ribbon clad-fiber and opportunistically (when someone already digs the ground up) buried speed pipe with 144-core cable.


Google the term “high frequency trading”


So that's 5 million bucks for 50 miles? If there are other costs not being accounted for, like paying for the right-of-way that's one thing, but I would think big companies or in this case, a national government, could afford that bill.


Yeah, most large electronic finance companies do this. Look up "the sniper in Mahwah" for some dated but really interesting reading on this game.


> Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.

IIRC, multiple IBM mainframes can be set up so they run and are administered as a single system for DR, but there are distance limits.


A Geographically-Dispersed Parallel Sysplex for z/OS mainframes, which IBM has been selling since the '90s, can have redundancy out to about 120 miles.

At a former employer, we used a datacenter in East Brunswick, NJ that had mainframes in a sysplex with partners in lower Manhattan.


If you have to mirror synchronously, the _maximum_ distances for other systems (e.g. storage mirroring with NetApp SnapMirror Synchronous, IBM PPRC, EMC SRDF/S) are all in this range.

But an important factor is that performance will degrade with every microsecond of latency added, as the active node for the transaction has to wait for the acknowledgement of the mirror node (~2*RTT). You can mirror synchronously over that distance, but the question is whether you can accept the impact.

That's not to say that one shouldn't create a replica in this case. If necessary, mirror synchronously to a nearby DC and asynchronously to a remote one.

For now, we only know the sad consequences.
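
A back-of-the-envelope for that added latency, assuming roughly 200 km/ms propagation in fibre and a small fixed per-write overhead (both assumptions):

    # Back-of-the-envelope: latency added per synchronous write over distance.
    # The ~200 km/ms propagation speed in fibre and the fixed overhead are assumptions.
    FIBRE_KM_PER_MS = 200.0

    def sync_write_penalty_ms(distance_km, round_trips=2, overhead_ms=0.1):
        one_way_ms = distance_km / FIBRE_KM_PER_MS
        return round_trips * 2 * one_way_ms + overhead_ms

    for km in (5, 40, 80, 200):   # ~3 mi, ~25 mi, ~50 mi, ~120 mi
        print(f"{km:>4} km: ~{sync_write_penalty_ms(km):.2f} ms added per synchronous write")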


The actual distance involved in the case of the Brunswick DC is closer to 25 miles to Wall St.; but yes, latency for this is always paramount.


>Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.

I was told right after the bombing, by someone with a large engineering firm (Schlumberger or Bechtel), that the bombers could have brought the building down had they done it right.


Funnily enough, Germany has laws for where you are allowed to store backups, exactly due to these kinds of issues. Fire, flood, earthquake, tornadoes, you name it: backups need to be stored with appropriate security in mind.


Germany, of course. Like my company needs government permission to store backups.


More like: your company (or government agency) is critical infrastructure or of a certain size, so there are obligations on how you maintain your records. It’s not like the US or other countries don’t have similar requirements.


[flagged]


> This is incredible. Government telling me how to backup my data. Incredible.

No more incredible than the government telling you that you need liability insurance in order to drive a car. Do you think that is justifiable?


The difference is that you cannot choose who you're sharing a road with while you can usually choose your IT service providers. You could, for instance, choose a cheaper provider and make your own backups or simply accept that you could lose your data.

Where people have little or no choice (e.g government agencies, telecoms, internet access providers, credit agencies, etc) or where the blast radius is exceptionally wide, I do find it justifiable to mandate safety and security standards.


> you cannot choose who you're sharing a road with while you can usually choose your IT service providers

You can choose where to eat, but the government still carries out food health and safety inspections. The reason is that it isn't easy for customers to observe these things otherwise. I think the same applies to corporate data handling & storage.


It's a matter of balance. Food safety is potentially about life and death. Backups not so much (except in very specific cases where data regulation is absolutely justifiable).

If any legislation is passed regarding data, I would prefer a broader rule that covers backup as well as interoperability/portability.


Losing data is mostly(*) fine if you are a small business. If a major bank loses its data, it is a major problem, as it may impact a huge number of customers in an existential way when all their money is "gone".

(*) From the state's perspective there is still a problem: tax audits. Bad if everybody avoids them by "accidental" data loss.


As I said, a wide blast radius is a justification and banks are already regulated accordingly. A general obligation to keep financial records exists as well.


> liability insurance in order to drive a car. Do you think that is justifiable?

New Zealand doesn't require car insurance, and I presume there are other countries with governments that don't either.

I suspect most people in NZ would only have a sketchy idea of what liability is, based on learning from US TV shows.


It seems New Zealand is one of very few countries where that is the case, and that's because you guys have a government scheme that provides equivalent coverage for personal injury without being a form of insurance (ACC). As far as I understand, part of the registration fees you pay go to ACC. I would argue this is basically a mandatory insurance system with another name.


Australia is the same. It's part of the annual car registration cost.


Nope: the other way around. If you are of a certain size, you are required to meet certain criteria. NIS-2 is the EU directive, and it more or less maps to ISO 27001, which includes risk management against physical catastrophes. https://www.openkritis.de/eu/eu-nis-2-germany.html

Of course you can do backups if you are smaller, or comply with such a standard if you so wish.


[flagged]


Is it? It would be incredible if the government didn’t have specific requirements for critical infrastructure.

Say you're an energy company and an incident could mean that a big part of the country is without power, or you're a large bank and you can't process payroll for millions of workers. Their ability to recover quickly and completely matters. Just recently in Australia, an incident at Optus, a large phone company, prevented thousands of people from making emergency calls for several hours. Several people died, including a child.

The people should require these providers behave responsibly. And the way the people do that is with a government.

Companies behave poorly all the time. Red tape isn’t always bad.


I'm usually first in line when talking shit about the German government, but here I am absolutely for this. I was really positively surprised when I had my apprenticeship at a publishing company and we had a routine of bringing physical backups to the cellar of a post office every morning. The company wasn't that up to date with most things, but here they were forced into a proper procedure, which totally makes sense. They even had proper disaster recovery strategies that included being back online within less than 2 hours even after a 100% loss of all hardware. They had internal jokes that you could have nuked their building, and as long as one IT guy survived because he was working from home, he could at least bring the software back up within a day.


It's incredible knowing the bureaucracy of Germany.


Considering that companies will do everything to avoid doing sensible things that cost money - yes, of course the government has to step in and mandate things like this.

It's no different from safety standards for car manufacturers. Do you think it's ridiculous that the government tells them how to build cars?

And similarly here: If the company is big enough / important enough, then the cost to society if their IT is all fucked up is big enough that the government is justified in ensuring minimum standards. Including for backups.


It’s government telling you the minimum you have to do. There is nothing incredible there.

It makes sense that as economic operators become bigger, as the impact of their potential failure grows on the rest of the economy, they have to become more resilient.

That’s just the state forcing companies to take externalities into account which is the state playing its role.


Well, given that way too many companies in the critical infrastructure sector don't give a fuck about how to keep their systems up, and we have been facing a hybrid war from Russia for the last few years that is expected to escalate into a full-on NATO hot war in a few years, yes, it absolutely does make sense for the government to force such companies to be resilient against Russians.

Just because whatever country you are in doesn't have to prepare for a hot war with Russia doesn't mean we don't have to. When the Russians come in and attack, hell, even if they "just" attack Poland with tanks and the rest of us with cyber warfare, the last thing we need is power plants, telco infra, traffic infrastructure or hospitals going out of service because their core systems got hit by ransomware.


> it absolutely does make sense for the government to force such companies

Problem is, a) governments are infiltrated by russian assets and b) governments are known to enforce detrimental IT regulations. Germany especially so.

> power plants, telco infra, traffic infrastructure or hospitals

Their systems _will_ get hit by ransomware or APTs. It is not possible to mandate common sense or proper IT practices, no matter how strict the law. See the recent incident in South Korea with a burned-down data center with no backups.


> Problem is, a) governments are infiltrated by russian assets and b) governments are known to enforce detrimental IT regulations. Germany especially so.

The regulations are a framework called "BSI Grundschutz" and all parts are freely available for everyone [1]. Even if our government were fully corrupted by Russia like Orban's Hungary - just look at the regulations at face value and tell me what exactly you would see as "detrimental" or against best practice?

> It is not possible to mandate common sense or proper IT practices, no matter how strict the law. See the recent incident in South Korea with burned down data center with no backups.

I think it actually is. The BSI Grundschutz criteria tend to feel "checkboxy", but if you tick all the checkboxes you'll end up with a pretty resilient system. And yes, I've been on the implementing side.

The thing is, even if you're not fully compliant with BSI Grundschutz... if you just follow parts of it in your architecture, your security and resilience posture is already much stronger than much of the competition.

[1] https://www.bsi.bund.de/DE/Themen/Unternehmen-und-Organisati...


Government isn’t perfect but I’d be interested to know what alternative you propose?


a) Incarceration time for IT execs and responsible engineers.

b) Let companies go out of business once they fail to protect their own crucial data.

None of that is possible.


Responsible for what? If the government does not mandate any behavior, what basis does it have to incarcerate anyone?


Those are only punishments, which are shown not to work. Solutions are needed.


so you are not proposing anything real then? I can pull "magic indestructible backup solution" out of my arse, too :(


No propositions at this point. I have no idea how to fix the problem.


It feels like you are being obtuse/arguing in bad faith. Of course there are standards on backups. Most countries have them.

Let's think what regulations does the 'free market' bastion US have on computer systems and data storage...

HIPAA, PCI DSS, CIS, SOC, FIPS, FINRA...


> HIPAA, PCI DSS, CIS, SOC, FIPS, FINRA

Those are related to _someone else's_ data handling.


Those set standards for a variety of things, including how you architect your own systems to protect against data loss from a variety of different causes.


(Without knowing the precise nature of these laws) I would expect that they don't forbid you to store backups elsewhere. It's just that they mandate that certain types of data be backed up in sufficiently secure and independent locations. If you want to have an additional backup (or backups of data not covered by the law) in a more convenient location, you still can.


> sufficiently secure and independent locations

This kind of provision requires enforcement and verification, and thus a tech spec for the backup procedure. Knowing Germany well enough, I'd say that this tech spec would be detrimental to the actual safety of the backup.


wild speculation and conjecture


Not wild.

When you live in Germany and are asked to send a FAX (and not an email, please). Or a digital birth certificate is not accepted until you come with lawyers, or banks are not willing to operate with Apple Pay, just to name a few.

Speculation, yes, but not at all wild.


I'm German and in my 45 years of being so have never been required to send a fax. Snail mail yes, but never a fax.


Agree. It is based on my experience with German bureaucracy.


Certain data records need to be legally retained for certain amounts of time; Other sensitive data (e.g. PII) have security requirements.

Why wouldn't government mandate storage requirements given the above?


No, it doesn't. It does, however, need to follow the appropriate standards commensurate with your size and criticality. Feel free to exceed them.


They deserved to lose everything... except the human lives, of course.

That's like storing lifeboats in the bilge section of the ship, so they won't get damaged by storms.


Nothing increases the risk of servers catching fire like government investigators showing up to investigate allegations that North Korea hacked the servers.


Or investigations into a major financial scandal in a large French bank!

(While the Credit Lyonnais was investigated in the 90s, both the HQ and the site where they stored their archives were destroyed by fire within a few months)


>This file contains the complete set of papers, except for a number of secret documents, a few others which are part of still active files, some correspondence lost in the floods of 1967...

>Was 1967 a particularly bad winter?

>No, a marvellous winter. We lost no end of embarrassing files.


Yes, Minister! A great show that no one in the US has heard of, which is a shame.


"We must do something --> this is something --> We must do this!"


It _almost_ sounds like you're suggesting the fire was deliberate!


It is very convenient timing


> The issue here is not refusing to use a foreign third party. That makes sense.

For anyone else who's confused, G-Drive means Government Drive, not Google Drive.


> The issue here is not refusing to use a foreign third party. That makes sense.

Encrypt before sending to a third party?


Of course you'd encrypt the data before uploading it to a third party, but there's no reason why that third party should be under the control of a foreign government. South Korea has more than one data center they can store data inside of; there's no need to trust other governments with every byte of data you've gathered, even if there are no known backdoors or flaws in your encryption mechanism (which I'm sure some governments have been looking into for decades).


There is a reason that NIST recommends new encryption algorithms from time to time. If you get a copy of ALL government data, in 20 years you might be able to break the encryption and get access to ALL government data from 20 years ago, no matter how classified it was, if it was stored in that cloud. Such data might still be valuable, because not all data is published after some period.


That doesn't sound like a good excuse to me.

AES-128 has been the formal standard for 23 years. The only "foreseeable" event that could challenge it is quantum computing. The likely post-quantum replacement is ... AES-256, which is already a NIST standard. NIST won't replace AES-256 in the foreseeable future.

All that aside, there is no shortage of ciphers. If you are worried about one being broken, chain a few of them together.

And finally, no secret has to last forever. Western governments tend to declassify just about everything after 50 years. After 100 everyone involved is well and truly dead.
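
A minimal sketch of that chaining idea using the Python "cryptography" package: two independent keys, AES-256-GCM inside ChaCha20-Poly1305. Key management and nonce storage are deliberately simplified for illustration.

    # Minimal sketch of layering two independent ciphers so that breaking one is
    # not enough. Uses the Python "cryptography" package; key management, nonce
    # storage and key separation are simplified for illustration.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM, ChaCha20Poly1305

    def double_encrypt(plaintext: bytes):
        k1, k2 = AESGCM.generate_key(bit_length=256), ChaCha20Poly1305.generate_key()
        n1, n2 = os.urandom(12), os.urandom(12)
        inner = AESGCM(k1).encrypt(n1, plaintext, None)         # layer 1: AES-256-GCM
        outer = ChaCha20Poly1305(k2).encrypt(n2, inner, None)   # layer 2: ChaCha20-Poly1305
        return outer, {"k1": k1, "n1": n1, "k2": k2, "n2": n2}  # keys kept separately in practice

    def double_decrypt(outer: bytes, keys: dict) -> bytes:
        inner = ChaCha20Poly1305(keys["k2"]).decrypt(keys["n2"], outer, None)
        return AESGCM(keys["k1"]).decrypt(keys["n1"], inner, None)

    blob, keys = double_encrypt(b"backup bytes")
    assert double_decrypt(blob, keys) == b"backup bytes"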


That's going away. We are seeing fewer deprecations of crypto algorithms over time, AFAICT. The mathematical foundations are becoming better understood, and the implementations' assurance levels are improving too. I think we are going up the bathtub curve here.

The value of said data diminishes with time too. You can totally do an off-site cloud backup with mitigation fallbacks should another country become unfriendly. Hell, shard them such that you need n-of-m backups to reconstruct and host each node in a different jurisdiction.
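
A minimal sketch of the n-of-m idea: a toy Shamir split over a prime field. A real deployment would use a vetted library, and you'd share the encryption key rather than the data itself.

    # Toy k-of-n secret sharing (Shamir) for splitting an encryption key across
    # jurisdictions. Educational only: a real deployment would use a vetted
    # library, and you would share the key, not the data itself.
    import random

    PRIME = 2**127 - 1   # field modulus; the secret must be smaller than this

    def split(secret, k, n):
        # Random polynomial of degree k-1 with the secret as constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def evaluate(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, evaluate(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    key = random.randrange(PRIME)        # stand-in for (part of) an encryption key
    shares = split(key, k=3, n=5)        # any 3 of 5 jurisdictions can rebuild it
    assert reconstruct(shares[:3]) == key and reconstruct(shares[2:]) == key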

Not that South Korea couldn't have Samsung's Joyent acquisition handle it.


I don't consider myself special; anything I can find, proof assistants using ML will eventually find...


The reason is because better ones have been developed, not because the old ones are "broken". Breaking algos is now a matter of computer flops spent, not clever hacks being discovered.

When the flops required to break an algo exceed the energy available on the planet, items are secure beyond any reasonable doubt.
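
The arithmetic, with a deliberately generous assumed guess rate:

    # Toy brute-force arithmetic; the guess rate is a deliberately generous assumption.
    GUESSES_PER_SECOND = 1e18        # imagine an exascale machine doing nothing else
    SECONDS_PER_YEAR = 3.15e7

    for bits in (128, 256):
        avg_tries = 2 ** (bits - 1)  # on average the key is found halfway through the space
        years = avg_tries / GUESSES_PER_SECOND / SECONDS_PER_YEAR
        print(f"{bits}-bit key: ~{years:.1e} years at {GUESSES_PER_SECOND:.0e} guesses/s")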


If you are really paranoid to that point, you probably wouldn't follow NIST recommendations for encryption algorithms as it is part of the Department of Commerce of the United States, even more in today's context.


Because even when you encrypt the foreign third party can still lock you out of your data by simply switching off the servers.


Would you think that the U.S would encrypt gov data and store on Alibaba's Cloud? :)


Why not?


Because it lowers the threshold for a total informational compromise attack from "exfiltrate 34PB of data from secure govt infrastructure" down to "exfiltrate 100KB of key material". You can get that out over a few days just by pulsing any LED visible from outside an air-gapped facility.


Wait what?


There are all sorts of crazy ways of getting data out of even air-gapped machines, providing you are willing to accept extremely low data rates to overcome attenuation. Even with million-to-one signal-to-noise ratio, you can get significant amounts of key data out in a few weeks.

Jiggling disk heads, modulating fan rates, increasing and decreasing power draw... all are potential information leaks.
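
Some quick numbers, with assumed channel rates, for the 100KB of key material mentioned upthread:

    # How long a slow covert channel needs to leak key material;
    # the channel rates are assumptions for illustration.
    KEY_MATERIAL_BYTES = 100 * 1024      # the "100KB of key material" mentioned upthread

    for bits_per_second in (1, 4, 20):
        days = KEY_MATERIAL_BYTES * 8 / bits_per_second / 86400
        print(f"{bits_per_second:>2} bit/s -> {days:.1f} days")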


> There are all sorts of crazy ways of getting data out of even air-gapped machines.

Chelsea Manning apparently did it by walking in and out of the facility with a CD marked 'Lady Gaga'. Repeatedly

https://www.theguardian.com/world/2010/nov/28/how-us-embassy...


On which TV show?


As of today, there's no way to prove the security of any available cryptosystem. Let me say that differently: for all we know, ALL currently available cryptosystems can be easily cracked by some unpublished techniques. The only sort-of exception to that requires quantum communication, which is nowhere near practicability on the scale required. The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.

On the other hand, some popular cryptosystems that were more common in the past have been significantly weakened over the years by mathematical advances. Those were also based on math problems that were believed to be "hard." (They're still very hard actually, but less so than we thought.)

What I'm getting at is that if you have some extremely sensitive data that could still be valuable to an adversary after decades, you know, the type of stuff the government of a developed nation might be holding, you probably shouldn't let it get into the hands of an adversarial nation-state even encrypted.


> The only evidence we have that the cryptography that we commonly use is actually safe is that it's based on "hard" math problems that have been studied for decades or longer by mathematicians without anyone being able to crack them.

Adding to this...

Most crypto I'm aware of implicitly or explicitly assumes P != NP. That's the right practical assumption, but it's still a major open math problem.

If P = NP then essentially all crypto can be broken with classical (i.e. non-quantum) computers.

I'm not saying that's a practical threat. But it is a "known unknown" that you should assign a probability to in your risk calculus if you're a state thinking about handing over the entirety of your encrypted backups to a potential adversary.

Most of us just want to establish a TLS session or SSH into some machines.


While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.

The current state of encryption is based on math problems many levels harder than the ones that existed a few decades ago. Most vulnerabilities have been due to implementation bugs, and not actual math bugs. Probably the highest profile "actual math" bug is the DUAL_EC_DRBG weakness which was (almost certainly) deliberately inserted by the NSA, and triggered a wave of distrust in not just NIST, but any committee designed encryption standards. This is why people prefer to trust DJB than NIST.

There are enough qualified eyes on most modern open encryption standards that I'd trust them to be as strong as any other assumptions we base huge infrastructure on. Tensile strengths of materials, force of gravity, resistance and heat output of conductive materials, etc, etc.

The material risk to South Korea was almost certainly orders of magnitude greater by not having encrypted backups, than by having encrypted backups, no matter where they were stored (as long as they weren't in the same physical location, obviously).


>While I understand what you're saying, you can extend this logic to such things as faster-than-light travel, over-unity devices, time travel etc. They're just "hard" math problems.

No you can't. Those aren't hard math problems. They're Universe breaking assertions.

This is not the problem of flight. They're not engineering problems. They're not, "perhaps in the future, we'll figure out..".

Unless our understanding of physics is completely wrong, none of those things are ever going to happen.


According to our understanding of physics, which is based on our understanding of maths, the time taken to brute force a modern encryption standard, even with quantum computers, is longer than the expected life of the universe. The likelihood of "finding a shortcut" to do this is in the same ballpark as "finding a shortcut" to tap into ZPE or "vacuum energy" or create wormholes. The maths is understood, and no future theoretical advances can change that. It would involve completely new maths to break these. We passed the "if only computers were a few orders of magnitude faster, it's feasible" point a decade or more ago.


Sorry, I don't think this is true. There is basically no useful proven lower bound on the complexity of breaking popular cryptosystems. The math is absolutely not understood. In fact, it is one of the most poorly understood areas of mathematics. Consider that breaking any classical cryptosystem is in the complexity class NP, since if an oracle gives you the decryption key, you can break it quickly. Well we can't even prove that NP != P, i.e., that there even exists a problem where having such an oracle gives you a real advantage. Actually, we can't even prove that PSPACE != P, which should be way easier than proving NP != P if it's true.


One-time pad is provably secure. But it is not useful for backups, of course.


OTP can be useful, especially for backups. Use a fast random number generator (real, not pseudo) and write its output to fill tape A. XOR the contents of tape A with your backup datastream and write the result to tape B. Store tapes A and B in different locations.
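
A minimal sketch of that two-tape scheme; note os.urandom stands in for a true hardware RNG here, which is an assumption you must not make for a real one-time pad:

    # Minimal sketch of the two-tape scheme above. os.urandom stands in for a
    # true hardware RNG; a real one-time pad needs a physical entropy source,
    # and a keystream must never be reused.
    import os

    CHUNK = 1 << 20   # 1 MiB at a time

    def write_pad_and_ciphertext(backup_path, pad_path, ct_path):
        with open(backup_path, "rb") as src, \
             open(pad_path, "wb") as tape_a, \
             open(ct_path, "wb") as tape_b:
            while chunk := src.read(CHUNK):
                pad = os.urandom(len(chunk))                            # tape A: keystream
                tape_a.write(pad)
                tape_b.write(bytes(a ^ b for a, b in zip(chunk, pad)))  # tape B: data XOR keystream

    def restore(pad_path, ct_path, out_path):
        with open(pad_path, "rb") as tape_a, \
             open(ct_path, "rb") as tape_b, \
             open(out_path, "wb") as out:
            while chunk := tape_b.read(CHUNK):
                pad = tape_a.read(len(chunk))
                out.write(bytes(a ^ b for a, b in zip(chunk, pad)))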


But you have one copy of the key stream. It is not safe. You need at least two places to store at least two copies of the key stream. You cannot store it in a non-friendly cloud (and this thread started from backing up sensitive government data into a cloud owned by another country, possibly an adversarial one).

If you have two physically separate places you could trust with the key stream, you could use them to back up the non-encrypted (or "traditionally" encrypted) data itself, without any OTP.


You may want some redundancy, because needing both tapes increases the risk to your backup. You could just back up more often. You could use 4 locations, so you have redundant keystreams and redundant backup streams. But in general, storing the key stream follows the same necessities as storing the backup or some traditional encryption keys for a backup. And your backup already is a redundancy, and you will usually do multiple backups at intervals, so it really isn't that bad.

Btw, you really really need a fresh keystream for each and every backup. You will have as many keystream tapes as you have backup tapes. Re-using the OTP keystream enables a lot of attacks on OTP, e.g. by a simple chosen plaintext an attacker can get the keystream from the backup stream and then decrypt other backup streams with it. XORing similar backup streams also gives the attacker an idea which bits might have changed.

And there is a difference to storing things unencrypted in two locations: If an attacker, like some evil maid, steals a tape in one location, you just immediately destroy its corresponding tape in the other location. That way, the stolen tape will forever be useless to the attacker. Only an attacker that can steal a pair of corresponding tapes in both locations before the theft is noticed could get at the plaintext.


Even OTP is not secure if others have access to it.


Every castle wall can be broken with money.


How much money is required to decrypt a file encrypted with a 256-bit AES key?


How much would it take for the person(s) who know the key to move to another country and never work again?

Or how much would it cost to kidnap the significant other of a key bearer?

I think these are very reasonable sums for the governments of almost any country.


I think you assume that encryption keys are held by people like a house key in their pocket. That's not the case for organizations who are security obsessed. They put their keys in HSMs. They practice defense in depth. They build least-privilege access controls.


Thank you for writing this post. This should be the top comment. This is a state actors game, the rules are different.


> could still be valuable to an adversary after decades

What kind of information might be valuable after so long?


Why make yourself dependent on a foreign country for your own sensitive data?

You have to integrate the special software requirements to any cloud storage anyway and hosting a large amount of files isn't an insurmountable technical problem.

If you can provide the minimal requirements like backups, of course.


Presumably because you aren't capable of building what that foreign country can offer you yourself.

Which they weren't. And here we are.


> Encrypt before sending to a third party?

That sounds great, as long as nobody makes any mistake. It could be a bug on the RNG which generates the encryption keys. It could be a software or hardware defect which leaks information about the keys (IIRC, some cryptographic system are really sensitive about this, a single bit flip during encryption could make it possible to obtain the private key). It could be someone carelessly leaving the keys in an object storage bucket or source code repository. Or it could be deliberate espionage to obtain the keys.


It does only make sense if you are competent enough to manage data, and I mean: Any part of it, forever. It's not impossible, of course, but it is really not as trivial as the self-host crowd makes it out to be, if you absolutely need a certain amount of 9s of reliability. There is a reason why AWS etc can exist. I am sure the cloud market is not entirely reasonable but certainly far more reasonable than relying on some mid consultant to do this for you at this scale.


Yeah, the whole supposed benefit of an organization using storage the cloud is to avoid stuff like this from happening. Instead, they managed to make the damage far worse by increasing the amount of data lost by centralizing it.


The issue is that without a profit incentive, of course it isn't X (backed up, redundant, highly available, or whatever other aspect gets optimized away by accountants).

Having worked a great deal inside AWS on these things: AWS provides literally every conceivable level of customer-managed security, down to customer-owned and customer-keyed datacenters operated by AWS, with master-key HSMs owned and purchased by the customer, customer-managed key hierarchies at all levels, and detailed audit logs of everything done by everything, including AWS itself. The security assurance of AWS is far and away beyond what even the most sophisticated state-actor infrastructure does, and is more modern to boot, because its profit incentive drives that.
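
For illustration, a minimal envelope-encryption sketch with a customer-managed KMS key; the key ARN is a placeholder, boto3 credentials/region are assumed to be configured, and error handling is omitted:

    # Minimal envelope-encryption sketch with a customer-managed KMS key: data is
    # encrypted locally and only the wrapped data key ever references KMS.
    # The key ARN is a placeholder; credentials, region, nonce storage and error
    # handling are assumed/omitted.
    import os
    import boto3
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    kms = boto3.client("kms")
    CMK_ARN = "arn:aws:kms:REGION:ACCOUNT:key/EXAMPLE"   # hypothetical customer-managed key

    def encrypt_for_backup(plaintext: bytes) -> dict:
        dk = kms.generate_data_key(KeyId=CMK_ARN, KeySpec="AES_256")
        nonce = os.urandom(12)
        ciphertext = AESGCM(dk["Plaintext"]).encrypt(nonce, plaintext, None)
        # Persist ciphertext, nonce and the *wrapped* key; the plaintext key never leaves memory.
        return {"ciphertext": ciphertext, "nonce": nonce, "wrapped_key": dk["CiphertextBlob"]}

    def decrypt_backup(blob: dict) -> bytes:
        key = kms.decrypt(CiphertextBlob=blob["wrapped_key"])["Plaintext"]
        return AESGCM(key).decrypt(blob["nonce"], blob["ciphertext"], None)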

Most likely this was less about national security than about nationalism. They're easily confused, but that's fallacious. And they earned the dividends of fallacious thinking.


Call me a conspiracy theorist, but this kind of mismanagement is intentional by design so powerful people can hide their dirty laundry.


Never attribute to malice what can be attributed to stupidity.

There was that time when some high profile company's entire Google Cloud account was destroyed. Backups were on Google Cloud too. No off-site backups.


One of the data integrity people sadly committed suicide as a result of this fire, so I am also thinking this was an incompetence situation (https://www.yna.co.kr/view/AKR20251003030351530).

For the budget spent, you’d think they would clone the setup in Busan and sync it daily or something like this in lieu of whatever crazy backup they needed to engineer but couldn’t.


You were probably thinking of UniSuper [0], an Australian investment company with more than $80B AUM.

Their 3rd party backups with another provider were crucial to helping them undo the damage from the accidental deletion by GCloud.

GCloud eventually shared a post-mortem [1] about what went down.

0: https://news.ycombinator.com/item?id=40304666

1: https://cloud.google.com/blog/products/infrastructure/detail...


> Never attribute to malice what can be attributed to stupidity.

Any sufficiently advanced malice is indistinguishable from stupidity.

I don't think there's anything that can't be attributed to stupidity, so the statement is pointless. Besides, it doesn't really matter naming an action stupidity, when the consequences are indistinguishable from that of malice.


I know of one datacenter that burned down because someone took a dump before leaving for the day, the toilet overflowed, then flooded the basement, and eventually started an electrical fire.

I'm not sure you could realistically explain that as anything. Sometimes ... shit happens.


I mean, I don't disagree that "gross negligence" is a thing. But that's still very different from outright malice. Intent matters. The legal system also makes such a distinction. Punishments differ. If you're a prosecutor, you can't just make the argument that "this negligence is indistinguishable from malice, therefore punish like malice was involved".


Hanlon's Razor is such an overused meme/trope that it's become meaningless.

It's a fallacy to assume that malice is never a form of stupidity/folly. An evil person fails to understand what is truly good because of some kind of folly, e.g. refusing to internally acknowledge the evil consequences of evil actions. There is no clean evil-vs-stupid dichotomy. E.g. is a drunk driver who kills someone stupid or evil? The dangers of drunk driving are well known, so what about both?

Additionally, we are talking about a system/organization, not a person with a unified will/agenda. There could indeed be an evil person in an organization that wants the organization to do stupid things (not backup properly) in order to be able to hide his misdeeds.


Hanlon's Razor appears to be a maxim of assuming good faith: "They didn't mean to cause this, they are just inept."

To me, it has no justification. People see malice easily, granted, but others feign ignorance all the time too.

I think a better principle is: proven and documented testing for competence, making it clear what a person's duties and (liable) responsibilities are, and thereafter treating incompetence and malice the same. Also: any action needs to be audited by a second entity who shares blame (to a measured and pre-decided degree) when they fail to do so.


It's also true that "it is difficult to get a man to understand something, when his salary depends on his not understanding it."


You have to balance that with how far you can expect human beings to lower their standards when faced with bureaucratic opposition. No backups on a key system would shift the likelihood from stupidity toward malice, since the importance of backups has been known to IT staff, regardless of role and seniority, for 40 years or so.


I very seriously doubt that the US cares about South Korea's deepest, darkest secrets that much, if at all.

Not using a cloud provider is asinine. You can use layered encryption so the expected lifetime of the cryptography is beyond the value of the data...and the US government themselves store data on all 3 of them, to my knowledge.

I say US because the only other major cloud providers I know of are in China, and they do have a vested interest in South Korean data, presumably for NK.


It's quite wild to think the US wouldn't want access to their data on a plate, through AWS/GCP/Azure. You must not be aware of the last decade of news when it comes to the US and security.


The US and South Korea are allies, and SK doesn't have much particular strategic value that I'm aware of? At least not anything they wouldn't already be sharing with the US?

Can you articulate what particular advantages the US would be pursuing by stealing SK secret data (assuming it was not protected sufficiently on AWS/GCP to prevent this, and assuming that platform security features have to be defeated to extract this data—this is a lot of risk from the US's side, to go after this data, if they are found out in this hypothetical, I might add, so "they would steal whatever just to have it" is doubtful to me).


The NSA phone-tapped Angela Merkel's phone while she was chancellor, as well as her staff and the staff of her predecessor[1], despite the two countries also being close allies. "We are allies, why would they need to spy on us?" is therefore provably not enough of a reason for the US not to spy on you (let's not forget that the NSA spies on the entire planet's internet communications).

The US also has a secret spy facility in Pine Gap that is believed to (among other things) spy on Australian communications, again despite both countries being very close allies. No Australians know what happens at Pine Gap, so maybe they just sit around knitting all day, but it seems somewhat unlikely.

[1]: https://www.theguardian.com/us-news/2015/jul/08/nsa-tapped-g...


Airbus was spied on by NSA For the benefit of Boeing: https://apnews.com/general-news-e88c3d44c2f347b2baa5f2fe508f...

Why do you think USA wouldn't lie, cheat and spy on someone if it had a benefit in it?


[flagged]


As a sysadmin at a company that provides fairly sensitive services, I find online cloud backups to be way too slow for the purpose of protecting against something like the server room being destroyed by a fire. Even something like spinning disks at a remote location feels like a risk, as files would need to be copied onto faster disks before services could be restored, and that copying would take precious time during an emergency. When downtime means massive losses of revenue for customers, being down for hours or even days while waiting for the download to finish is not going to be accepted.

Restoring from cloud backups is one of those war stories that I occasionally hear, including the occasional FedEx solution of sending the backup disk by carrier.


Many organizations are willing to accept the drawbacks of cloud backup storage because it's the tertiary backup in the event of physical catastrophe. In my experience those tertiary backups are there to prevent the total loss of company IP should an entire site be lost. If you only have one office and it burns down, work will be severely impacted anyway.

Obviously the calculus changes with maximally critical systems where lives are lost if the systems are down or you are losing millions per hour of downtime.


For truly colossal amounts of data, FedEx has more bandwidth than fiber. I don't know if any cloud providers will send you your stuff on physical storage, but most will allow you to send your stuff to them on physical storage, e.g. AWS Snowball.

There are two main reasons why people struggle with cloud restore:

1. Not enough incoming bandwidth. The cloud’s pipe is almost certainly big enough to send your data to you. Yours may not be big enough to receive it.

2. Cheaping out on storage in the cloud. If you want fast restores, you can’t use the discount reduced redundancy low performance glacier tier. You will save $$$ right until the emergency where you need it. Pay for the flagship storage tier- normal AWS S3, for example- or splurge and buy whatever cross-region redundancy offering they have. Then you only need to worry about problem #1.
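
Rough restore-time numbers for problem #1, with illustrative data sizes and usable link speeds:

    # Restore-time arithmetic: the cloud can usually send faster than you can receive.
    # Data sizes and usable link speeds are illustrative assumptions.
    def restore_hours(data_tb, usable_gbps):
        return data_tb * 8_000 / usable_gbps / 3600   # 1 TB ~= 8,000 Gbit

    for data_tb, gbps in ((100, 1), (100, 10), (1000, 10)):
        print(f"{data_tb:>5} TB over {gbps:>2} Gbit/s usable: ~{restore_hours(data_tb, gbps):.0f} h")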


Amazon used to offer a truck based data transport: https://www.datacenterdynamics.com/en/news/aws-retires-snowm...


If you allow it to cost a bit, which is likely a good choice given the problem, then there are several solutions available. It is important to think through the scenario, and if possible, do a dry run of the solution. A remote physical server can work quite well and be cost effective compared to a flagship storage tier, and if data security is important, you can access the files on your own server directly rather than downloading an encrypted blob from a cloud located outside the country.


In one scenario, with offsite backups ("in the clown" or otherwise): "We had a fire at our datacenter, and there will be some downtime while we get things rolling again."

In the other scenario, without offsite backups ("in the clown" or otherwise): "We had a fire at our datacenter, and that shit's just gone."

Neither of these are things that are particularly good to announce, and both things can come with very severe cost, but one of them is clearly worse than the other.


SK would be totally fine with that though because that means there would eventually be recovery!


You're not designing to protect from data loss, you're designing to protect from downtime.


That’s why

Microsoft can't guarantee data sovereignty

https://news.ycombinator.com/item?id=45061153


He obviously meant encrypting before uploading. At that point it doesn't matter who's holding your data or what they try to do with it.


It still matters who holds your data. Yes they can't read it, but they can hold it ransom. What if the US decides it wants to leverage the backups in tariff negotiations or similar? Not saying this would happen, but as a state level actor, you have to prepare for these eventualities.


That's why you backup to numerous places and in numerous geopolitical blocs. Single points of failure are always a bad idea. You have to create increasingly absurd scenarios for there to be a problem.


or… hear me out…

you obviate the need for complex solutions like that by simply having a second site.


How’s that? Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?


It is much more likely, and cheaper, that US marines will land at and capture your backup facility than that someone will break AES-128.


Sending troops would be an act of war, and definitely not cheap.

Stealing some encryption keys, just another Wednesday.


You mean like blowing up an oil pipeline? Accidents happen all the time. It is quite a lot cheaper to have an 'accident' happen to a data center than to break AES-256.


There are less public options for getting at the data without breaking encryption, especially when the target uses MS software.


Now I’m down the rabbit hole of https://en.wikipedia.org/wiki/NSAKEY


There might be unknown unknowns....


Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

As for backdoors, they may exist if you rely on a third party but it's pretty hard to backdoor the relatively simple algorithms used in cryptography


It's not so much that there is a way to directly crack an encrypted file as much as there being backdoors in the entire HW and SW chain of you decrypting and accessing the encrypted file.

Short of you copying an encrypted file from the server onto a local trusted Linux distro (with no Intel ME on the machine), airgapping yourself, entering the decryption passphrase from a piece of paper (written by hand, never printed), with no cameras in the room, accessing what you need, and then securely wiping the machine without un-airgapping, you will most likely be tripping through several CIA-backdoored things.

Basically, the extreme level of digital OPSEC maintained by OBL is probably the bare minimum if your adversary is the state machinery of the United States or China.


This is a nation state in a state of perpetual tension, formal war, and persistent attempts at sabotage by a notoriously paranoid and unscrupulous totalitarian/crime-family state next door.

SK should have no shortage of motive, nor much trouble (it's an extremely wealthy country with a very well-funded, sophisticated government apparatus), implementing its own version of hardcore data security for backups.


Yeah, but also consider that maybe not every agency of South Korea needs this level of protection?


> Can you provide an example of a commonly used cryptography system that is known to be vulnerable to nation state cracking?

DES. Almost all pre-2014 standards-based cryptosystems due to NIST SP 800-90A. Probably all other popular ones too (like, if the NSA doesn't have backdoors to all the popular hardware random number generators then I don't know what they're even doing all day), but we only ever find out about that kind of thing 30 years down the line.


Dual_EC_DRBG


Please provide proof or references for what you are claiming.


>Using encryption, which is known to have backdoors and is vulnerable to nation state cracking?

WTF are you talking about? There are absolutely zero backdoors of any kind known to be in any standard open-source encryption system, and symmetric cryptography with 256-bit or longer keys is not subject to cracking by anyone or anything, not even if general-purpose quantum computers prove doable and scalable. Shor's algorithm applies to public-key crypto, not symmetric, where the best that can be done is Grover's quantum search for a square-root speedup. You seem to be crossing a number of streams here in your information.


As someone who’s fairly tech-literate but has a big blind spot in cryptography, I’d love to hear any suggestions you have for articles, blog posts, or smaller books on the topic!

My (rudimentary, layman) understanding is that encryption is almost like a last line of defense and should never be assumed to be unbreakable. You sound both very knowledgeable on the topic, and very confident in the safety of modern encryption. I’m thinking maybe my understanding is obsolete!


Encryption is the mechanism of segmentation for most everything in 2025.

AES is secure for the foreseeable future. Failures in key storage and exchange, along with operational failures, are the actual threat and routinely present practical, exploitable problems.

You see it in real life as well. What’s the most common way of stealing property from a car? A: Open the unlocked door.


> My (rudimentary, layman) understanding is that encryption is almost like a last line of defense and should never be assumed to be unbreakable

Lol this is woefully misinformed.


https://en.wikipedia.org/wiki/Post-quantum_cryptography

It is my understanding that current encrypted content can someday be decrypted.


That's incorrect. Current asymmetric (i.e., public-key) algorithms built on integer factorization or elliptic curve techniques are vulnerable to quantum attack using Shor's algorithm.

However, symmetric algorithms are not nearly as vulnerable. There is one known quantum attack using Grover's algorithm, but with quadratic speedup all it does is reduce the effective length of the key by half, so a 128-bit key will be equivalent to a 64-bit key and a 256-bit key will be equivalent to a 128-bit key. 256-bit keys are thus safe forever, since going down to a 128-bit key you are still talking age-of-the-universe break times. Even 128-bit keys will be safe for a very long time. While being reduced to a 64-bit key does make attacks theoretically possible, it is still tremendously difficult to do on a quantum computer, much harder than the asymmetric case (on the order of centuries even with very fast cycle times).
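
As a back-of-the-envelope sketch of that halving, assuming Grover's full quadratic speedup and no better attack:

    \underbrace{2^{k}}_{\text{classical brute force}} \;\xrightarrow{\text{Grover}}\; \sqrt{2^{k}} = 2^{k/2}

    k = 256 \;\Rightarrow\; 2^{128} \text{ effective (still infeasible)}, \qquad
    k = 128 \;\Rightarrow\; 2^{64} \text{ effective (theoretically reachable, very hard on quantum hardware)}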

Finally, it's also worth noting that asymmetric cryptosystems are rapidly being updated to hybrid cryptosystems which add post-quantum algorithms (i.e., algorithms against which quantum computers are believed to provide little or no speedup). So, going forward, asymmetric crypto should also no longer be vulnerable to store-now-decrypt-later attacks, provided there's no fundamental flaw in the new post-quantum algorithms (they seem solid, but they are new, so give the cryptographers a few years to try to poke holes in them).


This is also assuming a theoretical quantum computing system is developed capable of breaking the encryption. Which isn't at all a given.


>However, symmetric algorithms are not nearly as vulnerable. There is one known quantum attack using Grover's algorithm, but with quadratic speedup all it does is reduce the effective length of the key by half, so a 128-bit key will be equivalent to a 64-bit key and a 256-bit key will be equivalent to a 128-bit key. 256-bit keys are thus safe forever, since going down to a 128-bit key you are still talking age-of-the-universe break times. Even 128-bit keys will be safe for a very long time. While being reduced to a 64-bit key does make attacks theoretically possible, it is still tremendously difficult to do on a quantum computer, much harder than the asymmetric case (on the order of centuries even with very fast cycle times).

Specifically it's worth noting here the context of this thread: single entity data storage is the textbook ideal case for symmetric. While Shor's "only" applies [0] to one type of cryptography, that type is the key to the economic majority of what encryption is used for (the entire world wide web etc). So it still matters a lot. But when you're encrypting your own data purely to use it for yourself at a future time, which is the case for your own personal data storage, pure symmetric cryptography is all you need (and faster). You don't have the difficult problem of key distribution and authentication with the rest of humanity at all and can just set that aside entirely. So to the point of "why not back up data to multiple providers" that "should" be no problem if it's encrypted before departing your own trusted systems.

Granted, the "should" does encompass some complications, but not in the math or software, rather in messier aspects of key control and metadata. Like, I think an argument could be made that it's easier to steal a key than to exfiltrate huge amounts of data without someone noticing, but there are powerful enough tools for physically secure key management (and splitting: Shamir's Secret Sharing means you can divide each unique service backup encryption key into an arbitrary number of units and then require an arbitrary number of them to all agree to reconstitute the usable original key) that I'd expect an advanced government to be able to handle it, more so than data at rest even. Another argument is that even if a 3rd party cannot ever see anything about the content of an encrypted archive, they can get some metadata from its raw size and the flows in and out of it. But in the reduced single use case of pure backups, where use is regular, widely spaced dumps, and for something as massive as an entire government data cloud with tens of thousands of uncorrelated users, the leakage of anything meaningful seems low. And of course both have to be weighed against a disaster like this one.
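
For the Shamir splitting mentioned above, a toy from-scratch sketch over a prime field (illustration only, not production code; real use would go through an audited library sitting on top of proper key custody):

    import secrets

    # Toy Shamir's Secret Sharing over GF(p): split a key into n shares,
    # any k of which reconstruct it; fewer than k reveal nothing.
    PRIME = 2**521 - 1  # a Mersenne prime comfortably larger than a 256-bit key

    def split(secret: int, k: int, n: int):
        # Random polynomial of degree k-1 with the secret as constant term.
        coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the constant term (the secret).
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
        return total

    key = secrets.randbits(256)            # the backup encryption key
    shares = split(key, k=3, n=5)          # 3 of 5 custodians must agree
    assert combine(shares[:3]) == key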

Anyway, all well above my pay grade. But if I were a citizen there I'd certainly be asking questions, because this feels more like NIH or political factors influencing things.

----

0: Last I checked there were still some serious people debating whether it will actually work out in the real world, but from the perspective of considering security risk it makes sense to just take it as given that it will work completely IRL, including that general-purpose quantum computers that can run it will prove sufficiently scalable to provide all the needed qubits.


> someday be decrypted

Yup, and that someday is the same day nuclear fusion is commercially viable.


Someday, theoretically, maybe. This means that, as far as everyone knows, if I properly secure a message to you using RSA, no one else is reading the message. Maybe in 50 years they can, but, well, that's in 50 years. Alarmists would have you believe it'll happen in three. I'm just an Internet rando, but my money's on it being closer to 50. Regardless though, it's not today.



Perhaps that is why I was asking for better information.


Whew, that's actually a hard one! It's been long enough since I was getting into it that I'm not really sure what the best path into it is these days. In terms of books, JP Aumasson's "Serious Cryptography" got a 2nd edition not too long ago and the first edition was good. Katz & Lindell's "Introduction to Modern Cryptography" and Hoffstein's "An Introduction to Mathematical Cryptography" are both standard texts that I think a lot of courses still start with. Finally, I've heard good things about Esslinger's "Learning and Experiencing Cryptography with CrypTool and SageMath" from last year and Smart's "Cryptography Made Simple", which has a bunch of helpful visuals.

For online stuff, man is there a ton, and plenty comes up on HN with some regularity. I've been a fan of a lot of the work Quanta Magazine does on explaining interesting science and math topics, so you could look through their cryptography-tagged articles [0]. As I think about it more, it might seem cliche, but reading the Wikipedia entries on cryptography and following the links from there isn't bad either.

Just keep in mind there are plenty of pieces that go into it. There's the mathematics of the algorithms themselves. Then a lot of details around implementing them in working software, with efforts like the HACL* project [1] at formal mathematical verification for libraries, which has gone on to benefit projects like Firefox [2] in both security and performance. Then how that interacts with the messy real world of the underlying hardware, and how details there can create side channels that leak data from a seemingly good implementation of perfect math. But also that such attacks don't always matter; it depends on the threat scenarios. OTP, symmetric, and asymmetric/pub-key (all data preserving), and cryptographic hash functions (which are data destroying) are all very different things despite falling under the overall banner of "cryptography", with different uses and tradeoffs.

Finally, there is lots and lots of history here going back to well before modern computers at all. Humans have always desired to store and share information with other humans they wish while preventing other humans from gaining it. There certainly have been endless efforts to try to subvert things as well as simple mistakes made. But we've learned a lot and there's a big quantitative difference between what we can do now and in the past.

>My (rudimentary, layman) understanding is that encryption is almost like a last line of defense and should never be assumed to be unbreakable.

Nope. "We", the collective of all humanity using the internet and a lot of other stuff, do depend on encryption to be "unbreakable" as a first and only line of defense, either truly and perfectly unbreakable or at least unbreakable within given specified constraints. It's the foundation of the entire global e-commerce system and all the trillions and trillions flowing through it, of secure communications for business and war, etc.

Honestly, I'm kind of fascinated that apparently there are people on HN who have somehow internalized the notion of cryptography you describe here. I don't mean that as a dig; it just honestly never occurred to me and I can't remember really seeing it before. It makes me wonder if that feeds into disconnects on things like ChatControl and other government-backed efforts to try to use physical coercion to achieve what they cannot via peaceful means. If you don't mind (and see this at some point, or even read this far since this has turned into a long-ass post) could you share what you think about the EU's proposal there, or the UK's, or the like? Did you think they could do it anyway so trying to pass a law to force backdoors to be made is a cover for existing capabilities, or what? I'm adamantly opposed to all such efforts, but it's not typically easy to get even the tech-literate public on-side. Now I'm curious if thinking encryption is breakable anyway might somehow play a role.

----

0: https://www.quantamagazine.org/tag/cryptography/

1: https://github.com/hacl-star/hacl-star

2: https://blog.mozilla.org/security/2020/07/06/performance-imp...


Wow, thank you for this detailed reply! I’ll be checking out some of those resources at lunch today :)

I didn’t take your comment as a dig at all. I’m honestly a little surprised myself that I’ve made it this far with such a flawed understanding.

> Did you think they could do it anyway so trying to pass a law to force backdoors to be made is a cover for existing capabilities, or what?

I had to do some quick reading on the ChatControl proposal in the EU.

I see it along the lines of: if they really needed to target someone in particular (let's not get into who "deserves" to be targeted), then encryption would only be an obstacle for them to overcome. But for the great majority of traffic, like our posts being submitted to HN, the effort of trying to break the encryption (e.g., dedicating a few months of brute-force effort across multiple entire datacenters) simply isn't worth it. In many other scenarios, bypassing the encryption is a lot more practical, like that one operation where I believe the FBI waited for their target to unlock his laptop (decrypting the drive) in a public space, and then literally grabbed the laptop and ran away with it.

The ChatControl proposal sounds like it aims to bypass everyone’s encryption, making it possible to read and process all user data that goes across the wire. I would never be in support of something like that, because it sounds like it sets up a type of backdoor that is always present, and always watching. Like having a bug planted in your apartment where everything you say is monitored by some automated flagging system, just like in 1984.

If a nation state wants to spend the money to dedicate multiple entire datacentres to brute forcing my encrypted communications, achieving billions of years of compute time in the span of a few months or whatever, I’m not a fan but at least it would cost them an arm and a leg to get those results. The impracticality of such an approach makes it so that they don’t frivolously pursue such efforts.

The ability to view everyone’s communications in plaintext is unsettling and seems like it’s just waiting to be abused, much in the same way that the United States’ PRISM was (and probably is still being) abused.


From someone else who was curious about an intelligent answer to the question in the comment above, thanks for taking the time to really deliver something interesting, and politely too. Nice to see that not everyone here replies with arrogant disdain to someone who openly admits not knowing much about a complex field like cryptography and asks nicely about it.


You don’t need a backdoor in the encryption if you can backdoor the devices as such.

Crypto AG anyone?


In fairness, they backdoored the company, the crypto algorithms and the devices at Crypto AG.

Anyway, there are many more recent examples: https://en.wikipedia.org/wiki/Crypto_Wars

Don’t get me started on the unnecessary complexity added to TLS.



