Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.
Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.
Jersey City was still fine, and 50 miles can be problematic for certain types of backup (failover) protocols. Regular tape backups would be fine, but secondary databases couldn't be that far away (at least not at the time). I remember my boss at WFC saying that the most traffic over the data lines was in the middle of the night due to backups, not when everybody was in the office.
Companies big enough will lay the fibre. 50-100 miles of fibre isn't much if you are a billion-dollar business. Even companies like BlackRock, who had their own datacenters, have since taken up Azure. The latency over 50 miles is negligible, even for databases.
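As a back-of-envelope check on the physics (a rough sketch, not a benchmark): light in fibre travels at roughly two thirds of c, so 50 miles adds well under a millisecond of round trip before any switching gear is counted.

    # Propagation delay over 50 miles of fibre (speed of light in glass ~2/3 c).
    # Ignores switching/routing gear, which is where the real delay of the era sat.
    C_KM_PER_S = 299_792.458
    FIBRE_KM_PER_S = C_KM_PER_S * 2 / 3      # ~200,000 km/s
    distance_km = 50 * 1.609                 # 50 miles is about 80.5 km

    one_way_ms = distance_km / FIBRE_KM_PER_S * 1000
    print(f"one-way ~{one_way_ms:.2f} ms, round trip ~{2 * one_way_ms:.2f} ms")
    # -> roughly 0.4 ms one way, ~0.8 ms RTT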
The WTC attacks were in the '90s and early '00s. Back then, the latency over 50 miles was anything but negligible, and Azure didn't exist.
I know this because I was working on online systems back then.
I also vividly remember 9/11 and the days that followed. We had a satellite dish with multiple receivers (which wasn't common back then), so we had to run a 3rd-party Linux box to descramble the signal. We watched 24/7 global news on a crappy 5:4 CRT running Windows ME during the attack. Even in the UK, it was a sombre and sobering experience.
For backups, latency is far less an issue than bandwidth.
Latency is defined by physics (speed of light, through specific conductors or fibres).
Bandwidth is determined by technology, which has advanced markedly in the past 25 years.
Even a quarter century ago, the bandwidth of a station wagon full of tapes was pretty good, even if the latency was high. Physical media transfer to multiple distant points remains a viable back-up strategy should you happen to be bandwidth-constrained in realtime links. The media themselves can be rotated / reused multiple times.
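To put rough numbers on the station-wagon point (illustrative figures, not anyone's actual backup run; pick your own era's tape capacity and drive time):

    # Effective bandwidth of driving a car full of tapes to a distant site.
    # Assumes 200 LTO-1 cartridges (100 GB native each, early-2000s vintage)
    # and a 2-hour drive; both numbers are illustrative.
    tapes = 200
    gb_per_tape = 100
    drive_hours = 2

    total_bits = tapes * gb_per_tape * 8e9
    seconds = drive_hours * 3600
    print(f"~{total_bits / seconds / 1e9:.0f} Gbit/s effective throughput")
    # -> ~22 Gbit/s: the latency is two hours, but the throughput embarrasses
    #    any WAN link of that era.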
I’ve covered those points already in other responses. It’s probably worth reading them before assuming I don’t know the differences between the most basic of networking terms.
I was also specifically responding to the GP's point about latency for DB replication. For backups, one wouldn't have used live replication back then (nor even now, outside of a few enterprise edge cases).
Snowmobile and its ilk were hugely expensive services, by the way. I've spent a fair amount of time migrating broadcasters and movie studios to AWS, and it was always cheaper and less risky to upload petabytes from the data centre than it was to ship HDDs to AWS. So after conversations with our AWS account manager and running the numbers, we always ended up just uploading the stuff ourselves.
I’m sure there was a customer who benefited from such a service, but we had petabytes and it wasn’t us. And anyone I worked with who had larger storage requirements didn’t use vanilla S3, so I can’t see how Snowmobile would have worked for them either.
Switching gear was slower and laying new fibre wasn't an option for your average company. Particularly not point-to-point between your DB server and your replica.
So if real-time synchronization isn't practical, you are then left to do out-of-hours backups and there you start running into bandwidth issues of the time.
Plus, long distance was mostly fibre already. And even regular electrical wires aren't really much slower than fibre in terms of latency. Parent probably meant bandwidth.
Copper doesn't work over these kinds of distances without powered switches, which adds latency. And laying fibre over several miles would be massively expensive. Well outside the realm of all but the largest of corporations. There's a reason buildings with high bandwidth constraints huddle near internet backbones.
What used to happen (and still does as far as I know, but I've been out of the networking game for a while now) is you'd get fibre laid between yourself and your ISP. So you're then subject to the latency of their networking stack. And that becomes a huge problem if you want to do any real-time work like DB replicas.
The only way to do automated off-site backups was via overnight snapshots. And you're then running into the bandwidth constraints of the era.
What most businesses ended up doing was tape backups and then physically driving them to another site -- ideally then storing them in a fireproof safe. Only the largest companies could afford to push it over fibre.
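For a sense of how tight the overnight window was, here is a rough sketch with era-typical leased-line speeds (the 8-hour window and the choice of lines are assumptions for illustration):

    # How much data an overnight backup window moves over era-typical WAN links.
    links_bps = {
        "T1 (1.544 Mb/s)": 1.544e6,
        "T3 (~45 Mb/s)": 45e6,
        "OC-3 (~155 Mb/s)": 155e6,
    }
    window_seconds = 8 * 3600  # assumed overnight window

    for name, bps in links_bps.items():
        gb = bps * window_seconds / 8 / 1e9
        print(f"{name}: ~{gb:.0f} GB per night")
    # T1: ~6 GB, T3: ~162 GB, OC-3: ~558 GB -- a car full of tapes wins easily.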
To be fair, tape backups are very much OK as a disaster recovery solution. It's cheap once you have the tape drive. Bandwidth is mostly fine if you want to read them sequentially. They're easy to store and handle, and fairly durable.
It's "only" poor if you need to restore some files in the middle or want your backup to act as a failover solution to minimise unavailability. But as a last resort solution in case of total destruction, it's pretty much unbeatable cost-wise.
G-Drive was apparently storing less than 1 PB of data. That's less than 100 tapes. I guess some files were fairly stable, so it's completely manageable with a dozen tape drives, delta storage and proper rotation. We are talking about a budget of, what, $50k to $100k? That's peanuts for a project of this size. Plus the tech has existed for ages, and I guess you can find plenty of former data center employees with experience handling this kind of setup. They really have no excuse.
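A quick sanity check on the tape count (LTO-9 native capacity is a published figure; the prices below are rough street-price assumptions, not quotes):

    # Rough sizing for ~1 PB on LTO-9 (18 TB native per cartridge).
    # Cartridge and drive prices are ballpark assumptions.
    data_tb = 1000
    tb_per_tape = 18
    tapes = -(-data_tb // tb_per_tape)       # ceiling division -> 56 cartridges

    cartridge_cost = tapes * 100             # assume ~$100 per cartridge
    drive_cost = 12 * 5000                   # assume ~$5k per drive, a dozen drives
    print(tapes, "tapes, ballpark $", cartridge_cost + drive_cost)
    # -> 56 tapes, ~$66k before labour: well under 100 tapes and inside
    #    the $50k-$100k figure above.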
The suits are stingy when it's not an active emergency. A former employer declined my request for $2K for a second NAS to replicate our company's main data store. This was just days after a harrowing recovery of critical data from a failing WD Green that was never backed up. Once the data was on a RAID mirror and accessible to employees again, there was no active emergency, and the budget dried up.
I don't know. I guess that for all intents and purposes I'm what you would call a suit nowadays. I'm far from a big shot at my admittedly big company, but $50k is pretty much pocket change on this kind of project. My cloud bill has more yearly fluctuation than that. Next to the cost of employees, it's nothing.
> There's a reason buildings with high bandwidth constraints huddle near internet backbones.
Yeah because interaction latency matters and legacy/already buried fiber is expensive to rent so you might as well put the facility in range of (not-yet-expensive) 20km optics.
> Copper doesn't work over these kinds of distances without powered switches, which adds latency.
You need a retimer, which adds on the order of 5-20 bit times of latency (a couple of nanoseconds at 10 Gbit/s).
> And that becomes a huge problem if you want to do any real-time work like DB replicas.
Almost no application actually requires "zero lost data", so you could get away with streaming a WAL or another form of reliably replayable transaction log and capping it at an acceptable number of milliseconds of data-loss window before applying blocking back pressure.
Usually it'd be easy to tolerate enough slack for the roughly 3 RTTs you'd want to keep in hand to cover ordinary packet loss without triggering back pressure.
Sure, such a setup isn't cheap, but it's (for a long while now) cheaper than manually fixing the data from the day your primary burned down.
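A minimal sketch of that idea (toy code with assumed names and numbers, not any particular database's replication API): append to the log locally and ship it asynchronously, but block the commit path once the oldest unshipped record is older than the data-loss window you're willing to accept.

    # Toy asynchronous log shipping with a bounded data-loss window.
    import collections
    import threading
    import time

    class BoundedLagShipper:
        def __init__(self, send_to_replica, max_loss_ms=50):
            self.send = send_to_replica            # ships one record, returns once acked
            self.max_loss = max_loss_ms / 1000.0   # acceptable loss window, seconds
            self.unacked = collections.deque()     # (enqueue_time, record) pairs
            self.cond = threading.Condition()
            threading.Thread(target=self._ship_loop, daemon=True).start()

        def append(self, record):
            """Called on the primary's commit path."""
            with self.cond:
                # Back pressure: block writers while the oldest unacked record
                # is older than the acceptable loss window.
                while self.unacked and time.monotonic() - self.unacked[0][0] > self.max_loss:
                    self.cond.wait(timeout=0.01)
                self.unacked.append((time.monotonic(), record))
                self.cond.notify_all()

        def _ship_loop(self):
            while True:
                with self.cond:
                    while not self.unacked:
                        self.cond.wait()
                    _, record = self.unacked[0]
                self.send(record)                  # blocks ~1 RTT to the DR site
                with self.cond:
                    self.unacked.popleft()
                    self.cond.notify_all()

    def fake_remote_write(record):
        time.sleep(0.001)                          # pretend the DR site is ~1 ms RTT away

    shipper = BoundedLagShipper(fake_remote_write, max_loss_ms=50)
    for i in range(200):
        shipper.append(f"txn {i}")                 # throttles once the replica lags > 50 ms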
Yes but good luck trying to get funding approval. There is a funny saying that wealthy people don't become wealthy by giving their wealth away. I think it applies to companies even more.
In the US, dark fiber will run you around $100k per mile. That's expensive for anyone, even if they can afford it. I worked in HFT for 15 years and we had tons of it.
DWDM per-wavelength costs are way, way lower than that, and, with the optional addition of encryption, perfectly secure and fast enough for disk replication for most storage farms. I've been there and done it.
Assuming that dark fiber is actually dark (without amplifiers/repeaters), I'd wonder how they'd justify the 4 orders of magnitude (99.99%!) profit margin on said fiber.
That already includes one order of magnitude between a single clad fibre (one twelfth of a ribbon) and an opportunistically buried speed pipe with a 144-core cable (buried when someone is already digging up the ground anyway).
So that's 5 million bucks for 50 miles? If there are other costs not being accounted for, like paying for the right-of-way that's one thing, but I would think big companies or in this case, a national government, could afford that bill.
> Before 9/11, most DR (disaster recovery) sites were in Jersey City, NJ just across the river from their main offices in WFC or WTC, or roughly 3-5 miles away. After 9/11, the financial industry adopted a 50+ miles rule.
IIRC, multiple IBM mainframes can be set up so they run and are administered as a single system for DR, but there are distance limits.
A Geographically-Dispersed Parallel Sysplex for z/OS mainframes, which IBM has been selling since the '90s, can have redundancy out to about 120 miles.
At a former employer, we used a datacenter in East Brunswick, NJ that had mainframes in sysplex with partners in Lower Manhattan.
If you have to mirror synchronously, the _maximum_ distances for other systems (e.g. storage mirroring with NetApp SnapMirror Synchronous, IBM PPRC, EMC SRDF/S) are all in this range.
But an important factor is that performance degrades with every microsecond of added latency, as the active node for the transaction has to wait for the mirror node's acknowledgement (roughly one round trip, i.e. ~2x the one-way latency).
You can mirror synchronously that distance, but the question is if you can accept the impact.
That's not to say that one shouldn't create a replica in this case. If necessary, replicate synchronously to a nearby DC and asynchronously to a remote one.
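To put a rough number on that penalty (fibre propagation only, the same back-of-envelope physics as above; real gear adds more on top):

    # Added commit latency for synchronous mirroring at ~120 miles (GDPS-class distance).
    FIBRE_KM_PER_S = 200_000                  # ~2/3 c in glass
    km = 120 * 1.609                          # ~193 km
    rtt_ms = 2 * km / FIBRE_KM_PER_S * 1000

    print(f"RTT ~{rtt_ms:.1f} ms per mirrored write")                           # ~1.9 ms
    print(f"max ~{1000 / rtt_ms:.0f} strictly serialized writes/s per stream")  # ~520
    # Tolerable for many workloads, painful for chatty ones -- hence
    # sync to a nearby DC and async to the far one.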
>Some foolishly believed that the twin towers were invincible after the 1993 WTC bombing.
I was told right after the bombing, by someone with a large engineering firm (Schlumberger or Bechtel), that the bombers could have brought the building down had they done it right.
Funnily enough, Germany has laws for where you are allowed to store backups, exactly because of these kinds of issues. Fire, flood, earthquake, tornado, you name it: backups need to be stored with appropriate security in mind.
More like: your company (or government agency) is critical infrastructure or of a certain size, so there are obligations on how you maintain your records. It’s not like the US or other countries don’t have similar requirements.
The difference is that you cannot choose who you're sharing a road with while you can usually choose your IT service providers. You could, for instance, choose a cheaper provider and make your own backups or simply accept that you could lose your data.
Where people have little or no choice (e.g government agencies, telecoms, internet access providers, credit agencies, etc) or where the blast radius is exceptionally wide, I do find it justifiable to mandate safety and security standards.
> you cannot choose who you're sharing a road with while you can usually choose your IT service providers
You can choose where to eat, but the government still carries out food health and safety inspections. The reason is that it isn't easy for customers to observe these things otherwise. I think the same applies to corporate data handling & storage.
It's a matter of balance. Food safety is potentially about life and death. Backups not so much (except in very specific cases where data regulation is absolutely justifiable).
If any legislation is passed regarding data, I would prefer a broader rule that covers backup as well as interoperability/portability.
Losing data is mostly(*) fine if you are a small business. If a major bank loses its data, it is a major problem, as it may impact a huge number of customers in an existential way when all their money is "gone".
(*) From the state's perspective there is still a problem: tax audits. Bad if everybody avoids them through "accidental" data loss.
As I said, a wide blast radius is a justification and banks are already regulated accordingly. A general obligation to keep financial records exists as well.
It seems New Zealand is one of very few countries where that is the case, and that's because you guys have a government scheme that provides equivalent coverage for personal injury without being a form of insurance (ACC). As far as I understand, part of the registration fees you pay go to ACC. I would argue this is basically a mandatory insurance system with another name.
Nope: the other way around. If you are of a certain size, you are required to meet certain criteria. NIS-2 is the EU directive, and it more or less maps to ISO 27001, which includes risk management against physical catastrophes. https://www.openkritis.de/eu/eu-nis-2-germany.html
Of course you can do backups if you are smaller, or comply with such a standard if you so wish.
Is it? It would be incredible if the government didn’t have specific requirements for critical infrastructure.
Say you're an energy company and an incident could mean that a big part of the country is without power, or you're a large bank and you can't process payroll for millions of workers. Their ability to recover quickly and completely matters. Just recently in Australia, an incident at Optus, a large phone company, prevented thousands of people from making emergency calls for several hours. Several people died, including a child.
The people should require these providers behave responsibly. And the way the people do that is with a government.
Companies behave poorly all the time. Red tape isn’t always bad.
I'm usually first in line when talking shit about the German government, but here I am absolutely for this.
I was really positively surprised when I had my apprenticeship at a publishing company and we had a routine to bring physical backups to the cellar of a post office every morning.
The company wasn't that up-to-date with most things, but here they were forced to a proper procedure which totally makes sense.
They even had proper disaster recovery strategies that included being back online within less than 2 hours even after a 100% loss of all hardware.
They had internal jokes that you could have nuked their building, and as long as one IT guy survived because he was working from home, he could at least bring up the software within a day.
Considering that companies will do everything to avoid doing sensible things that cost money - yes, of course the government has to step in and mandate things like this.
It's no different from safety standards for car manufacturers. Do you think it's ridiculous that the government tells them how to build cars?
And similarly here: If the company is big enough / important enough, then the cost to society if their IT is all fucked up is big enough that the government is justified in ensuring minimum standards. Including for backups.
It’s government telling you the minimum you have to do. There is nothing incredible there.
It makes sense that as economic operators become bigger and the impact of their potential failure on the rest of the economy grows, they have to become more resilient.
That’s just the state forcing companies to take externalities into account which is the state playing its role.
Well, given that way too many companies in the critical infrastructure sector don't give a fuck about how to keep their systems up, and we have been facing a hybrid war from Russia for the last few years that is expected to escalate into a full-on NATO hot war in a few years, yes, it absolutely does make sense for the government to force such companies to be resilient against the Russians.
Just because whatever country you are in doesn't have to prepare for a hot war with Russia doesn't mean we don't have to. When the Russians come in and attack, hell, even if they "just" attack Poland with tanks and the rest of us with cyber warfare, the last thing we need is power plants, telco infra, traffic infrastructure or hospitals going out of service because their core systems got hit by ransomware.
> it absolutely does make sense for the government to force such companies
Problem is, a) governments are infiltrated by russian assets and b) governments are known to enforce detrimental IT regulations. Germany especially so.
> power plants, telco infra, traffic infrastructure or hospitals
Their systems _will_ get hit by ransomware or APTs. It is not possible to mandate common sense or proper IT practices, no matter how strict the law. See the recent incident in South Korea with a burned-down data center and no backups.
> Problem is, a) governments are infiltrated by russian assets and b) governments are known to enforce detrimental IT regulations. Germany especially so.
The regulations are a framework called "BSI Grundschutz" and all parts are freely available to everyone [1]. Even if our government were fully corrupted by Russia like Orban's Hungary - just look at the regulations at face value and tell me what exactly you would see as "detrimental" or against best practice?
> It is not possible to mandate common sense or proper IT practices, no matter how strict the law. See the recent incident in South Korea with burned down data center with no backups.
I think it actually is. The BSI Grundschutz criteria tend to feel "checkboxy", but if you tick all the checkboxes you'll end up with a pretty resilient system. And yes, I've been on the implementing side.
The thing is, even if you're not fully compliant with BSI Grundschutz... if you just follow parts of it in your architecture, your security and resilience posture is already much stronger than much of the competition.
They had standards for a variety of stuff, including how you architect your own systems to protect against data loss due to a variety of different causes.
(Without knowing the precise nature of these laws) I would expect that they don't forbid you to store backups elsewhere. It's just that they mandate that certain types of data be backed up in sufficiently secure and independent locations. If you want to have an additional backup (or backups of data not covered by the law) in a more convenient location, you still can.
This kind of provision requires enforcement and verification, and thus a tech spec for the backup procedure. Knowing Germany well enough, I'd say those tech specs would be detrimental to the actual safety of the backup.
When you live in Germany, you are asked to send a fax (and not an email, please), a digital birth certificate is not accepted unless you come with lawyers, and banks are not willing to work with Apple Pay, just to name a few...
Here in the Netherlands, via my bank I can list all of my pre-approved transfers and block them. I'm pretty sure every bank here is required to support this. PayPal also has this feature.
I recently had to cut down on expenses, starting with extraneous subscriptions and charitable donations, of which I had dozens. Many had a click-to-cancel or at least fill-out-a-form-to-cancel process, but some of them said 'call us'. Then I discovered that I could cut them all off from my side!
I got a few 'hey your donation stopped' messages, and answered the first ones, but they all eventually went away.
Ok, good info. So GP means to evoke the type of information-shearing garbage some of us are wise enough to expect from unaligned and underspecified human-emulating but self-serving autonomous digital systems, and not comment on their sincere affection for information loss in clickbait titles that...
That's true, and people should learn to recognize it. But in general, sarcasm is easily misunderstood in pure text. You read it with a tone in your head, but they can't hear it.
Also, it's best to avoid it on a site like this, with many non-native English speakers. It's an extra layer of difficulty.
I tried nose strips but I don't like disposables. I now use silicone nostril openers - two little tubes attached at the base that you stick up your nose. It came as a set of 4 sizes so a bit of waste there, but one size fit me and one size fit my wife.
These work well, but I wonder about hygiene. I keep mine in a glass dish on my desktop and attempt to cleanse them in hydrogen peroxide on occasion.
Ultimately, surgery is the best option in my experienced opinion, but it also has diminishing returns over time (~20 years in my case). That point arrived recently for me, and I am looking to consult with an ENT again when I feel like taking the recovery leap. With that said, I am still functioning far better than I ever did 20 years ago, when I couldn't breathe.
There's a way to Walhalla. Infrastructure, training, and liability laws prime the pump. When almost everyone is a cyclist, every driver knows that any cyclist could be a friend or family member. Or themselves.
Also... The Dutch reach. Open the driver's side door with the other hand to make sure you look for cyclists. In 12 years in Amsterdam I have never won the 'door prize'.
Yes! Just use an app to say where you want to go, and it tells you which of the 3 nearest bus stops to go to, and you get where you want to go reasonably quickly. No bus routes, just dynamic allocation and routing based on historical and up-to-the-minute demand.
If you tell the system your desire well in advance, you pay less. "I need to be at the office at 9 and home by 6 every weekday". Enough area-to-area trips allocate buses. Smaller, off-peak, or short-notice group demand brings minivans. Short-notice uncommon trips bring cars. For people with disabilities or heavy packages, random curb stops are available.
Then you remove private cars from cities entirely. Park your private car outside the city, or even better, use the bikeshare-style rentals. No taxis or Ubers, only public transit, with unionized, salaried drivers. Every vehicle on the road is moving and full of people and you can get rid of most parking spaces and shrink most parking lots.
It's not rocket science. It's computer science.
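A toy illustration of the "dynamic allocation" part (zones, windows, and thresholds are all made up for illustration; a real dispatch system is a much harder optimisation problem): batch requests by origin/destination zone and departure window, then size the vehicle to the group.

    # Toy demand-responsive dispatch: group trip requests by (origin zone,
    # destination zone, 15-minute departure window) and pick a vehicle size.
    from collections import defaultdict

    requests = [  # (rider, origin_zone, dest_zone, departure_minute)
        ("ana", "north", "centre", 485),
        ("bo", "north", "centre", 490),
        ("cem", "north", "centre", 492),
        ("dee", "east", "centre", 489),
    ]

    groups = defaultdict(list)
    for rider, origin, dest, minute in requests:
        groups[(origin, dest, minute // 15)].append(rider)

    def vehicle_for(group_size):
        return "bus" if group_size >= 20 else "minivan" if group_size >= 4 else "car"

    for (origin, dest, window), riders in sorted(groups.items()):
        print(f"{origin}->{dest} window {window}: {vehicle_for(len(riders))} for {riders}")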
Fantasy, because it would allow us to drastically reduce the manufacturing of automobiles.
I suspect it's a pretty hard optimisation problem if you want to be lean. And if you want to overprovision... you end up with something that looks a bit like status quo.
Don't get me wrong, I'd love for this to exist. Just, as someone with optimisation experience, it seems pretty gnarly.
I think the cheapest and easiest starting point would be to offer people a time guarantee if they book, and contract with cab companies to provide capacity.
E.g. a bus route near where I used to live was frequent enough that you'd usually want to rely on it, but sometimes buses would be full during rush hour. Buying extra buses and hiring more drivers to cover rush hour was prohibitively expensive, but renting cars to "mop up" on the occasions when buses had to pass stops would cost a tiny fraction, and could sometimes even break even (e.g. 4 London bus tickets would cover the typical price of an Uber to the local station, where the bus usually emptied out quite well).
Reliably being picked up in at most 10 minutes vs. sometimes having to wait 20-30 makes a big difference.
Even just letting people know how full the bus is, in advance, would help a lot with that decision to take a cab etc. There could easily be a map or list of the physical buses and how full they are.
If the bus is full then the transit agency needs to run more service. Unless this is a "short bus" or your fares are unreasonably low (free fares are bad for this reason) your bus is paying for itself and you can run more service on that route to capture even more people.
The status quo in many cities is ~5x overprovisioning just in terms of capacity actively on the road at any given time, and way more than that if you count idle capacity. You could overprovision by a lot and still come out ahead.
The price to you was just a bit more than a bus fare. However, the real price to the city works out to about 15x a bus fare. Does your city really want to subsidize this? (It would cost the city a similar amount to just give you a basic car!)
They did a study for a small- to medium-size town in Germany – based on traffic modelling, it was estimated that an extensive on-demand system with almost 500 on-demand vehicles would only have about the same effect on passenger numbers as simply extending all existing fixed-routes bus routes to run every ten minutes all day (which for a town of that size is a rather good service), but operating the fleet of on-demand vehicles was so expensive that even fully automated on-demand vehicles were significantly more expensive than driver-operated conventional buses (never mind automated conventional buses).