Second, to understand the critique, I think it helps to know how the community has chewed on that paper over time, which can be summarized like this: the distinguishing characteristic of an "RPC call", in the sense Tanenbaum meant, is an attempt to give a network transaction the exact same semantics as a function call. Importantly, you should read this strictly. The idea isn't that it should be "functionish", or that some syntax sugar around a network transaction should look a lot like a function in your language; the idea is that it provides an abstraction that is fully as reliable as a normal function call within that language, and can, in every way, be treated as a function call. It is a "procedure call" that just happens to be remote, but the abstraction is so complete that you can stop thinking about it entirely as a programmer.
If you add that perspective to a reading of the PDF I linked, you'll probably understand the objections made more deeply.
You now live in a world where Tanenbaum "won", because he is objectively correct that such a thing is simply not possible, so it is much harder to understand what he's banging on about here in 2022. The network "RPC" calls you're used to have backed down the promises they made, and in most languages aren't as simple as a function call. Many modern things that call themselves "RPC" instead focus on having a function-like flow in the happy case, but don't promise to magically make problems with local vs. remote references go away, and instead of jumping through hoops to try to solve the problems just have you deal with them.
I caught the tail end of the RPC world at the beginning of my programming career. It was a horrible place. You'd have these horrifyingly complex manifests of what the remote RPC call could do, and they might get compiled into your code like "functions", and then a problem as simple as "the remote call was lost and you never got a reply" would manifest as a function call that hung your program forever. Dealing with this was awful and hacky and ugly, because the function abstraction they were trying to jam themselves so hard into simply had no place for handling "the function you're trying to call is missing", so you might have to declare "handlers" elsewhere that ran in entirely different contexts, or who knows what garbage. Total mess. In the process of trying to make things easier than they could possibly be, they made handling errors incredibly difficult. The sloppiest modern "I vaguely take some JSON and return some other JSON" HTTP API is much preferable to this mess.
One of the interesting lessons of software engineering is that just because something is impossible in general doesn't mean you can't still try, and get something that sorta sometimes works some of the time if all the stars align, and then get people selling that as the next hot thing that is the most important thing in programming ever.
Despite what you may cynically think, I actually don't have any current tech in mind as I type that. It may just be a lack of perspective, as I'm as embedded in the present as anyone else, but I don't feel like there are a lot of extant techs right now in the general programming space promising the impossible. The closest I can think of is the machine learning space, but my perception is that it isn't so much that machine learning is promising the impossible as that there are a lot of people who expect it to do the impossible, which isn't quite what I mean. I mean more like people selling database technology that, in order to exist, must completely break the CAP theorem (and I don't just mean shading around the edges and playing math games, but breaking it), or RPCs that fundamentally only work if all 8 fallacies of distributed computing [1] were in fact true, or techs that don't work unless time can be absolutely and completely synchronized between all nodes, and so on. I'm sure there are little bits of those here and there, but there was a time when this sort of impossible RPC was thought of as the future of programming. The software engineering space has become too diverse for things that are essentially fads to take over the whole of it the way they could in the 1980s and 1990s.
(See also OO, which also has a similar story where if you learn OO in 2022, you're learning a genteel and tamed OO that has been forced to tone down its promises and adapt to the real world. The OO dogma of the 1980s and 1990s was bad and essentially impossible, and only a faint trace of it remains. Unfortunately, that "faint trace" is still in a few college programs, so it has an outsized effect, but for the most part the real world seems to break new grads of it fairly quickly nowadays.)
Finally, this should be contrasted with the modern answer that is continuing to grow, which is message passing systems. A message passage system loosens the restrictions and offers fewer guarantees, and as such, can do more. You can always layer an RPC system for your convenience on top of a message passing system, but you can't use an RPC system to implement a message passing system with the correct semantics, because the RPC system implements too much. I personally view "RPC", in the looser modern form, as a convenient particular design pattern for message passing, but not as the fundamental abstraction lens you should view your system through. Even the modern genteel form of RPC imposes too many things, because sometimes you need a stream, sometimes you need an RPC, sometimes you just need a best-effort flinging of data, etc. When you have a place you need RPC's guarantees, by all means use an established library for it if you can, but when you need something it can't do, drop it immediately and use the lower-level message bus you should have access to.
Empathy has a performative component to neurotypicals. If you are not acting as though you are empathizing, you have "no empathy" and are thus a strange form of psychopath. Actual psychopaths can pass the performative part of neurotypical empathy with flying colors because they are excellent maskers and mirrorers -- that is, of course, when they can be bothered to try at all.
Neurotypical psychology is deep, complex, and fascinating. They devote significant brainpower to constantly evaluating and testing other people's behavior against a constantly evolving set of rules, in order to ascertain whether those people are members of the neurotypical's tribe or ingroup. The rules have to change and evolve because ingroup members will be able to predict how they will change, and so catch any outgroupers who have heretofore successfully infiltrated the ingroup. It's like having a monster CPU with a lot of cores, and then devoting half (or more!) of those cores to the world's most elaborate DRM scheme. We benefit because, in us, much of that CPU power is freed to do other exciting things, like programming or particle physics; but we also suffer because most of the people around us cannot attest that we are legitimate humans running a legitimate copy of the human OS.
Relatedly, I love Japan and I love the Japanese people, but... Japanese society has one of the most elaborate, impenetrable sets of social rules in the world. If you want to know why hikikomori are such a thing there, it's simple, really: many people become so frustrated with their failure to conform to the elaborate ruleset it takes to simply be Japanese, and so tired of being flagged as impostors in that game of Among Us, that they simply give up and withdraw into whatever brings them comfort.
I'm in a strange position here. I fundamentally agree that review sites, for the most part, are an absolute steaming pile of crap in many niches. I work in one of the worst: web hosting reviews. It's plagued by fake affiliate reviews dominating basically every search result. For 10 years I've been trying to run a company that does reviews differently in the web hosting space.
Full disclosure: I have affiliate links for companies that offer them, too. But I also list companies without them, and that has had zero bearing on any result in 10 years. In fact, when I launched I had to beg the CEO of the top-rated company to create a special affiliate program for me. Why? Because he didn't believe in review sites and affiliates in the space. It took months, but I told him that if he didn't create one, what I was trying to do would never have a chance, because I'd never make a dollar: you're the top-rated company; I want to do something different, but it needs to remain somewhat financially viable, and if you don't have a program I'm dead before it starts.
So what happened in those 10 years?
Honestly, not a whole lot. I have mediocre rankings (often pages 2-5) on some of the most competitive terms on Google. I can't afford to buy the links my competitors do, because they make 10x or more what I do by pushing the highest-paying affiliates and designing for conversion. The site has some traction within niche communities - especially the WordPress hosting space - because I also run annual performance benchmarks (https://wphostingbenchmarks.com) where I document and thoroughly test most of the meaningful players in that space.
The data I'm providing is almost surely the most transparent data tracking the industry, and maybe the least biased (the reviews work by analyzing Twitter sentiment at scale, with everything publicly documented in terms of the ranking algorithm and published comments).
But outside little bubbles in communities that care, nobody noticed. Google doesn't care. Google happily ranks affiliate sites spending six figures buying links off apache.org and other open source projects (look at those sponsor lists on a lot of open source projects - hosting/gambling is a bad sign).
I got excited when my work fighting against .ORG registry price increases and sale at ICANN (https://reviewsignal.com/blog/2019/06/24/the-case-for-regula...) got a lot of press, even getting cited by the California AG in his letter which effectively killed it. I got backlinks from a lot of large news sites and traffic. I honestly saw no meaningful improvement in rankings or traffic.
So I'm stuck. I keep the sites running - part time - mostly between other projects. I've moved back heavily into consulting and other work because, as an honest affiliate, I can't compete. Providing honest, transparent data and presenting it with the goal of informing rather than pushing sales is a terrible business model. The majority of people simply don't care. A lot of 'in-the-know' folks read and get informed by my work; they advise their clients using it, and I never see any financial benefit from it. The broader world, especially Google, doesn't know or care.
Is the root problem affiliate links? They certainly skew incentives and push manipulation. If we removed them, what fills the void? Ads? Sponsored content? Something else? I don't think the problem goes away - there is so much money in some of these industries, and the stakes are so high, that companies and people will take advantage one way or another.
How we identify honest, good content amid the garbage seems to be the bigger question. After 10 years, I don't have an answer, and I'm certainly not being noticed.
My reading of the piece is that it ascribes to the Copenhagen interpretation an anti-realist perspective - that is, the theory is nothing other than the ability to predict the results of experiments. In this view, there is no wave-function in reality: it is just a mathematical tool that appears to predict the dots on the screen well.
Scientific realism holds that scientific theories in some sense approximate the world, not just in what the experiments observe, but also in the content of their explanations (precisely: you gain knowledge not just about observables, but also about non-observables - things the theory requires to be true but can't show). The article ascribes this view to Einstein, who presumably thought that there was such a thing as space-time, and that it really does curve under the influence of mass - despite our only seeing things that are explained by the curvature, never the curvature, or the space-time, itself.
The article then goes on to say that the anti-realist approach (dominated by not-undeserving practical concerns and application) focuses on computation: the mathematics is good so long as it gets the right answers in the end, and the end justifies the means. Therefore, it doesn't matter what contrivances must be dealt with in-between: if you get a better prediction, or can do a new exciting thing, then that was always the aim.
Thus, I read the article as advocating a step back from this view: it blames this focus on sheer mathematical sophistication as the route to truth for the profound disinterest in philosophy among physicists (it is important to note that Aeon is, I think, a philosophy publication!). Earlier and contemporary physicists (prominently, Einstein) had an interest not just in what their theories predicted, but in what they explained the world to be, and the article decries the modern lack of this.
I recommend the SEP article on scientific realism, which is dense, but even a brief reading gives enough context to situate the article: https://plato.stanford.edu/entries/scientific-realism/ (although it is even more philosophically focused).
NB: I'm not a physicist or a philosopher either, so grains of salt! My only self-endorsement is that I've spent the last year reading a bit of philosophy, so perhaps I can at least be a stepping-stone to better resources.
For a much more evenhanded -- and hence rare -- take, you may wish to read this very recent commentary from Jack Matlock, the last US Ambassador to the USSR[1]. He has the advantage of being an observer who is (a) very informed and (b) disinterested. The following paragraphs are worth reading, even if you fundamentally disagree with this viewpoint[2].
Russia is extremely sensitive about foreign military activity adjacent to its
borders, as any other country would be and the United States always has
been. It has signaled repeatedly that it will stop at nothing to prevent
NATO membership for Ukraine. Nevertheless, eventual Ukrainian
membership in NATO has been an avowed objective of U.S. and NATO
policy since the Bush-Cheney administration. This makes absolutely no
sense. It is also dangerous to confront a nuclear-armed power with
military threats on its border.
When I hear comments now such as, “Russia has no right to claim a
‘sphere of influence,’” I am puzzled. It is not a question of legal
“rights” but of probable consequences. It is as if someone announces,
“We never passed a law of gravity so we can ignore it.” No one is saying
that Ukraine does not have a “right” to apply for NATO membership. Of
course it does. The question is whether the members of the alliance
would serve their own interest if they agreed. In fact they would assume a
very dangerous liability.
I point this out as a veteran of the Cuban missile crisis of 1962. At that
time I was assigned to the American embassy in Moscow and it fell to my
lot to translate some of Khrushchev’s messages to President John
Kennedy. Why is it relevant? Just this: in terms of international law, the
Soviet Union had a “right” to place nuclear weapons on Cuba when the
Cuban government requested them, the more so since the United States
had deployed nuclear missiles of comparable range that could strike the
USSR from Turkey. But it was an exceedingly dangerous move since
the United States had total military dominance of the Caribbean and under
no circumstances would tolerate the deployment of nuclear missiles in its
backyard. Fortunately for both countries and the rest of the world,
Kennedy and Khrushchev were able to defuse the situation. Only later
did we learn how close we came to a nuclear exchange.
As for the future, the only thing that will convince Moscow to withdraw
its military support from the separatist regimes in the Donbas will be
Kyiv’s willingness to implement the Minsk agreement. As for the Crimea,
it is likely to be a de facto part of Russia for the foreseeable future,
whether or not the West recognizes that as “legal.” For decades, the U.S.
and most of its Western allies refused to recognize the incorporation of the
three Baltic countries in the Soviet Union. This eventually was an
important factor in their liberation. However, the Crimea is quite different
in one key respect: most of its people, being Russian, prefer to be in
Russia. In fact, one can argue that it is in the political interest of
Ukrainian nationalists to have Crimea in Russia. Without the votes from
Crimea, Viktor Yanukovich would never have been elected president.
One persistent U.S. demand is that Ukraine’s territorial integrity be
restored. Indeed, the U.S. is party to the Budapest Memorandum in
which Russia guaranteed Ukraine’s territorial integrity in return for
Ukraine’s transfer of Soviet nuclear weapons to Russia for destruction in
accord with U.S.-Soviet arms control agreements. What the U.S. demand
ignores is that, under traditional international law, agreements remain
valid rebus sic stantibus (things remaining the same).
When the Budapest memorandum was signed in 1994 there was no plan
to expand NATO to the east and Gorbachev had been assured in 1990
that the alliance would not expand. When in fact it did expand right up to
Russia’s borders, Russia was confronted with a radically different strategic
situation than existed when the Budapest agreement was signed.
Furthermore, Russians would argue that the U.S. is interested in
territorial integrity only when its interests are served. American
governments have a record of ignoring it when convenient, as when it and
its NATO allies violated Serbian territorial integrity by creating and then
recognizing an independent Kosovo. Also, the United States violated the
principle when it supported the separation of South Sudan from Sudan,
Eritrea from Ethiopia, and East Timor from Indonesia.
Yes. The CPU and GPU demand has nothing to do with it. The reason is the car industry.
For some reason, in early 2020, all the car industry execs were convinced that people would buy dramatically fewer cars in 2020, due to the pandemic crashing demand. Because they have a religious aversion to holding any stock, they decided to shift the risk over to their suppliers, fucking said suppliers over, as the car industry normally does when it expects demand shifts. The thing that made this particular time special, as opposed to business as usual, is that the car execs all got it wrong: people bought way more cars due to the pandemic rather than fewer, because of moving out of cities and avoiding public transit. So they fucked over their suppliers a second time by demanding all those orders back.
Now, suppose you're a supplier of some sort of motor driver or power conversion chip (PMIC) in early 2020. You run 200 wafers per month through a fab running some early 2000s process. Half your yearly revenue is a customized part for a particular auto vendor. That vendor calls you up and tells you that they will not be paying you for any parts this year, and you can figure out what to do with them. You can't afford to run your production at half the revenue, so you're screwed. You call up your fab and ask if you can get out of that contract and pay a penalty for doing so, and you reduce your fab order to 100 wafers per month, so you can at least serve your other customers. The fab is annoyed but they put out an announcement that a slot is free, and another vendor making a PMIC for computer motherboards buys it, because they can use the extra capacity and expect increased demand for computers. So far so normal. One vendor screwed, but they'll manage, one fab slightly annoyed that they had to reduce throughput a tiny bit while they find a new buyer.
Then a few months later the car manufacturer calls you again and asks for their orders back, and more on top. You tell them to fuck off, because you can no longer manufacture it this year. They tell you they will pay literally anything, because their production lines can't run without it, because (for religious reasons) they have zero inventory buffers. So what do you do? You call up your fab and they say they can't help you; that slot is already gone. So you ask them to change which mask they use for the wafers you already have reserved, and instead of making your usual non-automotive products, you only make the customized chip for the automotive market. And then, because they screwed you over so badly, and you already lost lots of money and had to lay off staff because of the carmaker, you charge them 6x to 8x the price. All your other customers are now screwed, but you still come out barely ahead.

Now, of course, the customer not only asked for their old orders back, but more. So you call up all the other customers of the fab you use and ask them if they're willing to trade their fab slots for money. Some do, causing a shortage of whatever they make as well. Repeat this same story for literally every chipmaker that makes anything used by a car. This was the situation in January 2021. Then several major fabs were destroyed (several in Texas, when the big freeze killed the air pumps keeping the cleanrooms sterile, and burst water pipes in the walls of the buildings contaminated other facilities, and one in Japan due to a fire), making the already bad problem worse.

So there are several mechanisms that make part availability poor here:
1. The part you want is used in cars. Car manufacturers have locked in the following year or so of production, and "any amount extra you can make in that time" for a multiple of the normal price. Either you can't get the parts at all or you'll be paying a massive premium.
2. The part you want is not used in cars, but is made by someone who makes other parts on the same process that are used in cars. Your part has been deprioritized and will not be manufactured for months. Meanwhile stock runs out and those who hold any stock massively raise prices.
3. The part you want is not used in cars, and the manufacturer doesn't supply the car industry, but uses a process used by someone who does. Car IC suppliers have bought out their fab slots, so the part will not be manufactured for months.
4. The part you want is not used in cars, and doesn't share a process with parts that are. However, it's on the BOM of a popular product that uses such parts, and the manufacturer has seen what the market looks like and is stocking up for months ahead. Distributor inventory is therefore zero and new stock gets snapped up as soon as it shows up because a single missing part means you can't produce your product.
So here we are. Shameless plug - email me if you are screwed by this and need help getting your product re-engineered to the new reality. There's a handful of manufacturers, usually obscure companies in mainland China that only really sell to the internal market, that are much less affected. Some have drop-in replacement parts for things that are out of stock, others have functionally similar parts that can be used with minor design adaptation. I've been doing that kind of redesign work for customers this whole year. Don't email me if you work in/for the car industry. You guys poisoned the well for all of us so deal with it yourselves.
Product engineer here, from a major filter manufacturing company. I've done a ton of tests this summer evaluating DIY box fan filters, and I have a couple of insights for people building them this year.
1) Box fans (and other axial fans like your ceiling fan) are terrible at pulling air and an HVAC filter will significantly reduce fan speed due to the added pressure differential across the filter. As a reference, a new MERV 13 filter can reduce fan speed by ~33% when mounted to the intake side, and ~66% if mounted to the front of a box fan. The motor used in your box fan is cooled by the air passing around it, which is why choking off the air flow through your fan can lead to overheating, damaging your fan and creating a fire hazard.
If you're going to make a DIY air purifier, mount the filter to the intake side. You'll get much better performance, your fan will stay cleaner, and there's less risk of damage.
2) Some analysis of the test data from the posted article:
CADR stands for Clean Air Delivery Rate, measured in cfm. It's basically a measure of how quickly ambient air in a confined space is purified: a higher CADR means particles are pulled out of the air faster. For now, just treat it as a relative quality value. Using the PM 2.5 graph for the tiny-room experiment in the article, I'd estimate that the DIY filter the author created has a CADR of 45-50. This is a pretty low value. The room air purifier (RAP) he used is even worse. From the large-room graph, I'd estimate the DIY purifier at a CADR of 30-35. These are rough estimates for two reasons. One, room size is an important variable in this calculation, and I don't have an exact value. Two, when I test a purifier, the test chamber starts at a PM 2.5 around 10^5; at the extremely low starting concentrations used in the author's experiments, the percentage of particles removed by natural decay is much more significant.
Side note, cigarette smoke is the standard for room air purifier testing. Incense sticks are used less commonly due to their slower particle generation.
3) For wildfire season, I recommend a MERV 13 filter for overall performance, cost effectiveness, and for smoke particle capture (PM 2.5). There is a clear trend of diminishing returns for a DIY box fan filter beyond MERV 12-13 filters, peaking at a CADR value of ~150 for a MERV 13 filter for box fans I've tested at their highest fan speeds. A MERV 13 filter is about 50% better than a MERV 10 in a box fan configuration, while a MERV 14 is actually slightly worse. This will vary based on filter brands, too. Rule of thumb, quality matters. We actually rate our filters a bit lower than their actual performance for a number of reasons.
So: my company offers a standing room air purifier with a HEPA filter for about $200. It has a CADR value of 158. A box fan and a MERV 13 filter cost about $40 and have a CADR around 150 - pretty good for a DIY substitute. We even offer cheaper room air purifiers with even lower CADR values. So why buy an expensive room air purifier?
First, room air purifiers use a radial fan rather than an axial fan. Axial fans create a low-pressure area on the exhaust side, drawing air through the fan. Radial fans draw in lower-pressure air near the axle and push out higher-pressure air through an exhaust port at the radius of the fan. Room air purifiers use a radial fan to push air through the high-pressure-drop HEPA filters they are designed around. HEPA filters are qualified to remove 99.9+% of tiny particles (PM 0.3) in one pass; the thick filtration media requires a high pressure differential to pull air through it. A HEPA filter on an axial box fan is going to kill the motor. If you care about PM 0.3 particles, only a HEPA filter will do the job.
Also, longevity. A room air purifier and HEPA filter should run for a year or longer without needing to change filters. An HVAC filter is meant to last 3 months in your home air system under a normal particle load. This lifespan can be much shorter due to poor conditions, such as smoke particles during wildfire season or drywall dust from a renovation project. (Side note - seriously, replace your filter after doing any drywall work. Anything better than a fiberglass filter can clog in just a day or two of dusty drywalling. Also also, fiberglass filters do absolutely nothing, don't buy them.) The room air purifier is designed to run for years with a high pressure differential filter. Your box fan is not. A cheap 20" box fan with an HVAC filter is a good temporary solution, but will not last nearly as long as a room air purifier used daily.
Last, noise and style. RAP units are very quiet and blend into your living room space. A large box fan with a strapped on furnace filter makes for an interesting conversation piece, if you could hear your guests.
4) Don't worry about hermetically sealing your 20x20 filter to the box fan. From my experience, a filter with tape sealing every side to the fan is no more efficient than a filter held to the fan at a couple of contact points. Once the fan is on, the intake air will hold the filter close to the fan. Even with a fully sealed filter, remember that axial fans suck at sucking. Your axial box fan will actually draw air around the front corners and into the fan, even without a filter on the back. And with how easy it is to reduce the fan speed of a box fan, you're better off allowing some air through so that the fan runs at a more efficient and safer speed.
5) A filter on a box fan is definitely better than nothing and a good, cheap short term solution. RAP units are great for long term use and capturing all sizes of particulate. I'm not going to try to sell you my brand, but there is one product I advise you avoid. Recently, Lasko has released a box fan with a filter slot as a 2 in 1 product. I've tested this thing with the provided filter and it does not perform nearly as well as a DIY filter you could make. Also, the filter slot does not fit standard 20x20x1 filters. The slot is designed for a slightly smaller, 19x19x1 sized filter, meaning that if you want to buy a replacement filter or a higher quality one than provided, you won't be able to. A normal 20x20 filter (which is actually slightly smaller in W x L nominally) can barely be squeezed into this fan's slot. Just buy a better filter and some tape instead.
6) If you look for DIY box fan filters, you will find examples with 4-5 filters in a cube shape. Five filters improve filtration efficiency by ~50%, but cost 5x as much. That's $100-$150 in filters that will last just about as long as one filter while cleaning the air only 50% better. Just invest in a room air purifier of the same cost instead and enjoy much cleaner air over an entire year.
It's an interesting question because there has been a dominant narrative that back when there were only 3 channels and everyone trusted what Walter Cronkite said there was a lot more consensus.
That was true in the sense that centre-left and centre-right points of view were expressed on all networks (by law, because of the fairness doctrine); also because the cold war gave a broad framework within which to craft a consensus, consolidation of newspapers left only one or two major papers in most markets, and the two parties were each much more ideologically diverse. There was a consensus, but it bounced around the centre, and on topics where there were large disagreements you could usually still count on people to at least be able to articulate the other side's views and why someone might hold them. There was a greater appetite for trying to figure out an issue by hearing the best arguments from the other side (watch old episodes of Firing Line on YouTube to get an idea of how it was).
When climate change first became an issue in the late eighties, there was a wide range of opinions on it, even though people had no more expertise on the subject than they do now. But people had less confidence that their own views were right and that people with contrasting views were deplorable or had a secret agenda. Even in the early noughties, Gingrich and Pelosi did a commercial together about how it was an important issue [0], even though there were pro-growth republicans who didn't want to take action, and pro-labour democrats who wanted to preserve working-class jobs, who disagreed with them. It was more common to see someone's opinion move in the course of a conversation, because the culture-war lines weren't so sharply defined. Politicians also had less to gain by exploiting divisions, because the structure of the political system rewarded politicians who could pull voters over the centre line instead of just firing up the base.
More than anything it was more acceptable to have heterodox views. You could be a pro-choice republican or pro-life democrat because on the balance you had more things in common with the party of your choice than you didn't. Did everyone have quixotic or unique views? No, but you didn't have to self-censor as often if you did. It was a lot easier to be intellectually curious and learn things from others.
The theory at the time was that the 500 channel universe and the internet would break the old consensus and there would be a whole universe of opinions available. But it seems that "the big sort", the weakening of the parties and strengthening of PACs through campaign finance reform, the modern primary system and how it allows activists to influence and take over the parties (which is really only 40 years old), filter bubbles and people discovering that they prefer not to read stuff they disagree with, and algorithmic newsfeeds that optimize for engagement (ie outrage and out-group homogenization)... all of that has formed 2 consensuses that are more doctrinaire because they are so clustered apart from each other.
It's increasingly uncommon for people to have many or any friends with different political ideologies than them. 50 years ago a majority of Americans said they would not be ok with their child marrying outside of their race; now the vast majority are ok with it. Inter-political marriage is the exact opposite, becoming increasingly uncommon and socially unacceptable.
I tried to follow a range of non-political commentators over the last year, and always thought that lab leak was a hypothesis that couldn't be ruled out given what we know, but made a zoom birthday call go silent when I voiced that opinion because everyone on it was sure that only a right wing loon could consider such a thing. While there was persistent coverage of the hypothesis in alternate media like the Dark Horse podcast, and posts on Medium by postdocs sticking their necks out, the liberal media seems to have committed epistemic closure on the topic, with the NY Times COVID reporter just yesterday bemoaning the hypothesis's "racist roots" [1] (despite, as Greenwald points out, the wet market theory sounding a lot more racist than a mistake at an NIH-funded lab). Even the NY Times' previous COVID reporter tried to excuse his blind spot by falsely claiming that the hypothesis was confined to right wing kooks. [2] Of course it's no more racist to blame the pandemic on a mistake made at an NIH-funded lab than to blame it on wet markets, but once the lines were drawn it was more important to score short term points against the other side than to admit that someone like Tom Cotton might have made a reasonable point.
Some personal anecdotes. I grew up during the hyperinflation days in Brazil (80s, early 90s): talking about 60% a month.
People would get their salary and run to the supermarket and buy everything they needed for the month, because if they waited a single day the prices would have changed. A lot of people internalized that habit and still do that nowadays (not in the sense of running to the supermarket, but buying a lot for the whole month).
This was before barcodes, and every item had a price label on it. Supermarkets had people employed full time just to remark the items. I remember running to pick up an item at one end of a shelf while an employee was remarking items from the other end, so I could buy it at yesterday's price.
I lived through 6 currency changes. Usually, when prices reached the scale of millions, the government would announce that in a very short period (sometimes the next week) there would be a new currency with a new name, at 1000 OLD = 1 NEW. Until the government could replace all existing bills, the old bills would be accepted as if they were the new bills (at 1/1000 of the face value, of course). Old bills passing through the banking system would be stamped with the name of the new currency and the new value before being put back into circulation.
Contracts like rentals were all indexed: there was a clause saying the price would be corrected every month using the official index that tracked the inflation. Or were pegged to the dollar. This by itself fed into the positive feedback loop that was perpetuating the hyperinflation.
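The compounding effect of monthly indexation can be sketched with a small worked example (the numbers and the class name are illustrative, not any official index). At the 60%-a-month rate mentioned above, an indexed rent of 100 becomes roughly 409.6 after only three corrections, which is the feedback loop being described:

```java
// Sketch of a monthly-indexed contract price: each month the price is
// multiplied by (1 + monthly inflation), per the indexation clause
// described above. The 60%/month figure is the hyperinflation rate
// mentioned earlier; "IndexedPrice" is a made-up name for illustration.
final class IndexedPrice {
    static double afterMonths(double initial, double monthlyRate, int months) {
        double price = initial;
        for (int i = 0; i < months; i++) {
            price *= 1.0 + monthlyRate; // apply the official index for the month
        }
        return price;
    }
}
```

Three applications of a 60% correction multiply the price by 1.6^3 = 4.096, so every indexed contract was itself pushing prices up month after month.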
The government tried some bizarre measures to tame inflation. Often they would try freezing all prices, but that was never sustainable for long. The craziest one was probably in 1990 when the government simply froze 80% of everybody's money in the bank for 18 months to reduce the amount of money in the economy. This was a total disaster and even caused many suicides.
In 1994 hyperinflation was finally tamed when the current currency, Real, was introduced. It was the culmination of an ingenious plan that actually worked.
My experience in life with self-identified Christians has largely been in the context of those people disagreeing, on moral grounds, with actions I take or people I support--almost always citing religion in their reasoning. It's possible the people I've dealt with just aren't components of this "mainline christianity" you're familiar with, but they use the same tools to believe these things. To me, those tools are egregiously flawed, and I have a vested interest in making sure those tools don't get used to believe false things that bring harm to myself or my neighbors.
> Because on the main point, they're all pretty much aligned...
Historically, wars have been fought over these disagreements--both within only Christianity, and in the wider religious space.
I think it's worth considering other religions personally because it's what led to my de-conversion: I couldn't answer the question of why, other than being raised in it, I should believe Christianity over, say, Buddhism or Islam. Just as I regarded other religions with skepticism when I was a Christian, I should also regard Christianity.
While the specific point I've made previously deals with selecting a denomination and a church within that denomination, it's also true that people choose religions through similar (flawed?) reasoning.
Further:
> What you see as fundamental differences are really more social than theological...
I'm not sure I'm convinced on this. Take gay marriage, for example: I vividly recall being 12 in our church, sitting in on a conversation between my (single) mother and our pastor on how to deal with people in our lives who chose to sin, specifically referring to my father, who was openly gay at the time. The church we went to was firmly against homosexuality, but was of the love-the-sinner-hate-the-sin cloth. To their credit, they were relatively kind to those of the LGBT community, but they still made it clear they did not support their "choices" and largely ostracized them--with reasoning that, in their view, was ensconced in theology.
While LGBT rights have certainly been a social issue throughout the world, I think dismissing this "difference" between my church and the one on my college campus who made a point of welcoming LGBT members is to minimize these actual theological differences. There's part of me that wonders if this is a bad-faith maneuvering (not on your part, but organized religion as a whole) to downplay socially repulsive beliefs without having to sacrifice their supposed moral authority.
> I think most mainline Christians would reject the notion that the Old testament is a factual historical record.
This certainly hasn't been true across history, and even now I harbor doubt. Perhaps I've only dealt with more fundamentalist types than you, but the opposite has been true in my experience, and it's definitely not true of the more loudmouthed Creationist/Ken Ham style evangelicals. While they may not be representative of the majority, *they are affecting policy* in many regions of the country. My mother, an elementary teacher, frequently voices her frustration that she's not allowed to pose creationism as an "alternative" to evolution in her class's science units--something that is allowed in several other states[1, though from 2014].
You see similar flaws in other arenas, too: my grandparents view climate change as an issue outside of human concern, squarely in God's hands, in part because they believe in life after death and the eventual rapture. While they grant we should steward the planet reasonably well, they don't think we're going to be here forever, so it doesn't matter if Earth becomes an unlivable rock; while some may suffer the effects of an adverse climate, it won't matter when everyone's in heaven.
I also wonder about what motivates these changes in how doctrine is viewed. Supposing you're right, what drove the concession that the Old Testament is not factual? I doubt it was the Church deciding on its own, outside of societal pressure. I'm sure it's because of pressure from those who found fault in the Old Testament teachings--those who were condemned by it, or ostracized by the churches of their time--and the Churches granted this concession without wholly surrendering their power. But what about the next issue? Maybe folks are believing less in Noah's Ark, but how will they contend with folks who're trans, or polyamorous, or who take issue with abstract (i.e., not historical) teachings of the bible, like not rebelling against kings, for they have been ordained by God [2]?
Of course. The issue is simply that it needs to happen within the site guidelines, for example this one: "Comments should get more thoughtful and substantive, not less, as a topic gets more divisive." (https://news.ycombinator.com/newsguidelines.html)
Commenters here need to learn the difference between posting in the flamewar style and having curious conversation. The issue is not the topic—it's which mode people's nervous systems are functioning in as they discuss it. Here is what to watch for:
(1) One mode is battle mode, in which people use grandiose, aggressive rhetoric to try to defeat an enemy, and take any opportunity they can to twist what the other side says to gain an advantage...
(2) ... and the other is curiosity mode, in which people explore together to find the truth and are interested in what each other are actually saying, thinking, and feeling.
(3) You can tell which mode you're in by sensing into your level of activation while posting. If you're not sure about it yet, or if you're feeling agitated, slow down before posting, set an intention to observe your own state, and it will soon become clear.
Another way of explaining this is the distinction between reflexive and reflective responses: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor.... Reflexive responses are lightning fast, associated with high activation, and oriented towards threat assessment and defense. They come from the fight-or-flight layer that we're all dealing with in ourselves. They're also highly repetitive, because they basically come from cache—that's why we're able to generate them so quickly. The reflexive system is not about responding to or creating anything new. It's there for survival and to prevent the recurrence of past painful experiences.
Reflective responses are slow, come from the drive to explore and learn, and have to do with new responses to new information—which take time to come together. They lead to conversations and outcomes that aren't simply replays of past reactions. They aren't primarily about past traumas and threat defense. In order to function this way, the nervous system needs a certain baseline of safety. One way to get there is simply to wait until the initial wave of reactiveness subsides, and then look around and orient to what's specifically new and interesting in the present situation.
Empathy comes into this too, because the ability to put oneself in the other person's position, instead of quickly scanning their comment for weaknesses to exploit, is a complex process that requires the slower cognitive systems to come online.
I've helped with an IT project for several large dairies in the US (1,000+ cows each). Like you said, the cows practically milk themselves. They walk by themselves into the milk barn, and one person attaches the automatic milking machine to each cow as it walks into a stall. The machine automatically detaches from the udder when the cow is done being milked, and the cow walks back out to its pen on its own. I was told the cows like to be milked because it's uncomfortable for them when they are full of milk. Most dairies milk their cows 3 times a day around the clock.

One of the dairies I've been to has a round milk barn with a rotating floor and about 100 milking stalls around the outside edge, so the guy attaching the milking machines doesn't even have to move. By the time the cows rotate all the way around the building they are done; the milking attachments drop off and the cows walk out the door on their own and go back to their pens to eat. I've shown up at these places a few times unannounced and spent 30 minutes walking around looking for a person (other than the guy in the milk barn) to let me into the office. Many times there are two people at the entire dairy: the guy in the milk barn and a guy driving a tractor that drops the feed for the cows along the sides of their pens.

Each pen has artificial wood floors, kind of like Trex decking material, and sprinklers in the roof that run automatically several times a day to wash away the poop, which drains into holding ponds. The holding ponds have machinery just like a wastewater treatment plant: the solids settle to the bottom, and the water is used for corn crops that are fed back to the cows. The manure is dredged from the holding ponds, dried, and distributed on the fields with tractors to fertilize the corn.
The majority of food fed to the cows is corn silage, which is the entire corn plant ground up and composted/fermented in large covered piles for a while to make it easier for the cows to digest. This makes the whole setup fairly self sustaining. If they need to, the dairies will buy feed, but they try to minimize that because it increases their expenses. Most of these large dairies own thousands of acres around the dairy that are almost entirely dedicated to growing corn to feed the cows. After being to these kinds of setups it's clear to me that there's no way a small operation could compete. If they could replace those last two employees with machines, they would.
1. We do attempt to attack cancers by reducing their available energy. That's why, at one point, a major field of research in cancer therapeutics was interfering with angiogenesis, because cancers will secrete messengers that help grow them dedicated (if crappy, low-quality) blood vessels. The issue with "starving" them more starkly is that they're very good at getting a share (e.g., forcing the body to supply them with blood vessels), so you're going to be hitting other labile tissues as fast or faster (skin, GI mucosa, blood and immune cells.)
Another way of targeting their rapid metabolism is pointing our therapy at cells with high replication rates. A number of our cancer therapeutics are aimed directly at cells that are currently replicating, which should selectively hit cancer cells (though again, it hits skin, GI mucosa, blood and immune cells, etc. because they're also high-turnover cells.)
We use methotrexate to interfere with DNA synthesis, thus reducing the rate of replication altogether (in cancer cells, as well as.... above).
The problem, besides the dose-limiting toxicities of all of these things (because targeting metabolism hits all high-metabolism cells), is that cancer cells are really good at developing resistances. So, for instance, if you starve them of blood supply, they'll switch to anaerobic metabolism of glucose. If you starve them of glucose, well, you can't really - I'll discuss that below. If you give them methotrexate or other nasty drugs, they alter the cells' native drug-efflux pumps to target those drugs better and pump them right out of the cell. Cancer cells have a broken mechanism for protecting DNA - the result is really high rates of cell death among cancer cells, and also really rapid evolution.
In terms of starving cells of glucose: glucose is the least common denominator of cellular metabolism. It's the primary food source for the brain. Different cells have different receptors for absorbing it, with different levels of affinity. If you're running low, pretty much every cell in the body that can will kick up metabolic products to the liver to turn into glucose it can share with the bloodstream - because the best receptors in the bloodstream for picking up glucose belong to the brain. You'll starve, or poison, the brain long before you manage to starve out a cancer. (Yes, Ketone bodies are a thing, but that happens alongside your body mobilizing everything it can to feed the brain, not instead of.)
We also can't 'see' all the tumor. The way cancers actually develop is you have an abnormal cell A, which grows into a tiny nest. These are below detection in any practical clinical way, and we don't want to treat them because they're ridiculously common - your immune system wipes them up. If we tried to detect and treat them all, we'd kill everyone with side effects long before we prevented a fatal cancer.
Out of the bunches of these that develop and die, or develop and go permanently quiet, one gets active enough to start seeding tumor cells into the blood stream. Most of those cells will die, too, because blood is rough for cells not built to withstand it. Most of these are going to be undetectable in any way, and do nothing to people.
(Every time I say something is undetectable, I mean "Except for high precision laboratory experiments used to detect just such things").
Eventually a tiny pre-pre-tumor will start seeding cells into the blood stream that can survive the blood. These will get seeded effing everywhere. Most of these are permanently quiescent and do nothing, ever. They exist at the level of single cells - we can't see them. They don't do anything, metabolic activity very low, so we can't target them.
Once in a blue moon you get one seeded that is actually metabolically highly active. Or maybe it mutates into metabolic activity later. Most of those die.
Once in a blue moon, one of these will live enough to start replicating for real. Most of those get wiped out.
And once in a blue moon, they start replicating for real, and develop immune evasion, and you have something that becomes a cancer, maybe. Or it gets triggered by something external and becomes a cancer. There's a "seed and soil" element here. It'll often start seeding back into the blood stream.
By the time you have a detectable mass, your entire body has been seeded with these cells, most of them both un-image-able and un-selectively-treatable. Luckily, the overwhelming majority of these cells - lots of nines - won't do jack. Of the trillions that will seed your body, if we stimulate them just right, you might get a couple of new tumors, or none at all.
We know this because, early in modern oncology, we learned that tumors benefit from circulating inflammatory markers. When a surgeon took out a tumor, not infrequently the patient would come in a year later with a new one or two that weren't previously detectable. We eventually learned that the inflammatory growth signals that come with surgical trauma can provoke an otherwise sleepy tumor cell into metabolic activity.
Which is a roundabout way of saying "cancers are more metabolically varied than the late, aggressive stage of the process we usually refer to as 'cancer' would suggest."
That being said, if you could inject something directly into the tumor (rather than into the bloodstream, which would prioritize sending said poison-pill glucose to the brain or liver) and take advantage of its metabolism, that would be great. We do kind of do that: we implant radioactive pellets directly, with the added benefit that we know it won't affect much tissue outside of the immediate area.
I hope my answer was actually useful in providing some biological context? I'm afraid I might have just word-vomited instead of being helpful.
There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says “Morning, boys. How’s the water?” And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes “What the hell is water?”
How much of the accepted-without-question, 100% undisputed mainstream narrative is disinformation as well?
Just two recent examples:
1. Voter fraud in the election.
After the election was done and rumours of fraud arose in the social media memeplex, nearly immediately the "There is no evidence of election fraud - claims of election fraud are disinformation" counter-meme was released into official media channels (and from there spread into social media). That this too is disinformation should be obvious.
- Election fraud (to whatever degree) occurred, or it did not.
- Evidence of election fraud (that is discoverable by humans) then existed, or it did not (regardless of that discovery).
- The necessary investigations to discover any such evidence had been launched and completed, or they had not.
disinformation: false information which is intended to mislead, especially propaganda issued by a government organization to a rival power or the media.
As of the date of the election completion + 1 day, the necessary investigations to uncover the existence of election fraud had not been completed - therefore, the assertions of fact broadcast from multiple independent media organizations that there is(!) no evidence of election fraud are disinformation.
2. The riot at the Capitol
Immediately after the Capitol incident, a coincidentally similar narrative was asserted by multiple independent media organizations: this was literally a coup attempt, an attack on "our most sacred(!) institution". And, the intent of the people on the ground was(!) to execute a coup against the US government. This was asserted as if it was a fact. However, what was actually going on (which is contained within the minds of the human beings who were there that day) is not actually known.
Getting back to the "Morning, boys. How’s the water?" story...somewhere along the way (my sense is that this has occurred mostly in the last 5 years), the notion of what is actually(!) true seems to have become somehow radioactive in a way - the very idea is considered repulsive, not to be mentioned in polite company.
And it's not just braindead political partisans who exhibit this behavior - as far as I can tell, this default intuition has been adopted by extremely high percentages of people across all categorizations. If anything, it seems to me that highly intelligent people are often even more repulsed by it than normal folks (who perhaps don't grasp the philosophical depth of the idea).
The election fraud disagreement and the Capitol riot, these things concern me somewhat, but what concerns me even more is the degradation of the very fabric of reality, the "This is Water" in the story. And not only the degradation itself, but that the degradation is being promoted (at least unconsciously and passively) by some of the brightest minds in our population. A society that is arguably involved in some form of a new cold war with a rapidly rising new global superpower adopting a culture of disregard for what is actually true seems like not a great idea to me. If we actually care as much about the Capitol riots as we proclaim to, I think we should be willing to face up to what the actual underlying causes of it are, as opposed to the simplistic narratives (disinformation) that are asserted as fact to be the cause.
As it happens, I have developed one algorithmic, low latency trading system in Common Lisp / ANSI C, and was then asked to rewrite it in Java, which I did.
It actually traded on Warsaw Stock Exchange and was certified by WSE and was connected directly to it (no intervening software).
Yes, it is possible to do really low latency in Java. My experience is that my optimized Java code is about 20-30% slower than very optimized C code, which is absolutely awesome. We are talking about a real research project funded by one of the larger brokerage houses, and the application was expected to respond to market signals within single-digit microseconds every single time.
The issue with Java isn't that you can't write low latency code. The issue is that you are left almost completely without tools: you can't use almost anything from the Java standard library, and you're left scratching your head over how to solve even the simplest problem, like managing memory, without something coming along and suddenly creating a lot of latency where you don't want it.
Can't use interfaces, can't use collections, can't use exceptions, etc.
You can forget about freely allocating objects; except for very short-lived objects, you must resign yourself to managing preallocated pools of objects.
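The preallocated-pool pattern can be sketched as follows. This is a minimal illustration, not code from the project described; the `Order`/`OrderPool` names are hypothetical. All allocation happens at startup, so the hot path acquires and releases objects without ever calling `new`, and therefore without feeding the garbage collector:

```java
import java.util.ArrayDeque;

// Hypothetical pooled object: mutable, reusable, no per-use allocation.
final class Order {
    long price;
    long quantity;
    void reset() { price = 0; quantity = 0; }
}

// Fixed-size pool: every Order is created up front, in the constructor.
final class OrderPool {
    private final ArrayDeque<Order> free;

    OrderPool(int capacity) {
        free = new ArrayDeque<>(capacity);
        for (int i = 0; i < capacity; i++) {
            free.push(new Order()); // all allocation happens at startup
        }
    }

    Order acquire() {
        Order o = free.poll();     // no allocation on the hot path
        if (o == null) throw new IllegalStateException("pool exhausted");
        return o;
    }

    void release(Order o) {
        o.reset();                 // scrub state before reuse
        free.push(o);
    }
}
```

In practice such pools are often built on preallocated arrays or ring buffers rather than a collection class, and sized generously so the "pool exhausted" path never fires during trading.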
I don't know the current state of garbage collection, but in our case garbage collection had to be turned off, and nothing outside of Eden could be collected during program execution (Java 8).
Writing that kind of optimized code is all about control and Java is all about abstracting it so you don't have to worry. As you see, both goals are at odds.
So I still prefer working on low latency in C, as it is a more natural, native solution for problems where you absolutely need to control memory layouts, preallocated object pools, NUMA, compilation of decision trees to machine code, the prefetcher, etc.
I have about 10 years combined experience working with C and 15 years working with Java.
It is racist to implement systems of racial discrimination. There is no way that you could modify the meaning of the word racism to the point that it no longer includes racial discrimination.
Racism also doesn’t presuppose any particular motive. There are many reasons that somebody could choose to be racist, and if you look at real world racists, you’ll find that they offer a wide variety of justifications for their views. Just as you have done in this comment. You are making the argument that your racism is morally righteous, and that it will create positive outcome (which, coincidentally, certainly isn’t an uncommon position for racists to take).
You’ve also made the mistake of presuming that the problem you describe requires a racist solution. It absolutely doesn’t. If wealth creates more inter-generational wealth, and poverty creates more inter-generational poverty, then you need a solution for social mobility, not to artificially elevate people based on ethnic group membership (which of course also disadvantages others based on their different ethnic group membership). A poor white or Asian kid, whose family has always been poor, is going to face the same socio-economic disadvantages that you’re describing, and a wealthy black kid isn’t going to be facing them at all. The solution you’re offering (aside from being racist) doesn’t solve the problem you’re describing. In many cases, it actually makes it worse, because if you look at how such systems operate, you’ll find that a decent portion of the people who benefit from them actually come from relatively well off families.
> To live in this process is absolutely not to be able to notice it-please try to believe me-unless one has a much greater degree of political awareness, acuity, than most of us had ever had occasion to develop. Each step was so small, so inconsequential, so well explained or, on occasion, 'regretted,' that, unless one were detached from the whole process from the beginning, unless one understood what the whole thing was in principle, what all these 'little measures' that no 'patriotic German' could resent must some day lead to, one no more saw it developing from day to day than a farmer in his field sees the corn growing. One day it is over his head.
> How is this to be avoided, among ordinary men, even highly educated ordinary men? Frankly, I do not know. I do not see, even now. Many, many times since it all happened I have pondered that pair of great maxims, Principiis obsta and Finem respice-'Resist the beginnings' and 'Consider the end.' But one must foresee the end in order to resist, or even see, the beginnings...
> In the university community, in your own community, you speak privately to your colleagues, some of whom certainly feel as you do; but what do they say? They say, 'It's not so bad' or 'You're seeing things' or 'You're an alarmist.'
> And you are an alarmist. You are saying that this must lead to this, and you can't prove it. These are the beginnings, yes; but how do you know for sure when you don't know the end, and how do you know, or even surmise, the end?
-- Milton Mayer, They Thought They Were Free (The Germans 1933-45)
In the past few decades, the political right served as the watchdog against governmental abuse of power (domestically), while the left was more sensitive to corporate abuses. But with the rise of big tech has come a generation of left-leaning young people willing to give the benefit of the doubt to corporations they see as by- and for- their own generation. At the same time, the right's foray into populism has ushered in a party-wide acceptance of authoritarianism (provided it is wielded against members of out-groups).
I agree with you if we were talking about a decision by HN, or Metafilter, or other small communities. But Twitter/Facebook et al. do not feel the same. The situation feels more like a company deciding that it can ban certain literature on the grounds that it owns all the land in the town [1].
So it seems to me that our watchdogs are in some sense asleep at their traditional posts. It seems to me that we have an instance of a corporation granting itself new and wide ranging powers (regardless of their benevolence) over a wide swath of public discourse. It seems to me that this ought to be resisted as a beginning, though we cannot see the ends.
>...we have recognized that the preservation of a free society is so far dependent upon the right of each individual citizen to receive such literature as he himself might desire... can those people who live in or come to Chickasaw be denied freedom of press and religion simply because a single company has legal title to all the town? For it is the State's contention that the mere fact that all the property interests in the town are held by a single company is enough to give that company power, enforceable by a state statute, to abridge these freedoms. We do not agree that the corporation's property interests settle the question... Ownership does not always mean absolute dominion. The more an owner, for his advantage, opens up his property for use by the public in general, the more do his rights become circumscribed by the statutory and constitutional rights of those who use it.
1) Be a legacy admit. This is the easiest way to get into Yarvard Law.
2) Be an Ivy League undergraduate. This is the second easiest way to get into Yarvard Law, because you have the opportunity to get law professors to vouch for your admission even before you apply, and the grade inflation in the Ivies means that your transcript will easily beat any student that went to a public university or college.
3) Be a token minority student (i.e., not White or any type of Asian) with excellent grades that graduates top of your undergraduate class, because Yarvard Law needs token minority admissions every year to put on their brochures. (In Yarvard's defense, this is true of almost every law school.)
4) Be rich and do things during college that most people can't do during their summers, like volunteering at the Hague, or having your parents make a generous donation to the school.
5) If there are any spots remaining after categories 1-4, excel at the LSATs, get a 4.0 GPA in undergraduate (the major does not matter), and ace the admissions interview. Generally, there are only a handful of spots remaining by this point so you are competing with thousands of other applicants for a dozen or so spots. (In this regard, Legally Blonde is actually pretty spot on: a 4.0 in a fluff major is just as good as a 4.0 in an engineering major for Yarvard admissions purposes.)
Well, props for sticking with a truly unpopular opinion.
I’d like to change your mind one day though. My friendship with Scott was instructive here. He was a former HN mod. I looked to him as a mentor and a friend, though I’m not sure it went both ways. Regardless, we worked together on Lumen for years. When I was banned for a year from HN, he never once allowed our friendship (such as it was) to affect his duty to the site. The decision wasn’t his, and he wasn’t going to pull strings internally just because we occasionally wrote code together.
I get what you’re saying. And I agree that in the long term, it’s extremely important to set up incentive structures in the right way. But friendship — a word quite hard to define, if you think about it — is a part of the human experience.
The point here is that there are people with integrity. They do exist. And they can be friends regardless of other duties — sometimes unpleasant ones.
Now, my little story isn’t quite related. I wasn’t an adversarial peer, which is what you’re talking about. But your reasoning seems to be: if the incentive structure permits friendship, it compromises integrity. It’s a reasonable concern, especially over the course of decades. But the word “professional” reflects the fact that business comes before friendship.
It’s a fundamental truth that people will try to form friendships regardless of their occupation. Rather than change the incentive structure, as you propose, shouldn’t we recognize that truth?
The reason I related to your comment so much is, for a time, I felt exactly the same way: if business was any indication, it was a web of insider deals, “friendships”, and favors behind closed doors. I wanted nothing to do with that world. But two people with integrity can sidestep all of those concerns and simply... be friends. Even in the highest court of the land, which determines our fate.
(As a sidenote, you seem like an interesting person. If you happen to want a friend, or to debate hypothetical political structures, feel free to DM me on Twitter.)
As a decidedly amateur historian with interest in this area, I think that claim is true on its own but also does not tell the entire story.
It is true that decentralization was viewed as a major component of nuclear survivability, particularly in the earlier part of the cold war. This was the time period during which the FEMA (today's name) crisis relocation program was being devised, for example, and the high cost and complexity of crisis relocation (which, in an earlier form, was a major motivation for the Eisenhower freeway system, to the extent that it's probably fair to say that it was the main motivation with materiel movement as a second) served to highlight the inherent vulnerability of dense cities and keep it very much in the minds of government planners. Decentralized cities had an inherent advantage to planners in that money could be saved on crisis relocation efforts. Of course the crisis relocation program was never fully implemented, but the way of thinking was fairly influential.
The federal government had an enormous role in the suburbanization of US cities in many, many ways, which is actually part of what makes it hard to address this point. Support for suburbanization was not coming just (or even primarily) from FEMA; all kinds of federal agencies had a hand. Much of the urban renewal work of the 20th century took the effective form of relocating poor people to the suburbs and replacing their inner-city housing with industrial/commercial/transport, for example the Model Cities program of the late '60s. This was in part a result of the general feeling that the inner city was where poor people lived, and so improving their economic situation required getting them out of it; part of it was merely the practical issue that substantially improving a dense area is a lot more expensive than razing it and building something new there. I don't know if these programs were strongly influenced by crisis planning, they probably were at least in part, but it seems unlikely that crisis planning was a much bigger influence in federal advocacy of suburbanization than the more organic trends of white flight and urban decay that came out of a set of race and class relations, in a potent combination with some simple budget and timeline considerations.
My point is that nuclear planning was a factor, but the massive suburbanization of the postwar decades originated from many factors out of which nuclear crisis planning was only one, and I'm not convinced that it was one of the bigger ones in the end. Yes, the Bulletin of the Atomic Scientists very openly advocated for suburbanization of cities and that view was influential, but at the same time so much of suburbanization was motivated by a new vision of the American dream that came out of the peculiarities of post-war economics and demographics, racism, the radical popularity of the automobile (which not only enabled low-density areas but often required the destruction of high-density areas to provide freeway access to business districts), and probably at least a few other things.
Any claim that "urban sprawl is a result of X" where X isn't a list of things is probably pretty hard to defend. A complete change in not just urban planning but people's patterns of life tends to require a confluence of factors.
> in Crimea almost everyone wanted to join Russia anyway
That much is actually true - I grew up not far from there, and it was a mistake for Khrushchev to give Crimea to the Ukrainian SSR - it had been part of Russia since 1783. But back then it was unimaginable that the USSR would fall, so nobody made much of it. Ukrainians were an ethnic minority in Crimea, and Russians were 67% of the population at the time it re-joined Russia. Not something you will find in the US mainstream press. This is why Greenwald's input is valuable - he actually studied the issue rather than parroting the mainstream party line.
To see why he's right, consider this: Russia took Crimea _without any bloodshed_ and with fairly minimal military presence. This would not be possible if the populace weren't in agreement.
Leaving aside the formalities, Ukraine's grasp on Crimea was tenuous at best, and its fate was decided when Ukraine threatened to kick the Russian military base out of Sevastopol - which lets Russia control the vital Black Sea. With Russia no longer there, somebody else (e.g. NATO) could take its place, which Russia will not allow for geostrategic reasons.
But even ignoring Russia's interests in the region - one could argue Crimea was never really "Ukrainian" in the first place, and their temporary possession of that land was a historical mistake, which has now been corrected. It's not even clear if Khrushchev had the authority to give it away in the first place.
No opinion on the West Bank, I don't have first hand info on that.
By my understanding, Popper didn’t mean the “paradox of tolerance” in quite the way you think. It wasn’t meant to be used preemptively. It basically meant: when some group starts resorting to actual, literal violence, actual, literal violence is justified to stop them. The classic example of this is the Nazis, who engaged in actual armed violence for about a decade before ever taking power.
To this point, Bari Weiss, to my knowledge, never committed violence, never harassed coworkers, never formed an angry mob to harass coworkers—she merely stated opinions that people didn’t like. What people are doing is defining anything they strongly disagree with as “intolerance” in order to justify their own intolerance against it. The cosmic joke is on them because they are gradually radicalizing themselves into the very people the paradox of tolerance is meant to protect us from.
Here’s what Popper actually had to say:
> In this formulation, I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise. But we should claim the right to suppress them if necessary even by force; for it may easily turn out that they are not prepared to meet us on the level of rational argument, but begin by denouncing all argument; they may forbid their followers to listen to rational argument, because it is deceptive, and teach them to answer arguments by the use of their fists or pistols. We should therefore claim, in the name of tolerance, the right not to tolerate the intolerant.
To me, it sounds like the people Popper is describing are exactly the people who misuse his ideas to justify censorship and violence. He isn’t talking about actually becoming the side that “denounces all argument”. He’s addressing people who are focused on peaceful discourse as their mode of resolving disagreements and reminding them—some people will escalate to violence and if you are unwilling to defend yourself, your values will become extinct. None of this justifies being the first side to take up arms against someone who hasn’t done so themselves.
So they do—they just don’t remain there, they get kicked back up.
What is at stake is that you carry a subtle contradiction in your head, assuming you got some pieces of wisdom from your physics classes but not others.
One side of this is the minimum energy principle. Like, this is common sense, you leave a basketball bouncing in your driveway and you expect it to stop somewhere, probably (if it's not a perfectly flat blacktop) downhill of wherever you started it. Heck if you've had a hoop in your driveway you probably have a reflex to run after the ball when it touches ground, otherwise it'll eventually find the road and roll very far away as it chases the downhill. That's the minimum energy principle, dynamic friction reduces kinetic energy in a system while forces tend to make potential energy into kinetic energy, so you would expect if you just leave the system alone it ends up at rest at some minimum of potential energy.
The other side of the contradiction is that we tell you that energy is conserved: it cannot be created or destroyed. If you are very lucky, we tell you that there is a way of phrasing the laws of physics such that energy conservation is the same as saying that the laws of physics are the same today as they are tomorrow—we call this “time translation symmetry”—and the theorem that connects continuous symmetries to conserved quantities is Noether’s Theorem, if you are looking for something to google here.
The only way to resolve the contradiction is to say that friction is actually dissipation—energy is getting more spread out among the universe but is not being destroyed. So you want to picture a big bucket of water and a thousand little glasses, and we empty the bucket only by putting its water into all of those glasses. And the idea is that eventually, if random processes take over the moving of the water from any of these to any other of these, all of them will have the same water level. You can actually see this if you see demonstrations of the siphon effect: water will actually flow up and down a hose to equalize two water levels in two reservoirs. When energy does this—when it has the same average occupation in every degree of freedom of a system—we say that the system has thermalized, and we can measure its absolute temperature as that energy level. Technically temperature is not uniquely defined in any other context—mostly, we find physical objects whose properties like volume or length vary approximately linearly with temperature in this sense, then we use them as thermometers to measure temperatures in other contexts.
Now there is an interesting result, which is that if your bucket is at the same level as the cups, in some sense your bucket never ends up empty. Like there can be a lot of cups and that water can be spread over everything and there is only a tiny film of water in the actual bucket left, but it’s not zero.
This is also a theorem, called the fluctuation-dissipation theorem. It says that I can't dissipate energy into some environment without noise from that environment preventing me from dissipating all of my energy into it: I have to accept random fluctuations back from it. In other words, there are no one-way channels for energy.
To bring this back to your question, the basketball only comes to rest on the ground because it has so much more energy than the thermal fluctuation energy which it gets back from the ground. When it eventually settles, it turns out that it is not fully at rest but is moving imperceptibly due to these fluctuations. And those fluctuations are imperceptible because the mass of the basketball is very large—large enough that this disturbs the center of mass by a height way smaller than the size of atoms, which in turn are way smaller than the wavelength of light you can see. So even with a microscope, visible light is too chunky to show you this on a basketball.
But repeat the calculation for how far those 25 meV of thermal energy will launch a 28-amu nitrogen molecule and you will find that the height is roughly nine kilometers [1] which is a pretty good rough estimate for the height of the Earth's atmosphere, that's about where the troposphere ends. Just to be clear, the ground doesn't kick any individual molecules that high, they collide with other air molecules way before they get anywhere near that high, but that energy and momentum does ultimately get communicated to the whole swarm of air molecules and stops the swarm collectively from falling lower than that distance on average, even though everything is one big colliding mess. The first order prediction is actually an exponential decrease in density as you go to those higher heights, and I think that 9km figure is a 1/e decay constant, but the truth gets a lot more complicated as the ultraviolet light coming into the Earth is getting preferentially scattered in the high atmosphere and contributing a second source of energy to the system.
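The estimate above follows from setting gravitational potential energy equal to one unit of thermal energy, h = kT/(mg). A minimal sketch that reproduces both numbers—the constants are standard, and the basketball mass is an assumption on my part (roughly 0.6 kg), not from the original comment:

```python
# Characteristic thermal height h = k_B * T / (m * g): the altitude at which
# gravitational potential energy equals one unit of thermal energy k_B * T.
K_B = 1.380649e-23   # Boltzmann constant, J/K
G = 9.81             # gravitational acceleration, m/s^2
AMU = 1.66054e-27    # atomic mass unit, kg

def thermal_height(mass_kg, temp_k=300.0):
    """Height at which m*g*h equals the thermal energy k_B*T."""
    return K_B * temp_k / (mass_kg * G)

h_n2 = thermal_height(28 * AMU)   # 28-amu nitrogen molecule
h_ball = thermal_height(0.6)      # ~0.6 kg basketball (assumed mass)

print(f"N2 scale height:        {h_n2 / 1000:.1f} km")  # ~9 km
print(f"Basketball fluctuation: {h_ball:.1e} m")        # far below atomic size (~1e-10 m)
```

Note that k_B * 300 K is about 4.1e-21 J, i.e. roughly 25 meV—the same thermal energy figure quoted above—so the nitrogen result lands near the 9 km 1/e decay height, while the basketball's thermal displacement comes out around twelve orders of magnitude smaller than an atom.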
But yeah, the air doesn't fall down to the ground because the sun is shining and keeping our planet warm, and that warmth is imperceptible in the motion of a basketball but several kilometers in terms of the height of air molecules.
The community reflects the larger society, which is divided on social issues. Don't forget that users come from many countries and regions. That's a hidden source of conflict, because people frequently misinterpret a conventional comment coming from a different region for an extreme comment coming from nearby.
The biggest factor, though, is that HN is a non-siloed site (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...), meaning that everyone is in everyone's presence. This is uncommon in internet communities and it leads to a lot of misunderstanding.
(Edit: I mean internet communities of HN's size and scope, or larger. The problems are different at smaller size or narrower scope, but those aren't the problems we have.)
People on opposite sides of political/ideological/cultural/national divides tend to self-segregate on the internet, exchanging support with like-minded peers. When they get into conflicts with opponents, it's usually in a context where conflict is expected, e.g. a disagreeable tweet that one of their friends has already responded to. The HN community isn't like that—here we're all in the same boat, whether we like it or not. People frequently experience unwelcome shocks when they realize that other HN users—probably a lot of other users, if the topic is divisive—hold views hostile to their own. Suddenly a person whose views on (say) C++ you might enjoy reading and find knowledgeable, turns out to be a foe about something else—something more important.
This shock is in a way traumatic, if one can speak of trauma on the internet. Many readers bond with HN, come here every day and feel like it's 'their' community—their home, almost—and suddenly it turns out that their home has been invaded by hostile forces, spewing rhetoric that they're mostly insulated from in other places in their life. If they try to reply and defend the home front, they get nasty, forceful pushback that can be just as intelligent as the technical discussions, but now it feels like that intelligence is being used for evil. I know that sounds dramatic, but this really is how it feels, and it's a shock. We get emails from users who have been wounded by this and basically want to cry out: why is HN not what I thought it was?
Different internet communities grow from different initial conditions. Each one replicates in self-similar ways as it grows—Reddit factored into subreddits, Twitter and Facebook have their social graphs, and so on. HN's initial condition was to be a single community that is the same for everybody. That has its wonderful side and its horrible side. The horrible side is that there's no escaping each other: when it comes to divisive topics, we're a bunch of scorpions trapped in a single bottle.
This "non-siloed" nature of HN causes a deep misunderstanding. Because of the shock I mentioned—the shock of discovering that your neighbor is an enemy, someone whose views are hostile when you thought you were surrounded by peers—it can feel like HN is a worse community than the others. When I read what people write about HN on other sites, I frequently encounter narration of this experience. It isn't always framed that way, but if you understand the dynamic you will recognize it unmistakeably, and this is one key to understanding what people say about HN. If you read the profile the New Yorker published about HN last year, you'll find the author's own shock experience of HN encoded into that article. It's something of a miracle of openness and intelligence that she was able to get past that—the shock experience is that bad.
But this is a misunderstanding—it misses a more important truth. The remarkable thing about HN, when it comes to social issues, is not that ugly and offensive comments appear here, though they certainly do. Rather, it's that we're all able to stay in one room without destroying it. Because no other site is even trying to do this, HN seems unusually conflictual, when in reality it's unusually coexistent. Every other place broke into fragments long ago and would never dream of putting everyone together [1].
It's easy to miss, but the important thing about HN is that it remains a single community—one which somehow has managed to withstand the forces that blow the rest of the internet apart. I think that is a genuine social achievement. The conflicts are inevitable—they govern the internet. Just look at how people talk about, and to, each other on Twitter: it's vicious and emotionally violent. I spend my days on HN, and when I look into arguments on Twitter I feel sucker-punched and have to remember to breathe. What's not inevitable is people staying in the same room and somehow still managing to relate to each other, however partially. That actually happens on HN—probably because the site is focused on having other interesting things to talk about.
Unfortunately this social achievement of the HN community, that we manage to coexist in one room and still function despite vehemently disagreeing, ends up feeling like the opposite. Internet users are so unused to being in one big space together that we don't even notice when we are, and so it feels like the orange site sucks.
I'd like to reflect a more accurate picture of this community back to itself. What's actually happening on HN is the opposite of how it feels: what's happening is a rare opportunity to work out how to coexist despite divisions. Other places on the internet don't offer that opportunity because the silos prevent it. On HN we have no silos, so the only options are to modulate the pressure or explode.
HN, fractious and frustrating as it is, turns out to be an experiment in the practice of peace. The word 'peace' may sound like John Lennon's 'Imagine', but in reality peace is uncomfortable. Peace is managing to coexist despite provocation. It is the ability to bear the unpleasant manifestations of others, including on the internet. Peace is not so far from war. Because a non-siloed community brings warring parties together, it gives us an opportunity to become different.
I know it sounds strange and is grandiose to say, but if the above is true, then HN is a step closer to real peace than elsewhere on the internet that I'm aware of—which is the very thing that can make it seem like the opposite. The task facing this community is to move further into coexistence. Becoming conscious of this dynamic is probably a key, which is why I say it's time to reflect a more accurate picture of the HN community back to itself.
[1] Is there another internet community of HN's size (millions of users, 10-20k posts a day), where divisive topics routinely appear, that has managed to stay one whole community instead of ripping itself apart? If so, I'd love to know about it.
Here is what I don't understand. (I seriously don't, and would honestly appreciate some explanations or ideas. And keep in mind I am 100% on board with following any and all recommendations of the CDC.) When we were all sent to sit at home back in March, the word was "flatten the curve." The idea was that containment was no longer an option, as declared by the CDC/WHO. Epidemiologists were predicting that somewhere between 40% and 70% of the world's population were going to catch COVID19, at one point or another.
So the paramount need was to ensure that the health systems did not get overrun. We socially distance and stay at home until we're sure the health systems are in the clear, and then we return to some mixed approximation to 'normal,' with the understanding we'd probably have to return a few weeks at a time at some points in the future as case counts climbed again.
But at some point, almost every one I know has switched to believing that the real goal is eradication, that we need to be extremely socially distanced until case counts are near zero, or until a vaccine arrives. People now seem terrified at the thought of catching it, despite having no risk factors. What I don't understand is why this switch occurred. I think we have been seeing a lot of sensationalized headlines and statements from scientists taken out of context somewhat, but I don't think that in and of itself is enough to explain the switch.
Since about early March, we have known or strongly believed that the virus was mostly harmless to most people, with the vast majority of serious cases occurring in people with high risk factors. So I have trouble understanding the extreme pushback against moderate reopening I am seeing out of a lot of people I know. People say "but we need to keep case counts low." Yes, I agree, but I don't know how in the world we get them to zero. Containment was broken. If we're going to be angry, let's be angry about the fact that nothing was done to attempt full containment until it was far too late, not the fact that places are ready to fulfill the statements they made in March.
We can't even count on a vaccine ever coming, because there are issues with some coronavirus vaccines. For instance, the vaccine for SARS stopped being recommended after research showed that, while it prevented future infections from SARS, it made infections from other coronaviruses more damaging. The last thing we want to do is rush a vaccine through human trials that ends up causing more harm than good, but that's what's going to happen if we believe we're all trapped in our houses until one comes.
Anyways, I'm just not sure where the switch happened. Why are so many people I know now frantically talking about how they need to stay home through September or later, after being told just a mere 2 months ago that the goal was curve-flattening? In most areas in the US, it seems the curve has been flattened. Where I live (a mid-sized city), the health system was never remotely close to being stressed. Yes, case counts will increase if you reopen a bit, but remember, 40%-70% of the pop is going to catch it at some point. Those people are going to catch it whether they catch it now, or six months from now.
If you can't tell from my comment, I think (as is usually the case) the right approach sits directly in between the "open now" and "keep everything shut down" camps. The virus is an extremely serious event, a terrible tragedy, and one that we need to ensure we keep from overwhelming hospitals. But where hospitals are not in any danger of being overwhelmed, I just don't know what good having everyone indefinitely locked at home is doing. Taking what seem to be unreasonably conservative stances now does not fix the bullshit actions by our governments 3 months ago; that ship has sailed.
That's the whole point. What some people consider to be "good engineering" is a different set of standards, a different set of qualifiers.
Let's go back to 2012. I was yet again doing web stuff.
We had this hodgepodge of Jasmine, JUnit, ESLint and Selenium, and couldn't commit unless it all passed.
But the tests broke more than the code itself, because the test harness was far more complicated than the thing being tested. So more time was spent on fixing and babysitting the tests than writing the damn software.
At last, we finally released, and it totally, completely bombed. Why?
Because those test suites don't care if something "feels" clunky or "looks" wrong. The machine responded to the interface in machine time; it didn't actually test human time, which was the only thing that mattered. We should have relied on human dogfooding, like the business books say to do. I got arrogantly laughed at for suggesting it, multiple times; that simply wasn't "engineering" to this team.
Now of course tests are valuable, sometimes. But "sometimes", that's the important thing. Understanding when to make that call is actually important. When, where, what, why, and how - not just important for journalists.
But instead, like some 18th century royal court disconnected from reality, we did ceremony. So we wrote tests, most of them bullshit. One of the tests was essentially: "Does this image on the page load from s3?"
At least that one usually passed.
Except when AWS was down or our internet went out: "I guess we can't work today, the does_image_load_from_s3 test is preventing the commit." They were a waste of time and got in the way of actual work. But we HAD to have them, we MUST, right? Nonsense.
I'm convinced the tests were there because "doing it right" was about virtue signaling. So we built a salary-defending Potemkin village composed of pure thought-stuff.
I imagine it all like a Catholic Mass: men in robes walk around, ring bells, and use special boxes to wash their hands with special cloths; it's all very important if you go to church, but that's the point, it's praxis and faith. We were coding from Plato's cave, creating intricate shadows of reality representing actual work.
Symbols passing as tools: like Dumbo fetishizing the feather and being oh so worried when it falls, everything passed the most sophisticated testing I had ever seen yet the program still crashed in the user's hands almost every time. All that work was mere ceremony.
Understanding how modern computing speeds and VC capital have allowed people to be wrapped up in their own bullshit, call it programming, and get away with it is a major insight into why technology sucks today.
It's not just you, everyone agrees. It's lame now.
Hi there, friend. I'm Asian American as well, and I remember being in your shoes when I was your age a little over a decade ago. I too also loved programming language theory and very much adored my time studying it in college. What I can do is give you some good news and some bad news.
The bad news is that what you're going through now is real. It's unfair, and it's going to hurt. The truth is that you are seeing the effect of a system that means to optimize superficial representation and not the root cause of the problem of income inequality. People are going to game the system. Folks will get in without merit, and folks without merit that should have it will not get in. Worse yet, when you get out of college, these folks are going to have an advantage over you in the early phases of their career. They'll get undue (even insulting, if you think about it) attention for their racial background, and generally have an easy time getting their foot in the door for top tier roles in investing, startup founding, and corporate strategy. The system will, for a while, be capable of giving them affirmative action. But, the unfair advantage ends there.
Beyond just your own experience, think about what that implies. It means that educational institutions that reject meritocracy are going to slowly crumble as they no longer compete to be the most intellectually rigorous institutions. I didn't go to an Ivy league university, despite the, ahem, very forceful "advisory" of my parents. I instead went to a tiny liberal arts college where I double majored in CS and Music. My college had a mandatory humanities core, so I learned to read and write and be critical and think. I learned not just to research history, but to make sense of it. I learned how to make sense of culture, past and present. I learned how to make sense of computation, data structures and algorithms, and got a grounding in the tools I'd use to make elegant solutions to problems.
When I first started my career, I felt hampered by my lack of a name brand education and my Asian American ethnic identity. I felt passed over by investors when I wanted to start a company (although in retrospect, I think that's in part because I, like almost any other startup founder out of college, was not fit to start a company), and I felt passed over by hot startups and big companies for fast tracker career roles. But, something changed about four or five years through my career.
The problems started getting bigger and less clearly defined -- it made sense, as I was getting more senior and the scope of my work was growing. My hunger and desire to push and prove myself kept growing as well. I continued to look in the mirror and ruthlessly try to improve my worst flaws so that I could be more effective and not stagnate. While it didn't happen immediately, one day I realized on the job that these Ivy league educated folks who I used to feel like were miles ahead of me all of a sudden weren't very far ahead of me at all. In fact, it was more often the case that when we were working together, I would be the one taking the lead. I was the one leading the great charges into the unknown. I was the one writing the script, and figuring out how to get the problems solved. And this was just during the work part of it -- things got even more lopsided during the spirited lunchroom barroom debates about life, the universe and politics -- for some reason, the Ivy league educated colleagues (at least those who seemed to derive a lot of their identity from their pedigree) seemed to have a certain rigidity in thought, a certain degree of haughtiness, and a certain inability to adapt and grow. They often couldn't keep up in debates and conversations compared to folks who went to less pedigreed schools but clearly took their education more seriously.
They couldn't make judgment calls and take risks. They couldn't solve problems both quickly and deeply. My work rivalries were almost never with them. The only other folks I had to compete with for top tier performance and promotion were almost always folks like me. Folks who were sharp, who treated their career like a portfolio, who were ambitious, who wanted every project they worked on to be bigger and better than the last, and who would be dissatisfied if that wasn't the case. This didn't preclude the Ivy league educated colleagues, but I realized that just as in the general population, the percentage of Ivy league educated colleagues with that level of ambition was low. I eventually made it to my destination in the startup world (head of engineering at a funded startup with a solvent business model and a blank check) at the same speed as them, if not faster. And now, I'm at a top tier big company, and I realize that a lot of them, despite their shiny pedigree, wouldn't make it here either.
I guess what I'm saying to you is that I agree with your premises but be careful about the conclusions you draw. You are right to note that affirmative action is ethically wrong. But take that observation further and observe that it is also systemically flawed. Whether it is ethically right or wrong, it simply does not work. The real-world careers that you end up in have challenges so difficult that students of institutions which prize appearances over rigor (a disturbing number of them, especially prestigious institutions) are ill prepared and find difficulty succeeding. It's best to understand that the prestige of institutions that used to be synonymous with their educational rigor is no longer coupled to that, and once you have true rigor competing against prestige, rigor eats prestige for lunch every day of the week. So you didn't get into an institution, and you figure it's because of affirmative action? It's rough, but did you get into an institution that will be good enough? Will you learn the skills you need to learn and then learn the rest on the job? Yes. To be honest, the best educational institution in the world cannot remotely compete with the on-the-job training you'll get from a good mentor at a top tier startup or tech company. So, focus on your habits, your skills, and your own unique identity. You like PL theory and FP, right? That's great! Try to figure out where it's used in the industry. Send emails to researchers. Make comments on social media. Start blogging. Participate in the public discourse. Once you build up momentum there, you'll be your own brand and it won't matter where you go to university (I say this with the understanding that you did indeed get into a good engineering school anyway, as I saw further down the thread -- it just didn't happen to be your absolute first choice).
The truth is these days, where you went to college is such a lossy signal by mid career that it may as well not even matter. There are plenty of ambitious go-getters from state schools who far outperform folks who went to Ivy League universities by at least mid career, enough that you should just focus on learning how to learn in college (and for that, I highly recommend expanding your horizons a little into philosophy, history, art and science), and learning how to get ahead after college. Don't be intimidated by prestige games. Maybe things will change one day, but for now, there is still plenty of space to make an impressive, fulfilling career by focusing on finding and solving good, hard problems. If you're at all interested in having an expanded conversation on this, let me know. It might be many years later, but I remember exactly how it felt being in your shoes, and there are so many things I wish someone had told me that I just had to figure out myself in my own career.
It's a MOF (metal-organic framework). Basically metal ions or metal clusters (nodes) linked together by organic molecules (linkers or struts). Some of them are even stable in aqueous solutions (like UiO-66). Typically, the linker-metal bond is via a carboxylic acid, and the metal node is something like... Zr, Ce, Hf, Ti, Fe, Cu, Co, Mn, Al, etc. They've been the gold standard for gas sorption and hydrogen-car promises for at least 17 years.
The actual term MOF was apparently first used in '95[0].
While there are a lot of things that can affect the synthesis (metal concentration, metal precursor, metal:linker ratio, solvent choice, presence or absence of water, modulators, synthesis temperature), the synthesis of MOFs is usually about tuning what goes into the pot. Then it's shake-and-bake and MOF comes out a day later (the solvothermal method). So, it's an easy synthesis if you know what to load into your reaction vessel. While continuous synthesis is harder, I think it's a lot more immediately scalable than porous aromatic frameworks (PAFs).
This work is combining some of the advantages of MOFs (high specific surface area, regular structure, easy synthesis) with some of the advantages of PAFs (even higher specific surface area). You can see the linker they use on PDF page 5 of the supplementary online material[1]. The hexadentate structure is large and is reminiscent of PAFs like PAF-1[2], which are known for having very high specific surface area. This is because the aryl group has a high specific surface area. By making the linker very large and bulky, they're reducing the contribution of the metal nodes to the specific surface area by having a greater volume fraction of (lightweight) aromatic linker. However, while PAFs usually (always? I'm not a PAF person) have an sp3 carbon center (and thus a tetrahedral symmetry), this linker is kind of shaped like a paddle wheel with three paddles (or a trigonal prism, if you access the Science article and see Fig. 1). Thus, while PAFs are typically in a diamond-like net (dia[3]), this MOF is in an acs net[4].
Fun fact: Most MOFs are named after the research institution that found them first. This one is named NU-1500 for Northwestern University, where Farha is. The UiO series are named after Universitetet i Oslo, the HKUST series is named for the Hong Kong University of Science and Technology, and the MIL series is named for the Materials Institute Lavoisier.
What the Fed is doing will work very well, at some modest real cost later on (likely to the dollar, represented in the cost of things we import and commodities), if we are able to somewhat restart the economy in the coming months (and we will). The worst of the NY region's situation (which is overwhelmingly the primary problem in the US) will end in the coming weeks (it's ending now, represented in the plunge in hospital and ICU admissions; the deaths will lag though). The Spring and Summer heat will dramatically reduce the virus transmission, combined with practical ongoing measures like heightened rapid testing, distancing and quarantining (along with occasional lockdowns that will spring up due to burst outbreaks; we will likely get far more aggressive with tracking people regarding outbreaks). There's a decent chance we'll combat the virus short term with a serum therapy (might be able to considerably reduce the per case mortality rate over the coming year), and then a vaccine is definite later on.
The tangible cost of what the Fed is doing is that they will effectively destroy low single-digit trillions of dollars in wealth held in US dollars (picture household wealth at $100 trillion for this purpose, and then picture the Fed lighting $1-3 trillion of that on fire as a means to prop up the economy; they're debasing our national wealth in this process, drawing on it via their control of the dollar, to point it as a firehose at the fire; not exact figures, merely a conceptual representation). That damage is likely to be anywhere from one to a few trillion dollars in real losses that they'll see from their programs (only a portion of what they do will result in losses of real value, as with the actions taken by the Fed during the great recession; % losses will be higher in this case, as they're doing some wider, riskier things). They're trading that hit as a cost to prop the whole thing up until the economy can find its legs again.
It's absolutely the right approach. It's the only serious option, other than doing nothing (which isn't reasonable, but it's another option). It will not be without a cost. It will prevent a far, far worse catastrophic outcome. If unemployment peaks at ~14-18% (it's almost guaranteed to hit at least the 15% area somewhere, and soon), without the Fed's actions you could easily double that figure.
The US is incredibly fortunate in this case. We're cashing in a rainy day benefit right now: the many choices of our ancestors that made the USD the global reserve currency post WW2. We've been irresponsible with our fiscal condition the past 20 years, so the US dollar is our primary fiscal back-stop in such a short-notice desperate need (this is far beyond the great recession, in terms of extreme sudden need of dollars); using it is a form of a tax against the assets held in dollars and the productive output of the US economy.
There is a very plausible scenario where the US dollar sees little negative impact despite the trillions of dollars in magic printing the Fed is going to do. And that is: the other major currencies it is competing with globally are all supported by economies being similarly smashed right now (Eurozone, China, Japan; and the Chinese Yuan has very little global footprint, so it's not very relevant to that context presently -- it's really mostly the Euro). The global demand for US dollars right now is extreme, which pushed the dollar to a very high level recently. That dollar demand, for liquidity purposes, will relax later on as some normalcy returns with eg a vaccine (within ~12-18 months sometime probably), and then the dollar will see some fallout from what the Fed is doing now; that's when the long-term cost will begin to be represented in such things as consumer prices, commodity prices (priced in dollars), and so on.
People with assets will benefit tremendously from what the Fed is doing. The stock market would be anywhere from 1/3 to 1/2 lower than it is right now, if the Fed hadn't stepped in in an extreme way (and I don't like where the market is at right now at all, it's not properly pricing in the grinding damage we have to deal with over the coming year, it's temporarily buoyant on the Fed's sugar actions). This is the world's largest bailout for asset holders, and it also happens to be very necessary to preserve the economy until it can return to functioning properly.
The only approach that might have been better would have been a national hold placed on all major firings, and on all mortgages and rents, for N months (3 months initially). The Fed would then step in to pay that toll directly (ie prevent the fire, rather than try to put it out afterward), along with the Treasury doing various programs. That could have possibly prevented more damage than what we're doing now. The US system, legally speaking, doesn't allow for that kind of command-economy action very easily though. So the Fed's moves, which were 'guns at the ready' and made possible by the great recession, were the best choice we had (if this were 2007 and it were happening then, the Fed wouldn't have been able to move as quickly; there was a lot of stumbling around in the dark in the initial days of the great recession, trying to figure out what the Fed was allowed to do and what made sense).
Long story short, the Fed is eating some of our national wealth to do what it's doing, that's the tax we're paying (and some of that is being paid by the rest of the world, as the dollar is the reserve currency and widely held). Instead of everyone selling off 1-3% of their wealth and handing that cash to a central authority to take bold actions, the Fed is doing a conceptually similar thing via 'stealth taxation' (aka inflation (which won't register near-term due to very slack demand), aka dollar debasement, aka printing).
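To make the "stealth taxation" arithmetic concrete, here is a quick sketch using the comment's own illustrative figures (the $100 trillion household-wealth number and the $1-3 trillion in Fed losses are conceptual figures from the text, not real data):

```python
# Illustrative arithmetic only: both figures below are the conceptual
# numbers from the comment, not actual economic data.
household_wealth = 100e12            # ~$100 trillion in US household wealth
fed_losses = (1e12, 3e12)            # $1T to $3T in real losses from Fed programs

# The implied one-time "stealth tax" on dollar-denominated wealth:
low, high = (loss / household_wealth for loss in fed_losses)
print(f"implied stealth tax: {low:.0%} to {high:.0%}")  # prints "implied stealth tax: 1% to 3%"
```

This is the same 1-3% figure the comment reaches by imagining "everyone selling off 1-3% of their wealth" -- debasement and an explicit wealth levy are arithmetically equivalent at this level of abstraction.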
I've been on HN since before 2008. I've seen it change a lot. Before then, I was a regular on Slashdot, on IRC, on various phpBB boards, and, before that, dial-up BBSs. I've got a fairly healthy offline life too and have been a part of climbing communities, business communities, and outdoor communities, and have had organizational roles in some of those. So my opinions aren't worth more than anyone else's, but I've spent a lot of time developing them nonetheless.
Whether a community, online or not, is "healthy" is largely a matter of perspective. You'll see a lot of people say some community isn't healthy, and then a lot of people say the same community is healthy for the same reasons that other people find it unhealthy. The only metric that makes sense to me is whether the community helps me to be a happier or better person. A community might have a lot of faults, but if the overall impact of the community on me is a positive one, then it's healthy -- for me.
So from that standpoint, HN has been good to me. I learn a lot from it, it helps me stay sharp in my part of the industry, it challenges me to learn new things all the time. Some of the stuff I've learned here, I've gone on to teach others (as faithfully as I could) or just shown other people how to find it here on their own.
There are a lot of smart people here and a lot of interesting content on all kinds of subjects. Sometimes a subject matter expert shows up to point out everything that's wrong with some content that I thought I was learning something from; from their perspective, that content made HN a little bit worse, but from my perspective, that content led to their participation and together that made HN a little bit better.
Sure, there are some "personalities" on here that some people disagree with from time to time, or maybe that a lot of people disagree with often. Well, those people are in every community and I don't think HN would be more healthy without them. They could, maybe, benefit from a little more humility, but so could I.
I'm a bit mercurial and I'm passionate about some topics, especially those involving the health and welfare of the people around me. And, honestly, I'm just a bit of a jerk sometimes, a fault that I developed young and something I have to work on every day. That's made me an "unhealthy" part of HN from time to time. It's also my humanity, though, and I don't think that the things I've written in a dispassionate voice have necessarily been better, or more impactful, or even received better, than the things I write passionately. But, I don't want to become a part of the problem, so mostly I try to be quiet and let the smarter people lead the discussion.
One of the healthiest parts of HN is Dan Gackle (~dang). Okay, so some of this might be interpreted as boot-licking, so you'll have to trust me when I say that nobody's ever accused me of loving authority. I have never, in any of my communities, online or offline, seen a more even-handed, fair-minded, or restrained person in a moderator role. There have been some articles written about his work here (https://thenewstack.io/the-beleaguered-moderators-who-keep-h..., https://www.newyorker.com/news/letter-from-silicon-valley/th..., https://qz.com/858124/why-y-combinators-hacker-news-silicon-...). I keep hanging around here in part because he and the other moderators here do such a great job overall. So, anybody ever wants to get rid of me, there ya go.
Their positions necessarily mean that they're going to piss somebody off now and again. They have the unenviable task of often asking people not to talk -- well, argue -- about the subjects they most want to talk or argue about. I'm amazed, though, at how many people instead say something like, "You're right, I was out of line, sorry." I wish this were a skill they could teach; I'd sign up for that class without a second thought.
I do wish we had a little more balance here. We need more outspoken women here for instance. I appreciated ~jl's presence here and a few others early on and was hopeful there would be more. We need to hear more from people who are experiencing the industry, or life, in a different way from the rest of us.
I wish also that there were more opportunities for people here to be, well, a little more "human", I guess. HN's nature leads it to sort of discourage humanity in the discussions. You have to make an effort to get to know anyone here, and mostly that happens outside of HN, in email or elsewhere. So to that extent, HN often feels less like a real community to me. I knew much more about the people in my old IRC communities.
The only other weakness I think HN has is the really short-lived nature of its discussions. In the past, online communities all had software that would allow discussions to continue for a little while, so if you read something interesting and wanted to say something interesting about it, but needed time to compose it or maybe do a little research before saying anything, that was fine. You could take a little bit of time to write something better, and people would still read it. On HN, once something isn't on the front page anymore, nobody reads it. If something is on the front page for a long time, then it usually gets so many comments that there's no point adding to them, because nobody will navigate through hundreds of other comments to find the new thing you wrote, even if it's good. And if something's on the front page for a short time, you have to rush to add to the discussion before it disappears forever. It's a bit like the whole forum is always doing a bit of methamphetamine, and that's not great.
I never know what to put in the last line of comments like these.
One of the interesting lessons of software engineering is that just because something is impossible in general doesn't mean you can't still try, and get something that sorta sometimes works some of the time if all the stars align, and then get people selling that as the next hot thing that is the most important thing in programming ever.
Despite what you may cynically think, I actually don't have any current tech in mind as I type that. It may just be a lack of perspective as I'm as embedded in the present as anyone else, but I don't feel like there's a lot of extant techs right now in the general programming space promising the impossible. Closest I can think of is the machine learning space, but my perception is that it isn't so much that machine learning is promising the impossible as that there are a lot of people who expect it to do the impossible, which isn't quite what I mean. I mean more like people selling database technology that, in order to exist, must completely break the CAP theorem (and I don't just mean shading around the edges and playing math games, but breaking it), or RPCs that fundamentally only work if all 8 fallacies of distributed computing [1] were in fact true, or techs that don't work unless time can be absolutely and completely synchronized between all nodes, and so on. I'm sure there's little bits of those here and there, but there was a time when this sort of impossible RPC was thought of as the future of programming. The software engineering space has become too diverse for things that are essentially fads to take over the whole of it like things could in the 1980s and 1990s.
(See also OO, which also has a similar story where if you learn OO in 2022, you're learning a genteel and tamed OO that has been forced to tone down its promises and adapt to the real world. The OO dogma of the 1980s and 1990s was bad and essentially impossible, and only a faint trace of it remains. Unfortunately, that "faint trace" is still in a few college programs, so it has an outsized effect, but for the most part the real world seems to break new grads of it fairly quickly nowadays.)
Finally, this should be contrasted with the modern answer that is continuing to grow, which is message passing systems. A message passing system loosens the restrictions and offers fewer guarantees, and as such, can do more. You can always layer an RPC system for your convenience on top of a message passing system, but you can't use an RPC system to implement a message passing system with the correct semantics, because the RPC system implements too much. I personally view "RPC", in the looser modern form, as a convenient particular design pattern for message passing, but not as the fundamental abstraction lens you should view your system through. Even the modern genteel form of RPC imposes too many things, because sometimes you need a stream, sometimes you need an RPC, sometimes you just need a best-effort flinging of data, etc. When you have a place you need RPC's guarantees, by all means use an established library for it if you can, but when you need something it can't do, drop it immediately and use the lower-level message bus you should have access to.
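As a sketch of that layering, here's a toy in-process message bus with an "RPC" convenience built on top of it (all names and the API here are invented for illustration, not any real library). The key moves are a correlation ID tying each request to its reply, and an explicit timeout that turns "the reply never came" into an error instead of a hang -- exactly the failure the old-school RPC abstraction had no place for:

```python
import queue
import threading
import uuid

class Bus:
    """A minimal in-process "message bus": named topics, fire-and-forget sends."""
    def __init__(self):
        self.topics = {}

    def topic(self, name):
        return self.topics.setdefault(name, queue.Queue())

    def send(self, name, msg):
        self.topic(name).put(msg)  # best-effort fling: no reply expected

    def recv(self, name, timeout=None):
        return self.topic(name).get(timeout=timeout)  # raises queue.Empty on timeout

def rpc_call(bus, service, payload, timeout=1.0):
    """An "RPC" layered on the bus: send a request, block for the reply.

    A fresh correlation topic ties the reply to this request, and the
    timeout surfaces a lost reply as an explicit error.
    """
    reply_to = f"reply-{uuid.uuid4()}"
    bus.send(service, {"reply_to": reply_to, "payload": payload})
    try:
        return bus.recv(reply_to, timeout=timeout)
    except queue.Empty:
        raise TimeoutError(f"no reply from {service!r} within {timeout}s")

def adder_server(bus):
    """A toy service that answers requests on the "adder" topic."""
    while True:
        req = bus.recv("adder")
        bus.send(req["reply_to"], sum(req["payload"]))

bus = Bus()
threading.Thread(target=adder_server, args=(bus,), daemon=True).start()
print(rpc_call(bus, "adder", [1, 2, 3]))  # prints 6
```

Note the asymmetry the comment describes: `rpc_call` is a few lines on top of `send`/`recv`, but you couldn't recover the bare best-effort `send` from an API that only exposed `rpc_call` -- the RPC layer has already committed you to request/reply semantics.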
[1]: https://www.simpleorientedarchitecture.com/8-fallacies-of-di...