
America's post-war strength was built on unusually strong education. After the war, America had far more schooling overall than other countries. It was one of the many factors that made America a powerhouse in the 50s and 60s.

Economic historians Claudia Goldin and Lawrence Katz, who are essentially the gold standard reference here, show that the US became the richest nation precisely because it led the world in mass education (first universal high school, then mass higher ed), not in spite of it.

>> America’s preference for common wisdom over book learning is a strength, not a weakness. Formal education filters for risk averse, process and credential-oriented people.

High-education countries don't look like basket cases. Among 25-64 year olds, the countries with the highest tertiary attainment shares are: Canada (64%), Japan (56%), South Korea (53%), USA (50%), and the Nordic countries hovering around similar rates. These are some of the richest, most technologically advanced societies in the world. If "credential worship" made a society brittle and unproductive, you'd expect these places to be obvious failures.

India's problem is not too much college. It's that its gross tertiary enrollment ratio is only 33%, below the world average. The development-econ diagnosis of India is actually the reverse of your claim: too many people with too little quality education, especially basic literacy, numeracy and foundational skills, plus a small highly-credentialed elite at the top.

>> The GI bill isn’t a counterpoint. GI’s still had to gain admissions at a time when colleges were far more selective than today: https://www.realclearscience.com/blog/2024/01/23/why_college... (undergraduate IQs fell from 119 in 1939 to just 102 in 2022). So you created a filter that was extremely rigorous. It supported college education for people who were both significantly smarter than average, and also had served in the military—the Marcus Aurelius type.

The GI Bill massively expanded college. Half of all college students in 1947 were vets. It is widely credited with building the post-war middle class. The IQ meta-analysis you cite explicitly says the drop in average student IQ is a mechanical result of more people going to college, not evidence that universities got worse; the researchers make that point themselves.


I did most of an electronic engineering course and discovered that I like software more than hardware. After programming portable barcode readers and interactive voice response systems, I wrote a Windows device driver at one company then applied to write Linux device drivers at another. I was strictly honest during the job interview, saying I wanted to write device drivers but I had done only one for Windows and none for Linux. The interviewer said "Writing Linux device drivers is exactly the same as writing Windows device drivers" and hired me. I found that it wasn't and faced the kind of learning curve that requires mountaineering equipment.

I succeeded and even trained someone else how to do it. When we started, he didn't know C all that well so I split the drivers into hard and easy parts and gave him the easy ones. We wrote about six drivers together. I made his parts harder and harder until he was able to write a driver on his own.

Perhaps I haven't been reading the right software architecture books but it seems to me that very little mention is made of changing the design to suit the ability of the individual programmers, the way band leaders like Duke Ellington and Glenn Miller did with their arrangements.


There’s so much genetics and innate aptitude behind elite level performance that if you don’t see yourself rapidly advancing toward top level performance at a skill, it’s safe to say you will never reach the competitive standard in it. I’d argue many people who are told they have a “self-limited” mindset are just trying not to feel terrible after being brutalized by how far short they fall of what competitive standards actually require.

I want to also be clear that I’m not arguing that it’s not worth trying to be good at things you enjoy. By all means, that is healthy and a good use of time. I am making the distinction specifically around elite level play and the competitive standards.

People don’t like this argument, but so far it holds, and there’s a plethora of evidence for it depending on the activity. In most cases it honestly boils down to a couple of measurable things.

VO2 max is the maximum rate at which your body can consume oxygen. It varies in humans between 20 and 90, with the average around 40 for women and 45 for men. It is well known that with rigorous, persistent training men can raise their VO2 max by about 20%. There’s even more potential with blood doping, altitude training, and illicit substances, but it hardly goes up much more after that. If your VO2 max when first tested is around 45, do you think you’ll be competitive at any intense movement-based activity? A 20% improvement only takes you from 45 to about 54, and the average VO2 max of male Olympic track and field athletes is 75.

For weight lifting there is bone density, limb length, shoulder width, and overall size. All of these are barely alterable aside from overall size. Someone who is 5’8” and 140 lbs, no matter how much training they ever do in their life, might only ever bench 225 or 250. Meanwhile a 6’5”, 280 lb man might bench 225 in his first month of working out. Granted, there are weight classes here, so you would still need to put in the effort to find out how you compare.

For things like musicianship, there is large variability in the tolerance of tendons and finger joints to rigorous practice. The amount required for elite level play is on the order of several hours per day for potentially decades. Shin Lim, the famous magician, was on track to become a concert pianist first, but had to withdraw after developing carpal tunnel syndrome. Do you really think poor technique caused this? Extremely doubtful. On top of this, the ability to play various acrobatic repertoire varies quite a lot. Many people, even with decades of rigorous practice, will never be able to play the works of Alkan or the Liszt etudes.

Many elite level players of shooters and other games have reaction times significantly lower than the general population.

It is thought, though not proven due to the difficulty of studying it, that there is a cognitive equivalent of VO2 max, and that it is much higher in top level players of fast-paced competitive games than in the general population. It would certainly make sense, given that the metabolic activity of cells works in almost the same way everywhere in the body.

Personal anecdote: My brother was reasonably athletic but not top level, and exceptionally gifted at math (never really studying, all the way to acquiring a master’s from university), so contract bridge became his game of choice. He makes a living off it now and is a national champion. He told me the game, and improving at it, came completely naturally to him, even far past what normal people would think is good, and he said that’s just how it is. “I fell into this spot from inertia.” Several pro players of other competitive games have said something similar. It’s not “IQ” either. My brother has a friend who is unreasonably gifted intellectually (tested IQ something like 158), a multimillionaire, and someone who grinds like NO other at whatever he does. He is one of the all-time best at a popular game show. He put this effort into bridge too for quite a while. My brother said he’s decently good at it but nowhere near the competitive standard.

The point of all this rambling is that I think natural variability in talent accounts for far more than people think, and that attitudes toward mastery mostly adjust once people see how they measure up after a reasonable effort has been put in. It is best to find the niche in your life where you seemingly have the most aptitude and go above and beyond developing in that area. These “attitude” ideals are great and all, but they don’t make the difference between elite and average. They make the difference between average and good. And by extension, I think trying every psychological trick you can possibly come up with, like in the article, to make up for being incompetent at a particular skill is probably less useful than simply finding something you seem naturally better at.


> I have worked on multiple 500K+ line Python projects.

There were already several 1M+ Perl LoC code bases by the end of the 1990s, as Perl use exploded at places like Amazon.

One key driver of Raku's design was making it easy to write large, clean, readable code bases, and easy to enforce coding policies if and when a dev team leader chose to.

(Of course, as with any PL, you can write messy unreadable code -- and you can even write articles and record videos that make it seem like a given PL is only for Gremlins. But it's so easy to write clean readable code, why wouldn't you do that instead?)

> type hint ... is never enough if it isn't guaranteed by a compiler or at runtime

Indeed. Type hints are a band aid.

Raku doesn't do hints. Raku does static first gradual typing.[1]

The "static first" means static typing is the foundation.

The "gradual typing" means you don't have to explicitly specify types, but if you do types, they're always enforced.

Enforcement is at compile time if the compiler's static analysis is good enough, and otherwise at run time at the latest.

In the Gremlins article, only one example uses a static type (`Int`). But user code can and routinely does include types, and they are enforced. For example:

    sub double(Int $x) { $x * 2 }
    double '42'
results in a compile time error:

    SORRY! Error while compiling ...

    Calling double(Str) will never work with declared signature (Int $x)
        (Int $x)
> total mess due to weak typing

I know what you mean, but to be usefully pedantic, the CS meaning of "weak typing" can be fully consistent with excellent typing discipline. Let me demonstrate with Raku:

    sub date-range(Date $from, Date $to) { $from..$to }
The subroutine (function) declaration is strongly typed. So this code...

    say date-range '2000-01-01', '2023-08-21';
...fails at compile time.

But it passes and works at run time if the function declaration is changed to:

    sub date-range(Date() $from, Date() $to) { $from..$to }
(I've changed the type constraints from `Date` to `Date()`.)

The changed declaration works because I've changed the function to be weakly typed, and the `Date` type happens to declare an ISO 8601 date string format coercion.

But what if you wanted to insist that the argument is of a particular type, not one that just so happens to have a `Date` coercion? You can tighten things up...

    sub date-range(Date(Str) $from, Date(Str) $to) { $from..$to }
...and now the argument has to already be a `Date` (or sub-class thereof), or, if not, a `Str` (Raku's standard boxed string type), or sub-class thereof, that successfully coerces according to `Date`'s ISO 8601 coercion for `Str`s.

This healthcare/Cheesecake Factory mashup is one of my all-time favorite articles: https://www.newyorker.com/magazine/2012/08/13/big-med

To establish my non-coffee-snob bona fides: I exclusively drink decaf (I've avoided caffeine for almost 10 years now).

The reactions this post is getting are kind of odd. A typical home coffee brewing setup is going to offer you just a few variables -- a set-it-and-forget-it grind size, water temperature, and the dose of grounds you use for whatever amount of coffee you brew.

It is not especially weird or "gourmet" to be interested in what the right values are for each of those variables. You figure out the right grind size and dial it into your grinder; you figure out the right temperature and hit that button on your kettle; you figure out the right dose and either weigh or scoop-measure that much grounds. Mostly what I'm describing is the simple act of brewing a cup of coffee.


Chuck Moore is pretty much the Buckminster Fuller of computing. Fuller's Dymaxion brand was associated with his idea of getting maximum utility out of minimum input (material, energy, etc.) and Forth's "energy" is much the same. Minimum runtime, minimum code, minimum CPU complexity, but combinatorially explosive power out of these things. You need to realign your brain to think in these terms -- how do I simplify my code further and make it do more? -- but people who have done this can do amazing things.

(Alan Kay is the Willy Wonka of computing; one can restate Joe Armstrong's banana-gorilla-jungle problem more whimsically: with Smalltalk you go into the shop to buy a candy bar but end up getting the whole chocolate factory, with everything you need to make every kind of confection staffed by Oompa-Loompas and ready to go.)


This is a how-to-write-a-FORTH tutorial that I wrote a few years ago. It's particularly nice that you get to see how various control structures are implemented, like IF, CASE and even comments!

Part 1: http://git.annexia.org/?p=jonesforth.git;a=blob;f=jonesforth...

Part 2: http://git.annexia.org/?p=jonesforth.git;a=blob;f=jonesforth...

(github mirror: https://github.com/AlexandreAbreu/jonesforth/blob/master/jon... https://github.com/AlexandreAbreu/jonesforth/blob/master/jon...)

Previous HN comments: https://news.ycombinator.com/item?id=10187248


Tangentially taking this opportunity to mention the far-superior "Lying and Cheating" version of Stratego, which (as far as I know) my father invented.

It makes the game so much more interesting, IMO. Played it a lot as a child.

Here are the basic rules, when a piece is attacked:

  * The attacker says what their piece is, without showing it (they can lie)
  * The defender says whether they believe that
  * The defender says what their piece is, without showing it (they can lie)
  * The attacker says whether they believe that
  * ONLY IF someone calls a bluff is that piece revealed. Otherwise, it is treated as the piece it was claimed to be, and kept hidden.
  * If someone calls a bluff, and they were right, then the other player loses a piece (reach over and remove any piece you like)
  ** If you pick their flag, then you win — game over.
  * Likewise, if someone calls a bluff but is wrong, then *they* lose a piece. 
  * After all of that is resolved, do combat as normal, with pieces having either their revealed or not-revealed claimed value, as appropriate.
Once you resolve all this, there is no "memory" - you can claim it is a different piece in the future.

Some minutiae:

  * You can move any piece as though it were a Scout (9), but when you do the move, the other player can call your bluff since you're essentially claiming it is a Scout at that moment. Resolve that bluff/call before completing the move.
  * You could even call a bluff on *any* move someone makes, if you believe that piece is a bomb or flag (and thus cannot move).
  * You can attack with a bomb! It's a two-step process: first you move (and they could call your bluff, if they know it is a bomb - see above). Then, when the attack happens, you say it *is* a bomb. Of course, your opponent may say their piece is a Miner, and if you haven't seen it, it's a dangerous proposition (since bombs are rare).
  ** You can also do a variant where bombs can't attack (by attacking, you are claiming it is *not* a bomb). I prefer the above version.
Overall, I find this version of the game is a lot less boring. Since you'll probably get several pieces zapped over the course of the game, it affects your flag placement. Plus, you can move flags and bombs, making it more dynamic. Also, the "remember where things were" aspect is even more poignant, since once a piece has been revealed, it loses all the power of being whatever-is-needed-right-now (assuming the other player has a good memory).

So, for instance, you can do something crazy like move your bomb as though it were a Scout, all the way across the board, onto an opponent's piece, but then claim it's a "5" instead for the attack. Then if it survives, just let it sit there, continuing to be a bomb in the future (causing havoc).


> Does it actually succeed though?

Not sure about J, but I feel APL certainly does, at least for me - the symbol set makes it very easy to sketch out a solution.

Part of that ease is the generality of the various operators. That is, reductions and cumulative prefix scans can be done with any function. For example ⌊\ as cumulative min - useful as Vanessa McHale shows <http://blog.vmchale.com/article/numba-why>

kspalaiologos also shows APL enabling mathematical understanding on her blog <https://palaiologos.rocks/>

An APL equivalent of the J expression is +\¯1 1[(?100⍴2)], although others exist, as there's no exact equivalent for J's { - see <https://aplwiki.com/wiki/From>
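
(For anyone who doesn't read APL: as I read it, that expression picks 100 random steps of ¯1 or 1 and takes a running sum, i.e. a random walk. A rough NumPy sketch of the same idea:)

    import numpy as np

    # Rough analogue of +\¯1 1[(?100⍴2)]: choose 100 random steps of -1 or +1,
    # then take the cumulative ("prefix") sum -- a simple random walk.
    steps = np.random.choice([-1, 1], size=100)
    walk = np.cumsum(steps)
    print(walk)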


Oracle Database 12.2.

It is close to 25 million lines of C code.

What an unimaginable horror! You can't change a single line of code in the product without breaking 1000s of existing tests. Generations of programmers have worked on that code under difficult deadlines and filled the code with all kinds of crap.

Very complex pieces of logic, memory management, context switching, etc. are all held together with thousands of flags. The whole code is ridden with mysterious macros that one cannot decipher without picking up a notebook and expanding relevant parts of the macros by hand. It can take a day to two days to really understand what a macro does.

Sometimes one needs to understand the values and the effects of 20 different flags to predict how the code would behave in different situations. Sometimes 100s too! I am not exaggerating.

The only reason why this product is still surviving and still works is due to literally millions of tests!

Here is how the life of an Oracle Database developer is:

- Start working on a new bug.

- Spend two weeks trying to understand the 20 different flags that interact in mysterious ways to cause this bug.

- Add one more flag to handle the new special scenario. Add a few more lines of code that check this flag, work around the problematic situation, and avoid the bug.

- Submit the changes to a test farm consisting of about 100 to 200 servers that would compile the code, build a new Oracle DB, and run the millions of tests in a distributed fashion.

- Go home. Come the next day and work on something else. The tests can take 20 hours to 30 hours to complete.

- Go home. Come the next day and check your farm test results. On a good day, there would be about 100 failing tests. On a bad day, there would be about 1000 failing tests. Pick some of these tests randomly and try to understand what went wrong with your assumptions. Maybe there are some 10 more flags to consider to truly understand the nature of the bug.

- Add a few more flags in an attempt to fix the issue. Submit the changes again for testing. Wait another 20 to 30 hours.

- Rinse and repeat for another two weeks until you get the mysterious incantation of the combination of flags right.

- Finally one fine day you would succeed with 0 tests failing.

- Add a hundred more tests for your new change to ensure that the next developer who has the misfortune of touching this new piece of code never ends up breaking your fix.

- Submit the work for one final round of testing. Then submit it for review. The review itself may take another 2 weeks to 2 months. So now move on to the next bug to work on.

- After 2 weeks to 2 months, when everything is complete, the code would be finally merged into the main branch.

The above is a non-exaggerated description of the life of a programmer in Oracle fixing a bug. Now imagine what horror it is going to be to develop a new feature. It takes 6 months to a year (sometimes two years!) to develop a single small feature (say something like adding a new mode of authentication like support for AD authentication).

The fact that this product even works is nothing short of a miracle!

I don't work for Oracle anymore. Will never work for Oracle again!


I do software supply chain security consulting for several high risk companies and largely agree with this post that we must stop expecting devs to have any responsibility for code they produce. The responsibility is on those that consume it.

This will sound pretty harsh, but if your company chooses to use open source code that does not have capable, paid, full time professionals reviewing it for security and quality, then your company is signing up for that responsibility. If you make no reasonable attempt at vetting your supply chain and harm comes to users as a result, then IMO you should be liable for negligence just like a restaurant serving food with poisonous ingredients. Broadly normalized negligence is still negligence.

This should not be controversial, but it is. Washing hands in hospitals was once controversial too but those advocating for it had irrefutable evidence on their side. The medical industry did not want to pay the labor cost of hygiene, and we are seeing the same in the software industry.

https://www.nationalgeographic.com/history/article/handwashi...

Ask yourself if it is cheaper to fully review, sign, compile, and maintain third party OSS code or to write something in-house focused on your needs on top of the standard library. Pick one. Both are real options. Some of my clients actually do (or pay others for) security review of every single NPM dependency they use in prod. If you can not afford to review 2000 dependencies then you can not afford 2000 dependencies. Find a leaner path.

Companies must stop expecting others to do their software review job for them. OSS devs already wrote the code for free because, ostensibly, it was fun. You are an ass if you ask them to do anything that is not fun, for free, to make your company safer or more money. Such actions make it not fun anymore, and make them stop entirely.

I do not know why companies have code review policies for code written by peers, yet when the code is 2 million lines of NPM dependencies essentially copy/pasted from randos on the internet, it is suddenly okay to ship it straight to prod and give said randos full control of the data or property of millions of people.

We need to start calling this out as the negligence that it is.


A little OT but related, and I want to bounce this off HN.

I think we’ve reached a turning point in history and society, aided by lockdowns, where young people spend more time interacting online than IRL. This has major consequences for society and tech.

Young people present themselves to the world primarily through their online persona. The physical world is secondary. To reiterate, what happens online, what is shown and seen online, is more important to them than the same in physical space.

You could probably write a book about the consequences of this (maybe someone has?) but two points are particularly salient: (1) if you’re not an onliner then it’s hard for you to understand how the future will look, and (2) your desires are just going to be fundamentally different from other people’s.

I know from talking to CEOs that those entering the workforce now are most comfortable interacting through Slack and WhatsApp. Physical interactions are sometimes difficult, because people have less experience of them - usually it takes them longer to bond, and more formal social situations are challenging because they’ve not had much practice and have yet to learn the rules governing behaviour in certain physical spaces.

It’s easy for oldies to mock Zuckerberg with his embarrassing meta presentation, and indeed I think his influence is waning (because he’s also out of touch) but for sure the future is going to be driven by people who are living online. Avatars may seem like a 90s throwback but that’s just because of how we conceive of them; the truth is _we already have avatars, they’re just messy and distributed and mostly text based_.

Personally I don’t like this conclusion. I think the physical world is superior in many ways, for example physical activity and exercise are fundamental to wellbeing, and I can’t see that being properly and healthily replicated in the virtual world for a very long time. I think there are more noble and worthwhile goals to be pursued in the physical world. But just because I don’t like it doesn’t mean it ain’t true.

I’ve half a mind to join ‘em, half a mind to go build a log cabin.


Netflix likely spotted the behavior through one of the many other heuristics that can identify this, such as:

- Lower packet TTL than gateway

- Lower packet MTU than gateway

- Much higher latency than gateway

- WAN IP listening on well-known VPN ports for PPTP/IPsec/etc

- Ongoing sessions that suddenly teleported to a Chinese WAN IP when the VPN connection dropped

- Incorrect DNS configurations or stale caches that point toward inappropriate CDN edge nodes

- System time zone (via JavaScript or native app) not matching source region

- Unusual system language preferences for the region

- One of the hundreds of leaky data points revealed by Android devices and other native apps


Nope; motorsport has always been drivers' skill coupled with engineering ingenuity. It's always about “what can I come up with that gives me an edge and still somehow stays within the rules?” I don't know anything about Nascar, but the history of Formula 1 is full of such little tricks as well. It's just easier to regulate “other sports” than it is to regulate sports that come coupled with a lot of technological involvement.

If something gives you an edge for half a season until the rules are adjusted, that might be enough to win a championship. It's a cat-and-mouse game, but it's also exciting, and important for the whole thrill of it.

Decades back, Gordon Murray designed a fan that quite literally sucked the car to the ground, which somehow was within regulations, because no one had even considered something like that https://www.youtube.com/watch?v=Hb6DAmm7sZg In rally driving, they would sometimes come up with fake reasons for a start to be delayed, so they wouldn't have to drive in the front car's dust all the time. Audi entering with their four-wheel-drive car back in the day was only possible because they pushed for a rule change and no one else really knew what was coming. Sometimes manufacturers straight up “cheated” (almost, sometimes for real) https://www.youtube.com/watch?v=6lo4dGTrzr8 ; it's a thin line, but also what makes it exciting.

I would say that it's almost the hacker's / engineering ethos. What can I do within the framework? Whether it's building a bridge (to make it more stable while still following this brash design), a road car (how can I create something fun, with torque, sound, emotion, downforce, power, but a nice shape, and still get a road-legal car within environmental regulations), computer games (consider https://www.youtube.com/watch?v=izxXGuVL21o ; computer games are full of hacks to get the most out of the hardware), even law (how can we pay almost no taxes, while not being busted for tax avoidance?); not every bit of ingenuity is necessarily good, but it will always be cat-and-mouse, that's the point of living.

This got meta quickly ... and became a more detailed answer than I anticipated. Sorry for that; hope I gave you a different perspective though.


Yes.

When I discovered programming in my teens it was the single greatest thing I could do. I spent all my time doing it, constantly learning new things, breaking into new areas of discovery and then finding still more interesting thing. I wrote code for entertainment, I wrote code to relax, and sometimes people even paid me to write code.

Fast forward 20 years and I felt like it hurt to write code. So many things were in your way between the writing and the running, useless hoops to jump through, arbitrary changes to things that used to work just fine and now worked in a different way, why? why? WHY? I was pretty burnt out about it.

That lasted for a few years (okay, probably closer to six or seven years) until I got a chance to help a teen learn to program who was super excited about it. I showed them the opaque APIs and they were thrilled, I showed them the crappy IDEs that prevented you from seeing what was really going on and they loved it, I showed them tool kits that created guard rails around what you could and could not code and that was just fine. They wrote line after line of code and marveled at each new thing.

It struck me that they were me and I was them and why were they so excited about this when I was so offended? After a lot of introspection I came to realize that the answer for me was that nothing had changed.

The reality I was missing was that computers haven't changed a whole lot since the IBM 360. The only thing that changed is that they got faster, smaller, and way cheaper. But their nature hasn't changed at all. They have registers, they have machine code, they have fast I/O devices and slow I/O devices, they have memory and displays and peripherals of various kinds. But at the end of the day, and this is especially obvious with various recreations and emulators, the computers of today are not really all that different in nature from the ones I learned programming on.

That in itself might not have been a killer, but the killer was that software was by and large unchanged as well. After 20 years of programming, every single programming task felt like a remake of something I had already done. And for what? To implement it in the language of the day? Because you couldn't get the source to one version so you rewrite it from scratch to get better licensing terms? Lots and lots of software was done in the sense that there really isn't a good reason to re-design it, but there are a zillion reasons why you might be asked to re-write it.

Imagine if you were a screenwriter or a songwriter and someone said, "Ok we need 'Gone with the Wind' but now its going to have a gay couple, take place during the Syrian Civil war. Don't change any part of the plot or the story or the relationships, just swap Arabs for Southerners, Syria for the South, and have the love interest die of MERS or something."

Wouldn't that be a crappy job? What can you do with that? Where is the creativity? Where is the opportunity to express a fundamental truth in a new way?

I realized that I had come to hate programming because everyone was asking me to program the same stuff that had been written before but now in the Fluff Framework. I could do that with my eyes closed and so could any teen fresh out of college. What did they care that it had been implemented twice, three times, maybe five times previously in different ways?

Once I understood my problem I could start working on the solution. I decided to start writing software in areas where I had never written code before (like DevOps), or in areas where hardly anybody had re-written code before (like software radios). I also started to meticulously develop a workflow that ran on every operating system I might have to work on (Linux, Windows, and MacOS) that worked for me and saved me from having to relearn a new thing "just because" someone got it in their head that they could "improve" text editing for coders. When the friction between thoughts and code and execution is low, and the problems being coded or solved are novel, I love programming. Because I know what to avoid (arbitrarily complex tools and rewriting CRUD code again and again) my level of joy has gone up significantly.

I don't know if any of this will resonate with your question but it worked for me.


The Samkhya school of thought makes space for non-deism and subscribes to duality. This is unique because dualism has always been associated with worship of the godhead, as often embraced by Vaishnavite sects. Samkhya offers a different perspective. The non-deistic approach in Vedanta (the non-dualism school of thought that is the opposite of the Samkhya school) is less significant than probably the Mimamsa school. All six astikas (Nyaya, Vaisheshika, Mimamsa, Samkhya, Vedanta, Yoga) accommodate both deistic and non-deistic approaches to life. Samkhya gives a lot of importance to the three Gunas, the sattvic, rajasic and tamasic nature of the Self. The guna theory appears everywhere from Ayurveda to yoga to astrology.

Hindu philosophy is the template for a way of life, and the key theme that is most common and recurring is the Sattva-Rajas-Tamas Gunas. I look at it as the earliest attempt to take a stab at diversity. Not on the basis of colour or caste or language, but diversity of human nature/instincts as it were... the caste system, for example, arose out of these divisions. As did mythology and astrology. Without the division of humans on the basis of their instincts, we’d have no way to codify the different philosophies for everyone.

Diversity is about division and accepting that there are differences amongst us that separate us as individuals. I often feel like diversity as a word has been hijacked by the English language. Diversity exists only because it celebrates differences. If there are differences, there will be hierarchies. It means that we are all NOT the same. How can you celebrate that we are all different and then deny that those differences will manifest themselves as a hierarchy? That doesn’t make sense at all.

I grew up listening to mythology from my grandfather. One of the striking things about the gods in Hinduism is that they see no difference between humans, demons and the ‘good’ celestials. There is a hierarchy even amongst gods. I remember asking him why the gods give boons to both the good guys and bad guys. The gods treat everyone equally. That doesn’t make sense.

And he said that it’s because we have all the gods and demons inside us and they all want to come out and live vicariously through us... and we get to choose which god or demon to release. And we do it through worship. Maybe one can worship the goddess of music (Saraswati) or of wealth (Lakshmi) or destruction (Kali). It’s still our choice. And they all live within us. It’s the choice that causes dualism. “I want knowledge. I don’t want poverty. I want beauty. I don’t want injustice.” Etc. Our thoughts and instincts invite and welcome our inner gods and demons to live the human life through our actions. Non-dualism says that in the end... nothing matters anyway... because we are all that. And we are none of it. God and karma are just cherry-on-top extras.

It is a constant churn of attempts to diversify and then assimilate and then differentiate again. Rinse and repeat. We can’t escape this pattern because stasis will set in after churn and equilibrium is fleeting.

What we call equilibrium, that fleeting moment of stability, is the illusion, or maya. Neither joy nor sorrow, neither ignorance nor enlightenment, is permanent. Life follows death and death is guaranteed after life. The churn never stops.

The nirvana or moksha hack is to slow down the entropy and make that fleeting moment seem to last forever. The Now is Forever. The Moment becomes Infinite.

I don’t necessarily accept the western interpretation of Samkhya philosophy as ‘atheistic’. Hindu philosophy and religion are codified to reach as many people as possible as a way of life, not bonded faith. Theism is a layer, as is atheism, and both have their place in multilayered ancient Hindu philosophy.

The Hindu gods are manifestations of the three Gunas. Dualism is picking one of the three Gunas. Non-dualism is accepting all the divisions in the soul that culminate as ‘a god’.

This is a good read. Much better than western interpretations of Samkhya and Vedanta etc. that I find very ignorant and limited in their grasp of symbolic language.

I am troubled by any definitive interpretation. Hindu philosophy is meant for debate and discussion as a way to unravel layers of Self. Its interpretation is unique and personal to each individual to suit their particular life circumstance. Contemplation is a beautiful thing. Thanks for the share.


It doesn't actually need 840MB of RAM. This is an inherent limitation of the way the JVM works. It will reserve a configured amount of memory upfront and grow the heap if memory is full until it reaches the configured maximum heap size. One especially annoying thing is that the JVM will not garbage collect until the heap is actually full. This means if you allocate a little bit here and there (like every single Java program in existence) over time you are going to reach the maximum configured heap. Once the JVM has allocated heap space it will never let go of it. Therefore you might as well assume that the minimum amount of memory a JVM process consumes is the maximum that it has been configured for. In this case the maximum was probably set to 840MB. The application itself probably only uses something like 50MB.

Remember those stupid arguments that a competent programmer can write well performing code [0] in any language? Here is the thing... You actually can't. Especially not with the JVM. Let's say you use a super efficient framework that is purely optimized for startup performance and low memory like Micronaut or Quarkus. When you look at your metrics you see that the Java program is consuming 10MB of heap space and pat yourself smugly on the shoulder thinking how well written these frameworks are. Tough luck. Even if you configure the heap size to something absurdly low like 32MB your JVM process will still use 120MB. Why? Because the JVM has a minimum footprint. It has lots of parked GC threads in the background that need 2MB stacks. It has to keep the class files in memory, keep profiling data, the generated code and lots of other things. If I need something that needs to consume as little memory as possible I just stick with C++ or Lua. With those two languages staying under 1 - 2 MB of RAM is trivial.

Also... it is possible your coworkers do not even set a maximum heap and just let it stay on the default values. That's how a tiny little CRUD app as a dashboard ended up allocating 10GB of RAM to itself on a server with 128GB RAM.

[0] well performing = low memory usage in this case


"People who excel at software design become convinced that they have a unique ability to understand any kind of system at all, from first principles, without prior training, thanks to their superior powers of analysis. Success in the artificially constructed world of software design promotes a dangerous confidence."

http://idlewords.com/talks/sase_panel.htm


Unbiased means that if I draw infinitely many random samples from a population and average a statistic (in this case the variance, whose square root is the standard deviation) across all the samples, the answer will be the statistic computed from the population itself. If one divides by n instead of n-1, the variance estimate will be a factor of (n-1)/n too small. One reading this might think, "Wait! We're going to infinity so the ratio converges to 1." That's true if the size of each sample also goes to infinity but not if we draw millions of ten item samples.

As for using up a degree of freedom, the easiest way to build intuition for why this is a useful concept is to think about very small samples. Let's say I draw a sample of 1 item. By definition the item is equal to the sample mean, so I receive no information about the standard deviation. Conversely, if someone had told me the mean in advance, I could learn a bit about the standard deviation from a single sample. This carries on beyond one in diminishing amounts. Imagine I draw two items. There's some probability that they're both on the same side of the true mean; in that case, I'll estimate my sample mean as being between those numbers and underestimate the standard deviation. Note that I'd still underestimate it even with the bias correction; it's just that the factor compensates just enough that it balances out over all cases.

A simple, concrete way to convince yourself that this is real is to consider a variable that has an equal probability of being 1 or 0. The variance is 0.25 (standard deviation 0.5). If we randomly sample two items, 50% of the time they'll be the same and we'll estimate the variance as zero. The other 50% of the time, dividing by n gives 0.25, the right answer. Hence, our average estimate is half the right answer, which is exactly the n/(n-1) = 2/1 factor we're missing. The correction doubles the estimate in the mismatched case while leaving it zero in the other cases, so the average comes out right. This also suggests why dividing by n is referred to as the maximum likelihood estimator.
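
If you'd rather see the bias numerically, here is a minimal simulation sketch (my own toy code, using the same 0/1 population and two-item samples as above):

    import random

    # Population: a fair 0/1 coin, so the true variance is 0.25.
    # Draw many two-item samples and average the variance estimates computed
    # with divisor n versus divisor n-1 (Bessel's correction).
    random.seed(0)
    trials = 200_000
    total_div_n = 0.0
    total_div_n_minus_1 = 0.0

    for _ in range(trials):
        sample = [random.randint(0, 1) for _ in range(2)]
        mean = sum(sample) / len(sample)
        ss = sum((x - mean) ** 2 for x in sample)        # sum of squared deviations
        total_div_n += ss / len(sample)                  # biased: divide by n
        total_div_n_minus_1 += ss / (len(sample) - 1)    # unbiased: divide by n-1

    print("divide by n:  ", total_div_n / trials)            # ~0.125, half the truth
    print("divide by n-1:", total_div_n_minus_1 / trials)    # ~0.25, unbiased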


It's been a while so I might not recall all the details. This is what I did on a 6502 in the 80's (the order might not be exactly right):

    - Write drivers (in assembler) for external UARTs and parallel port chips
    - Write (in assembler) enough code to get the very basics of Forth going
    - Now in Forth, write the standard set of Forth words
    - Write a rudimentary text editor
    - Write a floppy disk driver and file system management code
    - Now I have a Forth computer
 
At that point I started to use Forth for robotics. Quadrature encoder inputs. A/D, D/A and digital I/O. Eventually doing real-time PID loop. Control a single motor. Build a robot arm. Control five or six motors. Build an external LED hexadecimal display. Talk to it in Forth. Build a buttons and knobs control panel for the robot. Talk to it in Forth. Have loads of fun and learn a ton.

In other words, I had a very specific project in mind and saw it through from a bunch of chips on the workbench to a finished robot arm with user interface.

There are a million lessons to be learned in such a project.

I've actually thought about dusting off my old files and designs and putting together some kind of an educational kit to launch on Kickstarter. It could be a lot of fun.


Thanks much. For others, here are the relevant bits I could find (my emphasis):

> ...It is by no means assured that our national security sector will be able to attract on a sufficient scale the scarce engineering, mathematical and scientific talent that would supply the necessary expertise. That challenge will require investment, enlightened strategic management and an innovative approach to luring a different type of expert out of the private sector into government. Meeting this challenge will require a greater reliance in general on the private sector, since government alone does not possess the requisite expertise. A large portion of the intelligence community’s experts on the military capabilities and plans of Russia and China joined government during the Reagan administration; other experts on counterterrorism and new technology burnished their technical skills following the Sept. 11 attacks. Many of those experts are nearing retirement or have already left to join an attractive private sector. With millennials believing that technology in the private sector now allows them to help change the world — previously the idea of a mission had been largely the province of public service — it is not clear that the intelligence community will be able to attract and retain the necessary talent needed to make sense of how our adversaries will make use of the new technology...

> ... the government no longer possesses the lead in complex technology, at least in many areas relevant to national security. Arguably, the most powerful computing and sophisticated algorithm development now occurs not in the Pentagon or the N.S.A. but in university research labs and in the Googles and Amazons of the commercial world. (To be sure, the government still maintains its superiority in important areas ranging from nuclear energy to cryptography.)...

> ... our national security agencies for the first time must amass the talent and systems to understand not simply a military challenge but also challenges across a broad range of technology and global finance issues. The capacity for such understanding currently resides principally in the private sector and our universities, not the federal government.


So I'm not super familiar with quantum computation, but I did do my undergrad research in QM (specifically, how chaotic behavior depends on the scale of nonlinear quantum systems) and I can take some informed guesses about what these words mean. It's actually super cool!

In a superconducting circuit

A circuit is a loop of something. Probably a solid material, like a metal or carbon, though it could be something more exotic. A superconducting circuit means the electrons in that material move without any resistance. This tells us the circuit is probably very cold--superconductors tend to break down at warm temperatures, like the ones in your house.

conduction electrons

Conductors have electrons in them. Some are "stuck" to atoms, others get to move around. Conduction electrons are the ones that move.

condense into a macroscopic quantum state

Macroscopic means "big", and for QM, "big" means, like, more than a handful of atoms or particles. At least as far as QM is concerned, everything--rocks, electrons, photons, people, etc., has a quantum state, but we use the phrase "quantum state" to mean a state that's, like, WEIRDLY QUANTUM. For instance, a pencil sitting on your desk is normal. A pencil that's like, half on your desk and half on mine is "quantum". Condensing means the electrons are going to change from doing normal individual electron things into acting like some sort of Big But Weirdly Quantum system, likely as a group. Like a crowd becoming a flash mob, they might do some sort of synchronized dance, only except the dance involves, say, every dancer doing two or three or ten dance moves at the same time.

such that currents and voltages behave quantum mechanically [2, 30].

Specifically, we're gonna be able to see quantum effects like superposition in Big Things like "current" and "voltage". The circuit might be in a combination of 3 volts and 5 volts at the same time. Also some of those voltages might be partly real and partly imaginary. Long story.

Our processor uses transmon

What the fuck is a transmon? I had to look this one up; it's a way of making these qubits less sensitive to voltage fluctuations.

qubits [6]

Qubits are quantum bits. A bit can be either 0 or 1. A qubit can be 0 or 1 or (and this is the quantum part) any state in between. Let's call the 0 state |0>, and the 1 state |1>. A qubit can be |1>, but it could also be (1/sqrt(2) |0>) + (1/sqrt(2) |1>). We call that a "cat" state, incidentally, because it's "half 1, half 0"--like Schroedinger's Cat, half alive and half dead. Again, the coefficients here are, in general, complex numbers, but we're gonna gloss over that.
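
(If code helps, here's a tiny sketch of my own treating a qubit as a normalized 2-component complex vector; the squared magnitudes of the amplitudes are the measurement probabilities.)

    import numpy as np

    # Basis states |0> and |1> as complex vectors.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # The "cat" state from the text: equal parts |0> and |1>.
    cat = (ket0 + ket1) / np.sqrt(2)

    print(np.abs(cat) ** 2)          # measurement probabilities: [0.5 0.5]
    print(np.vdot(cat, cat).real)    # norm check: 1.0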

which can be thought of as nonlinear

Nonlinear means they don't respond linearly to some input. Ever had someone do a series of small, mildly annoying things, and at some point you snapped and yelled at them? That's called "going nonlinear".

superconducting resonators

Oscillators are things that vibrate, like strings. Resonators have preferred frequencies to vibrate at. I don't exactly know what this means in this context, though. I'm guessing the circuit has some preferred frequencies it really likes to oscillate at.

at 5 to 7 GHz.

Voltages or currents or whatever are gonna go back and forth 5-7 billion times a second. That's about the same frequency as wifi signals, or microwaves.

The qubit is encoded

A qubit is an abstract thing on a whiteboard. There lots of ways we could actually make a thing that looks like a qubit. "Encoded", here, means "turned into an actual machine you can build in a lab".

as the two lowest quantum eigenstates of the resonant circuit.

An eigenstate, loosely speaking, is a state that has nothing in common with any other eigenstate. For instance, if we wanted to measure a particle's position on a line, we could take x=0 as one eigenstate, x=1 as another, x=2.5 as yet another, etc etc. An infinite number of eigenstates. Quantum systems can be in any (well, normalized) sum of eigenstates. My cat loves being inside and outside at the same time, so they're always trying to occupy 0.2|x=0> + 0.6|x=4> + 0.2|x=5>.

An operator is a thing you can do to a quantum state. Think of operators like functions on values, if you're a programmer, or like matrices that can be applied to state vectors, if you know linear algebra. For instance, I might have a measurement operator, which I use to look at my cat. There's also a special operator called the Hamiltonian, which (loosely) tells you what a state will look like after an infinitely small step in time.

Each operator has associated eigenstates, and those eigenstates have a magic property: if you apply that operator to one of its eigenstates, you get back the exact same state, times some complex number, which we call an eigenvalue. This means eigenstates for the Hamiltonian are, in a sense, stable in time. When we talk about the eigenstates of a system, we usually mean the eigenstates of the Hamiltonian. They could also be talking about measurement eigenstates--I'm not sure.

For the Hamiltonian, eigenvalues are, for Really Fucking Cool Reasons, energies. When we talk about "the two lowest quantum eigenstates", we mean the two states with the lowest energy. So maybe the circuit's eigenstates are, I dunno, 5 GHz, 6 GHz, 7 GHz, etc. We'd take 5 and 6 as our |0> and |1> states.
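
(A concrete toy version of "take the two lowest eigenstates", with made-up numbers of my own: write down a small Hermitian matrix as the Hamiltonian, diagonalize it, and keep the two lowest-energy eigenstates as |0> and |1>.)

    import numpy as np

    # A small made-up Hermitian "Hamiltonian" (arbitrary units) -- not a real
    # transmon Hamiltonian, just an illustration of the recipe.
    H = np.array([[5.0, 0.3, 0.0],
                  [0.3, 6.0, 0.3],
                  [0.0, 0.3, 7.2]])

    energies, states = np.linalg.eigh(H)   # eigh: Hermitian eigendecomposition,
                                           # eigenvalues returned in ascending order
    print(energies[:2])                    # the two lowest energies
    print(states[:, 0], states[:, 1])      # their eigenstates -> our |0> and |1>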

Each transmon has two controls

A control is a thing we can use to change the transmon.

a microwave drive

Something like the microwave in your kitchen, but very small, and probably expensive.

to excite the qubit

This probably means changing the qubit from |0> to |1>. Microwaves carry energy, right? That's how they heat food. If they microwave the circuit at the right frequency, that microwave energy probably helps it jump from a lower frequency/energy to a higher one.

and a magnetic flux control

This feels like something specific to transmons. Flux has to do with the density of stuff moving through a surface. Magnetic flux probably has to do with how strong and close field lines are in some part of the transmon machinery.

to tune the frequency.

How fast the circuit wobbles depends on a magnetic field, I guess?

Each qubit is connected to a linear resonator

Huh. So we've got nonlinear resonators (the qubits) connected to linear resonators (some sort of measurement device?)

used to read out the qubit state

We need a way to actually look at the qubits, and I guess the linear resonator does that. I assume that the linear resonator is isolated from the qubit during computation, and once the computation is over, it gets connected somehow, and vibrates at the same frequency as the qubit. That process probably "spreads out" the quantum state of the system, pushing it REAL CLOSE to an actual eigenstate of the measurement system, which looks like a probabilistic measurement of the actual qubit state.

Like... my cat could be 3/4 inside and 1/4 outside, so long as the room is really dark. If I turn on the light, suddenly my cat is coupled to a MUCH BIGGER system--the room, and that "quantum" state gets diffused into that larger system, in what looks like a measurement like "cat definitely inside". I don't know a simple way to explain decoherence, haha, but if you like math, try Percival's "Quantum State Diffusion".

Hope this helps, and I also hope I got at least some of this right. Maybe someone with a better/more recent command of QM can step in here.


Linux came after the BSDs, so you would think the BSDs would have won.

There are many reasons Linux-based systems are generally much more popular than the BSDs in the server and workstation spaces. Here's why I think that happened:

* GPL vs. BSD license. Repeatedly someone in the BSD community had the bright idea of creating a proprietary OS based on a BSD. All their work was then not shared with the OSS BSD community, and the hires removed expertise from the OSS BSD community. In contrast, the GPL forced the Linux kernel and GNU tool improvements to stay in the community, so every company that participated improved the Linux kernel and GNU tools instead of making their development stagnate. This enabled the Linux kernel in particular to rocket past the BSDs in terms of capabilities.

* Bazaar vs. Cathedral. The BSDs had a small group who tried to build things elegantly (cathedral), mostly in "one big tree". GNU + Linux were far more decentralized (bazaar), leading to faster development. That especially applies to the Linux kernel; many GNU tools are more cathedral-like in their development (though not to the extent of the BSDs), and they've paid a price in slower development because of it.

* Multi-boot Installation ease. For many years Linux was much easier to install than the BSDs on standard x86 hardware. Linux used the standard MBR partitioning scheme, while the BSDs required their own scheme that made it extremely difficult to run a BSD multi-boot setup. For many people computers (including storage) were very expensive - it was much easier to try out Linux (where you could dual-boot) than BSDs. The BSDs required an "all-in" commitment that immediately caused many people to ignore them. I think this factor is underappreciated.

* GNU and Linux emphasis on functionality and ease-of-use instead of tiny-ness. GNU tools revel in all sorts of options (case in point: cat has numerous options) and long-name options (which are much easier to read). The BSDs are often excited about how small their code is and how few flags their command lines have... but it turns out many users want functionality. If tiny-ness is truly your goal, then busybox was generally better once that became available circa 1996 (because it focused specifically on tiny-ness instead of trying to be a compromise between functionality and tiny-ness).

Some claim that the AT&T lawsuit hurt the BSDs, but there was lawsuit-rattling aimed at Linux and GNU as well, so while others will point to that, I don't think it was a serious factor.

Here's one article discussing this:

https://www.channelfutures.com/open-source/open-source-histo...


There's precedent for this in the USA. The US Government once passed regulations that caused one of the largest companies in America to break up into three smaller companies. Today those three companies are entirely independent, employ over 400,000 people combined, and have a combined net worth of over $400B.

Those companies are Boeing Aircraft (153k employees, $244B market value), United Technologies (202k employees, $148B market value), and United Airlines (88k employees, $33B market value).[0]

What most people perceive as a threat to the market is when one company takes over an entire single market. And that is a problem, no doubt. But in the case of Boeing, the problem was that one company had such an advantage vertically: it could lose money on planes in order to make money on shipping, or vice versa, as needed. It meant it could win in whichever market it wanted to and slowly come to dominate all of those markets. The synergies of doing it all internally meant it could win at everything.

If one uses that situation as a precedent, one can start to see the parallels in many of the FAANG companies today.

[0] "The Air Mail Act of 1934 prohibited airlines and manufacturers from being under the same corporate umbrella, so the company split into three smaller companies – Boeing Airplane Company, United Airlines, and United Aircraft Corporation, the precursor to United Technologies." https://en.wikipedia.org/wiki/Boeing


You could say it's a 'lemon market' ( https://en.wikipedia.org/wiki/The_Market_for_Lemons ).

A lemon market is when the buyer doesn't have enough information to tell the difference between a good product (peach) and a bad one (lemon). Since the buyer can't tell the difference, the price he is willing to pay will end up somewhere between the value of the peach (more expensive) and the value of the lemon (cheaper). What happens then is that the sellers of lemons thrive while the sellers of peaches end up leaving the market. This lowers the average price and creates a cycle: the price decreases, which drives even more sellers of quality products to leave the market, which lowers the price further...
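
Here's a toy simulation of that unravelling cycle (my own illustrative numbers: quality q is uniform on [0, 1], a car is worth q to its seller and 1.5q to a buyer, and buyers can only pay the expected value of what's still on offer):

    import random

    # Toy lemon-market unravelling (Akerlof-style, with made-up numbers).
    random.seed(1)
    qualities = [random.random() for _ in range(10_000)]   # cars still for sale

    for rnd in range(10):
        if not qualities:
            break
        # Buyers can't observe quality, so they pay the expected buyer-value
        # (1.5 * average quality) of whatever is currently on the market.
        price = 1.5 * sum(qualities) / len(qualities)
        # Sellers who value their own car above that price withdraw it.
        qualities = [q for q in qualities if q <= price]
        print(f"round {rnd}: price {price:.3f}, cars left {len(qualities)}")

    # Each round the peaches drop out first, the average quality falls,
    # and with it the price -- the market unravels.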


This is a lovely article, and I feel there's a lifetime of lessons to be learned from any colony behaviour. I've recently submitted a thesis on a swarm intelligence algorithm which exhibits some of these qualities, and the parallels between the two were a core component of the contribution.

This effect, where something is globally stable, even though all the individuals that make it happen are unstable is sometimes described as "a forest whose contours remain the same, as the trees all change".

Consider this algorithm for an ant colony foraging for a good source of food (this behaviour has also been observed in ants searching for a new nest site):

  for each ant in the swarm
    if the ant is unhappy
      run to a random ant
      if the random ant is happy
        follow it to its location
      if the random ant is unhappy
        select a location at random
    if the ant is happy
      the ant continues to search its current location

  for each ant in the swarm
    the ant searches a tiny part of its location, at random
    if the ant finds food
      the ant becomes happy
    if the ant doesn't find food
      the ant becomes unhappy

If you run this algorithm you will find that clusters of ants form, where a cluster is a number of ants who share the same foraging location. Importantly, and this is mathematically proven, the largest cluster will form in the location with the best probability of finding food. This algorithm works even when the locations change over time and, as in the article, even when the ants which found the location are replaced with ants who have simply followed other ants to get there.
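To make the two phases concrete, here is a minimal, self-contained Python sketch of the loop above. The world model (a handful of locations, each with a fixed probability that a micro-test finds food), the constants, and all the names are my own illustration, not the API of the sds library mentioned further down.

  import random

  # Toy world: each "location" has some probability that a random
  # micro-test there finds food.  These numbers are made up.
  FOOD_PROBABILITY = [0.1, 0.3, 0.8, 0.2]   # location 2 is the best patch
  NUM_ANTS = 100
  ITERATIONS = 200

  # Each ant has a current location and a happy/unhappy flag.
  ants = [{"location": random.randrange(len(FOOD_PROBABILITY)), "happy": False}
          for _ in range(NUM_ANTS)]

  for _ in range(ITERATIONS):
      # Diffusion phase: unhappy ants poll a random ant and either follow it
      # (if it is happy) or pick a fresh location at random.
      for ant in ants:
          if not ant["happy"]:
              other = random.choice(ants)
              if other["happy"]:
                  ant["location"] = other["location"]
              else:
                  ant["location"] = random.randrange(len(FOOD_PROBABILITY))

      # Test phase: every ant searches a tiny random part of its location and
      # becomes happy only if that micro-test finds food.
      for ant in ants:
          ant["happy"] = random.random() < FOOD_PROBABILITY[ant["location"]]

  # The largest cluster should sit at the location with the best food probability.
  cluster_sizes = [sum(1 for a in ants if a["location"] == loc)
                   for loc in range(len(FOOD_PROBABILITY))]
  print(cluster_sizes)

Run it a few times and the largest count almost always ends up at the best location (index 2 here), even though individual ants keep joining and leaving that cluster.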

The aspect which captured my attention is that tiny changes to individual lines of the algorithm implement diverse behaviours such as hill climbing, a bias towards exploitation or exploration, and global optimisation.

The algorithm is called Stochastic Diffusion Search and I'm in the process of polishing a Python library which implements it and its many variants for a Show HN :) The repo is here: https://github.com/AndrewOwenMartin/sds with some info and an explanatory animation here: http://www.aomartin.co.uk/sds-animation/ and a beta version is already on PyPI here: https://pypi.org/project/sds/.

Contact me (email address on my profile) if you're interested in using this algorithm or contributing to the library; it needs snappy C implementations and a better explanatory animation!


I was lucky enough to have met John Wheeler when I was in grad school. He was a wonderful man who radiated joy and curiosity.

My favorite anecdote about him, which was not widely known, involved what used to be called "nut letters." Before the internet, if you were a famous scientist, particularly a famous physicist, you would receive actual letters from people all over the world asking for help with their perpetual motion machines, time travel devices, and similar nutty theories. Having worked on black holes, gravity, general relativity, and the bomb, Wheeler was quite a nut magnet. He was also blessed with a bit of OCD in the way he organized and categorized all his notes (his annotated bibliography for Misner, Thorne, and Wheeler's Gravitation filled many shelves in the library).

John didn't just receive nut letters. He received, read, organized, filed, classified, and acted on nut letters. His preferred response to them was "I'm afraid I'm not very knowledgeable in the area of your work but I believe you should contact _____ who is working on similar conjectures and may be a good source of additional insights," at which point both parties in that conversation would be so ecstatic to be talking with someone recommended to them by the great John Wheeler that they'd never bother him again.

While in grad school, I happened to read an article in the New York Times on a perpetual motion machine (their weekly Science Times section was fantastic but did occasionally step into pseudoscience topics). At the end of the article the main researcher thanked John Wheeler for having introduced him to the theorist who had helped him refine his understanding of the mechanisms at play in his invention, and I couldn't help but smile at the wonderful successes of John Wheeler's nut dating service.


I once made friends with an octopus, also.

In my youth, during the southern Australian summer startup parts of the year, things would get warmer and warmer .. the sun staying up for longer and earlier and waking up the beach from the cold nights winter storm hangover, the awesome Indian Ocean water a bit docile, the winds calmer, perhaps, visibility getting less shark'y and more plankton'y, and in general a happy sea in the morning first-thing meant it was time for a swim. And so the daily ritual: goggles, flippers, a knife and a bag of treats, beach-bag: check.. I'd walk down to the spot, as it wasn't far from the family beach-house, and by the time I got to the water had deposited all earthly possessions behind me such that I could just splash my dive with the wave .. always a bit of a shiver at first, but then with clarity the ocean opens up its wonders.

This particular spot, and there were many for sure, but this particular one was just better to swim to, rather than walk up and over .. a spot just out past a reefy-beach, beyond a bit of a break, and then back onto a shallow reef a bit further from the beach. A bit of a swim and daunting in the deep bits, but if it were docile once you got there, the creatures were too. Playful, even. Chill.. A true ocean garden paradise of warm-weather nooks, holes, ledges, crannies, shelves, and a few scary deep bits thrown in for pleasure. On the right day though, it's like swimming in a giant bath-tub, although all your mates have the potential to kill you..

I can see it in my mind now, the back of my hand ahead of me as I take one last breath of air and dive down to the hole with a ledge... my bag of treats seemingly dropping into the hole with a mind of its own, and out spills the contents .. a few gold coins (well-polished with coca cola), a couple of marbles, a fluorescent rubber ball, like they used to make, heavy... all sinking into the shallow depth. And the inky tentacle reaching from the hole, catching each coin as it fell, glistening endlessly in the sun, to dark silence below. I float and watch, and of course the other objects cross my mind, but once the coins are gone, the octopus disappears for a few seconds .. probably inspecting the validity of the mint .. before returning in a gigantic splash of liquid being, all 8 tentacles extended, some kind of attack mode like .. how dare I enter this lair .. and then .. docile. friendly. just a big scareshow for the human, and anyway.. what else did I bring, oh yes .. the fluoro ball now has a bite mark in it, didn't like it .. marbles .. hmm ..

I go up, grab another breath, dive down .. can't see anything, it's just the hole with a ledge. Dive deeper, under the ledge, there it is .. guarding the hoard. A few fish-hooks and sinkers down there, a thousand empty shells too, some kind of a collection. Would I call it a midden? Yes, I would. With my coins in it. Up for another breath, and down again to retrieve what treat I can, but of course the game is to get a coin from the lair, drop it from the surface, and let the sun do all the work. Not easy though, putting one's hand into an octopus lair when it's guarding its midden.

I know this is going to sound weird, but to me it really seemed there was a sudden sweet-spot where the reflections would catch in a way that the octopus could see I was trying to play the game, and so it was that we played catch like this a few times that summer. It did give me one of the coins back eventually.


I’m assuming you’re trying to claim this is not related to global warming (“it has happened before”).

Global warming is a large scale and long term problem, and the first thing you would expect is an increase in the frequency of previously uncommon events. Take warm weather: historically there may have been a few years where the temperature reached X degrees, against an average of Y degrees. Now the average temperature has increased by a small amount, which shifts the entire normal curve of temperatures up. So instead of X degrees being the 99th percentile (say), it becomes the 90th. The statistics of normal curves ensure that as small changes accrue, the current extremes rapidly become more common: a small shift in the average produces a much larger relative shift in how often the old extremes occur (see the quick calculation below).
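As a quick back-of-the-envelope illustration in Python (the numbers are made up for the example, not climate data): shift a normal distribution of daily highs up by just one degree and the old 1-in-100 hot day becomes roughly twice as frequent.

  from statistics import NormalDist

  # Illustrative numbers only: daily summer highs with mean 25 C, st. dev. 4 C.
  baseline = NormalDist(mu=25, sigma=4)
  threshold = baseline.inv_cdf(0.99)        # the old "1-in-100-days" heat level

  # Shift the whole distribution up by just 1 C.
  warmed = NormalDist(mu=26, sigma=4)

  old_frequency = 1 - baseline.cdf(threshold)   # 1% by construction
  new_frequency = 1 - warmed.cdf(threshold)     # roughly 2%

  # Shifting the mean by only 1 degree roughly doubles how often the old extreme occurs.
  print(round(threshold, 1), round(old_frequency, 3), round(new_frequency, 3))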

Then we get to the localized vs global issue. Global warming is an average increase in temperature across the world. The world is big, so there will always be areas that are substantially warmer or colder than others, but by virtue of the statistics above, the high temperature pockets will become larger and more common over time. Those pockets, and the increased energy in the atmosphere, increase the movement of air around the globe - so yes, you may also get occasional pockets of uncommonly low temperature, likely caused by uncommonly “high” temperatures in the cold parts of the world pushing that still-colder-than-average air into new locations. Over time the average temperature of even the cold areas will become “warm”.

Finally, many biological and physical behaviors are purely a function of very specific temperatures. A lot of the biological behaviors (in plants especially) are direct products of physical laws, gas and fluid expansion being the most common. Plenty of reptiles have their sex determined entirely by temperature, with something like 23.4°C being the exact cutoff. Water freezes and thaws at exactly 0°C - it doesn’t matter how small the difference is.

