
> is the “more safety” camp saying to slow down AI development?

This is an EXCELLENT question and the answer is very very nuanced. Warning: the definitions of the terms I'm about to use are actively evolving in current discourse.

The following taxonomy reflects my own views, specifically with the inclusion of faction 4:

---

There are basically four competing factions.

    (1) The "normal" faction, which includes Satya and almost all business people both in VC & on Wall Street.  Normals say (through their actions and their investments, which both speak much louder than words) that we can deal with x-risk later, and right now let's make some money and continue life as normal.  They focus their life's work on "buying a home", "saving for retirement", and maybe someday "giving back to their community", and other such comforting, familiar little platitudes of life as it was for our mom and dad.

    (2) The "decel" faction (short for "decelerate"), which includes most old-school AI safety folks such as Ilya, Helen, and Tasha.  Sometimes you see these people with a "pause button" emoji or "stop sign" emoji in their Twitter name.

    (3) The "e/acc" faction (short for "effective accelerationists"). This faction is a mix of fanatical techno-utopians (like Yann LeCun and Andrew Ng) and a bunch of Twitter people who post macho memes and have a "lol let's watch the world burn" sort of attitude. Those people are, in my view, very similar to the young people from 4chan who voted for Trump over Hillary in 2016 because they thought a Trump presidency would be hilarious.

    (4) The newest faction doesn't even have a name. I've only heard it articulated by Greg Brockman, so let's call it Brockism. This faction is very new and it actually has me reconsidering my own beliefs. Brockists believe that the safest way to reduce x-risk is to move as fast as possible on software development while moving as slowly as possible on semiconductor development. Basically, Brockman believes that semiconductors are already way too powerful, and that we could stumble into artificial superintelligence by accidentally inventing a really good algorithm that's suddenly way smarter while still fitting within the limits of semiconductor technology as we know it (i.e., not requiring any fancy optical chips or quantum chips or memristor chips or 3D chips or any of the other ideas for what comes after Moore's Law stops progressing). The possibility that we could stumble into a sudden, accidental leap in intelligence through a few lines of code is what Brockman believes is super dangerous, and is what he calls the capabilities "overhang". As far as I know the Brockist ideology has only ever been articulated exactly once, in the final six minutes of this very interesting & heartwarming little TED talk:
https://youtu.be/C_78DM8fG6E?si=uIP2OIxV8dXAKr9B&t=1478

---

All in all:

- The "normal" and "e/acc" factions are both in my view stupidly naive, and both of them more or less advocate to follow standard Silicon Valley doctrine of "move fast, break things, get rich".

- The "decel" and "Brockist" factions both take x-risk super seriously, and agree on the need to restrict semiconductor development, but they have totally opposite views on whether AI software research should slow down or speed up.

For what happens next at OpenAI:

- In the political shake-up that just concluded, the "decel" faction lost everything, to the point where, as far as I know, not a single decel is left standing in OpenAI leadership, despite the fact that OpenAI was originally founded primarily by decels.

- Next, there will be an interesting and subtle three-way power struggle between the normals (Satya, + Sam?), Brockists (Brockman, + Sam?), and e/acc's (an ideology possibly held by some of the ML scientists).


Panspermia is a fascinating conjecture.

The preferred chirality of organic molecules could absolutely have arisen by chance, but it's interesting to see this in meteorites.

On the unrelated subject of handedness, I saw an interesting thread on Twitter today [1] about how we're starting to synthesize reverse-chirality polymers and enzymes, most notably DNA and replication enzymes.

There are a lot of interesting implications.

You can't get rid of L-DNA without a reverse DNase, which leads to an accumulation of information and transcription products. So they need to remake all the enzyme stereoisomers.

That alone is interesting, but you can take it to the limit and produce reverse biology that synthesizes reverse sugars, which can't be metabolized by much of extant life [2]. Suddenly a lab-escaped reverse autotroph can out-compete all of us right-handed lifeforms because nothing can eat it. Bacteria, plankton, the entire food web collapses. When we have nothing left to fish or farm, we die too.

Never thought nanotech's grey goo was plausible. Now I see something that rhymes with it, and I could see it happening within our lifetimes.

It'd make a crazy MAD bioweapon on par with or potentially worse than nukes.

Wild tangent, sorry.

[1] https://twitter.com/eigenrobot/status/1420952351968432130

[2] https://twitter.com/prawncis/status/1420982623048925187

