The Future Bubble (thenewinquiry.com)
29 points by dreamweapon on March 24, 2015 | 14 comments


The claim the author leads off with is not plausible: capitalism has seen enormous changes in social relations since its earliest days.

The author is a sociologist, not an economic historian, but they should still be aware that economic historians (even ones on the Left) tend to place the emergence of capitalism sometime in the 1600s, and break it up into multiple phases, each embodying significantly different social relations.

At various times, anti-capitalists proclaimed that unionization, emancipation, universal suffrage (giving the vote to all men rather than just men of property), women's suffrage, labour parties and social democracy would all overthrow the capitalist order, precisely because these were seen as radical departures from the existing social relations upon which capitalism was presumed to depend.

Instead, capitalism has proven an enormously resilient mode of economic organization that can be instantiated in societies with extremely different social relations.

So the claim that the modern worker in a mature capitalist society like Sweden or Canada or Germany (to pick a few at random) stands in the same social relationship to their employer as did workers in 17th century Amsterdam or 18th century London or 19th century New York or 20th century Shanghai is extremely implausible.

To further claim that social relations in the future must remain as they are today for the preservation of the capitalist mode of economic organization is equally implausible.


If the current modes of operation around the world are any indication, it's quite clear that capitalism is already on its deathbed.

A lot of systems produce good outputs when good intentions drive them. Of course, in the long run, it's always clear that man's silly, short-sighted, and greedy nature drives even the best of systems amok... From there, a new one is birthed with lessons learned and a sharper focus on good intentions (or so we hope).

The current 'capitalistic' system is based on social foolery and widespread ignorance of its functional mechanisms.

What you mean to say is that man's well-bodied intentions have proven 'an enormously resilient mode of operation that can be instantiated in societies with extremely different social relations'. When a particular economic system breaks ties with this, it collapses under its own weight (by the weight of the larger society). Maybe because this is universally an unstable state of existence... Who knows.

Sociologists could teach economists a great deal, given the Frankenstein of an economic engine they've created. Economists are too buried in their institutionalized ideals to recognize the damning social impacts their misguided systems of obfuscation and unnecessary complexity have caused throughout modern history. Just because you have some cool swag as a result says nothing about the fragile and hollow foundation that the whole global economy is currently teetering on. A historian could tell you about all the empire swag that was destroyed throughout history due to ignorance and lack of concern for social balance.

If there was ever something people truly feared and kept hidden with all their might throughout history, it has been truth. People will spend their life's fortune on hiding truths... Empires are built on it. Wars are waged over it. Power is structured on it. Institutions and edifices of grandeur, charged by the productivity of generations, are created to ensure it never gets out. Yet, it always seems to somehow.

So, if anything, that's what this 'high finance' system is all about (hiding truths). Security through obfuscation, and it doesn't take some crackpot PhD Nobel Prize-winning economist to see that. Maybe it takes one to construct a convincing lie.


Loved the article - VC investing is indeed an approach where you want the future to be 'X', so in the present you do 'Y' at cost 'Z'.

In essence, we are doing exactly what this describes, and every time investors or founders take cash off the table on a round, they are indeed taking future 'gain' money into the present, without knowing exactly if the future will turn out as described and anticipated.
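
To make that arithmetic concrete, here's a back-of-the-envelope sketch in Python (with entirely made-up numbers, not how any real fund models a deal) of how an uncertain future exit gets 'pulled into the present' as a probability-weighted, discounted value:

    # Toy expected-present-value sketch. All numbers are hypothetical; the
    # point is just that the "future gain" booked today is a probability-
    # weighted guess about an exit, discounted back to now.

    def expected_present_value(exit_value, p_success, years, discount_rate):
        # Discount an uncertain future payoff back to the present.
        return p_success * exit_value / (1 + discount_rate) ** years

    # Hypothetical round: hoping for a $500M exit in 7 years,
    # a 10% chance of getting there, at a 20% annual discount rate.
    epv = expected_present_value(500e6, 0.10, 7, 0.20)
    print(f"Expected present value: ${epv:,.0f}")  # roughly $14M today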


"This rendering—the unknowable future that eats the present—may resonate more with an anxiety endemic to capitalist societies; as we will see, it is a characteristic nightmare of the capital-accumulating class. Capital always has one foot in the future, and even packages and exchanges “futures” as a financial instrument. A time bubble that erases the future would mean a collapsing asset price bubble in the present. For capitalism’s reality, it turns out, is stranger even than science fiction. Radical challenges to the system can change conditions in the present by, in a manner of speaking, altering the future."

I had this exact thought the other day in another context.

Lately I've been trying to puzzle out why there's been this outbreak of seemingly absurd and ridiculous nail biting over artificial intelligence in and around Silicon Valley circles. Rationally it makes little sense.

If you don't know, I am referring to this kind of thing: http://blog.samaltman.com/machine-intelligence-part-2

We have no evidence that "Hollywood AI" is nigh, no evidence it will "explode" and become super-human in a short period of time (and some very good counter-arguments against this scenario), and no evidence it would be intrinsically more dangerous than we are to each other. The whole fear mongering topic seems rooted in a tower of speculations that becomes increasingly precarious as you ascend.

I wrote a maybe 3/4-baked blog post on it here: http://adamierymenko.com/on-the-imminence-and-danger-of-ai/

That blog post addresses some of the issues such as whether AI can or will "explode," but to me it felt like I was still struggling with the ultimate question of what really lies behind all this. Then maybe yesterday or the day before I realized that these fears might be rooted in the fear of disruption.

Consider Francis Fukuyama's very similar -- and perhaps equally shaky -- fear-mongering about transhumanism.

http://reason.com/archives/2004/08/25/transhumanism-the-most...

So transhumanism, which is basically the nebulous idea that we should attempt to radically improve ourselves, is what Fukuyama thinks is the most dangerous idea to future human welfare? Really? I can think of a few concerns, but how is this more dangerous than other much more obvious candidates like religious fundamentalism, totalitarian nationalism, or certain varieties of misanthropic nihilism? You know, ideas already drenched in blood that seem to have a disturbing ability to recur throughout history?

Fukuyama is also well known as the author of "The End of History," which is basically a court intellectual feel-good tome assuring today's leaders that the world has achieved a steady state and nothing much is going to change. (It's since become a laughingstock, as it should have been on the basis of its absurd title.)

Perhaps what scares certain people so much about AI is its potential to upset the world order. Human systems of control and authority are largely based on the systematic exploitation of human cognitive biases and fallacies. Even if an AI weren't explosively super-human, it might still operate in ways that are non-human. In so doing it might simply not be vulnerable to the same techniques of persuasion. How exactly does one rule aliens?

Maybe the fear isn't so much that AI is going to kill us all (especially since it would probably be symbiotic with us), but that it'd be a loose cannon on the deck.

At the same time, even a non-sentient but very versatile and powerful AI -- a programmable "philosophical zombie" if you will -- could obsolete entire industries overnight. As the article says, capitalist economies can cope with some amount of so-called creative destruction but too much is bad news. What happens if/when some kind of AI can do >50% of the job of lawyers, doctors, politicians, journalists, non-fiction writers, bankers/financiers, etc.? You'd have wave upon wave of bankruptcies both personal and corporate.

A real deep and wide breakthrough in AI could be hyperdeflationary. So might real "transhumanism" for that matter, by radically increasing the effectiveness of labor among other reasons.

I do know this: the reason you constantly hear financial types harp on about their terror of inflation is that their real fear is the opposite.

Interesting food for thought, don't you think? I'm not sure I share all this article's sentiments, but I agree with the basic sense that present economic systems demand conformity and conservatism at some level and fear large disruptive changes.


> Lately I've been trying to puzzle out why there's been this outbreak of seemingly absurd and ridiculous nail biting over artificial intelligence in and around Silicon Valley circles. Rationally it makes little sense.

From my observation, there's a huge confusion generated by people (mostly journalists) who have no knowledge whatsoever of the research on AI as an existential risk, but who write texts comparing AI to the science fiction movies people know. The rational basis for "fear of AI" is actually quite simple: a mind is a strong optimization system; if we somehow create one that is as powerful as our own, there is no reason to assume it will automagically share our values - and any strong optimization process that does not share our values (note: we don't really know what they are anyway) will most likely destroy us. All the talk about "terminators" and "the rise of the machines", etc. is just muddying the waters.
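
A toy sketch of that argument (made-up objective, nothing to do with any real AI system): the "parks" below stand in for some human value that was never written into the objective function, so the optimizer treats them as free real estate.

    # Toy sketch: a strong optimizer maximizes exactly what it is told to
    # maximize. Anything not encoded in the objective simply doesn't count.
    import random

    random.seed(0)
    GRID = 10
    world = [["park" if random.random() < 0.2 else "empty"
              for _ in range(GRID)] for _ in range(GRID)]

    def objective(w):
        # What we actually asked for: the number of factories. Nothing else.
        return sum(cell == "factory" for row in w for cell in row)

    def greedy_step(w):
        # Take any single change that raises the objective; a park and an
        # empty lot look identical here, because only factories score.
        for row in w:
            for i, cell in enumerate(row):
                if cell != "factory":
                    row[i] = "factory"
                    return True
        return False

    while greedy_step(world):
        pass

    print("objective:", objective(world))                                 # 100
    print("parks left:", sum(c == "park" for row in world for c in row))  # 0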


I live around tons of people with wildly varying value systems, and very few people hurt each other. Differing value systems do not guarantee conflict, especially if there are common interests and economic interdependence. Common value systems also don't prevent it... Witness the centuries and centuries of bloody conflict that has raged between humans of the same religious belief system, ethnic group, and even the same language and similar culture (e.g. the perpetual Middle East bloodbath).

A wildly alien AI might actually have fewer reasons to fight with humans. It might simply carve out some economic niche to earn income to purchase what it needs, and go exist in some physical and/or virtual enclave somewhere. Last I checked Antarctica was big, uninhabited, and reduces the need for active cooling. Then there is space. Why wouldn't an AI with no interest in living with humans just go to the Moon? There are points on the Lunar surface in perpetual daylight, meaning tons of free energy. Lots of mineral resources, lava tubes big enough for cities, and no oxygen, mold, fungi, water, or meat sacks; surely a superintelligence could do something valuable enough to buy a couple hundred heavy-lift rocket launches. It would be funny if Elon, who seems worried about AI, got it as a customer. :)

Like I said: I see no intrinsic reason AI is more dangerous than the 350,000 other minds born daily.


You are underestimating the amount of values humans share :). No matter how different our beliefs are, we can all love, hate, laugh, be jealous, greedy, selfless. We all share the feelings of pain and joy, hunger and lust. We all think in similar ways, because we run on the same cognitive architecture. Whatever the first AI turns out to be, it is unlikely to share any of that with us - especially if it is some random process you happened to pull from the space of possible minds.

> A wildly alien AI might actually have fewer reasons to fight with humans. It might simply carve out some economic niche to earn income to purchase what it needs, and go exist in some physical and/or virtual enclave somewhere. Last I checked Antarctica was big, uninhabited, and reduces the need for active cooling. Then there is space. Why wouldn't an AI with no interest in living with humans just go to the Moon? There are points on the Lunar surface in perpetual daylight, meaning tons of free energy.

Did we do it? Did Europeans "carve out some economic niche" and engage in trade with America? No, they just invaded it. Did we all move to Antarctica so that cows have space to live? No, we eat them. Because they're tasty. Did we go to the Moon because Earth has such a beautiful ecosystem, full of various life forms, many of them quite smart? Of course not. We dominated it all.

There is no reason for an AI to "leave us alone" unless we explicitly build care for human values into it. Otherwise, why should it care?

A lot of confusion stems from people thinking about AI in anthropomorphic terms. If you want a good example of what an actually alien mind can look like, see the proto-AIs we have already managed to create: corporations, big bureaucracies, and the market economy. Even though they're "made out of people", they're not optimizing for anything close to what any human would want. Hence the various problems we discuss every day on HN.

I very much like the definition of intelligence as a very strong optimization process - it highlights the fact that a process doesn't have to be human-like to be dangerous.


This is a big topic, and there's a lot to unpack. I agree with some of what you say, and that's partly why I am not dismissing dangers associated with AI. I just don't think it ranks as something to lose sleep over compared with the much more tangible, imminent, and guaranteed-to-be-bad threats we already face.

One thing I disagree with is that intelligence is just a very strong optimizer. It is that, but I don't think it's just that. I think it is very much a multi-modal / multi-paradigm thing, and I think there's a ton of stuff we don't understand about the operation of our own minds. It's yet another reason I don't think human-level or beyond AI is imminent. We barely understand how we think.


> One thing I disagree with is that intelligence is just a very strong optimizer. It is that, but I don't think it's just that. I think it is very much a multi-modal / multi-paradigm thing

I agree with that. I didn't want to imply that you get intelligence by throwing enough compute at gradient descent. It's a multi-domain, cross-paradigm kind of optimization. But I find it worthwhile to deanthropomorphize intelligence, so that it's easier to appreciate how different a powerful mind can be from the ones we have.

> and I think there's a ton of stuff we don't understand about the operation of our own minds. It's yet another reason I don't think human-level or beyond AI is imminent. We barely understand how we think.

I also don't think AI is imminent - hell, we can't even get a decent webapp generator working; we have a lot of research in front of us before we can build self-improving systems. But I share the concern of the FAI crowd that we may eventually get there, and when we do, it's important to do it right the first time - otherwise, a runaway optimization process may not give us another chance. As you said, we barely understand how we ourselves think - so it's good to figure that one out before someone manages to build an actual AI.


One thing I've learned by being a "smart person" is that intelligence is not everything. I do not believe that high intelligence or even super-intelligence would automatically yield power, wealth, influence, or anything else that we might fear, especially if it has a high chance of coming with various forms of baggage and trade-offs.

We can't assume that the apparent correlation between intelligence and mental illness would hold in non-humans -- or that it wouldn't. That's because we don't understand why that correlation exists. But there is one trade-off that I think is likely to be universal: the smarter you are, the more effort you seem to have to put into "meta" thinking like philosophy to keep yourself on track.

Think of it this way: it's easy to drive a Honda Civic, but a supercar can actually require performance driving classes to learn to drive it safely. Otherwise you can do things like accelerate into the car in front of you very easily, lose control, etc. because it does not drive like a commodity car.

Many people with very high IQs use their high intelligence to create elaborate delusions and rationalizations that land them in a ditch. That's a "meta" problem, a philosophical problem, not a problem with engine size, but it's one that probably gets worse as the motor gets bigger.

It's another factor that I think might place limits on the rate at which an AI could self-improve. Technically it's just a special case of the combinatorial search problem associated with improving intelligence beyond known local maxima.
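
A minimal sketch of that local-maximum point (the fitness landscape below is completely made up, purely for illustration): a greedy self-improver that only accepts strictly better designs stalls on a small nearby peak and never finds the much higher one.

    # Toy hill-climbing sketch: greedy search over a made-up "design fitness"
    # landscape gets stuck at a local maximum and never reaches the global one.

    def fitness(x):
        if x <= 30:
            return 30 - abs(x - 20)      # local hill: peak of 30 at x = 20
        if x <= 40:
            return -10                   # a valley a greedy climber won't cross
        return 100 - 2 * abs(x - 60)     # global hill: peak of 100 at x = 60

    def hill_climb(x, steps=1000):
        for _ in range(steps):
            best = max((x - 1, x, x + 1), key=fitness)
            if best == x:                # no strictly better neighbor: stuck
                return x
            x = best
        return x

    print(hill_climb(12), fitness(hill_climb(12)))  # 20 30  <- the local peak
    print(60, fitness(60))                          # 60 100 <- never found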

An AI just thinking "hey I can just double my processing power and storage and get twice as smart!" might very easily end in some novel form of madness, not super-intelligence.


Maybe the fear centers on how accurately and deeply such a strong AI could identify the ugly nature of human 'values', decisions, and intentions... especially those of powerful institutions. Wouldn't it be something damning if, with high accuracy, you could get a stack trace leading up to the formation of certain human thoughts?

So, maybe the fear centers on a judgement day of sorts... being confronted with a dimension of 'truth' about ourselves. Terminator Judgement Day ;)


> We have no evidence that "Hollywood AI" is nigh, no evidence it will "explode" and become super-human in a short period of time (and some very good counter-arguments against this scenario), and no evidence it would be intrinsically more dangerous than we are to each other. The whole fear mongering topic seems rooted in a tower of speculations that becomes increasingly precarious as you ascend.

Agreed.

> ... I realized that these fears might be rooted in the fear of disruption.

Yes, that might be part of it.

Another part is that ideas that are both interesting and plausible tend to be more visible. Lots of people will pay to hear Ray Kurzweil speak about how the next 20 years are going to be absolutely amazing. Not so many want to hear you or me saying, "Well, maybe not."

Ideas like those of RK have always been interesting. In 1960, when a computer was a mysterious machine in the back room, surrounded by white-coated technicians (and even they weren't too sure what it could do), such ideas were plausible, too. In 1990, when a computer was a machine for typing letters or playing lousy games -- not so much. But today we all have a box in our pocket that responds to voice commands, and we hear about self-driving cars. So this amazing future becomes plausible again.

This combination of interest and plausibility means we hear a lot more about the possibilities of strong AI than arguments against them. Even your own post contains 2 links for, and just 1 against. And guess which ones will be spread. No one says, "Hey! I just read this cool article about how computers aren't going to become intelligent any time soon!" So despite the relatively easy arguments against the radical futurists, the people making them are not heard much.

> Interesting food for thought, don't you think?

Indeed. Lots of thoughts. No time to write them all ....


Very good insight, thank you.

I don't think it's the potential to upset the world order that has everyone so worried about AI. At least, as someone who studied Artificial Intelligence in college and has since refused to study it any further, I don't think that is the main concern.

It's a risk/reward problem.

The potential risk is infinite: the worst-case scenario (in my opinion) is the extinction of humanity and the destruction of our planet. There are arguments that this won't ever happen with AI, but it is unprovable either way until AI is actually invented.

The promised reward for developing AI? Self-driving cars, extension of human life, raised quality of life, the end of all suffering, etc. Although these are all positive developments, my opinion is that we don't necessarily need AI in order to achieve them. We are already well on our way to making life much better. I mean, look at how things have improved over the last century, and think about how much better life will be if we can extend these gains from the first world to those in less fortunate areas of the planet. Basically, the only reward I see from developing AI is increasing the speed at which these things are developed, not the actual development of these improvements.

In my opinion, the risk is not worth the reward. An acceleration of progress vs. the complete destruction of humanity? No thanks.


Your thoughts are most definitely shared by others. Thank you for taking the time to detail them.



