The song "Prisencolinensinainciusol" has a similar effect: it's meant to sound exactly like American English to non-native speakers, but is actually just gibberish. Works amazingly well.
"The last enemy that shall be destroyed is death."
Well, maybe not the last one, but still an important one. Nick Bostrom has a great story describing the underlying philosophy in the fight against aging:
Imagine a world in which Stalin was still in power. That is what amortality looks like.
I can't help but think that this would also completely retard scientific progress. Imagine tenure that lasts a millennium or more. We would still be discussing scholasticism.
Imagine a world where Newton could still contribute, peak scientific output wouldn't come before the age of 35, and politicians would tackle long-term problems because they would be affected by them too. That is also what amortality looks like.
I'd be worried that Newton would be spending an even higher percentage of his time on alchemy research than he already did, and using his reputation to push promising scientists to do the same.
Modern chemistry was born from alchemy research. I find it unlikely that Newton would have continued to do exactly the same kinds of things for centuries more.
Politicians tackle quick wins that get them good PR and re-elected in the next 4-5 year cycle, plus of course returning all the favors/contributions/etc. to the shady characters behind governments.
No amount of longevity is going to fix that; on the contrary, it could contribute to the entrenchment of those behind the curtain as permanent puppet masters. And we all know that if power corrupts, then semi-eternal power ...
People are living longer and longer already, compared to ~50 years ago. Scientific progress doesn't seem to be slowing down, though it has shifted to different fields.
1) No. Whatever people make, other people can destroy. You could get rid of Stalin any time you wanted, even if only by running away or making him irrelevant, with various degrees of difficulty involved.
And for every Stalin, you'd also keep a few Buddhas and Gandhis around.
2) Again, that depends on how you approach innovation, which has little to do with age. What would have to be instituted is probably rotation based on tenure, similar to presidential terms. If a professor is still deemed innovative, they can stay in charge.
In fact, such a system would be vastly superior to the current one, where a professor is almost immovable for many years once tenured. It would also help with the publish-or-perish part if extended to lower levels - you'd get more chances.
Most importantly, if the basic needs are met, you just gained access to a huge pool of genius engineers and scientists by sheer numbers. Imagine if, say, Feynman or Hawking or Knuth or even Leibniz and Newton were still around, and cooperating... No matter the academic structures.
Why would Stalin still be in power because of lack of aging? Stalin was quite likely murdered, though it was never proven. This is usually what happens to horrible leaders when they're in power too long.
Of course, Stalin was so popular that huge crowds showed up to honor him, and 100 people got crushed in the crowding. Over in Spain, Franco was apparently so popular that they never bothered to oust him at all. So if you don't like dictators like Stalin and Franco, that means you also don't really support democracy, since in a democratic system these people would have also been in power due to massive popular support.
Losing leaders to aging has historically robbed us of great leaders too, don't forget. Elizabeth I was considered one of England's best rulers, her reign considered a golden age of 40 years. Marcus Aurelius is considered one of Rome's best emperors, and he was infamously replaced by the horrible Commodus after he died. I wonder how history would be different if Marcus Aurelius had reigned for another few centuries.
You're imagining Stalin, but we live in the 21st century. This century, telepathy is going to become reality, and even that seems to be the less interesting thing compared to AI, which is also inevitably coming. Do you think these facts won't change anything?
Oh good. I'm glad you agree we will see the largest genocide in history in our lifetime. People are usually a lot more optimistic than me about the future.
I don't see it as the largest genocide (not that I consider it positive). The Borg didn't kill; it assimilated. Considering that the absolute majority of people in the future will be cloned, the assimilation of old-timers (that's us) is going to be seen as a minor event, and probably not even a genocide, since we will continue to live. I personally think it's going to be seen as our salvation.
--
IMO there is one thing that seems to be truly unique and irreplaceable - consciousness and its continuity, and control over it. That is probably going to be prized.
"Once you've wrestled, everything else in life is easy."
The entire experience, from the first days when I got triangled by girls half my size (choking my ego as much as choking my body), to competition days, with pressure passers and crazy leglock guys... It absolutely changed me - for the better.
I think Sam Harris put it like this: for free will, it doesn't even matter whether reality is deterministic or random, because the determinism or randomness is found at the quantum level, many levels "below" neurological free will.
Let's say we have Universe 1 (deterministic) and Universe 2 (random). You face a choice - raise your left hand or your right hand. In U1, if you went back in time several times, you would always pick the same hand because the configuration of matter in the universe would "require" that the next step, globally, is you raising that same hand. In U2, if you went back in time, there could be some variance - maybe you'd pick the other hand 50% of the time - but this variance would happen on a quantum level and only manifest neurologically/physically. I still see no possibility of the classical idea of free will there.
Wasn't there a story linked here a couple of weeks ago about neurons possibly having some sensitivity to quantum interactions? Wouldn't that mean that the quantum level is not actually below neurological free will?
> I still see no possibility of the classical idea of free will there.
That's because you're mistaken in thinking that the classical version of free will requires the ability to do otherwise, in the sense of making different choices after rewinding time.
Consider what that actually means: an outcome that is different every time you sample it, even after going back in time, is classified as a random phenomenon. That's not what free will means, classically. Where is the will if your choices are random?
Furthermore, the Frankfurt cases debunked this old notion of the principle of alternate possibilities back in the 60s. Sam Harris is simply mistaken about the applicability of this principle.
In the case where the answer is no to both questions, I have come to terms with being comfortable procrastinating on that task.
If I cannot convince myself of the value added versus the effort required, today versus in the future, it is okay to delay the task.
Also, choosing not to do a task now and to procrastinate can be a powerful and useful tactic.
As I understand it, AI ethical principles relate to the development of a superintelligence. Talking about unethical usage of narrow AI is like talking about the unethical usage of any other tool - there is no significant difference.
The "true" AI ethical question is related to ensuring that the team that develops the AI is aware of AI alignment efforts and has a "security mindset" (meaning: don't just try stuff and repair the damage if something happens - ensure in advance, with mathematical proof, that a damaging thing won't happen). This is important because in a catastrophic superintelligent AI scenario, the damage is irreparable (e.g. all humanity dies in 12 hours).
For a good intro to these topics, Life 3.0 by Max Tegmark is a good resource. Superintelligence by Nick Bostrom as well.
> AI ethical principles relate to the development of a superintelligence
This is not true; there are real-world ethical considerations right now with existing tech, and in fact there have been since the most rudimentary AI was applied in commerce or government.
People get upset about relatively unimportant things. I'm pretty sure sexism exists and is bad, but this is just... not it. It's a dumb meme, that's all.
It's not really up to you or me to dictate what other people think though, or what they feel is important to them, so the fact you think it's a dumb meme isn't relevant. That's why we have courts.
That's why we have levels of courts and people whose job it is to decide whether or not a case has merit based on arguments brought by interested parties. We can't say something is a waste of time just by looking at it. We can't see the nuance at the heart of every matter.
Again, it's not for you or me to dictate what is actually important. Not important to you is not the same as being objectively unimportant.
Well, apparently not everyone agrees with you. Since you haven't laid out an argument there's not really much ground to see where you and this ruling disagree.
Wow. This is honestly the best analogy I've ever seen. I still think that some particular behaviors are generally good and should be emulated (like e.g. exercise) - but even there you'll find exceptions.
But taking CEO quirks and designing lives so that they include them... That's just guessing the teacher's password (https://www.lesswrong.com/posts/NMoLJuDJEms7Ku9XS/guessing-t...).
The analogy of inheriting a position in a game played by somebody else is great. Winning or losing depends on how you finish it. What I can't agree with is the use of chess itself: in chess there's no luck; if you play better than your opponent, you will certainly win. In life, even after you inherit the position, luck is a relevant factor, and the better player can lose.
Paraphrasing Nassim Taleb, a good player is somebody who optimises the chances of black swans happening in life. But even this might not grant you a win.