I found "The King of Hearts" a really fascinating book.
Early open-heart surgery was ... jaw-dropping.
For a child with a heart defect, they sewed the mother's circulatory system to the child's, and she would pump blood for both of them while the child's heart was stopped and being worked on.
There were other weird attempts to create a heart-lung machine to oxygenate and circulate the blood during an operation, and by today's standards they would be appalling: monkey lungs, and complex machines that never saw the light of day.
But now open-heart surgery is commonplace and taken for granted.
That said, I don't know if Neuralink is doing something really important or is playing it too fast and loose.
I'd like to point you to the Wikipedia article on Werner Forssmann, who performed the first cardiac catheterization, using himself as the test subject. And more.
> He ignored his department chief and persuaded the operating-room nurse in charge of the sterile supplies, Gerda Ditzen, to assist him. She agreed, but only on the promise that he would do it on her rather than on himself. However, Forssmann tricked her by restraining her to the operating table and pretending to locally anaesthetise and cut her arm whilst actually doing it on himself. He anesthetized his own lower arm in the cubital region and inserted a urinary catheter into his antecubital vein, threading it partly along before releasing Ditzen (who at this point realised the catheter was not in her arm) and telling her to call the X-ray department. They walked some distance to the X-ray department on the floor below where under the guidance of a fluoroscope he advanced the catheter the full 60 cm into his right ventricular cavity. This was then recorded on X-Ray film showing the catheter lying in his right atrium.
> That said, I don't know if Neuralink is doing something really important or is playing it too fast and loose.
When I was young, I thought the concept of neural interfaces was fascinating, and couldn't wait until it would become available. Now I understand that they would be nothing more than an excuse to wire advertisements directly into our brains, and I think maybe humanity doesn't need them, after all.
Actually, now that you say that, I realize that most medical devices are connected to one (or more) motherships. If you define "advertising" broadly enough (i.e., surveillance), this will definitely happen.
I think that someone would buy out the failed company and then suck up all of your thoughts so they could sell that to advertisers and maybe the police.
Other benefits for politics: if you are reluctant to go to war, or planning to escape it, you can be detected early and sent to a re-education camp (at least until thought injection is developed).
Whoa, I remember reading about these things in some "How Things Work" book when I was a kid. Sad to see that this is what became of them, but I'm not surprised.
This is such a tired, vapid take, though. There are plenty of reasons to be skeptical of BCI tech (and Neuralink in particular), but "they will just use it to serve us ads" completely overlooks the potential of this tech.
Is it? The internet has incredible potential. For a brief period of time in the early 2000s it felt like it was going to live up to that potential. Now it's a festering wound on our collective psyche as a species.
Augmenting our minds directly with our technology probably does have the potential to bring humanity to soaring heights. But in our current state, the powerful people who will be responsible for this technology will not use it for our good. They'll use it to further enrich themselves.
... which yes, probably means serving us advertisements.
It doesn't overlook the potential. It decides the potential, however amazing, isn't worth it.
There is amazing potential in this tech. But also incredible potential for abuse by the powerful. And the powerful have a way of getting away with such abuse.
Well, I am reminded of the artificial 3D-printed tracheas a doctor implanted in Russia. It took an ocean of whistleblowers, documentaries, and journalists, all of whom were threatened and attacked with legal abuse, before anything was done about the out-and-out medical fraud he was perpetrating, with the help of US institutions no less. He basically used human beings as guinea pigs by flat-out lying to them about the test results of the procedure.
I presume the subject must've been in pretty bad shape in terms of health, to assume the inherent risk of brain surgery at the micron scale. Best wishes to them, and I hope humanity can benefit from knowledge gathered with this breakthrough.
The first requirement for participating in the study:
"Have quadriplegia (limited function in all 4 limbs) due to spinal cord injury or amyotrophic lateral sclerosis (ALS) and are at least 1-year post-injury (without improvement)"
Ideally yes, but the liabilities are likely too great at this time. So better start with the "hopeless", where no harm can be done, then go up from there.
The only way to get approval to experiment on human subjects is by specifically intending the device to treat a condition, and then only testing on individuals with that condition. You can't just take volunteers off the street and start testing chemotherapy drugs on them, for example; this is to avoid exploiting the poor and indigent as human lab rats.
They need to thread the needle of a condition so advanced that a brain link device would potentially restore a basic essential of life but also not so advanced that the patient can no longer give consent.
It's worth thinking about just how much could be accomplished with a device like this even at the theoretical minimum bandwidth: just one input and one output signal, like Morse code. If your brain could reliably send and receive text that way, all of a sudden your brain has a built-in CLI connected to a computer and the internet. Easily calculate numbers, telepathically communicate with others worldwide, look up anything with AI, etc. That all seems highly doable, let alone all the other applications with higher bandwidth.
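As a rough sketch of how little machinery that minimal channel needs, here's a toy Morse-style encoder/decoder over a single binary symbol stream (purely illustrative; nothing here is a real BCI API):

```python
# Toy sketch: arbitrary text over a single on/off signal, Morse-style.
MORSE = {'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.',
         'F': '..-.', 'G': '--.', 'H': '....', 'I': '..', 'J': '.---',
         'K': '-.-', 'L': '.-..', 'M': '--', 'N': '-.', 'O': '---',
         'P': '.--.', 'Q': '--.-', 'R': '.-.', 'S': '...', 'T': '-',
         'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-', 'Y': '-.--',
         'Z': '--..'}
DECODE = {v: k for k, v in MORSE.items()}

def encode(text):
    # Letters separated by single spaces, words by ' / '
    return ' / '.join(' '.join(MORSE[c] for c in word)
                      for word in text.upper().split())

def decode(signal):
    return ' '.join(''.join(DECODE[sym] for sym in word.split())
                    for word in signal.split(' / '))
```

Round-tripping `decode(encode("hello world"))` gives back `HELLO WORLD`, so even a one-signal channel is enough for arbitrary text, just slowly.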
> Tech Enthusiasts: Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via alexa! I love the future!
> Programmers / Engineers: The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise.
I would not want my brain connected to the internet. Imagine giving 4chan the power to fuck with humanity's brains on a global scale.
A lot of that stuff is homebrewed with open source software, at least among the programmers I know. My coworkers who are all in on the latest Nest or Alexa crap tend to be young executive/management types who are overly concerned with "lifestyle". The kind of guys who dress like they're heading to the yacht club straight after work.
That has been possible without brain surgery for decades. This was literally a demonstration by undergrad students when I was visiting a university in 2011 or so. They used one of those "shower caps" you place on your head to sense brain waves.
You're missing the point entirely; it's not about whether the shower cap is or is not enough. It's about how the shower cap is far less intrusive and dangerous than brain surgery, and yet nobody is wearing them. Ergo, maybe the most minimally capable version of this technology isn't as revolutionary as you've described.
(Ignoring that the Neuralink device has many different capabilities, including and especially being able to cause brain stimulus, which an EEG cap cannot. And that writing data into the brain is a major component of what I said.)
Writing data is a huge can of worms. It hits at the core of (my take on) "I think, therefore I am": once it might no longer be your own thoughts you are thinking.
My bachelor's thesis in psychology was actually an experiment using positive reinforcement to alter brain activity as read by an electroencephalogram, also in 2011. My experimental design was flawed and I failed to show results. But getting participants to interact with externally placed electrodes was indeed a no-brainer (pun intended) for a 23-year-old Icelandic kid with minimal help.
It terrifies me that a multi-billionaire is more deluded about this technology than a BS student.
Brains are not computers; they work fundamentally differently.
Our brains have fuzzy inputs and outputs, and are probably closer to quantum computers.
You can't plug into their fabric and somehow have a side bus that does something.
For participants in this study, the eligibility is:
> Have quadriplegia (limited function in all 4 limbs) due to spinal cord injury or amyotrophic lateral sclerosis (ALS) and are at least 1-year post-injury (without improvement)
These are people with very poor quality of life and the help this could potentially provide is life changing in the best possible way. The reward almost certainly outweighs the risk for everyone involved.
Neuralink legitimately terrifies me. Not because "brain implant", but because of the multitudinous factors for failure surrounding it.
For example, lots of bio things can take a lifetime to have an effect. One crystal clear example of this is the buildup of AGEs in collagen and the like over time as a part of damage accumulation in the process of aging. Tons of drugs have taken decades to show signs of causing cancer, and whatnot.
Additionally, brain implants have been done before, but I honestly don't trust what comes out of Neuralink: not just because of the madcap madhouse environment that many employees have described, but also because of the track record of dead animals and the rather notorious examples of corner-cutting at other similar companies. It's a human life. Is this thing on? Are we thinking here? This is madness to me.
For example, what happens if there is progressive scarring from the electrodes over a few decades? What about unintended side effects of certain kinds of signalling over several decades? Oh, you have a ground leak across leads? How does that impact brain dynamics over time? Why are we using humans as the subjects to find these things out? Why are people volunteering for this?
Like, I generally don't have a major conceptual issue with implants themselves in terms of the actual device. But I have a major issue with a company that comes from a fold of companies with "plays it fast and loose" as a defining attribute, and that has killed a lot of its own lab animals in experiments. I'm semi-okay with brain implants happening through another organization that is much more responsible, but I really hope Neuralink gets shut down before they do too much damage.
Just like asbestos snow decorations, you don't know the damage until it's too late. And maybe that's an okay analogy: structurally, the leads aren't too terribly different from the asbestos fibers that created scarring (though of course, asbestos is inherently carcinogenic on its own, if I remember correctly).
Anyways. Hopefully people stay safe and we avoid the likely horror story that this is. And if it all comes crashing down in 30 years, I hope we look at this and don't do it again. :'((((
Something like 4% of males admit to steroid use at some point. If you consider that non-gym-goers probably aren't using steroids (let's hope), that means a significant portion of fit males are. Heck, it doesn't take long at the gym to see women who are dabbling, if you're in a major metro area. And from reading their posts in online forums, I'd say most know there are severe long-term problems. They try to write off the people dying in their 20s and 30s as people who just took way too much.
People seem to be perfectly happy to trade future years of their life for present benefits.
> Something like 4% of males admit to steroid use at some point. If you consider that non gym goers probably aren't using steroids (let's hope), that means a significant portion of fit males are.
> There is a distinction between lifetime ever-use of AASs and chronic use. Lifetime prevalence use includes a high percentage of short-term (even a single episode of) experimental use in teenagers and young men.
Significant? Are we talking 40 percent or 4 percent of gym goers?
My perception is that most gym-goers are ordinary folks who want to get healthy, not gymbros who want to build giant muscles. Of course, this is at a single gym, so it's anecdotal.
So ignore the ones who don't work out at all. Now probably also ignore the ones just going through the motions at the gym who clearly aren't trying to achieve real muscle growth.
It stands to reason that if you see someone fit and with decent muscles, they're part of the group that might have taken them.
If it's 4% of all males, that makes me think it's close to 25% of males with a good amount of muscle. Of course, it's skewed by basically 90% of people with massive amounts of muscle taking them. But people with massive muscles alone don't make up 4% of the population.
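The arithmetic behind that 25% guess is just conditional probability. Only the 4% figure comes from the survey claim upthread; the 16% share of "visibly muscular" males below is a made-up assumption for illustration:

```python
# Back-of-the-envelope conditional probability for the 25% guess.
# p_muscular is an assumed number, not data.
p_user = 0.04      # P(male has ever used steroids), from the quoted figure
p_muscular = 0.16  # assumed P(male is visibly muscular)

# If essentially all users fall in the muscular group:
# P(user | muscular) = P(user and muscular) / P(muscular) ~= p_user / p_muscular
p_user_given_muscular = p_user / p_muscular
print(p_user_given_muscular)  # 0.25
```

Change the assumed 16% and the conditional estimate moves proportionally, which is exactly why the guess is hard to verify.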
90% of statistics are garbage. More seriously, there is a massive replication crisis, and these types of studies are at the center of it. Did they survey 20 people? Did those people happen to be 20-year-old fratbros on a college campus?
Even if you set survey methods aside, it has been well established that a single-digit percentage of people are simply survey obstructionists. They will say the sky is red on a survey because they don't care or are trolling.
> If it's 4% of all males, makes me think it's close to 25% of males with good amounts of muscles. Of course it's skewed by basically 90% of people with massive amounts of muscles taking them. But massive muscles alone doesn't make up 4% of the population.
Reasonable guess, but hard to verify.
It seems, for example, that every YouTuber who looks like they have a decent amount of muscle gets hit with the accusation that they used steroids.
The commenter is just trying to point out that people are willing to risk themselves for a perceived short-term or long-term benefit. Professional athletes might be on steroids even though we don't know with certainty whether there are long-term hidden impacts of doing so.
Rather than hoping it gets shut down because of your fears, why not just take the approach of ensuring the FDA closely regulates the safety and efficacy of the technology. They do clinical trials and study these things to address concerns like yours.
My friend works for a medical device manufacturer, helping prepare for FDA inspections and related quality control measures. As I recall (this was a couple of years ago), he said that the FDA is fairly stringent but could be more so, and he wishes they'd levy higher fines: if getting fined for a mistake were a bigger financial hit, companies would be better incentivized to avoid the mistakes that the FDA currently catches and has them correct.
If it worked like that, that would be great! However, our regulatory frameworks unfortunately seem to be in a rather weakened place right now. The medical industry is rife with corruption in a number of places, as best as I understand the situation. And even if it weren't, measuring safety is a very hard thing to begin with. :'/
> Have quadriplegia (limited function in all 4 limbs) due to spinal cord injury or amyotrophic lateral sclerosis (ALS) and are at least 1-year post-injury (without improvement)
Even if all the problems you are talking about happen, I would venture a guess that the vast majority of quadriplegics would jump at the opportunity to use their arms and legs again, even if it meant an overall shorter lifespan or other complications.
I agree it seems like a horror story from where we’re sitting, but at the same time, I could see how our conception of integrating brains with computers could become an outdated way of thinking. In the future uploading your mind to a computer might be a casual thing you do to play a video game.
The best analogy would be showing someone from the 17th century a modern prosthetic device. There is a realistic chance that they would hang you for witchcraft. A mere digital camera would be considered interfering with the laws of nature and the work of the devil or something. We may be harboring similar biases in the present day we don’t even realize.
> I agree it seems like a horror story from where we’re sitting, but at the same time, I could see how our conception of integrating brains with computers could become an outdated way of thinking. In the future uploading your mind to a computer might be a casual thing you do to play a video game.
Mind uploading has the same philosophical problem as teleportation. That is why some people think Star Trek teleporters are actually death machines.
That's not the issue the OP is concerned about. He's concerned that the device will cause irreversible damage to the information substrate that is the brain.
> The best analogy would be showing someone from the 17th century a modern prosthetic device. There is a realistic chance that they would hang you for witchcraft.
The problem with using analogies is that the logic might not apply. Just because we can conceive that certain things will be different in the future doesn't mean they will be.
For example, it is unlikely that we can travel beyond the speed of light, because the speed of light is the speed of cause and effect. You would need to show that cause and effect still hold when we travel faster than light or that our most fundamental assumption about reality is wrong.
The imagination of the 20th and 21st centuries has explored a vast conceptual space of technology. It is unlikely that new technology from the 23rd or 24th century would get you hanged for witchcraft in the 21st.
See for example the 1979 story Newton's Gift, where a man is troubled by the thought that the scientific greats from hundreds of years ago had to waste months or years of their lives doing manual calculations.
So he invents a time travel machine, and goes back in time to give Isaac Newton a modern electronic calculator.
> And has killed a lot of their own lab animals in experiments
Killed in horrifying ways, in many cases for no reason at all. And definitely too quickly, compared to what would have been expected if they were trying to learn from their mistakes so that, one day, human implants would be successful. And, even then, there would still be all the unknowns you are talking about.
> if it all comes crashing down in 30 years
I give it 3, tops. Save this comment and add to a calendar if you have to.
There's an interesting discussion to be had about informed consent in a medical context, and what really counts.
I think taking a maximalist view of "this person actually knows what they're signing up for" would lead to almost no one being capable of consenting to almost any treatment. Do you actually know the pharmacology of acetaminophen? Are you aware of all the possible complications of an appendectomy? Can you truly grasp the difference between a one in a thousand occurrence and a one in a million occurrence?
The minimalist view of "this person said yes and so the treatment is ethical" is of course, also fraught.
I'm reminded of the potential for COVID vaccine challenge trials. Tens of thousands of people signed up to be deliberately infected with COVID so that we could complete the vaccine trials more quickly, and in the process save far more lives. Medical ethicists ultimately prevailed in not allowing such trials for the vaccines, arguing that no one involved could truly consent given the unknown effects of COVID.
But what about all the people who didn't consent to getting COVID the old-fashioned way, and who wouldn't have gotten it had they had a vaccine?
Yeah, the paternalism was out of control, given that billions of people could and were making daily choices to expose themselves.
A nurse could choose (or be strongarmed into) treating sick patients. Parents and spouses can choose to tend the sick. Hell, people could and did go to COVID parties to get infected.
Having killed many test animals could be about finding the limits. It certainly isn't, by itself, a sign that the tech is poorly developed. (Though another response says that the deaths were irresponsible.)
At the same time, killing many test animals says something about the ethical disposition of Neuralink. Not that they are unethical, but that they come down less on the side of caution in the ethical debate, and more on the side of "not developing life-improving tech is unethical". Which is certainly a colorable ethical stance.
I wonder how many of the potential connectivity problems the brain would just figure out on its own. Human brains are incredibly adaptable. If a sense is lost, that brain area gets used for other activities. If a nerve for an important process is damaged, the brain does figure out a way to send the signal through other pathways.
The way you word it, it's as if they were a bunch of tech bros putting microchips in people's brains and running a remote debugger for fun, in a completely unregulated environment.
The people that accept these kinds of implants during a "prototype phase" are usually in a very bad position, so the risk is worth it for them, as the technology can drastically improve their lives.
Other implants like the Utah array have killed just as many animals, to be honest. People are upset because the monkeys seem intelligent, are cute, and Elon is connected to the company. They stick their heads in the sand the moment you mention that in some parts of the world these monkeys are kill-on-sight pests beaten to death in droves by people with sticks.
> For example, lots of bio things can take a lifetime to have an effect. One crystal clear example of this is the buildup of AGEs in collagen and the like over time as a part of damage accumulation in the process of aging. Tons of drugs have taken decades to show signs of causing cancer, and whatnot.
Which drugs are you talking about? As far as I am aware, while there are pollutants in the environment that are damaging, a lot of damage is caused by "lifestyle" factors (arguably, the causes are still systemic).
> Like, I don't have an issue with implants, I really don't. But I have a major issue with a company that comes from a fold of companies who has "plays it fast and loose" as a defining attribute. And has killed a lot of their own lab animals in experiments.
Are you talking about the safety issues at SpaceX's Starbase facility and manufacturing safety at Tesla factories? There's some controversy about Autopilot as well. I know there are also controversies at Neuralink, but it isn't clear to me how true they are. In any case, testing on animals isn't the same as testing on humans.
Falcon 9 is one of the most reliable and most-tested rockets of all time, and NASA is confident about launching astronauts on the Falcon 9 and the Dragon capsule.
I've recently read articles about people who have been left without support after companies selling health related tech have gone out of business. That in itself must be concern enough for anyone interested in this.
What is stopping the following scenario:
- This technology matures and becomes effective
- In country X, 'Enemies of State' are forcibly implanted with a Neuralink?
In A.C. Clarke's 3001, everyone is wired with implants, and instead of prison the implant will simply take you over and make you perform some laborious task until your sentence is served.
Alright I’ll say the obvious since no one else will. This technology is far too dangerous to ever see widespread adoption because no one can be trusted not to use it for power and leverage over everyone who is chipped.
Think about it though, the ability to control a computer from your brain would be incredible. It's what UI development is constantly pushing towards - to be as close as possible to mind reading. This would literally be mind reading.
And that is what scares the living bejeebers out of me. How is it to differentiate thoughts with intent behind them from random stray thoughts? Or intrusive thoughts? Have you ever gotten frustrated and had the thought go through your head, "I should just rm -rf everything"? I really don't want the computer to carry that out just because that brilliant flash of insight went through my head.
Or thinking quietly in your head, 'thinking out loud' in your head, and actually vocalizing out loud.
This is (partly?) learned behavior. Children under the age of 5-6 will have audible self-directed speech, but eventually they learn to mute this and remove vocalization altogether.
It isn't able to delete your brain unless your brain is a bunch of files.
These "augments" will likely be, at best, some kind of subsystem you can tap into, like an extra limb, or a memory you didn't know you had until you "recall" it.
You misread my concern -- it has nothing to do with deleting your brain. My concern was: if you are thinking an intentional thought, "Write a letter to Gramma", and you had a passing thought of "Format and wipe my C: drive", I would hope the mind reading would distinguish between the two.
Now if it is the same thought process as controlling a limb, well, that sounds like the same mental energy needed to type something out on a keyboard or speak commands, both of which happen at close to the same speed as thinking the words to begin with (talking definitely; typing within a high enough fraction for most proficient typists).
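For what it's worth, the mundane fix for the stray-thought worry is probably the same one shells and installers already use: a confidence threshold plus an explicit confirmation step for destructive actions. A toy sketch; the decoder confidence score and the command list are entirely hypothetical, not any real BCI interface:

```python
# Hypothetical policy gate between a thought decoder and command execution.
# Nothing here models a real BCI; it's only the decision layer.
DESTRUCTIVE_PREFIXES = ("rm -rf", "format", "wipe")

def should_execute(command: str, confidence: float, confirmed: bool = False) -> bool:
    """Run a decoded command only if the decoder was confident,
    and require a deliberate second confirmation for destructive ones."""
    if confidence < 0.9:  # drop low-confidence stray/intrusive thoughts
        return False
    if command.startswith(DESTRUCTIVE_PREFIXES):
        return confirmed  # destructive: needs an explicit confirm step
    return True
```

So "write a letter to Gramma" at high confidence goes through, while a fleeting "rm -rf everything" stalls until deliberately confirmed, much like a sudo prompt.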
I read a lot of science fiction books where they discuss all kinds of stuff like this that is pure speculation. The only difference between those books and Elon's latest racket is that they are honest about the fact that they are fiction.
Stop hyperventilating over this. There are serious research projects worth following that are far more likely to deliver useful technologies.
I'm pretty sure whoever receives such device to help them move/see/hear or whatever again, would much more prefer having an "unsupported" device (however that may manifest itself), to not having it at all.
Besides, I don't think devices of this nature would be subjected to the same market forces as smartphones or other devices, so cycles would be much longer and obsolescence wouldn't be a problem.
Also, this is not the first time people would have electronics implanted into them for medical reasons.
Do we know what features are already supported? Or rather, what range of features are feasible with the actually implanted technology? There's kinda only two broad categories of potential features, right?
* "Push" (sending out your brainwaves to do stuff)
* "Augmentation" (enhancing performance of the brain itself)
> Do we know what features are already supported? Or rather, what range of features are feasible with the actually implanted technology? There's kinda only two broad categories of potential features, right?
We don't even know if this even happened. Neuralink has not replied to any news networks that I have seen.
They have demonstrated a chimp playing pong (decades old tech).
The version of the device described on Neuralink’s website for the current trial only reads signals from the brain to control external software. So this is just for providing functionality and daily use to individuals with quadriplegia, not augmenting the brain itself.
Depends on where you implant it. In this case it's attached to the motor cortex and will allow for controlling a computer, and in this case there's no feedback into the brain itself (no write, only read). Sending signals to the motor cortex is probably not useful even if the device can do it.
People may not like me saying this, but brain implants like this becoming commonplace, effective technology is worth a lot of people dying. Probably half a million deaths at least. (Risked knowingly, not unknowingly.)
I think the potential is too high.
Early airplanes killed a lot of people, but it was totally worth it. Same with transoceanic sailing.
Brain implants can let me think pictures and music and feelings directly into/with other people. Let us record and replay memories, including tastes and smells (imagine YouTube for fully featured memories). Add cheap, no-touch, low-latency control interfaces and neural hotkeys for common objects and machines. Instantly learn new skills by trading brain volume for internal ASICs. Instantly solve drug addiction. I'm probably only touching on a small portion here.
Brain computer interfaces are a platform for the future in the same way computers as a whole are.
I did not volunteer, as I would not accept it given the current risk level, or costs. But there are people who would. And as that risk is reduced, and the risk window pushed back, more and more people would, so the sum total of people dying after using a device over ten to fifty years would increase, but the result of the technology's widespread use would be rapid improvement that we would all mutually benefit from.
Consider that cars kill easily a million people every year, and they are totally normal. That's not some crazy experimental technology. People decided: cars are/were worth millions of deaths.
If you are still not convinced, consider what I said but in the context of airplanes in the 1910s.
Nobody should be forced into it; I did not say they should. And they should be well aware of the risks, short-term and long-term, and potential unknown risks as well. But people use technology and enjoy it, despite dying from it.
The trouble with your utopian thinking is that we do not have an education system that reliably equips people to make rational decisions, nor do we have information hygiene rules that help reduce noise so that people can make rational decisions. Last, but far from least, we do not give people the time to think and/or collect data.
So, this is not to say that any of this tech should be banned or regulated. Regulation itself is at risk of being irrationally/dishonestly captured.
It's just that there's no cause for celebration either. A lot of people are going to suffer, and ultimately even if the tech does mature, people will find far more ways to enshittify it for a quick buck.
In other words, even if in the best case scenario we might have new gadgets etc. the world won't really change. The shape of our conflicts (information vs. misinformation) will remain roughly the same, and the balance of our power in those conflicts will remain roughly the same.
As a consequence: the probability that you (or yours) will get to benefit, rather than merely be the temporarily-disgraced winners will remain roughly the same: quite low.
I do not have any utopian thinking. I just said "wow, paper is such a great invention. If I had paper I could send messages really far away, and write down my thoughts, and draw and write. Everyone could benefit in so many ways if paper were ubiquitous. It's probably worth a lot of effort and sacrifice."
And you're the one going on about politics, regulation, and utopia. It's just not relevant.
> Brain implants can let me think pictures and music and feelings directly into/with other people.
So this hypothetical device can insert images, sounds, and emotions directly into a person's mind?
> Lets us record and replay memories, including tastes and smells.
Any device that can do this can implant false memories as well.
> Instantly solve drug addiction.
In other words, it has an extreme degree of control over one's emotional state.
You don't even mention a single downside of this tech, only your grand utopian vision. Have you thought about them at all? Handwaving them away with "well airplanes used to crash more, I'm sure we'll figure it out" doesn't count.
That's not hypothetical; those are pretty obvious functions of the tech. If two people have it implanted in similar regions, and both have record and playback and can send files to the other person for playback, then you get thought transmission. Probably lossily. Like two people with internet-enabled laptops. (More useful if both of them can read and write a common language.)
There are downsides of course, but the benefit vs. downside of most new technology is asymmetrically in favor of the upsides, primarily due to human nature being asymmetrically good-natured. You can use an iPhone to organize terrorist attacks and scams, or to call your loved ones. That's up to the users. 99.9999% of users are just using it to do normal human social stuff. Probably the same for implants, but cooler. That doesn't make an iPhone utopian. That's your word, not mine.
Paper and conversation, audio recordings, cameras, and video do many of these things, just at lower bandwidth.
As for new vulnerabilities, I don't know if we will be able to interpret incoming signals via brain interface as foreign, but I'd guess that, like any other input, if you didn't grow up with it you'd be susceptible to manipulation. But if you grew up with it you'd probably have some sense for discerning between your ideas and others' ideas. (Sort of like how my grandma thinks scam emails are from honest people, while to younger folks they're obviously malicious.) Could be wrong though. Only one way to find out. Put one in grandma.
> the benefit vs. downside of most new technology is asymmetrically in favor of the upsides
I don't see how that follows from this:
> primarily due to human nature being asymmetrically good-natured
Even though I agree with the second one.
How many people are involved in adtech, finding new and ever more cynical ways to collect, mine, and exploit your data to manipulate consumer behavior to their benefit? A tiny minority. And yet their work influences nearly everyone in the developed world every day.
How many people are willing to use nuclear weapons against their fellow humans? Again, a tiny minority, but it only takes a few. If the majority of people being good natured was a safeguard, the existence of nuclear weapons would be of no concern to anyone.
There's an asymmetry you're not accounting for: the asymmetry of power wielded by those who are willing to use a technology for nefarious purposes. It only takes a handful of people (relatively speaking) to buy and sell the personal information of billions, to deploy a weapon, or to exploit for personal gain what is essentially a mind control device as you've described it.
Also, while people may be "asymmetrically good natured", I would argue that they're also asymmetrically naive and ineffectual. Just look at domestic spying in the Five Eyes nations. That was considered a wild conspiracy theory for years until Snowden proved it was happening. Then after he proved it, tons of people just shrugged and said "well they're doing it to protect us, and I have nothing to hide anyway". A lot of that "good nature" people have isn't goodness, it's harmlessness. They're not good like Batman, they're good like a bunny rabbit. That's part of the reason it's so easy for the evil minority to wield such power over them.
You have provided a few water drops of counterexamples in an ocean of cooperatively used technology. Fire, buildings, wheels, hammers. You are so used to this world of asymmetrically positively used technology that you don't even notice it.
I would not forgo the internet itself for fear of adtech existing. Acceptable consequence; worth it. Same for implants. For all of the unimaginable malicious avenues provided by the internet, I still would rather it exist and be ubiquitous than not.
As would I, and I work in tech myself. Technology is a wonderful thing, when used wisely.
My argument is not that technology is bad. It's that every technology isn't automatically good just because it's technology and "people are nice". There's a reason nerve agents aren't available over the counter at your local pharmacy.
I believe that almost every technology is net good.
The reason I believe that is that I fundamentally believe almost all people are selfish, semi-rational, and cooperative by default. And I think that positive behavioural inclination may have evolved as a solution to an environment that required game-theory-like iterated cooperation, where the emergent solution was something like 90% cooperative actors and 10% exploitative actors.
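The "iterated cooperation" intuition above can be sketched as a toy iterated prisoner's dilemma: a population that is mostly tit-for-tat cooperators plus a minority of always-defectors. The 90/10 split, the standard payoff matrix, and the two strategies here are illustrative assumptions, not a model of real populations:

```python
# Toy iterated prisoner's dilemma: in a mostly-cooperative population,
# cooperators end up outscoring the exploitative minority.

PAYOFF = {  # (my move, their move) -> my score; standard PD payoffs
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the opponent's last move.
    return their_history[-1] if their_history else "C"

def always_defect(my_history, their_history):
    return "D"

def play(strat_a, strat_b, rounds=50):
    # Play a repeated game and return both players' total scores.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_a, hist_b)
        move_b = strat_b(hist_b, hist_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def average_scores(population, rounds=50):
    # Round-robin: every pair plays once; return mean score per strategy name.
    totals, counts = {}, {}
    for i, (name_a, strat_a) in enumerate(population):
        for name_b, strat_b in population[i + 1:]:
            sa, sb = play(strat_a, strat_b, rounds)
            totals[name_a] = totals.get(name_a, 0) + sa
            counts[name_a] = counts.get(name_a, 0) + 1
            totals[name_b] = totals.get(name_b, 0) + sb
            counts[name_b] = counts.get(name_b, 0) + 1
    return {name: totals[name] / counts[name] for name in totals}

# Assumed 90/10 split: 9 cooperators, 1 exploiter.
population = [("cooperator", tit_for_tat)] * 9 + [("exploiter", always_defect)]
avg = average_scores(population)
```

With these numbers the cooperators average roughly 139 points per game while the lone exploiter averages 54: defection wins any single encounter, but retaliation makes it a losing strategy across many encounters, which is the emergent 90/10 equilibrium the comment is gesturing at.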
Even if that were not true, this particular technology is a platform, a medium for other technologies, and so I suspect the distribution of its use and misuse will be nearly identical to that of all the other similar technologies, such as paper, phones, the internet, etc.
Damn. I'm all for finding ways for this technology to help people, but you sound like a fucking psychopath. Life isn't as simple as "trade X number of human lives for a shiny new utopia".
Well, my context was a lot of people in the thread bitching about how the technology could maybe be dangerous and people might die from the implants 20 years later from brain damage, but nobody mentioning all the cool shit you can do with it. So it's probably worth the risks.
Just like cars, planes, surgery, etc. They're all dangerous as fuck, but were worth the risk.
Reading this, my privacy anxiety goes up again, but then I realize the brainwave decoder/transcriber study came out at least 6 months ago... so this is actually good. Maybe.
We need new privacy technology though. Probably a combination of tinfoil hats, biotech, and advanced cryptographic algorithms. I believe those will be made cheap enough when the time comes.
My favorite home cooked conspiracy theory: Neuralink's input/output capability will be made accessible to Starlink satellites and anybody with a chip will be able to have their thoughts/nervous system controlled from anywhere on the planet.
This will inevitably lead to a band of underground rebels who live inside a Faraday cage and use abandoned missile silos to shoot Starlink satellites out of the sky. As each satellite goes out of commission, more Neuralinked people are freed from control, and they descend underground to join the army.
Over time, rumors spread among the underground population that the cages have been augmented with a local control source. Hysteria spreads and the cages are torn down, with the Starlink satellites immediately retaking control, but this time with a much more subtle approach.
The result is an underground war of all against all, where nobody can trust their own thoughts.
https://www.ahajournals.org/doi/full/10.1161/CIRCULATIONAHA....
King of Hearts: The True Story of the Maverick Who Pioneered Open-Heart Surgery (ISBN 0812930037)