The AI industry turns against its favorite philosophy (semafor.com)
54 points by spopejoy on Nov 22, 2023 | 80 comments



Risk from GAI is very nebulous, even if we assume that it is a given. It's not exactly clear how you will align an AI for the benefit of humanity, let alone how to align humans with other humans in a mutually beneficial arrangement.

Let's not forget that benefit for humanity is also a somewhat nebulous term, especially since humans can sometimes have goals incompatible with each other's. However, it's not impossible to have some kind of agreement, like making sure that we all have shelter, food, and water, and so forth.


EA continues to look like moral cover for good old-fashioned profit maximization.

The "Effective" part is immediate and drives short-term incentives like maximizing profit. The "Altruism" part is hypothetical and delegated to a future date where the resources acquired "effectively" are deployed for the benefit of humanity. In the gap between the two are fallible human beings, not necessarily more virtuous than average, that task themselves with making decisions about what's important.

The focus on hypothetical dangers like GAI is telling here. If you can invent an apocalyptic boogeyman that you're "addressing" by funding thinktanks and sitting on company boards you don't actually have to spend anything on addressing immediate issues.


I think the EA community is very open about the uncertainties around the risks of AGI and the difficulty in addressing them. The point is that if there is even a 0.01% chance that AGI could threaten the existence of humanity then it's worth spending time and thought addressing it.

NASA has the Sentry program to look out for asteroids, with a budget of around $35 million, and the odds of a "big one" hitting in our lifetime are less than 1 in 10,000.
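
To make the expected-value arithmetic concrete, here is a rough back-of-the-envelope sketch; the 0.01% figure is the one above, the 8 billion population is my assumption, and future generations are ignored entirely:

    p_catastrophe = 0.0001          # the 0.01% chance mentioned above
    people_alive = 8_000_000_000    # assumed current world population
    expected_lives_lost = p_catastrophe * people_alive
    print(expected_lives_lost)      # 800000.0 expected deaths, before counting any future generations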

That said, the majority of EA spending is on global health and development to address immediate issues, on programs like Against Malaria.


> The point is that if there is even a 0.01% chance that AGI could threaten the existence of humanity then it's worth spending time and thought addressing it.

The disconnect I see is that if there is even a 0.01% chance that it could threaten the existence of humanity, then why in the world would anyone actively pursue making it happen before figuring out a way to mitigate that risk?


I think it’s arms race logic. “If we don’t pursue it before figuring out risk, worse people will pursue it without figuring out the risk.”


That's not logical, though. Assuming that "worse people" will develop it anyway, wouldn't it make more sense to let them, and focus entirely on how to mitigate the risk instead?


I mean, yeah. I phrased it in a weak way intentionally.


I think this is more about the longtermism branch of EA in particular. There are many EAs who are still focused on solving more concrete problems (like malaria, extreme poverty) and directing millions of dollars in charity to Against Malaria, Give Directly, lobbying for stricter lead regulations, etc.


EA/longtermism smells a lot like “intelligent design”; yet another attempt at belief in divine mandates for man.

Like the PR teams got together and intentionally conjured semantics that tickle the same biological responses seen in religious true believers.

Consider “religion” as a biological state, not an abstract philosophical framework. Behavioral economics has been leveraged to embed the hallucination that is American Civic Life similarly to embedding religious mind viruses.

Given what we now know from neuroscience about how we brainwave-sync with rooms of people just by being there, the leadership class reinforces among themselves hallucinations of their right to reach into all of our lives and claim them for themselves, just like the church.


For EA to tickle the same nerve that religion does seems like a feature, not a bug. Channel that urge into doing something meaningful, important and bigger than oneself.


Yup. I give to the extreme poverty and animal welfare funds, because they seem like a great way to give to well-vetted high-impact charities.

The longtermism stuff has always struck me oddly alongside those other obviously-beneficial causes. It's a bummer that's all some people think of when they hear EA.


Maybe because if all you want to do is to support things like that, there is no need for something like the EA movement in the first place?


I still find the idea of evaluating charities based on their effectiveness valuable, but I guess you could rely on charity evaluators for that without taking on the entire conceptual burden of EA.


Donating to causes that resonate with you isn't the same thing as donating to causes that work on the most necessary and urgent problem of humanity.


It is if what resonates with you is whatever you consider "the most necessary and urgent problem of humanity."


That's why there's a whole movement of people dedicated to convincing people to donate to the most necessary and urgent problem of humanity, and to trying to decide what exactly that is.

What we choose to donate to are organizations like churches, museums, theaters, makerspaces, and the like. They enrich our lives in the neighborhoods we live in, but they're not exactly as urgent as someone on the other side of the world dying of something perfectly preventable like cholera.


That's fair. Most of what I know of EA is the stuff that makes the "news" here on HN, which is generally not flattering.


> EA continues to look like moral cover for good old-fashioned profit maximization.

The "Effective" part is immediate and drives short-term incentives like maximizing profit. The "Altruism" part is hypothetical and delegated to a future date where the resources acquired "effectively" are deployed for the benefit of humanity. In the gap between the two are fallible human beings, not necessarily more virtuous than average, that task themselves with making decisions about what's important.

I do not agree that we should try to maximize our profit and then only help people at a nebulous distant point in time.

We all make priorities about things important to us. Some of that shows in what we choose to donate to. A lot of charities we donate to are not fundamentally urgent because they do not work on saving lives. A lot of EA is simply prioritizing what saves the most lives, right here and right now.

> The focus on hypothetical dangers like GAI is telling here. If you can invent an apocalyptic boogeyman that you're "addressing" by funding thinktanks and sitting on company boards you don't actually have to spend anything on addressing immediate issues.

These can be sincerely and strongly held beliefs. The fact that OpenAI is actually... not doing anything remotely about it makes me discount OpenAI as a serious organization, regardless of the nebulous and hypothetical nature of GAI.

For the record, I don't think we are able to even remotely approach the problem if we can't take care of the issue of aligning human actors. Once we figure that out, maybe we'll have a concrete path for aligning GAI, if or when it becomes a technological possibility.


> The "Altruism" part is hypothetical and delegated to a future date

The only resources we as people have are our actions. So, if you want to do some good, do it today. There's no shortcut and no reason to think anyone in the future will act other than precisely as you do now.


> A lot of EA is simply prioritizing what saves the most lives, right here, and right now.

Is this altruism, or just utilitarianism by a different name? This is obviously a serious ethics question, and one that likely has no objective answer - but utilitarianism has run contrary to the morals of most societies, generally.


"GAI" itself is a very nebulous term. Give me a definition without resorting to "acts like a human". Because if that's the touchstone, then one can argue GAI was achieved with Eliza in the 1960s.

All we have now are faster, better, more powerful Elizas, and as long as deep learning is the only tool we have, that's all we'll keep getting.


The debate over AGI won't be won by changing people's minds. It will be won by AI's capabilities rendering the question irrelevant.


The bigger question, of course, is whether the following statement is true:

"All we ARE now are faster, better, more powerful Elizas"

Is that all there is, but organic instead of digitally modeled? Organic brings in a lot of uncertainty and chaos. That, too, can be digitally modeled. Are we ghosts in our own organic machines, or are we simply those machines, hallucinating our own importance?


> Risk from GAI is very nebulous

Meanwhile, 11 hours ago over on Reddit a dude has connected a robot to ChatGPT. The robot sees and talks, and clearly has motorized limbs. All you need now is to turn verbs into servo commands:

https://www.reddit.com/r/nextfuckinglevel/comments/1811bct/m...


Cute. There are a number of people and labs working on integrating GPT models with robotics. I think you'd need to train a model on prompt-action pairs to get the best results, but you could potentially do what they did in the Voyager paper https://arxiv.org/abs/2305.16291 and have a pre-trained LLM just write the raw programs to control the robot, then save and re-use them with function calling.

https://www.microsoft.com/en-us/research/group/autonomous-sy...

https://www.microsoft.com/en-us/research/group/applied-robot...

https://blogs.nvidia.com/blog/eureka-robotics-research/

https://www.vice.com/en/article/93kb43/princeton-uses-gpt-to...

https://techcrunch.com/2023/11/10/ai-robotics-gpt-moment-is-...
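
To make the Voyager-style loop described above a bit more concrete, here is a minimal hypothetical sketch of the write-save-reuse idea; `llm_complete` and the notion of a generated `skill` function are stand-ins of mine, not any real robotics or LLM API:

    # Hypothetical sketch of a Voyager-style skill library, not real robot code.
    skill_library = {}  # task description -> Python source previously written by the LLM

    def llm_complete(prompt):
        """Stand-in for a call to a pre-trained LLM."""
        raise NotImplementedError

    def get_skill(task):
        if task not in skill_library:
            prompt = ("Write a Python function named `skill` that makes the robot "
                      + task + ". You may call these saved skills: "
                      + ", ".join(skill_library))
            skill_library[task] = llm_complete(prompt)  # the LLM writes the raw program
        namespace = {}
        exec(skill_library[task], namespace)  # sketch only; executing LLM output is unsafe
        return namespace["skill"]

    # get_skill("wave the left arm")() would run the saved or newly generated program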


It would be impressive if it had built the robot, serviced it, and deployed it to my house to do chores.


We can't agree on a non-leaky definition of benefiting humanity, and we can't be sure of a lossless way of communicating this ideal to an AGI if we could agree. We then don't know how our instructions will be interpreted (is it just pretending to understand), or if it will understand the concept of "greater good", which would allow it to justify to itself doing practically anything towards any stated goal.

On top of this, the people trying to race to this eventuality are trying to convince us that any of this is possible to constrain.

Despite binging enough Ghost in the Shell to entertain all of the scenarios I still don't think AGI is anywhere near and I'm just going to mostly ignore the possibility. One of those distant existential things I have no control over so I'll try not to think about it too much :)


> Risk from GAI is very nebulous

Those are the very best risks to motivate and manipulate people!

Plus there's no risk of nosey eggheads trying to explain "how it really works." Now we're completely into the land of untethered ego!


The operating key word here being "agreement", which, as far as I'm concerned, is the absolute antithesis of what EA is all about.


this is why Prompt Engineering is going to be so important

we don't want to start another "Your purpose is 'to serve man'" fiasco...


Effective altruism is a smokescreen term to make legislators and the public believe that tech companies have humanity's best interest at heart. It's just to prevent legislators from imposing legislative controls on the industry by making them believe that they have things under control... when they obviously don't.

Self regulation of industries has never worked and never will.


> Effective altruism is a smokescreen term to make legislators and the public believe that tech companies have humanity's best interest at heart.

I think it's a smoke screen term to convince wealthy people they have humanity's best interest at heart—pretty trivially. I find it impossible to believe that anyone else believes this.


You forget the people who are convinced that they are wealthy, and then are convinced that they have humanity's best interest at heart.

This describes many who fell for crypto Ponzi schemes, who were convinced that buying into the Ponzi scheme made them rich and the world a better place.


“Isolated” is the right word for EA.

Everybody else concerned about the future is concerned about climate change, but EA is at first silent and then reveals itself as a climate-denying movement when it does open its mouth, as every other imagined threat, even ones with a 10^-50 probability, seems to matter more to them.

But it is not accidental, because isolation is one of the most important techniques of cult mind control. The point of the Sequences is the same point as Dianetics: it is supposed to rewrite the way you communicate so you just can’t relate to “wogs” (the rest of us) and can only talk to true believers.


> In the sense that matters most for effective altruism, climate change refers to large-scale shifts in weather patterns that result from emissions of greenhouse gases such as carbon dioxide and methane largely from fossil fuel consumption. Climate change has the potential to result in—and to some extent is already resulting in—increased natural disasters, increased water and food insecurity, and widespread species extinction and habitat loss.

> 80,000 Hours rates reducing extreme risks from climate change a "second-highest priority area": an unusually pressing global problem ranked slightly below their four highest priority areas.

https://forum.effectivealtruism.org/topics/climate-change

You should point your climate denier accusations at someone, so readers can at least check that person's statements.


All discussions of EA run into the fact that EA is a diverse movement with lots of different opinions and “wings” of the movement with different priorities. So any statement like the above is going to have dissenters.

At the same time (and to practice charity, as many in the movement recommend): it is also perfectly accurate to note that there is an opinionated wing of the EA movement that believes AI risk is the most important X-risk facing us, and influential members of that movement have certainly made statements downplaying the relative risk of climate change. They’ve also managed to get some agreement from portions of the broader community. (And it is also hard to deny that some of the latter elements of the movement seem to be gaining ground in ‘capturing’ the brand, at least as far as it’s known to the public.)


Today it comes across like somebody is holding a gun to your head (climate change and the even worse political problem it entails) and they are worried your shoes are untied.


That's denial, because you will be forced into denial to defend that position (e.g. you’re always going to be caught saying it is not so bad, just turn up your AC, …)


Can you define what you mean by denial and point to a specific example? You're still being hypothetical and vague with "you will be forced into denial" - I would flat out say that's obviously false.

Saying the risk of climate change wiping out humanity is less than the risk of nuclear war doing so is not climate denial.

It may be wrong, but it's not climate denial.


I am very concerned about climate change, and I am somewhat EA adjacent.

> But it is not accidental, because isolation is one of the most important techniques of cult mind control. The point of the Sequences is the same point as Dianetics: it is supposed to rewrite the way you communicate so you just can’t relate to “wogs” (the rest of us) and can only talk to true believers.

Wow. This is just weird.


In what ways?


That person should find a copy of

https://en.wikipedia.org/wiki/Snapping:_America%27s_Epidemic...

I read the first edition circa 1989, when people still remembered the Moonies. When LSD got fashionable in the 1960s, parents blamed it for what seemed like sudden personality change (straight-A student athlete now smokes dope all the time, grows their hair long, gets the crabs, etc.), but after the crackdown all the psychedelic bands either went under, like the Beatles, or changed, the way the Rolling Stones and Pink Floyd did. By the 1970s the fear was much more that your kid would get ambushed at the airport and brainwashed, or maybe even kidnapped like Patty Hearst and transformed into a revolutionary.

I found out about the 1995 edition circa 2005 or so and thought at first "gee, that issue wasn't so salient then," but no: you had groups like Aum Shinrikyo, Al Qaeda, etc. that were not so prominent in terms of numbers but highly dangerous in behavior.

The Reaganite surge in inequality makes EA quite different from the Scientology of old (which wanted to drain people of their money and then make them work like slaves), because back then it was worth emptying the wallets of ordinary people. An EA club at a place like Stanford, on the other hand, is really a Potemkin village that creates the appearance of a movement: what matters in a group of 30 of them is the two rich-kid whales; the other 28 are there just to make it look like there is a movement. (E.g. Scientology has moved into big-ticket donations because they found "working people like slaves" isn't that profitable.) What happened to Patty Hearst is close to the right idea for a cult in 2023, except you don't want to kidnap them and encircle them, you want to encircle them in their own environment.


Neglectedness is one of the three pillars EA uses to decide how much impact you can have on a problem. EA is generally against funding climate change work because it is easily the least neglected x-risk, not because they're climate change deniers. They figure the non-EA community has already picked all the low-hanging fruit in the climate change area.


>But it is not accidental, because isolation is one of the most important techniques of cult mind control. The point of the Sequences is the same point as Dianetics: it is supposed to rewrite the way you communicate so you just can’t relate to “wogs”

At the very minimum, that might have been the way it worked with previous grifter cults. You can sort of characterize those as the ones that were created for a single founder to exploit the cult for money, sex, and adulation.

But, if these others are cults, then they're a very different sort. They do seem to want to communicate with others. That tends not to happen, as the other side seems to shut down communications one way or another (something like an immune response?).

> Everybody else concerned about the future is concerned about climate change, but EA is at first silent and then reveals itself as a climate-denying movement

I'm fairly sure your species is functionally extinct already. It's unclear why you're concerned about the future farther than, I dunno, 100 years out. While there are exceptions here and there, as a civilization you seem hellbent on wrecking your fertility. Any civilization that can't be bothered or otherwise refuses to create the replacements for those that will die of old age is already gone even if it doesn't know it yet.

Climate change isn't a problem, because 500 years from now there won't be anyone around to suffer it.


Cults have always had some focus on "recruiting", partially because they do recruit people that way, but also because it leads to despairing interactions with "wogs" that lead members to believe the outside world is corrupt.

I don't know if you've had the experience of having a friend get into Amway and then give you a long, rambling, nonsensical lecture about how they plan to make money at Amway... but you should, because it's an education in how discourses with that structure play a role in mind control. You see them in Scientology, you see them in scam supplement ads on YouTube (it's genius that they ramble for 45 minutes before revealing what they are selling), and you see them in EA.

As for risks, the cultural risks you are talking about are very real and not taken seriously enough, although sometimes I wonder if the largest urban areas are just full, and people who are living in a place that is full have an instinct to stop reproducing. If that's the case, and there is a significant population drop, it may turn around.


You provide some insight. I agree that these patterns exist and are meaningful.

But I see them in the climate change people too.


Yeah, the pattern that bothers me the most in people's behavior in social media in 2023 is the rise of "parareligious" movements which are about people finding meaning in (frequently politicized) beliefs.

The following pair of examples seems to piss everybody off. In one corner we have the person with conservative views who was just an ordinary person working, eating, sleeping and watching TV until some point in 2021, when they discovered purpose in their life because they would wake up each morning and read a lot about how everything the government has done about COVID-19 (vaccines, masks, closures, etc.) was wrong, and not just wrong because people made a mistake but because of some deeply seated moral flaws in the political class.

In the other corner is a person who believes they have long COVID or who believes they are exceptionally vulnerable to COVID so when they see someone isn't wearing a mask they put on two masks. On their Mastodon profile they say they'll ban you if you ever post a selfie of yourself not wearing a mask. (I had a friend who used to be into all sorts of things like music festivals and Martin Scorsese movies but now all she wants to talk about is fashion face masks.)

The thing both of these people have in common, though they'll deny it, is that their life was empty and meaningless doing ordinary things, but now that they're suffering under fascist oppression or are under mortal threat from a virus, every moment of their life is elevated. Either one will have a hard time accepting a "return to normal".


Everything I have ever learned about EA and its followers has led me to conclude that it is perhaps the largest and most complex internal belief system built to justify stressful amounts of motivated reasoning, so that people who have amassed unconscionable wealth can use that wealth to do basically whatever it is they wanted to do anyway, but with an ideological overhead to shield them from inevitable criticism. It's the peak of this "good billionaire" nonsense, where people can aspire to a level of wealth that is, to be clear, unattainable without exploitation of other people on an industrial scale, but justify it internally as "well, if I do that I can do the most good."

It's utter nonsense. It positions the person in question as some sort of arbiter of humanity's future, someone so intelligent and so well-rounded as to see the folly of mankind itself as an obstacle to be overcome, not as a collective where we move forward together but as a personal barrier to what you already know needs to be accomplished. It is arrogance without limit to assume that you personally know better what needs to happen than the total summed knowledge of billions of your fellow man, and I cannot take anyone who self-identifies as an EA seriously as a result. You, singularly, are not qualified to "fix" the fucking world; you are not able, let alone currently qualified, to do that.

To be clear: there are TONS of problems that the resources of one of these jackoffs could solve basically overnight. Flint's inability to have clean water is a great example, but just buying a shit ton of pipe and having it installed in a struggling city isn't flashy, it isn't exciting, it isn't paving the way for the future: it's just fixing a problem we know how to fix and that should be fixed, and that isn't enough. You can't put your name on it, you can't get famous off doing it (outside of Flint anyway), and therefore it doesn't get brought up.


I mean, I don't really care either way, but isn't that for the government to solve? They print whatever money they want at will; sometimes I think that a policy of forcing 1% of any money printing into this type of project would fix most of the world.


Oh, totally agreed. I'm just saying: if you, as this theoretical rich or soon-to-be-rich EA believer, are committed to improving the world by attaining and using wealth, there are problems out there that can indeed be solved with just... a big-ass check. Those exist. But most of them are also pretty boring. People need things, and you can provide them things. But because they're boring, they never end up on an EA's radar. No, instead let's dump shit tons of cash into a nonprofit that will create AI and also create tools to stop the AI they created from killing us all with nukes.


Sources on it being “climate denying”?



This is a forum post from a student with 18 total karma


so… it is typical. Find me an article on LessWrong that says climate change should be taken as seriously as AI. It might exist (probably with trigger warnings all over it), but let's see what the comments are like.


Here's the second result of your google search, which has 34 upvotes vs. 14:

https://forum.effectivealtruism.org/posts/pcDAvaXBxTjRYMdEo/...


… which also supports my point.


Someone arguing on the forums that EA cares too much about climate change supports your point, and someone arguing on the forums that EA doesn't care enough about climate change also supports your point? What exactly would they have to say to damage your point?


Leadership making climate change a priority and not making excuses for why it isn't.


Someone in the movement said "maybe we're NOT all gonna die in the next 1°C increase in average temp" = everyone in the movement is a climate change denier


I think a more accurate title would be "AI industry's favorite philosophy turns against the industry". My understanding is that the anti-AI bent is much more recent within EA.


No, it actually is much older.

https://extropians.weidai.com/extropians/0303/4140.html is Yudkowsky inventing the specter of a paperclip maximizer as a potential problem about 20 years ago.

He was, of course, the moving force behind both LessWrong and the https://en.wikipedia.org/wiki/Machine_Intelligence_Research_..., both of which formed one of the three strands that later merged into https://en.wikipedia.org/wiki/Effective_altruism.


Yudkowsky has been warning about AI x-risk for a very long time. I guess you could say the “halt all research” rhetoric from him is more recent, but I can’t say it’s surprising given his long-standing views.


In much the same way that rationalist groups attract the most irrational people, we're seeing that Effective Altruism is a magnet for psychopaths. I'm not saying that all people who practice EA are psychopaths; it's just that it provides a natural habitat and cloak for dark-triad types.


Are we seeing that? I'm certainly not. I suppose "psychopaths" will attach themselves to literally anything given the right opportunities, but EA seems like a relatively poor target, all things considered.

At its core, I struggle to see how anyone can disagree with effective altruism as a basic principle. People want to do good in the world, but we're all monkey-brained and emotional, so that often manifests in ineffective use of time and resources to do things that give an immediate, visible outcome so we can feel good, instead of what would actually do the most good.

EA to me is basically just shifting altruistic desires from emotional-payoff (volunteering at a soup kitchen around the holidays, donating to a local animal shelter) to things that have the best outcome (volunteering at a political action group to improve laws, donating to an NGO distributing mosquito nets in malaria zones).


There's an element of "the ends justify the means" in EA that can lead to bad outcomes. An extreme example is SBF's hypothetical "coin flip"[1], but one could see how making the world worse in the short term could be justified with EA as long as those actions might make it better in the long term. Just as crucially, the meaning of "worse" and "better" is often not left up to the communities being affected but to each EA practitioner.

[1] https://www.businessinsider.com/sam-bankman-fried-coin-flip-...

> If you flip it and get heads, the entire world improves by more than double.

> If you get tails, the world is destroyed.

> Sam Bankman-Fried said he would flip the coin — and urged everyone else to do so, too, Caroline Ellison testified in court Tuesday.
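
For concreteness, here is a tiny sketch of why the repeated version of that bet ends badly even if each individual flip has positive expected value (my arithmetic, not anything from the testimony):

    p_survive_one_flip = 0.5          # tails destroys the world

    def p_world_survives(n_flips):
        return p_survive_one_flip ** n_flips

    print(p_world_survives(1))    # 0.5
    print(p_world_survives(10))   # ~0.001: keep taking the bet and ruin is near-certain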


> At its core, I struggle to see how anyone can disagree with effective altruism as a basic principle

I don't think anyone disagrees with that. The issue isn't the idea that we want to engage in altruism in the most effective way we can, the issue is the EA movement.


I don't get how effectiveness is a prerequisite. Be altruist, or not.

Sounds like ass-covering from people who are like 'I am ultimately rational and therefore must act with maximum effectiveness'. Ultimately rational as a perceived state within humans is a FEELING. You can take a bunch of ketamine and conclude that you possess that quality, and many people have done just that, some of 'em very wealthy and powerful.

Beats admitting the truth, I guess.


In my experience, people are better at rationalizing their feelings than actually being rational. There's a certain lack of humility to claiming that just because you used some numerical weights to arrive at a decision it was arrived at rationally.


Indeed the more complex and complete you make your rational model, the more tunable weights there are. By just dropping in the right weights, you can get whatever result you want. Therefore complex models tend to produce worse results in practice than very simple ones.

This lesson was brought home for me by https://www.amazon.com/Software-Estimation-Demystifying-Deve... which explains why the COCOMO model didn't work well in practice as an estimation technique, despite its authors having collected a lot of good data on what affects schedule.

This lesson is one that the EA community broadly seems to ignore.
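
A toy illustration of the tunable-weights point above (made-up numbers, nothing to do with the book's actual data): with enough "plausible" multipliers to choose from, a COCOMO-style estimate can be steered to almost any answer.

    base_effort = 100.0  # person-months implied by project size alone (made up)

    # Simple model: one fudge factor, little room to steer the answer.
    simple_estimate = base_effort * 1.1

    # Complex model: many cost drivers, each tunable within a defensible range.
    chosen_drivers = {
        "team_experience":    0.80,
        "tool_maturity":      0.85,
        "schedule_pressure":  0.90,
        "product_complexity": 0.95,
    }
    steered_estimate = base_effort
    for multiplier in chosen_drivers.values():
        steered_estimate *= multiplier   # each optimistic pick compounds

    print(simple_estimate)    # 110.0
    print(steered_estimate)   # ~58: roughly whatever answer the estimator wanted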


> I don't get how effectiveness is a prerequisite. Be altruist, or not

It's not. What I meant by my comment was that pretty much everyone who engages in altruistic behavior wants that behavior to be as effective as possible. The EA movement did not invent or discover this, it's always been the case.

Even people whose altruistic behavior stops at dropping a few coins in the donation box at the supermarket want those coins to be used in the most effective way.

The EA movement is something else entirely.


Disagreements with EA seem to come in four forms. (1) Disagreements about discount rates (how to weigh harm now against harm in the future). (2) Disagreements about the probabilities of various disasters and the effectiveness of various interventions. (3) Statements of the form "this person claimed to be EA and they were bad so EA is bad" -- which please. (4) Defining the greatest good, and therefore how to maximize it, is hard. This critique, unlike the others, is fundamental, but it doesn't nullify the value of the project. Just do your best and call it a day.

This article absurdly misconstrues both recent changes in the community and Tallinn's comments in particular. For people to take issue with the culture that has evolved around a philosophy is not for them to reject that philosophy. And for Tallinn to say that we should not rely on [initially] EA-motivated corporate governance to be benevolent is not for him to have disavowed EA's principles.


These are technicalities, IMO. There is nothing more to EA than the current institutions, people, and praxis. If they become unfashionable or dishonored, their moment will pass.

It’s not some special new kind of cause, it’s just charity with an almost intolerably smug self-image.


EA was a philosophy before it was a culture. Sure, the culture has wankers. And unfortunately the wankers are loud, which can make you think everyone who does EA is smug and unrealistic.

But the idea of bringing statistical rigor to charity is an important one, and it really was a development -- it came shockingly late. Statistics came late to baseball too, and plenty of people fought the idea, but it won pretty fast, because people in baseball care about winning. If* people doing charity really care about achieving the most good, it will win there, too.

* This is a nontrivial assumption.


I just really feel… you didn’t, like at all, read what I wrote, you know?


How could you think I didn't read what you wrote when I disagreed so specifically with it? EA is something new -- not as a goal, clearly (Bentham had already gotten there) but as a method. And it is more than the people who identify with it, just like mathematics can be distinguished from mathematicians.


If there is something new about EA, it is, as a matter of public record, not the use of statistically rigorous cost/benefit analysis. The hubris!

Anyhow, I will admit that the (putative) indifference to specific causes, so long as they are E and A, and the (ostensibly) apolitical posture are original.

They are of course transparently performative. Animal welfare cannot be measured, only assumed, and should therefore, by their own “philosophy”, rank lower than the lowest net-positive measurable utility at any price.

Whatever else “effective” implies, it must include some change in society to be materially meaningful. While this is not a standard the longtermist arm adheres to, the rest of EA and the “philosophy” demand it. It doesn’t take all that much imagination to see which political project EA is most aligned with…


> if you want to do good, you should spend your money where it does the most good.

Is there an exception for helping my in-laws that live one step above abject poverty?


Yup, that comes in the form of a utility function that is nonhomogeneous across people. It's expected, for instance, that parents will value an increment in their children's well-being more than an increment in a stranger's.

EA is radical in that it treats everyone's utility as equally valuable (across people, not income levels -- a dollar to a poor person is, all other things equal, more valuable than a dollar to a rich person). But you can do EA and still care more about your family than strangers.


You can define good however you like. That’s not an exception, it’s baked right in.



