There's an element of "the ends justify the means" in EA that can lead to bad outcomes. An extreme example is SBF's hypothetical "coin flip"[1], but it's easy to see how making the world worse in the short term could be justified under EA reasoning, so long as those actions might make it better in the long term. Just as crucially, what counts as "worse" and "better" is often decided not by the communities being affected but by each individual EA practitioner.
[1] https://www.businessinsider.com/sam-bankman-fried-coin-flip-...
> If you flip it and get heads, the entire world improves by more than double.
> If you get tails, the world is destroyed.
> Sam Bankman-Fried said he would flip the coin — and urged everyone else to do so, too, Caroline Ellison testified in court Tuesday.