Hacker News

How come I never see any concrete proposals for how to equitably distribute the wealth of AI? It's always either "stop AI immediately for the sake of our labor" or "don't worry sometime in the future everyone will live in utopia probably".

Here's a starter example: any company whose main business is training AI models must give up 10% of its equity to a fund whose long-term charter is to establish basic care (food, water, electricity, whatever) for citizens.

I'm sure people will come at me with "well this will incentivize X instead!" in which case I'd like to hear if there are better thought out proposals.



This is what taxation and wealth redistribution schemes are for. The problem is that Americans generally find this idea to be abhorrent, even though it would probably benefit most of the people who are against the principle. They don’t want a dime to go to people they feel are undeserving of it (“lazy” people, which is typically coded language to mean minorities and immigrants).


In theory, we know how to do wealth redistribution, AI or no AI: tax value creation and wealth transfer, such as inheritance. Then use the money to support the poor, or even everyone.

The problem really is political systems. In most developed countries, wealth inequality has been steadily increasing, even though if you ask people if they want larger or smaller inequality, most prefer smaller. So the political systems aren't achieving what the majority wants.

It also seems to me that most elections are won on current political topics (the latest war, the latest scandal, the current state of the economy), not on long-term values such as decreasing wealth inequality.


The question is what is different about equitably distributing the wealth of AI vs. equitably distributing wealth in general. It seems that the main difference is that, with AI wealth specifically, there is a lot of it being generated right now at a breakneck pace (although its long-term stability is in question). Given that, I don't think it's unreasonable to propose "stop AI immediately while we figure out how to distribute wealth".

The problem is that the longer you refrain from equitably distributing wealth, the harder it becomes to do it, because the people who have benefited from their inequitably distributed wealth will use it to oppose any more equitable distribution.


> How come I never see any concrete proposals for how to equitably distribute the wealth of AI?

Probably because most political proposals for how to "equitably distribute the wealth" of anything are badly thought out, too complex to read, or both.

As an example of the former, I could easily say "have the government own the AI". That's great if you expect a government that owns AI to keep caring whether its policies are supported by anyone living under them, and not so great once you consider that a fully automated police force can stamp out any dissent.

As an example of the latter, see all the efforts to align any non-trivial AI to anything, literally even one thing, without someone messing up the reward function.

For your example of 10%, well, there's a spectrum (it's not really boolean) in how broad the AI is: is it more like a special-purpose system, or fully general over everything any human can do?

• Special-purpose: the fund works, but you also don't need it, because the AI is just an assistant that "expands the pie" rather than displacing workers entirely.

• Fully-general: the AI company can relocate offshore, or off planet, do whatever it wants, and raise a middle finger at you. It has all the power and you have none.


This sounds a lot like a sovereign wealth fund. The government obtains fractional ownership over large enterprises (this can happen through market mechanisms or populist strongarming — choose your own adventure) and pours the profits on these investments into the social safety net or even citizens' dividends.

For this to work at scale domestically, the fund would need to be a double-digit percentage of the market cap of the entire US economy. It would be a pretty drastic departure from the way we do things now. There would be downsides: market distortions and fraud and capital flight.

But in my mind it would be a solution to the problem of wealth pooling up in the AI economy, and probably also a balm for the "pyramid scheme" aspect of Social Security, which captures economic growth through payroll taxes (more people making more money, year on year) in a century when we expect the national population to peak and decline.
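As a rough back-of-the-envelope sketch of what such a fund could pay out per citizen (all figures here are hypothetical round numbers for illustration, not real data):

```python
# Back-of-the-envelope: citizens' dividend from a sovereign wealth fund
# holding a double-digit share of the US equity market.
# Every input below is an assumed round number, not a real figure.

us_market_cap = 50e12   # assumed total US equity market cap: $50T
fund_share = 0.10       # fund owns 10% of the market
annual_return = 0.05    # assumed 5% average annual return
population = 330e6      # rough US population

fund_value = us_market_cap * fund_share        # $5T
annual_payout = fund_value * annual_return     # $250B per year
dividend_per_citizen = annual_payout / population

print(f"${dividend_per_citizen:,.0f} per citizen per year")  # ~$758
```

Even under these generous assumptions the per-person dividend is modest, which is part of why the fund would need to reach a double-digit share of the whole market before it could meaningfully substitute for payroll-tax-funded programs.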

Pick your poison, I guess, but I want to see more discussion of this idea in the Overton window.


> The government obtains fractional ownership over large enterprises (this can happen through market mechanisms or populist strongarming...)

Isn't that what happened in the Soviet Union? Except it wasn't fractional. It ushered in 50 years of misery.


Yes, it is. And yes, except that it wasn't. A SWF is about building common wealth inside the systems that finance capital built (in the same way that the 401k replaced the pension) rather than turning back the clock on them. How you acquire those assets can vary wildly:

- Maybe you just decide to invest some public money

- Maybe you have some natural resources that are collective-by-default (minerals wealth on public land)

- Maybe there's a bailout of an industry that is financially broken but has become too big to fail cough and the government presses its leverage

- Maybe a president just wakes up and decides that he wants the government to own 10% of Intel, and makes that deal happen on favorable terms.


The problem is that many people think AI is a big scam with no chance of long-term profitability, so a fund would be a non-starter; many others think AI will be so powerful that any paltry sums would pale in comparison to an ASI's full dominance of the lightcone, with human habitability a mere afterthought.

There honestly aren't many people in the middle, amazingly, and most of them work at AI companies anyway. Maybe there's something about our algorithmically manipulated psyches in the modern age that draws people toward absolutist, all-or-nothing views, incapable of practical nuance in the face of a potentially grave threat.


Why would the AI owners want to distribute wealth equitably? They want to get rich.

What government in the foreseeable future would go after them? This would tank the US economy massively, so not US. The EU will try and regulate, but won't have enough teeth. Are we counting on China as the paragon of welfare for citizens?

I propose we let the economy crash, touch some grass and try again. Source: I am not an economist.


Bernie Sanders talks about a "robot tax" that is roughly what you're talking about. https://www.businessinsider.com/bernie-sanders-robot-tax-ai-...



