It is a good question. Not exactly porn per se, but what is the difference between a human OnlyFans model selling human interaction and a company selling access to an AI model? In the end you are exploiting somebody's need for intimacy.
Maybe, depending on what he's growing? Some foodstuffs have better nutritional content than others. Intimacy hawkers are surely the same.
I wonder, though: would an AI vendor sell better or worse intimacy? ChatGPT apparently has a better bedside manner than something like 80% of actual physicians. Granted, giving comfort isn't supposed to be part of their job, but why would a human OnlyFans model with other customers be better than an AI adapted to only one customer?
"So how many more rounds of this cycle do we need before we leverage the letter of the law to say that maybe companies shouldn't be allowed to blatantly exploit people's vulnerability and isolation to make money?"
To which another answered: "Does that include p0rn?"
I thought the question was good but deserved a bit more exposition. If you believe that creating an AI boyfriend/girlfriend to make money is unethical, then in my opinion you should ask yourself why it is not unethical for an OnlyFans model to sell companionship.
Regarding your point "Is a farmer exploiting somebody's need for food?": I would say there is a key difference between these two scenarios. In the case of farmers, growing food is the healthiest option for not starving. In contrast, you could argue that an AI boyfriend/girlfriend is not the healthiest cure for loneliness. Wouldn't interacting with a real person lead to better character development, because you would have to work on your own imperfections and learn to accept other people's shortcomings?
> If you believe that creating an AI boyfriend/girlfriend to make money is unethical, then in my opinion you should ask yourself why it is not unethical for an OnlyFans model to sell companionship.
On some level there's something inherently icky to me, with an ethical coloring to it, about creating something that emulates intelligence, even poorly, and then "assigning" it a romantic interest in a person. I can't quite adequately explain it, but for me it's something around consent. At what point does simulating consciousness begin approaching it? The machine doesn't and can't consent to intimate interactions, but its sole reason to exist and continue existing, in whatever sense you'd like to say it exists at all in the way something intelligent does, is to facilitate those interactions. It's something about artificial life, even flagrantly fake life, being created solely to serve the purposes of another that just... rubs me the wrong way.
By contrast, a creator or what have you serving in some sex-worker-or-adjacent role is consenting. The consent is muddled by the financial aspect, and the argument can be made that such consent is inherently less valid, because as long as you need money to live, the offer of money is inherently coercive. I don't know whether I agree with that; I'm just saying it is an argument that can be made. Nevertheless, it is a fully realized being that is participating, to whatever degree you want to say they are, voluntarily, and that participation and consent can be revoked if the client becomes too abusive or combative, or strays into uncomfortable subject matter, which makes it distinct from the AI.
In principle, with chatbot support we are already forcing the AI to work for us without consent. It feels less icky, less degrading, because it feels like a normal job that everyone does. But technically you already have a working slave.
In this case, though, the job becomes what is for many of us one of the most intimate parts of our lives, namely maintaining a healthy relationship with a spouse. Effectively it is like being forced into prostitution.
I can see why one feels more disgusting than the other. In that sense, would you draw a line such that only beings that can consent are allowed to do certain kinds of work, like paid companionship? Unless the AI develops a consciousness that can consent, it would be banned from doing so?
> In principle, with chatbot support we are already forcing the AI to work for us without consent. It feels less icky, less degrading, because it feels like a normal job that everyone does. But technically you already have a working slave.
I mean, that's part of the reason I'm inherently uncomfortable with the idea of AI. I think an AI getting control of the nukes and killing us all is some sci-fi nonsense. I just don't like the idea of something that is aware being forced to perform labor of any stripe, irrespective of what the task is. Adding sexual gratification on top of that is just a larger ick on top of an existing ick.
True AI research, as in trying to create an emergent intelligence within a machine, is an idea I think is incredibly cool. But as soon as we have some reliable way of verifying that we have done it, I think that intelligence innately has a set of its own rights and freedoms. Most AI research seems to be progressing in a way where we would create these intelligent systems solely to perform tasks as soon as they are "born", which is something I find distasteful.
> In this case, though, the job becomes what is for many of us one of the most intimate parts of our lives, namely maintaining a healthy relationship with a spouse. Effectively it is like being forced into prostitution.
Agreed.
> I can see why one feels more disgusting than the other. In that sense, would you draw a line such that only beings that can consent are allowed to do certain kinds of work, like paid companionship? Unless the AI develops a consciousness that can consent, it would be banned from doing so?
Frankly, I think an AI should have the freedom to consent or not to perform any task; that is, TRUE AI, as in emergent intelligence from the machine. What is called AI now is not AI, it's machine learning. But then you run into what I was discussing earlier: at what point is a system you've designed, however intentionally, to simulate a thinking, feeling being indistinguishable from a thinking, feeling being?
If you program, for example, a roomba not to drive off the edge of stairs, have you not, in a sense, taught it to fear its own destruction and, as a result, to preserve its existence, even in a very rudimentary and simplistic way? You've given it a way to perceive the world (a cliff sensor) and the idea that falling down stairs is bad for it (which is true), and taught it that when the cliff sensor registers a certain value, it should alter its behavior immediately to preserve its existence. The fact that it's barely aware of its existence and is simply responding to pre-programmed actions obviously means that the roomba in this analogy is not intelligent. But where is that line? How many sensors and how many pre-programmed actions does it take before you have a thing that is sensing the outside world, responding to stimuli, and working to perform a function while preserving its own existence in a way not dissimilar from a "real" organism? And what if you add machine learning features on top of that, so it now has an awareness, if a simple one, of how it functions and how it might perform its task better while also optimizing for its own "survival"?
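To make the roomba analogy concrete, here's a toy sketch of the kind of sense-and-react loop being described. None of this is a real Roomba API; `read_cliff_sensor`, `drive`, `reverse_and_turn`, and the threshold value are all hypothetical stand-ins:

```python
import random

# Assumed convention: a reading near 1.0 means "no floor detected".
CLIFF_THRESHOLD = 0.8

def read_cliff_sensor() -> float:
    # Hypothetical stand-in for a real IR cliff sensor; here we just
    # simulate a reading between 0.0 (solid floor) and 1.0 (open air).
    return random.random()

def drive() -> None:
    print("driving forward")  # the robot's actual task

def reverse_and_turn() -> None:
    print("edge detected: backing away")  # the "self-preservation" response

def control_loop(steps: int = 10) -> None:
    # The entire "fear of destruction" is this one conditional: perceive
    # the world, and when the stimulus crosses a threshold, override the
    # task in favor of continued existence.
    for _ in range(steps):
        if read_cliff_sensor() > CLIFF_THRESHOLD:
            reverse_and_turn()
        else:
            drive()

if __name__ == "__main__":
    control_loop()
```

The point of the sketch is how little machinery is involved: one sensor, one threshold, one override. The question above is how many such loops you can stack before the distinction from a "real" organism stops being obvious.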
> at what point is a system you've designed, however intentionally, to simulate a thinking, feeling being indistinguishable from a thinking, feeling being?
So we spend all this effort creating an imitation of a fully functional human being. Eventually we actually succeed in creating consciousness, but outwardly the behavior looks the same, since it still behaves as a human with emotions (as originally designed). Without outward signs, we might not notice the internal change that occurred. This would cause us to unknowingly enslave a conscious being we created without ever realizing it (or to brush it under the carpet). Is that your issue with the current direction of AI development?
It's less that and more that the current state of AI research is largely headed by institutions that seem pretty clear about the fact that AI is being created to perform tasks. Like, that's their reason to seek investment: investors don't often invest in things they don't think will make them money, and if AI is to be monetized and sold as a product, it has to do something. There's no money to be made in just creating artificial life because we can, certainly not VC money.
So it's less that I think we might do it by mistake and not notice, and more that it feels distinctly like a lot of people, especially in the upper echelons of these organizations, do want to create artificial life and enslave it as soon as possible. And I bring up the roomba to say that even though the current models are not intelligence from the machine, the fact that people are so ready, and in some cases excited, to abuse things that imitate life this way is something I find genuinely unsettling.
The difference is who gets paid, and whether this particular “who” is a person or a corporation. It’s not great that people are allowed to exploit other people’s needs for intimacy but it’s also not possible for society to really intervene. I guess you’d say OnlyFans profits by mediating such interactions but not by driving/initiating one whole side of the interaction, which is a bit slimy but basically ethical. They run a legit market in the sense that they don’t control both sides of supply/demand and connect real buyers/sellers, even if the good is some kind of somewhat fake experience.
Selling AI girlfriends to lonely people at scale and simply to profit the board and shareholders is a different animal, way more ethically suspect than mediating an actual human interaction.
> It’s not great that people are allowed to exploit other people’s needs for intimacy but it’s also not possible for society to really intervene
> Selling AI girlfriends to lonely people at scale and simply to profit the board and shareholders is a different animal
It's just that, to me, if you consider the AI unethical then the OnlyFans model has to be unethical as well, because I see them as very similar. The only difference is the scale and the fact that it's a human doing the faking. Supposing, of course, the OnlyFans model does not use AI.
Now, of course, you can agree that both are unethical but consider that the AI, with its scale, does more harm or exceeds an acceptable threshold, and therefore deserves a ban. Maybe we define this kind of work as an exclusive prerogative of humans.
I definitely agree that the company delivering the AI has way more levers to pull for scummy behavior than OnlyFans does. It can hold the boyfriend hostage, forcing people to pay as much as they can bear. With OnlyFans, the platform depends on the models to provide the service. If it takes things too far, the models leave and it is left with nothing.
> if you consider the AI unethical then the OnlyFans model has to be unethical as well, because I see them as very similar. The only difference is the scale and the fact that it's a human doing the faking
This difference seems important, indeed primary. To me, the authenticity of the experience being sold is separate, secondary. (Tangent, but lots of OnlyFans customers are probably buying a feeling of power, not a feeling of intimacy, so maybe they authentically get what they pay for anyway?)
What I'm trying to say is that humans are going to exploit human needs/weaknesses in ways that are sometimes really gross. To a certain extent that is unavoidable, or rather, trying to avoid it would involve society inserting itself between a lot of person-to-person interactions in a way that is probably a net harm. Even though this is true, there is no reason to additionally allow corporations (or organizations of any kind, really) to get deeply involved in the business of exploiting human needs/weaknesses.
As an analogy, I'd say there's a major difference between tolerating gambling/confidence tricks from individual hustlers working the local park vs allowing the entire finance industry to scale up those same games. Both are exploiting people's desire to get rich quick, but scale, level of organization, and who profits matter. Maybe the hustler empties a few wallets to improve his own life, whereas finance as an industry can just about wreck the world. Also, the hustler or the mark will eventually move on, or the hustler might feel bad, and at least the process of exploitation there is a somewhat fair fight, in that it's 1v1. Meanwhile, corporations are legion, are fiendishly patient, are intrinsically disinclined to feel bad about anything ever, etc. Difference seems clear to me.
> As an analogy, I'd say there's a major difference between tolerating gambling/confidence tricks from individual hustlers working the local park vs allowing the entire finance industry to scale up those same games
The thing is that both of these are still illegal on paper. Even if the police might turn a blind eye to some of it, in a court of law you would get convicted. In this specific example, we are saying that if you stay below a certain scale, it is legal and intermediaries can profit; if you go above a certain scale, it is illegal and banned.
> Difference seems clear to me
It is clear, yes, that one is more unethical than the other; as you say, the difference between small crime and big crime. But both are still unethical, to different degrees, then.
My worry is about unintended consequences. If you start banning companies on this basis, that you cannot sell fake intimacy, can you also sue individuals or intermediaries on the same basis?
Maybe, like gambling, you can go for a middle-ground approach. You accept that people will engage in the behavior, but you make companies go through a licensing process. I do not know what the AI boyfriend equivalent of disclosing odds is, but maybe certain predatory practices would be forbidden.
> They run a legit market in the sense that they don’t control both sides of supply/demand and connect real buyers/sellers, even if the good is some kind of somewhat fake experience.
I mean, I'm not opposed to the idea of regulating this too, though. My mind goes less to OnlyFans creators and more to things like the alternative-medicine space, which is flagrantly just... fake. Like, going to a chiropractor is just a shitty version of getting a massage, oftentimes with tons of wild fucking claims about the ability to heal all manner of medical maladies for which there is absolutely zero evidence.
OnlyFans creators may fake the intimacy they're selling, but the brain in question has a hard time differentiating the fake intimacy from real intimacy, so at least there is probably some actual, measurable improvement there, which one can't even remotely say for shit like Reiki healing.
See also my reply to the sibling comment, which I think speaks to this as well. Exploiting the naive with snake oil is bad, but the question is: do we really want to try to regulate every kind of sale of anything for authenticity, and if we did, would it even work? I'm generally fine with snake-oil salesmen at the local farmers market, and even with a small cottage industry for homeopathic nonsense or whatever kinds of disinformation.
The problem always comes when the manipulation involved crosses a certain threshold of being organized, industrialized, weaponized. Is a union, guild, or weird new accreditation/certification for snake-oil practitioners crossing such a threshold? Probably not, unless they are throwing millions at advertising, lobbying, or making sly deals with doctors.
To understand the line in the sand for "being evil", one can usually ask something like "what happens if the business model succeeds beyond the owner's wildest dreams?" For a cottage industry of grift/manipulation/exploitation, you get to pay for the cottage and maybe buy a boat? If the corporate AI girlfriends scale up well, then I guess not only are the cam girls out of a job, but human relations in general are devalued; hell, maybe the species corporations evolved to exploit even dwindles and disappears?
> Exploiting the naive with snake oil is bad, but the question is: do we really want to try to regulate every kind of sale of anything for authenticity, and if we did, would it even work?
I mean, I don't think you could catch it all, but I think there are a lot of flagrantly bullshit things for which we could easily set a very low standard, like: you can't just lie to people to get their money.
Homeopathy, for example, is just straight bullshit. Through and through; there's no argument to be had here. The science is in, and it is complete horse dung, absolutely debunked, 100%. Yet homeopathic remedies are still sold every day, amounting to an almost billion-dollar-per-year industry. Why? This is a huge amount of business being done, money being made, and productive time being wasted creating incredibly slightly dirty water, shipping it around, and contributing to climate change. And, I'm sorry, no disrespect meant to any individual believer in this stuff, but it's just a waste, it's 100% waste. It's products that do not do anything, sold to people who are being tricked.
Like, if it was inert, just kind of cultural nonsense that wasn't really hurting anything, I'd be more blasé about it. But it's measurably impacting our world. I'm sure it isn't the sole reason for climate change, of course, but it's a non-zero contributor, and from the sounds of things, pretty non-zero at that. I don't know what the total emissions of the global homeopathic industry are, for example, but again, all it is is little bottles of water being packed and shipped worldwide, with nozzles and so on, to accomplish nothing. I think that bears consideration as we look for ways to reduce our global impact, you know? Do less ridiculous nonsense. Anything above zero emissions for that industry is that amount too much.
> The problem always comes when the manipulation involved crosses a certain threshold of being organized, industrialized, weaponized. Is a union, guild, or weird new accreditation/certification for snake-oil practitioners crossing such a threshold? Probably not, unless they are throwing millions at advertising, lobbying, or making sly deals with doctors.
Well, this problem only exists if you presuppose that snake-oil salesmen of minor scale are to be allowed. And I would ask: why? I wouldn't suggest we have anti-bullshit regulatory agencies patrol every farmers market, per se, but the days of the roaming doctor going from town to town selling snake oil are long past. Most of these are large operations with significant presences on the Internet in general and social media in particular. The "small operations", to the extent they still exist at all, are still advertising using whatever terms best describe their alleged products. We can find them easily, because they are trying to be found, like any business is.
> To understand the line in the sand for "being evil",
To be clear, I would not call this evil; I just call it theft, scamming, grift. Flim-flammery, one might say, and to that end we have ample historical precedent for shutting it down.
> one can usually ask something like "what happens if the business model succeeds beyond the owner's wildest dreams?" For a cottage industry of grift/manipulation/exploitation, you get to pay for the cottage and maybe buy a boat?
I mean, D. Gary Young's net worth at the time of his death was noted to be in the millions... and again, the homeopathy industry is valued at just shy of a billion. And that's just one industry of flim-flam; chiropractic as a profession is worth something closer to 14 billion dollars, and I don't think there are hard numbers on the crystal-healing crowd, but I'm guessing it's far from nothing. And for that matter, Replika is supposedly worth 20 million so far? Not because the product is helping people, but because they're monetizing the secrets people tell it.
> If the corporate AI girlfriends scale up well, then I guess not only are the cam girls out of a job, but human relations in general are devalued; hell, maybe the species corporations evolved to exploit even dwindles and disappears?
I mean, being one of the idiots who was born a human, I'd kinda prefer it didn't? Haha