> OpenAI enabled its users to have a sexting conversation.
Considering that this is only with verified adults, how is this "evil"? I find it more evil to treat full grown adult users as kids and heavily censor their use of LLMs.
(Not to detract from the rest of your post, with which I agree).
That's true. Most of my local models are uncensored; I don't want that prude culture pushed on me. It also stops AI from working correctly, because a lot of what I talk about with my friends is sexual, and it's so annoying when an AI model keeps clamming up.
My point is not about morality. It’s about ROI focus and that OpenAI can’t and won’t ever return anything remotely close to what’s been invested. Adult content is not getting them closer to profitability.
And if anyone believes the AGI hyperbole, oh boy I have a bridge and a mountain to sell.
LLM tech will never lead to AGI. You need a tech that mimics synapses. It doesn’t exist.
I also have a hard time understanding how AGI is supposed to magically appear.
LLMs have their name for a reason: they model human language (output given an input) from human text (and other artifacts).
And now the idea seems to be that when we do more of it, or make it even larger, it will stop being a model of human language generation? Or that human language generation is all there is to AGI?
Because the first couple of major iterations looked like exponential improvements, and because VC/private money is stupid, they assumed the trend would continue on the same curve.
And because there's something in the human mind that has a very strong reaction to being talked to, and because LLMs are specifically good at mimicking plausible human speech patterns, chatGPT really, really hooked a lot of people (including said VC/private money people).
LLMs aren't language models so much as a general-purpose computing paradigm. LLMs are circuit builders: the converged parameters define pathways through the architecture that pick out specific programs. Or, as Karpathy puts it, an LLM is a differentiable computer[1]. Training an LLM discovers programs that reproduce the input sequence well. Roughly the same architecture can generate passable images, music, or even video.
It's not that language generation is all there is to AGI, but that to sufficiently model text that is about the wide range of human experiences, we need to model those experiences. LLMs model the world to varying degrees, and perhaps in the limit of unbounded training data, they can model the human's perspective in it as well.
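To make the "circuits" picture concrete, here's a minimal numpy sketch (toy dimensions, nothing from a real model): the trained projection matrices stand in for the converged parameters, and the softmax attention weights they induce are the input-dependent routing, i.e. the "pathways through the architecture".

```python
import numpy as np

# Toy single-head self-attention. Wq/Wk/Wv stand in for converged parameters;
# the attention matrix they produce is the input-dependent "circuit".
rng = np.random.default_rng(0)
n, d = 5, 8                               # sequence length, embedding width (toy sizes)
x = rng.normal(size=(n, d))               # input token embeddings
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))

q, k, v = x @ Wq, x @ Wk, x @ Wv
scores = q @ k.T / np.sqrt(d)             # affinity of each position for every other
weights = np.exp(scores)
weights /= weights.sum(axis=-1, keepdims=True)   # softmax: each row is a routing pattern
out = weights @ v                         # information flows along the selected pathways

print(weights.round(2))                   # the "wiring" this particular input lights up
```

The same differentiable machinery, trained on pixels or audio tokens instead of text, converges to different programs, which is the point about roughly the same architecture handling images, music, and video.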
The words 'lead to' there cover a lot. I don't think we'll get AGI just by giving more compute to the models, but modifying the algorithms could cover a lot of ground.
At the moment, for instance, training on new data changes all of the model's weights, which is very compute-intensive and makes it hard to learn new things after training. The human brain seems to do this in a more compartmentalised way: learning about a new animal, say, doesn't rewrite the neurons for playing chess or speaking French. You could maybe modify the LLM algorithm along those lines without throwing it away entirely (see the sketch below).
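Here's a minimal PyTorch sketch of that direction, assuming a LoRA-style adapter; the Adapter class, sizes, and names are illustrative, not any particular library's API. The base weights are frozen, so learning something new only ever touches the small add-on module.

```python
import torch
import torch.nn as nn

# Stand-in "base model": one frozen linear layer playing the role of pretrained weights.
base = nn.Linear(512, 512)
for p in base.parameters():
    p.requires_grad = False                # existing knowledge stays untouched

class Adapter(nn.Module):
    """Small trainable low-rank update (LoRA-style); new learning lives only here."""
    def __init__(self, dim=512, rank=8):
        super().__init__()
        self.down = nn.Linear(dim, rank, bias=False)
        self.up = nn.Linear(rank, dim, bias=False)
        nn.init.zeros_(self.up.weight)     # starts as a no-op on top of the base

    def forward(self, x):
        return self.up(self.down(x))

adapter = Adapter()

def model(x):
    return base(x) + adapter(x)            # frozen pathway + compartmentalised new pathway

# Only the adapter's ~8k parameters receive gradients; the ~263k base weights never move.
opt = torch.optim.Adam(adapter.parameters(), lr=1e-3)
x, target = torch.randn(4, 512), torch.randn(4, 512)
loss = nn.functional.mse_loss(model(x), target)
loss.backward()
opt.step()
```

Whether that actually scales to "learn a new animal without touching the chess weights" is exactly the open question, but it shows the knob exists.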
The need for new data seems to have outpaced the rate at which real data is being generated. And most of the new data is LLM slop.
So you might improve the algorithms (by doing matrix multiplications in a different order... it's always matrix multiplications; see the toy example below), but you'll still be feeding them junk.
So they need ever increasing amounts of data but they are also the cause of the ever increasing shortage of good data. They have dug their own grave.
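For what it's worth, "a different order" really can matter. A toy example (my own illustration, not anything from a real training stack): matrix multiplication is associative, so (AB)C and A(BC) give the same answer, but the evaluation order changes the arithmetic cost by orders of magnitude.

```python
import numpy as np

# Cost of an (m x k) @ (k x n) product is roughly 2*m*k*n FLOPs.
m, k = 1000, 10
A = np.random.rand(m, k)                  # 1000 x 10
B = np.random.rand(k, m)                  # 10 x 1000
C = np.random.rand(m, k)                  # 1000 x 10

left  = 2*m*k*m + 2*m*m*k                 # (A @ B) @ C: ~40,000,000 FLOPs
right = 2*k*m*k + 2*m*k*k                 # A @ (B @ C): ~   400,000 FLOPs
print(f"{left / right:.0f}x fewer FLOPs just from reordering")   # ~100x
assert np.allclose((A @ B) @ C, A @ (B @ C))                     # same result either way
```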
Because always/never are absolutes that are either very easy or very hard to see through. For example, 'I will never die', 'I will never tell a lie', and 'I will never eat a pie' all suffer from this, even though 'I will never die' is the most implausible of the three. And it gets worse as we get more abstract:
'The machine will always know where to go from here on'.
AGI might be possible with more parameter and data scaling for LLMs. It is not completely outside the realm of possibility, given that there is no proof yet of the "limits" of LLMs. The current limitation is definitely on the hardware side.
This is what I'm talking about. The correct tech would enable the strands of information in a vector to "see" each other and "talk" to each other without any intervention. This isn't the same as using a shovel to bash someone's head in. AGI would need tech that finds a previously undocumented solution to a problem by relating many things together, making a hypothesis, testing it, proving it, then acting on it. LLM tech will never do this. Something else might. Maybe someone will invent Asimov's positronic brain.
I think _maybe_ quantum computing might be the tech that moves AGI closer. But I'm 99.9999% certain it won't be LLM tech. (Even I can't seriously say 100% for most things, though I am 100% certain a monkey will not fly out of my butt today)
Quantum compute would definitely be a leap toward AGI. Calculating a probability vector is very natural for a quantum computer; more precisely, any analog compute system would do. qubits == size(vocab), with some acceptable precision, would work, I believe.
The processing capability of today's CPUs and GPUs is insane. From handheld devices to data centers, the capability to manipulate absurd amounts of data in fractions of a second is everywhere.
Maybe it is the algorithms. But just running a single forward pass of a 10^25-parameter LLM is definitely not feasible on today's hardware (rough numbers below). Emergent properties do happen at high density. Emergent properties might even look like AGI.
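Rough numbers behind that, assuming the usual ~2 FLOPs per parameter per token for a dense forward pass and a ballpark ~1e15 FLOP/s per accelerator (both are order-of-magnitude assumptions, not vendor specs):

```python
# Back-of-envelope: one forward pass, one token, dense 10^25-parameter model.
params = 1e25
flops_per_token = 2 * params              # ~2 FLOPs per parameter per token (dense)
accel_flops = 1e15                        # assumed ~1 PFLOP/s per accelerator (ballpark)

seconds = flops_per_token / accel_flops
print(f"{seconds:.0e} seconds per token on one accelerator")     # ~2e10 s
print(f"~{seconds / 3.15e7:.0f} years per token")                # ~600 years
```

Even spread across a million accelerators, that's still around five hours per token, so the "hardware side" point stands at that scale.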
The whole "porn is a poison to society" narrative is very strong with conservatives now. A lot of them (here in Holland even) want it banned like it's Afghanistan and for everyone to have a family with lots of kids that has their dinner at 6pm after prayer.
I also don't subscribe to that. I'm polyamorous and sex-positive. And very LGBTIQ friendly. But I've seen that attitude a lot even in Europe :( especially from the emerging extreme right parties.
I don't really understand it either. Why is it any of their business what I do? I don't tell them they can't have a big traditional family. Why are they so preoccupied with me?
Couldn't agree more. I'm cis-het, in a long-term relationship, and childfree by choice.
But don't think conservative ideology/politics is about them making their rules your rules and then being bound by those rules themselves. I mean: if Musk doesn't like what someone says, they get blocked from Twitter (or, in the case of German political parties, downranked so that the far right ranks higher/gets recommended more often on X). But they (like Musk) claim "free speech" for themselves, meaning they want to say whatever they feel like without consequences.
I found this interesting "law" by Frank Wilhoit:
> Conservatism consists of exactly one proposition, to wit: There must be in-groups whom the law protects but does not bind, alongside out-groups whom the law binds but does not protect.
Well in my country (Holland) I see a lot more hate. Even when I wear a rainbow wristband in June I get insulted by groups of extreme right guys. This didn't use to happen.
The thing is if you're not protected you don't really have free speech.
Yeah, the disapproval/disgust about OpenAI enabling erotica generation with ChatGPT that I'm seeing everywhere, from pretty much every side I keep an eye on, is so frustrating, because it seems like pure Puritanism and censorship, and a desire to treat adults like children, as you say.
The issues that these pseudo-relationships can cause have barely begun to be discussed, nevermind studied and understood.
We know that they exist, and not only for people with known mental health issues. And that's all we know. But the industry will happily brush that aside in order to drive up those sweet MAU and MRR numbers. One of those, "I'm willing to sacrifice [a percentage of the population] for market share and profit" situations.
That's kind of a patronizing position, or maybe a conservative one (in US terms). There can be harm, there can be good; nobody can say for sure at this point which outweighs the other.
Do you feel the same about, say, alcohol and cigarettes? We allow those, heck, we encourage them in some situations for adults, yet they destroy whole societies (look at Russia with alcohol, or at Indonesia with cigarettes, if you haven't been there).
In the parent's topic I see a lot of points to discuss and study, but none that justify a ban.
I'm really not suggesting a ban, there's no way that would fly.
I'm suggesting restraint and responsibility on the part of the organization pushing this. When do we learn that being reactive after the harm is done isn't actually a required method of doing business? That it's okay to slow down even if there's a short-term opportunity cost?
This applies just as much to the push for LLMs everywhere as it does to OpenAI's specific intention to support sexbots.
But it's all the same pattern. Push for as much as we can, as fast as we can, at as broad a scale as we can -- and deal with the consequences only when we can't ignore them anymore. (And if we can keep that to a bare minimum, that would be best for the bottom line.)
We did finally come around to the point of restricting advertising and sale of cigarettes, and limiting where you could smoke, to where it is much less prevalent in today's generation than earlier generations.
The issue is it becoming ubiquitous in an effort to make money.
I mean, their issue isn't that not enough users are using ChatGPT, so they need new user modalities to draw more people in; they already have something like 800 million MAU. Their issue is that most of their tokens are generated for free right now, both by those users and by things like Copilot, and they're building stupidly huge, unnecessary data centers to scale their way to "AGI." So yeah, everyone says this looks like a sign of desperation, but I just don't see it, because it would solve a problem they don't actually have (not enough people finding GPT useful).
If you recalibrate from any lofty idea of their motives to "get investor money now", this and other moves/announcements make more sense: anything that could look good to an investor.
User count going up? Sure.
New browser that will deeply integrate ChatGPT into users' lives and give OAI access to their browsing/shopping data? Sure.
Several new hardware products that are totally coming in the next several months? Sure.
We're totally going to start delivering ads? Sure.
We're making commitments to all these compute providers because our growth is totally going to warrant it? Sure.
Oh, and since we're investing in all that compute, we're also going to become a compute vendor! Sure.
None of it is particularly intentional, strategic, or sound. OAI is a money pit; they can always see the end of the runway and must secure funding now. That is their perpetual state.
Looks like OpenAI can do whatever it desires, but if an indie artist tries to take money for NSFW content, or even just makes it publicly available for free, they get barred from payment processors and the like.