okareaman's comments

The beauty with the internet and now AI is that it greatly reduces the power of and need for gatekeepers who take most of the profit


Sure. As a musician who practised for years to play the right note in the right way at the right moment, I will truly appreciate it when the unique style I developed with my band over years of work and sweat is cloned in mere seconds by a multinational corporation and their AI. Or it could be a 13-year-old teenager who has the AI create music in the style of someone else.

Not that I am against sampling and remixing, but I have yet to see whether AI-based cloning of music can truly become an artform with its own merits, like sampling has become — or whether it will destroy the filaments of reality to such a degree that music will have to become something entirely different than it is now.

What I am sure of, is that those who have power today will find ways to hold that power tomorrow.


And I'm sure all of the musicians you "took inspiration from" are happy about you stealing from them.


How are you so sure? You've never even heard their music.


You are not the authority on what counts as an "artform of its own".

I agree with your last statement though.


I am the authority of what counts as an artform of its own — to myself. I mean, I also have an MA in arts, so I know about art forms, but I didn't attempt to represent an authority here. So I am sorry if you perceived I was acting as an "authority" when in fact I was speaking about my very own perspective and nothing more.


There will be two groups of people: those who figure out how to use AI to their benefit and those who don't


And those who figure it out and still won't benefit, because bigger entities do what bigger entities always did when technological promises loomed on the horizon.


Just by virtue of how the human brain transduces auditory signals versus visual signals, there is a big disparity in being able to automate music production. We fill in many gaps for vision, but to a much smaller degree for music. There's a lot less room for AI to fudge the difference between human and generative-model-derived music.

It will be interesting to see how this all plays out, as I'm sure AI will find some use in music that has broad application and influence. But it will take a lot longer than generating pictures.


In exchange for a deafening level of noise.

One of the things those recording engineers used to battle was a high noise floor in the signal. AI lowers that signal-to-noise ratio (applying the analogy to producing music or "content").

The issue isn't the barrier to entry in terms of skill or technical expertise, it's malicious and greedy business practices. And tech is rife with that, in all new ways of its own. Institutionalized, codified psychopathy is on trial here.


Bullshit.

You can't run your own AI, so the AI is gatekept.


You need to calm the f down. Do you understand anything about machine learning, software and hardware? How do you propose that some configuration will become generally intelligent given what we know today? I'd really like to hear your theories, then maybe I'd be frightened too.


Well, I do know something about those topics, but here's a twitter thread from someone else who thinks the people worried about risks are silly, who's definitely an expert, with speculation on how he thinks GPT-4 could be made into a generally intelligent autonomous system: https://mobile.twitter.com/karpathy/status/16425988905738199...


There is progress trending that way - GPT-4 is getting close, and there are huge financial incentives to improve it, with hundreds of the best and brightest working on it. Seems kind of inevitable to me.


I think it goes deeper than that. I think some are exploiting "end time" religious fears in America to get AI taken away and restricted from the public, because a public empowered by AI is a threat to the current power structure. This isn't any different from what has already gone on in America, just now with AI.

For example, Peter Doocy of Fox made a point of reading out loud at a White House press briefing the words of Eliezer Yudkowsky: "If we don't shut down AI, everyone on Earth will die." Evangelicals watch Fox. They were the target. Doocy didn't have to read such dramatic words aloud. He could have said something much more neutral.


This is a very fluid and chaotic situation so I'd be more concerned if he said one thing and stuck to it

When the Facts Change, I Change My Mind. What Do You Do, Sir? - John Maynard Keynes

A foolish consistency is the hobgoblin of little minds, adored by little statesmen and philosophers and divines. With consistency a great soul has simply nothing to do. He may as well concern himself with his shadow on the wall - Ralph Waldo Emerson

Do I contradict myself? / Very well then, I contradict myself. / (I am large, I contain multitudes) - Walt Whitman's “Song of Myself”


That's all well and good, but it's not well and good to change what you say because the calculus of what will personally benefit you has changed. It's not the same as "new information coming to light", it is instead greed.

We've already seen a charity robbed for no end other than profit. Commercializing OpenAI was for one reason: to enrich the people commercializing it. Not to benefit society, and with nary a thought given to the consequences.

Control of AI serves one purpose, in Altman's mind, I am convinced: to prevent other people from eating the chickens he counted too early.

If ten million people lose their jobs to ChatGPT, it's fine as long as Altman & Co grow richer, but if those same people try to democratize AI and make his time and investment valueless in the face of a commoditized category of software, all of a sudden he has a problem and those people should be constrained by the government -- but Altman should not?

This won't be the first time greed failed.


I am astonished how anyone could just make a statement like this, here. It's so sticky, it's so damning and brash. It offers absolutely no proof, while contradicting what sama says themselves.

And we are entirely fine with that. Top of the comments after 7 hours.

Note, I am not claiming it could not be true. That is entirely beside my point. The issue is, that it could be entirely untrue. And I really mean, every single piece of information in this comment could be off. And we are comfortable with that. We are burning witches again.

If this is the level of our engagement, in this relatively good and on-topic forum, in times of change and disagreement, I do truly wonder if there is any hope for us.


I can think of exactly zero other reasons to have engineered the privatization of a charity, other than greed.

The balance of probabilities indicates to me that the attempt at regulatory capture is also based in greed, but if you have a different view, I'd love to hear your logic.

After all, if AI proliferation were as dangerous as Altman seems to claim, then why is the answer "OpenAI should continue to provide unfettered global public access to quasi-strong AI, but the government should slow other people down"?

If billions of dollars are involved, your first suspicion shouldn't be altruism.

Edit: OK perhaps a valid reason for a private subsidiary is to provide the charity with the income it needs to operate if it has no endowment (OpenAI does), but given all the other points I raised, I don't think that's what's going on here.


> I can think of exactly zero other reasons to have engineered the privatization of a charity, other than greed.

The question has recently been asked by Lex Fridman and answered by Sam Altman. This is the clip: https://youtu.be/qQdqFZFZQ6o

To quote: "We learned early on that we're gonna need far more capital than we were able to raise as a non-profit" (you can listen to it being expanded on over three minutes).

Now, since I can almost feel the goal-post-shifting coming, this seems like a good time to align: You claimed you could see no reason. Their reason is: They had a model. Then they figured it was wrong. They adjusted their model.

This is not an insane or unreasonable process. This is how we know things get adjusted all the time. We agree with it on principle.

Now, of course, there are a lot of possible, unfavorable interpretations, even if we can agree up to here. A couple:

a) Despite the process making sense, that's not what happened. Sam might be lying or deceiving. He may be leaving things unsaid, or it could all be a long-planned con, etc. All of this might be entirely possible, and I did not do the work to disprove this option. BUT: as a general rule, we always agree that the burden of proof is on the accuser, in court and in civil discourse. Walking up to people and saying "You stole!", them being irritated, and you saying "Well! Prove me otherwise!" is just not how we do things. I am aware that in practice we often are willing to apply different rules to the very poor, the very rich, or people we just dislike.

b) Sam is incompetent or delusional. Someone else could have made it work without going the route they did. Maybe it would have worked, and he is just... bad at math? Certainly a possibility. Similar to the above, if you want to make this claim without discrediting yourself: Show your work. Reason us through it.

c) This move was illegal. I am not a lawyer, but since I wouldn't rate this thought as particularly groundbreaking, I am okay with assuming someone raised the concern during the transition, lawyers were consulted, and it was a legal option.

d) It's simply immoral to go on: at best the exploitation of a loophole, at worst the greatest betrayal of humankind. If "open" cannot mean "open source" anymore, then before making this transition, letting the company fail was the right move. This is an entirely understandable position, and I can empathize with the feelings it triggers. It requires no evidence, and it requires no claims about greed. It's an opinion piece. Fair enough.


I watched the video.

I'll preface this by saying I am neither judge nor prosecutor, and we are not at trial.

Where we are is in a position of considering, before it is too late, the honesty and motives of a man who has taken a charity private because they "needed money".

Needed money for what, exactly? Well, according to https://openai.com/about, OpenAI's mission is "to ensure that artificial general intelligence benefits all of humanity."

So how does this newfound capital help that mission? By hiding the details of GPT-4, so that no one outside of OpenAI can benefit. I don't even know how many parameters the damn thing has or what kind of growth trajectory we are on, and neither does anyone outside of OpenAI.

Here's a man who stands to control a large part of what resembles AGI, financially benefit to an almost indescribable degree, and control, arguably, a large part of the direction of society (given the number of jobs this will cost), especially if the regulators limit access by Mere Mortals. They won't even tell us the basics of their new models (and newfound powers). So much for "Open" and "benefiting all of humanity".

If someone stands to gain this kind of money and power, and does things that on their face seem dishonest and inimical to society at large, and to the goals of the very charity he privatized, should he be immune to questioning and suspicion?

I am suspicious. I think his actions merit suspicion. I think you all should be suspicious too, based on the events so far.


> So how does this newfound capital help that mission?

Again, as soon as we jump to the "so...", we are already past the point that I was taking offense with. I myself listed a few options from where to take this and you added a few more. Fair enough and entirely fine with me.

This is the point:

> Commercializing OpenAI was for one reason: to enrich the people commercializing it. Not to benefit society, and with nary a thought given to the consequences.

Making bad-faith statements about someone without even rebutting the stated intent of the accused, or even putting in the effort to learn about that intent, is just so incredibly lame and bad style.

Imagine saying this to someone's face: "You made this business for one reason alone: to enrich yourself personally. You absolutely do not care about benefitting society, and you don't give a fuck about the damage it might do." Would you really think it okay to advance this to the accused before even bothering to find or hear an explanation regarding the issue you are taking such strong offense at? Just an inkling of what their explanation is, without you assuming it for them, as a matter of human decency?

If we cut out the option for people to explain themselves, or to reason on the basis of their explanation, what are we doing other than advancing populism? I really want to believe this place cannot be that.

(I want to add that I have zero allegiance to, or particularly strong opinions about, sama. In the past I found some of their takes interesting, but for me that's just normal when I listen to a decently smart person for the first time, even if I end up disagreeing with most of their values.)


My skepticism is not bad faith, it's honest inquiry based on the facts available to me, which I have enumerated for you.

The hiding of scientific data from a charity has VERY few kind explanations and a lot of terrible ones. The balance of probabilities points towards Sam Altman acting maliciously and not in the public interest, which might be "okay" if he hadn't robbed a charity to do so.

I am still open to contrary viewpoints, but you're not presenting any, you are only saying I am acting in bad faith for what I (and others here) consider to be merited and valid skepticism based on recent events and statements from Altman himself.

Can you provide any charitable interpretation of the hiding of scientific data from a charity whose mission is to ensure that artificial general intelligence benefits all of humanity?


You're in the clear. Like another commenter in the same thread mentioned, you've responded in good faith to a semi-paranoid question. In my uncharitable view, the GP seems to both have a bias for Sam and a weird anxiety over moral questions.


Chill out a little bit. The commenter made it clear that it's their opinion. In fact, they seem to have gone out of their way to suggest it's their opinion - despite the rules here requiring the assumption of good faith (a rule you're breaking, in my opinion).

That it happens to also be a popular opinion doesn't indicate that people are burning witches - it indicates that people are increasingly finding it easier to explain the world in terms of greed and cynicism; because, frankly, that is the world we live in (ahem, in my opinion).

That you find this to be "witch-burning" indicates to me that you must really relate to the ultra-wealthy, which honestly just sounds tone-deaf. Neither Sam Altman nor Jeff Bezos will "go down" because of a comment on the internet. Do you know who _doesn't_ have that guarantee? Normal people.


Seriously, we're talking about a company that Microsoft invested 10 billion dollars into; this is very powerful technology that will affect our lives. They didn't invest all that money because it will make the world a better place; they did so because they see the opportunity to make a LOT of money and to steer where this technology goes while also reducing transparency. Why shouldn't we view that with suspicion? You don't have to roll over just because there's a lot of money involved or because some millionaire/billionaire might get treated "unfairly".


"We are burning witches again."

Excuse me, but what witch is being burned for the wrong reason here?

This debate revolves around Sam Altman, and how what he and OpenAI were doing was open - and now it is not anymore, despite contrary initial statements. That is a legit reason for anger, no?


Burning witches? Call me a radical but I can't help but notice the name of the company is OpenAI, but it's not open at all!


Tbh the only thing I find irritating about the article is the dumb story about him, at age 8, already realizing his future as the leader of an AI company.


The discourse about lost jobs is necessary but a little odd: leveraging VC money and software driven automation to take over sectors ripe for the picking is the startup model in a box.


I think the point is that society and our economies and particularly people in places of power are not preparing for a world in which automation, AI, and robots rapidly advance to the point where jobs will no longer be needed in many cases, and perhaps one day be a choice not a necessity to even live. It would be better if we prepared for that instead of worshipping jobs as if the point of life is to have a job. The USA is particularly unprepared for this, despite having plenty of wealth to do better. Imagine a workforce half of what we have today. Then figure out a way to have an even better standard of living for everyone. That’s the mission, and it could be possible, unless we measure success of leadership solely by jobs created. People freed from the burden of work would be such a better measurement.


To anyone interested in this topic I highly recommend listening to this lecture John Cage gave at Stanford University in 1992.

John Cage - Overpopulation and Art

https://www.youtube.com/watch?v=WzPneYqBLAI


Machines can do the work, so that people have time to think.

https://youtu.be/G3V2n9QtpfE


What indications are there that the US will start preparing for this anytime soon? We’ve had decades to share the cost savings from outsourcing to China and failed to do so.

On the other hand, unemploying a large enough segment of society at once might finally force some change in the system, rather than letting it continue limping on in a slow burn where 1-2% of jobs get automated a year.


>On the other hand, unemploying a large enough segment of society at once might finally force some change in the system, rather than letting it continue limping on in a slow burn

Proof of employment required to vote.


This shows how little you understand the actual source of power in the world. The vote is a way to keep that source from changing governments, not the actual source.

This mindset is prevalent today, as if people are incapable of changing government without its permission


Thanks brilliant person on HN for correcting me! I see now that repressive measures are never passed in order to deal with what people in power consider destabilizing or problematic issues.


And breed.


Altman is v2.0 of Zuckerberg, with 1000X more parameters.

I'm worried about v3.0


As an enterprise, they have absolutely no moat. Google, FB or anyone else can come up with a larger system in two weeks' time, and they could well open-source it.


That was my intuition as well, but it seems pretty obvious by now that the tech is a little harder to reproduce than that.

Otherwise, wouldn't they have already done it by now?


Whenever someone is accused of hypocrisy on the internet, Oscar Wilde's corpse must be exhumed and put on the defense. I must have seen these exact lines on HN a dozen times over the last few years. It's as though this is a subject that people cannot reason about in the normal fashion and must fall back on clichés and appeals to authority (meanwhile almost the entire Western canon is against this sort of character failing).

So here's a few quotes that go in the reverse direction:

"That which he sought he despises; what he lately lost, he seeks again. He fluctuates, and is inconsistent in the whole order of life."—Horace, Ep., i. I, 98.

Many of the Greeks, says Cicero,—[Cicero, Tusc. Quaes., ii. 27.]— cannot endure the sight of an enemy, and yet are courageous in sickness; the Cimbrians and Celtiberians quite contrary; "Nothing can be regular that does not proceed from a fixed ground of reason."—Idem, ibid., c. 26.

"Esteem it a great thing always to act as one and the same man."—Seneca, Ep., 150.

“If a man knows not to which port he sails, no wind is favorable.”—Seneca, Ep., 71.


This is interesting, but can you explain the Oscar Wilde reference? I didn't get that part.


"Either this wallpaper goes, or I do.”

― Oscar Wilde


"Did I stutter?" - Stanley Hudson


> When the Facts Change, I Change My Mind. What Do You Do, Sir? - John Maynard Keynes

I sure would love to hear Mr. Keynes acknowledge defeat. If he were alive today, I seriously doubt we (Marxists like myself) would have the pleasure.


Keynes is probably the most influential economist of all time. I disagree with some of his ideas (for likely different reasons than you do), but I can't see any basis for him "admitting defeat" - he's many things, but defeated is not one of them.


Karl Marx is not only the most influential economist of all time; he is many orders of magnitude more influential than Keynes. Only an American could be clueless enough to think this is up for discussion.


We don’t need this kind of garbage here.


They're not wrong though. Marx is so influential even the people who abhor him have adopted his terminology (where do you think the word "capitalism" comes from?). Meanwhile, Keynes' ideas are pretty much forgotten, only coming up in fringe conversations about CBDCs.


Um, no. Keynesian economics is still predominantly what is taught as "economics", and is a huge influence on how governments make economic decisions.

It's a bit like Freud - people only remember him for his kookier ideas, but his basic thinking has permeated all of psychology.


Facts?


Who are 'we'?


I'm not an American. I'm also quite a fan of Marx (or, at least, his analysis of the inherent flaws of capitalism). But Keynesian economics essentially runs the world. Marxism... does not.


Can you name a country with a successful implementation of communism? Successful for me means the people are happy, and implementation means they actually stuck to the ideals of communism and haven't slipped capitalism in.

Here’s a happy map for the world.

https://www.mappr.co/thematic-maps/world-happiness/


William Randolph Hearst anyone? Maybe people have heard of a little movie called Citizen Kane?

"Flood the zone with bullshit" - Steve Bannon


In my last submission I asked, Ask HN: Are leaders genuinely afraid of AI or do they have an agenda?

Here's my answer


Learning to use your tools is part of any job. I don't understand the point of this article or why such a low quality entry is on the front page.


Good point. It's a crazy, mixed up world we live in.


My reason for posting was I was appalled by this interview, which I think was quite irresponsible. I've lost all respect for Lex Fridman for playing the hapless dupe "just asking questions" and not pushing back harder.

Eliezer Yudkowsky: Dangers of AI and the End of Human Civilization | Lex Fridman Podcast #368

https://www.youtube.com/watch?v=AaTRHFaaPG8


I'm over halfway through it, and it seems to me that Lex just can't wrap his mind around the warning that Eliezer is trying to give him. He's so in love with AI that he just can't fathom how things could go wrong.

I'm convinced the threat is real, but have no idea what the timeline is. I hope, like most things, we'll skate by, and just stop calling it AI once it happens, and treat it like any other tool. I strongly doubt that is true.

I suspect what will actually happen is that peak oil will catch us off guard, and we won't have the spare power available to train GPT7, and that will avert the singularity.


Having finished the episode, it seems quite clear to me that Lex just doesn't understand the argument, or doesn't want to understand. He's so used to the idea of falling in love with an AI that he can't see the danger.

I see the danger, let me give an analogy.

What if, according to the laws of physics, it were possible to make a thermonuclear weapon out of beach sand using a microwave oven?

That's something so absurd that we'd never figure it out, but AGI could. That scale of dangerously destabilizing knowledge could show up at any time from a superintelligent AGI.

It's bad enough that nation-states have the resources to make civilization-ending weapons. I think AGI could super-empower those with access to it.

---

On the other hand, what if it were possible to make unlimited clean energy using beach sand, a microwave oven and some whiskey as a catalyst. AGI could make that future possible as well.


Thanks for the recommendations. I have concerns too, but it seems to me some are stoking fears for political and religious purposes. For example, talking about Golems and Sam Altman being Jewish to justify their antisemitism.

https://en.wikipedia.org/wiki/Golem

