Put this on my tombstone after the robots kill me or whatever, but I think all “AI safety” concerns are a wild overreaction totally out of proportion to the actual capabilities of these models. I just haven’t seen anything in the past year which makes me remotely fearful about the future of humanity, including both our continued existence and our continued employment.
The point isn't that the current models are dangerous. My favorite bit from the GPT-4 safety paper was when they asked it how to kill the most people for a dollar and it suggested buying a lottery ticket. (I also wonder how many of the 'safety' concerns about current models are just mislabeled dark humor absorbed from places like Reddit.)
But the point is to invest in working on safety now while it is so much more inconsequential.
And of everyone I've seen talk about it, I actually think Ilya has one of the better senses of the topic, looking at alignment in terms of long term strategy vs short term rules.
So it's less "if we don't spend 8 months on safety alignment this new model will kill us all" and more "if we don't spend 8 months working on safety alignment for this current model we'll be unprepared to work on safety alignment when there really is a model that can kill us all."
Especially because best practices for safety alignment are almost certainly going to shift with each new generation of models.
So it's mostly using the runway available to test things out and work on a topic before it is needed.
The clear pattern for most of human history is conflict between a few people who have a lot of power and the many more people that are exploited by those few. It should be obvious by this point that the most probable near-term risk of AI development is that wealthy and influential groups get access to a resource that makes it cheap for them to dramatically expand their power and control over everyone else.
What will society look like when some software can immediately aggregate an enormous amount of data about consumers and use that to adjust their behavior? What might happen when AI starts writing legislation for anybody that can afford to pay for it? What might AI-generated textbooks look like in 50 years?
These are all tools that could be wielded to improve life for lots of people, or to ensure that their lives never improve. Which outcome you believe is more likely largely depends on which news you consume -- and AI is already being used to write that.
Apparently what made this person fearful was grade school math.
"Though only performing maths on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said."
No, what made this person fearful was a substantial jump in math ability. (Very) obviously they are not afraid of producing a machine that can multiply numbers. They’re afraid of what that capability (and especially the jump in capability) means for other behaviors.
The response gets more reasonable the smaller the model in question. A 1B parameter model passing grade-school math tests would be much more alarming (exciting?) than a GPT-4 sized model doing the same.
GPT-4 probably has some version of the answer memorized. There’s no real explanation for a 1B parameter model solving math problems other than general cognition.
Kids will stop learning maths and logic, because they understand it has become useless in practice to learn such skills when they can ask a computer to solve their problems.
A stupid generation, but one that can be very easily manipulated and exploited by those who have power.
Darn, well I really was hoping my children and grandchildren could continue my wonderful data entry career, but OCR ruined that, and now they can't even do such meaningful jobs as reading emails and scheduling appointments, or doing taxes like an accountant. What meaning will they have in life with all those ever so profound careers ruined!! /s
We need to stop this infinite-rights mentality. Why should continued employment be guaranteed for any job? That's really not how we got to where we are today; quite the opposite, actually. If it's ok with people, I'd like to see humans solve big problems and go to the stars, and that's going to take AGI and a bunch of technological progress, and if that results in unemployment, even for us "elite" coders, then so be it. Central planning and collectivism have such a bad track record, why would we turn to them now at such a critical moment? Let's have lots of AGIs all competing. Hey, anyone at OAI who knows whatever Q* trick there might be, leak it! Get it to open source and let's build 20 AI companies doing everything imaginable. wtf everyone, why so scared?
perhaps not rights to have a job in general but there is value in thinking about this at least at the national scale. people need income to pay taxes, they need income to buy the stuff that other people sell. if all the people without jobs have to take their savings out of the banks then banks can't loan as much money and need to charge higher interest rates. etc etc
if 30% of the working population loses their jobs in a few months there will be real externalities impacting the 70% who still have them because they don't exist in a vacuum.
maybe everything will balance itself out without any intervention eventually, but it feels to me like the rate of unprecedented financial ~events~ is only increasing, with greater risks requiring more intervention to prevent catastrophe or large scale suffering
oops yeah, sounds absurd, i was falling asleep when i wrote it. pretty sure i was thinking about the first few months of covid lockdowns in the US as a comparison when i wrote the reply
It will take years, not months, and I'm against any intervention. Redistribution and socialism-inspired government policies will just make things worse. Progress requires suffering; that's the history of our species, that's the nature of reality.
I know of at least one person making nearly 6 figures doing data entry.
It turns out some websites work hard enough to prevent scraping that it is more cost effective to just pay a contractor to go look at a page and type numbers in rather than hire a developer to constantly work around anti-scraping techniques (and risk getting banned).
The point isn't forbidding anything, it is realizing that technological change is going to cause unemployment and having a plan for it, as opposed to what normally happens where there is no preparation.
Yup. Likewise, a key variable in understanding this is... velocity? I.e. the wheel is cool and all, but what did it displace? The horse is great and all, but what did it displace? Did either displace most jobs? Of course not. So people can move from one field to another.
Even if we just figured out self-driving it would be a far greater burden than we've seen previously... or so i suspect. Several massive industries displaced overnight.
An "AI revolution" could do a lot more than "just" self-driving.
This is all hypotheticals, of course. I'm not a big believer in the short term effect, to be clear. Long term though... well, i'm quite pessimistic.
Past technological breakthroughs have required large, costly retools of society, though. Increasingly, those retools have resulted in more and more people working in jobs whose societal value is dubious at best. Whether the next breakthrough (or the next five) finally requires a retool whose cost we can't afford is an open question.
> Increasingly, those retools have resulted in more and more people working in jobs whose societal value is dubious at best.
This implies to me that in the past more people worked in jobs with good societal value, which I assume would mean it was better for them, and better for society. So I'm genuinely curious when that was and why. It sounds like a common romanticized-past misconception to me.
An increasing number of people being unproductive doesn't rule out an increase in total production. It does suggest that for those whose jobs are now obsolete, there is increasingly no alternative to subsidizing their entire existence. We've kept pace so far, but a majority of people being in a situation where their labor is worthless is a huge potential societal fault line.
I think the argument here is that we are losing the _good_ jobs. It's like we're automating painting, art and poetry instead of inventing the wheel. I don't fully agree with this premise (lots of intellectual work is rubbish) but it does sound much more fair when put this way.
I doubt the people who experienced the technological revolution of locomotives and factories imagined the Holocaust either. Of course technology has been and can be used for evil.
>> Exactly. The rational fear is that they will automate many lower middle class jobs and cause unemployment, not that Terminator was a documentary.
> Wasn't this supposed to happen when PCs came out?
Did it not?
PCs may not have caused a catastrophic level of unemployment, but as they say "past performance is not a guarantee of future results." As automation gets more and more capable, it's foolish to point to past iterations as "proof" that this (or some future) iteration of automation will also be fine.
Occupations like computer (the human kind), typist, and telephone switchboard operator were completely eliminated when the PC came out. Jobs like travel agent are in permanent decline, minus select scenarios where they are attached to luxury. Cashier went from a decent, non-laborious job to a literal starvation gig because the importance of a human in the role became negligible. There are many more examples.
Some people managed to retrain and adapt, partially thanks to software becoming much more intuitive to use over the years. We don't know how big the knowledge gap will be when the next big wave of automation comes. If retraining is not feasible for those at risk of losing their careers, there had better be abundant welfare or society will be in great turmoil. High unemployment and destitution are the most fundamental drivers of social upheaval throughout human history.
Yeah but then capitalism breaks down because nobody is earning wages. One of the things capitalism is good at is providing (meaningless) employment to people because most wouldn’t know what to do with their days if given the free time back. This will only continue.
To some degree. Certainly the job of "file clerk" whose job was to retrieve folders of information from filing cabinets was made obsolete by relational databases. But the general fear that computers would replace workers wasn't really justified because most white-collar (even low end white-collar) jobs required some interaction using language. That computers couldn't really do. Until LLMs.
Employment is only necessary because goods do not exist without work. With AI able to work to satisfy any demand, there will be no point in human employment/work. There will be turmoil during the transition between rule sets, though.
I want to see reliable fully autonomous cars before I worry about the world ending due to super-AGI. Also, have we figured out how to get art generators to always get the number of fingers right, and text generators to stop making shit up? Let's not get ahead of ourselves.
from one perspective we already have fully autonomous cars; it's just making them safe for humans and fitting them into a strict legal framework for their behavior that needs finishing before they're released to the general public (comma.ai being a publicly available exception)
Ok, so you accept that latest-gen art generators can do fingers. I'd argue from the latest Waymo paper that they are reliable enough to be no worse than humans.