That is true. I actually was ambiguous in my post, because I meant code that generates stuff, not code that was generated by AI, even though I don't like the latter, either.
I find it offensive to have any generative AI code on my computer.
Settings → Apple Intelligence and Siri → toggle Apple Intelligence off.
It's not enabled by default. But in case you accidentally turned it on, turning it off gets you a bunch of disk space back as the AI stuff is removed from the OS.
Some people are just looking for a reason to be offended.
The theatrics of being *forced* to use completely optional, opt-in features has been a staple of discussions regarding Apple for years.
Every year, macOS and iPadOS look superficially more and more similar, but they remain distinct in their interfaces, features, etc. But the past 15 years have been "we'll be *forced* to only use Apple-vetted software, just like the App Store!"
And yeah, the Gatekeeper mechanism got less straightforward to get around in macOS 15, but … I don't know, someone will shoot me down for this, but it's been a long 15 years to be an Apple user with all that noise going on around you from people who really don't have the first clue what they're talking about — and on HN, no less.
They can come back to me when what they say actually happens. Until then, fifteen dang years.
I think I know what you meant. You mean you don't want code that runs generative AI on your computer? But what you wrote could also mean you don't want any code running that was generated by AI. Even with open source, your computer will be running code generated by AI, as most open source projects are using it. I suspect it will be nearly impossible to avoid. Most open source projects will accept AI-generated code as long as it's been reviewed.
Good point, and you were right: I was ambiguous. I meant a system that generates stuff, not stuff that was generated by AI, though I'd rather not use stuff that was generated by AI, either. You are also right that avoiding it will become impossible, and probably already is. Not a very nice world, I think. The best thing to do, then, is to minimize it and avoid computers as much as possible...
I didn't say "generating code"; I meant I find it offensive to have any code sitting on my computer that generates code, whether I use it or not. I prefer minimalism: have on my computer only what I will use. And I have a limited data connection, which makes ever-larger updates full of useless code I won't use even more of a burden.
I was musing before sleep days ago about how maybe the internet still is just a fad. We’ve had a few decades of it, yeah, but maybe in the future people will look at it as boring tech just like I viewed VCRs or phones when I was growing up. Maybe we’re still addicted to the novelty of it, but in the future it fades into the background of life.
I’ve read stories about how people were amazed at calling each other and would gather at the local home that had a phone installed, a gathering spot, and make an event of it. Now it’s boring background tech.
We kind of went through a phase of this with the introduction of webcams. Omegle, Chatroulette, it was the Wild West. Now it’s normalized, standard for work with the likes of Zoom, with FaceTiming just being normal.
A few years ago I would've said you were incredibly cynical, but nowadays with so much AI slop around social media and just tonnes of bad content I tend to agree with you.
I think younger me would think the same. It's not just the AI slop or bad content but also the intrusive tracking, data collection, and the commercialization of interests. I just feel gross participating.
I do think there is a lot of valid criticism of the internet. I certainly don't think it's an annoying fad but I do think it has caused a lot of bad things for humanity. In some ways, life was much better without it, even though there are some benefits.
It is impossible to voice a negative opinion of AI without getting silly comments like this, just one step removed from calling you a boomer or a Luddite. Yes, all technological progress is good, and if you don’t agree you’re a dumb hick.
AI maximalists are like those 100 years ago that put radium everywhere, even in toothpaste, because new things are cool and we’re so smart you need to trust us they won’t cause any harm.
I’ll keep brushing my teeth with baking soda, thank you very much.
On the other side of that are the people screaming that AI is murder.
There are lots of folks like this, and it's getting exhausting that they make being anti-AI their sole defining character trait: https://www.reddit.com/r/ArtistHate
The ML hype-cycle has happened before... but this time everyone is adding more complexity to obfuscate the BS. There is also a funny callback to YC in the Lisp story, and to why your karma still gets incinerated if you point out its obvious limitations in a thread.
I know there are problems on both sides, but I simply don't think it's logical or humane for other countries (especially the U.S.) to send Israel enormous amounts of high-powered weaponry to fight, either. Both sides are committed to hating each other so why add fuel to the fire? Not to mention that Israel was foisted upon the Palestinians by the British and the Israelis just started claiming that the land was theirs due to their religion of supremacy.
Again, yeah, obviously the Palestinians have some responsibility in this conflict as well, and I don't know if it could ever be solved, but why send huge amounts of weapons there? It feels like the U.S. is sustaining a game of the card game "war".
There needs to be a worldwide standard, such as an HTML tag, that says "no training". And a few countries need to make it a punishable offense to violate the tag. The punishment should be exceptionally severe, not just a fine. For example: any company that violates the tag should be completely barred from operating, forever.
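To make that concrete: the signal just has to be machine-readable at scrape time. Purely as an illustrative sketch (the "noai" token below is hypothetical, nothing like it is standardized today), a compliant crawler could check a robots meta directive before adding a page to a training corpus:

    # Minimal sketch of a crawler honoring a hypothetical "noai" opt-out tag.
    # The token name is illustrative; no such standard exists today.
    import urllib.request
    from html.parser import HTMLParser

    class RobotsMetaParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.no_training = False

        def handle_starttag(self, tag, attrs):
            # Look for <meta name="robots" content="... noai ...">
            if tag != "meta":
                return
            a = dict(attrs)
            name = (a.get("name") or "").lower()
            content = (a.get("content") or "").lower()
            if name == "robots" and "noai" in content:
                self.no_training = True

    def allowed_for_training(url):
        # Fetch the page and honor the opt-out before using it for training.
        with urllib.request.urlopen(url, timeout=10) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        parser = RobotsMetaParser()
        parser.feed(html)
        return not parser.no_training

The hard part isn't expressing the tag, it's the enforcement, which is exactly why the penalty has to have teeth.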
That will just lead to situations where one company scrapes the site, cleans the content of tags, and sells the data, and another does the training on the precleaned data. The first one hasn't trained and the second one never saw the tag.
This isn't a new concept in law. It's similar to buying goods that were stolen or procured through illegal means. Here's the US law that applies when it happens across state lines:
Note that it requires the defendant to know the goods were illegally taken. Can be hard to prove, but not impossible for companies with email trails. The fun question is, what will the analog be for the government confiscating the illegally "taken" data? A guarantee of deletion and requirement to retrain the model from scratch?
>There needs to be a worldwide standard, such as an HTML tag, that says "no training"
Any country that seriously implemented this would just end up being completely dominated by the autonomous robot soldiers of another country that didn't, because it effectively bans the development of embodied AGI (which can learn live from seeing/reading something, like a human can).
I get that you have your own opinion, but I'm personally tired of living in the butter-churning era and would prefer that this all went a bit faster.
I want my real time super high fidelity holo sim, all of my chores to be automatically done, protein folding, drug discovery. The life extension, P = NP future. No more incrementalism.
If the universe only happens once, and we're only awake for a geological blink of an eye, I'd rather we have an exciting time than just be some paper-pushing animals that pay taxes and vanish in a blip.
I'd be really excited if we found intelligent aliens, had advanced cloning for organ transplants and longevity, developed a colony on Mars, and invented our robotic successor species. Xbox and whatever most normal people look forward to on a day to day basis are boring.
There is already a beautiful, exciting world out there full of animals and plants and we don't need AI or some computer crap to experience it. The problem is, creating all this AI and advanced technology is directly crushing that world.
> There is already a beautiful, exciting world out there full of animals and plants and we don't need AI or some computer crap to experience it.
I'm glad that this works for you, but I want more.
We're temporary apes on a soon to be permanent addition of metallicity to our sun's outer atmosphere. I don't think we should romanticize or hold anything sacred about our very temporary place in the universe.
We are metastable and ephemeral. Everything in this world is.
That would not be possible without the continued advancement of technology. Technology has allowed us to rapidly expand the human population (see all of the healthcare and transportation technology), which has resulted in a vastly higher extinction rate. Pretty much all the damage we have done on earth is only possible due to advanced technology.
The openness of the internet is a good thing, but it doesn't come without a cost. And the moment we have to pay that cost, we don't get to suddenly go, "well, openness turned out to be a mistake, let's close it all up and create a regulatory, bureaucratic nightmare". This is the tradeoff. Freedom for me, and thee.
It is definitely the responsibility of anyone suing someone who trained a model on copyrighted data to prove that it isn't fair use; they have to show how it violated the law, and while it's in the best interest of those organizations to make things easier for the court by showing why it is fair use, they are technically innocent until proven guilty.
Accordingly, anyone on the internet who wants to make comments about how they should be able to prevent others from training models on their data needs to demonstrate competence with respect to copyright by explaining why it's not fair use, as currently it is undecided in law and not something we can just take for granted.
Otherwise, such commenters should probably just let the courts work this one out or campaign for a different set of protection laws, as copyright may not be sufficient for the kind of control they are asking over random developers or organizations who want to train a statistical model on public data.
You've got it backwards. It's on the defendant to prove that their use is fair. The plaintiff has to prove that they actually own the copyright, and that it covers the work they're claiming was infringed, and may try to refute any fair-use arguments the defense raises, but if the defense doesn't raise any then the use won't be found fair.
It's true that the process is copyright strike/lawsuit -> appeal, but like I said, it's in their best interests to just prove that it's fair use because otherwise the judge might not properly consider all facts, only hear one side of the story and thus make a bad judgement about whether or not it is fair use. If anything, I'm just being pedantic, but we do ultimately agree here I think.
Well, lawsuits have multiple stages. First the plaintiff files the suit, and serves notice to the defendant(s) that the suit has been filed. Then there's a period where both sides gather evidence (discovery), then there's a trial where they present their evidence & arguments to the court. Each side gets time to respond to the arguments made by the opposing party. Then a verdict is reached, and any penalties are decided by the court. So there's not really any chance the judge only hears one side of the story.
That said, I think we do agree. The plaintiff should be prepared to refute a fair-use argument raised by the defendant. I'm just noting that the refutation doesn't need to be part of the initial filing, it gets presented at trial, after discovery, and only if the defendant presents a fair-use defense. So they don't have to prove it's not fair use to win in every case. I'm probably also being excessively pedantic!
> It is definitely the responsibility of anyone suing someone who trained a model on copyrighted data to prove that it isn't fair use, they have to show how it violated law, and while it's in the best interest of those organizations to make things easier for the court by showing why it is fair use, they are technically innocent until proven guilty.
No, fair use is an affirmative defense for conduct that would otherwise be infringing. The onus is on the defendant to show that their use was fair.
Yeah, I don't think downloading my paid-for books, from an illegal sharing site, to scrape and make use of, is in any way fair use.
From the 1841 decision in the US, Folsom v. Marsh:
> reviewer may fairly cite largely from the original work, if his design be really and truly to use the passages for the purposes of fair and reasonable criticism. On the other hand, it is as clear, that if he thus cites the most important parts of the work, with a view, not to criticize, but to supersede the use of the original work, and substitute the review for it, such a use will be deemed in law a piracy
Further, to be "transformative", it is required that the new work is for a new purpose. It has to be done in such a way that it basically is not competing with the original at all.
Using my creative works to create creative works is rather clearly an act of piracy. And the methods engaged in to enable doing so are also clearly piracy.
Where would training a model here, possibly be fair use?
Art is highly derivative for the most part, and artists are constantly learning from each other. The jury's still out on whether this applies to machines. Training an LLM on data is not the same as copying it. As such, the case right now against Meta is wholly focused on the acquisition part and not the training itself.
We must separate the act of training from the act of distribution (which could include filtering). Training and personal use seems well within the scope of fair use.
I do however understand why you would be upset if Meta or OpenAI hosts/distributes a model that could fully reproduce your books (assuming that is really the case) and make money providing that information.
That said (and I'm not trying to move goalposts here), I just don't personally find Meta in particular to be morally at fault, as I already hold views on the freedom of myself and others to share information with each other that may be incompatible with yours. To be clear, as an artist and open source engineer I do have an informed personal opinion on this matter: I have deeply considered, and continue to reconsider, the balance of freedoms required for artists to make a living off their craft without infringing upon what I see as inalienable personal freedoms.
Meta released their models publicly and freely after investing a lot of time and money into them, and I see it as a net good for humanity to have access to these incredible neural networks that were relegated to science fiction just a few years ago.
I also think LLMs are going to force us to rethink our entire approach to copyright. Whether that means abandoning our current notions of copyright entirely, or creating residuals for hosted commercial LLMs, or something else, I don't know.
> victory will belong to the savvy blackhat hacker who uses AI to generate code at scale
This is just it: AI, while providing some efficiency gains for the average user, will become simply too powerful. Imagine a superpower that allows you to move objects with your mind. That could be a very nice thing for many people to have, because you could probably help people with it. That's the attitude many hacker-types take. The problem is that it would also allow people to kill instantly, which means that telekinesis would just be too powerful to juxtapose against our animal instincts.
AI is just too powerful – and if more people took a serious stand against it, it might actually be shut down.
Of course it is. If enough people were truly enraged by it, if some leader were to rile up the mob enough, it could be shut down. Revolts have occurred in other parts of the world, and things are getting sufficiently bad that a sizable enough revolt could shut AI down. All we need is a sufficient number of people who are angry enough at AI.
> a software update could easily cripple its ability to run on your local machine
A software update collaborated on by Microsoft, Apple, and countless volunteer groups managing various other distributions?
The cat really is out of the bag. You could probably make it a death penalty in the whole world and some people would still use it secretly.
Once things like this run on consumer hardware, I think it's already too late to pull it down fully. You could regulate it, though, and probably have a better chance of limiting the damage; I'm not sure an outright ban could even have the effect you want.
Models released today are already useful for a bunch of stuff. Maybe over the course of 100 years they could be considered "out of date", but they don't exactly bitrot by themselves just because they sit on a disk; I'm not sure why they'd suddenly "expire" or whatever you're trying to hint at.
And even over the course of 100 years, people will continue the machine learning science, regardless of whether it's legal or not; the potential benefits (for a select few) seem to be too good for people to ignore, which is why the current bubble is happening in the first place.
I think you under-estimate how difficult it is to get "most of the world" to agree to anything, and under-estimate how far people are willing to go to make anything survive even when lots of people want that thing to die.
> I think you under-estimate how difficult it is to get "most of the world" to agree to anything
agreement isn't needed
its success sows the seeds of its own destruction: if it starts eating the middle class, politicians in each and every country who want to remain electable will move towards this position independently of each other
> and under-estimate how far people are willing to go to make anything survive even when lots of people want that thing to die.
the structural funding is such that all you need to do is chop off the funding from big tech
the nerd in their basement with their 2023 macbook is irrelevant
Plenty of past civilizations have thought they were invulnerable. In fact, most entities with power think that they can never be taken down. But countless empires in the past have fallen, and countless powerful people have lost their wealth and power overnight.
Rather, it’s many different types of software running on many different systems around the world, each funded by a different party with its own motives. This is no movie…
True, but the system only exists because it is currently economically viable. A mass taboo against AI would change that. And many people outside of tech already dislike AI a lot, so it's not inconceivable that this dislike could be fuelled into a worldwide taboo.
> True, but the system only exists because it is currently economically viable.
The "system" isn't a thing, but more like running apps, some run on servers, other consumer hardware. And the parts that run on consumer hardware will be around even if 99% of the current hyped up ecosystem dies overnight, people won't suddenly stop trying to run these things locally.
I get the general "too many variables" argument, but the idea that humans have no means of stopping any of these apps/systems/algorithms/etc if they get "out of control" (a farce in itself as it's a chat bot) is ridiculous.
It's very interesting to see how badly people want to be living in a sci-fi flick and to be active participants in one. I think that's far more concerning than the AI itself.
Hmm, good point. Also, when COVID struck, although it took some time, everyone collectively participated in staying home (more or less; I know some people didn't, but participation was vast). We can do the same if we choose.
Eh, it's exactly the Johnny Depp movie that would simplify this into "just flip the power switch".
LLM code already runs on millions of servers and other devices, across thousands of racks, hundreds of data centers, distributed across the globe under dozens of different governments, etc. The open source models are globally distributed and impossible to delete. The underlying math is public domain for anyone to read.
Sure, but those millions of servers and devices are not directly connected (nor can they be connected by the AI). The plot in the movie I shared necessitated the AI being able to turn any computer into extra compute for itself, which is what a "we can never shut it down" scenario requires.
The power switch is still king, even if it's millions of power switches versus one.
> the model could be created with ethically, legally, voluntarily sourced training data
There is no such thing as ethical AI, because "voluntary" usually means the participants consent without really understanding what they are making: just another tool in the arms race of increasingly sophisticated AI models, which will largely be needed just to "one up" the other guy.
"Ethical" AI is like forced pit-fighting where we ask if we can find willing volunteers to fight for the death for a chance for their freedom. It's sickening.
> Deep research and similar tools alone have helped me navigate complex legal matters recently for my incorporation,
On the flip side, it will make it even easier for corporations to use the legal system in their favour. Even if both sides have GenAI, corporations will use it against individuals more easily and effectively, given the nature of corporations and their ability to invest in better tools than the individual can.
So it's just an arms race/prisoner's dilemma and while it provides incremental advantages, it makes everything worse because the system becomes more complex and more oppressive.
> AI-first strategies aren't just failing individual companies—they're creating systematic corporate regret on an unprecedented scale. 55% of companies that replaced humans with AI now admit they made wrong decisions about those layoffs.
The real problem is that companies will just learn not to be so blatant about it. The replacement will happen more strategically, over a longer term, with promises about AI augmentation rather than replacement. But over a longer period of time, job openings will go down nonetheless, and over decades we will see the same thing, until it becomes quite hard to get a new job. And I don't think UBI is a good answer either, because too many people are not self-directed enough to be happy without expending effort toward the meaningful goal of obtaining necessities.
Not sure about that, I've had great fun vibe coding like another commenter said, as I can simply write what I want in English and see a result immediately. Of course, I'd never use this for production, but for prototyping, it's nice. This is the opposite of industry, as you state.
I'm not talking about short-term gains like you having fun, but about long-term effects on the industry of programming. Of course, technology always provides some short-term fun while elevating activity to higher industry levels in the long run.
At the end of the day, the people who put in the effort get ahead. I don't worry about the short or long term at all, as long as one is competent. If fewer are competent due to vibe coding their entire career, all the better for me as a competent professional, as with lower supply comes higher demand.