To be fair, unsupervised porn access isn’t great for young people, speaking from personal experience. I don’t think sex would necessarily be as damaging as porn (assuming condoms are used)
The easiest way to get people to turn their brains off is "think of the children".
My rights will not be infringed because of people who won't parent their kids. Once that kind of thinking is accepted as reasonable, anything can be done for any reason simply because poor parenting might give children access.
This appears reasonable to me. A 15 year old can’t purchase alcohol online and have it delivered, right? We agree that is reasonable. There are long-standing laws against children buying porn in real life. It does seem reasonable to me to require age verification for online porn.
At this point everyone should assume that everything they do online is traceable to them and can be leaked. In the scenario you describe tracing porn activity to you is easier to do than it is now. I don’t know what the solutions are but I think it reasonable to have some sort of age verification process online for the activities that require age verification if done offline.
No, it should not. But it does provide one with a basis, a starting point, for how to first approach and view a given topic. We all use our experiences to form mental models for how things work. So while anecdotal evidence should not drive policy, it should drive which policies one gains an interest in.
Also, responding with “anecdotal evidence should not drive policy” isn’t really helpful without some sort of refutation of the evidence.
Old unibody MacBooks could do that. The lack of choice now is due to a non-standard architecture, which the Mac Pro seems to confirm (not “unibody”, but the RAM is still not upgradeable)
It’s true, but also it would cost little to unlock the device after they add it to the Vintage category.
I’m sure that the EU will eventually come up with legislation that forces some larger manufacturers to open artificial gates automatically after they declare the products “obsolete”
The Pixel, for example, already has a secure yet user-unlockable bootloader. So do modern x86_64 PCs. Statements like these, claiming that only Apple can properly secure a device (and hence that users deserve to be locked out), simply show astounding ignorance.
Sure, but they were designed with that in mind, and have presence and authentication requirements that, as I understand it, are not retrofittable to older devices.
My claim isn’t “it’s impossible to implement a secure bootloader that also has escape hatches”. I’m saying it’s borderline impossible to do that retroactively for a fleet of obsolete devices in a way that doesn’t compromise their security.
Yes it is slow on older hardware. It was slow even back then. You must have forgotten the time it took to paint the rest of the page if you scrolled or zoomed out a liiittle too fast.
Sure, this was due to the minuscule amount of RAM Apple ships their products with, but “Safari is slow” is appropriate.
Also, “Safari is slow” on my i9 as well: I just need to open a GitHub PR with 10+ files to see it slow to a crawl, whereas Chrome never struggles. But hey, its scrolling is buttery smooth even if clicking doesn’t work.
They have a manual review step when you submit a game. Although you're right that they can't catch everything, they can certainly catch obvious things.
I'm sure they mostly just don't want to wind up in court with a lawyer being able to say that they let [blatant example here] get published on their store. So long as they can credibly claim that there was no way for them to tell something was in an objectionable category, I'd imagine they're fine with it.
I doubt their manual review actually does much of anything. There are already tons of "games" that don't actually function that are just pre-built engine assets shoved together.
I wonder how automated their system is. They obviously wouldn't boot the game up and start walking around because they can just extract the media files and check. But I'm curious if there is a system that identifies copyrighted images/video stills and searches for copyrighted words.
And neither is choosing to act in a situation where the legality isn't clear.
I understand that OpenAI et al would like to assure all their investors and customers that there's nothing legally problematic with using an AI to launder away copyright infrigement, but we're going to need a few lawsuits to have the matter settled.
And Valve leadership has made a decent decision: they don't want to be the defendants in what could be a costly and time-consuming lawsuit over something they didn't make.
A lot of AI-generated images retain the watermark of a copyrighted image they were trained on. If you sell something with that image without an agreement from the rights holder, it is not fair use.
It is completely reasonable for Valve to forbid this until it is sorted out. Keep in mind they are a company of IP creators, creating a marketplace for IP creators. The whole reason Steam was created was to establish a DRM system that fought the piracy of Half-Life. I am on the side of Valve in this.
I believe the AI generates a watermark because so many examples contained it.
Imagine taking a really dumb gig worker, showing him 10,000 images, some of them with watermarks, and then telling him "draw a red car, kinda like the kind of images you saw". There's a decent chance you'll get a red car that looks nothing like any car in the data set (original work), and yet he'll paint a recognizable watermark on top, because so many examples contained it, you said "kinda like the kind of images you saw", and he doesn't understand that the watermark isn't meant to be part of the picture. I believe that's what's happening.
They don’t ‘retain’ a watermark. They ‘reproduce’ the watermark.
It’s entirely possible for a diffusion model to produce an original work and yet still hallucinate a ‘shutterstock’ watermark onto it, in much the same way as GPT can hallucinate valid-looking citations for legal cases that never happened.
That's a completely fair point a few people have made. But I think the idea is, if you are a creator, the AI is doing something that you might call copying if it were a person. If it does that, what is your redress as the creator?
To correct the common misconception:
Sometimes AI image generators insert a watermark because they have seen a lot of watermarks on certain kinds of images during training. This does not mean that the image itself is a copy of any particular image in the training data.
Producing (distorted) copies of images in the training data takes some real effort, and typically only occurs for images which are heavily repeated in the training data... Most of the complaints along these lines can be compared to complaints that cars cause massive bodily harm if you steer them into lampposts: the problem is easily preventable by not driving into a lamppost.
I think the "well it's transformative" argument is pretty bad faith and I think a lot of the people making it might know that.
Generative AI cannot exist without pre-existing bodies of work created by human labor. It also displaces that labor and hurts the people whose content was a requirement for AI to exist. From this view, AI is not fair use.
There are multiple jurisdictions where there have been rumblings that an AI-generated work is possibly a derived work from every single work that the AI was trained with. This hasn't been properly tested in court, but I would give very high odds that the standard will be upheld at least somewhere where Steam sells things.
If this is true, then ordinary copyright law means that AI-generated media cannot be used unless you have a release from every bit of training data you used. At least some of the currently existing AIs were trained on datasets for which such releases are impossible, so they should not be used.
Also, for the love of god, do not use any of the AI coding assistants, or if you do, at least never publicly admit you do.
> multiple jurisdictions where there have been rumblings that an AI-generated work is possibly a derived work from every single work that the AI was trained with
This should apply to humans as well then, because brains ultimately do the exact same thing. Nobody creates art in a vacuum.
Sure: the method of making the image, such as being AI-generated, is entirely irrelevant in terms of IP enforcement. You could cut a cross-section from a log that had coincidentally formed the Nike symbol with its rings, and if you slapped a picture of it on your line of sportswear, you'd better believe you're going to get owned.
But if they see an increased risk of IP violations from AI-generated assets (and given the Getty red carpet debacle, that's entirely reasonable), banning it will probably save them a whole lot of money on manual game reviews.
The Nike example is trademark rights, not copyright.
If you give a worker 5 examples of cars, and tell him "draw me a new car in this style", and he does so (from memory without clearly copying any individual example), it's unlikely to be a copyright or other IP violation.
> Judge John M. Walker, Jr. of the U.S. Court of Appeals for the Second Circuit noted in Arica v. Palmer that a court may find copyright infringement under the doctrine of "comprehensive non-literal similarity" if "the pattern or sequence of the two works is similar".
You're going to have a much harder time proving that you absolutely did not copy something if you had an image of what you're being accused of copying in the dataset you used to make it. If the images are deemed substantially similar, it will be deemed an infringement.
> If you give a worker 5 examples of cars, and tell him "draw me a new car in this style", and he does so
Yeah, that's great, but it actually has nothing to do with how the AI works. A worker learning through observation about 5 cars is a hugely different situation from an AI company scraping 400 million often-copyrighted images onto their servers to run through a training algorithm, creating a for-profit system that displaces the people who produced the original images.
Being cautious, when not being cautious could mean lots of big lawsuits against you, doesn't seem that ultra-super conservative. I hope this ends up going the other way, but I understand Valve's calculus here.
That's meaningfully different. "Can't be copyrighted" doesn't mean "can't be sold", or "someone else owns the copyright". It just means someone can copy and resell the generated portions without payment/licensing.
I'm not sure. I'm not an expert, but it doesn't seem that different from including public domain text and art in your game.
I assume that, if it is true that Valve isn't allowing games with generated images, it's because (they feel) the legal status could change, not because of the current status.
There's also a quality argument. If Valve lets a bunch of slapdash AI hackjobs onto the store that were developed in a week by people who don't know anything about game development, and that makes it harder to discover well made games, that's a meaningful business risk for them. They're responsible for curating the steam store.
That is a shallow regurgitation of their opinion that has been repeated out of context in headlines, but it misses their point. The Copyright Office's opinion can be better summed up as:
1. Copyright protects work that humans create
2. Humans sometimes use tools to create their works, that is okay
3. Y'all make up your mind whether your AI is some sentient being or whether it's just a tool. We're just lawyers.
If the wind blows and your typewriter falls off a shelf and writes a novel, it isn't subject to copyright either. That doesn't mean that all works written using a typewriter aren't subject to copyright. It means a human must be part of the creative process.
But what if the wind blows, and my laptop falls off a shelf and writes the source code for Windows 95, but reindented, with some implementation details and variable names changed?
It’s pretty clear that the “neural networks are just a tool” ruling is going to have to be revisited eventually (and probably soon).
> But what if the wind blows, and my laptop falls off a shelf and writes the source code for Windows 95, but reindented, with some implementation details and variable names changed?
Simple. If it wasn't created by a human, it's not eligible for copyright. The law is quite clear about this.
Microsoft gets the copyright to Windows 95 because they wrote it with humans. You wouldn't get it because you didn't write it. Your laptop wouldn't get it because it isn't a human.
> It’s pretty clear that the “neural networks are just a tool” ruling
I think you misinterpreted the above. There is no "“neural networks are just a tool” ruling".
The copyright office never said neural networks were or were not a tool.
They said if a human makes a creative work, and they happen to use a tool, then it is eligible for copyright. As it always has been.
All they said is what every lawyer already knows, which is that a work has to have an element of human creativity in order to be eligible for copyright.
But, if my laptop’s implementation of Windows 95 is not eligible for copyright protection, then I can freely redistribute it, because no one can use copyright law to stop me, in a runaround of Microsoft’s copyright on Windows 95 (of which the laptop-generated version is clearly a derivative).
This is exactly the ambiguity Valve is concerned about.
But the hypothetical world in which your laptop falls off a shelf and randomly writes Windows 95 is a fake one.
LLMs aren't random number generators running in isolation.
They're trained on copyrighted material. If they regurgitate copyrighted material, we know where it came from. It came from the training material.
Valve is rightly concerned that non-lawyers have no clue what they're getting themselves into when using the current generation of AI models. The inability to determine whether an output is a substantial copy of an input is not a free pass to do whatever you want with it, it's a copyright infringement roulette.
There are way too many people in this industry who believe that building a technology which makes compliance impossible is the same thing as making compliance unnecessary.
> US Copyright Office has stated unequivocally that AI works cannot be copyrighted, or otherwise protected legally.
The “or otherwise legally protected” piece is outright false (and would be outside their scope of competence if true); the other part is true but potentially misleading (a work cannot be protected to the extent that AI, and not the human user, “determines the expressive elements of the work”, but a work made with some use of AI, where the human user does that, can be protected to the extent of the human contribution.)
The duty to disclose elements that are created by generative AI in the same guidance is going to prove unworkable, too, as generative AI is increasingly embedded into toolchains with other features and not sharply distinguished, and with nontrivial workflows.
This has everything to do with CYA, the issue is AI trained with copyrighted material is a huge gray area and they don’t want to be in the gray area. That’s rational and has zero to do with “conservative”.
This is likely not set in stone and after the copyright laws and courts catch up and decide what to do, Valve will likely go back and update their policies accordingly.
> This has everything to do with CYA, the issue is AI trained with copyrighted material is a huge gray area and they don’t want to be in the gray area. That’s rational and has zero to do with “conservative”.
The word "conservative" isn't a political word in all (or even, I would have thought, most) contexts: its normal meaning is similar to "chosen so as to be careful". For example, a "conservative estimate" isn't "an estimate that leans to the right of the political spectrum": it is an estimate which has been padded in the direction that protects you if you turn out to be wrong.
When someone says they are being "ultra super conservatively cautious", they are merely being super extra extra doubly cautious, stacking similar adjectives (as one might do with something like "carefully"). So, wanting to avoid being in a gray area is dead center to being "conservative" in one's curation or legal strategy.
I'll suggest a real motivation: they want to make their own AI-generated game and have first-mover advantage while they work out all the scary AI copyright issues, which they already have to deal with anyway, because the same problem exists with human-generated art.
Why do I say that? They want the developer to prove they only used material they created to do the training, and Valve, unlike the rest of us, has the resources to follow that rule.
Microsoft wants to leverage LLMs to expand their influence in the software development market. For them, Copilot is both revenue source and a moat, so it behooves them to claim that these models don't constitute copyright infringement. But there's no business benefit to Valve in allowing AI-generated art assets on Steam, and a small (though nonzero) amount of risk.
The best case scenario for Microsoft would be supplying the world with programming tools far ahead of all others (no idea, haven't tried any of that stuff), while maybe not getting sued to bits. The best case scenario for Valve would be not getting sued to bits, while being spammed by even more low-effort money-grab attempts hoping to luck into virality than they already are.
At first approximation, yeah, the risk of getting sued to bits might be roughly the same. But the upside is not.
And Microsoft isn't the government. So I see no bearing on the actual issue at hand, which is Valve protecting its own ass from lawsuits that are in the realm of murk at best.
It isn’t a settled legal issue yet. It could be that Valve and Microsoft are responding to different incentives, because they have different business models. But it could also just be that their lawyers have different legal opinions.
Width media queries are far older than you think, so it definitely makes sense that they’re relative to the viewport. Firefox was the first to implement them in 2006. At that time “components” were in their fetal stage.
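As a quick illustration (a generic example, not taken from anyone's actual code), a width media query has always matched against the viewport rather than any containing element:

```css
/* Matches when the *viewport* is at least 600px wide, regardless of
   how wide the element's parent container happens to be. */
@media (min-width: 600px) {
  .sidebar {
    display: block;
  }
}
```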
Generally speaking, you need either tomato sauce or mozzarella to call it a pizza. Without either one, it would be called “pizza pane”, but that’s basically flatbread, to be eaten as bread. It’s pretty rare to find a “pizza” without some sort of sauce to make it wet.
This is such a bad take. People travel for entertainment, they don’t need to become a different person after coming back from Las Vegas.
Some people watch a movie, others go to a museum in a different city.
Additionally, the premise that all travel is meaningless discards what you could experience by spending 5 days in rural Bangladesh after living your whole life in New York. Talk about eye-opening.