In most legal jurisdictions that I know of, kids aren't legally allowed to access pornography either. How is that working out?
The only way to even attempt to enforce these things is with government mandated age verification. Few people want that as it represents a massive violation of privacy and effectively makes anonymity on the Internet impossible.
The insistence on perfect age verification requires ending anonymity. Age verification to the level of buying cigarettes or booze does not.
Flash a driver's license at a liquor store to buy a single-use token, good for one year, and access your favorite social media trash. Anonymity is maintained, and most kids are locked out.
In the same way that kids occasionally obtain cigs or beer despite safeguards, sometimes they may get their hands on a code. Prosecute anyone who knowingly sells or gives one to a minor.
> Flash a driver's license at a liquor store to buy a single-use token, good for one year, and access your favorite social media trash. Anonymity is maintained...
Ask a woman in a liquor store whether her anonymity is maintained by this scenario...?
The current liquor store approach to buying liquor is hazardous for a good chunk of people, and we need to acknowledge that - even if acquiring a token somewhat ameliorates the compounded risk of presenting ID multiple times.
So many of these internet ban proposals feel like someone creates a single cartoon scenario that captures ~2% of the use cases, and happily charges ahead to a proposed solution as though they've sufficiently thought about the people affected and the harms involved.
I've seen many women buying alcohol and cigarettes. After a certain age you aren't even carded. It isn't obvious to me that it's a big worry for women in general.
However, I accept it may be a concern for some due to a history of stalkers. They have alternatives.
They can ask a friend to buy a token on their behalf. It's always legal to give alcohol to a friend you know is of legal drinking age. Same thing.
They could find liquor or tobacco stores with women cashiers. And rotate between stores to avoid showing their ID to the same person multiple times.
> So many of these internet ban proposals feel like someone creates a single cartoon scenario that captures ~2% of the use cases
I think the "problem" with my proposal you're harping on is the "~2% of use cases" you're talking about. My proposal isn't foolproof but it is anonymous. Just like alcohol and tobacco sales today.
If we're saying social media is the new tobacco and must be kept away from kids (I agree on both counts) then we must not intrude on the privacy of adults any more than we would when they buy actual tobacco.
It makes no sense to want to control access to certain websites more strictly than access to actual poisons that cause disease, violent behavior, and death. Otherwise it's clear it was never about "the kids". It was about control, speech policing, and ending anonymity online.
Forcing everyone to upload IDs makes all women vulnerable to stalking and harassment. It's strictly worse.
> Ask a woman in a liquor store whether her anonymity is maintained by this scenario...?
Is she not going to say "pretty well compared to a surveillance database, one or two people that are probably going to forget immediately"?
> The current liquor store approach for buying liquor is hazardous for a good chunk of people
What chunk of people?
Are you trying to imply that this chunk includes women in general? It's really easy to find random women without looking at an ID. If this is about addresses, anyone taking actions based on "a woman probably lives here" has about the same effect as picking houses at random.
> Is she not going to say "pretty well compared to a surveillance database"
No, instead she is likely to avoid talking in abstractions and instead talk about personal experiences of being stalked online by multiple people she has had to show her details to in the past, who may include storekeepers, police, university staff, etc. Eva Galperin is an excellent source on the way many of our procedures are designed without any account for the potential of stalking and harassment, though her focus is on how this continues to unfold in the technology space.
I can't really follow how a woman showing an ID to a lecherous cashier allows said cashier to stalk her online. Where she is, presumably, speaking about personal experiences anonymously.
Generally you can't get through life with no one knowing your name; even women at risk of stalking. As you already pointed out they may have to show ID to police, university staff, employers, landlords, medical staff, banks, social workers or other government employees. Buying a single-use token annually to get on social media doesn't meaningfully increase that risk profile. And as I already said, if they're that worried, they can ask a friend to buy it for them.
Very big citation needed for saying it's "likely" she has been stalked by multiple people because they got a glance at her name. Especially because someone who just wants info on an attractive woman can find a hundred times as many candidates by scrolling Facebook.
I'll believe it if you have proof, but you need proof.
I don't see the danger of pornography, tbh. Oh, much of it is sick, sure, but violent video games are far more harmful. Would it be better to depict loving, caring relationships? Hell, yes! But there are so few of those these days.
My teenage son struggles to have any meaningful dialog with any of the girls his age. It's like he doesn't exist. The few kids who are "dating" are basically living out the exact scenario that MGTOW depicts--girls only go for the elite jocks and ignore everyone else like they don't even exist. Everyone is miserable. Many will eventually grow out of it, but I don't think the females will ever view themselves as doing anything but "settling" because of the nonsense programmed into their heads. And yes, social media is largely responsible for how extreme the situation has become. In the 90s, girls were picky, but nothing like now. So all that young men have left is AI chatbots and porn, and it's better not to take that away from them, too.
Government runs authentication service that has your personal details.
User creates account on platform Y, platform Y asks government service if your age is >18, service says y/n. Platform never finds out your personal details.
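A minimal sketch of what that check could look like from platform Y's side - the endpoint and field names here are all invented for illustration, not any real government API:

    // Hypothetical flow: the platform submits a one-time token the user
    // obtained from the government service out of band, and gets back a
    // bare yes/no. No name, no date of birth, no address.
    interface AgeCheckResponse {
      over18: boolean; // the only fact disclosed to the platform
    }

    async function isOver18(token: string): Promise<boolean> {
      const res = await fetch("https://age-check.gov.example/verify", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ token }),
      });
      if (!res.ok) throw new Error(`age check failed: ${res.status}`);
      const body = (await res.json()) as AgeCheckResponse;
      return body.over18;
    }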
The government still knows your identity in this scenario, so it's a pretty limited form of anonymity (i.e. only suitable for activities the government isn't hostile to)
I know Americans don't want to hear this, but once the government turns hostile, internet anonymity won't save you, just like how guns won't save you (hello propaganda and a large and very active brainwashed minority that also has guns).
The only thing saving you from a hostile government is a well educated populace that really wants democracy and is willing to fight for it (through constant activism, peaceful & other types of protests). This is where many democracies are failing now. No amount of technology or rules can replace large amounts of constantly vigilant eyes that understand how democracy is subverted.
I would rather optimize for not giving companies too much power and end up with a Kafkaesque patchwork of corporate abuses and regulatory captures.
Can't you just put a middle man on there then? Get a non-profit organisation like Mozilla to ask the govt. on behalf of the user.
The organisation asks the govt, and gives back a signed token.
Then the only thing the government knows is that an age verification was requested. Once verification has been done for one site, it can be used for future verifications.
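A sketch of what the relying site's check could look like, assuming an invented token format where the organisation signs nothing but an age claim and an expiry (Ed25519 via node:crypto; none of this is a real scheme):

    import { verify } from "node:crypto";

    // Invented token format: the intermediary signs a payload containing
    // only an age assertion and an expiry - no identity, no site URL.
    interface AgeToken {
      payload: string;   // e.g. '{"over18":true,"exp":1767225600}'
      signature: string; // base64 Ed25519 signature over `payload`
    }

    // The site only needs the intermediary's public key, never the user's ID.
    function tokenIsValid(token: AgeToken, intermediaryPubKeyPem: string): boolean {
      const ok = verify(
        null, // Ed25519 uses no separate digest algorithm
        Buffer.from(token.payload),
        intermediaryPubKeyPem,
        Buffer.from(token.signature, "base64"),
      );
      if (!ok) return false;
      const claims = JSON.parse(token.payload) as { over18: boolean; exp: number };
      return claims.over18 && claims.exp > Date.now() / 1000;
    }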
The middle man in this scenario can mask the URL that is requesting age verification, but what's to stop the government compelling traffic logs from the middle man?
Nothing more than what prevents them from getting logs from your ISP about the sites you visit after verification. In ideal countries they need a court order for that, in less ideal ones they just scoop up the logs preemptively.
The government then knows all the services you use. No bueno.
There are better ways to do this, including ZK proofs, but you gotta guard against people mass-reselling them. Could do some rate-limited tokens minted from a proof, maybe.
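Roughly what I mean, with invented names - verifyProof here is a stand-in for whatever real ZK verifier you'd use, and the nullifier is the proof's per-person, per-epoch value that can't be linked back to an identity:

    import { randomUUID } from "node:crypto";

    // Sketch only: cap how many access tokens one proof-holder can mint
    // per epoch, without the issuer ever learning who they are.
    const TOKENS_PER_EPOCH = 5;
    const issued = new Map<string, number>(); // nullifier -> count this epoch

    // Stand-in for a real zero-knowledge verifier.
    function verifyProof(proof: Uint8Array): { valid: boolean; nullifier: string } {
      throw new Error("plug in a real ZK verifier here");
    }

    function mintToken(proof: Uint8Array): string | null {
      const { valid, nullifier } = verifyProof(proof);
      if (!valid) return null;
      const count = issued.get(nullifier) ?? 0;
      if (count >= TOKENS_PER_EPOCH) return null; // reselling cap hit
      issued.set(nullifier, count + 1);
      return randomUUID(); // opaque, single-use token for the site
    }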
To an extent I agree, except consider that governments of smaller countries probably don't currently have the means to know - but with such a system, it would be served to them on a silver platter. Additionally, it could be leveraged as a censorship system, restricting access to undesirable content.
Some concerns:
- government gets a list of every website that requests your age
- every website has to register with the government to initiate age verification checks
Which pretty much puts an end to any notion of an open internet. But maybe that's a system I prefer to one where a bunch of random startups have my age verification biometrics.
Yes, but that would then require more infrastructure. For example, Australia does not have a national ID card - or a national proof of age card (each state, however, does implement a Proof of Age card, eg https://www.nsw.gov.au/driving-boating-and-transport/driver-...).
So, what is your zero knowledge based on? Who is the signer?
Under the Identity Verification Services Act 2023 we have IDMatch (https://www.idmatch.gov.au/). This whole setup can simply be extended to have third parties act as an intermediary between the government and the party attempting to get proof of age. Similar to AusPost's DigitaliD (https://www.digitalid.com/personal). But let's not have that company owned by the Government :)
It's pretty cooked that we're asking the social media companies to go ahead and prove to the eSafety commissioner that they have measures in place to stop kids from getting access to social websites, yet they have to use unreliable measures like selfies to do it. The companies can't win here. This won't be the last you hear of this. https://youtu.be/YTwBStZIawY?t=306
You're not wrong, but that doesn't mean they weren't still in "growth" phase.
Their pricing, and their doubling down on account-sharing policies over the last few years, have shown that they are no longer in a growth phase.
I cancelled my Netflix account a few months ago because I had gotten the "You're not accessing this from your typical location" blocker. Even though I was trying to watch from my permanent residence and I was the account owner / payee.
The reason that happened was that my wife and I own two properties. We are happily married, not separated, but we just like our space... especially with two adult daughters who still live at home with one of their significant others also living in the house.
We are a single family "unit" but have two locations. Furthermore, my wife has sleeping issues and was using Netflix at night in order to fall asleep. To have to get me to check my email for an access code, was a total deal breaker since I would be fast asleep. So that cut her off from her typical usage of Netflix.
And the reason Netflix thought that I was accessing the service from a different location was that I hardly ever watched it. Every time I'd pull it up, I would spend more time scrolling for something to watch than actually watching anything... and typically I'd just give up and go watch a 30m YouTube video instead.
So I was paying more, receiving less ... mostly had the account purely for my wife and daughters who watched it the most ... and then the final deal breaker was logistical barriers preventing me from being able to use what I'm paying for.
Agree, but I think they moved from growth to this not because they lost investor money / VC demands but because they started losing a lot of licensing deals and content, and had to shift from redistribution to making more and more originals, with the capital investment costs that entails.
Slightly different reasons for enshittification - if Spotify suddenly lost half of their catalogue, they might move in the same way, I guess.
> when I'd actually written most of the Wikipedia article on said subject. The irony... Wikipedia doesn't just use unpaid labour, it ends up undermining the people who wrote it.
Surely it would be relatively easy to offer to show the edit history to prove that you actually contributed to the article? And, by doing so, would flip the situation in your favour by demonstrating your expertise?
The fact that you should have to is pretty annoying, but it's also a fairly rare edge case. And if a teacher or institution refuses to review that evidence, then I don't think the credential on the table is worth the paper it's printed on anyway.
We shouldn't forget that there are trade-offs, however. And it depends on the language's runtime in question.
As we all know, TypeScript is a superset of JavaScript, so at the end of the day your code runs as an interpreted language in V8, JavaScriptCore or SpiderMonkey, depending on the end user's browser. JavaScript is also a loosely typed language with zero concept of immutability at the native runtime level.
And immutability in JavaScript, without native support that we could hopefully see in some hypothetical future version of EcmaScript, has the potential to impact runtime performance.
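To make that concrete: TypeScript's readonly is erased at compile time, and the closest runtime tool, Object.freeze, only freezes one level deep. A quick sketch:

    interface Point { readonly x: number }
    const p: Point = { x: 1 };
    (p as any).x = 2; // compiles and runs: `readonly` leaves no runtime trace

    const state = Object.freeze({ user: { name: "Ada" } });
    // state.user = { name: "Bob" }; // TypeError in strict mode: the top level is frozen
    state.user.name = "Grace";       // ...but nested objects stay fully mutable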
I work for a SaaS company that makes a B2B web application that has over 4 million lines of TypeScript code. It shouldn't surprise anyone to learn that we are pushing the browser to its limits and are learning a lot about scalability. One of my team-mates is a performance engineer who has code checked into Chrome and will often show us what our JavaScript code is doing in the V8 source code.
One expensive operation in JavaScript is cloning objects (which includes arrays). If you do that a lot - if, say, you're using something like Redux or NgRx, where immutability is a design goal, so you're cloning your application's runtime state object with every single state change - you are extremely de-optimized for performance, depending on how much state you are holding onto.
And, for better or worse, there is a push towards making web applications as stateful as native desktop applications. Gone are the days when your servers own your state and your clients are just "dumb" presentation and views. Businesses want full "offline mode." The relationship is shifting to one where your backends are becoming leaner... in some cases being reduced to storage engines, while the bulk of your application's implementation happens in the client. Not because we engineers want to, but because the business goals necessitate it.
Then consider the spread operator, and how much you might see it in TypeScript code:
    const foo = {
      ...bar, // clones bar, so the cost of this one expression is pegged to how large that object is
      newPropertyValue,
    };

    // same idea: clones the original array just to append a single item,
    // because "immutability is good, because I was told it is"
    const baz = [...array, newItem];
And then consider all of the "immutable" Array functions like .reduce(), .map(), .filter()
They're nice, syntactically... I love them from a code maintenance and readability point of view. But I'm coming across "intermediate" web developers who don't know how to write a classic for-loop and will make an O(N) operation into an O(N^3) one because they're chaining these together with no consideration for the performance impact.
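A contrived example of the pattern - each call in the chain walks the whole array and allocates a fresh intermediate one, where a single loop does one pass with no allocations (and nesting these inside each other is where the multiplicative blowup comes from):

    const orders = [{ total: 120, paid: true }, { total: 80, paid: false }];

    // Three passes over the data, two throwaway intermediate arrays:
    const chained = orders
      .filter(o => o.paid)
      .map(o => o.total)
      .reduce((sum, t) => sum + t, 0);

    // One pass, zero intermediate allocations:
    let sum = 0;
    for (const o of orders) {
      if (o.paid) sum += o.total;
    }

    console.log(chained, sum); // 120 120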
And of course you can write performant code or non-performant code in any language. And I am the first to preach that you should write clean, easy-to-maintain code and then profile to discover your bottlenecks and optimize accordingly. But that doesn't change the fact that JavaScript has no native immutability, and the way to write immutable JavaScript will put you in a position where performance is going to be worse overall, because the tools you are forced to reach for, as a matter of course, are themselves inherently de-optimized.
Like @drob518 already noted - the only benefit of mutation is performance. That's all. That's the single, distinct, valid point in its favor. Everything else is nothing but problems. Mutable shared state is the root of many bugs, especially in concurrent programs.
"One of the most difficult elements of program design is reasoning about the possible states of complex objects. Reasoning about the state of immutable objects, on the other hand, is trivial." - Brian Goetz.
So, if immutable, persistent collections are so good, and the only problem is that they are slower, then we just need to make them faster, yes?
That's the only problem that needs to be solved in the runtime to gain countless benefits, almost for free, which you are acknowledging.
But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
> But, please, don't call it a "trade-off" - that implies that you're getting some positive benefits on both sides, which is inaccurate and misleading - you should be framing mutation as "safety price for necessary performance" - just like Rust describes unsafe blocks.
I would have agreed with that statement a few years ago.
But what I am seeing in the wild, is an ideological attachment to the belief that "immutability is always good, so always do that"
And what we're seeing is NOT a ton of bugs and defects caused by state mutation. We're seeing customers walk away with millions of dollars because of massive performance degradation caused, in some part, by developers who are programming in a language that does not support native immutability but who keep trying to shoe-horn it in because of a BELIEF that it will, for sure, always cut down on the number of defects.
Everything is contextual. Everything is a trade-off in engineering. If you disagree with that, you are making an ideological statement, not a factual one.
Any civil engineer would talk to you about tolerances. Only programmers ever say something is "inherently 'right'" or "inherently 'wrong'" regardless of other situations.
If your data is telling you that the number one complaint of your customers is runtime performance, and a statistically significant number of your observed defects can be traced to shoe-horning in a paradigm the runtime does not support natively, then you've lost the argument about the benefits of immutability. In that context, immutability is demonstrably providing you with negative value, and by saying "we should make the runtime faster" you are hand-waving to a degree that would and should get you fired by that company.
If you work in academia, or are a compiler engineer, then the context you are sitting in might make it completely appropriate to spend your time and resources talking about language theory and how to improve the runtime performance of the machine being programmed for.
In a different context, when you are a software engineer who is being paid to develop customer facing features, "just make the runtime faster" is not a viable option. Not something even worth talking about since you have no direct influence on that.
And the reason I brought this up, is because we're talking about JavaScript / TypeScript specifically.
In a language like Clojure, it's moot because immutability is baked in. But within JavaScript it is not "nice" to see people trying to shoe-horn it in. We can't, on the one hand, bitch and moan about how poorly websites all over the Internet perform on our devices while also saying "JavaScript developers should do immutability MORE."
At my company, measurable performance degradation is considered a defect that would block a release. So you can't even say you're reducing defects through immutability if you can point to one single PR that causes a perf degradation by trying to do something in an immutable way.
So yeah, it's all trade-offs. It comes down to what you are prioritizing. Runtime performance or data integrity? Not all applications will value both equally.
Alright, I admit, I have not worked on teams where Immutable.js was used a lot, so I don't have any insight specifically on its impact on performance.
Still, I personally wouldn't call immutability a "trade-off", even in a JS context - for the majority of apps, it's still a big win. I've seen that many times with ClojureScript, which doesn't have its own native runtime - it eventually emits JavaScript. I love Clojure, but I honestly refuse to believe that it invariably emits higher-performing JS code compared to vanilla JS with Immutable.js on top.
For some kinds of apps, yes, for sure, performance is the ultimate priority. In my mind, that's a similar "trade-off" to using C or even assembly because of required performance. It's undeniably important, yet these situations represent only a small fraction of overall use cases.
But sure, I agree with everything you say - Immutability is great in general, but not for every given case.
Yes, if your immutability is implemented via simple cloning of everything, it’s going to be slow. You need immutable, persistent data structures such as those in Clojure.
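For instance, with the immutable npm package (just as one example of the genre), an "update" shares structure with the original instead of copying it wholesale:

    import { List } from "immutable";

    const a = List([1, 2, 3]);
    const b = a.push(4); // no full copy: `b` shares structure with `a`

    console.log(a.size);  // 3 - the original is untouched
    console.log(b.size);  // 4
    console.log(a === b); // false - distinct values, shared internals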
I ran a highly trafficked adult website for 18 years. In the early days, CDNs were unattainable for me and I managed my own rudimentary network by hosting bare metal servers in data centres around the world, using geo-ip aware DNS servers to send traffic to the closest data centre to them.
My most significant running expense was bandwidth cost. So I never switched to cloud since the bandwidth costs would have instantly bankrupted me. Cloudflare, on the other hand, was the single most significant development when it came to my bottom line. Adding a basic, $200 / month business account saved me thousands per month on bandwidth + server costs.
DDoS protection was just a nice perk.
Most small websites are hosted with cloud providers these days. If their websites are at all media-rich (and most are these days), and those assets can be cached by a CDN, the cost savings on bandwidth are not marginal. They are often the difference between being able to afford to host your website or not having one at all.
There are, of course, ways to optimize and reduce those expenses without a 3rd party CDN. But if Cloudflare still has their free plans for smaller traffic volumes, it is often a financial decision to use them over your cloud provider's CDN options.
My wife and I own a small theatre. We can process orders in-store just fine. Our customers can even avoid online processing fees if they purchase in-store. And if our POS system went down, we could absolutely fall back to pencil and paper.
Doesn't change the fact that 99% of our ticket sales happen online. People will even come in to the theatre to check us out (we're magicians and it's a small magic shop + magic-themed theatre - so people are curious and we get a lot of foot traffic) but, despite being in the store, despite being able to buy tickets right then and there and despite the fact that it would cost less to do so ... they invariably take a flyer and scan the QR code and buy online.
We might be kind of niche, since events usually sell to groups of people and it's rare that someone decides to attend an event by themselves right there on the spot. So that undoubtedly explains why people behave like this - they're texting friends and trying to see who is interested in going. But I'm still bringing us up as an example to illustrate just how "online" people are these days. Being online allows you to take a step back, read the reviews, price shop, order later and have things delivered to your house once you've decided to commit to purchasing. That's just normal these days for so many businesses and their customers.
I can understand that sentiment. Just don't lose sight of the impact it can have on every day people. My wife and I own a small theatre and we sell tickets through Eventbrite. It's not my full time job but it is hers. Eventbrite sent out an email this morning letting us know that they are impacted by the outage. Our event page appears to be working but I do wonder if it's impacting ticket sales for this weekend's shows.
So while those of us in tech might like a "snow day", there are millions of small businesses and people trying to go about their day-to-day lives who get cut off because of someone else's fuck-ups when this happens.
Absolutely solid point. There are a couple of apps I use daily for productivity, chores, even alarm scheduling, where on the free versions the ads wouldn't load, so I couldn't use them (though some have since been updated). Made me realize we're kind of like cyborgs, relying on technology that's integrated so deeply into our lives that all it takes is an EMP blast - or a monopolistic service going down - to bring us down until we take a breath and learn how to walk again. Wild time.
> But it ultimately doesn't matter either way. A major selling point/part of the "contract" the web platform has with web developers is backwards compatibility.
The fact that you put "contract" in quotes suggests that you know there really is no such thing.
Backwards compatibility is a feature. One that needs to be actively valued, developed and maintained. It requires resources. There really is no "the web platform." We have web browsers, servers, client devices, telecommunications infrastructure - including routers and data centres, protocols... all produced and maintained by individual parties that are trying to achieve various degrees of interoperability between each other and all of which have their own priorities, values and interests.
The fact that the Internet has been able to become what it is, despite being built on foundational technologies none of which anticipated the usage requirements placed on their current versions, really ought to be labelled one of the wonders of the world.
I learned to program in the early to mid 1990s. Back then, there was no "cloud", we didn't call anything a "web application" but I cut my teeth doing the 1990s equivalent of building online tools and "web apps." Because everything was self-hosted, the companies I worked for valued portability because there was customer demand. Standardization was sought as a way to streamline business efficiency. As a young developer, I came to value standardization for the benefits that it offered me as a developer.
But back then, as well as today, if you looked at the very recent history of computing: you had big-endian vs little-endian CPUs to support; you had a dozen flavours of proprietary UNIX operating systems, each with their own vendor-lock-in features; and while SQL was standard, every single RDBMS vendor had their own proprietary features that they were all too happy for you to use in order to lock consumers into their systems.
It can be argued that part of what has made Microsoft Windows so popular throughout the ages is the tremendous effort Microsoft goes through to support backwards compatibility. But even despite that effort, backwards compatibility with applications built for earlier versions of Windows can still be hit or miss.
For better or worse, breaking changes are just part and parcel of computing. To try and impose some concept of a "contract" on the Internet to support backwards compatibility, even if you mean it purely figuratively, is a bit silly. The reason we have as much backwards compatibility as we do is largely historical, and always driven by business goals and requirements as dictated by customers. If only an extreme minority of "customers" require native XSLT support in the web browser, to use today's example, it makes zero business sense to pour resources into maintaining it.
> The fact that you put "contract" in quotes suggests that you know there really is no such thing.
It's in quotes because people seem keen to remind everyone that there's no legal obligation on the part of the browser makers not to break backwards compatibility. The reasoning seems to be that if we can't sue google for a given action, that action must be fine and the people objecting to it must be wrong. I take a rather dim view of this line of reasoning.
> The reason we have as much backwards compatibility as we do is largely historical and always driven by business goals and requirements, as dictated by customers.
As you yourself pointed out, the web is a giant pile of cobbled-together technologies that all seemed like a good idea at the time. If breaking changes were an option, there is a _long_ list of potential deprecations to pick from which would greatly simplify development of both browsers and websites/apps. Further, new features/standards could be added with much less care, since if problems were found in those standards they could be removed/reworked. Despite those huge benefits, no such changes are or should be made, because the costs of breaking backwards compatibility are just that high. Maintaining the implied promise that software written for the web will continue to work is a business requirement, because it's crucial for the long-term health of the ecosystem.