Hacker News | WA's favorites

Copying from another post. I'm very puzzled as to why people don't talk more about the essential complexity of specifying systems anymore:

In No Silver Bullet, Fred Brooks argues that the hard part of software engineering lies in essential complexity - understanding, specifying, and modeling the problem space - while accidental complexity like tool limitations is secondary. His point was that no tool or methodology would "magically" eliminate the difficulty of software development because the core challenge is conceptual, not syntactic. Fast forward to today: there's a lot of talk about AI agents replacing engineers by writing entire codebases from natural language prompts. But that seems to assume the specification problem is somehow solved or simplified. In reality, turning vague ideas into detailed, robust systems still feels like the core job of engineers.

If someone provides detailed specs and iteratively works with an AI to build software, aren’t they just using AI to eliminate accidental complexity—like how we moved from assembly to high-level languages? That doesn’t replace engineers; it boosts our productivity. If anything, it should increase opportunities by lowering the cost of iteration and scaling our impact.

So how do we reconcile this? If an agent writes a product from a prompt, that only works because someone else has already fully specified the system—implicitly or explicitly. And if we’re just using AI to replicate existing products, then we’re not solving technical problems anymore; we’re just competing on distribution or cost. That’s not an engineering disruption—it’s a business one.

What am I missing here?


I had one product that took off, made more money than at any of my previous (big-tech) jobs, and I thought the tales were true. If you work hard enough, catch a lucky wave and have the skills to ride it, you're free.

The product peaked, and then very quickly dropped to zero in about two years, while I tried everything to prevent going out of business.

Back at the bottom, a year or two of aimless wandering, I figured, well, I've seen the light, I know how this dance needs to be danced, I can dance it one more time.

I've since had half a dozen attempts not even produce a handful of accounts (or in the better cases, paid accounts), and it's simply not sustainable.

I was finally getting to a point where I felt ready to give up last year, and decided to try one last time or get back into a regular job. The problem being, "getting back" now means that I am 20 years older and more than likely _not a great fit_ for many of the roles out there.

So, for now, I am once again knuckle-deep in a new product, about to have a first customer this week (if they sign up) and some light on the horizon.

Yet, even if I manage to surpass all the possible stretch goals on growth I have set this year, it will still pay considerably less than minimum wage, and that's if everything (and more) works out within this year and the next. On occasional consulting gigs I charge $$$, and comparing that to what these SaaS products bring in is just ... sad. And I can hear someone clacking on their keyboard already, responding...

The usual reaction to a post like mine will and used to be: well, you've got the wrong product, audience, or both. But all I want to say after doing this for 20 years now, is: that's the default. If you're not starting from a large-enough platform, your only way to success is to be literally "failing" upwards, in baby steps, turtle-speed, and that only results in success if you can somehow sustain doing that long enough.

The default is that nothing works out. People love to skip over this and always feel it doesn't apply to them and their idea. It will fail. The game is not to make a great product, the game is to figure out how to not go under while waiting for and/or constantly provoking your lucky break.

You can get lucky, and if you try long enough and often enough, you at least stand somewhat of a (very, very, very slim) chance. And then, even if you do, all that luck is very brief and temporary, and you'll be thirsty waiting for the next strike to stay above water. There is no "I made it" — there's only "I'm safe for a second, but what do I do next" in the very best of cases.

All that said, I believe it is totally worth the life experience. Life is short, it's a noteworthy thing to go through. But after a decade of failing, it'll quietly turn into a question of character, responsibility, psychological or social issues and general life-planning skills rather than a question of "do you want it hard enough to succeed".

Life is short.


The past is the past, no sense in regretting it since you cannot change it.

I'm not quite as old as you, but close, and I already feel what you're feeling about the time left. That there isn't a lot of it, or that it will be gone quickly. Everyone has things they will not get a chance to try or experience. No lifetime offers everything, and every path taken means many, many others will never be explored.

Like money, you can't take memories with you. So try not to dwell on things you didn't do or that didn't work out the way you imagined. Half or more of people who get married end up divorced. Probably many more are less than happy. Kids can be a joy but they can also be a heartache. Every criminal is somebody's kid. Nothing comes with any guarantees.

Make life interesting today, as today is the only thing you really experience.


The last time Apple introduced a new general purpose computing device was the iPad, 14 years ago.

This one just doesn’t feel as big. When the iPad was introduced, everyone was shocked by the very low price and the sleek form factor compared to previous Windows tablet devices.

Those are exactly the weak points of the Vision Pro: it’s very expensive and very heavy, according to reports.

The hands-on sessions that Apple gave to journalists earlier this week seemed a bit underwhelming. The reporter for the Verge wrote that it feels like the Quest, but with higher resolution.

That’s a worrying sign! Imagine if the first hands-on of the iPhone had been “it’s like a BlackBerry but with better DPI screen.”

Of course Apple is usually very good at consistently evolving their platforms. Maybe the non-Pro model will be something else.


Yes. It is your life. It isn't about what is out there. It's about what's in you.

My kids are 19, 17, 12. I tell them- you're not going to college to get an education that is about knowledge out in the world. You are going to get an education about you. To learn about your person- your body, your brain, your own mental model of your self and other selves and the world.

Your person is still in physical growth mode until at least 25, and then you have lots of other changes and challenges coming after that. You will continue learning, including about your self, throughout the entirety of your life. To be set up to do that is why you're going.

(Yes, college is not the real world, in any way. But in important ways it is real enough.)

==

The most important things to be able to do are- build relationships, focus and concentrate, organize your self and your thinking, communicate, have fun, and take care of the physical self. You don't have any idea, really, how well you do those things as a teenager. It's the job of the adults around you to help. College is an opportunity to expose your person to more unique, distinct, varied, skilled adults and peers than at any time previous, and for some, more than they will ever get again (unfortunately). That exposure is the most intense learning the self can do.

For each of my kids, they have things they are good at now, and things they are not good at. Not just skills- capabilities. Biases. Potentials, not actuals. As their parent I have a good sense of possible distinct and unique trajectories for each of them given those potentials, and I do what I can to coach them onto those various trajectories and in specific work domain disciplines that are potential fits (to my eyes) for them. But that's a conversation that is specific to our relationship. And their lives are their own.

For you, I would encourage you to see yourself not even at the beginning of your adventure, and to think hard and figure out good ways, with the guidance of adults you currently respect and trust, to avail yourself and position yourself to be exposed to and learn from new adults worthy of respect and trust. And pay it forward, too.


The issue with our AI debate is that there's not a single "problem" but many inter-dependent issues without a clear system-wide solution.

- Big tech monopolizing the models, data, and hardware.

- Copyright concerns.

- Job security.

- AIs becoming sentient and causing harm for their own ends.

- Corporations intentionally using AI to cause harm for their own ends.

- Feedback loops: AI output floods the internet with content of unknown provenance, which gets included in the next model, and so on.

- AI hallucinations resulting in widespread persistent errors that cause an epistemological crisis.

- The training set is inherently biased; human knowledge and perspectives not represented in this set could be systematically wiped from public discourse.

We can have meaningful discussions on each of these topics. And I'm sure we all have a level of concern assigned to each (personally, I'm far more worried about an epistemological crisis and corporate abuse than some AI singularity).

But we're seeing these topics interact in real-time to make a system with huge emergent societal properties. Not sure anyone has a handle on the big picture (there is no one driving the bus!) but there's plenty of us sitting in the passenger seats and raising alarm bells about what we see out our respective little windows.


> create much hard to capture value.

This is pretty key.

Imagine a train line that allows commuters to get to work. Trains are expensive to run, so the actual cost to get a commuter to work and back is £100. The commuters are paid (say) £150 a day, after tax. Is this train worth 2/3 of their post-tax income? Probably not, so they won't use it, and the company can't get workers if there's no other practical way to commute. Workers can take less good jobs near home, earn less, but take home more. Or even no job at all. However, a worker generates substantially more than their post-tax salary in value to the economy as a whole, so subsidy of the train fare creates value by getting them to work and generating that value, even though the train cannot actually turn a profit itself by charging the commuters out of their income.
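To make the arithmetic concrete, here is a tiny sketch of the subsidy logic above. The £100 cost and £150 take-home pay come from the comment; the £300/day of total economic value and the £30 acceptable fare are illustrative assumptions, not figures from the original.

```python
# Toy numbers from the comment: it costs £100/day to carry a commuter,
# who takes home £150/day. The fare alone would eat 2/3 of take-home pay.
cost_per_trip = 100      # daily cost of carrying one commuter
take_home_pay = 150      # commuter's post-tax daily wage
fare_share = cost_per_trip / take_home_pay
print(f"fare as share of take-home pay: {fare_share:.0%}")

# Illustrative assumptions: each worker generates £300/day of total
# economic value, but will only tolerate a £30 fare.
total_value = 300
acceptable_fare = 30
subsidy = cost_per_trip - acceptable_fare          # gap the subsidy must cover
net_gain_to_economy = total_value - cost_per_trip  # value created despite the "loss"
print(f"required subsidy: £{subsidy}, net value created: £{net_gain_to_economy}")
```

Under these assumed numbers, the train loses £70 per rider at the fare box yet the economy nets £200 per rider, which is the whole case for subsidy.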

Even if you say "well the company should just pay more if they've made that value", not all that value manifests directly on the company's bottom line, it includes downstream value, as well as intangible things like worker skills that are more of an abstract societal benefit.

In the same way, roads produce massively more value than people would be willing to pay individually: all the food deliveries in a week might be worth, at retail, about £4 billion, say. If those deliveries can't be made, what will be the cost? £4 billion? Or more because the whole country will become a much less effective economy when everyone is starving and looking for food? And the universal-delivery postal system. Healthcare, childcare, energy, etc etc.


After nearly two decades in early stage startups I couldn’t agree more. Looking back, I know we often built too much too soon, and had too much confidence that we were building the right thing.

These days I often advise would be founders to start with doing their idea manually with just the minimum amount of tech.

Maybe just a google sheet and answering emails and a handful of “customers.” If you can’t give people a good experience with a totally custom personal experience, they’re not going to like it any more when it’s automated and scalable.


I have a very high level of scepticism towards non-technical founders without deep pockets:

1. There's an agility to being able to change things yourself, at 2am if necessary. That's very important to startups.

2. There's a discipline to not accruing unreasonable technical debt of the type that will sink you two years in that you can't expect from someone who doesn't deeply care.

3. Managing devs generally requires understanding what's going on. Unless you're ready to hire a CTO with a track record, a non-technical person can't do that.

There's also a long tail-end of reasons. For example, prototyping and tinkering nights and weekends is "free," and can contribute a surprising amount of value. It can't be managed, though; you can't just pay other people to play around, and expect that to be focused in ways which might add value.

*Every* time I've personally seen the non-technical-founder playbook at a tech startup, it didn't end well.


There's plenty of reasons not to like abstract games.

Abstract games are highly, highly dependent on memorization and repetition. In the lower tiers, a Go or chess player who has played more games or memorized more positions will handily beat one who hasn't. The sheer number of hours needed to reach high-level chess play, where memorization doesn't matter as much anymore, is insane. There's a reason most great chess players start young.

Abstracts are also perfect information games. I like games that simulate aspects of real life, and anything with perfect information immediately breaks my immersion.


The Image: A Guide to Pseudo-Events in America, published in 1962, details how news media in the 20th century transformed into a powerful engine generating passable stories and news, fueled only tentatively by developments in the real world.

Conferences, interviews, panels of experts, reactions, leaks, behind the scenes peeks, press releases, debates, endless opinions and think pieces, so much else... We already live in the synthetic age.

Is it about to get worse? It's hard to say. GPT may eventually be able to sling it with the best of them, but humans have a trillion dollar media culture complex in place already. In a sense, we are prepared for this.

The question posed here is broadly the same as the issue we've been coping with since the invention of printing and photography. Is it real or is it staged?

My parents both worked in a newsroom -- my father was an editor and columnist, and my mom a reporter. There is something called a "byline strike", where reporters collectively withdraw consent to have their names appear in the paper. It's not a work stoppage -- the product (newspaper) goes out just the same, just without bylines. Among other things, this is embarrassing for the paper because it draws attention to their labor problems at the top of every article. More fundamental, at least from my dad's perspective, was that it seriously undermined the credibility of the paper. Who are the people writing these articles? Do they even live in this city? Who would trust a paper full of reports that nobody was willing to put their name on?

This paper went on to change hands in the 90s, fire its editors, buy out senior staff, and then move editorial operations out of the state entirely.

I am concerned about GPT but I don't think we are going into anything fundamentally new yet, in this sense. Media culture is overwhelmingly powerful in the west, and profitable. GPTs and their successors will massively disrupt labor economics and work (again), but not like... the nature of believability and personhood, or the ratio of real to synthetic. That ship is already long gone, the mixture already saturated.


Analyses like this are bizarre to me. There is an implicit assumption here that human generated content is often high quality and worth consuming or using.

My experience, as an adult who grew up with the internet, is that close to 100% of the content online is garbage not worth consuming. It's already incredibly difficult to find high quality human output.

This isn't even a new fact about the internet. If you pick a random book written in the last 100 years, the odds are very poor that it will be high quality and a good use of time. Even most textbooks, which are high-effort projects consuming years of human labor, are low quality and not worth reading.

And yet, despite nearly all content being garbage, I have spent my entire life with more high-quality content queued up to read than I could possibly get through. I'm able to do this because like many of you, I rely completely on curation, trust, and reputation to decide what to read and consume. For example, this site's front page is a filter that keeps most of the worst content away. I trust publications like Nature and communities like Wikipedia to sort and curate content worth consuming, whatever the original source.

I'm not at all worried about automated content generation. There's already too much garbage and too much gold for any one person to consume. Filtering and curating isn't a new problem, so I don't think anything big will change. If anything, advances in AI will make it much easier to build highly personalized content search and curation products, making the typical user's online experience better.


Learning history / literature in school is important.

I was a total STEM math nerd in school. I used to frequently complain how I don't get what's the point of it, or how it's a waste of time and I'm learning nothing. I still think the emphasis of school was off, but I get the point of it now.

Stories are like code for humans. You can't tell someone what it means to be good or bad, or give them a course in philosophy, and expect them to become a good person. But you can tell them a good story that engages them emotionally, and it will change their perception. And history shows that the stories being told and retold aren't just minor curiosities; they have shaped the direction of humanity and continue to drive it. A single person with a single story can change history such that it would be completely different without it. And some stories about stories need to be told as a warning, so that people won't fall for those kinds of stories again.


1. Read The Elements of Style by Strunk and White. It describes how to write in the active voice for positive effect.

2. Practice converting your thoughts to the written word so that they're clearly understood by anyone. That is the exercise at hand and it takes practice.

3. Once you've mastered clearly communicating your ideas, add some cleverness to your writing. Use double-entendre and practice economy of words. Leave something for the reader to guess, allowing one's imagination to fill the gaps with what you didn't say.

4. Finally, practice the art of showing versus telling, i.e., the art of story-telling versus an analytical accounting of facts.


Books for a general audience are filled with light content. That's just the way things are nowadays, and it's not going to change. Any self-help book, any book about the basics of something, is going to have massive amounts of filler. I remember listening to The Shallows: What the Internet Is Doing to Our Brains and suffering through being forced to listen to the history of reading. How the monks or somebody read in the past, how valuable it was. I didn't care about any of that. I wanted to hear stories about people falling into deep depression because of too much internet. I wanted summaries of research into how the brain changes when it's addicted to the internet. Instead what I got was the history of papyrus, what came after it, how novel it was, and how books became available to everybody, not just a select few. How monks read aloud, and how there was one special monk who could read without speaking the words. Terrible book.

I remember reading something about geniuses and high-performance individuals, and of course the examples were about sports, because everybody understands sports and the book was for everybody. I wanted to read about the workings of the minds of the best mathematicians or professionals in intellectual fields like engineering, programming, and business. As I read I felt physically ill, until I closed the book, yelled as loud as I could "BWHAAAAAAAAAAAAAAH", and made myself a cup of coffee. I realized that most of the books I wanted to read were fluff written for the publishers and editors, not for me. I can get the idea from the title, the description, the table of contents, and maybe a few Amazon reviews.

I notice the same thing with technical books: sections about history, long-winded explanations of what is going to be taught, paired with long conclusions and recaps. I can't tell you how many times I've read the history of Linux, and I can't remember anything about it: those mad diagrams of standards, what came from where, and how it was improved, extended, and replaced by something else. I wanted to read a book on algorithms, a free one that had a very warm reception on HN[0], and guess what? It starts out with the history of numbers, with detailed names of the people who came up with the ideas, the places, and even pictures. I hate these kinds of introductions. But the book still seems to be good. I can recognize whether a book is "heavy" or "light": heavy books often have exercises, start fast, and go deep. Light books just can't get to the fucking point.

Heavy book: Computer Systems: A Programmer's Perspective.
Light book: Practical Object-Oriented Design in Ruby: An Agile Primer.

[0]: https://news.ycombinator.com/item?id=18805624


> or some wiser developer refactors substantial parts of the interface, business logic, persistence, and runtime configuration, effectively creating a framework within that application

That's not what people commonly understand as a "framework". Nor is it a helpful definition, because then where would "abstracting things" end and "framework" begin?

No, the main difference between frameworks and what you describe is inversion of control (IoC). In the framework world, the framework calls you: something happens behind the scenes until the framework decides to call some code that you provided via the framework's API.

With libraries it's the other way around: you call the library and then you do something with the result and then use it to call the library again and so on until you have the desired outcome.

The difference is, for example, that you can call the library and when it returns something to you, you can inspect it and choose to ignore it. You can do so in an arbitrary way. In the framework case, if the framework does not provide any way for you to interact with it in the way you want, then you are screwed.
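A minimal sketch of the contrast in Python, with a toy framework invented purely for illustration (TinyFramework is not a real library): with a library, your code drives the control flow and can inspect or ignore results; with a framework, you register code and the framework decides when to call it.

```python
import json

# Library style: you call the library, inspect the result,
# and choose how to proceed (including ignoring it).
def library_style(raw: str) -> dict:
    parsed = json.loads(raw)        # you call the library...
    if "name" not in parsed:        # ...inspect what came back...
        parsed["name"] = "unknown"  # ...and decide what happens next.
    return parsed

# Framework style: inversion of control. You hand code to the
# framework via its API; the framework decides when to invoke it.
class TinyFramework:
    """Toy dispatcher illustrating IoC, not a real web framework."""
    def __init__(self):
        self._handlers = {}

    def route(self, path):
        def register(fn):
            self._handlers[path] = fn  # code provided via the framework's API
            return fn
        return register

    def run(self, path):
        # The framework calls *you*; whatever it does around this call
        # is behind the scenes and outside your control.
        handler = self._handlers.get(path, lambda: "404")
        return handler()

app = TinyFramework()

@app.route("/hello")
def hello():
    return "hello from your code"
```

The `@app.route` registration mirrors how web frameworks express IoC: your handler runs only when the framework's dispatch loop decides to invoke it, which is exactly the "you're screwed if the framework doesn't expose a hook" situation described above.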


I've consciously moved on from ruminating on the past and replaying alternate timelines in my mind, to thinking about what I actually can change: the present, and the future.

Life is a blind let's-play. It's more fun to watch exactly because you don't know what's going to happen. Having the ability to play it again would strip one's actions of consequence, and thus of meaning.

Spending extraordinary amounts of time thinking about the past got me literally nowhere. When I realized this, practically every problem in my life started unwinding and I found friends, love, passion, work, and peace.


I’d like to make kind of a meta-point about what I see as two different ways that people are talking about the issue in this rather contentious thread.

To oversimplify a bit, there seems to be a crowd saying: “these pension fund managers gambled with my retirement to enrich themselves”.

There seems to be another crowd saying: “if you examine the details of these transactions you’ll find that it’s neither that simple nor fundamentally even true”.

I'd like to submit that the latter group, which I suspect is probably technically correct (the best kind), should examine the possibility that while any isolated derivatives transaction probably makes sense and is governed by deep, sophisticated mathematics, in sum total we see, decade after decade, a cumulatively destabilizing effect on both financial markets and the financial security of everyday folks: somehow the emergent system either is, or really, really fucking appears to be, privatizing profits while socializing losses, while simultaneously amplifying the swings of the business cycle.

I love the financial mathematics intellectually, and this is certainly a forum that welcomes experts discussing details, but at some point we need to acknowledge that it's high finance's job to convince the public that they're actually helping, not the public's job to learn high finance.

Elites that forget this for too long have historically come to very bad ends.


Now we are talking. I just want to let anyone curious enough to get dragged into the rabbit hole know... we are still playing these cards!

There are two main nostalgia formats, Oldschool 93/94 and Premodern. Here are some links:

- http://oldschool-mtg.blogspot.com

- https://premodernmagic.com

- https://www.coolstuffinc.com/a/michaelflores-05182022-north-...

- https://www.wak-wak.se

- https://alltingsconsidered.com

Find your way to discord, podcasts and eventually your local clubs and join us in the MTG Underground.


A little over $2M, but that number keeps growing because I still have loads of usernames, despite Instagram patching the rate-limit bypass.

I like art. I don't like this work, but that should be ok. It should be ok to not like stuff.

I've spent a fair bit of time in art galleries. I enjoy it. I don't enjoy the art snobbery though. I don't know why people gaze endlessly into paintings and try to discern the meaning or whatever. It's not that deep.

I don't understand the hype around the Mona Lisa. I don't understand why people stand in line for hours and then crowd around this one — in my opinion — bland painting to snap that shot and check it off their list that they've seen that one piece. The Centre Pompidou is just down the road and it's full of way more interesting stuff!

My father is an artist, and any time his work is featured in a gallery he is asked to describe the meaning, the inspiration, the message, and countless other pretentious questions intended to draw in a totally unnecessary air of sophistication. His response is the same every time.

"I don't know. I see shit and I paint it."

I'm massively into wine. Similarly, I find wine snobbery frustrating. I have enjoyed the world's best wines. From Ukraine, from Georgia, from Moldova, Italy, France… But the people who clutch their pearls when you pair a dry red with your grilled salmon? Fuck those people especially.

What's beautiful about art, and wine, and music, is that there's an entire universe of it. There's something for everybody. And it's definitely ok to not like some of it.


I keep racking my brain trying to discern what the implications of hyper-advanced generative models like this will be. It's a double-edged sword: while there are obvious tangible benefits from such models, such as democratising art, the flip side seems like pure science-fiction dystopia.

In my mind, the main eras of content on the internet look something like this:

Epoch 1: Pure, unblemished user generated content. Message boards and forums rule.

Epoch 2: More user generated content + a healthy mix of recycled user generated content. e.g. Reddit.

Epoch 3 (Now): Fake user generated content (limits to how much because humans still have to generate it). e.g. Amazon reviews, Cambridge Analytica.

Epoch 4: Advanced generative models means (essentially) zero friction for creating picture and text content. GPT3, Dalle-2.

Epoch 5: Generative models for videos, game over.

IMO, the future of the internet feels like a totally disastrous (un)reality. If addictive content recommended by the likes of TikTok has proven anything, it's that users ultimately don't care _what_ the content is, as long as it keeps their attention. It doesn't matter if it comes from a human or a machine. The difference is that in a world where the marginal cost of generating content is essentially zero, that content can and will be created and manipulated by large malicious actors to sway public opinion.

The Dead Internet Theory will fast become reality. This terrifies me.

[1] https://www.theatlantic.com/technology/archive/2021/08/dead-...


Then you finally meet the smartest person you've ever met and, statistically given the quality of the room and their excellence above and beyond it, ever will meet.

And you realize that person spends their life handing out wisdom which consistently just falls on deaf ears.

It's an empty journey with an unsatisfying end even if you make it to the top.

The money's great, but if you are smart enough to be constantly walking into bigger and better rooms, you should also be smart enough to realize there's diminishing returns on personal wealth but diminishing supply on personal time.

Sometimes the best strategy in winning a game is not to play.


Yeah, I don't remember all the details.

IIRC, Musk makes Twitter a 100% cash offer. Twitter accepts, but writes the deal such that if either side pulls out of the deal, then a $1 billion penalty will be applied.

Musk goes to the banks and secures a $20 billion-ish loan, putting TSLA as collateral.

Musk starts to sell TSLA for the other $20 billion of cash. Stock tanks as a result, only ~$8 billion sold on public filing documents.

Musk runs around looking for another $12 billion for the last week or so.

And now we have today where it looks like he is failing his side of the deal. If Musk lost the $20 billion-ish from the banks due to TSLA being too low, it makes sense for him to give up.

------

All that is going on right now is Musk trying to blame Twitter for the failed deal, so that he avoids the $1 billion penalty written into the contract.


I think you're misunderstanding the point of these apps, which might very well be intentional on these companies' part to get you to spend money on something that you might not need.

Therapy doesn't "solve" anything by itself; you have to see it as a tool that helps put your mind in a better position so that you can solve your own problems. Lots of people treat these as magical apps that will somehow make them better with no effort on their part, and then get disappointed when that obviously doesn't happen.

So to answer your question:

- Will these apps provide "scientific benefits" (whatever that is supposed to be)? No.

- Will these apps assist your own effort of improving mental health? Yes, as long as you're committed to it and keep going.


I've been seeing a lot of these sorts of articles recently (time online is ruining your life; here are 17 ways to disconnect), and I have to say I simply reject the premise. Once again, it's binary thinking creeping in, and human brains generally being far more receptive to arguments along the lines of "this technology/concept/ideology/whatever is bad and you should reject it" over ones that say "this technology/concept/ideology/whatever is a tool and its value depends on how it's used and when it's applied".

I've seen the binary thinking problem applied to the internet/online life quite a bit recently, and it bothers me because growing up my experiences of the internet/web were almost overwhelmingly positive, constructive, and developmental in nature. It was instrumental in my development as a person because it's a tool that allows you to read constantly. For people with lots of innate curiosity there really is nothing more powerful than that. The encroachment of the social media/video streaming/adtech companies into the space doesn't negate that use case, so I'm extremely hesitant to say if everyone would just unplug the world would magically be better.


Returning to normal is a terrible idea when COVID patients are overwhelming hospitals across the world. How can things be normal if our healthcare systems are nearing collapse?

You might be willing to accept the risk of getting sick on your behalf, but by advocating a return to 'normal' before we have the capabilities to deal with this virus, you are advocating for putting even more stress on healthcare systems across the globe already on the verge of failure. There are patients in heavily-impacted areas who cannot access healthcare for other life-or-death concerns because hospitals are crumbling under the workload of COVID cases.

The article even states this; COVID is likely to reach endemic status eventually, but we are still nowhere near that. Ignoring it will have enormous costs on vulnerable populations -- even more than it already has.


Fanatics also have a tendency to try to latch onto whatever details may offer a respite from the narrative. The core problem here is that Apple is effectively putting code designed to inform the government of criminal activity on the device. It’s a bad precedent.

Apple gave its legendary fan base a fair few facts to latch onto; the first being that it’s a measure against child abuse, which can be used to equate detractors to pedophile apologists or simply pedophiles (these days, more likely directly to the latter.) Thankfully this seems cliché enough to have not been a dominant take. Then there’s the fact that right now, it only runs in certain situations where the data would currently be unencrypted anyways. This is extremely interesting because if they start using E2EE for these things in the future, it will basically be uncharted territory, but what they’re doing now is only merely lining up the capability to do that and not actually doing that. Not to mention, these features have a tendency to expand in scope in the longer term. I wouldn’t call it a slippery slope, it’s more like an overton window of how much people are OK with a surveillance state. I’d say Americans on the whole are actually pretty strongly averse to this, despite everything, and it seems like this was too creepy for many people. Then there’s definitely the confusion; because of course, Apple isn’t doing anything wrong; everyone is just confusing what these features do and their long-term implications.

Here’s where I think it backfired: because it runs on the device, psychologically it feels like your phone doesn’t trust you. And because of that, using anti-CSAM measures as a starting point was a terrible misfire, because to users it feels like your phone is constantly assuming you could be a pedophile and need to be monitored. It feels much more impersonal when a cloud service does it off in the distance for all content.

In practice, the current short-term outcome doesn’t matter so much as the precedent of what can be done with features like this. And it feels like pure hypocrisy coming from a company whose CEO once claimed they couldn’t build surveillance features into their phones because of pressures for it to be abused. It was only around 5 years ago. Did something change?

I feel like to Apple it is really important that their employees and fans believe they are actually a principled company who makes tough decisions with disregard for “haters” and luddites. In reality, though, I think it’s only fair to recognize that this is just too idealistic. Between this, the situation with iCloud in China, and the juxtaposition of their fight with the U.S. government, one can only conclude that Apple is, after all, just another company, though one whose direction and public relations resonated with a lot of consumers.

A PR misfire from Apple of this size is rare, but I think what it means for Apple is big, as it shatters even some of the company’s most faithful. For Google, this kind of misfire would’ve just been another Tuesday. And I gotta say, between this and Safari, I’m definitely not planning on my next phone being from Cupertino.


I take issue with most of the alarmism about this CSAM scanning. Not because I think our devices scanning our content is okay, but because of the implication that there's now a slippery slope that didn't exist before. For example, from the article:

> While today it has been purpose-built for CSAM, and it can be deactivated simply by shutting off iCloud Photo Library syncing, it still feels like a line has been crossed

Two simple facts:

(1) The system, as described today, isn't any more invasive than the existing CSAM scanning technologies that exist on all major cloud storage systems (arguably it's less invasive - no external systems look at your photos unless your phone flags enough of your photos as CSAM, which brings it to a manual review stage)

(2) Auto-updating root-level proprietary software can be updated to any level of invasion of privacy at any time for any reason the provider wishes. We aren't any closer to full-invasion-of-privacy with iPhone than we were before; it is and always has been one single update away. In fact, we don't know if it's already there on iPhone or any other proprietary system such as Windows, Chromebook, etc. Who knows what backdoors exist on these systems?

If you truly believe that you need full control and a system you fully trust, don't get a device that runs proprietary software. If you're okay with a device that isn't fully trustworthy, but appears to be benevolent, then iPhone isn't any worse than it was a month ago.

Until there's evidence otherwise, iPhone will continue to be as trustworthy as any other proprietary closed-source system. If you need more than that, please contribute to projects that aim to produce a modern, functional FOSS smartphone.


I think HN readership is so used to living in the digital world that many forget we can live without the internet as humans.

ANY connected digital media can become a 1984-style spying device if you don't have full access to it - or even if you do, unless you're an experienced electrical engineer.

The only defensible platforms are nondigital media and airgapped computing. I, for one, wouldn't shoot a nude with any digital device nowadays. Only exception would be a camera that gets connected only to airgapped computers.

But we are losing that too. In a short time we won't even be able to pay for that Polaroid or to buy a Librephone without being traced in some digital form.

We are losing all the battles but we need at least to prioritize. The cloud is lost, connected devices are lost. We need at least to keep cash payments, and to pressure the government to break up digital monopolies.

I don't want to sound like a desperate luddite, but I really think that warrant requirements are unenforceable in the digital world, and we really, really need to keep important parts of our lives in the analog one.

