> Because iOS software, backed by iPhone hardware, actively prevents a customer from installing any software on an iPhone outside of the App Store, it does also prevent attackers from installing malicious software. Because the App Store has rules about how applications (outside of their own) can access customer data, if Apple discovers a competitor like Google or Facebook is violating its privacy rules it can remotely remove their software from iPhones, even internal corporate versions of software owned by Google or Facebook employees.
This is a bit inaccurate: first, because the App Store has a spotty record of stopping malware from reaching your phone, and second, because the apps pulled in that incident did not go through the App Store at all; they were sideloaded using enterprise deployment. Apple does have the ability to remotely disable applications downloaded from the App Store, but to my knowledge it has never used this ability.
> These companies have built very sophisticated and secure defenses all in the name of protecting you from the world outside their walls, yet in reality the walls are designed to keep you inside much more than they are designed to keep attackers out. The security community often gets so excited about the sophistication of these defenses backed by secure enclaves and strong cryptography that their singular focus on what those defenses mean for attackers blinds them from thinking about what they mean for everyone else.
I mean, all you have to do is look at the things that are implemented to see that Apple's goal in many cases is to protect their software, not you. There is custom silicon in every recent iPhone that does nothing but stop modification of kernel code, even in the face of code execution and arbitrary read/write in EL1: interesting from an academic standpoint, but if you stop and think about it for more than a second it's entirely useless for actually protecting users.
"Apple's goal in many cases is to protect their software, not you."
Seems like it is easy to trick the public into confusing one goal for another.
There seems to be an implicit rule in the Apple software scheme: Apple itself is "pre-certified" as trustworthy. Not only at the moment of hardware purchase, but indefinitely into the future.
The user of course cannot revoke that "certification". The way these systems are structured today, the user effectively cannot decide after purchase "Thanks Apple, I got this. I'll take it from here." This sort of implicit trustworthiness of the hardware vendor as software vendor also underlies the contemporary concept of "updates". There is no genuine (viable) option for the user to say "no, thank you". Saying no would be deemed ill-advised for a variety of reasons.
The one-time hardware purchase is transformed into an ongoing dependent relationship that can be, and in fact is, exploitative. Its primary reason for existence is, as you suggest, not to "protect" or otherwise benefit the user, but it can be construed that way.
Yes, this is exactly how Apple's security model operates; at least for their embedded devices. On Macs it seems like you at least get the choice between "I trust Apple" and "I don't check anything", which is still annoying because clearly the option people want is "I trust me" but Apple is not willing to provide this.
The notarization process also filters out apps that give users greater control of their hardware, because they might touch things like private APIs. When the user goes to run the software they chose to download, macOS will treat it as if it's radioactive, giving the user the impression that the application either doesn't work or is malicious.
The end result is that users are left being unable to use their computers in ways that Apple doesn't like.
From the macOS Mojave for Users, Administrators, and Developers book:
> The Notary service will also perform some additional checks on the application. These include security checks that verify the application is doing what it indicates as well as the check for private API usage, similar to Mac App Store apps.
> For example, if a plug-in employs deep integration with the host executable via C function pointer overrides, or uses a JavaScript engine for custom workflows, the host executable must declare the Allow Unsigned Executable Memory Entitlement or Allow Execution of JIT-compiled Code Entitlement, respectively. In some cases, a plug-in fails to even load if the host executable lacks the proper entitlement.
This is the best documentation I could find with a quick Google search. But basically, if you can use C function pointers, you can call anything you want, and with the right level of indirection there is no way an automated scan can detect it.
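To make the indirection point concrete, here's a minimal C sketch (the symbol name is a made-up placeholder, not any real private API): the name is assembled at runtime and resolved through dlsym(), so a static scan of the binary never sees a reference to the forbidden function, only an opaque call through a pointer.

```c
/* Build on Linux with: gcc evade.c -o evade -ldl */
#include <dlfcn.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Assemble the symbol name from fragments so it never appears
       as a contiguous string in the binary. "_SomePrivateFunction"
       is a hypothetical placeholder, not an actual Apple API. */
    char name[64] = "";
    strcat(name, "_Some");
    strcat(name, "Private");
    strcat(name, "Function");

    void *self = dlopen(NULL, RTLD_LAZY);  /* handle to our own image */
    void (*fn)(void) = (void (*)(void))dlsym(self, name);
    if (fn)
        fn();  /* a scanner only sees an indirect call through a pointer */
    else
        printf("symbol not found (expected for this placeholder)\n");
    return 0;
}
```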
I also couldn't find anything in the official documentation about not being able to code-sign anything that uses private APIs.
And it's even simpler for apps that don't actually get manually reviewed; the check is really easy to bypass.
Right, so that is OP's point. Either you trust Apple to do notarization, or you disable signature checks by turning off Gatekeeper.
(Or you let specific apps through Gatekeeper, or you go even further and turn off SIP... there are a lot of in-between states, but they fall somewhere on the spectrum between "trust Apple" and "don't verify trust". There's no "trust Mozilla" option.)
> "the App Store has a spotty record of stopping malware from reaching your phone"
Source?
Apple's App Store has in fact a really good track record of stopping malware from reaching phones. Some malware still got in, but AFAIK Apple has reacted promptly on discovery, and it would be a logical fallacy to conclude that their track record has been "spotty" and, as implied, useless, because it isn't.
There are thousands of Windows PCs infected by botnet malware every day.
Yes, if you removed the App Store, iOS devices would still be more secure because of the sandboxing. What Apple takes, however, is a multi-layered approach to security, which is why iOS devices are in fact the safest devices for consumers on the market.
You and I may not like it. I actually think Apple should allow third party sources, just like Android. But let's not pretend that there aren't security benefits to their App Store.
> There is custom silicon in every recent iPhone that does nothing but stop modification of kernel code, even in the face of code execution and arbitrary read/write in EL1: interesting from an academic standpoint, but if you stop and think about it for more than a second it's entirely useless for actually protecting users.
Every piece of malware (and every jailbreak) wants to modify EL1 code; it's not just interesting from an academic standpoint.
Malware has no need to modify EL1 code if it can grab your iMessages without doing so; exploit chains found in the wild grab the kernel task port (or equivalent) instead. Jailbreaks do want to modify EL1 code, but this comes back to the point I was making about this protecting Apple's software and not their users.
There's no reason for malware to do this, though. It's like reinforcing your back door so people can't kick it in while leaving the front door with a pickable lock.
Except the front door doesn’t have a pickable lock, and this is just one of many overlapping security mechanisms. It’s like installing a metal back door and front door, bolting them, and also locking your windows.
Just because the windows can be broken more easily by an invader doesn’t mean there isn’t also value in the other steps you took.
> It’s like installing a metal back door and front door, bolting them, and also locking your windows.
Except Apple hasn't done that, they have just reinforced the back door and left the front door (which has periodically been picked) open. As it stands now it mostly just exists to annoy jailbreakers.
No platform is perfectly secure and every single piece of software is potentially vulnerable to zero-day exploits. The important question, as the previous commenter said, is what is the security record of each platform? And iOS is miles ahead of everyone else:
While I agree that iPhones are more secure, one should also take into account that they represent around 20% of deployed phone devices across the world.
I wouldn't assume install base is linear with malware incidence. I'd expect something supralinear, as you observe here; after all, this type of developer in particular is chasing a cost/benefit ratio, so to first order only the largest ecosystem is worth the effort.
> This is a bit inaccurate; first because the App Store has a spotty record of stopping malware from reaching your phone
Spotty relative to what? The App Store hasn’t had a 100% perfect record, but it’s pretty darn close to it. I really can’t think of any other software distribution channels for general purpose computing devices (including the open web) that have had a better security track record than the App Store
Have there been widespread instances of malicious software in Linux repos? It might just be that Linux distros are lower-value targets than Apple phones, but they do seem to be pretty good in terms of both keeping out actual viruses (which Apple is pretty good at keeping out) and keeping out shady advertisement/snooping-based software (which Apple seems to have been less good at keeping out than the repos have. Although they are doing well for a private company, right?)
Also, Steam seems to do all right as far as I can tell.
Why is it useless for actually protecting users? Is it easily circumventable or is it just that there are simpler attack vectors that don't require modifying kernel code?
Most iPhone attacks that users care about are those where attackers target sensitive user information and exfiltrate it–other ones don't really make sense on the platform. For this, typical attack vectors are a messaging app exploit (which is entirely outside of the control of Apple, FWIW), a web browser exploit coupled with a sandbox escape, or perhaps a bug in a Wi-Fi or Bluetooth driver; none of which typically require modifying kernel code in order to implement. The group of people that wants to patch the kernel, or extend it, is essentially nobody but security researchers and jailbreakers for whom the rationale to do something like this is often "because I should be able to do this". Thus, the feature is ineffective at addressing or preventing actual attacks that users care about and very effective at preventing people from tinkering with iPhones.
I think you're conflating 'attacks possible and thus more often seen in the wild' with 'attacks users don't care about'. People would very much care about exploits that become persistent if those happened.
Attacks against iOS that become persistent do not exist in the wild to my knowledge, nor did attacks that overwrote kernel text. They just do not exist, because there is no reason to do this when it's substantially easier to find other things to exploit.
I just don't see how 'preventing such attacks is just to annoy jailbreakers' follows from any of this, though. Like, it's a narrative but the logic doesn't add up to me. There are really good reasons to also defend against complex attacks, attacks that require physical access, you name it. I don't want my iMessages leaking stuff because someone sent it a funny message, I also want my phone to remain reasonably secure if I hand it to a TSA agent.
Perhaps Apple has some sort of model where this is helping, but as it stands this makes the lives of security researchers and jailbreakers annoying, and it does not close a hole that previous attacks were using (they are still using the same techniques!). Neither of your examples involve patching the kernel–the first is finding a bug in iMessage in userspace, and the second is handled by an entirely separate chip. My perspective is literally just "I want Apple to prevent exploits against my iPhone, and also not make jailbreaking worse for no reason", and this feature is "makes exploits no harder because no attacker cares that this exists" and "makes jailbreaking way more annoying". There's no balance between the two here, it's just negatives all around.
I didn't say either of these patched the kernel (one is in userspace; the other one is, well, you don't know where it is, since I didn't describe one), just that the difficulty and complexity of some potential exploit doesn't mean the exploit isn't important both to Apple and end-users.
You were arguing that some of Apple's mitigations are only aligned with Apple's business interests and not those of end-users. I don't think you've said much to really show this is true, other than in your specific case. Not wanting the phone to be jailbreakable, to provide some assurance that when you buy an Apple phone it's running Apple's software as designed by Apple and that it can't easily and surreptitiously be replaced with something else, is a perfectly reasonable consumer expectation, very much in line with what Apple explicitly sells and promises of the product and the services it comes with. The "for no reason" bit just seems obviously inaccurate.
I’ve been seeing this idea on the Internet a lot for months now, and I’ve been trying to put my finger on what I think is wrong with it. I think I finally figured it out.
I work for a corporation. It’s not a massive one; when I first started it was a midsize family-owned company, and we never did work in service to the bottom line, at least not solely. First and foremost we were interested in customers and providing value to them. I think that any major corporation or business, small or large, also has that same responsibility and probably also the same drive. Any organization is made up of multiple people, and there are definitely going to be some actors who are completely bottom-line driven, but you’re also going to have individuals whose purpose is meeting customer needs and providing customer value. I’m not saying Apple is an altruistic corporation or doesn’t have concern for the bottom line, but making such a broad generalization, to say that Apple only serves its bottom line, perhaps sets you and others up with a pessimistic attitude in which the people doing good work are missed.
This is mostly a bit of introspection on a thought I’ve been trying to flesh out for the last few months, and it’s not meant as a criticism of your comment in any way. I understand the sentiment of where you’re coming from, because I’ve also been in that same place. I think my views are just changing a little bit, and I thought I’d take this opportunity to write them down. Hope you have a great day.
syshum's post is definitely cynical, but I can see it being true in the sense that companies are not likely to make moves which lower their share price, even if they think it is ultimately good for the consumer, if they know consumers will think it is a net loss for them. For example, it may be possible to raise the security level of their devices 100 fold, but if it causes their products to cost 10x as much, no consumer will feel it is worth the price, and so the product will not sell and their stock prices will fall, even if it's ultimately in the consumer's interest.
But still, they are more consumer interest oriented than their sole competitor. At least this is a big part of their marketing.
Talking share price: Apple could have gone into the user-data-mining and targeted-ad business, which is very profitable, but they chose not to.
Yes, emphasizing their privacy focus as their differentiation is definitely part of Apple's brand identity and marketing strategy. However, Apple is in a different vertical and has a different business model than FB and Google, and may not have the DNA to succeed in businesses such as advertisement.
But Apple the organization, the Apple CEO and Board for sure, and probably all of the C-level, are looking at the share price...
Every public company would be the same.
Every private company would be looking at some other shareholder value, be it dividends or some other shareholder ROI metric.
Many if not most of the employees would be shielded from that in some ways, and it often presents itself as frustration over company policies, public statements, or actions, because they feel the company should be pursuing something other than shareholder value. Many companies present publicly as doing something other than that, and present to their employees "we are all a family" and other platitudes, because if a company came out with the hard reality, people would recoil; most people do not desire to exist in reality but instead in a nice delusion (this is also why we get the politicians we get).
Like all companies Apple does well when it looks after its users.
In the case of both the App Store and third party repairers that involves protecting users against bad actors. I have downloaded dodgy apps in the early days of the store and have had repairers use counterfeit, defective components for my Mac.
Unfortunately protecting against bad actors inevitably comes at the cost of openness.
Apple let a jailbreak app on the App Store once: https://9to5mac.com/2016/08/29/reddit-jailbreak-dribbble-cli.... And Charlie Miller got his developer account revoked because he showed it was easy to get malware past App Store review. In general, it's trivial to get things that break the rules past the review team; I think almost every large app is probably doing it to some extent.
I'm not sure I would consider a handful of examples of developers sneaking apps past the review team evidence of a "spotty record of stopping malware". Yes, you can trivially sneak things past app review by temporarily disabling disallowed features and re-enabling them once the app is released, this is literally what Epic did to get their payment processing system into Fortnite.
But the fact that all app distribution goes through the App Store means that malicious apps can be disabled once discovered, and bad actors can be banned. This is important.
If you compare the situation on iOS to the situation on Android the difference is night and day.
This type of malware is far more pernicious than an app getting snuck past review, for example rooting your phone and burying itself so deep it survives factory resets. And all of these examples were distributed directly via the web or from third-party app stores, so Google can't simply remove them from the Play Store either.
If you think the App Store has a spotty record when it comes to malware I wonder how you'd describe Android or Windows.
I suppose that depends on how exactly you define side-loading. I'll go ahead and clarify that I meant having a platform-sanctioned method of bypassing the default app store and instead downloading and installing third-party apps from the internet. But I think you could probably tell that from the context of the sentence.
We need apps that can tell the difference between normal use and a review process, so as to hide functionality when appropriate and release functionality to the control of an actual user.
Funny thing is my jailbroken iPhone 3GS was probably more secure because it had a firewall and adblocker installed. A PDF exploit got fixed faster as well.
Speaking as an Android user, the usefulness of the ecosystem is undeniable, and leaving it is too much friction for most, even for those who do care about privacy.
I think that, in addition to the focus on hardware, one approach should be to provide users the tools for more control. Provide a rooted phone by default. Add tools that let me see what my phone is doing behind the scenes. Give me the option to turn off some of the things I don't like (e.g. if an app is sending analytics to some domain, let me turn that part off without turning off the whole app, in a relatively simple way). Help with planned obsolescence by providing updates, and so on.
Basically give back some control of jail/castle to the users without asking them to leave it completely.
Edit: one more thing, please add NFC. Going back to carrying boatload of cards again is not an option.
All applications (and preferably system services too) should be sandboxed first. Linux has a lot of sandboxing technology, but it's often focused on server use cases, fairly complex, and everything but sandbox-first (more like non-sandbox-first, where we then fit the sandbox so that the program doesn't notice it runs in one).
But it's also many, many small problems.
Most (all?) of which you can solve with a complex combination of SELinux + cgroups + ... + modified applications + ... (long list). But doing so is a lot of work and often ends up with major usability drawbacks.
Just to name one example of a small but terrible feature: LD_PRELOAD.
Sure, it's nice for hot-fixes, including ad-hoc security hardening, but it's generally a horrible idea in a context where you don't fully trust all programs.
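For anyone who hasn't run into it: LD_PRELOAD lets a library you name shadow symbols in any dynamically linked program you start. A toy shim, assuming a glibc-style Linux system (the interposed function is an arbitrary example):

```c
/* shim.c -- build with: gcc -shared -fPIC shim.c -o shim.so -ldl
   Run any program as:   LD_PRELOAD=./shim.so ./some_program        */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdio.h>

/* This definition shadows libc's getenv() in every preloaded process. */
char *getenv(const char *name) {
    /* Look up the real getenv() that we are shadowing. */
    char *(*real_getenv)(const char *) =
        (char *(*)(const char *))dlsym(RTLD_NEXT, "getenv");
    fprintf(stderr, "[shim] getenv(\"%s\")\n", name);
    return real_getenv(name);
}
```

Every getenv() call in the target program is now observed, and could just as easily be rewritten, without the program's knowledge; that's exactly why it's dangerous when not all programs are trusted.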
Another thing is that most programs can read the configs/settings of most other programs, or at least know that there are such settings.
And the list goes on.
Again, just to be clear, you can fix a lot of this by putting things into Linux containers, but what is missing is a container engine focused on desktop applications that goes far enough. That also entails that you can't just run existing software without at least switching out the GTK/Qt framework for a version adapted to this use case, and potentially more adaptation than that.
So it's all technically possible, but we are not quite there yet, I think. Then again, I'm not completely sure, as I haven't followed some of the underlying topics closely in recent years.
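To illustrate what "fitting the sandbox afterwards" looks like at the lowest level, here's a minimal seccomp sketch in C (assuming Linux with libseccomp installed; link with -lseccomp). It bolts a single deny rule onto a default-allow policy, which is exactly the backwards, non-sandbox-first shape I mean; the hard part in practice is knowing which of the hundreds of syscalls a given desktop app legitimately needs:

```c
/* Build with: gcc nosock.c -o nosock -lseccomp */
#include <errno.h>
#include <seccomp.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void) {
    /* Default-allow filter that denies just socket(2). */
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_ALLOW);
    seccomp_rule_add(ctx, SCMP_ACT_ERRNO(EPERM), SCMP_SYS(socket), 0);
    seccomp_load(ctx);  /* filter applies to this process from now on */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    printf("socket() -> %d%s\n", fd, fd < 0 ? " (blocked by seccomp)" : "");
    return 0;
}
```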
I'm not the one you asked, but I think the underlying issue is that the Unix security model is built around the assumption that software shares the user's intent, and is treated as exactly as trusted as the user. Computers at the time tended to be multi-user, so the focus was on protecting users from other users. While there was a risk of overtly malicious software, preventing remote code execution attacks and teaching users not to run trojan horses was mostly sufficient (for Unix users; the broader population using Windows often installed malicious adware in the early 2000s).
In a modern environment, software should not necessarily be trusted to act in accordance with the user's wishes or best interests, and there's often a financial incentive for software creators to do things users wouldn't want them to. In the early 2000s, the most visible issue was Windows software that displayed advertisements outside of the software, often not obviously connected to the software. It would often monitor the user's browsing habits and such, leading to the name "spyware".
Spyware of that sort was universally considered malicious, but modern smartphone apps often send far more sensitive information, such as location and address books to their creators. Those provide a simple example of a situation the classic Unix security model doesn't address very well. I am the only user of my phone, and I obviously want to be able to read my address book and get my location from the GPS. I do not want the latest and greatest app for sharing pictures of my lunch to track my location to show me restaurant ads, and I only want it to know about people I have explicitly connected to within the app, not my whole address book.
Android's security model addresses that to a degree, restricting some capabilities until the user explicitly allows them. Some of these, like filesystem access, aren't handled very gracefully, and it's possible for an app to refuse to work until granted permissions it doesn't really need (this is against policy for inclusion in the Play Store, but enforcement is imperfect, and software can be installed from other sources). One workaround, seen in XPrivacy, is to feed fake data to apps.
Not just root, but Xposed, which breaks the Android security model even more.
Breakage isn't binary though. The user is still in charge of whether a given app gets access to root or Xposed features. The Android security model and additional security features of XPrivacy can be applied to apps the user does not trust with certain kinds of data or capabilities while granting other apps increased access.
Linux phones are never going to be especially obvious or user friendly. To get those types of features you need a boatload of investment that the open source community can't hope to muster in any sort of reasonable timeframe.
For now the choice is privacy vs user friendliness. Currently rooted/de-Googled Android provides one of the best compromises for the average privacy-minded power user, but maybe in a few more years these Linux phones will be mature enough to give Android more of a run for their money.
> To get those types of features you need a boatload of investment that the open source community can't hope to muster in any sort of reasonable timeframe.
The Linux desktop is an obvious counterexample. If the open source community can develop user-friendly desktop environments (GNOME, KDE Plasma, Xfce) they can surely develop user-friendly mobile stacks.
> Everything is open source. You have all the power to study what your phone is doing. Standard GNU/Linux tools should work.
The vastness of just the changelogs makes that untenable. Understanding the changes to such a nontrivial stack is something even experts will struggle with.
Certainly, open source code is a prerequisite to any trust, but just that isn't enough.
As a freedom-, privacy- and also security-aware person, I disagree quite strongly. Phones should by default provide a way for people to modify their phone as they see fit but they should definitely not be rooted by default. The reason being that root access is, well, the root of all evil. Practically nothing has advanced end consumer device security in recent times as much as Apple and Google enforcing a secure boot chain (in particular: read-only root file systems), implementing heavy sandboxing and cutting down[0] on apps' permissions. All this is worth nothing if apps can obtain root and do whatever they want.
People often claim Linux (as a desktop OS) is secure. It's not by any standard. All the apps you use in your day-to-day tend to have full file system access to all your personal files and full network access. You've merely been lucky so far that you (hopefully) haven't gotten pwned by any rogue application.
I'm eagerly awaiting a time when my root file system will be read-only, my entire boot chain will be verified[1] and all my day-to-day applications will be fully sandboxed and only very few of them have network access.
[0] Granted, one might argue whether this is actually what Google has done.
> Phones should by default provide a way for people to modify their phone as they see fit but they should definitely not be rooted by default.
You don't see the contradiction here? Root access is synonymous with the ability to modify the device as you see fit. The root of trust starts with the owner of the device, not the hardware manufacturer or operating system developer. This does not imply that any random app should be able to obtain root access, or that you can't have things like verifiable read-only root filesystems and strict sandboxing. It just means that the owner of the device should be in control of the keys—i.e. the "root" user—and that apps should not be able to introspect the system to determine whether or not it's running in a "vendor-approved" configuration. You don't have to use the root account for day-to-day operations just because it's there.
A ROM being "rooted" in the Android context has a very specific meaning: namely, apps can simply request root access and, boom!, they will have it after the mere tap of a button. (It should be obvious that for security reasons this is not something that should be available to everyone and their grandpa.)
So no, I don't see a contradiction here. For instance, I'm running LineageOS on my one phone and stock Android without GApps on my other, so in both cases I'd say I have full control over what's running on my phone. But neither of my phones is rooted.
> (It should be obvious that for security reasons this is not something that should be available to everyone and their grandpa.)
If they own the phone and that's what they want then it should be available to them.
> Namely, apps can simply request root access and, boom!, they will have it after the mere tap of a button.
How easy the root account should be to access is debatable. I agree that one-tap-root by default without some kind of authentication beyond physical access is probably not the best idea, but I don't think that is what the OP was proposing. The point is that the owner should have root access to customize the software if they want it, without begging anyone's permission (e.g. phones requiring a code from the manufacturer to unlock) or having key features disabled (e.g. "SafetyNet").
> > (It should be obvious that for security reasons this is not something that should be available to everyone and their grandpa.)
> If they own the phone and that's what they want then it should be available to them.
Not if what they want and don't want entirely depends on how well informed they are. Keep in mind that malicious apps do everything to convince their users to grant them the necessary permissions.
> The point is that the owner should have root access to customize the software if they want it, without begging anyone's permission (e.g. phones requiring a code from the manufacturer to unlock) or having key features disabled (e.g. "SafetyNet").
> Google Pay doesn't (officially) work on rooted phones.
As the original request was at least partially pointed in Google's direction, it's probably reasonable to treat it as including a request that Google not punish the user in other ways for rooting.
I do believe fundamentally that if you don't have (or can't easily get) root on a device, you don't really own it. On the other hand, giving unlimited permission to third-party apps isn't a great idea in today's model where arbitrary app developers are most certainly not worthy of unlimited trust. I'd like to have a more sophisticated permissions model, to include things like allowing an app access to only specific files and directories rather than full filesystem access (I'm aware Android has a barely-usable form of this), or network access, but only to specific domain names.
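For what it's worth, the Linux kernel has grown a primitive along exactly these lines: Landlock (5.13+) lets an unprivileged process voluntarily restrict its own filesystem access to specific directories. A rough sketch, assuming a Landlock-enabled kernel and headers (the paths are placeholders, and error handling is omitted):

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <linux/landlock.h>
#include <stdio.h>
#include <sys/prctl.h>
#include <sys/syscall.h>
#include <unistd.h>

int main(void) {
    /* Handle (i.e. deny by default) basic file reads and writes. */
    struct landlock_ruleset_attr ruleset = {
        .handled_access_fs = LANDLOCK_ACCESS_FS_READ_FILE |
                             LANDLOCK_ACCESS_FS_WRITE_FILE,
    };
    int ruleset_fd = syscall(SYS_landlock_create_ruleset,
                             &ruleset, sizeof(ruleset), 0);

    /* Grant read access only beneath one directory; "/tmp/allowed"
       is a placeholder and must exist. Everything else is off-limits. */
    struct landlock_path_beneath_attr beneath = {
        .allowed_access = LANDLOCK_ACCESS_FS_READ_FILE,
        .parent_fd = open("/tmp/allowed", O_PATH | O_CLOEXEC),
    };
    syscall(SYS_landlock_add_rule, ruleset_fd,
            LANDLOCK_RULE_PATH_BENEATH, &beneath, 0);

    prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);  /* required before enforcing */
    syscall(SYS_landlock_restrict_self, ruleset_fd, 0);

    /* From here on, reads outside /tmp/allowed fail with EACCES. */
    printf("open(\"/etc/passwd\"): %s\n",
           open("/etc/passwd", O_RDONLY) < 0 ? "denied" : "allowed");
    return 0;
}
```

Network access restricted to specific domain names is harder; as far as I know you'd have to approximate it today with a filtering proxy or firewall rules rather than a per-app permission.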
It would be great to have more players in the smartphone OS domain, so I hope this makes it big. On the other hand, the downside of taking the security and privacy of your phone into your own hands is that... you’ve taken the security and privacy of the phone into your own hands.
I’m happy to pay Apple to do it for me because my phone is nowhere near my main or most important computing device and I also quite like that they poke google and Facebook in the eye from time to time. Sure, Apple is just a profit driven enterprise like the rest but their business model is directly related to keeping users happy at least for some value of users and happy.
The only downside for me is that I can’t write an app for my device because I haven’t bought into the Apple computer ecosystem.
> I’m happy to pay Apple to do it for me because my phone is nowhere near my main or most important computing device
But for millions, their mobile device is their main or most important device, whether it runs iOS, iPadOS, or Android.
It can be their main device, with one example being Apple's divisive "What's a computer" ad. Another example is those that rely on mobile, due either to cost or lack of physical space for a desktop or laptop.
For others, their mobile is the most important, likely because that's where all their messages, contacts, location history, and more, resides.
> For others, their mobile is the most important, likely because that's where all their messages, contacts, location history, and more, resides.
Indeed. And some of those people value the security and privacy of that data and want to reduce the risk of having it stolen by malware and rogue apps, so they intentionally buy into an ecosystem that enforces strict controls on what apps can run on their devices.
That last one is big for me. I like using an iPhone and I have an idea for a nice app, but I do not like MacBooks. I have tried them a few times and just do not like macOS as much as Linux. Now I can't work on my app idea, because Apple's policy is that you buy all Apple products or things just don't work properly anymore.
You could develop with React Native and use a Mac VPS to do the final build.
Not sure what you want from Apple here. Is it to port their entire iOS/OSX SDK to Linux just to support the 0.000001% of users who would want to develop that way?
Because Windows users don't ever want to write iOS apps? Particularly game devs, who are primarily using Windows computers to develop games because Windows PC is their largest market? None of those people would like to develop iOS games without switching to an entirely new OS?
> Not sure what you want from Apple here
Open up their SDK to the point where communities like Linux can realistically set up their own build tools or emulators. Don't sue people who are providing iPhone emulator access[0]; instead create code/SDK licensing opportunities for companies to ship these environments to customers/developers without running afoul of copyright laws.
I lose sympathy for companies claiming "this is too hard", when they're going out of their way to make the problem harder for other people to solve.
I don't know why we're pretending that there's anyone at Apple thinking, "wow, I'd love to make it easy for Windows/Linux devs to target iOS, but it's just too hard for me to engineer." We know that's not really happening, we know that Apple's management is very happy to force developers to buy Macs, we know that's a status quo that Apple has every incentive to reinforce at every opportunity.
> pretty sure that Linux accounts for higher than 0.000001% of developers
Definitely! As a quick example, I make a cross-platform developer tool with no particular focus or obvious bias towards any OS (https://httptoolkit.tech), and the active user base is:
44% Windows
42% Mac
14% Linux
Developers are everywhere - limiting tools like Apple's to just their platform is a huge disservice.
Same can be said about Sony, Nintendo having Windows only SDKs, or not being able to target Windows from Linux or macOS (cross compiling only works as much as Wine can help).
If you disagree with Apple and do not want to support them, "vote" with your wallet.
Android devices are more open than Apple devices, have more capabilities, and more vendors with a wider range of device offerings.
Purism, PinePhone, and others are even more open than Android, but the software offerings are much more sparse.
Apple devices are much more locked down, but offer a polished and curated user experience. While Apple devices are more expensive, they tend to maintain their value better and are generally supported with software updates for much longer than Android devices.
That depends on what vendor, device, and Android distribution you choose.
>> Also about 'more open' - sounds like 'more good' or 'more pregnant'.
"More open" meaning apps can be side-loaded on Android devices without needing permission from anyone. "More open" meaning you have a choice of vendors. "More open" meaning there are many marketplaces where you can find or purchase apps. "More open" because more choices are available.
>> So, here's two ways only - open or closed.
The Android ecosystem has a spectrum of openness depending what vendor, device, Android distribution, and app marketplace(s) you choose. Some vendors lock their devices down more than others.
>> "Android is open source" - only the marketing slogan. You can run open source Android only in VM.
Really? There are several privacy-focused, open source Android distributions that actually run on devices:
There is no smartphone which can run Android without binary blobs. Exceptions: https://tehnoetic.com/mobile-devices, but many essential things just don't work.
Yes, Google manages it just fine, and people programming on Linux are not exactly a nonexistent group. I'd even be happy with a Windows toolkit, since then I could at least use my existing desktop for it.
> Not sure what you want from Apple here. Is it to port their entire iOS/OSX SDK to Linux just to support the 0.000001% of users who would want to develop that way.
Just from the Stackoverflow survey, Linux developers have a 25% market share, same share as OSX.
I've found the Mac Mini to be Apple's best hardware; more upgradable than the typical MBP, and much more reasonably priced. (I also never use any laptop I use as a laptop; I find the ergonomics of laptops unbearable).
But I also enjoy OSX and haven't experienced many of the downsides commonly mentioned on HN, so YMMV.
I have a Ryzen 9 desktop that I use for gaming and programming. Apple offers nothing in the desktop space which covers my needs. So I either buy a less powerful version of my current computer or don't develop for iOS.
Our personal computing devices should have the same sort of protections and affordances that our homes have. We should have an expectation of privacy and control over our domain. We should have the rights to build, repair, and more.
If we're not leasing a phone or leasing a house, if we are owners of the phone or the house, then we should have full control to every aspect of it.
But that, unfortunately, seems to be the thinking at most companies these days: that consumers don't own their phones, among other things. In fact, most manufacturers seem to want to get onto the bandwagon; they lease the right to use them.
I'm fully in agreement with you though. Phones and most end consumer devices should not be treated as leased devices.
I may need to pay for a service to fully use said device: mobile data, wifi, etc. But those services should not be locked to one provider, and should be separate from the manufacturer.
Limits on repairs and modification should be few and a void warranty should be the steepest penalty. Modifying or repairing a device should not be grounds to have that device bricked at the whims of a manufacturer.
> Phones and most end consumer devices should not be treated as leased devices.
I lease my home. Even though I don’t own it, I still have an expectation of privacy! The law protects me from having my landlord just decide to walk in whenever they want, for example.
Privacy is something everyone should have, even on leased devices. People have things like private conversations and banking credentials on their phones.
Some companies (like Apple) make it impossible to repair your own device, even if you have the tools and skills, because they ask vendors not to sell spare parts to third parties. For example, if a charging IC dies, you have the option to either send the device to Apple and pay a significant markup for the 10-minute repair, or throw the device away. You should be able to buy such a chip and replace it without the company's involvement.
I hope that this practice will be illegal in the future.
I think part of the idea of "not owning your device" is that Apple and Google have controlled gardens, Apple way more than Android, but still. They control the system software.
Apple also controls the hardware - for example they prevent vendors from selling some spare parts to 3rd parties. For example if a USB charging chip dies, you cannot replace it yourself, because you can't buy the part.
While I agree with this, I fear it requires legislative action. Third party doctrine effectively says you have no reasonable expectation of privacy (in the US) if data is transferred to a third party voluntarily.
That means most cloud or other "hosted" systems (which smart devices and phones have become near inexorably tied to, for the average user) offer users no expectation of privacy.
To regain this, you'd have to avoid backups to the cloud, or using any hosted service... Not easy for most users when it comes to email, or even storage. That seems to be the first logical step.
(And yes, I know end to end encryption can help, but usability is the issue for users here - end users aren't good at remembering long high entropy passphrases, or keeping bits of paper with recovery keys safe without being lost)
I think Librem/Purism has a point when you look at it in isolation. But looked at holistically, it completely misses the point. You can put in all the kill switches you want, and sure, that will benefit you in some small way, but as long as the user's data is with Apple/Google/FB etc., it largely doesn't matter which phone you use, as you are still very exposed. Until that problem is solved (users truly owning their data), everything is simply a band-aid fix. Sure, Purism/Librem might be a slightly better band-aid than what is offered by current vendors, but it remains a band-aid nevertheless.
The reality today is still that you can’t have greatly useful tools, with their super-powerful functionality, separated from where and how their data is stored. Until that separation comes, all these security and privacy issues can’t be solved at the root.
Once you look at it holistically, you see it’s not just a hardware issue that you are dealing with. You are dealing with a much larger and more complex set of issues: users caring about privacy; development of standards and tools that allow vendor-agnostic data portability while still being as powerful as Google Sheets or, say, Maps or Keynote; pervasiveness of such tools, since if only a small population of users is using them they are not very useful; and a business model that supports sustainable monetization of such tools, so businesses can continue to provide them.
It’s quite complex, and kill switches and secure hardware can’t solve that. At least not alone.
I know multiple people who use voice assistants like Alexa that are effectively wiretapping their house. When asked how they feel comfortable with that, they say, "Well, my phone's already compromised anyway..."
I think people do care about privacy, but right now they just don't see any practical way to make it happen. The root of it all is the smartphone itself, because "Don't use a smartphone" is not a serious option.
Using a voice assistant like Alexa isn't like wiretapping your home. It's introducing a powerful vector and risk for easily wiretapping your home, to be sure, but your phone is a vector for wiretapping your room (and your vicinity wherever you go), as is your laptop or desktop if they have microphones that aren't hardware disabled, as is your smart TV or remote if you have one. It's not effectively wiretapping your home, it's effectively making your home wiretappable, which isn't great but isn't the same thing.
It's all about the risk profile you're willing to accept. I personally have much more private information accessible through my phone, computer, and certain cloud accounts than I do from things I'm saying in my home, and I believe a malicious actor (potentially including a large corporation like Amazon or Google) would have a much easier time and/or would be much more likely to engage in tapping one of those sources than a voice assistant in my home.
It's certainly very possible - there was a recent story about how a malicious Alexa skill, if downloaded and installed, could siphon your entire voice history - but those sorts of risks also exist for almost any internet-connected device you own.
So, in my opinion and going by my own risk profile, those people responding "my phone's already compromised" are behaving rationally.
It may be true, but I wouldn't say it's evidence for most people behaving rationally. That assumes that everyone does a thorough risk assessment before buying any "smart" device for their home. I highly doubt that's the case.
I put the statement "my phone's already compromised" in the same category as "I don't have anything to hide". Those may very well be common natural behaviors, but I wouldn't call them rational exactly.
> The reality today is still that you can’t have greatly useful tools, with their super-powerful functionality, separated from where and how their data is stored. Until that separation comes, all these security and privacy issues can’t be solved at the root.
Not sure what you mean here; it sounds very vague. The operating system? PureOS is endorsed by the FSF. Cloud storage? Purism does not force you into it, and you can use Nextcloud.
> a business model that supports sustainable monetization of such tools, so businesses can continue to provide them
One of the goals of Purism is to influence the phone industry.
> development of standards and tools that allow vendor-agnostic data portability while still being as powerful as Google Sheets or, say, Maps or Keynote
GNU/Linux with its packages?
> Your security and privacy aren’t really protected inside these walls because the main point of these security measures is to enforce control, security against attackers and protecting your privacy is mostly marketing spin.
The author presents this as fact but does nothing to actually justify the claim. I'm not sure why we should assume it is true when two decades of history of malware on Windows (and to a lesser extent, Android) clearly demonstrate the problems with having no walls at all.
The irony of course being that this article itself is a marketing piece for this company's product.
It's not entirely wrong. Secure boot is great technology; the issue is who controls the keys to the machine. If the user controls the keys, it is empowering technology. If the manufacturer controls the keys, the technology becomes merely a tool they use to maintain control over the user's computer.
Fair enough, but the claim is that "security and privacy aren't really protected", which I strongly disagree with, and I don't believe the author has presented anything to justify such a strong claim.
I disagree that privacy is opposed to security. On the contrary. Linux is an open system, and it hasn't had widespread malware problems. You could make an argument about the number of users here, but then it would also invalidate the argument about Apple providing security on its systems, I presume.
Linux is even more open than Windows, and we haven't had this malware problem there. There are repositories that work without the lock-in crap other manufacturers pull.
Linux as a consumer-level operating system has basically zero market share, nothing about it will be widespread. If you add in consumer devices with a Linux kernel (i.e. Android devices) then yes, it has had widespread malware problems.
Also, the flip side of "your X is your castle" is that "You are the lord, you take full responsibility for whom you invite into your castle, and if you invited someone who claimed she was your grandma and she took all your belongings, shat on the carpet, and flew out the window, it's your fault."
In real life on the iPhone we have had apps secretly uploading your address book, copying your clipboard and listening for tones embedded in television ads. And "The Fappening" where many people's private photos were leaked.
If that's what happens when you've got hundreds of experts working to prevent it then why do you think it'll be a less of a problem when it's random non-experts?
edit: The imperfection of the current system does not prove that another option is better.
Every major operating system has this, regardless of whether it forces you to download software through one marketplace. You're not less secure if you use two marketplaces, as long as both marketplaces are kept secure. iOS is kept secure independently of the App Store as well.
In real life we also have murders and kidnappings, that just means no system is perfect. It certainly doesn't mean there's no point in having law enforcement.
Sure, but think very carefully about whether or not you actually want me to compare Apple to law enforcement. My feeling is that a different analogy would better suit your argument. Is your intention really to make me think about government 'security' talking points around encryption and terrorism?
In real life, if someone told me that murders and kidnappings were a good reason for the government to have absolute control over what computer applications are allowed to be built or what games/media are allowed to be distributed by its citizens, I would call that person an authoritarian.
That's because in real life we balance law enforcement with individual rights. We don't just claim that every single intrusion into people's privacy and autonomy is necessary because otherwise the murderers would come. We also view certain freedoms as inalienable -- we believe that protecting those freedoms is just universally more important than preventing murders. In fact, many people believe that some degree of difficulty and inexactness and imperfection in law enforcement is necessary for the furthering of social progress beyond what the government currently believes is acceptable.
In other words, we balance between anarchy and authoritarianism.
In the same way, we don't only have two choices here. There is a middle ground between "only Apple decides what can run on your devices", and "everyone for themselves, forget trying to make anyone secure." We can get better sandboxing, we can learn more UX techniques around warnings, we can improve public education about computers, we can build out device administration tools, we can build very targeted escape hatches that don't turn the OS into a free-for-all. Even beyond that, we can decide that some user freedoms are worth an increase in malware, the same way that we've decided some security gains are worth a decrease in user freedom.
So I'm not really swayed by someone saying that the only way to prevent malware is if Apple/Google ban porn, and decide for users which payment methods they're allowed to use in an app, and decide whether or not online game streaming apps are allowed to enter the market, and decide whether or not serious games like Sweatshop can be considered art, and decide whether or not podcast apps will be allowed to include COVID podcasts in their directories.
At the very least, we could get rid of most of those restrictions, or we could move all of the security checks to a separate layer and allow people to bypass the content restrictions on their own, and none of that would impact device security.
That we want some security checks does not imply that we should never try to balance security with user freedom.
> My feeling is that a different analogy would better suit your argument.
Feel free to pick whatever example you'd like, the underlying point is the same: just because some bad actors will ignore the regulations anyways doesn't mean we shouldn't have the regulations in the first place or the regulations have no net benefit. In other words, pointing to a few counter examples and saying "gotcha! your regulation didn't perfectly prevent everything!" is not a meaningful critique.
> So I'm not really swayed by someone saying that the only way to prevent malware is if Apple/Google ban porn, and decide for users which payment methods they're allowed to use in an app, and decide whether or not online game streaming apps are allowed to enter the market, and decide whether or not serious games like Sweatshop can be considered art, and decide whether or not podcast apps will be allowed to include COVID podcasts in their directories.
I generally agree that these examples are overly restrictive and unnecessary. However, I don't think legally forcing manufacturers to open up their devices to side-loading is the appropriate remedy, because it increases the level of risk from bad actors attempting to exploit those devices.
I also think Hacker News posters have a tendency to underestimate/downplay those risks because as highly technical people they know what to do to avoid those risks - but the same does not apply to the vast majority of users.
We might be arguing past each other. I agree that cherry-picking doesn't mean that a system should be immediately discarded. But in my mind, the point of bringing up individual malware examples is not to say that all regulation is worthless, it's to drive home that perfect security doesn't exist, that we shouldn't be striving for perfect security in the first place, and that the real world is about balancing security with other concerns.
I don't understand what makes your argument different from, "I don't think allowing encryption is a good thing, because it increases the level of risk from terrorists and traffickers." There is no such thing as a malware free world, and saying, "this would increase malware" is not an immediately persuasive argument.
In other words, if your angle is that you're worried about people cherry-picking counter-examples, my angle is that I'm worried about people pointing at every single security restriction and saying it's critically important, regardless of what it costs users.
We're talking about abandoning a fundamental user right. I need to see stronger evidence that the security gain is so large that it justifies getting rid of that right. The reason your comparison to the government stuck out to me is because it's the same faulty reasoning that the government uses all the time to say that any increase in citizen security or rule enforcement is worth pursuing, regardless of what it means for citizen autonomy.
> I also think Hacker News posters have a tendency to underestimate/downplay those risks
What are those risks? You want to get rid of cherry-picking, what kind of change in malware would we be talking about if we got rid of sideloading on Android or introduced it on iOS? The best data I'm seeing online suggests possibly an impact to 0.5% of current devices based on Android statistics, and that's assuming we can't get any other gains from sandboxing and user-education.
Frankly, even assuming that we couldn't reduce that number farther, that's not a number that's big enough to justify abandoning a user's fundamental right to control what code runs on their device. Especially when we have good evidence that in the absence of that right, companies like Apple will both censor and use their power to control the market and target competitors.
> I don't think legally forcing manufacturers to open up their devices to side-loading is the appropriate remedy
I'm open to lots of solutions here, some regulatory and some market-based. We don't need to focus on just sideloading if there are other solutions other people find more palatable (<cough>Repeal the DMCA</cough>).
But even on the topic of sideloading, I'm open to the idea that this doesn't need to be a general regulation. I'm fine with saying that Apple is in a unique position because it's one part of a duopoly, and that we don't have to make a generalized rule for every company just to target Apple/Google specifically. My position isn't necessarily that manufacturers all need to be forced to open up their devices, it's that it might make sense to impose that regulation on companies in a duopoly when it can be demonstrated that they are actively harming the market with their restrictions.
Even regulatory solutions are a balance; regulating an aggressive duopoly is different from regulating an entire market.
> But in my mind, the point of bringing up individual malware examples is not to say that all regulation is worthless, it's to drive home that perfect security doesn't exist, that we shouldn't be striving for perfect security in the first place, and that the real world is about balancing security with other concerns.
I certainly agree that perfect security doesn't exist and we need to balance security with other concerns. However, I believe that a platform with strict controls directly contributes to increased security and privacy on that platform, and those factors are important to me, so the balance is worth the trade off. You are of course free to prioritize other concerns and purchase the device that best fits your concerns.
> There is no such thing as a malware free world, and saying, "this would increase malware" is not an immediately persuasive argument.
It is to me, because (as I said in my original comment in this thread) we already have two decades of history of malware on Windows and Android to show us what happens when you expose non-technical users to a highly popular, but unrestricted operating system.
> What are those risks? You want to get rid of cherry-picking, what kind of change in malware would we be talking about if we got rid of sideloading on Android or introduced it on iOS?
I think the numbers speak for themselves and side-loading is exactly the reason why.
> In 2018 Android based devices are once more the main target in mobile networks. In the smartphone sector, the vast majority of malware is currently distributed as trojanized applications. The user is tricked by phishing, advertising or other social engineering into downloading and installing the application. The main reason that the Android platform is targeted, is the fact that once side-loading is enabled, Android applications can be downloaded from just about anywhere. In contrast, iPhone applications are for the most part limited to one source, the Apple Store.
> The best data I'm seeing online suggests an impact on possibly 0.5% of current devices based on Android statistics,
I'm curious where that number came from? Individual Android malware attacks have affected up to 25 million devices [2], so that number doesn't really make sense to me.
> and that's assuming we can't get any other gains from sandboxing and user-education.
Note that most of the counter-examples in the comment I replied to were examples of developers abusing legitimate APIs. (Except the photo leak, which IIRC was based on a phishing attack.) Sandboxing is great for operating-system-level security but does nothing to help prevent these types of privacy violations, which are policed via developer guidelines and the review process instead. Protecting privacy cannot merely be treated as a technical problem to be solved via OS-level security restrictions. User education also does not help here because the users have no idea what developers are doing under the hood.
> that's not a number that's big enough to justify abandoning a user's fundamental right to control what code runs on their device.
I'm not opposed to the idea of adding some sort of "developer mode" that allows advanced users to load third-party binaries after some very strict and specific warnings, so people who really know what they're doing can use it. I just think it's a very bad idea for side-loading to become a primary method of app distribution, especially for general users.
Be careful of taking large percentages of small numbers. Right above the quote you list, the Nokia threat intelligence whitepaper says:
> In 2018 the average monthly infection rate in mobile networks was 0.31%. This means that in any given month, one out of every 300 mobile devices had a high threat level malware infection.[0]
Let's assume that sideloading is responsible for literally everything happening on Android (it's not, but let's assume it is). We're talking about a reduction of <0.5% of current devices. I don't think that's a high enough number to justify getting rid of a fundamental user right.
I'm getting my numbers from some press releases[1], and from Google's 2018 security report for Android[2]. Google reports:
> In contrast, 0.68% of devices that installed apps from outside of Google Play were affected by one or more PHAs in 2018. While this number is 8 times higher than devices that exclusively used Google Play, it’s a noticeable improvement from 0.80% in 2017.
So even when looking purely at devices that allow sideloading (assuming that everyone who sideloads on Android is doing so unwittingly and is the victim of phishing, which, again, isn't the case), we still get a possible savings of ~0.6% of current Android devices.
Is it worth allowing Apple to destroy the entire games streaming market on iOS to save 0.5-0.6% of devices (approximately 1 in 200 devices)? Is protecting 1 in 200 devices worth allowing Apple to be anti-competitive towards music streaming platforms like Spotify? No, probably not -- especially since user education around the risks of sideloading means that at least some of those users are already making an educated choice about their own personal security risks.
> we already have two decades of history of malware on Windows and Android to show us what happens when you expose non-technical users to a highly popular, but unrestricted operating system.
We also have two decades of the web showing us that sandboxing untrusted code is a viable model for application distribution. It's not an accident that the web won as an application runtime/distribution platform for most people, and it's definitely not an accident that the web is one of the few platforms where end-users generally trust themselves to execute hundreds of blobs of unverified code per person every single day.
Additionally, we're seeing data that suggests platforms like Android and Windows are becoming more secure despite the fact that they allow sideloading. So clearly there are gains to be made in this area beyond just getting rid of user rights.
> I'm not opposed to the idea of adding some sort of "developer mode" that allows advanced users to load third-party binaries after some very strict and specific warnings, so people who really know what they're doing can use it.
I think it's kind of a jump to assume that this isn't something that's mostly already happening on platforms like Android. It is very difficult to accidentally sideload an Android app unless you ignore security warnings.
And there's also a kind of double-standard here. We're assuming that every general user who buys an iPhone is doing so because they understand the underlying security model and are comfortable giving up their freedom in exchange for security. But we're not assuming that people who go through warnings to sideload apps are doing so with the understanding that there are security risks. Why is that?
We get into some uncomfortable questions about protecting users against their consent. If it could be shown that the majority of people sideloading today have no idea of the risk they're getting into, that would be something. But I'm uncomfortable assuming that. I'm uncomfortable looking at outcomes this small and saying that obviously those users need to be protected from themselves.
And I just don't buy your arguments around user education. It is possible to train people to be more secure, especially around well-defined boundaries like sideloading. The point of sandboxing and user-controlled permissions is to make it clear what developers are doing under the hood, because 'abusing legitimate APIs' is a subjective call that different users will have different standards for. Obviously there's more work to be done there, but platforms like Android, the web, and even iOS[3] are proving that users can be educated about topics like privacy and malware. I mean, even macOS allows users to disable Gatekeeper and (in most cases) bypass the store for app distribution. Do we think that's a giant security risk?
Again, perfection is not the goal. If we're talking about an extra 1 in 200 devices getting infected with malware, and it's not particularly complicated for high-risk targets, companies, and even nontechnical users to completely avoid that extra risk, and we have pretty good evidence that we can get that number even lower without taking away user rights, then I just don't see a compelling reason to take away user rights.
> We're talking about a reduction of <0.5% of current devices.
You're trying to use this number to downplay the severity of the malware problem on Android, but you need to be careful with the interpretation of this number. It's a rolling snapshot, not a measure of total devices affected.
What that means is: if you get infected this month and fix your phone, then I get infected next month and fix mine, a third person the month after that, and a fourth the month after that, the snapshot will only ever capture 1/4 of the total infections, even though all four of us got infected in the end.
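To make the arithmetic concrete, here's a toy model (purely illustrative: it assumes a constant, independent monthly risk equal to the 0.31% rate quoted above, and that every infection gets cleaned up within the month):

    # Toy model, not real data: constant, independent 0.31% monthly risk,
    # with every infection cleaned up in the same month it occurs.
    monthly_rate = 0.0031
    p_year = 1 - (1 - monthly_rate) ** 12
    print(f"Snapshot in any given month: {monthly_rate:.2%}")  # 0.31%
    print(f"Infected at least once in a year: {p_year:.2%}")   # ~3.66%

Under those (strong) assumptions, the yearly cumulative figure is roughly twelve times larger than any single monthly snapshot.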
What we really need is a metric of how many users are infected by at least one piece of malware during their ownership of the device.
Edit: I looked around and couldn't find this metric exactly, however I did find several even larger malware attacks that have individually infected way more than 0.5% of devices, which leads me to conclude the 0.5% number is extremely misleading.
Is it worth having a strictly controlled review and install process in order to help prevent hundreds of millions of malware infections, on the most important device in most people's pockets, the one that contains all their messages, emails, photos, location history, health data, etc.? I believe so.
> I don't think that's a high enough number to justify getting rid of a fundamental user right.
I take issue with framing this as a "fundamental user right". If you want to execute unapproved code on the iPhone you already have multiple options, such as using the standard developer SDKs or jailbreaking. What you are claiming is a "fundamental user right" is actually the right for third-party developers to distribute unvetted binaries for installation using platform-sanctioned infrastructure. I think it's a huge stretch to call that a "fundamental user right".
(Granted, I also think calling gun ownership a "fundamental right" is completely and utterly ridiculous, but different people have different opinions on what is truly fundamental.)
> > In contrast, 0.68% of devices that installed apps from outside of Google Play were affected by one or more PHAs in 2018. While this number is 8 times higher than devices that exclusively used Google Play, it’s a noticeable improvement from 0.80% in 2017.
So Google's own statistics say devices that use side-loading have an 8x higher risk of malware. That is significant.
> We also have two decades of the web showing us that sandboxing untrusted code is a viable model for application distribution.
I don't think it's fair to compare the two as browser sandboxing is significantly more restrictive than app sandboxing. Sure, if we restricted apps to the same degree that we restrict the browser, that would definitely improve security, at the cost of functionality.
> Additionally, we're seeing data that suggests platforms like Android and Windows are becoming more secure despite the fact that they allow sideloading.
Yes, because they've intentionally made side-loading more difficult with every release, which means fewer people are doing it, which reduces the attack vector.
> But we're not assuming that people who go through warnings to sideload apps are doing so with the understanding that there are security risks. Why is that?
Because we literally saw what happened when Epic attempted to release their app outside the Google Play Store. Non-technical users went ahead and checked the box to allow side-loading because they wanted to play Fortnite. Then they ended up downloading fake Fortnite APKs because they didn't know where to get the right one.
You're acting as if these risks are hypothetical when we've already seen this same story play out over and over again.
> And I just don't buy your arguments around user education.
I'm not sure you actually understood this argument. Consider an app that requests access to your contacts for a legitimate purpose (like messaging your friends), but then secretly stores and transmits that data for a malicious purpose (like selling your contacts to third parties). No amount of sandboxing, education, or permissions management will prevent this kind of privacy abuse.
> I mean, even MacOS allows users to disable Gatekeeper and (in most cases) bypass the store for app distribution. Do we think that's a giant security risk?
Yes, of course it is. macOS has a worse malware history than iOS.
> a rolling snapshot, not a measure of total devices affected.
The Google numbers I list are not monthly rolling measures, they're for the entirety of 2018. They're also specific to users who sideload. So it's not that 0.68 percent of Android users downloaded malware in 2018, it's that of the subset of devices that actively sideloaded apps, ~2/300 ended up encountering malware at some point during the year.
And this ends up mattering because it means that you can almost entirely eliminate that risk by just deciding for yourself whether or not you want to sideload.
> devices that use side-loading have an 8x higher risk of malware. That is significant.
An 8x increase that still results in less than a 1% risk over an entire year. The context matters, we are talking about extremely small numbers. The current numbers mean that if you own an Android device for 6 years and you regularly sideload applications every single year, you have a 4% chance of getting infected during that time. And this is assuming that nothing else changes to make sideloading more secure, that none of the education measures work, and that you don't sideload one or two important apps and then just turn the feature off.
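For transparency, here's the back-of-the-envelope math behind that 4%. It treats Google's 0.68% figure as an independent annual risk, which is a simplification:

    # Back-of-the-envelope only: treats 0.68%/year as an independent risk.
    annual_rate = 0.0068  # Google's 2018 rate for devices that sideload
    years = 6
    p_infected = 1 - (1 - annual_rate) ** years
    print(f"{p_infected:.1%}")  # ~4.0%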
When you only focus on the percentage change, you miss the bigger picture of what the malware risks actually are for phones. 4% is a number we would like to be lower. We always want the number to be lower. But not at the cost of an entire market. That 4% needs to be stacked against the costs of market capture and anti-competitive behavior.
Quick sidenote: I don't think it's that hard to explain the numbers you're seeing online. There are almost 2.5 billion Android devices in use globally. 200 million of 2.5 billion is a little less than 1 percent. I could easily see factors like repeat infections driving that number lower (Google is only counting infected devices, not the number of infections per device). Those numbers are surprising to me in that they might indicate that a lot more people are sideloading than I expected. But even that is balanced out by the fact that the majority of these cases aren't exclusive to sideloaded apps, they also made their way onto official app stores.
I'm definitely interested in hearing more about them, but I'm not looking at these numbers and thinking, "Google's official security reports are lying."
> I don't think it's fair to compare the two as browser sandboxing is significantly more restrictive than app sandboxing. Sure, if we restricted apps to the same degree that we restrict the browser, that would definitely improve security, at the cost of functionality.
If we want to go down this route, iOS is also fundamentally more restrictive than Android. Android has a permission that allows apps to just directly read sections of the SD card. I think that's a stupid permission for Android to have, and I would hazard that the malware numbers you're looking at would be lower if Android didn't have all of this crap. I shouldn't need to give a photo application access to my SD card just to take a picture.
On the subject of the web: yes, the web is more restrictive than native in many ways. But it's rapidly getting less restrictive, and we're now even considering permissions like native file access. That expansion in functionality is happening because we're seeing that sandboxing works. A lot of the legitimate permissions that we're trying to prevent abuse of within native apps (contacts, advertising IDs, location, data-sharing between apps, camera/microphone access) are areas that the web has grappled with and handled, for the most part, adequately.
It's not a perfect comparison -- if the web could do everything native apps could do, nobody would be writing native apps. But the growth of the web as a platform still suggests that sandboxing is something we should be taking very seriously.
> Yes, because they've intentionally made side-loading more difficult with every release, which means fewer people are doing it, which reduces the attack vector.
Reread that. Google saw a 15% reduction in malware among phones that sideload apps. Not overall across the entire ecosystem, among the people doing the behavior you think is too risky for them to do. We can improve the malware stats among people who sideload.
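To spell out where that 15% comes from, it's just the two figures from Google's report quoted above:

    # Relative reduction computed from Google's 2017 and 2018 figures.
    rate_2017, rate_2018 = 0.80, 0.68  # % of sideloading devices affected
    print(f"{(rate_2017 - rate_2018) / rate_2017:.0%}")  # 15%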
> Because we literally saw what happened when Epic attempted to release their app outside the Google Play Store.
What's our position on cherry-picking again?
More importantly, what's our basis for saying that when people clicked the checkbox and said, "I understand the risks, I still want to take those risks so I can get Fortnite", that was an accident or that they didn't understand what they were risking?
It is possible for someone to do something risky and get malware even though they generally understood the risks. And to get back to what I'm talking about with consent, I am uncomfortable with the idea that we need to go to people and tell them what risks they are and aren't allowed to take. If we believe that everyone who buys an iPhone is doing so because they are consciously balancing their security/freedom, why do we throw that philosophy out the window when someone makes a conscious decision to sideload an app? Not every user is going to have the same risk tolerance, and it's fine for users to have different degrees of risk tolerance.
> Consider an app that requests access to your contacts for a legitimate purpose (like messaging your friends), but then secretly stores and transmits that data for a malicious purpose (like selling your contacts to third parties). No amount of sandboxing, education, or permissions management will prevent this kind of privacy abuse.
No amount of anything will stop that privacy abuse other than extensive corporate auditing, which nobody (including Apple) is prepared to do. Apple can't prevent an app from secretly selling your data, it can only ban the app after the fact. And once it becomes public knowledge which apps are selling your data, then education and permissions management starts to matter again.
The only preemptive thing we can do is to make it obvious when apps are transmitting data and to what location. We can also train users to stick to commercially vetted apps and to do a little bit of research to figure out whether a company seems sleazy, or if they've popped up out of nowhere. But that's the most we can do. Apple's moderation team doesn't have any kind of magical ability to tell what I'm doing with user data once I've gotten it onto my servers.
> such as using the standard developer SDKs or jailbreaking
I wonder, back when Apple was arguing that distributing jailbreaks for iOS should be illegal, did they have any idea that it would someday be a core argument as to why they weren't actually suppressing user rights?
If you don't think that the user right to decide what code runs on their own devices is a fundamental right, then that might just be a disagreement we have. I think it is a fundamental right, and I don't think that the developer SDKs or the constantly shifting jailbreaking communities satisfy that right. But if you disagree with me on that, then we disagree, that's fine. There's no short argument I can come up with as to why you should believe it's a fundamental right.
> Yes, of course it is. macOS has a worse malware history than iOS.
To that point, usually people don't try to argue that sideloading should be removed from desktop computers. It's an interesting and kind of troubling shift to see this argument popping up now. You're not the first person to suggest it, but I'm still always surprised when I see it. What would the computing industry look like today if early Windows/Macs had only been able to run authorized software?
> The Google numbers I list are not monthly rolling measures, they're for the entirety of 2018.
Fair enough, I was referring to the "average monthly infection rate" from the text you quoted.
However, I am having trouble reconciling Google's numbers with the numbers from other reports. For example, Kaspersky's mobile malware evolution report (https://securelist.com/mobile-malware-evolution-2019/96280/) says 13.89% of users in the United States were attacked by mobile malware in 2019. The number is as high as 60% for Iran.
> 200 million of 2.5 billion is a little less than 1 percent.
That's 8%. I don't understand how Google can say in the same report that 199 million devices were infected by a single piece of malware, but only a maximum of 0.68% of devices were affected. Something doesn't add up.
(I'll address your other points when I have more free time.)
> 13.89% of users in the United States were attacked by mobile malware in 2019. The number is as high as 60% for Iran.
In fairness, if the actual numbers in some smartphone markets are genuinely as high as 60% of Android users/devices infected, then... yeah. In that case, I'm underestimating the impact and it's worth thinking more about whether or not the security impact is too high for us to naively allow sideloading -- at least without building much better UX or safety measures around it.
That's a number that's high enough where it does make sense to take a step back and think about the security costs and move very cautiously. I mean, heck, to go all the way back to the original argument, if 1 in 10 people were being killed by murderers in a year, I'd be somewhat inclined to take law enforcement arguments about banning encryption more seriously.
At the same time, that number is very surprising to me and I'm kind of suspicious of it. Even the US numbers, I would be pretty surprised to find out that 1 in 10 Android devices is infected, because I'm not sure I would guess that as many as 1 in 10 Android users actually sideload apps.
I almost wonder if different reports have different definitions of malware or something.
> That's 8%.
Good catch, I am bad at counting zeros. I think I must have done 20 million instead of 200. 8% is also a number where I start to think something is weird.
I assume that Google isn't lying, but there's a factor there I don't understand. Unless the average infected phone is getting infected 8-16 times in a row, I'm having trouble thinking about how those numbers reconcile.
Ideological differences aside, these are interesting numbers.
I've been trying to figure out these Google numbers and they just don't make sense to me. In August 2019 a cluster of apps in the Google Play Store with over 100 million total installs were discovered to contain a trojan (https://news.drweb.com/show/?i=13382&lng=en). I would expect the detection and removal of such a large cluster of malware to be reflected in Google's PHA dashboard (https://transparencyreport.google.com/android-security/overv...), but there's barely any change in August. Which leaves me wondering what exactly are they measuring?
Other points I wanted to address:
1. I don't think it's cherry picking to point out that fake Fortnite APKs are the inevitable consequence of Epic choosing to distribute Fortnite outside the Play Store. I expect this will be a problem with every popular app that decides to go fully off-store.
2. I also don't think it's likely that the people falling for these fake APKs are making a knowing decision to accept the risk of side-loading. I think it's more likely they just don't have the expertise to understand what is the correct place to download it, and they're getting lured in by the promise of free V-bucks or whatever. I mean, yes, ultimately they made that choice to check that box, but it seems a bit like handing a toddler a loaded weapon and then being surprised at what happens next.
3. I agree that we can't stop all privacy abuse, but I think the review process provides a useful deterrent that otherwise wouldn't exist if every developer was doing their own distribution and had no review guidelines to adhere to at all. If you compare the incidence of malicious apps distributed via the Play Store with that of the App Store, I also think there's a clear indication of the benefit of the review-first model over the publish-first model.
Mystery partially solved: Google's security report is based on the data from Google Play Protect, which apparently has the worst performance among malware detection tools in the industry (https://www.tomsguide.com/reviews/google-play-protect). A recent evaluation by an independent institute found that Google Play Protect only managed to detect a third of the 6,700 malware samples in the test, compared to ~99% from security companies like AVG, Trend Micro, and Kaspersky (https://www.av-test.org/en/news/here-s-how-well-17-android-s...).
Based on this, I don't think the numbers coming from Google can be considered reliable. It seems the reason their numbers are so low is because they simply aren't detecting a large chunk of the malware that is being distributed on Android.
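As a rough sense of scale (and only that; this naive correction assumes the ~1/3 sample-detection rate carries over directly to infected-device counts, which it may well not):

    # Naive correction, illustration only: assumes Play Protect's ~1/3
    # sample-detection rate translates into undercounted infected devices.
    reported_rate = 0.0068  # Google's 2018 rate for devices that sideload
    detection_rate = 1 / 3  # AV-Test: roughly a third of samples caught
    print(f"{reported_rate / detection_rate:.2%}")  # ~2.04%

If anything like that holds, the real rate for sideloading devices would be closer to 2% than 0.68%.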
I prefer the ability to force-boot a known-good FOSS OS from an SD card, rather than depending on the SoC vendor or OS vendor to have implemented all the crypto at all levels correctly to verify the bootloader, trusted firmware, kernel, kernel modules, userspace programs, and so on.
"Known good, externally sourced" is better than "maybe verified by in-device mechanisms, but we don't really know" when you have reasons to doubt whether someone has modified the software on your phone.
It's not either/or. Making it harder to modify the software via secure boot is good. But having the ability to actually run and verify the software from a real point of trust, not just based on some random crappy crypto someone somewhere threw together for "secure boot", is essential.
They justify the claim that phone vendors enforce control unrelated to security.
And the various linuxes and FOSS Android ROMs can serve as examples of reasonably secure systems without walled gardens. (Or more accurately, walled gardens, but where the user has the key to the garden gates, and can install alternative repositories/app stores).
With all that, it takes a downright reckless degree of trust in corporations that have already betrayed that trust many times, to believe their motives in locking down ever more computing platforms are to benefit the user, and that this continual retreat of user freedoms won't end badly.
> They justify the claim that phone vendors enforce control unrelated to security.
That's not really the claim that was being made in the original quote. I don't disagree that phone vendors also sometimes enforce control unrelated to security.
But, the original claims were that "your security and privacy aren't really protected" and "the main point of these security measures is to enforce control". Your examples don't really support these (much stronger) claims.
> And the various linuxes and FOSS Android ROMs can serve as examples of reasonably secure systems without walled gardens.
I'm not really sure how that follows? I suppose you could argue that the various linuxes have fewer malware problems but I suspect that's more a function of demand than anything else. I'm not sure how other Android ROMs are relevant? In any case neither of those examples address the issue of privacy. (As in mechanisms to help prevent "rogue" apps from exfiltrating your private data via abuse of legitimate APIs.)
I'm not exactly sure how you think these examples justify the author's claim, can you elaborate?
Because it seems like what you're doing is the equivalent of pointing to an example of someone killing another person and then saying "gotcha! see how the existence of police and murder laws doesn't completely prevent murder?"
The underlying question is does the walled garden approach improve the security and privacy of the overall ecosystem compared to the non-walled alternative, not whether it leads to 100% perfect security and privacy (which is an unobtainable ideal).
Is it just me, or does this feel like it was written in a FUD marketing style? There's a very strong focus on fear in the writing, and what seem like cherry-picked examples.
Then again I have learned to have a defensive and negative reaction to anything that even smells like marketing so it's hard for me to tell anymore.
The incentives between me (a regular user) and my smartphone OS are (mostly) aligned. Yes, Apple has anti competitive practices which they shouldn't be allowed to partake in. I also don't like how all iOS web browsers are forced to use Safari under the hood. But as a user, that doesn't affect me much - I am happy with most of Apple's default apps and I prefer using mobile web clients (even if they are using Safari) to apps usually. I don't think I am at any risk whatsoever of being deplatformed as a user. Maybe as a developer, but I'm not a mobile developer for the most part.
I understand the benefits of Libre software. But taking the principled stance in favor of Libre software will create a lot more work for me in areas I don't normally think about much - security, the internals of my mobile OS, having to find app workarounds and replacements, etc. The reason I use iOS to begin with is that I like that Apple takes care of the basic decisions for me. I don't want to fiddle with things.
The target market for a Librem phone is people who are hardcore about privacy and/or computing freedoms, who are probably motivated at least partially by fear.
Congratulations! Your post made me do something that almost never happens -- I went and read an article that I otherwise had no intention of reading :)
So here is a data point, it is only one, but it is mine: I am eagerly awaiting the arrival of my pre-ordered Librem 5. My motivation is almost entirely computing freedom, and a desire to support that concept in phones. I don't expect this phone to be polished enough to become my daily-driver. I don't expect that the cost/performance is going to look attractive. But I strongly believe in computing freedom, and I am willing to buy an overpriced phone in order to support and encourage the concept. Putting up money in a way that matters, and helping to demonstrate that a market exists, instead of merely talking about it.
Fear is not a motivator for me. A desire for privacy is somewhat. I would estimate my freedom/privacy/fear breakdown as 85/15/0%.
So to GP's post, is the article fear-mongering/FUD? IMHO, nah, just marketing blather designed to resonate with the kind of person that will give them money. Is whole-house alarm system marketing material fear-mongering? Also a matter of opinion and circumstance.
As I said, for me it is about computing freedom. I don't think I am a hard-core Stallmanite. But I have been around long enough that I did CS homework on punch cards, and my friends and I all had to solder together our first personal computers because everything available came in kit form back then. I used to think RMS was pretty far out there. But from my perspective looking back over the years, I have to admit "RMS was right" more often than I'd like.
It's a marketing piece, but I don't see anything terribly wrong with what they wrote. I also don't think you have to be that hardcore about it to want your phone to work like this.
Yes this is marketing, but that doesn't mean it's inherently bad. We should support competition in this industry, and any kind of attempts they put forth to market themselves. Especially for things that "hackers" care about.
There isn't enough competition. Don't try to ruin it for the little guys.
In Sweden, all users are locked to whatever OSes our mobile ID verification app BankID [1] supports. Yes, you can opt to use a physical 2FA device to log onto your bank, but you're missing out on 99% of the digital banking infrastructure.
So, even if I'd like to order one, I won't. The friction is too great.
For context, this is not just about banking services. This is our trusted e-identity card. It is used for tons of systems where we need to prove that we are who we say we are. Everything from declaring our taxes to signing documents electronically to, yes, various banking apps.
Also, I feel the same as you, notemaker. I would if I could, but without BankID, I could just as well get a dumbphone.
Fellow Swedes, change is coming in the form of the new national e-identity ID-card[1]. With this you would only need a smart card reader and a compliant browser in order to use eID and electronic signatures. As opposed to BankID this can be used to create derivative e-identities.
Has there been any progress on this? Seems like one of those things that sound great but might take a decade to get through the bureaucracy and implementation.
An operating system containing only libre software, I assume. At least in my book, software must respect the freedoms of its users in order to be ethical. And their OS is indeed libre and respects its users.
No, but they are running on hardware they've carefully selected that does not require any non-free software or binary blobs to operate, and they include hardware kill switches for all the sensors and radios so if you still don't completely trust this "third party" hardware, you can at least shut it off and only turn it on when you really need to use it.
At some point you do, unfortunately, have to compromise, because expecting a small, privacy-focused upstart to do their own silicon design and run their own fabs is just not reasonable.
I don't need much from a phone, and just about any Android-capable phone would work for me, as far as OS and hardware capabilities go. The thing that's going to keep me tied to a Google-controlled phone for the indefinite future is Google Fi. I'm not aware of any other cell service plan that does what Fi does: $20/mo base, $10/gig after that (with everything from 6GB-15GB at no additional cost, after which you either get unlimited throttled data, or can start paying $10/gig again). And the most important part of that (for me) is that it's the same price no matter where you are in the world. I haven't been traveling this year, but normally I would, and there's no other way I know of to use my phone internationally for so little (at least not without constantly swapping SIMs and changing the phone number).
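(For anyone comparing plans: here's my reading of that pricing as a quick sketch. The fi_monthly_cost helper is hypothetical, just a way to encode the description above, not an official rate table.)

    # My reading of the Fi plan described above; breakpoints as I understand
    # them: $20/mo base, $10/GB up to 6GB, 6GB-15GB at no extra cost, then
    # either free throttled data or opt back into $10/GB at full speed.
    def fi_monthly_cost(gb_used, full_speed_past_15=False):
        cost = 20.0 + 10.0 * min(gb_used, 6.0)
        if full_speed_past_15 and gb_used > 15.0:
            cost += 10.0 * (gb_used - 15.0)
        return cost

    print(fi_monthly_cost(3))   # 50.0
    print(fi_monthly_cost(10))  # 80.0 (no extra cost between 6GB and 15GB)
    print(fi_monthly_cost(18, full_speed_past_15=True))  # 110.0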
While Google Fi is understandable for your situation, for the majority of Americans who don't need international roaming, Google Fi is not the most cost-effective solution.
Mint Mobile offers unlimited plans with 35GB of unthrottled 5G and 4G data for $30/month, 12GB for $25/month, 8GB for $20/month, and 3GB for $15/month. All of these options are significantly cheaper than the same amount of data on Google Fi.
The relevant part is that Mint works just as well on phones not optimized for Google Fi as it does on phones optimized for Google Fi, eliminating vendor lock-in. Mint works on the Librem 5 and the Pinephone (and also on iPhones). Future FOSS phone hardware that uses GSM-based cell networks will be Mint-compatible.
I have been using this service for about a year and a half now and I have to say the cost is great, but you get what you pay for. It's very unreliable (drops incoming calls, and incoming texts are delayed very frequently) and there are lots of places where my friends will have coverage and I will simply have no network at all. So it's serviceable, and I would recommend it for e.g. a hobby project that needs a cell data connection like a Pinephone/Librem 5, but not for critical cell lines that you need to guarantee won't miss calls.
Mint Mobile is an MVNO that runs on the T-Mobile network. Google Fi is also an MVNO, but it runs on T-Mobile, Sprint, and the regional US Cellular. Since T-Mobile merged with Sprint, the coverage of Mint Mobile and Google Fi will eventually be similar once T-Mobile converts the Sprint towers, which is in progress.
If you live in an area where T-Mobile is weaker (rural areas or areas outside of metropolitan areas), AT&T and Verizon will provide more consistent coverage than both Google Fi and Mint. AT&T is GSM-based and will work on the Librem 5 and the PinePhone. Verizon is CDMA-based and might not fully work on these phones.
AT&T Prepaid is currently offering an unlimited plan with 22GB of unthrottled 5G and 4G data for $50/month plus tax with monthly billing. There is also an offer for 8GB of 4G data for $25/month plus tax with annual billing.
GSM-based networks still seem to be the more reliable option for Linux phones, but it's good to see that Verizon might also be compatible with some effort.
> ...constantly swapping SIMs and changing the phone number
There are a lot of cloud telephony companies that'd vend you a number and gladly connect your calls over the Internet so that you avoid roaming fees [0].
And data plans are easy and super cheap to come by given the advent of eSIMs [1].
I pay $13 a month for 2GB and unlimited talk and text. Most phones have 2 SIMs and can use eSIM. Truphone is much better internationally. https://www.truphone.com/us/consumer/sim/
It has free incoming calls and you only pay for outgoing.
Given that it's a Google service, though, it might be shuttered at any moment. I'm happy with my T-Mobile plan even if it's expensive and harder to use, for the peace of mind that I won't have to suddenly switch plans and carriers.
I'm also on Fi and love it for the reasons you mentioned BUT keep in mind that swapping your SIM for a local plan is a lot cheaper, on average. In Eastern Europe for USD$10/month you get unlimited data and you can have people call you.
The one thing I don't like is I can't have people call my number because it's a US number and it costs them extra to call it in Europe.
An alternative to making your phone your castle is to not build your castles in the air. Don't use your phone as your primary computing device. You'll never be allowed to own your phone, because it's a radio transmitter, and it is the device that holds the license, not you. You cannot own or control it.
I just want a phone with full Windows 10 that is as customizable as desktop computers are. Phones are very powerful nowadays. There's no reason for them to have the same limitations as 10 or even 5 years ago.
Windows 10? I don't even want that on my computer, let alone my phone. Of course a mainstream Linux OS (with real desktop+mobile convergence) would be quite nice, and both Purism and the pmOS community are working towards making that possible.
> I just want a phone with full Windows 10 that is as customizable as desktop computers are.
The advertised phone is a phone with full GNU/Linux that is as customizable as desktop computers are. So it's almost what you want. A video: https://www.youtube.com/watch?v=bH3RbrwhNd8 (not Librem 5 yet, but another GNU/Linux phone).
It's my device. I should have the choice to override the standard use case to unlock the phone's full functionality at the expense of battery life. I almost always have a charging cable with me, and a 20,000 mAh powerbank. Furthermore, there is a lot of room to increase charging speeds, but most phone manufacturers have dragged their feet because extremely fast charging speeds aren't necessary for the standard use case. However, that "standard use case" is outdated and needs to be reevaluated.
>> Furthermore, there is a lot of room to increase charging speeds, but most phone manufacturers have dragged their feet because extremely fast charging speeds aren't necessary for the standard use case.
And you know, the very small matter of rapid charging destroying phone batteries quickly. There are phones out there which will charge at 30 or even 40(!!!) watts of power, but it tends to kill the battery rather quickly. The vast majority of people charge their phones at night, and then the phone gets a good 6-10 hours of uninterrupted charging. If anything, phones charge too quickly by default, at the expense of battery life.
>> It's my device. I should have the choice to override the standard use case to unlock the phone's full functionality at the expense of battery life.
You're free to do with your device as you please, but I don't see why anyone else should be obliged to cater to your wants and needs.
They also get VERY HOT. I can't hold my phone after using it as a webcam while it's on a wireless charging Pixel Stand (which charges somewhat slower than USB Quick Charge, but faster than regular wireless charging).
“The poorest man may in his cottage bid defiance to all the forces of the crown. It may be frail – its roof may shake – the wind may blow through it – the storm may enter – the rain may enter – but the King of England cannot enter.”
Except in a "State of emergency" in Victoria, Australia, in this present day.
> illustrates how Apple markets their castle’s defenses as protecting the castle residents when in reality it’s about controlling all that goes on inside the castle.
... which ensures best-in-market protection of the castle residents, for which they happily pay a premium.
> The biggest threat to most people ends up not being from uninvited hackers, it’s from the apps Apple and Google do invite in that capture and sell your data.
That's completely wrong. Apple does not make money on users' data. They sell protected, private devices and services. Google is the direct opposite.
Without commenting on the overall thesis, it's inaccurate to say that if Apple removes Fortnite from the App Store it is also removed from users' phones.
Anyone who has already purchased Fortnite via the App Store can still use it on their phone, in whatever version they had downloaded prior to the removal.
My phone is not a home. It’s a filament in a fabric of affordances. I rely on others to support those affordances. I pay for some of them with money. I don’t want a castle. I want a society.
Your phone contains records of all the actions you take with those affordances. What you are looking at, who you are talking to, messages you've sent inside and outside apps, plus a time series of sensor data that positions you and can give audio/video of you at all times... it is foolish not to treat access to this device with great care.
Yes, the phone is your castle, and the castle is breached.
By design. Whether that's by your phone automatically backing up your data to the cloud, or whether that's because your OS decides to roll out a 'track and trace' update to be used by governmental agencies, or any number of other possibilities.
Privacy cannot be regained by petitioning Google, Apple or the government for the features you think you should have - ie total control, repairability, headphone sockets, etc. You are the product.
When it turns out our own phones are snitches, the answer is to get out. Why stay in a breached castle? And these snitches can't get stitches. The only answer I see it to ditch the phone.
You still have the ability to not use these services and have reasonable privacy. That makes GPS and Bluetooth superfluous, and you might need to carefully select the apps you trust, but theoretically it can work. I know that most people don't care, and their indiscretion means others suffer for it, but at least there are solutions, aside from the tracking ambitions of your ISP.
But you don't know what privacy you have or don't have.
And they can install an update to covertly 'track and trace' you. How do you even know that this is what is being done?
Frankly, I think this was already possible to do. If anything I'm slightly mystified as to why this was announced as if they didn't already have the capability.
Very much depends on what you need and how reliable they must be for you. The software is not as reliable as on an iPhone, but it is improving every month.
No! It's so damn convenient. I wish I'd never got one.
I'm not walking the walk. I've seriously considered the idea for a long time, and it seems bound to happen. The issue for me is the convenience of having a podcast player + camera.
But, sigh, I'm not there yet.
I'll tell you what, I'll try to extract myself again over the next week.
Yeah. I think sometimes about just getting a dumbphone. But the convenience...
I'm doing the next best thing - regularly checking up on permissions (& notifications) used by apps I have installed (on Android) and turning off anything that I don't need.
Shows the strength of the pinephone. Release something cheap, quick and hackable rather than try to deliver everything. It’s been a really impressive project so far!
They're doing CE batches which run about a month each but the underlying hardware is the same for all of them. Just follow their blog (https://www.pine64.org/blog/) and wait for the announcement for the next batch preorder.
Not quite the same; they're now selling two slightly different versions. One has 2GB of RAM, while the other has 3GB of RAM and is usually marketed under a "Convergence" label. There are also slight hardware revisions between batches: for example, the UBports CE and the pmOS CE differ in a revision that changes your ability to connect to the phone over Ethernet. I believe that the upcoming Manjaro CE will have the same distinction between models.
The Pinephone is one of the most disappointing things I have ever bought. I'm an old Nokia N900 fan who was so enthusiastic about getting a new open phone, but the Pinephone’s CPU is extremely underpowered in terms of running the only interface that is both libre and has any real future (i.e. Phosh on Mobian – Ubuntu Touch is based on moribund 2014-era code, and Sailfish’s UI isn’t libre). Scrolling is ragged, opening new windows is painfully slow, and it is easy to make the device start swapping. Also, the Pinephone screen and case feel very cheap.
For me, the Pinephone is at best a tech preview for the sort of experiences you might get to have on the Librem when it becomes available.
Yeah I ordered one 15 months ago and had to start using a flip phone. I can't wait any longer though, I am going to have to get a different smartphone :(
This is the kind of content that really gives me hope. I think Purism is doing great work. But somehow I can't stop thinking that they are too small, and that these companies striving for user privacy and user choice should unite to make the fight against the duopoly (Apple & Google) easier. I hope one day we can see a curated app store, run by an alliance of companies like Purism (e.g. Huawei -> HarmonyOS, Samsung -> TizenOS), but one which doesn't impose what to install (people can opt out without jailbreaking, as described in the article).
I want to like their phone, but this will be a lot like using Linux on the desktop in the 90s. Software won’t be available and it will get in the way of anything productive I need to do.
> Software won’t be available and it will get in the way of anything productive I need to do.
This is Hacker News, where many of us are weirdos who prefer to do almost everything in Emacs. In that case, the software for these phones is definitely already available. What is still missing is the hardware keyboard like the Nokia N900 had.
Conversely, I'm sitting at a computer for the majority of my waking hours. My productivity happens there.
For a phone, having calls/texts, web access (and most "apps" of any importance have a web version), calculator, clock/alarm, camera, and weather info is fine. I don't need tons of applications installed on it anyway; I have my real computer for that. I don't lose any convenience to go to a more security-minded phone.
And besides, it's nice to be actually away from the computer when you're away from it, instead of having the phone being a constantly nagging ball and chain of your entire digital life. It's got to be socially & mentally healthier, too.
Sigh. Double checking my email: still the same. Order placed: January 16, 2019. Shipping ETA: mid-to-late November 2020. Off topic, I know, but ghrghghr....
I've been running two accounts on my Android phone for a while.
And I'm on the second account.
It really doesn't work perfectly; I can't use the hotspot, for instance, without swapping to the initial account.
But I don't see the phone as my castle, more as a second instance.
Same diff, perhaps. I do know that when I handed it to someone to view something, I thought: I really need a button to mute all notifications while it's in their hands.
OT: but there is something wrong with the "t" in that font. Does everybody see the same thing as me? It wasn't till I read trustworthy that I thought wtf.
Ah geeze, I thought this was going to be something interesting and it turned out to be a stinking ad. I'm not going to have much faith in your arguments if your final point is going to be "buy our stuff instead of theirs."
A part of me feels the next Apple ad is going to be “Your phone is your castle”. I do like their pro privacy stance against the big dogs. I just really hope the phone was more akin to the bicycle of the mind that Steve Jobs promised us.
> I do like their pro privacy stance against the big dogs
I do like their pro-privacy stance as well, but I take issue with that sentence in that it makes it sound like Apple is not a "big dog" and taking a stance for you, the little guy. Yes, their policies with regards to your data are easier to stomach than, say, Google's, but they are not on your side against the world, they are on their own side, and have plenty of questionable policies and decisions to show for that.
Foremost of all being that they refuse you the ability to install a different OS on the hardware you own, or even install any application you would like on it. This is not merely an abstract philosophical matter either (though I'd argue it would matter even then): in China, you used to be able to install VPN apps to evade state surveillance, until China made Apple boot them from the App Store. I cannot see any argument that makes this into a win for customers.
If you have a dummy account dedicated to your phone, use e.g. K-9 to check email on real accounts, manually export/import your address book, never share it, never sync it, you are pretty close to that.
First I've heard of this. What is the software situation like on a device like this? I assume no compatibility with Android, so apps have to be developed specifically targeting their OS?
I know it is nice and convenient having a smartphone, but you all are creating the demand for these horrible phones. And since you keep buying them, why should anything change? You complain and complain, and Google and Apple laugh with your trillions of dollars.
If you do not like what Apple and Google are doing, boycott them.
I know it is hard, you are all addicted to the quick information and the dopamine rush. I know because that was me. And many of you here are probably developers sucking off the teat that you complain about.
Myself? I use a flip phone while I wait for a phone that somewhat meets my criteria for something close to my linux based laptop.
> you are all addicted to the quick information and the dopamine rush
No, I'm addicted to being able to coordinate with my friends, and the additional friction of not being on WhatsApp means I won't be invited to stuff as often.
This sanctimonious diatribe isn't going to make a flip phone any more useful in a country where nobody calls or uses SMS.
The trouble is you think this cannot happen without a smartphone.
If you want things to change you have to take a stand and SACRIFICE. If your friends will not reach out to you because you are not on an App, are they really your friends?
> If your friends will not reach out to you because you are not on an App, are they really your friends?
No, certainly not. That's why I make all my friends send me their invitations through the mail, to test their loyalty. I'm sure I'll get the first one any day now.
I would really love to have a third possibility in the mobile space, especially one as privacy- and user-focused as Purism, but I'm not convinced I'll have the same breadth of experience that I have on Android. Reading through their FAQ[0] and their app list[1], it just seems like they haven't worked hard enough to achieve "app parity" with Android or iOS.
I get that in some ways that's a non-goal, but when I consider my usage patterns, I just don't see myself being able to replace my Android phone with something like this. There's no replacement for my banking apps, things like Uber/Lyft or Postmates, streaming video/music services, etc. The web experiences for most of these are either nonexistent or just generally terrible.
I think in some ways we've gone too far with the current dominant platforms for a third party to emerge for common usage, or even usage by fairly technical people who rely on mainstream apps. A third player would need to have deep pockets and a ton of industry and financial connections to get a viable alternative to Google/Apple Pay working, for example (and the FAQ says the first phone model won't even have NFC). Do you think Chase, Capital One, Wells Fargo, Vanguard, Robinhood, Schwab, ETrade, etc. are all going to write new apps for a new platform, or make their mobile web experiences good enough to act as a substitute? I really doubt it.
There's a FAQ entry about making it possible to run Android apps on PureOS, but it didn't feel like that's much of a priority for them. And it's unclear whether many of these sorts of apps would even run: for example, I expect apps like Google Pay and some banking apps have some hard-to-emulate checks in place that ensure they're running on a conformant, non-rooted Android device. Even if there are workarounds for these sorts of things, it doesn't seem like it'd be safe to rely on those workarounds to not stop working at an inconvenient time.
If they truly believe they can build a sustainable business with a super-low-volume product that only caters to an incredibly niche audience, then that's great, and I genuinely wish them the best of luck. But I fear that any third-party, open platform that ignores mainstream use-cases will just become more and more marginalized over time to the point that they won't even be a viable option for their intended audience. Or, at least, their users will have to have a second device that they use to interact with everything their Librem phone can't talk to.
I can't believe MS dropped the ball so badly in the mobile space. It would be nice to have a third ecosystem to rival Apple and Google. With a huge install base for both Windows and Xbox, HOW do you eff that up? sigh
I feel this is a great initiative. In the mobile OS space, Android and iOS go unchallenged, similar to how it was when Windows/Mac were ruling earlier, and then Linux came along and became popular.
I really hope many people get to know of this and can use/contribute to it. One big challenge might be how to have something like an app store without a lot of malware / poor-quality apps.
I'm a bit skeptical about their presumption to be both an app store and a phone provider, but ok, I guess.
> $749 USD pre-order
Unfortunately, it seems the "security-focused" phone is only for very rich people. I mean, ok, if you can afford a new iPhone you can also afford this, but most people in the world can't spend half this much on a phone.
This is somewhat fair, but I think it's also worth pointing out that it's not a business model without precedent. Not too long ago, you could have argued the same thing about the Tesla Model S: "electric cars are only for rich people".
The capital from selling at a mark-up to wealthy early adopters helps to fund cheap(er) versions (i.e. Model 3) that are accessible to more people later on.
Not a perfect analogy, and Purism's execution isn't exactly comparable to Tesla's, but I think the idea is the same.
I think it's worth mentioning that the PinePhone exists as well and can be had for $150 whenever they launch a new batch (every 2 months or so). It is also a Linux phone, maybe not as privacy-focused as the Librem 5, but it serves a similar niche in my opinion.
In terms of UI, UBports with Lomiri seems to be the best; however, it's extremely restrictive in what you can do with it (read-only root, fstab resets itself after every reboot). It's also based on Ubuntu 16.04.
Personally I have Mobian on it, which is basically Debian with tweaks to work on the Pinephone. You have access to everything in the same way you would have on the desktop. It's using Phosh for the interface.
PostmarketOS is cool, but it's based on Alpine, and I have no idea how Alpine works.
Fedora and Arch also have (unofficial) Pinephone spins similar to Mobian, so get whatever package manager suits you best.
There are also the Manjaro editions, but I don't really like Manjaro.
That's the stuff I tested, though there's more on the Pinephone wiki.
A phone holds intellectual property and digital information, perhaps belonging to other people. It's complex.
The comparison is overly simplified, I think.
For example, as soon as you subscribe to corporate email, your device is considered "managed" and can be remote wiped. I was surprised not to see this in the article.
> A phone holds intellectual property and digital information, perhaps belonging to other people.
And a house can't? Not to belabor the obvious, but houses often contain phones, along with other storage media holding a wide range of digital information. The phone, like the house, is still your physical property, regardless of any information it may contain.
I really think there is a big difference between digital devices and houses, and saying that the old "house is your castle" laws should apply to a phone is silly, especially when talking about app stores.
Saying that "the phone is still your property" is true physically, but that's not what the OP is saying. He's saying that whoever sold you the phone shouldn't be able to control or limit what happens to it. Why not though?
I wonder what he thinks about homeowner associations in the US. Or gun laws. Or zoning restrictions. Or all of the other laws that society has added since then.
My Samsung TV won't let me install apps from anywhere but the Samsung app store. Is that really breaking the Magna Carta? This is silly.
> He's saying that whoever sold you the phone shouldn't be able to control or limit what happens to it.
I can't speak for the OP, but I would generally agree with this statement. Selling something implies turning over control to the new owner. If the device still answers to you after the transaction, you haven't really finished your part of the sale. This is somewhat like the concept of inalienable rights, where e.g. you can't voluntarily transfer ownership of your body to someone else while you're still alive, because regardless of what you claimed to agree to, you still have effective control over it. Except in this case their control is alienable; they simply choose not to relinquish it.
One can make a reasonable argument that claiming to sell a device while actually retaining control over it is a form of fraud: that this isn't a sale but only a lease. Imagine selling someone a house with unbreakable locked doors, and keeping the keys to those doors for yourself, so that the new "owner" must beg permission from you to enter. That makes a mockery of what it means to own something.
That's what work profiles are for. I'm fine with my company nuking the work profile on my phone. It fulfills infosec requirements while still respecting my ownership over my own device.
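To make that concrete, here's a minimal Kotlin sketch of the Android side of this, assuming an MDM agent that has been set up as the profile owner of the work profile. The function name is mine, but `DevicePolicyManager` is the real API, and a profile owner calling `wipeData()` removes only the work profile, not the whole device.

```kotlin
// Minimal sketch, assuming this code runs inside a managed work
// profile where our app is the profile owner. Function name is
// illustrative; DevicePolicyManager is the actual Android API.
import android.app.admin.DevicePolicyManager
import android.content.Context

fun wipeWorkProfileOnly(context: Context) {
    val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE)
            as DevicePolicyManager

    // A profile owner's authority ends at the profile boundary, so
    // wipeData() here removes the work profile and its data only.
    // Personal apps, photos, messages, etc. are untouched.
    if (dpm.isProfileOwnerApp(context.packageName)) {
        dpm.wipeData(0)
    }
}
```

That boundary is the whole point: infosec gets its remote wipe, and the personal half of the phone never answers to the employer.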
I collected a list of arguments in favor of maintaining the status quo on iOS and I’ve yet to see anyone offer a good solution to these:
---- The problems with letting all apps advertise external payment systems:
• Someone may publish a free app to avoid paying anything to Apple, and then charge users [an asston of] money to "unlock" it via an alternate payment system.
• Users may not be able to see a list of all in-app purchases (and their guaranteed prices) as they can on the App Store, without downloading the app.
• Sharing your payment details and other information with multiple entities, and having to continually trust each of them (e.g. to not abuse or leak).
• Confused users may clog up Apple's customer support with complaints related to third-party payment systems.
• Angry users may demand that Apple offer refunds for shit that was paid for via third-party payment systems.
---- The problems with allowing third-party app stores on iOS:
• How will iOS sandboxing be enforced for apps delivered via third-party stores? Will those apps still have to be submitted to and signed by Apple? (One possible signing arrangement is sketched just after this list.)
• Third-party stores would need the privilege to write binaries on your iPhone. How will that privilege be regulated to prevent abuse? e.g. what happens if a store starts writing malware?
• Users may sometimes have to wait longer for an app to update on one store than on others (as already happens on Steam vs GoG).
• Developers would no longer be assured that they will have access to literally all the users that iOS has, by publishing on just one store.
This is the biggest kicker for me as an indie dev. You would have to submit to each store, wait for approval on each of them, update for each of them... to come close to the userbase that you can currently access by just publishing once on the App Store.
• Developers will no longer all play by the same rules. One store may allow some content while another may prohibit it.
• Controversial content like porn may still ultimately be bound by Apple's ruling on such matters, rendering moot the freedom of third-party stores in what kind of apps they may offer.
• iOS Parental Control and Screen Time restrictions may be ineffective on other stores (and browsers too if third-party rendering engines were allowed).
• If an app or game is exclusive to a store that a user isn't already using, they would have to create a new account and maintain an additional app just to access that one exclusive.
• Not all stores may be compatible with the iOS backup and restore system, or the APIs for app-thinning and on-demand resources.
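On the sandboxing/signing question above: one plausible arrangement is that the platform vendor keeps signing authority even if distribution opens up, i.e. every installer, whichever store it belongs to, refuses binaries that don't verify against a platform key. Here's a hedged sketch of that idea in Kotlin using the standard `java.security` API; this is not Apple's actual mechanism, and the key provisioning and function names are assumptions.

```kotlin
// Illustrative only: a store-agnostic install gate that insists every
// app binary carries a valid signature from the platform vendor.
// How `platformKey` gets provisioned is out of scope here.
import java.security.PublicKey
import java.security.Signature

fun isPlatformSigned(
    platformKey: PublicKey,   // vendor key baked into the OS (assumed)
    appBinary: ByteArray,     // the package delivered by any store
    appSignature: ByteArray   // detached signature shipped alongside it
): Boolean {
    val verifier = Signature.getInstance("SHA256withRSA")
    verifier.initVerify(platformKey)
    verifier.update(appBinary)
    return verifier.verify(appSignature)
}
```

Under a scheme like this, sandboxing and review could stay centralized with the vendor while stores compete only on distribution, which answers the technical half of the question but none of the political half.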
I've always wanted someone to make a copy of Apple's products, but without their insane level of control over your device. Looks like someone is finally going to do it :)