
> it was well before my time and the graphics just didn't work for me. Older now, and I feel like I can appreciate this game now and overlook the old graphics

As mentioned in the featured article, Zelda: Link's Awakening is being released on the Nintendo Switch in a few months. You may be interested to see the graphics and art style: https://www.youtube.com/watch?v=_U-_XfDGgDw


You may not be aware of the great talk titled "Modchips of the State" [1] (presented at 35C3 in Dec 2018).

The speaker managed to reproduce the exact single-chip hardware implant attack suggested in the Oct 2018 Bloomberg Businessweek story [2] [3], which claimed Amazon and Apple found malicious hardware implants in Supermicro motherboards while conducting detailed inspections.

While Bloomberg has never retracted the story, there's an argument that the sources had vested interests in lying to Bloomberg, suggesting that attacks developed in lab conditions actually occurred in the real world in order to raise awareness of supply-chain risks (something the current US administration has been attempting to do for some time). There's also a suggestion that the journalist was acting in good faith but mixed up a few different attacks, with the sources reluctant to clarify things. Another suggestion is that the attack did happen, and Amazon and Apple were forced to issue denials.

It's a fascinating story. Maybe I'm naive, but if Bloomberg was in fact wrong, they would issue a correction or a retraction. The fact that they haven't suggests to me that there's truth to the story.

[1] https://www.youtube.com/watch?v=C7H3V7tkxeA

[2] https://www.bloomberg.com/news/features/2018-10-04/the-big-h...

[3] https://www.youtube.com/watch?v=UJGbcjfJ7rU


The fact Supermicro didn’t sue kind of says it all to me.


Indeed.


The article reports on the lawsuit but is light on details about the headset. Tested recently did a 10-minute review of this headset, the 'Nreal Light' [1]: https://www.youtube.com/watch?v=M9A9u-lwjTs

Regardless of the merits of the lawsuit, I can see why Magic Leap would be scared. The AR glasses look great. They're certainly stylish enough for the mass market -- they look like normal glasses at first glance, they're quite reasonably priced (at $500), and they run from a regular Android phone via USB-C (for power, as well as sensor and video data).

For reference, here's the Tested review of the current Magic Leap product, the Magic Leap One Creator Edition: https://www.youtube.com/watch?v=Vrq2akzdFq8. It retails for $2295, though it comes with controllers, eye-tracking for focal accommodation, and two waveguide displays that provide two separate physical layers to focus on.


On the merits of the lawsuit: after reading the whole district court filing [1], the core allegation is that the 'Nreal Light' is based on Magic Leap's unproductized internal designs, produced "after extensive investment of time (multiple years), money (hundreds of millions of dollars spent on research and development) and human resources (hundreds of engineers)", and that the defendant learned about the product designs and research during his employment, including adapting (as discussed elsewhere in this thread) Magic Leap's proprietary font (https://i.imgur.com/7pYZ8bp.png).

If the Nreal Light is indeed based on Magic Leap's unreleased designs, this makes me very sympathetic to the price difference -- the Magic Leap One is a development kit backed by 5+ years of huge R&D investment, and was never intended for end-users (hence the price and awful aesthetics).

If the Nreal Light is an indication of what a consumer Magic Leap was going to be -- a cheaper, slimmer, less ambitious product (avoiding multiple waveguide displays) that appeals to a consumer market -- then it's unfair and sad that Magic Leap didn't have the opportunity to go to market with their own product.

On the other hand, Magic Leap has had ample opportunity to release an affordable high-quality AR headset, so I do feel some sympathy for an employee leveraging legitimately acquired knowledge and experience to build a commercial product that beats its competitors. The situation may not be that different from the Traitorous Eight, or Steve Jobs' infamous "adapting" of design ideas from the Alto for the Apple Lisa and Apple Macintosh after his visit to Xerox PARC.

[1] https://www.courtlistener.com/recap/gov.uscourts.cand.343717...


I've played quite a bit with the Oculus DK1, GearVR, Oculus CV1 and the Oculus Go, and after 2 weeks of usage I can say the Oculus Quest is much better than any of them. The value of the low barrier to putting the headset on and immediately walking around in VR is hard to overstate.

I think it will be a very big hit. The virality levels are very high with people showing their family and friends, who then want to go out and get their own.

The downsides are that the games are a bit too expensive, and that the headset itself costs $400, which is alright, but it does need to be cheaper if they want to achieve Nintendo Wii-type sales figures (which the Quest fully has the potential to do).


Tencent owns 40% of Epic Games, which develops the incredibly popular Fortnite Battle Royale computer game. Epic Games also happens to develop the influential Unreal Engine, which is used widely throughout the industry to produce interactive 3D applications.

Fortnite BR, like other competitive online games, runs anti-cheat software in order to detect cheaters. Fortnite BR happens to use Easy Anti-Cheat and BattlEye [1] [2].

Anti-cheat software runs with very high privilege. More importantly, with much anti-cheat software, a new binary payload is downloaded directly from the internet every session.

Anti-cheat software seems like a great platform to launch targeted malware in order to achieve a beachhead on a computer network: highly targeted, and effectively undetectable.

I would expect most software developers don't sandbox their gaming machines from their work-from-home environments.

[1] https://www.reddit.com/r/FortNiteBR/comments/82xyhb/launch_e...

[2] https://www.thesun.co.uk/tech/7446514/fortnite-cheats-beware...


> Anti-cheat software seems like a great platform to launch targeted malware in order to achieve a beachhead on a computer network: highly targeted, and effectively undetectable.

> I would expect most software developers don't sandbox their gaming machines from their work-from-home environments.

I have been worried about this for some time. In my country we have a lot of issues with metadata retention so I set something up like this[0].

I have separate VLANs:

• VLAN 1: Management (no tag, null route)

• VLAN 2: Untrusted (routes direct to ISP via ppp0)

• VLAN 3: Trusted (routes direct to ISP via ppp0)

• VLAN 4: Trusted (routes via tun0 - VPN connection for private browsing etc)

• VLAN 5: Null route for devices that do not require internet access of any kind, desk phones printers etc.

(This doesn't have to be a Raspberry Pi; you can use anything that Alpine Linux runs on, which is x86_64, x86, ppc64le, s390x, armhf, aarch64 (ARMv8, like the Raspberry Pi 3), and armv7 (Raspberry Pi 2 and friends) [1].)

[0] https://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a...

[1] https://alpinelinux.org/downloads/

The idea is that anything on VLAN2 is completely segregated at the switch and router level from the rest of my network.
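
As a rough sketch, the router side of this looks something like the following (subnets, interface names and rules here are illustrative, not my exact config; the eth0.N subinterfaces need the 8021q module, i.e. Alpine's vlan package):

  # /etc/network/interfaces: 802.1Q subinterfaces on the trunk port
  # VLAN 2 (untrusted)
  auto eth0.2
  iface eth0.2 inet static
          address 192.168.2.1
          netmask 255.255.255.0
  # VLAN 3 (trusted)
  auto eth0.3
  iface eth0.3 inet static
          address 192.168.3.1
          netmask 255.255.255.0
  # firewall: VLAN 2 may reach the internet, but nothing internal
  iptables -A FORWARD -i eth0.2 -o ppp0 -j ACCEPT
  iptables -A FORWARD -i eth0.2 -j DROP

VLAN 4 is the same idea with its default route pointing at tun0 instead of ppp0, and VLAN 5 simply gets no forwarding rules at all.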


Upgrading to managed switches, I had thought about making a bunch of VLANs in a similar manner. But I ended up settling on something much simpler.

There are essentially just two segments / types of switch ports (I may have stuck with the many-vlans thing if switch ports had RGB LEDs showing what zone they were in...). First, the "trusted" network, which does switch management, servers, reasonably-behaved hosts, etc.

Then, a second "access" segment. Ports in this segment are set up to not be able to talk to one another through the switching fabric at all - the only thing they can talk to is the router. Ports on the same switch are prohibited from talking by the switch's config, and different switches are given different associated VLANs. This is good for visitors, Android, Internet of Trash, etc.

For routing, the horizon seen by each device is controlled directly by its own MAC address on the router itself. Two hosts on the same segment can see drastically different routing tables and Internet connections. This isn't perfect, as MAC addresses can be easily spoofed unless I start pushing the switchport-MAC mapping out to the switches. But it works for now.
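
Concretely, that per-MAC horizon can be done with fwmark-based policy routing. A rough sketch - the MAC address, mark number and table name are made up for the example, and "vpn" needs an entry in /etc/iproute2/rt_tables:

  # mark packets from one specific device...
  iptables -t mangle -A PREROUTING -m mac --mac-source aa:bb:cc:dd:ee:ff -j MARK --set-mark 4
  # ...and give marked traffic its own default route via the VPN
  ip rule add fwmark 4 table vpn
  ip route add default dev tun0 table vpn

Unmarked devices keep following the main table out ppp0.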

But I believe "sandboxing" in the original comment was talking about the machine itself, not network access. So PC gaming means being disciplined about getting another machine, or at least a second GPU for PCIE passthrough in a VM. In general I think we're in a time of decommodification. The easiest way to sandbox between security boundaries is separate machines, of which there is an inexpensive surplus of. No need to have banking and games on the same tablet, when a second hand nexus7 (flo) is $40 on fleabay.
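
For the second-GPU option, the usual trick is binding the gaming GPU to vfio-pci at boot so the host driver never claims it. A minimal sketch - the PCI IDs are placeholders for whatever lspci -nn reports for your card and its HDMI audio function:

  # /etc/modprobe.d/vfio.conf
  options vfio-pci ids=10de:1b81,10de:10f0
  # ensure vfio-pci grabs the card before the host GPU driver loads
  softdep nvidia pre: vfio-pci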


Carefully constructing a household network topology and being disciplined with separate physical machines appears to be a strong mitigation.

But will your colleagues who play competitive online games be willing to buy a separate machine used only for remote employment, and be willing (and able) to construct such a network topology correctly?

Most household routers don't even support VLANs.


> Then, a second "access" segment. Ports in this segment are setup to not be able to talk to one another through the switching fabric at all - the only thing they can talk to is the router. Ports on the same switch are prohibited from talking by the switch's config, and different switches are given different associated VLANs. This is good for visitors, Android, Internet of Trash, etc.

Yes, essentially that's what VLANs 3 and 4 are (trusted). They are able to talk to each other, but VLAN 2 (untrusted) cannot. VLAN 2 cannot access my server on the LAN or any other network resources, except in certain situations where I open a single HTTP port to a specific directory that is read-only. This is where guests would be. I use this to copy 'certain' files to my untrusted hosts. The exploitation surface area is extremely low. Switch configuration can only occur from VLAN 1 (management). I can also control which VLAN people access over WiFi via my UniFi Controller: one SSID is a trusted network, the other is untrusted. I only use EAP so I can control exactly which users have access to which VLANs via FreeRADIUS. All of this is documented [0][1]
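
The FreeRADIUS side of the per-user VLAN assignment is just the standard RFC 3580 tunnel attributes. Roughly like this in the users file - usernames, passwords and VLAN IDs below are placeholders, not my real config:

  alice   Cleartext-Password := "changeme"
          Tunnel-Type = VLAN,
          Tunnel-Medium-Type = IEEE-802,
          Tunnel-Private-Group-Id = "3"
  guest   Cleartext-Password := "changeme"
          Tunnel-Type = VLAN,
          Tunnel-Medium-Type = IEEE-802,
          Tunnel-Private-Group-Id = "2"

The AP/switch reads those reply attributes and drops the authenticated user onto the right VLAN.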

> For routing, the horizon seen by each device is controlled directly by its own MAC address on the router itself.

Remember MAC addresses can be spoofed, which means you can get things like VLAN hopping if you're not careful. My Windows machine, where my gaming happens, is "untrusted" and is on port 2 on the switch; my trusted machines are on ports 3 and 4. My other family members also have certain devices they consider 'trusted', and those are in VLAN 3/4, while they have devices that are 'untrusted' in VLAN 2. It took some time to educate everyone, but I drew pictures and explained it nicely. Unfortunately this is the world we currently live in.

I was concerned that an APT (advanced persistent threat) might have the time to monitor the system for idleness and then attempt such an activity. At least that is what I would do.

> But I believe "sandboxing" in the original comment was talking about the machine itself, not network access.

Well, they are sort of the same thing in this situation, because it's physical sandboxing.

> So PC gaming means being disciplined about getting another machine, or at least a second GPU for PCIe passthrough in a VM.

> In general I think we're in a time of decommodification. The easiest way to sandbox between security boundaries is separate machines, of which there is an inexpensive surplus.

Exactly.

> No need to have banking and games on the same tablet, when a second-hand Nexus 7 (flo) is $40 on fleabay.

This is exactly my point. As for my mobile phone, I use a Redmi Note 5 with LineageOS, without Google Apps. If I gamed on a tablet I would have a 7" tablet specifically for that. I would tether it to my phone via WiFi AP, and the CPU/GPU would probably be more powerful than you'd get in a phone anyway.

I only install things through F-Droid. I have made a significant attempt to de-google my life and have been successful.

Right now all I have on there is:

• andOTP org.shadowice.flocke.andotp

• AnySoftKeyboard com.menny.android.anysoftkeyboard

• Barcode Scanner com.google.zxing.client.android

• BusyBox ru.meefik.busybox

• Call Recorder com.github.axet.callrecorder

• DAVx⁵ at.bitfire.davdroid - Used for syncing with my private Radicale instance.

• Draw com.simplemobiletools.draw.pro

• F-Droid org.fdroid.fdroid

• Fennec F-Droid org.mozilla.fennec_fdroid

• Flym net.frju.flym - RSS yay.

• Ghost Commander com.ghostsq.commander

• K-9 Mail com.fsck.k9

• Maps com.github.axet.maps - Provides a native experience for OSM maps. If I need Google Maps I just use a web browser.

• Markor net.gsantner.markor - Awesome text editor/markdown editor

• MuPDF viewer com.artifex.mupdf.viewer.app

• oandbackup dk.jens.backup

• OpenKeychain org.sufficientlysecure.keychain - PGP mail yes.

• OpenTasks org.dmfs.tasks - Used for syncing tasks with my private Radicale instance

• OpenVPN for Android de.blinkt.openvpn

• primitive ftpd org.primftpd - I upload/download via sftp to my phone without plugging it in, using ssh keys (i.e. /sdcard/.ssh/authorized_keys)

  sftp_phone() { lftp sftp://user:DUMMY@{{ IP_OF_PHONE }} -e 'set sftp:connect-program "ssh -a -x -o KexAlgorithms=diffie-hellman-group-exchange-sha256 -o MACs=hmac-sha2-512,hmac-sha2-256 -i ~/.ssh/id_rsa"'; }
• RedReader org.quantumbadger.redreader

• Revolution IRC io.mrarm.irc - Yeah, I still use plain IRC and not IRC bridges with Riot yet, because of https://github.com/vector-im/riot-web/issues/2320

• Riot.im im.vector.alpha

• Share to Clipboard com.tengu.sharetoclipboard

• Silence org.smssecure.smssecure

• VLC org.videolan.vlc

[0]: http://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a_...

[1]: https://wiki.alpinelinux.org/wiki/Linux_Router_with_VPN_on_a...


To put it in terms of your network: I didn't want to deal with having to differentiate between VLAN3/VLAN4 switch ports (and wanted to leave room to grow multiple outgoing VPNs).

Also, I don't see the need for hosts on VLAN2 to be able to talk to one another, which enables me to default to putting decently trustable things in my access zone as well (like, say, an RPi running Raspbian/Kodi).

> Remember MAC Addresses can be spoofed which means you can get things like VLAN hopping if you're not careful

Oh, for sure, which is why I alluded to eventually pushing out per-port MAC address config to the switches. But my primary concern is browser/pocket-surveillance traffic not going out my ISP's IP, and this suffices for now.

(Thanks for the dump of Free Android apps you find useful. Not really on topic for the thread, but I personally appreciate it.)


40% ownership won't grant them the ability to abuse the anti-cheat software, unless I misunderstand how partial ownership works.


It's not like an additional 11% ownership and voting rights would prompt the board to pass a motion to begin using Fortnite to spy for China.

The benefit of partial ownership (or dual ownership) is that it opens up the network of internal employees and management to recruitment, while also providing plausible cover, as it is a branded and well-known "western" firm.


They only need to get another 11% to vote with them (assuming they have 40% voting representation).

That's one or two other shareholders you need to bribe / convince. Not difficult, especially when the company you just invested in is learning what it's like to have Scrooge McDuck money and wants more.


> They only need to get another 11% to vote with them (assuming they have 40% voting representation).

This depends a lot on who owns the other 60%. If it's one other party, the 40% effectively has no control. If it's a public company with a million other owners and they're the largest individual shareholder, many of the others won't even show up to vote their shares and their 40% of all shares will generally be >50% of the ones that show up to vote for anything.


It's a private firm, and the voting example I gave isn't really that good.

A more reasonable case is that they can heavily influence what contractors and 3rd parties are used for projects. Not something that needs a board vote, but something a 40% shareholder has a lot of say in.


I imagine they don't hold a vote at a board meeting on whether or not to put in backdoors. So that raises the question: if not like that, then how does it actually happen?


You just go to the executives and point out that you own 40% of the company, and you'd really like them to hire these couple of guys and put them on this team, and then not ask too many questions about what they're doing, or you'll use your 40% ownership to make their lives more difficult, which at 40% can easily involve ousting them personally. (See other people's comments about the difficulty of getting enough voters to actually vote to prevent a 40% stake from being the vote winner.)

The literal thing I just said isn't even that unethical; suppose you own 100% of a company, then telling it to hire someone specific and pay them this much and put them here would be fully within your rights as owner. (There's probably a legal hurdle or two to clear, but AFAIK it's nothing that will actually stop you.) How that changes ethically as your ownership stake decreases I'll leave as an exercise for the reader. What's unethical is what they'll be doing when they get there, and who else is tainted ethically depends on how much they know and what they do about it.


The other examples are public companies. Epic Games isn't. Something shady could still happen, but there are only a few shareholders that control the vast majority of the company; I think a few other firms and the founder.


A bit too Legion of Doom in retrospect.

But the contractors and vendors the company works with are good vectors, and you would get to heavily influence which ones are used.

As well as partnerships and integrations.


They don't need to do anything with shareholders to spy on anybody.

Heck, if China wants to spy on anybody, they don't even need to resort to technology to begin with... Unlike the PRC, the USA is a more or less open society. Want someone's secret? Just send your man there. Want to influence someone? Just ask...

I totally don't understand the American concern with technology being used for espionage while it lets this go.


You're thinking too small.

Yes, stealing individual secrets is best done with old school tradecraft. Spies in suits, photos of scandals with dominatrixes.

But what if you want to build a system which can identify homosexuals based on textual conversations? You buy Grindr.

Or what if you want to get a backdoor for future exploitation on almost every computer in the US? You buy Fortnite. Because one of those computers will belong to the child of someone important. It only takes one slip by them to get Stuxnet from their home PC to their work PC.

You mustn't be afraid to dream a little bigger darling.


You don't need to buy a company to use its systems in an attack. The odds of discovery and disclosure ahead of time increase massively if you go through a major company's decision-making process as part of your hacking process. Companies are people. Lots of them. They are consequently not good at keeping secrets, particularly "I'm hacking the world for evil" secrets.


Ah yes, brave and valiant humans who took 12 years to blow the whistle on PRISM.

I shouldn't have mentioned attack vectors, the first case is the better opportunity these days. Data data data.


> Yes, stealing individual secrets is best done with old school tradecraft. Spies in suits, photos of scandals with dominatrixes.

Why would anybody need to do more than steal individual secrets they've been told to recon?


Because you can't build a homosexual scoring model for the social credit system by learning that one person is secretly gay.

You can build that model if you have access to the data of 20 million people.


Even easier: a party not connected to the above hacks the anti-cheat software. No need to buy the company. It has probably already happened.


But that's also easy to revoke. If you own the code, you don't have to let anyone know what you're sneaking in.


If, for example, they also owned BattlEye, they would have a seat at the table to 'suggest' that it be used for anti-cheat.

No comment on the wider claims, but 40% does give them a tremendous amount of at least soft power.


Sure, but then the problem would be their ownership of the anti-cheat company, not of Epic. FWIW, BattlEye appears to be German and Easy Anti-Cheat Finnish (and bought by Epic).


That's not how covert operations work. More likely is just a series of subtle moves to get "trusted" assets into the right places, followed by coordination outside of corporate channels to implement the backdoor. You'd have trusted agents actually implementing and using the backdoor, and less trusted assets (ideally, to allay suspicion, people not of your nationality that you've leveraged via MICE or some other mechanism) to look the other way and generally act like useful idiots.


Not to mention that some games are translated for the Chinese/Asian market. Guess who will suggest a company for that? Then one could slowly make it so that the translation team needs more and more access rights to the deployment platform (or just stealing credentials is an option too - especially from the inside). All you need is to be able to insert a little script and be able to press the "deploy now" button. Hidden backdoors are so '90s... Online games are perfect cybernukes.


They also own Riot Games, Grinding Gear Games and Supercell.


Sent me into a small panic there. But I remembered I've got work and play on separate OSes, on separate encrypted disks.


Still on the same system and network, I assume. If you are an "irrelevant" target, let's say one where there are no stakes in getting to you, this is good enough. Otherwise it's a matter of getting into your UEFI or disk firmware (since both disks share the same system), or the rest of the network (router, IoT crap, another system, etc.), and the fact that you have an encrypted disk won't matter that much.

It's about how important a target you are. You only have to make one mistake, and "they" only have to get it right once. There's no defense that you can put up against a team of dedicated hackers with nation state backing unless you are either not on their radar, or have nothing inherently hackable in your life.


A dedicated team? Of course not.

But it's still worthwhile to protect oneself from being an easy target for a system which just scoops anything interesting off the disk it's installed on.


I dual boot Windows and Linux; work is always done on Linux with FDE, and I never access files on one from the other. But it's a threat model I never considered - Fortnite is installed because the boy plays it a lot when I'm not using the gaming/dev PC.


Until some anti-malware outfit figures it out; then they're in some very hot water.


I think the concept that Tencent is asking Epic to infect your computer with malware is a ridiculous conspiracy theory.

I work for a western studio of which Tencent is a majority shareholder, and I can tell you, Tencent hasn't even hinted at wanting any of our data, let alone harvesting more.

As far as I can tell, their motives are simple and capitalistic. Somewhat ironically, it was all the western potential acquiring companies that had agendas that were very ethically distasteful to us.

Tencent certainly complies with Chinese laws in China, but can you blame it? Wouldn't you comply with the laws of your country?


What makes you think something like this would be announced in a company-wide email?

One or two guys working on cheat detection might know something, or more likely are just told to ignore whatever they see.


Like Google's Dragonfly. Or AT&T letting the NSA splice intercepts in the main fiber room.

Need-to-know only.


Intelligence services play the long game. Gradually exert pressure, put friendly management in place, move supporting services to Chinese data centers, etc. Maybe it takes 10 years. They aren't in any rush.


Click on his profile. He's the CTO of the company.


Even if you're the CTO of a US company that's being infiltrated by a foreign intelligence agency, there are only three possibilities:

* You are a foreign intelligence asset and any denials on your part are lies.

* You are not a foreign intelligence asset, but you know that strange things are afoot and have informed the FBI. In order to not jeopardize the counterintelligence investigation, you have been instructed to play dumb, and hence, any denials on your part are lies.

* You are not a foreign intelligence asset and you have not noticed the infiltration. In this situation, you're not lying when you deny that anything's going on, you're just ignorant.

Of course, if your company isn't being infiltrated by foreign intelligence, you will also, correctly, deny that the company is being infiltrated. I'm not saying that his company is being infiltrated or compromised; I'm saying that there's virtually zero informational value in someone in his position denying such a thing because no one would ever admit it.


> Tencent certainly complies with Chinese laws in China, but can you blame it? Wouldn't you comply with the laws of your country?

Since you've put yourself out here as CTO of a Tencent-owned company... I've heard that in China, Fortnite requires a Chinese "real ID" to play. That is, China knows exactly who you are at all times.

They also "punish" you for playing more than 3 hours at a time if you're under 18.

And I can't prove it, but I assume they also record all conversations (otherwise... what is the point of using a real ID??)

This doesn't sound like something Epic Games would want to be associated with if Tencent wasn't a major shareholder. Are you saying you've had no pressure at all to implement similar features and make your games "more friendly to the Chinese market"?

To me it feels just like China buying up slivers of Hollywood so nothing critical of the regime makes the big screen.


The question is, if this Chinese ID is a requirement from the government to enter the market... how many corporations would refuse (EA, for example)? Especially with Fortnite, which is sure to be super profitable.

I am not saying Epic/Tencent is good or anything, just that you don't need to be partially owned by a Chinese company for this to happen. It is a financial, capitalist decision, similar to why other companies want to enter the Chinese market.

It would be more suspicious if Epic was doing it even when Fortnite wouldn't be profitable in China.


Since you mention EA, it appears that they partner with Tencent Games for the FIFA franchise: https://eafifa.qq.com/


> I work for a western studio of which Tencent is a majority shareholder, and I can tell you, Tencent hasn't even hinted at wanting any of our data, let alone harvesting more.

How would you know if they did? Most employees have zero interactions with their employer's investors.


Because I am the CTO of the company, and because the only interaction Tencent has with our company is during its board meetings, where I am also a member of the board.


The only interaction you know about.

I was working for a telco hardware and software manufacturer, and I remember how about 15 years ago I was the QA who was sent to China with an engineer to implement "a feature" onsite (yes, it was a backdoor). Everybody in my team could guess why we were sent there. I also worked at a cinema control software company, and we had to implement the same feature.

So yeah, it only takes one engineer to implement a feature. (Or a bug.)


> Everybody in my team could guess

So pretty widely known.


Yeah, because we did not give a shit about the NDA we had to sign.


And your CTO didn't know about your trip abroad? Seems highly unlikely.


The CTO of the company probably was not even aware that we were working for the company.


And you could have mentioned that fact in your original post. This just reads like you baited someone into what was a reasonable response. I'm not sure why that'd be necessary so maybe it was entirely unintentional on your part.


Nice comeback :)

The person who asked should have looked at your profile.


Congrats on the recent PS4 release! I’ve been a fan of PoE since closed beta.


It has been less than a year since then, though...

Also hello from NZ :)


Awww, right in his face, that's cruel!


> all the western potential acquiring companies that had agendas that were very ethically distasteful to us.

Interesting, can you elaborate?


I don't want to give examples of specific companies for what I hope are obvious reasons.

But in basically all cases, western companies were much more interested in buying us for our users, whom they could then subject to whatever their business model is. Selling ads, selling subscriptions, selling data, whatever.

Our company's profit was actually a negative to these western companies, since it simply increases the price that they would have to pay to get at the users.


If the country's laws are aimed at achieving 100% surveillance and zero privacy, yes, you can blame it.


Yes, there is a difference between giving away all your information of your own free will and the state spying on everything you do.

On the other hand, how many know how much Facebook and Google collect about you, and where that information ends up some time later? There was a lot of sci-fi about the information society and how it could derail in the future, where machines decide if you might commit a crime and take preventative measures. We are not far away from it now; it just takes a few small jumps in the imagination to end up there :-)


> There was a lot of sci-fi about the information society

More importantly, there is a lot of literature about totalitarianism.


Video card interface electrical standards may have changed, but GPU form factors have not.

Electrical interface changes just need to be paired with compatible motherboards.

Same with hard drives: the interface changed but the form factor has not. They are even compatible with their larger versions: a laptop 2.5" platter drive or SSD can fit in a 3.5" drive bay with a cheap bracket. The SATA revisions were backwards-compatible speed bumps; performance was always the maximum speed the motherboard supported.

M.2 is just PCIe in a different form factor, so on desktops a $5 passive adapter allows NVMe SSDs to be used in any PCIe slot, even on relatively ancient PCs (though use as a boot drive depends on BIOS support).

Same with display interfaces. VGA is still supported on many systems, with DVI-I being backwards compatible via cheap passive adapters. DVI and HDMI are electrically identical (minus audio), so cheap passive adapters work there too.

The broader point is that large incompatible electrical changes are possible because they only mean the new motherboard needs to be paired with compatible components. There's still market pressure for backwards compatibility unless the leap is large enough.


This is a good time to bring up the fact that there was never an industry-wide standardization effort for laptops. A standard form-factor means components would be re-usable between upgrades: the laptop case, power supply, monitor, keyboard, and touchpad could all be re-used without any additional effort. This improves repairability, is much better for the environment, and means higher-end components can be selected with the knowledge that the cost can be spread out over a longer period.

For desktop PCs, the ATX standard means that the entirety of a high-end gaming PC upgrade often consists of just a new motherboard, CPU, RAM and GPU.

A 2007 Lenovo ThinkPad X61 chassis is not that different from a 1997 IBM ThinkPad chassis (or a 1997 Dell Latitude XPi chassis). If the laptop industry had standardized, manufacturers would have produced a vast ecosystem of compatible components.

Instead we got decades of incompatible laptops using several different power supply voltages (and therefore ten slightly-differently shaped barrel power plugs), many incompatibly shaped removable lithium-ion batteries, and more expense and difficulty in sourcing parts if and when components break.

A little bit of forward thinking in the late 1990s would have saved a lot of e-waste.


Some parts, such as batteries, storage, RAM, etc., should at least be standardized.

Manufacturers probably don't want to standardize the remaining motherboard/graphics/chassis/cooling because a laptop isn't like an ATX computer, where you get modularity at the expense of wasted space. A laptop is basically a 3D puzzle with thermal components. Few consumers would buy a laptop with even a little wasted volume or weight, even if it meant better serviceability and upgradeability. Same with phones. We aren’t going to see modular phones beyond the concept stage either.


I generally agree with your comment. However, when you wrote,

> Same with phones. We aren’t going to see modular phones beyond the concept stage either.

I disagree. I'm writing this on a Fairphone 2, which I bought for its modularity & because running Lineage OS (or any other OS you choose) doesn't void the manufacturer's warranty. While I'm sure Fairphone's sales are small compared to the broader industry, I think they've shown a market exists for ethical, modular phones. I've seen other Fairphones in the wild here in France, as well as seeing them for sale on used goods sites like leboncoin.fr.


Batteries (or rather, individual cells) do have a standard: 18650. Unfortunately it's too thick for the ultra-thin laptops, but older ThinkPads use them. I suspect safety is the reason why no one makes replacement laptop battery "empty shells" that take 18650s and have the appropriate balancing/protection circuitry to interface with a laptop, but then again you see mobile phone power banks being sold this way... go figure:

https://megaeshop.pk/5v-dual-usb-power-bank-box-diy-shell-ca...


I just wish there was a portable all-in-one PC with modular components (since vertical parts don’t need as much testing as horizontal).

But surplus laptops are in such quantity that I’d be fine replacing my tablet with a laptop.


There's always been a trickle of machines like that. The problem is that they're targeted towards industrial usage and RIOTOUSLY expensive.

https://www.bsicomputer.com/products/fieldgo-m9-1760 for example (the first vendor I saw that actually shows prices, as opposed to just request-for-quote). It starts at nearly $2400 for a low-spec Celeron, and I'm not sure it even has an onboard battery.

What I could see as viable would be a micro-ATX case of similar dimensions, sold as a barebones for like $300 -- use the extra volume from not accommodating ATX mainboards to store batteries and charging circuitry, which can be off the shelf because space constraints are minimal. Pop in some reasonably priced desktop components, and you'd have a competent luggable for under $1000.


You can get mini-ITX cases that are smaller than console-sized with full-sized graphics cards inside... they cost like 300 bucks though :(


Not really; they are mostly around 100 bucks. Some have integrated PSUs and might be over $200, but $300 seems like a lot to me.

I'm using a SilverStone ML08 (about $100 at the time), which is slightly bigger than a PS4 Pro but fits a full-length, double-slot GPU.

The ML05 is even smaller, costs about $50, but fits only a single-slot GPU.


I might be getting one of these.


> A standard form-factor means components would be re-usable between upgrades

We don't even have to go that far. Just ensuring that laptops can be serviced by their own users would go a long way to reducing e-waste: i.e. not soldering RAM chips to the motherboard, making it feasible to remove every single part (not gluing the keyboard to the motherboard, for example), etc., instead of pursuing an ever-thinner laptop design, which has practically no use.


MacBook Pros not being user-serviceable at every component level doesn't mean they're not environmentally friendly - not by a long shot. In fact, building a device like that might even shorten its lifespan in a laptop form factor, not to mention that no one wants to carry around a heavy, clunky machine, so it likely wouldn't sell anyway.

Here’s an example report that mostly has to do with the production and recycling aspects: https://www.apple.com/environment/pdf/products/notebooks/15-...

When people are done with their MacBooks they don’t just throw them out - they sell them or hand them down to their relatives/kids because they still work well enough, are supported by the manufacturer, are durable and have very high resale value in secondary markets.

Robust engineering, longevity, support, and resale markets do more for the environment than making components user-replaceable.

My old 2011 MacBook Air is still going strong and being used by my mother. If anything goes wrong, she can take it to the Apple store and get help promptly. She still gets software updates, and that thing can STILL be sold for ~$250-300 on eBay, Swappa or Nextdoor. If the machine breaks completely, she can take it into the Apple store to get it properly recycled in almost any part of the world.

That’s what minding the environment looks like. You have to look at the entire lifecycle of the product from the moment the raw materials are sourced all the way to making it easy to recycle when a product is end-of-life.


Apparently you haven't seen any of Louis Rossmann's videos on YouTube. Let's say your grandma's MacBook stops working because of a blown fuse on the motherboard. Something like that would take Louis 5 minutes to repair, but the Apple Store would just replace the whole motherboard and charge $$$. How is that environmentally friendly?


One: shout out your favorite YouTubist, I guess, but a repair Apple makes is a repair Apple has to support.

Two: it's much, much harder to support a repair done on-site with a soldering iron than it is to replace a part. These repairs are much more likely to fail under both normal and unconventional use and then will come back for more repairs--which are themselves, still, expensive to provide.

Three: waste concerns have to factor in what Apple does with the part after they do the swap. (I have no insight into what they do, but your comment ignores this.)


>> One: shout out your favorite YouTubist, I guess, but a repair Apple makes is a repair Apple has to support. <<

Saying he is my favorite YouTuber is a bit condescending. I mentioned him because he is a loud proponent of the right-to-repair movement.

>> These repairs are much more likely to fail under both normal and unconventional use and then will come back for more repairs--which are themselves, still, expensive to provide.<<

If that was true, I am sure Apple would choose to repair parts instead of replacing them. ;)


> If that was true, I am sure Apple would choose to repair parts instead of replacing them. ;)

Of course it's true--everything from "that fan's just going to get dirty again, and faster, because it's been blown out but can't be re-sealed outside a factory" to "that solder joint is being done by somebody making fourteen bucks an hour, boy I hope I'm not relying on that long-term".

Why would a company that makes its money off of selling the closest thing to a unified end-to-end experience take the risk of a dissatisfied customer because of a frustrating defect remediation experience?

The quoted point is an example of a fundamental misunderstanding of how Apple views its customers and how Apple makes its money. But stuff like that is a closely-held truth in the various repair-uber-alles communities on the web regardless of reality. (And then, as 'Operyl notes, your cited YouTubist attempts to shore up his own little slice of community by instilling in them the "enlightened"/"sheep" dynamic. Petty little cult leader, that.)

Sorry that you read some real distaste for that mess as condescension, but not sorry to voice that distaste.


>> Why would a company that makes its money off of selling the closest thing to a unified end-to-end experience take the risk of a dissatisfied customer because of a frustrating defect remediation experience?

You make it sound like Apple has never done it before.

Case in point: the overheating early 2011 Macbook Pros - a problem experienced by thousands of customers.

Apple basically pretended the problem didn't exist for well over a year (there was a gigantic thread about the issue in the Apple support forums). By the time they did issue their recall (or "repair order", if you want to use Apple's euphemism), a lot of people had already divested their dead Macbook Pros for a loss.

Mine had bricked just after my AppleCare expired, and I wasn't about to spend $500+ to get a replacement logic board (which basically had the same defect, except it was a brand new board. Source: I had replaced my logic board under AppleCare only to have the problem recur within two months). I was lucky that I didn't dispose of my Macbook Pro before the repair order, but I had bought a replacement laptop by the time it was issued (spoiler alert: it was my first non-Apple laptop purchase in a decade).

They also put up barriers to getting the repair order. You had to prove you had the heat issue and that it was causing crashes. Since mine was bricked, it was easy. But a friend of mine (who had two of the affected models) had to jump through hoops at the Apple Store to get his fixed.

Those early 2011 Macbook Pros were mostly high end 15" i7 models, meaning they were not on the lower end of Apple's Macbook Pro line. People paid good money for them. If Apple didn't have their heads in the sand and gave everyone replacements (i.e., a 2012 model, which didn't have heat issues) as the problem occurred, it would have been a rounding error for them. But they didn't do that.

>> fundamental misunderstanding of how Apple views its customers and how Apple makes its money.

Speaking from my one experience - I didn't feel like Apple was interested in my experience at all. While I never considered myself a fanboy, I was very loyal to Apple and totally invested in the ecosystem. After my experience with the 2011 Macbook debacle, I abandoned them completely. It meant writing off a lot of money spent on Mac software, mobile apps, etc.


He’s cringe at best, just as bad as the rest of them at worst. He’s playing for the camera, the audience. I wouldn’t take much of what he says seriously, but that’s just me I guess.


Plus he outright lies and then soaks up the attention it brings him: https://www.reddit.com/r/apple/comments/9pow06/louis_rossman...


My son spilled milk on our 2015 MacBook Pro. Apple wanted $1,100 to fix it. It took two separate shippings to New York, but Louis Rossmann fixed it for $550. You need to wake up, grow up, and grow a brain. Apple's excessive greed is real. The fact that you were lucky and haven't dropped or spilled anything on your laptop in the last 5 years is not evidence that Apple is a great company!


I’m sorry, what? This is the exact shit I’m annoyed about. He is inciting stuff like this, telling his user base to call anybody that disagrees with them things like “sheep”, it’s even in his logo. I do not agree that the best thing to do is call people you disagree with “asleep” or “sheep,” or to “grow up/a brain.”


> Robust engineering, longevity, support, and resale markets do more for the environment than making components user-replaceable.

But if the parts were also user-replaceable, it would be better for the environment.


"User serviceable" doesn't imply that the user will actually perform any service. I would be willing to posit that for the vast majority of users, an ATX desktop is just as "serviceable" as a Macbook Pro. In the case of the desktop, if it breaks, they take it to their IT department or Best Buy and get a technician to fix it. In the case of the Macbook, they take it to their IT department or the Apple Store and get a technician to fix it. And the Macbook Pro is a darn sight lighter, more portable and more attractive to have sitting on your desk...


This is not taking into account that most people won’t know how to fix the problems that arise from connectors wiggling loose or the replaceable hard drive failing. Additionally, there’s also the problem of the connectors themselves wearing out and breaking: e.g. I have a Lenovo X220 that no longer charges because the power cord connector is broken.


That requires significant trade-offs in durability, weight, and design.

Not to mention you don’t want the typical user (forget the HN audience) to replace the components themselves.

Most professional users are on corporate enterprise device plans and you don’t want employees or the IT department replacing components either. It’s far better and cheaper to get the employee back up and running with a new machine while the one in need of repair gets shipped off under enterprise warranty.


In many cases, a well-designed, durable product can be repairable. While the actual earbuds were not wonderfully well-reviewed, the Samsung Galaxy Buds were both tiny and legitimately repairable: https://www.ifixit.com/Teardown/Samsung+Galaxy+Buds+Teardown...


And they were a shit product. As you yourself admitted. It doesn't help if something is supposedly "repairable" (by the 0.5% of buyers who might be inclined to do such things) if the product is such crap that it gets thrown away after a few weeks.


> My old 2011 MacBook Air is still going strong and being used by my mother.

Coincidentally, this is also a machine where you can still swap the SSD. With the help of some Alibaba engineering, you can even use a stock M.2 SSD.

The latest MacBooks are bullshit; you cannot exchange anything.


There are upgrade-friendly laptops on the market. I've replaced RAM, disks, keyboards, LCDs, CPUs and wireless cards in my laptops. Soldered CPUs are unfortunately inevitable on modern ones, but many other components are still replaceable if you pay attention at the time of purchase. It usually voids the warranty, but I don't care too much.

As a nice bonus, it sometimes saves money. I picked my netbook only for its CPU (i3-6157U, 64MB L4 cache), GPU (Iris 550, 0.8 TFlops) and display (13.3" FullHD IPS), then upgraded to an adequate amount of RAM (16GB) and a larger and faster M.2 SSD. Both were too low out of the box, and even today there are not many small laptops with 16GB RAM.


Agreed.

> Soldered CPUs are unfortunately inevitable on modern ones...

To be fair, even on desktop replacing a CPU on the same motherboard is a pretty niche thing in my experience. Not to say people don't do this, but most of the people I know upgrade both at the same time, either because of incompatiblity or because of substantial gains with the newer MB. So soldering the two together is not as bad as glueing keyboard to the motherboard in my eyes.


In some cases, upgrading a CPU prolongs the useful life of the device.

The desktop I'm using now had an i5-4460 at the time of purchase, eventually upgraded to a Xeon E3-1230v3. I'm only going to upgrade the motherboard after AMD releases Zen 2 desktop CPUs.

A family member uses a laptop that initially had an i3-3110M. I've put an i7-3612QM in there; it's comparable to modern ones performance-wise despite the 6-year difference, e.g. cpubenchmark.net rates the i7-3612QM at 6820 points and the i5-8265U at 8212 points (because 35W versus 15).

I agree about glued keyboards. Keyboards are exposed to the outside world and are also subject to mechanical wear. The only thing worse than that is soldered SSDs: they make data recovery very hard, and the rate of innovation is still fast for them. SSDs that become available a couple of years in the future will be both much faster and much larger, so upgrading them regularly makes sense for UX.


On a desktop it is possible to replace the motherboard; on a laptop, not so much. So not soldering the CPU would at least give you the ability to upgrade the processor to the fastest one supported by that motherboard.


True, you have a point there. Of course, standardizing on a few motherboard form factors would be even better... one can always dream. ;)


My current weapon of choice is a Lenovo Y50-70. Adding RAM is super easy, but I had to replace the keyboard at one point, which was a nightmare since it was all glued and had small pins to hold it together; not to mention for some reason you have to disassemble absolutely everything before you actually get to it. In the end I basically just tore the thing out semi-carefully, and the new one is just "pinned in" (the glue isn't even needed). Another adventure was the screen frame, which was breaking more and more each time I opened it. For that I drilled holes in a few places and bolted it together with some really small nuts, so it still closes fine. It was annoying but a fun experience; it got me over my hardware-tweaking anxiety for good. I doubt it gets crazier than drilling holes in your laptop.

So yes, laptops should absolutely be made easier to modify; the components get old really fast, and I don't wanna buy the whole thing each time I want an extra bit of RAM or some small part gets broken. It's one of the things that makes me steer way clear of Apple stuff.


> There are upgrade-friendly laptops on the market.

Yes, but very few, and fewer and fewer as we speak. Even Lenovo, which was famous for that, has ended up soldering RAM in its recent models and making the battery a hassle to replace, while it used to be external before.


The enterprise market is huge. Consumers like thin and shiny things; corporations don't care, but they employ people whose full-time job is counting expenses. Upgradeable laptops are good for them because they can get exactly the right hardware without paying for what they won't use. They rarely upgrade themselves, vendors do, but unless the laptop is designed to be upgradeable, vendors are going to have a hard time serving their enterprise customers' requests, let alone doing it fast.

For an example of what I'm talking about, see this for a current-gen Intel-based 13.3": http://h10032.www1.hp.com/ctg/Manual/c05695299.pdf

Update: and if you're going to install Linux, these laptops can always be bought without a Windows license. Corporate customers use volume licensing; they don't need these OEM Windows keys and aren't willing to pay for them either.


I doubt that enterprises (or even their vendors) do much upgrading. Instead, they have those machines on a refresh cycle and replace them every three years. They do, however, often prize repairability: if you have a fleet of hundreds of the same model of machine, it's easy to maintain spares of the components most prone to failure/damage.


> Enterprise market is huge.

Do you have any numbers to compare the size of the consumer market vs the enterprise market?


I don't have numbers. But this article https://www.gartner.com/en/newsroom/press-releases/2018-10-1... says the PC market is driven by business PCs globally, in the US, and in EMEA.


Consumers prefer super-thin soldered machines, sadly. The proof is in the shopping.


Not always; sometimes you cannot see the proof because a company just doesn't offer any other options. If there was a modern laptop that ran macOS and was user-upgradeable, I would absolutely get it in my next upgrade cycle. Alas, there isn't one.

Also, companies aren't always superrational logic machines that have coldly calculated their every move; there's sometimes a lot of collective delusion going on that can leave their consumers in the cold, who then just make do with the best out of a bad lot that's offered. Recall the recent iPhones - suddenly every other phone had a notch even when it served no purpose; or the removal of audio jacks, for example. There was NO consumer preference expressed there, just one company that decided it that way for its own purposes, and others blindly copying it.


I'm not by any means knowledgeable on the hardware design front. But I think the notch was a hardware design solution to a problem (I'm not sure what it is but probably to fit more components or save space inside the phone for something) and all the others copied it because it was a clever solution to an existing problem and it didn't appear to affect users much.


Consumers "prefer" what the billions of pounds spent on marketing tells them to prefer.

If companies spent money telling consumers to value upgradability and not to buy new stuff all the time, then we'd value that more .. but that doesn't sell more stuff, it just helps save the planet, so why bother ....


Consumers don't know that they have the option to fix their machines. They are trained to toss devices (not just computers, but also cellphones, TVs, and appliances) instead of taking them to a repair shop.


In defense of upgrade culture, do you ever wonder where those old phones you trade in go? Phone companies have been turning them into profit by shipping them to the developing world. We live in an era where even the most remote and impoverished places on Earth have, at minimum, a cell phone in their villages. And it's that crappy circa-2000 Nokia you had that plays Snake. Now they can call emergency services on demand or keep in touch with long-distance loved ones.

Capitalism is such a blessing.


Are you serious? No one is using a 2000 Nokia, even in the third world. Do you think it's cheaper to collect phones in the first world, wipe them, test them, and ship them to the third world (assuming they could even connect to any cellular network) than to mass manufacture $15 plastic Androids?


Has Apple offered an upgradeable laptop alongside a non-upgradeable one? It would be possible to draw that conclusion if they sold a new-style MacBook Pro alongside an older-style one with similar specs.


Yes. Between 2012 and 2016, Apple sold a version of the 2008-2012 unibody MacBook Pro that had an optical drive and upgradeable RAM, while simultaneously offering the Retina MacBook Pro, the first to solder the RAM to the motherboard. Arguably from a specifications standpoint the Retina MacBook Pro was the superior model due to its high-resolution display and its use of a fast proprietary flash drive (replaced with standard NVMe in later versions) instead of slower SATA flash drives. Eventually the unibody MacBook Pro would get long in the tooth due to lack of updates compared to Apple's annually-updated Retina MacBook Pro models, but it was still sold until it was quietly discontinued in 2016 upon the announcement of the controversial touchbar MacBook Pro.


> Has Apple offered an upgradeable laptop alongside a non-upgradeable one?

No, but I would be shocked if they have never run focus groups for this type of stuff.


Consumers are multi-modal, though. Many can't be bothered to dig in and debug, or want a sleek, highly integrated product. Others care less for those things and want an upgradeable, repairable product. It's my hope that economic solutions will find the resources to accommodate both modalities.


No, because such options are almost completely gone from the market. And I can't honestly believe that there is "no market" for it. It's an anomaly because most PC manufacturers are just trying to imitate Apple.


It makes sense to think this when looking at modern consumer tech, but I haven't met people who actually want that sort of thing. It always seems like people are having to settle.


Is that why the sort-of-upgradeable 2015 MBPs already go for higher prices on eBay than the absolutely non-upgradeable 2016/17s?


The 2015s don't have the "a single speck of dust gets in the keyboard and disables a key" problem. All later models do. I don't think upgradeability factors into it much.


Soldering RAM is helpful for preventing people from cooling the RAM and removing it to attempt to copy private encryption keys.


Modern Intel (and probably most other) boards use hardware scramblers for other reasons (storing all zeros or all ones causes signal integrity issues / current spikes), and secure the scrambling routine with a different code at each boot.

So, unless I'm mistaken, cooling the RAM and moving it to another board doesn't work any more.


There are two types of scrambling here - one is the bitswapping and byteswapping that you use to make DDR3/4 routing possible. The other is the whitening function for high speed data that ensures you don't have long sequences of the same value without a transition. The latter is a simple pseudorandom scrambler with a fixed random seed generated at boot. It is not cryptographically secure. The former is a simple substitution and quite easy to reverse (and trivial if you have either a schematic or a board to reverse). Both are deterministic and extremely vulnerable to known plaintext attacks. This is not a security feature.

Source: I'm working on a DDR4 layout right now, and the memory controller scrambling and swapping functions are documented in publicly-available Intel datasheets (for example, see https://www.intel.com/content/dam/www/public/us/en/documents... sections 2.1.6 and 2.1.8)
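
To make the known-plaintext point concrete, here is a minimal sketch assuming a toy 16-bit Galois LFSR as the whitening function. Intel's actual scrambler is wider and documented in the datasheet above, but the shape of the attack is the same: a region of known plaintext, such as a zeroed page, hands the attacker raw keystream, and a small, non-cryptographic seed space falls to brute force.

    # Toy model of a memory-bus whitening scrambler: a 16-bit Galois LFSR
    # keystream XORed onto the data. This is NOT Intel's documented design,
    # just an illustration of why a fixed-seed scrambler fixes signal
    # integrity problems without providing any security.

    def lfsr_stream(state: int, n: int) -> bytes:
        """n keystream bytes from a 16-bit Galois LFSR (taps 0xB400)."""
        out = bytearray()
        for _ in range(n):
            byte = 0
            for _ in range(8):
                bit = state & 1
                state >>= 1
                if bit:
                    state ^= 0xB400
                byte = (byte << 1) | bit
            out.append(byte)
        return bytes(out)

    def scramble(data: bytes, seed: int) -> bytes:
        """XOR with the keystream; scrambling and descrambling are identical."""
        return bytes(d ^ k for d, k in zip(data, lfsr_stream(seed, len(data))))

    boot_seed = 0xACE1                   # generated at boot, unknown to attacker
    ram = b"\x00" * 8 + b"secret key!"   # a zeroed page followed by key material
    dump = scramble(ram, boot_seed)      # what a cold-boot attacker reads out

    # Known plaintext (the zeroed page) XOR ciphertext = raw keystream bytes.
    ks_fragment = dump[:8]

    # The seed space is tiny, so brute-force it against the fragment...
    seed = next(s for s in range(1, 1 << 16) if lfsr_stream(s, 8) == ks_fragment)

    # ...and descramble the entire dump.
    assert scramble(dump, seed) == ram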


Seriously interesting. Any pointers?


Couldn't the 3 people in the world who need that just dab some epoxy on there?


Yes, I can't count the number of times soldered RAM has saved me from this. Can you count to zero?


Nobody is going to do this, because good components are a competitive advantage. I can’t see any good manufacturer wanting their good {trackpad, keyboard, case} either being put in a computer that undercuts them or being forced to dumb down their computer to fit the “lowest common denominator”.


To successfully define a laptop standard, it would have taken a consortium of companies. Likely companies which aren't necessarily in the business of selling integrated laptops themselves, but would benefit from the existence of a laptop standard.

It's likely companies like Microsoft (pre-Surface), peripheral manufacturers (eg, Logitech) and motherboard manufacturers (eg, Gigabyte) would have gladly got on board in that era.

It's likely too late to start this in 2019 (but I may be wrong). Certainly the late-1990s would have been the ideal time for this.


This doesn't work if the peripheral manufacturers are themselves big players (which they are): they can already afford to fight to secure a place in the oligopoly of big players, and it's not in their interest to open up space for direct competitors. Once you become big enough, you share a substantial interest with every other big company: not allowing smaller producers to step in.


As I understand it, only a handful of firms actually design their own laptops. Most of them buy from firms like Clevo or Quanta and maybe do final configuration (CPU/RAM/discs). So you'd really only need to convince them.

In a way, this is much like the situation with desktops-- Dell and HP were/are big enough to come up with their own custom mainboards and cases, but most smaller shops are going to go ATX.

I suspect part of the reason we didn't see much laptop standardization was that the second-tier manufacturers are weaker in the laptop sector than desktop, as well as being weaker as a whole than they were in 2000 when ATX was becoming a thing.

Outside of a few narrow gaming and workstation niches, there are few use cases where you can't find a suitable big-brand laptop, so the second-tier brands (and the manufacturers that supply them) are in a position of fighting for scraps, not one where they can start promoting the benefits of standardization.

This is likely worsened by the mindset that laptops are unupgradeable-- people bought ATX desktops figuring they'd buy new mainboards in 3 years, but generally assume the laptop is going to be stuck in place.


Because most money is in pandering to the lowest common denominator and scale, most manufacturers are starting to make their own hardware/software integrated combo. Apple did this, Microsoft is now doing it as well, Razer is going there, and all of them are (commercially) better for it. On the other hand, it's bad for 'us' (the more hacker-y users), as we have fewer options. It's why so many stick with sub-optimal solutions like Apple MacBooks (they are not ideal, but the other off-the-shelf options are so much worse) or custom stuff (modified ThinkPads and Dell laptops). The former isn't ideal, but it's at least standard and scalable; the latter isn't. Not really, at least.


Incorrect. That happens with desktop PCs.

The reason it hasn't happened in laptops is that you would have to compromise size and form.


This would allow smaller players to step in and start grinding away some market share from the big players. It would also turn the laptop market from a high-margin market into a low-margin one. Standardization is just not in the interest of any big player, so it's probably never gonna happen. If you are a small player and want to go that direction, you're probably gonna be bought out.

The only way I see would be to somehow get pervasive open standards and libre schematics implementing them, then cut out the big players and get several manufacturers to produce them. But that too is hardly gonna happen, because of geopolitical problems: most of these manufacturers are domiciled in China, so this move would cut too much income from Western (and Korean, Japanese) countries. For that to happen we would have to relocate some manufacturing industry, and somehow not put it in the hands of any of our local big players.

The problem here is not some problematic business decisions by companies, it's how we organized our economy. It would take radical changes in economic/industrial policy to make that happen: much stronger anti-trust laws, which would keep companies smaller and force cooperation; publicly- instead of privately-regulated prices, so that you don't die to foreign companies' exports when you start doing that; etc. This would drive cooperation up across the whole economy, take power away from multinationals, reduce waste, and hinder "useless innovation".

Long road ahead, but I think that's what we need, and that's what's gonna happen at some point anyway: the capitalist class may still be floating currently, but at some point the systemic crisis (financial instability, natural disasters, political instability, scarcity of energy and other basic resources) is gonna hit them too. What we have to make sure is that they don't get more out-of-sync with it than they currently are.


It’s interesting how competition used to be encouraged in the U.S. and now it’s pretty much the opposite. It’s all about consolidation, oligopolies and monopolies.

If standards lower margins and make entering easier, that’s what should be regulated for.


Lobbyists rule, and lobbyists are employed by large firms.


Adding more competition isn't an answer, because competition is about individual profit, and thus monopoly. Neo-liberalism, the global free market, etc. already encourage world-wide competition: it's the current trend. What you probably mean is some kind of "healthy/fair competition": the fact that we need to add another adjective hints that this is about doing something to balance it. I argue this is about encouraging cooperation. A good balance between the two leads to interdependence, which is exactly what we would like: a state where everyone has some possibility of moving around a bit, but where nobody is free to ignore what the people they interact with have to say.

A ref I really want to push: "Mutual Aid: A Factor of Evolution" (1902, Kropotkin). There is a whole part mostly about the evolutionary analysis of altruism (the rest analyzes several human social orders throughout history: pre-medieval villages, medieval cities and 19th-century industrial cities).


There was a trend for a while of making business/power laptops much more configurable (I have an old Dell with a hard drive cage that swaps out without removing any screws). But most laptops are more about form than function; their design requires reworking all the internals to avoid getting a big, clunky, heavy box that overheats.

For very low-power machines you might have tons of internal space free, but more powerful laptops need complex heat management in addition to reducing size and weight. It's only now that we have very advanced fabrication techniques and energy-saving designs that we no longer have to hyper-focus on heat exchange.

If size and heat and weight weren't factors, you can bet that a standard would have arisen to manage interchangeable parts. But soldered RAM is a good example of why that's just not necessary, and can be counter-productive for reducing cost and maximizing form factor.


My experience has been limited by the fact that components improve at the same rate, and to get everything to play nice(r) with each other, you have to upgrade everything. "A new motherboard, CPU, RAM and GPU" is almost an entirely new computer. You save a few hundred bucks by keeping the PSU (or maybe changing it too after 5 years) and the case, assuming the ports didn't change.


Being able to continue to use the display, keyboard, mouse/trackpad means you can choose higher-end components.

Even if you don't want to keep your widescreen DVI display from 2008, interoperability means that when you drop it off at an e-waste center, it's more likely to be reused in its current state for a few more years rather than immediately recycled (reduce, reuse, recycle!)

I do agree there is some degree of change in interfaces over time (like IDE to SATA to NVMe M.2), but if you build a system for a similar intended use case, the changes within any given 5-year period are small. This means the upgrades you do over a 15-year period will go from a 2.5" platter drive to a 2.5" SATA drive, or from an 800x600 to a 1024x768 resolution display, but not both at the same time (with a different but significant set of components being shared every upgrade).


The LG Gram teardown on iFixit was amazing. It's "moderately difficult" to remove almost everything including the trackpad and parts I forgot existed.

https://www.ifixit.com/Guide/LG+Gram+15-Inch+Repairability+A...


That’s crazy. I have one and it seems impossibly light for its power. It’s an absolutely nuts achievement.


Standardization limits innovation. If we had standardized on laptop form factors in the late 1990s all laptops would still be an inch and a half thick, and all screens would still be 4:3.


I doubt this.

The standard ATX form factor has been revised to reduce size over the years, with the vast majority of accepted iterations maintaining the same mounting-hole and IO-panel locations. I literally have a mini-ITX board sitting in a case I purchased in 1999. This probably fits more into the fear you state in your comment, with a reasonably new technology "forced" to consume more space than is necessary, but I think it argues for the opposite by showing that incremental changes to a standard format can allow for wide-ranging compatibility.

For example, when ATX was altered by squaring the board off at its shorter dimension (microATX), it didn't require a new case or a new power supply on the market in order to be used, because it fit within the "too big" ATX case. Then, when cases that only fit microATX became abundant and another incremental change shrank the motherboard to DTX, we again didn't have to release new cases or power supplies or IO cards to start using the new size. It allowed consumers to purchase and use the boards until they decided they wanted to reduce their case size, amortizing the upgrade costs over months instead of requiring larger up-front payments.


How are the laptop companies meant to force you to buy a new one every so often if you can just keep upgrading them?


For desktop PCs, the ATX standard means that the entirety of a high-end gaming PC upgrade often consists of just a new motherboard, CPU, RAM and GPU.

And that's great, if you're into generic beige boxes.

It's been years since I put together my own IBM compatible computers. But in the time since then, I haven't really seen any innovation in desktops.

Yes, for a while the processor numbers ticked up, but then plateaued. Graphics cards push the limits, but that has zero to do with the ATX standard, and more to do with using GPUs for non-graphics computation.

The laptop and mobile sectors seem to be what is driving SSD adoption, high DPI displays, power-conscious design, advanced cooling, smaller components, improved imaging input, reliable fingerprint reading, face recognition for security, smaller interchangeable ports, the move from spinning media to solid state or streaming, and probably other things that I can't remember off the top of my head.

Even if you think Apple's Touch Bar was a disaster, it's the kind of risk that wouldn't be taken in the Wintel desktop industry.

All we've gotten from the desktop side in the last 20 years is more elaborate giant plastic enclosures, LED lights inside the computer, and...? I'm not sure. Even liquid cooling was in laptops in the early part of this century.

Again, I haven't built a desktop in a long time, so if I'm off base I'd like to hear a list of desktop innovations enabled by the ATX standard. But my observation is that ATX is a pickup truck, and laptops are a Tesla.


Nearly all of the tech you have in your laptop was developed, tested, and refined on desktops. PCI-based SSDs were in desktops before NVMe was a thing. Vapor-cooled processors were on budget gaming PCs 10 years ago. Even the MacBook trackpad was based on a desktop keyboard produced by a company called FingerWorks. High-DPI monitors came first to desktop. High refresh rates came first to desktop. Fingerprint reader? Had one on my secure computer 15 years ago. Face unlock a couple years after that.

Desktop is still the primary place for innovation. Laptops use technology that was introduced and pioneered on desktop, then refined until it could fit in mobile/laptop. Don't get me wrong, there's probably more work in getting the tech into mobile than developing it in the first place... But the genesis of the ideas happens on desktop.

Desktop has the opposite mix of freedom and constraints to mobile. Standard internals, but freedom of space. There are dozens of heat-sink manufacturers for PC... Dozens of small teams focused on one problem. There's some variation between chipsets, but nothing that requires major design changes. These teams can afford to innovate... And customers can afford to try new solutions. If the heat-sink doesn't perform, you're out 5% of the total cost. But there's no similar way to try things out for laptops.

For example... Should a laptop combine all of its thermal dissipation into one single connected system, or have isolated heat management? It completely depends on usage and the thermal sensitivities of the components... It was desktop water-cooling that gave engineers the ability to test cooling the GPU and CPU with the same thermal system, to determine where to draw the line.


Huh? Most of today's innovation is about power efficiency. This is driven by mobile and laptops, but benefits desktops and servers as well.


>And that's great, if you're into generic beige boxes.

>All we've gotten from the desktop side in the last 20 years is more elaborate giant plastic enclosures, LED lights inside the computer, and...?

Have you ever built an ATX computer? I assure you, there are plenty of different standard form factor cases out there. The beige box thing was in vogue in the 90s, but today the big trends are sleek black with tempered glass.

And a standard form factor desktop does not equal a giant tower. You could also do a mini-ITX build, a standard that's been around since 2001, for what it's worth.

High DPI displays? This implies high-end displays weren't available on desktops first (they were). A decent CRT could produce much higher DPI than LCDs could (in that era). Part of the reason why Windows DPI independence sucks is that Microsoft implemented it super early on, without all of the insights Apple had to do it right, and now there are like 4 different DPI mechanisms in Windows.

All in all I'm not sure what really needs "innovating" so badly with desktop form factor. Do we need to solder our RAM to the main board, is that "innovation?"

You kind of say it yourself:

>that has zero to do with the ATX standard,

So would be the case for any form factor standard. It only dictates how things interoperate.


>All we've gotten from the desktop side in the last 20 years is more elaborate giant plastic enclosures, LED lights inside the computer, and...?

Improved efficiency and the demise of bulky storage devices have created a proliferation of small-form-factor designs. We have two proper standards in widespread use (mini-ITX and mini-STX) and an array of proprietary designs from Intel, Zotac and others. It's now possible to get a fast gaming machine the size of a Mac Mini, or a monstrously powerful workstation that'll fit in a shoulder bag.

https://www.zotac.com/us/product/mini_pcs/magnus-en1070k

https://www.sfflab.com/products/dan_a4-sfx


Strawman. Nobody claimed the ATX standard enabled innovation. It enabled the reduction of e-waste as OP indicated. Think of this the next time you trash your innovative phone or laptop because the non-user replaceable battery/ram/ssd/display/whatever failed.


It's worth mentioning that the utility of a pickup truck dramatically exceeds the utility of a Tesla.


Novel form factors are often how laptop manufacturers distinguish themselves from their competitors. There is enough space within a desktop PC case to formalize a standard. As laptops get thinner and thinner, however, many engineering/layout tweaks are needed to fit everything within a thin chassis. Standardizing across different device models would be asking OEMs to stop putting effort into competing with each other. And I say this as someone who has just had a catastrophic motherboard failure on their 8-month-old laptop and had to do a replacement that would've cost as much as a new laptop outside warranty.


Because size and weight are important distinguishing features for laptops. Customers pay more for smaller, lighter laptops. Using standardized components and chassis would mean a big competitive disadvantage.


At least for phones there was Phonebloks [1]; they are now part of Google's Project Ara.

Maybe it could evolve into a laptop experience if the blocks get powerful enough and somebody develops a compatible chassis.

*update: Project Ara was cancelled in 2016 [2].

[1] https://phonebloks.com

[2] https://www.theverge.com/2016/9/2/12775922/google-project-ar...


Project Ara is no more...


:( Exactly, I was coming to update my comment: https://www.theverge.com/2016/9/2/12775922/google-project-ar... Seems they cancelled it.


It did not happen for phones either. Why should this be different?

Maybe laptops are now mature enough as a product that what you suggest could be feasible, but it is too late now for business reasons.


Maybe we could try and write an open letter to companies and promise support even for less value at first. Chances are slim, but at least we would have done our part.


While I'm not particularly a fan, it's important to make clear that the company SUSE (that is, SUSE Linux GmbH) has two distributions. It funds the community-maintained openSUSE distribution, but also sells SUSE Linux Enterprise Server (SLES).

openSUSE has a regular ~2 year support window and is probably mostly used by enthusiasts. The commercial SLES product has a 10-year support window (with a paid extension for another 3 years).

It's a paid product, but 10+ years of software updates for a stable Linux platform is something that's pretty rare to find, and certainly a great option for certain products. Hopefully I'll be corrected on this, but Red Hat tends to be the only other vendor supporting Linux distributions for such a long period of time?


Canonical does 5 years of LTS support for everyone, likely more for enterprise customers.


Shuttleworth announced in November that 18.04 will receive 10 years of support as well.


This applies to Ubuntu Core only: https://www.ubuntu.com/core

Standard Ubuntu still has five years: https://www.ubuntu.com/about/release-cycle


Some further pros and cons of padding are discussed elsewhere in the thread.

Providing privacy for the packages that, including dependencies, are less than 100MB in size is probably worth doing. The bandwidth cost of padding an apt-get transfer to the nearest, say, 100MB is not necessarily prohibitive.

Instead of padding individual files, how about a means to arbitrarily download some number of bytes from an infinite stream? That would appear to be sufficient to prevent file size analysis (but probably not timing attacks).

Exposing something like /dev/random via a symbolic link and allowing the apt-get client to close the stream after the total transfer reaches 100MB would appear to make it harder to infer packages based on the transferred bytes, without being very difficult to roll out.
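
As a rough illustration of how little client-side machinery this would need, here is a minimal sketch. The /padding endpoint, the mirror URL and the 100MB boundary are all hypothetical assumptions, not existing apt behaviour:

    # Hypothetical sketch: after fetching the real packages, keep reading
    # from an assumed endless /padding endpoint on the mirror until the
    # total transfer reaches the next 100MB boundary. Endpoint name,
    # mirror URL and boundary are illustrative, not a real apt feature.
    import urllib.request

    BOUNDARY = 100 * 1024 * 1024

    def fetch(url: str) -> int:
        """Download one package, returning the number of bytes transferred."""
        with urllib.request.urlopen(url) as resp:
            return len(resp.read())

    def pad_to_boundary(mirror: str, transferred: int) -> None:
        """Read and discard padding bytes up to the next boundary."""
        needed = (-transferred) % BOUNDARY
        with urllib.request.urlopen(mirror + "/padding") as stream:
            while needed > 0:
                chunk = stream.read(min(65536, needed))
                if not chunk:
                    break
                needed -= len(chunk)

    total = sum(fetch(u) for u in ["http://mirror.example/pool/foo.deb"])
    pad_to_boundary("http://mirror.example", total)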


> If you really want to mitigate information about downloaded packages you would have to completely revamp apt to randomize package names and sizes, and also randomize read access on mirrors...

There isn't a need to randomize package names or randomize read access on the mirror, given that fetching deb files from a remote HTTP apt repository is a series of GET requests. Randomizing the order of these requests can be done completely on the client side.

Package sizes are still problematic. Here's a suggestion: if each deb file were padded to the nearest megabyte, and there were a handful of fixed-size padding files (say, 1MB, 10MB and 100MB), the apt-get client could request a suitably small number of the padding files with each download. This would improve privacy with a minimum of software changes and bandwidth wastage.
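
A back-of-the-envelope sketch of that scheme, where the block size, filler sizes and the filler-selection policy are all illustrative assumptions:

    # Rough sketch of the proposal above: each .deb is padded up to a 1MB
    # boundary, and the client additionally fetches a few fixed-size
    # filler files to blur the total. All sizes/policies are illustrative.
    import math
    import random

    MB = 1024 * 1024
    FILLERS = [1 * MB, 10 * MB, 100 * MB]   # fixed-size files on the mirror

    def padded(size: int, block: int = MB) -> int:
        """Package size after padding up to the next block boundary."""
        return math.ceil(size / block) * block

    def transfer_size(package_sizes: list[int]) -> int:
        """Padded packages plus a small random number of filler files."""
        total = sum(padded(s) for s in package_sizes)
        total += sum(random.choice(FILLERS) for _ in range(random.randint(1, 3)))
        return total

    # Three small packages totalling ~1.2MB become 3MB padded, plus
    # anywhere from 1MB to 300MB of filler.
    print(transfer_size([300_000, 450_000, 420_000]))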


If each file were padded to the nearest MiB, the total download size of the packages containing the nosh toolset would increase by almost 3000% from 1.5MiB to 46MiB. No package is greater than 0.5MiB in size.

I am fairly confident that this case is not an outlier. Out of the 847 packages currently in the package cache on one of my machines, 621 are less than 0.5MiB in size.


You're arguing with a strawman.

He didn't mean these file sizes specifically. It would still apply just the same with different file sizes,

i.e., create cutoffs every 50 or 100 kB.


You're abusing the notion of a straw man, which this is not.

I am pointing out the consequences of Shasheene's idea as xe explicitly posited it. Xe is free to think about different sizes in turn, but needs to measure and calculate the consequences of whatever size xe then chooses.

No, it would not apply the same with different sizes. Think! This is engineering, and different block sizes involve different trade-offs. The lower the block size, for example, the fewer packages end up at the same rounded-up size and the easier it is to identify specific packages.

(Hint: One hasn't thought about this properly until one has at least realized that there is a size that Debian packages are already blocked out to, by dint of their being ar archives.)
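
For what it's worth, the trade-off is straightforward to measure on a real system: bin the .deb files in a local apt cache at a few candidate block sizes and count how many packages remain uniquely identifiable by rounded-up size alone. A quick sketch (the cache path and block sizes are just examples):

    # Measure the block-size trade-off: bin the .debs in the local apt
    # cache at several block sizes and count how many packages would
    # still be uniquely identifiable by their rounded-up size alone.
    from collections import Counter
    from pathlib import Path

    debs = [p.stat().st_size
            for p in Path("/var/cache/apt/archives").glob("*.deb")]

    for block in (50 * 1024, 100 * 1024, 1024 * 1024):
        bins = Counter(-(-size // block) for size in debs)   # ceiling division
        unique = sum(1 for n in bins.values() if n == 1)
        print(f"block {block // 1024:>5} kB: "
              f"{unique}/{len(debs)} packages unique by padded size")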


No, don't do that. Just use tor instead. It's supported on an out of the box Debian install.


It's still useful to be able to connect to the local mirror without tor (and enjoy the fast transfer speeds) while still mitigating privacy leaks from analysis of the transfer and its timing.

Transferring apt packages over tor is unlikely to ever become the default, so it's worth trying to improve the non-tor default.

