Hacker News | ntauthority's comments

> because this policy only applies for non-security bugfixes, and almost all patches these days claim to just be security fixes, including the one which introduced this bug

There are numerous feature flags that seem to just be 'MSRC_[id]' (for the Microsoft Security Response Center), and anecdotally, looking through Windows 11, a lot of actual bugfixes are feature-flagged as per usual, with both global flags (for the whole batch of fixes) and per-feature flags; various ReFS driver crashes, for example, have feature-flag checks around their fixes. So this is a bit of an incorrect assumption.

Things breaking downlevel is pretty common anyway, and the emoji picker has been in a pretty bad state since the original picker IME (introduced, I believe, in RS3, ~2017) was replaced with 'Expressive Input'. That also allowed adding GIFs and a few other things, but it relied on a new UI framework that I suspect was tied to an unrelated internal effort culminating in the '10X' product, which got canceled right before Windows 11 development started, and therefore it pretty much bitrotted.

Windows 10 was left on a fairly 'bad' release. The 'Iron' semester, which was used as the baseline for Server 2022, was still like 10 from a UX perspective (10X was only canceled between it and 'Cobalt', where the Sun Valley work that led to the Windows 11 product happened), but it had a fair few bugfixes that didn't get backported to 10 'version 2004' ('Vibranium', I believe, since otherwise the codename would've been 'Chromium', which is bad).


The reserved memory would show up as 'dedicated' memory. Shared is just the amount of host memory that can be assigned to graphics resources, which usually equals the system memory or some amount derived from it.

If the full amount of system memory isn't showing on Windows, that's likely an unrelated issue you're experiencing (for example, a UEFI/BIOS memory map mismatch), and it working on Linux implies that either Linux is fed a different memory layout or it parses this broken case fine, unlike Windows.

If applications aren't using all the memory, and it's also not showing up as cached, that's odd, as Windows usually targets around 80% of physical memory in use (unless you're really not using that many apps, or there's another driver issue going on). Different OSes account for memory usage differently, and there's rarely one single 'memory used' indicator in modern operating systems.


> If the full amount of system memory isn't showing on Windows that's likely an unrelated issue

The full amount of system memory is seen by Windows. It just refuses to hand out more than 16 GB of RAM to applications, so I assume (maybe wrongly) that this is related to the other 16 GB which the GPUs get assigned but have no use for.

The memory graph in the task manager (Performance) never goes above 50%.


Notably, I believe Trillian was a paid product, and Meebo was actually treated like a real tech startup. Interesting, indeed, how much the sentiment has shifted over the course of 10-15 years...


Qualcomm actually started offering LTS (long-term support) BSPs (board support packages) to device manufacturers for a fee. This is why some vendors like Samsung and Microsoft started providing updates for phones with older SoCs for longer than usual: they're paying the additional per-product license fee for being allowed to ship updated firmware.


I actually switched away from a UDM after finding out that I could only hit 500 Mbit/s uplink (out of ~930) due to a PPPoE performance bug as there's no hardware offloading and the old Cortex-A57 cores (in a SoC from a vendor now owned by Amazon, so extremely end-of-life) just couldn't handle that.

Now I'm running a Turris Omnia with the bundled OpenWRT fork for router tasks and that seems to work fine.


Why do you need to use PPPoE? Is that an ISP requirement? It seems uncommon nowadays to need PPPoE.


Not sure about the parent, but here in Brazil all ISPs still use PPPoE, even on gigabit fiber; it's a miracle when they can find a router able to push 800 Mbit under single-threaded PPPoE. I've yet to find a router capable of doing proper gigabit that isn't some enterprise machine that costs as much as a car.
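For what it's worth, PPPoE's protocol overhead is tiny; the bottleneck is CPU, because the encapsulation usually falls off routers' hardware-offload fast path. A rough back-of-the-envelope sketch (standard protocol constants, illustrative rather than measured):

```python
# How much of a gigabit link PPPoE overhead costs vs. plain Ethernet.
# These are standard protocol constants; real-world goodput is lower.

ETH_OVERHEAD = 7 + 1 + 14 + 4 + 12   # preamble, SFD, header, FCS, inter-frame gap
PPPOE_OVERHEAD = 6 + 2               # PPPoE header + PPP protocol field
ETH_MTU = 1500

def tcp_goodput_mbps(line_rate_mbps, pppoe=False):
    """Max TCP payload rate for full-size frames (IPv4, no options)."""
    ip_mtu = ETH_MTU - (PPPOE_OVERHEAD if pppoe else 0)   # 1492 with PPPoE
    payload = ip_mtu - 20 - 20                            # minus IP + TCP headers
    wire_bytes = ETH_MTU + ETH_OVERHEAD                   # whole frame on the wire
    return line_rate_mbps * payload / wire_bytes

print(round(tcp_goodput_mbps(1000), 1))              # ~949.3 plain Ethernet
print(round(tcp_goodput_mbps(1000, pppoe=True), 1))  # ~944.1 with PPPoE
```

So the 8 bytes of encapsulation only cost about 5 Mbit/s at gigabit line rate; anything slower than that is the router's CPU, not the protocol.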


In cases like yours, the best solution is probably to get an x86-based fanless mini PC built around a laptop CPU. Those can hit quite high single-threaded speeds and have enough resources to handle not just your routing but also light duty as a home server. Chinese brands like Qotom and Topton and a bunch of others are selling them on AliExpress. They're several hundred dollars, but still cheaper than a lot of enterprise gear, and you can get them with 5 or 6 Ethernet interfaces. Getting a separate consumer WiFi access point/router with minimal CPU power of its own is usually cheaper than trying to add an AP-capable WiFi card to a mini PC.


And if you're going to do that, just run OPNsense and, it being essentially a distro of full-blown FreeBSD, get all the security, flexibility, and scalability the machine can provide.


OPNsense security updates are delayed from FreeBSD ports by days to weeks.


Many fiber ISPs here in Europe seem to share the backend infrastructure between DSL and FTTH subscribers and that sadly also involves PPPoE encapsulation.


A major Romanian ISP uses PPPoE and I'm tempted to say that another one does it too and they're offering gigabit speed.


It's not uncommon for DSL at all.


Yeah, but DSL won't have a problem with speed, or with routers having too weak a CPU to handle it.

GPON does.


FYI: OpenWRT is just Linux and will run fine on x86 hardware as well.


For sure - I casually looked at OpenWRT on x86 and it seemed (on the surface at least) more fiddly than OPNsense. Updates and handling storage/partitions seemed less clear to me. But that was probably just me; I didn't dig too deep into it.


There are two ways of installing OpenWrt on x86:

1. Use the ext4 image and extend the main partition to the full size of the disk. This requires a lot of "fiddling" later when upgrading (as the parent wrote).

2. Use the combined squashfs image and don't touch the image layout (no resizing; keep the default ~100 MB / partition). This gives an easy upgrade experience like other embedded devices (get image, open UI or SSH, upload, flash, reboot, done). Oddly, this is the simplest option, yet it isn't made clear by the official wiki at all.

IMO, the best configuration is running your x86 box with Proxmox and running the squashfs OpenWrt in a VM. There's no need for more than 100 MB of space, and if you need to install many packages or apps, better to create another VM and use a standard Linux distro. Many apps are also more at home on a full-fledged OS than on the custom OpenWrt layout.

You only need 2 vCPUs and 256 MB of RAM to run standard OpenWrt at 1 Gbps (SQM included, if you have a recent CPU). The rest of your box's resources can be used for anything you want.


> I don't have much choice - it's not like there would be five competing apps serving the same purpose (connecting to the people and communities on Facebook).

The same legislation that is requiring Apple to allow sideloading also requires other large players (like Meta) to open their communication platforms up to other service or application developers.

In this hypothetical case, there actually would be five competing apps, some even still distributed on the App Store.


Titanfall did work this way on the EA launcher. There was a tool that it'd run on first launch to decompress the compressed audio files.


But why.

Even when you do want lossless assets, as long as you compress in reasonable chunks there is no downside.
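The chunking point can be sketched concretely: compress an asset in fixed-size blocks so any block decompresses independently, giving random access without inflating the whole file. This is a minimal illustration using Python's zlib; the 64 KiB chunk size is an arbitrary choice, not anything Titanfall or the EA launcher actually used.

```python
# Chunk-wise compression: each block is an independent zlib stream,
# so a single block can be decompressed without touching the rest.
import zlib

CHUNK = 64 * 1024  # arbitrary block size for this sketch

def compress_chunked(data: bytes) -> list[bytes]:
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def read_chunk(chunks: list[bytes], index: int) -> bytes:
    # Only the requested block is decompressed (random access).
    return zlib.decompress(chunks[index])

asset = bytes(range(256)) * 1024              # 256 KiB of sample data
chunks = compress_chunked(asset)
assert read_chunk(chunks, 2) == asset[2 * CHUNK:3 * CHUNK]
```

Real asset formats keep a per-chunk offset index alongside the blocks, so seeking costs one table lookup plus one block's worth of decompression.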


Unless the filesystem supports the transactional filesystem API, Windows Update won't even work in the first place. ReFS boot is, in fact, affected by this same issue at this time.


I thought that was deprecated since Windows Vista or 7.


It was introduced in Vista and deprecated for external use a bit later, but the servicing system is still a heavy user of it.


On Windows, I can enumerate a directory that accidentally got millions of files perfectly fine without the system itself breaking and with results being returned incrementally.

On Linux/ext4, it takes an eternity for an enumeration to even start returning results and the entire OS gets stuck on IO while doing so.
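The incremental-enumeration API shape is easy to demonstrate (in Python rather than the Win32 `FindFirstFile`/`FindNextFile` calls the comment alludes to): `os.scandir` returns an iterator, so the first entries come back without materializing the whole directory, unlike `os.listdir`. This only illustrates the API, not the ext4-vs-NTFS performance difference.

```python
# Incremental enumeration: os.scandir yields entries lazily, so you can
# stop after a handful of results even in a huge directory.
import itertools
import os
import tempfile

with tempfile.TemporaryDirectory() as d:
    for i in range(1000):                   # stand-in for "millions of files"
        open(os.path.join(d, f"f{i:04d}.dat"), "w").close()

    with os.scandir(d) as it:
        first_ten = list(itertools.islice(it, 10))   # stops after 10 entries

print(len(first_ten))  # 10
```

Whether the results actually arrive incrementally from the kernel then depends on the filesystem and syscall batching (`getdents` on Linux), which is where the ext4 behavior above comes from.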


True, but that's not a common enough use case to optimize for. I'd prefer Windows updates happened at the same speed Linux ones do.

