Interesting but I wasn't entirely convinced that it's fake. There might be more to this story. Maybe the seller is the one who got scammed when they bought it 20 years ago.
Maybe they did. Do you suppose they got scammed on eBay - which they "perfectly" remember buying it from - or from the Red Cross, which they have an invoice for?
That German Red Cross invoice is 100% absolutely a fake.
Source: I'm a German who has donated to the Red Cross. That ain't it.
(Also: there needs to be a specific key phrase without which the German equivalent of the IRS will come down on you like a ton of bricks. It is missing.)
Which is kind of amazing when you think about it. There's no "The Apple Team" group photo that includes employee #10 anywhere? No wonder the scammer chose her when making the fake; it would be impossible to verify the photo without tracking down either Sherry herself or someone who personally knew her back in 1977.
I know that, I wasn’t expressing incredulity at that, I was expressing incredulity at the price of 3,000 DM… while also wanting to make clear that it wasn’t 3 DM.
Yes, I think some of the items here are reasonable but some of them (like full virtualization of a desktop OS) don’t fit the iPad model. Just get a Mac at that point.
Why would virtualization not fit the iPad model? It's all about isolating apps from each other and from the system, and ensuring that no mucking around can render the system inoperable. Virtualization is the perfect solution to enabling legacy or technical workflows on such a device without compromising the core OS.
Any app can be evicted from memory at any time, requiring a full restart. The VM could be terminated with zero notice, aborting the guest OS kernel and leaving its filesystem in an arbitrary state. There is no swap on iOS / iPadOS, but even if there were, you wouldn't want it burning out your soldered-on storage chip.
If the answer is that this VM program is the only one that never gets killed under memory pressure, bear in mind most iPads still have only 8 GB of RAM, so the guest OS will have to get even less than that, not leaving very much for any workflows on either the host or the guest from that point on. Even that is still less capable than macOS on the same 8 GB of RAM, because at least macOS can use swap.
It's egregious that you can still buy laptops with 8 GB of RAM today, but at least all of that is available to the OS all the time, apps don't just abort under memory pressure, and it can swap.
Virtualization may make more sense with a different host or guest platform, but in this particular combination, memory will always be a problem. The lack of swap compounds that further. (I'm not sure about memory compression in iOS but you can't exactly rely on that for already-compressed data)
Even if the host and guest share memory to avoid the hard cutoff, there still needs to be a way for the iPad to free up memory on demand, given that it cannot use swap.
Full virtualization would be fantastic for doing web development on iPad. Imagine if iSH didn't have to do the threaded-code ROP dance and could just have a full-performance ARM64 Linux distro sitting inside of itself.
Does anyone know of a good way to enable TCP_NODELAY on sockets when you don't have access to the source for that application? I can't find any kernel settings to make it permanent, or commands to change it after the fact.
I've been able to disable delayed acks using `quickack 1` in the routing table, but it seems particularly hard to enable TCP_NODELAY from outside the application.
I've been having exactly the problem described here lately, when communicating between an application I own and a closed source application it interacts with.
> Would some kind of LD_PRELOAD interception for socket(2) work?
That would only work if the call goes through libc and the binary isn't statically linked. However, it's becoming more and more common to do system calls directly, bypassing libc; the Go language is infamous for doing that, but there are also things like the rustix crate for Rust (https://crates.io/crates/rustix), which does direct system calls by default.
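When it does go through a dynamic libc, a minimal sketch of such a shim might look like the following (the file name and the choice to hook connect() rather than socket() are just for illustration, not anything standard):

    /* nodelay_preload.c - sketch of an LD_PRELOAD shim that forces
       TCP_NODELAY on every TCP socket the target program connects.
       Build: gcc -shared -fPIC nodelay_preload.c -o nodelay_preload.so -ldl
       Run:   LD_PRELOAD=./nodelay_preload.so ./closed_source_app */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    int connect(int fd, const struct sockaddr *addr, socklen_t len) {
        static int (*real_connect)(int, const struct sockaddr *, socklen_t);
        if (!real_connect)
            real_connect = (int (*)(int, const struct sockaddr *, socklen_t))
                dlsym(RTLD_NEXT, "connect");

        int type = 0;
        socklen_t optlen = sizeof(type);
        /* Only touch stream sockets; datagram sockets have no TCP_NODELAY. */
        if (getsockopt(fd, SOL_SOCKET, SO_TYPE, &type, &optlen) == 0 &&
            type == SOCK_STREAM) {
            int one = 1;
            setsockopt(fd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof(one));
        }
        return real_connect(fd, addr, len);
    }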
And Go is wrong for doing that, at least on Linux. It bypasses optimizations in the vDSO in some cases. On Fuchsia, we made direct syscalls that don't go through the vDSO illegal, and the hacks that required in Go were funny. The system ABI of Linux really isn't the syscall interface, it's the system libc. That's because the C ABI (and the behaviors of the triple it was compiled for) and its isms for that platform are the lingua franca of that system. Going around that to call syscalls directly, at least for the 90% of useful syscalls on the system that are wrapped by libc, is asinine, creates odd bugs, and makes crash reporters, heuristic unwinders, debuggers, etc. all more painful to write. It also prevents the system vendor from implementing user-mode optimizations that avoid mode and context switches where possible. We tried to solve these issues in Fuchsia, but for Linux, Darwin, and hell, even Windows, if you are making direct syscalls and it's not for something really special and bespoke, you are just flat-out wrong.
> The system ABI of Linux really isn't the syscall interface, it's the system libc.
You might have reasons to prefer to use libc; some software has reason to not use libc. Those preferences are in conflict, but one of them is not automatically right and the other wrong in all circumstances.
Many UNIX systems did follow the premise that you must use libc and the syscall interface is unstable. Linux pointedly did not, and decided to have a stable syscall ABI instead. This means it's possible to have multiple C libraries, as well as other libraries, which have different needs or goals and interface with the system differently. That's a useful property of Linux.
There are a couple of established mechanisms on Linux for intercepting syscalls: ptrace and BPF. If you want to intercept all uses of a syscall, intercept the syscall. If you want to intercept a particular glibc function in programs using glibc, or for that matter a musl function in a program using musl, go ahead and use LD_PRELOAD. But the Linux syscall interface is a valid and stable interface to the system, and that's why LD_PRELOAD is not a complete solution.
It's true that Linux has a stable-ish syscall table. What is funny is that this caused a whole series of Samsung Android phones to reboot randomly with some apps: Samsung added a syscall at the same position someone else did in upstream Linux, and folks statically linking their own libc to avoid Bionic were rebooting phones when calling certain functions, because the Samsung syscall caused kernel panics when called wrong. Goes back to it being a bad idea to subvert your system libc. Now, distro vendors do give out multiple versions of a libc that all work with your kernel. This generally works. When we had to fix ABI issues this happened a few times. But I wouldn't trust building our own libc and assuming that libc is portable to any Linux machine you copy it to.
> It's true that Linux has a stable-ish syscall table.
It's not "stable-ish", it's fully stable. Once a syscall is added to the syscall table on a released version of the official Linux kernel, it might later be replaced by a "not implemented" stub (which always returns -ENOSYS), but it will never be reused for anything else. There's even reserved space on some architectures for the STREAMS syscalls, which were AFAIK never on any released version of the Linux kernel.
The exception is when creating a new architecture; for instance, the syscall tables for 32-bit x86 and 64-bit x86 have completely different orders.
I think what they meant (judging by the example you ignored) is that the table changes (even if append-only) and you don't know which version you actually have when you statically compile your own version. Thus, your syscalls might be using a newer version of the table, but the running kernel might a) not actually implement it, or b) implement something bespoke there.
> Thus, your syscalls might be using a newer version of the table, but the running kernel might a) not actually implement it,
That's the same case as when a syscall is later removed: it returns -ENOSYS. The correct way is to do the call normally as if it were implemented, and if it returns -ENOSYS, you know that this syscall does not exist in the currently running kernel, and you should try something else. That is the same no matter whether it's compiled statically or dynamically; even a dynamic glibc has fallback paths for some missing syscalls (glibc has a minimum required kernel version, so it does not need to have fallback paths for features introduced a long time ago).
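As a sketch of that fallback pattern (using getrandom as a stand-in example, since it's simpler to call raw than clone3; not glibc's actual code):

    /* Issue the syscall directly and fall back if the running kernel is
       too old to know about it (it returns ENOSYS in that case). */
    #include <errno.h>
    #include <fcntl.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static ssize_t get_random_bytes(void *buf, size_t len) {
        long r = syscall(SYS_getrandom, buf, len, 0);
        if (r >= 0 || errno != ENOSYS)
            return r;
        /* Pre-3.17 kernel: no getrandom, read /dev/urandom instead. */
        int fd = open("/dev/urandom", O_RDONLY);
        if (fd < 0)
            return -1;
        ssize_t n = read(fd, buf, len);
        close(fd);
        return n;
    }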
> or b) implement something bespoke there.
There's nothing you can do to protect against a modified kernel which does something different from the upstream Linux kernel. Even going through libc doesn't help, since whoever modified the Linux kernel to do something unexpected could also have modified the C library to do something unexpected, or libc could trip over the unexpected kernel changes.
One example of this happening is with seccomp filters. They can be used to make a syscall fail with an unexpected error code, and this can confuse the C library. More specifically, a seccomp filter which forces the clone3 syscall to always return -EPERM breaks newer libc versions which try the clone3 syscall first, and then fallback to the older clone syscall if clone3 returned -ENOSYS (which indicates an older kernel that does not have the clone3 syscall); this breaks for instance running newer Linux distributions within older Docker versions.
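For illustration, a filter that wants to deny clone3 without breaking those fallbacks should return ENOSYS rather than EPERM. A minimal sketch (assumes x86-64 and headers new enough to define __NR_clone3; the architecture check and error handling are omitted):

    #include <errno.h>
    #include <linux/filter.h>
    #include <linux/seccomp.h>
    #include <stddef.h>
    #include <sys/prctl.h>
    #include <sys/syscall.h>

    int main(void) {
        /* If the syscall number is clone3, fail it with ENOSYS (so libc
           falls back to clone); otherwise allow the syscall through. */
        struct sock_filter filter[] = {
            BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
                     offsetof(struct seccomp_data, nr)),
            BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, __NR_clone3, 0, 1),
            BPF_STMT(BPF_RET | BPF_K,
                     SECCOMP_RET_ERRNO | (ENOSYS & SECCOMP_RET_DATA)),
            BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
        };
        struct sock_fprog prog = {
            .len = sizeof(filter) / sizeof(filter[0]),
            .filter = filter,
        };
        prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0);
        prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
        /* ... exec the contained program here ... */
        return 0;
    }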
Every kernel I’ve ever used has been different from an upstream kernel, with custom patches applied. It’s literally open source, anyone can do anything to it that they want. If you are using libc, you’d have a reasonable expectation not to need to know the details of those changes. If you call the kernel directly via syscall, then yeah, there is nothing you can do about someone making modifications to open source software.
The complication with the Linux syscall interface is that it turns "worse is better" up to 11. Like, setuid works on a per-thread basis, which is seriously not what you want, so every program/runtime must do this fun little thread stop-and-start-and-thunk dance.
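A quick sketch of why that dance exists (illustrative only; needs to run as root so setuid can succeed, and 1000 is just an arbitrary unprivileged uid):

    /* Raw setuid only changes the calling thread's credentials; glibc's
       setuid() wrapper has to signal every other thread and repeat the
       syscall there so the whole process agrees. Build with -pthread. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    static void *worker(void *arg) {
        (void)arg;
        sleep(2);
        printf("worker thread uid: %ld\n", syscall(SYS_getuid)); /* still 0 */
        return NULL;
    }

    int main(void) {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        sleep(1);
        syscall(SYS_setuid, 1000);              /* raw: this thread only */
        printf("main thread uid:   %ld\n", syscall(SYS_getuid)); /* 1000 */
        pthread_join(t, NULL);
        return 0;
    }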
Yeah, agreed. One of the items on my long TODO list is adding `setuid_process` and `setgid_process` and similar, so that perhaps a decade later when new runtimes can count on the presence of those syscalls, they can stop duplicating that mechanism in userspace.
You seem to be saying 'it was incorrect on Fuchsia, so it's incorrect on Linux'. No, it's correct on Linux, and incorrect on every other platform, as each platform's documentation is very clear on. Go did it incorrectly on FreeBSD, but that's Go being Go; they did it in the first place because it's a Linux-first system and it's correct on Linux. And glibc does not have any special privilege; the vDSO optimizations it takes advantage of are just as easily taken advantage of by the Go compiler. There's no reason to bucket Linux with Windows on the subject of syscalls when the Linux manpages are very clear that syscalls are there to be used and document them exhaustively, while MSDN is very clear that the system interface is kernel32.dll and ntdll.dll, and shuffles the syscall numbers every so often so you don't get any funny ideas.
> The system ABI of Linux really isn't the syscall interface, it's the system libc.
Which one? The Linux Kernel doesn't provide a libc. What if you're a static executable?
Even on Operating Systems with a libc provided by the kernel, it's almost always allowed to upgrade the kernel without upgrading the userland (including libc); that works because the interface between userland and kernel is syscalls.
That certainly ties something that makes syscalls to a narrow range of kernel versions, but it's not as if dynamically linking libc means your program will be compatible forever either.
In the case where you're running an Operating System that provides a libc and is OK with removing older syscalls, there's a beginning and an end to support.
Looking at FreeBSD under /usr/include/sys/syscall.h, there's a good number of retired syscalls.
On Linux under /usr/include/x86_64-linux-gnu/asm/unistd_32.h I see a fair number of missing numbers --- not sure what those are about, but 222, 223, 251, 285, and 387-392 are missing. (on Debian 12.1 with linux-image-6.1.0-12-amd64 version 6.1.52-1, if it matters)
> And Go is wrong for doing that, at least on Linux. It bypasses optimizations in the vDSO in some cases.
Go's runtime does go through the vDSO for syscalls that support it, though (e.g., [0]). Of course, it won't magically adapt to new functions added in later kernel versions, but neither will a statically-linked libc. And it's not like it's a regular occurrence for Linux to add new functions to the vDSO, in any case.
Linux doesn't even have consensus on what libc to use, and ABI breakage between glibc and musl is not unheard of. (Probably not for syscalls but for other things.)
The proliferation of Docker containers seems to go against that. Those really only work well since the kernel has a stable syscall ABI.
So much so that you see Microsoft switching to a stable syscall ABI with Windows 11.
"Decoupling the User/Kernel boundary in Windows is a monumental task and highly non-trivial, however, we have been working hard to stabilize this boundary across all of Windows to provide our customers the flexibility to run down-level containers"
It's not that much work; after all, every libc needs to have its own implementation. The kernel maps the vDSO into memory for you, and gives you the base address as an entry in the auxiliary vector.
But using it does require some basic knowledge of the ELF format on the current platform, in order to parse the symbol table. (Alongside knowledge of which functions are available in the first place.)
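Getting the base address really is just one getauxval() call; the ELF parsing is the fiddly part. A sketch, assuming a 64-bit platform:

    #include <elf.h>
    #include <stdio.h>
    #include <sys/auxv.h>

    int main(void) {
        /* The kernel exposes the vDSO's load address in the aux vector. */
        Elf64_Ehdr *vdso = (Elf64_Ehdr *)getauxval(AT_SYSINFO_EHDR);
        if (!vdso) {
            puts("no vDSO mapped");
            return 1;
        }
        /* From here a runtime walks the dynamic symbol table to find
           entries like __vdso_clock_gettime and calls them directly. */
        printf("vDSO ELF image at %p, %u program headers\n",
               (void *)vdso, (unsigned)vdso->e_phnum);
        return 0;
    }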
It's hard work to NOT have the damn vDSO invade your address space. It's the only kludgy part of Linux, well, apart from Nagle's, dlopen, and that weird zero-copy kernel patch that mmap'd -each- socket recv(!) for a while.
It's possible, but tedious: if you disable ASLR to put the stack at the top of virtual memory, then use ELF segments to fill up everything from the mmap base downward (or upward, if you've set that), then the kernel will have nowhere left to put the vDSO, and give up.
(I investigated vDSO placement quite a lot for my x86-64 tiny ELF project: I had to rule out the possibility of a tiny ELF placing its entry point in the vDSO, to bounce back out somewhere else in the address space. It can be done, but not in any ELF file shorter than one that enters its own mapping directly.)
Curiously, there are undocumented arch_prctl() commands to map any of the three vDSOs (32, 64, x32) into your address space, if they are not already mapped, or have been unmapped.
Depending on the specifics, you might be able to add socat in the middle.
Instead of:
your_app -> server
you’d have:
your_app -> localhost_socat -> server
socat has command-line options for setting TCP_NODELAY. You’d need to convince your closed source app to connect to localhost, though. But if it’s doing a DNS lookup, you could probably convince it to connect to localhost with an /etc/hosts entry.
Since your app would be talking to socat over a local socket, the app’s own TCP_NODELAY setting (or lack of one) wouldn’t matter much.
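Something roughly like this, assuming the server listens on port 5000 (double-check the option names against your socat version):

    socat TCP-LISTEN:5000,bind=127.0.0.1,reuseaddr,fork TCP:203.0.113.10:5000,nodelay

203.0.113.10 stands in for the real server's IP; connecting by IP means the /etc/hosts entry that redirects the app's hostname to 127.0.0.1 won't loop socat back into itself.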
Why is everyone ignoring the user side? It's not that there are equal numbers of users and a Taylor Swift listener just listens more; there are millions more users listening to Taylor Swift. Radiohead could actually do better with the much larger user base Swift brings. Without more data, it's hard to know.
I was going to make an example of what would happen if restaurants used the Spotify model and how people would hate it because they'd be paying for other people's tables, but I guess that's what buffets and all-you-can-eat sushi places are like. Not exactly shining examples of quality for the end user, though.
1. The article literally says ‘ “Spotify already pays nearly 70% of every dollar it generates from music to the record labels and publishers that own the rights for music, and represent and pay artists and songwriters," it continues.’
2. It’s a simplified example for explanatory purposes
Yes that’s the first thing I thought of. Seems like a terrible name for an app meant to strengthen relationships. App idea seems good but my unsolicited advice would be to change the name asap.
The article gets it wrong and sounds like a WorldCoin press release in the first sentence:
"The Office of the Data Protection Commissioner (ODPC) has cautioned Kenyans against sharing their details with WorldCoin, a financial public utility that allows the public to easily access digital currencies without the need for a third party to facilitate the transactions."
I was also very surprised by the lack of polish visible in the first 15 minutes of gameplay. It’s still a very good looking game but there are enough rough edges that it’s noticeable and doesn’t seem up to Blizzard’s old standards.
For me, it didn't feel polished until a few patches in... Necromancers, and Challenge Rifts, and Seasonal Content were what made Diablo 3 for me.
At launch, D3 was pretty dull. We ran through it, and were bored a day later. All it really had was Adventure Mode, and most of the sets were relatively weak.
Farming was just slow and tedious. It never felt rewarding to run around a low-density world map... the abilities and speed tuning were just off. To be honest, it was pretty boring.
A lot of people quit after getting to max level. Diablo was always a "dungeon grinder" but the grind just wasn't enjoyable. Like at all. At launch the grind just sucked.
There was this quirky little auction house, but it felt like a bolt-on afterthought. I don't think anyone really used it.
Over time they really sped up the gameplay considerably with high-density Rift maps. All the set synergy, and class combo synergy, started to kick in. Adding The Cube, and The Vault, and a bunch of fun little things that sped up the game.
Arguably, inventory management didn't get "slightly less crappy" until fairly recently -- maybe a year ago? I forget when they added Search.
So yeah, like now it's polished. But 11 years ago at launch it certainly had some rough spots!
For what it's worth, the person you're replying to is using "polish" in the narrow sense of look and feel. D3 was quite polished at launch, but had no real depth to it, as you say.
What they fixed along the years was the endgame gameplay, but even day one, it was a really polished game, and the first 10 hours or so were solid.
It broke down after that: the auction house that broke the looting dynamic, the lack of items with interesting effects, the limited amount of procedural generation, the focus on a single number (DPS), plus the usual balancing issues.
If you took the game just for the main campaign and stopped after that, you would have a great time. But most people don't play hack and slash like that, especially not the most vocal players.