If you're laying a communications cable, just run fiber. It can carry any type of traffic at high data rates, and you can upgrade the speed over time by replacing the optics at the ends rather than the whole cable. Fiber plans are only expensive if your service level is expensive, or if you have to pay to get the line run to your building.
Yeah, as far as I know the only way to be sure is to put the parent process into a cgroup. Then, to kill all of the child processes, you freeze the cgroup, enumerate the pids, send SIGTERM/SIGKILL to all of them, and then unfreeze it again.
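Something like this, as a rough sketch for the cgroup v2 freezer (the cgroup path here is hypothetical, and you need write access to its files):

```python
import os
import signal

# Hypothetical cgroup the parent process was started in (cgroup v2).
CGROUP = "/sys/fs/cgroup/mygroup"

def kill_cgroup(cg: str, sig: int = signal.SIGTERM) -> None:
    # Freeze first so nothing can fork while we enumerate pids.
    with open(os.path.join(cg, "cgroup.freeze"), "w") as f:
        f.write("1")
    # cgroup.procs lists every pid currently in the cgroup.
    with open(os.path.join(cg, "cgroup.procs")) as f:
        pids = [int(line) for line in f if line.strip()]
    for pid in pids:
        os.kill(pid, sig)
    # Thaw so the processes can actually run their signal handlers.
    with open(os.path.join(cg, "cgroup.freeze"), "w") as f:
        f.write("0")

kill_cgroup(CGROUP)
```

The freeze is what closes the race: without it, a child could fork between reading cgroup.procs and the kill, and the new pid would be missed.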
Yeah, this is going to decimate your battery life. It's great to have in an emergency, don't get me wrong, but I'd probably leave data off otherwise when out somewhere remote.
I looked around last week to see if they are on the market yet, and found that Philips already seems to have helium-free MRIs for sale. I'm not sure if they just sealed it better or switched to REBCO.
My understanding is that most of the experiments for NMR with REBCO at this point are geared towards high field NMR (like 30T) whereas medical NMR field strength is down around 1-3T.
I don’t know what the cost delta is between LTS and HTS magnets, but I imagine the much broader applicability of HTS will bring manufacturing costs way down with economies of scale.
In the interim, people are still trying to figure out effective ways to joint the HTS sections.
The Linux kernel doesn't have a stable ABI. Thus, if a kernel function signature changes, or a subsystem gets refactored, etc., drivers get updated as part of the process. If the drivers lived outside the kernel tree, they would have to be updated separately by their own maintainers. That's less efficient and prone to breakage, so driver modules are generally merged into the kernel tree. Often they can even share code with drivers for other hardware!
Not sure what you mean by "userland" drivers here, but support for kernel modules written in Rust is actively being developed. It's already being used for kernel drivers like the Asahi Linux GPU driver for M1 Macs.
But you can write userspace drivers in any language, as long as that language has basic file I/O and mmap() support. There's nothing special about using Rust for userspace drivers.
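For example, here's a minimal Python sketch against a hypothetical UIO device (the device node and register offsets are made up, and real MMIO usually needs care about access widths - this just shows the plumbing):

```python
import mmap
import os

# Map the first memory region of a hypothetical UIO device.
fd = os.open("/dev/uio0", os.O_RDWR | os.O_SYNC)
regs = mmap.mmap(fd, mmap.PAGESIZE, mmap.MAP_SHARED,
                 mmap.PROT_READ | mmap.PROT_WRITE, offset=0)

# Hypothetical 32-bit registers: control at 0x10, status at 0x14.
CTRL, STATUS = 0x10, 0x14
regs[CTRL:CTRL + 4] = (1).to_bytes(4, "little")   # set an enable bit
status = int.from_bytes(regs[STATUS:STATUS + 4], "little")
print(f"status register: {status:#010x}")

regs.close()
os.close(fd)
```

That's the whole trick: the kernel side (UIO, VFIO, etc.) exposes the device memory as a file, and everything else is ordinary userspace code in whatever language you like.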
You're right - for whatever reason I had convinced myself YBCO had issues with both high current and high field. It's just high current - anything outside of a single grain has low current density.
Doing some more research to help make up for spreading incorrect info, I did stumble across this - https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5472374/ - it seems a variety of factors make most HTS not really commercially viable for MRI machines at the moment.
The idea would be to use LVM to get writeback caching at the block-device level, then use those cached LVM devices as backing devices for a ZFS pool.
I've tested this at a small prototype scale (one HDD and one NVMe SSD) and it worked quite well there. But like I said, it feels a bit fragile: lots of moving parts.
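For reference, this is roughly the shape of the setup, as a sketch rather than a hardened script (the device paths and the vg0/slow/fast/tank names are all made up, and --cachevol needs a reasonably recent LVM):

```python
import subprocess

def run(*cmd: str) -> None:
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Put the HDD and the NVMe SSD into one volume group.
run("pvcreate", "/dev/sdb", "/dev/nvme0n1")
run("vgcreate", "vg0", "/dev/sdb", "/dev/nvme0n1")

# Big LV on the HDD, smaller LV on the NVMe to act as the cache.
run("lvcreate", "-n", "slow", "-l", "100%PVS", "vg0", "/dev/sdb")
run("lvcreate", "-n", "fast", "-L", "100G", "vg0", "/dev/nvme0n1")

# Attach the NVMe LV as a writeback dm-cache in front of the HDD LV.
run("lvconvert", "-y", "--type", "cache", "--cachevol", "fast",
    "--cachemode", "writeback", "vg0/slow")

# Hand the cached LV to ZFS as an ordinary vdev.
run("zpool", "create", "tank", "/dev/vg0/slow")
```

Part of why it feels fragile: ZFS doesn't know there's a writeback cache underneath it, so losing the NVMe device can lose writes the pool believes were already committed.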