Any USB-C ports on my laptops go unused, so I exclusively buy devices with at least three USB-A ports. All of my peripherals and cables are USB-A, and it's so much easier to tinker with USB-A ports thanks to the smaller number of pins (easy to jam in multimeter probes, etc.).
Same for me, in most cases. I had a work laptop with a USB-C dock. I've got a USB-C ethernet adapter (not used right now, because my laptop has an ethernet port). I've got a couple of USB-C charging cables. Oh, and a DAC dongle that I typically use with my phone.
Then I've got dozens of charging cables, peripherals, storage devices, and such that I've collected over the past 25 or so years and still want to use.
It seems all this package does is install and configure ada-ts-mode and eglot (to use ada-language-server) with some custom commands (defined in config.lisp). Here's how I adapted it to my config: https://pastes.io/ada-config
Note that I added a format-on-save hook in my config, and since I do completions using corfu and not company, I didn't include that part of the package.
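For reference, the adaptation looks roughly like this (a minimal sketch of the idea, not the exact contents of the paste above; my/ada-eglot-format-on-save is just an illustrative name, and you'll need ada_language_server on your PATH plus the Ada tree-sitter grammar installed):

    ;; ada-ts-mode + eglot (ada-language-server) with format-on-save.
    ;; No company setup needed: corfu picks up eglot's completion-at-point function.
    (use-package ada-ts-mode
      :ensure t)

    (use-package eglot
      :ensure t
      :hook (ada-ts-mode . eglot-ensure)   ; start the language server in Ada buffers
      :config
      (defun my/ada-eglot-format-on-save ()
        "Format the current buffer through eglot before saving."
        (add-hook 'before-save-hook #'eglot-format-buffer nil t))
      (add-hook 'ada-ts-mode-hook #'my/ada-eglot-format-on-save))

Depending on your Emacs/eglot version, eglot may already map Ada buffers to ada_language_server; if not, add an entry to eglot-server-programs yourself.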
"Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature. CD distortion, the jitteriness of digital video, the crap sound of 8-bit - all of these will be cherished and emulated as soon as they can be avoided. It’s the sound of failure: so much modern art is the sound of things going out of control, of a medium pushing to its limits and breaking apart. The distorted guitar sound is the sound of something too loud for the medium supposed to carry it. The blues singer with the cracked voice is the sound of an emotional cry too powerful for the throat that releases it. The excitement of grainy film, of bleached-out black and white, is the excitement of witnessing events too momentous for the medium assigned to record them." -Brian Eno
My ham radio instructor (VE3XT) was a transmitter engineer who lamented the resurgence of tube amps in the 2000s. He said that what young people call “warmth” in sound is exactly the distortion that engineers like him worked tirelessly to eliminate.
I didn’t understand it at first, and then I saw the growing interest in compact cassettes and was metaphorically tearing out my hair.
> Whatever you now find weird, ugly, uncomfortable and nasty about a new medium will surely become its signature
Exactly how I feel about AI art today, and it doesn't make me super hopeful for the future. Hopefully there have been cases where that hasn't been true?
But then I remember how weird datamoshing seemed the first few times I saw it used as a transition in the "Off the Air" TV series/anthology; nowadays it makes me feel fuzzy and almost nostalgic instead?
Maybe you mean the dynamic-range clipping caused by the bad mastering of the loudness wars, but the CD as a medium can't be the source of audio distortion on its own, given its impressive 44.1 kHz sample rate at 16-bit depth.
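Back-of-the-envelope, for anyone who wants the numbers behind that claim (standard quantization-SNR formula for a full-scale sine wave, plus the Nyquist limit):

    \text{SNR}_{\max} \approx 6.02\,N + 1.76\ \text{dB} = 6.02 \times 16 + 1.76 \approx 98\ \text{dB},
    \qquad f_{\text{Nyquist}} = \frac{44.1\ \text{kHz}}{2} = 22.05\ \text{kHz}

So the format itself gives roughly 96-98 dB of dynamic range and covers frequencies past the edge of human hearing; any clipping on a loudness-wars-era CD was baked in at the mastering stage, not introduced by the medium.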
You speak of "The Loudness Wars." It was a fully intentional choice made by people who knew exactly what they were doing. For the most part, it was a matter of bowing to label pressure to make listeners think their stereo was louder.
Except for the guy who did Red Hot Chili Peppers' albums. He was already known for compressing sound, and he turned it up to 12 for the loudness wars. Californication is the poster child for the issue for a reason.
Notably, it needs KiCad to run and takes ~140 hours to boot.
Seems like the reason for KiCad and the slow operation is that the emulation happens at a really low level. They've written a program to digitize all of the computer's schematics and convert them into netlists, which are then turned into SystemC components for the emulator: https://github.com/Datamuseum-DK/R1000.HwDoc
This reminds me of the turnpike property in optimization: for some optimization problems, no matter what the initial and final conditions are, the optimal path passes through the same section. You can then exploit this to simplify the problem to getting onto this highway and getting off at the right times.
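Roughly, for a long-horizon optimal control problem \min \int_0^T \ell(x(t), u(t))\,dt with a steady-state optimum (\bar{x}, \bar{u}), the exponential form of the turnpike property says there are constants C, \mu > 0 such that

    \|x^*(t) - \bar{x}\| + \|u^*(t) - \bar{u}\| \le C\left(e^{-\mu t} + e^{-\mu (T - t)}\right),

i.e. apart from short transients near t = 0 and t = T, the optimal trajectory hugs the same "highway" no matter where it starts and ends.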
Why use machine learning to solve a problem that you already know how to solve? The classical method is easier to derive, run, and understand. Its failure cases can be reasoned about, clearly outlined, and fixed if needed. Your proposed replacement needs lots of training data, a training cost that has to be paid again whenever the data changes, a costlier runtime, and unknown failure cases that we can't reason about at all. And if we do hit such a failure case, the only thing we can do is change the data and cross our fingers. I don't see why learning-based methods would perform better here.
Maybe not applicable for the XR platform here, but you could add introspection capabilities not present in Linux, a la Genera letting the developer hotpatch driver-level code, or run all processes in a shared address space, which lets them pass pointers around instead of following the Unix model of serializing/deserializing data for communication (http://metamodular.com/Common-Lisp/lispos.html).