The easy answer that encompasses 99% of the target foods is:
Avoid any foods that involve multiple rounds of processing, where processing includes baking, frying, and adding preservatives, sugars, or oils. Generally, if it has a lot of sugar or oil and has a weirdly long shelf life, be suspicious.
Drift towards: easily washable (smooth/peelable) fruits and vegetables, 100% whole wheat bread products, simple meat products like whole animal parts or deboned animal parts.
Dairy lives in the middle ground. If you have zero lactose problem, most dairy is mostly okay, just watch for sugar levels and recognize that most dairy products are calorie dense. Nuts are in this group too for the same reason but oil instead of sugar.
Bonus points for consuming real probiotics and prebiotics when you can. In the United States this is pretty much limited to live-culture yogurts, refrigerated kimchi, and refrigerated sauerkraut.
They measure at a certain point. Jason Cammisa points it out pretty clearly in an episode of Carmudgeon, with the money quote either here[0] or in the link direct to YouTube here[1]:
> On a recent episode of the Carmudgeon Show podcast, auto journalist Jason Cammisa described a phenomenon occurring with some LED headlights in which there are observable minor spots of dimness among an otherwise bright field of light. “With complex arrays of LEDs and of optics,” he said, “car companies realized they can engineer in a dark spot where it’s being measured, but the rest of the field is vastly over-illuminated. And I’ve had now two car companies’ engineers, when I played stupid and said, ‘What’s the dark spot?’ … And the lighting engineers are all fucking proud of themselves: ‘That’s where they measure the fucking thing!’ And I’m like, ‘You assholes, you’re the reason that every fucking new car is blinding the shit out of everyone.’”
Intel and AMD have implemented these improvements with Lunar Lake and Strix Halo. You can buy an x86 laptop with Macbook-like efficiency right now if you know which SoCs to pick.
edit: Correction. I looked at the die image of Strix Halo and thought it looked like it had on-package RAM. It does not. It doesn't use PMIC either. Lunar Lake is the only Apple M-series competitor on x86 at the moment.
Subscriptions make companies lazy, and that degrades the product. I'm looking at you: Foundry, Adobe, Maxon, heated seats on BMWs ...
They rest on their laurels, enjoy the increased cash flow, and say it allows them to work on regular updates. But those updates go from useful bug fixes to merely shuffling the UI around, changing the fonts, introducing nonsensical features nobody asked for or can make use of, and gutting useful features for "streamlining" purposes... while longstanding bugs that actually need fixing remain unfixed.
Eventually customers become dissatisfied with the product and make up for lost features and a degraded user experience with a smörgåsbord of perpetually licensed or FOSS alternatives from various competitors, because customers too want to improve their own cash flow instead of being bled dry every month.
Companies that choose to offer lump-sum permanent licenses have to make a bigger effort to convince customers to upgrade, which means the product improves. Also it makes your customers more committed to your product. You should invite this kind of challenge and forgo the temptation to boost cash-flow because it keeps you on your toes. Subscription-only will seem great for a while but eventually you'll atrophy and fail.
Something similar happened when software went from being released on CDs/DVDs to regular patches and downloads. I'm not saying we need to go back to that era, but QA teams had to work harder back then because distribution was expensive. Nowadays you can release things in an unfinished and broken state.
I'll say, I recommend my Honeywell Sync earmuffs for the workshop. They block more noise, have a physical volume knob and accessible buttons, and best of all, the microphone itself is noise-cancelling, meaning you can usually have a conversation with someone while using a power tool or mowing the lawn without much issue. APP2 or 3 has never been able to displace these for me.
I've found Pop!_OS 24.04 beta, with COSMIC, to be more suitable for my preferences than Omarchy. You get the best of both worlds -- hybrid tiling experience. You can toggle between tiling (like Hyprland) and regular desktop environment (like GNOME).
I used cgroups, lxc, chroots, self-extracting executables. I built rugged, portable applications for UNICEF laptops and camps before docker was a thing.
And I think this whole point about "virtualization", "security", making the most use of hardware, reducing costs, and so on, while true, is an "Enterprise pitch" targeted at heads of tech and security. Nice side effects, but I couldn't care less.
There are real, fundamental benefits to containers for a solo developer running a solo app on a solo server.
Why? My application needs 2 or 3 other folders to write or read files into, maybe 2 or 3 other runtime executables (jvm, node, convert, think of the dozens of OSS CLI tools, not compile-time libraries), maybe apt-get install or download a few other dependencies.
Now I, as an indie developer, can "mkdir" a few directories from a shell script. But that "mkdir" will only work the first time; it will fail the second time saying "directory already exists". I can "apt-get install" a few things, but upgrading and versioning is a different story altogether. It's a matter of time before I realize I need at least some barebones ansible or state management. I can't tell you how many times I reinvented a "smallish" ansible in shell scripts before docker.
Now if I'm in an enterprise, I need to communicate this entire State of my app to the sysadmin teams. Forget security and virtualization and all that. I need to explain every single part of the state, versions of java and tomcat, the directories, and all of those are moving targets.
Containers reduce state management. A LOT. I can always "mkdir". I can always "apt-get install". It's an ephemeral image. I don't need to write half-broken shell scripts or use ansible or create mini-shell-ansible.
If you use a Dockerfile with docker-compose, you've solved 95% of state management. The only 5% left is to docker-compose the right source.
Skip the enterprisey parts. A normal field engineer or solo developer, like me, who's deploying a service on the field, even on my raspberry pi, would still use containers. It boils down to one word: "State management" which most people completely underestimate as "scripting". Containers grant a large control on state management to me, the developer, and simplify it by making it ephemeral. That's a big thing.
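To make the "state lives in one file" point concrete, here's a rough sketch of what that looks like. Everything in it is a made-up placeholder (the base image, the packages, the paths, app.jar), just to illustrate the shape:

```dockerfile
# Illustrative only: image, packages, and paths are placeholders.
FROM debian:stable-slim

# "apt-get install" always runs against a known-clean base image,
# so versions are declared here instead of drifting on a live server.
RUN apt-get update && apt-get install -y --no-install-recommends \
        openjdk-17-jre-headless imagemagick \
    && rm -rf /var/lib/apt/lists/*

# "mkdir" always succeeds: the image is rebuilt from scratch each time,
# so there is no "directory already exists" on the second run.
RUN mkdir -p /app/data /app/logs

COPY app.jar /app/
CMD ["java", "-jar", "/app/app.jar"]
```

The whole point is that this file replaces the half-broken shell script: rebuilding the image re-creates the state from zero every time, so the script never has to handle "what if it already ran once".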
Another reason emacs as an OS (not fully, but you know) is such a great way to get used to things you have on systems. Hence the quote: "GNU is my operating system, linux is just the current kernel".
As a greybeard linux admin, I agree with you though. This is why, when someone tells me they are learning linux, the first thing I tell them is to just type "info" into the terminal and read the whole thing; that will put them ahead of 90% of admins. What I don't say is why: because knowing what tooling is available as a built-in, with good docs, that you can modularly script around is basically the linux philosophy in practice.
Of course, we remember the days when systems only had vi and not even nano was a default, but since these days we do idempotent ci/cd configs, adding a tui-editor of choice should be trivial.
People keep trying to answer this question, so I'll try, too, but I'm going to do a better job than anyone else. ;-)
Passkeys are randomly generated passwords that are required to be managed by a password manager. All the major password managers support them, including Apple, Google, Microsoft, Mozilla, and 1Password.
By requiring the passkey to be managed by a password manager, you get some anti-phishing protection. A passkey includes metadata, including the website domain that created it, and the password managers simply won't provide the passkey to the wrong domain. They provide no way for you to copy and paste the passkey into a website, as you can with a password; there's no social-engineering technique someone can use to get you to copy and paste your passkey to an enemy.
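The anti-phishing property boils down to an exact-match lookup rule. Here's a deliberately simplified toy model of just that rule (the names `vault`, `register`, and `get_credential` are all hypothetical; real passkeys are WebAuthn key pairs, not strings):

```python
# Toy model of a passkey manager's domain binding (all names hypothetical).
# The only point illustrated: credentials are keyed by the exact domain
# that created them, and the manager releases them to that domain only.

vault = {}  # domain -> stored credential

def register(domain: str, credential: str) -> None:
    vault[domain] = credential

def get_credential(requesting_domain: str):
    # Exact-match lookup: a lookalike phishing domain gets nothing.
    # There is no "copy to clipboard" path a phisher could exploit.
    return vault.get(requesting_domain)

register("example.com", "credential-abc")
print(get_credential("example.com"))   # the stored credential
print(get_credential("examp1e.com"))   # None: lookalike domain, no match
```

The user never sees or types the credential, so there is nothing for a fake login page to harvest; the wrong domain simply gets `None`.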
A passkey manager is morally required to do an extra factor of authentication (e.g. fingerprint, Face ID, hardware keys, etc.) when you log in to a website, but the website has no way of knowing/proving whether that happened; they just get the password.
You reset your passkey the same way you reset your password, because passkeys are just passwords that have to be managed with a password manager. Some sites make it easy to reset your password, some make it hard. You know the drill; there's nothing new or different there.
If you're happy with your password manager, there's no real need to switch, but even very "sophisticated" password users have been known to fall prey to social-engineered phishing attacks.
Are you sure you're never going to copy-and-paste your password into the wrong hands? I don't trust myself that much.
Passkeys can make it harder to switch password managers because the password managers are designed not to let you copy-and-paste a passkey, including from Google's Password Manager to Apple's Password Manager. I think all the password managers kinda like that, and there's something good and bad about it.
I was quite surprised how different work was from school. There are a few specific considerations I never really see discussed:
- In school the entire class (i.e., all the students) can fail, which is less true at work. At work, you're just hiring your "section" of the bell curve, and insofar as being "successful" means "doing well at your job and not getting fired", a C or D student can potentially be happily and gainfully employed indefinitely. They might have to take a less prestigious job, but they can find their niche and their place. This one really surprised me. You just don't have freedom of movement in school the way you do at work, so at work anyone who is observant and hard working can pivot to a relatively good situation for themselves. That just is not true at school.
- You get nearly endless chances to fail at work, and you usually have a PIP period of weeks or months to parachute to another job if you actually encounter failure. I know some people who have been failures for an entire 30-40 year career.
- If you're bad at writing essays in school, that changes nothing; you simply need to write essays, and getting better or worse at it won't change the number of essays you have to write. At work, on the other hand, you can specialize, minimize your weaknesses, and play to your strengths. Yes, you can change positions to accomplish this, but even within a single position you can find ways to focus on the parts of the job you're best at and excel in that area.
- Very, very few jobs have anything which resembles testing. In the real world you must understand _why_ certain things need to be done, but almost everyone has the opportunity to pause and look up the details via references. Testing really does not represent this whatsoever. It's also the case that some tasks at work will be done over and over again, and in real depth, and via this depth and repetition you will actually memorize things via real behavioral reward mechanisms that are just not possible in a classroom environment.
- You can always seek more clarification in the real world, and can even negotiate your own limitations. Your boss has asked you to do something? Have a conversation with them and explain the limitations in the approach and what sort of partial approach you think might work. This works great in the real world but is much, much more limited in a classroom environment.
I could go on, but I was honestly shocked when I got my first job and I was actually a pretty good employee. This has been true ever since, but I was screwing up in school all the time.
As someone who has been to multiple trade shows to show off our own spreadsheet product that solves some of these problems (https://rowzero.io/home), I can tell you there are a bunch of data engineers and their managers who have a visceral hatred of spreadsheets but have trouble articulating any reason for it.
Leverage increases the dispersion of returns (so some companies are definitely out of business because of the leverage put on them), but the vast majority of LBOs are at least moderately successful.
> The only way I can convince myself to do it is by finding a suitably engaging show I can distract myself with on my phone while I huff and puff.
> Combine the task with something you enjoy. You know what makes cleaning out the garage a lot better? Some good tunes.
This motivational advice is deeply misguided. These are very clear examples of "dopamine stacking". The idea is that by combining a stimulating activity (e.g. watching a show/music) with a motivation-requiring activity (e.g. working out/cleaning) you can get an initial boost in motivation to accomplish the hard task. It works (initially) because the stimulating task (show/music) gives you a dopamine increase that feels like motivation to complete the hard task. The problem is that if you repeat this behavior with any consistency, your dopamine system quickly adjusts, treating the elevated activity-combo level of dopamine as the new baseline. Soon not even the dopamine you get from the combination is sufficient to motivate you to accomplish the task. At this point people often seek another short-lived dopamine-increasing stimulus to add into the mix.
You can see this pattern in people who exercise only with some combination of pre-workout, caffeine, music, phone scrolling.
The off-ramp is learning how to derive dopamine (aka "motivation") from the actual activity itself.
I use swaywm and kanshi [0]. It's write once, forget forever. I have one config for each of the display compositions I have (office, home, gaming, eDP...), and "it just works".
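For anyone curious what "one config per display composition" looks like, here's a rough sketch of a kanshi config. The output names and modes are examples only; your own names come from `swaymsg -t get_outputs`:

```
# ~/.config/kanshi/config -- illustrative; output names vary per machine.

profile laptop {
    output eDP-1 enable scale 1.5
}

profile office {
    output eDP-1 disable
    output "Dell Inc. U2720Q 123ABC" enable mode 3840x2160 position 0,0
}
```

kanshi watches for hotplug events and applies whichever profile matches the currently connected outputs, which is what makes it write-once, forget-forever.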
I realize this has not much to do with CPU choice per se, but I'm still gonna leave this recommendation here for people who like to build PCs to get stuff done with :) Since I've been able to afford it and the market has had them available, I've been buying desktop systems with proper ECC support.
I've chased flimsy but very annoying stability problems (some, of course, due to overclocking during my younger years, when it still had a tangible payoff) enough times on systems I'd built that taking this one BIG potential cause out of the equation is worth the few dozen extra bucks I have to spend on ECC-capable gear many times over.
Trying to validate an ECC-less platform's stability is surprisingly hard, because memtest and friends just aren't very reliable at detecting the subtler problems. Prime95, y-cruncher, and Linpack (in increasing order of effectiveness) are better than specialized memory-testing software in my experience, but they are not perfect either.
Most AMD CPUs (but not their APUs with potent iGPUs - there, you will have to buy the "PRO" variants) these days have full support for ECC UDIMMs. If your mainboard vendor also plays ball - annoyingly, only a minority of them enables ECC support in their firmware, so always check for that before buying! - there's not much that can prevent you from having that stability enhancement and reassuring peace of mind.
This doesn’t even come close to CodeCompanion[1], which doesn’t require any new LSP config/dependencies or filetype limitations.
There is no ability to share the current buffer(s) for context, no tool support. This seems like a checkbox release. You’re better off using CodeCompanion with Amazon Bedrock, which includes the added benefit of sovereignty.
Happy to hear about this. Actually, the budget should be increased, not reduced. In purely ROI terms, NASA has a stellar return on investment, with immense contributions to human civilisation beyond the US.
I worry that 7-Zip is going to lose relevance because of its lack of zstd support. zlib's performance is intolerable for large files, and zlib-ng's SIMD implementation only helps a bit here. Which is a shame, because 7-Zip is a pretty amazing container format, especially with its encryption and file-splitting capabilities.
100% correct - UUID strings are ASCII < 128 .. but there are impostors for character code 45 ("-"): Unicode has several lookalike dash characters. Assumption-based approaches to conversions or field handling should not be assumed safe just because UUID is "technically" ASCII. Only the keeper of the field can truly enforce that constraint.
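A quick sketch of the impostor problem: a string that looks exactly like a UUID but uses U+2010 (a Unicode hyphen) instead of U+002D is no longer ASCII and no longer a valid UUID, which is why parsing, rather than eyeballing, is the safe check:

```python
import uuid

def is_real_uuid(s: str) -> bool:
    """Validate by parsing; lookalike dash characters fail here."""
    try:
        uuid.UUID(s)
        return True
    except ValueError:
        return False

good = "123e4567-e89b-12d3-a456-426614174000"
# Visually identical, but with U+2010 (HYPHEN) instead of U+002D:
bad = good.replace("-", "\u2010")

print(is_real_uuid(good))  # True
print(is_real_uuid(bad))   # False
print(good.isascii(), bad.isascii())  # True False
```

`uuid.UUID` only strips the real ASCII hyphen before parsing the hex digits, so the impostor string fails validation even though it renders the same on screen.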
This concept has already been studied extensively, e.g. [1] (in 2000!) by people like Rivest and Chaum, who have decades of actual competence in the field.
My father was on chemotherapy with fludarabine, a DNA base analog. It gets incorporated during DNA replication but then doesn't function, so the daughter cells die.
Typically, patients who get this drug experience a lot of adverse effects, including a highly suppressed immune system and risk of serious infections.
I researched whether there was a circadian rhythm in the replication of either the cancer cells or the immune cells (lymphocyte and other progenitors), and found papers indicating that the cancer cells replicated continuously, but the progenitor cells replicated primarily during the day.
Based on this, we arranged for him to get the chemotherapy infusion in the evening, which took some doing, and the result was that his immune system was not suppressed in the subsequent rounds of chemo given using that schedule.
His doctor was very impressed, but said that since there was no clinical study, and it was inconvenient to do this, they would not be changing their protocol for other patients.