Restrictions on SNAP are tricky business. You can't always ask someone on SNAP to spend time preparing food. Prepared meals are expensive and often not accessible, and cooking from scratch is sometimes difficult for people with certain disabilities. It might seem strange, but I have known people, very poor people, who rely on "foods in bar and drink form" out of necessity. I have known poor people for whom eating fruit is physically challenging.
SNAP changes like this may well be better at a population health level; I have no evidence either way. But each restriction placed on food for people living in destitution may mean some people go hungry. (And that's setting aside issues of caloric density.) I would like to see better data, but sadly there is none.
+1 – it's all well and good for me to buy just some vegetables this week, because I have a pantry full of hundreds of dollars' worth of basics, spices, a herb garden, bulk rice/pasta (cheaper per unit, but more expensive up front), etc. I also have a single 9-5 job, so I can spend an hour each day cooking.
But if I had an empty kitchen, lacked the funds to invest in bulk purchases, and had 30 minutes to cook and eat, I'd be eating very differently.
As others have pointed out, that's not what the restriction seems to be limited to. The distinction isn't based on sugar content but on the amount of "processing", which rules out quite a lot of things beyond just candy and soda.
One can say "they probably had data to support it" about virtually any decision. It is not really a defense against critique. It may have been deliberate, but it still feels wrong and bad.
The fact is, most of the systems people use in their day-to-day that behave the way described simply require no mastery whatsoever. If your product, service, or device is locked behind learning a new skill, any skill, that will inherently limit the possible size of the audience, far more than most realize. We can rail against this reality, but it is unforgiving. The average person who is struggling to put food on the table only has so many hours in the week to spare to stick it to the man by learning a new operating system.
It seems we're all experiencing a form of sticker shock at the bill for the ease of use we've been demanding from software for the past few decades.
Cold take: honestly, just let users learn how to use your software. Put all your options in a consistent location in menus or whatever - it's fine. Yes, it might take them a little bit. No, they won't use every feature. Do make it as easy to learn as possible. Don't alienate the user with UI that changes under their feet.
Is "learning" now a synonym of "friction" in the product and design world? I gather this from many modern thinkpieces. If I am wrong, I would like to see an example of this kind of UI that actually feels both learnable and seamless. Clarity, predictability, learnability, reliability, interoperability, are all sacrificed on this altar.
> The explosive popularity of AI code generation shows users crave more control and flexibility.
I don't see how this follows.
The chart with lines and circles is quite thought-leadershipful. I do not perceive meaning in it, however (lines are jagged/bad, circles are smooth/good?).
I will at least remark that adding a new variant to an error enum is not a breaking change if the enum is marked #[non_exhaustive]. The compiler then requires downstream match statements on the enum to include a wildcard arm.
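A minimal sketch of how that plays out (the crate name mylib and the type names are made up):

    // In the library crate (call it mylib): adding a variant to this enum
    // later is not a semver-breaking change.
    #[non_exhaustive]
    pub enum ParseError {
        UnexpectedEof,
        InvalidHeader,
    }

    // In a downstream crate: the compiler rejects an exhaustive match on
    // mylib::ParseError and insists on a wildcard arm, so any variant
    // added later falls through it silently.
    fn describe(err: &mylib::ParseError) -> &'static str {
        match err {
            mylib::ParseError::UnexpectedEof => "input ended early",
            mylib::ParseError::InvalidHeader => "malformed header",
            _ => "unknown parse error",
        }
    }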
However, I wouldn't recommend it. Breaking downstream code over error changes is not necessarily a bad thing. If you need to change the API for your errors and downstreams are required to have a wildcard arm, they will silently accept new error variants without at least checking what those new variants are for. That is disadvantageous in a number of significant cases.
Indeed, there's almost always a way around Rust's ergonomic rough spots, but most of those rough spots are there to provide a guarantee or express an assumption, increasing the chance that your code will do what's intended. While that safety can feel a bit overblown even for some large systems projects, for a lot of things Rust is just not the right tool if you don't need the guarantees.
On that topic, I've looked a bit at building games in Rust, but it mostly looks like you'd be creating problems for yourself. Using it for performant backend algorithms and containerised logic could be nice, though.
The fight for this kind of legislation has been ongoing for many years as part of a broader program that seeks to shape the kinds of information that can be stored, consumed, and propagated on the Internet. Age verification is only one branch of the fight, but an important one to the many who support government control: it is an inroad that allows governments to say they have a stake in who sees what.
I'll admit this may be naive, but I don't see the problem based on your description. Split each step into its own private function, pass the context by reference / as a struct, unit test each function to ensure its behavior is correct. Write one public orchestrator function which calls each step in the appropriate sequence and test that, too. Pull logic into helper functions whenever necessary, that's fine.
I do not work in finance, but I've written some exceptionally complex business logic this way. With a single public orchestrator function you can just leave the private functions in place next to it. Readability and testability are enhanced by chunking out each step and making logic obvious. Obviously this is a little reductive, but what am I missing?
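For what it's worth, a bare-bones sketch of the shape I mean, in Rust purely to have something concrete (all the names and the toy parsing rule are invented):

    // Shared state that every step reads from and writes to.
    pub struct Context {
        raw: Vec<String>,
        parsed: Vec<Record>,
        valid: bool,
    }

    pub struct Record {
        amount: i64,
        memo: String,
    }

    // Each step is a small private function that takes the context by
    // mutable reference, so it can be unit tested in isolation.
    fn parse(ctx: &mut Context) {
        ctx.parsed = ctx
            .raw
            .iter()
            .filter_map(|line| line.split_once(','))
            .map(|(amount, memo)| Record {
                amount: amount.trim().parse().unwrap_or(0),
                memo: memo.trim().to_string(),
            })
            .collect();
    }

    fn validate(ctx: &mut Context) {
        ctx.valid = ctx.parsed.iter().all(|r| r.amount != 0 && !r.memo.is_empty());
    }

    // The single public orchestrator just sequences the steps; it gets its
    // own test, and the private steps stay right next to it in the file.
    pub fn process(raw: Vec<String>) -> Context {
        let mut ctx = Context { raw, parsed: Vec::new(), valid: false };
        parse(&mut ctx);
        validate(&mut ctx);
        ctx
    }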
You're not missing much - what you describe is roughly what I do. My original comment was pushing back against the "70 lines max" orthodoxy, not against splitting at all.
The nuance: the context struct approach works well when steps are relatively independent. It gets messy when step 7 needs to conditionally branch based on something step 3 discovered, and step 12 needs to know about that branch. You end up with flags and state scattered through the struct, or you start passing step outputs explicitly, and the orchestrator becomes a 40-line chain of if/else deciding which steps to run.
For genuinely linear pipelines (parse → transform → validate → output), private functions + orchestrator is clean. For pipelines with lots of conditional paths based on earlier results, I've found keeping more in the orchestrator makes the branching logic visible rather than hidden inside step functions that check context.flags.somethingWeird.
Probably domain-specific. Financial data has a lot of "if we detected X in step 3, skip steps 6-8 and handle differently in step 11" type logic.
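If it helps, here's roughly the shape I end up with, in Rust purely for concreteness (step names, fields, and the detection rules are all invented): the branch an early step decides is spelled out in the orchestrator instead of hidden inside later steps that check context flags.

    pub struct Ctx {
        txns: Vec<Txn>,
        transfer_pairs: usize, // filled in by detect_transfers
        low_confidence: bool,  // filled in by match_patterns
    }

    pub struct Txn {
        amount: i64,
        memo: String,
    }

    fn detect_transfers(ctx: &mut Ctx) {
        // Invented rule: a positive amount with a matching negative amount
        // elsewhere is treated as an internal transfer pair.
        ctx.transfer_pairs = ctx
            .txns
            .iter()
            .filter(|a| a.amount > 0 && ctx.txns.iter().any(|b| b.amount == -a.amount))
            .count();
    }

    fn match_patterns(ctx: &mut Ctx) {
        ctx.low_confidence = ctx.txns.iter().any(|t| t.memo.is_empty());
    }

    fn queue_for_review(_ctx: &mut Ctx) { /* ... */ }
    fn post_automatically(_ctx: &mut Ctx) { /* ... */ }

    pub fn reconcile(txns: Vec<Txn>) -> Ctx {
        let mut ctx = Ctx { txns, transfer_pairs: 0, low_confidence: false };
        detect_transfers(&mut ctx);

        // The "if we found X early, handle later steps differently" logic
        // lives here, where it reads top to bottom, not inside the steps.
        if ctx.transfer_pairs > 0 {
            queue_for_review(&mut ctx);
        } else {
            match_patterns(&mut ctx);
            if ctx.low_confidence {
                queue_for_review(&mut ctx);
            } else {
                post_automatically(&mut ctx);
            }
        }
        ctx
    }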
Happy to elaborate. The core problem is bank statement reconciliation - matching raw bank transactions to your accounting records.
Sounds simple until you hit the real-world mess:
1. *Ambiguous descriptions*: "CARD 1234 AMAZON" could be office supplies, inventory, or someone's personal expense on the company card. Same vendor, completely different accounting treatment.
2. *Sequential dependencies*: You need to detect transfers first (money moving between your own accounts), because those shouldn't hit expense/income at all. But transfer detection needs to see ALL transactions across ALL accounts before it can match pairs. Then pattern matching runs, but its suggestions might conflict with the transfer detection. Then VAT calculation runs, but some transactions are VAT-exempt based on what pattern matching decided.
3. *Confidence cascades*: If step 3 says "70% confident this is office supplies," step 7 needs to know that confidence when deciding whether to auto-post or flag for review. But step 5 might have found a historical pattern that bumps it to 95%. Now you're tracking confidence origins alongside confidence scores (a small sketch of this follows below).
4. *The "almost identical" trap*: "AMAZON PRIME" and "AMAZON MARKETPLACE" need completely different treatment. But "AMZN MKTP" and "AMAZON MARKETPLACE" are the same thing. Fuzzy matching helps, but too fuzzy and you miscategorize; too strict and you miss obvious matches.
5. *Retroactive corrections*: User reviews transaction 47 and says "actually this is inventory, not supplies." Now you need to propagate that learning to similar future transactions, but also potentially re-evaluate transactions 48-200 that already processed.
The conditional branching gets gnarly because each step can short-circuit or redirect later steps based on what it discovered. A clean pipeline assumes linear data flow, but this is more like a decision tree where the branches depend on accumulated state from multiple earlier decisions.
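To make the confidence-cascade point (3) a bit more concrete, here's the kind of state I end up threading through, again in Rust with every name and threshold invented:

    // A score that remembers where it came from, so a later step can treat
    // "0.70 from a keyword match" differently from "0.95 bumped by history".
    #[derive(Clone, Copy, Debug)]
    enum ConfidenceSource {
        KeywordMatch,
        HistoricalPattern,
        UserRule,
    }

    #[derive(Clone, Copy, Debug)]
    struct Confidence {
        score: f64,
        source: ConfidenceSource,
    }

    #[derive(Debug)]
    enum Decision {
        AutoPost,
        FlagForReview,
    }

    // An earlier step may bump the score when a historical pattern matches.
    fn apply_history(c: Confidence, seen_before: bool) -> Confidence {
        if seen_before {
            Confidence { score: c.score.max(0.95), source: ConfidenceSource::HistoricalPattern }
        } else {
            c
        }
    }

    // A later step decides based on both the score and its origin.
    fn decide(c: Confidence) -> Decision {
        match c.source {
            ConfidenceSource::UserRule => Decision::AutoPost,
            _ if c.score >= 0.9 => Decision::AutoPost,
            _ => Decision::FlagForReview,
        }
    }

    fn main() {
        let c = Confidence { score: 0.70, source: ConfidenceSource::KeywordMatch };
        let c = apply_history(c, true); // bumped to 0.95 by the history step
        println!("{:?}", decide(c)); // AutoPost

        let manual = Confidence { score: 0.50, source: ConfidenceSource::UserRule };
        println!("{:?}", decide(manual)); // AutoPost: user rules always win
    }

The point is just that the origin travels with the score, so a later step can make a different call on the same number depending on how it was produced.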
Correctness is a poor way to distinguish between human-authored and AI-generated content. Even if it worked, which I doubt (humans make wrong statements too), it does nothing to help someone who doesn't know much about what they're searching for.