You never need WASM (or any other language, bytecode format, etc.) to talk to LLMs. But WASM provides things people might like for agents, e.g. strict sandboxing by default.
All ordinary "room temperature and pressure" matter that we're used to -- that we're made of -- can be thought of as bathtub foam compared to neutron-star stuff, which is more like a tungsten brick in that analogy.
Well, not quite, because that analogy misses ten orders of magnitude of density difference. That just hurts my brain.
Magnetars are a whole other level of eldritch madness. The energy density of their magnetic fields, expressed as an equivalent mass density, is ten thousand times the density of lead.
Let that sink in for a minute.
The vacuum around a magnetar contains so much energy in the magnetic field alone that, via the E=mc² conversion between energy and mass, it has a "mass density" equivalent to every single atomic bomb on the planet blowing up all at once and all of that released energy being packed into a single cubic centimeter.
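Rough numbers behind that, assuming a field of about 10^10 T (a typical magnetar figure; real fields span roughly 10^9 to 10^11 T):

    u = \frac{B^2}{2\mu_0} \approx \frac{(10^{10}\,\mathrm{T})^2}{2 \cdot 4\pi \times 10^{-7}\,\mathrm{H/m}} \approx 4 \times 10^{25}\ \mathrm{J/m^3}

    \rho_{\mathrm{eq}} = \frac{u}{c^2} \approx \frac{4 \times 10^{25}}{9 \times 10^{16}} \approx 4 \times 10^{8}\ \mathrm{kg/m^3}

That's a few tens of thousands of times the density of lead (~1.1 x 10^4 kg/m³), and roughly 4 x 10^19 J per cubic centimeter, which is the same order of magnitude as common estimates of the combined yield of the world's nuclear arsenals.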
Aren't pulsars and magnetars very small, as stars and planets go? Google's AI says about 20 km in diameter, but I'd need to double-check that. On the other hand, IIRC the energy output of a pulsar relative to its physical size is pretty scary. You wouldn't want one in your neighborhood.
They are both forms of neutron stars, which average around 20km but are the densest objects known to man. Fun fact, one sugar cube of their material would weigh about as much as a mountain (https://imagine.gsfc.nasa.gov/science/objects/neutron_stars1...).
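The sugar-cube figure is easy to check as a back-of-the-envelope, taking an assumed average density of about 4 x 10^17 kg/m³:

    m \approx \rho V \approx 4 \times 10^{17}\,\mathrm{kg/m^3} \times 10^{-6}\,\mathrm{m^3} \approx 4 \times 10^{11}\ \mathrm{kg}

i.e. a few hundred million tonnes packed into one cubic centimeter, which is indeed mountain-scale.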
Presumably it means starting multiple copies of Claude Code (or whatever other agent-powered IDE) and having them work on different things at the same time, so that while you're waiting on one you can be replying to another, and multiple workstreams proceed in parallel.
I wrote a similar implementation a while back[1]. The major difference is that mine only allows rules of the form "n is divisible by m" whereas the one from the post allows arbitrary predicates over n. Wouldn't be that hard to update though.
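For a sense of what arbitrary predicates buy you, here's a rough Java sketch (purely illustrative, not taken from either implementation):

    import java.util.List;
    import java.util.function.IntPredicate;

    public class PredicateFizzBuzz {
        // A rule pairs an arbitrary predicate over n with the word to print.
        record Rule(IntPredicate matches, String word) {}

        public static void main(String[] args) {
            List<Rule> rules = List.of(
                new Rule(n -> n % 3 == 0, "Fizz"),                  // the classic "n is divisible by m" rules...
                new Rule(n -> n % 5 == 0, "Buzz"),
                new Rule(n -> n > 0 && (n & (n - 1)) == 0, "Pow")   // ...plus a rule no divisibility check can express (powers of two)
            );
            for (int n = 1; n <= 20; n++) {
                StringBuilder out = new StringBuilder();
                for (Rule r : rules) {
                    if (r.matches().test(n)) out.append(r.word());
                }
                System.out.println(out.length() == 0 ? Integer.toString(n) : out.toString());
            }
        }
    }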
Maybe I'm misunderstanding what you're saying but a connection pool seems like almost a canonical example of something that shouldn't be a singleton. You might want connection pools that connect to different databases or to the same database in read-only vs. read/write mode, etc.
I meant "singleton" in the sense of a single value for a type shared by anything that requires one, i.e. a Guice singleton (https://github.com/google/guice/wiki/scopes#singleton), not a value in global scope. Or maybe a single value by type with an annotation; the salient point is that there are values in a program that must be shared for correctness. Parameterless constructors prohibit you from using these (unless you have global variables).
Then these different pools can be separate singletons. You still don't want to instantiate multiple identical pools.
You can use the type system to your advantage. Cut a new type and inject a ReadOnlyDataSource or a SecondDatabaseDataSource or whatnot. Figure out what should only have one instance in your app, wrap a type around it, put it in the singleton scope, and inject that.
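Concretely, a minimal Guice sketch of that wrapper-type-plus-singleton-scope idea (ConnectionPool, ReadOnlyDb, etc. are made-up names, and the pool is stubbed out rather than a real driver):

    import com.google.inject.AbstractModule;
    import com.google.inject.Guice;
    import com.google.inject.Inject;
    import com.google.inject.Injector;
    import com.google.inject.Provides;
    import com.google.inject.Singleton;

    class ConnectionPool {                        // stand-in for a real pool (e.g. Hikari)
        final String url;
        ConnectionPool(String url) { this.url = url; }
    }

    // Distinct wrapper types so the injector (and the reader) can tell the pools apart.
    class ReadOnlyDb  { final ConnectionPool pool; ReadOnlyDb(ConnectionPool p)  { pool = p; } }
    class ReadWriteDb { final ConnectionPool pool; ReadWriteDb(ConnectionPool p) { pool = p; } }

    class DbModule extends AbstractModule {
        @Provides @Singleton ReadOnlyDb  readOnly()  { return new ReadOnlyDb(new ConnectionPool("jdbc:example-ro")); }
        @Provides @Singleton ReadWriteDb readWrite() { return new ReadWriteDb(new ConnectionPool("jdbc:example-rw")); }
    }

    class ReportService {
        final ReadOnlyDb db;
        @Inject ReportService(ReadOnlyDb db) { this.db = db; }  // can only ever receive the read-only pool
    }

    class Demo {
        public static void main(String[] args) {
            Injector injector = Guice.createInjector(new DbModule());
            ReportService a = injector.getInstance(ReportService.class);
            ReportService b = injector.getInstance(ReportService.class);
            System.out.println(a.db == b.db);  // true: both share the singleton-scoped ReadOnlyDb
        }
    }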
This has the advantage that you don't need an extra framework/dependency to handle DI, and it means that dependencies are usually much easier to trace (because you've literally got all the code in your project, no metaprogramming or reflection required). There are limits to this style of DI, but in practice I've not reached those limits yet, and I suspect if you do reach those limits, your DI is just too complicated in the first place.
I think most people using these frameworks are aware that DI is just automated instantiation. If your program has a limited number of ways of composing instantiations, it may not be useful to you. The amount of ceremony reduced may not be worth the overhead.
This conversation repeats itself ad infinitum around DI, ORMs, caching, security, logging, validation, etc, etc, etc... no, you don't need a framework. You can write your own. There are three common outcomes of this:
* Your framework gets so complicated that someone rips it out and replaces it with one of the standard frameworks.
* Your framework gets so complicated that it turns into a popular project in its own right.
* Your app dies and your custom framework dies with it.
I'm not suggesting a custom framework here, I'm suggesting no DI framework at all. No reflection, no extra configuration, no nothing, just composing classes manually using the normal structure of the language.
At some point this stops working, I agree; this isn't necessarily an infinitely scalable solution. At that point, switching to a standard framework (the first outcome above) is usually a fairly simple endeavour, because you're already using DI, you just need to wire it together differently. But I've been surprised at the number of cases where going without a framework altogether has been completely sufficient, and has stayed sufficient for far longer than I'd originally have expected.
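For what it's worth, "composing classes manually" just means a hand-written composition root, along these lines (reusing the hypothetical ConnectionPool / ReadOnlyDb / ReadWriteDb / ReportService types from the Guice sketch above, but with no injector anywhere):

    // A plain main() acting as the composition root: every dependency is built once,
    // by hand, and passed down through constructors.
    class ManualMain {
        public static void main(String[] args) {
            ConnectionPool readOnlyPool  = new ConnectionPool("jdbc:example-ro");
            ConnectionPool readWritePool = new ConnectionPool("jdbc:example-rw");

            ReadOnlyDb  readOnlyDb  = new ReadOnlyDb(readOnlyPool);    // "singletons" by construction:
            ReadWriteDb readWriteDb = new ReadWriteDb(readWritePool);  // created exactly once, right here

            ReportService reports = new ReportService(readOnlyDb);     // everything shares the same instances
            // OrderService orders = new OrderService(readWriteDb);    // and so on for the rest of the app
        }
    }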
Learning more faster and learning for the love of it are not mutually exclusive. Also there are different contexts in which I want to learn, and in some of them, speed matters, in some it doesn't. I don't think it makes sense to make blanket statements like this even about your own learning, much less about others.
I think when speed matters in learning, it's time to modify one's life so that it no longer matters. I understand necessity, but that's just the constraint of a suboptimal condition. I think it makes total sense, and we need more absolute stances rather than acting like optimizing machines.
Just because sometimes you want a 30-second breakdown of how to boil pasta, or whatever, doesn't mean we're becoming "optimizing machines", just that the context matters. Sometimes I don't want or need to know the "fundamentals of why water boils", I just need to know enough to complete some other thing.
> it's time to modify one's life so that it no longer matters
It sounds like you have 100% control of your own life, which might be great and all, but it isn't very realistic for most humans on the planet. Time is limited, and what we spend our time on is a choice. I too probably spend too much time learning about stuff I cannot really apply, because I like learning, but sometimes you're faced with something, you need to make a choice within N minutes/hours/days, and spending 1 month researching the topic before making a choice just isn't feasible.
> Just because sometimes you just want a 30 second break down on how to boil pasta, or whatever, doesn't mean we're becoming "optimizing machines"
If I want a 30-second breakdown of how to boil pasta, I'll look on the back of the box.
> Sometimes I don't want to or need to know the "fundamentals of why water boils", I just need to know enough in order to complete some other thing.
False dichotomy, because even if I want to quickly know how to do something, I don't need AI, nor do I need to research everything either.
> Time is limited, and what we spend our time on is a choice
That's fine, and I agree. But that doesn't mean we need to go to the level of AI to learn something, and I'd argue it's even harmful in the long run as even a study by Microsoft [1] shows that AI makes people stupider, so not really learning at all.
> If I want a 30-second breakdown of how to boil pasta, I'll look on the back of the box.
Yeah, but then you're in a foreign country or whatever, you understand that was just an example right? To illustrate something... Have some imagination.
> False dichotomy, because even if I want to quickly know how to do something, I don't need AI, nor do I need to research everything either.
Right, I'm not claiming it's impossible to "learn X quickly without AI", but I would make the claim that I can learn X faster with AI, than without. YMMV and all that.
> But that doesn't mean we need to go to the level of AI to learn something
I don't have to use Wikipedia or the Internet instead of going to my local library, but if I'm in a rush and need something quickly, I'd probably prefer those two options over the last.
> as even a study by Microsoft [1] shows that AI makes people stupider, so not really learning at all.
That study says no such thing, and the number of people misled by that paper is kind of shocking to me. If you're curious what the paper actually says, feel free to read the actual paper instead of a "YouTube AI thought-leader" summarization of it, or wherever you got the whole "AI makes people stupider" claim from: https://www.microsoft.com/en-us/research/wp-content/uploads/...
I did read the paper. One of their conclusions is that people are less likely to engage in critical thinking the more they trust AI. Which to me implies that widespread AI usage will lead to fewer people even practicing critical thinking, which in turn will lead to an overall degradation of the skill. From the paper: "Moreover, while GenAI can improve worker efficiency, it can inhibit critical engagement with work and can potentially lead to long-term overreliance on the tool and diminished skill for independent problem-solving."
> Yeah, but then you're in a foreign country or whatever, you understand that was just an example right? To illustrate something... Have some imagination.
I haven't seen a convincing example yet.
> Right, I'm not claiming it's impossible to "learn X quickly without AI", but I would make the claim that I can learn X faster with AI, than without. YMMV and all that.
Again, I wonder if that's true. Because skim-learning, IMO, does not lead to much real learning in the long run. Like I knew a guy in grad school who did that, and could come up with answers faster at first. But after a couple weeks, he had a very sketchy knowledge of the subject and had to keep looking things up whereas people who were more systematic at learning could answer questions without any reference at all.
I'm not saying skim-learning is bad; sometimes it is necessary. But in general, AI takes it too far for the average person, and it will most likely lead to mental degradation.
Practically, most of the time, IMO, learning speed matters when you're competing against something, whether an individual, a startup, etc., which doesn't seem like the optimal condition to be in.
First counterexample that comes to mind, you're traveling to a country for one reason or another, and it'd be helpful to learn or know more of the language. The faster you learn, the more helpful it'll be.
Another example: your friend's band's guitarist got sick and you've got a week to learn a full set's worth of music.
Generally I find some urgency to be a nice motivator.
> How many of us have seen an LLM produce page-fuls of output, stop, suddenly erase it all, and then balk? The LLM needs to re-analyze that output impassively in order to detect that it crossed an undetected bright line.
That's not what's happening here. A separate process is monitoring for content violations and causing the output to be erased. There's no re-analysis going on.
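Purely as an illustration of that distinction (this is not any vendor's actual code, and all the names are made up): the check runs beside the stream, and the erasure is a client-side action, not the model rereading its own output.

    import java.util.List;

    class StreamingDemo {
        // Hypothetical classifier running as a separate check alongside generation.
        interface Moderator { boolean flags(String textSoFar); }

        static void streamWithModeration(List<String> tokens, Moderator moderator) {
            StringBuilder shown = new StringBuilder();
            for (String token : tokens) {
                shown.append(token);
                System.out.print(token);                         // display output as it arrives
                if (moderator.flags(shown.toString())) {         // separate check, not the model itself
                    System.out.println("\n[response removed]");  // client erases/replaces what was shown
                    return;
                }
            }
            System.out.println();
        }

        public static void main(String[] args) {
            streamWithModeration(
                List.of("Here ", "is ", "a ", "perfectly ", "fine ", "answer."),
                text -> text.contains("forbidden"));             // toy rule standing in for a real classifier
        }
    }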