The TL;DR
cagent is a multi-agent runtime that orchestrates AI agents, each with specialized capabilities and tools, and manages the interactions between them. The example scenario I want to share with you is a coding assistant for Microsoft Dynamics 365 Business Central (whose programming language is AL) that orchestrates three agents.
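Here is a minimal sketch of the shape of such a setup, assuming cagent’s YAML agent format; the agent names, roles, instructions, and model strings below are illustrative stand-ins, not the exact configuration.

```yaml
# Sketch of a cagent config: a root agent delegating to three
# sub-agents. All names, instructions, and models are illustrative.
agents:
  root:
    model: openai/gpt-4o
    description: Coordinates the AL coding assistant
    instruction: |
      Route Business Central requests to the right sub-agent and
      assemble their answers into one reply.
    sub_agents:
      - al_coder
      - al_reviewer
      - bc_docs

  al_coder:
    model: openai/gpt-4o
    description: Writes AL code for Business Central
    instruction: Generate idiomatic AL objects such as tables, pages, and codeunits.

  al_reviewer:
    model: openai/gpt-4o
    description: Reviews generated AL
    instruction: Check the code for correctness and Business Central style conventions.

  bc_docs:
    model: openai/gpt-4o
    description: Answers documentation questions
    instruction: Explain Business Central concepts and APIs in plain language.
```

With a file like this, a single `cagent run <config>.yaml` starts the whole crew; the root agent handles delegation, so each sub-agent can stay narrow and good at one thing.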
This is a hell of a lot like saying you ditched driving cars to ride a monorail with only two stops in the wrong neighborhood. You’re comparing apples and oranges, and selectively leaving out the information that would make the comparison fall apart.
And engineers are never the cause of mistakes? There can't possibly be any data to back up the claim that major outages are more often caused by leadership. I've been in major incidents simply because someone pushed a change that took down a switch network. Statements like these only go to show how much we still have to learn, humble ourselves, and stop blaming others all the time.
Leadership can include engineers responsible for technical priorities. If you're down for that long, though, it's usually an organizational fuck-up, because the priorities didn't include identifying and mitigating systemic failure modes. The proximate cause isn't all that important, and the people who set organizational priorities are by and large not engineers.
Think of airplane safety; I think it is similar. A good culture makes it more likely that $root-cause is detected, tested for, isolated, monitored, easy to roll back, and so on.
This article is so me it hurts. I live in the micro-efficiencies: calendar color-coding, protein packets in my laptop bag, dual-purpose walking meetings. It’s not about doing more; it’s about removing drag so I can stay focused on what actually matters. Optimization isn’t a hustle, it’s clarity.
Really appreciated this take; it hits close to home. I’ve found LLMs great for speed and scaffolding, but the more I rely on them, the more I notice my problem-solving instincts getting duller. There’s a tradeoff between convenience and understanding, and it’s easy to miss until something breaks. Still bullish on using AI for exploring ideas or clarifying intent, but I’m trying to be more intentional about when I lean on them vs. when I slow down and think things through myself.
Just saw this in the release notes; super excited to try it out.
Compose supporting an AI agent stack out of the box could be a big deal. I just saw a Reddit thread about using Compose in production, and I'm thinking I can put the two together. Spinning up a whole multi-container setup for an LLM app (vector DB, orchestrator, backend, etc.) with a single compose up might be what finally gets this stuff to prod. I’ve been hacking around and struggling with exactly this; something like the sketch below is what I have in mind. Curious to see how far I can push it, might finally be the clean setup I can share.
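Purely a guess at the wiring; every image, port, and env var here is a placeholder, not a known-good recipe:

```yaml
# Hypothetical single-command LLM app stack: vector DB + local model
# server + app backend. Swap in whatever you actually run.
services:
  vectordb:
    image: qdrant/qdrant
    ports:
      - "6333:6333"

  llm:
    image: ollama/ollama
    ports:
      - "11434:11434"

  backend:
    build: ./backend            # hypothetical app directory
    environment:
      QDRANT_URL: http://vectordb:6333
      LLM_URL: http://llm:11434
    depends_on:
      - vectordb
      - llm
    ports:
      - "8000:8000"
```

One `docker compose up` and the backend reaches the vector DB and the model server by service name, which is the part I keep hand-rolling badly.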
This is sick. Honestly, this solves a huge pain I’ve run into a bunch: knowing a site “works” but having zero clue whether it’s actually good or on-brand without someone manually combing through it.
Love how you’ve wrapped all this in stuff devs already use (Jest, Docker, Testcontainers). No weird tooling, no “just trust the LLM” vibes. And keeping the prompts readable-as-tests? Chef’s kiss.
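For anyone trying to picture “prompts readable as tests,” here’s my guess at the shape of one, assuming Jest plus Testcontainers; I haven’t read the actual code, so askJudge, the judge endpoint, and the nginx image are all my own stand-ins:

```typescript
// Guess at a "prompt readable as a test": serve the site from a
// container, ask an LLM judge a yes/no question, assert on the answer.
// askJudge, the endpoint, and the image are illustrative stand-ins.
import { GenericContainer, StartedTestContainer } from "testcontainers";

let site: StartedTestContainer;

beforeAll(async () => {
  // Throwaway container serving the built site, so the test hits a real URL.
  site = await new GenericContainer("nginx:alpine")
    .withExposedPorts(80)
    .start();
}, 60_000);

afterAll(async () => {
  await site.stop();
});

// Hypothetical judge helper: any OpenAI-compatible chat endpoint works.
async function askJudge(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "llama3",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}

test("landing page copy is on-brand", async () => {
  const url = `http://${site.getHost()}:${site.getMappedPort(80)}/`;
  const html = await (await fetch(url)).text();

  // The prompt *is* the assertion, so intent stays readable in review.
  const verdict = await askJudge(
    `Answer PASS or FAIL only. Is this page copy clear, friendly, and
     developer-focused?\n---\n${html}`
  );

  expect(verdict).toContain("PASS");
});
```

The nice part is the failure mode: when it breaks, the diff shows a plain-English expectation, not a brittle selector.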
Genuinely feels like the kind of thing we’ll all be doing a year from now and wondering why we didn’t start sooner.