ProTip: use PNPM, not NPM.
PNPM 10.x shuts down a lot of these attack vectors:
1. It does not run post-install scripts by default (each one must be manually approved)
2. It lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up compromised releases (see the config sketch below).
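A minimal sketch of what that looks like in `pnpm-workspace.yaml` - the setting names below are from recent pnpm 10.x releases, so double-check them against the docs for your version:

```yaml
# pnpm-workspace.yaml (sketch)

# Dependency install scripts are skipped unless the package is allowlisted here;
# `pnpm approve-builds` manages this list interactively.
onlyBuiltDependencies:
  - esbuild

# Refuse to resolve versions published less than ~4 days ago (value is in minutes).
minimumReleaseAge: 5760
```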
NPM is too insecure for production CLI usage.
And of course, make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if someone got hold of the publish keys, they couldn't publish from a local machine.
(Granted, GHA fans can use OIDC Trusted Publishers as well, but tokens done well are just as secure)
Gemini CLI at this stage isn't good at complex coding tasks (vs. Claude Code, Codex, Cursor CLI, Qoder CLI, etc.), mostly because of its simple ReAct loop, compounded by the relatively weak tool-calling capability of the Gemini 2.5 Pro model.
> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.
Gemini CLI is open source and being actively developed, which is cool (/extensions, /model switching, etc.). I think it has the potential to become a lot better and even close to top players.
The correct way of using Gemini CLI is: ABUSE IT!
The 1M context window (soon to be 2M) and the generous daily (free) quota are huge advantages. It's a pity that people don't use it enough (ABUSE it!). I use it as a TUI / CLI tool to orchestrate tasks and workflows.
> Fun fact: I found Gemini CLI pretty good at judging/critiquing code generated by other tools LoL
Recently I even hooked it up with Homebrew via MCP (other Linux package managers as well?), plus a local-LLM-powered Knowledge/Context Manager (Nowledge Mem). You can get really creative abusing Gemini CLI - unleash the Gemini power.
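For reference, wiring an MCP server into Gemini CLI is just a config entry in `~/.gemini/settings.json`. This is a hypothetical sketch - the "brew" server name and its command are made up, and the `mcpServers` shape is from memory of the Gemini CLI docs, so verify it against your version:

```json
{
  "mcpServers": {
    "brew": {
      "command": "npx",
      "args": ["-y", "some-homebrew-mcp-server"]
    }
  }
}
```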
I've also seen people use Gemini CLI in SubAgents for MCP Processing (it did work and avoided polluting the main context); I couldn't help laughing when I first read this -> https://x.com/goon_nguyen/status/1987720058504982561
That's also my opinion: jj should be easier for juniors to pick up. However, I felt there was a lack of learning material targeted at people without prior VCS experience. That's why I wrote "Jujutsu for everyone": https://jj-for-everyone.github.io/.
I try to leave a good commit trail in my PRs. These are often _not_ the reality of how the code was written and originally committed, but a rough approximation of the intended steps with the benefit of hindsight.
If you know you will use it often, uv has `uv tool install ...`. So after the first `uv tool install ut` you can just run `ut md5 ...` or whatever; you don't need to keep using uvx.
uv also has a couple of commands I throw in a systemd unit[1] to keep these tools updated:
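The unit from the footnote isn't reproduced here, but a minimal sketch of the idea (the uv path is illustrative) is a oneshot service triggered by a timer:

```ini
# uv-tool-upgrade.service (sketch)
[Unit]
Description=Upgrade tools installed with uv

[Service]
Type=oneshot
# `uv self update` keeps uv itself current (standalone installs only);
# `uv tool upgrade --all` upgrades everything installed via `uv tool install`.
ExecStart=/usr/local/bin/uv self update
ExecStart=/usr/local/bin/uv tool upgrade --all
```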
Yes, they force you to use a Microsoft account for W11. You can disable WiFi or disconnect the cable, and then it'll let you use a local account. Or you can use Rufus for "burning" the image, which creates an OOBE file for you that disables those requirements (if you check the boxes).
> "You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you.”
I'm working with the AMD folks to make the process easier, but it looks like I first have to move from pyproject.toml to setup.py (which allows building binaries).
I'll venture that whoever is going to fine-tune their own models probably already has llama.cpp installed somewhere, or can install it if required.
Please, please, never silently attempt to mutate the state of my machine. That is not good practice at all, and it will break things more often than it helps, because you don't know how the machine is set up in the first place.
The basic idea is that users would construct analyzers with the help of LLMs to extract the proper metadata that can be semantically searched. So when the user does an AI-assisted search with my tool, I load all the analyzers (description and schema) into the system prompt, and the LLM can determine which analyzers can be used to answer the question.
A very simplistic analyzer would make it easy to identify backend and frontend code, so you can just use the command `!ask find all frontend files` and the LLM will construct a deterministic search that knows to match frontend files.
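As a purely hypothetical illustration (the field names and values here are invented, not taken from the actual tool), the description/schema pair loaded into the system prompt for that analyzer might look something like:

```json
{
  "name": "code_layer",
  "description": "Tags each file as frontend or backend so queries can filter on it",
  "schema": {
    "layer": { "type": "string", "enum": ["frontend", "backend"] }
  }
}
```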
Apple is shipping a ~3 billion parameter LLM for macOS, iOS, iPadOS, and visionOS; it's the largest LLM shipping on mobile devices. Right now, mobile LLMs from Meta, Google, etc. are in the 1.5-2 billion parameter range. It can also seamlessly use much larger models via Apple's Private Cloud Compute servers.
Probably within a couple of weeks of iOS 26, iPadOS 26, and macOS 26 shipping this fall, Apple will have the most widely deployed LLMs accessible to 3rd-party app developers.
In beta versions of the operating systems, end users can already create automations that incorporate the use of these Foundation Models.
There is functionality hidden in the app, so that the manufacturer can save a dime and a half on some buttons. Unfortunately, this line has already been crossed.
The functionality that is hidden: Rinse, Machine Care (self-cleaning), HalfLoad, Eco and Delay start.
It's time for me to re-read the man page for bash. I was not aware of BASH_REMATCH, wow. It's in the first snippet on the linked page, and it would save the hassle of using multiple variable expansions of the `%%` and `##` sort.
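For example, a single `[[ ... =~ ... ]]` match populates the BASH_REMATCH array with all the capture groups, instead of chaining `%%`/`##` expansions to peel a string apart:

```bash
ver="release-1.42.7"
if [[ $ver =~ ^release-([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
  # BASH_REMATCH[0] is the whole match; 1..3 are the capture groups.
  echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]} patch=${BASH_REMATCH[3]}"
fi
```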
You have to accept Apple's licensing agreement as part of downloading Xcode to run this tool (which relies on Xcode's SDKs, etc.).
Quoting from the license agreement:
> You may not use the Apple Software, Apple Certificates, or any Services provided hereunder for any purpose not expressly permitted by this Agreement, including any applicable Attachments and Schedules. You agree not to install, use or run the Apple SDKs on any non-Apple-branded computer, and not to install, use or run iOS, iPadOS, macOS, tvOS, visionOS, watchOS, and Provisioning Profiles on or in connection with devices other than Apple-branded products, or to enable others to do so.
Both xtool itself and anyone who uses it are violating this license agreement, and Apple has shown itself in the past to be a real ass about this sort of thing.
I think this can fly under the radar as long as no one uses it, but as soon as people actually start using this tool in any significant numbers, I wouldn't be surprised if Apple comes for it.
Based on the comments here, a lot of folks are assuming the primary users of MCP are end users connecting their Claude/VS Code/etc. to whatever SaaS platform they're working on. While this _is_ a huge benefit and super cool to use, imo the main benefit is for things like giving complex tool access to centralized agents, where the MCP servers allow you to build agents that have the tools to do a sort of "custom deep research."
We have deployed this internally at work, where business users are giving it a list of 20 Jira tickets and asking it to summarize or classify them based on some fuzzy contextual reasoning found in the description/comments. It will happily run 50+ tool calls poking around in Jira/Confluence and respond in a few seconds with what would have taken them hours to do manually. The fact that it uses MCP under the hood is completely irrelevant, but it makes our job as builders much much easier.
Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.
@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?
You can already achieve the same thing by making your image into an archive, pushing it to your server, and then running it from the archive on your server.
Saving as an archive looks like this: `docker save -o my-app.tar my-app:latest`
And loading it looks like this: `docker load -i /path/to/my-app.tar`
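The two steps can also be combined into a single pipeline with no intermediate file on either side (assuming Docker is available on the remote, with `user@server` as a placeholder for your host): `docker save my-app:latest | ssh user@server docker load`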
Using a tool like Ansible, you can easily automate what "Unregistry" is doing. According to the GitHub repo, save/load has the drawback of transferring the whole image over the network, which could be an issue, that's true. And managing images instead of archive files seems more convenient.
Ooh, this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku, but beefier, for a side-project server setup.