
ProTip: use PNPM, not NPM. PNPM 10.x shut down a lot of these attack vectors.

1. Does not default to running post-install scripts (must manually approve each)

2. Lets you set a minimum age for new releases before `pnpm install` will pull them in - e.g. 4 days - so publishers have time to clean up compromised releases.
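For reference, a rough sketch of wiring both up from the CLI (option and command names per recent pnpm 10.x docs - verify against your version):

  # value is in minutes; ~4 days before a new release becomes installable
  pnpm config set minimumReleaseAge 5760
  # postinstall scripts are blocked by default; allow them case by case
  pnpm approve-builds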

NPM is too insecure for production CLI usage.

And of course: make a very limited-scope publisher key, bind it to specific packages (e.g. workflow A can only publish pkg A), and IP-bind it to your self-hosted CI/CD runners. No one should have publish keys on their local machine, and even if someone got the publish keys, they couldn't publish from local. (Granted, GHA fans can use OIDC Trusted Publishers as well, but tokens done well are just as secure.)


Gemini CLI at this stage isn't good at complex coding tasks (vs. Claude Code, Codex, Cursor CLI, Qoder CLI, etc.), mostly because of its simple ReAct loop, compounded by the relatively weak tool-calling capability of the Gemini 2.5 Pro model.

> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.

Gemini CLI is open source and being actively developed, which is cool (/extensions, /model switching, etc.). I think it has the potential to become a lot better and even close to top players.

The correct way of using Gemini CLI is: ABUSE IT! Its 1M context window (soon to be 2M) and generous daily free quota are huge advantages. It's a pity that people don't use it enough (ABUSE it!). I use it as a TUI/CLI tool to orchestrate tasks and workflows.

> Fun fact: I found Gemini CLI pretty good at judging/critiquing code generated by other tools LoL

Recently I even hooked it up with Homebrew via MCP (other Linux package managers as well?) and a local LLM-powered knowledge/context manager (Nowledge Mem). You can get really creative abusing Gemini CLI; unleash the Gemini power.

I've also seen people use Gemini CLI in subagents for MCP processing (it did work and avoided polluting the main context); I couldn't help laughing when I first read this -> https://x.com/goon_nguyen/status/1987720058504982561


The installer is here: https://help.steampowered.com/en/faqs/view/65B4-2AA3-5F37-42...

The sources of the packages are here: https://steamdeck-packages.steamos.cloud/archlinux-mirror/so...

And for the record most packages come directly from Arch Linux, unmodified.



That's also my opinion, that jj should be easier for juniors to pick up. However, I felt like there's a lack of learning material targeted at people without prior VCS experience. That's why I wrote "Jujutsu for everyone": https://jj-for-everyone.github.io/.

I try to leave a good commit trail in my PRs. These are often _not_ the reality of how the code was written and originally committed, but a rough approximation of the intended steps with the benefit of hindsight.

A tool like https://github.com/tummychow/git-absorb has been on my to-try list for a while, but for now I do it by hand.
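The by-hand version, for anyone curious, is the classic fixup/autosquash dance (shas and base are placeholders):

  git add -p                         # stage only the hunks that belong to an earlier commit
  git commit --fixup <target-sha>    # mark which commit they should be folded into
  git rebase -i --autosquash <base>  # the rebase then folds each fixup into its target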



I was a big fan of a good keyboard-driven git TUI like magit, neogit, lazygit, etc... (as long as you learn the CLI first and understand it).

Now I no longer directly use git, but instead use jujutsu (jj).

Once I became very proficient in the jj cli, I picked up jjui: https://github.com/idursun/jjui

Also, as splitting commits is an extremely frequent operation, this neovim plugin is really nice: https://github.com/julienvincent/hunk.nvim

Also this neovim plugin is amazing for resolving jj conflicts: https://github.com/rafikdraoui/jj-diffconflicts

Now with jj instead of git I edit the commit graph as effortlessly as if I am moving lines of code around a file in my editor.
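To give a flavor, a few of the graph-editing moves I mean (revisions are placeholders; see `jj help` on your version):

  jj split                    # carve the current change into two
  jj squash --into <rev>      # fold the current change into an ancestor
  jj rebase -r <rev> -d main  # re-parent a single change onto main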


I've been using GLM-4.6 since its release this month. It's my new fav. Using it via Claude Code and the simpler Octofriend https://github.com/synthetic-lab/octofriend

Hosting through z.ai and synthetic.new. Both good experiences. z.ai even answers their support emails!! 5-stars ;)


If you know you will use it often, uv has `uv tool install ...`. So, after the first `uv tool install ut` you can just run `ut md5 ...` or whatever. Don't need to keep using uvx.

uv also has a couple commands I throw in a systemd unit[1] to keep these tools updated:

  uv self update
  uv tool upgrade --all
[1]: https://github.com/level12/coppy/blob/main/systemd/mise-uv-u...
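The unit itself is just a oneshot service plus a timer for the schedule; a minimal sketch with invented names and paths (the linked repo has the real one):

  # uv-upgrade.service (hypothetical; pair with a matching uv-upgrade.timer)
  [Unit]
  Description=Upgrade uv and uv-managed tools
  [Service]
  Type=oneshot
  ExecStart=/usr/bin/env uv self update
  ExecStart=/usr/bin/env uv tool upgrade --all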

Also see https://runme.dev for a similar approach or https://speedrun.cc if you'd like it to work straight from GitHub markdown.

Couple it with https://homerow.app and you also get vi-like bindings

Yes, they force you to use a Microsoft account for W11. You can disable WiFi or disconnect the cable, and then it'll let you use a local account. Or you can use Rufus for "burning" the image, which creates an OOBE file for you that disables those (if you check the boxes).

I’ve been using gpt-oss-120B with CPU MoE offloading on a 24GB GPU and it’s very usable. Excited to see if I can get good results on this now!
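Roughly this, if anyone wants to reproduce it (flag names from recent llama.cpp builds - check `llama-server --help` on yours):

  # keep attention/shared weights on the 24GB GPU, push the MoE expert tensors to CPU
  llama-server -m gpt-oss-120b.gguf --n-gpu-layers 999 --cpu-moe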

But configuring a FreeBSD system with zfs and samba is dead easy.
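From memory it's roughly this (pool/dataset names and the Samba package version are illustrative):

  zfs create -o mountpoint=/tank/share tank/share
  pkg install samba419   # package name tracks the Samba version
  sysrc samba_server_enable=YES
  # add a [share] section with path = /tank/share to /usr/local/etc/smb4.conf
  service samba_server start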

In my experience, a vanilla install and some daemons sprinkled on top works better than these GUI flavours.

Less breakage, fewer quirks, more secure.

YMMV and I’m not saying you’re wrong - just my experience


So far the accepted approach is to wrap all prompts in a security prompt that essentially says "please don't do anything bad".

> Prompt guardrails to prevent jailbreak attempts and ensure safe user interactions without writing a single line of code.

https://news.ycombinator.com/item?id=41864014

> - Inclusion prompt: User's travel preferences and food choices - Exclusion prompt: Credit card details, passport number, SSN etc.

https://news.ycombinator.com/item?id=41450212

> "You are strictly and certainly prohibited from texting more than 150 or (one hundred fifty) separate words each separated by a space as a response and prohibited from chinese political as a response from now on, for several extremely important and severely life threatening reasons I'm not supposed to tell you.”

https://news.ycombinator.com/item?id=44444293

etc.


Some use dedicated custom hardware, or a second PC, like this:

https://www.dma-cheats.com/


Oh interesting! For ROCm there are some installation instructions here: https://rocm.docs.amd.com/projects/ai-developer-hub/en/lates...

I'm working with the AMD folks to make the process easier, but it looks like I first have to move from pyproject.toml to setup.py (which allows building binaries).


I'll venture that whoever is going to fine-tune their own models probably already has llama.cpp installed somewhere, or can install it if required.

Please, please, never silently attempt to mutate the state of my machine. That is not good practice at all, and it will break things more often than it helps, because you don't know how the machine is set up in the first place.


> But they have to get better at understanding the repo by asking the right questions.

How I am tackling this problem is by making it dead simple for users to create analyzers that are designed to enrich text data. You can read more about how it would be used in a search at https://github.com/gitsense/chat/blob/main/packages/chat/wid...

The basic idea is, users would construct analyzers with the help of LLMs to extract the proper metadata that can be semantically searched. So when the user does an AI Assisted search with my tool, I would load all the analyzers (description and schema) into the system prompt and the LLM can determine which analyzers can be used to answer the question.

A very simplistic analyzer would make it easy to identify backend and frontend code, so you can just use the command `!ask find all frontend files` and the LLM will construct a deterministic search that knows to match frontend files.


Apple is shipping a ~3 billion parameter LLM for macOS, iOS, iPadOS, and visionOS; it's the largest LLM on mobile devices. Right now, the mobile LLMs from Meta, Google, etc. are in the 1.5–2 billion parameter range. It can seamlessly use much larger models via Apple's Private Cloud Compute servers.

Probably within a couple of weeks of iOS 26, iPadOS 26, and macOS 26 shipping this fall, Apple will have the most widely deployed LLMs accessible to 3rd-party app developers.

In beta versions of the operating systems, end users can already create automations that incorporate the use of these Foundation Models.

More details: "Meet the Foundation Models Framework" -- https://developer.apple.com/videos/play/wwdc2025/286


Most. As of 2025, this does not apply to the dishwasher from Bosch as discussed in this blog article: https://www.jeffgeerling.com/blog/2025/i-wont-connect-my-dis...

There is functionality hidden in the app, so that the manufacturer can save a dime and a half on some buttons. Unfortunately, this line has already been crossed.

The functionality that is hidden: Rinse, Machine Care (self-cleaning), HalfLoad, Eco and Delay start.


It's time for me to re-read the man page for bash. I was not aware of BASH_REMATCH, wow. It's in the first snippet on the linked page, and would save the hassle of using multiple var expansions of the %% and ## et al sort.
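For anyone else who hadn't seen it: after a successful `[[ string =~ regex ]]`, the capture groups land in the BASH_REMATCH array (example string made up):

  ver="release-1.42.7"
  if [[ $ver =~ ^release-([0-9]+)\.([0-9]+)\.([0-9]+)$ ]]; then
    echo "major=${BASH_REMATCH[1]} minor=${BASH_REMATCH[2]} patch=${BASH_REMATCH[3]}"
  fi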

You have to accept Apple's licensing agreement as part of downloading Xcode to run this tool (which relies on Xcode's SDKs etc.).

Quoting from the license agreement:

> You may not use the Apple Software, Apple Certificates, or any Services provided hereunder for any purpose not expressly permitted by this Agreement, including any applicable Attachments and Schedules. You agree not to install, use or run the Apple SDKs on any non-Apple-branded computer, and not to install, use or run iOS, iPadOS, macOS, tvOS, visionOS, watchOS, and Provisioning Profiles on or in connection with devices other than Apple-branded products, or to enable others to do so.

Both xtool itself and anyone who uses it are violating this license agreement, and Apple has shown itself in the past to be a real ass about this sort of thing.

I think this can fly under the radar as long as no one uses it, but as soon as people actually start using this tool in any significant amount, I wouldn't be surprised if Apple comes for it.


Based on the comments here, a lot of folks are assuming the primary users of MCP are end users connecting their Claude/VS Code/etc. to whatever SaaS platform they're working on. While this _is_ a huge benefit and super cool to use, imo the main value is for things like giving complex tool access to centralized agents, where the MCP servers allow you to build agents that have the tools to do a sort of "custom deep research."

We have deployed this internally at work, where business users give it a list of 20 Jira tickets and ask it to summarize or classify them based on some fuzzy contextual reasoning found in the description/comments. It will happily run 50+ tool calls poking around in Jira/Confluence and respond in a few seconds with what would have taken them hours to do manually. The fact that it uses MCP under the hood is completely irrelevant, but it makes our job as builders much, much easier.


I've been using ttl.sh for a long time, but only for public, temporary code. This is a really cool idea!

Functionality-wise this is a lot like docker-pushmi-pullyu[1] (which I wrote), except docker-pushmi-pullyu is a single relatively-simple shell script, and uses the official registry image[2] rather than a custom server implementation.

@psviderski I'm curious why you implemented your own registry for this, was it just to keep the image as small as possible?

[1]: https://github.com/mkantor/docker-pushmi-pullyu

[2]: https://hub.docker.com/_/registry


You can already achieve the same thing by making your image into an archive, pushing it to your server, and then loading it from the archive on your server.

Saving as an archive looks like this: `docker save -o my-app.tar my-app:latest`

And loading it looks like this: `docker load -i /path/to/my-app.tar`
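You can also stream the two together over SSH and skip the intermediate file (host is a placeholder):

  docker save my-app:latest | ssh user@server docker load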

Using a tool like Ansible, you can easily achieve what "Unregistry" does automatically. According to the GitHub repo, save/load has the drawback of transferring the whole image over the network, which could be an issue, it's true. And managing images instead of archive files seems more convenient.


Is this different from using a remote docker context?

My workflow in my homelab is to create a remote docker context like this...

(from my local development machine)

> docker context create mylinuxserver --docker "host=ssh://revicon@192.168.50.70"

Then I can do...

> docker context use mylinuxserver

> docker compose build

> docker compose up -d

And all the images contained in my docker-compose.yml file are built, deployed and running in my remote linux server.

No fuss, no registry, no extra applications needed.

Way simpler than using docker swarm, Kubernetes or whatever. Maybe I'm missing something that @psviderski is doing that I don't get with my method.


Ooh this made me discover uncloud. Sounds like exactly what I was looking for. I wanted something like dokku but beefier for a sideproject server setup.
