Hacker News | chickensong's comments

It may be "quite normal" to you, but that doesn't mean it's good. Stockholm syndrome comes to mind here.

Until technology gives us better controls, we must assume that every app, particularly those from large profit-driven corporations, is hostile.


It may be hostile to you, but that doesn't mean it is hostile. Paranoia syndrome comes to mind here.


It's true that not all apps are hostile, but my pessimism and paranoia aren't unfounded. If you think the current state of software, security, privacy, etc. is fine and dandy and doesn't warrant skepticism, then our shared reality is probably too fractured to have a meaningful debate.


I work in cybersecurity, heading the group in a company you know. I also develop open source software. So I am painfully aware of the pandemic of cybersecurity issues we face professionally and at home.

Progress has its good and bad aspects, and we must pick our battles wisely and fight them as hard as we can. This is why the EU efforts around privacy are great. They are not without their drawbacks, but ultimately they are great.

Being infuriated, as I see in this thread, about a company's decision to use mobiles as boarding passes is not something I adhere to. One can always fly with another airline that does not have these restrictions, and complain in another thread about how expensive that is.

Saying that all of current technology is evil means going off the grid and living a quiet life in a remote forest. This is of course a solution.

Saying that some technology is evil (and some of it definitely is) means fighting for those specific things to be regulated. Ryanair's digital boarding passes are not one of them.


I'm not trying to crusade against digital boarding passes; my issue is with normalizing mandatory apps for all the things.

If we had high quality, trusted software, leveraging open standards, that would be one thing, but instead we have janky proprietary snowflake apps that are borderline malware. Like you said, it's a pandemic of cybersecurity issues, so it's hard for me to accept the 'just install the app' mentality.

I agree we should pick our battles, but I don't believe regulation is the only solution worth fighting for. My comment was to nudge cultural change, by pushing back against what I see as a bad practice.


The LLM usage is fun and interesting. What model are you using, and how much customization are you doing to integrate with the app and maintain character?

I suggest adding an export function to make the characters more portable. Maybe export to PDF as well as JSON.


Using haiku 4.5 right now. I think I could get away with even smaller models. The prompt is here: https://github.com/igor47/csheet/blob/main/src/ai/prompts.ts and my tool use is here but it's really all through the code: https://github.com/igor47/csheet/blob/main/src/tools.ts
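The core pattern is small enough to sketch. Here's a stripped-down illustration of the tool-dispatch idea (hypothetical tool names and state shape, not the actual csheet code — the real prompt and tools are in the links above): the model returns a tool name plus arguments, and the app applies the matching function to the character sheet.

```python
# Hypothetical tool registry, loosely mirroring a tools.ts-style dispatch
# table: map each tool name the model can emit to a function that mutates
# the character-sheet state.
TOOLS = {
    "set_hp": lambda sheet, args: sheet.update(hp=args["value"]),
    "add_item": lambda sheet, args: sheet["inventory"].append(args["item"]),
}

def dispatch(sheet, tool_call):
    """Execute one model-issued tool call against the sheet state."""
    name = tool_call["name"]
    if name not in TOOLS:
        raise ValueError(f"unknown tool: {name}")
    TOOLS[name](sheet, tool_call["arguments"])
    return sheet

sheet = {"hp": 10, "inventory": []}
dispatch(sheet, {"name": "set_hp", "arguments": {"value": 7}})
dispatch(sheet, {"name": "add_item", "arguments": {"item": "rope"}})
print(sheet)  # {'hp': 7, 'inventory': ['rope']}
```

A small model handles this fine because the hard part (validating and applying the change) lives in code, not in the prompt.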


And you may ask yourself

Well, how did ip route here?


Letting the bytes go by


Modulated signals flow


I hadn't given much thought to building agents, but the article and this comment are inspiring, thx. It's interesting to consider agents as a new kind of interface/function/broker within a system.


Tmux/screen can also use separate PTYs for named sessions, not just multiplex a single session, and they run a separate server process as well.

Why not just use something like Ansible if you mainly want YAML config management? It handles multiple environments elegantly and isn't constrained to any particular use case.

Not trying to knock your tool; building stuff is cool, and little custom programs are great for internal use, but it's a harder sell to convince the general public to move away from more generic, battle-tested tools. My feedback here is that I'm still not sure why I'd choose sbsh.


Thanks a lot for the thoughtful comment, I really appreciate it.

You are absolutely right that screen and tmux use separate PTYs for sessions, and they are both extremely powerful. I actually use screen myself and it was one of the main inspirations for sbsh.

The main difference is that sbsh provides manifests to define terminal sessions declaratively. A manifest can include commands, environment variables, prompts, and lifecycle hooks such as onInit and postAttach. This makes it easy to reproduce and share terminal configurations, for example to run the same process locally or inside a CI/CD pipeline without manual setup.
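As a rough illustration, a manifest might look something like this (field names here are illustrative of the features described, not the exact schema — see the repo for that):

```yaml
# Hypothetical sbsh manifest for a Terraform staging environment
name: tf-staging
env:
  AWS_PROFILE: staging
  TF_VAR_region: us-east-1
prompt: "[tf:staging] $ "
lifecycle:
  onInit:
    - terraform init -backend-config=staging.hcl
  postAttach:
    - echo "Attached to staging Terraform session"
```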

Unlike screen, sbsh includes session discovery and metadata. It keeps a metadata directory that stores detailed information about all running terminals and supervisors, including their profiles and states. This allows you to query or reattach programmatically through the API.

Regarding configuration management, I agree that Ansible is great for provisioning and automation, but my goal with sbsh was different. I wanted a lightweight tool to set up working environments for Terraform, Kubernetes, Python venvs, and similar workflows.

For large Terraform projects with many environment variables, keeping local runs consistent can be difficult. sbsh profiles help with that by defining everything once and setting up a clear prompt to avoid running commands in the wrong environment. The same applies to Kubernetes clusters with multiple environments and namespaces.

You can also share these manifests with teammates so that when they start a terminal session, the environment is configured exactly the same way, with the right variables, cluster context, and visual prompt. This ensures consistency and prevents human error across local and CI/CD environments.

In short, sbsh focuses on developer session reproducibility rather than system configuration. It fills the gap between shell setup, automation, and environment safety.

Thanks again for the feedback. I completely agree that adopting new tools is always a challenge, but I built this to solve my own workflow pain, and it has made a big difference for me.


Thanks for the additional details. I can see how a more lightweight dev tool could be helpful in many situations, and leveraging the API sounds interesting. Anything improving workflow pain is a big win. Thank you for sharing!


Skills seem like the way forward, but Claude still needs to be convinced to activate the skill. If that's not happening reliably, hooks should be able to help.

A sibling comment on hooks mentions some approaches. You could also try leveraging the UserPromptSubmit hook to do some prompt analysis and force relevant skill activation.
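For example, a UserPromptSubmit hook script could look roughly like this (an untested sketch: the skill names are made up, and you should verify the stdin JSON field names against the current Claude Code hooks docs before relying on them):

```python
import json

# Illustrative keyword -> skill map; these skill names are hypothetical.
SKILL_TRIGGERS = {
    "migration": "db-migrations",
    "changelog": "release-notes",
}

def skill_hints(prompt: str) -> list[str]:
    """Return the skills whose trigger keyword appears in the prompt."""
    lowered = prompt.lower()
    return [skill for word, skill in SKILL_TRIGGERS.items() if word in lowered]

# Claude Code passes the hook event as JSON on stdin; for UserPromptSubmit
# it includes the user's prompt. In a real hook you'd read it with
# json.load(sys.stdin); a literal stands in for it here.
event = json.loads('{"prompt": "Help me write a migration for the users table"}')
for skill in skill_hints(event.get("prompt", "")):
    # Anything the hook prints to stdout gets added to the model's context,
    # which is usually enough of a nudge to get the skill activated.
    print(f"Reminder: consider using the '{skill}' skill for this request.")
```

Register it under UserPromptSubmit in your settings and it runs on every prompt, so keep the analysis cheap.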


It's always laziness. The people that do the bare minimum will likely continue to do so, regardless of AI.

I think we'll be dealing with slop issues for quite some time, but I also have hopes that AI will raise the bar of code in general.


All valid and important points, but missing a painful one, also rarely represented in threads like this: flaky hardware.

Almost every bare metal success story paints a rosy picture of perfect hardware (which thankfully is often the case), or basic hard failures which are easily dealt with. Disk replacement or swapping 1U compute nodes is expected, and you probably have spares on hand. But it's a special feeling to debug the more critical parts that likely don't have idle spares just sitting around. The RAID controller that corrupts its memory, reboots, and rolls back to its previous known-good state. The network equipment that locks up with no explanation. Critical components that worked flawlessly for months or years, then shit the bed, but reboot cleanly.

Of course everyone built a secure management VLAN and has remote serial consoles hooked up to all such devices, right? Right? Oh good, they captured some garbled symbols. The vendor's first tier of support will surely not be outsourced offshore or reading from a script, and will have a quick answer that explains and fixes everything. Right?

The cloud isn't always the right choice, but if you can make it work, it sure is nice to not deal with entire categories of problems when using it.


Not saying those things don’t happen, but having worked with on-prem for 2 years, and having run ancient (currently 13-year-old) servers in my homelab for 5 years, I’ve never seen them. Bad CPU, bad RAM, yes - and modern servers are extremely good at detecting these and alerting you.

In my homelab, in 5 years of running the aforementioned servers (3x Dell R620, and some various Supermicros) 24/7/365, the only thing I had fail was a power supply. Turns out they’re redundant, so I ordered another one, and the spare kept the server up in the meantime. If I was running these for a business, I’d keep hot spares around.


I'm glad it's working for you! It's worked for me in the past as well, but I've also felt the pain. As I mentioned before, it's often the case that things will work, but in some ways, you need to have an increased appetite for risk.

I suppose it depends on scale and requirements. A homelab isn't very relevant IMHO, because the sample size is small and the load is negligible. Push the hardware 24/7 and the cracks are more likely to appear.

A nice-to-have service can suffer some downtime, but if you're running a non-trivial/sizable business or have regulation requirements, downtime can be rough. Keeping spare compute servers is normal, but you'll be hard pressed to convince finance to spend big money on core services (db, storage, networking) that are sitting idle as backups.



Agreed that homelab load is generally small compared to a company’s (though an initial Plex cataloging run will happily max out as many cores as you give it for days).

In the professional environment I mentioned, I think we had somewhere close to 500 physical servers across 3 DCs. They were all Dell Blades, and nothing was virtualized. I initially thought that latter bit was silly, but then I saw that no, they’d pretty well matched compute to load. If needs grew, we’d get another Blade racked.

We could not tolerate unplanned downtime (or rather, our customers couldn’t), but we did have a weekly 3-hour maintenance window, which was SO NICE. It was only a partial outage for customers, and even then, usually only a subset of them at a time. Man, that makes things easier, though.

They were also hybrid AWS, and while I was there, we spun up an entirely new “DC” in a region where we didn’t have a physical one. More or less lift-and-shift, except for managed Kafka, and then later EKS.


You have my vote


I agree with your assessment; it's maybe a bit of both.

The internet has given anyone/everyone a voice, for better or for worse, both widening and shortening the feedback loop. Now LLMs are shortening the loop even more, while unable to distinguish fact from fiction. Given how many humans will regurgitate whatever they read or heard as facts without applying any critical thought, the parallels are interesting.

I suspect that LLMs will affect society in several ways, assisting both the common consumers with whatever query they have at the moment, as well as DIY types looking for more in-depth information. Both are learning events, but even when requesting in-depth info, the LLM still feels like a shortcut. I think the gap between superficial and deep understanding of subjects is likely to get wider in the post-LLM world.

I do have hope for the garbage in, garbage out aspect though. The early/current LLMs were trained on plenty of garbage, but I think it's inevitable that will be improved.

