Hacker News | rattray's comments

is there a way others can contribute their favorite motions? like a link to the relevant bit of source that could be shared here?


I'm not familiar with this parable, but that sounds like a good thing in this case?

Notably, this is not a gun.


Things that you think sound good might not sound good to the authority in charge of determining what is good.

For example, using your LLM to criticise, ask questions, or perform civil work that is deemed undesirable becomes evil.

You can use Google to find how the UK government, for example, has been using "law" and "terrorism" charges against people simply for tweeting or holding a placard deemed critical of Israel.

Anthropic is showing off these capabilities in order to secure defence contracts. "We have the ability to surveil and engage threats, hire us please".

Anthropic is not a tiny startup exploring AI; it's a behemoth bankrolled by the likes of Google and Amazon. It's a big bet. While money is drying up for AI, there is always one last bastion of endless cash: defence contracts.

You just need a threat.


> You can use Google to find how the UK government, for example, has been using "law" and "terrorism" charges

how dare they invoke law


their whole point is that the law can be abused


In general, such broad surveillance usually sounds like a bad thing to me.


You are right. If people can see where you are at all times, track your personal info across the web, monitor your DNS, or record your image from every possible angle in every single public space in your city, that would be horrible, and no one would stand for such things. Why, they'd be rioting in the streets, right?

Right?


I’m actually surprised whenever someone familiar with technology thinks that adding more “smart” controls to a mechanical device is a good idea, or even that it will work as intended.

The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.

But as a person familiar with tech, IoT, and how devices work in the real world, do you actually think it would work like that?

“Sorry, you cannot fire this gun right now because the server is down”.

Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even 99.99% false-positive avoidance means your own car is almost guaranteed to lock you out by mistake at some point during its lifetime, potentially when you need to drive to work, an appointment, or even an emergency.
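
To put rough numbers on that (the per-start rate and usage pattern below are illustrative assumptions, not measured figures):

    # Chance of at least one false lockout over a car's lifetime,
    # assuming 99.99% false-positive avoidance per start and roughly
    # 2 starts/day for 15 years (illustrative numbers).
    p_false = 1e-4
    starts = 2 * 365 * 15             # ~11,000 ignition attempts
    p_lockout = 1 - (1 - p_false) ** starts
    print(f"{p_lockout:.0%}")         # ~67%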


Never thought about this before, but we already have biometric scanners on our phones that we rely on and that work quite well. Why couldn't the same work for guns?


You’ve never had your fingerprint scanner fail because your hand is dirty?

Or you didn’t press it in the right spot?

Or the battery was dead?

If I’m out in the wild and a bear is coming my way (an actual situation that requires carrying a gun in certain locations), I do not want a fingerprint scanner deciding whether I can use the gun. This is the kind of idea that only makes sense to people who have no familiarity with the real-world use cases.


I will admit I am armchair-designing something I have not considered deeply, but all your edge cases sound like solvable problems to me, and if they aren't, then it's simply not a situation to use this particular solution for. E.g. biometrics on phones: unlock for a configurable amount of time, with fallbacks for when biometrics fail (the PIN) and emergency overrides for critical functionality like 911 calls. I was not proposing this be rolled out to all guns tomorrow by law, but I am equally skeptical that it is an intractable problem given some real-world design thinking. Or constrain the problem to home-defense weapons rather than rugged backcountry hunting.


They exist. For example, https://smartgun.com/technology and digging into the specs ... https://smartgun.com/tech-specs

    Facial recognition: 3D Infrared
    Fingerprint: Capacitive


I had a fingerprint scanner on an old phone and it would fail if there was a tiny amount of dirt or liquid on my finger or on the scanner. It's no big deal to have it fail on a phone; it's just a few seconds of inconvenience putting in a passcode instead. On a firearm, that's a critical safety defect. When it comes to safe storage, there are plenty of better options like a safe, a cable/trigger lock, or, for carrying, a retention holster (standard for law enforcement).


What if the biometric scanner stops you from shooting a target that someone other than you determined to be off-limits?


> The imagined ideal of a smart gun that perfectly identifies the user, works every time, never makes mistakes, always has a fully charged battery ready to go, and never suffers from unpredictable problems sounds great to a lot of people.

People accept that regular old dumb guns may jam, run out of ammo, and require regular maintenance. Why are smart ones the only ones expected to be perfect?

> “Sorry, you cannot fire this gun right now because the server is down”.

Has anyone ever proposed a smart gun that requires an internet connection to shoot?

> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

People already do this.


> People accept that regular old dumb guns may jam, run out of ammo, and require regular maintenance. Why are smart ones the only ones expected to be perfect?

This is stated as if smart guns are being held to a different, unachievable standard. In fact, they have all the same limitations you've already pointed out (on top of whatever software is in the way), and are held to the exact same standard as "dumb" guns: when I, the owner, pull the trigger, I expect it to fire.

Users like products that behave as they expect.


> when I, the owner, pull the trigger, I expect it to fire

You’ve never had a misfire or a jam? Ever?


Any smart technology adds a failure rate on top of those failure modes.

Arguing that because something might fail, any additional changes that introduce failure modes are therefore okay is the absolute worst claim to hear from any engineer. You can’t possibly be trying to make this argument in good faith.


A misfire or a jam are just as possible on a "smart" gun. Again, this is not a unique standard being applied unfairly.

Gun owners already treat reliability as a major factor in purchasing decisions. Whether that reliability is hardware or software is moot, as long as the thing goes "bang" when expected.

It's not hard to see the parallels to LLMs and other software, although ostensibly with much lower stakes.


> Gun owners already treat reliability as a major factor in purchasing decisions.

But zero smart guns are on the market. How are they evaluating this? A crystal ball?

Why do we not consider “doesn’t shoot me, the owner” as a reliability plus?


There are (or were, anyway) smart guns on the market. It's just that nobody wants to buy them.

As far as your comparison with misfires and jams, well... for one thing, your average firearm today has MRBF (mean rounds before failure) in the thousands. Fingerprint readers on my devices, though, fail several times every day. The other thing is that most mechanical failures are well-understood and there are simple procedures to work around them; drilling how to clear various failures properly and quickly is a big part of general firearms training, the goal being to be able to do it pretty much automatically if it happens. But how do you clear a failure of electronics?


> But zero smart guns are on the market. How are they evaluating this? A crystal ball?

It doesn't take a crystal ball to presume that a device designed to prevent a product from working might prevent the product from working in a situation you didn't expect.

> Why do we not consider “doesn’t shoot me, the owner” as a reliability plus?

Taking this question in good faith: You can consider it a plus if you like when shopping for a product, and that's entirely fair. Despite your clearly stated preference, it's not relevant (or is a negative) to reliability in the context of "goes bang when I intentionally booger hook the bang switch".

I'm not trying to get into the weeds on guns and gun technology. I generally believe in buying products that behave as I expect them to and don't think they know better than me. It's why I have a linux laptop and an android cell phone, and why I'm getting uneasy about the latter.


Sure; api.anthropic.com is not a mechanical device.


> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms. . .

Sadly, we’re already past this point in the US.


> Or how about when the criminals discover that they can avoid being shot by dressing up in police uniforms, fooling all of the smart guns?

Dressing up in police uniforms is illegal in some jurisdictions (like Germany).

And you might say 'Oh, but criminals won't be deterred by legality or lack thereof.' Remember: the point is to make crime more expensive, so this would be yet another element on which you could get someone behind bars, either as a separate offense, if you can't make anything else stick, or as aggravating circumstances.

> A very similar story is the idea of a drink-driving detector in every vehicle. It sounds good when you imagine it being perfect. It doesn't sound so good when you realize that even 99.99% false-positive avoidance means your own car is almost guaranteed to lock you out by mistake at some point during its lifetime, potentially when you need to drive to work, an appointment, or even an emergency.

So? Might still be a good trade-off overall, especially if that car is cheaper to own than one without the restriction.

Cars fail sometimes, so your life can't depend on 100% uptime of your car anyway.


> Cars fail sometimes, so your life can't depend on 100% uptime of your car anyway.

Try using this argument in any engineering context and observe how quickly you become untrusted for any decision making.

Arguing that because something doesn’t have 100% reliability, anything that makes it less reliable is okay is not real logic that real people use in the real world.


> Try using this argument in any engineering context and observe how quickly you become untrusted for any decision making.

We famously talk about the 'number of 9s' of uptime at e.g. Google. Nothing is 100%.

> Arguing that because something doesn’t have 100% reliability, anything that makes it less reliable is okay is not real logic that real people use in the real world.

That wasn't my argument at all. What makes you think so?

I'm saying that going for 100% reliability is a fool's errand.

So if the device adds a 1/1,000,000 failure mode, that might be perfectly acceptable.

Especially if it e.g. halves your insurance payments.

You could also imagine that the devices have an override button, but the button would come with certain consequences.


> but that sounds like a good thing in this case?

Who decides when someone is doing something evil?


Everyone decides that, wtf?


Well, what if you want the AI to red-team your own applications?

That seems like a valid use case that'd get hit.


It depends on who is creating the definition of evil. Once you have a mechanism like this, it isn't long after that it becomes an ideological battleground. Social media moderation is an example of this. It was inevitable for AI usage, but I think folks were hoping the libertarian ideal would hold on a little longer.


It’s notable that the existence of the watchman problem doesn’t invalidate the necessity of regulation; it’s just a question of how you prevent capture of the regulating authority such that regulation is not abused to prevent competitors from emerging. This isn’t a problem unique to statism; you see the same abuse in nominally free markets that exploit the existence of natural monopolies.

Anti-State libertarians posit that preventing this capture at the state level is either impossible (you can never stop worrying about who will watch the watchmen until you abolish the category of watchmen) or so expensive as to not be worth doing (you can regulate it but doing so ends up with systems that are basically totalitarian insofar as the system cannot tolerate insurrection, factionalism, and in many cases, dissent).

The UK and Canada are the best examples of the latter issue; procedures are basically open (you don’t have to worry about disappearing in either country), but you have a governing authority built on wildly unpopular ideas that the systems rely upon for their justification—they cannot tolerate these ideas being criticized.


Well said


Not really. It's like saying you need a license to write code. I don't think they actually want to be policing this, so I'm not sure why they are, other than as a marketing post or as absolution for the things that still get through their policing?

It'll become apparent how woefully unprepared we are for AI's impact as these issues proliferate. I don't think for a second that Anthropic (or any of the others) is going to be policing this effectively, or maybe at all. A lot of existing processes will attempt to erect gates to fend off AI, but I bet most will be ineffective.


Stainless | Engineers, EMs, Former Founders, GTM, Business, more | NYC & SF ONSITE | Full-time

Stainless is building the platform for high-quality, easy-to-use APIs.

When you `npm install openai` or `pip install anthropic` or `go get …cloudflare`, for example, you’re downloading code that we generated.

We’re ~3y old, ~35 people, have strong revenue, several years of runway, and great customers (including OpenAI, Anthropic, and Cloudflare). We’re backed by Sequoia & a16z.

As one of our first 30 engineers, you’ll get autonomy to build great products, skilled peers, opportunity for tremendous impact, and competitive salary, benefits, and equity.

We’re looking for exceptionally productive, thoughtful, tasteful, and kind people with a passion for making software development better for everyone.

Want to build the future of API tooling? See more at stainless.com/jobs


It's in the page footer fwiw. I agree it should be more prominent tho.


Taking this question at face value, because you asked: Stainless generates MCP servers for REST APIs (a ~simple[0] translation of an OpenAPI spec to a suite of MCP tools).

We actually generate MCP for free (we charge for SDKs), so we're technically not selling, but I don't begrudge GP's comment/sentiment.

[0] https://www.stainless.com/blog/what-we-learned-converting-co... describes some ways in which this is less simple than you think. The "Handling large APIs dynamically" section near the bottom covers the most salient challenge related to converting large APIs to MCP tools, but there's more work to do.
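
The core of that translation, as a rough sketch (a hypothetical helper, not our actual generator; real servers also handle auth, request bodies, pagination, etc.):

    # Sketch: map one OpenAPI operation to an MCP-style tool definition.
    def operation_to_tool(path: str, method: str, op: dict) -> dict:
        params = op.get("parameters", [])
        return {
            "name": op.get("operationId")
                or f"{method}_{path.strip('/').replace('/', '_')}",
            "description": op.get("summary", ""),
            "inputSchema": {
                "type": "object",
                "properties": {p["name"]: p.get("schema", {}) for p in params},
                "required": [p["name"] for p in params if p.get("required")],
            },
        }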


How many tools does your agent have access to?

At Stainless we use https://github.com/dgellow/mcp-front to make it easy for anyone on the team (including non-technical folks) to OAuth into a pretty wide variety of tools for their AI chats, using their creds. All proxied on infra we control.

Even our read replica postgres DB is available, just push a button.


Just 5 or 6. I'm just using the OpenAI tool call API for it; I own the agent (more people should!) so MCP doesn't do much for me.
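
For anyone curious what owning the agent looks like, a minimal sketch against the OpenAI Chat Completions tool-calling API (the weather tool and `run_tool` dispatch are made-up placeholders):

    import json
    from openai import OpenAI

    client = OpenAI()
    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",   # hypothetical tool
            "description": "Get the weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    def run_tool(name, args):
        # dispatch to your own code; stubbed here
        return {"forecast": "sunny"} if name == "get_weather" else {}

    messages = [{"role": "user", "content": "Weather in NYC?"}]
    while True:
        msg = client.chat.completions.create(
            model="gpt-4o", messages=messages, tools=tools,
        ).choices[0].message
        if not msg.tool_calls:          # no tool calls left: final answer
            print(msg.content)
            break
        messages.append(msg)            # keep the assistant turn in context
        for call in msg.tool_calls:
            result = run_tool(call.function.name,
                              json.loads(call.function.arguments))
            messages.append({
                "role": "tool",
                "tool_call_id": call.id,
                "content": json.dumps(result),
            })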


This. If you are running your own agent loop, MCP does nothing for you.

MCP is an inter-process (or inter-system) communication standard, and it's extremely successful at that. But some people try to shoehorn it into a single system where it makes for a cumbersome fit, like having your service talk to itself via MCP as a subprocess just for the sake of "hey, we have MCP".

If you own your loop AND your business logic lives in the same codebase/process as your agent loop, you don't need MCP at all, period. Just use a good agent framework like PydanticAI, define your tools (and have your framework forward your docstrings/arguments into the context) and you're golden!


Hi! I am a bit lost in all of this. How do you create your own agent and run your own loop? I've looked at PydanticAI but don't get it. Would you please give me an example? Thanks!


Of course! In the PydanticAI docs for Agents, you have a fully defined example of a tool-calling agent (roulette_wheel.py):

https://ai.pydantic.dev/agents/#introduction
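
Condensed, it looks roughly like this (paraphrasing that docs example; the exact API shifts a bit between PydanticAI versions, so treat the docs as authoritative):

    from pydantic_ai import Agent, RunContext

    agent = Agent(
        "openai:gpt-4o",
        deps_type=int,  # the winning square, injected per run
        system_prompt="Use the roulette_wheel tool to see if the customer won.",
    )

    @agent.tool
    def roulette_wheel(ctx: RunContext[int], square: int) -> str:
        """Check whether the square the customer bet on is a winner."""
        return "winner" if square == ctx.deps else "loser"

    result = agent.run_sync("Put my money on square eighteen", deps=18)
    print(result.output)  # `.data` in older versions

The key point is that the docstring and argument types are what the framework forwards to the model as the tool schema.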

If you need professional help with any of this, I also do consulting and/or mentoring in this area.


Hmm? We have a REST API, CLI, MCP server, and SDKs that all offer the same data/functionality.

MCP is for AI agents, the CLI is for one-off commands by devs who like to poke at things or CI scripts, the TypeScript SDK is for production software written in TypeScript, etc etc.

Was there something we're missing from the "data platform"? A SQL interface?

(I work with yjp20)


Yeah, CLIs actually often do seem better for agents with access to bash, like Claude Code.

That said, many "business users" like those referenced above interact more with a web UI, and asking them to audit bash/CLI interactions might not always work well.

(disclaimer: I work at Stainless; we're actually exploring ways to make MCP servers more "CLI-like" for API use-cases.)


I'm not sure I understand what you mean by "it requires a trip through an LLM for every transaction"?

In a normal case of "production software", yeah, you would not want to add an LLM in the middle to make an API call. That's silly – just write the code to make the API call if you can do it deterministically.

If you're talking to an LLM in a chat experience, and you want that LLM to go interact with some foreign system (i.e., hit an API), you need _some way_ of teaching the LLM how to make that API call. MCP is one such way, and it's probably the easiest at this point.

Doing it through MCP does introduce some latency due to a proxy server, but it doesn't introduce an additional LLM "in the middle".

(Disclaimer: I work at Stainless. Note that we actually sell SDKs at Stainless; our MCP generator is free.)


For the curious and lazy, said book appears to be https://leanpub.com/creatingmcpserverswithoauth

For more curious and lazy people -- what are elicitations?


Just call them “web forms” for LLMs.

You can ask the client to fill in a dropdown or input.

The example they give is a restaurant table-booking tool.

Imagine saying “book a table at 5pm”, but 5pm is taken.

You can “elicit” the user and have them select from a list of available times.
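
On the wire it's roughly this, paraphrasing the MCP spec's elicitation request (field names from memory, so double-check the spec):

    # Server -> client: ask the user to pick from available times.
    request = {
        "method": "elicitation/create",
        "params": {
            "message": "5pm is taken. Please pick another time.",
            "requestedSchema": {
                "type": "object",
                "properties": {
                    "time": {"type": "string",
                             "enum": ["5:30pm", "6pm", "6:30pm"]},
                },
                "required": ["time"],
            },
        },
    }
    # Client replies e.g. {"action": "accept", "content": {"time": "6pm"}}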

