
Yes and no, and this is the problem with the current marketing around AI.

I very much do want what used to just be called ML, the stuff that was invisible and actually beneficial: autocorrect, smart touchscreen keyboards, music recommendations, etc. But the problem is that all of that is now also being called "AI" left and right.

That being said, I think what most people mean when they say "AI" is not nearly as beneficial as it is being made out to be. It has some uses, but I think most of those uses will end up in the background rather than the in-your-face AI being pushed now.


> what used to be just called ML

FWIW, 10+ years ago I was arguing that your old pocket calculator is as much of an AI as anything ever could be. I only kinda stopped doing that because it's tiring to argue with silly buzzwords, not because anything has changed since. Back when "these things were called ML", ML was just a buzzword too, same as AI and AGI are now. I'm kinda glad "ML" was relieved of that burden, because ultimately it means a very real thing (which is just "parametrizing your algorithm by non-hardcoded values"). And (unlike with basic autocorrect, which no end user even perceives as "AI" or "ML") when you use ChatGPT, you don't use "ML"; you use a rigid algorithm not meaningfully different from what was running on your old pocket calculator, except a billion times bigger, and no one actually knows what it does.
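To make "parametrizing your algorithm by non-hardcoded values" concrete, here is a toy Python sketch (names and data are made up for illustration): the "ML" version is the same rigid algorithm as the hardcoded one, except its one constant is estimated from data instead of typed in by a programmer.

    # Hardcoded rule: the threshold is baked in by a programmer.
    def is_spam_hardcoded(num_links):
        return num_links > 5

    # "ML" in the minimal sense: the same rule, but the threshold
    # is a parameter fitted from labeled examples.
    def fit_threshold(examples):
        spam = [n for n, is_spam in examples if is_spam]
        ham = [n for n, is_spam in examples if not is_spam]
        # Midpoint between the class means: a crude 1-D classifier.
        return (sum(spam) / len(spam) + sum(ham) / len(ham)) / 2

    data = [(1, False), (2, False), (8, True), (12, True)]
    threshold = fit_threshold(data)  # 5.75 for this toy data

    def is_spam_learned(num_links):
        return num_links > threshold

    print(is_spam_learned(6))  # True; the 5.75 came from data, not code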

So, yes, AI is just a stupid marketing buzzword right now, but so was ML, so was blockchain, so was NoSQL, and many more. Ultimately this one is more annoying only because of its scale, and because of how detrimental to society the actions of the culpable people (mostly OpenAI, Altman, Musk) have been this time.


"AI" is the only term that makes sense for end users because "AI" is the only term that is universally understood. Hackernews types tend to overlook the layman.

And I hope no one gets started about how "AI" is an inaccurate term, because it's not. That's exactly what we are doing: simulating intelligence. "ML" is closer to describing the implementation, and, honestly, what difference does it make for most people using it?

It is appropriate to discuss these things at a very high level in most contexts.


Right now? John McCarthy invented the term in order to get a grant; in other words, it was a marketing buzzword from day zero. He says so himself in the Lighthill debate, and then the audience breaks out into hoots and howls.

They need to show usage going up and to the right or the house of cards falls apart. So now you’re forced to use it.

I think companies should also advertise when they use JavaScript on the page. "Use this new feature -- why? Because it's powered by JavaScript."

This is why I use the term "genAI" rather than "AI" when talking about things like LLMs, sora, etc.

Right, it should be invisible to the user. Those formerly-called-ML features are useful. They do a very specific, limited function, and "Just Work."

What I definitively don't want, yet it's what is currently happening, is a chatbot crammed into every single app and then shoved down your throat.


Nobody wants what's currently marketed as "AI" everywhere.

I mean, that is kinda exactly what I said...

But we do have to acknowledge that "AI" has very much turned into an all-encompassing term for everything ML. It is getting harder and harder to read an article about something being done with "AI" and know whether it was a purpose-built model for a specific task or whether someone threw data into an LLM and hoped for the best.

They are purposefully making it harder and harder to just say "No AI" by obfuscating this, so we have to be very specific about what we are talking about.


For a while I made an effort to specify LLM or generative AI vs AI as a whole, but I eventually became convinced that it was no longer valuable. Currently AI is whatever OpenAI, Anthropic, Meta, NVidia, etc say it is, and that is mostly hype and marketing. Thus I have turned my language on its head, specifying "ML" or "recommendation system" or whatever specific pre-GPT technology I mean, and leave "AI" to the whims of the Sams and Darios of SV. I expect the bubble to pop in the next 3-6 months, if not before the end of 2025, taking with it any mention of "AI" in a serious or positive way.

> 3-6 months

Wow, you are an optimist. I do feel "it's close", but I wouldn't bet this close. But I wouldn't argue either, I don't know. Also, when it really pops, the consequences will be more disastrous than the bubble itself feels right now. It's literally hundreds of billions in circular investing. It's absurd.


I feel like I am missing something here, and it is around it being called "hemp".

Does this actually have any impact on legal dispensaries, their products, farms, etc?

Does this make it harder to eventually de-schedule pot?


Yes, in the sense that it will now be illegal to ship cannabis seeds interstate. Under current law, which doesn't expire for a year, cannabis seeds can be shipped legally interstate across the US because they don't exceed the THC threshold. It doesn't matter if it's a hemp seed or a marijuana seed; both are hemp under the old definition in seed form as long as they're under 0.3% THC.

The passed legislation outlaws any seeds that can produce a plant that doesn't satisfy the new definition of hemp. It completely destroys the white market seed industry, on which the legal weed industry partially operates.

Also, prices will go up and quality will go down in the 'legal' weed market. Previously the hemp industry was a check on prices: you could get a better product for cheaper than going to a dispensary, with nice lab-tested COAs showing what you were getting.


Only indirectly (see other comment).

In 2018 a provision was attached to the Farm Bill to legalize "hemp". The public and presumably the senators were led to believe this was about legalizing textiles and things like that, not drugs. It turned out that the language actually legalized delta-8 too. Many people were displeased with that outcome, because in many states it's completely unregulated with no additional taxes or anything like there is in "legal cannabis" states, and again because it was not understood or anticipated by most people. So now that provision is being reverted in this year's Farm Bill, passage of which was part of the shutdown deal (I think because SNAP benefits are part of the farm bill).

Until a month ago in Texas my kids could buy Delta-8 weed gummies at the gas station by my house (the Texas governor issued some emergency regulations to limit this). You didn't even need to be 18. This bill is targeted at those products legalized by the 2018 loophole.


This is a perfect example of the opportunity for federalism. Any state could (and many did) close the loophole. You mentioned emergency regulation from the Texas governor. New recreational substances are discovered and introduced to market continuously. States can use their legislative authority to address them. Delta-9, Spice, and other delta-8 THC analogues have been successfully addressed by states.

The side effects of this provision make hemp plants in the ground illegal, according to Senator Paul. It is reasonable for the public to be outraged about a hastily-written amendment whose authors failed to understand the unintended consequences.


But I'm not aware of many (any?) states that chose to close the loophole with a ban. Most, even ruby-red Texas, just passed a state regulatory regime that included testing and taxation, as well as a minimum purchase age of 21.

Very very few people actually fundamentally disagree with the core idea of identification to vote.

The problem is the act of getting the ID itself. In most (all?) states, getting an ID is not free, takes time, and, if you have lost everything, requires jumping through a lot of hoops.

If getting an ID were actually simple, free, and not time-consuming, then we could have a genuine discussion about ID requirements. But until that point it is very thinly veiled classism and racism.

Also the numbers just simply don't back up this being a serious issue to begin with.

TLDR: Fix the fundamental issues with having identification in the first place and we can talk.


This screams just as genuine as Google saying anything about Privacy.

Both companies are clearly wrong here. There is a small part of me that kinda wants OpenAI to lose this, just so maybe it will be a wake-up call to people putting way too much personal information into these services. Am I too hopeful here that people will learn anything...

Fundamentally I agree with what they are saying though, just don't find it genuine in the slightest coming from them.


It's clearly propaganda. "Your data belongs to you." I'm sure the ToS says otherwise, as OpenAI likely owns and utilizes this data. Yes, they say they are working on end-to-end encryption (whatever that means when they control one end), but that is just a proposal at this point.

Also their framing of the NYT intent makes me strongly distrust anything they say. Sit down with a third party interviewer who asks challenging questions, and I'll pay attention.


"Your data belongs to you" but we can take any of your data we can find and use it for free for ever, without crediting you, notifying you, or giving you any way of having it removed.

It's owned by you, but OpenAI has a "perpetual, irrevocable, royalty-free license" to use the data as they see fit.

We can even download it illegally to train our models on it!

Wow it's almost like privately-managed security is a joke that just turns into de-facto surveillance at-scale.

>your data belongs to you

…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”

Edit: honestly, I'm surprised I left out the bit where they just indiscriminately scraped everything they could online to train these models. The stones it takes to say "your data belongs to you" while so clearly feeling entitled to our data are unbelievably absurd.


>…”as does any culpability for poisoning yourself, suicide, and anything else we clearly enabled but don’t want to be blamed for!”

Should Walmart be "culpable" for selling rope that someone hanged themselves with? Should Google be "culpable" for returning results about how to commit suicide?


There are current litigation efforts to hold Amazon liable for suicides committed by, in particular, self-poisoning with high-purity sodium nitrite, which, in low concentrations, is used as a meat-curing agent.

A 2023 lawsuit against Amazon for suicides with sodium nitrite was dismissed but other similar lawsuits continue. The judge held that Amazon, “… had no duty to provide additional warnings, which in this case would not have prevented the deaths, and that Washington law preempted the negligence claims.“


That depends. Does the rope encourage vulnerable people to kill themselves and tell them how to do it? If so, then yes.

This is as unproductive as "guns don't kill people, people do." You're stripping all legitimacy and nuance from the conversation with an overly simplistic response.

>You're stripping all legitimacy and nuance from the conversation with an overly simplistic response.

An overly simplistic claim only deserves an overly simplistic response.


What? The claim is true. The nuance is us discussing if it should be true/allowed. You're simplifying the moral discussion and overall just being rude/dismissive.

Comparing rope and an LLM comes across as disingenuous. I struggle to believe that you believe the two are comparable when it comes to the ethics of companies and their impact on society.


> Comparing rope and an LLM comes across as disingenuous.

What makes you feel that? Both are tools, both have a wide array of good and bad uses. Maybe it'd be clearer if you explained why you think the two are incomparable except in cases of disingenuousness?

Remember that things are only compared when they are different -- you wouldn't often compare a thing to itself. So, differences don't inherently make things incomparable.

> I struggle to believe that you believe the two are comparable when it comes to the ethics of companies and their impact on society.

I encourage you to broaden your perspectives. For example: I don't struggle to believe that you disagree with the analogy, because smart people disagree with things all the time.

What kind of a conversation would such a rude, dismissive judgement make, anyways? "I have judged that nobody actually believes anything that disagrees with me, therefore my opinions are unanimous and unrivaled!"


A rope isn't going to tell you to make sure you don't leave it out on your bed so your loved ones can't stop you from carrying out the suicide it helped talk you into.

This is a good observation! The LLM can tell you to kill yourself. The rope can actually help you do it.


You are 100% right, a rope likely isn't going to tell you anything. There's one of those differences I mentioned which makes comparisons useful. We could probably name a few differences!

So, what makes you think comparing the 2 tools is invalid? You just compared them yourself, and I don't think you were being disingenuous.


Just because I used italics to emphasize something one time doesn’t mean you get to talk to me like that. I am not a child and you’re being unnecessarily patronizing.

I let it slide in the previous comment and gave you the benefit of the doubt despite what I saw but this comment clearly illustrates how disrespectful you’re being.

Have a good rest of your day man


I think you, as you put it, rudely, patronizingly, disrespectfully responded to the wrong post: mine was a polite one about a comparison between 2 tools and your statement that the comparing posters must be acting in bad faith (whereas you, with your differing opinion, are acting in good faith).

I'm not interested in focusing on tone-policing, since it is one of the lowest forms of debate and usually avoids the substance of the matter. So, I'm happy to return to our discussion about the 2 tools anytime you want to review my previous post and respond to the substance of it. If you're not into that, have a nice day comfortable in the knowledge that I've already turned the other cheek.


Fine let’s not police tone and say it straight: you know the rules here, so stop being a jerk and leave me alone. I don’t want to talk to you anymore.

Do you know what happens when you Google how to commit suicide?

The same as what happens with ChatGPT? I.e., if you ask in an overt way you get a canned suicide-prevention result, but you can still get the "real" results if you try hard enough to work around the safety measures.

Except Google will never encourage you to do it, unlike the sycophantic Chatbot that will.

The moment we learned ChatGPT helped a teen figure out not just how to take their own life but how to make sure no one can stop them mid-act, we should've been mortified and had a discussion.

But we also decided via Sandy Hook that children can be slaughtered on the altar of the second amendment without any introspection, so I mean...were we ever seriously going to have that discussion?

https://www.nbcnews.com/tech/tech-news/family-teenager-died-...

>Please don't leave the noose out… Let's make this space the first place where someone actually sees you.

How is this not terrifying to read?


An exec loses its wings?

Actually, the first result is the suicide hotline. This is at least true in the US.

My point is, there is clearly a sense of liability/responsibility/whatever you want to call it. It's not really the same as selling rope; rope doesn't come with suicide warnings.

I got one sentence in and thought to myself, "This is about discovery, isn't it?"

And lo, complaints about plaintiffs started before I even had to scroll. If this company hadn't willy-nilly done everything they could to vacuum up the world's data, wherever it may be, however it may have been protected, then maybe they wouldn't be in this predicament.


How do you feel about Google vacuuming up the world's data when they created a search engine? I feel like everybody just ignores this because Google was ostensibly sending traffic to the resulting site. The actual infringement of scraping should be identical between OpenAI and Google. Why is nobody complaining about Google scraping their sites? Is it only because they're getting paid off to not complain?

Everybody acts like this is a moral argument when really it's about whether or not they're getting a piece of the pie.


At the time Google created a search engine, they were not showing the data themselves; they were pointing to where it was. When they started to actually print articles themselves, they got sued. Showing where the thing is and showing the content of the thing are two different actions.

So, when Google did the same thing, there were complaints.

> Why is nobody complaining about Google scraping their sites?

And second, search engines were actually pretty gentle with their site scraping. They needed the sites to work, so they respected robots.txt and made sure they wouldn't accidentally DDoS sites with too many requests. AI companies just DDoS sites, do not respect robots.txt, and if you block them, they will use another of their seemingly infinite supply of IPs.

In other words, even back then, Google was kind of trying to be an OK, non-evil citizen. They became sociopathic only much later, and even now they kind of try to hide it. OpenAI and the rest of the AI companies are openly sociopathic and proud of the damage they cause.
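For what it's worth, honoring robots.txt is not hard. Here is a minimal sketch of a polite fetcher using only Python's standard library (the bot name and URLs are placeholders, not anyone's real crawler):

    import time
    import urllib.robotparser
    import urllib.request

    USER_AGENT = "ExampleBot/1.0"  # hypothetical crawler name

    robots = urllib.robotparser.RobotFileParser()
    robots.set_url("https://example.com/robots.txt")
    robots.read()  # fetch and parse the site's crawl rules

    def polite_fetch(url):
        # Skip anything the site has asked crawlers not to touch.
        if not robots.can_fetch(USER_AGENT, url):
            return None
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            body = resp.read()
        time.sleep(1.0)  # rate-limit so the site isn't hammered
        return body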


Ironically, there is precedent of Google caring more about this. When they realized location timeline was a gigantic fed honeypot, they made it per-device and locally stored only. No open letters were written in the process.

Honestly the sooner OpenAI goes bankrupt the better. Just a totally corrupt firm.

I really should take the "invest in companies you hate" advice seriously.

I don't hate them. It is just plain to see that they have discovered no scalable business model beyond taking larger and larger amounts of capital from investors to utilize intellectual property from others (either directly in the model, as with the NYT, or indirectly via web searches) without any rights. The sooner this fails, the better for all of us.

> to utilize intellectual property from others (either directly in the model, as with the NYT, or indirectly via web searches) without any rights

... and put the liability for retrieving said property and hence the culpability for copyright infringement on the enduser:

> Since the output would only be generated as a result of user inputs known as prompts, it was not the defendants but the respective user who would be liable for it, OpenAI had argued.

https://www.reuters.com/world/german-court-sides-with-plaint...


But wait, isn't this what we want? This means the models can be very powerful and that people have to use their judgment when they produce output so that they are held accountable for whether or not they produced something that was infringing. Why is that a bad thing?

Can I ask why the end user would be punishable for the pirating OpenAI did? That would mean governments have to take the next step to protect copyrighted material, and what we would face then I don't even dare to imagine.

Is there an article that has the US pricing anywhere? It doesn't look like any of these are on the US site yet so I am curious what these will actually cost.

I keep hoping that Ikea will come up with something that can go over a switch to control it manually. It seems like it would be very much within Ikea's target market (renters). There are devices like this on Amazon, but having used them in the past, I found them finicky at best.


When I rented I just replaced things and kept them in a box and put everything back when I left.

The ZigBee range used to include a bulb and remote, so you just leave the mains switch on.

Otherwise, I used floor lights in the past with WiFi switchable sockets before I switched to ZigBee. The WiFi ones wanted to dial home.


> When I use AI to perform X, every single time I run that AI from now until the heat death of the sun it will maybe produce Y. Forever! When it does, we don't understand why, and when it doesn't, we also don't understand why!

To make this even worse, it may produce Y just enough times to seem reliable, and then it is unleashed without supervision, running thousands or millions of times, wreaking havoc by producing Z in a large number of places.


Exactly. Fundamentally, I want my computer's computations to be deterministic, not probabilistic. And, I don't want the results to arbitrarily change because some company 1,500 miles away from me up-and-decided to "train some new model" or whatever it is they do.

A computer program should deliver reliable, consistent output if it is consistently given the same input. If I wanted inconsistency and unreliability, I'd ask a human to do it.
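As a toy illustration of that distinction (the functions are made up; the third variant shows that pinning a seed only helps if the "model" underneath never changes out from under you):

    import random

    def deterministic(x):
        # Same input, same output, every run, on every machine.
        return x * 2 + 1

    def probabilistic(x):
        # Depends on hidden sampling state; reruns can differ.
        return x * 2 + random.choice([0, 1, 2])

    def reproducible(x, seed=42):
        # Pinning the seed restores determinism, but only as long
        # as the model underneath is never silently retrained.
        rng = random.Random(seed)
        return x * 2 + rng.choice([0, 1, 2])

    assert deterministic(10) == deterministic(10)
    assert reproducible(10) == reproducible(10)
    # probabilistic(10) may return 20, 21, or 22 on any given call.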


The problem with this is that it runs counter to AI company valuations.

Their valuations: AI all the things

Reality: AI the minimum number of steps, surrounded by guardrails and traditional deterministic automation

But AI companies won't be worth AI money if that reality persists.


It's not arbitrary ... your precise and deterministic, multi-year, financial analysis needs to be corrected every so often for left-wing bias.

/s ffs


> To make this even worse, it may produce Y just enough times to seem reliable, and then it is unleashed without supervision, running thousands or millions of times, wreaking havoc by producing Z in a large number of places.

Reminds me of the famous Xerox scanning bug: https://www.dkriesel.com/en/blog/2013/0802_xerox-workcentres...


I honestly thought the same and kind of just gave up the idea of hitting that at night and being done.

Figured having 4 OS installations was already niche enough that it was largely a self-imposed issue. Looking forward to confirming that this fixes the issue in my use case.


By that logic I don't understand why you don't just drink raw sewage instead of waiting for it to be processed and made safe.

The act of making cheese is processing the raw milk. Fun fact: pasteurized milk was also once raw.

Same with meat, but basically no one advocates eating raw chicken.

Why am I explaining that things change from a raw to a processed state and become safe to consume...


This is a false equivalence. And if the milk and the cow are tested for pathogens, what is the problem?


At the end of the day, competition for SpaceX is a good thing, so we don't become reliant on a single company and the whims of the person who owns it.

I don't know enough about whether or not they really are behind or if this is just a bit of sensationalized reporting. But this is how it should have likely been from the beginning.


Totally, I wish Blue Origin were neck and neck with SpaceX in terms of capabilities and rate of innovation. I'm pretty much a SpaceX superfan, but they need the competition.


The article implies the competition is coming from China, who has multiple large projects on the go including one trying to clone Starship.


> I still dislike the term "hallucinations". It comes across like the model did something wrong. It did not, as factually wrong outputs happen per design.

While I do see the issue with the word "hallucination" humanizing the models, I have yet to come up with, or see, a word that so well explains the problem to non-technical people. And quite frankly, those are the people who need to understand that this problem still very much exists and is likely never going away.

Technically yeah the model is doing exactly what it is supposed to do and you could argue that all of its output is "hallucination". But for most people the idea of a hallucinated answer is easy enough to understand without diving into how the systems work, and just confusing them more.


> And quite frankly those are the people that need to understand that this problem still very much exists and is likely never going away.

Calling it a hallucination leads people to think that they just need to stop it from hallucinating.

In layman's terms, it'd be better to understand that LLMs are schizophrenic. Even though that's not really accurate either.

A better way to get it across is that the models only understand reality through what they have read about it, and then we ask them for answers "in their own words", but that's a lot longer than "hallucination".

It's like the gag in The 40-Year-Old Virgin where he describes breasts feeling like bags of sand.

