Hacker News

they definitely do but i think in podcasts i generally have a better ability to evaluate how much trust i should place in what i'm hearing. i know if i'm listening to a professional chef podcast, i can probably trust they will generally be right talking about how they like to bake a turkey. if i'm listening to the hot wings guy interview zac efron i know to be less blindly trusting of their info on astrophysics.

with chatgpt i don't know its "experience" or "education" on a topic, and it has no social accountability motivating it to make sure it gets things right, so i can't estimate how much i should trust it in the same way.



I guarantee ChatGPT won't lie to you about how to bake a turkey.

The paranoia around hallucination is wildly overblown, especially given the low stakes of this context.


I don't know how people use ChatGPT at all. It confidently hallucinated answers to 4 out of 5 of my latest "real" questions, complete with code examples and everything. Fortunately, with code I could easily verify that the provided solutions were worthless. Granted, I was asking questions about my niche that were hard enough that I couldn't easily Google them or find a solution myself, but I think that's the bar for being useful. The only thing it got right was finding a marketing slogan.


Try asking it about the Roman empire (e.g. “who was Nero?”) then checking the answer.

It’s very good at things like that. Go down a whole Roman Empire rabbit hole, have fun with it!

This is what an idle afternoon talking to ChatGPT is about, not trying to get it to do your job for you.


I've done this and, from what I can tell, it is reasonably accurate. However, I did have an instance where I was asking it a series of questions about the First Peloponnesian War, and partway through our discussion it switched topics to the first part of the Peloponnesian War, which are different conflicts. At least, I think they are. It was quite confusing.


Here, I did it for you:

Nero was a Roman Emperor from 54 to 68 AD, known for his controversial and extravagant reign. He was the last emperor of the Julio-Claudian dynasty. Here are some key points about his life and rule:

1. *Early Life and Ascension*: Nero was born Lucius Domitius Ahenobarbus in 37 AD. He was adopted by his great-uncle, Emperor Claudius, becoming Nero Claudius Caesar Drusus Germanicus. He ascended to the throne at the age of 17, after Claudius' death, which many historians believe Nero's mother, Agrippina the Younger, may have orchestrated.

2. *Reign*: Nero's early reign was marked by influence from his mother, tutors, and advisors, notably the philosopher Seneca and the Praetorian Prefect Burrus. During this period, he was seen as a competent ruler, initiating public works and negotiating peace with Parthia.

3. *Infamous Acts*: As Nero's reign progressed, he became known for his self-indulgence, cruelty, and erratic behavior. He is infamously associated with the Great Fire of Rome in 64 AD. While it's a myth that he "fiddled while Rome burned" (the fiddle didn't exist then), he did use the disaster to rebuild parts of the city according to his own designs and erected the opulent Domus Aurea (Golden House).

4. *Persecution of Christians*: Nero is often noted for his brutal persecution of Christians, whom he blamed for the Great Fire. This marked one of the first major Roman persecutions of Christians.

5. *Downfall and Death*: Nero's reign faced several revolts and uprisings. In 68 AD, after losing the support of the Senate and the military, he was declared a public enemy. Facing execution, he committed suicide, reportedly uttering, "What an artist dies in me!"

6. *Legacy*: Nero's reign is often characterized by tyranny, extravagance, and debauchery in historical and cultural depictions. However, some historians suggest that his negative portrayal was partly due to political propaganda by his successors.

His death led to a brief period of civil war, known as the Year of the Four Emperors, before the establishment of the Flavian dynasty.


I also did a brief fact check of a few details here and they were all correct. Zero hallucinations.

Does this make sense? Notice how little it matters if my understanding of Nero is complete or entirely accurate; I’m getting a general gist of the topic, and it seems like a good time.


This is missing the broad concern with hallucination: you are putting your trust in something that delivers all results confidently, even the ones it predicted incorrectly. Your counter-argument is a lack of trust in other sources (podcasts, the education system); however, humans generally say when they don't know something, whereas LLMs will confidently output incorrect information. Knowing nothing about a certain subject, and (for the sake of argument) lacking research access, I would much rather trust a podcast specializing in that area than ask an LLM.

Put more simply: I would rather have no information than incorrect information.

I work in a field of tech history that is under-represented on Wikipedia but well represented elsewhere on the web. It is incredibly easy to get ChatGPT to hallucinate and give incorrect answers to very basic questions about this field, even though the field has been discussed and covered quite accurately from the early days of Usenet all the way up to modern social media. Until the quality of the training data improves, I can never use ChatGPT for anything relating to this field, as I cannot trust its output.


I am continually surprised by how many people struggle to operate in uncertainty; I am further surprised by how many people seem to delude themselves into thinking that… podcasts… can provide a level of certainty that an LLM cannot.

In life, you exceptionally rarely have “enough” information to make a decision at the critical moment. You would rather know nothing than know some things? That’s not how the world works, not how discovery works, and not even how knowledge works. The things you think are certain are a lot less so than you apparently believe.


It may matter little to you that your understanding is not complete or entirely accurate, but some of my worst experiences have been discussing topics with people who think they have something insightful to add because they read a Wikipedia page or listened to a single podcast episode and decided that gave them a worthwhile understanding of something that often takes years to fully appreciate. A little knowledge is a dangerous thing, and all that. For one, you don't know what you're missing by omission.


It also doesn’t matter to you, unless you’re claiming you only ever discuss topics you’re extremely well versed in.


It has lied to me about extremely basic things. It absolutely is worth being paranoid about. You simply cannot blindly trust it.


Good thing nobody here is suggesting blind trust. The mistake being made here is thinking I'm suggesting LLMs are a good way to learn. What I am instead saying is that podcasts are not a good way to learn either, and should be treated with the same level of skepticism one holds for an LLM response.


i appreciate your confidence and would love to know how far you would go with that guarantee! it makes me realize there is at least one avenue for some level of trust in gpt accuracy, and that's my general awareness of how much written content on the topic it probably had access to during training.

i think maybe your earlier comment was about the average trustworthiness of all podcasts vs the same for all gpt responses. i would probably side with gpt4 in that context.

however, there are plenty of situations where the comparison is between a podcast from the best human in the world at something and gpt, which might have less training data. and maybe the risks for the topic aren't eating an undercooked turkey but learning cpr wrong or having an airbag not deploy.


There are zero podcasts from “the best person in the world”, the very concept is absurd.

No one person is particularly worth listening to individually, and as a podcast??? Good lord no.

LLMs beat podcasts when it comes to, “random exploration of an unfamiliar topic”, every single time.

The real issue here is that you trust podcasts so completely, by the way, not that ChatGPT is some oracle of knowledge. More generally, a skill all people need to develop is the ability to explore an idea without accepting whatever you first find. If you’re spending an afternoon talking with ChatGPT about a topic, you should be able to A) use your existing knowledge to give a rough first-pass validation of the information you’re getting, which will catch most hallucinations out of the gate, as they’re rarely subtle, and B) take what you learn with a hefty grain of salt, as if you’re hearing it from a stranger in a bar.

This is an important skill, and absolutely applies to both podcasts and LLMs. Honestly why such profound deference to podcasts in particular?



