a company could derive value from increased headcount if investors or shareholders perceive it as a valuable metric and reward the company with more money or a higher valuation, regardless of other metrics.
i wonder if companies have seen increased valuations from saying they are hiring for tons of positions without actually following through on the hiring
they definitely do, but i think with podcasts i generally have a better ability to evaluate how much trust i should place in what i'm hearing. i know that if i'm listening to a professional chef podcast, i can probably trust they will generally be right talking about how they like to bake a turkey. if i'm listening to the hot wings guy interview zac efron, i know to be less blindly trusting of their info on astrophysics.
with chatgpt i don't know its "experience" or "education" on a topic, and it has no social accountability motivating it to get things right, so i can't estimate how much i should trust it in the same way.
I don't know how people use ChatGPT at all. It confidently hallucinated answers to 4 out of 5 of my latest "real" questions, with code examples and everything. Fortunately, with code I could easily verify that the provided solutions were worthless. Granted, I was asking questions about my niche that were hard enough that I couldn't easily Google them or find a solution myself, but I think that's the bar for being useful. The only thing it got right was finding a marketing slogan.
I've done this and, from what I can tell, it is reasonably accurate. However, I did have an instance where I was asking it a series of questions about the First Peloponnesian War, and partway through our discussion it switched topics to the first part of the Peloponnesian War, which are different conflicts. At least, I think they are. It was quite confusing.
Nero was a Roman Emperor from 54 to 68 AD, known for his controversial and extravagant reign. He was the last emperor of the Julio-Claudian dynasty. Here are some key points about his life and rule:
1. *Early Life and Ascension*: Nero was born Lucius Domitius Ahenobarbus in 37 AD. He was adopted by his great-uncle, Emperor Claudius, becoming Nero Claudius Caesar Drusus Germanicus. He ascended to the throne at the age of 17, after Claudius' death, which many historians believe Nero's mother, Agrippina the Younger, may have orchestrated.
2. *Reign*: Nero's early reign was marked by influence from his mother, tutors, and advisors, notably the philosopher Seneca and the Praetorian Prefect Burrus. During this period, he was seen as a competent ruler, initiating public works and negotiating peace with Parthia.
3. *Infamous Acts*: As Nero's reign progressed, he became known for his self-indulgence, cruelty, and erratic behavior. He is infamously associated with the Great Fire of Rome in 64 AD. While it's a myth that he "fiddled while Rome burned" (the fiddle didn't exist then), he did use the disaster to rebuild parts of the city according to his own designs and erected the opulent Domus Aurea (Golden House).
4. *Persecution of Christians*: Nero is often noted for his brutal persecution of Christians, whom he blamed for the Great Fire. This marked one of the first major Roman persecutions of Christians.
5. *Downfall and Death*: Nero's reign faced several revolts and uprisings. In 68 AD, after losing the support of the Senate and the military, he was declared a public enemy. Facing execution, he committed suicide, reportedly uttering, "What an artist dies in me!"
6. *Legacy*: Nero's reign is often characterized by tyranny, extravagance, and debauchery in historical and cultural depictions. However, some historians suggest that his negative portrayal was partly due to political propaganda by his successors.
His death led to a brief period of civil war, known as the Year of the Four Emperors, before the establishment of the Flavian dynasty.
I also did a brief fact check of a few details here and they were all correct. Zero hallucinations.
Does this make sense? Notice how little it matters if my understanding of Nero is complete or entirely accurate; I’m getting a general gist of the topic, and it seems like a good time.
This is missing the broad concern with hallucination: you are putting your trust in something that delivers all results confidently, even when they were predicted incorrectly. Your counter-argument is a lack of trust in other sources (podcasts, the education system); however, humans generally say when they don't know something, whereas LLMs will confidently output incorrect information. Knowing nothing about a certain subject, and (for the sake of argument) lacking research access, I would much rather trust a podcast specializing in that area than ask an LLM.
Put more simply: I would rather have no information than incorrect information.
I work in a field of tech history that is under-represented on Wikipedia but well represented elsewhere on the internet. It is incredibly easy to get ChatGPT to hallucinate and give incorrect answers to very basic questions about this field, even though the field has been discussed and covered quite accurately from the early days of Usenet all the way up to modern social media. Until the quality of the training data improves, I can never use ChatGPT for anything relating to this field, as I cannot trust its output.
I am continually surprised by how many people struggle to operate in uncertainty; I am further surprised by how many people seem to delude themselves into thinking that… podcasts… can provide a level of certainty that an LLM cannot.
In life, you exceptionally rarely have “enough” information to make a decision at the critical moment. You would rather know nothing than know some things? That’s not how the world works, not how discovery works, and not even how knowledge works. The things you think are certain are a lot less so than you apparently believe.
It may matter little to you that your understanding is not complete or entirely accurate, but some of my worst experiences have been discussing topics with people who think they have something insightful to add because they read a Wikipedia page or listened to a single podcast episode and decided that gave them a worthwhile understanding of something that often takes years to fully appreciate. A little knowledge and all that. For one thing, you don't know what you're missing by omission.
Good thing nobody here is suggesting blind trust. The mistake being made here is thinking I’m suggesting LLMs are a good way to learn. What I am instead saying is that podcasts are not a good way to learn, and should be treated with the same level of skepticism one holds for an LLM response.
i appreciate your confidence and would love to know how far you would go with that guarantee! it makes me realize there is at least one avenue for some level of trust in gpt accuracy, and that's my general awareness of how much written content on the topic it probably had access to during training.
i think maybe your earlier comment was about the average trustworthiness of all podcasts vs the same for all gpt responses. i would probably side with gpt4 in that context.
however, there are plenty of situations where the comparison is between a podcast from the best human in the world at something and a gpt that might have less training data. and sometimes the risk for the topic isn't eating an undercooked turkey but learning cpr wrong or having an airbag fail to deploy.
There are zero podcasts from “the best person in the world”, the very concept is absurd.
No one person is particularly worth listening to individually, and as a podcast??? Good lord no.
LLMs beat podcasts when it comes to "random exploration of an unfamiliar topic", every single time.
The real issue here is that you trust podcasts so completely, by the way, not that ChatGPT is some oracle of knowledge. More generally, a skill all people need to develop is the ability to explore an idea without accepting whatever you first find. If you’re spending an afternoon talking with ChatGPT about a topic, you should be able to A) use your existing knowledge to give a rough first-pass validation of the information you’re getting, which will catch most hallucinations out of the gate, as they’re rarely subtle, and B) take what you learn with a hefty grain of salt, as if you’re hearing it from a stranger in a bar.
This is an important skill, and absolutely applies to both podcasts and LLMs. Honestly, why such profound deference to podcasts in particular?
i don't think there's really any brand loyalty for OpenAI. people will use whatever is cheapest and best. in the longer run people will use whatever has the best access and integration.
what's keeping people with OpenAI for now is that chatGPT is free and GPT3.5 and GPT4 are the best. over time I expect the gap in performance to get smaller and the cost to run these to get cheaper.
if google gives me something close to as good as OpenAI's offering for the same price, and it pulls data from my gmail or my calendar or my google drive, then i'll switch to that.
People use "the chatbot from OpenAI" because that's what became famous and got all the world a taste of AI (my dad is on that bandwagon, for instance). There is absolutely no way my dad is going to sign up for an Anthropic account and start making API calls to their LLM.
But I agree that it's a weak moat: if OpenAI were to disappear, I could just tell my dad to use "this same thing but from Google" and he'd switch without thinking much about it.
good points. on second thought, i should give them due credit for building a brand reputation as the "best", which will continue even if they aren't the best at some point and will keep a lot of people with them. that's in addition to their other advantages: people will stay because it's easier than learning a new platform, and there might be lock-in if it's hard to move a trained gpt or your chat history to another platform.
This; if anything, people really don't like its verbose moralizing and anti-terseness.
Ok, the first few times you use it maybe it's good to know it doesn't think it's a person, but short and sweet answers just save time, especially when the result is streamed.
Every time I visit an archive.today link from HN I get stuck in a weird pseudo-Cloudflare/ReCaptcha “prove you are human” page, and every time I click “I am human” the page reloads. Are these checks even real?
archive.xxx wants your rough geolocation for 'reasons' (picking a nearby server for your reply is the reason given). Cloudflare's DNS resolver strips this information ('for your privacy', says Cloudflare).
Screw you, Cloudflare, says archive.xxx, we will penalise your referrals.
This has been the real, ongoing result of using Cloudflare's 1.1.1.1 DNS resolver for 12 months or more now.
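For the curious, here's a rough sketch of the piece of metadata being fought over: the EDNS Client Subnet (ECS) option that a resolver can attach to a DNS query so the authoritative server can geolocate you (wire format per RFC 1035 and RFC 7871; the query ID and the 203.0.113.0/24 subnet are arbitrary examples, not anything archive.xxx actually sees). Resolvers like 8.8.8.8 forward something like this upstream; 1.1.1.1 deliberately doesn't.

```python
# Builds a DNS A query for archive.today carrying an EDNS Client Subnet
# option -- the client-location hint that 1.1.1.1 refuses to pass along.
import struct

def build_query_with_ecs(name: str, subnet: str, prefix_len: int) -> bytes:
    # Header: ID, flags (RD set), QDCOUNT=1, ARCOUNT=1 (the OPT record)
    header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 1)
    # Question: length-prefixed labels, then QTYPE=A (1), QCLASS=IN (1)
    qname = b"".join(bytes([len(l)]) + l.encode() for l in name.split(".")) + b"\x00"
    question = qname + struct.pack(">HH", 1, 1)
    # ECS payload: family=1 (IPv4), source prefix length, scope 0, then
    # only as many address bytes as the prefix covers (RFC 7871)
    addr = bytes(int(o) for o in subnet.split("."))[: (prefix_len + 7) // 8]
    ecs = struct.pack(">HBB", 1, prefix_len, 0) + addr
    opt_rdata = struct.pack(">HH", 8, len(ecs)) + ecs  # option code 8 = ECS
    # OPT pseudo-RR: root name, TYPE=41, class = UDP payload size, TTL = flags
    opt = b"\x00" + struct.pack(">HHIH", 41, 4096, 0, len(opt_rdata)) + opt_rdata
    return header + question + opt

packet = build_query_with_ecs("archive.today", "203.0.113.0", 24)
```

When a resolver strips this option (or never adds it), the authoritative server only sees the resolver's own address, which is apparently what archive's nameservers punish.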
i'm not sure i've ever seen an ad that i considered helpful or useful, or was grateful to see
i get the dream, and i buy things all the time, but for me ads don't make the buying experience easier: i don't trust any of the content in them, so they don't save me any time researching what i want to buy. i'd rather have irrelevant ads, because maybe they're easier to ignore and they won't serve as a constant reminder of how much data companies have on me. seeing an ad for something i recently searched for on a different company's site generally makes me unhappy.
i think even just masking the background with a static image for a few of the scenes would be a gigantic improvement to hide the inconsistent quality of each frame