Hacker News

It's really tragic that LLMs became normal consumer tech before public-key signatures did. It would be trivial for OpenAI to sign every API response, but the interfaces we use to share text don't bother implementing verification (besides GitHub, the superior social network).
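The signing half really is simple. A toy sketch of the idea in pure Python, using deliberately insecure 16-bit RSA parameters chosen for illustration only (a real deployment would use Ed25519 or RSA-2048+ through a vetted library, and a proper padding scheme):

```python
import hashlib

# Toy RSA signing: the provider signs a response with its private key
# (d, n); anyone holding the public key (e, n) can verify the signature.
# These tiny parameters are NOT secure -- illustration only.
p, q = 61, 53
n = p * q                           # 3233, the public modulus
e = 17                              # public exponent
d = pow(e, -1, (p - 1) * (q - 1))   # private exponent (2753)

def sign(message: bytes) -> int:
    # Hash the message, reduce it into the modulus, then apply the
    # private exponent. (Real schemes pad the digest; this is the bare math.)
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Undo the private-key operation with the public exponent and
    # compare against a freshly computed digest.
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

reply = b"As an AI language model, ..."
sig = sign(reply)
print(verify(reply, sig))              # True
print(verify(reply, (sig + 1) % n))    # False: any tampering breaks the check
```

The missing half is exactly what the comment laments: nothing downstream (chat screenshots, copy-pasted text) carries the signature along, so there is nowhere to run `verify`.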

I blame Zoom for acqui-sunsetting Keybase.



OpenAI actually solved this: you can share a link to a ChatGPT conversation, and then it's trivial to verify that it's authentic. You can't fake ChatGPT output in one of these without hacking OpenAI first.


Good point, although how long do those links stay live? Do they get deleted after 30 days or anything? Any idea if OpenAI has ever deleted one, especially for violating content policy or something?


True enough; I guess my comment is more a lament about the culture of sharing screenshots instead of a verifiable source.

Similar story with DALL-E 3 / SDXL output: it's the JPEG that gets shared, with no metadata that might link back to proof of where it was generated. (That assumes the person creating the image isn't deliberately misrepresenting it; if they want to lie and say it wasn't DALL-E 3, then we have to talk about OpenAI facilitating reverse image search...)
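Part of why the JPEG loses its provenance: metadata lives in marker segments separate from the compressed image data, so any screenshot or re-encode silently drops it. A minimal stdlib sketch of that structure, using a plain COM segment and a stub two-marker "JPEG" for illustration (real provenance schemes like C2PA embed signed manifests in APP11 segments instead):

```python
import struct

def add_comment(jpeg: bytes, comment: bytes) -> bytes:
    """Insert a COM (0xFFFE) segment right after the SOI marker."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    length = len(comment) + 2                  # segment length field includes itself
    segment = b"\xff\xfe" + struct.pack(">H", length) + comment
    return jpeg[:2] + segment + jpeg[2:]

def read_comments(jpeg: bytes) -> list:
    """Walk the marker segments and collect COM payloads."""
    comments, i = [], 2
    while i + 4 <= len(jpeg) and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xD9:                     # EOI: end of image
            break
        seg_len = struct.unpack(">H", jpeg[i + 2:i + 4])[0]
        if marker == 0xFE:                     # COM: comment segment
            comments.append(jpeg[i + 4:i + 2 + seg_len])
        i += 2 + seg_len
    return comments

# Minimal stand-in "JPEG": just SOI + EOI markers, no image data.
stub = b"\xff\xd8\xff\xd9"
tagged = add_comment(stub, b"generator=DALLE3")
print(read_comments(tagged))   # [b'generator=DALLE3']
print(read_comments(stub))     # [] -- stripping the segment loses the tag
```

Since the tag sits outside the image data, nothing about the pixels changes when it's removed, which is exactly why screenshots defeat this kind of provenance.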



