
> That way we can evaluate after the fact how well people did.

How exactly do you propose to do that, given that all we observe afterwards is the binary fact of the event occurring or not?



Let's say somebody makes a prediction that something will happen, and it doesn't. You could evaluate the prediction differently if they gave it a 55% chance of happening vs a 95% chance.
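
As a rough sketch of what that evaluation could look like (not anything proposed in this thread): a proper scoring rule such as the Brier score penalizes a confident wrong prediction much more than a hesitant one. The (probability, outcome) pairs below are hypothetical.

    # Minimal sketch: Brier score over a list of (p, happened) pairs,
    # where p is the stated probability and happened is True/False
    # once the outcome is known. Lower is better.
    def brier_score(predictions):
        return sum((p - float(happened)) ** 2
                   for p, happened in predictions) / len(predictions)

    # The 95% prediction is punished far more than the 55% one
    # when the event fails to happen.
    print(brier_score([(0.95, False)]))  # ~0.9025
    print(brier_score([(0.55, False)]))  # ~0.3025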


Over many repeated experiments that might prove useful.

But what if I predict with 100% certainty heads, and the coin comes up heads. Did we learn anything "useful" about my prediction?


This thread is full of predictions. Some users make several in the same comment, even!

So we are indeed talking about tens or hundreds of them, not just a single one.


I guess you could attach the probability as a weight to each event outcome, instead of taking a uniform mean. Not that it's "impossible" without probabilities; it's just a different metric.
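
As a rough illustration of that idea (my own interpretation; the function names and the (probability, outcome) representation are made up, not a standard metric):

    # Uniform mean: every prediction counts equally.
    def uniform_hit_rate(predictions):
        return sum(1 for _, happened in predictions if happened) / len(predictions)

    # Probability-weighted variant: each outcome is weighted by the
    # probability the predictor attached to it.
    def weighted_hit_rate(predictions):
        total = sum(p for p, _ in predictions)
        hits = sum(p for p, happened in predictions if happened)
        return hits / total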



