Judging by this thread, surely part of OpenAI’s business model is to release models with somewhat grey-area, outlandish claims, then sit back and wait for people to test them out, paying top dollar for tokens.
The value OpenAI get here is that people effectively run a massively parallel brute force attack against the new models to figure out exactly what they can and can’t do.
> The value OpenAI get here is that people effectively run a massively parallel brute force attack against the new models to figure out exactly what they can and can’t do.
I'm pretty sure the value they get is the money you pay.
No, in this case it really is the usage. This is a brand new model and nobody knows how best to use it yet. OpenAI researchers have been tweeting as much (sadly I’ve lost the tweet).