As soon as it consistently returns factual, confirmable answers, I'll use it. I just had to fix something a co-worker fucked up by asking AI how to do it. The responses are so confidently wrong it's like watching Kash Patel tell me that Jeffrey Epstein killed himself.
I agree. Overconfidence and sycophancy are the real problem. That's where development energy should go. The models are already capable; now they need to be reliable.