Training a model doesn't mean you understand what the individual neurons actually do to influence the output. Nobody knows that. That's where the black box analogy comes in: we know what goes into the box and what comes out, but we don't know what the box is doing to the data in between.
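To make that concrete, here's a minimal toy sketch (a made-up two-layer network in NumPy, nothing to do with ChatGPT's actual architecture): every weight and every intermediate activation is a plain number we can print, yet those numbers by themselves don't tell us what concept, if any, an individual neuron represents.

```python
import numpy as np

# Hypothetical toy network: the whole "box" is visible as numbers,
# but the numbers alone don't explain what the neurons mean.
rng = np.random.default_rng(0)

# Two-layer MLP: 4 inputs -> 8 hidden neurons -> 1 output
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

x = np.array([0.5, -1.0, 2.0, 0.1])     # what goes into the box
hidden = np.maximum(0, W1 @ x + b1)     # internal activations: fully inspectable
y = W2 @ hidden + b2                    # what comes out of the box

print("input:", x)
print("hidden activations:", hidden)    # we can read every value...
print("output:", y)                     # ...but nothing here says what concept
                                        # any individual hidden neuron encodes
```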
Are you saying that the people who created ChatGPT don't understand how it works? Or that the rest of us don't?