“Good” isn’t expressed in code here. GPT-3 was trained on a very loose objective (next-word prediction). InstructGPT/ChatGPT were then further trained with reinforcement learning from human feedback, i.e. a reward signal derived from human raters’ preferences.
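To make the “loose objective” concrete, here’s a minimal sketch of next-token prediction as a cross-entropy loss. This is illustrative, not OpenAI’s code; `model` is assumed to be any network mapping token ids to vocabulary logits:

    import torch.nn.functional as F

    def next_token_loss(model, tokens):
        # tokens: (batch, seq_len) integer token ids.
        # Predict token t+1 from the tokens up to t.
        inputs, targets = tokens[:, :-1], tokens[:, 1:]
        logits = model(inputs)  # (batch, seq_len - 1, vocab_size)
        return F.cross_entropy(
            logits.reshape(-1, logits.size(-1)),
            targets.reshape(-1),
        )

Nothing in that loss says what “good” text is; the model just learns whatever distribution the training data exhibits.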
If it were all explicitly hand-coded program logic, it would act like ELIZA.
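For contrast, ELIZA-style behavior really is just hand-written pattern/response rules, something like this toy responder (rules invented for illustration):

    import re

    RULES = [
        (re.compile(r"\bi am (.*)", re.I), "Why do you say you are {0}?"),
        (re.compile(r"\bi feel (.*)", re.I), "Tell me more about feeling {0}."),
    ]

    def respond(text):
        # Return the first matching canned template, else a stock reply.
        for pattern, template in RULES:
            m = pattern.search(text)
            if m:
                return template.format(*m.groups())
        return "Please go on."

    print(respond("I am worried about AI"))
    # -> Why do you say you are worried about AI?

Every behavior of a program like this was typed in by a person; a trained model’s behavior comes from its data and objective instead.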
"good" for gpt was expressed in the way they chose the dataset to include.
Just because past generative text models (like ELIZA) were bad doesn't mean the algorithms we have now are much more than better versions of the same thing.
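On the dataset-curation point: the GPT-3 paper describes filtering Common Crawl with a classifier trained to prefer documents that resemble curated reference corpora. A hedged toy version of that idea, where every detail (features, examples, threshold) is invented for illustration:

    from sklearn.feature_extraction.text import HashingVectorizer
    from sklearn.linear_model import LogisticRegression

    vec = HashingVectorizer(n_features=2**16)

    # Positives: text resembling curated corpora; negatives: raw crawl junk.
    good = ["a carefully edited encyclopedia passage about photosynthesis"]
    junk = ["click here buy now free free free winner winner"]
    X = vec.transform(good + junk)
    y = [1] * len(good) + [0] * len(junk)

    clf = LogisticRegression().fit(X, y)

    def keep(doc, threshold=0.5):
        # Keep documents the classifier scores as "good enough".
        return clf.predict_proba(vec.transform([doc]))[0, 1] >= threshold

The value judgment lives in which examples count as "good" here, not in the model's code.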