Really this is an effect of all optimization approaches, not just "evolutionary" ones. Even simple parameter hill climbers will do that.
A fun personal example: many years ago I was trying to optimize an antenna design. I used a simple black-box optimizer to adapt a parametrization of the geometry, and a simulator to characterize the performance. I started it off and it was slowly making progress. The next day I came back and was excited to see _very good results_ ... but it turned out that it had made the length of the antenna _negative_, and the simulation was spouting nonsense (like a peak gain that was a complex number). :)
The fundamental unreality of negative lengths must be why it never occurred to me to add that as a constraint, or to make sure the simulation handled them gracefully... much in the same way that input fuzzing can turn up nasty bugs in otherwise well-tested, competently written software.
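The failure mode is easy to reproduce in miniature. Here is a toy sketch (everything here is made up for illustration, not the actual setup): a plain hill climber with no bounds on the length parameter, driving a "simulator" whose negative-length branch silently returns spuriously huge gains instead of erroring out. The optimizer happily walks the length down through zero and runs away into nonsense territory.

```python
import random

def simulate_gain(length):
    """Toy stand-in for an antenna simulator (hypothetical).

    For positive lengths it returns a sane figure of merit; for
    negative lengths the math silently "works" and returns a
    spuriously huge gain -- the bug the optimizer will exploit.
    """
    if length <= 0:
        return 1000.0 + abs(length)  # nonsense, but the optimizer loves it
    return 10.0 - length             # in this toy, shorter looks better

def hill_climb(x0, steps=2000, step_size=0.05, seed=0):
    """Simple random hill climber with NO constraint keeping length > 0."""
    rng = random.Random(seed)
    x, best = x0, simulate_gain(x0)
    for _ in range(steps):
        cand = x + rng.uniform(-step_size, step_size)
        score = simulate_gain(cand)
        if score > best:  # nothing here checks physical plausibility
            x, best = cand, score
    return x, best

length, gain = hill_climb(1.0)
# The climber drifts downward, crosses zero, and settles on a
# physically meaningless negative length with an absurd "gain".
```

The fix is mundane once you know to look: clamp or reject candidates outside the physical domain, or have the simulator raise on impossible inputs instead of returning garbage.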