Depends. Even something basic like "check that the produced artifact is a valid .zip/.tar.gz" can be enough in the beginning; it probably would have prevented the issue I shared before.
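For that simple case, a check can be tiny. Here's a rough Python sketch using only the standard library; the file extensions and the "at least one member" rule are just assumptions for illustration, not anyone's actual setup:

```python
# Minimal sanity check: does the backup artifact open as a readable archive?
import sys
import tarfile
import zipfile

def artifact_looks_valid(path: str) -> bool:
    """True if the file opens as a .zip or .tar.gz and contains at least one member."""
    try:
        if path.endswith(".zip"):
            with zipfile.ZipFile(path) as zf:
                # testzip() returns the name of the first corrupt member, or None if all CRCs pass.
                return zf.testzip() is None and len(zf.namelist()) > 0
        if path.endswith((".tar.gz", ".tgz")):
            with tarfile.open(path, "r:gz") as tf:
                return len(tf.getnames()) > 0
    except (zipfile.BadZipFile, tarfile.TarError, OSError):
        return False
    return False

if __name__ == "__main__":
    path = sys.argv[1]
    if not artifact_looks_valid(path):
        sys.exit(f"backup artifact {path} failed the basic validity check")
```

Wire something like that into the backup job and a truncated or zero-byte archive gets caught the same day instead of on restore day.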
Then once you grow or need higher reliability, you can start adding more advanced checks, like verifying it has the tables/data structures you expect, and so on.
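Once you're actually restoring the dump somewhere as part of the test, the "does it have the tables I expect" check can stay small too. A sketch, assuming the restore target is SQLite just to keep it self-contained (for Postgres or MySQL the same idea works against information_schema); the table names and row-count thresholds are invented:

```python
# Rough sketch: check that a restored backup contains the expected structure.
# Assumes the dump has already been restored into a scratch SQLite database.
import sqlite3

EXPECTED_TABLES = {"users", "orders", "invoices"}  # hypothetical schema
MIN_ROWS = {"users": 1}  # e.g. an empty users table would be suspicious

def verify_restore(db_path: str) -> list[str]:
    """Return a list of problems found; an empty list means the restore looks sane."""
    problems = []
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT name FROM sqlite_master WHERE type = 'table'"
        ).fetchall()
        present = {name for (name,) in rows}
        for table in EXPECTED_TABLES - present:
            problems.append(f"missing table: {table}")
        for table, minimum in MIN_ROWS.items():
            if table in present:
                (count,) = con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()
                if count < minimum:
                    problems.append(f"{table} has only {count} rows")
    finally:
        con.close()
    return problems
```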
I had a funny experience where I somewhat regularly test an SQL backup, and one day the test didn't work. It worked the second time, the third, and the fourth. I have no idea why it failed that once. It turned into a permanent background process in the back of my head, the endless what-if loop.
I’m not sure what your point is. Business continuity requires a disaster recovery plan that must be tested regularly. It might be considered slog work, but like taking out the garbage, it’s non-negotiable and must be done.
"Great, first you wanted more money to buy compute and storage for dev and staging separate from production, and now you even more for 'testing backups'?!"