Well, in closed-source software people find 'bugs' all the time; a large portion of the time you don't need the source code at all to find them. That said, without the source you're not solving the issue and probably can't do anything about it.
On top of that, the vast majority of people won't know what to do if they find a bug. In the distant past I was one of those people, and only recognized some of the bugs I saw in hindsight. It's been nearly 30 years so the details are mostly gone now, but I remember messing around in Windows and finding I could make Microsoft NetMeeting crash with a buffer overrun error.
Of course, I was really new to computers then, and understanding that buffer overflows in networked applications were really bad things was beyond me (and seemingly beyond a lot of people who had been in the industry for years). Even then, attempting to report security issues in those days would have been far more difficult (hell, even risky in many cases).
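For anyone who hasn't seen why that class of bug is so dangerous, here's a minimal C sketch of the classic pattern (purely illustrative; I have no idea what NetMeeting's actual code looked like): a handler copies an attacker-controlled network payload into a fixed-size stack buffer without checking the length.

    #include <stdio.h>
    #include <string.h>

    /* Illustrative only: the classic unchecked-copy pattern,
       not NetMeeting's actual code. */
    void handle_packet(const char *payload, size_t len) {
        char buf[64];
        /* Bug: len comes from the network; anything over 64 bytes
           overwrites the stack (saved registers, return address). */
        memcpy(buf, payload, len);
    }

    /* The fix is a one-liner: reject or truncate oversized input. */
    void handle_packet_safe(const char *payload, size_t len) {
        char buf[64];
        if (len >= sizeof(buf))
            return; /* drop the packet instead of smashing the stack */
        memcpy(buf, payload, len);
        buf[len] = '\0';
        printf("got: %s\n", buf);
    }

    int main(void) {
        char attack[256];
        memset(attack, 'A', sizeof(attack));
        handle_packet_safe(attack, sizeof(attack)); /* safely dropped */
        /* handle_packet(attack, sizeof(attack)); crashes, or worse:
           a crafted payload can redirect execution. */
        return 0;
    }

The crash I saw was the benign outcome; with a crafted payload the same bug is remote code execution, which is exactly why dismissing it as "it just crashes" would have been a mistake.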
So really, several things are required: (1) running into the issue; (2) having a deep enough understanding of computing to realize the issue is bad; (3) having a means of reporting the bug to a place where people will look at it; and (4) a security culture that knows when and how to act on the bug report.
Both proprietary and open code run into problems at (3), because people don't want to hear it, for commercial or ego reasons. With FOSS you at least get the direct-intervention route of simply fixing the bug and publishing the patch, which, done responsibly, may or may not force a maintainer's hand. With proprietary code you can piss into the wind and be ghosted, or sued; hence more irresponsible/anonymous disclosure.
Anything that maximises the likelihood of getting from (1) to (4) safely must be a good thing, so I think FOSS yields the better security model.