Exactly my thoughts too. I'm fine with a simple noise-removal pass, but if the AI is context-aware, what's to stop it saying "hmm, this brain would look more like a normal brain if I remove these tumors"? Obviously they'll test for that, but testing only covers the common cases they consider; it's always going to be a risk for more unusual scenarios, and the danger with altering the data is that anyone looking at the results won't have a way to tell how dubious that data is.
Reminds me of https://en.wikipedia.org/wiki/Xerox#Character_substitution_b... which was _so much_ worse than the equivalent OCR bug because it occurred at the image level, where everyone expects errors to produce noise, not contextually sensible and sharp _but wrong_ characters.
EDIT: based on other comments below, this is thankfully not the case; the AI only models noise, it doesn't try to "fill in the blanks" based on how brains are supposed to look.
The ML denoising is within-sample, across voxels (or so I presume from similar work in small-animal MRI). And you can always keep both the "with" and "without" versions. I do not see any problem if a radiologist is in the review process.
Denoising can on average improve the result, but sometimes it will be wrong.
Spotting when it goes wrong is potentially a difficult task, but the difficulty generally scales with the difficulty of understanding the original image: if you can't spot that a denoising filter has screwed up, chances are you wouldn't have spotted anything interesting in the original image either.
But once an AI is context-aware things get way more complicated - it will try very hard to produce an image that doesn't _look_ wrong. Even when it goes wrong, it can still succeed at producing an image that looks correct; it just no longer matches the real brain that was scanned. Perhaps it decided a tumor was just a smudge on the lens, and invented some brain to go behind it. An operator expecting to see brain and seeing brain won't think anything of it. When the patient dies, they may look back and say "wow, that tumor didn't exist at all just 3 days before! That should be impossible!".
tl;dr: Having an AI that might make mistakes is one thing; having an AI that can invent exactly the data everyone is expecting to see is dangerous.
(reposting my comment) From my layman's understanding, the process is this:
1. Measure outside interference sources
2. Measure MRI of "nothing"
3. Use ML to estimate f(interference) = noise
4. Subtract estimated noise from signal
So the noise removal process has no awareness of brains, skeletons, etc.
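Roughly, in code, something like the following (my own sketch of that pipeline, not the paper's actual method; the file names, Ridge model, and array shapes are all placeholders):

    # Sketch of "learn noise from interference, then subtract it".
    # Assumes numpy + scikit-learn; all names here are made up.
    import numpy as np
    from sklearn.linear_model import Ridge

    # Steps 1-2: external interference picked up by reference sensors,
    # plus scans of "nothing" that show how that interference leaks
    # into the imaging channel.
    interference_ref = np.load("interference_reference.npy")  # (n_samples, n_ref_channels)
    empty_scan_noise = np.load("empty_scan_noise.npy")        # (n_samples,)

    # Step 3: fit f(interference) -> noise on the empty scans.
    # A linear model stands in for whatever ML model is actually used.
    model = Ridge(alpha=1.0).fit(interference_ref, empty_scan_noise)

    # Step 4: during a real scan, predict the noise from the
    # simultaneously recorded interference and subtract it.
    interference_live = np.load("interference_during_scan.npy")
    mri_signal = np.load("raw_mri_signal.npy")
    denoised = mri_signal - model.predict(interference_live)

    # The model only ever sees interference and noise, never anatomy,
    # so it has no notion of what a brain "should" look like.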