
Sure, you might have some work to do when corruption occurs, but it won't be silent and it will be recoverable without data loss

I'd personally replace rather than re-add a drive with corruption, but perhaps I'm overly paranoid



If it were only one data block in one stripe, I'd be confident re-adding the same drive (and have done so); that's overwhelmingly likely to be a transient error that won't recur (e.g. bit rot on the drive, or a bit flip while writing, whether in the drive itself or in the machine's main memory).

The MD "check" action can confirm this (it iterates every stripe and reports all parity/data mismatches, so if it reports only one ...), and some distributions ship a cron job that runs it automatically on a monthly basis.
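For reference, a minimal sketch of driving that check through the md sysfs interface rather than waiting for the cron job (assuming the array is /dev/md0 and the script runs as root -- both assumptions, adjust for your setup):

  #!/usr/bin/env python3
  # Kick off an MD "check" pass and report the mismatch count when it finishes.
  # Equivalent to: echo check > /sys/block/md0/md/sync_action
  import pathlib
  import time

  md = pathlib.Path("/sys/block/md0/md")
  (md / "sync_action").write_text("check\n")

  # sync_action reads "check" while the pass runs and returns to "idle" when done.
  while (md / "sync_action").read_text().strip() != "idle":
      time.sleep(60)

  # mismatch_cnt is a sector count, so a single bad stripe shows up as a small
  # non-zero number rather than exactly 1.
  print("mismatch_cnt:", (md / "mismatch_cnt").read_text().strip())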

If it were a corrupt parity block in a stripe (i.e. a filesystem with strong error detection reports no errors, but the MD check action still reports a data/parity mismatch), that is usually more indicative of a lost write during a re-write operation, e.g. the machine was powered off in the middle of updating the contents of a stripe. The parity is written last, so it would describe the old data in that stripe, not the data as it is now.

The MD "repair" action (if you are ABSOLUTELY CERTAIN that it is the parity that is bad) will automatically correct this, and you should let it: if a disk holding one of that stripe's data blocks later fails, the stale parity will reconstruct incorrect data, which will then start showing up as filesystem errors (if you're fortunate enough to be using a filesystem that can detect them).
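If you have convinced yourself that the data blocks are good and only the parity is stale, the repair pass is triggered the same way; another sketch, again assuming /dev/md0 and root:

  #!/usr/bin/env python3
  # Start an MD "repair" pass: for stripes where data and parity disagree,
  # the parity is rewritten to match the current data blocks.
  import pathlib

  pathlib.Path("/sys/block/md0/md/sync_action").write_text("repair\n")

  # Progress can be followed in /proc/mdstat while the pass runs.
  print(pathlib.Path("/proc/mdstat").read_text())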

Of course, all of the usual caveats about checking SMART statistics apply in deciding whether a drive is still suitable for continued use. If the same drive kept showing up with the same problems, I'd retire it; if it started reporting an increasing reallocated sector count, I'd retire it; and so on.
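A small sketch for keeping an eye on those counters, assuming smartmontools is installed and the example device names match the members of your array (substitute your own):

  #!/usr/bin/env python3
  # Print the raw values of a few SMART attributes worth tracking over time.
  import subprocess

  WATCH = ("Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable")

  for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"):
      out = subprocess.run(["smartctl", "-A", dev],
                           capture_output=True, text=True).stdout
      for line in out.splitlines():
          fields = line.split()
          if len(fields) > 1 and fields[1] in WATCH:
              # smartctl -A prints the raw value in the last column.
              print(dev, fields[1], "raw =", fields[-1])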



