Drives do have bad spots. I have found a few on used drives at least:
one that hung and froze the computer, which I returned for exchange;
some where I located the bad spot with SMART, formatted around it
(leaving the bad spot in free space), and used the drive for years
afterwards; and one more that lasted maybe 6 months - it seemed
borderline worn out already, but the customer wanted to keep it, and I
said fine as long as he backed everything up ...
You can run the extended offline test with SMART:

smartctl -t long /dev/sdx

It takes hours. Another way, just as time consuming, is to run a wipe
program of your choice that writes zeroes to the entire drive.
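Roughly, the whole sequence looks like this (a sketch only - sdx is a
placeholder for your drive, and dd is just one choice of zero-writing
tool; the wipe destroys everything on the disk):

# start the long self-test; it runs inside the drive firmware
smartctl -t long /dev/sdx

# see the estimated runtime and, later, the self-test results
smartctl -c /dev/sdx
smartctl -l selftest /dev/sdx

# or instead write zeroes over the whole drive (DESTROYS ALL DATA)
dd if=/dev/zero of=/dev/sdx bs=1M

# afterwards, either way, look at the error log and reallocated sectors
smartctl -a /dev/sdx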
Either way, afterwards check the SMART error logs: smartctl -a /dev/sdx
(the error log is near the bottom of the output).

Brian

On Thu, Jul 7, 2011 at 11:19 PM, Andy Isaacson <adi@hexapodia.org> wrote:
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;"><div class="im">On Thu, Jul 07, 2011 at 06:48:36PM -0700, Seth David Schoen wrote:<br>
> <a href="http://www.coker.com.au/bonnie++/" target="_blank">http://www.coker.com.au/bonnie++/</a><br>
<br>
</div>I prefer fio(1). �<a href="http://linux.die.net/man/1/fio" target="_blank">http://linux.die.net/man/1/fio</a> and<br>
<a href="http://git.kernel.dk/?p=fio.git;a=summary" target="_blank">http://git.kernel.dk/?p=fio.git;a=summary</a><br>
<div class="im"><br>
> It seems to me that trying to check for errors is very unlikely to<br>
> find anything, because hard drives have extensive internal soft<br>
> error correction. �The probability of an error that gets detected<br>
> by the drive and reported to SMART must be _much_ higher than the<br>
> probability of an error where bad data silently reaches the<br>
> application.<br>
<br>
</div>Soft errors get reported in SMART as Hardware_ECC_Recovered and<br>
Reallocated_Sector_Ct and Raw_Read_Error_Rate (though how to interpret<br>
various vendors' RRER values is only documented in NDA materials AFAIK).<br>
You can also simply measure how long a given IO took to complete as a<br>
proxy for sector unreliability.<br>
<br>
I am pretty skeptical of the value of a burn-in cycle for disks; if you<br>
care so much about your data, just buy enterprise spindles (and pay the<br>
premium) or else use a software reliability layer above your cheap-ass<br>
SATA storage. �But, y'know, shrug. �To each his own data reliability<br>
model. :)<br>
<font color="#888888"><br>
-andy<br>
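For what it's worth, the counters Andy mentions can be pulled out with
smartctl, and a very crude read-timing check done with dd - a rough
sketch only; sdx is again a placeholder and the skip offset is
arbitrary:

# show just the soft-error-related attributes
smartctl -A /dev/sdx | egrep 'Raw_Read_Error_Rate|Reallocated_Sector_Ct|Hardware_ECC_Recovered'

# time a single uncached 64k read partway into the disk; dd reports
# the elapsed time, so an unusually slow read stands out
dd if=/dev/sdx of=/dev/null bs=64k count=1 skip=500000 iflag=direct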