I just bought a new 6TB hard drive to replace two 1TB drives holding important data that are now past 6.5 years of power-on time. My experience with drives is that once they survive the first 3 months, they live for 5 years or so and then go. These two aren't showing any problems yet, but as I said: important data and nearly 7 years of use. On the same server are two 2TB drives approaching 4 years of power-on time. Total space in use is about 3.4TB, and it's growing by several hundred GB a month.
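(For context, this is how I've been watching those numbers: power-on hours show up as SMART attribute 9, so a quick look with smartmontools goes something like the following. The device names here are just placeholders.)

    smartctl -A /dev/sdb | grep -i power_on    # raw value is hours on most drives
    smartctl -A /dev/sdc | grep -i power_on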
My plan is to transfer all the data and workload to the new single drive, leaving the old drives as backups until they actually die.
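(The copy itself will probably just be rsync, roughly the sketch below; the mount points are made up.)

    rsync -aHAX --info=progress2 /mnt/old1tb-a/ /mnt/new6tb/a/    # -a archive, -H hard links,
    rsync -aHAX --info=progress2 /mnt/old1tb-b/ /mnt/new6tb/b/    # -A ACLs, -X extended attributes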
I decided I should put the new drive through its paces before expending the energy to move all that data.
I'm curious whether (and how) you folks test your hard drives. I haven't really done this in the past, but it seems a sensible thing to do.
I did discover a nifty little hard drive tester, whdd, which I installed from git (after a few needed dependencies); it's running a read test at the moment (pic attached).
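(For anyone who'd rather script a read test than drive whdd's menus, a read-only badblocks pass over the whole disk does much the same thing; /dev/sdd stands in for the new drive.)

    badblocks -sv /dev/sdd    # read-only by default; -s shows progress, -v reports bad sectors found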
Next I'll run a long S.M.A.R.T. test.
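(In concrete terms I'm planning roughly this with smartmontools; again, /dev/sdd is a stand-in for the new drive.)

    smartctl -t long /dev/sdd      # start the extended self-test; it runs inside the drive itself
    smartctl -l selftest /dev/sdd  # read the self-test log once it finishes
    smartctl -a /dev/sdd           # full SMART report to eyeball the attributes afterwards

Tips? Methods? Comments?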