Bug#578352: mdadm: failed devices become spares!

Tim Small tim at buttersideup.com
Mon May 17 21:09:45 UTC 2010


Pierre Vignéras wrote:
> And the next question is: how to activate those 2 spare drives? I was 
> expecting mdadm to use them automagically.
>   

If you want to experiment with different ways of getting the data back,
but without risking writing anything to the drives, you could do this:

1. Use dmsetup to create copy-on-write "virtual drives" which see
through to the content of your real drives, but divert all writes
elsewhere, so you don't risk writing anything at all to them (a rough
sketch of both steps follows this list).

2. Use mdadm --create --assume-clean ...blahblah...
/dev/mapper/cow_drive_1  .....

to force mdadm to put the array back together the way you think it was
(the output of mdadm --examine on each member will be useful here).
You'll need to specify (at least - from memory):

. chunk (stripe) size
. metadata version (this affects the metadata location on the drives)
. correct device order (using the keyword "missing" in place of a
  single failed drive)
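
For step 1, something along these lines should do it (untested as
written, and the drive name /dev/sdb, the loop device, and the 1G COW
size are placeholders - repeat for each member drive):

# Sparse 1G file to soak up any writes the experiments generate
dd if=/dev/zero of=/tmp/cow_1 bs=1M count=0 seek=1024
losetup /dev/loop1 /tmp/cow_1

# Snapshot target: reads fall through to the real drive, writes go to
# the loop device; N = non-persistent COW, 8 = chunk size in sectors
echo "0 $(blockdev --getsz /dev/sdb) snapshot /dev/sdb /dev/loop1 N 8" \
  | dmsetup create cow_drive_1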
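
For step 2, the recreate might then look roughly like this - the
level, chunk size, metadata version, member count, and device order
below are invented for illustration, so substitute the real values
from mdadm --examine:

mdadm --create /dev/md1 --assume-clean --level=5 --raid-devices=4 \
  --chunk=64 --metadata=0.90 \
  /dev/mapper/cow_drive_1 /dev/mapper/cow_drive_2 \
  missing /dev/mapper/cow_drive_4

--assume-clean stops mdadm from starting an initial sync (which would
dirty the COW), and "missing" keeps the failed slot empty rather than
letting mdadm rebuild onto it.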


... after that you can run a read-only (or read-write) check on the
COW-backed md array to verify that you've got your data back, then
mount it read-only etc.  Once you're happy that your commands are
going to get things running again, you can run them "for real" on the
non-COW devices.
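
A minimal sketch of that verification, assuming the array came up as
/dev/md1 and holds a filesystem directly:

# Parity scrub - "check" only reads and counts inconsistencies
# (only meaningful if you didn't create with a "missing" slot)
echo check > /sys/block/md1/md/sync_action
cat /sys/block/md1/md/mismatch_cnt

# Filesystem sanity check and mount; neither writes anything
fsck -n /dev/md1
mount -o ro /dev/md1 /mnt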

See the recent list archives for my post on using a similar set of
commands for HW RAID data forensics, along with references....

HTH,

Tim.