Bug#603343: I have recreated this bug: a newly added HDD becomes a spare instead of an active device

Denis Toader denis at nano-net.ro
Sun Dec 26 05:00:01 UTC 2010


Package: mdadm
Version: 2.6.7.1
Severity: important

I am using Ubuntu 10.04.
It's possible that I once forgot to "--set-faulty" a device before
removing it.
First I created a RAID1 with 2 devices. After encountering the bug I
grew the array from 2 to 4 devices, hoping that would make room for
other disks to be added.
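For the record, as far as I understand the man page, the usual removal
sequence is something like the following ("--fail" is a synonym for
"--set-faulty"; the device names here are only examples):

    # mark the member faulty first, then remove it from the array
    sudo mdadm /dev/md3 --fail /dev/sdX3
    sudo mdadm /dev/md3 --remove /dev/sdX3
    # a replacement is then added as a new member
    sudo mdadm /dev/md3 --add /dev/sdY3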
Here is the current state ([4/1] in the md3 line below means 4 raid
slots with only 1 active; [U___] shows which slots are up):

dennis1 ~ $ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdd1[2] sdc1[0] sda1[1]
      30716160 blocks [3/3] [UUU]
      bitmap: 14/235 pages [56KB], 64KB chunk

md3 : active raid1 sda3[4](S) sdd3[5] sdc3[0]
      452550976 blocks [4/1] [U___]
      bitmap: 45/216 pages [180KB], 1024KB chunk

unused devices: <none>

dennis1 ~ $ sudo mdadm --examine /dev/sdc3
[sudo] password for denis: 
/dev/sdc3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 4c2769e4:568869bc:2a972c67:788b998b (local to host dennis1)
  Creation Time : Wed Dec 22 05:55:45 2010
     Raid Level : raid1
  Used Dev Size : 452550976 (431.59 GiB 463.41 GB)
     Array Size : 452550976 (431.59 GiB 463.41 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 3

    Update Time : Sun Dec 26 06:40:59 2010
          State : clean
Internal Bitmap : present
 Active Devices : 1
Working Devices : 3
 Failed Devices : 3
  Spare Devices : 2
       Checksum : a424be5c - correct
         Events : 170512


      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync   /dev/sdc3

   0     0       8       35        0      active sync   /dev/sdc3
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       8        3        4      spare   /dev/sda3
   5     5       8       51        5      spare   /dev/sdd3

dennis1 ~ $ sudo mdadm --examine /dev/sda3
/dev/sda3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 4c2769e4:568869bc:2a972c67:788b998b (local to host dennis1)
  Creation Time : Wed Dec 22 05:55:45 2010
     Raid Level : raid1
  Used Dev Size : 452550976 (431.59 GiB 463.41 GB)
     Array Size : 452550976 (431.59 GiB 463.41 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 3

    Update Time : Sun Dec 26 06:42:05 2010
          State : clean
Internal Bitmap : present
 Active Devices : 1
Working Devices : 3
 Failed Devices : 3
  Spare Devices : 2
       Checksum : a424becc - correct
         Events : 170550


      Number   Major   Minor   RaidDevice State
this     4       8        3        4      spare   /dev/sda3

   0     0       8       35        0      active sync   /dev/sdc3
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       8        3        4      spare   /dev/sda3
   5     5       8       51        5      spare   /dev/sdd3

dennis1 ~ $ sudo mdadm --detail /dev/md3
/dev/md3:
        Version : 00.90
  Creation Time : Wed Dec 22 05:55:45 2010
     Raid Level : raid1
     Array Size : 452550976 (431.59 GiB 463.41 GB)
  Used Dev Size : 452550976 (431.59 GiB 463.41 GB)
   Raid Devices : 4
  Total Devices : 3
Preferred Minor : 3
    Persistence : Superblock is persistent

  Intent Bitmap : Internal

    Update Time : Sun Dec 26 06:44:20 2010
          State : active, degraded
 Active Devices : 1
Working Devices : 3
 Failed Devices : 0
  Spare Devices : 2

           UUID : 4c2769e4:568869bc:2a972c67:788b998b (local to host dennis1)
         Events : 0.170627

    Number   Major   Minor   RaidDevice State
       0       8       35        0      active sync   /dev/sdc3
       1       0        0        1      removed
       5       8       51        2      spare rebuilding   /dev/sdd3
       3       0        0        3      removed

       4       8        3        -      spare   /dev/sda3

Out of curiosity I then tried:

dennis1 ~ $ sudo mdadm --grow --force -n 1 /dev/md3
mdadm: Cannot set raid-devices for /dev/md3: Device or resource busy
dennis1 ~ $ sudo mdadm --grow -n 2 /dev/md3
dennis1 ~ $ sudo mdadm --grow --force -n 1 /dev/md3
mdadm: Cannot set raid-devices for /dev/md3: Device or resource busy
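I assume the "-n 1" attempts fail because slots other than 0 are still
occupied (sdd3 shows as "spare rebuilding" above). If I read the man
page correctly, shrinking a RAID1 down to one device needs the other
slots freed first, roughly as below; I have not verified this sequence,
and the device names are simply the ones from the output above:

    sudo mdadm /dev/md3 --fail /dev/sdd3
    sudo mdadm /dev/md3 --remove /dev/sdd3 /dev/sda3
    sudo mdadm --grow --force -n 1 /dev/md3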


dennis1 ~ $ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdd1[2] sdc1[0] sda1[1]
      30716160 blocks [3/3] [UUU]
      bitmap: 16/235 pages [64KB], 64KB chunk

md3 : active raid1 sda3[2](S) sdd3[3] sdc3[0]
      452550976 blocks [2/1] [U_]
      bitmap: 45/216 pages [180KB], 1024KB chunk

unused devices: <none>

dennis1 ~ $ sudo mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 4c2769e4:568869bc:2a972c67:788b998b (local to host dennis1)
  Creation Time : Wed Dec 22 05:55:45 2010
     Raid Level : raid1
  Used Dev Size : 452550976 (431.59 GiB 463.41 GB)
     Array Size : 452550976 (431.59 GiB 463.41 GB)
   Raid Devices : 2
  Total Devices : 3
Preferred Minor : 3

    Update Time : Sun Dec 26 06:49:12 2010
          State : clean
Internal Bitmap : present
 Active Devices : 1
Working Devices : 3
 Failed Devices : 1
  Spare Devices : 2
       Checksum : a424c261 - correct
         Events : 170800


      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync   /dev/sdc3

   0     0       8       35        0      active sync   /dev/sdc3
   1     1       0        0        1      faulty removed
   2     2       8        3        2      spare   /dev/sda3
   3     3       8       51        3      spare   /dev/sdd3

dennis1 ~ $ sudo mdadm --grow -n 5 /dev/md3
dennis1 ~ $ sudo mdadm --examine /dev/sdc3
/dev/sdc3:
          Magic : a92b4efc
        Version : 00.90.00
           UUID : 4c2769e4:568869bc:2a972c67:788b998b (local to host dennis1)
  Creation Time : Wed Dec 22 05:55:45 2010
     Raid Level : raid1
  Used Dev Size : 452550976 (431.59 GiB 463.41 GB)
     Array Size : 452550976 (431.59 GiB 463.41 GB)
   Raid Devices : 5
  Total Devices : 3
Preferred Minor : 3

    Update Time : Sun Dec 26 06:49:53 2010
          State : clean
Internal Bitmap : present
 Active Devices : 1
Working Devices : 3
 Failed Devices : 4
  Spare Devices : 2
       Checksum : a424c2fd - correct
         Events : 170826


      Number   Major   Minor   RaidDevice State
this     0       8       35        0      active sync   /dev/sdc3

   0     0       8       35        0      active sync   /dev/sdc3
   1     1       0        0        1      faulty removed
   2     2       0        0        2      faulty removed
   3     3       0        0        3      faulty removed
   4     4       0        0        4      faulty removed
   5     5       8        3        5      spare   /dev/sda3
   6     6       8       51        6      spare   /dev/sdd3

dennis1 ~ $ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdd1[2] sdc1[0] sda1[1]
      30716160 blocks [3/3] [UUU]
      bitmap: 14/235 pages [56KB], 64KB chunk

md3 : active raid1 sda3[5](S) sdd3[6] sdc3[0]
      452550976 blocks [5/1] [U____]
      bitmap: 45/216 pages [180KB], 1024KB chunk

unused devices: <none>
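In case it helps debugging, the md sysfs interface should show whether
the kernel ever starts a recovery onto the spares; the paths below
assume the standard sysfs layout for md devices:

    cat /sys/block/md3/md/sync_action     # 'recover' while rebuilding, else 'idle'
    cat /sys/block/md3/md/degraded        # number of missing members
    cat /sys/block/md3/md/dev-sda3/state  # per-member state, e.g. 'spare' or 'in_sync'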
dennis1 ~ $ sudo mdadm /dev/md3 -add /dev/sda3 /dev/sdd3
[sudo] password for denis: 
mdadm: option -d not valid in manage mode
dennis1 ~ $ sudo mdadm /dev/md3 -add /dev/sda3
mdadm: option -d not valid in manage mode
dennis1 ~ $ sudo mdadm /dev/md3 -a /dev/sda3
mdadm: Cannot open /dev/sda3: Device or resource busy
dennis1 ~ $ sudo mdadm /dev/md3 -a /dev/sdd3
mdadm: Cannot open /dev/sdd3: Device or resource busy
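The "-add" errors above are my typo: with a single dash mdadm
apparently parses it as the bundled short options "-a -d -d", hence the
complaint about "-d"; the long form needs two dashes ("--add"). The
"Device or resource busy" errors are presumably because both partitions
are already members of md3 as spares. What I actually did to re-test,
as described below, was roughly:

    sudo mdadm /dev/md3 --remove /dev/sda3
    sudo mdadm --zero-superblock /dev/sda3
    sudo mdadm /dev/md3 --add /dev/sda3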

The symptoms are identical: if I now remove one of the spares, zero
its superblock, and add it back, the array starts rebuilding onto it,
gets to around 50%, and then stops, leaving me with the state shown
above.
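If it helps triage: as far as I know, a resync that aborts partway and
demotes the new member back to spare can indicate a read error on the
remaining active disk (sdc3 here). Something like the following should
confirm or rule that out (smartctl is from the smartmontools package):

    dmesg | grep -iE 'md3|sdc|error'
    sudo smartctl -a /dev/sdc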

This happens only on the md3 device; md1 has never had a problem.

Thanks,
Denis Toader
