Bug#552130: mdadm: possibly a byte order problem?
Bjørn Mork
bjorn at mork.no
Sun Jan 17 15:05:04 UTC 2010
found 552130 3.0.3-2
thanks
Bjørn Mork <bjorn at mork.no> writes:
> Is this the reason why the bitmap always is ignored on reassembly?
>
> My /etc/mdadm/mdadm.conf contains
> ARRAY /dev/md2 level=raid5 num-devices=3 bitmap=/boot/md2-raid5-bitmap UUID=6c4c0385:4f0b4770:7067137c:e3b1885e
Nope, I can answer that myself. After inspecting the kernel code, I see
that any UUID mismatch would have caused the driver to complain. From
drivers/md/bitmap.c:
	/*
	 * if we have a persistent array superblock, compare the
	 * bitmap's UUID and event counter to the mddev's
	 */
	if (memcmp(sb->uuid, bitmap->mddev->uuid, 16)) {
		printk(KERN_INFO "%s: bitmap superblock UUID mismatch\n",
			bmname(bitmap));
		goto out;
	}
So this is probably just a display problem in mdadm. It should be fixed,
but it's not a high-priority issue; probably not many people would ever
notice it.
But I've found something interesting when doing some experiments in a
virtual machine running sid with the very latest mdadm and kernel:
kvm-sid:~# apt-cache policy mdadm
mdadm:
Installed: 3.0.3-2
Candidate: 3.0.3-2
Version table:
*** 3.0.3-2 0
500 http://ftp.no.debian.org sid/main Packages
100 /var/lib/dpkg/status
kvm-sid:~# apt-cache policy linux-image-2.6.32-trunk-amd64
linux-image-2.6.32-trunk-amd64:
Installed: 2.6.32-5
Candidate: 2.6.32-5
Version table:
*** 2.6.32-5 0
500 http://ftp.no.debian.org sid/main Packages
100 /var/lib/dpkg/status
I have the exact same problem on boot:
Begin: Assembling all MD arrays ... [ 3.886337] md: md127 stopped.
[ 3.895478] md: bind<hdc>
[ 3.902700] md: bind<hdd>
[ 3.908051] md: bind<hdb>
[ 3.925030] raid5: device hdb operational as raid disk 0
[ 3.927064] raid5: device hdd operational as raid disk 2
[ 3.929413] raid5: device hdc operational as raid disk 1
[ 3.950506] raid5: allocated 3230kB for md127
[ 3.955361] 0: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 3.957689] 2: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 3.959881] 1: w=3 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 3.961792] raid5: raid level 5 set md127 active with 3 out of 3 devices, algorithm 2
[ 3.964949] RAID5 conf printout:
[ 3.966401] --- rd:3 wd:3
[ 3.967706] disk 0, o:1, dev:hdb
[ 3.969169] disk 1, o:1, dev:hdc
[ 3.970639] disk 2, o:1, dev:hdd
[ 3.972159] md127: detected capacity change from 0 to 209584128
mdadm: /dev/md/0_0 has been started with 3 drives.
[ 3.978320] md127: unknown partition table
Success: assembled all arrays.
kvm-sid:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active (auto-read-only) raid5 hdb[0] hdd[2] hdc[1]
204672 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
unused devices: <none>
but if I stop and start the array again, using the very same script
which is running on boot, then the bitmap *is* used:
kvm-sid:~# /etc/init.d/mdadm-raid stop
[ 428.737789] md: md127 stopped.
[ 428.739121] md: unbind<hdb>
[ 428.740549] md: export_rdev(hdb)
[ 428.742028] md: unbind<hdd>
[ 428.743242] md: export_rdev(hdd)
[ 428.744819] md: unbind<hdc>
[ 428.746034] md: export_rdev(hdc)
[ 428.749249] md127: detected capacity change from 209584128 to 0
Stopping MD array md127...done (stopped).
kvm-sid:~# /etc/init.d/mdadm-raid start
[ 431.274080] md: md0 stopped.
[ 431.289056] md: bind<hdc>
[ 431.292148] md: bind<hdd>
[ 431.305671] md: bind<hdb>
[ 431.557240] raid5: device hdb operational as raid disk 0
[ 431.559041] raid5: device hdd operational as raid disk 2
[ 431.560834] raid5: device hdc operational as raid disk 1
[ 431.566055] raid5: allocated 3230kB for md0
[ 431.567740] 0: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 431.569585] 2: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 431.571383] 1: w=3 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
[ 431.573210] raid5: raid level 5 set md0 active with 3 out of 3 devices, algorithm 2
[ 431.576138] RAID5 conf printout:
[ 431.577506] --- rd:3 wd:3
[ 431.578757] disk 0, o:1, dev:hdb
[ 431.580130] disk 1, o:1, dev:hdc
[ 431.581524] disk 2, o:1, dev:hdd
[ 431.608425] md0: bitmap initialized from disk: read 1/1 pages, set 0 bits
[ 431.610543] created bitmap (13 pages) for device md0
[ 431.616338] md0: detected capacity change from 0 to 209584128
[ 431.619311] md0: unknown partition table
Assembling MD array md0...done (started [3/3]).
Generating udev events for MD arrays...done.
kvm-sid:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active (auto-read-only) raid5 hdb[0] hdd[2] hdc[1]
204672 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/13 pages [0KB], 4KB chunk, file: /boot/md0-radi5-bitmap
unused devices: <none>
This happens even if I have INITRDSTART='none' in /etc/default/mdadm and
the bitmap file is located on the root file system:
kvm-sid:~# egrep ^INITRDSTART /etc/default/mdadm
INITRDSTART='none'
kvm-sid:~# df /boot/md0-radi5-bitmap
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/hda1 1913244 986944 829112 55% /
Also note the change from md127 to md0. Weird. Why?
All three components list 0 as the preferred minor in their superblocks:
kvm-sid:~# mdadm --examine /dev/hdb
/dev/hdb:
Magic : a92b4efc
Version : 0.90.00
UUID : ea4c9167:80dcf6e7:6ccfee12:1ebe32ef (local to host kvm-sid)
Creation Time : Sun Jan 17 14:48:04 2010
Raid Level : raid5
Used Dev Size : 102336 (99.95 MiB 104.79 MB)
Array Size : 204672 (199.91 MiB 209.58 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Update Time : Sun Jan 17 14:56:10 2010
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : 368bb5dc - correct
Events : 33
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 0 3 64 0 active sync /dev/hdb
0 0 3 64 0 active sync /dev/hdb
1 1 22 0 1 active sync /dev/hdc
2 2 22 64 2 active sync /dev/hdd
kvm-sid:~# mdadm --examine /dev/hdc
/dev/hdc:
Magic : a92b4efc
Version : 0.90.00
UUID : ea4c9167:80dcf6e7:6ccfee12:1ebe32ef (local to host kvm-sid)
Creation Time : Sun Jan 17 14:48:04 2010
Raid Level : raid5
Used Dev Size : 102336 (99.95 MiB 104.79 MB)
Array Size : 204672 (199.91 MiB 209.58 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Update Time : Sun Jan 17 14:56:10 2010
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : 368bb5b1 - correct
Events : 33
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 1 22 0 1 active sync /dev/hdc
0 0 3 64 0 active sync /dev/hdb
1 1 22 0 1 active sync /dev/hdc
2 2 22 64 2 active sync /dev/hdd
kvm-sid:~# mdadm --examine /dev/hdd
/dev/hdd:
Magic : a92b4efc
Version : 0.90.00
UUID : ea4c9167:80dcf6e7:6ccfee12:1ebe32ef (local to host kvm-sid)
Creation Time : Sun Jan 17 14:48:04 2010
Raid Level : raid5
Used Dev Size : 102336 (99.95 MiB 104.79 MB)
Array Size : 204672 (199.91 MiB 209.58 MB)
Raid Devices : 3
Total Devices : 3
Preferred Minor : 0
Update Time : Sun Jan 17 14:56:10 2010
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Checksum : 368bb5f3 - correct
Events : 33
Layout : left-symmetric
Chunk Size : 64K
Number Major Minor RaidDevice State
this 2 22 64 2 active sync /dev/hdd
0 0 3 64 0 active sync /dev/hdb
1 1 22 0 1 active sync /dev/hdc
2 2 22 64 2 active sync /dev/hdd
Bjørn