Bug#398312: INITRDSTART='none' doesn't work

dean gaudet dean at arctic.org
Mon Nov 13 03:07:10 CET 2006


Package: mdadm
Version: 2.5.5-1
Severity: grave

even though i have INITRDSTART='none' in my /etc/default/mdadm and rebuilt 
the initrd, it still goes and does array discovery at boot time.

this is marked grave because it can cause data loss if drives with stale 
superblocks are put together in an unexpected manner, resulting in an array 
rebuild.  (i.e. the same reasoning as #398310)

here's my current setup:

	# grep -ve '^#' -e '^ *$' /etc/mdadm/mdadm.conf
	DEVICE partitions
	CREATE owner=root group=disk mode=0660 auto=yes
	HOMEHOST <system>
	MAILADDR root
	# grep -ve '^#' -e '^ *$' /etc/default/mdadm
	INITRDSTART='none'
	AUTOSTART=false
	AUTOCHECK=false
	START_DAEMON=false
	VERBOSE=false
	USE_DEPRECATED_MDRUN=false

notice i have no arrays defined.

	# dpkg-reconfigure linux-image-`uname -r`
	Running depmod.
	Finding valid ramdisk creators.
	Using mkinitramfs-kpkg to build the ramdisk.
	W: mdadm: /etc/mdadm/mdadm.conf defines no arrays.
	Not updating initrd symbolic links since we are being updated/reinstalled
	(twinlark.1 was configured last, according to dpkg)
	Not updating image symbolic links since we are being updated/reinstalled
	(twinlark.1 was configured last, according to dpkg)
	Running postinst hook script /sbin/update-grub.
	Searching for GRUB installation directory ... found: /boot/grub
	Testing for an existing GRUB menu.lst file ... found: /boot/grub/menu.lst
	Searching for splash image ... none found, skipping ...
	Found kernel: /boot/vmlinuz-2.6.17.11
	Found kernel: /boot/vmlinuz-2.6.17-2-amd64
	Found kernel: /boot/vmlinuz-2.6.16-2-amd64-generic
	Found kernel: /boot/memtest86+.bin
	Updating /boot/grub/menu.lst ... done

notice it complains that i have no arrays defined.

	# mkdir /tmp/initrd
	# cd /tmp/initrd
	# zcat /boot/initrd.img-`uname -r` | cpio -i
	26975 blocks

ok, now i look at scripts/local-top/mdadm ... i note it sets MD_DEVS=all,
which presumably should be overridden by /conf/md.conf... yet conf/md.conf
contains only:
	
	# cat conf/md.conf
	MD_HOMEHOST='groove242'

the MD_DEVS=none line is missing.

also, scripts/local-top/mdadm goes on to test whether /etc/mdadm/mdadm.conf
exists, and it isn't present in the initrd.  because etc/mdadm/mdadm.conf
isn't there, scripts/local-top/mdadm autodiscovers all arrays... and then,
because of the missing MD_DEVS=none, it assembles them all.
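to make the failure mode concrete, here's a minimal sketch of the decision
flow described above (variable names follow the real script where the text
quotes them; everything else is illustrative, not the actual Debian code):

```shell
#!/bin/sh
# Simulate the conf/md.conf actually found in the extracted initrd:
# it carries MD_HOMEHOST only, with no MD_DEVS line.
workdir=$(mktemp -d)
mkdir -p "$workdir/conf"
echo "MD_HOMEHOST='groove242'" > "$workdir/conf/md.conf"

MD_DEVS=all                              # hard-coded default in the script
[ -r "$workdir/conf/md.conf" ] && . "$workdir/conf/md.conf"

# Because md.conf never set MD_DEVS=none, the default 'all' survives
# and the assembly branch is taken.
if [ "$MD_DEVS" = none ]; then
    action="skip assembly"
else
    action="autodiscover and assemble ($MD_DEVS)"   # path taken here
fi
echo "$action"
rm -rf "$workdir"
```

with INITRDSTART='none' correctly propagated, the same flow would hit the
"skip assembly" branch instead.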

as mentioned, this can result in data loss.

while i think the root of the problem is that MD_DEVS=none wasn't copied
from the /etc/default/mdadm settings... i also think this habit of
discovering and starting all arrays is a bad one.  if i built my initrd
without an mdadm.conf i don't see why you would create one... maybe ask
first ("unable to find root device, should i try to autodiscover and start
arrays?") or require an option on the kernel command line...
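for what it's worth, a sketch of what the initramfs hook arguably should be
doing: carry INITRDSTART from /etc/default/mdadm into the initrd's
conf/md.conf as MD_DEVS.  the paths and fallback below are assumptions for
illustration, not the actual mkinitramfs hook:

```shell
#!/bin/sh
# Stand-ins for the initramfs staging dir and /etc/default/mdadm,
# so the sketch is self-contained (these paths are hypothetical).
DESTDIR=$(mktemp -d)
DEFAULTS=$DESTDIR/default-mdadm
echo "INITRDSTART='none'" > "$DEFAULTS"

INITRDSTART=all                      # assumed fallback when no defaults file
[ -r "$DEFAULTS" ] && . "$DEFAULTS"

# Write both settings into the initrd config, instead of HOMEHOST alone --
# the MD_DEVS line is the one missing in this bug.
mkdir -p "$DESTDIR/conf"
cat > "$DESTDIR/conf/md.conf" <<EOF
MD_HOMEHOST='groove242'
MD_DEVS='$INITRDSTART'
EOF
cat "$DESTDIR/conf/md.conf"
```

with that in place, scripts/local-top/mdadm would source MD_DEVS=none and
skip assembly entirely.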

anyhow, now to go see whether this ruined the drives i'm trying to recover
(see #398310).

-dean



