Bug#398310: don't assemble all arrays on install

dean gaudet dean at arctic.org
Mon Nov 13 11:16:23 CET 2006


On Mon, 13 Nov 2006, martin f krafft wrote:

> severity 398310 important
> retitle 398310 let user choose when to start which array
> tags 398310 confirmed help
> thanks
> 
> also sprach dean gaudet <dean at arctic.org> [2006.11.13.0230 +0100]:
> > i had 4 disks which i had experimented with sw raid10 on a few months 
> > back... i never zeroed the superblocks.  i ended up putting them into 
> > production in a 3ware hw raid10.  today the 3ware freaked out... and i put 
> > the disks into another box to attempt forensics and to try constructing 
> > *read-only* software arrays to see if i could recover the data.
> > 
> > when i did "apt-get install mdadm" it found the old superblocks from my 
> > experiments a few months ago... and tried to start the array!
> 
> You can set AUTOSTART=false in /etc/default/mdadm or via debconf,
> and no arrays will be started.

right, now i know that i should create an /etc/default/mdadm *before* i 
install mdadm... because unlike other packages, mdadm does potentially 
dangerous things just by installing it.  i'll keep that in mind :)
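(for reference, a minimal sketch of the workaround martin describes: drop the variable into /etc/default/mdadm before the package is installed, so the postinst sees it. the exact debconf handling may differ between versions, but this is the idea.)

```shell
# sketch: pre-create mdadm's defaults file so the package's postinst
# sees AUTOSTART=false and does not assemble existing arrays on install.
# (path and variable name are as described in this thread.)
printf 'AUTOSTART=false\n' > /etc/default/mdadm
# apt-get install mdadm    # now safe(r) to install
```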


> I do like the idea of selecting which arrays to start when.
> Ideally, for each array, you'd select whether to start it from
> initramfs, from init.d at boot, from init.d at install time, or from
> init.d run manually. You can distinguish between the latter three
> using the runlevel and a custom variable passed from postinst.

it gets worse when you start considering external bitmaps... i posted to 
linux-raid about the dependency problems here.  you can't autostart an 
array with an external bitmap until the bitmap is available... and if the 
bitmap is on a filesystem which is itself on another md device (think a 
many-disk raid5 with its external bitmap on raid1 root disks) then you 
need some md devices to start, some filesystems to be mounted, and then 
some more md devices to start and more filesystems to be mounted.

i think the only solution is to go entirely event based... start meshing 
into udev or something.  you'd have to be able to express the dependencies 
of a device/filesystem somehow though.  ugh.
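to make the ordering concrete, here's roughly what a manual bringup of the layout above looks like (device names and the bitmap path are made up for illustration; this needs real hardware, so it's a sketch rather than something to run as-is):

```shell
# hypothetical layout: a raid1 root pair holds the external bitmap
# for a large raid5.  each step depends on the previous one, which is
# exactly the ordering problem an autostarter has to solve.
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1           # raid1 first
mount /dev/md0 /mnt                                     # fs holding the bitmap
mdadm --assemble --bitmap=/mnt/md1.bitmap \
      /dev/md1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1  # now the raid5
mount /dev/md1 /srv                                     # and its filesystem
```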


> In any case, I don't consider the bug you filed to be grave because
> you forgot to zero the superblocks.

actually, after playing with the disks with md, and then moving them into 
3ware hardware raid, i did zero the disks... through the 3ware hw raid.  
the problem is that the 3ware hw raid superblock is even larger than the 
md raid superblock (100MB vs. a few MB in my limited experiments)... so 
even though i zeroed the hw raid device it went nowhere near the stale md 
superblock (even the 3ware hw raid superblock never touched it).

it took me a while to figure out that this was what happened -- at first 
i thought mdadm had somehow read 3ware superblocks... there had been talk 
of an industry standard but i was skeptical it ever went anywhere.
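(for anyone in the same spot: mdadm can show whether a stale superblock is still on a disk, and zero it explicitly. the device name below is illustrative, and --zero-superblock is destructive, so only run it on devices you're sure about.)

```shell
# look for leftover md metadata on a disk before reusing it
mdadm --examine /dev/sdc1
# wipe the md superblock if one is found -- destroys the superblock only
mdadm --zero-superblock /dev/sdc1
```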

-dean




More information about the pkg-mdadm-devel mailing list