<div dir="ltr">My apologies. The expected behavior is that all of the needed devices (/dev/sdc, /dev/sda2, and /dev/sde) would have started up and become available before mdadm started, so that mdadm would assemble the array with all devices instead of in a degraded state.<br>
<br>The actual behavior is that mdadm started at timestamp 2.171754, while /dev/sde only became available at timestamp 3.135237, nearly a second later, so the array was assembled before all devices had finished initializing.<br><br>thanks!<br>
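Those two timestamps come straight from the dmesg output quoted below; a quick shell sketch (an editor's illustration, assuming a POSIX shell with awk; the variable names are mine) shows the size of the window in which mdadm ran ahead of the last device:

```shell
# Timestamps (seconds since boot) taken from the dmesg log in the report:
mdadm_start=2.171754   # md/raid:md1 assembled with 2 of 3 devices
sde_ready=3.135237     # sd 8:0:0:0: [sde] Attached SCSI disk

# Gap between array assembly and the third device becoming available
gap=$(awk -v a="$sde_ready" -v b="$mdadm_start" 'BEGIN { printf "%.6f", a - b }')
echo "mdadm ran ${gap}s before /dev/sde was attached"
```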
</div><div class="gmail_extra"><br><br><div class="gmail_quote">On Thu, Jun 27, 2013 at 4:51 AM, Michael Prokop <span dir="ltr"><<a href="mailto:mika@debian.org" target="_blank">mika@debian.org</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">reassign 714155 mdadm<br>
thanks<br>
<br>
* Brian Minton [Wed Jun 26, 2013 at 08:59:06AM -0400]:<br>
> Package: initramfs-tools<br>
> Version: 0.113<br>
> Severity: normal<br>
<br>
> Dear Maintainer,<br>
<br>
> bminton@bminton:~$ dmesg|grep sde<br>
> [ 3.114799] sd 8:0:0:0: [sde] 2930277168 512-byte logical blocks: (1.50 TB/1.36 TiB)<br>
> [ 3.115888] sd 8:0:0:0: [sde] Write Protect is off<br>
> [ 3.119758] sd 8:0:0:0: [sde] Mode Sense: 00 3a 00 00<br>
> [ 3.119808] sd 8:0:0:0: [sde] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA<br>
> [ 3.134660] sde: unknown partition table<br>
> [ 3.135237] sd 8:0:0:0: [sde] Attached SCSI disk<br>
> [45018.662644] md: export_rdev(sde)<br>
> [45018.748222] md: bind<sde><br>
> [45018.772292] disk 2, o:1, dev:sde<br>
> bminton@bminton:~$ dmesg|grep md1<br>
> [ 2.164616] md: md1 stopped.<br>
> [ 2.171754] md/raid:md1: device sdc operational as raid disk 0<br>
> [ 2.172104] md/raid:md1: device sda2 operational as raid disk 1<br>
> [ 2.173021] md/raid:md1: allocated 3282kB<br>
> [ 2.173416] md/raid:md1: raid level 5 active with 2 out of 3 devices, algorithm 5<br>
> [ 2.174093] md1: detected capacity change from 0 to 3000603639808<br>
> [ 2.177433] md1: unknown partition table<br>
> [45018.773937] md: recovery of RAID array md1<br>
<br>
> Here's some info about my RAID setup:<br>
<br>
> Personalities : [raid6] [raid5] [raid4]<br>
> md1 : active raid5 sde[3] sdc[0] sda2[1]<br>
> 2930276992 blocks level 5, 64k chunk, algorithm 5 [3/2] [UU_]<br>
> [===>.................] recovery = 19.4% (285554336/1465138496) finish=684.0min speed=28741K/sec<br>
<br>
> unused devices: <none><br>
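(As an aside, an editor's sanity check rather than part of the original report: the recovery figures in the mdstat output above are internally consistent. The finish estimate is just the remaining blocks divided by the current speed; variable names below are mine, values are from the quoted output.)

```shell
done_blocks=285554336     # blocks already rebuilt (from the mdstat line above)
total_blocks=1465138496   # per-device size in blocks
speed_kps=28741           # current rebuild speed in K/sec

# mdstat truncates the percentage to one decimal place; the finish
# estimate is the remaining work divided by the current speed, in minutes.
awk -v d="$done_blocks" -v t="$total_blocks" -v s="$speed_kps" 'BEGIN {
  printf "recovery = %.1f%% finish = %.1fmin\n", int(d / t * 1000) / 10, (t - d) / s / 60
}'
```

This reproduces the 19.4% and 684.0min figures shown in the quoted output.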
> /dev/md1:<br>
> Version : 0.90<br>
> Creation Time : Wed Jun 3 09:16:22 2009<br>
> Raid Level : raid5<br>
> Array Size : 2930276992 (2794.53 GiB 3000.60 GB)<br>
> Used Dev Size : 1465138496 (1397.26 GiB 1500.30 GB)<br>
> Raid Devices : 3<br>
> Total Devices : 3<br>
> Preferred Minor : 1<br>
> Persistence : Superblock is persistent<br>
<br>
> Update Time : Wed Jun 19 11:22:09 2013<br>
> State : clean, degraded, recovering<br>
> Active Devices : 2<br>
> Working Devices : 3<br>
> Failed Devices : 0<br>
> Spare Devices : 1<br>
<br>
> Layout : parity-last<br>
> Chunk Size : 64K<br>
<br>
> Rebuild Status : 19% complete<br>
<br>
> UUID : bfa46bf0:67d6e997:e473ac2a:9f2b3a7b<br>
> Events : 0.2609536<br>
<br>
> Number Major Minor RaidDevice State<br>
> 0 8 32 0 active sync /dev/sdc<br>
> 1 8 2 1 active sync /dev/sda2<br>
> 3 8 64 2 spare rebuilding /dev/sde<br>
<br>
[snip package information]<br>
<br>
I'm not sure what initramfs-tools could do about that; as far as I can see it's<br>
an issue with mdadm's initramfs-tools hook, so reassigning to mdadm.<br>
<br>
PS: It would be nice to write a few more words about the<br>
misbehaviour and the expected behaviour, rather than just copy/pasting<br>
some logs into a bug report.<br>
<br>
regards,<br>
-mika-<br>
<br></blockquote></div><br></div>