r230 - mdadm/trunk/debian

madduck at users.alioth.debian.org
Wed Oct 25 18:53:00 UTC 2006


Author: madduck
Date: 2006-10-25 18:52:59 +0000 (Wed, 25 Oct 2006)
New Revision: 230

Modified:
   mdadm/trunk/debian/FAQ
   mdadm/trunk/debian/changelog
Log:
* Added more RAID10 information to the FAQ.

Modified: mdadm/trunk/debian/FAQ
===================================================================
--- mdadm/trunk/debian/FAQ	2006-10-25 17:56:35 UTC (rev 229)
+++ mdadm/trunk/debian/FAQ	2006-10-25 18:52:59 UTC (rev 230)
@@ -114,7 +114,9 @@
   the event of two failing disks. In a RAID10 configuration, if one disk is
  already dead, the RAID can only survive if one of the two disks in the other
  RAID1 array fails, but not if the second disk in the degraded RAID1 array
-  fails. A RAID6 across four disks can cope with any two disks failing.
+  fails (see next item, 4b). A RAID6 across four disks can cope with any two
+  disks failing. However, RAID6 is noticeably slower than RAID5; RAID4
+  and RAID5 do not differ much in performance.
 
   If you can afford the extra disks (storage *is* cheap these days), I suggest
   RAID1/10 over RAID4/5/6. If you don't care about performance but need as
@@ -125,6 +127,19 @@
   workstation on RAID5. Anything disk-intensive brings the system to its
  knees; I will have to migrate to RAID10 at some point.
 
+4b. Can a 4-disk RAID10 survive two disk failures?
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+  With the default near-2 layout, yes, in 2/3 of the cases. When you
+  assemble 4 disks into a RAID10, you essentially stripe a RAID0 across
+  two RAID1 pairs, so the four disks A,B,C,D become two pairs: A,B and
+  C,D. If A fails, the RAID10 can only survive if the second failing
+  disk is C or D; if B fails, your array is dead.
+
+  Thus, if you see a disk failing, replace it as soon as possible!
+
+  If you need to survive the failure of any two disks out of a set of
+  four, you have to use RAID6.
+
 5. How to convert RAID5 to RAID10?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  You have me convinced; I want to convert my RAID5 to a RAID10. I have three
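
The 2/3 figure in item 4b is easy to sanity-check by enumerating all
ordered two-disk failures. Below is a minimal sketch (plain Python, not
mdadm code), assuming the default near-2 layout in which A,B and C,D
form the two mirror pairs:

    # Enumerate second failures in a 4-disk near-2 RAID10. The array
    # survives as long as every mirror pair keeps one working member.
    from itertools import permutations

    PAIRS = [("A", "B"), ("C", "D")]

    def survives(failed):
        return all(any(d not in failed for d in pair) for pair in PAIRS)

    outcomes = [survives({a, b}) for a, b in permutations("ABCD", 2)]
    print(sum(outcomes), "of", len(outcomes))  # prints: 8 of 12, i.e. 2/3

Only a second failure inside the already-degraded pair is fatal, which is
why 8 of the 12 orderings survive.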
@@ -135,13 +150,14 @@
 
     mdadm --create -l 10 -n4 -pn2 /dev/md1 /dev/sd[cd] missing missing
 
-  For some reason, mdadm actually cares about the order of devices you give
-  it. If you intersperse the missing keywords with the physical drives, it
-  should work:
+  Because the near-2 layout mirrors adjacent devices, mdadm cares about
+  the order of the devices you give it. If you intersperse the missing
+  keywords with the physical drives, it should work:
 
-    mdadm --create -l 10 -n4 -pn2 /dev/md1 /dev/sdc missing /dev/sdd missing
+    mdadm --create -l 10 -n4 -pn2 /dev/md1 missing /dev/sd[cd] missing
 
-  See: http://marc.theaimsgroup.com/?l=linux-raid&m=116004333406395&w=2
+  Also see item (4b) above and this thread:
+    http://marc.theaimsgroup.com/?l=linux-raid&m=116004333406395&w=2
 
 6. What is the difference between RAID1+0 and RAID10?
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
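
The order matters because the near-2 layout mirrors adjacent slots: with
/dev/sd[cd] listed first, both real disks land in the same mirror pair
and the second pair consists of two missing slots, leaving half the
stripe without any copy. A minimal sketch of that pairing rule
(mirror_pairs and startable are hypothetical helpers, not mdadm code):

    # near-2 pairing: slots 0+1 mirror each other, as do slots 2+3.
    def mirror_pairs(slots):
        return [slots[i:i + 2] for i in range(0, len(slots), 2)]

    # A degraded array can only start if every mirror pair still has at
    # least one real device.
    def startable(slots):
        return all(any(d != "missing" for d in pair)
                   for pair in mirror_pairs(slots))

    print(startable(["sdc", "sdd", "missing", "missing"]))  # False
    print(startable(["missing", "sdc", "sdd", "missing"]))  # True

With the interspersed order each mirror pair keeps one real disk, so the
array assembles degraded but usable.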

Modified: mdadm/trunk/debian/changelog
===================================================================
--- mdadm/trunk/debian/changelog	2006-10-25 17:56:35 UTC (rev 229)
+++ mdadm/trunk/debian/changelog	2006-10-25 18:52:59 UTC (rev 230)
@@ -1,3 +1,9 @@
+mdadm (2.5.5-1~unreleased.3) UNRELEASED; urgency=low
+
+  * Added more RAID10 information to the FAQ.
+
+ -- martin f. krafft <madduck at debian.org>  Wed, 25 Oct 2006 20:52:41 +0200
+
 mdadm (2.5.5-1~unreleased.2) UNRELEASED; urgency=low
 
   * Send udev events for arrays assembled from the initramfs or by the init



