How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch)

Author: falko
Tags: debian, storage

This guide explains how to set up software RAID1 on an already running Debian Etch system. The GRUB bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).
I do not issue any guarantee that this will work for you!
1 Preliminary Note
In this tutorial I'm using a Debian Etch system with two hard drives, /dev/sda and /dev/sdb, which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions:
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 4.4G 729M 3.4G 18% /
tmpfs 126M 0 126M 0% /lib/init/rw
udev 10M 56K 10M 1% /dev
tmpfs 126M 0 126M 0% /dev/shm
/dev/sda1 137M 12M 118M 10% /boot
server1:~#
fdisk -l
server1:~# fdisk -l
2 Installing mdadm
The most important tool for setting up RAID is mdadm. Let's install it like this:
apt-get install initramfs-tools mdadm
Afterwards, we load a few kernel modules (to avoid a reboot):
modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10
Now run
cat /proc/mdstat
The Personalities line in the output should list the RAID modules you just loaded; no md arrays are active yet.
3 Preparing /dev/sdb
To create a RAID1 array on our already running system, we must prepare the
/dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive
to it, and finally add /dev/sda to the RAID1 array.
First, we copy the partition table from /dev/sda to /dev/sdb so that both disks
have exactly the same layout:
sfdisk -d /dev/sda | sfdisk /dev/sdb
The command
fdisk -l
should now show that both HDDs have the same layout:
server1:~# fdisk -l
Next we must change the partition type of our three partitions on /dev/sdb to
Linux raid autodetect:
fdisk /dev/sdb
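The interactive fdisk dialog (t, then fd, for each of the three partitions, then w) can also be done non-interactively. A sketch, assuming your sfdisk supports the old --change-id syntax (as Etch's util-linux did); the commands are echoed rather than executed so you can inspect them first:

```shell
# Non-interactive alternative to the fdisk dialog above.
# Type "fd" is "Linux raid autodetect".
DISK=/dev/sdb
RUN=echo   # remove the echo guard to apply the change for real
for n in 1 2 3; do
  $RUN sfdisk --change-id "$DISK" "$n" fd
done
```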
To make sure that there are no remains from previous RAID installations on /dev/sdb, we zero the superblocks:
mdadm --zero-superblock /dev/sdb1
mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3
If there are no remains from previous RAID installations, each of the above commands will throw an error (which is nothing to worry about):
Comments
Thanks for a great howto! It was a life saver, as I have never set up RAID before, let alone on a running system. The install did not go perfectly however, and so I thought I might share my notes and a couple of suggestions. Luckily I did a backup of the entire system before beginning, so I was able to restore the system and begin again after I could not get the system to boot off the RAID array. N.B. I did the install on a Debian testing system (lenny/amd64), but I've checked that everything applies to etch as well.

1. If the disks are not brand new, mdadm will detect the previous filesystem when creating the array and ask if you want to continue. Answer 'yes'. I also got a segfault error from mdadm and a warning that the disk was 'dirty'. The warning could probably be avoided by zeroing the entire disk with dd. Despite the error and warning, everything worked as it should.

2. Instead of:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
do:
mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
/usr/share/mdadm/mkconf >> /etc/mdadm/mdadm.conf
This will create a proper mdadm.conf and remove the control file in /var/lib/mdadm (if arrays are found). If you do not remove the control file, you will get a warning message when updating the initrd images.

3. When editing GRUB's menu.lst, I followed the advice in the comments and put the stanza for the RAID array before the '### BEGIN AUTOMAGIC KERNELS LIST' line. If you put your custom stanzas inside the AUTOMAGIC area, they will be overwritten during the next kernel upgrade. Instead of:
update-initramfs -u
I had to do:
dpkg-reconfigure mdadm
When asked to specify the arrays needed for the root filesystem, I answered with the appropriate devices (in my case only /dev/md0) instead of selecting the default, 'all'. Otherwise I kept to the default answers. After the initrd images had been created, I updated GRUB:
update-grub

4. Instead of using the GRUB shell, I used grub-install to install the boot loader on the hard drives:
grub-install /dev/sda
grub-install /dev/sdb

5. After having added both disks to the arrays, it was time to update the initrd again. First I executed:
dpkg-reconfigure mdadm
and was informed that the initrd would not be updated, because it was a custom image. The configure script informed me that I could force the update by running 'update-initramfs' with the '-t' option, so that is what I did:
update-initramfs -u -t

6. Every time you update the initrd image, you also have to re-install GRUB in the MBRs:
grub-install /dev/sda
grub-install /dev/sdb
Otherwise the system will not boot, and you will be thrown into the GRUB shell.

Other notes: It is normal for 'fdisk -l' to report stuff like 'Disk /dev/md0 doesn't contain a valid partition table'. This is because fdisk cannot read md arrays correctly. If you forget to re-install GRUB in the MBRs after updating your initrd and get the GRUB shell on reboot, do the following: Boot from a Debian Installer CD (full or netinst) of the same architecture as your install (so if you're running amd64, it has to be an amd64 CD). Boot the CD in 'rescue' mode. After networking has been set up and the disks have been detected, press CTRL+ALT+F2, followed by Enter, to get a prompt. Execute the following commands (md0=/boot and md2=/):
mkdir /mnt/mydisk
mount /dev/md2 /mnt/mydisk
mount /dev/md0 /mnt/mydisk/boot
mount -t proc none /mnt/mydisk/proc
mount -o bind /dev /mnt/mydisk/dev
chroot /mnt/mydisk
grub-install /dev/sda
grub-install /dev/sdb
exit
umount /mnt/mydisk/proc
umount /mnt/mydisk/dev
umount /mnt/mydisk/boot
umount /mnt/mydisk
reboot
Thank you so much for this detailed howto. It saved me a lot of pain and worked perfectly on Ubuntu 8.04.1 LTS.
It looks different from the one in the tutorial, but I attribute that to it being Ubuntu rather than Debian.
Can anyone shed some light on this for me? Did I miss a step, or are there other steps involved because of Ubuntu?
Jason.
Dear all,
this howto just worked for me flawlessly for my brand-new Debian Lenny (testing)
today (03-Jan-2009) !!!
No issues, no problems at all. I had several different partitions, even extended ones; I only had to keep track on paper of which partition goes into which numbered array - that's it
;-)
(And my boot partition wasn't /boot but simply /, I did everything accordingly -
flawless!!!)
THANK YOU VERY MUCH for this HowTo, I've NEVER EVER raid-ed before and it's a
success :)
md0 = /
md1 = swap
md2 = /home
This all on an Abit NF7-S2, BIOS-Raid OFF, 2 x SATA2 Samsung 320G, Sempron 2800+,
2x512 DDR400 ;-)
lol:~# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/md0 18778 2312 15512 13% /
tmpfs 506 0 506 0% /lib/init/rw
udev 10 1 10 2% /dev
tmpfs 506 0 506 0% /dev/shm
/dev/md2 280732 192 266281 1% /home
lol:~#
Great tutorial,
my compliments
I wonder, what's the point in having the swap on a raid1? shouldn't it be better to add
/dev/sda2 and /dev/sdb2 directly as two separate swap devices?
Nice howto!
I would like to correct some minor errors though:
Ext2 on /boot instead of ext3... Ext3 on /boot is just a waste of space and resources. You don't need journaling for your boot partition :)
And why make a raid array for swap? swap will stripe data as a raid0 anyway.. just tell linux to swap to two different physical disks and voila. Striping made easy :p
Happy raiding :p
(If you suddenly need a lot of swapspace, you can use the "swapon" command to swap to memory sticks or whatever you need; unlike an fstab entry, swapon will get reset on reboot) ;)
I spent hours trying to work out not only how to set up a software RAID, but also how
to do it on a boot partition. I didn't even come close to looking at a live system. I got
nowhere until I found this HOWTO which does it all very well. Thank you!
Andy
You might like to put a link somewhere in this howto to your newer howto detailing the
install with Grub2. I spent some time following this howto and tripping up on Grub2
and doing lots of googling, before finally realising that what I thought were google hits
on your existing howto were actually pointing to a separate but very similarly named
howto, that covers Grub2!
I did manage to lose all my existing data following this. I was not doing this with a root
partition so I had no issues with partitions being in use and I specified both disks in the
create command rather than the "missing" placeholder - maybe that was my problem.
Regards
4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to
/dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and
/dev/sda3 can't be added right now (because the system is currently running on
them), therefore we use the placeholder missing in the following three commands:
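The three create commands implied here can be sketched as follows (the commands are echoed rather than executed so you can check them against your own partition layout first; note the "missing" placeholder reserving the slot for the /dev/sda partitions):

```shell
# Create the three degraded RAID1 arrays (a sketch; "missing" holds
# the slot that the /dev/sda partitions will fill later).
RUN=echo   # remove the echo guard to run for real
$RUN mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1
$RUN mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2
$RUN mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb3
```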
The command
cat /proc/mdstat
should now show that you have three degraded RAID arrays ([_U] or [U_] means
that an array is degraded while [UU] means that the array is ok):
Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2
and swap on /dev/md1):
mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any
information about our new RAID arrays yet) to the new situation:
cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
cat /etc/mdadm/mdadm.conf
At the bottom of the file you should now see details about our three (degraded) RAID
arrays:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array /dev/md1):
mkdir /mnt/md0
mkdir /mnt/md2
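The mount commands for the two new arrays, echoed first so you can inspect them (a sketch; the swap array /dev/md1 needs no mount point):

```shell
# Mount the two new arrays on the directories just created.
RUN=echo   # remove the echo guard to mount for real
$RUN mount /dev/md0 /mnt/md0
$RUN mount /dev/md2 /mnt/md2
```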
mount
server1:~# mount
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
server1:~#
Next we open /etc/fstab and replace /dev/sda1 with /dev/md0, /dev/sda2 with /dev/md1, and /dev/sda3 with /dev/md2, and then do the same in /etc/mtab:
vi /etc/fstab
vi /etc/mtab
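The same edit can be scripted; a sketch, assuming the sda1->md0, sda2->md1, sda3->md2 mapping used in this tutorial (the sed commands are echoed rather than executed by default):

```shell
# Swap the /dev/sdaN entries for their /dev/mdN counterparts
# in both files (GNU sed's in-place -e edits).
RUN=echo   # remove the echo guard to edit the files for real
for f in /etc/fstab /etc/mtab; do
  $RUN sed -i -e 's|/dev/sda1|/dev/md0|g' \
              -e 's|/dev/sda2|/dev/md1|g' \
              -e 's|/dev/sda3|/dev/md2|g' "$f"
done
```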
6 Preparing GRUB (Part 1)

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:
vi /boot/grub/menu.lst
[...]
default 0
fallback 1
[...]
This means that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, the second one will be booted.
In the same file, go to the bottom where you should find some kernel stanzas. Copy
the first of them and paste the stanza before the first existing stanza; replace
root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):
[...]
## ## End Default Options ##
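The copied stanza might end up looking like this (a sketch; the kernel version and file names are examples from a typical Etch install - copy your own stanza and change only the root entries as described above):

```
title  Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root   (hd1,0)
kernel /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd /initrd.img-2.6.18-4-486
savedefault
```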
root (hd1,0) refers to /dev/sdb which is already part of our RAID arrays. We will
reboot the system in a few moments; the system will then try to boot from our (still
degraded) RAID arrays; if it fails, it will boot from /dev/sda (-> fallback 1).
Then we update our ramdisk:
update-initramfs -u
Next we copy the contents of /dev/sda to our (still degraded) arrays - / to /dev/md2 and /boot to /dev/md0:
cp -dpRx / /mnt/md2
cd /boot
cp -dpRx . /mnt/md0
Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:
grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
Now, back on the normal shell, we reboot the system and hope that it boots ok from
our RAID arrays:
reboot
Comments
Falko, thank you, this is a wonderful HOWTO, I've used it for two servers now. On the second one, the reboot at the end of this page failed with a GRUB error:
Booting Debian ..
root (hd1,0)
Filesystem type is .. partition type.. kernel (all as expected)
Error 2: Bad file or directory type
At this point I was very glad I could still boot from the old non-raid partitions (phew!)
A bit of reading turned up this explanation on fedoraforum
Sure enough, tune2fs -l showed the old sda1 had 128 byte inodes, while sdb1/md0 had
256 byte inodes. I had the choice of upgrading grub or re-making md0's filesystem with
smaller inodes.
I decided the smaller inodes were safer (I like to mess with aptitude as little as
possible). I re-ran the instructions with this mkfs command instead, and it's all good
now.
mkfs.ext3 -I 128 /dev/md0
This will not be needed when grub is updated to a version that can read filesystems with 256-byte inodes.
7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of
df -h
server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 4.4G 730M 3.4G 18% /
tmpfs 126M 0 126M 0% /lib/init/rw
udev 10M 68K 10M 1% /dev
tmpfs 126M 0 126M 0% /dev/shm
/dev/md0 137M 17M 114M 13% /boot
server1:~#
The output of
cat /proc/mdstat
should be as follows:
Now we must change the partition types of our three partitions on /dev/sda to
Linux raid autodetect as well:
fdisk /dev/sda
server1:~# fdisk /dev/sda
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#
Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID
arrays:
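A sketch of the add commands (echoed rather than executed by default; assumes the sda1->md0, sda2->md1, sda3->md2 mapping used throughout this tutorial):

```shell
# Add the /dev/sda partitions to their respective degraded arrays;
# mdadm will start resynchronizing immediately.
RUN=echo   # remove the echo guard to run for real
$RUN mdadm --add /dev/md0 /dev/sda1
$RUN mdadm --add /dev/md1 /dev/sda2
$RUN mdadm --add /dev/md2 /dev/sda3
```

Then take a look at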
cat /proc/mdstat
... and you should see that the RAID arrays are being synchronized:
Wait until the synchronization has finished (the output should then show [UU] for all three arrays).
Then we adjust /etc/mdadm/mdadm.conf to the new situation:
cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
cat /etc/mdadm/mdadm.conf
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
8 Preparing GRUB (Part 2)

Now we modify /boot/grub/menu.lst again:
vi /boot/grub/menu.lst
[...]
## ## End Default Options ##
In the same file, there's a kopt line; replace /dev/sda3 with /dev/md2 (don't
remove the # at the beginning of the line!):
[...]
# kopt=root=/dev/md2 ro
[...]
Afterwards, update your ramdisk and reboot the system:
update-initramfs -u
reboot
Comments
Thx for this. Successfully used your guide to set up Jaunty 9.04 with RAID5.
Points to note, RAID5 will NOT work when boot partition is raid5. For example, if you
have:
md0 = swap
md1 = root (boot within root)
Then you will not be able to write your grub properly to each drive due to raid5 not
having separate copies of files on each disc. Grub boots at disk level and not at
software raid level it seems.
My work around was to have boot separate. I chose:
md0 = swap (3x drives within raid5: sda1, sdb1, sdc1)
md1 = boot (2x drives within raid1: sda2, sdb2) - a 3rd drive is not needed unless 2 drives fail at once, and because the drives are mirrored completely you are able to write GRUB to each.
md2 = root (3x drives within raid5: sda3, sdb3, sdc3)
I'll be writing my own guide for raid1 and raid5 so you can see the difference in commands, but will reference this guide a lot, as it helped me the most out of all the ubuntu raid guides I found on google.
Hello,
This instruction looks very useful; however, I would like to ask: could someone please adapt this to suit the default and recommended hdd setup of Debian (a single partition)?
The article shows writing out a modified partition table, getting the message:
WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
and then, without rebooting, trying to write to the new partitions (running "mdadm --add ...").
Doing that is extremely dangerous to any data on that disk - and even if there is no data, doing that means mdadm might be initializing something (the kernel's old view of partition N) other than what you meant (your new partition N).
QUOTE:
not quite... you need to do the GRUB part from page 2 again to make this work - just
got stuck in a 'GRUB' prompt after reload - can be fixed with a rescue system and a
new grub setup on the hd's.
Otherwise the howto works just fine - thank you!
-Lars
I followed this guide to set up raid 1 (a mirror of an existing disc to another) for a separate disc containing vmware virtual disc files.
I hope I won't lose any data, but that's a risk I had to take. Right now it's synchronising the vmware disc with the harddisc containing no data... at this point, I can't access the harddisc containing the vmware files - so I have my fingers crossed :-)
I'll post an update as soon as the synchronisation is complete; so far it's only 18% complete.
I would recommend everyone who is using this guide to synchronise data between two discs to unmount EVERY disc that you're making changes to BEFORE making any changes at all. If you somehow fail to do so, it can lead to serious data loss - a point that I think this guide failed to mention.
Besides that, thank you very much for sharing your knowledge!
- Simon Sessing
Denmark
THANK YOU for this wonderful howto. I managed to get RAID set up on Debian Lenny
with no changes to your instructions.
Hi thanks for writing this guide. I managed to setup my servers software raid
successfully using this guide. Been using hardware raid all along. Thanks
Great tutorial, worked perfectly for me in Debian Lenny, substituting sda and sdb with hda and hdd, and a few extra partitions ... thanks for posting. :)
Thank you for this excellent guideline. I followed it on Ubuntu 9.10. The only thing different is setting up grub2. You're not supposed to edit grub.cfg (the former menu.lst), but I did, to change the root device. Then I mounted /dev/md2 on /mnt/md2 and /dev/md0 on /mnt/md2/boot. I mounted sys, proc and dev as well to make the chroot. Later I did dpkg-reconfigure grub-pc and selected both disks to install grub on the mbr. Everything worked the first time I tried.
Thanks again
/ Juan
I just did this for 9.10 Ubuntu as well. This procedure really needs to be updated for GRUB2, which in and of itself is an exercise in tedium. However, GRUB2 is slightly smarter and seemed to auto-configure a few of the drive details here and there. That said, there were some major departures from this procedure.
You don't need to (and should not) modify grub.cfg directly. Instead, I created a custom grub config file, /etc/grub.d/06_custom, which would contain my RAID entries and put them above the other grub boot options during the "degraded" sections of the installation. There are a few tricks in how to format a custom file correctly: there is some "EOF" craziness, and also you should be using UUIDs, so you have to make sure you get the right UUIDs instead of using /dev/sd[XX] notation. In the end, my 06_custom looked like:
#! /bin/sh -e
echo "Adding RAID boot options" >&2
cat << EOF
menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd1)" {
    recordfail=1
    if [ -n ${have_grubenv} ]; then save_env recordfail; fi
    set quiet=1
    insmod ext2
    set root=(hd1,0)
    search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
    linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
    initrd /boot/initrd.img-2.6.31-20-generic
}
EOF
Also, you have to figure out which pieces of 10_linux to comment out to get rid of the non-RAID boot options; for that:
#linux_entry "${OS}, Linux ${version}" \
#    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_EXTRA} ${GRUB_CMDLINE_LINUX_DEFAULT}" \
#    quiet
#if [ "x${GRUB_DISABLE_LINUX_RECOVERY}" != "xtrue" ]; then
#    linux_entry "${OS}, Linux ${version} (recovery mode)" \
#        "single ${GRUB_CMDLINE_LINUX}"
#fi
Overall, this was the best non-RAID -> RAID migration how-to I could find. Thanks
very much for putting this out there.
I had already set up my RAID 1 before hitting your tutorial, but this reading made me
understand everything better - much better! Thank you very much!
This guide is awesome; it is just all you need to transform a usual single-SATA-disk system into RAID1 if you follow all the instructions.
Thanks again ... thanks, thanks. You saved me some days of work reconfiguring a server.
9 Testing

Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.
To simulate the hard drive failure, you can either shut down the system and remove /dev/sdb from it, or you can (soft-)remove /dev/sdb's partitions from the arrays with mdadm before shutting down:
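The soft-removal might look like this (a sketch; each partition is first marked failed, then removed from its array - the commands are echoed rather than executed by default):

```shell
# Mark the /dev/sdb partitions failed and remove them from their arrays.
RUN=echo   # remove the echo guard to run for real
$RUN mdadm --manage /dev/md0 --fail /dev/sdb1
$RUN mdadm --manage /dev/md0 --remove /dev/sdb1
$RUN mdadm --manage /dev/md1 --fail /dev/sdb2
$RUN mdadm --manage /dev/md1 --remove /dev/sdb2
$RUN mdadm --manage /dev/md2 --fail /dev/sdb3
$RUN mdadm --manage /dev/md2 --remove /dev/sdb3
```

Either way, shut down the system afterwards: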
shutdown -h now
Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you
should now put /dev/sdb in /dev/sda's place and connect the new HDD as
/dev/sdb!) and boot the system. It should still start without problems.
Now run
cat /proc/mdstat
The output of
fdisk -l
server1:~# fdisk -l
Now we copy the partition table of /dev/sda to the new /dev/sdb:
sfdisk -d /dev/sda | sfdisk /dev/sdb
(If you get an error, you can try the --force option:
sfdisk -d /dev/sda | sfdisk --force /dev/sdb)
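Once the new disk carries the same partition table again, its partitions can be added back to the (still degraded) arrays. A sketch, assuming the md0/md1/md2 to sdb1/sdb2/sdb3 mapping used throughout this tutorial (commands echoed rather than executed by default):

```shell
# Rejoin the replacement disk's partitions; resynchronization starts
# automatically once each partition is added.
RUN=echo   # remove the echo guard to run for real
$RUN mdadm --add /dev/md0 /dev/sdb1
$RUN mdadm --add /dev/md1 /dev/sdb2
$RUN mdadm --add /dev/md2 /dev/sdb3
```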
cat /proc/mdstat
grub
root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
That's it. You've just replaced a failed hard drive in your RAID1 array.
10 Links
Comments
In general, I replaced the Disk IDs of Ubuntu Gutsy with devices and it is working great. I'm writing from my Gutsy desktop.
A few weeks ago I lost my /home partition. As a consultant, I also work from home; therefore I don't have time for backups, so I think a RAID1 is a good solution.
First, I used Debian Etch, but it doesn't easily support my ATI Radeon 9200 video card, and it caused problems with vmware.
I redid the whole process but for Ubuntu Gutsy Gibbon 7.10, replacing the Disk IDs with devices. Also, for mnemonic reasons and easy recovery, I used md1 (boot), md2 (swap) and md5 (root).
Well,
This Howto really worked out of the box. This was my first RAID installation using
Debian stable 5.03 and after wasting my time with the installer to set up a RAID this
worked straight without any complaints. Really a good job, well done,
Lothar
Just follow the steps, adjusting the numbers of the 'md' devices, partitions and disks.
Maybe add more details about the swap partition on each disk. Useful or useless to have an md made of swaps.....?
use:
mkswap /dev/mdX
swapon -a
swapon -s
Bye
Aren't md0, md1 and md2 supposed to be operational after disk failure? The contents of /proc/mdstat suggest that raid1 is still running with one disc, but a subsequent call to fdisk shows that there are no valid partitions on md0, md1 and md2.
Might it be a copy-paste error?
Otherwise, a very good tutorial.