How To Set Up Software RAID1 On A Running System (Incl. GRUB Configuration) (Debian Etch)

Version 1.0
Author: Falko Timme

This guide explains how to set up software RAID1 on an already running Debian Etch system. The GRUB bootloader will be configured in such a way that the system will still be able to boot if one of the hard drives fails (no matter which one).

I do not issue any guarantee that this will work for you!

1 Preliminary Note

In this tutorial I'm using a Debian Etch system with two hard drives, /dev/sda and /dev/sdb, which are identical in size. /dev/sdb is currently unused, and /dev/sda has the following partitions:

/dev/sda1: /boot partition, ext3;
/dev/sda2: swap;
/dev/sda3: / partition, ext3

In the end I want to have the following situation:

/dev/md0 (made up of /dev/sda1 and /dev/sdb1): /boot partition, ext3;
/dev/md1 (made up of /dev/sda2 and /dev/sdb2): swap;
/dev/md2 (made up of /dev/sda3 and /dev/sdb3): / partition, ext3

This is the current situation:

df -h

server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 4.4G 729M 3.4G 18% /
tmpfs 126M 0 126M 0% /lib/init/rw
udev 10M 56K 10M 1% /dev
tmpfs 126M 0 126M 0% /dev/shm
/dev/sda1 137M 12M 118M 10% /boot
server1:~#

fdisk -l
server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes


255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1   *           1          18      144553+  83  Linux
/dev/sda2              19          80       498015  82  Linux swap / Solaris
/dev/sda3              81         652      4594590  83  Linux

Disk /dev/sdb: 5368 MB, 5368709120 bytes


255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table


server1:~#

2 Installing mdadm

The most important tool for setting up RAID is mdadm. Let's install it like this:

apt-get install initramfs-tools mdadm

You will be asked the following question:

MD arrays needed for the root filesystem: <-- all
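If you are unsure at this prompt, all is the safe choice. Should you want to revisit this answer later, the package can be reconfigured (a side note of mine, not part of the original text):

dpkg-reconfigure mdadm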

Afterwards, we load a few kernel modules (to avoid a reboot):

modprobe md
modprobe linear
modprobe multipath
modprobe raid0
modprobe raid1
modprobe raid5
modprobe raid6
modprobe raid10

Now run

cat /proc/mdstat

The output should look as follows:

server1:~# cat /proc/mdstat


Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
server1:~#

3 Preparing /dev/sdb

To create a RAID1 array on our already running system, we must prepare the
/dev/sdb hard drive for RAID1, then copy the contents of our /dev/sda hard drive
to it, and finally add /dev/sda to the RAID1 array.

First, we copy the partition table from /dev/sda to /dev/sdb so that both disks
have exactly the same layout:
sfdisk -d /dev/sda | sfdisk /dev/sdb

The output should be as follows:

server1:~# sfdisk -d /dev/sda | sfdisk /dev/sdb


Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature


/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System


/dev/sdb1   *        63    289169     289107  83  Linux
/dev/sdb2        289170   1285199     996030  82  Linux swap / Solaris
/dev/sdb3       1285200  10474379    9189180  83  Linux
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
server1:~#

The command

fdisk -l

should now show that both HDDs have the same layout:

server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes


255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1   *           1          18      144553+  83  Linux
/dev/sda2              19          80       498015  82  Linux swap / Solaris
/dev/sda3              81         652      4594590  83  Linux

Disk /dev/sdb: 5368 MB, 5368709120 bytes


255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sdb1   *           1          18      144553+  83  Linux
/dev/sdb2              19          80       498015  82  Linux swap / Solaris
/dev/sdb3              81         652      4594590  83  Linux
server1:~#

Next we must change the partition type of our three partitions on /dev/sdb to
Linux raid autodetect:

fdisk /dev/sdb

server1:~# fdisk /dev/sdb


Command (m for help): <-- m
Command action
a toggle a bootable flag
b edit bsd disklabel
c toggle the dos compatibility flag
d delete a partition
l list known partition types
m print this menu
n add a new partition
o create a new empty DOS partition table
p print the partition table
q quit without saving changes
s create a new empty Sun disklabel
t change a partition's system id
u change display/entry units
v verify the partition table
w write table to disk and exit
x extra functionality (experts only)

Command (m for help): <-- t


Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- L

 0  Empty           1e  Hidden W95 FAT1 80  Old Minix       be  Solaris boot
 1  FAT12           24  NEC DOS         81  Minix / old Lin bf  Solaris
 2  XENIX root      39  Plan 9          82  Linux swap / So c1  DRDOS/sec (FAT-
 3  XENIX usr       3c  PartitionMagic  83  Linux           c4  DRDOS/sec (FAT-
 4  FAT16 <32M      40  Venix 80286     84  OS/2 hidden C:  c6  DRDOS/sec (FAT-
 5  Extended        41  PPC PReP Boot   85  Linux extended  c7  Syrinx
 6  FAT16           42  SFS             86  NTFS volume set da  Non-FS data
 7  HPFS/NTFS       4d  QNX4.x          87  NTFS volume set db  CP/M / CTOS / .
 8  AIX             4e  QNX4.x 2nd part 88  Linux plaintext de  Dell Utility
 9  AIX bootable    4f  QNX4.x 3rd part 8e  Linux LVM       df  BootIt
 a  OS/2 Boot Manag 50  OnTrack DM      93  Amoeba          e1  DOS access
 b  W95 FAT32       51  OnTrack DM6 Aux 94  Amoeba BBT      e3  DOS R/O
 c  W95 FAT32 (LBA) 52  CP/M            9f  BSD/OS          e4  SpeedStor
 e  W95 FAT16 (LBA) 53  OnTrack DM6 Aux a0  IBM Thinkpad hi eb  BeOS fs
 f  W95 Ext'd (LBA) 54  OnTrackDM6      a5  FreeBSD         ee  EFI GPT
10  OPUS            55  EZ-Drive        a6  OpenBSD         ef  EFI (FAT-12/16/
11  Hidden FAT12    56  Golden Bow      a7  NeXTSTEP        f0  Linux/PA-RISC b
12  Compaq diagnost 5c  Priam Edisk     a8  Darwin UFS      f1  SpeedStor
14  Hidden FAT16 <3 61  SpeedStor       a9  NetBSD          f4  SpeedStor
16  Hidden FAT16    63  GNU HURD or Sys ab  Darwin boot     f2  DOS secondary
17  Hidden HPFS/NTF 64  Novell Netware  b7  BSDI fs         fd  Linux raid auto
18  AST SmartSleep  65  Novell Netware  b8  BSDI swap       fe  LANstep
1b  Hidden W95 FAT3 70  DiskSecure Mult bb  Boot Wizard hid ff  BBT
1c  Hidden W95 FAT3 75  PC/IX
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t

Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t

Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w


The partition table has been altered!

Calling ioctl() to re-read partition table.


Syncing disks.
server1:~#
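If you prefer to avoid the interactive fdisk session, the same type change can be done non-interactively with sfdisk (a sketch of mine, not part of the original tutorial, assuming the sfdisk of that era with its --change-id option):

sfdisk --change-id /dev/sdb 1 fd
sfdisk --change-id /dev/sdb 2 fd
sfdisk --change-id /dev/sdb 3 fd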

To make sure that there are no remains from previous RAID installations on /dev/sdb, we run the following commands:

mdadm --zero-superblock /dev/sdb1


mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3

If there are no remains from previous RAID installations, each of the above
commands will throw an error like this one (which is nothing to worry about):

server1:~# mdadm --zero-superblock /dev/sdb1


mdadm: Unrecognised md component device - /dev/sdb1
server1:~#

Otherwise the commands will not display anything at all.
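If you want to check first whether an old RAID superblock is actually present on a partition, mdadm can examine it (a sanity check of mine, not part of the original steps):

mdadm --examine /dev/sdb1

This prints the superblock details if one exists, or "mdadm: No md superblock detected on /dev/sdb1." otherwise.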

Comments

From:

Thanks for a great howto! It was a life saver, as I have never set up RAID before, let alone on a running system. The install did not go perfectly, however, and so I thought I might share my notes and a couple of suggestions. Luckily I did a backup of the entire system before beginning, so I was able to restore the system and begin again after I could not get the system to boot off the RAID array. N.B. I did the install on a Debian testing system (lenny/amd64), but I've checked that everything applies to etch as well.

1. If the disks are not brand new, mdadm will detect the previous filesystem when creating the array and ask if you want to continue. Answer 'yes'. I also got a segfault error from mdadm and a warning that the disk was 'dirty'. The warning could probably be avoided by zeroing the entire disk with dd. Despite the error and warning, everything worked as it should.

2. Instead of:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

do:

mv /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
/usr/share/mdadm/mkconf >> /etc/mdadm/mdadm.conf

This will create a proper mdadm.conf and remove the control file in /var/lib/mdadm (if arrays are found). If you do not remove the control file, you should get a warning message when updating the initrd images.

3. When editing GRUB's menu.lst, I followed the advice in the comments and put the stanza for the RAID array before the '### BEGIN AUTOMAGIC KERNELS LIST' line. If you put your custom stanzas inside the AUTOMAGIC area, they will be overwritten during the next kernel upgrade. Instead of update-initramfs -u, I had to do dpkg-reconfigure mdadm. When asked to specify the arrays needed for the root filesystem, I answered with the appropriate devices (in my case only /dev/md0) instead of selecting the default, 'all'. Otherwise I kept to the default answers. After the initrd images had been created, I updated GRUB: update-grub

4. Instead of using the GRUB shell, I used grub-install to install the boot loader on the hard drives:

grub-install /dev/sda
grub-install /dev/sdb

5. After having added both disks to the arrays, it was time to update the initrd again. First I executed dpkg-reconfigure mdadm and was informed that the initrd would not be updated, because it was a custom image. The configure script informed me that I could force the update by running 'update-initramfs' with the '-t' option, so that is what I did:

update-initramfs -u -t

6. Every time you update the initrd image, you also have to re-install GRUB in the MBRs:

grub-install /dev/sda
grub-install /dev/sdb

Otherwise the system will not boot and you will be thrown into the GRUB shell.

Other notes: It is normal for 'fdisk -l' to report stuff like 'Disk /dev/md0 doesn't contain a valid partition table'. This is because fdisk cannot read md arrays correctly. If you forget to re-install GRUB in the MBRs after updating your initrd and get the GRUB shell on reboot, do the following: Boot from a Debian Installer CD (full or netinst) of the same architecture as your install (so if you're running amd64, it has to be an amd64 CD). Boot the CD in 'rescue' mode. After networking has been set up and the disks have been detected, press CTRL+ALT+F2, followed by Enter, to get a prompt. Execute the following commands (md0 = /boot and md2 = /):

mkdir /mnt/mydisk
mount /dev/md2 /mnt/mydisk
mount /dev/md0 /mnt/mydisk/boot
mount -t proc none /mnt/mydisk/proc
mount -o bind /dev /mnt/mydisk/dev
chroot /mnt/mydisk
grub-install /dev/sda
grub-install /dev/sdb
exit
umount /mnt/mydisk/proc
umount /mnt/mydisk/dev
umount /mnt/mydisk/boot
umount /mnt/mydisk
reboot

From:

Thank you so much for this detailed howto. It saved me a lot of pain and worked
perfectly on Ubuntu 8.04.1 LTS.

From: nochids

I am relatively new to Linux and am completely dependent on these tutorials. I bought a server and installed Suse 10.3. After running Ubuntu on my desktop and laptop, I decided to change the server to run Ubuntu as well (I didn't uninstall Suse - just installed over it??). After installing the server based on "The Perfect Ubuntu Server 8.04" (http://www.howtoforge.com/perfect-server-ubuntu8.04-lts) I installed ISPConfig as detailed at the end. Then to install RAID, I followed the tutorial perfectly, I think, but at the end of step 6, after rebooting, I still show sda1 rather than md0.
root@costarica:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/costarica-root
228G 1.4G 216G 1% /
varrun 502M 108K 502M 1% /var/run
varlock 502M 0 502M 0% /var/lock
udev 502M 76K 502M 1% /dev
devshm 502M 0 502M 0% /dev/shm
/dev/sda1 236M 26M 198M 12% /boot
root@costarica:~#
vi /etc/fstab shows the following:
# /etc/fstab: static file system information.
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc defaults 0 0
# /dev/mapper/costarica-root
UUID=2e3442d3-c650-480a-a923-4775de238b7f / ext3
relatime,errors=remount-ro,usrquota,grpquota 0 1
# /dev/md0
UUID=251a68c2-1497-433b-b415-d49ca8f2125e /boot ext3 relatime
0 2
# /dev/mapper/costarica-swap_1
UUID=03c6c32e-38bd-4707-9df5-dcdd3049825a none swap sw
0 0
/dev/scd0 /media/cdrom0 udf,iso9660 user,noauto,exec,utf8 0 0

It looks different than the one in the tutorial but I attribute that to being ubuntu rather
than debian.

Can anyone shed some light on this for me? Did I miss a step or are there other steps
involved because of Ubuntu?

Thanks for any help.

Jason.

From: Anonymous

Dear all,
this howto just worked for me flawlessly for my brand-new Debian Lenny (testing)
today (03-Jan-2009) !!!
No issues, no problems at all. I had several different partitions, even extended ones, I
only had to follow on paper, which partition goes into which numbered array - that's it
;-)
(And my boot partition wasn't /boot but simply /, I did everything accordingly -
flawless!!!)

THANK YOU VERY MUCH for this HowTo, I've NEVER EVER raid-ed before and it's a
success :)
md0 = /
md1 = swap
md2 = /home
This all on an Abit NF7-S2, BIOS-Raid OFF, 2 x SATA2 Samsung 320G, Sempron 2800+,
2x512 DDR400 ;-)

lol:~# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
/dev/md0 18778 2312 15512 13% /
tmpfs 506 0 506 0% /lib/init/rw
udev 10 1 10 2% /dev
tmpfs 506 0 506 0% /dev/shm
/dev/md2 280732 192 266281 1% /home
lol:~#

Cheers from Europe !

From: Anonymous

Great tutorial,
my compliments

From: Johan Boul

I wonder, what's the point in having the swap on a raid1? shouldn't it be better to add
/dev/sda2 and /dev/sdb2 directly as two separate swap devices?

From: Froi

Can I apply this How-to to my PPC Debian Etch? Thanks!

From: nord

Nice howto!
I would like to correct some minor errors though:
Use ext2 on /boot instead of ext3... ext3 on /boot is just a waste of space and resources. You don't need journaling for your boot partition :)
And why make a RAID array for swap? Swap will stripe data like a RAID0 anyway... just tell Linux to swap to two different physical disks and voila, striping made easy :p
Happy raiding :p
(If you suddenly need a lot of swap space, you can use the "swapon" command to swap to memory sticks or whatever you need; unlike fstab fixing, swapon will get reset on reboot.) ;)

From: Andy Beverley

I spent hours trying to work out not only how to set up a software RAID, but also how
to do it on a boot partition. I didn't even come close to looking at a live system. I got
nowhere until I found this HOWTO which does it all very well. Thank you!
Andy

From: Anonymous

It works just perfectly with Ubuntu 8.04.
Thanks for the brilliant how-to.

From: Alex Dekker

You might like to put a link somewhere in this howto to your newer howto detailing the
install with Grub2. I spent some time following this howto and tripping up on Grub2
and doing lots of googling, before finally realising that what I thought were google hits
on your existing howto were actually pointing to a separate but very similarly named
howto, that covers Grub2!

From: bob143

I did manage to lose all my existing data following this. I was not doing this with a root
partition so I had no issues with partitions being in use and I specified both disks in the
create command rather than the "missing" placeholder - maybe that was my problem.

From: Losteron

Hi, thanks for the tutorial.

I've got a question about the /etc/fstab file. My file contains UUIDs, not sda or sdb. Can I just replace them with sda and sdb?

Regards

From: linux fan

Of course, you mean not "cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf" but "cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig", right?

4 Creating Our RAID Arrays

Now let's create our RAID arrays /dev/md0, /dev/md1, and /dev/md2. /dev/sdb1 will be added to /dev/md0, /dev/sdb2 to /dev/md1, and /dev/sdb3 to /dev/md2. /dev/sda1, /dev/sda2, and /dev/sda3 can't be added right now (because the system is currently running on them), therefore we use the placeholder missing in the following three commands:

mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1


mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2
mdadm --create /dev/md2 --level=1 --raid-disks=2 missing /dev/sdb3

The command

cat /proc/mdstat

should now show that you have three degraded RAID arrays ([_U] or [U_] means
that an array is degraded while [UU] means that the array is ok):

server1:~# cat /proc/mdstat


Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md2 : active raid1 sdb3[1]
4594496 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]


497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]


144448 blocks [2/1] [_U]

unused devices: <none>


server1:~#

Next we create filesystems on our RAID arrays (ext3 on /dev/md0 and /dev/md2
and swap on /dev/md1):

mkfs.ext3 /dev/md0
mkswap /dev/md1
mkfs.ext3 /dev/md2
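To verify that the filesystems and the swap signature were created as expected, you can query each array (a quick check of mine, not in the original guide); blkid should report ext3 for md0 and md2 and swap for md1:

blkid /dev/md0
blkid /dev/md1
blkid /dev/md2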
Next we must adjust /etc/mdadm/mdadm.conf (which doesn't contain any
information about our new RAID arrays yet) to the new situation:

cp /etc/mdadm/mdadm.conf /etc/mdadm/mdadm.conf_orig
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Display the contents of the file:

cat /etc/mdadm/mdadm.conf

At the bottom of the file you should now see details about our three (degraded) RAID
arrays:

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:01b5209e:be9ff10a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:01b5209e:be9ff10a
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:01b5209e:be9ff10a

5 Adjusting The System To RAID1

Now let's mount /dev/md0 and /dev/md2 (we don't need to mount the swap array
/dev/md1):

mkdir /mnt/md0
mkdir /mnt/md2

mount /dev/md0 /mnt/md0


mount /dev/md2 /mnt/md2

You should now find both arrays in the output of

mount
server1:~# mount
/dev/sda3 on / type ext3 (rw,errors=remount-ro)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/sda1 on /boot type ext3 (rw)
/dev/md0 on /mnt/md0 type ext3 (rw)
/dev/md2 on /mnt/md2 type ext3 (rw)
server1:~#

Next we modify /etc/fstab. Replace /dev/sda1 with /dev/md0, /dev/sda2 with /dev/md1, and /dev/sda3 with /dev/md2 so that the file looks as follows:

vi /etc/fstab

# /etc/fstab: static file system information.
#
# <file system> <mount point>   <type>  <options>                       <dump>  <pass>
proc            /proc           proc    defaults                        0       0
/dev/md2        /               ext3    defaults,errors=remount-ro      0       1
/dev/md0        /boot           ext3    defaults                        0       2
/dev/md1        none            swap    sw                              0       0
/dev/hdc        /media/cdrom0   udf,iso9660 user,noauto                 0       0
/dev/fd0        /media/floppy0  auto    rw,user,noauto                  0       0

Next replace /dev/sda1 with /dev/md0 and /dev/sda3 with /dev/md2 in /etc/mtab:

vi /etc/mtab

/dev/md2 / ext3 rw,errors=remount-ro 0 0
tmpfs /lib/init/rw tmpfs rw,nosuid,mode=0755 0 0
proc /proc proc rw,noexec,nosuid,nodev 0 0
sysfs /sys sysfs rw,noexec,nosuid,nodev 0 0
udev /dev tmpfs rw,mode=0755 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev 0 0
devpts /dev/pts devpts rw,noexec,nosuid,gid=5,mode=620 0 0
/dev/md0 /boot ext3 rw 0 0

Now on to the GRUB boot loader. Open /boot/grub/menu.lst and add fallback 1 right after default 0:

vi /boot/grub/menu.lst

[...]
default 0
fallback 1
[...]

This makes sure that if the first kernel (counting starts with 0, so the first kernel is 0) fails to boot, kernel #2 will be booted.

In the same file, go to the bottom where you should find some kernel stanzas. Copy
the first of them and paste the stanza before the first existing stanza; replace
root=/dev/sda3 with root=/dev/md2 and root (hd0,0) with root (hd1,0):

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
initrd          /initrd.img-2.6.18-4-486
savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

root (hd1,0) refers to /dev/sdb which is already part of our RAID arrays. We will
reboot the system in a few moments; the system will then try to boot from our (still
degraded) RAID arrays; if it fails, it will boot from /dev/sda (-> fallback 1).

Next we adjust our ramdisk to the new situation:

update-initramfs -u

Now we copy the contents of /dev/sda1 and /dev/sda3 to /dev/md0 and /dev/md2 (which are mounted on /mnt/md0 and /mnt/md2):

cp -dpRx / /mnt/md2

cd /boot
cp -dpRx . /mnt/md0
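The cp flags preserve symlinks, permissions and ownership, recurse into directories, and stay on one filesystem. If you prefer rsync for this copy, an equivalent sketch (my suggestion, not part of the original guide) would be:

rsync -aHx / /mnt/md2/
rsync -aHx /boot/ /mnt/md0/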

6 Preparing GRUB (Part 1)

Afterwards we must install the GRUB bootloader on the second hard drive /dev/sdb:

grub

On the GRUB shell, type in the following commands:

root (hd0,0)

grub> root (hd0,0)
Filesystem type is ext2fs, partition type 0x83

grub>

setup (hd0)

grub> setup (hd0)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd0)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd0) (hd0)1+15 p (hd0,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

root (hd1,0)

grub> root (hd1,0)
Filesystem type is ext2fs, partition type 0xfd

grub>

setup (hd1)

grub> setup (hd1)
Checking if "/boot/grub/stage1" exists... no
Checking if "/grub/stage1" exists... yes
Checking if "/grub/stage2" exists... yes
Checking if "/grub/e2fs_stage1_5" exists... yes
Running "embed /grub/e2fs_stage1_5 (hd1)"... 15 sectors are embedded.
succeeded
Running "install /grub/stage1 (hd1) (hd1)1+15 p (hd1,0)/grub/stage2 /grub/menu.lst"... succeeded
Done.

grub>

quit

Now, back on the normal shell, we reboot the system and hope that it boots ok from
our RAID arrays:

reboot

Comments

From:

Falko, thank you, this is a wonderful HOWTO, I've used it for two servers now. On the second one, the reboot at the end of this page failed with a GRUB error:

Booting Debian ..
root (hd1,0)
Filesystem type is .. partition type .. kernel (all as expected)
Error 2: Bad file or directory type

At this point I was very glad I could still boot from the old non-RAID partitions (phew!). A bit of reading turned up this explanation on fedoraforum. Sure enough, tune2fs -l showed the old sda1 had 128-byte inodes, while sdb1/md0 had 256-byte inodes. I had the choice of upgrading GRUB or re-making md0's filesystem with smaller inodes. I decided the smaller inodes were safer (I like to mess with aptitude as little as possible). I re-ran the instructions with this mkfs command instead, and it's all good now.

mkfs.ext3 -I 128 /dev/md0

This will not be needed when GRUB is updated to a version that can read filesystems with 256-byte inodes.

From: wayan

Step 5, Adjusting The System To RAID1:

Don't edit /etc/fstab and /etc/mtab; edit only /mnt/md2/etc/fstab and /mnt/md2/etc/mtab. Sometimes Linux fails to boot from /dev/md2 after the reboot; this way you can still boot normally into the original Linux configuration after the failure.
7 Preparing /dev/sda

If all goes well, you should now find /dev/md0 and /dev/md2 in the output of

df -h

server1:~# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 4.4G 730M 3.4G 18% /
tmpfs 126M 0 126M 0% /lib/init/rw
udev 10M 68K 10M 1% /dev
tmpfs 126M 0 126M 0% /dev/shm
/dev/md0 137M 17M 114M 13% /boot
server1:~#

The output of

cat /proc/mdstat

should be as follows:

server1:~# cat /proc/mdstat


Personalities : [raid1]
md2 : active raid1 sdb3[1]
4594496 blocks [2/1] [_U]

md1 : active raid1 sdb2[1]


497920 blocks [2/1] [_U]

md0 : active raid1 sdb1[1]


144448 blocks [2/1] [_U]

unused devices: <none>


server1:~#

Now we must change the partition types of our three partitions on /dev/sda to
Linux raid autodetect as well:

fdisk /dev/sda
server1:~# fdisk /dev/sda

Command (m for help): <-- t

Partition number (1-4): <-- 1
Hex code (type L to list codes): <-- fd
Changed system type of partition 1 to fd (Linux raid autodetect)

Command (m for help): <-- t

Partition number (1-4): <-- 2
Hex code (type L to list codes): <-- fd
Changed system type of partition 2 to fd (Linux raid autodetect)

Command (m for help): <-- t

Partition number (1-4): <-- 3
Hex code (type L to list codes): <-- fd
Changed system type of partition 3 to fd (Linux raid autodetect)

Command (m for help): <-- w


The partition table has been altered!

Calling ioctl() to re-read partition table.

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.
Syncing disks.
server1:~#

Now we can add /dev/sda1, /dev/sda2, and /dev/sda3 to the respective RAID
arrays:

mdadm --add /dev/md0 /dev/sda1


mdadm --add /dev/md1 /dev/sda2
mdadm --add /dev/md2 /dev/sda3

Now take a look at

cat /proc/mdstat

... and you should see that the RAID arrays are being synchronized:

server1:~# cat /proc/mdstat


Personalities : [raid1]
md2 : active raid1 sda3[2] sdb3[1]
      4594496 blocks [2/1] [_U]
      [=====>...............] recovery = 29.7% (1367040/4594496) finish=0.6min speed=85440K/sec

md1 : active raid1 sda2[0] sdb2[1]


497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]


144448 blocks [2/2] [UU]

unused devices: <none>


server1:~#

(You can run

watch cat /proc/mdstat


to get an ongoing output of the process. To leave watch, press CTRL+C.)

Wait until the synchronization has finished; the output should then look like this:

server1:~# cat /proc/mdstat


Personalities : [raid1]
md2 : active raid1 sda3[0] sdb3[1]
4594496 blocks [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]


497920 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]


144448 blocks [2/2] [UU]

unused devices: <none>


server1:~#


Then adjust /etc/mdadm/mdadm.conf to the new situation:

cp /etc/mdadm/mdadm.conf_orig /etc/mdadm/mdadm.conf
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

/etc/mdadm/mdadm.conf should now look something like this:

cat /etc/mdadm/mdadm.conf

# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions

# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# This file was auto-generated on Mon, 26 Nov 2007 21:22:04 +0100
# by mkconf $Id: mkconf 261 2006-11-09 13:32:35Z madduck $
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=72d23d35:35d103e3:2b3d68b9:a903a704
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=a50c4299:9e19f9e4:2b3d68b9:a903a704
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=99fee3a5:ae381162:2b3d68b9:a903a704

8 Preparing GRUB (Part 2)


We are almost done now. Now we must modify /boot/grub/menu.lst again. Right
now it is configured to boot from /dev/sdb (hd1,0). Of course, we still want the
system to be able to boot in case /dev/sdb fails. Therefore we copy the first kernel
stanza (which contains hd1), paste it below and replace hd1 with hd0. Furthermore
we comment out all other kernel stanzas so that it looks as follows:

vi /boot/grub/menu.lst

[...]
## ## End Default Options ##

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd1)
root            (hd1,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

title           Debian GNU/Linux, kernel 2.6.18-4-486 RAID (hd0)
root            (hd0,0)
kernel          /vmlinuz-2.6.18-4-486 root=/dev/md2 ro
initrd          /initrd.img-2.6.18-4-486
savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro
#initrd         /initrd.img-2.6.18-4-486
#savedefault

#title          Debian GNU/Linux, kernel 2.6.18-4-486 (single-user mode)
#root           (hd0,0)
#kernel         /vmlinuz-2.6.18-4-486 root=/dev/sda3 ro single
#initrd         /initrd.img-2.6.18-4-486
#savedefault

### END DEBIAN AUTOMAGIC KERNELS LIST

In the same file, there's a kopt line; replace /dev/sda3 with /dev/md2 (don't
remove the # at the beginning of the line!):

[...]
# kopt=root=/dev/md2 ro
[...]
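If you want the automagic kernel list regenerated right away with the new kopt (my addition; the original text simply moves on to the ramdisk), you can run:

update-grub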

Afterwards, update your ramdisk:

update-initramfs -u

... and reboot the system:

reboot

It should boot without problems.


That's it - you've successfully set up software RAID1 on your running Debian Etch
system!

Comments

From: Rik Bignell

Thanks for this. I successfully used your guide to set up Jaunty 9.04 with RAID5.
Points to note: RAID5 will NOT work when the boot partition is RAID5. For example, if you have:
md0 = swap
md1 = root (boot within root)
then you will not be able to write your GRUB properly to each drive, due to RAID5 not having separate copies of files on each disc. GRUB boots at disk level and not at software RAID level, it seems.
My workaround was to have boot separate. I chose:
md0 = swap (3x drives within RAID5: sda1, sdb1, sdc1)
md1 = boot (2x drives within RAID1: sda2, sdb2; a 3rd drive is not needed unless 2 drives fail at once, and because the drives are mirrored completely you are able to write GRUB)
md2 = root (3x drives within RAID5: sda3, sdb3, sdc3)
I'll be writing my own guide for RAID1 and RAID5 so you can see the difference in commands, but will reference this guide a lot, as it helped me the most out of all the Ubuntu RAID guides I found on Google.

Watch http://www.richardbignell.co.uk/ for new guides.

From:

Hello,
This instruction looks very useful. However, could someone please adapt this to suit the default and recommended HDD setup of Debian (a single partition)?

From: Anonymous

The article shows writing out a modified partition table, getting the message:

WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
The kernel still uses the old table.
The new table will be used at the next reboot.

and then, without rebooting, trying to write to the new partitions (running "mdadm --add ..."). Doing that is extremely dangerous to any data on that disk; and even if there is no data, doing that means mdadm might be initializing something (the kernel's old view of partition N) other than what you meant (your new partition N).

From: Lars

QUOTE:

... and reboot the system:


reboot
It should boot without problems.

not quite... you need to do the GRUB part from page 2 again to make this work - just
got stuck in a 'GRUB' prompt after reload - can be fixed with a rescue system and a
new grub setup on the hd's.
Otherwise the howto works just fine - thank you!
-Lars

From: Anonymous

I followed this guide to set up RAID1 (mirroring an existing disc to another) on a separate disc containing VMware virtual disc files. I hope I won't lose any data, but that's a risk I had to take. Right now it's synchronising the VMware disc with the hard disc containing no data... at this point, I can't access the hard disc containing the VMware files, so I have my fingers crossed :-) I'll post an update as soon as the synchronisation is complete; so far it's only 18% complete.
I would recommend everyone who is using this guide to synchronise data between two discs to unmount EVERY disc that you're making changes to BEFORE making any changes at all. If you somehow fail to do so, it can lead to serious data loss. A point that, I think, this guide failed to mention.
Besides that, thank you very much for sharing your knowledge!
- Simon Sessing
Denmark

From: Anonymous

THANK YOU for this wonderful howto. I managed to get RAID set up on Debian Lenny
with no changes to your instructions.

From: Singapore website design

Hi thanks for writing this guide. I managed to setup my servers software raid
successfully using this guide. Been using hardware raid all along. Thanks

From: Ben

Great tutorial, worked perfectly for me on Debian Lenny, substituting sda and sdb with hda and hdd, and a few extra partitions... thanks for posting. :)

From: Juan De Stefano

Thank you for this excellent guideline. I followed it on Ubuntu 9.10. The only thing different is setting up GRUB2. You're not supposed to edit grub.cfg (the former menu.lst), but I did, to change the root device. Then I mounted /dev/md2 on /mnt/md2 and /dev/md0 on /mnt/md2/boot, and mounted sys, proc and dev as well to make the chroot. Later I did dpkg-reconfigure grub-pc and selected both disks to install GRUB on the MBR. Everything worked the first time I tried.
Thanks again
/ Juan

From: Anonymous

I just did this for Ubuntu 9.10 as well. This procedure really needs to be updated for GRUB2, which in and of itself is an exercise in tedium. However, GRUB2 is slightly smarter and seemed to auto-configure a few of the drive details here and there. Still, there were some major departures from this procedure.

You don't need to (and should not) modify grub.cfg directly. Instead, I created a custom grub config file, /etc/grub.d/06_custom, which would contain my RAID entries, and put them above the other grub boot options during the "degraded" sections of the installation. There are a few tricks in how to format a custom file correctly: there is some "EOF" craziness, and also you should be using UUIDs, so you have to make sure you get the right UUIDs instead of using /dev/sd[XX] notation. In the end, my 06_custom looked like:

#! /bin/sh -e
echo "Adding RAID boot options" >&2
cat << EOF
menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd1)"
{
recordfail=1
if [ -n ${have_grubenv} ]; then save_env recordfail; fi
set quiet=1
insmod ext2
set root=(hd1,0)
search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
initrd /boot/initrd.img-2.6.31-20-generic
}

menuentry "Ubuntu, Linux 2.6.31-20-generic RAID (hd0)"
{
recordfail=1
if [ -n ${have_grubenv} ]; then save_env recordfail; fi
set quiet=1
insmod ext2
set root=(hd0,0)
search --no-floppy --fs-uuid --set b79ba888-2180-4c7a-b744-2c4fa99a5872
linux /boot/vmlinuz-2.6.31-20-generic root=UUID=b79ba888-2180-4c7a-b744-2c4fa99a5872 ro quiet splash
initrd /boot/initrd.img-2.6.31-20-generic
}

EOF

Also, you have to figure out which pieces of 10_linux to comment out to get rid of the non-RAID boot options; for that:

#linux_entry "${OS}, Linux ${version}" \
#    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_EXTRA} ${GRUB_CMDLINE_LINUX_DEFAULT}" \
#    quiet
#if [ "x${GRUB_DISABLE_LINUX_RECOVERY}" != "xtrue" ]; then
#    linux_entry "${OS}, Linux ${version} (recovery mode)" \
#        "single ${GRUB_CMDLINE_LINUX}"
#fi

Overall, this was the best non-RAID -> RAID migration how-to I could find. Thanks
very much for putting this out there.

From: Alex Dekker

It has indeed been updated for Grub2: http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-system-incl-grub2-configuration-debian-squeeze

From: Vlad P

I had already set up my RAID 1 before hitting your tutorial, but this reading made me
understand everything better - much better! Thank you very much!

From: Cristian

This guide is awesome; it is just all you need to transform a usual one-SATA-disk setup into RAID1, if you follow all the instructions.
Thanks again... thanks, thanks. You saved me some days of work configuring a server again.

From: Rory

Thank you for this perfect tutorial.

It works perfectly even for Ubuntu. Had to mess with grub2 instead, but aside from that, it's brilliant. Used it on three machines without a glitch.

9 Testing

Now let's simulate a hard drive failure. It doesn't matter if you select /dev/sda or /dev/sdb here. In this example I assume that /dev/sdb has failed.

To simulate the hard drive failure, you can either shut down the system and remove
/dev/sdb from the system, or you (soft-)remove it like this:

mdadm --manage /dev/md0 --fail /dev/sdb1


mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md2 --fail /dev/sdb3

mdadm --manage /dev/md0 --remove /dev/sdb1


mdadm --manage /dev/md1 --remove /dev/sdb2
mdadm --manage /dev/md2 --remove /dev/sdb3
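To confirm that the partitions were really marked as failed and removed, you can inspect one of the arrays (a sanity check of mine, not part of the original steps); it should now list only one active device:

mdadm --detail /dev/md0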

Shut down the system:

shutdown -h now

Then put in a new /dev/sdb drive (if you simulate a failure of /dev/sda, you
should now put /dev/sdb in /dev/sda's place and connect the new HDD as
/dev/sdb!) and boot the system. It should still start without problems.

Now run

cat /proc/mdstat

and you should see that we have a degraded array:

server1:~# cat /proc/mdstat


Personalities : [raid1]
md2 : active raid1 sda3[0]
4594496 blocks [2/1] [U_]

md1 : active raid1 sda2[0]


497920 blocks [2/1] [U_]
md0 : active raid1 sda1[0]
144448 blocks [2/1] [U_]

unused devices: <none>


server1:~#

The output of

fdisk -l

should look as follows:

server1:~# fdisk -l

Disk /dev/sda: 5368 MB, 5368709120 bytes


255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Device Boot Start End Blocks Id System


/dev/sda1   *           1          18      144553+  fd  Linux raid autodetect
/dev/sda2              19          80       498015  fd  Linux raid autodetect
/dev/sda3              81         652      4594590  fd  Linux raid autodetect

Disk /dev/sdb: 5368 MB, 5368709120 bytes


255 heads, 63 sectors/track, 652 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdb doesn't contain a valid partition table

Disk /dev/md0: 147 MB, 147914752 bytes


2 heads, 4 sectors/track, 36112 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md0 doesn't contain a valid partition table

Disk /dev/md1: 509 MB, 509870080 bytes


2 heads, 4 sectors/track, 124480 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md2: 4704 MB, 4704763904 bytes


2 heads, 4 sectors/track, 1148624 cylinders
Units = cylinders of 8 * 512 = 4096 bytes

Disk /dev/md2 doesn't contain a valid partition table


server1:~#

Now we copy the partition table of /dev/sda to /dev/sdb:

sfdisk -d /dev/sda | sfdisk /dev/sdb

If you get an error, you can try the --force option:

sfdisk -d /dev/sda | sfdisk --force /dev/sdb

server1:~# sfdisk -d /dev/sda | sfdisk /dev/sdb


Checking that no-one is using this disk right now ...
OK

Disk /dev/sdb: 652 cylinders, 255 heads, 63 sectors/track

sfdisk: ERROR: sector 0 does not have an msdos signature


/dev/sdb: unrecognized partition table type
Old situation:
No partitions found
New situation:
Units = sectors of 512 bytes, counting from 0

Device Boot Start End #sectors Id System

/dev/sdb1   *        63    289169     289107  fd  Linux raid autodetect
/dev/sdb2        289170   1285199     996030  fd  Linux raid autodetect
/dev/sdb3       1285200  10474379    9189180  fd  Linux raid autodetect
/dev/sdb4             0         -          0   0  Empty
Successfully wrote the new partition table

Re-reading the partition table ...

If you created or changed a DOS partition, /dev/foo7, say, then use dd(1)
to zero the first 512 bytes: dd if=/dev/zero of=/dev/foo7 bs=512 count=1
(See fdisk(8).)
server1:~#

Afterwards we remove any remains of a previous RAID array from /dev/sdb...

mdadm --zero-superblock /dev/sdb1


mdadm --zero-superblock /dev/sdb2
mdadm --zero-superblock /dev/sdb3

... and add /dev/sdb to the RAID array:

mdadm -a /dev/md0 /dev/sdb1


mdadm -a /dev/md1 /dev/sdb2
mdadm -a /dev/md2 /dev/sdb3

Now take a look at

cat /proc/mdstat

server1:~# cat /proc/mdstat


Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[0]
      4594496 blocks [2/1] [U_]
      [======>..............] recovery = 30.8% (1416256/4594496) finish=0.6min speed=83309K/sec

md1 : active raid1 sdb2[1] sda2[0]


497920 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]


144448 blocks [2/2] [UU]

unused devices: <none>


server1:~#

Wait until the synchronization has finished:

server1:~# cat /proc/mdstat


Personalities : [raid1]
md2 : active raid1 sdb3[1] sda3[0]
4594496 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]


497920 blocks [2/2] [UU]

md0 : active raid1 sdb1[1] sda1[0]


144448 blocks [2/2] [UU]

unused devices: <none>


server1:~#
Then run

grub

and install the bootloader on both HDDs:

root (hd0,0)
setup (hd0)
root (hd1,0)
setup (hd1)
quit
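Alternatively, grub-install can do the same from the normal shell (several commenters report using it instead of the GRUB shell; a sketch of mine, not the method the tutorial itself uses):

grub-install /dev/sda
grub-install /dev/sdb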

That's it. You've just replaced a failed hard drive in your RAID1 array.

10 Links

The Software-RAID Howto: http://tldp.org/HOWTO/Software-RAID-HOWTO.html
Debian: http://www.debian.org

Comments

From:

In general, I replaced the disk IDs of Ubuntu Gutsy with devices and it is working great. I'm writing from my Gutsy desktop.
A few weeks ago I lost my /home partition. As a consultant, I also work at home, therefore I don't have time for backups, so I think a RAID1 is a good solution.
First, I used Debian Etch, but it doesn't easily support my ATI Radeon 9200 video card, and it caused problems with VMware. I redid the whole process with Ubuntu Gutsy Gibbon 7.10, replacing the disk IDs with devices. Also, for mnemonic reasons and easy recovery, I used md1 (boot), md2 (swap) and md5 (root).

From: Jairzhino Bolivar

Well, I just wanted to thank the team/person that put this tutorial together; this is a very valuable tutorial. I followed it using Debian 5.0 (Lenny) and everything works very nicely. I could not enjoy more looking at the syncing process and testing the system booting even after I removed the drive (hd1). This really allows everybody to protect their data from hard drive failures. Thank you!!! Sooo! Much!!
I noticed that when you run the command to install mdadm, "citadel" gets installed as well. Is there a way I can run apt-get install mdadm skipping "citadel"?
Again, this is great and very simple. I am using the same tutorial to create two more RAID1 disks for array storage.
This is just cool!!!

From: L. Richter

This Howto really worked out of the box. This was my first RAID installation using
Debian stable 5.03 and after wasting my time with the installer to set up a RAID this
worked straight without any complaints. Really a good job, well done,
Lothar

From: Franck78

Just follow the steps, adjusting the numbers of the 'md' devices, partitions and disks. Maybe add more details about the swap partition on each disk. Useful or useless to have an md made of swaps?
Use:

mkswap /dev/mdX
swapon -a
swapon -s

Bye

From: Anonymous

Excellent tutorial. Thank you.

From: Leo Matteo

A very clear "manual" for setting up a RAID1 on a running system.

I will try it in a while. I hope all will run well. (Yes or no, I will comment the results anyway.)
Thank you (from Uruguay).

From: Max

Aren't md0, md1 and md2 supposed to be operational after disk failure? Contents of
/proc/mdstat suggest that raid1 is still running with one disc but subsequent call to
fdisk shows that there are no valid partitions on md0, md1 and md2.
Might it be copy-paste error?
Otherwise very good tutorial.
