
Creating a RAID 5 array from 14 70GB disks with 2 spares

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=12 \
    /dev/mpath/disk01 /dev/mpath/disk02 /dev/mpath/disk03 /dev/mpath/disk04 \
    /dev/mpath/disk05 /dev/mpath/disk06 /dev/mpath/disk07 /dev/mpath/disk08 \
    /dev/mpath/disk09 /dev/mpath/disk10 /dev/mpath/disk11 /dev/mpath/disk12 \
    --spare-devices=2 /dev/mpath/disk13 /dev/mpath/disk14
mdadm --detail /dev/md0
To monitor the disk rebuild we can also cat the mdstat file:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 dm-15[14] dm-3[13](S) dm-2[12](S) dm-14[10] dm-13[9] dm-12[8]
dm-11[7] dm-10[6] dm-9[5] dm-8[4] dm-7[3] dm-6[2] dm-5[1] dm-4[0]
788556032 blocks level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUUUUUU_]
[=========>...........] recovery = 49.6% (35563424/71686912) finish=39.5min speed=15225K/sec
unused devices: <none>
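As a sanity check on the numbers above: RAID 5 spends one disk's worth of capacity on parity, so the usable size is (N - 1) times the per-device size. The figures from the recovery line add up:

```shell
# usable RAID 5 size = (raid devices - 1) * per-device size
devsize_kib=71686912   # per-device size in KiB, from the recovery line above
ndev=12
echo $(( (ndev - 1) * devsize_kib ))   # 788556032, the block count shown by mdstat
```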
################################################################################
If we want to remove the array we just created, proceed as follows:
mdadm -S /dev/md0
mdadm --remove /dev/md0
To reuse the member disks in a new array, also clear their RAID superblocks, e.g.:
mdadm --zero-superblock /dev/mpath/disk01
(repeat for each member disk)
################################################################################
If we removed some disks and at reboot the md device refuses to start:
mdadm -A --force /dev/md0
If it cannot find the superblock, force a resync:
mdadm -S /dev/md0    (stops the array)
mdadm --assemble /dev/md0 --update=resync
mdadm -D /dev/md0    (check the rebuild percentage, e.g. Rebuild Status : 5% complete)
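For scripting, the same percentage can be scraped from /proc/mdstat. A small sketch, run here against the sample line quoted earlier rather than the live file (the grep patterns are illustrative, not an official interface):

```shell
# Sketch: extract the rebuild percentage from /proc/mdstat output.
sample='[=========>...........] recovery = 49.6% (35563424/71686912) finish=39.5min speed=15225K/sec'
pct=$(printf '%s\n' "$sample" | grep -o 'recovery = [0-9.]*%' | grep -o '[0-9.]*%')
echo "$pct"   # prints 49.6%
```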
################################################################################
In this case we do not create a logical volume; we format the device as ext3 and enable it in fstab:
[root@rmsx52 ~]# mkfs.ext3 /dev/md0
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
98582528 inodes, 197139008 blocks
9856950 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=0
6017 block groups
32768 blocks per group, 32768 fragments per group
16384 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968,
        102400000
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
This filesystem will be automatically checked every 26 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@rmsx52 ~]#
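The fstab entry itself is not shown in these notes; a plausible line, assuming the array is mounted on /u01 (the mount point used later on), would be:

```
/dev/md0    /u01    ext3    defaults    0 2
```

The final 2 schedules the filesystem for fsck after the root filesystem.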
################################################################################
Second option:
Creating the logical volume:
pvcreate /dev/md0
Set the physical extent size to 16MB (the default is 4MB, but for a 550GB disk we raise it to 16MB). Note that the extent size must be given when the VG is created:
vgcreate -s 16M lvm-raid-exp700 /dev/md0
Check:
vgdisplay lvm-raid-exp700
lvcreate -l 57235 lvm-raid-exp700 -n u01
Format the new LV and mount it:
mkfs.ext3 /dev/lvm-raid-exp700/u01
mount /dev/lvm-raid-exp700/u01 /u01
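Since lvcreate -l takes a number of extents rather than a size, the conversion is plain arithmetic. A sketch for a hypothetical 750 GiB LV with 16 MiB extents:

```shell
# extents needed = desired size / extent size (both in MiB)
size_mib=$(( 750 * 1024 ))   # 750 GiB expressed in MiB
extent_mib=16                # extent size chosen at vgcreate time
echo $(( size_mib / extent_mib ))   # 48000 -> lvcreate -l 48000
```

Recent lvcreate versions also accept a size directly (-L 750G) or a percentage of free space (-l 100%FREE).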
################################################################################
[root@rmsx52 u01]# mdadm /dev/md0 --add /dev/mpath/disk11 /dev/mpath/disk13
mdadm: added /dev/mpath/disk11
mdadm: re-added /dev/mpath/disk13
[root@rmsx52 u01]# mdadm -D /dev/md0
/dev/md0:
Version : 00.90.03
Creation Time : Tue Jul 17 14:12:26 2012
Raid Level : raid5
Array Size : 788556032 (752.03 GiB 807.48 GB)
Used Dev Size : 71686912 (68.37 GiB 73.41 GB)
Raid Devices : 12
Total Devices : 14
Preferred Minor : 0
Persistence : Superblock is persistent
    Update Time : Sat Jul 21 04:02:53 2012
          State : clean
 Active Devices : 12
Working Devices : 14
 Failed Devices : 0
  Spare Devices : 2

         Layout : left-symmetric
     Chunk Size : 64K

           UUID : 36499c84:57859581:02f2e4f1:e6603f9f
         Events : 0.2214

    Number   Major   Minor   RaidDevice State
       0     253        5        0      active sync   /dev/dm-5
       1     253        6        1      active sync   /dev/dm-6
       2     253        4        2      active sync   /dev/dm-4
       3     253        7        3      active sync   /dev/dm-7
       4     253        8        4      active sync   /dev/dm-8
       5     253        9        5      active sync   /dev/dm-9
       6     253       10        6      active sync   /dev/dm-10
       7     253       11        7      active sync   /dev/dm-11
       8     253       14        8      active sync   /dev/dm-14
       9     253        2        9      active sync   /dev/dm-2
      10     253        3       10      active sync   /dev/dm-3
      11     253       13       11      active sync   /dev/dm-13

      12     253       12        -      spare   /dev/dm-12
      13     253       15        -      spare   /dev/dm-15

[root@rmsx52 u01]# cat /proc/mdstat


Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 dm-15[13](S) dm-12[12](S) dm-5[0] dm-3[10] dm-13[11] dm-2[9]
dm-14[8] dm-11[7] dm-10[6] dm-9[5] dm-8[4] dm-7[3] dm-4[2] dm-6[1]
788556032 blocks level 5, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]
unused devices: <none>
[root@rmsx52 u01]#
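A quick health check can be derived from the [active/working] counters in /proc/mdstat: when the two numbers differ, a member is missing. A sketch against the two sample lines from these notes (the check function is a hypothetical helper, not part of mdadm):

```shell
# If the two numbers in [N/M] match, all members are present.
degraded='788556032 blocks level 5, 64k chunk, algorithm 2 [12/11] [UUUUUUUUUUU_]'
healthy='788556032 blocks level 5, 64k chunk, algorithm 2 [12/12] [UUUUUUUUUUUU]'
check() { printf '%s\n' "$1" | grep -q '\[\([0-9][0-9]*\)/\1\]' && echo OK || echo DEGRADED; }
check "$degraded"   # prints DEGRADED
check "$healthy"    # prints OK
```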
################################################################################
To exercise the disks we can generate a 200GB file with the following command:
dd if=/dev/zero of=file_200GB bs=1M count=200000
(note: with bs=1024 the same count would only produce about 200MB)
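dd writes exactly bs * count bytes, so the block size is worth double-checking before a long test run:

```shell
# bytes written = bs * count
echo $(( 1024 * 200000 ))          # bs=1024, count=200000 -> 204800000 bytes (~195 MiB)
echo $(( 1024 * 1024 * 200000 ))   # bs=1M,   count=200000 -> 209715200000 bytes (~195 GiB)
```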