Red Hat Software RAID

This is a quick-and-dirty document on software RAID; there are many more documents on the web that go into greater detail. The following topics are covered: setting the partition type, creating the array (with mkraid and mdadm), the mdadm configuration file, checking the array, simulating a drive failure, removing and adding disks, extending the array, starting and stopping the array, persistent superblocks, and autodetection.

The following was tested using a CentOS 4 installation on Dell hardware. Three partitions have already been created: /dev/sdb10, /dev/sdb11 and /dev/sdb12; all are 1GB in size.

Set the partition type

[root]# fdisk /dev/sdb
Command (m for help): t
Partition number (1-12): 10
Hex code (type L to list codes): fd
Command (m for help): w

Note: repeat the above for /dev/sdb11 and /dev/sdb12. (Type fd marks a partition as Linux raid autodetect; w writes the table and exits fdisk.)
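
One way to confirm the type change took effect is to list the partition table again; the three partitions should now show "fd  Linux raid autodetect":

[root]# fdisk -l /dev/sdb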

Create the array

Using mkraid

1. Update the configuration file /etc/raidtab with the following lines

[root]# vi /etc/raidtab
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    nr-spare-disks          1
    persistent-superblock   1
    chunk-size              4
    device                  /dev/sdb10
    raid-disk               0
    device                  /dev/sdb11
    raid-disk               1
    device                  /dev/sdb12
    spare-disk              0

2. Now make the RAID device md0 and create a filesystem on it

[root]# mkraid /dev/md0
[root]# mke2fs -j -b 4096 -R stride=8 /dev/md0
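
mkraid kicks off the initial mirror synchronisation in the background; its progress can be followed while it runs:

[root]# watch cat /proc/mdstat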

3. Now add an entry to /etc/fstab and mount it:

Add this line to /etc/fstab:
          /dev/md0   /mirror1    ext3       defaults 1   2

Create the mount point:
          [root]# mkdir /mirror1

Mount the mirror:
          [root]# mount /mirror1
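
If everything worked, the new filesystem should now show as mounted:

[root]# df -h /mirror1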

Using mdadm

mdadm -C /dev/md0 -l1 -n2 /dev/sdb10 /dev/sdb11 -x1 /dev/sdb12

-C   create an array
-l   the raid level (raid 1 in this case)
-n   number of active devices in the raid (2 in this case)
-x   number of spare disks in the raid (1 in this case)
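
For reference, the same create command written with long options, which can be easier to read:

mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sdb10 /dev/sdb11 /dev/sdb12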

mdadm configuration file

Saving the configuration:

[root]# echo "DEVICE partitions" > /etc/mdadm.conf
[root]# mdadm --detail --scan >> /etc/mdadm.conf

If the configuration file has been lost, rebuild it by scanning the disks and then assemble the array:

[root]# echo "DEVICE partitions" > /etc/mdadm.conf
[root]# mdadm --examine --scan >> /etc/mdadm.conf
[root]# mdadm --assemble --scan
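
The resulting /etc/mdadm.conf should look something like the following; the UUID shown here is only a placeholder for whatever the scan reports on your system:

DEVICE partitions
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=<uuid reported by the scan>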

Check the raid array

/proc/mdstat    cat /proc/mdstat
lsraid          lsraid -a /dev/md0
mdadm           mdadm --detail /dev/md0
                mdadm -D /dev/md0
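
For a healthy two-disk mirror with one spare, cat /proc/mdstat prints something along these lines (device order and block counts are illustrative):

Personalities : [raid1]
md0 : active raid1 sdb12[2](S) sdb11[1] sdb10[0]
      987840 blocks [2/2] [UU]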

Simulating a drive failure

To test the integrity of the raid you might want to simulate a disk failure; again, there are a number of ways to do this.

raidtools       raidsetfaulty /dev/md0 /dev/sdb11
mdadm           mdadm --manage /dev/md0 -f /dev/sdb11

Use the above “check the array” options to see that the disk has been faulted.

Remove a disk from the array

To remove a disk from the raid array use the following commands. The disk must be marked as faulty (see above) before it can be removed from the array; once removed, it can be physically pulled from the machine.

raidtools       raidhotremove /dev/md0 /dev/sdb11
mdadm           mdadm --manage /dev/md0 -r /dev/sdb11

Use the above “check the array” options to see that the disk has been removed.

Add a disk to the array

Adding a disk to the array can have one of two outcomes: if the array is degraded, the new disk will be used to repair it (provided a hot spare has not already done so); otherwise the disk is added as a hot spare.

raidtools       raidhotadd /dev/md0 /dev/sdb11
mdadm           mdadm --manage /dev/md0 -a /dev/sdb11

Use the above “check the array” options to see that the disk has been added.
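
Putting the last three sections together, a complete (simulated) disk replacement with mdadm looks like this:

mdadm --manage /dev/md0 -f /dev/sdb11     # mark the disk as faulty
mdadm --manage /dev/md0 -r /dev/sdb11     # remove it from the array
                                          # (swap the physical disk here)
mdadm --manage /dev/md0 -a /dev/sdb11     # add the replacement disk
cat /proc/mdstat                          # watch the rebuild progress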

Extend the array

mdadm           mdadm --grow /dev/md0 -n3
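
The -n3 above raises the number of active mirrors from two to three. If the underlying partitions have instead been enlarged, the array and then the filesystem can be grown to match; a sketch, assuming a recent enough mdadm and the ext3 filesystem created earlier:

mdadm --grow /dev/md0 --size=max    # grow the array to fill its devices
resize2fs /dev/md0                  # then grow the ext3 filesystem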

Starting and stopping the array

raidtools       raidstop /dev/md0
                raidstart /dev/md0

mdadm           mdadm -S /dev/md0
                mdadm -A -R /dev/md0

-S  stop the array
-A  assemble the array
-R  run (start) the array
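
Note that the array cannot be stopped while it is in use, so unmount the filesystem first:

[root]# umount /mirror1
[root]# mdadm -S /dev/md0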

Persistent superblock
When an array is initialised with the persistent-superblock option in the /etc/raidtab file, a special superblock is written at the beginning of all disks participating in the array. This allows the kernel to read the configuration of RAID devices directly from the disks involved, instead of reading it from a configuration file that may not be available at all times.
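
The superblock on any member disk can be inspected directly with mdadm:

[root]# mdadm --examine /dev/sdb10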

Autodetection
Autodetection allows the RAID devices to be automatically recognized by the kernel at boot-time, right after the ordinary partition detection is done. For this to work the member partitions must be of type fd (Linux raid autodetect, as set with fdisk above) and the array must have a persistent superblock.
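
Whether autodetection ran at boot can be checked in the kernel log, for example:

[root]# dmesg | grep md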