MDRAID

mdadm Cheat Sheet
Description                Command                                       Details
Show RAID rebuild status   cat /proc/mdstat
Show RAID status           mdadm --detail /dev/md0                       Status for RAID md0
Add disk to RAID           mdadm /dev/md0 --add /dev/sda1                Add sda1 to md0
Remove disk from RAID      mdadm --manage /dev/md0 --remove /dev/sda1    Remove sda1 from md0
Fail a disk in RAID        mdadm --manage /dev/md0 --fail /dev/sda1      Mark sda1 as failed in md0

Create RAID 1 (Mirror)

In this example, we will create a RAID 1 using mdadm. First, create a GPT partition table and a partition on sda:

$ sudo parted -s /dev/sda 'mklabel gpt'
$ sudo parted -s /dev/sda 'mkpart primary 1 -1'

Copy the partition table from sda over to sdb with sgdisk, then randomize the GUIDs on the copy:

$ sudo sgdisk /dev/sda --replicate=/dev/sdb
$ sudo sgdisk --randomize-guids /dev/sdb
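
If you want to verify the copy, the new partition table can be printed and compared against sda; this step is optional:

$ sudo sgdisk --print /dev/sdb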

The RAID can now be created:

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

The RAID build status can be seen with cat /proc/mdstat. To get a continuously updating view, prefix the command with watch: watch cat /proc/mdstat.
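
During the initial synchronization, the output will look something like this (device names, sizes, and speeds will differ on your system):

$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      292836352 blocks super 1.2 [2/2] [UU]
      [===>................]  resync = 17.4% (51021440/292836352) finish=20.1min speed=200448K/sec

unused devices: <none>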

Replace a faulty disk

When you want to replace a failed drive, start by making sure that the drive is marked as failed. The drive status can be checked with the mdadm --detail /dev/md0 command. Marking the disk as failed and removing it from the RAID can be done in a single mdadm call:

$ sudo mdadm --manage /dev/md0 --fail /dev/sda1 --remove /dev/sda1
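
Afterwards, the end of the mdadm --detail output should list the slot as removed, along these lines (abridged):

$ sudo mdadm --detail /dev/md0
[...]
    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8       17        1      active sync   /dev/sdb1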

Next, copy the partition table from another disk in the RAID onto the new disk.

If the disk uses a GPT partition table, use:

$ sudo sgdisk /dev/sdb --replicate=/dev/sda
$ sudo sgdisk --randomize-guids /dev/sda

If the disk uses an MBR partition table, use:

$ sudo sh -c "/sbin/sfdisk --dump /dev/sdb | /sbin/sfdisk /dev/sda"

The disk can now be added to the RAID. The rebuild process will start automatically:

$ sudo mdadm --manage /dev/md0 --add /dev/sda1

RAID Rebuild Speed

To see the speed limits the Linux kernel imposes on RAID rebuilds, use:

$ cat /proc/sys/dev/raid/speed_limit_max
200000
$ cat /proc/sys/dev/raid/speed_limit_min
1000

These values are in KB/s: the rebuild is guaranteed a minimum speed of 1 MB/s and is capped at a maximum of 200 MB/s. The actual speed will fall somewhere between these two limits, depending on the system load and on what other processes are running at the time.

To increase the minimum speed limit, write a higher value to speed_limit_min:

$ echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_min
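
This change only lasts until the next reboot. To make it persistent, the corresponding sysctl key can be set in a configuration file; the file name below is just an example:

$ echo 'dev.raid.speed_limit_min = 50000' | sudo tee /etc/sysctl.d/90-raid-rebuild.conf
$ sudo sysctl --system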

Convert a single disk system to RAID 1 (Mirror)

In this example, we have Ubuntu 18.04 LTS installed on a single disk and will create a RAID 1 using a second, empty disk.

Let us start by examining the current partition layout:

$ sudo parted --list

Model: SEAGATE ST9300603SS (scsi)
Disk /dev/sda: 300GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  300GB   300GB   ext4


Error: /dev/sdb: unrecognised disk label
Model: SEAGATE ST9300603SS (scsi)
Disk /dev/sdb: 300GB
Sector size (logical/physical): 512B/512B
Partition Table: unknown
Disk Flags:

We can see that the disk uses a GPT partition table with a BIOS boot partition, sda1. Apart from that, there is only the main root file system, sda2. The secondary disk sdb does not yet have a partition table.

Let’s start by copying the existing partition table to the secondary disk and making the new partition GUIDs unique:

$ sudo sgdisk /dev/sda --replicate /dev/sdb
The operation has completed successfully.
$ sudo sgdisk --randomize-guids /dev/sdb
The operation has completed successfully.

Now that we have an identical partition table on the secondary disk, we will create a RAID 1 with two devices, the first one purposely missing and the second one being sdb2:

$ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb2

mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
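
At this point, /proc/mdstat should show a degraded mirror with one missing member, something like this:

$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1]
      292834304 blocks super 1.2 [2/1] [_U]

unused devices: <none>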

As the current root file system is EXT4, we will create a new EXT4 file system on the new RAID md0:

$ sudo mkfs.ext4 /dev/md0
mke2fs 1.44.1 (24-Mar-2018)
Creating filesystem with 73208576 4k blocks and 18309120 inodes
Filesystem UUID: ff25d882-f65b-4e0e-ad49-f8d9756a0f89
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424, 20480000, 23887872, 71663616

Allocating group tables: done
Writing inode tables: done
Creating journal (262144 blocks): done
Writing superblocks and filesystem accounting information: done

It is important that the new disk has the GRUB 2 bootloader installed so that we can boot from it. The easiest way to do this is by reconfiguring the grub-pc package:

$ sudo dpkg-reconfigure grub-pc

Here we will add /dev/sdb to the installation targets.
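
If you prefer to avoid the interactive dialog, GRUB can also be installed onto the new disk directly; this is a sketch that assumes the BIOS/GPT layout shown earlier, where the bios_grub partition was replicated to sdb:

$ sudo grub-install /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.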

We will then make a new mount point and mount the new file system there:

$ sudo mkdir /mnt/new-raid
$ sudo mount /dev/md0 /mnt/new-raid

With the new file system mounted, the process of copying data over to the RAID can begin. To copy the data over, we will use rsync:

$ sudo rsync -auHxv --exclude={"/proc/*","/sys/*","/mnt/*"} /* /mnt/new-raid/

Warning

To minimize the risk of the data changing during the copying process, we recommend booting the server into our live recovery tool. Alternatively, you can switch to the single-user mode by running sudo systemctl isolate rescue.target.
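
If you want to preview what the rsync command above will transfer before committing to it, rsync's dry-run flag -n can be added to the same invocation:

$ sudo rsync -auHxvn --exclude={"/proc/*","/sys/*","/mnt/*"} /* /mnt/new-raid/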

As we want the new file system to be automatically mounted during boot, we will configure the /mnt/new-raid/etc/fstab file to use the new file system UUID, replacing the existing root entry.

To find the UUID, we will use the blkid command:

$ blkid
/dev/sda2: UUID="f85e5be9-1e2d-4a21-9b83-95ead999115b" TYPE="ext4" PARTUUID="103e1094-527c-4ae4-8954-d03a154b6554"
/dev/sdb2: UUID="19a48b4b-129e-ad05-18ab-ff29b31ee60e" UUID_SUB="e4447c1f-e47a-f93e-8888-e5ea46404f98" LABEL="e82-103-137-138s:0" TYPE="linux_raid_member" PARTUUID="97a6665e-6fb1-4042-bb2d-ac83d401cf32"
/dev/md0: UUID="c6d055e7-18b5-478b-b753-f6b24211b3d1" TYPE="ext4"

In our example, /mnt/new-raid/etc/fstab will look like this:

UUID=c6d055e7-18b5-478b-b753-f6b24211b3d1 /               ext4    errors=remount-ro 0       1
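
It can also help to record the array in the new root's mdadm configuration so that the initramfs can assemble it at boot. A minimal sketch, assuming Ubuntu's default /etc/mdadm/mdadm.conf location:

$ sudo mdadm --detail --scan | sudo tee -a /mnt/new-raid/etc/mdadm/mdadm.conf

After booting from the RAID, running sudo update-initramfs -u will bake the updated configuration into the initramfs.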

The RAID is now up and running, and we will now reboot the system. At the GRUB menu, press ‘e’ to enter edit mode so that we can make the system boot from the RAID. In edit mode, change the boot entry to look like this:

set root='(md/0)'
linux   /boot/vmlinuz-4.15.0-64-generic root=UUID=c6d055e7-18b5-478b-b753-f6b24211b3d1 ro console=ttyS0,115200n8 console=tty0

When the boot entry has been changed, press Ctrl-x (or F10) to continue the boot; the system will start from the RAID. To check whether the server has booted from the RAID, you can use the mount command. The output should look like this:

$ mount
/dev/md0 on / type ext4 (rw,noatime,errors=remount-ro)

From this, we can confirm that we have booted from the RAID, which is mounted as the root file system.

Now that we have established that we booted from the RAID, we can add the first disk to it. Before we add the disk to the RAID, we have to wipe its old file system signatures, copy the partition table from sdb to sda, and randomize the new GUIDs:

$ sudo wipefs --all /dev/sda
$ sudo sgdisk /dev/sdb --replicate /dev/sda
The operation has completed successfully.
$ sudo sgdisk --randomize-guids /dev/sda
The operation has completed successfully.

Now that the disk has been wiped and has a partition table identical to the other RAID member, it can be added. The RAID will automatically start to rebuild once the disk has been added:

$ sudo mdadm /dev/md0 --add /dev/sda2
mdadm: added /dev/sda2

Attention

The RAID has to finish rebuilding before you continue with this guide.

We want to re-install GRUB on sda so that GRUB is present on both disks and the system can boot from either of them.

$ sudo dpkg-reconfigure grub-pc
$ sudo update-grub

When GRUB has been updated, we can reboot into our fully functioning RAID 1 with two working disks. The RAID has been established and made the active boot location without losing any data.