
Increase or Grow a RAID Array in Linux

From zen2

Revision as of 08:08, 25 July 2013 by Chris (talk | contribs) (1 revision)

The following information was obtained here and here

Adding partitions

When new disks are added, existing RAID partitions can be grown to use them. After the new disk has been partitioned, a RAID level 1/4/5/6 array can be grown, for example, with the following commands (assuming that before growing it contains three drives):

mdadm --add /dev/md1 /dev/sdb3
mdadm --grow --raid-devices=4 /dev/md1

The process can easily take 10 hours or more. There is a critical section at the start of the reshape which cannot be backed up. To allow recovery after an unexpected power failure, the additional option --backup-file= can be specified.
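For example (the backup file path here is an assumption for illustration; keep it on a filesystem that is not on the array being grown):

```shell
# Same grow command as above, but saving a backup of the critical section.
# /root/md1-grow.backup is a hypothetical path outside /dev/md1.
mdadm --grow --raid-devices=4 --backup-file=/root/md1-grow.backup /dev/md1
```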

You may get an error like:

 Cannot set device size/shape for /dev/md5: Device or resource busy

In this case, check whether you are using a write-intent bitmap. If you are, you must remove the bitmap first, grow the array, and then re-add the bitmap:

 mdadm --grow /dev/mdX -b none
 mdadm --grow /dev/mdX -n<new number of drives>
 mdadm --grow /dev/mdX -b internal

After mdadm has finished growing the array, it does not automatically update /etc/mdadm.conf. As a result, mdadm may no longer find the grown device /dev/md1.

To make mdadm find your array, edit /etc/mdadm.conf and correct the num-devices value for your array. (On Ubuntu and other Debian-based systems the file is /etc/mdadm/mdadm.conf.)

OLD:

 DEVICE partitions
 ARRAY /dev/md1 level=raid5 num-devices=3 metadata=00.90 spares=1 UUID=b05d00ce:f6224b94:64ae041e:7a8d916f

NEW:

 DEVICE partitions
 ARRAY /dev/md1 level=raid5 num-devices=4 metadata=00.90 spares=1 UUID=b05d00ce:f6224b94:64ae041e:7a8d916f
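Rather than editing num-devices by hand, you can also have mdadm print a fresh ARRAY line and paste it into the config file yourself:

```shell
# Print up-to-date ARRAY lines for all running arrays; review the output,
# then replace the stale line in /etc/mdadm.conf (or /etc/mdadm/mdadm.conf).
mdadm --detail --scan
```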

Remark for RAID 5/6: you will notice (checking with 'df') that the size of your filesystem has not changed. If you added one 1 TB drive to your existing 3 × 1 TB drive array, the extra space is not usable automatically: the block device /dev/md1 is now larger, but the filesystem on it still has its old size. To change that, unmount the array and run:

 resize2fs /dev/md1

or, for reiserfs:

 resize_reiserfs /dev/md1
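As a sanity check on the numbers, the usable capacity of a RAID 5 array is (number of drives − 1) × drive size, so growing from three to four 1 TB drives should add roughly 1 TB (a sketch with assumed drive sizes):

```shell
# Usable RAID 5 capacity: (n - 1) * size of the smallest member.
DRIVE_GB=1000                      # assumed 1 TB (~1000 GB) drives
BEFORE=$(( (3 - 1) * DRIVE_GB ))   # three-drive array
AFTER=$(( (4 - 1) * DRIVE_GB ))    # after growing to four drives
echo "usable before: ${BEFORE} GB, after: ${AFTER} GB"
```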

Expanding existing partitions

It is possible to migrate the whole array to larger drives (e.g. 250 GB to 1 TB) by replacing the drives one by one. In the end the number of devices will be the same, the data will remain intact, and you will have more space available to you.
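The one-by-one replacement procedure detailed below can be sketched as a loop. This is a dry run that only prints the commands (the array and member names are assumptions for illustration); drop the echo to run them for real, and let each resync finish before touching the next drive:

```shell
#!/bin/sh
# Dry run of the per-drive replacement cycle for an assumed 3-drive array.
ARRAY=/dev/md0
STEPS=0
for part in /dev/sdb1 /dev/sdc1 /dev/sdd1; do   # assumed member partitions
    echo "mdadm -f $ARRAY $part"       # mark the old drive as failed
    echo "mdadm -r $ARRAY $part"       # remove it from the array
    echo "# physically swap in the larger drive, partition it, then:"
    echo "mdadm --add $ARRAY $part"    # add the new, larger partition
    echo "mdadm --wait $ARRAY"         # block until the resync completes
    STEPS=$((STEPS + 1))
done
```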


Extending an existing RAID array

In order to increase the usable size of the array, you must increase the size of all disks in that array. Depending on the size of your disks, this may take days to complete. It is also important to note that while the array undergoes the resync process, it is vulnerable to irrecoverable failure if another drive were to fail. It would (of course) be a wise idea to completely back up your data before continuing.
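Before starting, it is worth confirming that the array is healthy and no resync is already running (device name assumed):

```shell
mdadm --detail /dev/md0   # State should be 'clean' with no failed devices
cat /proc/mdstat          # no resync/reshape should be in progress
```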

First, choose a drive, mark it as failed, and remove it completely from the array:

mdadm -f /dev/md0 /dev/sdd1
mdadm -r /dev/md0 /dev/sdd1

Next, partition the new drive so that you are using the amount of space you will eventually use on all new disks. For example, if you are going from 100 GB drives to 250 GB drives, you will want to partition the new 250 GB drive to use 250 GB, not 100 GB. Also, remember to set the partition type to 0xDA - Non-fs data (or 0xFD, Linux raid autodetect if you are still using the deprecated autodetect).

fdisk /dev/sde
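As a non-interactive alternative to fdisk, sfdisk can create a single partition spanning the whole disk with the right type in one step (a sketch; double-check the target device, as this destroys the existing partition table):

```shell
# One whole-disk partition, MBR type 0xDA (Non-fs data), on /dev/sde.
echo ',,da' | sfdisk /dev/sde
```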

Now add the new disk to the array:

mdadm --add /dev/md0 /dev/sde1
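To follow the resync, or to block until it completes (handy in scripts):

```shell
cat /proc/mdstat          # shows rebuild progress for all md arrays
mdadm --wait /dev/md0     # returns only once the resync has finished
```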

Allow the resync to fully complete before continuing. You will now have to repeat the above steps for *each* disk in your array. Once all of the drives in your array have been replaced with larger drives, we can grow the space on the array by issuing:

mdadm --grow /dev/md0 --size=max

The array now spans all of the new space available on each disk.

Extending the filesystem

Now that you have expanded the underlying array, you must resize your filesystem to take advantage of it.

You may want to perform an fsck on the filesystem first, to make sure there are no underlying issues before attempting to resize it:

 fsck /dev/md0

For an ext2/ext3 filesystem:

resize2fs /dev/md0

For a reiserfs filesystem:

resize_reiserfs /dev/md0

Please see filesystem documentation for other filesystems.