Resizing your Linux-based NAS

In an earlier article I outlined how to manually create a NAS device with standard Linux tools. This article shows you how to resize the array that you created. In this example, we replace our 1 terabyte drives with 2 terabyte drives, then extend our logical volume and the file system that sits on top.

Warning: if the data is important or mission critical, BACK IT UP! One wrong step and you could very easily hose all of your data!

The first thing we need to do is replace the drives, one at a time. This may take a while (about 4 hours per disk on my machine), as the array has to be rebuilt after every replacement. During this period you are vulnerable to data loss, as the array has no redundancy. If another disk fails in this window, you will likely lose your data. In case you missed it before, BACK UP YOUR IMPORTANT DATA FIRST!

On my machine the drives were installed in hot-swappable drive trays, and the SATA chipset supported hot swapping. Your system may differ, in which case you may need to shut down and reboot to swap each drive.

On my machine I just removed the first drive. I checked dmesg and noticed that the drive I removed was /dev/sdd. If you check the array you should see that one drive is missing:

mdadm --detail /dev/md1

I then inserted the new disk; in my case it appeared as /dev/sdn. I created a single partition that encompassed the entire drive. You can use any partitioning tool for this (parted, fdisk, or my favorite, cfdisk). At this point we can add the new drive to the array with:

mdadm --add /dev/md1 /dev/sdn1
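The full per-drive sequence can be sketched as a short script. This is a sketch, not a drop-in tool: the device names (/dev/sdd, /dev/sdn) and the GPT/parted partitioning commands are assumptions, so substitute your own. DRY_RUN=1 (the default here) only prints each command instead of executing it, letting you review the sequence before running anything as root.

```shell
# Sketch of replacing one array member. Assumed devices: outgoing disk
# /dev/sdd (partition sdd1), replacement /dev/sdn. Adjust to match dmesg.

# With DRY_RUN=1 (default), print each command instead of running it.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# Mark the outgoing member failed, then pull it out of the array.
run mdadm --manage /dev/md1 --fail /dev/sdd1
run mdadm --manage /dev/md1 --remove /dev/sdd1

# Partition the replacement: one partition spanning the whole disk.
run parted -s /dev/sdn mklabel gpt
run parted -s /dev/sdn mkpart primary 0% 100%

# Add the new partition; the rebuild starts immediately.
run mdadm --manage /dev/md1 --add /dev/sdn1
```

Set DRY_RUN=0 only once the printed commands match the devices on your system.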

You can watch the progress of the rebuild with the following command:

watch -n 3 cat /proc/mdstat

It will look something like this:

  /proc/mdstat                                                                                                                                 Sun Feb 27 15:49:38 2011

  Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
  md0 : active raid5 sdi1[1] sdh1[0] sdg1[3] sde1[2]
        2930279808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

  md1 : active raid5 sdp1[1] sdo1[2] sdn1[0] sdm1[3]
        5860535808 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
        [==============>......]  resync = 70.4% (1375555968/1953511936) finish=9048.9min speed=1064K/sec

  unused devices: <none>

Once the resync is complete, you can proceed to the next disk.
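If you would rather script the pause between disks than watch the progress output, a small polling loop works. This is a convenience sketch: the md device name and the poll interval are arbitrary, and it simply greps /proc/mdstat for an active resync or recovery.

```shell
# Block until the named md device has no resync/recovery in progress.
# Usage: wait_for_sync [md_name] [poll_seconds]  (defaults: md1, 60)
wait_for_sync() {
    md="${1:-md1}"
    interval="${2:-60}"
    # The progress line appears within a few lines of the "mdX :" header.
    while grep -A 3 "^$md :" /proc/mdstat 2>/dev/null | grep -qE 'resync|recovery'; do
        sleep "$interval"
    done
    echo "$md: sync complete"
}
```

Call it between replacements, e.g. `wait_for_sync md1 && echo "safe to swap the next disk"`.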

Once you have replaced all of your disks, the old ones may still be listed as faulty spares. Remove these entries with:

mdadm --manage /dev/md1 --remove faulty

Once the software RAID device has been resized (i.e., you have replaced all the drives with new and/or larger ones), a couple of steps remain to extend the LVM volume (assuming you are using LVM). Note that md does not automatically use the extra space on the larger members; grow the array first with mdadm --grow /dev/md1 --size=max. If you are not using LVM and have a file system directly on the /dev/mdX device, you can skip to the last step, which is resizing the file system.

I have not resized an LVM volume in a while, but if memory serves you run pvresize, then lvresize. From there you resize the file system itself. If you are using one of the extN file systems, you can use resize2fs. Recent versions of resize2fs do not require you to remove the journal first (in other words, resize2fs has native support for ext3 and ext4).
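The post-replacement steps can be sketched end to end. The volume group and logical volume names (vg0, storage) are assumptions for illustration; substitute your own, and check `lvdisplay` first. I use lvextend here (equivalent to lvresize when growing). As before, DRY_RUN=1 (the default) prints the commands instead of running them.

```shell
# Sketch of growing the stack after all members have been replaced.
# Assumed names: array /dev/md1, volume group vg0, logical volume storage.

run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "+ $*"
    else
        "$@"
    fi
}

# 1. Tell md to use the full capacity of the new, larger members.
run mdadm --grow /dev/md1 --size=max

# 2. Grow the LVM physical volume to fill the enlarged array.
run pvresize /dev/md1

# 3. Extend the logical volume into the newly freed extents.
run lvextend -l +100%FREE /dev/vg0/storage

# 4. Finally, grow the file system (ext3/ext4 support online growth).
run resize2fs /dev/vg0/storage
```

Each step operates on the layer below it, so run them in this order and verify each one (mdadm --detail, pvdisplay, lvdisplay, df -h) before moving on.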