My RAID6 was originally planned with 6 drives, but only had 5 for a while due to space concerns with the case. However, serving as a file server, media server, and host for multiple cryptocurrency nodes, it filled up the 2.7TB pretty quickly. So I got a new case (and some more RAM) with proper space for 6 3.5″ drives (and 2 5.25″). When migrating to it, I decided to add the extra 1TB WD Red NAS drive I had bought but had not been able to use.
The case is a Fractal Design Define Mini, and I am thoroughly impressed. Six 3.5″ slots, two 5.25″ external slots, and lots of sound padding on the doors and sides.
Keep in mind that in my case the device node is /dev/sdf; yours will likely be different. After connecting the disk and powering up, partition it. I set it up identically to all the other disks: a 128MiB EFI partition on sdf1, and a second partition, sdf2, for the array (you can use gdisk to partition manually, or sfdisk to copy the partition structure of another drive). Having the EFI partition on each disk allows me to boot if any drive fails, since the mdraid is set up only across the second partitions on each drive. That is, my boot partition is not part of the array. Note that you then have to synchronize any updated kernel or boot files across these boot partitions; this can be automated with a kernel build-time hook.
Now you can add the disk (more accurately, the partition) as a spare:
mdadm --add /dev/md0 /dev/sdf2
If you cat /proc/mdstat, you will see it’s been added as a spare:
md0 : active raid6 sdf2(S) sdc2 sdb2 sda2 sde2 sdd2
mdadm will now resync the array, which will take a while (about 4 hours in my case). To speed things up considerably, raise the md speed limit in /proc/sys/dev/raid/speed_limit_max.
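Adjusting the limit is just a write into procfs; the values below are example figures in KiB/s per device, not recommendations, so tune them to your hardware:

```shell
# Check the current resync speed bounds (KiB/s per device).
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# Raise the ceiling so the resync can run as fast as the disks allow.
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# Raising the floor keeps a minimum rate even under competing I/O load.
echo 50000 > /proc/sys/dev/raid/speed_limit_min
```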
Once that’s done, you can grow the array. You can technically grow an array while it’s still syncing, but when sensitive data is in play I tend to get a little superstitious:
mdadm --grow /dev/md0 --raid-devices=6
This will, again, take hours, and since all of our digital lives are on here (backed up, but still), I am too nervous to continue until after the reshape is done. Though you probably can without issue, I think…
md0 : active raid6 sdf2 sdc2 sdb2 sda2 sde2 sdd2
      2929501200 blocks super 1.2 level 6, 16k chunk, algorithm 2 [6/6] [UUUUUU]
      [>....................]  reshape =  0.1% (1146936/976500400) finish=585.9min speed=27744K/sec
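While waiting, you can keep an eye on the reshape without retyping the cat command; both of these are standard tools, used here as a sketch:

```shell
# Refresh /proc/mdstat every 60 seconds to follow the reshape progress.
watch -n 60 cat /proc/mdstat

# Or ask mdadm for a one-shot summary of the array's state.
mdadm --detail /dev/md0
```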
Now normally this is where you’d extend the filesystem, but the next layer is the encryption, so we have to resize that first:
cryptsetup resize root
That only takes a few seconds, since it just extends the mapping to cover the newly available space. Now you can finally extend the filesystem itself:
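In my case that means resize2fs; I am assuming an ext4 filesystem here, since the post never names it, so substitute your own filesystem's grow tool if yours differs:

```shell
# Grow the filesystem inside the dm-crypt mapping. With no size
# argument, resize2fs expands to fill all available space, and ext4
# can be resized online while mounted.
resize2fs /dev/mapper/root
```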
Before:

/dev/mapper/root 2.7T 2.2T 540G 81% /

After:

/dev/mapper/root 3.6T 2.2T 1.5T 61% /
That’s it. A dm-crypt layer really only adds one quick command to the procedure. Hopefully it stays this way for a while!