Published on Mar 19, 2017 by Chris Osborn


Storage comparison vs 25 years ago

My server has been getting rather full from all the cartoon DVDs I have been ripping, so I've been saving up my money to replace the 2TB drives in it with 8TB drives. The last time I spent this amount of money on hard drives all I got was a 5.25" full-height 350MB drive. Quite a difference 25 years makes!

I didn't want to build a new server or do a new install from scratch, because it's a lot of work to configure a new server and this one is working the way I like. Don't fix what ain't broke. I decided to upgrade the drives and expand the RAID by following this guide. I can always upgrade the OS later.

First problem: I can't continue to use MBR! Found that out when I tried to add what should have been a new 7TB partition and Linux told me it was smaller than the 1.5TB partition on the old drives. Had to switch to GPT, which as you'll see caused issues later.
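Switching a drive over to GPT with parted looks roughly like this. A sketch only: the device name is made up, and `mklabel` destroys the existing partition table, so triple-check the target with `lsblk` first.

```shell
# Hypothetical device -- verify with lsblk before running anything!
# mklabel gpt wipes the old MBR partition table on that drive.
parted /dev/sdX mklabel gpt
# One big partition for the RAID member, starting at 1MiB for alignment.
parted -a optimal /dev/sdX mkpart primary 1MiB 100%
# Mark it as a Linux RAID member.
parted /dev/sdX set 1 raid on
```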

Label your drives! I thought I had the drives labeled correctly, but I didn't. I ended up changing the partition table on a running drive, and the RAID went down after I rebooted. Luckily I was able to restore the partition table and bring it back up. As suggested by @0x3d0g on Twitter, I'm now putting the drive serial number on my labels and will use /dev/disk/by-id if I need to mess with them in the future.
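The nice thing about the by-id names is that each symlink encodes the drive's model and serial number, so you can match a device node to the physical label before touching anything:

```shell
# Each symlink includes model and serial, and points at the sdX node,
# so you can match what the kernel sees to the sticker on the drive.
ls -l /dev/disk/by-id/ | grep -v part
```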

After swapping drives one at a time over four days, the last thing to do was to get Grub re-installed so I could boot after swapping the final drive. This turned out to be fairly tricky because of the switch from MBR to GPT: the change meant I needed to add a special partition for grub. Of course, this late in the game, having already set up three drives and having the fourth one offline, I didn't want to start over with repartitioning just to add a partition I didn't know I needed beforehand.

Luckily, modern partitioning tools start the first partition at sector 2048 by default, so there was a very small amount of unused space at the beginning of each drive. I was able to use it to create a small BIOS boot partition for grub to install its core image to.
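Since the main partition starts at sector 2048, the gap from sector 34 (just past the GPT itself) to sector 2047 is free, which is about 1MiB and plenty for grub's core image. Carving it out looks something like this (device and partition number are hypothetical, depending on your layout):

```shell
# Turn the unused gap before sector 2048 into a tiny partition.
parted /dev/sdX unit s mkpart biosboot 34 2047
# The bios_grub flag tells grub-install to put its core image there.
parted /dev/sdX set 2 bios_grub on   # partition number depends on your layout
```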

What do you mean, no partitions? After making grub do an install to one of the new 8TB drives and rebooting, I just got thrown into the rescue shell. Trying to manually enter commands to choose the correct partition didn't work; grub insisted there were no partitions on any of the drives. Grub had installed itself without GPT support!

I tried lots of different things to get booted up. Grub from a USB stick would just freak out. I couldn't stick the old HD with grub on SATA 5 or 6, since the BIOS wouldn't boot from it. The motherboard I'm using doesn't do EFI, so I couldn't try to boot from an EFI shell.

I ended up putting the original drive back in so I could boot from it. After more research I found there's an option you need to give to grub-install to get the GPT module installed too:

grub-install --modules="part_gpt part_msdos" /dev/sda

Once I did that I was able to remove the final MBR drive and get the server booted up and running entirely off the new 8TB drives! After all the drives had synchronized, I started expanding the space. The first step was to grow each RAID array to use the new, larger partitions. The next step was to expand the Physical Volume, then expand the Logical Volume.
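That expansion chain, bottom to top, looks roughly like this. The array, volume group, and logical volume names here are made up, and each step should only run after the previous one finishes:

```shell
# 1. Grow the RAID array to fill the new, larger member partitions.
mdadm --grow /dev/md1 --size=max
# 2. Tell LVM the Physical Volume underneath got bigger.
pvresize /dev/md1
# 3. Grow the Logical Volume into all the new free space.
lvextend -l +100%FREE /dev/vg0/buffet
```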

The final step was to expand the ext4 filesystem. It went fine on the root partition, but when I tried to expand the /buffet partition from 4.5TB up to 19TB I got an error from resize2fs:

New size too large to be expressed in 32 bits

I actually found that to be pretty funny! My filesystem was now going to be so ridiculously huge that a 32 bit integer wasn't big enough! It wasn't until now that it dawned on me just how much more space I was adding.
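The limit resize2fs was complaining about is easy to work out: without the ext4 64bit feature, block numbers are 32-bit, and with the default 4KiB block size that caps a filesystem at 16TiB, well short of 19TB. A quick back-of-the-envelope check:

```python
# Without ext4's 64bit feature, block numbers are 32-bit integers.
block_size = 4096           # default ext4 block size in bytes
max_blocks = 2 ** 32        # largest block number a 32-bit field can hold
limit_bytes = max_blocks * block_size
print(limit_bytes // 2 ** 40)  # -> 16 (TiB)
```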

Because I hadn't really given it much thought when I first set up the server, I hadn't used the 64bit option when originally creating the ext4 filesystem. And because my server is running Debian 7 (remember how I said I didn't want to upgrade?), its e2fsprogs were too old to convert the filesystem to 64 bit.

Fixing this wasn't too hard, though. I just downloaded, compiled, and installed the latest e2fsprogs. Once I had done that, I was able to boot into single user mode, take the /buffet partition offline, and convert it to 64 bit. After that I had no problem with resize2fs.
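With a new enough e2fsprogs (1.43 or later), the offline conversion and resize go roughly like this. The device path is made up, and the filesystem has to be unmounted for all three steps:

```shell
# Filesystem must be unmounted; resize2fs insists on a clean fsck first.
e2fsck -f /dev/vg0/buffet
# -b enables the 64bit feature, rewriting metadata to use 64-bit block numbers.
resize2fs -b /dev/vg0/buffet
# Now the resize to the full (19TB) size can go through.
resize2fs /dev/vg0/buffet
```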

I've now got what seems like an absurd amount of space. But I know eventually even this amount of space will be filled up and I'll have to upgrade again. But until then I've got so much room for activities!
