
File Server Upgrade, Take 2


With the addition of a SYBA SY-JM363-2S1P PCI Express SATA / IDE Combo Controller Card I was able to attach my two new hard drives and proceed with the upgrade. (A really good thing about this card is that the devices showed up in the BIOS/CMOS/whatever as SCSI drives, not "Peripheral Hard Drive controller", which means that the first boot drive was the same device before and after adding the new disks; the other card didn't do that, and caused other issues.)

For the most part I was just following the Gentoo wiki guide "Resize LVM2 on RAID5". The forced integrity check of my 1TB array took about 2 hours, which was pretty close to the initially predicted time. Having watched IE and (file) Explorer produce time estimates that fall way, way short so many times, I still get a warm and fuzzy feeling whenever something predicts a timespan correctly.
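
For the curious, the check itself was nothing fancy; a minimal sketch, assuming it was a forced e2fsck on the unmounted logical volume (the volume path is taken from the df output at the end of this post):

umount /mnt/raid
e2fsck -f /dev/mapper/lvm--raid-lvm0    # -f forces a full check even if the fs is marked clean
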

After the check I was about to upgrade when tekman commented that a couple of recent studies suggest RAID-5 isn't very good, because the probability of one hard drive failing given that another has already failed is noticeably higher than the standalone failure rate. (The CMU study, Disk failures in the real world: What does an MTTF of 1,000,000 hours mean to you?, very much says that; I have yet to finish the Google study, Failure Trends in a Large Disk Drive Population.) So I took a break to read the CMU paper and idly investigate whether MD RAID-5 can be converted to MD RAID-6 (it turns out it cannot, at least as of last week), and then decided to just go ahead and kick off the expansion from 1TB to 2TB. (The current plan is to finish the Google paper and decide whether I need to spend a weekend offloading the data and moving it from 2TB MD RAID-5 to 1.5TB MD RAID-6.)
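
For reference, checking what the array currently is looks roughly like this (/dev/md0 is an assumption about my device name, not necessarily what it is here):

cat /proc/mdstat            # shows the personality (raid5), member disks, and chunk size
mdadm --detail /dev/md0     # full detail: RAID level, device count, array state
# mdadm --grow offered no raid5 -> raid6 reshape at this point, hence the
# "offload everything and re-create" plan if RAID-6 wins out.
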

One thing I wasn't sure about was whether you could grow the array by more than one device at a time (I couldn't think of a valid reason for it not working, but nothing said that it -would- work, either). I figured it would either a) work, and grow across both, b) work, and grow across one, or c) report an error and not do anything. Since the command didn't immediately fail, I was hoping for (a) or (b), and not the evil (d) (corrupt everything and mock me). Predicted time to completion: 10 hours.
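
A rough sketch of kicking off the grow, assuming the array is /dev/md0, the two new drives are /dev/sdc and /dev/sdd, and the array goes from three active members to five (the names and counts are placeholders, not necessarily what I typed):

mdadm --add /dev/md0 /dev/sdc /dev/sdd     # both new disks join as spares first
mdadm --grow /dev/md0 --raid-devices=5     # reshape across all five at once
cat /proc/mdstat                           # reshape progress and estimated completion time
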

Two hours later I get home, check on the status, and it has 3 hours until completion... nice. (An initial prediction that turns out too long is apparently fine by me, too.) The array is still usable while it's growing (albeit a tad slow, though Zoom Player eventually got the buffering right and stopped stuttering), and I get in a few episodes of Scrubs before going to bed with about an hour left to complete.

Next morning I run the pvresize and decide to skip the fsck, since I had just done one before the md expansion (it had been insisting on doing a check at boot time). Then a vastly superior lvresize: the wiki suggests something like -L +200M to grow the volume by 200MB, but I just wanted it to grow as big as it could, so I ran pvdisplay to see the total physical extents and then lvresize -l <that number>. Finally resize2fs, which fails because the device has been mounted since the last fsck. Okay, fsck it is.
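
Spelled out, the morning's sequence was roughly this (volume paths from the df output below; the extent count is whatever pvdisplay reports on your system):

pvresize /dev/md0                                   # tell LVM the physical volume grew
pvdisplay /dev/md0 | grep "Total PE"                # note the total physical extents
lvresize -l <total PE> /dev/mapper/lvm--raid-lvm0   # grow the logical volume to use everything
e2fsck -f /dev/mapper/lvm--raid-lvm0                # resize2fs wants a fresh check first
resize2fs /dev/mapper/lvm--raid-lvm0                # finally, grow the filesystem

(If your lvresize supports it, -l +100%FREE does the same thing without the pvdisplay arithmetic.)
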

In case you are wondering about the poorly-explained -S argument to resize2fs: it is your RAID chunk size (cat /proc/mdstat) divided by your filesystem block size (dumpe2fs /dev/your/device | grep "Block size:"). Supposedly it can figure that out on its own, but I don't know whether that works when LVM sits between MD and the filesystem.
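
Concretely, with made-up example numbers (a 64k chunk and a 4096-byte block are common defaults, but they are just placeholders here):

cat /proc/mdstat                                          # e.g. "... 64k chunks"
dumpe2fs /dev/mapper/lvm--raid-lvm0 | grep "Block size:"  # e.g. "Block size:  4096"
# -S (RAID stride) = chunk size / block size = 65536 / 4096 = 16
resize2fs -S 16 /dev/mapper/lvm--raid-lvm0
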

So, resize2fs, remount the device, and voilà:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/lvm--raid-lvm0
                      1.8T  827G  952G  47% /mnt/raid
