Linux Software RAID and SATA Hot Swap
Why Software RAID?
I know there are a million pages online about Linux Software RAID, but I wanted to record my own experience with it.
My home server has a lot of storage:
- a 160GB RAID1 array, for my boot volume, on one of the motherboard's RAID controllers
- a 500GB RAID1 array, for backups, again on a RAID controller
- a 2TB RAID1 array, for home directories and virtual machines, on a RAID controller
- a 3TB RAID1 array, for other stuff, using software RAID.
- a single 3TB drive, for daily backups of my 3TB array
My motherboard is a number of years old now, and the onboard controllers could not do RAID for 3TB drives, as they only recognized them as 873GB. So I left these as standard drives, and set them up in software RAID.
My goal for this endeavor was to convert my 500GB and 2TB arrays over to software RAID, for several reasons:
- Actually getting notifications regarding any issues (see the monitoring example at the end of this section)
- Control over rebuilds, being able to add/remove disks
- Not being tied to a specific RAID controller with a specific firmware version. If the motherboard were to die, I could easily move the drives to another machine.
- No reboots required to work with the drives
- Linux can do SATA hot swap, so I don't need to power down to swap a disk
The minor performance hit isn't an issue, so the pros far outweigh the cons.
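For the notification point above, the usual mechanism is mdadm's monitor mode. A minimal sketch, assuming the config file lives at /etc/mdadm.conf and using a placeholder mail address:
# Tell mdadm where to send alert mail (placeholder address)
echo "MAILADDR root@localhost" >> /etc/mdadm.conf
# Run the monitor as a daemon; it sends mail on events like Fail or DegradedArray
mdadm --monitor --scan --daemonise --delay=1800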
Fiasco #1: The 500GB Array
I decided to do the 500GB array first, since it was small and quick to work with.
I moved the data off the drive, rebooted the server to get into the BIOS, deleted the array, then booted the server back up. Then I (not showing any of these steps, you'll see why...):
- partitioned the drives using fdisk
- created the RAID1 array and waited for it to sync
- formatted it
- mounted the drive
- put all my files back on
Then I rebooted the server, and what do I get? NO OPERATING SYSTEM FOUND
I shut down the server, unplugged the two 500GB drives, and it found the operating system just fine. The 3TB array also uses software RAID, yet it has never caused this problem. Why? Drives larger than 2.2TB require a GUID Partition Table (GPT) [1] rather than the standard msdos partition table, and my motherboard won't even attempt to boot from a GPT drive. The rebuilt 500GB drives still had msdos partition tables, so the BIOS apparently tried to boot from them instead of my boot array, and failed.
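For the curious, the 2.2TB ceiling comes from the msdos (MBR) partition table storing sector counts in 32-bit fields, with 512-byte sectors:
# 2^32 sectors x 512 bytes per sector = 2199023255552 bytes, or about 2.2TB
echo $((2**32 * 512))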
Now to rebuild the array using GPT drives...
Rebuilding The Array
I can't boot the server with the drives plugged in, and running them on USB to SATA converters is just horrible. What to do? Linux supports SATA hot swap! I booted the server up, then just plugged the drives in. They are instantly recognized by the system, and added in as sd[x] devices.
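If a hot-plugged drive ever fails to show up on its own, the kernel can be told to look again. A sketch; the host0 below is an assumption, since the right SATA host varies by controller:
# Check the kernel log to see what device name the new disk was given
dmesg | tail
# List the block devices the kernel currently knows about
cat /proc/partitions
# If the disk did not appear, force a rescan of the SATA/SCSI host (host number is a guess)
echo "- - -" > /sys/class/scsi_host/host0/scan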
- Reassemble the array
mdadm -A /dev/md1
- Mount the drive
mount /dev/md1 /mnt/500GB-array
- Move all the data off the drive
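For the copy itself, something like rsync preserves permissions and ownership; a sketch, assuming a scratch location at /mnt/holding with enough free space:
# -a preserves permissions, ownership and timestamps; -H keeps hard links intact
rsync -aH --progress /mnt/500GB-array/ /mnt/holding/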
- Stop the array, zero the superblocks and remove the array
mdadm -S /dev/md1
mdadm --zero-superblock /dev/sdi1
mdadm --zero-superblock /dev/sdh1
mdadm --remove /dev/md1
rm /dev/md1
- Create a new partition table and partitions using parted on the first drive
[root@vmware dev]# parted sdi
GNU Parted 1.8.1
Using /dev/sdi
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print

Model: ATA ST3500630AS (scsi)
Disk /dev/sdi: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End    Size   Type     File system  Flags
 1      32.3kB  500GB  500GB  primary               raid

(parted) rm 1
(parted) print

Model: ATA ST3500630AS (scsi)
Disk /dev/sdi: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start  End  Size  Type  File system  Flags

(parted) mklabel
Warning: The existing disk label on /dev/sdi will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
New disk label type?  [msdos]? gpt
(parted) unit GB
(parted) mkpart primary 0.00GB 500.0GB
(parted) print

Model: ATA ST3500630AS (scsi)
Disk /dev/sdi: 500GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End    Size   File system  Name     Flags
 1      0.00GB  500GB  500GB               primary

(parted) quit
- Do the same thing on the second drive
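Instead of repeating the interactive session, parted can also do this non-interactively; a sketch for the second drive (note that mklabel wipes the existing partition table):
# -s runs in script mode with no prompts: new GPT label, then one partition covering the disk
parted -s /dev/sdh mklabel gpt mkpart primary 0.00GB 500GB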
- Create the array
[root@vmware dev]# mdadm --create /dev/md1 --level=1 --metadata=1.2 --raid-devices=2 /dev/sdh1 /dev/sdi1
mdadm: metadata format 1.02 unknown, ignored.
mdadm: metadata format 1.02 unknown, ignored.
mdadm: array /dev/md1 started.
[root@vmware dev]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdi1[1] sdh1[0]
      488386414 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.1% (783360/488386414) finish=103.7min speed=78336K/sec

md0 : active raid1 sdc1[0] sdd1[1]
      5860532736 blocks super 1.2 [2/2] [UU]

unused devices: <none>
- At any point during the resync, you can format the drive, mount it, and start using it. Obviously, it will not have working redundancy until it has fully synced.
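The format and mount steps aren't shown above; a minimal sketch, assuming ext3 and the same mount point as before:
# Create a filesystem on the new array (ext3 here; ext4 or xfs work the same way)
mkfs.ext3 /dev/md1
# Mount it at the old location
mount /dev/md1 /mnt/500GB-array
# Record the array in mdadm.conf so it assembles cleanly at boot
mdadm --detail --scan >> /etc/mdadm.conf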
Fiasco #2: The 2TB Array
Next on the agenda was to make the 2TB RAID1 array into a software array.
One of my motivators behind this project was to correct the 37 bad sectors that had shown up on one of my 2TB drives. I figured I would work that in between deleting the hardware array and creating the software array, using the Linux program badblocks to verify and fix the drive.
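A bad-sector count like that typically comes from the drive's SMART counters; assuming smartmontools is installed, they can be checked with something like this (attribute names vary a little by vendor):
# Dump the SMART attribute table and pull out the sector-health counters
smartctl -A /dev/sda | grep -iE 'reallocated|pending'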
Using badblocks
Badblocks is a handy program, similar in spirit to SpinRite. By default, it makes four passes over the drive:
- The first pass writes the pattern 0xaa (10101010) to every block on the drive, then reads it back to verify
- The second pass writes the pattern 0x55 (01010101), then verifies
- The third pass writes all ones (0xff), and verifies
- The final pass writes all zeroes (0x00), and verifies
If there are no issues with the drive, it takes about 24 hours to run. And at the end, you have a completely zeroed drive.
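One usage note: badblocks can also log the block numbers it finds to a file, which mke2fs can be told to mark as unusable. A sketch; the filename is arbitrary, both commands must target the same block device, and the -b block size has to match the filesystem's block size:
# -o writes each bad block number to a file; -b sets the test block size
badblocks -wvs -b 4096 -o /root/badblocks.txt /dev/sda1
# Build the filesystem on the same device with the same block size, skipping those blocks
mkfs.ext3 -b 4096 -l /root/badblocks.txt /dev/sda1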
When it's successful, it looks like this:
[root@vmware /]# badblocks -wvs /dev/sda
Checking for bad blocks in read-write mode
From block 0 to 390711384
Testing with pattern 0xaa: done
Reading and comparing: done
Testing with pattern 0x55: done
Reading and comparing: done
Testing with pattern 0xff: done
Reading and comparing: done
Testing with pattern 0x00: done
Reading and comparing: done
Pass completed, 0 bad blocks found.
To my dismay, my drive had more than 37 bad sectors. A lot more. The output looked like this:
[root@vmware mnt]# badblocks -wvs /dev/sda
Checking for bad blocks in read-write mode
From block 0 to 1953514584
Testing with pattern 0xaa: done
Reading and comparing: 1113920
 1113920/   1953514584
1115128
 1115128/   1953514584
1115248
 1115248/   1953514584
1116392
 1116392/   1953514584
1118944
 1118944/   1953514584
2340632
 2340632/   1953514584
2350736
 2350736/   1953514584
2356936
 2356936/   1953514584
2362000
 2362000/   1953514584
2399560
 2399560/   1953514584
2413312
 2413312/   1953514584
2430776
 2430776/   1953514584
I decided to pull that drive and make the remaining 2TB drive into a standalone, without RAID.