Configure RAID 1
2025/01/06

Configure RAID 1 by adding 2 new disks to a computer.
[1] | This example is based on the environment shown below. Two new disks, [sdb] and [sdc], are installed on this computer and configured as RAID 1. |
[root@dlp ~]# df -h
Filesystem           Size  Used Avail Use% Mounted on
/dev/mapper/cs-root   71G  3.2G   68G   5% /
devtmpfs             4.0M     0  4.0M   0% /dev
tmpfs                7.7G     0  7.7G   0% /dev/shm
tmpfs                3.1G  8.7M  3.1G   1% /run
tmpfs                1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/vda2            960M  406M  555M  43% /boot
tmpfs                1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs                1.0M     0  1.0M   0% /run/credentials/serial-getty@ttyS0.service
tmpfs                1.6G  4.0K  1.6G   1% /run/user/0
[2] | Create a partition on each new disk and set the RAID flag. |
[root@dlp ~]# parted --script /dev/sdb "mklabel gpt"
[root@dlp ~]# parted --script /dev/sdc "mklabel gpt"
[root@dlp ~]# parted --script /dev/sdb "mkpart primary 0% 100%"
[root@dlp ~]# parted --script /dev/sdc "mkpart primary 0% 100%"
[root@dlp ~]# parted --script /dev/sdb "set 1 raid on"
[root@dlp ~]# parted --script /dev/sdc "set 1 raid on"
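The same steps can be expressed as a loop so that more disks can be prepared without repeating each command. This is only a sketch: the DISKS list and the run() wrapper are assumptions, and DRY_RUN defaults to 1 so the parted commands are printed for review instead of being executed against real disks.

```shell
#!/bin/sh
# Sketch: prepare each listed disk with a single RAID-flagged partition.
# DRY_RUN=1 (the default here) only prints the commands; set DRY_RUN=0
# to actually run them against real, empty disks.
: "${DRY_RUN:=1}"
DISKS="/dev/sdb /dev/sdc"

run() {
    if [ "$DRY_RUN" = 1 ]; then echo "+ $*"; else "$@"; fi
}

for d in $DISKS; do
    run parted --script "$d" "mklabel gpt"            # new GPT label
    run parted --script "$d" "mkpart primary 0% 100%" # one partition, whole disk
    run parted --script "$d" "set 1 raid on"          # mark partition 1 for RAID
done
```

Reviewing the printed commands first is a cheap safeguard, since mklabel destroys any existing partition table.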
[3] | Configure RAID 1. |
[root@dlp ~]# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
Continue creating array [y/N]? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
# show status
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      167638016 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  6.3% (10603904/167638016) finish=12.6min speed=206316K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>

# when syncing has finished, the status looks like the following
# and RAID 1 is fully configured
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      167638016 blocks super 1.2 [2/2] [UU]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>
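Instead of eyeballing [cat /proc/mdstat] repeatedly, the sync state can be checked mechanically. The helper below is a hedged sketch, not part of the original procedure: mdstat_state() is a hypothetical name, and it reads mdstat-format text from stdin so it can be tried on a sample; on a real host you would run it as [mdstat_state < /proc/mdstat].

```shell
#!/bin/sh
# Sketch: report whether an md array is still syncing, by scanning
# mdstat-format text on stdin for a resync/recovery progress line.
mdstat_state() {
    if grep -qE 'resync|recovery'; then
        echo "syncing"
    else
        echo "in sync"
    fi
}

# try it on the intermediate status shown above:
printf '%s\n' '[=>...................]  resync =  6.3%' | mdstat_state
# → syncing
```

A loop such as [while [ "$(mdstat_state < /proc/mdstat)" = syncing ]; do sleep 60; done] would then wait for the initial sync to complete.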
[4] | To use the RAID device, create a filesystem on it and mount it as usual. |
# for example, format with XFS and mount it on /mnt
[root@dlp ~]# mkfs.xfs /dev/md0
meta-data=/dev/md0               isize=512    agcount=4, agsize=10477376 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=1
         =                       reflink=1    bigtime=1 inobtcount=1 nrext64=1
         =                       exchange=0
data     =                       bsize=4096   blocks=41909504, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1, parent=0
log      =internal log           bsize=4096   blocks=20463, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
Discarding blocks...Done.
[root@dlp ~]# mount /dev/md0 /mnt
[root@dlp ~]# df -hT
Filesystem          Type      Size  Used Avail Use% Mounted on
/dev/mapper/cs-root xfs        71G  3.2G   68G   5% /
devtmpfs            devtmpfs  4.0M     0  4.0M   0% /dev
tmpfs               tmpfs     7.7G     0  7.7G   0% /dev/shm
tmpfs               tmpfs     3.1G  8.7M  3.1G   1% /run
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/systemd-journald.service
/dev/vda2           xfs       960M  406M  555M  43% /boot
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/getty@tty1.service
tmpfs               tmpfs     1.0M     0  1.0M   0% /run/credentials/serial-getty@ttyS0.service
tmpfs               tmpfs     1.6G  4.0K  1.6G   1% /run/user/0
/dev/md0            xfs       160G  3.1G  157G   2% /mnt

# to set the device in fstab, use its UUID:
# the md*** device name can change when hardware changes, but the UUID stays the same
[root@dlp ~]# blkid | grep md
/dev/md0: UUID="97cea3bc-8b10-4c71-957c-a8d75a5dc7ff" BLOCK_SIZE="512" TYPE="xfs"
[root@dlp ~]# vi /etc/fstab
# set the new entry with its UUID
UUID=4f905149-bcf9-42c1-8c98-7a4f0e4095af / xfs defaults 0 0
UUID=4162725a-1d28-42a8-93ce-9138431b6bd8 /boot xfs defaults 0 0
UUID=57dcca14-dd42-4233-9a7c-d7ed916ec366 none swap defaults 0 0
UUID=97cea3bc-8b10-4c71-957c-a8d75a5dc7ff /mnt xfs defaults 0 0
# even if the md*** device name changes, the filesystem is still mounted normally
[root@dlp ~]# df -hT /mnt
Filesystem     Type  Size  Used Avail Use% Mounted on
/dev/md127     xfs   160G  3.1G  157G   2% /mnt
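Copying the UUID from [blkid] into [/etc/fstab] by hand invites typos. The helper below is a sketch of one way to generate the fstab line automatically; fstab_line() is a hypothetical name, and it parses a blkid-format line from stdin so it can be tried on the sample output above. On a real host you would run [blkid /dev/md0 | fstab_line /mnt].

```shell
#!/bin/sh
# Sketch: turn a blkid output line into an fstab entry for the given
# mount point, extracting the UUID and filesystem TYPE fields with sed.
fstab_line() {
    mountpoint=$1
    sed -n 's|.* UUID="\([^"]*\)".*TYPE="\([^"]*\)".*|UUID=\1 '"$mountpoint"' \2 defaults 0 0|p'
}

# try it on the blkid line shown above:
echo '/dev/md0: UUID="97cea3bc-8b10-4c71-957c-a8d75a5dc7ff" BLOCK_SIZE="512" TYPE="xfs"' \
  | fstab_line /mnt
# → UUID=97cea3bc-8b10-4c71-957c-a8d75a5dc7ff /mnt xfs defaults 0 0
```

The generated line can then be appended with [>> /etc/fstab] after reviewing it.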
[5] | If a member disk in the RAID array fails, swap in a new disk and re-configure RAID 1 as follows. |
# in a failure state, the status looks like this
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active (auto-read-only) raid1 sdb1[0]
      167638016 blocks super 1.2 [2/1] [U_]
      bitmap: 0/2 pages [0KB], 65536KB chunk

unused devices: <none>

# after swapping in the new disk, re-add it to the array
[root@dlp ~]# mdadm --manage /dev/md0 --add /dev/sdc1
mdadm: added /dev/sdc1
[root@dlp ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdc1[1] sdb1[0]
      167638016 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  6.3% (10603904/167638016) finish=12.6min speed=206316K/sec
      bitmap: 2/2 pages [8KB], 65536KB chunk

unused devices: <none>
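The degraded state above is visible in the status field: a healthy two-disk mirror shows [UU], while a failed member shows as an underscore, e.g. [U_]. That makes it easy to watch for failures with a small check; the sketch below uses a hypothetical raid_health() name and reads mdstat-format text from stdin, so on a real host it would be run as [raid_health < /proc/mdstat], perhaps from cron.

```shell
#!/bin/sh
# Sketch: flag a degraded RAID 1 array by looking for an "_" inside the
# [UU]-style member-status field of mdstat-format input.
raid_health() {
    if grep -q '\[U*_'; then
        echo "degraded"
    else
        echo "healthy"
    fi
}

# try it on the failure status shown above:
echo '167638016 blocks super 1.2 [2/1] [U_]' | raid_health
# → degraded
```

For production monitoring, [mdadm --monitor] (or its mdmonitor service) is the built-in way to get mail on failure events; the check above is only a minimal illustration of reading the status field.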
|