Recently, as media devices have gained more functionality, the number of large files such as images and videos keeps growing. When data is spread across several HDDs, it becomes hard to remember which data lives on which drive. I therefore want to bundle multiple HDDs together and treat them as a single large-capacity drive. This article explains how to combine multiple HDDs using LVM.
--A total of 5 HDDs
--Each HDD is 8TB
--Combine their capacity into two groups
First, list the disk information with the following command to confirm the names of the disks to be combined.
$ sudo fdisk -l
# ------------------------------------------------------------------ #
Disk /dev/sdf: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdg: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdh: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/sde: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
# ------------------------------------------------------------------ #
Also check the drive's identity (model name, and serial number if needed) so you do not confuse it with another HDD. Here the model is checked:
$ sudo smartctl -a /dev/sde | grep Model
# ------------------------------------------------------------------ #
Device Model: ST8000VN004-2M2101
# ------------------------------------------------------------------ #
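All five drives can also be compared at a glance with `lsblk` (a sketch; the columns below are standard `lsblk` output columns, and SERIAL may be blank for some controllers):

```shell
# List every block device with its name, size, model, and serial number.
# SERIAL may be empty when the driver does not expose it.
lsblk -o NAME,SIZE,MODEL,SERIAL
```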
Also check that it is not mounted.
$ df -h
If it is mounted, unmount it first:
$ umount <YOUR MOUNT POINT>
Initialize the five disks as LVM physical volumes.
$ sudo pvcreate /dev/sdd
WARNING: ext4 signature detected on /dev/sdd at offset 1080. Wipe it? [y/n]: y
Wiping ext4 signature on /dev/sdd.
Physical volume "/dev/sdd" successfully created.
Do the same for sde, sdf, sdg, and sdh.
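The remaining initializations can be wrapped in one loop. The sketch below only prints the commands as a dry run; remove `echo` to execute them (`pvcreate` requires root and will prompt before wiping existing filesystem signatures):

```shell
# Print the pvcreate command for each remaining disk in this example.
# Remove "echo" to actually initialize them (requires root).
for d in sde sdf sdg sdh; do
  echo sudo pvcreate "/dev/$d"
done
```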
Next, combine the initialized disks into volume groups.
This time the five 8TB disks (sdd, sde, sdf, sdg, sdh) are split into two groups: three disks are combined into one 24TB logical drive, and the remaining two into one 16TB logical drive.
First, create the 24TB group.
$ sudo vgcreate exthd1Group /dev/sdd /dev/sde /dev/sdf
# ------------------------------------------------------------------ #
Volume group "exthd1Group" successfully created
# ------------------------------------------------------------------ #
Do the same for the 16TB group.
$ sudo vgcreate exthd2Group /dev/sdg /dev/sdh
Next, create a logical volume, specifying the volume group `exthd1Group` created above.
$ sudo lvcreate -l 100%FREE -n lv0 exthd1Group
# ------------------------------------------------------------------ #
Logical volume "lv0" created.
# ------------------------------------------------------------------ #
Do the same for `exthd2Group`.
$ sudo lvcreate -l 100%FREE -n lv0 exthd2Group
If you do not want to use all of the free capacity, replace `-l 100%FREE` with an explicit size such as `-L 8T`, for example:
$ sudo lvcreate -L 8T -n lv0 exthd1Group
`lv0` is the name of the logical volume.
Check the created volume.
$ sudo lvdisplay
# ------------------------------------------------------------------ #
--- Logical volume ---
LV Path /dev/exthd2Group/lv0
LV Name lv0
VG Name exthd2Group
LV UUID
LV Write Access read/write
LV Creation host, time N/A, 2020-02-10 14:09:41 +0900
LV Status available
# open 0
LV Size 14.55 TiB
Current LE 3815442
Segments 2
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:3
--- Logical volume ---
LV Path /dev/exthd1Group/lv0
LV Name lv0
VG Name exthd1Group
LV UUID
LV Write Access read/write
LV Creation host, time N/A, 2020-02-10 14:09:35 +0900
LV Status available
# open 0
LV Size 21.83 TiB
Current LE 5723163
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:2
# ------------------------------------------------------------------ #
All that remains is to format the volumes and start using them.
Format each group's volume with `ext4`. First, the `exthd1Group` volume.
$ sudo mkfs -t ext4 /dev/exthd1Group/lv0
Then the `exthd2Group` volume.
$ sudo mkfs -t ext4 /dev/exthd2Group/lv0
Check the created and formatted volumes.
$ sudo fdisk -l
# ------------------------------------------------------------------ #
Disk /dev/mapper/exthd1Group-lv0: 21.9 TiB, 24004685463552 bytes, 46884151296 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk /dev/mapper/exthd2Group-lv0: 14.6 TiB, 16003123642368 bytes, 31256100864 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
# ------------------------------------------------------------------ #
Looks good. Now mount them. If the mount points do not exist yet, create them first.
$ mkdir ./hdd1 ./hdd2
$ sudo mount /dev/exthd1Group/lv0 ./hdd1
$ sudo mount /dev/exthd2Group/lv0 ./hdd2
If you want the volumes mounted automatically at startup, check the `UUID` of each group's logical volume and add an entry to fstab. First, check the `UUID` of the `exthd1Group` volume.
$ sudo blkid /dev/exthd1Group/lv0
# ------------------------------------------------------------------ #
/dev/exthd1Group/lv0: UUID="7ii63fbf-deed-4ff1-b4af-8156f" TYPE="ext4"
# ------------------------------------------------------------------ #
Check `exthd2Group` the same way.
$ sudo blkid /dev/exthd2Group/lv0
# ------------------------------------------------------------------ #
/dev/exthd2Group/lv0: UUID="8ii63fbf-deed-4ff1-b4af-8156f" TYPE="ext4"
# ------------------------------------------------------------------ #
Finally, write the entries to fstab.
$ sudo vim /etc/fstab
# ------------------------------------------------------------------ #
Add the following lines (the mount points /hdd1 and /hdd2 must exist):
UUID=7ii63fbf-deed-4ff1-b4af-8156f /hdd1 ext4 defaults 0 3
UUID=8ii63fbf-deed-4ff1-b4af-8156f /hdd2 ext4 defaults 0 3
# ------------------------------------------------------------------ #
You can verify the new entries without rebooting:
$ sudo mount -a
If `pvcreate` complains that a disk already has traces of LVM on it, run
$ sudo pvdisplay
to list the existing PVs and see which volume group each one belongs to. If the group is shown as unknown, the physical device was probably removed after the group was created.
To clean this up, first remove the missing devices from the group:
$ sudo vgreduce --removemissing <YOUR_GROUP>
Then delete the group.
$ sudo vgremove <YOUR_GROUP>
Delete PV.
$ sudo pvremove <YOUR_PV>