One way to get RAID1 is to build a RAID array with the legacy MD kernel driver and put LVM on top of it ("LVM on RAID"; the LVM layer itself is optional).
This section instead describes how to create an LVM RAID volume that is mirrored by LVM itself. LVM supports RAID 1/4/5/6/10. (Internally it uses the MD kernel driver.)
Work as root.
ls -F /dev/sde* /dev/sdf*
/dev/sde /dev/sdf
It is also possible to skip partitioning and put the whole disk under LVM control.
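If you take the whole-disk route, the partitioning step below can be skipped entirely; a minimal sketch, assuming the same /dev/sde and /dev/sdf devices used throughout this article:

```shell
# Put the raw disks directly under LVM control (no partition table needed).
# WARNING: destructive - this wipes any existing data/signatures on the disks.
pvcreate /dev/sde /dev/sdf
```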
gdisk /dev/sde
Command (? for help): p
(output omitted)
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
Number Start (sector) End (sector) Size Code Name
Command (? for help): n
(output omitted)
Hex code or GUID (L to show codes, Enter = 8300): 8e00
Changed type of partition to 'Linux LVM'
Command (? for help): p
Disk /dev/sde: 3907029168 sectors, 1.8 TiB
(output omitted)
Number Start (sector) End (sector) Size Code Name
1 2048 3907029134 1.8 TiB 8E00 Linux LVM
Command (? for help): w
(output omitted)
Do you want to proceed? (Y/N): y
(output omitted)
The operation has completed successfully.
gdisk /dev/sdf
(output omitted)
Verification
ls -F /dev/sde* /dev/sdf*
/dev/sde /dev/sde1 /dev/sdf /dev/sdf1
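The interactive gdisk sessions above can also be scripted; a hedged sketch using sgdisk (from the same gptfdisk package), which should produce the same single "Linux LVM" (8e00) partition on each drive:

```shell
# Create one partition spanning each whole disk, typed as Linux LVM (8e00),
# then ask the kernel to re-read the partition tables.
for disk in /dev/sde /dev/sdf; do
    sgdisk --zap-all "$disk"            # wipe old GPT/MBR structures
    sgdisk -n 1:0:0 -t 1:8e00 "$disk"   # partition 1, full disk, type 8e00
done
partprobe /dev/sde /dev/sdf
```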
pvcreate <devices>
pvcreate /dev/sde1 /dev/sdf1
Physical volume "/dev/sde1" successfully created.
Physical volume "/dev/sdf1" successfully created.
Verification
pvs
pvdisplay
pvs
PV VG Fmt Attr PSize PFree
/dev/sde1 lvm2 --- <1.82t <1.82t
/dev/sdf1 lvm2 --- <1.82t <1.82t
pvdisplay
"/dev/sde1" is a new physical volume of "<1.82 TiB"
--- NEW Physical volume ---
PV Name /dev/sde1
VG Name ← empty at first
PV Size <1.82 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID BTiUhz-TssW-6hFv-tr70-QO3Y-QbU3-mpNL8N
"/dev/sdf1" is a new physical volume of "<1.82 TiB"
--- NEW Physical volume ---
PV Name /dev/sdf1
VG Name ← empty at first
PV Size <1.82 TiB
Allocatable NO
PE Size 0
Total PE 0
Free PE 0
Allocated PE 0
PV UUID 6Il3bs-mr2f-46RS-rV5O-Tavu-PPcu-QNofe5
Removal
pvremove <devices>
pvremove /dev/sde1 /dev/sdf1
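If pvcreate ever refuses to run because the partitions still carry leftover filesystem or RAID signatures, those can be cleared first; a sketch using wipefs (from util-linux):

```shell
# Remove stale filesystem/RAID/LVM signatures so pvcreate starts clean.
# WARNING: destructive - only run on partitions you intend to reuse.
wipefs -a /dev/sde1 /dev/sdf1
```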
vgcreate <volume name> <devices>
vgcreate vg1 /dev/sde1 /dev/sdf1
Volume group "vg1" successfully created
Verification
vgs
vgdisplay
vgscan
pvdisplay
vgs
VG #PV #LV #SN Attr VSize VFree
vg1 2 0 0 wz--n- <3.64t <3.64t
vgdisplay
--- Volume group ---
VG Name vg1
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 1
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 0
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size <3.64 TiB
PE Size 4.00 MiB
Total PE 953862
Alloc PE / Size 0 / 0
Free PE / Size 953862 / <3.64 TiB
VG UUID eGU09g-QCWJ-pbCR-a9rf-YtKW-8b9e-BUqIPI
vgscan
Reading all physical volumes. This may take a while...
Found volume group "vg1" using metadata type lvm2
pvdisplay
--- Physical volume ---
PV Name /dev/sde1
VG Name vg1 ← VG name is now set
PV Size <1.82 TiB / not usable <4.07 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476931
Free PE 476931
Allocated PE 0
PV UUID BTiUhz-TssW-6hFv-tr70-QO3Y-QbU3-mpNL8N
--- Physical volume ---
PV Name /dev/sdf1
VG Name vg1 ← VG name is now set
PV Size <1.82 TiB / not usable <4.07 MiB
Allocatable yes
PE Size 4.00 MiB
Total PE 476931
Free PE 476931
Allocated PE 0
PV UUID 6Il3bs-mr2f-46RS-rV5O-Tavu-PPcu-QNofe5
vgrename <old vg name> <new vg name>
vgremove <vg name>
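Concrete invocations of the two commands above, using this article's vg1 (the new name vg2 is made up for illustration):

```shell
# Rename the volume group.
vgrename vg1 vg2

# Delete a volume group outright; if it still contains logical volumes,
# vgremove prompts before removing them.
vgremove vg2
```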
lvcreate --type raid1 -m <N> -L <S> -n <LV> <VG>
-m N: number of mirrors; 1 for 2 disks
-L S: size of the LV
-n LV: name of the LV to create
VG: name of the VG to use (created by vgcreate)
lvcreate --type raid1 -m 1 -L 1.8T -n lv1 vg1
Rounding up size to full physical extent 1.80 TiB
Logical volume "lv1" created.
Verification
lvs
lvdisplay
lvs
LV VG Attr LSize Pool Origin Data% Meta% Move Log Cpy%Sync Convert
lv1 vg1 rwi-a-r--- 1.80t 0.00
lvdisplay
--- Logical volume ---
LV Path /dev/vg1/lv1
LV Name lv1
VG Name vg1
LV UUID WBAnij-OHQm-8jwT-OV0X-iVAk-9FMT-DpYiXJ
LV Write Access read/write
LV Creation host, time d1, 2020-05-04 21:05:59 +0900
LV Status available
# open 0
LV Size 1.80 TiB
Current LE 471860
Mirrored volumes 2
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 254:4
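Right after lvcreate, the two mirror legs still have to synchronize (the Cpy%Sync column in the lvs output above starts at 0.00). Progress can be watched like this; a sketch assuming the vg1/lv1 names from this article:

```shell
# Show per-LV sync progress; the _rimage/_rmeta sub-LVs appear with -a.
lvs -a -o name,copy_percent,devices vg1

# Re-run every few seconds until Cpy%Sync reaches 100.00.
watch -n 5 'lvs -o name,copy_percent vg1'
```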
lvrename /dev/<vg name>/<old lv name> /dev/<vg name>/<new lv name>
lvremove <lv path>
lvremove /dev/vg1/lv1
mkfs command <lv path>
mkfs.ext4 /dev/vg1/lv1
Here, the mount point is /disks/raid.
mount /dev/vg1/lv1 /disks/raid
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg1-lv1 1.8T 77M 1.7T 1% /disks/raid
blkid | grep mapper
/dev/mapper/vg1-lv1_rimage_0: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
/dev/mapper/vg1-lv1_rimage_1: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
/dev/mapper/vg1-lv1: UUID="896732c2-ce1b-4edc-889f-c749afcde18f" TYPE="ext4"
ls -l /dev/disk/by-uuid/
lrwxrwxrwx 1 root root 10 May 4 20:15 2f0035ec-eb69-4cb1-9a02-7774a7de8c87 -> ../../sdc1
lrwxrwxrwx 1 root root 10 May 4 20:15 4d0dc8c5-4640-4817-becc-966186982c06 -> ../../sda2
lrwxrwxrwx 1 root root 10 May 4 21:13 896732c2-ce1b-4edc-889f-c749afcde18f -> ../../dm-4
lrwxrwxrwx 1 root root 10 May 4 20:15 e3641bda-7386-4330-b325-e57d11f420bb -> ../../sdb1
lrwxrwxrwx 1 root root 10 May 4 20:15 f42dc771-696a-42e7-9f2b-9dbb6057c6b8 -> ../../sdd1
lrwxrwxrwx 1 root root 10 May 4 20:15 FA3B-8EA8 -> ../../sda1
vim /etc/fstab
UUID="896732c2-ce1b-4edc-889f-c749afcde18f" /disks/raid ext4 noatime,nodiratime 0 0
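Before rebooting, it is worth confirming that the new fstab entry parses and mounts cleanly; a sketch:

```shell
# Unmount, then remount everything from fstab; errors here are much easier
# to fix now than from an emergency shell after a failed boot.
umount /disks/raid
mount -a
df -h /disks/raid   # should show /dev/mapper/vg1-lv1 mounted again
```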