I'm still learning the basics of AWS. This time I reconfirmed how EBS behaves, something I had never really paid attention to before.
- Attach a volume to an instance, then reattach it to another instance and confirm that the data remains.
- For storage I had only ever used S3, so I reconfirmed that a local volume (EBS) can also be used as a storage area.
I created an EBS volume named test-ebs, 4 GiB in size. The volume type (HDD or SSD) doesn't matter for this test.
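For reference, the same volume can probably be created from the AWS CLI instead of the console. This is only a sketch; the availability zone, size, and tag below are assumptions, not what I actually used, and the AZ has to match the one the instance runs in.
# create a 4 GiB gp2 volume tagged test-ebs (AZ must match the instance's AZ)
aws ec2 create-volume \
    --availability-zone ap-northeast-1a \
    --size 4 \
    --volume-type gp2 \
    --tag-specifications 'ResourceType=volume,Tags=[{Key=Name,Value=test-ebs}]'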
I attached it to instance A.
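Attaching can also be done from the CLI; a minimal sketch, where the volume ID and instance ID are placeholders:
# attach the volume to instance A and expose it as /dev/xvdf
aws ec2 attach-volume \
    --volume-id vol-xxxxxxxx \
    --instance-id i-aaaaaaaa \
    --device /dev/xvdf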
Check that the volume is attached by SSHing into instance A.
Last login: Fri Nov 13 03:16:54 UTC 2020 on pts/0
[root@ip-xxx-xxx-xxx-xxx ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdf 202:80 0 4G 0 disk
I was able to confirm that the 4G volume was attached under the name xvdf. The device file resides at /dev/xvdf.
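The attachment can also be confirmed from the AWS side; a sketch with a placeholder volume ID:
# shows the attachment state, instance ID, and device name for the volume
aws ec2 describe-volumes \
    --volume-ids vol-xxxxxxxx \
    --query 'Volumes[0].Attachments'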
Format it with a Linux file system.
[root@ip-xxx-xxx-xxx-xxx ~]# mke2fs /dev/xvdf
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
262144 inodes, 1048576 blocks
52428 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done
Now it can be mounted as a Linux file system.
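As an extra sanity check (not part of my original steps), the device can be inspected before mounting to confirm that a file system is really there:
# both should now report an ext2 file system instead of raw data
file -s /dev/xvdf
blkid /dev/xvdf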
Mount it on /mnt.
[root@ip-xxx-xxx-xxx-xxx ~]# mount /dev/xvdf /mnt
[root@ip-xxx-xxx-xxx-xxx ~]# ls /mnt/
lost+found
I was able to confirm that it was mounted.
Create a file in this volume.
[root@ip-xxx-xxx-xxx-xxx ~]# cd /mnt
[root@ip-xxx-xxx-xxx-xxx mnt]# echo hoge > hoge.txt
[root@ip-xxx-xxx-xxx-xxx mnt]# cat hoge.txt
hoge
I created the file and wrote hoge to it.
First, unmount the volume on instance A.
[root@ip-172-31-27-44 ~]# ls /mnt/
hoge.txt lost+found
[root@ip-172-31-27-44 ~]# umount /mnt
[root@ip-172-31-27-44 ~]# ls /mnt/
[root@ip-172-31-27-44 ~]#
Since it has been unmounted, /mnt is now empty.
Detach the volume in the AWS console.
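Detaching also works from the CLI; a sketch with a placeholder volume ID (unmount first, as above):
aws ec2 detach-volume --volume-id vol-xxxxxxxx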
Attach the EBS volume to instance B in the same way as for instance A.
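From the CLI this would roughly be: wait for the detach to complete, then attach to instance B (both IDs are placeholders):
# wait until the volume is back in the "available" state after the detach
aws ec2 wait volume-available --volume-ids vol-xxxxxxxx
# attach the same volume to instance B
aws ec2 attach-volume \
    --volume-id vol-xxxxxxxx \
    --instance-id i-bbbbbbbb \
    --device /dev/xvdf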
SSH into instance B and check that the volume is attached.
yokohama@ip-yyy-yyy-yyy-yyy [12:41:29 PM] [~] [master *]
-> % lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 64G 0 disk
└─xvda1 202:1 0 64G 0 part /
xvdf 202:80 0 4G 0 disk
There it is, the 4G volume, again with the device name xvdf. Mount it and check the contents.
[root@ip-yyy-yyy-yyy-yyy ~]# mount /dev/xvdf /mnt
[root@ip-yyy-yyy-yyy-yyy ~]# ls /mnt
hoge.txt lost+found
[root@ip-yyy-yyy-yyy-yyy ~]# cat /mnt/hoge.txt
hoge
I confirmed that hoge.txt exists on the mounted EBS volume and that its contents were preserved properly.
There are probably various uses for this, but I was able to confirm that data on EBS persists in this way.
By the way, I thought it would be convenient if I could attach this volume to multiple EC2 instances, but as far as I could tell from the AWS console menus, that didn't seem to be possible.
It strikes me again: until about 20 years ago I would add a physical hard disk to the motherboard, format the file system, and mount it myself, but now it's an elastic era~