Having just bought a new PC, I took some quick I/O performance measurements of a few file systems. The KVM host environment is as follows:
- KVM Host: Ubuntu 20.04 (Linux 5.4.0-60-generic)
- KVM Guest: Windows 10, OS Build 19042.685
- CPU: AMD Ryzen 5 3600 (6 cores)
- Memory: 128 GB
- SSD: WD_BLACK SN850 2TB
I mounted a disk formatted with EXT4, XFS, or ZFS at /test, copied the guest's qcow2 image onto it, and ran a quick comparison with CrystalDiskMark. All format parameters were left at their defaults, and CrystalDiskMark ran a single pass with a 1 GiB test size.
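For reference, a minimal sketch of the setup steps, assuming the disk is /dev/nvme1n1 and the image lives under /var/lib/libvirt/images (both placeholders, not necessarily my actual paths):

```bash
# EXT4 (default parameters), mounted at /test -- device path is a placeholder
mkfs.ext4 /dev/nvme1n1
mount /dev/nvme1n1 /test

# XFS (default parameters); -f overwrites the previous file system
mkfs.xfs -f /dev/nvme1n1
mount /dev/nvme1n1 /test

# ZFS (default parameters); pool name "tank" is a placeholder
zpool create -m /test tank /dev/nvme1n1

# Copy the guest image onto the file system under test, point the VM's disk
# at the copy, then run CrystalDiskMark inside the Windows guest
cp /var/lib/libvirt/images/win10.qcow2 /test/
```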
```
------------------------------------------------------------------------------
CrystalDiskMark 8.0.0 x64 (C) 2007-2020 hiyohiyo
Crystal Dew World: https://crystalmark.info/
------------------------------------------------------------------------------
* MB/s = 1,000,000 bytes/s [SATA/600 = 600,000,000 bytes/s]
* KB = 1000 bytes, KiB = 1024 bytes
EXT4:
[Read]
SEQ 1MiB (Q= 8, T= 1): 897.182 MB/s [ 855.6 IOPS] < 9310.23 us>
SEQ 1MiB (Q= 1, T= 1): 882.161 MB/s [ 841.3 IOPS] < 1183.96 us>
RND 4KiB (Q= 32, T= 1): 36.359 MB/s [ 8876.7 IOPS] < 3472.53 us>
RND 4KiB (Q= 1, T= 1): 33.073 MB/s [ 8074.5 IOPS] < 120.00 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 835.608 MB/s [ 796.9 IOPS] < 9726.00 us>
SEQ 1MiB (Q= 1, T= 1): 872.855 MB/s [ 832.4 IOPS] < 1196.97 us>
RND 4KiB (Q= 32, T= 1): 36.836 MB/s [ 8993.2 IOPS] < 3518.83 us>
RND 4KiB (Q= 1, T= 1): 33.367 MB/s [ 8146.2 IOPS] < 119.12 us>
XFS:
[Read]
SEQ 1MiB (Q= 8, T= 1): 995.806 MB/s [ 949.7 IOPS] < 8381.42 us>
SEQ 1MiB (Q= 1, T= 1): 889.653 MB/s [ 848.4 IOPS] < 1174.34 us>
RND 4KiB (Q= 32, T= 1): 34.602 MB/s [ 8447.8 IOPS] < 3659.19 us>
RND 4KiB (Q= 1, T= 1): 33.413 MB/s [ 8157.5 IOPS] < 118.95 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 803.229 MB/s [ 766.0 IOPS] < 10384.00 us>
SEQ 1MiB (Q= 1, T= 1): 830.395 MB/s [ 791.9 IOPS] < 1258.21 us>
RND 4KiB (Q= 32, T= 1): 34.978 MB/s [ 8539.6 IOPS] < 3620.99 us>
RND 4KiB (Q= 1, T= 1): 31.885 MB/s [ 7784.4 IOPS] < 124.74 us>
ZFS:
[Read]
SEQ 1MiB (Q= 8, T= 1): 860.625 MB/s [ 820.8 IOPS] < 9697.54 us>
SEQ 1MiB (Q= 1, T= 1): 611.799 MB/s [ 583.5 IOPS] < 1707.94 us>
RND 4KiB (Q= 32, T= 1): 33.500 MB/s [ 8178.7 IOPS] < 3812.68 us>
RND 4KiB (Q= 1, T= 1): 33.310 MB/s [ 8132.3 IOPS] < 119.36 us>
[Write]
SEQ 1MiB (Q= 8, T= 1): 587.469 MB/s [ 560.3 IOPS] < 14216.82 us>
SEQ 1MiB (Q= 1, T= 1): 533.718 MB/s [ 509.0 IOPS] < 1959.07 us>
RND 4KiB (Q= 32, T= 1): 32.717 MB/s [ 7987.5 IOPS] < 3937.58 us>
RND 4KiB (Q= 1, T= 1): 28.359 MB/s [ 6923.6 IOPS] < 140.57 us>
```
In sequential I/O, ZFS falls another step behind. EXT4's random I/O is surprisingly good.
Incidentally, here are dd results with Ubuntu 20.04 as the KVM guest instead. The guest's root (/) file system is ext4.
```
4k write: dd if=/dev/zero of=test1 bs=4k count=262144 oflag=direct
1M write: dd if=/dev/zero of=test3 bs=1024k count=1024 oflag=direct
```
| dd (KVM guest) | EXT4 | XFS | ZFS |
|---|---|---|---|
| 4k write | 97.7 MB/s | 106 MB/s | 74.9 MB/s |
| 1M write | 1.7 GB/s | 2.3 GB/s | 1.4 GB/s |
XFS is faster at 1 MiB, but 4k performance probably matters more in practice.
I also compared performance on the KVM host itself, using fio and dd. The fio parameters are based on the following article:
"I tried benchmarking EXT4 vs XFS vs Btrfs vs ZFS with fio"
bs=4k:
```
$ fio -rw=write -bs=4k -size=100m -numjobs=40 -runtime=60 -direct=1 -invalidate=1 -ioengine=libaio -iodepth=32 -iodepth_batch=32 -group_reporting -name=seqwrite
$ fio -rw=randwrite -bs=4k -size=100m -numjobs=40 -runtime=60 -direct=1 -invalidate=1 -ioengine=libaio -iodepth=32 -iodepth_batch=32 -group_reporting -name=randwrite
```
bs=16k:
```
$ fio -rw=write -bs=16k -size=100m -numjobs=40 -runtime=60 -direct=1 -invalidate=1 -ioengine=libaio -iodepth=32 -iodepth_batch=32 -group_reporting -name=seqwrite
$ fio -rw=randwrite -bs=16k -size=100m -numjobs=40 -runtime=60 -direct=1 -invalidate=1 -ioengine=libaio -iodepth=32 -iodepth_batch=32 -group_reporting -name=randwrite
```
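The same parameters can also be written as a fio job file, which is a bit easier to read. Here is a sketch equivalent to the bs=4k command lines above (the file name write-4k.fio is my own choice; I actually ran the one-liners as shown):

```bash
# Equivalent fio job file for the bs=4k runs above (sketch)
cat > write-4k.fio <<'EOF'
[global]
bs=4k
size=100m
numjobs=40
runtime=60
direct=1
invalidate=1
ioengine=libaio
iodepth=32
iodepth_batch=32
group_reporting

[seqwrite]
rw=write

[randwrite]
# stonewall makes this job start only after seqwrite finishes
stonewall
rw=randwrite
EOF

fio write-4k.fio
```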
Here are the results.
| fio (KVM host) | EXT4 | XFS | ZFS |
|---|---|---|---|
| 4k seq-write | 2987 MB/s | 3066 MB/s | 2393 MB/s |
| 4k rand-write | 2883 MB/s | 2675 MB/s | 176 MB/s |
| 16k seq-write | 2873 MB/s | 2907 MB/s | 3785 MB/s |
| 16k rand-write | 2826 MB/s | 2834 MB/s | 612 MB/s |
I also measured with dd on the host.
```
4k write: dd if=/dev/zero of=test1 bs=4k count=262144 oflag=direct
1M write: dd if=/dev/zero of=test3 bs=1024k count=1024 oflag=direct
```
| dd (KVM host) | EXT4 | XFS | ZFS |
|---|---|---|---|
| 4k write | 304 MB/s | 264 MB/s | 720 MB/s |
| 16k write | 949 MB/s | 832 MB/s | 1.6 GB/s |
| 1M write | 4.0 GB/s | 4.1 GB/s | 2.5 GB/s |
Since fio issues I/O from 40 jobs at once, it may be closer to real-world operation. XFS comes out a little faster. I ran fio's ZFS random-write measurement twice and got similar numbers both times. That is strange, because ZFS is very fast at 4k/16k with dd.
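One guess about the discrepancy (unverified): the OpenZFS release that ships with Ubuntu 20.04 (0.8.x) accepts O_DIRECT but still routes I/O through the ARC, so dd's oflag=direct numbers on ZFS may be effectively cached writes. The relevant version and dataset settings can be checked like this (the pool/dataset name tank/test is a placeholder):

```bash
# Which OpenZFS version is running (Ubuntu 20.04 ships 0.8.x)
zfs version

# Dataset properties that strongly affect write benchmarks
# ("tank/test" is a placeholder for the actual pool/dataset)
zfs get sync,compression,recordsize,primarycache tank/test
```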
Despite all this talk of I/O performance, the reason I measured only writes on the KVM host is that reads appear to be served from the cache, producing meaningless numbers. Then again, the cache would normally be enabled anyway, so treat these figures as reference values.
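If I did want host-side read numbers, one common approach is to drop the page cache before each run. A sketch:

```bash
sync                                            # flush dirty pages first
sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'  # drop page cache, dentries, inodes
# Note: ZFS serves reads from its own ARC; drop_caches only asks the ARC
# shrinker to release memory, so ZFS read numbers may still be partly cached.
```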
Maybe I'll go with XFS...