To state the conclusion up front: I couldn't (yet). Here I'll leave a record of what I tried this time (video encoding) and how it turned out.
My environment is as follows (new this year): Windows 10 Pro Insider Preview Build 21292, Intel Core i7-10750H 2.60GHz, NVIDIA GeForce RTX 2060 with Max-Q. The Linux distro on WSL2 is Ubuntu 18.04 LTS.
Recently, video data has been piling up both for work and privately, and it was getting hard to store because of its size, so I decided to compress it all at once. I usually work on Ubuntu 18.04 on WSL2, so I wanted to solve this there. For simple video processing on Ubuntu the obvious choice is ffmpeg, so that's what I'll use.
First, install ffmpeg and friends. (In my case I got an error when trying to use NVENC without nasm and yasm, but one of the two may be enough.) You may need other libraries as well.
Install ffmpeg etc.
sudo apt-get install ffmpeg nasm yasm
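To check that the install worked, and whether your ffmpeg build even includes the NVENC encoder, something like the following should do (just a sanity check I would suggest; the grep may print nothing if NVENC support isn't compiled in).
Check the ffmpeg build (example)
ffmpeg -version
ffmpeg -hide_banner -encoders | grep nvenc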
I put together a command that encodes and compresses every .MOV video in the current directory to mp4 in one go.
Video converter
for i in *.MOV; do ffmpeg -i "$i" -crf 30 -vf scale=1280:-1 "${i%.*}_c.mp4"; done
The loop stores each ○○.MOV file name in i and compresses it with ffmpeg at constant quality (-crf) 30, while the scale filter (-vf scale=1280:-1) sets the width to 1280 pixels and lets ffmpeg choose the height so the aspect ratio is preserved; ${i%.*} strips the extension so the output becomes ○○_c.mp4. The _c suffix isn't strictly necessary, but for an mp4 → mp4 compression the output would otherwise have the same name as the input, so I add it on purpose. Also, because -vf fixes the width, portrait videos may not come out the way you expect, so you may need to adjust this (or simply drop the option); see the variant sketched below. And -crf 30 is a little rough, so change it to taste.
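For reference, here is a variant I did not actually use: assuming a reasonably recent ffmpeg, the scale filter accepts expressions, so min(1280,iw) should only shrink videos that are wider than 1280 pixels and leave narrower ones (including most portrait videos) untouched, with -2 keeping the computed height an even number.
Video converter (downscale-only variant, untested)
for i in *.MOV; do ffmpeg -i "$i" -crf 30 -vf "scale='min(1280,iw)':-2" "${i%.*}_c.mp4"; done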
Since there are various references for ffmpeg options, I will omit the details here.
In my environment, compression runs at roughly x1.00 to x2.00 speed (about x5.00 for mp4 → mp4), so with many videos it takes a huge amount of time. That made me wonder, "Could I use the GPU to make this faster?" Since it has recently become possible to access hardware from WSL2 as described below, I gave it a try.
NVIDIA's Sasaki-san explains this very clearly in his article, so you can follow it exactly as written. Sure enough, CUDA itself worked fine in my environment (Build 21292). (As an aside, being a poorly disciplined Windows user, I had been on the Insider Program beta channel (laughs). This time I'm on the dev channel at 21292 (but I'm timid, so I often don't have the courage to update).)
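For reference, what I mean by "CUDA worked" is roughly the following kind of check, following that article; the samples path depends on your CUDA Toolkit version and install method, so it may differ in your setup.
Checking CUDA (example)
/usr/local/cuda/bin/nvcc --version
cd /usr/local/cuda/samples/4_Finance/BlackScholes && sudo make && ./BlackScholes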
The command to use NVENC is as follows; "-vcodec h264_nvenc" is added.
Video conversion (GPU)
for i in *.MOV; do ffmpeg -i "$i" -crf 30 -vf scale=1280:-1 -vcodec h264_nvenc "${i%.*}_c.mp4"; done
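As a note, my understanding is that -crf is an option of the software encoders (libx264/libx265) and is ignored by h264_nvenc; NVENC's constant-quality mode is normally set with -cq together with -rc vbr (availability and behavior depend on your ffmpeg version). So once NVENC becomes usable on WSL2, something along these lines might be closer to the intent (untested here, since it doesn't run yet).
Video conversion (GPU, NVENC quality options, untested)
for i in *.MOV; do ffmpeg -i "$i" -vf scale=1280:-1 -vcodec h264_nvenc -rc vbr -cq 30 "${i%.*}_c.mp4"; done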
However, when I run it, I get the following error.
Error output
Cannot load libnvidia-encode.so.1
[h264_nvenc @ 0x55acf1093f40] The minimum required Nvidia driver for nvenc is 378.13 or newer
Error initializing output stream 0:0 -- Error while opening encoder for output stream #0:0 - maybe
incorrect parameters such as bit_rate, rate, width or height
The error says that libnvidia-encode.so.1, which is required for encoding, cannot be loaded. I searched for it, including the directory that holds the CUDA-related libraries, but couldn't find it.
Try to find the file
~$ find /usr/local/cuda/ -name libnvidia-encode.so.1
~$
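For reference, my understanding is that on WSL2 the NVIDIA user-mode driver libraries are mapped in from the Windows side under /usr/lib/wsl/lib (the exact location may depend on the build), so you can also look there, or search the whole filesystem.
Check the WSL driver libraries (example)
~$ ls /usr/lib/wsl/lib/
~$ find / -name 'libnvidia-encode*' 2>/dev/null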
After a lot of searching, I found a post on the NVIDIA forum about almost the same problem (September 2020; the poster was trying hardware access from Docker on WSL2). It said the following, from which I understood that encoding/decoding support will come at some point in the future.
Yes, support for encoding/decoding in WSL2 is coming in a future driver. It was confirmed in the WSLConf’s CUDA session.
I suspect CUDA on WSL2 is prioritizing deep learning first and this may take some time, so for now I'll ask the CPU to do its best.
CUDA on WSL2 still seems to have many restrictions, such as the NVIDIA Management Library (NVML) API not being implemented yet. It is said to be usable for deep learning with things like tensorflow-gpu, though, so I'll wait patiently for encoding and the rest to be supported. In the meantime I'll keep studying deep learning, the original purpose (?) of CUDA (I'm playing with OpenAI Gym at the moment, but I can't think of a good output and keep getting sidetracked...).
Also, as an aside, I'm not sure what is commonly (?) used for this kind of development environment. One approach is to use NGC containers via Docker from the Linux distro on WSL2, but in the end I keep packing everything into WSL2 itself. (I suppose others use a Mac or run Linux directly.) That said, as someone living in a Windows-mandatory environment, I really like WSL2 (even though it's still unstable and its benchmark results aren't the best). I used to use Cygwin to build a Linux-like environment on Windows, but I personally never liked apt-cyg, the Cygwin package manager: ffmpeg, for example, had to be built and installed from source, and whenever I tried, some required package would be missing, and installing that would require yet another one. It felt pretty futile. I'd like to keep up with the times while gathering information in this area as well.