This is a memo about trying PIFuHD, a deep learning model that generates a three-dimensional mesh from one (or several) photos. I'm writing it up because I got stuck getting it to run in a Windows environment.
Quoting the video from the GitHub repository, it can do things like this:
This setup uses Anaconda. (PIFuHD is written in Python, so a Python environment is required.)
Normally, installing Anaconda also gives you a GUI where you can manage environments, and from there you can launch a terminal with the Anaconda environment already configured, so running a Python script from the command line is straightforward.
However, when trying out various deep learning models, there are always moments when you need to run shell scripts, and those of course cannot be executed from the Windows command prompt.
So this time, I'll set things up so that Anaconda (the `conda` command) can be used from the Bash environment of Git for Windows.
(If you only want to try the PIFuHD demo, you can just run the commands in the `.sh` scripts by hand, so strictly speaking this step isn't required.)
## Install Anaconda and make the conda command usable on Git for Windows
First, install Anaconda. Click the Get Started button on the Anaconda page, then click **Install Anaconda Individual Edition** to jump to the download page (https://www.anaconda.com/products/individual).
There is a **Download** button on that page; clicking it scrolls to the download links at the bottom, where you can select **64-Bit Graphical Installer** to download the installer and install Anaconda.
After the installation is complete, add Anaconda to the PATH so that Bash recognizes the `conda` command.
In my environment, I added the following to `.bashrc`:
export PATH=$PATH:/c/Users/[USER_NAME]/anaconda3
export PATH=$PATH:/c/Users/[USER_NAME]/anaconda3/Scripts
Rewrite the paths as appropriate for your installation location. If you kept the default location, you can use them as-is.
## conda activate

After setting up the PATH, I tried to activate Anaconda's environment by running `$ conda activate`, and got the following error:
$ conda activate
CommandNotFoundError: Your shell has not been properly configured to use 'conda activate'.
If using 'conda activate' from a batch script, change your
invocation to 'CALL conda.bat activate'.
To initialize your shell, run
$ conda init <SHELL_NAME>
Currently supported shells are:
- bash
- cmd.exe
- fish
- tcsh
- xonsh
- zsh
- powershell
See 'conda init --help' for more information and options.
IMPORTANT: You may need to close and restart your shell after running 'conda init'.
I solved it by referring to the following article.
Apparently, the problem is that the lines `conda init` would normally write automatically were missing.
So by manually adding the following to `.bashrc`, `conda` is recognized and works fine in the Bash of Git for Windows.
# >>> conda init >>>
__conda_setup="$(CONDA_REPORT_ERRORS=false "$HOME/anaconda3/bin/conda" shell.bash hook 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "$HOME/anaconda3/etc/profile.d/conda.sh" ]; then
        . "$HOME/anaconda3/etc/profile.d/conda.sh"
        CONDA_CHANGEPS1=false conda activate base
    else
        export PATH="$PATH:$HOME/anaconda3/bin"
    fi
fi
unset __conda_setup
# <<< conda init <<<
After adding this, run `$ source ~/.bashrc` or restart the terminal, and you will be able to run `$ conda activate [ENV_NAME]`.
If the environment is applied successfully, you should see the environment name in the prompt.
After setting up the Anaconda environment, the next step is the setup for PIFuHD. The README on GitHub lists the following requirements, so we'll set things up so they can be used.
Requirements
sudo apt-get install freeglut3-dev (for ubuntu users)

By the way, manual setup steps are described in the README of the repository for the non-HD version of PIFu, so I set things up with reference to that. (I followed the **OR manually setup environment** part; I did not follow the section above it, which uses miniconda.)
Windows demo installation instruction

- Add `conda` to PATH
- Launch `Git\bin\bash.exe`
- Run `eval "$(conda shell.bash hook)"` then `conda activate my_env` (because of this)
- Automatic: `env create -f environment.yml` (look this)
- OR manually set up the environment:
  - `conda create --name pifu python` (where `pifu` is the name of your environment)
  - `conda activate`
  - `conda install pytorch torchvision cudatoolkit=10.1 -c pytorch`
  - `conda install pillow`
  - `conda install scikit-image`
  - `conda install tqdm`
  - `conda install -c menpo opencv`
- Place `wget.exe` into `Git\mingw64\bin`
- `sh ./scripts/download_trained_model.sh`
- `sh ./scripts/test.sh`
In addition to the above, I also installed the following items (they are listed in the PIFuHD README).
Install each as follows:
$ conda install -c conda-forge trimesh
$ conda install -c anaconda pyopengl
When I tried running the sample without installing GLUT, the following error occurred.
OpenGL.error.NullFunctionError: Attempt to call an undefined function glutInit, check for bool(glutInit) before calling
This error occurs because GLUT is missing. However, I didn't know how to install it, and this article helped me:
- I tried running PIFuHD on Windows for the time being (Qiita)
That article says the issue was resolved by referring to yet another article. Here it is: Using OpenGL with Python on Windows (Tadao Yamaoka's Diary)
Quoting from that article:

Install FreeGLUT

GLUT needs to be installed separately from PyOpenGL. GLUT is a platform-independent library for handling OpenGL windowing.

The original GLUT does not offer 64-bit Windows binaries. Instead, install FreeGLUT, which does provide a 64-bit binary.

From the FreeGLUT page, follow the link to Martin Payne's Windows binaries (MSVC and MinGW), and under Download freeglut 3.0.0 for MSVC, download `freeglut-MSVC-3.0.0-2.mp.zip`.

Copy `freeglut.dll` from `freeglut\bin\x64\` in the zip file to `C:\Windows\System32`.

You can now use GLUT.
For the installation of ffmpeg, I referred to this article.
-[Windows] Procedure for installing FFmpeg
Basically, you just download the `exe` file, save it somewhere, and add that location to the PATH so it is recognized. The article above shows an example for the command prompt, but since I'm using Git Bash this time, I added the path to `.bashrc` as follows:
export PATH=$PATH:/path/to/bin/ffmpeg
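As a quick sanity check (standard library only; this isn't part of the PIFuHD setup itself), you can confirm from Python whether the binary is actually visible on the PATH:

```python
import shutil

# shutil.which searches the directories on PATH, much like the shell does.
# It returns the full path to the executable, or None if it isn't found.
print(shutil.which('ffmpeg'))
```

If this prints `None`, the `export PATH=...` line above hasn't taken effect in the environment Python was launched from (for example, because the terminal wasn't restarted).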
As mentioned in the article I referred to, I still got an error even after completing the setup above. The error was as follows:
Traceback (most recent call last):
File "C:\Users\[USER_NAME]\anaconda3\envs\pifu\lib\runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "C:\Users\[USER_NAME]\anaconda3\envs\pifu\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "D:\MyDesktop\Python-Projects\pifuhd\apps\render_turntable.py", line 69, in <module>
renderer = ColorRender(width=args.width, height=args.height)
File "D:\MyDesktop\Python-Projects\pifuhd\lib\render\gl\color_render.py", line 34, in __init__
CamRender.__init__(self, width, height, name, program_files=program_files)
File "D:\MyDesktop\Python-Projects\pifuhd\lib\render\gl\cam_render.py", line 32, in __init__
Render.__init__(self, width, height, name, program_files, color_size, ms_rate)
File "D:\MyDesktop\Python-Projects\pifuhd\lib\render\gl\render.py", line 45, in __init__
_glut_window = glutCreateWindow("My Render.")
File "C:\Users\[USER_NAME]\anaconda3\envs\pifu\lib\site-packages\OpenGL\GLUT\special.py", line 73, in glutCreateWindow
return __glutCreateWindowWithExit(title, _exitfunc)
ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type
According to the article above, this is solved by rewriting the following part of `lib/render/gl/render.py`:
class Render:
    def __init__(self, width=1600, height=1200, name='GL Renderer',
                 program_files=['simple.fs', 'simple.vs'], color_size=1, ms_rate=1):
        self.width = width
        self.height = height
        self.name = name
        self.display_mode = GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH
        self.use_inverse_depth = False

        global _glut_window
        if _glut_window is None:
            glutInit()
            glutInitDisplayMode(self.display_mode)
            glutInitWindowSize(self.width, self.height)
            glutInitWindowPosition(0, 0)
            # Pass the window title as a bytes literal (add the b prefix)
            # _glut_window = glutCreateWindow("My Render.")
            _glut_window = glutCreateWindow(b"My Render.")
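The fix works because the GLUT binding ultimately passes the title to a C function via ctypes, which expects a byte string rather than a Python 3 `str`. A minimal stdlib-only illustration of the distinction (not specific to PIFuHD):

```python
# In Python 3, str is Unicode text and bytes is raw bytes;
# ctypes-based C bindings generally want the latter for char* arguments.
title = "My Render."

# Encoding the str yields the same object as the b"..." literal used in the fix
assert title.encode('ascii') == b"My Render."

# Passing the str itself is a different type, which is what triggered
# ctypes.ArgumentError: argument 1: <class 'TypeError'>: wrong type
print(type(title).__name__, type(b"My Render.").__name__)  # str bytes
```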
With that, the setup is really done. All that's left is to run the provided demo script:
bash ./scripts/demo.sh
Running this displays the video generated from the sample photo, as shown below.
By the way, here is the photo this model was generated from ↓
First, let's take a look at the Shell script used to generate the mesh.
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
python -m apps.simple_test
# python apps/clean_mesh.py -f ./results/pifuhd_final/recon
python -m apps.render_turntable -f ./results/pifuhd_final/recon -ww 512 -hh 512
That's all there is to it. One line is commented out; judging from the name (`clean_mesh.py`), it presumably post-processes and cleans up the generated mesh.
Looking at the first step (`apps/simple_test.py`), it sets up a few parameters.
# Copyright (c) Facebook, Inc. and its affiliates. All rights reserved.
from .recon import reconWrapper
import argparse
###############################################################################################
## Setting
###############################################################################################
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input_path', type=str, default='./sample_images')
parser.add_argument('-o', '--out_path', type=str, default='./results')
parser.add_argument('-c', '--ckpt_path', type=str, default='./checkpoints/pifuhd.pt')
parser.add_argument('-r', '--resolution', type=int, default=512)
parser.add_argument('--use_rect', action='store_true', help='use rectangle for cropping')
args = parser.parse_args()
###############################################################################################
## Upper PIFu
###############################################################################################
resolution = str(args.resolution)
start_id = -1
end_id = -1
cmd = ['--dataroot', args.input_path, '--results_path', args.out_path,\
'--loadSize', '1024', '--resolution', resolution, '--load_netMR_checkpoint_path', \
args.ckpt_path,\
'--start_id', '%d' % start_id, '--end_id', '%d' % end_id]
reconWrapper(cmd, args.use_rect)
The settings include the location of the input data, the location of the output data, the resolution, and so on.
Looking at the input, the images in the `./sample_images` folder are used.
So I placed the image I wanted to turn into a mesh there and ran the script.
Click here for the results.
It generated just fine. (The original image was one I picked up online, so I won't post it here.)
This confirmed that generation works even for images that weren't originally included.
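Incidentally, rather than copying images into `./sample_images`, the parser shown above also lets you point the script at a different folder with `-i`. A minimal sketch of how those flags behave (the folder name here is hypothetical):

```python
import argparse

# Same flags as the Setting section of apps/simple_test.py
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input_path', type=str, default='./sample_images')
parser.add_argument('-r', '--resolution', type=int, default=512)

# Simulating: python -m apps.simple_test  (no arguments -> defaults)
defaults = parser.parse_args([])
print(defaults.input_path)   # ./sample_images

# Simulating: python -m apps.simple_test -i ./my_images -r 256
args = parser.parse_args(['-i', './my_images', '-r', '256'])
print(args.input_path)       # ./my_images
print(args.resolution)       # 256
```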
I've only run into this a few times, but while generating several meshes, generation was sometimes interrupted by a CUDA out-of-memory error.
There may be a proper way to release the memory, but running `$ conda deactivate` once solved it for me.
(Strangely, after doing that once, the memory error never appeared again.)
If you come across it, give it a try.
Referring to the following article, I'll post the Anaconda environment that successfully produced PIFuHD output.
- How to save and rebuild the conda environment
The environment list was written out with the following command:
$ conda list --explicit > env_name.txt
You can copy and paste the above list and build the environment with the following command.
$ conda create -n [env_name] --file env_name.txt
## Finally
It's amazing that a mesh like this can be generated from a single photo. What's more, accuracy is apparently even higher when generating from multiple photos (and it seems generation from video is also possible).
Before long, the things we used to share as photos may be shared in the form of 3D models like these. In the AR era, data will surely be shared three-dimensionally, and I'm looking forward to that.