(For this article, I simply tried running the GitHub code linked below.)
# Overview
One photo → a highly accurate 3D image, created easily.
(↓ This is also an example of PNG editing. It means nothing in particular, but for this pizza I sharpened the depth PNG. In one spot the image gets accidentally dragged, but since it is a pizza, the artifact hides well...)
(↓ An example of the PNG editing explained in the second half of this article. For what it's worth, I thought this trick turned out well, so I explain it separately.)
The paper and GitHub repository:
Paper: "3D Photography using Context-aware Layered Depth Inpainting" (Meng-Li Shih et al.): https://arxiv.org/pdf/2004.04727.pdf
GitHub: https://github.com/vt-vl-lab/3d-photo-inpainting
# Paper abstract
We propose a method for converting a single RGB-D input image into a 3D photo — a multi-layer representation for novel view synthesis that contains hallucinated color and depth structures in regions occluded in the original view. We use a Layered Depth Image with explicit pixel connectivity as underlying representation, and present a learning-based inpainting model that synthesizes new local color-and-depth content into the occluded region in a spatial context-aware manner. The resulting 3D photos can be efficiently rendered with motion parallax using standard graphics engines. We validate the effectiveness of our method on a wide range of challenging everyday scenes and show less artifacts compared with the state of the arts.
Added 2020/05/23: **about three days ago, support for editing the depth as a PNG landed.** I tried it.
I tried it (it works). (The GitHub code ran without trouble in my Windows environment. A GPU is probably needed, though not a high-end one. Processing took 1-2 minutes.)
The procedure: put the image you want to process in the `image` folder and run the following command.

```
python main.py --config argument.yml
```
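For reference, the two argument.yml settings this article touches later look roughly like this. I only know these two fields from my own runs; the default values shown are my assumptions, not confirmed from the repo:

```yaml
# argument.yml (excerpt) -- only the two fields edited later in this article
depth_format: '.npy'    # switch to '.png' to get an editable depth image
require_midas: True     # set to False to reuse an edited depth file
```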
The original photo used here was obtained from the free, high-quality stock-image site Pixabay (https://pixabay.com/ja/), the same source as the King Kong image in the overview.
Since the view is moved (swayed) around a single photo, there are naturally regions with no information, but the model seems to hallucinate them convincingly. Great!!
**Sorry, it may take about 30 seconds for the images to load...**
Every input is a single still image!
I tried it (additional work): I used paintings published by the Metropolitan Museum of Art.
I tried it (failures: CG works, etc.)
Again, the original photos were taken from the free stock-image site Pixabay: https://pixabay.com/ja/
↓ Failure? (Or rather, computer-generated images just don't get along with this method...)
↓ Failure. Escher. Balloons also seem to be difficult; maybe you could call it the "balloon problem". Still, the picture is beautiful...
↓ The texture is somehow off... Failure.
↓ An ordinary failure.
↓ An ordinary failure. Improved examples are shown separately in article [3] below (depth only, done arbitrarily, feat. intel-isl's MiDaS).
(2020/05/23) Support for editing the depth as a PNG landed about three days ago, so I edited the depth.
Step 1: In argument.yml, change depth_format to png and run the conversion:

```yaml
depth_format: '.png'
```
The depth is then saved as a PNG. (Not a PNG I particularly want to display ↓, and it will probably render rather large here...)
![graffiti-745071 -copie.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/635153/41ecdffa-b58a-ffbc-da3c-3dfecd66025f.png)
Step 2:
Edit the PNG.
(At the bottom of the car, where the estimated depth came out too shallow, I painted it deeper (= black).)
See the results below for whether such a crude edit actually works.
![graffiti-745071.png](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/635153/e0f227c8-37a6-b6e0-68df-1f6d206c95ec.png)
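If you prefer scripting the edit over using a paint tool, here is a minimal numpy sketch of the same idea. "Darker = deeper" is my reading of the PNGs this pipeline writes, the region coordinates are made up, and a synthetic gradient stands in for the real file (which you would load and save with e.g. Pillow):

```python
import numpy as np

def deepen_region(depth, top, bottom, left, right, factor=0.5):
    """Darken (push back) a rectangle of an 8-bit depth map.

    In the depth PNGs, darker pixels appear to mean "farther away",
    so scaling by factor < 1 increases the depth in that region.
    """
    out = depth.astype(np.float32).copy()
    out[top:bottom, left:right] *= factor
    return out.clip(0, 255).astype(np.uint8)

# Synthetic gradient standing in for a real depth PNG; in practice:
# depth = np.array(Image.open("depth/car.png"))
demo = np.tile(np.arange(256, dtype=np.uint8), (256, 1))
edited = deepen_region(demo, top=200, bottom=256, left=0, right=256)
```

The edited array would then be written back out (e.g. `Image.fromarray(edited).save(...)`) before rerunning the pipeline.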
Step 3:
Set require_midas to False so that the edited PNG is used instead of a fresh depth estimate:

```yaml
require_midas: False
```
Then rerun the conversion.
![Télécharger modifier.gif](https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/635153/64bb82f9-d9c7-3e90-0b39-07766e2b011d.gif)
As intended, the depth at the bottom of the car is now deeper. However, my rough PNG editing did not produce a great-looking result. People who are comfortable with paint tools should be fine; I am not, so I dabbed the edit on crudely with a brush...
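If, like me, you cannot paint cleanly, one workaround is to soften the crude brush strokes in code afterwards. This is my own idea, not part of the repo: a few passes of a 3x3 box blur approximate a Gaussian and smooth harsh edit boundaries in the depth map.

```python
import numpy as np

def box_blur(depth, passes=3):
    """Soften crude hand edits in an 8-bit depth map with repeated
    3x3 box blurs (edge pixels are replicated at the border)."""
    d = depth.astype(np.float32)
    for _ in range(passes):
        p = np.pad(d, 1, mode='edge')
        d = (p[:-2, :-2] + p[:-2, 1:-1] + p[:-2, 2:]
             + p[1:-1, :-2] + p[1:-1, 1:-1] + p[1:-1, 2:]
             + p[2:, :-2] + p[2:, 1:-1] + p[2:, 2:]) / 9.0
    return d.clip(0, 255).astype(np.uint8)

# A hard step edge (like a sloppy brush boundary) gets smoothed out
step = np.zeros((5, 5), dtype=np.uint8)
step[:, 3:] = 90
smoothed = box_blur(step, passes=1)
```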
The work is rough, so it may not convey much, but the point is this: when you are not satisfied with the automatically estimated depth, you can edit it on screen as a PNG.
As another example of PNG editing, the pizza at the beginning is the result of **sharpening** (edge enhancement) of the depth PNG.
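The sharpening itself can be done in any image editor; as a sketch of what "sharpening a depth PNG" means numerically, here is a crude unsharp mask in plain numpy. This is my own toy implementation under assumed parameters, not the exact filter used for the pizza:

```python
import numpy as np

def sharpen_depth(depth, amount=1.0):
    """Crude unsharp mask on an 8-bit depth map: subtract a
    4-neighbour blur from the image and add the difference back,
    which exaggerates depth discontinuities."""
    d = depth.astype(np.float32)
    blur = d.copy()
    blur[1:-1, 1:-1] = (d[1:-1, 1:-1] + d[:-2, 1:-1] + d[2:, 1:-1]
                        + d[1:-1, :-2] + d[1:-1, 2:]) / 5.0
    out = d + amount * (d - blur)
    return out.clip(0, 255).astype(np.uint8)

# A step edge overshoots on both sides after sharpening,
# which is what makes object boundaries "pop" in the 3D result
demo = np.zeros((8, 8), dtype=np.uint8)
demo[:, :4] = 100
demo[:, 4:] = 200
edited = sharpen_depth(demo)
```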
#PNG editing example, result added (2020/06/07)
It failed spectacularly.
I tried **applying the depth PNG from a different image** to see whether it produces any interesting effect.
↓ It failed (badly).
<img src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/635153/41ef2581-0964-3bfc-bad8-2a1ba0ba6ed5.gif" width=480>
The image used for the depth here (source: https://pixabay.com/en/):
<img src="https://qiita-image-store.s3.ap-northeast-1.amazonaws.com/0/635153/e2503326-b37e-15de-6cf0-5d5c17f1bcd2.jpeg" width=200>
⇒ What I was able to learn from this, again:
* If you want objects to feel rooted in the ground, this method cannot do it.
* The depth is very flat; given the source image, that is exactly how it should be.
↓ I inverted the depth black-and-white. (Badly) failed → merely failed; it improved to some extent.
<<< See the top of this article for the images >>>
# Information that may be useful
* With a GeForce GTX 1050 Ti (4.0 GB of dedicated GPU memory), processing **may fail due to insufficient GPU memory**. This does **not depend on image size**; it probably relates to how much memory the 3D representation of the image contents requires. I have not found a solution; tweaking parameters may help. For example, a red car used about 3.4 GB of dedicated GPU memory, while a white one used about 2.7 GB. If you hit this memory problem, consider using Google Colab or something similar.
* I demonstrated editing the depth via a PNG, but the default format is presumably numpy (.npy). If you can manipulate that numpy data directly, the same kind of edit should be possible there.
* The depth information written to the PNG seems to be normalized somewhere: the result appeared unchanged even when I changed the PNG's depth range. :star:
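That observation is consistent with a min-max normalization happening somewhere downstream, which would make any affine rescaling of the PNG's values invisible. A tiny sketch of why (hypothetical; I have not located the actual normalization in the repo's code):

```python
import numpy as np

def normalize(depth):
    # min-max normalization to [0, 1]: the output is identical for
    # any positive affine rescaling of the input values
    d = depth.astype(np.float64)
    return (d - d.min()) / (d.max() - d.min())

a = np.array([10.0, 20.0, 30.0])   # stand-in depth values
b = a * 2 + 5                      # same map, different value range
```

If the pipeline does something like `normalize`, then `a` and `b` produce identical results, matching what I observed.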
#Summary
I don't really understand how technically impressive this is.
Still, I wonder if it feels a bit forced.
(After all, human perception when looking at a photograph is a formidable thing to compete with...)
I need to study the underlying technology a little more.
(For now, I have a feel for the processing's quirks, so I think I can make somewhat more entertaining outputs.)
At this stage, I have no idea what it could be used for.
**Please try it yourself. An "indoor selfie" is recommended.**
#Summary 2
This applies to everything after "I tried it (additional work)".
Unfortunately, **I got tired of looking at it very quickly...**
It's tough without a purpose for the output...
It's just an **effect**... and that gets tiring. I look forward to the next level of this technology.
Still, tired as I am of the output, I remain interested in the technology, even though I don't understand it at all...
Editing the depth was easy.
#From now on
I should probably credit the Pixabay authors by name, separately.
Is there a better way to present the processed images?
**Comments are welcome.**
References:
[I tried easily creating a highly accurate 3D image from a single photo [2]. (Try processing the depth with numpy)](https://qiita.com/torinokaijyu/items/e761c00c87d6a00b8c30)
[I tried easily creating a highly accurate 3D image from a single photo [3]. (Depth only, done arbitrarily: another method, intel-isl's MiDaS and others)](https://qiita.com/torinokaijyu/items/6889598b732851c2e8fd)
:star2: [I tried easily creating a highly accurate 3D image from a single photo [0]. (I checked how it captures space)](https://qiita.com/torinokaijyu/items/d93b83b6a135f1b9a660)
:new: [I tried easily creating a highly accurate 3D image from a single photo [-1]. (Can you really see the hidden area?)](https://qiita.com/torinokaijyu/items/20a5a478881c9d00f9ac)