The scraping source's specification limits a single fetch to 60 images, so I added support for collecting up to 120 at a time.
The cascade classifier file was missing, so I added it.
The number of images that can be collected at once has been increased from 120 to unlimited. That part of the code is messy, relying on `in locals()`.
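For illustration, this is roughly the kind of `in locals()` pattern meant above; it is a hypothetical sketch, not the actual gather.py code.

```python
# Sketch of the "dirty" `in locals()` pattern: checking whether a name
# is already defined instead of initializing it before the loop.
for batch in (["a.jpg"], ["b.jpg", "c.jpg"]):  # stand-in for paged results
    if "urls" in locals():
        urls += batch  # extend the list once it exists
    else:
        urls = batch   # first iteration: the name is not defined yet
print(urls)  # ['a.jpg', 'b.jpg', 'c.jpg']
```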
Machine learning on people's faces seems quite feasible, but collecting and processing each image by hand is a hassle, so I made something that does everything automatically. I assumed someone had already built such a tool, but I couldn't find anything usable as-is, so I'm publishing mine here.
If you don't have the following two, please install them first:

- OpenCV 3.x
- Beautiful Soup 4.x
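As a quick way to confirm both are available, something like the following should work (assuming the standard `cv2` and `bs4` module names):

```python
# Minimal import check for the two dependencies listed above.
import cv2   # OpenCV; expect a 3.x version string
import bs4   # Beautiful Soup 4

print("OpenCV:", cv2.__version__)
print("Beautiful Soup:", bs4.__version__)
```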
Since the tool collects images from the internet, make sure the machine you run it on has an internet connection.
Run FaceImageCollector/image_gather/gather.py. You will be prompted for the keyword, the number of images, and the edit mode, in that order.
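As a rough sketch, the prompt flow looks something like this (prompt strings and variable names are illustrative, not the actual gather.py source):

```python
# Illustrative prompt flow; the real gather.py may word these differently.
keyword = input("keyword: ")                   # e.g. Zuckerberg
num_images = int(input("number of images: "))  # e.g. 30
edit_mode = input("edit mode: ")               # e.g. whether to trim faces
```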
In the example, I specified the keyword Zuckerberg, 30 images, and face trimming; the download then starts.
Once the specified number of images has been downloaded, an img folder is created inside image_gather, with a subfolder named after the keyword you specified.
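For reference, the face trimming and folder layout described above could look roughly like this with OpenCV's Haar cascade (file names and the cascade path are assumptions, not the repository's actual code):

```python
import os
import cv2

keyword = "Zuckerberg"
save_dir = os.path.join("img", keyword)  # img/<keyword>/ under image_gather
os.makedirs(save_dir, exist_ok=True)

# Detect faces with a standard Haar cascade and save each cropped region.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("downloaded.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for i, (x, y, w, h) in enumerate(cascade.detectMultiScale(gray, 1.1, 5)):
    face = img[y:y + h, x:x + w]  # crop the detected face region
    cv2.imwrite(os.path.join(save_dir, "face_%d.jpg" % i), face)
```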
- Allow automatic trimming even when the keyword is something other than a person.
- Implement contour drawing, which is not yet done. (As a stopgap, a rectangle is drawn around each detected face; see the sketch below.)
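The stopgap rectangle drawing mentioned in the list would look something like this (again a sketch under the same assumptions, not the repository's code):

```python
import cv2

# Draw a rectangle around each detected face as a stand-in for contours.
cascade = cv2.CascadeClassifier("haarcascade_frontalface_default.xml")
img = cv2.imread("photo.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("photo_boxed.jpg", img)
```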
If you have other ideas for things that would be handy to automate, please let me know!
The code isn't long, but I'll explain it when I have time. Thank you for reading this far!