Hello!
In this post, I build a simple system using the Computer Vision API, one of the Azure Cognitive Services, together with the Microsoft Azure Storage SDK for Java, one of the Azure SDKs for Java.
Agenda

- What I made this time
- The Cognitive Services Computer Vision API thumbnail feature
- Using Storage from the Java SDK
- Impressions from actually trying it
- Summary
This time, I created a simple file upload system. It's a quick first pass, so it isn't polished.
- Function (1): Upload a file from the screen and display it on the screen
- Function (2): When a file is uploaded, a thumbnail image is automatically created and saved at the same time
The base of the system is Spring Boot, with Thymeleaf handling the HTML rendering and form data exchange. For Thymeleaf I referred to the tutorial (tutorials/2.1/usingthymeleaf_ja.html). The Azure SDK is used from within Spring Boot to interact with Storage.
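As a rough sketch of this architecture (not the actual code from this project; the class, method, and service names here are my own illustrative assumptions), the Spring Boot side could look something like this. It requires `spring-boot-starter-web` and a multipart-enabled configuration:

```java
// Illustrative sketch only: ImageController and the commented-out storageService
// are assumed names, not the actual code from this project.
import org.springframework.stereotype.Controller;
import org.springframework.ui.Model;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.multipart.MultipartFile;

@Controller
public class ImageController {

    @GetMapping("/")
    public String list(Model model) {
        // Hand the list of uploaded blobs to the Thymeleaf template (index.html).
        // model.addAttribute("images", storageService.listImages());
        return "index";
    }

    @PostMapping("/upload")
    public String upload(@RequestParam("file") MultipartFile file) throws Exception {
        byte[] bytes = file.getBytes();
        // 1) Save the original image to Azure Blob Storage.
        // 2) Ask the Computer Vision API for a thumbnail and save that blob too.
        // storageService.saveOriginalAndThumbnail(file.getOriginalFilename(), bytes);
        return "redirect:/";
    }
}
```

The controller only shuttles bytes; the Storage and Computer Vision calls described later would live behind a service class.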
Here's what the finished system looks like. (Thanks to the Bootstrap template I used for the layout.)
When you upload a file from the upload form on the left,

the new file appears in the list,

and clicking it shows the original image.
This time, I used the Computer Vision API to generate the thumbnail images. The Computer Vision API was originally built to detect faces and facial expressions and to analyze and classify what is in a picture, but as a by-product of those capabilities it also offers a feature that creates a "nice-looking" thumbnail.
"Nice-looking" here just means "it's easy to tell what is in the picture." If the subject sits off to the right and you crop the image naively, you end up with a thumbnail that doesn't convey what the photo shows. The Computer Vision API analyzes where the subject is, crops the region around it to the specified size, and produces the thumbnail.
For example, take this photo.

If you naively crop the top-left corner to make a thumbnail, you get something like the image below, and you can't tell what the photo shows.

If you instead have Computer Vision analyze the image and create the thumbnail, the result looks "nice," like the image below.
If you'd like to experiment with your own photos, there is a demo page, so please give it a try. For pricing and the other features, see here.
Cognitive Services is used by calling its REST API directly.
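As a minimal sketch of that direct REST call (the `generateThumbnail` endpoint, the `width`/`height`/`smartCropping` query parameters, and the `Ocp-Apim-Subscription-Key` header come from the official v1.0 API; the `westus` region host and the key are placeholders you must replace with your own):

```java
// Sketch of calling the Computer Vision "generateThumbnail" endpoint (v1.0).
// Region host and subscription key are placeholders, not working values.
import java.io.ByteArrayOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ThumbnailClient {

    // Pure helper: build the request URL with thumbnail size and smart cropping.
    static String buildThumbnailUrl(String host, int width, int height, boolean smartCropping) {
        return host + "/vision/v1.0/generateThumbnail"
                + "?width=" + width
                + "&height=" + height
                + "&smartCropping=" + smartCropping;
    }

    // Network sketch: POST the image bytes, read the thumbnail bytes back.
    static byte[] requestThumbnail(String host, String subscriptionKey, byte[] image) throws Exception {
        URL url = new URL(buildThumbnailUrl(host, 200, 150, true));
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Ocp-Apim-Subscription-Key", subscriptionKey);
        conn.setRequestProperty("Content-Type", "application/octet-stream");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(image);
        }
        // The response body is the thumbnail image itself (binary), not JSON.
        try (InputStream in = conn.getInputStream();
             ByteArrayOutputStream buf = new ByteArrayOutputStream()) {
            byte[] chunk = new byte[8192];
            int n;
            while ((n = in.read(chunk)) != -1) {
                buf.write(chunk, 0, n);
            }
            return buf.toByteArray();
        }
    }
}
```

`smartCropping=true` is what turns on the subject-aware cropping described above.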
Azure Storage is a storage service. Data can be saved in several forms: Blob, File, Queue, and Table. In this system, a container (think of it as a directory) is created in Storage, and the original image and the thumbnail image are saved as blobs.
This time, I used one of the SDKs, the Microsoft Azure Storage SDK for Java. There are various other Azure SDKs for Java, prepared per service, so be sure to pick the SDK for the service you want to use. The Java SDKs are available in the Maven repository (http://search.maven.org/#search%7Cga%7C3%7CAzure).
Just add a dependency in Gradle and you're good to go.

```groovy
dependencies {
    // ...
    compile 'com.microsoft.azure:azure-storage:4.0.0'
    // ...
}
```
See here for a sample of how to use it. Other Java-related resources are summarized here.
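To give a feel for the SDK, here is a hedged sketch of the blob operations this system needs (upload and list) using `azure-storage:4.0.0`. The connection string and the container name `images` are placeholders of my own, not values from this project:

```java
// Sketch of saving and listing blobs with the Microsoft Azure Storage SDK
// for Java (com.microsoft.azure:azure-storage:4.0.0). Connection string and
// container name are placeholder assumptions; replace with your own.
import com.microsoft.azure.storage.CloudStorageAccount;
import com.microsoft.azure.storage.blob.CloudBlobClient;
import com.microsoft.azure.storage.blob.CloudBlobContainer;
import com.microsoft.azure.storage.blob.CloudBlockBlob;
import com.microsoft.azure.storage.blob.ListBlobItem;

public class BlobStore {

    private final CloudBlobContainer container;

    public BlobStore(String connectionString) throws Exception {
        CloudStorageAccount account = CloudStorageAccount.parse(connectionString);
        CloudBlobClient client = account.createCloudBlobClient();
        container = client.getContainerReference("images"); // assumed container name
        container.createIfNotExists();
    }

    // Upload the image bytes as a block blob.
    public void save(String blobName, byte[] data) throws Exception {
        CloudBlockBlob blob = container.getBlockBlobReference(blobName);
        blob.uploadFromByteArray(data, 0, data.length);
    }

    // List the blobs in the container (used for the screen's file list).
    public void printBlobNames() {
        for (ListBlobItem item : container.listBlobs()) {
            System.out.println(item.getUri());
        }
    }
}
```

The same `save` call can be used for both the original image and the thumbnail returned by Computer Vision.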
There is an upper limit on the image size the Computer Vision API accepts, but this time I implemented no resizing and not even a size check, so uploading an image larger than the limit causes a system error. I'd like to improve that later.
As for the Java SDK, information was scarce at first and I was at a loss browsing the Maven repository wondering which artifact to use, but once I found the right one I only had to declare the dependency and could start using it without much trouble. Uploading files and listing blobs worked smoothly.
One thing I think should be fixed is the Computer Vision thumbnail API demo page. The response is binary, so it is displayed as garbled characters. The Java code sample has the same issue: apparently it calls `EntityUtils.toString(entity)` at the very end, and converting binary image data to a string will naturally produce garbage. Since it is the API's demo page, I'd like the response to be shown as an image.
By the way, to use the Java code sample from the documentation as a base, you need to call `getContent()` on the entity (the returned image data) and read it as binary. It looks like this.
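A minimal sketch of reading the response body as binary rather than as a string (the `readAllBytes` helper below is my own; with the Apache HttpClient used in the official sample you would pass it `entity.getContent()` instead of calling `EntityUtils.toString(entity)`):

```java
// Helper for reading a binary HTTP response body (e.g. the thumbnail image)
// into a byte array without corrupting it the way a string conversion would.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class BinaryBody {

    public static byte[] readAllBytes(InputStream in) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        byte[] chunk = new byte[8192];
        int n;
        while ((n = in.read(chunk)) != -1) {
            buf.write(chunk, 0, n);
        }
        return buf.toByteArray();
    }
}
```

The resulting byte array can then be written to a file or uploaded to Blob Storage as-is.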
Finally, I pushed the code from this post to GitHub. Honestly it's still messy, so I'd like to clean it up a bit more and write a README.
Since there is still little Java-related information for both Azure Storage and Computer Vision, I think more of it would lower the barrier to entry. Also, at the moment not every Azure service can be used from Java, and it is fairly common that the service you want has no Java (or Python) support, so I hope coverage expands.
That's all.