The day a beginner two and a half months into programming built a web application with Flask

Introduction

Thank you for taking the time to read this article!

It has been about two weeks since I published my previous article, on data analysis.

I have now finished Aidemy's AI application course, so this time I would like to build a web application.

The app classifies images.

Who is this article for?

This article is for people who are starting, or have just started, programming, and especially for working adults who are interested in learning it. I hope it can serve as one point of reference, because I built this app in the little time I had after work and on weekends. So, to a question that worries many working people and programming beginners, "How much programming can you realistically do while holding down a job?", I think I can offer one sample answer.

Precautions for readers

Because I put great importance on producing output itself, I do not fully understand everything the code does, and I may not be able to give detailed explanations. I'm sorry about that. What I want this article to convey is that you can produce output in programming even without a rigorous understanding of the code. This is one of the most important things I have noticed recently, and I hope I can express it well enough that it comes across to everyone.

Environment

MacBook Air (macOS Catalina 10.15.5), Atom 1.53.0, Jupyter Notebook

Main text

0. Theme setting and final goal

This app uses image recognition. I thought it would be good if a machine could easily tell apart things that look similar to the human eye, so for the time being I brainstormed pairs of suitably similar things: butter and margarine, flounder and flatfish, Kai Ato and Ai Kato. After thinking it over, I settled on cabbage and lettuce, because the image data was easy to collect.

The final goal is to put an app on the Web that determines whether an image shows cabbage or lettuce.

This time I will use a text editor called Atom. Here, in advance, is the structure of the final directory created in Atom.

cabbage_lettuce/
├ image/
│ ├ additional_image/
│ ├ cabbage/
│ └ lettuce/
├ model.h5/
│ └ my_model
├ static/
│ └ stylesheet.css
├ templates/
│ └ index.html
├ exe.py
└ imagenet.py

1. Data collection

First, create a directory named cabbage_lettuce, and inside it create a file called imagenet.py. In this section we will build imagenet.py, which downloads the image data from a data source called ImageNet. The contents of imagenet.py are mostly copied and pasted from the reference below, so please see that page for details.

(Reference) ImageNet How to download ImageNet

After writing imagenet.py, we use the command line. I learned how to use the command line with Progate; you can take its command line course for free, so if you are not familiar with it, please give it a try. It is very easy to understand.

Since I use a Mac, my command line is an app called Terminal. From the Terminal you can run imagenet.py with "python imagenet.py", but to do that you need to move into the directory where imagenet.py lives. Look at the directory structure written above: the directories and files branch out from the cabbage_lettuce directory, and if you follow the tree in reverse you go up the hierarchy. By that description, the directory one level above imagenet.py is the cabbage_lettuce directory.

Use the cd command to change to another directory. If you are one level above cabbage_lettuce, you can move into it by running the command below in the Terminal. You can check where cabbage_lettuce sits in the hierarchy with Finder, the file browser that comes with every Mac.

Terminal


cd cabbage_lettuce

If your prompt shows "username cabbage_lettuce%", it worked. If so, run the following to download the lettuce and cabbage images.

Terminal


python imagenet.py
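For the curious, here is a rough sketch of what a downloader like imagenet.py can look like. My actual script is copied from the reference above, so treat everything here (the API endpoint's behavior, the wnid values, the limit) as illustrative assumptions, not the article's real code.

```python
# Hypothetical sketch of an ImageNet downloader (NOT the article's actual
# imagenet.py). Assumes the old ImageNet text API that returned one image
# URL per line for a given synset ID (wnid); the wnids below are guesses.
import os
import urllib.request

API = "http://www.image-net.org/api/text/imagenet.synset.geturls?wnid={}"
TARGETS = {"cabbage": "n07713895", "lettuce": "n07723559"}  # assumed wnids

def filename_for(url, index):
    """Build a safe local filename like 0001.jpg from the URL's extension."""
    ext = os.path.splitext(url)[1].lower()
    if ext not in (".jpg", ".jpeg", ".png", ".gif"):
        ext = ".jpg"  # fall back when the URL has no usable extension
    return "%04d%s" % (index, ext)

def download(name, wnid, out_root="image", limit=200):
    """Fetch up to `limit` images for one synset into image/<name>/."""
    out_dir = os.path.join(out_root, name)
    os.makedirs(out_dir, exist_ok=True)
    urls = urllib.request.urlopen(API.format(wnid)).read().decode().split()
    for i, url in enumerate(urls[:limit]):
        try:
            urllib.request.urlretrieve(url, os.path.join(out_dir, filename_for(url, i)))
        except Exception:
            pass  # dead links are common in the list; just skip them

# Usage (network access required):
#   for name, wnid in TARGETS.items():
#       download(name, wnid)
```

The point is only the overall shape: fetch a URL list per class, save each image under a per-class folder, and swallow failed downloads.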

2. Model construction

The model is built in Jupyter Notebook, which is very convenient when you want to tweak the data in detail.

JupyterNotebook


from sklearn.model_selection import train_test_split
from keras.callbacks import ModelCheckpoint
from keras.layers import Conv2D, MaxPooling2D, Dense, Dropout, Flatten
from keras.models import Sequential
from keras.utils import np_utils
from keras import optimizers
from keras.preprocessing.image import img_to_array, load_img
import keras
import glob
import numpy as np
import matplotlib.pyplot as plt


# Path of the image directory
root_dir = '/cabbage_lettuce/image/'
# Names of the image directories
veg = ['cabbage', 'lettuce']

X = []  # List that stores the image data
y = []  # List that stores the label (correct answer) for each image

for label, img_title in enumerate(veg):
    file_dir = root_dir + img_title
    img_file = glob.glob(file_dir + '/*')
    for i in img_file:
        try:
            img = img_to_array(load_img(i, target_size=(128, 128)))
            X.append(img)
            y.append(label)
        except Exception:
            print("Failed to read " + i)
# Convert to 4-dimensional NumPy arrays of shape (*, 128, 128, 3)
X = np.asarray(X)
y = np.asarray(y)

# Scale pixel values into the range 0 to 1
X = X.astype('float32') / 255.0
# Convert the labels to one-hot vectors
y = np_utils.to_categorical(y, 2)

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
xy = (X_train, X_test, y_train, y_test)

model = Sequential()
# Input layer and hidden layers (activation function: relu)
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=X_train.shape[1:]))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Conv2D(128, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

model.add(Flatten())
model.add(Dense(512, activation='relu'))
model.add(Dropout(0.5))

# Output layer for 2-class classification (activation function: softmax)
model.add(Dense(2, activation='softmax'))

# Compile (learning rate: 1e-3, loss function: categorical_crossentropy,
# optimizer: RMSprop, metric: accuracy)
rms = optimizers.RMSprop(lr=1e-3)
model.compile(loss='categorical_crossentropy',
              optimizer=rms,
              metrics=['accuracy'])

# Number of training epochs
epoch = 10

# Train the built model
model.fit(X_train, y_train, batch_size=64, epochs=epoch, validation_data=(X_test, y_test))

model.save('/cabbage_lettuce/model.h5/my_model')

scores = model.evaluate(X_test, y_test, verbose=1)

print('Test accuracy:', scores[1])

This code is more or less boilerplate, so for detailed explanations I recommend reading the articles other people have written about it. To explain just one thing: the try/except inside the for loop makes it possible to identify defective image files mixed into the data that cannot be read. Thanks to this, cleaning up the image data became a little easier.
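The same idea works as a standalone tool. Here is a small sketch (assuming Pillow is installed; the folder path in the usage note is illustrative) that lists the files an image library cannot open, so you can delete them before training.

```python
# Standalone version of the try/except idea: find image files that cannot
# be read, so defective downloads can be removed before training.
import glob
from PIL import Image

def find_broken(folder):
    """Return paths in `folder` that Pillow cannot verify as images."""
    broken = []
    for path in sorted(glob.glob(folder + "/*")):
        try:
            with Image.open(path) as im:
                im.verify()  # cheap integrity check without a full decode
        except Exception:
            broken.append(path)
    return broken

# Usage: find_broken("image/cabbage") -> list of unreadable file paths
```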

The code above prints the model's accuracy on the test data. The result:

Test accuracy: 0.3184931506849315

31%!? Terrible accuracy, about on par with me, since I can't tell the two apart until I take a bite (laughs). Since the goal this time is to create an app, skipping the work of improving accuracy is no problem for that purpose. I don't want to spend any more time on it, so let's move on.
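For perspective: with two classes, even always guessing the more common class scores at least 50% on this kind of data, so 31% is below chance. A tiny sketch of that baseline (the labels mirror the one-hot `y` above; the toy counts are made up):

```python
# Majority-class baseline: the accuracy of always predicting the most
# common class. Any useful model should beat this number.
import numpy as np

def majority_baseline(y_onehot):
    labels = y_onehot.argmax(axis=1)   # back from one-hot to class indices
    counts = np.bincount(labels)       # how many samples per class
    return counts.max() / len(labels)

# Toy example: 6 "cabbage" and 4 "lettuce" labels -> baseline accuracy 0.6
y = np.eye(2)[[0] * 6 + [1] * 4]
print(majority_baseline(y))  # 0.6
```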

3. Putting the app on the Web

We will now write the Flask part, which turns this into a web application, and the HTML & CSS part, which creates the app's appearance.

exe.py


import os
from flask import Flask, request, redirect, url_for, render_template, flash
from werkzeug.utils import secure_filename
from tensorflow.keras.models import Sequential, load_model
from tensorflow.keras.preprocessing import image
import numpy as np

classes = ["cabbage","lettuce"]
num_classes = len(classes)
image_size = 128

UPLOAD_FOLDER = "./image/additional_image"
ALLOWED_EXTENSIONS = set(['png', 'jpg', 'jpeg', 'gif'])

app = Flask(__name__)
app.secret_key = os.urandom(24)  # flash() needs a session secret key

def allowed_file(filename):
    return '.' in filename and filename.rsplit('.', 1)[1].lower() in ALLOWED_EXTENSIONS

model = load_model('./model.h5/my_model')  # Load the trained model

@app.route('/', methods=['GET', 'POST'])
def upload_file():
    if request.method == 'POST':
        if 'file' not in request.files:
            flash('No file')
            return redirect(request.url)
        file = request.files['file']
        if file.filename == '':
            flash('No file')
            return redirect(request.url)
        if file and allowed_file(file.filename):
            filename = secure_filename(file.filename)
            file.save(os.path.join(UPLOAD_FOLDER, filename))
            filepath = os.path.join(UPLOAD_FOLDER, filename)

            # Read the uploaded image and convert it to a NumPy array
            img = image.load_img(filepath, grayscale=False, target_size=(image_size, image_size))
            img = image.img_to_array(img)
            data = np.array([img])
            # Pass the transformed data to the model for prediction
            result = model.predict(data)[0]
            predicted = result.argmax()
            pred_answer = "This is " + classes[predicted]

            return render_template("index.html", answer=pred_answer)

    return render_template("index.html", answer="")


if __name__ == "__main__":
    port = int(os.environ.get('PORT', 8080))
    app.run(host='0.0.0.0', port=port)

The code lets the user submit an image file, feeds the submitted data into the image recognition model we built earlier, and returns the output.
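To make the prediction step concrete: `model.predict` returns one softmax probability per class, and `argmax` picks the winner. A minimal sketch (the probability vector here is made up for illustration):

```python
# How the handler turns the model's softmax output into the reply string.
import numpy as np

classes = ["cabbage", "lettuce"]

def to_answer(result):
    """Pick the highest-probability class and phrase the reply."""
    predicted = int(np.asarray(result).argmax())
    return "This is " + classes[predicted]

print(to_answer([0.23, 0.77]))  # This is lettuce
```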

index.html


<!DOCTYPE html>
<html lang="ja">
<head>
    <meta charset="UTF-8">
    <title>CL_Discriminator</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <link rel="stylesheet" href="./static/stylesheet.css">
</head>
<body>
    <header>


    </header>

    <div class="main">
        <h2>Identifies whether the image sent is cabbage or lettuce</h2>
        <p>Please send the image</p>
        <form method="POST" enctype="multipart/form-data">
            <input class="file_choose" type="file" name="file">
            <input class="btn" value="hand in" type="submit">
        </form>
        <div class="answer">{{answer}}</div>
    </div>



</body>
</html>

stylesheet.css


header {
    background-color: #76B55B;
    height: 60px;
    margin: -8px;
    display: flex;
    flex-direction: row-reverse;
    justify-content: space-between;
}

.main {
    height: 370px;
}

h2 {
    color: #444444;
    margin: 90px 0px;
    text-align: center;
}

p {
    color: #444444;
    margin: 70px 0px 30px 0px;
    text-align: center;
}

.answer {
    color: #444444;
    margin: 70px 0px 30px 0px;
    text-align: center;
}

form {
    text-align: center;
}

Now you're ready to go.

In the Terminal, change into the directory that contains exe.py (cabbage_lettuce), then run the following.

Terminal


python exe.py

The output includes a URL; opening it in a browser brought up the app on the Web.

(Screenshot: the app's top page)

As a test, I submitted an image of lettuce picked up from the net.

(Screenshot: the prediction result)

The app calmly returned an answer, but it got it wrong.

Well, that wraps up my report on the deliverable.

In conclusion

Finally, I would like to reflect on my learning and on the app. For the last month or so I have been consciously trying to speed up my programming learning, and I have arrived at some cohesive opinions about learning, which I will write down here.

I came to think that if you want to learn faster, you have to escape the input-then-output paradigm of studying we used as students. So I took the opposite approach, output then input: first decide what you want to make, and whenever you stumble while making it, look things up (input) as needed until you finish it as a product. From my own experience of self-study, this is quite difficult to do alone. By attending an online school, I created an environment where I could ask questions of people with practical experience, which made the output-then-input style of learning workable. I stumbled many times while building this app, and each time my mentor helped me out. Of course, in an environment where you can ask someone and get correct feedback, even a beginner can learn quickly by any measure.

However, there are problems too. I learned to push ahead with an ambiguous understanding, without being familiar with the contents of the code, on the grounds that things worked as expected for the time being. As you can see from this article, as programming study becomes more advanced, you can expect more complex problems to emerge and comprehensive ability to be required. If I keep going with as shallow an understanding as I have now, there will be too many stumbling points, and efficiency will, on the contrary, deteriorate. I think this is an issue for the future.

Finally, some words I learned recently struck a chord with me as fitting for programming beginners, myself included, so I would like to quote them and finish this article.

"We shall not cease from exploration, and the end of all our exploring will be to arrive where we started and know the place for the first time." --T. S. Eliot

Going forward, I plan to study programming from various angles. By the time I analyze data and build an app again, I expect to have a deep understanding of where I stand and to be able to produce high-quality deliverables. I am really excited about that.

If any readers are beginners like me, let's do our best together!

Thank you so much for reading this article to the end!!

References

Aidemy AI App Course
A beginner who has been programming for 2 months tried to analyze the real GDP of Japan in time series using the SARIMA model
ImageNet
How to download ImageNet (http://murayama.hatenablog.com/entry/2017/11/18/160818)
Image recognition using CNN: horses and deer (https://qiita.com/nuhsodnok/items/a3fb71bba4e0148e782f)
Summary of how to write environment information
A beginner's memorandum on how to create Qiita articles
