The program I wrote in the previous article caused a stack overflow. The cause is clear: since it calls itself recursively, it exhausts the stack once the recursion goes past a certain depth. I assumed the data I use for my research would not be a problem, but the data I was actually using did overflow, so I decided to change the algorithm.
| python | 3.7.4 |
|---|---|
I did not use any particular libraries.
The cause was that recursion was eating up memory, so I decided to solve it by iterating with a for loop instead of calling the function recursively. However, scanning every cell on every pass would take far too long, so I made a function that fills its own cell and saves the coordinates of the cells in the 6 surrounding directions (up / down / left / right plus up / down along the Z axis) that still need to be searched, and only those saved cells are processed. It is hard to follow in sentences alone, so I will lay out the order of processing.
First, here is a table of what each cell state means.
| State value | Meaning |
|---|---|
| 0 | One of the two binarized values |
| 1 | The other of the two binarized values |
| 2 | Filled cell |
Now for the actual processing. (The first point is only set once, so it is not part of the loop, naturally.) The flow is: mark the starting cell and push its coordinates onto the stack, then repeatedly take coordinates off the stack, fill that cell, and push its unsearched neighbors, until the stack is empty. Last time I searched one cell at a time recursively; this time the idea is to remember the cells to be searched in advance and process them in batches.
Let's take a look at the code. The first is the main iterative function.
new_fill.py
```python
def fill(data, plot):
    stack = []
    end_stack = []
    data, defaultvalue = plot_point(data, plot, stack)  # Set the first point
    while stack:  # Repeat until the stack is empty
        for pop_data in stack:  # Take data out of the stack
            stack.remove(pop_data)  # Remove the retrieved data from the stack
            data = step_fill(data, pop_data["x"], pop_data["y"], pop_data["z"], defaultvalue, stack)  # Fill function
            end_stack.append(pop_data)  # Record it as already searched
    return data
```
It looks like this.
As I said above, it saves the coordinates of the cells to be searched and finishes when there are no cells left to search. A simple program.
The function that sets the first point is almost the same as last time. The difference is that the coordinates are now also pushed onto the stack, so the stack is passed in and the point is appended to it inside the function, and the return value changes accordingly.
new_fill.py
```python
def plot_point(data, plot, stack):
    defaultvalue = data[plot["z"]][plot["y"]][plot["x"]]  # Remember the original value of the starting cell
    data[plot["z"]][plot["y"]][plot["x"]] = 2             # Mark the starting cell as filled
    stack.append(plot)                                    # Push the starting point onto the stack
    return data, defaultvalue
```
The only change is the added `stack.append(plot)`.
Finally, the function that fills the surroundings. This is the one that originally made the recursive call. This time it simply fills its own cell and pushes the coordinates of the surrounding unsearched cells onto the stack.
new_fill.py
```python
def step_fill(data, x, y, z, defaultvalue, stack):
    data[z][y][x] = 2  # Fill this cell
    # Queue each of the 6 neighbours that still holds the original value and is not already on the stack
    if data[z][y][x-1] == defaultvalue and {"x": x-1, "y": y, "z": z} not in stack:
        stack.append({"x": x-1, "y": y, "z": z})
    if data[z][y][x+1] == defaultvalue and {"x": x+1, "y": y, "z": z} not in stack:
        stack.append({"x": x+1, "y": y, "z": z})
    if data[z][y-1][x] == defaultvalue and {"x": x, "y": y-1, "z": z} not in stack:
        stack.append({"x": x, "y": y-1, "z": z})
    if data[z][y+1][x] == defaultvalue and {"x": x, "y": y+1, "z": z} not in stack:
        stack.append({"x": x, "y": y+1, "z": z})
    if data[z-1][y][x] == defaultvalue and {"x": x, "y": y, "z": z-1} not in stack:
        stack.append({"x": x, "y": y, "z": z-1})
    if data[z+1][y][x] == defaultvalue and {"x": x, "y": y, "z": z+1} not in stack:
        stack.append({"x": x, "y": y, "z": z+1})
    return data
```
It's simple. I feel like it could be written more elegantly, so I'll think about that; one possible direction is sketched below.
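For example, the six if blocks could be collapsed into a loop over the neighbour offsets. This is just a sketch that, as far as I can tell, does the same thing; it is not the code I actually timed below.

```python
# Sketch: the same checks as step_fill above, with the 6 neighbour directions in a loop
NEIGHBOUR_OFFSETS = [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)]

def step_fill_compact(data, x, y, z, defaultvalue, stack):
    data[z][y][x] = 2  # Fill this cell
    for dx, dy, dz in NEIGHBOUR_OFFSETS:
        neighbour = {"x": x + dx, "y": y + dy, "z": z + dz}
        if data[z + dz][y + dy][x + dx] == defaultvalue and neighbour not in stack:
            stack.append(neighbour)
    return data
```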
With the above three functions, you can implement a 3D fill.
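As a usage sketch (the 5×5×5 array and the starting point here are made-up example data, not the data from my experiment), calling it looks like this:

```python
# Made-up example: a 5x5x5 volume of 1s with a 3x3x3 block of 0s in the middle
size = 5
data = [[[1 for _ in range(size)] for _ in range(size)] for _ in range(size)]
for z in range(1, 4):
    for y in range(1, 4):
        for x in range(1, 4):
            data[z][y][x] = 0

# Fill the region of 0s connected to the centre cell; those cells end up marked as 2
filled = fill(data, {"x": 2, "y": 2, "z": 2})
print(filled[2][2])  # middle row of the middle slice: [1, 2, 2, 2, 1]
```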
By the way, I compared the processing time with the previous version, using the same experiment as before.
| Run | Previous fill | This fill |
|---|---|---|
| 1 | 0.004987001419067383 | 0.12566423416137695 |
| 2 | 0.003987789154052734 | 0.10970711708068848 |
| 3 | 0.003989219665527344 | 0.11269998550415039 |
| 4 | 0.004986763000488281 | 0.11568784713745117 |
| 5 | 0.00598454475402832 | 0.11369585990905762 |
| 6 | 0.015959978103637695 | 0.11469197273254395 |
| 7 | 0.004986763000488281 | 0.11768507957458496 |
| 8 | 0.003989934921264648 | 0.11369562149047852 |
| 9 | 0.003988981246948242 | 0.1136932373046875 |
| 10 | 0.005983829498291016 | 0.11469554901123047 |
| Average | 0.00588448047637 | 0.115191650390625 |
The processing time got about 20 times worse. There is probably some wasteful processing mixed in, so I'll think about it again.
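My guess (I have not verified this) is that much of the extra cost comes from the `not in stack` checks, which scan the whole list every time. One possible direction would be to keep the already-seen coordinates in a set of tuples, so the membership test is constant time. The sketch below is only a variant under that assumption, not something I have measured:

```python
# Sketch: the same fill, tracking seen coordinates in a set for fast membership tests
NEIGHBOUR_OFFSETS = [(-1, 0, 0), (1, 0, 0), (0, -1, 0), (0, 1, 0), (0, 0, -1), (0, 0, 1)]

def fill_with_set(data, plot):
    defaultvalue = data[plot["z"]][plot["y"]][plot["x"]]
    start = (plot["x"], plot["y"], plot["z"])
    stack = [start]   # Coordinates still to be processed
    seen = {start}    # Coordinates that have ever been put on the stack
    while stack:
        x, y, z = stack.pop()
        data[z][y][x] = 2  # Fill this cell
        for dx, dy, dz in NEIGHBOUR_OFFSETS:
            nxt = (x + dx, y + dy, z + dz)
            if data[z + dz][y + dy][x + dx] == defaultvalue and nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return data
```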