If you want to use a numpy.ndarray like a deque, is it faster to simply use np.append(x, y) and np.delete(x, 0), or to use an actual deque and convert it to an ndarray? Let's verify.
The code suspected of being slow:

```python
x = np.append(x, 1)
x = np.delete(x, 0)
```
This appends an element at the end and deletes the one at the beginning.
numpy appears to allocate a fixed-length block of memory when an array is created, so operations that change the length seem to be rather slow.
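As a quick illustration of that point (a minimal sketch of my own, not from the original article): np.append does not grow the array in place; it allocates a new array and copies the old contents, which is why repeatedly appending is costly.

```python
import numpy as np

x = np.zeros(5)
y = np.append(x, 1.0)   # allocates a NEW array and copies x into it

print(y.shape)          # (6,) - the result has one extra element
print(x.shape)          # (5,) - the original array is unchanged
print(y is x)           # False - a brand-new array was allocated
```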
This benchmark is about FIFO (first-in, first-out) usage, but it assumes the whole array is read each time rather than only consuming elements from the front. In other words, it is a benchmark for usage as a sliding buffer.
In this case I want to call numpy.mean() and numpy.std() every time an element is added, so I constrained the problem so that the result is a numpy.ndarray at each step. If you only need to build the sequence, it is faster to use the deque as-is, but then numpy's mean() and std() cannot operate on it without a conversion.
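For concreteness, here is a minimal sketch (the values are my own, not from the article) of the intended per-iteration usage: convert the window to an ndarray and compute the statistics on it. Note that numpy will even accept the deque directly, but only by converting it internally, so the conversion cost is paid either way.

```python
import numpy as np
from collections import deque

# Hypothetical 4-element window of recent values.
d = deque([1.0, 2.0, 3.0, 4.0], maxlen=4)

x = np.array(d)          # explicit conversion to ndarray
print(np.mean(x))        # 2.5
print(np.std(x))         # population standard deviation of the window
```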
So this time, assuming a numpy function is applied on every loop iteration, I compare the approaches including the conversion cost.
```python
x = np.roll(x, -1)
x[-1] = 1
```
The simplest approach: shift everything by one, like a deque, and assign the new value at the end.
```python
x = np.append(x, 1)
x = np.delete(x, 0)
```
This is exactly what we want to do, but it seems slow, which is what this benchmark checks.
```python
x[0:-1] = x[1:]
x[-1] = 1
```
Slicing seems to be fast. This looks like it does the same thing as roll, but it is subtly different.
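To make the subtle difference concrete (a small sketch of my own): np.roll returns a new array and wraps the displaced element around to the other end, while the slice assignment shifts in place and leaves the last slot holding a stale duplicate until it is overwritten.

```python
import numpy as np

x = np.arange(5.0)       # [0, 1, 2, 3, 4]

r = np.roll(x, -1)       # new array; the first element wraps to the end
print(r)                 # [1. 2. 3. 4. 0.]

x[0:-1] = x[1:]          # in-place shift; no new allocation
print(x)                 # [1. 2. 3. 4. 4.]  (last slot is a leftover copy)
```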
```python
d.append(1)
x = np.array(d)
```
Here d is a deque created with d = deque(maxlen=xlen). When maxlen is specified, elements at the front are discarded automatically as new ones are appended. A deque is overwhelmingly fast if you only append, but in this scenario it must be converted to a numpy array on every iteration.
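A quick demonstration of that maxlen behavior (my own minimal example):

```python
from collections import deque

d = deque(maxlen=3)
for i in range(5):
    d.append(i)

# The two oldest elements (0 and 1) were discarded automatically.
print(list(d))           # [2, 3, 4]
```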
Now let's verify which is the fastest. Here is the code used for the benchmark:
```python
import time
import numpy as np
from collections import deque

xlen = 1000
n = 100000

x = np.zeros(xlen)

# numpy.roll
s = time.time()
for i in range(n):
    x = np.roll(x, -1)
    x[-1] = 1
print(time.time() - s)

# append and delete
s = time.time()
for i in range(n):
    x = np.append(x, 1)
    x = np.delete(x, 0)
print(time.time() - s)

# slice shift and assign at the end
s = time.time()
for i in range(n):
    x[0:-1] = x[1:]
    x[-1] = 1
print(time.time() - s)

# Declaration and initialization of the deque
d = deque(maxlen=xlen)
for i in range(xlen):
    d.append(1)

# append to deque, then convert to ndarray
s = time.time()
for i in range(n):
    d.append(1)
    x = np.array(d)
print(time.time() - s)
```
I ran it with Python 3.8.
| Method | xlen=100 | xlen=10000 |
|---|---|---|
| numpy.roll() | 2.58s | 3.28s |
| append and delete | 1.79s | 2.78s |
| Shift with slices and assign at the end | 0.100s | 0.366s |
| Append to a deque, then convert to numpy.ndarray | 1.52s | 88.0s |
The fastest method was **shifting with slices and assigning at the end**. The suspect append/delete code is at least not the worst choice, since it beats numpy.roll(). The deque approach is also viable when the array is short, because np.array() is not that slow there; but as the array grows, np.array() takes a tremendous amount of time, which is dangerous.
I hope this is helpful.