I would like to work through, little by little, the contents of my library code that until now I had only been copy-pasting. I am writing this article partly to check my own understanding and partly to get advice, so please leave comments! (Especially about speeding things up and computational complexity.)
Find the shortest distance from a single starting point to every other vertex in a graph with no negative-cost edges (the single-source shortest path problem).
Let the graph have $V$ vertices and $E$ edges. First, prepare two lists: one records the shortest distance from the start, and the other records whether each vertex's shortest distance has been fixed. (It seems they could be combined into one, but keeping them separate matches the way I understood the algorithm.) The distance to the start itself is fixed at 0, and the distance to every other vertex is initialized to INF. Also prepare a priority queue from which the edge with the minimum cost can be retrieved, and push every edge leaving the start into it.

At this point, **for the destination of the minimum-cost edge, there is no cheaper route via any other edge**: since costs are non-negative, any other route would be a detour. So we pop the minimum-cost edge and fix the shortest distance to its destination vertex. Then the edges leaving that vertex are pushed into the priority queue in the same way, with distance **(distance to this vertex) + (cost of the edge)**. Edges whose destination already has a fixed shortest distance cannot affect the result, so skipping them makes the computation lighter. Repeat this process until all shortest distances are fixed (in the implementation, until the priority queue becomes empty). The implementation below follows exactly this procedure.
import heapq

def dijkstra(s):
    # Shortest distance from the start point s to each vertex
    d = [float('inf')] * n
    d[s] = 0
    # Whether each vertex's shortest distance has been fixed
    used = [False] * n
    used[s] = True
    # Heap holding tentative distances as [distance, vertex]
    que = []
    for e in g[s]:
        heapq.heappush(que, e)
    while que:
        u, v = heapq.heappop(que)
        if used[v]:
            continue
        d[v] = u
        used[v] = True
        for e in g[v]:
            if not used[e[1]]:
                heapq.heappush(que, [e[0] + d[v], e[1]])
    return d
n = 4
g = [[[1, 1], [4, 2]], [[1, 0], [2, 2], [5, 3]], [[4, 0], [2, 1], [1, 3]], [[1, 2], [5, 1]]]  # Adjacency list: g[v] holds [cost, destination] pairs
print(dijkstra(0))
[0, 1, 3, 4]
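As a small usage sketch of my own (not part of the original library code): the same example graph can also be built from an edge list, as long as `g` ends up in the format `dijkstra()` expects, namely `g[v]` = list of [cost, destination].

# Sketch (my own addition): building the same example graph from an edge list
edges = [(0, 1, 1), (0, 2, 4), (1, 2, 2), (1, 3, 5), (2, 3, 1)]  # (u, v, cost), undirected
n = 4
g = [[] for _ in range(n)]
for u, v, c in edges:
    g[u].append([c, v])  # g[v] holds [cost, destination]
    g[v].append([c, u])
print(dijkstra(0))  # -> [0, 1, 3, 4]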
Pushing into the priority queue happens $O(E)$ times in total, once per edge. Not pushing edges whose destination already has a fixed shortest distance keeps the queue smaller in practice; even in the worst case it holds at most $O(E)$ entries, and since $\log E = O(\log V)$, each heap operation is $O(\log V)$. The total complexity is therefore $O(E \log V)$.
At first I thought the size of the priority queue would be proportional to the number of edges $E$, and I was not entirely convinced that the pruning brings it down to something proportional to $V$. Writing out small cases concretely shows that it never actually reaches $E$, but I would appreciate it if someone could express this with a simple formula. In any case, when edges exist between every pair of vertices we have $E \propto V^2$, so $O(E \log V)$ is worse than the $O(V^2)$ algorithms there anyway, and perhaps it is not something worth agonizing over!
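As a back-of-the-envelope note of my own (not from the book): even without any pruning, a lazily deleted heap holds at most one entry per edge, and since a simple graph has $E \le V^2$,

$$\log E \le \log V^2 = 2\log V = O(\log V),$$

so each heap operation is still $O(\log V)$ and the overall bound $O(E \log V)$ holds whether the queue size is closer to $V$ or to $E$.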
The example above: at each step, among the vertices that can currently be reached, we select the one with the lowest cost and fix (update) its distance.
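To make this concrete, here is a hand trace of the heap for the example graph above (my own addition; each entry is [distance, vertex]):

- Start at vertex 0: push [1, 1] and [4, 2].
- Pop [1, 1]: fix d[1] = 1, then push [3, 2] and [6, 3].
- Pop [3, 2]: fix d[2] = 3, then push [4, 3].
- Pop [4, 2]: vertex 2 is already fixed, so skip it.
- Pop [4, 3]: fix d[3] = 4.
- Pop [6, 3]: vertex 3 is already fixed, so skip it. The queue is now empty and d = [0, 1, 3, 4].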
I plan to keep updating this as a memorandum. Knowing that these articles are sometimes used as references gives me extra motivation.
[Programming Contest Challenge Book](https://www.amazon.co.jp/%E3%83%97%E3%83%AD%E3%82%B0%E3%83%A9%E3%83%9F%E3%83%B3%E3%82%B0%E3%82%B3%E3%83%B3%E3%83%86%E3%82%B9%E3%83%88%E3%83%81%E3%83%A3%E3%83%AC%E3%83%B3%E3%82%B8%E3%83%96%E3%83%83%E3%82%AF-%E7%AC%AC2%E7%89%88-%EF%BD%9E%E5%95%8F%E9%A1%8C%E8%A7%A3%E6%B1%BA%E3%81%AE%E3%82%A2%E3%83%AB%E3%82%B4%E3%83%AA%E3%82%BA%E3%83%A0%E6%B4%BB%E7%94%A8%E5%8A%9B%E3%81%A8%E3%82%B3%E3%83%BC%E3%83%87%E3%82%A3%E3%83%B3%E3%82%B0%E3%83%86%E3%82%AF%E3%83%8B%E3%83%83%E3%82%AF%E3%82%92%E9%8D%9B%E3%81%88%E3%82%8B%EF%BD%9E-%E7%A7%8B%E8%91%89-%E6%8B%93%E5%93%89-ebook/dp/B00CY9256C)
[Juppy's blog](https://juppy.hatenablog.com/entry/2019/02/18/%E8%9F%BB%E6%9C%AC_python_%E3%83%97%E3%83%A9%E3%82%A4%E3%82%AA%E3%83%AA%E3%83%86%E3%82%A3%E3%82%AD%E3%83%A5%E3%83%BC%28heapq%29%E3%82%92%E7%94%A8%E3%81%84%E3%81%9F%E3%83%97%E3%83%AA%E3%83%A0%E6%B3%95)