Since the Monte Carlo method was useless, I did the math properly. If the expected value we want is n, then n = p * (1 + n) + (1 - p) * 0, so n = p / (1 - p).
p = float(input())
print(p / (1 - p))
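The closed form can still be sanity-checked with the Monte Carlo approach that lacked the precision to be the answer itself. A minimal sketch, assuming the underlying process is "repeat a trial that succeeds with probability p until the first failure, and count the successes" (the function names are mine, not from the original):

```python
import random

def expected_successes(p: float) -> float:
    # Closed form from the derivation: n = p * (1 + n)  =>  n = p / (1 - p)
    return p / (1 - p)

def simulate(p: float, trials: int = 100_000, seed: int = 0) -> float:
    # Empirical mean of "successes before the first failure" over many trials.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        while rng.random() < p:
            total += 1
    return total / trials
```

For p = 0.5 the closed form gives 1.0, and the simulation should land close to it, though only to a couple of decimal places, which is presumably why it was "useless" here.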
If you search for "non-strict monotonic increase competitive programming", code for POJ-3666: Making the Grade turns up, so you can comment out its non-increasing half and submit the rest (cruel).
Addendum: solved it properly. Let dp[i-1][x] be the minimum total distance moved so far when Y_{i-1} ends up at x after moving; then dp[i][x] = min(dp[i-1][0], dp[i-1][1], ..., dp[i-1][x]) + abs(Y_i - x).
N = int(input())
Y = list(map(int, input().split()))
dp = [[0] * (10 ** 4 + 1) for _ in range(N)]
for i in range(N):
    t = float('inf')
    for j in range(10 ** 4 + 1):
        # t is the running prefix minimum of the previous row;
        # for i == 0, dp[i - 1] is the still-all-zero last row, which serves as the base case
        t = min(t, dp[i - 1][j])
        dp[i][j] = t + abs(Y[i] - j)
print(min(dp[N - 1]))
In Python this TLEs and only passes under PyPy. Incidentally, although it is written as 2D, it works in 1D: each dp[i][j] only reads the prefix minimum of the previous row, which is already folded into t, so a single row can be updated in place. Also, there is no problem restricting the destination of each Y_i to one of the coordinates Y_1, ..., Y_N. Rewritten with these two points in mind, the code passes even in Python.
N = int(input())
Y = list(map(int, input().split()))
a = sorted(set(Y))
dp = [0] * len(a)
for i in range(N):
    t = float('inf')
    for j in range(len(a)):
        # reading dp[j] before overwriting it keeps t as the prefix min of the previous row
        t = min(t, dp[j])
        dp[j] = t + abs(Y[i] - a[j])
print(min(dp))
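As a quick check, the same 1D DP can be wrapped in a function and run against the POJ 3666 sample (7 elevations: 1 3 2 4 5 3 9, expected answer 3); the function name is mine, not from the original code:

```python
def min_total_move(Y):
    # Minimum total |movement| to make Y non-decreasing, restricting
    # each destination to one of the original coordinates of Y.
    a = sorted(set(Y))
    dp = [0] * len(a)
    for y in Y:
        t = float('inf')
        for j in range(len(a)):
            t = min(t, dp[j])            # prefix min of the previous row
            dp[j] = t + abs(y - a[j])
    return min(dp)

print(min_total_move([1, 3, 2, 4, 5, 3, 9]))  # -> 3
```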