I solved problems A through O. For many of them, though, I referred to other people's code or searched online. Since I am posting the code that got AC almost as-is, there is probably some redundant code and commentary left in.
A: If the input is a string rather than a number, output 'error'. Easy with try.
# A
try:
    print(int(input()) * 2)
except:
    print('error')
B: Handle case by case whether the value went up or down. Just write it out honestly.
# B
N = int(input())
A_last = int(input())
for i in range(N-1):
    A = int(input())
    if A_last == A:
        print('stay')
    elif A_last > A:
        print('down {}'.format(A_last - A))
    elif A_last < A:
        print('up {}'.format(A - A_last))
    A_last = A
C: Just sort and take the third element.
# C
data = sorted(list(map(int, input().split())), reverse=True)
print(data[2])
D: Compare the set $1 \sim N$ with the numbers read from standard input. The single element missing from the input is the one that was overwritten and disappeared, and the element that appears the most times (twice here) is the value it was overwritten with.
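To make the idea concrete, here is a minimal sketch with made-up numbers (not a judge input); the names counts, missing and duplicated are mine, and the actual solution follows below.

import collections
counts = collections.Counter([1, 2, 2, 4, 5])        # N = 5, one value was overwritten
missing = ({1, 2, 3, 4, 5} - set(counts)).pop()      # 3: the value that disappeared
duplicated = max(counts, key=lambda k: counts[k])    # 2: the value written twice
print(duplicated, missing)                           # 2 3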
# D
N = int(input())
arr = {i+1 for i in range(N)}
As = [int(input()) for i in range(N)]
import collections
d = dict(collections.Counter(As))
try:
    removed = list(arr - set(d.keys()))[0]
    added = max(d.keys(), key=lambda k: d[k])
    print(added, removed)
except:
    print('Correct')
E: Create a matrix that stores the follow relationships between users, initializing every element to 'N'. After that, simply update the matrix according to each query.
# E
N, Q = list(map(int, input().split()))
Ss = [input().split() for i in range(Q)]
mat = {str(i+1): {str(j+1): 'N' for j in range(N)} for i in range(N)}
for q in range(Q):
    inp = Ss[q]
    if inp[0] == '1':
        mat[inp[1]][inp[2]] = 'Y'
    elif inp[0] == '2':
        users = [user for user in mat.keys() if mat[user][inp[1]] == 'Y']
        for user in users:
            mat[inp[1]][user] = 'Y'
    else:
        follows = [user for user in mat[inp[1]].keys() if mat[inp[1]][user] == 'Y']
        follow_targets = set([])
        for follow in follows:
            follow_targets |= set([user for user in mat[follow].keys() if mat[follow][user] == 'Y'])
        for target in follow_targets:
            mat[inp[1]][target] = 'Y'
for i in range(N):
    mat[str(i+1)][str(i+1)] = 'N'
    print(''.join([mat[str(i+1)][str(j+1)] for j in range(N)]))
F: Words are extracted by recording the indices of the uppercase letters and taking them in consecutive pairs (each word starts and ends with an uppercase letter). After that, sorting them appropriately is enough; a small sketch is shown below, followed by the solution.
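A small illustration of that extraction on a made-up string (not a judge input); the names ids and words are mine.

S = 'DoGCaT'
ids = [i for i, ch in enumerate(S) if ch.isupper()]              # [0, 2, 3, 5]
words = [S[ids[n]:ids[n+1] + 1] for n in range(0, len(ids), 2)]  # pair up the indices
print(words)                                                     # ['DoG', 'CaT']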
# F
S = input()
ids = [i for i in range(len(S)) if S[i].isupper()]
N = len(ids)
strs = [S[ids[n]:ids[n+1]+1] for n in range(0, N, 2)]
sorted_strs = sorted(strs, key=lambda s: s.lower())
print(''.join(sorted_strs))
G: Since the number of people and the number of groups are small ($3^{10}$ patterns, because 10 people are split into 3 groups), it is possible to evaluate all patterns.
# G
N = int(input())
As = [list(map(int, input().split())) for i in range(N-1)]
import itertools
d = {'{}:{}'.format(elem[0], elem[1]): As[elem[0]][elem[1] - elem[0] - 1] for elem in itertools.combinations(range(N), 2)}
g_size = 3
def div_to_groups(comb):
    return [[i for i in range(N) if comb[i] == g+1] for g in range(g_size)]
combs = list(itertools.product(range(1, g_size+1), repeat=N))
combs = map(div_to_groups, combs)
def sumup_group_happiness(group):
    def get_happiness(pair):
        return d['{}:{}'.format(pair[0], pair[1])]
    return sum(map(get_happiness, itertools.combinations(group, 2)))
def sumup_all_groups(comb):
    return sum(map(sumup_group_happiness, comb))
print(max(map(sumup_all_groups, combs)))
H: Keep the amount of work down by maintaining the minimum remaining count among the odd-numbered cards and the minimum remaining count among all cards. When query 1 arrives, decrease the count of the corresponding card and, if necessary, update each minimum against the recorded value. When query 2 arrives, check the current odd-card minimum is large enough and subtract the number sold from it. Query 3 does the same with the minimum over all cards.
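In my reading of the code below, the per-card counts in Cs only record query-1 sales, while sub_all and sub_odd accumulate what queries 3 and 2 subtract uniformly; so the true remaining stock of card $i$ is Cs[i] - sub_odd if $i$ is odd and Cs[i] - sub_all if $i$ is even, and only these offsets and the two minima need to be updated per query.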
# H
N = int(input())
Cs = {i+1: v for i, v in zip(range(N), map(int, input().split()))}
Q = int(input())
Ss = [list(map(int, input().split())) for i in range(Q)]
sold = 0
min_all, min_odd = min(Cs.values()), min(list(Cs.values())[::2])
sub_all, sub_odd = 0, 0
for s in Ss:
    if s[0] == 1:
        sell = s[2]
        if s[1] % 2 == 0 and Cs[s[1]] - sub_all >= sell:
            Cs[s[1]] -= sell
            sold += sell
            min_all = min(min_all, Cs[s[1]] - sub_all)
        elif Cs[s[1]] - sub_odd >= sell:
            Cs[s[1]] -= sell
            sold += sell
            min_odd = min(min_odd, Cs[s[1]] - sub_odd)
            min_all = min(min_all, min_odd)
    elif s[0] == 2:
        sell = s[1]
        if min_odd >= sell:
            min_odd -= sell
            sub_odd += sell
            min_all = min(min_all, min_odd)
            sold += sell * int((N+1)/2)
    else:
        sell = s[1]
        if min_all >= sell:
            min_all -= sell
            sub_all += sell
            min_odd -= sell
            sub_odd += sell
            sold += sell * N
print(sold)
I: Represent owning or not owning each part as a bit, and treat the whole bit string as a state. The initial state owns nothing (all bits 0); each purchase ORs the product's bit string into the current state. A transition is taken when the total cost of making that purchase from the current state is cheaper than the cost already recorded at the destination state.
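As a minimal sketch of the state encoding (the string 'YNY' here is made up, not from the input): a 'Y'/'N' string becomes a bit mask, and buying a product ORs its mask into the current state. The full DP follows.

state = 0                                                   # initial state: own nothing
mask = int('YNY'.replace('Y', '1').replace('N', '0'), 2)    # 'YNY' -> 0b101
state |= mask                                               # state after buying that product
print(bin(state))                                           # 0b101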
# I
N, M = list(map(int, input().split()))
SCs = [input().split() for i in range(M)]
SCs = [{'mask': int(sc[0].replace('Y', '1').replace('N', '0'), 2), 'price': int(sc[1])} for sc in SCs]
size = 1 << N
max_price = sum([p['price'] for p in SCs]) + 1
price = {}
price[0] = 0
for i in range(size):
    if i in price.keys():
        for sc in SCs:
            price[i | sc['mask']] = min(price.get(i | sc['mask'], max_price), price[i] + sc['price'])
print(price.get(size-1, -1))
J: Compute the shortest-path cost from the lower-left corner, the lower-right corner, and the upper-right corner to each cell, and subtract the cost of the part of the route that is counted more than once. Doing this for every cell and taking the minimum gives the answer. I used scipy for the shortest paths; it needs an adjacency matrix, so I built one on the spot. (I suspect there is a library that builds adjacency matrices for grids...)
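Concretely, in my reading of the code below, if $A_v$ is the cost of cell $v$ and $d_{\mathrm{ll}}(v)$, $d_{\mathrm{lr}}(v)$, $d_{\mathrm{ur}}(v)$ are the shortest-path costs from the three corners, the answer is $\min_v \left(d_{\mathrm{ll}}(v) + d_{\mathrm{lr}}(v) + d_{\mathrm{ur}}(v) - 2A_v\right)$, since the cell where the three paths meet would otherwise be counted three times.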
# J
H, W = list(map(int, input().split()))
A = [list(map(int, input().split())) for i in range(H)]
from scipy.sparse.csgraph import dijkstra, csgraph_from_dense
import numpy as np
def get_adj_matrix(A, H, W):
    size = H * W
    ret = [[10**9] * size for i in range(size)]
    for i in range(size):
        h_i = int(i / W)
        w_i = i % W
        for add in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
            h_j = h_i + add[0]
            w_j = w_i + add[1]
            j = h_j * W + w_j
            if h_j < 0 or h_j >= H or w_j < 0 or w_j >= W:
                continue
            else:
                ret[i][j] = A[h_j][w_j] + 1.0e-15 / size
    return ret
csr = csgraph_from_dense(get_adj_matrix(A, H, W), null_value=10**9)
start = (H-1)*W
mid = H*W-1
end = W-1
from_start = dijkstra(csr, indices=[start])
from_mid = dijkstra(csr, indices=[mid])
from_end = dijkstra(csr, indices=[end])
double_counts = [-A[h][w]*2 for h in range(H) for w in range(W)]
arr = np.array([from_start, from_mid, from_end, double_counts])
print(int(arr.sum(axis=0).min()))
K: Perform a depth-first search, counting for each node the number of nodes in the subtree rooted there. In the array that records the nodes in depth-first order, a subtree is stored contiguously, as [..., root, descendants of that root, other branches, ...]. So it is enough to check whether the index of the employee in question lies between the boss's index and the boss's index plus the size of the boss's subtree. A minimal sketch of this check is shown below, followed by the solution.
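A minimal sketch of that interval check on a made-up 5-node tree (node 1 is the root, nodes 2 and 5 are its children, and nodes 3 and 4 are children of node 2); the names order, loc, size and is_subordinate are mine.

order = [1, 2, 3, 4, 5]                        # depth-first visiting order
loc = {v: i for i, v in enumerate(order)}      # position of each node in that order
size = {1: 5, 2: 3, 3: 1, 4: 1, 5: 1}          # number of nodes in each subtree
def is_subordinate(a, b):
    # a lies in b's subtree exactly when a's position falls strictly between
    # b's position and b's position plus the size of b's subtree
    return loc[b] < loc[a] < loc[b] + size[b]
print(is_subordinate(3, 2), is_subordinate(5, 2))   # True False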
# K
N = int(input())
connections = [[i+1, int(input())] for i in range(N)]
Q = int(input())
ABs = [list(map(int, input().split())) for i in range(Q)]
import sys
sys.setrecursionlimit(10**6)
class tree:
    # Depth-first search over the boss/subordinate tree
    def __init__(self, connections):
        """
        connections:
        * an array whose elements are [from, to]
        * from is the connection source, to is the connection destination
        * the root's to is -1 (as given in the problem)
        """
        self.parent_to_child = {i+1: [] for i in range(len(connections))}
        for connection in connections:
            if connection[1] == -1:
                self.root = connection[0]
            else:
                self.parent_to_child[connection[1]].append(connection[0])
        self.structure = []
        self.nodeid_to_strucure_loc = {}
        self.nodeid_to_num_descendant = {}
    def compute_depth_first_sub_tree_from(self, parent):
        before_counting_descendant = len(self.structure)
        self.nodeid_to_strucure_loc[parent] = before_counting_descendant
        self.structure.append(parent)
        children = self.parent_to_child.get(parent, [])
        for child in children:
            self.compute_depth_first_sub_tree_from(child)
        after_counting_descendant = len(self.structure)
        self.nodeid_to_num_descendant[parent] = after_counting_descendant - before_counting_descendant
    def compute_depth_first_tree(self):
        self.compute_depth_first_sub_tree_from(self.root)
t = tree(connections)
t.compute_depth_first_tree()
for ab in ABs:
    # Because the nodes were recorded in depth-first order, a is in b's subtree
    # exactly when a's position lies between b's position (loc_b) and
    # loc_b + (number of nodes in b's subtree).
    loc_a = t.nodeid_to_strucure_loc[ab[0]]
    loc_b = t.nodeid_to_strucure_loc[ab[1]]
    shift = t.nodeid_to_num_descendant[ab[1]]
    print('Yes' if loc_b < loc_a < loc_b + shift else 'No')
L: Since the number of small towers is small (5), it suffices to build a minimum spanning tree for every pattern of which small towers to include (at most $2^5 = 32$ subsets) and take the minimum cost.
# L
N, M = list(map(int, input().split()))
xycs = list([list(map(int, input().split())) for i in range(N)])
XYCs = list([list(map(int, input().split())) for i in range(M)])
import itertools
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree
def compute_cost(x1, x2):
    r = ((x1[0] - x2[0]) ** 2 + (x1[1] - x2[1]) ** 2) ** .5
    return r if x1[2] == x2[2] else 10 * r
cases = []
for i in range(M+1):
    cases.extend(itertools.combinations(XYCs, r=i))
res = 10**30
for case in cases:
    data = xycs[:]
    data.extend(case)
    size = len(data)
    cost_matrix = [[compute_cost(data[frm], data[to]) for to in range(size)] for frm in range(size)]
    csr = csr_matrix(cost_matrix)
    res = min(res, minimum_spanning_tree(csr).sum())
print('{:.10f}'.format(res))
M: This follows the official explanation video as-is. For each of the helper monsters, add that one helper to your own $N$ monsters and find the best selection with binary search; the answer is the best over all such combinations.
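The fact the binary search relies on (a sketch of my reading of the code below, where a team $S$ of five monsters is chosen from your $N$ plus the one helper, each monster having values $A_i$ and $B_i$): $\frac{\sum_{i \in S} B_i}{\sum_{i \in S} A_i} \geq X$ holds exactly when $\sum_{i \in S} (B_i - X A_i) \geq 0$, so for a fixed $X$ it is enough to check whether the five largest values of $B_i - X A_i$ add up to something positive, and a binary search over $X$ finds the largest achievable ratio.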
# M
N, M = list(map(int, input().split()))
monsters = [list(map(int, input().split())) for i in range(N)]
helpers = [list(map(int, input().split())) for i in range(M)]
import numpy as np
def find_BS(X_lower, X_higher, eval_function, max_iter=100):
    iter_num = 0
    while iter_num <= max_iter:
        X = (X_higher + X_lower) / 2
        res = eval_function(X)
        if not res:
            X_higher = X
        else:
            X_lower = X
        iter_num += 1
    return X
def get_score(monsters):
    np_arr_monsters = np.array(monsters)
    A = np_arr_monsters.T[0]
    B = np_arr_monsters.T[1]
    def eval_func(X):
        sorted_val = sorted(B - X * A)
        upper = sorted_val[-5:]
        return sum(upper) > 0
    X_lower = 0
    X_higher = max(monsters, key=lambda m: m[1])[1]
    max_iter = 25
    X = find_BS(X_lower, X_higher, eval_func, max_iter)
    Y = B - X * A
    targets = np.array(sorted(np.array([A, B, Y]).T, key=lambda a: a[2])[-5:]).T
    return sum(targets[1]) / sum(targets[0])
scores = [get_score(monsters + [helper]) for helper in helpers]
print('{:.10f}'.format(max(scores)))
N: When a query $(l, r, p)$ arrives, split it into the two events $(l, p)$ and $(r+c, -p)$, then sweep the coordinates from $0$ to $W$ while consuming the events. An event $(a, b)$ means that the cost changes by $b$ for positions from coordinate $a$ onward; this captures the rule that passing the car through $(w-c, w)$ with $l \leq w \leq r+c$ incurs that query's land-preparation cost. So sort the events in ascending order of coordinate, sweep up to coordinate $W$, and take the smallest cost seen.
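For example (numbers made up for illustration, not from the problem input): with $c = 2$ and a single query $(l, r, p) = (3, 5, 10)$, every position $w$ with $3 \leq w \leq 7$ has to pay the cost $10$, so the query becomes the events $(3, +10)$ and $(7, -10)$, and the running sum during the sweep gives the cost at each candidate position.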
# N
n, w, c = list(map(int, input().split()))
events = [[c, 0]]
for i in range(n):
    l, r, p = list(map(int, input().split()))
    events.extend([[l, p], [r+c, -p]])
events.sort()
ans = 10**100
cost = 0
for loc, price in events:
    if loc > w:
        break
    cost += price
    if c <= loc:
        ans = min(ans, cost)
print(ans)
O: This one left quite an impression on me. The following two links were used as references.
https://atcoder.jp/contests/past201912-open/submissions/12871791
https://rsk0315.hatenablog.com/entry/2019/12/29/051900
Based on the number just rolled, consider the strategy of next choosing the die that maximizes the expected number of further rolls. This is formulated as follows. If you roll a die and the number $X$ comes up, the expected number of further rolls $n_X$ is
n_X = \frac{1}{6}\left(1 + \max_d E_{Y_d}\left[n_{Y_d}\delta(Y_d \gt X)\right]\right).
Here, $d$ indexes which die is chosen, and $Y_d$ is the random variable for the outcome of die $d$. The intuition is:

- $\frac{1}{6}$ corresponds to the probability that the particular number $X$ comes up.
- The $1$ inside the parentheses corresponds to the one roll on which $X$ itself was obtained.
- $E_{Y_d}\left[n_{Y_d}\delta(Y_d \gt X)\right]$ is the average number of further rolls when die $d$ shows a number greater than $X$.
- $\max_d$ corresponds to choosing as the next die the one with the largest such average.

Putting this intuition into code:
# O
N = int(input())
dice_eyes = []
for dice in range(N):
    dice_eyes.extend([{'eye_value': dice_eye, 'dice_id': dice} for dice_eye in map(int, input().split())])
sorted_eyes = sorted(dice_eyes, key=lambda eye: eye['eye_value'], reverse=True)
# Expected number of further rolls for each die (at the current point of the loop)
Exp_dices = [0] * N
# Maximum expected number of further rolls (at the current point of the loop)
max_Exp = 0
for eye in sorted_eyes:
    # Compute n_X for this face. Faces are processed in descending order of value,
    # so the maximum expected value available after beating this face is exactly max_Exp.
    exp_length_of_eye = (1 + max_Exp) / 6.
    Exp_dices[eye['dice_id']] += exp_length_of_eye
    max_Exp = max(max_Exp, Exp_dices[eye['dice_id']])
print('{:.10f}'.format(max_Exp + 1))
I would like to say I finished the whole set (I wanted to be able to say that), but I could only solve about half of it on my own, so I felt I should practice more before the 3rd test at the end of the month.