This article is a spin-off covering the basic tips you should know when doing competitive programming with Python 2.
The Python version used is **2.7.5** (Python 3 differs considerably in areas such as input/output, so Python 3 users are advised to refer to other articles).
In competitive programming, program execution time, memory usage, stack usage, and so on are often limited.
The exact rules vary by contest, but these limits tend to work against lightweight languages (LL) such as Python, and it is common to get AC with a C++ solution yet TLE with the same algorithm in Python.
For example, many problems on the online judge AtCoder have a time limit of 2 seconds and a memory limit of 256 MB. This article uses those values as reference points when examining each limit.
Computational complexity is a very important concept in competitive programming. In many cases, you can roughly estimate the running time by counting the number of loop iterations.
According to the [Programming Contest Challenge Book](http://www.amazon.co.jp/%E3%83%97%E3%83%AD%E3%82%B0%E3%83%A9%E3%83%9F%E3%83%B3%E3%82%B0%E3%82%B3%E3%83%B3%E3%83%86%E3%82%B9%E3%83%88%E3%83%81%E3%83%A3%E3%83%AC%E3%83%B3%E3%82%B8%E3%83%96%E3%83%83%E3%82%AF-%E7%A7%8B%E8%91%89-%E6%8B%93%E5%93%89/dp/4839931992) (the "Ant Book"), famous in competitive programming circles, for C++:

> If the execution time limit is 1 second, $10^7$ operations will probably finish in time; $10^8$ is tough unless the processing is very simple.

We will use this as the baseline for our measurements.
For example, consider code that loops $10^6$ times and prints the counter.

caltest.py

```python
for i in xrange(1000000):
    print i
```
This execution time is measured using the time command.
```
$ time python caltest.py
0
1
...
999999
python caltest.py  1.49s user 1.11s system 49% cpu 5.237 total
```
Here, user is the CPU time spent executing the program.
At roughly 1.5 seconds it is already close to the 2-second limit, and even slightly heavier processing inside the loop would quickly push it over.
Increasing the loop count to $2 \times 10^6$:
caltest2.py

```python
for i in xrange(2000000):
    print i
```
```
$ time python caltest2.py
0
1
...
1999999
python caltest2.py  3.00s user 2.26s system 50% cpu 10.475 total
```
So even a bare loop that just prints shows that about $2 \times 10^6$ outputs will not fit in the time limit. I recall hearing somewhere that "even $10^6$ loop iterations are tight in Python", and this result supports that.
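One common workaround (my own sketch, not part of the measurements above) is to buffer the output and write it with a single call instead of executing print once per line; in CPython 2 this is usually several times faster:

```python
import sys

# Build the entire output in memory, then write it in one call.
# Avoiding 10^6 separate print statements cuts per-call overhead a lot.
sys.stdout.write('\n'.join(str(i) for i in xrange(1000000)) + '\n')
```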
By the way, doing the same thing in C++:

caltest.cpp

```cpp
#include <cstdio>
using namespace std;

int main() {
    for (int i = 0; i < 2000000; i++) printf("%d\n", i);
    return 0;
}
```
```
$ g++ caltest.cpp
$ time ./a.out
0
1
...
1999999
./a.out  1.27s user 2.10s system 34% cpu 9.714 total
```
gives the result above. Because this particular comparison is dominated by output, the gap is smaller than usual, but C++ is still more than twice as fast.
In general, C++ and Python are said to differ in execution time by a factor of roughly 10 to 100 (see famzah's "C++ vs. Python vs. Perl vs. PHP performance benchmark").
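If you want to compare such costs yourself without reaching for the time command, the standard library's timeit module is handy; a minimal sketch (the exact numbers will of course vary by machine):

```python
import timeit

# Time a bare counting loop with no output: 10 runs of 10^6 iterations.
# timeit.timeit returns the total seconds for `number` executions.
total = timeit.timeit('for i in xrange(1000000): pass', number=10)
print 'average per run: %.3f sec' % (total / 10)
```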
Python applies various internal optimizations to lists, so unlike C++ you cannot simply estimate an array's memory usage from its element type.
Here we use memory_profiler to measure memory usage for a few examples.

```
$ pip install -U memory_profiler
$ pip install psutil
```
For example, to check the memory usage of an integer variable:

memtest.py

```python
@profile
def main():
    l = 0

if __name__ == '__main__':
    main()
```
```
$ python -m memory_profiler memtest.py
Filename: memtest.py

Line #    Mem usage    Increment   Line Contents
================================================
     1    8.555 MiB    0.000 MiB   @profile
     2                             def main():
     3    8.559 MiB    0.004 MiB       l = 0
```
Note that `@profile` must be attached to the function you want to measure.
The "Increment" column shows the amount of memory consumed by that line.
Here you can see that `l = 0` consumes 0.004 MiB (≈ 4 KB) of memory.
Try it with a list as well.

memtest2.py

```python
@profile
def main():
    l = [0] * 1000000

if __name__ == '__main__':
    main()
```
```
$ python -m memory_profiler memtest2.py
Filename: memtest2.py

Line #    Mem usage    Increment   Line Contents
================================================
     1    8.781 MiB    0.000 MiB   @profile
     2                             def main():
     3   16.418 MiB    7.637 MiB       l = [0] * 1000000
```
So it is not simply 1,000,000 times the figure for a single integer; it works out to roughly 8 bytes per element.
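You can cross-check the per-element figure with sys.getsizeof, which reports the size of the list object itself: a header plus one pointer slot per element (the int objects the slots point at are not counted). A quick sketch, assuming a 64-bit CPython 2.7 build:

```python
import sys

l = [0] * 1000000
# Each slot is an 8-byte pointer on 64-bit builds, and every slot points
# at the same cached small-int object, so the ints add almost nothing.
print sys.getsizeof(l)           # about 8000072 bytes
print sys.getsizeof(l) / len(l)  # roughly 8 bytes per element
```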
Trying a two-dimensional list as well:

memtest3.py

```python
@profile
def main():
    l = [[0] * 1000 for _ in xrange(1000)]

if __name__ == '__main__':
    main()
```
```
$ python -m memory_profiler memtest3.py
Filename: memtest3.py

Line #    Mem usage    Increment   Line Contents
================================================
     1    8.742 MiB    0.000 MiB   @profile
     2                             def main():
     3   16.676 MiB    7.934 MiB       l = [[0] * 1000 for _ in xrange(1000)]
```
gives the result above.
With the same total number of elements as the 1D list, memory usage is about the same, with the 2D version slightly higher, presumably because of the per-object overhead of the 1000 inner lists.
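As an aside (a pitfall worth noting here, though not part of the measurement), the inner lists must be created with a comprehension as above; the shorter-looking `[[0] * 1000] * 1000` would copy references, making every row the same list object:

```python
rows = [[0] * 3] * 3                 # three references to ONE inner list
rows[0][0] = 1
print rows                           # [[1, 0, 0], [1, 0, 0], [1, 0, 0]]

rows = [[0] * 3 for _ in xrange(3)]  # three independent inner lists
rows[0][0] = 1
print rows                           # [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```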
Try increasing the number of elements a little.
memtest4.py

```python
@profile
def main():
    l = [0] * 100000000

if __name__ == '__main__':
    main()
```
```
$ python -m memory_profiler memtest4.py
Filename: memtest4.py

Line #    Mem usage    Increment   Line Contents
================================================
     1     8.398 MiB    0.000 MiB   @profile
     2                              def main():
     3   771.344 MiB  762.945 MiB       l = [0] * 100000000
```
And the two-dimensional equivalent:

memtest5.py

```python
@profile
def main():
    l = [[0] * 10000 for _ in xrange(10000)]

if __name__ == '__main__':
    main()
```
```
$ python -m memory_profiler memtest5.py
Filename: memtest5.py

Line #    Mem usage    Increment   Line Contents
================================================
     1     8.383 MiB    0.000 MiB   @profile
     2                              def main():
     3   779.613 MiB  771.230 MiB       l = [[0] * 10000 for _ in xrange(10000)]
```
So the 256 MB limit is blown somewhere between about $10^7$ and $10^8$ elements.
That said, if you find yourself needing a list this large, there is a good chance the approach itself is the problem.
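If a huge flat array of small integers truly is unavoidable, the standard library's array module is one option to consider: it stores raw machine values (4 bytes per signed 32-bit 'i' element) instead of an 8-byte pointer per slot, roughly halving the usage measured above:

```python
from array import array

# 10^7 signed 32-bit zeros: about 40 MB of payload,
# versus roughly 80 MB for the equivalent list of pointers.
a = array('i', [0]) * 10000000
a[0] = 42
print len(a), a[0]
```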
Reference: "About the depth of recursion in Python" (A memorandum of numerical calculation (provisional))
When you write recursive code and feed it large inputs, the recursion depth can exceed the default maximum.
For example, take the following code, which computes the factorial of n as a linear iterative process:
rectest.py

```python
# -*- coding: utf-8 -*-


def fact_iter(n, res):
    if n == 0:
        return res
    else:
        return fact_iter(n - 1, n * res)


if __name__ == '__main__':
    print fact_iter(999, 1)  # should print 999!
```
Running it produces an error like this:

```
...
  File "rectest.py", line 8, in fact_iter
    return fact_iter(n - 1, n * res)
  File "rectest.py", line 8, in fact_iter
    return fact_iter(n - 1, n * res)
  File "rectest.py", line 8, in fact_iter
    return fact_iter(n - 1, n * res)
RuntimeError: maximum recursion depth exceeded
```
```
>>> import sys
>>> sys.getrecursionlimit()
1000
```

As shown above, the recursion depth is limited to 1000 by default. At that level, even a recursive DFS over a mere 100 × 100 two-dimensional grid can easily exceed the limit.
If you want to raise the limit to, say, 10000, the following is enough:

```python
import sys
sys.setrecursionlimit(10000)
```
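To make this concrete (my own sketch, not from the original article): a recursive flood-fill style DFS on a 100 × 100 grid can reach a depth of H * W = 10000 on a snake-shaped path, so it needs the limit raised:

```python
import sys
sys.setrecursionlimit(20000)  # above the worst-case depth H * W, with margin

H, W = 100, 100
visited = [[False] * W for _ in xrange(H)]

def dfs(y, x):
    # Recursively visit every cell reachable from (y, x).
    visited[y][x] = True
    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ny, nx = y + dy, x + dx
        if 0 <= ny < H and 0 <= nx < W and not visited[ny][nx]:
            dfs(ny, nx)

dfs(0, 0)
print all(all(row) for row in visited)  # True: every cell was visited
```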
The "recursion depth" here does not necessarily match the number of recursion (as the programmer thinks).
For example, in the above example, the limit value of the recursion depth is 1000, but 999! (In this case, fact_iter ()
is called 1000 times) cannot be calculated.
So when setting recursionlimit, it seems wise to leave a good amount of headroom above the depth you expect.
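Alternatively, tail-recursive code like fact_iter can always be rewritten as a plain loop, which sidesteps the recursion limit entirely; a sketch of that rewrite:

```python
def fact_loop(n):
    # Same linear process as fact_iter, but the accumulator lives in a
    # local variable instead of a new stack frame per step.
    res = 1
    while n > 0:
        res *= n
        n -= 1
    return res

print fact_loop(999)  # prints 999! without touching the recursion limit
```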