"It scales very much" or something that you write well. I was impressed with the meaning in the news below, so I decided to try Grumpy (Go running Python). Mynavi News: Google develops a very scaled Python execution environment with Go
The "Very Scale" headline seems to have been extracted from the original blog post, so let's skim the original blog as well.
https://opensource.googleblog.com/2017/01/grumpy-go-running-python.html
《Very Scale》 article's description: "Grumpy" is still at the experimental stage, but compared to CPython it scales well in a Fibonacci benchmark.
=> The corresponding description on the original blog:
In particular, Grumpy has no global interpreter lock, and it leverages Go’s garbage collection for object lifetime management instead of counting references. We think Grumpy has the potential to scale more gracefully than CPython for many real world workloads. Results from Grumpy’s synthetic Fibonacci benchmark demonstrate some of this potential:
For the Global Interpreter Lock (GIL), the [Japanese Wikipedia article](https://ja.wikipedia.org/wiki/%E3%82%B0%E3%83%AD%E3%83%BC%E3%83%90%E3%83%AB%E3%82%A4%E3%83%B3%E3%82%BF%E3%83%97%E3%83%AA%E3%82%BF%E3%83%AD%E3%83%83%E3%82%AF) is also easy to follow. Because Python is combined with non-thread-safe C code, a single per-interpreter lock limits multithreaded scalability. The benchmark results posted in the original blog are as follows.
You can see that Grumpy is slower single-threaded, but its throughput scales linearly with the number of threads. This result, however, comes at the cost of dropping support for C extension modules.
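The GIL's effect is easy to see in plain CPython. In this minimal sketch (my own illustration, not from the original blog), two threads doing CPU-bound work take roughly as long as one thread doing the same work twice, because only one thread can execute Python bytecode at a time:

```python
# Minimal sketch of the GIL limiting CPU-bound scaling in CPython.
# fib() and the workload size are illustrative choices.
import threading
import time


def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)


def run_serial():
    start = time.time()
    fib(22)
    fib(22)
    return time.time() - start


def run_threaded():
    start = time.time()
    threads = [threading.Thread(target=fib, args=(22,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.time() - start


# Under CPython the two timings come out roughly equal;
# Grumpy (backed by goroutines, no GIL) is where threads can actually help.
print("serial:   %.3f s" % run_serial())
print("threaded: %.3f s" % run_threaded())
```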
(Original blog) First, we decided to forgo support for C extension modules. This means that Grumpy cannot leverage the wealth of existing Python C extensions but it gave us a lot of flexibility to design an API and object representation that scales for parallel workloads ..
I thought the following description was quite misleading, so let's check the actual situation.
Description in 《Very Scale》: Go was originally developed with high performance in parallel processing in mind, and it excels at it. Grumpy still needs time to mature in terms of compatibility, but given its strong multi-core/multi-processor performance, it could become an important Python execution environment in the future.
Former blog =>
Second, Grumpy is not an interpreter. Grumpy programs are compiled and linked just like any other Go program. ... the biggest advantage is that interoperability with Go code becomes very powerful and straightforward: Grumpy programs can import Go packages just like Python modules!
(My paraphrase:) The Go programs Grumpy generates from Python are compiled and linked like any other Go program. The biggest advantage of this is interoperability with Go: Grumpy programs can import Go packages just like Python modules.
When Grumpy gave up on calling C code, it probably also gave up on the goal of running existing Python code unchanged on Go. I think future development around Grumpy will head not toward running existing Python frameworks such as Django, but toward making it easier to combine Go libraries with Python code.
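For illustration, that Go interop looks like the fragment below. Note this only compiles under Grumpy's `grumpc` toolchain, not CPython, and the `__go__.strings` binding is my assumption, modeled on the `__go__.time` import used in Grumpy's own `lib/time.py`:

```
# Grumpy-only fragment: import a Go package as if it were a Python module.
from __go__.strings import ToUpper  # Go's strings.ToUpper (assumed binding)

print(ToUpper("hello"))  # uppercased by the Go standard library
```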
I wanted to know what code Grumpy spits out, so I tried running Grumpy. The original code looks like this.
benchmark1.py
```python
# -*- coding: utf-8 -*-
import time

if __name__ == '__main__':
    start = time.time()
    for i in range(1, 50):
        print (i, 100.0 / (i * i * i))
    elapsed_time = time.time() - start
    print ("------- result ------")
    print (elapsed_time * 1000, "ms")
```
Execution time under CPython is about 1.3 ms.
```
$ python benchmark1.py
(1, 100.0)
(2, 12.5)
.. (omitted) ..
(49, 0.0008499859752314087)
------- result ------
(1.3232231140136719, 'ms')
```
Now to install Grumpy. On Linux (Ubuntu 16.04) with Go, Python, and gcc already present, it was easy to set up by following the instructions on the GitHub site.
```
git clone https://github.com/google/grumpy
cd grumpy
make
export GOPATH=$PWD/build
export PYTHONPATH=$PWD/build/lib/python2.7/site-packages
tools/grumpc benchmark1.py > benchmark1.go
go build benchmark1.go
```
Try running the benchmark1 binary created from benchmark1.go.
```
$ ./benchmark1
(1, 100)
(2, 12.5)
.. (omitted) ..
(49, 0.0008499859752314087)
------- result ------
(5.621671676635742, 'ms')
```
... as expected, it takes longer than CPython.
The Go source code generated by Grumpy is long, so I've put it at the link below. At this point it doesn't look like something meant for human eyes.
https://gist.github.com/kmry/906a8154a37bb0060d35eee20a3e06ca
If you're curious about Grumpy's Python-on-Go code, it's a good idea to browse the code on GitHub first. The Go implementations of the Python standard library are also reasonably far along.
For example, if you look at the time module used above to measure execution time, https://github.com/google/grumpy/search?q=time you can see it is implemented as a thin wrapper over Go's standard library, as shown below.
grumpy/lib/time.py
"""Time access and conversions."""
from __go__.time import Now, Second, Sleep # pylint: disable=g-multiple-import
def sleep(secs):
Sleep(secs * Second)
def time():
return float(Now().UnixNano()) / Second
If a "wow person" appears as an extension of these efforts, a WAF-compatible framework for Python may appear on grumpys. #Unguaranteed :)
Once a little more information is out there, it could be fun to write parallel code in Python using Go libraries.