- Target: C, PHP, HHVM, Ruby, Python, PyPy, and our Kinx
- PyPy is now available on my machine, so I added it to the lineup. How did it do?
I read the benchmark in the article "[PHP8] JIT can be used with PHP", which measures Mandelbrot with PHP 8's JIT. Reading it, I thought: this is something our Kinx can also handle with its `native` functions! So this post exists purely because I wanted to try the same benchmark myself.
The fact that I'm writing it up at all means it turned out to be a **win**. Better than expected, even. Well done, Kinx!
By the way, what is Kinx? Please refer to the following.
Various benchmark programs are posted [here](https://gist.github.com/dstogov/12323ad13d3240aee8f1), and the code below is based on them.
Since my environment differs from the original article's, I can't expect the same numbers, so I re-measure everything myself in each language. I couldn't match the language versions exactly to the original article, but I use versions as close as I could get and compare the results against it.
However, as mentioned in the original article, only the PHP version **cheats** a little. To avoid that, and because I/O performance would otherwise interfere with the measurement, I simply removed the output everywhere. The one exception (continuing the paradox) is C, which I compile without optimization: with the output gone, the compiler optimizes so aggressively that there is nothing left to benchmark. Think of it as a handicap. It is still fast.
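To illustrate the C point with a minimal sketch of my own (not part of the benchmark): when a result is simply thrown away, an optimizing build such as `gcc -O2` is free to delete the computation that produced it, while a store to a `volatile` variable must be kept. That is why the listing below keeps `volatile` on the result and is compiled without optimization.

```c
/* Toy example of the dead-code issue, separate from the benchmark itself. */
#include <stdio.h>

static double square_plus_half(double x)
{
    return x * x + 0.5;
}

int main(void)
{
    volatile double kept = 0.0;
    for (int i = 0; i < 1000000; i++) {
        square_plus_half(i / 40.0);        /* result discarded: the call may be optimized away at -O2 */
        kept = square_plus_half(i / 40.0); /* volatile: the store must be performed each iteration */
    }
    printf("%f\n", kept);
    return 0;
}
```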
Execution time is measured with the timer each language provides, started after execution begins, so note that it does not include the time spent parsing and compiling the source. For C that can't be helped anyway, since compilation happens ahead of time. (The `time (real)` column in the results below does include that startup cost.)
One thing that puzzled me: **HHVM feels slow, yet the number it reports is fast**. Concretely, HHVM feels slower than PHP when you run it, but its reported figures beat PHP's.
I confirmed that all versions produce the output below when output is enabled. My Kinx (native) also rendered it as expected.
(The expected output: an ASCII-art rendering of the Mandelbrot set, drawn over 78 rows with `*` characters.)
Now for the main subject: the benchmark itself. First, the program code.
C
The version of gcc.
$ gcc --version
gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0
Copyright (C) 2017 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
C looks like this.
mandelbrot.c
#include <stdio.h>
#include <sys/time.h>
#define BAILOUT 16
#define MAX_ITERATIONS 1000
int mandelbrot(double x, double y)
{
double cr = y - 0.5;
double ci = x;
double zi = 0.0;
double zr = 0.0;
int i = 0;
while(1) {
i ++;
double temp = zr * zi;
double zr2 = zr * zr;
double zi2 = zi * zi;
zr = zr2 - zi2 + cr;
zi = temp + temp + ci;
if (zi2 + zr2 > BAILOUT)
return i;
if (i > MAX_ITERATIONS)
return 0;
}
}
int main (int argc, const char * argv[]) {
struct timeval aTv;
gettimeofday(&aTv, NULL);
long init_time = aTv.tv_sec;
long init_usec = aTv.tv_usec;
int x,y;
for (y = -39; y < 39; y++) {
//printf("\n");
for (x = -39; x < 39; x++) {
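/* volatile keeps the call from being discarded as dead code now that the printing below is commented out */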
volatile int i = mandelbrot(x/40.0, y/40.0);
//if (i==0)
// printf("*");
//else
// printf(" ");
}
}
//printf ("\n");
gettimeofday(&aTv,NULL);
double query_time = (aTv.tv_sec - init_time) + (double)(aTv.tv_usec - init_usec)/1000000.0;
printf ("C Elapsed %0.3f\n", query_time);
return 0;
}
PHP/HHVM
PHP version.
$ php --version
PHP 7.2.24-0ubuntu0.18.04.6 (cli) (built: May 26 2020 13:09:11) ( NTS )
Copyright (c) 1997-2018 The PHP Group
Zend Engine v3.2.0, Copyright (c) 1998-2018 Zend Technologies
with Zend OPcache v7.2.24-0ubuntu0.18.04.6, Copyright (c) 1999-2018, by Zend Technologies
HHVM version.
$ hhvm --version
HipHop VM 3.21.0 (rel)
Compiler: 3.21.0+dfsg-2ubuntu2
Repo schema: ebd0a4633a34187463466c1d3bd327c131251849
PHP and HHVM run the same source code.
mandelbrot.php
<?php
define("BAILOUT",16);
define("MAX_ITERATIONS",1000);
class Mandelbrot
{
function Mandelbrot()
{
$d1 = microtime(1);
for ($y = -39; $y < 39; $y++) {
for ($x = -39; $x < 39; $x++) {
$this->iterate($x/40.0,$y/40.0);
}
}
$d2 = microtime(1);
$diff = $d2 - $d1;
printf("PHP Elapsed %0.3f\n", $diff);
}
function iterate($x,$y)
{
$cr = $y-0.5;
$ci = $x;
$zr = 0.0;
$zi = 0.0;
$i = 0;
while (true) {
$i++;
$temp = $zr * $zi;
$zr2 = $zr * $zr;
$zi2 = $zi * $zi;
$zr = $zr2 - $zi2 + $cr;
$zi = $temp + $temp + $ci;
if ($zi2 + $zr2 > BAILOUT)
return $i;
if ($i > MAX_ITERATIONS)
return 0;
}
}
}
$m = new Mandelbrot();
?>
Ruby
Ruby version.
$ ruby --version
ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux-gnu]
Ruby source code.
mandelbrot.rb
BAILOUT = 16
MAX_ITERATIONS = 1000
class Mandelbrot
def initialize
#puts "Rendering"
for y in -39...39 do
#puts
for x in -39...39 do
i = iterate(x/40.0,y/40.0)
#if (i == 0)
# print "*"
#else
# print " "
#end
end
end
end
def iterate(x,y)
cr = y-0.5
ci = x
zi = 0.0
zr = 0.0
i = 0
while(1)
i += 1
temp = zr * zi
zr2 = zr * zr
zi2 = zi * zi
zr = zr2 - zi2 + cr
zi = temp + temp + ci
return i if (zi2 + zr2 > BAILOUT)
return 0 if (i > MAX_ITERATIONS)
end
end
end
time = Time.now
Mandelbrot.new
#puts
puts "Ruby Elapsed %f" % (Time.now - time)
Python/PyPy
Python version.
$ python --version
Python 2.7.15+
Version of PyPy.
$ pypy --version
Python 2.7.13 (5.10.0+dfsg-3build2, Feb 06 2018, 18:37:50)
[PyPy 5.10.0 with GCC 7.3.0]
Python source code. PyPy is the same.
mandelbrot.py
import sys, time
stdout = sys.stdout
BAILOUT = 16
MAX_ITERATIONS = 1000
class Iterator:
def __init__(self):
#print 'Rendering...'
for y in range(-39, 39):
#stdout.write('\n')
for x in range(-39, 39):
i = self.mandelbrot(x/40.0, y/40.0)
#if i == 0:
#stdout.write('*')
#else:
#stdout.write(' ')
def mandelbrot(self, x, y):
cr = y - 0.5
ci = x
zi = 0.0
zr = 0.0
i = 0
while True:
i += 1
temp = zr * zi
zr2 = zr * zr
zi2 = zi * zi
zr = zr2 - zi2 + cr
zi = temp + temp + ci
if zi2 + zr2 > BAILOUT:
return i
if i > MAX_ITERATIONS:
return 0
t = time.time()
Iterator()
print 'Python Elapsed %.02f' % (time.time() - t)
Kinx/Kinx(native)
Kinx version.
$ kinx -v
kinx version 0.9.2
However, the 0.9.2 release has a bug where `const` values are not propagated correctly inside `native` functions, so I used a build with a partial fix... the fix has already been committed.
Source code for the regular (non-native) Kinx version.
mandelbrot.kx
const BAILOUT = 16;
const MAX_ITERATIONS = 1000;
function mandelbrot(x, y) {
var cr = y - 0.5;
var ci = x;
var zi = 0.0;
var zr = 0.0;
var i = 0;
while (true) {
i++;
var temp = zr * zi;
var zr2 = zr * zr;
var zi2 = zi * zi;
zr = zr2 - zi2 + cr;
zi = temp + temp + ci;
if (zi2 + zr2 > BAILOUT)
return i;
if (i > MAX_ITERATIONS)
return 0;
}
}
var tmr = new SystemTimer();
var x,y;
for (y = -39; y < 39; y++) {
#System.print("\n");
for (x = -39; x < 39; x++) {
var i = mandelbrot(x/40.0, y/40.0);
#if (i==0)
# System.print("*");
#else
# System.print(" ");
}
}
#System.print("\n");
System.print("Kinx Elapsed %0.3f\n" % tmr.elapsed());
Kinx (native) source code. Where a type can be inferred from the computation you don't have to write it, so it is enough to add `:dbl` to the arguments.
mandelbrotn.kx
const BAILOUT = 16;
const MAX_ITERATIONS = 1000;
native mandelbrot(x:dbl, y:dbl) {
var cr = y - 0.5;
var ci = x;
var zi = 0.0;
var zr = 0.0;
var i = 0;
while (true) {
i++;
var temp = zr * zi;
var zr2 = zr * zr;
var zi2 = zi * zi;
zr = zr2 - zi2 + cr;
zi = temp + temp + ci;
if (zi2 + zr2 > BAILOUT)
return i;
if (i > MAX_ITERATIONS)
return 0;
}
}
var tmr = new SystemTimer();
var x,y;
for (y = -39; y < 39; y++) {
#System.print("\n");
for (x = -39; x < 39; x++) {
var i = mandelbrot(x/40.0, y/40.0);
#if (i==0)
# System.print("*");
#else
# System.print(" ");
}
}
#System.print("\n");
System.print("Kinx(native) Elapsed %0.3f\n" % tmr.elapsed());
Here are the results: the average of 10 runs, sorted from fastest to slowest.
(Apologies: there was a problem with the measurement script, so these are re-measured numbers.)
Language | Version | Measured time (s, average of 10 runs) | Wall-clock time (`real`, s, average of 10 runs) |
---|---|---|---|
C | 7.4.0 | 0.018 | 0.046 |
PyPy | 5.10.0 | 0.020 | 0.122 |
Kinx(native) | 0.9.2 | 0.048 | 0.107 |
HHVM | 3.21.0 | 0.068 | 0.552 |
PHP | 7.2.24 | 0.182 | 0.241 |
Ruby | 2.5.1 | 0.365 | 0.492 |
Kinx | 0.9.2 | 0.393 | 0.457 |
Python | 2.7.15 | 0.564 | 0.601 |
Hooray! Kinx (native) came out faster than HHVM, which **feels slower than PHP** despite its numbers. And the regular version was **about as fast as the Ruby VM**, which I had always thought of as fast, so I'm simply happy.
Or rather, **PyPy is remarkably fast**. It is in a different class. Impressive.
The perceived slowness of HHVM also shows up in the table: 0.552 s of wall-clock time against 0.068 s measured, so roughly 0.48 s goes to startup and compilation. That is probably unavoidable given the nature of the language implementation. Kinx (native) likewise pays a compile-time penalty (0.107 s real vs. 0.048 s measured).
Now let's compare with the results from the original article. That benchmark also suppresses output, but the environments differ considerably. The overall trend looked similar... except for HHVM, which is odd. JIT should be on by default, so I compare against the JIT-on figure; conversely, the JIT-off result in the original article is so slow that I'm not sure which is correct. The other languages generally run about twice as fast in my environment.
Language | Version | Measured time (s, average of 10 runs) | Original article's result (s) | Version in original article |
---|---|---|---|---|
C | 7.4.0 | 0.018 | 0.022 | 4.9.2 |
PyPy | 5.10.0 | 0.020 | | |
Kinx(native) | 0.9.2 | 0.048 | | |
HHVM | 3.21.0 | 0.068 | 0.030 | 3.5.0 |
PHP | 7.2.24 | 0.182 | 0.281 | 7 |
Ruby | 2.5.1 | 0.365 | 0.684 | 2.1.5 |
Kinx | 0.9.2 | 0.393 | | |
Python | 2.7.15 | 0.564 | 1.128 | 2.7.8 |
Benchmarking is fun when you come out ahead. The `native` feature is a lot of work to maintain, but it is one of Kinx's defining traits, so I want to keep taking good care of it. I'll keep at it.
See you next time.
P.S. For reference, here is the script I used for measurement. I took the opportunity to try the recently implemented `Process` library. The numbers above are the values reported as `average`.
mandbench.kx
using Process;
var count = 10;
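// command to benchmark, taken from this script's arguments (e.g. the interpreter and the benchmark file)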
var command = [$$[1], $$[2]];
var r = [];
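// pattern for picking the elapsed seconds out of each benchmark's output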
var re = /[0-9]+\.[0-9]+/;
for (var i = 0; i < count; ++i) {
var result = "";
var [r1, w1] = new Pipe();
var p1 = new Process(command, { out: w1 }).run();
w1.close();
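// read the child's stdout until the process exits and the pipe is drained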
while (p1.isAlive() || r1.peek() > 0) {
var buf = r1.read();
if (buf.length() < 0) {
System.println("Error...");
return 1;
} else if (buf.length() > 0) {
result += buf;
} else {
// System.println("no input...");
}
}
re.reset(result);
if (re.find()) {
r.push(Double.parseDouble(re.group[0].string));
}
}
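// sum the collected times and report total, count, and average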
var total = r.reduce(&(r, e) => r + e);
System.println("total : %8.3f" % total);
System.println("count : %8d" % r.length());
System.println("average: %8.3f" % (total / r.length()));