Revisiting the "average of the sums from 1 to 10" and its execution speed

Introduction

The other day I came back to Python, so I went back to basics and tried a simple exercise: computing the "average of the sums from 1 to 10". This post introduces the solutions I came up with.

Normal pattern code

# -*- coding: utf-8 -*-
def normal_average(start, end):
    # Initialize to 0.0 so the average is computed as a float
    total = 0.0
    size = end - start + 1
    for i in range(start, end + 1):
        total += i
    print("normal_average = %f" % (total / size))

start = 1
end = 10
normal_average(start, end)
Result
normal_average = 5.500000

The `+ 1` parts feel a bit clumsy, but that's fine; I'm a Python beginner, so I'll forgive myself the rough edges.
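As a side note (my addition, not in the original post), Python's built-in `sum()` over a range gives the same result without the manual loop; the helper name `builtin_average` is just for illustration:

```python
# Sketch of an alternative: the built-in sum() over range(start, end + 1)
# replaces the manual accumulation loop.
def builtin_average(start, end):
    size = end - start + 1
    return sum(range(start, end + 1)) / float(size)

print("builtin_average = %f" % builtin_average(1, 10))  # builtin_average = 5.500000
```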

Another pattern of code

def another_average(start, end):
    # For consecutive integers, the average is simply the midpoint
    print("another_average = %f" % ((start + end) / 2.0))

start = 1
end = 10
another_average(start, end)
Result
another_average = 5.500000

I figured the average of consecutive integers shouldn't need a loop at all, so I wrote it this way. I wonder whether this kind of solution is also acceptable. The difference in processing speed should grow as the numbers get larger.
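For context (my addition, not in the original post), this works because of the arithmetic-series identity: the sum of the integers from `start` to `end` is `n * (start + end) / 2` where `n = end - start + 1`, so dividing by `n` leaves just the midpoint `(start + end) / 2`. A quick sketch confirming the two approaches agree:

```python
# The loop version and the closed-form version give identical results
# for any range of consecutive integers.
def loop_average(start, end):
    total = 0.0
    for i in range(start, end + 1):
        total += i
    return total / (end - start + 1)

def closed_form_average(start, end):
    # average of an arithmetic series = midpoint of first and last term
    return (start + end) / 2.0

for s, e in [(1, 10), (5, 99), (-3, 7)]:
    assert loop_average(s, e) == closed_form_average(s, e)
```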

Processing speed measurement

The larger the numbers, the bigger the difference, so this time I'll compute the average of the sums from 1 to 100 million.

# -*- coding: utf-8 -*-
import time

def normal_average(start, end):
    total = 0.0
    size = end - start + 1
    for i in range(start, end + 1):
        total += i
    print("normal_average = %f" % (total / size))

def another_average(start, end):
    print("another_average = %f" % ((start + end) / 2.0))


# Main
start = 1
end = 100000000

# Measure the execution time of each approach
normal_start_time = time.time()
normal_average(start, end)
normal_end_time = time.time()
print("execute time is %f" % (normal_end_time - normal_start_time))
print()

another_start_time = time.time()
another_average(start, end)
another_end_time = time.time()
print("execute time is %f" % (another_end_time - another_start_time))
Result
normal_average = 50000000.500000
execute time is 12.961456

another_average = 50000000.500000
execute time is 0.000006

("Overwhelming, isn't it, our army...!") The difference in processing speed is obvious.
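A quick aside (my addition, not in the original post): `time.time()` measures a single run, so the result can be noisy. The standard-library `timeit` module averages over many repetitions, which is more reliable for very fast snippets like the closed-form version:

```python
import timeit

def closed_form_average(start, end):
    # same constant-time formula as another_average, returning the value
    return (start + end) / 2.0

# Run the snippet 1000 times and divide to get an average per-call time
elapsed = timeit.timeit(lambda: closed_form_average(1, 100000000), number=1000)
print("per-call time: %.9f seconds" % (elapsed / 1000))
```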

Conclusion

The lesson is that rethinking the logic itself can yield overwhelmingly faster results. Of course, this closed-form solution is not without caveats, but at least for the "average of a sum of consecutive integers" it is far faster than looping. It was a good reminder that changing your viewpoint is an important way to solve a problem.
