I need to measure the execution time of a Python program having the following structure:
import numpy
import pandas

def func1():
    code

def func2():
    code

if __name__ == '__main__':
    func1()
    func2()
If I want to use time.time(), where should I put the calls in the code? I want to get the execution time for the whole program.
Alternative 1:
import time
start = time.time()

import numpy
import pandas

def func1():
    code

def func2():
    code

if __name__ == '__main__':
    func1()
    func2()
    end = time.time()
    print("The execution time is", end - start)
Alternative 2:
import numpy
import pandas

def func1():
    code

def func2():
    code

if __name__ == '__main__':
    import time
    start = time.time()
    func1()
    func2()
    end = time.time()
    print("The execution time is", end - start)
In Linux you can run this file test.py using the time command:
time python3 test.py
After your program finishes, it will print output like the following:
real 0m0.074s
user 0m0.004s
sys 0m0.000s
real is the elapsed wall-clock time, user is the CPU time spent in user mode, and sys is the CPU time spent in the kernel on behalf of the process.
The whole program:
import time
t1 = time.time()

import numpy
import pandas

def func1():
    code

def func2():
    code

if __name__ == '__main__':
    func1()
    func2()
    t2 = time.time()
    print("The execution time is", t2 - t1)
Related
In the following simple program, the callback passed to pool.map_async() does not seem to work properly. Could someone point out what is wrong?
import os
import multiprocessing
import time

def cube(x):
    return "{}^3={}".format(x, x**3)

def prt(value):
    print(value)

if __name__ == "__main__":
    pool = multiprocessing.Pool(3)
    start_time = time.perf_counter()
    result = pool.map_async(cube, range(1,1000), callback=prt)
    finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")
$ python3 /var/tmp/cube_map_async_callback.py
Program finished in 0.0001492840237915516 seconds
$
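map_async() returns immediately with an AsyncResult, so the script reaches the final print and exits while the workers are still running; the pool is torn down before the callback ever fires. One way to make the parent wait is to close and join the pool (calling result.wait() would also work); a sketch:

import multiprocessing
import time

def cube(x):
    return "{}^3={}".format(x, x**3)

def prt(value):
    # the callback is invoked once, with the full list of results
    print(value)

if __name__ == "__main__":
    pool = multiprocessing.Pool(3)
    start_time = time.perf_counter()
    result = pool.map_async(cube, range(1, 1000), callback=prt)
    pool.close()  # no further tasks will be submitted
    pool.join()   # block until every worker has finished, so the callback has fired
    finish_time = time.perf_counter()
    print(f"Program finished in {finish_time-start_time} seconds")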
I am trying to run a piece of code using asyncio to reduce the execution time of the whole thing. Below is my code, which takes around 6 seconds to execute fully.
Normal function calls (approach 1):
from time import time, sleep
import asyncio

def find_div(range_, divide_by):
    lis_ = []
    for i in range(range_):
        if i % divide_by == 0:
            lis_.append(i)
    print("found numbers for range {}, divided by {}".format(range_, divide_by))
    return lis_

if __name__ == "__main__":
    start = time()
    find_div(50800000, 341313)
    find_div(10005200, 32110)
    find_div(50000340, 31238)
    print(time()-start)
The output of the above code is just the total execution time, which is 6 seconds.
Multithreaded approach (approach 2):
I used multithreading for this, but surprisingly the time increased.
from time import time, sleep
import asyncio
import threading

def find_div(range_, divide_by):
    lis_ = []
    for i in range(range_):
        if i % divide_by == 0:
            lis_.append(i)
    print("found numbers for range {}, divided by {}".format(range_, divide_by))
    return lis_

if __name__ == "__main__":
    start = time()
    t1 = threading.Thread(target=find_div, args=(50800000, 341313))
    t2 = threading.Thread(target=find_div, args=(10005200, 32110))
    t3 = threading.Thread(target=find_div, args=(50000340, 31238))
    t1.start()
    t2.start()
    t3.start()
    t1.join()
    t2.join()
    t3.join()
    print(time()-start)
The output of the above code is 12 seconds.
Multiprocessing approach (approach 3):
from time import time, sleep
import asyncio
from multiprocessing import Pool

def multi_run_wrapper(args):
    return find_div(*args)

def find_div(range_, divide_by):
    lis_ = []
    for i in range(range_):
        if i % divide_by == 0:
            lis_.append(i)
    print("found numbers for range {}, divided by {}".format(range_, divide_by))
    return lis_

if __name__ == "__main__":
    start = time()
    with Pool(3) as p:
        p.map(multi_run_wrapper, [(50800000, 341313), (10005200, 32110), (50000340, 31238)])
    print(time()-start)
The multiprocessing code takes 3 seconds, which is better than the normal function-call approach.
Asyncio approach (approach 4):
from time import time, sleep
import asyncio

async def find_div(range_, divide_by):
    lis_ = []
    for i in range(range_):
        if i % divide_by == 0:
            lis_.append(i)
    print("found numbers for range {}, divided by {}".format(range_, divide_by))
    return lis_

async def task():
    tasks = [find_div(50800000, 341313), find_div(10005200, 32110), find_div(50000340, 31238)]
    result = await asyncio.gather(*tasks)
    print(result)

if __name__ == "__main__":
    start = time()
    asyncio.run(task())
    print(time()-start)
The above code also takes around 6 seconds, the same as the normal function calls in approach 1.
Problem:
Why is my asyncio approach not working as expected and reducing the overall time? What is wrong with the code?
You have code that exclusively uses the CPU.
Code like this cannot be sped up using async.
Async shines when you have tasks that are waiting on something not CPU related, e.g. a network request or a read from disk. This is generally true for all languages.
In Python, the thread-based approach will not help you either, as it still restricts you to a single core rather than true parallel execution. This is due to the Global Interpreter Lock (GIL). The overhead of starting and switching between threads makes it slower than the simple version.
In this regard, threads are similar to async in Python: they only help if the time you are waiting is not spent mainly on the CPU, or if you are calling code that is not bound by the GIL, e.g. C extensions.
Using multiprocessing really does use multiple CPU cores, so it is faster than the normal solution.
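If you want to keep the asyncio-style structure around CPU-bound work, the event loop can hand the calls to a process pool via run_in_executor; a minimal sketch with the same find_div (the speedup still comes from multiprocessing, not from async itself):

import asyncio
from concurrent.futures import ProcessPoolExecutor
from time import time

def find_div(range_, divide_by):
    # same CPU-bound work as in the question
    return [i for i in range(range_) if i % divide_by == 0]

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor(max_workers=3) as pool:
        # each call runs in its own worker process; gather awaits them all
        tasks = [
            loop.run_in_executor(pool, find_div, 50800000, 341313),
            loop.run_in_executor(pool, find_div, 10005200, 32110),
            loop.run_in_executor(pool, find_div, 50000340, 31238),
        ]
        results = await asyncio.gather(*tasks)

if __name__ == "__main__":
    start = time()
    asyncio.run(main())
    print(time() - start)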
import asyncio

async def run(time):
    await asyncio.sleep(time)

This code takes 1 min 40 seconds, because each sleep is awaited before the next one starts:

from datetime import datetime

async def main():
    now = datetime.now()
    for i in range(10):
        await run(10)
    now1 = datetime.now()
    print(now1 - now)

asyncio.run(main())
Optimized using asyncio, this takes only 10 seconds, since create_task starts all ten sleeps at once and gather waits for them together:

from datetime import datetime

async def main():
    now = datetime.now()
    task = []
    for i in range(10):
        task.append(asyncio.create_task(run(10)))
    await asyncio.gather(*task)
    now1 = datetime.now()
    print(now1 - now)

asyncio.run(main())
So I have two .py files and I am trying to import the test function from the first one into the second. But every time I try, I just get a "BrokenProcessPool: A process in the process pool was terminated abruptly while the future was running or pending." error. I have no idea what I'm messing up; help is very much appreciated.
parallel.py:
import time
from concurrent import futures

def test(t):
    time.sleep(t)
    print("I waited {} seconds. Time {:.0f}".format(t, time.time()))

def main():
    print("Start time: {:.0f}".format(time.time()))
    start = time.perf_counter()
    with futures.ThreadPoolExecutor(max_workers=3) as ex:
        ex.submit(test, 9)
        ex.submit(test, 4)
        ex.submit(test, 5)
        ex.submit(test, 6)
        print("All tasks started.")
    print("All tasks done.")
    finish = time.perf_counter()
    print("Finished in", round(finish-start, 2), "second(s)")

if __name__ == "__main__":
    main()
parallel2.py:
import parallel
import time
import concurrent.futures

# =============================================================================
# def test(t):
#     time.sleep(t)
#     return ("I waited {} seconds. Time {:.0f}".format(t, time.time()))
# =============================================================================

def main():
    print("Start time: {:.0f}".format(time.time()))
    start = time.perf_counter()
    with concurrent.futures.ProcessPoolExecutor() as executor:
        f1 = executor.submit(parallel.test, 9)
        f2 = executor.submit(parallel.test, 5)
        f3 = executor.submit(parallel.test, 4)
        f4 = executor.submit(parallel.test, 6)
        print(f1.result())
        print(f2.result())
        print(f3.result())
        print(f4.result())
    finish = time.perf_counter()
    print("Finished in", round(finish-start, 2), "second(s)")

if __name__ == "__main__":
    main()
Try this solution: remove the if __name__ == "__main__" condition from parallel.py.
You put the condition if __name__ == "__main__" in both scripts to execute the main function. With that guard, a script checks whether it is the main module and runs the function only if the check is true. When you import one script from another, the imported module's __name__ is no longer "__main__", so the condition is not satisfied and its function does not run.
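To see the mechanism in isolation, here is a hypothetical mini-module (whoami.py is an invented name, not one of the files above):

# whoami.py -- hypothetical example module
print("__name__ is", __name__)

if __name__ == "__main__":
    print("this line only runs when the file is executed directly")

Running python3 whoami.py prints __name__ is __main__ followed by the guarded line; doing import whoami from another script prints __name__ is whoami and skips the guard entirely.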
So let's say I have this code:
...
connect()
find_links()
find_numbers()
In fact, what it does is log in to an account and get some numbers and one link, for example:
1.23, 1.32 , 32.1, 2131.3 link.com/stats/
1.32, 1.41 , 3232.1, 21211.3 link.com/stats/
All I want to do is make these functions run every hour and then print the time so I can compare results. I tried:
sched = BlockingScheduler()

@sched.scheduled_job('interval', seconds=3600)
def do_that():
    connect()
    find_links()
    find_numbers()
    print(datetime.datetime.now())
but this just executes the functions once and then prints the date.
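For the record, the APScheduler attempt is close to working: BlockingScheduler never fires any job until sched.start() is called, so the decorated function only ran because it was invoked directly. A sketch, assuming APScheduler is installed and connect(), find_links() and find_numbers() are defined as in the question:

import datetime
from apscheduler.schedulers.blocking import BlockingScheduler

sched = BlockingScheduler()

@sched.scheduled_job('interval', seconds=3600)
def do_that():
    connect()
    find_links()
    find_numbers()
    print(datetime.datetime.now())

sched.start()  # blocks here; the scheduler then fires the job every hour

Note that an 'interval' trigger first fires one interval after start(), while the standard-library answers below run the function immediately.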
This should call the function once, then wait 3600 seconds (an hour), call the function again, wait, etc. It does not require anything outside of the standard library.
from time import sleep
from threading import Thread
from datetime import datetime

def func():
    connect()
    find_links()
    find_numbers()
    print(datetime.now())

if __name__ == '__main__':
    Thread(target=func).start()
    while True:
        sleep(3600)
        Thread(target=func).start()
Your code may take some time to run, so if you want each run to start precisely an hour after the previous start time, try this. The modulo expression keeps every wake-up aligned to the original start time, so the function's own runtime does not accumulate as drift:
from datetime import datetime
import time

def do_that():
    connect()
    find_links()
    find_numbers()
    print(datetime.now())

if __name__ == '__main__':
    starttime = time.time()
    while True:
        do_that()
        time.sleep(3600.0 - ((time.time() - starttime) % 3600.0))
I have the following block of code that is part of a larger program. I am trying to get it to print the execution time once all of the threads have finished, but can't seem to get it to work. Any ideas?
import time
import csv
import threading
import urllib.request

def openSP500file():
    SP500 = csv.reader(open(r'C:\Users\test\Desktop\SP500.csv', 'r'), delimiter=',')
    for x in SP500:
        indStk = x[0]
        t1 = StockData(indStk)
        t1.start()
    if not t1.isAlive():
        print(time.clock()-start_time, 'seconds')
    else:
        pass

def main():
    openSP500file()

if __name__ == '__main__':
    start_time = time.clock()
    main()
Thanks!
You aren't waiting for all the threads to finish (only the last one created). Perhaps something like this in your thread-spawning loop?
threads = []
for x in SP500:
    t1 = StockData(x[0])
    t1.start()
    threads.append(t1)

for t in threads:
    t.join()

# ... print running time
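A sketch of the same idea with concurrent.futures, which waits for every task when the with block exits. It assumes the per-symbol work is refactored out of the StockData thread subclass into a plain function, called fetch_stock here (a hypothetical name); note that time.clock() was removed in Python 3.8, so time.perf_counter() is used instead:

import time
import csv
from concurrent.futures import ThreadPoolExecutor

def fetch_stock(symbol):
    ...  # whatever StockData does for one symbol, as a plain function

def main():
    start_time = time.perf_counter()
    with open(r'C:\Users\test\Desktop\SP500.csv', 'r') as f:
        symbols = [row[0] for row in csv.reader(f, delimiter=',')]
    # the with block does not exit until every submitted task has finished
    with ThreadPoolExecutor(max_workers=10) as ex:
        ex.map(fetch_stock, symbols)
    print(time.perf_counter() - start_time, 'seconds')

if __name__ == '__main__':
    main()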