I am running the following code to try to measure how long my PatternGenerator threads take to finish. However, the "toc - tic" value is printed as soon as the loop finishes, not when the threads do. Is there any way to measure both the total time and the time taken by each individual thread? Thanks
tic = time.clock()
for i in range(0, 2):
    start = i * step
    end = start + step
    pg = PatternGenerator()
    pg.counter = start
    pg.pos = i
    pg.data = lines[start:end]
    pg.start()
toc = time.clock()
print toc - tic
Regards,
Andy
Join the threads before toc! You can put the thread objects in a list, then call join() on each of them:
Before the for loop:
pglist = []
...start the threads as above, appending each pg to pglist...
for pg in pglist:
    pg.join()
toc = time.clock()
print toc - tic
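A complete sketch combining both, including a rough per-thread timing idea (a sketch only: PatternGenerator, step, and lines are assumed to exist as in the question, and the per-thread pg.tic attribute is a hypothetical addition):
tic = time.clock()
pglist = []
for i in range(0, 2):
    pg = PatternGenerator()       # assumed to be a threading.Thread subclass
    pg.counter = i * step
    pg.pos = i
    pg.data = lines[i * step:(i + 1) * step]
    pg.tic = time.clock()         # hypothetical: stamp this thread's start time
    pg.start()
    pglist.append(pg)
for pg in pglist:
    pg.join()
    # rough per-thread elapsed time: measured from start() until after join(),
    # so it can overestimate for threads that finished before their join() ran
    print 'thread', pg.pos, 'took', time.clock() - pg.tic
toc = time.clock()
print 'total:', toc - tic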
I'm trying to modify a Python script to use multiprocessing with Process. The problem is that it isn't working: in a first step the functions are run sequentially (test1, then test2); in a second step they are supposed to run in parallel (test1 and test2 together). Yet there is practically no speed difference. If you time the functions individually, you will notice the difference. In my opinion, the parallel step should only take as long as the longest individual process. What am I missing here?
import multiprocessing
import time

def test1(k):
    k = k * k
    for e in range(1, k):
        e = e**k

def test2(k):
    k = k * k
    for e in range(1, k):
        e = e + 5 - 5*k ** 4000

if __name__ == '__main__':
    start = time.time()
    test1(100)
    test2(100)
    end = time.time()
    print(end-start)

    start = time.time()
    worker_1 = multiprocessing.Process(target=test1(100))
    worker_1.start()
    worker_2 = multiprocessing.Process(target=test2, args=(100,))
    worker_2.start()
    worker_1.join()
    worker_2.join()
    end = time.time()
    print(end-start)
I want to add that I checked the Task Manager and saw that only one core is used (4 physical cores at 25% total CPU, i.e. one core at 100%).
I know about the Pool class, but I don't want to use it.
Thank you for your help.
Update
Hello, everybody,
The version with the "typo" was unfortunate, sorry about that. Bakuriu, thank you for your answer; in fact, you're right. I think it was the typo and too much work. :-( So I changed the example once again. For all who are interested:
I created two functions. In the first part of main I run the functions sequentially three times; my computer needs approx. 36 seconds. Then I start two new processes, which compute their results in parallel. As a small addition, the main process of the program also computes test1 itself, which shows that the main program can do work at the same time. With that I get a computing time of 12 seconds. So that this is comprehensible for everyone on the Internet, I attached a picture of what this means:
[Task Manager screenshot]
import multiprocessing
import time

def test1(k):
    k = k * k
    for e in range(1, k):
        e = e**k

def test2(k):
    k = k * k
    for e in range(1, k):
        e = e**k

if __name__ == '__main__':
    start = time.time()
    test1(100)
    test2(100)
    test1(100)
    end = time.time()
    print(end-start)

    start = time.time()
    worker_1 = multiprocessing.Process(target=test1, args=(100,))
    worker_1.start()
    worker_2 = multiprocessing.Process(target=test2, args=(100,))
    worker_2.start()
    test1(100)
    worker_1.join()
    worker_2.join()
    end = time.time()
    print(end-start)
Your code executes sequentially because instead of passing test1 as the Process's target argument, you are passing test1's result to it!
You want to do this:
worker_1 = multiprocessing.Process(target=test1, args=(100,))
As you do in the other call, not this:
worker_1 = multiprocessing.Process(target=test1(100))
This code first executes test1(100), which returns None, and assigns that to target, spawning an "empty" process. After that you spawn a second process that executes test2(100). So the code runs sequentially, plus you add the overhead of spawning two processes.
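A minimal side-by-side sketch of the two forms, to make the difference concrete:
import multiprocessing

def test1(k):
    pass  # placeholder body

# Wrong: test1(100) is called HERE, in the parent, and its return
# value (None) becomes the target, so the child does nothing.
broken = multiprocessing.Process(target=test1(100))

# Right: the function object and its arguments are handed to the
# child, which calls test1(100) only after it has been spawned.
worker = multiprocessing.Process(target=test1, args=(100,))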
I am working on a project in which an accurate timer is really crucial. I am working in Python and am using the time.sleep() function.
I noticed that time.sleep() adds an additional delay because of OS scheduling (see the time.sleep() docs). Because of that, the longer my program runs, the more inaccurate the timer becomes.
Is there a more accurate timer/ticker to sleep the program, or another solution for this problem?
Any help would be appreciated. Cheers.
I had a solution similar to the one above, but it became processor-heavy very quickly. Here is the processor-heavy idea and a workaround.
import time

def processor_heavy_sleep(ms):  # fine for milliseconds; works the computer hard in the seconds range
    start = time.clock()
    end = start + ms / 1000.
    while time.clock() < end:
        continue
    return start, time.clock()

def efficient_sleep(secs, expected_inaccuracy=0.5):  # for longer times
    start = time.clock()
    end = secs + start
    time.sleep(secs - expected_inaccuracy)
    while time.clock() < end:
        continue
    return start, time.clock()
The output of efficient_sleep(5, 0.5), run 3 times, was:
(3.1999303695151594e-07, 5.0000003199930365)
(5.00005983869791, 10.00005983869791)
(10.000092477987678, 15.000092477987678)
This is on Windows. I'm running it for 100 loops right now. Here are the results:
(485.003749358414, 490.003749358414)
(490.0037919174879, 495.0037922374809)
(495.00382903668014, 500.00382903668014)
The sleeps remain accurate, but the calls are always delayed a little. If you need a scheduler that accurately fires every x seconds to the millisecond, that would be a different thing.
the longer my program runs, the more inaccurate the timer becomes.
So, for example, by expecting a 0.5 s delay, it would be time.sleep(0.5 - (start - end)). But that still doesn't solve the issue.
You seem to be complaining about two effects: 1) the fact that time.sleep() may take longer than you expect, and 2) the inherent creep in using a series of time.sleep() calls.
You can't do anything about the first, short of switching to a real-time OS. The underlying OS calls are defined to sleep for at least as long as requested; they only guarantee that you won't wake early, not that you won't wake up late.
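A quick way to see that first effect for yourself (a minimal sketch; the exact numbers depend on your OS and scheduler):
import time

# Request a short sleep and measure how long it actually took.
requested = 0.001
start = time.time()
time.sleep(requested)
actual = time.time() - start
print("requested %.6fs, slept %.6fs, overshoot %.6fs"
      % (requested, actual, actual - requested))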
As for the second, you ought to figure your sleep time relative to an unchanging epoch, not relative to your wake-up time. For example:
import time
import random

target = time.time()

def myticker():
    # Sleep for 0.5s between tasks, with no creep
    global target  # needed so the function updates the module-level epoch
    target += 0.5
    now = time.time()
    if target > now:
        time.sleep(target - now)

def main():
    previous = time.time()
    for _ in range(100):
        now = time.time()
        print(now - previous)
        previous = now

        # simulate some work
        time.sleep(random.random() / 10)  # Always < tick frequency
        # time.sleep(random.random())     # Not always < tick frequency

        myticker()

if __name__ == "__main__":
    main()
Working on Linux with zero knowledge of Windows, I may be naive here, but is there some reason that writing your own sleep function won't work for you?
Something like:
import time

def sleep_time():
    start_time = time.time()
    while (time.time() - start_time) < 0.0001:
        continue

end_time = time.time() + 60  # run for a minute
cnt = 0
while time.time() < end_time:
    cnt += 1
    print('sleeping', cnt)
    sleep_time()
    print('Awake')
print("Slept ", cnt, " Times")
I am new to Python. I want to add a wait between two function calls.
Below is the code snippet, but with this code the wait is not working: the program pauses as soon as it reaches the first line of uploadFullstackZiptoCDN().
How can I make sure that I have a pause of 5 minutes between the functions?
uploadFullstackZiptoCDN(fsartifactFile,fullStackgroup_ID,fsVersion,sdpIP,cdnIP)
makeRestCalls(ugdmHostIP,ipmessagingHostIP,cdnIP,fsVersion,fsartifactFile,'FullStack')
time.sleep(300)
makeappUpgradeZip(appartifactFile,appgroup_ID,appversion,sdpIP,cdnIP)
uploadZiptoCDN(cdnIP,appartifactFile,appversion)
The code below produces a delay which seems to be well controlled, and it could likely be adapted to your needs.
Among other differences, it allows for more granularity in the start and stop times than time.sleep().
#!/usr/bin/python3
import time

t0 = time.time()
nsecs = 300
while True:
    t1 = time.time()
    if (t1 - t0) > nsecs:
        break
print(t1 - t0)
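Wrapped as a reusable helper, it might look like this (a sketch; the commented-out calls use the names from the question):
import time

def busy_wait(nsecs):
    # Spin until nsecs seconds of wall-clock time have elapsed.
    # Note: this keeps one core busy the whole time; time.sleep(nsecs)
    # is far cheaper when sub-second precision is not needed.
    t0 = time.time()
    while (time.time() - t0) < nsecs:
        pass

# makeRestCalls(ugdmHostIP, ipmessagingHostIP, cdnIP, fsVersion, fsartifactFile, 'FullStack')
# busy_wait(300)  # five-minute pause between the two calls
# makeappUpgradeZip(appartifactFile, appgroup_ID, appversion, sdpIP, cdnIP)
busy_wait(2)  # quick self-test: pauses for about two seconds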
I bet this has been asked before, but I must be searching for the wrong things because I can't find anything. I have created a simple game that gives the user simple math problems that they then must answer. I want to time how long it takes them to answer.
So basically I want a startTimer() at the beginning of my code and a stopTimer() at the end, and have the elapsed time saved as a variable.
If you just want the difference, use time.clock() or time.time() from the time module.
import time
t1 = time.clock() # or t1 = time.time()
...
t2 = time.clock() # or t2 = time.time()
elapsedTime = t2 - t1
Refer to https://docs.python.org/2/library/time.html
t1 = time.time()
raw_input("enter your guess")
print("You took {} to answer".format(time.time() - t1))
Running it on Ubuntu 14 with Python 2.7.6, I simplified the script to show my problem:
import time
import multiprocessing

data = range(1, 3)
start_time = time.clock()

def lol():
    for i in data:
        print time.clock() - start_time, "lol seconds"

def worker(n):
    print time.clock() - start_time, "multiprocesor seconds"

def mp_handler():
    p = multiprocessing.Pool(1)
    p.map(worker, data)

if __name__ == '__main__':
    lol()
    mp_handler()
And the output:
8e-06 lol seconds
6.9e-05 lol seconds
-0.030019 multiprocesor seconds
-0.029907 multiprocesor seconds
Process finished with exit code 0
Using time.time() gives non-negative values (as noted here: Timer shows negative time elapsed), but I'm curious what the problem is with time.clock() in Python multiprocessing and reading the time from the CPU.
multiprocessing spawns new processes, and time.clock() on Linux has the same meaning as C's clock():
The value returned is the CPU time used so far as a clock_t;
So the values returned by clock() restart from 0 when a new process starts. However, your code uses the parent process's start_time to determine the time spent in the child process, which is obviously incorrect because the child's CPU clock restarts from 0.
The clock() function makes sense only when handling a single process, because its return value is the CPU time spent by that process; child processes are not taken into account.
The time() function, on the other hand, uses a system-wide clock and can therefore be used even across different processes (although it is not monotonic, so it might return wrong results if somebody changes the system time during the measurement).
Forking a running Python instance is probably faster than starting a new one from scratch, hence start_time is almost always bigger than the value returned by time.clock() in the child.
Take into account that the parent process also had to read your file from disk and perform the imports, which may require reading other .py files, searching directories, etc.
The forked child processes don't have to do all that.
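If you want to gauge that spawning overhead directly, a quick sketch (this forks on Linux; numbers vary by platform and start method):
import multiprocessing
import time

def noop():
    pass  # the child does nothing, so the elapsed time is pure overhead

if __name__ == '__main__':
    start = time.time()
    p = multiprocessing.Process(target=noop)
    p.start()
    p.join()
    print('spawn + join overhead: %.4fs' % (time.time() - start))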
Example code that shows that the return value of time.clock() restarts from 0:
from __future__ import print_function

import time
import multiprocessing

data = range(1, 3)
start_time = time.clock()

def lol():
    for i in data:
        t = time.clock()
        print('t: ', t, end='\t')
        print(t - start_time, "lol seconds")

def worker(n):
    t = time.clock()
    print('t: ', t, end='\t')
    print(t - start_time, "multiprocesor seconds")

def mp_handler():
    p = multiprocessing.Pool(1)
    p.map(worker, data)

if __name__ == '__main__':
    print('start_time', start_time)
    lol()
    mp_handler()
Result:
$python ./testing.py
start_time 0.020721
t: 0.020779 5.8e-05 lol seconds
t: 0.020804 8.3e-05 lol seconds
t: 0.001036 -0.019685 multiprocesor seconds
t: 0.001166 -0.019555 multiprocesor seconds
Note how t is monotonic in the lol case, while it goes back to 0.001 in the other case.
To add a concise Python 3 example to Bakuriu's excellent answer above, you can use the following method to get a global timer that is independent of the subprocesses:
import multiprocessing as mp
import time

# create iterable
iterable = range(4)

# adds three to the given element
def add_3(num):
    a = num + 3
    return a

# multiprocessing attempt
def main():
    pool = mp.Pool(2)
    results = pool.map(add_3, iterable)
    return results

if __name__ == "__main__":  # Required not to spawn deviant children
    start = time.time()
    results = main()
    print(list(results))

    elapsed = time.time() - start
    print("\n", "time elapsed is :", elapsed)
Note that if we had used time.process_time() instead of time.time(), we would get an undesired result.
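To illustrate why (a minimal sketch, not from the original answer): time.process_time() counts only the CPU time of the current process, so while the parent sits blocked in pool.map() it accumulates almost nothing, and the children's work is invisible to it:
import multiprocessing as mp
import time

def burn(n):
    # Do some CPU work in the child process.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    wall = time.time()
    cpu = time.process_time()
    with mp.Pool(2) as pool:
        pool.map(burn, [5000000] * 4)
    print("wall clock :", time.time() - wall)         # includes the children's work
    print("parent CPU :", time.process_time() - cpu)  # close to zero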