I'm trying to create a simple game using pygame, and everything was fine so far. Over the last couple of days, though, I realized there is a problem with time.clock(). I read the documentation, and the function is supposed to count the time since the game starts. I wanted to spawn an alien group every 8 seconds, and it worked (I'm working on Debian), but as I mentioned, for the last two days it hasn't been counting properly: the system needs about 20 seconds of real time before time.clock() reports 8.0 and the aliens spawn.
At first I thought I had messed up the counters, but how could that be? It worked fine in the beginning, so I ran the same code on the Windows partition and it was also fine there. So is this a problem with the system clock, or something else? I replaced time.clock() with time.time() on Debian and that also worked fine. Has anyone run into the same problem before? Can you help me check whether something else is the cause (both operating systems run Python 3.6)? If you don't understand something or need more information, just ask.
Thank you for your time
Here is the portion of the game code that uses time.clock():
sergeant_spawn_time_limit = 8.0
sergeant_spawn_time = 0.0

if game_stage == 2 or game_stage == 3 or score >= 400:
    if time.clock() - sergeant_spawn_time > sergeant_spawn_time_limit:
        for spawn_sergeant in range(5):
            sergeant = AlienSergeant(display_width, display_height, 50, 88)
            all_sprites_list.add(sergeant)
            alien_sergeant_list.add(sergeant)
        sergeant_spawn_time = time.clock()
The behaviour of time.clock() is platform dependent:
time.clock()

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

Deprecated since version 3.3: The behaviour of this function depends on the platform: use perf_counter() or process_time() instead, depending on your requirements, to have a well defined behaviour.
So it's really the wrong tool to use here. Either use time.time(), pygame's clock, or its built-in event system. You'll find plenty of examples of both approaches online.
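For the 8-second spawn specifically, the event-system route is usually the cleanest. Below is a minimal sketch, not the asker's actual game: the SPAWN_SERGEANTS event id and the window size are assumptions for illustration, and pygame.time.set_timer() fires the custom event every 8 seconds of wall-clock time regardless of how much CPU time the game uses.

import pygame

pygame.init()
screen = pygame.display.set_mode((800, 600))
clock = pygame.time.Clock()

# custom event fired every 8000 ms
SPAWN_SERGEANTS = pygame.USEREVENT + 1
pygame.time.set_timer(SPAWN_SERGEANTS, 8000)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == SPAWN_SERGEANTS:
            print("spawn the alien group here")  # create the 5 sergeants at this point
    clock.tick(60)  # wall-clock based frame cap
pygame.quit()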
Related
I am measuring the response time of a function using the time module. The time module is supposed to output seconds as a float, so I am saving a start value (time.clock()), taking another reading at the end, and using the difference as the runtime. While watching the results, we noted the runtimes seemed high: something that seemed to take less than 2 seconds was printing as 3-and-change, for instance. Based on that, I decided to double-check the results using the datetime module. Printing the two side by side shows the time module values are almost double the datetime values.
Anyone know why that might be?
Here is my code:
for datum in data:
    start = datetime.datetime.now()
    startts = time.clock()
    check = test_func(datum)
    runtime = datetime.datetime.now() - start
    runts = time.clock() - startts
    print(check, "Time required:", runtime, "or", runts)
Some of my results:
XYZ Time required: 0:00:01.985303 or 3.7836029999999994
XYZ Time required: 0:00:01.476289 or 3.3465039999999817
XYZ Time required: 0:00:01.454407 or 3.7140109999999993
XYZ Time required: 0:00:01.550416 or 3.860824000000008
I am assuming this sort of issue would have been noticed before, and I am just missing something basic in my implementation. Can someone clue me in?
Looks like time.clock() has been deprecated since version 3.3. Maybe this will help?
time.clock()

On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name.

On Windows, this function returns wall-clock seconds elapsed since the first call to this function, as a floating point number, based on the Win32 function QueryPerformanceCounter(). The resolution is typically better than one microsecond.

Deprecated since version 3.3: The behaviour of this function depends on the platform: use perf_counter() or process_time() instead, depending on your requirements, to have a well defined behaviour.
We found the issue. The test_func I am testing uses a multi-threaded process. I did not know that, and I did not know it would be an issue.
time.clock() measures processor time on Unix (https://docs.python.org/3.6/library/time.html), while the datetime module measures wall-clock time (https://docs.python.org/3.6/library/datetime.html). Using the difference between the datetime timestamps told me how much actual time had elapsed, which for our purposes was the relevant information.
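For anyone hitting the same thing on Python 3.3+, here is a minimal sketch (my own example, not from the original code) of the distinction: time.process_time() only advances while the process is burning CPU, whereas time.perf_counter() tracks wall-clock time, so the two diverge sharply around sleeps or multi-threaded work.

import time

start_wall = time.perf_counter()   # wall-clock timer
start_cpu = time.process_time()    # CPU-time timer

time.sleep(2)                       # sleeping uses almost no CPU
sum(i * i for i in range(10 ** 6))  # this part does burn CPU

print("wall clock:", time.perf_counter() - start_wall)  # about 2.x seconds
print("CPU time:  ", time.process_time() - start_cpu)   # only the busy part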
I hope this helps someone else in the future!
When I run the PyCharm profiler (a quick intro video is here: https://www.youtube.com/watch?v=QSueV8MYtlw ), I get thousands of lines like hasattr or npyio.py (where did that come from? I don't even use numpy) which do not help me understand what's going on at all.
How can I make the PyCharm profiler show only timings of my source code, not any libraries or system calls?
In other words, can the time spent in system calls and libraries be assigned to my functions which call them?
In other words (version two), all I want is number of milliseconds next to each line of my python code, not anything else.
I created some code to provide an example and, hopefully, an acceptable answer:
import datetime as dt


class something:
    def something_else(self):
        other_list = range(100000)
        for num in other_list:
            datetimeobj = dt.datetime.fromtimestamp(num)
            print(num)
            print(datetimeobj)

    def something_different(self):
        other_list = range(100000)
        for num in other_list:
            datetimeobj = dt.datetime.fromtimestamp(num)
            print(num)
            print(datetimeobj)


st = something()
st.something_else()
st.something_different()
The code resulted in the picture below, which I have sorted by name. (This is possible in my case because all of the built-in methods are prefixed by "<".) After doing this I can see that main took 100% of the total time (column: Time (ms)). something_else took 50.8% of that time and something_different took 49.2% (together also 100%, same column). The time spent inside each of the two home-grown methods was 2.0% (column: Own Time (ms)). That means the underlying calls made from something_else accounted for 48.8%, those from something_different for 47.2%, and the parts I wrote accounted for 4.0% of the total time. The remaining 96.0% happens inside the built-in methods that I call.
Your questions were:
How can I make the PyCharm profiler show only timings of my source code, not any libraries or system calls? -> That's the column "Own Time (ms)" -> 2.0% (time spent inside the specific method itself).
In other words, can the time spent in system calls and libraries be assigned to my functions which call them? -> That's the column "Time (ms)" (time spent including underlying calls).
Subtract the two columns and you get time spent only in underlying methods.
I have unfortunately been unable to find a way to filter inside the profiler itself, but it is possible to export the list by copying it, and from there you could filter on e.g. "<built_in" to clean up the data.
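If PyCharm's view is too noisy, a rough alternative sketch (outside PyCharm, using only the stdlib cProfile/pstats machinery the IDE wraps) is to profile programmatically and restrict the printed rows with a regex; the my_work function and the "my_work" filter string here are assumptions for illustration and should be replaced by your own function or file names.

import cProfile
import pstats


def my_work():
    # stand-in for the code you actually care about
    return sum(i * i for i in range(100000))


profiler = cProfile.Profile()
profiler.enable()
my_work()
profiler.disable()

stats = pstats.Stats(profiler)
stats.sort_stats("cumulative")
stats.print_stats("my_work")  # only rows whose file:line(function) matches the pattern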
I am developing a script that uses a function I wrote to control the relays of an 8-channel relay board on a Raspberry Pi 3. The function works, and calling the function works. I am trying to develop the script so that when the current time equals another time, such as the Zone 1 start time, the relays turn on or off depending on the status received from another part of the code.
I have tested it without the time-comparison part, and everything works. I seem to run into problems when I add this level of complexity. Here is a sample of my code:
while True:
    from datetime import datetime
    import time
    import smbus

    ValveStatus = '00000001'  # 0 is closed, 1 is open
    R1_1, R2_1, R3_1, R4_1, R5_1, R6_1, R7_1, R8_1 = list(map(int, ValveStatus))

    currenttime = datetime.today().strftime('%Y-%m-%d %H:%M:%S')
    Z1S_Timestamp = '2018-07-09 10:25:11'

    if currenttime == Z1S_Timestamp:
        if R8_1 == 1:
            SetRelayState(BoardOne, 8, "ON")
        else:
            SetRelayState(BoardOne, 8, "OFF")
No matter how many times I changed the code, it never works with this timing method. It never enters the if branch, so the relay never opens. Is there a better way to do this than simple equality checks? I am open to editing it, but the relays still need to open around the start time. I think a margin of 1 or 2 minutes is okay, since hitting the time exactly is not 100% necessary.
Would something like:
currenttime = '2018-07-09 12:53:55'  # hard-coding just for example purposes
if '2018-07-09 12:52:55' <= currenttime <= '2018-07-09 12:54:55':
    do the things
be a more valid/correct/Pythonic method?
Sure, although I would do the opposite: convert all times to datetime objects and use those for the comparison:
import datetime

TIME_MARGIN = datetime.timedelta(seconds=120)  # use a margin of 2 minutes
time_compare = datetime.datetime(2018, 7, 9, 12, 52, 55)
current_time = datetime.datetime.now()

if (time_compare - TIME_MARGIN) < current_time < (time_compare + TIME_MARGIN):
    pass  # do something
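Here is a hedged sketch of how that could slot into the original polling loop. SetRelayState and BoardOne are stubbed out (the real ones come from the asker's relay code), and the already_fired flag is an added assumption so the relay isn't toggled again on every pass through the two-minute window.

import datetime
import time


def SetRelayState(board, channel, state):
    # stub standing in for the real relay helper
    print("board %s, channel %d -> %s" % (board, channel, state))


BoardOne = "board-1"  # placeholder for the real board handle
TIME_MARGIN = datetime.timedelta(seconds=120)
z1_start = datetime.datetime(2018, 7, 9, 10, 25, 11)
already_fired = False

while True:
    ValveStatus = '00000001'    # 0 is closed, 1 is open
    R8_1 = int(ValveStatus[7])  # the bit for relay 8
    now = datetime.datetime.now()
    if not already_fired and abs(now - z1_start) < TIME_MARGIN:
        SetRelayState(BoardOne, 8, "ON" if R8_1 == 1 else "OFF")
        already_fired = True    # fire only once per window
    time.sleep(1)               # poll once a second instead of spinning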
I'm posing this question mostly out of curiosity. I've written some code that is doing some very time intensive work. So, before executing my workhorse function, I wrapped it up in a couple of calls to time.clock(). It looks something like this:
t1 = time.clock()
print this_function_takes_forever(how_long_parameter = 20)
t2 = time.clock()
print t2 - t1
This worked fine. My function returned correctly and t2 - t1 gave me a result of 972.29, or about 16 minutes.
However, when I changed my code to this
t1 = time.clock()
print this_function_takes_forever(how_long_parameter = 80)
t2 = time.clock()
print t2 - t1
My function still returned fine, but the result of t2 - t1 was:
None
-1741
I'm curious what implementation detail causes this. Both the None and the negative number are perplexing to me. Does it have something to do with a signed type? And how does that explain the None?
The Python docs say:
On Unix, return the current processor time as a floating point number expressed in seconds. The precision, and in fact the very definition of the meaning of “processor time”, depends on that of the C function of the same name
The manpage of the referenced C function then explains the issue:
Note that the time can wrap around. On a 32-bit system where CLOCKS_PER_SEC equals 1000000 this function will return the same value approximately every 72 minutes.
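A quick, version-neutral sanity check of that arithmetic (my own addition, not part of the original answer): with CLOCKS_PER_SEC at one million, a 32-bit clock_t covers 2**32 ticks before a full wrap, and a signed value goes negative after 2**31 ticks, which is consistent with a negative difference showing up on a long run.

CLOCKS_PER_SEC = 1000000.0

full_wrap_minutes = 2 ** 32 / CLOCKS_PER_SEC / 60       # ~71.6 minutes per full wrap
negative_after_minutes = 2 ** 31 / CLOCKS_PER_SEC / 60  # ~35.8 minutes if clock_t is signed

print(full_wrap_minutes)
print(negative_after_minutes)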
A quick guess... it looks like an overflow. The underlying data type is probably signed (setting the top bit of a signed integer gives a negative number).
Try putting the result of the subtraction in a variable (a double) and then printing that.
If it still prints like that, try converting it from double to string and printing the string.
The None has a very simple answer: your function does not return a value. Actually, I gather that it does under normal circumstances, but not when how_long_parameter = 80. Since your function seems to be returning early (probably because execution reaches the end of the function, where there is an implicit return None in Python), could the negative time be because your function takes almost no time to complete in this case? So look for the bug in your function and correct it.
The actual answer as to why you get a negative time depends on the operating system you are using, because clock() is implemented differently on different platforms. On Windows it uses QueryPerformanceCounter(), on *nix it uses the C function clock().
I was optimising some Python code, and tried the following experiment:
import time

start = time.clock()
x = 0
for i in range(10000000):
    x += 1
end = time.clock()
print '+=', end - start

start = time.clock()
x = 0
for i in range(10000000):
    x -= -1
end = time.clock()
print '-=', end - start
The second loop is reliably faster, anywhere from a whisker to 10%, depending on the system I run it on. I've tried varying the order of the loops, number of executions etc, and it still seems to work.
Stranger still,
for i in range(10000000, 0, -1):
(i.e. running the loop backwards) is faster than
for i in range(10000000):
even when loop contents are identical.
What gives, and is there a more general programming lesson here?
I can reproduce this on my Q6600 (Python 2.6.2); increasing the range to 100000000:
('+=', 11.370000000000001)
('-=', 10.769999999999998)
First, some observations:
This is 5% for a trivial operation. That's significant.
The speed of the native addition and subtraction opcodes is irrelevant. It's in the noise floor, completely dwarfed by the bytecode evaluation. We're talking about one or two native instructions out of thousands.
The bytecode generates exactly the same number of instructions; the only difference is INPLACE_ADD vs. INPLACE_SUBTRACT and +1 vs -1.
Looking at the Python source, I can make a guess. This is handled in ceval.c, in PyEval_EvalFrameEx. INPLACE_ADD has a significant extra block of code to handle string concatenation. That block doesn't exist in INPLACE_SUBTRACT, since you can't subtract strings. That means INPLACE_ADD contains more native code. Depending (heavily!) on how the code is generated by the compiler, this extra code may be inline with the rest of the INPLACE_ADD code, which means additions put more pressure on the instruction cache than subtractions. The resulting extra cache misses, serviced from L2, could cause a significant performance difference.
This is heavily dependent on the system you're on (different processors have different amounts of cache and cache architectures), the compiler in use, including the particular version and compilation options (different compilers will decide differently which bits of code are on the critical path, which determines how assembly code is lumped together), and so on.
Also, the difference is reversed in Python 3.0.1 (+: 15.66, -: 16.71); no doubt this critical function has changed a lot.
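If you want to verify the bytecode observation yourself, the stdlib dis module makes it easy; this small sketch (my addition, not from the original answers) disassembles both loop bodies. On Python 2.x you should see identical opcode sequences apart from INPLACE_ADD vs INPLACE_SUBTRACT (recent Python 3 versions fold both into a single BINARY_OP opcode).

import dis


def add_loop():
    x = 0
    for i in range(10000000):
        x += 1


def sub_loop():
    x = 0
    for i in range(10000000):
        x -= -1


dis.dis(add_loop)
dis.dis(sub_loop)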
$ python -m timeit -s "x=0" "x+=1"
10000000 loops, best of 3: 0.151 usec per loop
$ python -m timeit -s "x=0" "x-=-1"
10000000 loops, best of 3: 0.154 usec per loop
Looks like you have some measurement bias.
I think the "general programming lesson" is that it is really hard to predict, solely by looking at the source code, which sequence of statements will be the fastest. Programmers at all levels frequently get caught up by this sort of "intuitive" optimisation. What you think you know may not necessarily be true.
There is simply no substitute for actually measuring your program performance. Kudos for doing so; answering why undoubtedly requires delving deep into the implementation of Python, in this case.
With byte-compiled languages such as Java, Python, and .NET, it is not even sufficient to measure performance on just one machine. Differences between VM versions, native code translation implementations, CPU-specific optimisations, and so on will make this sort of question ever more tricky to answer.
"The second loop is reliably faster ..."
That's your explanation right there. Re-order your script so the subtraction test is timed first, then the addition, and suddenly addition becomes the faster operation again:
-= 3.05
+= 2.84
Obviously something happens to the second half of the script that makes it faster. My guess is that the first call to range() is slower because Python needs to allocate enough memory for such a long list, but it is able to reuse that memory for the second call to range():
import time

start = time.clock()
x = range(10000000)
end = time.clock()
del x
print 'first range()', end - start

start = time.clock()
x = range(10000000)
end = time.clock()
print 'second range()', end - start
A few runs of this script show that the extra time needed for the first range() accounts for nearly all of the time difference between '+=' and '-=' seen above:
first range() 0.4
second range() 0.23
It's always a good idea when asking a question to say what platform and what version of Python you are using. Sometimes it doesn't matter. This is NOT one of those times:
time.clock() is appropriate only on Windows. Throw away your own measuring code and use -m timeit as demonstrated in pixelbeat's answer.
Python 2.X's range() builds a list. If you are using Python 2.x, replace range with xrange and see what happens (there is a small sketch after this list).
Python 3.X's int is Python 2.X's long.
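A tiny Python 2-only sketch of that xrange suggestion (my own example, not from the original answer): xrange yields its values lazily, while range materialises the whole million-element list on every run, so the allocation cost shows up directly in the timings.

import timeit

# loop using xrange (lazy, no list built)
print(timeit.timeit('for i in xrange(1000000): x += 1', setup='x = 0', number=10))

# same loop using range (builds a 1,000,000-element list every run)
print(timeit.timeit('for i in range(1000000): x += 1', setup='x = 0', number=10))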
Is there a more general programming lesson here?
The more general programming lesson here is that intuition is a poor guide when predicting run-time performance of computer code.
One can reason about algorithmic complexity, hypothesise about compiler optimisations, estimate cache performance, and so on. However, since these things can interact in non-trivial ways, the only way to be sure how fast a particular piece of code is going to be is to benchmark it in the target environment (as you have rightfully done).
With Python 2.5 the biggest problem here is using range, which will allocate a list that big just to iterate over it. When using xrange, whichever loop is run second is a tiny bit faster for me. (In Python 3, range no longer builds a list, so this issue goes away.)
Your experiment is faulty. The way this experiment should be designed is to write two different programs: one for addition, one for subtraction. They should be exactly the same and run under the same conditions, with the data written to a file. Then you need to average the runs (at least several thousand), but you'd need a statistician to tell you an appropriate number.
If you wanted to analyze different methods of addition, subtraction, and looping, again each of those should be a separate program.
Experimental error might arise from processor heat and other activity going on in the CPU, so I'd execute the runs in a variety of patterns...
That would be remarkable, so I have thoroughly evaluated your code and also set up the experiment as I would consider more correct (all declarations and function calls outside the loop). I ran both versions five times.
Running your code validated your claims:
-= consistently takes less time: 3.6% less on average
Running my code, though, contradicts the outcome of your experiment:
+= takes on average (not always) 0.5% less time.
To show all results I have put plots online:
Your evaluation: http://bayimg.com/kadAeaAcN
My evaluation: http://bayimg.com/KadaAaAcN
So, I conclude that your experiment has a bias, and it is significant.
Finally here is my code:
import time

addtimes = [0.] * 100
subtracttimes = [0.] * 100
range100 = range(100)
range10000000 = range(10000000)
j = 0
i = 0
x = 0
start = 0.

for j in range100:
    start = time.clock()
    x = 0
    for i in range10000000:
        x += 1
    addtimes[j] = time.clock() - start

for j in range100:
    start = time.clock()
    x = 0
    for i in range10000000:
        x -= -1
    subtracttimes[j] = time.clock() - start

print '+=', sum(addtimes)
print '-=', sum(subtracttimes)
Running the loop backwards is faster because the computer has an easier time comparing whether a number is equal to 0.