I have a more general question that I can't wrap my head around, and that I haven't seen explicitly explained in the docs. Let's take two random events from my simulation (what exactly they are shouldn't matter for this question):
10.1622 Customer02: Do something
13.6176 Customer08: Do something
The first column is the internal time at which these events took place. Can someone explain how to interpret these numbers? Are they simply meant to be real-world seconds, meaning that roughly 3.5 real-world seconds passed between the first event and the second, and that the first event took place about 10 real-world seconds into the simulation?
What is the usual practice if I want times in my simulation (the interval between customers arriving, the time it takes to serve a customer, etc.) to be expressed in real-world time? Say I have a variable "intervalbetweencustomers" which is currently set to 10.0. If I want it to represent one real-world minute, how do I do that?
The "tick" of the SimPy clock can be in any unit you want (seconds, minutes, hours, etc.).
Ticks are not integers, so you can have half a tick.
Just pick one unit and convert everything to that unit whenever you need a time-related parameter for a SimPy function like env.timeout().
SimPy does not take a time unit as a parameter, so any conversion you will need to do yourself. There are Python libraries if you need to convert dates, or the difference between two dates, to a number.
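A minimal sketch of that convention (the constant names here are made up): decide that one tick equals one second, define the other units from it, and do all conversions through those constants.

```python
# One simulation tick = one second; derive larger units from it.
SECOND = 1.0
MINUTE = 60 * SECOND
HOUR = 60 * MINUTE

interval_between_customers = 1 * MINUTE   # "a real-world minute" -> 60.0 ticks
service_time = 2.5 * MINUTE               # ticks are floats, so fractions are fine

# These plain numbers are what you would pass to env.timeout(...) in SimPy.
print(interval_between_customers)  # 60.0
```

If you later decide that a tick should be a minute instead, you only change the constants, not every `env.timeout` call.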
Related
I'm currently writing an alarm clock in Python; however, I have run into some technical difficulties.
The user has the option for the alarm to repeat (on given days) or not to repeat. They then provide the hour and minutes at which they want the alarm to trigger.
For my alarm system to work, I need to know the epoch time at which the alarm should trigger.
If I am trying to set an alarm (for example for 19:30; times will always be input in 24-hour format), I need to find the epoch time of the next moment it is 19:30. That could be on the same day, if I set the alarm before 19:30, or on the next day, if I set the alarm after 19:30.
Because of this, I can't simply call time.localtime() and swap the hours and minutes in the resulting struct_time object (indexes 3 and 4 of its named tuple): I would also have to correctly assign the month, day, and day of the year to have a valid struct_time object. That is possible, but it would require a lot of manipulation, and I feel there is likely a much more reasonable way of doing this.
Any help would be much appreciated
You can simply use the timestamp method on the resulting datetime. It returns the epoch time of the datetime instance. This will work in almost any circumstance, especially since it is a simple alarm clock, but be aware of this warning from the docs:
Naive datetime instances are assumed to represent local time and this method relies on the platform C mktime() function to perform the conversion. Since datetime supports wider range of values than mktime() on many platforms, this method may raise OverflowError for times far in the past or far in the future.
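A minimal sketch of the approach (the function name is made up): build a datetime for today at the target time with datetime.combine, roll over a day if that moment has already passed, then call .timestamp().

```python
import datetime

def next_alarm_epoch(hour, minute):
    # Combine today's date with the target time; if that moment has
    # already passed, the alarm belongs to tomorrow.
    now = datetime.datetime.now()
    target = datetime.datetime.combine(now.date(), datetime.time(hour, minute))
    if target <= now:
        target += datetime.timedelta(days=1)
    return target.timestamp()

print(next_alarm_epoch(19, 30))  # epoch seconds of the next 19:30, local time
```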
Depending on your program architecture, you might also consider working with the number of seconds between two times, which you can get with simple subtraction (yielding a timedelta) and its total_seconds method:
import time
import datetime
start = datetime.datetime.now()
time.sleep(2)
end = datetime.datetime.now()
# print total seconds
print((end - start).total_seconds())
The more I read about datetime arithmetic, the bigger a headache I get.
There's lots of different kinds of time:
Civil time
UTC
TAI
UNIX time
system time
thread time
CPU time
And then the clocks can run faster or slower or jump backwards or forwards because of
daylight savings
moving across timezones
leap seconds
NTP synchronization
general relativity
And how these are dealt with depends in turn on:
Operating system
Hardware
Programming language
So please can somebody tell me, for my specific use case, the safest and most reliable way to measure a short interval? Here is what I am doing:
I'm making a game in Python (3.7.x) and I need to keep track of how long it has been since certain events. For example, how long the player has been holding a button, or how long since an enemy has spotted the player, or how long since a level was loaded. Timescales should be accurate to the millisecond (nanoseconds are overkill).
Here are scenarios I want to be sure are averted:
You play the game late at night. In your timezone, on that night, the clocks go forward an hour at 2am for DST, so the minutes go: 1:58, 1:59, 3:00, 3:01, 3:02. Every time-related variable in the game suddenly has an extra hour added to it -- it thinks you'd been holding down that button for an hour and 2 seconds instead of just 2 seconds. Catastrophe ensues.
The same, but the IERS decides to insert or subtract a leap second sometime that day. You play through the transition, and all time variables get an extra second added or subtracted. Catastrophe ensues.
You play the game on a train or plane and catastrophe ensues when you cross a timezone boundary and/or the International Date Line.
The game works correctly in the above scenarios on some hardware and operating systems, but not on others -- i.e. it breaks on Linux but not Windows, or vice versa.
And I can't really write tests for these since the problematic events come around so rarely. I need to get it right the first time. So, what time-related function do I need to use? I know there's plain old time.time(), but also a bewildering array of other options like
time.clock()
time.perf_counter()
time.process_time()
time.monotonic()
and then nanosecond variants of all of the above.
From reading the documentation it seems like time.monotonic() is the one I want. But if reading about all the details of timekeeping has taught me anything, it's that these things are never quite what they seem. Once upon a time, I thought I knew what a "second" was. Now I'm not so sure.
So, how do I make sure my game clocks work properly?
The documentation of the time module is the best place to look for details about each of those.
There, you can easily see that:
time.clock() is deprecated and should be replaced with other functions
time.process_time() counts only CPU time spent by your process, so it is not suitable for measuring wall clock time (which is what you need)
time.perf_counter() measures wall-clock time (it does include time elapsed during sleep), so it can also be used for intervals; like time.monotonic(), only the difference between two calls is meaningful
time.time() is just about right, but it will give bad timings if the user or the system adjusts the current clock
time.monotonic() - this seems to be the safest bet for measuring time intervals - note that it does not give you the current time at all, but it does give you a correct difference between two time points
As for the nanoseconds versions, you should use those only if you need nanoseconds.
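For the scenarios in the question, that boils down to a pattern like this (the 0.05 s sleep is just a stand-in for "the player holds a button"):

```python
import time

start = time.monotonic()            # unaffected by DST shifts, timezone
time.sleep(0.05)                    # crossings, or manual clock changes
elapsed = time.monotonic() - start  # only the difference is meaningful
print(f"held for {elapsed * 1000:.0f} ms")
```

Store the `time.monotonic()` value at the event (button press, enemy sighting, level load) and subtract it from the current value whenever you need the elapsed time.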
I am working on driving down the execution time on a program I've refactored, and I'm having trouble understanding the profiler output in PyCharm and how it relates to the output I would get if I run cProfile directly. (My output is shown below, with two lines of interest highlighted that I want to be sure I understand correctly before attempting to make fixes.) In particular, what do the Time and Own Time columns represent? I am guessing Own Time is the time consumed by the function, minus the time of any other calls made within that function, and time is the total time spent in each function (i.e. they just renamed tottime and cumtime, respectively), but I can't find anything that documents that clearly.
Also, what can I do to find more information about a particularly costly function using either PyCharm's profiler or vanilla cProfile? For example, _strptime seems to be costing me a lot of time, but I know it is being used in four different functions in my code. I'd like to see a breakdown of how those 2 million calls are spread across my various functions. I'm guessing there's a disproportionate number in the calc_near_geo_size_and_latency function, but I'd like more proof of that before I go rewriting code. (I realize that I could just profile the functions individually and compare, but I'm hoping for something more concise.)
I'm using Python 3.6 and PyCharm Professional 2018.3.
In particular, what do the Time and Own Time columns represent? I am guessing Own Time is the time consumed by the function, minus the time of any other calls made within that function, and time is the total time spent in each function (i.e. they just renamed tottime and cumtime, respectively), but I can't find anything that documents that clearly.
You can see definitions of own time and time here: https://www.jetbrains.com/help/profiler/Reference__Dialog_Boxes__Properties.html
Own time - Own execution time of the chosen function. The percentage of own time spent in this call related to overall time spent in this call in the parentheses.
Time - Execution time of the chosen function plus all time taken by functions called by this function. The percentage of time spent in this call related to time spent in all calls in the parentheses.
This is also confirmed by a small test:
Also, what can I do to find more information about a particularly costly function using either PyCharm's profiler or vanilla cProfile?
By default PyCharm uses cProfile as its profiler. Perhaps you're asking about using cProfile on the command line? There are plenty of examples of doing so here: https://docs.python.org/3.6/library/profile.html
For example, _strptime seems to be costing me a lot of time, but I know it is being used in four different functions in my code. I'd like to see a breakdown of how those 2 million calls are spread across my various functions.
Note that the act of measuring something will have an impact on the measurement retrieved. For a function or method that is called many times, especially 2 million, the profiler itself will have a significant impact on the measured value.
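That said, pstats can give you the per-caller breakdown you're after without profiling each function separately: print_callers restricted to the expensive function lists every caller with its call count and time. A sketch (the two functions here are hypothetical stand-ins for your own code):

```python
import cProfile
import datetime
import io
import pstats

def parse_timestamps(n):              # stand-in for a helper that calls strptime
    for _ in range(n):
        datetime.datetime.strptime("2018-11-01", "%Y-%m-%d")

def calc_near_geo_size_and_latency():  # hypothetical hot function
    parse_timestamps(200)

profiler = cProfile.Profile()
profiler.enable()
calc_near_geo_size_and_latency()
parse_timestamps(50)
profiler.disable()

out = io.StringIO()
# List everything that called a function matching "strptime",
# with per-caller call counts and times.
pstats.Stats(profiler, stream=out).print_callers("strptime")
print(out.getvalue())
```

In the output, each caller of `strptime` gets its own row, so a disproportionate share of the 2 million calls in one function will show up directly.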
I'm totally new to PsychoPy and I'm working with Builder. I'm not familiar with Python coding at all.
I have audio stimuli that have variable durations. In each trial, I want the second stimulus to start 500ms or 1500ms after the end of the first stimulus. Is there a way to do this in Builder? If I have to do it on Coder, what should I do?
Thank you very much!
Absolutely. Think of 500 ms and 1500 ms as two additional conditions that you loop over; these two conditions are crossed with the different durations.
In your conditions file, where you have the different durations (or you could just generate those with a random function, of course), add two rows for every duration, with a column "soa" (or whatever you want to call it) holding the two values 500 ms and 1500 ms. In the Builder interface you can choose whether the order of presentation should be sequential, randomized within block, or fully randomized across all trials (not just within block). Also, if you don't want it balanced (e.g. 20% 1500 ms and 80% 500 ms), you can just add the appropriate number of rows to achieve this balance (1 out of 5 being 1500 ms).
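Such a conditions file might look like this (column names are just examples; values in seconds, as Builder expressions usually expect):

```
duration  soa
1.2       0.5
1.2       1.5
0.8       0.5
0.8       1.5
```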
Nearly all demos handle trials in this way, so take a look in Builder --> Demos, click on the loop, and see how it's done there. Also read the relevant section of the online documentation, and see a video tutorial that also incorporates it.
In concrete terms, when you add a Sound component in Builder, you just need to add an expression in the "Start (time)" field that takes account of the duration of the first sound stimulus and the ISI for this trial.
So if you have a column for the ISI in the conditions file as Jonas suggests (let's say it is called "ISI") and a Sound component for the first auditory stimulus (called, say, "sound1"), then you could put this in the Start field of the second sound stimulus:
$sound1.getDuration() + ISI
The $ symbol indicates that this line is to be interpreted as a Python code expression and not as a literal duration.
This assumes that sound1 starts at the very beginning of a trial. If it starts, say 1 second into the trial, then just add a constant to the expression:
$1.0 + sound1.getDuration() + ISI
Your ISI column should contain values in seconds. If you prefer milliseconds, then do this:
$sound1.getDuration() + ISI/1000.0
Sorted by total time, the second-longest-executing function is "{built-in method mainloop}"? I looked at the same entry with pstats_viewer.py, clicked it, and it says:
Function                 Exclusive time   Inclusive time    Primitive calls   Total calls   Exclusive per call   Inclusive per call
Tkinter.py:359:mainloop  0.00s            561.03s (26.3%)   1                 1             0.00s                561.03s
What does this mean?
Edit
Here's part of the cProfile output from a longer run of my code. The more ODEs I solve, the more time is devoted to mainloop. This is crazy! I thought my runtime was getting killed by either branch divergence in my CUDA kernel or host-GPU memory transfers. God, I'm a horrible programmer!
How have I made Tkinter take so much of my runtime?
mainloop is the event loop in Tkinter. It waits for events and processes them as they come in.
This is a recurring thing that you will see in all GUIs as well as any other event-driven frameworks like Twisted or Tornado.
First of all, it's a lot easier to see if you change tabs to spaces, as in:
Function                 Exclusive time   Inclusive time    Primitive calls   Total calls   Exclusive per call   Inclusive per call
Tkinter.py:359:mainloop  0.00s            561.03s (26.3%)   1                 1             0.00s                561.03s
Exclusive time means time that the program counter was in that routine. For a top-level routine you would expect this to be practically zero.
Inclusive time means including time in all routines that the routine calls. For a top-level routine you would expect this to be practically 100%.
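The same pattern is easy to reproduce with cProfile directly: a top-level function that does nothing but wait on its callees has near-zero exclusive time (tottime) but large inclusive time (cumtime), exactly like mainloop. A sketch with made-up function names:

```python
import cProfile
import io
import pstats
import time

def wait_for_events():    # stand-in for the callee doing the actual waiting
    time.sleep(0.05)

def mainloop_like():      # top level: spends almost no time of its own
    wait_for_events()

profiler = cProfile.Profile()
profiler.runcall(mainloop_like)

out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats()
print(out.getvalue())
# mainloop_like shows tottime near 0 but cumtime near 0.05s
```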
(I don't understand what that 26.3% means.)
If you are trying to get more speed, what you need to do is find activity that 1) has a high percent inclusive time, and 2) that you can do something about.
This link shows the method I use.
After you speed something up, you will still find things that take a high percent inclusive time, but the overall elapsed time will be less.
Eventually you will get to a point where some things still take a high percent, but you can no longer figure out how to improve it.