So I'm working with time to format seconds as MM:SS.mmm, i.e. minutes:seconds followed by tenths, hundredths, and thousandths of a second. This is basically to calculate lap times for a racing simulator.
My input comes in seconds, so for instance 124.6073 seconds -> 02:04.607.
I've produced this function and it works; I just want to know how I can optimize it, since it feels hammered together, as I couldn't think of anything better at the time.
def convert_lap_times(seconds_per_lap):
    minutes = seconds_per_lap // 60
    seconds = seconds_per_lap % 60
    rest = seconds_per_lap % 1          # fractional part of the seconds
    rest_str = str(rest).split('0.')    # drop the leading "0." via string parsing
    decimal_rest = rest_str[1]
    return "%02i:%02i.%0.3s" % (minutes, seconds, decimal_rest)
So rest is the decimal part of the number, and everything worked fine up to that point. The "big" issue is getting rid of the integer part: for 0.607 I just want 607, so I parsed it as a string... is there a better way?
Thanks a lot and all the best!
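For the narrower question of dropping the integer part without going through a string, one numeric option (a sketch, not from the thread) is math.modf, which splits a float into its fractional and whole parts:

import math

frac, whole = math.modf(124.6073)  # frac is roughly 0.6073, whole is 124.0
milliseconds = int(frac * 1000)    # 607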
You could do it like this:
def convert_lap_times(seconds_per_lap):
    return "{:02.0f}:{:06.3f}".format(*divmod(seconds_per_lap, 60))

print(convert_lap_times(124.6073))  # 02:04.607
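Here divmod does the floor division and the modulo in one call, returning the minutes and the leftover seconds as a pair:

print(divmod(124.6073, 60))  # roughly (2.0, 4.6073)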
I prompt the user to input the times they start and finish their job. Then I need to calculate what they will earn, given a salary of 97 currency/hour. The answer should not have any decimals, so it should be rounded. I can't seem to get it to work, though.
As shown below, I tried taking the difference between the two inputs from the user, splitting it into hours and minutes, and then doing the calculations.
difference = round(float(finishing_time)-float(start_time), 2)
hours, minutes = str(difference).split(".")
salary_hours = int(hours)*97
salary_minutes = int(minutes)//60*97
salary = salary_hours + salary_minutes
So if start_time = 8.30 and finishing_time = 11.15 the salary should be 267, but I get 291 currency.
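Tracing the posted code with those inputs shows where the 291 comes from: the fractional part of the decimal difference gets misread as minutes.

difference = round(11.15 - 8.30, 2)          # 2.85 -- decimal hours, not 2 h 85 min
hours, minutes = str(difference).split(".")  # '2' and '85'
print(int(hours) * 97 + int(minutes) // 60 * 97)  # 194 + 97 = 291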
A couple of things to be careful of here, chiefly the rounding that occurs at every level, just as it does when you do the math by hand. There is a reason one typically rounds only once the entire calculation has been performed; rounding intermediate results can give a vastly different answer, as you pointed out.
I'd tackle this perhaps by doing something like this:
from datetime import datetime

# a quick way to generate a datetime object with the start time:
start_time = datetime.now()

# replace the hours and minutes you want; in your case:
start_time = start_time.replace(hour=8, minute=30)
end_time = start_time.replace(hour=11, minute=15)
# calling replace returns a new, distinct datetime object
def calculate_salary(start_time, finish_time, pay_rate):
    # subtracting two datetimes yields a timedelta
    delta = finish_time - start_time
    # convert the elapsed seconds to hours
    hours_worked = delta.seconds / 3600
    # calculate pay
    pay = hours_worked * pay_rate
    return pay
In this case, calling the function gives:
In [1]: calculate_salary(start_time, end_time, 97)
Out[1]: 266.75
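Since the question asks for a whole number, wrapping the call in round gives the expected result:

print(round(calculate_salary(start_time, end_time, 97)))  # 267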
While I don't advocate doing calculations on time without a time module, I'll assume you know what you're doing and that your calculations are simple, i.e. they won't roll over midnight and the finish time will always be later than the start time on the same day. With that in mind, the following code should produce your result without using the datetime module. However, like william bright's answer, the datetime module would be my preference for code like this.
def get_hours(time_string):
    time_split = time_string.split(".")
    whole_hours = int(time_split[0])
    fraction_hours = int(time_split[1]) / 60
    return whole_hours + fraction_hours

start_time = input("start time: ")
finish_time = input("finish_time: ")

total_hours = get_hours(finish_time) - get_hours(start_time)
salary = total_hours * 97
print(round(salary))
OUTPUT
start time: 8.30
finish_time: 11.15
267
So, my bad for perhaps being unclear in my question, but since this is a work in progress for the next couple of weeks/months, what I came up with is the following:
starting_time = input("At what time did you start working? ")
finishing_time = input("At what time did you finish working? ")

hours1, minutes1 = starting_time.split(".")
hours2, minutes2 = finishing_time.split(".")

# convert the minutes to a fraction of an hour
minutes1 = float(minutes1) / 60
starting_time_new = float(hours1) + minutes1

minutes2 = float(minutes2) / 60
finishing_time_new = float(hours2) + minutes2

salary = round((finishing_time_new - starting_time_new) * 97)

print("Started working at:", starting_time)
print("Stopped working at:", finishing_time)
print("Your salary is", salary, "currency.")
The solution, starting from where I was, was to focus on converting the minutes to the correct decimal fraction instead of worrying about the hours too.
I am well aware that it is far from perfect; in fact, it is probably really bad. However, I am new to programming in Python and taking a course to get better.
Bright minds of Stackoverflow, I have a quest for you.
Currently I am running a loop in which calculations and data acquisition happen, and these get more and more complicated over time. I want each run of the loop to last exactly one second. Because the calculations keep taking longer, a simple "sleep(1)" at the end does not really help.
from time import sleep

while True:
    # here calculations happen that take more and more time
    print('some of the data')
    sleep(1)
I was hoping to use datetime to get the seconds/milliseconds before and after these calculations and feed the difference into the sleep command, but I can't quite get my head around it. Can anyone help me out?
from datetime import datetime
from time import sleep

a = datetime.now()
# calculations
b = datetime.now()
calctime = (b - a).total_seconds()
sleep(1 - calctime)
Try this (note that time.sleep raises an error for a negative argument, so the waiting time is clamped to zero when the calculations already took a second or more):
from datetime import datetime
import time

def test():
    a = datetime.now()
    # calculations
    b = datetime.now()
    calctime = (b - a).total_seconds()
    print("one")
    # if the calculation already took 1 second or more, make the waiting time 0
    time.sleep((1 - calctime) if (1 - calctime) > 0.0 else 0)
    print("two")

test()
from datetime import datetime
from time import sleep

a = datetime.now()
# calculations
b = datetime.now()
calctime = b - a
ms = calctime.microseconds
# only sleep if the calculations took less than a full second
if calctime.seconds == 0:
    sleep(1 - ms / 1000000)
Additional info here: Python speed testing - Time Difference - milliseconds
I'd like to create a revenue counter for the sales team at work and would love to use Python. E.g. Joe Bloggs shifts his target from 22.1 to 23.1 (a difference of 1.0). I'd like the counter to tick evenly from 22.1 to 23.1 over an hour.
I've created this script, which works fine when counting over a minute (it runs 2 seconds over the minute); however, when it's supposed to run for an hour, it runs for only 47 minutes.
Question: Does anyone know why it runs faster when I set it to an hour? Is time.sleep inaccurate?
import time

def rev_counter(time_length):
    time_start = time.strftime("%H:%M:%S")
    prev_pp = 22.1
    new_pp = 23.1
    difference = new_pp - prev_pp
    iter_difference = difference / 100000.  # divide by 100,000 to show 10 decimal places
    time_difference = (time_length / difference) / 100000.
    i = prev_pp
    while i < new_pp:
        print("%.10f" % i)
        i = i + iter_difference
        time.sleep(time_difference)
    time_end = time.strftime("%H:%M:%S")
    print("Time started at", time_start)
    print("Time ended at", time_end)

rev_counter(60)    # 60 seconds. Returns 62 seconds
rev_counter(600)   # 10 minutes. Returns 10 minutes, 20 secs
rev_counter(3600)  # 1 hour. Returns 47 minutes
Please note this quote from the Python documentation for time.sleep()
The actual suspension time may be less than that requested because any caught signal will terminate the sleep() following execution of that signal's catching routine. Also, the suspension time may be longer than requested by an arbitrary amount because of the scheduling of other activity in the system.
As a suggestion, if faced with this problem, I would use a variable to track the time at which the interval starts. When sleep wakes up, check whether the expected time has elapsed; if not, sleep again for the difference, and so on.
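A minimal sketch of that idea (the wait_until helper and the loop skeleton are mine, not from the original post):

import time

def wait_until(deadline):
    # keep sleeping until the wall clock reaches the deadline,
    # re-checking after every wake-up
    while True:
        remaining = deadline - time.time()
        if remaining <= 0:
            return
        time.sleep(remaining)

interval = 1.0
next_tick = time.time() + interval
while True:
    # ... calculations ...
    wait_until(next_tick)
    next_tick += interval  # schedule against absolute times to avoid drift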
First of all, your loop doesn't only contain sleep statements -- the things you do between the calls to time.sleep take time too, so the more iterations you run, the larger the share of the total time spent outside sleep.
Is time.sleep inaccurate?
Yes. Or well. Quite.
I come from a real-time signal processing background. PC clocks are only somewhat accurate, and the time you spend in your OS, your standard libraries, your scripting language runtime, and your scripting logic between the moment a piece of hardware notifies you that your time has elapsed and the moment your software notices is significant.
I just noticed time.sleep taking way too long (5 to 30000 times longer than requested for input values between .0001 and 1 second) and, searching for an answer, found this thread. I ran some tests and it consistently does this (see code and results below). The weird thing is that after I restarted, it was back to normal and working very accurately. When the code started to hang, time.sleep was taking 10000 times too long?!
So a restart is a temporary solution, but I am not sure what the cause or the permanent solution is.
import numpy as np
import time

def test_sleep(N, w):
    data = []
    for i in xrange(N):
        t0 = time.time()
        time.sleep(w)
        t1 = time.time()
        data.append(t1 - t0)
    print "ave = %s, min = %s, max = %s" % (np.average(data), np.min(data), np.max(data))
    return data
data1 = test_sleep(20,.0001)
Out: ave = 2.95489487648, min = 1.11787080765, max = 3.23506307602
print data1
Out: [3.1929759979248047,
3.121081829071045,
3.1982388496398926,
3.1221959590911865,
3.098078966140747,
3.131525993347168,
3.12644100189209,
3.1535091400146484,
3.2167508602142334,
3.1277999877929688,
3.1103289127349854,
3.125699996948242,
3.1129801273345947,
3.1223208904266357,
3.1313750743865967,
3.1280829906463623,
1.117870807647705,
1.3357980251312256,
3.235063076019287,
3.189779043197632]
data2 = test_sleep(20, 1)
Out: ave = 9.44276217222, min = 1.00008392334, max = 10.9998381138
print data2
Out: [10.999573945999146,
10.999622106552124,
3.8115758895874023,
1.0000839233398438,
3.3502109050750732,
10.999613046646118,
10.99983811378479,
10.999617099761963,
10.999662160873413,
10.999619960784912,
10.999650955200195,
10.99962306022644,
10.999721050262451,
10.999620914459229,
10.999532222747803,
10.99965500831604,
10.999596118927002,
10.999563932418823,
10.999600887298584,
4.6992621421813965]
This may be something simple, yet I can't figure out a way to do it; I am trying to print the seconds. I use:
r = pygame.time.get_ticks()  # milliseconds since pygame.init() was called
print(r // 1000)
It prints out the seconds, but I want it to print each second just once. Right now the output is something like 11111112222222233333333..., when I am looking for something like 1 2 3...
If anyone can help me out, I'd really appreciate it. Thank you.
You need to track the last value for seconds, and only print if the new value for seconds changes.
last_seconds = None
while True:
    # Run game
    r = pygame.time.get_ticks()
    seconds = r // 1000
    if seconds != last_seconds:
        print(seconds)
        last_seconds = seconds
You could do this with a closure: write a function that captures the "last ticks" and compares them with the "current ticks". That way you only print the "time since init" once per second.
Example:
def get_time_since_init():
    last_ticks = pygame.time.get_ticks()

    def wrapper():
        nonlocal last_ticks  # allow rebinding the captured variable
        ticks = pygame.time.get_ticks()
        delta = (ticks - last_ticks) / 1000
        if delta > 1:
            last_ticks = ticks
            print(ticks // 1000)

    return wrapper
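Hypothetical usage inside the game loop (assuming pygame is already initialised):

timer = get_time_since_init()
while True:
    # ... handle events, update, draw ...
    timer()  # prints the elapsed seconds at most once per second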
I have to time my implementation of an algorithm for one of my classes, and I am using the time.time() function to do so. After implementing it, I have to run the algorithm on a number of data files, containing both small and larger data sets, in order to formally analyse its complexity.
Unfortunately, on the small data sets I get a runtime of 0 seconds, even though the function reports values as fine-grained as 0.000000000000000001 on the bigger data sets, and I cannot believe the smaller sets really take less than that.
My question is: is there a problem with using this function (and if so, is there another function I can use that has better precision)? Or am I doing something wrong?
Here is my code, in case you need it:
import sys, time
import random
from utility import parseSystemArguments, printResults
...

def main(ville):
    start = time.time()
    solution = dynamique(ville)  # Algorithm implementation
    end = time.time()
    return (end - start, solution)

if __name__ == "__main__":
    sys.argv.insert(1, "-a")
    sys.argv.insert(2, "3")
    (algoNumber, ville, printList) = parseSystemArguments()
    (algoTime, solution) = main(ville)
    printResults(algoTime, solution, printList)
The printResults function:
def printResults(time, solution, printList=True):
    print("Temps d'execution = " + str(time) + "s")
    if printList:
        print(solution)
The solution to my problem was to use the timeit module instead of the time module.
import timeit
...

def main(ville):
    start = timeit.default_timer()
    solution = dynamique(ville)
    end = timeit.default_timer()
    return (end - start, solution)
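For what it's worth, on Python 3.3+ timeit.default_timer is an alias for time.perf_counter, so the same measurement can also be written directly as:

import time

start = time.perf_counter()
solution = dynamique(ville)  # stand-in for the algorithm call above
elapsed = time.perf_counter() - start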
Don't confuse the resolution of the system time with the resolution of a floating point number. The system time on a computer is only as fine-grained as the system clock updates, and how often the clock updates varies from machine to machine. So, to ensure that you will see a difference with time, make the code under test execute for a millisecond or more. Try putting it into a loop like this:
start = time.time()
k = 100000
for i in range(k):
    solution = dynamique(ville)
end = time.time()
return ((end - start) / k, solution)
In the final tally you then divide by the number of loop iterations to learn how long your code actually takes to run once. You may need to increase k to get a good measure of the execution time, or decrease it if your computer spends a very long time in the loop.