I am looking to create an if-then statement that involves the current time of day. For example, I want something like: if it is past 2 pm, then run this function.
I have tried using the time module but I can't seem to find a way to get just the time of day without the extra stuff like the date. Any help?
Here is a start, and I think it'll be enough for you to get to the answer and use it how you need.
import time
print(time.strftime("%Y-%m-%d %H:%M"))
print(time.strftime("I can format the date and time in many ways. Time is: %H:%M"))
Output (when I ran it):
2017-06-21 10:40
I can format the date and time in many ways. Time is: 10:40
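To turn this into the conditional asked about, one common approach (a sketch using the standard datetime module rather than time) is to compare datetime.time objects directly, which drops the date entirely:

```python
from datetime import datetime, time

now = datetime.now().time()  # current local time of day, date stripped away

if now >= time(14, 0):  # 14:00 is 2 pm
    print("It is past 2 pm, running the function")
else:
    print("It is not yet 2 pm")
```

Because time objects compare chronologically, no string formatting is needed for the comparison itself.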
Related
I have to convert the date and time returned by rsync's `ls` into a Unix epoch time float, like the one returned by time.time().
For me here at this moment it looks like:
2017/05/24 hh:mm:ss.
But as far as I know the format can vary from machine to machine, since rsync uses ssh and, I expect, the remote machine's native ls.
Is there any easy way to universally convert most common human readable date and time back to the Unix time float?
To be clear: I want to be able to convert any common textual representation of a date and time into that float.
If datetime can do this I cannot find how at the moment.
You need to use time.strptime first and then calendar.timegm.
There are different options depending on whether you want to convert to local time or to UTC; have a look at the documentation for that.
To get a fractional part, you need to supply hours, minutes, seconds and sub-second precision. In your example you give only the year, month and day, so the rest is assumed to be zero, i.e. there is no sub-second component and thus no fractional part.
Here is a minimal example:
import calendar, time
t = time.strptime('2017/05/24', '%Y/%m/%d')
epoch = calendar.timegm(t)  # strptime already returns a struct_time
print(epoch)
1495584000
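If the string also carries a time with fractional seconds, one way to recover the float part (a sketch, assuming the input is in UTC; the sample timestamp is illustrative) is to parse with datetime.strptime, whose %f directive handles the sub-second digits, and add the microseconds back after timegm:

```python
import calendar
from datetime import datetime

# Parse a date-time string including fractional seconds
dt = datetime.strptime('2017/05/24 13:45:30.250', '%Y/%m/%d %H:%M:%S.%f')

# timegm interprets the struct_time as UTC; microseconds are added back by hand
epoch = calendar.timegm(dt.timetuple()) + dt.microsecond / 1e6
print(epoch)  # 1495633530.25
```

This keeps the whole conversion in the standard library and yields a true float, matching what time.time() returns.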
I get this error when trying to append two Pandas DFs together in a for loop:
Aggdata=Aggdata.append(Newdata)
This is the full error:
File "pandas\tslib.pyx", line 4096, in pandas.tslib.tz_localize_to_utc (pandas
\tslib.c:69713)
pytz.exceptions.NonExistentTimeError: 2017-03-12 02:01:24
However, my files do not contain such a timestamp, though I do have ones like 03/12/17 00:45:26 or 03/12/17 00:01:24, i.e. two hours before the daylight-saving switch. And if I manually delete the offending row, I get the same error for the next row with a time between midnight and 1 am on the 12th of March.
My original date/time column has no TZ info, but I calculate another column in EST, before the concatenation and localize it to EST, with time with TZ information:
`data['EST_DateTimeStamp']=pd.DatetimeIndex(pd.to_datetime(data['myDate'])).tz_localize('US/Eastern').tz_convert('US/Eastern')`
Doing some research here, I understand that 2 to 3 am on the 12th should trigger such an error, but why midnight to 1 am? So am I localizing it incorrectly? And then why is the error raised on the append line, and not on the localization line?
I was able to reproduce this behavior in a very simple MCVE, saved here:
https://codeshare.io/GLjrLe
It absolutely boggles my mind that the error is raised on the third append, and only if the next 3 appends follow. In other words, if I comment out the last 3 copies of the append, it works fine. I can't imagine what is happening.
Thank you for reading.
In case someone else may still find this helpful:
After talking it over with #hashcode55, the solution was to upgrade Pandas on my server, as this was likely a bug in my previous version of that module.
The problem seems to occur at the daylight-saving switch: once a year there are local times that do not exist, and in the opposite direction there are duplicate times.
This can happen when, say, your input dates are converted from UTC to "local time" by adding a fixed offset. When you then try to localize them, you will hit nonexistent times during that hour (or 30 minutes if you are in Adelaide).
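A minimal reproduction of the error, and a way around it in recent pandas versions via the nonexistent argument (the timestamp and zone here are illustrative, chosen to fall inside the spring-forward gap):

```python
import pandas as pd

idx = pd.DatetimeIndex(['2017-03-12 02:30:00'])  # inside the spring-forward gap

try:
    idx.tz_localize('US/Eastern')
except Exception as exc:  # pytz.exceptions.NonExistentTimeError
    print(type(exc).__name__, exc)

# Newer pandas (0.24+) can shift such times forward instead of raising:
print(idx.tz_localize('US/Eastern', nonexistent='shift_forward'))
```

The shift_forward option moves the impossible wall-clock time to the first valid instant after the gap, which is usually what batch pipelines want instead of a hard failure.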
I intend to find the time difference between two time variables, in seconds. The catch is that I am referring to time in a different zone. I have managed to find a solution, but it is a mix of pandas datetime functions and the Python datetime library. I suspect the objective can be achieved with pandas alone and with fewer lines of code. Below is my code; I'd appreciate any guidance on how I can compute final_output more efficiently.
import pandas as pd
from datetime import timedelta
local_time = pd.to_datetime('now').tz_localize('UTC').tz_convert('Asia/Dubai')
t1 = timedelta(hours=local_time.now('Asia/Dubai').hour, minutes=local_time.now('Asia/Dubai').minute)
t2 = timedelta(hours=9, minutes=14)
final_output = (t2 - t1).seconds
You may want to convert both times to UTC, then find the difference. Programmers usually like to work with UTC until the time reaches the front end.
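A sketch of that suggestion using only pandas (the 09:14 target is taken from your t2; the variable names are illustrative). Because both timestamps are timezone-aware, the subtraction happens on the UTC timeline automatically:

```python
import pandas as pd

now_dubai = pd.Timestamp.now(tz='Asia/Dubai')

# Build today's 09:14 in the same zone, then compare in a common frame.
target = now_dubai.normalize() + pd.Timedelta(hours=9, minutes=14)

# Both timestamps are tz-aware, so the difference is computed in UTC terms.
diff_seconds = (target - now_dubai).total_seconds()
print(diff_seconds)
```

Note that total_seconds() is signed, unlike the .seconds attribute in the original code, which silently wraps negative differences around a 24-hour day.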
I am trying to automate some data processing on an Excel file. In the processing I use a macro that, as I understand it, converts the date column from "mm/dd/yyyy" into the number of days since January 1, 1900, per Excel convention. When I try to do a similar thing in Python 2.7.4, the result is 2 days off. For example, 2/25/2015 becomes 42060 when done by the Excel macro, but when I run this code:
import datetime
gotDay = datetime.date(2015,2,25)
epoch = datetime.date(1900,1,1)
print((gotDay-epoch).days)
I get 42058.
I could totally just add 2 to everything in Python, but I was wondering why this happens.
Thanks!
The correct answer is 42058, so there's something wrong with the Excel macro.
My guess is that Excel is counting both the start and end date as a day, when really you're only interested in the time between those dates.
It's like asking how many days there are between today and tomorrow. You could say one, which is what Python is doing, or you could say 3 (today, the day between, and tomorrow), which is what I'm guessing your Excel macro is doing.
This is an old thread, but since it came up in google, I wanted to put in an answer. The off-by-two problem is explained like this:
One day because 1900-01-01 is day #1, not day #0. In Excel, if you convert 1900/1/1 into a number, it gives the answer 1, not 0. In python:
print((datetime.date(1900,1,1) - datetime.date(1900,1,1)).days)  # prints 0
Another day because Excel implemented a bug in Lotus 1-2-3 for backwards compatibility. Apparently, Lotus 1-2-3 incorrectly assumed that 1900 was a leap year.
https://support.microsoft.com/en-us/help/214326/excel-incorrectly-assumes-that-the-year-1900-is-a-leap-year
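Putting both corrections together: a common trick is to use 1899-12-30 as the Excel "epoch", which absorbs both the day-#1 offset and the phantom leap day at once (a sketch; note it is only valid for dates after 1900-02-28, since the Lotus bug lives in the Jan-Feb 1900 range):

```python
import datetime

# 1899-12-30 compensates for Excel counting 1900-01-01 as day 1
# and for the fictitious 1900-02-29 inherited from Lotus 1-2-3.
excel_epoch = datetime.date(1899, 12, 30)
serial = (datetime.date(2015, 2, 25) - excel_epoch).days
print(serial)  # 42060, matching the Excel macro
```

This is cleaner than adding a bare 2 to every result, because the constant documents why the offset exists.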
I'm using Python 2.6.6 and PyQt4. I have a start QDateTime object and I am iteratively adding 60 seconds to create a list of every minute within a given time span. I have discovered that there are several cases where adding two different seconds values to the QDateTime object produces the same time.
Here's an example of the problem:
from PyQt4 import QtCore
start = QtCore.QDateTime.fromString("2010-11-01 00:00", "yyyy-MM-dd hh:mm")
print start.addSecs(522540).toString("yyyy-MM-dd hh:mm")
print start.addSecs(526140).toString("yyyy-MM-dd hh:mm")
And the resulting output:
2010-11-07 01:09
2010-11-07 01:09
I've been banging my head on the keyboard trying to figure this out. What am I doing incorrectly?
It probably depends on your locale settings:
it seems DST in the United States and other countries ended on 2010-11-07, so I'd bet it's a result of that. (Note that your two offsets, 522540 and 526140 seconds, differ by exactly 3600 seconds: the one hour that repeats when the clocks fall back.)
If you get any strange values from doing calculations with dates, always check whether there has been a DST change or a leap year, and consider different locales. Sadly, time isn't always as linear as it seems.
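The same repeated wall-clock hour can be seen with the standard library alone (a sketch using Python 3.9+'s zoneinfo, and assuming a US Eastern zone since that matches the 2010-11-07 switch; the fold attribute selects which of the two occurrences you mean):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/New_York")
first = datetime(2010, 11, 7, 1, 9, tzinfo=tz)           # first pass, still EDT
second = datetime(2010, 11, 7, 1, 9, fold=1, tzinfo=tz)  # second pass, EST

# Identical wall-clock strings, yet one hour apart on the UTC timeline.
print(first.utcoffset(), second.utcoffset())
```

So two different addSecs offsets landing on the same printed time is expected behavior: the string representation simply cannot distinguish the two passes through 01:09.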