I am looking for code to convert timestamps from some GPS data in a csv file to local time (in this case PST). I also have some other files that I will need to convert to CDT and EDT.
This is what the output looks like:
2019-09-18T07:07:48.000Z
I would like to create a separate column to the right in the Excel file for the date and another for the time, for example:
TIME_UTC                  DATE        TIME_PST
2019-09-18T07:07:48.000Z  09-18-2019  12:07:48 AM
I only know basic Python and nothing about working with Excel in Python, so any pointers would be super helpful!
Thank you!!!
By calling localize you tell pytz what timezone your time is in. So, in your example, you say that your date is in UTC, and then you call astimezone to convert it to the target timezone (Pacific). For example:
from datetime import datetime
import pytz

fmt = '%Y-%m-%d %H:%M:%S %Z%z'
utc_dt = pytz.utc.localize(datetime.utcnow())          # mark the naive datetime as UTC
pst_tz = pytz.timezone('US/Pacific')
pst_dt = pst_tz.normalize(utc_dt.astimezone(pst_tz))   # convert to Pacific time
print(pst_dt.strftime(fmt))
For more examples, visit here.
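If you prefer to handle the whole CSV in Python with pandas, here is a minimal sketch that builds the DATE and TIME_PST columns from the question; the file name gps_data.csv, the TIME_UTC column name, and the output file name are assumptions (writing the .xlsx also needs the openpyxl package):
import pandas as pd

df = pd.read_csv('gps_data.csv')
ts = pd.to_datetime(df['TIME_UTC'], utc=True).dt.tz_convert('US/Pacific')
df['DATE'] = ts.dt.strftime('%m-%d-%Y')          # e.g. 09-18-2019
df['TIME_PST'] = ts.dt.strftime('%I:%M:%S %p')   # e.g. 12:07:48 AM
df.to_excel('gps_data_local.xlsx', index=False)  # requires openpyxl
For the CDT and EDT files, swap 'US/Pacific' for 'US/Central' or 'US/Eastern'.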
If you want to use Excel Formula:
For the date:
=INT(SUBSTITUTE(LEFT(A2,LEN(A2)-1),"T"," ")-TIME(7,0,0))
For the Time:
=MOD(SUBSTITUTE(LEFT(A2,LEN(A2)-1),"T"," ")-TIME(7,0,0),1)
Then format the output cells with the desired formats: mm-dd-yyyy and hh:mm:ss AM/PM respectively. (The TIME(7,0,0) term is the 7-hour UTC offset for Pacific daylight time; adjust it for the CDT and EDT files.)
I'm working with a .sav (SPSS) file in Python. All the variables look fine after import when using pyreadstat (also when using pandas), except for the datetime variables: they are read in as exponential numbers of type float in Python, but their original SPSS format is dd-mmm-yy (e.g., 02-feb-2021) of type date.
This is what the date variable looks like:
1.383160e+10
Is there a way to convert this format to datetime using Python?
I've tried various ways of using the datetime and time modules, but what I get is a date from the year 2408.
import time

# Here I'm using the float from the first row in the dataframe
time.gmtime(13831603200)
The result:
time.struct_time(tm_year=2408, tm_mon=4, tm_mday=22, tm_hour=0, tm_min=0, tm_sec=0, tm_wday=1, tm_yday=113, tm_isdst=0)
When I use the datetime module:
from datetime import datetime
python_date = datetime.fromtimestamp(13831603200).strftime('%d-%b-%Y, %H:%M:%S')
print(python_date)
22-Apr-2408, 00:00:00
[How the datetime variable (Vdatesub) is showing when using Python][1]
[1]: https://i.stack.imgur.com/I7yza.png
This is answered under these two posts (one Python, one R):
Convert 'seconds since October 14, 1582' to Python datetime
Read SPSS file into R, the data format for date is wrong, and generate more variable
In short: the date is stored as the number of seconds since 14 Oct 1582, while the Python epoch starts at 01 Jan 1970.
You would need to calculate the number of seconds between 1582-10-14 and 1970-01-01 and subtract it from the timestamp value, as per this post:
Timestamp out of range for platform localtime()/gmtime() function
(That works out to 12,219,379,200 seconds.)
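A minimal sketch of that adjustment, using the float from the question (the variable names are just for illustration):
from datetime import datetime, timedelta, timezone

# seconds between the SPSS epoch (1582-10-14) and the Unix epoch (1970-01-01)
spss_offset = (datetime(1970, 1, 1) - datetime(1582, 10, 14)).total_seconds()  # 12219379200.0

spss_seconds = 13831603200  # value from the first row of the dataframe
python_date = datetime(1970, 1, 1, tzinfo=timezone.utc) + timedelta(seconds=spss_seconds - spss_offset)
print(python_date.strftime('%d-%b-%Y'))  # 02-Feb-2021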
I am migrating an InfluxDB database to MySQL. I have managed to export the Influx data to a CSV file, which is great, but now I am stuck with the date and time field that it has given me.
I have no idea what format it is in; after doing some research it seems to be an epoch time, but when I use Python to try to convert the timestamp to an ISO format, it isn't recognised as a valid timestamp. Any idea how to get this converted, ideally into separate date and time columns? The data that I have got is something like this:
time,absoluteHumidity
1578152602608558363,5.788981747966442
1578152608059500073,4.769760557208695
1578152613662193439,5.788981747966442
And the Python that I was using to try to convert it was this:
from datetime import datetime, timezone
print (datetime.fromtimestamp(1578152602608558363, timezone.utc))
Any help or suggestions would be appreciated!
According to the InfluxDB docs, timestamp values are stored with nanosecond precision.
However, the datetime.fromtimestamp method expects a floating point number whose integer part is in second precision.
So your approach is generally right; you just need to divide the Influx timestamp by 1e9 and it should just work:
from datetime import datetime, timezone
print(datetime.fromtimestamp(1578152602608558363 / 1e9, timezone.utc))
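If you want to convert the whole CSV and produce the separate date and time columns, here is a hedged sketch with pandas; the file name influx_export.csv and the new column names are assumptions:
import pandas as pd

df = pd.read_csv('influx_export.csv')
ts = pd.to_datetime(df['time'], unit='ns', utc=True)  # nanosecond epoch -> tz-aware datetimes
df['date'] = ts.dt.strftime('%Y-%m-%d')
df['time_of_day'] = ts.dt.strftime('%H:%M:%S')
print(df.head())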
I'm trying to parse a .csv file into a dataframe. The csv has multiple timezone offsets because daylight savings happened during the recording of the data (some rows at +01:00, others at +02:00).
After reading in the csv file, I have set up my code as follows:
df_vitals.Date_time = pd.to_datetime(df_vitals.Date_time, format ='%Y-%m-%d %H:%M:%S%z')
df_vitals.Date_time = df_vitals.Date_time.dt.tz_convert("Europe/Madrid")
Here Date_time is my column containing the mixed timezones. I get the following error:
AttributeError: Can only use .dt accessor with datetimelike values
Note that this works perfectly fine for my csv files with just one time zone (i.e. where no daylight savings happened)
How can I properly parse csv files that have more than one time zone in them?
Instead of using format, set the utc param of to_datetime:
utc (boolean): Return UTC DatetimeIndex if True (converting any tz-aware datetime.datetime objects as well).
df_vitals.Date_time = pd.to_datetime(df_vitals.Date_time, utc=True)
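A minimal sketch with hypothetical sample values showing the two offsets from the question, followed by the conversion to Europe/Madrid as before:
import pandas as pd

df_vitals = pd.DataFrame({'Date_time': ['2021-03-27 10:00:00+01:00',
                                        '2021-03-28 10:00:00+02:00']})
df_vitals.Date_time = pd.to_datetime(df_vitals.Date_time, utc=True)        # one tz-aware UTC dtype
df_vitals.Date_time = df_vitals.Date_time.dt.tz_convert('Europe/Madrid')   # then convert
print(df_vitals.Date_time)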
I am using the Google Sheets API to insert some data into my sheet.
Every row has a date cell with values in this format "2017-02-01 19:33:56+00:00". I would like it to be displayed in this format (02 January 2017), but I would like to keep the original value hidden in the same cell.
Is there a script or something that I can use to display the date in that format?
Just add a custom number format from the Format menu; this one should do the trick: "dd MMM yyyy". The underlying date doesn't change, so if you want to go back to the old format you always can. You can also use a script to switch between formats using range.setNumberFormat().
You can use this script to set the number format:
SpreadsheetApp.getActiveRange().setNumberFormat("dd MMM yyyy");
I'm not familiar with the ss+00:00 part, so I'm not sure what it means, but perhaps you can check the documentation below. You might be able to use range.getNumberFormat() to get and save that format for future use.
var formatString=SpreadsheetApp.getActiveRange().getNumberFormat();
Number Formats Documentation
This format "2017-02-01 19:33:56+00:00" is not being recognized for Google Sheets, it's because it has an additional value to the final that indicates the timezone of the date "+00:00", i decided to create an small script for getting ride of that timezone value and also add/sub hours and minutes to the date so i can see the correct hour in my timezone.
from datetime import datetime, timedelta

def formatDatetime(o_date, f_hours=0, f_minutes=0):
    # ----- Remove the timezone from the date string
    date_a = o_date.split('+')
    date = date_a[0]
    # ----- Parse, then add/subtract hours and minutes
    date = datetime.strptime(date, '%Y-%m-%d %H:%M:%S')
    date = date + timedelta(hours=f_hours)
    date = date + timedelta(minutes=f_minutes)
    date_f = format(date, '%m/%d/%Y %H:%M:%S')
    return date_f
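For example (hypothetical values; shifting six hours back from UTC is just an illustration):
print(formatDatetime('2017-02-01 19:33:56+00:00', f_hours=-6))  # 02/01/2017 13:33:56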
Now Google Sheets recognizes this value as a date, so I can apply a custom format to show a nice date in my worksheet.
I have many strings of dates and times (or both), like these:
'Thu Jun 18 19:30:21 2015'
'21:07:52'
I want to convert these times to the proper datetime format while also changing the timezone to UTC. The current timezone is 4 hours behind UTC. Is there a way that I can tell Python to add 4 hours while converting the formats? Can it also take care of the date in UTC, so that when the hour goes past 24 the date changes and the time resets?
I will ultimately be inserting these into a mysql table into fields with the 'datetime' and 'time' data type, but they all need to be in UTC.
I would approach this with time.strptime() to parse the source time string, time.mktime() to convert the resulting struct_time into an epoch time (seconds since 1970-01-01 00:00:00), and time.strftime() to format the time as you like.
For the timezone adjustment, you could add 4*3600 to the epoch time value or, more generally, append a timezone string to the source and use %Z to parse it.
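A minimal sketch of that approach, assuming a fixed 4-hour offset as in the question (DST changes are not handled):
import time

fmt_in = '%a %b %d %H:%M:%S %Y'
fmt_out = '%Y-%m-%d %H:%M:%S'  # suitable for a MySQL DATETIME column

tm = time.strptime('Thu Jun 18 19:30:21 2015', fmt_in)  # parse the source string
epoch = time.mktime(tm)                                  # seconds since the epoch
utc_string = time.strftime(fmt_out, time.localtime(epoch + 4 * 3600))
print(utc_string)  # 2015-06-18 23:30:21 -- the date rolls over automatically past midnight
For the time-only strings like '21:07:52' you would first need to attach a date before mktime() can produce a sensible epoch value.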