I am having some trouble pivoting a dataframe with a datetime value as the index.
My df looks like this:
Timestamp Value
2016-01-01 00:00:00 16.546900
2016-01-01 01:00:00 16.402375
2016-01-01 02:00:00 16.324250
where the Timestamp is a datetime64[ns]. I am trying to pivot the table so that it looks like this:
Hour 0 1 2 4 ....
Date
2016-01-01 16.5 16.4 16.3 17 ....
....
....
I've tried using the code below but am getting an error when I run it.
df3 = pd.pivot_table(df2,index=np.unique(df2.index.date),columns=np.unique(df2.index.hour),values=df2.Temp)
KeyError Traceback (most recent call last)
in ()
1 # Pivot Table
----> 2 df3 = pd.pivot_table(df2,index=np.unique(df2.index.date),columns=np.unique(df2.index.hour),values=df2.Temp)
~\Anaconda3\lib\site-packages\pandas\core\reshape\pivot.py in pivot_table(data, values, index, columns, aggfunc, fill_value, margins, dropna, margins_name)
56 for i in values:
57 if i not in data:
---> 58 raise KeyError(i)
59
60 to_filter = []
KeyError: 16.5469
Any help or insights would be greatly appreciated.
A different way of accomplishing this without the lambda (see the answer further down) is to create the index and columns directly from the DatetimeIndex. Note that pivot_table expects values to be a column name; the original call passed the column's data (df2.Temp) instead, which is why it raised KeyError: 16.5469.
df2 = pd.pivot_table(df, index=df.index.date, columns=df.index.hour, values="Value")
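A minimal, self-contained version against the question's sample (a sketch, assuming Timestamp is the index and the value column is named Value):
import pandas as pd

df = pd.DataFrame(
    {'Value': [16.546900, 16.402375, 16.324250]},
    index=pd.to_datetime(['2016-01-01 00:00:00',
                          '2016-01-01 01:00:00',
                          '2016-01-01 02:00:00']))
df.index.name = 'Timestamp'

# index/columns accept arrays; values must be a column *name*
df2 = pd.pivot_table(df, index=df.index.date, columns=df.index.hour, values='Value')
print(df2)
#                   0          1         2
# 2016-01-01  16.5469  16.402375  16.32425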
I slightly extended the input data as shown below (assuming no duplicated entries for the same date/hour):
Timestamp Value
2016-01-01 00:00:00 16.546900
2016-01-01 01:00:00 16.402375
2016-01-01 02:00:00 16.324250
2016-01-01 04:00:00 16.023928
2016-01-03 04:00:00 16.101919
2016-01-05 23:00:00 13.405928
It looks a bit awkward, but something like the following works.
# split 'YYYY-MM-DD HH:MM:SS' on the space to get the date part
df2['Date'] = df2.Timestamp.apply(lambda x: str(x).split(" ")[0])
# take the time part and keep only the hour
df2['Hour'] = df2.Timestamp.apply(lambda x: str(x).split(" ")[1].split(":")[0])
df3 = pd.pivot_table(df2, values='Value', index='Date', columns='Hour')
[Output]
Hour 00 01 02 04 23
Date
2016-01-01 16.5469 16.402375 16.32425 16.023928 NaN
2016-01-03 NaN NaN NaN 16.101919 NaN
2016-01-05 NaN NaN NaN NaN 13.405928
Finally, if your columns need to be integers:
df3.columns = [int(x) for x in df3.columns]
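Equivalently, the whole Index can be cast in one go:
df3.columns = df3.columns.astype(int)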
Hope this helps.
Adapting @Seanny123's answer above for an arbitrary cadence:
import datetime
from datetime import date

import numpy as np
import pandas as pd
import pytz

start = [2018, 1, 1, 0, 0, 0]
end = [date.today().year, date.today().month, date.today().day]
quant = 'freq'
cadence = '5min'

sTime_tmp = datetime.datetime(start[0], start[1], start[2], tzinfo=pytz.UTC)
eTime_tmp = datetime.datetime(end[0], end[1], end[2], tzinfo=pytz.UTC)

t = pd.date_range(start=sTime_tmp,
                  end=eTime_tmp,
                  freq=cadence)

# zero-filled frame on the regular grid, then pivot time-of-day against date:
# one row per 5-minute slot, one column per day
keo = pd.DataFrame(np.nan, index=t, columns=[quant])
keo[quant] = 0
keo = pd.pivot_table(keo, index=keo.index.time, columns=keo.index.date, values=quant)
keo
Related
I am trying to convert a dataframe in which hourly data appears in distinct columns, like here:
... to a dataframe that only contains two columns ['datetime', 'value'].
For example:
Datetime              value
2020-01-01 01:00:00       0
2020-01-01 02:00:00       0
...                     ...
2020-01-01 09:00:00     106
2020-01-01 10:00:00    2852
Any solution without using a for-loop?
Use DataFrame.melt, then convert the Date values to datetimes and add the hour offsets via to_timedelta after stripping the H from the column names:
df = df.melt('Date')
td = pd.to_timedelta(df.pop('variable').str.strip('H').astype(int), unit='H')
df['Date'] = pd.to_datetime(df['Date']) + td
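For instance, with hypothetical hour columns named '1H', '2H', '9H' (the real column names were elided in the question), the three lines above behave like this:
import pandas as pd

df = pd.DataFrame({'Date': ['2020-01-01'],
                   '1H': [0], '2H': [0], '9H': [106]})
df = df.melt('Date')   # -> columns: Date, variable, value
td = pd.to_timedelta(df.pop('variable').str.strip('H').astype(int), unit='H')
df['Date'] = pd.to_datetime(df['Date']) + td
print(df.sort_values('Date'))
#                  Date  value
# 0 2020-01-01 01:00:00      0
# 1 2020-01-01 02:00:00      0
# 2 2020-01-01 09:00:00    106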
You can do it by applying several functions to the DataFrame:
from datetime import datetime

import pandas as pd

# Example DataFrame
df = pd.DataFrame({'date': ['1/1/2020', '1/2/2020', '1/3/2020'],
                   'h1': [0, 222, 333],
                   'h2': [44, 0, 0],
                   'h3': [1, 2, 3]})

# To simplify, only hours in range 1...3 are used here; change this to 25 for a full day
HOURS_COUNT = 4
# one list of hour numbers per row, and one {hour: value} dict per row
df["hours"] = df.apply(lambda row: [h for h in range(1, HOURS_COUNT)], axis=1)
df["hour_values"] = df.apply(lambda row: {h: row[f"h{h}"] for h in range(1, HOURS_COUNT)}, axis=1)
# one row per (date, hour), then look each value up in the per-row dict
df = df.explode("hours")
df["value"] = df.apply(lambda row: row["hour_values"][row["hours"]], axis=1)
df["date_full"] = df.apply(lambda row: datetime.strptime(f"{row['date']} {row['hours']}", "%m/%d/%Y %H"), axis=1)
df = df[["date_full", "value"]]
df = df.loc[df["value"] > 0]
So initial DataFrame is:
date h1 h2 h3
0 1/1/2020 0 44 1
1 1/2/2020 222 0 2
2 1/3/2020 333 0 3
And result DataFrame is:
date_full value
0 2020-01-01 02:00:00 44
0 2020-01-01 03:00:00 1
1 2020-01-02 01:00:00 222
1 2020-01-02 03:00:00 2
2 2020-01-03 01:00:00 333
2 2020-01-03 03:00:00 3
I am looking for an elegant solution for selecting common timestamps from multiple dataframes. I know that something like this could work, supposing df is the dataframe of common timestamps:
df = df1[df1['Timestamp'].isin(df2['Timestamp'])]
However, if I have several other dataframes, this solution becomes quite inelegant. Therefore, I have been wondering whether there is an easier approach when working with multiple dataframes.
So, let's say for example that I have:
import numpy as np
import pandas as pd

date1 = pd.date_range(start='1/1/2018', end='1/02/2018', freq='H')
date2 = pd.date_range(start='1/1/2018', end='1/02/2018', freq='15min')
date3 = pd.date_range(start='1/1/2018', end='1/02/2018', freq='45min')
date4 = pd.date_range(start='1/1/2018', end='1/02/2018', freq='30min')
data1 = np.random.randn(len(date1))
data2 = np.random.randn(len(date2))
data3 = np.random.randn(len(date3))
data4 = np.random.randn(len(date4))
df1 = pd.DataFrame(data = {'date1' : date1, 'data1' : data1})
df2 = pd.DataFrame(data = {'date2' : date2, 'data2' : data2})
df3 = pd.DataFrame(data = {'date3' : date3, 'data3' : data3})
df4 = pd.DataFrame(data = {'date4' : date4, 'data4' : data4})
I would like as output a dataframe containing the common timestamps of the four dataframes as well as the respective data column out of each of them, for example (just to illustrate the shape; the numbers don't reflect the actual result):
common Timestamp data1 data2 data3 data4
0 2018-01-01 00:00:00 -1.129439 1.2312 1.11 -0.83
1 2018-01-01 01:00:00 0.853421 0.423 0.241 0.123
2 2018-01-01 02:00:00 -1.606047 1.001 -0.005 -0.12
3 2018-01-01 03:00:00 -0.668267 0.98 1.11 -0.23
[...]
You can use reduce from functools to perform the complete inner merge. We'll need to rename the columns just so the merge is a bit easier.
from functools import reduce
lst = [df1.rename(columns={'date1': 'Timestamp'}), df2.rename(columns={'date2': 'Timestamp'}),
df3.rename(columns={'date3': 'Timestamp'}), df4.rename(columns={'date4': 'Timestamp'})]
reduce(lambda l,r: l.merge(r, on='Timestamp'), lst)
Timestamp data1 data2 data3 data4
0 2018-01-01 00:00:00 -0.971201 -0.978107 1.163339 0.048824
1 2018-01-01 03:00:00 -1.063810 0.125318 -0.818835 -0.777500
2 2018-01-01 06:00:00 0.862549 -0.671529 1.902272 0.011490
3 2018-01-01 09:00:00 1.030826 -1.306481 0.438610 -1.817053
4 2018-01-01 12:00:00 -1.191646 -1.700694 1.007190 -1.932421
5 2018-01-01 15:00:00 -1.803248 0.415256 0.690243 1.387650
6 2018-01-01 18:00:00 -0.304502 0.514616 0.974318 -0.062800
7 2018-01-01 21:00:00 -0.668874 -1.262635 -0.504298 -0.043383
8 2018-01-02 00:00:00 -0.943615 1.010958 1.343095 0.119853
Alternatively, concat with an 'inner' join after setting the Timestamp as the index:
pd.concat([x.set_index('Timestamp') for x in lst], axis=1, join='inner')
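If you want Timestamp back as a regular column rather than the index, chain a reset_index():
pd.concat([x.set_index('Timestamp') for x in lst], axis=1, join='inner').reset_index()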
If it would be acceptable to name every timestamp column in the same way (date for example), something like this could work:
def common_stamps(*args):  # *args lets you feed it any number of dataframes
    # dropna() removes all rows with uncommon stamps
    df = (pd.concat([df_i.set_index('date') for df_i in args], axis=1)
            .dropna()
            .reset_index())
    return df
df = common_stamps(df1, df2, df3, df4)
print(df)
Output:
date data1 data2 data3 data4
0 2018-01-01 00:00:00 -0.667090 0.487676 -1.001807 -0.200328
1 2018-01-01 03:00:00 -1.639815 2.320734 -0.396013 -1.838732
2 2018-01-01 06:00:00 0.469890 0.626428 0.040004 -2.063454
3 2018-01-01 09:00:00 -0.916928 -0.260329 -0.598313 0.383281
4 2018-01-01 12:00:00 0.132670 1.771344 -0.441651 0.664980
5 2018-01-01 15:00:00 -0.761542 0.255955 1.378836 -1.235562
6 2018-01-01 18:00:00 -0.120083 0.243652 -1.261733 1.045454
7 2018-01-01 21:00:00 0.339921 -0.901171 1.492577 -0.797161
8 2018-01-02 00:00:00 -1.397864 -0.173818 -0.581590 -0.402472
I have a pandas dataframe with over 1000 timestamps (below) that I would like to loop through:
2016-02-22 14:59:44.561776
I'm having a hard time splitting this timestamp into two columns: 'date' and 'time'. The date format can stay the same, but the time needs to be converted to CST (including milliseconds).
Thanks for the help
Had the same problem and this worked for me.
Suppose the date column in your dataset is called "date":
import pandas as pd
df = pd.read_csv(file_path)
df['Dates'] = pd.to_datetime(df['date']).dt.date
df['Time'] = pd.to_datetime(df['date']).dt.time
This will give you two columns, "Dates" and "Time", with the split dates.
I'm not sure why you would want to do this in the first place, but if you really must...
df = pd.DataFrame({'my_timestamp': pd.date_range('2016-1-1 15:00', periods=5)})
>>> df
my_timestamp
0 2016-01-01 15:00:00
1 2016-01-02 15:00:00
2 2016-01-03 15:00:00
3 2016-01-04 15:00:00
4 2016-01-05 15:00:00
df['new_date'] = [d.date() for d in df['my_timestamp']]
df['new_time'] = [d.time() for d in df['my_timestamp']]
>>> df
my_timestamp new_date new_time
0 2016-01-01 15:00:00 2016-01-01 15:00:00
1 2016-01-02 15:00:00 2016-01-02 15:00:00
2 2016-01-03 15:00:00 2016-01-03 15:00:00
3 2016-01-04 15:00:00 2016-01-04 15:00:00
4 2016-01-05 15:00:00 2016-01-05 15:00:00
The conversion to CST is more tricky. I assume that the current timestamps are 'unaware', i.e. they do not have a timezone attached? If not, how would you expect to convert them?
For more details:
https://docs.python.org/2/library/datetime.html
How to make an unaware datetime timezone aware in python
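If you do know the source zone of the naive timestamps, the conversion is vectorized (a sketch, assuming UTC purely for illustration):
df['cst_timestamp'] = (df['my_timestamp']
                       .dt.tz_localize('UTC')              # assumed source zone
                       .dt.tz_convert('America/Chicago'))  # CST/CDT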
EDIT
An alternative method that only loops once across the timestamps instead of twice:
new_dates, new_times = zip(*[(d.date(), d.time()) for d in df['my_timestamp']])
df = df.assign(new_date=new_dates, new_time=new_times)
The easiest way is to use the pandas.Series dt accessor, which works on columns with a datetime dtype (see pd.to_datetime). For this case, pd.date_range creates an example column with a datetime dtype, therefore use .dt.date and .dt.time:
df = pd.DataFrame({'full_date': pd.date_range('2016-1-1 10:00:00.123', periods=10, freq='5H')})
df['date'] = df['full_date'].dt.date
df['time'] = df['full_date'].dt.time
In [166]: df
Out[166]:
full_date date time
0 2016-01-01 10:00:00.123 2016-01-01 10:00:00.123000
1 2016-01-01 15:00:00.123 2016-01-01 15:00:00.123000
2 2016-01-01 20:00:00.123 2016-01-01 20:00:00.123000
3 2016-01-02 01:00:00.123 2016-01-02 01:00:00.123000
4 2016-01-02 06:00:00.123 2016-01-02 06:00:00.123000
5 2016-01-02 11:00:00.123 2016-01-02 11:00:00.123000
6 2016-01-02 16:00:00.123 2016-01-02 16:00:00.123000
7 2016-01-02 21:00:00.123 2016-01-02 21:00:00.123000
8 2016-01-03 02:00:00.123 2016-01-03 02:00:00.123000
9 2016-01-03 07:00:00.123 2016-01-03 07:00:00.123000
If your timestamps are already datetime dtype (not strings), go through the .dt accessor:
df["date"] = df["timestamp"].dt.date
df["time"] = df["timestamp"].dt.time
If your timestamp is a string, you can parse it using the datetime module:
from datetime import datetime
data1["timestamp"] = df["timestamp"].apply(lambda x: \
datetime.strptime(x,"%Y-%m-%d %H:%M:%S.%f"))
Source:
http://pandas.pydata.org/pandas-docs/stable/timeseries.html
If your timestamp is a string, you can convert it to a datetime object:
from datetime import datetime
timestamp = '2016-02-22 14:59:44.561776'
dt = datetime.strptime(timestamp, '%Y-%m-%d %H:%M:%S.%f')
From then on you can bring it to whatever format you like.
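For example:
print(dt.date())  # 2016-02-22
print(dt.time())  # 14:59:44.561776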
Try
s = '2016-02-22 14:59:44.561776'
date, time = s.split()
then convert time as needed.
If you want to further split the time,
hour, minute, second = time.split(':')
try this:
def time_date(datetime_obj):
    # split 'YYYY-MM-DD HH:MM:SS.ffffff' into date and time, dropping microseconds
    date_time = datetime_obj.split(' ')
    time = date_time[1].split('.')
    return date_time[0], time[0]
In addition to @Alexander's answer, if you want a one-liner:
df['new_date'],df['new_time'] = zip(*[(d.date(), d.time()) for d in df['my_timestamp']])
If your timestamp is a string, you can convert it to a pandas Timestamp before splitting it.
#convert to pandas timestamp
data["old_date"] = pd.to_datetime(data.old_date)
#split columns
data["new_date"] = data["old_date"].dt.date
data["new_time"] = data["old_date"].dt.time
I have a dataset of samples covering multiple days, all with a timestamp.
I want to select rows within a specific time window. E.g. all rows that were generated between 1pm and 3 pm every day.
This is a sample of my data in a pandas dataframe:
22 22 2018-04-12T20:14:23Z 2018-04-12T21:14:23Z 0 6370.1
23 23 2018-04-12T21:14:23Z 2018-04-12T21:14:23Z 0 6368.8
24 24 2018-04-12T22:14:22Z 2018-04-13T01:14:23Z 0 6367.4
25 25 2018-04-12T23:14:22Z 2018-04-13T01:14:23Z 0 6365.8
26 26 2018-04-13T00:14:22Z 2018-04-13T01:14:23Z 0 6364.4
27 27 2018-04-13T01:14:22Z 2018-04-13T01:14:23Z 0 6362.7
28 28 2018-04-13T02:14:22Z 2018-04-13T05:14:22Z 0 6361.0
29 29 2018-04-13T03:14:22Z 2018-04-13T05:14:22Z 0 6359.3
.. ... ... ... ... ...
562 562 2018-05-05T08:13:21Z 2018-05-05T09:13:21Z 0 6300.9
563 563 2018-05-05T09:13:21Z 2018-05-05T09:13:21Z 0 6300.7
564 564 2018-05-05T10:13:14Z 2018-05-05T13:13:14Z 0 6300.2
565 565 2018-05-05T11:13:14Z 2018-05-05T13:13:14Z 0 6299.9
566 566 2018-05-05T12:13:14Z 2018-05-05T13:13:14Z 0 6299.6
How do I achieve that? I need to ignore the date and just evaluate the time component. I could traverse the dataframe in a loop and evaluate the datetime that way, but there must be a simpler way to do it.
I converted the messageDate, which was read as a string, to a datetime by
df["messageDate"]=pd.to_datetime(df["messageDate"])
But after that I got stuck on how to filter on time only.
Any input appreciated.
datetime columns have a DatetimeProperties object (the .dt accessor), from which you can extract components such as the hour and filter on them:
import pandas as pd
df = pd.DataFrame(
[
'2018-04-12T12:00:00Z', '2018-04-12T14:00:00Z','2018-04-12T20:00:00Z',
'2018-04-13T12:00:00Z', '2018-04-13T14:00:00Z', '2018-04-13T20:00:00Z'
],
columns=['messageDate']
)
df
messageDate
# 0 2018-04-12 12:00:00
# 1 2018-04-12 14:00:00
# 2 2018-04-12 20:00:00
# 3 2018-04-13 12:00:00
# 4 2018-04-13 14:00:00
# 5 2018-04-13 20:00:00
df["messageDate"] = pd.to_datetime(df["messageDate"])
time_mask = (df['messageDate'].dt.hour >= 13) & \
(df['messageDate'].dt.hour <= 15)
df[time_mask]
# messageDate
# 1 2018-04-12 14:00:00
# 4 2018-04-13 14:00:00
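If the timestamps are on the index, DataFrame.between_time selects a time-of-day window directly (a sketch against the same frame):
df.set_index('messageDate').between_time('13:00', '15:00')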
I hope the code is self-explanatory. You can always ask questions.
import pandas as pd
# Prepping data for example
dates = pd.date_range('1/1/2018', periods=7, freq='H')
data = {'A' : range(7)}
df = pd.DataFrame(index = dates, data = data)
print(df)
# A
# 2018-01-01 00:00:00 0
# 2018-01-01 01:00:00 1
# 2018-01-01 02:00:00 2
# 2018-01-01 03:00:00 3
# 2018-01-01 04:00:00 4
# 2018-01-01 05:00:00 5
# 2018-01-01 06:00:00 6
# Creating a mask to filter the value we with to have or not.
# Here, we use df.index because the index is our datetime.
# If the datetime is a column, you can always say df['column_name']
mask = (df.index > '2018-1-1 01:00:00') & (df.index < '2018-1-1 05:00:00')
print(mask)
# [False False True True True False False]
df_with_good_dates = df.loc[mask]
print(df_with_good_dates)
# A
# 2018-01-01 02:00:00 2
# 2018-01-01 03:00:00 3
# 2018-01-01 04:00:00 4
df=df[(df["messageDate"].apply(lambda x : x.hour)>13) & (df["messageDate"].apply(lambda x : x.hour)<15)]
You can use x.minute, x.second similarly.
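The same filter without the per-row apply, using the vectorized .dt accessor:
df = df[(df["messageDate"].dt.hour > 13) & (df["messageDate"].dt.hour < 15)]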
Try this after ensuring messageDate is indeed in datetime format, as you have done:
df.set_index('messageDate',inplace=True)
choseInd = [ind for ind in df.index if (ind.hour >= 13) & (ind.hour <= 15)]
df_select = df.loc[choseInd]
You can do the same even without making the datetime column the index, as the apply/lambda answer shows; having the datetime as the index rather than a numerical one just makes your dataframe easier to read.
I have a dataframe with IDs and timestamps as a multi-index. The index is sorted by IDs and timestamps, and I want to pick the latest timestamp for each ID. For example:
IDs timestamp value
0 2010-10-30 1
2010-11-30 2
1 2000-01-01 300
2007-01-01 33
2010-01-01 400
2 2000-01-01 11
So basically the result I want is
IDs timestamp value
0 2010-11-30 2
1 2010-01-01 400
2 2000-01-01 11
What is the command to do that in pandas?
Given this setup:
import pandas as pd
import numpy as np
import io
content = io.StringIO("""\
IDs timestamp value
0 2010-10-30 1
0 2010-11-30 2
1 2000-01-01 300
1 2007-01-01 33
1 2010-01-01 400
2 2000-01-01 11""")
df = pd.read_table(content, header=0, sep=r'\s+', parse_dates=[1])
df.set_index(['IDs', 'timestamp'], inplace=True)
Using reset_index followed by groupby:
df.reset_index(['timestamp'], inplace=True)
print(df.groupby(level=0).last())
yields
timestamp value
IDs
0 2010-11-30 00:00:00 2
1 2010-01-01 00:00:00 400
2 2000-01-01 00:00:00 11
This does not feel like the best solution, however. There should be a way to do this without calling reset_index...
As you point out in the comments, last ignores NaN values. To avoid skipping NaN values, you could use groupby/agg like this:
df.reset_index(['timestamp'], inplace=True)
grouped = df.groupby(level=0)
print(grouped.agg(lambda x: x.iloc[-1]))
One can also use
df.groupby("IDs").tail(1)
This will take the last row of each label in level "IDs" and will not ignore NaN values.
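Since the index is already sorted by timestamp within each ID (as the question states), tail(1) picks the latest row; against the setup above (before any reset_index):
print(df.groupby("IDs").tail(1))
#                 value
# IDs timestamp
# 0   2010-11-30      2
# 1   2010-01-01    400
# 2   2000-01-01     11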