I have a df such as
ID | Half Hour Bucket | clock in time | clock out time | Rate
232 | 4/1/19 8:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54
342 | 4/1/19 8:30 PM | 4/1/19 7:12 PM | 4/1/19 7:22 PM | 0.23
232 | 4/1/19 7:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54
I want my output to be
ID | Half Hour Bucket | clock in time | clock out time | Rate | Mins
232 | 4/1/19 8:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54 |
342 | 4/1/19 8:30 PM | 4/1/19 7:12 PM | 4/1/19 7:22 PM | 0.23 |
232 | 4/1/19 7:00 PM | 4/1/19 7:12 PM | 4/1/19 10:45 PM | 0.54 |
where Mins represents the difference between clock out time and clock in time.
But each row should only contain the minutes that fall within that row's half hour bucket.
For example, for ID 342 it would be ten minutes, and the 10 mins would be on that row.
But for ID 232 the clock in to clock out time spans 3 hours. I would only want the 30 mins for 8:00 to 8:30 in the first row and the 18 mins in the third row. For the half hour buckets like 8:30-9:00 or 9:00-9:30 that don't exist in the original rows, I would want to create new rows in the same df that contain NaNs for everything except the half hour bucket and mins fields.
The 30 mins from 8:00-8:30 would stay in the first row, but I would want 5 new rows for all the half hour buckets that aren't 4/1/19 8:00 PM, with only the half hour bucket and the rate carrying over from the original row. Is this possible?
I thank anyone for their time!
Realised my first answer probably wasn't what you wanted. This version, hopefully, is. It was a bit more involved than I first assumed!
Create Data
First of all create a dataframe to work with, based on that supplied in the question. The resultant formatting isn't quite the same but that would be easily fixed, so I've left it as-is here.
import math
import pandas as pd

# Create a dataframe to work with from the data provided in the question
columns = ['id', 'half_hour_bucket', 'clock_in_time', 'clock_out_time', 'rate']
data = [[232, '4/1/19 8:00 PM', '4/1/19 7:12 PM', '4/1/19 10:45 PM', 0.54],
        [342, '4/1/19 8:30 PM', '4/1/19 7:12 PM', '4/1/19 7:22 PM', 0.23],
        [232, '4/1/19 7:00 PM', '4/1/19 7:12 PM', '4/1/19 10:45 PM', 0.54]]
df = pd.DataFrame(data, columns=columns)
def convert_cols_to_dt(df):
    # Convert relevant columns to datetime format
    for col in df:
        if col not in ['id', 'rate']:
            df[col] = pd.to_datetime(df[col])
    return df

df = convert_cols_to_dt(df)
# Create the mins column
df['mins'] = (df.clock_out_time - df.clock_in_time)
Output:
id half_hour_bucket clock_in_time clock_out_time rate mins
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 0 days 03:33:00.000000000
1 342 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 0 days 00:10:00.000000000
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 0 days 03:33:00.000000000
Solution
Next define a simple function that returns a list whose length equals the number of 30-minute intervals in the mins column.
def upsample_list(x):
    multiplier = math.ceil(x.total_seconds() / (60 * 30))
    return list(range(multiplier))
And apply this to the dataframe:
df['samples'] = df.mins.apply(upsample_list)
Next, create a new row for each list item in the 'samples' column (using the answer provided by Roman Pekar here):
s = df.apply(lambda x: pd.Series(x['samples']), axis=1).stack().reset_index(level=1, drop=True)
s.name = 'sample'
Join s to the dataframe and clean up the extra columns:
df = df.drop('samples', axis=1).join(s, how='inner').drop('sample', axis=1)
Which gives us this:
id half_hour_bucket clock_in_time clock_out_time rate mins
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
0 232 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
1 342 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 00:10:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
2 232 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
Nearly there!
Reset the index:
df = df.reset_index(drop=True)
Set duplicate rows to NaN:
df = df.mask(df.duplicated())
Which gives:
id half_hour_bucket clock_in_time clock_out_time rate mins
0 232.0 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
1 NaN NaT NaT NaT NaN NaT
2 NaN NaT NaT NaT NaN NaT
3 NaN NaT NaT NaT NaN NaT
4 NaN NaT NaT NaT NaN NaT
5 NaN NaT NaT NaT NaN NaT
6 NaN NaT NaT NaT NaN NaT
7 NaN NaT NaT NaT NaN NaT
8 342.0 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 00:10:00
9 232.0 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
10 NaN NaT NaT NaT NaN NaT
11 NaN NaT NaT NaT NaN NaT
12 NaN NaT NaT NaT NaN NaT
13 NaN NaT NaT NaT NaN NaT
14 NaN NaT NaT NaT NaN NaT
15 NaN NaT NaT NaT NaN NaT
16 NaN NaT NaT NaT NaN NaT
Lastly, forward fill the half_hour_bucket and rate columns.
df[['half_hour_bucket', 'rate']] = df[['half_hour_bucket', 'rate']].ffill()
Final output:
id half_hour_bucket clock_in_time clock_out_time rate mins
0 232.0 2019-04-01 20:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
1 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
2 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
3 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
4 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
5 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
6 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
7 NaN 2019-04-01 20:00:00 NaT NaT 0.54 NaT
8 342.0 2019-04-01 20:30:00 2019-04-01 19:12:00 2019-04-01 19:22:00 0.23 00:10:00
9 232.0 2019-04-01 19:00:00 2019-04-01 19:12:00 2019-04-01 22:45:00 0.54 03:33:00
10 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
11 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
12 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
13 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
14 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
15 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
16 NaN 2019-04-01 19:00:00 NaT NaT 0.54 NaT
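One caveat: the forward fill repeats the original bucket value on every generated row. If each new row's bucket should instead advance by 30 minutes, as the question implies, something like the following could be appended (a sketch, not part of the solution above; it assumes identical bucket values only ever occur in consecutive rows, which holds for this data):
# Sketch: shift each generated row's bucket forward by 30 minutes based on its
# position within its block of repeated bucket values.
offset = df.groupby('half_hour_bucket').cumcount() * pd.Timedelta(minutes=30)
df['half_hour_bucket'] = df['half_hour_bucket'] + offset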
My goal is to select the column Sabah in dataframe prdt and enter each of its values into the repeated rows labelled Sabah in dataframe prcal.
prcal
Vakit Start_Date End_Date Start_Time End_Time
0 Sabah 2022-01-01 2022-01-01 NaN NaN
1 Güneş 2022-01-01 2022-01-01 NaN NaN
2 Öğle 2022-01-01 2022-01-01 NaN NaN
3 İkindi 2022-01-01 2022-01-01 NaN NaN
4 Akşam 2022-01-01 2022-01-01 NaN NaN
..........................................................
2184 Sabah 2022-12-31 2022-12-31 NaN NaN
2185 Güneş 2022-12-31 2022-12-31 NaN NaN
2186 Öğle 2022-12-31 2022-12-31 NaN NaN
2187 İkindi 2022-12-31 2022-12-31 NaN NaN
2188 Akşam 2022-12-31 2022-12-31 NaN NaN
2189 rows × 5 columns
prdt
Day Sabah Güneş Öğle İkindi Akşam Yatsı
0 2022-01-01 06:51:00 08:29:00 13:08:00 15:29:00 17:47:00 19:20:00
1 2022-01-02 06:51:00 08:29:00 13:09:00 15:30:00 17:48:00 19:21:00
2 2022-01-03 06:51:00 08:29:00 13:09:00 15:30:00 17:48:00 19:22:00
3 2022-01-04 06:51:00 08:29:00 13:09:00 15:31:00 17:49:00 19:22:00
4 2022-01-05 06:51:00 08:29:00 13:10:00 15:32:00 17:50:00 19:23:00
...........................................................................
360 2022-12-27 06:49:00 08:27:00 13:06:00 15:25:00 17:43:00 19:16:00
361 2022-12-28 06:50:00 08:28:00 13:06:00 15:26:00 17:43:00 19:17:00
362 2022-12-29 06:50:00 08:28:00 13:07:00 15:26:00 17:44:00 19:18:00
363 2022-12-30 06:50:00 08:28:00 13:07:00 15:27:00 17:45:00 19:18:00
364 2022-12-31 06:50:00 08:28:00 13:07:00 15:28:00 17:46:00 19:19:00
365 rows × 7 columns
I selected every Sabah row with prcal.iloc[::6, :] and made a list from prdt['Sabah'].
When assigning prcal.iloc[::6, :] = prdt['Sabah'][0:365] I get a ValueError:
ValueError: Must have equal len keys and value when setting with an iterable
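The slice prcal.iloc[::6, :] spans all five columns, so pandas cannot broadcast 365 single values into it. Assigning to one target column avoids the error; a sketch, assuming the times belong in Start_Time:
# Sketch: select the Sabah rows by label rather than by position, and write into
# a single column. Start_Time is an assumption about where the values belong.
# .to_numpy() sidesteps index alignment between the two frames.
sabah_rows = prcal['Vakit'].eq('Sabah')
prcal.loc[sabah_rows, 'Start_Time'] = prdt['Sabah'].to_numpy()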
I have a dataset with three inputs X1, X2, X3, including date and time.
The X3 column contains 0s and 5s. For each day, I want to take the time of the first row whose X3 value is 5 as the start time and treat it as time 0.
The other rows where X3 is 5 should not change; only the first such time of every day should be set to time 0.
date time x3
10/3/2018 6:15:00 0
10/3/2018 6:45:00 5
10/3/2018 7:45:00 0
10/3/2018 9:00:00 0
10/3/2018 9:25:00 0
10/3/2018 9:30:00 0
10/3/2018 11:00:00 0
10/3/2018 11:30:00 0
10/3/2018 13:30:00 0
10/3/2018 13:50:00 5
10/3/2018 15:00:00 0
10/3/2018 15:25:00 0
10/3/2018 16:25:00 0
10/3/2018 18:00:00 0
10/3/2018 19:00:00 0
10/3/2018 19:30:00 0
10/3/2018 20:00:00 0
10/3/2018 22:05:00 0
10/3/2018 22:15:00 5
10/3/2018 23:40:00 0
10/4/2018 6:58:00 5
10/4/2018 13:00:00 0
10/4/2018 16:00:00 0
10/4/2018 17:00:00 0
As you can see, the X3 column has values 0 and 5, along with date and time.
Taking only the first 5 value per day, the desired output is:
10/3/2018 6:45:00 5 start time 6:45:00 convert 00:00:00
10/3/2018 13:50:00 5 Not taking
10/3/2018 22:15:00 5 Not taking
10/4/2018 6:58:00 5 start time 6:58:00 convert 00:00:00
I just want to code it like this. Can anyone help me solve this problem?
The code below gives the time difference for every 5 row. I don't want the per-row differences; I just want to read the start time and convert it to time 0.
I tried this code, and it gave the time difference for each row as well:
df['time_diff'] = pd.to_datetime(df['date'] + " " + df['time'],
                                 format='%d/%m/%Y %H:%M:%S', dayfirst=True)
mask = df['x3'].ne(0)
df['Duration'] = df[mask].groupby(['date','x3'])['time_diff'].transform('first')
df['Duration'] = df['time_diff'].sub(df['Duration']).dt.total_seconds().div(3600)
This gave me a time duration for each of the 5 values.
Here is exactly what I want:
To filter only the first value of 5 per group, add DataFrame.drop_duplicates:
df['time_diff'] = pd.to_datetime(df['date'] + " " + df['time'],
                                 format='%d/%m/%Y %H:%M:%S', dayfirst=True)
mask = df['x3'].eq(5)
df['Duration'] = (df[mask].drop_duplicates(['date','x3'])
                          .groupby(['date','x3'])['time_diff']
                          .transform('first'))
df['Duration'] = df['time_diff'].sub(df['Duration']).dt.total_seconds().div(3600)
print (df)
date time x3 time_diff Duration
0 10/3/2018 6:15:00 0 2018-03-10 06:15:00 NaN
1 10/3/2018 6:45:00 5 2018-03-10 06:45:00 0.0
2 10/3/2018 7:45:00 0 2018-03-10 07:45:00 NaN
3 10/3/2018 9:00:00 0 2018-03-10 09:00:00 NaN
4 10/3/2018 9:25:00 0 2018-03-10 09:25:00 NaN
5 10/3/2018 9:30:00 0 2018-03-10 09:30:00 NaN
6 10/3/2018 11:00:00 0 2018-03-10 11:00:00 NaN
7 10/3/2018 11:30:00 0 2018-03-10 11:30:00 NaN
8 10/3/2018 13:30:00 0 2018-03-10 13:30:00 NaN
9 10/3/2018 13:50:00 5 2018-03-10 13:50:00 NaN
10 10/3/2018 15:00:00 0 2018-03-10 15:00:00 NaN
11 10/3/2018 15:25:00 0 2018-03-10 15:25:00 NaN
12 10/3/2018 16:25:00 0 2018-03-10 16:25:00 NaN
13 10/3/2018 18:00:00 0 2018-03-10 18:00:00 NaN
14 10/3/2018 19:00:00 0 2018-03-10 19:00:00 NaN
15 10/3/2018 19:30:00 0 2018-03-10 19:30:00 NaN
16 10/3/2018 20:00:00 0 2018-03-10 20:00:00 NaN
17 10/3/2018 22:05:00 0 2018-03-10 22:05:00 NaN
18 10/3/2018 22:15:00 5 2018-03-10 22:15:00 NaN
19 10/3/2018 23:40:00 0 2018-03-10 23:40:00 NaN
20 10/4/2018 6:58:00 5 2018-04-10 06:58:00 0.0
21 10/4/2018 13:00:00 0 2018-04-10 13:00:00 NaN
22 10/4/2018 16:00:00 0 2018-04-10 16:00:00 NaN
23 10/4/2018 17:00:00 0 2018-04-10 17:00:00 NaN
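If, beyond marking the start rows, every row should be expressed relative to its day's start time, broadcasting the first 5-time across each day before subtracting is one option (my sketch, not from the original answer):
# Sketch: each day's reference time is its first row where x3 == 5 (an assumption
# based on the question). where() blanks the non-5 rows so 'first' skips them.
day_start = df['time_diff'].where(df['x3'].eq(5)).groupby(df['date']).transform('first')
df['hours_from_start'] = df['time_diff'].sub(day_start).dt.total_seconds().div(3600)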
I have two data frames like the following: data frame A has datetimes down to the minute, while data frame B only has the hour.
df:A
dataDate original
2018-09-30 11:20:00 3
2018-10-01 12:40:00 10
2018-10-02 07:00:00 5
2018-10-27 12:50:00 5
2018-11-28 19:45:00 7
df:B
dataDate count
2018-09-30 10:00:00 300
2018-10-01 12:00:00 50
2018-10-02 07:00:00 120
2018-10-27 12:00:00 234
2018-11-28 19:05:00 714
I'd like to merge the two on the basis of date and hour, so that data frame A ends up with all its rows filled from the merge on date and hour.
I can try to do it via
A['date'] = A.dataDate.dt.date
B['date'] = B.dataDate.dt.date
A['hour'] = A.dataDate.dt.hour
B['hour'] = B.dataDate.dt.hour
and then merge
merge_df = pd.merge(A, B, how='left', on=['date', 'hour'])
but it's a very long process. Is there a more efficient way to perform the same operation with pandas' time series or date functionality?
Use map if you need to append only one column from B to A, with floor to set any minutes and seconds to 0:
d = dict(zip(B.dataDate.dt.floor('H'), B['count']))
A['count'] = A.dataDate.dt.floor('H').map(d)
print (A)
dataDate original count
0 2018-09-30 11:20:00 3 NaN
1 2018-10-01 12:40:00 10 50.0
2 2018-10-02 07:00:00 5 120.0
3 2018-10-27 12:50:00 5 234.0
4 2018-11-28 19:45:00 7 714.0
For a general solution, use DataFrame.join:
A.index = A.dataDate.dt.floor('H')
B.index = B.dataDate.dt.floor('H')
A = A.join(B, lsuffix='_left')
print (A)
dataDate_left original dataDate count
dataDate
2018-09-30 11:00:00 2018-09-30 11:20:00 3 NaT NaN
2018-10-01 12:00:00 2018-10-01 12:40:00 10 2018-10-01 12:00:00 50.0
2018-10-02 07:00:00 2018-10-02 07:00:00 5 2018-10-02 07:00:00 120.0
2018-10-27 12:00:00 2018-10-27 12:50:00 5 2018-10-27 12:00:00 234.0
2018-11-28 19:00:00 2018-11-28 19:45:00 7 2018-11-28 19:05:00 714.0
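If several columns from B are needed but you'd rather not touch the index, a merge on a temporary floored key also works (my sketch; hour_key is just a throwaway column name, not part of the original answer):
# Sketch: join on an hour-floored helper column instead of the index.
A2 = A.assign(hour_key=A.dataDate.dt.floor('H'))
B2 = B.assign(hour_key=B.dataDate.dt.floor('H')).drop(columns='dataDate')
out = A2.merge(B2, on='hour_key', how='left').drop(columns='hour_key')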
Fairly new to python and pandas here.
I make a query that's giving me back a timeseries. I'm never sure how many data points I receive from the query (run for a single day), but what I do know is that I need to resample them to contain 24 points (one for each hour in the day).
Printing m3hstream gives
[(1479218009000L, 109), (1479287368000L, 84)]
Then I try to make a dataframe df with
df = pd.DataFrame(data = list(m3hstream), columns=['Timestamp', 'Value'])
and this gives me an output of
Timestamp Value
0 1479218009000 109
1 1479287368000 84
Following that, I do this:
daily_summary = pd.DataFrame()
daily_summary['value'] = df['Value'].resample('H').mean()
daily_summary = daily_summary.truncate(before=start, after=end)
print "Now daily summary"
print daily_summary
But this is giving me a TypeError: Only valid with DatetimeIndex, TimedeltaIndex or PeriodIndex, but got an instance of 'RangeIndex'
Could anyone please let me know how to resample it so I have 1 point for each hour in the 24-hour period that I'm querying for?
Thanks.
The first thing you need to do is convert that 'Timestamp' to an actual pd.Timestamp. It looks like those are in milliseconds.
Then resample with the on parameter set to 'Timestamp':
df = df.assign(
    Timestamp=pd.to_datetime(df.Timestamp, unit='ms')
).resample('H', on='Timestamp').mean().reset_index()
Timestamp Value
0 2016-11-15 13:00:00 109.0
1 2016-11-15 14:00:00 NaN
2 2016-11-15 15:00:00 NaN
3 2016-11-15 16:00:00 NaN
4 2016-11-15 17:00:00 NaN
5 2016-11-15 18:00:00 NaN
6 2016-11-15 19:00:00 NaN
7 2016-11-15 20:00:00 NaN
8 2016-11-15 21:00:00 NaN
9 2016-11-15 22:00:00 NaN
10 2016-11-15 23:00:00 NaN
11 2016-11-16 00:00:00 NaN
12 2016-11-16 01:00:00 NaN
13 2016-11-16 02:00:00 NaN
14 2016-11-16 03:00:00 NaN
15 2016-11-16 04:00:00 NaN
16 2016-11-16 05:00:00 NaN
17 2016-11-16 06:00:00 NaN
18 2016-11-16 07:00:00 NaN
19 2016-11-16 08:00:00 NaN
20 2016-11-16 09:00:00 84.0
If you want to fill those NaN values, use ffill, bfill, or interpolate
df.assign(
    Timestamp=pd.to_datetime(df.Timestamp, unit='ms')
).resample('H', on='Timestamp').mean().reset_index().interpolate()
Timestamp Value
0 2016-11-15 13:00:00 109.00
1 2016-11-15 14:00:00 107.75
2 2016-11-15 15:00:00 106.50
3 2016-11-15 16:00:00 105.25
4 2016-11-15 17:00:00 104.00
5 2016-11-15 18:00:00 102.75
6 2016-11-15 19:00:00 101.50
7 2016-11-15 20:00:00 100.25
8 2016-11-15 21:00:00 99.00
9 2016-11-15 22:00:00 97.75
10 2016-11-15 23:00:00 96.50
11 2016-11-16 00:00:00 95.25
12 2016-11-16 01:00:00 94.00
13 2016-11-16 02:00:00 92.75
14 2016-11-16 03:00:00 91.50
15 2016-11-16 04:00:00 90.25
16 2016-11-16 05:00:00 89.00
17 2016-11-16 06:00:00 87.75
18 2016-11-16 07:00:00 86.50
19 2016-11-16 08:00:00 85.25
20 2016-11-16 09:00:00 84.00
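If exactly 24 rows are needed for the queried day regardless of where the data starts and ends, reindexing against a full-day hourly range is one option (a sketch, assuming start is the day's midnight as in the question's truncate call):
# Sketch: build the 24 hourly slots for the day and align the resampled means to them.
hours = pd.date_range(start, periods=24, freq='H')  # 'start' assumed to be midnight
daily_summary = (df.assign(Timestamp=pd.to_datetime(df.Timestamp, unit='ms'))
                   .resample('H', on='Timestamp').mean()
                   .reindex(hours))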
Let's try:
df = df.set_index('Timestamp')
df.index = pd.to_datetime(df.index, unit='ms')
For once an hour:
df.resample('H').mean()
or for once a day:
df.resample('D').mean()
I'm having an issue changing a pandas DataFrame index to a datetime from an integer. I want to do it so that I can call reindex and fill in the dates between those listed in the table. Note that I have to use pandas 0.7.3 at the moment because I'm also using qstk, and qstk relies on pandas 0.7.3
First, here's my layout:
(Pdb) df
AAPL GOOG IBM XOM date
1 0 0 4000 0 2011-01-13 16:00:00
2 0 1000 4000 0 2011-01-26 16:00:00
3 0 1000 4000 0 2011-02-02 16:00:00
4 0 1000 4000 4000 2011-02-10 16:00:00
6 0 0 1800 4000 2011-03-03 16:00:00
7 0 0 3300 4000 2011-06-03 16:00:00
8 0 0 0 4000 2011-05-03 16:00:00
9 1200 0 0 4000 2011-06-10 16:00:00
11 1200 0 0 4000 2011-08-01 16:00:00
12 0 0 0 4000 2011-12-20 16:00:00
(Pdb) type(df['date'])
<class 'pandas.core.series.Series'>
(Pdb) df2 = DataFrame(index=df['date'])
(Pdb) df2
Empty DataFrame
Columns: array([], dtype=object)
Index: array([2011-01-13 16:00:00, 2011-01-26 16:00:00, 2011-02-02 16:00:00,
2011-02-10 16:00:00, 2011-03-03 16:00:00, 2011-06-03 16:00:00,
2011-05-03 16:00:00, 2011-06-10 16:00:00, 2011-08-01 16:00:00,
2011-12-20 16:00:00], dtype=object)
(Pdb) df2.merge(df,left_index=True,right_on='date')
AAPL GOOG IBM XOM date
1 0 0 4000 0 2011-01-13 16:00:00
2 0 1000 4000 0 2011-01-26 16:00:00
3 0 1000 4000 0 2011-02-02 16:00:00
4 0 1000 4000 4000 2011-02-10 16:00:00
6 0 0 1800 4000 2011-03-03 16:00:00
8 0 0 0 4000 2011-05-03 16:00:00
7 0 0 3300 4000 2011-06-03 16:00:00
9 1200 0 0 4000 2011-06-10 16:00:00
11 1200 0 0 4000 2011-08-01 16:00:00
12 0 0 0 4000 2011-12-20 16:00:00
I have tried multiple things to get a datetime index:
1.) Using the reindex() method with a list of datetime values. This creates a datetime index, but then fills in NaNs for the data in the DataFrame. I'm guessing that this is because the original values are tied to the integer index and reindexing to datetime tries to fill the new indices with default values (NaNs if no fill method is indicated). Thus:
(Pdb) df.reindex(index=df['date'])
AAPL GOOG IBM XOM date
date
2011-01-13 16:00:00 NaN NaN NaN NaN NaN
2011-01-26 16:00:00 NaN NaN NaN NaN NaN
2011-02-02 16:00:00 NaN NaN NaN NaN NaN
2011-02-10 16:00:00 NaN NaN NaN NaN NaN
2011-03-03 16:00:00 NaN NaN NaN NaN NaN
2011-06-03 16:00:00 NaN NaN NaN NaN NaN
2011-05-03 16:00:00 NaN NaN NaN NaN NaN
2011-06-10 16:00:00 NaN NaN NaN NaN NaN
2011-08-01 16:00:00 NaN NaN NaN NaN NaN
2011-12-20 16:00:00 NaN NaN NaN NaN NaN
2.) Using DataFrame.merge with my original df and a second dataframe, df2, that is basically just a datetime index with nothing else. So I end up doing something like:
(pdb) df2.merge(df,left_index=True,right_on='date')
AAPL GOOG IBM XOM date
1 0 0 4000 0 2011-01-13 16:00:00
2 0 1000 4000 0 2011-01-26 16:00:00
3 0 1000 4000 0 2011-02-02 16:00:00
4 0 1000 4000 4000 2011-02-10 16:00:00
6 0 0 1800 4000 2011-03-03 16:00:00
8 0 0 0 4000 2011-05-03 16:00:00
7 0 0 3300 4000 2011-06-03 16:00:00
9 1200 0 0 4000 2011-06-10 16:00:00
11 1200 0 0 4000 2011-08-01 16:00:00
(and vice-versa). But I always end up with this kind of thing, with integer indices.
3.) Starting with an empty DataFrame with a datetime index (created from the 'date' field of df) and a bunch of empty columns. Then I attempt to assign each column by setting the columns with the same
names to be equal to the columns from df:
(Pdb) df2['GOOG']=0
(Pdb) df2
GOOG
date
2011-01-13 16:00:00 0
2011-01-26 16:00:00 0
2011-02-02 16:00:00 0
2011-02-10 16:00:00 0
2011-03-03 16:00:00 0
2011-06-03 16:00:00 0
2011-05-03 16:00:00 0
2011-06-10 16:00:00 0
2011-08-01 16:00:00 0
2011-12-20 16:00:00 0
(Pdb) df2['GOOG'] = df['GOOG']
(Pdb) df2
GOOG
date
2011-01-13 16:00:00 NaN
2011-01-26 16:00:00 NaN
2011-02-02 16:00:00 NaN
2011-02-10 16:00:00 NaN
2011-03-03 16:00:00 NaN
2011-06-03 16:00:00 NaN
2011-05-03 16:00:00 NaN
2011-06-10 16:00:00 NaN
2011-08-01 16:00:00 NaN
2011-12-20 16:00:00 NaN
So, how in pandas 0.7.3 do I get df to be re-created with a datetime index instead of the integer index? What am I missing?
I think you are looking for set_index:
In [11]: df.set_index('date')
Out[11]:
AAPL GOOG IBM XOM
date
2011-01-13 16:00:00 0 0 4000 0
2011-01-26 16:00:00 0 1000 4000 0
2011-02-02 16:00:00 0 1000 4000 0
2011-02-10 16:00:00 0 1000 4000 4000
2011-03-03 16:00:00 0 0 1800 4000
2011-06-03 16:00:00 0 0 3300 4000
2011-05-03 16:00:00 0 0 0 4000
2011-06-10 16:00:00 1200 0 0 4000
2011-08-01 16:00:00 1200 0 0 4000
2011-12-20 16:00:00 0 0 0 4000
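From there, the stated goal of filling in the dates between those listed is a reindex against a daily range (a sketch in current pandas syntax; the 0.7.3 spelling may differ slightly):
# Sketch: sort so the index is monotonic (the 2011-05-03 row is out of order),
# then reindex to one row per day, carrying holdings forward.
df2 = df.set_index('date').sort_index()
all_days = pd.date_range(df2.index.min(), df2.index.max(), freq='D')
df2 = df2.reindex(all_days, method='ffill')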