I'm trying to make a time series plot with seaborn from a dataframe that has multiple series.
From this post:
seaborn time series from pandas dataframe
I gather that tsplot isn't going to work as it is meant to plot uncertainty.
So is there another Seaborn method that is meant for line charts with multiple series?
My dataframe looks like this:
print(df.info())
print(df.describe())
print(df.values)
print(df.index)
output:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 253 entries, 2013-01-03 to 2014-01-03
Data columns (total 5 columns):
Equity(24 [AAPL]) 253 non-null float64
Equity(3766 [IBM]) 253 non-null float64
Equity(5061 [MSFT]) 253 non-null float64
Equity(6683 [SBUX]) 253 non-null float64
Equity(8554 [SPY]) 253 non-null float64
dtypes: float64(5)
memory usage: 11.9 KB
None
Equity(24 [AAPL]) Equity(3766 [IBM]) Equity(5061 [MSFT]) \
count 253.000000 253.000000 253.000000
mean 67.560593 194.075383 32.547436
std 6.435356 11.175226 3.457613
min 55.811000 172.820000 26.480000
25% 62.538000 184.690000 28.680000
50% 65.877000 193.880000 33.030000
75% 72.299000 203.490000 34.990000
max 81.463000 215.780000 38.970000
Equity(6683 [SBUX]) Equity(8554 [SPY])
count 253.000000 253.000000
mean 33.773277 164.690180
std 4.597291 10.038221
min 26.610000 145.540000
25% 29.085000 156.130000
50% 33.650000 165.310000
75% 38.280000 170.310000
max 40.995000 184.560000
[[ 77.484 195.24 27.28 27.685 145.77 ]
[ 75.289 193.989 26.76 27.85 146.38 ]
[ 74.854 193.2 26.71 27.875 145.965]
...,
[ 80.167 187.51 37.43 39.195 184.56 ]
[ 79.034 185.52 37.145 38.595 182.95 ]
[ 77.284 186.66 36.92 38.475 182.8 ]]
DatetimeIndex(['2013-01-03', '2013-01-04', '2013-01-07', '2013-01-08',
'2013-01-09', '2013-01-10', '2013-01-11', '2013-01-14',
'2013-01-15', '2013-01-16',
...
'2013-12-19', '2013-12-20', '2013-12-23', '2013-12-24',
'2013-12-26', '2013-12-27', '2013-12-30', '2013-12-31',
'2014-01-02', '2014-01-03'],
dtype='datetime64[ns]', length=253, freq=None, tz='UTC')
This works (but I want to get my hands dirty with Seaborn):
df.plot()
Output: (plot image omitted)
Thank you for your time!
Update 1:
df.to_dict() returned:
https://gist.github.com/anonymous/2bdc1ce0f9d0b6ccd6675ab4f7313a5f
Update 2:
Using @knagaev's sample code, I've narrowed it down to this difference:
current dataframe (output of print(current_df)):
Equity(24 [AAPL]) Equity(3766 [IBM]) \
2013-01-03 00:00:00+00:00 77.484 195.2400
2013-01-04 00:00:00+00:00 75.289 193.9890
2013-01-07 00:00:00+00:00 74.854 193.2000
2013-01-08 00:00:00+00:00 75.029 192.8200
2013-01-09 00:00:00+00:00 73.873 192.3800
desired dataframe (output of print(desired_df)):
Date Company Kind Price
0 2014-01-02 IBM Open 187.210007
1 2014-01-02 IBM High 187.399994
2 2014-01-02 IBM Low 185.199997
3 2014-01-02 IBM Close 185.529999
4 2014-01-02 IBM Volume 4546500.000000
5 2014-01-02 IBM Adj Close 171.971090
6 2014-01-02 MSFT Open 37.349998
7 2014-01-02 MSFT High 37.400002
8 2014-01-02 MSFT Low 37.099998
9 2014-01-02 MSFT Close 37.160000
10 2014-01-02 MSFT Volume 30632200.000000
11 2014-01-02 MSFT Adj Close 34.960000
12 2014-01-02 ORCL Open 37.779999
13 2014-01-02 ORCL High 38.029999
14 2014-01-02 ORCL Low 37.549999
15 2014-01-02 ORCL Close 37.840000
16 2014-01-02 ORCL Volume 18162100.000000
What's the best way to reorganize the current_df to desired_df?
Update 3:
I finally got it working with the help of @knagaev.
I had to add a dummy column as well as finesse the index:
df['Datetime'] = df.index  # tsplot can't use the index directly, so copy it into a column
melted_df = pd.melt(df, id_vars='Datetime', var_name='Security', value_name='Price')
melted_df['Dummy'] = 0  # constant sampling-unit column, so tsplot has nothing to collapse
sns.tsplot(melted_df, time='Datetime', unit='Dummy', condition='Security', value='Price', ax=ax)  # ax is an existing matplotlib Axes
to produce: (plot image omitted)
You can try to get your hands dirty with tsplot.
It will draw your line charts with standard errors ("statistical additions").
I tried to simulate your dataset; here are the results:
import pandas as pd
import pandas.io.data as web  # deprecated; in newer pandas this lives in the separate pandas-datareader package
from datetime import datetime
import seaborn as sns
stocks = ['ORCL', 'TSLA', 'IBM', 'YELP', 'MSFT']
start = datetime(2014,1,1)
end = datetime(2014,3,28)
f = web.DataReader(stocks, 'yahoo',start,end)
df = pd.DataFrame(f.to_frame().stack()).reset_index()
df.columns = ['Date', 'Company', 'Kind', 'Price']
sns.tsplot(df, time='Date', unit='Kind', condition='Company', value='Price')
By the way, this sample only imitates your dataset. The parameter "unit" is "Field in the data DataFrame identifying the sampling unit (e.g. subject, neuron, etc.). The error representation will collapse over units at each time/condition observation." (from the documentation). So I used the 'Kind' field for illustrative purposes.
OK, I made an example for your dataframe.
It has a dummy field for "noise cleaning" :)
import pandas as pd
import pandas.io.data as web  # deprecated; in newer pandas this lives in the separate pandas-datareader package
from datetime import datetime
import seaborn as sns
stocks = ['ORCL', 'TSLA', 'IBM', 'YELP', 'MSFT']
start = datetime(2010,1,1)
end = datetime(2015,12,31)
f = web.DataReader(stocks, 'yahoo',start,end)
df = pd.DataFrame(f.to_frame().stack()).reset_index()
df.columns = ['Date', 'Company', 'Kind', 'Price']
df_open = df[df['Kind'] == 'Open'].copy()
df_open['Dummy'] = 0
sns.tsplot(df_open, time='Date', unit='Dummy', condition='Company', value='Price')
P.S. Thanks to @VanPeer: you can now use seaborn.lineplot for this problem.
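For reference, a minimal sketch of that modern equivalent, assuming the melted_df built in Update 3 (lineplot was introduced in seaborn 0.9):
import seaborn as sns
# one line per security, colored by the Security column
sns.lineplot(data=melted_df, x='Datetime', y='Price', hue='Security')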
Related
I have a dataframe like this:
datetime type d13C ... dayofyear week dmy
1 2018-01-05 15:22:30 air -8.88 ... 5 1 5-1-2018
2 2018-01-05 15:23:30 air -9.08 ... 5 1 5-1-2018
3 2018-01-05 15:24:30 air -10.08 ... 5 1 5-1-2018
4 2018-01-05 15:25:30 air -9.51 ... 5 1 5-1-2018
5 2018-01-05 15:26:30 air -9.61 ... 5 1 5-1-2018
... ... ... ... ... ... ...
341543 2018-12-17 12:42:30 air -9.99 ... 351 51 17-12-2018
341544 2018-12-17 12:43:30 air -9.53 ... 351 51 17-12-2018
341545 2018-12-17 12:44:30 air -9.54 ... 351 51 17-12-2018
341546 2018-12-17 12:45:30 air -9.93 ... 351 51 17-12-2018
341547 2018-12-17 12:46:30 air -9.66 ... 351 51 17-12-2018
Full data here: https://drive.google.com/file/d/1KmOwnpvrG2Edz1AlLyD0CKZlBpaFervM/view?usp=sharing
I'm plotting the d13C column on the Y-axis and the inverse of total_co2 on the X-axis, then fitting a regression line for each day in the data. I then filter out and store the dates I want, depending on whether the r^2 value of the regression line is > 0.8, like this:
import pandas as pd
from numpy.polynomial.polynomial import polyfit
import numpy as np
from scipy import stats
df = pd.read_csv('dataset.txt',
                 usecols=['datetime', 'type', 'total_co2', 'd13C', 'day', 'month', 'year', 'dayofyear', 'week', 'hour'],
                 dtype={'total_co2': np.float64, 'd13C': np.float64, 'day': str, 'month': str,
                        'year': str, 'week': str, 'hour': str, 'dayofyear': str})
df['dmy'] = df['day'] + '-' + df['month'] + '-' + df['year'] # adding a full date column to make it easier to filter through the rows, i.e. each day
# window18 = df[df['year']=='2018'] # selecting just the data from the year 2018
accepted_dates_list = [] # creating an empty list to store the dates that we're interested in
for d in df['dmy'].unique(): # this will pass through each day; .unique() ensures it doesn't go over the same days
acceptable_date = {} # creating a dictionary to store the valid dates
period = df[df.dmy==d] # defining each period from the dmy column
p = (period['total_co2'])**-1
q = period['d13C']
c,m = polyfit(p,q,1) # intercept and gradient calculation of the regression line
slope, intercept, r_value, p_value, std_err = stats.linregress(p, q) # getting some statistical properties of the regression line
if r_value**2 >= 0.8:
acceptable_date['period'] = d # populating the dictionary with the accepted dates and corresponding other values
acceptable_date['r-squared'] = r_value**2
acceptable_date['intercept'] = intercept
accepted_dates_list.append(acceptable_date) # sending the valid stuff in the dictionary to the list
else:
pass
accepted_dates18 = pd.DataFrame(accepted_dates_list) # converting the list to a df
print(accepted_dates18)
But now I want to do the same thing, just over three day periods which I'm trying to select from the day of year column (unsure if this is the best way or not). For example, I would want to fit the regression line using all the rows with dayofyear=5, dayofyear=6, dayofyear=7, then for the next three days until the end of the data. There are some days missing, but essentially I just need to do this for every 3 days in the data.
The output dataframe I am then trying to get would have the list of the three day intervals with the r^2 >0.8, so anything like this that will show the valid date range:
Accepted dates
0 23-08-2018 - 25-08-2018
1 26-08-2018 - 28-08-2018
2 31-08-2018 - 02-09-2018
3 15-09-2018 - 17-09-2018
4 24-09-2018 - 26-09-2018
I'm not too sure what to do to iterate over every three days. Any help would go a long way, thanks!
Your code loops through a list of unique dates and filters the dataframe on each iteration.
Pandas implements this with df.groupby(). It can be used to loop over each group, or it can be combined with aggregations, function applications, and transformations; you can read more about it in the user guide. This function can return groups according to any of the columns (or set of columns) in df, levels of the index, or any other exogenous list-like with the same length as df (we are grouping rows, but note it can also group columns). It even has implementations of the most common statistical aggregations, like mean, stdev, and corr, among many others; a short sketch follows.
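A minimal sketch of a grouped aggregation, on hypothetical toy data:
import pandas as pd
toy = pd.DataFrame({'key': ['a', 'a', 'b'], 'val': [1.0, 3.0, 5.0]})
print(toy.groupby('key')['val'].mean())  # mean of val per key: a -> 2.0, b -> 5.0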
Now to your problem. You want not only the correlation but also the equation, so you do need to loop. And to get three-day groups you can use that dayofyear column with a twist.
Take this data
import io
import pandas as pd
fo = io.StringIO(
'''datetime,d13C
2018-01-05 15:22:30,-8.88
2018-01-05 15:23:30,-9.08
2018-01-06 15:24:30,-10.0
2018-01-06 15:25:30,-9.51
2018-01-07 15:26:30,-9.61
2018-01-07 15:27:30,-9.61
2018-01-08 15:28:30,-9.61
2018-01-08 15:29:30,-9.61
2018-01-09 15:26:30,-9.61
2018-01-09 15:27:30,-9.61
''')
df = pd.read_csv(fo)
df.datetime = pd.to_datetime(df.datetime)
fo.close()
With the code for grouping and looping
first_day = 5
days_to_group = 3
for doy, gdf in df.groupby((df.datetime.dt.dayofyear.sub(first_day) // days_to_group)
* days_to_group + first_day):
print(gdf, '\n')
print(doy, '\n')
Output
datetime d13C
0 2018-01-05 15:22:30 -8.88
1 2018-01-05 15:23:30 -9.08
2 2018-01-06 15:24:30 -10.00
3 2018-01-06 15:25:30 -9.51
4 2018-01-07 15:26:30 -9.61
5 2018-01-07 15:27:30 -9.61
5
datetime d13C
6 2018-01-08 15:28:30 -9.61
7 2018-01-08 15:29:30 -9.61
8 2018-01-09 15:26:30 -9.61
9 2018-01-09 15:27:30 -9.61
8
Now you can plug your code into this loop and get what you need.
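A hedged sketch of plugging the regression filter into that loop (reusing your column names; it assumes df also carries the total_co2 column from the full dataset):
from scipy import stats
accepted_dates_list = []
for doy, gdf in df.groupby((df.datetime.dt.dayofyear.sub(first_day) // days_to_group)
                           * days_to_group + first_day):
    p = (gdf['total_co2'])**-1
    q = gdf['d13C']
    slope, intercept, r_value, p_value, std_err = stats.linregress(p, q)
    if r_value**2 >= 0.8:
        # label each window by its first and last calendar day
        accepted_dates_list.append({'period': '{} - {}'.format(gdf.datetime.dt.date.min(),
                                                               gdf.datetime.dt.date.max()),
                                    'r-squared': r_value**2,
                                    'intercept': intercept})
accepted_dates = pd.DataFrame(accepted_dates_list)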
P.S.
You can also use df.datetime.dt.floor('3d') as the grouper, but I am not aware of how to control the first_day, so use it with caution.
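In newer pandas (1.1+), the origin argument of pd.Grouper may give that control; a hedged sketch on the same df:
# origin anchors the 3-day bins at the chosen start date (pandas >= 1.1)
for start, gdf in df.groupby(pd.Grouper(key='datetime', freq='3D', origin='2018-01-05')):
    print(start, len(gdf))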
Here is one approach. As I understand it, the primary goal is to get from current observations (multiple per day) to a 3-day moving average. First, I created a smaller, simpler data set:
import pandas as pd
df = pd.DataFrame({'counter': [*range(100)],
'date': pd.date_range('2020-01-01', periods=100, freq='7H')})
df = df.set_index('date')
print(df.head())
counter
date
2020-01-01 00:00:00 0
2020-01-01 07:00:00 1
2020-01-01 14:00:00 2
2020-01-01 21:00:00 3
2020-01-02 04:00:00 4
Second, I re-sampled on a daily basis:
df2 = df['counter'].resample('1D').mean() # <-- called df2
print(df2.head())
date
2020-01-01 1.5
2020-01-02 5.0
2020-01-03 8.5
2020-01-04 12.0
2020-01-05 15.5
Freq: D, Name: counter, dtype: float64
Third, I computed mean value for a 3-day moving window:
print(df2.rolling(3).mean().head())
date
2020-01-01 NaN
2020-01-02 NaN
2020-01-03 5.0
2020-01-04 8.5
2020-01-05 12.0
Freq: D, Name: counter, dtype: float64
Seems like resample().mean() and rolling().mean() would be useful in this case.
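Applied back to the original question's data, a sketch chaining the two steps (assuming the full dataset is loaded as data_df with a parsed datetime column):
daily_3day_avg = (data_df.set_index('datetime')['d13C']
                         .resample('D').mean()   # one mean value per day
                         .rolling(3).mean())     # 3-day moving average of the daily means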
I have time series data for a full year, at every minute.
timestamp day hour min rainfall_rate
2010-01-01 00:00:00 1 0 0 x
2010-01-01 00:01:00 1 0 1 1
2010-01-01 00:02:00 1 0 2 2
2010-01-01 00:03:00 1 0 3 x
2010-01-01 00:04:00 1 0 4 5
... ...
2010-12-31 23:55:00 365 23 55 3
2010-12-31 23:56:00 365 23 56 9
2010-12-31 23:57:00 365 23 57 32
2010-12-31 23:58:00 365 23 58 12
2010-12-31 23:59:00 365 23 59 22
I used sampled_df = rainfall_df.groupby(pd.Grouper(freq="M")).resample('D').sum() to group the data by month and calculate the daily sum of rainfall_rate.
Structure of sampled_df: (screenshot omitted)
How do I plot the monthly data against the timestamp for every month? How do I index rainfall_rate? I want the daily rainfall_rate data for every month. Also, is the grouping correct? Suppose I want to plot timestamp vs rainfall_rate for the month of January; how do I do that?
I am new to pandas.
To generate a plot from the resulting resampled data, simply call DataFrame.plot. However, since you have a MultiIndex with two timestamps (the month and day indicators), call DataFrame.reset_index to drop the redundant month level. Then, for plotting a specific month, run boolean indexing on the day index:
import matplotlib.pyplot as plt
...
# RESET INDEX AND FILTER COLUMNS
sampled_df = (sampled_df.reindex(['rainfall_rate'], axis='columns')
.reset_index(level=0, drop=True)
)
### ALL MONTHS
sampled_df.plot(kind='line')
### ONLY JANUARY
sampled_df[sampled_df.index.month == 1].plot(kind='line')
To demonstrate with random, seeded data:
Data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
np.random.seed(22820)
rainfall_df = pd.DataFrame({'timestamp': pd.date_range('2010-01-01 00:00',
'2010-12-31 23:59',
freq="min"),
'rainfall_rate': np.random.normal(1, 2, 525600)
})
Resampling
sampled_df = (rainfall_df.set_index('timestamp')
.groupby(pd.Grouper(freq="M"))
.resample('D')
.sum()
)
sampled_df.tail(10)
# rainfall_rate
# timestamp
# 2010-12-22 1454.287302
# 2010-12-23 1367.539650
# 2010-12-24 1460.319823
# 2010-12-25 1464.392407
# 2010-12-26 1338.139227
# 2010-12-27 1454.540103
# 2010-12-28 1553.949133
# 2010-12-29 1301.670684
# 2010-12-30 1536.173442
# 2010-12-31 1333.492614
Plots
sampled_df = sampled_df.reset_index(level=0, drop=True)
### ALL MONTHS
sampled_df.plot(kind='line')
### ONLY JANUARY
sampled_df[sampled_df.index.month == 1].plot(kind='line')
I am getting incorrect data using the resample function to convert minute data into daily data. After a careful examination of the output, I have discovered that the process is outputting the open of the 16:00 bar for the DAILY OPEN. Furthermore, it is outputting the close of the 9:31 bar as the DAILY CLOSE.
Here's my code:
import numpy as np
import pandas as pd
from pylab import mpl, plt
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
%matplotlib inline
import cufflinks as cf
df = pd.read_csv('ES#CMin_Pit.csv', index_col='Date', parse_dates=['Date'])
df.tail()
Time Inc Vol Volume Open High Low Close
Date
2005-09-07 09:34:00 2309.0 39145.0 1150.75 1151.00 1150.50 1150.75
2005-09-07 09:33:00 1803.0 36836.0 1150.75 1150.75 1150.25 1150.50
2005-09-07 09:32:00 972.0 35033.0 1150.75 1150.75 1150.50 1150.75
2005-09-07 09:31:00 1440.0 34061.0 1150.75 1151.00 1150.50 1150.50
NaT NaN NaN NaN NaN NaN NaN NaN
conversion = {'Open' : 'first', 'High' : 'max', 'Low' : 'min', 'Close' : 'last', 'Volume' : 'sum'}
data_day = df.resample('D').apply(conversion)
data_day.tail(5)
Open High Low Close Volume
Date
2018-05-20 NaN NaN NaN NaN 0.0
2018-05-21 2732.50 2739.25 2725.25 2730.50 210297692.0
2018-05-22 2726.00 2741.75 2721.50 2738.25 179224835.0
2018-05-23 2731.75 2732.75 2708.50 2710.50 292305588.0
2018-05-24 2726.00 2730.50 2705.75 2725.00 312575571.0
I suspect that the problem is in establishing the "conversion" dictionary; however, I have seen this method used more than once in my research. Any suggestions to specify the appropriate bar to pull the daily open and close values from? Namely, to use the "first" of the 9:31 minute bar instead of the 16:00 bar for the DAILY OPEN price, and the "last" of the 16:00 bar instead of the 9:31 bar for the DAILY CLOSE price? Thanks, LL
A possible problem is that your time series data is not ordered in increasing time order.
Try:
data_day = df.groupby(df.index.date).ohlc()  # Date is the index in your frame, so group on it directly
I think you just mixed up groupby and resample in your mind: you want to select certain values of a day (the first, the last, the max, etc.), which can be done by grouping. But resample recalculates your data, interpolating and averaging down to a single value per period, to redraw the series over time from different samples as close as possible to the original.
In short: just replace resample by groupby.
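A minimal sketch of the ordering fix, assuming the descending index is indeed the root cause (the question's minute bars are newest-first): sort the index before aggregating, and the original conversion dictionary then picks the intended bars.
conversion = {'Open': 'first', 'High': 'max', 'Low': 'min', 'Close': 'last', 'Volume': 'sum'}
data_day = df.sort_index().resample('D').apply(conversion)  # 'first' now sees 09:31, 'last' sees 16:00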
I have a csv-file with time series data, the first column is the date in the format %Y:%m:%d and the second column is the intraday time in the format '%H:%M:%S'. I would like to import this csv-file into a multiindex dataframe or panel object.
With this code, it already works:
_file_data = pd.read_csv(_file,
sep=",",
header=0,
index_col=['Date', 'Time'],
thousands="'",
parse_dates=True,
skipinitialspace=True
)
It returns the data in the following format:
Date Time Volume
2016-01-04 2018-04-25 09:01:29 53645
2018-04-25 10:01:29 123
2018-04-25 10:01:29 1345
....
2016-01-05 2018-04-25 10:01:29 123
2018-04-25 12:01:29 213
2018-04-25 10:01:29 123
1st question:
I would like to show the second index as a pure time object, not datetime. To do that, I have to declare two different date-parsers in the read_csv function, but I can't figure out how. What is the "best" way to do that?
2nd question:
After I created the Dataframe, I converted it to a panel-object. Would you recommend doing that? Is the panel-object the better choice for such a data structure? What are the benefits (drawbacks) of a panel-object?
1st question:
You can create multiple converters and define the parsers in a dictionary:
import pandas as pd
temp=u"""Date,Time,Volume
2016:01:04,09:00:00,53645
2016:01:04,09:20:00,0
2016:01:04,09:40:00,0
2016:01:04,10:00:00,1468
2016:01:05,10:00:00,246
2016:01:05,10:20:00,0
2016:01:05,10:40:00,0
2016:01:05,11:00:00,0
2016:01:05,11:20:00,0
2016:01:05,11:40:00,0
2016:01:05,12:00:00,213"""
def converter1(x):
#convert to datetime and then to times
return pd.to_datetime(x).time()
def converter2(x):
#define format of datetime
return pd.to_datetime(x, format='%Y:%m:%d')
#after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv'
df = pd.read_csv(pd.compat.StringIO(temp),
index_col=['Date','Time'],
thousands="'",
skipinitialspace=True,
converters={'Time': converter1, 'Date': converter2})
print (df)
Volume
Date Time
2016-01-04 09:00:00 53645
09:20:00 0
09:40:00 0
10:00:00 1468
2016-01-05 10:00:00 246
10:20:00 0
10:40:00 0
11:00:00 0
11:20:00 0
11:40:00 0
12:00:00 213
Sometimes it is possible to use the built-in parser, e.g. if the format of the dates is YYYY-MM-DD:
import pandas as pd
temp=u"""Date,Time,Volume
2016-01-04,09:00:00,53645
2016-01-04,09:20:00,0
2016-01-04,09:40:00,0
2016-01-04,10:00:00,1468
2016-01-05,10:00:00,246
2016-01-05,10:20:00,0
2016-01-05,10:40:00,0
2016-01-05,11:00:00,0
2016-01-05,11:20:00,0
2016-01-05,11:40:00,0
2016-01-05,12:00:00,213"""
def converter(x):
#define format of datetime
return pd.to_datetime(x).time()
#after testing replace 'pd.compat.StringIO(temp)' to 'filename.csv'
df = pd.read_csv(pd.compat.StringIO(temp),
index_col=['Date','Time'],
parse_dates=['Date'],
thousands="'",
skipinitialspace=True,
converters={'Time': converter})
print (df.index.get_level_values(0))
DatetimeIndex(['2016-01-04', '2016-01-04', '2016-01-04', '2016-01-04',
'2016-01-05', '2016-01-05', '2016-01-05', '2016-01-05',
'2016-01-05', '2016-01-05', '2016-01-05'],
dtype='datetime64[ns]', name='Date', freq=None)
A last possible solution is to convert the datetimes to times in the MultiIndex with set_levels, after processing:
df.index = df.index.set_levels(df.index.get_level_values(1).time, level=1)
print (df)
Volume
Date Time
2016-01-04 09:00:00 53645
09:20:00 0
09:40:00 0
10:00:00 1468
2016-01-05 10:00:00 246
10:20:00 0
10:40:00 0
11:00:00 0
11:20:00 0
11:40:00 0
12:00:00 213
2nd question:
Panel is deprecated in pandas 0.20+ and will be removed in a future version.
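The recommended replacement is a MultiIndex DataFrame, which is what you already have; a short sketch of selecting from it (assuming the parsed Date level from the second variant above):
# all rows for one date from the (Date, Time) MultiIndex
print (df.xs(pd.Timestamp('2016-01-04'), level='Date'))
# or take the cross-section with .loc on the first level
print (df.loc[pd.Timestamp('2016-01-04')])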
To convert to a time series use pd.to_timedelta.
Ex:
import pandas as pd
df = pd.DataFrame({"Time": ["2018-04-25 09:01:29", "2018-04-25 10:01:29", "2018-04-25 10:01:29"]})
df["Time"] = pd.to_timedelta(pd.to_datetime(df["Time"]).dt.strftime('%H:%M:%S'))
print df["Time"]
Output:
0 09:01:29
1 10:01:29
2 10:01:29
Name: Time, dtype: timedelta64[ns]
I'm new to Pandas and Zipline, and I'm trying to learn how to use them (and use them with this data that I have). Any sorts of tips, even if no full solution, would be much appreciated. I have tried a number of things, and have gotten quite close, but run into indexing issues, in particular Exception: Reindexing only valid with uniquely valued Index objects. [Pandas 0.10.0, Python 2.7]
I'm trying to transform monthly returns data I have for thousands of stocks in postgres from the form:
ticker_symbol :: String, monthly_return :: Float, date :: Timestamp
e.g.
AAPL, 0.112, 28/2/1992
GS, 0.13, 30/11/1981
GS, -0.23, 22/12/1981
NB: The frequency of the reporting is monthly, but there is going to be considerable NaN data here, as not all of the over 6000 companies I have here are going to be around at the same time.
…to the form described below, which is what Zipline needs to run its backtester. (I think. Can Zipline's backtester work with monthly data like this, easily? I know it can, but any tips for doing this?)
The below is a DataFrame (of timeseries? How do you say this?), in the format I need:
> data:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 2268 entries, 1993-01-04 00:00:00+00:00 to 2001-12-31 00:00:00+00:00
Data columns:
AA 2268 non-null values
AAPL 2268 non-null values
GE 2268 non-null values
IBM 2268 non-null values
JNJ 2268 non-null values
KO 2268 non-null values
MSFT 2268 non-null values
PEP 2268 non-null values
SPX 2268 non-null values
XOM 2268 non-null values
dtypes: float64(10)
The below is a TimeSeries, and is in the format I need.
> data.AAPL:
Date
1993-01-04 00:00:00+00:00 73.00
1993-01-05 00:00:00+00:00 73.12
...
2001-12-28 00:00:00+00:00 36.15
2001-12-31 00:00:00+00:00 35.55
Name: AAPL, Length: 2268
Note, there isn't return data here, but prices instead. They're adjusted (by Zipline's load_from_yahoo—though, from reading the source, really by functions in pandas) for dividends, splits, etc, so there's an isomorphism (less the initial price) between that and my return data (so, no problem here).
(EDIT: Let me know if you'd like me to write what I have, or attach my iPython notebook or a gist; I just doubt it'd be helpful, but I can absolutely do it if requested.)
I suspect you are trying to set the date as the index too early. My suggestion would be to first set_index as date and company name, then you can unstack the company name and resample.
Something like this:
In [11]: df1
Out[11]:
ticker_symbol monthly_return date
0 AAPL 0.112 1992-02-28 00:00:00
1 GS 0.130 1981-11-30 00:00:00
2 GS -0.230 1981-12-22 00:00:00
df2 = df1.set_index(['date','ticker_symbol'])
df3 = df2.unstack(level=1)
df4 = df3.resample('M')
In [14]: df2
Out[14]:
monthly_return
date ticker_symbol
1992-02-28 AAPL 0.112
1981-11-30 GS 0.130
1981-12-22 GS -0.230
In [15]: df3
Out[15]:
monthly_return
ticker_symbol AAPL GS
date
1981-11-30 NaN 0.13
1981-12-22 NaN -0.23
1992-02-28 0.112 NaN
In [16]: df4
Out[16]:
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 124 entries, 1981-11-30 00:00:00 to 1992-02-29 00:00:00
Freq: M
Data columns:
(monthly_return, AAPL) 1 non-null values
(monthly_return, GS) 2 non-null values
dtypes: float64(2)
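For reference, a sketch of the same reshape in modern pandas, using pivot instead of set_index/unstack (note that resample('M') now returns a Resampler object, so an explicit aggregation such as mean() is needed):
import pandas as pd
df1 = pd.DataFrame({'ticker_symbol': ['AAPL', 'GS', 'GS'],
                    'monthly_return': [0.112, 0.130, -0.230],
                    'date': pd.to_datetime(['1992-02-28', '1981-11-30', '1981-12-22'])})
wide = (df1.pivot(index='date', columns='ticker_symbol', values='monthly_return')
           .resample('M').mean())  # one row per month-end; NaN where a ticker has no data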