Here's a piece of code. I don't get why the last column, rm-5, has NaN for the first 4 items.
I understand that for the rm columns the first 4 items aren't filled because there is no data available yet, but if I shift the column, the calculation should still be made, shouldn't it?
Similarly, I don't get why there are 5, and not 4, NaN items at the end of the rm-5 column.
import pandas as pd
import numpy as np
index = pd.date_range('2000-1-1', periods=100, freq='D')
df = pd.DataFrame(data=np.random.randn(100), index=index, columns=['A'])
df['rm'] = df['A'].rolling(5).mean()
df['rm-5'] = df['A'].shift(-5).rolling(5).mean()
print(df.head(n=8))
print(df.tail(n=8))
A rm rm-5
2000-01-01 0.109161 NaN NaN
2000-01-02 -0.360286 NaN NaN
2000-01-03 -0.092439 NaN NaN
2000-01-04 0.169439 NaN NaN
2000-01-05 0.185829 0.002341 0.091736
2000-01-06 0.432599 0.067028 0.295949
2000-01-07 -0.374317 0.064222 0.055903
2000-01-08 1.258054 0.334321 -0.132972
A rm rm-5
2000-04-02 0.499860 -0.422931 -0.140111
2000-04-03 -0.868718 -0.458962 -0.182373
2000-04-04 0.081059 -0.443494 -0.040646
2000-04-05 0.500275 -0.093048 NaN
2000-04-06 -0.253915 -0.008288 NaN
2000-04-07 -0.159256 -0.140111 NaN
2000-04-08 -1.080027 -0.182373 NaN
2000-04-09 0.789690 -0.040646 NaN
You can change the order of operations. Right now you shift first and take the rolling mean afterwards. shift(-5) pulls the data up by 5 rows, which creates 5 NaNs at the end; the rolling window of 5 then needs 4 preceding values, which creates the 4 NaNs at the start. If you take the rolling mean first and shift afterwards, the NaNs move accordingly:
index = pd.date_range('2000-1-1', periods=100, freq='D')
df = pd.DataFrame(data=np.random.randn(100), index=index, columns=['A'])
df['rm'] = df['A'].rolling(5).mean()
df['shift'] = df['A'].shift(-5)
df['rm-5-shift_first'] = df['A'].shift(-5).rolling(5).mean()
df['rm-5-mean_first'] = df['A'].rolling(5).mean().shift(-5)
print(df.head(n=8))
print(df.tail(n=8))
A rm shift rm-5-shift_first rm-5-mean_first
2000-01-01 -0.120808 NaN 0.830231 NaN 0.184197
2000-01-02 0.029547 NaN 0.047451 NaN 0.187778
2000-01-03 0.002652 NaN 1.040963 NaN 0.395440
2000-01-04 -1.078656 NaN -1.118723 NaN 0.387426
2000-01-05 1.137210 -0.006011 0.469557 0.253896 0.253896
2000-01-06 0.830231 0.184197 -0.390506 0.009748 0.009748
2000-01-07 0.047451 0.187778 -1.624492 -0.324640 -0.324640
2000-01-08 1.040963 0.395440 -1.259306 -0.784694 -0.784694
A rm shift rm-5-shift_first rm-5-mean_first
2000-04-02 -1.283123 -0.270381 0.226257 0.760370 0.760370
2000-04-03 1.369342 0.288072 2.367048 0.959912 0.959912
2000-04-04 0.003363 0.299997 1.143513 1.187941 1.187941
2000-04-05 0.694026 0.400442 NaN NaN NaN
2000-04-06 1.508863 0.458494 NaN NaN NaN
2000-04-07 0.226257 0.760370 NaN NaN NaN
2000-04-08 2.367048 0.959912 NaN NaN NaN
2000-04-09 1.143513 1.187941 NaN NaN NaN
For more see:
http://pandas.pydata.org/pandas-docs/stable/computation.html#moving-rolling-statistics-moments
http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.shift.html
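If the leading NaNs from the rolling window itself are unwanted, min_periods can relax how many observations the window needs before it produces a value. A minimal sketch (whether a partial-window mean is meaningful depends on your use case):
import pandas as pd
import numpy as np

index = pd.date_range('2000-1-1', periods=20, freq='D')
df = pd.DataFrame(data=np.random.randn(20), index=index, columns=['A'])

# With min_periods=1 the window emits a mean as soon as it has one value,
# so only the trailing NaNs introduced by shift(-5) remain.
df['rm-5'] = df['A'].shift(-5).rolling(5, min_periods=1).mean()
print(df)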
My original dataframe looked like:
timestamp variables value
1 2017-05-26 19:46:41.289 inf 0.000000
2 2017-05-26 20:40:41.243 tubavg 225.489639
... ... ... ...
899541 2017-05-02 20:54:41.574 caspre 684.486450
899542 2017-04-29 11:17:25.126 tvol 50.895000
Now I want to bucket this dataset by time, which can be done with the code:
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df = df.groupby(pd.Grouper(key='timestamp', freq='5min'))
But I also want all the different metrics to become columns in the new dataframe. For example the first two rows from the original dataframe would look like:
timestamp inf tubavg caspre tvol ...
1 2017-05-26 19:46:41.289 0.000000 225.489639 xxxxxxx xxxxx
... ... ... ...
xxxxx 2017-05-02 20:54:41.574 xxxxxx xxxxxx 684.486450 50.895000
As can be seen, time has been bucketed into 5-minute intervals, and each distinct value in variables should become its own column, filled in for every bucket. Each bucket is labeled with the first timestamp that fell into it.
In order to solve this, I have tried a couple of different approaches, but I can't seem to find anything that works without constant errors.
Try unstacking the variables column from rows to columns with .unstack(1). The parameter is 1 because we want the second index level (0 would be the first).
Then drop the extra level of the column MultiIndex you just created with .droplevel() to make it a little cleaner.
Finally, use pd.Grouper. Since the date/time is now on the index, you don't need to specify a key.
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df = df.set_index(['timestamp','variables']).unstack(1)
df.columns = df.columns.droplevel()
df = df.groupby(pd.Grouper(freq='5min')).mean().reset_index()
df
Out[1]:
variables timestamp caspre inf tubavg tvol
0 2017-04-29 11:15:00 NaN NaN NaN 50.895
1 2017-04-29 11:20:00 NaN NaN NaN NaN
2 2017-04-29 11:25:00 NaN NaN NaN NaN
3 2017-04-29 11:30:00 NaN NaN NaN NaN
4 2017-04-29 11:35:00 NaN NaN NaN NaN
... ... ... ... ...
7885 2017-05-26 20:20:00 NaN NaN NaN NaN
7886 2017-05-26 20:25:00 NaN NaN NaN NaN
7887 2017-05-26 20:30:00 NaN NaN NaN NaN
7888 2017-05-26 20:35:00 NaN NaN NaN NaN
7889 2017-05-26 20:40:00 NaN NaN 225.489639 NaN
Another way would be to .groupby the variables as well and then .unstack(1) again:
df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
df = df.groupby([pd.Grouper(freq='5min', key='timestamp'), 'variables']).mean().unstack(1)
df.columns = df.columns.droplevel()
df = df.reset_index()
df
Out[1]:
variables timestamp caspre inf tubavg tvol
0 2017-04-29 11:15:00 NaN NaN NaN 50.895
1 2017-05-02 20:50:00 684.48645 NaN NaN NaN
2 2017-05-26 19:45:00 NaN 0.0 NaN NaN
3 2017-05-26 20:40:00 NaN NaN 225.489639 NaN
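For what it's worth, pd.pivot_table can express the same reshape in a single call. A sketch starting again from the original long-format df, under the same assumptions about the column names (mean is pivot_table's default aggregation):
import pandas as pd

df['timestamp'] = pd.to_datetime(df['timestamp'], errors='coerce')
out = df.pivot_table(index=pd.Grouper(freq='5min', key='timestamp'),
                     columns='variables', values='value').reset_index()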
I wonder if there is a way to upsample a DataFrame without having to decide immediately how NAs should be filled.
I tried the following but got the Future Warning:
FutureWarning: .resample() is now a deferred operation use .resample(...).mean() instead of .resample(...)
Code:
import pandas as pd
dates = pd.date_range('2015-01-01', '2016-01-01', freq='BM')
dummy = [i for i in range(len(dates))]
df = pd.DataFrame({'A': dummy})
df.index = dates
df.resample('B')
Is there a better way to do this that doesn't show warnings?
Thanks.
Use Resampler.asfreq:
print(df.resample('B').asfreq())
A
2015-01-30 0.0
2015-02-02 NaN
2015-02-03 NaN
2015-02-04 NaN
2015-02-05 NaN
2015-02-06 NaN
2015-02-09 NaN
2015-02-10 NaN
2015-02-11 NaN
2015-02-12 NaN
2015-02-13 NaN
2015-02-16 NaN
2015-02-17 NaN
2015-02-18 NaN
2015-02-19 NaN
2015-02-20 NaN
2015-02-23 NaN
2015-02-24 NaN
2015-02-25 NaN
2015-02-26 NaN
2015-02-27 1.0
2015-03-02 NaN
2015-03-03 NaN
2015-03-04 NaN
...
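As an aside, when the frame already has a DatetimeIndex and you only want to change its frequency, DataFrame.asfreq should give the same result here without going through resample at all:
df.asfreq('B')  # conform to business-day frequency; new rows are NaN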
I have 2 datasets (cex2.txt and cex3.txt) which I would like to resample in pandas. With one dataset I get the expected output; with the other I don't.
The datasets are tick data and are formatted exactly the same way. The two datasets are simply from two different days.
import pandas as pd
import datetime as dt
time_converter = lambda x: dt.datetime.fromtimestamp(float(x))
data_frame = pd.read_csv('cex2.txt', sep=';', converters={'time': time_converter})
data_frame = data_frame.drop(columns=['Unnamed: 7', 'low', 'high', 'last'])
data_frame = data_frame.reindex(columns=['time', 'ask', 'bid', 'vol'])
data_frame.set_index(pd.DatetimeIndex(data_frame['time']), inplace=True)
ask = data_frame['ask'].resample('15Min').ohlc()
bid = data_frame['bid'].resample('15Min').ohlc()
vol = data_frame['vol'].resample('15Min').sum()
print(ask)
From the cex2.txt dataset I get this wrong output:
open high low close
1970-01-01 01:00:00 NaN NaN NaN NaN
1970-01-01 01:15:00 NaN NaN NaN NaN
1970-01-01 01:30:00 NaN NaN NaN NaN
1970-01-01 01:45:00 NaN NaN NaN NaN
1970-01-01 02:00:00 NaN NaN NaN NaN
1970-01-01 02:15:00 NaN NaN NaN NaN
1970-01-01 02:30:00 NaN NaN NaN NaN
1970-01-01 02:45:00 NaN NaN NaN NaN
1970-01-01 03:00:00 NaN NaN NaN NaN
1970-01-01 03:15:00 NaN NaN NaN NaN
From the cex3.txt dataset I get correct values:
open high low close
2014-08-10 13:30:00 0.003483 0.003500 0.003483 0.003485
2014-08-10 13:45:00 0.003485 0.003570 0.003467 0.003471
2014-08-10 14:00:00 0.003471 0.003500 0.003470 0.003494
2014-08-10 14:15:00 0.003494 0.003500 0.003493 0.003498
2014-08-10 14:30:00 0.003498 0.003549 0.003498 0.003500
2014-08-10 14:45:00 0.003500 0.003533 0.003487 0.003533
2014-08-10 15:00:00 0.003533 0.003600 0.003520 0.003587
I'm really at my wits' end. Does anyone have an idea why this happens?
Edit:
Here are the data sources:
https://dl.dropboxusercontent.com/u/14055520/cex2.txt
https://dl.dropboxusercontent.com/u/14055520/cex3.txt
Thanks!
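A 1970-01-01 index is the Unix epoch, which usually means fromtimestamp received values at or near zero instead of real timestamps. A hedged diagnostic sketch (not a confirmed cause) to check what the converter actually saw in cex2.txt:
import pandas as pd

raw = pd.read_csv('cex2.txt', sep=';')
print(raw['time'].head())  # plausible Unix timestamps, or zeros/garbage?
print(data_frame.index.min(), data_frame.index.max())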
So I have a dataFrame:
Units fcast currerr curpercent fcastcum unitscum cumerrpercent
2013-09-01 3561 NaN NaN NaN NaN NaN NaN
2013-10-01 3480 NaN NaN NaN NaN NaN NaN
2013-11-01 3071 NaN NaN NaN NaN NaN NaN
2013-12-01 3234 NaN NaN NaN NaN NaN NaN
2014-01-01 2610 2706 -96 -3.678161 2706 2610 -3.678161
2014-02-01 NaN 3117 NaN NaN 5823 NaN NaN
2014-03-01 NaN 3943 NaN NaN 9766 NaN NaN
And I want to load the index of the current month, found by taking the last row that has Units filled in, into a variable curr_month, which will have a number of uses (including text display and use as a slicing operator).
This is way ugly but almost works:
curr_month=mergederrs['Units'].dropna()
curr_month=curr_month[-1:].index
curr_month
But curr_month is
<class 'pandas.tseries.index.DatetimeIndex'>
[2014-01-01]
Length: 1, Freq: None, Timezone: None
which is unhashable, so this fails:
mergederrs[curr_month:]
The docs are great for creating a DataFrame, but a bit sparse on getting individual items out!
I'd probably write
>>> df.Units.last_valid_index()
Timestamp('2014-01-01 00:00:00')
but a slight tweak on your approach should work too:
>>> df.Units.dropna().index[-1]
Timestamp('2014-01-01 00:00:00')
It's the difference between somelist[-1:] and somelist[-1].
[Note that I'm assuming all of the NaN values come at the end. If there are valid values, then NaNs, then more valid values, and you want the last valid value of the first group, that would be slightly different.]
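To make the slicing difference concrete:
somelist = [10, 20, 30]
somelist[-1:]   # [30] -- a length-1 list (here, a length-1 DatetimeIndex)
somelist[-1]    # 30   -- the element itself (here, a Timestamp)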
I have time-indexed data:
from datetime import date
import pandas as pd

df2 = pd.DataFrame({'day': pd.Series([date(2012, 1, 1), date(2012, 1, 3)]), 'b': pd.Series([0.22, 0.3])})
df2 = df2.set_index('day')
df2
b
day
2012-01-01 0.22
2012-01-03 0.30
What is the best way to extend this data frame so that it has one row for every day in January 2012 (say), with all columns (here only b) set to NaN where we don't have data?
So the desired result would be:
b
day
2012-01-01 0.22
2012-01-02 NaN
2012-01-03 0.30
2012-01-04 NaN
...
2012-01-31 NaN
Many thanks!
Use this (current as of pandas 1.1.3):
ix = pd.date_range(start=date(2012, 1, 1), end=date(2012, 1, 31), freq='D')
df2.reindex(ix)
Which gives:
b
2012-01-01 0.22
2012-01-02 NaN
2012-01-03 0.30
2012-01-04 NaN
2012-01-05 NaN
[...]
2012-01-29 NaN
2012-01-30 NaN
2012-01-31 NaN
For older versions of pandas replace pd.date_range with pd.DatetimeIndex.
You can convert to day frequency with asfreq; without a fill method specified, the missing values will be NaN-filled, as you desired:
df3 = df2.asfreq('D')
df3
Out[16]:
b
2012-01-01 0.22
2012-01-02 NaN
2012-01-03 0.30
To answer your second part, I can't think of a more elegant way at the moment:
df3 = pd.DataFrame({'day': pd.Series([date(2012, 1, 4), date(2012, 1, 31)])})
df3.set_index('day', inplace=True)
merged = pd.concat([df2, df3])
merged = merged.asfreq('D')
merged
Out[46]:
b
2012-01-01 0.22
2012-01-02 NaN
2012-01-03 0.30
2012-01-04 NaN
2012-01-05 NaN
[...]
2012-01-30 NaN
2012-01-31 NaN
This constructs a second frame, and then we just concatenate and call asfreq('D') as before.
Here's another option:
First add a NaN record on the last day you want, then convert to daily frequency. This way the frequency conversion will fill in the missing dates for you.
Starting Frame:
import pandas as pd
import numpy as np
from datetime import date
df2 = pd.DataFrame({ 'day': pd.Series([date(2012, 1, 1), date(2012, 1, 3)]), 'b' : pd.Series([0.22, 0.3]) })
df2 = df2.set_index('day')
df2
Out:
b
day
2012-01-01 0.22
2012-01-03 0.30
Filled Frame:
df2.loc[date(2012, 1, 31), 'b'] = np.nan
df2.asfreq('D')
Out:
b
day
2012-01-01 0.22
2012-01-02 NaN
2012-01-03 0.30
2012-01-04 NaN
2012-01-05 NaN
[...]
2012-01-30 NaN
2012-01-31 NaN
Mark's answer no longer seems to work on pandas 1.1.1.
However, using the same idea, the following works:
from datetime import datetime
import pandas as pd
# get start and desired end dates
first_date = df['date'].min()
today = datetime.today()
# set index
df.set_index('date', inplace=True)
# and here is where the magic happens
idx = pd.date_range(first_date, today, freq='D')
df = df.reindex(idx)
EDIT: just found out that this exact use case is in the docs:
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.reindex.html#pandas.DataFrame.reindex
import datetime

def extendframe(df, ndays):
    """
    (df, ndays) -> df that is padded by ndays at the beginning and end
    """
    # Shift every index value back and forward by ndays...
    ixd = df.index - datetime.timedelta(ndays)
    ixu = df.index + datetime.timedelta(ndays)
    # ...union the three indexes, and reindex; the added rows are NaN.
    ixx = df.index.union(ixd.union(ixu))
    return df.reindex(ixx)
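A quick usage sketch with the question's df2 (assuming a date-like index that supports timedelta arithmetic):
# Pad df2 by one day on each side of every observed date. With observations
# on 2012-01-01 and 2012-01-03, the index becomes 2011-12-31 through
# 2012-01-04, with NaN on the added days.
extended = extendframe(df2, 1)
print(extended)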
This is not exactly the question, since here you know the second index is all the days in January. But suppose you have another index, say from another data frame df1, which might be disjoint from df2's and have a random frequency. Then you can do this:
ix = pd.DatetimeIndex(list(df2.index) + list(df1.index)).unique().sort_values()
df2.reindex(ix)
Converting the indices to lists makes it natural to concatenate them into one longer list.
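Equivalently, Index.union deduplicates and sorts in one step:
ix = df2.index.union(df1.index)
df2.reindex(ix)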