Shifting a datetime index - python

I am trying to shift my datetime index so that the last row, 2018-04-09, moves one day ahead, shifting only that row. I tried a few ways and got errors such as the one below:
df.index[-1] = df.index[-1] + pd.offsets.Day(1)
TypeError: Index does not support mutable operations
Can you kindly advise a suitable way please?
My df looks like this:
FinalPosition
dt
2018-04-03 1.32
2018-04-04 NaN
2018-04-05 NaN
2018-04-06 NaN
2018-04-09 NaN

Use rename if the values of the DatetimeIndex are unique:
df = df.rename({df.index[-1]: df.index[-1] + pd.offsets.Day(1)})
print (df)
FinalPosition
dt
2018-04-03 1.32
2018-04-04 NaN
2018-04-05 NaN
2018-04-06 NaN
2018-04-10 NaN
If the values are possibly not unique, DatetimeIndex.insert works:
df.index = df.index[:-1].insert(len(df) - 1, df.index[-1] + pd.offsets.Day(1))
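A minimal end-to-end sketch of the insert approach on the sample frame (column and index names taken from the question):
import pandas as pd

df = pd.DataFrame(
    {"FinalPosition": [1.32, None, None, None, None]},
    index=pd.DatetimeIndex(
        ["2018-04-03", "2018-04-04", "2018-04-05", "2018-04-06", "2018-04-09"],
        name="dt",
    ),
)

# drop the last label and append it again, shifted one day ahead
df.index = df.index[:-1].insert(len(df) - 1, df.index[-1] + pd.offsets.Day(1))
print(df)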

Use .iloc
Ex:
import pandas as pd
df = pd.DataFrame({"datetime": ["2018-04-09"]})
df["datetime"] = pd.to_datetime(df["datetime"])
print(df["datetime"].iloc[-1:] - pd.offsets.Day(1))
Output:
0 2018-04-08
Name: datetime, dtype: datetime64[ns]
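Note that the snippet above only computes the shifted timestamp; to actually store it you still need an assignment, for example (a sketch on the same toy frame):
# write the shifted value back into the last row of the column
df.loc[df.index[-1], "datetime"] = df["datetime"].iloc[-1] - pd.offsets.Day(1)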


How can I count the rows between a date index and the date one month in the future, vectorized in pandas, and add the counts as a column?

I have a dataframe (df) with a date index, and I want to achieve the following:
1. Take the Dates index and add one month -> e.g. nxt_dt = df.index + np.timedelta64(month=1), and let's call df.index curr_dt.
2. Find the nearest entry in Dates that is >= nxt_dt.
3. Count the rows between curr_dt and nxt_dt and put them into a column in df.
The result is supposed to look like this:
px_volume listed_sh ... iv_mid_6m '30d'
Dates ...
2005-01-03 228805 NaN ... 0.202625 21
2005-01-04 189983 NaN ... 0.203465 22
2005-01-05 224310 NaN ... 0.202455 23
2005-01-06 221988 NaN ... 0.202385 20
2005-01-07 322691 NaN ... 0.201065 21
Needless to say, there are only dates/rows in the df for which there are observations.
I can think of some ways to get this done with loops, but since the data I work with is quite big, I would really like to avoid looping through rows to fill them.
Is there a way in pandas to get this done vectorized?
If you are OK with reindexing, this should do the job:
import numpy as np
import pandas as pd
df = pd.DataFrame({'date': ['2020-01-01', '2020-01-08', '2020-01-24', '2020-01-29', '2020-02-09', '2020-03-04']})
df['date'] = pd.to_datetime(df['date'])
df['value'] = 1
df = df.set_index('date')
df = df.reindex(pd.date_range('2020-01-01','2020-03-04')).fillna(0)
df = df.sort_index(ascending=False)
df['30d'] = df['value'].rolling(30).sum() - 1
df.sort_index().query("value == 1")
gives:
value 30d
2020-01-01 1.0 3.0
2020-01-08 1.0 2.0
2020-01-24 1.0 2.0
2020-01-29 1.0 1.0
2020-02-09 1.0 NaN
2020-03-04 1.0 NaN
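If the window should be a true calendar month rather than a fixed 30 days, a vectorized alternative (a sketch, not part of the answer above; the edge handling differs, rows near the end get a partial count instead of NaN) is searchsorted on the sorted index:
import numpy as np
import pandas as pd

# toy frame with an irregular, sorted DatetimeIndex (stand-in for the real data)
idx = pd.DatetimeIndex(['2020-01-01', '2020-01-08', '2020-01-24',
                        '2020-01-29', '2020-02-09', '2020-03-04'], name='Dates')
df = pd.DataFrame({'value': 1}, index=idx)

nxt = idx + pd.DateOffset(months=1)        # same date one calendar month later
pos = idx.searchsorted(nxt)                # position of first row with date >= nxt_dt
df['30d'] = pos - np.arange(len(idx)) - 1  # rows strictly between curr_dt and nxt_dt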

How to apply the isocalendar function to a Series in pandas (python)

I have the following table:
Time
2016-09-10T23:20:00.000000000
2016-08-10T23:20:00.000000000
2016-09-10T23:20:00.000000000
2017-09-10T23:20:00.000000000
2016-09-10T23:20:00.000000000
I wish to use isocalendar to get the work week (WW) like below; any ideas you can share?
Time WW
2016-01-01T23:20:00.000000000 201601
2016-01-01T23:20:00.000000000 201601
2016-01-01T23:20:00.000000000 201601
2017-01-01T23:20:00.000000000 201701
2018-01-01T23:20:00.000000000 201801
You can use:
#convert column to datetime
df['Time'] = pd.to_datetime(df['Time'])
#simpler solution with strftime
df['WW'] = df['Time'].dt.strftime('%G-%V')
#solution with isocalendar
df['WW1'] = df['Time'].apply(lambda x: str(x.isocalendar()[0]) + '-' +
                             str(x.isocalendar()[1]).zfill(2))
print (df)
Time WW WW1
0 2017-01-01 00:00:00 2016-52 2016-52 <- changed datetime
1 2016-08-10 23:20:00 2016-32 2016-32
2 2016-09-10 23:20:00 2016-36 2016-36
3 2017-09-10 23:20:00 2017-36 2017-36
4 2016-09-10 23:20:00 2016-36 2016-36
Thank you @Fierr for correcting '%Y-%V' to '%G-%V'.
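In pandas 1.1+ there is also Series.dt.isocalendar(), which returns the ISO year/week/day and avoids the apply; a sketch (the column name WW2 is just for illustration):
# vectorized ISO calendar (pandas >= 1.1); result has columns year, week, day
iso = df['Time'].dt.isocalendar()
df['WW2'] = iso['year'].astype(str) + '-' + iso['week'].astype(str).str.zfill(2)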

How to calculate an index series for an event window

Suppose I have a time series like so:
pd.Series(np.random.rand(20), index=pd.date_range("1990-01-01",periods=20))
1990-01-01 0.018363
1990-01-02 0.288625
1990-01-03 0.460708
1990-01-04 0.663063
1990-01-05 0.434250
1990-01-06 0.504893
1990-01-07 0.587743
1990-01-08 0.412223
1990-01-09 0.604656
1990-01-10 0.960338
1990-01-11 0.606765
1990-01-12 0.110480
1990-01-13 0.671683
1990-01-14 0.178488
1990-01-15 0.458074
1990-01-16 0.219303
1990-01-17 0.172665
1990-01-18 0.429534
1990-01-19 0.505891
1990-01-20 0.242567
Freq: D, dtype: float64
Suppose the event dates are 1990-01-05 and 1990-01-15. I want to subset the data down to a window of (-2, +2) days around each event, with an added column giving the relative number of days from the event date (which itself has value 0):
1990-01-03 0.460708 -2
1990-01-04 0.663063 -1
1990-01-05 0.434250 0
1990-01-06 0.504893 1
1990-01-07 0.587743 2
1990-01-13 0.671683 -2
1990-01-14 0.178488 -1
1990-01-15 0.458074 0
1990-01-16 0.219303 1
1990-01-17 0.172665 2
Freq: D, dtype: float64
This question is related to my previous question here: Event Study in Pandas
Leveraging your previous solution from 'Event Study in Pandas' by @jezrael:
import numpy as np
import pandas as pd
s = pd.Series(np.random.rand(20), index=pd.date_range("1990-01-01",periods=20))
date1 = pd.to_datetime('1990-01-05')
date2 = pd.to_datetime('1990-01-15')
window = 2
dates = [date1, date2]
s1 = pd.concat([s.loc[date - pd.Timedelta(window, unit='d'):
                      date + pd.Timedelta(window, unit='d')] for date in dates])
Convert to dataframe:
df = s1.to_frame()
df['Offset'] = pd.Series(data=np.arange(-window,window+1).tolist()*len(dates),index=s1.index)
df
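A variant (a sketch, reusing s, dates and window from above) derives the offset from the index itself, which also holds up if a day inside a window happens to be missing from the series:
# build each event window and compute every row's day offset from its event date
parts = []
for date in dates:
    win = s.loc[date - pd.Timedelta(window, unit='d'):
                date + pd.Timedelta(window, unit='d')].to_frame(name='value')
    win['Offset'] = (win.index - date).days   # 0 on the event date itself
    parts.append(win)
df = pd.concat(parts)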

Pandas and csv import into dataframe. How best to combine date and hour fields into one

I have a csv file that I am trying to import into pandas.
There are two columns of interest, date and hour, and they are the first two columns.
E.g.
date,hour,...
10-1-2013,0,
10-1-2013,0,
10-1-2013,0,
10-1-2013,1,
10-1-2013,1,
How do I import using pandas so that hour and date are combined, or is that best done after the initial import?
df = DataFrame.from_csv('bingads.csv', sep=',')
If I do the initial import how do I combine the two as a date and then delete the hour?
Thanks
Define your own date_parser:
In [291]: from dateutil.parser import parse
In [292]: import datetime as dt
In [293]: def date_parser(x):
   .....:     date, hour = x.split(' ')
   .....:     return parse(date) + dt.timedelta(0, 3600*int(hour))
In [298]: pd.read_csv('test.csv', parse_dates=[[0,1]], date_parser=date_parser)
Out[298]:
date_hour a b c
0 2013-10-01 00:00:00 1 1 1
1 2013-10-01 00:00:00 2 2 2
2 2013-10-01 00:00:00 3 3 3
3 2013-10-01 01:00:00 4 4 4
4 2013-10-01 01:00:00 5 5 5
The demo below uses read_clipboard; apply read_csv the same way to handle your actual data:
>>> df = pd.read_clipboard(sep=',')
>>> df['date'] = pd.to_datetime(df.date) + pd.to_timedelta(df.hour, unit='D')/24
>>> del df['hour']
>>> df
date ...
0 2013-10-01 00:00:00 NaN
1 2013-10-01 00:00:00 NaN
2 2013-10-01 00:00:00 NaN
3 2013-10-01 01:00:00 NaN
4 2013-10-01 01:00:00 NaN
[5 rows x 2 columns]
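With a file rather than the clipboard, the same idea might look like this (a sketch; 'bingads.csv' is the file named in the question):
import pandas as pd

df = pd.read_csv('bingads.csv')
df['date'] = pd.to_datetime(df['date']) + pd.to_timedelta(df['hour'], unit='h')
df = df.drop(columns='hour')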
Take a look at the parse_dates argument which pandas.read_csv accepts.
You can do something like:
df = pandas.read_csv('some.csv', parse_dates=True)
# in which case pandas will parse all columns where it finds dates
df = pandas.read_csv('some.csv', parse_dates=[i,j,k])
# in which case pandas will parse the i, j and kth columns for dates
Since you are only using the two columns from the csv file and combining those into one, I would squeeze them into a series of datetime objects like so:
import pandas as pd
from io import StringIO
import datetime as dt
txt='''\
date,hour,A,B
10-1-2013,0,1,6
10-1-2013,0,2,7
10-1-2013,0,3,8
10-1-2013,1,4,9
10-1-2013,1,5,10'''
def date_parser(date, hour):
    dates = []
    for ed, eh in zip(date, hour):
        month, day, year = list(map(int, ed.split('-')))
        hour = int(eh)
        dates.append(dt.datetime(year, month, day, hour))
    return dates
p = pd.read_csv(StringIO(txt), usecols=[0, 1],
                parse_dates=[[0, 1]], date_parser=date_parser, squeeze=True)
print(p)
Prints:
0 2013-10-01 00:00:00
1 2013-10-01 00:00:00
2 2013-10-01 00:00:00
3 2013-10-01 01:00:00
4 2013-10-01 01:00:00
Name: date_hour, dtype: datetime64[ns]
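On newer pandas, where squeeze= and the multi-column date_parser have been deprecated or removed, roughly the same Series can be built after the read, e.g. (a sketch reusing the same txt):
from io import StringIO
import pandas as pd

raw = pd.read_csv(StringIO(txt), usecols=['date', 'hour'])
p = pd.to_datetime(raw['date'], format='%m-%d-%Y') + pd.to_timedelta(raw['hour'], unit='h')
print(p)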

How to resample without skipping NaN values in pandas

I am trying to get the 10-day aggregate of my data, which has NaN values. The sum over 10 days should return NaN if there is a NaN value anywhere in the 10-day window.
When I apply the code below, pandas treats NaN as zero and returns the sum of the remaining days.
dateRange = pd.date_range(start_date, periods=len(data), freq='D')
# Creating a data frame so that the timeseries can handle numpy array.
df = pd.DataFrame(data)
base_Series = pd.DataFrame(list(df.values), index=dateRange)
# Converting to aggregated series
agg_series = base_Series.resample('10D', how='sum')
agg_data = agg_series.values
Sample Data:
2011-06-01 46.520536
2011-06-02 8.988311
2011-06-03 0.133823
2011-06-04 0.274521
2011-06-05 1.283360
2011-06-06 2.556313
2011-06-07 0.027461
2011-06-08 0.001584
2011-06-09 0.079193
2011-06-10 2.389549
2011-06-11 NaN
2011-06-12 0.195844
2011-06-13 0.058720
2011-06-14 6.570925
2011-06-15 0.015107
2011-06-16 0.031066
2011-06-17 0.073008
2011-06-18 0.072198
2011-06-19 0.044534
2011-06-20 0.240080
Output:
2011-06-01 62.254651
2011-06-11 7.301481
This uses the numpy sum, which will return NaN if a NaN is present in the window:
In [35]: s = Series(randn(100),index=date_range('20130101',periods=100))
In [36]: s.iloc[11] = np.nan
In [37]: s.resample('10D',how=lambda x: x.values.sum())
Out[37]:
2013-01-01 6.910729
2013-01-11 NaN
2013-01-21 -1.592541
2013-01-31 -2.013012
2013-02-10 1.129273
2013-02-20 -2.054807
2013-03-02 4.669622
2013-03-12 3.489225
2013-03-22 0.390786
2013-04-01 -0.005655
dtype: float64
To filter out those days which have any NaNs, I propose that you do
noNaN_days_only = s.groupby(lambda x: x.date).filter(lambda x: ~x.isnull().any())
where s is a DataFrame
Just apply an agg function:
agg_series = base_Series.resample('10D').agg(lambda x: np.nan if np.isnan(x).any() else np.sum(x))
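In more recent pandas the same NaN-propagating behaviour can be had without dropping to numpy, e.g. (a sketch on base_Series from the question):
# skipna=False makes any NaN in a 10-day bin poison that bin's sum
agg_series = base_Series.resample('10D').apply(lambda x: x.sum(skipna=False))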
