I have time series data in pandas, and I would like to group it by a certain time window in each year and calculate its min and max.
For example:
times = pd.date_range(start = '1/1/2011', end = '1/1/2016', freq = 'D')
df = pd.DataFrame(np.random.rand(len(times)), index=times, columns=["value"])
How can I group by a time window, e.g. 'Jan-10':'Mar-21', for each year and calculate the min and max of the value column?
You can use the resample method.
df.resample('5d').agg(['min','max'])
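For reference, here is a minimal sketch of that one-liner applied to the example frame from the question; note that '5d' aggregates over consecutive fixed five-day bins starting from the first timestamp, rather than a named window within each year:
import numpy as np
import pandas as pd

times = pd.date_range(start='1/1/2011', end='1/1/2016', freq='D')
df = pd.DataFrame(np.random.rand(len(times)), index=times, columns=["value"])

# min and max of 'value' within each consecutive 5-day bin
df.resample('5d').agg(['min', 'max'])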
I'm not sure there is a direct way to do it without first creating a flag for the required days. The following function creates that flag:
# Function for flagging the days required
def flag(x):
    if x.month == 1 and x.day >= 10:
        return True
    elif x.month in [2, 3, 4]:
        return True
    elif x.month == 5 and x.day <= 21:
        return True
    else:
        return False
Since you need results for each year, it is a good idea to have the year as a column. The min and max for each year over the given period can then be obtained with the code below:
times = pd.date_range(start = '1/1/2011', end = '1/1/2016', freq = 'D')
df = pd.DataFrame(np.random.rand(len(times)), index=times, columns=["value"])
df['Year'] = df.index.year
mask = pd.Series(df.index).apply(flag).values
pd.pivot_table(df[mask], values=['value'], index=['Year'], aggfunc=[min, max])
The output will look as follows:
Sample Output
Hope that answers your question... :)
You can define the bin edges, then throw out the bins you don't need (every other one) with .loc[::2, :]. Here I'll define two helper functions just to check that we're getting the date ranges we want within each group (note that since the left edges are open, you need to subtract one day from the start of the window):
import pandas as pd

edges = pd.to_datetime([x for year in df.index.year.unique()
                        for x in [f'{year}-02-09', f'{year}-03-21']])

def min_idx(x):
    return x.index.min()

def max_idx(x):
    return x.index.max()

df.groupby(pd.cut(df.index, bins=edges)).agg([min_idx, max_idx, min, max]).loc[::2, :]
Output:
value
min_idx max_idx min max
(2011-02-09, 2011-03-21] 2011-02-10 2011-03-21 0.009343 0.990564
(2012-02-09, 2012-03-21] 2012-02-10 2012-03-21 0.026369 0.978470
(2013-02-09, 2013-03-21] 2013-02-10 2013-03-21 0.039491 0.946481
(2014-02-09, 2014-03-21] 2014-02-10 2014-03-21 0.029161 0.967490
(2015-02-09, 2015-03-21] 2015-02-10 2015-03-21 0.006877 0.969296
(2016-02-09, 2016-03-21] NaT NaT NaN NaN
I have two dataframes:
import numpy as np
import pandas as pd
test1 = pd.date_range(start='1/1/2018', end='1/10/2018')
test1 = pd.DataFrame(test1)
test1.rename(columns = {list(test1)[0]: 'time'}, inplace = True)
test2 = pd.date_range(start='1/5/2018', end='1/20/2018')
test2 = pd.DataFrame(test2)
test2.rename(columns = {list(test2)[0]: 'time'}, inplace = True)
Now in the first dataframe I create a column:
test1['values'] = np.zeros(10)
I want to fill this column so that next to each date there is the index of the closest date from the second dataframe. I want it to look like this:
0 2018-01-01 0
1 2018-01-02 0
2 2018-01-03 0
3 2018-01-04 0
4 2018-01-05 0
5 2018-01-06 1
6 2018-01-07 2
7 2018-01-08 3
Of course my real data is not evenly spaced and has minutes and seconds, but the idea is the same. I use the following code:
def nearest(items, pivot):
    return min(items, key=lambda x: abs(x - pivot))

for k in range(10):
    a = nearest(test2['time'], test1['time'][k])     # find the nearest timestamp in the second dataframe
    b = test2.index[test2['time'] == a].tolist()[0]  # identify the index of this timestamp
    test1['values'][k] = b                           # assign this index to the cell
This code is very slow on large datasets; how can I make it more efficient?
P.S. timestamps in my real data are sorted and increasing just like in these artificial examples.
You could do this in one line, using numpy's argmin:
test1['values'] = test1['time'].apply(lambda t: np.argmin(np.absolute(test2['time'] - t)))
Note that applying a lambda function is essentially also a loop. Check if that satisfies your requirements performance-wise.
You might also be able to leverage the fact that your timestamps are sorted and the timedelta between each timestamp is constant (if I got that correctly). Calculate the offset in days and derive the index vector, e.g. as follows:
offset = (test1['time'] - test2['time']).iloc[0].days

if offset < 0:  # test1 time starts before test2 time, prepend zeros:
    offset = abs(offset)
    idx = np.append(np.zeros(offset), np.arange(len(test1['time']) - offset)).astype(int)
else:  # test1 time starts after or with test2 time, use arange right away:
    idx = np.arange(offset, offset + len(test1['time']))

test1['values'] = idx
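Another option that exploits the sorted timestamps is pd.merge_asof with direction='nearest', which stays fully vectorized even if the spacing is not constant. A minimal sketch, assuming the test1/test2 frames built above:
import pandas as pd

# carry test2's positional index along as a regular column so it survives the merge
test2_indexed = test2.reset_index().rename(columns={'index': 'nearest_idx'})

# both frames must be sorted on the key column (they are, per the question)
nearest_rows = pd.merge_asof(test1, test2_indexed, on='time', direction='nearest')
test1['values'] = nearest_rows['nearest_idx'].values
Unlike the offset approach, this does not require the spacing between timestamps to be constant.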
I have time series data (EUR/USD).
I want to create a new column based on the following conditions (it may be easier to read my code below to understand them):
if the minimum of the 3 previous high prices is less than or equal to the current high price, the row should be 'BUY_SIGNAL', and if the maximum of the 3 previous low prices is greater than or equal to the current low price, it should be 'SELL_SIGNAL'.
Here is what my table looks like:
DATE OPEN HIGH LOW CLOSE
0 1990.09.28 1.25260 1.25430 1.24680 1.24890
1 1990.10.01 1.25170 1.26500 1.25170 1.25480
2 1990.10.02 1.25520 1.26390 1.25240 1.26330
3 1990.10.03 1.26350 1.27000 1.26030 1.26840
4 1990.10.04 1.26810 1.27750 1.26710 1.27590
and this is my code (I tried to create two functions and neither works):
def target_label(df):
    if df['HIGH'] >= [df['HIGH'].shift(1), df['HIGH'].shift(2), df['HIGH'].shift(3)].min(axis=1):
        return 'BUY_SIGNAL'
    if df['LOW'] >= [df['LOW'].shift(1), df['LOW'].shift(2), df['LOW'].shift(3)].min(axis=1):
        return 'SELL_SIGNAL'
    else:
        return 'NO_SIGNAL'

def target_label(df):
    if df['HIGH'] >= df[['HIGH1', 'HIGH2', 'HIGH3']].min(axis=1):
        return 'BUY_SIGNAL'
    if df['LOW'] <= df[['LOW1', 'LOW2', 'LOW3']].max(axis=1):
        return 'SELL_SIGNAL'
    else:
        return 'NO_SIGNAL'

d_df.apply(lambda df: target_label(df), axis=1)
You can use shift(1) followed by rolling(3).min() to get the minimum of the previous 3 rows (the shift excludes the current row). The same works for other functions like max, mean, etc. Something like the following:
import numpy as np

df['signal'] = np.where(
    df['HIGH'] >= df.shift(1).rolling(3)['HIGH'].min(), 'BUY_SIGNAL',
    np.where(
        df['LOW'] <= df.shift(1).rolling(3)['LOW'].max(), 'SELL_SIGNAL',
        'NO_SIGNAL'
    )
)
# the first three rows compare against NaN, so both conditions are False and they get 'NO_SIGNAL'
I have a data frame that looks like this:
group date value
g_1 1/2/2019 11:03:00 3
g_1 1/2/2019 11:04:00 5
g_1 1/2/2019 10:03:32 100
g_2 4/3/2019 09:11:09 46
I want to calculate the time difference between occurrences (in seconds) per group.
Example output:
groups_time_diff = {'g_1': [23, 5666, 7878], 'g_2': [0.2, 56, 2343], ...}
This is my code:
groups_time_diff = defaultdict(list)
for group in tqdm(groups):
    group_df = unit_df[unit_df['group'] == group]
    dates = list(group_df['time'])
    while len(dates) != 0:
        min_date = min(dates)
        dates.remove(min_date)
        if len(dates) > 0:
            second_min_date = min(dates)
            date_diff = second_min_date - min_date
            groups_time_diff[group].append(date_diff.seconds)
This takes forever to run and I am looking for a more time efficient way to get the desired output.
Any ideas?
Try this:
sorted_group_df = group_df.sort_values(by='time',ascending=True)
dates = sorted_group_df['time']
one = dates[1:].reset_index(drop=True)
two = dates[:-1].reset_index(drop=True)
date_difference = one - two
date_difference_in_seconds = date_difference.dt.seconds
Sort your dates first, then subtract the shifted values (going through .values avoids index alignment):
dates = dates.sort_values()
pd.Series(dates.values[1:] - dates.values[:-1]).dt.seconds
You are using the min function twice in each iteration, which is not efficient.
Hope this helps.
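For what it's worth, the whole dictionary can also be built in one pass with groupby and diff. A minimal sketch, assuming the unit_df frame and the 'group' and 'time' column names used in the loop above, with 'time' already parsed as datetimes:
# sort once, then take consecutive differences within each group
unit_df = unit_df.sort_values(['group', 'time'])
unit_df['diff_s'] = unit_df.groupby('group')['time'].diff().dt.total_seconds()

# the first row of each group has no predecessor, hence the dropna()
groups_time_diff = {g: grp['diff_s'].dropna().tolist()
                    for g, grp in unit_df.groupby('group')}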
I have a dataframe:
CAT ^GSPC
Date
2012-01-06 80.435059 1277.810059
2012-01-09 81.560600 1280.699951
2012-01-10 83.962914 1292.079956
....
2017-09-16 144.56653 2230.567646
and I want to find the slope of the stock against the S&P index over the last 63 days for each period. I have tried:
x = 0
temp_dct = {}
for date in df.index:
    x += 1
    max(x, (len(df.index) - 64))
    temp_dct[str(date)] = np.polyfit(df['^GSPC'][0 + x:63 + x].values,
                                     df['CAT'][0 + x:63 + x].values,
                                     1)[0]
However, I feel this is very "unpythonic", and I've had trouble integrating rolling/shift functions into this.
My expected output is a column called "Beta" that has the slope of the S&P (x values) against the stock (y values) for all available dates.
# this will operate on a series
def polyf(seri):
    return np.polyfit(seri.index.values, seri.values, 1)[0]

# you can store the original index in a column in case you need to reset back to it after fitting
df.index = df['^GSPC']
df['slope'] = df['CAT'].rolling(63, min_periods=2).apply(polyf, raw=False)
After running this, there will be a new column storing the fitting result.
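If you would rather not overwrite the index, the same rolling slope can also be computed from the ordinary-least-squares identity slope = cov(x, y) / var(x). A minimal sketch under that assumption, writing the result to the 'Beta' column named in the question:
# rolling OLS slope of CAT (y) on ^GSPC (x): cov(x, y) / var(x)
cov_xy = df['CAT'].rolling(63, min_periods=2).cov(df['^GSPC'])
var_x = df['^GSPC'].rolling(63, min_periods=2).var()
df['Beta'] = cov_xy / var_x
Both rolling cov and var default to ddof=1, so the normalisation cancels in the ratio.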
I have a dataframe of jobs for different people, with a start and end time for each job. I'd like to count, every four months, how many jobs each person is responsible for. I figured out a way to do it, but I'm sure it's tremendously inefficient (I'm new to pandas). It takes quite a while to compute when I run the code on my complete dataset (hundreds of people and jobs).
Here is what I have so far.
# create a data frame
import pandas as pd
import numpy as np

df = pd.DataFrame({'job': pd.Categorical(['job1', 'job2', 'job3', 'job4']),
                   'person': pd.Categorical(['p1', 'p1', 'p2', 'p2']),
                   'start': ['2015-01-01', '2015-06-01', '2015-01-01', '2016-01-01'],
                   'end': ['2015-07-01', '2015-12-31', '2016-03-01', '2016-12-31']})
df['start'] = pd.to_datetime(df['start'])
df['end'] = pd.to_datetime(df['end'])
This gives me a dataframe with one row per job, with the columns job, person, start and end.
I then create a new dataset with
bdate = min(df['start'])
edate = max(df['end'])
dates = pd.date_range(bdate, edate, freq='4MS')
people = sorted(set(list(df['person'])))
df2 = pd.DataFrame(np.zeros((len(dates), len(people))), index=dates, columns=people)

for d in pd.date_range(bdate, edate, freq='MS'):
    for p in people:
        contagem = df[(df['person'] == p) &
                      (df['start'] <= d) &
                      (df['end'] >= d)]
        pos = np.argmin(np.abs(dates - d))
        df2.iloc[pos][p] = len(contagem.index)

df2
And I get the counts I want.
I'm sure there must be a better way of doing this without having to loop through all dates and persons. But how?
This answer assumes that each job-person combination is unique. It creates a series for every row, with the value equal to the job and an index that expands the dates. Then it resamples every 4th month (which is not quarterly, but it is what your solution describes) and counts the unique non-NA occurrences.
def make_date_range(x):
    return pd.Series(index=pd.date_range(x.start.values[0], x.end.values[0], freq='M'),
                     data=x.job.values[0])

# Iterate through each job-person combo and make an entry for each month with the job as the value
df1 = df.groupby(['job', 'person']).apply(make_date_range).unstack('person')

# remove the outer level from the index
df1.index = df1.index.droplevel('job')

# resample every 4 months, counting only the unique values
df1.resample('4MS').agg(lambda x: len(x[x.notnull()].unique()))
Output
person p1 p2
2015-01-01 1 1
2015-05-01 2 1
2015-09-01 1 1
2016-01-01 0 2
2016-05-01 0 1
2016-09-01 0 1
And here is a long one-line solution that iterates over every row, creates a new dataframe for each, stacks them all together via pd.concat, and then resamples.
pd.concat([pd.DataFrame(index=pd.date_range(tup.start, tup.end, freq='4MS'),
                        data=tup.job,
                        columns=[tup.person]) for tup in df.itertuples()])\
    .resample('4MS').count()
And another one that is faster:
df1 = pd.melt(df, id_vars=['job', 'person'], value_name='date').set_index('date')
g = df1.groupby([pd.Grouper(freq='4MS'), 'person'])['job']  # pd.Grouper replaces the deprecated pd.TimeGrouper
g.agg('nunique').unstack('person', fill_value=0)