How can I calculate a weighted moving average using yfinance and pandas - python

I want to compare the 50 day moving average and 50 day weighted moving average of a company.
import yfinance as yf
import datetime as dt
start = '2021-05-01' # format: YYYY-MM-DD
end = dt.datetime.now() # today
stock = 'AMD'
df = yf.download(stock, start, end, interval='1h')
This is just to set up the data frame.
The code below adds a column to the data frame with the moving average, but I have been unsuccessful trying to do the same for a weighted moving average.
df['50MA'] = df.iloc[:, 4].rolling(window=50).mean()
This is what I have, which is incorrect:
for i in range(len(df.index)):
    df['W50MA'] = (df.iloc[i, 4]) * (df.iloc[i, 5] / sum(df.iloc[:, 5]))

You could try something like this:
import numpy as np

weights = np.arange(1, 51) / 100  # linearly increasing weights for the 50 bars
sum_weights = np.sum(weights)

def weighted_ma(window):
    # weighted average of the window, most recent bar weighted highest
    return np.sum(weights * window) / sum_weights

df['50WMA'] = df.iloc[:, 4].rolling(window=50).apply(weighted_ma)
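As an aside (not from the original answer): np.average supports a weights argument directly, and raw=True makes rolling().apply() pass plain numpy arrays to the callback, which is noticeably faster on long series. A minimal sketch, reusing the weights array above:
df['50WMA'] = df.iloc[:, 4].rolling(window=50).apply(
    lambda window: np.average(window, weights=weights), raw=True)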

Related

Identify in a pandas series when the trend changes from positive to negative

I have a pandas dataframe with securities prices and several moving-average trend lines of various lengths. The dataframes are large enough that I would like to find the most efficient way to capture the index of a particular series where the slope changes (in this example, let's just say from positive to negative) for a given series in the dataframe.
My hack seems very "hacky". I am currently doing the following (Note, imagine this is for a single moving average series):
filter = (df.diff()>0).diff().dropna(axis=0)
new_df = df[filter].dropna(axis=0)
Full example code below:
import pandas as pd
import numpy as np
from datetime import datetime, timedelta
# Create a sample Dataframe
date_today = datetime.now()
days = pd.date_range(date_today, date_today + timedelta(7), freq='D')
close = pd.Series([1,2,3,4,2,1,4,3])
df = pd.DataFrame({"date":days, "prices":close})
df.set_index("date", inplace=True)
print("Original DF")
print(df)
# Long Explanation
updays = (df.diff()>0) # Show True for all updays false for all downdays
print("Updays df is")
print(updays)
reversal_df = (updays.diff()) # this will only show change days as True
reversal_df.dropna(axis=0, inplace=True) # Handle the first day
trade_df = df[reversal_df].dropna() # Select only the days where the trend reversed
print("These are the days where the trend reverses it self from negative to positive or vice versa ")
print(trade_df)
# Simplified below by combining the above into two lines
filter = (df.diff()>0).diff().dropna(axis=0)
new_df = df[filter].dropna(axis=0)
print("The final result is this: ")
print(new_df)
Any help here would be appreciated. Note: I'm interested in balancing two things here: doing this in a way I can understand, and making it sufficiently quick to compute.
Multiple moving average solution.
Look for the comment # *** THE SOLUTION BEGINS HERE ***; everything before that just generates data and prints/plots to validate.
What I do here is calculate the sign of the MVA slopes, so a positive slope has a value of 1 and a negative slope a value of -1:
Slope_i = MVA_i(periods) - MVA_{i-1}(periods)
m<periods>_slp_sgn_i = sign(Slope_i)
Then, to spot slope changes, I calculate:
m<periods>_slp_chg_i = sign(m<periods>_slp_sgn_i - m<periods>_slp_sgn_{i-1})
So, for example, if the slope changes from 1 (positive) to -1 (negative):
sign(-1 - 1) = sign(-2) = -1
On the other hand, if it changes from -1 to 1:
sign(1 - (-1)) = sign(2) = 1
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# GENERATE DATA RANDOM PRICE
_periods = 1000
_value_0 = 1.1300
_std = 0.0005
_freq = '5T'
_output_col = 'ask'
_last_date = pd.to_datetime('2021-12-15')
_p_t = np.zeros(_periods)
_wn = np.random.normal(loc=0, scale=_std, size=_periods)
_p_t[0] = _value_0
_wn[0] = 0
_date_index = pd.date_range(end=_last_date, periods=_periods, freq=_freq)
df = pd.DataFrame(np.stack([_p_t, _wn], axis=1), columns=[_output_col, "wn"], index=_date_index)
for i in range(1, _periods):
    # random walk: current price = previous price + white-noise step
    # (positional iloc assignment avoids unreliable chained indexing)
    df.iloc[i, 0] = df.iloc[i - 1, 0] + df.iloc[i, 1]
print(df.head(5))
# CALCULATE MOVING AVERAGES (3)
df['mva_25'] = df['ask'].rolling(25).mean()
df['mva_50'] = df['ask'].rolling(50).mean()
df['mva_100'] = df['ask'].rolling(100).mean()
# plot to check
df['ask'].plot(figsize=(15,5))
df['mva_25'].plot(figsize=(15,5))
df['mva_50'].plot(figsize=(15,5))
df['mva_100'].plot(figsize=(15,5))
plt.show()
# *** THE SOLUTION BEGINS HERE ***
# calculate mva slopes directions
# positive slope: 1, negative slope -1
df['m25_slp_sgn'] = np.sign(df['mva_25'] - df['mva_25'].shift(1))
df['m50_slp_sgn'] = np.sign(df['mva_50'] - df['mva_50'].shift(1))
df['m100_slp_sgn'] = np.sign(df['mva_100'] - df['mva_100'].shift(1))
# CALCULATE CHANGE IN SLOPE
# from 1 to -1: -1
# from -1 to 1: 1
df['m25_slp_chg'] = np.sign(df['m25_slp_sgn'] - df['m25_slp_sgn'].shift(1))
df['m50_slp_chg'] = np.sign(df['m50_slp_sgn'] - df['m50_slp_sgn'].shift(1))
df['m100_slp_chg'] = np.sign(df['m100_slp_sgn'] - df['m100_slp_sgn'].shift(1))
# clean NAN
df.dropna(inplace=True)
# print data to visually check
print(df.iloc[20:40][['mva_25', 'm25_slp_sgn', 'm25_slp_chg']])
# query where slope of MVA25 changes from positive to negative
df[(df['m25_slp_chg'] == -1)].head(5)
WARNING: the data is generated randomly, so the plots and printed output will change each time you execute the code.
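For the asker's single 'prices' column, the same idea collapses to a couple of lines. A minimal sketch, assuming the df built in the question:
slope_sign = np.sign(df['prices'].diff())
reversals = df[slope_sign.diff().fillna(0) != 0]  # rows where the slope sign flips
print(reversals)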

how to create a stacked bar chart indicating time spent on nest per day

I have some data of an owl being present in the nest box. In a previous question you helped me visualize when the owl is in the box:
In addition I created a plot of the hours per day spent in the box with the code below (probably this can be done more efficiently):
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# raw data indicating time spent in box (each row represents start and end time)
time = pd.DatetimeIndex(["2021-12-01 18:08","2021-12-01 18:11",
"2021-12-02 05:27","2021-12-02 05:29",
"2021-12-02 22:40","2021-12-02 22:43",
"2021-12-03 19:24","2021-12-03 19:27",
"2021-12-06 18:04","2021-12-06 18:06",
"2021-12-07 05:28","2021-12-07 05:30",
"2021-12-10 03:05","2021-12-10 03:10",
"2021-12-10 07:11","2021-12-10 07:13",
"2021-12-10 20:40","2021-12-10 20:41",
"2021-12-12 19:42","2021-12-12 19:45",
"2021-12-13 04:13","2021-12-13 04:17",
"2021-12-15 04:28","2021-12-15 04:30",
"2021-12-15 05:21","2021-12-15 05:25",
"2021-12-15 17:40","2021-12-15 17:44",
"2021-12-15 22:31","2021-12-15 22:37",
"2021-12-16 04:24","2021-12-16 04:28",
"2021-12-16 19:58","2021-12-16 20:09",
"2021-12-17 17:42","2021-12-17 18:04",
"2021-12-17 22:19","2021-12-17 22:26",
"2021-12-18 05:41","2021-12-18 05:44",
"2021-12-19 07:40","2021-12-19 16:55",
"2021-12-19 20:39","2021-12-19 20:52",
"2021-12-19 21:56","2021-12-19 23:17",
"2021-12-21 04:53","2021-12-21 04:59",
"2021-12-21 05:37","2021-12-21 05:39",
"2021-12-22 08:06","2021-12-22 17:22",
"2021-12-22 20:04","2021-12-22 21:24",
"2021-12-22 21:44","2021-12-22 22:47",
"2021-12-23 02:20","2021-12-23 06:17",
"2021-12-23 08:07","2021-12-23 16:54",
"2021-12-23 19:36","2021-12-23 23:59:59",
"2021-12-24 00:00","2021-12-24 00:28",
"2021-12-24 07:53","2021-12-24 17:00",
])
# create dataframe with column indicating presence (1) or absence (0)
time_df = pd.DataFrame(data={'present':[1,0]*int(len(time)/2)}, index=time)
# calculate interval length and add to time_df
time_df['interval'] = time_df.index.to_series().diff().astype('timedelta64[m]')
# add column with day to time_df
time_df['day'] = time.day
#select only intervals where owl is present
timeinbox = time_df.iloc[1::2, :]
interval = timeinbox.interval
day = timeinbox.day
# sum multiple intervals per day
interval_tot = [interval[0]]
day_tot = [day[0]]
for i in range(1, len(day)):
    if day[i] == day[i-1]:
        interval_tot[-1] += interval[i]
    else:
        day_tot.append(day[i])
        interval_tot.append(interval[i])
# convert minutes to hours
for i in range(len(interval_tot)):
    interval_tot[i] = interval_tot[i] / 60
plt.figure(figsize=(15, 5))
plt.grid(zorder=0)
plt.bar(day_tot, interval_tot, color='g', zorder=3)
plt.xlim([1,31])
plt.xlabel('day in December')
plt.ylabel('hours per day in nest box')
plt.xticks(np.arange(1,31,1))
plt.ylim([0, 24])
Now I would like to combine all data in one plot by making a stacked bar chart, where each day is represented by a bar and each bar indicating for each of the 24*60 minutes whether the owl is present or not. Is this possible from the current data structure?
The data seems to have been created manually, so I have changed the format of the presented data. My approach is to flag every minute between each start and end time as present (1), then build a continuous 1-minute index spanning the first day through the day after the last day and reindex the presence frame onto it, filling the remaining minutes as absent (0). That reindexed frame is the data for the graph. For the plot, I extract each day, build a color list (red for present, green for absent), and stack height-1 bars, one per minute. It may be worth grouping the data into hourly units instead; see the sketch after the code.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from datetime import timedelta
import io
data = '''
start_time,end_time
"2021-12-01 18:08","2021-12-01 18:11"
"2021-12-02 05:27","2021-12-02 05:29"
"2021-12-02 22:40","2021-12-02 22:43"
"2021-12-03 19:24","2021-12-03 19:27"
"2021-12-06 18:04","2021-12-06 18:06"
"2021-12-07 05:28","2021-12-07 05:30"
"2021-12-10 03:05","2021-12-10 03:10"
"2021-12-10 07:11","2021-12-10 07:13"
"2021-12-10 20:40","2021-12-10 20:41"
"2021-12-12 19:42","2021-12-12 19:45"
"2021-12-13 04:13","2021-12-13 04:17"
"2021-12-15 04:28","2021-12-15 04:30"
"2021-12-15 05:21","2021-12-15 05:25"
"2021-12-15 17:40","2021-12-15 17:44"
"2021-12-15 22:31","2021-12-15 22:37"
"2021-12-16 04:24","2021-12-16 04:28"
"2021-12-16 19:58","2021-12-16 20:09"
"2021-12-17 17:42","2021-12-17 18:04"
"2021-12-17 22:19","2021-12-17 22:26"
"2021-12-18 05:41","2021-12-18 05:44"
"2021-12-19 07:40","2021-12-19 16:55"
"2021-12-19 20:39","2021-12-19 20:52"
"2021-12-19 21:56","2021-12-19 23:17"
"2021-12-21 04:53","2021-12-21 04:59"
"2021-12-21 05:37","2021-12-21 05:39"
"2021-12-22 08:06","2021-12-22 17:22"
"2021-12-22 20:04","2021-12-22 21:24"
"2021-12-22 21:44","2021-12-22 22:47"
"2021-12-23 02:20","2021-12-23 06:17"
"2021-12-23 08:07","2021-12-23 16:54"
"2021-12-23 19:36","2021-12-24 00:00"
"2021-12-24 00:00","2021-12-24 00:28"
"2021-12-24 07:53","2021-12-24 17:00"
'''
df = pd.read_csv(io.StringIO(data), sep=',')
df['start_time'] = pd.to_datetime(df['start_time'])
df['end_time'] = pd.to_datetime(df['end_time'])
time_df = pd.DataFrame()
for idx, row in df.iterrows():
    # one row per minute of presence; the end minute is exclusive
    rng = pd.date_range(row['start_time'], row['end_time']-timedelta(minutes=1), freq='1min')
    tmp = pd.DataFrame({'present':[1]*len(rng)}, index=rng)
    time_df = time_df.append(tmp)
date_add = pd.date_range(time_df.index[0].date(), time_df.index[-1].date()+timedelta(days=1), freq='1min')
time_df = time_df.reindex(date_add, fill_value=0)
time_df['day'] = time_df.index.day
fig, ax = plt.subplots(figsize=(8,15))
ax.set_yticks(np.arange(0,1500,60))
ax.set_ylim(0,1440)
ax.set_xticks(np.arange(1,25,1))
days = time_df['day'].unique()
for d in days:
    day_df = time_df.query('day == @d')  # @d references the loop variable
    colors = ['r' if p == 1 else 'g' for p in day_df['present']]
    for i in range(len(day_df)):
        ax.bar(d, height=1, width=0.5, bottom=i+1, color=colors[i])
plt.show()
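A hedged sketch of the hourly grouping suggested above, reusing the time_df built in this answer: resample the minute-level present flags to minutes-per-hour, pivot to a day-by-hour table, and let pandas draw the stacked bars.
hourly = time_df['present'].resample('1H').sum()  # minutes present per hour
per_day = hourly.groupby([hourly.index.day, hourly.index.hour]).sum().unstack(fill_value=0)
per_day.plot(kind='bar', stacked=True, figsize=(15, 5), legend=False)
plt.xlabel('day in December')
plt.ylabel('minutes in nest box, stacked by hour')
plt.show()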

pandas calculate delta time

Here's some code that will generate some random data and chart it, plus lines representing the 30th and 90th percentiles.
import pandas as pd
import numpy as np
from numpy.random import randint
import matplotlib.pyplot as plt
%matplotlib inline
np.random.seed(10) # added for reproductibility
rng = pd.date_range('10/9/2018 00:00', periods=10, freq='1H')
df = pd.DataFrame({'Random_Number':randint(1, 100, 10)}, index=rng)
df.plot()
plt.axhline(df.quantile(0.3)[0], linestyle="--", color="g")
plt.axhline(df.quantile(0.90)[0], linestyle="--", color="r")
plt.show()
Outputs: (minus the highlighted part of the chart)
I'm trying to figure out if it's possible to calculate the time it takes in the data to get from the green line to the red line (highlighted yellow in the chart).
I can manually enter the data:
minStart = df.loc[df['Random_Number'] < 18].index[0]
maxStart = df.loc[df['Random_Number'] > 90].index[0]
hours = maxStart - minStart
hours
Which will output:
Timedelta('0 days 05:00:00')
But if I attempt to use:
minStart = df.loc[df['Random_Number'] < df.quantile(0.3)].index[0]
maxStart = df.loc[df['Random_Number'] > df.quantile(0.90)].index[0]
hours = maxStart - minStart
hours
This will throw a ValueError: Can only compare identically-labeled Series objects
Is there a better method to this madness? Ideally it would be nice to have some sort of algorithm that can calculate the delta time it takes to go from the 30th to the 90th percentile, and then back from the 90th to the 30th, but I may have to put some thought towards how that could be accomplished.
minStart = df.loc[df['Random_Number'] < df.quantile(0.3)[0]].index[0]
maxStart = df.loc[df['Random_Number'] > df.quantile(0.90)[0]].index[0]
hours = maxStart - minStart
hours
df.quantile doesn't return a plain number; it returns a Series indexed by column, so you need to take its first entry (the [0]) before comparing.
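To automate the transition-time calculation the question describes, a minimal sketch (not from the original answer), reusing the same df:
low = df['Random_Number'].quantile(0.3)    # Series.quantile returns a scalar
high = df['Random_Number'].quantile(0.90)
below = df.index[df['Random_Number'] < low]
above = df.index[df['Random_Number'] > high]
start = below[0]                # first time under the 30th percentile
end = above[above > start][0]   # first time over the 90th after that
print(end - start)              # Timedelta between the two crossings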

How to use a < or > of one column in a dataframe to then use another column's data from that same date on? [duplicate]

This question already has answers here:
How do I select rows from a DataFrame based on column values?
(16 answers)
Closed 4 years ago.
I am attempting to use an oscillator (relative strength index) to know when to buy and sell a stock. I created a dataframe for RSI and closing price. I am able to plot both, but I also want to add to my plot when the RSI hits a buy or sell signal. To do this I need to compare against my RSI column: a drop below 25 triggers my buy signal, and a rise above 85 triggers my sell signal. My issue is that I cannot figure out how to pull my closing price column from the date when my RSI column drops below 25 until the date when it rises above 85. All I get is NaN in my new dataframe column.
#rsi
import pandas as pd
import warnings
import pandas_datareader.data as web
import datetime
import matplotlib.pyplot as plt
warnings.filterwarnings('ignore')
# Window length for moving average
window_length = 14
# Dates
start = datetime.datetime(2016, 1, 5)
end = datetime.datetime(2016, 12, 31)
# Get data
data = web.DataReader('FB', 'morningstar', start, end)
df = pd.DataFrame(data)
# Get just the close
close = data['Close']
# Get the difference in price from previous step
delta = close.diff()
# Get rid of the first row, which is NaN since it did not have a previous
# row to calculate the differences
delta = delta[1:]
# Make the positive gains (up) and negative gains (down) Series
up, down = delta.copy(), delta.copy()
up[up < 0] = 0
down[down > 0] = 0
# Calculate the EWMA (the original pandas.stats.moments.ewma API was removed long ago;
# .ewm(com=...) is the modern equivalent)
roll_up1 = up.ewm(com=window_length).mean()
roll_down1 = down.abs().ewm(com=window_length).mean()
# Calculate the RSI based on EWMA
RS1 = roll_up1 / roll_down1
RSI1 = 100.0 - (100.0 / (1.0 + RS1))
# Calculate the SMA (pandas.rolling_mean was removed; .rolling().mean() is the modern equivalent)
roll_up2 = up.rolling(window_length).mean()
roll_down2 = down.abs().rolling(window_length).mean()
# Calculate the RSI based on SMA
RS2 = roll_up2 / roll_down2
RSI2 = 100.0 - (100.0 / (1.0 + RS2))
df['RSI2']=RSI2
df=df.dropna(axis=0)
df['RSI2']=df['RSI2'].astype(float)
df['BUY']=df['Close'][df['RSI2'] < 25]
print (df['BUY'])
# Compare graphically
plt.figure()
df['BUY'].plot(title='FB',figsize = (20, 5))
plt.show()
RSI1.plot(title='Relative Strength Index',figsize = (20, 5))
RSI2.plot(figsize = (20, 5))
plt.legend(['RSI via EWMA', 'RSI via SMA'])
plt.show()
If I got your question correctly, what you are looking for is already in pandas (pd.query(), just like in SQL), e.g.:
import numpy as np
df['rsi_query'] = np.zeros(df.shape[0])
myquery = df.query('RSI2 > 25 & RSI2 < 85').index
df.loc[myquery, 'rsi_query'] = 1  # replace 1 with whatever value you want
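To get the Close prices from the buy trigger (RSI2 < 25) until the sell trigger (RSI2 > 85), which is what the question asks for, here is a hedged sketch (column names taken from the question's df, not from the original answer):
signal = pd.Series(np.nan, index=df.index)
signal[df['RSI2'] < 25] = 1.0   # buy trigger
signal[df['RSI2'] > 85] = 0.0   # sell trigger
holding = signal.ffill().fillna(0).astype(bool)  # True from each buy until the next sell
df['BUY'] = df['Close'].where(holding)           # Close while holding, NaN otherwise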

Efficient Python Pandas Stock Beta Calculation on Many Dataframes

I have many (4000+) CSVs of stock data (Date, Open, High, Low, Close) which I import into individual pandas dataframes to perform analysis. I am new to Python and want to calculate a rolling 12-month beta for each stock. I found a post on calculating rolling beta (Python pandas calculate rolling stock beta using rolling apply to groupby object in vectorized fashion), however when used in my code below it takes over 2.5 hours! Considering I can run the exact same calculations in SQL tables in under 3 minutes, this is too slow.
How can I improve the performance of my code below to match that of SQL? I understand pandas/Python has that capability. My current method loops over each row, which I know slows performance, but I am unaware of any aggregate way to perform a rolling-window beta calculation on a dataframe.
Note: the first 2 steps of loading the CSVs into individual dataframes and calculating daily returns only take ~20 seconds. All my CSV dataframes are stored in the dictionary called 'FilesLoaded' with names such as 'XAO'.
Your help would be much appreciated!
Thank you :)
import pandas as pd, numpy as np
import datetime
import ntpath
pd.set_option('precision',10) #Set the Decimal Point precision to DISPLAY
start_time=datetime.datetime.now()
MarketIndex = 'XAO'
period = 250
MinBetaPeriod = period
# ***********************************************************************************************
# CALC RETURNS
# ***********************************************************************************************
for File in FilesLoaded:
    FilesLoaded[File]['Return'] = FilesLoaded[File]['Close'].pct_change()
# ***********************************************************************************************
# CALC BETA
# ***********************************************************************************************
def calc_beta(df):
    np_array = df.values
    m = np_array[:, 0]  # market returns are column zero from numpy array
    s = np_array[:, 1]  # stock returns are column one from numpy array
    covariance = np.cov(s, m)  # calculate covariance between stock and market
    beta = covariance[0, 1] / covariance[1, 1]
    return beta
# Build custom "rolling apply" function
def rolling_apply(df, period, func, min_periods=None):
    if min_periods is None:
        min_periods = period
    result = pd.Series(np.nan, index=df.index)
    for i in range(1, len(df)+1):
        sub_df = df.iloc[max(i-period, 0):i, :]
        if len(sub_df) >= min_periods:
            idx = sub_df.index[-1]
            result[idx] = func(sub_df)
    return result
#Create empty BETA dataframe with same index as RETURNS dataframe
df_join = pd.DataFrame(index=FilesLoaded[MarketIndex].index)
df_join['market'] = FilesLoaded[MarketIndex]['Return']
df_join['stock'] = np.nan
for File in FilesLoaded:
    df_join['stock'].update(FilesLoaded[File]['Return'])
    df_join = df_join.replace(np.inf, np.nan)   # get rid of infinite values "inf" (SQL won't take "Inf")
    df_join = df_join.replace(-np.inf, np.nan)  # get rid of infinite values "-inf"
    df_join = df_join.fillna(0)                 # get rid of the NaNs in the return data
    FilesLoaded[File]['Beta'] = rolling_apply(df_join[['market', 'stock']], period, calc_beta, min_periods=MinBetaPeriod)
# ***********************************************************************************************
# CLEAN-UP
# ***********************************************************************************************
print('Run-time: {0}'.format(datetime.datetime.now() - start_time))
Generate Random Stock Data
40 Years of Monthly Data for 4,000 Stocks
dates = pd.date_range('1995-12-31', periods=480, freq='M', name='Date')
stoks = pd.Index(['s{:04d}'.format(i) for i in range(4000)])
df = pd.DataFrame(np.random.rand(480, 4000), dates, stoks)
df.iloc[:5, :5]
Roll Function
Returns groupby object ready to apply custom functions
See Source
def roll(df, w):
    # stack df.values w times, shifted once at each stack
    roll_array = np.dstack([df.values[i:i+w, :] for i in range(len(df.index) - w + 1)]).T
    # roll_array is now a 3-D array and can be read into
    # a pandas Panel object (note: pd.Panel has since been removed from pandas;
    # the generator-based roll further down works on modern versions)
    panel = pd.Panel(roll_array,
                     items=df.index[w-1:],
                     major_axis=df.columns,
                     minor_axis=pd.Index(range(w), name='roll'))
    # convert to dataframe and pivot + groupby;
    # the result is ready for any action normally performed on a groupby object
    return panel.to_frame().unstack().T.groupby(level=0)
Beta Function
Use the closed-form OLS solution b = (XᵀX)⁻¹Xᵀy, computed below with a pseudo-inverse for numerical stability
Assume column 0 is the market
See Source
def beta(df):
    # first column is the market
    X = df.values[:, [0]]
    # prepend a column of ones for the intercept
    X = np.concatenate([np.ones_like(X), X], axis=1)
    # matrix algebra: closed-form OLS, keep only the slope row
    b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values[:, 1:])
    return pd.Series(b[1], df.columns[1:], name='Beta')
Demonstration
rdf = roll(df, 12)
betas = rdf.apply(beta)
Validation
Compare calculations with OP
def calc_beta(df):
    np_array = df.values
    m = np_array[:, 0]  # market returns are column zero from numpy array
    s = np_array[:, 1]  # stock returns are column one from numpy array
    covariance = np.cov(s, m)  # calculate covariance between stock and market
    beta = covariance[0, 1] / covariance[1, 1]
    return beta
print(calc_beta(df.iloc[:12, :2]))
-0.311757542437
print(beta(df.iloc[:12, :2]))
s0001 -0.311758
Name: Beta, dtype: float64
Note the first cell is the same value as the validated calculation above:
betas = rdf.apply(beta)
betas.iloc[:5, :5]
Response to comment
Full working example with simulated multiple dataframes
num_sec_dfs = 4000
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(480, 4), dates, cols) for i in range(num_sec_dfs)}
market = pd.Series(np.random.rand(480), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = roll(df.pct_change().dropna(), 12).apply(beta)
for c, col in betas.iteritems():
    dfs[c]['Beta'] = col
dfs['s0001'].head(20)
Using a generator to improve memory efficiency
Simulated data
m, n = 480, 10000
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
stocks = pd.Index(['s{:04d}'.format(i) for i in range(n)])
df = pd.DataFrame(np.random.rand(m, n), dates, stocks)
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([df, market], axis=1)
Beta Calculation
def beta(df, market=None):
    # If the market values are not passed,
    # I'll assume they are located in a column
    # named 'Market'. If not, this will fail.
    if market is None:
        market = df['Market']
        df = df.drop('Market', axis=1)
    X = market.values.reshape(-1, 1)
    X = np.concatenate([np.ones_like(X), X], axis=1)
    b = np.linalg.pinv(X.T.dot(X)).dot(X.T).dot(df.values)
    return pd.Series(b[1], df.columns, name=df.index[-1])
roll function
This returns a generator and will be far more memory efficient
def roll(df, w):
    # yield one window at a time instead of materializing them all
    for i in range(df.shape[0] - w + 1):
        yield pd.DataFrame(df.values[i:i+w, :], df.index[i:i+w], df.columns)
Putting it all together
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
Validation
OP beta calc
def calc_beta(df):
    np_array = df.values
    m = np_array[:, 0]  # market returns are column zero from numpy array
    s = np_array[:, 1]  # stock returns are column one from numpy array
    covariance = np.cov(s, m)  # calculate covariance between stock and market
    beta = covariance[0, 1] / covariance[1, 1]
    return beta
Experiment setup
m, n = 12, 2
dates = pd.date_range('1995-12-31', periods=m, freq='M', name='Date')
cols = ['Open', 'High', 'Low', 'Close']
dfs = {'s{:04d}'.format(i): pd.DataFrame(np.random.rand(m, 4), dates, cols) for i in range(n)}
market = pd.Series(np.random.rand(m), dates, name='Market')
df = pd.concat([market] + [dfs[k].Close.rename(k) for k in dfs.keys()], axis=1).sort_index(1)
betas = pd.concat([beta(sdf) for sdf in roll(df.pct_change().dropna(), 12)], axis=1).T
for c, col in betas.iteritems():
    dfs[c]['Beta'] = col
dfs['s0000'].head(20)
calc_beta(df[['Market', 's0000']])
0.0020118230147777435
NOTE:
The calculations are the same
While efficient subdivision of the input data set into rolling windows is important to the optimization of the overall calculations, the performance of the beta calculation itself can also be significantly improved.
The following optimizes only the subdivision of the data set into rolling windows:
import numpy
from pandas import DataFrame

def numpy_betas(x_name, window, returns_data, intercept=True):
    if intercept:
        ones = numpy.ones(window)
    def lstsq_beta(window_data):
        x_data = numpy.vstack([window_data[x_name], ones]).T if intercept else window_data[[x_name]]
        beta_arr, residuals, rank, s = numpy.linalg.lstsq(x_data, window_data)
        return beta_arr[0]
    indices = [int(x) for x in numpy.arange(0, returns_data.shape[0] - window + 1, 1)]
    return DataFrame(
        data=[lstsq_beta(returns_data.iloc[i:(i + window)]) for i in indices],
        columns=list(returns_data.columns),
        index=returns_data.index[window - 1::1],
    )
The following also optimizes the beta calculation itself:
def custom_betas(x_name, window, returns_data):
    window_inv = 1.0 / window
    x_sum = returns_data[x_name].rolling(window, min_periods=window).sum()
    y_sum = returns_data.rolling(window, min_periods=window).sum()
    xy_sum = returns_data.mul(returns_data[x_name], axis=0).rolling(window, min_periods=window).sum()
    xx_sum = numpy.square(returns_data[x_name]).rolling(window, min_periods=window).sum()
    # rolling covariance and variance via the sum identities
    xy_cov = xy_sum - window_inv * y_sum.mul(x_sum, axis=0)
    x_var = xx_sum - window_inv * numpy.square(x_sum)
    betas = xy_cov.divide(x_var, axis=0)[window - 1:]
    betas.columns.name = None
    return betas
Comparing the performance of the two calculations, you can see that as the window used in the beta calculation increases, the second method dramatically outperforms the first. Comparing the performance to that of #piRSquared's implementation, the custom method takes roughly 350 ms to evaluate, compared to over 2 seconds.
Further optimizing #piRSquared's implementation for both speed and memory; the code is also simplified for clarity.
from numpy import nan, ndarray, ones_like, vstack, random
from numpy.lib.stride_tricks import as_strided
from numpy.linalg import pinv
from pandas import DataFrame, date_range
def calc_beta(s: ndarray, m: ndarray):
    # closed-form OLS: slope of the stock windows regressed on the market window
    x = vstack((ones_like(m), m))
    b = pinv(x.dot(x.T)).dot(x).dot(s)
    return b[1]

def rolling_calc_beta(s_df: DataFrame, m_df: DataFrame, period: int):
    result = ndarray(shape=s_df.shape, dtype=float)
    l, w = s_df.shape
    ls, ws = s_df.values.strides
    lm = m_df.values.strides[0]  # the market frame has its own strides
    result[0:period - 1, :] = nan
    # strided views expose one rolling window per row without copying
    s_arr = as_strided(s_df.values, shape=(l - period + 1, period, w), strides=(ls, ls, ws))
    m_arr = as_strided(m_df.values, shape=(l - period + 1, period), strides=(lm, lm))
    # window j = row - period + 1 ends at result row 'row'
    for row in range(period - 1, l):
        result[row, :] = calc_beta(s_arr[row - period + 1, :], m_arr[row - period + 1])
    return DataFrame(data=result, index=s_df.index, columns=s_df.columns)
if __name__ == '__main__':
    num_sec_dfs, num_periods = 4000, 480
    dates = date_range('1995-12-31', periods=num_periods, freq='M', name='Date')
    stocks = DataFrame(data=random.rand(num_periods, num_sec_dfs), index=dates,
                       columns=['s{:04d}'.format(i) for i in range(num_sec_dfs)]).pct_change()
    market = DataFrame(data=random.rand(num_periods), index=dates,
                       columns=['Market']).pct_change()
    betas = rolling_calc_beta(stocks, market, 12)
%timeit betas = rolling_calc_beta(stocks, market, 12)
335 ms ± 2.69 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
HERE'S THE SIMPLEST AND FASTEST SOLUTION
The accepted answer was too slow for what I needed, and I didn't understand the math behind the solutions asserted as faster. They also gave different answers, though in fairness I probably just messed it up.
I don't think you need to make a custom rolling function to calculate beta with pandas 1.1.4 (or even since at least 0.19). The code below assumes the data is in the same format as in the problems above: a pandas dataframe with a date index, percent returns of some periodicity for the stocks, and market values located in a column named 'Market'.
If you don't have this format, I recommend joining the stock returns to the market returns to ensure the same index with:
# Use .pct_change() only if joining Close data
beta_data = stock_data.join(market_data, how='inner').pct_change().dropna()
After that, it's just covariance divided by variance.
ticker_covariance = beta_data.rolling(window).cov()
# Limit results to the stock (i.e. column name for the stock) vs. 'Market' covariance
ticker_covariance = ticker_covariance.loc[pd.IndexSlice[:, stock], 'Market'].dropna()
benchmark_variance = beta_data['Market'].rolling(window).var().dropna()
beta = ticker_covariance / benchmark_variance
NOTES: If you have a multi-index, you'll have to drop the non-date levels to use the rolling().apply() solution. I only tested this for one stock and one market. If you have multiple stocks, a modification to the ticker_covariance equation after .loc is probably needed. Last, if you want to calculate beta values for the periods before the full window (e.g. stock_data begins 1 year ago, but you use 3 years of data), you can modify the above to an expanding (instead of rolling) window with the same calculation and then .combine_first() the two, as in the sketch below.
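A minimal single-stock sketch of that combination, assuming the beta_data frame from above with the stock's returns in a column named 'stock' (an assumed name):
rolling_beta = (beta_data['stock'].rolling(window).cov(beta_data['Market'])
                / beta_data['Market'].rolling(window).var())
expanding_beta = (beta_data['stock'].expanding(min_periods=2).cov(beta_data['Market'])
                  / beta_data['Market'].expanding(min_periods=2).var())
beta = rolling_beta.combine_first(expanding_beta)  # rolling where available, expanding before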
I created a simple Python package, finance-calculator, based on numpy and pandas to calculate financial ratios including beta. I am using the simple formula (as per Investopedia):
beta = covariance(returns, benchmark returns) / variance(benchmark returns)
Covariance and variance are calculated directly in pandas, which makes it fast. Using the API of the package is also simple:
import finance_calculator as fc
beta = fc.get_beta(scheme_data, benchmark_data, tail=False)
which will give you a dataframe of date and beta, or the last beta value if tail is true.
But these approaches become a bottleneck when you require beta calculations across m dates for n stocks, resulting in m × n calculations. Some relief can be had by running each date or stock on multiple cores, but then you will end up needing huge hardware.
The major time cost of the available solutions is computing the variance and covariance, and NaNs should be avoided in the index and stock data for a correct calculation (as of pandas==0.23.0). Re-running the calculations is therefore wasteful unless the results are cached. The numpy variance and covariance versions also miscalculate beta if NaNs are not dropped. A Cython implementation is a must for huge data sets.
