Python code for rolling window regression by groups - python

I would like to perform a rolling window regression on panel data over a period of 12 months and get the monthly intercept fund-wise as output. My data has funds (ID) with monthly returns.
Please help me with the Python code for this.

In statsmodels there is RollingOLS. You can use it together with groupby.
Sample code:
import pandas as pd
import numpy as np
from statsmodels.regression.rolling import RollingOLS
# Read data & adding "intercept" column
df = pd.read_csv('sample_rolling_regression_OLS.csv')
df['intercept'] = 1
# Groupby then apply RollingOLS
df.groupby('name')[['y', 'intercept', 'x']].apply(lambda g: RollingOLS(g['y'], g[['intercept', 'x']], window=6).fit().params)
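To get what the original question asks for (a 12-month window and the monthly intercept per fund), a minimal sketch along the same lines; fund_id, month, ret and mkt are hypothetical column names and fund_returns.csv is a placeholder file:
import pandas as pd
from statsmodels.regression.rolling import RollingOLS
# hypothetical layout: one row per fund per month
panel = pd.read_csv('fund_returns.csv')   # columns: fund_id, month, ret, mkt
panel['intercept'] = 1
def rolling_intercept(g):
    # 12-month rolling OLS of the fund's return on the regressor; keep only the intercept path
    fit = RollingOLS(g['ret'], g[['intercept', 'mkt']], window=12).fit()
    return fit.params['intercept']
monthly_alpha = panel.sort_values('month').groupby('fund_id').apply(rolling_intercept)
print(monthly_alpha)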
Sample data (or you can download it at https://www.dropbox.com/s/zhklsg5cmfksufm/sample_rolling_regression_OLS.csv?dl=0):
name y x intercept
0 a 13.7 7.8 1
1 a -14.7 -9.7 1
2 a -3.4 -0.6 1
3 a 7.4 3.3 1
4 a -5.3 -1.9 1
5 a -8.3 -2.3 1
6 a 8.9 3.7 1
7 a 10.0 7.9 1
8 a 1.8 -0.4 1
9 a 6.7 3.1 1
10 a 17.4 9.9 1
11 a 8.9 7.7 1
12 a -3.1 -1.5 1
13 a -12.2 -7.9 1
14 a 7.6 4.9 1
15 a 4.2 2.3 1
16 a -15.3 -5.6 1
17 a 9.9 6.7 1
18 a 11.0 5.2 1
19 a 5.7 5.1 1
20 a -0.3 -0.6 1
21 a -15.0 -8.7 1
22 a -10.6 -5.7 1
23 a -16.0 -9.1 1
24 b 16.7 8.5 1
25 b 9.2 8.2 1
26 b 4.7 3.4 1
27 b -16.7 -8.7 1
28 b -4.8 -1.5 1
29 b -2.6 -2.2 1
30 b 16.3 9.5 1
31 b 15.8 9.8 1
32 b -10.8 -7.3 1
33 b -5.4 -3.4 1
34 b -6.0 -1.8 1
35 b 1.9 -0.6 1
36 b 6.3 6.1 1
37 b -14.7 -8.0 1
38 b -16.1 -9.7 1
39 b -10.5 -8.0 1
40 b 4.9 1.0 1
41 b 11.1 4.5 1
42 b -14.8 -8.5 1
43 b -0.2 -2.8 1
44 b 6.3 1.7 1
45 b -14.1 -8.7 1
46 b 13.8 8.9 1
47 b -6.2 -3.0 1

Related

correlation matrix with group-by and sort

I am trying to calculate a correlation matrix with groupby and sort. I have 100 companies from 11 industries. I would like to group by industry and sort by total assets (atq), and then calculate the correlation of data.pr_multi in this order. However, when I sort and then group by, the result reverts to alphabetical order and the correlation is calculated in that order.
My data:
index  datafqtr  tic             pr_multi      atq  industry
0        2018Q1    A                  NaN   8698.0         4
1        2018Q2    A  -0.0856845728151735   8784.0         4
2        2018Q3    A   0.0035103320774146   8349.0         4
3        2018Q4    A  -0.0157732687260246   8541.0         4
4        2018Q1  AAL                  NaN  53280.0         5
5        2018Q2  AAL  -0.2694380292532717  52622.0         5
The code I use:
data1=data18.sort_values(['atq'],ascending=False).groupby('industry').head()
df = data1.pivot_table('pr_multi', ['datafqtr'], 'tic')
# calculate correlation matrix using inbuilt pandas function
correlation_matrix = df.corr()
correlation_matrix.head()
IIUC, you want to calculate the correlation between the order based on the groupby and the pr_multi column. Use:
data1=data18.groupby('industry')['atq'].apply(lambda x: x.sort_values(ascending=False))
np.corrcoef(data1.reset_index()['level_1'], data18['pr_multi'].astype(float).fillna(0))
Output:
array([[ 1. , -0.44754795],
[-0.44754795, 1. ]])
import pandas as pd
import numpy as np
df = pd.read_csv('data.csv')
df.groupby('name')[['col1','col2']].corr() # you can put as many desired columns here
Output:
              col1      col2
name
a    col1  1.000000  0.974467
a    col2  0.974467  1.000000
b    col1  1.000000  0.975120
b    col2  0.975120  1.000000
The data is like this:
name col1 col2
0 a 13.7 7.8
1 a -14.7 -9.7
2 a -3.4 -0.6
3 a 7.4 3.3
4 a -5.3 -1.9
5 a -8.3 -2.3
6 a 8.9 3.7
7 a 10.0 7.9
8 a 1.8 -0.4
9 a 6.7 3.1
10 a 17.4 9.9
11 a 8.9 7.7
12 a -3.1 -1.5
13 a -12.2 -7.9
14 a 7.6 4.9
15 a 4.2 2.3
16 a -15.3 -5.6
17 a 9.9 6.7
18 a 11.0 5.2
19 a 5.7 5.1
20 a -0.3 -0.6
21 a -15.0 -8.7
22 a -10.6 -5.7
23 a -16.0 -9.1
24 b 16.7 8.5
25 b 9.2 8.2
26 b 4.7 3.4
27 b -16.7 -8.7
28 b -4.8 -1.5
29 b -2.6 -2.2
30 b 16.3 9.5
31 b 15.8 9.8
32 b -10.8 -7.3
33 b -5.4 -3.4
34 b -6.0 -1.8
35 b 1.9 -0.6
36 b 6.3 6.1
37 b -14.7 -8.0
38 b -16.1 -9.7
39 b -10.5 -8.0
40 b 4.9 1.0
41 b 11.1 4.5
42 b -14.8 -8.5
43 b -0.2 -2.8
44 b 6.3 1.7
45 b -14.1 -8.7
46 b 13.8 8.9
47 b -6.2 -3.0
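If you only need one correlation number per group rather than the full 2x2 matrix, a minimal sketch with the same assumed col1/col2 names:
# one correlation value per group instead of a matrix per group
per_group = df.groupby('name').apply(lambda g: g['col1'].corr(g['col2']))
print(per_group)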

Merging dataframes of different lengths in Python/pandas

I have 2 dataframes:
df1
aa gg pm
1 3.3 0.5
1 0.0 4.7
1 9.3 0.2
2 0.3 0.6
2 14.0 91.0
3 13.0 31.0
4 13.1 64.0
5 1.3 0.5
6 3.3 0.5
7 11.1 3.0
7 11.3 24.0
8 3.2 0.0
8 5.3 0.3
8 3.3 0.3
and df2:
aa gg st
1 3.3 in
2 0.3 in
5 1.3 in
7 11.1 in
8 5.3 in
I would like to merge these two dataframes on cols aa and gg to get results like:
aa gg pm st
1 3.3 0.5 in
1 0.0 4.7
1 9.3 0.2
2 0.3 0.6 in
2 14.0 91.0
3 13.0 31.0
4 13.1 64.0
5 1.3 0.5 in
6 3.3 0.5
7 11.1 3.0 in
7 11.3 24.0
8 3.2 0.0
8 5.3 0.3 in
8 3.3 0.3
I want to map the col st values based on cols aa and gg.
Please let me know how to do this.
You can multiply the float columns by 1000 or 10000, convert them to integers, and then use these new columns for the join (matching float keys directly is unreliable because of floating-point precision):
df1['gg_int'] = df1['gg'].mul(1000).astype(int)
df2['gg_int'] = df2['gg'].mul(1000).astype(int)
df = df1.merge(df2.drop('gg', axis=1), on=['aa','gg_int'], how='left')
df = df.drop('gg_int', axis=1)
print (df)
aa gg pm st
0 1 3.3 0.5 in
1 1 0.0 4.7 NaN
2 1 9.3 0.2 NaN
3 2 0.3 0.6 in
4 2 14.0 91.0 NaN
5 3 13.0 31.0 NaN
6 4 13.1 64.0 NaN
7 5 1.3 0.5 in
8 6 3.3 0.5 NaN
9 7 11.1 3.0 in
10 7 11.3 24.0 NaN
11 8 3.2 0.0 NaN
12 8 5.3 0.3 in
13 8 3.3 0.3 NaN
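An equivalent sketch that builds the integer key with assign, so the helper column never has to be added to and dropped from df1/df2 themselves; the extra round() is only a guard against float truncation:
# same join, but the temporary key lives only in the merged copies
left = df1.assign(gg_int=df1['gg'].mul(1000).round().astype(int))
right = df2.assign(gg_int=df2['gg'].mul(1000).round().astype(int)).drop(columns='gg')
out = left.merge(right, on=['aa', 'gg_int'], how='left').drop(columns='gg_int')
print(out)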

Numpy Separating CSV into columns

I'm trying to use a CSV imported from bballreference.com, but as you can see, the separated values all end up lumped together rather than split into separate columns. In NumPy/Pandas, what would be the easiest way to fix this? I've googled to no avail.
(screenshot of the CSV loaded in Jupyter)
I don't know how to post a CSV file in a clean way, but here it is:
",,,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Totals,Shooting,Shooting,Shooting,Per Game,Per Game,Per Game,Per Game,Per Game,Per Game"
"Rk,Player,Age,G,GS,MP,FG,FGA,3P,3PA,FT,FTA,ORB,DRB,TRB,AST,STL,BLK,TOV,PF,PTS,FG%,3P%,FT%,MP,PTS,TRB,AST,STL,BLK"
"1,Kevin Durant\duranke01,29,5,5,182,54,107,9,28,22,27,3,34,37,24,7,6,10,7,139,.505,.321,.815,36.5,27.8,7.4,4.8,1.4,1.2"
"2,Klay Thompson\thompkl01,27,5,5,183,38,99,12,43,11,11,3,29,32,9,1,2,6,11,99,.384,.279,1.000,36.7,19.8,6.4,1.8,0.2,0.4"
"3,Stephen Curry\curryst01,29,4,3,125,32,67,15,34,19,19,2,19,21,14,8,2,15,6,98,.478,.441,1.000,31.2,24.5,5.3,3.5,2.0,0.5"
"4,Draymond Green\greendr01,27,5,5,186,27,55,8,20,12,15,12,47,59,50,12,8,18,16,74,.491,.400,.800,37.1,14.8,11.8,10.0,2.4,1.6"
"5,Andre Iguodala\iguodan01,34,5,4,140,14,29,4,12,7,12,4,21,25,17,10,2,3,7,39,.483,.333,.583,27.9,7.8,5.0,3.4,2.0,0.4"
"6,Quinn Cook\cookqu01,24,4,0,58,12,27,0,10,6,8,1,8,9,4,1,0,2,4,30,.444,.000,.750,14.4,7.5,2.3,1.0,0.3,0.0"
"7,Kevon Looney\looneke01,21,5,0,113,12,17,0,0,4,8,10,19,29,5,4,1,2,17,28,.706,,.500,22.6,5.6,5.8,1.0,0.8,0.2"
"8,Shaun Livingston\livinsh01,32,5,0,79,11,27,0,0,4,4,0,6,6,12,0,1,3,9,26,.407,,1.000,15.9,5.2,1.2,2.4,0.0,0.2"
"9,David West\westda01,37,5,0,40,8,14,0,0,0,0,2,5,7,13,2,4,3,4,16,.571,,,7.9,3.2,1.4,2.6,0.4,0.8"
"10,Nick Young\youngni01,32,4,2,41,3,11,3,10,2,3,0,4,4,1,1,0,1,3,11,.273,.300,.667,10.2,2.8,1.0,0.3,0.3,0.0"
"11,JaVale McGee\mcgeeja01,30,3,1,19,3,8,0,1,0,0,4,2,6,0,0,1,0,2,6,.375,.000,,6.2,2.0,2.0,0.0,0.0,0.3"
"12,Zaza Pachulia\pachuza01,33,2,0,8,1,2,0,0,2,4,4,2,6,0,2,0,1,1,4,.500,,.500,4.2,2.0,3.0,0.0,1.0,0.0"
"13,Jordan Bell\belljo01,23,4,0,23,1,4,0,0,1,2,1,5,6,5,2,2,0,2,3,.250,,.500,5.8,0.8,1.5,1.3,0.5,0.5"
"14,Damian Jones\jonesda03,22,1,0,3,0,1,0,0,2,2,0,0,0,0,0,0,0,0,2,.000,,1.000,3.2,2.0,0.0,0.0,0.0,0.0"
",Team Totals,26.5,5,,1200,216,468,51,158,92,115,46,201,247,154,50,29,64,89,575,.462,.323,.800,240.0,115.0,49.4,30.8,10.0,5.8"
It seems that the first two rows of your CSV file are headers, but by default pd.read_csv assumes only the first row is the header.
Also, the beginning and trailing quotes make pd.read_csv treat the text in between as a single field/column.
You could try the following: remove the beginning and trailing quotes, then read the file with both header rows:
bbal = pd.read_csv('some_file.csv', header=[0, 1], delimiter=',')
Following is how you could use Python to remove the beginning and trailing quotes:
# open 'quotes.csv' in read mode with variable in_file as handle
# open 'no_quotes.csv' in write mode with variable out_file as handle
with open('quotes.csv') as in_file, open('no_quotes.csv', 'w') as out_file:
    # read in_file line by line; the variable line stores each line as a string
    for line in in_file:
        # strip the trailing newline, slice off the first and last remaining
        # character (the surrounding quotes), and write the result back out
        out_file.write(line.rstrip('\n')[1:-1] + '\n')

# read_csv on 'no_quotes.csv', using both header rows
bbal = pd.read_csv('no_quotes.csv', header=[0, 1], delimiter=',')
bbal.head()
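Optionally, once bbal is loaded with header=[0, 1] its columns form a two-level MultiIndex; a small sketch of collapsing them into flat names, assuming pandas has filled the blank top-level header cells with its usual 'Unnamed: ...' placeholders:
# join the two header levels, dropping the auto-generated 'Unnamed: ...' parts
bbal.columns = [' '.join(part for part in col if not part.startswith('Unnamed')).strip()
                for col in bbal.columns]
print(bbal.columns.tolist())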
Consider reading the CSV in as a text file and stripping the beginning/end quotes from each line, since those quotes tell the parser that everything between them is a single value. Then use the built-in StringIO to read the text string into a dataframe instead of saving it to disk first.
Additionally, skip the first row of repeated Totals and Per Game labels, and even the last row that aggregates, since you can do that with pandas.
from io import StringIO
import pandas as pd

with open('BasketballCSVQuotes.csv') as f:
    csvdata = f.read().replace('"', '')

df = pd.read_csv(StringIO(csvdata), skiprows=1, skipfooter=1, engine='python')
print(df)
Output
Rk Player Age G GS MP FG FGA 3P 3PA ... PTS FG% 3P% FT% MP.1 PTS.1 TRB.1 AST.1 STL.1 BLK.1
0 1.0 Kevin Durant\duranke01 29.0 5 5.0 182 54 107 9 28 ... 139 0.505 0.321 0.815 36.5 27.8 7.4 4.8 1.4 1.2
1 2.0 Klay Thompson\thompkl01 27.0 5 5.0 183 38 99 12 43 ... 99 0.384 0.279 1.000 36.7 19.8 6.4 1.8 0.2 0.4
2 3.0 Stephen Curry\curryst01 29.0 4 3.0 125 32 67 15 34 ... 98 0.478 0.441 1.000 31.2 24.5 5.3 3.5 2.0 0.5
3 4.0 Draymond Green\greendr01 27.0 5 5.0 186 27 55 8 20 ... 74 0.491 0.400 0.800 37.1 14.8 11.8 10.0 2.4 1.6
4 5.0 Andre Iguodala\iguodan01 34.0 5 4.0 140 14 29 4 12 ... 39 0.483 0.333 0.583 27.9 7.8 5.0 3.4 2.0 0.4
5 6.0 Quinn Cook\cookqu01 24.0 4 0.0 58 12 27 0 10 ... 30 0.444 0.000 0.750 14.4 7.5 2.3 1.0 0.3 0.0
6 7.0 Kevon Looney\looneke01 21.0 5 0.0 113 12 17 0 0 ... 28 0.706 NaN 0.500 22.6 5.6 5.8 1.0 0.8 0.2
7 8.0 Shaun Livingston\livinsh01 32.0 5 0.0 79 11 27 0 0 ... 26 0.407 NaN 1.000 15.9 5.2 1.2 2.4 0.0 0.2
8 9.0 David West\westda01 37.0 5 0.0 40 8 14 0 0 ... 16 0.571 NaN NaN 7.9 3.2 1.4 2.6 0.4 0.8
9 10.0 Nick Young\youngni01 32.0 4 2.0 41 3 11 3 10 ... 11 0.273 0.300 0.667 10.2 2.8 1.0 0.3 0.3 0.0
10 11.0 JaVale McGee\mcgeeja01 30.0 3 1.0 19 3 8 0 1 ... 6 0.375 0.000 NaN 6.2 2.0 2.0 0.0 0.0 0.3
11 12.0 Zaza Pachulia\pachuza01 33.0 2 0.0 8 1 2 0 0 ... 4 0.500 NaN 0.500 4.2 2.0 3.0 0.0 1.0 0.0
12 13.0 Jordan Bell\belljo01 23.0 4 0.0 23 1 4 0 0 ... 3 0.250 NaN 0.500 5.8 0.8 1.5 1.3 0.5 0.5
13 14.0 Damian Jones\jonesda03 22.0 1 0.0 3 0 1 0 0 ... 2 0.000 NaN 1.000 3.2 2.0 0.0 0.0 0.0 0.0
[14 rows x 30 columns]
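Since skipfooter=1 dropped the 'Team Totals' row, a small sketch of rebuilding that aggregate from the parsed frame, restricted to a few counting columns visible in the header above:
# recompute team totals for some counting stats; the dropped footer row held the same numbers
team_totals = df[['MP', 'FG', 'FGA', '3P', '3PA', 'PTS']].sum()
print(team_totals)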

Calculating the accumulated summation of clustered data in data frame in pandas

Given the following data frame:
index value
1 0.8
2 0.9
3 1.0
4 0.9
5 nan
6 nan
7 nan
8 0.4
9 0.9
10 nan
11 0.8
12 2.0
13 1.4
14 1.9
15 nan
16 nan
17 nan
18 8.4
19 9.9
20 10.0
…
in which the 'value' data is separated into a number of clusters by NaN values. Is there any way I can calculate quantities such as the accumulated sum, or the mean, of the clustered data? For example, I want to calculate the accumulated sum and generate the following data frame:
index value cumsum
1 0.8 0.8
2 0.9 1.7
3 1.0 2.7
4 0.9 3.6
5 nan 0
6 nan 0
7 nan 0
8 0.4 0.4
9 0.9 1.3
10 nan 0
11 0.8 0.8
12 2.0 2.8
13 1.4 4.2
14 1.9 6.1
15 nan 0
16 nan 0
17 nan 0
18 8.4 8.4
19 9.9 18.3
20 10.0 28.3
…
Any suggestions?
Also, as a simple extension of the problem: if two clusters of data are close enough, for example only 1 NaN separates them, we consider them as one cluster of data, so that we get the following data frame:
index value cumsum
1 0.8 0.8
2 0.9 1.7
3 1.0 2.7
4 0.9 3.6
5 nan 0
6 nan 0
7 nan 0
8 0.4 0.4
9 0.9 1.3
10 nan 1.3
11 0.8 2.1
12 2.0 4.1
13 1.4 5.5
14 1.9 7.4
15 nan 0
16 nan 0
17 nan 0
18 8.4 8.4
19 9.9 18.3
20 10.0 28.3
Thank you for the help!
You can do the first part using the compare-cumsum-groupby pattern. Your "simple extension" isn't quite so simple, but we can still pull it off, by finding out the parts of value that we want to treat as zero:
n = df["value"].isnull()
clusters = (n != n.shift()).cumsum()
df["cumsum"] = df["value"].groupby(clusters).cumsum().fillna(0)
to_zero = n & (df["value"].groupby(clusters).transform('size') == 1)
tmp_value = df["value"].where(~to_zero, 0)
n2 = tmp_value.isnull()
new_clusters = (n2 != n2.shift()).cumsum()
df["cumsum_skip1"] = tmp_value.groupby(new_clusters).cumsum().fillna(0)
produces
>>> df
index value cumsum cumsum_skip1
0 1 0.8 0.8 0.8
1 2 0.9 1.7 1.7
2 3 1.0 2.7 2.7
3 4 0.9 3.6 3.6
4 5 NaN 0.0 0.0
5 6 NaN 0.0 0.0
6 7 NaN 0.0 0.0
7 8 0.4 0.4 0.4
8 9 0.9 1.3 1.3
9 10 NaN 0.0 1.3
10 11 0.8 0.8 2.1
11 12 2.0 2.8 4.1
12 13 1.4 4.2 5.5
13 14 1.9 6.1 7.4
14 15 NaN 0.0 0.0
15 16 NaN 0.0 0.0
16 17 NaN 0.0 0.0
17 18 8.4 8.4 8.4
18 19 9.9 18.3 18.3
19 20 10.0 28.3 28.3

Updating Pandas DataFrame column conditionally using other columns

With a DataFrame like the one below, how do I set c1len equal to zero when c1pos equals zero? I would then like to do the same for c2len/c2pos. Is there an easy way to do it without creating a bunch of columns to arrive at the desired answer?
distance c1pos c1len c2pos c2len daysago
line_date
2013-06-22 7.00 9 0.0 9 6.4 27
2013-05-18 8.50 6 4.6 7 4.9 62
2012-12-31 8.32 5 4.6 5 2.1 200
2012-12-01 8.00 7 7.1 6 8.6 230
2012-11-03 7.00 7 0.0 7 2.7 258
2012-10-15 7.00 7 0.0 8 5.2 277
2012-09-22 8.32 10 10.1 8 4.1 300
2012-09-15 9.00 10 12.5 9 12.1 307
2012-08-18 7.00 8 0.0 8 9.2 335
2012-08-02 9.00 5 3.5 5 2.2 351
2012-07-14 12.00 3 4.5 3 3.5 370
2012-06-16 8.32 7 3.7 7 5.1 398
I don't think you have anything that actually satisfies those conditions, but this will work.
This creates a boolean mask for the rows where the column in question (e.g. c2pos) is 0, and then sets the column c2len to 0 for those rows that are True.
In [15]: df.loc[df.c2pos==0,'c2len'] = 0
In [16]: df.loc[df.c1pos==0,'c1len'] = 0
In [17]: df
Out[17]:
distance c1pos c1len c2pos c2len daysago
2013-06-22 7.00 9 0.0 9 6.4 27
2013-05-18 8.50 6 4.6 7 4.9 62
2012-12-31 8.32 5 4.6 5 2.1 200
2012-12-01 8.00 7 7.1 6 8.6 230
2012-11-03 7.00 7 0.0 7 2.7 258
2012-10-15 7.00 7 0.0 8 5.2 277
2012-09-22 8.32 10 10.1 8 4.1 300
2012-09-15 9.00 10 12.5 9 12.1 307
2012-08-18 7.00 8 0.0 8 9.2 335
2012-08-02 9.00 5 3.5 5 2.2 351
2012-07-14 12.00 3 4.5 3 3.5 370
2012-06-16 8.32 7 3.7 7 5.1 398
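An equivalent sketch using numpy.where, if you prefer rebuilding each column in one expression instead of assigning through .loc:
import numpy as np
# zero out each length column wherever its matching position column equals 0
df['c1len'] = np.where(df['c1pos'] == 0, 0, df['c1len'])
df['c2len'] = np.where(df['c2pos'] == 0, 0, df['c2len'])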
