I have a bunch of data points in a time series in a pandas dataframe. Each column is supposedly independent of the others. I want to create a Monte Carlo process to calculate expected values for each of the columns. My expectation is that the underlying data follows a Brownian-motion-like pattern, so I'd need to generate a normal distribution over the period-to-period differences.
I transform my data like this:
diffs = (data.diff() / data.shift(1))
This is what I have at the moment:
data = diffs.describe()
This gives the following output:
A B C
count 4986.000000 4963.000000 1861.000000
mean 0.000285 0.000109 0.000421
std 0.015759 0.015426 0.014676
...
I process it like this to generate more samples:
import numpy as np
desired_samples = 1000
random = np.random.default_rng().normal(loc=[data.loc[["mean"]].to_numpy()], scale=[data.loc[["std"]].to_numpy()], size=[len(data.columns), desired_samples])
However this gives me an error:
ValueError: shape mismatch: objects cannot be broadcast to a single shape. Mismatch is between arg 0 with shape (441, 1000) and arg 1 with shape (1, 1, 441).
What I'd want is just a matrix of random values whose columns have the same std and mean as the sample's columns, i.e. so that when I do random.describe() I'd get something like:
A B C
count 1000.0 1000.0 1000.0
mean 0.000285 0.000109 0.000421
std 0.015759 0.015426 0.014676
...
What'd be the correct way to generate those samples?
You could use apply() to create a data frame of random normal values using the mean and std of the corresponding columns.
Generate Test Data
import numpy as np
import pandas as pd

nv = 50
d = {'A': np.random.normal(1, 1, nv), 'B': np.random.normal(2, 2, nv), 'C': np.random.normal(3, 3, nv)}
df = pd.DataFrame(d)
print(df)
A B C
0 0.276252 -2.833479 5.746740
1 1.562030 1.497242 2.557416
2 0.883105 -0.861824 3.106192
3 0.352372 0.014653 4.006219
4 1.475524 3.151062 -1.392998
5 2.011649 -2.289844 4.371251
6 3.230964 3.578058 0.610422
7 0.366506 3.391327 0.812932
8 1.669673 -1.021665 4.262500
9 1.835547 4.292063 6.983015
10 1.768208 4.029970 3.971751
...
45 0.501706 0.926860 7.008008
46 1.759266 -0.215047 4.560403
47 1.899167 0.690204 -0.538415
48 1.460267 1.506934 1.306303
49 1.641662 1.066182 0.049233
df.describe()
A B C
count 50.000000 50.000000 50.000000
mean 0.962083 1.522234 2.992492
std 1.073733 1.848754 2.838976
Generate Random Values with the Same Approximate (Calculated) Mean and Std
mat = df.apply(lambda x: np.random.normal(x.mean(),x.std(),100))
print(mat)
A B C
0 0.234955 2.201961 1.910073
1 1.973203 3.528576 5.925673
2 -0.858201 2.234295 1.741338
3 2.245650 2.805498 0.135784
4 1.913691 2.134813 2.246989
.. ... ... ...
95 2.996207 2.248727 2.792658
96 0.663609 4.533541 1.518872
97 0.848259 -0.348086 2.271724
98 3.672370 1.706185 -0.862440
99 0.392051 0.832358 -0.354981
[100 rows x 3 columns]
mat.describe()
A B C
count 100.000000 100.000000 100.000000
mean 0.877725 1.332039 2.673327
std 1.148153 1.749699 2.447532
If you want the result as a NumPy array
mat.to_numpy()
array([[ 0.78881292, 3.09428714, -1.22757096],
[ 0.13044099, -1.02564025, 2.6566989 ],
[ 0.06090083, 1.50629474, 3.61487469],
[ 0.71418932, 1.88441111, 5.84979454],
[ 2.34287411, 2.58478867, -4.04433653],
[ 1.41846256, 0.36414635, 8.47482082],
[ 0.46765842, 1.37188986, 3.28011085],
[ 0.87433273, 3.45735286, 1.13351138],
[ 1.59029413, 4.0227165 , 3.58282534],
[ 2.23663894, 2.75007385, -0.36242541],
[ 1.80967311, 1.29206572, 1.73277577],
[ 1.20787923, 2.75529187, 4.64721489],
[ 2.33466341, 6.43830387, 4.31354348],
[ 0.87379125, 3.00658046, 4.94270155],
etc ...
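For reference, the broadcasting error in the question can also be avoided without apply(): np.random.Generator.normal broadcasts loc and scale against the trailing axis of size, so passing 1-D arrays of the per-column means and stds together with size=(n_samples, n_columns) produces one column per statistic. A minimal sketch, assuming data is the describe() output from the question:
import numpy as np
import pandas as pd

desired_samples = 1000
means = data.loc["mean"].to_numpy()  # shape (n_columns,)
stds = data.loc["std"].to_numpy()    # shape (n_columns,)

rng = np.random.default_rng()
# loc and scale broadcast across the last axis of the requested shape
samples = rng.normal(loc=means, scale=stds, size=(desired_samples, len(means)))
random_df = pd.DataFrame(samples, columns=data.columns)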
I currently have data containing a location name, latitude, longitude, and a number value associated with each location. The final goal is to get a dataframe that has, for each location, the sum of the values of all locations within specific distance ranges. A sample dataframe is below:
IDVALUE,Latitude,Longitude,NumberValue
ID1,44.968046,-94.420307,1
ID2,44.933208,-94.421310,10
ID3,33.755787,-116.359998,15
ID4,33.844843,-116.54911,207
ID5,44.92057,-93.44786,133
ID6,44.240309,-91.493619,52
ID7,44.968041,-94.419696,39
ID8,44.333304,-89.132027,694
ID9,33.755783,-116.360066,245
ID10,33.844847,-116.549069,188
ID11,44.920474,-93.447851,3856
ID12,44.240304,-91.493768,189
Firstly, I managed to get the distances between each of them using the haversine function. Using the code below I turned the lat/longs into radians and then created a matrix where the diagonal is filled with infinite values.
import math
import numpy as np
import pandas as pd
from sklearn.neighbors import DistanceMetric  # in newer scikit-learn: from sklearn.metrics import DistanceMetric

df_latlongs['LATITUDE'] = np.radians(df_latlongs['LATITUDE'])
df_latlongs['LONGITUDE'] = np.radians(df_latlongs['LONGITUDE'])
dist = DistanceMetric.get_metric('haversine')
latlong_df = pd.DataFrame(dist.pairwise(df_latlongs[['LATITUDE','LONGITUDE']].to_numpy())*6373,
                          columns=df_latlongs.IDVALUE.unique(), index=df_latlongs.IDVALUE.unique())
np.fill_diagonal(latlong_df.values, math.inf)
This distance matrix is in kilometres. What I'm struggling with next is filtering the distances for each location, getting the total NumberValue within a range, and linking that back to the original dataframe.
Below is the code I have used to filter the distance matrix to get all of the locations within 500 metres:
latlong_df_rows = latlong_df[latlong_df < 0.5]
latlong_df_rows = latlong_df_rows.dropna(how='all', axis=0)
latlong_df_rows = latlong_df_rows.dropna(how='all', axis=1)
My attempt was to then get, for each location, a list of the locations within this distance, using the code below:
within_range_df = latlong_df_rows.apply(lambda row: row[row < 0.05].index.tolist(), axis=1)
within_range_df = within_range_df.to_frame()
within_range_df = within_range_df.dropna(how='all', axis=0)
within_range_df = within_range_df.dropna(how='all', axis=1)
From here I was going to get the NumberValue from the original dataframe by looping through each list of locations to obtain another column with the number for that location, and then sum all of them. The final dataframe would ideally look like the following:
IDVALUE,<500m,500-1000m,>1000m
ID1,x1,y1,z1
ID2,x2,y2,z2
ID3,x3,y3,z3
ID4,x4,y4,z4
ID5,x5,y5,z5
ID6,x6,y6,z6
ID7,x7,y7,z7
ID8,x8,y8,z8
ID9,x9,y9,z9
ID10,x10,y10,z10
ID11,x11,y11,z11
ID12,x12,y12,z12
where x, y and z are the summed number values of the nearby locations in each distance band. I know this is probably really weird and overcomplicated, so if any tips to change the question or anything else are needed, I'll be happy to provide them. Cheers
I would define a helper function, making use of BallTree, e.g.
from sklearn.neighbors import BallTree
import pandas as pd
import numpy as np
df = pd.read_csv('input.csv')
We use query_radius() to get the indices, and a list comprehension to look up the values and sum them:
locations_radians = np.radians(df[["Latitude","Longitude"]].values)
tree = BallTree(locations_radians, leaf_size=12, metric='haversine')
def summed_numbervalue_for_radius(radius_in_m=100):
    # haversine works on a unit sphere, so convert the radius to radians
    earth_radius = 6371000  # metres
    radius = radius_in_m / earth_radius
    # for every location, the indices of all locations within the radius
    ids_within_radius = tree.query_radius(locations_radians, r=radius, count_only=False)
    values_as_array = np.array(df.NumberValue)
    # sum the NumberValue of each location's neighbours
    summed_values = [values_as_array[ix].sum() for ix in ids_within_radius]
    return np.array(summed_values)
With the helper function you can do, for instance:
df = df.assign( sum_100=summed_numbervalue_for_radius(100))
df = df.assign( sum_500=summed_numbervalue_for_radius(500))
df = df.assign( sum_1000=summed_numbervalue_for_radius(1000))
df = df.assign( sum_1000_to_5000=summed_numbervalue_for_radius(5000)-summed_numbervalue_for_radius(1000))
This will give you:
IDVALUE Latitude Longitude NumberValue sum_100 sum_500 sum_1000 \
0 ID1 44.968046 -94.420307 1 40 40 40
1 ID2 44.933208 -94.421310 10 10 10 10
2 ID3 33.755787 -116.359998 15 260 260 260
3 ID4 33.844843 -116.549110 207 395 395 395
4 ID5 44.920570 -93.447860 133 3989 3989 3989
5 ID6 44.240309 -91.493619 52 241 241 241
6 ID7 44.968041 -94.419696 39 40 40 40
7 ID8 44.333304 -89.132027 694 694 694 694
8 ID9 33.755783 -116.360066 245 260 260 260
9 ID10 33.844847 -116.549069 188 395 395 395
10 ID11 44.920474 -93.447851 3856 3989 3989 3989
11 ID12 44.240304 -91.493768 189 241 241 241
sum_1000_to_5000
0 10
1 40
2 0
3 0
4 0
5 0
6 10
7 0
8 0
9 0
10 0
11 0
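Note that query_radius() also returns the query point itself (distance 0), so each location's own NumberValue is included in its sums; the distance-matrix approach in the question excluded it by setting the diagonal to infinity. If you want neighbour-only sums, a small sketch of one way to do it (column name is just illustrative) is to subtract each location's own value afterwards:
df = df.assign(sum_500_excluding_self=df["sum_500"] - df["NumberValue"])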
I have a dataframe with 37 variables and 50,000 rows. There are both categorical and numerical features. I would like to apply normalization to some columns in the dataframe.
Here is a fake dataset:
diagnosis gender area age weight score compactness class
447 1 95.88 50 117.66 674.8 80 0
167 0 109.3 65 118.8 886.3 35.6 2
444 0 117.5 80 160.85 990 64.2 2
100 0 88.05 35 94.98 582.7 35.23 1
227 1 97.45 40 15.51 684.5 70 1
I want to normalize only area, weight, score and compactness, for example. How should I do it? BTW, I found a standard deviation method from here, but it is meant for the whole dataset, and the code is:
# identify outliers with standard deviation
from numpy.random import seed
from numpy.random import randn
from numpy import mean
from numpy import std
# calculate summary statistics
data_mean, data_std = mean(data), std(data)
# identify outliers
cut_off = data_std * 3
lower, upper = data_mean - cut_off, data_mean + cut_off
# identify outliers
outliers = [x for x in data if x < lower or x > upper]
print('Identified outliers: %d' % len(outliers))
# remove outliers
outliers_removed = [x for x in data if x >= lower and x <= upper]
print('Non-outlier observations: %d' % len(outliers_removed))
My question is: how can I normalize only some columns of a dataframe? Thanks for your help in advance!
I actually have a function for automatic normalization that I use (written in R). It is the following:
n <- function(x){
  d = dim(x)
  c = colMeans(x)
  # subtract the column means
  xm = sapply(1:d[2], function(i){
    x[,i] = x[,i] - c[i]
  })
  # xm is x with the means removed
  v = var(xm)  # covariance matrix of the centred data
  # divide each column by its standard deviation
  xn = sapply(1:d[2], function(i){
    xm[,i] = xm[,i] / sqrt(v[i,i])
  })
  xn
}
Then just apply this function to the desired columns.
tochange=c("age","weight","score")
df[,tochange]=n(df[,tochange])
> df
diagnosis gender area age weight score
[1,] 447 1 95.88 -0.2161373 0.3000106 -0.5282662
[2,] 167 0 109.30 0.5943775 0.3212536 0.7290858
[3,] 444 0 117.50 1.4048924 1.1048216 1.3455747
[4,] 100 0 88.05 -1.0266521 -0.1226130 -1.0757939
[5,] 227 1 97.45 -0.7564805 -1.6034728 -0.4706004
compactness class
[1,] 80.00 0
[2,] 35.60 2
[3,] 64.20 2
[4,] 35.23 1
[5,] 70.00 1
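Since the question is about a pandas DataFrame, here is a minimal pandas sketch of the same z-score normalization applied only to selected columns (column names taken from the example data; pandas' std() uses the same n-1 denominator as R's var()):
import pandas as pd

cols = ['area', 'weight', 'score', 'compactness']
# subtract the column mean and divide by the column standard deviation
df[cols] = (df[cols] - df[cols].mean()) / df[cols].std()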
Pandas has the very handy DataFrame.corr() function for pairwise correlation of columns.
That makes it possible to compare correlations between any number of columns at once. For instance:
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randint(0, 100, size=(100, 10)))
0 1 2 3 4 5 6 7 8 9
0 9 17 55 32 7 97 61 47 48 46
1 8 83 87 56 17 96 81 8 87 0
2 60 29 8 68 56 63 81 5 24 52
3 42 76 6 75 7 59 19 17 3 63
...
Now it is possible to test correlation between all 10 columns with df.corr(method='pearson'):
0 1 2 3 4 5 6 7 8 9
0 1.000000 0.082789 -0.094096 -0.086091 0.163091 0.013210 0.167204 -0.002514 0.097481 0.091020
1 0.082789 1.000000 0.027158 -0.080073 0.056364 -0.050978 -0.018428 -0.014099 -0.135125 -0.043797
2 -0.094096 0.027158 1.000000 -0.102975 0.101597 -0.036270 0.202929 0.085181 0.093723 -0.055824
3 -0.086091 -0.080073 -0.102975 1.000000 -0.149465 0.033130 -0.020929 0.183301 -0.003853 -0.062889
4 0.163091 0.056364 0.101597 -0.149465 1.000000 -0.007567 -0.017212 -0.086300 0.177247 -0.008612
5 0.013210 -0.050978 -0.036270 0.033130 -0.007567 1.000000 -0.080148 -0.080915 -0.004612 0.243713
6 0.167204 -0.018428 0.202929 -0.020929 -0.017212 -0.080148 1.000000 0.135348 0.070330 0.008170
7 -0.002514 -0.014099 0.085181 0.183301 -0.086300 -0.080915 0.135348 1.000000 -0.114413 -0.111642
8 0.097481 -0.135125 0.093723 -0.003853 0.177247 -0.004612 0.070330 -0.114413 1.000000 -0.153564
9 0.091020 -0.043797 -0.055824 -0.062889 -0.008612 0.243713 0.008170 -0.111642 -0.153564 1.000000
Is there a simple way to also get the corresponding p-values (ideally in pandas), as is returned e.g. by scipy's kendalltau()?
Why not use the "method" argument of pandas.DataFrame.corr()? It accepts:
pearson : standard correlation coefficient.
kendall : Kendall Tau correlation coefficient.
spearman : Spearman rank correlation.
callable: callable with input two 1d ndarrays and returning a float.
from scipy.stats import kendalltau, pearsonr, spearmanr
def kendall_pval(x, y):
    return kendalltau(x, y)[1]

def pearsonr_pval(x, y):
    return pearsonr(x, y)[1]

def spearmanr_pval(x, y):
    return spearmanr(x, y)[1]
and then
corr = df.corr(method=pearsonr_pval)
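One caveat: DataFrame.corr() always fills the diagonal with 1.0, even when the callable returns p-values, so the diagonal of corr above is not a real p-value. If that matters, you can reset it afterwards, e.g.:
import numpy as np

pvals = df.corr(method=pearsonr_pval)
# corr() forces the diagonal to 1.0; set it to 0 (or NaN) explicitly
np.fill_diagonal(pvals.values, 0.0)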
Probably just loop. It's basically what pandas does in the source code to generate the correlation matrix anyway:
import pandas as pd
import numpy as np
from scipy import stats
df_corr = pd.DataFrame() # Correlation matrix
df_p = pd.DataFrame() # Matrix of p-values
for x in df.columns:
    for y in df.columns:
        corr = stats.pearsonr(df[x], df[y])
        df_corr.loc[x, y] = corr[0]
        df_p.loc[x, y] = corr[1]
If you want to leverage the fact that this is symmetric, so you only need to calculate this for roughly half of them, then do:
mat = df.values.T
K = len(df.columns)
correl = np.empty((K,K), dtype=float)
p_vals = np.empty((K,K), dtype=float)
for i, ac in enumerate(mat):
    for j, bc in enumerate(mat):
        if i > j:
            continue
        corr = stats.pearsonr(ac, bc)
        # corr = stats.kendalltau(ac, bc)
        correl[i, j] = corr[0]
        correl[j, i] = corr[0]
        p_vals[i, j] = corr[1]
        p_vals[j, i] = corr[1]

df_p = pd.DataFrame(p_vals)
df_corr = pd.DataFrame(correl)
# pd.concat([df_corr, df_p], keys=['corr', 'p_val'])
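If the double loop is too slow for wide frames, the Pearson p-values can also be computed in a vectorized way directly from the correlation matrix, using the standard t-statistic for a correlation coefficient, t = r * sqrt((n - 2) / (1 - r^2)) with n - 2 degrees of freedom. A sketch, assuming numeric columns of equal length with no NaNs (the helper name is illustrative):
import numpy as np
import pandas as pd
from scipy import stats

def corr_pvalues(df):
    # two-sided p-values for the Pearson correlation matrix of df
    r = df.corr().to_numpy()
    n = len(df)
    # clip the denominator so the diagonal (r == 1) does not divide by zero
    t = r * np.sqrt((n - 2) / np.clip(1 - r**2, 1e-12, None))
    p = 2 * stats.t.sf(np.abs(t), n - 2)
    np.fill_diagonal(p, 0.0)
    return pd.DataFrame(p, index=df.columns, columns=df.columns)

df_p = corr_pvalues(df)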
This will work:
from scipy.stats import pearsonr
column_values = [column for column in df.columns.tolist()]
df['Correlation_coefficient'], df['P-value'] = zip(*df.T.apply(lambda x: pearsonr(x[column_values], x[column_values])))
df_result = df[['Correlation_coefficient', 'P-value']]
Does this work for you?
from scipy.stats import pearsonr
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt

# call the correlation function; you could round the values if needed
df_c = df.corr().round(1)
# get the p-values (subtract the identity so the diagonal becomes 0 instead of 1)
pval = df.corr(method=lambda x, y: pearsonr(x, y)[1]) - np.eye(*df_c.shape)
# set the p-value stars: *** for less than 0.001, ** for less than 0.01, * for less than 0.05
p = pval.applymap(lambda x: ''.join(['*' for t in [0.001, 0.01, 0.05] if x <= t]))
# df_c2 below gives you the dataframe with correlation coefficients and p-value stars
df_c2 = df_c.astype(str) + p
# you could also plot the correlation matrix using sns.heatmap if you want
# mask the upper triangle
matrix = np.triu(df_c)
# convert the annotations to an array for the heatmap
df_c3 = df_c2.to_numpy()
# plot the heatmap
plt.figure(figsize=(13, 8))
sns.heatmap(df_c, annot=df_c3, fmt='', vmin=-1, vmax=1, center=0, cmap='coolwarm', mask=matrix)
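A small note if you are on pandas 2.1 or newer: applymap is deprecated in favour of DataFrame.map, so the star annotation above can equivalently be written as:
p = pval.map(lambda x: ''.join(['*' for t in [0.001, 0.01, 0.05] if x <= t]))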
I'm trying to do simple linear regression using this small Dataset (Screenshot).
The dataset consists of records divided into time blocks of 4 years each (except for the second-to-last block, 2016-2018).
What I'm trying to do is predict the number of records for the 2019-2022 time block. To do this, I added a 2019-2022 column with all rows set to 0 (since nothing has been recorded for that future period yet). I did that to accommodate the syntax of sklearn's train_test_split, and went with this code:
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
df = pd.read_csv("TCO.csv")
df = df[['2000-2003', '2004-2007', '2008-2011','2012-2015','2016-2018','2019-2022']]
linreg = LinearRegression()
X1_train, X1_test, y1_train, y1_test = train_test_split(df[['2000-2003','2004-2007','2008-2011',
'2012-2015','2016-2018']],df['2019-2022'],test_size=0.4,random_state = 42)
linreg.fit(X1_train, y1_train)
linreg.intercept_
list( zip( ['2000-2003','2004-2007','2008-2011','2012-2015','2016-2018'],list(linreg.coef_)))
y1_pred = linreg.predict(X1_test)
print(y1_pred)
test_pred_df = pd.DataFrame({'actual': y1_test,
'predicted': np.round(y1_pred, 2),
'residuals': y1_test - y1_pred})
print(test_pred_df[0:10].to_string())
For some reason, the model always returns 0 as the prediction for every row, with 0 residuals (this is because the 2019-2022 column is all zeros).
I think I did something wrong but I can't tell what it is (I'm a beginner in this topic). Can someone point out what went wrong and how to fix it?
Edit: I added a copy-able version of the data:
df = pd.DataFrame( {'Country:':['Brunei','Cambodia','Indonesia','Laos',
'Malaysia','Myanmar','Philippines','Singaore',
'Thailand','Vietnam'],
'2000-2003': [0,0,14,1,6,0,25,8,26,8],
'2004-2007': [0,3,15,6,21,0,37,11,44,36],
'2008-2011': [0,5,31,9,75,0,58,27,96,61],
'2012-2015': [5,11,129,35,238,3,99,65,170,96],
'2016-2018': [6,22,136,17,211,10,66,89,119,88]})
Based on your data, I think this is what you're asking for [Edit: see the updated version below]:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
df = pd.DataFrame( {'Country:':['Brunei','Cambodia','Indonesia','Laos',
'Malaysia','Myanmar','Philippines','Singaore',
'Thailand','Vietnam'],
'2000-2003': [0,0,14,1,6,0,25,8,26,8],
'2004-2007': [0,3,15,6,21,0,37,11,44,36],
'2008-2011': [0,5,31,9,75,0,58,27,96,61],
'2012-2015': [5,11,129,35,238,3,99,65,170,96],
'2016-2018': [6,22,136,17,211,10,66,89,119,88]})
# create a transposed version with the country names as header
df_T = df.T
df_T.columns = df_T.loc["Country:"]
df_T = df_T.drop("Country:")

# create a new column for the target
df["2019-2022"] = np.nan

# now fit a model per country and add the prediction
for country in df_T:
    y = df_T[country].values.astype(float)
    X = np.arange(len(y)).reshape(-1, 1)  # 0..4 for the five known blocks
    m = LinearRegression()
    m.fit(X, y)
    # index 5 (= len(y)) corresponds to the 2019-2022 block
    df.loc[df["Country:"] == country, "2019-2022"] = m.predict([[len(y)]])[0]
This prints:
Country: 2000-2003 2004-2007 2008-2011 2012-2015 2016-2018 2019-2022
Brunei 0 0 0 5 6 7.3
Cambodia 0 3 5 11 22 23.8
Indonesia 14 15 31 129 136 172.4
Laos 1 6 9 35 17 31.9
Malaysia 6 21 75 238 211 298.3
Myanmar 0 0 0 3 10 9.5
Philippines 25 37 58 99 66 100.2
Singaore 8 11 27 65 89 104.8
Thailand 26 44 96 170 119 184.6
Vietnam 8 36 61 96 88 123.8
Forget about my comment about shift(). I thought about it, but it makes no sense for this small amount of data, I think. But considering time-series methods and treating each country's series as a time series may still be worthwhile for you.
Edit:
Excuse me, the above code is unnecessarily complicated; it was just the result of me going through it step by step. Of course it can simply be done row by row, like this:
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
df = pd.DataFrame( {'Country:':['Brunei','Cambodia','Indonesia','Laos',
'Malaysia','Myanmar','Philippines','Singaore',
'Thailand','Vietnam'],
'2000-2003': [0,0,14,1,6,0,25,8,26,8],
'2004-2007': [0,3,15,6,21,0,37,11,44,36],
'2008-2011': [0,5,31,9,75,0,58,27,96,61],
'2012-2015': [5,11,129,35,238,3,99,65,170,96],
'2016-2018': [6,22,136,17,211,10,66,89,119,88]})
# create a new column for the target
df["2019-2022"] = np.nan

for idx, row in df.iterrows():
    y = row.drop(["Country:", "2019-2022"]).values.astype(float)
    X = np.arange(len(y)).reshape(-1, 1)
    m = LinearRegression()
    m.fit(X, y)
    # the next time block (2019-2022) is index len(y), i.e. 5
    df.loc[idx, "2019-2022"] = m.predict([[len(y)]])[0]
1500 rows should be no problem.
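If you later also want a forecast for the block after 2019-2022 (call it 2023-2026; the label is just illustrative), the same per-row fit can be queried one index further, reusing the dataframe and imports from above:
df["2023-2026"] = np.nan

for idx, row in df.iterrows():
    # fit only on the five observed blocks, not on the predicted one
    y = row[['2000-2003', '2004-2007', '2008-2011', '2012-2015', '2016-2018']].values.astype(float)
    X = np.arange(len(y)).reshape(-1, 1)
    m = LinearRegression().fit(X, y)
    # index len(y) is 2019-2022, so len(y) + 1 is the block after it
    df.loc[idx, "2023-2026"] = m.predict([[len(y) + 1]])[0]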