Let's say that we have two samples data1 and data2 with their respective weights wei1 and wei2, and that we want to calculate the Kolmogorov-Smirnov statistic between the two weighted samples.
The way we do that in Python is as follows:
import numpy as np

def ks_w(data1, data2, wei1, wei2):
    ix1 = np.argsort(data1)
    ix2 = np.argsort(data2)
    wei1 = wei1[ix1]
    wei2 = wei2[ix2]
    data1 = data1[ix1]
    data2 = data2[ix2]
    d = 0.
    fn1 = 0.
    fn2 = 0.
    j1 = 0
    j2 = 0
    j1w = 0.
    j2w = 0.
    while (j1 < len(data1)) & (j2 < len(data2)):
        d1 = data1[j1]
        d2 = data2[j2]
        w1 = wei1[j1]
        w2 = wei2[j2]
        if d1 <= d2:
            j1 += 1
            j1w += w1
            fn1 = j1w / sum(wei1)
        if d2 <= d1:
            j2 += 1
            j2w += w2
            fn2 = j2w / sum(wei2)
        if abs(fn2 - fn1) > d:
            d = abs(fn2 - fn1)
    return d
where we simply adapt for our purpose the classical two-sample KS statistic as implemented in Press, Flannery, Teukolsky, Vetterling, Numerical Recipes in C, Cambridge University Press, 1992, p. 626.
Our questions are:
is anybody aware of any other way to do it?
is there any library in Python/R/* that performs it?
what about the test? Does it exist or should we use a reshuffling procedure in order to evaluate the statistic?
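On the reshuffling question, here is a rough permutation-test sketch, purely as an illustration of the idea and not an established weighted two-sample KS test: pool the observations together with their weights, repeatedly reassign them at random to two groups of the original sizes, and recompute the statistic to build a null distribution. It calls the ks_w function above (the much faster ks_w2 further down could be substituted):

import numpy as np

def ks_w_perm_pvalue(data1, data2, wei1, wei2, n_perm=1000, seed=None):
    rng = np.random.default_rng(seed)
    d_obs = ks_w(data1, data2, wei1, wei2)
    pooled = np.concatenate([data1, data2])
    pooled_w = np.concatenate([wei1, wei2])
    n1 = len(data1)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(pooled.size)
        d_perm = ks_w(pooled[idx[:n1]], pooled[idx[n1:]],
                      pooled_w[idx[:n1]], pooled_w[idx[n1:]])
        count += d_perm >= d_obs          # how often a reshuffled statistic beats the observed one
    return (count + 1) / (n_perm + 1)     # add-one smoothing for the p-value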
This solution is based on the code for scipy.stats.ks_2samp and runs in about 1/10000 the time (notebook):
import numpy as np

def ks_w2(data1, data2, wei1, wei2):
    ix1 = np.argsort(data1)
    ix2 = np.argsort(data2)
    data1 = data1[ix1]
    data2 = data2[ix2]
    wei1 = wei1[ix1]
    wei2 = wei2[ix2]
    data = np.concatenate([data1, data2])
    cwei1 = np.hstack([0, np.cumsum(wei1)/sum(wei1)])
    cwei2 = np.hstack([0, np.cumsum(wei2)/sum(wei2)])
    cdf1we = cwei1[np.searchsorted(data1, data, side='right')]
    cdf2we = cwei2[np.searchsorted(data2, data, side='right')]
    return np.max(np.abs(cdf1we - cdf2we))
Here's a test of its accuracy and performance:
ds1 = np.random.rand(10000)
ds2 = np.random.randn(40000) + .2
we1 = np.random.rand(10000) + 1.
we2 = np.random.rand(40000) + 1.
ks_w2(ds1, ds2, we1, we2)
# 0.4210415232236593
ks_w(ds1, ds2, we1, we2)
# 0.4210415232236593
%timeit ks_w2(ds1, ds2, we1, we2)
# 100 loops, best of 3: 17.1 ms per loop
%timeit ks_w(ds1, ds2, we1, we2)
# 1 loop, best of 3: 3min 44s per loop
To add to Luca Jokull's answer, if you want to also return a p-value (similar to the unweighted scipy.stats.ks_2samp function), the suggested ks_w2() function can be modified as follows:
import numpy as np
from scipy.stats import distributions

def ks_weighted(data1, data2, wei1, wei2, alternative='two-sided'):
    ix1 = np.argsort(data1)
    ix2 = np.argsort(data2)
    data1 = data1[ix1]
    data2 = data2[ix2]
    wei1 = wei1[ix1]
    wei2 = wei2[ix2]
    data = np.concatenate([data1, data2])
    cwei1 = np.hstack([0, np.cumsum(wei1)/sum(wei1)])
    cwei2 = np.hstack([0, np.cumsum(wei2)/sum(wei2)])
    cdf1we = cwei1[np.searchsorted(data1, data, side='right')]
    cdf2we = cwei2[np.searchsorted(data2, data, side='right')]
    d = np.max(np.abs(cdf1we - cdf2we))
    # calculate p-value
    n1 = data1.shape[0]
    n2 = data2.shape[0]
    m, n = sorted([float(n1), float(n2)], reverse=True)
    en = m * n / (m + n)
    if alternative == 'two-sided':
        prob = distributions.kstwo.sf(d, np.round(en))
    else:
        z = np.sqrt(en) * d
        # Use Hodges' suggested approximation Eqn 5.3
        # Requires m to be the larger of (n1, n2)
        expt = -2 * z**2 - 2 * z * (m + 2*n)/np.sqrt(m*n*(m+n))/3.0
        prob = np.exp(expt)
    return d, prob
This is the asymptotic method that scipy's original unweighted function uses.
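A minimal usage sketch, reusing the ds1, ds2, we1, we2 arrays from the accuracy/performance test above (nothing new here, just a call to the function defined above):

d, p = ks_weighted(ds1, ds2, we1, we2)
print('KS statistic:', d)                 # matches the ks_w2 value above
print('asymptotic two-sided p-value:', p)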
This is an R version of a two-sided weighted KS statistic, following the suggestion of Numerical Methods of Statistics by Monahan (p. 334 in the 1st edition, p. 358 in the 2nd edition).
ks_weighted <- function(vector_1, vector_2, weights_1, weights_2){
    # ewcdf() is the weighted empirical CDF provided by the spatstat package
    F_vec_1 <- ewcdf(vector_1, weights = weights_1, normalise = FALSE)
    F_vec_2 <- ewcdf(vector_2, weights = weights_2, normalise = FALSE)
    xw <- c(vector_1, vector_2)
    d <- max(abs(F_vec_1(xw) - F_vec_2(xw)))

    ## P-VALUE with NORMAL SAMPLE
    # n_vector_1 <- length(vector_1)
    # n_vector_2 <- length(vector_2)
    # n <- n_vector_1 * n_vector_2/(n_vector_1 + n_vector_2)

    # P-VALUE EFFECTIVE SAMPLE SIZE as suggested by Monahan
    n_vector_1 <- sum(weights_1)^2/sum(weights_1^2)
    n_vector_2 <- sum(weights_2)^2/sum(weights_2^2)
    n <- n_vector_1 * n_vector_2/(n_vector_1 + n_vector_2)

    pkstwo <- function(x, tol = 1e-06) {
        if (is.numeric(x))
            x <- as.double(x)
        else stop("argument 'x' must be numeric")
        p <- rep(0, length(x))
        p[is.na(x)] <- NA
        IND <- which(!is.na(x) & (x > 0))
        if (length(IND))
            p[IND] <- .Call(stats:::C_pKS2, p = x[IND], tol)
        p
    }

    pval <- 1 - pkstwo(sqrt(n) * d)

    out <- c(KS_Stat = d, P_value = pval)
    return(out)
}
I'm trying to fit the SIR epidemic spread model to the current new-case data of various countries. To do that I used the work here: https://github.com/epimath/param-estimation-SIR . The main idea was to fit the best possible SIR infected curve to the new-case data for a specific country, and to calculate the total predicted number of cases and the days on which 98% and 95% of the total cases are reached. The problem is that when I select Brazil, Mexico, or the United States, the fit suggests the epidemic will never end. I am curious about the reason. Any help on what can be done to deal with these non-converging cases would be appreciated.
Please change the selected_location variable from "Spain" to one of those three countries (Brazil, Mexico, or the United States) to reproduce the result that led me to ask here.
P.S. I know the limitations of this work; for example, the new-case numbers depend on the number of tests, etc. Please ignore those limitations. I'd like to see what is needed to produce a result out of the following code.
Here are some outputs:
Spain (Expected Output Example)
Turkey (Expected Output Example)
France (Expected Output Example)
USA (Unexpected Output Example)
Brazil (Unexpected Output Example)
I suspect something causes the gamma parameter (the recovery rate) to come out too small, which leads to the same number of cases for each day. But I couldn't go further and find out what is causing that. (I noticed this by printing and examining the paramests variable.) See the small diagnostic sketch after the code below.
Here you can find my code below.
import scipy.optimize as optimize
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import poisson
from scipy.stats import norm
import json
from scipy.integrate import odeint as ode
import pandas as pd
from datetime import datetime

time_start = datetime.timestamp(datetime.now())
output = {"result": "error"}
error = False

def model(ini, time_step, params):
    Y = np.zeros(3)  # column vector for the state variables
    X = ini
    mu = 0
    beta = params[0]
    gamma = params[1]
    Y[0] = mu - beta * X[0] * X[1] - mu * X[0]               # S
    Y[1] = beta * X[0] * X[1] - gamma * X[1] - mu * X[1]     # I
    Y[2] = gamma * X[1] - mu * X[2]                          # R
    return Y

def x0fcn(params, data):
    S0 = 1.0 - (data[0] / params[2])
    I0 = data[0] / params[2]
    R0 = 0.0
    X0 = [S0, I0, R0]
    return X0

def yfcn(res, params):
    return res[:, 1] * params[2]

# cost function for the SIR model for python 2.7
# Marisa Eisenberg (marisae@umich.edu)
# Yu-Han Kao (kaoyh@umich.edu) -7-9-17
def NLL(params, data, times):  # negative log likelihood
    params = np.abs(params)
    data = np.array(data)
    res = ode(model, x0fcn(params, data), times, args=(params,))
    y = yfcn(res, params)
    nll = sum(y) - sum(data * np.log(y))
    # note this is a slightly shortened version--there's an additive constant term missing but it
    # makes calculation faster and won't alter the threshold. Alternatively, can do:
    # nll = -sum(np.log(poisson.pmf(np.round(data), np.round(y))))  # the round is b/c Poisson is for (integer) count data
    # this can also barf if data and y are too far apart because the dpois will be ~0, which makes the log angry
    # ML using normally distributed measurement error (least squares)
    # nll = -sum(np.log(norm.pdf(data, y, 0.1*np.mean(data))))  # example WLS assuming sigma = 0.1*mean(data)
    # nll = sum((y - data)**2)  # alternatively can do OLS but note this will mess with the thresholds
    #                           # for the profile! This version of OLS is off by a scaling factor from
    #                           # actual LL units.
    return nll
df = pd.read_csv('https://github.com/owid/covid-19-data/raw/master/public/data/owid-covid-data.csv')
selected_location = 'Spain'
selected_df = df[df.location == selected_location].reset_index()
selected_df.date = pd.to_datetime(selected_df.date)
print(selected_df.head())
selected_df.date = pd.to_datetime(selected_df.date)
selected_df = selected_df[['date', 'new_cases']]
print(selected_df)
df = selected_df

# NOTE: the observed series (data), the time grid (times), and the initial parameter
# guesses (params) are used below but are not defined in the snippet as posted.
optimizer = optimize.minimize(NLL, params, args=(data, times), method='Nelder-Mead',
                              options={'disp': False, 'return_all': False, 'xatol': 3.1201,
                                       'fatol': 0.0001, 'adaptive': False})
paramests = np.abs(optimizer.x)
iniests = x0fcn(paramests, data)
print('Paramests:')
print(paramests)

times_long = range(0, int(len(times) * 10))
start_day = df['date'][0]
dates_long = []
for i in range(0, int(len(times) * 10)):
    dates_long.append(start_day + (np.timedelta64(1, 'D') * i))
# print(df)
# print(dates_long)
# sys.exit()

#### Re-simulate and plot the model with the final parameter estimates ####
xest = ode(model, iniests, times_long, args=(paramests,))
# print(xest)
est_measure = yfcn(xest, paramests)
# plt.plot(times, data, 'k-o', linewidth=1, label='Data')

json_dict = {}
time_end = datetime.timestamp(datetime.now())
json_dict['duration'] = time_end - time_start
json_df = pd.DataFrame()
json_df['dates'] = dates_long
json_df['new_cases'] = df['new_cases']
json_df['prediction'] = est_measure
json_df = json_df.fillna("")
json_df['cumulative'] = json_df['prediction'].cumsum()
json_df = json_df[json_df['prediction'] >= 1]

if error == True:
    json_dict['result'] = 'error'
    json_dict['message'] = error_message
    json_dict['timestamp'] = datetime.timestamp(datetime.now())
    json_dict['chart_data'] = json_df.drop(columns=['prediction'], axis=1)
else:
    json_dict['result'] = 'success'
    json_dict['day_for_95_percent_predicted_cases'] = \
        json_df[json_df['cumulative'] > (json_df['cumulative'].iloc[-1] * 0.95)]['dates'].reset_index(drop=True)[0]
    json_dict['day_for_98_percent_predicted_cases'] = \
        json_df[json_df['cumulative'] > (json_df['cumulative'].iloc[-1] * 0.98)]['dates'].reset_index(drop=True)[0]
    # json_dict['timestamp'] = str(f"{datetime.now():%Y-%m-%d %H:%M:%S}")
    json_dict['timestamp'] = datetime.timestamp(datetime.now())
    json_dict['chart_data'] = json_df.to_dict()

json_string = json.dumps(json_dict, default=str)
print(json_string)
output = json_string  # json string

plt.plot(json_df['dates'], json_df['prediction'], 'r-', linewidth=3, label='Predicted New Cases')
plt.bar(df['date'], data)
plt.axvline(x=json_dict['day_for_95_percent_predicted_cases'],
            label='(95%) ' + str(json_dict['day_for_95_percent_predicted_cases'].date()), color='red')
plt.axvline(x=json_dict['day_for_98_percent_predicted_cases'],
            label='(98%) ' + str(json_dict['day_for_98_percent_predicted_cases'].date()), color='green')
plt.xlabel('Time')
plt.ylabel('Individuals')
plt.legend()
plt.show()
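Regarding the suspicion above that gamma collapses toward zero for the non-converging countries, here is the small diagnostic sketch mentioned earlier (my addition, using only the fitted parameters produced by the script above): print the fitted rates and the implied basic reproduction number R0 = beta/gamma. A gamma estimate near zero means the modelled epidemic essentially never recedes, so the 95%/98% cumulative thresholds are never crossed, which would match the "never ends" behaviour; this is only a diagnostic suggestion, not a fix.

# Hypothetical diagnostic: inspect the fitted SIR rates after the optimisation above.
beta_est, gamma_est, scale_est = np.abs(optimizer.x)
print('beta  =', beta_est)
print('gamma =', gamma_est)                  # recovery rate; a value near 0 means cases never decline
if gamma_est > 0:
    print('implied R0 =', beta_est / gamma_est)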
I would like to calculate the Matthews correlation coefficient (MCC) for two matrices A and B. I loop over the columns of A, calculate the MCC between that column and each of the 2000 rows of matrix B, and then take the index of the maximum. The code is:
import numpy as np
import pandas as pd
from sklearn.metrics import matthews_corrcoef as mcc

A = pd.read_csv('A.csv', squeeze=True)
B = pd.read_csv('B.csv', squeeze=True)

ind = {}
for col in A:
    ind[col] = np.argmax(list(mcc(B.iloc[i], A[col]) for i in range(2000)))
    print(ind[col])
My problem is that it takes a really long time (about one second per column). I saw almost the same code in R running much faster (about 5 seconds in total). How can this be? Can I improve my Python code?
R Code:
A <- as.matrix(read.csv(file='A.csv'))
B <- t(as.matrix(read.csv(file='B.csv', check.names = FALSE)))
library('mccr')
C <- rep(NA, ncol(A))
for (query in 1:ncol(A)) {
  mcc <- sapply(1:ncol(B), function(i)
    mccr(A[, query], B[, i]))
  C[query] <- which.max(mcc)
}
Maybe try this, using NumPy and dot products in Python:
def compute_mcc(true_labels, pred_labels):
    """Compute Matthews correlation coefficient.

    :param true_labels: 2D integer array (features x samples)
    :param pred_labels: 2D integer array (features x samples)
    :return: mcc (samples1 x samples2)
    """
    # prep inputs for confusion matrix calculations
    pred_labels_1 = pred_labels == 1
    pred_labels_0 = pred_labels == 0
    true_labels_1 = true_labels == 1
    true_labels_0 = true_labels == 0

    # dot product of binary matrices
    confusion_dot = lambda a, b: np.dot(a.T.astype(int), b.astype(int)).T
    TP = confusion_dot(pred_labels_1, true_labels_1)
    TN = confusion_dot(pred_labels_0, true_labels_0)
    FP = confusion_dot(pred_labels_1, true_labels_0)
    FN = confusion_dot(pred_labels_0, true_labels_1)

    mcc = (TP * TN) - (FP * FN)
    denom = np.sqrt((TP + FP) * (TP + FN) * (TN + FP) * (TN + FN))
    # avoid dividing by 0
    denom[denom == 0] = 1
    return mcc / denom
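As a hedged usage sketch (assuming, as in the question, that the columns of A are the query label vectors, the 2000 rows of B are the candidates, and all labels are binary 0/1; the file layout and shapes are assumptions on my part):

import numpy as np
import pandas as pd

A = pd.read_csv('A.csv').to_numpy()      # assumed shape: (n_features, n_queries)
B = pd.read_csv('B.csv').to_numpy()      # assumed shape: (2000, n_features)

mcc_matrix = compute_mcc(A, B.T)         # (n_queries, 2000) matrix of MCC values
best_rows = mcc_matrix.argmax(axis=1)    # index of the best-matching B row per column of A
print(best_rows)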
I am following Andrew Ng's Coursera course on machine learning. I am trying to build a 3-layer neural net for digit recognition in Python (784 input, 25 hidden, 10 output units). However, I am unable to get the predictions (on the training data) correct (accuracy < 5% at 100 iterations, and not increasing with further iterations).
J (the cost function) seems to be going down (see photo 1), and I have done gradient checking (before minimizing), which seems to match to around 1e-11 (see photo 2).
I have compared theta1 and theta2 after 100 iterations to my working Matlab code (see code snippet 1 for Octave and code snippet 2 for Python). It seems theta1 is reasonably similar, but theta2 is very different -- see code snippet 2. (I know they should differ because of the different optimisation routines. However, firstly, I placed the same initial thetas into both codes. Secondly, my reasoning is that they should start to converge, or at least get close, after 100 iterations.)
The only error I see is:
-c:32: RuntimeWarning: overflow encountered in exp
when running the sigmoid during the optimisation. However, I was told that this is not critical and that it is normal to encounter this warning during optimisation. Furthermore, because it is a sigmoid, any time the input is large it will tend towards 1 anyway.
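(As a side note, one hedged way to avoid that warning, not part of the original code, is to evaluate the sigmoid through scipy.special.expit, which handles extreme inputs without overflowing:)

import numpy as np
from scipy.special import expit

def sigmoid(lz):
    # expit computes 1/(1 + exp(-lz)) in a numerically stable way,
    # so very negative inputs no longer trigger the exp overflow warning
    return expit(lz)

print(sigmoid(np.array([-1000.0, 0.0, 1000.0])))  # [0.  0.5 1. ]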
I have also attached my code in snippet 3. I have cut out all the other non-essential bits (like gradient checking) to make it as short as possible.
I would appreciate any help into this as I cannot even find where it is going wrong, let alone fix it. Thank you.
Photos:
J (cost function) decreasing to 1.8 after 12 iterations
Gradient checking before optimizing, they look very similar
Code snippets 1 and 2 (Octave output, then Python output):
Initializing Neural Network Parameters ...
initial1
-0.0100100
-0.0771400
-0.1113800
-0.0230100
0.0547800
-0.0505500
-0.0731200
-0.0988700
0.0128000
-0.0855400
-0.1002500
-0.1137200
-0.0669300
-0.0999900
0.0084500
-0.0363200
-0.0588600
-0.0431100
-0.1133700
-0.0326300
0.0282800
0.0052400
-0.1134600
-0.0617700
0.0267600
initial2
0.0273700
0.1026000
-0.0502100
-0.0699100
0.0190600
0.1004000
0.0784600
-0.0075900
-0.0362100
0.0286200
Doing fminunc
Training Neural Network...
Iteration 100 | Cost: 6.219605e-01
theta1
-0.0099719
-0.0768462
-0.1109559
-0.0229224
0.0545714
-0.0503575
-0.0728415
-0.0984935
0.0127513
-0.0852143
-0.0998682
-0.1132869
-0.0666751
-0.0996092
0.0084178
-0.0361817
-0.0586359
-0.0429458
-0.1129383
-0.0325057
0.0281723
0.0052200
-0.1130279
-0.0615348
0.0266581
theta2
1.124918
1.603780
-1.266390
-0.848874
0.037956
-1.360841
2.145562
-1.448657
-1.262285
-1.357635
theta1_initial
[-0.01001 -0.07714 -0.11138 -0.02301 0.05478 -0.05055 -0.07312 -0.09887
0.0128 -0.08554 -0.10025 -0.11372 -0.06693 -0.09999 0.00845 -0.03632
-0.05886 -0.04311 -0.11337 -0.03263 0.02828 0.00524 -0.11346 -0.06177
0.02676]
theta2_initial
[ 0.02737 0.1026 -0.05021 -0.06991 0.01906 0.1004 0.07846 -0.00759
-0.03621 0.02862]
Doing fminunc
-c:32: RuntimeWarning: overflow encountered in exp
theta1
[-0.00997202 -0.07680716 -0.11086841 -0.02292044 0.05455335 -0.05034252
-0.07280686 -0.09842603 0.01275117 -0.08516515 -0.0997987 -0.11319546
-0.06664666 -0.09954009 0.00841804 -0.03617494 -0.05861458 -0.04293555
-0.1128474 -0.0325006 0.02816879 0.00522031 -0.1129369 -0.06151103
0.02665508]
theta2
[ 0.27954826 -0.08007496 -0.36449273 -0.22988024 0.06849659 -0.47803973
1.09023041 -0.25570559 -0.24537494 -0.40341995]
#-----------------BEGIN HEADERS-----------------
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats
import csv
import scipy
#-----------------END HEADERS-----------------

#-----------------BEGIN FUNCTION 1-----------------
def randinitialize(L_in, L_out):
    w = np.zeros((L_out, 1 + L_in))
    epsilon_init = 0.12
    w = np.random.rand(L_out, 1 + L_in) * 2 * epsilon_init - epsilon_init
    return w
#-----------------END FUNCTION 1-----------------

#-----------------BEGIN FUNCTION 2-----------------
def sigmoid(lz):
    g = 1.0/(1.0 + np.exp(-lz))
    return g
#-----------------END FUNCTION 2-----------------

#-----------------BEGIN FUNCTION 3-----------------
def sigmoidgradient(lz):
    g = np.multiply(sigmoid(lz), (1 - sigmoid(lz)))
    return g
#-----------------END FUNCTION 3-----------------
#-----------------BEGIN FUNCTION 4-----------------
def nncostfunction(ltheta_ravel, linput_layer_size, lhidden_layer_size, lnum_labels, lx, ly, llambda_reg):
    ltheta1 = np.array(np.reshape(ltheta_ravel[:lhidden_layer_size * (linput_layer_size + 1)], (lhidden_layer_size, (linput_layer_size + 1))))
    ltheta2 = np.array(np.reshape(ltheta_ravel[lhidden_layer_size * (linput_layer_size + 1):], (lnum_labels, (lhidden_layer_size + 1))))
    ltheta1_grad = np.zeros((np.shape(ltheta1)))
    ltheta2_grad = np.zeros((np.shape(ltheta2)))
    y_matrix = []
    lm = np.shape(lx)[0]

    eye_matrix = np.eye(lnum_labels)
    for i in range(len(ly)):
        y_matrix.append(eye_matrix[int(ly[i])-1, :])  # The minus one as python is zero based
    y_matrix = np.array(y_matrix)

    a1 = np.hstack((np.ones((lm, 1)), lx)).astype(float)
    z2 = sigmoid(ltheta1.dot(a1.T))
    a2 = (np.concatenate((np.ones((np.shape(z2)[1], 1)), z2.T), axis=1)).astype(float)
    a3 = sigmoid(ltheta2.dot(a2.T))
    h = a3

    J_unreg = 0
    J = 0
    J_unreg = (1/float(lm)) * np.sum(
        -np.multiply(y_matrix, np.log(h.T))
        - np.multiply((1 - y_matrix), np.log(1 - h.T)),
        axis=None)
    J = J_unreg + (llambda_reg/(2*float(lm))) * (
        np.sum(np.multiply(ltheta1[:, 1:], ltheta1[:, 1:]), axis=None)
        + np.sum(np.multiply(ltheta2[:, 1:], ltheta2[:, 1:]), axis=None))

    delta3 = a3.T - y_matrix
    delta2 = np.multiply((delta3.dot(ltheta2[:, 1:])), (sigmoidgradient(ltheta1.dot(a1.T))).T)
    cdelta2 = ((a2.T).dot(delta3)).T
    cdelta1 = ((a1.T).dot(delta2)).T
    ltheta1_grad = (1/float(lm)) * cdelta1
    ltheta2_grad = (1/float(lm)) * cdelta2

    theta1_hold = ltheta1
    theta2_hold = ltheta2
    theta1_hold[:, 0] = 0
    theta2_hold[:, 0] = 0
    ltheta1_grad = ltheta1_grad + (llambda_reg/float(lm)) * theta1_hold
    ltheta2_grad = ltheta2_grad + (llambda_reg/float(lm)) * theta2_hold

    thetagrad_ravel = np.concatenate((np.ravel(ltheta1_grad), np.ravel(ltheta2_grad)))
    return (J, thetagrad_ravel)
#-----------------END FUNCTION 4-----------------
#-----------------BEGIN FUNCTION 5-----------------
def predict(ltheta1, ltheta2, x):
    m, n = np.shape(x)
    p = np.zeros(m)
    h1 = sigmoid((np.hstack((np.ones((m, 1)), x.astype(float)))).dot(ltheta1.T))
    h2 = sigmoid((np.hstack((np.ones((m, 1)), h1))).dot(ltheta2.T))
    for i in range(0, np.shape(h2)[0]):
        p[i] = np.argmax(h2[i, :])
    return p
#-----------------END FUNCTION 5-----------------
## Setup the parameters you will use for this exercise
input_layer_size = 784   # 28x28 Input Images of Digits
hidden_layer_size = 25   # 25 hidden units
num_labels = 10          # 10 labels, from 0 to 9

data = []

# Reading in data, split into X and y, rewrite label 0 to 10 (for easy comparison to course)
with open('train.csv', 'rb') as csvfile:
    has_header = csv.Sniffer().has_header(csvfile.read(1024))
    csvfile.seek(0)  # rewind
    data_csv = csv.reader(csvfile, delimiter=',')
    if has_header:
        next(data_csv)
    for row in data_csv:
        data.append(row)
data = np.array(data)

x = data[:, 1:]
y = data[:, 0]
y = y.astype(int)
for i in range(len(y)):
    if y[i] == 0:
        y[i] = 10

# Set basic parameters
m, n = np.shape(x)
lambda_reg = 1.0

# Randomly initalize weights for Theta_initial
# theta1_initial = np.genfromtxt('tt1.csv', delimiter=',')
# theta2_initial = np.genfromtxt('tt2.csv', delimiter=',')
theta1_initial = randinitialize(input_layer_size, hidden_layer_size)
theta2_initial = randinitialize(hidden_layer_size, num_labels)
theta_initial_ravel = np.concatenate((np.ravel(theta1_initial), np.ravel(theta2_initial)))

# Doing optimize
fmin = scipy.optimize.minimize(fun=nncostfunction, x0=theta_initial_ravel,
                               args=(input_layer_size, hidden_layer_size, num_labels, x, y, lambda_reg),
                               method='L-BFGS-B', jac=True, options={'maxiter': 10, 'disp': True})
fmin

theta1 = np.array(np.reshape(fmin.x[:hidden_layer_size * (input_layer_size + 1)], (hidden_layer_size, (input_layer_size + 1))))
theta2 = np.array(np.reshape(fmin.x[hidden_layer_size * (input_layer_size + 1):], (num_labels, (hidden_layer_size + 1))))

p = predict(theta1, theta2, x)
for i in range(len(y)):
    if y[i] == 10:
        y[i] = 0

correct = [1 if a == b else 0 for (a, b) in zip(p, y)]
accuracy = (sum(map(int, correct)) / float(len(correct)))
print 'accuracy = {0}%'.format(accuracy * 100)
I think I have fixed the problem: it seems I messed up the index
should be:
y_matrix.append(eye_matrix[int(ly[i]),:])
instead of:
y_matrix.append(eye_matrix[int(ly[i])-1,:])
I am trying to optimize a snippet that gets called a lot (millions of times) so any type of speed improvement (hopefully removing the for-loop) would be great.
I am computing a correlation function of some j'th particle with all others:
C_j(|r-r'|) = sqrt(E((s_j(r') - s_k(r))^2)), averaged over k.
My idea is to have a variable corrfun which bins the data into some bins (the r, defined elsewhere). I find which bin of r each s_k belongs to, and this is stored in ind. So ind[0] is the index of r (and thus of corrfun) to which the first point corresponds. Multiple points can fall into the same bin (in fact I want the bins to be big enough to contain multiple points), so I sum together all of the (s_j(r')-s_k(r))^2 and then divide by the number of points in that bin (stored in the variable rw). The code I ended up making for this is the following (np is numpy):
for k, v in enumerate(ind):
    if j == k:
        continue
    corrfun[v] += (s[k] - s[j])**2
    rw[v] += 1

rw2 = rw            # note: this is a view of rw, not a copy, so the next line also modifies rw
rw2[rw < 1] = 1
corrfun = np.sqrt(np.divide(corrfun, rw2))
Note, the rw2 business was because I want to avoid divide by 0 problems but I do return the rw array and I want to be able to differentiate between the rw=0 and rw=1 elements. Perhaps there is a more elegant solution for this as well.
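As a small aside on that last point, one alternative sketch (which the final version at the bottom of this thread also ends up using) is np.maximum, which leaves rw untouched and still avoids the division by zero:

# rw itself stays unmodified; empty bins are simply divided by 1
corrfun = np.sqrt(corrfun / np.maximum(rw, 1))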
Is there a way to make the for-loop faster? While I would like to not add the self-interaction (j == k), I am even OK with having self-interaction if it means I can get a significantly faster calculation (the length of ind is ~1E6, so self-interaction is probably insignificant anyway).
Thank you!
Ilya
Edit:
Here is the full code. Note, in the full code I am averaging over j as well.
import numpy as np

def twopointcorr(x, y, s, dr):
    width = np.max(x) - np.min(x)
    height = np.max(y) - np.min(y)
    n = len(x)
    maxR = np.sqrt((width/2)**2 + (height/2)**2)
    r = np.arange(0, maxR, dr)
    print(r)
    corrfun = r*0
    rw = r*0
    print(maxR)
    # go through all points
    for j in range(0, n-1):
        hypot = np.sqrt((x[j]-x)**2 + (y[j]-y)**2)
        ind = [np.abs(r-h).argmin() for h in hypot]
        for k, v in enumerate(ind):
            if j == k:
                continue
            corrfun[v] += (s[k]-s[j])**2
            rw[v] += 1
    rw2 = rw
    rw2[rw < 1] = 1
    corrfun = np.sqrt(np.divide(corrfun, rw2))
    return r, corrfun, rw
I test it the following way:
from twopointcorr import twopointcorr
import numpy as np
import matplotlib.pyplot as plt
import time

n = 1000
x = np.random.rand(n)
y = np.random.rand(n)
s = np.random.rand(n)

print('running two point corr function')
start_time = time.time()
r, corrfun, rw = twopointcorr(x, y, s, 0.1)
print("--- Execution time is %s seconds ---" % (time.time() - start_time))

fig1 = plt.figure()
plt.plot(r, corrfun, '-x')
fig2 = plt.figure()
plt.plot(r, rw, '-x')
plt.show()
Again, the main issue is that in the real dataset n~1E6. I can resample to make it smaller, of course, but I would love to actually crank through the dataset.
Here is code that uses broadcasting, hypot, round, and bincount to remove all the loops:
def twopointcorr2(x, y, s, dr):
    width = np.max(x) - np.min(x)
    height = np.max(y) - np.min(y)
    n = len(x)
    maxR = np.sqrt((width/2)**2 + (height/2)**2)
    r = np.arange(0, maxR, dr)
    osub = lambda x: np.subtract.outer(x, x)

    ind = np.clip(np.round(np.hypot(osub(x), osub(y)) / dr), 0, len(r)-1).astype(int)
    rw = np.bincount(ind.ravel())
    rw[0] -= len(x)
    corrfun = np.bincount(ind.ravel(), (osub(s)**2).ravel())
    return r, corrfun, rw
To compare, I modified your code as follows:
def twopointcorr(x, y, s, dr):
    width = np.max(x) - np.min(x)
    height = np.max(y) - np.min(y)
    n = len(x)
    maxR = np.sqrt((width/2)**2 + (height/2)**2)
    r = np.arange(0, maxR, dr)
    corrfun = r*0
    rw = r*0
    for j in range(0, n):
        hypot = np.sqrt((x[j]-x)**2 + (y[j]-y)**2)
        ind = [np.abs(r-h).argmin() for h in hypot]
        for k, v in enumerate(ind):
            if j == k:
                continue
            corrfun[v] += (s[k]-s[j])**2
            rw[v] += 1
    return r, corrfun, rw
and here is the code to check the results:
import numpy as np

n = 1000
x = np.random.rand(n)
y = np.random.rand(n)
s = np.random.rand(n)

r1, corrfun1, rw1 = twopointcorr(x, y, s, 0.1)
r2, corrfun2, rw2 = twopointcorr2(x, y, s, 0.1)

assert np.allclose(r1, r2)
assert np.allclose(corrfun1, corrfun2)
assert np.allclose(rw1, rw2)
and the %timeit results:
%timeit twopointcorr(x,y,s,0.1)
%timeit twopointcorr2(x,y,s,0.1)
outputs:
1 loop, best of 3: 5.16 s per loop
10 loops, best of 3: 134 ms per loop
Your original code on my system runs in about 5.7 seconds. I fully vectorized the inner loop and got it to run in 0.39 seconds. Simply replace your "go through all points" loop with this:
points = np.column_stack((x, y))
hypots = scipy.spatial.distance.cdist(points, points)
inds = np.rint(hypots.clip(max=maxR) / dr).astype(np.int)

# go through all points
for j in range(n):  # n.b. previously n-1, not sure why
    ind = inds[j]
    np.add.at(corrfun, ind, (s - s[j])**2)
    np.add.at(rw, ind, 1)
    rw[ind[j]] -= 1  # subtract self
The first observation was that your hypot code was computing 2D distances, so I replaced that with cdist from SciPy to do it all in a single call. The second was that the inner for loop was slow, and thanks to an insightful comment from @hpaulj I vectorized that as well using np.add.at().
Since you asked how to vectorize the inner loop as well, I did that later. It now takes 0.25 seconds to run, for a total speedup of over 20x. Here's the final code:
points = np.column_stack((x, y))
hypots = scipy.spatial.distance.cdist(points, points)
inds = np.rint(hypots.clip(max=maxR) / dr).astype(np.int)

sn = np.tile(s, (n, 1))  # n copies of s
diffs = (sn - sn.T)**2   # squares of pairwise differences
np.add.at(corrfun, inds, diffs)
rw = np.bincount(inds.flatten(), minlength=len(r))
np.subtract.at(rw, inds.diagonal(), 1)  # subtract self
This uses more memory but does produce a substantial speedup vs. the single-loop version above.
OK, so as it turns out, outer products are incredibly memory expensive. However, using the answers from @HYRY and @JohnZwinck I was able to make code that is still roughly linear in n in memory and computes fast (0.5 seconds for the test case).
import numpy as np

def twopointcorr(x, y, s, dr, maxR=-1):
    width = np.max(x) - np.min(x)
    height = np.max(y) - np.min(y)
    n = len(x)
    if maxR < dr:
        maxR = np.sqrt((width/2)**2 + (height/2)**2)
    r = np.arange(0, maxR + dr, dr)
    corrfun = r*0
    rw = r*0
    for j in range(0, n):
        ind = np.clip(np.round(np.hypot(x[j]-x, y[j]-y) / dr), 0, len(r)-1).astype(int)
        np.add.at(corrfun, ind, (s - s[j])**2)
        np.add.at(rw, ind, 1)
    rw[0] -= n
    corrfun = np.sqrt(np.divide(corrfun, np.maximum(rw, 1)))
    r = np.delete(r, -1)
    rw = np.delete(rw, -1)
    corrfun = np.delete(corrfun, -1)
    return r, corrfun, rw
I am currently working with some Raman spectra data, and I am trying to correct my data for the skew caused by fluorescence. Take a look at the graph below:
I am pretty close to achieving what I want. As you can see, I am trying to fit a polynomial to all of my data, whereas I should really just be fitting a polynomial to the local minima.
Ideally I would want to have a polynomial fitting which when subtracted from my original data would result in something like this:
Are there any built in libs that does this already?
If not, any simple algorithm one can recommend for me?
I found an answer to my question, just sharing for everyone who stumbles upon this.
There is an algorithm called "Asymmetric Least Squares Smoothing" by P. Eilers and H. Boelens in 2005. The paper is free and you can find it on google.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def baseline_als(y, lam, p, niter=10):
    L = len(y)
    D = sparse.csc_matrix(np.diff(np.eye(L), 2))
    w = np.ones(L)
    for i in xrange(niter):
        W = sparse.spdiags(w, 0, L, L)
        Z = W + lam * D.dot(D.transpose())
        z = spsolve(Z, w*y)
        w = p * (y > z) + (1-p) * (y < z)
    return z
The following code works on Python 3.6.
This is adapted from the accepted answer to avoid the dense matrix diff computation (which can easily cause memory issues) and uses range (not xrange).
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def baseline_als(y, lam, p, niter=10):
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L-2))
    w = np.ones(L)
    for i in range(niter):
        W = sparse.spdiags(w, 0, L, L)
        Z = W + lam * D.dot(D.transpose())
        z = spsolve(Z, w*y)
        w = p * (y > z) + (1-p) * (y < z)
    return z
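A minimal usage sketch (the synthetic data and parameter values here are my own illustration, not from the answers above; typical parameter ranges are discussed further down in this thread):

import numpy as np
import matplotlib.pyplot as plt

# synthetic Raman-like spectrum: one peak sitting on a slowly varying baseline
x = np.linspace(0, 1000, 1000)
peak = 100.0 * np.exp(-((x - 500.0) / 20.0) ** 2)
true_baseline = 0.05 * x + 20.0
y = peak + true_baseline + np.random.randn(x.size)

z = baseline_als(y, lam=1e5, p=0.01)   # estimated baseline
corrected = y - z                      # baseline-corrected spectrum

plt.plot(x, y, label='raw')
plt.plot(x, z, label='estimated baseline')
plt.plot(x, corrected, label='corrected')
plt.legend()
plt.show()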
There is a Python library available for baseline correction/removal. It has the ModPoly, IModPoly and ZhangFit algorithms, which return baseline-corrected results when you input the original values as a Python list or pandas Series and specify the polynomial degree.
Install the library with pip install BaselineRemoval. Below is an example:
from BaselineRemoval import BaselineRemoval
input_array=[10,20,1.5,5,2,9,99,25,47]
polynomial_degree=2 #only needed for Modpoly and IModPoly algorithm
baseObj=BaselineRemoval(input_array)
Modpoly_output=baseObj.ModPoly(polynomial_degree)
Imodpoly_output=baseObj.IModPoly(polynomial_degree)
Zhangfit_output=baseObj.ZhangFit()
print('Original input:',input_array)
print('Modpoly base corrected values:',Modpoly_output)
print('IModPoly base corrected values:',Imodpoly_output)
print('ZhangFit base corrected values:',Zhangfit_output)
Original input: [10, 20, 1.5, 5, 2, 9, 99, 25, 47]
Modpoly base corrected values: [-1.98455800e-04 1.61793368e+01 1.08455179e+00 5.21544654e+00
7.20210508e-02 2.15427531e+00 8.44622093e+01 -4.17691125e-03
8.75511661e+00]
IModPoly base corrected values: [-0.84912125 15.13786196 -0.11351367 3.89675187 -1.33134142 0.70220645
82.99739548 -1.44577432 7.37269705]
ZhangFit base corrected values: [ 8.49924691e+00 1.84994576e+01 -3.31739230e-04 3.49854060e+00
4.97412948e-01 7.49628529e+00 9.74951576e+01 2.34940300e+01
4.54929023e+01]
Recently, I needed to use this method. The code from the answers works well, but it clearly overuses memory. So, here is my version with optimized memory usage.
def baseline_als_optimized(y, lam, p, niter=10):
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L-2))
    D = lam * D.dot(D.transpose())  # Precompute this term since it does not depend on `w`
    w = np.ones(L)
    W = sparse.spdiags(w, 0, L, L)
    for i in range(niter):
        W.setdiag(w)  # Do not create a new matrix, just update diagonal values
        Z = W + D
        z = spsolve(Z, w*y)
        w = p * (y > z) + (1-p) * (y < z)
    return z
According to my benchmarks below, it is also about 1.5 times faster.
%%timeit -n 1000 -r 10 y = randn(1000)
baseline_als(y, 10000, 0.05) # function from #jpantina's answer
# 20.5 ms ± 382 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)
%%timeit -n 1000 -r 10 y = randn(1000)
baseline_als_optimized(y, 10000, 0.05)
# 13.3 ms ± 874 µs per loop (mean ± std. dev. of 10 runs, 1000 loops each)
NOTE 1: The original article says:
To emphasize the basic simplicity of the algorithm, the number of iterations has been fixed to 10. In practical applications one should check whether the weights show any change; if not, convergence has been attained.
So, it means that the more correct way to stop the iteration is to check that ||w_new - w|| < tolerance.
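A minimal sketch of that stopping rule, grafted onto the baseline_als_optimized function above (the function name and the tolerance value are arbitrary choices of mine, not from the paper):

import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

def baseline_als_converged(y, lam, p, niter=10, tol=1e-6):
    L = len(y)
    D = sparse.diags([1, -2, 1], [0, -1, -2], shape=(L, L - 2))
    D = lam * D.dot(D.transpose())
    w = np.ones(L)
    W = sparse.spdiags(w, 0, L, L)
    for i in range(niter):
        W.setdiag(w)
        Z = W + D
        z = spsolve(Z, w * y)
        w_new = p * (y > z) + (1 - p) * (y < z)
        if np.linalg.norm(w_new - w) < tol:   # weights no longer change: converged
            w = w_new
            break
        w = w_new
    return z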
NOTE 2: Another useful quote (from #glycoaddict's comment) gives an idea how to choose values of the parameters.
There are two parameters: p for asymmetry and λ for smoothness. Both have to be
tuned to the data at hand. We found that generally 0.001 ≤ p ≤ 0.1 is a good choice (for a signal with positive peaks) and 10^2 ≤ λ ≤ 10^9, but exceptions may occur. In any case one should vary λ on a grid that is approximately linear for log λ. Often visual inspection is sufficient to get good parameter values.
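In code, such a log-linear grid for λ is simply (a one-line illustration of the quoted advice):

import numpy as np
lams = 10.0 ** np.arange(2, 10)   # 1e2, 1e3, ..., 1e9 candidate smoothness values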
I worked out the version of the algorithm referenced by glinka in a previous comment, which is an improvement of the penalized weighted least-squares method published in a relatively recent paper. I took Rustam Guliev's code to build this one:
from scipy import sparse
from scipy.sparse import linalg
import numpy as np
from numpy.linalg import norm

def baseline_arPLS(y, ratio=1e-6, lam=100, niter=10, full_output=False):
    L = len(y)
    diag = np.ones(L - 2)
    D = sparse.spdiags([diag, -2*diag, diag], [0, -1, -2], L, L - 2)
    H = lam * D.dot(D.T)  # The transposes are flipped w.r.t the Algorithm on pg. 252
    w = np.ones(L)
    W = sparse.spdiags(w, 0, L, L)
    crit = 1
    count = 0
    while crit > ratio:
        z = linalg.spsolve(W + H, W * y)
        d = y - z
        dn = d[d < 0]
        m = np.mean(dn)
        s = np.std(dn)
        w_new = 1 / (1 + np.exp(2 * (d - (2*s - m))/s))
        crit = norm(w_new - w) / norm(w)
        w = w_new
        W.setdiag(w)  # Do not create a new matrix, just update diagonal values
        count += 1
        if count > niter:
            print('Maximum number of iterations exceeded')
            break
    if full_output:
        info = {'num_iter': count, 'stop_criterion': crit}
        return z, d, info
    else:
        return z
In order to test the algorithm, I created a spectrum similar to the one shown in Fig. 3 of the paper, by first generating a simulated spectrum consisting of multiple Gaussian peaks:
def spectra_model(x):
    coeff = np.array([100, 200, 100])
    mean = np.array([300, 750, 800])
    stdv = np.array([15, 30, 15])
    terms = []
    for ind in range(len(coeff)):
        term = coeff[ind] * np.exp(-((x - mean[ind]) / stdv[ind])**2)
        terms.append(term)
    spectra = sum(terms)
    return spectra

x_vals = np.arange(1, 1001)
spectra_sim = spectra_model(x_vals)
Then, I created a third-order interpolating polynomial using 4 points taken directly from the paper:
from scipy.interpolate import CubicSpline
x_poly = np.array([0, 250, 700, 1000])
y_poly = np.array([200, 180, 230, 200])
poly = CubicSpline(x_poly, y_poly)
baseline = poly(x_vals)
noise = np.random.randn(len(x_vals)) * 0.1
spectra_base = spectra_sim + baseline + noise
Finally, I used the baseline correction algorithm to subtract the baseline out of the altered spectra (spectra_base):
_, spectra_arPLS, info = baseline_arPLS(spectra_base, lam=1e4, niter=10,
full_output=True)
The results were (for reference, I compared with the pure ALS implementation by Rustam Guliev, using lam = 1e4 and p = 0.001):
I know this is an old question, but I stumbled upon it a few months ago and implemented the equivalent answer using scipy.sparse routines.
import numpy as np
from scipy import sparse
from scipy.sparse.linalg import spsolve

# Baseline removal
def baseline_als(y, lam, p, niter=10):
    s = len(y)
    # assemble difference matrix
    D0 = sparse.eye(s)
    d1 = [np.ones(s-1) * -2]
    D1 = sparse.diags(d1, [-1])
    d2 = [np.ones(s-2) * 1]
    D2 = sparse.diags(d2, [-2])
    D = D0 + D2 + D1
    w = np.ones(s)
    for i in range(niter):
        W = sparse.diags([w], [0])
        Z = W + lam * D.dot(D.transpose())
        z = spsolve(Z, w*y)
        w = p * (y > z) + (1-p) * (y < z)
    return z
Cheers,
Pedro.