Swap and group column names in a pandas DataFrame - python

I have a data frame with several quantitative columns and one qualitative column. I would like to use describe to compute stats grouped by the qualitative column, but I do not obtain the order I want for the column levels. Here is an example:
import numpy as np
import pandas as pd

df = pd.DataFrame({k: np.random.random(10) for k in "ABC"})
df["qual"] = 5 * ["init"] + 5 * ["final"]
The DataFrame looks like:
A B C qual
0 0.298217 0.675818 0.076533 init
1 0.015442 0.264924 0.624483 init
2 0.096961 0.702419 0.027134 init
3 0.481312 0.910477 0.796395 init
4 0.166774 0.319054 0.645250 init
5 0.609148 0.697818 0.151092 final
6 0.715744 0.067429 0.761562 final
7 0.748201 0.803647 0.482738 final
8 0.098323 0.614257 0.232904 final
9 0.033003 0.590819 0.943126 final
Now I would like to group by the qual column and compute statistical descriptors using describe. I did the following:
ddf = df.groupby("qual").describe().transpose()
ddf.unstack(level=0)
And I got
qual final init
A B C A B C
count 5.000000 5.000000 5.000000 5.000000 5.000000 5.000000
mean 0.440884 0.554794 0.514284 0.211741 0.574539 0.433959
std 0.347138 0.284931 0.338057 0.182946 0.274135 0.355515
min 0.033003 0.067429 0.151092 0.015442 0.264924 0.027134
25% 0.098323 0.590819 0.232904 0.096961 0.319054 0.076533
50% 0.609148 0.614257 0.482738 0.166774 0.675818 0.624483
75% 0.715744 0.697818 0.761562 0.298217 0.702419 0.645250
max 0.748201 0.803647 0.943126 0.481312 0.910477 0.796395
I am close to what I want, but I would like to swap and group the column index so that it looks like:
A B C
qual initial final initial final initial final
Is there a way to do it?

Use columns.swaplevel and then sort_index by level=0 and axis='columns':
ddf = df.groupby('qual').describe().T.unstack(level=0)
ddf.columns = ddf.columns.swaplevel(0,1)
ddf = ddf.sort_index(level=0, axis='columns')
Or in one line using DataFrame.swaplevel instead of index.swaplevel:
ddf = ddf.swaplevel(0,1, axis=1).sort_index(level=0, axis='columns')
A B C
qual final init final init final init
count 5.00 5.00 5.00 5.00 5.00 5.00
mean 0.44 0.21 0.55 0.57 0.51 0.43
std 0.35 0.18 0.28 0.27 0.34 0.36
min 0.03 0.02 0.07 0.26 0.15 0.03
25% 0.10 0.10 0.59 0.32 0.23 0.08
50% 0.61 0.17 0.61 0.68 0.48 0.62
75% 0.72 0.30 0.70 0.70 0.76 0.65
max 0.75 0.48 0.80 0.91 0.94 0.80
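For completeness, here is a self-contained sketch of the whole pipeline (my own rewrite of the steps above; the random seed is not part of the question, it is only there so the example is reproducible):

import numpy as np
import pandas as pd

np.random.seed(0)  # only so the example is reproducible
df = pd.DataFrame({k: np.random.random(10) for k in "ABC"})
df["qual"] = 5 * ["init"] + 5 * ["final"]

# describe per group, move the stats to rows, then make 'qual' the inner column level
ddf = df.groupby("qual").describe().T.unstack(level=0)
ddf = ddf.swaplevel(0, 1, axis=1).sort_index(level=0, axis="columns")
print(ddf.round(2))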

Try ddf.stack().unstack(level=[0,2]) in place of ddf.unstack(level=0).

Related

How to highlight pandas data frame on selected rows

I have the data like this:
df:
A-A A-B A-C A-D A-E
Tg 0.37 10.24 5.02 0.63 20.30
USL 0.39 10.26 5.04 0.65 20.32
LSL 0.35 10.22 5.00 0.63 20.28
1 0.35 10.23 5.05 0.65 20.45
2 0.36 10.19 5.07 0.67 20.25
3 0.34 10.25 5.03 0.66 20.33
4 0.35 10.20 5.08 0.69 20.22
5 0.33 10.17 5.05 0.62 20.40
Max 0.36 10.25 5.08 0.69 20.45
Min 0.33 10.17 5.03 0.62 20.22
I would like to color-highlight the data (index 1-5 in this df) by comparing Max and Min of the data (last two rows) to USL and LSL respectively. If Max > USL or Min < LSL, I would like to highlight the corresponding data points in red; if Max == USL or Min == LSL, the corresponding data points in yellow; and otherwise everything in green.
I tried this:
highlight = np.where(df.loc['Max']>df.loc['USL'], 'background-color: red', '')
df.style.apply(lambda _: highlight)
but I get the error:
ValueError: Function <function <lambda> at 0x7fb681b601f0> created invalid index labels.
Usually, this is the result of the function returning a Series which contains invalid labels, or returning an incorrectly shaped, list-like object which cannot be mapped to labels, possibly due to applying the function along the wrong axis.
Result index has shape: (5,)
Expected index shape: (10,)
Out[58]:
<pandas.io.formats.style.Styler at 0x7fb681b52e20>
Use a custom function to create a DataFrame of styles based on the conditions:
#changed data for test
print (df)
A-A A-B A-C A-D
Tg 0.37 10.24 5.02 0.63
USL 0.39 10.26 5.04 0.65
LSL 0.33 0.22 5.00 10.63
1 0.35 10.23 5.05 0.65
2 0.36 10.19 5.07 0.67
3 0.34 10.25 5.03 0.66
4 0.35 10.20 5.08 0.69
5 0.33 10.17 5.05 0.62
Max 0.36 10.25 5.08 0.69
Min 0.33 10.17 5.03 0.62
def hightlight(x):
    c1 = 'background-color:red'
    c2 = 'background-color:yellow'
    c3 = 'background-color:green'
    # if values of index are strings
    r = list('12345')
    # if values of index are integers
    # r = [1,2,3,4,5]
    m1 = (x.loc['Max'] > x.loc['USL']) | (x.loc['Min'] < x.loc['LSL'])
    print (m1)
    m2 = (x.loc['Max'] == x.loc['USL']) | (x.loc['Min'] == x.loc['LSL'])
    print (m2)
    # DataFrame with the same index and column names as the original, filled with empty strings
    df1 = pd.DataFrame('', index=x.index, columns=x.columns)
    # modify values of df1 columns by boolean mask
    df1.loc[r, :] = np.select([m1, m2], [c1, c2], default=c3)
    return df1

df.style.apply(hightlight, axis=None)
EDIT: To compare rows 1-5 as well as Min/Max against USL/LSL, use:
def hightlight(x):
    c1 = 'background-color:red'
    c2 = 'background-color:yellow'
    c3 = 'background-color:green'
    # if values of index are strings
    r = list('12345')
    # if values of index are integers
    # r = [1,2,3,4,5]
    r += ['Max','Min']
    m1 = (x.loc[r] > x.loc['USL']) | (x.loc[r] < x.loc['LSL'])
    m2 = (x.loc[r] == x.loc['USL']) | (x.loc[r] == x.loc['LSL'])
    # DataFrame with the same index and column names as the original, filled with empty strings
    df1 = pd.DataFrame('', index=x.index, columns=x.columns)
    # modify values of df1 columns by boolean mask
    df1.loc[r, :] = np.select([m1, m2], [c1, c2], default=c3)
    return df1

df.style.apply(hightlight, axis=None)
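As a small usage note (my own addition, not part of the answer): the resulting Styler can be written out, for example to Excel or HTML; to_html assumes a reasonably recent pandas (>= 1.3), and the Excel export needs openpyxl installed.

styled = df.style.apply(hightlight, axis=None)
styled.to_excel("highlighted.xlsx", engine="openpyxl")  # colors are kept in the spreadsheet
html = styled.to_html()                                 # or render to an HTML string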

Multivariate second order polynomial regression python

I am dealing with multivariate regression problems.
My dataset is something like X = (nsample, nx) and Y = (nsample, ny).
nx and ny may vary from dataset to dataset and from case to case, so they should be kept general in the code.
I would like to determine the coefficients of the multivariate polynomial regression by minimizing the root mean square error.
I thought to split the problem into ny separate regressions, so for each of them my dataset is X = (nsample, nx) and Y = (nsample, 1). So, for each dependent variable Uj the second-order polynomial has the form Uj = beta_0 + sum_i beta_i*x_i + sum_{i<=k} beta_ik*x_i*x_k.
I coded the function in python as:
def func(x, nx, pars0, pars1, pars2):
    y = pars0                            # pars0 = bias
    for i in range(nx):
        y = y + pars1[i]*x[i]            # pars1 linear coeff (beta_i in the equation)
        for j in range(nx):
            if (j < i):
                continue
            y = y + pars2[i,j]*x[i]*x[j]
            # diag pars2 = coeff of x^2 (beta_ii in the equation)
            # upper triangle pars2 = coeff of x_i*x_k (beta_ik in the equation)
    return y
and the root mean square error as:
def resid(nsample, nx, pars0, pars1, pars2, x, y):
    res = 0.0
    for i in range(nsample):
        y_pred = func(x[i], nx, pars0, pars1, pars2)
        res = res + ((y_pred - y[i]) ** 2)
    res = res / nsample
    res = res**0.5
    return res
To determine the coefficients I thought to use scipy.optimize.minimize, but it does not work (see example_1, example_2).
Any ideas or advice? Should I use sklearn?
EDIT: Toy test data, nx = 3, ny = 1 (the first three columns are X, the last column is Y):
0.20 -0.02 0.20 1.0229781
0.20 -0.02 0.40 1.0218807
0.20 -0.02 0.60 1.0220439
0.20 -0.02 0.80 1.0227083
0.20 -0.02 1.00 1.0237960
0.20 -0.02 1.20 1.0255770
0.20 -0.02 1.40 1.0284888
0.20 -0.06 0.20 1.0123552
0.24 -0.02 1.40 1.0295350
0.24 -0.06 0.20 1.0125935
0.24 -0.06 0.40 1.0195798
0.24 -0.06 0.60 1.0124632
0.24 -0.06 0.80 1.0131748
0.24 -0.06 1.00 1.0141751
0.24 -0.06 1.20 1.0153533
0.24 -0.06 1.40 1.0170036
0.24 -0.10 0.20 1.0026915
0.24 -0.10 0.40 1.0058125
0.24 -0.10 0.60 1.0055921
0.24 -0.10 0.80 1.0057868
0.24 -0.10 1.00 1.0014004
0.24 -0.10 1.20 1.0026257
0.24 -0.10 1.40 1.0024578
0.30 -0.18 0.60 0.9748765
0.30 -0.18 0.80 0.9753220
0.30 -0.18 1.00 0.9740970
0.30 -0.18 1.20 0.9727272
0.30 -0.18 1.40 0.9732258
0.30 -0.20 0.20 0.9722360
0.30 -0.20 0.40 0.9687567
0.30 -0.20 0.60 0.9676569
0.30 -0.20 0.80 0.9672319
0.30 -0.20 1.00 0.9682354
0.30 -0.20 1.20 0.9674461
0.30 -0.20 1.40 0.9673747
0.36 -0.02 0.20 1.0272033
0.36 -0.02 0.40 1.0265790
0.36 -0.02 0.60 1.0271688
0.36 -0.02 0.80 1.0277286
0.36 -0.02 1.00 1.0285388
0.36 -0.02 1.20 1.0295619
0.36 -0.02 1.40 1.0310734
0.36 -0.06 0.20 1.0159603
0.36 -0.06 0.40 1.0159753
0.36 -0.06 0.60 1.0161890
0.36 -0.06 0.80 1.0153346
0.36 -0.06 1.00 1.0159790
0.36 -0.06 1.20 1.0167520
0.36 -0.06 1.40 1.0176916
0.36 -0.10 0.20 1.0048287
0.36 -0.10 0.40 1.0034699
0.36 -0.10 0.60 1.0032798
0.36 -0.10 0.80 1.0037224
0.36 -0.10 1.00 1.0059301
0.36 -0.10 1.20 1.0047114
0.36 -0.10 1.40 1.0041287
0.36 -0.14 0.20 0.9926268
0.40 -0.08 0.80 1.0089013
0.40 -0.08 1.20 1.0096265
0.40 -0.08 1.40 1.0103305
0.40 -0.10 0.20 1.0045464
0.40 -0.10 0.40 1.0041031
0.40 -0.10 0.60 1.0035650
0.40 -0.10 0.80 1.0034553
0.40 -0.10 1.00 1.0034699
0.40 -0.10 1.20 1.0030276
0.40 -0.10 1.40 1.0035284
0.40 -0.10 1.60 1.0042166
0.40 -0.14 0.20 0.9924336
0.40 -0.14 0.40 0.9914971
0.40 -0.14 0.60 0.9910082
0.40 -0.14 0.80 0.9903772
0.40 -0.14 1.00 0.9900816
Minimizing error is a huge, complex problem. As such, a lot of very clever people have thought up a lot of cool solutions. Here are a few:
(out of all of them, I think bayesian optimization with sklearn might be a good choice for your use case, though I've never used it)
Random approaches:
genetic algorithms: formats your problem like chromosomes in a genome and "breeds" an optimal solution (a personal favorite of mine)
simulated annealing: formats your problem like hot metal being annealed, which attempts to move to a stable state while losing heat
random search: better than it sounds. Randomly tests a variety of input variables.
Grid search: simple to implement, but often less effective than methods which employ true randomness (duplicated exploration along particular axes of interest often wastes computational resources)
A lot of these come up in hyperparameter optimization for ML models.
More Prescriptive Approaches:
Gradient Descent: uses the gradient calculated in a differentiable function to step toward local minima
DeepAR: uses Bayesian optimization, combined with random search, to reduce loss in hyperparameter tuning. While I believe this is only available on AWS, it looks like sklearn has an implementation of Bayesian optimization.
scipy.optimize.minimize: I know you're already using this, but there are 15 different algorithms that can be used by changing the method flag.
The rub
While error minimization is conceptually simple, in practice complex error topologies in high-dimensional spaces can be very difficult to traverse efficiently. It harks back to local and global extrema, the explore/exploit problem, and our mathematical understanding of what computational complexity even is. Often, good error reduction is accomplished through a combination of a thorough understanding of the problem and experimentation with multiple algorithms and hyperparameters. In ML, this is often referred to as hyperparameter tuning, and is a sort of "meta" error-reduction step, if you will.
note: feel free to recommend more optimization methods, I'll add them to the list.
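Since the question already uses scipy.optimize.minimize, here is a rough sketch of how it could be wired to the RMSE objective (my own illustration, not part of this answer; unpack and rmse are made-up helper names, and X, Y are assumed to be loaded from the toy data with shapes (nsample, 3) and (nsample,)):

import numpy as np
from scipy.optimize import minimize

def unpack(p, nx):
    # split a flat parameter vector into bias, linear and quadratic parts
    bias = p[0]
    lin = p[1:1 + nx]
    quad = np.zeros((nx, nx))
    quad[np.triu_indices(nx)] = p[1 + nx:]   # upper triangle incl. diagonal
    return bias, lin, quad

def rmse(p, X, Y):
    nx = X.shape[1]
    bias, lin, quad = unpack(p, nx)
    pred = bias + X @ lin + np.einsum('ni,ij,nj->n', X, quad, X)
    return np.sqrt(np.mean((pred - Y) ** 2))

# nx = 3 -> 1 bias + 3 linear + 6 upper-triangular quadratic = 10 parameters
p0 = np.zeros(10)
res = minimize(rmse, p0, args=(X, Y), method='Nelder-Mead',
               options={'maxiter': 20000, 'xatol': 1e-8, 'fatol': 1e-10})
print(res.fun, res.x)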
I have an example using Simulated Annealing, as mentioned in the nice list in this thread.
First, I need to load the data and define the objective function. I saved your data in data.csv and loaded it with
import pandas as pd
data = pd.read_csv("../data.csv", sep=" ", header=None, engine='python')
And fetch your values with
X = data[ [0,1,2] ].values
Y = data[ 3 ].values
I define your poly function with
import numpy as np
from itertools import combinations

def poly_function(X, beta):
    X_dimension = X.shape[1]
    i, j = zip(*list(combinations(range(X_dimension), 2)))
    X_cross = X[:, i] * X[:, j]
    X_expanded = np.concatenate([X, X**2, X_cross], axis=1)
    assert X_expanded.shape[1] == beta.shape[0], "Expect beta to be of size {}".format(X_expanded.shape[1])
    return np.matmul(X_expanded, beta)
For Simulated Annealing we simply need an objective
def obj(beta, X=X, Y=Y):
    Y_hat = poly_function(X, beta)
    BOOSTER = 10**5
    return BOOSTER * np.mean((Y - Y_hat)**2)**.5
and some proposals
def small_delta(beta):
    new_beta = beta.copy()
    random_index = np.random.randint(0, new_beta.shape[0])
    new_beta[random_index] += (np.random.random() - .5) * .01
    return new_beta

def large_delta(beta):
    new_beta = beta.copy()
    random_index = np.random.randint(0, new_beta.shape[0])
    new_beta[random_index] += np.random.random() - .5
    return new_beta
And random start
def random_beta():
    return np.random.random(size=9)
And SA with
import frigidum

local_opt = frigidum.sa(random_start=random_beta,
                        neighbours=[small_delta, large_delta],
                        objective_function=obj,
                        T_start=10**2,
                        T_stop=10**-12,
                        repeats=10**3,
                        copy_state=frigidum.annealing.copy)
The RMSE I found with your data was around 0.026254 with beta
array([ 7.73168440e+00, 2.93929578e+00, 4.10133180e-02, -1.37266444e+01,
-3.43978686e+00, -1.12816177e-02, -1.00262307e+01, -3.12327590e-02,
9.07369588e-02])
where you need to know that it is built up as (X1, X2, X3, X1**2, X2**2, X3**2, X1*X2, X1*X3, X2*X3).
A longer run with more repeats can give an error of 0.026150 with beta
array([ 7.89212770e+00, 3.24138652e+00, 1.24436937e-02, -1.41549553e+01,
-3.31912739e+00, -5.54411310e-03, -1.08317125e+01, 2.09684769e-02,
6.84396750e-02])
You can try the statsmodels library combined with the explanation from this link to fit polynomial models.
https://ostwalprasad.github.io/machine-learning/Polynomial-Regression-using-statsmodel.html
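Roughly, the idea looks like this (a sketch of my own, not the code from that link; it assumes the toy data with nx = 3 and that X and Y are already loaded as numpy arrays):

import numpy as np
import statsmodels.api as sm

# second-order design matrix built by hand: bias, x_i, x_i^2 and cross terms
x1, x2, x3 = X[:, 0], X[:, 1], X[:, 2]
design = np.column_stack([np.ones(len(X)), x1, x2, x3,
                          x1**2, x2**2, x3**2,
                          x1*x2, x1*x3, x2*x3])

results = sm.OLS(Y, design).fit()
print(results.params)     # fitted beta coefficients
print(results.summary())  # coefficient statistics, R^2, etc.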
After some trial and error, I finally came up with a solution. The problem can be seen as linear using a change of variables, so I used scikit-learn to build the model. After some tests on real cases it works really well.
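A minimal sketch of what that change of variables might look like (my own illustration, not the OP's actual code): expanding X into second-order polynomial features turns the problem into ordinary linear regression. X and Y are assumed to be the arrays from the question.

import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

poly = PolynomialFeatures(degree=2, include_bias=False)
X_poly = poly.fit_transform(X)              # columns: x_i, x_i^2, x_i*x_j

model = LinearRegression().fit(X_poly, Y)   # also handles ny > 1 targets at once
Y_pred = model.predict(X_poly)
print(np.sqrt(np.mean((Y_pred - Y) ** 2)))  # RMSE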

Remove Columns from DataFrame based on Standard Deviation

I am trying to do something that I think should be rather simple but I am stuck.
I would like to be able to get the standard deviation of each column in my dataframe and remove that column if the standard deviation is below a set number. This is as far as I have gotten.
import numpy as np
import pandas as pd

stdev_min = 0.6
df = pd.DataFrame(np.random.randn(20, 5), columns=list('ABCDE'))
namelist = list(df.columns.values.tolist())
stdev = pd.DataFrame(df.std())
I've tried a few things but nothing worth mentioning, any help would be greatly appreciated.
You don't need any loops.
You rarely do with pandas.
In this case, you need boolean indexing:
import pandas
import numpy
numpy.random.seed(37)
stdev_min = 0.95
df = pandas.DataFrame(numpy.random.randn(20, 5), columns=list('ABCDE'))
So now df.std() gives me:
A 0.928547
B 0.859394
C 0.998692
D 1.187380
E 1.092970
dtype: float64
so I can do
df.loc[:, df.std() > stdev_min]
And get:
C D E
0 0.35 -1.30 1.52
1 -0.45 0.96 -0.83
2 0.52 -0.06 -0.03
3 1.89 0.40 0.19
4 -0.27 -2.07 -0.71
5 -1.72 -0.40 1.27
6 0.44 -2.05 -0.23
7 1.76 0.06 0.36
8 -0.30 -2.05 1.68
9 0.34 1.26 -1.08
10 0.10 -0.48 -1.74
11 1.95 -0.08 1.51
12 0.43 -0.06 -0.63
13 -0.30 -1.06 0.57
14 -0.95 -1.45 0.93
15 -1.13 2.23 -0.88
16 -0.77 0.86 0.58
17 0.93 -0.11 -1.29
18 -0.82 0.03 -0.44
19 0.40 1.13 -1.89
Here's a way to do this.
Iterate through each column, get the standard deviation for the column, and check whether it is less than the minimum standard deviation value. If it is, drop the column using inplace=True:
stdev_min = 0.6
df = pd.DataFrame(np.random.randn(10, 5), columns=list('ABCDE'))
for col in df.columns:
    print(col, df[col].std())
    if df[col].std() < stdev_min:
        df.drop(col, axis='columns', inplace=True)

print(df)
Output:
A 0.5046725928657507
B 1.1382221163449697
C 1.0318169576864502
D 0.7129102193331575
E 1.3805207184389312
The standard deviation of A is less than 0.6, so the column got dropped.
B C D E
0 -0.923822 1.155547 -0.601033 -0.066207
1 0.068844 0.426304 -0.376052 0.368574
2 0.585187 -0.367270 0.530934 0.086811
3 0.021466 1.381579 0.483134 -0.300033
4 0.351492 -0.648734 -0.736213 0.827953
5 0.155731 -0.004504 0.315432 0.310515
6 -1.092933 1.341933 -0.672240 -3.482960
7 -0.587766 0.227846 0.246781 1.978528
8 1.565055 0.527668 -0.371854 -0.030196
9 -2.634862 -1.973874 1.508080 -0.362073
Did a few more runs. Here's an example with before and after.
DF before
A B C D E
0 0.496740 0.799021 1.655287 0.091138 0.309186
1 -0.580667 -0.749337 -0.521909 -0.529410 1.010981
2 0.212731 0.126389 -2.244500 0.400540 -0.148761
3 -0.424375 -0.832478 -0.030865 -0.561107 0.196268
4 0.229766 0.688040 0.580294 0.941885 1.554929
5 0.676926 -0.062092 -1.452619 0.952388 -0.963857
6 0.683216 0.747429 -1.834337 -0.402467 -0.383881
7 0.834815 -0.770804 1.299346 1.694612 1.171190
8 0.500445 -1.517488 0.610287 -0.601442 0.343389
9 -0.182286 -0.713332 0.526507 1.042717 1.229628
Standard Deviations for each column of DF:
A 0.49088743174291477
B 0.8047513692231202
C 1.333382184686379
D 0.8248456756163864
E 0.8033725216710547
The standard deviation of df['A'] is less than 0.6, so the column got dropped.
DF after dropping the column.
B C D E
0 0.799021 1.655287 0.091138 0.309186
1 -0.749337 -0.521909 -0.529410 1.010981
2 0.126389 -2.244500 0.400540 -0.148761
3 -0.832478 -0.030865 -0.561107 0.196268
4 0.688040 0.580294 0.941885 1.554929
5 -0.062092 -1.452619 0.952388 -0.963857
6 0.747429 -1.834337 -0.402467 -0.383881
7 -0.770804 1.299346 1.694612 1.171190
8 -1.517488 0.610287 -0.601442 0.343389
9 -0.713332 0.526507 1.042717 1.229628

Convert elements in masked astropy Table to np.nan

Consider the simple process of reading a data file with some invalid entries. This is my test.dat file:
16 1035.22 1041.09 24.54 0.30 1.39 0.30 1.80 0.30 2.26 0.30 1.14 0.30 0.28 0.30 0.2884
127 824.57 1105.52 25.02 0.29 0.87 0.29 1.30 0.29 2.12 0.29 0.66 0.29 0.10 0.29 0.2986
182 1015.83 904.93 INDEF 0.28 1.80 0.28 1.64 0.28 2.38 0.28 1.04 0.28 0.06 0.28 0.3271
185 1019.15 1155.09 24.31 0.28 1.40 0.28 1.78 0.28 2.10 0.28 0.87 0.28 0.35 0.28 0.3290
192 1024.80 1045.57 24.27 0.27 1.24 0.27 2.01 0.27 2.40 0.27 0.90 0.27 0.09 0.27 0.3328
197 1035.99 876.04 24.10 0.27 1.23 0.27 1.52 0.27 2.59 0.27 0.45 0.27 0.25 0.27 0.3357
198 1110.80 1087.97 24.53 0.27 1.49 0.27 1.71 0.27 2.33 0.27 0.22 0.27 0.00 0.27 0.3362
1103 1168.39 1065.97 24.35 0.27 1.28 0.27 1.29 0.27 2.68 0.27 0.43 0.27 0.26 0.27 0.3388
And this is the code to read it and replace the "bad" values (INDEF) with a float (99.999):
import numpy as np
from astropy.io import ascii
data = ascii.read("test.dat", fill_values=[('INDEF', '0')])
data = data.filled(99.999)
This works just fine, but if I instead try to replace the bad values with a np.nan (i.e., I use the line data = data.filled(np.nan)) I get:
ValueError: cannot convert float NaN to integer
Why is this, and how can I get around it?
As noted, the issue is that the numpy MaskedArray.filled() method seems to try converting the fill value to the appropriate type before checking whether there is actually anything to fill. Since the table in the example has an int column, this fails within numpy (and astropy Table is just calling the filled() method on each column).
This should work:
In [44]: def fill_cols(tbl, fill=np.nan, kind='f'):
    ...:     """
    ...:     In-place fill of ``tbl`` columns which have dtype ``kind``
    ...:     with ``fill`` value.
    ...:     """
    ...:     for col in tbl.itercols():
    ...:         if col.dtype.kind == kind:
    ...:             col[...] = col.filled(fill)
    ...:
In [45]: t = simple_table(masked=True)  # simple_table is from astropy.table.table_helpers
In [46]: t
Out[46]:
<Table masked=True length=3>
a b c
int64 float64 str1
----- ------- ----
-- 1.0 c
2 2.0 --
3 -- e
In [47]: fill_cols(t)
In [48]: t
Out[48]:
<Table masked=True length=3>
a b c
int64 float64 str1
----- ------- ----
-- 1.0 c
2 2.0 --
3 nan e
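Applied to the question's file, usage would look roughly like this (my own sketch, not part of the answer): the float columns get nan in place of the masked INDEF entries, while the integer columns are left alone.

from astropy.io import ascii

data = ascii.read("test.dat", fill_values=[('INDEF', '0')])
fill_cols(data)          # fills the float columns in place with np.nan
print(data['col4'])      # the column that contained INDEF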
I don't think it's primarily a numpy problem, as it works with individual columns:
>>> data['col4'].filled(np.nan)
<Column name='col4' dtype='float64' length=8>
24.54
25.02
nan
24.31
24.27
24.1
24.53
24.35
but you still can't construct a Table from this -
Table([data[n].filled(np.nan) for n in data.colnames])
raises the same error in np.ma.core.
You can explicitly set
data['col4'] = data['col4'].filled(np.nan)
but the table then apparently loses its .filled() method...
I am not that familiar with masked arrays and tables, but since you've already filed a related issue on GitHub, you might want to add this problem there.
This is happening fairly deep in numpy, in numpy.ma.filled. fill values have to be scalars, basically.
A messy solution that fills with nan's and still returns a table could look like:
import numpy as np
from astropy.io import ascii
from astropy.table import Table
def fill_with_nan(t):
    arr = t.as_array()
    arr_list = arr.tolist()
    arr = np.array(arr_list)
    arr[np.equal(arr, None)] = np.nan
    arr = np.array(arr.tolist())
    return Table(arr)
data = ascii.read("test.dat", fill_values=[('INDEF', '0')])
data = fill_with_nan(data)
Cut out the middleman? fill_values=[('INDEF', np.nan)] seems to work.

Should stats.norm.pdf give the same result as stats.gaussian_kde in Python?

I was trying to estimate the PDF of 1-D data using gaussian_kde. However, when I plot the pdf using stats.norm.pdf, it gives me a different result. Please correct me if I am wrong, but I think they should give quite similar results. Here's my code.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from scipy import stats

npeaks = 9
mean = np.array([0.2, 0.3, 0.38, 0.55, 0.65,0.7,0.75,0.8,0.82]) #peak locations
support = np.arange(0,1.01,0.01)
std = 0.03
pkfun = sum(stats.norm.pdf(support, loc=mean[i], scale=std) for i in range(0,npeaks))
df = pd.DataFrame(support)
X = df.iloc[:,0]
min_x, max_x = X.min(), X.max()
plt.figure(1)
plt.plot(support,pkfun)
kernel = stats.gaussian_kde(X)
grid = 100j
X= np.mgrid[min_x:max_x:grid]
Z = np.reshape(kernel(X), X.shape)
# plot KDE
plt.figure(2)
plt.plot(X, Z)
plt.show()
Also, the first derivative of the stats.gaussian_kde estimate was far from the original signal, whereas the first derivative of the stats.norm.pdf result does make sense. So I am assuming I might have an error in my code above.
Value of X= np.mgrid[min_x:max_x:grid]:
[
0. 0.01010101 0.02020202 0.03030303 0.04040404 0.05050505
0.06060606 0.07070707 0.08080808 0.09090909 0.1010101 0.11111111
0.12121212 0.13131313 0.14141414 0.15151515 0.16161616 0.17171717
0.18181818 0.19191919 0.2020202 0.21212121 0.22222222 0.23232323
0.24242424 0.25252525 0.26262626 0.27272727 0.28282828 0.29292929
0.3030303 0.31313131 0.32323232 0.33333333 0.34343434 0.35353535
0.36363636 0.37373737 0.38383838 0.39393939 0.4040404 0.41414141
0.42424242 0.43434343 0.44444444 0.45454545 0.46464646 0.47474747
0.48484848 0.49494949 0.50505051 0.51515152 0.52525253 0.53535354
0.54545455 0.55555556 0.56565657 0.57575758 0.58585859 0.5959596
0.60606061 0.61616162 0.62626263 0.63636364 0.64646465 0.65656566
0.66666667 0.67676768 0.68686869 0.6969697 0.70707071 0.71717172
0.72727273 0.73737374 0.74747475 0.75757576 0.76767677 0.77777778
0.78787879 0.7979798 0.80808081 0.81818182 0.82828283 0.83838384
0.84848485 0.85858586 0.86868687 0.87878788 0.88888889 0.8989899
0.90909091 0.91919192 0.92929293 0.93939394 0.94949495 0.95959596
0.96969697 0.97979798 0.98989899 1. ]
Value of X = df.iloc[:,0]:
[ 0. 0.01 0.02 0.03 0.04 0.05 0.06 0.07 0.08 0.09 0.1 0.11
0.12 0.13 0.14 0.15 0.16 0.17 0.18 0.19 0.2 0.21 0.22 0.23
0.24 0.25 0.26 0.27 0.28 0.29 0.3 0.31 0.32 0.33 0.34 0.35
0.36 0.37 0.38 0.39 0.4 0.41 0.42 0.43 0.44 0.45 0.46 0.47
0.48 0.49 0.5 0.51 0.52 0.53 0.54 0.55 0.56 0.57 0.58 0.59
0.6 0.61 0.62 0.63 0.64 0.65 0.66 0.67 0.68 0.69 0.7 0.71
0.72 0.73 0.74 0.75 0.76 0.77 0.78 0.79 0.8 0.81 0.82 0.83
0.84 0.85 0.86 0.87 0.88 0.89 0.9 0.91 0.92 0.93 0.94 0.95
0.96 0.97 0.98 0.99 1. ]
In the line below you evaluate the normal pdf at each of the 9 peak locations over the support points with std = 0.03, and then sum the resulting arrays element-wise:
pkfun = sum(stats.norm.pdf(support, loc=mean[i], scale=std) for i in range(0, npeaks))
So you get a curve with 9 narrow peaks (narrow because std = 0.03). Are you sure that was your purpose with this line? It will never look similar to the kernel density estimate, because gaussian_kde is fitted to the evenly spaced support values themselves rather than to samples drawn from your mixture of peaks.
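To illustrate the difference (a sketch of my own, not from the answer): if the KDE is fitted to samples drawn from the 9-peak mixture instead of to the evenly spaced support values, the two curves become comparable. The seed and the sample size of 5000 are arbitrary choices.

import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

np.random.seed(0)
means = np.array([0.2, 0.3, 0.38, 0.55, 0.65, 0.7, 0.75, 0.8, 0.82])
std = 0.03
support = np.linspace(0, 1, 101)

# analytic mixture pdf, normalised by the number of components
mix_pdf = sum(stats.norm.pdf(support, loc=m, scale=std) for m in means) / len(means)

# draw samples from the mixture and estimate their density with a KDE
samples = np.random.normal(loc=np.random.choice(means, size=5000), scale=std)
kde = stats.gaussian_kde(samples)

plt.plot(support, mix_pdf, label="mixture of norm.pdf")
plt.plot(support, kde(support), label="gaussian_kde of samples")
plt.legend()
plt.show()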
