I am aware of statsmodels.tools.tools.ECDF, but since the calculation of an empirical cumulative distribution function (ECDF) is pretty straightforward and I want to minimise dependencies in my project, I want to code it manually.
For a given list / np.array / pandas.Series, the ECDF of each element x is, as given on Wikipedia, the fraction of observations less than or equal to x: ECDF(x) = (number of values <= x) / n.
I have the pandas DataFrame dfser below and I want to get the ECDF of the values column. My two one-liner solutions are given as well.
Is there a faster way to do this? Speed is important in my application.
# Note that in my case indices are unique identifiers so I cannot reset them.
import numpy as np
import pandas as pd
# all indices are unique, but there may be duplicate measurement values (that belong to different indices).
dfser = pd.DataFrame({'group': ['a','b','b','a','d','c','e','e','c','a','b','d','d','c','d','e','e','a'],
                      'values': [2.01899E-06, 1.12186E-07, 8.97467E-07, 2.91257E-06, 1.93733E-05,
                                 0.00017889, 0.000120963, 4.27643E-07, 3.33614E-07, 2.08352E-12,
                                 1.39478E-05, 4.28255E-08, 9.7619E-06, 8.51787E-09, 1.28344E-09,
                                 3.5063E-05, 0.01732035, 2.08352E-12]},
                     index=[123, 532, 235, 645, 747, 856, 345, 245, 845, 248, 901, 712, 162, 126,
                            198, 748, 127, 395])
# My 1st Solution - list comprehension
dfser['ecdf']=[sum( dfser['values'] <= x)/float(dfser['values'].size) for x in dfser['values']]
# My 2nd Solution - ranking
dfser['rank'] = dfser['values'].rank(ascending = 0)
dfser['ecdf_r']=(len(dfser)-dfser['rank']+1)/len(dfser)
dfser
group values ecdf rank ecdf_r
123 a 2.018990e-06 0.555556 9.0 0.555556
532 b 1.121860e-07 0.333333 13.0 0.333333
235 b 8.974670e-07 0.500000 10.0 0.500000
645 a 2.912570e-06 0.611111 8.0 0.611111
747 d 1.937330e-05 0.777778 5.0 0.777778
856 c 1.788900e-04 0.944444 2.0 0.944444
345 e 1.209630e-04 0.888889 3.0 0.888889
245 e 4.276430e-07 0.444444 11.0 0.444444
845 c 3.336140e-07 0.388889 12.0 0.388889
248 a 2.083520e-12 0.111111 17.5 0.083333
901 b 1.394780e-05 0.722222 6.0 0.722222
712 d 4.282550e-08 0.277778 14.0 0.277778
162 d 9.761900e-06 0.666667 7.0 0.666667
126 c 8.517870e-09 0.222222 15.0 0.222222
198 d 1.283440e-09 0.166667 16.0 0.166667
748 e 3.506300e-05 0.833333 4.0 0.833333
127 e 1.732035e-02 1.000000 1.0 1.000000
395 a 2.083520e-12 0.111111 17.5 0.083333
Since you are already using pandas, I think it would be silly not to use some of its features (here I assume ser refers to dfser['values']):
In [15]:
import numpy as np
sq = ser.value_counts()
sq.sort_index().cumsum()*1./len(sq)
Out[15]:
2.083520e-12 0.058824
1.283440e-09 0.117647
8.517870e-09 0.176471
4.282550e-08 0.235294
1.121860e-07 0.294118
3.336140e-07 0.352941
4.276430e-07 0.411765
8.974670e-07 0.470588
2.018990e-06 0.529412
2.912570e-06 0.588235
9.761900e-06 0.647059
1.394780e-05 0.705882
1.937330e-05 0.764706
3.506300e-05 0.823529
1.209630e-04 0.882353
1.788900e-04 0.941176
1.732035e-02 1.000000
dtype: float64
And a speed comparison:
In [19]:
%timeit sq.sort_index().cumsum()*1./len(sq)
1000 loops, best of 3: 344 µs per loop
In [18]:
%timeit ser.value_counts().sort_index().cumsum()*1./len(ser.value_counts())
1000 loops, best of 3: 1.58 ms per loop
In [17]:
%timeit [sum( ser <= x)/float(len(ser)) for x in ser]
100 loops, best of 3: 3.31 ms per loop
If the values are all unique, ser.value_counts() is no longer needed; that part (finding the unique values) is the slow bit. All you need in that case is to sort ser.
In [23]:
%timeit np.arange(1, ser.size+1)/float(ser.size)
10000 loops, best of 3: 11.6 µs per loop
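The timing above only covers the arange part; a minimal sketch of the full sorted-value approach, assuming ser is dfser['values'] and the values are unique, might look like:

import numpy as np
import pandas as pd

ser = dfser['values']            # assumption: ser refers to the 'values' column

# Sort the values; the ECDF of the k-th smallest value is k / n.
sorted_ser = ser.sort_values()
ecdf = pd.Series(np.arange(1, sorted_ser.size + 1) / float(sorted_ser.size),
                 index=sorted_ser.index)

# Assignment aligns on the index, so the original row order is preserved.
dfser['ecdf_sorted'] = ecdf

With duplicate values the tied observations would get different ECDF values here, which is why the unique-values caveat matters.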
The fastest version that I can think of is to make it fully vectorized:
In [35]:
np.sum(dfser['values'].values[..., np.newaxis] <= dfser['values'].values.reshape((1, -1)), axis=0) * 1. / dfser['values'].size
Out[35]:
array([ 0.55555556, 0.33333333, 0.5 , 0.61111111, 0.77777778,
0.94444444, 0.88888889, 0.44444444, 0.38888889, 0.11111111,
0.72222222, 0.27777778, 0.66666667, 0.22222222, 0.16666667,
0.83333333, 1. , 0.11111111])
And let's see:
In [37]:
%timeit dfser['ecdf']=[sum( dfser['values'] <= x)/float(dfser['values'].size) for x in dfser['values']]
100 loops, best of 3: 6 ms per loop
In [38]:
%%timeit
dfser['rank'] = dfser['values'].rank(ascending = 0)
dfser['ecdf_r']=(len(dfser)-dfser['rank']+1)/len(dfser)
1000 loops, best of 3: 827 µs per loop
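Not part of the original answer: the descending-rank version above treats ties differently from the <= definition (0.083333 vs 0.111111 for the duplicated 2.08352E-12 in the question's table). A rank with method='max' matches the <= definition and stays about as fast; a sketch:

# rank(method='max') gives each value the count of observations <= it,
# so dividing by n reproduces the ECDF exactly, ties included.
dfser['ecdf_rank'] = dfser['values'].rank(method='max') / len(dfser)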
In [39]:
%timeit np.sum(dfser['values'].values[..., np.newaxis] <= dfser['values'].values.reshape((1, -1)), axis=0) * 1. / dfser['values'].size
10000 loops, best of 3: 91.1 µs per loop
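Also not part of the original answer, but for large Series the pairwise comparison above is O(n^2); a sort plus np.searchsorted keeps everything vectorized at O(n log n). A minimal sketch:

import numpy as np

vals = dfser['values'].values
sorted_vals = np.sort(vals)

# side='right' counts how many observations are <= each value (ties included);
# dividing by n gives the ECDF.
dfser['ecdf_ss'] = np.searchsorted(sorted_vals, vals, side='right') / float(vals.size)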
Related
Why do arange and linspace not produce the same result in the code below?
a = np.arange(12,17,.2, dtype=float)
b = np.linspace(12.,16.8,25, dtype=float)
print(list(a)==list(b))
The number of elements and the elements themselves appear to be the same. But:
a==b
shows they are not all equal.
I expect the output to be True, but the actual output is False.
See https://imgur.com/qEvGcJW
for _a, _b in zip(a, b):
    print(_a, _b)
Based on https://docs.python.org/3/tutorial/floatingpoint.html, you can see from the output below why they are not the same: roughly speaking, arange builds the values by stepping with the (inexactly represented) 0.2, so rounding error accumulates, while linspace computes the points from the interval endpoints:
12.0 12.0
12.2 12.2
12.399999999999999 12.4
12.599999999999998 12.6
12.799999999999997 12.8
12.999999999999996 13.0
13.199999999999996 13.2
13.399999999999995 13.4
13.599999999999994 13.6
13.799999999999994 13.8
13.999999999999993 14.0
14.199999999999992 14.200000000000001
14.399999999999991 14.4
14.59999999999999 14.600000000000001
14.79999999999999 14.8
14.99999999999999 15.0
15.199999999999989 15.200000000000001
15.399999999999988 15.4
15.599999999999987 15.600000000000001
15.799999999999986 15.8
15.999999999999986 16.0
16.199999999999985 16.200000000000003
16.399999999999984 16.400000000000002
16.599999999999984 16.6
16.799999999999983 16.8
As a workaround for this particular problem, we can use np.round:
a = np.arange(12, 17, .2, dtype=float)
a = np.round(a, 1)
b = np.linspace(12., 16.8, 25, dtype=float)
b = np.round(b, 1)
print(np.array_equal(a, b))
It returns True.
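Not part of the original answer, but instead of rounding, a tolerance-based comparison with np.allclose is usually a safer way to compare floating-point arrays; a minimal sketch:

import numpy as np

a = np.arange(12, 17, .2)
b = np.linspace(12., 16.8, 25)

# Element-wise comparison within a small relative/absolute tolerance,
# instead of demanding bit-for-bit equality.
print(np.allclose(a, b))   # True for this example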
I have a dataframe, df, in which I am attempting to fill in values in the empty "Set" column, depending on a condition. The condition is as follows: the value of the 'Set' column needs to be 'IN' whenever the 'valence_median_split' column's value is 'Low_Valence' in the corresponding row, and 'OUT' in all other cases.
Please see below for an example of my attempt to solve this:
df.head()
Out[65]:
ID Category Num Vert_Horizon Description Fem_Valence_Mean \
0 Animals_001_h Animals 1 h Dead Stork 2.40
1 Animals_002_v Animals 2 v Lion 6.31
2 Animals_003_h Animals 3 h Snake 5.14
3 Animals_004_v Animals 4 v Wolf 4.55
4 Animals_005_h Animals 5 h Bat 5.29
Fem_Valence_SD Fem_Av/Ap_Mean Fem_Av/Ap_SD Arousal_Mean ... Contrast \
0 1.30 3.03 1.47 6.72 ... 68.45
1 2.19 5.96 2.24 6.69 ... 32.34
2 1.19 5.14 1.75 5.34 ... 59.92
3 1.87 4.82 2.27 6.84 ... 75.10
4 1.56 4.61 1.81 5.50 ... 59.77
JPEG_size80 LABL LABA LABB Entropy Classification \
0 263028 51.75 -0.39 16.93 7.86
1 250208 52.39 10.63 30.30 6.71
2 190887 55.45 0.25 4.41 7.83
3 282350 49.84 3.82 1.36 7.69
4 329325 54.26 -0.34 -0.95 7.82
valence_median_split temp_selection set
0 Low_Valence Animals_001_h
1 High_Valence NaN
2 Low_Valence Animals_003_h
3 Low_Valence Animals_004_v
4 Low_Valence Animals_005_h
[5 rows x 36 columns]
df['set'] = np.where(df.loc[df['valence_median_split'] == 'Low_Valence'], 'IN', 'OUT')
ValueError: Length of values does not match length of index
I can accomplish this by using loc to separate the df into two different df's, but I am wondering if there is a more elegant solution using np.where or a similar approach.
The problem is that df.loc[df['valence_median_split'] == 'Low_Valence'] returns only the matching rows, so np.where receives fewer values than the index has. Pass the boolean condition itself instead:
df['set'] = np.where(df['valence_median_split'] == 'Low_Valence', 'IN', 'OUT')
If you need .loc:
df.loc[df['valence_median_split'] == 'Low_Valence','set']='IN'
df.loc[df['valence_median_split'] != 'Low_Valence','set']='OUT'
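A minimal self-contained sketch (on a hypothetical three-row frame) showing that both forms agree:

import numpy as np
import pandas as pd

# Hypothetical miniature version of the frame in the question.
df = pd.DataFrame({'valence_median_split': ['Low_Valence', 'High_Valence', 'Low_Valence']})

df['set_where'] = np.where(df['valence_median_split'] == 'Low_Valence', 'IN', 'OUT')

df.loc[df['valence_median_split'] == 'Low_Valence', 'set_loc'] = 'IN'
df.loc[df['valence_median_split'] != 'Low_Valence', 'set_loc'] = 'OUT'

print(df)   # both new columns read IN, OUT, IN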
I need to replace a 0 in a pandas column with the previous row's non-zero value, if and only if the value in the row following the 0 is non-zero.
I.e.
101
92
78
0
107
0
0
would become:
101
92
78
78
107
0
0
Any ideas how to do this would be much appreciated :-)
Thanks!
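Assuming the numbers above sit in a single-column DataFrame with a column named val, it can be built like this:

import pandas as pd

df = pd.DataFrame({'val': [101, 92, 78, 0, 107, 0, 0]})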
Using shift you could do:
In [608]: df.loc[(df.val == 0) & (df.val.shift(-1) != 0), 'val'] = df.val.shift(1)
In [609]: df
Out[609]:
val
0 101.0
1 92.0
2 78.0
3 78.0
4 107.0
5 0.0
6 0.0
This answer is similar to JohnGalt's but faster when compared:
In [12]: np.where((df.val.values==0)&(df.val.shift(-1)!=0),df.val.shift(),df.val)
Out[12]: array([ 101., 92., 78., 78., 107., 0., 0.])
In [24]: %timeit np.where((df.val.values==0)&(df.val.shift(-1)!=0),df.val.shift(),df.val)
1000 loops, best of 3: 671 µs per loop
In [25]: %timeit df.loc[(df.val == 0) & (df.val.shift(-1) != 0), 'val'] = df.val.shift(1)
100 loops, best of 3: 2.01 ms per loop
I have a data-frame (df) with the following structure:
date a b c d e f g
23/02/2009 577.9102
24/02/2009 579.1345
25/02/2009 583.2158
26/02/2009 629.7425
27/02/2009 553.8306
02/03/2009 6.15 5.31 5.80 223716 790.8724 5.7916
03/03/2009 6.16 6.2 6.26 818424 770.6165 6.0161
04/03/2009 6.6 6.485 6.57 636544 858.5754 1.4331 6.4149
05/03/2009 6.1 5.98 6.06 810584 816.5025 1.7475 6.242
06/03/2009 5.845 5.95 6.00 331079 796.7618 1.7144 6.0427
09/03/2009 5.4 5.2 5.28 504271 744.0833 1.6449 5.4076
10/03/2009 5.93 5.59 5.595 906742 814.2862 1.4128 5.8434
Where columns a and g both have data, I would like to multiply them together using the following:
df["h"] = df["a"]*df["g"]
However, as you can see from the time series above, there is not always data with which to perform the calculation, and I get the following error:
KeyError: 'g'
Is there a way to check whether the data exists before performing the calculation? I am trying to use:
df["h"] = np.where((df.a == blank)|(df.g == blank),"",df.a*df.g)
I would like to have returned:
date a b c d e f g h
23/02/2009 577.9102
24/02/2009 579.1345
25/02/2009 583.2158
26/02/2009 629.7425
27/02/2009 553.8306
02/03/2009 6.15 5.31 5.8 223716 790.8724 5.7916 1.0618
03/03/2009 6.16 6.2 6.26 818424 770.6165 6.0161 1.0239
04/03/2009 6.6 6.485 6.57 636544 858.5754 1.4331 6.4149 1.0288
05/03/2009 6.1 5.98 6.06 810584 816.5025 1.7475 6.242 0.9772
06/03/2009 5.845 5.95 6.00 331079 796.7618 1.7144 6.0427 0.9672
09/03/2009 5.4 5.2 5.28 504271 744.0833 1.6449 5.4076 0.9985
10/03/2009 5.93 5.59 5.595 906742 814.2862 1.4128 5.8434 1.0148
but I am unsure of the syntax for a blank data field. What should that be?
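A minimal sketch, assuming the blank cells are read in as NaN (pandas' usual representation of missing data) and that the KeyError means column g may be absent entirely:

import numpy as np

# Only compute if both columns actually exist (guards against the KeyError).
if {'a', 'g'}.issubset(df.columns):
    # NaN propagates through arithmetic, so rows missing a or g stay blank (NaN).
    df['h'] = df['a'] * df['g']

# Spelling out the "is there data?" check the question asks about:
# df['h'] = np.where(df['a'].notna() & df['g'].notna(), df['a'] * df['g'], np.nan)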
I have a DataFrame like so:
In [10]: df.head()
Out[10]:
sand silt clay rho_b ... n \
5 25 60 5 25 60 5 25 60 5 ... 60
STID ...
ACME 73.0 60.3 52.5 19.7 23.9 25.9 7.2 15.7 21.5 1.27 ... 1.32
ADAX 61.1 51.1 47.6 22.0 25.4 24.6 16.9 23.5 27.8 1.01 ... 1.25
ALTU 23.8 17.8 14.3 40.0 45.2 40.9 36.2 37.0 44.8 1.57 ... 1.18
ALV2 33.3 21.2 19.8 31.4 29.7 29.8 35.3 49.1 50.5 1.66 ... 1.20
ANT2 55.6 57.5 47.7 34.9 31.1 26.8 9.4 11.3 25.5 1.49 ... 1.29
So for every STID (e.g. ACME, ADAX, ALTU), there's some property (e.g. sand, silt, clay) defined at three depths (5, 25, 60).
This structure makes it really easy to do per-depth calculations at each STID, e.g.:
In [12]: (df['sand'] + df['silt']).head()
Out[12]:
5 25 60
STID
ACME 92.7 84.2 78.4
ADAX 83.1 76.5 72.2
ALTU 63.8 63.0 55.2
ALV2 64.7 50.9 49.6
ANT2 90.5 88.6 74.5
How can I neatly incorporate a calculated result back in to the DataFrame? For example, if I wanted to call the result of the above calculation 'notclay':
In [13]: df['notclay'] = df['sand'] + df['silt']
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-13-a30bd9ba99c3> in <module>()
----> 1 df['notclay'] = df['sand'] + df['silt']
<snip>
ValueError: Wrong number of items passed 3, placement implies 1
The result of the addition has three columns (one per depth), but the assignment target 'notclay' implies a single column, hence the error.
I do have a solution using strict assignments, but I'm not very satisfied with it:
In [21]: df[[('notclay', 5), ('notclay', 25), ('notclay', 60)]] = df['sand'] + df['silt']
In [22]: df['notclay'].head()
Out[22]:
5 25 60
STID
ACME 92.7 84.2 78.4
ADAX 83.1 76.5 72.2
ALTU 63.8 63.0 55.2
ALV2 64.7 50.9 49.6
ANT2 90.5 88.6 74.5
I have many other calculations to do similar to this one, and writing a strict assignment every time seems tedious. I'm guessing there's a better/"right" way to do this. I think "add a field in pandas dataframe with MultiIndex columns" might contain the answer, but I don't understand the solutions very well (or even what a Panel is and whether it can help me).
Edit: Something I tried that doesn't work, prepending a category using concat:
In [36]: concat([df['sand'] + df['silt']], axis=1, keys=['notclay']).head()
Out[36]:
notclay
5 25 60
STID
ACME 92.7 84.2 78.4
ADAX 83.1 76.5 72.2
ALTU 63.8 63.0 55.2
ALV2 64.7 50.9 49.6
ANT2 90.5 88.6 74.5
In [37]: df['notclay'] = concat([df['sand'] + df['silt']], axis=1, keys=['notclay'])
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<snip>
ValueError: Wrong number of items passed 3, placement implies 1
Same ValueError raised as above.
Depending on your taste, this may be a nicer way to do it, still using concat:
In [53]: df
Out[53]:
blah foo
1 2 3 1 2 3
a 0.351045 0.044654 0.855627 0.839725 0.675183 0.325324
b 0.610374 0.394499 0.924708 0.924303 0.404475 0.885368
c 0.116418 0.487866 0.190669 0.283535 0.862869 0.346477
d 0.771014 0.204143 0.143449 0.848520 0.887373 0.220083
e 0.103268 0.306820 0.277125 0.627272 0.631019 0.386406
In [54]: newdf
Out[54]:
1 2 3
a 0.433377 0.806679 0.976298
b 0.593683 0.217415 0.086565
c 0.716244 0.908777 0.180252
d 0.031942 0.074283 0.745019
e 0.651517 0.393569 0.861616
In [56]: newdf.columns=pd.MultiIndex.from_product([['bar'], newdf.columns])
In [57]: pd.concat([df, newdf], axis=1)
Out[57]:
blah foo bar \
1 2 3 1 2 3 1
a 0.351045 0.044654 0.855627 0.839725 0.675183 0.325324 0.433377
b 0.610374 0.394499 0.924708 0.924303 0.404475 0.885368 0.593683
c 0.116418 0.487866 0.190669 0.283535 0.862869 0.346477 0.716244
d 0.771014 0.204143 0.143449 0.848520 0.887373 0.220083 0.031942
e 0.103268 0.306820 0.277125 0.627272 0.631019 0.386406 0.651517
2 3
a 0.806679 0.976298
b 0.217415 0.086565
c 0.908777 0.180252
d 0.074283 0.745019
e 0.393569 0.861616
In order to store this into the original dataframe, you can simply assign to it in the last line:
In [58]: df = pd.concat([df, newdf], axis=1)
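Applied to the question's frame, a sketch of the same idea for the 'notclay' group (assuming df has the sand/silt structure shown above) would be:

import pandas as pd

# Compute the per-depth result, give it a 'notclay' top level, and glue it on.
notclay = df['sand'] + df['silt']          # columns: the depths 5, 25, 60
notclay.columns = pd.MultiIndex.from_product([['notclay'], notclay.columns])
df = pd.concat([df, notclay], axis=1)

df['notclay'].head()                       # selectable like any other top-level group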