I have a dataframe that has the following basic structure:
import numpy as np
import pandas as pd
tempDF = pd.DataFrame({'condition':[0,0,0,0,0,1,1,1,1,1],'x1':[1.2,-2.3,-2.1,2.4,-4.3,2.1,-3.4,-4.1,3.2,-3.3],'y1':[6.5,-7.6,-3.4,-5.3,7.6,5.2,-4.1,-3.3,-5.7,5.3],'decision':[np.nan]*10})
print(tempDF)
condition decision x1 y1
0 0 NaN 1.2 6.5
1 0 NaN -2.3 -7.6
2 0 NaN -2.1 -3.4
3 0 NaN 2.4 -5.3
4 0 NaN -4.3 7.6
5 1 NaN 2.1 5.2
6 1 NaN -3.4 -4.1
7 1 NaN -4.1 -3.3
8 1 NaN 3.2 -5.7
9 1 NaN -3.3 5.3
Within each row, I want to change the value of the 'decision' column to zero if the 'condition' column equals zero and 'x1' and 'y1' have the same sign (either both positive or both negative) - for the purposes of this script, zero is considered positive. If the signs of 'x1' and 'y1' differ, or if the 'condition' column equals 1 (regardless of the signs of 'x1' and 'y1'), then the 'decision' column should equal 1. I hope I've explained that clearly.
I can iterate over each row of the dataframe as follows:
for i in range(len(tempDF)):
    if tempDF.loc[i, 'condition'] == 0 and (((tempDF.loc[i, 'x1'] >= 0) and (tempDF.loc[i, 'y1'] >= 0)) or ((tempDF.loc[i, 'x1'] < 0) and (tempDF.loc[i, 'y1'] < 0))):
        tempDF.loc[i, 'decision'] = 0
    else:
        tempDF.loc[i, 'decision'] = 1
print(tempDF)
condition decision x1 y1
0 0 0 1.2 6.5
1 0 0 -2.3 -7.6
2 0 0 -2.1 -3.4
3 0 1 2.4 -5.3
4 0 1 -4.3 7.6
5 1 1 2.1 5.2
6 1 1 -3.4 -4.1
7 1 1 -4.1 -3.3
8 1 1 3.2 -5.7
9 1 1 -3.3 5.3
This produces the right output but it's a bit slow. The real dataframe I have is very large and these comparisons will need to be made many times. Is there a more efficient way to achieve the desired result?
First, use np.sign and the comparison operators to create a boolean array which is True where the decision should be 1:
decision = df["condition"] | (np.sign(df["x1"]) != np.sign(df["y1"]))
Here I've used De Morgan's laws: the decision is 0 when condition equals 0 and the signs match, so it is 1 when condition equals 1 or the signs differ.
Then cast to int and put it in the dataframe:
df["decision"] = decision.astype(int)
Giving:
>>> df
condition decision x1 y1
0 0 0 1.2 6.5
1 0 0 -2.3 -7.6
2 0 0 -2.1 -3.4
3 0 1 2.4 -5.3
4 0 1 -4.3 7.6
5 1 1 2.1 5.2
6 1 1 -3.4 -4.1
7 1 1 -4.1 -3.3
8 1 1 3.2 -5.7
9 1 1 -3.3 5.3
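One caveat, offered as a hedged variant rather than a correction: np.sign(0) is 0, so a zero in x1 or y1 would count as a sign mismatch, whereas the question treats zero as positive. A minimal sketch that follows the question's rule exactly, using >= 0 comparisons instead of np.sign (here df stands for the question's tempDF, as in the answer above):
# signs "match" when both values are >= 0 or both are < 0 (zero counts as positive)
same_sign = (df["x1"] >= 0) == (df["y1"] >= 0)
df["decision"] = ((df["condition"] == 1) | ~same_sign).astype(int)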
Related
Transitioning from R to Python, and I am having a difficult time replicating the following code:
df = df %>% group_by(ID) %>% slice(seq_len(min(which(F < 1 & D == 8), n())))
Sample Data:
ID Price F D
1 10.1 1 NAN
1 10.4 1 NAN
1 10.6 .8 8
1 8.1 .8 NAN
1 8.5 .8 NAN
2 22.4 2 NAN
2 22.1 2 NAN
2 21.1 .9 8
2 20.1 .9 NAN
2 20.1 .9 6
with the desired output:
ID Price F D
1 10.1 1 NAN
1 10.4 1 NAN
2 22.4 2 NAN
2 22.1 2 NAN
I believe the Python code would include some combination of np.where, cumcount(), and slicing.
However, I have no idea how I would go about doing this.
Any help would be appreciated, thank you.
EDIT: To anyone who comes to my question in the future hoping to find a solution - yatu's solution worked fine - but I have since worked my way to another solution which I found a bit easier to read:
df['temp'] = np.where((df['F'] < 1) & (df['D'] == 8), 1, 0)
mask = df.groupby('ID')['temp'].cumsum().eq(0)
df[mask]
I've read up on masking a bit and it really does help simplify the complexities of python quite a bit!
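For reference, a minimal reconstruction of the sample data so the snippets above and below can be run (column names taken from the question; the answer below refers to the F and D columns as Factor and Distro):

import numpy as np
import pandas as pd

df = pd.DataFrame({
    'ID':    [1, 1, 1, 1, 1, 2, 2, 2, 2, 2],
    'Price': [10.1, 10.4, 10.6, 8.1, 8.5, 22.4, 22.1, 21.1, 20.1, 20.1],
    'F':     [1, 1, .8, .8, .8, 2, 2, .9, .9, .9],
    'D':     [np.nan, np.nan, 8, np.nan, np.nan, np.nan, np.nan, 8, np.nan, 6],
})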
You could index the dataframe using the conditions below:
c1 = ~df.Distro.eq(8).groupby(df.ID).cumsum().astype(bool)
c2 = df.Factor.lt(1).groupby(df.ID).cumsum().eq(0)
df[c1 & c2]
ID Price Factor Distro
0 1 10.1 1.0 NAN
1 1 10.4 1.0 NAN
5 2 22.4 2.0 NAN
6 2 22.1 2.0 NAN
Note that by taking the .cumsum of a boolean series you are essentially propagating the True values: as soon as a True appears, the remaining values in the group stay truthy. Negating this result gives a mask that removes rows from the dataframe from the point where the first match appears.
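For illustration, a toy example (not from the original post) showing the propagation:

s = pd.Series([False, False, True, False, False])
print(s.cumsum())                 # 0 0 1 1 1 - stays non-zero after the first True
print(~s.cumsum().astype(bool))   # True True False False False - keeps only the rows before the first match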
Details
The following dataframe shows the original dataframe along with the conditions used to index it. In this case, given that the specified criteria occur in the same rows, both conditions show the same behaviour:
df.assign(c1=c1, c2=c2)
ID Price Factor Distro c1 c2
0 1 10.1 1.0 NAN True True
1 1 10.4 1.0 NAN True True
2 1 10.6 0.8 8 False False
3 1 8.1 0.8 NAN False False
4 1 8.5 0.8 NAN False False
5 2 22.4 2.0 NAN True True
6 2 22.1 2.0 NAN True True
7 2 21.1 0.9 8 False False
8 2 20.1 0.9 NAN False False
9 2 20.1 0.9 6 False False
I want all groups to be the same size, i.e. either by removing the last rows or by adding zeros if the group is too small.
d = {'ID':['a12', 'a12','a12','a12','a12','b33','b33','b33','b33','v55','v55','v55','v55','v55','v55'], 'Exp_A':[2.2,2.2,2.2,2.2,2.2,3.1,3.1,3.1,3.1,1.5,1.5,1.5,1.5,1.5,1.5],
'Exp_B':[2.4,2.4,2.4,2.4,2.4,1.2,1.2,1.2,1.2,1.5,1.5,1.5,1.5,1.5,1.5],
'A':[0,0,1,0,1,0,1,0,1,0,1,1,1,0,1], 'B':[0,0,1,1,1,0,0,1,1,1,0,0,1,0,1]}
df1 = pd.DataFrame(data=d)
I want every df1.ID group to have size df1.groupby('ID').size().mean().
So df1 should look like:
A B Exp_A Exp_B ID
0 0 0 2.2 2.4 a12
1 0 0 2.2 2.4 a12
2 1 1 2.2 2.4 a12
3 0 1 2.2 2.4 a12
4 1 1 2.2 2.4 a12
5 0 0 3.1 1.2 b33
6 1 0 3.1 1.2 b33
7 0 1 3.1 1.2 b33
8 1 1 3.1 1.2 b33
9 0 0 3.1 1.2 b33
10 0 1 1.5 1.5 v55
11 1 0 1.5 1.5 v55
12 1 0 1.5 1.5 v55
13 1 1 1.5 1.5 v55
14 0 0 1.5 1.5 v55
Here's one solution using GroupBy. The complication arises from your requirement to add extra rows, with certain columns set to 0, whenever a particular group is too small.
g = df1.groupby('ID')
n = int(g.size().mean())

res = []
for _, df in g:
    k = len(df.index)
    excess = n - k
    if excess > 0:
        # pad with copies of the last row, with A and B set to 0
        # (pd.concat is used since DataFrame.append was removed in pandas 2.0)
        df = pd.concat([df] + [df.iloc[[-1]].assign(A=0, B=0)] * excess)
    res.append(df.iloc[:n])

res = pd.concat(res, ignore_index=True)
print(res)
A B Exp_A Exp_B ID
0 0 0 2.2 2.4 a12
1 0 0 2.2 2.4 a12
2 1 1 2.2 2.4 a12
3 0 1 2.2 2.4 a12
4 1 1 2.2 2.4 a12
5 0 0 3.1 1.2 b33
6 1 0 3.1 1.2 b33
7 0 1 3.1 1.2 b33
8 1 1 3.1 1.2 b33
9 0 0 3.1 1.2 b33
10 0 1 1.5 1.5 v55
11 1 0 1.5 1.5 v55
12 1 0 1.5 1.5 v55
13 1 1 1.5 1.5 v55
14 0 0 1.5 1.5 v55
Here is a solution without explicit looping. You can first determine the required number of rows for each ID and then add and drop rows accordingly.
# df here refers to the question's df1
# Getting the required number of rows for each ID
min_req = int(df.groupby('ID').size().mean())
# Adding an auto-increment column with respect to the ID column
df['row_count'] = df.groupby(['ID']).cumcount() + 1
# Adding excess rows for groups that are too small
# (we will delete the unneeded ones later)
df2 = df.groupby('ID', as_index=False).max()
df2 = df2.loc[df2['row_count'] < min_req]
df2 = df2.assign(A=0, B=0)
df = pd.concat([df] + [df2] * min_req, ignore_index=True)
# Recalculating the count
df = df.drop('row_count', axis=1)
df = df.sort_values(by=['ID', 'A', 'B'], ascending=[True, False, False])
df['row_count'] = df.groupby(['ID']).cumcount() + 1
# Dropping excess rows
df = df.drop(df.loc[df['row_count'] > min_req].index)
df = df.drop('row_count', axis=1)
df
A B Exp_A Exp_B ID
0 0 0 2.2 2.4 a12
1 0 0 2.2 2.4 a12
2 1 1 2.2 2.4 a12
3 0 1 2.2 2.4 a12
4 1 1 2.2 2.4 a12
17 0 0 3.1 1.2 b33
16 0 0 3.1 1.2 b33
15 0 0 3.1 1.2 b33
18 0 0 3.1 1.2 b33
19 0 0 3.1 1.2 b33
10 1 0 1.5 1.5 v55
11 1 0 1.5 1.5 v55
12 1 1 1.5 1.5 v55
13 0 0 1.5 1.5 v55
14 1 1 1.5 1.5 v55
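Both approaches above work; as a further hedged sketch (not from either answer, just an assumed alternative style), the same pad-or-truncate logic can be expressed with reindex inside a single groupby.apply, where padded rows copy ID/Exp_A/Exp_B from the last real row and set A and B to 0:

n = int(df1.groupby('ID').size().mean())

def fit_to_n(g):
    g = g.head(n).reset_index(drop=True).reindex(range(n))  # truncate, or pad with NaN rows
    g[['A', 'B']] = g[['A', 'B']].fillna(0).astype(int)      # padded rows get A = B = 0
    return g.ffill()                                         # copy ID/Exp_A/Exp_B downwards

res = (df1.groupby('ID', group_keys=False)
          .apply(fit_to_n)
          .reset_index(drop=True))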
I have a pandas column like so:
index colA
1 10.2
2 10.8
3 11.6
4 10.7
5 9.5
6 6.2
7 12.9
8 10.6
9 6.4
10 20.5
I want to search the current row's value and count matches from previous rows that are close (using the threshold below). For example, index 4 (10.7) would return 2 matches because it is close to index 1 (10.2) and index 2 (10.8). Similarly, index 8 (10.6) would return 3 matches because it is close to indices 1, 2 and 4.
Using a threshold of +/- 5% for this example would output the below:
index colA matches
1 10.2 0
2 10.8 0
3 11.6 0
4 10.7 2
5 9.5 0
6 6.2 0
7 12.9 0
8 10.6 3
9 6.4 1
10 20.5 0
With a large dataframe, I would like to limit the search to the previous X (say 300) rows rather than the entire dataframe.
Use lower-triangle indices to ensure we only look backwards, then np.bincount to accumulate the matches:
a = df.colA.values
i, j = np.tril_indices(len(a), -1)
mask = np.abs(a[i] - a[j]) / a[i] <= .05
df.assign(matches=np.bincount(i[mask], minlength=len(a)))
colA matches
index
1 10.2 0
2 10.8 0
3 11.6 0
4 10.7 2
5 9.5 0
6 6.2 0
7 12.9 0
8 10.6 3
9 6.4 1
10 20.5 0
If you are having resource issues, consider using good ol' fashioned loops. However, if you have access to numba you can make this considerably faster.
from numba import njit

@njit
def counter(a):
    c = np.arange(len(a)) * 0
    for i, x in enumerate(a):
        for j, y in enumerate(a):
            if j < i:
                if abs(x - y) / x <= .05:
                    c[i] += 1
    return c
df.assign(matches=counter(a))
colA matches
index
1 10.2 0
2 10.8 0
3 11.6 0
4 10.7 2
5 9.5 0
6 6.2 0
7 12.9 0
8 10.6 3
9 6.4 1
10 20.5 0
Here's a numpy solution that leverages broadcasted comparison:
i = df.colA.values
j = np.arange(len(df))
df['matches'] = (
(np.abs(i - i[:, None]) < i * .05) & (j < j[:, None])
).sum(1)
df
index colA matches
0 1 10.2 0
1 2 10.8 0
2 3 11.6 0
3 4 10.7 2
4 5 9.5 0
5 6 6.2 0
6 7 12.9 0
7 8 10.6 3
8 9 6.4 1
9 10 20.5 0
Note: this is extremely fast, but it does not handle the 300-row limitation for large dataframes.
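If the cap is needed, a hedged tweak (an assumption about the intent, not part of the original answer) adds one more condition on the index distance; memory is still O(n^2), so this only addresses correctness, not scale:

window = 300
i = df.colA.values
j = np.arange(len(df))
df['matches'] = (
    (np.abs(i - i[:, None]) < i * .05)
    & (j < j[:, None])
    & (j[:, None] - j <= window)
).sum(1)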
Rolling with apply; if speed matters, please look into cold's answer:
df.colA.rolling(window=len(df),min_periods=1).apply(lambda x : sum(abs((x-x[-1])/x[-1])<0.05)-1)
Out[113]:
index
1 0.0
2 0.0
3 0.0
4 2.0
5 0.0
6 0.0
7 0.0
8 3.0
9 1.0
10 0.0
Name: colA, dtype: float64
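A hedged variant, assuming the 300-row look-back from the question is wanted: rolling expresses it directly through the window size, and raw=True passes each window as a NumPy array so the positional x[-1] keeps working in recent pandas versions:

out = df.colA.rolling(window=300, min_periods=1).apply(
    lambda x: np.sum(np.abs((x - x[-1]) / x[-1]) < 0.05) - 1,
    raw=True,
)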
I am just trying to get the quantiles of a dataframe assigned onto another dataframe, like:
dataframe['pc'] = dataframe['row'].quantile([.1,.5,.7])
The result is:
0 NaN
...
5758 NaN
Name: pc, Length: 5759, dtype: float64
Any idea why? dataframe['row'] has plenty of values, yet dataframe['pc'] is all NaN.
This is expected because the indices differ, so the Series created by quantile does not align with the original DataFrame and you get NaNs:
#indices 0,1,2...6
dataframe = pd.DataFrame({'row':[2,0,8,1,7,4,5]})
print (dataframe)
row
0 2
1 0
2 8
3 1
4 7
5 4
6 5
#indices 0.1, 0.5, 0.7
print (dataframe['row'].quantile([.1,.5,.7]))
0.1 0.6
0.5 4.0
0.7 5.4
Name: row, dtype: float64
#not align
dataframe['pc'] = dataframe['row'].quantile([.1,.5,.7])
print (dataframe)
row pc
0 2 NaN
1 0 NaN
2 8 NaN
3 1 NaN
4 7 NaN
5 4 NaN
6 5 NaN
If you want to create a DataFrame of the quantiles, add rename_axis + reset_index:
df = dataframe['row'].quantile([.1,.5,.7]).rename_axis('a').reset_index(name='b')
print (df)
a b
0 0.1 0.6
1 0.5 4.0
2 0.7 5.4
But what if some indices are the same? (I think this is not what you want; it is only for a better explanation.)
Add reset_index to get the default indices 0, 1, 2:
print (dataframe['row'].quantile([.1,.5,.7]).reset_index(drop=True))
0 0.6
1 4.0
2 5.4
Name: row, dtype: float64
The first 3 rows are aligned, because the Series and the DataFrame share the indices 0, 1, 2:
dataframe['pc'] = dataframe['row'].quantile([.1,.5,.7]).reset_index(drop=True)
print (dataframe)
row pc
0 2 0.6
1 0 4.0
2 8 5.4
3 1 NaN
4 7 NaN
5 4 NaN
6 5 NaN
EDIT:
For multiple columns you need DataFrame.quantile; pass numeric_only=True so that non-numeric columns are excluded:
df = pd.DataFrame({'A':list('abcdef'),
'B':[4,5,4,5,5,4],
'C':[7,8,9,4,2,3],
'D':[1,3,5,7,1,0],
'E':[5,3,6,9,2,4],
'F':list('aaabbb')})
print (df)
A B C D E F
0 a 4 7 1 5 a
1 b 5 8 3 3 a
2 c 4 9 5 6 a
3 d 5 4 7 9 b
4 e 5 2 1 2 b
5 f 4 3 0 4 b
df1 = df.quantile([.1,.2,.3,.4], numeric_only=True)
print (df1)
B C D E
0.1 4.0 2.5 0.5 2.5
0.2 4.0 3.0 1.0 3.0
0.3 4.0 3.5 1.0 3.5
0.4 4.0 4.0 1.0 4.0
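If the intention was instead to attach each quantile as a constant column on every row (an assumption about the goal, not stated in the question), scalar assignment broadcasts automatically:

quantiles = dataframe['row'].quantile([.1, .5, .7])
for q, val in quantiles.items():
    dataframe['pc_' + str(q)] = val   # a scalar is broadcast to every row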
I have two data frames, df1 and df2.
df1 has the following data (N rows):
Time(s) sv-01 sv-02 sv-03 Val1 val2 val3
1339.4 1 4 12 1.6 0.6 1.3
1340.4 1 12 4 -0.5 0.5 1.4
1341.4 1 6 8 0.4 5 1.6
1342.4 2 5 14 1.2 3.9 11
...... ..... .... ... ..
df2 has the following data, with more rows than df1:
Time(msec) channel svid value-1 value-2 valu-03
1000 1 2 0 5 1
1000 2 5 1 4 2
1000 3 2 3 4 7
..... .....................................
1339400 1 1 1.6 0.4 5.3
1339400 2 12 0.5 1.8 -4.4
1339400 3 4 -0.20 1.6 -7.9
1340400 1 1 0.3 0.3 1.5
1340400 2 6 2.3 -4.3 1.0
1340400 3 4 2.0 1.1 -0.45
1341400 1 1 2 2.1 0
1341400 2 8 3.4 -0.3 1
1341400 3 6 0 4.1 2.3
.... .... .. ... ... ...
What I am trying to achieve is:
1. First, multiply the Time(s) column by 1000 so that it matches df2's millisecond column.
2. In df1, sv-01, sv-02 and sv-03 are in independent columns, but in df2 those sv values all sit in the same column, svid. So the goal is: whenever the (converted) time of df1 matches a time in df2, copy the next three consecutive lines, i.e. all matched lines for that time instant.
Basically I want to look up each df1 time in the df2 time column and, if there is a match, copy the next three rows into a new dataframe. I have seen examples using pandas' merge function, but in my case the two frames have different headers.
Thanks.
I think you need double boolean indexing - first filter df2 with isin (mul is used to multiply Time(s) by 1000 for the comparison), and then count values per group with cumcount and keep the first 3:
df = df2[df2['Time(msec)'].isin(df1['Time(s)'].mul(1000))]
df = df[df.groupby('Time(msec)').cumcount() < 3]
print (df)
Time(msec) channel svid value-1 value-2 valu-03
3 1339400 1 1 1.6 0.4 5.30
4 1339400 2 12 0.5 1.8 -4.40
5 1339400 3 4 -0.2 1.6 -7.90
6 1340400 1 1 0.3 0.3 1.50
7 1340400 2 6 2.3 -4.3 1.00
8 1340400 3 4 2.0 1.1 -0.45
9 1341400 1 1 2.0 2.1 0.00
10 1341400 2 8 3.4 -0.3 1.00
11 1341400 3 6 0.0 4.1 2.30
Detail:
print (df.groupby('Time(msec)').cumcount())
3 0
4 1
5 2
6 0
7 1
8 2
9 0
10 1
11 2
dtype: int64
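A hedged equivalent for the second step (an alternative, not from the original answer): GroupBy.head keeps the first three rows of each matching time directly, without building the cumcount mask:

df = (df2[df2['Time(msec)'].isin(df1['Time(s)'].mul(1000))]
        .groupby('Time(msec)')
        .head(3))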