I'm trying to mark (in a column ok) all groups in a pandas DataFrame that are smaller than N. I have a working solution, but it's slow; is there a way to speed this up?
import pandas as pd
df = pd.DataFrame([
[1, 2, 1],
[1, 2, 2],
[1, 2, 3],
[2, 3, 1],
[2, 3, 2],
[4, 5, 1],
[4, 5, 2],
[4, 5, 3],
], columns=['x', 'y', 'z'])
keys = ['x', 'y']
N = 3
df['ok'] = True
c = df.groupby(keys)['ok'].count()
for vals in c[c < N].index:
    local_dict = dict(zip(keys, vals))
    query = ' & '.join(f'{key}==@{key}' for key in keys)
    idx = df.query(query, local_dict=local_dict).index
    df.loc[idx, 'ok'] = False
print(df)
Instead of using groupby/count, use groupby/transform/count to form a Series which is the same length as the original DataFrame df:
c = df.groupby(keys)['z'].transform('count')
Then you can form a boolean mask which has the same length as df:
In [35]: c<N
Out[35]:
0 False
1 False
2 False
3 True
4 True
5 False
6 False
7 False
Name: z, dtype: bool
Assignment to ok goes much more smoothly now, without a loop, querying or sub-indexing:
df['ok'] = c >= N
import pandas as pd
df = pd.DataFrame([
[1, 2, 1],
[1, 2, 2],
[1, 2, 3],
[2, 3, 1],
[2, 3, 2],
[4, 5, 1],
[4, 5, 2],
[4, 5, 3],
], columns=['x', 'y', 'z'])
keys = ['x', 'y']
N = 3
c = df.groupby(keys)['z'].transform('count')
df['ok'] = c >= N
print(df)
yields
x y z ok
0 1 2 1 True
1 1 2 2 True
2 1 2 3 True
3 2 3 1 False
4 2 3 2 False
5 4 5 1 True
6 4 5 2 True
7 4 5 3 True
Since the builtin groupby/transform methods (such as transform('count')) are
Cythonized, they are in general faster than calling groupby/transform
with a custom lambda function.
Thus, computing the ok column in two steps using
c = df.groupby(keys)['z'].transform('count')
df['ok'] = c >= N
is faster than
df.assign(ok=df.groupby(keys)['z'].transform(lambda x: x.size >= N))
In addition, vectorized operations over an entire column (such as c >= N) are
faster than many small operations over subgroups. transform(lambda x: x.size >= N)
performs the comparison x.size >= N once for each group. If there are
many groups, then computing c >= N once over the whole column yields a further
improvement in performance.
For example, with this 1000-row DataFrame:
import numpy as np
import pandas as pd
np.random.seed(2017)
df = pd.DataFrame(np.random.randint(10, size=(1000, 3)), columns=['x', 'y', 'z'])
keys = ['x', 'y']
N = 3
using transform('count') is about 12x faster:
In [37]: %%timeit
....: c = df.groupby(keys)['z'].transform('count')
....: df['ok'] = c >= N
1000 loops, best of 3: 1.69 ms per loop
In [38]: %timeit df.assign(ok=df.groupby(keys)['z'].transform(lambda x: x.size >= N))
1 loop, best of 3: 20.2 ms per loop
In [39]: 20.2/1.69
Out[39]: 11.95266272189349
In the example above there were 100 groups:
In [47]: df.groupby(keys).ngroups
Out[47]: 100
The speed advantage of using transform('count') increases as the number of
groups increases. For example, with 955 groups:
In [48]: np.random.seed(2017); df = pd.DataFrame(np.random.randint(100, size=(1000, 3)), columns=['x', 'y', 'z'])
In [51]: df.groupby(keys).ngroups
Out[51]: 955
the transform('count') method performs about 92x faster:
In [49]: %%timeit
....: c = df.groupby(keys)['z'].transform('count')
....: df['ok'] = c >= N
1000 loops, best of 3: 1.88 ms per loop
In [50]: %timeit df.assign(ok=df.groupby(keys)['z'].transform(lambda x: x.size >= N))
10 loops, best of 3: 173 ms per loop
In [52]: 173/1.88
Out[52]: 92.02127659574468
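As an aside, transform('size') is a closely related built-in: unlike count, size also includes rows where z is NaN in the group size. A minimal sketch of the same idea using size (my own variation, not part of the timings above):
# 'size' counts all rows per group, including NaNs in 'z';
# 'count' counts only non-NaN values.
df['ok'] = df.groupby(keys)['z'].transform('size') >= N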
Input variables:
keys = ['x','y']
N = 3
Calculate the ok flag with groupby, transform and size:
df.assign(ok=df.groupby(keys)['z'].transform(lambda x: x.size >= N))
Output:
x y z ok
0 1 2 1 True
1 1 2 2 True
2 1 2 3 True
3 2 3 1 False
4 2 3 2 False
5 4 5 1 True
6 4 5 2 True
7 4 5 3 True
Related
I have a data frame with many columns of binary values representing the presence of a category in the observation. Each observation has exactly 3 categories with a value of 1 and the rest 0. I want to create 3 new columns, one for each category, where the value is instead the name of the category (i.e. the name of the binary column) when it is equal to one. To make it clearer:
I have:
x|y|z|k|w
---------
0|1|1|0|1
To be:
cat1|cat2|cat3
--------------
y |z |w
Can I do this ?
For better performance, use a NumPy-based solution:
print (df)
x y z k w
0 0 1 1 0 1
1 1 1 0 0 1
import numpy as np

c = df.columns.values                      # column names as a NumPy array
df = pd.DataFrame(c[np.where(df)[1].reshape(-1, 3)]).add_prefix('cat')
print (df)
cat0 cat1 cat2
0 y z w
1 x y w
Details:
#get indices of 1s
print (np.where(df))
(array([0, 0, 0, 1, 1, 1], dtype=int64), array([1, 2, 4, 0, 1, 4], dtype=int64))
#select the second array
print (np.where(df)[1])
[1 2 4 0 1 4]
#reshape to 3 columns
print (np.where(df)[1].reshape(-1, 3))
[[1 2 4]
[0 1 4]]
#indexing
print (c[np.where(df)[1].reshape(-1, 3)])
[['y' 'z' 'w']
['x' 'y' 'w']]
Timings:
df = pd.concat([df] * 1000, ignore_index=True)
#jezrael solution
In [390]: %timeit (pd.DataFrame(df.columns.values[np.where(df)[1].reshape(-1, 3)]).add_prefix('cat'))
The slowest run took 4.62 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 503 µs per loop
#jpp solution
In [391]: %timeit (pd.DataFrame(df.apply(lambda row: [x for x in df.columns if row[x]], axis=1).values.tolist()))
10 loops, best of 3: 111 ms per loop
#Zero's solution works only with a one-row DataFrame, so it is not included
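Note that the reshape(-1, 3) trick relies on every row containing exactly three 1s. If that assumption can break, here is a minimal (slower) sketch that handles a varying number of 1s per row, assuming df is still the original 0/1 frame:
# Build a (possibly ragged) list of column names per row;
# shorter rows are padded with None by the DataFrame constructor
c = df.columns.values
rows = [c[r.astype(bool)].tolist() for r in df.values]
out = pd.DataFrame(rows).add_prefix('cat')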
Here is one way:
import pandas as pd
df = pd.DataFrame({'x': [0, 1], 'y': [1, 1], 'z': [1, 0], 'k': [0, 1], 'w': [1, 1]})
split = df.apply(lambda row: [x for x in df.columns if row[x]], axis=1).values.tolist()
df2 = pd.DataFrame(split)
# 0 1 2 3
# 0 w y z None
# 1 k w x y
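To get cat0/cat1/... column names like the other answer, add_prefix can be chained on (a small sketch reusing split from above):
# Same frame, just with prefixed column names
df2 = pd.DataFrame(split).add_prefix('cat')
#   cat0 cat1 cat2  cat3
# 0    w    y    z  None
# 1    k    w    x     y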
You could (note this works on a one-row DataFrame, as in the question's example):
In [13]: pd.DataFrame([df.columns[df.astype(bool).values[0]]]).add_prefix('cat')
Out[13]:
cat0 cat1 cat2
0 y z w
The following code finds the index labels where df['A'] == 1:
import pandas as pd
import numpy as np
import random

index = list(range(10))
random.shuffle(index)
df = pd.DataFrame(np.zeros((10, 1)).astype(int), columns=['A'], index=index)
df.iloc[3:6, 0] = 1
df.iloc[6:, 0] = 2
print(df)
print(df.loc[df['A'] == 1].index.tolist())
It returns the pandas index labels correctly. How do I get the integer (positional) index ([3, 4, 5]) instead, using the pandas API?
A
8 0
4 0
6 0
3 1
7 1
1 1
5 2
0 2
2 2
9 2
[3, 7, 1]
What about this:
In [12]: df.index[df.A == 1]
Out[12]: Int64Index([3, 7, 1], dtype='int64')
or (depending on your goals):
In [15]: df.reset_index().index[df.A == 1]
Out[15]: Int64Index([3, 4, 5], dtype='int64')
Demo:
In [11]: df
Out[11]:
A
8 0
4 0
6 0
3 1
7 1
1 1
5 2
0 2
2 2
9 2
In [12]: df.index[df.A == 1]
Out[12]: Int64Index([3, 7, 1], dtype='int64')
In [15]: df.reset_index().index[df.A == 1]
Out[15]: Int64Index([3, 4, 5], dtype='int64')
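If you only need the positional locations (and not a pandas Index object), here is a minimal NumPy sketch equivalent to the reset_index approach (np is already imported in the question):
# Positions where A == 1, independent of the index labels
np.flatnonzero(df['A'].values == 1)    # array([3, 4, 5])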
Here is one way:
df.reset_index().index[df.A == 1].tolist()
This re-indexes the data frame with [0, 1, 2, ...], then extracts the integer index values based on the boolean mask df.A == 1.
Edit: credits to @Max for the index[df.A == 1] idea.
No need for numpy here; pure Python with a list comprehension will do. Just find the positions where the values are 1:
print([i for i,x in enumerate(df['A'].values) if x == 1])
How can I aggregate to get the average of b for each group a, while excluding the current row (the target result is in c)?
a b c
1 1 0.5 # (avg of 0 & 1, excluding 1)
1 1 0.5 # (avg of 0 & 1, excluding 1)
1 0 1 # (avg of 1 & 1, excluding 0)
2 1 0.5 # (avg of 0 & 1, excluding 1)
2 0 1 # (avg of 1 & 1, excluding 0)
2 1 0.5 # (avg of 0 & 1, excluding 1)
3 1 0.5 # (avg of 0 & 1, excluding 1)
3 0 1 # (avg of 1 & 1, excluding 0)
3 1 0.5 # (avg of 0 & 1, excluding 1)
Data dump:
import pandas as pd
data = pd.DataFrame([[1, 1, 0.5], [1, 1, 0.5], [1, 0, 1], [2, 1, 0.5], [2, 0, 1],
[2, 1, 0.5], [3, 1, 0.5], [3, 0, 1], [3, 1, 0.5]],
columns=['a', 'b', 'c'])
Suppose a group has values x_1, ..., x_n.
The average of the entire group would be
m = (x_1 + ... + x_n)/n
The sum of the group without x_i would be
(m*n - x_i)
The average of the group without x_i would be
(m*n - x_i)/(n-1)
Therefore, you could compute the desired column of values with
import pandas as pd
df = pd.DataFrame([[1, 1, 0.5], [1, 1, 0.5], [1, 0, 1], [2, 1, 0.5], [2, 0, 1],
[2, 1, 0.5], [3, 1, 0.5], [3, 0, 1], [3, 1, 0.5]],
columns=['a', 'b', 'c'])
grouped = df.groupby(['a'])
n = grouped['b'].transform('count')
mean = grouped['b'].transform('mean')
df['result'] = (mean*n - df['b'])/(n-1)
which yields
In [32]: df
Out[32]:
a b c result
0 1 1 0.5 0.5
1 1 1 0.5 0.5
2 1 0 1.0 1.0
3 2 1 0.5 0.5
4 2 0 1.0 1.0
5 2 1 0.5 0.5
6 3 1 0.5 0.5
7 3 0 1.0 1.0
8 3 1 0.5 0.5
In [33]: assert df['result'].equals(df['c'])
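As a quick sanity check of the formula on the first group (a == 1, with b values 1, 1, 0): m = 2/3 and n = 3, so m*n = 2. Leaving out the first row gives (2 - 1)/(3 - 1) = 0.5, and leaving out the third row gives (2 - 0)/(3 - 1) = 1, matching column c.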
Per the comments below, in the OP's actual use case, the DataFrame's a column
contains strings:
import numpy as np

def make_random_str_array(letters, strlen, size):
    # draw size*strlen single characters, then view them as size strings of length strlen
    return (np.random.choice(list(letters), size*strlen)
            .astype('S1')                  # 1-byte chars so the view below also works on Python 3
            .view('|S{}'.format(strlen)))

N = 3*10**6
df = pd.DataFrame({'a': make_random_str_array(letters='ABCD', strlen=10, size=N),
                   'b': np.random.randint(10, size=N)})
so that there are about a million unique values in df['a'] out of 3 million
total:
In [87]: uniq, key = np.unique(df['a'], return_inverse=True)
In [88]: len(uniq)
Out[88]: 988337
In [89]: len(df)
Out[89]: 3000000
In this case the calculation above requires (on my machine) about 11 seconds:
In [86]: %%timeit
   ....: grouped = df.groupby(['a'])
   ....: n = grouped['b'].transform('count')
   ....: mean = grouped['b'].transform('mean')
   ....: df['result'] = (mean*n - df['b'])/(n-1)
   ....:
1 loops, best of 3: 10.5 s per loop
Pandas converts all string-valued columns to object
dtype. But we could convert the
DataFrame column to a NumPy array with a fixed-width dtype, and then group
according to those values.
Here is a benchmark showing that if we convert the Series with object dtype to a NumPy array with fixed-width string dtype, the calculation requires less than 2 seconds:
In [97]: %%timeit
   ....: grouped = df.groupby(df['a'].values.astype('|S4'))
   ....: n = grouped['b'].transform('count')
   ....: mean = grouped['b'].transform('mean')
   ....: df['result'] = (mean*n - df['b'])/(n-1)
   ....:
1 loops, best of 3: 1.39 s per loop
Beware that you need to know the maximum length of the strings in df['a'] to choose the appropriate fixed-width dtype. In the example above, all the strings have length 4, so |S4 works. If you use |Sn for some integer n and n is smaller than the longest string, then those strings will be silently truncated with no error or warning. This could lead to values being grouped together which should not be grouped together. Thus, the onus is on you to choose the correct fixed-width dtype.
You could use
dtype = '|S{}'.format(df['a'].str.len().max())
grouped = df.groupby(df['a'].values.astype(dtype))
to ensure the conversion uses the correct dtype.
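To see the truncation hazard concretely, here is a minimal illustrative sketch (made-up strings, not from the benchmark above):
# With a width of 2, two distinct strings collapse to the same truncated value,
# so they would silently end up in the same group
np.array(['abcd', 'abzz'], dtype='|S2')
# array([b'ab', b'ab'], dtype='|S2')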
You can calculate the statistics manually by iterating group by group:
# Set up input
import pandas as pd
df = pd.DataFrame([
[1, 1, 0.5], [1, 1, 0.5], [1, 0, 1],
[2, 1, 0.5], [2, 0, 1], [2, 1, 0.5],
[3, 1, 0.5], [3, 0, 1], [3, 1, 0.5]
], columns=['a', 'b', 'c'])
df
a b c
0 1 1 0.5
1 1 1 0.5
2 1 0 1.0
3 2 1 0.5
4 2 0 1.0
5 2 1 0.5
6 3 1 0.5
7 3 0 1.0
8 3 1 0.5
# Perform grouping, excluding the current row
results = []
grouped = df.groupby(['a'])
for key, group in grouped:
    for idx, row in group.iterrows():
        # The group excluding the current row
        group_other = group.drop(idx)
        avg = group_other['b'].mean()
        results.append(row.tolist() + [avg])
# Compare our results with what is expected
results_df = pd.DataFrame(
results, columns=['a', 'b', 'c', 'c_new']
)
results_df
a b c c_new
0 1 1 0.5 0.5
1 1 1 0.5 0.5
2 1 0 1.0 1.0
3 2 1 0.5 0.5
4 2 0 1.0 1.0
5 2 1 0.5 0.5
6 3 1 0.5 0.5
7 3 0 1.0 1.0
8 3 1 0.5 0.5
This way you can use any statistic you want.
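For example, here is a minimal variation of the same loop that computes the leave-one-out median instead of the mean (loo_df is just an illustrative name):
# Same pattern, different statistic
results = []
for key, group in df.groupby(['a']):
    for idx, row in group.iterrows():
        loo = group.drop(idx)['b'].median()   # swap in any statistic here
        results.append(row.tolist() + [loo])
loo_df = pd.DataFrame(results, columns=['a', 'b', 'c', 'loo_median'])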
If I have a dataframe df with column x and want to create column y based on values of x using this in pseudo code:
if df['x'] < -2 then df['y'] = 1
else if df['x'] > 2 then df['y'] = -1
else df['y'] = 0
How would I achieve this? I assume np.where is the best way to do this but not sure how to code it correctly.
One simple method would be to assign the default value first and then perform 2 loc calls:
In [66]:
df = pd.DataFrame({'x':[0,-3,5,-1,1]})
df
Out[66]:
x
0 0
1 -3
2 5
3 -1
4 1
In [69]:
df['y'] = 0
df.loc[df['x'] < -2, 'y'] = 1
df.loc[df['x'] > 2, 'y'] = -1
df
Out[69]:
x y
0 0 0
1 -3 1
2 5 -1
3 -1 0
4 1 0
If you wanted to use np.where then you could do it with a nested np.where:
In [77]:
df['y'] = np.where(df['x'] < -2 , 1, np.where(df['x'] > 2, -1, 0))
df
Out[77]:
x y
0 0 0
1 -3 1
2 5 -1
3 -1 0
4 1 0
Here the outer np.where encodes the first condition: where x is less than -2, return 1. Otherwise it falls through to the inner np.where, which tests the second condition: where x is greater than 2, return -1; otherwise return 0.
timings
In [79]:
%timeit df['y'] = np.where(df['x'] < -2 , 1, np.where(df['x'] > 2, -1, 0))
1000 loops, best of 3: 1.79 ms per loop
In [81]:
%%timeit
df['y'] = 0
df.loc[df['x'] < -2, 'y'] = 1
df.loc[df['x'] > 2, 'y'] = -1
100 loops, best of 3: 3.27 ms per loop
So for this sample dataset the np.where method is roughly twice as fast.
Use np.select for multiple conditions
np.select(condlist, choicelist, default=0)
Return elements in choicelist depending on the corresponding condition in condlist.
The default element is used when all conditions evaluate to False.
condlist = [
df['x'] < -2,
df['x'] > 2,
]
choicelist = [
1,
-1,
]
df['y'] = np.select(condlist, choicelist, default=0)
np.select is much more readable than a nested np.where but just as fast, for example on a larger random frame built with df = pd.DataFrame({'x': np.random.randint(-5, 5, size=n)}) for a suitably large n.
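Applied to the small sample frame from earlier, a usage sketch of the same np.select call:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [0, -3, 5, -1, 1]})
df['y'] = np.select([df['x'] < -2, df['x'] > 2], [1, -1], default=0)
#    x  y
# 0  0  0
# 1 -3  1
# 2  5 -1
# 3 -1  0
# 4  1  0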
This is a good use case for pd.cut, where you define the bin edges and assign a label to each of the resulting ranges:
df['y'] = pd.cut(df['x'], [-np.inf, -2, 2, np.inf], labels=[1, 0, -1], right=False)
Output
x y
0 0 0
1 -3 1
2 5 -1
3 -1 0
4 1 0
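One thing to keep in mind is that pd.cut with labels returns a Categorical column; if a plain integer column is needed, a small sketch (same call as above, with a cast at the end, assuming numpy is imported as np):
# Cast the categorical labels back to plain ints
df['y'] = pd.cut(df['x'], [-np.inf, -2, 2, np.inf],
                 labels=[1, 0, -1], right=False).astype(int)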
Set a fixed value in 'c2' where a condition on 'c1' is met:
df.loc[df['c1'] == 'Value', 'c2'] = 10
You can also do it by first collecting the matching indexes and then making two loc assignments:
df = pd.DataFrame({'x':[0,-3,5,-1,1]})
df
x
0 0
1 -3
2 5
3 -1
4 1
df['y'] = 0
idx_1 = df.loc[df['x'] < -2, 'y'].index
idx_2 = df.loc[df['x'] > 2, 'y'].index
df.loc[idx_1, 'y'] = 1
df.loc[idx_2, 'y'] = -1
df
x y
0 0 0
1 -3 1
2 5 -1
3 -1 0
4 1 0
Let's say my data frame contains these data:
>>> df = pd.DataFrame({'a':['l1','l2','l1','l2','l1','l2'],
'b':['1','2','2','1','2','2']})
>>> df
a b
0 l1 1
1 l2 2
2 l1 2
3 l2 1
4 l1 2
5 l2 2
l1 should correspond to 1 whereas l2 should correspond to 2.
I'd like to create a new column 'c' such that, for each row, c = 1 if a = l1 and b = 1 (or a = l2 and b = 2). If a = l1 and b = 2 (or a = l2 and b = 1) then c = 0.
The resulting data frame should look like this:
a b c
0 l1 1 1
1 l2 2 1
2 l1 2 0
3 l2 1 0
4 l1 2 0
5 l2 2 1
My data frame is very large so I'm really looking for the most efficient way to do this using pandas.
import numpy
df = pd.DataFrame({'a': numpy.random.choice(['l1', 'l2'], 1000000),
                   'b': numpy.random.choice(['1', '2'], 1000000)})
A fast solution assuming only two distinct values:
%timeit df['c'] = ((df.a == 'l1') == (df.b == '1')).astype(int)
10 loops, best of 3: 178 ms per loop
#Viktor Kerkes:
%timeit df['c'] = (df.a.str[-1] == df.b).astype(int)
1 loops, best of 3: 412 ms per loop
#user1470788:
%timeit df['c'] = (((df['a'] == 'l1')&(df['b']=='1'))|((df['a'] == 'l2')&(df['b']=='2'))).astype(int)
1 loops, best of 3: 363 ms per loop
#herrfz
%timeit df['c'] = (df.a.apply(lambda x: x[1:])==df.b).astype(int)
1 loops, best of 3: 387 ms per loop
You can also use the string methods.
df['c'] = (df.a.str[-1] == df.b).astype(int)
df['c'] = (df.a.apply(lambda x: x[1:])==df.b).astype(int)
You can just use logical operators. I'm not sure why you're using strings of 1 and 2 rather than ints, but here's a solution. The astype at the end converts it from boolean to 0's and 1's.
df['c'] = (((df['a'] == 'l1')&(df['b']=='1'))|((df['a'] == 'l2')&(df['b']=='2'))).astype(int)
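Another option in the same spirit is to map a to the b value it should pair with and compare; a small sketch (assuming only the l1/l2 labels shown in the question):
# Map each label in 'a' to the 'b' string it should match, then compare
df['c'] = (df['a'].map({'l1': '1', 'l2': '2'}) == df['b']).astype(int)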