I am trying to filter a pandas DataFrame using thresholds for three columns:
import pandas as pd
df = pd.DataFrame({"A" : [6, 2, 10, -5, 3],
"B" : [2, 5, 3, 2, 6],
"C" : [-5, 2, 1, 8, 2]})
df = df.loc[(df.A > 0) & (df.B > 2) & (df.C > -1)].reset_index(drop = True)
df
A B C
0 2 5 2
1 10 3 1
2 3 6 2
However, I want to do this inside a function where the names of the columns and their thresholds are given to me in a dictionary. Here's my first try, which works OK. Essentially I am building the filter inside the cond variable and then executing it:
df = pd.DataFrame({"A" : [6, 2, 10, -5, 3],
"B" : [2, 5, 3, 2, 6],
"C" : [-5, 2, 1, 8, 2]})
limits_dic = {"A" : 0, "B" : 2, "C" : -1}
cond = "df = df.loc["
for key in limits_dic.keys():
    cond += "(df." + key + " > " + str(limits_dic[key]) + ") & "
cond = cond[:-2] + "].reset_index(drop = True)"
exec(cond)
df
A B C
0 2 5 2
1 10 3 1
2 3 6 2
Now, finally, I put everything inside a function and it stops working (perhaps exec does not like being used inside a function!):
df = pd.DataFrame({"A" : [6, 2, 10, -5, 3],
"B" : [2, 5, 3, 2, 6],
"C" : [-5, 2, 1, 8, 2]})
limits_dic = {"A" : 0, "B" : 2, "C" : -1}
def filtering(df, limits_dic):
    cond = "df = df.loc["
    for key in limits_dic.keys():
        cond += "(df." + key + " > " + str(limits_dic[key]) + ") & "
    cond = cond[:-2] + "].reset_index(drop = True)"
    exec(cond)
    return(df)
df = filtering(df, limits_dic)
df
A B C
0 6 2 -5
1 2 5 2
2 10 3 1
3 -5 2 8
4 3 6 2
I know that exec acts differently when used inside a function but was not sure how to address the problem. Also, I am wondering whether there is a more elegant way to define a function that does the filtering, given two inputs: 1) df and 2) limits_dic = {"A" : 0, "B" : 2, "C" : -1}. I would appreciate any thoughts on this.
As an aside: your function stops working because, inside a function, exec operates on a snapshot of the local namespace, so the df = ... assignment in the executed string never rebinds the function's own df. But if you're trying to build a dynamic query, there are easier ways. Here's one using a list comprehension and str.join:
query = ' & '.join(['{}>{}'.format(k, v) for k, v in limits_dic.items()])
Or, using f-strings on Python 3.6+:
query = ' & '.join([f'{k}>{v}' for k, v in limits_dic.items()])
print(query)
'A>0 & B>2 & C>-1'
Pass the query string to df.query; it's meant for this very purpose:
out = df.query(query)
print(out)
A B C
1 2 5 2
2 10 3 1
4 3 6 2
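To package this up as the function you asked for, here is a minimal sketch using the same names as in your question:
def filtering(df, limits_dic):
    # build "A>0 & B>2 & C>-1" from the dict and hand it to df.query
    query = ' & '.join(f'{k}>{v}' for k, v in limits_dic.items())
    return df.query(query).reset_index(drop=True)
df = filtering(df, limits_dic)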
What if my column names have whitespace, or other weird characters?
From pandas 0.25, you can wrap your column name in backticks so this works:
query = ' & '.join([f'`{k}`>{v}' for k, v in limits_dic.items()])
See this Stack Overflow post for more.
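For instance, with a hypothetical frame whose column names contain spaces (df2 and limits below are made up for illustration):
df2 = pd.DataFrame({'col A': [1, -2, 3], 'col B': [5, 6, 0]})
limits = {'col A': 0, 'col B': 1}
query = ' & '.join(f'`{k}`>{v}' for k, v in limits.items())
df2.query(query)  # `col A`>0 & `col B`>1 keeps only the first row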
You could also use df.eval if you want to obtain a boolean mask for your query, and then indexing becomes straightforward after that:
mask = df.eval(query)
print(mask)
0 False
1 True
2 True
3 False
4 True
dtype: bool
out = df[mask]
print(out)
A B C
1 2 5 2
2 10 3 1
4 3 6 2
String Data
If you need to query columns that use string data, the code above will need a slight modification.
Consider (data from this answer):
df = pd.DataFrame({'gender':list('MMMFFF'),
'height':[4,5,4,5,5,4],
'age':[70,80,90,40,2,3]})
print (df)
gender height age
0 M 4 70
1 M 5 80
2 M 4 90
3 F 5 40
4 F 5 2
5 F 4 3
And a list of columns, operators, and values:
column = ['height', 'age', 'gender']
equal = ['>', '>', '==']
condition = [1.68, 20, 'F']
The appropriate modification here is:
query = ' & '.join(f'{i} {j} {repr(k)}' for i, j, k in zip(column, equal, condition))
df.query(query)
age gender height
3 40 F 5
For information on the pd.eval() family of functions, their features and use cases, please visit Dynamic Expression Evaluation in pandas using pd.eval().
An alternative to @coldspeed's version:
conditions = None
for key, val in limits_dic.items():
    cond = df[key] > val
    if conditions is None:
        conditions = cond
    else:
        conditions = conditions & cond
print(df[conditions])
An alternative to both answers posted, which may or may not be more pythonic:
import pandas as pd
import operator
from functools import reduce
df = pd.DataFrame({"A": [6, 2, 10, -5, 3],
"B": [2, 5, 3, 2, 6],
"C": [-5, 2, 1, 8, 2]})
limits_dic = {"A": 0, "B": 2, "C": -1}
# equiv to [df['A'] > 0, df['B'] > 2 ...]
loc_elements = [df[key] > val for key, val in limits_dic.items()]
df = df.loc[reduce(operator.and_, loc_elements)]
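If you prefer numpy, np.logical_and.reduce folds the same list of boolean Series into a single mask, as an alternative to functools.reduce:
import numpy as np
df.loc[np.logical_and.reduce(loc_elements)]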
Here is how I do this without building a string or using df.query:
limits_dic = {"A" : 0, "B" : 2, "C" : -1}
cond = None
# Build the conjunction one clause at a time
for key, val in limits_dic.items():
    if cond is None:
        cond = df[key] > val
    else:
        cond = cond & (df[key] > val)
df.loc[cond]
A B C
0 2 5 2
1 10 3 1
2 3 6 2
Note the hard-coded operators (>, &), since I wanted to follow your example exactly.
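If the comparison operator also needs to come from the input, one option is to map operator strings to functions from the operator module. This is only a sketch; the specs dict and the filter_by_specs name are made up, not from the question:
import operator
OPS = {'>': operator.gt, '>=': operator.ge, '<': operator.lt, '==': operator.eq}
def filter_by_specs(df, specs):
    # specs maps column -> (operator string, threshold), e.g. {"A": ('>', 0)}
    cond = None
    for col, (op, val) in specs.items():
        clause = OPS[op](df[col], val)
        cond = clause if cond is None else cond & clause
    return df.loc[cond].reset_index(drop=True)
filter_by_specs(df, {"A": ('>', 0), "B": ('>', 2), "C": ('>', -1)})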
Related
I have three columns: id (non-unique id), X (categories) and Y (categories). (I don't have a dataset to share yet. I'll try to replicate what I have using a smaller dataset and edit as soon as possible)
I ran a for loop on a very small subset, and based on those results it might take over 4 hours to run this code. I'm looking for a faster way to do this task using pandas (maybe using iterrows, or iterating over previous rows within apply).
For each row I check:
whether the current X matches any of previous Xs (check_X = X[:row] == X[row])
whether the current Y matches any of previous Ys (check_Y = Y[:row] == Y[row])
whether the current id does not match any of previous ids (check_id = id[:row] != id[row])
if sum(check_X & check_Y & check_id)>0: then append 1 to the array
else: append 0
You are probably looking for duplicated:
df = pd.DataFrame({'id': [0, 0, 0, 1, 0],
'X': [1, 1, 2, 1, 1],
'Y': [2, 2, 2, 2, 2]})
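# Rows whose (X, Y) pair repeats an earlier row are selected first; among
# those, duplicated('id', keep=False) marks ids occurring more than once, so
# .loc[lambda x: ~x] keeps only the rows with an id unique in that subset,
# and the leading ~ flips them to True. All other rows become NaN -> False.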
df['dup'] = ~df[df.duplicated(['X', 'Y'])].duplicated('id', keep=False).loc[lambda x: ~x]
df['dup'] = df['dup'].fillna(False).astype(int)
print(df)
# Output
id X Y dup
0 0 1 2 0
1 0 1 2 0
2 0 2 2 0
3 1 1 2 1
4 0 1 2 0
Update
X and Y should be checked separately:
import numpy as np

df = pd.DataFrame({'id': [0, 1, 1, 2, 2, 3, 4],
'X': [0, 1, 1, 1, 1, 1, 1],
'Y': [0, 2, 2, 2, 2, 2, 2]})
df['dup'] = np.where(df['X'].duplicated() & df['Y'].duplicated() & ~df['id'].duplicated(), 1, 0)
print(df)
# Output
id X Y dup
0 0 0 0 0
1 1 1 2 0
2 1 1 2 0
3 2 1 2 1
4 2 1 2 0
5 3 1 2 1
6 4 1 2 1
EDIT: the answer from @Corralien using duplicated() will likely be much faster and is the best answer for this specific problem. However, apply is more flexible if you have different things to check.
You could do it with iterrows() or apply(). As far as I know, apply() is faster:
check_id, check_x, check_y = set(), set(), set()
def apply_func(row):
    global check_id, check_x, check_y
    if row['id'] not in check_id and row['X'] in check_x and row['Y'] in check_y:
        row['duplicate'] = 1
    else:
        row['duplicate'] = 0
    check_id.add(row['id'])
    check_x.add(row['X'])
    check_y.add(row['Y'])
    return row
df = df.apply(apply_func, axis=1)
With iterrows():
check_id, check_x, check_y = set(), set(), set()
for i, row in df.iterrows():
    if row['id'] not in check_id and row['X'] in check_x and row['Y'] in check_y:
        df.loc[i, 'duplicate'] = 1
    else:
        df.loc[i, 'duplicate'] = 0
    check_id.add(row['id'])
    check_x.add(row['X'])
    check_y.add(row['Y'])
I have a dataframe with columns m, n:
m=[0, 0, 1, 0, 0, 0, 4, 0, 0]
n=[6, 1, 2, 1, 4, 3, 1, 3, 5, 1]
I am looking for an iterative loop that adds up the values of column n whenever the value in column m is non-zero. For example, at the 3rd position of column m the value is 1 (non-zero), so it should add up column n from index 0 to 2, i.e. 6+1+2=9. Similarly, m[6]=4 (non-zero) implies 1+4+3+1=9, and so on.
Let's say you have a dataframe and you want to sum the elements in each column based on the positions of non-zero values in column "m". The following code gives you the output as a dataframe. See the comment in the code if you just want to sum the values in column "n":
import pandas as pd
from random import randint
m = [0, 1, 0, 0, 1, 0, 0, 0, 2]
n = [1, 1, 3, 4, 1, 1, 2, 1, 3]
r = [randint(1, 3) for _ in m]
names = ['lev', 'yan', 'coke' , 'coke', 'yan', 'lev', 'lev', 'yan', 'lev']
df = pd.DataFrame({'m': m, 'n': n, 'r': r, 'names': names})
print(f"Input dataframe:\n{df}")
# if you want to iterate over all columns
iter_cols = df.columns.tolist()
iter_cols.remove('m')
# To iterate over a specific column (e.g. 'n'), use iter_cols = ['n']
starting_idx = 0
sum_df = pd.DataFrame()
for idx, val in enumerate(df.m):
    if val != 0:
        # DataFrame.append was removed in pandas 2.0; pd.concat does the same job
        chunk = df.iloc[starting_idx:(idx + 1)][iter_cols].sum()
        sum_df = pd.concat([sum_df, chunk.to_frame().T], ignore_index=True)
        starting_idx = idx + 1
print(f"Output dataframe:\n{sum_df}")
Output:
Input dataframe:
m n r names
0 0 1 2 lev
1 1 1 3 yan
2 0 3 1 coke
3 0 4 2 coke
4 1 1 2 yan
5 0 1 3 lev
6 0 2 3 lev
7 0 1 3 yan
8 2 3 2 lev
Output dataframe:
n names r
0 2.0 levyan 5.0
1 8.0 cokecokeyan 5.0
2 7.0 levlevyanlev 11.0
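Incidentally, a vectorized sketch of the same grouping: every non-zero m closes a group, so a shifted cumulative count of the non-zero flags labels the groups directly (like the loop, this assumes the last m is non-zero; trailing rows after the last non-zero m would otherwise form one extra group):
group_id = df['m'].ne(0).cumsum().shift(fill_value=0)
print(df.groupby(group_id)['n'].sum())  # 2, 8, 7, matching column n above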
And if you want to iterate over the distinct values in the names column and sum the values in the 'n' column accordingly:
iter_cols = ['n']
distinct_names = set(df.names)
print(distinct_names)
out_dct = {}
for name in distinct_names:
    starting_idx = 0
    sum_df = pd.DataFrame()
    for idx, val in enumerate(df.names):
        if val == name:
            chunk = df.iloc[starting_idx:(idx + 1)][iter_cols].sum()
            sum_df = pd.concat([sum_df, chunk.to_frame().T], ignore_index=True)
            starting_idx = idx + 1
    out_dct[name] = sum_df
df:
A
0 219
1 590
2 272
3 945
4 175
5 930
6 662
7 472
8 251
9 130
I am trying to create a new column quantile based on which quantile the value falls in, for example:
if value > 1st quantile : value = 1
if value > 2nd quantile : value = 2
if value > 3rd quantile : value = 3
if value > 4th quantile : value = 4
Code:
f_q = df['A'].quantile(0.25)
s_q = df['A'].quantile(0.5)
t_q = df['A'].quantile(0.75)
fo_q = df['A'].quantile(1)
index = 0
for i in range(len(df)):
    value = df.at[index, "A"]
    if value > 0 and value <= f_q:
        df.at[index, "A"] = 1
    elif value > f_q and value <= s_q:
        df.at[index, "A"] = 2
    elif value > s_q and value <= t_q:
        df.at[index, "A"] = 3
    elif value > t_q and value <= fo_q:
        df.at[index, "A"] = 4
    index += 1
The code works fine. But I would like to know if there is a more efficient pandas way of doing this. Any suggestions are helpful.
Yes, using pd.qcut:
>>> pd.qcut(df.A, 4).cat.codes + 1
0 1
1 3
2 2
3 4
4 1
5 4
6 4
7 3
8 2
9 1
dtype: int8
(Gives me exactly the same result your code does.)
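You can also pass the labels straight to qcut and skip the category-code arithmetic:
>>> pd.qcut(df.A, 4, labels=[1, 2, 3, 4]).astype(int)
(Same values as above, as a plain integer series.)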
You could also call np.unique on the qcut result:
>>> np.unique(pd.qcut(df.A, 4), return_inverse=True)[1] + 1
array([1, 3, 2, 4, 1, 4, 4, 3, 2, 1])
Or, using pd.factorize (note the slight difference in the output):
>>> pd.factorize(pd.qcut(df.A, 4))[0] + 1
array([1, 2, 3, 4, 1, 4, 4, 2, 3, 1])
The following code finds the index where df['A'] == 1:
import pandas as pd
import numpy as np
import random
index = list(range(10))  # random.shuffle needs a mutable sequence in Python 3
random.shuffle(index)
df = pd.DataFrame(np.zeros((10, 1)).astype(int), columns=['A'], index=index)
df.iloc[3:6, 0] = 1  # avoid chained-indexing assignment
df.iloc[6:, 0] = 2
print(df)
print(df.loc[df['A'] == 1].index.tolist())
It returns the pandas index correctly. How do I get the integer (positional) index ([3, 4, 5]) instead, using the pandas API?
A
8 0
4 0
6 0
3 1
7 1
1 1
5 2
0 2
2 2
9 2
[3, 7, 1]
What about:
In [12]: df.index[df.A == 1]
Out[12]: Int64Index([3, 7, 1], dtype='int64')
or (depending on your goals):
In [15]: df.reset_index().index[df.A == 1]
Out[15]: Int64Index([3, 4, 5], dtype='int64')
Demo:
In [11]: df
Out[11]:
A
8 0
4 0
6 0
3 1
7 1
1 1
5 2
0 2
2 2
9 2
In [12]: df.index[df.A == 1]
Out[12]: Int64Index([3, 7, 1], dtype='int64')
In [15]: df.reset_index().index[df.A == 1]
Out[15]: Int64Index([3, 4, 5], dtype='int64')
Here is one way:
df.reset_index().index[df.A == 1].tolist()
This re-indexes the data frame with [0, 1, 2, ...], then extracts the integer index values based on the boolean mask df.A == 1.
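If numpy is already imported as np, np.flatnonzero gives the positional indices of the True entries in one call:
import numpy as np
np.flatnonzero(df.A == 1)  # array([3, 4, 5])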
Edit: credits to @Max for the index[df.A == 1] idea.
No need for numpy, you're right. Just pure Python with a list comprehension that finds the indexes where the value is 1:
print([i for i,x in enumerate(df['A'].values) if x == 1])
I have a dictionary 'wordfreq' like this:
{'techsmart': 30, 'paradies': 57, 'jobvark': 5000, 'midgley': 100, 'weisman': 2, 'tucuman': 1, 'amdahl': 2, 'frogfeet': 1, 'd8848': 1, 'jiaoyuwang': 1, 'walter': 19}
and I want to put the keys in a list called 'stopword' if the value is more than 5 and the key is not in another dataframe 'df'. Here is the df dataframe:
word freq
1 paradies 1
5 tucuman 1
and here is the code I am using:
stopword = []
for k, v in wordfreq.items():
    if v >= 5:
        if k not in list_c:  # list_c holds the words from the df above
            stopword.append(k)
Does anybody know how I can do the same thing with the isin() method, or at least more efficiently?
I'd load your dict into a df:
In [177]:
wordfreq = {'techsmart': 30, 'paradies': 57, 'jobvark': 5000, 'midgley': 100, 'weisman': 2, 'tucuman': 1, 'amdahl': 2, 'frogfeet': 1, 'd8848': 1, 'jiaoyuwang': 1, 'walter': 19}
df = pd.DataFrame({'word':list(wordfreq.keys()), 'freq':list(wordfreq.values())})
df
Out[177]:
freq word
0 1 frogfeet
1 1 tucuman
2 57 paradies
3 1 d8848
4 5000 jobvark
5 100 midgley
6 1 jiaoyuwang
7 30 techsmart
8 2 weisman
9 19 walter
10 2 amdahl
And then filter using isin against the other df (df1 in my case) like this:
In [181]:
df[(df['freq'] > 5) & (~df['word'].isin(df1['word']))]
Out[181]:
freq word
4 5000 jobvark
5 100 midgley
7 30 techsmart
9 19 walter
So the boolean condition looks for freq values greater than 5 and for words that are not in the other df, using isin with the inverted boolean mask ~.
You can then get a list easily:
In [182]:
list(df[(df['freq'] > 5) & (~df['word'].isin(df1['word']))]['word'])
Out[182]:
['jobvark', 'midgley', 'techsmart', 'walter']
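For comparison, a plain-Python sketch equivalent to your original loop, using a set for O(1) membership tests (assuming df1 holds the words to exclude, as above):
seen = set(df1['word'])
stopword = [k for k, v in wordfreq.items() if v > 5 and k not in seen]
# ['techsmart', 'jobvark', 'midgley', 'walter'] (dict insertion order)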