Lookup a single value using a multi-column key from Pandas DataFrame - python

This thread doesn't seem to cover a situation I am routinely in.
Return single cell value from Pandas DataFrame
How does one return a single value (not a Series or DataFrame) using a set of column conditions as keys? This seems like a common need: say you have a database of info and you need to pluck answers to questions from it, but you need one answer, not a series of possible answers. My method seems "hokey" -- not Pythonic? And maybe not good for technical reasons.
import pandas as pd
d = {'A': [1, 1, 1, 2, 2, 2, 3, 3, 3], 'B': [1, 2, 3, 1, 2, 3, 1, 2, 3],
     'C': [1, 3, 5, 2, 9, 7, 4, 3, 2]}
df = pd.DataFrame(data=d)
df looks like:
A B C
0 1 1 1
1 1 2 3
2 1 3 5
3 2 1 2
4 2 2 9
5 2 3 7
6 3 1 4
7 3 2 3
8 3 3 2
How to get the value in the C column where A == 1 and B == 3? In my case it's always unique, but I can see how that cannot be assumed so this method returns a series:
df[(df['A'] == 1) & (df['B'] == 3)]['C']
I don't want a series. So how to get a single value, not a series or list of one row or one element?
My method:
df[(df['A'] == 1) & (df['B'] == 3)]['C'].tolist()[0]
In the Pandas library it seems DataFrame.at is the way to go, but this method doesn't look better, though I wonder if it is technically better:
df.at[df.loc[(df['A'] == 1) & (df['B'] == 3)].index[0], 'C']
So, in your opinion, what is the best way to use multiple column conditions to find a value in a dataframe and return a single value (not a list or series)?

I have sat with the same question a few times in the past. I have come to accept that it's not actually that common for me to do this anyway, so I usually just do this:
df.loc[(df['A'] == 1) & (df['B'] == 3), "C"].iat[0]
# frequently I also like to make it more readable like this
is1and3 = (df['A'] == 1) & (df['B'] == 3)
df.loc[is1and3, "C"].iat[0]
This is almost the same as
df.at[df.loc[(df['A'] == 1) & (df['B'] == 3)].index[0], 'C']
which essentially just grabs the first index matching the condition and passes it to .at, rather than subsetting and then grabbing the first returned value with .iat[0], but I don't really like seeing .loc and .index in the call to .at.
Obviously the problem that pandas needs to handle is that there is no guarantee that a condition will only be satisfied by exactly one value in the df, so it's left to the user to handle that.
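Since uniqueness is left to the user, one way to make that explicit is a small helper that fails loudly when the condition does not match exactly one row. This is a sketch, not a pandas built-in; `lookup_one` is a hypothetical name:

```python
import pandas as pd

# Hypothetical helper: return the single value of `col` where `mask` holds.
# Series.item() raises ValueError unless the Series holds exactly one value,
# so zero or multiple matches fail loudly instead of silently taking the first.
def lookup_one(df, mask, col):
    return df.loc[mask, col].item()

d = {'A': [1, 1, 1, 2, 2, 2, 3, 3, 3],
     'B': [1, 2, 3, 1, 2, 3, 1, 2, 3],
     'C': [1, 3, 5, 2, 9, 7, 4, 3, 2]}
df = pd.DataFrame(d)

value = lookup_one(df, (df['A'] == 1) & (df['B'] == 3), 'C')
print(value)  # 5
```

If the condition can legitimately match several rows and you want the first, the `.iat[0]` form above is the honest spelling of that choice.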

If the combination of columns A and B is unique, then we can set the index in advance to efficiently retrieve a single value:
df.set_index(['A', 'B']).loc[(1, 3), 'C']
An alternative approach uses Series.item():
df.loc[df['A'].eq(1) & df['B'].eq(3), 'C'].item()
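If the same key columns are queried repeatedly, a sketch of paying the set_index cost once and reusing the indexed frame (`dfi` is just an illustrative name, assuming the (A, B) pairs are unique):

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 1, 1, 2, 2, 2, 3, 3, 3],
                   'B': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                   'C': [1, 3, 5, 2, 9, 7, 4, 3, 2]})

# Build the multi-column index once...
dfi = df.set_index(['A', 'B'])

# ...then each lookup is a plain scalar access keyed by an (A, B) tuple
print(dfi.at[(1, 3), 'C'])   # 5
print(dfi.loc[(2, 2), 'C'])  # 9
```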


Select pandas Series elements based on condition

Given a dataframe, I know I can select rows by condition using below syntax:
df[df['colname'] == 'Target Value']
But what about a Series? Series does not have a column (axis 1) name, right?
My scenario is I have created a Series through the nunique() function:
sr = df.nunique()
And I want to list out the index names of those rows with value 1.
Having failed to find a clear answer on the Net, I resorted to below solution:
for colname, coldata in sr.items():  # .iteritems() is deprecated; use .items()
    if coldata == 1:
        print(colname)
Question: what is a better way to get my answer (i.e. list out the index names of the Series, or column names of the original DataFrame, which have just a single value)?
The ultimate objective was to find which columns in a DataFrame have one and only one unique value. Since I did not know how to do that directly from a DataFrame, I first used nunique() and that gave me a Series. Thus I needed to process the Series with "== 1" (i.e. one and only one).
I hope my question isn't silly.
It is unclear whether you want to work on the DataFrame or on the Series.
Case 1: Working on DataFrame
In case you want to work on the dataframe to list out the index names of those rows with value 1, you can try:
df.index[df[df==1].any(axis=1)].tolist()
Demo
data = {'Col1': [0, 1, 2, 2, 0], 'Col2': [0, 2, 2, 1, 2], 'Col3': [0, 0, 0, 0, 1]}
df = pd.DataFrame(data)
Col1 Col2 Col3
0 0 0 0
1 1 2 0
2 2 2 0
3 2 1 0
4 0 2 1
Then, run the code, it gives:
[1, 3, 4]
Case 2: Working on Series
If you want to extract the index of a Series with value 1, you can extract it into a list, as follows:
sr.loc[sr == 1].index.tolist()
or use:
sr.index[sr == 1].tolist()
Both work the same way, since pandas overloads the == operator:
selected_series = series[series == my_value]
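For the stated ultimate objective -- which columns of the original DataFrame have exactly one unique value -- the two steps can be combined into a single expression. A sketch:

```python
import pandas as pd

df = pd.DataFrame({'x': [1, 1, 1], 'y': [1, 2, 3], 'z': [7, 7, 7]})

# nunique() gives a Series of unique-value counts per column;
# boolean-index the column labels where that count is exactly 1
single_valued = df.columns[df.nunique() == 1].tolist()
print(single_valued)  # ['x', 'z']
```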

Selecting rows based on Boolean values in a non dangerous way

This is an easy question since it is so fundamental. See - in R, when you want to slice rows from a dataframe based on some condition, you just write the condition and it selects the corresponding rows. For example, if you have a condition such that only the third row in the dataframe meets the condition it returns the third row. Easy Peasy.
In Python, you have to use loc. IF the index matches the row numbers, then everything is great. IF you have been removing rows or re-ordering them for any reason, you have to remember that, since loc is based on INDEX, NOT ROW POSITION. So if in your current dataframe the third row matches your boolean conditional in the loc statement, it will retrieve the row with index label 3 -- which could be the 50th row, rather than your current third row. This seems like an incredibly dangerous way to select rows, so I know I am doing something wrong.
So what is the best practice method of ensuring you select the nth row based on a boolean conditional? Is it just to use loc and "always remember to use reset_index -- otherwise, if you miss it even once, your entire dataframe is wrecked"? This can't be it.
Use iloc instead of loc for integer based indexing:
data = {'A': [1, 2, 3], 'B': [4, 5, 6], 'C': [7, 8, 9]}
df = pd.DataFrame(data, index=[1, 2, 3])
df
Dataset:
A B C
1 1 4 7
2 2 5 8
3 3 6 9
Label based index
df.loc[1]
Results:
A 1
B 4
C 7
Integer based:
df.iloc[1]
Results:
A 2
B 5
C 8
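Putting the two together: to select the nth row satisfying a condition strictly by position (never by label), one sketch is to convert the boolean mask to positions first:

```python
import numpy as np
import pandas as pd

# Index labels deliberately disagree with row positions
df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=[50, 3, 7])

mask = (df['A'] > 1).to_numpy()      # positional boolean array
positions = np.flatnonzero(mask)     # positions of matching rows: [1, 2]
first_match = df.iloc[positions[0]]  # first matching row by position
print(first_match['A'])  # 2
```

Because the mask is converted to a plain array before indexing, this works the same whether or not the index has ever been reset.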

Quick sum of all rows that fulfill a condition in DataFrame

I have a pandas dataframe that looks something like this:
df = pd.DataFrame(np.array([[1,1, 0], [5, 1, 4], [7, 8, 9]]),columns=['a','b','c'])
a b c
0 1 1 0
1 5 1 4
2 7 8 9
I want to find the first column in which the majority of elements in that column are equal to 1.0.
I currently have the following code, which works, but in practice, my dataframes usually have thousands of columns and this code is in a performance critical part of my application, so I wanted to know if there is a way to do this faster.
for col in df.columns:
    amount_votes = len(df[df[col] == 1.0])
    if amount_votes > len(df) / 2:
        return col
In this case, the code should return 'b', since that is the first column in which the majority of elements are equal to 1.0
Try:
print((df.eq(1).sum() > len(df) // 2).idxmax())
Prints:
b
Find columns with more than half of values equal to 1.0
cols = df.eq(1.0).sum().gt(len(df)/2)
Get the first one (this returns a one-element Series; use cols[cols].index[0] if you want just the label):
cols[cols].head(1)
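One caveat with the idxmax() approach: it returns the first column label even when no column has a majority of 1.0s, so a guard is needed to distinguish "none" from a real hit. A sketch:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 5, 7], 'b': [1, 1, 8], 'c': [0, 4, 9]})

# Count 1.0s per column and compare against half the row count
majority = df.eq(1.0).sum().gt(len(df) / 2)

# idxmax() would return 'a' even if every entry of `majority` were False,
# so check any() first
first = majority.idxmax() if majority.any() else None
print(first)  # b
```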

How do I apply an if...else clause to an entire column in Python3?

I'd like to take a set of values in a df column, and apply a correction factor depending on the value of a separate column. I would like to run an if...else clause which adds a different amount depending on the value in the first column.
I've tried the following:
if df['A'] > 5:
    df['B'] = df['B'] + 2
else:
    df['B'] = df['B'] - 2
I would expect the rows in column A which are larger than 5 to have 2 added to them in column B, and those which aren't to have 2 taken from them. Instead I get an error message saying that the truth value of a series is ambiguous.
I guess this is fairly basic, but the answers I've found on Stackoverflow all seem to relate to a different programming language.
You can use:
A B
0 6 0
1 1 0
df.loc[df['A'] > 5, 'B'] += 2
df.loc[~(df['A'] > 5), 'B'] -= 2 # or df.loc[df['A'] <= 5, 'B'] -= 2
Result:
A B
0 6 2
1 1 -2
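An equivalent single-statement alternative uses numpy.where, which evaluates the condition element-wise and picks between the two branches per row (a sketch on the same two-row example):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [6, 1], 'B': [0, 0]})

# Vectorized if/else over the whole column: +2 where A > 5, otherwise -2
df['B'] = np.where(df['A'] > 5, df['B'] + 2, df['B'] - 2)
print(df['B'].tolist())  # [2, -2]
```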

Python pandas - select by row

I am trying to select rows in a pandas data frame based on its values matching those of another data frame. Crucially, I only want to match values in rows, not throughout the whole series. For example:
df1 = pd.DataFrame({'a':[1, 2, 3], 'b':[4, 5, 6]})
df2 = pd.DataFrame({'a':[3, 2, 1], 'b':[4, 5, 6]})
I want to select rows where both 'a' and 'b' values from df1 match any row in df2. I have tried:
df1[(df1['a'].isin(df2['a'])) & (df1['b'].isin(df2['b']))]
This of course returns all rows, as all the values are present in df2 at some point, but not necessarily in the same row. How can I limit this so the values tested for 'b' are only those rows where the value of 'a' was found? So with the example above, I am expecting only row index 1 ([2, 5]) to be returned.
Note that data frames may be of different shapes, and contain multiple matching rows.
Similar to this post, here's one using broadcasting -
df1[(df1.values == df2.values[:,None]).all(-1).any(0)]
The idea is:
1) Use np.all for the both part in "both 'a' and 'b' values".
2) Use np.any for the any part in "from df1 match any row in df2".
3) Use broadcasting to do all of this in a vectorized fashion, extending dimensions with None/np.newaxis.
Sample run -
In [41]: df1
Out[41]:
a b
0 1 4
1 2 5
2 3 6
In [42]: df2 # Modified to add another row : [1,4] for variety
Out[42]:
a b
0 3 4
1 2 5
2 1 6
3 1 4
In [43]: df1[(df1.values == df2.values[:,None]).all(-1).any(0)]
Out[43]:
a b
0 1 4
1 2 5
Use NumPy broadcasting to build a boolean cross-match matrix, then keep the df1 rows that match any df2 row:
matches = pd.DataFrame((df1.values[:, None] == df2.values).all(2),
                       pd.Index(df1.index, name='df1'),
                       pd.Index(df2.index, name='df2'))
df1[matches.any(axis=1)]
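A plain-pandas alternative sketch: an inner merge on all shared columns, which avoids materializing the full len(df1) x len(df2) comparison matrix. Note that merge resets the row index, so use it when the original labels don't matter:

```python
import pandas as pd

df1 = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
df2 = pd.DataFrame({'a': [3, 2, 1], 'b': [4, 5, 6]})

# Default merge joins on all common columns with an inner join,
# keeping only rows whose full (a, b) pair appears in both frames;
# drop_duplicates() on df2 prevents duplicated matches inflating the result
common = df1.merge(df2.drop_duplicates())
print(common.values.tolist())  # [[2, 5]]
```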
