Get column name based on condition in pandas - python

I have a dataframe as below:
For each row, I want to get the name of any column that contains 1 in that row.

Use DataFrame.dot:
df1 = df.dot(df.columns)
If there can be multiple 1s per row:
df2 = df.dot(df.columns + ';').str.rstrip(';')
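For instance, here is a minimal runnable sketch of the multi-match variant, reusing the 0/1 sample frame that appears in a later answer in this thread:
import pandas as pd

# one-hot style frame (same sample data as the longer answer below)
df = pd.DataFrame({'foo': [0, 0, 0, 0], 'bar': [0, 1, 0, 0],
                   'baz': [0, 0, 0, 0], 'spam': [0, 1, 0, 1]},
                  index=['a', 'b', 'c', 'd'])

# join matching column names with ';' and strip the trailing separator
df2 = df.dot(df.columns + ';').str.rstrip(';')
print(df2)
# a
# b    bar;spam
# c
# d        spam
# dtype: object
Rows with no 1 come out as empty strings, which you can filter or replace afterwards.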

Firstly
Your question is very ambiguous and I recommend reading this link in #sammywemmy's comment. If I understand your problem correctly... we'll talk about this mask first:
df.columns[
    (df == 1)      # mask
    .any(axis=0)   # mask
]
What's happening? Let's work our way outward, starting from within df.columns[**HERE**]:
(df == 1) builds a boolean mask of the df with True/False (1/0) values.
.any() as per the docs:
"Returns False unless there is at least one element within a series or along a Dataframe axis that is True or equivalent".
This gives us a handy Series to mask the column names with.
We will build on this mask to automate a solution for you below.
Next:
Automate to get an output of (<row index>, [<col name>, <col name>, ...]) for rows that contain a 1. Although this will be slower on large datasets, it should do the trick:
import pandas as pd
data = {'foo':[0,0,0,0], 'bar':[0, 1, 0, 0], 'baz':[0,0,0,0], 'spam':[0,1,0,1]}
df = pd.DataFrame(data, index=['a','b','c','d'])
print(df)
   foo  bar  baz  spam
a    0    0    0     0
b    0    1    0     1
c    0    0    0     0
d    0    0    0     1
# group df by index; the dict maps each index label to its sub-DataFrame
df_dict = dict(
    list(
        df.groupby(df.index)
    )
)
Next step is a for loop that iterates the contents of each df in df_dict, checks them with the mask we created earlier, and prints the intended results:
for k, v in df_dict.items():  # k: index label, v: sub-DataFrame
    check = v.columns[(v == 1).any()]
    if len(check) > 0:
        print((k, check.to_list()))
('b', ['bar', 'spam'])
('d', ['spam'])
Side note:
You see how I generated sample data that can be easily reproduced? In the future, please try to post questions with reproducible sample data. It helps you understand your problem better, and it makes it easier for us to answer.

Getting the column name splits into two cases.
If you want the result in a new column, the condition should match only one column per row, because idxmax/idxmin will give just one column name for each row.
import numpy as np
import pandas as pd

data = {'foo': [0, 0, 3, 0], 'bar': [0, 5, 0, 0], 'baz': [0, 0, 2, 0], 'spam': [0, 1, 0, 1]}
df = pd.DataFrame(data)
df = df.replace(0, np.nan)
df
   foo  bar  baz  spam
0  NaN  NaN  NaN   NaN
1  NaN  5.0  NaN   1.0
2  3.0  NaN  2.0   NaN
3  NaN  NaN  NaN   1.0
If you are looking for the minimum or maximum:
max_col = df.idxmax(axis=1)
min_col = df.idxmin(axis=1)
out = df.assign(max=max_col, min=min_col)
out
   foo  bar  baz  spam   max   min
0  NaN  NaN  NaN   NaN   NaN   NaN
1  NaN  5.0  NaN   1.0   bar  spam
2  3.0  NaN  2.0   NaN   foo   baz
3  NaN  NaN  NaN   1.0  spam  spam
Second case: if your condition can be satisfied in multiple columns, for example you are looking for columns that contain 1, then you need a list, because the result cannot be fitted into the same dataframe.
str_con = df.astype(str).apply(lambda x: x.str.contains('1.0', case=False, na=False)).any()
df.columns[str_con]
#output
Index(['spam'], dtype='object') #only spam contains 1
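As a side note, the same columns can be found without going through strings, by comparing values directly; a small sketch using eq on the NaN-filled frame above:
num_eq = df.eq(1.0).any()
df.columns[num_eq]
# Index(['spam'], dtype='object')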
Or, if you are looking for a numerical condition, e.g. columns containing a value greater than 1:
num_con = df.apply(lambda x: x > 1.0).any()
df.columns[num_con]
#output
Index(['foo', 'bar', 'baz'], dtype='object') # these columns have values greater than 1
Happy learning

Related

How can I match the missing values (nan) of two dataframes?

How can I set all my values in df1 as missing if their positional equivalent is a missing value in df2?
Data df1:
Index  Data
1      3
2      8
3      9
Data df2:
Index  Data
1      nan
2      2
3      nan
Desired output:
Index  Data
1      nan
2      8
3      nan
So I would like to keep the data of df1, but only for the positions for which df2 also has data entries. For all nans in df2 I would like to replace the value of df1 with nan as well.
I tried the following, but this replaced all data points with nan.
df1 = df1.where(df2 == np.nan, np.nan)
Thank you very much for your help.
Use mask, which does exactly the inverse of where:
df3 = df1.mask(df2.isna())
output:
   Index  Data
0      1   NaN
1      2   8.0
2      3   NaN
In your case, the condition df2 == np.nan never matches anything, because equality is not the correct way to check for NaN (np.nan == np.nan yields False), so where replaced every value with NaN.
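You can verify that comparison behaviour directly:
import numpy as np

print(np.nan == np.nan)  # False: NaN never compares equal, even to itself
print(np.isnan(np.nan))  # True: use np.isnan / isna / notna to detect missing values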
Change df2 == np.nan to df2.notna():
df3 = df1.where(df2.notna(), np.nan)
print(df3)
# Output
   Index  Data
0      1   NaN
1      2   8.0
2      3   NaN
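To make the where/mask relationship concrete, here is a minimal sketch that rebuilds the question's frames and checks that both spellings agree:
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Index': [1, 2, 3], 'Data': [3, 8, 9]})
df2 = pd.DataFrame({'Index': [1, 2, 3], 'Data': [np.nan, 2, np.nan]})

# mask replaces values where the condition is True,
# where replaces values where the condition is False
a = df1.mask(df2.isna())
b = df1.where(df2.notna())
print(a.equals(b))  # True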

How to find from which row and column the value belong?

Suppose I created the below data frame
data = {'Height_1': [4.3, 6.7, 5.4, 6.2],
        'Height_2': [5.1, 6.9, 5.1, 5.2],
        'Height_3': [4.9, 6.2, 6.5, 6.4]}
df = pd.DataFrame(data)
Suppose someone comes and asks me
Find the row and column of height 6.9.
Find in how many rows and columns height 6.2 is present.
Please help me with what will be the code for this?
Using boolean indexing, we can try something like
>>> df[df == 6.9]
   Height_1  Height_2  Height_3
0       NaN       NaN       NaN
1       NaN       6.9       NaN
2       NaN       NaN       NaN
3       NaN       NaN       NaN
However, this won't necessarily give you the exact rows and column indices of the data you're looking for. If you want to get the rows and columns explicitly, we need to do some more work.
>>> bool_df = df[df == 6.9]
>>> list(bool_df.stack().index)
[(1, 'Height_2')]
As for the second question, we can use the count function, combined with the boolean approach we used earlier.
>>> df[df == 6.2].count()
Height_1 1
Height_2 0
Height_3 1
dtype: int64
To count the rows, we can use the axis argument.
>>> df[df == 6.2].count(axis=1)
0 0
1 1
2 0
3 1
dtype: int64
To obtain the simple total count of occurrences of a certain value, we can use NumPy's sum function.
>>> np.sum(df[df == 6.2].count())
2
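If plain positions are enough, NumPy can do the same lookup in one shot; a sketch, assuming the df defined above:
import numpy as np

# note: exact float comparison, which is fine for literally-entered values
rows, cols = np.where(df.to_numpy() == 6.9)
print(list(zip(df.index[rows], df.columns[cols])))
# [(1, 'Height_2')]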

How can I match values on a matrix on python using pandas?

I'm trying to match values in a matrix on python using pandas dataframes. Maybe this is not the best way to express it.
Imagine you have the following dataset:
import pandas as pd
d = {'stores': ['', '', '', '', ''],
     'col1': ['x', 'price', '', '', 1],
     'col2': ['y', 'quantity', '', 1, ''],
     'col3': ['z', '', 1, '', '']}
df = pd.DataFrame(data=d)
  stores   col1      col2 col3
0    NaN      x         y    z
1    NaN  price  quantity  NaN
2    NaN    NaN       NaN    1
3    NaN    NaN         1  NaN
4    NaN      1       NaN  NaN
I'm trying to get the following:
  stores   col1      col2 col3
0    NaN      x         y    z
1    NaN  price  quantity  NaN
2      z    NaN       NaN    1
3      y    NaN         1  NaN
4      x      1       NaN  NaN
Any ideas how this might work? I've tried running loops on lists but I'm not quite sure how to do it.
This is what I have so far but it's just terrible (and obviously not working) and I am sure there is a much simpler way of doing this but I just can't get my head around it.
stores = ['x', 'y', 'z']
for i in stores:
    for v in df.iloc[0, :]:
        if i == v:
            df['stores'] = i
It yields the following:
  stores   col1      col2 col3
0      z      x         y    z
1      z  price  quantity  NaN
2      z    NaN       NaN    1
3      z    NaN         1  NaN
4      z      1       NaN  NaN
Thank you in advance.
You can complete this task with a loop as follows. It iterates over each column except the first (which is where you want to write the data), takes the index values where the column equals 1, and writes the header value from the first row into the 'stores' column.
Be careful if a row has a 1 in more than one column: in that case the stores column ends up holding whichever matching column was processed last.
for col in df.columns[1:]:
    index_values = df[col][df[col] == 1].index.tolist()
    df.loc[index_values, 'stores'] = df[col][0]
You can fill the whole column at once, like this:
df["stores"] = df[["col1", "col2", "col3"]].rename(columns=df.loc[0]).eq(1).idxmax(axis=1)
This first creates a version of the dataframe with the columns renamed "x", "y", and "z" after the values in the first row; then idxmax(axis=1) returns the column heading associated with the max value in each row (which is the True one).
However this adds an "x" in rows where none of the columns has a 1. If that is a problem you could do something like this:
df["NA"] = 1 # add a column of ones
df["stores"] = df[["col1", "col2", "col3", "NA"]].rename(columns=df.loc[0]).eq(1).idxmax(axis=1)
df["stores"].replace(1, np.NaN, inplace=True) # replace the 1s with NaNs

Problems assigning values to a Dataframe cell in pandas

While combining different pandas DataFrames and sorting the index of the final dataframe, I found something that does not make any sense to me. It gives no error, but no assignment actually happens. I give a simplified example below.
Case 1:
import pandas as pd
ind_1 = ['a','a','b','c','c']
df_1 = pd.DataFrame(index=ind_1,columns=['col1','col2'])
df_1.col1.loc['a'].iloc[0] = 1
df_1.col1.loc['b'] = 2
df_1.col1.loc['c'].iloc[0] = 3
print('Original df_1')
print(df_1)
# Original df_1
#   col1 col2
# a    1  NaN
# a  NaN  NaN
# b    2  NaN
# c    3  NaN
# c  NaN  NaN
You can see that this assignment works fine. But let's create the dataframe with the index sorted differently.
ind_1_sorted = sorted(ind_1,reverse=True)
df_1_sorted = pd.DataFrame(index=ind_1_sorted,columns=['col1','col2'])
df_1_sorted.col1.loc['a'].iloc[0] = 1
df_1_sorted.col1.loc['b'] = 2
df_1_sorted.col1.loc['c'].iloc[0] = 3
print('Sorted df_1')
print(df_1_sorted)
# Sorted df_1
#   col1 col2
# c  NaN  NaN
# c  NaN  NaN
# b    2  NaN
# a  NaN  NaN
# a  NaN  NaN
You can see that now the assignment only works for the non-repeated index. I thought the problem had to be related to the sorting, but let's look at the next case.
Case 2:
ind_2 = ['c','c','b','a','a']
df_2 = pd.DataFrame(index=ind_2,columns=['col1','col2'])
df_2.col1.loc['a'].iloc[0] = 1
df_2.col1.loc['b'] = 2
df_2.col1.loc['c'].iloc[0] = 3
print('Original df_2')
print(df_2)
# Original df_2
#   col1 col2
# c  NaN  NaN
# c  NaN  NaN
# b    2  NaN
# a  NaN  NaN
# a  NaN  NaN
Now we get no assignment for the repeated labels even without any sorting. Let's see what happens if I sort the index:
ind_2_sorted = sorted(ind_2,reverse=False)
df_2_sorted = pd.DataFrame(index=ind_2_sorted,columns=['col1','col2'])
df_2_sorted.col1.loc['a'].iloc[0] = 1
df_2_sorted.col1.loc['b'] = 2
df_2_sorted.col1.loc['c'].iloc[0] = 3
print('Sorted df_2')
print(df_2_sorted)
# Sorted df_2
#   col1 col2
# a    1  NaN
# a  NaN  NaN
# b    2  NaN
# c    3  NaN
# c  NaN  NaN
And now the assignment works after the sorting! The only difference I can see is that the assignment works when the index is sorted in a "standard" way (alphabetically in this case). Does this make any sense?
If the solution is to first build the index sorted alphabetically and then re-sort it into the order I want, how could I do that re-sorting with repeated indexes as in these examples?
Thanks!
As user Quickbeam2k1 mentioned, the issue is due to chained assignment.
Index objects have a method called get_loc which can be used to convert labels to positions; however, its return type is polymorphic (an int, a slice, or a boolean array, depending on the index), which is why I prefer not to use it.
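To illustrate that polymorphism on an index like the question's (a quick sketch; these are the return types for a monotonic index with duplicates):
import pandas as pd

idx = pd.Index(['a', 'a', 'b', 'c', 'c'])
print(idx.get_loc('b'))  # 2 -> a plain int for a unique label
print(idx.get_loc('a'))  # slice(0, 2, None) -> a slice for contiguous duplicates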
Using np.nonzero & filtering on the dataframe's index & column, we can convert the labels to positional references & modify the dataframe using iloc instead of loc
i.e. your first code sample can be rewritten as:
# original
df_1.col1.loc['a'].iloc[0] = 1
df_1.col1.loc['b'] = 2
df_1.col1.loc['c'].iloc[0] = 3
# works for all indices
import numpy as np

col1_mask = df_1.columns == 'col1'
a_mask, = np.nonzero(df_1.index == 'a')
b_mask, = np.nonzero(df_1.index == 'b')
c_mask, = np.nonzero(df_1.index == 'c')
df_1.iloc[a_mask[0], col1_mask] = 1
df_1.iloc[b_mask, col1_mask] = 2
df_1.iloc[c_mask[0], col1_mask] = 3
Similarly for the other examples
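For example, the sorted frame from Case 1 can be fixed with exactly the same pattern (a sketch; only the frame name changes):
col1_mask = df_1_sorted.columns == 'col1'
a_mask, = np.nonzero(df_1_sorted.index == 'a')
b_mask, = np.nonzero(df_1_sorted.index == 'b')
c_mask, = np.nonzero(df_1_sorted.index == 'c')

df_1_sorted.iloc[a_mask[0], col1_mask] = 1
df_1_sorted.iloc[b_mask, col1_mask] = 2
df_1_sorted.iloc[c_mask[0], col1_mask] = 3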

How to select rows with NaN in particular column?

Given this dataframe, how to select only those rows that have "Col2" equal to NaN?
df = pd.DataFrame([range(3), [0, np.nan, 0], [0, 0, np.nan], range(3), range(3)], columns=["Col1", "Col2", "Col3"])
which looks like:
   Col1  Col2  Col3
0     0   1.0   2.0
1     0   NaN   0.0
2     0   0.0   NaN
3     0   1.0   2.0
4     0   1.0   2.0
The result should be this one:
   Col1  Col2  Col3
1     0   NaN   0.0
Try the following:
df[df['Col2'].isnull()]
#qbzenker provided the most idiomatic method IMO
Here are a few alternatives:
In [28]: df.query('Col2 != Col2') # Using the fact that: np.nan != np.nan
Out[28]:
Col1 Col2 Col3
1 0 NaN 0.0
In [29]: df[np.isnan(df.Col2)]
Out[29]:
Col1 Col2 Col3
1 0 NaN 0.0
If you want to select rows with at least one NaN value, then you could use isna + any on axis=1:
df[df.isna().any(axis=1)]
If you want to select rows with a certain number of NaN values, then you could use isna + sum on axis=1 + gt. For example, the following will fetch rows with at least 2 NaN values:
df[df.isna().sum(axis=1)>1]
If you want to limit the check to specific columns, you could select them first, then check:
df[df[['Col1', 'Col2']].isna().any(axis=1)]
If you want to select rows with all NaN values, you could use isna + all on axis=1:
df[df.isna().all(axis=1)]
If you want to select rows with no NaN values, you could use notna + all on axis=1:
df[df.notna().all(axis=1)]
This is equivalent to:
df[df['Col1'].notna() & df['Col2'].notna() & df['Col3'].notna()]
which could become tedious if there are many columns. Instead, you could use functools.reduce to chain & operators:
import functools, operator
df[functools.reduce(operator.and_, (df[i].notna() for i in df.columns))]
or numpy.logical_and.reduce:
import numpy as np
df[np.logical_and.reduce([df[i].notna() for i in df.columns])]
If you're looking to filter the rows where there is no NaN in a given column using query, you can do so with the engine='python' parameter:
df.query('Col2.notna()', engine='python')
or use the fact that NaN != NaN, as #MaxU - stop WAR against UA did:
df.query('Col2==Col2')
