Pandas CSV : Check for each row if a column is empty - python

I want to test, for each row of a CSV file, whether some columns are empty or not, and change the value of another column depending on that.
Here is what I have:
df = df.replace(r'^\s*$', np.NaN, regex=True)
df['Multi-line'] = pd.Series(dtype=object)
for i, row in df.iterrows():
    if (row['Directory Number 1'] != np.NaN and row['Directory Number 2'] != np.NaN
            and row['Directory Number 3'] != np.NaN and row['Directory Number 4'] != np.NaN):
        df.at[i, 'Multi-line'] = 'Yes'
If 2 "Directory Number X" or more are not empty, I want the "Multi-line" column to be "Yes" and if 1 or 0 "Directory Number X" are not empty then "Multi-line" should be "No".
Here is only one if just to show you how it looks but in my test sample, all Multi-line are set to "Yes", it seems like the problem is inside the If condition with the row value and the np.nan but I don't know how to check if a row value is empty or not..
Thanks for you help !

I assume that you executed df = df.replace(r'^\s*$', np.NaN, regex=True) beforehand.
Then, to generate the new column, run:
df['Multi-line'] = df.apply(lambda row: 'Yes' if row.notna().sum() >= 2 else 'No', axis=1)
There is no need for an explicit call to iterrows, as apply arranges just such
a loop, invoking the passed function for each row.
If your DataFrame also has other columns, especially ones that can
contain NaN values, then this lambda function should be applied to
just the 4 columns of interest.
In that case run:
cols = [f'Directory Number {i}' for i in range(1, 5)]
df['Multi-line'] = df[cols].apply(
    lambda row: 'Yes' if row.notna().sum() >= 2 else 'No', axis=1)
Note also that a check like if (row[s] != np.NaN): as proposed
in the other solution is a bad approach: NaN is by definition not
equal to another NaN, so you can't detect NaN with a comparison.
To see this, try:
s = np.nan
s2 = np.nan
s != s2  # True
s == s2  # False
Then assign an ordinary string to s (s = 'xx') and repeat:
s != s2  # True
s == s2  # False
with just the same result.
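The reliable check is a function call rather than a comparison; a minimal sketch (pd.isna/pd.notna handle both NaN and None):
import numpy as np
import pandas as pd

s = np.nan
pd.isna(s)   # True - works for NaN and None
np.isnan(s)  # True - works for float NaN only

# counting the non-empty values in a row, as the apply above does
row = pd.Series({'Directory Number 1': 'x', 'Directory Number 2': np.nan})
row.notna().sum()  # 1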

You can use a counter instead:
df = df.replace(r'^\s*$', np.NaN, regex=True)
df['Multi-line'] = pd.Series(dtype=object)
cnt = 0
cols = ['Directory Number 1', 'Directory Number 2', 'Directory Number 3', 'Directory Number 4']
for i, row in df.iterrows():
    for s in cols:
        if (row[s] != np.NaN):
            cnt += 1
    if (cnt >= 2):
        df.at[i, 'Multi-line'] = 'Yes'
    else:
        df.at[i, 'Multi-line'] = 'No'
    cnt = 0
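Note that, as the other answer points out, row[s] != np.NaN is always True, so this loop marks every row 'Yes'. A corrected sketch of the same counter idea, using pd.notna for the emptiness test:
for i, row in df.iterrows():
    cnt = sum(1 for s in cols if pd.notna(row[s]))  # count the non-empty columns
    df.at[i, 'Multi-line'] = 'Yes' if cnt >= 2 else 'No'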

Related

Pandas: Remove rows where all values equal a certain value

I have a DataFrame with regex search results. I need to remove any row where there were no matches for any of the terms. Not all columns are search results, only columns 2 - 6.
I have tried (NF = "Not Found"):
cond1 = (df['term1'] != "NF") & (df['term2'] != "NF") & (df['term3'] != "NF") & (df['term4'] != "NF") & (df['term5'] != "NF")
df_pos_results = df[cond1]
For some reason this is removing positive results.
I think you need .all:
df = df[~df.iloc[:, 1:6].eq('NF').all(axis=1)]
That will remove all rows where every value in columns 2 - 6 equals NF.
For multiple values:
df = df[~df.iloc[:, 1:6].isin(['NF', 'ABC', 'DEF']).all(axis=1)]
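A quick demo on toy data (a hypothetical frame with an id column plus the five term columns):
import pandas as pd

df = pd.DataFrame({'id':    [1, 2, 3],
                   'term1': ['NF', 'cat', 'NF'],
                   'term2': ['NF', 'NF', 'dog'],
                   'term3': ['NF', 'NF', 'NF'],
                   'term4': ['NF', 'NF', 'NF'],
                   'term5': ['NF', 'NF', 'NF']})

# keep rows where at least one of columns 2 - 6 is not 'NF'
out = df[~df.iloc[:, 1:6].eq('NF').all(axis=1)]
print(out)  # the row with id 1 (all NF) is dropped; ids 2 and 3 remain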

Set a new column using Pandas

I have a dataframe like this:
     A       Status_A  Invalid_A
0        Null OR Blank       True
1  NaN  Null OR Blank       True
2   Xv          Valid      False
I want a dataframe like this:
     A         Status_A  Invalid_A
0        Null OR Blank A       True
1  NaN  Null OR Blank A       True
2   Xv            Valid      False
I want to append column name to the Status_A column when I create df using
def checkNull(ele):
    if pd.isna(ele) or (ele == ''):
        return ("Null OR Blank", True)
    else:
        return ("Valid", False)
df[['Status_A', 'Invalid_A']] = df['A'].apply(checkNull).tolist()
I want to pass column name in this function.
You have a couple of options here.
One option is that when you create the dataframe, you can pass additional arguments to pd.Series.apply:
def checkNull(ele, suffix):
    if pd.isna(ele) or (ele == ''):
        return (f"Null OR Blank {suffix}", True)
    else:
        return ("Valid", False)
df[['Status_A', 'Invalid_A']] = df['A'].apply(checkNull, args=('A',)).tolist()
Another option is to post-process the dataframe to add the suffix
df.loc[df['Invalid_A'], 'Status_A'] += '_A'
That being said, both columns are redundant, which is usually code smell. Consider just using the boolean series pd.isna(df['A']) | (df['A'] == '') as an index instead.
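A minimal sketch of that suggestion, using the boolean series directly (column name 'A' as in the question):
invalid_a = pd.isna(df['A']) | (df['A'] == '')  # True where A is null or blank
df.loc[invalid_a]   # rows with a missing/blank A
df.loc[~invalid_a]  # rows with a valid A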
A more efficient way is to use np.where:
df[('Status%s') % '_A'] = np.where((df['A'].isnull()) | (df['A']==''), 'Null or Blank', 'Valid')
df[('Invalid%s') % '_A'] = np.where((df['A'].isnull()) | (df['A']==''), 'True', 'False')
Maybe something like this:
def append_col_name(df, col_name):
    col = f"Status_{col_name}"
    df[col] = df[col].apply(lambda x: x + " " + col_name if x != "Valid" else x)
    return df
Then, with your df:
append_col_name(df, "A")
If you're checking each element, you can use a vectorised operation and return an entire dataframe, as opposed to operating on a column:
def str_col_check(colname: str, dataframe: pd.DataFrame) -> pd.DataFrame:
    suffix = colname.split('_')[-1]
    mask = dataframe[colname].isin(['Null OR Blank', ''])
    dataframe.loc[mask, colname] = dataframe[colname] + '_' + suffix
    return dataframe

Data Cleaning with Pandas

I have a dataframe column consisting of text data and I need to filter it according to the following conditions:
The character "M", if it's present in the string, it can only be at the n-2 position
The n-1 position of the string always has to be a "D".
ex:
KFLL
KSDS
KMDK
MDDL
In this case, for example, I would have to remove the first string, since the character at the n-1 position is not a "D", and the last one, since the character "M" appears outside the n-2 position.
How can I apply this to a whole dataframe column?
Here's one way with a list comprehension:
l = ['KFLL', 'KSDS', 'KMDK', 'MDDL']
[x for x in l if ((('M' not in x) or (x[-3] == 'M')) and (x[-2] == 'D'))]
Output:
['KSDS', 'KMDK']
This does what you want. It could probably be written more compactly with list comprehensions, but at least this is readable. It assumes that the strings are all at least 3 characters long; otherwise you get an IndexError, in which case you need to add a try/except.
from collections import Counter
import pandas as pd

df = pd.DataFrame(data=["KFLL", "KSDS", "KMDK", "MDDL"], columns=["code"])
print("original")
print(df)

mask = list()
for code in df["code"]:
    flag = False
    if code[-2] == "D":
        counter = Counter(code)
        if counter["M"] == 0 or (counter["M"] == 1 and code[-3] == "M"):
            flag = True
    mask.append(flag)

df["mask"] = mask
df2 = df[df["mask"]].copy()
df2.drop("mask", axis=1, inplace=True)
print("new")
print(df2)
Output looks like this:
original
   code
0  KFLL
1  KSDS
2  KMDK
3  MDDL
new
   code
1  KSDS
2  KMDK
Thank you all for your help.
I ended up implementing it like this:
l = {"Sequence": [ 'KFLL', 'KSDS', 'KMDK', 'MDDL', "MMMD"]}
df = pd.DataFrame(data= l)
print(df)
df = df[df.Sequence.str[-2] == 'D']
df = df[~df.Sequence.apply(lambda x: ("M" in x and x[-3]!='M') or x.count("M") >1 )]
print(df)
Output:
  Sequence
0     KFLL
1     KSDS
2     KMDK
3     MDDL
4     MMMD
  Sequence
1     KSDS
2     KMDK
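As an aside, both rules can also be captured in a single vectorised regex; a sketch, assuming every string is at least 2 characters long:
import pandas as pd

df = pd.DataFrame({"Sequence": ['KFLL', 'KSDS', 'KMDK', 'MDDL', 'MMMD']})

# ^[^M]*  - any prefix that contains no M
# M?      - at most one M, allowed only at position n-2
# D       - the required D at position n-1
# [^M]$   - a final character that is not M
out = df[df.Sequence.str.fullmatch(r'[^M]*M?D[^M]')]
print(out)  # keeps KSDS and KMDK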

Assign labels: all values are false

I have a problem assigning labels based on whether a condition is satisfied. Specifically, I would like to assign True (or 1) to rows which contain at least one of these words
my_list=["maths", "science", "geography", "statistics"]
in one of these fields:
path | Subject | Notes
I would also like to look for these websites webs=["www.stanford.edu", "www.ucl.ac.uk", "www.sorbonne-universite.fr"] in the column web.
To do this I am using the following code:
def part_is_in(x, values):
    output = False
    for val in values:
        if val in str(x):
            return True
    return output
def assign_value(filename):
    my_list = ["maths", "", "science", "geography", "statistics"]
    filename['Label'] = filename[['path', 'subject', 'notes']].apply(part_is_in, values=my_list)
    filename['Low_Subject'] = filename['Subject']
    filename['Low_Notes'] = filename['Notes']
    lower_cols = [col for col in filename if col not in ['Subject', 'Notes']]
    filename[lower_cols] = filename[lower_cols].apply(lambda x: x.astype(str).str.lower(), axis=1)
    webs = ["https://www.stanford.edu", "https://www.ucl.ac.uk", "http://www.sorbonne-universite.fr"]
    # NEW COLUMN
    filename['Label'] = pd.Series(index=filename.index, dtype='object')
    for index, row in filename.iterrows():
        value = row['web']
        if any(x in str(value) for x in webs):
            filename.at[index, 'Label'] = True
        else:
            filename.at[index, 'Label'] = False
    for index, row in filename.iterrows():
        value = row['Subject']
        if any(x in str(value) for x in my_list):
            filename.at[index, 'Label'] = True
        else:
            filename.at[index, 'Label'] = False
    for index, row in filename.iterrows():
        value = row['Notes']
        if any(x in str(value) for x in my_list):
            filename.at[index, 'Label'] = True
        else:
            filename.at[index, 'Label'] = False
    for index, row in filename.iterrows():
        value = row['path']
        if any(x in str(value) for x in my_list):
            filename.at[index, 'Label'] = True
        else:
            filename.at[index, 'Label'] = False
    return filename
My dataset is
web               path           Subject           Notes
www.stanford.edu  /maths/        NA                NA
www.ucla.com      /history/      History of Egypt  NA
www.kcl.ac.uk     /datascience/  Data Science      50 students
...
The expected output is:
web               path           Subject           Notes        Label
www.stanford.edu  /maths/        NA                NA           1  # contains the web and maths
www.ucla.com      /history/      History of Egypt  NA           0
www.kcl.ac.uk     /datascience/  Data Science      50 students  1  # contains the word science
...
Using my code, I am getting all values False. Are you able to spot the issue?
The final values in Label are Booleans.
If you want ints, use df.Label = df.Label.astype(int)
def test_words:
- fill all NaNs, which are float type, with '', which is str type
- convert all words to lowercase
- replace all / with ' '
- split on ' ' to make a list
- combine all the lists into a single set
- use set methods to determine if the row contains a word in my_list
set.intersection:
- {'datascience'}.intersection({'science'}) returns an empty set, because there is no intersection.
- {'data', 'science'}.intersection({'science'}) returns {'science'}, because there's an intersection on that word.
lambda x: any(x in y for y in webs):
- for each value in webs, check if the row's web value is in that value
- 'www.stanford.edu' in 'https://www.stanford.edu' is True
- evaluates as True if any comparison is True
import numpy as np
import pandas as pd

# test data and dataframe
data = {'web': ['www.stanford.edu', 'www.ucla.com', 'www.kcl.ac.uk'],
        'path': ['/maths/', '/history/', '/datascience/'],
        'Subject': [np.nan, 'History of Egypt', 'Data Science'],
        'Notes': [np.nan, np.nan, '50 students']}
df = pd.DataFrame(data)

# given my_list
my_list = ["maths", "science", "geography", "statistics"]
my_list = set(map(str.lower, my_list))  # convert to a set and verify the words are lowercase

# given webs; all values should be lowercase
webs = ["https://www.stanford.edu", "https://www.ucl.ac.uk", "http://www.sorbonne-universite.fr"]

# function to test for word content
def test_words(v: pd.Series) -> bool:
    v = v.fillna('').str.lower().str.replace('/', ' ').str.split(' ')  # replace NaN, lowercase, split into lists
    s_set = {st for row in v for st in row if st}  # join all the values in the lists into one set
    return True if s_set.intersection(my_list) else False  # True if there is a word intersection between the sets

# test for the conditions in the word columns and the web column
df['Label'] = df[['path', 'Subject', 'Notes']].apply(test_words, axis=1) | df.web.apply(lambda x: any(x in y for y in webs))
# display(df)
web path Subject Notes Label
0 www.stanford.edu /maths/ NaN NaN True
1 www.ucla.com /history/ History of Egypt NaN False
2 www.kcl.ac.uk /datascience/ Data Science 50 students True
Notes regarding the original code:
It's not a good idea to use iterrows multiple times; for a large dataset it will be very time-consuming and error prone.
It was easier to write a new function than to interpret the different code blocks for each column.

Search for a value anywhere in a pandas DataFrame

This seems like a simple question, but I couldn't find it asked before (this and this are close but the answers aren't great).
The question is: if I want to search for a value somewhere in my df (I don't know which column it's in) and return all rows with a match.
What's the most Pandaic way to do it? Is there anything better than:
for col in list(df):
    try:
        df[col] == var
        return df[df[col] == var]
    except TypeError:
        continue
?
You can perform an equality comparison on the entire DataFrame:
df[df.eq(var).any(axis=1)]
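For example, on a small hypothetical frame:
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': ['x', 'bal1', 'y']})
var = 'bal1'
print(df[df.eq(var).any(axis=1)])  # returns only the row containing 'bal1'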
You should use isin. It returns which columns contain the value; if you want a row-wise check, see the other answer. :-)
df.isin(['bal1']).any()
A False
B True
C False
CLASS False
dtype: bool
Or
df[df.isin(['bal1'])].stack() # level 0 index is row index , level 1 index is columns which contain that value
0 B bal1
1 B bal1
dtype: object
You can try the code below:
import pandas as pd

x = pd.read_csv(r"filePath")
x.columns = x.columns.str.lower().str.replace(' ', '_')
z = x.columns.tolist()
print("Note: it takes case-sensitive values.")
keyWord = input("Type a keyword to search: ")
for col in z:
    try:
        matches = x[x[col].str.match(keyWord)]
        print(matches.head(10))
    except (AttributeError, TypeError):
        # non-string columns have no .str accessor; skip them
        continue
This is a solution which will return the actual column you need.
df.columns[df.isin(['Yes']).any()]
Minimal solution:
import pandas as pd
import numpy as np
def locate_in_df(df, value):
    a = df.to_numpy()
    row = np.where(a == value)[0][0]
    col = np.where(a == value)[1][0]
    return row, col
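Note that this returns only the first match, and raises an IndexError if the value is absent. A usage sketch:
df = pd.DataFrame({'A': [1, 2], 'B': ['x', 'bal1']})
locate_in_df(df, 'bal1')  # (1, 1) - positional row and column indices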
