carry out operation on pandas column using an IF statement - python

I have a pandas dataframe,
df = pd.DataFrame({"Id": [77000581079,77000458432,77000458433,77000458434,77000691973], "Code": ['FO07930', 'FO73597','FO03177','FO73596','FOZZZZZ']})
I want to check the value of each row in column Code to see if it matches the string 'FOZZZZZ'.
If it does not match, I would like to concatenate the Id value to the Code value.
So the expected output will be:
            Id                Code
0  77000581079  FO0793077000581079
1  77000458432  FO7359777000458432
2  77000458433  FO0317777000458433
3  77000458434  FO7359677000458434
4  77000691973             FOZZZZZ
I've tried:
df['Id'] = df['Id'].astype(str)
for x in df['Id']:
    if x == 'FOZZZZ':
        pass
    else:
        df['Id'] + df['Code']
which I thought would run over each row in column Code to check whether it equals 'FOZZZZZ' and, if not, concatenate the columns, but no joy.

df.loc[df['Code']!='FOZZZZZ', 'Code'] = df['Code'] + df['Id'].astype(str)
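As a sanity check, here is a minimal, self-contained run of that one-liner on a shortened version of the sample frame from the question:

```python
import pandas as pd

# Shortened sample data from the question
df = pd.DataFrame({
    "Id": [77000581079, 77000458432, 77000691973],
    "Code": ["FO07930", "FO73597", "FOZZZZZ"],
})

# Append Id to Code only on rows where Code is not the sentinel 'FOZZZZZ'
df.loc[df["Code"] != "FOZZZZZ", "Code"] = df["Code"] + df["Id"].astype(str)

print(df["Code"].tolist())
# ['FO0793077000581079', 'FO7359777000458432', 'FOZZZZZ']
```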

Use pandas.Series.where with eq:
s = df["Code"]
df["Code"] = s.where(s.eq("FOZZZZZ"), s + df["Id"].astype(str))
print(df)
Output:
                 Code           Id
0  FO0793077000581079  77000581079
1  FO7359777000458432  77000458432
2  FO0317777000458433  77000458433
3  FO7359677000458434  77000458434
4             FOZZZZZ  77000691973

Try np.where(condition, value if condition is true, value if condition is false). Use .isin() to check whether 'FOZZZZZ' exists, and invert the result with ~ to build the boolean mask used as the condition.
df['Code'] = np.where(~df['Code'].isin(['FOZZZZZ']), df.Id.astype(str) + df.Code, df.Code)
            Id                Code
0  77000581079  77000581079FO07930
1  77000458432  77000458432FO73597
2  77000458433  77000458433FO03177
3  77000458434  77000458434FO73596
4  77000691973             FOZZZZZ
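Note that this version prepends Id to Code, which differs from the expected output in the question (Code first). A sketch with the np.where operands swapped to match the expected order:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "Id": [77000581079, 77000691973],
    "Code": ["FO07930", "FOZZZZZ"],
})

# Code first, then Id, matching the expected output in the question
df["Code"] = np.where(~df["Code"].isin(["FOZZZZZ"]),
                      df["Code"] + df["Id"].astype(str),
                      df["Code"])

print(df["Code"].tolist())
# ['FO0793077000581079', 'FOZZZZZ']
```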

Or you could try using loc:
df['Code'] = df['Code'] + df['Id'].astype(str)
df.loc[df['Code'].str.contains('FOZZZZZ'), 'Code'] = 'FOZZZZZ'
print(df)
Output:
                 Code           Id
0  FO0793077000581079  77000581079
1  FO7359777000458432  77000458432
2  FO0317777000458433  77000458433
3  FO7359677000458434  77000458434
4             FOZZZZZ  77000691973

Related

Start index (under Field) from 1 with pandas DataFrame

I would like to start the index from 1 under the "Field" column.
df = pd.DataFrame(list(zip(total_points, passing_percentage)),
                  columns=['Pts Measured', '% pass'])
df = df.rename_axis('Field').reset_index()
df["Comments"] = ""
df
Output:
   Field  Pts Measured  % pass Comments
0      0         92909   90.66
1      1         92830   91.85
2      2        130714   99.99
I found a similar question here: In Python pandas, start row index from 1 instead of zero without creating additional column
For your question, it would be as simple as adding the following line:
df["Field"] = np.arange(1, len(df) + 1)
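A minimal runnable sketch of that fix, with the data hard-coded here for illustration (the question builds the frame from total_points and passing_percentage):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"Pts Measured": [92909, 92830, 130714],
                   "% pass": [90.66, 91.85, 99.99]})
df = df.rename_axis('Field').reset_index()
df["Comments"] = ""

# Replace the 0-based Field values with a 1-based range
df["Field"] = np.arange(1, len(df) + 1)

print(df["Field"].tolist())
# [1, 2, 3]
```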

Using dictionary to add some columns to a dataframe with assign function

I was using Python and pandas to do some statistical analysis on data, and at some point I needed to add some new columns with the assign function:
df_res = (
    df
    .assign(col1=lambda x: np.where(x['event'].str.contains('regex1'), 1, 0))
    .assign(col2=lambda x: np.where(x['event'].str.contains('regex2'), 1, 0))
    .assign(mycol=lambda x: np.where(x['event'].str.contains('regex3'), 1, 0))
    .assign(newcol=lambda x: np.where(x['event'].str.contains('regex4'), 1, 0))
)
I wanted to know if there is any way to add columns names and my regex to a dictionary and use a for loop or another lambda expression to assign these columns automatically:
Dic = {'col1': 'regex1', 'col2': 'regex2', 'mycol': 'regex3', 'newcol': 'regex4'}
df_res = (
    df
    .assign(...using Dic here...)
)
I need to add more columns later, and I think this would make that easier.
https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.assign.html
Assigning multiple columns within the same assign is possible. For Python 3.6 and above, later items in ‘**kwargs’ may refer to newly created or modified columns in ‘df’; items are computed and assigned into ‘df’ in order. For Python 3.5 and below, the order of keyword arguments is not specified, you cannot refer to newly created or modified columns. All items are computed first, and then assigned in alphabetical order.
Changed in version 0.23.0: Keyword argument order is maintained for Python 3.6 and later.
If you map all your regex so that each dictionary value holds a lambda instead of just the regex, you can simply unpack the dic into assign:
lambda_dict = {
    col:
        lambda x, regex=regex: (
            x['event']
            .str.contains(regex)
            .astype(int)
        )
    for col, regex in Dic.items()
}
res = df.assign(**lambda_dict)
EDIT
Here's an example:
import pandas as pd
import random

random.seed(0)
events = ['apple_one', 'chicken_one', 'chicken_two', 'apple_two']
data = [random.choice(events) for __ in range(10)]
df = pd.DataFrame(data, columns=['event'])

regex_dict = {
    'apples': 'apple',
    'chickens': 'chicken',
    'ones': 'one',
    'twos': 'two',
}

lambda_dict = {
    col:
        lambda x, regex=regex: (
            x['event']
            .str.contains(regex)
            .astype(int)
        )
    for col, regex in regex_dict.items()
}

res = df.assign(**lambda_dict)
print(res)
# Output
         event  apples  chickens  ones  twos
0    apple_two       1         0     0     1
1    apple_two       1         0     0     1
2    apple_one       1         0     1     0
3  chicken_two       0         1     0     1
4    apple_two       1         0     0     1
5    apple_two       1         0     0     1
6  chicken_two       0         1     0     1
7    apple_two       1         0     0     1
8  chicken_two       0         1     0     1
9  chicken_one       0         1     1     0
The problem with the prior code was that the regex was only evaluated during the last loop. Adding it as a default argument fixes this.
This can do what you want to do:
pd.concat([df, pd.DataFrame({a: list(df["event"].str.contains(b))
                             for a, b in Dic.items()})], axis=1)
Actually using a for loop will do the same
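That plain for loop might look like this (a sketch with a small toy Dic; str.contains returns booleans, cast to 0/1 with astype):

```python
import pandas as pd

df = pd.DataFrame({"event": ["apple_one", "chicken_two"]})
Dic = {"apples": "apple", "ones": "one"}

# One new 0/1 column per (name, regex) pair
for col, regex in Dic.items():
    df[col] = df["event"].str.contains(regex).astype(int)

print(df["apples"].tolist())  # [1, 0]
print(df["ones"].tolist())    # [1, 0]
```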
If I understand your question correctly, you're trying to rename the columns, in which case I think you could just use pandas' rename function. This would look like:
df_res = df_res.rename(mapper=Dic)
-Ben

python - Replace first five characters in a column with asterisks

I have a column called SSN in a CSV file with values like this
289-31-9165
I need to loop through the values in this column and replace the first five characters so it looks like this
***-**-9165
Here's the code I have so far:
emp_file = "Resources/employee_data1.csv"
emp_pd = pd.read_csv(emp_file)
new_ssn = emp_pd["SSN"].str.replace([:5], "*")
emp_pd["SSN"] = new_ssn
How do I loop through the values and replace just the first five numbers (only) with asterisks, keeping the hyphens as they are?
Similar to Mr. Me, this will instead remove everything before the first 6 characters and replace them with your new format.
emp_pd["SSN"] = emp_pd["SSN"].apply(lambda x: "***-**" + x[6:])
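A quick self-contained check of that lambda (sample SSNs made up here for illustration):

```python
import pandas as pd

emp_pd = pd.DataFrame({"SSN": ["289-31-9165", "111-22-3333"]})

# Keep everything from index 6 on (e.g. "-9165") and prepend the masked prefix
emp_pd["SSN"] = emp_pd["SSN"].apply(lambda x: "***-**" + x[6:])

print(emp_pd["SSN"].tolist())
# ['***-**-9165', '***-**-3333']
```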
You can simply achieve this with the replace() method:
Example dataframe :
borrows from #AkshayNevrekar..
>>> df
           ssn
0  111-22-3333
1  121-22-1123
2  345-87-3425
Result:
>>> df.replace(r'^\d{3}-\d{2}', "***-**", regex=True)
           ssn
0  ***-**-3333
1  ***-**-1123
2  ***-**-3425
OR
>>> df.ssn.replace(r'^\d{3}-\d{2}', "***-**", regex=True)
0    ***-**-3333
1    ***-**-1123
2    ***-**-3425
Name: ssn, dtype: object
OR:
df['ssn'] = df['ssn'].str.replace(r'^\d{3}-\d{2}', "***-**", regex=True)
Put your asterisks in front, then grab the last four digits with the .str accessor (a plain [-4:] would slice the rows of the Series rather than each string):
new_ssn = '***-**-' + emp_pd["SSN"].str[-4:]
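A runnable sketch of this approach; note the .str accessor is what makes the slice apply to each string element rather than to the rows of the Series (one-row frame made up for illustration):

```python
import pandas as pd

emp_pd = pd.DataFrame({"SSN": ["289-31-9165"]})

# .str[-4:] slices each string element of the Series
new_ssn = '***-**-' + emp_pd["SSN"].str[-4:]

print(new_ssn.tolist())
# ['***-**-9165']
```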
You can use regex
import re

df = pd.DataFrame({'ssn': ['111-22-3333', '121-22-1123', '345-87-3425']})

def func(x):
    return re.sub(r'\d{3}-\d{2}', '***-**', x)

df['ssn'] = df['ssn'].apply(func)
print(df)
Output:
           ssn
0  ***-**-3333
1  ***-**-1123
2  ***-**-3425

Python: Add 0/Zero in a string inside a cell

I have this sample data in a cell:
EmployeeID
2016-CT-1028
2016-CT-1028
2017-CT-1063
2017-CT-1063
2015-CT-948
2015-CT-948
So, my problem is: how can I add a 0 inside this data 2015-CT-948 to make it look like this: 2015-CT-0948?
I tried this code:
pattern = re.compile(r'(\d\d+)-(\w\w)-(\d\d\d)')
newlist = list(filter(pattern.match, idList))
Just to get the matching rows with the regex pattern, then add the 0 with zfill(), but it's not working. Can someone give me an idea of how to do it, either with regex or in pandas? Thank you!
This is one approach using zfill
Ex:
import pandas as pd

def custZfill(val):
    val = val.split("-")
    # alternative: split on the last "-" only
    # val = val.rsplit("-", 1)
    val[-1] = val[-1].zfill(4)
    return "-".join(val)

df = pd.DataFrame({"EmployeeID": ["2016-CT-1028", "2016-CT-1028",
                                  "2017-CT-1063", "2017-CT-1063",
                                  "2015-CT-948", "2015-CT-948"]})
print(df["EmployeeID"].apply(custZfill))
Output:
0    2016-CT-1028
1    2016-CT-1028
2    2017-CT-1063
3    2017-CT-1063
4    2015-CT-0948
5    2015-CT-0948
Name: EmployeeID, dtype: object
With pandas it can be solved with split instead of regex:
df['EmployeeID'].apply(lambda x: '-'.join(x.split('-')[:-1] + [x.split('-')[-1].zfill(4)]))
In pandas, you could use str.replace
df['EmployeeID'] = df.EmployeeID.str.replace(r'-(\d{3})$', r'-0\1', regex=True)
# Output:
0    2016-CT-1028
1    2016-CT-1028
2    2017-CT-1063
3    2017-CT-1063
4    2015-CT-0948
5    2015-CT-0948
Name: EmployeeID, dtype: object
If the format of the IDs is strictly defined, you can also use a simple list comprehension for this job:
ids = [
    '2017-CT-1063',
    '2015-CT-948',
    '2015-CT-948',
]
new_ids = [id if len(id) == 12 else id[0:8] + '0' + id[8:] for id in ids]
print(new_ids)
# ['2017-CT-1063', '2015-CT-0948', '2015-CT-0948']
Here's a one liner:
df['EmployeeID'].apply(lambda x: '-'.join(xi if i != 2 else '%04d' % int(xi) for i, xi in enumerate(x.split('-'))))

Select Pandas rows with regex match

I have the following data-frame, and I have an input list of values.
I want to match each item from the input list against the Symbol and Synonyms columns in the data-frame and extract only those rows where the input value appears in either the Symbol column or the Synonyms column (note that the values there are separated by the '|' symbol).
In the output data-frame I need an additional column Input_symbol which denotes the matching value, so the desired output should look like the image below.
How can I do this?
IIUIC, use
In [346]: df[df.Synonyms.str.contains('|'.join(mylist))]
Out[346]:
     Symbol                   Synonyms
0      A1BG       A1B|ABG|GAB|HYST2477
1       A2M  A2MD|CPAMD5|FWP007|S863-7
2     A2MP1                       A2MP
6  SERPINA3       AACT|ACT|GIG24|GIG25
Check both columns with str.contains, chain the conditions with | (or), and finally filter by boolean indexing:
mylist = ['GAB', 'A2M', 'GIG24']
m1 = df.Synonyms.str.contains('|'.join(mylist))
m2 = df.Symbol.str.contains('|'.join(mylist))
df = df[m1 | m2]
Another solution is to np.logical_or.reduce all the masks created by a list comprehension:
masks = [df[x].str.contains('|'.join(mylist)) for x in ['Symbol','Synonyms']]
m = np.logical_or.reduce(masks)
Or by apply, then use DataFrame.any for check at least one True per row:
m = df[['Symbol','Synonyms']].apply(lambda x: x.str.contains('|'.join(mylist))).any(axis=1)
df = df[m]
print (df)
     Symbol                   Synonyms
0      A1BG       A1B|ABG|GAB|HYST2477
1       A2M  A2MD|CPAMD5|FWP007|S863-7
2     A2MP1                       A2MP
6  SERPINA3       AACT|ACT|GIG24|GIG25
The question has changed. What you want to do now is to look through the two columns (Symbol and Synonyms) and if you find a value that is inside mylist return it. If no match you can return 'No match!' (for instance).
import pandas as pd
import io
s = '''\
Symbol,Synonyms
A1BG,A1B|ABG|GAB|HYST2477
A2M,A2MD|CPAMD5|FWP007|S863-7
A2MP1,A2MP
NAT1,AAC1|MNAT|NAT-1|NATI
NAT2,AAC2|NAT-2|PNAT
NATP,AACP|NATP1
SERPINA3,AACT|ACT|GIG24|GIG25'''
mylist = ['GAB', 'A2M', 'GIG24']
df = pd.read_csv(io.StringIO(s))
# Store the lookup series
lookup_serie = df['Symbol'].str.cat(df['Synonyms'], '|').str.split('|')
# Lambda returning the first value found in mylist, or 'No match!' if none matches
f = lambda x: next((i for i in x if i in mylist), 'No match!')
df.insert(0, 'Input_Symbol', lookup_serie.apply(f))
print(df)
Returns
  Input_Symbol    Symbol                   Synonyms
0          GAB      A1BG       A1B|ABG|GAB|HYST2477
1          A2M       A2M  A2MD|CPAMD5|FWP007|S863-7
2    No match!     A2MP1                       A2MP
3    No match!      NAT1       AAC1|MNAT|NAT-1|NATI
4    No match!      NAT2            AAC2|NAT-2|PNAT
5    No match!      NATP                 AACP|NATP1
6        GIG24  SERPINA3       AACT|ACT|GIG24|GIG25
Old solution:
f = lambda x: [i for i in x.split('|') if i in mylist] != []
m1 = df['Symbol'].apply(f)
m2 = df['Synonyms'].apply(f)
df[m1 | m2]
