Barplot of two columns based on specific condition - python

I was given a task where I'm supposed to plot one column based on a condition on another column.
For further information here's the code:
# TODO: Plot the Male employee first name on 'Y' axis while Male salary is on 'X' axis
import pandas as pd
import matplotlib.pyplot as plt
data = pd.read_excel("C:\\users\\HP\\Documents\\Datascience task\\Employee.xlsx")
print(data.head(5))
Output:
First Name Last Name Gender Age Experience (Years) Salary
0 Arnold Carter Male 21 10 8344
1 Arthur Farrell Male 20 7 6437
2 Richard Perry Male 28 3 8338
3 Ellia Thomas Female 26 4 8870
4 Jacob Kelly Male 21 4 548
How do I plot the 'First Name' column vs. the 'Salary' column for the first 5 rows where 'Gender' is Male?

First filter out the male rows, then extract the first name and salary for plotting.
The code below selects the first five male employees and converts their first names and salaries to x and y lists (the DataFrame loaded above is called data and the name column is 'First Name'):
x = list(data[data['Gender'] == "Male"][:5]['First Name'])
y = list(data[data['Gender'] == "Male"][:5]['Salary'])
print(x)
print(y)
Output:
['Arnold', 'Arthur', 'Richard', 'Jacob']
[8344, 6437, 8338, 548]
Note that there are only four male employees in the DataFrame.
Then we can plot whatever chart we need:
plt.bar(x, y, color = ['r', 'g', 'b', 'y']);
Output: a bar chart of the four salaries, one colored bar per first name.

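Since the original TODO asks for the first names on the 'Y' axis and salary on the 'X' axis, a horizontal bar chart matches that layout; a sketch reusing the same x and y lists:
plt.barh(x, y, color=['r', 'g', 'b', 'y'])  # names on Y, salaries on X
plt.xlabel('Salary')
plt.ylabel('First Name')
plt.show()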
seaborn can help as well:
import seaborn as sns
import matplotlib.pyplot as plt
sns.barplot(x=data[data['Gender'] == "Male"]['First Name'][:5], y=data[data['Gender'] == "Male"]['Salary'][:5])
plt.xlabel('First Names')
plt.ylabel('Salary')
plt.title('Barplot of Male Employees')
plt.show()

Replace certain values in df2 based on common values in df1

I have two dataframes, and I need to update the first one based on values in the second one, where they exist. In the sample below, the goal is to replace a student_Id that appears in the 'old_id' column of updatedId with the corresponding 'new_id'.
import pandas as pd
import numpy as np
student = {
    'Name': ['John', 'Jay', 'sachin', 'Geetha', 'Amutha', 'ganesh'],
    'gender': ['male', 'male', 'male', 'female', 'female', 'male'],
    'math score': [50, 100, 70, 80, 75, 40],
    'student_Id': ['1234', '6788', 'xyz', 'abcd', 'ok83', '234v'],
}
updatedId = {
    'old_id': ['ok83', '234v'],
    'new_id': ['83ko', 'v432'],
}
df_student = pd.DataFrame(student)
df_updated_id = pd.DataFrame(updatedId)
print(df_student)
print(df_updated_id)
# Method with np.where
for index, row in df_updated_id.iterrows():
    df_student['student_Id'] = np.where(df_student['student_Id'] == row['old_id'],
                                        row['new_id'], df_student['student_Id'])
# print(df_student)
# Method with DataFrame.mask
for index, row in df_updated_id.iterrows():
    df_student['student_Id'].mask(df_student['student_Id'] == row['old_id'], row['new_id'], inplace=True)
print(df_student)
Both methods above work and yield the correct result:
Name gender math score student_Id
0 John male 50 1234
1 Jay male 100 6788
2 sachin male 70 xyz
3 Geetha female 80 abcd
4 Amutha female 75 ok83
5 ganesh male 40 234v
old_id new_id
0 ok83 83ko
1 234v v432
Name gender math score student_Id
0 John male 50 1234
1 Jay male 100 6788
2 sachin male 70 xyz
3 Geetha female 80 abcd
4 Amutha female 75 83ko
5 ganesh male 40 v432
However, the actual student data has about 500,000 rows and updated_id has 6,000 rows, so I run into performance issues: the loop is very slow.
A simple timer was placed to observe the runtime for different numbers of df_updated_id records:
100 rows - numpy Time=3.9020769596099854; mask Time=3.9169061183929443
500 rows - numpy Time=20.42293930053711; mask Time=19.768696784973145
1000 rows - numpy Time=40.06309795379639; mask Time=37.26559829711914
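A sketch of the kind of timer used, with time.perf_counter wrapping the np.where loop above:
import time

start = time.perf_counter()
for index, row in df_updated_id.iterrows():
    df_student['student_Id'] = np.where(df_student['student_Id'] == row['old_id'],
                                        row['new_id'], df_student['student_Id'])
print(f"numpy Time={time.perf_counter() - start}")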
My question is whether I can optimize this using a merge (join), or by ditching iterrows altogether. I tried something along the lines of 'Replace dataframe column values based on matching id in another dataframe' and 'How to iterate over rows in a DataFrame in Pandas', but failed to get it to work.
Please advise.
You can also try with map:
df_student['student_Id'] = (
    df_student['student_Id'].map(df_updated_id.set_index('old_id')['new_id'])
                            .fillna(df_student['student_Id'])
)
print(df_student)
# Output
Name gender math score student_Id
0 John male 50 1234
1 Jay male 100 6788
2 sachin male 70 xyz
3 Geetha female 80 abcd
4 Amutha female 75 83ko
5 ganesh male 40 v432
Update
The updated_id values aren't unique, so the data needs further pre-processing.
In this case, you could drop duplicates first, treating the last value (keep='last') as the most recent one for a given old_id:
sr = df_updated_id.drop_duplicates('old_id', keep='last') \
                  .set_index('old_id')['new_id']
df_student['student_Id'] = df_student['student_Id'].map(sr) \
                                                   .fillna(df_student['student_Id'])
Note: this is exactly what @BENY's answer does: because it builds a dict-like mapping, only the last occurrence of each old_id is kept. However, if you want to keep the first value that appears, that code doesn't work; with drop_duplicates you can adjust the keep parameter, as shown below.
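For example, to keep the first mapping seen for each old_id instead of the last:
sr = df_updated_id.drop_duplicates('old_id', keep='first') \
                  .set_index('old_id')['new_id']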
We can just use replace:
df_student.replace({'student_Id':df_updated_id.set_index('old_id')['new_id']},inplace=True)
df_student
Out[337]:
Name gender math score student_Id
0 John male 50 1234
1 Jay male 100 6788
2 sachin male 70 xyz
3 Geetha female 80 abcd
4 Amutha female 75 83ko
5 ganesh male 40 v432
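The question also asks whether a merge (join) could work. It can: a left join gives the same vectorized update. A minimal sketch, assuming each old_id appears at most once in df_updated_id:
merged = df_student.merge(df_updated_id, how='left',
                          left_on='student_Id', right_on='old_id')
# where a match was found, take new_id; otherwise keep the original id
df_student['student_Id'] = merged['new_id'].fillna(merged['student_Id'])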

seaborn point plot visualization

I am using a point plot to show the relationship between "workclass", "sex", "occupation" and "income exceeds 50K or not". However, the result is a mess: the legend entries stick together, Female and Male are both shown in blue in the legend, and so on.
#Co-relate categorical features
grid = sns.FacetGrid(train, row='occupation', size=6, aspect=1.6)
grid.map(sns.pointplot, 'workclass', 'exceeds50K', 'sex', palette='deep', markers = ["o", "x"] )
grid.add_legend()
Please advise how to fit the size of the plot. Thanks!
It sounds like 'exceeds50K' is a categorical variable. Your y variable needs to be continuous for a point plot. So, assuming this is your dataset:
import pandas as pd
import seaborn as sns
df = pd.read_csv("https://raw.githubusercontent.com/katreparitosh/Income-Predictor-Model/master/Database/adult.csv")
We simplify some categories for the sake of the example:
df['native.country'] = [i if i == 'United-States' else 'others' for i in df['native.country'] ]
df['race'] = [i if i == 'White' else 'others' for i in df['race'] ]
df.head()
age workclass fnlwgt education education.num marital.status occupation relationship race sex capital.gain capital.loss hours.per.week native.country income
0 90 ? 77053 HS-grad 9 Widowed ? Not-in-family White Female 0 4356 40 United-States <=50K
1 82 Private 132870 HS-grad 9 Widowed Exec-managerial Not-in-family White Female 0 4356 18 United
If the y variable is categorical, you might want to use a barplot:
sns.catplot(hue='income', x='sex', palette='deep', data=df,
            col='native.country', row='race', kind='count',
            height=3, aspect=1.6)
If it is continuous, for example age, you can see it works (note that the size parameter of FacetGrid in your code is called height in newer seaborn versions):
grid = sns.FacetGrid(df, row='race', height=3, aspect=1.6)
grid.map(sns.pointplot, 'native.country', 'age', 'sex', palette='deep', markers = ["o", "x"] )
grid.add_legend()
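If you do want the original 'exceeds 50K' relationship on a point plot, you can encode income as 0/1 first; the y axis then shows the proportion earning above 50K. A sketch, assuming the income column holds the strings '<=50K' and '>50K' as in this file:
df['exceeds50K'] = (df['income'] == '>50K').astype(int)  # 1 if above 50K, else 0
grid = sns.FacetGrid(df, row='race', height=3, aspect=1.6)
grid.map(sns.pointplot, 'native.country', 'exceeds50K', 'sex', palette='deep', markers=["o", "x"])
grid.add_legend()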

Pandas groupby chaining: rename multi-index column to one row column

I am doing a chain of operations on a pandas DataFrame, where I need to include the column-rename step in the chain. The situation is like this:
import numpy as np
import pandas as pd
import seaborn as sns
df = sns.load_dataset('tips')
g = (df.groupby(['sex','time','smoker'])
       .agg({'tip': ['count','sum'],
             'total_bill': ['count','mean']})
       .reset_index()
)
print(g.head())
This gives:
sex time smoker tip total_bill
count sum count mean
0 Male Lunch Yes 13 36.28 13 17.374615
1 Male Lunch No 20 58.83 20 18.486500
2 Male Dinner Yes 47 146.79 47 23.642553
3 Male Dinner No 77 243.17 77 20.130130
4 Female Lunch Yes 10 28.91 10 17.431000
Without chaining, I can do it manually in another line:
g.columns = [i[0] + '_' + i[1] if i[1] else i[0]
             for i in g.columns.ravel()]
It works fine, but I want to do this column rename inside the chain so that I can chain further operations. How can I do so?
Required output:
g = (df.groupby(['sex','time','smoker'])
       .agg({'tip': ['count','sum'],
             'total_bill': ['count','mean']})
       .reset_index()
       .rename(something here)
       # or .set_axis(something here)
       # or .pipe(something here), I am not sure
)  # If I could do this, I could chain further
sex time smoker tip_count tip_sum total_bill_count total_bill_mean
0 Male Lunch Yes 13 36.28 13 17.374615
1 Male Lunch No 20 58.83 20 18.486500
2 Male Dinner Yes 47 146.79 47 23.642553
3 Male Dinner No 77 243.17 77 20.130130
4 Female Lunch Yes 10 28.91 10 17.431000
You can use pipe to handle this:
import numpy as np
import pandas as pd
import seaborn as sns
df = sns.load_dataset('tips')
g = (df.groupby(['sex','time','smoker'])
       .agg({'tip': ['count','sum'],
             'total_bill': ['count','mean']})
       .reset_index()
       .pipe(lambda x: x.set_axis([f'{a}_{b}' if b else a for a, b in x.columns],
                                  axis=1))
)
print(g.head())
Output:
sex time smoker tip_count tip_sum total_bill_count total_bill_mean
0 Male Lunch Yes 13 36.28 13 17.374615
1 Male Lunch No 20 58.83 20 18.486500
2 Male Dinner Yes 47 146.79 47 23.642553
3 Male Dinner No 77 243.17 77 20.130130
4 Female Lunch Yes 10 28.91 10 17.431000
Note: I am using f-string formatting, so Python 3.6+ is required.
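An alternative to the inline comprehension is to join the two column levels and strip the trailing underscore left on the plain grouping columns. A sketch (set_axis in recent pandas no longer takes inplace, so it is omitted here):
g = (df.groupby(['sex','time','smoker'])
       .agg({'tip': ['count','sum'],
             'total_bill': ['count','mean']})
       .reset_index()
       # ('tip','count') -> 'tip_count'; ('sex','') -> 'sex_' -> 'sex'
       .pipe(lambda x: x.set_axis(x.columns.map('_'.join).str.rstrip('_'), axis=1))
)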

How to take difference of matching df['keys'] and create new column for them

I'm trying to find the wage gap between genders given a set of majors.
Here is a text version of my table:
gender field group logwage
0 male BUSINESS 7.229572
10 female BUSINESS 7.072464
1 male COMM/JOURN 7.108538
11 female COMM/JOURN 7.015018
2 male COMPSCI/STAT 7.340410
12 female COMPSCI/STAT 7.169401
3 male EDUCATION 6.888829
13 female EDUCATION 6.770255
4 male ENGINEERING 7.397082
14 female ENGINEERING 7.323996
5 male HUMANITIES 7.053048
15 female HUMANITIES 6.920830
6 male MEDICINE 7.319011
16 female MEDICINE 7.193518
17 female NATSCI 6.993337
7 male NATSCI 7.089232
18 female OTHER 6.881126
8 male OTHER 7.091698
9 male SOCSCI/PSYCH 7.197572
19 female SOCSCI/PSYCH 6.968322
diff hasn't worked for me, as it takes the difference between every pair of consecutive rows, crossing from one major into the next.
Here is the code as it stands now:
for row in sorted_mfield:
    if sorted_mfield['field group'] == sorted_mfield['field group'].shift(1):
        diff = lambda x: x[0] - x[1]
My next strategy would be to go back to the unsorted dataframe, where male and female were their own columns, and take the difference from there; but since I've spent an hour trying to do this and am pretty new to pandas, I thought I would ask how this works. Thanks.
A solution using pandas.DataFrame.shift() on a sorted version of the data:
df.sort_values(by=['field group', 'gender'], inplace=True)
df['gap'] = df.logwage - df.logwage.shift(1)
df[df.gender == 'male'][['field group', 'gap']]
Producing the following output with the sample data:
field group gap
0 BUSINESS 0.157108
2 COMM/JOURN 0.093520
4 COMPSCI/STAT 0.171009
6 EDUCATION 0.118574
8 ENGINEERING 0.073086
10 HUMANITIES 0.132218
12 MEDICINE 0.125493
15 NATSCI 0.095895
17 OTHER 0.210572
18 SOCSCI/PSYCH 0.229250
Note: this assumes you always have a pair of values for each field group. If you want to validate that, or eliminate field groups without such a pair, the code below does the filtering:
df_grouped = df.groupby('field group')
df_filtered = df_grouped.filter(lambda x: len(x) == 2)
I'd consider reshaping your DataFrame with pivot, which makes the computation straightforward.
Code:
pivoted = df.pivot(index='field group', columns='gender', values='logwage').rename_axis([None], axis=1)
pivoted
# female male
#field group
#BUSINESS 7.072464 7.229572
#COMM/JOURN 7.015018 7.108538
#COMPSCI/STAT 7.169401 7.340410
#EDUCATION 6.770255 6.888829
#ENGINEERING 7.323996 7.397082
#HUMANITIES 6.920830 7.053048
#MEDICINE 7.193518 7.319011
#NATSCI 6.993337 7.089232
#OTHER 6.881126 7.091698
#SOCSCI/PSYCH 6.968322 7.197572
pivoted.male - pivoted.female
#field group
#BUSINESS 0.157108
#COMM/JOURN 0.093520
#COMPSCI/STAT 0.171009
#EDUCATION 0.118574
#ENGINEERING 0.073086
#HUMANITIES 0.132218
#MEDICINE 0.125493
#NATSCI 0.095895
#OTHER 0.210572
#SOCSCI/PSYCH 0.229250
#dtype: float64
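If you prefer to keep everything in one chain, the pivot combines naturally with assign; a sketch starting from the original long-form df:
gap = (df.pivot(index='field group', columns='gender', values='logwage')
         .assign(gap=lambda d: d.male - d.female))  # gap column = male minus female
print(gap['gap'])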
