Wide to Long MultiIndex dataset using pandas - python

I've solved part of this question (Wide to long dataset using pandas) but still need more help.
I have a dataset with columns such as:
IA01_Raw_Baseline, IA01_Raw_Midline, IA01_Class1_Endline, etc. I want to break these up so that the middle word (Raw, Class1, etc.) remains a column, while the IAx part and the timeline (Baseline, Midline, Endline) become two separate columns. For example, for a row of data:
ID   Country  Type  Region  Gender  IA01_Raw_EndLine  IA01_Raw_Baseline  IA01_Raw_Midline  IA02_Raw  QA_Include  QA_Comments
SC1  France   A     Europe  Male    4                 8                  1                           yes         N/A
It should become:
ID   Country  Type  Region  Gender  Timeline  IA    Raw  Class1  Class2  QA_Include  QA_Comments
SC1  France   A     Europe  Male    Endline   IA01  4    N/A     N/A     yes         N/A
SC1  France   A     Europe  Male    Baseline  IA01  8    N/A     N/A     yes         N/A
SC1  France   A     Europe  Male    Midline   IA01  1    N/A     N/A     yes         N/A
This is just the conversion of one row; I have 500+ columns, with IA numbers running from 01 to 12 and attributes like Raw, Class1, Class2, Amount, etc., each of which has Baseline, Midline, and Endline variants.
When converting, I first broke the columns into two groups: idVars holds the columns that will form the index, and valueVars holds the IA01_Raw_Baseline-style columns that will pivot:
idVars = list(gd_df.columns[0:40]) + list(gd_df.columns[472:527])  # columns that will not pivot
valueVars = gd_df.columns[41:472]  # columns that will pivot
gd_df2 = gd_df.set_index(idVars)
# split each name once on '_': 'IA01_Raw_Baseline' -> ('IA01', 'Raw_Baseline')
gd_df2.columns = pd.MultiIndex.from_tuples(map(tuple, gd_df2.columns.str.split('_', n=1)))
gd_out = gd_df2.stack(level=0).reset_index().rename({'level_7': 'IA'}, axis=1)
This gave me the IA column the way I wanted, but the timeline is still embedded in the remaining column names (e.g. Raw_Baseline). What should I change in my code so that it produces the target output shown above?
UPDATE:
By doing this:
s = df.set_index(idVars)
s.columns = pd.MultiIndex.from_tuples(s.columns.str.split('_').map(tuple), names=['IA', 'raw', 'Timeline'])
s.stack([0, 2]).reset_index()  # note: this result is never assigned back to s
s.to_excel(r'gd_out1.xlsx')
I'm getting an Excel file that is still not reshaped (the stacked result above is discarded rather than saved).

Since you mentioned wide to long, we can use wide_to_long:
s = pd.wide_to_long(df, ['IA01_Raw'],
                    i=['ID', 'Country', 'Type', 'Region', 'Gender',
                       'IA02_Raw', 'QA_Include', 'QA_Comments'],
                    j='Timeline', suffix=r'\w+', sep='_')
s.columns = pd.MultiIndex.from_tuples(s.columns.str.split('_').map(tuple), names=['IA', 'raw'])
s.stack(0).reset_index()
Out[27]:
raw ID Country Type Region ... QA_Comments Timeline IA Raw
0 SC1 France A Europe ... NaN EndLine IA01 4
1 SC1 France A Europe ... NaN Baseline IA01 8
2 SC1 France A Europe ... NaN Midline IA01 1
[3 rows x 11 columns]
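Here the stub list is hard-coded to the single 'IA01_Raw' stub from the sample row. For the full dataset (IA01 through IA12 crossed with Raw, Class1, Class2, Amount, ...), you would pass every stub to wide_to_long; a minimal sketch of one way to build that list, assuming all reshapable columns follow the IAxx_Attribute_Timeline pattern described in the question:
# keep the 'IAxx_Attribute' part of every column that ends in a timeline suffix
timelines = ('_baseline', '_midline', '_endline')
stubs = sorted({c.rsplit('_', 1)[0] for c in df.columns
                if c.lower().endswith(timelines)})
s = pd.wide_to_long(df, stubs,
                    i=['ID', 'Country', 'Type', 'Region', 'Gender',
                       'IA02_Raw', 'QA_Include', 'QA_Comments'],
                    j='Timeline', suffix=r'\w+', sep='_')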
Update
s = df.set_index(['ID', 'Country', 'Type', 'Region', 'Gender',
                  'QA_Include', 'QA_Comments', 'IA02_Raw'])
s.columns = pd.MultiIndex.from_tuples(s.columns.str.split('_').map(tuple),
                                      names=['IA', 'raw', 'Timeline'])
s.stack([0, 2]).reset_index()
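For reference, a self-contained sketch of this approach on the single sample row from the question (IA02_Raw is omitted here since its value is not shown in the sample row; in the real data it belongs in the index with the other id columns):
import pandas as pd

df = pd.DataFrame({'ID': ['SC1'], 'Country': ['France'], 'Type': ['A'],
                   'Region': ['Europe'], 'Gender': ['Male'],
                   'IA01_Raw_EndLine': [4], 'IA01_Raw_Baseline': [8],
                   'IA01_Raw_Midline': [1], 'QA_Include': ['yes'],
                   'QA_Comments': [None]})
s = df.set_index(['ID', 'Country', 'Type', 'Region', 'Gender',
                  'QA_Include', 'QA_Comments'])
# 'IA01_Raw_EndLine' -> ('IA01', 'Raw', 'EndLine') becomes a 3-level column header
s.columns = pd.MultiIndex.from_tuples(s.columns.str.split('_').map(tuple),
                                      names=['IA', 'raw', 'Timeline'])
# stack the IA and Timeline levels into rows; 'raw' (Raw, Class1, ...) stays as columns
print(s.stack([0, 2]).reset_index())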


Dataframe - filter the values of a particular column with isin()

I have a pandas dataframe with a column "Bio Location", and I would like to filter it so that I keep only the locations that contain one of the city names in my list. I wrote the code below, which works except for one problem.
For example, if the location is "Paris France" and I have Paris in my list, it returns the result. However, if it were "France Paris", it would not return "Paris". Do you have a solution? Maybe use regex? Thanks a lot!
df = pd.read_csv(path_to_file, encoding='utf-8', sep=',')
cities = ['Paris', 'Bruxelles', 'Madrid']
values = df[df['Bio Location'].isin(cities)]
values.to_csv(r'results.csv', index=False)
What you want here is .str.contains():
1. The DF I used to test:
# tested with the city once at the start, once in the middle and once at the end of a string
df = pd.DataFrame({
    'col1': ['Paris France', 'France Paris Test', 'France Paris',
             'Madrid Spain', 'Spain Madrid Test', 'Spain Madrid']
})
df
Result:
                col1
0       Paris France
1  France Paris Test
2       France Paris
3       Madrid Spain
4  Spain Madrid Test
5       Spain Madrid
2. Then applying the code below:
Updated following comment
reg = 'Paris|Madrid'
df = df[df.col1.str.contains(reg)]
df
Result:
                col1
0       Paris France
1  France Paris Test
2       France Paris
3       Madrid Spain
4  Spain Madrid Test
5       Spain Madrid
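One caveat: str.contains matches substrings, so 'Paris' would also match a value like 'Parisville'. If that matters, a sketch that builds the pattern from the original city list with word boundaries (na=False treats missing locations as non-matches):
import re

cities = ['Paris', 'Bruxelles', 'Madrid']
# \b word boundaries ensure only whole words match; re.escape guards special characters
reg = r'\b(?:' + '|'.join(map(re.escape, cities)) + r')\b'
values = df[df['Bio Location'].str.contains(reg, na=False)]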

Python split one column into multiple columns and reattach the split columns into original dataframe

I want to split one column from my dataframe into multiple columns, then attach those columns back to my original dataframe and divide my original dataframe based on whether the split columns include a specific string.
I have a dataframe that has a column with values separated by semicolons like below.
import pandas as pd
data = {'ID': ['1', '2', '3', '4', '5', '6', '7'],
        'Residence': ['USA;CA;Los Angeles;Los Angeles', 'USA;MA;Suffolk;Boston',
                      'Canada;ON', 'USA;FL;Charlotte', 'NA', 'Canada;QC', 'USA;AZ'],
        'Name': ['Ann', 'Betty', 'Carl', 'David', 'Emily', 'Frank', 'George'],
        'Gender': ['F', 'F', 'M', 'M', 'F', 'M', 'M']}
df = pd.DataFrame(data)
Then I split the column as below, and separated the split column into two based on whether it contains the string USA or not.
address = df['Residence'].str.split(';',expand=True)
country = address[0] != 'USA'
USA, nonUSA = address[~country], address[country]
Now if you run USA and nonUSA, you'll note that there are extra columns in nonUSA, and also a row with no country information. So I got rid of those NA values.
USA.columns = ['Country', 'State', 'County', 'City']
nonUSA.columns = ['Country', 'State']
nonUSA = nonUSA.dropna(axis=0, subset=[1])
nonUSA = nonUSA[nonUSA.columns[0:2]]
Now I want to attach USA and nonUSA to my original dataframe, so that I will get two dataframes that look like below:
USAdata = pd.DataFrame({'ID':['1','2','4','7'],
'Name':['Ann','Betty','David','George'],
'Gender':['F','F','M','M'],
'Country':['USA','USA','USA','USA'],
'State':['CA','MA','FL','AZ'],
'County':['Los Angeles','Suffolk','Charlotte','None'],
'City':['Los Angeles','Boston','None','None']})
nonUSAdata = pd.DataFrame({'ID':['3','6'],
'Name':['Carl','Frank'],
'Gender':['M','M'],
'Country':['Canada', 'Canada'],
'State':['ON','QC']})
I'm stuck here, though. How can I split my original dataframe into people whose Residence includes USA or not, and attach the split columns from Residence (USA and nonUSA) back to my original dataframe?
(Also, I just uploaded everything I have so far, but I'm curious whether there's a cleaner/smarter way to do this.)
The index of the original data is unique and is not changed by the code below for either DataFrame, so you can use concat to put the two pieces back together and then attach them to the original with DataFrame.join or with concat along axis=1:
address = df['Residence'].str.split(';', expand=True)
country = address[0] != 'USA'
USA, nonUSA = address[~country], address[country]
USA.columns = ['Country', 'State', 'County', 'City']
nonUSA = nonUSA.dropna(axis=0, subset=[1])
nonUSA = nonUSA[nonUSA.columns[0:2]]
# renaming is done last, after dropping the extra columns, to avoid a length-mismatch error
nonUSA.columns = ['Country', 'State']
df = pd.concat([df, pd.concat([USA, nonUSA])], axis=1)
Or:
df = df.join(pd.concat([USA, nonUSA]))
print (df)
ID Residence Name Gender Country State \
0 1 USA;CA;Los Angeles;Los Angeles Ann F USA CA
1 2 USA;MA;Suffolk;Boston Betty F USA MA
2 3 Canada;ON Carl M Canada ON
3 4 USA;FL;Charlotte David M USA FL
4 5 NA Emily F NaN NaN
5 6 Canada;QC Frank M Canada QC
6 7 USA;AZ George M USA AZ
County City
0 Los Angeles Los Angeles
1 Suffolk Boston
2 NaN NaN
3 Charlotte None
4 NaN NaN
5 NaN NaN
6 None None
But it seems it is possible to simplify:
c = ['Country', 'State', 'County', 'City']
df[c] = df['Residence'].str.split(';',expand=True)
print (df)
ID Residence Name Gender Country State \
0 1 USA;CA;Los Angeles;Los Angeles Ann F USA CA
1 2 USA;MA;Suffolk;Boston Betty F USA MA
2 3 Canada;ON Carl M Canada ON
3 4 USA;FL;Charlotte David M USA FL
4 5 NA Emily F NA None
5 6 Canada;QC Frank M Canada QC
6 7 USA;AZ George M USA AZ
County City
0 Los Angeles Los Angeles
1 Suffolk Boston
2 None None
3 Charlotte None
4 None None
5 None None
6 None None
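And to finish with the two separate dataframes asked for (USAdata and nonUSAdata), a possible follow-up sketch on the simplified result above, dropping the helper Residence column and excluding the row without country information:
usa_mask = df['Country'] == 'USA'
USAdata = df[usa_mask].drop(columns='Residence')
nonUSAdata = (df[~usa_mask & df['Country'].notna() & (df['Country'] != 'NA')]
              .drop(columns=['Residence', 'County', 'City']))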

How to test string contains elements in list and assign the target element to another column via Pandas

I have a one-column list of company names. Some of the names contain a country name (e.g., "China" in "China A1", "Finland" in "C1 in Finland"). I want to extract each company's country based on its name and a pre-defined list of country names.
The original dataframe df looks like this:
Company name Country
0 China A1
1 Australia-A2
2 Belgium_C1
3 C1 in Finland
4 D1 of Greece
5 E2 for Pakistan
For now, I can only come up with an inefficient method. Here is my code:
country_list = ['China','America','Greece','Pakistan','Finland','Belgium','Japan','British','Australia']
for t in country_list:
    df.loc[df['Company name'].str.contains(t), 'Country'] = t
The result looks like:
Company name Country
0 China A1 China
1 Australia-A2 Australia
2 Belgium_C1 Belgium
3 C1 in Finland Finland
4 D1 of Greece Greece
5 E2 for Pakistan Pakistan
My worry is that when country_list contains a large number of elements (i.e., countries), the loop method will be time-consuming. Is there a simpler way to tackle this problem?
Here's one way using str.extract:
df['Country'] = df['Company name'].str.extract('('+'|'.join(country_list)+')')
Company name Country
0 China A1 China
1 Australia-A2 Australia
2 Belgium_C1 Belgium
3 C1 in Finland Finland
4 D1 of Greece Greece
5 E2 for Pakistan Pakistan
You need series.str.extract() here:
pat = r'({})'.format('|'.join(country_list))
# pat-->'(China|America|Greece|Pakistan|Finland|Belgium|Japan|British|Australia)'
df['Country'] = df['Company name'].str.extract(pat, expand=False)
Maybe use findall, in case you have more than one country name in a cell:
df["Company name"].str.findall('|'.join(country_list)).str[0]
Out[758]:
0 China
1 Australia
2 Belgium
3 Finland
4 Greece
5 Pakistan
Name: Company name, dtype: object
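And if a cell really can contain several country names, a small variant keeps all of them instead of only the first:
# join every match with a comma instead of taking just the first hit
df['Country'] = df['Company name'].str.findall('|'.join(country_list)).str.join(', ')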
Using str.extract with Regex
Ex:
import pandas as pd
country_list = ['China','America','Greece','Pakistan','Finland','Belgium','Japan','British','Australia']
df = pd.read_csv(filename)
df["Country"] = df["Company_name"].str.extract("("+"|".join(country_list)+ ")")
print(df)
Output:
Company_name Country
0 China A1 China
1 Australia-A2 Australia
2 Belgium_C1 Belgium
3 C1 in Finland Finland
4 D1 of Greece Greece
5 E2 for Pakistan Pakistan

How to create a stacked bar with three dataframes with three columns & three rows

I have three dataframes: df_Male, df_Female, df_Transgender.
Sample dataframe df_Male:
continent avg_count_country avg_age
Asia 55 5
Africa 65 10
Europe 75 8
df_Female
continent avg_count_country avg_age
Asia 50 7
Africa 60 12
Europe 70 0
df_Transgender
continent avg_count_country avg_age
Asia 30 6
Africa 40 11
America 80 10
Now our stacked bar graph should look like this:
The x axis will contain three ticks: Male, Female, Transgender.
The y axis will be Total_count (up to 100).
And within each bar, avg_age will be stacked.
I was trying something like this with a pivot table:
pivot_df = df.pivot(index='new_Columns', columns='avg_age', values='Values')
but I'm getting confused about how to plot this. Can anyone please help with how to concatenate the three dataframes into one, so that it creates Male, Female and Transgender columns?
This topic is handled here: https://pandas.pydata.org/pandas-docs/stable/merging.html
(Please note that the third continent in df_Transgender differs from the other dataframes: 'America' instead of 'Europe'; I changed that for the following plot, hoping that this is correct.)
frames = [df_Male, df_Female, df_Transgender]
df = pd.concat(frames, keys=['Male', 'Female', 'Transgender'])
continent avg_count_country avg_age
Male 0 Asia 55 5
1 Africa 65 10
2 Europe 75 8
Female 0 Asia 50 7
1 Africa 60 12
2 Europe 70 0
Transgender 0 Asia 30 6
1 Africa 40 11
2 Europe 80 10
import matplotlib.pyplot as plt

btm = [0, 0, 0]
for name, grp in df.groupby('continent', sort=False):
    plt.bar(grp.index.levels[1], grp.avg_age.values, bottom=btm,
            tick_label=grp.index.levels[0], label=name)
    btm = btm + grp.avg_age.values  # accumulate so each new segment stacks on top
plt.legend(ncol=3)
As you commented below that America in the third dataset was no mistake, you can add rows accordingly to each dataframe like this before you go on as above (note that append does not work in place, so the results must be assigned back):
df_Male = df_Male.append({'avg_age': 0, 'continent': 'America'}, ignore_index=True)
df_Female = df_Female.append({'avg_age': 0, 'continent': 'America'}, ignore_index=True)
df_Transgender = df_Transgender.append({'avg_age': 0, 'continent': 'Europe'}, ignore_index=True)
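Note that append returns a new frame rather than modifying in place (hence the assignments above), and it was removed entirely in pandas 2.0; an equivalent with concat:
df_Male = pd.concat([df_Male, pd.DataFrame([{'avg_age': 0, 'continent': 'America'}])], ignore_index=True)
df_Female = pd.concat([df_Female, pd.DataFrame([{'avg_age': 0, 'continent': 'America'}])], ignore_index=True)
df_Transgender = pd.concat([df_Transgender, pd.DataFrame([{'avg_age': 0, 'continent': 'Europe'}])], ignore_index=True)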

Merge two pandas dataframes to create a new dataframe with a specific operation

I have two dataframes as shown below.
Company Name BOD Position Ethnicity DOB Age Gender Degree ( Specialazation) Remark
0 Big Lots Inc. David J. Campisi Director, President and Chief Executive Offic... American 1956 61 Male Graduate NaN
1 Big Lots Inc. Philip E. Mallott Chairman of the Board American 1958 59 Male MBA, Finace NaN
2 Big Lots Inc. James R. Chambers Independent Director American 1958 59 Male MBA NaN
3 Momentive Performance Materials Inc Mahesh Balakrishnan director Asian 1983 34 Male BA Economics NaN
Company Name Net Sale Gross Profit Remark
0 Big Lots Inc. 5.2B 2.1B NaN
1 Momentive Performance Materials Inc 544M 146m NaN
2 Markel Corporation 5.61B 2.06B NaN
3 Noble Energy, Inc. 3.49B 2.41B NaN
4 Leidos Holding, Inc. 7.04B 852M NaN
I want to create a new dataframe from these two, so that the 2nd dataframe gains a count column per ethnicity for each company, such as American: 2, Mexican: 5, and so on, so that later on I can calculate a diversity score.
The output dataframe would look like:
Company Name   Net Sale  Gross Profit  Remark  American  Mexican  German  .....
Big Lots Inc.  5.2B      2.1B          NaN     2         0        5       ....
First get counts per group with groupby and size, reshape with unstack, and finally join to the second DataFrame:
df1 = pd.DataFrame({'Company Name':list('aabcac'),
'Ethnicity':['American'] * 3 + ['Mexican'] * 3})
df1 = df1.groupby(['Company Name', 'Ethnicity']).size().unstack(fill_value=0)
# slower alternative:
#df1 = pd.crosstab(df1['Company Name'], df1['Ethnicity'])
print (df1)
Ethnicity American Mexican
Company Name
a 2 1
b 1 0
c 0 2
df2 = pd.DataFrame({'Company Name':list('abc')})
print (df2)
Company Name
0 a
1 b
2 c
df3 = df2.join(df1, on=['Company Name'])
print (df3)
Company Name American Mexican
0 a 2 1
1 b 1 0
2 c 0 2
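As a sketch of the "later on" step from the question, a diversity score could now be computed from the count columns; the formula below (share of people outside the largest ethnic group) is just one hypothetical choice, not something specified in the thread:
# hypothetical score: 0 = fully homogeneous, values approaching 1 = evenly spread
eth = df3[['American', 'Mexican']]
df3['diversity'] = 1 - eth.max(axis=1) / eth.sum(axis=1)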
EDIT: You need to replace the unit suffix with the equivalent zeros and convert to floats:
print (df)
Name sale
0 A 100M
1 B 200M
2 C 5M
3 D 40M
4 E 10B
5 F 2B
# expand 'M' to six zeros and 'B' to nine, then cast to float
d = {'M': '0' * 6, 'B': '0' * 9}
df['a'] = df['sale'].replace(d, regex=True).astype(float)
print (df)
Name sale a
0 A 100M 1.000000e+08
1 B 200M 2.000000e+08
2 C 5M 5.000000e+06
3 D 40M 4.000000e+07
4 E 10B 1.000000e+10
5 F 2B 2.000000e+09
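An alternative sketch converts by multiplying with the unit factor instead of padding zeros, assuming every value ends in exactly one M or B suffix:
# split the numeric part from its suffix and multiply by the matching factor
mult = {'M': 1e6, 'B': 1e9}
df['a'] = df['sale'].str[:-1].astype(float) * df['sale'].str[-1].map(mult)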
