Pandas - pivoting column into a (conditional) aggregated string - Python

Let's say I have the following data set, turned into a dataframe:
import datetime

import pandas as pd

data = [
    ['Job 1', datetime.date(2019, 6, 9), 'Jim', 'Tom'],
    ['Job 1', datetime.date(2019, 6, 9), 'Bill', 'Tom'],
    ['Job 1', datetime.date(2019, 6, 9), 'Tom', 'Tom'],
    ['Job 1', datetime.date(2019, 6, 10), 'Bill', None],
    ['Job 2', datetime.date(2019, 6, 10), 'Tom', 'Tom'],
]
df = pd.DataFrame(data, columns=['Job', 'Date', 'Employee', 'Manager'])
This yields a dataframe that looks like:
Job Date Employee Manager
0 Job 1 2019-06-09 Jim Tom
1 Job 1 2019-06-09 Bill Tom
2 Job 1 2019-06-09 Tom Tom
3 Job 1 2019-06-10 Bill None
4 Job 2 2019-06-10 Tom Tom
What I am trying to generate is a pivot on each unique Job/Date combo, with a column for Manager, and a column for a string with comma separated, non-manager employees. A couple of things to assume:
All employee names are unique (I'll actually be using unique employee ids rather than names), and Managers are also "employees", so there will never be a case with an employee and a manager sharing the same name/id, but being different individuals.
A work crew can have a manager, or not (see row with id 3, for an example without)
A manager will always also be listed as an employee (see row with id 2 or 4)
A job could have a manager, with no additional employees (see row id 4)
I'd like the resulting dataframe to look like:
Job Date Manager Employees
0 Job 1 2019-06-09 Tom Jim, Bill
1 Job 1 2019-06-10 None Bill
2 Job 2 2019-06-10 Tom None
Which leads to my questions:
Is there a way to do a ','.join like aggregation in a pandas pivot?
Is there a way to make this aggregation conditional (excluding the name/id in the Manager column)?
I suspect 1) is possible, and 2) might be more difficult. If 2) is a no, I can get around it in other ways later in my code.

The tricky part here is removing the Manager from the Employee column.
# Reshape to long form: one row per (Job, Date, role, name)
u = df.melt(['Job', 'Date'])
# Within each Job/Date, drop every duplicated name except its last occurrence;
# this removes the manager's Employee row (and collapses repeated Manager rows)
f = u[~u.duplicated(['Job', 'Date', 'value'], keep='last')].astype(str)
f.pivot_table(
    index=['Job', 'Date'],
    columns='variable', values='value',
    aggfunc=','.join
).rename_axis(None, axis=1)
Employee Manager
Job Date
Job 1 2019-06-09 Jim,Bill Tom
2019-06-10 Bill None
Job 2 2019-06-10 NaN Tom
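If you'd rather end up with flat Job/Date columns like the desired output, here is a small follow-up sketch (reusing f from above):
out = f.pivot_table(
    index=['Job', 'Date'],
    columns='variable', values='value',
    aggfunc=','.join
).rename_axis(None, axis=1).reset_index()  # flatten Job/Date back into columns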

Group to aggregate, then fix the Employees by removing the Manager and setting to None where appropriate. Since the employees are unique, sets will work nicely here to remove the Manager.
# Aggregate: first manager per group, employees collected into a set
s = df.groupby(['Job', 'Date']).agg({'Manager': 'first', 'Employee': lambda x: set(x)})
# Remove the manager from each employee set, then join the rest
s['Employee'] = [', '.join(x.difference({y})) for x, y in zip(s.Employee, s.Manager)]
# A group whose only employee was the manager is now an empty string; use None
s['Employee'] = s.Employee.replace({'': None})
Manager Employee
Job Date
Job 1 2019-06-09 Tom Jim, Bill
2019-06-10 None Bill
Job 2 2019-06-10 Tom None
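The same groupby can also be spelled with named aggregation (a sketch, assuming pandas >= 0.25):
s = (df.groupby(['Job', 'Date'])
       .agg(Manager=('Manager', 'first'),
            Employee=('Employee', set)))  # collect employees into a set per group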

I'm partial to building up a dictionary of the desired results and reconstructing the dataframe.
d = {}
for t in df.itertuples():
    # One dict of results per (Job, Date) pair
    d_ = d.setdefault((t.Job, t.Date), {})
    d_['Manager'] = t.Manager
    d_.setdefault('Employees', set()).add(t.Employee)
for k, v in d.items():
    # Drop the manager from the employee set, then join into a string
    v['Employees'] -= {v['Manager']}
    v['Employees'] = ', '.join(v['Employees'])
pd.DataFrame(d.values(), d).rename_axis(['Job', 'Date']).reset_index()
Job Date Employees Manager
0 Job 1 2019-06-09 Bill, Jim Tom
1 Job 1 2019-06-10 Bill None
2 Job 2 2019-06-10 Tom
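If the column order should match the desired output (Manager before Employees), a small reindexing sketch of that last line:
out = (pd.DataFrame(d.values(), d)
         .rename_axis(['Job', 'Date'])
         .reset_index()[['Job', 'Date', 'Manager', 'Employees']])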

In your case, try not using a lambda; mask plus transform plus drop_duplicates will do it:
df['Employee'] = (df['Employee']
                  .mask(df['Employee'].eq(df.Manager))  # blank out names equal to the manager
                  .dropna()
                  .groupby([df['Job'], df['Date']])
                  .transform('unique')
                  .str.join(','))
df = df.drop_duplicates(['Job', 'Date'])
df
Out[745]:
Job Date Employee Manager
0 Job 1 2019-06-09 Jim,Bill Tom
3 Job 1 2019-06-10 Bill None
4 Job 2 2019-06-10 NaN Tom

How about:
df.groupby(["Job", "Date", "Manager"]).apply(lambda x: ",".join(x.Employee))
This will find all unique Job/Date/Manager combinations and join their employees into one comma-separated string. Note that the manager is not excluded from the joined string, and rows whose Manager is None are dropped by groupby.
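A sketch that covers both caveats, assuming pandas >= 1.1 for groupby(..., dropna=False) (which keeps the None-manager group), and filtering the manager out of the join:
out = (df.groupby(['Job', 'Date', 'Manager'], dropna=False)
         .apply(lambda g: ', '.join(e for e in g['Employee']
                                    if e != g['Manager'].iat[0]))  # a manager-only job yields ''
         .reset_index(name='Employees'))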

Related

Using unstack to reshape a python dataframe

I'm looking to reformat a dataframe by moving some of the rows to be columns. I'm trying to use unstack for this and not seeing the results I expected.
My input looks like this:
import pandas as pd

data = {'ID': ['Tom', 'Tom', 'Tom', 'Dick', 'Dick', 'Dick'],
        'TAG': ['instance', 'deadline', 'job', 'instance', 'deadline', 'job'],
        'VALUE': ['AA', '23:30', 'job01', 'BB', '02:15', 'job02']}
df = pd.DataFrame(data)
Giving me this:
ID TAG VALUE
0 Tom instance AA
1 Tom deadline 23:30
2 Tom job job01
3 Dick instance BB
4 Dick deadline 02:15
5 Dick job job02
What I'm after is something that looks like this:
ID instance deadline job
Tom AA 23:30 job01
Dick BB 02:15 job02
Using unstack as follows:
df = df.unstack().unstack()
I'm getting this:
0 1 2 3 4 5
ID Tom Tom Tom Dick Dick Dick
TAG instance deadline job instance deadline job
VALUE AA 23:30 job01 BB 02:15 job02
Appreciate any assistance here in getting the desired results.
This will work if you would like to use unstack():
df.set_index(['ID','TAG'])['VALUE'].unstack().reset_index()
You are looking for df.pivot:
df = df.pivot(index='ID', columns='TAG', values='VALUE')
print(df)
TAG deadline instance job
ID
Dick 02:15 BB job02
Tom 23:30 AA job01
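As an aside, df.pivot raises a ValueError if an (ID, TAG) pair ever repeats; a pivot_table sketch with an explicit aggfunc handles that case:
df.pivot_table(index='ID', columns='TAG', values='VALUE', aggfunc='first')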

Replace column value in one pandas DataFrame with column in another pandas DataFrame with conditions

I have the following 3 pandas DataFrames. I want to replace the company and division columns with the IDs from their respective company and division DataFrames.
pd_staff:
id name company division
P001 John Sunrise Headquarter
P002 Jane Falcon Digital Research & Development
P003 Joe Ashford Finance
P004 Adam Falcon Digital Sales
P004 Barbara Sunrise Human Resource
pd_company:
id name
1 Sunrise
2 Falcon Digital
3 Ashford
pd_division:
id name
1 Headquarter
2 Research & Development
3 Finance
4 Sales
5 Human Resource
This is the end result that I am trying to produce
id name company division
P001 John 1 1
P002 Jane 2 2
P003 Joe 3 3
P004 Adam 2 4
P004 Barbara 1 5
I have tried to combine Staff and Company using this code
pd_staff.loc[pd_staff['company'].isin(pd_company['name']), 'company'] = pd_company.loc[pd_company['name'].isin(pd_staff['company']), 'id']
which produces
id name company
P001 John 1.0
P002 Jane NaN
P003 Joe NaN
P004 Adam NaN
P004 Barbara NaN
You can do:
pd_staff['company'] = pd_staff['company'].map(pd_company.set_index('name')['id'])
pd_staff['division'] = pd_staff['division'].map(pd_division.set_index('name')['id'])
print(pd_staff)
id name company division
0 P001 John 1 1
1 P002 Jane 2 2
2 P003 Joe 3 3
3 P004 Adam 2 4
4 P004 Barbara 1 5
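One thing to watch with map: names absent from the lookup become NaN in the result. A quick pre-check sketch, run before the mapping:
company_map = pd_company.set_index('name')['id']
# Rows whose company has no id in the lookup; should print an empty Series
print(pd_staff.loc[~pd_staff['company'].isin(company_map.index), 'company'])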
This will achieve the desired results:
# Map company names to ids, then division names to ids, via two inner merges
df_merge = df.merge(df2, how='inner', right_on='name', left_on='company', suffixes=('', '_y'))
df_merge = df_merge.merge(df3, how='inner', left_on='division', right_on='name', suffixes=('', '_z'))
# Keep the staff id/name plus the two merged-in ids, then restore the labels
df_merge = df_merge[['id', 'name', 'id_y', 'id_z']]
df_merge.columns = ['id', 'name', 'company', 'division']
df_merge.sort_values('id')
First, let's modify the company and division DataFrames a little bit:
df2.rename(columns={'name':'company'},inplace=True)
df3.rename(columns={'name':'division'},inplace=True)
Then
df1=df1.merge(df2,on='company',how='left').merge(df3,on='division',how='left')
df1=df1[['id_x','name','id_y','id']]
df1.rename(columns={'id_x':'id','id_y':'company','id':'division'},inplace=True)
Use apply: you can write a function that replaces the values. From the second Excel file you pass the field to look up and what to replace it with. Here I am replacing Sunrise with 1 because that pair is in the second Excel file.
import pandas as pd

df = pd.read_excel('teste.xlsx')
df2 = pd.read_excel('ids.xlsx')

def altera(df33, field='Sunrise', new_field='1'):
    # For showing purposes I left default values, but they are to be passed from the second Excel file
    return df33.replace(field, new_field)

df.loc[:, 'company'] = df['company'].apply(altera)
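A sketch of the generalized idea without the hard-coded defaults, assuming ids.xlsx has name and id columns:
# Build the lookup once from the second file, then replace in one pass
mapping = dict(zip(df2['name'], df2['id']))
df['company'] = df['company'].replace(mapping)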

How to count Pandas df elements with dynamic condition per row (=countif)

I am trying to do some equivalent of COUNTIF in Pandas. I am trying to get my head around doing it with groupby, but I am struggling because my logical grouping condition is dynamic.
Say I have a list of customers, and the day on which they visited. I want to identify new customers based on 2 logical conditions
They must be the same customer (same Guest ID)
They must have been there on the previous day
If both conditions are met, they are a returning customer. If not, they are new (hence the newby = 1 - ... below, to flag new customers).
I managed to do this with a for loop, but obviously performance is terrible and this goes pretty much against the logic of Pandas.
How can I wrap the following code into something smarter than a loop?
import numpy as np

for i in range(0, len(df)):
    df.loc[i, 'newby'] = 1 - np.sum((df["Day"] == df.iloc[i]["Day"] - 1)
                                    & (df["Guest ID"] == df.iloc[i]["Guest ID"]))
This post does not help, as the condition there is static. I would like to avoid introducing "dummy columns" (such as by transposing the df), because I will have many categories (many customer names) and would like to build more complex logical statements. I do not want to run the risk of ending up with many auxiliary columns.
I have the following input
df
Day Guest ID
0 3230 Tom
1 3230 Peter
2 3231 Tom
3 3232 Peter
4 3232 Peter
and expect this output
df
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
4 3232 Peter 1
Note that elements 3 and 4 are not necessarily duplicates, given there might be additional, varying columns (such as their order).
Do:
# ensure the df is sorted by date
df = df.sort_values('Day')
# group by customer and find the diff within each group
df['newby'] = (df.groupby('Guest ID')['Day'].transform('diff').fillna(2) > 1).astype(int)
print(df)
Output
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
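An equivalent spelling (a sketch) calls diff on the groupby directly instead of going through transform:
df['newby'] = (df.groupby('Guest ID')['Day'].diff().fillna(2) > 1).astype(int)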
UPDATE
If multiple visits are allowed per day, you could do:
# only keep unique visits per day
uniques = df.drop_duplicates()
# ensure the df is sorted by date
uniques = uniques.sort_values('Day')
# group by customer and find the diff within each group
uniques['newby'] = (uniques.groupby('Guest ID')['Day'].transform('diff').fillna(2) > 1).astype(int)
# merge the uniques visits back into the original df
res = df.merge(uniques, on=['Day', 'Guest ID'])
print(res)
Output
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
4 3232 Peter 1
As an alternative, without sorting or merging, you could do:
# Every (day + 1, guest) pair for which (day, guest) visited: next-day return visits
lookup = {(day + 1, guest) for day, guest in df[['Day', 'Guest ID']].value_counts().to_dict()}
# A row is new (1) unless its (Day, Guest ID) pair is in the lookup
df['newby'] = (~pd.MultiIndex.from_arrays([df['Day'], df['Guest ID']]).isin(lookup)).astype(int)
print(df)
Output
Day Guest ID newby
0 3230 Tom 1
1 3230 Peter 1
2 3231 Tom 0
3 3232 Peter 1
4 3232 Peter 1

Split a column into multiple columns / cleaning data set

So I have initialized a table from a PDF into a pandas DataFrame; it looks like the following:
df_current = pd.DataFrame({'Country': ['NaN', 'NaN', 'NaN', 'NaN', 'Denmark', 'Sweden', 'Germany'],
                           'Explained Part': ['Personal and job characteristics',
                                              'Education Occupation Job Employment',
                                              'experience contract',
                                              'Employment contract',
                                              '20 -7 2 0', '4 6 2 0', '-9 -6 -1 :']})
The expected output (what I aim for in the end):
df_expected = pd.DataFrame({'Country': ['Denmark', 'Sweden', 'Germany'],
                            'Personal and job characteristics': [20, 4, -9],
                            'Education Occupation Job Employment': [-7, 6, -6],
                            'experience contract': [2, 2, -1],
                            'Employment contract': [0, 0, ':']})
The problem is: the column 'Explained Part' holds 4 columns' worth of data, and some of the data is shown as symbols, like ':'.
I was thinking about using
df[['Personal and job characteristics',
    'Education Occupation Job Employment',
    'experience contract',
    'Employment contract']] = df['Explained Part'].str.split(" ", expand=True)
But I cannot get it to work.
I want to split the column into four, but the header rows are mixed in with the numeric rows.
Any ideas?
Thanks in advance ~
PS. I have updated the question as I think my first post was too hard to understand; I have now added some of the data from the actual problem and an expected output. Thanks for the feedback so far!
If the NaNs are missing values, first remove those rows with DataFrame.dropna and then apply your solution, using DataFrame.pop to extract the column:
import numpy as np
import pandas as pd

df_current = pd.DataFrame({'Country': [np.nan, np.nan, np.nan, np.nan, 'Denmark', 'Sweden', 'Germany'],
                           'Explained Part': ['Personal and job characteristics',
                                              'Education Occupation Job Employment',
                                              'experience contract',
                                              'Employment contract',
                                              '20 -7 2 0', '4 6 2 0', '-9 -6 -1 :']})
print (df_current)
Country Explained Part
0 NaN Personal and job characteristics
1 NaN Education Occupation Job Employment
2 NaN experience contract
3 NaN Employment contract
4 Denmark 20 -7 2 0
5 Sweden 4 6 2 0
6 Germany -9 -6 -1 :
df = df_current.dropna(subset=['Country']).copy()
cols = ['Personal and job characteristics','Education Occupation Job Employment',
'experience contract','Employment contract']
df[cols] = df.pop('Explained Part').str.split(expand=True)
print (df)
Country Personal and job characteristics \
4 Denmark 20
5 Sweden 4
6 Germany -9
Education Occupation Job Employment experience contract Employment contract
4 -7 2 0
5 6 2 0
6 -6 -1 :
Or without pop:
df = df_current.dropna(subset=['Country']).copy()
cols = ['Personal and job characteristics','Education Occupation Job Employment',
'experience contract','Employment contract']
df[cols] = df['Explained Part'].str.split(expand=True)
df = df.drop('Explained Part', axis=1)
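Since some cells hold symbols like ':', here is a follow-up sketch if numeric columns are wanted, coercing those symbols to NaN:
df[cols] = df[cols].apply(pd.to_numeric, errors='coerce')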

Reshape pandas dataframe from rows to columns

I'm trying to reshape my data. At first glance, it sounds like a transpose, but it's not. I tried melts, stack/unstack, joins, etc.
Use Case
I want to have only one row per unique individual, and put all job history on the columns. For clients, it can be easier to read information across rows rather than reading through columns.
Here's the data:
import pandas as pd
import numpy as np
data1 = {'Name': ["Joe", "Joe", "Joe", "Jane", "Jane"],
         'Job': ["Analyst", "Manager", "Director", "Analyst", "Manager"],
         'Job Eff Date': ["1/1/2015", "1/1/2016", "7/1/2016", "1/1/2015", "1/1/2016"]}
df2 = pd.DataFrame(data1, columns=['Name', 'Job', 'Job Eff Date'])
df2
Here's what I want it to look like:
Name  Job 1    Job Eff Date 1  Job 2    Job Eff Date 2  Job 3     Job Eff Date 3
Joe   Analyst  1/1/2015        Manager  1/1/2016        Director  7/1/2016
Jane  Analyst  1/1/2015        Manager  1/1/2016
.T within groupby
def tgrp(df):
    df = df.drop('Name', axis=1)
    return df.reset_index(drop=True).T

df2.groupby('Name').apply(tgrp).unstack()
Explanation
groupby returns an object that contains information on how the original series or dataframe has been grouped. Instead of performing a groupby with a subsequent action of some sort, we could first assign df2.groupby('Name') to a variable (I often do), say gb.
gb = df2.groupby('Name')
On this object gb we could call .mean() to get an average of each group. Or .last() to get the last element (row) of each group. Or .transform(lambda x: (x - x.mean()) / x.std()) to get a zscore transformation within each group. When there is something you want to do within a group that doesn't have a predefined function, there is still .apply().
.apply() for a groupby object is different than it is for a dataframe. For a dataframe, .apply() takes a callable object as its argument and applies that callable to each column (or row) of the object. The object that is passed to that callable is a pd.Series. When you are using .apply in a dataframe context, it is helpful to keep this fact in mind. In the context of a groupby object, the object passed to the callable is a dataframe. In fact, that dataframe is one of the groups specified by the groupby.
When I write such functions to pass to groupby.apply, I typically define the parameter as df to reflect that it is a dataframe.
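A quick check of that claim (a sketch): the object handed to the callable really is a DataFrame, one per group.
print(df2.groupby('Name').apply(lambda g: type(g).__name__))
# Name
# Jane    DataFrame
# Joe     DataFrame
# dtype: object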
Ok, so we have:
df2.groupby('Name').apply(tgrp)
This generates a sub-dataframe for each 'Name' and passes that sub-dataframe to the function tgrp. Then the groupby object recombines all such groups having gone through the tgrp function back together again.
It'll look like this:
                          0         1         2
Name
Jane Job            Analyst   Manager       NaN
     Job Eff Date  1/1/2015  1/1/2016       NaN
Joe  Job            Analyst   Manager  Director
     Job Eff Date  1/1/2015  1/1/2016  7/1/2016
I took the OP's original attempt to simply transpose to heart. But I had to do some things first. Had I simply done:
df2[df2.Name == 'Jane'].T
df2[df2.Name == 'Joe'].T
Combining these manually (without groupby):
pd.concat([df2[df2.Name == 'Jane'].T, df2[df2.Name == 'Joe'].T])
Whoa! Now that's ugly. Obviously the index values of [0, 1, 2] don't mesh with [3, 4]. So let's reset.
pd.concat([df2[df2.Name == 'Jane'].reset_index(drop=True).T,
           df2[df2.Name == 'Joe'].reset_index(drop=True).T])
That's much better. But now we are getting into the territory groupby was intended to handle. So let it handle it.
Back to
df2.groupby('Name').apply(tgrp)
The only thing missing here is that we want to unstack the results to get the desired output.
Say you start by unstacking:
df2 = df2.set_index(['Name', 'Job']).unstack()
>>> df2
Job Eff Date
Job Analyst Director Manager
Name
Jane 1/1/2015 None 1/1/2016
Joe 1/1/2015 7/1/2016 1/1/2016
Now, to make things easier, flatten the multi-index:
df2.columns = df2.columns.get_level_values(1)
>>> df2
Job Analyst Director Manager
Name
Jane 1/1/2015 None 1/1/2016
Joe 1/1/2015 7/1/2016 1/1/2016
Now, just manipulate the columns:
cols = []
for i, c in enumerate(df2.columns):
    col = 'Job %d' % i
    df2[col] = c
    cols.append(col)
    col = 'Eff Date %d' % i
    df2[col] = df2[c]
    cols.append(col)
>>> df2[cols]
Job Job 0 Eff Date 0 Job 1 Eff Date 1 Job 2 Eff Date 2
Name
Jane Analyst 1/1/2015 Director None Manager 1/1/2016
Joe Analyst 1/1/2015 Director 7/1/2016 Manager 1/1/2016
Edit
Jane was never a director (alas). The above code states that Jane became Director at None date. To change the result so that it specifies that Jane became None at None date (which is a matter of taste), replace
df2[col] = c
by
df2[col] = [None if d is None else c for d in df2[c]]
This gives
Job Job 0 Eff Date 0 Job 1 Eff Date 1 Job 2 Eff Date 2
Name
Jane Analyst 1/1/2015 None None Manager 1/1/2016
Joe Analyst 1/1/2015 Director 7/1/2016 Manager 1/1/2016
Here is a possible workaround: first build up a dictionary of the proper form, then create a DataFrame based on the new dictionary:
df = pd.DataFrame(data1)
dic = {}
for name, jobs in df.groupby('Name').groups.items():
    if not dic:
        dic['Name'] = []
    dic['Name'].append(name)
    for j, job in enumerate(jobs, 1):
        jobstr = 'Job {0}'.format(j)
        jobeffdatestr = 'Job Eff Date {0}'.format(j)
        if jobstr not in dic:
            # Pad new columns so earlier names line up
            dic[jobstr] = [''] * (len(dic['Name']) - 1)
            dic[jobeffdatestr] = [''] * (len(dic['Name']) - 1)
        dic[jobstr].append(df['Job'].loc[job])
        dic[jobeffdatestr].append(df['Job Eff Date'].loc[job])
df2 = pd.DataFrame(dic).set_index('Name')
## Job 1 Job 2 Job 3 Job Eff Date 1 Job Eff Date 2 Job Eff Date 3
## Name
## Jane Analyst Manager 1/1/2015 1/1/2016
## Joe Analyst Manager Director 1/1/2015 1/1/2016 7/1/2016
g = df2.groupby('Name').groups
names = list(g.keys())
data2 = {'Name': names}
cols = ['Name']
temp1 = [g[y] for y in names]
job_str = 'Job'
job_date_str = 'Job Eff Date'
for i in range(max([len(x) for x in g.values()])):
    # For each name, take its i-th row label, or '' if it has no i-th job
    temp = [x[i] if len(x) > i else '' for x in temp1]
    job_str_curr = job_str + str(i + 1)
    job_date_curr = job_date_str + str(i + 1)
    data2[job_str_curr] = df2[job_str].reindex(temp).values
    data2[job_date_curr] = df2[job_date_str].reindex(temp).values
    cols.extend([job_str_curr, job_date_curr])
df3 = pd.DataFrame(data2, columns=cols)
df3 = df3.fillna('')
print(df3)
Name Job1 Job Eff Date1 Job2 Job Eff Date2 Job3 Job Eff Date3
0 Jane Analyst 1/1/2015 Manager 1/1/2016
1 Joe Analyst 1/1/2015 Manager 1/1/2016 Director 7/1/2016
This is not exactly what you were asking, but here is a way to print the data frame as you wanted:
df = pd.DataFrame(data1)
for name, jobs in df.groupby('Name').groups.items():
    print('{0:<15}'.format(name), end='')
    for job in jobs:
        print('{0:<15}{1:<15}'.format(df['Job'].loc[job], df['Job Eff Date'].loc[job]), end='')
    print()
## Jane Analyst 1/1/2015 Manager 1/1/2016
## Joe Analyst 1/1/2015 Manager 1/1/2016 Director 7/1/2016
Diving into @piRSquared's answer....
def tgrp(df):
    df = df.drop('Name', axis=1)
    print(df, '\n')
    out = df.reset_index(drop=True)
    print(out, '\n')
    print(out.T, '\n\n')
    return out.T

dfxx = df2.groupby('Name').apply(tgrp).unstack()
dfxx
The output of the above is below. Why does pandas repeat the first group? Is this a bug? (It is not: in older pandas versions, groupby.apply called the function on the first group an extra time to decide how to combine the results, which is why the first group is printed twice.)
Job Job Eff Date
3 Analyst 1/1/2015
4 Manager 1/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
0 1
Job Analyst Manager
Job Eff Date 1/1/2015 1/1/2016
Job Job Eff Date
3 Analyst 1/1/2015
4 Manager 1/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
0 1
Job Analyst Manager
Job Eff Date 1/1/2015 1/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
2 Director 7/1/2016
Job Job Eff Date
0 Analyst 1/1/2015
1 Manager 1/1/2016
2 Director 7/1/2016
0 1 2
Job Analyst Manager Director
Job Eff Date 1/1/2015 1/1/2016 7/1/2016
