Python Dataframe: pivot rows as columns

I have raw files from different stations. When I combine them into a dataframe, I get three columns: id, name, and component. The id and name values repeat while the component differs. I want to convert this into a dataframe where the name entries become the column names.
Code:
df =
   id           name component
0   1  Serial Number       103
1   2   Station Name        DC
2   1  Serial Number       114
3   2   Station Name        CA
4   1  Serial Number       147
5   2   Station Name        FL
Expected answer:
new_df =
  Station Name Serial Number
0           DC           103
1           CA           114
2           FL           147
My answer:
# Solution1
df.pivot_table('id','name','component')
name
NaN NaN NaN NaN
# Solution2
df.pivot(index=None,columns='name')['component']
name
NaN NaN NaN NaN
I am not getting the desired answer. Any help?

First you have to give every two rows the same id; after that you can pivot.
import pandas as pd

df = pd.DataFrame({'id': ["1", "2", "1", "2", "1", "2"],
                   'name': ["Serial Number", "Station Name", "Serial Number", "Station Name", "Serial Number", "Station Name"],
                   'component': ["103", "DC", "114", "CA", "147", "FL"]})

# give each consecutive pair of rows the same record id: 1, 1, 2, 2, 3, 3
new_column = [x // 2 + 1 for x in range(len(df))]
df["id"] = new_column

# pivot so that the name values become columns
df = df.pivot(index='id', columns='name')['component']
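Assuming the sample data above, the pivot should yield something like:

name Serial Number Station Name
id
1             103           DC
2             114           CA
3             147           FL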

If your Serial Number row always comes just before the Station Name row, you can pivot on the name column and then combine every two rows:
df_ = df.pivot(columns='name', values='component').groupby(df.index // 2).first()
print(df_)
name Serial Number Station Name
0             103           DC
1             114           CA
2             147           FL
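Another minimal sketch under the same pairing assumption, using the raw df from the first answer's setup (before its id column is overwritten): number the occurrences of each name with groupby/cumcount and pivot on that, which avoids editing id in place.

df['rec'] = df.groupby('name').cumcount()  # 0, 0, 1, 1, 2, 2 for alternating rows
out = df.pivot(index='rec', columns='name', values='component')
print(out)
# name Serial Number Station Name
# rec
# 0             103           DC
# 1             114           CA
# 2             147           FL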

Related

How to extract specific value from excel column using python pandas dataframe

Need to extract specific value from excel column using python pandas dataframe
The column Product that I am trying to extract from looks like the below, and I need to extract only the Product # from it. The column also contains other numbers, but the Product # always comes after the term 'UK Pro' and can be a 3- or 4-digit number in a given row of data.
In[1]:
df['Product'].head()
#Dataframe looks like this:
Out[1]:
Checking center : King 2000 : UK Pro 1000 : London
Checking center : Queen 321 : UK Pro 250 : Spain
CC : UK Pro 3000 : France
CC : UK Pro 810 : Poland
Expected Output:
Product #
1000
250
3000
810
Started with this:
df['Product #'] = df1['Product'].str.split(':').str[1]
But this only picks out the piece between the first two occurrences of the : separator.
Then tried this:
df1['Product #'] = df1['Product'].str.split('UK Pro', 1).str[0].str.strip()
You can use pandas.Series.str.extract:
df["Product #"] = df["Product"].str.extract(r"UK Pro (\d+)", expand=False)
# Output :
print(df)
Product #
0 NaN
1 NaN
2 1000
3 NaN
4 NaN
5 250
6 NaN
7 3000
8 NaN
9 810
10 NaN
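For reference, a self-contained sketch on the four sample rows from the question (the DataFrame construction here is reconstructed, since the original post only shows the raw strings):

import pandas as pd

df = pd.DataFrame({"Product": [
    "Checking center : King 2000 : UK Pro 1000 : London",
    "Checking center : Queen 321 : UK Pro 250 : Spain",
    "CC : UK Pro 3000 : France",
    "CC : UK Pro 810 : Poland",
]})

# capture the digits that follow "UK Pro"
df["Product #"] = df["Product"].str.extract(r"UK Pro (\d+)", expand=False)
print(df["Product #"])
# 0    1000
# 1     250
# 2    3000
# 3     810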

Simplify pivot in pandas [duplicate]

I have a question that concerns the use of the pivot function in pandas. I have a table (df_init) with a bunch of customer Ids (7,000 different Ids) and the product codes they purchased:
CST_ID  PROD_CODE
11111   1234
11111   2345
11111   5425
11111   9875
22222   2345
22222   9251
22222   1234
33333   6542
33333   7498
Each Id can be repeated at most 4 times in the table, but can appear fewer than 4 times (e.g., 22222 and 33333). I want to reorganize that table as follows (df_fin):
CST_ID  PROD_1  PROD_2  PROD_3  PROD_4
11111   1234    2345    5425    9875
22222   2345    9251    1234    NaN
33333   6542    7498    NaN     NaN
Good news is, I have found a way to do so. Bad news is, I am not satisfied, as it loops over the customer Ids and takes a while. Namely, I count the occurrences of a certain Id while looping over the column and add that to a list, then append this list as a new variable to df_init:
to_append = []
for index in range(len(df_init)):
    temp = df_init.iloc[:index+1]['CST_ID'] == df_init.iloc[index]['CST_ID']
    counter = sum(list(temp))
    to_append.append(counter)
df_init['Product_number'] = to_append
Afterwards I pivot and rename the columns
df_fin = df_init.pivot(index='CST_ID', columns='Product_number', values='PROD_CODE').rename_axis(None).reset_index()
df_fin.columns=['CST_ID', 'pdt1', 'pdt2', 'pdt3', 'pdt4']
Of course this solution works just fine, but looping in order to create the column which I use for the columns specification of the pivot takes time. Hence I was wondering if there was a better solution (perhaps already embedded in pandas or in the pivot method) to do so.
Thanks to anyone who is willing to participate
Best
You can vectorize the part creating the pivoting column as below. groupby + cumcount generates an increasing number within each CST_ID group.
df_fin = df_init.assign(key="PROD_" + (df_init.groupby("CST_ID").cumcount()+1).astype(str))
df_fin = df_fin.pivot(index="CST_ID", columns="key", values="PROD_CODE")
df_fin
#key PROD_1 PROD_2 PROD_3 PROD_4
#CST_ID
#11111 1234.0 2345.0 5425.0 9875.0
#22222 2345.0 9251.0 1234.0 NaN
#33333 6542.0 7498.0 NaN NaN
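If you also want CST_ID back as a regular column and plain column names, as in the OP's own snippet, one more line should do it (a sketch):

df_fin = df_fin.rename_axis(None, axis=1).reset_index()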
For large dataframes I would have done the following, but the above solution works nicely:
import random

import pandas

df = pandas.DataFrame(
    {
        "CST_ID": [11111, 11111, 11111, 11111, 22222, 22222, 22222, 22222, 33333, 33333, 33333, 33333],
        "PROD_CODE": [random.randint(1, 100) for _ in range(12)],
    }
)
df["Product_number"] = df.groupby(['CST_ID']).cumcount() + 1
df = df.pivot(index='CST_ID', columns='Product_number', values='PROD_CODE')
df.columns = ["PROD_%s" % _ for _ in df.columns]
# PROD_1 PROD_2 PROD_3 PROD_4
#CST_ID
#11111 98 11 13 38
#22222 33 13 3 61
#33333 86 35 93 23

PYTHON/PANDAS - Reindexing on multiple indexes

I have a dataframe similar to what follows:
test = {"id": ["A", "A", "A", "B", "B", "B"],
"date": ["09-02-2013", "09-03-2013", "09-05-2013", "09-15-2013", "09-17-2013", "09-18-2013"],
"country": ["Poland", "Poland", "France", "Scotland", "Scotland", "Canada"]}
and I want a table which returns this:
id  date        country
A   09-02-2013  Poland
A   09-03-2013  Poland
A   09-04-2013  Poland
A   09-05-2013  France
B   09-15-2013  Scotland
B   09-16-2013  Scotland
B   09-17-2013  Scotland
B   09-18-2013  Canada
i.e. a table that fills in any dates that are missing, but only between the min/max dates of each id.
I have looked around Stack Overflow, but usually this problem has just one index, or the person wants to drop an index anyway.
This is what I have got so far:
test_df = pd.DataFrame(test)
# get min date per id
dates = test_df.groupby("id")["date"].min().to_frame(name="min")
# get max date
dates["max"] = test_df.groupby("id")["date"].max().to_frame(name="max")
midx = pd.MultiIndex.from_frame(dates.apply(lambda x: pd.date_range(x["min"], x["max"], freq="D"), axis=1).explode().reset_index(name="date")[["date", "id"]])
test_df = test_df.set_index(["date", "id"])
test_df = test_df.reindex(midx).fillna(method="ffill")
test_df
Which gets me really close but not quite there, with the dates all there but no country:
id  date        country
A   09-02-2013  NaN
A   09-03-2013  NaN
A   09-04-2013  NaN
A   09-05-2013  NaN
B   09-15-2013  NaN
B   09-16-2013  NaN
B   09-17-2013  NaN
B   09-18-2013  NaN
Any ideas on how to fix it?
IIUC, you could generate a date_range per group, explode, then merge and ffill the values per group. (Your own attempt comes back all-NaN because the MultiIndex you reindex on holds Timestamps while the original index holds date strings, so nothing aligns.)
out = (test_df
       .merge(pd.to_datetime(test_df['date'], dayfirst=False)
                .groupby(test_df['id'])
                .apply(lambda g: pd.date_range(g.min(), g.max(), freq='D'))
                .explode().dt.strftime('%m-%d-%Y')
                .reset_index(name='date'),
              how='right')
       .assign(country=lambda d: d.groupby('id')['country'].ffill())
       )
output:
id date country
0 A 09-02-2013 Poland
1 A 09-03-2013 Poland
2 A 09-04-2013 Poland
3 A 09-05-2013 France
4 B 09-15-2013 Scotland
5 B 09-16-2013 Scotland
6 B 09-17-2013 Scotland
7 B 09-18-2013 Canada
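As an aside, an alternative sketch (assuming the test dict from the question and that the dates are parsed first) that gets the same per-id gap filling with groupby/resample:

import pandas as pd

test_df = pd.DataFrame(test)
test_df['date'] = pd.to_datetime(test_df['date'], format='%m-%d-%Y')
out = (test_df.set_index('date')
              .groupby('id')['country']
              .resample('D')   # daily frequency within each id's own min/max range
              .ffill()
              .reset_index())
print(out)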

Fuzzy Lookup In Python

I have two CSV files: one that contains vendor data and one that contains employee data. Similar to what "Fuzzy Lookup" in Excel does, I'm looking to do two types of matches and output all columns from both CSV files, including a new column with the similarity ratio for each row. In Excel, I would use a 0.80 threshold. Below is sample data; my actual data has 2 million rows in one of the files, which is going to be a nightmare if done in Excel.
Output 1:
From Vendor file, fuzzy match "Vendor Name" with "Employee Name" from Employee file. Display all columns from both files and a new column for Similarity Ratio
Output 2:
From Vendor file, fuzzy match "SSN" with "SSN" from Employee file. Display all columns from both files and a new column for Similarity Ratio
These are two separate outputs
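(For reference on the expected scores below: a plain Levenshtein ratio scores reordered names poorly, so the kind of comparison that gives 1.00 for 'CLIFFORD BROWN' vs 'BROWN, CLIFFORD' is a token-based scorer. A minimal sketch with thefuzz, the maintained fork of fuzzywuzzy:)

from thefuzz import fuzz

print(fuzz.ratio("CLIFFORD BROWN", "BROWN, CLIFFORD"))             # well below 100: same letters, different order
print(fuzz.token_sort_ratio("CLIFFORD BROWN", "BROWN, CLIFFORD"))  # 100: tokens are sorted before comparing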
Dataframe 1: Vendor Data
Company  Vendor ID  Vendor Name     Invoice Number  Transaction Amt  Vendor Type  SSN
15       58421      CLIFFORD BROWN  854             500              Misc         668419628
150      9675       GREEN           7412            70               One Time     774801971
200      15789      SMITH, JOHN     80              40               Employee     965214872
200      69997      HAROON, SIMAN   964             100              Misc         741-98-7821
Dataframe 2: Employee Data
Employee Name    Employee ID  Manager    SSN
BROWN, CLIFFORD  1            Manager 1  668-419-628
BLUE, CITY       2            Manager 2  874126487
SMITH, JOHN      3            Manager 3  965-21-4872
HAROON, SIMON    4            Manager 4  741-98-7820
Expected output 1 - Match Name
Employee Name    Employee ID  Manager    SSN          Company  Vendor ID  Vendor Name     Invoice Number  Transaction Amt  Vendor Type  SSN          Similarity Ratio
BROWN, CLIFFORD  1            Manager 1  668-419-628  150      58421      CLIFFORD BROWN  854             500              Misc         668419628    1.00
SMITH, JOHN      3            Manager 3  965-21-4872  200      15789      SMITH, JOHN     80              40               Employee     965214872    1.00
HAROON, SIMON    4            Manager 4  741-98-7820  200      69997      HAROON, SIMAN   964             100              Misc         741-98-7821  0.96
BLUE, CITY       2            Manager 2  874126487                                                                                                   0.00
Expected output 2 - Match SSN
Employee Name    Employee ID  Manager    SSN          Company  Vendor ID  Vendor Name      Invoice Number  Transaction Amt  Vendor Type  SSN        Similarity Ratio
BROWN, CLIFFORD  1            Manager 1  668-419-628  150      58421      CLIFFORD, BROWN  854             500              Misc         668419628  0.97
SMITH, JOHN      3            Manager 3  965-21-4872  200      15789      SMITH, JOHN      80              40               Employee     965214872  0.97
BLUE, CITY       2            Manager 2  874126487                                                                                                 0.00
HAROON, SIMON    4            Manager 4  741-98-7820                                                                                               0.00
I've tried the below code:
import pandas as pd
from fuzzywuzzy import fuzz
df1 = pd.read_excel(r'Directory\Sample Vendor Data.xlsx')
df2 = pd.read_excel(r'Directory\Sample Employee Data.xlsx')
matched_names = []
for row1 in df1.index:
    name1 = df1._get_value(row1, 'Vendor Name')
    for row2 in df2.index:
        name2 = df2._get_value(row2, 'Full Name')
        match = fuzz.ratio(name1, name2)
        if match > 80:  # This is the threshold
            matched_names.append([name1, name2, match])
df_ratio = pd.DataFrame(columns=['Vendor Name', 'Employee Name', 'match'], data=matched_names)
df_ratio.to_csv(r'directory\MatchingResults.csv', encoding='utf-8')
I'm just not getting the results I want and am ready to reinvent the whole script. Any suggestions would help to improve my script. Please note, I'm fairly new to Python so be gentle. I am totally open to a new approach on this example.
September 23 Update:
Still having trouble...I'm able to get the similarity ratio now but not getting all the columns from both CSV files. The issue is that both files are completely different so when I concat, it gives NaN values. Any suggestions? New code below:
import numpy as np
from fuzzywuzzy import fuzz
from itertools import product
import pandas as pd
df1 = pd.read_excel(r'Directory\Sample Vendor Data.xlsx')
df2 = pd.read_excel(r'Directory\Sample Workday Data.xlsx')
df1['full_name']= df1['Vendor Name']
df2['full_name'] = df2['Employee Name']
df1_name = df1['full_name']
df2_name = df2['full_name']
frames = [pd.DataFrame(df1), pd.DataFrame(df2)]
df = pd.concat(frames).reset_index(drop=True)
dist = [fuzz.ratio(*x) for x in product(df.full_name, repeat=2)]
dfresult = pd.DataFrame(np.array(dist).reshape(df.shape[0], df.shape[0]), columns=df.full_name.values.tolist())
#create of list of dataframes
listOfDfs = [dfresult.loc[idx] for idx in np.split(dfresult.index, df.shape[0])]
DataFrameDict = {df['full_name'][i]: listOfDfs[i] for i in range(dfresult.shape[0])}
for name in DataFrameDict.keys():
    print(name)
    # print(DataFrameDict[name])
df = pd.DataFrame(list(DataFrameDict.items()))
df.to_excel(r'Directory\TestOutput.xlsx', index=False)
To concatenate the two DataFrames horizontally, I aligned the Employees DataFrame by the index of the matched Vendor Name. If no Vendor Name was matched, I just put an empty row instead.
In more detail:
I iterated over the vendor names, and for each vendor name, I added the index of the employee name with the highest score to a list of indices. Note that I added at most one matched employee record to each vendor name.
If no match was found (too low a score), I added the index of an empty record that I appended manually to the Employees DataFrame.
This list of indices is then used to reorder the Employees DataFrame.
At last, I just merge the two DataFrames horizontally. Note that the two DataFrames at this point don't have to be of the same size; in such a case, the concat method just fills the gap by appending missing rows to the smaller DataFrame.
The code is as follows:
import numpy as np
import pandas as pd
from thefuzz import process as fuzzy_process # the new repository of fuzzywuzzy
# import dataframes
...
# adding an empty row (note: DataFrame.append was removed in pandas 2.0; use pd.concat there)
employees_df = employees_df.append(pd.Series(dtype=np.float64), ignore_index=True)
index_of_empty = len(employees_df) - 1

# matching between vendor and employee names
indexed_employee_names_dict = dict(enumerate(employees_df["Employee Name"]))
matched_employees = set()
ordered_employees = []
scores = []
for vendor_name in vendors_df["Vendor Name"]:
    match = fuzzy_process.extractOne(
        query=vendor_name,
        choices=indexed_employee_names_dict,
        score_cutoff=80
    )
    score, index = match[1:] if match is not None else (0.0, index_of_empty)
    matched_employees.add(index)
    ordered_employees.append(index)
    scores.append(score)

# detect unmatched employees to be positioned at the end of the dataframe
missing_employees = [i for i in range(len(employees_df)) if i not in matched_employees]
ordered_employees.extend(missing_employees)
ordered_employees_df = employees_df.iloc[ordered_employees].reset_index()
merged_df = pd.concat([vendors_df, ordered_employees_df], axis=1)

# adding the scores column and sorting by its values
scores.extend([0] * len(missing_employees))
merged_df["Similarity Ratio"] = pd.Series(scores) / 100
merged_df = merged_df.sort_values("Similarity Ratio", ascending=False)
For the matching according to the SSN columns, it can be done exactly the same way, by just replacing the column names in the above code. Moreover, the process can be generalized into a function that accepts DataFrames and column names:
def match_and_merge(df1: pd.DataFrame, df2: pd.DataFrame, col1: str, col2: str, cutoff: int = 80):
    # adding an empty row (DataFrame.append was removed in pandas 2.0; use pd.concat there)
    df2 = df2.append(pd.Series(dtype=np.float64), ignore_index=True)
    index_of_empty = len(df2) - 1
    # matching between the two string columns
    indexed_strings_dict = dict(enumerate(df2[col2]))
    matched_indices = set()
    ordered_indices = []
    scores = []
    for s1 in df1[col1]:
        match = fuzzy_process.extractOne(
            query=s1,
            choices=indexed_strings_dict,
            score_cutoff=cutoff
        )
        score, index = match[1:] if match is not None else (0.0, index_of_empty)
        matched_indices.add(index)
        ordered_indices.append(index)
        scores.append(score)
    # detect unmatched rows to be positioned at the end of the dataframe
    missing_indices = [i for i in range(len(df2)) if i not in matched_indices]
    ordered_indices.extend(missing_indices)
    ordered_df2 = df2.iloc[ordered_indices].reset_index()
    # merge rows of dataframes
    merged_df = pd.concat([df1, ordered_df2], axis=1)
    # adding the scores column and sorting by its values
    scores.extend([0] * len(missing_indices))
    merged_df["Similarity Ratio"] = pd.Series(scores) / 100
    return merged_df.sort_values("Similarity Ratio", ascending=False)

if __name__ == "__main__":
    vendors_df = pd.read_excel(r'Directory\Sample Vendor Data.xlsx')
    employees_df = pd.read_excel(r'Directory\Sample Workday Data.xlsx')
    merged_df = match_and_merge(vendors_df, employees_df, "Vendor Name", "Employee Name")
    merged_df.to_excel("merged_by_names.xlsx", index=False)
    merged_df = match_and_merge(vendors_df, employees_df, "SSN", "SSN")
    merged_df.to_excel("merged_by_ssn.xlsx", index=False)
The above code results in the following outputs:
merged_by_names.xlsx
Company  Vendor ID  Vendor Name     Invoice Number  Transaction Amt  Vendor Type  SSN          index  Employee Name    Employee ID  Manager    SSN          Similarity Ratio
200      15789      SMITH, JOHN     80              40               Employee     965214872    2      SMITH, JOHN      3            Manager 3  965-21-4872  1
15       58421      CLIFFORD BROWN  854             500              Misc         668419628    0      BROWN, CLIFFORD  1            Manager 1  668-419-628  0.95
200      69997      HAROON, SIMAN   964             100              Misc         741-98-7821  3      HAROON, SIMON    4            Manager 4  741-98-7820  0.92
150      9675       GREEN           7412            70               One Time     774801971    4      nan              nan          nan        nan          0
nan      nan        nan             nan             nan              nan          nan          1      BLUE, CITY       2            Manager 2  874126487    0
merged_by_ssn.xlsx
Company  Vendor ID  Vendor Name     Invoice Number  Transaction Amt  Vendor Type  SSN          index  Employee Name    Employee ID  Manager    SSN          Similarity Ratio
200      69997      HAROON, SIMAN   964             100              Misc         741-98-7821  3      HAROON, SIMON    4            Manager 4  741-98-7820  0.91
15       58421      CLIFFORD BROWN  854             500              Misc         668419628    0      BROWN, CLIFFORD  1            Manager 1  668-419-628  0.9
200      15789      SMITH, JOHN     80              40               Employee     965214872    2      SMITH, JOHN      3            Manager 3  965-21-4872  0.9
150      9675       GREEN           7412            70               One Time     774801971    4      nan              nan          nan        nan          0
nan      nan        nan             nan             nan              nan          nan          1      BLUE, CITY       2            Manager 2  874126487    0

Multiple aggregated Counting in Pandas

I have a DF:
data = [["John","144","Smith","200"], ["Mia","220","John","144"],["Caleb","155","Smith","200"],["Smith","200","Jason","500"]]
data_frame = pd.DataFrame(data,columns = ["Name","ID","Manager_name","Manager_ID"])
data_frame
OP:
Name ID Manager_name Manager_ID
0 John 144 Smith 200
1 Mia 220 John 144
2 Caleb 155 Smith 200
3 Smith 200 Jason 500
I am trying to count the number of people reporting under each person in the column Name.
The logic is: count the people who report to a person directly, plus everyone below them in the chain. For example, with Smith: John and Caleb report to Smith (2), and Mia reports to John, who already reports to Smith, so the total is 3.
Similarly for Jason: Smith reports to him and 3 people already report to Smith, so the total is 4.
I understand how to do it pythonically with some recursion; is there a way to do it efficiently in pandas? Any suggestions?
Expected OP:
Name Number of people reporting
John 1
Mia 0
Caleb 0
Smith 3
Jason 4
Scott Boston's Networkx solution is the preferred solution...
There are two solutions to this problem. The first is a vectorized pandas-style solution and should be fast over larger datasets; the second is pythonic but does not scale well to the dataset size the OP is working with (the original df is (223635, 4)).
PANDAS SOLUTION
This problem seeks to find out how many people each person in an organization manages, including subordinates' subordinates. This solution will create a dataframe by adding successive columns that are the managers of the previous columns, and then count the occurrence of each employee in that dataframe to determine the total number under them.
First we set up the input.
import pandas as pd
import numpy as np
data = [
["John", "144", "Smith", "200"],
["Mia", "220", "John", "144"],
["Caleb", "155", "Smith", "200"],
["Smith", "200", "Jason", "500"],
]
df = pd.DataFrame(data, columns=["Name", "SID", "Manager_name", "Manager_SID"])
df = df[["SID", "Manager_SID"]]
# shortening the columns for convenience
df.columns = ["1", "2"]
print(df)
1 2
0 144 200
1 220 144
2 155 200
3 200 500
First the employees without subordinates must be counted and put into a separate dictionary.
df_not_mngr = df.loc[~df['1'].isin(df['2']), '1']
non_mngr_dict = {str(key):0 for key in df_not_mngr.values}
non_mngr_dict
{'220': 0, '155': 0}
Next we will modify the dataframe by adding columns that hold the managers of the previous column. The loop stops when the right-most column contains no employees.
for i in range(2, 10):
    df = df.merge(
        df[["1", "2"]], how="left", left_on=str(i), right_on="1", suffixes=("_l", "_r")
    ).drop("1_r", axis=1)
    df.columns = [str(x) for x in range(1, i + 2)]
    if df.iloc[:, -1].isnull().all():
        break
    else:
        continue
print(df)
1 2 3 4 5
0 144 200 500 NaN NaN
1 220 144 200 500 NaN
2 155 200 500 NaN NaN
3 200 500 NaN NaN NaN
All columns except the first are collapsed, and each employee is counted and added to a dictionary.
from collections import Counter
result = dict(Counter(df.iloc[:, 1:].values.flatten()))
The non manager dictionary is added to the result.
result.update(non_mngr_dict)
result
{'200': 3, '500': 4, nan: 8, '144': 1, '220': 0, '155': 0}
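To present this as the expected output, a possible finishing step (a sketch; the orig frame and the SID-to-name mapping are reconstructed here from the question's data):

import pandas as pd

orig = pd.DataFrame(data, columns=["Name", "SID", "Manager_name", "Manager_SID"])
# collect every (sid, name) pair seen on either side of the relationship
names = pd.concat([
    orig[["SID", "Name"]].set_axis(["sid", "name"], axis=1),
    orig[["Manager_SID", "Manager_name"]].set_axis(["sid", "name"], axis=1),
]).drop_duplicates()
counts = {k: v for k, v in result.items() if isinstance(k, str)}  # drop the NaN padding key
print(names.assign(reports=names["sid"].map(counts).fillna(0).astype(int)))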
RECURSIVE PYTHONIC SOLUTION
I think this is probably way more pythonic than you were looking for. First I created a list 'all_sids' to make sure we capture all employees, as not every employee appears in each column.
import pandas as pd
import numpy as np
data = [
["John", "144", "Smith", "200"],
["Mia", "220", "John", "144"],
["Caleb", "155", "Smith", "200"],
["Smith", "200", "Jason", "500"],
]
df = pd.DataFrame(data, columns=["Name", "SID", "Manager_name", "Manager_SID"])
all_sids = pd.unique(df[['SID', 'Manager_SID']].values.ravel('K'))
Then create a pivot table.
dfp = df.pivot_table(values='Name', index='SID', columns='Manager_SID', aggfunc='count')
dfp
Manager_SID 144 200 500
SID
144 NaN 1.0 NaN
155 NaN 1.0 NaN
200 NaN NaN 1.0
220 1.0 NaN NaN
Then a function that will go through the pivot table to total up all the reports.
def count_mngrs(SID, count=0):
    if str(SID) not in dfp.columns:
        return count
    else:
        count += dfp[str(SID)].sum()
        sid_list = dfp[dfp[str(SID)].notnull()].index
        for sid in sid_list:
            count = count_mngrs(sid, count)
        return count
Call the function for each employee and print the results.
print('SID', ' Number of People Reporting')
for sid in all_sids:
    print(sid, " ", int(count_mngrs(sid)))
Results are below; sorry, I was a bit lazy about pairing the names with the SIDs.
SID Number of People Reporting
144 1
220 0
155 0
200 3
500 4
Look forward to seeing a more pandas type solution!
This is also a graph problem, and you can use networkx:
import networkx as nx
import pandas as pd
data = [["John","144","Smith","200"], ["Mia","220","John","144"],["Caleb","155","Smith","200"],["Smith","200","Jason","500"]]
data_frame = pd.DataFrame(data,columns = ["Name","ID","Manager_name","Manager_ID"])
#create a directed graph object using nx.DiGraph
G = nx.from_pandas_edgelist(data_frame,
                            source='Name',
                            target='Manager_name',
                            create_using=nx.DiGraph())
#use nx.ancestors to get the set of "ancestor" nodes for each node in the directed graph;
#since edges point from employee to manager, the ancestors of a node are exactly
#its direct and indirect reports
pd.DataFrame.from_dict({i: len(nx.ancestors(G, i)) for i in G.nodes()},
                       orient='index',
                       columns=['Num of People reporting'])
Output:
Num of People reporting
John 1
Smith 3
Mia 0
Caleb 0
Jason 4
Drawing the graph with networkx: (figure of the directed reporting graph omitted)
