Pandas dataframe to duplicated matrix in sum of quantities - python

import pandas as pd
data = {0: {'ID': 'A', 'Qty': 1, 'Type': 'SVGA'},
1: {'ID': 'B', 'Qty': 2, 'Type': 'SVGA'},
2: {'ID': 'B', 'Qty': 2, 'Type': 'XGA'},
3: {'ID': 'C', 'Qty': 3, 'Type': 'XGA'},
4: {'ID': 'D', 'Qty': 4, 'Type': 'XGA'},
5: {'ID': 'A', 'Qty': 1, 'Type': 'LED'},
6: {'ID': 'C', 'Qty': 3, 'Type': 'LED'}}
df = pd.DataFrame.from_dict(data, orient='index')
Is it possible to transform this dataframe into a Type-by-Type matrix of summed quantities?
Expected output:
      LED  SVGA  XGA
LED     4     1    3
SVGA    1     3    2
XGA     3     2    9

The key here is the "ID" column: the value in each Type-Type cell is the sum of Qty over the IDs where both Types appear.
So, start with a self-merge on "ID". You can then pivot your result to get your matrix.
merge + crosstab
v = df.merge(df[['ID', 'Type']], on='ID')
pd.crosstab(v.Type_x, v.Type_y, v.Qty, aggfunc='sum')
Type_y  LED  SVGA  XGA
Type_x
LED       4     1    3
SVGA      1     3    2
XGA       3     2    9
merge + pivot_table
df.merge(df[['ID', 'Type']], on='ID').pivot_table(
index='Type_x', columns='Type_y', values='Qty', aggfunc='sum'
)
Type_y  LED  SVGA  XGA
Type_x
LED       4     1    3
SVGA      1     3    2
XGA       3     2    9
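One caveat (not triggered by this sample data, where every pair of Types shares at least one ID): if two Types never co-occur, those cells come back as NaN. A small sketch of how you might zero them out:
# pivot_table can fill the gaps directly
df.merge(df[['ID', 'Type']], on='ID').pivot_table(
    index='Type_x', columns='Type_y', values='Qty',
    aggfunc='sum', fill_value=0
)
# for the crosstab version, fill afterwards
pd.crosstab(v.Type_x, v.Type_y, v.Qty, aggfunc='sum').fillna(0)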

Related

Stack different column values into one column in a pandas dataframe

I have the following dataframe -
import pandas as pd
import numpy as np

df = pd.DataFrame({
'ID': [1, 2, 2, 3, 3, 3, 4],
'Prior': ['a', 'b', 'c', 'd', 'e', 'f', 'g'],
'Current': ['a1', 'c', 'c1', 'e', 'f', 'f1', 'g1'],
'Date': ['1/1/2019', '5/1/2019', '10/2/2019', '15/3/2019', '6/5/2019',
'7/9/2019', '16/11/2019']
})
This is my desired output -
desired_df = pd.DataFrame({
'ID': [1, 1, 2, 2, 2, 3, 3, 3, 3, 4, 4],
'Prior_Current': ['a', 'a1', 'b', 'c', 'c1', 'd', 'e', 'f', 'f1', 'g',
'g1'],
'Start_Date': ['', '1/1/2019', '', '5/1/2019', '10/2/2019', '', '15/3/2019',
'6/5/2019', '7/9/2019', '', '16/11/2019'],
'End_Date': ['1/1/2019', '', '5/1/2019', '10/2/2019', '', '15/3/2019',
'6/5/2019', '7/9/2019', '', '16/11/2019', '']
})
I tried the following -
keys = ['Prior', 'Current']
df2 = (
pd.melt(df, id_vars='ID', value_vars=keys, value_name='Prior_Current')
.merge(df[['ID', 'Date']], how='left', on='ID')
)
df2['Start_Date'] = np.where(df2['variable'] == 'Prior', df2['Date'], '')
df2['End_Date'] = np.where(df2['variable'] == 'Current', df2['Date'], '')
df2.sort_values(['ID'], ascending=True, inplace=True)
But this does not seem to be working. Please help.
You can use stack and pivot_table:
k = df.set_index(['ID', 'Date']).stack().reset_index()
df = k.pivot_table(index=['ID', 0], columns='level_2', values='Date',
                   aggfunc=''.join, fill_value='').reset_index()
df.columns = ['ID', 'prior-current', 'start-date', 'end-date']
OUTPUT:
    ID prior-current start-date    end-date
0    1             a               1/1/2019
1    1            a1   1/1/2019
2    2             b               5/1/2019
3    2             c   5/1/2019   10/2/2019
4    2            c1  10/2/2019
5    3             d              15/3/2019
6    3             e  15/3/2019    6/5/2019
7    3             f   6/5/2019    7/9/2019
8    3            f1   7/9/2019
9    4             g             16/11/2019
10   4            g1 16/11/2019
Explanation:
After stack / reset_index, k will look like this:
ID Date level_2 0
0 1 1/1/2019 Prior a
1 1 1/1/2019 Current a1
2 2 5/1/2019 Prior b
3 2 5/1/2019 Current c
4 2 10/2/2019 Prior c
5 2 10/2/2019 Current c1
6 3 15/3/2019 Prior d
7 3 15/3/2019 Current e
8 3 6/5/2019 Prior e
9 3 6/5/2019 Current f
10 3 7/9/2019 Prior f
11 3 7/9/2019 Current f1
12 4 16/11/2019 Prior g
13 4 16/11/2019 Current g1
Now we can pivot using ID and column 0 as the index, level_2 as the columns, and the Date column as the values.
Finally, we need to rename the columns to get the desired result.
My approach is to build the target df step by step. The first step extends your code using melt() and merge(); the merges are done on the 'Current' and 'Prior' columns to pull in the start and end dates.
df = pd.DataFrame({
'ID': [1, 2, 2, 3, 3, 3, 4],
'Prior': ['a', 'b', 'c', 'd', 'e', 'f', 'g'],
'Current': ['a1', 'c', 'c1', 'e', 'f', 'f1', 'g1'],
'Date': ['1/1/2019', '5/1/2019', '10/2/2019', '15/3/2019', '6/5/2019',
'7/9/2019', '16/11/2019']
})
df2 = pd.melt(df, id_vars='ID', value_vars=['Prior', 'Current'],
              value_name='Prior_Current').drop(columns='variable').drop_duplicates().sort_values('ID')
df2 = df2.merge(df[['Current', 'Date']], how='left',
                left_on='Prior_Current', right_on='Current').drop(columns='Current')
df2 = df2.merge(df[['Prior', 'Date']], how='left',
                left_on='Prior_Current', right_on='Prior').drop(columns='Prior')
df2 = df2.fillna('').reset_index(drop=True)
df2.columns = ['ID', 'Prior_Current', 'Start_Date', 'End_Date']
An alternative is to define a custom function that looks up the date, then apply it with a lambda:
def get_date(x, col):
    try:
        return df['Date'][df[col] == x].values[0]
    except IndexError:
        return ''

df2 = pd.melt(df, id_vars='ID', value_vars=['Prior', 'Current'],
              value_name='Prior_Current').drop(columns='variable').drop_duplicates().sort_values('ID').reset_index(drop=True)
df2['Start_Date'] = df2['Prior_Current'].apply(lambda x: get_date(x, 'Current'))
df2['End_Date'] = df2['Prior_Current'].apply(lambda x: get_date(x, 'Prior'))
Both variants reproduce the desired_df shown in the question.

Update columns in one data frame with values from another data frame

I am trying to update df2 with columns and data from ref_df1 so that the output data frame has all of the columns ['Code', 'Place', 'Product', 'Name', 'Value'], with data pulled from the reference data frame using the Code column as the key. I am not sure how to get to this output.
import pandas as pd
data1 = {
'Code': [1, 2, 3, 4, 5, 6],
'Name': ['Company1', 'Company2', 'Company3', 'Company4', 'Company5', 'Company6'],
'Value': [200, 300, 400, 500, 600, 700],
}
ref_df1 = pd.DataFrame(data1, columns=['Code', 'Name', 'Value'])
data2 = {
'Code': [1, 2, 1, 3, 4, 1, 6],
'Place': ['A', 'B', 'E', 'G', 'I', 'K', 'L'],
'Product': ['P11', 'P22', 'P12', 'P33', 'P44', 'P13', 'P61'],
}
df2 = pd.DataFrame(data2, columns=['Code', 'Place', 'Product'])
Desired output: every row of df2 with Name and Value pulled in from ref_df1, matched on Code.
You can merge the two data frames.
df2.merge(ref_df1)
#output:
Code Place Product Name Value
0 1 A P11 Company1 200
1 1 E P12 Company1 200
2 1 K P13 Company1 200
3 2 B P22 Company2 300
4 3 G P33 Company3 400
5 4 I P44 Company4 500
6 6 L P61 Company6 700
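Since Code is the only column the two frames share, merge joins on it automatically, using an inner join by default. A sketch of the more explicit, equivalent call (how='left' would also keep any df2 rows whose Code is missing from ref_df1, a hypothetical case here, since every code in df2 exists in the reference):
out = df2.merge(ref_df1, on='Code', how='left')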

Pandas Columns to Flattened Dictionary (instead of list of dictionaries)

I have a DF that looks like this.
df = pd.DataFrame({'ID': {0: 1, 1: 2, 2: 3}, 'Value': {0: 'a', 1: 'b', 2: 'c'}})
   ID Value
0   1     a
1   2     b
2   3     c
I'd like to create a dictionary out of it.
So if I run df.to_dict('records'), it gives me
[{'ID': 1, 'Value': 'a'},
 {'ID': 2, 'Value': 'b'},
 {'ID': 3, 'Value': 'c'}]
However, what I want is the following.
{
1: 'a',
2: 'b',
3: 'c'
}
All of the rows in the DF are unique, so it shouldn't run into duplicate key issues.
Try with
d = dict(zip(df.ID, df.Value))
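An equivalent pandas-native one-liner, assuming the ID values are unique as stated:
d = df.set_index('ID')['Value'].to_dict()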

Correlation in Apache Spark and groupBy with Python

I'm new to Python and Apache Spark, and I'm trying to understand how the function pyspark.sql.functions.corr(val1, val2) works.
I have a big dataframe with car brand, age and price. I want to get the correlation between age and price for each brand.
I have 2 solutions:
from pyspark.sql import functions as F

# get all brands
get_all_maker = data.groupBy("brand").agg(F.count("*").alias("counts")).collect()
for row in get_all_maker:
    print(row["brand"], ": ", data.filter(data["brand"] == row["brand"]).corr("age", "price"))
This solution is slow, because corr is called once per brand.
So I tried to do it with one aggregation:
get_all_maker_corr = data.groupBy("brand").agg(
    F.count("*").alias("counts"),
    F.corr("age", "price").alias("correlation")).collect()
for row in get_all_maker_corr:
    print(row["brand"], ": ", row["correlation"])
When I compare the results, they are different. But why?
I tried it with a simple example. Here I generate a small data frame:
d = [
{'name': 'a', 'age': 1, 'price': 2},
{'name': 'a', 'age': 2, 'price': 4},
{'name': 'b', 'age': 1, 'price': 1},
{'name': 'b', 'age': 2, 'price': 2}
]
b = spark.createDataFrame(d)
Let's test two methods:
# first version
get_all_maker = b.groupBy("name").agg(F.count("*").alias("counts")).collect()
print("Correlation (1st)")
for row in get_all_maker:
    print(row["name"], "(", row["counts"], "):", b.filter(b["name"] == row["name"]).corr("age", "price"))
# second version
get_all_maker_corr = b.groupBy("name").agg(
    F.count("*").alias("counts"),
    F.corr("age", "price").alias("correlation")).collect()
print("Correlation (2nd)")
for row in get_all_maker_corr:
    print(row["name"], "(", row["counts"], "):", row["correlation"])
Both of them give me the same answer:
Correlation (1st)
b ( 2 ): 1.0
a ( 2 ): 1.0
Let's add another entry with a None value to the data frame:
d = [
{'name': 'a', 'age': 1, 'price': 2},
{'name': 'a', 'age': 2, 'price': 4},
{'name': 'a', 'age': 3, 'price': None},
{'name': 'b', 'age': 1, 'price': 1},
{'name': 'b', 'age': 2, 'price': 2}
]
b = spark.createDataFrame(d)
With the first version you get these results:
Correlation (1st)
b ( 2 ): 1.0
a ( 3 ): -0.5
and the second version gives different results:
Correlation (2nd)
b ( 2 ): 1.0
a ( 3 ): 1.0
It seems that corr called on the filtered DataFrame treats the None value as 0, while F.corr inside agg ignores rows containing None.
So these two methods are not equivalent. I don't know whether this is a bug or a feature of Spark, but if you want to compute correlations, the data should first be cleaned of None values.
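A minimal sketch of that cleanup (assuming the column names used above): drop the rows with nulls in either column, then both approaches see the same data.
# drop rows where age or price is null, then aggregate once per name
clean = b.na.drop(subset=["age", "price"])
clean.groupBy("name").agg(
    F.count("*").alias("counts"),
    F.corr("age", "price").alias("correlation")
).show()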

Converting to Pandas MultiIndex

I have a dataframe of the form:
SpeciesName 0
0 A [[Year: 1, Quantity: 2],[Year: 3, Quantity: 4...]]
1 B [[Year: 1, Quantity: 7],[Year: 2, Quantity: 15...]]
2 C [[Year: 2, Quantity: 9],[Year: 4, Quantity: 13...]]
I'm attempting to create a MultiIndex that uses the SpeciesName and the Year as the index:
SpeciesName Year
A           1     Data
            2     Data
B           1     Data
            2     Data
I have not been able to get pandas.MultiIndex(..) to work and my attempts at iterating through the dataset and manually creating a new object have not been very fruitful. Any insights would be greatly appreciated!
I'm going to assume your data is a list of dictionaries, because otherwise what you've written makes no sense unless the cells are strings, and I don't want to parse strings.
df = pd.DataFrame([
['A', [dict(Year=1, Quantity=2), dict(Year=3, Quantity=4)]],
['B', [dict(Year=1, Quantity=7), dict(Year=2, Quantity=15)]],
['C', [dict(Year=2, Quantity=9), dict(Year=4, Quantity=13)]]
], columns=['SpeciesName', 0])
df
SpeciesName 0
0 A [{'Year': 1, 'Quantity': 2}, {'Year': 3, 'Quantity': 4}]
1 B [{'Year': 1, 'Quantity': 7}, {'Year': 2, 'Quantity': 15}]
2 C [{'Year': 2, 'Quantity': 9}, {'Year': 4, 'Quantity': 13}]
Then the solution is obvious
pd.DataFrame.from_records(
    *zip(*(
        [d, s]
        for s, l in zip(df['SpeciesName'], df[0].values.tolist())
        for d in l
    ))
).set_index('Year', append=True)
        Quantity
  Year
A 1            2
  3            4
B 1            7
  2           15
C 2            9
  4           13
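If you're on pandas 0.25 or newer, a sketch of an alternative that avoids the nested comprehension (assuming the same list-of-dicts structure):
# explode the lists into one row per dict, expand the dicts into columns,
# then build the (SpeciesName, Year) MultiIndex
exploded = df.explode(0).reset_index(drop=True)
expanded = pd.concat([exploded['SpeciesName'], exploded[0].apply(pd.Series)], axis=1)
result = expanded.set_index(['SpeciesName', 'Year'])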
