How to combine two dataframes and average like values? - python

I'm pretty new to machine learning. I have two dataframes that have movie ratings in them. Some of the movie ratings have the same movie title, but different number ratings while other rows have movie titles that the other data frame doesn't have. I was wondering how I would be able to combine the two dataframes and average any ratings that have the same movie name. Thanks for the help!

You can use pd.concat with GroupBy.agg:
# df = pd.DataFrame({'Movie': ['IR', 'R'], 'rating': [95, 90], 'director': ['SB', 'RC']})
# df1 = pd.DataFrame({'Movie': ['IR', 'BH'], 'rating': [93, 88], 'director': ['SB', 'RC']})
(pd.concat([df, df1])
   .groupby('Movie', as_index=False)
   .agg({'rating': 'mean', 'director': 'first'}))
  Movie  rating director
0    BH      88       RC
1    IR      94       SB
2     R      90       RC
Or DataFrame.append (note that append was deprecated in pandas 1.4 and removed in 2.0, so pd.concat is the forward-compatible choice):
df.append(df1).groupby('Movie', as_index=False).agg({'rating': 'mean', 'director': 'first'})
  Movie  rating director
0    BH      88       RC
1    IR      94       SB
2     R      90       RC
If you want the Movie column as the index, simply remove as_index=False: the as_index parameter of groupby defaults to True, so Movie becomes the index.
If you want to maintain the original order of appearance, set the sort parameter to False in groupby:
(df.append(df1)
   .groupby('Movie', as_index=False, sort=False)
   .agg({'rating': 'mean', 'director': 'first'}))
  Movie  rating director
0    IR      94       SB
1     R      90       RC
2    BH      88       RC
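To answer the original question directly, here is a minimal, self-contained sketch of the same idea (the frame names ratings_a and ratings_b are made up for illustration; only the rating column is averaged and other columns are ignored):
import pandas as pd

ratings_a = pd.DataFrame({'Movie': ['IR', 'R'], 'rating': [95, 90]})
ratings_b = pd.DataFrame({'Movie': ['IR', 'BH'], 'rating': [93, 88]})

# Stack both frames, then average the rating for each movie title.
combined = (pd.concat([ratings_a, ratings_b])
              .groupby('Movie', as_index=False)['rating']
              .mean())
print(combined)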

Related

Pandas Dataframe: Using same category codes on different existing dataframes with same category

I have two pandas dataframes with some columns in common. These columns are of type category but unfortunately the category codes don't match for the two dataframes. For example I have:
>>> df1
         artist           song
0   The Killers  Mr Brightside
1  David Guetta       Memories
2       Estelle      Come Over
3   The Killers          Human
>>> df2
         artist  date
0   The Killers  2010
1  David Guetta  2012
2       Estelle  2005
3   The Killers  2006
But:
>>> df1['artist'].cat.codes
0    55
1    78
2    93
3    55
Whereas:
>>> df2['artist'].cat.codes
0    99
1    12
2    23
3    99
What I would like is for my second dataframe df2 to take the same category codes as the first one df1 without changing the category values. Is there any way to do this?
(Edit)
Here is a screenshot of my two dataframes. Essentially I want the song_tags to have the same cat codes for artist_name and track_name as the songs dataframe. Also song_tags is created from a merge between songs and another tag dataframe (which contains song data and their tags, without the user information) and then saved and loaded through pickle. Also it might be relevant to add that I had to cast artist_name and track_name in song_tags to type category from type object.
I think essentially my question is: how to modify category codes of an existing dataframe column?
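No accepted answer is reproduced here, so below is a minimal sketch of one way to do it (the df1/df2 frames are rebuilt from the printed examples; it assumes every artist in df2 also appears in df1, otherwise the unmatched values would become NaN):
import pandas as pd

df1 = pd.DataFrame({'artist': ['The Killers', 'David Guetta', 'Estelle', 'The Killers'],
                    'song': ['Mr Brightside', 'Memories', 'Come Over', 'Human']})
df2 = pd.DataFrame({'artist': ['The Killers', 'David Guetta', 'Estelle', 'The Killers'],
                    'date': [2010, 2012, 2005, 2006]})

df1['artist'] = df1['artist'].astype('category')

# Re-encode df2's column against df1's category list so both share one code table.
df2['artist'] = pd.Categorical(df2['artist'], categories=df1['artist'].cat.categories)

print(df1['artist'].cat.codes.tolist())  # [2, 0, 1, 2]
print(df2['artist'].cat.codes.tolist())  # [2, 0, 1, 2] -- same codes, same categories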

Update a value based on another dataframe pairing

I have a problem where I need to update a value if people were at the same table.
import pandas as pd
data = {"p1": ['Jen', 'Mark', 'Carrie'],
        "p2": ['John', 'Jason', 'Rob'],
        "value": [10, 20, 40]}
df = pd.DataFrame(data, columns=['p1', 'p2', 'value'])
meeting = {'person': ['Jen', 'Mark', 'Carrie', 'John', 'Jason', 'Rob'],
           'table': [1, 2, 3, 1, 2, 3]}
meeting = pd.DataFrame(meeting, columns=['person', 'table'])
df is a relationship table and value is the field I need to update. So if two people were at the same table in the meeting dataframe, then update the df row accordingly.
For example: Jen and John were both at table 1, so I need to update the row in df that has Jen and John and set their value to value + 100, i.e. 110.
I thought about maybe doing a self join on meeting to get the format to match that of df, but I'm not sure whether this is the easiest or fastest approach.
IIUC you could set the person as index in the meeting dataframe, and use its table values to replace the names in df. Then if both mappings have the same value (table), replace with df.value+100:
m = df[['p1','p2']].replace(meeting.set_index('person').table).eval('p1==p2')
df['value'] = df.value.mask(m, df.value+100)
print(df)
       p1     p2  value
0     Jen   John    110
1    Mark  Jason    120
2  Carrie    Rob    140
This could be an approach, using df.to_records():
groups=meeting.groupby('table').agg(set)['person'].to_list()
df['value']=[row[-1]+100 if set(list(row)[1:3]) in groups else row[-1] for row in df.to_records()]
Output:
df
       p1     p2  value
0     Jen   John    110
1    Mark  Jason    120
2  Carrie    Rob    140
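For comparison, a small sketch (not taken from the answers above) that maps each person to a table with Series.map, which may read more plainly than the replace/eval trick:
import pandas as pd

df = pd.DataFrame({'p1': ['Jen', 'Mark', 'Carrie'],
                   'p2': ['John', 'Jason', 'Rob'],
                   'value': [10, 20, 40]})
meeting = pd.DataFrame({'person': ['Jen', 'Mark', 'Carrie', 'John', 'Jason', 'Rob'],
                        'table': [1, 2, 3, 1, 2, 3]})

# Look up each person's table, then add 100 wherever both people sat at the same one.
tables = meeting.set_index('person')['table']
same_table = df['p1'].map(tables) == df['p2'].map(tables)
df.loc[same_table, 'value'] += 100
print(df)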

Python Dataframe: Dropping duplicates based on certain conditions

Dataframe with duplicate Shop IDs where some Shop IDs occurred twice and some occurred thrice:
I only want to keep unique Shop IDs based on the shortest Shop Distance assigned to its Area.
    Area Shop Name  Shop Distance           Shop ID
0    AAA        Ly             86  5d87790c46a77300
1    AAA        Hi            230  5ce5522012138400
2    BBB        Hi            780  5ce5522012138400
3    CCC        Ly            450  5d87790c46a77300
...
91   MMM        Ju             43  4f76d0c0e4b01af7
92   MMM        Hi           1150  5ce5522012138400
...
Using pandas drop_duplicates drops the duplicate rows, but the condition is based on the first/last occurring Shop ID, which does not allow me to select by distance:
shops_df = shops_df.drop_duplicates(subset='Shop ID', keep= 'first')
I also tried to group by Shop ID then sort, but sort returns error: Duplicates
bbtshops_new['C'] = bbtshops_new.groupby('Shop ID')['Shop ID'].cumcount()
bbtshops_new.sort_values(by=['C'], axis=1)
So far I have tried doing it up to this stage:
# filter all the duplicates into a new df
df_toclean = shops_df[shops_df['Shop ID'].duplicated(keep= False)]
# create a mask for all unique Shop ID
mask = df_toclean['Shop ID'].value_counts()
# create a mask for the Shop ID that occurred 2 times
shop_2 = mask[mask==2].index
# create a mask for the Shop ID that occurred 3 times
shop_3 = mask[mask==3].index
# create a mask for the Shops that are under radius 750
dist_1 = df_toclean['Shop Distance']<=750
# returns results for all the Shop IDs that appeared twice and under radius 750
bbtshops_2 = df_toclean[dist_1 & df_toclean['Shop ID'].isin(shop_2)]
* If I use df_toclean['Shop Distance'].min() instead of dist_1, it returns 0 results.
I think I'm doing it the long way and still haven't figured out how to drop the duplicates. Does anyone know how to solve this in a shorter way? I'm new to Python, thanks for helping out!
Try to first sort the dataframe based on distance, then drop the duplicate shops.
df = shops_df.sort_values('Shop Distance')
df = df[~df['Shop ID'].duplicated()]  # The tilde (~) inverts the boolean mask.
Or just as one chained expression (per comment from #chmielcode).
df = (
    shops_df
    .sort_values('Shop Distance')
    .drop_duplicates(subset='Shop ID', keep='first')
    .reset_index(drop=True)  # Optional.
)
You can use idxmin:
df.loc[df.groupby('Area')['Shop Distance'].idxmin()]
  Area Shop Name  Shop Distance           Shop ID
0  AAA        Ly             86  5d87790c46a77300
2  BBB        Hi            780  5ce5522012138400
3  CCC        Ly            450  5d87790c46a77300
4  MMM        Ju             43  4f76d0c0e4b01af7
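If the goal is literally one surviving row per Shop ID (rather than one per Area), here is a small sketch of the same idxmin idea grouped on 'Shop ID' instead (the data is rebuilt from the visible rows only):
import pandas as pd

shops_df = pd.DataFrame({
    'Area': ['AAA', 'AAA', 'BBB', 'CCC', 'MMM', 'MMM'],
    'Shop Name': ['Ly', 'Hi', 'Hi', 'Ly', 'Ju', 'Hi'],
    'Shop Distance': [86, 230, 780, 450, 43, 1150],
    'Shop ID': ['5d87790c46a77300', '5ce5522012138400', '5ce5522012138400',
                '5d87790c46a77300', '4f76d0c0e4b01af7', '5ce5522012138400'],
})

# Keep, for each Shop ID, only the row with the smallest distance.
closest = shops_df.loc[shops_df.groupby('Shop ID')['Shop Distance'].idxmin()]
print(closest.sort_index())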

Python pandas data frame: remove rows where the index name DOES NOT occur in the other data frame

I have two data frames. I want to remove rows where the indexes do not occur in both data frames.
Here is an example of the data frames:
import pandas as pd
data = {'Correlation': [1.000000, 0.607340, 0.348844]}
df = pd.DataFrame(data, columns=['Correlation'])
df = df.rename(index={0: 'GINI'})
df = df.rename(index={1: 'Central government debt, total (% of GDP)'})
df = df.rename(index={2: 'Grants and other revenue (% of revenue)'})
data_2 = {'Correlation': [1.000000, 0.607340, 0.348844, 0.309390, -0.661046]}
df_2 = pd.DataFrame(data_2, columns=['Correlation'])
df_2 = df_2.rename(index={0: 'GINI'})
df_2 = df_2.rename(index={1: 'Central government debt, total (% of GDP)'})
df_2 = df_2.rename(index={2: 'Grants and other revenue (% of revenue)'})
df_2 = df_2.rename(index={3: 'Compensation of employees (% of expense)'})
df_2 = df_2.rename(index={4: 'Central government debt, total (current LCU)'})
I have found this question: How to remove rows in a Pandas dataframe if the same row exists in another dataframe? but was unable to use it as I am trying to remove if the index name is the same.
I also saw this question: pandas get rows which are NOT in other dataframe but removes rows which are equal in both data frames but I also did not find this useful.
What I have thought to do is to transpose then concat the data frames and remove duplicate columns:
df = df.T
df_2 = df_2.T
df3 = pd.concat([df,df_2],axis = 1)
df3.iloc[: , ~df3.columns.duplicated()]
The problem with this is that it only removes one of the columns that is duplicated but I want it to remove both these columns.
Any help doing this would be much appreciated, cheers.
You can just compare the indexes and use .loc to pull the relevant rows:
In [19]: df1 = pd.DataFrame(list(range(50)), index=range(0, 100, 2))
In [20]: df2 = pd.DataFrame(list(range(34)), index=range(0, 100, 3))
In [21]: df2.loc[df2.index.difference(df1.index)]
Out[21]:
     0
3    1
9    3
15   5
21   7
27   9
33  11
39  13
45  15
51  17
57  19
63  21
69  23
75  25
81  27
87  29
93  31
99  33
You can simply do this for the indices that are in df_2 but not in df:
df_2[~df_2.index.isin(df.index)]
                                              Correlation
Compensation of employees (% of expense)         0.309390
Central government debt, total (current LCU)    -0.661046
I have managed to work this out by adapting the answers already submitted:
df_2[df_2.index.isin(df.index)]
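For completeness, a small self-contained sketch that keeps only rows whose index labels occur in both frames (the two frames are rebuilt from the question; Index.intersection handles the matching, and both frames are filtered):
import pandas as pd

df = pd.DataFrame({'Correlation': [1.000000, 0.607340, 0.348844]},
                  index=['GINI',
                         'Central government debt, total (% of GDP)',
                         'Grants and other revenue (% of revenue)'])
df_2 = pd.DataFrame({'Correlation': [1.000000, 0.607340, 0.348844, 0.309390, -0.661046]},
                    index=['GINI',
                           'Central government debt, total (% of GDP)',
                           'Grants and other revenue (% of revenue)',
                           'Compensation of employees (% of expense)',
                           'Central government debt, total (current LCU)'])

# Index labels present in both frames, then select those rows from each one.
common = df.index.intersection(df_2.index)
print(df.loc[common])
print(df_2.loc[common])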

Top 5 movies with the most ratings

I'm currently facing a little problem. I'm working with the MovieLens 1M data and trying to get the top 5 movies with the most ratings.
movies = pandas.read_table('movies.dat', sep='::', header=None, names= ['movie_id', 'title', 'genre'])
users = pandas.read_table('users.dat', sep='::', header=None, names=['user_id', 'gender','age','occupation_code','zip'])
ratings = pandas.read_table('ratings.dat', sep='::', header=None, names=['user_id','movie_id','rating','timestamp'])
movie_data = pandas.merge(movies,pandas.merge(ratings,users))
The above code is what I have written to merge the .dat files into one Dataframe.
Then I need the top 5 from that movie_data dataframe, based on the ratings.
Here is what I have done:
print(movie_data.sort('rating', ascending = False).head(5))
This seem to find the top 5 based on the rating. However, the output is:
        movie_id              title                        genre  user_id  \
0              1   Toy Story (1995)  Animation|Children's|Comedy        1
657724      2409    Rocky II (1979)                 Action|Drama      101
244214      1012  Old Yeller (1957)             Children's|Drama      447
657745      2409    Rocky II (1979)                 Action|Drama      549
657752      2409    Rocky II (1979)                 Action|Drama      684
        rating  timestamp gender  age  occupation_code    zip
0            5  978824268      F    1               10  48067
657724       5  977578472      F   18                3  33314
244214       5  976236279      F   45               11  55105
657745       5  976119207      M   25                6  53217
657752       5  975603281      M   25                4  27510
As you can see, Rocky II appears 3 times. I would like to know if I can somehow remove the duplicates quickly, other than going through the list again and removing duplicates that way.
I have looked at pivot_table, but I'm not quite sure how it works, so if this can be done with such a table, I need some explanation of how it works.
EDIT.
The first comment did indeed remove the duplicates.
movie_data.drop_duplicates(subset='movie_id').sort('rating', ascending = False).head(5)
Thank you :)
You can drop the duplicate entries by calling drop_duplicates and pass param subset='movie_id':
movie_data.drop_duplicates(subset='movie_id').sort('rating', ascending = False).head(5)
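In current pandas DataFrame.sort no longer exists (it was replaced by sort_values), so a modern spelling of the same answer would look roughly like the sketch below; a tiny stand-in frame is used because the MovieLens .dat files aren't loaded here:
import pandas as pd

# Tiny stand-in for the merged movie_data frame built in the question.
movie_data = pd.DataFrame({
    'movie_id': [1, 2409, 1012, 2409, 2409],
    'title': ['Toy Story (1995)', 'Rocky II (1979)', 'Old Yeller (1957)',
              'Rocky II (1979)', 'Rocky II (1979)'],
    'rating': [5, 5, 5, 5, 4],
})

top5 = (movie_data
        .drop_duplicates(subset='movie_id')
        .sort_values('rating', ascending=False)
        .head(5))
print(top5)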
