PySpark groupby and max value selection - python

I have a PySpark dataframe like
name city date
satya Mumbai 13/10/2016
satya Pune 02/11/2016
satya Mumbai 22/11/2016
satya Pune 29/11/2016
satya Delhi 30/11/2016
panda Delhi 29/11/2016
brata BBSR 28/11/2016
brata Goa 30/10/2016
brata Goa 30/10/2016
I need to find out the most preferred city for each name. The logic: take a city as fav_city if it has the maximum number of occurrences for that 'name'+'city' pair, and if several cities tie on the count, pick the one with the latest date. To illustrate:
d = df.groupby('name','city').count()
#name city count
brata Goa 2 #clear favourite
brata BBSR 1
panda Delhi 1 #single entry, so clear favourite
satya Pune 2 ##confusion
satya Mumbai 2 ##confusion
satya Delhi 1 ##should be discarded, since other cities have a higher count
#So get cities having max count
dd = d.groupby('name').agg(F.max('count').alias('count'))
ddd = dd.join(d,['name','count'],'left')
#name count city
brata 2 Goa #fav found
panda 1 Delhi #fav found
satya 2 Mumbai #can't say
satya 2 Pune #can't say
For user 'satya' I need to go back to the transaction history and, among the cities tied at the max count ('Mumbai' and 'Pune'), take the one transacted last (max date) as fav_city. Here that is 'Pune', since '29/11/2016' is the latest date.
But I am not able to work out how to get that done.
Please help me with the logic, or suggest a better (faster/more compact) solution. Thanks.

First convert the date column to DateType:
import pyspark.sql.functions as F
df_with_date = df.withColumn(
    "date",
    F.to_date("date", "dd/MM/yyyy")
    # For Spark < 2.2:
    # F.unix_timestamp("date", "dd/MM/yyyy").cast("timestamp").cast("date")
)
Next, group by name and city, but extend the aggregation like this:
df_agg = (df_with_date
    .groupBy("name", "city")
    .agg(F.count("city").alias("count"), F.max("date").alias("max_date")))
Define a window:
from pyspark.sql.window import Window
w = Window.partitionBy("name").orderBy(F.desc("count"), F.desc("max_date"))
Add rank:
df_with_rank = df_agg.withColumn("rank", F.dense_rank().over(w))
And filter:
result = df_with_rank.where(F.col("rank") == 1)
You can detect remaining duplicates using code like this:
final_w = Window.partitionBy("name").rowsBetween(
    Window.unboundedPreceding, Window.unboundedFollowing)
result = result.withColumn("tie", F.count("*").over(final_w) != 1)
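Putting the pieces together, a minimal end-to-end sketch (the local SparkSession setup here is an illustration, not part of the original answer):

from pyspark.sql import SparkSession
from pyspark.sql.window import Window
import pyspark.sql.functions as F

spark = SparkSession.builder.master("local[*]").getOrCreate()

df = spark.createDataFrame([
    ("satya", "Mumbai", "13/10/2016"), ("satya", "Pune", "02/11/2016"),
    ("satya", "Mumbai", "22/11/2016"), ("satya", "Pune", "29/11/2016"),
    ("satya", "Delhi", "30/11/2016"), ("panda", "Delhi", "29/11/2016"),
    ("brata", "BBSR", "28/11/2016"), ("brata", "Goa", "30/10/2016"),
    ("brata", "Goa", "30/10/2016"),
], ["name", "city", "date"])

# count per (name, city), keeping the latest date to break ties
df_agg = (df.withColumn("date", F.to_date("date", "dd/MM/yyyy"))
    .groupBy("name", "city")
    .agg(F.count("city").alias("count"), F.max("date").alias("max_date")))

# rank by count, then by latest date, within each name
w = Window.partitionBy("name").orderBy(F.desc("count"), F.desc("max_date"))
df_agg.withColumn("rank", F.dense_rank().over(w)).where(F.col("rank") == 1).show()
# expected: brata -> Goa, panda -> Delhi, satya -> Pune (latest date wins the tie)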

Reusing the per-(name, city) counts d from the question, you can also sort by count descending and take the top entry, though note this returns only the single highest count overall, not one favourite per name:
d = df.groupby('name', 'city').count()
dd = d.sort(F.col('count').desc())
display(dd.take(1))

Related

How to convert a pandas DataFrame to multiple DataFrames?

My DataFrame
df = pandas.DataFrame({
    "City": ["Chennai","Banglore","Mumbai","Delhi","Chennai","Banglore","Mumbai","Delhi"],
    "Name": ["Praveen","Dhansekar","Naveen","Kumar","SelvaRani","Nithya","Suji","Konsy"],
    "Gender": ["M","M","M","M","F","F","F","F"]})
when printed, df appears like this:
       City       Name Gender
0   Chennai    Praveen      M
1  Banglore  Dhansekar      M
2    Mumbai     Naveen      M
3     Delhi      Kumar      M
4   Chennai  SelvaRani      F
5  Banglore     Nithya      F
6    Mumbai       Suji      F
7     Delhi      Konsy      F
I want to save the data in separate DataFrames as follows:
Chennai =
      City       Name Gender
0  Chennai    Praveen      M
4  Chennai  SelvaRani      F
Banglore =
       City       Name Gender
1  Banglore  Dhansekar      M
5  Banglore     Nithya      F
Mumbai =
     City    Name Gender
2  Mumbai  Naveen      M
6  Mumbai    Suji      F
Delhi =
    City   Name Gender
3  Delhi  Kumar      M
7  Delhi  Konsy      F
My code is:
D_name = sorted(df['City'].unique())
for i in D_name:
    f"{i}" = df[df['City'] == i]
The dataset has more than 100 cities. How do I write a for loop in Python to get multiple dataframes as output?
You can groupby and create a dictionary like so:
dict_dfs = dict(iter(df.groupby("City")))
Then you can directly access individual cities:
Delhi = dict_dfs["Delhi"]
print(Delhi)
# result:
City Name Gender
3 Delhi Kumar M
7 Delhi Konsy F
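With over 100 cities you probably don't want 100+ separate variables; iterating the dictionary is cleaner. A sketch, assuming you want to save one file per city (the CSV naming is just an illustration):

for city, frame in dict_dfs.items():
    frame.to_csv(f"{city}.csv", index=False)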
You could also do something like this:
groups = df.groupby(by='City')
Banglore = groups.get_group('Banglore')

Add a column from an existing dataframe into another between every other column

I'll try my best to explain this as I had trouble phrasing the title. I have two dataframes. What I would like to do is add a column from df1 into df2 between every other column.
For example, df1 looks like this:
Age City
0 34 Sydney
1 30 Toronto
2 31 Mumbai
3 32 Richmond
And after adding it into df2, the result looks like this:
Name Age Clicks City Country
0 Ali 34 10 Sydney Australia
1 Lori 30 20 Toronto Canada
2 Asher 31 45 Mumbai United States
3 Lylah 32 33 Richmond United States
In terms of code, I wasn't quite sure where to even start.
'''Concatenating the dataframes'''
for i in range(len(df2)):
    pos = i + 1
    df3 = df2.insert
#df2 = pd.concat([df1, df2], axis=1).sort_index(axis=1)
#df2.columns = np.arange(len(df2.columns))
#print(df2)
I was originally going to run it through a loop, but I wasn't quite sure how to do it. Any help would be appreciated!
You can use itertools.zip_longest. For example:
from itertools import zip_longest
new_columns = [
    v
    for v in (c for a in zip_longest(df2.columns, df1.columns) for c in a)
    if v is not None
]
df_out = pd.concat([df1, df2], axis=1)[new_columns]
print(df_out)
Prints:
Name Age Clicks City Country
0 Ali 34 10 Sydney Australia
1 Lori 30 20 Toronto Canada
2 Asher 31 45 Mumbai United States
3 Lylah 32 33 Richmond United States
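As a follow-up to the original df2.insert idea, a sketch that inserts each df1 column at every other position in a copy of df2 (the positions are derived from the example layout, and it assumes df1 and df2 share the same index):

df_out = df2.copy()
for i, col in enumerate(df1.columns):
    # odd positions 1, 3, ... interleave df1's columns between df2's
    df_out.insert(2 * i + 1, col, df1[col])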

Creating new dataframe column using string filter of other column

Below is a dataframe with the column 'Address'. I want to create a separate column 'City' by filtering for specific strings in the Address column.
df1
Serial_No Address
1 India Gate Delhi
2 Delhi Redcross Hospital
3 Tolleyganj Bus Stand Kolkata
4 Kolkata Howrah
5 Katra Jammu
Below is the script that I am using:
descr = []
col = 'City'
for col in df:
    if np.series(df[col]= df[df[col].str.contains('Delhi ', na=False)]:
        desc = 'Delhi'
    elif np.series(df[col]= df[df[col].str.contains('Kolkata ', na=False)]:
        desc = 'Kolkata'
    else:
        desc = 'None'
Below is the intended output
df1
Serial_No Address City
1 India Gate Delhi Delhi
2 Delhi Redcross Hospital Delhi
3 Tolleyganj Bus Stand Kolkata Kolkata
4 Kolkata Howrah Kolkata
5 Katra Jammu None
Let us try str.extract:
df1['City'] = df1.Address.str.extract('(Delhi|Kolkata)')[0]
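Note that str.extract yields NaN where nothing matches; if you want the literal string 'None' as in the intended output, fill it in:

df1['City'] = df1.Address.str.extract('(Delhi|Kolkata)')[0].fillna('None')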
Try this:
import pandas as pd

df1 = pd.DataFrame([[1, 'India Gate Delhi'],
                    [2, 'Delhi Redcross Hospital'],
                    [3, 'Tolleyganj Bus Stand Kolkata'],
                    [4, 'Kolkata Howrah'],
                    [5, 'Katra Jammu']],
                   columns=['Serial_No', 'Address'])
print(df1)

def f(df1):
    if 'Delhi' in df1['Address']:
        val = 'Delhi'
    elif 'Kolkata' in df1['Address']:
        val = 'Kolkata'
    else:
        val = 'None'
    return val

df1['City'] = df1.apply(f, axis=1)
print(df1)
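Keep in mind that apply with axis=1 calls a Python function once per row, so on a large frame the vectorized str.extract approach above will generally be faster.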

Update dataframe with another dataframe which has a different number of columns

I have a large df called data which looks like:
Identifier Surname First names(s) Date change Work Pattern Region
0 12233.0 Smith Bob FT NW
1 54213.0 Jones Sally 15/04/15 FT NW
2 12237.0 Evans Steve 26/08/14 FT SE
3 10610.0 Cooper Amy 16/08/12 FT SE
I have another dataframe called updates. In this example it holds updated information for a couple of the records in data and looks like:
Identifier Surname First names(s) Date change
0 12233.0 Smith Bob 05/09/14
1 10610.0 Cooper Amy 16/08/12
I'm trying to find a way to update data with the updates df so the resulting dataframe looks like:
Identifier Surname First names(s) Date change Work Pattern Region
0 12233.0 Smith Bob 15/09/14 FT NW
1 54213.0 Jones Sally 15/04/15 FT NW
2 12237.0 Evans Steve 26/08/14 FT SE
3 10610.0 Cooper Amy 16/08/12 FT SE
As you can see the Date change field for Bob in the data df has been updated with the Date change from the updates df.
What can I try next?
A while back I was dealing with this too. The straight .update gave me issues; as I recall, .update relies on the indexes matching, and they didn't match across my two dataframes, so I wanted to use certain columns as my index to update on. I wrote a function to deal with it, so this might be overkill for what you need, but try it and see if it works.
I'm also assuming the date you want from the updates dataframe should be 15/09/14, not 05/09/14, so I used that in my sample data below.
Also, I'm assuming Identifier is a unique key. If not, you'll need to include multiple columns in your key.
import pandas as pd

data = pd.DataFrame([[12233.0, 'Smith', 'Bob', '', 'FT', 'NW'],
                     [54213.0, 'Jones', 'Sally', '15/04/15', 'FT', 'NW'],
                     [12237.0, 'Evans', 'Steve', '26/08/14', 'FT', 'SE'],
                     [10610.0, 'Cooper', 'Amy', '16/08/12', 'FT', 'SE']],
                    columns=['Identifier', 'Surname', 'First names(s)',
                             'Date change', 'Work Pattern', 'Region'])
updates = pd.DataFrame([[12233.0, 'Smith', 'Bob', '15/09/14'],
                        [10610.0, 'Cooper', 'Amy', '16/08/12']],
                       columns=['Identifier', 'Surname', 'First names(s)', 'Date change'])

def update(df1, df2, keys_list):
    df1 = df1.set_index(keys_list)
    df2 = df2.set_index(keys_list)
    # Index.get_duplicates() was removed in pandas 1.0; this is the equivalent
    dup_idx1 = df1.index[df1.index.duplicated()].unique()
    dup_idx2 = df2.index[df2.index.duplicated()].unique()
    if len(dup_idx1) > 0 or len(dup_idx2) > 0:
        print('\n' + '#' * 50 + '\nError! Duplicate indices:')
        for element in dup_idx1:
            print('df1: %s' % (element,))
        for element in dup_idx2:
            print('df2: %s' % (element,))
        print('#' * 50 + '\n\n')
    df1.update(df2, overwrite=True)
    df1.reset_index(inplace=True)
    return df1

# the 3rd argument is a list, in case you need multiple columns as your unique key
df = update(data, updates, ['Identifier'])
Output:
print(data)
Identifier Surname First names(s) Date change Work Pattern Region
0 12233.0 Smith Bob FT NW
1 54213.0 Jones Sally 15/04/15 FT NW
2 12237.0 Evans Steve 26/08/14 FT SE
3 10610.0 Cooper Amy 16/08/12 FT SE
print(updates)
Identifier Surname First names(s) Date change
0 12233.0 Smith Bob 15/09/14
1 10610.0 Cooper Amy 16/08/12
df = update(data, updates, ['Identifier'])
print(df)
Identifier Surname First names(s) Date change Work Pattern Region
0 12233.0 Smith Bob 15/09/14 FT NW
1 54213.0 Jones Sally 15/04/15 FT NW
2 12237.0 Evans Steve 26/08/14 FT SE
3 10610.0 Cooper Amy 16/08/12 FT SE
Using DataFrame.update.
First set index:
data.set_index('Identifier', inplace=True)
updates.set_index('Identifier', inplace=True)
Then update:
data.update(updates)
print(data)
Surname First names(s) Date change Work Pattern Region
Identifier
12233.0 Smith Bob 15/09/14 FT NW
54213.0 Jones Sally 15/04/15 FT NW
12237.0 Evans Steve 26/08/14 FT SE
10610.0 Cooper Amy 16/08/12 FT SE
If you need multiple columns to create a unique index you can just set them with a list. For example:
data.set_index(['Identifier', 'Surname'], inplace=True)
updates.set_index(['Identifier', 'Surname'], inplace=True)
data.update(updates)
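Remember that update modifies the frame in place and, by default, only overwrites where updates has non-NA values. To get the key columns back as regular columns afterwards (assuming the set_index calls above):

data.reset_index(inplace=True)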

String mode aggregation with group by function

I have a dataframe which looks like below:
Country City
UK London
USA Washington
UK London
UK Manchester
USA Washington
USA Chicago
I want to group by country and aggregate to the most repeated city in each country.
My desired output should be like:
Country City
UK London
USA Washington
Because London and Washington appear 2 times, whereas Manchester and Chicago appear only 1 time.
I tried
from scipy.stats import mode

df_summary = df.groupby('Country')['City'].\
    apply(lambda x: mode(x)[0][0]).reset_index()
But it seems it doesn't work on strings.
I can't replicate your error, but you can use pd.Series.mode, which accepts strings and returns a series, using iat to extract the first value:
res = df.groupby('Country')['City'].apply(lambda x: x.mode().iat[0]).reset_index()
print(res)
Country City
0 UK London
1 USA Washington
Try the following:
>>> df.City.mode()
0        London
1    Washington
dtype: object
Or use scipy.stats with a lambda:
from scipy import stats

df.groupby('Country').agg({'City': lambda x: stats.mode(x)[0]})
City
Country
UK London
USA Washington
# df.groupby('Country').agg({'City': lambda x:stats.mode(x)[0]}).reset_index()
However, it also gives a nice count, if you don't want to return only the first value:
>>> df.groupby('Country').agg({'City': lambda x:stats.mode(x)})
City
Country
UK ([London], [2])
USA ([Washington], [2])
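A value_counts-based sketch works too and avoids scipy entirely (assuming any tie between cities may be broken arbitrarily):

res = df.groupby('Country')['City'].agg(lambda x: x.value_counts().index[0]).reset_index()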
