Group by, aggregate, include separate column - python

Here's my data:
foo = pd.DataFrame({
'accnt' : [101, 102, 103, 104, 105, 101, 102, 103, 104, 105],
'gender' : [0, 1 , 0, 1, 0, 0, 1 , 0, 1, 0],
'date' : pd.to_datetime(["2019-01-01 00:10:21", "2019-01-05 00:09:18", "2019-01-05 00:09:30", "2019-02-05 00:05:12", "2019-04-01 00:08:46",
"2019-04-01 00:11:31", "2019-02-06 00:01:39", "2019-01-26 00:15:14", "2019-01-21 00:12:36", "2019-03-01 00:09:31"]),
'value' : [10, 20, 30, 40, 50, 5, 2, 6, 48, 96]
})
Which is:
accnt date gender value
0 101 2019-01-01 00:10:21 0 10
1 102 2019-01-05 00:09:18 1 20
2 103 2019-01-05 00:09:30 0 30
3 104 2019-02-05 00:05:12 1 40
4 105 2019-04-01 00:08:46 0 50
5 101 2019-04-01 00:11:31 0 5
6 102 2019-02-06 00:01:39 1 2
7 103 2019-01-26 00:15:14 0 6
8 104 2019-01-21 00:12:36 1 48
9 105 2019-03-01 00:09:31 0 96
I want to do the following:
- Group by accnt, include gender, take latest date as latest_date, count number of transactions as txn_count; resulting in:
accnt gender latest_date txn_count
101 0 2019-04-01 00:11:31 2
102 1 2019-02-06 00:01:39 2
103 0 2019-01-26 00:15:14 2
104 1 2019-02-05 00:05:12 2
105 0 2019-04-01 00:08:46 2
In R, I can do this using group_by and summarise from dplyr:
foo %>% group_by(accnt) %>%
summarise(gender = last(gender), most_recent_order_date = max(date), order_count = n()) %>% data.frame()
I'm taking last(gender) just to carry it through; since gender is the same across all rows of an accnt, min, max or mean would work equally well.
How can I do the same in python using pandas?
I've tried:
foo.groupby('accnt').agg({'gender' : ['mean'],
'date': ['max'],
'value': ['count']}).rename(columns = {'gender' : "gender",
'date' : "most_recent_order_date",
'value' : "order_count"})
But this leads to an "extra" level of column names. I'd also like to know the best way to include a non-aggregated column like gender in the result.

In R terms, summarise corresponds to agg and mutate corresponds to transform.
The reason you end up with a MultiIndex in the columns is that you passed each function inside a list, which allows things like {'date': ['mean', 'sum']}.
foo.groupby('accnt').agg({'gender' : 'first',
'date': 'max',
'value': 'count'}).rename(columns = {'date' : "most_recent_order_date",
'value' : "order_count"}).reset_index()
Out[727]:
accnt most_recent_order_date order_count gender
0 101 2019-04-01 00:11:31 2 0
1 102 2019-02-06 00:01:39 2 1
2 103 2019-01-26 00:15:14 2 0
3 104 2019-02-05 00:05:12 2 1
4 105 2019-04-01 00:08:46 2 0
An example: here I call two functions on the same column at once, which is why a second level of column labels is needed, so that the output column names are not duplicated.
foo.groupby('accnt').agg({'gender' : ['first','mean']})
Out[728]:
gender
first mean
accnt
101 0 0
102 1 1
103 0 0
104 1 1
105 0 0

Sorry for the late response. Here's a solution I found.
# Pandas Operations
foo = foo.groupby('accnt').agg({'gender' : ['mean'],
'date': ['max'],
'value': ['count']})
# Drop the extra column level created by the list-style agg
foo.columns = foo.columns.droplevel(1)
# Rename original column names
foo.rename( columns = { 'date':'latest_date',
'value':'txn_count'},
inplace=True)
If you'd like to include an additional non-aggregated column, you can simply append a new column to the grouped foo dataframe.
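Another option worth noting (a minimal sketch, assuming pandas 0.25 or newer): named aggregation produces flat column names directly and carries the constant gender column through with 'first', so no droplevel or rename step is needed.
out = (foo.groupby('accnt', as_index=False)
          .agg(gender=('gender', 'first'),
               latest_date=('date', 'max'),
               txn_count=('value', 'count')))
print(out)
This mirrors the dplyr summarise call almost one-to-one.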

Related

Filtering a dataframe based on one column with a different type than another column

I have the following problem
import pandas as pd
data = {
"ID": [420, 380, 390, 540, 520, 50, 22],
"duration": [50, 40, 45,33,19,1,3],
"next":["390;50","880;222" ,"520;50" ,"380;111" ,"810;111" ,"22;888" ,"11" ]
}
#load data into a DataFrame object:
df = pd.DataFrame(data)
print(df)
As you can see I have
ID duration next
0 420 50 390;50
1 380 40 880;222
2 390 45 520;50
3 540 33 380;111
4 520 19 810;111
5 50 1 22;888
6 22 3 11
Things to notice:
ID type is int
next is a string of numbers separated by ; when there is more than one number
I would like to keep only the rows whose next values do not appear anywhere in ID
For example in this case
420 has a follow-up in both 390 and 50, which are in ID, so it is dropped
380 has next values 880 and 222, neither of which is in ID, so this one is kept
540 has next values 380 and 111; while 111 is not in ID, 380 is, so not this one
the same goes for 50
In the end I want to get
1 380 40 880;222
4 520 19 810;111
6 22 3 11
When next held only one value I used print(df[~df.next.astype(int).isin(df.ID)]), but here isin cannot be applied directly.
How can I do this?
Let's try split, then explode, then an isin check per original row:
# one row per next value, converted to int, keeping the original row index
s = df.next.str.split(';').explode().astype(int)
# drop a row if ANY of its next values is an existing ID
out = df[~s.isin(df['ID']).groupby(level=0).any()]
Out[420]:
ID duration next
1 380 40 880;222
4 520 19 810;111
6 22 3 11
Use a regex with word boundaries for efficiency:
pattern = '|'.join(df['ID'].astype(str))
out = df[~df['next'].str.contains(fr'\b(?:{pattern})\b')]
Output:
ID duration next
1 380 40 880;222
4 520 19 810;111
6 22 3 11
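Another way to think about it (a sketch, not from the original answers): build a set of the IDs as strings and keep the rows whose split next values are disjoint from it.
ids = set(df['ID'].astype(str))
# keep rows where none of the ';'-separated next values is an existing ID
out = df[df['next'].str.split(';').apply(ids.isdisjoint)]
print(out)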

Pandas - normalize Json list

I am trying to normalize a column of a Pandas dataframe whose values are lists of dictionaries (which can be empty or missing).
Example to reproduce
import pandas as pd
bids = pd.Series([[{'price': 606, 'quantity': 28},{'price': 588, 'quantity': 29},
{'price': 513, 'quantity': 33}],[],[{'price': 7143, 'quantity': 15},
{'price': 68, 'quantity': 91},{'price': 6849, 'quantity': 12}]])
data = pd.DataFrame([1,2,3]).rename(columns={0:'id'})
data['bids'] = bids
Desired output
id price quantity
1 606 28
1 588 29
1 513 33
3 7143 15
3 68 91
3 6849 12
Attempt
I'm trying to solve this with pandas json_normalize, following the docs. I'm confused about why none of the attempts below work, and what kind of record_path would fix the problem. All of the following raise errors.
pd.json_normalize(data['bids'])
pd.json_normalize(data['bids'],['price','quantity'])
pd.json_normalize(data['bids'],[['price','quantity']])
Use DataFrame.explode on the bids column, then create a new dataframe from the dictionaries in the exploded column and use DataFrame.join to join it back:
df = data.explode('bids').dropna(subset=['bids']).reset_index(drop=True)
df = df.join(pd.DataFrame(df.pop('bids').tolist()))
Result:
print(df)
id price quantity
0 1 606 28
1 1 588 29
2 1 513 33
3 3 7143 15
4 3 68 91
5 3 6849 12
Adding another approach, using np.repeat and np.concatenate together with json_normalize:
import numpy as np

out = pd.json_normalize(np.concatenate(data['bids']))
out.insert(0, 'id', np.repeat(data['id'], data['bids'].str.len()).to_numpy())
Or you can also use np.hstack as #Shubham mentions instead of np.concatenate:
out = pd.json_normalize(np.hstack(data['bids']))
print(out)
id price quantity
0 1 606 28
1 1 588 29
2 1 513 33
3 3 7143 15
4 3 68 91
5 3 6849 12
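To answer the record_path part of the question directly: json_normalize can also do the whole job if it is given the row dictionaries rather than just the bids column (a sketch; rows whose bids list is empty simply contribute no records).
out = pd.json_normalize(data.to_dict('records'), record_path='bids', meta='id')
out = out[['id', 'price', 'quantity']]
print(out)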

Pandas assign group numbers for each time bin

I have a pandas dataframe that looks like below.
Key Name Val1 Val2 Timestamp
101 A 10 1 01-10-2019 00:20:21
102 A 12 2 01-10-2019 00:20:21
103 B 10 1 01-10-2019 00:20:26
104 C 20 2 01-10-2019 14:40:45
105 B 21 3 02-10-2019 09:04:06
106 D 24 3 02-10-2019 09:04:12
107 A 24 3 02-10-2019 09:04:14
108 E 32 2 02-10-2019 09:04:20
109 A 10 1 02-10-2019 09:04:22
110 B 10 1 02-10-2019 10:40:49
Starting from the earliest timestamp, that is, '01-10-2019 00:20:21', I need to create time bins of 10 seconds each and assign the same group number to all rows whose timestamp falls into the same bin.
The output should look as below.
Key Name Val1 Val2 Timestamp Group
101 A 10 1 01-10-2019 00:20:21 1
102 A 12 2 01-10-2019 00:20:21 1
103 B 10 1 01-10-2019 00:20:26 1
104 C 20 2 01-10-2019 14:40:45 2
105 B 21 3 02-10-2019 09:04:06 3
106 D 24 3 02-10-2019 09:04:12 4
107 A 24 3 02-10-2019 09:04:14 4
108 E 32 2 02-10-2019 09:04:20 4
109 A 10 1 02-10-2019 09:04:22 5
110 B 10 1 02-10-2019 10:40:49 6
First time bin: '01-10-2019 00:20:21' to '01-10-2019 00:20:30',
Next time bin: '01-10-2019 00:20:31' to '01-10-2019 00:20:40',
Next time bin: '01-10-2019 00:20:41' to '01-10-2019 00:20:50',
Next time bin: '01-10-2019 00:20:51' to '01-10-2019 00:21:00',
Next time bin: '01-10-2019 00:21:01' to '01-10-2019 00:21:10'
and so on. Based on these time bins, 'Group' is assigned to each row.
It is not mandatory to have consecutive group numbers (if a time bin is not present, it's OK to skip that group number).
I have generated this using a for loop, but it takes a lot of time when the data is spread across months.
Please let me know if this can be done as a pandas operation in a single line of code. Thanks.
Here is an example without a loop. The main idea is to round seconds down to specific ranges and use ngroup().
02-10-2019 09:04:12 -> 02-10-2019 09:04:11
02-10-2019 09:04:14 -> 02-10-2019 09:04:11
02-10-2019 09:04:20 -> 02-10-2019 09:04:11
02-10-2019 09:04:21 -> 02-10-2019 09:04:21
02-10-2019 09:04:25 -> 02-10-2019 09:04:21
...
I use a new temporary column to hold the rounded timestamps.
df = pd.DataFrame.from_dict({
'Name': ('A', 'A', 'B', 'C', 'B', 'D', 'A', 'E', 'A', 'B'),
'Val1': (1, 2, 1, 2, 3, 3, 3, 2, 1, 1),
'Timestamp': (
'2019-01-10 00:20:21',
'2019-01-10 00:20:21',
'2019-01-10 00:20:26',
'2019-01-10 14:40:45',
'2019-02-10 09:04:06',
'2019-02-10 09:04:12',
'2019-02-10 09:04:14',
'2019-02-10 09:04:20',
'2019-02-10 09:04:22',
'2019-02-10 10:40:49',
)
})
# convert str to Timestamp
df['Timestamp'] = pd.to_datetime(df['Timestamp'])
# your specific ranges. customize if you need
def sec_to_group(x):
    if 0 <= x.second <= 10:
        x = x.replace(second=0)
    elif 11 <= x.second <= 20:
        x = x.replace(second=11)
    elif 21 <= x.second <= 30:
        x = x.replace(second=21)
    elif 31 <= x.second <= 40:
        x = x.replace(second=31)
    elif 41 <= x.second <= 50:
        x = x.replace(second=41)
    elif 51 <= x.second <= 59:
        x = x.replace(second=51)
    return x
# new column formated_dt(temporary) with formatted seconds
df['formated_dt'] = df['Timestamp'].apply(sec_to_group)
# group by new column + ngroup() and drop
df['Group'] = df.groupby('formated_dt').ngroup()
df.drop(columns=['formated_dt'], inplace=True)
print(df)
Output:
# Name Val1 Timestamp Group
# 0 A 1 2019-01-10 00:20:21 0 <- ngroup() calculates from 0
# 1 A 2 2019-01-10 00:20:21 0
# 2 B 1 2019-01-10 00:20:26 0
# 3 C 2 2019-01-10 14:40:45 1
# 4 B 3 2019-02-10 09:04:06 2
# ....
You can also try pd.Grouper (the replacement for the deprecated TimeGrouper) or resample.
Hope this helps.
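Along the same lines, here is a loop-free sketch that follows the question's binning more directly: floor the number of seconds elapsed since the earliest timestamp into 10-second buckets. Group numbers start at 1 and may skip values, which the question allows.
# assumes df['Timestamp'] is already a datetime64 column
origin = df['Timestamp'].min()
elapsed = (df['Timestamp'] - origin).dt.total_seconds()
df['Group'] = (elapsed // 10).astype(int) + 1
Wrapping the bucket numbers in groupby(...).ngroup() + 1 would make them consecutive if that is ever needed.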

Numpy: Use vectorization for loop while referring to previous row value?

I have the following dataframe, for which I want to create a column named 'Value' using fast numpy operations while referring to the previous row's value in the same column.
import pandas as pd
import numpy as np
df = pd.DataFrame(
{
"Product": ["A", "A", "A", "A", "B", "B", "B", "C", "C"],
"Inbound": [115, 220, 200, 402, 313, 434, 321, 343, 120],
"Outbound": [10, 20, 24, 52, 40, 12, 43, 23, 16],
"Is First?": ["Yes", "No", "No", "No", "Yes", "No", "No", "Yes", "No"],
}
)
The desired output, where Value is the column to be computed:
Product Inbound Outbound Is First? Value
0 A 115 10 Yes 125
1 A 220 20 No 105
2 A 200 24 No 81
3 A 402 52 No 29
4 B 313 40 Yes 353
5 B 434 12 No 341
6 B 321 43 No 298
7 C 343 23 Yes 366
8 C 120 16 No 350
The formula for Value column in pseudocode is:
if ['Is First?'] = 'Yes' then [Value] = [Inbound] + [Outbound]
else [Value] = [Previous Value] - [Outbound]
The straightforward way of creating the Value column is the for loop below, which refers to the previous row's value. But since I will be applying this over a giant dataset, I want to use a vectorized numpy approach instead (I have not been able to make this work with shift).
for i in range(len(df)):
    if df.loc[i, "Is First?"] == "Yes":
        df.loc[i, "Value"] = df.loc[i, "Inbound"] + df.loc[i, "Outbound"]
    else:
        df.loc[i, "Value"] = df.loc[i - 1, "Value"] - df.loc[i, "Outbound"]
One way:
You may use np.subtract.accumulate with transform
s = df['Is First?'].eq('Yes').cumsum()
df['value'] = ((df.Inbound + df.Outbound).where(df['Is First?'].eq('Yes'), df.Outbound)
.groupby(s)
.transform(np.subtract.accumulate))
Out[1749]:
Product Inbound Outbound Is First? value
0 A 115 10 Yes 125
1 A 220 20 No 105
2 A 200 24 No 81
3 A 402 52 No 29
4 B 313 40 Yes 353
5 B 434 12 No 341
6 B 321 43 No 298
7 C 343 23 Yes 366
8 C 120 16 No 350
Another way:
Assign the value for the 'Yes' rows, create a group id s for groupby, take a shifted cumulative sum of Outbound within each group, subtract it from each group's 'Yes' value, and finally use the result to fill the NaNs.
df['value'] = (df.Inbound + df.Outbound).where(df['Is First?'].eq('Yes'))
s = df['Is First?'].eq('Yes').cumsum()
s1 = df.value.ffill() - df.Outbound.shift(-1).groupby(s).cumsum().shift()
df['value'] = df.value.fillna(s1)
Out[1671]:
Product Inbound Outbound Is First? value
0 A 115 10 Yes 125.0
1 A 220 20 No 105.0
2 A 200 24 No 81.0
3 A 402 52 No 29.0
4 B 313 40 Yes 353.0
5 B 434 12 No 341.0
6 B 321 43 No 298.0
7 C 343 23 Yes 366.0
8 C 120 16 No 350.0
This is not a trivial task; the difficulty lies in the consecutive 'No' rows. It's necessary to group each 'Yes' row together with the 'No' rows that follow it. The code below should do it:
col_sum = df.Inbound+df.Outbound
mask_no = df['Is First?'].eq('No')
mask_yes = df['Is First?'].eq('Yes')
consec_no = mask_yes.cumsum()
result = col_sum.groupby(consec_no).transform('first')-df['Outbound'].where(mask_no,0).groupby(consec_no).cumsum()
Use:
is_first = df['Is First?'].eq('Yes')
df.loc[is_first, 'Value'] = df['Inbound'] + df['Outbound']
# subtract a running sum of Outbound that restarts at every 'Yes' row
df.loc[~is_first, 'Value'] = df['Value'].ffill() - df['Outbound'].where(~is_first, 0).groupby(is_first.cumsum()).cumsum()
Annotated numpy code:
## 1. line up values to sum
ob = -df["Outbound"].values
# get yes indices
fi, = np.where(df["Is First?"].values == "Yes")
# insert yes formula at yes positions
ob[fi] = df["Inbound"].values[fi] - ob[fi]
## 2. calculate block sums and subtract each from the
## first element of the **next** block
ob[fi[1:]] -= np.add.reduceat(ob,fi)[:-1]
# now simply taking the cumsum will reset after each block
df["Value"] = ob.cumsum()
Result:
Product Inbound Outbound Is First? Value
0 A 115 10 Yes 125
1 A 220 20 No 105
2 A 200 24 No 81
3 A 402 52 No 29
4 B 313 40 Yes 353
5 B 434 12 No 341
6 B 321 43 No 298
7 C 343 23 Yes 366
8 C 120 16 No 350
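For completeness, the same reset-per-group running total can be written as a single signed cumsum grouped on the cumulative count of 'Yes' rows (a sketch of the common idea behind the answers above):
is_first = df['Is First?'].eq('Yes')
# start each group at Inbound + Outbound, then subtract Outbound on the rows that follow
signed = (df['Inbound'] + df['Outbound']).where(is_first, -df['Outbound'])
df['Value'] = signed.groupby(is_first.cumsum()).cumsum()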

Filter pandas dataframe based on another column

This might be a basic question, but I have not been able to find a solution. I have two dataframes with identical rows and columns, called Volumes and Prices, which look like this:
Volumes
Index ProductA ProductB ProductC ProductD Limit
0 100 300 400 78 100
1 110 370 20 30 100
2 90 320 200 121 100
3 150 320 410 99 100
....
Prices
Index ProductA ProductB ProductC ProductD Limit
0 50 110 30 90 0
1 51 110 29 99 0
2 49 120 25 88 0
3 51 110 22 96 0
....
I want to assign 0 to the cells of the Prices dataframe whose corresponding Volumes are less than the value in the Limit column
so, the ideal output would be
Prices
Index ProductA ProductB ProductC ProductD Limit
0 50 110 30 0 0
1 51 110 0 0 0
2 0 120 25 88 0
3 51 110 22 0 0
....
I tried
import pandas as pd
import numpy as np
d_price = {'ProductA' : [50, 51, 49, 51], 'ProductB' : [110,110,120,110],
'ProductC' : [30,29,25,22],'ProductD' : [90,99,88,96], 'Limit': [0]*4}
d_volume = {'ProductA' : [100,110,90,150], 'ProductB' : [300,370,320,320],
'ProductC' : [400,20,200,410],'ProductD' : [78,30,121,99], 'Limit': [100]*4}
Prices = pd.DataFrame(d_price)
Volumes = pd.DataFrame(d_volume)
Prices[Volumes > Volumes.Limit]=0
but I do not obtain any changes to the Prices dataframe. Obviously I'm having a hard time understanding boolean slicing; any help would be great.
The problem is in
Prices[Volumes > Volumes.Limit]=0
Comparing a DataFrame with a Series aligns the Series on the column labels, so nothing matches and nothing changes. The check has to be made row by row, and note that the desired output zeroes prices where the volume is below the limit, so use apply like the following:
Prices[Volumes.apply(lambda x: x < x.Limit, axis=1)] = 0
You can use a boolean mask to solve this problem. I am not an expert either, but this solution does what you want to do:
# the 1-D array is broadcast across the columns, which works here because Limit is the same in every row
test = (Volumes.loc[:, 'ProductA':'ProductD'] >= Volumes.Limit.values)
final = Prices[test].fillna(0)
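A row-aligned comparison that needs neither apply nor the removed .ix accessor (a sketch): DataFrame.lt with axis=0 compares every column of Volumes against the per-row Limit, and the resulting mask zeroes the matching Prices cells even when Limit differs from row to row.
mask = Volumes.lt(Volumes['Limit'], axis=0)   # True where volume < that row's limit
Prices[mask] = 0
print(Prices)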
