I have a scenario where I am trying to filter a dataframe by a particular value, and count how many times another identifier is present. I'm then turning that into a dictionary and mapping back to the dataframe. The issue I am having is that the resulting dictionary cannot be mapped back to the dataframe because I'm introducing complexity to the dictionary (extra keys?), and I don't know how to avoid it.
I guess the simple question is: how can I use value_counts on my CELL_ID column, filter by another column called Grid_Type, and map the results back to all rows for each CELL_ID?
What I'm doing so far
This works to count how many rows contain each CELL_ID, but does NOT allow me to filter by Grid_Type:
z = df['CELL_ID'].value_counts()
z1 = z.to_dict()
df['CELL_CNT'] = df['CELL_ID'].map(z1)
The dictionary output from this simple example looks like:
7015988: 1, 7122961: 1, 6976792: 1
My bad code
This is what I've been working on so far, where I want to be able to return the count filtered by the Grid_Type. E.g. I want to count the number of times I see "Spot" within each CELL_ID.
z = df[df.Grid_Type == 'Spot'].groupby('CELL_ID')['Grid_Type'].value_counts()
z1 = z.to_dict()
df['SPOT_CNT'] = df['CELL_ID'].map(z1)
It seems that in the filtered example the dictionary returns a more complex result whose keys also include the Grid_Type, whereas I only want the counts mapped against the CELL_ID. E.g. dictionary response:
(7133691, 'Spot'): 3, (7133692, 'Spot'): 3, (7133693, 'Spot'): 2
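Presumably what I need is a dictionary keyed by CELL_ID alone, counting only the 'Spot' rows, something along these lines (untested sketch), but I'm not sure it's the right way to go about it:
z = df[df.Grid_Type == 'Spot'].groupby('CELL_ID').size()                 # count of Spot rows per CELL_ID
df['SPOT_CNT'] = df['CELL_ID'].map(z.to_dict()).fillna(0).astype(int)    # 0 where a CELL_ID has no Spot rows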
Example Data
+---------+-----------+
| CELL_ID | Grid_Type |
+---------+-----------+
| 001 | Spot |
| 001 | Square |
| 001 | Spot |
| 001 | Square |
| 001 | Square |
| 002 | Spot |
| 002 | Square |
| 002 | Square |
| 003 | Square |
| 003 | Spot |
| 003 | Spot |
| 003 | Spot |
+---------+-----------+
Desired Outcome
+---------+-----------+----------+
| CELL_ID | Grid_Type | SPOT_CNT |
+---------+-----------+----------+
| 001 | Spot | 2 |
| 001 | Square | 2 |
| 001 | Spot | 2 |
| 001 | Square | 2 |
| 001 | Square | 2 |
| 002 | Spot | 1 |
| 002 | Square | 1 |
| 002 | Square | 1 |
| 003 | Square | 3 |
| 003 | Spot | 3 |
| 003 | Spot | 3 |
| 003 | Spot | 3 |
+---------+-----------+----------+
Thanks for any help you might be able to offer!
df = pd.read_csv('spot.txt', sep=r"[ ]{1,}", engine='python', dtype='object')
print(df)
CELL_ID Grid_Type
0 001 Spot
1 001 Square
2 001 Spot
3 001 Square
4 001 Square
5 002 Spot
6 002 Square
7 002 Square
8 003 Square
9 003 Spot
10 003 Spot
11 003 Spot
df_gb = df['Grid_Type'].groupby([df['CELL_ID']]).value_counts()
print(df_gb)
CELL_ID Grid_Type
001 Square 3
Spot 2
002 Square 2
Spot 1
003 Spot 3
Square 1
Name: Grid_Type, dtype: int64
df_gb_dict = df_gb.to_dict()
count_list = []
for idx, row in df.iterrows():
    for k, v in df_gb_dict.items():
        if k[0] == row['CELL_ID'] and k[1] == row['Grid_Type'] and row['Grid_Type'] == 'Spot':
            count_list.append([k[0], k[1], v])
        if k[0] == row['CELL_ID'] and k[1] == row['Grid_Type'] and row['Grid_Type'] == 'Square':
            count_list.append([k[0], k[1], df_gb_dict[(row['CELL_ID'], 'Spot')]])
new_df = pd.DataFrame(count_list, columns=['CELL_ID', 'Grid_Type', 'SPOT_CNT'])
new_df.sort_values(by='CELL_ID', inplace=True)
new_df = new_df.reset_index(drop=True)
print(new_df)
CELL_ID Grid_Type SPOT_CNT
0 001 Spot 2
1 001 Square 2
2 001 Spot 2
3 001 Square 2
4 001 Square 2
5 002 Spot 1
6 002 Square 1
7 002 Square 1
8 003 Square 3
9 003 Spot 3
10 003 Spot 3
11 003 Spot 3
Seems you have an answer, but I would approach this problem with transform():
# set it up
df = pd.read_clipboard()
print(df)
CELL_ID Grid_Type
0 1 Spot
1 1 Square
2 1 Spot
3 1 Square
4 1 Square
5 2 Spot
6 2 Square
7 2 Square
8 3 Square
9 3 Spot
10 3 Spot
11 3 Spot
df['SPOT_CNT'] = df.groupby('CELL_ID')['Grid_Type'].transform(lambda x: sum(x == 'Spot'))
print(df)
CELL_ID Grid_Type SPOT_CNT
0 1 Spot 2
1 1 Square 2
2 1 Spot 2
3 1 Square 2
4 1 Square 2
5 2 Spot 1
6 2 Square 1
7 2 Square 1
8 3 Square 3
9 3 Spot 3
10 3 Spot 3
11 3 Spot 3
Inside the lambda function:
- x == 'Spot' returns a boolean Series, True where the value is 'Spot'
- for each group, sum() adds up the True values
Lastly, transform, as per the docs, behaves like so:
DataFrame.transform(self, func, axis=0, *args, **kwargs) → 'DataFrame'[source]
"Call func on self producing a DataFrame with transformed values."
"Produced DataFrame will have same axis length as self." <----
...
Hope this is helpful.
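As a side note, the same count can be written with vectorized boolean operations instead of Python's built-in sum(), which should give the same result and is usually faster on large groups; a sketch using the same df:
# mark Spot rows, sum the True values per CELL_ID, broadcast back to every row
df['SPOT_CNT'] = df['Grid_Type'].eq('Spot').groupby(df['CELL_ID']).transform('sum').astype(int)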
Related
Problem:
I have a DataFrame like so:
import pandas as pd
df = pd.DataFrame({
    "name": ["john","jim","eric","jim","john","jim","jim","eric","eric","john"],
    "category": ["a","b","c","b","a","b","c","c","a","c"],
    "amount": [100,200,13,23,40,2,43,92,83,1]
})
name | category | amount
----------------------------
0 john | a | 100
1 jim | b | 200
2 eric | c | 13
3 jim | b | 23
4 john | a | 40
5 jim | b | 2
6 jim | c | 43
7 eric | c | 92
8 eric | a | 83
9 john | c | 1
I would like to add two new columns: first, the total amount for the relevant category for the name of the row (e.g. the value in row 0 would be 140, because john has a total of 100 + 40 in category a). Second, the count of the name and category combinations that are being summed in the first new column (e.g. the row 0 value would be 2).
Desired output:
The output I'm looking for here looks like this:
name | category | amount | sum_for_category | count_for_category
------------------------------------------------------------------------
0 john | a | 100 | 140 | 2
1 jim | b | 200 | 225 | 3
2 eric | c | 13 | 105 | 2
3 jim | b | 23 | 225 | 3
4 john | a | 40 | 140 | 2
5 jim | b | 2 | 225 | 3
6 jim | c | 43 | 43 | 1
7 eric | c | 92 | 105 | 2
8 eric | a | 83 | 83 | 1
9 john | c | 1 | 1 | 1
I don't want to group the data by the features because I want to keep the same number of rows. I just want to tag on the desired value for each row.
Best I could do:
I can't find a good way to do this. The best I've been able to come up with is the following:
names = df["name"].unique()
categories = df["category"].unique()
sum_for_category = {i: {
    j: df.loc[(df["name"]==i) & (df["category"]==j)]["amount"].sum() for j in categories
} for i in names}
df["sum_for_category"] = df.apply(lambda x: sum_for_category[x["name"]][x["category"]], axis=1)
count_for_category = {i: {
    j: df.loc[(df["name"]==i) & (df["category"]==j)]["amount"].count() for j in categories
} for i in names}
df["count_for_category"] = df.apply(lambda x: count_for_category[x["name"]][x["category"]], axis=1)
But this is extremely clunky and slow; far too slow to be viable on my actual dataset (roughly 700,000 rows x 10 columns). I'm sure there's a better and faster way to do this... Many thanks in advance.
You need two groupby.transform:
g = df.groupby(['name', 'category'])['amount']
df['sum_for_category'] = g.transform('sum')
df['count_for_category'] = g.transform('size')
output:
name category amount sum_for_category count_for_category
0 john a 100 140 2
1 jim b 200 225 3
2 eric c 13 105 2
3 jim b 23 225 3
4 john a 40 140 2
5 jim b 2 225 3
6 jim c 43 43 1
7 eric c 92 105 2
8 eric a 83 83 1
9 john c 1 1 1
Another possible solution:
g = df.groupby(['name', 'category']).amount.agg(['sum','count']).reset_index()
df.merge(g, on = ['name', 'category'], how = 'left')
Output:
name category amount sum count
0 john a 100 140 2
1 jim b 200 225 3
2 eric c 13 105 2
3 jim b 23 225 3
4 john a 40 140 2
5 jim b 2 225 3
6 jim c 43 43 1
7 eric c 92 105 2
8 eric a 83 83 1
9 john c 1 1 1
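If the merged columns should carry the names from the desired output rather than the generic sum/count, the aggregation can be renamed inline with named aggregation (pandas 0.25+); a sketch along the same lines:
g = (df.groupby(['name', 'category'])['amount']
       .agg(sum_for_category='sum', count_for_category='count')   # name the aggregated columns directly
       .reset_index())
df = df.merge(g, on=['name', 'category'], how='left')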
import pandas as pd
df = pd.DataFrame({
    "name": ["john","jim","eric","jim","john","jim","jim","eric","eric","john"],
    "category": ["a","b","c","b","a","b","c","c","a","c"],
    "amount": [100,200,13,23,40,2,43,92,83,1]
})
df_Count = df.groupby(['name','category']).count().reset_index().rename({'amount':'Count_For_Category'}, axis=1)
df_Sum = df.groupby(['name','category']).sum().reset_index().rename({'amount':'Sum_For_Category'},axis=1)
df_v2 = pd.merge(df,df_Count[['name','category','Count_For_Category']], left_on=['name','category'], right_on=['name','category'], how='left')
df_v2 = pd.merge(df_v2,df_Sum[['name','category','Sum_For_Category']], left_on=['name','category'], right_on=['name','category'], how='left')
df_v2
This version keeps the code simple and easy to follow: it builds separate count and sum tables with groupby and merges them back onto the original DataFrame.
I am trying to run a simple calculation over the values of each row within a group inside a dataframe, but I'm having trouble with the syntax. I think I'm specifically getting confused about which data object I should return, i.e. a DataFrame vs a Series, etc.
For context, I have a bunch of stock values for each product I am tracking and I want to estimate the number of sales via a custom function which essentially does the following:
# Because stock can go up and down, I'm looking to record the difference
# when the stock is less than the previous stock number from the previous row.
# How do I access each row of the dataframe and then return the series I need?
def get_stock_sold(x):
# Written in pseudo
stock_sold = previous_stock_no - current_stock_no if current_stock_no < previous_stock_no else 0
return pd.Series(stock_sold)
I then have the following dataframe:
# 'Order' is a date in the real dataset.
data = {
    'id' : ['1', '1', '1', '2', '2', '2'],
    'order' : [1, 2, 3, 1, 2, 3],
    'current_stock' : [100, 150, 90, 50, 48, 30]
}
df = pd.DataFrame(data)
df = df.sort_values(by=['id', 'order'])
df['previous_stock'] = df.groupby('id')['current_stock'].shift(1)
I'd like to create a new column (stock_sold) and apply the logic from above to each row within the grouped dataframe object:
df['stock_sold'] = df.groupby('id').apply(get_stock_sold)
Desired output would look as follows:
| id | order | current_stock | previous_stock | stock_sold |
|----|-------|---------------|----------------|------------|
| 1 | 1 | 100 | NaN | 0 |
| | 2 | 150 | 100.0 | 0 |
| | 3 | 90 | 150.0 | 60 |
| 2 | 1 | 50 | NaN | 0 |
| | 2 | 48 | 50.0 | 2 |
| | 3 | 30 | 48 | 18 |
Try:
import numpy as np

df["previous_stock"] = df.groupby("id")["current_stock"].shift()
df["stock_sold"] = np.where(
    df["current_stock"] > df["previous_stock"].fillna(0),
    0,
    df["previous_stock"] - df["current_stock"],
)
print(df)
Prints:
id order current_stock previous_stock stock_sold
0 1 1 100 NaN 0.0
1 1 2 150 100.0 0.0
2 1 3 90 150.0 60.0
3 2 1 50 NaN 0.0
4 2 2 48 50.0 2.0
5 2 3 30 48.0 18.0
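An equivalent way to express the same rule, if you prefer to avoid np.where, is to clip the negative differences at zero (same df as above):
# negative differences mean stock went up, so clip them to 0;
# the first row of each group has NaN previous_stock, which also becomes 0
df["stock_sold"] = (df["previous_stock"] - df["current_stock"]).clip(lower=0).fillna(0)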
I have a dataframe:
df = pd.DataFrame({'No': [123,123,123,523,523,523,765],
                   'Type': ['A','B','C','A','C','D','A'],
                   'Task': ['First','Second','First','Second','Third','First','Fifth'],
                   'Color': ['blue','red','blue','black','red','red','red'],
                   'Price': [10,5,1,12,12,12,18],
                   'Unit': ['E','E','E','E','E','E','E'],
                   'Pers.ID': [45,6,6,43,1,9,2]
                   })
So it looks like this:
df
+-----+------+--------+-------+-------+------+---------+
| No | Type | Task | Color | Price | Unit | Pers.ID |
+-----+------+--------+-------+-------+------+---------+
| 123 | A | First | blue | 10 | E | 45 |
| 123 | B | Second | red | 5 | E | 6 |
| 123 | C | First | blue | 1 | E | 6 |
| 523 | A | Second | black | 12 | E | 43 |
| 523 | C | Third | red | 12 | E | 1 |
| 523 | D | First | red | 12 | E | 9 |
| 765 | A | First | red | 18 | E | 2 |
+-----+------+--------+-------+-------+------+---------+
then I created a pivot table:
piv = pd.pivot_table(df, index=['No','Type','Task'])
Result:
Pers.ID Price
No Type Task
123 A First 45 10
B Second 6 5
C First 6 1
523 A Second 43 12
C Third 1 12
D First 9 12
765 A Fifth 2 18
As you can see, problems are:
multiple columns are gone (Color and Unit)
The order of the columns Price and Pers.ID is not the same as in the original dataframe.
I tried to fix this by executing:
cols = list(df.columns)
piv = pd.pivot_table(df, index=['No','Type','Task'], values = cols)
but the result is the same.
I read other posts but none of them matched my problem in a way that I could use it.
Thank you!
EDIT: desired output
Color Price Unit Pers.ID
No Type Task
123 A First blue 10 E 45
B Second red 5 E 6
C First blue 1 E 6
523 A Second black 12 E 43
C Third red 12 E 1
D First red 12 E 9
765 A Fifth red 18 E 2
I think the problem is that pivot_table's default aggregate function is mean, so string columns are excluded. You need a custom function; the column order also changes, so reindex is necessary:
import numpy as np

f = lambda x: x.sum() if np.issubdtype(x.dtype, np.number) else ', '.join(x)
cols = df.columns[~df.columns.isin(['No','Type','Task'])].tolist()
piv = (pd.pivot_table(df,
                      index=['No','Type','Task'],
                      values=cols,
                      aggfunc=f).reindex(columns=cols))
print (piv)
Color Price Unit Pers.ID
No Type Task
123 A First blue 10 E 45
B Second red 5 E 6
C First blue 1 E 6
523 A Second black 12 E 43
C Third red 12 E 1
D First red 12 E 9
765 A Fifth red 18 E 2
Another solution uses groupby with the same aggregation function; here the ordering is not a problem:
df = (df.groupby(['No','Type','Task'])
        .agg(lambda x: x.sum() if np.issubdtype(x.dtype, np.number) else ', '.join(x)))
print (df)
Color Price Unit Pers.ID
No Type Task
123 A First blue 10 E 45
B Second red 5 E 6
C First blue 1 E 6
523 A Second black 12 E 43
C Third red 12 E 1
D First red 12 E 9
765 A Fifth red 18 E 2
But if you only need to set the first 3 columns as a MultiIndex:
df = df.set_index(['No','Type','Task'])
print (df)
Color Price Unit Pers.ID
No Type Task
123 A First blue 10 E 45
B Second red 5 E 6
C First blue 1 E 6
523 A Second black 12 E 43
C Third red 12 E 1
D First red 12 E 9
765 A Fifth red 18 E 2
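One more note: in this sample every (No, Type, Task) combination is unique, so a custom aggregation function is not strictly necessary; aggfunc='first' keeps the string columns as well. This only holds when there are no duplicate index combinations, and it assumes the original df (not the aggregated one above):
cols = df.columns[~df.columns.isin(['No','Type','Task'])].tolist()
# 'first' works on both numeric and string columns when each group has a single row
piv = pd.pivot_table(df, index=['No','Type','Task'], values=cols, aggfunc='first').reindex(columns=cols)
print (piv)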
I have a DataFrame (df) that looks like the following:
+----------+----+
| dd_mm_yy | id |
+----------+----+
| 01-03-17 | A |
| 01-03-17 | B |
| 01-03-17 | C |
| 01-05-17 | B |
| 01-05-17 | D |
| 01-07-17 | A |
| 01-07-17 | D |
| 01-08-17 | C |
| 01-09-17 | B |
| 01-09-17 | B |
+----------+----+
This the end result i would like to compute:
+----------+----+-----------+
| dd_mm_yy | id | cum_count |
+----------+----+-----------+
| 01-03-17 | A | 1 |
| 01-03-17 | B | 1 |
| 01-03-17 | C | 1 |
| 01-05-17 | B | 2 |
| 01-05-17 | D | 1 |
| 01-07-17 | A | 2 |
| 01-07-17 | D | 2 |
| 01-08-17 | C | 1 |
| 01-09-17 | B | 2 |
| 01-09-17 | B | 3 |
+----------+----+-----------+
Logic
To calculate the cumulative occurrences of values in id, but only within a specified time window of, for example, 4 months; i.e. occurrences more than 4 months old drop out of the count.
To get the cumulative occurrences we can use this: df.groupby('id').cumcount() + 1
Focusing on id = B, we see that the 2nd occurrence of B is 2 months after the first, so cum_count = 2. The next occurrence of B is at 01-09-17; looking back 4 months we only find one other occurrence, so cum_count = 2, etc.
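For reference, the plain all-time cumulative count mentioned above (ignoring the 4-month window) is just:
# all-time cumulative count per id, for comparison only (no time window)
df['cum_count_all_time'] = df.groupby('id').cumcount() + 1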
My approach is to call a helper function from df.groupby('id').transform. I feel this is more complicated and slower than it could be, but it seems to work.
# test data
date id cum_count_desired
2017-03-01 A 1
2017-03-01 B 1
2017-03-01 C 1
2017-05-01 B 2
2017-05-01 D 1
2017-07-01 A 2
2017-07-01 D 2
2017-08-01 C 1
2017-09-01 B 2
2017-09-01 B 3
# preprocessing
df['date'] = pd.to_datetime(df['date'])
df.set_index('date', inplace=True)
# Encode the ID strings to numbers to have a column
# to work with after grouping by ID
df['id_code'] = pd.factorize(df['id'])[0]
# solution
def cumcounter(x):
    y = [x.loc[d - pd.DateOffset(months=4):d].count() for d in x.index]
    gr = x.groupby('date')
    adjust = gr.rank(method='first') - gr.size()
    y = y + adjust  # element-wise add (+= would extend the list instead of adding)
    return y
df['cum_count'] = df.groupby('id')['id_code'].transform(cumcounter)
# output
df[['id', 'id_code', 'cum_count_desired', 'cum_count']]
id id_code cum_count_desired cum_count
date
2017-03-01 A 0 1 1
2017-03-01 B 1 1 1
2017-03-01 C 2 1 1
2017-05-01 B 1 2 2
2017-05-01 D 3 1 1
2017-07-01 A 0 2 2
2017-07-01 D 3 2 2
2017-08-01 C 2 1 1
2017-09-01 B 1 2 2
2017-09-01 B 1 3 3
The need for adjust
If the same ID occurs multiple times on the same day, the slicing approach that I use will overcount each of the same-day IDs, because the date-based slice immediately grabs all of the same-day values when the list comprehension encounters the date on which multiple IDs show up. Fix:
Group the current DataFrame by date.
Rank each row in each date group.
Subtract from these ranks the total number of rows in each date group. This produces a date-indexed Series of ascending negative integers, ending at 0.
Add these non-positive integer adjustments to y.
This only affects one row in the given test data -- the second-last row, because B appears twice on the same day.
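To make the adjustment concrete, this is roughly what it evaluates to for the B group, the only group with a duplicated date (a sketch using the date-indexed df built above):
b = df.loc[df['id'] == 'B', 'id_code']
gr = b.groupby('date')
adjust = gr.rank(method='first') - gr.size()
# adjust is 0 for the single-occurrence dates and -1, 0 for the two 2017-09-01 rows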
Including or excluding the left endpoint of the time interval
To count rows as old as or newer than 4 calendar months ago, i.e., to include the left endpoint of the 4-month time interval, leave this line unchanged:
y = [x.loc[d - pd.DateOffset(months=4):d].count() for d in x.index]
To count rows strictly newer than 4 calendar months ago, i.e., to exclude the left endpoint of the 4-month time interval, use this instead:
y = [x.loc[d - pd.DateOffset(months=4, days=-1):d].count() for d in x.index]
You can extend the groupby with a grouper:
df['cum_count'] = df.groupby(['id', pd.Grouper(freq='4M', key='date')]).cumcount()
Out[48]:
date id cum_count
0 2017-03-01 A 0
1 2017-03-01 B 0
2 2017-03-01 C 0
3 2017-05-01 B 0
4 2017-05-01 D 0
5 2017-07-01 A 0
6 2017-07-01 D 1
7 2017-08-01 C 0
8 2017-09-01 B 0
9 2017-09-01 B 1
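Note that cumcount() is zero-based, so add 1 if the count should start at 1 as in the desired output. Bear in mind also that pd.Grouper(freq='4M') bins dates into fixed 4-month calendar periods rather than looking back a rolling 4 months, so results can differ from the rolling approach above for some data:
df['cum_count'] = df.groupby(['id', pd.Grouper(freq='4M', key='date')]).cumcount() + 1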
We can make use of .apply row-wise to work on a sliced df as well. The slice is based on relativedelta from dateutil.
from dateutil.relativedelta import relativedelta

def get_cum_sum(slice, row):
    if slice.shape[0] == 0:
        return 1
    return slice[slice['id'] == row.id].shape[0]

d = {'dd_mm_yy':['01-03-17','01-03-17','01-03-17','01-05-17','01-05-17','01-07-17','01-07-17','01-08-17','01-09-17','01-09-17'],'id':['A','B','C','B','D','A','D','C','B','B']}
df = pd.DataFrame(data=d)
df['dd_mm_yy'] = pd.to_datetime(df['dd_mm_yy'], format='%d-%m-%y')
df['cum_sum'] = df.apply(lambda current_row: get_cum_sum(df[(df.index <= current_row.name) & (df.dd_mm_yy >= (current_row.dd_mm_yy - relativedelta(months=+4)))], current_row), axis=1)
>>> df
dd_mm_yy id cum_sum
0 2017-03-01 A 1
1 2017-03-01 B 1
2 2017-03-01 C 1
3 2017-05-01 B 2
4 2017-05-01 D 1
5 2017-07-01 A 2
6 2017-07-01 D 2
7 2017-08-01 C 1
8 2017-09-01 B 2
9 2017-09-01 B 3
I considered whether .rolling would be feasible, but months are not a fixed period, so it might not work.
I use the "pandas" package for Python, and I have a question.
I have a DataFrame like this:
| first | last | datr |city|
|Zahir |Petersen|22.11.15|9 |
|Zahir |Petersen|22.11.15|2 |
|Mason |Sellers |10.04.16|4 |
|Gannon |Cline |29.10.15|2 |
|Craig |Sampson |20.04.16|2 |
|Craig |Sampson |20.04.16|4 |
|Cameron |Mathis |09.05.15|6 |
|Adam |Hurley |16.04.16|2 |
|Brock |Vaughan |14.04.16|10 |
|Xanthus |Murray |30.03.15|6 |
|Xanthus |Murray |30.03.15|7 |
|Xanthus |Murray |30.03.15|4 |
|Palmer |Caldwell|31.10.15|2 |
I want to create a pivot_table by the fields ['first', 'last', 'datr'], but display ['first', 'last', 'datr', 'city'] only where the count of records by ['first', 'last', 'datr'] is more than one, like this:
| first | last | datr |city|
|Zahir |Petersen|22.11.15|9 | 2
| | | |2 | 2
|Craig |Sampson |20.04.16|2 | 2
| | | |4 | 2
|Xanthus |Murray |30.03.15|6 | 3
| | | |7 | 3
| | | |4 | 3
UPD.
If I group by three of the four fields, then
df['count'] = df.groupby(['first','last','datr']).transform('count')
works. But if more than one column is left outside the "groupby", this code throws an error. For example, with 4 columns in total ('first', 'last', 'datr', 'city') and 2 columns in the groupby ('first', 'last'), 4 - 2 = 2 columns remain:
In [181]: df['count'] = df.groupby(['first','last']).transform('count')
...
ValueError: Wrong number of items passed 2, placement implies 1
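Selecting a single column before calling transform seems to avoid the error, for example:
# only one column goes through transform, so the result fits into a single new column
df['count'] = df.groupby(['first', 'last'])['datr'].transform('count')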
You can do this with groupby. Group by the three columns (first, last and datr), and then count the number of elements in each group:
In [63]: df['count'] = df.groupby(['first', 'last', 'datr']).transform('count')
In [64]: df
Out[64]:
first last datr city count
0 Zahir Petersen 22.11.15 9 2
1 Zahir Petersen 22.11.15 2 2
2 Mason Sellers 10.04.16 4 1
3 Gannon Cline 29.10.15 2 1
4 Craig Sampson 20.04.16 2 2
5 Craig Sampson 20.04.16 4 2
6 Cameron Mathis 09.05.15 6 1
7 Adam Hurley 16.04.16 2 1
8 Brock Vaughan 14.04.16 10 1
9 Xanthus Murray 30.03.15 6 3
10 Xanthus Murray 30.03.15 7 3
11 Xanthus Murray 30.03.15 4 3
12 Palmer Caldwell 31.10.15 2 1
From there, you can filter the frame:
In [65]: df[df['count'] > 1]
Out[65]:
first last datr city count
0 Zahir Petersen 22.11.15 9 2
1 Zahir Petersen 22.11.15 2 2
4 Craig Sampson 20.04.16 2 2
5 Craig Sampson 20.04.16 4 2
9 Xanthus Murray 30.03.15 6 3
10 Xanthus Murray 30.03.15 7 3
11 Xanthus Murray 30.03.15 4 3
And if you want these columns as the index (as in the example output in your question): df.set_index(['first', 'last', 'datr'])
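For example, combining the filter with set_index:
result = df[df['count'] > 1].set_index(['first', 'last', 'datr'])
print(result)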