Calculate new columns based on other columns' values in a pandas DataFrame

I want to create a new column based on the values of other columns in a pandas DataFrame. My data is about a truck that moves back and forth between a loading and a dumping location. I want to calculate the distance from the current road segment to the end of the road. An example of the data is shown below:
State | segment length |
-----------------------------
Loaded | 20 |
Loaded | 10 |
Loaded | 10 |
Empty | 15 |
Empty | 10 |
Empty | 10 |
Loaded | 30 |
Loaded | 20 |
Loaded | 10 |
So the end of the road is the last record before the State changes. Hence I want to calculate the distance from the end of the road. The final DataFrame will be:
State | segment length | Distance to end
Loaded | 20 | 40
Loaded | 10 | 20
Loaded | 10 | 10
Empty | 15 | 35
Empty | 10 | 20
Empty | 10 | 10
Loaded | 30 | 60
Loaded | 20 | 30
Loaded | 10 | 10
Can anyone help?
Thank you in advance

Use GroupBy.cumsum on the reversed frame (DataFrame.iloc[::-1] swaps the order), grouped by a helper Series that labels consecutive runs of the same State, built with shift and cumsum:
g = df['State'].ne(df['State'].shift()).cumsum()
df['Distance to end'] = df.iloc[::-1].groupby(g)['segment length'].cumsum()
print (df)
State segment length Distance to end
0 Loaded 20 40
1 Loaded 10 20
2 Loaded 10 10
3 Empty 15 35
4 Empty 10 20
5 Empty 10 10
6 Loaded 30 60
7 Loaded 20 30
8 Loaded 10 10
Detail:
print (g)
0 1
1 1
2 1
3 2
4 2
5 2
6 3
7 3
8 3
Name: State, dtype: int32
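For reference, a minimal self-contained sketch of this approach, using the example data from the question (comments added to spell out each step):

import pandas as pd

df = pd.DataFrame({
    'State': ['Loaded'] * 3 + ['Empty'] * 3 + ['Loaded'] * 3,
    'segment length': [20, 10, 10, 15, 10, 10, 30, 20, 10],
})

# A new group starts whenever State differs from the previous row,
# so cumsum labels each consecutive run of the same State.
g = df['State'].ne(df['State'].shift()).cumsum()

# Cumulative sum within each run, computed on the reversed frame;
# pandas aligns the result back to df by index.
df['Distance to end'] = df.iloc[::-1].groupby(g)['segment length'].cumsum()
print(df)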

df['Distance to end'] = (
    df.assign(i=df.State.ne(df.State.shift()).cumsum())
      .assign(s=lambda x: x.groupby(by='i')['segment length'].transform(sum))
      .groupby(by='i')
      .apply(lambda x: x.s.sub(x['segment length'].shift().cumsum().fillna(0)))
      .values
)
State segment length Distance to end
0 Loaded 20 40.0
1 Loaded 10 20.0
2 Loaded 10 10.0
3 Empty 15 35.0
4 Empty 10 20.0
5 Empty 10 10.0
6 Loaded 30 60.0
7 Loaded 20 30.0
8 Loaded 10 10.0
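Note that this variant produces a float column because the shift introduces NaN values before the fillna; if integers are needed, a final cast restores them (a small sketch):

df['Distance to end'] = df['Distance to end'].astype(int)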

Related

Python piecewise linear interpolation across DataFrames in a list

I am trying to apply piecewise linear interpolation. I first tried to use pandas' built-in interpolate function, but it was not working.
Example data is below:
import pandas as pd
import numpy as np
d = {'ID':[5,5,5,5,5,5,5], 'month':[0,3,6,9,12,15,18], 'num':[7,np.nan,5,np.nan,np.nan,5,8]}
tempo = pd.DataFrame(data = d)
d2 = {'ID':[6,6,6,6,6,6,6], 'month':[0,3,6,9,12,15,18], 'num':[5,np.nan,2,np.nan,np.nan,np.nan,7]}
tempo2 = pd.DataFrame(data = d2)
this = []
this.append(tempo)
this.append(tempo2)
The actual data has over 1000 unique IDs, so I filtered each ID into a dataframe and put them into the list.
I am trying to go through all the dataframes in the list and do a piecewise linear interpolation. I tried changing month to the index and using .interpolate(method='index', inplace=True), but it was not working.
The expected output is
ID | month | num
5 | 0 | 7
5 | 3 | 6
5 | 6 | 5
5 | 9 | 5
5 | 12 | 5
5 | 15 | 5
5 | 18 | 8
This needs to be applied across all the dataframes in the list.
Assuming this is a follow-up to your previous question, change the code to:
for i, df in enumerate(this):
    this[i] = (df
        .set_index('month')
        # optional, because of the previous question
        .reindex(range(df['month'].min(), df['month'].max() + 3, 3))
        .interpolate()
        .reset_index()[df.columns]
    )
NB. I simplified the code to remove the groupby, which only works if you have a single group per DataFrame, as you mentioned in the other question.
Output:
[ ID month num
0 5 0 7.0
1 5 3 6.0
2 5 6 5.0
3 5 9 5.0
4 5 12 5.0
5 5 15 5.0
6 5 18 8.0,
ID month num
0 6 0 5.00
1 6 3 3.50
2 6 6 2.00
3 6 9 3.25
4 6 12 4.50
5 6 15 5.75
6 6 18 7.00]
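If the per-ID frames can live in a single DataFrame instead of a list, a groupby-based sketch does the same interpolation in one pass; this assumes the ID/month/num columns from the example above:

import pandas as pd

# Combine the list of per-ID frames into one DataFrame.
all_df = pd.concat(this, ignore_index=True)

# Piecewise linear interpolation of 'num' per ID, using 'month' as the x-axis.
all_df['num'] = (
    all_df.set_index('month')
          .groupby('ID')['num']
          .transform(lambda s: s.interpolate(method='index'))
          .to_numpy()
)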

How to obtain counts and sums for pairs of values in each row of Pandas DataFrame

Problem:
I have a DataFrame like so:
import pandas as pd
df = pd.DataFrame({
    "name": ["john","jim","eric","jim","john","jim","jim","eric","eric","john"],
    "category": ["a","b","c","b","a","b","c","c","a","c"],
    "amount": [100,200,13,23,40,2,43,92,83,1]
})
name | category | amount
----------------------------
0 john | a | 100
1 jim | b | 200
2 eric | c | 13
3 jim | b | 23
4 john | a | 40
5 jim | b | 2
6 jim | c | 43
7 eric | c | 92
8 eric | a | 83
9 john | c | 1
I would like to add two new columns: first, the total amount for the relevant category for the name of the row (e.g. the value in row 0 would be 140, because john has a total of 100 + 40 in the a category); second, the count of the name-and-category combinations being summed in the first new column (e.g. the row 0 value would be 2).
Desired output:
The output I'm looking for here looks like this:
name | category | amount | sum_for_category | count_for_category
------------------------------------------------------------------------
0 john | a | 100 | 140 | 2
1 jim | b | 200 | 225 | 3
2 eric | c | 13 | 105 | 2
3 jim | b | 23 | 225 | 3
4 john | a | 40 | 140 | 2
5 jim | b | 2 | 225 | 3
6 jim | c | 43 | 43 | 1
7 eric | c | 92 | 105 | 2
8 eric | a | 83 | 83 | 1
9 john | c | 1 | 1 | 1
I don't want to group the data by the features because I want to keep the same number of rows. I just want to tag on the desired value for each row.
Best I could do:
I can't find a good way to do this. The best I've been able to come up with is the following:
names = df["name"].unique()
categories = df["category"].unique()
sum_for_category = {i:{
j:df.loc[(df["name"]==i)&(df["category"]==j)]["amount"].sum() for j in categories
} for i in names}
df["sum_for_category"] = df.apply(lambda x: sum_for_category[x["name"]][x["category"]],axis=1)
count_for_category = {i:{
j:df.loc[(df["name"]==i)&(df["category"]==j)]["amount"].count() for j in categories
} for i in names}
df["count_for_category"] = df.apply(lambda x: count_for_category[x["name"]][x["category"]],axis=1)
But this is extremely clunky and slow; far too slow to be viable on my actual dataset (roughly 700,000 rows x 10 columns). I'm sure there's a better and faster way to do this... Many thanks in advance.
You need two groupby.transform:
g = df.groupby(['name', 'category'])['amount']
df['sum_for_category'] = g.transform('sum')
df['count_for_category'] = g.transform('size')
output:
name category amount sum_for_category count_for_category
0 john a 100 140 2
1 jim b 200 225 3
2 eric c 13 105 2
3 jim b 23 225 3
4 john a 40 140 2
5 jim b 2 225 3
6 jim c 43 43 1
7 eric c 92 105 2
8 eric a 83 83 1
9 john c 1 1 1
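The same idea as a non-mutating variant with assign, in case you prefer to keep df untouched (a small sketch):

out = df.assign(
    sum_for_category=df.groupby(['name', 'category'])['amount'].transform('sum'),
    count_for_category=df.groupby(['name', 'category'])['amount'].transform('size'),
)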
Another possible solution:
g = df.groupby(['name', 'category']).amount.agg(['sum','count']).reset_index()
df.merge(g, on = ['name', 'category'], how = 'left')
Output:
name category amount sum count
0 john a 100 140 2
1 jim b 200 225 3
2 eric c 13 105 2
3 jim b 23 225 3
4 john a 40 140 2
5 jim b 2 225 3
6 jim c 43 43 1
7 eric c 92 105 2
8 eric a 83 83 1
9 john c 1 1 1
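To end up with exactly the column names from the desired output, the aggregation can name them directly; a small sketch using named aggregation:

g = (df.groupby(['name', 'category'], as_index=False)
       .agg(sum_for_category=('amount', 'sum'),
            count_for_category=('amount', 'count')))
out = df.merge(g, on=['name', 'category'], how='left')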
Here is a simpler approach that may be easier to follow: compute each aggregation with its own groupby and merge the results back onto the original frame.
import pandas as pd

df = pd.DataFrame({
    "name": ["john","jim","eric","jim","john","jim","jim","eric","eric","john"],
    "category": ["a","b","c","b","a","b","c","c","a","c"],
    "amount": [100,200,13,23,40,2,43,92,83,1]
})

df_Count = df.groupby(['name','category']).count().reset_index().rename({'amount': 'Count_For_Category'}, axis=1)
df_Sum = df.groupby(['name','category']).sum().reset_index().rename({'amount': 'Sum_For_Category'}, axis=1)
df_v2 = pd.merge(df, df_Count[['name','category','Count_For_Category']], on=['name','category'], how='left')
df_v2 = pd.merge(df_v2, df_Sum[['name','category','Sum_For_Category']], on=['name','category'], how='left')
df_v2

Delete the rows of a DataFrame satisfying conditions evaluated against multiple columns

I would like to filter my DataFrame by evaluating some conditions against several columns of the DataFrame. I illustrate what I want to do with the following example:
import pandas as pd

df = {'user': [1, 1, 1, 2, 2, 2],
      'speed': [10, 20, 90, 15, 39, 10],
      'acceleration': [9.8, 29, 5, 4, 7, 3],
      'jerk': [50, 60, 60, 40, 20, -50],
      'mode': ['car', 'car', 'car', 'metro', 'metro', 'metro']}
df = pd.DataFrame.from_dict(df)
df
user speed acceleration jerk mode
0 1 10 9.8 50 car
1 1 20 29.0 60 car
2 1 90 5.0 60 car
3 2 15 4.0 40 metro
4 2 39 7.0 20 metro
5 2 10 3.0 -50 metro
In the given example, I would like to filter the dataframe based on thresholds set against speed, acceleration and jerk columns as in the table below:
+-------+-------+--------------+------+------+
|       | speed | acceleration | jerk        |
|       | max   | max          | min  | max  |
+-------+-------+--------------+------+------+
| car   | 50    | 10           | -100 | 100  |
| metro | 35    | 5            | 60   | -40  |
+-------+-------+--------------+------+------+
So only rows where speed and acceleration are below the max, and jerk is within the min-max range, are kept (i.e. rows not satisfying the stated conditions are deleted).
You can use reindex to align the thresholds with df, and then build the mask:
threshold = threshold.reindex(df['mode'])
threshold = threshold.reset_index(drop=True)
msk = (df.acceleration.lt(threshold['acceleration', 'max'])
       & df.speed.lt(threshold['speed', 'max'])
       & df.jerk.ge(threshold['jerk', 'min'])
       & df.jerk.le(threshold['jerk', 'max']))
df[msk]
Details
Taking this threshold dataframe:
threshold = pd.DataFrame({'s': ['car', 'car', 'metro', 'metro'],
                          'acceleration': [10, 5, 5, 2],
                          'speed': [50, 5, 35, 2],
                          'jerk': [-100, 100, 60, -40]})
threshold = threshold.groupby('s').agg({'acceleration': 'max',
                                        'speed': 'max',
                                        'jerk': ['min', 'max']})
threshold
# acceleration speed jerk
# max max min max
#s
#car 10 50 -100 100
#metro 5 35 -40 60
You can use the 'mode' column to do the reindex:
threshold=threshold.reindex(df['mode'])
# acceleration speed jerk
# max max min max
#mode
#car 10 50 -100 100
#car 10 50 -100 100
#car 10 50 -100 100
#metro 5 35 -40 60
#metro 5 35 -40 60
#metro 5 35 -40 60
threshold = threshold.reset_index(drop=True)
msk = (df.acceleration.lt(threshold['acceleration', 'max'])
       & df.speed.lt(threshold['speed', 'max'])
       & df.jerk.ge(threshold['jerk', 'min'])
       & df.jerk.le(threshold['jerk', 'max']))
df[msk]
# user speed acceleration jerk mode
#0 1 10 9.8 50 car
#3 2 15 4.0 40 metro
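An alternative sketch (not part of the answer above): flatten the MultiIndex threshold columns and merge them onto df via the 'mode' column. This assumes threshold is the grouped frame built at the start of the Details section, before the reindex:

flat = threshold.copy()
flat.columns = ['_'.join(col) for col in flat.columns]  # e.g. 'speed_max', 'jerk_min'

merged = df.merge(flat, left_on='mode', right_index=True)
out = merged[
    (merged['speed'] < merged['speed_max'])
    & (merged['acceleration'] < merged['acceleration_max'])
    & merged['jerk'].between(merged['jerk_min'], merged['jerk_max'])
][df.columns]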
Maybe DataFrame.where is what you're looking for.

How to extend the columns of a pydatatable Frame with a dictionary containing values in lists?

I have created a sample datatable Frame:
import numpy as np
import datatable as dt
from datatable import f

DT_EX = dt.Frame({'recency': ['current','savings','fixex','current','savings','fixed','savings','current'],
                  'amount': [4200,2300,1500,8000,1200,6500,4500,9010],
                  'no_of_pl': [3,2,1,5,1,2,5,4],
                  'default': [True,False,True,False,True,True,True,False]})
and it can be viewed as,
| recency amount no_of_pl default
-- + ------- ------ -------- -------
0 | current 4200 3 1
1 | savings 2300 2 0
2 | fixex 1500 1 1
3 | current 8000 5 0
4 | savings 1200 1 1
5 | fixed 6500 2 1
6 | savings 4500 5 1
7 | current 9010 4 0
[8 rows x 4 columns]
I'm doing some data manipulations as explained in the below steps:
Step 1: Two new columns are added to the datatable:
DT_EX[:, f[:].extend({"total_amount": f.amount * f.no_of_pl,
                      'test_col': f.amount / f.no_of_pl})]
output:
| recency amount no_of_pl default total_amount test_col
-- + ------- ------ -------- ------- ------------ --------
0 | current 4200 3 1 12600 1400
1 | savings 2300 2 0 4600 1150
2 | fixex 1500 1 1 1500 1500
3 | current 8000 5 0 40000 1600
4 | savings 1200 1 1 1200 1200
5 | fixed 6500 2 1 13000 3250
6 | savings 4500 5 1 22500 900
7 | current 9010 4 0 36040 2252.5
[8 rows x 6 columns]
Step 2:
A dictionary is created; note that its values are stored in lists:
test_dict = {'discount': [10,20,30,40,50,60,70,80],
             'charges': [0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8]}
Step 3:
A new datatable Frame is created from the above dict and appended to DT_EX:
dt.cbind(DT_EX, dt.Frame(test_dict))
output:
| recency amount no_of_pl default discount charges
-- + ------- ------ -------- ------- -------- -------
0 | current 4200 3 1 10 0.1
1 | savings 2300 2 0 20 0.2
2 | fixex 1500 1 1 30 0.3
3 | current 8000 5 0 40 0.4
4 | savings 1200 1 1 50 0.5
5 | fixed 6500 2 1 60 0.6
6 | savings 4500 5 1 70 0.7
7 | current 9010 4 0 80 0.8
[8 rows x 6 columns]
Here we can see a datatable with the newly added columns (discount, charges)
Step 4:
Since the extend function can be used to add columns, I tried passing the dictionary test_dict directly:
DT_EX[:, f[:].extend(test_dict)]
Output:
Out[18]:
| recency amount no_of_pl default discount discount.0 discount.1 discount.2 discount.3 discount.4 … charges.2 charges.3 charges.4 charges.5 charges.6
-- + ------- ------ -------- ------- -------- ---------- ---------- ---------- ---------- ---------- --------- --------- --------- --------- ---------
0 | current 4200 3 1 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
1 | savings 2300 2 0 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
2 | fixex 1500 1 1 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
3 | current 8000 5 0 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
4 | savings 1200 1 1 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
5 | fixed 6500 2 1 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
6 | savings 4500 5 1 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
7 | current 9010 4 0 10 20 30 40 50 60 … 0.4 0.5 0.6 0.7 0.8
[8 rows x 20 columns]
Note: in the output, 8 columns are created for each dictionary key (one per list element, each filled in as a constant), so 16 new columns are added in total for discount and charges.
Step 5:
I then thought of creating a dictionary whose values are numpy arrays:
test_dict_1 = {'discount': np.array([10,20,30,40,50,60,70,80]),
               'charges': np.array([0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8])}
I passed test_dict_1 to the extend function:
DT_EX[:, f[:].extend(test_dict_1)]
output:
Out[20]:
| recency amount no_of_pl default discount charges
-- + ------- ------ -------- ------- -------- -------
0 | current 4200 3 1 10 0.1
1 | savings 2300 2 0 20 0.2
2 | fixex 1500 1 1 30 0.3
3 | current 8000 5 0 40 0.4
4 | savings 1200 1 1 50 0.5
5 | fixed 6500 2 1 60 0.6
6 | savings 4500 5 1 70 0.7
7 | current 9010 4 0 80 0.8
[8 rows x 6 columns]
At this step, extend has taken the dictionary and added the new columns to DT_EX, which is the expected output.
So I would like to understand what happened in step 4. Why didn't it take each dictionary key's list of values as a single new column? And why did the step 5 case work?
Could you please share your comments/answers?
You could wrap the dictionary in a Frame constructor to get the desired result:
>>> DT_EX[:, f[:].extend(dt.Frame(test_dict))]
| recency amount no_of_pl default discount charges
-- + ------- ------ -------- ------- -------- -------
0 | current 4200 3 1 10 0.1
1 | savings 2300 2 0 20 0.2
2 | fixex 1500 1 1 30 0.3
3 | current 8000 5 0 40 0.4
4 | savings 1200 1 1 50 0.5
5 | fixed 6500 2 1 60 0.6
6 | savings 4500 5 1 70 0.7
7 | current 9010 4 0 80 0.8
[8 rows x 6 columns]
As to what happens in step 4, the following logic is applied: when we evaluate a dictionary for the DT[] call, we treat it simply as a list of elements, where each item in the list is named by the corresponding key. If an "item" produces multiple columns, then each of the columns gets the same name from the key. Now, in this case each "item" is a list again, and we don't have any special rules for evaluating such lists of primitives. So they end up expanding into a list of columns where each column is a constant.
You are right that the end result looks quite counterintuitive, so we'd probably want to adjust the rules for evaluating lists inside DT[] expressions.
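As the question's step 3 already shows, dt.cbind is another way to attach the dictionary's columns without going through an f-expression; a one-line sketch:

DT_NEW = dt.cbind(DT_EX, dt.Frame(test_dict))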

Pandas calculate and apply weighted rolling average on another column

I am having a hard time figuring out how to get "rolling weights" based off of one of my columns, then factor these weights onto another column.
I've tried groupby.rolling.apply (function) on my data but the main problem is just conceptualizing how I'm going to take a running/rolling average of the column I'm going to turn into weights, and then factor this "window" of weights onto another column that isn't rolled.
I'm also purposely setting min_periods to 1, so you'll notice that the first two rows in each group of the final output "rwavg" mirror the original values.
W is the rolling column to derive the weights from.
B is the column to apply the rolled weights to.
Grouping is only done on column a.
df is already sorted by a and yr.
def wavg(w, x):
    return (x * w).sum() / w.sum()

n = df.groupby(['a1'])[['w']].rolling(window=3, min_periods=1).apply(lambda x: wavg(df['w'], df['b']))
Input:
id | yr | a | b | w
---------------------------------
0 | 1990 | a1 | 50 | 3000
1 | 1991 | a1 | 40 | 2000
2 | 1992 | a1 | 10 | 1000
3 | 1993 | a1 | 20 | 8000
4 | 1990 | b1 | 10 | 500
5 | 1991 | b1 | 20 | 1000
6 | 1992 | b1 | 30 | 500
7 | 1993 | b1 | 40 | 4000
Desired output:
id | yr | a | b | rwavg
---------------------------------
0 1990 a1 50 50
1 1991 a1 40 40
2 1992 a1 10 39.96
3 1993 a1 20 22.72
4 1990 b1 10 10
5 1991 b1 20 20
6 1992 b1 30 20
7 1993 b1 40 35.45
apply with rolling usually has some weird behavior
df['Weight']=df.b*df.w
g=df.groupby(['a']).rolling(window=3,min_periods=1)
g['Weight'].sum()/g['w'].sum()
df['rwavg']=(g['Weight'].sum()/g['w'].sum()).values
Out[277]:
a
a1 0 50.000000
1 46.000000
2 40.000000
3 22.727273
b1 4 10.000000
5 16.666667
6 20.000000
7 35.454545
dtype: float64
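For completeness, a self-contained sketch of this rolling weighted average on the question's data, dropping the helper column afterwards (column names taken from the question):

import pandas as pd

df = pd.DataFrame({
    'id': [0, 1, 2, 3, 4, 5, 6, 7],
    'yr': [1990, 1991, 1992, 1993, 1990, 1991, 1992, 1993],
    'a':  ['a1', 'a1', 'a1', 'a1', 'b1', 'b1', 'b1', 'b1'],
    'b':  [50, 40, 10, 20, 10, 20, 30, 40],
    'w':  [3000, 2000, 1000, 8000, 500, 1000, 500, 4000],
})

# Weighted rolling average of b with weights w within each group of a:
# rolling sum of (b * w) divided by the rolling sum of w.
df['bw'] = df['b'] * df['w']
g = df.groupby('a').rolling(window=3, min_periods=1)
df['rwavg'] = (g['bw'].sum() / g['w'].sum()).to_numpy()
df = df.drop(columns='bw')
print(df)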
