Get max value in previous rows for matching rows [duplicate]

Say I have a dataframe that records temperature measurements for various sensors:
import pandas as pd

df = pd.DataFrame({'sensor': ['A', 'C', 'A', 'C', 'B', 'B', 'C', 'A', 'A', 'A'],
                   'temperature': [4.8, 12.5, 25.1, 16.9, 20.4, 15.7, 7.7, 5.5, 27.4, 17.7]})
I would like to add a column max_prev_temp that will show the previous maximum temperature for the corresponding sensor. So this works:
df["max_prev_temp"] = df.apply(
lambda row: df[df["sensor"] == row["sensor"]].loc[: row.name, "temperature"].max(),
axis=1,
)
It returns:
sensor temperature max_prev_temp
0 A 4.8 4.8
1 C 12.5 12.5
2 A 25.1 25.1
3 C 16.9 16.9
4 B 20.4 20.4
5 B 15.7 20.4
6 C 7.7 16.9
7 A 5.5 25.1
8 A 27.4 27.4
9 A 17.7 27.4
Problem is: my actual data set contains over 2 million rows, so this is excruciatingly slow (it will probably take about 2 hours). I understand that rolling is a better method, but I don't see how to use it for this specific case.
Any hint would be appreciated.

Use Series.expanding per group, then remove the first index level with Series.droplevel:
df["max_prev_temp"] = df.groupby('sensor')["temperature"].expanding().max().droplevel(0)
print (df)
sensor temperature max_prev_temp
0 A 4.8 4.8
1 C 12.5 12.5
2 A 25.1 25.1
3 C 16.9 16.9
4 B 20.4 20.4
5 B 15.7 20.4
6 C 7.7 16.9
7 A 5.5 25.1
8 A 27.4 27.4
9 A 17.7 27.4

Use groupby.cummax:
df['max_prev_temp'] = df.groupby('sensor')['temperature'].cummax()
output:
sensor temperature max_prev_temp
0 A 4.8 4.8
1 C 12.5 12.5
2 A 25.1 25.1
3 C 16.9 16.9
4 B 20.4 20.4
5 B 15.7 20.4
6 C 7.7 16.9
7 A 5.5 25.1
8 A 27.4 27.4
9 A 17.7 27.4
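Both suggestions replace the row-wise apply with a single grouped, vectorized pass, so they scale to millions of rows. A rough way to compare them on synthetic data (a sketch only; the sensor labels and sizes are made up, and timings depend on machine and pandas version):
import timeit

import numpy as np
import pandas as pd

# Synthetic data: 2 million rows spread across 1,000 sensors
rng = np.random.default_rng(0)
big = pd.DataFrame({
    "sensor": rng.integers(0, 1_000, size=2_000_000),
    "temperature": rng.normal(20.0, 5.0, size=2_000_000),
})

# Running maximum per sensor with cummax (index stays aligned with the frame)
t_cummax = timeit.timeit(
    lambda: big.groupby("sensor")["temperature"].cummax(), number=5)

# Same result via expanding().max(), which needs droplevel(0) to realign the index
t_expanding = timeit.timeit(
    lambda: big.groupby("sensor")["temperature"].expanding().max().droplevel(0), number=5)

print(f"cummax:    {t_cummax / 5:.3f} s per run")
print(f"expanding: {t_expanding / 5:.3f} s per run")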

Related

Checking and finding duplicate values

I have a table:
Key   Sequence  Longitude  Latitude
1001  1         18.2       14.2
1001  2         18.2       14.2
1001  3         18.2       14.2
2001  1         25.6       22.8
2001  2         25.6       22.8
2001  3         25.6       22.8
5004  1         25.6       22.8
5004  2         25.6       22.8
5004  3         25.6       22.8
6895  1         36.2       17.4
6895  2         36.2       17.4
6895  3         36.2       17.4
6650  1         18.2       14.2
6650  2         18.2       14.2
6650  3         18.2       14.2
From the table I need to find the keys (different keys) that have duplicate longitude and latitude. If any sequence of a key has coordinates that also appear under another key, all sequences of that key should be marked as duplicate (sequences within the same key are not compared against each other).
The output table should be:
Key   Sequence  Longitude  Latitude  Duplicate
1001  1         18.2       14.2      No
1001  2         18.2       14.2      No
1001  3         18.2       14.2      No
2001  1         25.6       22.8      No
2001  2         25.6       22.8      No
2001  3         25.6       22.8      No
5004  1         25.6       22.8      Yes
5004  2         25.6       22.8      Yes
5004  3         25.6       22.8      Yes
6895  1         36.2       17.4      No
6895  2         36.2       17.4      No
6895  3         36.2       17.4      No
6650  1         18.2       14.2      Yes
6650  2         18.2       14.2      Yes
6650  3         18.2       14.2      Yes
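For reference, the sample table can be rebuilt as a small DataFrame to test the answers below (a sketch; the values simply mirror the table above):
import pandas as pd

df = pd.DataFrame({
    'Key':       [1001]*3 + [2001]*3 + [5004]*3 + [6895]*3 + [6650]*3,
    'Sequence':  [1, 2, 3] * 5,
    'Longitude': [18.2]*3 + [25.6]*3 + [25.6]*3 + [36.2]*3 + [18.2]*3,
    'Latitude':  [14.2]*3 + [22.8]*3 + [22.8]*3 + [17.4]*3 + [14.2]*3,
})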
First remove the duplicates per key, then check what is still duplicated between keys:
import numpy as np

# get one example row for each duplicated coordinate pair
s = (df
     .drop_duplicates(subset=['Key', 'Longitude', 'Latitude'])
     .duplicated(['Longitude', 'Latitude'])
     )
# extract the original keys and mark all matching rows
df['Duplicate'] = np.where(df['Key'].isin(df.loc[s[s].index, 'Key']), 'Yes', 'No')
print(df)
Output:
Key Sequence Longitude Latitude Duplicate
0 1001 1 18.2 14.2 No
1 1001 2 18.2 14.2 No
2 1001 3 18.2 14.2 No
3 2001 1 25.6 22.8 No
4 2001 2 25.6 22.8 No
5 2001 3 25.6 22.8 No
6 5004 1 25.6 22.8 Yes
7 5004 2 25.6 22.8 Yes
8 5004 3 25.6 22.8 Yes
9 6895 1 36.2 17.4 No
10 6895 2 36.2 17.4 No
11 6895 3 36.2 17.4 No
12 6650 1 18.2 14.2 Yes
13 6650 2 18.2 14.2 Yes
14 6650 3 18.2 14.2 Yes
# first drop within-key duplicates, then get the keys whose coordinates repeat across keys
dedup = df.drop_duplicates(subset=['Key', 'Longitude', 'Latitude'])
key = dedup.loc[dedup.duplicated(['Longitude', 'Latitude']), 'Key'].unique()
# then use np.where to assign Yes or No
df['Duplicate'] = np.where(df['Key'].isin(key), 'Yes', 'No')
Note that the within-key duplicates must be dropped first; otherwise a key whose own sequences merely repeat the same coordinates would also be flagged.

AttributeError: 'list' object has no attribute 'assign'

I have this dataframe:
SRC Coup Vint Bal Mar Apr May Jun Jul BondSec
0 JPM 1.5 2021 43.9 5.6 4.9 4.9 5.2 4.4 FNCL
1 JPM 1.5 2020 41.6 6.2 6.0 5.6 5.8 4.8 FNCL
2 JPM 2.0 2021 503.9 7.1 6.3 5.8 6.0 4.9 FNCL
3 JPM 2.0 2020 308.3 9.3 7.8 7.5 7.9 6.6 FNCL
4 JPM 2.5 2021 345.0 8.6 7.8 6.9 6.8 5.6 FNCL
5 JPM 4.5 2010 5.7 21.3 20.0 18.0 17.7 14.6 G2SF
6 JPM 5.0 2019 2.8 39.1 37.6 34.6 30.8 24.2 G2SF
7 JPM 5.0 2018 7.3 39.8 37.1 33.4 30.1 24.2 G2SF
8 JPM 5.0 2010 3.9 23.3 20.0 18.6 17.9 14.6 G2SF
9 JPM 5.0 2009 4.2 22.8 21.2 19.5 18.6 15.4 G2SF
I want to duplicate all the rows that have FNCL as the BondSec, and rename the value of BondSec in those new duplicate rows to FGLMC. I'm able to accomplish half of that with the following code:
if "FGLMC" not in jpm['BondSec']:
is_FNCL = jpm['BondSec'] == "FNCL"
FNCL_try = jpm[is_FNCL]
jpm.append([FNCL_try]*1,ignore_index=True)
But if I instead try to implement the change to the BondSec value in the same line as below:
jpm.append(([FNCL_try]*1).assign(**{'BondSecurity': 'FGLMC'}),ignore_index=True)
I get the following error:
AttributeError: 'list' object has no attribute 'assign'
Additionally, I would like to insert the duplicated rows based on an index condition, not just at the bottom as additional rows. The condition cannot be simply a row position because this will have to work on future files with different numbers of rows. So I would like to insert the duplicated rows at the position where the BondSec column values change from FNCL to FNCI (FNCI is not showing here, but basically it would be right below the last row with FNCL). I'm assuming this could be done with an np.where function call, but I'm not sure how to implement that.
I'll also eventually want to do this same exact process with rows with FNCI as the BondSec value (duplicating them and transforming the BondSec value to FGCI, and inserting at the index position right below the last row with FNCI as the value).
I'd suggest a helper function to handle all your duplications:
def duplicate_and_rename(df, target, value):
    return pd.concat([df, df[df["BondSec"] == target].assign(BondSec=value)])
Then
for target, value in (("FNCL", "FGLMC"), ("FNCI", "FGCI")):
    df = duplicate_and_rename(df, target, value)
Then after all that, you can categorize the BondSec column and use a custom order:
ordering = ["FNCL", "FGLMC", "FNCI", "FGCI", "G2SF"]
df["BondSec"] = pd.Categorical(df["BondSec"], categories=ordering, ordered=True)
df = df.sort_values("BondSec", kind="stable").reset_index(drop=True)
Alternatively, you can use a dictionary for your ordering, as explained in this answer.
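A minimal sketch of that dictionary-based ordering (my reading of the idea, not necessarily the linked answer's exact code; sort_values with key= requires pandas 1.1+):
# Map each BondSec value to a rank and sort by that rank;
# kind="stable" keeps the original order within each BondSec group
order = {"FNCL": 0, "FGLMC": 1, "FNCI": 2, "FGCI": 3, "G2SF": 4}
df = (df
      .sort_values("BondSec", key=lambda s: s.map(order), kind="stable")
      .reset_index(drop=True))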

Transforming yearwise data using pandas

I have a dataframe that looks like this:
Temp
Date
1981-01-01 20.7
1981-01-02 17.9
1981-01-03 18.8
1981-01-04 14.6
1981-01-05 15.8
... ...
1981-12-27 15.5
1981-12-28 13.3
1981-12-29 15.6
1981-12-30 15.2
1981-12-31 17.4
365 rows × 1 columns
And I want to transform it so that it looks like:
1981 1982 1983 1984 1985 1986 1987 1988 1989 1990
0 20.7 17.0 18.4 19.5 13.3 12.9 12.3 15.3 14.3 14.8
1 17.9 15.0 15.0 17.1 15.2 13.8 13.8 14.3 17.4 13.3
2 18.8 13.5 10.9 17.1 13.1 10.6 15.3 13.5 18.5 15.6
3 14.6 15.2 11.4 12.0 12.7 12.6 15.6 15.0 16.8 14.5
4 15.8 13.0 14.8 11.0 14.6 13.7 16.2 13.6 11.5 14.3
... ... ... ... ... ... ... ... ... ... ...
360 15.5 15.3 13.9 12.2 11.5 14.6 16.2 9.5 13.3 14.0
361 13.3 16.3 11.1 12.0 10.8 14.2 14.2 12.9 11.7 13.6
362 15.6 15.8 16.1 12.6 12.0 13.2 14.3 12.9 10.4 13.5
363 15.2 17.7 20.4 16.0 16.3 11.7 13.3 14.8 14.4 15.7
364 17.4 16.3 18.0 16.4 14.4 17.2 16.7 14.1 12.7 13.0
My attempt:
groups = df.groupby(df.index.year)
keys = groups.groups.keys()
years = pd.DataFrame()
for key in keys:
    years[key] = groups.get_group(key)['Temp'].values
Question:
The above code gives me my desired output, but is there a more efficient way of transforming this?
I can't post the whole data here because there are 3650 rows in the dataframe, so you can download the csv file (60.6 kB) for testing from here.
Try grabbing the year and dayofyear from the index then pivoting:
import pandas as pd
import numpy as np

# Create Random Data
dr = pd.date_range(pd.to_datetime("1981-01-01"), pd.to_datetime("1982-12-31"))
df = pd.DataFrame(np.random.randint(1, 100, size=dr.shape),
                  index=dr,
                  columns=['Temp'])

# Get Year and Day of Year
df['year'] = df.index.year
df['day'] = df.index.dayofyear

# Pivot
p = df.pivot(index='day', columns='year', values='Temp')
print(p)
p:
year 1981 1982
day
1 38 85
2 51 70
3 76 61
4 71 47
5 44 76
.. ... ...
361 23 22
362 42 64
363 84 22
364 26 56
365 67 73
Run-Time via Timeit
import timeit

setup = '''
import pandas as pd
import numpy as np
# Create Random Data
dr = pd.date_range(pd.to_datetime("1981-01-01"), pd.to_datetime("1983-12-31"))
df = pd.DataFrame(np.random.randint(1, 100, size=dr.shape),
                  index=dr,
                  columns=['Temp'])'''

pivot = '''
df['year'] = df.index.year
df['day'] = df.index.dayofyear
p = df.pivot(index='day', columns='year', values='Temp')'''

groupby_for = '''
groups = df.groupby(df.index.year)
keys = groups.groups.keys()
years = pd.DataFrame()
for key in keys:
    years[key] = groups.get_group(key)['Temp'].values'''

if __name__ == '__main__':
    print("Pivot")
    print(timeit.timeit(setup=setup, stmt=pivot, number=1000))
    print("Groupby For")
    print(timeit.timeit(setup=setup, stmt=groupby_for, number=1000))
Pivot
1.598973
Groupby For
2.3967995999999996
*Additional note: the groupby-for option will not work across leap years, since it cannot handle 1984 having 366 days instead of 365. Pivot will work regardless (non-leap years simply get NaN for day 366); see the sketch below for one way to keep the years aligned.
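If the data does span a leap year, one hedged workaround is to drop February 29 and renumber the days so every year contributes exactly 365 aligned rows (a sketch; whether discarding that day is acceptable depends on the analysis):
# Drop Feb 29 so every year has exactly 365 observations
no_leap = df[~((df.index.month == 2) & (df.index.day == 29))].copy()

# Renumber days 1..365 within each year so dates after Feb 28 stay aligned across years
no_leap['year'] = no_leap.index.year
no_leap['day'] = no_leap.groupby(no_leap.index.year).cumcount() + 1

p = no_leap.pivot(index='day', columns='year', values='Temp')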

How to concat and transpose two tables in python

I do not know why my code is not working; I want to transpose and concat two tables in python.
my code:
import numpy as np
import pandas as pd
np.random.seed(100)
df = pd.DataFrame({'TR': np.arange(1, 6).repeat(5),
                   'A': np.random.randint(1, 100, 25),
                   'B': np.random.randint(50, 100, 25),
                   'C': np.random.randint(50, 1000, 25),
                   'D': np.random.randint(5, 100, 25)})
table = df.groupby('TR').mean().round(decimals=1)
table2 = df.drop(['TR'], axis=1).sem().round(decimals=1)
table2 = table2.T
pd.concat([table, table2])
The output should be:
TR A B C D
1 54.0 68.6 795.8 49.8
2 61.4 67.8 524.8 52.8
3 54.0 73.6 556.6 46.6
4 35.6 69.2 207.2 46.4
5 44.4 85.0 639.8 73.8
st 6.5 3.4 62.5 6.4
Append after assigning a name to the Series:
table2.name = 'st'
table = table.append(table2)
table
A B C D
TR
1 55.8 73.2 536.8 42.8
2 31.0 75.4 731.2 43.6
3 42.0 68.8 598.6 32.4
4 33.6 79.0 300.8 43.6
5 70.2 72.2 566.8 54.8
st 5.9 3.2 62.5 5.9
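Note that DataFrame.append was deprecated in pandas 1.4 and removed in 2.0, so on current pandas the same result can be obtained with pd.concat (a sketch under that assumption):
# Name the SEM row, promote it to a one-row frame, and concatenate it below the means
table2.name = 'st'
table = pd.concat([table, table2.to_frame().T])
print(table)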

Fill NaN values from previous column with data

I have a dataframe in pandas, and I am trying to take data from the same row and different columns and fill NaN values in my data. How would I do this in pandas?
For example,
1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
83 27.0 29.0 NaN 29.0 30.0 NaN NaN 15.0 16.0 17.0 NaN 28.0 30.0 NaN 28.0 18.0
The goal is for the data to look like this:
1 2 3 4 5 6 7 ... 10 11 12 13 14 15 16
83 NaN NaN NaN 27.0 29.0 29.0 30.0 ... 15.0 16.0 17.0 28.0 30.0 28.0 18.0
The goal is to be able to take the mean of the last five columns that have data. If there are not >= 5 data-filled cells, then take the average of however many cells there are.
Use the justify function to improve performance, filtering all columns except the first with DataFrame.iloc:
print (df)
name 1 2 3 4 5 6 7 8 9 10 11 12 13 \
80 bob 27.0 29.0 NaN 29.0 30.0 NaN NaN 15.0 16.0 17.0 NaN 28.0 30.0
14 15 16
80 NaN 28.0 18.0
df.iloc[:, 1:] = justify(df.iloc[:, 1:].to_numpy(), invalid_val=np.nan, side='right')
print (df)
name 1 2 3 4 5 6 7 8 9 10 11 12 13 \
80 bob NaN NaN NaN NaN NaN 27.0 29.0 29.0 30.0 15.0 16.0 17.0 28.0
14 15 16
80 30.0 28.0 18.0
Function:
import numpy as np

# https://stackoverflow.com/a/44559180/2901002
def justify(a, invalid_val=0, axis=1, side='left'):
    """
    Justifies a 2D array

    Parameters
    ----------
    a : ndarray
        Input array to be justified
    axis : int
        Axis along which justification is to be made
    side : str
        Direction of justification. It could be 'left', 'right', 'up', 'down'.
        It should be 'left' or 'right' for axis=1 and 'up' or 'down' for axis=0.
    """
    if invalid_val is np.nan:
        mask = ~np.isnan(a)
    else:
        mask = a != invalid_val
    justified_mask = np.sort(mask, axis=axis)
    if (side == 'up') | (side == 'left'):
        justified_mask = np.flip(justified_mask, axis=axis)
    out = np.full(a.shape, invalid_val)
    if axis == 1:
        out[justified_mask] = a[mask]
    else:
        out.T[justified_mask.T] = a.T[mask.T]
    return out
Performance:
#100 rows
df = pd.concat([df] * 100, ignore_index=True)
# 41 times slower
In [39]: %timeit df.loc[:,df.columns[1:]] = df.loc[:,df.columns[1:]].apply(fun, axis=1)
145 ms ± 23.7 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [41]: %timeit df.iloc[:, 1:] = justify(df.iloc[:, 1:].to_numpy(), invalid_val=np.nan, side='right')
3.54 ms ± 236 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
#1000 rows
df = pd.concat([df] * 1000, ignore_index=True)
# 198 times slower
In [43]: %timeit df.loc[:,df.columns[1:]] = df.loc[:,df.columns[1:]].apply(fun, axis=1)
1.13 s ± 37.9 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [45]: %timeit df.iloc[:, 1:] = justify(df.iloc[:, 1:].to_numpy(), invalid_val=np.nan, side='right')
5.7 ms ± 184 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
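With the values right-justified, the stated goal (the mean of the last five populated cells, or of however many exist) reduces to a plain row-wise mean over the last five columns, because mean skips NaN by default (a sketch; the column name mean_last_5 is made up):
# Mean of the last five value columns; NaN cells are skipped automatically,
# so rows with fewer than five measurements average whatever is present
df["mean_last_5"] = df.iloc[:, -5:].mean(axis=1)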
Assuming you need to move all NaN to the first columns, I would define a function that takes all NaN values and places them first, leaving the rest as is:
def fun(row):
    index_order = row.index[row.isnull()].append(row.index[~row.isnull()])
    row.iloc[:] = row[index_order].values
    return row
df_fix = df.loc[:,df.columns[1:]].apply(fun, axis=1)
If you need to overwrite the results in the same dataframe then:
df.loc[:,df.columns[1:]] = df_fix.copy()
