I have this data frame
import pandas as pd
df = pd.DataFrame({
    'COTA': ['A','A','A','A','A','B','B','B','B'],
    'Date': ['14/10/2021','19/10/2020','29/10/2019','30/09/2021','20/09/2020',
             '20/10/2021','29/10/2020','15/10/2019','10/09/2020'],
    'Mark': [1,2,3,4,5,1,2,3,3]
})
print(df)
Based on this data frame, I want the Mark from the previous year. I managed to get the maximum Mark per COTA, but I want the last one: I used .max() and thought I could swap in .last(), but it didn't work.
Here is my code:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')  # dates are day-first
df['LastYear'] = df['Date'] - pd.offsets.YearEnd(0)
s1 = df.groupby(['COTA', 'LastYear'])['Mark'].max()
s2 = s1.rename(index=lambda x: x + pd.offsets.DateOffset(years=1), level=1)
df = df.join(s2.rename('Max_MarkLastYear'), on=['COTA', 'LastYear'])
print (df)
  COTA       Date  Mark   LastYear  Max_MarkLastYear
0    A 2021-10-14     1 2021-12-31               5.0
1    A 2020-10-19     2 2020-12-31               3.0
2    A 2019-10-29     3 2019-12-31               NaN
3    A 2021-09-30     4 2021-12-31               5.0
4    A 2020-09-20     5 2020-12-31               3.0
5    B 2021-10-20     1 2021-12-31               3.0
6    B 2020-10-29     2 2020-12-31               3.0
7    B 2019-10-15     3 2019-12-31               NaN
8    B 2020-09-10     3 2020-12-31               3.0
How do I create a new column with the last value of the previous year?
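One possible fix, as a minimal sketch (assuming "last" means the Mark on the most recent Date within each COTA/year group): sort by Date so that .last() picks the latest row, then reuse the same shift-and-join as with .max(). Last_MarkLastYear is a hypothetical column name.
# sort so .last() returns the Mark of the latest Date in each group
s1 = (df.sort_values('Date')
        .groupby(['COTA', 'LastYear'])['Mark']
        .last())
s2 = s1.rename(index=lambda x: x + pd.offsets.DateOffset(years=1), level=1)
df = df.join(s2.rename('Last_MarkLastYear'), on=['COTA', 'LastYear'])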
I have a dataframe. Data exists for every date and firm, but a given row only carries a firm's values when that firm's boolean column is True.
date        IBM    AAPL  AAPL_total_amount  IBM_total_amount  AAPL_count_avg  IBM_count_avg
2013-01-31  True   False               NaN                29             NaN              9
2013-01-31  True   True                 29                 9              27              5
2013-02-31  False  True                 27               NaN               5            NaN
2013-02-08  True   True                  2                 3               5              6
...
How could I reshape the above dataframe into long format?
Expected output:
date Firm total_amount count_avg
2013-01-31 IBM 9 5
2013-01-31 AAPL 29 27
...
You might have to add some logic to drop all the boolean masks, but once you have that, it's just a stack.
u = df.set_index('date').drop(columns=['IBM', 'AAPL'])
# split only on the first underscore so 'total_amount'/'count_avg' stay single labels
u.columns = u.columns.str.split('_', n=1, expand=True)
u.stack(0)
                 count_avg  total_amount
date
2013-01-31 IBM         9.0          29.0
           AAPL       27.0          29.0
           IBM         5.0           9.0
2013-02-31 AAPL        5.0          27.0
2013-02-08 AAPL        5.0           2.0
           IBM         6.0           3.0
To drop all the masks if you don't have a list of keys, you can use select_dtypes:
df.select_dtypes(exclude=[bool])
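Putting the pieces together, a minimal runnable sketch (the frame below is a hypothetical reconstruction of the sample data; splitting only on the first underscore keeps 'total_amount' and 'count_avg' as single labels):
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'date': ['2013-01-31', '2013-01-31', '2013-02-31', '2013-02-08'],
    'IBM': [True, True, False, True],
    'AAPL': [False, True, True, True],
    'AAPL_total_amount': [np.nan, 29, 27, 2],
    'IBM_total_amount': [29, 9, np.nan, 3],
    'AAPL_count_avg': [np.nan, 27, 5, 5],
    'IBM_count_avg': [9, 5, np.nan, 6]})

u = df.set_index('date').select_dtypes(exclude=[bool])  # drops the boolean masks
u.columns = u.columns.str.split('_', n=1, expand=True)  # (firm, measure) MultiIndex
long_df = u.stack(0).rename_axis(['date', 'Firm']).reset_index()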
Use wide_to_long with pre-processing on the columns and post-processing with slicing and dropna:
# move the firm ticker to the end: 'AAPL_total_amount' -> 'total_amount_AAPL'
df.columns = ['_'.join(col[1:] + col[:1]) for col in df.columns.str.split('_')]
df_final = (pd.wide_to_long(df.reset_index(),
                            stubnames=['total_amount', 'count_avg'],
                            i=['index', 'date'],
                            j='firm', sep='_', suffix=r'\w+')
            [['total_amount', 'count_avg']]
            .reset_index(level=[1, 2]).dropna())
Out[59]:
             date  firm  total_amount  count_avg
index
0      2013-01-31   IBM          29.0        9.0
1      2013-01-31   IBM           9.0        5.0
1      2013-01-31  AAPL          29.0       27.0
2      2013-02-31  AAPL          27.0        5.0
3      2013-02-08   IBM           3.0        6.0
3      2013-02-08  AAPL           2.0        5.0
That's an unusual table design. Let's assume the table is called df.
So you first want to find the list of tickers:
Either you have them elsewhere:
tickers = ['AAPL','IBM']
or you can extract them from your table:
tickers = [c for c in df.columns
           if not c.endswith('_count_avg')
           and not c.endswith('_total_amount')
           and c != 'date']
Now you have to loop over the tickers:
res = []
for tic in tickers:
    sub = df[df[tic]][['date', f'{tic}_total_amount', f'{tic}_count_avg']].copy()
    sub.columns = ['date', 'Total', 'Count']
    sub['Firm'] = tic
    res.append(sub)
res = pd.concat(res, axis=0)
Finally, you might want to reorder the columns:
res = res[['date', 'Firm', 'Total', 'Count']]
You might want to handle duplicates. From what I read in your example, you want to drop them:
res = res.drop_duplicates()
This isn't a duplicate. I already referred to post_1 and post_2.
My question is different and is not about the agg function: it is about keeping the grouped-by column in the output during an ffill operation. The code works fine; I am sharing it in full so you can get the idea. The problem is in the commented line; look out for that line below.
I have a dataframe like the one given below:
import pandas as pd
import numpy as np

df = pd.DataFrame({
    'subject_id': [1,1,1,1,1,1,1,2,2,2,2,2],
    'time_1': ['2173-04-03 12:35:00','2173-04-03 12:50:00','2173-04-05 12:59:00',
               '2173-05-04 13:14:00','2173-05-05 13:37:00','2173-07-06 13:39:00',
               '2173-07-08 11:30:00','2173-04-08 16:00:00','2173-04-09 22:00:00',
               '2173-04-11 04:00:00','2173-04-13 04:30:00','2173-04-14 08:00:00'],
    'val': [5,5,5,5,1,6,5,5,8,3,4,6]})
df['time_1'] = pd.to_datetime(df['time_1'])
df['day'] = df['time_1'].dt.day
df['month'] = df['time_1'].dt.month
What this code (written with help from Jezrael on the forum) does is add missing dates based on a threshold value. The only issue is that I don't see the grouped-by column in the output.
df['time_1'] = pd.to_datetime(df['time_1'])
df['day'] = df['time_1'].dt.day
df['date'] = df['time_1'].dt.floor('d')
df1 = (df.set_index('date')
         .groupby('subject_id')
         .resample('d')
         .last()
         .index
         .to_frame(index=False))
df2 = df1.merge(df, how='left')
thresh = 5
mask = df2['day'].notna()
s = mask.cumsum().mask(mask)
df2['count'] = s.map(s.value_counts())
df2 = df2[(df2['count'] < thresh) | (df2['count'].isna())]
df2 = df2.groupby(df2['subject_id']).ffill()  # problem is here
dates = df2['time_1'].dt.normalize()
df2['time_1'] += np.where(dates == df2['date'], 0, df2['date'] - dates)
df2['day'] = df2['time_1'].dt.day
df2['val'] = df2['val'].astype(int)
As shown in the code above, I tried the approaches below:
df2 = df2.groupby(df2['subject_id']).ffill() # doesn't help
df2 = df2.groupby(df2['subject_id']).ffill().reset_index() # doesn't help
df2 = df2.groupby('subject_id',as_index=False).ffill() # doesn't help
The output is incorrect: it is missing subject_id. I expect my output to have the subject_id column as well.
Here are 2 possible solutions. Either specify all columns in a list after the groupby and assign back:
cols = df2.columns.difference(['subject_id'])
df2[cols] = df2.groupby('subject_id')[cols].ffill()
Or create an index from the subject_id column and group by that index:
# newer pandas versions
df2 = df2.set_index('subject_id').groupby('subject_id').ffill().reset_index()
# older pandas versions
df2 = df2.set_index('subject_id').groupby(level=0).ffill().reset_index()
dates = df2['time_1'].dt.normalize()
df2['time_1'] += np.where(dates == df2['date'], 0, df2['date'] - dates)
df2['day'] = df2['time_1'].dt.day
df2['val'] = df2['val'].astype(int)
print (df2)
subject_id date time_1 val day month count
0 1 2173-04-03 2173-04-03 12:35:00 5 3 4.0 NaN
1 1 2173-04-03 2173-04-03 12:50:00 5 3 4.0 NaN
2 1 2173-04-04 2173-04-04 12:50:00 5 4 4.0 1.0
3 1 2173-04-05 2173-04-05 12:59:00 5 5 4.0 1.0
32 1 2173-05-04 2173-05-04 13:14:00 5 4 5.0 1.0
33 1 2173-05-05 2173-05-05 13:37:00 1 5 5.0 1.0
95 1 2173-07-06 2173-07-06 13:39:00 6 6 7.0 1.0
96 1 2173-07-07 2173-07-07 13:39:00 6 7 7.0 1.0
97 1 2173-07-08 2173-07-08 11:30:00 5 8 7.0 1.0
98 2 2173-04-08 2173-04-08 16:00:00 5 8 4.0 NaN
99 2 2173-04-09 2173-04-09 22:00:00 8 9 4.0 NaN
100 2 2173-04-10 2173-04-10 22:00:00 8 10 4.0 1.0
101 2 2173-04-11 2173-04-11 04:00:00 3 11 4.0 1.0
102 2 2173-04-12 2173-04-12 04:00:00 3 12 4.0 1.0
103 2 2173-04-13 2173-04-13 04:30:00 4 13 4.0 1.0
104 2 2173-04-14 2173-04-14 08:00:00 6 14 4.0 1.0
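As a brief aside on why the original attempts dropped the column: groupby(...).ffill() returns only the non-grouping columns, so subject_id has to be reassigned or kept in the index. A minimal standalone sketch with hypothetical toy data:
import pandas as pd

toy = pd.DataFrame({'subject_id': [1, 1, 2, 2],
                    'val': [5.0, None, 3.0, None]})
print(toy.groupby('subject_id').ffill().columns.tolist())
# ['val'] - the grouping key is gone
out = toy.set_index('subject_id').groupby('subject_id').ffill().reset_index()
print(out.columns.tolist())
# ['subject_id', 'val'] - keeping the key in the index preserves it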
I am a little new to Python and have a problem like this: I have a dataframe of data from multiple sensors. There are NA missing values in the dataset that need to be filled with the rules below.
If the next sensor has data at the same timestamp, fill with the next sensor's data.
If the near sensor has no data either, fill with the average of all available sensors at the same timestamp.
If all sensors are missing data at the same timestamp, use linear interpolation of the sensor's own values to fill the gaps.
Here is some sample data I built.
import pandas as pd
import numpy as np

sensor1 = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10), "sensor": [1]*10,
                        "value": [np.nan,2,2,2,2,np.nan,np.nan,np.nan,4,6]})
sensor2 = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10), "sensor": [2]*10,
                        "value": [3,4,5,6,7,np.nan,np.nan,np.nan,7,8]})
sensor3 = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10), "sensor": [3]*10,
                        "value": [2,3,4,5,6,7,np.nan,np.nan,7,8]})
# DataFrame.append was removed in pandas 2.0; use pd.concat instead
sensordata = pd.concat([sensor1, sensor2, sensor3]).reset_index(drop=True)
Any help would be appreciated.
With the answer from Christian, the full solution is as follows.
# create data
df1 = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10), "sensor": [1]*10,
                    "value": [np.nan,2,2,2,2,np.nan,np.nan,np.nan,4,6]})
df2 = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10), "sensor": [2]*10,
                    "value": [3,4,5,6,7,np.nan,np.nan,np.nan,7,8]})
df3 = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10), "sensor": [3]*10,
                    "value": [2,3,4,5,6,7,np.nan,np.nan,7,8]})
df = pd.concat([df1, df2, df3]).reset_index(drop=True)
# pivot dataframe
df = df.pivot(index='date', columns='sensor', values='value')
# step 1: fill missing values from a specified sensor first (here, sensor 3)
selectedsensor = 3
for c in df.columns:
    df[c] = df[c].fillna(df[selectedsensor])
# step 2: fill with the average of all available sensors at the same timestamp
df = df.transpose().fillna(df.transpose().mean()).transpose()
# step 3: fill the remaining missing values by linear interpolation
df = df.interpolate()
# melt back to the original long format
df = df.reset_index()
df = df.melt(id_vars=['date'], var_name='sensor')
The final output is as follows:
date sensor value
0 2000-01-01 1 2.0
1 2000-01-02 1 2.0
2 2000-01-03 1 2.0
3 2000-01-04 1 2.0
4 2000-01-05 1 2.0
5 2000-01-06 1 7.0
6 2000-01-07 1 6.0
7 2000-01-08 1 5.0
8 2000-01-09 1 4.0
9 2000-01-10 1 6.0
10 2000-01-01 2 3.0
11 2000-01-02 2 4.0
12 2000-01-03 2 5.0
13 2000-01-04 2 6.0
14 2000-01-05 2 7.0
15 2000-01-06 2 7.0
16 2000-01-07 2 7.0
17 2000-01-08 2 7.0
18 2000-01-09 2 7.0
19 2000-01-10 2 8.0
20 2000-01-01 3 2.0
21 2000-01-02 3 3.0
22 2000-01-03 3 4.0
23 2000-01-04 3 5.0
24 2000-01-05 3 6.0
25 2000-01-06 3 7.0
26 2000-01-07 3 7.0
27 2000-01-08 3 7.0
28 2000-01-09 3 7.0
29 2000-01-10 3 8.0
You can do the following:
Your dataset, pivoted:
df = pd.DataFrame({"date": pd.date_range('1/1/2000', periods=10),"sensor1":[np.nan,2,2,2,2,np.nan,np.nan,np.nan,4,6], "sensor2":[3,4,5,6,7,np.nan,np.nan,np.nan,7,8], "sensor3":[2,3,4,5,6,7,np.nan,np.nan,7,8]}).set_index('date')
1) This is a backward fill with limit=1 along axis 1:
df.bfill(axis=1, limit=1)  # fillna(method='bfill', ...) is deprecated in newer pandas
2) This is fillna with the mean along axis 1. That isn't directly implemented, but we can trick it by transposing:
df.transpose().fillna(df.transpose().mean()).transpose()
3) This is just interpolate:
df.interpolate()
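For completeness, a sketch chaining the three steps in the order of the rules (assuming the pivoted frame df from above):
step1 = df.bfill(axis=1, limit=1)         # rule 1: next sensor at the same timestamp
step2 = step1.T.fillna(step1.T.mean()).T  # rule 2: mean of the available sensors per timestamp
step3 = step2.interpolate()               # rule 3: each sensor's own linear interpolation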
Bonus:
This got a bit uglier, since I had to apply it column by column, but here is one that selects sensor3 to fill from:
for c in df.columns:
    df[c] = df[c].fillna(df["sensor3"])
df
I have a dataframe like this
df
order_date amount
0 2015-10-02 1
1 2015-12-21 15
2 2015-12-24 3
3 2015-12-26 4
4 2015-12-27 5
5 2015-12-28 10
I would like to sum df["amount"] over the range from df["order_date"] to df["order_date"] + 6 days:
order_date amount sum
0 2015-10-02 1 1
1 2015-12-21 15 27 //comes from 15 + 3 + 4 + 5
2 2015-12-24 3 22 //comes from 3 + 4 + 5 + 10
3 2015-12-26 4 19
4 2015-12-27 5 15
5 2015-12-28 10 10
The data type of order_date is datetime. I have tried to use iloc, but it did not work well. If anyone has any idea/example of how to approach this, please kindly let me know.
If pandas rolling allowed a left-aligned window (the default is right-aligned), the answer would be a simple one-liner: df.set_index('order_date').amount.rolling('7d', min_periods=1, align='left').sum(). However, forward-looking windows have not been implemented yet (rolling does not accept an align parameter), so the trick I came up with is to temporarily "reverse" the dates. Solution:
# subtracting from "now" reverses the ordering of the dates
# (pd.datetime was removed from pandas; use pd.Timestamp instead)
df.index = pd.Timestamp.now() - df.order_date
df['sum'] = df.sort_index().amount.rolling('7d', min_periods=1).sum()
df = df.reset_index(drop=True)
Output:
order_date amount sum
0 2015-10-02 1 1.0
1 2015-12-21 15 27.0
2 2015-12-24 3 22.0
3 2015-12-26 4 19.0
4 2015-12-27 5 15.0
5 2015-12-28 10 10.0
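If you are on a newer pandas, a forward-looking window can also be written directly with pandas.api.indexers.FixedForwardWindowIndexer. It counts rows rather than a date offset, so the series has to be reindexed to one row per calendar day first; a sketch under that assumption:
import pandas as pd
from pandas.api.indexers import FixedForwardWindowIndexer

# one row per day; missing days contribute 0 to the sums
s = (df.set_index('order_date')['amount']
       .reindex(pd.date_range(df['order_date'].min(),
                              df['order_date'].max(), freq='D'))
       .fillna(0))
indexer = FixedForwardWindowIndexer(window_size=7)  # today plus the next 6 days
df['sum'] = df['order_date'].map(s.rolling(indexer, min_periods=1).sum())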
Expanding on my comment:
from datetime import timedelta

df['sum'] = 0
for i in range(len(df)):
    dt1 = df['order_date'][i]
    dt2 = dt1 + timedelta(days=6)
    # .loc avoids chained-assignment problems
    df.loc[i, 'sum'] = df['amount'][(df['order_date'] >= dt1) & (df['order_date'] <= dt2)].sum()
There's probably a much better way to do this but it works...
Here is my way of solving this problem. It works, though I believe there should be a much better way to do it.
import pandas as pd

df['order_date'] = pd.to_datetime(df.order_date)
Temp = pd.DataFrame(pd.date_range(start='2015-10-02', end='2017-01-01'), columns=['STDate'])
Temp = Temp.merge(df, left_on='STDate', right_on='order_date', how='left')
Temp['amount'] = Temp['amount'].fillna(0)
# sort descending so a trailing 7-row window looks 7 days forward
Temp = Temp.sort_values('STDate', ascending=False)
# pd.rolling_sum was removed; use the .rolling accessor
Temp['rolls'] = Temp['amount'].rolling(window=7, min_periods=1).sum()
Temp.loc[Temp.STDate.isin(df.order_date), :].sort_values('STDate')
        STDate order_date  amount  rolls
0   2015-10-02 2015-10-02     1.0    1.0
80  2015-12-21 2015-12-21    15.0   27.0
83  2015-12-24 2015-12-24     3.0   22.0
85  2015-12-26 2015-12-26     4.0   19.0
86  2015-12-27 2015-12-27     5.0   15.0
87  2015-12-28 2015-12-28    10.0   10.0
Set order_date to be a DatetimeIndex so that you can use df.loc[time1:time2] to get the rows in the time range, then take the amount column and sum it.
You can try:
from datetime import timedelta

df = pd.read_fwf('test2.csv')
df.order_date = pd.to_datetime(df.order_date)
df = df.set_index(pd.DatetimeIndex(df['order_date']))
sum_list = []
for i in range(len(df)):
    start = df['order_date'].iloc[i]
    # .ix was removed from pandas; use label-based .loc slicing instead
    sum_list.append(df.loc[start:start + timedelta(days=6), 'amount'].sum())
df['sum'] = sum_list
df
Output:
order_date amount sum
2015-10-02 2015-10-02 1 1
2015-12-21 2015-12-21 15 27
2015-12-24 2015-12-24 3 22
2015-12-26 2015-12-26 4 19
2015-12-27 2015-12-27 5 15
2015-12-28 2015-12-28 10 10
import datetime
df['order_date'] = pd.to_datetime(df['order_date'], format='%Y-%m-%d')
df.set_index(['order_date'], inplace=True)
# Sum rows within the range of six days in the future
d = {t: df[(df.index >= t) & (df.index <= t + datetime.timedelta(days=6))]['amount'].sum()
for t in df.index}
# Assign the summed values back to the dataframe
df['amount_sum'] = [d[t] for t in df.index]
df is now:
amount amount_sum
order_date
2015-10-02 1.0 1.0
2015-12-21 15.0 27.0
2015-12-24 3.0 22.0
2015-12-26 4.0 19.0
2015-12-27 5.0 15.0
2015-12-28 10.0 10.0