add rows to pandas dataframe based on days in week - python

I'm fairly new to pandas and I've run into a problem that I'm not able to fix.
I have the following dataframe:
import pandas as pd
df = pd.DataFrame({
    'Day': ['2018-12-31', '2019-01-07'],
    'Product_Finished': [1000, 2000],
    'Product_Tested': [50, 10]})
df['Day'] = pd.to_datetime(df['Day'], format='%Y-%m-%d')
df
I would like to add rows to my dataframe based on the column 'Day', ideally adding all the other days of the week while keeping the same values in the remaining columns. The output should look something like this:
Day Product_Finished Product_Tested
0 2018-12-31 1000 50
1 2019-01-01 1000 50
2 2019-01-02 1000 50
3 2019-01-03 1000 50
4 2019-01-04 1000 50
5 2019-01-05 1000 50
6 2019-01-06 1000 50
7 2019-01-07 2000 10
8 2019-01-08 2000 10
9 2019-01-09 2000 10
10 2019-01-10 2000 10
11 2019-01-11 2000 10
12 2019-01-12 2000 10
13 2019-01-13 2000 10
Any tips would be greatly appreciated, thank you in advance!

You can achieve this by first creating a new DataFrame that contains the desired date range using pandas.date_range, and then using pandas.merge_asof to match each date with the last available row.
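A sketch of that approach, using the data from the question (the end date 2019-01-13 is taken from the desired output above):

```python
import pandas as pd

df = pd.DataFrame({
    'Day': pd.to_datetime(['2018-12-31', '2019-01-07']),
    'Product_Finished': [1000, 2000],
    'Product_Tested': [50, 10]})

# Build the full daily range to fill (both weeks from the example)
all_days = pd.DataFrame({'Day': pd.date_range('2018-12-31', '2019-01-13', freq='D')})

# merge_asof matches each day to the most recent row at or before it,
# carrying each week's values forward (both frames must be sorted on the key)
out = pd.merge_asof(all_days, df, on='Day')
print(out)
```

This gives 14 rows: the first week carries 1000/50, the second 2000/10.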

You can reindex against a daily date range and forward-fill:
import datetime
import pandas as pd

df = pd.DataFrame({
    'Day': ['2018-12-31', '2019-01-07'],
    'Product_Finished': [1000, 2000],
    'Product_Tested': [50, 10]})
df['Day'] = pd.to_datetime(df['Day'], format='%Y-%m-%d')
df.set_index('Day', inplace=True)
df_Date = pd.date_range(start=df.index.min(), end=df.index.max() + datetime.timedelta(days=6), freq='D')
df = df.reindex(df_Date, method='ffill')
df = df.reset_index().rename(columns={'index': 'Day'})

Related

Get the min value of a week in a pandas dataframe

So let's say I have a pandas dataframe with SOME repeated dates:
import pandas as pd
import random
reportDate = pd.date_range('04-01-2010', '09-03-2021',periods = 5000).date
lowPriceMin = [random.randint(10, 20) for x in range(5000)]
df = pd.DataFrame()
df['reportDate'] = reportDate
df['lowPriceMin'] = lowPriceMin
Now I want to get the min value from every week since the starting date. So I will have around 559 (the number of weeks from '04-01-2010' to '09-03-2021') values with the min value from every week.
Try with resample:
df['reportDate'] = pd.to_datetime(df['reportDate'])
df.set_index('reportDate').resample('W').min()
lowPriceMin
reportDate
2010-01-10 10
2010-01-17 10
2010-01-24 14
2010-01-31 10
2010-02-07 14
...
2021-02-14 11
2021-02-21 11
2021-02-28 10
2021-03-07 10
2021-03-14 17
[584 rows x 1 columns]

Cumulative sum over days in python

I have the following dataframe:
date money
0 2018-01-01 20
1 2018-01-05 30
2 2018-02-15 7
3 2019-03-17 150
4 2018-01-05 15
...
2530 2019-03-17 350
And I need:
[(2018-01-01,20),(2018-01-05,65),(2018-02-15,72),...,(2019-03-17,572)]
So I need to do a cumulative sum of money over all days.
So far I have tried many things, and the closest I think I've gotten is:
graph_df.date = pd.to_datetime(graph_df.date)
temporary = graph_df.groupby('date').money.sum()
temporary = temporary.groupby(temporary.index.to_period('date')).cumsum().reset_index()
But this gives me ValueError: Invalid frequency: date
Could anyone help please?
Thanks
I don't think you need the second groupby. You can simply add a column with the cumulative sum.
This does the trick for me:
import pandas as pd
df = pd.DataFrame({'date': ['01-01-2019','04-06-2019', '07-06-2019'], 'money': [12,15,19]})
df['date'] = pd.to_datetime(df['date']) # this is not strictly needed
tmp = df.groupby('date')['money'].sum().reset_index()
tmp['money_sum'] = tmp['money'].cumsum()
Converting the date column to an actual date is not needed for this to work.
list(map(tuple, df.groupby('date', as_index=False)['money'].sum().values))
Edit:
df = pd.DataFrame({'date': ['2018-01-01', '2018-01-05', '2018-02-15', '2019-03-17', '2018-01-05'],
'money': [20, 30, 7, 150, 15]})
#df['date'] = pd.to_datetime(df['date'])
#df = df.sort_values(by='date')
temporary = df.groupby('date', as_index=False)['money'].sum()
temporary['money_cum'] = temporary['money'].cumsum()
Result:
>>> list(map(tuple, temporary[['date', 'money_cum']].values))
[('2018-01-01', 20),
('2018-01-05', 65),
('2018-02-15', 72),
('2019-03-17', 222)]
You can try using df.groupby('date').sum():
example data frame:
df
date money
0 01/01/2018 20
1 05/01/2018 30
2 15/02/2018 7
3 17/03/2019 150
4 05/01/2018 15
5 17/03/2019 550
6 15/02/2018 13
df['cumsum'] = df.money.cumsum()
list(zip(df.groupby('date').tail(1)['date'], df.groupby('date').tail(1)['cumsum']))
[('01/01/2018', 20),
('05/01/2018', 222),
('17/03/2019', 772),
('15/02/2018', 785)]

pandas time series average monthly volume

I have csv time series data with one row per day (a date and a cumulative sale), similar to this:
01-01-2010 12:10:10 50.00
01-02-2010 12:10:10 80.00
01-03-2010 12:10:10 110.00
.
. for each day of 2010
.
01-01-2011 12:10:10 2311.00
01-02-2011 12:10:10 2345.00
01-03-2011 12:10:10 2445.00
.
. for each day of 2011
.
and so on.
I am looking to get the monthly sale (max - min) for each month in each year. Therefore for past 5 years, I will have 5 Jan values (max - min), 5 Feb values (max - min) ... and so on
once I have those, I next get the (5 years avg) for Jan, 5 years avg for Feb .. and so on.
Right now, I do this by slicing the original df [year/month] and then do the averaging over the specific month of the year.
I am looking to use time series resample() approach, but I am currently stuck at telling PD to sample monthly (max - min) for each month in [past 10 years from today]. and then chain in a .mean()
Any advice on an efficient way to do this with resample() would be appreciated.
It would probably look something like this (note: no cumulative sale values). The key here is to perform a df.groupby() passing dt.year and dt.month.
import pandas as pd
import numpy as np
df = pd.DataFrame({
    'date': pd.date_range(start='2016-01-01', end='2017-12-31'),
    'sale': np.random.randint(100, 200, size=365*2+1)
})
# Get month max, min and size (and as they are sorted - last and first)
dfg = df.groupby([df.date.dt.year,df.date.dt.month])['sale'].agg(['last','first','size'])
# Assign new cols (diff and avg) and drop max min size
dfg = dfg.assign(diff = dfg['last'] - dfg['first'])
dfg = dfg.assign(avg = dfg['diff'] / dfg['size']).drop(['last','first','size'], axis=1)
# Rename index cols
dfg.index = dfg.index.rename(['Year','Month'])
print(dfg.head(6))
Returns:
diff avg
Year Month
2016 1 -56 -1.806452
2 -17 -0.586207
3 30 0.967742
4 34 1.133333
5 46 1.483871
6 2 0.066667
You can do it with two resamples:
First resample to a month (M) and get the diff (max()-min())
Then resample to 5 years (5AS), and groupby month and take the mean()
E.g.:
In []:
date_range = pd.date_range(start='2008-01-01',end='2017-12-31')
df = pd.DataFrame({'sale': np.random.randint(100, 200, size=date_range.size)},
index=date_range)
In []:
df1 = df.resample('M').apply(lambda g: g.max()-g.min())
df1.resample('5AS').apply(lambda g: g.groupby(g.index.month).mean()).unstack()
Out[]:
sale
1 2 3 4 5 6 7 8 9 10 11 12
2008-01-01 95.4 90.2 95.2 95.4 93.2 93.8 91.8 95.6 93.4 93.4 94.2 93.8
2013-01-01 93.2 96.4 92.8 96.4 92.6 93.0 93.2 92.6 91.2 93.2 91.8 92.2

pd.to_datetime is getting half my dates with flipped day / months

My dataset has dates in the European format, and I'm struggling to convert it into the correct format before passing it to pd.to_datetime: for any day < 12, the month and day get switched.
Is there an easy solution to this?
import pandas as pd
import datetime as dt
df = pd.read_csv(loc,dayfirst=True)
df['Date']=pd.to_datetime(df['Date'])
Is there a way to force to_datetime to acknowledge that the input is formatted as dd/mm/yyyy?
Thanks for the help!
Edit, a sample from my dates:
renewal["Date"].head()
Out[235]:
0 31/03/2018
2 30/04/2018
3 28/02/2018
4 30/04/2018
5 31/03/2018
Name: Earliest renewal date, dtype: object
After running the following:
renewal['Date']=pd.to_datetime(renewal['Date'],dayfirst=True)
I get:
Out[241]:
0 2018-03-31 #Correct
2 2018-04-01 #<-- this number is wrong and should be 01-04 instead
3 2018-02-28 #Correct
Add the format argument:
df['Date'] = pd.to_datetime(df['Date'], format='%d/%m/%Y')
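A quick check with the ambiguous dates from the question (pd.to_datetime also accepts dayfirst=True, but an explicit format is stricter when the strings really are dd/mm/yyyy):

```python
import pandas as pd

s = pd.Series(['31/03/2018', '01/04/2018', '28/02/2018'])

# With an explicit format, 01/04/2018 parses as 1 April, not 4 January
parsed = pd.to_datetime(s, format='%d/%m/%Y')
print(parsed)
```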
You can control the date construction directly if you define separate columns for 'year', 'month' and 'day', like this:
import pandas as pd
df = pd.DataFrame(
{'Date': ['01/03/2018', '06/08/2018', '31/03/2018', '30/04/2018']}
)
date_parts = df['Date'].apply(lambda d: pd.Series(int(n) for n in d.split('/')))
date_parts.columns = ['day', 'month', 'year']
df['Date'] = pd.to_datetime(date_parts)
date_parts
# day month year
# 0 1 3 2018
# 1 6 8 2018
# 2 31 3 2018
# 3 30 4 2018
df
# Date
# 0 2018-03-01
# 1 2018-08-06
# 2 2018-03-31
# 3 2018-04-30

Applying Date Operation to Entire Data Frame

import pandas as pd
import numpy as np
df = pd.DataFrame({'year': np.repeat(2018,12), 'month': range(1,13)})
In this data frame, I am interested in creating a field called 'year_month' such that each value looks like so:
datetime.date(df['year'][0], df['month'][0], 1).strftime("%Y%m")
I'm stuck on how to apply this operation to the entire data frame and would appreciate any help.
Join both columns converted to strings, zero-padding the month with zfill:
df['new'] = df['year'].astype(str) + df['month'].astype(str).str.zfill(2)
Or add a new day column with assign, convert the columns with to_datetime, and finally apply strftime:
df['new'] = pd.to_datetime(df.assign(day=1)).dt.strftime("%Y%m")
If the DataFrame has additional columns, select only the needed ones:
df['new'] = pd.to_datetime(df.assign(day=1)[['day','month','year']]).dt.strftime("%Y%m")
print (df)
month year new
0 1 2018 201801
1 2 2018 201802
2 3 2018 201803
3 4 2018 201804
4 5 2018 201805
5 6 2018 201806
6 7 2018 201807
7 8 2018 201808
8 9 2018 201809
9 10 2018 201810
10 11 2018 201811
11 12 2018 201812
Timings:
df = pd.DataFrame({'year': np.repeat(2018,12), 'month': range(1,13)})
df = pd.concat([df] * 1000, ignore_index=True)
In [212]: %timeit pd.to_datetime(df.assign(day=1)).dt.strftime("%Y%m")
10 loops, best of 3: 74.1 ms per loop
In [213]: %timeit df['year'].astype(str) + df['month'].astype(str).str.zfill(2)
10 loops, best of 3: 41.3 ms per loop
One way would be to create the datetime objects directly from the source data:
import pandas as pd
import numpy as np
from datetime import date
df = pd.DataFrame({'date': [date(i, j, 1) for i, j \
in zip(np.repeat(2018,12), range(1,13))]})
# date
# 0 2018-01-01
# 1 2018-02-01
# 2 2018-03-01
# 3 2018-04-01
# 4 2018-05-01
# 5 2018-06-01
# 6 2018-07-01
# 7 2018-08-01
# 8 2018-09-01
# 9 2018-10-01
# 10 2018-11-01
# 11 2018-12-01
You could use an apply function; accessing the columns by name avoids depending on the column order:
import datetime
df['year_month'] = df.apply(lambda row: datetime.date(row['year'], row['month'], 1).strftime("%Y%m"), axis=1)
