How to aggregate data by hour based on a timestamp in pandas? - python

I have a dataframe covering a full day from 00:00:00 to 23:59:59.
The table below is just an example; I can't paste the real one here because it's too long.
id sm_log_time score 1 score 2
0 2020-04-15 15:25:49 10 10
1 2020-04-15 15:38:55 10 10
2 2020-04-15 15:52:01 10 10
3 2020-04-15 16:05:07 10 10
4 2020-04-15 16:18:13 10 10
And my desired dataframe is something like this, where score 1 and score 2 are summed over all the rows that fall within each hour:
id sm_log_time score 1 score 2
0 2020-04-15 15:00:00 100 200
1 2020-04-15 16:00:00 230 200
2 2020-04-15 17:00:00 200 300
3 2020-04-15 18:00:00 100 300
4 2020-04-15 19:00:00 100 300
Someone gave me this for reference:
times = pd.to_datetime(df.timestamp_col)
df.groupby([times.hour, times.minute]).value_col.sum()

First, setting the index is necessary. Then use the resample method of the time-series index:
df.set_index('sm_log_time').resample('H').sum().reset_index()
Result:
sm_log_time id score 1 score 2
0 2020-04-15 15:00:00 3 30 30
1 2020-04-15 16:00:00 7 20 20
Please note that id is also summed; you may drop it if it is not needed. The row number of the resulting dataframe is now in the index.
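If the summed id column is not wanted, a minimal variation (a sketch assuming the column names from the question and that sm_log_time is already a datetime column) is to select only the score columns before aggregating:
# keep only the score columns so id is not summed
hourly = (df.set_index('sm_log_time')
            .resample('H')[['score 1', 'score 2']]
            .sum()
            .reset_index())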


Conduct the calculation only when the date value is valid

I have a data frame dft:
Date Total Value
02/01/2022 2
03/01/2022 6
N/A 4
03/11/2022 4
03/15/2022 4
05/01/2022 4
For each date in the data frame, I want to calculate how many days it is from today and add these calculated values in a new column called Days.
I have tried the following code:
newdft = []
for _, item in dft.iterrows():
    temp = item.copy()
    timediff = datetime.now() - datetime.strptime(temp["Date"], "%m/%d/%Y")
    temp["Days"] = timediff.days
    newdft.append(temp)
But the third date value is N/A, which caused an error. What should I add to my code so that I conduct the calculation only when the date value is valid?
I would convert the whole Date column to a datetime dtype using pd.to_datetime() with errors set to 'coerce', which replaces the 'N/A' string with NaT (Not a Time):
dft['Date'] = pd.to_datetime(dft['Date'], errors='coerce')
So the column will now look like this:
0 2022-02-01
1 2022-03-01
2 NaT
3 2022-03-11
4 2022-03-15
5 2022-05-01
Name: Date, dtype: datetime64[ns]
You can then subtract that column from the current date in one go, which will automatically ignore the NaT value, and assign this as a new column:
dft['Days'] = datetime.now() - dft['Date']
This will make dft look like below:
Date Total Value Days
0 2022-02-01 2 148 days 15:49:03.406935
1 2022-03-01 6 120 days 15:49:03.406935
2 NaT 4 NaT
3 2022-03-11 4 110 days 15:49:03.406935
4 2022-03-15 4 106 days 15:49:03.406935
5 2022-05-01 4 59 days 15:49:03.406935
If you just want the number of days instead of 59 days 15:49:03.406935, you can do the below instead:
dft['Days'] = (datetime.now() - dft['Date']).dt.days
Which will give you:
Date Total Value Days
0 2022-02-01 2 148.0
1 2022-03-01 6 120.0
2 NaT 4 NaN
3 2022-03-11 4 110.0
4 2022-03-15 4 106.0
5 2022-05-01 4 59.0
In contrast to Emi OB's excellent answer, if you did actually need to process individual values, it's usually easier to use apply to create a new Series from an existing one. You'd just need to filter out 'N/A'.
dft['Days'] = (
    dft['Date']
    [lambda d: d != 'N/A']
    .apply(lambda d: (datetime.now() - datetime.strptime(d, "%m/%d/%Y")).days)
)
Result:
Date Total Value Days
0 02/01/2022 2 148.0
1 03/01/2022 6 120.0
2 N/A 4 NaN
3 03/11/2022 4 110.0
4 03/15/2022 4 106.0
5 05/01/2022 4 59.0
And for what it's worth, another option is date.today() instead of datetime.now():
.apply(lambda d: date.today() - datetime.strptime(d, "%m/%d/%Y").date())
And the result is a timedelta instead of float:
Date Total Value Days
0 02/01/2022 2 148 days
1 03/01/2022 6 120 days
2 N/A 4 NaT
3 03/11/2022 4 110 days
4 03/15/2022 4 106 days
5 05/01/2022 4 59 days
See also: How do I select rows from a DataFrame based on column values?
Following up on the excellent answer by Emi OB, I would suggest using mask() to update the dataframe without type coercion.
import datetime
import pandas as pd

dft = pd.DataFrame({'Date': ['02/01/2022',
                             '03/01/2022',
                             None,
                             '03/11/2022',
                             '03/15/2022',
                             '05/01/2022'],
                    'Total Value': [2, 6, 4, 4, 4, 4]})

dft['today'] = datetime.datetime.now()
dft['Days'] = 0
dft['Days'].mask(dft['Date'].notna(),
                 (dft['today'] - pd.to_datetime(dft['Date'])).dt.days,
                 axis=0, inplace=True)
dft.drop(columns=['today'], inplace=True)
This would result in integer values in the Days column:
Date Total Value Days
0 02/01/2022 2 148
1 03/01/2022 6 120
2 None 4 None
3 03/11/2022 4 110
4 03/15/2022 4 106
5 05/01/2022 4 59

Adding rows per group using Moving Average in Python

I have the following problem:
I have a table in which the customer number, the date and the sales are stored. The customer transactions are available on the first of each month. It may happen that a customer has not placed an order every month. The table looks like this:
ID  Date        Revenues
1   2021-05-01  100
1   2021-07-01  200
1   2021-08-01  100
1   2021-10-01  200
2   2021-12-01  300
2   2022-01-01  400
Now I want to add to each group a certain number of rows whose dates run from today a certain number of months into the future. The ID should remain the same, the date should increase by one month per row, and the revenue column should be filled using the moving average method.
The table should look like this:
ID  Date        Revenues
1   2021-05-01  100
1   2021-07-01  200
1   2021-08-01  100
1   2021-10-01  200
1   2022-04-01  150
1   2022-05-01  150
2   2021-12-01  300
2   2022-01-01  400
2   2022-04-01  350
2   2022-05-01  350
How can I solve this problem?
Thank you for your help :)
If I understand you correctly:
df["Date"] = pd.to_datetime(df["Date"])
def reindex(x):
min_date = x["Date"].min()
r = pd.date_range(min_date, min_date + pd.DateOffset(months=3), freq="MS")
x = x.set_index("Date").reindex(r)
x["ID"] = x["ID"].ffill().bfill()
x["Revenues"] = x["Revenues"].fillna(x["Revenues"].mean())
return x
x = df.groupby("ID", as_index=False).apply(reindex).droplevel(0).reset_index()
x = x.rename(columns={"index": "Date"})
print(x.to_markdown())
Prints:
    Date                  ID  Revenues
0   2021-12-01 00:00:00    1       100
1   2022-01-01 00:00:00    1       200
2   2022-02-01 00:00:00    1       150
3   2022-03-01 00:00:00    1       150
4   2021-12-01 00:00:00    2       300
5   2022-01-01 00:00:00    2       400
6   2022-02-01 00:00:00    2       350
7   2022-03-01 00:00:00    2       350
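For what it's worth, here is a hedged sketch that instead appends a fixed number of future months starting from the current month, which is closer to the "from today" wording in the question. extend_group and n_months are made-up names for illustration, and a plain group mean stands in for the moving average:
import pandas as pd

def extend_group(g, n_months=2):
    # append n_months future rows starting from the first day of the current month
    start = pd.Timestamp.today().normalize().replace(day=1)
    future = pd.date_range(start, periods=n_months, freq="MS")
    new = pd.DataFrame({"ID": g["ID"].iloc[0],
                        "Date": future,
                        "Revenues": g["Revenues"].mean()})  # simple mean as the fill value
    return pd.concat([g, new], ignore_index=True)

df["Date"] = pd.to_datetime(df["Date"])
out = df.groupby("ID", group_keys=False).apply(extend_group).reset_index(drop=True)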

pandas get a sum column for next 7 days

I want to get, for each row, the sum of the value column over the next 7 days.
My dataframe:
date value
0 2021-04-29 1
1 2021-05-03 2
2 2021-05-06 1
3 2021-05-15 1
4 2021-05-17 2
5 2021-05-18 1
6 2021-05-21 2
7 2021-05-22 5
8 2021-05-24 4
I tried to make a new column that contains the date 7 days from the current date:
df['temp'] = df['date'] + timedelta(days=7)
then calculate the sum of values within that date range:
df['next_7days'] = df[(df.date > df.date) & (df.date <= df.temp)].value.sum()
But this gives me all 0s as the answer.
intended result:
date value next_7days
0 2021-04-29 1 3
1 2021-05-03 2 1
2 2021-05-06 1 0
3 2021-05-15 1 10
4 2021-05-17 2 12
5 2021-05-18 1 11
6 2021-05-21 2 9
7 2021-05-22 5 4
8 2021-05-24 4 0
The method I am using currently is quite tedious; are there any better methods to get the intended result?
With a list comprehension:
tomorrow_dates = df.date + pd.Timedelta("1 day")
next_week_dates = df.date + pd.Timedelta("7 days")
df["next_7days"] = [df.value[df.date.between(tomorrow, next_week)].sum()
                    for tomorrow, next_week in zip(tomorrow_dates, next_week_dates)]
Here we first compute and store tomorrow's and next week's dates for every row. Then we zip them together and use pd.Series.between to get a boolean mask marking the dates that fall within the desired range, use boolean indexing to pick out the corresponding values, and sum them. This is done for each date pair.
to get
date value next_7days
0 2021-04-29 1 3
1 2021-05-03 2 1
2 2021-05-06 1 0
3 2021-05-15 1 10
4 2021-05-17 2 12
5 2021-05-18 1 11
6 2021-05-21 2 9
7 2021-05-22 5 4
8 2021-05-24 4 0
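For larger frames, a vectorized alternative is possible; here is a rough sketch using numpy.searchsorted over a cumulative sum (it assumes the date column is already sorted ascending and, like the intended result, excludes the current row's date from its own window):
import numpy as np

dates = df["date"].to_numpy()
vals = df["value"].to_numpy()

# prefix sums: cum[i] is the sum of the first i values
cum = np.concatenate(([0], vals.cumsum()))
# window (date, date + 7 days]: strictly after the current row's date, up to 7 days ahead
left = np.searchsorted(dates, dates, side="right")
right = np.searchsorted(dates, dates + np.timedelta64(7, "D"), side="right")
df["next_7days"] = cum[right] - cum[left]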

Add a dummy indicating change between consecutive rows in grouped dataframe

This is a follow up to my previous question here.
Assume a dataset like this (which originally is read in from a .csv):
data = pd.DataFrame({'id': [1, 2, 3, 1, 2, 3, 1, 2, 3],
                     'time': ['2017-01-01 12:00:00', '2017-01-01 12:00:00', '2017-01-01 12:00:00',
                              '2017-01-01 12:10:00', '2017-01-01 12:10:00', '2017-01-01 12:10:00',
                              '2017-01-01 12:20:00', '2017-01-01 12:20:00', '2017-01-01 12:20:00'],
                     'values': [10, 11, 12, 10, 12, 13, 10, 13, 13]})
data = data.set_index('id')
=>
id time values
0 1 2017-01-01 12:00:00 10
1 2 2017-01-01 12:00:00 11
2 3 2017-01-01 12:00:00 12
3 1 2017-01-01 12:10:00 10
4 2 2017-01-01 12:10:00 12
5 3 2017-01-01 12:10:00 13
6 1 2017-01-01 12:20:00 10
7 2 2017-01-01 12:20:00 13
8 3 2017-01-01 12:20:00 13
Time is identical for all IDs in each observation period. The series goes on like that for many observations, i.e. every ten minutes.
Previously, I learned how to get the total number of changes in values between two consecutive periods for each id:
data.groupby(data.index).values.apply(lambda x: (x != x.shift()).sum() - 1)
This works great and is really fast. Now, I am interested in adding a new column to the df. It should be a dummy indicating for each row in values if there was a change between the current and previous row. Thus, the result would be as follows:
=>
id time values change
0 1 2017-01-01 12:00:00 10 0
1 2 2017-01-01 12:00:00 11 0
2 3 2017-01-01 12:00:00 12 0
3 1 2017-01-01 12:10:00 10 0
4 2 2017-01-01 12:10:00 12 1
5 3 2017-01-01 12:10:00 13 1
6 1 2017-01-01 12:20:00 10 0
7 2 2017-01-01 12:20:00 13 1
8 3 2017-01-01 12:20:00 13 0
After fiddling around, I came up with a solution. However, it is really slow. It won't run on my actual dataset which is rather big:
def calc_change(x):
    x = (x != x.shift())
    x.iloc[0,] = False
    return x

changes = data.groupby(data.index, as_index=False).values.apply(
    calc_change).reset_index().iloc[:, 2]
data = data.sort_index().reset_index()
data.loc[changes, 'change'] = 1
data = data.fillna(0)
I'm sure there are better solutions and I appreciate any help!
You can use this solution if your id column is not set as index.
data['change'] = data.groupby(['id'])['values'].apply(lambda x: x.diff() > 0).astype(int)
You get
id time values change
0 1 2017-01-01 12:00:00 10 0
1 2 2017-01-01 12:00:00 11 0
2 3 2017-01-01 12:00:00 12 0
3 1 2017-01-01 12:10:00 10 0
4 2 2017-01-01 12:10:00 12 1
5 3 2017-01-01 12:10:00 13 1
6 1 2017-01-01 12:20:00 10 0
7 2 2017-01-01 12:20:00 13 1
8 3 2017-01-01 12:20:00 13 0
With id as index,
data = data.sort_index()
data['change'] = data.groupby(data.index)['values'].apply(lambda x: x.diff() > 0).astype(int)
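For what it's worth, a small variation (a sketch, assuming id is a regular column and rows are ordered by time within each id) that uses transform and also flags decreases, since diff() > 0 only catches increases:
# flag any change (up or down) relative to the previous row within each id
data['change'] = (data.groupby('id')['values']
                      .transform(lambda s: s.diff().fillna(0).ne(0))
                      .astype(int))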

How to do/workaround a conditional join in python Pandas?

I am trying to calculate time-based aggregations in Pandas based on date values stored in a separate table.
The top of the first table table_a looks like this:
COMPANY_ID DATE MEASURE
1 2010-01-01 00:00:00 10
1 2010-01-02 00:00:00 10
1 2010-01-03 00:00:00 10
1 2010-01-04 00:00:00 10
1 2010-01-05 00:00:00 10
Here is the code to create the table:
table_a = pd.concat(
    [pd.DataFrame({'DATE': pd.date_range("01/01/2010", "12/31/2010", freq="D"),
                   'COMPANY_ID': 1, 'MEASURE': 10}),
     pd.DataFrame({'DATE': pd.date_range("01/01/2010", "12/31/2010", freq="D"),
                   'COMPANY_ID': 2, 'MEASURE': 10})])
The second table, table_b, looks like this:
COMPANY END_DATE
1 2010-03-01 00:00:00
1 2010-06-02 00:00:00
2 2010-03-01 00:00:00
2 2010-06-02 00:00:00
and the code to create it is:
table_b = pd.DataFrame({'END_DATE': pd.to_datetime(['03/01/2010', '06/02/2010', '03/01/2010', '06/02/2010']),
                        'COMPANY': (1, 1, 2, 2)})
I want to be able to get the sum of the 'measure' column for each 'COMPANY_ID' for each 30-day period prior to the 'END_DATE' in table_b.
This is (I think) the SQL equivalent:
select
    b.COMPANY,
    b.END_DATE,
    sum(a.MEASURE) as MEASURE_TO_END_DATE
from table_a a, table_b b
where a.COMPANY_ID = b.COMPANY and
      a.DATE < b.END_DATE and
      a.DATE > b.END_DATE - 30
group by b.COMPANY, b.END_DATE;
Well, I can think of a few ways:
1. Essentially blow up the dataframe by just merging on the exact field (company), then filter on the 30-day windows after the merge. Should be fast but could use lots of memory.
2. Move the merging and filtering on the 30-day window into a groupby(). Results in a merge for each group, so slower but should use less memory.
Option #1
Suppose your data looks like the following (I expanded your sample data):
print(df)
company date measure
0 0 2010-01-01 10
1 0 2010-01-15 10
2 0 2010-02-01 10
3 0 2010-02-15 10
4 0 2010-03-01 10
5 0 2010-03-15 10
6 0 2010-04-01 10
7 1 2010-03-01 5
8 1 2010-03-15 5
9 1 2010-04-01 5
10 1 2010-04-15 5
11 1 2010-05-01 5
12 1 2010-05-15 5
print(windows)
company end_date
0 0 2010-02-01
1 0 2010-03-15
2 1 2010-04-01
3 1 2010-05-15
Create a beginning date for the 30 day windows:
windows['beg_date'] = (windows['end_date'].values.astype('datetime64[D]') -
                       np.timedelta64(30, 'D'))
print(windows)
company end_date beg_date
0 0 2010-02-01 2010-01-02
1 0 2010-03-15 2010-02-13
2 1 2010-04-01 2010-03-02
3 1 2010-05-15 2010-04-15
Now do a merge and then select the rows where date falls between beg_date and end_date:
df = df.merge(windows, on='company', how='left')
df = df[(df.date >= df.beg_date) & (df.date <= df.end_date)]
print(df)
company date measure end_date beg_date
2 0 2010-01-15 10 2010-02-01 2010-01-02
4 0 2010-02-01 10 2010-02-01 2010-01-02
7 0 2010-02-15 10 2010-03-15 2010-02-13
9 0 2010-03-01 10 2010-03-15 2010-02-13
11 0 2010-03-15 10 2010-03-15 2010-02-13
16 1 2010-03-15 5 2010-04-01 2010-03-02
18 1 2010-04-01 5 2010-04-01 2010-03-02
21 1 2010-04-15 5 2010-05-15 2010-04-15
23 1 2010-05-01 5 2010-05-15 2010-04-15
25 1 2010-05-15 5 2010-05-15 2010-04-15
You can compute the 30 day window sums by grouping on company and end_date:
print(df.groupby(['company', 'end_date']).sum())
measure
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
Option #2
Move all merging into a groupby. This should be better on memory but I would think much slower:
windows['beg_date'] = (windows['end_date'].values.astype('datetime64[D]') -
                       np.timedelta64(30, 'D'))

def cond_merge(g, windows):
    g = g.merge(windows, on='company', how='left')
    g = g[(g.date >= g.beg_date) & (g.date <= g.end_date)]
    return g.groupby('end_date')['measure'].sum()

print(df.groupby('company').apply(cond_merge, windows))
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
Another option
Now if your windows never overlap (like in the example data), you could do something like the following as an alternative that doesn't blow up the dataframe but is pretty fast:
windows['date'] = windows['end_date']
df = df.merge(windows,on=['company','date'],how='outer')
print(df)
company date measure end_date
0 0 2010-01-01 10 NaT
1 0 2010-01-15 10 NaT
2 0 2010-02-01 10 2010-02-01
3 0 2010-02-15 10 NaT
4 0 2010-03-01 10 NaT
5 0 2010-03-15 10 2010-03-15
6 0 2010-04-01 10 NaT
7 1 2010-03-01 5 NaT
8 1 2010-03-15 5 NaT
9 1 2010-04-01 5 2010-04-01
10 1 2010-04-15 5 NaT
11 1 2010-05-01 5 NaT
12 1 2010-05-15 5 2010-05-15
This merge essentially inserts your window end dates into the dataframe; backfilling the end dates (by group) then gives you a structure from which you can easily create your summation windows:
df['end_date'] = df.groupby('company')['end_date'].apply(lambda x: x.bfill())
print(df)
company date measure end_date
0 0 2010-01-01 10 2010-02-01
1 0 2010-01-15 10 2010-02-01
2 0 2010-02-01 10 2010-02-01
3 0 2010-02-15 10 2010-03-15
4 0 2010-03-01 10 2010-03-15
5 0 2010-03-15 10 2010-03-15
6 0 2010-04-01 10 NaT
7 1 2010-03-01 5 2010-04-01
8 1 2010-03-15 5 2010-04-01
9 1 2010-04-01 5 2010-04-01
10 1 2010-04-15 5 2010-05-15
11 1 2010-05-01 5 2010-05-15
12 1 2010-05-15 5 2010-05-15
df = df[df.end_date.notnull()]
df['beg_date'] = (df['end_date'].values.astype('datetime64[D]') -
                  np.timedelta64(30, 'D'))
print(df)
company date measure end_date beg_date
0 0 2010-01-01 10 2010-02-01 2010-01-02
1 0 2010-01-15 10 2010-02-01 2010-01-02
2 0 2010-02-01 10 2010-02-01 2010-01-02
3 0 2010-02-15 10 2010-03-15 2010-02-13
4 0 2010-03-01 10 2010-03-15 2010-02-13
5 0 2010-03-15 10 2010-03-15 2010-02-13
7 1 2010-03-01 5 2010-04-01 2010-03-02
8 1 2010-03-15 5 2010-04-01 2010-03-02
9 1 2010-04-01 5 2010-04-01 2010-03-02
10 1 2010-04-15 5 2010-05-15 2010-04-15
11 1 2010-05-01 5 2010-05-15 2010-04-15
12 1 2010-05-15 5 2010-05-15 2010-04-15
df = df[(df.date >= df.beg_date) & (df.date <= df.end_date)]
print(df.groupby(['company', 'end_date']).sum())
measure
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
Another alternative is to resample your first dataframe to daily data, then compute rolling sums with a 30-day window, and finally select the end dates you are interested in. This could be quite memory intensive too.
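A rough sketch of that resample-plus-rolling idea (hedged: it assumes table_a and table_b as defined in the question, and its window is inclusive of END_DATE, so the totals can differ by a day from the strict inequalities above):
import pandas as pd

daily = (table_a.set_index('DATE')
                .groupby('COMPANY_ID')['MEASURE']
                .resample('D').sum())                    # daily totals per company

rolling = (daily.groupby(level='COMPANY_ID')
                .rolling(30, min_periods=1)
                .sum()
                .droplevel(0))                           # index: (COMPANY_ID, DATE)

# look up the trailing 30-day sum at each END_DATE in table_b
result = rolling.reindex(pd.MultiIndex.from_frame(table_b[['COMPANY', 'END_DATE']]))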
There is a very easy and practical way (maybe the only direct way) to do a conditional join in pandas. Since pandas has no built-in conditional join, you will need an additional library: pandasql.
Install the pandasql library from pip using the command pip install pandasql. This library allows you to manipulate pandas dataframes using SQL queries.
import pandas as pd
from pandasql import sqldf
df = pd.read_excel(r'play_data.xlsx')
df
id Name Amount
0 A001 A 100
1 A002 B 110
2 A003 C 120
3 A005 D 150
Now let's do a conditional join to compare the Amounts of the IDs:
# Make your pysqldf object:
pysqldf = lambda q: sqldf(q, globals())

# Write your query in SQL syntax; here you can use df as a normal SQL table
cond_join = '''
    select
        df_left.*,
        df_right.*
    from df as df_left
    join df as df_right
    on
        df_left.[Amount] > (df_right.[Amount] + 10)
'''

# Now, get your query results as a dataframe using the sqldf object that you created
pysqldf(cond_join)
id Name Amount id Name Amount
0 A003 C 120 A001 A 100
1 A005 D 150 A001 A 100
2 A005 D 150 A002 B 110
3 A005 D 150 A003 C 120
I know I am late to the party, but here are two solutions. The first one is rather simple but not very general, while the second one should be more universal. In what follows I assume that the table_a and table_b objects are already defined as in the original question.
Solution 1
This one is simple. Here we just do a left join and append END_DATE values to table_a and then filter out the rows we are not interested in. So the memory overhead here is size of table_a * number of unique END_DATE values per COMPANY in table_b.
table_c = table_a.merge(table_b, left_on="COMPANY_ID", right_on="COMPANY")
table_c[(table_c["DATE"] - table_c["END_DATE"]).dt.days.between(-30, 0)] \
    .groupby(["COMPANY", "END_DATE"])["MEASURE"].sum()
## OUTPUT:
COMPANY END_DATE
1 2010-03-01 310
2010-06-02 310
2 2010-03-01 310
2010-06-02 310
Name: MEASURE, dtype: int64
This is quite fast, but could blow up the size of table_a significantly if table_b contained many values.
Solution 2
This one is a bit smarter and operates row-by-row, where to each row in table_b we explicitly map only the relevant subset of table_a. Thus, we get only the data we need, so there is no memory overhead (beyond the memory needed to represent the raw records over which we want to sum).
table_b.groupby(["COMPANY", "END_DATE"]) \
    .apply(lambda g: table_a[
        (table_a["COMPANY_ID"] == g["COMPANY"].iloc[0]) &
        ((table_a["DATE"] - g["END_DATE"].iloc[0]).dt.days.between(-30, 0))
    ]["MEASURE"].sum())
## OUTPUT:
COMPANY END_DATE
1 2010-03-01 310
2010-06-02 310
2 2010-03-01 310
2010-06-02 310
dtype: int64
Note that in this case, for each group we use only the relevant subset of table_a, which is much more memory efficient. The price is that this solution seems to be about 2-3 times slower (but in general still relatively fast; ~2-3ms runtime on your data).
I am using karl D's data.
conditional_join from pyjanitor offers a way to deal with non-equi joins efficiently:
# pip install pyjanitor
import pandas as pd
import janitor
(df
 .conditional_join(
     windows,  # series or dataframe to join to
     # variable arguments
     # left column, right column, join operator
     ('company', 'company', '=='),
     ('date', 'beg_date', '>='),
     ('date', 'end_date', '<='),
     # for more performance, depending on the data size,
     # you can turn on use_numba
     use_numba=False,
     # filter for specific columns, if required
     df_columns=['company', 'measure'],
     right_columns='end_date')
 .groupby(['company', 'end_date'])
 .sum()
)
measure
company end_date
0 2010-02-01 20
2010-03-15 30
1 2010-04-01 10
2010-05-15 15
