I have a DataFrame with various columns; two of them are 'Date of birth' and 'Date of murder'.
Date of birth looks like:
Date of murder looks like:
I would like to subtract each row's DOB from its DOM so I can get how old someone was when they committed the murder. Is there any way to go about this?
Now it appears to be working (mostly). I made the following edits:
df_1['Date of birth'] = pd.to_datetime(df_1['Date of birth'], errors='coerce')
df_1['Date of Murder revised'] = pd.to_datetime(df_1['Date of Murder revised'], errors='coerce')
df_1['Date at Murder'] = (df_1['Date of birth'] - df_1['Date of Murder revised'])
print(df_1['Date at Murder'].head(10))
This gives me the following output:
0 -12395 days
1 -9941 days
2 -7651 days
3 NaT
4 -9313 days
5 -9184 days
I would, however, like to get years, but when I run the same code as above like so:
df_1['Date at Murder'] = (df_1['Date of birth'] - df_1['Date of Murder revised']) / 365.25
I get this output:
0 -34 days +01:32:38.932238
1 -28 days +18:47:33.388090
2 -21 days +01:15:53.593429
3 NaT
4 -26 days +12:03:26.981519
If DOM and DOB are datetime type then this should do the job:
df['Age'] = df.DOM - df.DOB
If the data type is not datetime, convert them first:
df.DOM = pandas.to_datetime(df.DOM)
df.DOB = pandas.to_datetime(df.DOB)
df['Age'] = (df.DOM - df.DOB).dt.days/365.25
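For example, here is a minimal, self-contained sketch of the same idea (the sample dates are made up, and DOB/DOM stand in for your 'Date of birth' and 'Date of Murder revised' columns):
import pandas as pd

df = pd.DataFrame({
    "DOB": ["1960-05-01", "1975-12-24"],
    "DOM": ["1994-04-10", "2003-02-02"],
})
df["DOB"] = pd.to_datetime(df["DOB"], errors="coerce")
df["DOM"] = pd.to_datetime(df["DOM"], errors="coerce")
# subtract DOB from DOM (not the other way round) and convert days to years
df["Age"] = (df["DOM"] - df["DOB"]).dt.days / 365.25
print(df["Age"])
Note that the subtraction order matters: DOM - DOB gives positive ages, which is why the earlier output showed negative day counts.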
My DataFrame looks like this:
id  date        value
1   2021-07-16  100
2   2021-09-15   20
1   2021-04-10   50
1   2021-08-27   30
2   2021-07-22   15
2   2021-07-22   25
1   2021-06-30   40
3   2021-10-11  150
2   2021-08-03   15
1   2021-07-02   90
I want to group by the id and return the difference of total value between two 90-day periods.
Specifically, I want the sum of values over the last 90 days counted from today, and the sum over the 90 days ending 30 days ago.
For example, considering today is 2021-10-13, I would like to get:
the sum of all values per id between 2021-10-13 and 2021-07-15
the sum of all values per id between 2021-09-13 and 2021-06-15
And finally, subtract them to get the variation.
I've already managed to calculate it, by creating separated temporary dataframes containing only the dates in those periods of 90 days, grouping by id, and then merging these temp dataframes into a final one.
But I guess there should be an easier or simpler way to do it. I'd appreciate any help!
Btw, sorry if the explanation was a little messy.
If I understood correctly, you need something like this:
import pandas as pd
import datetime
## Calculate the dates that we are going to need.
today = datetime.datetime.now()
delta = datetime.timedelta(days=120)
# Date 120 days ago
hundredTwentyDaysAgo = today - delta
delta = datetime.timedelta(days=90)
# Date 90 days ago
ninetyDaysAgo = today - delta
delta = datetime.timedelta(days=30)
# Date 30 days ago
thirtyDaysAgo = today - delta
## Initializing an example df.
df = pd.DataFrame({"id":[1,2,1,1,2,2,1,3,2,1],
"date": ["2021-07-16", "2021-09-15", "2021-04-10", "2021-08-27", "2021-07-22", "2021-07-22", "2021-06-30", "2021-10-11", "2021-08-03", "2021-07-02"],
"value": [100,20,50,30,15,25,40,150,15,90]})
## Casting date column
df['date'] = pd.to_datetime(df['date']).dt.date
grouped = df.groupby('id')
# Sum of last 90 days per id
ninetySum = grouped.apply(lambda x: x[x['date'] >= ninetyDaysAgo.date()]['value'].sum())
# Sum of last 90 days, starting from 30 days ago per id
hundredTwentySum = grouped.apply(lambda x: x[(x['date'] >= hundredTwentyDaysAgo.date()) & (x['date'] <= thirtyDaysAgo.date())]['value'].sum())
The output of ninetySum - hundredTwentySum is:
id
1 -130
2 20
3 150
dtype: int64
You can double check to make sure these are the numbers you wanted by printing ninetySum and hundredTwentySum variables.
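As an alternative (a hedged sketch reusing df, ninetyDaysAgo, hundredTwentyDaysAgo and thirtyDaysAgo from the snippet above), the same two sums can be built with boolean masks and plain groupby sums, which avoids the apply calls:
recent = df[df['date'] >= ninetyDaysAgo.date()].groupby('id')['value'].sum()
earlier = df[(df['date'] >= hundredTwentyDaysAgo.date()) &
             (df['date'] <= thirtyDaysAgo.date())].groupby('id')['value'].sum()
# fill_value=0 keeps ids that only appear in one of the two windows
variation = recent.sub(earlier, fill_value=0)
print(variation)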
I have a df
date
2021-03-12
2021-03-17
...
2022-05-21
2022-08-17
I am trying to add a column year_week, but my year starts at 2021-06-28, which is the start of the week that contains the first day of July.
I tried:
from datetime import datetime, timedelta

df['date'] = pd.to_datetime(df['date'])
df['year_week'] = (df['date'] - timedelta(days=datetime(2021, 6, 24).timetuple().tm_yday)).dt.isocalendar().week
I played around with the timedelta days value so that 2021-06-28 has a value of 1.
But then I got problems with earlier dates and with dates exceeding my start date + 1 year:
2021-03-12 has a value of 38
2022-08-17 has a value of 8
So it looks like the valid period runs from 2021-06-28 for one year.
date year_week
2021-03-12 38 # LY38
2021-03-17 39 # LY39
2021-06-28 1 # correct
...
2022-05-21 47 # correct
2022-08-17 8 # NY8
Is there a way to get around this? As I am aggregating the data by year week, I get incorrect results due to the past and upcoming dates. I would want negative values for the days before 2021-06-28, or LY38 denoting that it's the year week of the last year, and accordingly year weeks of 52+ or NY8 denoting that this is the 8th week of the next year.
Here is a way; I added two dates more than a year away. You take the isocalendar of the difference between the date column and the day of year of your specific date. Then you can select the different scenarios depending on the year difference from your specific date, and use np.select for the different result formats.
import numpy as np
import pandas as pd

# dummy dataframe
df = pd.DataFrame(
{'date': ['2020-03-12', '2021-03-12', '2021-03-17', '2021-06-28',
'2022-05-21', '2022-08-17', '2023-08-17']
}
)
# define start date
d = pd.to_datetime('2021-6-24')
# remove the number of days of the year from each date
s = (pd.to_datetime(df['date']) - pd.Timedelta(days=d.day_of_year)).dt.isocalendar()
# get the difference in year
m = (s['year'].astype('int32') - d.year)
# all condition of result depending on year difference
conds = [m.eq(0), m.eq(-1), m.eq(1), m.lt(-1), m.gt(1)]
choices = ['', 'LY','NY',(m+1).astype(str)+'LY', '+'+(m-1).astype(str)+'NY']
# create the column
df['res'] = np.select(conds, choices) + s['week'].astype(str)
print(df)
date res
0 2020-03-12 -1LY38
1 2021-03-12 LY38
2 2021-03-17 LY39
3 2021-06-28 1
4 2022-05-21 47
5 2022-08-17 NY8
6 2023-08-17 +1NY8
I think pandas period_range can be of some help:
pd.Series(pd.period_range("6/28/2017", freq="W", periods=52))  # set periods to the number of weeks you want
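Alternatively, if the goal is a week index that goes negative before 2021-06-28 and keeps counting past 52 afterwards, a simple arithmetic sketch (not using period_range; the date column name is taken from the question) would be:
import pandas as pd

df['date'] = pd.to_datetime(df['date'])
start = pd.Timestamp('2021-06-28')
# week 1 starts at 2021-06-28; earlier dates get 0, -1, ... and later dates keep counting past 52
df['year_week'] = (df['date'] - start).dt.days // 7 + 1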
I have a Pandas dataframe, which looks like below
I want to create a new column that tells the exact date based on the information in all the above columns. The code should look something like this:
df['Date'] = pd.to_datetime(df['Month']+df['WeekOfMonth']+df['DayOfWeek']+df['Year'])
I was able to find a workaround for your case. You will need to define the dictionaries for the months and the days of the week.
month = {"Jan":"01", "Feb":"02", "March":"03", "Apr": "04", "May":"05", "Jun":"06", "Jul":"07", "Aug":"08", "Sep":"09", "Oct":"10", "Nov":"11", "Dec":"12"}
week = {"Monday":1,"Tuesday":2,"Wednesday":3,"Thursday":4,"Friday":5,"Saturday":6,"Sunday":7}
With these dictionaries, the transformation that I used with a custom dataframe was:
rows = [["Dec",5,"Wednesday", "1995"],
["Jan",3,"Wednesday","2013"]]
df = pd.DataFrame(rows, columns=["Month","Week","Weekday","Year"])
df['Date'] = (df["Year"] + "-" + df["Month"].map(month) + "-" + (df["Week"].apply(lambda x: (x - 1)*7) + df["Weekday"].map(week).apply(int) ).apply(str)).astype('datetime64[ns]')
However, you have to be careful. With some of the data that you posted as an example, there were dates that exceed the valid date range. For example, for
row = ["Oct",5,"Friday","2018"]
the date produced is 2018-10-33. I recommend using some logic to filter your data in order to avoid this kind of problem.
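One hedged way to do that filtering (reusing the month and week dictionaries and the column names from the custom dataframe above) is to build the date string first and let pandas turn impossible dates such as 2018-10-33 into NaT instead of raising:
date_str = (df["Year"] + "-" + df["Month"].map(month) + "-"
            + (df["Week"].apply(lambda x: (x - 1) * 7)
               + df["Weekday"].map(week)).astype(str))
# invalid day numbers become NaT and can be dropped or inspected afterwards
df["Date"] = pd.to_datetime(date_str, errors="coerce")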
Let's approach it in 3 steps as follows:
Get the date of month start Month_Start from Year and Month
Calculate the date offsets DateOffset relative to Month_Start from WeekOfMonth and DayOfWeek
Get the actual date Date from Month_Start and DateOffset
Here's the code:
import time

# Step 1: month start date from Year and Month
df['Month_Start'] = pd.to_datetime(df['Year'].astype(str) + df['Month'] + '01', format="%Y%b%d")
# Step 2: day offset relative to Month_Start from WeekOfMonth and DayOfWeek
df['DateOffset'] = (df['WeekOfMonth'] - 1) * 7 + df['DayOfWeek'].map(lambda x: time.strptime(x, '%A').tm_wday) - df['Month_Start'].dt.dayofweek
# Step 3: actual date from Month_Start and DateOffset
df['Date'] = df['Month_Start'] + pd.to_timedelta(df['DateOffset'], unit='D')
Output:
Month WeekOfMonth DayOfWeek Year Month_Start DateOffset Date
0 Dec 5 Wednesday 1995 1995-12-01 26 1995-12-27
1 Jan 3 Wednesday 2013 2013-01-01 15 2013-01-16
2 Oct 5 Friday 2018 2018-10-01 32 2018-11-02
3 Jun 2 Saturday 1980 1980-06-01 6 1980-06-07
4 Jan 5 Monday 1976 1976-01-01 25 1976-01-26
The Date column now contains the dates derived from the information from other columns.
You can remove the working interim columns, if you like, as follows:
df = df.drop(['Month_Start', 'DateOffset'], axis=1)
I have a csv-file: https://data.rivm.nl/covid-19/COVID-19_aantallen_gemeente_per_dag.csv
I want to use it to provide insight into the corona deaths per week.
df = pd.read_csv("covid.csv", error_bad_lines=False, sep=";")
df = df.loc[df['Deceased'] > 0]
df["Date_of_publication"] = pd.to_datetime(df["Date_of_publication"])
df["Week"] = df["Date_of_publication"].dt.isocalendar().week
df["Year"] = df["Date_of_publication"].dt.year
df = df[["Week", "Year", "Municipality_name", "Deceased"]]
df = df.groupby(by=["Week", "Year", "Municipality_name"]).agg({"Deceased" : "sum"})
df = df.sort_values(by=["Year", "Week"])
print(df)
Everything seems to be working fine except for the first 3 days of 2021. The first 3 days of 2021 are part of the last week (53) of 2020: http://week-number.net/calendar-with-week-numbers-2021.html.
When I print the dataframe this is the result:
53 2021 Winterswijk 1
Woudenberg 1
Zaanstad 1
Zeist 2
Zutphen 1
So basically what I'm looking for is a way where this line returns the year of the week number and not the year of the date:
df["Year"] = df["Date_of_publication"].dt.year
You can use dt.isocalendar().year to setup df["Year"]:
df["Year"] = df["Date_of_publication"].dt.isocalendar().year
With this, you will get year 2020 for the date 2021-01-01 but year 2021 again for the date 2021-01-04.
This is just similar to how you used dt.isocalendar().week for setting up df["Week"]. Since they are both based on the same tuple (year, week, day) returned by dt.isocalendar(), they will always be in sync.
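For instance, a small sketch that derives both columns from the same isocalendar() result so they can never drift apart:
iso = df["Date_of_publication"].dt.isocalendar()
df["Week"] = iso["week"]
df["Year"] = iso["year"]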
Demo
date_s = pd.Series(pd.date_range(start='2021-01-01', periods=5, freq='1D'))
date_s
0 2021-01-01
1 2021-01-02
2 2021-01-03
3 2021-01-04
4 2021-01-05
date_s.dt.isocalendar()
year week day
0 2020 53 5
1 2020 53 6
2 2020 53 7
3 2021 1 1
4 2021 1 2
You can simply subtract the two dates and then divide the days attribute of the timedelta object by 7.
For example, this is how far we are into the current year (dividing its days by 7 gives the current week number):
import datetime as dt

time_delta = (dt.datetime.today() - dt.datetime(2021, 1, 1))
The output is a datetime timedelta object
datetime.timedelta(days=75, seconds=84904, microseconds=144959)
For your problem, you'd do something like this (using the same reference date as in the example above):
time_delta = (df["Date_of_publication"] - dt.datetime(2021, 1, 1)).dt.days // 7
The output is the number of whole weeks elapsed since the reference date for each Date_of_publication.
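A quick check of that arithmetic on a tiny hand-made series (the real column would be Date_of_publication):
import pandas as pd

s = pd.Series(pd.to_datetime(["2021-01-01", "2021-01-20", "2021-03-17"]))
weeks_elapsed = (s - pd.Timestamp(2021, 1, 1)).dt.days // 7
print(weeks_elapsed.tolist())  # [0, 2, 10]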
This code is used to find late deliveries for a certain range of time (in this example, the year 2018) and to write the data into a csv file (otdedit.csv). However, although the data is being filtered correctly by year, the rows that are not late deliveries are not being filtered out. My question is: how do I write only the rows with late deliveries into otdedit.csv?
import pandas as pd
from datetime import datetime
from datetime import timedelta
PURCHASE_ORDER = 'Material'
DELIVERY_DATE = 'Delivery Date'
DESIRED_DATE = 'Desired Delivery'
DELAYED_DAYS = 'Delayed Days'
df = pd.read_csv('otd.csv', index_col=PURCHASE_ORDER)
df[DELIVERY_DATE] = pd.to_datetime(df[DELIVERY_DATE])
df[DESIRED_DATE] = pd.to_datetime(df[DESIRED_DATE])
df[DELAYED_DAYS] = df[DELIVERY_DATE] - df[DESIRED_DATE]
late_threshold = pd.Timedelta(days=0)
late_deliveries = df[DELAYED_DAYS] > late_threshold
df[late_deliveries].drop([DELIVERY_DATE, DESIRED_DATE], axis=1)
df['Delivery Date'] = pd.to_datetime(df['Delivery Date'], format='%m/%d/%Y')
df['Desired Delivery'] = pd.to_datetime(df['Desired Delivery'], format='%m/%d/%Y')
df2 = df[(df['Delivery Date'].dt.year >= 2018) & (df['Delivery Date'].dt.year <= 2018)]
df2['Diff Deliv Date'] = df2['Delivery Date'] - df2['Desired Delivery']
df2.to_csv('otdedit.csv', sep=',')
Here is a snapshot of otdedit.csv; notice how the rows with 0 delayed days are still appearing.
(Also, as a side note, I don't know why this program is not filtering the columns either. I only want these 4 columns to appear, but every column from the original file shows up; I have hidden the extra columns for the snapshot.)
Also here is example data if needed:
Material Delivery Date Desired Delivery Delayed Days Diff Deliv Date
20030650 1/3/2018 12/22/2017 12 days 00:00:00.000 12 days 00:00:00.00000
20056352 1/2/2018 12/31/2017 2 days 00:00:00.00000 2 days 00:00:00.000000
20052196 10/18/2018 10/18/2018 0 days 00:00:00.0000 0 days 00:00:00.0000000
20031687 1/3/2018 12/27/2017 7 days 00:00:00.0000 7 days 00:00:00.000000
20031687 2/3/2018 2/3/2018 0 days 00:00:00.00000 0 days 00:00:00.000000
20056053 5/14/2018 3/11/2017 429 days 00:00:00.00 429 days 00:00:00.0000000
20070547 1/2/2018 8/15/2017 140 days 00:00:00.0000 140 days 00:00:00.00
The line
df[late_deliveries].drop([DELIVERY_DATE, DESIRED_DATE], axis=1)
creates a copy of a view into the original dataframe with the given columns dropped; however, you are not assigning this copy to anything, so the original dataframe df is unchanged.
What you could do after creating df2 is:
df2 = df2[df2[DELAYED_DAYS] > late_threshold]
df2.drop([DELIVERY_DATE, DESIRED_DATE], axis=1, inplace=True)
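Putting it together, a sketch of the full corrected flow (reusing the column-name constants and late_threshold defined in the question) could look like:
df2 = df[df[DELIVERY_DATE].dt.year == 2018].copy()
df2[DELAYED_DAYS] = df2[DELIVERY_DATE] - df2[DESIRED_DATE]
late = df2[df2[DELAYED_DAYS] > late_threshold]
# drop the interim date columns and write only the late rows to the output file
late.drop([DELIVERY_DATE, DESIRED_DATE], axis=1).to_csv('otdedit.csv', sep=',')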