I have a Dataframe, df, with the following column:
df['ArrivalDate'] =
...
936 2012-12-31
938 2012-12-29
965 2012-12-31
966 2012-12-31
967 2012-12-31
968 2012-12-31
969 2012-12-31
970 2012-12-29
971 2012-12-31
972 2012-12-29
973 2012-12-29
...
The elements of the column are pandas.tslib.Timestamp.
I just want to keep the year and month. I thought there would be a simple way to do it, but I can't figure it out.
Here's what I've tried:
df['ArrivalDate'].resample('M', how = 'mean')
I got the following error:
Only valid with DatetimeIndex or PeriodIndex
Then I tried:
df['ArrivalDate'].apply(lambda(x):x[:-2])
I got the following error:
'Timestamp' object has no attribute '__getitem__'
Any suggestions?
Edit: I sort of figured it out.
df.index = df['ArrivalDate']
Then, I can resample another column using the index.
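Roughly, that workaround looks like this ('Passengers' is just a made-up stand-in for whichever column I resample):
# 'Passengers' is a hypothetical value column, only here for illustration
df = df.set_index('ArrivalDate')
monthly_mean = df['Passengers'].resample('M').mean()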
But I'd still like a method for reconfiguring the entire column. Any ideas?
If you want new columns showing year and month separately you can do this:
df['year'] = pd.DatetimeIndex(df['ArrivalDate']).year
df['month'] = pd.DatetimeIndex(df['ArrivalDate']).month
or...
df['year'] = df['ArrivalDate'].dt.year
df['month'] = df['ArrivalDate'].dt.month
Then you can combine them or work with them just as they are.
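If you then want a single label out of the two new columns, one minimal sketch (the 'YYYY-MM' string format is just an assumption about what you want) is:
# combine the separate year/month columns into a 'YYYY-MM' string label
df['year_month'] = df['year'].astype(str) + '-' + df['month'].astype(str).str.zfill(2)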
The df['date_column'] has to be in date time format.
df['month_year'] = df['date_column'].dt.to_period('M')
You could also use D for day, 2M for two months, etc. for different sampling intervals. If you have time-series data with timestamps, you can use more granular intervals such as 45Min for 45 minutes or 15Min for 15 minutes.
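For illustration, here is a small sketch of what a few of those frequencies look like on the same column (exact frequency aliases can vary a bit across pandas versions):
df['day'] = df['date_column'].dt.to_period('D')            # daily periods
df['two_months'] = df['date_column'].dt.to_period('2M')    # two-month periods
df['quarter_hour'] = df['date_column'].dt.to_period('15T') # 15-minute periods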
You can directly access the year and month attributes, or request a datetime.datetime:
In [15]: t = pandas.tslib.Timestamp.now()
In [16]: t
Out[16]: Timestamp('2014-08-05 14:49:39.643701', tz=None)
In [17]: t.to_pydatetime() #datetime method is deprecated
Out[17]: datetime.datetime(2014, 8, 5, 14, 49, 39, 643701)
In [18]: t.day
Out[18]: 5
In [19]: t.month
Out[19]: 8
In [20]: t.year
Out[20]: 2014
One way to combine year and month is to make an integer encoding them, such as: 201408 for August, 2014. Along a whole column, you could do this as:
df['YearMonth'] = df['ArrivalDate'].map(lambda x: 100*x.year + x.month)
or many variants thereof.
I'm not a big fan of doing this, though, since it makes date alignment and arithmetic painful later and especially painful for others who come upon your code or data without this same convention. A better way is to choose a day-of-month convention, such as final non-US-holiday weekday, or first day, etc., and leave the data in a date/time format with the chosen date convention.
The calendar module is useful for obtaining the number value of certain days such as the final weekday. Then you could do something like:
import calendar
import datetime
df['AdjustedDateToEndOfMonth'] = df['ArrivalDate'].map(
    lambda x: datetime.datetime(
        x.year,
        x.month,
        max(calendar.monthcalendar(x.year, x.month)[-1][:5])
    )
)
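To see why max(calendar.monthcalendar(...)[-1][:5]) yields the last weekday of the month, here is a small standalone check (August 2014 chosen arbitrarily):
import calendar

# monthcalendar returns one list per week, Monday first, with 0 for days
# that fall outside the month
weeks = calendar.monthcalendar(2014, 8)
print(weeks[-1])           # last week of August 2014: [25, 26, 27, 28, 29, 30, 31]
print(weeks[-1][:5])       # Monday..Friday slots: [25, 26, 27, 28, 29]
print(max(weeks[-1][:5]))  # 29, the last weekday of August 2014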
If you happen to be looking for a way to solve the simpler problem of just formatting the datetime column as a string, you can make use of the strftime function from the datetime.datetime class, like this:
In [5]: df
Out[5]:
date_time
0 2014-10-17 22:00:03
In [6]: df.date_time
Out[6]:
0 2014-10-17 22:00:03
Name: date_time, dtype: datetime64[ns]
In [7]: df.date_time.map(lambda x: x.strftime('%Y-%m-%d'))
Out[7]:
0 2014-10-17
Name: date_time, dtype: object
If you want the unique month-year pair, using apply is pretty sleek.
df['mnth_yr'] = df['date_column'].apply(lambda x: x.strftime('%B-%Y'))
This outputs the month-year in one column.
Don't forget to convert the column to datetime first; I generally forget.
df['date_column'] = pd.to_datetime(df['date_column'])
SINGLE LINE: Adding a column with 'year-month' pairs:
(pd.to_datetime first converts the column dtype to datetime before the operation)
df['yyyy-mm'] = pd.to_datetime(df['ArrivalDate']).dt.strftime('%Y-%m')
Accordingly for an extra 'year' or 'month' column:
df['yyyy'] = pd.to_datetime(df['ArrivalDate']).dt.strftime('%Y')
df['mm'] = pd.to_datetime(df['ArrivalDate']).dt.strftime('%m')
Extracting the year from a date such as ['2018-03-04']:
df['Year'] = pd.DatetimeIndex(df['date']).year
df['Year'] creates a new column. If you want to extract the month instead, just use .month.
You can first convert your date strings with pandas.to_datetime, which gives you access to all of the numpy datetime and timedelta facilities. For example:
df['ArrivalDate'] = pandas.to_datetime(df['ArrivalDate'])
df['Month'] = df['ArrivalDate'].values.astype('datetime64[M]')
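For what it's worth, here is a quick sketch of what that month truncation produces; once assigned back to a DataFrame column, the values show up as first-of-month timestamps:
s = pandas.to_datetime(pandas.Series(['2012-12-31', '2012-12-29']))
print(s.values.astype('datetime64[M]'))
# ['2012-12' '2012-12']  -- month precision; as a DataFrame column these
# become Timestamps for the first day of each month (2012-12-01)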
@KieranPC's solution is the correct approach for Pandas, but it is not easily extensible to arbitrary attributes. For this, you can use getattr within a generator expression and combine using pd.concat:
# input data
list_of_dates = ['2012-12-31', '2012-12-29', '2012-12-30']
df = pd.DataFrame({'ArrivalDate': pd.to_datetime(list_of_dates)})
# define list of attributes required
L = ['year', 'month', 'day', 'dayofweek', 'dayofyear', 'weekofyear', 'quarter']
# define generator expression of series, one for each attribute
date_gen = (getattr(df['ArrivalDate'].dt, i).rename(i) for i in L)
# concatenate results and join to original dataframe
df = df.join(pd.concat(date_gen, axis=1))
print(df)
ArrivalDate year month day dayofweek dayofyear weekofyear quarter
0 2012-12-31 2012 12 31 0 366 1 4
1 2012-12-29 2012 12 29 5 364 52 4
2 2012-12-30 2012 12 30 6 365 52 4
Thanks to jaknap32. I wanted to aggregate the results by year and month, and this worked:
df_join['YearMonth'] = df_join['timestamp'].apply(lambda x:x.strftime('%Y%m'))
Output was neat:
0 201108
1 201108
2 201108
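The aggregation itself was then just a groupby on that column; roughly ('value' is a placeholder for whatever column I was summing):
# count rows per year-month label, or sum a hypothetical 'value' column
monthly_counts = df_join.groupby('YearMonth').size()
monthly_totals = df_join.groupby('YearMonth')['value'].sum()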
There are two steps to extract the year for the whole dataframe without using the apply method.
Step 1
Convert the column to datetime:
df['ArrivalDate']=pd.to_datetime(df['ArrivalDate'], format='%Y-%m-%d')
Step 2
Extract the year or the month using the DatetimeIndex() method:
pd.DatetimeIndex(df['ArrivalDate']).year
df['Month_Year'] = df['Date'].dt.to_period('M')
Result :
Date Month_Year
0 2020-01-01 2020-01
1 2020-01-02 2020-01
2 2020-01-03 2020-01
3 2020-01-04 2020-01
4 2020-01-05 2020-01
df['year_month'] = df.datetime_column.apply(lambda x: str(x)[:7])
This worked fine for me. I didn't think pandas would interpret the resulting strings as dates, but when I did the plot it knew my agenda very well, and the year_month strings were ordered properly... gotta love pandas!
Then I tried:
df['ArrivalDate'].apply(lambda(x):x[:-2])
I think the proper input here should be a string.
df['ArrivalDate'].astype(str).apply(lambda x: x[:-2])
So my code is as follows:
df['Dates'][df['Dates'].index.month == 11]
I was doing a test to see if I could filter the months so it only shows November dates, but this did not work. It gives me the following error: AttributeError: 'Int64Index' object has no attribute 'month'.
If I do
print type(df['Dates'][0])
then I get class 'pandas.tslib.Timestamp', which leads me to believe that the types of objects stored in the dataframe are Timestamp objects. (I'm not sure where the 'Int64Index' is coming from... for the error before)
What I want to do is this: The dataframe column contains dates from the early 2000's to present in the following format: dd/mm/yyyy. I want to filter for dates only between November 15 and March 15, independent of the YEAR. What is the easiest way to do this?
Thanks.
Here is df['Dates'] (with indices):
0 2006-01-01
1 2006-01-02
2 2006-01-03
3 2006-01-04
4 2006-01-05
5 2006-01-06
6 2006-01-07
7 2006-01-08
8 2006-01-09
9 2006-01-10
10 2006-01-11
11 2006-01-12
12 2006-01-13
13 2006-01-14
14 2006-01-15
...
Using pd.to_datetime & dt accessor
The accepted answer is not the "pandas" way to approach this problem.
To select only rows with month 11, use the dt accessor:
# df['Date'] = pd.to_datetime(df['Date']) -- if column is not datetime yet
df = df[df['Date'].dt.month == 11]
The same works for days or years: substitute dt.month with dt.day or dt.year.
Besides those, there are many more; here are a few:
dt.quarter
dt.week
dt.weekday
dt.day_name
dt.is_month_end
dt.is_month_start
dt.is_year_end
dt.is_year_start
For a complete list see the documentation
Map an anonymous function that extracts the month onto the series and compare it to 11 for November.
That will give you a boolean mask. You can then use that mask to filter your dataframe.
nov_mask = df['Dates'].map(lambda x: x.month) == 11
df[nov_mask]
I don't think there is a straightforward way to filter the way you want while ignoring the year, so try this.
nov_mar_series = pd.Series(pd.date_range("2013-11-15", "2014-03-15"))
#create timestamp without year
nov_mar_no_year = nov_mar_series.map(lambda x: x.strftime("%m-%d"))
#add a yearless timestamp to the dataframe
df["no_year"] = df['Date'].map(lambda x: x.strftime("%m-%d"))
no_year_mask = df['no_year'].isin(nov_mar_no_year)
df[no_year_mask]
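Alternatively, here is a sketch that compares month and day directly with the dt accessor, with no helper series; it assumes df['Dates'] has already been converted with pd.to_datetime:
m = df['Dates'].dt.month
d = df['Dates'].dt.day
winter_mask = (((m == 11) & (d >= 15))    # Nov 15 onwards
               | m.isin([12, 1, 2])       # all of Dec, Jan, Feb
               | ((m == 3) & (d <= 15)))  # up to Mar 15
df[winter_mask]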
There are two issues in your code. First, the column reference needs to come after the filtering condition. Second, .month works on a DatetimeIndex, or on a datetime column via the dt accessor, but not the way you combined them. One of the following should work:
df[df.index.month == 11]['Dates']
df[df['Dates'].dt.month == 11]['Dates']
I have a pandas dataframe that contains a year and week column:
year week
2018 18
2019 17
2019 17
I'm trying to combine the year and week columns into a new 'isoweek' column using the isoweek library. I can't seem to figure out how to properly loop through the rows to create the object column. If I do something like:
df['isoweek'] = Week(df['year'],df['week'])
isoweek chokes on the vectorization. I've tried creating a basic list and appending it to my dataframe, like so:
obj_list = []
for i in range(500):
    year = df['year'][i]
    week = df['week'][i]
    w = Week(year, week)
    obj_list.append(w)

df['isoweek'] = obj_list
But I end up with a simple tuple in the column.
The goal is to be able to use some of the isoweek library's operations to calculate date differences, like:
df['isoweek'] - 4
>isoweek.Week(2019, 34)
Is it even possible to store an object like this in a dataframe column? If so, how does one go about it?
As an alternative, you can use the built-in datetime functionality:
df['week_start'] = pd.to_datetime(df['year'].astype(str), format='%Y') + pd.to_timedelta(df['week'].mul(7).astype(str) + ' days')
# Output:
week year week_start
0 18 2018 2018-05-07
1 17 2019 2019-04-30
2 17 2019 2019-04-30
Calculating time differences is pretty straightforward here:
# Choose 7 weeks
n_weeks = pd.to_timedelta(7, unit='W')
# Adding is simple
df['week_start'] + n_weeks
# Output
0 2018-06-25
1 2019-06-18
2 2019-06-18
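Subtraction works just as well; for example, a sketch of a whole-week difference between two datetime columns ('other_week_start' is a made-up second column, purely for illustration):
# whole weeks between two datetime columns
df['weeks_apart'] = (df['week_start'] - df['other_week_start']).dt.days // 7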
For more on this, read: Pandas: How to create a datetime object from Week and Year?
Potentially you could do this
First, set up the example dataframe
from isoweek import Week

df = pd.DataFrame({'year': [2018, 2019, 2019],
                   'week': [18, 17, 17]})
Loop through the dataframe, adding the isoweek to a list
ls_isoweek = []
for row in df.itertuples():
    ls_isoweek.append(Week(row[1], row[2]))
The list looks like this
[isoweek.Week(2018, 18), isoweek.Week(2019, 17), isoweek.Week(2019, 17)]
This list can be accessed thusly
ls_isoweek[0] - 4
Produces this output
isoweek.Week(2018, 14)
However, the list can also be added back to the dataframe if you wish
df['isoweek'] = ls_isoweek
You can then do things like ...
df['isoweek_minus_4'] = df['isoweek'].apply(lambda x: x-4)
Producing a column of isoweek.Week objects, each shifted back by four weeks.
A little late, but if anyone else is still looking to use a solution of this form as I was, you could use lambda functions along with apply. For the dataframe below (with int64 dtypes),
year week
0 2018 18
1 2019 17
2 2019 17
Now we use isoweek to appropriately parse the data,
from isoweek import Week
df.apply(lambda row: Week(row["year"], row["week"]), axis=1)
This produces the output,
0 (2018, 18)
1 (2019, 17)
2 (2019, 17)
dtype: object
You could also identify the (week,year) with a datetime object by combining this approach with this answer https://stackoverflow.com/a/7687085.
df.apply(lambda row: Week(int(row["year"]), int(row["week"])).monday(), axis=1)
The int appears a little redundant there, but pandas by default uses int64 which doesn't appear to function with isoweek correctly. This produces the output,
0 2018-04-30
1 2019-04-22
2 2019-04-22
dtype: object
I want to calculate the difference in days between a pandas date series -
0 2013-02-16
1 2013-01-29
2 2013-02-21
3 2013-02-22
4 2013-03-01
5 2013-03-14
6 2013-03-18
7 2013-03-21
and today's date.
I tried but could not come up with a logical solution.
Please help me with the code. I am new to Python, and I keep running into syntax errors while applying any function.
You could do something like
# generate time data
data = pd.to_datetime(pd.Series(["2018-09-1", "2019-01-25", "2018-10-10"]))
pd.to_datetime("now") > data
returns:
0 False
1 True
2 False
you could then use that to select the data
data[pd.to_datetime("now") > data]
Hope it helps.
Edit: I misread it but you can easily alter this example to calculate the difference:
data - pd.to_datetime("now")
returns:
0 -122 days +13:10:37.489823
1 24 days 13:10:37.489823
2 -83 days +13:10:37.489823
dtype: timedelta64[ns]
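If you want the difference as a plain number of days rather than a timedelta, one way is the dt.days accessor on the result:
# whole-day differences (negative for past dates, positive for future ones)
day_diff = (data - pd.to_datetime("now")).dt.days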
You can try as follows:
>>> from datetime import datetime
>>> df
col1
0 2013-02-16
1 2013-01-29
2 2013-02-21
3 2013-02-22
4 2013-03-01
5 2013-03-14
6 2013-03-18
7 2013-03-21
Make sure to convert the column to datetime:
>>> df['col1'] = pd.to_datetime(df['col1'], infer_datetime_format=True)
Set the current datetime in order to then get the difference:
>>> curr_time = pd.to_datetime("now")
Now get the difference as follows:
>>> df['col1'] - curr_time
0 -2145 days +07:48:48.736939
1 -2163 days +07:48:48.736939
2 -2140 days +07:48:48.736939
3 -2139 days +07:48:48.736939
4 -2132 days +07:48:48.736939
5 -2119 days +07:48:48.736939
6 -2115 days +07:48:48.736939
7 -2112 days +07:48:48.736939
Name: col1, dtype: timedelta64[ns]
With numpy you can solve it as shown in difference-two-dates-days-weeks-months-years-pandas-python-2. Bottom line:
df['diff_days'] = df['First dates column'] - df['Second Date column']
# for days use 'D' for weeks use 'W', for month use 'M' and for years use 'Y'
df['diff_days']=df['diff_days']/np.timedelta64(1,'D')
print(df)
if you want days as int and not as float use
df['diff_days']=df['diff_days']//np.timedelta64(1,'D')
From the pandas docs under Converting To Timestamps you will find:
"Converting to Timestamps To convert a Series or list-like object of date-like objects e.g. strings, epochs, or a mixture, you can use the to_datetime function"
I haven't used pandas before but this suggests your pandas date series (a list-like object) is iterable and each element of this series is an instance of a class which has a to_datetime function.
Assuming my assumptions are correct, the following function would take such a list and return a list of timedeltas (a timedelta being an object representing the difference between two datetime objects).
from datetime import datetime

def convert(pandas_series):
    # get the current date
    now = datetime.now()
    # Use a list comprehension and the pandas to_datetime method to calculate timedeltas.
    return [now - pandas_element.to_datetime() for pandas_element in pandas_series]

# assuming 'some_pandas_series' is a list-like pandas series object
list_of_timedeltas = convert(some_pandas_series)
I have a large pandas DataFrame (around 1050000 entries). One of the columns is of type datetime. I want to extract year, month and weekday. The problem is that the code shown below is extremely slow:
df['Year'] = pd.DatetimeIndex(df.Date).year
df['Month'] = pd.DatetimeIndex(df.Date).month
df['Weekday'] = pd.DatetimeIndex(df.Date).weekday
Update:
The data looks like this:
Id DayOfWeek Date
0 1 5 2015-07-31
1 2 4 2015-07-30
2 3 3 2015-07-29
3 4 2 2015-07-28
4 5 1 2015-07-27
If I do this way:
df = pd.read_csv("data.csv", parse_dates=[2])
df['Year'] = pd.to_datetime(df['Date']).year
df['Month'] = pd.to_datetime(df['Date']).month
df['Weekday'] = pd.to_datetime(df['Date']).weekday
then the error is:
AttributeError: 'Series' object has no attribute 'year'
You state that your column is already of the datetime64 type. In that case you can simply use the .dt accessor to expose the methods and attributes associated with the datetime values in the column:
df['Year'] = df.Date.dt.year
This will be much quicker than writing pd.DatetimeIndex(df.Date).year which creates a whole new index object first.
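For completeness, the same accessor covers all three columns from the question:
df['Year'] = df.Date.dt.year
df['Month'] = df.Date.dt.month
df['Weekday'] = df.Date.dt.weekday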
It seems like you may be parsing the dates each time rather than all at once. Also, using the to_datetime() method may be faster.
Try
df['parsedDate'] = pd.to_datetime(df['Date'])
df['Year'] = df['parsedDate'].dt.year
df['Month'] = df['parsedDate'].dt.month
df['Weekday'] = df['parsedDate'].dt.weekday