Modify hour in datetimeindex in pandas dataframe - python

I have a dataframe that looks like this:
master.head(5)
Out[73]:
hour price
day
2014-01-01 0 1066.24
2014-01-01 1 1032.11
2014-01-01 2 1028.53
2014-01-01 3 963.57
2014-01-01 4 890.65
In [74]: master.index.dtype
Out[74]: dtype('<M8[ns]')
What I need to do is update the hour in the index with the hour in the column, but the following approaches don't work:
In [82]: master.index.hour = master.index.hour(master['hour'])
TypeError: 'numpy.ndarray' object is not callable
In [83]: master.index.hour = [master.index.hour(master.iloc[i,0]) for i in len(master.index.hour)]
TypeError: 'int' object is not iterable
How to proceed?

IIUC I think you want to construct a TimedeltaIndex:
In [89]:
df.index += pd.TimedeltaIndex(df['hour'], unit='h')
df
Out[89]:
hour price
2014-01-01 00:00:00 0 1066.24
2014-01-01 01:00:00 1 1032.11
2014-01-01 02:00:00 2 1028.53
2014-01-01 03:00:00 3 963.57
2014-01-01 04:00:00 4 890.65
Just to compare against using apply:
In [87]:
%timeit df.index + pd.TimedeltaIndex(df['hour'], unit='h')
%timeit df.index + df['hour'].apply(lambda x: pd.Timedelta(x, 'h'))
1000 loops, best of 3: 291 µs per loop
1000 loops, best of 3: 1.18 ms per loop
You can see that using a TimedeltaIndex is significantly faster.
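For completeness, here is a self-contained version of the fix against the question's frame (the data is reconstructed from the question, and pd.to_timedelta is used as the modern equivalent of the TimedeltaIndex constructor):
import pandas as pd

# Rebuild the example frame: a date-only index plus an 'hour' column
master = pd.DataFrame({'hour': [0, 1, 2, 3, 4],
                       'price': [1066.24, 1032.11, 1028.53, 963.57, 890.65]},
                      index=pd.to_datetime(['2014-01-01'] * 5))
master.index.name = 'day'

# Shift each midnight timestamp by that row's hour
master.index += pd.to_timedelta(master['hour'].to_numpy(), unit='h')
print(master.head())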

An alternative is to rebuild the index from strings, concatenating each formatted date with the hour column (the .0 in the format string accounts for a float hour column):
master.index = pd.to_datetime(master.index.map(lambda x: x.strftime('%Y-%m-%d')) + '-' + master.hour.map(str), format='%Y-%m-%d-%H.0')

Related

Extract day and month from a datetime object

I have a column with dates in string format '2017-01-01'. Is there a way to extract day and month from it using pandas?
I have converted the column to datetime dtype but haven't figured out the latter part:
df['Date'] = pd.to_datetime(df['Date'], format='%Y-%m-%d')
df.dtypes:
Date datetime64[ns]
print(df)
Date
0 2017-05-11
1 2017-05-12
2 2017-05-13
Use dt.day and dt.month via the Series.dt accessor:
df = pd.DataFrame({'date':pd.date_range(start='2017-01-01',periods=5)})
df.date.dt.month
Out[164]:
0 1
1 1
2 1
3 1
4 1
Name: date, dtype: int64
df.date.dt.day
Out[165]:
0 1
1 2
2 3
3 4
4 5
Name: date, dtype: int64
You can also do it with dt.strftime:
df.date.dt.strftime('%m')
Out[166]:
0 01
1 01
2 01
3 01
4 01
Name: date, dtype: object
A simple form:
df['MM-DD'] = df['date'].dt.strftime('%m-%d')
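Putting the accessors together on the same example frame (a quick, self-contained sketch):
import pandas as pd

df = pd.DataFrame({'date': pd.date_range(start='2017-01-01', periods=3)})
df['month'] = df['date'].dt.month               # integer month
df['day'] = df['date'].dt.day                   # integer day of month
df['MM-DD'] = df['date'].dt.strftime('%m-%d')   # zero-padded string
print(df)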
Use dt to get the datetime attributes of the column.
In [60]: df = pd.DataFrame({'date': [datetime.datetime(2018,1,1),datetime.datetime(2018,1,2),datetime.datetime(2018,1,3),]})
In [61]: df
Out[61]:
date
0 2018-01-01
1 2018-01-02
2 2018-01-03
In [63]: df['day'] = df.date.dt.day
In [64]: df['month'] = df.date.dt.month
In [65]: df
Out[65]:
date day month
0 2018-01-01 1 1
1 2018-01-02 2 1
2 2018-01-03 3 1
Timing the methods provided:
Using apply:
In [217]: %timeit(df['date'].apply(lambda d: d.day))
The slowest run took 33.66 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 210 µs per loop
Using dt.date:
In [218]: %timeit(df.date.dt.day)
10000 loops, best of 3: 127 µs per loop
Using dt.strftime:
In [219]: %timeit(df.date.dt.strftime('%d'))
The slowest run took 40.92 times longer than the fastest. This could mean that an intermediate result is being cached.
1000 loops, best of 3: 284 µs per loop
We can see that dt.day is the fastest.
This should do it:
df['day'] = df['Date'].apply(lambda r:r.day)
df['month'] = df['Date'].apply(lambda r:r.month)
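Note that the vectorized .dt accessor versions are noticeably faster on large frames, as the timings in the previous answer show.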

Pandas DataFrame index - month and day only

I'd like to have a DataFrame with a DatetimeIndex, but I only want the months and days; not years. I'd like it to look like the following:
(index) (values)
01-01 56.2
01-02 59.6
...
01-31 62.3
02-01 61.6
...
12-31 44.0
I've tried creating a date_range but this seems to require the year input, so I can't seem to figure out how to achieve the above.
You can do it this way:
In [78]: df = pd.DataFrame({'val':np.random.rand(10)}, index=pd.date_range('2000-01-01', freq='10D', periods=10))
In [79]: df
Out[79]:
val
2000-01-01 0.422023
2000-01-11 0.215800
2000-01-21 0.186017
2000-01-31 0.804285
2000-02-10 0.014004
2000-02-20 0.296644
2000-03-01 0.048683
2000-03-11 0.239037
2000-03-21 0.129382
2000-03-31 0.963110
In [80]: df.index.dtype_str
Out[80]: 'datetime64[ns]'
In [81]: df.index.dtype
Out[81]: dtype('<M8[ns]')
In [82]: df.index = df.index.strftime('%m-%d')
In [83]: df
Out[83]:
val
01-01 0.422023
01-11 0.215800
01-21 0.186017
01-31 0.804285
02-10 0.014004
02-20 0.296644
03-01 0.048683
03-11 0.239037
03-21 0.129382
03-31 0.963110
In [84]: df.index.dtype_str
Out[84]: 'object'
In [85]: df.index.dtype
Out[85]: dtype('O')
NOTE: the index dtype is a string (object) now
PS: of course, you can do it in one step if you need:
In [86]: pd.date_range('2000-01-01', freq='10D', periods=5).strftime('%m-%d')
Out[86]:
array(['01-01', '01-11', '01-21', '01-31', '02-10'],
dtype='<U5')
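Note that because strftime('%m-%d') zero-pads both fields, the resulting string labels still sort in calendar order.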

Convert Pandas object to multiple columns

I have imported the following data within a CSV file:
01/01/2014 00:00:00, 50.031
01/01/2014 00:00:01, 50.026
01/01/2014 00:00:02, 50.019
01/01/2014 00:00:03, 50.008
etc
I successfully have converted the "object" in the first column to a datetime using:
df= pd.read_csv("myfile.csv",names=['DateTime','Freq'])
df['DateTime'] = pd.to_datetime(df['DateTime'], coerce=True)
The problem is, it's a very big CSV file (35 million rows) and it's dog slow. Is there a more efficient ways of converting the first column to datetime?
I would also like to split the date and the time into separate columns.
Yes, you can do it in the read_csv() call itself: use the parse_dates argument and pass it the list of columns to parse as dates. Example -
df= pd.read_csv("myfile.csv",names=['DateTime','Freq'],parse_dates=['DateTime'])
Demo -
In [41]: import io
In [42]: s = """Date, SomeNum
....: 01/01/2014 00:00:00, 50.031
....: 01/01/2014 00:00:01, 50.026
....: 01/01/2014 00:00:02, 50.019
....: 01/01/2014 00:00:03, 50.008"""
In [43]: df = pd.read_csv(io.StringIO(s),parse_dates=['Date'])
In [44]: df
Out[44]:
Date SomeNum
0 2014-01-01 00:00:00 50.031
1 2014-01-01 00:00:01 50.026
2 2014-01-01 00:00:02 50.019
3 2014-01-01 00:00:03 50.008
In [45]: df['Date']
Out[45]:
0 2014-01-01 00:00:00
1 2014-01-01 00:00:01
2 2014-01-01 00:00:02
3 2014-01-01 00:00:03
Name: Date, dtype: datetime64[ns]
Timing results of different methods for a csv with 1 million records -
In [92]: def func1():
....: df = pd.read_csv('a.csv',names=['DateTime','Freq'])
....: df['DateTime'] = pd.to_datetime(df['DateTime'], coerce=True,format='%d/%m/%Y %H:%M:%S')
....: return df
....:
In [96]: def func2():
....: return pd.read_csv('a.csv',names=['DateTime','Freq'],parse_dates=['DateTime'])
....:
In [97]: %timeit func1()
1 loops, best of 3: 6.5 s per loop
In [98]: %timeit func2()
1 loops, best of 3: 652 ms per loop
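The question also asks about splitting the date and the time into separate columns, which the answer doesn't cover; a minimal sketch using the .dt accessor (column names are mine). Note also that coerce=True in the snippets above is the old spelling of what is now errors='coerce' in pd.to_datetime.
import pandas as pd

df = pd.read_csv('myfile.csv', names=['DateTime', 'Freq'], parse_dates=['DateTime'])
df['Date'] = df['DateTime'].dt.date   # datetime.date objects (dtype: object)
df['Time'] = df['DateTime'].dt.time   # datetime.time objects (dtype: object)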

Merge multiple dataframes with non-unique indices

I have a bunch of pandas time series. Here is an example for illustration (real data has ~ 1 million entries in each series):
>>> for s in series:
...     print s.head()
...     print
2014-01-01 01:00:00 -0.546404
2014-01-01 01:00:00 -0.791217
2014-01-01 01:00:01 0.117944
2014-01-01 01:00:01 -1.033161
2014-01-01 01:00:02 0.013415
2014-01-01 01:00:02 0.368853
2014-01-01 01:00:02 0.380515
2014-01-01 01:00:02 0.976505
2014-01-01 01:00:02 0.881654
dtype: float64
2014-01-01 01:00:00 -0.111314
2014-01-01 01:00:01 0.792093
2014-01-01 01:00:01 -1.367650
2014-01-01 01:00:02 -0.469194
2014-01-01 01:00:02 0.569606
2014-01-01 01:00:02 -1.777805
dtype: float64
2014-01-01 01:00:00 -0.108123
2014-01-01 01:00:00 -1.518526
2014-01-01 01:00:00 -1.395465
2014-01-01 01:00:01 0.045677
2014-01-01 01:00:01 1.614789
2014-01-01 01:00:01 1.141460
2014-01-01 01:00:02 1.365290
dtype: float64
The times in each series are not unique. For example, the last series has 3 values at 2014-01-01 01:00:00. The second series has only one value at that time. Also, not all the times need to be present in all the series.
My goal is to create a merged DataFrame with times that are a union of all the times in the individual time series. Each timestamp should be repeated as many times as needed. So, if a timestamp occurs (2, 0, 3, 4) times in the series above, the timestamp should be repeated 4 times (the maximum of the frequencies) in the resulting DataFrame. The values of each column should be "filled forward".
As an example, the result of merging the above should be:
c0 c1 c2
2014-01-01 01:00:00 -0.546404 -0.111314 -0.108123
2014-01-01 01:00:00 -0.791217 -0.111314 -1.518526
2014-01-01 01:00:00 -0.791217 -0.111314 -1.395465
2014-01-01 01:00:01 0.117944 0.792093 0.045677
2014-01-01 01:00:01 -1.033161 -1.367650 1.614789
2014-01-01 01:00:01 -1.033161 -1.367650 1.141460
2014-01-01 01:00:02 0.013415 -0.469194 1.365290
2014-01-01 01:00:02 0.368853 0.569606 1.365290
2014-01-01 01:00:02 0.380515 -1.777805 1.365290
2014-01-01 01:00:02 0.976505 -1.777805 1.365290
2014-01-01 01:00:02 0.881654 -1.777805 1.365290
To give an idea of size and "uniqueness" in my real data:
>>> [len(s.index.unique()) for s in series]
[48617, 48635, 48720, 48620]
>>> len(times)
51043
>>> [len(s) for s in series]
[1143409, 1143758, 1233646, 1242864]
Here is what I have tried:
I can create a union of all the unique times:
uniques = [s.index.unique() for s in series]
times = uniques[0].union_many(uniques[1:])
I can now index each series using times:
series[0].loc[times]
But that seems to repeat the values for each item in times, which is not what I want.
I can't reindex() the series using times because the index for each series is not unique.
I can do it by a slow Python loop or do it in Cython, but is there a "pandas-only" way to do what I want to do?
I created my example series using the following code:
import random
import numpy
import pandas

def make_series(n=3, rep=(0, 5)):
    times = pandas.date_range('2014/01/01 01:00:00', periods=n, freq='S')
    reps = [random.randint(*rep) for _ in xrange(n)]
    dates = []
    values = numpy.random.randn(numpy.sum(reps))
    for date, rep in zip(times, reps):
        dates.extend([date] * rep)
    return pandas.Series(data=values, index=dates)

series = [make_series() for _ in xrange(3)]
This is very nearly a concat:
In [11]: s0 = pd.Series([1, 2, 3], name='s0')
In [12]: s1 = pd.Series([1, 4, 5], name='s1')
In [13]: pd.concat([s0, s1], axis=1)
Out[13]:
s0 s1
0 1 1
1 2 4
2 3 5
However, concat cannot deal with duplicate indices (it's ambiguous how they should merge, and in your case you don't want to merge them in the "ordinary" way - as combinations)...
I think you are going to want to use a groupby:
In [21]: s0 = pd.Series([1, 2, 3], [0, 0, 1], name='s0')
In [22]: s1 = pd.Series([1, 4, 5], [0, 1, 1], name='s1')
Note: I've appended a faster method which works for int-like dtypes (like datetime64).
We want to add a MultiIndex level of the cumcounts for each item, that way we trick the Index into becoming unique:
In [23]: s0.groupby(level=0).cumcount()
Out[23]:
0 0
0 1
1 0
dtype: int64
Note: I can't seem to append a column to the index without it being a DataFrame...
In [24]: df0 = pd.DataFrame(s0).set_index(s0.groupby(level=0).cumcount(), append=True)
In [25]: df1 = pd.DataFrame(s1).set_index(s1.groupby(level=0).cumcount(), append=True)
In [26]: df0
Out[26]:
s0
0 0 1
1 2
1 0 3
Now we can go ahead and concat these:
In [27]: res = pd.concat([df0, df1], axis=1)
In [28]: res
Out[28]:
s0 s1
0 0 1 1
1 2 NaN
1 0 3 4
1 NaN 5
If you want to drop the cumcount level:
In [29]: res.index = res.index.droplevel(1)
In [30]: res
Out[30]:
s0 s1
0 1 1
0 2 NaN
1 3 4
1 NaN 5
Now you can ffill to get the desired result... (if you were concerned about forward filling of different datetimes you could groupby the index and ffill).
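Wrapping the whole recipe into a helper for the question's list of series might look like this (a sketch; the function name and column labels are mine):
import pandas as pd

def union_concat(series_list):
    frames = []
    for i, s in enumerate(series_list):
        # number each duplicate timestamp 0, 1, 2, ... so the index becomes unique
        occurrence = s.groupby(level=0).cumcount()
        frames.append(s.rename('c%d' % i).to_frame().set_index(occurrence, append=True))
    res = pd.concat(frames, axis=1).sort_index()
    res.index = res.index.droplevel(1)   # drop the cumcount level again
    return res.ffill()                   # forward-fill the gaps, as the question asks

merged = union_concat(series)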
If the upper bound on repetitions in each group was reasonable (I'm picking 1000, but much higher is still "reasonable"!), you could use a Float64Index as follows (and certainly it seems more elegant):
s0.index = s0.index + (s0.groupby(level=0)._cumcount_array() / 1000.)
s1.index = s1.index + (s1.groupby(level=0)._cumcount_array() / 1000.)
res = pd.concat([s0, s1], axis=1)
res.index = res.index.values.astype('int64')
Note: I'm cheekily using a private method here which returns the cumcount as a numpy array...
Note2: This is pandas 0.14, in 0.13 you have to pass a numpy array to _cumcount_array e.g. np.arange(len(s0))), pre-0.13 you're out of luck - there's no cumcount.
How about this: convert to dataframes with labeled columns first, then merge them on their indexes.
s1 = pd.Series(index=['4/4/14', '4/4/14', '4/5/14'],
               data=[12.2, 0.0, 12.2])
s2 = pd.Series(index=['4/5/14', '4/8/14'],
               data=[14.2, 3.0])
d1 = s1.to_frame('a')
d2 = s2.to_frame('b')
final_df = pd.merge(d1, d2, left_index=True, right_index=True, how='outer')
This gives me
a b
4/4/14 12.2 NaN
4/4/14 0.0 NaN
4/5/14 12.2 14.2
4/8/14 NaN 3.0

Get MM-DD-YYYY from pandas Timestamp

Dates seem to be a tricky thing in Python, and I am having a lot of trouble simply stripping the date out of a pandas Timestamp. I would like to get from 2013-09-29 02:34:44 to simply 09-29-2013.
I have a dataframe with a column Created_date:
Name: Created_Date, Length: 1162549, dtype: datetime64[ns]
I have tried applying the .date() method on this Series, eg: df.Created_Date.date(), but I get the error AttributeError: 'Series' object has no attribute 'date'
Can someone help me out?
Map over the elements:
In [239]: from operator import methodcaller
In [240]: s = Series(date_range(Timestamp('now'), periods=2))
In [241]: s
Out[241]:
0 2013-10-01 00:24:16
1 2013-10-02 00:24:16
dtype: datetime64[ns]
In [238]: s.map(lambda x: x.strftime('%d-%m-%Y'))
Out[238]:
0 01-10-2013
1 02-10-2013
dtype: object
In [242]: s.map(methodcaller('strftime', '%d-%m-%Y'))
Out[242]:
0 01-10-2013
1 02-10-2013
dtype: object
You can get the raw datetime.date objects by calling the date() method of the Timestamp elements that make up the Series:
In [249]: s.map(methodcaller('date'))
Out[249]:
0 2013-10-01
1 2013-10-02
dtype: object
In [250]: s.map(methodcaller('date')).values
Out[250]:
array([datetime.date(2013, 10, 1), datetime.date(2013, 10, 2)], dtype=object)
Yet another way you can do this is by calling the unbound Timestamp.date method:
In [273]: s.map(Timestamp.date)
Out[273]:
0 2013-10-01
1 2013-10-02
dtype: object
This method is the fastest, and IMHO the most readable. Timestamp is accessible in the top-level pandas module, like so: pandas.Timestamp. I've imported it directly for expository purposes.
The date attribute of DatetimeIndex objects does something similar, but returns a numpy object array instead:
In [243]: index = DatetimeIndex(s)
In [244]: index
Out[244]:
<class 'pandas.tseries.index.DatetimeIndex'>
[2013-10-01 00:24:16, 2013-10-02 00:24:16]
Length: 2, Freq: None, Timezone: None
In [246]: index.date
Out[246]:
array([datetime.date(2013, 10, 1), datetime.date(2013, 10, 2)], dtype=object)
For larger datetime64[ns] Series objects, calling Timestamp.date is faster than operator.methodcaller, which is slightly faster than a lambda:
In [263]: f = methodcaller('date')
In [264]: flam = lambda x: x.date()
In [265]: fmeth = Timestamp.date
In [266]: s2 = Series(date_range('20010101', periods=1000000, freq='T'))
In [267]: s2
Out[267]:
0 2001-01-01 00:00:00
1 2001-01-01 00:01:00
2 2001-01-01 00:02:00
3 2001-01-01 00:03:00
4 2001-01-01 00:04:00
5 2001-01-01 00:05:00
6 2001-01-01 00:06:00
7 2001-01-01 00:07:00
8 2001-01-01 00:08:00
9 2001-01-01 00:09:00
10 2001-01-01 00:10:00
11 2001-01-01 00:11:00
12 2001-01-01 00:12:00
13 2001-01-01 00:13:00
14 2001-01-01 00:14:00
...
999985 2002-11-26 10:25:00
999986 2002-11-26 10:26:00
999987 2002-11-26 10:27:00
999988 2002-11-26 10:28:00
999989 2002-11-26 10:29:00
999990 2002-11-26 10:30:00
999991 2002-11-26 10:31:00
999992 2002-11-26 10:32:00
999993 2002-11-26 10:33:00
999994 2002-11-26 10:34:00
999995 2002-11-26 10:35:00
999996 2002-11-26 10:36:00
999997 2002-11-26 10:37:00
999998 2002-11-26 10:38:00
999999 2002-11-26 10:39:00
Length: 1000000, dtype: datetime64[ns]
In [269]: timeit s2.map(f)
1 loops, best of 3: 1.04 s per loop
In [270]: timeit s2.map(flam)
1 loops, best of 3: 1.1 s per loop
In [271]: timeit s2.map(fmeth)
1 loops, best of 3: 968 ms per loop
Keep in mind that one of the goals of pandas is to provide a layer on top of numpy so that (most of the time) you don't have to deal with the low level details of the ndarray. So getting the raw datetime.date objects in an array is of limited use since they don't correspond to any numpy.dtype that is supported by pandas (pandas only supports datetime64[ns] [that's nanoseconds] dtypes). That said, sometimes you need to do this.
Maybe this only came in recently, but there are built-in methods for this. Try:
In [27]: s = pd.Series(pd.date_range(pd.Timestamp('now'), periods=2))
In [28]: s
Out[28]:
0 2016-02-11 19:11:43.386016
1 2016-02-12 19:11:43.386016
dtype: datetime64[ns]
In [29]: s.dt.to_pydatetime()
Out[29]:
array([datetime.datetime(2016, 2, 11, 19, 11, 43, 386016),
datetime.datetime(2016, 2, 12, 19, 11, 43, 386016)], dtype=object)
You can use .dt.date on a datetime64[ns] column of the dataframe, e.g.:
df['Created_date'] = df['Created_date'].dt.date
Input dataframe, named test_df:
print(test_df)
Result:
Created_date
0 2015-03-04 15:39:16
1 2015-03-22 17:36:49
2 2015-03-25 22:08:45
3 2015-03-16 13:45:20
4 2015-03-19 18:53:50
Checking dtypes:
print(test_df.dtypes)
Result:
Created_date datetime64[ns]
dtype: object
Extracting date and updating Created_date column:
test_df['Created_date'] = test_df['Created_date'].dt.date
print(test_df)
Result:
Created_date
0 2015-03-04
1 2015-03-22
2 2015-03-25
3 2015-03-16
4 2015-03-19
Well, I would do it this way:
pdTime = pd.date_range(timeStamp, periods=len(years), freq="D")
pdTime[i].strftime('%m-%d-%Y')
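This assumes timeStamp, years, and i already exist in the caller's scope; a self-contained version using the question's example timestamp might be:
import pandas as pd

pdTime = pd.date_range('2013-09-29 02:34:44', periods=3, freq='D')
print(pdTime[0].strftime('%m-%d-%Y'))   # 09-29-2013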
