I have a CSV file read into a data frame that contains attributes and their values at hourly intervals. Not every attribute is listed each hour. It looks like this:
time attribute value
2019.10.11. 10:00:00 A 10
2019.10.11. 10:00:00 B 20
2019.10.11. 10:00:00 C 10
2019.10.11. 10:00:00 D 13
2019.10.11. 10:00:00 E 12
2019.10.11. 11:00:00 A 11
2019.10.11. 11:00:00 D 8
2019.10.11. 11:00:00 E 17
2019.10.11. 12:00:00 A 13
2019.10.11. 12:00:00 B 24
2019.10.11. 12:00:00 C 11
2019.10.11. 12:00:00 E 17
I would like to convert it to one row per hour, with each attribute name becoming a column that holds its value. If an attribute is not listed for an hour, it should get a zero value (or be left blank, etc.). Does pandas offer a way to automate this with merge, concat, join, or anything else, or do I have to implement it manually?
I would need the dataset in the following format:
time A B C D E
2019.10.11. 10:00:00 10 20 10 13 12
2019.10.11. 11:00:00 11 0 0 8 17
2019.10.11. 12:00:00 13 24 11 0 17
Thank you for reading it!
Use DataFrame.pivot_table:
df = df.pivot_table(index='time', columns='attribute', values='value', fill_value=0)
print(df)
attribute A B C D E
time
2019.10.11. 10:00:00 10 20 10 13 12
2019.10.11. 11:00:00 11 0 0 8 17
2019.10.11. 12:00:00 13 24 11 0 17
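If you then want time back as a regular column, an optional extra step (not part of the question) is:
df = df.reset_index()    # move 'time' from the index back into a column
df.columns.name = None   # drop the leftover 'attribute' axis name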
You could use unstack + fillna:
# data holds the rows shown in the question
df = pd.DataFrame(data=data, columns=['time', 'attribute', 'value'])
print(df.set_index(['time', 'attribute']).unstack(level=-1).fillna(0))
Output
value
attribute A B C D E
time
2019.10.11. 10:00:00 10.0 20.0 10.0 13.0 12.0
2019.10.11. 11:00:00 11.0 0.0 0.0 8.0 17.0
2019.10.11. 12:00:00 13.0 24.0 11.0 0.0 17.0
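The leftover value level in the column MultiIndex can be dropped afterwards; a minimal cleanup sketch:
out = df.set_index(['time', 'attribute']).unstack(level=-1).fillna(0)
out.columns = out.columns.droplevel(0)   # remove the outer 'value' level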
I would like to analyze a dataframe with hourly data for several days, e.g. df:
DATE TIME Threshold Value
2022-11-04 02:00:00 10 9
2022-11-04 03:00:00 11 10
2022-11-04 04:00:00 10 11
2022-11-04 06:00:00 12 11
2022-11-04 05:00:00 12 12
2022-11-04 07:00:00 10 11
2022-11-04 08:00:00 11 10
2022-11-04 09:00:00 11 9
2022-11-04 10:00:00 12 9
2022-11-04 11:00:00 10 10
2022-11-04 12:00:00 10 10
...
2022-11-05 01:00:00 10 9
2022-11-05 02:00:00 11 10
...
Now I would like to examine the data based on threshold/value and time.
Let's say I am interested in the Value of time "08:00:00" if the threshold of the preceding time "04:00:00" was 10. To find possible patterns, I might also look at other combinations in the future.
My approach was:
Create a new dataframe df_2 with all slices of 04:00:00 and value = 10
Create a new dataframe df_3 with all slices of 08:00:00
merge df_2 and df_3 and select only rows where a time = 04:00:00 of the same day precedes a time = 8:00:00 entry.
This seems to be a bit cumbersome and I was wondering if there was a more practical way to do this.
Maybe someone could suggest a more efficient way?
First, make a DatetimeIndex from the DATE and TIME columns:
date_idx = df.iloc[:, :2].astype('str').apply(lambda x: pd.to_datetime(' '.join(x)), axis=1)
Then make a new column holding the Threshold from 4 hours earlier, and assign the result to df1 (note the lambda: df1 does not exist yet inside its own definition):
df1 = (df.set_index(date_idx)
         .drop(['DATE', 'TIME'], axis=1)
         .sort_index()
         .assign(new=lambda d: d['Threshold'].shift(freq='4H')))
Output (df1):
Threshold Value new
2022-11-04 02:00:00 10 9 NaN
2022-11-04 03:00:00 11 10 NaN
2022-11-04 04:00:00 10 11 NaN
2022-11-04 05:00:00 12 12 NaN
2022-11-04 06:00:00 12 11 10.0
2022-11-04 07:00:00 10 11 11.0
2022-11-04 08:00:00 11 10 10.0
2022-11-04 09:00:00 11 9 12.0
2022-11-04 10:00:00 12 9 12.0
2022-11-04 11:00:00 10 10 10.0
2022-11-04 12:00:00 10 10 11.0
filter data at 08:00:00:
df1.at_time('08:00')
output:
Threshold Value new
2022-11-04 08:00:00 11 10 10.0
Then check or filter on the Value and new columns.
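For example, a minimal sketch combining the time filter with the threshold check (not from the original answer):
res = df1.at_time('08:00')
res = res[res['new'].eq(10)]   # keep rows whose Threshold 4 hours earlier was 10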
Here is one way to do it:
out=(df.loc[
(df['TIME'].isin(['04:00:00','08:00:00']) & # choose rows where time is 4:00 or 8:00
df['DATE'].isin( # and date where
df.loc[df['TIME'].eq('04:00:00') & # time is 04:00:00
df['Threshold'].eq(10)]['DATE']) # and Threshold is 10
)])
out
DATE TIME Threshold Value
2 2022-11-04 04:00:00 10 11
6 2022-11-04 08:00:00 11 10
Alternatively, the same as above but choosing only rows where time equals 08:00:00:
out=(df.loc[
(df['TIME'].isin(['08:00:00']) &
df['DATE'].isin(
df.loc[df['TIME'].eq('04:00:00') &
df['Threshold'].eq(10)]['DATE'])
)])
out
DATE TIME Threshold Value
6 2022-11-04 08:00:00 11 10
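Since the question mentions wanting to try other combinations later, the same filter can be wrapped in a small helper (value_at is a hypothetical name, not from the original answer):
def value_at(df, t_cond, thr, t_target):
    # dates on which Threshold at time t_cond equals thr
    days = df.loc[df['TIME'].eq(t_cond) & df['Threshold'].eq(thr), 'DATE']
    # rows at time t_target on those dates
    return df.loc[df['TIME'].eq(t_target) & df['DATE'].isin(days)]

value_at(df, '04:00:00', 10, '08:00:00')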
I'm trying to apply different conditions that depend on the date, months to be specific. For example, for January replace the data in TEMP that is above 45, but for February anything above 30, and so on. I already did that with the code below, but the problem is that the data from the previous month gets replaced with NaN.
This is my code:
meses = ["01", "02"]
for i in var_vars:
    if i in dataframes2.columns.values:
        for j in range(len(meses)):
            test_prueba_mes = dataframes2[i].loc[dataframes2['fecha'].dt.month == int(meses[j])]
            test_prueba = test_prueba_mes[dataframes2[i] < dataframes.loc[i]["X" + meses[j] + ".max"]]
            dataframes2["Prueba " + str(i)] = test_prueba
Output:
dataframes2.tail(5)
fecha TEMP_C_Avg RH_Avg Prueba TEMP_C_Avg Prueba RH_Avg
21 2020-01-01 22:00:00 46.0 103 NaN NaN
22 2020-01-01 23:00:00 29.0 103 NaN NaN
23 2020-01-02 00:00:00 31.0 3 NaN NaN
24 2020-01-02 12:00:00 31.0 2 NaN NaN
25 2020-02-01 10:00:00 29.0 5 29.0 5.0
My desired Output is:
Output:
fecha TEMP_C_Avg RH_Avg Prueba TEMP_C_Avg Prueba RH_Avg
21 2020-01-01 22:00:00 46.0 103 NaN NaN
22 2020-01-01 23:00:00 29.0 103 29.0 NaN
23 2020-01-02 00:00:00 31.0 3 31.0 3.0
24 2020-01-02 12:00:00 31.0 2 31.0 2.0
25 2020-02-01 10:00:00 29.0 5 29.0 5.0
Appreciate if anyone can help me.
Update: The ruleset for 6 months is Jan 45, Feb 30, Mar 45, Apr 10, May 15, Jun 30.
An example of the data:
fecha TEMP_C_Avg RH_Avg
25 2020-02-01 10:00:00 29.0 5
26 2020-02-01 11:00:00 32.0 105
27 2020-03-01 10:00:00 55.0 3
28 2020-03-01 11:00:00 40.0 5
29 2020-04-01 10:00:00 10.0 20
30 2020-04-01 11:00:00 5.0 15
31 2020-05-01 10:00:00 20.0 15
32 2020-05-01 11:00:00 5.0 106
33 2020-06-01 10:00:00 33.0 107
34 2020-06-01 11:00:00 20.0 20
With a clear understanding of the requirements:
- the monthly limits are encoded into a dict, limits
- numpy select() is used: when a condition matches, the value corresponding to that condition is taken from the second parameter, defaulting to the third parameter
- the conditions are built dynamically from the limits dict
- the second parameter needs to be the same length as the conditions list, so the list of np.nan is built with a list comprehension to get the correct length
- to handle all the relevant columns, a dict comprehension builds the **kwargs params passed to assign()
import io
import numpy as np
import pandas as pd

df = pd.read_csv(io.StringIO(""" fecha TEMP_C_Avg RH_Avg
25 2020-02-01 10:00:00 29.0 5
26 2020-02-01 11:00:00 32.0 105
27 2020-03-01 10:00:00 55.0 3
28 2020-03-01 11:00:00 40.0 5
29 2020-04-01 10:00:00 10.0 20
30 2020-04-01 11:00:00 5.0 15
31 2020-05-01 10:00:00 20.0 15
32 2020-05-01 11:00:00 5.0 106
33 2020-06-01 10:00:00 33.0 107
34 2020-06-01 11:00:00 20.0 20"""), sep=r"\s\s+", engine="python")
df.fecha = pd.to_datetime(df.fecha)
# The ruleset for 6 months is jan 45, feb 30, mar 45, abr 10, may 15, jun 30
limits = {1:45, 2:30, 3:45, 4:10, 5:15, 6:30}
df = df.assign(**{f"Prueba {c}":np.select( # construct target column name
# build a condition for each of the month limits
[df.fecha.dt.month.eq(m) & df[c].gt(l) for m,l in limits.items()],
[np.nan for m in limits.keys()], # NaN if beyond limit
df[c]) # keep value if within limits
for c in df.columns if "Avg" in c}) # do calc for all columns that have "Avg" in name
    fecha                TEMP_C_Avg  RH_Avg  Prueba TEMP_C_Avg  Prueba RH_Avg
25  2020-02-01 10:00:00          29       5                 29              5
26  2020-02-01 11:00:00          32     105                nan            nan
27  2020-03-01 10:00:00          55       3                nan              3
28  2020-03-01 11:00:00          40       5                 40              5
29  2020-04-01 10:00:00          10      20                 10            nan
30  2020-04-01 11:00:00           5      15                  5            nan
31  2020-05-01 10:00:00          20      15                nan             15
32  2020-05-01 11:00:00           5     106                  5            nan
33  2020-06-01 10:00:00          33     107                nan            nan
34  2020-06-01 11:00:00          20      20                 20             20
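A shorter equivalent (a sketch, not part of the original answer) maps each row's month to its limit with Series.map and then masks values above it, avoiding np.select entirely:
limit = df.fecha.dt.month.map(limits)   # per-row monthly limit
df = df.assign(**{f"Prueba {c}": df[c].mask(df[c] > limit)
                  for c in df.columns if "Avg" in c})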
I am trying to develop a more efficient loop to complete a problem. At the moment, the code below applies a string if it aligns with a specific value. However, the values are in identical order so a loop could make this process more efficient.
Using the df below as an example, integers represent time periods, where each increase of 1 equates to a 15-minute period, so 1 == 8:00:00, 2 == 8:15:00, etc. At the moment I would repeat this process until the last time period; if that gets up to 80 it could become very inefficient. Could a loop be incorporated here?
import pandas as pd
d = ({
'Time' : [1,1,1,2,2,2,3,3,3,4,4,4,5,5,5,6,6,6],
})
df = pd.DataFrame(data = d)
def time_period(row):
    if row['Time'] == 1:
        return '8:00:00'
    if row['Time'] == 2:
        return '8:15:00'
    if row['Time'] == 3:
        return '8:30:00'
    if row['Time'] == 4:
        return '8:45:00'
    if row['Time'] == 5:
        return '9:00:00'
    if row['Time'] == 6:
        return '9:15:00'
    .....
    if row['Time'] == 80:
        return '4:00:00'
df['24Hr Time'] = df.apply(lambda row: time_period(row), axis=1)
print(df)
Out:
Time 24Hr Time
0 1 8:00:00
1 1 8:00:00
2 1 8:00:00
3 2 8:15:00
4 2 8:15:00
5 2 8:15:00
6 3 8:30:00
7 3 8:30:00
8 3 8:30:00
9 4 8:45:00
10 4 8:45:00
11 4 8:45:00
12 5 9:00:00
13 5 9:00:00
14 5 9:00:00
15 6 9:15:00
16 6 9:15:00
17 6 9:15:00
This is possible with some simple timedelta arithmetic:
df['24Hr Time'] = (
pd.to_timedelta((df['Time'] - 1) * 15, unit='m') + pd.Timedelta(hours=8))
df.head()
Time 24Hr Time
0 1 08:00:00
1 1 08:00:00
2 1 08:00:00
3 2 08:15:00
4 2 08:15:00
df.dtypes
Time int64
24Hr Time timedelta64[ns]
dtype: object
If you need a string, use pd.to_datetime with unit and origin:
df['24Hr Time'] = (
pd.to_datetime((df['Time']-1) * 15, unit='m', origin='8:00:00')
.dt.strftime('%H:%M:%S'))
df.head()
Time 24Hr Time
0 1 08:00:00
1 1 08:00:00
2 1 08:00:00
3 2 08:15:00
4 2 08:15:00
df.dtypes
Time int64
24Hr Time object
dtype: object
In general, you want to make a dictionary and map:
my_dict = {'old_val1': 'new_val1',...}
df['24Hr Time'] = df['Time'].map(my_dict)
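For this particular mapping, the dictionary itself can be generated rather than typed out by hand (a sketch; it assumes the 1 -> 8:00:00, 15-minute-step pattern simply continues):
from datetime import datetime, timedelta
my_dict = {i: (datetime(1900, 1, 1, 8, 0) + timedelta(minutes=15 * (i - 1))).strftime('%H:%M:%S')
           for i in range(1, 81)}
df['24Hr Time'] = df['Time'].map(my_dict)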
But in this case you can do it with a timedelta:
df['24Hr Time'] = pd.to_timedelta(df['Time']*15, unit='T') + pd.to_timedelta('7:45:00')
Output (note that the new column is of type timedelta, not string)
Time 24Hr Time
0 1 08:00:00
1 1 08:00:00
2 1 08:00:00
3 2 08:15:00
4 2 08:15:00
5 2 08:15:00
6 3 08:30:00
7 3 08:30:00
8 3 08:30:00
9 4 08:45:00
10 4 08:45:00
11 4 08:45:00
12 5 09:00:00
13 5 09:00:00
14 5 09:00:00
15 6 09:15:00
16 6 09:15:00
17 6 09:15:00
I ended up using this:
pd.to_datetime((df.Time-1)*15*60+8*60*60,unit='s').dt.time
0 08:00:00
1 08:00:00
2 08:00:00
3 08:15:00
4 08:15:00
5 08:15:00
6 08:30:00
7 08:30:00
8 08:30:00
9 08:45:00
10 08:45:00
11 08:45:00
12 09:00:00
13 09:00:00
14 09:00:00
15 09:15:00
16 09:15:00
17 09:15:00
Name: Time, dtype: object
A fun way is using pd.timedelta_range and index.repeat
n = df.Time.nunique()
c = df.groupby('Time').size()
df['24_hr'] = pd.timedelta_range(start='8 hours', periods=n, freq='15T').repeat(c)
Out[380]:
Time 24_hr
0 1 08:00:00
1 1 08:00:00
2 1 08:00:00
3 2 08:15:00
4 2 08:15:00
5 2 08:15:00
6 3 08:30:00
7 3 08:30:00
8 3 08:30:00
9 4 08:45:00
10 4 08:45:00
11 4 08:45:00
12 5 09:00:00
13 5 09:00:00
14 5 09:00:00
15 6 09:15:00
16 6 09:15:00
17 6 09:15:00
I am trying to calculate the total cost of staffing requirements over a day. My attempt is to group the people required throughout the day and multiply by the cost. I then try to group this cost per hour, but my output isn't correct.
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as dates
d = ({
'Time' : ['0/1/1900 8:00:00','0/1/1900 9:59:00','0/1/1900 10:00:00','0/1/1900 12:29:00','0/1/1900 12:30:00','0/1/1900 13:00:00','0/1/1900 13:02:00','0/1/1900 13:15:00','0/1/1900 13:20:00','0/1/1900 18:10:00','0/1/1900 18:15:00','0/1/1900 18:20:00','0/1/1900 18:25:00','0/1/1900 18:45:00','0/1/1900 18:50:00','0/1/1900 19:05:00','0/1/1900 19:07:00','0/1/1900 21:57:00','0/1/1900 22:00:00','0/1/1900 22:30:00','0/1/1900 22:35:00','1/1/1900 3:00:00','1/1/1900 3:05:00','1/1/1900 3:20:00','1/1/1900 3:25:00'],
'People' : [1,1,2,2,3,3,2,2,3,3,4,4,3,3,2,2,3,3,4,4,3,3,2,2,1],
})
df = pd.DataFrame(data = d)
df['Time'] = ['/'.join([str(int(x.split('/')[0])+1)] + x.split('/')[1:]) for x in df['Time']]
df['Time'] = pd.to_datetime(df['Time'], format='%d/%m/%Y %H:%M:%S')
formatter = dates.DateFormatter('%Y-%m-%d %H:%M:%S')
df = df.groupby(pd.Grouper(freq='15T',key='Time'))['People'].max().ffill()
df = df.reset_index(level=['Time'])
df['Cost'] = df['People'] * 26
cost = df.groupby([df['Time'].dt.hour])['Cost'].sum()
#For reference. This plot displays people required throughout the day
fig, ax = plt.subplots(figsize = (10,5))
plt.plot(df['Time'], df['People'], color = 'blue')
plt.locator_params(axis='y', nbins=6)
ax.xaxis.set_major_formatter(formatter)
ax.xaxis.set_major_formatter(dates.DateFormatter('%H:%M:%S'))
plt.ylabel('People Required', labelpad = 10)
plt.xlabel('Time', labelpad = 10)
print(cost)
Out:
0 416.0
1 416.0
2 416.0
3 130.0
8 104.0
9 104.0
10 208.0
11 208.0
12 260.0
13 312.0
14 312.0
15 312.0
16 312.0
17 312.0
18 364.0
19 312.0
20 312.0
21 312.0
22 416.0
23 416.0
I have done the calculations manually and the total cost output should be:
$1456
I think the wrong numbers in your question are most likely caused by the incorrect datetime values you have. Once those are fixed, you should get the correct numbers. Here's an attempt from my end, with a little tweak to the Time column.
import pandas as pd
df = pd.DataFrame({
'Time' : ['1/1/1900 8:00:00','1/1/1900 9:59:00','1/1/1900 10:00:00','1/1/1900 12:29:00','1/1/1900 12:30:00','1/1/1900 13:00:00','1/1/1900 13:02:00','1/1/1900 13:15:00','1/1/1900 13:20:00','1/1/1900 18:10:00','1/1/1900 18:15:00','1/1/1900 18:20:00','1/1/1900 18:25:00','1/1/1900 18:45:00','1/1/1900 18:50:00','1/1/1900 19:05:00','1/1/1900 19:07:00','1/1/1900 21:57:00','1/1/1900 22:00:00','1/1/1900 22:30:00','1/1/1900 22:35:00','1/2/1900 3:00:00','1/2/1900 3:05:00','1/2/1900 3:20:00','1/2/1900 3:25:00'],
'People' : [1,1,2,2,3,3,2,2,3,3,4,4,3,3,2,2,3,3,4,4,3,3,2,2,1],
})
>>>df
Time People
0 1/1/1900 8:00:00 1
1 1/1/1900 9:59:00 1
2 1/1/1900 10:00:00 2
3 1/1/1900 12:29:00 2
4 1/1/1900 12:30:00 3
5 1/1/1900 13:00:00 3
6 1/1/1900 13:02:00 2
7 1/1/1900 13:15:00 2
8 1/1/1900 13:20:00 3
9 1/1/1900 18:10:00 3
10 1/1/1900 18:15:00 4
11 1/1/1900 18:20:00 4
12 1/1/1900 18:25:00 3
13 1/1/1900 18:45:00 3
14 1/1/1900 18:50:00 2
15 1/1/1900 19:05:00 2
16 1/1/1900 19:07:00 3
17 1/1/1900 21:57:00 3
18 1/1/1900 22:00:00 4
19 1/1/1900 22:30:00 4
20 1/1/1900 22:35:00 3
21 1/2/1900 3:00:00 3
22 1/2/1900 3:05:00 2
23 1/2/1900 3:20:00 2
24 1/2/1900 3:25:00 1
df.Time = pd.to_datetime(df.Time)
df.set_index('Time', inplace=True)
df_group = df.resample('15T').max().ffill()
df_hour = df_group.resample('1h').max()
df_hour['Cost'] = df_hour['People'] * 26
>>>df_hour
People Cost
Time
1900-01-01 08:00:00 1.0 26.0
1900-01-01 09:00:00 1.0 26.0
1900-01-01 10:00:00 2.0 52.0
1900-01-01 11:00:00 2.0 52.0
1900-01-01 12:00:00 3.0 78.0
1900-01-01 13:00:00 3.0 78.0
1900-01-01 14:00:00 3.0 78.0
1900-01-01 15:00:00 3.0 78.0
1900-01-01 16:00:00 3.0 78.0
1900-01-01 17:00:00 3.0 78.0
1900-01-01 18:00:00 4.0 104.0
1900-01-01 19:00:00 3.0 78.0
1900-01-01 20:00:00 3.0 78.0
1900-01-01 21:00:00 3.0 78.0
1900-01-01 22:00:00 4.0 104.0
1900-01-01 23:00:00 4.0 104.0
1900-01-02 00:00:00 4.0 104.0
1900-01-02 01:00:00 4.0 104.0
1900-01-02 02:00:00 4.0 104.0
1900-01-02 03:00:00 3.0 78.0
>>>df_hour.sum()
People 60.0
Cost 1560.0
dtype: float64
Edit: It took me a second read to realize the methodology you're using. The incorrect number you got is likely due to grouping by sum() after you performed a ffill() on your aggregated People column. Since ffill() fills the holes from the last valid value, you actually overestimated your cost for those periods. You should use max() again to find the maximum headcount required for each hour, as in the sketch below.
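Putting it together, a compact sketch of the whole calculation starting from the parsed frame (Time still a column, and assuming a cost of 26 per person-hour):
total = (df.set_index('Time')['People']
           .resample('15T').max().ffill()   # peak headcount per 15-minute slot
           .resample('1h').max()            # peak headcount per hour
           .mul(26)                         # hourly cost
           .sum())
This matches the 1560.0 total shown above.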
I have a dataframe that can be simplified as:
date id
0 02/04/2015 02:34 1
1 06/04/2015 12:34 2
2 09/04/2015 23:03 3
3 12/04/2015 01:00 4
4 15/04/2015 07:12 5
5 21/04/2015 12:59 6
6 29/04/2015 17:33 7
7 04/05/2015 10:44 8
8 06/05/2015 11:12 9
9 10/05/2015 08:52 10
10 12/05/2015 14:19 11
11 19/05/2015 19:22 12
12 27/05/2015 22:31 13
13 01/06/2015 11:09 14
14 04/06/2015 12:57 15
15 10/06/2015 04:00 16
16 15/06/2015 03:23 17
17 19/06/2015 05:37 18
18 23/06/2015 13:41 19
19 27/06/2015 15:43 20
It can be created using:
tempDF = pd.DataFrame({ 'id': [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20],
'date': ["02/04/2015 02:34","06/04/2015 12:34","09/04/2015 23:03","12/04/2015 01:00","15/04/2015 07:12","21/04/2015 12:59","29/04/2015 17:33","04/05/2015 10:44","06/05/2015 11:12","10/05/2015 08:52","12/05/2015 14:19","19/05/2015 19:22","27/05/2015 22:31","01/06/2015 11:09","04/06/2015 12:57","10/06/2015 04:00","15/06/2015 03:23","19/06/2015 05:37","23/06/2015 13:41","27/06/2015 15:43"]})
The data has the following types:
tempDF.dtypes
date object
id int64
dtype: object
I have set the 'date' variable to Pandas datetime64 format (if that's the right way to describe it) using:
import numpy as np
import pandas as pd
tempDF['date'] = pd.to_datetime(tempDF['date'])
So now, the dtypes look like:
tempDF.dtypes
date datetime64[ns]
id int64
dtype: object
I want to change the hours of the original date data. I can use .normalize() to convert to midnight via the .dt accessor:
tempDF['date'] = tempDF['date'].dt.normalize()
And, I can get access to individual datetime components (e.g. year) using:
tempDF['date'].dt.year
This produces:
0 2015
1 2015
2 2015
3 2015
4 2015
5 2015
6 2015
7 2015
8 2015
9 2015
10 2015
11 2015
12 2015
13 2015
14 2015
15 2015
16 2015
17 2015
18 2015
19 2015
Name: date, dtype: int64
The question is, how can I change specific date and time components? For example, how could I change the time to midday (12:00) for all the dates? I've found that datetime.datetime has a .replace() function. However, having converted the dates to Pandas format, it would make sense to keep them in that format. Is there a way to do that without changing the format again?
EDIT:
A vectorized way to do this would be to normalize the series, and then add 12 hours to it using timedelta. Example -
tempDF['date'].dt.normalize() + datetime.timedelta(hours=12)
Demo -
In [59]: tempDF
Out[59]:
date id
0 2015-02-04 12:00:00 1
1 2015-06-04 12:00:00 2
2 2015-09-04 12:00:00 3
3 2015-12-04 12:00:00 4
4 2015-04-15 12:00:00 5
5 2015-04-21 12:00:00 6
6 2015-04-29 12:00:00 7
7 2015-04-05 12:00:00 8
8 2015-06-05 12:00:00 9
9 2015-10-05 12:00:00 10
10 2015-12-05 12:00:00 11
11 2015-05-19 12:00:00 12
12 2015-05-27 12:00:00 13
13 2015-01-06 12:00:00 14
14 2015-04-06 12:00:00 15
15 2015-10-06 12:00:00 16
16 2015-06-15 12:00:00 17
17 2015-06-19 12:00:00 18
18 2015-06-23 12:00:00 19
19 2015-06-27 12:00:00 20
In [60]: tempDF['date'].dt.normalize() + datetime.timedelta(hours=12)
Out[60]:
0 2015-02-04 12:00:00
1 2015-06-04 12:00:00
2 2015-09-04 12:00:00
3 2015-12-04 12:00:00
4 2015-04-15 12:00:00
5 2015-04-21 12:00:00
6 2015-04-29 12:00:00
7 2015-04-05 12:00:00
8 2015-06-05 12:00:00
9 2015-10-05 12:00:00
10 2015-12-05 12:00:00
11 2015-05-19 12:00:00
12 2015-05-27 12:00:00
13 2015-01-06 12:00:00
14 2015-04-06 12:00:00
15 2015-10-06 12:00:00
16 2015-06-15 12:00:00
17 2015-06-19 12:00:00
18 2015-06-23 12:00:00
19 2015-06-27 12:00:00
dtype: datetime64[ns]
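The same result can be had without importing datetime, using pandas' own Timedelta (equivalent in any reasonably recent pandas):
tempDF['date'].dt.normalize() + pd.Timedelta(hours=12)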
Timing information for both methods at bottom
One method would be to use Series.apply along with the .replace() method OP mentions in his post. Example -
tempDF['date'] = tempDF['date'].apply(lambda x:x.replace(hour=12,minute=0))
Demo -
In [12]: tempDF
Out[12]:
date id
0 2015-02-04 02:34:00 1
1 2015-06-04 12:34:00 2
2 2015-09-04 23:03:00 3
3 2015-12-04 01:00:00 4
4 2015-04-15 07:12:00 5
5 2015-04-21 12:59:00 6
6 2015-04-29 17:33:00 7
7 2015-04-05 10:44:00 8
8 2015-06-05 11:12:00 9
9 2015-10-05 08:52:00 10
10 2015-12-05 14:19:00 11
11 2015-05-19 19:22:00 12
12 2015-05-27 22:31:00 13
13 2015-01-06 11:09:00 14
14 2015-04-06 12:57:00 15
15 2015-10-06 04:00:00 16
16 2015-06-15 03:23:00 17
17 2015-06-19 05:37:00 18
18 2015-06-23 13:41:00 19
19 2015-06-27 15:43:00 20
In [13]: tempDF['date'] = tempDF['date'].apply(lambda x:x.replace(hour=12,minute=0))
In [14]: tempDF
Out[14]:
date id
0 2015-02-04 12:00:00 1
1 2015-06-04 12:00:00 2
2 2015-09-04 12:00:00 3
3 2015-12-04 12:00:00 4
4 2015-04-15 12:00:00 5
5 2015-04-21 12:00:00 6
6 2015-04-29 12:00:00 7
7 2015-04-05 12:00:00 8
8 2015-06-05 12:00:00 9
9 2015-10-05 12:00:00 10
10 2015-12-05 12:00:00 11
11 2015-05-19 12:00:00 12
12 2015-05-27 12:00:00 13
13 2015-01-06 12:00:00 14
14 2015-04-06 12:00:00 15
15 2015-10-06 12:00:00 16
16 2015-06-15 12:00:00 17
17 2015-06-19 12:00:00 18
18 2015-06-23 12:00:00 19
19 2015-06-27 12:00:00 20
Timing information
In [52]: df = pd.DataFrame([[datetime.datetime.now()] for _ in range(100000)],columns=['date'])
In [54]: %%timeit
....: df['date'].dt.normalize() + datetime.timedelta(hours=12)
....:
The slowest run took 12.53 times longer than the fastest. This could mean that an intermediate result is being cached
1 loops, best of 3: 32.3 ms per loop
In [57]: %%timeit
....: df['date'].apply(lambda x:x.replace(hour=12,minute=0))
....:
1 loops, best of 3: 1.09 s per loop
Here's the solution I used to replace the time component of the datetime values in a Pandas DataFrame. Not sure how efficient this solution is, but it fit my needs.
import pandas as pd
# Create a list of EOCY dates for a specified period
sDate = pd.Timestamp('2022-01-31 23:59:00')
eDate = pd.Timestamp('2060-01-31 23:59:00')
dtList = pd.date_range(sDate, eDate, freq='Y').to_pydatetime()
# Create a DataFrame with a single column called 'Date' and fill the rows with the list of EOCY dates.
df = pd.DataFrame({'Date': dtList})
# Loop through the DataFrame rows using the replace function to replace the hours and minutes of each date value.
for i in range(df.shape[0]):
    df.iloc[i, 0] = df.iloc[i, 0].replace(hour=0, minute=0)
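A vectorized alternative (a sketch; equivalent here because the goal is zeroing out hours and minutes):
df['Date'] = df['Date'].dt.normalize()   # set the time-of-day to 00:00:00 in one pass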