I have extracted the table below from a CSV file:
date user_id whole_cost cost1
02/10/2012 00:00:00 1 1790 12
07/10/2012 00:00:00 1 364 15
30/01/2013 00:00:00 1 280 10
02/02/2013 00:00:00 1 259 24
05/03/2013 00:00:00 1 201 39
02/10/2012 00:00:00 3 623 1
07/12/2012 00:00:00 3 90 0
30/01/2013 00:00:00 3 312 90
02/02/2013 00:00:00 5 359 45
05/03/2013 00:00:00 5 301 34
02/02/2013 00:00:00 5 359 1
05/03/2013 00:00:00 5 801 12
For this purpose I used the following statements:
import pandas as pd
newnames = ['date','user_id', 'whole_cost', 'cost1']
df = pd.read_csv('expenses.csv', names = newnames, index_col = 'timestamp')
pivoted = df.pivot('timestamp','user_id')
But the last line generates the error message: no item named timestamp.
Many thanks in advance for your help.
It looks like a column named timestamp is not present in the dataframe.
Try index_col = 'date' instead of index_col = 'timestamp', and also pass parse_dates = ['date'] to pd.read_csv.
This should work:
df = pd.read_csv('expenses.csv', header = None, names = newnames, index_col = 'date', parse_dates = ['date'])
Hope this helps.
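For reference, a minimal end-to-end sketch (column names and file layout assumed from the question; dayfirst=True assumes dd/mm dates, and pivot_table is used instead of pivot because the sample data contains duplicate (date, user_id) pairs, which plain pivot rejects):

import pandas as pd

newnames = ['date', 'user_id', 'whole_cost', 'cost1']
df = pd.read_csv('expenses.csv', header=None, names=newnames,
                 parse_dates=['date'], dayfirst=True)
# pivot_table aggregates the duplicate (date, user_id) rows instead of raising
pivoted = df.pivot_table(index='date', columns='user_id', aggfunc='sum')
print(pivoted.head())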
I have a column of Call Duration formatted as mm.ss and I would like to convert it to all seconds.
It looks like this:
CallDuration
25 29.02
183 5.40
213 3.02
290 10.27
304 2.00
...
4649990 13.02
4650067 5.33
4650192 19.47
4650197 3.44
4650204 14.15
In Excel I would separate the column at the ".", multiply the minutes column by 60 and then add it to the seconds column for my total seconds. I feel like this should be much easier with pandas/Python, but I cannot figure it out.
I tried using pd.to_timedelta, but that did not give me what I need: I can't figure out how to tell it how the time is formatted. When I pass 'm' it does not come back correctly, because the digits after the "." are treated as a decimal fraction of a minute rather than as seconds:
pd.to_timedelta(post_group['CallDuration'],'m')
25 0 days 00:29:01.200000
183 0 days 00:05:24
213 0 days 00:03:01.200000
290 0 days 00:10:16.200000
304 0 days 00:02:00
...
4649990 0 days 00:13:01.200000
4650067 0 days 00:05:19.800000
4650192 0 days 00:19:28.200000
4650197 0 days 00:03:26.400000
4650204 0 days 00:14:09
Name: CallDuration, Length: 52394, dtype: timedelta64[ns]
I tried doing it this way, but now I can't get the 'sec' column to convert to an integer because there are blanks, and it won't fill the blanks...
post_duration = post_group['CallDuration'].str.split(".",expand=True)
post_duration.columns = ["min","sec"]
post_duration['min'] = post_duration['min'].astype(int)
post_duration['min'] = 60*post_duration['min']
post_duration.loc['Total', 'min'] = post_duration['min'].sum()
post_duration
min sec
25 1740.0 02
183 300.0 4
213 180.0 02
290 600.0 27
304 120.0 None
... ... ...
4650067 300.0 33
4650192 1140.0 47
4650197 180.0 44
4650204 840.0 15
Total 24902700.0 NaN
post_duration2 = post_group['CallDuration'].str.split(".",expand=True)
post_duration2.columns = ["min","sec"]
post_duration2['sec'].astype(float).astype('Int64')
post_duration2.fillna(0)
post_duration2.loc['Total', 'sec'] = post_duration2['sec'].sum()
post_duration2
TypeError: object cannot be converted to an IntegerDtype
Perhaps there's a more efficient way, but I would still convert to a timedelta format then use apply with the Timedelta.total_seconds() method to get the column in seconds.
import pandas as pd
pd.to_timedelta(post_group['CallDuration'], 'm').apply(pd.Timedelta.total_seconds)
You can find more info on the attributes and methods you can call on timedeltas here.
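Since the question's values are mm.ss with a dot rather than decimal minutes, the Excel-style split can also be vectorised directly; a minimal sketch, assuming CallDuration is string-typed as in the question's own split attempt:

import pandas as pd

s = pd.Series(['29.02', '5.40', '3.02'], name='CallDuration')
parts = s.str.split('.', expand=True)                  # minutes | seconds
total_seconds = parts[0].astype(int) * 60 + parts[1].astype(int)
print(total_seconds)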
import pandas as pd
import numpy as np
import datetime
def convert_to_seconds(col_data):
    col_data = pd.to_datetime(col_data, format="%M:%S")
    # the line above adds 1900-01-01 as the date portion, so subtract it off again
    col_data = col_data - datetime.datetime(1900, 1, 1)
    return col_data.dt.total_seconds()
df = pd.DataFrame({'CallDuration':['2:02',
'5:50',
np.nan,
'3:02']})
df['CallDuration'] = convert_to_seconds(df['CallDuration'])
Here's the result:
CallDuration
0 122.0
1 350.0
2 NaN
3 182.0
You can also use the above code to convert HH:MM strings to total seconds as floats (with format="%H:%M"), but only if the number of hours is less than 24.
And if you want to convert multiple columns in your dataframe replace
df['CallDuration'] = convert_to_seconds(df['CallDuration'])
with
new_df = df.apply(lambda col: convert_to_seconds(col) if col.name in colnames_list else col)
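Here colnames_list is assumed to be your own list of the columns to convert, e.g.:

colnames_list = ['CallDuration', 'HoldDuration']  # hypothetical column names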
I have a column with hh:mm:ss and a separate column with the decimal seconds.
I have some quite horrible text files to process, and the decimal part of my time is separated into another column. Now I'd like to concatenate them back together.
For example:
df = {'Time':['01:00:00','01:00:00 AM','01:00:01 AM','01:00:01 AM'],
'DecimalSecond':['14','178','158','75']}
I tried the following but it didn't work. It gives me "01:00:00 AM.14" LOL
df = df['Time2'] = df['Time'].map(str) + '.' + df['DecimalSecond'].map(str)
The goal is to come up with one column named "Time2" whose first row is 01:00:00.14 AM, second row 01:00:00.178 AM, etc.
Thank you for the help.
You can convert the output to datetimes and then call Series.dt.time:
# the Time column is split on whitespace and the value before the first space is kept
s = df['Time'].astype(str).str.split().str[0] + '.' + df['DecimalSecond'].astype(str)
df['Time2'] = pd.to_datetime(s).dt.time
print (df)
Time DecimalSecond Time2
0 01:00:00 14 01:00:00.140000
1 01:00:00 AM 178 01:00:00.178000
2 01:00:01 AM 158 01:00:01.158000
3 01:00:01 AM 75 01:00:01.750000
Please see the Python code below.
In [1]:
import pandas as pd
In [2]:
df = pd.DataFrame({'Time':['01:00:00','01:00:00','01:00:01','01:00:01'],
'DecimalSecond':['14','178','158','75']})
In [3]:
df['Time2'] = df[['Time','DecimalSecond']].apply(lambda x: ' '.join(x), axis = 1)
print(df)
Time DecimalSecond Time2
0 01:00:00 14 01:00:00 14
1 01:00:00 178 01:00:00 178
2 01:00:01 158 01:00:01 158
3 01:00:01 75 01:00:01 75
In [4]:
df.iloc[:,2]
Out[4]:
0 01:00:00 14
1 01:00:00 178
2 01:00:01 158
3 01:00:01 75
Name: Time2, dtype: object
I have a folder on my computer that contains ~8500 .csv files, each named for a stock ticker. Within each .csv file there is a 'timestamp' and a 'users_holding' column. The 'timestamp' column is set up as a datetime index and contains hourly entries for each day, e.g. 2019-12-01 01:50, 2020-01-01 02:55 ... 2020-01-01 01:45. Each of those timestamps has a corresponding integer representing the number of users holding at that time.

I want to create a for loop that iterates through all of the .csv files and tallies up the total users holding, across all files, at the latest time recorded for each day, starting on February 1st, 2020 (2020-02-01) and running to the last day in each file. The folder updates daily, so I can't really have an end date.
This is the for loop I have set up to establish each ticker as a dataframe:
path = 'C:\\Users\\N****\\Desktop\\r******\\t**\\p*********\\'
all_files = glob.glob(path + "/*.csv")
for filename in all_files:
    df = pd.read_csv(filename, header = 0, parse_dates = ['timestamp'], index_col='timestamp')
If anyone could show me how to write the for loop that finds the latest entry for each date and tallies up that number for each day, that would be amazing.
Thank you!
First, create a data frame with a Datetime index (in one-hour steps):
import numpy as np
import pandas as pd
idx = pd.date_range(start='2020-01-01', end='2020-01-31', freq='H')
data = np.arange(len(idx) * 3).reshape(len(idx), 3)
columns = ['ticker-1', 'ticker-2', 'ticker-3']
df = pd.DataFrame(data=data, index=idx, columns=columns)
print(df.head())
ticker-1 ticker-2 ticker-3
2020-01-01 00:00:00 0 1 2
2020-01-01 01:00:00 3 4 5
2020-01-01 02:00:00 6 7 8
2020-01-01 03:00:00 9 10 11
2020-01-01 04:00:00 12 13 14
Then group by the index, keeping year-month-day but dropping hours-minutes-seconds. The aggregation function is .last():
result = (df.groupby(by=df.index.strftime('%Y-%m-%d'))
[['ticker-1', 'ticker-2', 'ticker-3']]
.last()
)
print(result.head())
ticker-1 ticker-2 ticker-3
2020-01-01 69 70 71
2020-01-02 141 142 143
2020-01-03 213 214 215
2020-01-04 285 286 287
2020-01-05 357 358 359
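Applied to the original problem, a minimal sketch of the per-file loop (the 'timestamp' and 'users_holding' column names come from the question; the folder path is hypothetical, and each file is assumed to be sorted by timestamp):

import glob
import pandas as pd

path = r'C:\tickers'  # hypothetical folder
total = None
for filename in glob.glob(path + '/*.csv'):
    df = pd.read_csv(filename, header=0, parse_dates=['timestamp'], index_col='timestamp')
    s = df.loc['2020-02-01':, 'users_holding']    # from 1 Feb 2020 onward
    daily_last = s.groupby(s.index.date).last()   # latest reading of each day
    total = daily_last if total is None else total.add(daily_last, fill_value=0)
print(total)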
I want to convert a column in a dataset from hh:mm format to minutes. I tried the following code but it says "AttributeError: 'Series' object has no attribute 'split'". The data is in the following format. I also have NaN values in the dataset, and the plan is to compute the median of the values and then fill the NaN rows with that median.
02:32
02:14
02:31
02:15
02:28
02:15
02:22
02:16
02:22
02:14
I have tried this so far
s = dataset['Enroute_time_(hh mm)']
hours, minutes = s.split(':')
int(hours) * 60 + int(minutes)
I suggest you avoid row-wise calculations. You can use a vectorised approach with Pandas / NumPy:
import numpy as np
import pandas as pd

df = pd.DataFrame({'time': ['02:32', '02:14', '02:31', '02:15', '02:28', '02:15',
                            '02:22', '02:16', '02:22', '02:14', np.nan]})

values = df['time'].fillna('00:00').str.split(':', expand=True).astype(int)
factors = np.array([60, 1])
df['mins'] = (values * factors).sum(1)
print(df)
time mins
0 02:32 152
1 02:14 134
2 02:31 151
3 02:15 135
4 02:28 148
5 02:15 135
6 02:22 142
7 02:16 136
8 02:22 142
9 02:14 134
10 NaN 0
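If you want the median fill from the question rather than zeros for the NaN rows, a minimal variation of the same vectorised idea:

# compute minutes only where a value exists, then fill the gaps with the median
parts = df['time'].str.split(':', expand=True).astype(float)
mins = parts[0] * 60 + parts[1]           # NaN rows stay NaN
df['mins'] = mins.fillna(mins.median())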
If you want to use split you will need to use the str accessor, ie s.str.split(':').
However I think that in this case it makes more sense to use apply:
df = pd.DataFrame({'Enroute_time_(hh mm)': ['02:32', '02:14', '02:31',
'02:15', '02:28', '02:15',
'02:22', '02:16', '02:22', '02:14']})
def convert_to_minutes(value):
    hours, minutes = value.split(':')
    return int(hours) * 60 + int(minutes)
df['Enroute_time_(hh mm)'] = df['Enroute_time_(hh mm)'].apply(convert_to_minutes)
print(df)
# Enroute_time_(hh mm)
# 0 152
# 1 134
# 2 151
# 3 135
# 4 148
# 5 135
# 6 142
# 7 136
# 8 142
# 9 134
I understand that you have a DataFrame column containing timedeltas as strings. You want to extract the total minutes of those deltas, and afterwards fill the NaN values with the median of the total minutes.
import datetime
import pandas as pd
df = pd.DataFrame(
{'hhmm' : ['02:32',
'02:14',
'02:31',
'02:15',
'02:28',
'02:15',
'02:22',
'02:16',
'02:22',
'02:14']})
Your Timedeltas are not Timedeltas. They are strings. So you need to convert them first.
df.hhmm = pd.to_datetime(df.hhmm, format='%H:%M')
df.hhmm = pd.to_timedelta(df.hhmm - datetime.datetime(1900, 1, 1))
This gives you the following values (Note the dtype: timedelta64[ns] here)
0 02:32:00
1 02:14:00
2 02:31:00
3 02:15:00
4 02:28:00
5 02:15:00
6 02:22:00
7 02:16:00
8 02:22:00
9 02:14:00
Name: hhmm, dtype: timedelta64[ns]
Now that you have true timedeltas, you can use some cool functions like total_seconds() and then calculate the minutes.
df.hhmm.dt.total_seconds() / 60
If that is not what you wanted, you can also use the following.
df.hhmm.dt.components.minutes
This gives you the minutes from the HH:MM string as if you would have split it.
Finally, fill the NaN values. The question asked for the median, so compute the minutes first and then fill:

mins = df.hhmm.dt.total_seconds() / 60
mins = mins.fillna(mins.median())

or, using the components accessor:

mins = df.hhmm.dt.components.minutes
mins = mins.fillna(mins.median())
I have a dataset of samples covering multiple days, all with a timestamp.
I want to select rows within a specific time window. E.g. all rows that were generated between 1pm and 3 pm every day.
This is a sample of my data in a pandas dataframe:
22 22 2018-04-12T20:14:23Z 2018-04-12T21:14:23Z 0 6370.1
23 23 2018-04-12T21:14:23Z 2018-04-12T21:14:23Z 0 6368.8
24 24 2018-04-12T22:14:22Z 2018-04-13T01:14:23Z 0 6367.4
25 25 2018-04-12T23:14:22Z 2018-04-13T01:14:23Z 0 6365.8
26 26 2018-04-13T00:14:22Z 2018-04-13T01:14:23Z 0 6364.4
27 27 2018-04-13T01:14:22Z 2018-04-13T01:14:23Z 0 6362.7
28 28 2018-04-13T02:14:22Z 2018-04-13T05:14:22Z 0 6361.0
29 29 2018-04-13T03:14:22Z 2018-04-13T05:14:22Z 0 6359.3
.. ... ... ... ... ...
562 562 2018-05-05T08:13:21Z 2018-05-05T09:13:21Z 0 6300.9
563 563 2018-05-05T09:13:21Z 2018-05-05T09:13:21Z 0 6300.7
564 564 2018-05-05T10:13:14Z 2018-05-05T13:13:14Z 0 6300.2
565 565 2018-05-05T11:13:14Z 2018-05-05T13:13:14Z 0 6299.9
566 566 2018-05-05T12:13:14Z 2018-05-05T13:13:14Z 0 6299.6
How do I achieve that? I need to ignore the date and just evaluate the time component. I could traverse the dataframe in a loop and evaluate the datetime that way, but there must be a simpler way to do it.
I converted the messageDate column, which was read as a string, to a datetime with
df["messageDate"]=pd.to_datetime(df["messageDate"])
But after that I got stuck on how to filter on time only.
Any input appreciated.
Datetime columns expose datetime properties through the .dt accessor, from which you can extract components such as the hour and filter on them:
import datetime
df = pd.DataFrame(
[
'2018-04-12T12:00:00Z', '2018-04-12T14:00:00Z','2018-04-12T20:00:00Z',
'2018-04-13T12:00:00Z', '2018-04-13T14:00:00Z', '2018-04-13T20:00:00Z'
],
columns=['messageDate']
)
df
messageDate
# 0 2018-04-12 12:00:00
# 1 2018-04-12 14:00:00
# 2 2018-04-12 20:00:00
# 3 2018-04-13 12:00:00
# 4 2018-04-13 14:00:00
# 5 2018-04-13 20:00:00
df["messageDate"] = pd.to_datetime(df["messageDate"])
time_mask = (df['messageDate'].dt.hour >= 13) & \
(df['messageDate'].dt.hour <= 15)
df[time_mask]
# messageDate
# 1 2018-04-12 14:00:00
# 4 2018-04-13 14:00:00
I hope the code is self explanatory. You can always ask questions.
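For completeness, pandas also provides DataFrame.between_time, which compares only the time-of-day component once the datetimes are the index; a minimal sketch with the window from the question:

import pandas as pd

df = pd.DataFrame({'messageDate': pd.to_datetime(
    ['2018-04-12 12:00', '2018-04-12 14:00', '2018-04-13 14:30', '2018-04-13 20:00'])})

# between_time requires a DatetimeIndex and keeps rows whose time falls in the window
filtered = df.set_index('messageDate').between_time('13:00', '15:00')
print(filtered)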
import pandas as pd
# Prepping data for example
dates = pd.date_range('1/1/2018', periods=7, freq='H')
data = {'A' : range(7)}
df = pd.DataFrame(index = dates, data = data)
print(df)
# A
# 2018-01-01 00:00:00 0
# 2018-01-01 01:00:00 1
# 2018-01-01 02:00:00 2
# 2018-01-01 03:00:00 3
# 2018-01-01 04:00:00 4
# 2018-01-01 05:00:00 5
# 2018-01-01 06:00:00 6
# Creating a mask to filter the values we wish to keep or not.
# Here, we use df.index because the index is our datetime.
# If the datetime is a column, you can always say df['column_name']
mask = (df.index > '2018-1-1 01:00:00') & (df.index < '2018-1-1 05:00:00')
print(mask)
# [False False True True True False False]
df_with_good_dates = df.loc[mask]
print(df_with_good_dates)
# A
# 2018-01-01 02:00:00 2
# 2018-01-01 03:00:00 3
# 2018-01-01 04:00:00 4
df=df[(df["messageDate"].apply(lambda x : x.hour)>13) & (df["messageDate"].apply(lambda x : x.hour)<15)]
You can use x.minute, x.second similarly.
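A vectorised equivalent uses the .dt accessor instead of apply, avoiding the Python-level loop:

df = df[(df["messageDate"].dt.hour > 13) & (df["messageDate"].dt.hour < 15)]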
Try this after ensuring messageDate is indeed in datetime format, as you have done:
df.set_index('messageDate',inplace=True)
choseInd = [ind for ind in df.index if (ind.hour>=13)&(ind.hour<=15)]
df_select = df.loc[choseInd]
You can do the same even without making the datetime column the index, as the apply/lambda answer above shows; having the datetime as your index rather than a numerical one just makes the dataframe easier to work with.