I have a dataframe ('df') and the first column is a timestamp. I successfully converted that timestamp from milliseconds since the Unix epoch to a date like "2020-02-18 13:00:00" (which is 1:00 pm on February 18th, 2020) with the following code:
df['Time'] = pd.to_datetime(df['Time'], unit='ms')
I'm trying to subset to all of the rows from 2020-02-17, but this code:
df_1day = df[(df['Time'] == '2020-02-17')]
only returns the row at midnight (2020-02-17 00:00:00).
I'm sorry if the answer is somewhere else on this site, or the internet in general, but TIA for any help.
Not sure of the protocol when answering my own question, but I'm making this edit to include the lines of code that solved my issue, even though I'm pretty sure there's an easier way of doing this:
## Create new column with 'Time' as a string
df['Day'] = df['Time'].astype(str)
## Take only the first 10 characters of the string (which would be date only)
df['Day'] = df['Day'].str[:10]
## Create dataframe subset based on values in the new column
df_1day = df[(df['Day'] == '2020-02-17')]
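For what it's worth, the easier way does exist. The equality test only matched midnight because the string '2020-02-17' parses to the timestamp 2020-02-17 00:00:00. A more direct sketch, assuming df['Time'] is already datetime64:

# Normalize drops the time-of-day, so the comparison covers the whole day
df_1day = df[df['Time'].dt.normalize() == '2020-02-17']

# Equivalent: compare calendar dates directly
df_1day = df[df['Time'].dt.date == pd.Timestamp('2020-02-17').date()]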
So I have sales data that I'm trying to analyze. I have datetime data ["Order Date Time"], and I'd like to see the most common hours for sales, but more importantly I'd like to see which minutes have NO sales.
I have been spinning my wheels for a while and I can't get my brain around a solution. Any help is greatly appreciated.
I import the data:
import pandas as pd

df = pd.read_excel('Audit Period.xlsx')
print(df)
I clean up the data:
# Drop rows where "Order Date Time" is null
time_df = df[df["Order Date Time"].notnull()]
# Keep only the "Order Date Time" column and renumber the index
time_df = time_df[["Order Date Time"]].reset_index(drop=True)
# Peek at the first 10 rows
time_df.head(10)
I convert to datetime and I look at the month totals:
# Convert "Order Date Time" to datetime (vectorized; apply() would be slower)
time_df = time_df.copy()
time_df["Order Date Time"] = pd.to_datetime(time_df["Order Date Time"])
time_df = time_df.set_index(time_df["Order Date Time"])
# Group by month
grouped = time_df.resample("M").count()
time_df = pd.DataFrame({"count": grouped.values.flatten()}, index=grouped.index)
time_df.head(10)
I try to group by hour, but that gives me totals per day/hour rather than totals per clock hour (like every order ever placed at noon, etc.):
# Resample into two-hour bins
grouped = time_df.resample("2H").count()
time_df = pd.DataFrame({"count": grouped.values.flatten()}, index=grouped.index)
time_df.head(10)
And that is where I'm stuck. I'm trying to integrate the below suggestions but can't quite get a grasp on them yet. Any help would be appreciated.
Not sure if this is the most brilliant solution, but I would start by generating a dataframe at the level of detail I wanted, whether that is 1-hour intervals, 5-minute intervals, etc. Then, in your df with all the actual data, do your grouping as you currently are doing above. Once it is grouped, join the two. That way you have one dataframe that includes empty rows for the time spans with no records. The tricky part is just making sure you have your date and time formatted in a way that will match and join properly.
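A minimal sketch of that approach, assuming time_df is still indexed by its datetime "Order Date Time" column as above: group by clock hour for the "every order ever at noon" totals, and align per-minute counts to a complete range of minutes so the empty ones show up as zeros.

# Totals per clock hour (0-23) across all days
by_hour = time_df.groupby(time_df.index.hour).size()

# Per-minute counts aligned to every minute in the data's span
all_minutes = pd.date_range(time_df.index.min().floor("min"),
                            time_df.index.max().ceil("min"), freq="min")
per_minute = time_df.resample("min").size().reindex(all_minutes, fill_value=0)

# Minutes with no sales at all
no_sales = per_minute[per_minute == 0]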
This is a follow-up to Calculating new column value in dataframe based on next rows column value.
The solution in the previous question worked for a column holding hh:mm:ss values as a string.
I tried applying (no pun intended) the same logic to calculate the 1-second difference on a column of pandas Timestamps:
# df.start_time is now of type <class 'pandas._libs.tslibs.timestamps.Timestamp'>
# in yyyy-mm-dd hh:mm:ss format
s = pd.to_timedelta(df.start_time).shift(-1).sub(pd.offsets.Second(1))
df = df.assign( end_time=s.add(pd.Timestamp('now').normalize()).dt.time.astype(str) )
By mistake, in one round of coding I changed the line where the series is applied as a column to the df to:
df = df.assign( end_time=s.add(pd.Timestamp('now').normalize()))
The results were... interesting. The end_time is in the correct format, but the date portion...
start_time end_time
2021-03-30 16:58:13 2072-06-28 03:17:30.192227
2021-03-30 17:00:00 2072-06-28 03:17:32.192227
I expected an end_time 1 second less than the next row's start_time. As you can see, that is not the case! The end_time is 51 years in the future!
Can someone please explain how/why this happened? There is no explicit call of pd.offsets.DateOffset(years=50).
The solution to this was easy, and staring me in the face. pd.to_timedelta was treating each start_time's underlying value (nanoseconds since the Unix epoch) as a duration, so adding it to today's normalized date tacked on the roughly 51 years between 1970 and 2021.
The offending code:
s = pd.to_timedelta(df.start_time).shift(-1).sub(pd.offsets.Second(1))
The correct way to create an end_time from a timestamp-type series/column (there is no pd.to_timestamp, and the column is already datetime-like, so no conversion is needed):
s = df.start_time.shift(-1).sub(pd.offsets.Second(1))
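A self-contained sketch of the fixed calculation (the sample rows are hypothetical):

import pandas as pd

df = pd.DataFrame({"start_time": pd.to_datetime(
    ["2021-03-30 16:58:13", "2021-03-30 17:00:00"])})

# end_time = next row's start_time minus one second (the last row gets NaT)
df["end_time"] = df["start_time"].shift(-1) - pd.offsets.Second(1)
print(df)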
I'm a beginner in Python. I have an Excel file that shows the rainfall amount between 2016-1-1 and 2020-6-30. It has 2 columns: the first column is the date, the other is rainfall. Some dates are missing from the file (the rainfall wasn't measured); for example, there isn't a row for 2016-05-05. This is a sample of my Excel file:
Date        rainfall (mm)
1/1/2016    10
1/2/2016    5
...
12/30/2020  0
I want to find the missing dates but my code doesn't work correctly!
import pandas as pd
from datetime import datetime, timedelta
from matplotlib import dates as mpl_dates
from matplotlib.dates import date2num
df = pd.read_excel('rainfall.xlsx')
a= pd.date_range(start = '2016-01-01', end = '2020-06-30' ).difference(df.index)
print(a)
Here's a beginner-friendly way of doing it.
First you need to make sure that the Date in your dataframe is really a date and not a string or object.
Type (or print) df.info().
The date column should show up as datetime64[ns]
If not, df['Date'] = pd.to_datetime(df['Date'], dayfirst=False) fixes that. (Use dayfirst to tell pandas whether the day or the month comes first in your date strings, because it can't know on its own. Month-first is the default, so it would work here even if you left it out.)
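For instance, the same ambiguous string parses two different ways depending on the flag:

pd.to_datetime('02/03/2016')                 # 2016-02-03 (month first, the default)
pd.to_datetime('02/03/2016', dayfirst=True)  # 2016-03-02 (day first)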
For the task of finding missing days, there are many ways to solve it. Here's one.
Turn all the dates into a series:
all_dates = pd.Series(pd.date_range(start = '2016-01-01', end = '2020-06-30' ))
Then print all dates from that series which are not in your dataframe's "Date" column. The ~ sign means "not".
print(all_dates[~all_dates.isin(df['Date'])])
Try:
df = pd.read_excel('rainfall.xlsx', usecols=[0])
a = pd.date_range(start = '2016-01-01', end = '2020-06-30').difference([l[0] for l in df.values])
print(a)
And the dates in the file must look like 2016/1/1.
To find the missing dates from a list, you can also apply the Conditional Formatting function in Excel: set the rule up over the full date range, click OK > OK, and the positions of the missing dates are highlighted. Note: the last date in the date list will also be highlighted. This trick is not Python, just a plain Excel trick.
I'm sure this is really easy to answer, but I have only just started using Pandas.
I have a column in my Excel file called 'Day' and a date/time column called 'Date'.
I want to update my 'Day' column with the corresponding day for each of the NUMEROUS dates in the 'Date' column.
So far I use this code, shown below, to change the date/time to just a date:
df['Date'] = pd.to_datetime(df.Date).dt.strftime('%d/%m/%Y')
And then use this code to change the 'Day' column to Tuesday
df.loc[df['Date'] == '02/02/2018', 'Day'] = '2'
(2 signifies the 2nd day of the week)
This works great. The problem is, my excel sheet has 500000+ rows of data and lots of dates. Therefore I need this code to work with numerous dates (4 different dates to be exact)
For example; I have tried this code;
df.loc[df['Date'] == '02/02/2018' + '09/02/2018' + '16/02/2018' + '23/02/2018', 'Day'] = '2'
Which does not give me an error, but does not change the 'Day' to 2. I know I could just use the same line of code numerous times and change the date each time... but there must be a way to do it the way I explained? Help would be greatly appreciated :)
2/2/2018 is a Friday, so I don't know what "2nd day in a week" means. Does your week start on Thursday?
Since you have already converted the column to Timestamps, use the dt accessor (dayofweek is a property, not a method):
df['Day'] = df['Date'].dt.dayofweek
Monday is 0 and Sunday is 6. Manipulate that as needed.
If I got it right, you want to change the Day column for just a few dates, right? (Your attempt didn't error because + simply concatenated the four strings into one long string, which matches no date.) If so, you can include these dates in a separate list and do:
my_dates = ['02/02/2018', '09/02/2018', '16/02/2018', '23/02/2018']
df.loc[df['Date'].isin(my_dates), 'Day'] = '2'
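Putting the two answers together, a sketch that keeps 'Date' as real datetimes instead of strings (dayfirst=True is assumed from the dd/mm/yyyy examples):

# Parse once and keep datetimes (the strftime step above turns them back into strings)
df['Date'] = pd.to_datetime(df['Date'], dayfirst=True)

# Day of week for every row at once: Monday=0 ... Sunday=6
df['Day'] = df['Date'].dt.dayofweek

# Or flag only the four specific dates
my_dates = pd.to_datetime(['02/02/2018', '09/02/2018',
                           '16/02/2018', '23/02/2018'], dayfirst=True)
df.loc[df['Date'].isin(my_dates), 'Day'] = '2'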
I've got a DataFrame with two columns, one of them being a "from" datetime and one of them being a "to" datetime. I would like to change this DataFrame such that it has a single column or index for the date (e.g. 2015-07-06 00:00:00 in datetime form), with the variables of the other columns (like deep) split proportionately into each of the days. How might one approach this problem? I've meddled with groupby tricks and I'm not sure how to proceed.
So I don't have time to work through your specific problem at the moment, but the way to approach this is to use pandas.resample(). Here are the steps I would take: 1) resample your "to" date column by minute; 2) populate the other columns out over that resample; 3) add the date column back in as an index.
If this doesn't work or is tricky to work with, I would create a date range from your earliest date to your latest date (at the smallest interval you want, so maybe hourly?) and then run some conditional statements over your other columns to fill in the data.
Here is roughly what your code might look like for the resample portion (replace day with hour or whatever):
drange = pd.date_range('01-01-1970', '01-20-2018', freq='D')
data = data.resample('D').fillna(method='ffill')
data.index.name = 'date'
Hope this helps!
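For the proportional-split part specifically, here is a small sketch (column names "from", "to", and "deep", and the sample values, are assumptions, since the original DataFrame was shown as an image): each row's value is spread across the calendar days it overlaps, weighted by the fraction of its duration falling on each day.

import pandas as pd

# Hypothetical frame matching the question's description
df = pd.DataFrame({
    "from": pd.to_datetime(["2015-07-06 20:00", "2015-07-07 04:00"]),
    "to":   pd.to_datetime(["2015-07-07 08:00", "2015-07-07 10:00"]),
    "deep": [12.0, 6.0],
})

rows = []
for _, r in df.iterrows():
    days = pd.date_range(r["from"].normalize(), r["to"].normalize(), freq="D")
    total = (r["to"] - r["from"]).total_seconds()
    for day in days:
        # Overlap of this row's interval with this calendar day
        start = max(r["from"], day)
        end = min(r["to"], day + pd.Timedelta(days=1))
        frac = (end - start).total_seconds() / total
        rows.append({"date": day, "deep": r["deep"] * frac})

daily = pd.DataFrame(rows).groupby("date").sum()
print(daily)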