I have a dataframe with dates from 2006 to 2016, and for each date there are 7 corresponding values.
The data looks like this:
H PS T RH TD WDIR WSP
date
2006-01-01 11:28:00 38 988.6 0.9 98.0 0.6 120.0 14.4
2006-01-01 11:28:00 46 987.6 0.5 91.0 -0.7 122.0 15.0
2006-01-01 11:28:00 57 986.3 0.5 89.0 -1.1 124.0 15.5
2006-01-01 11:28:00 66 985.1 0.5 90.0 -1.1 126.0 16.0
2006-01-01 11:28:00 74 984.1 0.4 90.0 -1.1 127.0 16.5
2006-01-01 11:28:00 81 983.3 0.4 90.0 -1.1 129.0 17.0
I want to select a few columns for each year (for example, T and RH for all of 2006). So, for each year from 2006 to 2016, select a bunch of columns and then write each of the new dataframes to a file.
I did the following:
df_H_T=(df[['RH','T']])
mask = (df_H_T['date'] >'2016-01-01 00:00:00') & (df_H_T['date'] <='2016-12-31 23:59:59')
df_H_T_2006 =df.loc[mask]
print(df_H_T_2006.head(20))
print(df_H_T_2006.tail(20))
But it is not working: it seems it doesn't know what 'date' is, yet when I print the head of the dataframe, date appears to be there. What am I doing wrong?
My second question is: how can I put this in a loop over the year variable, so that I don't have to write each new dataframe by hand, selecting one year at a time up to 2016? (I'm new and have never used loops in Python.)
Thanks,
Ioana
date is in the original dataframe, but then you take df_H_T = df[['RH','T']], so date is no longer in df_H_T. You can use masks generated from one dataframe to slice another, as long as they share the same index. So you can do:
mask = (df['date'] > '2016-01-01 00:00:00') & (df['date'] <= '2016-12-31 23:59:59')
df_H_T_2006 = df_H_T.loc[mask]
(Note: in your code you apply the mask to df, but presumably you want to apply it to df_H_T.)
If date is in datetime format, you can just take df['date'].apply(lambda x: x.year == 2016). For your for-loop, it would be:
df_H_T = df[['RH', 'T']]
years = range(2006, 2017)  # 2006 through 2016
for year in years:
    mask = df['date'].apply(lambda x: x.year == year)
    df_H_T_cur_year = df_H_T.loc[mask]
    print(df_H_T_cur_year.head(20))
    print(df_H_T_cur_year.tail(20))
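If the goal is also to write each year's selection to its own file, here is a small sketch (it assumes date is an actual datetime column rather than the index, and the filename pattern is only an example):
df_H_T = df[['date', 'RH', 'T']]
for year in range(2006, 2017):
    # keep only the rows for this year and write them out
    year_data = df_H_T[df_H_T['date'].dt.year == year]
    year_data.to_csv('T_RH_{}.csv'.format(year), index=False)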
The small reproducible example below sets up a dataframe that is 100 years in length containing some randomly generated values. It then inserts three 100-day stretches of missing values. Using this small example, I am attempting to sort out the pandas commands that will fill in the missing days using average values for that day of the year (hence the use of .groupby), but with a condition. For example, if April 12th is missing, how can the last line of code be altered so that only the 10 nearest April 12ths are used to fill in the missing value? In other words, a missing April 12th value in 1920 would be filled in using the mean of the April 12th values between 1915 and 1925; a missing April 12th value in 2000 would be filled in with the mean of the April 12th values between 1995 and 2005, and so on. I tried playing around with adding a .rolling() to the lambda function in the last line of the script, but was unsuccessful.
Bonus question: The example below extends from 1918 to 2018. If a value is missing on April 12th 1919, for example, it would still be nice if ten April 12ths were used to fill in the missing value, even though the window couldn't be 'centered' on the missing day because of its proximity to the beginning of the time series. Is there a solution to the first question above that would be flexible enough to still use a minimum of 10 values when missing values are close to the beginning or end of the time series?
import pandas as pd
import numpy as np
import random
# create 100 yr time series
dates = pd.date_range(start="1918-01-01", end="2018-12-31").strftime("%Y-%m-%d")
vals = [random.randrange(1, 50, 1) for i in range(len(dates))]
# Create some arbitrary gaps
vals[100:200] = vals[9962:10062] = vals[35895:35995] = [np.nan] * 100
# Create dataframe
df = pd.DataFrame(dict(
list(
zip(["Date", "vals"],
[dates, vals])
)
))
# confirm missing vals
df.iloc[95:105]
df.iloc[35890:35900]
# set a date index (for use by groupby)
df.index = pd.DatetimeIndex(df['Date'])
df['Date'] = df.index
# Need help restricting the mean to the 10 nearest same-days-of-the-year:
df['vals'] = df.groupby([df.index.month, df.index.day])['vals'].transform(lambda x: x.fillna(x.mean()))
This answers both parts:
1. Build a DataFrame dfr that holds the calculation you want; the lambda function returns a dict {year: val, ...}.
2. Make sure the indexes are named in a reasonable way.
3. Expand the dict out into columns with apply(pd.Series).
4. Reshape by putting the year columns back into the index.
5. merge() the built DataFrame with the original DataFrame; the vals column contains the NaNs and the 0 column holds the values to fill with.
6. Finally, fillna().
# create 100 yr time series
dates = pd.date_range(start="1918-01-01", end="2018-12-31")
vals = [random.randrange(1, 50, 1) for i in range(len(dates))]
# Create some arbitrary gaps
vals[100:200] = vals[9962:10062] = vals[35895:35995] = [np.nan] * 100
# Create dataframe - simplified from question...
df = pd.DataFrame({"Date":dates,"vals":vals})
df[df.isna().any(axis=1)]
ystart = df.Date.dt.year.min()
# generate rolling means for month/day. bfill for when it's start of series
dfr = (df.groupby([df.Date.dt.month, df.Date.dt.day])["vals"]
       .agg(lambda s: {y + ystart: v for y, v in enumerate(s.dropna().rolling(5).mean().bfill())})
       .to_frame().rename_axis(["month", "day"])
      )
# expand dict into columns and reshape to be indexed by month, day, year
dfr = dfr.join(dfr.vals.apply(pd.Series)).drop(columns="vals").rename_axis("year", axis=1).stack().to_frame()
# get df index back, plus vals & fillna (column 0) can be seen alongside each other
dfm = df.merge(dfr, left_on=[df.Date.dt.month, df.Date.dt.day, df.Date.dt.year], right_index=True)
# finally what we really want to do - fill the NaNs
df.fillna(dfm[0])
Analysis
Taking the NaN for 11-Apr-1918, the default fill is 22, as it's backfilled from 1921:
(12+2+47+47+2)/5 == 22
dfm.query("key_0==4 & key_1==11").head(7)
      key_0  key_1  key_2                 Date  vals     0
100       4     11   1918  1918-04-11 00:00:00   nan    22
465       4     11   1919  1919-04-11 00:00:00    12    22
831       4     11   1920  1920-04-11 00:00:00     2    22
1196      4     11   1921  1921-04-11 00:00:00    47    27
1561      4     11   1922  1922-04-11 00:00:00    47    36
1926      4     11   1923  1923-04-11 00:00:00     2  34.6
2292      4     11   1924  1924-04-11 00:00:00    37  29.4
I'm not sure how closely I've matched the intent of your question. The approach I've taken satisfies two requirements:
1. get an arbitrary number of averages, and
2. use those averages to fill in the NAs.
Simply put, instead of filling in the NAs with values from the dates immediately before and after, I fill them in with averages extracted from a number of years at once.
import pandas as pd
import numpy as np
import random
# create 100 yr time series
dates = pd.date_range(start="1918-01-01", end="2018-12-31").strftime("%Y-%m-%d")
vals = [random.randrange(1, 50, 1) for i in range(len(dates))]
# Create some arbitrary gaps
vals[100:200] = vals[9962:10062] = vals[35895:35995] = [np.nan] * 100
# Create dataframe
df = pd.DataFrame(dict(
list(
zip(["Date", "vals"],
[dates, vals])
)
))
df['Date'] = pd.to_datetime(df['Date'])
# split the date into month-day and year strings so the frame can be pivoted
df['mm-dd'] = df['Date'].apply(lambda x: '{:02}-{:02}'.format(x.month, x.day))
df['yyyy'] = df['Date'].apply(lambda x: '{:04}'.format(x.year))
# pivot to one row per calendar day (mm-dd) and one column per year
df = df.iloc[:, 1:].pivot(index='mm-dd', columns='yyyy')
df.columns = df.columns.droplevel(0)
# count missing years per calendar day, then take the mean of 10 randomly sampled year columns
df['nans'] = df.isnull().sum(axis=1)
df['10n_mean'] = df.iloc[:, :-1].sample(n=10, axis=1).mean(axis=1)
df['10n_mean'] = df['10n_mean'].round(1)
df.loc[df['nans'] >= 1]
yyyy 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 ... 2011 2012 2013 2014 2015 2016 2017 2018 nans 10n_mean
mm-dd
02-29 NaN NaN 34.0 NaN NaN NaN 2.0 NaN NaN NaN ... NaN 49.0 NaN NaN NaN 32.0 NaN NaN 76 21.6
04-11 NaN 43.0 12.0 28.0 29.0 28.0 1.0 38.0 11.0 3.0 ... 17.0 35.0 8.0 17.0 34.0 NaN 5.0 33.0 3 29.7
04-12 NaN 19.0 38.0 34.0 48.0 46.0 28.0 29.0 29.0 14.0 ... 41.0 16.0 9.0 39.0 8.0 NaN 1.0 12.0 3 21.3
04-13 NaN 33.0 26.0 47.0 21.0 26.0 20.0 16.0 11.0 7.0 ... 5.0 11.0 34.0 28.0 27.0 NaN 2.0 46.0 3 21.3
04-14 NaN 36.0 19.0 6.0 45.0 41.0 24.0 39.0 1.0 11.0 ... 30.0 47.0 45.0 14.0 48.0 NaN 16.0 8.0 3 24.7
df_mean = df.T.fillna(df['10n_mean'], downcast='infer').T
df_mean.loc[df_mean['nans'] >= 1]
yyyy 1918 1919 1920 1921 1922 1923 1924 1925 1926 1927 ... 2011 2012 2013 2014 2015 2016 2017 2018 nans 10n_mean
mm-dd
02-29 21.6 21.6 34.0 21.6 21.6 21.6 2.0 21.6 21.6 21.6 ... 21.6 49.0 21.6 21.6 21.6 32.0 21.6 21.6 76.0 21.6
04-11 29.7 43.0 12.0 28.0 29.0 28.0 1.0 38.0 11.0 3.0 ... 17.0 35.0 8.0 17.0 34.0 29.7 5.0 33.0 3.0 29.7
04-12 21.3 19.0 38.0 34.0 48.0 46.0 28.0 29.0 29.0 14.0 ... 41.0 16.0 9.0 39.0 8.0 21.3 1.0 12.0 3.0 21.3
04-13 21.3 33.0 26.0 47.0 21.0 26.0 20.0 16.0 11.0 7.0 ... 5.0 11.0 34.0 28.0 27.0 21.3 2.0 46.0 3.0 21.3
04-14 24.7 36.0 19.0 6.0 45.0 41.0 24.0 39.0 1.0 11.0 ... 30.0 47.0 45.0 14.0 48.0 24.7 16.0 8.0 3.0 24.7
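If the long Date/vals layout is needed again after filling, one possible sketch is to stack the year columns back out (the helper columns are dropped first; Feb 29 in non-leap years becomes NaT and is removed):
long_df = (df_mean.drop(columns=['nans', '10n_mean'])
                  .stack()
                  .rename('vals')
                  .reset_index())
# rebuild a proper date from the yyyy and mm-dd strings
long_df['Date'] = pd.to_datetime(long_df['yyyy'] + '-' + long_df['mm-dd'], errors='coerce')
long_df = long_df.dropna(subset=['Date']).sort_values('Date')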
I'm using code to create a cohort analysis based on customer retention. As the code stands, the analysis is broken down by day. I need the output broken down by month instead. In the output sample below, you will see that the index is by day, but it should be by month. The 1, 2, 3, 4 columns should each represent a day of the month, so I need columns from 1 to 31 in the output.
.csv file sample:
timestamp user_hash
0 2019-02-01 NaN
1 2019-02-01 e044a52188dbccd71428
2 2019-02-01 NaN C0D1A22B-9ECB-4DEF
3 2019-02-01 d7260762b66295fbf9e5
4 2019-02-01 d7260762b66295fbf9e5
Actual output sample:
CohortIndex 1 2 3 4
CohortMonth
2019-02-01 399.0 202.0 160.0 117.0
2019-02-02 215.0 109.0 89.0 61.0
2019-02-03 146.0 79.0 62.0 50.0
2019-02-04 175.0 67.0 50.0 32.0
2019-02-05 179.0 52.0 39.0 32.0
2019-02-06 137.0 31.0 29.0 16.0
2019-02-07 139.0 42.0 33.0 25.0
2019-02-08 143.0 35.0 32.0 24.0
2019-02-09 105.0 31.0 23.0 12.0
The code used is the following:
import pandas as pd
import datetime as dt
df_events = pd.read_csv('.../events.csv')
# convert the object column to datetime and remove the time portion
df_events['timestamp'] = pd.to_datetime(df_events['timestamp'].str.strip(), format='%d/%m/%y %H:%M' )
df_events['timestamp'] = df_events['timestamp'].dt.date
#drop NaN from user_hash column
clean_data = df_events[df_events['user_hash'].notna()]
# function to check if we have NaN where we shouldn't
def missing_data(x):
    return x.isna().sum()
# uses datetime to get the month from a timestamp and strips the day/time
def get_month(x):
    return dt.datetime(x.year, x.month, 1)  # year, month, day fixed to 1
#create a new column
clean_data['LoginMonth'] = clean_data['timestamp'].apply(get_month)
# create a new column called CohortMonth using groupby: each user's first activity date
clean_data['CohortMonth'] = clean_data.groupby('user_hash')['timestamp'].transform('min')
# create the cohort: helper that splits a datetime column into year, month and day
def get_date(df, column):
    year = df[column].dt.year
    month = df[column].dt.month
    day = df[column].dt.day
    return year, month, day
# unpack year and month; the function returns three values, so _ discards the day
login_year,login_month, _ = get_date(clean_data,'LoginMonth')
clean_data['CohortMonth'] = pd.to_datetime(clean_data['CohortMonth'])
cohort_year,cohort_month, _ = get_date(clean_data,'CohortMonth')
year_diff = login_year - cohort_year
month_diff = login_month - cohort_month
clean_data['CohortIndex'] = year_diff*12 + month_diff +1
#create cohort analysis data table
cohort_data = clean_data.groupby(['CohortMonth','CohortIndex'])['user_hash'].apply(pd.Series.nunique).reset_index()
cohort_count = cohort_data.pivot_table(index='CohortMonth',
columns='CohortIndex',
values='user_hash')
Thanks!
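One direction that might produce the desired layout, offered only as a rough sketch (it reuses the names from the code above and interprets CohortIndex as days since each user's first activity, which can run past 31 when retention extends beyond a month):
# each user's first activity, as a datetime
first_login = pd.to_datetime(clean_data.groupby('user_hash')['timestamp'].transform('min'))
# cohort rows indexed by month, cohort index counted in days
clean_data['CohortMonth'] = first_login.apply(get_month)
clean_data['CohortIndex'] = (pd.to_datetime(clean_data['timestamp']) - first_login).dt.days + 1
cohort_count = (clean_data
                .groupby(['CohortMonth', 'CohortIndex'])['user_hash']
                .nunique()
                .unstack('CohortIndex'))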
I am trying to make a graph that shows the average temperature for each day of the year by averaging 19 years of NOAA data (side note: is there a better way to get historical weather data? NOAA's seems super inconsistent). I was wondering what the best way to set up the data would be. The relevant columns of my data look like this:
DATE PRCP TAVG TMAX TMIN TOBS
0 1990-01-01 17.0 NaN 13.3 8.3 10.0
1 1990-01-02 0.0 NaN NaN NaN NaN
2 1990-01-03 0.0 NaN 13.3 2.8 10.0
3 1990-01-04 0.0 NaN 14.4 2.8 10.0
4 1990-01-05 0.0 NaN 14.4 2.8 11.1
... ... ... ... ... ... ...
10838 2019-12-27 0.0 NaN 15.0 4.4 13.3
10839 2019-12-28 0.0 NaN 14.4 5.0 13.9
10840 2019-12-29 3.6 NaN 15.0 5.6 14.4
10841 2019-12-30 0.0 NaN 14.4 6.7 12.2
10842 2019-12-31 0.0 NaN 15.0 6.7 13.9
10843 rows × 6 columns
The DATE column is the datetime64[ns] type
Here's my code:
import pandas as pd
from matplotlib import pyplot as plt
data = pd.read_csv('1990-2019.csv')
# separate the data by station
oceanside = data[data.STATION == 'USC00047767']
downtown = data[data.STATION == 'USW00023272']
oceanside.loc[:,'DATE'] = pd.to_datetime(oceanside.loc[:,'DATE'],format='%Y-%m-%d')
#This is the area I need help with:
oceanside['DATE'].dt.year
I've been trying to separate the data by year, so I can then average it. I would like to do this without using a for loop because I plan on doing this with much larger data sets and that would be super inefficient. I looked in the pandas documentation but I couldn't find a function that seemed like it would do that. Am I missing something? Is that even the right way to do it?
I am new to pandas/python data analysis so it is very possible the answer is staring me in the face.
Any help would be greatly appreciated!
Create a dict of dataframes where each key is a year:
df_by_year = dict()
for year in oceanside.DATE.dt.year.unique():
    data = oceanside[oceanside.DATE.dt.year == year]
    df_by_year[year] = data
Get data by a single year
oceanside[oceanside.DATE.dt.year == 2019]
Get average for each year
oceanside.groupby(oceanside.DATE.dt.year).mean()
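Since the stated goal is the average temperature for each calendar day across all the years, grouping on month and day (rather than on year) may be closer to what's needed. A rough sketch, using the TMAX/TMIN columns from the sample above (column names are taken from the question; adjust as needed):
# average each calendar day across all years, then plot the result
daily_avg = (oceanside
             .groupby([oceanside.DATE.dt.month, oceanside.DATE.dt.day])[['TMAX', 'TMIN']]
             .mean())
daily_avg.index.names = ['month', 'day']
daily_avg.plot()
plt.show()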
My data set has values like
date quantity
01/04/2018 35
01/05/2018 33
01/06/2018 75
01/07/2018 0
01/08/2018 70
01/09/2018 0
01/10/2018 66
Code I tried:
df['rollmean3'] = df['quantity'].rolling(3).mean()
output:
2018-04-01 35.0 NaN
2018-05-01 33.0 NaN
2018-06-01 75.0 47.666667
2018-07-01 0.0 36.000000
2018-08-01 70.0 48.333333
2018-09-01 0.0 23.333333
2018-10-01 66.0 45.333333
EXPECTED OUTPUT:
But I need the output to take the average of 35, 33 and 75 and fill it in where the first 0.0 value is,
and for the next zero it should calculate the average of the previous three values (including the value just filled in) and fill that in.
2018-04-01 35.0
2018-05-01 33.0
2018-06-01 75.0
2018-07-01 0.0 47.666667
2018-08-01 70.0
2018-09-01 0.0 64.22222 # average of (75, 47.6667 and 70)
2018-10-01 66.0
This is how the output should be displayed.
Unfortunately there does not seem to be a vectorized solution for this in Pandas. You'll need to iterate the rows and fill in the missing values one by one. This will be slow; if you need to speed it up you can JIT compile your code using Numba.
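For illustration only, one way the Numba suggestion might look (a sketch, not part of the original answer; it assumes the quantities are passed in as a float NumPy array):
import numpy as np
from numba import njit

@njit
def fill_zeros_with_prev_mean(vals, window=3):
    # replace each zero with the mean of the preceding `window` values,
    # reusing values that were themselves just filled
    out = vals.copy()
    for i in range(window, len(out)):
        if out[i] == 0:
            out[i] = out[i - window:i].mean()
    return out

# hypothetical usage:
# filled = fill_zeros_with_prev_mean(df['quantity'].to_numpy(np.float64))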
Like John Zwinck said, there's no vectorized solution in pandas for this.
You'll have to use something like .iterrows(), like this:
for i, row in df.iterrows():
    if row['quantity'] == 0:
        df.loc[i, 'quantity'] = df['quantity'].iloc[(i-3):i].mean()
Or even with recursion, if you prefer:
def fill_recursively(column: pd.Series, window_size: int = 3):
    if 0 in column.values:
        idx = column.tolist().index(0)
        column[idx] = column[(idx - window_size):idx].mean()
        column = fill_recursively(column)
    return column
You can verify that fill_recursively(df['quantity']) returns the desired result (just make sure that it has the dtype float, otherwise it will be rounded to the nearest integer).
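For example, a usage sketch (casting to float first so the filled means aren't truncated to integers):
df['quantity'] = fill_recursively(df['quantity'].astype(float))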
I have two data frames. One has rows for every five minutes in a day:
df
TIMESTAMP TEMP
1 2011-06-01 00:05:00 24.5
200 2011-06-01 16:40:00 32.0
1000 2011-06-04 11:20:00 30.2
5000 2011-06-18 08:40:00 28.4
10000 2011-07-05 17:20:00 39.4
15000 2011-07-23 02:00:00 29.3
20000 2011-08-09 10:40:00 29.5
30656 2011-09-15 10:40:00 13.8
I have another dataframe that ranks the days
ranked
TEMP DATE RANK
62 43.3 2011-08-02 1.0
63 43.1 2011-08-03 2.0
65 43.1 2011-08-05 3.0
38 43.0 2011-07-09 4.0
66 42.8 2011-08-06 5.0
64 42.5 2011-08-04 6.0
84 42.2 2011-08-24 7.0
56 42.1 2011-07-27 8.0
61 42.1 2011-08-01 9.0
68 42.0 2011-08-08 10.0
Both the TIMESTAMP and DATE columns are datetime datatypes (dtype returns dtype('M8[ns]')).
What I want to be able to do is add a column to the dataframe df that holds each row's rank, looked up from ranked based on the day of the TIMESTAMP (so within a day, all the 5-minute timesteps will have the same rank).
So, the final result would look something like this:
df
TIMESTAMP TEMP RANK
1 2011-06-01 00:05:00 24.5 98.0
200 2011-06-01 16:40:00 32.0 98.0
1000 2011-06-04 11:20:00 30.2 96.0
5000 2011-06-18 08:40:00 28.4 50.0
10000 2011-07-05 17:20:00 39.4 9.0
15000 2011-07-23 02:00:00 29.3 45.0
20000 2011-08-09 10:40:00 29.5 40.0
30656 2011-09-15 10:40:00 13.8 100.0
What I have done so far:
# Separate the date and times.
df['DATE'] = df['YYYYMMDDHHmm'].dt.normalize()
df['TIME'] = df['YYYYMMDDHHmm'].dt.time
df = df[['DATE', 'TIME', 'TAIR']]
df['RANK'] = 0
for index, row in df.iterrows():
    df.loc[index, 'RANK'] = ranked[ranked['DATE'] == row['DATE']]['RANK'].values
But I think I am going in a very wrong direction because this takes ages to complete.
How do I improve this code?
IIUC, you can play with indexes to match the values:
df = df.set_index(df.TIMESTAMP.dt.date)\
       .assign(RANK=ranked.set_index('DATE').RANK)\
       .set_index(df.index)
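A possible alternative, as a sketch (assuming both frames look as shown above), is to merge on a normalized date column instead of aligning indexes:
# normalize TIMESTAMP to midnight so it matches the daily DATE values in ranked
df['DATE'] = df['TIMESTAMP'].dt.normalize()
df = df.merge(ranked[['DATE', 'RANK']], on='DATE', how='left').drop(columns='DATE')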