.resample('W') picks up only one column although the subset includes two columns

I am trying to resample a Pandas dataframe after subsetting it to two columns. Below is the head of the dataframe. Both columns are Pandas Series.
temp_2011_clean[['visibility', 'dry_bulb_faren']].head()
visibility dry_bulb_faren
2011-01-01 00:53:00 10.00 51.0
2011-01-01 01:53:00 10.00 51.0
2011-01-01 02:53:00 10.00 51.0
2011-01-01 03:53:00 10.00 50.0
2011-01-01 04:53:00 10.00 50.0
type(temp_2011_clean['visibility'])
pandas.core.series.Series
type(temp_2011_clean['dry_bulb_faren'])
pandas.core.series.Series
While .resample('W') successfully creates the resampler object, chaining .mean() to it picks up only one column instead of the expected two. Can someone suggest what the issue could be? Why is one column missing?
temp_2011_clean[['visibility', 'dry_bulb_faren']].resample('W')
<pandas.core.resample.DatetimeIndexResampler object at 0x0000016F4B943288>
temp_2011_clean[['visibility', 'dry_bulb_faren']].resample('W').mean().head()
dry_bulb_faren
2011-01-02 44.791667
2011-01-09 50.246637
2011-01-16 41.103774
2011-01-23 47.194313
2011-01-30 53.486188

I think the problem is that the visibility column is not numeric, so the non-numeric column is excluded from the aggregation.
print (temp_2011_clean.dtypes)
visibility object
dry_bulb_faren float64
dtype: object
Reproducing with just the five rows from the head above, the mean keeps only dry_bulb_faren:
df = temp_2011_clean[['visibility', 'dry_bulb_faren']].resample('W').mean()
print (df)
dry_bulb_faren
2011-01-02 50.6
So convert the column to numeric with to_numeric, using errors='coerce' to turn non-numeric values into NaNs:
temp_2011_clean['visibility'] = pd.to_numeric(temp_2011_clean['visibility'], errors='coerce')
print (temp_2011_clean.dtypes)
visibility float64
dry_bulb_faren float64
dtype: object
df = temp_2011_clean[['visibility', 'dry_bulb_faren']].resample('W').mean()
print (df)
visibility dry_bulb_faren
2011-01-02 10.0 50.6
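If you want to see which raw values prevented visibility from being parsed as numeric in the first place, here is a minimal diagnostic sketch (assuming the column is still in its original object form):
# Show the raw strings that to_numeric cannot parse (hypothetical check)
raw = temp_2011_clean['visibility']
bad = raw[pd.to_numeric(raw, errors='coerce').isna() & raw.notna()]
print(bad.unique())
Weather datasets often use placeholders such as 'M' for missing readings; knowing the actual offenders helps you confirm that coercing them to NaN is the right treatment.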

Related

Copy row to another dataframe

I have 2 dataframes with index type DatetimeIndex and I would like to copy one row to another. The dataframes are:
variable: df
DateTime
2013-01-01 01:00:00 0.0
2013-01-01 02:00:00 0.0
2013-01-01 03:00:00 0.0
....
Freq: H, Length: 8759, dtype: float64
variable: consumption_year
Potência Ativa ... Costs
Datetime ...
2019-01-01 00:00:00 11.500000 ... 1.08874
2019-01-01 01:00:00 6.500000 ... 0.52016
2019-01-01 02:00:00 5.250000 ... 0.38183
2019-01-01 03:00:00 5.250000 ... 0.38183
[8760 rows x 5 columns]
here is my code:
mc.run_model(tmy_data)
df=round(mc.ac.fillna(0)/1000,3)
consumption_year['PVProduction'] = df.iloc[:,[1]] #1
consumption_year['PVProduction'] = df[:,1] #2
I am trying to copy the second column of df to a new column in the consumption_year dataframe, but neither of the attempts above worked. Looking at the indexes, I see 3 major differences:
year (2013 and 2019)
starting hour: 01:00 and 00:00
length: 8760 and 8759
Do I need to resolve those 3 differences first (making the datetime index of df match consumption_year's) before I can copy data from one to the other? If so, could you suggest a way to fix those differences?
Those are the errors:
1: consumption_year['PVProduction'] = df.iloc[:,[1]]
raise IndexingError("Too many indexers")
pandas.core.indexing.IndexingError: Too many indexers
2: consumption_year['PVProduction'] = df[:,1]
raise ValueError("Can only tuple-index with a MultiIndex")
ValueError: Can only tuple-index with a MultiIndex
You can merge the two dataframes together on their indexes:
pd.merge(df, consumption_year, left_index=True, right_index=True, how='outer')
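Note, however, that the two indexes cover different years (2013 vs. 2019), so an index-based merge will not align any rows here. Also, the two errors suggest that df is actually a Series, not a DataFrame, which is why the two-dimensional indexer df.iloc[:, [1]] fails. If the goal is simply to copy the values positionally, ignoring the mismatched timestamps, a rough sketch (assuming df's first value corresponds to consumption_year's 01:00 row) is:
# Copy df's 8759 values onto consumption_year's rows starting at 01:00;
# the uncovered 00:00 row is left as NaN
consumption_year['PVProduction'] = pd.Series(
    df.values, index=consumption_year.index[1:1 + len(df)]
)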

Pandas datetime columns problem and I don't know what I am missing

I am a Korean student, so please understand that my English is awkward.
I want to split a datetime column into separate columns: year, month, ..., second.
train = pd.read_csv('input/Train.csv')
DateTime looks like this
(this is head(20); I removed the other columns to make it easier to read)
datetime
0 2011-01-01 00:00:00
1 2011-01-01 01:00:00
2 2011-01-01 02:00:00
3 2011-01-01 03:00:00
4 2011-01-01 04:00:00
5 2011-01-01 05:00:00
6 2011-01-01 06:00:00
7 2011-01-01 07:00:00
8 2011-01-01 08:00:00
9 2011-01-01 09:00:00
10 2011-01-01 10:00:00
11 2011-01-01 11:00:00
12 2011-01-01 12:00:00
13 2011-01-01 13:00:00
14 2011-01-01 14:00:00
15 2011-01-01 15:00:00
16 2011-01-01 16:00:00
17 2011-01-01 17:00:00
18 2011-01-01 18:00:00
19 2011-01-01 19:00:00
Then I wrote this code to extract each component column (year, month, day, hour, minute, second):
train['year'] = train['datetime'].dt.year
train['month'] = train['datetime'].dt.month
train['day'] = train['datetime'].dt.day
train['hour'] = train['datetime'].dt.hour
train['minute'] = train['datetime'].dt.minute
train['second'] = train['datetime'].dt.second
and I get an error like this:
AttributeError: Can only use .dt accessor with datetimelike values
please help me ㅠㅅㅠ
Note that by default read_csv is able to deduce column types only for numeric and boolean columns. Unless explicitly specified (e.g. by passing converters or dtype parameters), all other input columns are left as strings, and the pandas dtype of such columns is object. That is just what happened in your case.
So, as this column is of object type, you cannot invoke the .dt accessor on it; it works only on columns of datetime type.
Actually, in this case, you can take the following approach:
do not specify any conversion for this column (it will be parsed just as object),
after that, split the datetime column into its "parts" using str.split (all 6 columns with a single instruction),
set proper column names on the resulting DataFrame,
join it to the original DataFrame (then drop the working frame),
and only now change the type of the original column.
To do it, you can run:
# Split on '-', ' ' and ':' into 6 integer columns
wrk = train['datetime'].str.split(r'[- :]', expand=True).astype(int)
wrk.columns = ['year', 'month', 'day', 'hour', 'minute', 'second']
train = train.join(wrk)
del wrk
train['datetime'] = pd.to_datetime(train['datetime'])
Note that I added astype(int); otherwise these columns would be left as object (actually string) type.
Or maybe the original column is not needed any more (since you have extracted all the date/time components)? In that case, drop this column instead of converting it.
And one last hint: datetime is generally used as a type name (in various spellings), so it is better to use some other name for the column, at least differing in case, e.g. DateTime.
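Alternatively, if you want to keep your original approach, a minimal sketch is to convert the column with pd.to_datetime first; after that your own .dt code works:
# Parse the object column into datetime64 first ...
train['datetime'] = pd.to_datetime(train['datetime'])
# ... then the .dt accessor works as intended
train['year'] = train['datetime'].dt.year
train['second'] = train['datetime'].dt.second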

Difference between pandas aggregators .first() and .last()

I'm curious as to what last() and first() do in this specific instance (when chained to a resampling). Correct me if I'm wrong, but I understand that if you pass an argument into first or last, e.g. 3, it returns the first 3 months or the first 3 years.
In this circumstance, since I'm not passing any argument into first() or last(), what are they actually doing when I resample like that? I know that if I resample by chaining .mean(), I'll resample into years with the mean score from averaging all the months, but what happens when I use last()?
More importantly, why do first() and last() give me different answers in this context? I can see that numerically they are not equal.
i.e. post2008.resample().first() != post2008.resample().last()
TLDR:
What do .first() and .last() do?
What do .first() and .last() do in this instance, when chained to a resample?
Why does .resample().first() != .resample().last()?
This is the code before the aggregation:
# Read 'GDP.csv' into a DataFrame: gdp
gdp = pd.read_csv('GDP.csv', index_col='DATE', parse_dates=True)
# Slice all the gdp data from 2008 onward: post2008
post2008 = gdp.loc['2008-01-01':,:]
# Print the last 8 rows of post2008
print(post2008.tail(8))
This is what print(post2008.tail(8)) outputs:
VALUE
DATE
2014-07-01 17569.4
2014-10-01 17692.2
2015-01-01 17783.6
2015-04-01 17998.3
2015-07-01 18141.9
2015-10-01 18222.8
2016-01-01 18281.6
2016-04-01 18436.5
Here is the code that resamples and aggregates by last():
# Resample post2008 by year, keeping last(): yearly
yearly = post2008.resample('A').last()
print(yearly)
This is what yearly looks like with post2008.resample('A').last():
VALUE
DATE
2008-12-31 14549.9
2009-12-31 14566.5
2010-12-31 15230.2
2011-12-31 15785.3
2012-12-31 16297.3
2013-12-31 16999.9
2014-12-31 17692.2
2015-12-31 18222.8
2016-12-31 18436.5
Here is the code that resamples and aggregates by first():
# Resample post2008 by year, keeping first(): yearly
yearly = post2008.resample('A').first()
print(yearly)
This is what yearly looks like with post2008.resample('A').first():
VALUE
DATE
2008-12-31 14668.4
2009-12-31 14383.9
2010-12-31 14681.1
2011-12-31 15238.4
2012-12-31 15973.9
2013-12-31 16475.4
2014-12-31 17025.2
2015-12-31 17783.6
2016-12-31 18281.6
Before anything else, let's create a dataframe with example data:
import pandas as pd
dates = pd.DatetimeIndex(['2014-07-01', '2014-10-01', '2015-01-01',
'2015-04-01', '2015-07-01', '2015-07-01',
'2016-01-01', '2016-04-01'])
df = pd.DataFrame({'VALUE': range(1000, 9000, 1000)}, index=dates)
print(df)
The output will be
VALUE
2014-07-01 1000
2014-10-01 2000
2015-01-01 3000
2015-04-01 4000
2015-07-01 5000
2015-07-01 6000
2016-01-01 7000
2016-04-01 8000
If we pass e.g. '6M' to df.first (which is not an aggregator, but a DataFrame method), we will be selecting the first six months of data, which in the example above consists of just two rows:
print(df.first('6M'))
VALUE
2014-07-01 1000
2014-10-01 2000
Similarly, last returns only the rows that belong to the last six months of data:
print(df.last('6M'))
VALUE
2016-01-01 7000
2016-04-01 8000
In this context, not passing the required argument results in an error:
print(df.first())
TypeError: first() missing 1 required positional argument: 'offset'
On the other hand, df.resample('Y') returns a Resampler object, which has aggregation methods first, last, mean, etc. In this case, they keep only the first (respectively, last) values of each year (instead of e.g. averaging all values, or some other kind of aggregation):
print(df.resample('Y').first())
VALUE
2014-12-31 1000
2015-12-31 3000 # This is the first of the 4 values from 2015
2016-12-31 7000
print(df.resample('Y').last())
VALUE
2014-12-31 2000
2015-12-31 6000 # This is the last of the 4 values from 2015
2016-12-31 8000
As an extra example, consider also the case of grouping by a smaller period:
print(df.resample('M').last().head())
VALUE
2014-07-31 1000.0 # This is the last (and only) value from July, 2014
2014-08-31 NaN # No data for August, 2014
2014-09-30 NaN # No data for September, 2014
2014-10-31 2000.0
2014-11-30 NaN # No data for November, 2014
In this case, any period for which there is no value is filled with NaN. Also, for the months shown, using first instead of last would have returned the same values, since each of those months has at most one value (July 2015, further down, has two, so there first and last would differ).
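If you want to inspect both aggregations side by side, one option (a small sketch reusing the example df from above) is to pass both names to agg:
# Produces one sub-column per aggregator: (VALUE, first) and (VALUE, last)
print(df.resample('Y').agg(['first', 'last']))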

Pandas Series fillna with default date

How do I fill NaN values in a dataframe with a default date of 2015-01-01?
What do I use here: df['SIGN_DATE'] = df['SIGN_DATE'].fillna(??, inplace=True)
>>>df.SIGN_DATE.head()
0 2012-03-28 14:14:18
1 2011-05-18 00:41:48
2 2011-06-13 16:36:58
3 nan
4 2011-05-22 23:43:56
Name: SIGN_DATE, dtype: object
type(df.SIGN_DATE)
pandas.core.series.Series
df['SIGN_DATE'].fillna(value=pd.to_datetime('1/1/2015'), inplace=True)
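One caveat: fillna(..., inplace=True) returns None, so the pattern in the question (assigning its result back to the column) would wipe the column out. A minimal sketch of the safer variant, optionally converting the object column to real datetimes afterwards:
# Assign the result back instead of combining assignment with inplace=True
df['SIGN_DATE'] = df['SIGN_DATE'].fillna(pd.Timestamp('2015-01-01'))
# Optionally convert the whole column from object to datetime64
df['SIGN_DATE'] = pd.to_datetime(df['SIGN_DATE'])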

Getting rid of a hierarchical index in Pandas

I have just pivoted a dataframe to create the dataframe below:
date 2012-10-31 2012-11-30
term
red -4.043862 -0.709225
blue -18.046630 -8.137812
green -8.339924 -6.358016
The columns are supposed to be dates and the leftmost column is supposed to contain strings.
I want to be able to run through the rows (using .apply()) and compare the values under each date column. The problem I am having is that I think the df has a hierarchical index.
Is there a way to give the whole df a new index (e.g. 1, 2, 3, etc.) and then have a flat index (without losing the terms in the first column)?
EDIT: When I try to use .reset_index() I get the error ending with 'AttributeError: 'str' object has no attribute 'view''.
EDIT 2: [screenshot of the df omitted]
EDIT 3: here is the description of the df:
<class 'pandas.core.frame.DataFrame'>
Index: 14597 entries, 101016j to zymogens
Data columns (total 6 columns):
2012-10-31 00:00:00 14597 non-null values
2012-11-30 00:00:00 14597 non-null values
2012-12-31 00:00:00 14597 non-null values
2013-01-31 00:00:00 14597 non-null values
2013-02-28 00:00:00 14597 non-null values
2013-03-31 00:00:00 14597 non-null values
dtypes: float64(6)
Thanks in advance.
df = df.reset_index()
This will take the current index, make it a column, and then give you a fresh integer index starting from 0.
Here is an example:
import pandas as pd
df = pd.DataFrame({'2012-10-31': [-4, -18, -18], '2012-11-30': [-0.7, -8.0, -6.0]}, index=pd.Index(['red', 'blue', 'green'], name='term'))
df
2012-10-31 2012-11-30
term
red -4 -0.7
blue -18 -8.0
green -18 -6.0
df.reset_index()
term 2012-10-31 2012-11-30
0 red -4 -0.7
1 blue -18 -8.0
2 green -18 -6.0
EDIT: When I try to use .reset_index() I get the error ending with 'AttributeError: 'str' object has no attribute 'view''.
Try converting your date columns to string type first.
I think pandas doesn't like to reset_index() here because you are trying to reset your string index into a frame whose columns consist only of dates. If you only have dates as columns, pandas handles them internally as a DatetimeIndex. When calling reset_index(), pandas tries to set up your string index as a further column alongside the date columns and somehow fails. Looks like a bug to me, but I'm not sure.
Example:
t = pandas.DataFrame({pandas.to_datetime('2011') : [1,2], pandas.to_datetime('2012') : [3,4]}, index=['A', 'B'])
t
2011-01-01 00:00:00 2012-01-01 00:00:00
A 1 3
B 2 4
t.columns
<class 'pandas.tseries.index.DatetimeIndex'>
[2011-01-01 00:00:00, 2012-01-01 00:00:00]
Length: 2, Freq: None, Timezone: None
t.reset_index()
...
AttributeError: 'str' object has no attribute 'view'
If you try it with string columns, it will work.
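A minimal sketch of that fix on the toy frame above (assuming a pandas version where a DatetimeIndex can be cast to strings with astype):
# Cast the date columns to plain strings, then reset_index succeeds
t.columns = t.columns.astype(str)
t = t.reset_index()
print(t)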
