'OrderedDict' object has no attribute 'sort' - python

I am trying to sort data imported from Excel but I get the error below. Why, after importing into a dataframe, does it say it is an ordered dictionary?
Error:
'OrderedDict' object has no attribute 'sort'
Code:
import pandas as pd
dfs = pd.read_excel("data.xlsx", sheet_name=None)
dfs
data_df = (dfs.sort(['Date','Tank','Time']).groupby(['Date','Tank']))
data_df
DF:
OrderedDict([(u'Sheet1',
Date Time Tank Sales Quantity Delivery
0 2018-01-01 06:30:00 1 100 3444 0
1 2018-01-01 07:00:00 1 200 3144 0
2 2018-01-01 05:30:00 1 100 2900 0
3 2018-01-01 07:30:00 1 200 2800 0
4 2018-01-01 06:30:00 2 50 3000 0
5 2018-01-01 07:00:00 2 100 2950 0
6 2018-01-01 05:30:00 2 150 2800 0
7 2018-01-01 07:30:00 2 100 2704 0
8 2018-01-02 06:30:00 1 100 3444 0
9 2018-01-02 07:00:00 1 200 3144 0
10 2018-01-02 05:30:00 1 100 2900 50
11 2018-01-02 07:30:00 1 200 2800 0
12 2018-01-02 06:30:00 2 50 3000 0
13 2018-01-02 07:00:00 2 100 2950 0
14 2018-01-02 05:30:00 2 150 2800 50
15 2018-01-02 07:30:00 2 100 2704 0)])

Because of the parameter sheet_name=None in read_excel:
sheet_name : string, int, mixed list of strings/ints, or None, default 0
...
None -> All sheets as a dictionary of DataFrames
Also check the documentation on specifying sheets.
So pass no sheet_name parameter to get the first sheet:
df = pd.read_excel("data.xlsx")
Or specify sheet_name if necessary:
df = pd.read_excel("data.xlsx", sheet_name='Sheet1')
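Note that even with a single DataFrame, the original call would still fail: DataFrame.sort was removed in pandas 0.20, and sort_values is its replacement. A minimal sketch with a hypothetical stand-in for the sheet data (since data.xlsx is not available here):

```python
import pandas as pd

# Hypothetical stand-in for the sheet read from data.xlsx.
df = pd.DataFrame({
    'Date': ['2018-01-01'] * 4,
    'Tank': [1, 1, 2, 2],
    'Time': ['06:30:00', '05:30:00', '07:00:00', '05:30:00'],
    'Sales': [100, 100, 100, 150],
})

# DataFrame.sort() was removed in pandas 0.20; sort_values replaces it.
grouped = df.sort_values(['Date', 'Tank', 'Time']).groupby(['Date', 'Tank'])
print(grouped.size())
```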

Related

Drop overlapping periods less than 6 months in pandas dataframe

I have the following Pandas dataframe, and I want to drop the rows for each customer where the difference between dates is less than 6 months. For example, for the customer with ID 1 I want to keep only these dates: 2017-07-01, 2018-01-01, 2018-08-01.
Customer_ID Date
1 2017-07-01
1 2017-08-01
1 2017-09-01
1 2017-10-01
1 2017-11-01
1 2017-12-01
1 2018-01-01
1 2018-02-01
1 2018-03-01
1 2018-04-01
1 2018-06-01
1 2018-08-01
2 2018-11-01
2 2019-02-01
2 2019-03-01
2 2019-05-01
2 2020-02-01
2 2020-05-01
Define the following function to process each group of rows (one group per customer):
def selDates(grp):
    res = []
    while grp.size > 0:
        stRow = grp.iloc[0]
        res.append(stRow)
        grp = grp[grp.Date >= stRow.Date + pd.DateOffset(months=6)]
    return pd.DataFrame(res)
Then apply this function to each group:
result = df.groupby('Customer_ID', group_keys=False).apply(selDates)
The result, for your data sample, is:
Customer_ID Date
0 1 2017-07-01
6 1 2018-01-01
11 1 2018-08-01
12 2 2018-11-01
15 2 2019-05-01
16 2 2020-02-01
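Putting the pieces together with the sample data, the answer is runnable end to end. The function greedily keeps the first date in each group, then skips forward to the next date at least 6 months later, repeating until the group is exhausted:

```python
import pandas as pd

df = pd.DataFrame({
    'Customer_ID': [1] * 12 + [2] * 6,
    'Date': pd.to_datetime(
        ['2017-07-01', '2017-08-01', '2017-09-01', '2017-10-01',
         '2017-11-01', '2017-12-01', '2018-01-01', '2018-02-01',
         '2018-03-01', '2018-04-01', '2018-06-01', '2018-08-01',
         '2018-11-01', '2019-02-01', '2019-03-01', '2019-05-01',
         '2020-02-01', '2020-05-01']),
})

def selDates(grp):
    # Keep the first remaining date, then drop everything closer
    # than 6 months to it; repeat until the group is empty.
    res = []
    while grp.size > 0:
        stRow = grp.iloc[0]
        res.append(stRow)
        grp = grp[grp.Date >= stRow.Date + pd.DateOffset(months=6)]
    return pd.DataFrame(res)

result = df.groupby('Customer_ID', group_keys=False).apply(selDates)
```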

Pandas: time column addition and repeating all rows for a month

I'd like to expand my dataframe by repeating every row for each hour of a month.
Original df
money food
0 1 2
1 4 5
2 5 7
Output:
money food time
0 1 2 2020-01-01 00:00:00
1 1 2 2020-01-01 01:00:00
2 1 2 2020-01-01 02:00:00
...
2230 5 7 2020-01-31 22:00:00
2231 5 7 2020-01-31 23:00:00
where 2231 = out_rows_number - 1 = month_days_number * hours_per_day * orig_rows_number - 1
What is the proper way to perform it?
Use a cross join via DataFrame.merge against a new DataFrame holding all hours of the month, created by date_range:
df1 = pd.DataFrame({'a': 1,
                    'time': pd.date_range('2020-01-01', '2020-01-31 23:00:00', freq='h')})
df = df.assign(a=1).merge(df1, on='a', how='outer').drop('a', axis=1)
print (df)
money food time
0 1 2 2020-01-01 00:00:00
1 1 2 2020-01-01 01:00:00
2 1 2 2020-01-01 02:00:00
3 1 2 2020-01-01 03:00:00
4 1 2 2020-01-01 04:00:00
... ... ...
2227 5 7 2020-01-31 19:00:00
2228 5 7 2020-01-31 20:00:00
2229 5 7 2020-01-31 21:00:00
2230 5 7 2020-01-31 22:00:00
2231 5 7 2020-01-31 23:00:00
[2232 rows x 3 columns]
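Since pandas 1.2, merge also supports how='cross' directly, so the dummy key column 'a' is no longer needed. A sketch of the same cross join:

```python
import pandas as pd

df = pd.DataFrame({'money': [1, 4, 5], 'food': [2, 5, 7]})
times = pd.DataFrame(
    {'time': pd.date_range('2020-01-01', '2020-01-31 23:00:00', freq='h')})

# how='cross' (pandas >= 1.2) pairs every row of df with every hour
# of January: 3 rows x 744 hours = 2232 rows.
out = df.merge(times, how='cross')
print(out.shape)
```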

Pandas: Resample from weekly to daily with offset

I want to resample this following dataframe from weekly to daily then ffill the missing values.
Note: 2018-01-07 and 2018-01-14 is Sunday.
Date Val
0 2018-01-07 1
1 2018-01-14 2
I tried:
df.Date = pd.to_datetime(df.Date)
df.set_index('Date', inplace=True)
offset = pd.offsets.DateOffset(-6)
df.resample('D', loffset=offset).ffill()
Val
Date
2018-01-01 1
2018-01-02 1
2018-01-03 1
2018-01-04 1
2018-01-05 1
2018-01-06 1
2018-01-07 1
2018-01-08 2
But I want
Date Val
0 2018-01-01 1
1 2018-01-02 1
2 2018-01-03 1
3 2018-01-04 1
4 2018-01-05 1
5 2018-01-06 1
6 2018-01-07 1
7 2018-01-08 2
8 2018-01-09 2
9 2018-01-10 2
10 2018-01-11 2
11 2018-01-12 2
12 2018-01-13 2
13 2018-01-14 2
What did I do wrong?
You can manually add a new last row by subtracting the offset from the last datetime:
df.loc[df.index[-1] - offset] = df.iloc[-1]
df = df.resample('D', loffset=offset).ffill()
print (df)
Val
Date
2018-01-01 1
2018-01-02 1
2018-01-03 1
2018-01-04 1
2018-01-05 1
2018-01-06 1
2018-01-07 1
2018-01-08 2
2018-01-09 2
2018-01-10 2
2018-01-11 2
2018-01-12 2
2018-01-13 2
2018-01-14 2
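Note that loffset was deprecated in pandas 1.1 and removed in 2.0, so on current pandas this answer no longer runs as written. One equivalent sketch shifts the index instead of passing loffset: move each weekly stamp back 6 days so it labels the Monday starting its week, append the final end date so ffill covers the last week, then upsample:

```python
import pandas as pd

df = pd.DataFrame({'Date': pd.to_datetime(['2018-01-07', '2018-01-14']),
                   'Val': [1, 2]})
df = df.set_index('Date')

# Shift each weekly stamp back 6 days so it labels the start of its week.
df.index = df.index - pd.Timedelta(days=6)
# Re-add the last value at its original end date so ffill reaches 2018-01-14.
df.loc[df.index[-1] + pd.Timedelta(days=6)] = df.iloc[-1]
out = df.resample('D').ffill()
print(out)
```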

pandas: resample.apply discards index names

For some reason doing df.resample("M").apply(foo) drops the index name in df. Is this expected behavior?
import numpy as np
import pandas as pd
df = pd.DataFrame({"a": np.arange(60)}, index=pd.date_range(start="2018-01-01", periods=60))
df.index.name = "dte"
df.head()
# a
#dte
#2018-01-01 0
#2018-01-02 1
#2018-01-03 2
#2018-01-04 3
#2018-01-05 4
def f(x):
    print(x.head())
df.resample("M").apply(f)
#2018-01-01 0
#2018-01-02 1
#2018-01-03 2
#2018-01-04 3
#2018-01-05 4
#Name: a, dtype: int64
update/clarification:
When I said "drops the name" I meant that the series received by the function doesn't have a name associated with its index.
I suggest using an alternative to resample - groupby with Grouper:
def f(x):
    print(x.head())
df.groupby(pd.Grouper(freq="M")).apply(f)
dte
2018-01-01 0
2018-01-02 1
2018-01-03 2
2018-01-04 3
2018-01-05 4
a
dte
2018-01-01 0
2018-01-02 1
2018-01-03 2
2018-01-04 3
2018-01-05 4
a
dte
2018-02-01 31
2018-02-02 32
2018-02-03 33
2018-02-04 34
2018-02-05 35
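The difference can be checked programmatically rather than by inspecting printed heads. A sketch that records the index name each group carries (using freq="MS", month start, as the grouping frequency; the specific alias is an arbitrary choice for illustration):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.arange(60)},
                  index=pd.date_range("2018-01-01", periods=60, name="dte"))

seen = []
def f(x):
    # With Grouper, each group's index still carries its name ("dte"),
    # unlike the unnamed index seen via resample(...).apply.
    seen.append(x.index.name)
    return x["a"].sum()

res = df.groupby(pd.Grouper(freq="MS")).apply(f)
```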

Pandas : How to sum column values over a date range

I am trying to sum the values of colA over a date range based on the "date" column, and store this rolling value in a new column "sum_col".
But I am getting the sum of all rows (=100), not just those in the date range.
I can't use rolling or groupby as my dates (in the real data) are not sequential (some days are missing).
Any idea how to do this? Thanks.
# Create data frame
df = pd.DataFrame()
# Create datetimes and data
df['date'] = pd.date_range('1/1/2018', periods=100, freq='D')
df['colA']= 1
df['colB']= 2
df['colC']= 3
StartDate = df.date - pd.to_timedelta(5, unit='D')
EndDate = df.date
dfx = df
dfx['StartDate'] = StartDate
dfx['EndDate'] = EndDate
dfx['sum_col']=df[(df['date'] > StartDate) & (df['date'] <= EndDate)].sum()['colA']
dfx.head(50)
I'm not sure whether you want 3 columns for the sum of colA, colB, colC respectively, or one column which sums all three, but here is an example of how you would sum the values for colA:
dfx['colAsum'] = dfx.apply(lambda x: df.loc[(df.date >= x.StartDate) &
                                            (df.date <= x.EndDate), 'colA'].sum(), axis=1)
e.g. (with periods=10):
date colA colB colC StartDate EndDate colAsum
0 2018-01-01 1 2 3 2017-12-27 2018-01-01 1
1 2018-01-02 1 2 3 2017-12-28 2018-01-02 2
2 2018-01-03 1 2 3 2017-12-29 2018-01-03 3
3 2018-01-04 1 2 3 2017-12-30 2018-01-04 4
4 2018-01-05 1 2 3 2017-12-31 2018-01-05 5
5 2018-01-06 1 2 3 2018-01-01 2018-01-06 6
6 2018-01-07 1 2 3 2018-01-02 2018-01-07 6
7 2018-01-08 1 2 3 2018-01-03 2018-01-08 6
8 2018-01-09 1 2 3 2018-01-04 2018-01-09 6
9 2018-01-10 1 2 3 2018-01-05 2018-01-10 6
If what I understand is correct:
for i in range(df.shape[0]):
    dfx.loc[i, 'sum_col'] = df[(df['date'] > StartDate[i]) & (df['date'] <= EndDate[i])].sum()['colA']
For example, in the range (2018-01-01, 2018-01-06] the sum is 5, since the strict comparison date > StartDate excludes the left endpoint.
date colA colB colC StartDate EndDate sum_col
0 2018-01-01 1 2 3 2017-12-27 2018-01-01 1.0
1 2018-01-02 1 2 3 2017-12-28 2018-01-02 2.0
2 2018-01-03 1 2 3 2017-12-29 2018-01-03 3.0
3 2018-01-04 1 2 3 2017-12-30 2018-01-04 4.0
4 2018-01-05 1 2 3 2017-12-31 2018-01-05 5.0
5 2018-01-06 1 2 3 2018-01-01 2018-01-06 5.0
6 2018-01-07 1 2 3 2018-01-02 2018-01-07 5.0
7 2018-01-08 1 2 3 2018-01-03 2018-01-08 5.0
8 2018-01-09 1 2 3 2018-01-04 2018-01-09 5.0
9 2018-01-10 1 2 3 2018-01-05 2018-01-10 5.0
10 2018-01-11 1 2 3 2018-01-06 2018-01-11 5.0
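A further note: although the question rules out rolling because days can be missing, a time-based rolling window counts calendar days rather than rows, so gaps are handled. This sketch reproduces the first answer's colAsum column in one vectorized call:

```python
import pandas as pd

df = pd.DataFrame({'date': pd.date_range('1/1/2018', periods=10, freq='D'),
                   'colA': 1, 'colB': 2, 'colC': 3})

# A '6D' offset window is evaluated per timestamp as (date - 6 days, date],
# which for daily data matches date >= StartDate and date <= EndDate
# with StartDate = date - 5 days; gaps in the dates are handled correctly.
df['colAsum'] = df.rolling('6D', on='date')['colA'].sum()
print(df)
```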