Filling Missing Dates for a combination of columns - python

I have a dataframe with 3 columns: one date column and 2 object columns. I need to fill in the missing dates for each COL1 + COL2 combination, using the min and max dates of the whole dataframe. The Date column only ever contains the first day of a month.
I have done this in a naive way, but the original data has thousands of records (15000 rows and 30 columns), and iterating through every COL1 + COL2 combination and every date in the range takes a huge amount of time. For each missing Date + COL1 + COL2 row I want the remaining columns left empty. For example, if a COL1 + COL2 combination has data for Jan 2019 but not for Feb, I want to insert a row with Feb, that COL1, that COL2, and all other columns empty.
The set of unique COL1 + COL2 combinations must be the same before and after the fill; no new combinations should be created.
Please help me optimize this.
df_1 = pd.DataFrame({'Date': ['2018-01-01','2018-02-01','2018-03-01','2018-05-01','2018-05-01'],
                     'COL1': ['A','A','B','B','A'],
                     'COL2': ['1','2','1','2','1']})
df_1['Date'] = pd.to_datetime(df_1['Date'])
Initial Dataframe -->>
Date COL1 COL2
0 2018-01-01 A 1
1 2018-02-01 A 2
2 2018-03-01 B 1
3 2018-05-01 B 2
4 2018-05-01 A 1
--
print(df_1.dtypes)
print(df_1)
# unique COL1 + COL2 combinations as plain tuples
COLS_COMBO = list(set(df_1[['COL1','COL2']].itertuples(name=None, index=False)))
# month-start dates between the min and max date in the dataframe
months_range = [str(i.date()) for i in pd.date_range(start=df_1['Date'].min().date(),
                                                     end=df_1['Date'].max().date(), freq='MS')]
print(COLS_COMBO)
print(months_range)

for col1, col2 in COLS_COMBO:
    for month in months_range:
        d = df_1[(df_1['Date'] == month) & (df_1['COL1'] == col1) & (df_1['COL2'] == col2)]
        if len(d) == 0:
            dx = {'Date': month, 'COL1': col1, 'COL2': col2}
            df_1 = df_1.append(dx, ignore_index=True)  # slow: append copies the whole frame each time

print(df_1)
OUTPUT
Data TYPES -->>
Date datetime64[ns]
COL1 object
COL2 object
dtype: object
Unique combinations of COL1 + COL2 -->>
[('A', '2'), ('B', '2'), ('B', '1'), ('A', '1')]
Months range using min, max in the dataframe -->>
['2018-01-01', '2018-02-01', '2018-03-01', '2018-04-01', '2018-05-01']
My final output is
FINAL Dataframe -->>
Date COL1 COL2
0 2018-01-01 A 1
1 2018-02-01 A 2
2 2018-03-01 B 1
3 2018-05-01 B 2
4 2018-05-01 A 1
5 2018-01-01 A 2
6 2018-02-01 A 2
7 2018-03-01 A 2
8 2018-04-01 A 2
9 2018-05-01 A 2
10 2018-01-01 B 2
11 2018-02-01 B 2
12 2018-03-01 B 2
13 2018-04-01 B 2
14 2018-05-01 B 2
15 2018-01-01 B 1
16 2018-02-01 B 1
17 2018-03-01 B 1
18 2018-04-01 B 1
19 2018-05-01 B 1
20 2018-01-01 A 1
21 2018-02-01 A 1
22 2018-03-01 A 1
23 2018-04-01 A 1
24 2018-05-01 A 1
PS:
COL1 is like a parent and COL2 its child, so there should be no change to the original combinations, and existing (Date + COL1 + COL2) combinations must not be duplicated or updated.

You can use:
from itertools import product
#get all unique combinations of columns
COLS_COMBO = df_1[['COL1','COL2']].drop_duplicates().values.tolist()
#remove times and create MS date range
dates = df_1['Date'].dt.floor('d')
months_range = pd.date_range(dates.min(), dates.max(), freq='MS')
print(COLS_COMBO)
print(months_range)
#create all combinations of values
df = pd.DataFrame([(c, a, b) for (a, b), c in product(COLS_COMBO, months_range)],
                  columns=['Date','COL1','COL2'])
print (df)
Date COL1 COL2
0 2018-01-01 A 1
1 2018-02-01 A 1
2 2018-03-01 A 1
3 2018-04-01 A 1
4 2018-05-01 A 1
5 2018-01-01 A 2
6 2018-02-01 A 2
7 2018-03-01 A 2
8 2018-04-01 A 2
9 2018-05-01 A 2
10 2018-01-01 B 1
11 2018-02-01 B 1
12 2018-03-01 B 1
13 2018-04-01 B 1
14 2018-05-01 B 1
15 2018-01-01 B 2
16 2018-02-01 B 2
17 2018-03-01 B 2
18 2018-04-01 B 2
19 2018-05-01 B 2
#add to original df_1 and remove duplicates
df_1 = pd.concat([df_1, df], ignore_index=True).drop_duplicates()
print (df_1)
Date COL1 COL2
0 2018-01-01 A 1
1 2018-02-01 A 2
2 2018-03-01 B 1
3 2018-05-01 B 2
4 2018-05-01 A 1
6 2018-02-01 A 1
7 2018-03-01 A 1
8 2018-04-01 A 1
10 2018-01-01 A 2
12 2018-03-01 A 2
13 2018-04-01 A 2
14 2018-05-01 A 2
15 2018-01-01 B 1
16 2018-02-01 B 1
18 2018-04-01 B 1
19 2018-05-01 B 1
20 2018-01-01 B 2
21 2018-02-01 B 2
22 2018-03-01 B 2
23 2018-04-01 B 2
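Note that drop_duplicates here compares all columns; with the real 30-column data it is safer to deduplicate on the keys only, e.g. drop_duplicates(subset=['Date','COL1','COL2']). A merge-based sketch of the same idea, starting again from the original df_1, avoids the deduplication step entirely, since the grid holds each key exactly once (how='cross' assumes pandas 1.2+):
# observed COL1/COL2 pairs and the full month-start range
pairs = df_1[['COL1', 'COL2']].drop_duplicates()
months = pd.DataFrame({'Date': pd.date_range(df_1['Date'].min(),
                                             df_1['Date'].max(), freq='MS')})
# full grid of every pair x every month, then pull the original rows back in;
# combinations that were missing get NaN in all remaining columns
grid = pairs.merge(months, how='cross')
out = grid.merge(df_1, on=['Date', 'COL1', 'COL2'], how='left')
With the 30-column data the extra columns come through the left merge automatically.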


to_datetime assemblage error due to extra keys

My pandas version is 0.23.4.
I tried to run this code:
df['date_time'] = pd.to_datetime(df[['year','month','day','hour_scheduled_departure','minute_scheduled_departure']])
and the following error appeared:
extra keys have been passed to the datetime assemblage: [hour_scheduled_departure, minute_scheduled_departure]
Any ideas of how to get the job done by pd.to_datetime?
@anky_91:
In this image an extract of the first 10 rows is shown. First column [int32]: year; second column [int32]: month; third column [int32]: day; fourth column [object]: hour; fifth column [object]: minute. The length of the objects is 2.
Another solution:
pd.concat([df.A,
           pd.to_datetime(pd.Series(df[df.columns[1:]].fillna('').values.tolist(), name='Date')
                            .map(lambda x: '0'.join(map(str, x))))],
          axis=1)
A Date
0 a 2002-07-01 05:07:00
1 b 2002-08-03 03:08:00
2 c 2002-09-05 06:09:00
3 d 2002-04-07 09:04:00
4 e 2002-02-01 02:02:00
5 f 2002-03-05 04:03:00
For the example you added as an image (I skipped the last 3 columns to save time):
df.month = df.month.map("{:02}".format)
df.day = df.day.map("{:02}".format)
pd.concat([df.A,
           pd.to_datetime(pd.Series(df[df.columns[1:]].fillna('').values.tolist(), name='Date')
                            .map(lambda x: ''.join(map(str, x))))],
          axis=1)
A Date
0 a 2015-01-01 00:05:00
1 b 2015-01-01 00:01:00
2 c 2015-01-01 00:02:00
3 d 2015-01-01 00:02:00
4 e 2015-01-01 00:25:00
5 f 2015-01-01 00:25:00
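The '0'.join trick above only works while every part is a single digit; a more robust sketch (assuming the year/month/day/hour/minute column layout from the question) zero-pads each piece explicitly and gives to_datetime an unambiguous format:
cols = ['year', 'month', 'day', 'hour_scheduled_departure', 'minute_scheduled_departure']
# zero-pad every component to at least two digits (the four-digit year is left as-is)
parts = df[cols].astype(str).apply(lambda s: s.str.zfill(2))
stamp = parts[cols[0]] + parts[cols[1]] + parts[cols[2]] + parts[cols[3]] + parts[cols[4]]
df['date_time'] = pd.to_datetime(stamp, format='%Y%m%d%H%M')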
You can rename the columns so that pandas.to_datetime can assemble them, since it expects column names like year, month, day, hour, minute:
df = pd.DataFrame({
    'A': list('abcdef'),
    'year': [2002,2002,2002,2002,2002,2002],
    'month': [7,8,9,4,2,3],
    'day': [1,3,5,7,1,5],
    'hour_scheduled_departure': [5,3,6,9,2,4],
    'minute_scheduled_departure': [7,8,9,4,2,3]
})
print (df)
A year month day hour_scheduled_departure minute_scheduled_departure
0 a 2002 7 1 5 7
1 b 2002 8 3 3 8
2 c 2002 9 5 6 9
3 d 2002 4 7 9 4
4 e 2002 2 1 2 2
5 f 2002 3 5 4 3
cols = ['year','month','day','hour_scheduled_departure','minute_scheduled_departure']
d = {'hour_scheduled_departure':'hour','minute_scheduled_departure':'minute'}
df['date_time'] = pd.to_datetime(df[cols].rename(columns=d))
#if necessary remove columns
df = df.drop(cols, axis=1)
print (df)
A date_time
0 a 2002-07-01 05:07:00
1 b 2002-08-03 03:08:00
2 c 2002-09-05 06:09:00
3 d 2002-04-07 09:04:00
4 e 2002-02-01 02:02:00
5 f 2002-03-05 04:03:00
Detail:
print (df[cols].rename(columns=d))
year month day hour minute
0 2002 7 1 5 7
1 2002 8 3 3 8
2 2002 9 5 6 9
3 2002 4 7 9 4
4 2002 2 1 2 2
5 2002 3 5 4 3
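For reference, this column-assembly path in pd.to_datetime only recognizes keys such as year, month, day and optionally hour, minute, second, millisecond, microsecond and nanosecond (or common abbreviations of these), which is why the unrenamed *_scheduled_departure columns were rejected as extra keys.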

Dataframe has every other column as a timestamp, how do I get them into one column?

I have a dataframe I import from Excel that is n x n in size and looks like the following (sorry, I do not know how to easily reproduce it with code).
How do I get the timestamps into one column, like the following? (I've tried pivot.)
You may need to extract the data as three two-column (time, value) groups, rename the columns, add an "A"/"B"/"C" flag column, and concatenate them together. See the test below:
abc_list = [["2017-10-01", 0, "2017-10-02", 1, "2017-10-03", 8],
            ["2017-11-01", 3, "2017-11-01", 5, "2017-11-05", 10],
            ["2017-12-01", 0, "2017-12-07", 7, "2017-12-07", 12]]
df = pd.DataFrame(abc_list, columns=["Time1","A","Time2","B","Time3","C"])
The output:
Time1 A Time2 B Time3 C
0 2017-10-01 0 2017-10-02 1 2017-10-03 8
1 2017-11-01 3 2017-11-01 5 2017-11-05 10
2 2017-12-01 0 2017-12-07 7 2017-12-07 12
Then:
df_a=df.iloc[:,0:2].rename(columns={'Time1':'time','A':'value'})
df_a['flag']="A"
df_b=df.iloc[:,2:4].rename(columns={'Time2':'time','B':'value'})
df_b['flag']="B"
df_c=df.iloc[:,4:].rename(columns={'Time3':'time','C':'value'})
df_c['flag']="C"
df_final=pd.concat([df_a,df_b,df_c])
df_final.reset_index(drop=True)
output:
time value flag
0 2017-10-01 0 A
1 2017-11-01 3 A
2 2017-12-01 0 A
3 2017-10-02 1 B
4 2017-11-01 5 B
5 2017-12-07 7 B
6 2017-10-03 8 C
7 2017-11-05 10 C
8 2017-12-07 12 C
This is a bit verbose and not a very pythonic way to do it.
Here is another way:
columns = pd.MultiIndex.from_tuples([('A','Time'),('A','Value'),
                                     ('B','Time'),('B','Value'),
                                     ('C','Time'),('C','Value')],
                                    names=['Group','Sub_value'])
df.columns = columns
Output:
Group A B C
Sub_value Time Value Time Value Time Value
0 2017-10-01 0 2017-10-02 1 2017-10-03 8
1 2017-11-01 3 2017-11-01 5 2017-11-05 10
2 2017-12-01 0 2017-12-07 7 2017-12-07 12
Run:
df.stack(level='Group')
Output:
Sub_value Time Value
Group
0 A 2017-10-01 0
B 2017-10-02 1
C 2017-10-03 8
1 A 2017-11-01 3
B 2017-11-01 5
C 2017-11-05 10
2 A 2017-12-01 0
B 2017-12-07 7
C 2017-12-07 12
This is one method. It is fairly easy to extend to any number of columns.
import pandas as pd

# read each (date, value) column pair and tag it with its category
dfs = {i: pd.read_excel('file.xlsx', usecols=[2*i, 2*i+1], skiprows=[0],
                        header=None, names=['Date', 'Value']).assign(Category=j)
       for i, j in enumerate(['A', 'B', 'C'])}

# concatenate the per-category dataframes
df = pd.concat(list(dfs.values()), ignore_index=True)
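pandas also ships a (lightly documented) pd.lreshape helper that performs this wide-to-long reshape in one call; a minimal sketch against the wide frame from the first answer. Note that it drops the A/B/C flag, so it only fits when the group label is not needed:
import pandas as pd

df = pd.DataFrame([["2017-10-01", 0, "2017-10-02", 1, "2017-10-03", 8],
                   ["2017-11-01", 3, "2017-11-01", 5, "2017-11-05", 10],
                   ["2017-12-01", 0, "2017-12-07", 7, "2017-12-07", 12]],
                  columns=["Time1", "A", "Time2", "B", "Time3", "C"])

# collapse the three (time, value) column pairs into two long columns
long_df = pd.lreshape(df, {'time': ['Time1', 'Time2', 'Time3'],
                           'value': ['A', 'B', 'C']})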

How do you iterate through groups in a pandas Dataframe, operate on each group, then assign values to the original dataframe?

yearCount = df[['antibiotic', 'order_date', 'antiYearCount']]
yearGroups = yearCount.groupby('order_date')
for year in yearGroups:
    yearCount['antiYearCount'] = year.groupby('antibiotic')['antibiotic'].transform(pd.Series.value_counts)
In this case, yearCount is a dataframe containing 'order_date', 'antibiotic', 'antiYearCount'. I have cleaned 'order_date' to only contain the year of the order. I want to group yearCount by the years in 'order_date', count the number of times each 'antibiotic' appears in each "year group" then assign that value to yearCount's 'antiYearCount' variable.
I think you need to add the order_date column to the groupby; it is also possible to use size instead of pd.Series.value_counts for the same output:
df = pd.DataFrame({'antibiotic': list('accbbb'),
                   'antiYearCount': [4,5,4,5,5,4],
                   'C': [7,8,9,4,2,3],
                   'D': [1,3,5,7,1,0],
                   'E': [5,3,6,9,2,4],
                   'order_date': pd.to_datetime(['2012-01-01']*3 + ['2012-01-02']*3)})
print (df)
C D E antiYearCount antibiotic order_date
0 7 1 5 4 a 2012-01-01
1 8 3 3 5 c 2012-01-01
2 9 5 6 4 c 2012-01-01
3 4 7 9 5 b 2012-01-02
4 2 1 2 5 b 2012-01-02
5 3 0 4 4 b 2012-01-02
#copy for remove warning
#https://stackoverflow.com/a/45035966/2901002
yearCount = df[['antibiotic', 'order_date', 'antiYearCount']].copy()
yearCount['antiYearCount'] = yearCount.groupby(['order_date','antibiotic'])['antibiotic'] \
.transform('size')
print (yearCount)
antibiotic order_date antiYearCount
0 a 2012-01-01 1
1 c 2012-01-01 2
2 c 2012-01-01 2
3 b 2012-01-02 3
4 b 2012-01-02 3
5 b 2012-01-02 3
yearCount['antiYearCount'] = yearCount.groupby(['order_date','antibiotic'])['antibiotic'] \
.transform(pd.Series.value_counts)
print (yearCount)
antibiotic order_date antiYearCount
0 a 2012-01-01 1
1 c 2012-01-01 2
2 c 2012-01-01 2
3 b 2012-01-02 3
4 b 2012-01-02 3
5 b 2012-01-02 3
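To write the counts back onto the original dataframe, which is what the question ultimately asks, the same transform can target df directly (a small sketch under the same column assumptions):
df['antiYearCount'] = df.groupby(['order_date', 'antibiotic'])['antibiotic'].transform('size')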

How to indicate the multi index columns using read_sql_query (pandas dataframes)

I have a table with the following columns:
| Date | ProductId | SubProductId | Value |
I am trying to retrieve the data from that table and to put it in a pandas DataFrame.
I want the DataFrame to have the following structure:
index: dates
columns: products
sub-columns: sub-products
(products) 1 2 ...
(subproducts) 1 2 3 1 2 3 ...
date
2015-01-02 val val val ...
2015-01-03 val val val ...
2015-01-04 ...
2015-01-05
...
I already have dataframes with the products and the subproducts and the dates.
I understand that I need to use the MultiIndex, here is what I tried:
query ="SELECT Date, ProductId, SubProductId, Value " \
" FROM table "\
" WHERE SubProductId in (1,2,3)"\
" AND ProductId in (1,2,3)"\
" AND Date BETWEEN '2015-01-02' AND '2015-01-08' "\
" GROUP BY Date, ProductId, SubProductId, Value "\
" ORDER BY Date, ProductId, SubProductId "
df = pd.read_sql_query(query, conn, index_col=pd.MultiIndex.from_product([df_products['products'].tolist(), df_subproducts['subproducts'].tolist()])
But it does not work, because the query returns a vector of values (shape: number of values x 1), while I need a matrix (shape: number of distinct dates x (number of subproducts * number of products)) in the dataframe.
How can this be achieved:
directly via read_sql_query, or
by transforming the dataframe once the database values are loaded in?
NB: I am using Microsoft SQL Server.
IIUC you can use unstack() method:
df = pd.read_sql_query(query, conn, index_col=['Date','ProductID','SubProductId']) \
.unstack(['ProductID','SubProductId'])
Demo:
In [413]: df
Out[413]:
Date ProductID SubProductId Value
0 2015-01-02 1 1 11
1 2015-01-02 1 2 12
2 2015-01-02 1 3 13
3 2015-01-02 2 1 14
4 2015-01-02 2 2 15
5 2015-01-02 2 3 16
6 2015-01-03 1 1 17
7 2015-01-03 1 2 18
8 2015-01-03 1 3 19
9 2015-01-03 2 1 20
10 2015-01-03 2 2 21
In [414]: df.set_index(['Date','ProductID','SubProductId']).unstack(['ProductID','SubProductId'])
Out[414]:
Value
ProductID 1 2
SubProductId 1 2 3 1 2 3
Date
2015-01-02 11.0 12.0 13.0 14.0 15.0 16.0
2015-01-03 17.0 18.0 19.0 20.0 21.0 NaN
You can also use pivot_table
df.pivot_table('Value', 'Date', ['ProductId', 'SubProductId'])
demo
import numpy as np

df = pd.DataFrame(dict(
    Date=pd.date_range('2017-03-31', periods=2).repeat(9),
    ProductId=[1, 1, 1, 2, 2, 2, 3, 3, 3] * 2,
    SubProductId=list('abc') * 6,
    Value=np.random.randint(10, size=18)
))
print(df)
Date ProductId SubProductId Value
0 2017-03-31 1 a 8
1 2017-03-31 1 b 2
2 2017-03-31 1 c 5
3 2017-03-31 2 a 4
4 2017-03-31 2 b 3
5 2017-03-31 2 c 2
6 2017-03-31 3 a 9
7 2017-03-31 3 b 3
8 2017-03-31 3 c 1
9 2017-04-01 1 a 3
10 2017-04-01 1 b 5
11 2017-04-01 1 c 7
12 2017-04-01 2 a 3
13 2017-04-01 2 b 6
14 2017-04-01 2 c 4
15 2017-04-01 3 a 5
16 2017-04-01 3 b 2
17 2017-04-01 3 c 0
df.pivot_table('Value', 'Date', ['ProductId', 'SubProductId'])
ProductId 1 2 3
SubProductId a b c a b c a b c
Date
2017-03-31 8 2 5 4 3 2 9 3 1
2017-04-01 3 5 7 3 6 4 5 2 0
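One difference between the two approaches is worth noting: pivot_table aggregates duplicate (Date, ProductId, SubProductId) rows (using mean by default), while set_index(...).unstack(...) raises an error when the keys are duplicated. If duplicates are possible and averaging is intended, it may be clearer to state the aggregation explicitly:
df.pivot_table('Value', 'Date', ['ProductId', 'SubProductId'], aggfunc='mean')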

Pandas - Counting the number of days for group by

I want to count the number of days after grouping by 2 columns:
groups = df.groupby([df.col1,df.col2])
Now I want to count the number of days relevant for each group:
result = groups['date_time'].dt.date.nunique()
I'm using something similar when I want to group by day, but here I get an error:
AttributeError: Cannot access attribute 'dt' of 'SeriesGroupBy' objects, try using the 'apply' method
What is the proper way to get the number of days?
You need another variation of groupby - define the column first:
df['date_time'].dt.date.groupby([df.col1,df.col2]).nunique()
df.groupby(['col1','col2'])['date_time'].apply(lambda x: x.dt.date.nunique())
df['date_time1'] = df['date_time'].dt.date
a = df.groupby([df.col1,df.col2]).date_time1.nunique()
Sample:
start = pd.to_datetime('2015-02-24')
rng = pd.date_range(start, periods=10, freq='15H')
df = pd.DataFrame({'date_time': rng, 'col1': [0]*5 + [1]*5, 'col2': [2]*3 + [3]*4+ [4]*3})
print (df)
col1 col2 date_time
0 0 2 2015-02-24 00:00:00
1 0 2 2015-02-24 15:00:00
2 0 2 2015-02-25 06:00:00
3 0 3 2015-02-25 21:00:00
4 0 3 2015-02-26 12:00:00
5 1 3 2015-02-27 03:00:00
6 1 3 2015-02-27 18:00:00
7 1 4 2015-02-28 09:00:00
8 1 4 2015-03-01 00:00:00
9 1 4 2015-03-01 15:00:00
#solution with apply
df1 = df.groupby(['col1','col2'])['date_time'].apply(lambda x: x.dt.date.nunique())
print (df1)
col1 col2
0 2 2
3 2
1 3 1
4 2
Name: date_time, dtype: int64
#create new helper column
df['date_time1'] = df['date_time'].dt.date
df2 = df.groupby([df.col1,df.col2]).date_time1.nunique()
print (df2)
col1 col2
0 2 2
3 2
1 3 1
4 2
Name: date_time1, dtype: int64
df3 = df['date_time'].dt.date.groupby([df.col1,df.col2]).nunique()
print (df3)
col1 col2
0 2 2
3 2
1 3 1
4 2
Name: date_time, dtype: int64
