I have a dataframe like this
name tag time val
0 ABC A 1 10
0 ABC A 1 12
1 ABC B 1 12
1 ABC B 1 14
2 ABC A 2 11
3 ABC C 2 12
4 DEF B 3 10
5 DEF C 3 9
6 GHI A 4 14
7 GHI B 4 12
8 GHI C 5 10
Each row is an observation at a timestamp and shows the value recorded for the name/tag pair in that row.
What I want is a dataframe where each row shows the mean value from each tag at each timestamp, like this:
name time A B C
0 ABC 1 11.0 13.0 NaN
1 ABC 2 11.0 NaN 12.0
2 DEF 3 NaN 10.0 9.0
3 GHI 4 14.0 12.0 NaN
4 GHI 5 NaN NaN 10.0
I can achieve this successfully by grouping by name and time and returning a transposed series each time:
tags = df['tag'].unique()  # assuming `tags` holds the unique tag values

def transpose_df(observation_df):
    ser = pd.Series(dtype=float)
    for tag in tags:
        ser[tag] = observation_df[observation_df['tag'] == tag]['val'].mean()
    return ser
tdf = df.groupby(['name', 'time']).apply(transpose_df).reset_index()
But this is slow. I feel like there must be a smarter way using a built-in transpose/reshape tool, but I can't figure it out. Can anyone suggest a better alternative?
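For reference, here is a sketch that rebuilds the sample frame above (the repeated index labels in the printout are replaced by a default integer index):
import pandas as pd

df = pd.DataFrame({
    'name': ['ABC', 'ABC', 'ABC', 'ABC', 'ABC', 'ABC', 'DEF', 'DEF', 'GHI', 'GHI', 'GHI'],
    'tag':  ['A', 'A', 'B', 'B', 'A', 'C', 'B', 'C', 'A', 'B', 'C'],
    'time': [1, 1, 1, 1, 2, 2, 3, 3, 4, 4, 5],
    'val':  [10, 12, 12, 14, 11, 12, 10, 9, 14, 12, 10],
})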
In [175]: df.pivot_table(index=['name','time'], columns='tag', values='val').reset_index()
Out[175]:
tag name time A B C
0 ABC 1 11.0 13.0 NaN
1 ABC 2 11.0 NaN 12.0
2 DEF 3 NaN 10.0 9.0
3 GHI 4 14.0 12.0 NaN
4 GHI 5 NaN NaN 10.0
Option 1
Use pivot_table:
df.pivot_table(values='val',index=['name','time'],columns='tag',aggfunc='mean').reset_index()
Output:
tag name time A B C
0 ABC 1 11.0 13.0 NaN
1 ABC 2 11.0 NaN 12.0
2 DEF 3 NaN 10.0 9.0
3 GHI 4 14.0 12.0 NaN
4 GHI 5 NaN NaN 10.0
Option 2
Use groupby and unstack:
df.groupby(['name','time','tag']).agg('mean')['val'].unstack().reset_index()
Output:
tag name time A B C
0 ABC 1 11.0 13.0 NaN
1 ABC 2 11.0 NaN 12.0
2 DEF 3 NaN 10.0 9.0
3 GHI 4 14.0 12.0 NaN
4 GHI 5 NaN NaN 10.0
Option 3
Use set_index and mean and unstack:
df.set_index(['name','time','tag']).mean(level=[0,1,2])['val'].unstack().reset_index()
Output:
tag name time A B C
0 ABC 1 11.0 13.0 NaN
1 ABC 2 11.0 NaN 12.0
2 DEF 3 NaN 10.0 9.0
3 GHI 4 14.0 12.0 NaN
4 GHI 5 NaN NaN 10.0
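Note that DataFrame.mean(level=...) has since been deprecated and removed in recent pandas (2.x). On current versions the same idea can be expressed with a level-wise groupby; a sketch:
df.set_index(['name','time','tag']).groupby(level=[0,1,2]).mean()['val'].unstack().reset_index()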
You can also groupby and then unstack (equivalent to a pivot table).
>>> df.groupby(['name', 'time', 'tag'])['val'].mean().unstack('tag').reset_index()
tag name time A B C
0 ABC 1 11.0 13.0 NaN
1 ABC 2 11.0 NaN 12.0
2 DEF 3 NaN 10.0 9.0
3 GHI 4 14.0 12.0 NaN
4 GHI 5 NaN NaN 10.0
By the way, transform is for when you want to maintain the shape of your original dataframe, e.g.
>>> df.assign(tag_mean=df.groupby(['name', 'time', 'tag'])['val'].transform('mean'))
name tag time val tag_mean
0 ABC A 1 10 11
0 ABC A 1 12 11
1 ABC B 1 12 13
1 ABC B 1 14 13
2 ABC A 2 11 11
3 ABC C 2 12 12
4 DEF B 3 10 10
5 DEF C 3 9 9
6 GHI A 4 14 14
7 GHI B 4 12 12
8 GHI C 5 10 10
Related
I have the following pandas DataFrame
Id_household Age_Father Age_child
0 1 30 2
1 1 30 4
2 1 30 4
3 1 30 1
4 2 27 4
5 3 40 14
6 3 40 18
and I want to achieve the following result
Age_Father Age_child_1 Age_child_2 Age_child_3 Age_child_4
Id_household
1 30 1 2.0 4.0 4.0
2 27 4 NaN NaN NaN
3 40 14 18.0 NaN NaN
I tried stacking with multi-index renaming, but I am not very happy with it and I am not able to make everything work properly.
Use this:
df_out = df.set_index([df.groupby('Id_household').cumcount()+1,
                       'Id_household',
                       'Age_Father']).unstack(0)
df_out.columns = [f'{i}_{j}' for i, j in df_out.columns]
df_out.reset_index()
Output:
Id_household Age_Father Age_child_1 Age_child_2 Age_child_3 Age_child_4
0 1 30 2.0 4.0 4.0 1.0
1 2 27 4.0 NaN NaN NaN
2 3 40 14.0 18.0 NaN NaN
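If you prefer a pivot-style spelling, the same cumcount trick can be combined with pivot_table. A sketch (the helper column name n is my own):
out = (df.assign(n=df.groupby('Id_household').cumcount() + 1)
         .pivot_table(index=['Id_household', 'Age_Father'], columns='n', values='Age_child'))
out.columns = [f'Age_child_{j}' for j in out.columns]
out = out.reset_index()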
Suppose I have the following contrived example:
ids types values
1 a 10
1 b 11
1 c 12
2 a -10
2 b -11
3 a 100
Is there a way to use pandas.pivot() to get the following table?
ids a b c
1 10 11 12
2 -10 -11 NaN
3 100 NaN NaN
You could try something like this -
df.pivot(index='ids', columns='types', values='values')
types a b c
ids
1 10.0 11.0 12.0
2 -10.0 -11.0 NaN
3 100.0 NaN NaN
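One caveat: DataFrame.pivot requires each (ids, types) pair to occur at most once and raises a ValueError on duplicates. If duplicates are possible, pivot_table with an aggregation function is the safer spelling, for example:
df.pivot_table(index='ids', columns='types', values='values', aggfunc='mean')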
In my data frame I want to create a column '5D_Peak' as a rolling max, and then another column with a rolling count of historical data that's close to the peak. I wonder if there is an easier way to simplify or, ideally, vectorise the calculation.
This is my code, written in a plain but complicated way:
import numpy as np
import pandas as pd
df = pd.DataFrame([[1,2,4],[4,5,2],[3,5,8],[1,8,6],[5,2,8],[1,4,10],[3,5,9],[1,4,7],[1,4,6]], columns=list('ABC'))
df['5D_Peak']=df['C'].rolling(window=5,center=False).max()
for i in range(5, len(df.A)):
    val = 0
    for j in range(i-5, i):
        if df.loc[j, 'C'] > df.loc[i, '5D_Peak'] - 2 and df.loc[j, 'C'] < df.loc[i, '5D_Peak'] + 2:
            val += 1
    df.loc[i, '5D_Close_to_Peak_Count'] = val
This is the output I want:
A B C 5D_Peak 5D_Close_to_Peak_Count
0 1 2 4 NaN NaN
1 4 5 2 NaN NaN
2 3 5 8 NaN NaN
3 1 8 6 NaN NaN
4 5 2 8 8.0 NaN
5 1 4 10 10.0 0.0
6 3 5 9 10.0 1.0
7 1 4 7 10.0 2.0
8 1 4 6 10.0 2.0
I believe this is what you want. You can set the two values below:
# the window within which to search for "close-to-peak" values
lkp_rng = 5
# how close is close?
closeness_measure = 2
# function to count the number of "close-to-peak" values in the window
fc = lambda x: np.count_nonzero(np.where(x >= x.max() - closeness_measure))
# apply fc to the column you choose
df['5D_Close_to_Peak_Count'] = df['C'].rolling(window=lkp_rng, center=False).apply(fc)
df.head(10)
A B C 5D_Peak 5D_Close_to_Peak_Count
0 1 2 4 NaN NaN
1 4 5 2 NaN NaN
2 3 5 8 NaN NaN
3 1 8 6 NaN NaN
4 5 2 8 8.0 3.0
5 1 4 10 10.0 3.0
6 3 5 9 10.0 3.0
7 1 4 7 10.0 3.0
8 1 4 6 10.0 2.0
I am guessing what you mean by "historical data".
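As a side note, rolling().apply() accepts raw=True, which hands each window to the function as a NumPy array instead of a Series and is usually noticeably faster for simple reductions like this. A sketch reusing fc from above:
df['5D_Close_to_Peak_Count'] = df['C'].rolling(window=lkp_rng, center=False).apply(fc, raw=True)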
I have a pandas.DataFrame that contain string, float and int types.
Is there a way to set all strings that cannot be converted to float to NaN?
For example:
A B C D
0 1 2 5 7
1 0 4 NaN 15
2 4 8 9 10
3 11 5 8 0
4 11 5 8 "wajdi"
to:
A B C D
0 1 2 5 7
1 0 4 NaN 15
2 4 8 9 10
3 11 5 8 0
4 11 5 8 NaN
You can use pd.to_numeric and set errors='coerce'
pandas.to_numeric
df['D'] = pd.to_numeric(df.D, errors='coerce')
Which will give you:
A B C D
0 1 2 5.0 7.0
1 0 4 NaN 15.0
2 4 8 9.0 10.0
3 11 5 8.0 0.0
4 11 5 8.0 NaN
Deprecated solution (pandas <= 0.20 only):
df.convert_objects(convert_numeric=True)
pandas.DataFrame.convert_objects
Here's the dev note in the convert_objects source code: # TODO: Remove in 0.18 or 2017, which ever is sooner. So don't make this a long term solution if you use it.
Here is a way:
df['E'] = pd.to_numeric(df.D, errors='coerce')
And then you have:
A B C D E
0 1 2 5.0 7 7.0
1 0 4 NaN 15 15.0
2 4 8 9.0 10 10.0
3 11 5 8.0 0 0.0
4 11 5 8.0 wajdi NaN
You can use pd.to_numeric with errors='coerce'.
In [30]: df = pd.DataFrame({'a': [1, 2, 'NaN', 'bob', 3.2]})
In [31]: pd.to_numeric(df.a, errors='coerce')
Out[31]:
0 1.0
1 2.0
2 NaN
3 NaN
4 3.2
Name: a, dtype: float64
Here is one way to apply it to all columns:
for c in df.columns:
df[c] = pd.to_numeric(df[c], errors='coerce')
(See comment by NinjaPuppy for a better way.)
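One such whole-frame version (a sketch, and possibly what that comment refers to) applies pd.to_numeric to every column in a single call; it assumes every column should be coerced, with unparseable values becoming NaN:
df = df.apply(pd.to_numeric, errors='coerce')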
I have two data frames, predictor_df and solution_df, like this:
predictor_df
1000 A B C
1001 1 2 3
1002 4 5 6
1003 7 8 9
1004 Nan Nan Nan
and a solution_df
0 D
1 10
2 11
3 12
The reason for the names is that predictor_df is used to do some analysis on its columns to arrive at solution_df. My analysis drops the rows with NaN values in predictor_df, hence the shorter solution_df.
Now I want to know how to join these two dataframes to obtain the final dataframe below:
A B C D
1 2 3 10
4 5 6 11
7 8 9 12
Nan Nan Nan
Please guide me through it. Thanks in advance.
Edit: I tried to merge the two dataframes, but the result comes out like this:
A B C D
1 2 3 Nan
4 5 6 Nan
7 8 9 Nan
Nan Nan Nan
Edit 2: Also, when I do pd.concat([predictor_df, solution_df], axis=1), it becomes like this:
A B C D
Nan Nan Nan 10
Nan Nan Nan 11
Nan Nan Nan 12
Nan Nan Nan Nan
You could use reset_index with drop=True which resets the index to the default integer index.
pd.concat([df_1.reset_index(drop=True), df_2.reset_index(drop=True)], axis=1)
A B C D
0 1 2 3 10.0
1 4 5 6 11.0
2 7 8 9 12.0
3 Nan Nan Nan NaN
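If you would rather keep predictor_df's original index labels instead of resetting both, you could align solution_df to the non-NaN rows first. A sketch, assuming solution_df's rows correspond, in order, to the rows of predictor_df without missing values (and that the missing values are real NaN, not the string 'Nan'):
aligned = solution_df.set_index(predictor_df.dropna().index)
predictor_df.join(aligned)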