This should be straightforward, but the closest thing I've found is this post:
pandas: Filling missing values within a group, and I still can't solve my problem....
Suppose I have the following dataframe
df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3], 'name': ['A','A', 'B','B','B','B', 'C','C','C']})
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 3
5 B 1
6 C 3
7 C NaN
8 C 3
and I'd like to fill in the NaN values with the mean value within each "name" group, i.e.
name value
0 A 1
1 A 1
2 B 2
3 B 2
4 B 3
5 B 1
6 C 3
7 C 3
8 C 3
I'm not sure where to go after:
grouped = df.groupby('name').mean()
Thanks a bunch.
One way would be to use transform:
>>> df
name value
0 A 1
1 A NaN
2 B NaN
3 B 2
4 B 3
5 B 1
6 C 3
7 C NaN
8 C 3
>>> df["value"] = df.groupby("name").transform(lambda x: x.fillna(x.mean()))
>>> df
name value
0 A 1
1 A 1
2 B 2
3 B 2
4 B 3
5 B 1
6 C 3
7 C 3
8 C 3
fillna + groupby + transform + mean
This seems intuitive:
df['value'] = df['value'].fillna(df.groupby('name')['value'].transform('mean'))
The groupby + transform syntax maps the groupwise mean onto the index of the original dataframe. This is roughly equivalent to @DSM's solution, but avoids the need to define an anonymous lambda function.
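For illustration, here is a minimal sketch (reusing the question's df) showing that the transform result is index-aligned with the original dataframe, which is exactly what lets fillna consume it directly:
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
                   'name': ['A','A', 'B','B','B','B', 'C','C','C']})

# One mean per row, repeated within each group and aligned on df's index
group_means = df.groupby('name')['value'].transform('mean')

df['value'] = df['value'].fillna(group_means)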
@DSM has IMO the right answer, but I'd like to share my generalization and optimization of the question: multiple columns to group by and multiple value columns:
df = pd.DataFrame(
    {
        'category': ['X', 'X', 'X', 'X', 'X', 'X', 'Y', 'Y', 'Y'],
        'name': ['A','A', 'B','B','B','B', 'C','C','C'],
        'other_value': [10, np.nan, np.nan, 20, 30, 10, 30, np.nan, 30],
        'value': [1, np.nan, np.nan, 2, 3, 1, 3, np.nan, 3],
    }
)
... gives ...
category name other_value value
0 X A 10.0 1.0
1 X A NaN NaN
2 X B NaN NaN
3 X B 20.0 2.0
4 X B 30.0 3.0
5 X B 10.0 1.0
6 Y C 30.0 3.0
7 Y C NaN NaN
8 Y C 30.0 3.0
In this generalized case we would like to group by category and name, and impute only on value.
This can be solved as follows:
df['value'] = df.groupby(['category', 'name'])['value']\
.transform(lambda x: x.fillna(x.mean()))
Notice the column list in the group-by clause, and that we select the value column right after the group-by. This makes the transformation run only on that particular column. You could select the column at the end instead, but then the transform would run on all columns, only to throw away all but one measure column afterwards. A standard SQL query planner might have been able to optimize this away, but pandas (0.19.2) doesn't seem to.
A performance test, enlarging the dataset by doing ...
big_df = None
for _ in range(10000):
    if big_df is None:
        big_df = df.copy()
    else:
        big_df = pd.concat([big_df, df])
df = big_df
... confirms that this speeds things up in proportion to how many columns you don't have to impute:
import pandas as pd
from datetime import datetime
def generate_data():
    ...
t = datetime.now()
df = generate_data()
df['value'] = df.groupby(['category', 'name'])['value']\
.transform(lambda x: x.fillna(x.mean()))
print(datetime.now()-t)
# 0:00:00.016012
t = datetime.now()
df = generate_data()
df["value"] = df.groupby(['category', 'name'])\
.transform(lambda x: x.fillna(x.mean()))['value']
print(datetime.now()-t)
# 0:00:00.030022
On a final note you can generalize even further if you want to impute more than one column, but not all:
df[['value', 'other_value']] = df.groupby(['category', 'name'])[['value', 'other_value']]\
    .transform(lambda x: x.fillna(x.mean()))
Shortcut:
Groupby + Apply + Lambda + Fillna + Mean
>>> df['value'] = df.groupby('name')['value'].apply(lambda x: x.fillna(x.mean()))
>>> df.isnull().sum().sum()
0
This solution still works if you want to group by multiple columns to replace missing values.
>>> df = pd.DataFrame({'value': [1, np.nan, np.nan, 2, 3, np.nan,np.nan, 4, 3],
'name': ['A','A', 'B','B','B','B', 'C','C','C'],'class':list('ppqqrrsss')})
>>> df['value'] = df.groupby(['name','class'])['value'].apply(lambda x: x.fillna(x.mean()))
>>> df
value name class
0 1.0 A p
1 1.0 A p
2 2.0 B q
3 2.0 B q
4 3.0 B r
5 3.0 B r
6 3.5 C s
7 4.0 C s
8 3.0 C s
I'd do it this way:
df.loc[df.value.isnull(), 'value'] = df.groupby('name').value.transform('mean')
The featured high-ranked answer only works for a pandas DataFrame with exactly two columns. If you have more columns, use instead:
df['Crude_Birth_rate'] = df.groupby("continent").Crude_Birth_rate.transform(
lambda x: x.fillna(x.mean()))
To summarize the above concerning the efficiency of the possible solutions:
I have a dataset with 97 906 rows and 48 columns.
I want to fill in 4 columns with the median of each group.
The column I want to group on has 26 200 groups.
The first solution
import time

start = time.time()
x = df_merged[continuous_variables].fillna(df_merged.groupby('domain_userid')[continuous_variables].transform('median'))
print(time.time() - start)
0.10429811477661133 seconds
The second solution
start = time.time()
for col in continuous_variables:
    df_merged.loc[df_merged[col].isnull(), col] = df_merged.groupby('domain_userid')[col].transform('median')
print(time.time() - start)
0.5098445415496826 seconds
The next solution I only performed on a subset since it was running too long.
start = time.time()
for col in continuous_variables:
    x = df_merged.head(10000).groupby('domain_userid')[col].transform(lambda x: x.fillna(x.median()))
print(time.time() - start)
11.685635566711426 seconds
The following solution follows the same logic as above.
start = time.time()
x = df_merged.head(10000).groupby('domain_userid')[continuous_variables].transform(lambda x: x.fillna(x.median()))
print(time.time() - start)
42.630549907684326 seconds
So it's quite important to choose the right method.
Bear in mind that I noticed that once a column was not numeric, the times went up sharply (which makes sense, since I was computing the median).
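Note that the timed snippets above only bind the result to x; to actually persist the fastest variant, assign it back (a sketch, assuming df_merged and continuous_variables as above):
df_merged[continuous_variables] = df_merged[continuous_variables].fillna(
    df_merged.groupby('domain_userid')[continuous_variables].transform('median'))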
def groupMeanValue(group):
    # fill NaNs in 'value' with this group's own mean
    group['value'] = group['value'].fillna(group['value'].mean())
    return group

dft = df.groupby("name", group_keys=False).apply(groupMeanValue)
I know this is an old question, but I am quite surprised by the unanimity of apply/lambda answers here.
Generally speaking, that is the second-worst thing to do after iterating over rows, from a timing point of view.
What I would do here is
df.loc[df['value'].isna(), 'value'] = df.groupby('name')['value'].transform('mean')
Or using fillna
df['value'] = df['value'].fillna(df.groupby('name')['value'].transform('mean'))
I've checked with timeit (because, again, the unanimity of apply/lambda-based solutions made me doubt my instinct), and it is indeed about 2.5× faster than the most upvoted solutions.
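For reference, a minimal timing sketch on synthetic data (the sizes and column names here are assumptions, not from the original benchmark) comparing the built-in 'mean' transform against the lambda version:
import timeit
import numpy as np
import pandas as pd

df = pd.DataFrame({'value': np.random.choice([1.0, 2.0, np.nan], 100_000),
                   'name': np.random.choice(list('ABC'), 100_000)})

t_fast = timeit.timeit(
    lambda: df['value'].fillna(df.groupby('name')['value'].transform('mean')),
    number=20)
t_slow = timeit.timeit(
    lambda: df.groupby('name')['value'].transform(lambda x: x.fillna(x.mean())),
    number=20)
print(f"built-in transform: {t_fast:.3f}s, lambda transform: {t_slow:.3f}s")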
To fill all the numeric null values with the mean grouped by "name":
num_cols = df.select_dtypes(exclude='object').columns
df[num_cols] = df.groupby("name")[num_cols].transform(lambda x: x.fillna(x.mean()))
df.fillna(df.groupby(['name'], as_index=False).mean(), inplace=True)
Beware, though: fillna with a DataFrame aligns on index labels and columns, not on group membership, so this only fills the intended values when the group-mean frame's index happens to line up with the rows that need filling. The transform-based approaches above are safer.
You can also use df.apply(lambda x: x.fillna(x.mean())), but note that without a group-by this fills with each column's overall mean rather than the per-group mean.
How can I merge one dataframe with values from another lookup dataframe?
This is dataframe A where I want to replace the values :
InfoType IncidentType DangerType
0 NaN A NaN
1 NaN C NaN
2 NaN B C
3 NaN B NaN
This is the lookup table :
ID ParamCode ParamValue ParmDesc1 ParamDesc2 SortOrder ParamStatus
0 1 IncidentType A ABC DEF 1 1
1 2 IncidentType B GHI JKL 2 1
2 3 IncidentType C MNO PQR 7 1
2 3 DangerType C STU VWX 6 1
The expected output:
InfoType IncidentType DangerType
0 NaN ABC NaN
1 NaN MNO NaN
2 NaN GHI STU
3 NaN GHI NaN
Note that the ParamCode values are the column names, and I need to substitute ParamDesc1 into the respective columns in dataframe A. Every column in dataframe A may contain NaN, and I don't intend to remove them; just ignore them.
This is what I have done :
ntf_cols = ['InfoType','IncidentType','DangerType']
for c in ntf_cols:
    if (c in ntf.columns) & (c in param['ParamCode'].values):
        paramValue = param['ParamValue'].unique()
        for idx, pv in enumerate(paramValue):
            ntf['NewIncidentType'] = pd.np.where(ntf.IncidentType.str.contains(pv), param['ParmDesc1'].values, "whatever")
Error :
ValueError: operands could not be broadcast together with shapes (25,) (13,) ()
Use the lookup table to make a dict, and then replace the column values of the original dataframe. Assume the original dataframe is df1 and the lookup table is df2
...
dict_map = dict(zip(df2.ParamCode + "-" + df2.ParamValue, df2.ParmDesc1))
df1['IncidentType'] = ("IncidentType" +'-'+ df1.IncidentType).replace(dict_map)
df1['DangerType'] = ("DangerType" +'-'+ df1.DangerType).replace(dict_map)
...
EDIT: Lambda's answer gave me an idea for how you could do this for many columns that you want to apply this logical pattern to:
import pandas as pd
df1 = pd.DataFrame(dict(
    InfoType = [None, None, None, None],
    IncidentType = 'A C B B'.split(),
    DangerType = [None, None, 'C', None],
))
df2 = pd.DataFrame(dict(
    ParamCode = 'IncidentType IncidentType IncidentType DangerType'.split(),
    ParamValue = 'A B C C'.split(),
    ParmDesc1 = 'ABC GHI MNO STU'.split(),
))

for col in df1.columns[1:]:
    dict_map = dict(
        df2[df2.ParamCode == col][['ParamValue','ParmDesc1']].to_records(index=False)
    )
    df1[col] = df1[col].replace(dict_map)

print(df1)
This assumes every column after the first column in df1 is one that needs updating and the to-be updated column names exists as a values in the 'ParamCode' column of df2.
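A hedged variant of the same loop that derives the target columns from df2 itself, rather than assuming everything after df1's first column needs updating:
for col in df2['ParamCode'].unique():
    if col in df1.columns:
        sub = df2[df2['ParamCode'] == col]
        df1[col] = df1[col].replace(dict(zip(sub['ParamValue'], sub['ParmDesc1'])))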
This problem could be solved using some custom functions and pandas.Series.apply():
import pandas as pd
def find_incident_type(x):
    if pd.isna(x):
        return x
    return df2[
        (df2['ParamCode'] == 'IncidentType') & (df2['ParamValue'] == x)
    ]["ParmDesc1"].values[0]

def find_danger_type(x):
    if pd.isna(x):
        return x
    return df2[
        (df2['ParamCode'] == 'DangerType') & (df2['ParamValue'] == x)
    ]["ParmDesc1"].values[0]

df1 = pd.DataFrame(dict(
    InfoType = [None, None, None, None],
    IncidentType = 'A C B B'.split(),
    DangerType = [None, None, 'C', None],
))
df2 = pd.DataFrame(dict(
    ParamCode = 'IncidentType IncidentType IncidentType DangerType'.split(),
    ParamValue = 'A B C C'.split(),
    ParmDesc1 = 'ABC GHI MNO STU'.split(),
))
df1['IncidentType'] = df1['IncidentType'].apply(find_incident_type)
df1['DangerType'] = df1['DangerType'].apply(find_danger_type)
print(df1)
It is very possible there is a more efficient way to do this; hopefully someone who knows it will share it.
Also, the reference to df2 from the outer scope is hard-coded into the custom functions, so they will only work with that exact variable name in the outer scope. Use the args parameter of pandas.Series.apply if you don't want the functions to depend on that reference.
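For example, a minimal sketch (the helper name and its parameters are mine, not from the original answer) that collapses the two functions into one and passes the lookup frame explicitly through apply's args:
def find_desc(x, lookup, param_code):
    # x comes from the column being mapped; lookup/param_code arrive via args
    if pd.isna(x):
        return x
    match = lookup[(lookup['ParamCode'] == param_code) & (lookup['ParamValue'] == x)]
    return match['ParmDesc1'].values[0]

df1['IncidentType'] = df1['IncidentType'].apply(find_desc, args=(df2, 'IncidentType'))
df1['DangerType'] = df1['DangerType'].apply(find_desc, args=(df2, 'DangerType'))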
I am processing inbound user data. I receive a DataFrame h that is supposed to contain all floats but has some strings:
>>> h = pd.DataFrame(np.random.rand(3, 2), columns=['a', 'b'])
>>> h.loc[0, 'a'] = 'bad'
>>> h.loc[1, 'b'] = 'robot'
>>> h
a b
0 bad 0.747314
1 0.921919 robot
2 0.754256 0.664455
I process and set the strings to np.nan (I realize np.nan is a float but this is to illustrate):
>>> hh = h.copy()
>>> hh.loc[0, 'a'] = np.nan
>>> hh.loc[1, 'b'] = np.nan
>>> hh
a b
0 NaN 0.747314
1 0.921919 NaN
2 0.754256 0.664455
I have a DataFrame with expected values (or a dict):
>>> g = pd.DataFrame({'a': ['foo'], 'b': ['bar']}, index=h.index)
>>> g
a b
0 foo bar
1 foo bar
2 foo bar
Which I use to fill where the bad data is.
>>> hh.fillna(g)
a b
0 foo 0.747314
1 0.921919 bar
2 0.754256 0.664455
I need to include the expected data too. So the result should be:
>>> magic(hh, g)
a b
0 rec=bad; exp=foo 0.747314
1 0.921919 rec=robot; exp=bar
2 0.754256 0.664455
How can I create such a result?
You can mask the unchanged values to NaN with DataFrame.where, join the pieces together as strings, and finally fill the remaining positions with the original values:
m = hh.isna()
df = ('rec=' + h.where(m) + '; exp=' + g.where(m)).fillna(h)
print (df)
a b
0 rec=bad; exp=foo 0.440508
1 0.525949 rec=robot; exp=bar
2 0.337586 0.414336
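Broken into steps with comments (same logic, assuming h, hh, and g as defined in the question):
m = hh.isna()     # True where the received value was bad
rec = h.where(m)  # the original bad strings where m, NaN elsewhere
exp = g.where(m)  # the expected values where m, NaN elsewhere
df = ('rec=' + rec + '; exp=' + exp).fillna(h)  # annotate bad cells, keep the rest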
The simple task of adding a row to a pandas.DataFrame object seems to be hard to accomplish. There are 3 stackoverflow questions relating to this, none of which give a working answer.
Here is what I'm trying to do. I have a DataFrame of which I already know the shape as well as the names of the rows and columns.
>>> df = pandas.DataFrame(columns=['a','b','c','d'], index=['x','y','z'])
>>> df
a b c d
x NaN NaN NaN NaN
y NaN NaN NaN NaN
z NaN NaN NaN NaN
Now, I have a function to compute the values of the rows iteratively. How can I fill in one of the rows with either a dictionary or a pandas.Series? Here are various attempts that have failed:
>>> y = {'a':1, 'b':5, 'c':2, 'd':3}
>>> df['y'] = y
AssertionError: Length of values does not match length of index
Apparently it tried to add a column instead of a row.
>>> y = {'a':1, 'b':5, 'c':2, 'd':3}
>>> df.join(y)
AttributeError: 'builtin_function_or_method' object has no attribute 'is_unique'
Very uninformative error message.
>>> y = {'a':1, 'b':5, 'c':2, 'd':3}
>>> df.set_value(index='y', value=y)
TypeError: set_value() takes exactly 4 arguments (3 given)
Apparently that is only for setting individual values in the dataframe.
>>> y = {'a':1, 'b':5, 'c':2, 'd':3}
>>> df.append(y)
Exception: Can only append a Series if ignore_index=True
Well, I don't want to ignore the index, otherwise here is the result:
>>> df.append(y, ignore_index=True)
a b c d
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 NaN NaN NaN NaN
3 1 5 2 3
It did align the column names with the values, but lost the row labels.
>>> y = {'a':1, 'b':5, 'c':2, 'd':3}
>>> df.ix['y'] = y
>>> df
a b \
x NaN NaN
y {'a': 1, 'c': 2, 'b': 5, 'd': 3} {'a': 1, 'c': 2, 'b': 5, 'd': 3}
z NaN NaN
c d
x NaN NaN
y {'a': 1, 'c': 2, 'b': 5, 'd': 3} {'a': 1, 'c': 2, 'b': 5, 'd': 3}
z NaN NaN
That also failed miserably.
So how do you do it?
df['y'] will set a column. Since you want to set a row, use .loc. Note that .ix is equivalent here; yours failed because you tried to assign a dictionary to each element of the row y, which is probably not what you want. Converting to a Series tells pandas that you want to align the input (so, for example, you then don't have to specify all of the elements):
In [6]: import pandas as pd
In [7]: df = pd.DataFrame(columns=['a','b','c','d'], index=['x','y','z'])
In [8]: df.loc['y'] = pd.Series({'a':1, 'b':5, 'c':2, 'd':3})
In [9]: df
Out[9]:
a b c d
x NaN NaN NaN NaN
y 1 5 2 3
z NaN NaN NaN NaN
Update: because append has been deprecated (and removed in pandas 2.0), use pd.concat instead:
df = pd.DataFrame(columns=["firstname", "lastname"])
entry = pd.DataFrame.from_dict({
    "firstname": ["John"],
    "lastname": ["Johny"],
})
df = pd.concat([df, entry], ignore_index=True)
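If you need to keep an explicit row label (as in the original question), a small variation of the same idea: give the new row its index up front instead of using ignore_index=True:
entry = pd.DataFrame({"firstname": ["John"], "lastname": ["Johny"]}, index=["y"])
df = pd.concat([df, entry])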
This is a simpler version
import pandas as pd
df = pd.DataFrame(columns=('col1', 'col2', 'col3'))
for i in range(5):
    df.loc[i] = ['<some value for first>', '<some value for second>', '<some value for third>']
If your input rows are lists rather than dictionaries, then the following is a simple solution:
import pandas as pd
list_of_lists = []
list_of_lists.append([1,2,3])
list_of_lists.append([4,5,6])
pd.DataFrame(list_of_lists, columns=['A', 'B', 'C'])
# A B C
# 0 1 2 3
# 1 4 5 6
The logic behind the code is quite simple and straightforward:
Make a df with one row from the dictionary.
Then create a df of shape (1, 4) that contains only NaN and has the same columns as the dictionary keys.
Then concatenate a NaN df, the dict df, and another NaN df.
import pandas as pd
import numpy as np
raw_datav = {'a':1, 'b':5, 'c':2, 'd':3}
datav_df = pd.DataFrame(raw_datav, index=[0])
nan_df = pd.DataFrame([[np.nan]*4], columns=raw_datav.keys())
df = pd.concat([nan_df, datav_df, nan_df], ignore_index=True)
df.index = ["x", "y", "z"]
print(df)
gives
a b c d
x NaN NaN NaN NaN
y 1.0 5.0 2.0 3.0
z NaN NaN NaN NaN
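For what it's worth, the same result can be sketched more directly with reindex, which inserts all-NaN rows for the missing labels (assuming the same raw_datav):
df = pd.DataFrame(raw_datav, index=["y"]).reindex(["x", "y", "z"])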