Consider the following dataframe:
index  count  signal
    1      1       1
    2      1     NaN
    3      1     NaN
    4      1      -1
    5      1     NaN
    6      2     NaN
    7      2      -1
    8      2     NaN
    9      3     NaN
   10      3     NaN
   11      3     NaN
   12      4       1
   13      4     NaN
   14      4     NaN
I need to forward-fill ('ffill') the NaNs in 'signal', but rows with different 'count' values should not affect each other, so that I get the following dataframe:
index  count  signal
    1      1       1
    2      1       1
    3      1       1
    4      1      -1
    5      1      -1
    6      2     NaN
    7      2      -1
    8      2      -1
    9      3     NaN
   10      3     NaN
   11      3     NaN
   12      4       1
   13      4       1
   14      4       1
Right now I iterate through each group in the groupby object, fill the NaN values, and then copy the group into a new dataframe:
new_table = np.array([])
for key, group in df.groupby('count'):
    group['signal'] = group['signal'].fillna(method='ffill')
    group1 = group.copy()
    if new_table.shape[0] == 0:
        new_table = group1
    else:
        new_table = pd.concat([new_table, group1])
which kind of works, but is really slow given how large the dataframe is. I am wondering if there is any other method to do it, with or without groupby. Thanks!
EDITED:
Thanks to Alexander and jwilner for providing alternative methods. However, both methods are very slow on my big dataframe, which has 800,000 rows of data.
Use the apply method.
In [56]: df = pd.DataFrame({"count": [1] * 4 + [2] * 5 + [3] * 2 , "signal": [1] + [None] * 4 + [-1] + [None] * 5})
In [57]: df
Out[57]:
    count  signal
0       1       1
1       1     NaN
2       1     NaN
3       1     NaN
4       2     NaN
5       2      -1
6       2     NaN
7       2     NaN
8       2     NaN
9       3     NaN
10      3     NaN

[11 rows x 2 columns]
In [58]: def ffill_signal(df):
   ....:     df["signal"] = df["signal"].ffill()
   ....:     return df
   ....:
In [59]: df.groupby("count").apply(ffill_signal)
Out[59]:
    count  signal
0       1       1
1       1       1
2       1       1
3       1       1
4       2     NaN
5       2      -1
6       2      -1
7       2      -1
8       2      -1
9       3     NaN
10      3     NaN

[11 rows x 2 columns]
However, be aware that groupby reorders stuff. If the count column doesn't always stay the same or increase, but instead can have values repeated in it, groupby might be problematic. That is, given a count series like [1, 1, 2, 2, 1], groupby will group like so: [1, 1, 1], [2, 2], which could have possibly undesirable effects on your forward filling. If that were undesired, you'd have to create a new series to use with groupby that always stayed the same or increased according to changes in the count series -- probably using pd.Series.diff and pd.Series.cumsum
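For example, a run-id grouper can be built from consecutive changes in the count series (a sketch; 'run_id' and the tiny example frame are illustrative, not from the question):

import pandas as pd

df = pd.DataFrame({"count": [1, 1, 2, 2, 1],
                   "signal": [1, None, None, -1, None]})
# A new run starts whenever 'count' differs from the previous row;
# cumulative-summing those change flags gives a monotone group id:
# [1, 1, 2, 2, 3] for the count series above.
run_id = df["count"].ne(df["count"].shift()).cumsum()
df["signal"] = df.groupby(run_id)["signal"].ffill()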
I know it's very late, but I found a solution that is much faster than those proposed, namely to collect the updated dataframes in a list and do the concatenation only at the end. To take your example:
new_table = []
for key, group in df.groupby('count'):
    group = group.copy()  # avoid mutating the groupby's view of df
    group['signal'] = group['signal'].fillna(method='ffill')
    new_table.append(group)
new_table = pd.concat(new_table).reset_index(drop=True)
An alternative solution is to create a pivot table, forward fill values, and then map them back into the original DataFrame.
df2 = df.pivot(columns='count', values='signal', index='index').ffill()
df['signal'] = [df2.at[i, c]
                for i, c in zip(df2.index, df['count'].tolist())]
>>> df
    count  index  signal
0       1      1       1
1       1      2       1
2       1      3       1
3       1      4      -1
4       1      5      -1
5       2      6     NaN
6       2      7      -1
7       2      8      -1
8       3      9     NaN
9       3     10     NaN
10      3     11     NaN
11      4     12       1
12      4     13       1
13      4     14       1
With 800k rows of data, the efficiency of this approach depends on how many unique values are in 'count'.
Compared to my prior answer:
%%timeit
for c in df['count'].unique():
    df.loc[df['count'] == c, 'signal'] = df[df['count'] == c].ffill()
100 loops, best of 3: 4.1 ms per loop
%%timeit
df2 = df.pivot(columns='count', values='signal', index='index').ffill()
df['signal'] = [df2.at[i, c] for i, c in zip(df2.index, df['count'].tolist())]
1000 loops, best of 3: 1.32 ms per loop
Lastly, you can simply use groupby, although it is slower than the previous method:
df.groupby('count').ffill()
Out[191]:
    index  signal
0       1       1
1       2       1
2       3       1
3       4      -1
4       5      -1
5       6     NaN
6       7      -1
7       8      -1
8       9     NaN
9      10     NaN
10     11     NaN
11     12       1
12     13       1
13     14       1
%%timeit
df.groupby('count').ffill()
100 loops, best of 3: 3.55 ms per loop
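Note that df.groupby('count').ffill() returns only the non-grouping columns, as Out[191] shows, so to keep the fill in the original frame you would assign the filled column back (a sketch):

df['signal'] = df.groupby('count')['signal'].ffill()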
Assuming the data has been pre-sorted on df['index'], try using loc instead:
for c in df['count'].unique():
    df.loc[df['count'] == c, 'signal'] = df[df['count'] == c].ffill()
>>> df
    index  count  signal
0       1      1       1
1       2      1       1
2       3      1       1
3       4      1      -1
4       5      1      -1
5       6      2     NaN
6       7      2      -1
7       8      2      -1
8       9      3     NaN
9      10      3     NaN
10     11      3     NaN
11     12      4       1
12     13      4       1
13     14      4       1
Related
The following is the pandas dataframe I have:
cluster  Value
      1      A
      1    NaN
      1    NaN
      1    NaN
      1    NaN
      2    NaN
      2    NaN
      2      B
      2    NaN
      3    NaN
      3    NaN
      3      C
      3    NaN
      4    NaN
      4      S
      4    NaN
      5    NaN
      5      A
      5    NaN
      5    NaN
Looking at the data, cluster 1 has Value 'A' in one row and NaN in all the others. I want to fill 'A' into all the rows of cluster 1, and similarly for the other clusters: based on the one non-missing value in a cluster, fill the remaining rows of that cluster. The output should look like:
cluster  Value
      1      A
      1      A
      1      A
      1      A
      1      A
      2      B
      2      B
      2      B
      2      B
      3      C
      3      C
      3      C
      3      C
      4      S
      4      S
      4      S
      5      A
      5      A
      5      A
      5      A
I am new to Python and not sure how to proceed. Can anybody help with this?
groupby + bfill, and ffill
df = df.groupby('cluster').bfill().ffill()
df
    cluster Value
0         1     A
1         1     A
2         1     A
3         1     A
4         1     A
5         2     B
6         2     B
7         2     B
8         2     B
9         3     C
10        3     C
11        3     C
12        3     C
13        4     S
14        4     S
15        4     S
16        5     A
17        5     A
18        5     A
19        5     A
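Note that the trailing .ffill() above runs on the whole frame rather than per group, so on other data it could carry a value across cluster boundaries. A fully per-group variant (a sketch, same idea expressed with transform):

df['Value'] = df.groupby('cluster')['Value'].transform(lambda s: s.bfill().ffill())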
Or,
groupby + transform with first
df['Value'] = df.groupby('cluster').Value.transform('first')
df
    cluster Value
0         1     A
1         1     A
2         1     A
3         1     A
4         1     A
5         2     B
6         2     B
7         2     B
8         2     B
9         3     C
10        3     C
11        3     C
12        3     C
13        4     S
14        4     S
15        4     S
16        5     A
17        5     A
18        5     A
19        5     A
Edit
The following seems better:
nan_map = df.dropna().set_index('cluster').to_dict()['Value']
df['Value'] = df['cluster'].map(nan_map)
print(df)
Original
I can't think of a better way to do this than iterate over all the rows, but one might exist. First I built your DataFrame:
import pandas as pd
import math

# Build your DataFrame (pd.DataFrame.from_items was removed in pandas 1.0;
# a plain dict constructor works just as well here)
df = pd.DataFrame({
    'cluster': [1, 1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3, 4, 4, 4, 5, 5, 5, 5],
    'Value': [float('nan') for _ in range(20)],
})
df['Value'] = df['Value'].astype(object)
df.at[0, 'Value'] = 'A'
df.at[7, 'Value'] = 'B'
df.at[11, 'Value'] = 'C'
df.at[14, 'Value'] = 'S'
df.at[17, 'Value'] = 'A'
Now here's an approach that first creates a nan_map dict, then sets the values in Value as specified in the dict.
# Create a dict to map clusters to unique values
nan_map = df.dropna().set_index('cluster').to_dict()['Value']
# nan_map: {1: 'A', 2: 'B', 3: 'C', 4: 'S', 5: 'A'}

# Apply
for i, row in df.iterrows():
    df.at[i, 'Value'] = nan_map[row['cluster']]
print(df)
Output:
    cluster Value
0         1     A
1         1     A
2         1     A
3         1     A
4         1     A
5         2     B
6         2     B
7         2     B
8         2     B
9         3     C
10        3     C
11        3     C
12        3     C
13        4     S
14        4     S
15        4     S
16        5     A
17        5     A
18        5     A
19        5     A
Note: This sets all values based on the cluster and doesn't check for NaN-ness. You may want to experiment with something like:
# Apply
for i, row in df.iterrows():
    if isinstance(df.at[i, 'Value'], float) and math.isnan(df.at[i, 'Value']):
        df.at[i, 'Value'] = nan_map[row['cluster']]
to see which is more efficient (my guess is the former, without the checks).
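A vectorized middle ground that fills only the NaNs, reusing the nan_map built above (a sketch; fillna with a Series aligns on the index, so rows that already have a value keep it):

df['Value'] = df['Value'].fillna(df['cluster'].map(nan_map))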
I need to generate a column that starts with an initial value, and then is generated by a function that includes past values of that column. For example
df = pd.DataFrame({'a': [1, 1, 5, 2, 7, 8, 16, 16, 16]})
df['b'] = 0
df.loc[0, 'b'] = 1  # .ix is removed in modern pandas; .loc works here
df
    a  b
0   1  1
1   1  0
2   5  0
3   2  0
4   7  0
5   8  0
6  16  0
7  16  0
8  16  0
Now, I want to generate the rest of the column 'b' by taking the minimum of the previous row and adding two. One solution would be
for i in range(1, len(df)):
    df.loc[i, 'b'] = df.loc[i - 1, :].min() + 2
Resulting in the desired output
    a   b
0   1   1
1   1   3
2   5   3
3   2   5
4   7   4
5   8   6
6  16   8
7  16  10
8  16  12
Does pandas have a 'clean' way to do this? Preferably one that would vectorize the computation?
pandas doesn't have a great way to handle general recursive calculations. There may be some trick to vectorize it, but if you can take the dependency, this is relatively painless and very fast with numba.
import numba
import numpy as np

@numba.njit
def make_b(a):
    b = np.zeros_like(a)
    b[0] = 1
    for i in range(1, len(a)):
        b[i] = min(b[i - 1], a[i - 1]) + 2
    return b
df['b'] = make_b(df['a'].values)
df
Out[73]:
    a   b
0   1   1
1   1   3
2   5   3
3   2   5
4   7   4
5   8   6
6  16   8
7  16  10
8  16  12
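If taking on the numba dependency is not an option, the same recurrence can run as a plain Python loop over a list (a sketch; far slower than the jitted version on large frames, but dependency-free):

a = df['a'].tolist()
b = [1]  # initial value
for i in range(1, len(a)):
    # each entry is the minimum of the previous row's a and b, plus two
    b.append(min(b[-1], a[i - 1]) + 2)
df['b'] = b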
What is the best way to fill missing values in a dataframe with items from a list?
For example:
pd.DataFrame([[1, 2, 3], [4, 5], [7, 8], [10, 11, 12], [13, 14]])

    0   1    2
0   1   2    3
1   4   5  NaN
2   7   8  NaN
3  10  11   12
4  13  14  NaN
lst = [6, 9, 150]  # renamed to avoid shadowing the built-in 'list'
to get something like this:
    0   1    2
0   1   2    3
1   4   5    6
2   7   8    9
3  10  11   12
4  13  14  150
This is actually a little tricky and a bit of a hack. If you know which column you want to fill the NaN values for, you can construct a df for that column with the indices of the missing values and pass that df to fillna:
In [33]:
fill = pd.DataFrame(index=df.index[df.isnull().any(axis=1)], data=[6, 9, 150], columns=[2])
df.fillna(fill)
Out[33]:
    0   1    2
0   1   2    3
1   4   5    6
2   7   8    9
3  10  11   12
4  13  14  150
You can't pass a dict (my original answer), because a dict's keys are matched against column labels, and the corresponding scalar is then used for all NaN values in that column, which is not what you want:
In [40]:
l = [6, 9, 150]
df.fillna(dict(zip(df.index[df.isnull().any(axis=1)], l)))
Out[40]:
    0   1   2
0   1   2   3
1   4   5   9
2   7   8   9
3  10  11  12
4  13  14   9
You can see that it has replaced every NaN with 9, because the dict key 2 (a row index in our mapping) was matched against column 2.
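As an aside, if you know the NaNs all sit in a single column and the list is in row order, direct boolean-mask assignment is simpler (a sketch under those assumptions):

df.loc[df[2].isnull(), 2] = [6, 9, 150]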
I want to replace some missing values in a dataframe with some other values, keeping the index alignment.
For example, in the following dataframe
import pandas as pd
import numpy as np
df = pd.DataFrame({'A': np.repeat(['a', 'b', 'c'], 4),
                   'B': np.tile([1, 2, 3, 4], 3),
                   'C': range(12),
                   'D': range(12)})
df = df.iloc[:-1]
df.set_index(['A', 'B'], inplace=True)
df.loc['b'] = np.nan
df
       C    D
A B
a 1    0    0
  2    1    1
  3    2    2
  4    3    3
b 1  NaN  NaN
  2  NaN  NaN
  3  NaN  NaN
  4  NaN  NaN
c 1    8    8
  2    9    9
  3   10   10
I would like to replace the missing values of 'b' rows matching them with the corresponding indices of 'c' rows.
The result should look like
       C    D
A B
a 1    0    0
  2    1    1
  3    2    2
  4    3    3
b 1    8    8
  2    9    9
  3   10   10
  4  NaN  NaN
c 1    8    8
  2    9    9
  3   10   10
You can use fillna with a values dictionary built by to_dict from the relevant 'c' rows, like this (.ix is removed in modern pandas, so .loc is used here, with the result assigned back rather than relying on inplace on a slice):

>>> df.loc['b'] = df.loc['b'].fillna(value=df.loc['c'].to_dict())
>>> df.loc['b']
     C    D
B
1    8    8
2    9    9
3   10   10
4  NaN  NaN
Result:
>>> df
       C    D
A B
a 1    0    0
  2    1    1
  3    2    2
  4    3    3
b 1    8    8
  2    9    9
  3   10   10
  4  NaN  NaN
c 1    8    8
  2    9    9
  3   10   10
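A shorter variant relies on fillna's index alignment when it is passed a DataFrame, since the 'b' and 'c' slices share the B level of the index (a sketch; B=4 stays NaN because 'c' has no such row):

df.loc['b'] = df.loc['b'].fillna(df.loc['c'])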