I have a dataframe like this with some NaNs:
df:
0 1
1 11.0 111.0
2 12.0 112.0
3 13.0 113.0
4 NaN 114.0
4 15.0 NaN
5 16.0 116.0
6 17.0 117.0
7 18.0 118.0
So what should I do to it to get the following:
0 1
1 11.0 111.0
2 12.0 112.0
3 13.0 113.0
4 15.0 114.0
4 15.0 114.0
5 16.0 116.0
6 17.0 117.0
7 18.0 118.0
That is, the NaN values at index 4 should be filled with the non-NaN values from the other rows at index 4.
You can group by the index and apply ffill() + bfill() within each group:
In [165]: df.groupby(level=0).apply(lambda x: x.ffill().bfill())
Out[165]:
0 1
1 11.0 111.0
2 12.0 112.0
3 13.0 113.0
4 15.0 114.0
4 15.0 114.0
5 16.0 116.0
6 17.0 117.0
7 18.0 118.0
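If you prefer to avoid apply, the same fill can be expressed as two grouped passes; GroupBy.ffill and GroupBy.bfill fill only within rows sharing an index label. A minimal sketch rebuilding the sample frame above:

```python
import numpy as np
import pandas as pd

# Rebuild the sample frame with a duplicated index label 4
df = pd.DataFrame({0: [11.0, 12.0, 13.0, np.nan, 15.0, 16.0, 17.0, 18.0],
                   1: [111.0, 112.0, 113.0, 114.0, np.nan, 116.0, 117.0, 118.0]},
                  index=[1, 2, 3, 4, 4, 5, 6, 7])

# Forward fill, then back fill, each restricted to rows sharing an index label
out = df.groupby(level=0).ffill().groupby(level=0).bfill()
print(out.loc[4])  # both rows become 15.0 / 114.0
```

This avoids the per-group Python-level lambda, which matters on frames with many groups.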
I have this dataframe:
import pandas as pd

hour = [0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
visitor = [4,6,2,4,3,7,5,7,8,3,2,8,3,6,4,5,1,8,9,4,2,3,4,1]
df = {"Hour":hour, "Total_Visitor":visitor}
df = pd.DataFrame(df)
print(df)
I applied a rolling sum with a window of 6:
df_roll = df.rolling(6, min_periods=6).sum()
print(df_roll)
The first 5 rows give NaN values.
The problem is that I want to know the total number of visitors from 9pm to 3am, so I have to sum the visitors starting at hour 21 and wrapping around through hours 0 to 3.
How do you do that automatically with rolling?
I think you need to prepend the last N rows, then apply rolling and keep only the last len(df) rows:
N = 6
# DataFrame.append was removed in pandas 2.0, so use pd.concat
df_roll = pd.concat([df.iloc[-N:], df]).rolling(N).sum().iloc[-len(df):]
print (df_roll)
Hour Total_Visitor
0 105.0 18.0
1 87.0 20.0
2 69.0 20.0
3 51.0 21.0
4 33.0 20.0
5 15.0 26.0
6 21.0 27.0
7 27.0 28.0
8 33.0 34.0
9 39.0 33.0
10 45.0 32.0
11 51.0 33.0
12 57.0 31.0
13 63.0 30.0
14 69.0 26.0
15 75.0 28.0
16 81.0 27.0
17 87.0 27.0
18 93.0 33.0
19 99.0 31.0
20 105.0 29.0
21 111.0 27.0
22 117.0 30.0
23 123.0 23.0
Compare with the original solution:
df_roll = df.rolling(6, min_periods=6).sum()
print(df_roll)
Hour Total_Visitor
0 NaN NaN
1 NaN NaN
2 NaN NaN
3 NaN NaN
4 NaN NaN
5 15.0 26.0
6 21.0 27.0
7 27.0 28.0
8 33.0 34.0
9 39.0 33.0
10 45.0 32.0
11 51.0 33.0
12 57.0 31.0
13 63.0 30.0
14 69.0 26.0
15 75.0 28.0
16 81.0 27.0
17 87.0 27.0
18 93.0 33.0
19 99.0 31.0
20 105.0 29.0
21 111.0 27.0
22 117.0 30.0
23 123.0 23.0
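As a sanity check on the wrap-around logic (using the sample data above), the window ending at hour 3 should cover hours 22, 23, 0, 1, 2, and 3:

```python
import pandas as pd

hour = list(range(24))
visitor = [4,6,2,4,3,7,5,7,8,3,2,8,3,6,4,5,1,8,9,4,2,3,4,1]
df = pd.DataFrame({"Hour": hour, "Total_Visitor": visitor})

N = 6
# Prepend the last N rows so the first windows can reach back past midnight
df_roll = pd.concat([df.iloc[-N:], df]).rolling(N).sum().iloc[-len(df):]

# Hours 22, 23, 0, 1, 2, 3 -> 4 + 1 + 4 + 6 + 2 + 4 = 21
print(df_roll["Total_Visitor"].iloc[3])  # 21.0
```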
A NumPy alternative using strides is more involved, but faster for a large Series:
def rolling_window(a, window):
    shape = a.shape[:-1] + (a.shape[-1] - window + 1, window)
    strides = a.strides + (a.strides[-1],)
    return np.lib.stride_tricks.as_strided(a, shape=shape, strides=strides)

N = 3
fv = pd.Series([1, 2, 3, 4, 5])
x = np.concatenate([fv[-N+1:], fv.to_numpy()])
cv = pd.Series(rolling_window(x, N).sum(axis=1), index=fv.index)
print(cv)
0    10
1     8
2     6
3     9
4    12
dtype: int64
Though you mentioned a Series, see if this is helpful:
import pandas as pd

def cyclic_roll(s, n):
    # Wrap the first n-1 values onto the end so the window can cycle
    s = pd.concat([s, s[:n-1]])
    result = s.rolling(n).sum()
    return pd.concat([result[-n+1:], result[n-1:-n+1]])

fv = pd.DataFrame([1, 2, 3, 4, 5])
cv = fv.apply(cyclic_roll, n=3)
cv.reset_index(inplace=True, drop=True)
print(cv)
Output
0
0 10.0
1 8.0
2 6.0
3 9.0
4 12.0
Given a dataframe as follows:
id value1 value2
0 3918703 62.0 64.705882
1 3919144 60.0 60.000000
2 3919534 62.5 30.000000
3 3919559 55.0 55.000000
4 3920438 82.0 82.031250
5 3920463 71.0 71.428571
6 3920502 70.0 69.230769
7 3920535 80.0 40.000000
8 3920674 62.0 62.222222
9 3920856 80.0 79.987176
I want to check whether value2 is within plus or minus 10% of value1, and record the result in a new column, results_review.
If value2 is not in the required range, indicate no as the results_review value.
id value1 value2 results_review
0 3918703 62.0 64.705882 NaN
1 3919144 60.0 60.000000 NaN
2 3919534 62.5 30.000000 no
3 3919559 55.0 55.000000 NaN
4 3920438 82.0 82.031250 NaN
5 3920463 71.0 71.428571 NaN
6 3920502 70.0 69.230769 NaN
7 3920535 80.0 40.000000 no
8 3920674 62.0 62.222222 NaN
9 3920856 80.0 79.987176 NaN
How can I do that in Pandas? Thanks for your help in advance.
Use Series.between with DataFrame.loc:
m = df['value2'].between(df['value1'].mul(0.9), df['value1'].mul(1.1))
df.loc[~m, 'results_review'] = 'no'
print(df)
id value1 value2 results_review
0 3918703 62.0 64.705882 NaN
1 3919144 60.0 60.000000 NaN
2 3919534 62.5 30.000000 no
3 3919559 55.0 55.000000 NaN
4 3920438 82.0 82.031250 NaN
5 3920463 71.0 71.428571 NaN
6 3920502 70.0 69.230769 NaN
7 3920535 80.0 40.000000 no
8 3920674 62.0 62.222222 NaN
9 3920856 80.0 79.987176 NaN
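If you want an explicit label in both cases instead of NaN (a variation on the requested output, not the exact spec above), np.where works on the same mask:

```python
import numpy as np
import pandas as pd

# A few rows from the sample data
df = pd.DataFrame({'value1': [62.0, 62.5, 80.0],
                   'value2': [64.705882, 30.0, 79.987176]})

# True where value2 lies within +/-10% of value1
m = df['value2'].between(df['value1'] * 0.9, df['value1'] * 1.1)
df['results_review'] = np.where(m, 'yes', 'no')
print(df['results_review'].tolist())  # ['yes', 'no', 'yes']
```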
I want to re-assign values in specific rows and varying multi-index columns of a large pandas dataframe, df, to non-NaN values that have been calculated and stored in a slightly smaller masked subset of the dataframe, df_sub.
df =
A B
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 -51.0 -50.0 -49.0 -48.0 -47.0 -46.0 -45.0 -44.0 -43.0 -42.0
1 11.0 12.0 13.0 14.0 15.0 16.0 17.0 18.0 19.0 20.0 -41.0 -40.0 -39.0 -38.0 -37.0 -36.0 -35.0 -34.0 -33.0 -32.0
2 21.0 22.0 23.0 24.0 25.0 26.0 27.0 28.0 29.0 30.0 -31.0 -30.0 -29.0 -28.0 -27.0 -26.0 -25.0 -24.0 -23.0 -22.0
3 31.0 32.0 33.0 34.0 35.0 36.0 37.0 38.0 39.0 40.0 -21.0 -20.0 -19.0 -18.0 -17.0 -16.0 -15.0 -14.0 -13.0 -12.0
4 41.0 42.0 43.0 44.0 45.0 46.0 47.0 48.0 49.0 50.0 -11.0 -10.0 -9.0 -8.0 -7.0 -6.0 -5.0 -4.0 -3.0 -2.0
df_sub =
0 1 2 3 4 5 6 7 8 9
1 NaN NaN NaN NaN NaN 0.5 0.6 0.7 NaN NaN
3 NaN NaN NaN 0.3 0.4 0.5 NaN NaN NaN NaN
My goal is to get the result, shown below, for df.loc[:,'B'], where the non-NaN values in df_sub replace the respective rows and columns of df (i.e., df.loc[1, pd.IndexSlice['B', 5:7]] = df_sub.loc[1, 5:7] and df.loc[3, pd.IndexSlice['B', 3:5]] = df_sub.loc[3, 3:5]):
df.loc[:,'B'] =
0 1 2 3 4 5 6 7 8 9
0 -51.0 -50.0 -49.0 -48.0 -47.0 -46.0 -45.0 -44.0 -43.0 -42.0
1 -41.0 -40.0 -39.0 -38.0 -37.0 0.5 0.6 0.7 -33.0 -32.0
2 -31.0 -30.0 -29.0 -28.0 -27.0 -26.0 -25.0 -24.0 -23.0 -22.0
3 -21.0 -20.0 -19.0 0.3 0.4 0.5 -15.0 -14.0 -13.0 -12.0
4 -11.0 -10.0 -9.0 -8.0 -7.0 -6.0 -5.0 -4.0 -3.0 -2.0
However, rather than getting the desired values, I am getting NaNs:
df.loc[:,'B'] =
0 1 2 3 4 5 6 7 8 9
0 -51.0 -50.0 -49.0 -48.0 -47.0 -46.0 -45.0 -44.0 -43.0 -42.0
1 -41.0 -40.0 -39.0 -38.0 -37.0 NaN NaN NaN -33.0 -32.0
2 -31.0 -30.0 -29.0 -28.0 -27.0 -26.0 -25.0 -24.0 -23.0 -22.0
3 -21.0 -20.0 -19.0 NaN NaN NaN -15.0 -14.0 -13.0 -12.0
4 -11.0 -10.0 -9.0 -8.0 -7.0 -6.0 -5.0 -4.0 -3.0 -2.0
My simple sample code is included below. From the diagnostics, it looks like everything is behaving as expected: 1) the non-NaN values and their indices from df_sub are identified for each row of df_sub, 2) the slicing of the original df appears to be correct, and 3) the assignment is made without a complaint or a SettingWithCopyWarning.
What is the appropriate way to accomplish my goal?
Why is this failing?
Is there a more compact, efficient way to perform the assignments?
Simplified example:
# Create data for example case
import numpy as np
import pandas as pd

idf = pd.MultiIndex.from_product([['A', 'B'], np.arange(0, 10)])
df = pd.DataFrame(np.concatenate((np.arange(1., 51.).reshape(5, 10),
                                  np.arange(-51., -1.).reshape(5, 10)), axis=1),
                  index=np.arange(0, 5), columns=idf)
df_sub = pd.DataFrame([[np.nan, np.nan, np.nan, np.nan, np.nan, 0.5, 0.6, 0.7, np.nan, np.nan],
                       [np.nan, np.nan, np.nan, 0.3, 0.4, 0.5, np.nan, np.nan, np.nan, np.nan]],
                      index=[1, 3], columns=np.arange(0, 10))
dfsub_idx = df_sub.index
dfsub_idx = df_sub.index
# Perform assignments
for idx, row in df_sub.iterrows():
    arr = row.index[~row.isnull()]
    print('row {}:\n{}'.format(idx, row))
    print('non-nan indices: {}\n'.format(arr))
    print('df before mod:\n{}'.format(df.loc[idx, pd.IndexSlice['B', arr.tolist()]]))
    df.loc[idx, pd.IndexSlice['B', arr.tolist()]] = row[arr]
    print('df after mod:\n{}'.format(df.loc[idx, pd.IndexSlice['B', arr.tolist()]]))
You should append .values to the df_sub selection so the assignment ignores index alignment:
df.loc[1, pd.IndexSlice['B', 5:7]] = df_sub.loc[1, 5:7].values
df.loc[3, pd.IndexSlice['B', 3:5]] = df_sub.loc[3, 3:5].values
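The loop fails because pandas aligns the right-hand side on column labels during assignment: row[arr] carries flat labels like 5, 6, 7, which never match the ('B', 5) MultiIndex labels on the left, so the assignment produces NaN. Stripping the labels with .values sidesteps alignment. A sketch using the example data:

```python
import numpy as np
import pandas as pd

idf = pd.MultiIndex.from_product([['A', 'B'], np.arange(0, 10)])
df = pd.DataFrame(np.concatenate((np.arange(1., 51.).reshape(5, 10),
                                  np.arange(-51., -1.).reshape(5, 10)), axis=1),
                  index=np.arange(0, 5), columns=idf)
df_sub = pd.DataFrame([[np.nan] * 5 + [0.5, 0.6, 0.7] + [np.nan] * 2,
                       [np.nan] * 3 + [0.3, 0.4, 0.5] + [np.nan] * 4],
                      index=[1, 3], columns=np.arange(0, 10))

for idx, row in df_sub.iterrows():
    cols = row.index[row.notna()]
    # .values drops the flat labels, so no alignment is attempted
    df.loc[idx, pd.IndexSlice['B', cols.tolist()]] = row[cols].values
```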
Inline with pandas.DataFrame.align and pandas.DataFrame.fillna
By using the level argument:
pd.DataFrame.fillna(*df_sub.align(df, level=1))
A B
0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9
0 1.0 2.0 3.0 4.0 5.0 6.0 7.0 8.0 9.0 10.0 -51.0 -50.0 -49.0 -48.0 -47.0 -46.0 -45.0 -44.0 -43.0 -42.0
1 11.0 12.0 13.0 14.0 15.0 0.5 0.6 0.7 19.0 20.0 -41.0 -40.0 -39.0 -38.0 -37.0 0.5 0.6 0.7 -33.0 -32.0
2 21.0 22.0 23.0 24.0 25.0 26.0 27.0 28.0 29.0 30.0 -31.0 -30.0 -29.0 -28.0 -27.0 -26.0 -25.0 -24.0 -23.0 -22.0
3 31.0 32.0 33.0 0.3 0.4 0.5 37.0 38.0 39.0 40.0 -21.0 -20.0 -19.0 0.3 0.4 0.5 -15.0 -14.0 -13.0 -12.0
4 41.0 42.0 43.0 44.0 45.0 46.0 47.0 48.0 49.0 50.0 -11.0 -10.0 -9.0 -8.0 -7.0 -6.0 -5.0 -4.0 -3.0 -2.0
In place with update
df.update(df_sub.align(df, level=1)[0])
Clarification
This:
pd.DataFrame.fillna(*df_sub.align(df, level=1))
Is equivalent to
a, b = df_sub.align(df, level=1)
a.fillna(b)
# Or pd.DataFrame.fillna(a, b)
I have a pandas DataFrame; it looks like this:
# Output
# A B C D
# 0 3.0 6.0 7.0 4.0
# 1 42.0 44.0 1.0 3.0
# 2 4.0 2.0 3.0 62.0
# 3 90.0 83.0 53.0 23.0
# 4 22.0 23.0 24.0 NaN
# 5 5.0 2.0 5.0 34.0
# 6 NaN NaN NaN NaN
# 7 NaN NaN NaN NaN
# 8 2.0 12.0 65.0 1.0
# 9 5.0 7.0 32.0 7.0
# 10 2.0 13.0 6.0 12.0
# 11 NaN NaN NaN NaN
# 12 23.0 NaN 23.0 34.0
# 13 61.0 NaN 63.0 3.0
# 14 32.0 43.0 12.0 76.0
# 15 24.0 2.0 34.0 2.0
What I would like to do is fill the NaNs in columns A, B, and C with the nearest preceding non-NaN value from column B. In column D, I would like the NaNs replaced with zeros.
I've looked into ffill and fillna, but neither seems to be able to do the job.
My solution so far:
def fix_abc(row, column, df):
    # If the row/column value is null/nan
    if pd.isnull(row[column]):
        # Get the value of row[column] from the row before
        prior = row.name
        value = df[prior-1:prior]['B'].values[0]
        # If that value is empty, go to the row before that
        while pd.isnull(value) and prior >= 1:
            prior = prior - 1
            value = df[prior-1:prior]['B'].values[0]
    else:
        value = row[column]
    return value

df['A'] = df.apply(lambda x: fix_abc(x, 'A', df), axis=1)
df['B'] = df.apply(lambda x: fix_abc(x, 'B', df), axis=1)
df['C'] = df.apply(lambda x: fix_abc(x, 'C', df), axis=1)

def fix_d(x):
    if pd.isnull(x['D']):
        return 0
    return x['D']

df['D'] = df.apply(lambda x: fix_d(x), axis=1)
It feels like this is quite inefficient and slow, so I'm wondering if there is a quicker, more efficient way to do this.
Example output;
# A B C D
# 0 3.0 6.0 7.0 3.0
# 1 42.0 44.0 1.0 42.0
# 2 4.0 2.0 3.0 4.0
# 3 90.0 83.0 53.0 90.0
# 4 22.0 23.0 24.0 0.0
# 5 5.0 2.0 5.0 5.0
# 6 2.0 2.0 2.0 0.0
# 7 2.0 2.0 2.0 0.0
# 8 2.0 12.0 65.0 2.0
# 9 5.0 7.0 32.0 5.0
# 10 2.0 13.0 6.0 2.0
# 11 13.0 13.0 13.0 0.0
# 12 23.0 13.0 23.0 23.0
# 13 61.0 13.0 63.0 61.0
# 14 32.0 43.0 12.0 32.0
# 15 24.0 2.0 34.0 24.0
I have dumped the code including the data for the dataframe into a python fiddle available (here)
fillna allows for various ways to do the filling. In this case, column D can just be filled with 0, column B can be forward filled, and then columns A and C can be filled from column B, like:
Code:
df['D'] = df.D.fillna(0)
df['B'] = df.B.ffill()
df['A'] = df.A.fillna(df['B'])
df['C'] = df.C.fillna(df['B'])
Test Code:
df = pd.read_fwf(StringIO(u"""
A B C D
3.0 6.0 7.0 4.0
42.0 44.0 1.0 3.0
4.0 2.0 3.0 62.0
90.0 83.0 53.0 23.0
22.0 23.0 24.0 NaN
5.0 2.0 5.0 34.0
NaN NaN NaN NaN
NaN NaN NaN NaN
2.0 12.0 65.0 1.0
5.0 7.0 32.0 7.0
2.0 13.0 6.0 12.0
NaN NaN NaN NaN
23.0 NaN 23.0 34.0
61.0 NaN 63.0 3.0
32.0 43.0 12.0 76.0
24.0 2.0 34.0 2.0"""), header=1)
print(df)
df['D'] = df.D.fillna(0)
df['B'] = df.B.ffill()
df['A'] = df.A.fillna(df['B'])
df['C'] = df.C.fillna(df['B'])
print(df)
Results:
A B C D
0 3.0 6.0 7.0 4.0
1 42.0 44.0 1.0 3.0
2 4.0 2.0 3.0 62.0
3 90.0 83.0 53.0 23.0
4 22.0 23.0 24.0 NaN
5 5.0 2.0 5.0 34.0
6 NaN NaN NaN NaN
7 NaN NaN NaN NaN
8 2.0 12.0 65.0 1.0
9 5.0 7.0 32.0 7.0
10 2.0 13.0 6.0 12.0
11 NaN NaN NaN NaN
12 23.0 NaN 23.0 34.0
13 61.0 NaN 63.0 3.0
14 32.0 43.0 12.0 76.0
15 24.0 2.0 34.0 2.0
A B C D
0 3.0 6.0 7.0 4.0
1 42.0 44.0 1.0 3.0
2 4.0 2.0 3.0 62.0
3 90.0 83.0 53.0 23.0
4 22.0 23.0 24.0 0.0
5 5.0 2.0 5.0 34.0
6 2.0 2.0 2.0 0.0
7 2.0 2.0 2.0 0.0
8 2.0 12.0 65.0 1.0
9 5.0 7.0 32.0 7.0
10 2.0 13.0 6.0 12.0
11 13.0 13.0 13.0 0.0
12 23.0 13.0 23.0 34.0
13 61.0 13.0 63.0 3.0
14 32.0 43.0 12.0 76.0
15 24.0 2.0 34.0 2.0
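Note that fillna(method='pad') is deprecated in recent pandas versions; Series.ffill() is the equivalent modern spelling. A minimal check on a toy column:

```python
import numpy as np
import pandas as pd

# Forward fill: each NaN takes the most recent preceding non-NaN value
b = pd.Series([6.0, np.nan, np.nan, 13.0, np.nan])
print(b.ffill().tolist())  # [6.0, 6.0, 6.0, 13.0, 13.0]
```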
I'm interested in combining two dataframes in pandas that have the same row indices and column names, but different cell values. See the example below:
import pandas as pd
import numpy as np
df1 = pd.DataFrame({'A':[22,2,np.NaN,np.NaN],
'B':[23,4,np.NaN,np.NaN],
'C':[24,6,np.NaN,np.NaN],
'D':[25,8,np.NaN,np.NaN]})
df2 = pd.DataFrame({'A':[np.NaN,np.NaN,56,100],
'B':[np.NaN,np.NaN,58,101],
'C':[np.NaN,np.NaN,59,102],
'D':[np.NaN,np.NaN,60,103]})
In[6]: print(df1)
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 NaN NaN NaN NaN
3 NaN NaN NaN NaN
In[7]: print(df2)
A B C D
0 NaN NaN NaN NaN
1 NaN NaN NaN NaN
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
I would like the resulting frame to look like this:
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
I have tried different ways of pd.concat and pd.merge but some of the data always gets replaced with NaNs. Any pointers in the right direction would be greatly appreciated.
Use combine_first:
print (df1.combine_first(df2))
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
Or fillna:
print (df1.fillna(df2))
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
Or update:
df1.update(df2)
print (df1)
A B C D
0 22.0 23.0 24.0 25.0
1 2.0 4.0 6.0 8.0
2 56.0 58.0 59.0 60.0
3 100.0 101.0 102.0 103.0
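All three calls coincide here because the two frames' non-null cells never overlap. When they do overlap, they differ: combine_first and fillna keep df1's existing values, while update overwrites them with df2's non-null values. A small sketch (toy frames, not the OP's data):

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'A': [1.0, np.nan]})
df2 = pd.DataFrame({'A': [9.0, 2.0]})

kept = df1.combine_first(df2)   # df1's existing values win
df1.update(df2)                 # df2's non-null values win (in place)

print(kept['A'].tolist())  # [1.0, 2.0]
print(df1['A'].tolist())   # [9.0, 2.0]
```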