Conditional multiplication of multiple series with another series - python

I would like to multiply (in place) values in one column of a DataFrame by values in another column, based on a condition in a third column. For example:
data = pd.DataFrame({'a': [1, 33, 56, 79, 2], 'b': [9, 12, 14, 5, 5], 'c': np.arange(5)})
data.loc[data.a > 10, ['a', 'b']] *= data.loc[data.a > 10, 'c']
What I would like this to do is multiply the values of both 'a' and 'b' by the corresponding (same row) value in 'c' based on a condition. However, the above code just results in NaN values in the desired range.
The closest workaround I've found has been to do this:
data.loc[data.a > 10, ['a', 'b']] = (data.loc[data.a > 10, ['a', 'b']].as_matrix().T * data.loc[data.a > 10, 'c']).T
which works, but it seems like there is a better (more Pythonic) way that I'm missing.

You can use the mul(..., axis=0) method. The original attempt fails because DataFrame * Series aligns the Series' index against the DataFrame's columns by default, and ['a', 'b'] shares no labels with c's row index, so every product comes out NaN; passing axis=0 aligns on the rows instead:
In [122]: mask = data.a > 10
In [125]: data.loc[mask, ['a','b']] = data.loc[mask, ['a','b']].mul(data.loc[mask, 'c'], 0)
In [126]: data
Out[126]:
a b c
0 1 9 0
1 33 12 1
2 112 28 2
3 237 15 3
4 2 5 4

Here is one alternative that uses Series.where() to update values conditionally:
data[['a', 'b']] = data[['a', 'b']].apply(lambda m: m.where(data.a <= 10, m*data.c))
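The same conditional update can also be done without apply by dropping to NumPy. A minimal sketch (not part of the original answer), assuming the data frame from the question:
import numpy as np
import pandas as pd

data = pd.DataFrame({'a': [1, 33, 56, 79, 2],
                     'b': [9, 12, 14, 5, 5],
                     'c': np.arange(5)})

cond = (data.a > 10).to_numpy()[:, None]                   # column vector so it broadcasts over 'a' and 'b'
scaled = data[['a', 'b']].to_numpy() * data[['c']].to_numpy()
data[['a', 'b']] = np.where(cond, scaled, data[['a', 'b']].to_numpy())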

You can use update:
data.update(data.query('a > 10')[['a', 'b']].mul(data.query('a > 10').c, 0))
data
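A small variation (a sketch, not from the original answer): compute the filtered frame once so the query expression is not evaluated twice, then let update align on the index and overwrite only those rows:
import numpy as np
import pandas as pd

data = pd.DataFrame({'a': [1, 33, 56, 79, 2],
                     'b': [9, 12, 14, 5, 5],
                     'c': np.arange(5)})

subset = data.query('a > 10')                              # filter once
data.update(subset[['a', 'b']].mul(subset['c'], axis=0))   # update aligns on the index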

Well it seems NumPy could be an alternative here -
arr = data.values
mask = arr[:,0] > 10
arr[mask,:2] *= arr[mask,2,None]
We extracted the values as an array, which is a view into the dataframe; that lets us work on the array, and the updates are automatically reflected back in the dataframe. Here's a sample run to show the progress -
In [507]: data # Input dataframe
Out[507]:
a b c
0 1 9 0
1 33 12 1
2 56 14 2
3 79 5 3
4 2 5 4
Run the proposed code -
In [508]: arr = data.values
In [509]: mask = arr[:,0] > 10
In [510]: arr[mask,:2] *= arr[mask,2,None]
Verify results with dataframe -
In [511]: data
Out[511]:
a b c
0 1 9 0
1 33 12 1
2 112 28 2
3 237 15 3
4 2 5 4
Let's verify in another way that we were indeed working with a view there -
In [512]: np.may_share_memory(data,arr)
Out[512]: True
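One caveat worth adding (not part of the original answer): .values is only a view when the whole frame is a single dtype block, as it is here with all-integer columns. With mixed dtypes the array is a copy and the in-place edit would be silently lost; a safer sketch for that case writes the modified block back explicitly:
import numpy as np
import pandas as pd

data = pd.DataFrame({'a': [1, 33, 56, 79, 2],
                     'b': [9, 12, 14, 5, 5],
                     'c': np.arange(5)})

arr = data[['a', 'b']].to_numpy()                     # treat as a copy regardless of dtypes
mask = data['a'].to_numpy() > 10
arr[mask] *= data.loc[mask, 'c'].to_numpy()[:, None]
data[['a', 'b']] = arr                                # write the modified block back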

# %%
import pandas as pd
import numpy as np
data = pd.DataFrame({'a': [1, 33, 56, 79, 2],
                     'b': [9, 12, 14, 5, 5],
                     'c': np.arange(5)})
(data.loc[data.a > 10, ['a', 'b']].T * data.loc[data.a > 10, 'c']).T \
    .append(data.loc[data.a <= 10, ['a', 'b']]) \
    .T.append(data.c).T.sort()
# %%
Out[17]:
a b c
0 1 9 0
1 33 12 1
2 112 28 2
3 237 15 3
4 2 5 4
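Note that DataFrame.append and DataFrame.sort used above have since been removed from pandas (append in 2.0, sort long before that). A rough equivalent on current pandas, offered as a sketch rather than the original author's code, concatenates the scaled and untouched row groups and restores the order with sort_index:
import numpy as np
import pandas as pd

data = pd.DataFrame({'a': [1, 33, 56, 79, 2],
                     'b': [9, 12, 14, 5, 5],
                     'c': np.arange(5)})

mask = data.a > 10
scaled = data.loc[mask, ['a', 'b']].mul(data.loc[mask, 'c'], axis=0)
untouched = data.loc[~mask, ['a', 'b']]

result = (pd.concat([scaled, untouched])   # stack the two row groups back together
            .join(data['c'])               # re-attach the unchanged 'c' column
            .sort_index())                 # restore the original row order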


Pandas, how to pick value from different columns based on value from different column? [duplicate]

The operation pandas.DataFrame.lookup is "Deprecated since version 1.2.0", and has since invalidated a lot of previous answers.
This post attempts to function as a canonical resource for looking up corresponding row col pairs in pandas versions 1.2.0 and newer.
Standard LookUp Values With Default Range Index
Given the following DataFrame:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 B 4 8
I would like to be able to lookup the corresponding value in the column specified in Col:
I would like my result to look like:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
Standard LookUp Values With a Non-Default Index
Non-Contiguous Range Index
Given the following DataFrame:
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
Col A B
0 B 1 5
2 A 2 6
8 A 3 7
9 B 4 8
I would like to preserve the index but still find the correct corresponding Value:
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
MultiIndex
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
Col A B
C E B 1 5
F A 2 6
D E A 3 7
F B 4 8
I would like to preserve the index but still find the correct corresponding Value:
Col A B Val
C E B 1 5 5
F A 2 6 2
D E A 3 7 3
F B 4 8 8
LookUp with Default For Unmatched/Not-Found Values
Given the following DataFrame
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 C 4 8 # Column C does not correspond with any column
I would like to look up the corresponding values if one exists otherwise I'd like to have it default to 0
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 C 4 8 0 # Default value 0 since C does not correspond
LookUp with Missing Values in the lookup Col
Given the following DataFrame:
Col A B
0 B 1 5
1 A 2 6
2 A 3 7
3 NaN 4 8 # <- Missing Lookup Key
I would like any NaN values in Col to result in a NaN value in Val
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 NaN 4 8 NaN # NaN to indicate missing
Standard LookUp Values With Any Index
The documentation on Looking up values by index/column labels recommends using NumPy indexing via factorize and reindex as the replacement for the deprecated DataFrame.lookup.
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
factorize is used to encode the values in the column as an "enumerated type".
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 0], dtype=int64)
# col = Index(['B', 'A'], dtype='object')
Notice that B corresponds to 0 and A corresponds to 1. reindex is used to ensure that columns appear in the same order as the enumeration:
df.reindex(columns=col)
B A # B appears first (location 0), A appears second (location 1)
0 5 1
1 6 2
2 7 3
3 8 4
We need to create an appropriate range indexer compatible with NumPy indexing.
The standard approach is to use np.arange based on the length of the DataFrame:
np.arange(len(df))
[0 1 2 3]
Now NumPy indexing will work to select values from the DataFrame:
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
[5 2 3 8]
*Note: This approach will always work regardless of the type of index.
MultiIndex
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
Col A B Val
C E B 1 5 5
F A 2 6 2
D E A 3 7 3
F B 4 8 8
Why use np.arange and not df.index directly?
Standard Contiguous Range Index
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
In this case only, there is no error as the result from np.arange is the same as the df.index.
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
Non-Contiguous Range Index Error
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
Raises IndexError:
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
IndexError: index 8 is out of bounds for axis 0 with size 4
MultiIndex Error
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=pd.MultiIndex.from_product([['C', 'D'], ['E', 'F']]))
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
Raises IndexError:
df['Val'] = df.reindex(columns=col).to_numpy()[df.index, idx]
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
LookUp with Default For Unmatched/Not-Found Values
There are a few approaches.
First let's look at what happens by default if there is a non-corresponding value:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'C'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 A 3 7
# 3 C 4 8
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 C 4 8 NaN # NaN Represents the Missing Value in C
If we look at why the NaN values are introduced, we will find that when factorize goes through the column it will enumerate all groups present regardless of whether they correspond to a column or not.
For this reason, when we reindex the DataFrame we will end up with the following result:
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', 'C'], dtype='object')
df.reindex(columns=col)
B A C
0 5 1 NaN
1 6 2 NaN
2 7 3 NaN
3 8 4 NaN # Reindex adds the missing column with the Default `NaN`
If we want to specify a default value, we can specify the fill_value argument of reindex which allows us to modify the behaviour as it relates to missing column values:
idx, col = pd.factorize(df['Col'])
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', 'C'], dtype='object')
df.reindex(columns=col, fill_value=0)
B A C
0 5 1 0
1 6 2 0
2 7 3 0
3 8 4 0 # Notice reindex adds missing column with specified value `0`
This means that we can do:
idx, col = pd.factorize(df['Col'])
df['Val'] = df.reindex(
    columns=col,
    fill_value=0  # Default value for Missing column values
).to_numpy()[np.arange(len(df)), idx]
df:
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 C 4 8 0
*Notice the dtype of the column is int, since NaN was never introduced, and, therefore, the column type was not changed.
LookUp with Missing Values in the lookup Col
factorize has a default na_sentinel=-1, meaning that when NaN values appear in the column being factorized the resulting idx value is -1
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 A 3 7
# 3 NaN 4 8 # <- Missing Lookup Key
idx, col = pd.factorize(df['Col'])
# idx = array([ 0, 1, 1, -1], dtype=int64)
# col = Index(['B', 'A'], dtype='object')
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
# Col A B Val
# 0 B 1 5 5
# 1 A 2 6 2
# 2 A 3 7 3
# 3 NaN 4 8 4 <- Value From A
This -1 means that, by default, we'll be pulling from the last column when we reindex. Notice the col still only contains the values B and A. Meaning, that we will end up with the value from A in Val for the last row.
The easiest way to handle this is to fillna Col with some value that cannot be found in the column headers.
Here I use the empty string '':
idx, col = pd.factorize(df['Col'].fillna(''))
# idx = array([0, 1, 1, 2], dtype=int64)
# col = Index(['B', 'A', ''], dtype='object')
Now when I reindex, the '' column will contain NaN values meaning that the lookup produces the desired result:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
idx, col = pd.factorize(df['Col'].fillna(''))
df['Val'] = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df:
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 A 3 7 3.0
3 NaN 4 8 NaN # Missing as expected
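As an alternative to the fillna('') trick (a sketch, not part of the original answer), the -1 sentinel returned by factorize can be masked directly with np.where after the lookup:
import numpy as np
import pandas as pd

df = pd.DataFrame({'Col': ['B', 'A', 'A', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})

idx, col = pd.factorize(df['Col'])                 # NaN keys get the sentinel -1
vals = df.reindex(columns=col).to_numpy()[np.arange(len(df)), idx]
df['Val'] = np.where(idx == -1, np.nan, vals)      # replace sentinel hits with NaN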
Other Approaches to LookUp
There are 2 other approaches to performing this operation:
apply (Intuitive, but quite slow)
apply can be used on axis=1 in order to use the Column values as the key:
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
df['Val'] = df.apply(lambda row: row[row['Col']], axis=1)
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This operation will work regardless of index type:
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])
# Col A B
# 0 B 1 5
# 2 A 2 6
# 8 A 3 7
# 9 B 4 8
df['Val'] = df.apply(lambda row: row[row['Col']], axis=1)
df:
Col A B Val
0 B 1 5 5
2 A 2 6 2
8 A 3 7 3
9 B 4 8 8
When dealing with missing/non-corresponding values, Series.get can be used to remedy this issue:
import numpy as np
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'C', np.nan],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
# Col A B
# 0 B 1 5
# 1 A 2 6
# 2 C 3 7 <- Non Corresponding
# 3 NaN 4 8 <- Missing
df['Val'] = df.apply(lambda row: row.get(row['Col']), axis=1)
Col A B Val
0 B 1 5 5.0
1 A 2 6 2.0
2 C 3 7 NaN # Missing value
3 NaN 4 8 NaN # Missing value
With Default Value
df['Val'] = df.apply(lambda row: row.get(row['Col'], default=-1), axis=1)
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 C 3 7 -1 # Default -1
3 NaN 4 8 -1 # Default -1
apply is extremely flexible and modifications are straightforward; however, the general iterative approach, as well as all the individual Series lookups, can become extremely costly on large DataFrames.
get_indexer (limited)
Index.get_indexer can be used to convert the column values into positional indices into the DataFrame's columns. This means there is no need to reindex the DataFrame, as the indexer corresponds to the DataFrame as a whole.
import pandas as pd
df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df
Col A B Val
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This approach is reasonably fast; however, missing values are represented by -1 in the indexer, meaning that if a value is missing it will grab the value from the -1 column (the last column in the DataFrame).
import pandas as pd
df = pd.DataFrame({'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8],
                   'Col': ['B', 'A', 'A', 'C']})
# A B Col <- Col is now the Last Col
# 0 1 5 B
# 1 2 6 A
# 2 3 7 A
# 3 4 8 C <- Notice Col `C` does not correspond to a Valid Column Header
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df:
A B Col Val
0 1 5 B 5
1 2 6 A 2
2 3 7 A 3
3 4 8 C C # <- Value from the last column in the DataFrame (index -1)
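One way to guard against this (a sketch, not part of the original answer) is to check the indexer for -1 before using the looked-up values:
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8],
                   'Col': ['B', 'A', 'A', 'C']})

indexer = df.columns.get_indexer(df['Col'])        # -1 where 'C' matches no column
vals = df.to_numpy()[np.arange(len(df)), indexer]
df['Val'] = np.where(indexer == -1, np.nan, vals)  # mask out the sentinel hits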
It is also notable that not reindexing the DataFrame means converting the entire DataFrame to NumPy. This can be very costly if there are many unrelated columns that all need to be converted:
import numpy as np
import pandas as pd
df = pd.DataFrame({1: 10,
                   2: 20,
                   3: 't',
                   4: 40,
                   5: np.nan,
                   'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]})
df['Val'] = df.to_numpy()[df.index, df.columns.get_indexer(df['Col'])]
df.to_numpy()
[[10 20 't' 40 nan 'B' 1 5 5]
[10 20 't' 40 nan 'A' 2 6 2]
[10 20 't' 40 nan 'A' 3 7 3]
[10 20 't' 40 nan 'B' 4 8 8]]
Compared to the reindexing approach which only contains columns relevant to the column values:
df.reindex(columns=['B', 'A']).to_numpy()
[[5 1]
[6 2]
[7 3]
[8 4]]
Another option is to build tuples of the lookup columns, pivot the dataframe, and select the relevant columns with the tuples:
cols = [(ent, ent) for ent in df.Col.unique()]
df.assign(Val=df.pivot(index=None, columns='Col')
                .reindex(columns=cols)
                .ffill(axis=1)
                .iloc[:, -1])
Col A B Val
0 B 1 5 5.0
2 A 2 6 2.0
8 A 3 7 3.0
9 B 4 8 8.0
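For reference, a self-contained version of this approach (a sketch; it assumes the non-default-index frame used earlier in this post, which is why the output above shows index 0, 2, 8, 9):
import pandas as pd

df = pd.DataFrame({'Col': ['B', 'A', 'A', 'B'],
                   'A': [1, 2, 3, 4],
                   'B': [5, 6, 7, 8]},
                  index=[0, 2, 8, 9])

cols = [(ent, ent) for ent in df.Col.unique()]            # [('B', 'B'), ('A', 'A')]
out = df.assign(Val=df.pivot(index=None, columns='Col')   # wide frame with (value, Col) column pairs
                      .reindex(columns=cols)
                      .ffill(axis=1)                      # pull each row's non-NaN value to the right
                      .iloc[:, -1])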
Another possible method is to use melt:
df['value'] = (df.melt('Col', ignore_index=False)
                 .loc[lambda x: x['Col'] == x['variable'], 'value'])
print(df)
# Output:
Col A B value
0 B 1 5 5
1 A 2 6 2
2 A 3 7 3
3 B 4 8 8
This method also works with Missing/Non-Corresponding Values:
df['value'] = (df.melt('Col', ignore_index=False)
                 .loc[lambda x: x['Col'] == x['variable'], 'value'])
print(df)
# Output
Col A B value
0 B 1 5 5.0
1 A 2 6 2.0
2 C 3 7 NaN
3 NaN 4 8 NaN
You can replace .loc[...] with query(...); it's a little slower, although more expressive:
df['value'] = df.melt('Col', ignore_index=False).query('Col == variable')['value']

Python DataFrame: move row values left according to row index

I have a table like this:
import pandas as pd
data = [[20, 15, 10, 5], [20, 15, 10, 5], [20, 15, 10, 5], [20, 15, 10, 5]]
df = pd.DataFrame(data, columns = ['one', 'two', 'three', 'four'])
df
one  two  three  four
20   15   10     5
20   15   10     5
20   15   10     5
20   15   10     5
I want to shift every row's values to the left according to its row index.
Row values at index 0 stay the same, row values at index 1 move left by one position, row values at index 2 move left by two positions, etc...
The desired table should look like this:
one  two  three  four
20   15   10     5
15   10   5      0
10   5    0      0
5    0    0      0
Thanks for helping me!
Another way is to use a simple loop to shift the values in every row, and then use
fillna to replace NA values with 0:
for i in range(len(df)):
    df.iloc[i, :] = df.iloc[i, :].shift(-i)
df.fillna(0, inplace=True)
Output:
>>> df
one two three four
0 20 15.0 10.0 5.0
1 15 10.0 5.0 0.0
2 10 5.0 0.0 0.0
3 5 0.0 0.0 0.0
You could use a function that shifts each row left according to its index value and fills with 0:
import pandas as pd
def rotate_row(row):
    return pd.Series(row.to_list()[row.name:] + [0] * row.name, index=row.index)
data = [[20, 15, 10, 5], [20, 15, 10, 5], [20, 15, 10, 5], [20, 15, 10, 5]]
df = pd.DataFrame(data, columns=['one', 'two', 'three', 'four'])
df = df.apply(rotate_row, axis=1)
print(df)
one two three four
0 20 15 10 5
1 15 10 5 0
2 10 5 0 0
3 5 0 0 0
An upper right triangle to upper left triangle approach:
Create a mask to grab the upper triangle of the DataFrame using np.triu + np.ones + DataFrame.shape
mask = np.triu(np.ones(df.shape, dtype=bool))
[[ True True True True]
[False True True True]
[False False True True]
[False False False True]]
Grab corresponding values from the values of the DataFrame:
a = df.values[mask]
[20 15 10 5 15 10 5 10 5 5]
Create a np.zeros skeleton of the same dtype as a and fliplr the mask and assign back:
tri = np.zeros(df.shape, dtype=a.dtype)
tri[np.fliplr(mask)] = a
[[20 15 10 5]
[15 10 5 0]
[10 5 0 0]
[ 5 0 0 0]]
Turn back into a DataFrame:
new_df = pd.DataFrame(tri, columns=df.columns)
new_df:
one two three four
0 20 15 10 5
1 15 10 5 0
2 10 5 0 0
3 5 0 0 0
Complete Working Example:
import numpy as np
import pandas as pd
data = [[20, 15, 10, 5], [20, 15, 10, 5], [20, 15, 10, 5],
[20, 15, 10, 5]]
df = pd.DataFrame(data, columns=['one', 'two', 'three', 'four'])
mask = np.triu(np.ones(df.shape, dtype=bool))
a = df.values[mask]
tri = np.zeros(df.shape, dtype=a.dtype)
tri[np.fliplr(mask)] = a
new_df = pd.DataFrame(tri, columns=df.columns)
print(new_df)

Pad selection range in Pandas Dataframe?

If I slice a dataframe with something like
>>> df = pd.DataFrame(data=[[x] for x in [1,2,3,5,1,3,2,1,1,4,5,6]], columns=['A'])
>>> df.loc[df['A'] == 1]
# or
>>> df[df['A'] == 1]
A
0 1
4 1
7 1
8 1
how could I pad my selections by a buffer of 1 and get each of the indices 0, 1, 3, 4, 5, 6, 7, 8, 9? I want to select all rows for which the value in column 'A' is 1, but also a row before or after any such row.
edit I'm hoping to figure out a solution that works for arbitrary pad sizes, rather than just for a pad size of 1.
edit 2 here's another example illustrating what I'm going for
df = pd.DataFrame(data=[[x] for x in [1,2,3,5,3,2,1,1,4,5,6,0,0,3,1,2,4,5]], columns=['A'])
and we're looking for pad == 2. In this case I'd be trying to fetch rows 0, 1, 2, 4, 5, 6, 7, 8, 9, 12, 13, 14, 15, 16.
You can use shift with bitwise or (|):
c = df['A'] == 1
df[c|c.shift()|c.shift(-1)]
A
0 1
1 2
3 5
4 1
5 3
6 2
7 1
8 1
9 4
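To generalize the same idea to larger pads without chaining many shift calls, one option (a sketch, not part of the original answer) is a centered rolling maximum over the boolean mask:
import pandas as pd

df = pd.DataFrame(data=[[x] for x in [1, 2, 3, 5, 3, 2, 1, 1, 4, 5, 6, 0, 0, 3, 1, 2, 4, 5]],
                  columns=['A'])

pad = 2
c = (df['A'] == 1).astype(int)
# A centered window of width 2*pad + 1 is "hot" whenever any row within
# `pad` positions matches, which is exactly the padded selection.
mask = c.rolling(2 * pad + 1, center=True, min_periods=1).max().astype(bool)
df[mask]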
For arbitrary pad sizes, you may try where, interpolate, and notna to create the mask
n = 2
c = df.where(df['A'] == 1)
m = c.interpolate(limit=n, limit_direction='both').notna()
df[m]
Out[61]:
A
0 1
1 2
2 3
4 3
5 2
6 1
7 1
8 4
9 5
12 0
13 3
14 1
15 2
16 4
Here is an approach that allows for multiple pad levels. Use ffill and bfill on the boolean mask (df['A'] == 1), after converting the False values to np.nan:
import numpy as np
pad = 2
df[(df['A'] == 1).replace(False, np.nan).ffill(limit=pad).bfill(limit=pad).replace(np.nan,False).astype(bool)]
Here it is in action:
def padsearch(df, column, value, pad):
    return df[(df[column] == value).replace(False, np.nan).ffill(limit=pad).bfill(limit=pad).replace(np.nan, False).astype(bool)]
# your first example
df = pd.DataFrame(data=[[x] for x in [1,2,3,5,1,3,2,1,1,4,5,6]], columns=['A'])
print(padsearch(df=df, column='A', value=1, pad=1))
# your other example
df = pd.DataFrame(data=[[x] for x in [1,2,3,5,3,2,1,1,4,5,6,0,0,3,1,2,4,5]], columns=['A'])
print(padsearch(df=df, column='A', value=1, pad=2))
Result:
A
0 1
1 2
3 5
4 1
5 3
6 2
7 1
8 1
9 4
A
0 1
1 2
2 3
4 3
5 2
6 1
7 1
8 4
9 5
12 0
13 3
14 1
15 2
16 4
Granted, the command is far less nice, and it's a little clunky to be converting the False values to and from null. But it still uses all pandas builtins, so it is still fairly quick.
I found another solution but not nearly as slick as some of the ones already posted.
# setup
df = ...
pad = 2
# determine the set of indices
indices = set(
    [
        x for x in filter(
            lambda x: x >= 0,
            [
                x + y
                for x in df[df['A'] == 1].index
                for y in range(-pad, pad + 1)
            ]
        )
    ]
)
# fetch rows
df.iloc[[*indices]]

pandas lambda function returns both df and series, why?

Given a df and a lambda function:
df = pd.DataFrame({'label': ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c'],
                   't': [1, 2, 3, 4, 5, 1, 2, 3, 4, 1, 2],
                   'x': [48, 6, 30, 30, 53, 48, 25, 51, 9, 55, 2]})
top3 = lambda x : x.groupby('t')['x'].idxmax().head(3)
I tried a few combinations of label and got varying results when the function is called:
print(df.groupby('label').apply(top3))
label t
a 1 0
2 1
3 2
b 1 5
2 6
3 7
c 1 9
2 10
Name: x, dtype: int64
df2 = df[df.label=='a']
print(df2.groupby('label').apply(top3))
t 1 2 3
label
a 0 1 2
df3 = df[df.label.isin(['a', 'b'])]
print(df3.groupby('label').apply(top3))
t 1 2 3
label
a 0 1 2
b 5 6 7
The first result is a Series while the next 2 are DataFrames. Why is this so?
.groupby.apply() has a lot of magic behind it to try to coerce things into what it thinks the best shape will be. When c is excluded from the passed dataframe, it can coerce things into a clean rectangular dataframe as a result, but with c included, it will fall back to a MultiIndex:
In [71]: df[df.label.isin(['a', 'c'])].groupby('label').apply(top3)
Out[71]:
label t
a 1 0
2 1
3 2
c 1 9
2 10
Name: x, dtype: int64
If you want to follow the rabbit hole in pandas' code, you can start here: https://github.com/pandas-dev/pandas/blob/30362ed828bebdd58d4f1f74d70236d32547d52a/pandas/core/groupby/ops.py#L189
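If you want the long Series shape regardless of which labels survive the filter, one way to sidestep apply's shape inference (a sketch, not from the original answer) is to concatenate the per-group results yourself:
import pandas as pd

df = pd.DataFrame({'label': ['a', 'a', 'a', 'a', 'a', 'b', 'b', 'b', 'b', 'c', 'c'],
                   't': [1, 2, 3, 4, 5, 1, 2, 3, 4, 1, 2],
                   'x': [48, 6, 30, 30, 53, 48, 25, 51, 9, 55, 2]})

top3 = lambda g: g.groupby('t')['x'].idxmax().head(3)

# Building the result by hand always yields a MultiIndexed Series,
# no matter how many labels are present after filtering.
df2 = df[df.label == 'a']
result = pd.concat({label: top3(group) for label, group in df2.groupby('label')})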

Flatten nested pandas dataframe columns

After some aggregation, my dataframe looks something like this
    A      B
        B_min  B_max
0  11      3      6
1  22      1      2
2  33      4      4
How do I make the columns be A, B_min and B_max, without any nesting? Simple and standard. I've tried reindex_axis() and unstack(), but nothing worked.
Here is one way, but I wish there was an in-built way to do this.
import pandas as pd
df = pd.DataFrame({'A': [11, 11, 22, 22, 33, 33],
                   'B': [3, 6, 1, 2, 4, 4]})
g = df.groupby('A', as_index=False).agg({'B': ['min', 'max']})
g.columns = ['_'.join(col).strip() if col[1] else col[0] for col in g.columns.values]
# A B_min B_max
# 0 11 3 6
# 1 22 1 2
# 2 33 4 4
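For completeness (not part of the original answer): on pandas 0.25 and newer, named aggregation avoids creating the nested columns in the first place, so there is nothing to flatten afterwards:
import pandas as pd

df = pd.DataFrame({'A': [11, 11, 22, 22, 33, 33],
                   'B': [3, 6, 1, 2, 4, 4]})

# Named aggregation produces flat column labels directly.
g = df.groupby('A', as_index=False).agg(B_min=('B', 'min'), B_max=('B', 'max'))
#     A  B_min  B_max
# 0  11      3      6
# 1  22      1      2
# 2  33      4      4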
