Label contiguous groups of True elements within a pandas Series - python

I have a pandas Series of Boolean values, and I would like to label the contiguous groups of True values. Is it possible to do this in a vectorised manner? Any help would be hugely appreciated!
Data:
A
0 False
1 True
2 True
3 True
4 False
5 False
6 True
7 False
8 False
9 True
10 True
Desired:
A Label
0 False 0
1 True 1
2 True 1
3 True 1
4 False 0
5 False 0
6 True 2
7 False 0
8 False 0
9 True 3
10 True 3

Here's an unlikely but simple and working solution:
import scipy.ndimage.measurements as mnts

labeled, clusters = mnts.label(df.A.values)
# labeled is what you want; clusters is the number of clusters.
df['Label'] = labeled  # attach the labels to df as a column
Tested as:
import numpy as np

a = np.array([False, False, True, True, True, False, True, False, False,
              True, False, True, True, True, True, True, True, True,
              False, True], dtype=bool)
labeled, clusters = mnts.label(a)
>>> labeled
array([0, 0, 1, 1, 1, 0, 2, 0, 0, 3, 0, 4, 4, 4, 4, 4, 4, 4, 0, 5], dtype=int32)
>>> clusters
5
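Note that the scipy.ndimage.measurements namespace has been deprecated in recent SciPy releases; assuming a reasonably recent SciPy, a minimal sketch of the same approach calls label directly from scipy.ndimage:
import numpy as np
import pandas as pd
from scipy import ndimage

df = pd.DataFrame({'A': [False, True, True, True, False, False,
                         True, False, False, True, True]})
labeled, clusters = ndimage.label(df['A'].to_numpy())
df['Label'] = labeled  # 0 for the False gaps, 1..clusters for each True run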

With cumsum
import numpy as np
import pandas as pd

a = df.A.values
z = np.zeros(a.shape, int)
# (~a).cumsum() is constant within each True run; factorizing it over the
# True positions gives 0-based run ids, and +1 makes them 1-based.
z[a] = pd.factorize((~a).cumsum()[a])[0] + 1
df.assign(Label=z)
A Label
0 False 0
1 True 1
2 True 1
3 True 1
4 False 0
5 False 0
6 True 2
7 False 0
8 False 0
9 True 3
10 True 3
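For comparison, here is a pure-pandas sketch of the same run-labelling idea (no NumPy scratch array), assuming df holds the boolean column A:
is_true = df['A']
# a run starts wherever A is True and the previous row is not
run_id = (is_true & ~is_true.shift(fill_value=False)).cumsum()
df['Label'] = run_id.where(is_true, 0)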

You can use cumsum and groupby + ngroup to mark groups.
v = (~df.A).cumsum().where(df.A).bfill()
df['Label'] = (
    v.groupby(v).ngroup().add(1).where(df.A).fillna(0, downcast='infer'))
df
A Label
0 False 0
1 True 1
2 True 1
3 True 1
4 False 0
5 False 0
6 True 2
7 False 0
8 False 0
9 True 3
10 True 3
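One caveat: the downcast argument of fillna is deprecated in recent pandas releases. Assuming pandas 2.x, an equivalent sketch casts explicitly instead:
v = (~df.A).cumsum().where(df.A).bfill()
df['Label'] = v.groupby(v).ngroup().add(1).where(df.A).fillna(0).astype(int)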

Related

Vectorizing the aggregation operation on different columns of a Pandas dataframe

I have a Pandas dataframe, mostly containing boolean columns. A small example is:
import pandas as pd
df = pd.DataFrame({"A": [1, 2, 3, 1, 2, 3],
"B": ['a', 'b', 'c', 'a', 'b', 'c'],
"f1": [True, True, True, True, True, False],
"f2": [True, True, True, True, False, True],
"f3": [True, True, True, False, True, True],
"f4": [True, True, False, True, True, True],
"f5": [True, False, True, True, True, True],
"target1": [True, False, True, True, False, True],
"target2": [False, True, True, False, True, False]})
df
Output:
A B f1 f2 f3 f4 f5 target1 target2
0 1 a True True True True True True False
1 2 b True True True True False False True
2 3 c True True True False True True True
3 1 a True True False True True True False
4 2 b True False True True True False True
5 3 c False True True True True True False
For each True and False class of each f column, and for all groups in the ("A", "B") columns, I want to compute a sum over the target1 and target2 columns. Using a loop over the f columns, we have:
for col in ["f1", "f2", "f3", "f4", "f5"]:
print(col, "\n",
df[df[col]].groupby(["A", "B"]).agg({"target1": "sum", "target2": "sum"}), "\n",
df[~df[col]].groupby(["A", "B"]).agg({"target1": "sum", "target2": "sum"}))
Now, I need to do it without the for loop; I mean a vecotization over f columns to reduce the computation time (computation time should be almost equal to time needed for doing it for one f column).
Use DataFrame.melt, so it is possible to aggregate by the melted variable column (the f column names) and the value column (True/False):
df = df.melt(['A','B','target1','target2'])
df1 = df.groupby(["A", "B","variable","value"]).agg({"target1": "sum", "target2": "sum"})
print (df1)
target1 target2
A B variable value
1 a f1 True 2 0
f2 True 2 0
f3 False 1 0
True 1 0
f4 True 2 0
f5 True 2 0
2 b f1 True 0 2
f2 False 0 1
True 0 1
f3 True 0 2
f4 True 0 2
f5 False 0 1
True 0 1
3 c f1 False 1 0
True 1 1
f2 True 2 1
f3 True 2 1
f4 False 1 1
True 1 0
f5 True 2 1
Then you can select with:
print (df1.query("variable=='f1' and value==True").droplevel([-1,-2]))
target1 target2
A B
1 a 2 0
2 b 0 2
3 c 1 1
Or:
idx = pd.IndexSlice
print (df1.loc[idx[:, :, 'f1', True],:].droplevel([-1,-2]))
target1 target2
A B
1 a 2 0
2 b 0 2
3 c 1 1
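An equivalent selection, assuming the same df1 as above, uses DataFrame.xs, which drops the matched index levels automatically:
print(df1.xs(('f1', True), level=['variable', 'value']))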

Add column with a specific sequence of numbers depending on value

I have this dataframe:
df = pd.DataFrame({
    'ID': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    'Condition': [False, False, True, False, False, False, False, False, False, False, True, False]})
ID Condition
0 1 False
1 1 False
2 1 True
3 1 False
4 1 False
5 1 False
6 1 False
7 1 False
8 1 False
9 1 False
10 1 True
11 1 False
I want to add a new column Sequence with a sequence of numbers. The condition is that when the first True appears in the Condition column, the following rows must contain the sequence 1, 2, 3, 1, 2, 3... until another True appears, at which point the sequence restarts. Furthermore, ideally, until the first True appears, the values in the new column should be 0. The final result would be:
ID Condition Sequence
0 1 False 0
1 1 False 0
2 1 True 1
3 1 False 2
4 1 False 3
5 1 False 1
6 1 False 2
7 1 False 3
8 1 False 1
9 1 False 2
10 1 True 1
11 1 False 2
I have tried to do it with cumsum and cumcount but I can't find the exact code.
Any suggestion?
Let us do cumsum to identify blocks of rows, then group the dataframe by those blocks and use cumcount to create a sequential counter; with some simple maths we can then get the output:
b = df['Condition'].cumsum()
df['Seq'] = df.groupby(b).cumcount().mod(3).add(1).mask(b < 1, 0)
Explained
Identify blocks/groups of rows using cumsum
b = df['Condition'].cumsum()
print(b)
0 0
1 0
2 1 # -- group 1 start --
3 1
4 1
5 1
6 1
7 1
8 1
9 1 # -- group 1 ended --
10 2
11 2
Name: Condition, dtype: int32
Group the dataframe by the blocks and use cumcount to create a sequential counter per block
c = df.groupby(b).cumcount()
print(c)
0 0
1 1
2 0
3 1
4 2
5 3
6 4
7 5
8 6
9 7
10 0
11 1
dtype: int64
Take the sequential counter modulo (%) 3 and add 1 to create a sequence that repeats every three rows:
c = c.mod(3).add(1)
print(c)
0 1
1 2
2 1
3 2
4 3
5 1
6 2
7 3
8 1
9 2
10 1
11 2
dtype: int64
Mask the values in the sequence with 0 where the block id (b) is < 1, i.e. before the first True:
c = c.mask(b < 1, 0)
print(c)
0 0
1 0
2 1
3 2
4 3
5 1
6 2
7 3
8 1
9 2
10 1
11 2
Result
ID Condition Seq
0 1 False 0
1 1 False 0
2 1 True 1
3 1 False 2
4 1 False 3
5 1 False 1
6 1 False 2
7 1 False 3
8 1 False 1
9 1 False 2
10 1 True 1
11 1 False 2
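A compact variant of the same idea, assuming the same df, replaces mask with where:
b = df['Condition'].cumsum()
df['Seq'] = df.groupby(b).cumcount().mod(3).add(1).where(b > 0, 0)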
This was the simplest way I could think of doing it:
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    'Condition': [False, False, True, False, False, False, False, False, False, False, True, False]})

conditions = df.Condition.tolist()
sequence = []
buf = 1
seenTrue = False
for condition in conditions:
    # Once a True has been seen in the list, this bool stays True
    if condition or seenTrue:
        seenTrue = True
        # Check the buffer and reset it back to 1
        if buf % 4 == 0 or condition:
            buf = 1
        sequence.append(buf)
        buf += 1
    # While no True has been seen yet, append 0s
    if not seenTrue:
        sequence.append(0)
df["Sequence"] = sequence
Effectively we loop through and then add the new column in. The buffer is reset whenever it reaches 4 or when a new True is seen, giving you the repeating 1, 2, 3 effect.
The solution I've come up with simply loops through the Condition column, adding 0s to the list until the first True has been seen. When you find a True, you set first_true to True and seq_count to 1. After the first True, you keep increasing seq_count until it is larger than 3 or you see a new True; in both cases you reset seq_count to 1. This gives you the column you were looking for.
import pandas as pd

df = pd.DataFrame({
    'ID': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    'Condition': [False, False, True, False, False, False, False, False, False, False, True, False]})

l = []
seq_count = 0
first_true = False
for index, row in df.iterrows():
    con = row["Condition"]
    if con:
        seq_count = 1
        first_true = True
    elif first_true:
        seq_count += 1
        if seq_count > 3:
            seq_count = 1
    l.append(seq_count)
df["Sequence"] = l
Output:
ID Condition Sequence
0 1 False 0
1 1 False 0
2 1 True 1
3 1 False 2
4 1 False 3
5 1 False 1
6 1 False 2
7 1 False 3
8 1 False 1
9 1 False 2
10 1 True 1
11 1 False 2

Calculate cumulative sum based on threshold and condition in another column numpy

I have a data frame and I'd like to calculate a cumulative sum based on 2 conditions:
the 1st is a boolean that is already in the table,
and the 2nd is a fixed threshold that the cumulative sum is checked against.
I've succeeded with the 1st or the 2nd alone, but I find it hard to combine both.
For the 1st I used groupby:
df['group'] = np.cumsum((df['IsSuccess'] != df['IsSuccess'].shift(1)))
df['SumSale'] = df[['Sale', 'group']].groupby('group').cumsum()
For the 2nd I used frompyfunc:
sumlm = np.frompyfunc(lambda a,b: b if (a+b>5) else a+b, 2, 1)
df['SumSale'] = sumlm.accumulate(df['Sale'], dtype=object)
My df is below; SumSaleExpected is the result I'm looking for:
df2 = pd.DataFrame({'Sale': [10, 2, 2, 1, 3, 2, 1, 3, 5, 5],
                    'IsSuccess': [False, True, False, False, True, False, True, False, False, False],
                    'SumSaleExpected': [10, 12, 2, 3, 6, 2, 3, 6, 11, 16]})
So to summarize, I'd like the sum to restart cumulating once it is over 5 and the row's IsSuccess is True. I'd like to avoid a for loop if possible as well.
Thank you for help!
I hope I've understood your question right. This example will subtract the necessary value ("reset") when the cumulative sum of Sale is greater than 5 and IsSuccess == True:
df["SumSale"] = df["Sale"].cumsum()
# "reset" when SumSale>5 and IsSuccess==True
m = df["SumSale"].gt(5) & df["IsSuccess"].eq(True)
df.loc[m, "to_remove"] = df["SumSale"]
df["to_remove"] = df["to_remove"].ffill().shift().fillna(0)
df["SumSale"] -= df["to_remove"]
df = df.drop(columns="to_remove")
print(df)
Prints:
Sale IsSuccess SumSale
0 1 False 1.0
1 2 True 3.0
2 3 False 6.0
3 2 False 8.0
4 4 True 12.0
5 3 False 3.0
6 5 True 8.0
7 5 False 5.0
EDIT:
def fn():
    sale, success = yield
    cum = sale
    while True:
        sale, success = yield cum
        if success and cum > 5:
            cum = sale
        else:
            cum += sale
s = fn()
next(s)
df["ss"] = df["IsSuccess"].shift()
df["SumSale"] = df.apply(lambda x: s.send((x["Sale"], x["ss"])), axis=1)
df = df.drop(columns="ss")
print(df)
Prints:
Sale IsSuccess SumSaleExpected SumSale
0 10 False 10 10
1 2 True 12 12
2 2 False 2 2
3 1 False 3 3
4 3 True 6 6
5 2 False 2 2
6 1 True 3 3
7 3 False 6 6
8 5 False 11 11
9 5 False 16 16
You can modify your group approach to account for both conditions by taking the cumsum() of the two conditions:
cond1 = df.Sale.cumsum().gt(5).shift().bfill()
cond2 = df.IsSuccess.shift().bfill()
df['group'] = (cond1 & cond2).cumsum()
Now that group accounts for both conditions, you can directly cumsum() within these pseudogroups:
df['SumSale'] = df.groupby('group').Sale.cumsum()
# Sale IsSuccess group SumSale
# 0 1 False 0 1
# 1 2 True 0 3
# 2 3 False 0 6
# 3 2 False 0 8
# 4 4 True 0 12
# 5 3 False 1 3
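As a cross-check of the rule itself (reset when the previous row was a success and the running total already exceeds 5), here is a plain-loop sketch over the question's df2 that reproduces SumSaleExpected:
import pandas as pd

df2 = pd.DataFrame({'Sale': [10, 2, 2, 1, 3, 2, 1, 3, 5, 5],
                    'IsSuccess': [False, True, False, False, True, False, True, False, False, False]})

sums, running, prev_success = [], 0, False
for sale, success in zip(df2['Sale'], df2['IsSuccess']):
    # reset when the previous row succeeded and the total already exceeded 5
    if prev_success and running > 5:
        running = sale
    else:
        running += sale
    sums.append(running)
    prev_success = success
df2['SumSale'] = sums  # [10, 12, 2, 3, 6, 2, 3, 6, 11, 16]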

(Python) Selecting rows containing a string in ANY column?

I am trying to iterate through a dataframe and return the rows that contain a string "x" in any column.
This is what I have been trying
for col in df:
    rows = df[df[col].str.contains(searchTerm, case=False, na=False)]
However, it only returns a couple of rows, even when I search for a string that I know exists in more rows than that.
How do I make sure it is searching every row of every column?
Edit: My end goal is to get the row and column of the cell containing the string searchTerm
Welcome!
Agree with all the comments. It's generally best practice to find a way to accomplish what you want in Pandas/Numpy without iterating over rows/columns.
If the objective is to "find rows where any column contains the value 'x'", life is a lot easier than you think.
Below is some data:
import pandas as pd

df = pd.DataFrame({
    'a': range(10),
    'b': ['x', 'b', 'c', 'd', 'x', 'f', 'g', 'h', 'i', 'x'],
    'c': [False, False, True, True, True, False, False, True, True, True],
    'd': [1, 'x', 3, 4, 5, 6, 7, 8, 'x', 10]
})
print(df)
a b c d
0 0 x False 1
1 1 b False x
2 2 c True 3
3 3 d True 4
4 4 x True 5
5 5 f False 6
6 6 g False 7
7 7 h True 8
8 8 i True x
9 9 x True 10
So clearly rows 0, 1, 4, 8 and 9 should be included.
If we just do df == 'x', pandas broadcasts the comparison across the whole dataframe:
df == 'x'
a b c d
0 False True False False
1 False False False True
2 False False False False
3 False False False False
4 False True False False
5 False False False False
6 False False False False
7 False False False False
8 False False False True
9 False True False False
But pandas also has the handy .any method to check for True along an axis. Since we want to check across all the columns of each row, we want axis=1:
rows = (df == 'x').any(axis=1)
print(rows)
0 True
1 True
2 False
3 False
4 True
5 False
6 False
7 False
8 True
9 True
Note that if you want your solution to be truly case insensitive, like what you get from the .str method with case=False, you might need something more like:
rows = (df.applymap(lambda x: str(x).lower() == 'x')).any(axis=1)
The correct rows are flagged without any looping. And you get a series back that can be used for indexing the original dataframe:
df.loc[rows]
a b c d
0 0 x False 1
1 1 b False x
4 4 x True 5
8 8 i True x
9 9 x True 10
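The edit to the question asks for the row and column of each matching cell. Building on the same boolean mask, one sketch stacks it into (row, column) pairs:
mask = (df == 'x')
hits = mask.stack()
print(hits[hits].index.tolist())
# [(0, 'b'), (1, 'd'), (4, 'b'), (8, 'd'), (9, 'b')]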

Is there a Pandas equivalent to tidyr's uncount?

Let's assume we have a table with groupings of variable and their frequencies:
In R:
> df
# A tibble: 3 x 3
Cough Fever cases
<lgl> <lgl> <dbl>
1 TRUE FALSE 1
2 FALSE FALSE 2
3 TRUE TRUE 3
Then we could use tidyr::uncount to get a dataframe with the individual cases:
> uncount(df, cases)
# A tibble: 6 x 2
Cough Fever
<lgl> <lgl>
1 TRUE FALSE
2 FALSE FALSE
3 FALSE FALSE
4 TRUE TRUE
5 TRUE TRUE
6 TRUE TRUE
Is there an equivalent in Python/Pandas?
You can take the row index and repeat it according to the counts; for example, in R you can do:
df[rep(1:nrow(df),df$cases),]
First, to get data like yours:
import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1, 1, 2, 2, 2, 2], 'y': [0, 1, 0, 1, 1, 1]})
counts = df.groupby(['x', 'y']).size().reset_index()
counts.columns = ['x', 'y', 'n']
x y n
0 1 0 1
1 1 1 1
2 2 0 1
3 2 1 3
Then:
counts.iloc[np.repeat(np.arange(len(counts)),counts.n),:2]
x y
0 1 0
1 1 1
2 2 0
3 2 1
3 2 1
3 2 1
I haven't found an equivalent function in Python, but this works:
df2 = df.pop('cases')
df = pd.DataFrame(df.values.repeat(df2, axis=0), columns=df.columns)
df.pop('cases') removes the cases column from df and returns it as df2; the new DataFrame then repeats each remaining row according to the counts in df2. Please let me know if it helps.
In addition to the other solutions, you could combine take, repeat and drop:
import pandas as pd
df = pd.DataFrame({'Cough': [True, False, True],
                   'Fever': [False, False, True],
                   'cases': [1, 2, 3]})
df.take(df.index.repeat(df.cases)).drop(columns="cases")
Cough Fever
0 True False
1 False False
1 False False
2 True True
2 True True
2 True True
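A near-identical variant, assuming the same df, uses .loc with the repeated index and then reset_index so the result gets a fresh 0..n-1 index like uncount's output:
df.loc[df.index.repeat(df.cases)].drop(columns='cases').reset_index(drop=True)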
It's as easy as using tidyr's API, with datar:
>>> from datar.all import f, tribble, uncount
>>> df = tribble(
... f.Cough, f.Fever, f.cases,
... True, False, 1,
... False, False, 2,
... True, True, 3
... )
>>> uncount(df, f.cases)
Cough Fever
<bool> <bool>
0 True False
1 False False
2 False False
3 True True
4 True True
5 True True
I am the author of the package. Feel free to submit issues if you have any questions.
