Processing records with conditions in pandas - python

Given a pandas dataset with columns a, b and c, I have the following requirement:
calculate m = mean of c in the entire dataset
For each record in the dataset, if (a>10 and b<5) c = m
Is it possible to do this with a single pandas command, or do I need to loop over each record and check the condition?

I think it is very much possible using boolean masks
This should work-
m = df.c.mean()
df.c[(df.a > 10) & (df.b < 5)] = m

It's better to use loc when updating a slice of a DataFrame, to avoid the common SettingWithCopyWarning:
m = df['c'].mean()
df.loc[(df['a'] > 10) & (df['b'] < 5), 'c'] = m

One line
df.loc[df['a'].gt(10) & df['b'].lt(5), 'c'] = df['c'].mean()
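To see the effect end to end, here is a minimal, self-contained example (the column values below are made up for illustration):
import pandas as pd

df = pd.DataFrame({'a': [5, 12, 15, 8],
                   'b': [3, 2, 7, 1],
                   'c': [1.0, 2.0, 3.0, 4.0]})

m = df['c'].mean()                               # m = 2.5
df.loc[(df['a'] > 10) & (df['b'] < 5), 'c'] = m  # only the row with a=12, b=2 is updated
print(df)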

Related

Compare rows with conditions and generate a new dataframe in Pandas

I have a very big dataframe with this structure:
Timestamp Val1
Here you can see a real sample:
Timestamp Temp
0 1622471518.92911 36.443
1 1622471525.034114 36.445
2 1622471531.148139 37.447
3 1622471537.284337 36.449
4 1622471543.622588 43.345
5 1622471549.734765 36.451
6 1622471556.2518 36.454
7 1622471562.361368 41.461
8 1622471568.472718 42.468
9 1622471574.826475 36.470
What I want to do is compare the Temp column with itself: if one value is higher than another by "X" (for example 4), and the time between them is less than "Y" (for example 180 min), then I save some data about that pair.
Now I'm using two nested for loops, but this takes too much time, and pandas usually has an option to avoid this.
This is my code:
import datetime as dt

cap_time, maxim = 180, 4
cap_time = cap_time * 60
temps = df['Temperature'].values
times = df['Timestamp'].values
results = []
for i in range(len(temps)):
    for j in range(i+1, len(temps)):
        print(i, j, len(temps))
        if float(temps[j]) > float(temps[i]) * maxim:
            timeIn = dt.datetime.fromtimestamp(float(times[i]))
            timeOut = dt.datetime.fromtimestamp(float(times[j]))
            diff = timeOut - timeIn
            tdiff = diff.total_seconds()
            if tdiff > cap_time:
                break
            else:
                res = [temps[i], temps[j], times[i], times[j], tdiff/60, cap_time/60, maxim]
                results.append(res)
                break
# Then I save it in a dataframe and do other actions
Can Pandas help me to achieve my goal and reduce the execution time? I found DataFrame.diff(), but I'm not sure it's what I want (or I don't know how to use it).
Thank you very much.
Short of avoiding the nested for loops, you can already speed things up by avoiding all unnecessary calculations and conversions within the loops. In particular, you can use NumPy broadcasting to define a Boolean array beforehand, in which you can look up whether the condition is met:
import numpy as np
temps_diff = temps - temps[:, None]
times_diff = times - times[:, None]
condition = np.logical_and(temps_diff > maxim,
                           times_diff < cap_time)
results = []
for i in range(len(temps)):
    for j in range(i+1, len(temps)):
        if condition[i, j]:
            results.append([temps[i], temps[j],
                            times[i], times[j],
                            times_diff[i, j]])
results
[[36.443, 43.345, 1622471518.92911, 1622471543.622588, 24.693477869033813],
...
[36.454, 42.468, 1622471556.2518, 1622471568.472718, 12.22091794013977]]
To avoid the loops altogether, you could define a 3-dimensional full results array and then use the condition array as a Boolean mask to filter out the results you want:
import numpy as np
n = len(temps)
temps_diff = temps - temps[:, None]
times_diff = times - times[:, None]
condition = np.logical_and(temps_diff > maxim,
                           times_diff < cap_time)
results_full = np.stack([np.repeat(temps[:, None], n, axis=1),
                         np.tile(temps, (n, 1)),
                         np.repeat(times[:, None], n, axis=1),
                         np.tile(times, (n, 1)),
                         times_diff])
results = results_full[np.stack(results_full.shape[0] * [condition])]
results.reshape((5, -1)).T
array([[ 3.64430000e+01, 4.33450000e+01, 1.62247152e+09,
1.62247154e+09, 2.46934779e+01],
...
[ 3.64540000e+01, 4.24680000e+01, 1.62247156e+09,
1.62247157e+09, 1.22209179e+01],
...
])
As you can see, the resulting numbers are the same as above, although this time the results array will contain more rows, because we didn't use the shortcut of starting the inner loop at i+1.
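If you also want to reproduce the shortcut of starting the inner loop at i+1 (keeping only pairs with j > i), one possible refinement, sketched here on top of the n, condition and results_full arrays defined above (it does not reproduce the loop's break behaviour), is to AND the condition with a strictly upper-triangular mask:
import numpy as np

# keep only pairs (i, j) with j > i, mirroring the inner loop starting at i+1
upper = np.triu(np.ones((n, n), dtype=bool), k=1)
condition_upper = condition & upper

results = results_full[np.stack(results_full.shape[0] * [condition_upper])]
results.reshape((5, -1)).T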

Pandas Mask on multiple Conditions

In my dataframe I want to substitute every value below 1 and higher than 5 with nan.
This code works
persDf = persDf.mask(persDf < 1000)
and I get every value as NaN, but this one does not:
persDf = persDf.mask((persDf < 1) and (persDf > 5))
and I have no idea why this is so. I have checked the docs and various solutions to apparently similar problems, but could not find a solution. Does anyone have an idea that could help me with this?
Use the | operator, because a value can't be < 1 AND > 5:
persDf = persDf.mask((persDf < 1) | (persDf > 5))
Another method would be to use np.where and call that inside pd.DataFrame:
pd.DataFrame(data=np.where((persDf < 1) | (persDf > 5), np.nan, persDf),
             columns=persDf.columns)
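As a quick sanity check, here is a small reproducible example (the frame below is made up for illustration); both variants turn every value below 1 or above 5 into NaN:
import numpy as np
import pandas as pd

persDf = pd.DataFrame({'x': [0.5, 2.0, 7.0], 'y': [3.0, 0.1, 6.0]})

print(persDf.mask((persDf < 1) | (persDf > 5)))                       # mask-based version
print(pd.DataFrame(np.where((persDf < 1) | (persDf > 5), np.nan, persDf),
                   columns=persDf.columns))                           # np.where-based version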

Python Groupby with Boolean Mask

I have a pandas dataframe with the following general format:
id,atr1,atr2,orig_date,fix_date
1,bolt,l,2000-01-01,nan
1,screw,l,2000-01-01,nan
1,stem,l,2000-01-01,nan
2,stem,l,2000-01-01,nan
2,screw,l,2000-01-01,nan
2,stem,l,2001-01-01,2001-01-01
3,bolt,r,2000-01-01,nan
3,stem,r,2000-01-01,nan
3,bolt,r,2001-01-01,2001-01-01
3,stem,r,2001-01-01,2001-01-01
The desired result, with a new failed_part_ind indicator column, would be the following:
id,atr1,atr2,orig_date,fix_date,failed_part_ind
1,bolt,l,2000-01-01,nan,0
1,screw,l,2000-01-01,nan,0
1,stem,l,2000-01-01,nan,0
2,stem,l,2000-01-01,nan,1
2,screw,l,2000-01-01,nan,0
2,stem,l,2001-01-01,2001-01-01,0
3,bolt,r,2000-01-01,nan,1
3,stem,r,2000-01-01,nan,1
3,bolt,r,2001-01-01,2001-01-01,0
3,stem,r,2001-01-01,2001-01-01,0
Any tips or tricks most welcome!
Update2:
A better way to describe what I need to accomplish: within a .groupby(['id','atr1','atr2']), create a new indicator column for records in each group where the following criterion is met:
(df['orig_date'] < df['fix_date'])
I think this should work:
df['failed_part_ind'] = df.apply(lambda row: 1 if ((row['id'] == row['id']) &
                                                   (row['atr1'] == row['atr1']) &
                                                   (row['atr2'] == row['atr2']) &
                                                   (row['orig_date'] < row['fix_date']))
                                 else 0, axis=1)
Update: I think this is what you want:
import numpy as np
def f(g):
    min_fix_date = g['fix_date'].min()
    if np.isnan(min_fix_date):
        g['failed_part_ind'] = 0
    else:
        g['failed_part_ind'] = g['orig_date'].apply(lambda d: 1 if d < min_fix_date else 0)
    return g

df.groupby(['id', 'atr1', 'atr2']).apply(lambda g: f(g))
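A vectorized alternative (my own sketch, not part of the answer above, and assuming orig_date and fix_date have been parsed as datetimes, e.g. with pd.to_datetime) is to broadcast the group-wise minimum fix_date back onto the rows with transform and compare directly:
# earliest fix_date per (id, atr1, atr2) group, aligned back to the original rows
min_fix = df.groupby(['id', 'atr1', 'atr2'])['fix_date'].transform('min')

# flag rows whose orig_date precedes the group's earliest fix_date (NaT groups compare False -> 0)
df['failed_part_ind'] = (df['orig_date'] < min_fix).astype(int)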

Replacing value based on conditional pandas

How do you replace the value in a cell of a dataframe based on a conditional applied to the entire data frame, not just one column? I have tried to use df.where, but this doesn't work as planned:
df = df.where(operator.and_(df > (-1 * .2), df < 0),0)
df = df.where(df > 0 , df * 1.2)
Basically what I'm trying to do here is replace all values between -0.2 and 0 with zero across all columns in my dataframe, and multiply all values greater than zero by 1.2.
You've misunderstood the way pandas.where works: it keeps the values of the original object where the condition is true and replaces them otherwise, so you can reverse your logic:
df = df.where((df <= (-1 * .2)) | (df >= 0), 0)
df = df.where(df <= 0 , df * 1.2)
where allows you to have a one-line solution, which is great. I prefer to use a mask like so.
idx = (df < 0) & (df >= -0.2)
df[idx] = 0
I prefer breaking this into two lines because, using this method, it is easier to read. You could force this onto a single line as well.
df[(df < 0) & (df >= -0.2)] = 0
Just another option.
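Putting the whole requirement together (zero out values in [-0.2, 0), then scale positives by 1.2), here is a minimal sketch on a made-up frame, using a mask for the first step and where for the second:
import pandas as pd

df = pd.DataFrame({'x': [-0.3, -0.1, 0.5], 'y': [-0.15, 2.0, -1.0]})

df[(df < 0) & (df >= -0.2)] = 0     # values in [-0.2, 0) become 0
df = df.where(df <= 0, df * 1.2)    # values greater than 0 are multiplied by 1.2
print(df)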

pandas: Is it possible to filter a dataframe with arbitrarily long boolean criteria?

If you know exactly how you want to filter a dataframe, the solution is trivial:
df[(df.A == 1) & (df.B == 1)]
But what if you are accepting user input and do not know beforehand how many criteria the user wants to use? For example, the user wants a filtered data frame where columns [A, B, C] == 1. Is it possible to do something like:
def filterIt(*args, value):
    return df[(df.*args == value)]
so if the user calls filterIt(A, B, C, value=1), it returns:
df[(df.A == 1) & (df.B == 1) & (df.C == 1)]
I think the most elegant way to do this is using df.query(), where you can build up a string with all your conditions, e.g.:
import pandas as pd
import numpy as np

cols = {}
for col in ('A', 'B', 'C', 'D', 'E'):
    cols[col] = np.random.randint(1, 5, 20)
df = pd.DataFrame(cols)

def filter_df(df, filter_cols, value):
    conditions = []
    for col in filter_cols:
        conditions.append('{c} == {v}'.format(c=col, v=value))
    query_expr = ' and '.join(conditions)
    print('querying with: {q}'.format(q=query_expr))
    return df.query(query_expr)
Example output (your results may differ due to the randomly generated data):
filter_df(df, ['A', 'B'], 1)
querying with: A == 1 and B == 1
A B C D E
6 1 1 1 2 1
11 1 1 2 3 4
Here is another approach. It's cleaner, more performant, and has the advantage that columns can be empty (in which case the entire data frame is returned).
def filter(df, value, *columns):
    return df.loc[df.loc[:, columns].eq(value).all(axis=1)]
Explanation
values = df.loc[:, columns] selects only the columns we are interested in.
masks = values.eq(value) gives a boolean data frame indicating equality with the target value.
mask = masks.all(axis=1) applies an AND across columns (returning an index mask). Note that you can use masks.any(axis=1) for an OR.
return df.loc[mask] applies index mask to the data frame.
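Spelled out step by step, with the intermediate names used above (same logic as the one-liner, just easier to follow):
def filter(df, value, *columns):
    values = df.loc[:, list(columns)]  # only the columns we are interested in
    masks = values.eq(value)           # boolean frame: equality with the target value
    mask = masks.all(axis=1)           # AND across columns -> one boolean per row
    return df.loc[mask]                # apply the row mask to the original frame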
Demo
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randint(0, 2, (100, 3)), columns=list('ABC'))
# both columns
assert np.all(filter(df, 1, 'A', 'B') == df[(df.A == 1) & (df.B == 1)])
# no columns
assert np.all(filter(df, 1) == df)
# different values per column
assert np.all(filter(df, [1, 0], 'A', 'B') == df[(df.A == 1) & (df.B == 0)])
Alternative
For a small number of columns (< 5), the following solution, based on steven's answer, is more performant than the above, although less flexible. As-is, it will not work for an empty columns set, and will not work using different values per column.
from functools import reduce
from operator import and_

def filter(df, value, *columns):
    return df.loc[reduce(and_, (df[column] == value for column in columns))]
Retrieving a Series object by key (df[column]) is significantly faster than constructing a DataFrame object around a subset of columns (df.loc[:, columns]).
In [4]: %timeit df['A'] == 1
100 loops, best of 3: 17.3 ms per loop
In [5]: %timeit df.loc[:, ['A']] == 1
10 loops, best of 3: 48.6 ms per loop
Nevertheless, this speedup becomes negligible when dealing with a larger number of columns. The bottleneck becomes ANDing the masks together, for which reduce(and_, ...) is far slower than the Pandas builtin all(axis=1).
Thanks for the help, guys. I came up with something similar to Marius's answer after finding out about df.query():
def makeQuery(cols, equivalence=True, *args):
    operator = ' == ' if equivalence else ' != '
    query = ''
    for arg in args:
        for col in cols:
            query = query + "({}{}{})".format(col, operator, arg) + ' & '
    return query[:-3]
query = makeQuery(['A', 'B', 'C'], False, 1, 2)
The contents of query is a string:
(A != 1) & (B != 1) & (C != 1) & (A != 2) & (B != 2) & (C != 2)
that can be passed to df.query(query)
This is pretty messy but it seems to work.
import operator
from functools import reduce

def filterIt(value, args):
    stuff = [getattr(b, thing) == value for thing in args]
    return reduce(operator.and_, stuff)
a = {'A':[1,2,3],'B':[2,2,2],'C':[3,2,1]}
b = pd.DataFrame(a)
filterIt(2,['A','B','C'])
0 False
1 True
2 False
dtype: bool
(b.A == 2) & (b.B == 2) & (b.C ==2)
0 False
1 True
2 False
dtype: bool
