Find indices where a Pandas Series contains an element containing a character - python

Example:
import pandas as pd
arr = pd.Series(['a',['a','b'],'c'])
I would like to get the indices where the series contains elements containing 'a', so I would like to get back indices 0 and 1.
I've tried writing
arr.str.contains('a')
but this returns
0 True
1 NaN
2 False
dtype: object
while I'd like it to return
0 True
1 True
2 False
dtype: object

Use Series.str.join() to concatenate the lists/arrays in cells into a single string, and then use .str.contains('a'):
In [78]: arr.str.join(sep='~').str.contains('a')
Out[78]:
0 True
1 True
2 False
dtype: bool
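For completeness, a minimal sketch (assuming the example Series above) of turning that boolean mask into the indices the question asks for:
import pandas as pd

arr = pd.Series(['a', ['a', 'b'], 'c'])

# Join list cells into strings, then test for the substring
mask = arr.str.join(sep='~').str.contains('a')

# Indices where the mask is True -> [0, 1]
print(mask[mask].index.tolist())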

Use Series.apply and Python's in keyword, which works on both lists and strings:
arr.apply(lambda x: 'a' in x)
This will work fine if you don't have any NaN values in your Series, but if you do, you can use:
import numpy as np
arr.apply(lambda x: 'a' in x if x is not np.nan else x)
This is much faster than using Series.str.
Benchmarks:
%%timeit
arr.str.join(sep='~').str.contains('a')
Takes: 249 µs ± 4.83 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%%timeit
arr.apply(lambda x: 'a' in x)
Takes: 70.1 µs ± 1.68 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%%timeit
arr.apply(lambda x: 'a' in x if x is not np.nan else x)
Takes: 69 µs ± 1.6 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
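A runnable sketch of the NaN-safe idea, in a slightly different form that returns False for NaN cells instead of propagating them (the NaN row is added here just for illustration):
import numpy as np
import pandas as pd

arr = pd.Series(['a', ['a', 'b'], 'c', np.nan])

# NaN cells are neither strings nor lists, so they simply become False
mask = arr.apply(lambda x: 'a' in x if isinstance(x, (str, list)) else False)
print(mask.tolist())  # [True, True, False, False]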

Related

How to apply changes to subset dataframe to source dataframe

I'm trying to determine and flag duplicate 'Sample' values in a dataframe using groupby with lambda:
rdtRows["DuplicateSample"] = False
rdtRowsSampleGrouped = rdtRows.groupby( ['Sample']).filter(lambda x: len(x) > 1)
rdtRowsSampleGrouped["DuplicateSample"] = True
# How to get flag changes made on rdtRowsSampleGrouped to apply to rdtRows??
How do I make the changes / apply the "DuplicateSample" flag to the source rdtRows data? I'm stumped :(
Use GroupBy.transform with GroupBy.size:
df['DuplicateSample'] = df.groupby('Sample')['Sample'].transform('size') > 1
Or use Series.duplicated with keep=False if you need a faster solution:
df['DuplicateSample'] = df['Sample'].duplicated(keep=False)
Performance on some sample data (real results will differ, depending on the number of rows and the number of duplicated values):
np.random.seed(2020)
N = 100000
df = pd.DataFrame({'Sample': np.random.randint(100000, size=N)})
In [51]: %timeit df['DuplicateSample'] = df.groupby('Sample')['Sample'].transform('size') > 1
17 ms ± 50 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [52]: %timeit df['DuplicateSample1'] = df['Sample'].duplicated(keep=False)
3.73 ms ± 40 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
# @Stef's solution is unfortunately ~2734 times slower than the duplicated solution
In [53]: %timeit df['DuplicateSample2'] = df.groupby('Sample')['Sample'].transform(lambda x: len(x)>1)
10.2 s ± 517 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
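Applied to the question's own variable names, a minimal sketch (the Sample values here are made up for illustration):
import pandas as pd

rdtRows = pd.DataFrame({'Sample': ['A', 'B', 'B', 'C', 'D', 'D']})

# Flag duplicates directly on the source frame -- no intermediate subset needed
rdtRows['DuplicateSample'] = rdtRows['Sample'].duplicated(keep=False)
print(rdtRows)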
You can use transform:
import pandas as pd
df = pd.DataFrame({'Sample': [1,2,2,3,4,4]})
df['DuplicateSample'] = df.groupby('Sample')['Sample'].transform(lambda x: len(x)>1)
Result:
Sample DuplicateSample
0 1 False
1 2 True
2 2 True
3 3 False
4 4 True
5 4 True

Pandas - check if dataframe has negative value in any column

I wonder how to check if a pandas dataframe has a negative value in one or more columns and return only a boolean value (True or False). Can you please help?
In[1]: df = pd.DataFrame(np.random.randn(10, 3))
In[2]: df
Out[2]:
0 1 2
0 -1.783811 0.736010 0.865427
1 -1.243160 0.255592 1.670268
2 0.820835 0.246249 0.288464
3 -0.923907 -0.199402 0.090250
4 -1.575614 -1.141441 0.689282
5 -1.051722 0.513397 1.471071
6 2.549089 0.977407 0.686614
7 -1.417064 0.181957 0.351824
8 0.643760 0.867286 1.166715
9 -0.316672 -0.647559 1.331545
Expected output:
Out[3]: True
Actually, if speed is important, I did a few tests:
df = pd.DataFrame(np.random.randn(10000, 30000))
Test 1, slowest: pure pandas
(df < 0).any().any()
# 303 ms ± 1.28 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Test 2, faster: switching over to numpy with .values for testing the presence of a True entry
(df < 0).values.any()
# 269 ms ± 8.19 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
Test 3, maybe even faster, though not significant: switching over to numpy for the whole thing
(df.values < 0).any()
# 267 ms ± 1.48 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
You can chain two any calls:
df.lt(0).any().any()
Out[96]: True
This does the trick:
(df < 0).any().any()
To break it down, (df < 0) gives a dataframe with boolean entries. Then the first .any() returns a series of booleans, testing within each column for the presence of a True value. And then, the second .any() asks whether this returned series itself contains any True value.
This returns a simple:
True
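A minimal step-by-step sketch of that breakdown, on a small hand-made frame:
import pandas as pd

df = pd.DataFrame({'a': [1, -2, 3], 'b': [4, 5, 6]})

bool_df = df < 0            # elementwise boolean DataFrame
per_column = bool_df.any()  # one boolean per column: a -> True, b -> False
result = per_column.any()   # collapse to a single boolean
print(result)               # True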

Using pandas, how do I check if a particular sequence exist in a column?

I have a dataframe:
df = pd.DataFrame({'Sequence': ['ABCDEFG', 'AWODIH', 'AWODIHAWD'], 'Length': [7, 6, 9]})
I want to be able to check if a particular sequence, say 'WOD', exists in any entry of the 'Sequence' column. It doesn't have to be in the middle or the ends of the entry, but just if that sequence, in that order, exists in any entry of that column, return true.
How would I do this?
I looked into .isin and .contains, both of which only return True if the exact, and ENTIRE, sequence is in the column:
df.isin('ABCDEFG')  # returns true
df.isin('ABC')  # returns false
I want a sort of Cltr+F function that could search any sequence in that order, regardless of where it is or how long it is.
You can simply do this using str.contains:
In [657]: df['Sequence'].str.contains('WOD')
Out[657]:
0 False
1 True
2 True
Name: Sequence, dtype: bool
OR, you can use str.find:
In [658]: df['Sequence'].str.find('WOD')
Out[658]:
0 -1
1 1
2 1
Name: Sequence, dtype: int64
Which returns -1 on failure.
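If a single True/False for the whole column is wanted (as the question asks), any of these masks can be collapsed with .any(); a small sketch assuming the frame from the question:
import pandas as pd

df = pd.DataFrame({'Sequence': ['ABCDEFG', 'AWODIH', 'AWODIHAWD'],
                   'Length': [7, 6, 9]})

# True if 'WOD' occurs as a substring in at least one entry of the column
print(df['Sequence'].str.contains('WOD').any())  # True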
We need to use str.findall before contains:
df.Sequence.str.findall('W|O|D').str.join('').str.contains('WOD')
0 False
1 True
2 True
Name: Sequence, dtype: bool
If you want to use your in syntax, you can do:
df.Sequence.apply(lambda x: 'WOD' in x)
If performance is a consideration, the following solution is many times faster than other solutions:
['WOD' in e for e in df.Sequence]
Benchmark
%%timeit
['WOD' in e for e in df.Sequence]
8.26 µs ± 90.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
%%timeit
df.Sequence.apply(lambda x: 'WOD' in x)
164 µs ± 7.26 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%%timeit
df['Sequence'].str.contains('WOD')
153 µs ± 4.49 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%%timeit
df['Sequence'].str.find('WOD')
159 µs ± 7.84 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
%%timeit
df.Sequence.str.findall('W|O|D').str.join('').str.contains('WOD')
585 µs ± 34 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

Convert a list of hashIDs stored as string to a column of unique values

I have a dataframe where in one column I have a list of hash values stored like strings:
'[d85235f50b3c019ad7c6291e3ca58093,03e0fb034f2cb3264234b9eae09b4287]' just to be clear.
The dataframe looks like:
1
0 [8a88e629c368001c18619c7cd66d3e96, 4b0709dd990a0904bbe6afec636c4213, c00a98ceb6fc7006d572486787e551cc, 0e72ae6851c40799ec14a41496d64406, 76475992f4207ee2b209a4867b42c372]
1 [3277ded8d1f105c84ad5e093f6e7795d]
2 [d85235f50b3c019ad7c6291e3ca58093, 03e0fb034f2cb3264234b9eae09b4287]
I'd like to create a list of the unique hash IDs present in this column.
What is an efficient way to do this?
Thank you
Option 1
See timing below for fastest option
You can embed the parsing and flattening in one comprehension
[y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')]
['8a88e629c368001c18619c7cd66d3e96',
'4b0709dd990a0904bbe6afec636c4213',
'c00a98ceb6fc7006d572486787e551cc',
'0e72ae6851c40799ec14a41496d64406',
'76475992f4207ee2b209a4867b42c372',
'3277ded8d1f105c84ad5e093f6e7795d',
'd85235f50b3c019ad7c6291e3ca58093',
'03e0fb034f2cb3264234b9eae09b4287']
From there, you can use either list(set()), pd.unique, or np.unique
pd.unique([y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')])
array(['8a88e629c368001c18619c7cd66d3e96',
'4b0709dd990a0904bbe6afec636c4213',
'c00a98ceb6fc7006d572486787e551cc',
'0e72ae6851c40799ec14a41496d64406',
'76475992f4207ee2b209a4867b42c372',
'3277ded8d1f105c84ad5e093f6e7795d',
'd85235f50b3c019ad7c6291e3ca58093',
'03e0fb034f2cb3264234b9eae09b4287'], dtype=object)
Option 2
For brevity, use pd.Series.extractall
list(set(df['1'].str.extractall('(\w+)')[0]))
['8a88e629c368001c18619c7cd66d3e96',
'4b0709dd990a0904bbe6afec636c4213',
'c00a98ceb6fc7006d572486787e551cc',
'0e72ae6851c40799ec14a41496d64406',
'76475992f4207ee2b209a4867b42c372',
'3277ded8d1f105c84ad5e093f6e7795d',
'd85235f50b3c019ad7c6291e3ca58093',
'03e0fb034f2cb3264234b9eae09b4287']
@jezrael's list(set()) with my comprehension is the fastest
Parse Timing
I kept the same list(set()) for purposes of comparing parsing and flattening.
%timeit list(set(np.concatenate(df['1'].apply(yaml.load).values).tolist()))
%timeit list(set([y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')]))
%timeit list(set(chain.from_iterable(df['1'].str.strip('[]').str.split(', '))))
%timeit list(set(df['1'].str.extractall('(\w+)')[0]))
1.01 ms ± 45 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
6.42 µs ± 219 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
279 µs ± 8.87 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
941 µs ± 10.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
This takes my comprehension and compares various ways of making the result unique:
%timeit pd.unique([y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')])
%timeit np.unique([y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')])
%timeit list(set([y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')]))
57.8 µs ± 3.66 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
17.5 µs ± 552 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
6.18 µs ± 184 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
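For reproducibility, a hedged sketch of a sample frame like the question's (a column literally named '1', with each cell stored as a plain string) that the snippets above can be run against:
import pandas as pd

df = pd.DataFrame({'1': [
    '[8a88e629c368001c18619c7cd66d3e96, 4b0709dd990a0904bbe6afec636c4213]',
    '[3277ded8d1f105c84ad5e093f6e7795d]',
    '[d85235f50b3c019ad7c6291e3ca58093, 03e0fb034f2cb3264234b9eae09b4287]',
]})

# Parse, flatten, and de-duplicate in one pass
unique_ids = list(set(
    y for x in df['1'].values.tolist() for y in x.strip('[]').split(', ')
))
print(unique_ids)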
You need strip with split first, and chain for flattening:
print (df.columns.tolist())
['col']
#convert strings to lists per rows
#change by your column name if necessary
s = df['col'].str.strip('[]').str.split(', ')
print (s)
0 [8a88e629c368001c18619c7cd66d3e96, 4b0709dd990...
1 [3277ded8d1f105c84ad5e093f6e7795d]
2 [d85235f50b3c019ad7c6291e3ca58093, 03e0fb034f2...
Name: col, dtype: object
#check first value
print (type(s.iat[0]))
<class 'list'>
#get unique values - for unique values use set
from itertools import chain
L = list(set(chain.from_iterable(s)))
['76475992f4207ee2b209a4867b42c372', '3277ded8d1f105c84ad5e093f6e7795d',
'd85235f50b3c019ad7c6291e3ca58093', '4b0709dd990a0904bbe6afec636c4213',
'c00a98ceb6fc7006d572486787e551cc', '03e0fb034f2cb3264234b9eae09b4287',
'8a88e629c368001c18619c7cd66d3e96', '0e72ae6851c40799ec14a41496d64406']
from itertools import chain
s = [x.strip('[]').split(', ') for x in df['col'].values.tolist()]
L = list(set(chain.from_iterable(s)))
print (L)
['76475992f4207ee2b209a4867b42c372', '3277ded8d1f105c84ad5e093f6e7795d',
'd85235f50b3c019ad7c6291e3ca58093', '4b0709dd990a0904bbe6afec636c4213',
'c00a98ceb6fc7006d572486787e551cc', '03e0fb034f2cb3264234b9eae09b4287',
'8a88e629c368001c18619c7cd66d3e96', '0e72ae6851c40799ec14a41496d64406']
IIUC, you want to flatten your data. Convert it to a column of lists using yaml.load.
import yaml
df = df.applymap(yaml.load)
print(df)
1
0 [8a88e629c368001c18619c7cd66d3e96, 4b0709dd990...
1 [3277ded8d1f105c84ad5e093f6e7795d]
2 [d85235f50b3c019ad7c6291e3ca58093, 03e0fb034f2...
The easiest way would be to construct a new dataframe from the old one's values.
out = pd.DataFrame(np.concatenate(df.iloc[:, 0].values.tolist()))
print(out)
0
0 8a88e629c368001c18619c7cd66d3e96
1 4b0709dd990a0904bbe6afec636c4213
2 c00a98ceb6fc7006d572486787e551cc
3 0e72ae6851c40799ec14a41496d64406
4 76475992f4207ee2b209a4867b42c372
5 3277ded8d1f105c84ad5e093f6e7795d
6 d85235f50b3c019ad7c6291e3ca58093
7 03e0fb034f2cb3264234b9eae09b4287
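To finish the question's actual ask (a list of unique hashes) from that approach, a small self-contained sketch; yaml.safe_load is used here instead of yaml.load only to avoid the loader warning in newer PyYAML versions:
import numpy as np
import pandas as pd
import yaml

df = pd.DataFrame({'1': [
    '[d85235f50b3c019ad7c6291e3ca58093, 03e0fb034f2cb3264234b9eae09b4287]',
    '[3277ded8d1f105c84ad5e093f6e7795d]',
]})

# Parse each string cell into a real list, flatten, then de-duplicate
lists = df['1'].apply(yaml.safe_load)
unique_ids = pd.Series(np.concatenate(lists.values)).unique().tolist()
print(unique_ids)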

How to determine whether a Pandas Column contains a particular value

I am trying to determine whether there is an entry in a Pandas column that has a particular value. I tried to do this with if x in df['id']. I thought this was working, except when I fed it a value that I knew was not in the column, 43 in df['id'], it still returned True. When I subset to a data frame only containing entries matching the missing id, df[df['id'] == 43], there are, obviously, no entries in it. How do I determine if a column in a Pandas data frame contains a particular value, and why doesn't my current method work? (FYI, I have the same problem when I use the implementation in this answer to a similar question.)
in of a Series checks whether the value is in the index:
In [11]: s = pd.Series(list('abc'))
In [12]: s
Out[12]:
0 a
1 b
2 c
dtype: object
In [13]: 1 in s
Out[13]: True
In [14]: 'a' in s
Out[14]: False
One option is to see if it's in unique values:
In [21]: s.unique()
Out[21]: array(['a', 'b', 'c'], dtype=object)
In [22]: 'a' in s.unique()
Out[22]: True
or a python set:
In [23]: set(s)
Out[23]: {'a', 'b', 'c'}
In [24]: 'a' in set(s)
Out[24]: True
As pointed out by @DSM, it may be more efficient (especially if you're just doing this for one value) to just use in directly on the values:
In [31]: s.values
Out[31]: array(['a', 'b', 'c'], dtype=object)
In [32]: 'a' in s.values
Out[32]: True
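A small sketch tying this back to the question's exact pitfall (value 43, column id; the frame here is hypothetical):
import pandas as pd

# 43 does NOT appear in the 'id' column, but 43 IS a label in the default RangeIndex
df = pd.DataFrame({'id': range(100, 200)})

print(43 in df['id'])         # True  -- membership is tested against the index
print(43 in df['id'].values)  # False -- membership is tested against the values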
You can also use pandas.Series.isin, although it's a little bit longer than 'a' in s.values:
In [2]: s = pd.Series(list('abc'))
In [3]: s
Out[3]:
0 a
1 b
2 c
dtype: object
In [3]: s.isin(['a'])
Out[3]:
0 True
1 False
2 False
dtype: bool
In [4]: s[s.isin(['a'])].empty
Out[4]: False
In [5]: s[s.isin(['z'])].empty
Out[5]: True
But this approach can be more flexible if you need to match multiple values at once for a DataFrame (see DataFrame.isin):
>>> df = pd.DataFrame({'A': [1, 2, 3], 'B': [1, 4, 7]})
>>> df.isin({'A': [1, 3], 'B': [4, 7, 12]})
A B
0 True False # Note that B didn't match 1 here.
1 False True
2 True True
found = df[df['Column'].str.contains('Text_to_search')]
print(found.count())
found.count() will contain the number of matches, and if it is 0 then the string was not found in the column.
You can try this to check for a particular value x in a particular column named id:
if x in df['id'].values
I did a few simple tests:
In [10]: x = pd.Series(range(1000000))
In [13]: timeit 999999 in x.values
567 µs ± 25.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [24]: timeit 9 in x.values
666 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [16]: timeit (x == 999999).any()
6.86 ms ± 107 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [21]: timeit x.eq(999999).any()
7.03 ms ± 33.7 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [22]: timeit x.eq(9).any()
7.04 ms ± 60 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [15]: timeit x.isin([999999]).any()
9.54 ms ± 291 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
In [17]: timeit 999999 in set(x)
79.8 ms ± 1.98 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
Interestingly, it doesn't matter whether you look up 9 or 999999; it seems to take about the same amount of time using the in syntax (it must be using some vectorized computation):
In [24]: timeit 9 in x.values
666 µs ± 15.7 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [25]: timeit 9999 in x.values
647 µs ± 5.21 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [26]: timeit 999999 in x.values
642 µs ± 2.11 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [27]: timeit 99199 in x.values
644 µs ± 5.31 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [28]: timeit 1 in x.values
667 µs ± 20.8 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
It seems like using x.values is the fastest, but maybe there is a more elegant way in pandas?
Or use Series.tolist or Series.any:
>>> s = pd.Series(list('abc'))
>>> s
0 a
1 b
2 c
dtype: object
>>> 'a' in s.tolist()
True
>>> (s=='a').any()
True
Series.tolist makes a list out of the Series; with the other one I am just getting a boolean Series from the regular Series, then checking if there are any Trues in the boolean Series.
Simple condition:
if any(str(elem) in ['a','b'] for elem in df['column'].tolist()):
Use:
df[df['id']==x].index.tolist()
If x is present in id, this returns the list of indices where it is present; otherwise it gives an empty list.
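A quick usage sketch of that (df and x are made up for illustration):
import pandas as pd

df = pd.DataFrame({'id': [10, 43, 43, 77]})
x = 43

indices = df[df['id'] == x].index.tolist()
print(indices)           # [1, 2]
print(len(indices) > 0)  # True, i.e. x is present in the column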
I had a CSV file to read:
df = pd.read_csv('50_states.csv')
And after trying:
if value in df.column:
    print(True)
which never printed True, even though the value was in the column.
I tried:
for values in df.column:
    if value == values:
        print(True)
        # Or do something
    else:
        print(False)
That worked. I hope this can help!
Use query() to find the rows where the condition holds and get the number of rows with shape[0]. If there exists at least one entry, this statement is True:
df.query('id == 123').shape[0] > 0
Suppose your dataframe has a "filename" column and you want to check if the filename "80900026941984" is present in the dataframe or not.
You can simply write:
if sum(df["filename"].astype("str").str.contains("80900026941984")) > 0:
    print("found")
