What code should I type in an IPython Notebook to determine whether the values in the ID column of a CSV file are unique?
I have tried searching online, but to no avail.
Probably the simplest is to compare the length of the df against the length of the unique values:
len(df) == len(df['ID'].unique())
This will yield True or False.
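For context, a minimal end-to-end sketch (the path 'data.csv' is a hypothetical stand-in for your file):
import pandas as pd

# Read the CSV (hypothetical path) and check whether the ID column is unique.
df = pd.read_csv('data.csv')
print(len(df) == len(df['ID'].unique()))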
Also you could call drop_duplicates():
len(df) == len(df['ID'].drop_duplicates())
Also nunique:
len(df) == df['ID'].nunique()
Example:
In [6]:
df = pd.DataFrame({'a':[0,1,1,2,3,4]})
df
Out[6]:
a
0 0
1 1
2 1
3 2
4 3
5 4
In [7]:
len(df) == df['a'].nunique()
Out[7]:
False
Another method is to invert the boolean Series returned from duplicated and pass it to np.all, which returns True only if all values are True. For this sample data there is one duplicated value, so we get a False and the expression yields False:
In [11]:
np.all(~df['a'].duplicated())
Out[11]:
False
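As a side note, pandas also exposes this check directly as a Series attribute:
df['a'].is_unique  # False for this sample data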
Related
I'm looking for a way (using a built-in pandas function) to scan a column of a DataFrame, comparing its values across different indices.
Here is an example using a for loop. I have a DataFrame with a single column, col_1, and I want to create a column col_2 with True/False in this way:
df["col_2"] = "False"
N=5
for idx in range(0,len(df)-N):
for i in range (idx+1,idx+N+1):
if(df["col_1"].iloc[idx]==df["col_1"].iloc[i]):
df["col_2"].iloc[idx]=True
What I'm trying to do is compare the value of col_1 at each index with the values at the next N indices.
I'd like to do the same operation without a for loop. I've already tried shift and df.loc, but the computational time was similar.
Have you tried doing something like:
df["col_1_shifted"] = df["col_1"].shift(N)
df["col_2"] = (df["col_1"] == df["col_1_shifted"])
Update: looking more carefully at your double loop, it seems you want to flag all duplicates except the last. That's done by simply changing the keep argument to 'last' instead of the default 'first'.
As suggested by @QuangHoang in the comments, duplicated() works nicely for this:
newdf = df.assign(col_2=df.duplicated(subset='col_1', keep='last'))
Example:
df = pd.DataFrame(np.random.randint(0, 5, 10), columns=['col_1'])
newdf = df.assign(col_2=df.duplicated(subset='col_1', keep='last'))
>>> newdf
col_1 col_2
0 2 False
1 0 True
2 1 True
3 0 True
4 0 False
5 3 False
6 1 True
7 1 False
8 4 True
9 4 False
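If you instead want to flag every occurrence of a duplicated value, including the last, pass keep=False:
newdf = df.assign(col_2=df.duplicated(subset='col_1', keep=False))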
How do I set the values of a pandas dataframe slice, where the rows are chosen by a boolean expression and the columns are chosen by position?
I have done it in the following way so far:
>>> vals = [5,7]
>>> df = pd.DataFrame({'a':[1,2,3,4], 'b':[5,5,7,7]})
>>> df
a b
0 1 5
1 2 5
2 3 7
3 4 7
>>> df.iloc[:,1][df.iloc[:,1] == vals[0]] = 0
>>> df
a b
0 1 0
1 2 0
2 3 7
3 4 7
This works as expected on this small sample, but gives me the following warning on my real life dataframe:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
What is the recommended way to achieve this?
Use DataFrame.columns and DataFrame.loc:
col = df.columns[1]
df.loc[df.loc[:,col] == vals[0], col] = 0
One way is to look up the column header by its position and use loc (label-based indexing):
df.loc[df.iloc[:, 1] == vals[0], df.columns[1]] = 0
Another way is to use np.where with iloc (integer-position indexing); np.where returns a tuple of arrays holding the positions where the condition is True:
df.iloc[np.where(df.iloc[:, 1] == vals[0])[0], 1] = 0
A combination of loc and iloc might look like it works too, but note that it is chained indexing: the trailing .iloc[:, 1] = 0 assigns into a temporary object, so it triggers the same SettingWithCopyWarning and may leave df unchanged:
df.loc[df.iloc[:,1] == vals[0]].iloc[:, 1] = 0
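A purely positional sketch that does avoid the warning is to rebuild the column with Series.mask and assign it back by position:
# mask() replaces values where the condition holds; assigning the result
# back through iloc stays positional and avoids chained indexing.
df.iloc[:, 1] = df.iloc[:, 1].mask(df.iloc[:, 1] == vals[0], 0)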
I have a DataFrame with multiple columns of 'yes' and 'no' strings. I want to convert all of them to a boolean dtype. To map one column, I would use
dict_map_yn_bool={'yes':True, 'no':False}
df['nearby_subway_station'].map(dict_map_yn_bool)
This does the job for one column. How can I convert multiple columns with a single line of code?
You can use applymap:
df = pd.DataFrame({'nearby_subway_station':['yes','no'], 'Station':['no','yes']})
print (df)
Station nearby_subway_station
0 no yes
1 yes no
dict_map_yn_bool={'yes':True, 'no':False}
df = df.applymap(dict_map_yn_bool.get)
print (df)
Station nearby_subway_station
0 False True
1 True False
Another solution:
for x in df:
    df[x] = df[x].map(dict_map_yn_bool)
print (df)
Station nearby_subway_station
0 False True
1 True False
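The same loop can be condensed to one line with a dict comprehension (iterating a DataFrame yields its column labels):
df = pd.DataFrame({c: df[c].map(dict_map_yn_bool) for c in df})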
Thanks to Jon Clements for the very nice idea of using replace:
df = df.replace({'yes': True, 'no': False})
print (df)
Station nearby_subway_station
0 False True
1 True False
Some differences appear if the data contain values that are not in the dict:
df = pd.DataFrame({'nearby_subway_station':['yes','no','a'], 'Station':['no','yes','no']})
print (df)
Station nearby_subway_station
0 no yes
1 yes no
2 no a
applymap (with dict.get) produces None for unmapped boolean or string values, and NaN for numeric ones.
df = df.applymap(dict_map_yn_bool.get)
print (df)
Station nearby_subway_station
0 False True
1 True False
2 False None
map produces NaN:
for x in df:
    df[x] = df[x].map(dict_map_yn_bool)
print (df)
Station nearby_subway_station
0 False True
1 True False
2 False NaN
replace produces neither NaN nor None; values not in the dict are left untouched:
df = df.replace(dict_map_yn_bool)
print (df)
Station nearby_subway_station
0 False True
1 True False
2 False a
You could use a stack/unstack idiom:
df.stack().map(dict_map_yn_bool).unstack()
Using @jezrael's setup:
df = pd.DataFrame({'nearby_subway_station':['yes','no'], 'Station':['no','yes']})
dict_map_yn_bool={'yes':True, 'no':False}
Then
df.stack().map(dict_map_yn_bool).unstack()
Station nearby_subway_station
0 False True
1 True False
(Timing comparison: benchmark plots for small and bigger data omitted.)
I would work with pandas.DataFrame.replace, as I think it is the simplest approach and has built-in arguments to support this task. It is also a one-liner, as requested.
First case, replace all instances of 'yes' or 'no':
import pandas as pd
import numpy as np
from numpy import random
# Generating the data, 20 rows by 5 columns.
data = random.choice(['yes','no'], size=(20, 5), replace=True)
col_names = ['col_{}'.format(a) for a in range(1,6)]
df = pd.DataFrame(data, columns=col_names)
# Supply parallel lists: the values to replace and their replacements. No dict needed.
df_bool = df.replace(to_replace=['yes','no'], value=[True, False])
Second case, where you only want to replace in a subset of columns, as described in the documentation for DataFrame.replace: use a nested dictionary whose outer keys are the columns to replace in, and whose values are dictionaries mapping values to their replacements:
dict_map_yn_bool={'yes':True, 'no':False}
replace_dict = {'col_1':dict_map_yn_bool,
'col_2':dict_map_yn_bool}
df_bool = df.replace(to_replace=replace_dict)
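An equivalent sketch that avoids the nested dict is to replace on a column subset and assign it back:
cols = ['col_1', 'col_2']
df_bool = df.copy()
df_bool[cols] = df_bool[cols].replace({'yes': True, 'no': False})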
I have a regular DataFrame with a string-type (object) column. When I try to filter on the column using the equivalent of a WHERE clause, I get a KeyError with dot notation; with bracket notation, all is well.
I am referring to these instructions:
df[df.colA == 'blah']
df[df['colA'] == 'blah']
The first gives the equivalent of
KeyError: False
I'm not posting an example because I cannot reproduce the issue on a bespoke DataFrame built for illustration: when I do, both notations yield the same result.
So I'm asking whether there is a difference between the two notations, and why.
The dot notation is just a convenient shortcut for accessing things versus the standard brackets. Notably, it doesn't work when the column name is something like sum that is already a DataFrame method. My bet would be that the column name in your real example runs into that issue, so the bracket selection works fine while the dot notation is actually testing whether a bound method is equal to 'blah'.
Quick example below:
In [67]: df = pd.DataFrame(np.arange(10).reshape(5,2), columns=["number", "sum"])
In [68]: df
Out[68]:
number sum
0 0 1
1 2 3
2 4 5
3 6 7
4 8 9
In [69]: df.number == 0
Out[69]:
0 True
1 False
2 False
3 False
4 False
Name: number, dtype: bool
In [70]: df.sum == 0
Out[70]: False
In [71]: df['sum'] == 0
Out[71]:
0 False
1 False
2 False
3 False
4 False
Name: sum, dtype: bool
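Dot access also cannot express column names containing spaces or punctuation, and it cannot create new columns; bracket notation handles any label. A small sketch with a hypothetical df2:
df2 = pd.DataFrame({'my col': [1, 2]})
# df2.my col          # invalid syntax: dot access cannot express this name
df2['my col'] == 1    # bracket notation works for any column label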
I am relatively new to Python/pandas and am struggling to extract the correct data from a pd.DataFrame. What I actually have is a DataFrame with 3 columns:
data =
Position Letter Value
1 a TRUE
2 f FALSE
3 c TRUE
4 d TRUE
5 k FALSE
What I want to do is put all of the TRUE rows into a new Dataframe so that the answer would be:
answer =
Position Letter Value
1 a TRUE
3 c TRUE
4 d TRUE
I know that you can access a particular column using
data['Value']
but how do I extract all of the TRUE rows?
Thanks for any help and advice,
Alex
You can test which Values are True:
In [11]: data['Value'] == True
Out[11]:
0 True
1 False
2 True
3 True
4 False
Name: Value, dtype: bool
and then use boolean indexing to pull out those rows:
In [12]: data[data['Value'] == True]
Out[12]:
Position Letter Value
0 1 a True
2 3 c True
3 4 d True
*Note: if the values are actually the strings 'TRUE' and 'FALSE' (they probably shouldn't be!) then use:
data['Value'] == 'TRUE'
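And if the column holds real booleans, the comparison is redundant; you can index with the column directly:
data[data['Value']]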
You can wrap your value/values in a list and do the following:
new_df = df.loc[df['yourColumnName'].isin(['your', 'list', 'items'])]
This returns a new DataFrame consisting of the rows where the values in your column match the items in your list.
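Applied to the question's data (assuming the Value column holds real booleans), that would look like:
new_df = data.loc[data['Value'].isin([True])]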