I have a dataframe that may or may not have columns in which every value is the same. For example:
row A B
1 9 0
2 7 0
3 5 0
4 2 0
I'd like to return just
row A
1 9
2 7
3 5
4 2
Is there a simple way to identify if any of these columns exist and then remove them?
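For reference, a minimal way to reconstruct the sample frame above (a sketch; it assumes the row column is meant to be the index):
import pandas as pd

df = pd.DataFrame({'A': [9, 7, 5, 2], 'B': [0, 0, 0, 0]},
                  index=pd.Index([1, 2, 3, 4], name='row'))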
I believe this option will be faster than the other answers here as it will traverse the data frame only once for the comparison and short-circuit if a non-unique value is found.
>>> df
   0  1  2
0  1  9  0
1  2  7  0
2  3  7  0
>>> df.loc[:, (df != df.iloc[0]).any()]
   0  1
0  1  9
1  2  7
2  3  7
Ignoring NaNs (as nunique() does by default), a column is constant if nunique() == 1. So:
>>> df
   A  B  row
0  9  0    1
1  7  0    2
2  5  0    3
3  2  0    4
>>> df = df.loc[:, df.apply(pd.Series.nunique) != 1]
>>> df
   A  row
0  9    1
1  7    2
2  5    3
3  2    4
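One side note, not from the original answer: nunique() skips NaNs, so an all-NaN column has nunique() == 0 and survives the filter above. If such columns should also be dropped, dropna=False makes NaN count as a value, so an all-NaN column is treated as constant (note that a column that is constant apart from some NaNs then has two distinct values and is kept):
df = df.loc[:, df.nunique(dropna=False) != 1]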
I compared various methods on a data frame of size 120*10000, and found the most efficient one to be:
def drop_constant_column(dataframe):
    """
    Drops constant value columns of pandas dataframe.
    """
    return dataframe.loc[:, (dataframe != dataframe.iloc[0]).any()]
1 loop, best of 3: 237 ms per loop
The other contenders are
def drop_constant_columns(dataframe):
    """
    Drops constant value columns of pandas dataframe.
    """
    result = dataframe.copy()
    for column in dataframe.columns:
        if len(dataframe[column].unique()) == 1:
            result = result.drop(column, axis=1)
    return result
1 loop, best of 3: 19.2 s per loop
def drop_constant_columns_2(dataframe):
    """
    Drops constant value columns of pandas dataframe.
    """
    for column in dataframe.columns:
        if len(dataframe[column].unique()) == 1:
            dataframe.drop(column, inplace=True, axis=1)
    return dataframe
1 loop, best of 3: 317 ms per loop
def drop_constant_columns_3(dataframe):
    """
    Drops constant value columns of pandas dataframe.
    """
    keep_columns = [col for col in dataframe.columns if len(dataframe[col].unique()) > 1]
    return dataframe[keep_columns].copy()
1 loop, best of 3: 358 ms per loop
def drop_constant_columns_4(dataframe):
    """
    Drops constant value columns of pandas dataframe.
    """
    keep_columns = dataframe.columns[dataframe.nunique() > 1]
    return dataframe.loc[:, keep_columns].copy()
1 loop, best of 3: 1.8 s per loop
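For reference, a sketch of how such a comparison might be reproduced; the setup below (a random 120 x 10000 frame with every 10th column made constant) is an assumption, not the author's exact benchmark:
import timeit
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
test_df = pd.DataFrame(rng.integers(0, 10, size=(120, 10000)))
test_df.iloc[:, ::10] = 1  # make every 10th column constant

# time only the non-mutating variants defined above
for func in (drop_constant_column, drop_constant_columns_3, drop_constant_columns_4):
    print(func.__name__, timeit.timeit(lambda f=func: f(test_df), number=3))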
Assuming that the DataFrame is completely of numeric type, you can try:
>>> df = df.loc[:, df.var() != 0.0]
which will remove the constant (i.e. variance = 0) columns.
If the DataFrame contains both numeric and object columns, then you should try:
>>> enum_df = df.select_dtypes(include=['object'])
>>> num_df = df.select_dtypes(exclude=['object'])
>>> num_df = num_df.loc[:, num_df.var() != 0.0]
>>> df = pd.concat([num_df, enum_df], axis=1)
which will drop constant columns of numeric type only.
If you also want to ignore/delete constant object (enum) columns, you should try:
>>> enum_df = df.select_dtypes(include=['object'])
>>> num_df = df.select_dtypes(exclude=['object'])
>>> enum_df = enum_df.loc[:, [len(np.unique(x, return_counts=True)[-1]) != 1 for x in enum_df.T.values]]
>>> num_df = num_df.loc[:, num_df.var() != 0.0]
>>> df = pd.concat([num_df, enum_df], axis=1)
Here is my solution, since I needed to handle both object and numerical columns. Not claiming it's super efficient or anything, but it gets the job done.
def drop_constants(df):
    """iterate through columns and remove columns with constant values (all same)"""
    columns = df.columns.values
    for col in columns:
        # drop col if unique values is 1
        if df[col].nunique(dropna=False) == 1:
            del df[col]
    return df
Extra caveat: it won't work on columns of lists or arrays, since those are not hashable.
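A quick illustration of that caveat, with a made-up two-column frame: nunique() hashes the values, so a column holding lists raises a TypeError.
import pandas as pd

df = pd.DataFrame({'a': [1, 1], 'b': [[1, 2], [1, 2]]})
df['b'].nunique(dropna=False)  # raises TypeError: unhashable type: 'list'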
Many examples in this thread do not work properly. Check my answer for a collection of examples that work.
Related
How do I set the values of a pandas dataframe slice, where the rows are chosen by a boolean expression and the columns are chosen by position?
I have done it in the following way so far:
>>> vals = [5,7]
>>> df = pd.DataFrame({'a':[1,2,3,4], 'b':[5,5,7,7]})
>>> df
   a  b
0  1  5
1  2  5
2  3  7
3  4  7
>>> df.iloc[:,1][df.iloc[:,1] == vals[0]] = 0
>>> df
   a  b
0  1  0
1  2  0
2  3  7
3  4  7
This works as expected on this small sample, but gives me the following warning on my real life dataframe:
SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
What is the recommended way to achieve this?
Use DataFrame.columns and DataFrame.loc:
col = df.columns[1]
df.loc[df.loc[:,col] == vals[0], col] = 0
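For example, applied to the sample frame from the question:
import pandas as pd

vals = [5, 7]
df = pd.DataFrame({'a': [1, 2, 3, 4], 'b': [5, 5, 7, 7]})
col = df.columns[1]
df.loc[df.loc[:, col] == vals[0], col] = 0
print(df)
#    a  b
# 0  1  0
# 1  2  0
# 2  3  7
# 3  4  7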
One way is to use the column header's index together with loc (label-based indexing):
df.loc[df.iloc[:, 1] == vals[0], df.columns[1]] = 0
Another way is to use np.where with iloc (integer-position indexing); np.where returns a tuple of the index positions where the condition is True:
df.iloc[np.where(df.iloc[:, 1] == vals[0])[0], 1] = 0
I believe this can also be done with a combination of loc and iloc, though note that it still relies on chained indexing, so the assignment may land on a copy rather than the original DataFrame:
df.loc[df.iloc[:,1] == vals[0]].iloc[:, 1] = 0
I am trying to create a function that uses df.iterrows() and Series.nlargest. I want to iterate over each row and find the largest number and then mark it as a 1. This is the data frame:
A B C
9 6 5
3 7 2
Here is the output I wish to have:
A B C
1 0 0
0 1 0
This is the function I wish to use here:
def get_top_n(df, top_n):
    """
    Parameters
    ----------
    df : DataFrame
    top_n : int
        The top number to get

    Returns
    -------
    top_numbers : DataFrame
        Returns the top number marked with a 1
    """
    # Implement Function
    for row in df.iterrows():
        top_numbers = row.nlargest(top_n).sum()
    return top_numbers
I get the following error:
AttributeError: 'tuple' object has no attribute 'nlargest'
Help would be appreciated on how to rewrite my function in a neater way so that it actually works! Thanks in advance.
Add an i variable, because iterrows returns the index together with a Series for each row:
for i, row in df.iterrows():
    top_numbers = row.nlargest(top_n).sum()
General solution with numpy.argsort for the positions in descending order, then compare against top_n and convert the boolean array to integers:
def get_top_n(df, top_n):
    if top_n > len(df.columns):
        raise ValueError("Value is higher than the number of columns")
    elif not isinstance(top_n, int):
        raise ValueError("Value is not an integer")
    else:
        # double argsort gives each value's rank; rank < top_n marks the top_n largest
        arr = ((-df.values).argsort(axis=1).argsort(axis=1) < top_n).astype(int)
        df1 = pd.DataFrame(arr, index=df.index, columns=df.columns)
        return df1
df1 = get_top_n(df, 2)
print (df1)
   A  B  C
0  1  1  0
1  1  1  0
df1 = get_top_n(df, 1)
print (df1)
   A  B  C
0  1  0  0
1  0  1  0
EDIT:
A solution with iterrows is possible, but not recommended because it is slow:
top_n = 2
for i, row in df.iterrows():
    top = row.nlargest(top_n).index
    df.loc[i] = 0
    df.loc[i, top] = 1
print (df)
   A  B  C
0  1  1  0
1  1  1  0
For context, the dataframe consists of stock return data for the S&P500 over approximately 4 years
def get_top_n(prev_returns, top_n):
    # generate dataframe populated with zeros for merging
    top_stocks = pd.DataFrame(0, columns=prev_returns.columns, index=prev_returns.index)
    # find top_n largest entries by row
    df = prev_returns.apply(lambda x: x.nlargest(top_n), axis=1)
    # merge dataframes
    top_stocks = top_stocks.merge(df, how='right').set_index(df.index)
    # return dataframe replacing non-zero answers with a 1
    return (top_stocks.notnull()) * 1
Alternatively, the 2-line solution could be
def get_top_n(df, top_n):
    # find top_n largest entries by stock
    df = df.apply(lambda x: x.nlargest(top_n), axis=1)
    # convert DataFrame NaN/float entries to True and False, then to 0 and 1
    top_numbers = df.notnull().astype(int)
    return top_numbers
please consider the following DataFrame df:
timestamp id condition
1234 A
2323 B
3843 B
1234 C
8574 A
9483 A
Based on the condition contained in the column condition, I have to define a new column in this data frame which counts how many ids are in that condition.
However, please note that since the DataFrame is ordered by the timestamp column, one could have multiple entries of the same id and then a simple .cumsum() is not a viable option.
I have come out with the following code, which is working properly but is extremely slow:
# I start by defining empty arrays
ids_with_condition_a = np.empty(0)
ids_with_condition_b = np.empty(0)
ids_with_condition_c = np.empty(0)

# Initializing the new column
df['count'] = 0

# Using a for loop to do the task, but this is sooo slow!
for r in range(0, df.shape[0]):
    if df.condition[r] == 'A':
        ids_with_condition_a = np.append(ids_with_condition_a, df.id[r])
    elif df.condition[r] == 'B':
        ids_with_condition_b = np.append(ids_with_condition_b, df.id[r])
        ids_with_condition_a = np.setdiff1d(ids_with_condition_a, ids_with_condition_b)
    elif df.condition[r] == 'C':
        ids_with_condition_c = np.append(ids_with_condition_c, df.id[r])
    df.loc[r, 'count'] = ids_with_condition_a.size
Keeping these NumPy arrays is very useful to me because they give the list of the ids in a particular condition. I would also like to be able to dynamically put these arrays into a corresponding cell of the df DataFrame.
Are you able to come up with a better solution in terms of performance?
You need to use groupby on the column 'condition' and cumcount to count how many ids are in each condition up to the current row (which seems to be what your code does):
df['count'] = df.groupby('condition').cumcount()+1 # +1 is to start at 1 not 0
with your input sample, you get:
     id condition  count
0  1234         A      1
1  2323         B      1
2  3843         B      2
3  1234         C      1
4  8574         A      2
5  9483         A      3
which is faster than using a for loop.
And if you just want the rows with condition A, for example, you can use a mask: if you do print(df[df['condition'] == 'A']), you see only the rows where condition equals A. So to get an array:
arr_A = df.loc[df['condition'] == 'A','id'].values
print (arr_A)
array([1234, 8574, 9483])
EDIT: to create the two columns per condition, you can do the following, for example for condition A:
# put 1 in the column where the condition is met
df['nb_cond_A'] = np.where(df['condition'] == 'A', 1, None)
# then use cumsum to increment the number, ffill to fill the same number down
# where the condition is not met, and fillna(0) to fill the other missing values
df['nb_cond_A'] = df['nb_cond_A'].cumsum().ffill().fillna(0).astype(int)
# for the partial list, first create the full array
arr_A = df.loc[df['condition'] == 'A','id'].values
# create the column with apply (other ways might exist, but this is one)
df['partial_arr_A'] = df['nb_cond_A'].apply(lambda x: arr_A[:x])
the output looks like this:
     id condition  nb_condition_A       partial_arr_A  nb_cond_A
0  1234         A               1              [1234]          1
1  2323         B               1              [1234]          1
2  3843         B               1              [1234]          1
3  1234         C               1              [1234]          1
4  8574         A               2        [1234, 8574]          2
5  9483         A               3  [1234, 8574, 9483]          3
Then the same thing for B and C. Maybe a loop like for cond in set(df['condition']) would be practical for generalisation.
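A rough sketch of that generalisation (not from the original answer): it loops over the unique conditions and uses a plain 0/1 indicator with cumsum, which gives the same running count as the 1/None + ffill trick above.
import numpy as np

for cond in df['condition'].unique():
    count_col = 'nb_cond_{}'.format(cond)
    # running count of rows seen so far with this condition
    df[count_col] = np.where(df['condition'] == cond, 1, 0).cumsum()
    # full array of ids for this condition, then the partial list per row
    arr = df.loc[df['condition'] == cond, 'id'].values
    df['partial_arr_{}'.format(cond)] = df[count_col].apply(lambda n, a=arr: list(a[:n]))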
EDIT 2: one idea to do what you explained in the comments, though I'm not sure it improves the performance:
# array of unique conditions
arr_cond = df.condition.unique()
# use apply to create, row-wise, the list of ids for each condition
df[arr_cond] = (df.apply(lambda row: (df.loc[:row.name].drop_duplicates('id','last')
                                        .groupby('condition').id.apply(list)), axis=1)
                  .applymap(lambda x: [] if not isinstance(x, list) else x))
Some explanations: for each row, select the dataframe up to this row (loc[:row.name]) and drop the duplicated 'id's, keeping the last one (drop_duplicates('id','last'); in your example, this means that once we reach row 3, row 0 is dropped, as the id 1234 appears twice). Then the data is grouped by condition (groupby('condition')), and the ids for each condition are put into a list (id.apply(list)). The part starting with applymap fills the missing values with an empty list (you can't use fillna([]); it's not possible).
For the length of the list for each condition, you can do:
for cond in arr_cond:
    df['len_{}'.format(cond)] = df[cond].str.len().fillna(0).astype(int)
The result looks like this:
     id condition             A             B       C  len_A  len_B  len_C
0  1234         A        [1234]            []      []      1      0      0
1  2323         B        [1234]        [2323]      []      1      1      0
2  3843         B        [1234]  [2323, 3843]      []      1      2      0
3  1234         C            []  [2323, 3843]  [1234]      0      2      1
4  8574         A        [8574]  [2323, 3843]  [1234]      1      2      1
5  9483         A  [8574, 9483]  [2323, 3843]  [1234]      2      2      1
I've started using Pandas recently and have been stumbling over this issue for a few days. I have a dataframe with interval information that looks a bit like this:
df = pd.DataFrame({'RangeBegin' : [1,3,5,10,12,42,65],
                   'RangeEnd' : [2,4,7,11,41,54,100],
                   'Var1' : ['A','A','A','B','B','B','A'],
                   'Var2' : ['A','A','B','B','B','B','A']})
   RangeBegin  RangeEnd Var1 Var2
0           1         2    A    A
1           3         4    A    A
2           5         7    A    B
3          10        11    B    B
4          12        41    B    B
5          42        54    B    B
6          65       100    A    A
It is sorted by RangeBegin. The idea is to end up with something like this instead:
   RangeBegin  RangeEnd Var1 Var2
0         1.0       4.0    A    A
2         5.0       7.0    A    B
3        10.0      54.0    B    B
6        65.0     100.0    A    A
Where every "duplicate" (matching Var1 and Var2) row with contiguous ranges is aggregated into a single row. I'm thinking of expanding this algorithm to detect and deal with overlaps, but I'd like to get this working properly first.
You see, I've got a solution working by using iterrows to build a new dataframe row-by-row, but it takes far too long on my real dataset and I'd like to use a more vectorized implementation.
I've looked into groupby but can't find a set of keys (or a function to apply to said groups) that would make this work.
Here's my current implementation as it stands:
def test():
    df = pd.DataFrame({'RangeBegin' : [1,3,5,10,12,42,65],
                       'RangeEnd' : [2,4,7,11,41,54,100],
                       'Var1' : ['A','A','A','B','B','B','A'],
                       'Var2' : ['A','A','B','B','B','B','A']})
    print(df)

    i = 0
    cols = df.columns
    aggData = pd.DataFrame(columns=cols)
    for row in df.iterrows():
        rowIndex, rowData = row
        # if our new dataframe is empty or its last row is not contiguous, append it
        if(aggData.empty or not duplicateContiguousRow(cols, rowData, aggData.loc[i])):
            aggData = aggData.append(rowData)
            i = rowIndex
        # otherwise, modify the last row
        else:
            aggData.loc[i,'RangeEnd'] = rowData['RangeEnd']
    print(aggData)
def duplicateContiguousRow(cols, row, aggDataRow):
    # first bool: are the ranges contiguous?
    contiguousBool = aggDataRow['RangeEnd'] + 1 == row['RangeBegin']
    if(not contiguousBool):
        return False

    # second bool: is this row a duplicate (minus range columns)?
    duplicateBool = True
    for col in cols:
        if(not duplicateBool):
            break
        elif col not in ['RangeBegin','RangeEnd']:
            # NaN != NaN
            duplicateBool = duplicateBool and (row[col] == aggDataRow[col] or (row[col] != row[col] and aggDataRow[col] != aggDataRow[col]))
    return duplicateBool
EDIT: This question just got asked while I was writing this one. The answer looks promising
You can use groupby for this purpose, after first detecting the consecutive segments:
df['block'] = ((df['Var1'].shift(1) != df['Var1']) | (df['Var2'].shift(1) != df['Var2'])).astype(int).cumsum()
df.groupby(['Var1', 'Var2', 'block']).agg({'RangeBegin': np.min, 'RangeEnd': np.max}).reset_index()
will result in:
  Var1 Var2  block  RangeBegin  RangeEnd
0    A    A      1           1         4
1    A    A      4          65       100
2    A    B      2           5         7
3    B    B      3          10        54
You could then sort by block to restore the original order.
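A one-line sketch of that final step, assuming the grouped frame above was stored in a variable named result (a name assumed here, not given in the answer):
result = result.sort_values('block').reset_index(drop=True)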
This is my pandas DataFrame with original column names.
old_dt_cm1_tt old_dm_cm1 old_rr_cm2_epf old_gt
1 3 0 0
2 1 1 5
Firstly I want to extract all unique variations of cm, e.g. in this case cm1 and cm2.
After this I want to create a new column per each unique cm. In this example there should be 2 new columns.
Finally in each new column I should store the total count of non-zero original column values, i.e.
old_dt_cm1_tt old_dm_cm1 old_rr_cm2_epf old_gt cm1 cm2
1 3 0 0 2 0
2 1 1 5 2 1
I implemented the first step as follows:
cols = pd.DataFrame(list(df.columns))
ind = [c for c in df.columns if 'cm' in c]
df.ix[:, ind].columns
How do I proceed with steps 2 and 3, so that the solution is automatic? (I don't want to manually define the column names cm1 and cm2, because the original data set might have many cm variations.)
You can use:
print df
   old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt
0              1           3               0       0
1              2           1               1       5
First you can filter the columns containing the string cm, so columns without cm are removed.
df1 = df.filter(regex='cm')
Now you can rename the columns to the new values like cm1, cm2, cm3.
print [cm for c in df1.columns for cm in c.split('_') if cm[:2] == 'cm']
['cm1', 'cm1', 'cm2']
df1.columns = [cm for c in df1.columns for cm in c.split('_') if cm[:2] == 'cm']
print df1
   cm1  cm1  cm2
0    1    3    0
1    2    1    1
Now you can count the non-zero values: change df1 to a boolean DataFrame and sum it; True is converted to 1 and False to 0. You need the counts per unique column name, so group by the columns and sum the values.
df1 = df1.astype(bool)
print df1
    cm1   cm1    cm2
0  True  True  False
1  True  True   True
print df1.groupby(df1.columns, axis=1).sum()
   cm1  cm2
0    2    0
1    2    1
You need the unique column names, which are then added to the original df:
print df1.columns.unique()
['cm1' 'cm2']
Lastly, you can add the new columns df[['cm1','cm2']] from the groupby result:
df[df1.columns.unique()] = df1.groupby(df1.columns, axis=1).sum()
print df
   old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt  cm1  cm2
0              1           3               0       0    2    0
1              2           1               1       5    2    1
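As a side note (not part of the original answer): newer pandas versions deprecate groupby(..., axis=1), so a roughly equivalent sketch transposes, groups the repeated labels, and transposes back:
# roughly equivalent to df1.groupby(df1.columns, axis=1).sum() on newer pandas
counts = df1.T.groupby(level=0).sum().T
df[counts.columns] = counts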
Once you know which columns have cm in them you can map them (with a dict) to the desired new column with an adapted version of this answer:
col_map = {c:'cm'+c[c.index('cm') + len('cm')] for c in ind}
# ^ if you are hard coding this in you might as well use 2
so that instead of the whole string after cm, each column maps to cm plus the single character directly following it; in this case that would be:
{'old_dm_cm1': 'cm1', 'old_dt_cm1_tt': 'cm1', 'old_rr_cm2_epf': 'cm2'}
Then add the new columns to the DataFrame by iterating over the dict:
for col, new_col in col_map.items():
    if new_col not in df:
        df[new_col] = [int(a != 0) for a in df[col]]
    else:
        df[new_col] += [int(a != 0) for a in df[col]]
Note that int(a != 0) will simply give 0 if the value is 0 and 1 otherwise. The only issue is that dicts were unordered in older Python versions (before 3.7), so it may be preferable to add the new columns in order according to the values (like the answer here):
import operator
for col, new_col in sorted(col_map.items(), key=operator.itemgetter(1)):
    if new_col in df:
        df[new_col] += [int(a != 0) for a in df[col]]
    else:
        df[new_col] = [int(a != 0) for a in df[col]]
to ensure the new columns are inserted in order.
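For completeness, a quick end-to-end check of this loop on the sample frame (a sketch that reconstructs the frame from the question; the expected result is shown in comments):
import operator
import pandas as pd

df = pd.DataFrame({'old_dt_cm1_tt': [1, 2], 'old_dm_cm1': [3, 1],
                   'old_rr_cm2_epf': [0, 1], 'old_gt': [0, 5]})
ind = [c for c in df.columns if 'cm' in c]
col_map = {c: 'cm' + c[c.index('cm') + len('cm')] for c in ind}
for col, new_col in sorted(col_map.items(), key=operator.itemgetter(1)):
    if new_col in df:
        df[new_col] += [int(a != 0) for a in df[col]]
    else:
        df[new_col] = [int(a != 0) for a in df[col]]
print(df)
#    old_dt_cm1_tt  old_dm_cm1  old_rr_cm2_epf  old_gt  cm1  cm2
# 0              1           3               0       0    2    0
# 1              2           1               1       5    2    1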