Related
I have a dataframe df :
>>> df
sales discount net_sales cogs
STK_ID RPT_Date
600141 20060331 2.709 NaN 2.709 2.245
20060630 6.590 NaN 6.590 5.291
20060930 10.103 NaN 10.103 7.981
20061231 15.915 NaN 15.915 12.686
20070331 3.196 NaN 3.196 2.710
20070630 7.907 NaN 7.907 6.459
Then I want to drop rows with certain sequence numbers which indicated in a list, suppose here is [1,2,4], then left:
sales discount net_sales cogs
STK_ID RPT_Date
600141 20060331 2.709 NaN 2.709 2.245
20061231 15.915 NaN 15.915 12.686
20070630 7.907 NaN 7.907 6.459
How or what function can do that ?
Use DataFrame.drop and pass it a Series of index labels:
In [65]: df
Out[65]:
one two
one 1 4
two 2 3
three 3 2
four 4 1
In [66]: df.drop(index=[1,3])
Out[66]:
one two
one 1 4
three 3 2
Note that it may be important to use the "inplace" command when you want to do the drop in line.
df.drop(df.index[[1,3]], inplace=True)
Because your original question is not returning anything, this command should be used.
http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.drop.html
If the DataFrame is huge, and the number of rows to drop is large as well, then simple drop by index df.drop(df.index[]) takes too much time.
In my case, I have a multi-indexed DataFrame of floats with 100M rows x 3 cols, and I need to remove 10k rows from it. The fastest method I found is, quite counterintuitively, to take the remaining rows.
Let indexes_to_drop be an array of positional indexes to drop ([1, 2, 4] in the question).
indexes_to_keep = set(range(df.shape[0])) - set(indexes_to_drop)
df_sliced = df.take(list(indexes_to_keep))
In my case this took 20.5s, while the simple df.drop took 5min 27s and consumed a lot of memory. The resulting DataFrame is the same.
I solved this in a simpler way - just in 2 steps.
Make a dataframe with unwanted rows/data.
Use the index of this unwanted dataframe to drop the rows from the original dataframe.
Example:
Suppose you have a dataframe df which as many columns including 'Age' which is an integer. Now let's say you want to drop all the rows with 'Age' as negative number.
df_age_negative = df[ df['Age'] < 0 ] # Step 1
df = df.drop(df_age_negative.index, axis=0) # Step 2
Hope this is much simpler and helps you.
You can also pass to DataFrame.drop the label itself (instead of Series of index labels):
In[17]: df
Out[17]:
a b c d e
one 0.456558 -2.536432 0.216279 -1.305855 -0.121635
two -1.015127 -0.445133 1.867681 2.179392 0.518801
In[18]: df.drop('one')
Out[18]:
a b c d e
two -1.015127 -0.445133 1.867681 2.179392 0.518801
Which is equivalent to:
In[19]: df.drop(df.index[[0]])
Out[19]:
a b c d e
two -1.015127 -0.445133 1.867681 2.179392 0.518801
If I want to drop a row which has let's say index x, I would do the following:
df = df[df.index != x]
If I would want to drop multiple indices (say these indices are in the list unwanted_indices), I would do:
desired_indices = [i for i in len(df.index) if i not in unwanted_indices]
desired_df = df.iloc[desired_indices]
Here is a bit specific example, I would like to show. Say you have many duplicate entries in some of your rows. If you have string entries you could easily use string methods to find all indexes to drop.
ind_drop = df[df['column_of_strings'].apply(lambda x: x.startswith('Keyword'))].index
And now to drop those rows using their indexes
new_df = df.drop(ind_drop)
Use only the Index arg to drop row:-
df.drop(index = 2, inplace = True)
For multiple rows:-
df.drop(index=[1,3], inplace = True)
In a comment to #theodros-zelleke's answer, #j-jones asked about what to do if the index is not unique. I had to deal with such a situation. What I did was to rename the duplicates in the index before I called drop(), a la:
dropped_indexes = <determine-indexes-to-drop>
df.index = rename_duplicates(df.index)
df.drop(df.index[dropped_indexes], inplace=True)
where rename_duplicates() is a function I defined that went through the elements of index and renamed the duplicates. I used the same renaming pattern as pd.read_csv() uses on columns, i.e., "%s.%d" % (name, count), where name is the name of the row and count is how many times it has occurred previously.
Determining the index from the boolean as described above e.g.
df[df['column'].isin(values)].index
can be more memory intensive than determining the index using this method
pd.Index(np.where(df['column'].isin(values))[0])
applied like so
df.drop(pd.Index(np.where(df['column'].isin(values))[0]), inplace = True)
This method is useful when dealing with large dataframes and limited memory.
To drop rows with indices 1, 2, 4 you can use:
df[~df.index.isin([1, 2, 4])]
The tilde operator ~ negates the result of the method isin. Another option is to drop indices:
df.loc[df.index.drop([1, 2, 4])]
Look at the following dataframe df
df
column1 column2 column3
0 1 11 21
1 2 12 22
2 3 13 23
3 4 14 24
4 5 15 25
5 6 16 26
6 7 17 27
7 8 18 28
8 9 19 29
9 10 20 30
Lets drop all the rows which has an odd number in column1
Create a list of all the elements in column1 and keep only those elements that are even numbers (the elements that you dont want to drop)
keep_elements = [x for x in df.column1 if x%2==0]
All the rows with the values [2, 4, 6, 8, 10] in its column1 will be retained or not dropped.
df.set_index('column1',inplace = True)
df.drop(df.index.difference(keep_elements),axis=0,inplace=True)
df.reset_index(inplace=True)
We make the column1 as index and drop all the rows that are not required. Then we reset the index back.
df
column1 column2 column3
0 2 12 22
1 4 14 24
2 6 16 26
3 8 18 28
4 10 20 30
As Dennis Golomazov's answer suggests, using drop to drop rows. You can select to keep rows instead. Let's say you have a list of row indices to drop called indices_to_drop. You can convert it to a mask as follows:
mask = np.ones(len(df), bool)
mask[indices_to_drop] = False
You can use this index directly:
df_new = df.iloc[mask]
The nice thing about this method is that mask can come from any source: it can be a condition involving many columns, or something else.
The really nice thing is, you really don't need the index of the original DataFrame at all, so it doesn't matter if the index is unique or not.
The disadvantage is of course that you can't do the drop in-place with this method.
Consider an example dataframe
df =
index column1
0 00
1 10
2 20
3 30
we want to drop 2nd and 3rd index rows.
Approach 1:
df = df.drop(df.index[2,3])
or
df.drop(df.index[2,3],inplace=True)
print(df)
df =
index column1
0 00
3 30
#This approach removes the rows as we wanted but the index remains unordered
Approach 2
df.drop(df.index[2,3],inplace=True,ignore_index=True)
print(df)
df =
index column1
0 00
1 30
#This approach removes the rows as we wanted and resets the index.
This worked for me
# Create a list containing the index numbers you want to remove
index_list = list(range(42766, 42798))
df.drop(df.index[index_list], inplace =True)
df.shape
This should drop all indexes within that created range
I have two dataframes with identical structures df and df_a. df_a is a subset of df that I need to reintegrate into df. Essentially, df_a has various rows (with varying indices) from df that have been manipulated.
Below is an example of indices of each df and df_a. These both have the same column structure so all the columns are the same, it's only the rows and idex of the rows that differ.
>> df
index .. other_columns ..
0
1
2
3
. .
9999
10000
10001
[10001 rows x 20 columns]
>> df_a
index .. other_columns ..
5
12
105
712
. .
9824
9901
9997
[782 rows x 20 columns]
So, I want to overwrite only the rows in df that have the indices of df_a with the corresponding rows in df_a. I checked out Replace rows in a Pandas df with rows from another df and replace rows in a pandas data frame but neither of those tell how to use the indices of another dataframe to replace the values in the rows.
Something along the lines of:
df.loc[df_a.index, :] = df_a[:]
I don't know if this wants you meant, for that you would need to be more specific, but if the first data frame was modified to be a new data frame with different indexes, then you can use this code to reset back the indexes:
import pandas as pd
df_a = pd.DataFrame({'a':[1,2,3,4],'b':[5,4,2,7]}, index=[2,55,62,74])
df_a.reset_index(inplace=True, drop=True)
print(df_a)
PRINTS:
a b
0 1 5
1 2 4
2 3 2
3 4 7
I want to calculate some values between type1 and type2 elements. For example if have index like: (a,b) and (b,a) then it will equals to (a,b)+(b,a) I want to sum values if reverse indexes exists.
One way using frozenset:
df = df.reset_index()
df['total'].groupby(df[['type1', 'type2']].apply(frozenset, 1)).sum()
Output:
(b, a) 15
(c, a) 19
Name: total, dtype: int64
Let df be your DataFrame. Swap the first and second levels of the MultiIndex, concatenate the original and the new DataFrames, and calculate row sums:
pd.concat([df, df.swaplevel()], axis=1).sum(1)
#a b 15
# c 19
#b a 15
#c a 19
The solution works even for the rows that do not have matching reversed rows. The answer has duplicated rows for the direct and the reversed index. You will have to filter out the unwanted rows.
For example, I have a dataframe called dat, then I want to apply a function on each column of the dataframe, if the return value is Ture, then keep this column and turn to next column, if the return value is False, then drop this column and turn to next column.
I know I can write a for loop to do this, but is there a efficient way to do this?
You could do it like this using boolean index on df.columns:
I want to drop all columns where the 'sum' for simplicity is greater than 50
df = pd.DataFrame({'A':[2,4,6,8],'B':[101,102,102,102]})
r = df.apply(np.sum) # applies the sum function to all columns
c = r <= 50 #create boolean test for columns
df[c[c].index] #Use boolea indexing to get columns and column filter for dataframe
Output:
A
0 2
1 4
2 6
3 8
Updating an old answer:
df.loc[:, df.sum() <= 50]
I have a dataframe df :
>>> df
sales discount net_sales cogs
STK_ID RPT_Date
600141 20060331 2.709 NaN 2.709 2.245
20060630 6.590 NaN 6.590 5.291
20060930 10.103 NaN 10.103 7.981
20061231 15.915 NaN 15.915 12.686
20070331 3.196 NaN 3.196 2.710
20070630 7.907 NaN 7.907 6.459
Then I want to drop rows with certain sequence numbers which indicated in a list, suppose here is [1,2,4], then left:
sales discount net_sales cogs
STK_ID RPT_Date
600141 20060331 2.709 NaN 2.709 2.245
20061231 15.915 NaN 15.915 12.686
20070630 7.907 NaN 7.907 6.459
How or what function can do that ?
Use DataFrame.drop and pass it a Series of index labels:
In [65]: df
Out[65]:
one two
one 1 4
two 2 3
three 3 2
four 4 1
In [66]: df.drop(index=[1,3])
Out[66]:
one two
one 1 4
three 3 2
Note that it may be important to use the "inplace" command when you want to do the drop in line.
df.drop(df.index[[1,3]], inplace=True)
Because your original question is not returning anything, this command should be used.
http://pandas.pydata.org/pandas-docs/version/0.17.0/generated/pandas.DataFrame.drop.html
If the DataFrame is huge, and the number of rows to drop is large as well, then simple drop by index df.drop(df.index[]) takes too much time.
In my case, I have a multi-indexed DataFrame of floats with 100M rows x 3 cols, and I need to remove 10k rows from it. The fastest method I found is, quite counterintuitively, to take the remaining rows.
Let indexes_to_drop be an array of positional indexes to drop ([1, 2, 4] in the question).
indexes_to_keep = set(range(df.shape[0])) - set(indexes_to_drop)
df_sliced = df.take(list(indexes_to_keep))
In my case this took 20.5s, while the simple df.drop took 5min 27s and consumed a lot of memory. The resulting DataFrame is the same.
I solved this in a simpler way - just in 2 steps.
Make a dataframe with unwanted rows/data.
Use the index of this unwanted dataframe to drop the rows from the original dataframe.
Example:
Suppose you have a dataframe df which as many columns including 'Age' which is an integer. Now let's say you want to drop all the rows with 'Age' as negative number.
df_age_negative = df[ df['Age'] < 0 ] # Step 1
df = df.drop(df_age_negative.index, axis=0) # Step 2
Hope this is much simpler and helps you.
You can also pass to DataFrame.drop the label itself (instead of Series of index labels):
In[17]: df
Out[17]:
a b c d e
one 0.456558 -2.536432 0.216279 -1.305855 -0.121635
two -1.015127 -0.445133 1.867681 2.179392 0.518801
In[18]: df.drop('one')
Out[18]:
a b c d e
two -1.015127 -0.445133 1.867681 2.179392 0.518801
Which is equivalent to:
In[19]: df.drop(df.index[[0]])
Out[19]:
a b c d e
two -1.015127 -0.445133 1.867681 2.179392 0.518801
If I want to drop a row which has let's say index x, I would do the following:
df = df[df.index != x]
If I would want to drop multiple indices (say these indices are in the list unwanted_indices), I would do:
desired_indices = [i for i in len(df.index) if i not in unwanted_indices]
desired_df = df.iloc[desired_indices]
Here is a bit specific example, I would like to show. Say you have many duplicate entries in some of your rows. If you have string entries you could easily use string methods to find all indexes to drop.
ind_drop = df[df['column_of_strings'].apply(lambda x: x.startswith('Keyword'))].index
And now to drop those rows using their indexes
new_df = df.drop(ind_drop)
Use only the Index arg to drop row:-
df.drop(index = 2, inplace = True)
For multiple rows:-
df.drop(index=[1,3], inplace = True)
In a comment to #theodros-zelleke's answer, #j-jones asked about what to do if the index is not unique. I had to deal with such a situation. What I did was to rename the duplicates in the index before I called drop(), a la:
dropped_indexes = <determine-indexes-to-drop>
df.index = rename_duplicates(df.index)
df.drop(df.index[dropped_indexes], inplace=True)
where rename_duplicates() is a function I defined that went through the elements of index and renamed the duplicates. I used the same renaming pattern as pd.read_csv() uses on columns, i.e., "%s.%d" % (name, count), where name is the name of the row and count is how many times it has occurred previously.
Determining the index from the boolean as described above e.g.
df[df['column'].isin(values)].index
can be more memory intensive than determining the index using this method
pd.Index(np.where(df['column'].isin(values))[0])
applied like so
df.drop(pd.Index(np.where(df['column'].isin(values))[0]), inplace = True)
This method is useful when dealing with large dataframes and limited memory.
To drop rows with indices 1, 2, 4 you can use:
df[~df.index.isin([1, 2, 4])]
The tilde operator ~ negates the result of the method isin. Another option is to drop indices:
df.loc[df.index.drop([1, 2, 4])]
Look at the following dataframe df
df
column1 column2 column3
0 1 11 21
1 2 12 22
2 3 13 23
3 4 14 24
4 5 15 25
5 6 16 26
6 7 17 27
7 8 18 28
8 9 19 29
9 10 20 30
Lets drop all the rows which has an odd number in column1
Create a list of all the elements in column1 and keep only those elements that are even numbers (the elements that you dont want to drop)
keep_elements = [x for x in df.column1 if x%2==0]
All the rows with the values [2, 4, 6, 8, 10] in its column1 will be retained or not dropped.
df.set_index('column1',inplace = True)
df.drop(df.index.difference(keep_elements),axis=0,inplace=True)
df.reset_index(inplace=True)
We make the column1 as index and drop all the rows that are not required. Then we reset the index back.
df
column1 column2 column3
0 2 12 22
1 4 14 24
2 6 16 26
3 8 18 28
4 10 20 30
As Dennis Golomazov's answer suggests, using drop to drop rows. You can select to keep rows instead. Let's say you have a list of row indices to drop called indices_to_drop. You can convert it to a mask as follows:
mask = np.ones(len(df), bool)
mask[indices_to_drop] = False
You can use this index directly:
df_new = df.iloc[mask]
The nice thing about this method is that mask can come from any source: it can be a condition involving many columns, or something else.
The really nice thing is, you really don't need the index of the original DataFrame at all, so it doesn't matter if the index is unique or not.
The disadvantage is of course that you can't do the drop in-place with this method.
Consider an example dataframe
df =
index column1
0 00
1 10
2 20
3 30
we want to drop 2nd and 3rd index rows.
Approach 1:
df = df.drop(df.index[2,3])
or
df.drop(df.index[2,3],inplace=True)
print(df)
df =
index column1
0 00
3 30
#This approach removes the rows as we wanted but the index remains unordered
Approach 2
df.drop(df.index[2,3],inplace=True,ignore_index=True)
print(df)
df =
index column1
0 00
1 30
#This approach removes the rows as we wanted and resets the index.
This worked for me
# Create a list containing the index numbers you want to remove
index_list = list(range(42766, 42798))
df.drop(df.index[index_list], inplace =True)
df.shape
This should drop all indexes within that created range