Pandas DataFrame: does the sample() function reset indexes? - python

Please consider a pandas DataFrame final_df with 142457 rows, correctly indexed:
0
1
2
3
4
...
142452
142453
142454
142455
142456
I create / sample a new df data_test_for_all_models from this one:
data_test_for_all_models = final_df.copy().sample(frac=0.1, random_state=786)
A few indexes:
2235
118727
23291
Now I drop rows from final_df with indexes in data_test_for_all_models :
final_df = final_df.drop(data_test_for_all_models.index)
If I check a few indexes present in final_df :
final_df.iloc[2235]
wrongly returns a row.
I think it's a problem of reset indexes, but which function resets them: drop() or sample()?
Thanks.

You are using .iloc, which provides integer-based (positional) indexing. So you are getting the row at position 2235, not the row with index label 2235.
For that, you should use .loc:
final_df.loc[2235]
And you should get a KeyError.
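A minimal sketch of the difference on a toy frame (the data here is made up for illustration):
import pandas as pd

# drop() keeps the original labels, so positions and labels no longer coincide
final_df = pd.DataFrame({'x': range(10)})   # index labels 0..9
final_df = final_df.drop([2, 5])            # labels 0,1,3,4,6,7,8,9 remain

print(final_df.iloc[2])   # positional: the third remaining row (label 3)
print(final_df.loc[3])    # label-based: the row whose index label is 3
# final_df.loc[2] raises a KeyError, because label 2 was dropped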

Related

Drop first row when index doesn't start at 0

I want to drop the first row of a dataframe subset which is a subset of the main dataframe main. The first row of the dataframe has index = 31, so when I try dropping the first row I get the following error:
>>> subset.drop(0, axis=1)
KeyError: '[0] not found in axis'
I want to perform this drop on multiple dataframes, so I cannot drop index 31 on every dataframe. Is it possible to drop the first row when the index isn't equal to 0?
Simplest is to select all rows except the first by position:
df = df.iloc[1:]
Or with drop it is possible to select the first index value, but if there are duplicated index values, then all matching rows are removed (see the sketch after this answer):
df = df.drop(df.index[0])
Your solution tries to remove the column 0 (axis=1 refers to columns):
subset.drop(0, axis=1)
If you only want to drop the first row when the index isn't equal to 0:
df = df if df.index[0] == 0 else df.iloc[1:]
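A short sketch, on made-up data, of why drop(df.index[0]) is risky when index labels repeat:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]}, index=[31, 31, 32])

# drop() removes every row whose label matches df.index[0],
# so both rows labelled 31 disappear
print(df.drop(df.index[0]))
#     a
# 32  3

# iloc works purely by position, so exactly one row is removed
print(df.iloc[1:])
#     a
# 31  2
# 32  3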

How to iterate over a list of dataframes in pandas?

I have multiple dataframes, on which I want to run this function which mainly drops unnecessary columns from the dataframe and returns a dataframe:
def dropunnamednancols(df):
    """
    Drop any columns starting with 'Unnamed' and NaN columns
    Args:
        df ([dataframe]): dataframe of which columns to be dropped
    """
    # first drop NaN columns
    df = df.loc[:, df.columns.notnull()]
    # then search for columns starting with 'Unnamed'
    df = df.loc[:, ~df.columns.str.contains('^Unnamed')]
    return df
Now I iterate over the list of dataframes: [df1, df2, df3]
dfsublist = [df1, df2, df3]
for index in enumerate(dfsublist):
    dfsublist[index] = dropunnamednancols(dfsublist[index])
Whereas the items of dfsublist have been changed, the original dataframes df1, df2, df3 still retain the unnecessary columns. How could I achieve this?
If I understand correctly, you want to apply a function to multiple dataframes separately.
The underlying issue is that your function returns a new dataframe, and you replace the stored dataframe in the list with the new one instead of modifying the old original one.
If you want to modify the original one you have to use the inplace=True parameter of the pandas functions. This is possible, but not recommended, as seen here.
Your code could therefore look like this:
def dropunnamednancols(df):
    """
    Drop any columns starting with 'Unnamed' and NaN columns, in place
    Args:
        df ([dataframe]): dataframe of which columns to be dropped
    """
    # `or` short-circuits, so startswith is never called on a None column name
    cols = [col for col in df.columns if (col is None) or col.startswith('Unnamed')]
    df.drop(cols, axis=1, inplace=True)
As an example on sample data:
import pandas as pd
df_1 = pd.DataFrame({'a':[0,1,2,3], 'Unnamed':[9,8,7,6]})
df_2 = pd.DataFrame({'Unnamed':[9,8,7,6], 'b':[0,1,2,3]})
lst_dfs = [df_1, df_2]
[dropunnamednancols(df) for df in lst_dfs]
# df_1
# Out[55]:
# a
# 0 0
# 1 1
# 2 2
# 3 3
# df_2
# Out[56]:
# b
# 0 0
# 1 1
# 2 2
# 3 3
The reason is probably that you are using enumerate wrong. In your case, you just want the index, so what you should do is:
for index in range(len(dfsublist)):
    ...
enumerate returns a tuple of an index and the actual value in your list. So in your code, the loop variable index will actually be assigned:
(0, df1) # First iteration
(1, df2) # Second iteration
(2, df3) # Third iteration
So either you use enumerate correctly and unpack the tuple:
for index, df in enumerate(dfsublist):
    ...
or you get rid of it altogether because you access the values with the index either way.
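Putting that together, a minimal sketch using the dropunnamednancols from the question (the version that returns a new DataFrame):
# unpack the (index, value) pairs and write the result back into the list slot
for index, df in enumerate(dfsublist):
    dfsublist[index] = dropunnamednancols(df)
Note that this rebinds the list slots only; the names df1, df2 and df3 still point at the original, unmodified DataFrames, as the first answer explains.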

range(1:len(df)) assigns NaN to last rows in dataframe

I have this weird problem with my code . I am trying to generate Auto Id to my dataframe with this code
df['id'] = pd.Series(range(1,(len(df)+1))).astype(str).apply('{:0>8}'.format)
now, len(df) is equals to 799734
but df['id'] is NaN after row 77998
I tried to print the values using:
[print(i) for i in range(1,(len(df)+1))]
On the first attempt it printed None after 77998 values. On the second attempt it printed all values to the end normally, but the dataframe still has NaN in the last rows.
Maybe it has something to do with memory? I am not getting any hint. Please help me solve this issue.
Missing values mean the Series and the DataFrame have different index values; for this to work correctly they need to be the same.
So you need to pass df.index to the Series constructor:
df['id'] = pd.Series(range(1,(len(df)+1)), index=df.index).astype(str).apply('{:0>8}'.format)
Or a 2-row solution, assigning the range first:
df['id'] = range(1,(len(df)+1))
df['id'] = df['id'].astype(str).apply('{:0>8}'.format)
Or create default index values in the DataFrame so they match the Series:
df = df.reset_index(drop=True)
df['id'] = pd.Series(range(1,(len(df)+1))).astype(str).apply('{:0>8}'.format)
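A minimal sketch of the alignment behaviour, with a toy non-default index standing in for your df:
import pandas as pd

df = pd.DataFrame({'x': [10, 20, 30]}, index=[100, 101, 102])

# the new Series gets the default index 0..2, which shares no labels with
# df's index, so assignment aligns on labels and every row becomes NaN
df['id'] = pd.Series(range(1, len(df) + 1))
print(df['id'].isna().all())   # True

# passing df.index makes the labels match, so the values line up
df['id'] = pd.Series(range(1, len(df) + 1), index=df.index)
print(df['id'].tolist())       # [1, 2, 3]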

Filtering a list of pandas dataframes to only include row indexes ending in a

Hi, I have a list of pd dataframes (1377 of them). I need to split each dataframe into the cases where the row index ends in 'a' and where the row index ends in 'c'.
I have looked at other stack overflow pages where this is suggested
(df.iloc[all_dfs[0].index.str.endswith('a',na=False)])
however this transposes my dataframe and then reduces the number of rows (previously columns before transposing)
You can pass a tuple of test values to str.endswith with boolean indexing for filtering:
df = pd.DataFrame({'a':range(5)},
                  index=['_E031a','_E031b','_E031c','_E032a','_E032b'])
df1 = df[df.index.str.endswith(('a', 'c'),na=False)]
print (df1)
a
_E031a 0
_E031c 2
_E032a 3
Or get the last character of the strings by indexing with [-1] and test membership with Index.isin:
df1 = df[df.index.str[-1].isin(['a', 'c'])]
print (df1)
a
_E031a 0
_E031c 2
_E032a 3
For looping over a list of DataFrames use:
all_dfs = [df[df.index.str.endswith(('a', 'c'),na=False)] for df in all_dfs]
If you want to test only 'a':
all_dfs = [df[df.index.str.endswith('a',na=False)] for df in all_dfs]

Iterate to find the repeat values in Pandas dataframe

Windows 10, Python 3.6
I have a dataframe df
df=pd.DataFrame({'name':['boo', 'foo', 'too', 'boo', 'roo', 'too'],
'zip':['30004', '02895', '02895', '30750', '02895', '02895']})
I want to find the repeated records that have the same 'name' and 'zip', and record the number of repeats. The ideal output is
name repeat zip
0 too 1 02895
Because my dataframe has many more than six rows, I need an iterative method. I appreciate any tips.
I believe you need to groupby all columns and use GroupBy.size:
#create DataFrame from online source
#df = pd.read_csv('someonline.csv')
#df = pd.read_html('someurl')[0]
#or build it in a loop, appending data to a list
#L = []
#for x in iterator:
#    L.append(x)
#then create the DataFrame from the constructor
#df = pd.DataFrame(L)
df = df.groupby(df.columns.tolist()).size().reset_index(name='repeat')
#if need specify columns
#df = df.groupby(['name','zip']).size().reset_index(name='repeat')
print (df)
name zip repeat
0 boo 30004 1
1 boo 30750 1
2 foo 02895 1
3 roo 02895 1
4 too 02895 2
Pandas has a handy .duplicated() method that can help you identify duplicates.
df.duplicated()
By passing the boolean vector into a selection you can get the duplicated records:
df[df.duplicated()]
You can get the count of duplicated records by using .sum():
df.duplicated().sum()
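A quick sketch of both calls on the sample data from the question:
import pandas as pd

df = pd.DataFrame({'name': ['boo', 'foo', 'too', 'boo', 'roo', 'too'],
                   'zip': ['30004', '02895', '02895', '30750', '02895', '02895']})

# duplicated() marks every occurrence after the first as True
print(df[df.duplicated()])
#   name    zip
# 5  too  02895

print(df.duplicated().sum())   # 1 repeated row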
