I am getting a ValueError: cannot reindex from a duplicate axis when I try to set an index to a certain value. I tried to reproduce this with a simple example, but I could not do it.
Here is my session inside an ipdb trace. I have a DataFrame with a string index, integer columns, and float values. However, when I try to create a sums row for the sum of all columns, I get the ValueError: cannot reindex from a duplicate axis error. I created a small DataFrame with the same characteristics, but was not able to reproduce the problem. What could I be missing?
I don't really understand what ValueError: cannot reindex from a duplicate axis means. What does this error message mean? Maybe this will help me diagnose the problem, and this is the most answerable part of my question.
ipdb> type(affinity_matrix)
<class 'pandas.core.frame.DataFrame'>
ipdb> affinity_matrix.shape
(333, 10)
ipdb> affinity_matrix.columns
Int64Index([9315684, 9315597, 9316591, 9320520, 9321163, 9320615, 9321187, 9319487, 9319467, 9320484], dtype='int64')
ipdb> affinity_matrix.index
Index([u'001', u'002', u'003', u'004', u'005', u'008', u'009', u'010', u'011', u'014', u'015', u'016', u'018', u'020', u'021', u'022', u'024', u'025', u'026', u'027', u'028', u'029', u'030', u'032', u'033', u'034', u'035', u'036', u'039', u'040', u'041', u'042', u'043', u'044', u'045', u'047', u'047', u'048', u'050', u'053', u'054', u'055', u'056', u'057', u'058', u'059', u'060', u'061', u'062', u'063', u'065', u'067', u'068', u'069', u'070', u'071', u'072', u'073', u'074', u'075', u'076', u'077', u'078', u'080', u'082', u'083', u'084', u'085', u'086', u'089', u'090', u'091', u'092', u'093', u'094', u'095', u'096', u'097', u'098', u'100', u'101', u'103', u'104', u'105', u'106', u'107', u'108', u'109', u'110', u'111', u'112', u'113', u'114', u'115', u'116', u'117', u'118', u'119', u'121', u'122', ...], dtype='object')
ipdb> affinity_matrix.values.dtype
dtype('float64')
ipdb> 'sums' in affinity_matrix.index
False
Here is the error:
ipdb> affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0)
*** ValueError: cannot reindex from a duplicate axis
I tried to reproduce this with a simple example, but I failed:
In [32]: import pandas as pd
In [33]: import numpy as np
In [34]: a = np.arange(35).reshape(5,7)
In [35]: df = pd.DataFrame(a, ['x', 'y', 'u', 'z', 'w'], range(10, 17))
In [36]: df.values.dtype
Out[36]: dtype('int64')
In [37]: df.loc['sums'] = df.sum(axis=0)
In [38]: df
Out[38]:
10 11 12 13 14 15 16
x 0 1 2 3 4 5 6
y 7 8 9 10 11 12 13
u 14 15 16 17 18 19 20
z 21 22 23 24 25 26 27
w 28 29 30 31 32 33 34
sums 70 75 80 85 90 95 100
This error usually arises when you join / assign to a column while the index has duplicate values. Since you are assigning to a row, I suspect that there is a duplicate value in affinity_matrix.columns, perhaps not shown in your question.
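For example, a minimal sketch of the column-assignment case this describes (hypothetical data; the exact message wording varies slightly between pandas versions):
import pandas as pd
df = pd.DataFrame({'a': [1, 2, 3]}, index=['x', 'y', 'z'])
s = pd.Series([10, 20, 30], index=['x', 'x', 'y'])  # 'x' is duplicated
df['b'] = s  # raises ValueError: cannot reindex from a duplicate axis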
As others have said, you've probably got duplicate values in your original index. To find them, do this:
df[df.index.duplicated()]
Indices with duplicate values often arise if you create a DataFrame by concatenating other DataFrames. If you don't care about preserving the values of your index and just want them to be unique, set ignore_index=True when you concatenate the data.
Alternatively, to overwrite your current index with a new one, instead of using df.reindex(), set:
df.index = new_index
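Putting it together, a minimal sketch with hypothetical data; note that the index dump in the question itself shows u'047' twice, which fits this diagnosis:
import pandas as pd
import numpy as np

df = pd.DataFrame(np.arange(6).reshape(3, 2), index=['x', 'x', 'y'], columns=[10, 11])
print(df[df.index.duplicated()])   # reveals the second 'x' row
# df.loc['sums'] = df.sum(axis=0)  # would raise: cannot reindex from a duplicate axis
df = df.reset_index(drop=True)     # one way out: replace the index entirely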
Simple Fix
Run this before grouping:
df = df.reset_index()
Thanks to this GitHub comment for the solution.
For people who are still struggling with this error, it can also happen if you accidentally create two columns with the same name. Remove the duplicate columns like so:
df = df.loc[:,~df.columns.duplicated()]
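A small sketch (hypothetical data) of how the duplicate column sneaks in and how this line removes it:
import pandas as pd

df = pd.DataFrame([[1, 2, 3]], columns=['a', 'b', 'a'])  # 'a' appears twice
df = df.loc[:, ~df.columns.duplicated()]                 # keeps the first 'a', drops the second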
Simply skip the error by using .values at the end, which drops the index so no alignment is attempted:
affinity_matrix.loc['sums'] = affinity_matrix.sum(axis=0).values
I came across this error today when I wanted to add a new column like this:
df_temp['REMARK_TYPE'] = df.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)
I wanted to process the REMARK column of df_temp to return 1 or 0, but I typed the wrong variable, df. It returned an error like this:
----> 1 df_temp['REMARK_TYPE'] = df.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in __setitem__(self, key, value)
2417 else:
2418 # set column
-> 2419 self._set_item(key, value)
2420
2421 def _setitem_slice(self, key, value):
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in _set_item(self, key, value)
2483
2484 self._ensure_valid_index(value)
-> 2485 value = self._sanitize_column(key, value)
2486 NDFrame._set_item(self, key, value)
2487
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in _sanitize_column(self, key, value, broadcast)
2633
2634 if isinstance(value, Series):
-> 2635 value = reindexer(value)
2636
2637 elif isinstance(value, DataFrame):
/usr/lib64/python2.7/site-packages/pandas/core/frame.pyc in reindexer(value)
2625 # duplicate axis
2626 if not value.index.is_unique:
-> 2627 raise e
2628
2629 # other
ValueError: cannot reindex from a duplicate axis
As you can see, the right code should be:
df_temp['REMARK_TYPE'] = df_temp.REMARK.apply(lambda v: 1 if str(v)!='nan' else 0)
Because df and df_temp have a different number of rows, the assignment forced a reindex and returned ValueError: cannot reindex from a duplicate axis.
I hope this helps other people debug their code.
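If it helps, a small defensive sketch (hypothetical data) that catches this kind of mix-up before the assignment:
import numpy as np
import pandas as pd

df_temp = pd.DataFrame({'REMARK': ['ok', np.nan, 'late']})
src = df_temp['REMARK']                 # take the column from the same frame
assert src.index.equals(df_temp.index)  # a df/df_temp mix-up fails here, not deep inside pandas
df_temp['REMARK_TYPE'] = src.apply(lambda v: 1 if str(v) != 'nan' else 0)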
In my case, this error popped up not because of duplicate values, but because I attempted to join a shorter Series to a DataFrame: both had the same index, but the Series had fewer rows (it was missing the top few). The following worked for my purposes:
df.head()
SensA
date
2018-04-03 13:54:47.274 -0.45
2018-04-03 13:55:46.484 -0.42
2018-04-03 13:56:56.235 -0.37
2018-04-03 13:57:57.207 -0.34
2018-04-03 13:59:34.636 -0.33
series.head()
date
2018-04-03 14:09:36.577 62.2
2018-04-03 14:10:28.138 63.5
2018-04-03 14:11:27.400 63.1
2018-04-03 14:12:39.623 62.6
2018-04-03 14:13:27.310 62.5
Name: SensA_rrT, dtype: float64
df = series.to_frame().combine_first(df)
df.head(10)
SensA SensA_rrT
date
2018-04-03 13:54:47.274 -0.45 NaN
2018-04-03 13:55:46.484 -0.42 NaN
2018-04-03 13:56:56.235 -0.37 NaN
2018-04-03 13:57:57.207 -0.34 NaN
2018-04-03 13:59:34.636 -0.33 NaN
2018-04-03 14:00:34.565 -0.33 NaN
2018-04-03 14:01:19.994 -0.37 NaN
2018-04-03 14:02:29.636 -0.34 NaN
2018-04-03 14:03:31.599 -0.32 NaN
2018-04-03 14:04:30.779 -0.33 NaN
2018-04-03 14:05:31.733 -0.35 NaN
2018-04-03 14:06:33.290 -0.38 NaN
2018-04-03 14:07:37.459 -0.39 NaN
2018-04-03 14:08:36.361 -0.36 NaN
2018-04-03 14:09:36.577 -0.37 62.2
I wasted a couple of hours on the same issue. In my case, I had to reset_index() on a dataframe before using the apply function.
Before merging, or doing a lookup from another indexed dataset, you need to reset the index, as one dataset can have only one index.
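A minimal sketch of that workflow (hypothetical data):
import pandas as pd

df = pd.DataFrame({'val': [1, 2, 3]}, index=['x', 'x', 'y'])  # duplicated label
df = df.reset_index(drop=True)  # fresh unique RangeIndex; drop=False keeps the old index as a column
# now merges / lookups / apply won't trip over duplicate labels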
I got this error when I tried adding a column from a different table. Indeed I got duplicate index values along the way. But it turned out I was just doing it wrong: I actually needed to df.join the other table.
This pointer might help someone in a similar situation.
In my case it was caused by a mismatch in dimensions: I accidentally used a column from a different df during the mul operation. This can also be a cause of this error.
It may also happen if you are trying to insert a DataFrame-typed column into a DataFrame. You can try this:
df['my_new'] = pd.Series(my_new.values)
Wrapping the raw values in a fresh Series discards my_new's original index, which is what was causing the clash.
If you get this error after merging two DataFrames, removing the suffixes, and trying to write to Excel:
Your problem is that there are columns you are not merging on that are common to both source DataFrames. Pandas needs a way to say which one came from where, so it adds the suffixes, the defaults being '_x' on the left and '_y' on the right.
If you have a preference on which source data frame to keep the columns from, then you can set the suffixes and filter accordingly, for example if you want to keep the clashing columns from the left:
# Label the two sides, with no suffix on the side you want to keep
df = pd.merge(
    df,
    tempdf[what_i_care_about],
    on=['myid', 'myorder'],
    how='outer',
    suffixes=('', '_delete_suffix')  # left gets no suffix, right gets something identifiable
)
# Discard the columns that acquired a suffix
df = df[[c for c in df.columns if not c.endswith('_delete_suffix')]]
Alternatively, you can drop one of each of the clashing columns prior to merging, then Pandas has no need to assign a suffix.
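A hedged sketch of that alternative, reusing the names from the snippet above (tempdf, what_i_care_about, and the key columns come from the answer's example):
# columns common to both frames, other than the merge keys, dropped from the right side
clashing = [c for c in what_i_care_about
            if c in df.columns and c not in ['myid', 'myorder']]
df = pd.merge(
    df,
    tempdf[what_i_care_about].drop(columns=clashing),
    on=['myid', 'myorder'],
    how='outer',
)  # no clashes left, so pandas assigns no suffixes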
Just add .to_numpy() to the end of the series you want to concatenate.
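For instance, a minimal sketch (hypothetical data); the array is assigned purely by position, so no index alignment happens:
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3]}, index=[0, 0, 1])  # duplicated index label
s = pd.Series([10, 20, 30], index=[7, 8, 9])          # completely different index
df['b'] = s.to_numpy()                                # lengths must match; alignment is skipped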
It happened to me when I appended one dataframe to another (df3 = df1.append(df2)), so the output was:
df1
A B
0 1 a
1 2 b
2 3 c
df2
A B
0 4 d
1 5 e
2 6 f
df3
A B
0 1 a
1 2 b
2 3 c
0 4 d
1 5 e
2 6 f
The simplest way to fix the indexes is to use the df.reset_index(drop=bool, inplace=bool) method, as Connor said. You can also set the drop argument to True to avoid the old index being added as a column, and inplace to True to make the reset permanent.
Here is the official reference: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.reset_index.html
In addition, you can also use the ".set_index(keys=list, inplace=bool)" method, like this:
new_index_list = list(range(0, len(df3)))
df3['new_index'] = new_index_list
df3.set_index(keys='new_index', inplace=True)
Official reference: https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.set_index.html
Make sure your index does not have any duplicates. I simply did df.reset_index(drop=True, inplace=True) and I don't get the error anymore. If you want to keep the index, just set drop to False.
df = df.reset_index(drop=True) worked for me
I was trying to create a histogram using seaborn.
sns.histplot(data=df, x='Blood Chemistry 1', hue='Outcome', discrete=False, multiple='stack')
I get ValueError: cannot reindex from a duplicate axis. To solve it, I had to choose only the rows where x has no missing values:
data = df[~df['Blood Chemistry 1'].isnull()]
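Equivalently (and arguably more readable), the same filter can be written with .notna():
data = df[df['Blood Chemistry 1'].notna()]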
I am trying to fill missing values in a subset of rows. I am using inplace=True in fillna(), but it is not working in a Jupyter notebook. You can see in the attached picture that the first 2 rows of the Surface column still show NaN. I am not sure why.
I have to do the following for it to work. Why? Thank you for your help.
data.loc[mark,'Surface']=data.loc[mark,'Surface'].fillna(value='TEST')
Here is my code:
mark=(data['Pad']==51) | (data['Pad']==52) | (data['Pad']==53) | (data['Pad']==54) | (data['Pad']==55)
data.loc[mark,'Surface'].fillna(value='TEST',inplace=True)
This one works:
data.loc[mark,'Surface']=data.loc[mark,'Surface'].fillna(value='TEST')
The main issue you're bumping into here is that pandas does not have very explicit view-vs-copy rules. Your result indicates to me that the issue is .loc returning a copy instead of a view. While pandas does try to return a view from .loc, there are a decent number of caveats.
After playing around a little, it seems that using a boolean/positional index mask returns a copy; you can verify this with the private _is_view attribute:
import pandas as pd
import numpy as np
df = pd.DataFrame({"Pad": range(40, 60), "Surface": np.nan})
print(df)
Pad Surface
0 40 NaN
1 41 NaN
2 42 NaN
. ... ...
19 59 NaN
# Create masks
bool_mask = df["Pad"].isin(range(51, 56))
positional_mask = np.where(bool_mask)[0]
# Check `_is_view` after simple .loc:
>>> df.loc[bool_mask, "Surface"]._is_view
False
>>> df.loc[positional_mask, "Surface"]._is_view
False
So neither of the approaches above return a "view" of the original data, which is why performing an inplace operation does not change the original dataframe. In order to return a view from .loc you will need to use a slice as your row-index.
>>> df.loc[10:15, "Surface"]._is_view
True
Now this still won't resolve your issue, because the value you're filling NaN with may or may not change the dtype of the "Surface" column. In the example I have set up, "Surface" has a float64 dtype, and by filling in NaN with the value "Test" you are forcing that dtype to change, which is incompatible with the original dataframe. If your "Surface" column is an object dtype, then you don't need to worry about this.
>>> df.dtypes
Pad int64
Surface float64
# this does not work because "Test" is incompatible with float64 dtype
>>> df.loc[10:15, "Surface"].fillna("Test", inplace=True)
# this works because 0.9 is an appropriate value for a float64 dtype
>>> df.loc[10:15, "Surface"].fillna(0.9, inplace=True)
>>> print(df)
Pad Surface
.. ... ...
8 48 NaN
9 49 NaN
10 50 0.9
11 51 0.9
12 52 0.9
13 53 0.9
14 54 0.9
15 55 0.9
16 56 NaN
17 57 NaN
.. ... ...
TL;DR: don't rely on inplace in pandas in general. In the bulk of its operations it still creates a copy of the underlying data and then attempts to replace the original source with the new copy. pandas is not memory efficient, so if you're worried about memory performance you may want to switch to something designed to be zero-copy from the ground up, like Vaex, instead of trying to go through pandas.
Your approach of assigning the slice of the dataframe is the most appropriate and will ensure you receive the correct result of updating the dataframe as "inplace" as possible:
>>> df.loc[bool_mask, "Surface"] = df.loc[bool_mask, "Surface"].fillna("Test")
I'm trying to find out if the values of a series appear in another df. Here is a simple example:
import pandas as pd
s=pd.Series([1.1,2.2])
df=pd.DataFrame()
df['a']=[1.1,3.3]
df['b']=[4.4,5.5]
I want to check if 1.1 is present in df in any column, and if it is, get the value True. I want this as a boolean mask so that I can use it as a selection criterion on another df.
I have tried:
>>> s.isin(df)
0 False
1 False
also:
>>> s.values == df
a b
0 True False
1 False False
This looks like it is on the right track, so I try to derive a mask from it:
>>> np.sum(s.values == df, axis=1) >0
0 True
1 False
dtype: bool
This looks good, but when I try it on my actual data I get an unusual error. I am just trying to do the following 3 print statements:
print(df)
print(series)
print(np.sum([series.values==df], axis=1) >0)
3rd 2nd 1st
Date
1995-01-03 NaN NaN NaN
1995-01-04 NaN NaN NaN
1995-01-05 NaN NaN NaN
1995-01-06 NaN NaN NaN
1995-01-09 NaN NaN NaN
... ... ... ...
2021-05-19 19.531345 16.888084 15.596825
2021-05-20 19.422386 16.571087 14.174667
2021-05-21 19.283871 16.112516 14.281031
2021-05-24 19.167412 15.726680 14.438169
2021-05-25 19.157773 15.488945 14.616952
[6815 rows x 3 columns]
Date
1995-01-03 NaN
1995-01-04 NaN
1995-01-05 NaN
1995-01-06 NaN
1995-01-09 NaN
...
2021-05-19 14.016911
2021-05-20 14.174667
2021-05-21 14.281031
2021-05-24 14.438169
2021-05-25 14.616952
Name: series_name, Length: 6815, dtype: float64
CRITICAL:root:Global try/catch caught an unhandled exception
ERROR:root:Unable to coerce to Series, length must be 3: given 6815
ValueError: Unable to coerce to Series, length must be 3: given 6815
The sizes of the series and dataframe seem to be correct, and the final 4 values in the series (ending in 14.616952) match the dataframe's final column, so I would expect the mask to have a True value at this point.
If anyone can diagnose this bug, it would be really helpful. Many thanks.
Use the .isin method of pandas, like the following. The simple explanation: apply isin on the DataFrame with respect to the series, then apply .any() along axis=1 to get the expected True/False output for each whole row.
df.isin(series).any(axis=1)
Output will be as follows:
0 True
1 False
dtype: bool
You have an extra [] around your command:
print(df)
print(series)
print(np.sum([series.values==df], axis=1) >0)
^ remove this ^ and this
Also consider all:
(series.values==df).all(axis=1)
I have a Pandas dataframe that contains a column of float64 values:
tempDF = pd.DataFrame({ 'id': [12,12,12,12,45,45,45,51,51,51,51,51,51,76,76,76,91,91,91,91],
'measure': [3.2,4.2,6.8,5.6,3.1,4.8,8.8,3.0,1.9,2.1,2.4,3.5,4.2,5.2,4.3,3.6,5.2,7.1,6.5,7.3]})
I want to create a new column containing just the integer part. My first thought was to use .astype(int):
tempDF['int_measure'] = tempDF['measure'].astype(int)
This works fine but, as an extra complication, the column I have contains a missing value:
tempDF.ix[10,'measure'] = np.nan
This missing value causes the .astype(int) method to fail with:
ValueError: Cannot convert NA to integer
I thought I could round down the floats in the column of data. However, the .round(0) function will round to the nearest integer (higher or lower) rather than rounding down. I can't find a function equivalent to ".floor()" that will act on a column of a Pandas dataframe.
Any suggestions?
You could just apply numpy.floor:
import numpy as np
tempDF['int_measure'] = tempDF['measure'].apply(np.floor)
id measure int_measure
0 12 3.2 3
1 12 4.2 4
2 12 6.8 6
...
9 51 2.1 2
10 51 NaN NaN
11 51 3.5 3
...
19 91 7.3 7
You could also try:
df.apply(lambda s: s // 1)
Using np.floor is faster, however.
The answers here are pretty dated. As of pandas 0.25.2 (perhaps earlier), this kind of column conversion raises the warning:
A value is trying to be set on a copy of a slice from a DataFrame.
Try using .loc[row_indexer,col_indexer] = value instead
For one particular column, that assignment form would be:
df.iloc[:,0] = df.iloc[:,0].astype(int)
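A hedged sketch of a more recent option, assuming pandas 0.24 or newer: the nullable Int64 dtype lets the missing value survive the cast:
import numpy as np
import pandas as pd

tempDF = pd.DataFrame({'measure': [3.2, 4.2, np.nan, 5.6]})
# floor first, then cast to the nullable integer dtype; NaN becomes pd.NA
tempDF['int_measure'] = np.floor(tempDF['measure']).astype('Int64')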
From Fill in missing row values in pandas dataframe:
I have the following dataframe and would like to fill in missing values.
mukey   hzdept_r  hzdepb_r  sandtotal_r  silttotal_r
425897         0        61
425897        61       152          5.3         44.7
425911         0        30         30.1         54.9
425911        30        74         17.7         49.8
425911        74        84
I want each missing value to be the average of the values corresponding to that mukey. In this case, e.g., the first row's missing values should be the averages of sandtotal_r and silttotal_r for mukey == 425897. pandas fillna doesn't seem to do the trick. Any help?
While the code works for the sample dataframe in that example, it is failing on the larger dataset I have uploaded here: https://www.dropbox.com/s/w3m0jppnq74op4c/www004.csv?dl=0
import pandas as pd
df = pd.read_csv('www004.csv')
# CSV file is here: https://www.dropbox.com/s/w3m0jppnq74op4c/www004.csv?dl=0
df1 = df.set_index('mukey')
df1.fillna(df.groupby('mukey').mean(),inplace=True)
df1.reset_index()
I get the error: InvalidIndexError. Why is it not working?
Use combine_first. It allows you to patch up the missing data on the left dataframe with the matching data on the right dataframe, based on the same index.
In this case, df1 is on the left and df2 (the means) is on the right.
In [48]: df = pd.read_csv('www004.csv')
...: df1 = df.set_index('mukey')
...: df2 = df.groupby('mukey').mean()
In [49]: df1.loc[426178,:]
Out[49]:
hzdept_r hzdepb_r sandtotal_r silttotal_r claytotal_r om_r
mukey
426178 0 36 NaN NaN NaN 72.50
426178 36 66 NaN NaN NaN 72.50
426178 66 152 42.1 37.9 20 0.25
In [50]: df2.loc[426178,:]
Out[50]:
hzdept_r 34.000000
hzdepb_r 84.666667
sandtotal_r 42.100000
silttotal_r 37.900000
claytotal_r 20.000000
om_r 48.416667
Name: 426178, dtype: float64
In [51]: df3 = df1.combine_first(df2)
...: df3.loc[426178,:]
Out[51]:
hzdept_r hzdepb_r sandtotal_r silttotal_r claytotal_r om_r
mukey
426178 0 36 42.1 37.9 20 72.50
426178 36 66 42.1 37.9 20 72.50
426178 66 152 42.1 37.9 20 0.25
Note that the following rows still won't have values in the resulting df3
426162
426163
426174
426174
426255
because they were single rows to begin with, hence, .mean() doesn't mean anything to them (eh, see what I did there?).
The problem is the duplicate index values. When you use df1.fillna(df2), if you have multiple NaN entries in df1 where both the index and the column label are the same, pandas will get confused when trying to slice df1, and throw that InvalidIndexError.
Your sample dataframe works because even though you have duplicate index values there, only one of each index value is null. Your larger dataframe contains null entries that share both the index value and column label in some cases.
To make this work, you can do this one column at a time. For some reason, when operating on a series, pandas will not get confused by multiple entries of the same index, and will simply fill the same value in each one. Hence, this should work:
import pandas as pd
df = pd.read_csv('www004.csv')
# CSV file is here: https://www.dropbox.com/s/w3m0jppnq74op4c/www004.csv?dl=0
df1 = df.set_index('mukey')
grouped = df.groupby('mukey').mean()
for col in ['sandtotal_r', 'silttotal_r']:
df1[col] = df1[col].fillna(grouped[col])
df1.reset_index()
NOTE: Be careful using the combine_first method if you ever have "extra" data in the dataframe you're filling from. The combine_first function will include ALL indices from the dataframe you're filling from, even if they're not present in the original dataframe.
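A hedged alternative that sidesteps the index juggling entirely: groupby().transform('mean') returns a result already aligned to the original index, so duplicate mukey values are harmless:
import pandas as pd

df = pd.read_csv('www004.csv')
for col in ['sandtotal_r', 'silttotal_r']:
    # transform keeps df's own row order and index, so fillna aligns one-to-one
    df[col] = df[col].fillna(df.groupby('mukey')[col].transform('mean'))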
I want to resample a DataFrame with a multi-index containing both a datetime column and some other key. The DataFrame looks like:
import pandas as pd
from StringIO import StringIO
csv = StringIO("""ID,NAME,DATE,VAR1
1,a,03-JAN-2013,69
1,a,04-JAN-2013,77
1,a,05-JAN-2013,75
2,b,03-JAN-2013,69
2,b,04-JAN-2013,75
2,b,05-JAN-2013,72""")
df = pd.read_csv(csv, index_col=['DATE', 'ID'], parse_dates=['DATE'])
df.columns.name = 'Params'
Because resampling is only allowed on datetime indexes, I thought unstacking the other index column would help. And indeed it does, but I can't stack it again afterwards.
print df.unstack('ID').resample('W-THU')
Params VAR1
ID 1 2
DATE
2013-01-03 69 69.0
2013-01-10 76 73.5
But then stacking 'ID' again results in an index-error:
print df.unstack('ID').resample('W-THU').stack('ID')
IndexError: index 0 is out of bounds for axis 0 with size 0
Strangely enough, I can stack the other column level with both:
print df.unstack('ID').resample('W-THU').stack(0)
and
print df.unstack('ID').resample('W-THU').stack('Params')
The index-error also occurs if I reorder (swap) both column levels. Does anyone know how to overcome this issue?
The example unstacks a non-numerical column, 'NAME', which is silently dropped but causes problems during re-stacking. The code below worked for me:
print df[['VAR1']].unstack('ID').resample('W-THU').stack('ID')
Params VAR1
DATE ID
2013-01-03 A 69.0
B 69.0
2013-01-10 A 76.0
B 73.5
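On reasonably recent pandas versions, a hedged alternative avoids the unstack/stack round-trip by resampling the datetime level directly with pd.Grouper:
# group the DATE level by week (Thursday-anchored) while keeping ID as a level
out = (df[['VAR1']]
       .groupby([pd.Grouper(level='DATE', freq='W-THU'), pd.Grouper(level='ID')])
       .mean())
print(out)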