I have found an inconsistency (at least to me) in the following two approaches:
For a dataframe defined as:
df=pd.DataFrame([[1,2,3,4,np.NaN],[8,2,0,4,5]])
I would like to access the element in the 1st row, 4th column (counting from 0). I either do this:
df[4][1]
Out[94]: 5.0
Or this:
df.iloc[1,4]
Out[95]: 5.0
Am I correctly understanding that in the first approach I need to use the column first and then the rows, and vice versa when using iloc? I just want to make sure that I use both approaches correctly going forward.
EDIT: Some of the answers below have pointed out that the first approach is not as reliable, and I see now that this is why:
df.index = ['7','88']
df[4][1]
Out[101]: 5.0
I still get the correct result. But using ints instead will raise an exception if the corresponding label is no longer there:
df.index = [7,88]
df[4][1]
KeyError: 1
Also, changing the column names:
df.columns = ['4','5','6','1','5']
df['4'][1]
Out[108]: 8
Gives me a different result. So overall, I should stick to iloc or loc to avoid these issues.
You should think of a DataFrame as a collection of columns. Therefore, when you do df[4] you get the column labelled 4, which is a pandas Series. When you then do df[4][1] you get the element of that Series labelled 1, which here corresponds to the entry in the 1st row and 4th column of the DataFrame, i.e. exactly what df.iloc[1, 4] returns.
Therefore there is no inconsistency at all, but beware: this only works if you have no column names, or if your column names are [0, 1, 2, 3, 4]. Otherwise it will either fail or give you a wrong result. Hence, for positional indexing you must stick with iloc, and use loc for label indexing.
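For instance, a minimal sketch of the question's own example (with default integer column labels the two spellings agree; with named columns they do not):

import numpy as np
import pandas as pd

# Default integer column labels 0..4, so df[4] is both "the column labelled 4"
# and the 5th column by position -- the two approaches happen to agree.
df = pd.DataFrame([[1, 2, 3, 4, np.nan], [8, 2, 0, 4, 5]])
print(df[4][1])       # column labelled 4, then row labelled 1 -> 5.0
print(df.iloc[1, 4])  # row position 1, column position 4      -> 5.0

# Once the columns are renamed, the chained form needs the label, not the position:
df.columns = list('abcde')
print(df['e'][1])     # -> 5.0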
Unfortunately, you are not using them correctly; it's just a coincidence that you get the same result.
df.loc[i, j] means the element of df in the row labelled i and the column labelled j.
Among many other differences, df[j] means the column labelled j, and df[j][i] means the element of that column (i.e. the row) labelled i.
df.iloc[i, j] means the element in the i-th row and the j-th column, counting from 0.
So, df.loc selects data by label (a string, an int, or any other type; ints in this case), while df.iloc selects data by position. It is just a coincidence that in your example the row at position i is also labelled i.
For more details you should read the docs.
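A minimal sketch (not part of the original answer) illustrating the label/position distinction with an index whose labels differ from the positions:

import pandas as pd

df = pd.DataFrame({'x': [10, 20, 30]}, index=[7, 88, 1])
print(df.loc[1, 'x'])   # label 1    -> 30 (the last row)
print(df.iloc[1, 0])    # position 1 -> 20 (the second row)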
Update:
Think of df[4][1] as a convenience. There is some fallback logic behind it, so under most circumstances you'll get what you want.
In fact
df.index = ['7', '88']
df[4][1]
works because the dtype of the index is str. Since you pass an int 1, it falls back to positional indexing. If you run:
df.index = [7, 88]
df[4][1]
it will raise an error. And
df.index = [1, 0]
df[4][1]
still won't give you the element you expect, because it is not the row at position 1: it is the row labelled 1.
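A small sketch of that fallback behaviour, reusing the question's DataFrame (note that the positional fallback for integer keys on a non-integer index is deprecated in recent pandas versions, which is one more reason to prefer iloc):

import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4, np.nan], [8, 2, 0, 4, 5]])

df.index = ['7', '88']
print(df[4][1])   # string index, int key: falls back to position 1 -> 5.0

df.index = [1, 0]
print(df[4][1])   # int index: 1 is now a label -> the first row -> nan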
I saw this code in someone's iPython notebook, and I'm very confused as to how this code works. As far as I understood, df.loc[] is used as a location based indexer where the format is:
df.loc[index,column_name]
However, in this case, the first index seems to be a series of boolean values. Could someone please explain to me how this selection works. I tried to read through the documentation but I couldn't figure out an explanation. Thanks!
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
pd.DataFrame.loc can take one or two indexers. For the rest of the post, I'll represent the first indexer as i and the second indexer as j.
If only one indexer is provided, it applies to the index of the dataframe and the missing indexer is assumed to represent all columns. So the following two examples are equivalent.
df.loc[i]
df.loc[i, :]
Where : is used to represent all columns.
If both indexers are present, i references index values and j references column values.
Now we can focus on what types of values i and j can assume. Let's use the following dataframe df as our example:
df = pd.DataFrame([[1, 2], [3, 4]], index=['A', 'B'], columns=['X', 'Y'])
loc has been written such that i and j can be
scalars that should be values in the respective index objects
df.loc['A', 'Y']
2
arrays whose elements are also members of the respective index object (notice that the order of the array I pass to loc is respected):
df.loc[['B', 'A'], 'X']
B 3
A 1
Name: X, dtype: int64
Notice the dimensionality of the returned object when passing arrays. When i is an array, as it was above, loc returns an object whose index contains those values. In this case, because j was a scalar, loc returned a pd.Series object. We could have gotten a DataFrame back instead by passing arrays for both i and j, and the array could even have been a single-element array.
df.loc[['B', 'A'], ['X']]
X
B 3
A 1
boolean arrays whose elements are True or False and whose length matches the length of the respective index. In this case, loc simply grabs the rows (or columns) in which the boolean array is True.
df.loc[[True, False], ['X']]
X
A 1
In addition to what indexers you can pass to loc, it also enables you to make assignments. Now we can break down the line of code you provided.
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
iris_data['class'] == 'versicolor' returns a boolean array.
class is a scalar that represents a value in the columns object.
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] returns a pd.Series object consisting of the 'class' column for all rows where 'class' is 'versicolor'
When used with an assignment operator:
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
We assign 'Iris-versicolor' to all elements in column 'class' where 'class' was 'versicolor'.
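Here is a self-contained sketch of the same pattern on a made-up two-column DataFrame (the column names are only illustrative):

import pandas as pd

df = pd.DataFrame({'class': ['versicolor', 'setosa', 'versicolor'],
                   'petal_length': [4.7, 1.4, 4.5]})

mask = df['class'] == 'versicolor'        # boolean array, one entry per row
df.loc[mask, 'class'] = 'Iris-versicolor'
print(df)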
This is using dataframes from the pandas package. The "index" part can be either a single index, a list of indices, or a list of booleans. This can be read about in the documentation: https://pandas.pydata.org/pandas-docs/stable/indexing.html
So the index part specifies a subset of the rows to pull out, and the (optional) column_name specifies the column you want to work with from that subset of the dataframe. So if you want to update the 'class' column but only in rows where the class is currently set as 'versicolor', you might do something like what you list in the question:
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
It's a pandas DataFrame and this is the label-based selection tool df.loc. It takes two inputs, one for the rows and one for the columns: the row input selects all rows where the value stored in the column 'class' is 'versicolor', the column input selects the column labelled 'class', and the value 'Iris-versicolor' is assigned to that selection.
So basically it replaces, in the column 'class', every cell whose value is 'versicolor' with 'Iris-versicolor'.
Whenever slicing (a:n) can be used, it can be replaced by fancy indexing (e.g. [a,b,c,...,n]). Fancy indexing is nothing more than listing explicitly all the index values instead of specifying only the limits.
Whenever fancy indexing can be used, it can be replaced by a list of Boolean values (a mask) the same size as the index. The value will be True for index values that would have been included in the fancy index, and False for the values that would have been excluded. It's another way of listing some index values, but one which can easily be automated in NumPy and pandas, e.g. by a logical comparison (as in your case).
The second replacement possibility is the one used in your example. In:
iris_data.loc[iris_data['class'] == 'versicolor', 'class'] = 'Iris-versicolor'
the mask
iris_data['class'] == 'versicolor'
is a replacement for a long and silly fancy index which would be the list of row labels where the class column (a Series) has the value versicolor.
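On a toy frame you can check that the mask and the explicit list of labels select exactly the same rows (a sketch, not from the original answer):

import pandas as pd

df = pd.DataFrame({'class': ['versicolor', 'setosa', 'versicolor']})

mask = df['class'] == 'versicolor'          # [True, False, True]
fancy = df.index[mask]                      # the same rows spelled out: Index([0, 2])
print(df.loc[mask].equals(df.loc[fancy]))   # True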
Whether a Boolean mask appears within a .iloc or .loc indexer (e.g. df.loc[mask]) or directly as the index (e.g. df[mask]) depends on whether a slice is allowed as a direct index. Such cases are shown in the following indexer cheat-sheet:
Pandas indexers loc and iloc cheat-sheet
It's pandas label-based selection, as explained here: https://pandas.pydata.org/pandas-docs/stable/indexing.html#selection-by-label
The boolean array is basically a selection method using a mask.
So I have a
df = read_excel(...)
and this loop does work:
for i, row in df.iterrows(): #loop through rows
a = df[df.columns].SignalName[i] #column "SignalName" of row i, is read
b = (row[7]) #column "Bus-Signalname" of row i, taken primitively=hardcoded
Access to a is OK. How can I replace the hardcoded b = (row[7]) with a dynamically located "Bus-Signalname" element from the Excel table? What are the ways to do this?
b = df[df.columns].Bus-Signalname[i]
does not work.
To access the whole column, run: df['Bus-Signalname'].
So-called attribute notation (df.Bus-Signalname) will not work here, since "-" is not allowed as part of an attribute name. It is treated as a minus operator, so:
the expression before it is df.Bus, but df probably has no column with this name, so an exception is thrown;
what comes after it (Signalname) is expected to be e.g. a variable, but you probably have no such variable, which is another reason an exception could be raised.
Note also that you then wrote [i]. As I understand it, i is an integer and you want to access element number i from this column. Note that the column you retrieved is a Series whose index is the same as that of your whole DataFrame. If the index is the default one (consecutive numbers starting from 0), you will succeed. Otherwise (if the index does not contain the value i) you will fail.
A more pandasonic syntax to access an element in a DataFrame is:
df.loc[i, 'Bus-Signalname']
where i is the index of the row in question and Bus-Signalname is the column name.
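Applied to the loop from the question, this looks roughly as follows (a sketch; the small DataFrame only stands in for the one read from Excel):

import pandas as pd

df = pd.DataFrame({'SignalName': ['s1', 's2'],
                   'Bus-Signalname': ['b1', 'b2']})

for i, row in df.iterrows():
    a = df.loc[i, 'SignalName']        # instead of df[df.columns].SignalName[i]
    b = df.loc[i, 'Bus-Signalname']    # instead of the hardcoded row[7]
    # or, simpler, take it straight from the row Series:
    b = row['Bus-Signalname']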
@Valdi_Bo, thank you. In the loop, both
df.loc[i, 'Bus-Signalname']
and
df['Bus-Signalname'][i]
work.
I am curious as to why df[2] is not supported, while df.ix[2] and df[2:3] both work.
In [26]: df.ix[2]
Out[26]:
A 1.027680
B 1.514210
C -1.466963
D -0.162339
Name: 2000-01-03 00:00:00
In [27]: df[2:3]
Out[27]:
A B C D
2000-01-03 1.02768 1.51421 -1.466963 -0.162339
I would expect df[2] to work the same way as df[2:3] to be consistent with Python indexing convention. Is there a design reason for not supporting indexing row by single integer?
Echoing @HYRY, see the new docs in 0.11:
http://pandas.pydata.org/pandas-docs/stable/indexing.html
Here we have new operators, .iloc to explicitly support only integer indexing, and .loc to explicitly support only label indexing.
e.g. imagine this scenario
In [1]: df = pd.DataFrame(np.random.rand(5,2),index=range(0,10,2),columns=list('AB'))
In [2]: df
Out[2]:
A B
0 1.068932 -0.794307
2 -0.470056 1.192211
4 -0.284561 0.756029
6 1.037563 -0.267820
8 -0.538478 -0.800654
In [5]: df.iloc[[2]]
Out[5]:
A B
4 -0.284561 0.756029
In [6]: df.loc[[2]]
Out[6]:
A B
2 -0.470056 1.192211
[] slices the rows (by integer location or by label) only
The primary purpose of the DataFrame indexing operator, [] is to select columns.
When the indexing operator is passed a string or integer, it attempts to find a column with that particular name and return it as a Series.
So, in the question above: df[2] searches for a column name matching the integer value 2. This column does not exist and a KeyError is raised.
The DataFrame indexing operator completely changes behavior to select rows when slice notation is used
Strangely, when given a slice, the DataFrame indexing operator selects rows and can do so by integer location or by index label.
df[2:3]
This will slice beginning from the row with integer location 2 up to 3, exclusive of the last element. So, just a single row. The following selects rows beginning at integer location 6 up to but not including 20 by every third row.
df[6:20:3]
You can also use slices consisting of string labels if your DataFrame index has strings in it. For more details, see this solution on .iloc vs .loc.
I almost never use this slice notation with the indexing operator, as it's not explicit and hardly ever used. When slicing by rows, stick with .loc/.iloc.
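A short sketch contrasting the two behaviours of [] (the DataFrame here is made up):

import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, index=['x', 'y', 'z'])

print(df['A'])       # scalar key    -> the column 'A' as a Series
print(df[0:2])       # integer slice -> rows at positions 0 and 1
print(df['x':'y'])   # label slice   -> rows 'x' through 'y', inclusive
print(df.iloc[2])    # explicit positional row access, preferred over df[2:3]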
You can think of a DataFrame as a dict of Series. df[key] tries to select the column by key and returns a Series object.
However, slicing inside [] slices the rows, because it's a very common operation.
You can read the document for detail:
http://pandas.pydata.org/pandas-docs/stable/indexing.html#basics
For index-based access to a pandas table, one can also convert the table to a NumPy array, as in
np_df = df.as_matrix()
and then
np_df[i]
would work.
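Note that as_matrix() has since been removed from pandas; the modern equivalent is to_numpy(), so the same idea today would be:

np_df = df.to_numpy()   # modern replacement for the removed df.as_matrix()
np_df[i]                # i-th row as a NumPy array, purely positional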
You can take a look at the source code.
DataFrame has a private method _slice() to slice the DataFrame, and it takes a parameter axis that determines which axis to slice. The __getitem__() for DataFrame doesn't set the axis when invoking _slice(), so _slice() slices along the default axis 0.
You can run a simple experiment that might help you:
print(df._slice(slice(0, 2)))
print(df._slice(slice(0, 2), 0))
print(df._slice(slice(0, 2), 1))
You can loop through the DataFrame like this:
for ad in range(len(dataframe_c)):    # iterate over row positions
    print(dataframe_c.values[ad])     # the ad-th row as a NumPy array
I would normally go for .loc/.iloc as suggested by Ted, but one may also select a row by transposing the DataFrame. To stay in the example above, df.T[2] gives you row 2 of df.
If you want to index multiple rows by their integer indexes, use a list of indexes:
idx = [2,3,1]
df.iloc[idx]
N.B. If idx is created using some rule, then you can also sort the dataframe by using .iloc (or .loc) because the output will be ordered by idx. So in a sense, iloc can act like a sorting function where idx is the sorting key.
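For example (a small sketch, not from the original answer):

import pandas as pd

df = pd.DataFrame({'A': [10, 20, 30, 40]})

idx = [2, 3, 1]
print(df.iloc[idx])   # rows returned in exactly the order given by idx: 2, 3, 1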
I have a DataFrame df with a column 'a'. How would I create a new column 'b' which has dtype=object?
I know this may be considered poor form, but at the moment I have a dataframe df where the column 'a' contains arrays (each element is an np.array). I want to create a new column 'b' where each element is a new np.array that contains the logs of the corresponding element in 'a'.
At the moment I tried these two methods, but neither worked:
for i in df.index:
    df.set_value(i, 'b', log10(df.loc[i, 'a']))
and
for i in df.index:
    df.loc[i, 'b'] = log10(df.loc[i, 'a'])
Both give me ValueError: Must have equal len keys and value when setting with an iterable.
I'm assuming the error comes about because the dtype of the new column is defaulted to float although I may be wrong.
As each element of your column is an array, it's better to use the standard NumPy mathematical functions to compute their element-wise logarithms to base 10:
df['log_a'] = df.a.apply(lambda x: np.log10(x))
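If you specifically want the new column to be called 'b', as in the question, the same pattern works and yields an object-dtype column of arrays (a sketch):

import numpy as np
import pandas as pd

# Column 'a' holds NumPy arrays, as described in the question.
df = pd.DataFrame({'a': [np.array([1.0, 10.0]), np.array([100.0, 1000.0])]})

df['b'] = df['a'].apply(np.log10)   # element-wise log10 of each stored array
print(df['b'].dtype)                # object -- each cell is itself an array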