How can I extract the last value, 102.584855? I have tried df[-1:].iloc[0] but it returns the index label 20 as well. How do I get only 102.584855? Thanks!
You could use: df.iloc[-1, 0]. When two indexers are passed to iloc, the first selects rows and the second selects columns, so df.iloc[-1, 0] selects the value in the last row and first column.
Alternatively, df[-1:].iloc[0].item() would also work, but is less efficient.
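A minimal sketch of the accepted approach, using made-up data (the column name, values and index labels are assumptions chosen to mirror the question):

```python
import pandas as pd

# Hypothetical single-column frame standing in for the asker's data;
# the last row is labelled 20, matching the question.
df = pd.DataFrame({"value": [101.2, 99.7, 102.584855]}, index=[18, 19, 20])

last = df.iloc[-1, 0]  # scalar from the last row, first column
print(last)            # 102.584855
```

Because iloc returns a plain scalar here, no index label comes along with it.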
I have a pandas dataframe df. In every column the values eventually turn into '-' and stay that way until the end of the dataframe. I would like to find the final row that contains no '-' value. How can I do that?
df.isin(['-'])
gives me a dataframe full of Trues and Falses. So I want the last row that only has False in it.
You can use df.tail(1) to pick the last row, but note that df[df.isin(['-'])].tail(1) won't do what you want: masking a DataFrame with a cell-level boolean frame replaces non-matching cells with NaN instead of dropping rows. First collapse the mask to one boolean per row with .any(axis=1), then invert it with the ~ (NOT) operator to keep only rows that contain no '-' at all:
df[~df.isin(['-']).any(axis=1)].tail(1)
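A small runnable sketch of this, with a toy frame (the column names and values are made up to imitate the question's setup):

```python
import pandas as pd

# Toy frame where, as in the question, each column eventually turns into '-'
df = pd.DataFrame({
    "a": [1, 2, 3, "-", "-"],
    "b": [4, 5, "-", "-", "-"],
})

mask = df.isin(["-"])            # True wherever a cell is '-'
clean = df[~mask.any(axis=1)]    # rows containing no '-' at all
last_clean = clean.tail(1)       # the final such row
print(last_clean)
```

Here rows 0 and 1 are free of '-', so tail(1) returns row 1.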
Is there a way to select rows with a DateTimeIndex without referring to the date as such e.g. selecting row index 2 (the usual Python default manner) rather than "1995-02-02"?
Thanks in advance.
Yes, you can use .iloc, the positional indexer:
df.iloc[2]
Basically, it indexes by integer position, from 0 up to len(df) - 1, and it allows slicing too:
df.iloc[2:5]
It also works for columns (by position, again):
df.iloc[:, 0] # All rows, first column
df.iloc[0:2, 0:2] # First 2 rows, first 2 columns
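For instance, on a frame with a DateTimeIndex (the dates and column below are invented for illustration), .iloc never needs to mention a date:

```python
import pandas as pd

# A small frame with a DateTimeIndex; the dates are made up for illustration
idx = pd.date_range("1995-02-01", periods=5, freq="D")
df = pd.DataFrame({"x": [10, 20, 30, 40, 50]}, index=idx)

row = df.iloc[2]      # third row by position, regardless of its date label
print(row["x"])       # 30
print(df.iloc[2:5])   # positional slice: rows 2, 3 and 4
```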
I am curious to know how to grab the index number of a dataframe row that meets a specific condition. I've been playing with pandas.Index.get_loc, but no luck.
I've loaded a csv file, and it's structured in a way that has 1000+ rows with all column values filled in, but in the middle there is one completely empty row, and the data starts again. I wanted to get the index # of the row, so I can remove/delete all the subsequent rows that come after the empty row.
This is how I tried to identify the empty row: df[df["ColumnA"] == None], but I had no luck getting the index number for that row. Please help!
What you most likely want is pd.DataFrame.dropna:
"Return object with labels on given axis omitted where alternately any or all of the data are missing"
If the row is empty, you can simply do this:
df = df.dropna(how='all')
If you want to find indices of null rows, you can use pd.DataFrame.isnull:
res = df[df.isnull().all(axis=1)].index
To remove rows with indices greater than the first empty row:
df = df[df.index < res[0]]
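Putting the two steps together on a mock frame (column names and values are assumptions standing in for the asker's CSV):

```python
import numpy as np
import pandas as pd

# Mock frame imitating the asker's CSV: a fully empty row in the middle,
# followed by data we want to discard
df = pd.DataFrame({
    "ColumnA": ["a", "b", np.nan, "junk1", "junk2"],
    "ColumnB": [1.0, 2.0, np.nan, 9.0, 9.0],
})

empty = df[df.isnull().all(axis=1)].index   # indices of completely null rows
if len(empty):
    df = df[df.index < empty[0]]            # keep only rows before the first one
print(df)
```

Note that comparing against None with == (as in the question) doesn't detect NaN; isnull() is the reliable check.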
I have found an inconsistency (at least to me) in the following two approaches:
For a dataframe defined as:
df = pd.DataFrame([[1, 2, 3, 4, np.nan], [8, 2, 0, 4, 5]])
I would like to access the element in the 1st row, 4th column (counting from 0). I either do this:
df[4][1]
Out[94]: 5.0
Or this:
df.iloc[1,4]
Out[95]: 5.0
Am I correctly understanding that in the first approach I need to use the column first and then the rows, and vice versa when using iloc? I just want to make sure that I use both approaches correctly going forward.
EDIT: Some of the answers below have pointed out that the first approach is not as reliable, and I see now that this is why:
df.index = ['7','88']
df[4][1]
Out[101]: 5.0
I still get the correct result. But using ints instead will raise an exception if the corresponding number is no longer in the index:
df.index = [7,88]
df[4][1]
KeyError: 1
Also, changing the column names:
df.columns = ['4','5','6','1','5']
df['4'][1]
Out[108]: 8
Gives me a different result. So overall, I should stick to iloc or loc to avoid these issues.
You should think of DataFrames as a collection of columns. Therefore, when you do df[4] you get the 4th column of df, which is a pandas Series. After this, df[4][1] gives you the element of that Series labelled 1, which corresponds to the 1st row and 4th column entry of the DataFrame, which is exactly what df.iloc[1, 4] does.
Therefore, no inconsistency at all, but beware: this only works if you have no custom column names, or if your column names are [0, 1, 2, 3, 4]. Otherwise it will either fail or give you a wrong result. Hence, for positional indexing you should stick with iloc, and with loc for label-based indexing.
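A quick sketch of both points, reusing the frame from the question (the replacement column names are invented):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4, np.nan],
                   [8, 2, 0, 4, 5]])

# With the default integer labels the two spellings coincide
assert df[4][1] == df.iloc[1, 4] == 5.0

# After relabelling the columns (made-up names), only positional
# indexing keeps working the same way
df.columns = ["v", "w", "x", "y", "z"]
print(df.iloc[1, 4])   # still 5.0
```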
Unfortunately, you are not using them correctly; it's just coincidence that you get the same result.
df.loc[i, j] means the element in df with the row named i and the column named j
Besides many other differences, df[j] means the column named j, and df[j][i] means, within that column, the element (a row here) named i.
df.iloc[i, j] means the element in the i-th row and the j-th column started from 0.
So df.loc selects data by label (string, int or any other format; int in this case), while df.iloc selects data by position. It's just coincidence that in your example the i-th row is also named i.
For more details you should read the docs.
Update:
Think of df[4][1] as a convenience. There is some fallback logic behind it, so under most circumstances you get what you want.
In fact
df.index = ['7', '88']
df[4][1]
works because the dtype of the index is str. You pass an int, 1, so pandas falls back to positional indexing (a fallback that is deprecated in recent pandas versions). If you run:
df.index = [7, 88]
df[4][1]
it will raise a KeyError, because 1 is now treated as a label and no row is named 1. And
df.index = [1, 0]
df[4][1]
still won't be the element you expect. It is not "the 1st row counting from 0"; it is the row named 1.
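To stay out of the fallback logic entirely, spell out the intent with .loc and .iloc. A minimal sketch on the same toy frame, after relabelling the rows so labels and positions disagree:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4, np.nan],
                   [8, 2, 0, 4, 5]])
df.index = [1, 0]   # labels no longer match positions

by_label = df.loc[1, 4]      # row *named* 1 is the first physical row -> NaN
by_position = df.iloc[1, 4]  # second physical row -> 5.0
print(by_label, by_position)
```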
I have such a data frame df:
a b
10 2
3 1
0 0
0 4
....
# about 50,000+ rows
I wish to select df.loc[:5, 'a']. But when I call df.loc[:5, 'a'], I get an error: KeyError: 'Cannot get right slice bound for non-unique label: 5'. When I call df.loc[5], the result contains 250 rows, while there is just one when I use df.iloc[5]. Why does this happen, and how can I index it properly? Thank you in advance!
The error message is explained here: if the index is not monotonic, then both slice bounds must be unique members of the index.
The difference between .loc and .iloc is label- vs integer-position-based indexing - see docs. .loc is intended to select individual labels or slices of labels. That's why .loc[5] selects all rows where the index has the value 5 (250 of them in your case, hence the error about a non-unique label when slicing). .iloc, in contrast, selects row number 5 (0-indexed). That's why you only get a single row, and its index label may or may not be 5. Hope this helps!
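The contrast can be reproduced with a tiny frame (the values and labels below are made up; the asker's index has about 250 repeats instead of three):

```python
import pandas as pd

# Toy frame with a repeated index label, mimicking the asker's situation
df = pd.DataFrame({"a": [10, 3, 0, 0], "b": [2, 1, 0, 4]},
                  index=[5, 5, 5, 7])

rows_named_5 = df.loc[5]   # every row whose label is 5: three of them
first_row = df.iloc[0]     # exactly one row, by position
print(len(rows_named_5), first_row["a"])
```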
To filter with a non-unique index, try something like this:
df.loc[(df.index > 0) & (df.index < 2)]
The issue with the way you are indexing is that there are multiple rows with index 5, so the loc attribute returns all of them rather than a single row. To see this, just run df.loc[5] and count the rows that share that index label.
Either sort the frame with sort_index, or first aggregate the data by index and then retrieve it.
Hope this helps.