So I have a
df = read_excel(...)
and this loop does work:
for i, row in df.iterrows():            # loop through rows
    a = df[df.columns].SignalName[i]    # column "SignalName" of row i is read
    b = (row[7])                        # column "Bus-Signalname" of row i, taken primitively (hardcoded position)
Access to a is ok. How do I replace the hardcoded b = (row[7]) with a dynamically located "Bus-Signalname" element from the Excel table? What are the ways to do this?
b = df[df.columns].Bus-Signalname[i]
does not work.
To access the whole column, run: df['Bus-Signalname'].
So-called attribute notation (df.Bus-Signalname) will not work here,
since "-" is not allowed as part of an attribute name.
It is treated as the minus operator, so:
- the expression before it is df.Bus, but df probably has no column with this name, so an exception is thrown;
- what occurs after it (Signalname) is expected to be e.g. a variable, but you probably have no such variable, which is another reason an exception could be raised.
Note also that you then wrote [i].
As I understand it, i is an integer and you want to access element No. i of this column.
Note that the column you retrieved is a Series whose index is the same as that of your whole DataFrame.
If the index is the default one (consecutive numbers, starting from 0), you will succeed.
Otherwise (if the index does not contain the value i), it will fail.
A more pandasonic syntax to access an element in a DataFrame is:
df.loc[i, 'Bus-Signalname']
where i is the index of the row in question and Bus-Signalname is the column name.
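For example, the original loop could be rewritten like this (a minimal sketch; the column names come from the question, the file name is only a placeholder):
import pandas as pd

df = pd.read_excel('signals.xlsx')    # file name is just a placeholder
for i, row in df.iterrows():
    a = df.loc[i, 'SignalName']       # same element as df['SignalName'][i]
    b = df.loc[i, 'Bus-Signalname']   # no hardcoded column position needed
    # row['Bus-Signalname'] would work too, since row is a Series indexed by column names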
@Valdi_Bo thank you. In the loop, both
df.loc[i, 'Bus-Signalname']
and
df['Bus-Signalname'][i]
work.
Related
I have a dataframe with several names. I want to look up a name in the pair_stock column and get the index value of that name.
So if I do:
df['pair_stock'].get_loc("MMM-MO")
I want to get a 0
And if I do:
df['pair_stock'].get_loc("WU-ZBH")
I want to get 5539.
But it shows this error:
.get_loc gives you the integer position of a label within an index, and 'pair_stock' isn't the index.
One option you have is to make it the index, which I think is actually what you want.
Another option (to get the index value for a row with that label) is akin to this:
df.loc[df['pair_stock']=="MMM-MO"].index.values
That gives you an array. You can grab just the first item, but if you know it's unique, maybe it should just be your index.
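A minimal sketch of both routes, with a made-up frame (only the column name 'pair_stock' comes from the question):
import pandas as pd

df = pd.DataFrame({'pair_stock': ['MMM-MO', 'AAA-BBB', 'WU-ZBH']})

# Option 1: make 'pair_stock' the index, then get_loc behaves as expected
pos = df.set_index('pair_stock').index.get_loc('WU-ZBH')        # -> 2 here

# Option 2: keep the column and look up the matching index label(s)
labels = df.loc[df['pair_stock'] == 'WU-ZBH'].index.values      # -> array([2])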
You can simply use df.index.
For example, to get all the indices that have the 'pair_stock' name 'MMM-MO':
df.index[df['pair_stock'] == 'MMM-MO'].tolist()
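A quick made-up example of how that reads:
import pandas as pd

df = pd.DataFrame({'pair_stock': ['MMM-MO', 'WU-ZBH', 'MMM-MO']})
df.index[df['pair_stock'] == 'MMM-MO'].tolist()   # -> [0, 2]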
I have created a function for which the input is a pandas dataframe.
It should return the row-indices of the rows with a missing value.
It works for all the defined missingness values except when the cell is entirely empty, even though I tried to specify this in the missing_values list as [..., ""].
What could be the issue here? Or is there even a more intuitive way to solve this in general?
def missing_values(x):
    df = x
    missing_values = ["NaN", "NAN", "NA", "Na", "n/a", "na", "--", "-", " ", "", "None", "0", "-inf"]  # common ways to indicate missingness
    observations = df.shape[0]  # number of observations (rows)
    variables = df.shape[1]     # number of variables (columns)
    row_index_list = []
    for n in range(0, variables):                 # iterate over all variables
        column_list = []                          # a list for the values of this variable
        for i in range(0, observations):          # iterate over every observation of this variable
            column_list.append(df.iloc[i, n])     # and add the values to the list
        for i in range(0, len(column_list)):      # now for every value
            if column_list[i] in missing_values:  # check whether the value is a missing one
                row_index_list.append(column_list.index(column_list[i]))  # and if yes, append the row index
    finished = list(set(row_index_list))  # set() ensures each index appears only once, even with multiple occurrences in one row
    return finished
There might be spurious whitespace, so try adding strip() on this line:
if column_list[i].strip() in missing_values:  # check whether the value is a missing one
Also a simpler way to get the indexes of rows containing missing_values is with isin() and any(axis=1):
x = x.replace(r'\s+', '', regex=True)
row_index_list = x[x.isin(missing_values).any(axis=1)].index
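A small self-contained example of that approach (the toy data below is made up for illustration):
import pandas as pd

missing_values = ["NaN", "NA", "n/a", "--", "-", " ", "", "None"]
x = pd.DataFrame({'a': ['1', 'n/a', '3'], 'b': ['x', 'y', '--']})

row_index_list = x[x.isin(missing_values).any(axis=1)].index
print(row_index_list.tolist())   # -> [1, 2]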
When you import a file into pandas using, for example, read_csv or read_excel, a value that is literally missing can then only be represented as np.nan or another type of null value from the numpy library.
(Sorry, my bad right here, I was being silly when doing np.nan == np.nan.)
You can replace the np.nan value first with:
df = df.replace(np.nan, 'NaN')
then your function can catch it.
Another way is to use isna() in pandas:
df.isna()
This returns the same DataFrame, but with each cell holding a boolean value: True for each cell that is np.nan.
If you do df.isna().any(),
it returns a Series with True for any column that contains a null value.
If you want to retrieve the row IDs instead, simply add the parameter axis=1 to any():
df.isna().any(axis=1)
This returns a Series showing which rows contain an np.nan value.
Now you have the boolean values that indicate which rows contain null values. If you add these boolean values to a list and apply that to df.index, it will pick out the index values of the rows containing nulls.
booleanlist = df.isna().any(axis =1).tolist()
null_row_id = df.index[booleanlist]
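The two steps above can also be collapsed into one line; a small sketch of the same idea:
null_row_id = df.index[df.isna().any(axis=1)]   # index labels of the rows that contain np.nan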
I was wondering how I would be able to expand out a list in a cell without repeating variables in other cells.
The goal is to get it so that the list is expanded but the first column is not repeated. I know how to expand the list out but I would not like to have the first column values repeated if that is possible. Thank you for any help!!
In order to get what you're asking for, you still have to use explode(). You just have to take it a step further and change the values of the first column. Please note that this will destroy the association between the elements of the list and the letter of the row they were first in. You would be creating a third value for the column (an empty string) that would be repeated for every record not beginning with 1.
If you want to eliminate the value from the rows you are talking about but still want to have those records associated with the value that their list was associated with, you can't. It's not logically possible for a value to both be in a given cell but also not be in that cell. So, I will show you the steps for eliminating the original association.
For this example, I named the columns since they are not provided.
import pandas as pd

data = [
    ["a", ["1 hey", "2 hi", "3 hello"]],
    ["b", ["1 what", "2 how", "3 say"]]
]
df = pd.DataFrame(data, columns=["first", "second"])
df = df.explode("second")
df['first'] = df.apply(lambda x: x['first'] if x['second'][0] == '1' else '', axis=1)
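With the made-up data above, the resulting frame would look roughly like this (spacing approximate, note the repeated index labels that explode produces):
  first   second
0     a    1 hey
0             2 hi
0          3 hello
1     b    1 what
1             2 how
1            3 say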
I have found an inconsistency (at least to me) in the following two approaches:
For a dataframe defined as:
df = pd.DataFrame([[1, 2, 3, 4, np.nan], [8, 2, 0, 4, 5]])
I would like to access the element in the 1st row, 4th column (counting from 0). I can either do this:
df[4][1]
Out[94]: 5.0
Or this:
df.iloc[1,4]
Out[95]: 5.0
Am I correct in understanding that in the first approach I need to specify the column first and then the row, and vice versa when using iloc? I just want to make sure that I use both approaches correctly going forward.
EDIT: Some of the answers below have pointed out that the first approach is not reliable, and I now see why:
df.index = ['7','88']
df[4][1]
Out[101]: 5.0
I still get the correct result. But using ints instead will raise an exception if the corresponding number is no longer in the index:
df.index = [7,88]
df[4][1]
KeyError: 1
Also, changing the column names:
df.columns = ['4','5','6','1','5']
df['4'][1]
Out[108]: 8
Gives me a different result. So overall, I should stick to iloc or loc to avoid these issues.
You should think of a DataFrame as a collection of columns. Therefore when you do df[4] you get the column of df labeled 4, which is a pandas Series. After this, when you do df[4][1] you get the element of this Series labeled 1, which corresponds to the 1st-row, 4th-column entry of the DataFrame, exactly what df.iloc[1, 4] does.
Therefore there is no inconsistency at all, but beware: this will only work if you don't have custom column names, or if your column names are [0, 1, 2, 3, 4]. Otherwise it will either fail or give you a wrong result. Hence, for positional indexing you should stick with iloc, and use loc for label-based indexing.
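To see the failure mode, a small made-up example with string column names (the data is the frame from the question):
import pandas as pd
import numpy as np

df = pd.DataFrame([[1, 2, 3, 4, np.nan], [8, 2, 0, 4, 5]],
                  columns=['a', 'b', 'c', 'd', 'e'])

df.iloc[1, 4]   # -> 5.0, purely positional, works regardless of the names
df[4][1]        # -> KeyError, because there is no column named 4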
Unfortunately, you are not using them correctly. It's just coincidence you get the same result.
df.loc[i, j] means the element in df with the row named i and the column named j
Besides many other differences, df[j] means the column named j, and df[j][i] means the element named i (which is a row here) of the column named j.
df.iloc[i, j] means the element in the i-th row and the j-th column, counting from 0.
So df.loc selects data by label (string, int, or any other type; int in this case), while df.iloc selects data by position. It's just coincidence that in your example the i-th row is also named i.
For more details you should read the docs.
Update:
Think of df[4][1] as a convenience. There is some logic in the background so that under most circumstances you get what you want.
In fact
df.index = ['7', '88']
df[4][1]
works because the dtype of the index is str, and you give an int 1, so it falls back to positional indexing. If you run
df.index = [7, 88]
df[4][1]
it will raise an error. And
df.index = [1, 0]
df[4][1]
still won't be the element you expect, because it's not the 1st row counting from 0; it will be the row with the name 1.
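A short sketch of that last case (same made-up frame as in the question):
import pandas as pd
import numpy as np

df = pd.DataFrame([[1, 2, 3, 4, np.nan], [8, 2, 0, 4, 5]])
df.index = [1, 0]

df[4][1]        # -> nan, the row named 1 (physically the first row)
df.iloc[1, 4]   # -> 5.0, the row at position 1 (physically the second row)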
My question is very similar to this one: Find unique values in a Pandas dataframe, irrespective of row or column location
I am very new to coding, so I apologize in advance for the cringe.
I have a .csv file which I open as a pandas dataframe, and would like to be able to return unique values across the entire dataframe, as well as all unique strings.
I have tried:
for row in df:
    pd.unique(df.values.ravel())
This fails to iterate through rows.
The following code prints what I want:
for index, row in df.iterrows():
    if isinstance(row, object):
        print('%s\n%s' % (index, row))
However, trying to place these values into a previously defined set (myset = set()) fails when I hit a blank column (NoneType error):
for index, row in df.iterrows():
    if isinstance(row, object):
        myset.update(print('%s\n%s' % (index, row)))
I get closest to what I want when I try the following:
for index, row in df.iterrows():
    if isinstance(row, object):
        myset.update('%s\n%s' % (index, row))
However, my set prints out a list of characters rather than the strings/floats/values that appear on my screen when I print above.
Someone please help point out where I fail miserably at this task. Thanks!
I think the following should work for almost any dataframe. It will extract each value that is unique in the entire dataframe.
Post a comment if you encounter a problem, I'll try to solve it.
# Replace all Nones / NaNs by empty strings - so they won't bother us later
df = df.fillna('')

# Prepare a list
list_sets = []

# Iterate over all columns (much faster than over rows)
for col in df.columns:
    # List containing all the unique values of this column
    this_set = list(set(df[col].values))
    # Create a combined list
    list_sets = list_sets + this_set

# Make a set out of the combined list
final_set = list(set(list_sets))

# For completeness, you can remove the empty string introduced by the fillna step
final_set.remove('')
Edit:
I think I know what happens. You must have some float columns, and fillna is failing on those, as the code I gave you was replacing missing values with an empty string. Try these:
df = df.fillna(np.nan)
or
df = df.fillna(0)
For the first option, you'll need to import numpy first (import numpy as np). It must already be installed since you have pandas.
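As a side note, the pd.unique / ravel idea from the question also works across the whole frame in one step; a minimal sketch (the fillna is optional here, it just replaces missing cells with a placeholder before collecting the uniques):
import pandas as pd

final_set = pd.unique(df.fillna(0).values.ravel())   # unique values across the entire DataFrame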