Pandas apply function and update copy of dataframe - python

I have these data frames:
df = pd.DataFrame({'A':[1,2,2,1],'B':[20,21,22,32],'C':[4,5,6,7],'D':[99,98,97,96]})
dfcopy = df.copy()
I want to apply a function to values in df columns 'B' and 'C' based on value in col 'A' and then update the result in corresponding rows in dfcopy.
For example, for each row where 'A' is 2, get the 'B' and 'C' values for that row, apply the function, and store the results in the corresponding rows of dfcopy. Take the first row where 'A' == 2: the value for 'B' is 21 and for 'C' is 5. Assume the function is multiplication by a 2x2 ones matrix: np.dot(np.ones((2,2)), np.array([[21],[5]])). Then we want dfcopy.loc[1,'B'] = 26 and dfcopy.loc[1,'C'] = 26. I then want to repeat this for each distinct value in 'A', until the function has been applied once per unique value of 'A'.
Lastly, I don't want to iterate row by row, check the value in 'A', and apply the function one row at a time. The reason is that the operation depends on the value of 'A' (i.e. the np.ones((2,2)) will be replaced by values from a file corresponding to each value in 'A'), and I don't want to repeat that work for every row.
I'm sure I can force a solution (e.g. by looping and setting values), but I'm guessing there is an elegant way to do this with Pandas API. I just can't find it.

In the example below I picked different matrices so it's obvious that I have applied them.
df = pd.DataFrame({'A':[1,2,2,1],'B':[20,21,22,32],'C':[4,5,6,7],'D':[99,98,97,96]})
matrices = [None,pd.DataFrame([[1,0],[0,0]],index=["B","C"]),pd.DataFrame([[0,0],[0,1]],index=["B","C"])]
df[["B","C"]] = pd.concat((df[df["A"] == i][["B","C"]].dot(matrices[i]) for i in set(df["A"])))
A B C D
0 1 20 0 99
1 2 0 5 98
2 2 0 6 97
3 1 32 0 96
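If the matrices come from files keyed by the value in 'A', a groupby loop loads each matrix once per unique value rather than once per row. Here is a minimal sketch, assuming the matrices live in a dict keyed by the values of 'A' (a placeholder; in practice you would read each one from its file):
import numpy as np
import pandas as pd

df = pd.DataFrame({'A':[1,2,2,1],'B':[20,21,22,32],'C':[4,5,6,7],'D':[99,98,97,96]})
dfcopy = df.copy()
matrices = {1: np.ones((2, 2)), 2: np.ones((2, 2))}  # placeholder per-A matrices

for a, group in df.groupby('A'):
    # X @ M.T applies the 2x2 matrix M to every row's [B, C] vector at once
    dfcopy.loc[group.index, ['B', 'C']] = group[['B', 'C']].to_numpy() @ matrices[a].T
With np.ones((2,2)) this gives dfcopy.loc[1, ['B','C']] == [26, 26], matching the worked example in the question.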

Related

Pandas: add number of unique values to other dataset:

I need to add the number of unique values in column C of the right table to the related row in the left table, matching on the common column A.
Thank you in advance.
Group the second dataset by column A and count the unique values in column C. Merge the result with the first dataset on column A, and rename column C to C-count if needed:
>>> count_df = df2.groupby('A', as_index=False).C.nunique()
>>> output = pd.merge(df1, count_df, on='A')
>>> output.rename(columns={'C':'C-count'}, inplace=True)
>>> output
A B C-count
0 2 22 3
1 3 23 2
2 5 21 1
3 1 24 1
4 6 21 1
Alternatively, use DataFrameGroupBy.nunique with Series.map to build the new column in df1:
df1['C-count'] = df1['A'].map(df2.groupby('A')['C'].nunique())
Unlike the merge above, this keeps every row of df1 and fills NaN where a value of A has no match in df2.
This may not be the most efficient way of doing it, so be careful if your tables are big.
Define the following function:
def c_value(a_value, right_table):
    c_ids = []
    for index, row in right_table.iterrows():
        if row['A'] == a_value:
            if row['C'] not in c_ids:
                c_ids.append(row['C'])
    return len(c_ids)
For this function I'm supposing that right_table is a pandas.DataFrame.
Now, you do the following to build the new column (assuming that the left table is a pandas.Dataframe):
new_column = []
for index, row in left_table.iterrows():
    new_column.append(c_value(row['A'], right_table))
left_table["C-count"] = new_column
After this, the left_table DataFrame should be the desired one (as far as I understand what you need).

How are iloc and loc different? [duplicate]

Can someone explain how these two methods of slicing are different?
I've seen the docs,
and I've seen these answers, but I still find myself unable to understand how the three are different. To me, they seem interchangeable in large part, because they are at the lower levels of slicing.
For example, say we want to get the first five rows of a DataFrame. How is it that these two work?
df.loc[:5]
df.iloc[:5]
Can someone present three cases where the distinction in uses are clearer?
Once upon a time, I also wanted to know how these two functions differ from df.ix[:5], but ix was removed in pandas 1.0, so I don't care anymore.
Label vs. Location
The main distinction between the two methods is:
loc gets rows (and/or columns) with particular labels.
iloc gets rows (and/or columns) at integer locations.
To demonstrate, consider a series s of characters with a non-monotonic integer index:
>>> s = pd.Series(list("abcdef"), index=[49, 48, 47, 0, 1, 2])
49 a
48 b
47 c
0 d
1 e
2 f
>>> s.loc[0] # value at index label 0
'd'
>>> s.iloc[0] # value at index location 0
'a'
>>> s.loc[0:1] # rows at index labels between 0 and 1 (inclusive)
0 d
1 e
>>> s.iloc[0:1] # rows at index location between 0 and 1 (exclusive)
49 a
Here are some of the differences/similarities between s.loc and s.iloc when passed various objects:
0 (single item):
    s.loc -> value at index label 0 (the string 'd')
    s.iloc -> value at index location 0 (the string 'a')
0:1 (slice):
    s.loc -> two rows (labels 0 and 1)
    s.iloc -> one row (the first row, at location 0)
1:47 (slice with out-of-bounds end):
    s.loc -> zero rows (empty Series)
    s.iloc -> five rows (location 1 onwards)
1:47:-1 (slice with negative step):
    s.loc -> three rows (labels 1 back to 47)
    s.iloc -> zero rows (empty Series)
[2, 0] (integer list):
    s.loc -> two rows with the given labels
    s.iloc -> two rows at the given locations
s > 'e' (boolean Series indicating which values have the property):
    s.loc -> one row (containing 'f')
    s.iloc -> NotImplementedError
(s > 'e').values (boolean array):
    s.loc -> one row (containing 'f')
    s.iloc -> same as loc
999 (int object not in index):
    s.loc -> KeyError
    s.iloc -> IndexError (out of bounds)
-1 (int object not in index):
    s.loc -> KeyError
    s.iloc -> returns the last value in s
lambda x: x.index[3] (callable applied to the series; here returning the 3rd item of the index):
    s.loc -> equivalent to s.loc[s.index[3]]
    s.iloc -> equivalent to s.iloc[s.index[3]]
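For instance, the callable entry above plays out like this with the same s:
>>> s.loc[lambda x: x.index[3]]   # the callable returns the label 0, so this is s.loc[0]
'd'
>>> s.iloc[lambda x: x.index[3]]  # here the returned 0 is treated as a position
'a'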
loc's label-querying capabilities extend well beyond integer indexes, and it's worth highlighting a couple of additional examples.
Here's a Series where the index contains string objects:
>>> s2 = pd.Series(s.index, index=s.values)
>>> s2
a 49
b 48
c 47
d 0
e 1
f 2
Since loc is label-based, it can fetch the first value in the Series using s2.loc['a']. It can also slice with non-integer objects:
>>> s2.loc['c':'e'] # all rows lying between 'c' and 'e' (inclusive)
c 47
d 0
e 1
For DateTime indexes, we don't need to pass the exact date/time to fetch by label. For example:
>>> s3 = pd.Series(list('abcde'), pd.date_range('now', periods=5, freq='M'))
>>> s3
2021-01-31 16:41:31.879768 a
2021-02-28 16:41:31.879768 b
2021-03-31 16:41:31.879768 c
2021-04-30 16:41:31.879768 d
2021-05-31 16:41:31.879768 e
Then to fetch the row(s) for March/April 2021 we only need:
>>> s3.loc['2021-03':'2021-04']
2021-03-31 17:04:30.742316 c
2021-04-30 17:04:30.742316 d
Rows and Columns
loc and iloc work the same way with DataFrames as they do with Series. It's useful to note that both methods can address columns and rows together.
When given a tuple, the first element is used to index the rows and, if it exists, the second element is used to index the columns.
Consider the DataFrame defined below:
>>> import numpy as np
>>> df = pd.DataFrame(np.arange(25).reshape(5, 5),
...                    index=list('abcde'),
...                    columns=['x', 'y', 'z', 8, 9])
>>> df
x y z 8 9
a 0 1 2 3 4
b 5 6 7 8 9
c 10 11 12 13 14
d 15 16 17 18 19
e 20 21 22 23 24
Then for example:
>>> df.loc['c': , :'z'] # rows 'c' and onwards AND columns up to 'z'
x y z
c 10 11 12
d 15 16 17
e 20 21 22
>>> df.iloc[:, 3] # all rows, but only the column at index location 3
a 3
b 8
c 13
d 18
e 23
Sometimes we want to mix label and positional indexing methods for the rows and columns, somehow combining the capabilities of loc and iloc.
For example, consider the following DataFrame. How best to slice the rows up to and including 'c' and take the first four columns?
>>> import numpy as np
>>> df = pd.DataFrame(np.arange(25).reshape(5, 5),
...                    index=list('abcde'),
...                    columns=['x', 'y', 'z', 8, 9])
>>> df
x y z 8 9
a 0 1 2 3 4
b 5 6 7 8 9
c 10 11 12 13 14
d 15 16 17 18 19
e 20 21 22 23 24
We can achieve this result using iloc and the help of another method:
>>> df.iloc[:df.index.get_loc('c') + 1, :4]
x y z 8
a 0 1 2 3
b 5 6 7 8
c 10 11 12 13
get_loc() is an index method meaning "get the position of the label in this index". Note that since slicing with iloc is exclusive of its endpoint, we must add 1 to this value if we want row 'c' as well.
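An alternative that avoids the +1 adjustment is to go the other way and convert the column positions into labels, so that a single .loc call covers both axes:
>>> df.loc[:'c', df.columns[:4]]  # rows up to and including 'c'; first four column labels
x y z 8
a 0 1 2 3
b 5 6 7 8
c 10 11 12 13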
iloc works based on integer positioning. So no matter what your row labels are, you can always, e.g., get the first row by doing
df.iloc[0]
or the last five rows by doing
df.iloc[-5:]
You can also use it on the columns. This retrieves the 3rd column:
df.iloc[:, 2] # the : in the first position indicates all rows
You can combine them to get intersections of rows and columns:
df.iloc[:3, :3] # The upper-left 3 X 3 entries (assuming df has 3+ rows and columns)
On the other hand, .loc uses named indices. Let's set up a data frame with strings as row and column labels:
df = pd.DataFrame(index=['a', 'b', 'c'], columns=['time', 'date', 'name'])
Then we can get the first row by
df.loc['a'] # equivalent to df.iloc[0]
and the second two rows of the 'date' column by
df.loc['b':, 'date'] # equivalent to df.iloc[1:, 1]
and so on. Now, it's probably worth pointing out that the default row and column indices for a DataFrame are integers from 0 and in this case iloc and loc would work in the same way. This is why your three examples are equivalent. If you had a non-numeric index such as strings or datetimes, df.loc[:5] would raise an error.
Also, you can do column retrieval just by using the data frame's __getitem__:
df['time'] # equivalent to df.loc[:, 'time']
Now suppose you want to mix position and named indexing, that is, indexing using names on rows and positions on columns (to clarify, I mean select from our data frame, rather than creating a data frame with strings in the row index and integers in the column index). This is where .ix comes in:
df.ix[:2, 'time'] # the first two rows of the 'time' column
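Since .ix has been removed from modern pandas, the equivalent today is to translate the positions into labels first, for example:
df.loc[df.index[:2], 'time']  # first two row labels, then an ordinary .loc lookup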
I think it's also worth mentioning that you can pass boolean vectors to the loc method as well. For example:
b = [True, False, True]
df.loc[b]
Will return the 1st and 3rd rows of df. This is equivalent to df[b] for selection, but it can also be used for assigning via boolean vectors:
df.loc[b, 'name'] = 'Mary', 'John'
In my opinion, the accepted answer is confusing, since it uses a DataFrame with only missing values. I also do not like the term position-based for .iloc and instead, prefer integer location as it is much more descriptive and exactly what .iloc stands for. The key word is INTEGER - .iloc needs INTEGERS.
.ix is deprecated and ambiguous and should never be used
Because .ix is deprecated we will only focus on the differences between .loc and .iloc.
Before we talk about the differences, it is important to understand that DataFrames have labels that help identify each column and each index. Let's take a look at a sample DataFrame:
df = pd.DataFrame({'age': [30, 2, 12, 4, 32, 33, 69],
                   'color': ['blue', 'green', 'red', 'white', 'gray', 'black', 'red'],
                   'food': ['Steak', 'Lamb', 'Mango', 'Apple', 'Cheese', 'Melon', 'Beans'],
                   'height': [165, 70, 120, 80, 180, 172, 150],
                   'score': [4.6, 8.3, 9.0, 3.3, 1.8, 9.5, 2.2],
                   'state': ['NY', 'TX', 'FL', 'AL', 'AK', 'TX', 'TX']},
                  index=['Jane', 'Nick', 'Aaron', 'Penelope', 'Dean', 'Christina', 'Cornelia'])
The labels age, color, food, height, score and state are used for the columns. The other labels, Jane, Nick, Aaron, Penelope, Dean, Christina and Cornelia, are used for the index.
The primary ways to select particular rows in a DataFrame are with the .loc and .iloc indexers. Each of these indexers can also be used to simultaneously select columns, but it is easier to just focus on rows for now. Each indexer uses a set of brackets immediately following its name to make its selections.
.loc selects data only by labels
We will first talk about the .loc indexer, which only selects data by the index or column labels. In our sample DataFrame, we have provided meaningful names as values for the index. Many DataFrames will not have any meaningful names and will instead default to just the integers from 0 to n-1, where n is the length of the DataFrame.
There are three different inputs you can use for .loc
A string
A list of strings
Slice notation using strings as the start and stop values
Selecting a single row with .loc with a string
To select a single row of data, place the index label inside of the brackets following .loc.
df.loc['Penelope']
This returns the row of data as a Series
age 4
color white
food Apple
height 80
score 3.3
state AL
Name: Penelope, dtype: object
Selecting multiple rows with .loc with a list of strings
df.loc[['Cornelia', 'Jane', 'Dean']]
This returns a DataFrame with the rows in the order specified in the list.
Selecting multiple rows with .loc with slice notation
Slice notation is defined by start, stop and step values. When slicing by label, pandas includes the stop value in the return. The following slices from Aaron to Dean, inclusive; the step size is not explicitly defined, so it defaults to 1.
df.loc['Aaron':'Dean']
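This returns Aaron, Penelope and Dean:
age color food height score state
Aaron 12 red Mango 120 9.0 FL
Penelope 4 white Apple 80 3.3 AL
Dean 32 gray Cheese 180 1.8 AK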
Complex slices can be taken in the same manner as Python lists.
.iloc selects data only by integer location
Let's now turn to .iloc. Every row and column of data in a DataFrame has an integer location that defines it. This is in addition to the label that is visually displayed in the output. The integer location is simply the number of rows/columns from the top/left beginning at 0.
There are three different inputs you can use for .iloc
An integer
A list of integers
Slice notation using integers as the start and stop values
Selecting a single row with .iloc with an integer
df.iloc[4]
This returns the 5th row (integer location 4) as a Series
age 32
color gray
food Cheese
height 180
score 1.8
state AK
Name: Dean, dtype: object
Selecting multiple rows with .iloc with a list of integers
df.iloc[[2, -2]]
This returns a DataFrame of the third and second-to-last rows.
Selecting multiple rows with .iloc with slice notation
df.iloc[:5:3]
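This takes every third row of the first five, i.e. Jane and Penelope:
age color food height score state
Jane 30 blue Steak 165 4.6 NY
Penelope 4 white Apple 80 3.3 AL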
Simultaneous selection of rows and columns with .loc and .iloc
One excellent ability of both .loc/.iloc is their ability to select both rows and columns simultaneously. In the examples above, all the columns were returned from each selection. We can choose columns with the same types of inputs as we do for rows. We simply need to separate the row and column selection with a comma.
For example, we can select rows Jane, and Dean with just the columns height, score and state like this:
df.loc[['Jane', 'Dean'], 'height':]
This uses a list of labels for the rows and slice notation for the columns.
We can naturally do similar operations with .iloc using only integers.
df.iloc[[1,4], 2]
Nick Lamb
Dean Cheese
Name: food, dtype: object
Simultaneous selection with labels and integer location
.ix was used to make selections with a mix of labels and integer locations, which was useful but confusing and at times ambiguous, and thankfully it has been deprecated. If you need to make a selection with a mix of labels and integer locations, convert them so that both selections use labels or both use integer locations.
For instance, if we want to select rows Nick and Cornelia along with columns 2 and 4, we could use .loc by converting the integers to labels with the following:
col_names = df.columns[[2, 4]]
df.loc[['Nick', 'Cornelia'], col_names]
Or alternatively, convert the index labels to integers with the get_loc index method.
labels = ['Nick', 'Cornelia']
index_ints = [df.index.get_loc(label) for label in labels]
df.iloc[index_ints, [2, 4]]
Boolean Selection
The .loc indexer can also do boolean selection. For instance, if we are interested in finding all the rows where age is above 30 and returning just the food and score columns, we can do the following:
df.loc[df['age'] > 30, ['food', 'score']]
You can replicate this with .iloc but you cannot pass it a boolean series. You must convert the boolean Series into a numpy array like this:
df.iloc[(df['age'] > 30).values, [2, 4]]
Selecting all rows
It is possible to use .loc/.iloc for just column selection. You can select all the rows by using a colon like this:
df.loc[:, 'color':'score':2]
The indexing operator, [], can select rows and columns too but not simultaneously.
Most people are familiar with the primary purpose of the DataFrame indexing operator, which is to select columns. A string selects a single column as a Series and a list of strings selects multiple columns as a DataFrame.
df['food']
Jane Steak
Nick Lamb
Aaron Mango
Penelope Apple
Dean Cheese
Christina Melon
Cornelia Beans
Name: food, dtype: object
Using a list selects multiple columns
df[['food', 'score']]
What people are less familiar with is that when slice notation is used, selection happens by row labels or by integer location. This is very confusing and something I almost never use, but it does work.
df['Penelope':'Christina'] # slice rows by label
df[2:6:2] # slice rows by integer location
The explicitness of .loc/.iloc for selecting rows is highly preferred. The indexing operator alone is unable to select rows and columns simultaneously.
df[3:5, 'color']
TypeError: unhashable type: 'slice'
.loc and .iloc are used for indexing, i.e., to pull out portions of data. In essence, the difference is that .loc allows label-based indexing, while .iloc allows position-based indexing.
If you get confused by .loc and .iloc, keep in mind that .iloc is based on the index (starting with i) position, while .loc is based on the label (starting with l).
.loc
.loc is supposed to be based on the index labels and not the positions, so it is analogous to Python dictionary-based indexing. However, it can accept boolean arrays, slices, and a list of labels (none of which work with a Python dictionary).
iloc
.iloc does the lookup based on index position, i.e., pandas behaves similarly to a Python list. pandas will raise an IndexError if there is no index at that location.
Examples
The following examples are presented to illustrate the differences between .iloc and .loc. Let's consider the following series:
>>> s = pd.Series([11, 9], index=["1990", "1993"], name="Magic Numbers")
>>> s
1990 11
1993 9
Name: Magic Numbers, dtype: int64
.iloc Examples
>>> s.iloc[0]
11
>>> s.iloc[-1]
9
>>> s.iloc[4]
Traceback (most recent call last):
...
IndexError: single positional indexer is out-of-bounds
>>> s.iloc[0:3] # slice
1990 11
1993 9
Name: Magic Numbers, dtype: int64
>>> s.iloc[[0,1]] # list
1990 11
1993 9
Name: Magic Numbers, dtype: int64
.loc Examples
>>> s.loc['1990']
11
>>> s.loc['1970']
Traceback (most recent call last):
...
KeyError: 'the label [1970] is not in the [index]'
>>> mask = s > 9
>>> s.loc[mask]
1990 11
Name: Magic Numbers, dtype: int64
>>> s.loc['1990':] # slice
1990 11
1993 9
Name: Magic Numbers, dtype: int64
Because s has string index values, .loc will fail when
indexing with an integer:
>>> s.loc[0]
Traceback (most recent call last):
...
KeyError: 0
This example will illustrate the difference:
df = pd.DataFrame({'col1': [1,2,3,4,5], 'col2': ["foo", "bar", "baz", "foobar", "foobaz"]})
col1 col2
0 1 foo
1 2 bar
2 3 baz
3 4 foobar
4 5 foobaz
df = df.sort_values('col1', ascending = False)
col1 col2
4 5 foobaz
3 4 foobar
2 3 baz
1 2 bar
0 1 foo
Position-based access:
df.iloc[0, 0:2]
col1 5
col2 foobaz
Name: 4, dtype: object
We get the first row of the sorted dataframe. (This is not the row with index label 0, but the one with label 4.)
Label-based access:
df.loc[0, 'col1':'col2']
col1 1
col2 foo
Name: 0, dtype: object
We get the row with index label 0, even though the df is sorted.
DataFrame.loc : selects rows by index label
DataFrame.iloc : selects rows by row number (integer position)
Example:
Select first 5 rows of a table, df1 is your dataframe
df1.iloc[:5]
Select the rows with index labels 'A' and 'B', where df1 is your dataframe
df1.loc[['A','B']]

Question about drop=True in pd.dataframe.reset_index()

In a Pandas dataframe, it's possible to reset the index using the reset_index() method. One optional argument is drop=True which according to the documentation:
drop : bool, default False
Do not try to insert index into dataframe columns.
This resets the index to the default integer index.
My question is, what does the first sentence mean? Will it try to convert an integer index to a new column in my df if I leave it False?
Also, will my row order be preserved or should I also sort to ensure proper ordering?
As you can see below, df.reset_index() will move the index into the dataframe as a column. If the index was just a generic numerical index, you probably don't care about it and can just discard it. Below is a simple dataframe, but I dropped the first row just to have differing values in the index.
df = pd.DataFrame([['a', 10], ['b', 20], ['c', 30], ['d', 40]], columns=['letter','number'])
df = df[df.number > 10]
print(df)
# letter number
# 1 b 20
# 2 c 30
# 3 d 40
Default behavior now shows a column named index which was the previous index. You can see that df['index'] matches the index from above, but the index has been renumbered starting from 0.
print(df.reset_index())
# index letter number
# 0 1 b 20
# 1 2 c 30
# 2 3 d 40
drop=True discards the old index instead of keeping it, and just gives you a fresh default index.
print(df.reset_index(drop=True))
# letter number
# 0 b 20
# 1 c 30
# 2 d 40
Regarding row order, reset_index preserves it: only the index labels are replaced, so no extra sorting is needed. If you are performing an aggregate function, though, you still want to make sure the data is ordered properly for the aggregation.
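If you do want to keep the old index as a column but under a clearer name, newer pandas (1.5+) lets you pass names= to reset_index:
print(df.reset_index(names='old_index'))
# old_index letter number
# 0 1 b 20
# 1 2 c 30
# 2 3 d 40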

How to get values on one dataframe based on the position of a value in other dataframe

I have two dataframes with the same size.
df1
1 5 3
6 5 1
2 4 9
df2
a b c
d e f
g h i
I want to get the value in df2 that sits at the same position as the maximum of each row of df1. For example, row 0 has its maximum at position [0,1], so I'd like to get the element at [0,1] of df2 in return.
Desired result would be:
df3
b
d
i
Thank you so much!
Don't use for loops; NumPy indexing is handy here:
vals = df2.values[np.arange(len(df2)), df1.values.argmax(1)]
df1.values.argmax(1) gives the column position of each row's maximum, and pairing it with np.arange(len(df2)) picks the matching element from each row of df2. Of course, you can then wrap it up: df3 = pd.DataFrame(vals, columns=['col'])
col
0 b
1 d
2 i
S = df1.idxmax(axis=1)  # column label of each row's maximum
df3 = pd.DataFrame(index=df1.index, columns=['col'])
for p in range(len(df1)):
    df3.iloc[p, 0] = df2.iloc[p, df2.columns.get_loc(S.iloc[p])]
Try this code (note axis=1, which gives the column of each row's maximum; with the default integer column labels these double as positions for iloc):
>>> for i, j in enumerate(df1.idxmax(axis=1)):
...     print(df2.iloc[i, j])
...
b
d
i
idxmax gives the index label of the maximum value, column-wise by default (axis=0) or row-wise with axis=1.
Your problem has two parts:
1- Finding the maximum value of each row
2- Choosing the maximum column of each row with values found in step one
You can easily use the lookup function. Its first argument is the row labels, and its second is the column label to pick in each row, found in step one with idxmax(axis=1):
df2.lookup(range(len(df1)), df1.idxmax(axis=1)) # output => array(['b', 'd', 'i'], dtype=object)
If an array does not work for you, you can also create a data frame from these values by simply passing them to pd.DataFrame:
pd.DataFrame(df2.lookup(range(len(df1)), df1.idxmax(axis=1)))
One good feature of this solution is that it avoids loops, which makes it efficient.
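Note that DataFrame.lookup was deprecated in pandas 1.2 and removed in 2.0. On current versions the same result can be obtained with plain NumPy indexing, assuming df1 and df2 share the same column labels as in this example:
import numpy as np
rows = np.arange(len(df1))
cols = df2.columns.get_indexer(df1.idxmax(axis=1))  # label -> position for each row's max
vals = df2.to_numpy()[rows, cols]  # array(['b', 'd', 'i'], dtype=object)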

Function within a function involving each column of a DataFrame in Python

As the question states, I'm trying to learn how to run a function on each element belonging to a column within a DataFrame without having to define that column directly. The point is that I would like to be able to enter any given set of DataFrame and find each element within each column that fulfills a particular condition.
The sample that I've included illustrates what I'm trying to do. I know the code below doesn't work; I thought that writing def fun(dataframe[column]) would do the trick, but that syntax is incorrect, unfortunately.
Basically, the reason for this is that I have multiple sets of data where I'd like to locate each element that is above a set threshold.
Thanks a lot in advance!
df=pd.DataFrame(np.random.randint(0,100,size=(3, 3)), columns=list('ABC'))
def fun(dataframe):
    for column in dataframe:
        def fun(column):
            mean = sum(column)/len(column)
            print(mean)
            for element in column:
                if element < mean*1.1:
                    element = 0
                    print(element)
fun(df)
As @MadPhysicist mentioned in a comment, pandas was created to reduce the need for explicit for-looping.
If I understand your specific case correctly, you intend to replace with zero any element that is less than 1.1 times the mean value of its column. Here's one way to do that in idiomatic pandas:
# Set a random seed for repeatability
np.random.seed(314159)
# Create example data
df = pd.DataFrame(np.random.randint(0,100,size=(3, 3)), columns=list('ABC'))
df
A B C
0 11 34 93
1 79 0 81
2 66 43 71
# By default, df.mean() computes the mean of each numeric column (not row)
df.mean()
A 52.000000
B 25.666667
C 81.666667
dtype: float64
# We can use boolean indexing to replace values less than
# 1.1 * column mean with zero
# docs: https://pandas.pydata.org/pandas-docs/stable/indexing.html#boolean-indexing
df[df < 1.1 * df.mean()] = 0
df
A B C
0 0 34 93
1 79 0 0
2 66 43 0
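An equivalent, non-mutating spelling uses DataFrame.where, which keeps values where the condition holds and substitutes the second argument elsewhere:
result = df.where(df >= 1.1 * df.mean(), 0)  # same values as the in-place boolean assignment above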
