Can a list comprehension be divided into two lists? - python

I think I've got the idea of the one-line for loop (list comprehension), but now I have a problem. I know I can define a dataframe column like this:
df = pd.DataFrame(columns=["columnA"])
list = [0, 1, 2, 3, 4]
df["columnA"] = [i for i in list]
Now my question is: Is it possible to define 2 columns in a one-line for loop?
I've tried this:
df["columnA"], df["columnB"] = [i, i**2 for i in list]
df["columnA"], df["columnB"] = [[i, i**2] for i in list]
Neither of these worked. I'm using Python 3.10.

You have to zip your output (the input list is named lst here to avoid shadowing the built-in list):
df['A'], df['B'] = zip(*[(i, i**2) for i in lst])
print(df)
# Output
   A   B
0  0   0
1  1   1
2  2   4
3  3   9
4  4  16
You can also use np.array:
df[['A', 'B']] = np.array([(i, i**2) for i in lst])
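As a further variant (a sketch; the short names A/B and lst mirror the snippets above rather than the question's columnA/columnB), the pairs can also be passed straight to the DataFrame constructor:

import pandas as pd

lst = [0, 1, 2, 3, 4]
# Each (i, i**2) pair becomes one row; column names are given explicitly.
df = pd.DataFrame([(i, i**2) for i in lst], columns=['A', 'B'])
print(df)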

Here is my solution to your problem:
1: Column creation
Create the columns together with the dataframe; it is much faster than adding columns later.
list = [0, 1, 2, 3, 4]
df = pd.DataFrame({
    "columnA": list,
    "columnB": [i**2 for i in list]
})
By testing it with %%timeit we obtain:
161 µs ± 4.41 µs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
Now, let's check your version:
df = pd.DataFrame(columns=["columnA"])
list = [0, 1, 2, 3, 4]
df["columnA"] = [i for i in list]
df["columnB"] = [i**2 for i in list]
1.58 ms ± 72.3 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
Your version is roughly 10x slower.
2: Using .assign
If you cannot create all columns when the dataframe is created, you can create multiple columns with a single method by using .assign:
df = pd.DataFrame({
    "columnA": [i for i in list]
}).assign(
    columnB=[i**2 for i in list],
    columnC=[i**3 for i in list]
)
3: Single for
If you really want to use a single for, you can build the data first and the dataframe later:
data = [
    {
        "columnA": i,
        "columnB": i**2
    } for i in list
]
df = pd.DataFrame(data)
Finally, list is a Python built-in, so you should avoid overwriting it. You would lose access to the actual type/constructor, so these won't work:
list(iter([1,2,3])) (converts an iterable into a list)
isinstance([1,2,3], list) (checks that the variable is of the list type)
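A tiny illustration of what breaks when the built-in is shadowed (hypothetical snippet):

list = [0, 1, 2, 3, 4]   # shadows the built-in list type
# list(range(3)) would now raise: TypeError: 'list' object is not callable
del list                 # removes the shadowing name again
print(list(range(3)))    # [0, 1, 2]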

Right now your code is overwriting what's in Column A.
df["columnB"], df['columnA'] = [i**2 for i in list], [i for i in list]
The above answer is much better than mine. Learned something new today.

Related

How can I insert a single value into a Pandas dataframe at a given location? [duplicate]

I have created a Pandas DataFrame
df = DataFrame(index=['A','B','C'], columns=['x','y'])
and have got this
x y
A NaN NaN
B NaN NaN
C NaN NaN
Now, I would like to assign a value to particular cell, for example to row C and column x.
I would expect to get this result:
x y
A NaN NaN
B NaN NaN
C 10 NaN
with this code:
df.xs('C')['x'] = 10
However, the contents of df has not changed. The dataframe contains yet again only NaNs.
Any suggestions?
RukTech's answer, df.set_value('C', 'x', 10), is far and away faster than the options I've suggested below. However, it has been slated for deprecation.
Going forward, the recommended method is .iat/.at.
Why df.xs('C')['x']=10 does not work:
df.xs('C') by default, returns a new dataframe with a copy of the data, so
df.xs('C')['x']=10
modifies this new dataframe only.
df['x'] returns a view of the df dataframe, so
df['x']['C'] = 10
modifies df itself.
Warning: It is sometimes difficult to predict if an operation returns a copy or a view. For this reason the docs recommend avoiding assignments with "chained indexing".
So the recommended alternative is
df.at['C', 'x'] = 10
which does modify df.
In [18]: %timeit df.set_value('C', 'x', 10)
100000 loops, best of 3: 2.9 µs per loop
In [20]: %timeit df['x']['C'] = 10
100000 loops, best of 3: 6.31 µs per loop
In [81]: %timeit df.at['C', 'x'] = 10
100000 loops, best of 3: 9.2 µs per loop
Update: The .set_value method is going to be deprecated. .iat/.at are good replacements; unfortunately, pandas provides little documentation for them.
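A minimal, self-contained sketch of that recommended fix, using the same toy frame as the question:

import pandas as pd

df = pd.DataFrame(index=['A', 'B', 'C'], columns=['x', 'y'])
df.at['C', 'x'] = 10   # label-based scalar assignment; modifies df in place
print(df)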
The fastest way to do this is using set_value. This method is ~100 times faster than .ix method. For example:
df.set_value('C', 'x', 10)
You can also use a conditional lookup using .loc as seen here:
df.loc[df[<some_column_name>] == <condition>, [<another_column_name>]] = <value_to_add>
where <some_column_name> is the column you want to check the <condition> variable against, and <another_column_name> is the column you want to add to (it can be a new column or one that already exists). <value_to_add> is the value you want to add to that column/row.
This example doesn't match the question at hand precisely, but it might be useful for someone who wants to add a specific value based on a condition.
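A small, self-contained sketch of that conditional pattern (the frame, column names, and condition are invented for illustration):

import pandas as pd

df = pd.DataFrame({'x': [1, 2, 3], 'y': [10, 20, 30]})
# Where column 'x' equals 2, write 99 into column 'y' (a new column works the same way).
df.loc[df['x'] == 2, ['y']] = 99
print(df)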
Try using df.loc[row_index,col_indexer] = value
The recommended way (according to the maintainers, at the time this was written) to set a value was:
df.ix['C','x'] = 10
Note that .ix has since been deprecated and removed; use .loc or .at instead.
Using 'chained indexing' (df['x']['C']) may lead to problems.
See:
https://stackoverflow.com/a/21287235/1579844
http://pandas.pydata.org/pandas-docs/dev/indexing.html#indexing-view-versus-copy
https://github.com/pydata/pandas/pull/6031
This is the only thing that worked for me!
df.loc['C', 'x'] = 10
Learn more about .loc here.
To set values, use:
df.at[0, 'clm1'] = 0
It is the fastest recommended method for setting values. set_value and ix have been deprecated. It raises no warning, unlike iloc and loc. .iat/.at is the good solution.
Supposing you have this simple data_frame:
A B C
0 1 8 4
1 3 9 6
2 22 33 52
If we want to modify the value of the cell [0, "A"], you can use one of these solutions:
df.iat[0,0] = 2
df.at[0,'A'] = 2
And here is a complete example of how to use iat to get and set the value of a cell:
def prepossessing(df):
    for index in range(0, len(df)):
        df.iat[index, 0] = df.iat[index, 0] * 2
    return df
y_train before :
0
0 54
1 15
2 15
3 8
4 31
5 63
6 11
y_train after calling the prepossessing function, which uses iat to multiply the value of each cell by 2:
0
0 108
1 30
2 30
3 16
4 62
5 126
6 22
I would suggest:
df.loc[index_position, "column_name"] = some_value
To modify multiple cells at the same time:
df.loc[start_idx_pos:end_idx_pos, "column_name"] = some_value
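A small sketch of that slice assignment (the index positions and column name are placeholders invented for the example; note that .loc slices are inclusive of the end label):

import pandas as pd

df = pd.DataFrame({"column_name": range(5)})
start_idx_pos, end_idx_pos = 1, 3
df.loc[start_idx_pos:end_idx_pos, "column_name"] = 42  # rows 1, 2 and 3 are set
print(df)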
Avoid Assignment with Chained Indexing
You are dealing with an assignment with chained indexing, which will result in a SettingWithCopy warning. This should be avoided altogether.
Your assignment will have to resort to one single .loc[] or .iloc[] slice, as explained here. Hence, in your case:
df.loc['C', 'x'] = 10
In my example I just change it in the selected cell:
for index, row in result.iterrows():
    if np.isnan(row['weight']):
        result.at[index, 'weight'] = 0.0
'result' is a dataframe with a column 'weight'.
Here is a summary of the valid solutions provided by all users, for data frames indexed by integer and string.
df.iloc, df.loc and df.at work for both types of data frames; df.iloc only works with row/column integer positions, while df.loc and df.at support setting values using column names and/or integer indices.
When the specified index does not exist, both df.loc and df.at would append the newly inserted rows/columns to the existing data frame, but df.iloc would raise "IndexError: positional indexers are out-of-bounds". A working example tested in Python 2.7 and 3.7 is as follows:
import numpy as np, pandas as pd
df1 = pd.DataFrame(index=np.arange(3), columns=['x','y','z'])
df1['x'] = ['A','B','C']
df1.at[2,'y'] = 400
# rows/columns specified does not exist, appends new rows/columns to existing data frame
df1.at['D','w'] = 9000
df1.loc['E','q'] = 499
# using df[<some_column_name>] == <condition> to retrieve target rows
df1.at[df1['x']=='B', 'y'] = 10000
df1.loc[df1['x']=='B', ['z','w']] = 10000
# using a list of index to setup values
df1.iloc[[1,2,4], 2] = 9999
df1.loc[[0,'D','E'],'w'] = 7500
df1.at[[0,2,"D"],'x'] = 10
df1.at[:, ['y', 'w']] = 8000
df1
>>> df1
x y z w q
0 10 8000 NaN 8000 NaN
1 B 8000 9999 8000 NaN
2 10 8000 9999 8000 NaN
D 10 8000 NaN 8000 NaN
E NaN 8000 9999 8000 499.0
you can use .iloc.
df.iloc[[2], [0]] = 10
set_value() is deprecated.
Starting from the release 0.23.4, Pandas "announces the future"...
>>> df
Cars Prices (U$)
0 Audi TT 120.0
1 Lamborghini Aventador 245.0
2 Chevrolet Malibu 190.0
>>> df.set_value(2, 'Prices (U$)', 240.0)
__main__:1: FutureWarning: set_value is deprecated and will be removed in a future release.
Please use .at[] or .iat[] accessors instead
Cars Prices (U$)
0 Audi TT 120.0
1 Lamborghini Aventador 245.0
2 Chevrolet Malibu 240.0
Considering this advice, here's a demonstration of how to use them:
by row/column integer positions
>>> df.iat[1, 1] = 260.0
>>> df
Cars Prices (U$)
0 Audi TT 120.0
1 Lamborghini Aventador 260.0
2 Chevrolet Malibu 240.0
by row/column labels
>>> df.at[2, "Cars"] = "Chevrolet Corvette"
>>> df
Cars Prices (U$)
0 Audi TT 120.0
1 Lamborghini Aventador 260.0
2 Chevrolet Corvette 240.0
References:
pandas.DataFrame.iat
pandas.DataFrame.at
One way to use an index with a condition is to first get the index of all the rows that satisfy your condition, and then simply use those row indexes in multiple ways.
conditional_index = df.loc[ df['col name'] <condition> ].index
Example conditions are like
==5, >10 , =="Any string", >= DateTime
Then you can use these row indexes in a variety of ways, like
Replace the value of one column for conditional_index
df.loc[conditional_index, ['col name']] = <new value>
Replace the value of multiple columns for conditional_index
df.loc[conditional_index, ['col1','col2']] = <new value>
One benefit of saving the conditional_index is that you can assign the value of one column to another column with the same row index
df.loc[conditional_index, ['col1','col2']] = df.loc[conditional_index, 'col name']
This is all possible because .index returns an array of indexes which .loc can use with direct addressing, so it avoids traversing again and again.
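A runnable sketch of that idea (column names, values, and the condition are placeholders for the example):

import pandas as pd

df = pd.DataFrame({'col name': [3, 5, 5, 8], 'col1': [0, 0, 0, 0], 'col2': [0, 0, 0, 0]})
# Evaluate the condition once and keep the matching index around.
conditional_index = df.loc[df['col name'] == 5].index
# Reuse it for several assignments without re-evaluating the condition.
df.loc[conditional_index, ['col1', 'col2']] = 99
df.loc[conditional_index, 'col2'] = df.loc[conditional_index, 'col name']
print(df)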
I tested this, and df.set_value is a little faster, but the official method df.at looks like the fastest non-deprecated way to do it.
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.rand(100, 100))
%timeit df.iat[50,50]=50 # ✓
%timeit df.at[50,50]=50 # ✔
%timeit df.set_value(50,50,50) # will deprecate
%timeit df.iloc[50,50]=50
%timeit df.loc[50,50]=50
7.06 µs ± 118 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
5.52 µs ± 64.2 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
3.68 µs ± 80.8 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
98.7 µs ± 1.07 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
109 µs ± 1.42 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Note this is setting the value of a single cell. For vectors, loc and iloc should be better options since they are vectorized.
If one wants to change the cell in the position (0,0) of the df to a string such as '"236"76"', the following options will do the work:
df[0][0] = '"236"76"'
# %timeit df[0][0] = '"236"76"'
# 938 µs ± 83.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
Or using pandas.DataFrame.at
df.at[0, 0] = '"236"76"'
# %timeit df.at[0, 0] = '"236"76"'
#15 µs ± 2.09 µs per loop (mean ± std. dev. of 7 runs, 100000 loops each)
Or using pandas.DataFrame.iat
df.iat[0, 0] = '"236"76"'
# %timeit df.iat[0, 0] = '"236"76"'
# 41.1 µs ± 3.09 µs per loop (mean ± std. dev. of 7 runs, 10000 loops each)
Or using pandas.DataFrame.loc
df.loc[0, 0] = '"236"76"'
# %timeit df.loc[0, 0] = '"236"76"'
# 5.21 ms ± 401 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Or using pandas.DataFrame.iloc
df.iloc[0, 0] = '"236"76"'
# %timeit df.iloc[0, 0] = '"236"76"'
# 5.12 ms ± 300 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
If time is of relevance, using pandas.DataFrame.at is the fastest approach.
So, your question is to convert the NaN at ['C', 'x'] to the value 10; the answer is:
df['x'].loc['C':] = 10
df
Alternative code is
df.loc['C', 'x'] = 10
df
df.loc['C', 'x'] = 10
This will change the value in row 'C' and column 'x'.
If you want to change values not for whole row, but only for some columns:
x = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
x.iloc[1] = dict(A=10, B=-10)
From version 0.21.1 you can also use the .at method. There are some differences compared to .loc, as mentioned here - pandas .at versus .loc - but it's faster for single value replacement.
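For reference, a minimal sketch of the .at variant mentioned above, using the same toy frame:

import pandas as pd

x = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
x.at[1, 'A'] = 10   # single label-based scalar assignment
x.at[1, 'B'] = -10
print(x)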
In addition to the answers above, here is a benchmark comparing different ways to add rows of data to an already existing dataframe. It shows that using at or set_value is the most efficient way for large dataframes (at least for these test conditions).
Create new dataframe for each row and...
... append it (13.0 s)
... concatenate it (13.1 s)
Store all new rows in another container first, convert to new dataframe once and append...
container = lists of lists (2.0 s)
container = dictionary of lists (1.9 s)
Preallocate whole dataframe, iterate over new rows and all columns and fill using
... at (0.6 s)
... set_value (0.4 s)
For the test, an existing dataframe comprising 100,000 rows and 1,000 columns and random numpy values was used. To this dataframe, 100 new rows were added.
Code see below:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Nov 21 16:38:46 2018
#author: gebbissimo
"""
import pandas as pd
import numpy as np
import time
NUM_ROWS = 100000
NUM_COLS = 1000
data = np.random.rand(NUM_ROWS,NUM_COLS)
df = pd.DataFrame(data)
NUM_ROWS_NEW = 100
data_tot = np.random.rand(NUM_ROWS + NUM_ROWS_NEW,NUM_COLS)
df_tot = pd.DataFrame(data_tot)
DATA_NEW = np.random.rand(1,NUM_COLS)
#%% FUNCTIONS

# create and append
def create_and_append(df):
    for i in range(NUM_ROWS_NEW):
        df_new = pd.DataFrame(DATA_NEW)
        df = df.append(df_new)
    return df

# create and concatenate
def create_and_concat(df):
    for i in range(NUM_ROWS_NEW):
        df_new = pd.DataFrame(DATA_NEW)
        df = pd.concat((df, df_new))
    return df

# store as list of lists, then append once
def store_as_list(df):
    lst = [[] for i in range(NUM_ROWS_NEW)]
    for i in range(NUM_ROWS_NEW):
        for j in range(NUM_COLS):
            lst[i].append(DATA_NEW[0, j])
    df_new = pd.DataFrame(lst)
    df_tot = df.append(df_new)
    return df_tot

# store as dict of lists, then append once
def store_as_dict(df):
    dct = {}
    for j in range(NUM_COLS):
        dct[j] = []
        for i in range(NUM_ROWS_NEW):
            dct[j].append(DATA_NEW[0, j])
    df_new = pd.DataFrame(dct)
    df_tot = df.append(df_new)
    return df_tot

# preallocate and fill using .at
def fill_using_at(df):
    for i in range(NUM_ROWS_NEW):
        for j in range(NUM_COLS):
            #print("i,j={},{}".format(i,j))
            df.at[NUM_ROWS + i, j] = DATA_NEW[0, j]
    return df

# preallocate and fill using set_value
def fill_using_set(df):
    for i in range(NUM_ROWS_NEW):
        for j in range(NUM_COLS):
            #print("i,j={},{}".format(i,j))
            df.set_value(NUM_ROWS + i, j, DATA_NEW[0, j])
    return df
#%% TESTS
t0 = time.time()
create_and_append(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
create_and_concat(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
store_as_list(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
store_as_dict(df)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
fill_using_at(df_tot)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
t0 = time.time()
fill_using_set(df_tot)
t1 = time.time()
print('Needed {} seconds'.format(t1-t0))
I too was searching for this topic and I put together a way to iterate through a DataFrame and update it with lookup values from a second DataFrame. Here is my code.
src_df = pd.read_sql_query(src_sql, src_connection)
for index1, row1 in src_df.iterrows():
    for index, row in vertical_df.iterrows():
        src_df.set_value(index=index1, col=u'etl_load_key', value=etl_load_key)
        if row1[u'src_id'] == row['SRC_ID']:
            src_df.set_value(index=index1, col=u'vertical', value=row['VERTICAL'])

Pandas lookup values in one column based on list of values of another column

I have a dataframe with id and value columns.
df=
id val
'a' 1
'b' 3
'c' 9
....
I have a list of (repeated) id values.
i_list=['a','a','a','b']
I need to map this list of (repeated) id values to the corresponding (repeated) values, using the dataframe's (id, val) pairs.
out_desired=[1,1,1,3]
Right now I am doing:
out_desired=[df[df.id==curr_id].val.values for curr_id in i_list ]
How to do this in a more efficient yet still concise way?
You can try using pandas.merge as it seems to be faster for me.
df = {'id': ['a', 'b', 'c'], 'value': [1,3,9]}
df = pd.DataFrame(df).set_index('id')
test = ['a', 'b', 'c']*8
%timeit df.merge(pd.DataFrame({'id':test}), left_index=True, right_on='id', how='right')['value'].values
1.32 ms ± 33.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit [df[df.index==curr_id].values for curr_id in test ]
5.81 ms ± 123 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
I believe it gives the right answer
If the ids are lexicographically sorted you can use Series.searchsorted:
df.loc[df['id'].searchsorted(i_list), 'val'].to_numpy().tolist()
[1, 1, 1, 3]
Or you could set id as the index (this works for unsorted ids too):
df.set_index('id').loc[i_list, 'val'].to_numpy().tolist()
# [1, 1, 1, 3]
If the id column is not sorted, sort and then do as above for the first approach to work:
print(df)
id val
0 c 1
1 b 3
2 a 9
df_ = df.sort_values(['id'])
df_.loc[df_['id'].searchsorted(i_list), 'val'].to_numpy().tolist()
[1, 1, 1, 3]
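A self-contained sketch of the set_index variant, wired up with the question's toy data:

import pandas as pd

df = pd.DataFrame({'id': ['a', 'b', 'c'], 'val': [1, 3, 9]})
i_list = ['a', 'a', 'a', 'b']
# Reindex on 'id', then select the (repeated) labels in one vectorized lookup.
out = df.set_index('id').loc[i_list, 'val'].to_numpy().tolist()
print(out)  # [1, 1, 1, 3]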

Encode Python lists as indexes of unique values

I'd like to represent an arbitrary list as two other lists. The first, call it values, contains the unique elements of the original list; the second, call it codes, contains for each element of the original list its index in values, so that the original list can be reconstructed as
orig_list = [values[c] for c in codes]
(Note: this is similar to how pandas.Categorical represents series)
I've created the function below to do this decomposition:
def decompose(x):
    values = sorted(list(set(x)))
    codes = [0 for _ in x]
    for i, value in enumerate(values):
        codes = [i if elem == value else code for elem, code in zip(x, codes)]
    return values, codes
This works, but I would like to know if there is a better/more efficient way of achieving this (no double loop?), or if there's something in the standard library that could do this for me.
Update:
The answers below are great and a big improvement to my function. I've timed all that worked as intended:
test_list = [random.randint(1, 10) for _ in range(10000)]
functions = [decompose, decompose_boris1, decompose_boris2,
             decompose_alexander, decompose_stuart1, decompose_stuart2,
             decompose_dan1]
for f in functions:
    print("-- " + f.__name__)
    # test
    values, codes = f(test_list)
    decoded_list = [values[c] for c in codes]
    if decoded_list == test_list:
        print("Test passed")
        %timeit f(test_list)
    else:
        print("Test failed")
Results:
-- decompose
Test passed
12.4 ms ± 269 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
-- decompose_boris1
Test passed
1.69 ms ± 21.9 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
-- decompose_boris2
Test passed
1.63 ms ± 18.6 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
-- decompose_alexander
Test passed
681 µs ± 2.15 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
-- decompose_stuart1
Test passed
1.7 ms ± 3.42 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
-- decompose_stuart2
Test passed
682 µs ± 5.98 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
-- decompose_dan1
Test passed
896 µs ± 19.5 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
I'm accepting Stuart's answer for being the simplest and one of the fastest.
I’m quite happy with this solution, although I am still trying to find a better one.
Code
from typing import Any, Dict, List, Tuple

def decompose(original_list: List[Any]) -> Tuple[List[int], Dict[int, Any]]:
    code_to_elem = dict(enumerate(set(original_list)))
    elem_to_code = {v: k for k, v in code_to_elem.items()}
    encoded_list = [elem_to_code[elem] for elem in original_list]
    return encoded_list, code_to_elem
Test run
# t_list for test_list
t_list = [1, 2, 19, 3, 2, 19, 2, 3, 19, 1, 1, 3]
t_encoded, t_decoder = decompose(t_list)
t_decoded = [t_decoder[curr_code] for curr_code in t_encoded]
Here are the contents of the important variables:
t_list: [1, 2, 19, 3, 2, 19, 2, 3, 19, 1, 1, 3]
t_encoded: [1, 2, 3, 0, 2, 3, 2, 0, 3, 1, 1, 0]
t_decoder: {0: 3, 1: 1, 2: 2, 3: 19}
t_decoded: [1, 2, 19, 3, 2, 19, 2, 3, 19, 1, 1, 3]
Let me know if you have any questions :)
This would count as an answer even if it is merely an improvement on Boris's answer.
I would use index_of_values.append(values.setdefault(elem, len(values))) as the loop body, as that reduces three dict lookups to one and keeps the branch outside the interpreter. One might even create locals for the two methods to avoid repeatedly looking them up. But it seems that the saving of doing both is only 7%.
Using the insane-looking values = defaultdict(lambda: len(values)), however, gives 23%.
from collections import defaultdict

def decompose(x):
    values = defaultdict(lambda: len(values))
    index_of_values = []
    _append = index_of_values.append
    for elem in x:
        _append(values[elem])
    return list(values), index_of_values
It is even better if the loop is replaced by a map:
def decompose(x):
    values = defaultdict(lambda: len(values))
    index_of_values = list(map(values.__getitem__, x))
    return list(values), index_of_values
That gives 57%. I would have caught that if I had been looking at the output of the function. Also, .get evidently doesn't trigger the default factory: by design, only __getitem__ calls __missing__ (and hence the factory), while .get simply returns its default.
If the dict does not retain insertion order:
return sorted(values, key=values.get), index_of_values
You can use a simple index lookup:
def decompose(x):
    values = sorted(set(x))
    return values, [values.index(v) for v in x]
If more time-efficiency is needed (because x is very large) then this can be achieved (in exchange for some memory overhead) by representing values as a dictionary:
def decompose(x):
    values = sorted(set(x))
    d = {value: index for index, value in enumerate(values)}
    return values, [d[v] for v in x]
If sorting is not needed (or not possible for some reason) then replace sorted with list in the above.
You can do this, IIUC, with cummin:
df['newgroup'] = df.reset_index().groupby('group')['index'].cummin()
In [1579]: df
Out[1579]:
group newgroup
0 5 0
1 4 1
2 5 0
3 6 3
4 7 4
5 8 5
6 5 0
7 3 7
8 2 8
9 5 0
10 6 3
11 7 4
12 8 5
13 8 5
14 5 0
Create a dictionary where the keys are the unique values and they map to their index in the keys (dictionaries keep order starting with CPython 3.6). You do this by iterating over the list, if an element is not in the dictionary, you add it to the dictionary and map it to the length of the dictionary at the time you added it. Then you look up the element's index in the dictionary and append it to the list. Then you return just the keys, along with the list of indexes.
def decompose(x):
    values = {}
    index_of_values = [values.setdefault(elem, len(values)) for elem in x]
    return list(values), index_of_values
This is linear time and space complexity. Use it like this:
>>> decompose([2, 1, 1, 1, 131, 42, 2])
([2, 1, 131, 42], [0, 1, 1, 1, 2, 3, 0])
Using the side effect of a list comprehension is generally frowned upon, so you might want to write it out this function more explicitly:
def decompose(x):
    values = {}
    index_of_values = []
    for elem in x:
        if elem not in values:
            values[elem] = len(values)
        index_of_values.append(values[elem])
    return list(values), index_of_values
If you need something like pandas.Categorical:
arbi_arr = [1, 2, 3, 1, 2, 3]
value = list(dict.fromkeys(arbi_arr))      # unique values in order of appearance
code = [value.index(v) for v in arbi_arr]  # index of each element in value
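Since the question itself mentions pandas.Categorical: if depending on pandas is acceptable, pandas.factorize performs exactly this decomposition (a sketch, not one of the answers above):

import pandas as pd

orig_list = [1, 2, 19, 3, 2, 19, 2, 3, 19, 1, 1, 3]
codes, values = pd.factorize(orig_list)
# codes are indices into values (unique items in order of first appearance).
assert [values[c] for c in codes] == orig_list
print(list(values), list(codes))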

Vectorized dot product

I have a pandas dataframe, and one of the columns has a list in each row. I have a separate numpy array, query_ebd, which I want to dot with every row in that column. The output I want is one number per row, representing the dot product of the list in that row with query_ebd. Currently, I am doing:
for row in df.itertuples():
chat_result = query_ebd.dot(row[2])
This works, but it's slow. I am trying to vectorize this function but I can't get it to work. I tried:
chat_result = df['Embedding'].dot(query_ebd)
but I get a dot-product shape mismatch because it is trying to dot the column with query_ebd rather than dot each row with it. How can I make it dot each row? Thanks.
As far as I understand, you need to multiply a column containing lists by a numpy array of the column's length.
Consider below example:
df = pd.DataFrame({
    'col': [[1, 2, 3], [1, 2]]
})
query_ebd = np.array([2,4])
Let's create a separate column for query_ebd and perform row-wise list multiplication.
df['mul'] = query_ebd
chat_result = df.apply(lambda x: [y * int(x['mul']) for y in x['col']], axis=1)
Output:
0 [2, 4, 6]
1 [4, 8]
dtype: object
IIUC
df.col.transform(lambda k: query_ebd.dot(k))
Just re-read your question and now I understand what you want.
I believe this is a good solution, but I'm open to criticism.
You can define your own type and override __mul__, and let numpy's built-in implementation do the vectorization for you:
class O(object):
    def __init__(self, d):
        self.d = d

    def __mul__(self, o):
        return np.dot(self.d, o.d)

df.col.transform(O) * O(query_ebd)
Using this sample data,
df = pd.DataFrame({
    'col': [([1, 2, 3]), ([1, 2, 0])]
})
query_ebd = np.array([2, 4, 3])

df.col.transform(O) * O(query_ebd)
0    19
1    10
Performance:
For a data frame with 2 million rows,
%timeit v * O(query_ebd)
4.5 s ± 230 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit s.transform(lambda k: query_ebd.dot(k))
12.4 s ± 264 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
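For comparison (not part of the answers above): when every embedding has the same length, stacking the column into a 2-D array and doing one matrix-vector product is the usual fully vectorized route. A sketch assuming equal-length lists and the column name Embedding from the question:

import numpy as np
import pandas as pd

df = pd.DataFrame({'Embedding': [[1, 2, 3], [1, 2, 0]]})
query_ebd = np.array([2, 4, 3])

mat = np.vstack(df['Embedding'].tolist())  # shape: (n_rows, embedding_dim)
chat_result = mat @ query_ebd              # one dot product per row
print(chat_result)  # [19 10]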
