Iteratively combine text in first column with existing text in other columns - python

I am in the process of creating a python script that extracts data from a poorly designed output file (which I can't change) from a piece of equipment within our research lab. I would like to include a way to iteratively combine the text in the first column of a dataframe (example below) with each other column in the dataframe.
A simple example of the dataframe:
Filename    1           2           3           4           5
a           Sheet(1)    Sheet(2)    Sheet(3)    Sheet(4)    ....
b           Sheet(1)    Sheet(2)    --------    --------    ....
c           Sheet(1)    Sheet(2)    Sheet(3)    Sheet(4)    ....
d           Sheet(1)    Sheet(2)    Sheet(3)    --------    ....
e           Sheet(1)    Sheet(2)    Sheet(3)    Sheet(4)    ....
f           Sheet(1)    --------    --------    --------    ....
What I am looking to produce:
Filename    1             2             3             4             5
a           a_Sheet(1)    a_Sheet(2)    a_Sheet(3)    a_Sheet(4)    ....
b           b_Sheet(1)    b_Sheet(2)    --------      --------      ....
c           c_Sheet(1)    c_Sheet(2)    c_Sheet(3)    c_Sheet(4)    ....
d           d_Sheet(1)    d_Sheet(2)    d_Sheet(3)    --------      ....
e           e_Sheet(1)    e_Sheet(2)    e_Sheet(3)    e_Sheet(4)    ....
f           f_Sheet(1)    --------      --------      --------      ....

Use .apply to prepend the 'Filename' string to the other columns.
Of the current answers, the solution from Mykola Zotko is the fastest, tested against a 3-column dataframe with 100k rows.
If your dataframe contains undesired strings (e.g. '--------'), then use something like df.replace('--------', pd.NA, inplace=True) before combining the column strings.
If the final result must have '--------', then use df.fillna('--------', inplace=True) at the end. This will be better than trying to iteratively deal with them.
import pandas as pd
import numpy as np
# test dataframe
df = pd.DataFrame({'Filename': ['a', 'b', 'c'], 'c1': ['s1'] * 3, 'c2': ['s2', np.nan, 's2']})
# display(df)
Filename c1 c2
0 a s1 s2
1 b s1 NaN
2 c s1 s2
# prepend the filename strings to the other columns
df.iloc[:, 1:] = df.iloc[:, 1:].apply(lambda x: df.Filename + '_' + x)
# display(df)
Filename c1 c2
0 a a_s1 a_s2
1 b b_s1 NaN
2 c c_s1 c_s2
Timing tests against the other answers:
# test data with 100k rows
df = pd.concat([pd.DataFrame({'Filename': ['a', 'b', 'c'], 'c1': ['s1'] * 3, 'c2': ['s2'] * 3})] * 33333).reset_index(drop=True)
# Solution from Trenton
%%timeit
df.iloc[:, 1:].apply(lambda x: df.Filename + '_' + x)
[out]:
33.6 ms ± 1.17 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Solution from Mykola
%%timeit
df['Filename'].to_numpy().reshape(-1, 1) + '_' + df.loc[:, 'c1':]
[out]:
29.6 ms ± 2.5 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
# Solution from Alex
%%timeit
cols = df.columns != 'Filename'
df.loc[:, cols].apply(lambda s: df["Filename"].str.cat(s, sep="_"))
[out]:
45.3 ms ± 1.08 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
# iterating the columns in a for-loop
def test(d):
    for cols in d.columns[1:]:
        d[cols] = d['Filename'] + '_' + d[cols]
    return d
%%timeit
test(df)
[out]:
53.8 ms ± 4.75 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
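Putting the advice above together (the placeholder handling plus the .apply prepend), here is a minimal end-to-end sketch; the two-row data and column names are made up for illustration:

```python
import pandas as pd

df = pd.DataFrame({
    'Filename': ['a', 'b'],
    '1': ['Sheet(1)', 'Sheet(1)'],
    '2': ['Sheet(2)', '--------'],
})

# swap the placeholder strings for missing values first
df = df.replace('--------', pd.NA)

# prepend the Filename string to every other column (NA propagates)
df.iloc[:, 1:] = df.iloc[:, 1:].apply(lambda x: df.Filename + '_' + x)

# restore the placeholder in the final result
df = df.fillna('--------')
```

The NA values skip the concatenation entirely, which is why the replace/fillna round trip beats handling '--------' inside the loop.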

For example, if you have the following data frame:
col1 col2 col3 col4
0 a x y z
1 b x y z
2 c x y NaN
You can use broadcasting:
df.loc[:, 'col2':] = df['col1'].to_numpy().reshape(-1, 1) + '_' + df.loc[:, 'col2':]
Result:
col1 col2 col3 col4
0 a a_x a_y a_z
1 b b_x b_y b_z
2 c c_x c_y NaN
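A self-contained version of the broadcasting approach, with invented column names matching the example above:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'col1': ['a', 'b', 'c'],
                   'col2': ['x', 'x', 'x'],
                   'col3': ['z', 'z', np.nan]})

# reshape col1 into a column vector so it broadcasts across every other column;
# NaN cells are left untouched by the string concatenation
df.loc[:, 'col2':] = df['col1'].to_numpy().reshape(-1, 1) + '_' + df.loc[:, 'col2':]
```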

Try:
for cols in df.loc[:, '1':]:
    df[cols] = df['Filename'] + '_' + df[cols]

I've represented the -------- as np.nan. You should be able to label these as NaN when you load the file; see the na_values parameter of pd.read_csv.
This is the dict for the DataFrame:
import pandas as pd
from numpy import nan

d = {
    1: [nan, "Sheet(1)", nan],
    2: [nan, "Sheet(2)", nan],
    3: ["Sheet(3)", nan, "Sheet(3)"],
    4: ["Sheet(4)", nan, nan],
    "Filename": ["a", "b", "c"],
}
df = pd.DataFrame(d)
Then we can:
Make a mask of the columns we want to change (everything but Filename):
cols = df.columns != "Filename"
# array([ True, True, True, True, False])
Apply a function, which uses Series.str.cat:
df.loc[:, cols] = df.loc[:, cols].apply(lambda s: df["Filename"].str.cat(s, sep="_"))
This function takes each column specified in cols and concatenates it with the Filename column.
Which produces:
1 2 3 4 Filename
0 NaN NaN a_Sheet(3) a_Sheet(4) a
1 b_Sheet(1) b_Sheet(2) NaN NaN b
2 NaN NaN c_Sheet(3) NaN c
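As noted above, the '--------' placeholders can be turned into NaN already at load time via read_csv's na_values parameter. A small sketch, using a hypothetical inline CSV in place of the real output file:

```python
import io
import pandas as pd

raw = "Filename,1,2\na,Sheet(1),Sheet(2)\nb,Sheet(1),--------\n"

# treat the dashed placeholder as a missing value while parsing
df = pd.read_csv(io.StringIO(raw), na_values=['--------'])
```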

Related

Replace elements in a list in a dataframe matching elements in a list in another dataframe with matching column value

I have a pandas dataframe, df1. I have another pandas dataframe, df2, with a fruits column. I would like to replace the elements inside the lists in fruits that are found in the duplicates column of df1 with the value of the name column in df1.
df1
name duplicates
0 a.apple ['b.apple', 'c.apple']
1 t.orange ['arr.orange', 'pg.orange']
2 ts.grape ['a.grape' , 'test.grape']
3 u.berryCool ['X.berryCool', 'cool.berryCool']
df2
people fruits
0 jack ['b.apple', 'c.apple', 'pp.tomato', 'ao.banana' ]
1 mary ['arr.orange', 'b.apple', 'X.berryCool', 'op.mango']
2 andy ['cool.berryCool' , 'test.grape', 'yu.papaya']
3 lawrence ['jc.orange', 'c.apple']
Expected Output
people fruits
0 jack ['a.apple', 'a.apple', 'pp.tomato', 'ao.banana' ]
1 mary ['t.orange', 'a.apple', 'u.berryCool', 'op.mango']
2 andy ['u.berryCool' , 'ts.grape', 'yu.papaya']
3 lawrence ['t.orange' , 'a.apple']
How can I accomplish this efficiently? Any suggestion is appreciated.
Create a dictionary by flattening the lists in the duplicates column first, then map values with dict.get - if there is no match, it returns the same value:
d = {x: a for a, b in zip(df1['name'], df1['duplicates']) for x in b}
df2['fruits'] = [[d.get(y,y) for y in x] for x in df2['fruits']]
print (df2)
people fruits
0 jack [a.apple, a.apple, pp.tomato, ao.banana]
1 mary [t.orange, a.apple, u.berryCool, op.mango]
2 andy [u.berryCool, ts.grape, yu.papaya]
3 lawrence [jc.orange, a.apple]
Performance on a DataFrame with 4k rows (it depends on the data; it is best to test on real data):
df2 = pd.concat([df2] * 1000, ignore_index=True)
In [135]: %%timeit
...: MAPPING = df1.explode('duplicates').set_index('duplicates')['name']
...: df2['fruits1'] = (df2.explode('fruits')['fruits'].replace(MAPPING).groupby(level=0).agg(list))
...:
128 ms ± 2.81 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [136]: %%timeit
...: d = {x: a for a, b in zip(df1['name'], df1['duplicates']) for x in b}
...:
...: df2['fruits2'] = [[d.get(y,y) for y in x] for x in df2['fruits']]
...:
5.27 ms ± 245 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
You can create a mapping dict (Series):
MAPPING = df1.explode('duplicates').set_index('duplicates')['name']
df2['fruits'] = (df2.explode('fruits')['fruits'].replace(MAPPING)
.groupby(level=0).agg(list))
print(df2)
# Output
people fruits
0 jack [a.apple, a.apple, pp.tomato, ao.banana]
1 mary [t.orange, a.apple, u.berryCool, op.mango]
2 andy [u.berryCool, ts.grape, yu.papaya]
3 lawrence [jc.orange, a.apple]
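A self-contained sketch of the flattened-dictionary approach, using a trimmed-down version of the example data:

```python
import pandas as pd

df1 = pd.DataFrame({
    'name': ['a.apple', 't.orange'],
    'duplicates': [['b.apple', 'c.apple'], ['arr.orange', 'pg.orange']],
})
df2 = pd.DataFrame({
    'people': ['jack', 'mary'],
    'fruits': [['b.apple', 'pp.tomato'], ['arr.orange', 'b.apple']],
})

# flatten the lists in 'duplicates' into one lookup dict: duplicate -> name
d = {x: a for a, b in zip(df1['name'], df1['duplicates']) for x in b}

# map every list element, keeping the original value when there is no match
df2['fruits'] = [[d.get(y, y) for y in x] for x in df2['fruits']]
```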

process columns in pandas dataframe

I have a dataframe df:
Col1 Col2 Col3
0 a1 NaN NaN
1 a2 b1 NaN
2 a3 b3 c1
3 a4 NaN c2
I have tried :
new_df = '[' + df + ']'
new_df['Col4']=new_df[new_df.columns[0:]].apply(lambda x:','.join(x.dropna().astype(str)),axis =1)
df_final = pd.concat([df, new_df['Col4']], axis =1)
This gets me part of the way there, but I was looking for a robust solution to produce the expected output.
I know there is no direct way to do this, the data frame eventually is going to be at least 20k rows and so the question to fellow stack-people.
Thanks.
let me know if you have any more questions and I can edit the question to add points.
I'm not sure what your usecase is, but here you go
df['Col4'] = df.apply(lambda row:", ".join([(val if val[0]=='a' else "["+val+"]") for val in row if not pd.isna(val)]), axis=1)
It joins the rows together, by concatenating their values with ", ".join, but only if they are not pd.isna. It further puts everything in brackets that does not begin with a.
Whatever you want to do with it, there probably is a better solution though
You can wrap every value except the first non-missing one in [], using the helper counter i from enumerate:
def f(x):
    gen = (y for y in x if pd.notna(y))
    return ','.join(y if i == 0 else '[' + y + ']' for i, y in enumerate(gen))

# equivalent one-liner:
# f = lambda x: ','.join(y if i == 0 else '['+y+']' for i, y in enumerate(x.dropna()))

df['col4'] = df.apply(f, axis=1)
print (df)
Col1 Col2 Col3 Col4 col4
0 a1 NaN d8 NaN a1,[d8]
1 a2 b1 d3 NaN a2,[b1],[d3]
2 NaN b3 c1 NaN b3,[c1]
3 a4 NaN c2 NaN a4,[c2]
4 NaN NaN c6 d5 c6,[d5]
Performance test:
#test for 25k rows
df = pd.concat([df] * 5000, ignore_index=True)
f1 = lambda x: ','.join(y if i == 0 else '['+y+']' for i, y in enumerate(x.dropna()))
%timeit df.apply(f1, axis=1)
3.62 s ± 21 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit df.apply(f, axis =1)
475 ms ± 3.92 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
new_col = []
for idx, row in df.iterrows():
    val1 = row["Col1"]
    val2 = row["Col2"]
    val3 = row["Col3"]
    new_val2 = f",[{val2}]" if pd.notna(val2) else ""
    new_val3 = f",[{val3}]" if pd.notna(val3) else ""
    val4 = f"{val1}{new_val2}{new_val3}"
    new_col.append(val4)
df["Col4"] = new_col
Maybe my answer is not the most "computationally efficient", but if your dataset is 20k rows, it will be fast enough!
I think my answer is very easy to read, and it is also easy to adapt it to different scenarios!

Different groupers for each column with pandas GroupBy

How could I use a multidimensional Grouper, in this case another dataframe, as a Grouper for another dataframe? Can it be done in one step?
My question is essentially regarding how to perform an actual grouping under these circumstances, but to make it more specific, say I want to then transform and take the sum.
Consider for example:
df1 = pd.DataFrame({'a':[1,2,3,4], 'b':[5,6,7,8]})
print(df1)
a b
0 1 5
1 2 6
2 3 7
3 4 8
df2 = pd.DataFrame({'a':['A','B','A','B'], 'b':['A','A','B','B']})
print(df2)
a b
0 A A
1 B A
2 A B
3 B B
Then, the expected output would be:
a b
0 4 11
1 6 11
2 4 15
3 6 15
Where columns a and b in df1 have been grouped by columns a and b from df2 respectively.
You will have to group each column individually since each column uses a different grouping scheme.
If you want a cleaner version, I would recommend a list comprehension over the column names, and call pd.concat on the resultant series:
pd.concat([df1[c].groupby(df2[c]).transform('sum') for c in df1.columns], axis=1)
a b
0 4 11
1 6 11
2 4 15
3 6 15
Not to say there's anything wrong with using apply as in the other answer, just that I don't like apply, so this is my suggestion :-)
Here are some timeits for your perusal. Just for your sample data, you will notice the difference in timings is obvious.
%%timeit
(df1.stack()
.groupby([df2.stack().index.get_level_values(level=1), df2.stack()])
.transform('sum').unstack())
%%timeit
df1.apply(lambda x: x.groupby(df2[x.name]).transform('sum'))
%%timeit
pd.concat([df1[c].groupby(df2[c]).transform('sum') for c in df1.columns], axis=1)
8.99 ms ± 4.55 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
8.35 ms ± 859 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
6.13 ms ± 279 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Not to say apply is slow, but explicit iteration in this case is faster. Additionally, you will notice that the second and third timed solutions scale better with length than with breadth, since the number of iterations depends on the number of columns.
Try using apply to apply a lambda function to each column of your dataframe, then use the name of that pd.Series to group by the second dataframe:
df1.apply(lambda x: x.groupby(df2[x.name]).transform('sum'))
Output:
a b
0 4 11
1 6 11
2 4 15
3 6 15
Using stack and unstack
df1.stack().groupby([df2.stack().index.get_level_values(level=1),df2.stack()]).transform('sum').unstack()
Out[291]:
a b
0 4 11
1 6 11
2 4 15
3 6 15
I'm going to propose a (mostly) numpythonic solution that uses a scipy.sparse_matrix to perform a vectorized groupby on the entire DataFrame at once, rather than column by column.
The key to performing this operation efficiently is finding a performant way to factorize the entire DataFrame, while avoiding duplicates in any columns. Since your groups are represented by strings, you can simply concatenate the column
name on the end of each value (since columns should be unique), and then factorize the result, like so [*]
>>> df2 + df2.columns
a b
0 Aa Ab
1 Ba Ab
2 Aa Bb
3 Ba Bb
>>> pd.factorize((df2 + df2.columns).values.ravel())
(array([0, 1, 2, 1, 0, 3, 2, 3], dtype=int64),
array(['Aa', 'Ab', 'Ba', 'Bb'], dtype=object))
Once we have a unique grouping, we can utilize our scipy.sparse matrix, to perform a groupby in a single pass on the flattened arrays, and use advanced indexing and a reshaping operation to convert the result back to the original shape.
import numpy as np
from scipy import sparse

a = df1.values.ravel()
b, _ = pd.factorize((df2 + df2.columns).values.ravel())
o = sparse.csr_matrix(
    (a, b, np.arange(a.shape[0] + 1)), (a.shape[0], b.max() + 1)
).sum(0).A1
res = o[b].reshape(df1.shape)
array([[ 4, 11],
[ 6, 11],
[ 4, 15],
[ 6, 15]], dtype=int64)
Performance
Functions
def gp_chris(f1, f2):
    a = f1.values.ravel()
    b, _ = pd.factorize((f2 + f2.columns).values.ravel())
    o = sparse.csr_matrix(
        (a, b, np.arange(a.shape[0] + 1)), (a.shape[0], b.max() + 1)
    ).sum(0).A1
    return pd.DataFrame(o[b].reshape(f1.shape), columns=f1.columns)

def gp_cs(f1, f2):
    return pd.concat([f1[c].groupby(f2[c]).transform('sum') for c in f1.columns], axis=1)

def gp_scott(f1, f2):
    return f1.apply(lambda x: x.groupby(f2[x.name]).transform('sum'))

def gp_wen(f1, f2):
    return f1.stack().groupby([f2.stack().index.get_level_values(level=1), f2.stack()]).transform('sum').unstack()
Setup
import numpy as np
from scipy import sparse
import pandas as pd
import string
from timeit import timeit
import matplotlib.pyplot as plt
res = pd.DataFrame(
    index=[f'gp_{f}' for f in ('chris', 'cs', 'scott', 'wen')],
    columns=[10, 50, 100, 200, 400],
    dtype=float
)

for f in res.index:
    for c in res.columns:
        df1 = pd.DataFrame(np.random.rand(c, c))
        df2 = pd.DataFrame(np.random.choice(list(string.ascii_uppercase), (c, c)))
        df1.columns = df1.columns.astype(str)
        df2.columns = df2.columns.astype(str)
        stmt = '{}(df1, df2)'.format(f)
        setp = 'from __main__ import df1, df2, {}'.format(f)
        res.at[f, c] = timeit(stmt, setp, number=50)

ax = res.div(res.min()).T.plot(loglog=True)
ax.set_xlabel("N")
ax.set_ylabel("time (relative)")
plt.show()
Results
Validation
df1 = pd.DataFrame(np.random.rand(10, 10))
df2 = pd.DataFrame(np.random.choice(list(string.ascii_uppercase), (10, 10)))
df1.columns = df1.columns.astype(str)
df2.columns = df2.columns.astype(str)
v = np.stack([gp_chris(df1, df2), gp_cs(df1, df2), gp_scott(df1, df2), gp_wen(df1, df2)])
print(np.all(v[:-1] == v[1:]))
True
Either we're all wrong or we're all correct :)
[*] There is a possibility that you could get a duplicate value here if one item is the concatenation of a column and another item before concatenation occurs. However if this is the case, you shouldn't need to adjust much to fix it.
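One way to rule out the collision mentioned in the footnote is to join value and column name with a separator character assumed never to occur in the data (here a NUL byte, which is an assumption about the input); a sketch on the example frame:

```python
import pandas as pd

df2 = pd.DataFrame({'a': ['A', 'B', 'A', 'B'], 'b': ['A', 'A', 'B', 'B']})

# insert a NUL separator between value and column name before factorizing,
# so 'X' in column 'a' can never collide with another concatenated string
codes, uniques = pd.factorize((df2 + '\0' + df2.columns).values.ravel())
```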
You could do something like the following:
res = df1.assign(a_sum=lambda df: df['a'].groupby(df2['a']).transform('sum'))\
         .assign(b_sum=lambda df: df['b'].groupby(df2['b']).transform('sum'))
Results:
a b
0 4 11
1 6 11
2 4 15
3 6 15

Reference a row from another Pandas DataFrame

Suppose I have 2 DataFrames:
DataFrame 1
A B
a 1
b 2
c 3
d 4
DataFrame2:
C D
a c
b a
a b
The goal is to add a column to DataFrame 2 ('E').
C D E
a c (1-3=-2)
b a (2-1=1)
a b (1-2=-1)
If this were excel, a formula could be something similar to "=vlookup(A1,DataFrame1,2)-vlookup(B1,DataFrame1,2)". Any idea what this formula looks like in Python?
Thanks!
A Pandas Series can be thought of as a mapping from its index to its values.
Here, we wish to use the first DataFrame, df1 as a mapping from column A to column B. So the natural thing to do is to convert df1 into a Series:
s = df1.set_index('A')['B']
# A
# a    1
# b    2
# c    3
# d    4
# Name: B, dtype: int64
Now we can use the Series.map method to "look up" values in s:
import pandas as pd
df1 = pd.DataFrame({'A':list('abcd'), 'B':[1,2,3,4]})
df2 = pd.DataFrame({'C':list('aba'), 'D':list('cab')})
s = df1.set_index('A')['B']
df2['E'] = df2['C'].map(s) - df2['D'].map(s)
print(df2)
yields
C D E
0 a c -2
1 b a 1
2 a b -1
You can do something like this:
#set column A as index, so you can index it
df1 = df1.set_index('A')
df2['E'] = df1.loc[df2.C, 'B'].values - df1.loc[df2.D, 'B'].values
And the result is:
C D E
0 a c -2
1 b a 1
2 a b -1
Hope it helps :)
Option 1
Using replace and eval with assign
df2.assign(E=df2.replace(df1.A.values, df1.B).eval('C - D'))
C D E
0 a c -2
1 b a 1
2 a b -1
I like this answer for its succinctness.
I use replace with two iterables, namely df1.A, which specifies what to replace, and df1.B, which specifies what to replace with.
I use eval to elegantly perform the differencing of the new found C less D.
I use assign to create a copy of df2 with a new column named E that has the values from the steps above.
I could have used a dictionary instead dict(zip(df1.A, df1.B))
df2.assign(E=df2.replace(dict(zip(df1.A, df1.B))).eval('C - D'))
C D E
0 a c -2
1 b a 1
2 a b -1
numpy + pd.factorize
import numpy as np

base = df1.A.values
vals = df1.B.values
refs = df2.values.ravel()
f, u = pd.factorize(np.append(base, refs))
look = vals[f[base.size:]]
df2.assign(E=look[::2] - look[1::2])
C D E
0 a c -2
1 b a 1
2 a b -1
Timing
Among the pure pandas solutions, @unutbu's answer is the clear winner, while my overly verbose numpy solution improves on it by only about 40%.
Let's use these functions for the numpy versions. Note that using_F_order is @unutbu's contribution.
def using_numpy(df1, df2):
    base = df1.A.values
    vals = df1.B.values
    refs = df2.values.ravel()
    f, u = pd.factorize(np.append(base, refs))
    look = vals[f[base.size:]]
    return df2.assign(E=look[::2] - look[1::2])

def using_F_order(df1, df2):
    base = df1.A.values
    vals = df1.B.values
    refs = df2.values.ravel(order='F')
    f, u = pd.factorize(np.append(base, refs))
    look = vals[f[base.size:]].reshape(-1, 2, order='F')
    return df2.assign(E=look[:, 0] - look[:, 1])
small data
%timeit df2.assign(E=df2.replace(df1.A.values, df1.B).eval('C - D'))
%timeit df2.assign(E=df2.replace(dict(zip(df1.A, df1.B))).eval('C - D'))
%timeit df2.assign(E=(lambda s: df2['C'].map(s) - df2['D'].map(s))(df1.set_index('A')['B']))
%timeit using_numpy(df1, df2)
%timeit using_F_order(df1, df2)
100 loops, best of 3: 2.31 ms per loop
100 loops, best of 3: 2.44 ms per loop
1000 loops, best of 3: 1.25 ms per loop
1000 loops, best of 3: 436 µs per loop
1000 loops, best of 3: 424 µs per loop
large data
from string import ascii_lowercase, ascii_uppercase
import pandas as pd
import numpy as np
upper = np.array(list(ascii_uppercase))
lower = np.array(list(ascii_lowercase))
ch = np.core.defchararray.add(upper[:, None], lower).ravel()
np.random.seed([3,1415])
n = 100000
df1 = pd.DataFrame(dict(A=ch, B=np.arange(ch.size)))
df2 = pd.DataFrame(dict(C=np.random.choice(ch, n), D=np.random.choice(ch, n)))
%timeit df2.assign(E=df2.replace(df1.A.values, df1.B).eval('C - D'))
%timeit df2.assign(E=df2.replace(dict(zip(df1.A, df1.B))).eval('C - D'))
%timeit df2.assign(E=(lambda s: df2['C'].map(s) - df2['D'].map(s))(df1.set_index('A')['B']))
%timeit using_numpy(df1, df2)
%timeit using_F_order(df1, df2)
1 loop, best of 3: 11.1 s per loop
1 loop, best of 3: 10.6 s per loop
100 loops, best of 3: 17.7 ms per loop
100 loops, best of 3: 10.9 ms per loop
100 loops, best of 3: 9.11 ms per loop
Here's a very simple way to achieve this:
newdf = df2.replace(['a','b','c','d'],[1,2,3,4])
df2['E'] = newdf['C'] - newdf['D']
df2
I hope this helps!

Pandas: Appending a row to a dataframe and specify its index label

Is there any way to specify the index that I want for a new row, when appending the row to a dataframe?
The original documentation provides the following example:
In [1301]: df = DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
In [1302]: df
Out[1302]:
A B C D
0 -1.137707 -0.891060 -0.693921 1.613616
1 0.464000 0.227371 -0.496922 0.306389
2 -2.290613 -1.134623 -1.561819 -0.260838
3 0.281957 1.523962 -0.902937 0.068159
4 -0.057873 -0.368204 -1.144073 0.861209
5 0.800193 0.782098 -1.069094 -1.099248
6 0.255269 0.009750 0.661084 0.379319
7 -0.008434 1.952541 -1.056652 0.533946
In [1303]: s = df.xs(3)
In [1304]: df.append(s, ignore_index=True)
Out[1304]:
A B C D
0 -1.137707 -0.891060 -0.693921 1.613616
1 0.464000 0.227371 -0.496922 0.306389
2 -2.290613 -1.134623 -1.561819 -0.260838
3 0.281957 1.523962 -0.902937 0.068159
4 -0.057873 -0.368204 -1.144073 0.861209
5 0.800193 0.782098 -1.069094 -1.099248
6 0.255269 0.009750 0.661084 0.379319
7 -0.008434 1.952541 -1.056652 0.533946
8 0.281957 1.523962 -0.902937 0.068159
where the new row gets the index label automatically. Is there any way to control the new label?
The name of the Series becomes the index of the row in the DataFrame:
In [99]: df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
In [100]: s = df.xs(3)
In [101]: s.name = 10
In [102]: df.append(s)
Out[102]:
A B C D
0 -2.083321 -0.153749 0.174436 1.081056
1 -1.026692 1.495850 -0.025245 -0.171046
2 0.072272 1.218376 1.433281 0.747815
3 -0.940552 0.853073 -0.134842 -0.277135
4 0.478302 -0.599752 -0.080577 0.468618
5 2.609004 -1.679299 -1.593016 1.172298
6 -0.201605 0.406925 1.983177 0.012030
7 1.158530 -2.240124 0.851323 -0.240378
10 -0.940552 0.853073 -0.134842 -0.277135
df.loc will do the job :
>>> df = pd.DataFrame(np.random.randn(3, 2), columns=['A','B'])
>>> df
A B
0 -0.269036 0.534991
1 0.069915 -1.173594
2 -1.177792 0.018381
>>> df.loc[13] = df.loc[1]
>>> df
A B
0 -0.269036 0.534991
1 0.069915 -1.173594
2 -1.177792 0.018381
13 0.069915 -1.173594
I shall refer to the same sample of data as posted in the question:
import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
print('The original data frame is: \n{}'.format(df))
Running this code will give you
The original data frame is:
A B C D
0 0.494824 -0.328480 0.818117 0.100290
1 0.239037 0.954912 -0.186825 -0.651935
2 -1.818285 -0.158856 0.359811 -0.345560
3 -0.070814 -0.394711 0.081697 -1.178845
4 -1.638063 1.498027 -0.609325 0.882594
5 -0.510217 0.500475 1.039466 0.187076
6 1.116529 0.912380 0.869323 0.119459
7 -1.046507 0.507299 -0.373432 -1.024795
Now you wish to append a new row to this data frame, which need not be a copy of any other row. @Alon suggested an interesting approach using df.loc to append a new row with a different index. The issue with this approach, however, is that if a row already exists at that index, it will be overwritten by the new values. This is typically the case for datasets where the row index is not unique, like store IDs in transaction datasets. A more general solution, then, is to create the row, convert the new row data into a pandas Series, name it with the index you want, and append it to the data frame. Don't forget to overwrite the original data frame with the one that has the appended row, because df.append returns a new DataFrame and does not modify the original. Following is the code:
row = pd.Series({'A':10,'B':20,'C':30,'D':40},name=3)
df = df.append(row)
print('The new data frame is: \n{}'.format(df))
Following would be the new output:
The new data frame is:
A B C D
0 0.494824 -0.328480 0.818117 0.100290
1 0.239037 0.954912 -0.186825 -0.651935
2 -1.818285 -0.158856 0.359811 -0.345560
3 -0.070814 -0.394711 0.081697 -1.178845
4 -1.638063 1.498027 -0.609325 0.882594
5 -0.510217 0.500475 1.039466 0.187076
6 1.116529 0.912380 0.869323 0.119459
7 -1.046507 0.507299 -0.373432 -1.024795
3 10.000000 20.000000 30.000000 40.000000
There is another solution. The following code does not run correctly (although I think pandas ought to support it):
import pandas as pd
# empty dataframe
a = pd.DataFrame()
a.loc[0] = {'first': 111, 'second': 222}
But the next code runs fine:
import pandas as pd
# empty dataframe
a = pd.DataFrame()
a = a.append(pd.Series({'first': 111, 'second': 222}, name=0))
Maybe my case is a different scenario but looks similar. I would define my own question as: 'How to insert a row with new index at some (given) position?'
Let's create test dataframe:
import pandas as pd
df = pd.DataFrame([[1, 2], [3, 4]], columns=['A', 'B'], index=['x', 'y'])
Result:
A B
x 1 2
y 3 4
Then, let's say, we want to place a new row with index z at position 1 (second row).
pos = 1
index_name = 'z'
# create new indexes where index is at the specified position
new_indexes = df.index.insert(pos, index_name)
# create new dataframe with new row
# specify new index in name argument
new_line = pd.Series({'A': 5, 'B': 6}, name=index_name)
df_new_row = pd.DataFrame([new_line], columns=df.columns)
# append new line to dataframe
df = pd.concat([df, df_new_row])
Now it is in the end:
A B
x 1 2
y 3 4
z 5 6
Now let's sort it specifying new index' position:
df = df.reindex(new_indexes)
Result:
A B
x 1 2
z 5 6
y 3 4
You should consider using df.loc[row_name] = row_value.
Appending via df.append(pd.Series(row_value, name=row_name)) will lead to:
FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
And df.loc[row_name] = row_value is faster than pd.concat.
Here is an example:
p = pd.DataFrame(data=np.random.rand(100), columns=['price'], index=np.arange(100))

def func1(p):
    for i in range(100):
        p.loc[i] = 0

def func2(p):
    for i in range(100):
        p.append(pd.Series({'BTC': 0}, name=i))

def func3(p):
    for i in range(100):
        p = pd.concat([p, pd.Series({i: 0}, name='price')])
%timeit func1(p)
1.87 ms ± 23.7 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
%timeit func2(p)
1.56 s ± 43.4 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
%timeit func3(p)
24.8 ms ± 748 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
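As a closing note: since DataFrame.append was removed in pandas 2.0, the named-Series trick above can still be reproduced with pd.concat by transposing the Series into a one-row frame. A sketch, not the only way to do it:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2], 'B': [3, 4]})

# the Series name becomes the row label, just as it did with df.append
row = pd.Series({'A': 5, 'B': 6}, name='z')
df = pd.concat([df, row.to_frame().T])
```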
