Concatenate two columns - python

I have two text columns, A and B. I want to take the first non-empty string, or, if both A and B have values, take the value from A. C is the column I'm trying to create:
import pandas as pd
cols = ['A','B']
data = [['data', 'data'],
        ['', 'data'],
        ['', ''],
        ['data1', 'data2']]
df = pd.DataFrame.from_records(data=data, columns=cols)
       A      B
0   data   data
1          data
2
3  data1  data2
My attempt:
df['C'] = df[cols].apply(lambda row: sorted([val if val else '' for val in row], reverse=True)[0], axis=1) #Reverse sort to avoid picking an empty string
       A      B      C
0   data   data   data
1          data   data
2
3  data1  data2  data2   # I want data1 here
Expected output:
       A      B      C
0   data   data   data
1          data   data
2
3  data1  data2  data1
I think I want the pandas equivalent of SQL coalesce.
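One coalesce-style sketch that scales to any number of columns is to treat the empty strings as missing values and back-fill across each row (assuming '' is the only marker for "missing" here):
import numpy as np
# Treat '' as NaN, take the first non-missing value per row, then restore '' for all-empty rows
df['C'] = df[cols].replace('', np.nan).bfill(axis=1).iloc[:, 0].fillna('')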

You can also use numpy.where:
In [1022]: import numpy as np
In [1023]: df['C'] = np.where(df['A'].eq(''), df['B'], df['A'])
In [1024]: df
Out[1024]:
       A      B      C
0   data   data   data
1          data   data
2
3  data1  data2  data1

Let's try idxmax + lookup:
df['C'] = df.lookup(df.index, df.ne('').idxmax(1))
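Note that DataFrame.lookup was deprecated in later pandas releases and removed in pandas 2.0; a rough equivalent sketch using NumPy fancy indexing (same df and cols as above):
import numpy as np
vals = df[cols].to_numpy()
first_non_empty = df[cols].ne('').idxmax(axis=1)          # label of the first non-empty column per row
col_pos = df[cols].columns.get_indexer(first_non_empty)   # its positional index
df['C'] = vals[np.arange(len(df)), col_pos]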
Alternatively you can use Series.where:
df['C'] = df['A'].where(lambda x: x.ne(''), df['B'])
       A      B      C
0   data   data   data
1          data   data
2
3  data1  data2  data1


Insert space in Pandas Data Frame Column String for each character

I have a data frame that looks like the one below. I need to put a space between each letter of the words in the same column.
import pandas as pd
df = pd.DataFrame({'sequence': ['ABCAD', 'DBAACR']})
df
Expected Output
sequence
A B C A D
D B A A C R
import pandas as pd

df = pd.DataFrame({'sequence': ['ABCAD', 'DBAACR']})
A = []
for i in df['sequence']:
    a = " ".join(i)   # insert a space between each character
    A.append(a)
df = pd.DataFrame({'sequence': A})
df
If you execute the cell above, it returns the following DataFrame:
sequence
0 A B C A D
1 D B A A C R
pd.DataFrame({'sequence':[' '.join('ABCAD'),' '.join('DBAACR')]})
You can use apply with a lambda function to process the column:
df.sequence.apply(lambda x: ' '.join(list(x)))
Output:
0 A B C A D
1 D B A A C R
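As a variant, Series.str.join should also work here: it is documented for list elements, but since a Python string is itself an iterable of characters, joining with ' ' produces the same spaced-out result (a sketch, assuming no missing values in the column):
df['sequence'] = df['sequence'].str.join(' ')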

How to replace data in one pandas df by the data of another one?

I want to replace some rows of some columns in a bigger pandas DataFrame with data from a smaller pandas DataFrame. The column names are the same in both.
I tried combine_first, but it only updates the null values.
For example, let's say df1.shape is (100, 25) and df2.shape is (10, 5).
df1
A B C D E F G ...Z Y Z
1 abc 10.20 0 pd.NaT
df2
A B C D E
1 abc 15.20 1 10
Now, after the replacement, df1 should look like:
A B C D E F G ...Z Y Z
1 abc 15.20 1 10 ...
The condition for replacing values in df1 is that df1.A == df2.A and df1.B == df2.B.
How can it be achieved in the most pythonic way? Any help will be appreciated.
I'm not sure I fully understood your question; does this solve your problem?
df1 = pd.DataFrame(data={'A':[1],'B':[2],'C':[3],'D':[4]})
df2 = pd.DataFrame(data={'A':[1],'B':[2],'C':[5],'D':[6]})
new_df=pd.concat([df1,df2]).drop_duplicates(['A','B'],keep='last')
print(new_df)
output:
A B C D
0 1 2 5 6
You could play with a MultiIndex.
First, let's create DataFrames like the ones you are working with:
import numpy as np
import pandas as pd
from string import ascii_uppercase

cols = pd.Index(list(ascii_uppercase))
vals = np.arange(100 * len(cols)).reshape(100, len(cols))
df = pd.DataFrame(vals, columns=cols)
df1 = pd.DataFrame(vals[:10, :5], columns=cols[:5])
Then turn A and B into the index:
df = df.set_index(["A","B"])
df1 = df1.set_index(["A","B"])*1.5 # multiply just to make the other values different
df.loc[df1.index, df1.columns] = df1
df = df.reset_index()
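Another hedged sketch of the same key-based replacement uses DataFrame.update, assuming A and B together uniquely identify rows in both frames (note that update skips NaN values in the other frame, so it will not copy missing values from df2):
df1 = df1.set_index(['A', 'B'])
df1.update(df2.set_index(['A', 'B']))   # overwrite matching cells in df1 with values from df2
df1 = df1.reset_index()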

concat by taking the values from column

I have a list ['df1', 'df2'] in which I have stored the names of some DataFrames that have been filtered on a few conditions. I then converted this list to a DataFrame using
df = pd.DataFrame(list1)
Now the df has only one column:
0
df1
df2
sometimes it may also have
0
df1
df2
df3
I want to concatenate all of these. My static code is
df_new = pd.concat([df1,df2],axis=1) or
df_new = pd.concat([df1,df2,df3],axis=1)
How can I make it dynamic (without me specifying df1, df2 explicitly) so that it takes the values from the column and concatenates them?
Collect the DataFrames in a list and concatenate them:
import pandas as pd

lists = [[1, 2, 3], [4, 5, 6]]
arr = []
for l in lists:
    new_df = pd.DataFrame(l)   # build a DataFrame from each list
    arr.append(new_df)
df = pd.concat(arr, axis=1)
df
Result :
0 0
0 1 4
1 2 5
2 3 6
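If your list really holds the DataFrame names as strings (as in ['df1', 'df2']), one sketch is to keep an explicit name-to-DataFrame mapping and concatenate from it; frames below is a hypothetical dict you would fill with your own filtered DataFrames:
frames = {'df1': df1, 'df2': df2}   # hypothetical registry of the filtered DataFrames
df_new = pd.concat([frames[name] for name in list1], axis=1)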

How to initialize a two dimensional string DataFrame array in python

I want to initialize a 31756x2 data frame of strings.
I want it to look like this:
index column1 column2
0 A B
1 A B
.
.
31756 A B
I wrote:
content_split = [["A", "B"] for x in range(31756)]
This did give me a two-dimensional list, but I want the columns to be separated like in a data frame (column1: A..., column2: B...), and I can't seem to get it to work.
Would love some help.
Use the DataFrame constructor only:
df = pd.DataFrame([["A", "B"] for x in range(31756)], columns=['col1','col2'])
print (df.head())
col1 col2
0 A B
1 A B
2 A B
3 A B
4 A B
Or:
N = 31756
df = pd.DataFrame({'col1':['A'] * N, 'col2':['B'] * N})
print (df.head())
col1 col2
0 A B
1 A B
2 A B
3 A B
4 A B
import pandas as pd
df = pd.DataFrame(index=range(31756))
df.loc[:,'column1'] = 'A'
df.loc[:,'column2'] = 'B'
Using numpy.tile:
import numpy as np
df = pd.DataFrame(np.tile(list('AB'), (31756, 1)), columns=['col1','col2'])
Or just passing a dictionary:
df = pd.DataFrame({'A':['A']*31756, 'B':['B']*31756})
If you're using this latter method on an older Python version where dictionaries don't preserve insertion order, you may want to sort the columns explicitly:
df = pd.DataFrame({'A':['A']*31756, 'B':['B']*31756}).sort_index(axis=1)
For fun:
pd.DataFrame(index=range(31756)).assign(**dict(col1='A', col2='B'))
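In the same spirit, numpy.full can broadcast a row of fill values across the target shape (a sketch, reusing the np import from above):
df = pd.DataFrame(np.full((31756, 2), ['A', 'B']), columns=['col1', 'col2'])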

Pandas - Interleave / Zip two DataFrames by row

Suppose I have two dataframes:
>> df1
0 1 2
0 a b c
1 d e f
>> df2
0 1 2
0 A B C
1 D E F
How can I interleave the rows? i.e. get this:
>> interleaved_df
0 1 2
0 a b c
1 A B C
2 d e f
3 D E F
(Note my real DFs have identical columns, but not the same number of rows).
What I've tried
inspired by this question (very similar, but it asks about columns):
import pandas as pd
from itertools import chain, zip_longest
df1 = pd.DataFrame([['a','b','c'], ['d','e','f']])
df2 = pd.DataFrame([['A','B','C'], ['D','E','F']])
concat_df = pd.concat([df1,df2])
new_index = chain.from_iterable(zip_longest(df1.index, df2.index))
# new_index now holds the interleaved row indices
interleaved_df = concat_df.reindex(new_index)
ValueError: cannot reindex from a duplicate axis
The last call fails because df1 and df2 have some identical index values (which is also the case with my real DFs).
Any ideas?
You can sort the index after concatenating and then reset it, i.e.
import pandas as pd
df1 = pd.DataFrame([['a','b','c'], ['d','e','f']])
df2 = pd.DataFrame([['A','B','C'], ['D','E','F']])
concat_df = pd.concat([df1,df2]).sort_index().reset_index(drop=True)
Output :
0 1 2
0 a b c
1 A B C
2 d e f
3 D E F
EDIT (OmerB): In case you want to keep the order regardless of the index values:
import pandas as pd
df1 = pd.DataFrame([['a','b','c'], ['d','e','f']]).reset_index()
df2 = pd.DataFrame([['A','B','C'], ['D','E','F']]).reset_index()
concat_df = pd.concat([df1,df2]).sort_index().set_index('index')
Use toolz.interleave
In [1024]: from toolz import interleave
In [1025]: pd.DataFrame(interleave([df1.values, df2.values]))
Out[1025]:
0 1 2
0 a b c
1 A B C
2 d e f
3 D E F
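Since passing .values drops the column labels, a small follow-up sketch restores them (assuming both frames share the same columns, as stated in the question):
pd.DataFrame(interleave([df1.values, df2.values]), columns=df1.columns)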
Here's an extension of #Bharath's answer that can be applied to DataFrames with user-defined indexes without losing them, using pd.MultiIndex.
Define DataFrames with the full set of column/index labels and names:
df1 = pd.DataFrame([['a','b','c'], ['d','e','f']], index=['one', 'two'], columns=['col_a', 'col_b','col_c'])
df1.columns.name = 'cols'
df1.index.name = 'rows'
df2 = pd.DataFrame([['A','B','C'], ['D','E','F']], index=['one', 'two'], columns=['col_a', 'col_b','col_c'])
df2.columns.name = 'cols'
df2.index.name = 'rows'
Add DataFrame ID to MultiIndex:
df1.index = pd.MultiIndex.from_product([[1], df1.index], names=["df_id", df1.index.name])
df2.index = pd.MultiIndex.from_product([[2], df2.index], names=["df_id", df2.index.name])
Then use #Bharath's concat() and sort_index():
data = pd.concat([df1, df2], axis=0, sort=True)
data.sort_index(axis=0, level=data.index.names[::-1], inplace=True)
Output:
cols col_a col_b col_c
df_id rows
1 one a b c
2 one A B C
1 two d e f
2 two D E F
You could also preallocate a new DataFrame, and then fill it using a slice.
import numpy as np
import pandas as pd

def interleave(dfs):
    # Preallocate an empty frame large enough for all rows, preserving each column's dtype
    data = np.transpose(np.array([np.empty(dfs[0].shape[0] * len(dfs), dtype=dt) for dt in dfs[0].dtypes]))
    out = pd.DataFrame(data, columns=dfs[0].columns)
    # Fill every len(dfs)-th row from each input frame
    for ix, df in enumerate(dfs):
        out.iloc[ix::len(dfs), :] = df.values
    return out
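A usage sketch with the frames from the question:
interleaved_df = interleave([df1, df2])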
The preallocation code is taken from this question.
While there's a chance it could outperform the index method for certain data types / sizes, it won't behave gracefully if the DataFrames have different sizes.
Note - for ~200000 rows with 20 columns of mixed string, integer and floating types, the index method is around 5x faster.
You can try it this way:
In [31]: import pandas as pd
...: from itertools import chain, zip_longest
...:
...: df1 = pd.DataFrame([['a','b','c'], ['d','e','f']])
...: df2 = pd.DataFrame([['A','B','C'], ['D','E','F']])
In [32]: concat_df = pd.concat([df1,df2]).sort_index()
...:
In [33]: interleaved_df = concat_df.reset_index(drop=1)
In [34]: interleaved_df
Out[34]:
0 1 2
0 a b c
1 A B C
2 d e f
3 D E F
