I searched a lot for an answer; the closest question was Compare 2 columns of 2 different pandas dataframes, if the same insert 1 into the other in Python, but the answer to that person's particular problem was a simple merge, which doesn't answer the question in a general way.
I have two large dataframes, df1 (typically about 10 million rows), and df2 (about 130 million rows). I need to update values in three columns of df1 with values from three columns of df2, based on two df1 columns matching two df2 columns. It is imperative that the order of df1 remains unchanged, and that only rows with matching values get updated.
This is what the dataframes look like:
df1
chr snp x pos a1 a2
1 1-10020 0 10020 G A
1 1-10056 0 10056 C G
1 1-10108 0 10108 C G
1 1-10109 0 10109 C G
1 1-10139 0 10139 C T
Note that the value of "snp" is not always chr-pos; it can take many other values with no link to any of the columns (like rs1234, indel-6032, etc.)
df2
ID CHR STOP OCHR OSTOP
rs376643643 1 10040 1 10020
rs373328635 1 10066 1 10056
rs62651026 1 10208 1 10108
rs376007522 1 10209 1 10109
rs368469931 3 30247 1 10139
I need to update ['snp', 'chr', 'pos'] in df1 with df2[['ID', 'OCHR', 'OSTOP']] only when df1[['chr', 'pos']] matches df2[['OCHR', 'OSTOP']]
so in this case, after update, df1 would look like:
chr snp x pos a1 a2
1 rs376643643 0 10040 G A
1 rs373328635 0 10066 C G
1 rs62651026 0 10208 C G
1 rs376007522 0 10209 C G
3 rs368469931 0 30247 C T
I have used merge as a workaround:
df1 = pd.merge(df1, df2, how='left', left_on=["chr", "pos"], right_on=["OCHR", "OSTOP"],
left_index=False, right_index=False, sort=False)
and then
df1.loc[~df1.OCHR.isnull(), ["snp", "chr", "pos"]] = df1.loc[~df1.OCHR.isnull(), ["ID", "CHR", "STOP"]].values
and then remove the extra columns.
Yes, it works, but what would be a way to do this directly by comparing the values from both dataframes? I just don't know how to formulate it, and I couldn't find an answer anywhere; a general answer to this would probably be useful.
I tried this, but it doesn't work:
df1.loc[(df1.chr==df2.OCHR) & (df1.pos==df2.OSTOP),["snp", "chr", "pos"]] = df2.loc[df2[['OCHR', 'OSTOP']] == df1.loc[(df1.chr==df2.OCHR) & (df1.pos==df2.OSTOP),["chr", "pos"]],['ID', 'CHR', 'STOP']].values
You can use the update function (it requires setting the matching criteria as the index). I've modified your sample data to introduce a mismatch.
# your data
# =====================
# df1 pos is modified from 10020 to 10010
print(df1)
chr snp x pos a1 a2
0 1 1-10020 0 10010 G A
1 1 1-10056 0 10056 C G
2 1 1-10108 0 10108 C G
3 1 1-10109 0 10109 C G
4 1 1-10139 0 10139 C T
print(df2)
ID CHR STOP OCHR OSTOP
0 rs376643643 1 10040 1 10020
1 rs373328635 1 10066 1 10056
2 rs62651026 1 10208 1 10108
3 rs376007522 1 10209 1 10109
4 rs368469931 3 30247 1 10139
# processing
# ==========================
# set matching columns to multi-level index
x1 = df1.set_index(['chr', 'pos'])['snp']
x2 = df2.set_index(['OCHR', 'OSTOP'])['ID']
# call update function, this is inplace
x1.update(x2)
# replace the values in original df1
df1['snp'] = x1.values
print(df1)
chr snp x pos a1 a2
0 1 1-10020 0 10010 G A
1 1 rs373328635 0 10056 C G
2 1 rs62651026 0 10108 C G
3 1 rs376007522 0 10109 C G
4 1 rs368469931 0 10139 C T
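Note that update() above only rewrites snp, while the question asked for all three columns. A minimal sketch of one way to extend the same idea (assuming the (OCHR, OSTOP) pairs in df2 are unique; the variable names here are mine, not from the original post):
lookup = df2.set_index(['OCHR', 'OSTOP'])[['ID', 'CHR', 'STOP']]

# build the match keys in df1's row order, then mask the rows that have a hit
keys = pd.MultiIndex.from_frame(df1[['chr', 'pos']])
hit = keys.isin(lookup.index)

# reindex returns the lookup rows in the same order as df1's matching rows,
# so a positional assignment preserves df1's original ordering
df1.loc[hit, ['snp', 'chr', 'pos']] = lookup.reindex(keys[hit]).values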
Start by renaming the columns you want to merge on in df2:
df2.rename(columns={'OCHR':'chr','OSTOP':'pos'},inplace=True)
Now merge on these columns
df_merged = pd.merge(df1, df2, how='inner', on=['chr', 'pos']) # you might have to preserve the df1 index at this stage, not sure
Next, build the update frame:
updater = df_merged[['ID','CHR','STOP']]  # this will be your update frame
updater.rename(columns={'ID':'snp','CHR':'chr','STOP':'pos'}, inplace=True)  # rename columns to match df1
Finally, update (DataFrame.update matches on index and columns):
df1.update(updater)  # updates in place
# chr snp x pos a1 a2
#0 1 rs376643643 0 10040 G A
#1 1 rs373328635 0 10066 C G
#2 1 rs62651026 0 10208 C G
#3 1 rs376007522 0 10209 C G
#4 3 rs368469931 0 30247 C T
update works by matching index/columns, so you might have to carry df1's index along through the entire process, then do updater.reindex(...) before df1.update(updater).
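In code, that caveat might look like this (a sketch, assuming df1 starts with a default RangeIndex):
# carry df1's index through the merge so update() aligns on the original rows
df_merged = (df1.reset_index()
                .merge(df2, how='inner', on=['chr', 'pos'])
                .set_index('index'))
updater = (df_merged[['ID', 'CHR', 'STOP']]
           .rename(columns={'ID': 'snp', 'CHR': 'chr', 'STOP': 'pos'}))
df1.update(updater)  # in place, matched on df1's original row labels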
I have a dataframe df with the shape (4573, 64) that I'm trying to pivot. The last column is an 'id' column with two possible string values, 'old' and 'new'. I would like to set the first 63 columns as the index and then have the 'id' column across the top, with values being the count of 'old' or 'new' for each index row.
I've created a list named cols out of the column labels that I want as the index.
I tried the following:
df.pivot(index=cols, columns='id')['id']
This gives an error: 'all arrays must be same length'.
I also tried the following to see if I could get a sum, but no luck either:
pd.pivot_table(df,index=cols,values=['id'],aggfunc=np.sum)
Any ideas greatly appreciated.
I found a thread online describing a possible bug in pandas 0.23.0 where pandas.pivot_table() will not accept a MultiIndex as long as it contains NaNs (link to GitHub in the comments). My workaround was to do
df.fillna('empty', inplace=True)
then the solution below:
df1 = pd.pivot_table(df, index=cols,columns='id',aggfunc='size', fill_value=0)
as proposed by jezrael works as intended, hence the accepted answer.
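For illustration, a minimal reproduction of that workaround (the column names here are hypothetical, not the asker's 64-column frame):
import numpy as np
import pandas as pd

df = pd.DataFrame({'c1': ['x', np.nan, 'x'],
                   'c2': ['y', 'y', 'y'],
                   'id': ['old', 'new', 'old']})
cols = ['c1', 'c2']

df.fillna('empty', inplace=True)  # a NaN in an index column trips the pivot
out = pd.pivot_table(df, index=cols, columns='id', aggfunc='size', fill_value=0)
print(out)
# id        new  old
# c1    c2
# empty y     1    0
# x     y     0    2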
I believe you need to convert the column names to a list and then aggregate the size with unstack:
df = pd.DataFrame({'B':[4,4,4,5,5,4],
'C':[1,1,9,4,2,3],
'D':[1,1,5,7,1,0],
'E':[0,0,6,9,2,4],
'id':list('aaabbb')})
print (df)
B C D E id
0 4 1 1 0 a
1 4 1 1 0 a
2 4 9 5 6 a
3 5 4 7 9 b
4 5 2 1 2 b
5 4 3 0 4 b
cols = df.columns[:-1].tolist()   # all columns except 'id'
df1 = df.groupby(cols + ['id']).size().unstack(fill_value=0)
print (df1)
id a b
B C D E
4 1 1 0 2 0
3 0 4 0 1
9 5 6 1 0
5 2 1 2 0 1
4 7 9 0 1
Solution with pivot_table:
df1 = pd.pivot_table(df, index=cols,columns='id',aggfunc='size', fill_value=0)
print (df1)
id a b
B C D E
4 1 1 0 2 0
3 0 4 0 1
9 5 6 1 0
5 2 1 2 0 1
4 7 9 0 1
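Both versions produce the same table; pivot_table with aggfunc='size' is essentially the groupby + size + unstack pipeline rolled into a single call.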
This is a follow-up to a previous question here:
combinations of two columns
I'm trying to take one dataframe and create another with all possible combinations of 3 columns together and the difference between the corresponding values, i.e. on the 11-apr row, column ABC should be (2*B - A - C) = 0, then 2*B - A - D = 0, and so on.
e.g, starting with
Dt A B C D
11-apr 1 1 1 1
10-apr 2 3 1 2
how do I get a new frame containing these differences?
I think you need:
from itertools import combinations

df = df.set_index('Dt')   # Dt becomes the index, as in the output below
cc = list(combinations(df.columns, 3))
df = pd.concat([df[c[1]].mul(2).sub(df[c[2]]).sub(df[c[0]]) for c in cc], axis=1, keys=cc)
df.columns = df.columns.map(''.join)
print (df)
ABC ABD ACD BCD
Dt
11-apr 0 0 0 0
10-apr 3 2 -2 -3
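As a sanity check on the 10-apr row: ABD = 2*B - A - D = 2*3 - 2 - 2 = 2 and BCD = 2*C - B - D = 2*1 - 3 - 2 = -3, both matching the output above.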
I have two categories, A and B, that can take on 5 different states (values, names or categories) defined by the list abcde. Counting the occurrence of each state and storing it in a data frame is fairly easy. However, I would also like the resulting data frame to include zeros for the possible values that have not occurred in Category A or B.
First, here's a dataframe that matches the description:
In[1]:
import pandas as pd
possibleValues = list('abcde')
df = pd.DataFrame({'Category A':list('abbc'), 'Category B':list('abcc')})
print(df)
Out[1]:
Category A Category B
0 a a
1 b b
2 b c
3 c c
I've tried different approaches with df.groupby(...).size() and .count(), combined with the list of possible values and the category names in a list, but with no success.
Here's the desired output:
Category A Category B
a 1 1
b 2 1
c 1 2
d 0 0
e 0 0
To go one step further, I'd also like to include a column with the totals for each possible state across all categories:
Category A Category B Total
a 1 1 2
b 2 1 3
c 1 2 3
d 0 0 0
e 0 0 0
SO has many related questions and answers, but to my knowledge none that suggests a solution to this particular problem. Thank you for any suggestions!
P.S. I'd like the solution to be adjustable to the number of categories, possible values and number of rows.
You need apply + value_counts + reindex + sum:
cols = ['Category A','Category B']
df1 = df[cols].apply(pd.value_counts).reindex(possibleValues, fill_value=0)
df1['total'] = df1.sum(axis=1)
print (df1)
Category A Category B total
a 1 1 2
b 2 1 3
c 1 2 3
d 0 0 0
e 0 0 0
Another solution is to convert the columns to categorical; then the 0 values are added without reindex:
cols = ['Category A','Category B']
# note: the older astype('category', categories=...) signature was removed
# from pandas; CategoricalDtype is the current way to pass the categories
df1 = df[cols].apply(lambda x: x.astype(pd.CategoricalDtype(categories=possibleValues))
                                .value_counts(sort=False))
df1['total'] = df1.sum(axis=1)
print (df1)
Category A Category B total
a 1 1 2
b 2 1 3
c 1 2 3
d 0 0 0
e 0 0 0
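One more option, not from the answers above: a sketch using melt + crosstab, which also adapts to any number of category columns:
# reshape to long form, cross-tabulate, then add the never-seen states
m = df.melt(var_name='category', value_name='state')
df1 = (pd.crosstab(m['state'], m['category'])
         .reindex(possibleValues, fill_value=0))
df1['total'] = df1.sum(axis=1)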
I have 2 dataframes: one of them contains some general information about football players, and the second contains other information, like winning matches for each player. They both have an "id" column; however, they are not the same length.
What I want to do is create a new dataframe with 2 columns: "x" from the first dataframe and "y" from the second, ONLY where the "id" column contains the same value in both dataframes. That way I can match the "x" and "y" columns that belong to the same person.
I tried to do it using the concat function:
pd.concat([firstdataframe['x'], seconddataframe['y']], axis=1, keys=['x', 'y'])
But I couldn't figure out how to apply the condition that "id" be equal in both dataframes.
It seems you need merge with the default inner join; note that each value in the id columns has to be unique:
df = pd.merge(df1[['id','x']], df2[['id','y']], on='id')
Sample:
df1 = pd.DataFrame({'id':[1,2,3],'x':[4,3,8]})
print (df1)
id x
0 1 4
1 2 3
2 3 8
df2 = pd.DataFrame({'id':[1,2],'y':[7,0]})
print (df2)
id y
0 1 7
1 2 0
df = pd.merge(df1[['id','x']], df2[['id','y']], on='id')
print (df)
id x y
0 1 4 7
1 2 3 0
A solution with concat is possible, but a bit complicated, because it needs a join on the indexes with an inner join:
df = (pd.concat([df1.set_index('id')['x'],
                 df2.set_index('id')['y']], axis=1, join='inner')
        .reset_index())
print (df)
id x y
0 1 4 7
1 2 3 0
EDIT:
If the ids are not unique, duplicates create all combinations and the output dataframe is expanded:
df1 = pd.DataFrame({'id':[1,2,3],'x':[4,3,8]})
print (df1)
id x
0 1 4
1 2 3
2 3 8
df2 = pd.DataFrame({'id':[1,2,1,1],'y':[7,0,4,2]})
print (df2)
id y
0 1 7
1 2 0
2 1 4
3 1 2
df = pd.merge(df1[['id','x']], df2[['id','y']], on='id')
print (df)
id x y
0 1 4 7
1 1 4 4
2 1 4 2
3 2 3 0
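If that expansion is unwanted, one option (an assumption on my part: keep the first y per id) is to de-duplicate before merging:
df = pd.merge(df1[['id', 'x']],
              df2.drop_duplicates('id')[['id', 'y']],
              on='id')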
I have two DataFrames, one looks something like this:
df1:
x y Counts
a b 1
a c 3
b c 2
c d 1
The other one has, as both index and columns, the list of unique values from the first two columns:
df2
a b c d
a
b
c
d
What I would like to do is fill in the second DataFrame with values from the first one, where the intersection of column and index matches a line from the first DataFrame, e.g.:
a b c d
a 0 1 3 0
b 1 0 2 0
c 3 2 0 1
d 0 0 1 0
When I tried using two for loops with a double if-condition, it locked up the computer (a real DataFrame contains more than 1000 rows).
The piece of code I was trying to implement (which apparently makes the calculations too 'heavy' for the computer to perform):
for i in df2.index:
    for j in df2.columns:
        if (i==df1.x.any() and j==df1.y.any()):
            df2.loc[i,j]=df1.Counts
It is important to note that the list of unique values (i.e., the index and columns of the second DataFrame) is longer than the number of rows in the first DataFrame; in my example they happen to coincide.
If it is of any relevance, the first dataframe basically represents combinations of words from its first and second columns, along with their occurrences in a text. The occurrences are essentially edge weights.
So, I am trying to create a matrix so as to plot a graph via igraph. I chose to first create a DataFrame, then its values taken as an array pass to igraph.
As far as I can tell, python-igraph cannot plot a graph from a dataframe, only from a numpy array.
I've tried some of the solutions suggested for similar issues, but nothing has worked so far.
Any suggestions to improve my question are warmly welcomed (it's my first question here).
You can do something like this:
import pandas as pd

#df = pd.read_clipboard()
#df2 = df.copy()
df3 = df2.pivot(index='x', columns='y', values='Counts')
print(df3)
print()
new = sorted(set(df3.columns.tolist() + df3.index.tolist()))
df3 = df3.reindex(new, columns=new).fillna(0).applymap(int)
print(df3)
output:
y b c d
x
a 1.0 3.0 NaN
b NaN 2.0 NaN
c NaN NaN 1.0
y a b c d
x
a 0 1 3 0
b 0 0 2 0
c 0 0 0 1
d 0 0 0 0
Stack df2 and fillna with df1:
import numpy as np
import pandas as pd

idx = pd.Index(np.unique(df1[['x', 'y']]))
df2 = pd.DataFrame(index=idx, columns=idx)
df2.stack(dropna=False).fillna(df1.set_index(['x', 'y']).Counts) \
   .unstack().fillna(0).astype(int)
a b c d
a 0 1 3 0
b 0 0 2 0
c 0 0 0 1
d 0 0 0 0
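Since the stated goal was plotting with python-igraph, a rough sketch of the hand-off (assuming the final 0-filled matrix, e.g. df3 from the first answer, should be read as an undirected graph):
import igraph as ig

# mode='plus' adds the matrix to its transpose, so the upper-triangular
# counts become undirected edge weights
g = ig.Graph.Weighted_Adjacency(df3.values.tolist(), mode='plus', attr='weight')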