Pandas: Add row depending on index - python

I want to build a DataFrame (df3) that is updated in place with rows from other DataFrames (df1, df2), based on an index ("ID").
When adding a new DataFrame, if an overlapping index is found, update the data; if it is not found, append the data, including the new index.
df1 = pd.DataFrame({"Proj. Num" :["A"],'ID':[000],'DATA':["NO_DATA"]})
df1 = df1.set_index(["ID"])
df2 = pd.DataFrame({"Proj. Num" :["B"],'ID':[100],'DATA':["OK"], })
df2 = df2.set_index(["ID"])
df3 = pd.DataFrame({"Proj. Num" :["B"],'ID':[100],'DATA':["NO_OK"], })
df3 = df3.set_index(["ID"])
#df3 = pd.concat([df1,df2, df3]) #Concat,merge,join???
df3
I have tried pd.concat with verify_integrity=True, but it just raises an error, and I think there is a simpler/nicer way to do it.

Solution with concat + Index.duplicated for boolean mask and filter by boolean indexing:
df3 = pd.concat([df1, df2, df3])
df3 = df3[~df3.index.duplicated()]
print (df3)
DATA Proj. Num
ID
0 NO_DATA A
100 OK B
Another solution, suggested in a comment (thank you), keeps the last occurrence instead:
df3 = pd.concat([df3,df1])
df3 = df3[~df3.index.duplicated(keep='last')]
print (df3)
DATA Proj. Num
ID
100 NO_OK B
0 NO_DATA A
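To see which duplicate wins under each variant, here is a minimal runnable sketch using the question's sample frames; the default keep='first' retains the first occurrence in concat order, while keep='last' retains the last:

```python
import pandas as pd

df1 = pd.DataFrame({"Proj. Num": ["A"], "ID": [0], "DATA": ["NO_DATA"]}).set_index("ID")
df2 = pd.DataFrame({"Proj. Num": ["B"], "ID": [100], "DATA": ["OK"]}).set_index("ID")
df3 = pd.DataFrame({"Proj. Num": ["B"], "ID": [100], "DATA": ["NO_OK"]}).set_index("ID")

both = pd.concat([df1, df2, df3])
first = both[~both.index.duplicated()]             # keeps df2's row for ID 100
last = both[~both.index.duplicated(keep='last')]   # keeps df3's row for ID 100
print(first.loc[100, "DATA"], last.loc[100, "DATA"])  # OK NO_OK
```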

You can concatenate all the dataframes along the index; group by index and decide which element to keep of the group sharing the same index.
From your question, it looks like you want to keep the last (most updated) element with the same index. The order in which you pass the dataframes to pd.concat therefore matters.
For a list of other methods, see here.
res = pd.concat([df1, df2, df3], axis = 0)
res.groupby(res.index).last()
Which gives:
DATA Proj. Num
ID
0 NO_DATA A
100 NO_OK B

#update existing rows
df3.update(df1)
#append new rows
df3 = pd.concat([df3,df1[~df1.index.isin(df3.index)]])
#update existing rows
df3.update(df2)
#append new rows
df3 = pd.concat([df3,df2[~df2.index.isin(df3.index)]])
Out[2438]:
DATA Proj. Num
ID
100 OK B
0 NO_DATA A
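The update/append pairs above implement an upsert. As an alternative not shown in the answers, DataFrame.combine_first can express the same "newer frame wins, old-only rows survive" logic in one chained call; the caveat is that it fills cell by cell, so NaNs in the newer frame would be taken from the older one:

```python
import pandas as pd

df1 = pd.DataFrame({"Proj. Num": ["A"], "ID": [0], "DATA": ["NO_DATA"]}).set_index("ID")
df2 = pd.DataFrame({"Proj. Num": ["B"], "ID": [100], "DATA": ["OK"]}).set_index("ID")
df3 = pd.DataFrame({"Proj. Num": ["B"], "ID": [100], "DATA": ["NO_OK"]}).set_index("ID")

# apply df1 on top of df3, then df2 on top of that (later frames take priority)
res = df2.combine_first(df1.combine_first(df3))
print(res)
```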

Related

How can I add a column that has the same value in every row

I was trying to add a new column to my dataset, but when I did, the column only had one index.
Is there a way to put one value in all indexes of a column?
import pandas as pd
df = pd.read_json('file_1.json', lines=True)
df2 = pd.read_json('file_2.json', lines=True)
df3 = pd.concat([df,df2])
df3 = df.loc[:, ['renderedContent']]
görüş_column = ['Milet İttifakı']
df3['Siyasi Yönelim'] = görüş_column
As per my understanding, this could be a possible solution.
You have mentioned these lines of code:-
df3 = pd.concat([df,df2])
df3 = df.loc[:, ['renderedContent']]
You can modify them into
df3 = pd.concat([df,df2],axis=1) ## axis=1 means second dataframe will add to columns, default value is axis=0 which adds to the rows
Second, you probably meant to write
df3 = df3.loc[:, ['renderedContent']]
instead of df3 = df.loc[:, ['renderedContent']] (note df3, not df).
Hope it will solve your problem.
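One more point worth noting: if the goal is the same label in every row, assigning a scalar (rather than a length-1 list) broadcasts it down the whole column. A minimal sketch with made-up data, reusing the question's column names:

```python
import pandas as pd

# hypothetical stand-in for the concatenated tweets frame
df3 = pd.DataFrame({"renderedContent": ["text a", "text b", "text c"]})

# a scalar assignment is broadcast to every row of the new column
df3["Siyasi Yönelim"] = "Milet İttifakı"
print(df3["Siyasi Yönelim"].tolist())
```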

Efficient way to find row in df2 based on condition from value in df1

I have two dataframes. df1 has ~31,000 rows, while df2 has ~117,000 rows. I want to add a column to df1 based on the following conditions.
(df1.id == df2.id) and (df2.min_value < df1.value <= df2.max_value)
I know that df2 will return either 0 or 1 rows satisfying the condition for each value of id in df1. For each row in df1, I want to add a column from df2 when the above condition is satisfied.
My current code is as follows. It is a line by line approach.
new_df1 = pd.DataFrame(columns=df1.columns.tolist() + [new_col])
for i, row in df1.iterrows():
    val = row['value']
    id = row['id']
    dummy = df2[(df2.id == id) & (df2.max_value >= val) & (df2.min_value < val)]
    if dummy.shape[0] == 0:
        new_col = np.nan
    else:
        new_col = dummy.new_column.values[0]
    l = len(new_df1)
    new_df1.loc[l] = row.tolist() + [new_col]
This is a time costly approach. Is there a way to more efficiently do this problem?
You can merge df1 and df2 based on the id column:
merged_df = df1.merge(df2, on='id', how='left')
Now, any row in DF1 for which the id matches an id of a row in DF2 will have all the DF2 columns placed alongside it. Then, you can simply filter the merged dataframe for your given condition:
merged_df.query('min_value < value <= max_value')
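A minimal, runnable sketch of the merge-then-filter idea with toy data (column names follow the question). Note that ids with no match in df2 get NaN bounds from the left merge and then drop out in the filter, matching the "0 or 1 rows" expectation:

```python
import pandas as pd

df1 = pd.DataFrame({"id": [1, 2, 3], "value": [5, 15, 25]})
df2 = pd.DataFrame({"id": [1, 2],
                    "min_value": [0, 10],
                    "max_value": [10, 20],
                    "new_column": ["a", "b"]})

# one merged row per df1 row; unmatched ids get NaN for the df2 columns
merged = df1.merge(df2, on="id", how="left")
hit = merged.query("min_value < value <= max_value")
print(hit[["id", "new_column"]])
```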

How to extract the data from a column with priority order given in another file?

I have two dataframes
df1
IMPACT Rank
HIGH 1
MODERATE 2
LOW 3
MODIFIER 4
df2['Annotation']
Annotation
A|intron_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000341290|protein_coding||2/4||||||||||-1||HGNC|HGNC:28208||||,A|missense_variant|MODERATE|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999),A|upstream_gene_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000479361|retained_intron|||||||||||4317|-1||HGNC|HGNC:28208||||
A|intron_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000341290|protein_coding||2/4||||||||||-1||HGNC|HGNC:28208||||,A|missense_variant|HIGH|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999),A|upstream_gene_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000479361|retained_intron|||||||||||4317|-1||HGNC|HGNC:28208||||
A|intron_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000341290|protein_coding||2/4||||||||||-1||HGNC|HGNC:28208||||,A|missense_variant|LOW|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999),A|upstream_gene_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000479361|retained_intron|||||||||||4317|-1||HGNC|HGNC:28208||||
There are multiple annotations separated by , (comma). I want to keep only one annotation per row, chosen according to the Rank in df1.
My expected output will be:
df['RANKED']
RANKED
A|missense_variant|MODERATE|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999)
A|missense_variant|HIGH|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999)
A|missense_variant|LOW|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999)
I tried the following code to generate the output, but it did not give the expected result:
d = df1.set_index('IMPACT')['Rank'].to_dict()
max1 = df1['Rank'].max()+1
def f(x):
    d1 = {y: d.get(y, max1) for y in x for y in x.split(',')}
    return min(d1, key=d1.get)

df2['RANKED'] = df2['Annotation'].apply(f)
Any help appreciated..
TL;DR
df2['RANKED'] = df2['Annotation'].str.split(',')
df2 = df2.explode(column='RANKED')
df2['IMPACT'] = df2["RANKED"].str.findall(r"|".join(df1['IMPACT'])).apply("".join)
df_merge = df2.merge(df1, how='left', on='IMPACT')
df_final = df_merge.loc[df_merge.groupby(['Annotation'])['Rank'].idxmin().sort_values()].drop(columns=['Annotation', 'IMPACT'])
Step-by-step
First you define your dataframes
df1 = pd.DataFrame({'IMPACT':['HIGH', 'MODERATE', 'LOW', 'MODIFIER'], 'Rank':[1,2,3,4]})
df2 = pd.DataFrame({
    'Annotation': [
        'A|intron_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000341290|protein_coding||2/4||||||||||-1||HGNC|HGNC:28208||||,A|missense_variant|MODERATE|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999),A|upstream_gene_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000479361|retained_intron|||||||||||4317|-1||HGNC|HGNC:28208||||',
        'A|intron_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000341290|protein_coding||2/4||||||||||-1||HGNC|HGNC:28208||||,A|missense_variant|HIGH|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999),A|upstream_gene_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000479361|retained_intron|||||||||||4317|-1||HGNC|HGNC:28208||||',
        'A|intron_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000341290|protein_coding||2/4||||||||||-1||HGNC|HGNC:28208||||,A|missense_variant|LOW|PERM1|ENSG00000187642|Transcript|ENST00000433179|protein_coding|1/3||||72|72|24|E/D|gaG/gaT|||-1||HGNC|HGNC:28208|YES|CCDS76083.1|deleterious(0)|probably_damaging(0.999),A|upstream_gene_variant|MODIFIER|PERM1|ENSG00000187642|Transcript|ENST00000479361|retained_intron|||||||||||4317|-1||HGNC|HGNC:28208||||']
})
Now here is the tricky part. You create a column holding the comma-split list of the original Annotation string, then explode that column so each piece gets its own row, with the original string repeated alongside.
df2['RANKED'] = df2['Annotation'].str.split(',')
df2 = df2.explode(column='RANKED')
Next, you extract the IMPACT word from each RANKED column.
df2['IMPACT'] = df2["RANKED"].str.findall(r"|".join(df1['IMPACT'])).apply("".join)
Then, you merge df1 and df2 to get the rank of each RANKED.
df_merge = df2.merge(df1, how='left', on='IMPACT')
Finally, this is the easy part where you discard everything you do not want in the final dataframe. This can be done via groupby.
df_final = df_merge.loc[df_merge.groupby(['Annotation'])['Rank'].idxmin().sort_values()].drop(columns=['Annotation', 'IMPACT'])
RANKED Rank
A|missense_variant|MODERATE|PERM1|ENSG00000187... 2
A|missense_variant|HIGH|PERM1|ENSG00000187642|... 1
A|missense_variant|LOW|PERM1|ENSG00000187642|T... 3
OR by dropping duplicates
df_final = df_merge.sort_values(['Annotation', 'Rank'], ascending=[False,True]).drop_duplicates(subset=['Annotation']).drop(columns=['Annotation', 'IMPACT'])
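The explode/merge/idxmin pipeline is easier to follow on toy data. A minimal sketch with short, made-up annotations in the same pipe-and-comma shape (the real IMPACT keywords are kept so the findall pattern is realistic):

```python
import pandas as pd

df1 = pd.DataFrame({"IMPACT": ["HIGH", "MODERATE", "LOW", "MODIFIER"],
                    "Rank": [1, 2, 3, 4]})
df2 = pd.DataFrame({"Annotation": ["x|MODIFIER|p,y|MODERATE|q",
                                   "x|MODIFIER|p,y|HIGH|q"]})

df2["RANKED"] = df2["Annotation"].str.split(",")          # list per row
df2 = df2.explode(column="RANKED")                        # one piece per row
df2["IMPACT"] = df2["RANKED"].str.findall("|".join(df1["IMPACT"])).apply("".join)
merged = df2.merge(df1, how="left", on="IMPACT")          # attach Rank
# keep the best-ranked piece of each original annotation, in original order
best = merged.loc[merged.groupby("Annotation")["Rank"].idxmin().sort_values()]
print(best["RANKED"].tolist())
```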

Join columns of several dataframes into a new dataframe

I have a dictionary of dataframes. Each of these dataframes has a column 'defrost_temperature'. What I want to do is make one new dataframe that collects all those columns, keeping them as separate columns.
This is what I am doing right now:
merged_defrosts = pd.DataFrame()
for key in df_dict.keys():
    merged_defrosts[key] = df_dict[key]["defrost_temperature"]
But unfortunately, only the first column is filled correctly. The other columns are filled with NaN as shown in the screenshot
The different defrosts are not necessarily the same length. (the fourth dataframe is 108 rows, the others are 109 rows)
You can try pd.merge on the index of the larger frame:
df_result = pd.DataFrame()
for i, df in enumerate(df_dict.values()):
    s1, s2 = f'_{i}', f'_{i+1}'
    m1, m2 = df_result.shape[0], df.shape[0]
    if m1 == 0:
        df_result = df
    elif m1 >= m2:
        df_result = df_result.merge(df, how='left', left_index=True, right_index=True, suffixes=(s1, s2))
    else:
        df_result = df.merge(df_result, how='left', left_index=True, right_index=True, suffixes=(s2, s1))
This creates undesired column names, though; you can manually rename them afterwards.
You could try to concat the dataframes horizontally (axis=1) after making the common column the index:
merged_defrosts = pd.concat([df.set_index("defrost_temperature") for df in df_dict.values()],
                            axis=1).reset_index()
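Another way to line the columns up regardless of length is to concatenate just the defrost_temperature Series along axis=1 after resetting each index, so shorter frames are padded with NaN and the dict keys become column names. A minimal sketch with made-up data (the key names run1/run2 are assumptions):

```python
import pandas as pd

df_dict = {
    "run1": pd.DataFrame({"defrost_temperature": [1.0, 2.0, 3.0]}),
    "run2": pd.DataFrame({"defrost_temperature": [4.0, 5.0]}),  # one row shorter
}

# reset_index(drop=True) aligns all Series on 0..n-1, so no NaN from index mismatch;
# the shorter Series is padded with NaN at the bottom instead
merged_defrosts = pd.concat(
    [df["defrost_temperature"].reset_index(drop=True) for df in df_dict.values()],
    axis=1, keys=list(df_dict.keys()))
print(merged_defrosts)
```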

I want to extract the QSTS_ID column, delimit it by full stop, and append it to the existing list as a separate column

When applying the below code, I am getting NaN values in the entire QSTS_ID column:
df['QSTS_ID'] = df['QSTS_ID'].str.split('.',expand=True)
df
I want to copy the entire QSTS_ID column and append it at the end. I also have to delimit it by full stop and apply new headers.
The problem is that with the parameter expand=True, str.split returns a DataFrame with one or more columns, so assigning it back to a single column produces NaNs.
The solution is to add the new columns to the original DataFrame with join or concat; add_prefix changes the new columns' names:
df = df.join(df['QSTS_ID'].str.split('.',expand=True).add_prefix('QSTS_ID_'))
df = pd.concat([df, df['QSTS_ID'].str.split('.',expand=True).add_prefix('QSTS_ID_')], axis=1)
If want also remove original column:
df = df.join(df.pop('QSTS_ID').str.split('.',expand=True).add_prefix('QSTS_ID_'))
df = pd.concat([df,
df.pop('QSTS_ID').str.split('.',expand=True).add_prefix('QSTS_ID_')], axis=1)
Sample:
df = pd.DataFrame({
    'QSTS_ID': ['val_k.lo', 'val2.s', 'val3.t'],
    'F': list('abc')
})
df1 = df['QSTS_ID'].str.split('.',expand=True).add_prefix('QSTS_ID_')
df = df.join(df1)
print (df)
QSTS_ID F QSTS_ID_0 QSTS_ID_1
0 val_k.lo a val_k lo
1 val2.s b val2 s
2 val3.t c val3 t
#check columns names of new columns
print (df1.columns)
