Pandas Concat two dataframes with different numbers of rows - python

I have a question about pd.concat. I get some weird results and I don't understand why.
Let's start with a simple example (it also shows what I want to achieve):
import pandas as pd
df1 = pd.DataFrame([[1,2,3],[7,6,5]], columns = ["A","B","C"])
print("DF1: \n", df1)
df2 = pd.DataFrame([[4,5,6]], columns = ["A","B","C"])
print("DF2: \n", df2)
df3 = pd.concat([df1, df2], ignore_index = True)
print("Concat DF1 and DF2: \n",df3)
Now, in my actual program, I have DataFrames like this:
When I apply the concat function, I get this:
It makes zero sense to me. What could possibly be the reason?
P.S. It's not urgent, because I found a workaround, but this bothers me and makes me a bit angry too.
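Since the screenshots aren't included here, it's hard to say for sure, but one common cause of "weird" concat results is column labels that don't match exactly (e.g. a stray space), which makes pandas align on names and pad with NaN. A minimal sketch of that failure mode (the mismatched label is invented for illustration):

```python
import pandas as pd

df1 = pd.DataFrame([[1, 2, 3]], columns=["A", "B", "C"])
# Note the trailing space in "A " -- pandas treats it as a different column
df2 = pd.DataFrame([[4, 5, 6]], columns=["A ", "B", "C"])

df3 = pd.concat([df1, df2], ignore_index=True)
print(df3.columns.tolist())  # ['A', 'B', 'C', 'A '] -- two "A" columns
print(df3)  # each of the "A" columns is half-filled with NaN
```

Checking `df1.columns.equals(df2.columns)` before concatenating can catch this quickly.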

Use either of the following to combine two DataFrames row-wise:
Code 1) self.teste_df = (self.teste_df).append(test, ignore_index=True)  # note: DataFrame.append is deprecated since pandas 1.4 and removed in 2.0
Code 2) pd.concat([self.teste_df, test], axis=0, ignore_index=True)

I made them both a list, and combined the lists with +.
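For what it's worth, that list workaround might look like this (a sketch using the toy frames from the question):

```python
import pandas as pd

df1 = pd.DataFrame([[1, 2, 3], [7, 6, 5]], columns=["A", "B", "C"])
df2 = pd.DataFrame([[4, 5, 6]], columns=["A", "B", "C"])

# Turn each frame into a list of rows, combine the lists with +,
# then rebuild a single DataFrame
rows = df1.values.tolist() + df2.values.tolist()
df3 = pd.DataFrame(rows, columns=df1.columns)
print(df3)
```

This only behaves like concat when both frames share the same columns in the same order; pd.concat is the more robust option.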

Related

merging two excel files and then removing duplicates that it creates

I've just started using Python, so I could do with some help.
I've merged data in two excel files using the following code:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#export new dataframe to excel
df.to_excel('WLM module data_test4.xlsx')
This does merge the data, but where dataframe 1 has multiple entries for a module, it duplicates the df2 data in the new merged file so that the entries match up. Here's an example:
So I want to only have one entry for the moderation of the module, whereas I have two at the moment (highlighted in red).
I also want to remove the additional columns : "term_y", "semester_y", "credits_y" and "students_y" in the final output as they are just repeats of data I already have in df1.
Thanks!
I think what you want is duplicated(), gathered from
Pandas - Replace Duplicates with Nan and Keep Row
&
Replace duplicated values with a blank string
So what you want, after your merge, is this: df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
Please read both linked Stack Overflow examples to understand better how this works.
So the full code would look like this:
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
#export new dataframe to excel
df.to_excel('WLM module data_test5-working.xlsx')
There are many ways to drop columns too.
I've chosen, for lack of more time, to do this:
df.drop(df.columns[2], axis=1, inplace=True)
from https://www.stackvidhya.com/drop-column-in-pandas/
Change df.columns[2] to the Nth column you want to drop (since my working data was different from yours*), and do it after the merge, so that the full code will look like this:
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
#https://www.stackvidhya.com/drop-column-in-pandas/
#drop the unwanted column before exporting, so it actually disappears from the file
df.drop(df.columns[2], axis=1, inplace=True)
#export new dataframe to excel
df.to_excel('WLM module data_test6-working.xlsx')
Hope I've helped. I'm just very happy I got you somewhere, for both of our sakes!
Glad you have a working answer.
& if you want to create a new df out of the merged, de-duplicated and dropped-columns df, you can do this:
new = df.drop(df.iloc[: , [1, 2, 7]], axis=1)
from Extracting specific selected columns to new DataFrame as a copy
*So the full code would look something like this (please adjust the column numbers to your needs), which is what I wanted:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
new = df.drop(df.iloc[: , [1, 2, 7]], axis=1)
#new=pd.DataFrame(df.drop(df.columns[2], axis=1, inplace=True))
print(new)
#export new dataframe to excel
df.to_excel('WLM module data_test12.xlsx')
new.to_excel('WLM module data_test13.xlsx')
Note: *When I did mine above, I deliberately didn't put any headers in the columns, to make it as generic as possible, so I used iloc to specify the column number initially. (Your original question was not that descriptive or clear, but I got the point.) I think you should include copyable sample data (not screenshots) next time, to make it easier for people and to incentivise the experts on here to engage with the post, plus clearer whys and hows. And SO isn't a free code-writing service, you know, but it was hugely to my benefit too to delve into this.
Could you provide a sample of desired output?
Otherwise, choosing the right type of merge should resolve your issue. Have a look at the documentation; the possible options and their corresponding SQL statements are listed there:
https://pandas.pydata.org/docs/reference/api/pandas.merge.html
Regarding the additional columns you have two options:
Again from the documentation: Select the suffixes with the suffixes parameter. To add suffixes only to duplicate columns from df1, you could set them to something like suffixes=("B2", "").
Use df2 within the merge only with the columns needed in the output. E.g.
df = df1.merge(df2[['module_id', 'moderator']], on = 'module_id', how='outer')
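A small self-contained sketch of both options (the data below is invented, since the real Excel files aren't available):

```python
import pandas as pd

df1 = pd.DataFrame({"module_id": [1, 2], "term": ["T1", "T2"], "credits": [10, 20]})
df2 = pd.DataFrame({"module_id": [1, 2], "term": ["T1", "T2"], "moderator": ["Ann", "Bob"]})

# Option 1: control how overlapping column names are suffixed;
# an empty left suffix keeps df1's names untouched
merged = df1.merge(df2, on="module_id", how="outer", suffixes=("", "_mod"))
print(merged.columns.tolist())  # overlapping 'term' from df2 becomes 'term_mod'

# Option 2: merge only the df2 columns you actually need,
# so no overlapping columns (and no suffixes) appear at all
merged2 = df1.merge(df2[["module_id", "moderator"]], on="module_id", how="outer")
print(merged2.columns.tolist())
```

Option 2 is usually the cleaner fix when the overlapping columns are pure repeats, as in the question.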
Further to the three successful, working code snippets below, each one answering a part of your question, you could do the whole thing using iloc, which is what I prefer.
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#do/apply duplicated() on one of the columns. (see more about duplicated in my post below)
df.loc[df[df.iloc[:0,8].name].duplicated(), 'module_id'] = pd.NA
# drop the columns you don't want and save the result to a new df
new = df.drop(df.iloc[: , [11, 13, 14, 15]], axis=1)
#new=pd.DataFrame(df.drop(df.columns[2], axis=1, inplace=True))
print(new)
#export new dataframe to excel
df.to_excel('WLM module data_test82.xlsx')
new.to_excel('WLM module data_test83.xlsx') #<-- this is your data after dropping the columns. I wanted to do it separately so you can see what is happening/know how to. The default is just to modify the old df.
print(df.iloc[:0,8].name) # to show you how to get the header name from iloc
print(df.iloc[:0,8]) # to show you what iloc gives on its own
"term_y", "semester_y", "credits_y" and "students_y" are columns 12, 14, 15 & 16, the ones you want to remove, so I've done that here.
iloc starts from 0, so new = df.drop(df.iloc[: , [11, 13, 14, 15]], axis=1), like in the third piece of code before, does what you wanted. All you have to do is change the column numbers it refers to. (If you'd given us copyable dummy data replicating your use case instead of a snapshot picture, we would have worked with that rather than having to write it out ourselves.) Post edit 14:48 24/04/22 - just done that here for you. Just copy the code and run.
You have Module (col 3), Module_Id (col 4) and module name (col 13) in your data. (In my dummy data that was column 9, iloc 8; as said, I didn't have time to replicate it perfectly, just the idea.) But I think it's the module_id column (column 9, iloc 8) that you want, not just to merge on, but also to then apply .duplicated() to, so you can run the code as is if that's the case.
If it's not, just change df.loc[df[df.iloc[:0,8].name].duplicated(), 'module_id'] = pd.NA from the number 8 to 2, 3 or 12 for your use case/columns.
I think I prefer this answer, because counting columns by number frees you from having to call a column by name and allows for a different kind of automation. You can still use contains or regex searches to locate and work with column data later on, but this method has its own power over relying on names; more precise, I feel.
Literally plug this code in, run it, play with it, and let me know how it goes. It all works for me.
Thanks everyone for your help, this is my final code which seems to work:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#drop columns not needed
df.drop('term_y', inplace=True, axis=1)
df.drop('semester_y', inplace=True, axis=1)
df.drop('credits_y', inplace=True, axis=1)
df.drop('n_students_y', inplace=True, axis=1)
#drop duplicated rows
df.loc[df['module_name'].duplicated(), 'module_name'] = pd.NA
df.loc[df['moderation_wl'].duplicated(), 'moderation_wl'] = pd.NA
#export new dataframe to excel
df.to_excel('output.xlsx')

Pandas Dataframes - Combine two Dataframes but leave out entry with same column

I'm trying to create a DataFrame out of two existing ones. I read the titles of some articles on the web; the first column is the title and the ones after are timestamps.
I want to concat both DataFrames but leave out the rows with the same title (column one).
I tried
df = pd.concat([df1,df2]).drop_duplicates().reset_index(drop=True)
but because the other columns may not always be exactly the same, I need to leave out every record that has the same first column. How would I do this?
By the way, sorry for not knowing all the right terms for my problem.
You should first remove the duplicate rows from df2 and then concat it with df1:
df = pd.concat([df1, df2[~df2.title.isin(df1.title)]]).reset_index(drop=True)
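A runnable sketch with made-up titles and timestamps:

```python
import pandas as pd

df1 = pd.DataFrame({"title": ["a", "b"], "ts": [1, 2]})
df2 = pd.DataFrame({"title": ["b", "c"], "ts": [9, 3]})

# Keep only the df2 rows whose title is not already in df1, then concat;
# the duplicate "b" row from df2 (ts=9) is left out
df = pd.concat([df1, df2[~df2.title.isin(df1.title)]]).reset_index(drop=True)
print(df)
```

Note this keeps df1's version of any shared title, which matches the question: the other columns may differ, so only column one decides what counts as a duplicate.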
This probably solves your problem:
import pandas as pd
import numpy as np
df = pd.DataFrame(np.arange(2*5).reshape(2,5))
df2 = pd.DataFrame(np.arange(2*5).reshape(2,5))
df.columns = ['blah1','blah2','blah3','blah4','blah']
df2.columns = ['blah5','blah6','blah7','blah8','blah']
# drop from df2 any column whose name also appears in df,
# so the concat below doesn't produce duplicate column names
for col in df.columns:
    if col in df2.columns:
        df2 = df2.drop(col, axis=1)
print(pd.concat([df, df2], axis=1))

Filling a dataframe with multiple dataframe values

I have some 100 dataframes that need to be filled in another big dataframe. Presenting the question with two dataframes
import pandas as pd
df1 = pd.DataFrame([1,1,1,1,1], columns=["A"])
df2 = pd.DataFrame([2,2,2,2,2], columns=["A"])
Please note that both the dataframes have same column names.
I have a master dataframe that has repetitive index values as follows:-
master_df=pd.DataFrame(index=df1.index)
master_df= pd.concat([master_df]*2)
Expected Output:-
master_df['A']=[1,1,1,1,1,2,2,2,2,2]
I am using for loop to replace every n rows of master_df with df1,df2... df100.
Please suggest a better way of doing it.
In fact df1, df2, ..., df100 are the output of a function whose input is the column A values (1, 2, ...). I was wondering if there is something like
another_df=master_df['A'].apply(lambda x: function(x))
Thanks in advance.
If you want to concatenate the dataframes you could just use pandas concat with a list as the code below shows.
First you can add df1 and df2 to a list:
df_list = [df1, df2]
Then you can concat the dfs:
master_df = pd.concat(df_list)
I used the default value of 0 for 'axis' in the concat function (which is what I think you are looking for), but if you want to concatenate the different dfs side by side you can just set axis=1.
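Putting that together with the generating function from the question (the function below is a stand-in, since the real one isn't shown):

```python
import pandas as pd

def make_df(value):
    # Stand-in for the real function that produces df1, df2, ...
    # from a column A value
    return pd.DataFrame([value] * 5, columns=["A"])

# Build all the frames, then concatenate them in one call
df_list = [make_df(v) for v in [1, 2]]  # extend to range(1, 101) for 100 frames
master_df = pd.concat(df_list, ignore_index=True)
print(master_df["A"].tolist())  # [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
```

This avoids the row-replacement loop entirely: one concat over the list produces the expected output in a single step.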

How to place iteratively generated dataframes side by side in Python?

I have some code that generates different DataFrames and appends them one below the other.
df = pd.DataFrame()
...
new_col = pd.read_parquet(filepath)
aux = pd.concat([aux, new_col])
aux['measure'] = sn
df = df.append(aux)
The code works fine, but I need them side by side. df is an empty dataframe to which I append every aux, which contains all the data. Apparently, neither concat, join, nor merge works, since I cannot concat df and aux that way.
Thanks!
Side by side ?
pd.concat(list_of_df, axis=1)
This should be good
To concatenate them, do just as you have done in the line above, but specify axis=1 and join='outer'. You also have to reset the index first, because concatenation takes the index into account.
aux.reset_index(inplace=True, drop=True)
df = pd.concat([df, aux], axis=1, join='outer')
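For example, with two made-up frames whose indexes don't line up (resetting both, since in the question df also accumulates its own index):

```python
import pandas as pd

df = pd.DataFrame({"measure_a": [1, 2, 3]}, index=[10, 11, 12])
aux = pd.DataFrame({"measure_b": [4, 5, 6]}, index=[0, 1, 2])

# Without the resets, concat would align on index and pad with NaN,
# giving 6 half-empty rows instead of 3 full ones
df = df.reset_index(drop=True)
aux.reset_index(inplace=True, drop=True)
side_by_side = pd.concat([df, aux], axis=1, join="outer")
print(side_by_side)  # 3 rows, 2 columns, no NaN
```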

Pandas how to concat two dataframes without losing the column headers

I have the following toy code:
import pandas as pd
df = pd.DataFrame()
df["foo"] = [1,2,3,4]
df2 = pd.DataFrame()
df2["bar"]=[4,5,6,7]
df = pd.concat([df,df2], ignore_index=True,axis=1)
print(list(df))
Output: [0,1]
Expected Output: [foo,bar] (order is not important)
Is there any way to concatenate two dataframes without losing the original column headers, if I can guarantee that the headers will be unique?
Iterating through the columns and then adding them to one of the DataFrames comes to mind, but is there a pandas function, or concat parameter that I am unaware of?
Thanks!
As stated in the merge, join, and concatenate documentation, ignore_index removes all name references and uses a range (0...n-1) instead. So it should give you the result you want once you remove the ignore_index argument or set it to False (the default).
df = pd.concat([df, df2], axis=1)
This will join your df and df2 based on indexes (same indexed rows will be concatenated, if other dataframe has no member of that index it will be concatenated as nan).
If you have different indexing on your dataframes, and want to concatenate it this way. You can either create a temporary index and join on that, or set the new dataframe's columns after using concat(..., ignore_index=True).
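To illustrate that index alignment (toy data with deliberately different indexes):

```python
import pandas as pd

df = pd.DataFrame({"foo": [1, 2]}, index=[0, 1])
df2 = pd.DataFrame({"bar": [4, 5]}, index=[1, 2])

# concat on axis=1 aligns rows by index: only index 1 exists in both,
# so indexes 0 and 2 are padded with NaN in the missing column
out = pd.concat([df, df2], axis=1)
print(out)
```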
I don't think the accepted answer answers the question, which is about column headers, not indexes.
I am facing the same problem, and my workaround is to add the column names after the concatenation:
df.columns = ["foo", "bar"]
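For completeness, that workaround applied to the toy code from the question:

```python
import pandas as pd

df = pd.DataFrame({"foo": [1, 2, 3, 4]})
df2 = pd.DataFrame({"bar": [4, 5, 6, 7]})

out = pd.concat([df, df2], ignore_index=True, axis=1)
out.columns = ["foo", "bar"]  # restore the headers lost by ignore_index
print(list(out))  # ['foo', 'bar']
```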
