I have data files that are converted to pandas DataFrames. Some of them share column names while others share the time-series index, and I want to combine them all into one DataFrame, matching on both columns and index wherever they line up. Since there is no fixed naming sequence, the files arrive in random order for concatenation. Concatenating two DataFrames with different columns along axis=1 works fine, but if the resulting DataFrame is then combined with a new DataFrame whose column name matches one of the earlier merged DataFrames, the concat fails. For example, with these data files:
import pandas as pd
df1 = pd.read_csv('0.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
df2 = pd.read_csv('1.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
df3 = pd.read_csv('2.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
data1 = pd.DataFrame()
file_list = [df1, df2, df3]  # fails
# file_list = [df2, df3, df1]  # works
for fn in file_list:
    if data1.empty or fn.columns[1] in data1.columns:
        data1 = pd.concat([data1, fn])
    else:
        data1 = pd.concat([data1, fn], axis=1)
I get ValueError: Plan shapes are not aligned when I try this. In my case there is no way to first load all the DataFrames and check their column names; if I could, I would first combine all the DataFrames with the same column names and only then concatenate the resulting DataFrames with different column names along axis=1, which I know always works, as shown below. However, a solution that requires preloading all the DataFrames and rearranging the concatenation sequence is not possible in my case (it was only done for the working example above). I need the flexibility to concatenate the information into the larger DataFrame data1 in whatever sequence it arrives. Please let me know if you can suggest a suitable approach.
If you step through the loop, you can see that in the first iteration it takes the if branch, so data1 equals df1. In the second iteration it takes the else branch, since data1 is not empty and 'Temperature product barrel ValueY' is not in data1.columns.
After the else, data1 has some duplicated column names, and in every row of those duplicated columns one of the two values is NaN while the other is a float. This is why pd.concat() fails.
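As a minimal reproduction with made-up frames (not the question's CSV data): once a frame carries duplicated column labels, a later axis=0 concat breaks. The exact exception type varies across pandas versions, so the sketch only checks that one is raised:

```python
import pandas as pd

# Hypothetical frames standing in for the CSV data in the question.
left = pd.DataFrame({"A": [1.0, 2.0]})
right = pd.DataFrame({"B": [3.0, 4.0]})

# Repeated axis=1 concats can produce duplicated column labels...
dup = pd.concat([pd.concat([left, right], axis=1),
                 pd.concat([left, right], axis=1)], axis=1)
print(list(dup.columns))  # ['A', 'B', 'A', 'B']

# ...and stacking such a frame with another along axis=0 then fails,
# because the column union cannot be formed from duplicated labels.
try:
    pd.concat([dup, left])
    failed = False
except Exception:
    failed = True
print(failed)
```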
You can aggregate the duplicate columns before concatenating to get rid of them:
import numpy as np

for fn in file_list:
    if data1.empty or fn.columns[1] in data1.columns:
        # new: collapse duplicated column names before concatenating
        data1 = data1.groupby(data1.columns, axis=1).agg(np.nansum)
        data1 = pd.concat([data1, fn])
    else:
        data1 = pd.concat([data1, fn], axis=1)
After that, you would get
data1.shape
(30, 23)
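An order-independent alternative worth noting is DataFrame.combine_first, which aligns on both index and columns and fills gaps, so no branching on column membership is needed at all. A minimal sketch with made-up frames (not the question's CSV files):

```python
import pandas as pd

# Hypothetical stand-ins for the frames read from the CSV files.
idx = pd.date_range("2021-01-01", periods=3, freq="D")
df1 = pd.DataFrame({"Temp": [1.0, 2.0, 3.0]}, index=idx)
df2 = pd.DataFrame({"Pressure": [10.0, 20.0, 30.0]}, index=idx)
df3 = pd.DataFrame({"Temp": [4.0, 5.0, 6.0]},
                   index=idx + pd.Timedelta(days=3))

data1 = pd.DataFrame()
for fn in [df1, df2, df3]:       # any arrival order gives the same result
    data1 = data1.combine_first(fn)

print(data1.shape)               # (6, 2): union of indexes and columns
```

Because combine_first only fills in values that are missing, frames sharing a column extend it row-wise, while frames with new columns extend the frame column-wise, regardless of sequence.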
Related
I've just started using python so could do with some help.
I've merged data in two excel files using the following code:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#export new dataframe to excel
df.to_excel('WLM module data_test4.xlsx')
This does merge the data, but where dataframe 1 has multiple entries for a module, it also duplicates the df2 data in the new merged file so that the entry counts match. Here's an example:
[screenshot of the merged output]
So I want to only have one entry for the moderation of the module, whereas I have two at the moment (highlighted in red).
I also want to remove the additional columns "term_y", "semester_y", "credits_y" and "students_y" from the final output, as they are just repeats of data I already have in df1.
Thanks!
I think what you want is duplicated(), garnered from
Pandas - Replace Duplicates with Nan and Keep Row
&
Replace duplicated values with a blank string
So what you want is this after your merge: df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
Please read both stackoverflow link examples to understand how this works better.
So the full code would look like this:
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
#export new dataframe to excel
df.to_excel('WLM module data_test5-working.xlsx')
There are many ways to drop columns too.
I've chosen, for lack of time, to do this:
df.drop(df.columns[2], axis=1, inplace=True)
from https://www.stackvidhya.com/drop-column-in-pandas/
Change df.columns[2] to the Nth column you want to drop (since my working data was different from yours*).
Do this after the merge, so the full code will look like this:
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
#https://www.stackvidhya.com/drop-column-in-pandas/
#export new dataframe to excel
df.drop(df.columns[2], axis=1, inplace=True)
#export new dataframe to excel
df.to_excel('WLM module data_test6-working.xlsx')
Hope I've helped.
I'm just very happy I got you somewhere with this, for both of our sakes!
Happy you have a working answer.
And if you want to create a new df out of the merged, deduplicated and dropped-columns df, you can do this:
new = df.drop(df.iloc[: , [1, 2, 7]], axis=1)
from Extracting specific selected columns to new DataFrame as a copy
*So the full code would look something like this (please adjust the column numbers to your needs), which is what I wanted:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
new = df.drop(df.iloc[: , [1, 2, 7]], axis=1)
#new=pd.DataFrame(df.drop(df.columns[2], axis=1, inplace=True))
print(new)
#export new dataframe to excel
df.to_excel('WLM module data_test12.xlsx')
new.to_excel('WLM module data_test13.xlsx')
Note: *When I did mine above, I deliberately didn't have any headers in the columns, to make it as generic as possible, so I used iloc to specify the column number initially (since your original question was not that descriptive or clear, but I got the point). I think you should include copyable sample data (not screenshots) next time, to make it easier for people and to entice the experts here to engage with the post, plus clearer whys and hows. And SO isn't a free code-writing service, you know, but it was hugely to my benefit too to delve into this.
Could you provide a sample of the desired output?
Otherwise, choosing the right type of merge should resolve your issue. Have a look at the documentation; the possible options and their corresponding SQL statements are listed there:
https://pandas.pydata.org/docs/reference/api/pandas.merge.html
Regarding the additional columns you have two options:
Again from the documentation: control the suffixes with the suffixes parameter. To add suffixes only to the duplicate columns coming from df1, you could set them to something like suffixes=("B2", "").
Use df2 within the merge with only the columns needed in the output, e.g.
df = df1.merge(df2[['module_id', 'moderator']], on = 'module_id', how='outer')
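A runnable sketch of both options, using small made-up frames in place of the two workbooks (the column values are illustrative):

```python
import pandas as pd

# Hypothetical stand-ins for the two Excel files in the question.
df1 = pd.DataFrame({"module_id": ["M1", "M2"], "term": [1, 2]})
df2 = pd.DataFrame({"module_id": ["M1", "M2"],
                    "term": [1, 2], "moderator": ["A", "B"]})

# Option 1: suffix only df1's overlapping columns, keep df2's bare.
merged = df1.merge(df2, on="module_id", how="outer", suffixes=("B2", ""))
print(list(merged.columns))  # ['module_id', 'termB2', 'term', 'moderator']

# Option 2: merge in only the df2 columns needed in the output.
slim = df1.merge(df2[["module_id", "moderator"]], on="module_id", how="outer")
print(list(slim.columns))    # ['module_id', 'term', 'moderator']
```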
Further to the three successful, working pieces of code below, each answering a part of your question,
you could do the whole thing using iloc, which is what I prefer.
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#do/apply duplicated() on one of the columns (see more about duplicated() in my post below)
df.loc[df[df.iloc[:0,8].name].duplicated(), 'module_id'] = pd.NA
# drop the columns you don't want and save to a new df / create a new sheet
new = df.drop(df.iloc[: , [11, 13, 14, 15]], axis=1)
#new=pd.DataFrame(df.drop(df.columns[2], axis=1, inplace=True))
print(new)
#export new dataframe to excel
df.to_excel('WLM module data_test82.xlsx')
new.to_excel('WLM module data_test83.xlsx') #<-- this is your data after dropping the columns. I wanted to do it separately so you can see what is happening and know how; the default is just to modify the old df.
print(df.iloc[:0,8].name) # to show you how to get the header name from iloc
print(df.iloc[:0,8]) # to show you what iloc gives on its own
"term_y", "semester_y", "credits_y" and "students_y" are columns 12, 14, 15 & 16, the ones you want to remove, so I've done that here.
iloc starts from 0, hence new = df.drop(df.iloc[:, [11, 13, 14, 15]], axis=1)
So, like the third piece of code before, this does what you wanted; all you have to do is change the column numbers it refers to. (If you'd given us dummy text replicating your use case instead of a screenshot, we would have copied that to work with instead of writing it out ourselves with no time.) Post edit 14:48 24/04/22: I've just done that here for you. Just copy the code and run it.
You have Module (col 3), Module_Id (col 4) and module name (col 13) in your data. (In my dummy data that was column 9, iloc 8; as said, I didn't have time to replicate it perfectly, just the idea.) But I think it's the module_id column (column 9, iloc 8) that you want not just to merge on, but also to apply .duplicated() to, so you can run the code as-is if that's the case.
If it's not, just change df.loc[df[df.iloc[:0,8].name].duplicated(), 'module_id'] = pd.NA from the number 8 to 2, 3 or 12 for your use case/columns.
I think I prefer this answer, because counting column numbers frees you from having to call columns by name and allows a different kind of automation. You can still use contains or a regex to locate and work with column data later on, but this is another method with its own power over relying on names; more precise, I feel.
Literally plug this code in, run it, play with it, and let me know how it goes. It all works for me.
Thanks everyone for your help, this is my final code which seems to work:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#drop columns not needed
df.drop('term_y', inplace=True, axis=1)
df.drop('semester_y', inplace=True, axis=1)
df.drop('credits_y', inplace=True, axis=1)
df.drop('n_students_y', inplace=True, axis=1)
#drop duplicated rows
df.loc[df['module_name'].duplicated(), 'module_name'] = pd.NA
df.loc[df['moderation_wl'].duplicated(), 'moderation_wl'] = pd.NA
#export new dataframe to excel
df.to_excel('output.xlsx')
I have an initial dataframe D. I extract two data frames from it like this:
A = D[D.label == k]
B = D[D.label != k]
I want to combine A and B into one DataFrame. The order of the data is not important. However, when we sample A and B from D, they retain their indexes from D.
DEPRECATED: DataFrame.append and Series.append were deprecated in v1.4.0.
Use append:
df_merged = df1.append(df2, ignore_index=True)
And to keep their indexes, set ignore_index=False.
Use pd.concat to join multiple dataframes:
df_merged = pd.concat([df1, df2], ignore_index=True, sort=False)
Merge across rows:
df_row_merged = pd.concat([df_a, df_b], ignore_index=True)
Merge across columns:
df_col_merged = pd.concat([df_a, df_b], axis=1)
If you're working with big data and need to concatenate multiple datasets calling concat many times can get performance-intensive.
If you don't want to create a new df each time, you can instead aggregate the changes and call concat only once:
frames = [df_A, df_B] # Or perform operations on the DFs
result = pd.concat(frames)
This is pointed out in the pandas docs, under concatenating objects (at the bottom of the section):
Note: It is worth noting however, that concat (and therefore append)
makes a full copy of the data, and that constantly reusing this
function can create a significant performance hit. If you need to use
the operation over several datasets, use a list comprehension.
If you want to update/replace the values of the first dataframe df1 with the values of the second dataframe df2, you can do it with the following steps —
Step 1: Set index of the first dataframe (df1)
df1 = df1.set_index('id')
Step 2: Set index of the second dataframe (df2)
df2 = df2.set_index('id')
and finally update the dataframe using the following snippet —
df1.update(df2)
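A runnable sketch of those steps, with toy frames and an illustrative 'id'/'val' schema (note that set_index returns a new frame, so the result must be reassigned):

```python
import pandas as pd

# Toy frames; the 'id'/'val' names are illustrative, not from the question.
df1 = pd.DataFrame({"id": [1, 2, 3], "val": [10, 20, 30]})
df2 = pd.DataFrame({"id": [2, 3], "val": [99, 88]})

df1 = df1.set_index("id")   # set_index returns a copy, so reassign
df2 = df2.set_index("id")

df1.update(df2)             # overwrites matching cells of df1 in place
print(df1["val"].tolist())
```

update only overwrites cells whose row/column labels align; rows of df1 with no match in df2 (id 1 here) keep their original values.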
To join 2 pandas dataframes by column, using their indices as the join key, you can do this:
both = a.join(b)
And if you want to join multiple DataFrames, Series, or a mixture of them, by their index, just put them in a list, e.g.,:
everything = a.join([b, c, d])
See the pandas docs for DataFrame.join().
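For instance, with three small made-up frames sharing an index:

```python
import pandas as pd

# Hypothetical frames aligned on the same index labels.
a = pd.DataFrame({"x": [1, 2]}, index=["r1", "r2"])
b = pd.DataFrame({"y": [3, 4]}, index=["r1", "r2"])
c = pd.DataFrame({"z": [5, 6]}, index=["r1", "r2"])

both = a.join(b)                 # left join on the index by default
everything = a.join([b, c])      # the list form joins several at once
print(list(everything.columns))  # ['x', 'y', 'z']
```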
# collect excel content into list of dataframes
data = []
for excel_file in excel_files:
    data.append(pd.read_excel(excel_file, engine="openpyxl"))
# concatenate dataframes horizontally
df = pd.concat(data, axis=1)
# save combined data to excel
df.to_excel(excelAutoNamed, index=False)
You can try the above when you are appending horizontally! Hope this helps someone.
Use this code to attach two pandas DataFrames horizontally:
df3 = pd.concat([df1, df2], axis=1, ignore_index=True, sort=False)
You must specify along which axis you intend to concatenate the two frames.
I have some 100 dataframes that need to be filled into another big dataframe. Presenting the question with two dataframes:
import pandas as pd
df1 = pd.DataFrame([1,1,1,1,1], columns=["A"])
df2 = pd.DataFrame([2,2,2,2,2], columns=["A"])
Please note that both the dataframes have same column names.
I have a master dataframe that has repetitive index values as follows:-
master_df=pd.DataFrame(index=df1.index)
master_df= pd.concat([master_df]*2)
Expected Output:-
master_df['A']=[1,1,1,1,1,2,2,2,2,2]
I am using for loop to replace every n rows of master_df with df1,df2... df100.
Please suggest a better way of doing it.
In fact df1,df2...df100 are output of a function where the input is column A values (1,2). I was wondering if there is something like
another_df=master_df['A'].apply(lambda x: function(x))
Thanks in advance.
If you want to concatenate the dataframes you could just use pandas concat with a list as the code below shows.
First you can add df1 and df2 to a list:
df_list = [df1, df2]
Then you can concat the dfs:
master_df = pd.concat(df_list)
I used the default value of 0 for axis in the concat function (which I think is what you are looking for), but if you want to concatenate the dfs side by side you can just set axis=1.
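Applied to the frames from the question, this gives the expected output:

```python
import pandas as pd

df1 = pd.DataFrame([1, 1, 1, 1, 1], columns=["A"])
df2 = pd.DataFrame([2, 2, 2, 2, 2], columns=["A"])

# axis=0 (the default) stacks the frames; ignore_index renumbers the rows
master_df = pd.concat([df1, df2], ignore_index=True)
print(master_df["A"].tolist())  # [1, 1, 1, 1, 1, 2, 2, 2, 2, 2]
```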
I have the following toy code:
import pandas as pd
df = pd.DataFrame()
df["foo"] = [1,2,3,4]
df2 = pd.DataFrame()
df2["bar"]=[4,5,6,7]
df = pd.concat([df,df2], ignore_index=True,axis=1)
print(list(df))
Output: [0,1]
Expected Output: [foo,bar] (order is not important)
Is there any way to concatenate two dataframes without losing the original column headers, if I can guarantee that the headers will be unique?
Iterating through the columns and then adding them to one of the DataFrames comes to mind, but is there a pandas function, or concat parameter that I am unaware of?
Thanks!
As stated in the merge, join, and concatenate documentation, ignore_index removes all name references and uses a range (0...n-1) instead. So you should get the result you want once you remove the ignore_index argument or set it to False (the default):
df = pd.concat([df, df2], axis=1)
This will join your df and df2 based on their indexes (rows with the same index are concatenated side by side; where the other dataframe has no member of that index, NaN is filled in).
If your dataframes are indexed differently and you still want to concatenate them this way, you can either create a temporary shared index to join on, or set the new dataframe's columns manually after using concat(..., ignore_index=True).
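A quick check with the toy frames from the question confirms that leaving out ignore_index keeps the headers:

```python
import pandas as pd

df = pd.DataFrame({"foo": [1, 2, 3, 4]})
df2 = pd.DataFrame({"bar": [4, 5, 6, 7]})

merged = pd.concat([df, df2], axis=1)  # no ignore_index
print(list(merged.columns))  # ['foo', 'bar']
```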
I don't think the accepted answer answers the question, which is about column headers, not indexes.
I am facing the same problem, and my workaround is to add the column names after the concatenation:
df.columns = ["foo", "bar"]