I've just started using Python, so I could do with some help.
I've merged data from two Excel files using the following code:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#export new dataframe to excel
df.to_excel('WLM module data_test4.xlsx')
This does merge the data, but where dataframe 1 has multiple entries for a module, it duplicates the dataframe 2 data in the new merged file so that there are equal numbers of entries on both sides. Here's an example:
(screenshot of the merged output not included)
So I want to have only one entry for the moderation of the module, whereas I have two at the moment (highlighted in red).
I also want to remove the additional columns "term_y", "semester_y", "credits_y" and "students_y" from the final output, as they just repeat data I already have in df1.
Thanks!
I think what you want is duplicated(), gathered from
Pandas - Replace Duplicates with Nan and Keep Row
&
Replace duplicated values with a blank string
So what you want is this after your merge: df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
Please read both Stack Overflow examples linked above to understand better how this works.
So the full code would look like this:
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
#export new dataframe to excel
df.to_excel('WLM module data_test5-working.xlsx')
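As a toy illustration of what duplicated() marks on the merged frame (made-up data, not your actual files):

```python
import pandas as pd

# Toy frame standing in for the merged output, with a repeated module_id
df = pd.DataFrame({'module_id': ['M1', 'M1', 'M2'],
                   'moderator': ['Ann', 'Ann', 'Bob']})

# duplicated() flags every repeat after the first occurrence
print(df['module_id'].duplicated().tolist())  # [False, True, False]

# Blank out the repeats so each module_id appears only once
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
print(df['module_id'].isna().tolist())  # [False, True, False]
```

The first occurrence of each id survives; only the repeats become NA.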
Many ways to drop columns too.
I've chosen, for lack of more time, to do this:
df.drop(df.columns[2], axis=1, inplace=True)
from https://www.stackvidhya.com/drop-column-in-pandas/
Change df.columns[2] to the Nth column you want to drop (since my working data was different from yours*).
Do this after the merge, so the full code will look like this:
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
#drop the unwanted column before exporting, from https://www.stackvidhya.com/drop-column-in-pandas/
df.drop(df.columns[2], axis=1, inplace=True)
#export new dataframe to excel
df.to_excel('WLM module data_test6-working.xlsx')
Hope I've helped.
I'm just very happy I got you somewhere/did this, for both of our sakes!
Happy you have a working answer.
& if you want to create a new df out of the merged, de-duplicated and dropped-columns df, you can do this:
new = df.drop(df.iloc[: , [1, 2, 7]], axis=1)
from Extracting specific selected columns to new DataFrame as a copy
*So the full code would look something like this (please adjust the column numbers to your needs), which is what I wanted:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
df.loc[df['module_id'].duplicated(), 'module_id'] = pd.NA
new = df.drop(df.iloc[: , [1, 2, 7]], axis=1)
#new=pd.DataFrame(df.drop(df.columns[2], axis=1, inplace=True))
print(new)
#export new dataframe to excel
df.to_excel('WLM module data_test12.xlsx')
new.to_excel('WLM module data_test13.xlsx')
Note: *When I did mine above, I deliberately didn't put any headers on the columns, to make it as generic as possible, so I used iloc to specify the column number initially. (Your original question was not that descriptive or clear, but I got the point.) I think you should include copyable sample data (not screenshots) next time, to make it easier for people and to incentivise the experts on here to engage with the post, plus clearer whys and hows. & SO isn't a free code-writing service, you know, but it was hugely to my benefit also to delve into this.
Could you provide a sample of desired output?
Otherwise, choosing the right type of merge should resolve your issue. Have a look at the documentation, which lists the possible options and their corresponding SQL statements:
https://pandas.pydata.org/docs/reference/api/pandas.merge.html
Regarding the additional columns you have two options:
Again from the documentation: Select the suffixes with the suffixes parameter. To add suffixes only to duplicate columns from df1, you could set them to something like suffixes=("B2", "").
Use df2 within the merge only with the columns needed in the output. E.g.
df = df1.merge(df2[['module_id', 'moderator']], on = 'module_id', how='outer')
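As a toy sketch of the suffixes option (the 'term' and 'moderator' column names are invented here, not taken from your files):

```python
import pandas as pd

# Dummy frames with one overlapping non-key column ('term')
df1 = pd.DataFrame({'module_id': [1, 2], 'term': ['T1', 'T2']})
df2 = pd.DataFrame({'module_id': [1, 2], 'term': ['T1', 'T2'],
                    'moderator': ['Ann', 'Bob']})

# Suffix only the left-hand duplicates; df2's columns keep their plain names
df = df1.merge(df2, on='module_id', how='outer', suffixes=('_B2', ''))
print(df.columns.tolist())  # ['module_id', 'term_B2', 'term', 'moderator']
```

You could then drop the suffixed columns in one go instead of repeating data.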
& further to the 3 successful and working code snippets below, each one answering a part of your question,
you could do the whole thing using iloc, which is what I prefer.
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("Moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#do/apply duplicated() on one of the columns. (see more about duplicated in my post below)
df.loc[df[df.iloc[:0,8].name].duplicated(), 'module_id'] = pd.NA
# drop the columns you don't want and save to a new df/create a new sheet
new = df.drop(df.iloc[: , [11, 13, 14, 15]], axis=1)
#new=pd.DataFrame(df.drop(df.columns[2], axis=1, inplace=True))
print(new)
#export new dataframe to excel
df.to_excel('WLM module data_test82.xlsx')
new.to_excel('WLM module data_test83.xlsx') #<--this is your data after dropping the columns. Wanted to do it separately so you can see what is happening/know how to. The default is just to change/modify the old df.
print(df.iloc[:0,8].name) # to show you how to get the header name from iloc
print(df.iloc[:0,8]) # to show you what iloc gives on its own
"term_y", "semester_y", "credits_y" and "students_y" are columns 12, 14, 15 & 16, the ones you want to remove, so I've done that here.
iloc starts from 0, so new = df.drop(df.iloc[: , [11, 13, 14, 15]], axis=1), like in the 3rd piece of code before, does what you wanted. All you have to do is change the column numbers it refers to. (If you'd given us dummy text replicating your use case instead of a screenshot, we would have copied that to work with instead of writing it out ourselves.) Post Edit 14:48 24/04/22 - Just done that here for you. Just copy the code and run.
You have Module (col 3), Module_Id (col 4) and module name (col 13) in your data. In my dummy data that was column 9 (iloc 8); as said, I didn't have time to replicate it perfectly, just the idea. But I think it's the module_id column (column 9, iloc 8) you want: not just to merge on, but also to apply .duplicated() to. So you can run the code as is, if that's the case.
If it's not, just change df.loc[df[df.iloc[:0,8].name].duplicated(), 'module_id'] = pd.NA from number 8 to 2, 3 or 12 for your use-case/columns.
I think I prefer this answer, because counting columns by number frees you from having to call a column by name, and allows for a different type of automation. You can still use contains or a regex to locate and work with column data later on, but this is another method with its own power over having to rely on names; more precise, I feel.
Literally plug this code in, run it, play with it & let me know how it goes. It all works for me.
Thanks everyone for your help, this is my final code which seems to work:
# Import pandas library
import pandas as pd
#import excel files
df1 = pd.read_excel("B2 teaching.xlsx")
df2 = pd.read_excel("moderation.xlsx")
#merge dataframes 1 and 2
df = df1.merge(df2, on = 'module_id', how='outer')
#drop columns not needed
df.drop('term_y', inplace=True, axis=1)
df.drop('semester_y', inplace=True, axis=1)
df.drop('credits_y', inplace=True, axis=1)
df.drop('n_students_y', inplace=True, axis=1)
#drop duplicated rows
df.loc[df['module_name'].duplicated(), 'module_name'] = pd.NA
df.loc[df['moderation_wl'].duplicated(), 'moderation_wl'] = pd.NA
#export new dataframe to excel
df.to_excel('output.xlsx')
Related
I have a question about pd.concat. I get some weird results and I don't understand why.
Let me start with a simple example (this should also show what I want to achieve):
import pandas as pd
df1 = pd.DataFrame([[1,2,3],[7,6,5]], columns = ["A","B","C"])
print("DF1: \n", df1)
df2 = pd.DataFrame([[4,5,6]], columns = ["A","B","C"])
print("DF2: \n", df2)
df3 = pd.concat([df1, df2], ignore_index = True)
print("Concat DF1 and DF2: \n",df3)
Now in my actual program I have DataFrames like this:
(screenshot not included)
When I apply the concat function, I get this:
(screenshot not included)
It makes zero sense to me. What can possibly be the reason?
P.S. It's not urgent, because I found a workaround but this bothers me and makes me a bit angry too.
Use the following code for connecting two DataFrames based on their rows:
Code1) self.teste_df = (self.teste_df).append(test, ignore_index=True)  # note: DataFrame.append was removed in pandas 2.0
Code2) pd.concat([self.teste_df, test], axis=0, ignore_index=True)
I made them both a list, and combined the lists with +.
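That list-based workaround presumably looked something like this (a sketch of the idea, not the asker's actual code):

```python
import pandas as pd

df1 = pd.DataFrame([[1, 2, 3], [7, 6, 5]], columns=['A', 'B', 'C'])
df2 = pd.DataFrame([[4, 5, 6]], columns=['A', 'B', 'C'])

# Turn both frames into lists of rows, combine them with +, and rebuild
rows = df1.values.tolist() + df2.values.tolist()
df3 = pd.DataFrame(rows, columns=df1.columns)
print(len(df3))  # 3
```

pd.concat with ignore_index=True gives the same result without the round-trip through lists.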
I'm trying to create a DataFrame out of two existing ones. I read the titles of some articles on the web; the first column is the title and the ones after are timestamps.
I want to concat both data frames but leave out the rows with the same title (column one).
I tried
df = pd.concat([df1,df2]).drop_duplicates().reset_index(drop=True)
but because the other columns may not be exactly the same all the time, I need to leave out every row that has the same first column. How would I do this?
btw sorry for not knowing all the right terms for my problem
You should first remove the duplicate rows from df2 and then concat it with df1:
df = pd.concat([df1, df2[~df2.title.isin(df1.title)]]).reset_index(drop=True)
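A minimal demonstration with invented titles and timestamps:

```python
import pandas as pd

# 'b' appears in both frames, with different timestamp data
df1 = pd.DataFrame({'title': ['a', 'b'], 'ts': [1, 2]})
df2 = pd.DataFrame({'title': ['b', 'c'], 'ts': [9, 3]})

# Keep only the df2 rows whose title is not already present in df1
df = pd.concat([df1, df2[~df2.title.isin(df1.title)]]).reset_index(drop=True)
print(df['title'].tolist())  # ['a', 'b', 'c']
print(df['ts'].tolist())     # [1, 2, 3]
```

Note that df1's version of the 'b' row wins; df2's copy (ts=9) is discarded.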
This probably solves your problem:
import pandas as pd
import numpy as np
df=pd.DataFrame(np.arange(2*5).reshape(2,5))
df2=pd.DataFrame(np.arange(2*5).reshape(2,5))
df.columns=['blah1','blah2','blah3','blah4','blah']
df2.columns=['blah5','blah6','blah7','blah8','blah']
#drop from df2 any column whose name also appears in df
#(iterate over a copied list so dropping doesn't break the iteration)
for col in list(df2.columns):
    if col in df.columns:
        df2 = df2.drop(col, axis=1)
print(pd.concat([df, df2], axis =1))
I wrote the following code to in python to read multiple csv files into pandas in separate dfs:
dfs = []
for f in filenames:
    df = pd.read_csv(f, encoding='unicode_escape')
    dfs.append(df)
It worked great, and I could index the dfs object I created to access the different dataframes like so:
dfs[0], dfs[1], etc
However, the dataframes have NaN values in them, and I am trying to write a second loop that will iterate through and drop them. I was sure this would work, however, it did not:
for df in dfs:
    df.dropna()
The cell ran, but when I called dfs[0], the NaNs were still there. Could this be because the dataframes are in a list? Note, I want to drop rows with Nans, not columns.
I would appreciate any help. Thanks!
You need to assign the result back:
for i in range(len(dfs)):
    dfs[i] = dfs[i].dropna()
Or use inplace=True:
for df in dfs:
    df.dropna(inplace=True)
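To see why the bare call does nothing, here is a small reproduction (toy data):

```python
import pandas as pd
import numpy as np

# One toy dataframe in a list, containing a NaN row
dfs = [pd.DataFrame({'a': [1.0, np.nan, 3.0]})]

dfs[0].dropna()           # result is discarded; the list entry is unchanged
print(len(dfs[0]))        # 3

dfs[0] = dfs[0].dropna()  # assigning back actually updates the list entry
print(len(dfs[0]))        # 2
```

dropna() returns a new frame by default rather than modifying the original, which is why the loop in the question had no visible effect.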
I am operating with Python 2.7 and I wrote a script that should take the names of two .xlsx files, use pandas to convert them into two dataframes, and then concatenate them.
The two files under consideration have the same rows and different columns.
Basically, I have these two Excel files (screenshot not included):
I would like to keep the same rows and just unite the columns.
The code is the following:
import pandas as pd
file1 = 'file1.xlsx'
file2 = 'file2.xlsx'
sheet10 = pd.read_excel(file1, sheet_name = 0)
sheet20 = pd.read_excel(file2, sheet_name = 0)
conc1 = pd.concat([sheet10, sheet20], sort = False)
output = pd.ExcelWriter('output.xlsx')
conc1.to_excel(output, 'Sheet 1')
output.save()
Instead of doing what I expected (given the examples I read online), the output becomes something like this (screenshot not included):
Does anyone know how I could improve my script?
Thank you very much.
The best answer here really depends on the exact shape of your data. Based on the example you have provided it looks like the data is indexed identically between the two dataframes with differing column headers that you want preserved. If this is the case this would be the best solution:
import pandas as pd
file1 = 'file1.xlsx'
file2 = 'file2.xlsx'
sheet10 = pd.read_excel(file1, sheet_name = 0)
sheet20 = pd.read_excel(file2, sheet_name = 0)
conc1 = sheet10.merge(sheet20, how="left", left_index=True, right_index=True)
output = pd.ExcelWriter('output.xlsx')
conc1.to_excel(output, sheet_name='Sheet 1', index=False)
output.save()
Since there is a direct match between the number of rows in the two initial dataframes it doesn't really matter if a left, right, outer, or inner join is used. In this example I used a left join.
If the rows in the two data frames do not perfectly line up though, the join method selected can have a huge impact on your output. I recommend looking at pandas documentation on merge/join/concatenate before you go any further.
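For instance, on two small frames that share a row index but hold different columns (dummy data, not the actual spreadsheets), an index-on-index join keeps each row's values side by side:

```python
import pandas as pd

left = pd.DataFrame({'A': [1, 2]}, index=[0, 1])
right = pd.DataFrame({'B': [3, 4]}, index=[0, 1])

# Join on the shared index rather than on any column
merged = left.merge(right, how='left', left_index=True, right_index=True)
print(merged.shape)            # (2, 2)
print(merged.loc[0].tolist())  # [1, 3]
```

With identical indexes on both sides, left/right/inner/outer all give this same result.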
To get the expected output using pd.concat, the column names in both dataframes should be the same. Here's how to do it:
# Create a 1:1 mapping of sheet10 and sheet20 columns
cols_mapping = dict(zip(sheet20.columns, sheet10.columns))
# Rename the columns in sheet20 to match with that of sheet10
sheet20_renamed = sheet20.rename(cols_mapping, axis=1)
concatenated = pd.concat([sheet10, sheet20_renamed])
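With stand-in frames (since sheet10/sheet20 really come from the Excel files), the renaming step looks like this; I've added ignore_index=True here so the row labels don't repeat:

```python
import pandas as pd

# Invented stand-ins for the two sheets, with differing column names
sheet10 = pd.DataFrame({'x1': [1, 2], 'x2': [3, 4]})
sheet20 = pd.DataFrame({'y1': [5, 6], 'y2': [7, 8]})

# Map sheet20's column names onto sheet10's, position by position
cols_mapping = dict(zip(sheet20.columns, sheet10.columns))
sheet20_renamed = sheet20.rename(cols_mapping, axis=1)
concatenated = pd.concat([sheet10, sheet20_renamed], ignore_index=True)
print(concatenated.columns.tolist())  # ['x1', 'x2']
print(len(concatenated))              # 4
```

This stacks the rows under one set of headers, which only makes sense if the columns really do line up positionally.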
I have data files which are converted to pandas dataframes, some of which share column names while others share a time-series index, and I wish to combine them all into one dataframe, matching on both column and index wherever possible. Since there is no sequence in the naming, they appear in random order for concatenation. If two dataframes with different columns are concatenated along axis=1 it works well, but if the resulting dataframe is then combined with a new df whose column name matches one of the earlier merged dataframes, the concat fails. For example, with these data files:
import pandas as pd
df1 = pd.read_csv('0.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
df2 = pd.read_csv('1.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
df3 = pd.read_csv('2.csv', index_col=0, parse_dates=True, infer_datetime_format=True)
data1 = pd.DataFrame()
file_list = [df1, df2, df3] # fails
# file_list = [df2, df3,df1] # works
for fn in file_list:
    if data1.empty == True or fn.columns[1] in data1.columns:
        data1 = pd.concat([data1, fn])
    else:
        data1 = pd.concat([data1, fn], axis=1)
I get ValueError: Plan shapes are not aligned when I try to do that. In my case there is no way to first load all the DataFrames and check their column names. If I could, I would first combine all dfs with the same column names, and only then concat the resulting dataframes with different column names along axis=1, which I know always works. However, a solution which requires preloading all the DataFrames and rearranging the sequence of concatenation is not possible in my case (it was only done for the working example above). I need the flexibility that, in whatever sequence the information arrives, it can be concatenated onto the larger dataframe data1. Please let me know if you have a suitable approach to suggest.
If you go through the loop step by step, you can see that in the first iteration it goes into the if, so data1 is equal to df1. In the second iteration it goes to the else, since data1 is not empty and 'Temperature product barrel ValueY' is not in data1.columns.
After the else branch, data1 has some duplicated column names, and in every row of those duplicated columns one of the two values is NaN while the other is a float. This is the reason why pd.concat() fails.
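A small sketch of the duplicated-column situation (toy frames; note the exact error message you hit later depends on your pandas version):

```python
import pandas as pd

a = pd.DataFrame({'t': [1.0, 2.0]})                 # index 0, 1
b = pd.DataFrame({'t': [3.0, 4.0]}, index=[2, 3])   # index 2, 3

# axis=1 concat on non-overlapping indexes leaves two columns both named 't',
# each half-filled with NaN
wide = pd.concat([a, b], axis=1)
print(wide.columns.tolist())          # ['t', 't']
print(int(wide.isna().sum().sum()))   # 4
```

A subsequent row-wise concat onto a frame with duplicated labels like this is what blows up.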
You can aggregate the duplicate columns before you try to concatenate to get rid of it:
import numpy as np

for fn in file_list:
    if data1.empty == True or fn.columns[1] in data1.columns:
        # new: collapse any duplicated column names before the row-wise concat
        data1 = data1.groupby(data1.columns, axis=1).agg(np.nansum)
        data1 = pd.concat([data1, fn])
    else:
        data1 = pd.concat([data1, fn], axis=1)
After that, you would get
data1.shape
(30, 23)