How to read separate Excel sheets into separate DataFrames? - python

I have an Excel file with 13 tabs, and I want to write a function that takes specified sheets from the file, converts them into separate DataFrames, and bundles them into a list. In this case, I want to take the sheets labeled 'tblProviderDetails', 'tblSubmissionStatus', and 'Data Validation Ref Data', convert them into DataFrames, and make a list. The reason I want the dfs in a list is that I eventually want to take the input dfs and return a dictionary, which will then be used to create a YAML file.
This is ultimately what I want:
dfs = [ 'tblProviderDetails', 'tblSubmissionStatus', 'Data Validation Ref Data']
The reason that I want to use a user-defined function is that I want the flexibility to call any sheet and any number of sheets into a list.
I was able to write a function that converts single specified sheets to dataframes, but I'm not sure how to call any number of sheets in the Excel file or create a list within the function. This is as far as I've gotten:
def read_excel(path, sheet_name, header):
    dfs = pd.read_excel(path, sheet_name=sheet_name, header=header)
    return dfs

df1 = read_excel(path=BASEDIR, sheet_name='tblProviderDetails', header=2)
df2 = read_excel(path=BASEDIR, sheet_name='tblSubmissionStatus', header=2)
df3 = read_excel(path=BASEDIR, sheet_name='Data Validation Ref Data', header=2)
Thank you for your help.

There are multiple ways to do this, but perhaps the simplest is to first get all the sheet names, then loop over them, load each sheet into a DataFrame, and append it to the list.
dfList = []

def read_excel(path, h):
    xls = pd.ExcelFile(path)
    # Now you can access all sheet names in the file
    sheetsList = xls.sheet_names  # ['sheet1', 'sheet2', ...]
    for sheet in sheetsList:
        dfList.append(pd.read_excel(path, sheet_name=sheet, header=h))

read_excel('book.xlsx', 2)
print(dfList)
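If you only want specific sheets rather than all of them, the same loop idea can be restricted to a list of names, which matches what the question asks for. A minimal sketch (file name, sheet names, and header row in the usage comment are hypothetical):

```python
import pandas as pd

def read_sheets(path, sheet_names, header=0):
    # Read each requested sheet into its own DataFrame and collect them in a list.
    return [pd.read_excel(path, sheet_name=name, header=header)
            for name in sheet_names]

# Hypothetical usage with the sheet names from the question:
# dfs = read_sheets('book.xlsx',
#                   ['tblProviderDetails', 'tblSubmissionStatus',
#                    'Data Validation Ref Data'],
#                   header=2)
```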

You can pass a list of sheet names and/or sheet numbers to the sheet_name parameter.
def read_excel(path, sheet_name, header):
    dfs = pd.read_excel(path, sheet_name=sheet_name, header=header)
    return dfs

dfs = read_excel(path=BASEDIR,
                 sheet_name=['tblProviderDetails', 'tblSubmissionStatus',
                             'Data Validation Ref Data'],
                 header=2)

Note that when sheet_name is a list, read_excel returns a dict of DataFrames keyed by sheet name.
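Since a list passed to sheet_name yields a dict keyed by sheet name, getting the list of DataFrames the question asks for is one extra step. A small sketch (the function name is mine, not from the question):

```python
import pandas as pd

def sheets_to_list(path, sheet_names, header=0):
    # A list for sheet_name makes read_excel return {sheet name: DataFrame}
    dfs_by_name = pd.read_excel(path, sheet_name=sheet_names, header=header)
    # Preserve the requested order and return plain DataFrames
    return [dfs_by_name[name] for name in sheet_names]
```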

Related

Multiple sheets of an Excel workbook into different dataframes using Pandas

I have an Excel workbook which has 5 sheets containing data.
I want each sheet to be a different dataframe.
I tried the code below for one sheet of my workbook:
df = pd.read_excel("path",sheet_name = ['Product Capacity'])
df
But this returns a dictionary containing the sheet, not a dataframe.
I need a data frame.
Please suggest code that will return a dataframe.
You get a dictionary because you passed sheet_name a list. If you want separate dataframes without a dictionary, pass the sheet name as a plain string, or read the individual sheets:
with pd.ExcelFile('data.xlsx') as xlsx:
    prod_cap = pd.read_excel(xlsx, sheet_name='Product Capacity')
    load_cap = pd.read_excel(xlsx, sheet_name='Load Capacity')
    # and so on
But you can also load all sheets and use a dict:
dfs = pd.read_excel('data.xlsx', sheet_name=None)
# dfs['Product Capacity']
# dfs['Load Capacity']

Read all Excel sheets except one of them

I'm using this line code to get all sheets from an Excel file:
excel_file = pd.read_excel('path_file',skiprows=35,sheet_name=None)
sheet_name=None option gets all the sheets.
How do I get all sheets except one of them?
If all you want to do is exclude one of the sheets, there is not much to change from your base code.
Assume file.xlsx is an excel file with multiple sheets, and you want to skip 'Sheet1'.
One possible solution is as follows:
import pandas as pd
# Returns a dictionary with key:value := sheet_name:df
xlwb = pd.read_excel('file.xlsx', sheet_name=None)
unwanted_sheet = 'Sheet1'
# generator expression that filters out the unwanted sheet;
# all other sheets are kept in df_generator
df_generator = (items for keys, items in xlwb.items()
                if keys != unwanted_sheet)
# get to the actual dataframes
for df in df_generator:
    print(df.head())
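The same filtering can be packaged as a small helper that returns a dict of the kept sheets, so the sheet names stay available as keys. A sketch (function name and default sheet name are mine):

```python
import pandas as pd

def read_all_but(path, unwanted='Sheet1'):
    # Read every sheet into a dict, then drop the unwanted one.
    all_sheets = pd.read_excel(path, sheet_name=None)
    return {name: df for name, df in all_sheets.items() if name != unwanted}
```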

Pandas concat dataframe per excel file

I have code that reads multiple files inside a directory, and every Excel file has more than 10 sheets. I need to exclude some sheets from every file and extract the others.
I get all the data I need, but the problem is that every sheet in each Excel file creates a new DataFrame, even though I used concat, so when I save to JSON only the last DataFrame per file is saved instead of the whole data.
Here's my code:
excluded_sheet = ['Sheet 2', 'Sheet 6']
for index, xls_path in enumerate(file_paths):
    data_file = pd.ExcelFile(xls_path)
    sheets = [sheet for sheet in data_file.sheet_names if sheet not in excluded_sheet]
    for sheet_name in sheets:
        file = xls_path.rfind(".")
        head, tail = os.path.split(xls_path[1:file])
        df = pd.concat([pd.read_excel(xls_path, sheet_name=sheet_name, header=None)], ignore_index=True)
        df.insert(loc=0, column='sheet name', value=sheet_name)
        df.to_json(f"{json_folder_path}{tail}.json", orient='records', indent=4)
I didn't use sheet_name=None because I need to read the sheet name and add it to the column values.
Data status of my DataFrame:
I get many DFs because every sheet creates a new DF, instead of only 2 DFs since I have 2 files inside the directory. Thanks for your help.
You can use a list comprehension to join all sheets into one DataFrame:
...
...
sheets = [sheet for sheet in data_file.sheet_names if sheet not in excluded_sheet]
file = xls_path.rfind(".")
head, tail = os.path.split(xls_path[1:file])
dfs = [pd.read_excel(xls_path, sheet_name=sheet_name, header=None) for sheet_name in sheets]
df = pd.concat(dfs, keys=sheets)
df = df.reset_index(level=1, drop=True).rename_axis('sheet name').reset_index()
df.to_json(f"{json_folder_path}{tail}.json", orient='records', indent=4)
Or create a helper list dfs, append the DataFrames to it inside the loop, and call concat once outside the loop:
...
...
sheets = [sheet for sheet in data_file.sheet_names if sheet not in excluded_sheet]
dfs = []
for sheet_name in sheets:
    file = xls_path.rfind(".")
    head, tail = os.path.split(xls_path[1:file])
    df = pd.read_excel(xls_path, sheet_name=sheet_name, header=None)
    df.insert(loc=0, column='sheet name', value=sheet_name)
    dfs.append(df)
df1 = pd.concat(dfs, ignore_index=True)
df1.to_json(f"{json_folder_path}{tail}.json", orient='records', indent=4)
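Note that sheet_name=None can still be used here even though the sheet names are needed: the returned dict's keys are the sheet names, so they can be recorded in a column. A self-contained sketch of that variant (function name is mine):

```python
import pandas as pd

def concat_sheets(path, excluded=()):
    # sheet_name=None returns {sheet name: DataFrame}; the dict keys give us
    # the sheet names, so we can still record them in a column.
    sheets = pd.read_excel(path, sheet_name=None)
    dfs = [df.assign(**{'sheet name': name})
           for name, df in sheets.items() if name not in excluded]
    return pd.concat(dfs, ignore_index=True)
```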

Using Pandas (Python) with Excel to loop through multiple worksheets to return all rows where a value in a list appears in a column

I have a list of values; if any of them appear in the column 'Books', I would like that row to be returned.
I think I have achieved this with the below code:
def return_Row():
    file = 'TheFile.xls'
    df = pd.read_excel(file)
    listOfValues = ['A', 'B', 'C']
    return df.loc[df['Column'].isin(listOfValues)]
This currently only seems to work on the first worksheet. As there are multiple worksheets in 'TheFile.xls', how would I go about looping through them to return any rows where listOfValues is found in the 'Books' column of all the other sheets?
Any help would be greatly appreciated.
Thank you
The thing is, pd.read_excel() returns a dataframe for the first sheet only if you didn't specify the sheet_name argument. If you want to get all the sheets in excel file without specifying their names, you can pass None to sheet_name as follows:
df = pd.read_excel(file, sheet_name=None)
This will give you a different dataframe for each sheet on which you can loop and do whatever you want. For example you can append the results that you need to a list and return the list:
def return_Row():
    file = 'TheFile.xls'
    results = []
    dfs = pd.read_excel(file, sheet_name=None)
    listOfValues = ['A', 'B', 'C']
    for df in dfs.values():
        results.append(df.loc[df['Column'].isin(listOfValues)])
    return results
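If a single combined DataFrame is preferable to a list of per-sheet results, the matches can be stacked with concat. A sketch (function name is mine; the column would be 'Books' in the question's case):

```python
import pandas as pd

def matching_rows(path, column, values):
    # Collect the matching rows from every sheet and stack them together.
    dfs = pd.read_excel(path, sheet_name=None)
    matches = [df.loc[df[column].isin(values)] for df in dfs.values()]
    return pd.concat(matches, ignore_index=True)
```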

Convert excel file with many sheets (with spaces in the name of the sheet) into a pandas data frame

I would like to convert an Excel file to a pandas dataframe. All the sheet names have spaces in them, for instance 'part 1 of 22', 'part 2 of 22', and so on. In addition, the first column is the same for all the sheets.
I would like to convert this Excel file to a single dataframe. However, I don't know what happens to the names in Python. I was able to import the sheets, but I don't know the names of the resulting dataframes.
After this I would like to use another for loop and pd.merge() in order to create a single dataframe.
for sheet_name in Matrix.sheet_names:
    sheet_name = pd.read_excel(Matrix, sheet_name)
    print(sheet_name.info())
Using only the code snippet you have shown, each sheet (each DataFrame) will be assigned to the variable sheet_name. Thus, this variable is overwritten on each iteration and you will only have the last sheet as a DataFrame assigned to that variable.
To achieve what you want to do you have to store each sheet, loaded as a DataFrame, somewhere, a list for example. You can then merge or concatenate them, depending on your needs.
Try this:
all_my_sheets = []
for name in Matrix.sheet_names:
    df = pd.read_excel(Matrix, name)
    all_my_sheets.append(df)
Or, even better, using list comprehension:
all_my_sheets = [pd.read_excel(Matrix, sheet_name) for sheet_name in Matrix.sheet_names]
You can then concatenate them into one DataFrame like this:
final_df = pd.concat(all_my_sheets, sort=False)
You might consider using the openpyxl package:
from openpyxl import load_workbook
import pandas as pd

wb = load_workbook(filename=file_path, read_only=True)
all_my_sheets = wb.sheetnames

# Assuming your sheets have the same headers
records = []
header = None
for name in all_my_sheets:
    ws = wb[name]
    rows = ws.iter_rows(values_only=True)
    if header is None:
        header = next(rows)  # take the column names from the first sheet
    else:
        next(rows)           # make sure you don't duplicate the header
    records.extend(rows)
wb.close()

# Create your df
df = pd.DataFrame(records, columns=header)
It may be easiest to call read_excel() once, passing the sheet names as a list, and save the contents.
So, the first step would look like this:
dfs = pd.read_excel("file.xlsx", sheet_name=["Sheet 1", "Sheet 2", "Sheet 3"])
Note that the sheet names you use in the list should be the same as those in the Excel file, and that this call returns a dict of DataFrames keyed by sheet name. Then, if you wanted to vertically concatenate these sheets, you would just call:
final_df = pd.concat(dfs.values(), axis=0)
Note that this solution would result in a final_df that includes column headers from all three sheets. So, ideally they would be the same. It sounds like you want to merge the information, which would be done differently; we can't help you with the merge without more information.
I hope this helps!
