Renaming read data in a for loop - python

I need to read data from an Excel file that has a large number of sheets. Each sheet holds a big dataset from a different year.
The tabs are named after the year of data collection. The data is read as follows:
Data_2000 = pd.read_excel('Database.xlsx',sheet_name = 2000)
Because there are a lot of sheets, I want to use a for loop to read the data as follows:
import pandas as pd
import xlrd
wb = xlrd.open_workbook('Database.xlsx', on_demand=True)
SheetName = wb.sheet_names() # Reading the name of the sheets
for i in SheetName:
    Data = pd.read_excel('Database.xlsx', sheet_name=i)
The problem is that I cannot rename the resulting data frame, i.e. Data, inside this loop so that it becomes Data_2000, Data_2001, ...

Why don't you just store your variables in a dictionary, like so:
import pandas as pd
import xlrd
wb = xlrd.open_workbook('Database.xlsx', on_demand=True)
SheetName = wb.sheet_names() # Reading the name of the sheets
data = dict()
for i in SheetName:
    data[f"Data_{i}"] = pd.read_excel('Database.xlsx', sheet_name=i)
You can then access the data you want like this:
data["Data_2000"]

One way to handle this is to just use dicts.
# First set up a dict
sheets = {}
# Then when you run the loop you add entries to that dict
for name in SheetName:
    data = pd.read_excel('Database.xlsx', sheet_name=name)
    year = name
    sheets[year] = data
You can then pull the data for a certain year out of the dict by key, in the form called_data = sheets[year].
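For example, assuming one of the tabs is named 2000 as in the question, you would retrieve it like this:
called_data = sheets['2000']  # keys are the original sheet/tab names
print(called_data.head())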

Thanks for all of the suggestions. I also tried the following approach, and it worked well and delivered what I expected:
import pandas as pd
import xlrd
wb = xlrd.open_workbook('Data.xlsx', on_demand=True)
SheetName = wb.sheet_names() # Reading the name of the sheets
for i in SheetName:
    globals()['Data_%s' % i] = pd.read_excel('Data.xlsx', sheet_name=i)
It produces a separate DataFrame for each sheet, keeping Data_ at the beginning of the name and putting the sheet name after that (e.g., Data_2000, Data_2001, ...).

Related

How to use an element as the dataframe name when looping over a list

I need to read data from several sheets in an xlsx file and save each one as a dataframe with the same name as the sheet. Here is the code I use. It can read data from the different sheets; however, all dataframes end up named temp. How should I change it? Thanks.
import pandas as pd
sheet_name_list = ['sheet1','sheet2','sheet3']
for temp in sheet_name_list:
    temp = pd.read_excel("data_spreadsheet.xlsx", sheet_name=temp)
You can use a dictionary:
pd_dict = {}
for temp in sheet_name_list:
    pd_dict[temp] = pd.read_excel("data_spreadsheet.xlsx", sheet_name=temp)
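Each dataframe is then available under its sheet name; a quick usage example with the names from the question's list:
df_sheet1 = pd_dict['sheet1']  # access by sheet name
print(df_sheet1.shape)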

Read CSV sheet data and create a new one

I have a CSV file which has multiple sheets in it. I want to read it sheet by sheet, filter some data, and create a CSV file in the same format. How can I do that? Please suggest. I was trying it through pandas.ExcelReader, but it's not working for a CSV file.
You can use the following code for this; it may help!
import pandas as pd
def read_excel_sheets(xls_path):
    """Read all sheets of an Excel workbook and return a single DataFrame"""
    print(f'Loading {xls_path} into pandas')
    xl = pd.ExcelFile(xls_path)
    df = pd.DataFrame()
    columns = None
    for idx, name in enumerate(xl.sheet_names):
        print(f'Reading sheet #{idx}: {name}')
        sheet = xl.parse(name)
        if idx == 0:
            # Save column names from the first sheet to match for append
            columns = sheet.columns
        sheet.columns = columns
        # Assume index of existing data frame when appended
        df = df.append(sheet, ignore_index=True)
    return df
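A quick usage sketch: call the function on the workbook and write the combined frame back out as CSV, which covers the save-to-CSV part of the question. The file name, filter column and value below are hypothetical:
combined = read_excel_sheets('data_spreadsheet.xlsx')  # hypothetical file name
filtered = combined[combined['some_column'] == 'some_value']  # hypothetical filter
filtered.to_csv('filtered_output.csv', index=False)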

Make a filter of Excel data in Python

I want to make a filter or exception in my code for the Excel file.
I have this table in Excel:
In my result I only want the machine 'S9401-1'. How can I get this?
This is my code
import xlrd
#First open the workbook
wb = xlrd.open_workbook('Book1.xlsx')
#Then select the sheet. Replace sheet1 with the name of your sheet
sheet = wb.sheet_by_name('connx 94')
#Then get the values of each column. Skip the first item, which is the header
machine = sheet.cell_value(1,0)
alid = sheet.cell_value(1,1)
descripcion = sheet.cell_value(1,3)
result = [machine, alid, descripcion]
print(result)
Using only the xlrd package, you could brute-force it like this:
import xlrd
wb = xlrd.open_workbook(r'c:\debug\py.xlsx')
sheet = wb.sheet_by_name('Sheet1')
def filterdata(sh, ID):
    vals = sh.row_values
    data = [[vals(r, 0)[1], vals(r, 0)[3]] for r in range(sh.nrows) if vals(r, 0)[0] == ID]
    return data
print(filterdata(sheet, 'S9401-1'))
Since it is a function, you can call it with different IDs:
print(filterdata(sheet,'S9401-1'))
print(filterdata(sheet,'S9401-3')) # should return an empty list
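For comparison, here is a minimal pandas-based sketch of the same filter. The column names Machine, ALID and Descripcion are assumptions, since the real headers are only visible in the question's table:
import pandas as pd
df = pd.read_excel('Book1.xlsx', sheet_name='connx 94')
filtered = df[df['Machine'] == 'S9401-1']  # keep only rows for that machine
result = filtered[['Machine', 'ALID', 'Descripcion']].values.tolist()
print(result)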

Looping through a folder to merge several excel sheets into one column

I have several workbooks, each with three sheets. I want to loop through each workbook and merge all the data from sheet_1 into a new workbook_1 file, sheet_2 into workbook_2 file & sheet_3 into workbook_3.
As far as I can tell the script below does everything I need, except rather than appending the data, it overwrites the data from the previous iteration.
For the sake of parsimony I've shortened, cleaned & simplified my script, but I'm happy to share the full script if needed.
import pandas as pd
import glob
search_dir = '/Users/PATH/*.xlsx'
sheet_names = ['sheet_1', 'sheet_2', 'sheet_3']
def a_joiner(sheet):
    for loop_x in glob.glob(search_dir):
        try:
            if sheet == 'sheet_1':
                id_file = pd.ExcelFile(loop_x)
                df_1 = id_file.parse(sheet, header=None)
                writer = pd.ExcelWriter('/Users/PATH/%s.xlsx' % (sheet), engine='xlsxwriter')
                df_1.to_excel(writer)
                writer.save()
            elif sheet == 'sheet_2':
                pass  # do same as above
            else:
                pass  # and do same as above again
        except Exception as e:
            print('Error:', e)
for sheet in sheet_names:
    a_joiner(sheet)
You can also easily append the data like this:
import os
import pandas as pd
df = []
for f in ['c:\\file1.xls', 'c:\\file2.xls']:
    data = pd.read_excel(f, 'Sheet1').iloc[:-2]
    data.index = [os.path.basename(f)] * len(data)
    df.append(data)
df = pd.concat(df)
From:
Using pandas Combining/merging 2 different Excel files/sheets
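Applied back to the question's setup, a hedged sketch that avoids the overwriting: collect the frames for each sheet across all workbooks first, then write a single output file per sheet. Paths and sheet names follow the question; this is a sketch, not a tested rewrite of the full script:
import glob
import pandas as pd
search_dir = '/Users/PATH/*.xlsx'
sheet_names = ['sheet_1', 'sheet_2', 'sheet_3']
for sheet in sheet_names:
    frames = []
    for path in glob.glob(search_dir):
        # Accumulate this sheet from every workbook instead of overwriting
        frames.append(pd.read_excel(path, sheet_name=sheet, header=None))
    if frames:
        combined = pd.concat(frames, ignore_index=True)
        # One output workbook per sheet name
        combined.to_excel('/Users/PATH/%s.xlsx' % sheet, index=False, header=False)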

How to concatenate three Excel xlsx files using Python?

Hello, I would like to concatenate three Excel xlsx files using Python.
I have tried openpyxl, but I don't know which function could help me append three worksheets into one.
Do you have any ideas how to do that?
Thanks a lot.
Here's a pandas-based approach. (It's using openpyxl behind the scenes.)
import pandas as pd
# filenames
excel_names = ["xlsx1.xlsx", "xlsx2.xlsx", "xlsx3.xlsx"]
# read them in
excels = [pd.ExcelFile(name) for name in excel_names]
# turn them into dataframes
frames = [x.parse(x.sheet_names[0], header=None, index_col=None) for x in excels]
# delete the first row for all frames except the first
# i.e. remove the header row -- assumes it's the first
frames[1:] = [df[1:] for df in frames[1:]]
# concatenate them..
combined = pd.concat(frames)
# write it out
combined.to_excel("c.xlsx", header=False, index=False)
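If the files share a header row, a shorter variant (under that assumption) lets pandas handle the headers directly:
import pandas as pd
excel_names = ["xlsx1.xlsx", "xlsx2.xlsx", "xlsx3.xlsx"]
frames = [pd.read_excel(name) for name in excel_names]  # the header row is parsed once per file
combined = pd.concat(frames, ignore_index=True)
combined.to_excel("c.xlsx", index=False)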
I'd use xlrd and xlwt. Assuming you literally just need to append these files (rather than doing any real work on them), I'd do something like: Open up a file to write to with xlwt, and then for each of your other three files, loop over the data and add each row to the output file. To get you started:
import xlwt
import xlrd
wkbk = xlwt.Workbook()
outsheet = wkbk.add_sheet('Sheet1')
xlsfiles = [r'C:\foo.xlsx', r'C:\bar.xlsx', r'C:\baz.xlsx']
outrow_idx = 0
for f in xlsfiles:
    # This is all untested; essentially just pseudocode for concept!
    insheet = xlrd.open_workbook(f).sheets()[0]
    for row_idx in range(insheet.nrows):
        for col_idx in range(insheet.ncols):
            outsheet.write(outrow_idx, col_idx,
                           insheet.cell_value(row_idx, col_idx))
        outrow_idx += 1
wkbk.save(r'C:\combined.xls')
If your files all have a header line, you probably don't want to repeat that, so you could modify the code above to look more like this:
firstfile = True  # Is this the first sheet?
for f in xlsfiles:
    insheet = xlrd.open_workbook(f).sheets()[0]
    for row_idx in range(0 if firstfile else 1, insheet.nrows):
        pass  # processing; etc
    firstfile = False  # We're done with the first sheet.
When I combine excel files (mydata1.xlsx, mydata2.xlsx, mydata3.xlsx) for data analysis, here is what I do:
import pandas as pd
import numpy as np
import glob
all_data = pd.DataFrame()
for f in glob.glob('myfolder/mydata*.xlsx'):
    df = pd.read_excel(f)
    all_data = all_data.append(df, ignore_index=True)
Then, when I want to save it as one file:
writer = pd.ExcelWriter('mycollected_data.xlsx', engine='xlsxwriter')
all_data.to_excel(writer, sheet_name='Sheet1')
writer.save()
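Note that DataFrame.append was deprecated and later removed in newer pandas releases; on current versions the same idea can be written with a list and pd.concat, roughly like this:
import glob
import pandas as pd
frames = [pd.read_excel(f) for f in glob.glob('myfolder/mydata*.xlsx')]
all_data = pd.concat(frames, ignore_index=True)
all_data.to_excel('mycollected_data.xlsx', sheet_name='Sheet1', index=False)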
Solution with openpyxl only (without a bunch of other dependencies).
This script should take care of merging together an arbitrary number of xlsx documents, whether they have one or multiple sheets. It will preserve the formatting.
There's a function to copy sheets in openpyxl, but it is only from/to the same file. There's also a function insert_rows somewhere, but by itself it won't insert any rows. So I'm afraid we are left to deal (tediously) with one cell at a time.
As much as I dislike using for loops and would rather use something compact and elegant like list comprehension, I don't see how to do that here as this is a side-effect show.
Credit to this answer on copying between workbooks.
#!/usr/bin/env python3
#USAGE
#mergeXLSX.py <a bunch of .xlsx files> ... output.xlsx
#
#where output.xlsx is the unified file
#This works FROM/TO the xlsx format. Libreoffice might help to convert from xls.
#localc --headless --convert-to xlsx somefile.xls
import sys
from copy import copy
from openpyxl import load_workbook, Workbook
def createNewWorkbook(manyWb):
    for wb in manyWb:
        for sheetName in wb.sheetnames:
            o = theOne.create_sheet(sheetName)
            safeTitle = o.title
            copySheet(wb[sheetName], theOne[safeTitle])
def copySheet(sourceSheet, newSheet):
    for row in sourceSheet.rows:
        for cell in row:
            newCell = newSheet.cell(row=cell.row, column=cell.col_idx,
                                    value=cell.value)
            if cell.has_style:
                newCell.font = copy(cell.font)
                newCell.border = copy(cell.border)
                newCell.fill = copy(cell.fill)
                newCell.number_format = copy(cell.number_format)
                newCell.protection = copy(cell.protection)
                newCell.alignment = copy(cell.alignment)
filesInput = sys.argv[1:]
theOneFile = filesInput.pop(-1)
myfriends = [load_workbook(f) for f in filesInput]
#try this if you are bored
#myfriends = [ openpyxl.load_workbook(f) for k in range(200) for f in filesInput ]
theOne = Workbook()
del theOne['Sheet']  # We want our new book to be empty. Thanks.
createNewWorkbook(myfriends)
theOne.save(theOneFile)
Tested with openpyxl 2.5.4, python 3.4.
You can simply use the pandas and os libraries to do this.
import pandas as pd
import os
#create an empty dataframe which will have all the combined data
mergedData = pd.DataFrame()
for files in os.listdir():
    #make sure you are only reading excel files
    if files.endswith('.xlsx'):
        data = pd.read_excel(files, index_col=None)
        mergedData = mergedData.append(data)
        #move the files to other folder so that it does not process multiple times
        os.rename(files, 'path to some other folder')
The mergedData DataFrame will then hold all the combined data, which you can export to a separate Excel or CSV file. The same code works with CSV files as well; just change the extension check in the if condition.
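For example, a minimal export sketch (the output file names are hypothetical):
mergedData.to_excel('combined.xlsx', index=False)
mergedData.to_csv('combined.csv', index=False)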
Just to add to p_barill's answer, if you have custom column widths that you need to copy, you can add the following to the bottom of copySheet:
for col in sourceSheet.column_dimensions:
    newSheet.column_dimensions[col] = sourceSheet.column_dimensions[col]
I would just post this in a comment on his or her answer but my reputation isn't high enough.
