I have two questions regarding reading data from a file in .xlsx format.
Is it possible to convert an .xlsx file to .csv without actually opening the file in pandas or using xlrd? When I have to open many files this is quite slow, and I am trying to speed it up.
Is it possible to use some sort of for loop to loop through decoded xlsx lines? I put an example below.
xlsx_file = 'some_file.xlsx'
with open(xlsx_file) as lines:
    for line in lines:
        # <do something like I would do for a normal string>
I would like to know if this is possible without the well-known xlrd module.
Related
I have an Excel file that is generated daily and can have 50k+ rows. Is there a way to read only the last row (which is the sum of the columns)?
Right now I am reading the entire sheet and keeping only the last row, but it is taking up a huge amount of runtime.
my code:
df = pd.read_excel(filepath, header=1, usecols="O:AC")
df = df.tail(1)
Pandas is quite slow, especially with large in-memory data. You could consider a lazy loading approach; for example, check out dask.
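For example, a minimal dask sketch; note that dask has no Excel reader, so this assumes the data has first been exported to CSV (the filename is hypothetical):

import dask.dataframe as dd

# the CSV is only scanned lazily, partition by partition
df = dd.read_csv('report.csv')
# tail() computes eagerly and returns a plain pandas DataFrame
last_row = df.tail(1)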
Otherwise you can read the file using open and grab the last line:
with open(filepath, "r") as file:
    last_line = file.readlines()[-1]
I don't think there is a way to decrease the runtime when you read an Excel file.
When you read an Excel file, or one sheet of it, all of its data gets loaded into memory. Even if you use pd.read_excel with skiprows, pandas only drops the skipped rows after loading everything, so it can't decrease the runtime.
If you really want to decrease the runtime of reading the file, you should save it in another format, such as .csv or .txt.
Also, you generally can't read Microsoft Excel files as text files using methods like readlines or read. You should either convert the files to another format first (a good option is .csv, which can be read with the csv module) or use a dedicated Python module like pyexcel or openpyxl to read .xlsx files directly.
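For example, here is a minimal sketch using openpyxl's read-only mode, which streams rows instead of loading the whole workbook into memory (the filename is taken from the question above):

from openpyxl import load_workbook

# read_only=True streams rows instead of loading the entire sheet
wb = load_workbook('some_file.xlsx', read_only=True)
ws = wb.active
for row in ws.iter_rows(values_only=True):
    print(row)  # each row is a tuple of plain cell values
wb.close()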
I'm struggling with one task that could save plenty of time. I'm new to Python so please don't kill me :)
I've got a huge txt file with millions of records. I used to split them in MS Access with the delimiter "|", filter the data down to about 400K records, and then copy them to Excel.
So basically the file looks like:
What I would like to have:
I'm using Spyder, so it would be great to see the data in the Variable Explorer so I can easily check it and (after additional filters) export it to Excel.
I use LibreOffice so I'm not 100% sure about Excel, but if you change the .txt to .csv and try to open the file with Excel, it should let you change the delimiter from a comma to '|' and then import it directly. That works with LibreOffice Calc, anyway.
You have to split the file into lines, then split each line on the '|' character and map the data to a list of dicts.
with open('filename') as f:
    data = [
        {'id': fields[0], 'fname': fields[1]}
        for fields in (line.rstrip('\n').split('|') for line in f)
    ]
You will have to fill in the rest of the fields.
Doing this with pandas will be much easier
Note: I am assuming that each entry is on a new line.
import pandas as pd
data = pd.read_csv("data.txt", delimiter='|')
# Transform the data here, or leave it as-is if you just want to convert the text file to an Excel file
data.to_excel("data.xlsx")
I will explain in detail:
I have an Excel file and my client is using a tool which reads files in .csv format only.
Now I am opening the Excel file in Excel and saving it in .csv format using the Save As option. Let's call this File_1.
I wrote Python code using the pandas module and converted that Excel file into a csv. Let's call this File_2.
My client tool is able to read File_1 but not File_2. Why? What would be the problem?
My observations:
When I read File_1 in pandas (the one converted to .csv manually), I have to pass encoding="ISO-8859-1", otherwise it gives a Unicode error.
Ex: pd.read_csv("File_1.csv", encoding="ISO-8859-1")
But when I read File_2 in pandas, it simply reads it without giving any error.
Ex: pd.read_csv("File_2.csv")
So what would be the reason the client tool cannot read File_2? Is it a Unicode problem or something else?
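A hedged note on a likely cause: pandas' to_csv writes UTF-8 by default, while Excel's Save As CSV typically uses the Windows ANSI code page (ISO-8859-1/cp1252 in Western locales). If the client tool expects the ANSI encoding, writing the pandas output with an explicit encoding may produce a file it accepts (the source filename here is hypothetical):

import pandas as pd

# write with the same encoding the manually saved File_1 appears to use
df = pd.read_excel('source.xlsx')
df.to_csv('File_2.csv', index=False, encoding='ISO-8859-1')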
I am trying to code a function where I grab data from my database, which already works correctly.
This is my code for the headers prior to adding the actual records:
import csv

with open('csv_template.csv', 'a') as template_file:
    # declares the variable template_writer ready for appending
    template_writer = csv.writer(template_file, delimiter=',')
    # appends the column names of the excel table prior to adding the actual physical data
    template_writer.writerow(['Arrangement_ID', 'Quantity', 'Cost'])
    # the with block closes the file automatically after appending
This is my code for the records which is contained in a while loop and is the main reason that the two scripts are kept separate.
with open('csv_template.csv', 'a') as template_file:
    # declares the variable template_writer ready for appending
    template_writer = csv.writer(template_file, delimiter=',')
    # appends the currently fetched values of the sql statement within the while loop to the csv file
    template_writer.writerow([transactionWordData[0], transactionWordData[1], transactionWordData[2]])
    # the with block closes the file automatically after appending
Now once I have the data ready for Excel, I open the file in Excel and would like it to be in a format I can print immediately. However, when I print, the column width of the Excel cells is too small and the content gets cut off.
I have tried altering the default column width within Excel, hoping it would keep that format permanently, but that doesn't seem to be the case: every time I re-open the csv file in Excel it resets back to the default column width.
Here is my code for opening the csv file in Excel from Python; the commented-out line is the code I actually want to use once I can format the spreadsheet ready for printing.
import os

# finds the absolute path of the csv file wherever it is in the file directories
file_path = os.path.abspath("csv_template.csv")
# opens the csv file in excel ready to print
os.startfile(file_path)
# os.startfile(file_path, 'print')
If anyone has any solutions to this or ideas please let me know.
Unfortunately I don't think this is possible with the CSV file format, since it is just plain-text comma-separated values and doesn't support formatting.
I have tried altering the default column width within excel but every time that I re-open the csv file in excel it seems to reset back to the default column width.
If you save the file to an Excel format once you have edited it, that should solve this problem.
Alternatively, instead of using the csv library, you could use xlsxwriter, which does allow you to set the width of the columns in your code.
See https://xlsxwriter.readthedocs.io and https://xlsxwriter.readthedocs.io/worksheet.html#worksheet-set-column.
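For instance, a minimal sketch (the output filename, column range, and width are placeholders):

import xlsxwriter

workbook = xlsxwriter.Workbook('csv_template.xlsx')
worksheet = workbook.add_worksheet()

# widen columns A:C to 20 characters so they are not cut off when printing
worksheet.set_column(0, 2, 20)

worksheet.write_row(0, 0, ['Arrangement_ID', 'Quantity', 'Cost'])
workbook.close()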
Hope this helps!
The csv format is nothing more than a text file where the lines follow a given pattern: a fixed number of fields (your data) delimited by commas. In contrast, an .xlsx file is a binary file that contains formatting specifications. Therefore you may want to write to an Excel file instead, using the rich pandas library.
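A minimal sketch of that idea, with hypothetical rows mirroring the columns from the question:

import pandas as pd

df = pd.DataFrame(
    [[1, 2, 3.50]],
    columns=['Arrangement_ID', 'Quantity', 'Cost'],
)
df.to_excel('csv_template.xlsx', index=False)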
Since the values are strings, you can also pad them with spaces so the width adjusts automatically; do it like this:
template_writer.writerow(['Arrangement_ID ','Quantity ','Cost '])
Problem Statement:
I have a directory of gzip files, and each gzip file contains a text file.
I wrote code that unzips all the gzip files, reads each unzipped text file, combines the output into one text file, and then applies a condition; if the condition is met, it writes the result to Excel.
The above process is a bit tedious and lengthy.
Can anyone please help me write code that reads the data directly from the gzipped txt files and writes the contents to Excel?
IIUC you can use pandas, first reading the file with read_csv:
df = pd.read_csv('yourfile.gzip', compression='gzip')
then apply your conditions on df and write the dataframe back to Excel using to_excel:
df.to_excel('file.xls')
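Alternatively, a sketch using the gzip module directly, in case you want to stream the text yourself before handing it to pandas (the filename, separator, and filter condition are hypothetical):

import gzip
import pandas as pd

# open the compressed file in text mode and let pandas parse it
with gzip.open('yourfile.txt.gz', 'rt') as f:
    df = pd.read_csv(f, sep='\t')

# apply a hypothetical condition, then write straight to Excel
df[df['total'] > 0].to_excel('output.xlsx', index=False)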