I have a large .xlsx file with 1 million rows. I don't want to open the whole file in one go. I was wondering if I can read a chunk of the file, process it and then read the next chunk? (I prefer to use pandas for it.)
UPDATE: 2019-09-05
The chunksize parameter has been deprecated because it was never actually used by pd.read_excel(): due to the nature of the XLSX file format, the whole file is read into memory during parsing.
There are more details about that in this great SO answer...
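If you still need to process an .xlsx in row-sized pieces, one workaround (not from the original answer) is to stream rows with openpyxl in read-only mode and build DataFrames from slices. A rough sketch, assuming openpyxl is installed, the hypothetical file big.xlsx has a header row, and a chunk of 10**5 rows fits in memory:

import pandas as pd
from openpyxl import load_workbook

# read_only=True streams rows instead of loading the whole sheet at once
wb = load_workbook("big.xlsx", read_only=True)  # hypothetical filename
ws = wb.active

rows = ws.iter_rows(values_only=True)
header = next(rows)

chunk, chunksize = [], 10**5
for row in rows:
    chunk.append(row)
    if len(chunk) == chunksize:
        df = pd.DataFrame(chunk, columns=header)
        # process df here
        chunk = []
if chunk:
    df = pd.DataFrame(chunk, columns=header)  # final partial chunk
wb.close()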
OLD answer:
you can use the read_excel() method:
chunksize = 10**5
for chunk in pd.read_excel(filename, chunksize=chunksize):
    ...  # process `chunk` DataFrame
if your excel file has multiple sheets, take a look at bpachev's solution
Yes. Pandas supports chunked reading. You would go about reading an excel file like so.
import pandas as pd

xl = pd.ExcelFile("myfile.xlsx")
for sheet_name in xl.sheet_names:
    reader = xl.parse(sheet_name, chunksize=1000)
    for chunk in reader:
        ...  # parse chunk here
I have large CSV files that I'd ultimately like to convert to parquet. Pandas won't help because of memory constraints and its difficulty handling NULL values (which are common in my data). I checked the PyArrow docs and there are tools for reading parquet files, but I didn't see anything about reading CSVs. Did I miss something, or is this feature somehow incompatible with PyArrow?
We're working on this feature, there is a pull request up now: https://github.com/apache/arrow/pull/2576. You can help by testing it out!
You can read the CSV in chunks with pd.read_csv(chunksize=...), then write a chunk at a time with Pyarrow.
The one caveat is, as you mentioned, Pandas will give inconsistent dtypes if you have a column that is all nulls in one chunk, so you have to make sure the chunk size is larger than the longest run of nulls in your data.
This reads CSV from stdin and writes Parquet to stdout (Python 3).
#!/usr/bin/env python
import sys
import pandas as pd
import pyarrow.parquet

# This has to be big enough you don't get a chunk of all nulls:
# https://issues.apache.org/jira/browse/ARROW-2659
SPLIT_ROWS = 2 ** 16

def main():
    writer = None
    for split in pd.read_csv(sys.stdin.buffer, chunksize=SPLIT_ROWS):
        table = pyarrow.Table.from_pandas(split, preserve_index=False)
        # Timestamps have issues if you don't convert to ms:
        # https://github.com/dask/fastparquet/issues/82
        writer = writer or pyarrow.parquet.ParquetWriter(
            sys.stdout.buffer, table.schema,
            coerce_timestamps='ms', compression='gzip')
        writer.write_table(table)
    writer.close()

if __name__ == "__main__":
    main()
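As an aside, one way to sidestep the all-null dtype caveat mentioned above is to pass explicit dtypes to read_csv so every chunk comes out with the same schema. A small sketch (the column names and types are made up):

import pandas as pd
import pyarrow
import pyarrow.parquet

# Hypothetical columns; forcing dtypes keeps the schema stable even when a
# chunk happens to contain only nulls in some column
dtypes = {"id": "Int64", "name": "string", "score": "float64"}

writer = None
for chunk in pd.read_csv("input.csv", dtype=dtypes, chunksize=2 ** 16):
    table = pyarrow.Table.from_pandas(chunk, preserve_index=False)
    writer = writer or pyarrow.parquet.ParquetWriter("output.parquet", table.schema)
    writer.write_table(table)
if writer is not None:
    writer.close()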
I have a 14MB Excel file with five worksheets that I'm reading into a Pandas dataframe, and although the code below works, it takes 9 minutes!
Does anyone have suggestions for speeding it up?
import pandas as pd

def OTT_read(xl, site_name):
    df = pd.read_excel(xl.io, site_name, skiprows=2, parse_dates=0, index_col=0,
                       usecols=[0, 1, 2], header=None,
                       names=['date_time', '%s_depth' % site_name, '%s_temp' % site_name])
    return df

def make_OTT_df(FILEDIR, OTT_FILE):
    xl = pd.ExcelFile(FILEDIR + OTT_FILE)
    site_names = xl.sheet_names
    df_list = [OTT_read(xl, site_name) for site_name in site_names]
    return site_names, df_list

FILEDIR = 'c:/downloads/'
OTT_FILE = 'OTT_Data_All_stations.xlsx'
site_names_OTT, df_list_OTT = make_OTT_df(FILEDIR, OTT_FILE)
As others have suggested, reading CSV is faster. So if you are on Windows and have Excel installed, you could call a VBScript to convert the Excel file to CSV and then read the CSV. With the script below this took about 30 seconds.
import pandas as pd
from subprocess import call

# create a list with the sheet numbers you want to process
sheets = map(str, range(1, 6))

# convert each sheet to csv and then read it using read_csv
df = {}
excel = 'C:\\Users\\rsignell\\OTT_Data_All_stations.xlsx'
for sheet in sheets:
    csv = 'C:\\Users\\rsignell\\test' + sheet + '.csv'
    call(['cscript.exe', 'C:\\Users\\rsignell\\ExcelToCsv.vbs', excel, csv, sheet])
    df[sheet] = pd.read_csv(csv)
Here's a little snippet of python to create the ExcelToCsv.vbs script:
# write the VBScript to a file
vbscript = """if WScript.Arguments.Count < 3 Then
    WScript.Echo "Please specify the source and the destination files. Usage: ExcelToCsv <xls/xlsx source file> <csv destination file> <worksheet number (starts at 1)>"
    Wscript.Quit
End If

csv_format = 6

Set objFSO = CreateObject("Scripting.FileSystemObject")
src_file = objFSO.GetAbsolutePathName(Wscript.Arguments.Item(0))
dest_file = objFSO.GetAbsolutePathName(WScript.Arguments.Item(1))
worksheet_number = CInt(WScript.Arguments.Item(2))

Dim oExcel
Set oExcel = CreateObject("Excel.Application")
Dim oBook
Set oBook = oExcel.Workbooks.Open(src_file)
oBook.Worksheets(worksheet_number).Activate

oBook.SaveAs dest_file, csv_format

oBook.Close False
oExcel.Quit
"""

with open('ExcelToCsv.vbs', 'w', encoding='utf-8') as f:
    f.write(vbscript)
This answer benefited from "Convert XLS to CSV on command line" and "csv & xlsx files import to pandas data frame: speed issue".
I used xlsx2csv to convert the Excel file to CSV in memory, which helped cut the read time to about half.
from io import StringIO

import pandas as pd
from xlsx2csv import Xlsx2csv

def read_excel(path: str, sheet_name: str) -> pd.DataFrame:
    buffer = StringIO()
    Xlsx2csv(path, outputencoding="utf-8", sheet_name=sheet_name).convert(buffer)
    buffer.seek(0)
    df = pd.read_csv(buffer)
    return df
If you have fewer than 65536 rows (in each sheet) you can try xls instead of xlsx. In my experience xls is faster than xlsx. It is difficult to compare to csv because it depends on the number of sheets.
Although this is not an ideal solution (xls is an old proprietary binary format), I have found it useful if you are working with a lot of sheets, with internal formulas whose values are often updated, or if for whatever reason you would like to keep Excel's multi-sheet functionality (instead of separate csv files).
In my experience, Pandas read_excel() works fine with Excel files containing multiple sheets. As suggested in Using Pandas to read multiple worksheets, if you set sheet_name to None it will read every sheet into a DataFrame and return a dictionary of DataFrames keyed by sheet name.
The time is most likely going into the text/date parsing in your code. A 14MB Excel file with 5 sheets is not that much. I have a 20.1MB Excel file with 46 sheets, each with more than 6000 rows and 17 columns, and reading it with read_excel took the time shown below:
import time
import datetime as dt
import pandas as pd

t0 = time.time()

def parse(datestr):
    y, m, d = datestr.split("/")
    return dt.date(int(y), int(m), int(d))

data = pd.read_excel("DATA (1).xlsx", sheet_name=None, encoding="utf-8",
                     skiprows=1, header=0, parse_dates=[1], date_parser=parse)
t1 = time.time()
print(t1 - t0)
## result: 37.54169297218323 seconds
In the code above, data is a dictionary of 46 DataFrames.
As others suggested, using read_csv() can help because reading a .csv file is faster. But keep in mind that .xlsx files use compression, so the .csv files might be larger and hence slower to read. If you want to convert your file to comma-separated using Python (the VB code is offered in Rich Signell's answer), you can use: Convert xlsx to csv
I know this is old, but in case anyone else is looking for an answer that doesn't involve VB: pandas read_csv() is faster, and you don't need a VB script to get a csv file.
Open your Excel file and save as *.csv (comma separated value) format.
Under Tools you can select Web Options, and under the Encoding tab you can change the encoding to whatever works for your data. I ended up using Windows, Western European because Windows UTF encoding is "special", but there are lots of ways to accomplish the same thing. Then use the encoding argument in pd.read_csv() to specify your encoding.
Encoding options are listed here
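For example, a small sketch of the final read (the file name and encoding are placeholders; match the encoding to whatever you chose when saving the CSV from Excel):

import pandas as pd

# cp1252 (Windows Western European) is just an example encoding
df = pd.read_csv("exported_from_excel.csv", encoding="cp1252")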
I encourage you to do the comparison yourself and see which approach is appropriate in your situation.
For instance, if you are processing a lot of XLSX files and are only going to ever read each one once, you may not want to worry about the CSV conversion. However, if you are going to read the CSVs over and over again, then I would highly recommend saving each of the worksheets in the workbook to a csv once, then read them repeatedly using pd.read_csv().
Below is a simple script that will let you compare Importing XLSX Directly, Converting XLSX to CSV in memory, and Importing CSV. It is based on Jing Xue's answer.
Spoiler alert: If you are going to read the file(s) multiple times, it's going to be faster to convert the XLSX to CSV.
I did some testing with some files I'm working on, and here are my results:
5,874 KB xlsx file (29,415 rows, 58 columns)
Elapsed time for [Import XLSX with Pandas]: 0:00:31.75
Elapsed time for [Convert XLSX to CSV in mem]: 0:00:22.19
Elapsed time for [Import CSV file]: 0:00:00.21
********************
202,782 KB xlsx file (990,832 rows, 58 columns)
Elapsed time for [Import XLSX with Pandas]: 0:17:04.31
Elapsed time for [Convert XLSX to CSV in mem]: 0:12:11.74
Elapsed time for [Import CSV file]: 0:00:07.11
YES! the 202MB file really did take only 7 seconds compared to 17 minutes for the XLSX!!!
If you're ready to set up your own test, just open your XLSX in Excel and save one of the worksheets to CSV. For a final solution, you would obviously need to loop through the worksheets to process each one.
You will also need to pip install rich pandas xlsx2csv.
from datetime import datetime
from io import StringIO

import pandas as pd
from rich import print
from xlsx2csv import Xlsx2csv

def timer(name, startTime=None):
    if startTime:
        print(f"Timer: Elapsed time for [{name}]: {datetime.now() - startTime}")
    else:
        startTime = datetime.now()
        print(f"Timer: Starting [{name}] at {startTime}")
    return startTime

def read_excel(path: str, sheet_name: str) -> pd.DataFrame:
    buffer = StringIO()
    Xlsx2csv(path, outputencoding="utf-8", sheet_name=sheet_name).convert(buffer)
    buffer.seek(0)
    df = pd.read_csv(buffer)
    return df

xlsxFileName = "MyBig.xlsx"
sheetName = "Sheet1"
csvFileName = "MyBig.csv"

startTime = timer(name="Import XLSX with Pandas")
df = pd.read_excel(xlsxFileName, sheet_name=sheetName)
timer("Import XLSX with Pandas", startTime)

startTime = timer(name="Convert XLSX to CSV first")
df = read_excel(path=xlsxFileName, sheet_name=sheetName)
timer("Convert XLSX to CSV first", startTime)

startTime = timer(name="Import CSV")
df = pd.read_csv(csvFileName)
timer("Import CSV", startTime)
There's no reason to open Excel if you're willing to deal with the slow conversion one time.
Read the data into a dataframe with pd.read_excel().
Dump it into a csv right away with pd.to_csv().
This avoids both Excel and Windows-specific calls. In my case the one-time hit was worth the hassle. I got a ☕.
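A minimal sketch of that one-time conversion (file and sheet names are placeholders):

import pandas as pd

# One slow read of the workbook...
sheets = pd.read_excel("big_workbook.xlsx", sheet_name=None)  # dict of DataFrames

# ...then dump each sheet to its own CSV for fast re-reads later
for name, df in sheets.items():
    df.to_csv(f"{name}.csv", index=False)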
I have a really large csv file, about 10GB. Whenever I try to read it into an IPython notebook using
data = pd.read_csv("data.csv")
my laptop gets stuck. Is it possible to read just 10,000 rows, or say 500 MB, of a csv file?
It is possible. You can create an iterator yielding chunks of your csv of a certain size at a time as a DataFrame by passing iterator=True with your desired chunksize to read_csv.
df_iter = pd.read_csv('data.csv', chunksize=10000, iterator=True)

for iter_num, chunk in enumerate(df_iter, 1):
    print(f'Processing iteration {iter_num}')
    # do things with chunk
Or more briefly:

for chunk in pd.read_csv('data.csv', chunksize=10000):
    ...  # do things with chunk
Alternatively if there was just a specific part of the csv you wanted to read, you could use the skiprows and nrows options to start at a particular line and subsequently read n rows, as the naming suggests.
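For example, a sketch that skips the first million data rows (keeping the header on line 0) and then reads the next 10,000:

import pandas as pd

# skiprows with a range keeps row 0 (the header) and skips data rows 1..1,000,000;
# nrows then limits how many rows are actually read
df_slice = pd.read_csv('data.csv',
                       skiprows=range(1, 1_000_001),
                       nrows=10_000)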
Likely a memory issue. On read_csv you can set chunksize (where you can specify number of rows).
Alternatively, if you don't need all the columns, you can change usecols on read_csv to import only the columns you need.
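For instance (the column names here are just placeholders):

import pandas as pd

# Only the listed columns are parsed; everything else is skipped
df = pd.read_csv('data.csv', usecols=['id', 'timestamp', 'value'])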
I have an extremely large dataframe saved as a gzip file. The data also needs a good deal of manipulation before being saved.
One could try to decompress the entire gzipped file into text, store it in a variable, parse/clean the data, and then load it with pandas.read_csv(). However, this is extremely memory intensive.
I would like to read/decompress this file line by line (as this would be the most memory-efficient solution, I think), parse this (e.g. with regex re or perhaps a pandas solution) and then save each line into a pandas dataframe.
Python has a gzip library for this:
import gzip
import csv
import pandas as pd

# 'rt' opens the file in text mode so csv.reader can split the lines
with gzip.open('filename.gzip', 'rt') as input_file:
    reader = csv.reader(input_file, delimiter="\t")
    data = [row for row in reader]

df = pd.DataFrame(data)
However, this reads everything into memory (the data list) before parsing. How can one do this in a more memory-efficient manner?
Should I be using a different library instead of gzip?
It's not quite clear what you want to do with your huge GZIP file. IIUC you can't read the whole data set into memory because the GZIP file is huge, so the only option is to process your data in chunks.
Assuming that you want to read your data from the GZIP file, process it and write it to compressed HDF5 file:
import pandas as pd

hdf_key = 'my_hdf_ID'
cols_to_index = ['colA', 'colZ']  # list of indexed columns; use `cols_to_index=True` if you want to index ALL columns
store = pd.HDFStore('/path/to/filename.h5')

chunksize = 10**5
for chunk in pd.read_csv('filename.gz', sep=r'\s*', chunksize=chunksize):
    # process data in the `chunk` DF
    # don't index data columns in each iteration - we'll do it later
    store.append(hdf_key, chunk, data_columns=cols_to_index, index=False,
                 complib='blosc', complevel=4)

# index data columns in HDFStore
store.create_table_index(hdf_key, columns=cols_to_index, optlevel=9, kind='full')
store.close()
Perhaps extract your data with gunzip -c, pipe it to your Python script and work with standard input there:
$ gunzip -c source.gz | python ./line_parser.py | gzip -c - > destination.gz
In the Python script line_parser.py:
#!/usr/bin/env python
import sys

for line in sys.stdin:
    sys.stdout.write(line)
Replace sys.stdout.write(line) with code to process each line in your custom way.
Have you considered using HDFStore:
HDFStore is a dict-like object which reads and writes pandas using the high performance HDF5 format using the excellent PyTables library. See the cookbook for some advanced strategies
Create Store, save DataFrame and close store.
# Note compression.
# Note compression.
store = pd.HDFStore('my_store.h5', mode='w', complevel=9, complib='blosc')
with store:
    store['my_dataframe'] = df
Reopen store, retrieve dataframe and close store.
with pd.HDFStore('my_store.h5', mode='r') as store:
    df = store.get('my_dataframe')
I am exploring switching to python and pandas as a long-time SAS user.
However, when running some tests today, I was surprised that python ran out of memory when trying to pandas.read_csv() a 128mb csv file. It had about 200,000 rows and 200 columns of mostly numeric data.
With SAS, I can import a csv file into a SAS dataset and it can be as large as my hard drive.
Is there something analogous in pandas?
I regularly work with large files and do not have access to a distributed computing network.
Wes is of course right! I'm just chiming in to provide a little more complete example code. I had the same issue with a 129 Mb file, which was solved by:
import pandas as pd
tp = pd.read_csv('large_dataset.csv', iterator=True, chunksize=1000) # gives TextFileReader, which is iterable with chunks of 1000 rows.
df = pd.concat(tp, ignore_index=True) # df is DataFrame. If errors, do `list(tp)` instead of `tp`
In principle it shouldn't run out of memory, but there are currently memory problems with read_csv on large files caused by some complex Python internal issues (this is vague but it's been known for a long time: http://github.com/pydata/pandas/issues/407).
At the moment there isn't a perfect solution (here's a tedious one: you could transcribe the file row-by-row into a pre-allocated NumPy array or memory-mapped file (np.memmap)), but it's one I'll be working on in the near future. Another solution is to read the file in smaller pieces (use iterator=True, chunksize=1000), then concatenate them with pd.concat. The problem comes in when you pull the entire text file into memory in one big slurp.
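For what it's worth, a rough illustration of the pre-allocated-array idea (purely a sketch; it assumes the row and column counts are known up front and that all columns are numeric):

import numpy as np
import pandas as pd

n_rows, n_cols = 200_000, 200          # assumed to be known in advance
arr = np.empty((n_rows, n_cols), dtype=np.float64)

with open('large_dataset.csv') as f:
    header = f.readline().rstrip('\n').split(',')
    for i, line in enumerate(f):
        arr[i] = np.array(line.split(','), dtype=np.float64)  # one row at a time

df = pd.DataFrame(arr, columns=header)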
This is an older thread, but I just wanted to dump my workaround solution here. I initially tried the chunksize parameter (even with quite small values like 10000), but it didn't help much; I still had technical issues with memory (my CSV was ~7.5 GB).
Right now, I just read chunks of the CSV files in a for-loop approach and add them e.g., to an SQLite database step by step:
import subprocess
import sqlite3

import pandas as pd
from pandas.io import sql

# Input and output file paths
in_csv = '../data/my_large.csv'
out_sqlite = '../data/my.sqlite'

table_name = 'my_table'  # name for the SQLite database table
chunksize = 100000       # number of lines to process at each iteration

# columns that should be read from the CSV file
columns = ['molecule_id', 'charge', 'db', 'drugsnow', 'hba', 'hbd', 'loc', 'nrb', 'smiles']

# Get number of lines in the CSV file
nlines = subprocess.check_output('wc -l %s' % in_csv, shell=True)
nlines = int(nlines.split()[0])

# connect to database
cnx = sqlite3.connect(out_sqlite)

# Iteratively read the CSV and dump lines into the SQLite table
for i in range(0, nlines, chunksize):

    df = pd.read_csv(in_csv,
                     header=None,      # no header, define column header manually later
                     nrows=chunksize,  # number of rows to read at each iteration
                     skiprows=i)       # skip rows that were already read

    # columns to read
    df.columns = columns

    sql.to_sql(df,
               name=table_name,
               con=cnx,
               index=False,                # don't use CSV file index
               index_label='molecule_id',  # use a unique column from DataFrame as index
               if_exists='append')

cnx.close()
Below is my workflow.
import sqlalchemy as sa
import pandas as pd
import psycopg2

count = 0
con = sa.create_engine('postgresql://postgres:pwd@localhost:00001/r')
# con = sa.create_engine('sqlite:///XXXXX.db')  # SQLite

chunks = pd.read_csv('..file', chunksize=10000, encoding="ISO-8859-1",
                     sep=',', error_bad_lines=False, index_col=False, dtype='unicode')
Based on your file size, you may want to tune the chunksize.
for chunk in chunks:
    chunk.to_sql(name='Table', if_exists='append', con=con)
    count += 1
    print(count)
Once all the data is in the database, you can query out just the parts you need.
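For example, pulling back only the rows you need might look like this (the column name in the WHERE clause is hypothetical):

import pandas as pd
import sqlalchemy as sa

con = sa.create_engine('sqlite:///XXXXX.db')  # or the PostgreSQL engine from above

# Only the matching rows come back into memory
subset = pd.read_sql_query('SELECT * FROM "Table" WHERE some_column > 100', con)
print(len(subset))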
If you want to load huge csv files, dask might be a good option. It mimics the pandas API, so it feels quite similar to pandas.
link to dask on github
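A minimal sketch, assuming dask is installed (the column names are placeholders):

import dask.dataframe as dd

# read_csv here is lazy: the file is processed in partitions instead of all at once
ddf = dd.read_csv('data.csv')

# Familiar pandas-style operations, evaluated out-of-core on .compute()
result = ddf.groupby('some_column')['value'].mean().compute()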
You can use PyTables rather than a pandas DataFrame.
It is designed for large data sets and the file format is HDF5.
So the processing time is relatively fast.
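If you want to go through PyTables directly, a small sketch might look like this (the table layout is made up):

import tables

# Describe the table layout up front (columns here are hypothetical)
class Measurement(tables.IsDescription):
    molecule_id = tables.Int64Col()
    value = tables.Float64Col()

with tables.open_file('my_data.h5', mode='w') as h5:
    table = h5.create_table('/', 'measurements', Measurement, 'example table')
    row = table.row
    for i in range(1_000_000):
        row['molecule_id'] = i
        row['value'] = i * 0.5
        row.append()
    table.flush()

# Read back only what you need, without loading the whole table
with tables.open_file('my_data.h5', mode='r') as h5:
    subset = h5.root.measurements.read_where('value > 499000')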