I'm processing large CSV files (on the order of several GBs with 10M lines) using a Python script.
The files have different row lengths, and cannot be loaded fully into memory for analysis.
Each line is handled separately by a function in my script. It takes about 20 minutes to analyze one file, and it appears disk access speed is not an issue, but rather processing/function calls.
The code looks something like this (very straightforward). The actual code uses a Class structure, but this is similar:
csvReader = csv.reader(open("file", "r"))
for row in csvReader:
    handleRow(row, dataStructure)
Given the calculation requires a shared data structure, what would be the best way to run the analysis in parallel in Python utilizing multiple cores?
In general, how do I read multiple lines at once from a .csv in Python to transfer to a thread/process? Looping with for over the rows doesn't sound very efficient.
Thanks!
This might be too late, but just for future users I'll post anyway. Another poster mentioned using multiprocessing. I can vouch for it and can go into more detail. We deal with files in the hundreds of MB to several GB every day using Python, so it's definitely up to the task. Some of the files we deal with aren't CSVs, so the parsing can be fairly complex and take longer than the disk access. However, the methodology is the same no matter the file type.
You can process pieces of the large files concurrently. Here's pseudo code of how we do it:
import os, multiprocessing as mp

# process file function
def processfile(filename, start=0, stop=0):
    if start == 0 and stop == 0:
        ...  # process entire file
    else:
        with open(filename, 'r') as fh:
            fh.seek(start)
            lines = fh.readlines(stop - start)
            ...  # process these lines
    return results
if __name__ == "__main__":
    filename = "really_big_file.csv"   # placeholder path to the input file
    # get file size and set chunk size
    filesize = os.path.getsize(filename)
    split_size = 100*1024*1024

    # determine if it needs to be split
    if filesize > split_size:
        # create pool, initialize chunk start location (cursor)
        pool = mp.Pool(mp.cpu_count())
        cursor = 0
        results = []
        with open(filename, 'r') as fh:
            # for every chunk in the file...
            while cursor < filesize:
                # determine where the chunk ends; is it the last one?
                if cursor + split_size >= filesize:
                    end = filesize
                else:
                    # seek to the tentative end of the chunk and read the next
                    # line to ensure you pass entire lines to processfile
                    fh.seek(cursor + split_size)
                    fh.readline()
                    # get current file location
                    end = fh.tell()

                # add chunk to process pool, save reference to get results
                proc = pool.apply_async(processfile, args=[filename, cursor, end])
                results.append(proc)

                # set up next chunk
                cursor = end

        # close and wait for pool to finish
        pool.close()
        pool.join()

        # iterate through results
        for proc in results:
            processfile_result = proc.get()
    else:
        ...  # process normally
Like I said, that's only pseudo code. It should get anyone started who needs to do something similar. I don't have the code in front of me, just doing it from memory.
But we got more than a 2x speed-up from this on the first run without fine-tuning it. You can fine-tune the number of processes in the pool and how large the chunks are to get an even higher speed-up, depending on your setup. If you have multiple files as we do, create a pool to read several files in parallel. Just be careful not to overload the box with too many processes.
Note: You need to put it inside an "if main" block to ensure infinite processes aren't created.
Try benchmarking reading your file and parsing each CSV row but doing nothing with it. You ruled out disk access, but you still need to see if the CSV parsing is what's slow or if your own code is what's slow.
If it's the CSV parsing that's slow, you might be stuck, because I don't think there's a way to jump into the middle of a CSV file without scanning up to that point.
If it's your own code, then you can have one thread reading the CSV file and dropping rows into a queue, and then have multiple threads processing rows from that queue. But don't bother with this solution if the CSV parsing itself is what's making it slow.
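As a rough, untested sketch of that reader-thread-plus-queue layout (the file name is a placeholder and handle_row stands in for your own per-row work):

import csv
import queue
import threading

SENTINEL = object()

def handle_row(row):
    pass   # your own per-row processing goes here

def reader(path, q):
    # single reader thread: parse the CSV and hand rows to the workers
    with open(path, newline='') as f:
        for row in csv.reader(f):
            q.put(row)
    q.put(SENTINEL)

def worker(q):
    while True:
        row = q.get()
        if row is SENTINEL:
            q.put(SENTINEL)   # let the other workers see it too
            break
        handle_row(row)

q = queue.Queue(maxsize=10000)
threads = [threading.Thread(target=worker, args=(q,)) for _ in range(4)]
threads.append(threading.Thread(target=reader, args=('file.csv', q)))
for t in threads:
    t.start()
for t in threads:
    t.join()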
Because of the GIL, Python's threading won't speed up computations that are processor-bound the way it can for I/O-bound work.
Instead, take a look at the multiprocessing module which can run your code on multiple processors in parallel.
If the rows are completely independent, just split the input file into as many files as you have CPUs. After that, you can run as many instances of the script as you now have input files. Since these are completely separate processes, they will not be bound by GIL problems.
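One possible way to do that split-and-run-separately approach, sketched here with assumed names (input.csv, an analyze.py that processes one piece) and relying on GNU coreutils split, which can split on line boundaries:

import glob
import multiprocessing as mp
import subprocess
import sys

NUM_CPUS = mp.cpu_count()

# split the CSV into one piece per CPU without breaking lines (GNU split)
subprocess.run(['split', '-n', 'l/%d' % NUM_CPUS, 'input.csv', 'piece_'], check=True)

# launch one completely independent interpreter per piece
procs = [subprocess.Popen([sys.executable, 'analyze.py', piece])
         for piece in sorted(glob.glob('piece_*'))]
for p in procs:
    p.wait()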
Just found a solution to this old problem. I tried Pool.imap, and it seems to simplify processing large files significantly. imap has one significant benefit when it comes to processing large files: it returns results as soon as they are ready rather than waiting for all of them to be available, which saves a lot of memory.
(Here is an untested snippet of code which reads a csv file row by row, processes each row, and writes it back to a different csv file. Everything is done in parallel.)
import multiprocessing as mp
import csv

CHUNKSIZE = 10000   # set this to whatever you feel is reasonable

def process(row):
    ...  # transform a single row and return it

def _run_parallel(csvfname, csvoutfname):
    with open(csvfname, newline='') as csvf, \
         open(csvoutfname, 'w', newline='') as csvout, \
         mp.Pool() as p:
        reader = csv.reader(csvf)
        writer = csv.writer(csvout)
        writer.writerows(p.imap(process, reader, chunksize=CHUNKSIZE))
If you use zmq and a DEALER middle man, you'd be able to spread the row processing not just across the CPUs on your computer but across a network to as many processes as necessary. This would essentially guarantee that you hit an I/O limit rather than a CPU limit :)
Related
I have a very large text file, and a function that does what I want it to do to each line. However, when reading line by line and applying the function, it takes roughly three hours. I'm wondering if there isn't a way to speed this up with chunking or multiprocessing.
My code looks like this:
with open('f.txt', 'r') as f:
    function(f, w)
Where the function takes in the large text file and an empty text file, applies the function to each line, and writes the result to the empty file.
I have tried:
def multiprocess(f, w):
    cores = multiprocessing.cpu_count()
    with Pool(cores) as p:
        pieces = p.map(function, f, w)
    f.close()
    w.close()

multiprocess(f, w)
But when I do this, I get a TypeError about '<=' not being supported between an 'io.TextIOWrapper' and an 'int'. This could also be the wrong approach, or I may be doing this wrong entirely. Any advice would be much appreciated.
Even if you could successfully pass open file objects to child OS processes in your Pool as the arguments f and w (which I don't think you can on any OS), trying to read from and write to files concurrently is a bad idea, to say the least.
In general, I recommend using the Process class rather than Pool, assuming that the end result needs to maintain the same order as the input 20m-line file.
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Process
The slowest solution, but the most efficient RAM usage:
Your initial solution: execute and process the file line by line.
For maximum speed, but the most RAM consumption:
Read the entire file into RAM as a list via f.readlines(), if your entire dataset can fit in memory comfortably.
Figure out the number of cores (say 8 cores for example).
Split the list evenly into 8 lists.
Pass each list to the function to be executed by a Process instance (at this point your RAM usage will be further doubled, which is the trade-off for maximum speed), but del the original big list right after to free some RAM.
Each Process handles its entire chunk in order, line by line, and writes it into its own output file (out_file1.txt, out_file2.txt, etc.).
Have your OS concatenate the output files in order into one big output file. You can use subprocess.run('cat out_file* > big_output.txt', shell=True) if you are on a UNIX system, or the equivalent Windows command on Windows.
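To make that maximum-speed recipe concrete, here is a minimal, untested sketch; the input file name, the out_fileN.txt naming, and the trivial process_line placeholder are all assumptions to be replaced with your own code:

import multiprocessing as mp

def process_line(line):
    return line.upper()        # placeholder: replace with your real processing

def work(lines, outname):
    # each Process handles its chunk in order and writes to its own file
    with open(outname, 'w') as out:
        for line in lines:
            out.write(process_line(line))

if __name__ == '__main__':
    cores = mp.cpu_count()
    with open('big_input.txt') as f:       # assumed input file name
        lines = f.readlines()              # the whole file in RAM
    chunk = (len(lines) + cores - 1) // cores
    procs = []
    for i in range(cores):
        part = lines[i * chunk:(i + 1) * chunk]
        p = mp.Process(target=work, args=(part, 'out_file%d.txt' % (i + 1)))
        p.start()
        procs.append(p)
    del lines                              # free the big list in the parent
    for p in procs:
        p.join()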
For an intermediate trade-off between speed and RAM, but the most complex to implement, we will have to use the Queue class
https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Queue
Figure out the number of cores in a variable cores (say 8).
Initialize 8 queues and 8 processes, and pass one Queue to each process. At this point each Process should open its own output file (outfile1.txt, outfile2.txt, etc.).
Each process shall poll (and block) for a chunk of 10_000 rows, process them, and write them to its respective output file sequentially.
In a loop in the parent process, read 10_000 * 8 lines from your input 20m-row file.
Split that into several lists (10K chunks each) and push them to the respective Processes' Queues.
When you're done with the 20m rows, exit the loop and pass a special sentinel value into each process's Queue that signals the end of the input data.
When each process detects that special end-of-data value in its own Queue, it should close its output file and exit.
Have your OS concatenate the output files in order into one big output file. You can use subprocess.run('cat out_file* > big_output.txt', shell=True) if you are on a UNIX system, or the equivalent Windows command on Windows.
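Here is one possible rough sketch of that Queue recipe, untested; the file names, the None sentinel, and the upper() placeholder processing are assumptions:

import multiprocessing as mp

SENTINEL = None          # assumed end-of-data marker
CHUNK = 10_000

def worker(q, outname):
    # consume chunks of lines from the queue until the sentinel arrives
    with open(outname, 'w') as out:
        while True:
            chunk = q.get()              # blocks until a chunk is available
            if chunk is SENTINEL:
                break
            for line in chunk:
                out.write(line.upper())  # placeholder processing

if __name__ == '__main__':
    cores = mp.cpu_count()
    queues = [mp.Queue(maxsize=2) for _ in range(cores)]
    procs = [mp.Process(target=worker, args=(q, 'outfile%d.txt' % (i + 1)))
             for i, q in enumerate(queues)]
    for p in procs:
        p.start()
    with open('big_input.txt') as f:     # assumed input file name
        i = 0
        chunk = []
        for line in f:
            chunk.append(line)
            if len(chunk) == CHUNK:
                queues[i % cores].put(chunk)   # round-robin the chunks
                chunk = []
                i += 1
        if chunk:
            queues[i % cores].put(chunk)
    for q in queues:
        q.put(SENTINEL)
    for p in procs:
        p.join()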
Convoluted? Well, it is usually a trade-off between speed, RAM, and complexity. Also, for a 20m-row task, one needs to make sure that the data processing is as optimal as possible: inline as many functions as you can, avoid a lot of math, use Pandas / numpy in the child processes if possible, etc.
Iterating with a plain for loop isn't the only way; you can read more than one line at a time by asking for several lines in one call, and reading in bulk like this can make the program read faster.
Look at this snippet.
# Python code to
# demonstrate readlines()

L = ["Geeks\n", "for\n", "Geeks\n"]

# writing to file
file1 = open('myfile.txt', 'w')
file1.writelines(L)
file1.close()

# Using readlines()
file1 = open('myfile.txt', 'r')
Lines = file1.readlines()

count = 0
# Strips the newline character
for line in Lines:
    count += 1
    print("Line{}: {}".format(count, line.strip()))
I got it from: https://www.geeksforgeeks.org/read-a-file-line-by-line-in-python/.
For one of our clients' requirements, I have to develop an application which should be able to process huge CSV files. File sizes could range from 10 MB to 2 GB.
Depending on the size, the module decides whether to read the file using a multiprocessing pool or a normal CSV reader.
But from observation, multiprocessing takes longer than normal CSV reading when both modes are tested on a file of about 100 MB.
Is this correct behaviour? Or am I doing something wrong?
Here is my code:
def set_file_processing_mode(self, fpath):
    """ """
    fsize = self.get_file_size(fpath)
    if fsize > FILE_SIZE_200MB:
        self.read_in_async_mode = True
    else:
        self.read_in_async_mode = False

def read_line_by_line(self, filepath):
    """Reads CSV line by line"""
    with open(filepath, 'rb') as csvin:
        csvin = csv.reader(csvin, delimiter=',')
        for row in iter(csvin):
            yield row

def read_huge_file(self, filepath):
    """Read file in chunks"""
    pool = mp.Pool(1)
    for chunk_number in range(self.chunks):  # self.chunks = 20
        proc = pool.apply_async(read_chunk_by_chunk,
                                args=[filepath, self.chunks, chunk_number])
        reader = proc.get()
        yield reader
    pool.close()
    pool.join()
def iterate_chunks(self, filepath):
    """Read huge file rows"""
    for chunklist in self.read_huge_file(filepath):
        for row in chunklist:
            yield row

@timeit  # -- custom decorator
def read_csv_rows(self, filepath):
    """Read CSV rows and pass it to processing"""
    if self.read_in_async_mode:
        print("Reading in async mode")
        for row in self.iterate_chunks(filepath):
            self.process(row)
    else:
        print("Reading in sync mode")
        for row in self.read_line_by_line(filepath):
            self.process(row)

def process(self, formatted_row):
    """Just prints the line"""
    self.log(formatted_row)
def read_chunk_by_chunk(filename, number_of_blocks, block):
    '''
    A generator that splits a file into blocks and iterates
    over the lines of one of the blocks.
    '''
    results = []
    assert 0 <= block and block < number_of_blocks
    assert 0 < number_of_blocks
    with open(filename) as fp:
        fp.seek(0, 2)
        file_size = fp.tell()
        ini = file_size * block / number_of_blocks
        end = file_size * (1 + block) / number_of_blocks
        if ini <= 0:
            fp.seek(0)
        else:
            fp.seek(ini - 1)
            fp.readline()
        while fp.tell() < end:
            results.append(fp.readline())
    return results
if __name__ == '__main__':
    classobj.read_csv_rows(sys.argv[1])
Here is a test:
$ python csv_utils.py "input.csv"
Reading in async mode
FINISHED IN 3.75 sec
$ python csv_utils.py "input.csv"
Reading in sync mode
FINISHED IN 0.96 sec
The question is:
Why is async mode taking longer?
NOTE: Removed unnecessary functions/lines to avoid complexity in the code
Is this correct behaviour?
Yes - it may not be what you expect, but it is consistent with the way you implemented it and how multiprocessing works.
Why is async mode taking longer?
The way your example works is perhaps best illustrated by a parable - bear with me please:
Let's say you ask your friend to engage in an experiment. You want him to go through a book and mark each page with a pen, as fast as he can. There are two rounds with a distinct setup, and you are going to time each round and then compare which one was faster:
open the book on the first page, mark it, then flip the page and mark the following pages as they come up. Pure sequential processing.
process the book in chunks. For this he should run through the book's pages chunk by chunk. That is, he should first make a list of page numbers as starting points, say 1, 10, 20, 30, 40, etc. Then for each chunk, he should close the book, open it at the page for the starting point, process all pages up to the next starting point, close the book, and start all over again for the next chunk.
Which of these approaches will be faster?
Am I doing something wrong?
You decide both approaches take too long. What you really want to do is ask multiple people (processes) to do the marking in parallel. Now with a book (as with a file) that's difficult because, well, only one person (process) can access the book (file) at any one point. Still it can be done if the order of processing doesn't matter and it is the marking itself - not the accessing - that should run in parallel. So the new approach is like this:
cut the pages out of the book and sort them into say 10 stacks
ask ten people to mark one stack each
This approach will most certainly speed up the whole process. Perhaps surprisingly though the speed up will be less than a factor of 10 because step 1 takes some time, and only one person can do it. That's called Amdahl's law [wikipedia]:
Essentially it means that the (theoretical) speed-up of the whole process is limited by the fraction p of the work that can be parallelized and the factor s by which that part is sped up: overall speed-up = 1 / ((1 - p) + p/s). For example, if 90% of the work is parallelizable and you spread it over 10 workers, the best you can hope for is 1 / (0.1 + 0.9/10) ≈ 5.3x, not 10x.
Intuitively, the speed-up can only come from the part of the task that is processed in parallel; all the sequential parts are unaffected and take the same amount of time whether the rest is processed in parallel or not.
That said, in our example, obviously the speed-up can only come from step 2 (marking pages in parallel by multiple people), as step 1 (tearing up the book) is clearly sequential.
develop an application which should be able to process huge CSV files
Here's how to approach this:
determine what part of the processing can be done in parallel, i.e. process each chunk separately and out of sequence
read the file sequentially, splitting it up into chunks as you go
use multiprocessing to run multiple processing steps in parallel
Something like this:
import multiprocessing as mp

def process(rows):
    # do all the processing
    ...
    return result

if __name__ == '__main__':
    pool = mp.Pool(N)  # N > 1
    chunks = get_chunks(...)
    async_results = [pool.apply_async(process, (rows,)) for rows in chunks]
    pool.close()
    pool.join()
    results = [r.get() for r in async_results]
I'm not defining get_chunks here because there are several documented approaches to doing this e.g. here or here.
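As a rough illustration only (this is an assumption, not the author's code), a simple line-count based get_chunks generator could look like this:

def get_chunks(filepath, rows_per_chunk=100_000):
    """Yield lists of raw lines, rows_per_chunk at a time."""
    chunk = []
    with open(filepath) as f:
        for line in f:
            chunk.append(line)
            if len(chunk) == rows_per_chunk:
                yield chunk
                chunk = []
    if chunk:
        yield chunk

This keeps the sequential part (reading and splitting) in the parent while the pool does the actual processing out of sequence.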
Conclusion
Depending on the kind of processing required for each file, it may well be that the sequential approach to processing any one file is the fastest possible approach, simply because the processing parts don't gain much from being done in parallel. You may still end up processing it chunk by chunk due to e.g. memory constraints. If that is the case, you probably don't need multiprocessing.
If you have multiple files that can be processed in parallel,
multiprocessing is a very good approach. It works the same way as shown above, where the chunks are not rows but filenames.
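For the multiple-files case, a minimal sketch might look like the following; process_one_file and the glob pattern are assumptions standing in for your own per-file processing and file locations:

import glob
import multiprocessing as mp

def process_one_file(fname):
    # hypothetical: parse and process a single CSV sequentially
    ...
    return fname

if __name__ == '__main__':
    files = glob.glob('data/*.csv')        # assumed location of the inputs
    with mp.Pool() as pool:
        for done in pool.imap_unordered(process_one_file, files):
            print('finished', done)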
I have this code which reads a field from the top row of a csv and writes it into a new column; it then saves out the csv, ignoring the first and third rows, which are no longer needed.
The problem is I have 50,000+ csvs to process.
Is it possible to parallel this so that it runs faster?
I need to do this a few times and it's a bit too slow.
import glob
import csv
import os
path = '/in/'
out = '/out/'
for fname in glob.glob(path):
    with open(fname) as csv_open:
        print j
        raw_name = os.path.basename(fname)
        outname = os.path.join(out, raw_name)
        reader = csv.reader(csv_open)
        all_t = []
        row0 = reader.next()
        train = row0[0]
        row1 = reader.next()
        row1.append('Loco')
        all_t.append(row1)
        reader.next()
        for i, row in enumerate(reader):
            row.append(train)
            all_t.append(row)
        with open(outname, 'w') as csv_out:
            write_func = csv.writer(csv_out, lineterminator='\n')
            write_func.writerows(all_t)
You can definitely spawn several threads/processes to run that task in parallel. Have a look at (note I am assuming you are using Python 2 for your print j statement):
multiprocessing
threading
Note however that Python's threading might not behave as you would expect. So perhaps multiprocessing is what you are looking for: create a handful of processes and pass them the file names using a multiprocessing.Queue (keep the processes alive and pass them multiple file names instead of creating a new process for each new file).
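A rough, untested sketch of that pattern follows; the convert_file name and the /in/*.csv glob are assumptions standing in for the per-file rewrite from the question:

import glob
import multiprocessing

def convert_file(fname):
    ...   # the per-file CSV rewrite from the question goes here

def worker(q):
    # pull file names off the queue until a None sentinel arrives
    while True:
        fname = q.get()
        if fname is None:
            break
        convert_file(fname)

if __name__ == '__main__':
    q = multiprocessing.Queue()
    procs = [multiprocessing.Process(target=worker, args=(q,)) for _ in range(4)]
    for p in procs:
        p.start()
    for fname in glob.glob('/in/*.csv'):   # assumed input location
        q.put(fname)
    for _ in procs:
        q.put(None)                        # one sentinel per worker
    for p in procs:
        p.join()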
Also, your bottleneck might be in the system's I/O throughput (meaning that the slowest part might be reading and writing those CSVs). In that case, parallelization won't really help much.
You could also try to optimize your code (or perhaps use another module or even programming language that might accelerate the processing). But I do not think this is the most worthy step to spend your time on.
I would go first for the multiprocessing until adding more processes does not improve performance (something that may happen sooner than later if the bottleneck is you system's I/O throughput). Then I would perhaps try to optimize the processing code a bit. Then if it is still very slow, I would just be patient and wait for the script to finish execution while I keep doing my work.
So I have about 400 files ranging from 10kb to 56mb in size, file type being .txt/.doc(x)/.pdf/.xml and I have to read them all. My read in files are basically:
#for txt files
with open("TXT\\" + path, 'r') as content_file:
content = content_file.read().split(' ')
#for doc files using pydoc
contents = '\n'.join([para.text for para in doc.paragraphs]).encode("ascii","ignore").decode("utf-8").split(' ')
#for pdf files using pypdf2
for i in range(0, pdf.getNumPages()):
    content += pdf.getPage(i).extractText() + "\n"
content = " ".join(content.replace(u"\xa0", " ").strip().split())
contents = content.encode("ascii","ignore").decode("utf-8").split(' ')
#for xml files using lxml
tree = etree.parse(path)
contents = etree.tostring(tree, encoding='utf8', method='text')
contents = contents.decode("utf-8").split(' ')
But I notice that even reading 30 text files of under 50 KB each and doing operations on them takes 41 seconds, while reading a single 56 MB text file takes me 9 seconds. So I'm guessing that it's the file I/O that's slowing me down rather than my program.
Any idea on how to speed up this process? Maybe break down each file type into 4 different threads? But how would you go about doing that since they are sharing the same list and that single list will be written to a file when they are done.
If you're blocked on file I/O, as you suspect, there's probably not much you can do.
But parallelizing to different threads might help if you have great bandwidth but terrible latency. Especially if you're dealing with, say, a networked filesystem or a multi-platter logical drive. So, it can't hurt to try.
But there's no reason to do it per file type; just use a single pool to handle all your files. For example, using the futures module:*
import concurrent.futures
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = executor.map(process_file, list_of_filenames)
A ThreadPoolExecutor is slightly smarter than a basic thread pool, because it lets you build composable futures, but here you don't need any of that, so I'm just using it as a basic thread pool because Python doesn't have one of those.**
The constructor creates 4 threads, and all the queues and anything else needed to manage putting tasks on those threads and getting results back.
Then, the map method just goes through each filename in list_of_filenames, creates a task out of calling process_file on that filename, submits it to the pool, and then waits for all of the tasks to finish.
In other words, this is the same as writing:
results = [process_file(filename) for filename in list_of_filenames]
… except that it uses four threads to process the files in parallel.
There are some nice examples in the docs if this isn't clear enough.
* If you're using Python 2.x, you'll need to install a backport before you can use this. Or you can use multiprocessing.dummy.Pool instead, as noted below.
** Actually, it does, in multiprocessing.dummy.Pool, but that's not very clearly documented.
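For reference, the multiprocessing.dummy.Pool equivalent mentioned in the footnote would look roughly like this, reusing the same assumed process_file and list_of_filenames names as the snippet above:

from multiprocessing.dummy import Pool   # thread-backed Pool with the Pool API

pool = Pool(4)                            # four worker threads
results = pool.map(process_file, list_of_filenames)
pool.close()
pool.join()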
I was reading a similar thread where the OP wanted to process each line in a function using multiprocessing (found here). The answer to this question that was intriguing was the following:
from multiprocessing import Pool

def process_line(line):
    return "FOO: %s" % line

if __name__ == "__main__":
    pool = Pool(4)
    with open('file.txt') as source_file:
        # chunk the work into batches of 4 lines at a time
        results = pool.map(process_line, source_file, 4)
I'm wondering if you can do the same, but instead of returning each line processed, write it into another file.
Basically I want to see if there is a way to MP reading and writing a file in order to split it up by lines. Say I want 100,000 lines per file.
from multiprocessing import Pool

def write_lines(line):
    # need method to write lines to multiple files, perhaps a Queue?
    pass

if __name__ == "__main__":
    # all my procs
    pool = Pool()
    with open('file.txt') as source_file:
        # chunk the work into batches of 100,000 lines at a time
        results = pool.map(process_line, source_file, 100000)
I could use a MP Queue to split up the file into separate Queue objects, then fill each processor with a job of writing out all the lines, but I still have to read through the file first. So will it always be completely IO bound and never be able to be MP in an efficient way?
As you suspected, this workload really won't benefit much (if at all) from multiprocessing. All you're doing here is reading one file and then writing its contents out to other files. That is completely I/O bound; the bottleneck is going to be the speed of reading from and writing to disk. Using multiprocessing to try to write multiple files to the same disk concurrently isn't going to make the writes any faster, because the disk can only write one thing at a time.
Where multiprocessing can help is if you've got some CPU-bound work that can be parallelized, but that really isn't the case with what you're trying to do. If you wanted to read lines from a file, do some fairly heavy processing of each line, and then write them to some other file, multiprocessing would help, but it doesn't sound like you need to do any processing prior to writing each line.
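If you did have heavy per-line processing, the pattern would look roughly like this sketch (untested; expensive_transform and the file names are assumptions):

from multiprocessing import Pool

def expensive_transform(line):
    # placeholder for genuinely CPU-heavy work on one line
    return line.upper()

if __name__ == '__main__':
    with Pool() as pool, \
         open('input.txt') as src, \
         open('output.txt', 'w') as dst:
        # imap keeps input order and streams results back as they finish
        for result in pool.imap(expensive_transform, src, chunksize=1000):
            dst.write(result)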