I want to do a textual analysis of multiple text files (>50,000 files), some of which contain HTML markup. My program (below) iterates over these files, opening each one in turn, analyzing the content with the NLTK module and writing the output to a CSV file, before moving on to the next file.
The program runs fine for single files, but the loop almost stalls after the 8th run, even though the 9th file to analyse is no larger than the 8th. For example, the first eight iterations took 10 minutes in total, whereas the 9th took 45 minutes. The 10th took even longer than 45 minutes (the file is much smaller than the first ones).
I am sure the program could be optimized further, as I am still relatively new to Python, but I don't understand why it's becoming so slow after the 8th run. Any help would be appreciated. Thanks!
#import necessary modules
import urllib, csv, re, nltk
from string import punctuation
from bs4 import BeautifulSoup
import glob

#Define bags of words (there are more variables, i.e. word counts, that are calculated)
adaptability = ['adaptability', 'flexibility']

csvfile = open("test.csv", "w", newline='', encoding='cp850', errors='replace')
writer = csv.writer(csvfile)

for filename in glob.glob('*.txt'):

    ###Open files and arrange them so that they are ready for pre-processing
    review = open(filename, encoding='utf-8', errors='ignore').read()
    soup = BeautifulSoup(review)
    text = soup.get_text()

    from nltk.stem import WordNetLemmatizer
    wnl = WordNetLemmatizer()

    adaptability_counts = []
    adaptability_counter = 0

    review_processed = text.lower().replace('\r',' ').replace('\t',' ').replace('\n',' ').replace('. ', ' ').replace(';',' ').replace(', ',' ')
    words = review_processed.split(' ')
    word_l1 = [word for word in words if word not in stopset]
    word_l = [x for x in word_l1 if x != ""]
    word_count = len(word_l)

    for word in words:
        wnl.lemmatize(word)
        if word in adaptability:
            adaptability_counter = adaptability_counter + 1
    adaptability_counts.append(adaptability_counter)

    #I then repeat the analysis with 2 subsections of the text files
    #(e.g. calculate adaptability_counts for Part I only)

    output = zip(adaptability_counts)
    writer = csv.writer(open('test_10.csv', 'a', newline='', encoding='cp850', errors='replace'))
    writer.writerows(output)
    csvfile.flush()
You're never closing the files once you open them. My guess is you are running out of memory and it's taking so long because your machine has to swap data from the page file (on disk). Instead of just calling open(), you either have to close() the file when you are finished with it or use the with open construct, which will close the file automatically when you are done. See this page for more information: http://effbot.org/zone/python-with-statement.htm
If it were me, I would change this line:
review=open(filename, encoding='utf-8', errors='ignore').read()
to this:
with open(filename, encoding='utf-8', errors='ignore') as f:
    review = f.read()
    ...
and make sure you indent appropriately. The code you execute with the file open will need to be indented within the with block.
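A rough sketch of how the top of your loop might then look, with the rest of your per-file analysis (represented by the comment) staying indented inside the with block:

for filename in glob.glob('*.txt'):
    with open(filename, encoding='utf-8', errors='ignore') as f:
        review = f.read()
        soup = BeautifulSoup(review)
        text = soup.get_text()
        # ... the rest of the per-file analysis, still indented under the with ...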
Since the accepted answer hasn't quite solved your problem, here's a follow-up:
You have a list adaptability in which you look up each word in your inputs. Never look up words in a list! Replace the list with a set and you should see a huge improvement. (If you are using the list to count individual words, replace it with collections.Counter, or NLTK's FreqDist.) If your adaptability list grows with each file you read (does it? was it supposed to?), this is definitely enough to cause your problem.
But there might be more than one culprit. You leave out a lot of your code, so there's no telling which of your other data structures grow with each file you see, or whether that's intended. It's pretty clear that your code is "quadratic" and gets slower as your data gets bigger, not because of memory size but because you need more steps.
Don't bother switching to arrays and CountVectorizer, you'll just postpone the problem a little. Figure out how to process each file in constant time. If your algorithm doesn't require collecting words from more than one file, the quickest solution is to run it on each file separately (it's not hard to automate that).
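To make the set suggestion concrete, here is a minimal, self-contained sketch; the count_bag helper is just for illustration, not part of your program:

from collections import Counter

adaptability = {'adaptability', 'flexibility'}      # a set: O(1) membership tests

def count_bag(words, bag):
    """Count how many tokens in `words` belong to the bag-of-words `bag`."""
    freqs = Counter(words)                          # tally each distinct token once
    return sum(freqs[term] for term in bag)

print(count_bag(['flexibility', 'is', 'key'], adaptability))   # prints 1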
Related
I want to iterate over each line of an entire file. One way to do this is by reading the entire file, saving it to a list, then going over the line of interest. This method uses a lot of memory, so I am looking for an alternative.
My code so far:
for each_line in fileinput.input(input_file):
    do_something(each_line)

for each_line_again in fileinput.input(input_file):
    do_something(each_line_again)
Executing this code gives an error message: device active.
Any suggestions?
The purpose is to calculate pair-wise string similarity, meaning that for each line in the file, I want to calculate the Levenshtein distance with every other line.
Nov. 2022 Edit: A related question that was asked 8 months after this question has many useful answers and comments. To get a deeper understanding of python logic, do also read this related question How should I read a file line-by-line in Python?
The correct, fully Pythonic way to read a file is the following:
with open(...) as f:
    for line in f:
        # Do something with 'line'
The with statement handles opening and closing the file, including if an exception is raised in the inner block. The for line in f treats the file object f as an iterable, which automatically uses buffered I/O and memory management so you don't have to worry about large files.
There should be one -- and preferably only one -- obvious way to do it.
Two memory-efficient ways, in ranked order (first is best):
use of with - supported from Python 2.5 and above
use of yield if you really want to have control over how much to read
1. use of with
with is the nice and efficient Pythonic way to read large files. Advantages: 1) the file object is automatically closed after exiting the with block; 2) exceptions are handled inside the with block; 3) the for loop iterates through the file object f line by line; internally it does buffered I/O (to optimize costly I/O operations) and memory management.
with open("x.txt") as f:
for line in f:
do something with data
2. use of yield
Sometimes one might want more fine-grained control over how much to read in each iteration. In that case use iter and yield. Note that with this method one explicitly needs to close the file at the end.
def readInChunks(fileObj, chunkSize=2048):
    """
    Lazy function to read a file piece by piece.
    Default chunk size: 2kB.
    """
    while True:
        data = fileObj.read(chunkSize)
        if not data:
            break
        yield data

f = open('bigFile')
for chunk in readInChunks(f):
    do_something(chunk)
f.close()
Pitfalls and for the sake of completeness - the methods below are not as good or as elegant for reading large files, but please read them to get a rounded understanding.
In Python, the most common way to read lines from a file is to do the following:
for line in open('myfile','r').readlines():
    do_something(line)
When this is done, however, the readlines() function (the same applies to the read() function) loads the entire file into memory, then iterates over it. A slightly better approach for large files (the two methods mentioned first above are the best) is to use the fileinput module, as follows:
import fileinput
for line in fileinput.input(['myfile']):
do_something(line)
The fileinput.input() call reads lines sequentially but doesn't keep them in memory after they've been read; a file in Python is itself iterable, so the whole file never has to be held at once.
References
Python with statement
To strip newlines:
with open(file_path, 'rU') as f:
    for line_terminated in f:
        line = line_terminated.rstrip('\n')
        ...
With universal newline support all text file lines will seem to be terminated with '\n', whatever the terminators in the file, '\r', '\n', or '\r\n'.
EDIT - To specify universal newline support:
Python 2 on Unix - open(file_path, mode='rU') - required [thanks #Dave]
Python 2 on Windows - open(file_path, mode='rU') - optional
Python 3 - open(file_path, newline=None) - optional
The newline parameter is only supported in Python 3 and defaults to None. The mode parameter defaults to 'r' in all cases. The U is deprecated in Python 3. In Python 2 on Windows some other mechanism appears to translate \r\n to \n.
Docs:
open() for Python 2
open() for Python 3
To preserve native line terminators:
with open(file_path, 'rb') as f:
    for line_native_terminated in f:
        ...
Binary mode can still parse the file into lines when you iterate over it; each line will keep whatever terminators it has in the file.
Thanks to #katrielalex's answer, Python's open() doc, and iPython experiments.
This is a possible way of reading a file in Python:
f = open(input_file)
for line in f:
    do_stuff(line)
f.close()
It does not allocate a full list; it iterates over the lines.
Some context up front as to where I am coming from. Code snippets are at the end.
When I can, I prefer to use an open source tool like H2O to do super high performance parallel CSV file reads, but this tool is limited in feature set. I end up writing a lot of code to create data science pipelines before feeding to H2O cluster for the supervised learning proper.
I have been reading files like the 8 GB HIGGS dataset from the UCI repo, and even 40 GB CSV files, for data science purposes significantly faster by adding lots of parallelism with the multiprocessing library's pool object and map function. For example, clustering with nearest-neighbor searches, and also the DBSCAN and Markov clustering algorithms, require some parallel programming finesse to bypass some seriously challenging memory and wall-clock-time problems.
I usually like to break the file row-wise into parts using gnu tools first and then glob-filemask them all to find and read them in parallel in the python program. I use something like 1000+ partial files commonly. Doing these tricks helps immensely with processing speed and memory limits.
The pandas dataframe.read_csv is single-threaded, so you can do these tricks to make pandas considerably faster by running a map() for parallel execution. You can use htop to see that with plain old sequential pandas dataframe.read_csv, 100% CPU on just one core is the actual bottleneck in pd.read_csv, not the disk at all.
I should add I'm using an SSD on fast video card bus, not a spinning HD on SATA6 bus, plus 16 CPU cores.
Also, another technique that I discovered works great in some applications is parallel CSV file reads all within one giant file, starting each worker at different offset into the file, rather than pre-splitting one big file into many part files. Use python's file seek() and tell() in each parallel worker to read the big text file in strips, at different byte offset start-byte and end-byte locations in the big file, all at the same time concurrently. You can do a regex findall on the bytes, and return the count of linefeeds. This is a partial sum. Finally sum up the partial sums to get the global sum when the map function returns after the workers finished.
Following is some example benchmarks using the parallel byte offset trick:
I use 2 files: HIGGS.csv is 8 GB. It is from the UCI machine learning repository. all_bin.csv is 40.4 GB and is from my current project.
I use 2 programs: GNU wc program which comes with Linux, and the pure python fastread.py program which I developed.
HP-Z820:/mnt/fastssd/fast_file_reader$ ls -l /mnt/fastssd/nzv/HIGGS.csv
-rw-rw-r-- 1 8035497980 Jan 24 16:00 /mnt/fastssd/nzv/HIGGS.csv
HP-Z820:/mnt/fastssd$ ls -l all_bin.csv
-rw-rw-r-- 1 40412077758 Feb 2 09:00 all_bin.csv
ga#ga-HP-Z820:/mnt/fastssd$ time python fastread.py --fileName="all_bin.csv" --numProcesses=32 --balanceFactor=2
2367496
real 0m8.920s
user 1m30.056s
sys 2m38.744s
In [1]: 40412077758. / 8.92
Out[1]: 4530501990.807175
That’s some 4.5 GB/s, or roughly 36 Gb/s, file-slurping speed. That ain’t no spinning hard disk, my friend. That’s actually a Samsung Pro 950 SSD.
Below is the speed benchmark for the same file being line-counted by gnu wc, a pure C compiled program.
What is cool is you can see my pure python program essentially matched the speed of the gnu wc compiled C program in this case. Python is interpreted but C is compiled, so this is a pretty interesting feat of speed, I think you would agree. Of course, wc really needs to be changed to a parallel program, and then it would really beat the socks off my python program. But as it stands today, gnu wc is just a sequential program. You do what you can, and python can do parallel today. Cython compiling might be able to help me (for some other time). Also, memory-mapped files were not explored yet.
HP-Z820:/mnt/fastssd$ time wc -l all_bin.csv
2367496 all_bin.csv
real 0m8.807s
user 0m1.168s
sys 0m7.636s
HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=16 --balanceFactor=2
11000000
real 0m2.257s
user 0m12.088s
sys 0m20.512s
HP-Z820:/mnt/fastssd/fast_file_reader$ time wc -l HIGGS.csv
11000000 HIGGS.csv
real 0m1.820s
user 0m0.364s
sys 0m1.456s
Conclusion: The speed is good for a pure python program compared to a C program. However, it’s not good enough to use the pure python program over the C program, at least for the line-counting purpose. Generally the technique can be used for other file processing, so this python code is still good.
Question: Does compiling the regex just one time and passing it to all workers improve speed? Answer: Regex pre-compiling does NOT help in this application. I suppose the reason is that the overhead of process serialization and creation for all the workers is dominating.
One more thing.
Does parallel CSV file reading even help? Is the disk the bottleneck, or is it the CPU? Many so-called top-rated answers on stackoverflow contain the common dev wisdom that you only need one thread to read a file, best you can do, they say. Are they sure, though?
Let’s find out:
HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=16 --balanceFactor=2
11000000
real 0m2.256s
user 0m10.696s
sys 0m19.952s
HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=1 --balanceFactor=1
11000000
real 0m17.380s
user 0m11.124s
sys 0m6.272s
Oh yes, yes it does. Parallel file reading works quite well. Well there you go!
Ps. In case some of you wanted to know, what if the balanceFactor was 2 when using a single worker process? Well, it’s horrible:
HP-Z820:/mnt/fastssd/fast_file_reader$ time python fastread.py --fileName="HIGGS.csv" --numProcesses=1 --balanceFactor=2
11000000
real 1m37.077s
user 0m12.432s
sys 1m24.700s
Key parts of the fastread.py python program:
fileBytes = stat(fileName).st_size  # Read quickly from OS how many bytes are in a text file
startByte, endByte = PartitionDataToWorkers(workers=numProcesses, items=fileBytes, balanceFactor=balanceFactor)
p = Pool(numProcesses)
partialSum = p.starmap(ReadFileSegment, zip(startByte, endByte, repeat(fileName)))  # startByte is already a list. fileName is made into a same-length list of duplicate values.
globalSum = sum(partialSum)
print(globalSum)


def ReadFileSegment(startByte, endByte, fileName, searchChar='\n'):  # counts number of searchChar appearing in the byte range
    with open(fileName, 'r') as f:
        f.seek(startByte-1)  # seek is initially at byte 0 and then moves forward the specified amount, so seek(5) points at the 6th byte.
        bytes = f.read(endByte - startByte + 1)
        cnt = len(re.findall(searchChar, bytes))  # findall with implicit compiling runs just as fast here as re.compile once + re.finditer many times.
    return cnt
The def for PartitionDataToWorkers is just ordinary sequential code. I left it out in case someone else wants to get some practice on what parallel programming is like. I gave away for free the harder parts: the tested and working parallel code, for your learning benefit.
Thanks to: The open-source H2O project, by Arno and Cliff and the H2O staff for their great software and instructional videos, which have provided me the inspiration for this pure python high performance parallel byte offset reader as shown above. H2O does parallel file reading using java, is callable by python and R programs, and is crazy fast, faster than anything on the planet at reading big CSV files.
Katrielalex provided the way to open & read one file.
However, the way your algorithm goes, it reads the whole file for each line of the file. That means the overall amount of reading the file - and computing the Levenshtein distance - will be N*N if N is the number of lines in the file. Since you're concerned about file size and don't want to keep it in memory, I am concerned about the resulting quadratic runtime. Your algorithm is in the O(n^2) class of algorithms, which often can be improved with specialization.
I suspect that you already know the tradeoff of memory versus runtime here, but maybe you would want to investigate if there's an efficient way to compute multiple Levenshtein distances in parallel. If so it would be interesting to share your solution here.
How many lines do your files have, and on what kind of machine (mem & cpu power) does your algorithm have to run, and what's the tolerated runtime?
Code would look like:
with open(input_file, 'r') as f_outer:
    for line_outer in f_outer:
        with open(input_file, 'r') as f_inner:
            for line_inner in f_inner:
                compute_distance(line_outer, line_inner)
But the questions are: how do you store the distances (a matrix?), and can you gain an advantage by, e.g., preparing the outer line for processing or caching some intermediate results for reuse?
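If you do want to experiment with computing the distances in parallel, here is a minimal sketch using multiprocessing; the plain-Python levenshtein helper and the one-task-per-outer-line layout are illustrative assumptions, and note that it keeps all lines in memory, trading memory for speed:

from multiprocessing import Pool

def levenshtein(a, b):
    # Plain dynamic-programming edit distance; fine for short lines.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = curr
    return prev[-1]

def distances_for_line(args):
    line, all_lines = args
    return [levenshtein(line, other) for other in all_lines]

if __name__ == '__main__':
    with open('input_file') as f:                  # hypothetical file name
        lines = [line.rstrip('\n') for line in f]
    with Pool() as pool:
        # One worker task per outer line; each task returns a full row of distances.
        matrix = pool.map(distances_for_line, [(line, lines) for line in lines])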
Need to frequently read a large file from the last position read?
I have created a script used to cut an Apache access.log file several times a day.
So I needed to set a position cursor on the last line parsed during the last execution.
To this end, I used the file.seek() and file.tell() methods, which allow storing the cursor position in a file.
My code:
ENCODING = "utf8"
CURRENT_FILE_DIR = os.path.dirname(os.path.abspath(__file__))
# This file is used to store the last cursor position
cursor_position = os.path.join(CURRENT_FILE_DIR, "access_cursor_position.log")
# Log file with new lines
log_file_to_cut = os.path.join(CURRENT_FILE_DIR, "access.log")
cut_file = os.path.join(CURRENT_FILE_DIR, "cut_access", "cut.log")
# Set in from_line
from_position = 0
try:
with open(cursor_position, "r", encoding=ENCODING) as f:
from_position = int(f.read())
except Exception as e:
pass
# We read log_file_to_cut to put new lines in cut_file
with open(log_file_to_cut, "r", encoding=ENCODING) as f:
with open(cut_file, "w", encoding=ENCODING) as fw:
# We set cursor to the last position used (during last run of script)
f.seek(from_position)
for line in f:
fw.write("%s" % (line))
# We save the last position of cursor for next usage
with open(cursor_position, "w", encoding=ENCODING) as fw:
fw.write(str(f.tell()))
From the python documentation for fileinput.input():
This iterates over the lines of all files listed in sys.argv[1:], defaulting to sys.stdin if the list is empty
Further, the definition of the function is:
fileinput.FileInput([files[, inplace[, backup[, mode[, openhook]]]]])
Reading between the lines, this tells me that files can be a list, so you could have something like:
for each_line in fileinput.input([input_file, input_file]):
    do_something(each_line)
See here for more information
#Using a text file for the example
with open("yourFile.txt","r") as f:
    text = f.readlines()

for line in text:
    print(line)
Open your file for reading (r)
Read the whole file and save each line into a list (text)
Loop through the list printing each line.
If you want, for example, to check a specific line for a length greater than 10, work with what you already have available.
for line in text:
    if len(line) > 10:
        print(line)
I would strongly recommend not using the default file loading as it is horrendously slow. You should look into the numpy functions and the IOpro functions (e.g. numpy.loadtxt()).
http://docs.scipy.org/doc/numpy/user/basics.io.genfromtxt.html
https://store.continuum.io/cshop/iopro/
Then you can break your pairwise operation into chunks:
import numpy as np
import math

lines_total = n
similarity = np.zeros((n, n))
lines_per_chunk = m
n_chunks = math.ceil(float(n)/m)

for i in xrange(n_chunks):
    for j in xrange(n_chunks):
        chunk_i = (function of your choice to read lines i*lines_per_chunk to (i+1)*lines_per_chunk)
        chunk_j = (function of your choice to read lines j*lines_per_chunk to (j+1)*lines_per_chunk)
        similarity[i*lines_per_chunk:(i+1)*lines_per_chunk,
                   j*lines_per_chunk:(j+1)*lines_per_chunk] = fast_operation(chunk_i, chunk_j)
It's almost always much faster to load data in chunks and then do matrix operations on it than to do it element by element!!
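As one possible way to fill in the "(function of your choice ...)" placeholder above, here is a minimal sketch using itertools.islice; the file name and 0-based chunk layout are assumptions:

from itertools import islice

def read_line_chunk(path, start, stop):
    # Return lines start..stop-1 of the file as a list (0-based, stop excluded).
    with open(path) as f:
        return list(islice(f, start, stop))

# e.g. the i-th block of rows for the similarity matrix:
# chunk_i = read_line_chunk('myfile', i * lines_per_chunk, (i + 1) * lines_per_chunk)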
The best way to read a large file line by line is to use Python's enumerate function:
with open(file_name, "rU") as read_file:
    for i, row in enumerate(read_file, 1):
        # do something
        # i is the number of that line
        # row contains all the data of that line
I have a pretty large file (about 8 GB). Now I have read this post: How to read a large file line by line and this one: Tokenizing large (>70MB) TXT file using Python NLTK. Concatenation & write data to stream errors.
But this still doesn't do the job; when I run my code, my PC gets stuck.
Am I doing something wrong?
I want to get all words into a list (tokenize them). Further, doesn't the code read each line and tokenize the line? Might this prevent the tokenizer from tokenizing words properly, since some words (and sentences) do not end after just one line?
I considered splitting it up into smaller files, but doesn't this still consume my RAM, given that I only have 8 GB of RAM, since the list of words will probably be about as big (8 GB) as the initial txt file?
word_list = []
number = 0
with open(os.path.join(save_path, 'alldata.txt'), 'rb', encoding="utf-8") as t:
    for line in t.readlines():
        word_list += nltk.word_tokenize(line)
        number = number + 1
        print(number)
By using the following line:
for line in t.readlines():
    # do the things
You are forcing Python to read the whole file with t.readlines(), which returns a list of strings representing the whole file, thus bringing the whole file into memory.
Instead, if you do as the example you linked states:
for line in t:
    # do the things
The Python VM will natively process the file line by line, like you want.
The file will act like a generator, yielding each line one at a time.
After looking at your code again, I see that you are constantly appending to the word list, with word_list += nltk.word_tokenize(line). This means that even if you do import the file one line at a time, you are still retaining that data in your memory, even after the file has moved on. You will likely need to find a better way of doing whatever this is, as you will still be consuming massive amounts of memory, because the data has not been dropped from memory.
For data this large, you will have to either
find a way to store an intermediate version of your tokenized data, or
design your code in a way that you can handle one, or just a few tokenized words at a time.
Something like this might do the trick:
def enumerated_tokens(filepath):
    index = 0
    with open(filepath, 'r', encoding="utf-8") as t:
        for line in t:
            for word in nltk.word_tokenize(line):
                yield (index, word)
                index += 1

for index, word in enumerated_tokens(os.path.join(save_path, 'alldata.txt')):
    print(index, word)
    # Do the thing with your word.
Notice how this never actually stores the word anywhere. This doesn't mean that you can't temporarily store anything, but if you're memory constrained, generators are the way to go. This approach will likely be faster, more stable, and use less memory overall.
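If you do decide to store an intermediate, tokenized version on disk instead (the first option above), a minimal sketch might look like this; the file names are assumptions:

import nltk

# Stream tokens to an intermediate file, one per line, never holding them all in memory.
with open('alldata.txt', encoding='utf-8') as src, \
     open('alldata.tokens.txt', 'w', encoding='utf-8') as dst:
    for line in src:
        for token in nltk.word_tokenize(line):
            dst.write(token + '\n')

# Later passes can stream the tokens back the same way, one per line.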
I'm writing a program that takes a string and computes all possible repeated permutations from this string. I'll show some fragments of my code; I would be grateful if someone could point out how to improve the speed when sending the data to a file.
Scenario 1
Sending the output to stdout took about 12 seconds to write 531,441 lines (3 MB):
import itertools

for word in itertools.product(abcdefghi, repeat=6):
    print(word)
Scenario 2
Then I tried sending the output to a file instead of stdout, and this took roughly 5 minutes.
import itertools

word_counter = 0
for word in itertools.product(abcdefghi, repeat=6):
    word_counter = word_counter + 1
    if word_counter == 1:
        open('myfile', 'w').write(word)
    else:
        open('myfile', 'a').write(word)
word_counter keeps track of the number of repeated permutations as the function loops. When word_counter is 1 the program creates the file, and afterwards it appends the data to the file when word_counter is greater than 1.
I used a program on the web to do this, and I found that it took the same time when printing the data to a terminal, yet the same web program took about 3 seconds to output these combinations to a file, while my program took 5 minutes!
I also tried running my program and redirecting output to a file in a bash terminal, and this took the same time (3 sec)!
'myprog' > 'output file'
You are reopening the file for every write, try not doing that:
import itertools

output = open('myfile', 'w')
for word in itertools.product(abcdefghi, repeat=6):
    output.write(''.join(word) + '\n')  # word is a tuple of characters, so join it first
[Edit with explanation]
When you're working with 530,000 words, even making something a tiny bit slower for each word adds up to a LOT slower for the whole program.
My way, you do one piece of setup work (open the file) and put it in memory, then go through 500,000 words and save them, then do one piece of tidy up work (close the file). That's why the file is saved in a variable - so you can set it up once, and use it again and again.
Your way, you do almost no setup work first, then you add one to the counter 500,000 times, check the value of the counter 500,000 times, branch this way or that 500,000 times, open the file and force Windows (or Linux) to check your permissions every time, put it in memory 500,000 times, write to it 500,000 times, stop using the file you opened (because you didn't save it) so it falls into the 'garbage' and gets tidied up - 500,000 times, and then finish.
The amount of work is small each time, but when you do them all so many times, it adds up.
The same as the previous answers, but with a context manager!
import itertools

with open('myfile', 'w') as output:
    for word in itertools.product(abcdefghi, repeat=6):
        output.write(''.join(word) + '\n')
Context managers have the benefit of cleaning up after themselves and handling errors.
Say I have an absurdly large text file. I would not think my file would grow larger than ~500mb, but for the sake of scalability and my own curiosity, let's say it is on the order of a few gig.
My end goal is to map it to an array of sentences (separated by '?' '!' '.' and for all intents and purposes ';') and each sentence to an array of words. I was then going to use numpy for some statistical analysis.
What would be the most scalable way to go about doing this?
PS: I thought of rewriting the file to have one sentence per line, but I ran into problems trying to load the file into memory. I know of the solution where you read off chunks of data in one file, manipulate them, and write them to another, but that seems inefficient with disk memory. I know most people would not worry about using 10 GB of scratch space nowadays, but it does seem like there ought to be a way of directly editing chunks of the file.
My first thought would be to use a stream parser: basically you read in the file a piece at a time and do the statistical analysis as you go. This is typically done with markup languages like HTML and XML, so you'll find a lot of parsers for those languages out there, including in the Python standard library. A simple sentence parser is something you can write yourself, though; for example:
import re, collections

sentence_terminator = re.compile(r'(?<=[.!?;])\s*')

class SentenceParser(object):
    def __init__(self, filelike):
        self.f = filelike
        self.buffer = collections.deque([''])

    def next(self):
        while len(self.buffer) < 2:
            data = self.f.read(512)
            if not data:
                raise StopIteration()
            self.buffer += sentence_terminator.split(self.buffer.pop() + data)
        return self.buffer.popleft()

    def __iter__(self):
        return self
This will only read data from the file as needed to complete a sentence. It reads in 512-byte blocks so you'll be holding less than a kilobyte of file contents in memory at any one time, no matter how large the actual file is.
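A hedged usage sketch (the answer was written for Python 2; on Python 3 you would also add __next__ = next inside the class, and the file name here is an assumption):

# Assumes the SentenceParser class above, with `__next__ = next` added for Python 3.
with open('bigfile.txt') as f:
    for sentence in SentenceParser(f):
        words = sentence.split()        # or a smarter tokenizer
        # ... feed `words` into the numpy-based statistics ...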
After a stream parser, my second thought would be to memory map the file. That way you could go through and replace the space that (presumably) follows each sentence terminator by a newline; after that, each sentence would start on a new line, and you'd be able to open the file and use readline() or a for loop to go through it line by line. But you'd still have to worry about multi-line sentences; plus, if any sentence terminator is not followed by a whitespace character, you would have to insert a newline (instead of replacing something else with it) and that could be horribly inefficient for a large file.
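For what it's worth, here is a minimal sketch of that memory-mapping idea using the standard mmap module; it only handles the simple case the answer describes, where each sentence terminator is followed by a single space, and the file name is an assumption:

import mmap, re

with open('bigfile.txt', 'r+b') as f:
    mm = mmap.mmap(f.fileno(), 0)                       # map the whole file, writable
    # Collect the positions of every space that follows a terminator, then patch in place.
    positions = [m.end() - 1 for m in re.finditer(rb'[.!?;] ', mm)]
    for pos in positions:
        mm[pos] = ord('\n')                             # overwrite the space with a newline
    mm.flush()
    mm.close()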
I am a complete beginner to Python or any serious programming language for that matter. I finally got a prototype code to work but I think it will be too slow.
My goal is to find some Chinese characters across all files (they are CSV) in a directory and replace them with integers, as per a CSV file I have. The files are nicely numbered by year-month, for example 2000-01.csv, and will be the only files in that directory.
I will be looping across about 25 files that are in the neighborhood of 500mb each (and about a million lines). The dictionary I will be using will have about 300 elements and I will be changing unicode (Chinese character) to integers. I tried with a test run and, assuming everything scales up linearly (?), it looks like it would take about a week for this to run.
Thanks in advance. Here is my code (don't laugh!):
# -*- coding: utf-8 -*-
import os, codecs

dir = "C:/Users/Roy/Desktop/test/"
Dict = {'hello' : 'good', 'world' : 'bad'}

for dirs, subdirs, files in os.walk(dir):
    for file in files:
        inFile = codecs.open(dir + file, "r", "utf-8")
        inFileStr = inFile.read()
        inFile.close()
        inFile = codecs.open(dir + file, "w", "utf-8")
        for key in Dict:
            inFileStr = inFileStr.replace(key, Dict[key])
        inFile.write(inFileStr)
        inFile.close()
In your current code, you're reading the whole file into memory at once. Since they're 500Mb files, that means 500Mb strings. And then you do repeated replacements of them, which means Python has to create a new 500Mb string with the first replacement, then destroy the first string, then create a second 500Mb string for the second replacement, then destroy the second string, et cetera, for each replacement. That turns out to be quite a lot of copying of data back and forth, not to mention using a lot of memory.
If you know the replacements will always be contained in a line, you can read the file line by line by iterating over it. Python will buffer the read, which means it will be fairly optimized. You should open a new file, under a new name, for writing the new file simultaneously. Perform the replacement on each line in turn, and write it out immediately. Doing this will greatly reduce the amount of memory used and the amount of memory copied back and forth as you do the replacements:
for file in files:
    fname = os.path.join(dir, file)
    inFile = codecs.open(fname, "r", "utf-8")
    outFile = codecs.open(fname + ".new", "w", "utf-8")
    for line in inFile:
        newline = do_replacements_on(line)
        outFile.write(newline)
    inFile.close()
    outFile.close()
    os.rename(fname + ".new", fname)
If you can't be certain if they'll always be on one line, things get a little harder; you'd have to read in blocks manually, using inFile.read(blocksize), and keep careful track of whether there might be a partial match at the end of the block. Not as easy to do, but usually still worth it to avoid the 500Mb strings.
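A minimal, hedged sketch of that manual block reading (the stream_replace helper and its carry-over scheme are assumptions for illustration; it holds back any block-final characters that could begin a key, and assumes keys do not overlap or nest one another, which is true for single Chinese characters):

import codecs

def stream_replace(in_path, out_path, table, blocksize=1 << 20):
    # Longest possible partial key match that could dangle off the end of a block.
    max_partial = max(len(k) for k in table) - 1
    carry = u""
    with codecs.open(in_path, "r", "utf-8") as src:
        with codecs.open(out_path, "w", "utf-8") as dst:
            while True:
                block = src.read(blocksize)
                text = carry + block
                carry = u""
                if block:
                    # Hold back the longest tail that is a proper prefix of some key.
                    for l in range(min(max_partial, len(text)), 0, -1):
                        if any(k.startswith(text[-l:]) for k in table):
                            carry, text = text[-l:], text[:-l]
                            break
                for key, value in table.items():
                    text = text.replace(key, value)
                dst.write(text)
                if not block:
                    break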
Another big improvement would be if you could do the replacements in one go, rather than trying a whole bunch of replacements in order. There are several ways of doing that, but which fits best depends entirely on what you're replacing and with what. For translating single characters into something else, the translate method of unicode objects may be convenient. You pass it a dict mapping unicode codepoints (as integers) to unicode strings:
>>> u"\xff and \ubd23".translate({0xff: u"255", 0xbd23: u"something else"})
u'255 and something else'
For replacing substrings (and not just single characters), you could use the re module. The re.sub function (and the sub method of compiled regexps) can take a callable (a function) as the replacement argument, which will then be called for each match:
>>> import re
>>> d = {u'spam': u'spam, ham, spam and eggs', u'eggs': u'saussages'}
>>> p = re.compile("|".join(re.escape(k) for k in d))
>>> def repl(m):
... return d[m.group(0)]
...
>>> p.sub(repl, u"spam, vikings, eggs and vikings")
u'spam, ham, spam and eggs, vikings, saussages and vikings'
I think you can lower memory use greatly (and thus limit swap use and make things faster) by reading a line at a time and writing it (after the regexp replacements already suggested) to a temporary file - then moving the file to replace the original.
A few things (unrelated to the optimization problem):
dir + file should be os.path.join(dir, file)
You might want to not reuse infile, but instead open (and write to) a separate outfile. This also won't increase performance, but is good practice.
I don't know if you're I/O bound or CPU bound, but if your CPU utilization is very high, you may want to use threading, with each thread operating on a different file (so with a quad-core processor, you'd be reading/writing 4 different files simultaneously); a sketch follows this list.
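A minimal sketch of that threading idea using concurrent.futures; the process_one helper is hypothetical and would wrap the read/replace/write logic from the other answers, and note that CPython's GIL limits the benefit for CPU-bound work, so swapping in ProcessPoolExecutor may serve better:

import os
from concurrent.futures import ThreadPoolExecutor

def process_one(path):
    # Hypothetical: read `path`, apply the replacements, write the result back.
    pass

dir = "C:/Users/Roy/Desktop/test/"
paths = [os.path.join(dir, name) for name in os.listdir(dir)]

with ThreadPoolExecutor(max_workers=4) as pool:    # roughly one worker per core
    list(pool.map(process_one, paths))             # consume the iterator so errors surface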
Open the files read/write ('r+') and avoid the double open/close (and likely associated buffer flush). Also, if possible, don't write back the entire file, seek and write back only the changed areas after doing the replace on the file's contents. Read, replace, write changed areas (if any).
That still won't help performance too much though: I'd profile and determine where the performance hit actually is, and then move on to optimising it. It could just be the reading of the data from disk that's very slow, and there's not much you can do about that in Python.
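If it helps, here is a minimal sketch of profiling the per-file work with the standard cProfile module; process_file is a hypothetical wrapper around the read/replace/write logic, and the file name is just the example from the question:

import cProfile
import pstats

def process_file(path):
    # Hypothetical: the read/replace/write logic for a single file.
    pass

cProfile.run("process_file('C:/Users/Roy/Desktop/test/2000-01.csv')", 'profile.out')
pstats.Stats('profile.out').sort_stats('cumulative').print_stats(10)   # top 10 hotspots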