I'm looking for a data file structure that enables fast reading of random data samples for deep learning, and have been experimenting with lmdb today. However, one thing that seems surprising to me is how inefficiently it seems to store the data.
I have an ASCII file that is around 120 GB with gene sequences.
Now I would have expected to be able to fit this data in an LMDB database of roughly the same size, or perhaps even a bit smaller, since ASCII is a highly inefficient storage method.
However, what I'm seeing seems to suggest that I need around 350 GB to store this data in an LMDB file, and I simply don't understand why.
Am I not utilizing some setting correctly, or what exactly am I doing wrong here?
import time
import lmdb
import pyarrow as pa

def dumps_pyarrow(obj):
    """
    Serialize an object.
    Returns:
        Implementation-dependent bytes-like object
    """
    return pa.serialize(obj).to_buffer()

t0 = time.time()
filepath = './../../Uniparc/uniparc_active/uniparc_active.fasta'
output_file = './../data/out_lmdb.fasta'
write_freq = 100000
start_line = 2
nprot = 0

db = lmdb.open(output_file, map_size=1e9, readonly=False,
               meminit=False, map_async=True)
txn = db.begin(write=True)
with open(filepath) as fp:
    line = fp.readline()
    cnt = 1
    protein = ''
    while line:
        if cnt >= start_line:
            if line[0] == '>':  # Old protein finished, new protein starting on next line
                txn.put(u'{}'.format(nprot).encode('ascii'), dumps_pyarrow(protein))
                nprot += 1
                if nprot % write_freq == 0:
                    t1 = time.time()
                    print("committing... nprot={}, time={:2.2f}".format(nprot, t1 - t0))
                    txn.commit()
                    txn = db.begin(write=True)
                    line_checkpoint = cnt
                protein = ''
            else:
                protein += line.strip()
        line = fp.readline()
        cnt += 1

txn.commit()
keys = [u'{}'.format(k).encode('ascii') for k in range(nprot + 1)]
with db.begin(write=True) as txn:
    txn.put(b'__keys__', dumps_pyarrow(keys))
    txn.put(b'__len__', dumps_pyarrow(len(keys)))

print("Flushing database ...")
db.sync()
db.close()
t2 = time.time()
print("All done, time taken {:2.2f}s".format(t2 - t0))
Edit:
Some additional information about the data:
In the 120 GB file the data is structured like this (Here I am showing the first 2 proteins):
>UPI00001E0F7B status=inactive
YPRSRSQQQGHHNAAQQAHHPYQLQHSASTVSHHPHAHGPPSQGGPGGPGPPHGGHPHHP
HHGGAGSGGGSGPGSHGGQPHHQKPRRTASQRIRAATAARKLHFVFDPAGRLCYYWSMVV
SMAFLYNFWVIIYRFAFQEINRRTIAIWFCLDYLSDFLYLIDILFHFRTGYLEDGVLQTD
ALKLRTHYMNSTIFYIDCLCLLPLDFLYLSIGFNSILRSFRLVKIYRFWAFMDRTERHTN
YPNLFRSTALIHYLLVIFHWNGCLYHIIHKNNGFGSRNWVYHDSESADVVKQYLQSYYWC
TLALTTIGDLPKPRSKGEYVFVILQLLFGLMLFATVLGHVANIVTSVSAARKEFQGESNL
RRQWVKVVWSAPASG
>UPI00001E0FBF status=active
MWRAQPSLWIWWIFLILVPSIRAVYEDYRLPRSVEPLHYNLRILTHLNSTDQRFEGSVTI
DLLARETTKNITLHAAYLKIDENRTSVVSGQEKFGVNRIEVNEVHNFYILHLGRELVKDQ
IYKLEMHFKAGLNDSQSGYYKSNYTDIVTKEVHHLAVTQFSPTFARQAFPCFDEPSWKAT
FNITLGYHKKYMGLSGMPVLRCQDHDSLTNYVWCDHDTLLRTSTYLVAFAVHDLENAATE
ESKTSNRVIFRNWMQPKLLGQEMISMEIAPKLLSFYENLFQINFPLAKVDQLTVPTHRFT
AMENWGLVTYNEERLPQNQGDYPQKQKDSTAFTVAHEYAHQWFGNLVTMNWWNDLWLKEG
PSTYFGYLALDSLQPEWRRGERFISRDLANFFSKDSNATVPAISKDVKNPAEVLGQFTEY
VYEKGSLTIRMLHKLVGEEAFFHGIRSFLERFSFGNVAQADLWNSLQMAALKNQVISSDF
NLSRAMDSWTLQGGYPLVTLIRNYKTGEVTLNQSRFFQEHGIEKASSCWWVPLRFVRQNL
PDFNQTTPQFWLECPLNTKVLKLPDHLSTDEWVILNPQVATIFRVNYDEHNWRLIIESLR
NDPNSGGIHKLNKAQLLDDLMALAAVRLHKYDKAFDLLEYLKKEQDFLPWQRAIGILNRL
GALLNVAEANKFKNYMQKLLLPLYNRFPKLSGIREAKPAIKDIPFAHFAYSQACRYHVAD
CTDQAKILAITHRTEGQLELPSDFQKVAYCSLLDEGGDAEFLEVFGLFQNSTNGSQRRIL
ASALGCVRNFGNFEQFLNYTLESDEKLLGDCYMLAVKSALNREPLVSPTANYIISHAKKL
GEKFKKKELTGLLLSLAQNLRSTEEIDRLKAQLEDLKEFEEPLKKALYQGKMNQKWQKDC
SSDFIEAIEKHL
When I store the data in the database I concatenate all the lines making up each protein, and store those as a single data point. I ignore the headerline (the line starting with >).
The reason I believe the data should take less space when stored in the database is that I expect it to be stored in some binary form, which I would expect to be more compact - though I admit I don't know whether that is how it actually works. (For comparison, the data is only 70 GB when compressed/zipped.)
I would be okay with the data taking up a similar amount of space in LMDB format, but I don't understand why it should take up almost 3 times the space it does in ASCII format.
LMDB does not apply any compression to your data: one byte in memory is one byte on disk.
But its internals can amplify the space required:
data is handled in pages (generally 4 KB);
each "record" needs to store additional B-tree structures (for key and data) plus bookkeeping for the pages occupied by the key and the data.
Bottom line: LMDB is designed for FAST data access, not to save space.
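If disk footprint matters more than raw read speed, one workaround is to compress each value yourself before putting it into LMDB and decompress it on read. A minimal sketch, assuming zlib-level compression is acceptable and keeping the question's integer-string keys (the path, map_size, and sample sequence below are placeholders):
import lmdb
import zlib

def put_compressed(txn, key, text):
    # Compress the ASCII protein string before storing it in LMDB.
    txn.put(key, zlib.compress(text.encode('ascii')))

def get_decompressed(txn, key):
    raw = txn.get(key)
    return zlib.decompress(raw).decode('ascii') if raw is not None else None

db = lmdb.open('./compressed_lmdb', map_size=500 * 2**30)
with db.begin(write=True) as txn:
    put_compressed(txn, b'0', 'YPRSRSQQQGHHNAAQQAHHPYQLQHSASTVSHHPHAHGPPSQGGP')
with db.begin() as txn:
    print(get_decompressed(txn, b'0'))
db.close()
Sequence data like this usually compresses well, so this trades CPU time for a much smaller file; the page and B-tree overhead described above still applies on top of the compressed values.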
Following suggestions on SO Post, I also found PyTables append to be exceptionally time efficient. However, in my case the output file (earray.h5) is huge. Is there a way to append the data such that the output file is not as huge? For example, in my case (see link below) a 13 GB input file (dset_1: 2.1E8 x 4 and dset_2: 2.1E8 x 4) gives a 197 GB output file with just one column (2.5E10 x 1). All elements are float64.
I want to reduce the output file size such that the execution speed of the script is not compromised and reading the output file remains efficient for later use. Can saving the data along columns, and not just rows, help? Any suggestions on this? Given below is an MWE.
Output and input files' details here
import h5py
import tables

# no. of chunks from dset-1 and dset-2 in inp.h5
loop_1 = 40
loop_2 = 20
# save to disk after these many rows
app_len = 10**6

# **********************************************
# Grabbing input.h5 file
# **********************************************
filename = 'inp.h5'
f2 = h5py.File(filename, 'r')
chunks1 = f2['dset_1']
chunks2 = f2['dset_2']
shape1, shape2 = chunks1.shape[0], chunks2.shape[0]

f1 = tables.open_file("table.h5", "w")
a = f1.create_earray(f1.root, "dataset_1", atom=tables.Float64Atom(), shape=(0, 4))

size1 = shape1 // loop_1
size2 = shape2 // loop_2

# ***************************************************
# Grabbing chunks to process and append data
# ***************************************************
for c in range(loop_1):
    h = c * size1
    # grab chunks from dset_1 of inp.h5
    chunk1 = chunks1[h:(h + size1)]
    for d in range(loop_2):
        g = d * size2
        chunk2 = chunks2[g:(g + size2)]  # grab chunks from dset_2 of inp.h5
        r1 = chunk1.shape[0]
        r2 = chunk2.shape[0]
        left, right = 0, 0
        for j in range(r1):  # grab col.2 values from dataset-1
            e1 = chunk1[j, 1]
            # ...Algebraic operations here to output a row containing 4 float64
            # ...append to a (earray) when no. of rows reach a million
        del chunk2
    del chunk1
f2.close()
I wrote the answer you are referencing. That is a simple example that "only" writes 1.5e6 rows. I didn't do anything to optimize performance for very large files. You are creating a very large file, but did not say how many rows (obviously way more than 10**6). Here are some suggestions based on comments in another thread.
Areas I recommend (3 related to PyTables code, and 2 based on external utilities).
PyTables code suggestions:
Enable compression when you create the file (add the filters= parameter when you create the file). Start with tb.Filters(complevel=1).
Define the expectedrows= parameter in .create_earray() (per the PyTables docs, 'this will optimize the HDF5 B-Tree and amount of memory used'). The default value is set in tables/parameters.py (look for EXPECTED_ROWS_TABLE; it's only 10000 in my installation). I suggest you set this to a larger value if you are creating 10**6 (or more) rows.
There is a side benefit to setting expectedrows=: if you don't define chunkshape, 'a sensible value is calculated based on the expectedrows parameter'. Check the value used. This won't decrease the created file size, but it will improve I/O performance. A short sketch combining the first two suggestions is shown below.
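As a rough illustration of the first two suggestions (not the asker's full algebra; the file and dataset names follow the question, and the complevel/complib choices are only starting points):
import tables as tb

# Open the output file with compression enabled for everything created in it.
filters = tb.Filters(complevel=1, complib='blosc')
f1 = tb.open_file("table.h5", "w", filters=filters)

# Tell PyTables roughly how many rows to expect so it can pick a sensible
# chunkshape and B-tree layout (2.5E10 rows, taken from the question).
a = f1.create_earray(f1.root, "dataset_1",
                     atom=tb.Float64Atom(), shape=(0, 4),
                     expectedrows=25_000_000_000)
print(a.chunkshape)  # check the chunkshape PyTables chose
f1.close()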
If you didn't use compression when you created the file, there are 2 methods to compress existing files:
External Utilities:
The PyTables utility ptrepack - run against a HDF5 file to create a
new file (useful to go from uncompressed to compressed, or vice-versa). It is delivered with PyTables, and runs on the command line.
The HDF5 utility h5repack - works similar to ptrepack. It is delivered with the HDF5 installer from The HDF Group.
There are trade-offs with file compression: it reduces the file size but increases access time (reduces I/O performance). I tend to use uncompressed files for the files I open frequently (for best I/O performance). Then, when done, I convert them to compressed format for long-term archiving. You can continue to work with them in compressed format (the API handles this cleanly).
I have a function which I want to compute in parallel using multiprocessing. The function takes an argument, but also loads subsets from two very large dataframes which have already been loaded into memory (one is about 1 GB and the other just over 6 GB).
import multiprocessing
import pandas as pd

largeDF1 = pd.read_csv(directory + 'name1.csv')
largeDF2 = pd.read_csv(directory + 'name2.csv')

def f(x):
    load_content1 = largeDF1.loc[largeDF1['FirstRow'] == x]
    load_content2 = largeDF2.loc[largeDF2['FirstRow'] == x]
    # some computation happens here
    new_data.to_csv(directory + 'output.csv', index=False)

def main():
    multiprocessing.set_start_method('spawn', force=True)
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count())
    input = input_data['col']
    pool.map_async(f, input)
    pool.close()
    pool.join()
The problem is that the files are too big and when I run them over multiple cores I get a memory issue. I want to know if there is a way where the loaded files can be shared across all processes.
I have tried manager() but could not get it to work. Any help is appreciated. Thanks.
If you were running this on a UNIX-like system (which uses the fork start method by default), the data would be shared out of the box. Most operating systems use copy-on-write for memory pages, so even if you fork a process several times, the processes would share most of the memory pages that contain the dataframes, as long as you don't modify those dataframes.
But when using the spawn start method, each worker process has to load the dataframes itself. I'm not sure the OS is smart enough in that case to share the memory pages, or indeed that these spawned processes would all have the same memory layout.
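If you can rely on the fork start method (Linux, and optionally macOS), a rough sketch of that approach, with names following the question; the dataframes are loaded before the pool is created so the children inherit them copy-on-write:
import multiprocessing
import pandas as pd

largeDF1 = pd.read_csv('name1.csv')
largeDF2 = pd.read_csv('name2.csv')

def f(x):
    # The children read the parent's dataframes; as long as nothing writes
    # to them, copy-on-write keeps (most of) the pages shared.
    sub1 = largeDF1.loc[largeDF1['FirstRow'] == x]
    sub2 = largeDF2.loc[largeDF2['FirstRow'] == x]
    return len(sub1) + len(sub2)  # placeholder for the real computation

if __name__ == '__main__':
    multiprocessing.set_start_method('fork')  # explicit; the default on most UNIX-likes
    with multiprocessing.Pool() as pool:
        results = pool.map(f, ['a', 'b', 'c'])  # placeholder inputs
    print(results)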
The only portable solution I can think of would be to leave the data on disk and use mmap in the workers to map it into memory read-only. That way the OS would notice that multiple processes are mapping the same file, and it would only load one copy.
The downside is that the data would be in memory in its on-disk CSV format, which makes reading data from it (without making a copy!) less convenient. So you might want to prepare the data beforehand in a form that is easier to use. For example, convert the data from 'FirstRow' into a binary file of float or double values that you can iterate over with struct.iter_unpack.
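A rough sketch of that idea: dump one numeric column as packed little-endian doubles once, then map the file read-only in each worker (the file name and the aggregation are illustrative):
import mmap
import struct

# One-time preparation: write a column as packed doubles.
def write_column(values, path='column.bin'):
    with open(path, 'wb') as f:
        for v in values:
            f.write(struct.pack('<d', v))

# In each worker: map the file read-only and iterate without copying it all.
def column_sum(path='column.bin'):
    with open(path, 'rb') as f:
        with mmap.mmap(f.fileno(), 0, prot=mmap.PROT_READ) as mm:
            return sum(v for (v,) in struct.iter_unpack('<d', mm))

write_column([1.0, 2.5, 3.25])
print(column_sum())  # 6.75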
The function below (from my statusline script) uses mmap to count the amount of messages in a mailbox file.
import os
import mmap

def mail(storage, mboxname):
    """
    Report unread mail.

    Arguments:
        storage: a dict with keys (unread, time, size) from the previous call or an empty dict.
            This dict will be *modified* by this function.
        mboxname (str): name of the mailbox to read.

    Returns: A string to display.
    """
    stats = os.stat(mboxname)
    if stats.st_size == 0:
        return 'Mail: 0'
    # When mutt modifies the mailbox, it seems to only change the
    # ctime, not the mtime! This is probably related to how mutt saves the
    # file. See also stat(2).
    newtime = stats.st_ctime
    newsize = stats.st_size
    if not storage or newtime > storage['time'] or newsize != storage['size']:
        with open(mboxname) as mbox:
            with mmap.mmap(mbox.fileno(), 0, prot=mmap.PROT_READ) as mm:
                start, total = 0, 1  # First mail is not found; it starts on first line...
                while True:
                    rv = mm.find(b'\n\nFrom ', start)
                    if rv == -1:
                        break
                    else:
                        total += 1
                        start = rv + 7
                start, read = 0, 0
                while True:
                    rv = mm.find(b'\nStatus: R', start)
                    if rv == -1:
                        break
                    else:
                        read += 1
                        start = rv + 10
        unread = total - read
        # Save values for the next run.
        storage['unread'], storage['time'], storage['size'] = unread, newtime, newsize
    else:
        unread = storage['unread']
    return f'Mail: {unread}'
In this case I used mmap because it was 4x faster than just reading the file. See normal reading versus using mmap.
I'm having trouble reading large amounts of data from a text file, and splitting and removing certain objects from it to get a more refined list. For example, let's say I have a text file, we'll call it 'data.txt', that has this data in it.
Some Header Here
Object Number = 1
Object Symbol = A
Mass of Object = 1
Weight of Object = 1.2040
Hight of Object = 0.394
Width of Object = 4.2304
Object Number = 2
Object Symbol = B
Mass Number = 2
Weight of Object = 1.596
Height of Object = 3.293
Width of Object = 4.654
.
.
. ...Same format continuing down
My problem is taking the data I need from this file. Let's say I'm only interested in the Object Number and Mass of Object, which repeats through the file, but with different numerical values. I need a list of this data. Example
Object Number Mass of Object
1 1
2 2
. .
. .
. .
etc.
With the headers excluded of course, as this data will be applied to an equation. I'm very new to Python, and don't have any knowledge of OOP. What would be the easiest way to do this? I know the basics of opening and writing to text files, even a little bit of using the split and strip functions. I've researched quite a bit on this site about sorting data, but I can't get it to work for me.
Try this:
object_number = []   # list of Object Number
mass_of_object = []  # list of Mass of Object
with open('data.txt') as f:
    for line in f:
        if line.startswith('Object Number'):
            object_number.append(int(line.split('=')[1]))
        elif line.startswith('Mass of Object'):
            mass_of_object.append(int(line.split('=')[1]))
In my opinion a dictionary (and its sub-classes) is more efficient than a group of lists for huge data input.
Moreover, my code doesn't need any modification if you need to extract new object data from your file: just add the field name to the checklist.
from collections import defaultdict

checklist = ["Object Number", "Mass of Object"]
data = defaultdict()
with open("text.txt") as f:
    # iterating over the file allows
    # you to read it automatically one line at a time
    for line in f:
        for regmatch in checklist:
            if line.startswith(regmatch):
                # this is to erase newline characters
                val = line.rstrip()
                val = val.split(" = ")[1]
                data.setdefault(regmatch, []).append(val)
print(data)
This is the output:
defaultdict(None, {'Object Number': ['1', '2'], 'Mass of Object': ['1']})
Here is some theory about speed, here are some tips about performance optimization, and here is something about the dependency between data types and implementation efficiency.
Last, some examples about re (regular expression):
https://docs.python.org/2/howto/regex.html
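For completeness, a regex-based variant of the same extraction (a sketch; the pattern and file name are illustrative rather than taken from the question):
import re
from collections import defaultdict

pattern = re.compile(r'^(Object Number|Mass of Object)\s*=\s*(\S+)')
data = defaultdict(list)
with open('data.txt') as f:
    for line in f:
        m = pattern.match(line)
        if m:
            data[m.group(1)].append(m.group(2))
print(dict(data))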
Is there a memory limit for Python? I've been using a Python script to calculate average values from files that are at minimum 150 MB in size.
Depending on the size of the file I sometimes encounter a MemoryError.
Can more memory be assigned to Python so I don't encounter the error?
EDIT: Code now below
NOTE: The file sizes can vary greatly (up to 20 GB); the minimum size of a file is 150 MB.
file_A1_B1 = open("A1_B1_100000.txt", "r")
file_A2_B2 = open("A2_B2_100000.txt", "r")
file_A1_B2 = open("A1_B2_100000.txt", "r")
file_A2_B1 = open("A2_B1_100000.txt", "r")

file_write = open("average_generations.txt", "w")
mutation_average = open("mutation_average", "w")

files = [file_A1_B1, file_A2_B2, file_A1_B2, file_A2_B1]

for u in files:
    line = u.readlines()
    list_of_lines = []
    for i in line:
        values = i.split('\t')
        list_of_lines.append(values)

    count = 0
    for j in list_of_lines:
        count += 1

    for k in range(0, count):
        list_of_lines[k].remove('\n')

    length = len(list_of_lines[0])
    print_counter = 4

    for o in range(0, length):
        total = 0
        for p in range(0, count):
            number = float(list_of_lines[p][o])
            total = total + number
        average = total/count
        print average
        if print_counter == 4:
            file_write.write(str(average)+'\n')
            print_counter = 0
        print_counter += 1
file_write.write('\n')
(This is my third answer because I misunderstood what your code was doing in my original, and then made a small but crucial mistake in my second; hopefully three's a charm.)
Edits: Since this seems to be a popular answer, I've made a few modifications over the years to improve its implementation, most not too major. This is so that if folks use it as a template, it will provide an even better basis.
As others have pointed out, your MemoryError problem is most likely because you're attempting to read the entire contents of huge files into memory and then, on top of that, effectively doubling the amount of memory needed by creating a list of lists of the string values from each line.
Python's memory limits are determined by how much physical ram and virtual memory disk space your computer and operating system have available. Even if you don't use it all up and your program "works", using it may be impractical because it takes too long.
Anyway, the most obvious way to avoid that is to process each file a single line at a time, which means you have to do the processing incrementally.
To accomplish this, a list of running totals, one per field, is kept. When a file has been completely read, the average value of each field can be calculated by dividing the corresponding total by the count of lines read. Once that is done, these averages can be printed out and some written to one of the output files. I've also made a conscious effort to use very descriptive variable names to try to make it understandable.
try:
    from itertools import izip_longest
except ImportError:  # Python 3
    from itertools import zip_longest as izip_longest

GROUP_SIZE = 4
input_file_names = ["A1_B1_100000.txt", "A2_B2_100000.txt", "A1_B2_100000.txt",
                    "A2_B1_100000.txt"]
file_write = open("average_generations.txt", 'w')
mutation_average = open("mutation_average", 'w')  # left in, but nothing written

for file_name in input_file_names:
    with open(file_name, 'r') as input_file:
        print('processing file: {}'.format(file_name))

        totals = []
        for count, fields in enumerate((line.split('\t') for line in input_file), 1):
            totals = [sum(values) for values in
                      izip_longest(totals, map(float, fields), fillvalue=0)]
        averages = [total/count for total in totals]

        for print_counter, average in enumerate(averages):
            print('  {:9.4f}'.format(average))
            if print_counter % GROUP_SIZE == 0:
                file_write.write(str(average)+'\n')

file_write.write('\n')
file_write.close()
mutation_average.close()
You're reading the entire file into memory (line = u.readlines()) which will fail of course if the file is too large (and you say that some are up to 20 GB), so that's your problem right there.
Better iterate over each line:
for current_line in u:
    do_something_with(current_line)
is the recommended approach.
Later in your script, you're doing some very strange things like first counting all the items in a list, then constructing a for loop over the range of that count. Why not iterate over the list directly? What is the purpose of your script? I have the impression that this could be done much easier.
This is one of the advantages of high-level languages like Python (as opposed to C where you do have to do these housekeeping tasks yourself): Allow Python to handle iteration for you, and only collect in memory what you actually need to have in memory at any given time.
Also, as it seems that you're processing TSV files (tab-separated values), you should take a look at the csv module, which will handle all the splitting, removing of \ns, etc. for you. A sketch of that approach follows below.
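A minimal line-at-a-time sketch using the csv module (the file name and the way empty trailing fields are skipped are illustrative, not the asker's exact logic):
import csv

column_totals = []
row_count = 0
with open('A1_B1_100000.txt', newline='') as f:
    for row in csv.reader(f, delimiter='\t'):
        row_count += 1
        values = [float(x) for x in row if x.strip()]
        if not column_totals:
            column_totals = values
        else:
            column_totals = [t + v for t, v in zip(column_totals, values)]

column_averages = [t / row_count for t in column_totals]
print(column_averages)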
Python can use all memory available to its environment. My simple "memory test" crashes on ActiveState Python 2.6 after using about
1959167 [MiB]
On jython 2.5 it crashes earlier:
239000 [MiB]
probably I can configure Jython to use more memory (it uses limits from JVM)
Test app:
import sys

sl = []
i = 0
# some magic 1024 - overhead of string object
fill_size = 1024
if sys.version.startswith('2.7'):
    fill_size = 1003
if sys.version.startswith('3'):
    fill_size = 497
print(fill_size)
MiB = 0
while True:
    s = str(i).zfill(fill_size)
    sl.append(s)
    if i == 0:
        try:
            sys.stderr.write('size of one string %d\n' % (sys.getsizeof(s)))
        except AttributeError:
            pass
    i += 1
    if i % 1024 == 0:
        MiB += 1
        if MiB % 25 == 0:
            sys.stderr.write('%d [MiB]\n' % (MiB))
In your app you read the whole file at once. For such big files you should read them line by line.
No, there's no Python-specific limit on the memory usage of a Python application. I regularly work with Python applications that may use several gigabytes of memory. Most likely, your script actually uses more memory than available on the machine you're running on.
In that case, the solution is to rewrite the script to be more memory efficient, or to add more physical memory if the script is already optimized to minimize memory usage.
Edit:
Your script reads the entire contents of your files into memory at once (line = u.readlines()). Since you're processing files up to 20 GB in size, you're going to get memory errors with that approach unless you have huge amounts of memory in your machine.
A better approach would be to read the files one line at a time:
for u in files:
    for line in u:  # This will iterate over each line in the file
        # Read values from the line, do the necessary calculations
        ...
Not only are you reading the whole of each file into memory, but also you laboriously replicate the information in a table called list_of_lines.
You have a secondary problem: your choices of variable names severely obfuscate what you are doing.
Here is your script rewritten with the readlines() caper removed and with meaningful names:
file_A1_B1 = open("A1_B1_100000.txt", "r")
file_A2_B2 = open("A2_B2_100000.txt", "r")
file_A1_B2 = open("A1_B2_100000.txt", "r")
file_A2_B1 = open("A2_B1_100000.txt", "r")

file_write = open("average_generations.txt", "w")
mutation_average = open("mutation_average", "w")  # not used

files = [file_A1_B1, file_A2_B2, file_A1_B2, file_A2_B1]

for afile in files:
    table = []
    for aline in afile:
        values = aline.split('\t')
        values.remove('\n')  # why?
        table.append(values)

    row_count = len(table)
    row0length = len(table[0])
    print_counter = 4

    for column_index in range(row0length):
        column_total = 0
        for row_index in range(row_count):
            number = float(table[row_index][column_index])
            column_total = column_total + number
        column_average = column_total/row_count
        print column_average
        if print_counter == 4:
            file_write.write(str(column_average)+'\n')
            print_counter = 0
        print_counter += 1
file_write.write('\n')
It rapidly becomes apparent that (1) you are calculating column averages, and (2) the obfuscation led some others to think you were calculating row averages.
As you are calculating column averages, no output is required until the end of each file, and the amount of extra memory actually required is proportional to the number of columns.
Here is a revised version of the outer loop code:
for afile in files:
    for row_count, aline in enumerate(afile, start=1):
        values = aline.split('\t')
        values.remove('\n')  # why?
        fvalues = map(float, values)
        if row_count == 1:
            row0length = len(fvalues)
            column_index_range = range(row0length)
            column_totals = fvalues
        else:
            assert len(fvalues) == row0length
            for column_index in column_index_range:
                column_totals[column_index] += fvalues[column_index]

    print_counter = 4
    for column_index in column_index_range:
        column_average = column_totals[column_index] / row_count
        print column_average
        if print_counter == 4:
            file_write.write(str(column_average)+'\n')
            print_counter = 0
        print_counter += 1
I have two 3GB text files, each file has around 80 million lines. And they share 99.9% identical lines (file A has 60,000 unique lines, file B has 80,000 unique lines).
How can I quickly find those unique lines in the two files? Are there any ready-to-use command-line tools for this? I'm using Python, but I suspect it will be hard to find an efficient Pythonic way to load and compare the files.
Any suggestions are appreciated.
If order matters, try the comm utility. If order doesn't matter, sort file1 file2 | uniq -u.
I think this is the fastest method (whether it's in Python or another language shouldn't matter too much IMO).
Notes:
1. I only store each line's hash to save space (and time, if paging might occur).
2. Because of the above, I only print out line numbers; if you need the actual lines, you'd just need to read the files in again.
3. I assume that the hash function results in no collisions. This is nearly, but not perfectly, certain.
4. I import hashlib because the built-in hash() function is too short to avoid collisions.
import sys
import hashlib

file = []
lines = []
for i in range(2):
    # open the files named in the command line
    file.append(open(sys.argv[1+i], 'r'))
    # stores the hash value and the line number for each line in file i
    lines.append({})
    # assuming you like counting lines starting with 1
    counter = 1
    while 1:
        # assuming default encoding is sufficient to handle the input file
        line = file[i].readline().encode()
        if not line:
            break
        hashcode = hashlib.sha512(line).hexdigest()
        lines[i][hashcode] = sys.argv[1+i]+': '+str(counter)
        counter += 1

unique0 = lines[0].keys() - lines[1].keys()
unique1 = lines[1].keys() - lines[0].keys()
result = [lines[0][x] for x in unique0] + [lines[1][x] for x in unique1]
With 60,000 or 80,000 unique lines you could just create a dictionary for each unique line, mapping it to a number. mydict["hello world"] => 1, etc. If your average line is around 40-80 characters this will be in the neighborhood of 10 MB of memory.
Then read each file, converting it to an array of numbers via the dictionary. Two arrays of roughly 80 million small integers each will still fit comfortably in memory (on the order of a gigabyte). Then diff the lists. You could invert the dictionary and use it to print out the text of the lines that differ.
EDIT:
In response to your comment, here's a sample script that assigns numbers to unique lines as it reads from a file.
#!/usr/bin/python
class Reader:
    def __init__(self, file):
        self.count = 0
        self.dict = {}
        self.file = file

    def readline(self):
        line = self.file.readline()
        if not line:
            return None
        if line in self.dict:
            return self.dict[line]
        else:
            self.count = self.count + 1
            self.dict[line] = self.count
            return self.count

if __name__ == '__main__':
    print "Type Ctrl-D to quit."
    import sys
    r = Reader(sys.stdin)
    result = 'ignore'
    while result:
        result = r.readline()
        print result
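To turn that into the full comparison, here is a rough Python 3 sketch of the approach described above: map every distinct line to a small integer id via a shared dictionary, collect the ids seen in each file, and compare the id sets (sets are used instead of ordered lists, since the goal is just the lines unique to each file; file names are illustrative):
line_ids = {}  # maps a line to a small integer id

def file_to_ids(path):
    ids = set()
    with open(path) as f:
        for line in f:
            ids.add(line_ids.setdefault(line, len(line_ids) + 1))
    return ids

ids_a = file_to_ids('fileA')
ids_b = file_to_ids('fileB')

# Invert the dictionary to print the lines unique to each file.
id_to_line = {v: k for k, v in line_ids.items()}
for i in sorted(ids_a - ids_b):
    print('only in fileA:', id_to_line[i].rstrip())
for i in sorted(ids_b - ids_a):
    print('only in fileB:', id_to_line[i].rstrip())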
If I understand correctly, you want the lines of these files without duplicates. This does the job:
uniqA = set(open('fileA', 'r'))
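The snippet above only builds the set for the first file; presumably the intent is to do the same for the second file and take set differences, roughly like this (file names follow the question; note that this holds both files' lines in memory):
uniqA = set(open('fileA', 'r'))
uniqB = set(open('fileB', 'r'))

only_in_A = uniqA - uniqB   # lines that appear in fileA but not in fileB
only_in_B = uniqB - uniqA   # lines that appear in fileB but not in fileA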
Python has difflib, which claims to be quite competitive with other diff utilities; see:
http://docs.python.org/library/difflib.html
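A minimal difflib usage sketch (note that it reads both files fully into memory, so for the 3 GB files in the question it is more a reference point than a practical solution):
import difflib

with open('fileA') as fa, open('fileB') as fb:
    a_lines = fa.readlines()
    b_lines = fb.readlines()

for line in difflib.unified_diff(a_lines, b_lines, fromfile='fileA', tofile='fileB'):
    print(line, end='')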