I have a big (20 GB) input text file which I process. I build an index which I store in a dict. The problem is that I access this dict for every term in the file, and for every term I may also add it as an item to the dict, so I cannot just write it out to disk. When I reach my maximum RAM capacity (8 GB) the system (Win8 64-bit) starts paging to virtual memory, so I/O is extremely high and the system is unstable (I got a blue screen once). Any idea how I can improve this?
Edit: for example, some pseudocode:
input = open("C:\\input.txt", 'r').read()
text = input.split()
temp_dict = {}
for i, word in enumerate(text):
    if word in temp_dict:
        text[i] = something()
    else:
        temp_dict[word] = hash_function()
print(temp_dict, file=...)
print(text, file=...)
Don't read the entire file into memory; do something like this instead:
with open("/input.txt",'rU') as file:
index_dict = {}
for line in file:
for word in line.split()
index_dict.setdefault(word, []).append(file.tell() + line.find(word))
To break it down: open the file with a context manager, so that if you get an error it automatically closes the file for you. I also changed the path to work on Unix, and added the U flag for universal newlines mode.
with open("/input.txt",'rU') as file:
Since, semantically, an index maps each word to a list of its locations, I'm renaming the dict to index_dict:
index_dict = {}
Using the file object directly as an iterator prevents you from reading the entire file into memory:
for line in file:
Then we can split the line and iterate by word:
for word in line.split():
and using the dict.setdefault method, we put the location of the word into a new empty list if the key isn't already there; if it is there, we just append to the list that already exists:
index_dict.setdefault(word, []).append(file.tell() + line.find(word))
Does that help?
I would recommend simply using a database instead of a dictionary. In its simplest form, a database is a disk-based data structure that is meant to span several gigabytes.
You can have a look at sqlite3 or SQLAlchemy for instance.
Additionally, you probably don't want to load the whole input file into memory at once either.
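Tying both suggestions together, here is a minimal sqlite3 sketch (the file, table, and column names are made up for illustration, and I'm assuming the index maps each word to a single value). The dict membership tests and insertions become indexed queries that live on disk instead of in RAM:

import sqlite3

conn = sqlite3.connect('index.db')
conn.execute('CREATE TABLE IF NOT EXISTS terms (word TEXT PRIMARY KEY, value TEXT)')

with open('input.txt') as f:
    for line in f:
        for word in line.split():
            # INSERT OR IGNORE mimics "only add the term if it isn't there yet"
            conn.execute('INSERT OR IGNORE INTO terms VALUES (?, ?)', (word, 'some value'))
conn.commit()

# lookups now hit the disk index, not RAM
row = conn.execute('SELECT value FROM terms WHERE word = ?', ('example',)).fetchone()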
Related
Good afternoon. I have a list of IP/MAC pairs, of arbitrary length:
A = [['10.0.0.1','00:4C:3S:**:**:**', 0], ['10.0.0.2', '00:5C:4S:**:**:**', 0], [....], [....]]
I want to check if this MAC is in the oui file:
E043DB (base 16) Shenzhen
2405f5 (base 16) Integrated
3CD92B (base 16) Hewlett Packard
...
If a MAC from the list is in the file, I want to write the name of the manufacturer as the third list item. When I try this, it turns out that only the first element is checked and the remaining ones are not. How can I fix this?
f = open('oui.txt', 'r')
for values in A:
    for line in f.readlines():
        if values[1][0:8].replace(':','') in line:
            values[2] = line.split('(base 16)')[1].strip()
f.close()
print(A)
And I get this result:
A = [['10.0.0.1','00:4C:3S:**:**:**', 'Firm Name'], ['10.0.0.2', '00:5C:4S:**:**:**', 0], [....], [....]]
The Problem
Consider the "shape" of your code:
f = open('a file')
for values in [ 'some list' ]:
    for line in f.readlines():
Your two loops are doing this:
1. Start with the first value in the list
2. Read all lines remaining in file object f
3. Move to the next value in the list
4. Read all lines remaining in file object f
Except that only the first "read all lines remaining" actually reads anything.
So, unless you have some way to put more lines into f (which can happen with async files like stdin!) you are going to get one "good" pass through the file, and then every subsequent pass the file object will point to the end of the file, so you'll get nothing.
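You can see the exhaustion directly in a quick experiment (any text file will do):

with open('oui.txt') as f:
    first = f.readlines()   # reads every line in the file
    second = f.readlines()  # returns [] -- the file object is already at EOF
print(len(first), len(second))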
A Solution
When you are dealing with a file, you want to only process it one time. File I/O is expensive compared to other operations. So you can choose to either (a) read the entire file into memory, and do whatever you want since it's not a file any more; or (b) scan through it only one time.
If you choose to scan through it only once, the easy solution is just to invert the two for loops. Instead of doing this:
for item in list:
    for line in file:
Do this instead:
for line in file:
    for item in list:
And presto! You are now only reading the file one time.
Other Considerations
If I look at your code, and your examples, it seems like you are trying for an exact match on a particular key. You trim down the MAC addresses in your list to check them against the manufacturer ids.
This suggests to me that you might well have many, many more list values (source MAC addresses) than you have manufacturers. So perhaps you should consider reading the contents of the file into memory, rather than processing it one line at a time.
Once you have the file in memory, consider building a proper dictionary. You have a key (MAC prefix) and a value (manufacturer). So build something like:
mac_to_mfg = {}
for line in f:
    mac, mfg = line.split('(base 16)')  # assumes every line contains the marker
    mac_to_mfg[mac.strip()] = mfg.strip()
Then you can make one pass through the source addresses and use the dict's O(1) lookup to your advantage:
for src in A:
    prefix = src[1][:8].replace(':', '')
    if prefix in mac_to_mfg:
        # etc...
The problem is you got the order of the loops reversed. Usually this isn't that big of a problem, but when working with objects that are consumed (like the IO file object), the contents are gone once they've been iterated over.
You'll need to iterate over the lines first, and then within each line iterate through A to check the values:
with open('oui.txt', 'r') as f:
    for line in f.readlines():
        for values in A:
            if values[1][0:8].replace(':','') in line:
                values[2] = line.split('(base 16)')[1].strip()
print(A)
Notice I changed your file opening to use the with context manager instead; once your code exits the with block, it automatically close()s the file for you. It is recommended over manually opening the file, as you might forget to close() it afterwards.
Most of what I do involves writing simple parsing scripts that read search terms from one file and search, line by line, another file. Once a search term is found, the line and sometimes the following line are written to another output file. The code I use is rudimentary and likely crude.
#!/usr/bin/env python
data = open("data.txt", "r")
search_terms = ids.read().splitlines()
data.close()

db = open("db.txt", "r")
output = open("output.txt", "w")

for term in search_terms:
    for line in db:
        if line.find(term) > -1:
            next_line = db.next()
            output.write(">" + head + "\n" + next_line)
            print("Found %s" % term)
There are a few problems here. First, I don't think searching line by line is the most efficient or fastest approach, but I'm not exactly sure about that. Second, I often run into issues with cursor placement, and the cursor doesn't reset to the beginning of the file when the search term is found. Third, while I am usually confident that all of the terms can be found in the db, there are rare times when I can't be sure, so I would like to write to another file whenever the script iterates through the entire db and can't find a term. I've tried adding a snippet that counts the number of lines in the db, so that if the find() function gets to the last line and the term isn't found it outputs to another "not found" file, but I haven't been able to get my elif and else branches right.
Overall, I'd just like any hints or corrections that could make this sort of script more efficient and robust.
Thanks.
Unless it's a really big file, why not iterate line by line? If the input file's size is some significant portion of your machine's available resources (memory), then you might want to look into buffered input and other, more low-level abstractions of what the computer is doing. But if you're talking about a few hundred MB or less on a relatively modern machine, let the computer do the computing ;)
Off the bat you might want to get into the habit of using the built-in context manager with. For instance, in your snippet, you don't have a call to output.close().
with open('data.txt', 'r') as f_in:
    search_terms = f_in.read().splitlines()
Now search_terms is a list that has each line from data.txt as a string (with the newline characters removed). And data.txt is closed, thanks to with.
In fact, I would do that with the db.txt file, also.
with open('db.txt', 'r') as f_in:
    lines = f_in.read().splitlines()
Context managers are cool.
As a side note, you could open your destination file now, and do your parsing and results-tracking with it open the whole time, but I like leaving as many files closed as possible for as long as possible.
I would suggest putting the biggest object on the outside of your loop, which I'm guessing is the db.txt contents. The outermost loop usually only gets iterated once, so you might as well put the biggest thing there.
results = []
for i, line in enumerate(lines[:-1]):  # stop one early so lines[i+1] stays in range
    for term in search_terms:
        if term in line:
            # Use something not likely to appear in your line as a separator
            # for these "second lines". I used three pipe characters, but
            # you could just as easily use something even more random
            results.append('{}|||{}'.format(line, lines[i+1]))

if results:
    with open('output.txt', 'w') as f_out:
        for result in results:
            # Don't forget to replace your custom field separator
            f_out.write('> {}\n'.format(result.replace('|||', '\n')))
else:
    with open('no_results.txt', 'w') as f_out:
        # This will write an empty file to disk
        pass
The nice thing about this approach is that each line in db.txt is checked once for each search_term in search_terms. However, the downside is that any line will be recorded once for each search term it contains, i.e., if it contains three search terms, that line will appear in your output.txt three times.
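If you only want each line recorded once, no matter how many terms it contains, a small variant using any() would do it (a sketch against the same lines and search_terms as above):

results = []
for i, line in enumerate(lines[:-1]):  # stop one early so lines[i+1] exists
    # any() short-circuits on the first matching term, so the line is recorded at most once
    if any(term in line for term in search_terms):
        results.append('{}|||{}'.format(line, lines[i + 1]))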
And all the files are magically closed.
Context managers are cool.
Good luck!
search_terms keeps the whole of data.txt in memory. That's not good in general, but in this case it's not too bad.
Searching line by line is not efficient, but if the case is simple and the files are not too big, it's not a big deal. If you want more efficiency, you should sort the data.txt file and put it into some tree-like structure. It depends on the data inside.
You have to use seek to move the pointer back after using next.
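For example (a sketch; note that in Python 3, tell() can't be mixed with for-line iteration over a text file, so use readline() if you plan to tell()/seek()):

with open('db.txt') as db:
    line = db.readline()
    pos = db.tell()            # remember the current position
    next_line = db.readline()  # peek at the following line
    db.seek(pos)               # move the pointer back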
Probably the easiest way here is to generate two lists of lines and search using in, like:
db = open('db.txt').readlines()
db_words = [x.split() for x in db]
data = [line.strip() for line in open('data.txt')]  # strip newlines so the in test can match

print('Lines in db {}'.format(len(db)))
for item in data:  # search for each term from data.txt
    for words in db_words:
        if item in words:
            print("Found {}".format(item))
Your key issue is that you're looping in the wrong order: in your code as posted, you'll exhaust the db looking for the first term, so after the first pass of the outer for loop db will be at end-of-file, with no more lines to read, and no other term will ever be found.
Other improvements include using the with statement to guarantee file closure, and a set to track which search terms were not found. (There are also typos in your posted code, such as opening a file as data but then reading it as ids).
So, for example, something like:
with open("data.txt", "r") as data:
search_terms = data.read().splitlines()
missing_terms = set(search_terms)
with open("db.txt", "r") as db, open("output.txt", "w") as output:
for line in db:
for term in search_terms:
if term in line:
missing_terms.discard(term)
next_line = db.next()
output.write(">" + head + "\n" + next_line)
print("Found {}".format(term))
break
if missing_terms:
diagnose_not_found(missing_terms)
where the diagnose_not_found function does whatever you need to do to warn the user about missing terms.
There are assumptions embedded here, such as that you don't care whether some other search term is present in a line where you've found a previous one, or in the very next line. If these assumptions don't apply, fixing that might take substantial work, and it will require that you edit your Q with a very complete and unambiguous list of specifications.
If your db is actually small enough to comfortably fit in memory, slurping it all in as a list of lines, once and for all, would allow easier accommodation of more demanding specs (in that case you can easily go back and forth, while iterating over a file means you can only go forward one line at a time). So if your specs are indeed more demanding, please also clarify whether this crucial condition holds, or whether you need this script to process potentially humongous db files (say, gigabyte-plus sizes, so as to not "comfortably fit in memory", depending on your platform of course).
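Assuming it does fit, a sketch of that slurping variant (reusing search_terms from the snippet above, with the same output format) would look like:

with open("db.txt", "r") as db:
    db_lines = db.read().splitlines()

with open("output.txt", "w") as output:
    for i, line in enumerate(db_lines):
        for term in search_terms:
            if term in line:
                # a list lets you freely look behind (db_lines[i-1]) or ahead
                if i + 1 < len(db_lines):  # guard: the match may be on the last line
                    output.write(">" + line + "\n" + db_lines[i + 1] + "\n")
                break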
I'm trying to use mmap to load a dictionary from a file.
I'll explain my problem with a simplified example. In reality, I have 10 files which have to be loaded in milliseconds (or at least act as if they were loaded).
So let's say I have a dictionary of 50 MB. My program should find a value by key in under 1 second. Searching in this dictionary is not the problem; it can be done far under 1 second. The problem is that when somebody types input into the text field and presses Enter, the program starts loading the dictionary into memory so it can find the key. This loading can take several seconds, but I have to get the result in under 1 second (the dictionary can't be loaded before Enter is pressed). So I was recommended the mmap module, which should be far faster.
I can't google a good example. I've tried this (I know that it is an incorrect use):
import mmap
import cPickle

def loadDict():
    with open('dict', 'r+b') as f:  # used pickle to save
        fmap = mmap.mmap(f.fileno(), 0)
        dictionary = cPickle.load(fmap)
    return dictionary

def search(pattern):
    dictionary = loadDict()
    return dictionary[pattern]

search('apple')  # <- it still takes many seconds
Could you give me a good example of proper mmap use?
Using an example file of 2,400,000 key/value pairs (52.7 MB) such as:
key1,value1
key2,value2
etc., etc.
Creating example file:
with open("stacktest.txt", "a") as f:
contents = ["key" + str(i) + ",value" + str(i) for i in range(2400000)]
f.write("\n".join(contents) + "\n")
What is actually slow is having to construct the dictionary. Reading a 50 MB file is fast enough, and finding a value in a wall of text of this size is also fast enough. Using that, you will be able to find a single value in under 1 second.
Since I know the structure of my file I am able to use this shortcut. This should be tuned to your exact file structure though:
Reading in the file and manually searching for the known pattern (searching for the unique key string in the whole file, then using the comma and newline delimiters):
with open("stacktest.txt") as f:
bigfile = f.read()
my_key = "key2399999"
start = bigfile.find(my_key)
comma = bigfile[start:start+1000].find(",") + 1
end = bigfile[start:start+1000].find("\n")
print bigfile[start+comma:start+end]
# value2399999
Timing for it all: 0.43s on average
Mission accomplished?
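For completeness, since the question asked about mmap specifically: the same substring search also works on a memory-mapped file, and then the OS only pages in the parts of the file you actually touch instead of reading all 50 MB up front. A sketch along those lines, against the example file above (untimed):

import mmap

with open("stacktest.txt", "rb") as f:
    mm = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    start = mm.find(b"key2399999,")  # mmap exposes bytes, hence b"..."
    if start != -1:
        end = mm.find(b"\n", start)
        print(mm[start:end].split(b",")[1])  # b'value2399999'
    mm.close()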
I have a text file containing 7000 lines of strings, and I have to search for a specific string based on a few parameters.
Some are saying that the below code wouldn't be efficient (in speed and memory usage).
f = open("file.txt")
data = f.read().split() # strings as list
First of all, if I don't even make it into a list, how would I even start searching at all?
Is it efficient to load the entire file? If not, how to do it?
To filter anything, we need to search; and to search, we need to read, right?
A bit confused
Iterate over each line of the file without storing the whole file. This will make the program memory-efficient.
with open(filename) as f:
    for line in f:
        if "search_term" in line:
            break
I want to create a dictionary with values from a file.
The problem is that it would have to be read line by line to be added to the dictionary, because I don't think I have enough memory to load all the information to be appended to the dictionary.
The key can be a default, but the value will be one selected from each line in the file. The file is not CSV, but I always split the lines so I can select a value from it.
import sys

def prod_check(dirname):
    dict1 = {}
    k = 0
    with open('select_sha_sub_hashes.out') as inf:
        for line in inf:
            pline = line.split('|')
            value = pline[3]
            dict1[line] = dict1[k]
            k += 1
    print dict1

if __name__ == "__main__":
    dirname = sys.argv[1]
    prod_check(dirname)
This is the code I am working with; the variable I have set as value is the index in the line from the file which I am pulling data from. I seem to run into a problem when I try to print the dictionary's values, but I think it may be a problem in my syntax, or maybe an assignment I made. I want the values to be added to the keys, but the keys to remain regular numbers like 0-100.
If you don't have enough memory to store the entire dictionary in RAM at once, try anydbm, bsddb and/or gdbm. These are dictionary-like objects that keep key-value pairs on disk in a single-table, keystring-valuestring database.
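For example, with the standard library's dbm module (anydbm on Python 2), a minimal sketch with an illustrative file name:

import dbm  # anydbm on Python 2

# a disk-backed, dict-like store; 'c' creates the file if it doesn't exist
with dbm.open('index_store', 'c') as db:
    db[b'apple'] = b'some value'  # keys and values are stored as bytes
    if b'apple' in db:
        print(db[b'apple'])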
Optionally, consider:
http://stromberg.dnsalias.org/~strombrg/cachedb.html
...which will allow you to convert between serialized and non-serialized representations pretty transparently.
Have a look at something like Tokyo Cabinet (http://fallabs.com/tokyocabinet/), which has Python bindings and is fairly efficient. There's also Kyoto Cabinet, but its licensing is a little restrictive.
Also check out this previous S/O post: Reliable and efficient key-value database for Linux?
So it sounds as if the main problem is reading the file line-by-line. To read a file line-by-line you can do this:
with open('data.txt') as inf:
    for line in inf:
        # do the rest of your processing here
        pass
The advantage of using with is that the file is closed for you automagically when you are done or an exception occurs.
--
Note: the original post didn't contain any code; it has since incorporated a copy of this code to help further explain the problem.