Determine if a dice roll contains certain combinations?

I am writing a dice game simulator in Python. I represent a roll by using a list containing integers from 1-6. So I might have a roll like this:
[1,2,1,4,5,1]
I need to determine if a roll contains scoring combinations, such as 3 of a kind, 4 of a kind, 2 sets of 3, and straights.
Is there a simple Pythonic way of doing this? I've tried several approaches, but they all have turned out to be messy.

Reorganize into a dict with value: count and test for presence of various patterns.

There are two ways to do this:
def getCounts(L):
    d = {}
    for i in range(1, 7):
        d[i] = L.count(i)
    return d  # d is the dictionary which contains the occurrences of all possible
              # dice values, and has a 0 for values that don't occur in the roll
This one is inspired by Ignacio Vazquez-Abrams and dkamins
def getCounts(L):
    d = {}
    for i in set(L):
        d[i] = L.count(i)
    return d  # d is the dictionary which contains the occurrences of
              # all and only the values in the roll
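For what it's worth, collections.Counter (available from Python 2.7) is a one-line version of the same idea; a minimal sketch:
from collections import Counter

def getCounts(L):
    # Counter maps each value present in the roll to its count,
    # e.g. Counter([1, 2, 1, 4, 5, 1]) -> {1: 3, 2: 1, 4: 1, 5: 1};
    # like the first version above, it returns 0 for absent values
    return Counter(L)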

I have written code like this before (but with cards for poker). A certain amount of code-sprawl is unavoidable to encode all of the rules of the game. For example, the code to look for n-of-a-kind will be completely different from the code to look for a straight.
Let's consider n-of-a-kind first. As others have suggested, create a dict containing the counts of each element. Then:
counts = sorted(d.values())
if counts[-1] == 4:
    return four_of_a_kind
if counts[-1] == 3 and counts[-2] == 3:
    return two_sets_of_three
# etc.
Checking for straights requires a different approach. When checking for n-of-a-kind, you need to get the counts and ignore the values. Now we need to examine the values and ignore the counts:
ranks = set(rolls)
if len(ranks) == 6:  # all six values are present
    return long_straight
# etc.
In general, you should be able to identify rules with a similar flavor, abstract out code that helps with those kinds of rules, and then write just a few lines per rule. Some rules may be completely unique and will not be able to share code with other rules. That's just the way the cookie crumbles.
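To make that concrete, here is a minimal sketch of a scorer combining the two kinds of checks; the function name and the exact rule set are illustrative, not from the question:
from collections import Counter

def score_roll(roll):
    # counts ignores the values, ranks ignores the counts, as described above
    counts = sorted(Counter(roll).values())
    ranks = set(roll)
    if counts[-1] == 4:
        return 'four_of_a_kind'
    if len(counts) >= 2 and counts[-1] == 3 and counts[-2] == 3:
        return 'two_sets_of_three'
    if counts[-1] == 3:
        return 'three_of_a_kind'
    if len(ranks) == 6:
        return 'long_straight'
    return 'no_score'

print(score_roll([1, 2, 1, 4, 5, 1]))  # three_of_a_kind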


How to keep on generating random values until the given condition is satisfied without going into infinite iterations?

I am trying to generate random values that satisfy some condition. For example: two variables 'a' and 'b', where the condition is a+b>30,
and I want random values for a and b. The problem is that I am facing an infinite loop when it is not able to satisfy the condition.
I have used eval() to evaluate the condition, since my condition is in string format. I have tried using random.randrange.
var = [{a: 'Value', b: 'Value'}, {}, {}]
cond = [['a+b>20'], [], []]

class check_condition:
    def condition(self, cond, var, i):
        for k, v in var[i].items():
            exec("%s=%s" % (k, v))
        for j in range(len(cond[i])):
            while eval(cond[i][j]) == False:
                reassign(var, i)

class variable:
    def assign(self, var):
        for i in range(len(var)):
            for k, e in var[i].items():
                var[i].add(k, random.randrange(10, 30))

    def reassign(self, var, i):
        for k, e in var[i].items():
            # print("Reassigning")
            var[i].add(k, random.randrange(10, 30))
No errors, but I need some improved logic for this.
Generate two lists of all possible values (note that as is a reserved word in Python, so the variables need other names):
a_vals = list(range(10, 30))
b_vals = list(range(10, 30))
Choose two random values:
a = random.choice(a_vals)
b = random.choice(b_vals)
Check your condition, and if it is false, then remove those two items and continue the loop:
a_vals.remove(a)
b_vals.remove(b)
Update - this is not a good solution, because it allows for only 20 out of 400 combinations.
In addition to that (or more precisely - as a result of that), it can also end without success.
So here is a more complete solution, which chooses randomly from all possible combinations:
import random

aa = range(10, 30)
bb = range(10, 30)
combinations = [(a, b) for a in aa for b in bb]
while True:
    (a, b) = random.choice(combinations)
    if a + b > 30:  # the question's condition
        print(a, b)
        break
I would suggest another approach:
Generate all possible pairs of numbers (a, b) given some limited range for both of them. Your code implies the range [10, 30).
Filter out all pairs that do not satisfy the condition (here, a+b>30).
Use something like random.choice to select a random pair of numbers from the remaining as and bs.
The benefit of this approach is that it can never fall into an infinite loop - it's just an indexing operation using a random integer.
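A minimal sketch of that approach, using the a+b>30 condition from the question:
import random

a_vals = range(10, 30)
b_vals = range(10, 30)
# keep only the pairs that satisfy the condition, then pick one at random;
# an empty list immediately tells you the condition can never be satisfied
valid_pairs = [(a, b) for a in a_vals for b in b_vals if a + b > 30]
if valid_pairs:
    a, b = random.choice(valid_pairs)
    print(a, b)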

Keeping count of values available from among multiple sets

I have the following situation:
I am generating n combinations of size 3, made from n values. Each kth combination [0...n] is pulled from a pool of values located at the kth index of a list of n sets. Each value can appear 3 times. So if I have 10 values, then I have a list of size 10, and each index holds a set of the values 0-10.
So, it seems to me that a good way to do this is to have something keeping count of all the available values across all the sets. Then, if a value is rare (let's say there is only 1 left), and I had a structure where I could look up the rarest value and have the structure tell me which index it was located in, it would make generating the possible combinations much easier.
How could I do this? What about one structure to keep count of elements, and a dictionary to keep track of list indices that contain the value?
edit: I guess I should add that the specific problem I am looking to solve here is how to update the set for every index of the list (or whatever other structures I end up using), so that when I use a value 3 times, it is made unavailable for every other combination.
Thank you.
Another edit
It seems that this may be a little too abstract to be asking for solutions when it's hard to understand what I am even asking for. I will come back with some code soon, please check back in 1.5-2 hours if you are interested.
how to update the set for every index of the list (or whatever other structures I end up using), so that when I use a value 3 times, it is made unavailable for every other combination.
I assume you want to sample the values truly randomly, right? What if you put 3 of each value into a list, shuffle it with random.shuffle, and then just keep popping values from the end of the list when you're building your combination? If I'm understanding your problem right, here's example code:
from random import shuffle

valid_values = [i for i in range(10)]  # the valid values are 0 through 9 in my example, update accordingly for yours
vals = 3 * valid_values  # I have 3 of each valid value
shuffle(vals)  # randomly shuffle them
while len(vals) != 0:
    combination = (vals.pop(), vals.pop(), vals.pop())  # combinations are 3 values?
    print(combination)
EDIT: Updated code based on the added information that you have sets of values (but this still assumes you can use more than one value from a given set):
from random import shuffle

my_sets_of_vals = [......]  # list of sets
valid_values = list()
for i in range(len(my_sets_of_vals)):
    for val in my_sets_of_vals[i]:
        valid_values.append((i, val))  # this can probably be done in a list comprehension but I forgot the syntax
vals = 3 * valid_values  # I have 3 of each valid value
shuffle(vals)  # randomly shuffle them
while len(vals) != 0:
    combination = (vals.pop()[1], vals.pop()[1], vals.pop()[1])  # combinations are 3 values?
    print(combination)
Based on the edit, you could make an object for each value. It could hold the number of times you have used the element and the element itself. When you find you have used an element three times, remove it from the list.
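As a sketch of that bookkeeping (names and example data here are mine, purely illustrative): one Counter of remaining uses per value, plus a dict from each value to the list indices whose sets contain it:
from collections import Counter

list_of_sets = [{0, 1, 2}, {1, 2, 3}, {0, 3, 4}]  # example data
remaining = Counter(dict((v, 3) for s in list_of_sets for v in s))  # 3 uses per value
indices_by_value = {}
for i, s in enumerate(list_of_sets):
    for v in s:
        indices_by_value.setdefault(v, []).append(i)

def use_value(v):
    # spend one use of v; after the third use, remove it from every set
    remaining[v] -= 1
    if remaining[v] == 0:
        for i in indices_by_value[v]:
            list_of_sets[i].discard(v)

# min((c, v) for v, c in remaining.items() if c > 0) then finds the rarest
# still-available value, and indices_by_value tells you where it lives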

Dictionary use instead of dynamic variable names in Python

I have a long text file having truck configurations. In each line some properties of a truck is listed as a string. Each property has its own fixed width space in the string, such as:
2 characters = number of axles
2 characters = weight of the first axle
2 characters = weight of the second axle
...
2 characters = weight of the last axle
2 characters = length of the first axle spacing (spacing means distance between axles)
2 characters = length of the second axle spacing
...
2 characters = length of the last axle spacing
As an example:
031028331004
refers to:
number of axles = 3
first axle weight = 10
second axle weight = 28
third axle weight = 33
first spacing = 10
second spacing = 4
Now, you have an idea about my file structure, here is my problem: I would like to group these trucks in separate lists, and name the lists in terms of axle spacings. Let's say I am using a boolean type of approach, and if the spacing is less than 6, the boolean is 1, if it is greater than 6, the boolean is 0. To clarify, possible outcomes in a three axle truck becomes:
00 #Both spacings > 6
10 #First spacing < 6, second > 6
01 #First spacing > 6, second < 6
11 #Both spacings < 6
Now, as you see there are not too many outcomes for a 3 axle truck. However, if I have a 12 axle truck, the number of "possible" combinations go haywire. The thing is, in reality you would not see all "possible" combinations of axle spacings in a 12 axle truck. There are certain combinations (I don't know which ones, but to figure it out is my aim) with a number much less than the "possible" number of combinations.
I would like the code to create lists and fill them with the strings that define the properties I mentioned above if only such a combination exists. I thought maybe I should create lists with variable names such as:
truck_0300[]
truck_0301[]
truck_0310[]
truck_0311[]
on the fly. However, from what I read in SF and other sources, this is strongly discouraged. How would you do it using the dictionary concept? I understand that dictionaries are like 2 dimensional arrays, with a key (in my case the keys would be something like truck_0300, truck_0301 etc.) and value pair (again in my case, the values would probably be lists that hold the actual strings that belong to the corresponding truck type). However, I could not figure out how to create that dictionary and populate it with variable keys and values.
Any insight would be welcome!
Thanks a bunch!
You are definitely correct that it is almost always a bad idea to try and create "dynamic variables" in a scope. Dictionaries usually are the answer to build up a collection of objects over time and reference back to them...
I don't fully understand your application and format, but in general to define and use your dictionary it would look like this:
trucks = {}
trucks['0300'] = ['a']
trucks['0300'].append('c')
trucks['0300'].extend(['c','d'])
aTruck = trucks['0300']
Now since every one of these should be a list of your strings, you might just want to use a defaultdict, and tell it to use a list as the default value for non-existent keys:
from collections import defaultdict
trucks = defaultdict(list)
trucks['0300']
# []
Note that even though it was a brand new dict that contained no entries, the '0300' key still returned a new list. This means you don't have to check for the key. Just append:
trucks = defaultdict(list)
trucks['0300'].append('a')
A defaultdict is probably what you want, since you do not have to pre-define keys at all. It is there when you are ready for it.
Getting key for the max value
From your comments, here is an example of how to get the key with the max value of a dictionary. It is pretty easy, as you just use max and define how it should determine the key to use for the comparisons:
d = {'a':10, 'b':5, 'c':50}
print max(d.iteritems(), key=lambda (k,v): v)
# ('c', 50)
d['c'] = 1
print max(d.iteritems(), key=lambda (k,v): v)
# ('a', 10)
All you have to do is define how to produce a comparison key. In this case I just tell it to take the value as the key. For really simple key functions like this, where you are just telling it to pull an index or attribute from the object, you can make it more efficient by using the operator module, so that the key function is in C and not in Python as a lambda:
from operator import itemgetter
...
print max(d.iteritems(), key=itemgetter(1))
#('c', 50)
itemgetter creates a new callable that will pull the second item from the tuple that is passed in by the loop.
Now assume each value is actually a list (similar to your structure). We will make it a list of numbers, and you want to find the key which has the list with the largest total:
d = {'a': range(1,5), 'b': range(2,4), 'c': range(5,7)}
print max(d.iteritems(), key=lambda (k,v): sum(v))
# ('c', [5, 6])
If the number of keys is more than 10,000, then this method is not viable. Otherwise, define a dictionary d = {} and loop over your lines:
for line in lines:
    key = line[:4]
    if key not in d:
        d[key] = []
    d[key] += [somevalue]
I hope this helps.
Here's a complete solution from string to output:
from collections import namedtuple, defaultdict

# lightweight class
Truck = namedtuple('Truck', 'weights spacings')

def parse_truck(s):
    # convert to array of numbers
    numbers = [int(''.join(t)) for t in zip(s[::2], s[1::2])]
    # check length
    n = numbers[0]
    assert n * 2 == len(numbers)
    numbers = numbers[1:]
    return Truck(numbers[:n], numbers[n:])

trucks = [
    parse_truck("031028331004"),
    ...
]

# dictionary where every key contains a list by default
trucks_by_spacing = defaultdict(list)

for truck in trucks:
    # (True, False) instead of '10'
    key = tuple(space > 6 for space in truck.spacings)
    trucks_by_spacing[key].append(truck)

print trucks_by_spacing
print trucks_by_spacing[True, False]

Finding the most similar numbers across multiple lists in Python

In Python, I have 3 lists of floating-point numbers (angles), in the range 0-360, and the lists are not the same length. I need to find the triplet (with 1 number from each list) in which the numbers are the closest. (It's highly unlikely that any of the numbers will be identical, since this is real-world data.) I was thinking of using a simple lowest-standard-deviation method to measure agreement, but I'm not sure of a good way to implement this. I could loop through each list, comparing the standard deviation of every possible combination using nested for loops, and have a temporary variable save the indices of the triplet that agrees the best, but I was wondering if anyone had a better or more elegant way to do something like this. Thanks!
I wouldn't be surprised if there is an established algorithm for doing this, and if so, you should use it. But I don't know of one, so I'm going to speculate a little.
If I had to do it, the first thing I would try would be just to loop through all possible combinations of all the numbers and see how long it takes. If your data set is small enough, it's not worth the time to invent a clever algorithm. To demonstrate the setup, I'll include the sample code:
# setup
import collections
import itertools

def distance(nplet):
    '''Takes a pair or triplet (an "n-plet") as a list, and returns its distance.
    A smaller return value means better agreement.'''
    # your choice of implementation here. Example:
    return variance(nplet)

# algorithm
def brute_force(*lists):
    return min(itertools.product(*lists), key=distance)
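For a concrete distance, a minimal variance implementation might look like this (my sketch, not from the answer; note it ignores the 0/360 wraparound of the angle data, which may matter for this question):
def variance(nplet):
    # mean squared deviation from the mean; smaller means tighter agreement
    mean = sum(nplet) / float(len(nplet))
    return sum((x - mean) ** 2 for x in nplet) / len(nplet)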
For a large data set, I would try something like this: first create one triplet for each number in the first list, with its first entry set to that number. Then go through this list of partially-filled triplets and for each one, pick the number from the second list that is closest to the number from the first list and set that as the second member of the triplet. Then go through the list of triplets and for each one, pick the number from the third list that is closest to the first two numbers (as measured by your agreement metric). Finally, take the best of the bunch. This sample code demonstrates how you could try to keep the runtime linear in the length of the lists.
def item_selection(listA, listB, listC):
    # make the list of partially-filled triplets
    triplets = [[a] for a in listA]
    iT = 0
    iB = 0
    while iT < len(triplets):
        # make iB the index of the first value in listB not less than triplets[iT][0]
        while iB < len(listB) and listB[iB] < triplets[iT][0]:
            iB += 1
        if iB == 0:
            triplets[iT].append(listB[0])
        elif iB == len(listB):
            triplets[iT].append(listB[-1])
        else:
            # look at the values in listB just below and just above triplets[iT][0]
            # and add the closer one as the second member of the triplet
            dist_lower = distance([triplets[iT][0], listB[iB - 1]])
            dist_upper = distance([triplets[iT][0], listB[iB]])
            if dist_lower < dist_upper:
                triplets[iT].append(listB[iB - 1])
            elif dist_lower > dist_upper:
                triplets[iT].append(listB[iB])
            else:
                # if they are equidistant, add both, as two separate triplets
                triplets[iT].append(listB[iB - 1])
                iT += 1
                triplets[iT:iT] = [[triplets[iT - 1][0], listB[iB]]]
        iT += 1
    # then another loop while iT < len(triplets) to add in the numbers from listC
    return min(triplets, key=distance)
The thing is, I can imagine situations where this wouldn't actually find the best triplet, for instance if a number from the first list is close to one from the second list but not at all close to anything in the third list. So something you could try is to run this algorithm for all 6 possible orderings of the lists. I can't think of a specific situation where that would fail to find the best triplet, but one might still exist. In any case the algorithm will still be O(N) if you use a clever implementation, assuming the lists are sorted.
def symmetrized_item_selection(listA, listB, listC):
    best_results = []
    for ordering in itertools.permutations([listA, listB, listC]):
        best_results.append(item_selection(*ordering))
    return min(best_results, key=distance)
Another option might be to compute all possible pairs of numbers between list 1 and list 2, between list 1 and list 3, and between list 2 and list 3. Then sort all three lists of pairs together, from best to worst agreement between the two numbers. Starting with the closest pair, go through the list pair by pair and any time you encounter a pair which shares a number with one you've already seen, merge them into a triplet. For a suitable measure of agreement, once you find your first triplet, that will give you a maximum pair distance that you need to iterate up to, and once you get up to it, you just choose the closest triplet of the ones you've found. I think that should consistently find the best possible triplet, but it will be O(N^2 log N) because of the requirement for sorting the lists of pairs.
def pair_sorting(listA, listB, listC):
    # make all possible pairs of values from two lists
    # each pair has the structure ((number, origin_list), (number, origin_list))
    # so we know which lists the numbers came from
    all_pairs = []
    all_pairs += [((nA, 0), (nB, 1)) for (nA, nB) in itertools.product(listA, listB)]
    all_pairs += [((nA, 0), (nC, 2)) for (nA, nC) in itertools.product(listA, listC)]
    all_pairs += [((nB, 1), (nC, 2)) for (nB, nC) in itertools.product(listB, listC)]
    all_pairs.sort(key=lambda p: distance([p[0][0], p[1][0]]))
    # make a dict to track which (number, origin_list)s we've already seen
    pairs_by_number_and_list = collections.defaultdict(list)
    min_distance = float('inf')
    min_triplet = None
    # start with the closest pair
    for pair in all_pairs:
        # for the first value of the current pair, see if we've seen that particular
        # (number, origin_list) combination before
        for pair2 in pairs_by_number_and_list[pair[0]]:
            # if so, that means the current pair shares its first value with
            # another pair, so put the 3 unique values together to make a triplet
            this_triplet = (pair[1][0], pair2[0][0], pair2[1][0])
            # check if the triplet agrees more than the previous best triplet
            this_distance = distance(this_triplet)
            if this_distance < min_distance:
                min_triplet = this_triplet
                min_distance = this_distance
        # do the same thing but checking the second element of the current pair
        for pair2 in pairs_by_number_and_list[pair[1]]:
            this_triplet = (pair[0][0], pair2[0][0], pair2[1][0])
            this_distance = distance(this_triplet)
            if this_distance < min_distance:
                min_triplet = this_triplet
                min_distance = this_distance
        # finally, add the current pair to the list of pairs we've seen
        pairs_by_number_and_list[pair[0]].append(pair)
        pairs_by_number_and_list[pair[1]].append(pair)
    return min_triplet
N.B. I've written all the code samples in this answer out a little more explicitly than you'd do it in practice to help you to understand how they work. But when doing it for real, you'd use more list comprehensions and such things.
N.B.2. No guarantees that the code works :-P but it should get the rough idea across.

Memory Efficient Alternatives to Python Dictionaries

In one of my current side projects, I am scanning through some text looking at the frequency of word triplets. In my first go at it, I used the default dictionary three levels deep. In other words, topDict[word1][word2][word3] returns the number of times these words appear in the text, topDict[word1][word2] returns a dictionary with all the words that appeared following words 1 and 2, etc.
This functions correctly, but it is very memory intensive. In my initial tests it used something like 20 times the memory of just storing the triplets in a text file, which seems like an overly large amount of memory overhead.
My suspicion is that many of these dictionaries are being created with many more slots than are actually being used, so I want to replace the dictionaries with something else that is more memory efficient when used in this manner. I would strongly prefer a solution that allows key lookups along the lines of the dictionaries.
From what I know of data structures, a balanced binary search tree using something like red-black or AVL would probably be ideal, but I would really prefer not to implement them myself. If possible, I'd prefer to stick with standard python libraries, but I'm definitely open to other alternatives if they would work best.
So, does anyone have any suggestions for me?
Edited to add:
Thanks for the responses so far. A few of the answers so far have suggested using tuples, which didn't really do much for me when I condensed the first two words into a tuple. I am hesitant to use all three as a key since I want it to be easy to look up all third words given the first two. (i.e. I want something like the result of topDict[word1, word2].keys()).
The current dataset I am playing around with is the most recent version of Wikipedia For Schools. The results of parsing the first thousand pages, for example, is something like 11MB for a text file where each line is the three words and the count all tab separated. Storing the text in the dictionary format I am now using takes around 185MB. I know that there will be some additional overhead for pointers and whatnot, but the difference seems excessive.
Some measurements. I took 10MB of free e-book text and computed trigram frequencies, producing a 24MB file. Storing it in different simple Python data structures took this much space in kB, measured as RSS from running ps, where d is a dict, keys and freqs are lists, and a,b,c,freq are the fields of a trigram record:
295760 S. Lott's answer
237984 S. Lott's with keys interned before passing in
203172 [*] d[(a,b,c)] = int(freq)
203156 d[a][b][c] = int(freq)
189132 keys.append((a,b,c)); freqs.append(int(freq))
146132 d[intern(a),intern(b)][intern(c)] = int(freq)
145408 d[intern(a)][intern(b)][intern(c)] = int(freq)
83888 [*] d[a+' '+b+' '+c] = int(freq)
82776 [*] d[(intern(a),intern(b),intern(c))] = int(freq)
68756 keys.append((intern(a),intern(b),intern(c))); freqs.append(int(freq))
60320 keys.append(a+' '+b+' '+c); freqs.append(int(freq))
50556 pair array
48320 squeezed pair array
33024 squeezed single array
The entries marked [*] have no efficient way to look up a pair (a,b); they're listed only because others have suggested them (or variants of them). (I was sort of irked into making this because the top-voted answers were not helpful, as the table shows.)
'Pair array' is the scheme below in my original answer ("I'd start with the array with keys being the first two words..."), where the value table for each pair is represented as a single string. 'Squeezed pair array' is the same, leaving out the frequency values that are equal to 1 (the most common case). 'Squeezed single array' is like squeezed pair array, but gloms key and value together as one string (with a separator character). The squeezed single array code:
import collections

def build(file):
    pairs = collections.defaultdict(list)
    for line in file:  # N.B. file assumed to be already sorted
        a, b, c, freq = line.split()
        key = ' '.join((a, b))
        pairs[key].append(c + ':' + freq if freq != '1' else c)
    out = open('squeezedsinglearrayfile', 'w')
    for key in sorted(pairs.keys()):
        out.write('%s|%s\n' % (key, ' '.join(pairs[key])))

def load():
    return open('squeezedsinglearrayfile').readlines()

if __name__ == '__main__':
    build(open('freqs'))
I haven't written the code to look up values from this structure (use bisect, as mentioned below), or implemented the fancier compressed structures also described below.
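For completeness, a lookup along those lines might look like this (a hypothetical sketch, not part of the original answer; it assumes the 'key|values' line format produced by build above):
import bisect

def lookup(lines, a, b):
    # lines is the sorted list returned by load()
    key = a + ' ' + b + '|'
    i = bisect.bisect_left(lines, key)
    if i < len(lines) and lines[i].startswith(key):
        entries = lines[i].rstrip('\n').split('|', 1)[1].split()
        # each entry is 'word:freq', or just 'word' when freq == 1
        return dict((e.split(':') + ['1'])[:2] for e in entries)
    return {}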
Original answer: A simple sorted array of strings, each string being a space-separated concatenation of words, searched using the bisect module, should be worth trying for a start. This saves space on pointers, etc. It still wastes space due to the repetition of words; there's a standard trick to strip out common prefixes, with another level of index to get them back, but that's rather more complex and slower. (The idea is to store successive chunks of the array in a compressed form that must be scanned sequentially, along with a random-access index to each chunk. Chunks are big enough to compress, but small enough for reasonable access time. The particular compression scheme applicable here: if successive entries are 'hello george' and 'hello world', make the second entry be '6world' instead, 6 being the length of the common prefix. Or maybe you could get away with using zlib? Anyway, you can find out more in this vein by looking up dictionary structures used in full-text search.)

So specifically, I'd start with the array with keys being the first two words, with a parallel array whose entries list the possible third words and their frequencies. It might still suck, though -- I think you may be out of luck as far as batteries-included memory-efficient options.
Also, binary tree structures are not recommended for memory efficiency here. E.g., this paper tests a variety of data structures on a similar problem (unigrams instead of trigrams though) and finds a hashtable to beat all of the tree structures by that measure.
I should have mentioned, as someone else did, that the sorted array could be used just for the wordlist, not bigrams or trigrams; then for your 'real' data structure, whatever it is, you use integer keys instead of strings -- indices into the wordlist. (But this keeps you from exploiting common prefixes except in the wordlist itself. Maybe I shouldn't suggest this after all.)
Use tuples.
Tuples can be keys to dictionaries, so you don't need to nest dictionaries.
d = {}
d[ word1, word2, word3 ] = 1
Also as a plus, you could use a defaultdict, so that elements that don't have entries always return 0, and so that you can say d[w1,w2,w3] += 1 without checking whether the key already exists.
example:
from collections import defaultdict
d = defaultdict(int)
d["first","word","tuple"] += 1
If you need to find all words "word3" that are tupled with (word1, word2), then search for them in dictionary.keys() using a list comprehension:
if you have a tuple, t, you can get the first two items using slices:
>>> a = (1,2,3)
>>> a[:2]
(1, 2)
a small example for searching tuples with list comprehensions:
>>> b = [(1,2,3),(1,2,5),(3,4,6)]
>>> search = (1,2)
>>> [a[2] for a in b if a[:2] == search]
[3, 5]
You see here, we got a list of all items that appear as the third item in the tuples that start with (1,2)
In this case, ZODB¹ BTrees might be helpful, since they are much less memory-hungry. Use a BTrees.OOBTree (Object keys to Object values) or BTrees.OIBTree (Object keys to Integer values), and use 3-word tuples as your key.
Something like:
from BTrees.OOBTree import OOBTree as BTree
The interface is, more or less, dict-like, with the added bonus (for you) that .keys, .items, .iterkeys and .iteritems accept two optional arguments, min and max:
>>> t=BTree()
>>> t['a', 'b', 'c']= 10
>>> t['a', 'b', 'z']= 11
>>> t['a', 'a', 'z']= 12
>>> t['a', 'd', 'z']= 13
>>> print list(t.keys(('a', 'b'), ('a', 'c')))
[('a', 'b', 'c'), ('a', 'b', 'z')]
¹ Note that if you are on Windows and work with Python >2.4, I know there are packages for more recent python versions, but I can't recollect where.
PS They exist in the CheeseShop ☺
A couple attempts:
I figure you're doing something similar to this:
from __future__ import with_statement
import time
from collections import deque, defaultdict
# Just used to generate some triples of words
def triplegen(words="/usr/share/dict/words"):
    d = deque()
    with open(words) as f:
        for i in range(3):
            d.append(f.readline().strip())
        while d[-1] != '':
            yield tuple(d)
            d.popleft()
            d.append(f.readline().strip())

if __name__ == '__main__':
    class D(dict):
        def __missing__(self, key):
            self[key] = D()
            return self[key]
    h = D()
    for a, b, c in triplegen():
        h[a][b][c] = 1
    time.sleep(60)
That gives me ~88MB.
Changing the storage to
h[a, b, c] = 1
takes ~25MB
Interning a, b, and c makes it take about 31MB. My case is a bit special because my words never repeat in the input. You might try some variations yourself and see if one of these helps you.
Are you implementing Markovian text generation?
If your chains map 2 words to the probabilities of the third I'd use a dictionary mapping K-tuples to the 3rd-word histogram. A trivial (but memory-hungry) way to implement the histogram would be to use a list with repeats, and then random.choice gives you a word with the proper probability.
Here's an implementation with the K-tuple as a parameter:
import random
# can change these functions to use a dict-based histogram
# instead of a list with repeats
def default_histogram(): return []
def add_to_histogram(item, hist): hist.append(item)
def choose_from_histogram(hist): return random.choice(hist)
K = 2  # look 2 words back
words = ...
d = {}

# build histograms
for i in xrange(len(words) - K - 1):
    key = tuple(words[i:i+K])  # a tuple, so it can be used as a dict key
    word = words[i+K]
    d.setdefault(key, default_histogram())
    add_to_histogram(word, d[key])

# generate text
start = random.randrange(len(words) - K - 1)
key = tuple(words[start:start+K])
for i in xrange(NUM_WORDS_TO_GENERATE):
    word = choose_from_histogram(d[key])
    print word,
    key = key[1:] + (word,)
You could try to use the same dictionary, only one level deep:
topDictionary[word1+delimiter+word2+delimiter+word3]
delimiter could be plain " " (or use the tuple (word1, word2, word3)).
This would be easiest to implement.
I believe you will see a little improvement; if it is not enough...
...I'll think of something...
Ok, so you are basically trying to store a sparse 3D space. The kind of access patterns you want for this space is crucial for the choice of algorithm and data structure. Considering your data source, do you want to feed this to a grid? If you don't need O(1) access:
In order to get memory efficiency you want to subdivide that space into subspaces with a similar number of entries (like a BTree). So, a data structure with:
firstWordRange
secondWordRange
thirdWordRange
numberOfEntries
a sorted block of entries.
next and previous blocks in all 3 dimensions
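As a rough sketch of one such block record (purely illustrative; the field names just follow the list above, with numberOfEntries becoming len(entries)):
from collections import namedtuple

# one subspace block: three key ranges, a sorted block of entries,
# and links to the next/previous blocks in each of the 3 dimensions
Block = namedtuple('Block', ['first_word_range', 'second_word_range',
                             'third_word_range', 'entries', 'neighbors'])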
Scipy has sparse matrices, so if you can make the first two words a tuple, you can do something like this:
import numpy as N
from scipy import sparse

word_index = {}
count = sparse.lil_matrix((word_count*word_count, word_count), dtype=N.int)
for word1, word2, word3 in triple_list:
    w1 = word_index.setdefault(word1, len(word_index))
    w2 = word_index.setdefault(word2, len(word_index))
    w3 = word_index.setdefault(word3, len(word_index))
    w1_w2 = w1 * word_count + w2
    count[w1_w2, w3] += 1
If memory is simply not big enough, pybsddb can help store a disk-persistent map.
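A minimal sketch of that route, using the Python 2 standard-library bsddb wrapper (an assumption on my part; pybsddb itself exposes a richer API, and keys and values must be strings here):
import bsddb

db = bsddb.hashopen('trigrams.db', 'c')  # 'c' creates the file if needed
key = ' '.join(('the', 'quick', 'brown'))
db[key] = str(int(db[key]) + 1) if db.has_key(key) else '1'  # counts stored as strings
db.sync()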
You could use a numpy multidimensional array. You'll need to use numbers rather than strings to index into the array, but that can be solved by using a single dict to map words to numbers.
import numpy
w = {'word1': 0, 'word2': 1, 'word3': 2, 'word4': 3}  # 0-based indices for a 4x4x4 array
a = numpy.zeros((4, 4, 4))
Then to index into your array, you'd do something like:
a[w[word1], w[word2], w[word3]] += 1
That syntax is not beautiful, but numpy arrays are about as efficient as anything you're likely to find. Note also that I haven't tried this code out, so I may be off in some of the details. Just going from memory here.
Here's a tree structure that uses the bisect library to maintain a sorted list of words. Each lookup in O(log2(n)).
import bisect

class WordList(object):
    """Leaf-level is list of words and counts."""
    def __init__(self):
        self.words = [('\xff-None-', 0)]
    def count(self, wordTuple):
        assert len(wordTuple) == 1
        word = wordTuple[0]
        loc = bisect.bisect_left(self.words, (word,))
        if self.words[loc][0] != word:
            self.words.insert(loc, (word, 0))
        self.words[loc] = (word, self.words[loc][1] + 1)
    def getWords(self):
        return self.words[:-1]

class WordTree(object):
    """Above non-leaf nodes are words and either trees or lists."""
    def __init__(self):
        self.words = [('\xff-None-', None)]
    def count(self, wordTuple):
        head, tail = wordTuple[0], wordTuple[1:]
        loc = bisect.bisect_left(self.words, (head,))
        if self.words[loc][0] != head:
            if len(tail) == 1:
                newList = WordList()
            else:
                newList = WordTree()
            self.words.insert(loc, (head, newList))
        self.words[loc][1].count(tail)
    def getWords(self):
        return self.words[:-1]

t = WordTree()
for a in (('the', 'quick', 'brown'), ('the', 'quick', 'fox')):
    t.count(a)

for w1, wt1 in t.getWords():
    print w1
    for w2, wt2 in wt1.getWords():
        print "  ", w2
        for w3 in wt2.getWords():
            print "    ", w3
For simplicity, this uses a dummy value in each tree and list. This saves endless if-statements to determine if the list was actually empty before we make a comparison. It's only empty once, so the if-statements are wasted for all n-1 other words.
You could put all words in a dictionary. The key would be the word, and the value its number (index). Then you use it like this:
Word1=indexDict[word1]
Word2=indexDict[word2]
Word3=indexDict[word3]
topDictionary[Word1][Word2][Word3]
Insert in indexDict with:
if word not in indexDict:
    indexDict[word] = len(indexDict)
