I want to create a list of all possible 6-element permutations from "abcdefghijklmnopqrstuvwxyz0123456789" so for example it should output:
['aaaaaa','aaaaab','aaaaac'...,'aaaaa0','aaaaa1'...,'aaaaba','aaaabb'...] and so on.
This is what I tried:
import itertools
dictionary = 'abcdefghijklmnopqrstuvwxyz0123456789'
print(list(itertools.product(dictionary, repeat=6)))
but I ran into a MemoryError and then my computer froze up completely, so is there a more efficient way to compute this list?
(I'm using Python 3.8 64-bit)
Do you know how long your list would be? It is 36**6 = 2176782336 items, which is a bit too much to hold in memory. You should use a generator instead:
import itertools

dictionary = 'abcdefghijklmnopqrstuvwxyz0123456789'
for x in itertools.product(dictionary, repeat=6):
    print(''.join(x))
The size of the output is huge: 36^6, that is 2176782336 strings. A 6-character string in Python is already relatively large, because Python stores each string as a separate object with its own header:
from sys import getsizeof
getsizeof('aaaaaa') # 55
At 55 bytes per string, the whole list is almost 120 gigabytes of string objects alone. You probably don't have that much memory on your machine.
If you convert this iterator to a list, it will generate all the strings at once. What you can do instead is consume the iterator returned by itertools.product(dictionary, repeat=6) lazily, without ever converting it to a list.
import itertools
dictionary = 'abcdefghijklmnopqrstuvwxyz0123456789'
for s in itertools.product(dictionary, repeat=6):
    pass  # Do something with ''.join(s), such as writing it to a file.
Without knowing what you are trying to do with the product, I can't specifically tell you how to optimize this. But I can still say that trying to convert this iterator to a list is a bad idea.
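For instance, a minimal sketch that streams the strings to a file instead of holding them in memory (the filename and the 1000-item cap are arbitrary, purely for illustration):

import itertools

dictionary = 'abcdefghijklmnopqrstuvwxyz0123456789'
with open('products.txt', 'w') as out:  # arbitrary filename
    # islice caps the output for the demo; dropping it writes all 36**6 lines (~15 GB of text)
    for chars in itertools.islice(itertools.product(dictionary, repeat=6), 1000):
        out.write(''.join(chars) + '\n')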
I want to perform calculations on a list and assign the results to a second list, and I want to do this as efficiently as possible because I'll be working with a lot of data. What is the best way to do this? My current version uses append:
output = []
for i, f in enumerate(time_series_data):
    if f > x:
        output.append(calculation_with(f))  # the actual calculation with f
    # etc etc
Should I use append, or declare the output list as a list of zeros at the beginning?
Appending the values is not slower than the other ways you could accomplish this.
The code looks fine, and pre-declaring a list of zeroes would not help; it can even create problems, because you might not know in advance how many values will pass the condition f > x.
Since you wrote etc etc, I am not sure how long the loop body is or what operations you need to do there. If possible, use a list comprehension; that would be a little faster.
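For example, the loop above collapses into a comprehension, assuming the calculation is a plain function of f (calculation_with here is a stand-in for whatever you actually compute, not a real API):

output = [calculation_with(f) for f in time_series_data if f > x]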
You can have a look at the article below, which compares the speed of building a list in three ways: list comprehension, append, and pre-initialization.
https://levelup.gitconnected.com/faster-lists-in-python-4c4287502f0a
I have a list of around 1500 numbers (1 to 1500) and I want to get every possible permutation of it, do some calculations on each, and choose the smallest result.
The problem is that the number of possibilities is, as you can imagine, far too big, and my computer just freezes while running the code, so I have to force a restart. My RAM is 8 GB, so it should be big enough (?), or so I hope.
I can specify a start point to limit it, but that won't reduce it much.
This is really important to me, but I feel lost. What do you think I should do to make it run?
Using generators and itertools saves a lot of memory. Generators are just like lists, except that new elements are produced one by one rather than all stored in memory. Even if your problem has a better solution, mastering generators will help you save memory in the future.
Note that in Python 3, map is already lazy (in Python 2, use itertools.imap instead of map).
import itertools

array = list(range(1, 1501))  # i.e. [1, 2, ..., 1500], or other numbers
f = sum                       # or whatever function you have

def fmax_on_perms(array, f):
    perms = itertools.permutations(array)  # generator of all the permutations rather than a static list
    fvalues = map(f, perms)                # generator of f's value on each permutation
    return max(fvalues)
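A quick usage check on a tiny input, with a hypothetical scoring function whose value actually depends on order (unlike sum, which is identical for every permutation):

def weighted(perm):
    # hypothetical example function: weight each element by its position
    return sum(i * v for i, v in enumerate(perm))

print(fmax_on_perms([3, 1, 2], weighted))  # 8, achieved by the permutation (1, 2, 3)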
I am trying to preallocate a list in Python:
c=[1]*mM #preallocate array
My problem is that I run into a MemoryError, since
mM=4999999950000000
What is the best way to deal with this? I am thinking about creating a new object where I split my list at around a value of 500000000.
Is this what I should do, or is there a best practice for creating an array with this many entries?
Using a Generator
You are attempting to create an object that you very likely will not be able to fit into your computer's memory. If you truly need to represent a list of that length, you can use a generator that dynamically produces values as they are needed.
def ones_generator(length):
    for _ in range(length):
        yield 1

gen = ones_generator(4999999950000000)
for i in gen:
    print(i)  # prints 1, a lot
Note: The question is tagged for Python 3, but if you are using Python 2.7, you will want to use xrange instead of range.
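As an aside, the standard library already ships an equivalent lazy stream of ones, so a custom generator isn't strictly needed; itertools.repeat takes an optional count:

from itertools import repeat

gen = repeat(1, 4999999950000000)  # lazily yields 1 that many times, storing nothing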
Using a Dictionary
By the sound of your question, you do not actually need to preallocate a list of that length; you want to store values very sparsely at indexes that are very large. That pattern matches Python's dict type much better than list. You can simply store values in a dictionary, without pre-allocating the keys or space; Python handles that under the hood for you.
dct = {}
dct[100000] = "A string"
dct[592091] = 123
dct[4999999950000000] = "I promise, I need to be at this index"
print(dct[4999999950000000])
# I promise, I need to be at this index
In that example I just stored str and int values, but they can be any object in Python. The best part is that the dictionary's memory use does not depend on the maximum index (as a list's would), only on how many values are actually stored in it.
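A rough way to see the difference (exact sizes vary by Python version, and getsizeof measures only the container, not the stored values):

from sys import getsizeof

dct = {4999999950000000: "sparse value"}
print(getsizeof(dct))  # a few hundred bytes, regardless of the key's magnitude
# A list reaching that index would need ~8 bytes per slot just for pointers:
# 4999999950000000 * 8 bytes is roughly 40 petabytes.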
I am trying to generate all 16^16 values, but there are a few problems, mainly memory.
I tried to generate them in python like this:
for y in range(0, 16**16):
    print '0x%0*X' % (16, y)
This gives me:
OverflowError: range() result has too many items
If I use sys.maxint I get a MemoryError.
To be more precise, I want to generate all combinations of HEX in length of 16, i.e:
0000000000000000
0000000000000001
0000000000000002
...
FFFFFFFFFFFFFFFF
Also, how do I calculate the approximate time it will take me to generate them?
I am open to the use of any programming language as long as I can save them to an output file.
Well... 16^16 = 1.8446744e+19, so let's say you could compute 10 values per nanosecond (that's a 10 GHz rate, by the way). Then it would take 16^16 / 10 nanoseconds to compute them all, or 58.4 years. Also, even if you could somehow compress each value into a single bit (which is impossible), it would require 2 exabytes of memory to hold those values (16^16 / 8 / 2^60).
This seems like a very artificial exercise. Is it homework, or is there a reason for generating this list? It will be very long (see other answers)!
Having said that, you should ask yourself: why is this happening? The answer is that in Python 2.x, range produces an actual list. If you want to avoid that, you can:
Use Python 3.x, in which range does not actually make a list, but a special generator-like object.
Use xrange, which also doesn't actually make a list, but again produces an object.
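For instance, in Python 3 the same loop runs without any list ever being built (though, as the other answer works out, it would still take decades to finish):

# range in Python 3 is lazy, so this never materializes 16**16 items
for y in range(16**16):
    print('0x%016X' % y)  # zero-padded 16-digit hex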
As for timing, all of the time will be in writing to the file or screen. You can get an estimate by making a somewhat smaller list and then doing some math, but you have to be careful that it's big enough that the time is dominated by writing the lines, and not opening and closing the file.
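A sketch of that estimate (the sample size N and the filename are arbitrary; pick N large enough that writing dominates the measurement):

import time

N = 10**6  # sample size, chosen so file I/O dominates the timing
start = time.time()
with open('sample.txt', 'w') as f:  # arbitrary filename
    for y in range(N):
        f.write('0x%016X\n' % y)
elapsed = time.time() - start
years = elapsed * (16**16 / N) / (3600 * 24 * 365)
print('estimated total: about %.0f years' % years)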
But you should also ask yourself how big the resultant file will be... You may not like what you find. Perhaps you mean 2^16?
How would I search through a list of ~5 million 128-bit (or 256-bit, depending on how you look at it) strings quickly and find the duplicates (in Python)? I can turn the strings into numbers, but I don't think that will help much. Since I haven't learned much information theory: is there anything about this in information theory?
And since these are hashes already, there's no point in hashing them again.
If it fits into memory, use set(). I think it will be faster than sorting; O(n log n) for 5 million items is going to cost you.
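A minimal sketch of the set-based approach, a single O(n) pass:

def find_duplicates(hashes):
    seen = set()
    dupes = set()
    for h in hashes:
        if h in seen:  # second (or later) occurrence
            dupes.add(h)
        else:
            seen.add(h)
    return dupes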
If it does not fit into memory, say you have a lot more than 5 million records, divide and conquer: break the records at the midpoint, e.g. 1 x 2^127, and apply either of the above methods to each half. I guess information theory helps here by stating that a good hash function distributes the keys evenly, so the split-at-the-midpoint method should work well.
You can also apply divide and conquer even if it fits into memory; sorting 2 x 2.5 million records is faster than sorting 5 million records.
Load them into memory (5M x 64B = 320MB), sort them, and scan through them to find the duplicates.
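A sketch of that sort-and-scan pass; after sorting, every duplicate sits next to its twin:

def duplicates_by_sorting(hashes):
    ordered = sorted(hashes)
    # compare each element with its successor; equal neighbours are duplicates
    return {a for a, b in zip(ordered, ordered[1:]) if a == b}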
In Python 2.7+ you can use collections.Counter; for older Pythons, use collections.defaultdict(int). Either way is O(n).
First, make a list with some hashes in it:
>>> import hashlib
>>> s=[hashlib.sha1(str(x)).digest() for x in (1,2,3,4,5,1,2)]
>>> s
['5j\x19+y\x13\xb0LTWM\x18\xc2\x8dF\xe69T(\xab', '\xdaK\x927\xba\xcc\xcd\xf1\x9c\x07`\xca\xb7\xae\xc4\xa85\x90\x10\xb0', 'w\xdeh\xda\xec\xd8#\xba\xbb\xb5\x8e\xdb\x1c\x8e\x14\xd7\x10n\x83\xbb', '\x1bdS\x89$s\xa4g\xd0sr\xd4^\xb0Z\xbc 1dz', '\xac4x\xd6\x9a<\x81\xfab\xe6\x0f\\6\x96\x16ZN^j\xc4', '5j\x19+y\x13\xb0LTWM\x18\xc2\x8dF\xe69T(\xab', '\xdaK\x927\xba\xcc\xcd\xf1\x9c\x07`\xca\xb7\xae\xc4\xa85\x90\x10\xb0']
If you are using Python 2.7 or later:
>>> from collections import Counter
>>> c=Counter(s)
>>> duplicates = [k for k in c if c[k]>1]
>>> print duplicates
['\xdaK\x927\xba\xcc\xcd\xf1\x9c\x07`\xca\xb7\xae\xc4\xa85\x90\x10\xb0', '5j\x19+y\x13\xb0LTWM\x18\xc2\x8dF\xe69T(\xab']
If you are using Python 2.6 or earlier:
>>> from collections import defaultdict
>>> d=defaultdict(int)
>>> for i in s:
... d[i]+=1
...
>>> duplicates = [k for k in d if d[k]>1]
>>> print duplicates
['\xdaK\x927\xba\xcc\xcd\xf1\x9c\x07`\xca\xb7\xae\xc4\xa85\x90\x10\xb0', '5j\x19+y\x13\xb0LTWM\x18\xc2\x8dF\xe69T(\xab']
Is this array sorted?
I think the fastest solution would be a heap sort or quicksort, and afterwards a single pass through the array to find the duplicates.
You say you have a list of about 5 million strings, and the list may contain duplicates. You don't say (1) what you want to do with the duplicates (log them, delete all but one occurrence, ...), (2) what you want to do with the non-duplicates, (3) whether this list is a stand-alone structure or whether the strings are keys to some other data that you haven't mentioned, or (4) why you haven't deleted duplicates at input time instead of building a list that contains duplicates.
As a Data Structures and Algorithms 101 exercise, the answer you have accepted is nonsense. If you have enough memory, detecting duplicates using a set should be faster than sorting a list and scanning it. Note that deleting M items from a list of size N is O(M*N). The code for each of the various alternatives is short and rather obvious; why don't you try writing them, timing them, and reporting back?
If this is a real-world problem that you have, you need to provide much more information if you want a sensible answer.