Is random.sample truly random? - python

I have a list of 155k files. When I call random.sample(files, 100), the results are not the same as the previous sample, but they look similar.
Is there a better alternative to random.sample that returns a new list of 100 random files?
import glob
import random

folders = get_all_folders('/data/gazette-txt-files')

# get all files from all folders
def get_all_files():
    files = []
    for folder in folders:
        files.append(glob.glob("/data/gazette-txt-files/" + folder + "/*.txt"))
    # convert 2D list into 1D
    formatted_list = []
    for file in files:
        for f in file:
            formatted_list.append(f)
    # 200 random text files
    return random.sample(formatted_list, 200)

For purposes like randomly selecting elements from a list, random.sample suffices. True randomness isn't provided, and I'm not aware that this is even theoretically possible.
random (by default) uses a Pseudo Random Number Generator (PRNG) called Mersenne Twister (MT) which, although suitable for applications such as simulations (and minor things like picking from a list of paths), shouldn't be used in areas where security is a concern due to the fact that it is deterministic.
This is why Python 3.6 also introduces the secrets module with PEP 506, which uses SystemRandom (backed by os.urandom) by default and is capable of producing cryptographically secure pseudo-random numbers.
Of course, bottom line is, that even if you use a PRNG or CPRNG to generate your numbers they're still going to be pseudo random.
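For completeness, here is a short sketch of drawing the sample through the secrets module's SystemRandom (OS entropy) instead of the default Mersenne Twister; formatted_list is the list built in the question, and for merely picking files random.sample is already fine.
import secrets

rng = secrets.SystemRandom()
chosen = rng.sample(formatted_list, 200)  # same API as random.sample, different entropy source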

You may need to seed the generator; see random.seed() in the documentation.
Just call random.seed() before you get the samples.

Related

Can I precompute/pregenerate pseudo-random numbers for numpy?

I don't think I'm the first one to come up with this slightly unorthodox idea, but I can't seem to get Google to show me why it is bad, nor how to do it properly.
I have a piece of code that is CPU-bound, and the second most expensive function is np.random.randint(...), which is called for single numbers (one at a time). I don't have hard requirements for "true" randomness between multiple executions of the program. Hence, I thought it could be smart to precompute/cache a whole bunch (~2 million) of random numbers, save these somewhere, and then have numpy feed me those numbers as required instead of running the RNG.
Could somebody please enlighten me as to why this is a bad idea, or how to do it? Thank you!
Whether or not this is a bad idea really depends on your application (with respect to the randomness). However, generating two million random numbers isn't really expensive:
numbers = np.random.randint(..., size=2_000_000)
This takes about 40 ms on my machine. Loading from file instead can result in even larger execution times (depending on your file system and how busy it is).
So precomputing all the random numbers and then fetching one at a time seems like a decent improvement (indeed, calling np.random.randint two million times takes about 500 times as long). For example:
numbers = iter(np.random.randint(..., size=2_000_000))
single_number = next(numbers)
If you cannot precompute all numbers (perhaps because your boundaries change dynamically), then you can use the standard library's random.randint, which should be faster than np.random.randint for single draws. Then saving the random numbers from one run and reusing them for the next might be a thing:
numbers = []
numbers.append(random.randint(...))  # dynamically generate the random numbers
np.save('numbers.npy', numbers)      # eventually save the numbers
Then for the next run, to ensure some level of variation, you can shuffle these numbers after loading them:
numbers = np.load('numbers.npy')
np.random.shuffle(numbers)
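Putting these pieces together, here is a small sketch (the cache file name and bounds are just placeholders) that reuses a saved batch if present, otherwise generates and saves one, then shuffles it and hands out one number at a time:
import os
import numpy as np

CACHE = 'numbers.npy'  # hypothetical cache file
if os.path.exists(CACHE):
    numbers = np.load(CACHE)
else:
    numbers = np.random.randint(0, 100, size=2_000_000)  # example bounds
    np.save(CACHE, numbers)

np.random.shuffle(numbers)  # vary the order between runs
numbers = iter(numbers)

single_number = next(numbers)  # fetch one number at a time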
Generate your numbers, save them to the filesystem, and load them back. This script could help you:
import pickle
import numpy as np
# Numbers generation
random_numbers = list(np.random.randint(0, 100, 10000))
# Save numbers
with open('filename.pickle', 'wb') as handle:
    pickle.dump(random_numbers, handle, protocol=pickle.HIGHEST_PROTOCOL)

# Load numbers
with open('filename.pickle', 'rb') as handle:
    loaded_random_numbers = pickle.load(handle)
# Check equality
print(random_numbers == loaded_random_numbers)
I have used this before and it works like a charm for orders of millions:
First create an array of random numbers one time:
random_arr = np.random.randint(0, 100, size=20000)  # pick bounds to suit your data
And every time you need a random number, simply pick one using this:
number = np.random.choice(random_arr)
It should boost your performance significantly.

Find a seed that creates a specific number sequence with random.choice(range(128))?

I want to find a seed that creates a specific number sequence:
[115,91,45,76,78,93,35,5,29,8,99,88,98,70,40,116,11,39,102,41,124,98,120,57,36,67,57,23,52,34,75,32,117,66,12,19,86,67,62,121,60,5,54,37,65,18,5,56,66,115,32,99,73,70,115,73,123,74,31]
I wonder if I could find a seed that gives me this result with the function get() I created:
def get():
    seed(x)
    return [choice(range(128)) for _ in range(59)]
where x is a constant equal to the number that, applied as the seed, gives me the sequence at the top.
Here is a little program I made to try to find it, but right now I'm at about 1.6 million tested seeds and still nothing.
from random import choice, seed

lc = [115,91,45,76,78,93,35,5,29,8,99,88,98,70,40,116,11,39,102,41,124,98,120,57,36,67,57,23,52,34,75,32,117,66,12,19,86,67,62,121,60,5,54,37,65,18,5,56,66,115,32,99,73,70,115,73,123,74,31]
sd, h = 0, 0
while 1:
    seed(sd)
    for c, o in enumerate(lc):
        if not choice(range(128)) == o:
            if c > h:
                print(f"[Seed {sd}] {c} matches")
                h = c
            sd += 1
            break
Can someone help me find one of the right seeds?
I hope it is not possible.
Technically, it is possible to code a pseudo-random generator that allows restoring the seed from a short sequence of results, but a normal pseudo-random generator should disallow that.
E.g. for the quite common Mersenne Twister the internal state is 624 ints, but your seed is just one int.
Even if you brute-force a seed that gives you the same short sequence, the actual internal state may be different and subsequent generation will go a completely different way.
Any seeded PRNG will have a formula to generate the next number from the internal data it holds. With something as simple as a Linear Congruential PRNG then it is easy to back-calculate the internal data and the numbers used in the formula from the output. With a more complex PRNG, such as Mersenne Twister, then the back-calculation becomes very difficult.
One solution would be to copy the sequence of numbers you want and store them somewhere, pulling them from the store as needed. Alternatively read the documentation of the PRNG used to generate those numbers initially to see if a back-calculation is possible.
If the numbers came from a cryptographically secure PRNG then your task becomes orders of magnitude more difficult.
Using brute force and assuming that every seed represents an extraction from your set of 128 numbers, with replacement, you have a probability of
1/128^59 ≈ 1/(2.1153791 × 10^124)
for every extraction to get your exact set of numbers (assuming a uniform distribution for every number extracted by your random function). That is a probability pretty near to zero.
So yes, you could hang (almost) forever on that brute-force search.
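As a quick sanity check of that estimate (assuming uniform, independent draws):
p = 1 / 128 ** 59
print(p)  # ≈ 4.7e-125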

Endless Permutation and RAM

I have a list with around 1500 numbers (1 to 1500) and I want to get all the possible permutations out of it to do some calculations and choose the smallest number out of all of them.
Problem is that the number of possibilities, as you can figure, is way too big and my computer just freezes while running the code, so I have to force a restart. Also my RAM is 8 GB, so it should be big enough (?), or so I hope.
To limit it I can specify a start point, but that won't reduce it much.
Also, it's a super important thing but I feel so lost. What do you think I should do to make it run?
Use of generators and itertools saves a lot of memory. Generators are just like lists, with the exception that new elements are generated one by one rather than stored in memory. Even if your problem has a better solution, mastering generators will help you save memory in the future.
Note that in Python 3 map is already a generator (for Python 2, use imap instead of map).
import itertools

array = [1, 2, ..., 1500]  # or other numbers
f = sum  # or whatever function you have

def fmax_on_perms(array, f):
    # generator of all the permutations rather than a static list
    perms = itertools.permutations(array)
    # generator of values of your function on the different permutations
    fvalues = map(f, perms)
    return max(fvalues)
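Since iterating over all 1500! permutations will never finish, in practice you may only want to look at a limited number of them; a hedged sketch using itertools.islice (the 10,000 cap is an arbitrary example, and it takes the first permutations in order rather than a random sample):
import itertools

array = list(range(1, 1501))
limited = itertools.islice(itertools.permutations(array), 10_000)  # first 10k only
best = max(map(sum, limited))  # sum stands in for your own function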

python data type to track duplicates

I often keep track of duplicates with something like this:
processed = set()
for big_string in strings_generator:
    if big_string not in processed:
        processed.add(big_string)
        process(big_string)
I am dealing with massive amounts of data so don't want to maintain the processed set in memory. I have a version that uses sqlite to store the data on disk, but then this process runs much slower.
To cut down on memory use, what do you think of using hashes like this:
processed = set()
for big_string in strings_generator:
    key = hash(big_string)
    if key not in processed:
        processed.add(key)
        process(big_string)
The drawback is I could lose data through occasional hash collisions.
1 collision in 1 billion hashes would not be a problem for my use.
I tried the md5 hash but found generating the hashes became a bottleneck.
What would you suggest instead?
I'm going to assume you are hashing web pages. You have to hash at most 55 billion web pages (and that measure almost certainly overlooks some overlap).
You are willing to accept a less than one in a billion chance of collision, which means that if we look at a hash function whose number of collisions is close to what we would get if the hash were truly random[^1], we want a hash range of size (55*10^9)*10^9. That is log2((55*10^9)*10^9) ≈ 66 bits.
[^1]: since the hash can be considered to be chosen at random for this purpose,
p(collision) = (occupied range)/(total range)
Since there is a speed issue, but no real cryptographic concern, we can use a >66-bit non-cryptographic hash with the nice collision-distribution property outlined above.
It looks like we are looking for the 128-bit version of the Murmur3 hash. People have been reporting speed increases upwards of 12x comparing Murmur3_128 to MD5 on a 64-bit machine. You can use this library to do your speed tests. See also this related answer, which:
shows speed test results in the range of Python's str hash, whose speed you have already deemed acceptable elsewhere – though Python's hash is a 32-bit hash, leaving you only 2^32/10^9 (that is, only 4) values stored with a less than one in a billion chance of collision.
spawned a library of python bindings that you should be able to use directly.
Finally, I hope to have outlined the reasoning that could allow you to compare with other functions of varied size should you feel the need for it (e.g. if you up your collision tolerance, if the size of your indexed set is smaller than the whole Internet, etc, ...).
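As an illustration (not a benchmark), here is what the question's loop could look like with a 128-bit Murmur3 digest, assuming the third-party mmh3 package (pip install mmh3):
import mmh3

processed = set()
for big_string in strings_generator:
    key = mmh3.hash128(big_string)  # 128-bit integer digest
    if key not in processed:
        processed.add(key)
        process(big_string)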
You have to decide which is more important: space or time.
If time, then you need to create unique representations of your large_item which take as little space as possible (probably some str value) that is easy (i.e. quick) to calculate and will not have collisions, and store them in a set.
If space, find the quickest disk-backed solution you can and store the smallest possible unique value that will identify a large_item.
So either way, you want small unique identifiers -- depending on the nature of large_item this may be a big win, or not possible.
Update
they are strings of html content
Perhaps a hybrid solution then: Keep a set in memory of the normal Python hash, while also keeping the actual html content on disk, keyed by that hash; when you check to see if the current large_item is in the set and get a positive, double-check with the disk-backed solution to see if it's a real hit or not, then skip or process as appropriate. Something like this:
import dbf

on_disk = dbf.Table('/tmp/processed_items', 'hash N(17,0); value M')
index = on_disk.create_index(lambda rec: rec.hash)
fast_check = set()

def slow_check(hashed, item):
    matches = on_disk.search((hashed,))
    for record in matches:
        if item == record.value:
            return True
    return False

for large_item in many_items:
    hashed = hash(large_item)  # only calculate once
    if hashed not in fast_check or not slow_check(hashed, large_item):
        on_disk.append((hashed, large_item))
        fast_check.add(hashed)
        process(large_item)
FYI: dbf is a module I wrote which you can find on PyPI
If many_items already resides in memory, you are not creating another copy of the large_item; you are just storing a reference to it in the processed set.
If many_items is a file or some other generator, you'll have to look at other alternatives.
E.g. if many_items is a file, perhaps you can store a pointer to the item's position in the file instead of the actual item.
As you have already seen, there are a few options, but unfortunately none of them can fully address the situation, partly because of:
Memory constraints: storing every object in memory is not an option.
No perfect hash function: for a huge data set the chance of collision is there.
Better hash functions (md5) are slower.
Use of a database like sqlite would actually make things slower.
Reading the following excerpt from your question:
I have a version that uses sqlite to store the data on disk, but then this process runs much slower.
I feel that if you work on this, it might help you marginally. Here is how it should be done:
Use tmpfs to create a ramdisk. tmpfs has several advantages over other implementations because it supports swapping less-used space out to swap space.
Store the sqlite database on the ramdisk.
Change the size of the ramdisk and profile your code to check your performance.
I suppose you already have working code to save your data in sqlite. You only need to define a tmpfs and use its path to store your database; a sketch follows below.
Caveat: This is a linux only solution
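A rough sketch under the assumptions above; the mount command and the database path are hypothetical, and the tmpfs must already be mounted (e.g. sudo mount -t tmpfs -o size=2g tmpfs /mnt/ramdisk):
import sqlite3

conn = sqlite3.connect('/mnt/ramdisk/processed.db')  # path on the ramdisk
conn.execute('CREATE TABLE IF NOT EXISTS processed (key TEXT PRIMARY KEY)')

def already_processed(big_string):
    # INSERT OR IGNORE leaves rowcount at 0 when the key is already present
    cur = conn.execute('INSERT OR IGNORE INTO processed (key) VALUES (?)',
                       (big_string,))
    conn.commit()
    return cur.rowcount == 0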
A Bloom filter? http://en.wikipedia.org/wiki/Bloom_filter
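For reference, a minimal Bloom-filter sketch (sizes and hash count are arbitrary examples) using hashes derived from blake2b with different salts; false positives are possible but false negatives are not, so a hit may still warrant a real check:
import hashlib

class BloomFilter:
    def __init__(self, num_bits=8 * 1024 * 1024, num_hashes=7):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8 + 1)

    def _positions(self, item):
        for i in range(self.num_hashes):
            digest = hashlib.blake2b(item.encode(), salt=str(i).encode()).digest()
            yield int.from_bytes(digest[:8], 'big') % self.num_bits

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))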
Well, you can always decorate large_item with a processed flag, or something similar.
You can give the str type's __hash__ function a try.
In [1]: hash('http://stackoverflow.com')
Out[1]: -5768830964305142685
It's definitely not a cryptographic hash function, but with a little luck you won't have too many collisions. It works as described here: http://effbot.org/zone/python-hash.htm.
I suggest you profile standard Python hash functions and choose the fastest: they are all "safe" against collisions enough for your application.
Here are some benchmarks for hash, md5 and sha1:
In [37]: very_long_string = 'x' * 1000000
In [39]: %timeit hash(very_long_string)
10000000 loops, best of 3: 86 ns per loop
In [40]: from hashlib import md5, sha1
In [42]: %timeit md5(very_long_string).hexdigest()
100 loops, best of 3: 2.01 ms per loop
In [43]: %timeit sha1(very_long_string).hexdigest()
100 loops, best of 3: 2.54 ms per loop
md5 and sha1 are comparable in speed. hash is 20k times faster for this string and it does not seem to depend much on the size of the string itself.
How does your sqlite version work? If you insert all your strings into a database table and then run the query "select distinct big_string from table_name", the database should optimize it for you.
Another option for you would be to use hadoop.
Another option could be to split the strings into partitions such that each partition is small enough to fit in memory. Then you only need to check for duplicates within each partition. The formula you use to decide the partition will choose the same partition for each duplicate. The easiest way is to just look at the first few digits of the string, e.g.:
from collections import defaultdict

d = defaultdict(int)
for big_string in strings_generator:
    d[big_string[:4]] += 1
print(d)
Now you can decide on your partitions, go through the generator again and write each big_string to a file that has the start of the big_string in the filename. Then you can just use your original method on each file and loop through all the files; see the sketch below.
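A rough sketch of that partitioning step (the output directory and the four-character prefix are arbitrary choices, and it assumes the strings contain no newlines worth preserving):
import os

def partition(strings_generator, out_dir='partitions'):
    os.makedirs(out_dir, exist_ok=True)
    for big_string in strings_generator:
        # pick the partition from the first characters, sanitised for a filename
        name = ''.join(c if c.isalnum() else '_' for c in big_string[:4])
        with open(os.path.join(out_dir, name + '.txt'), 'a') as f:
            f.write(big_string.replace('\n', ' ') + '\n')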
This can be achieved much more easily by performing simpler checks first, then investigating those cases with more elaborate checks. The example below contains extracts of your code, but it performs the checks on much smaller sets of data. It does this by first matching on a simple case that is cheap to check, and if you find that a (filesize, checksum) pair is not discriminating enough, you can easily exchange it for a cheap yet more rigorous check.
# Need to define the following functions
def GetFileSize(filename):
    pass

def GenerateChecksum(filename):
    pass

def LoadBigString(filename):
    pass

# Returns a list of duplicate pairs.
def GetDuplicates(filename_list):
    duplicates = list()
    # Maps a quick hash to a list of filenames.
    filename_lists_by_quick_checks = dict()
    for filename in filename_list:
        quickcheck = GetQuickCheck(filename)
        if quickcheck not in filename_lists_by_quick_checks:
            filename_lists_by_quick_checks[quickcheck] = list()
        filename_lists_by_quick_checks[quickcheck].append(filename)
    for quickcheck, filenames in filename_lists_by_quick_checks.items():
        big_strings = GetBigStrings(filenames)
        duplicates.extend(GetBigstringDuplicates(big_strings))
    return duplicates

def GetBigstringDuplicates(strings_generator):
    processed = set()
    for big_string in strings_generator:
        if big_string not in processed:
            processed.add(big_string)
            process(big_string)

# Returns a tuple containing (filesize, checksum).
def GetQuickCheck(filename):
    return (GetFileSize(filename), GenerateChecksum(filename))

# Returns a list of big_strings from a list of filenames.
def GetBigStrings(file_list):
    big_strings = list()
    for filename in file_list:
        big_strings.append(LoadBigString(filename))
    return big_strings

Generating non-repeating random numbers in Python

Ok, this is one of those trickier-than-it-sounds questions, so I'm turning to Stack Overflow because I can't think of a good answer. Here is what I want: I need Python to generate a simple list of numbers from 0 to 1,000,000,000 in random order to be used for serial numbers (using a random number so that you can't tell how many have been assigned or do timing attacks as easily, i.e. guessing the next one that will come up). These numbers are stored in a database table (indexed) along with the information linked to them. The program generating them doesn't run forever, so it can't rely on internal state.
No big deal right? Just generate a list of numbers, shove them into an array and use Python "random.shuffle(big_number_array)" and we're done. Problem is I'd like to avoid having to store a list of numbers (and thus read the file, pop one off the top, save the file and close it). I'd rather generate them on the fly. Problem is that the solutions I can think of have problems:
1) Generate a random number and then check if it has already been used. If it has been used generate a new number, check, repeat as needed until I find an unused one. Problem here is that I may get unlucky and generate a lot of used numbers before getting one that is unused. Possible fix: use a very large pool of numbers to reduce the chances of this (but then I end up with silly long numbers).
2) Generate a random number and then check if it has already been used. If it has been used add or subtract one from the number and check again, keep repeating until I hit an unused number. Problem is this is no longer a random number as I have introduced bias (eventually I will get clumps of numbers and you'd be able to predict the next number with a better chance of success).
3) Generate a random number and then check if it has already been used. If it has been used add or subtract another randomly generated random number and check again, problem is we're back to simply generating random numbers and checking as in solution 1.
4) Suck it up and generate the random list and save it, have a daemon put them into a Queue so there are numbers available (and avoid constantly opening and closing a file, batching it instead).
5) Generate much larger random numbers and hash them (i.e. using MD5) to get a smaller numeric value, we should rarely get collisions, but I end up with larger than needed numbers again.
6) Prepend or append time based information to the random number (i.e. unix timestamp) to reduce chances of a collision, again I get larger numbers than I need.
Anyone have any clever ideas that will reduce the chances of a "collision" (i.e. generating a random number that is already taken) but will also allow me to keep the number "small" (i.e. less than a billion (or a thousand million for your europeans =)).
Answer and why I accepted it:
So I will simply go with 1, and hope it's not an issue. However, if it is I will go with the deterministic solution of generating all the numbers and storing them so that there is a guarantee of getting a new random number, and I can use "small" numbers (i.e. 9 digits instead of an MD5/etc.).
This is a neat problem, and I've been thinking about it for a while (with solutions similar to Sjoerd's), but in the end, here's what I think:
Use your point 1) and stop worrying.
Assuming real randomness, the probability that a random number has already been chosen before is the count of previously chosen numbers divided by the size of your pool, i.e. the maximal number.
If you say you only need a billion numbers, i.e. nine digits: Treat yourself to 3 more digits, so you have 12-digit serial numbers (that's three groups of four digits – nice and readable).
Even when you're close to having chosen a billion numbers previously, the probability that your new number is already taken is still only 0.1%.
Do step 1 and draw again. You can still check for an "infinite" loop, say don't try more than 1000 times or so, and then fallback to adding 1 (or something else).
You'll win the lottery before that fallback ever gets used.
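A hedged sketch of option 1 with that retry cap; is_taken() stands in for whatever database lookup you already have, and both it and the constants are assumptions for illustration:
import random

MAX_TRIES = 1000
POOL_SIZE = 10**12  # 12-digit serials, as suggested above

def new_serial(is_taken):
    for _ in range(MAX_TRIES):
        candidate = random.randrange(POOL_SIZE)
        if not is_taken(candidate):
            return candidate
    # astronomically unlikely fallback; see the discussion above
    raise RuntimeError("no free serial found in %d tries" % MAX_TRIES)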
You could use Format-Preserving Encryption to encrypt a counter. Your counter just goes from 0 upwards, and the encryption uses a key of your choice to turn it into a seemingly random value of whatever radix and width you want.
Block ciphers normally have a fixed block size of e.g. 64 or 128 bits. But Format-Preserving Encryption allows you to take a standard cipher like AES and make a smaller-width cipher, of whatever radix and width you want (e.g. radix 10, width 9 for the parameters of the question), with an algorithm which is still cryptographically robust.
It is guaranteed to never have collisions (because cryptographic algorithms create a 1:1 mapping). It is also reversible (a 2-way mapping), so you can take the resulting number and get back to the counter value you started with.
AES-FFX is one proposed standard method to achieve this.
I've experimented with some basic Python code for AES-FFX--see Python code here (but note that it doesn't fully comply with the AES-FFX specification). It can e.g. encrypt a counter to a random-looking 7-digit decimal number. E.g.:
0000000 0731134
0000001 6161064
0000002 8899846
0000003 9575678
0000004 3030773
0000005 2748859
0000006 5127539
0000007 1372978
0000008 3830458
0000009 7628602
0000010 6643859
0000011 2563651
0000012 9522955
0000013 9286113
0000014 5543492
0000015 3230955
... ...
For another example in Python, using another non-AES-FFX (I think) method, see this blog post "How to Generate an Account Number" which does FPE using a Feistel cipher. It generates numbers from 0 to 2^32-1.
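Along the same lines, here is a very rough sketch (not the linked article's code, and not cryptographically strong) of a balanced Feistel permutation on 32-bit values: every counter in 0..2**32-1 maps to a unique output in the same range, so a plain counter yields non-repeating, random-looking serials; the round function and key are arbitrary choices:
def feistel_permute(value, rounds=4, key=0x5bf03635):
    left, right = value >> 16, value & 0xFFFF
    for i in range(rounds):
        # any deterministic round function keeps the construction invertible
        f = (right * 0x9E3779B1 ^ key ^ i) & 0xFFFF
        left, right = right, left ^ f
    return (left << 16) | right

serials = [feistel_permute(counter) for counter in range(10)]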
With some modular arithmetic and prime numbers, you can create all numbers between 0 and a big prime, out of order. If you choose your numbers carefully, the next number is hard to guess.
modulo = 87178291199  # prime
incrementor = 17180131327  # relative prime
current = 433494437  # some start value
for i in xrange(1, 100):
    print current
    current = (current + incrementor) % modulo
If they don't have to be random, but just not obviously linear (1, 2, 3, 4, ...), then here's a simple algorithm:
Pick two prime numbers. One of them will be the largest number you can generate, so it should be around one billion. The other should be fairly large.
max_value = 795028841
step = 360287471
previous_serial = 0
for i in xrange(0, max_value):
    previous_serial += step
    previous_serial %= max_value
    print "Serial: %09i" % previous_serial
Just store the previous serial each time so you know where you left off. I can't prove mathematically that this works (it's been too long since those particular classes), but it's demonstrably correct with smaller primes:
s = set()
with open("test.txt", "w+") as f:
    previous_serial = 0
    for i in xrange(0, 2711):
        previous_serial += 1811
        previous_serial %= 2711
        assert previous_serial not in s
        s.add(previous_serial)
You could also prove it empirically with 9-digit primes, it'd just take a bit more work (or a lot more memory).
This does mean that given a few serial numbers, it'd be possible to figure out what your values are--but with only nine digits, it's not likely that you're going for unguessable numbers anyway.
If you don't need something cryptographically secure, but just "sufficiently obfuscated"...
Galois Fields
You could try operations in Galois fields, e.g. GF(2^32), to map a simple incrementing counter x to a seemingly random serial number y:
x = counter_value
y = some_galois_function(x)
- Multiply by a constant
  - Inverse is to multiply by the reciprocal of the constant
- Raise to a power: x^n
- Reciprocal: x^-1
  - Special case of raising to a power x^n
  - It is its own inverse
- Exponentiation of a primitive element: a^x
  - Note that this doesn't have an easily-calculated inverse (discrete logarithm)
  - Ensure a is a primitive element, aka generator
Many of these operations have an inverse, which means, given your serial number, you can calculate the original counter value from which it was derived.
As for finding a library for Galois Field for Python... good question. If you don't need speed (which you wouldn't for this) then you could make your own. I haven't tried these:
NZMATH
Finite field Python package
Sage, although it's a whole environment for mathematical computing, much more than just a Python library
Matrix multiplication in GF(2)
Pick a suitable 32×32 invertible matrix in GF(2), and multiply a 32-bit input counter by it. This is conceptually related to LFSR, as described in S.Lott's answer.
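A hedged sketch of that idea: build a guaranteed-invertible 32×32 bit-matrix as a product of unit-triangular matrices (always invertible over GF(2)) and multiply the counter, viewed as a bit vector, by it; the fixed seed plays the role of a key and is an arbitrary choice:
import random

def random_unit_triangular(lower, rng):
    # rows stored as 32-bit ints; the diagonal bit is forced to 1
    rows = []
    for i in range(32):
        if lower:
            mask = (1 << i) - 1  # bits strictly below the diagonal
        else:
            mask = ((1 << 32) - 1) ^ ((1 << (i + 1)) - 1)  # strictly above
        rows.append((rng.getrandbits(32) & mask) | (1 << i))
    return rows

def matvec_gf2(rows, x):
    # y_i = parity of (row_i AND x)
    y = 0
    for i, row in enumerate(rows):
        y |= (bin(row & x).count('1') & 1) << i
    return y

rng = random.Random(1234)  # fixed "key"
L = random_unit_triangular(True, rng)
U = random_unit_triangular(False, rng)

def scramble(counter):
    # L and U are invertible, so their product is too: a bijection on 32-bit values
    return matvec_gf2(L, matvec_gf2(U, counter))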
CRC
A related possibility is to use a CRC calculation, based on the remainder of long division by an irreducible polynomial in GF(2). Python code is readily available for CRCs (crcmod, pycrc), although you might want to pick a different irreducible polynomial than is normally used, for your purposes. I'm a little fuzzy on the theory, but I think a 32-bit CRC should generate a unique value for every possible combination of 4-byte inputs; check this. It's quite easy to check experimentally, by feeding the output back into the input and verifying that it produces a complete cycle of length 2^32 - 1 (zero just maps to zero). You may need to get rid of any initial/final XORs in the CRC algorithm for this check to work.
I think you are overestimating the problems with approach 1). Unless you have hard real-time requirements, just checking by random choice terminates rather fast. The probability of needing more than a given number of iterations decays exponentially. With 100M numbers output (10% fill factor) you'll have a one in a billion chance of requiring more than 9 iterations. Even with 50% of the numbers taken, you'll on average need 2 iterations and have a one in a billion chance of requiring more than 30 checks. Even the extreme case where 99% of the numbers are already taken might still be reasonable: you'll average 100 iterations and have a 1 in a billion chance of requiring 2062 iterations.
The standard Linear Congruential random number generator's seed sequence CANNOT repeat until the full set of numbers from the starting seed value have been generated. Then it MUST repeat precisely.
The internal seed is often large (48 or 64 bits). The generated numbers are smaller (32 bits usually) because the entire set of bits are not random. If you follow the seed values they will form a distinct non-repeating sequence.
The question is essentially one of locating a good seed that generates "enough" numbers. You can pick a seed, and generate numbers until you get back to the starting seed. That's the length of the sequence. It may be millions or billions of numbers.
There are some guidelines in Knuth for picking suitable seeds that will generate very long sequences of unique numbers.
You can run 1) without running into the problem of too many wrong random numbers if you just decrease the random interval by one each time.
For this method to work, you will need to save the numbers already given (which you want to do anyway) and also save the quantity of numbers taken.
It is pretty obvious that, after having collected 10 numbers, your pool of possible random numbers will have been decreased by 10. Therefore, you must not choose a number between 1 and 1,000,000 but between 1 and 999,990. Of course this number is not the real number but only an index (unless the 10 numbers collected were 999,991, 999,992, …); you'd have to count now from 1, omitting all the numbers already collected.
Of course, your algorithm should be smarter than just counting from 1 to 1,000,000, but I hope you understand the method.
I don’t like drawing random numbers until I get one which fits either. It just feels wrong.
My solution: https://github.com/glushchenko/python-unique-id. I think you should extend the matrix for 1,000,000,000 variations and have fun.
I'd rethink the problem itself... You don't seem to be doing anything sequential with the numbers... and you've got an index on the column which has them. Do they actually need to be numbers?
Consider a sha hash... you don't actually need the entire thing. Do what git or other url shortening services do, and take first 3/4/5 characters of the hash. Given that each character now has 36 possible values instead of 10, you have 2,176,782,336 combinations instead of 999,999 combinations (for six digits). Combine that with a quick check on whether the combination exists (a pure index query) and a seed like a timestamp + random number and it should do for almost any situation.
Do you need this to be cryptographically secure or just hard to guess? How bad are collisions? Because if it needs to be cryptographically strong and have zero collisions, it is, sadly, impossible.
I started trying to write an explanation of the approach used below, but just implementing it was easier and more accurate. This approach has the odd behavior that it gets faster the more numbers you've generated. But it works, and it doesn't require you to generate all the numbers in advance.
As a simple optimization, you could easily make this class use a probabilistic algorithm (generate a random number, and if it's not in the set of used numbers add it to the set and return it) at first, keep track of the collision rate, and switch over to the deterministic approach used here once the collision rate gets bad.
import random

class NonRepeatingRandom(object):
    def __init__(self, maxvalue):
        self.maxvalue = maxvalue
        self.used = set()

    def next(self):
        if len(self.used) >= self.maxvalue:
            raise StopIteration
        r = random.randrange(0, self.maxvalue - len(self.used))
        result = 0
        for i in range(r + 1):
            result += 1
            while result in self.used:
                result += 1
        self.used.add(result)
        return result

    __next__ = next  # Python 3 compatibility

    def __iter__(self):
        return self

    def __getitem__(self, index):
        raise NotImplementedError

    def get_all(self):
        return [i for i in self]
>>> n = NonRepeatingRandom(20)
>>> n.get_all()
[12, 14, 13, 2, 20, 4, 15, 16, 19, 1, 8, 6, 7, 9, 5, 11, 10, 3, 18, 17]
If it is enough for you that a casual observer can't guess the next value, you can use things like a linear congruential generator or even a simple linear feedback shift register to generate the values and keep the state in the database in case you need more values. If you use these right, the values won't repeat until the end of the universe. You'll find more ideas in the list of random number generators.
If you think there might be someone with a serious interest in guessing the next values, you can use a database sequence to count the values you generate and encrypt them with an encryption algorithm or another cryptographically strong perfect hash function. However, you need to take care that the encryption algorithm isn't easily breakable if someone can get hold of a sequence of successive numbers you generated; a simple RSA, for instance, won't do because of the Franklin-Reiter Related Message Attack.
Bit of a late answer, but I haven't seen this suggested anywhere.
Why not use the uuid module to create globally unique identifiers?
To generate a list of totally random numbers within a defined threshold, you can do the following:
import random as rnd

plist = list()
length_of_list = 100
upbound = 1000
lowbound = 0
while len(plist) < length_of_list:
    plist.append(rnd.randint(lowbound, upbound))
    plist = list(set(plist))
I bumped into the same problem and opened a question with a different title before getting to this one. My solution is a random sample generator of indexes (i.e. non-repeating numbers) in the interval [0,maximal), called itersample. Here are some usage examples:
import random
generator=itersample(maximal)
another_number=generator.next() # pick the next non-repeating random number
or
import random
generator=itersample(maximal)
for random_number in generator:
    # do something with random_number
    if some_condition: # exit loop when needed
        break
itersample generates non-repeating random integers, storage need is limited to the picked numbers, and the time needed to pick n numbers should be (as some tests confirm) O(n log(n)), regardless of maximal.
Here is the code of itersample:
import random

def itersample(c): # c = upper bound of generated integers
    sampled = []
    def fsb(a, b): # free spaces before middle of interval a,b
        fsb.idx = a + (b + 1 - a) // 2
        fsb.last = sampled[fsb.idx] - fsb.idx if len(sampled) > 0 else 0
        return fsb.last
    while len(sampled) < c:
        sample_index = random.randrange(c - len(sampled))
        a, b = 0, len(sampled) - 1
        if fsb(a, a) > sample_index:
            yielding = sample_index
            sampled.insert(0, yielding)
            yield yielding
        elif fsb(b, b) < sample_index + 1:
            yielding = len(sampled) + sample_index
            sampled.insert(len(sampled), yielding)
            yield yielding
        else: # sample_index falls inside sampled list
            while a + 1 < b:
                if fsb(a, b) < sample_index + 1:
                    a = fsb.idx
                else:
                    b = fsb.idx
            yielding = a + 1 + sample_index
            sampled.insert(a + 1, yielding)
            yield yielding
You are stating that you store the numbers in a database.
Wouldn't it then be easier to store all the numbers there, and ask the database for a random unused number?
Most databases support such a request.
Examples
MySQL:
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
PostgreSQL:
SELECT column FROM table
ORDER BY RANDOM()
LIMIT 1
