How to generate a 10-digit unique ID in Python?

I want to generate a 10-digit unique ID in Python. I have tried the methods below, but none of them worked:
get_random_string(10) -> generates a random string, which has a probability of collision
str(uuid.uuid4())[:10] -> since I am taking only a prefix, this also has a probability of collision
sequential -> I can generate unique sequential IDs by padding with zeros, but they are sequential and easy to guess, so I want to avoid them
Is there a proper way to generate a 10-digit unique ID?

It depends on how many of them you need, how fast you are creating them, whether they have to be unique only on your machine or across multiple machines, and which characters (digits only, alphanumeric, ...) you want to allow. Generating unique IDs is not an easy task.
A few ideas:
The easiest (but of course not random at all) way is starting at 0000000000 and incrementing all the way up to zzzzzzzzzz.
The safest (but probably slowest) way is storing the generated IDs and, whenever you generate a new one, checking whether it already exists.
Base your ID on the current timestamp + an internal counter (for IDs generated at the same timestamp, depending on the resolution of your timestamp) + optionally a machine identifier if it needs to be unique across multiple machines + some randomness + some shuffling.
But the more randomness you put into your ID, the less control you have over the output, so you need to check for collisions, especially if the number of generated IDs is large with respect to the number of possible IDs.
So at the end of the day, there are only two ways to guarantee uniqueness (within a certain scope):
check whether the newly created ID was already issued before, or
calculate it with a collision-free function whose input you control so it can never be used twice (for instance by incrementing the input after each ID you generate).
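A minimal sketch of the second option (all names here are illustrative, not a standard API): derive the ID from a millisecond timestamp plus a per-process counter and encode the result in base 36 so it fits in 10 characters. As long as one machine never issues more than 1000 IDs per millisecond, the input to the encoding never repeats:

import itertools
import string
import time

_ALPHABET = string.digits + string.ascii_lowercase  # base-36 alphabet
_counter = itertools.count()

def next_id():
    # (timestamp in ms) * 1000 + (counter mod 1000) cannot repeat as long as
    # fewer than 1000 IDs are issued per millisecond on this machine
    n = int(time.time() * 1000) * 1000 + next(_counter) % 1000
    digits = []
    while n:
        n, r = divmod(n, 36)
        digits.append(_ALPHABET[r])
    return "".join(reversed(digits)).rjust(10, "0")

print(next_id())  # a 10-character base-36 string; still roughly sequential, so guessable

Note that this trades guessability for guaranteed uniqueness; if guessability matters, you are back to adding randomness and checking for collisions, as described above.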

Try using the hashlib module, like so:
import hashlib
import random

def generate_id():
    # Generate a random number
    random_num = str(random.randint(0, 99999999)).encode()
    # Generate a SHA-256 hash of the random number
    hash_obj = hashlib.sha256(random_num)
    hex_digest = hash_obj.hexdigest()
    # Return the first 10 hex characters of the digest
    return hex_digest[:10]

Related

Python: Hashing a lot of large numbers

I've got a large-data problem. My Python program calculates numbers from 1 to 2^32, and I want to know if I've already calculated a given number.
I could track them as a bitmap using half a gigabyte of memory. But since some numbers can be put in a bag along with others (approx. 100 numbers per bag), I was wondering if there is another way of storing my values, like hashes but less memory-consuming.
Thank you for your help.
As I said in my comment above, you can use a set that stores all computed numbers.
And when you compute a new number, you check if it's already in the set.
Let's say compute is the function that computes the numbers:
computedNumbers = set()  # initialize the set
for i in range(1, 2**32):
    number = compute(i)
    if number in computedNumbers:
        print("Number", number, "is already computed!")
    else:
        computedNumbers.add(number)  # add the number to the set
Hope this helps you.
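If memory is the main concern, the bitmap idea mentioned in the question can be sketched with a plain bytearray (one bit per candidate number, roughly 512 MB for the full 2^32 range); this is only a sketch of that alternative, not a drop-in replacement for the set-based code above:

SIZE = 2**32
seen = bytearray(SIZE // 8)  # one bit per candidate number, about 512 MB

def mark_seen(n):
    seen[n >> 3] |= 1 << (n & 7)

def was_seen(n):
    return bool(seen[n >> 3] & (1 << (n & 7)))

The set uses less memory while few numbers have been computed, but its memory grows with the number of stored values, whereas the bitmap's cost is fixed up front.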

summing entries in a variable based on another variable(make unique) in python lists

I have a question as to how I can perform this task in Python.
I have an array of entries like:
[IPAddress, connections, policystatus, activity flag, longitude, latitude] (all as strings)
ex.
['172.1.21.26','54','1','2','31.15424','12.54464']
['172.1.21.27','12','2','4','31.15424','12.54464']
['172.1.27.34','40','1','1','-40.15474','-54.21454']
['172.1.2.45','32','1','1','-40.15474','-54.21454']
...
up to about 110,000 entries, with about 4,000 different combinations of longitude-latitude.
I want to compute the average connections, average policy status, and average activity flag for each location,
something like this:
[longitude,latitude,avgConn,avgPoli,avgActi]
['31.15424','12.54464','33','2','3']
['-40.15474','-54.21454','31','1','1']
...
so on
And I have about 195 files with ~110,000 entries each (sort of a big-data problem).
My files are .csv, but I'm reading them as plain text to work with them more easily in Python (not sure if this is the best idea).
I'm still new to Python, so I'm not really sure what the best approach is, but I sincerely appreciate any help or guidance with this problem.
Thanks in advance!
No. If you have the files as .csv, treating them as plain text does not make sense, since Python ships with the excellent csv module.
You could read the csv rows into a dict to group them, but I'd suggest writing the data into a proper database and using SQL's AVG() and GROUP BY. Python ships with bindings for most databases. If you have none installed, consider using the sqlite3 module.
I'll only give you the algorithm; you will learn more by writing the actual code yourself.
Use a dictionary with the key as the pair (longitude, latitude) and the value as a list of the form [ConnectionSum, PolicyStatusSum, ActivityFlagSum, Count].
Loop over the entries once.
a. For each entry, if the location already exists, add the connection, policy status and activity values to the existing sums and increment that location's Count.
b. If the location does not exist yet, initialise its value to [0, 0, 0, 0] and then do the same.
Do 1 and 2 for all files.
After all the entries have been scanned, loop over the dictionary and divide ConnectionSum, PolicyStatusSum and ActivityFlagSum by that location's Count to get the averages (dividing by the total number of entries would mix all locations together). A sketch of this approach is shown below.
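A minimal sketch of that approach using the csv module (the file pattern and column order are assumptions based on the example rows above):

import csv
import glob

sums = {}  # (longitude, latitude) -> [conn_sum, policy_sum, activity_sum, count]

for path in glob.glob("data/*.csv"):  # placeholder pattern for the ~195 files
    with open(path, newline="") as f:
        for ip, conn, policy, activity, lon, lat in csv.reader(f):
            entry = sums.setdefault((lon, lat), [0, 0, 0, 0])
            entry[0] += int(conn)
            entry[1] += int(policy)
            entry[2] += int(activity)
            entry[3] += 1

for (lon, lat), (conn_sum, policy_sum, activity_sum, count) in sums.items():
    print([lon, lat, conn_sum / count, policy_sum / count, activity_sum / count])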
As long as duplicate locations are restricted to the same file (or even close to each other within a file), all you need is the stream-processing paradigm. For example, if you know that duplicate locations only appear within a single file, read each file, calculate the averages, then close the file. As long as you let the old data fall out of scope, the garbage collector will get rid of it for you. Basically, do this:
def processFile(pathToFile):
    ...

totalResults = ...
for path in filePaths:
    partialResults = processFile(path)
    totalResults = combine...partialResults...with...totalResults
An even more elegant solution would be to use the O(1)-memory method of calculating averages "on-line". If, for example, you are averaging 5, 6, 7, you would do 5/1=5.0, (5.0*1+6)/2=5.5, (5.5*2+7)/3=6. At each step, you only keep track of the current average and the number of elements. This solution uses the minimal amount of memory (no more than the size of your final result!) and doesn't care about the order in which you visit elements. It would go something like the pseudocode below; see http://docs.python.org/library/csv.html for the functions you'll need from the csv module.
import csv

def allTheRecords():
    for path in filePaths:
        for row in csv.somehow_get_rows(path):  # pseudocode: see the csv docs
            yield SomeStructure(row)

averages = {}  # dict: keys are tuples (lat, long), values are an arbitrary
               # data structure, e.g. a dict {'avgConn', 'avgPoli', 'avgActi', 'num'}
for record in allTheRecords():
    position = (record.lat, record.long)
    currentAverage = averages.get(position, {'avgConn': 0, 'avgPoli': 0, 'avgActi': 0, 'num': 0})
    newAverage = ...  # apply the running-average math described above
    averages[position] = newAverage
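To make that pseudocode concrete, here is a hedged sketch of the running-average loop (the file list and column order are assumptions, as above):

import csv

file_paths = ["part1.csv", "part2.csv"]  # placeholder list of input files
averages = {}  # (lat, lon) -> {'conn': .., 'poli': .., 'acti': .., 'num': ..}

def records():
    for path in file_paths:
        with open(path, newline="") as f:
            for ip, conn, poli, acti, lon, lat in csv.reader(f):
                yield lat, lon, float(conn), float(poli), float(acti)

for lat, lon, conn, poli, acti in records():
    cur = averages.setdefault((lat, lon), {'conn': 0.0, 'poli': 0.0, 'acti': 0.0, 'num': 0})
    n = cur['num']
    # running mean: new_mean = (old_mean * n + new_value) / (n + 1)
    cur['conn'] = (cur['conn'] * n + conn) / (n + 1)
    cur['poli'] = (cur['poli'] * n + poli) / (n + 1)
    cur['acti'] = (cur['acti'] * n + acti) / (n + 1)
    cur['num'] = n + 1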
(Do note that the notion of an "average at a location" is not well-defined. Well, it is well-defined, but not very useful: if you knew the exact location of every IP event to infinite precision, the average of everything would be itself. The only reason you can compress your dataset is that your latitude and longitude have finite precision. If you run into this issue because you acquire more precise data, you can choose to round to an appropriate precision. It may be reasonable to round to within 10 meters or so; see latitude and longitude. This requires just a little bit of math/geometry.)

faster membership testing in python than set()

I have to check the presence of millions of elements (strings of 20-30 letters) against a collection containing 10-100k of those elements. Is there a faster way of doing that in Python than set()?
import sys

# load ids
ids = set(x.strip() for x in open(idfile))

for line in sys.stdin:
    id = line.strip()
    if id in ids:
        # print fastq
        print id
        # update ids
        ids.remove(id)
set is as fast as it gets.
However, if you can rewrite your code to create the set once and never change it, you can use the frozenset built-in type. It's exactly the same, except immutable.
If you're still having speed problems, you need to speed your program up in other ways, such as by using PyPy instead of CPython.
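For instance, a minimal sketch of the frozenset variant (the filename is a placeholder; this only applies if you never need to remove ids):

import sys

ids = frozenset(x.strip() for x in open("ids.txt"))  # built once, never modified
for line in sys.stdin:
    id = line.strip()
    if id in ids:  # same O(1) hash lookup as a regular set
        print(id)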
As I noted in my comment, what's probably slowing you down is that you're sequentially checking each line from sys.stdin for membership of your 'master' set. This is going to be really, really slow, and doesn't allow you to make use of the speed of set operations. As an example:
#!/usr/bin/env python
import random
# create two million-element sets of random numbers
a = set(random.sample(xrange(10000000),1000000))
b = set(random.sample(xrange(10000000),1000000))
# a intersection b
c = a & b
# a difference c
d = list(a - c)
print "set d is all remaining elements in a not common to a intersection b"
print "length of d is %s" % len(d)
The above runs in ~6 wallclock seconds on my five-year-old machine, and it's testing for membership in larger sets than you require (unless I've misunderstood you). Most of that time is actually taken up creating the sets, so you won't even have that overhead. The fact that the strings you refer to are long isn't relevant here; creating a set creates a hash table, as agf explained. I suspect (though again, it's not clear from your question) that if you can get all your input data into a set before you do any membership testing, it'll be a lot faster than reading it in one item at a time and then checking for set membership.
You should try to split your data to make the search faster. A tree structure would allow you to find very quickly whether the data is present or not.
For example, start with a simple map that links the first letter to all the keys starting with that letter; then you don't have to search all the keys, only a smaller part of them.
This would look like:
ids = {}
for id in (x.strip() for x in open(idfile)):
    ids.setdefault(id[0], set()).add(id)

for line in sys.stdin:
    id = line.strip()
    if id in ids.get(id[0], set()):
        # print fastq
        print id
        # update ids
        ids[id[0]].remove(id)
Creation will be a bit slower, but the search should be much faster (I would expect up to 20 times faster, if the first character of your keys is well distributed and not always the same).
This is a first step; you could do the same thing with the second character and so on. The search would then just be walking the tree one letter at a time...
As mentioned by urschrei, you should "vectorize" the check.
It is faster to check for the presence of a million elements once (as that is done in C) than to do the check for one element a million times.
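A minimal sketch of that vectorized check (reading all candidates first, then doing one set intersection; note this drops the per-line remove-after-print behaviour of the original loop, and the filename is a placeholder):

import sys

ids = set(x.strip() for x in open("ids.txt"))          # the 10-100k 'master' ids
queries = set(line.strip() for line in sys.stdin)      # the millions of candidates
for id in queries & ids:                               # a single C-level intersection
    print(id)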

Python unhash value

I am a newbie to Python. Can I unhash, or rather how can I unhash, a value? I am using the standard hash() function. What I would like to do is first hash a value, send it somewhere, and then unhash it, as such:
#process X
hashedVal = hash(someVal)
#send n receive in process Y
someVal = unhash(hashedVal)
#for example print it
print someVal
Thanks in advance.
It can't be done.
A hash is not a compressed version of the original value; it is a number (or something similar) derived from the original value. The nature of hash implementations is that it is possible (but statistically unlikely, if the hash algorithm is a good one) for two different objects to produce the same hash value.
This is known as the pigeonhole principle, which basically states that if you have N different items and want to place them into M different categories, where N is larger than M (i.e. more items than categories), you're going to end up with some categories containing multiple items. Since a hash value is typically much smaller than the data it hashes, it follows the same principle.
As such, it is impossible to go back once you have the hash value. You need a different way of transporting data than this.
For instance, an example (but not a very good one) of a hash algorithm would be to calculate the number modulo 3 (i.e. the remainder after dividing by 3). Then you would get the following hash values:
1 --> 1 <--+- same hash value, but different original values
2 --> 2    |
3 --> 0    |
4 --> 1 <--+
Are you trying to use the hash function in this way in order to:
Save space (you have observed that the hash value is much smaller in size than the original data)
Secure transportation (you have observed that the hash value is difficult to reverse)
Transport data (you have observed that the hash number/string is easier to transport than a complex object hierarchy)
... ?
Knowing why you want to do this might give you a better answer than just "it can't be done".
For instance, for the above 3 different observations, here's a way to do each of them properly:
Compression/Decompression, for instance using gzip or zlib (the two typically available in most programming languages/runtimes)
Encryption/Decryption, for instance using RSA, AES or a similar secure encryption algorithm
Serialization/Deserialization, which is code built to take a complex object hierarchy and produce either a binary or textual representation that later on can be deserialized back into new objects
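For instance, here is a minimal sketch of the compression and serialization options using only standard-library modules (zlib and json here; pickle would also work for richer Python objects):

import json
import zlib

payload = {"user": "alice", "scores": [1, 2, 3]}  # illustrative data

# Serialization: structure -> text -> bytes, fully reversible
blob = json.dumps(payload).encode("utf-8")

# Compression: smaller bytes, still fully reversible (unlike a hash)
compressed = zlib.compress(blob)
restored = json.loads(zlib.decompress(compressed).decode("utf-8"))
assert restored == payload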
Even though I'm almost 8 years late with an answer, I want to say that it is possible to unhash data (though not with the std hash() function).
The previous answers all describe cryptographic hash functions, which by design compute hashes that are impossible (or at least very hard) to reverse.
However, this is not the case with all hash functions.
Solution
You can use basehash python lib (pip install basehash) to achieve what you want.
There is an important thing to keep in mind though: in order to be able to unhash the data, you need to hash it without loss of data. This generally means that the bigger the pool of data types and values you would like to hash, the bigger the hash length has to be, so that you won't get hash collisions.
Anyway, here's a simple example of how to hash/unhash data:
import basehash
hash_fn = basehash.base36() # you can initialize a 36, 52, 56, 58, 62 and 94 base fn
hash_value = hash_fn.hash(1) # returns 'M8YZRZ'
unhashed = hash_fn.unhash('M8YZRZ') # returns 1
You can define the hash length on hash function initialization and hash other data types as well.
I leave out the explanation of the necessity for various bases and hash lengths to the readers who would like to find out more about hashing.
You can't "unhash" data, hash functions are irreversible due to the pigeonhole principle
http://en.wikipedia.org/wiki/Hash_function
http://en.wikipedia.org/wiki/Pigeonhole_principle
I think what you are looking for is encryption/decryption. (Or compression or serialization, as mentioned in other answers/comments.)
This is not possible in general. A hash function necessarily loses information, and python's hash is no exception.

Generating non-repeating random numbers in Python

Ok, this is one of those trickier-than-it-sounds questions, so I'm turning to Stack Overflow because I can't think of a good answer. Here is what I want: I need Python to generate a simple list of numbers from 0 to 1,000,000,000 in random order, to be used as serial numbers (using random numbers so that you can't tell how many have been assigned or do timing attacks as easily, i.e. guess the next one that will come up). These numbers are stored in a database table (indexed) along with the information linked to them. The program generating them doesn't run forever, so it can't rely on internal state.
No big deal, right? Just generate a list of numbers, shove them into an array, use Python's random.shuffle(big_number_array) and we're done. The problem is that I'd like to avoid having to store a list of numbers (and thus read the file, pop one off the top, save the file and close it). I'd rather generate them on the fly. The problem is that the solutions I can think of have problems:
1) Generate a random number and then check if it has already been used. If it has been used generate a new number, check, repeat as needed until I find an unused one. Problem here is that I may get unlucky and generate a lot of used numbers before getting one that is unused. Possible fix: use a very large pool of numbers to reduce the chances of this (but then I end up with silly long numbers).
2) Generate a random number and then check if it has already been used. If it has been used add or subtract one from the number and check again, keep repeating until I hit an unused number. Problem is this is no longer a random number as I have introduced bias (eventually I will get clumps of numbers and you'd be able to predict the next number with a better chance of success).
3) Generate a random number and then check if it has already been used. If it has been used add or subtract another randomly generated random number and check again, problem is we're back to simply generating random numbers and checking as in solution 1.
4) Suck it up and generate the random list and save it, have a daemon put them into a Queue so there are numbers available (and avoid constantly opening and closing a file, batching it instead).
5) Generate much larger random numbers and hash them (i.e. using MD5) to get a smaller numeric value, we should rarely get collisions, but I end up with larger than needed numbers again.
6) Prepend or append time based information to the random number (i.e. unix timestamp) to reduce chances of a collision, again I get larger numbers than I need.
Does anyone have any clever ideas that will reduce the chances of a "collision" (i.e. generating a random number that is already taken) but will also allow me to keep the number "small" (i.e. less than a billion, or a thousand million for you Europeans =)?
Answer and why I accepted it:
So I will simply go with 1 and hope it's not an issue. However, if it is, I will go with the deterministic solution of generating all the numbers and storing them so that there is a guarantee of getting a new random number, and I can use "small" numbers (i.e. 9 digits instead of an MD5/etc.).
This is a neat problem, and I've been thinking about it for a while (with solutions similar to Sjoerd's), but in the end, here's what I think:
Use your point 1) and stop worrying.
Assuming real randomness, the probability that a random number has already been chosen before is the count of previously chosen numbers divided by the size of your pool, i.e. the maximal number.
If you say you only need a billion numbers, i.e. nine digits: Treat yourself to 3 more digits, so you have 12-digit serial numbers (that's three groups of four digits – nice and readable).
Even when you're close to having chosen a billion numbers previously, the probability that your new number is already taken is still only 0.1%.
Do step 1 and draw again. You can still guard against an "infinite" loop, say by not trying more than 1000 times or so, and then falling back to adding 1 (or something else).
You'll win the lottery before that fallback ever gets used.
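A minimal sketch of that strategy, using an in-memory set to stand in for whatever database check you actually have (the 1000-try cap is the fallback threshold suggested above):

import random

MAX_TRIES = 1000

def new_serial(already_issued):
    # already_issued: a set (or database-backed check) of serials handed out so far
    for _ in range(MAX_TRIES):
        candidate = random.randrange(10**12)  # 12-digit pool, as suggested above
        if candidate not in already_issued:
            already_issued.add(candidate)
            return candidate
    # practically unreachable fallback: walk forward from a random start
    candidate = random.randrange(10**12)
    while candidate in already_issued:
        candidate = (candidate + 1) % 10**12
    already_issued.add(candidate)
    return candidate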
You could use Format-Preserving Encryption to encrypt a counter. Your counter just goes from 0 upwards, and the encryption uses a key of your choice to turn it into a seemingly random value of whatever radix and width you want.
Block ciphers normally have a fixed block size of e.g. 64 or 128 bits. But Format-Preserving Encryption allows you to take a standard cipher like AES and make a smaller-width cipher, of whatever radix and width you want (e.g. radix 10, width 9 for the parameters of the question), with an algorithm which is still cryptographically robust.
It is guaranteed to never have collisions (because cryptographic algorithms create a 1:1 mapping). It is also reversible (a 2-way mapping), so you can take the resulting number and get back to the counter value you started with.
AES-FFX is one proposed standard method to achieve this.
I've experimented with some basic Python code for AES-FFX (see Python code here), though note that it doesn't fully comply with the AES-FFX specification. It can, for example, encrypt a counter to a random-looking 7-digit decimal number. E.g.:
0000000 0731134
0000001 6161064
0000002 8899846
0000003 9575678
0000004 3030773
0000005 2748859
0000006 5127539
0000007 1372978
0000008 3830458
0000009 7628602
0000010 6643859
0000011 2563651
0000012 9522955
0000013 9286113
0000014 5543492
0000015 3230955
... ...
For another example in Python, using another non-AES-FFX (I think) method, see this blog post "How to Generate an Account Number" which does FPE using a Feistel cipher. It generates numbers from 0 to 2^32-1.
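To make the Feistel idea concrete, here is a rough, illustrative sketch (not AES-FFX; the round function here is an arbitrary hash-based choice): a balanced Feistel network over 32-bit values is a bijection, so every counter maps to a distinct scrambled value, and the mapping can be reversed by running the rounds backwards. To restrict the output to an exact 9-digit range, one would additionally "cycle-walk" (re-encrypt until the result falls in range).

import hashlib

def _round(half, key, rnd):
    # illustrative 16-bit round function derived from SHA-256
    data = "{}:{}:{}".format(key, rnd, half).encode()
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

def feistel32(counter, key="my-secret", rounds=4):
    # bijectively scramble a 32-bit counter into a 32-bit random-looking value
    left, right = counter >> 16, counter & 0xFFFF
    for rnd in range(rounds):
        left, right = right, left ^ _round(right, key, rnd)
    return (left << 16) | right

for n in range(5):
    print(n, feistel32(n))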
With some modular arithmetic and prime numbers, you can create all numbers between 0 and a big prime, out of order. If you choose your numbers carefully, the next number is hard to guess.
modulo = 87178291199 # prime
incrementor = 17180131327 # relative prime
current = 433494437 # some start value

for i in xrange(1, 100):
    print current
    current = (current + incrementor) % modulo
If they don't have to be random, but just not obviously linear (1, 2, 3, 4, ...), then here's a simple algorithm:
Pick two prime numbers. One of them will be the largest number you can generate, so it should be around one billion. The other should be fairly large.
max_value = 795028841
step = 360287471
previous_serial = 0

for i in xrange(0, max_value):
    previous_serial += step
    previous_serial %= max_value
    print "Serial: %09i" % previous_serial
Just store the previous serial each time so you know where you left off. I can't prove mathematically that this works (it's been too long since those particular classes), but it's demonstrably correct with smaller primes:
s = set()
with open("test.txt", "w+") as f:
    previous_serial = 0
    for i in xrange(0, 2711):
        previous_serial += 1811
        previous_serial %= 2711
        assert previous_serial not in s
        s.add(previous_serial)
You could also prove it empirically with 9-digit primes, it'd just take a bit more work (or a lot more memory).
This does mean that, given a few serial numbers, it'd be possible to figure out what your values are; but with only nine digits, it's not likely that you're going for unguessable numbers anyway.
If you don't need something cryptographically secure, but just "sufficiently obfuscated"...
Galois Fields
You could try operations in Galois fields, e.g. GF(2^32), to map a simple incrementing counter x to a seemingly random serial number y:
x = counter_value
y = some_galois_function(x)
Some operations you could use:
Multiply by a constant: the inverse is multiplying by the reciprocal of that constant.
Raise to a power: x^n.
Take the reciprocal: x^(-1). This is a special case of raising to a power, and it is its own inverse.
Exponentiate a primitive element: a^x. Ensure a is a primitive element, a.k.a. a generator; note that this one does not have an easily calculated inverse (the discrete logarithm).
Many of these operations have an inverse, which means, given your serial number, you can calculate the original counter value from which it was derived.
As for finding a library for Galois fields in Python... good question. If you don't need speed (which you wouldn't for this), you could write your own. I haven't tried these:
NZMATH
Finite field Python package
Sage, although it's a whole environment for mathematical computing, much more than just a Python library
Matrix multiplication in GF(2)
Pick a suitable 32×32 invertible matrix over GF(2) and multiply a 32-bit input counter by it. This is conceptually related to an LFSR, as described in S.Lott's answer.
CRC
A related possibility is to use a CRC calculation, which is based on the remainder of long division by an irreducible polynomial over GF(2). Python code is readily available for CRCs (crcmod, pycrc), although you might want to pick a different irreducible polynomial than is normally used, for your purposes. I'm a little fuzzy on the theory, but I think a 32-bit CRC should generate a unique value for every possible combination of 4-byte inputs. Check this. It's quite easy to verify experimentally, by feeding the output back into the input and checking that it produces a complete cycle of length 2^32 - 1 (zero just maps to zero). You may need to get rid of any initial/final XORs in the CRC algorithm for this check to work.
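As a rough illustration of the LFSR idea mentioned above (the tap positions 32, 22, 2, 1 are taken from commonly published maximal-length tables; treat that as an assumption and verify the cycle length yourself):

MASK = 0xFFFFFFFF

def lfsr_next(state):
    # One step of a 32-bit Fibonacci LFSR. If the tap polynomial is primitive,
    # it visits every nonzero 32-bit value once before repeating. State must be nonzero.
    feedback = ((state >> 31) ^ (state >> 21) ^ (state >> 1) ^ state) & 1
    return ((state << 1) | feedback) & MASK

state = 0xDEADBEEF  # any nonzero seed
for _ in range(5):
    state = lfsr_next(state)
    print(state)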
I think you are overestimating the problems with approach 1). Unless you have hard real-time requirements, just checking by random choice terminates rather fast. The probability of needing more than a given number of iterations decays exponentially. With 100M numbers issued (a 10% fill factor) you'll have a one-in-a-billion chance of requiring more than 9 iterations. Even with 50% of the numbers taken, you'll need 2 iterations on average and have a one-in-a-billion chance of requiring more than 30 checks. Even the extreme case where 99% of the numbers are already taken might still be reasonable: you'll average 100 iterations and have a one-in-a-billion chance of requiring 2062 iterations.
The standard linear congruential random number generator's seed sequence CANNOT repeat until the full set of numbers from the starting seed value has been generated. Then it MUST repeat precisely.
The internal seed is often large (48 or 64 bits). The generated numbers are smaller (usually 32 bits) because the entire set of bits is not random. If you follow the seed values, they form a distinct non-repeating sequence.
The question is essentially one of locating a good seed that generates "enough" numbers. You can pick a seed, and generate numbers until you get back to the starting seed. That's the length of the sequence. It may be millions or billions of numbers.
There are some guidelines in Knuth for picking suitable seeds that will generate very long sequences of unique numbers.
You can run 1) without running into the problem of too many wrong random numbers if you just decrease the random interval by one each time.
For this method to work, you will need to save the numbers already given (which you want to do anyway) and also the quantity of numbers taken.
It is pretty obvious that, after having handed out 10 numbers, your pool of possible random numbers will have decreased by 10. Therefore, you must not choose a number between 1 and 1,000,000 but between 1 and 999,990. Of course this number is not the real number but only an index (unless the 10 numbers handed out were 999,991, 999,992, …); you'd have to count from 1, omitting all the numbers already given.
Of course, your algorithm should be smarter than just counting from 1 to 1,000,000, but I hope you understand the method (a sketch follows below).
I don’t like drawing random numbers until I get one which fits either. It just feels wrong.
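A minimal sketch of that index-into-the-remaining-pool method (keeping the already issued numbers in a sorted list here; a real implementation would push this into the database):

import bisect
import random

def draw(pool_size, taken_sorted):
    # Pick an unused number from 1..pool_size, given a sorted list of taken numbers.
    # Assumes the pool is not exhausted (len(taken_sorted) < pool_size).
    candidate = random.randint(1, pool_size - len(taken_sorted))  # index into the free numbers
    for t in taken_sorted:
        # every taken number at or below the candidate shifts it up by one
        if t <= candidate:
            candidate += 1
        else:
            break
    bisect.insort(taken_sorted, candidate)
    return candidate

taken = []
print([draw(1000000, taken) for _ in range(5)])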
My solution is https://github.com/glushchenko/python-unique-id; I think you should extend the matrix to 1,000,000,000 variations and have fun.
I'd rethink the problem itself... You don't seem to be doing anything sequential with the numbers, and you've got an index on the column that holds them. Do they actually need to be numbers?
Consider a SHA hash... you don't actually need the entire thing. Do what git or URL-shortening services do and take the first 3/4/5 characters of the hash. With an alphanumeric (base-36) encoding, each character has 36 possible values instead of 10, so six characters give you 2,176,782,336 combinations instead of 1,000,000. Combine that with a quick check on whether the combination exists (a pure index query) and a seed like a timestamp plus a random number, and it should do for almost any situation.
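A hedged sketch of that idea (the exists() callback stands in for the indexed-column lookup, and the seed construction is just one reasonable choice):

import hashlib
import random
import string
import time

ALPHABET = string.digits + string.ascii_lowercase  # base-36

def short_id(length=6):
    seed = "{}:{}".format(time.time(), random.random()).encode()
    digest = int.from_bytes(hashlib.sha256(seed).digest(), "big")
    chars = []
    for _ in range(length):
        digest, r = divmod(digest, 36)
        chars.append(ALPHABET[r])
    return "".join(chars)

def next_unused_id(exists):  # exists(candidate) -> bool, backed by your indexed column
    candidate = short_id()
    while exists(candidate):
        candidate = short_id()
    return candidate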
Do you need this to be cryptographically secure or just hard to guess? How bad are collisions? Because if it needs to be cryptographically strong and have zero collisions, it is, sadly, impossible.
I started trying to write an explanation of the approach used below, but just implementing it was easier and more accurate. This approach has the odd behavior that it gets faster the more numbers you've generated. But it works, and it doesn't require you to generate all the numbers in advance.
As a simple optimization, you could easily make this class use a probabilistic algorithm (generate a random number, and if it's not in the set of used numbers add it to the set and return it) at first, keep track of the collision rate, and switch over to the deterministic approach used here once the collision rate gets bad.
import random

class NonRepeatingRandom(object):
    def __init__(self, maxvalue):
        self.maxvalue = maxvalue
        self.used = set()

    def next(self):
        if len(self.used) >= self.maxvalue:
            raise StopIteration
        r = random.randrange(0, self.maxvalue - len(self.used))
        result = 0
        for i in range(r + 1):
            result += 1
            while result in self.used:
                result += 1
        self.used.add(result)
        return result

    def __iter__(self):
        return self

    def __getitem__(self, index):
        raise NotImplementedError

    def get_all(self):
        return [i for i in self]
>>> n = NonRepeatingRandom(20)
>>> n.get_all()
[12, 14, 13, 2, 20, 4, 15, 16, 19, 1, 8, 6, 7, 9, 5, 11, 10, 3, 18, 17]
If it is enough for you that a casual observer can't guess the next value, you can use things like a linear congruential generator or even a simple linear feedback shift register to generate the values and keep the state in the database in case you need more values. If you use these right, the values won't repeat until the end of the universe. You'll find more ideas in the list of random number generators.
If you think there might be someone with a serious interest in guessing the next values, you can use a database sequence to count the values you generate and encrypt them with an encryption algorithm or another cryptographically strong perfect hash function. However, you need to take care that the encryption algorithm isn't easily breakable if someone can get hold of a sequence of successive numbers you generated; plain RSA, for instance, won't do because of the Franklin-Reiter related-message attack.
A bit of a late answer, but I haven't seen this suggested anywhere.
Why not use the uuid module to create globally unique identifiers?
To generate a list of totally random numbers within a defined threshold, do as follows:
import random as rnd

plist = []
length_of_list = 100
upbound = 1000
lowbound = 0

while len(plist) < length_of_list:
    plist.append(rnd.randint(lowbound, upbound))
    plist = list(set(plist))  # drop duplicates so only unique numbers are kept
I bumped into the same problem and opened a question with a different title before getting to this one. My solution is a random sample generator of indexes (i.e. non-repeating numbers) in the interval [0,maximal), called itersample. Here are some usage examples:
import random
generator=itersample(maximal)
another_number=generator.next() # pick the next non-repeating random number
or
import random
generator = itersample(maximal)
for random_number in generator:
    # do something with random_number
    if some_condition:  # exit loop when needed
        break
itersample generates non-repeating random integers, storage is limited to the numbers picked so far, and the time needed to pick n numbers should be (as some tests confirm) O(n log(n)), regardless of maximal.
Here is the code of itersample:
import random

def itersample(c):  # c = upper bound of generated integers
    sampled = []
    def fsb(a, b):  # free spaces before middle of interval a, b
        fsb.idx = a + (b + 1 - a) / 2
        fsb.last = sampled[fsb.idx] - fsb.idx if len(sampled) > 0 else 0
        return fsb.last
    while len(sampled) < c:
        sample_index = random.randrange(c - len(sampled))
        a, b = 0, len(sampled) - 1
        if fsb(a, a) > sample_index:
            yielding = sample_index
            sampled.insert(0, yielding)
            yield yielding
        elif fsb(b, b) < sample_index + 1:
            yielding = len(sampled) + sample_index
            sampled.insert(len(sampled), yielding)
            yield yielding
        else:  # sample_index falls inside sampled list
            while a + 1 < b:
                if fsb(a, b) < sample_index + 1:
                    a = fsb.idx
                else:
                    b = fsb.idx
            yielding = a + 1 + sample_index
            sampled.insert(a + 1, yielding)
            yield yielding
You are stating that you store the numbers in a database.
Wouldn't it then be easier to store all the numbers there, and ask the database for a random unused number?
Most databases support such a request.
Examples
MySQL:
SELECT column FROM table
ORDER BY RAND()
LIMIT 1
PostgreSQL:
SELECT column FROM table
ORDER BY RANDOM()
LIMIT 1
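A minimal Python sketch of the same idea using the standard sqlite3 module (the table and column names are placeholders; SQLite's ordering function is RANDOM()):

import sqlite3

conn = sqlite3.connect("serials.db")  # placeholder database file
row = conn.execute(
    "SELECT serial FROM serials WHERE used = 0 ORDER BY RANDOM() LIMIT 1"
).fetchone()
if row is not None:
    serial = row[0]
    conn.execute("UPDATE serials SET used = 1 WHERE serial = ?", (serial,))
    conn.commit()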
