Long numbers hashing complexity in Python

How does Python hash long numbers? I guess it takes O(1) time for 32-bit ints, but the way long integers work in Python makes me think the complexity is not O(1) for them. I've looked for answers in relevant questions, but have found none straightforward enough to make me confident. Thank you in advance.

The long_hash() function indeed loops and depends on the size of the integer, yes:
/* The following loop produces a C unsigned long x such that x is
   congruent to the absolute value of v modulo ULONG_MAX. The
   resulting x is nonzero if and only if v is. */
while (--i >= 0) {
    /* Force a native long #-bits (32 or 64) circular shift */
    x = (x >> (8*SIZEOF_LONG-PyLong_SHIFT)) | (x << PyLong_SHIFT);
    x += v->ob_digit[i];
    /* If the addition above overflowed we compensate by
       incrementing. This preserves the value modulo
       ULONG_MAX. */
    if (x < v->ob_digit[i])
        x++;
}
where i is the 'object size', i.e. the number of digits used to represent the number; the size of a digit depends on your platform.
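A quick, informal way to see this from Python itself (a sketch, not a rigorous benchmark) is to time hash() on integers of increasing size; the time grows with the number of digits:

import timeit

# Hashing time grows roughly with the number of 30-bit digits in the integer.
for bits in (64, 6400, 640000):
    n = 1 << bits
    t = timeit.timeit("hash(n)", globals={"n": n}, number=10000)
    print(bits, "bits:", round(t, 4), "s")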


Times-two faster than bit-shift, for Python 3.x integers?

I was looking at the source of sorted_containers and was surprised to see this line:
self._load, self._twice, self._half = load, load * 2, load >> 1
Here load is an integer. Why use bit shift in one place, and multiplication in another? It seems reasonable that bit shifting may be faster than integral division by 2, but why not replace the multiplication by a shift as well? I benchmarked the following cases:
1) (times, divide)
2) (shift, shift)
3) (times, shift)
4) (shift, divide)
and found that #3 is consistently faster than other alternatives:
# self._load, self._twice, self._half = load, load * 2, load >> 1
import random
import timeit
import pandas as pd

x = random.randint(10 ** 3, 10 ** 6)

def test_naive():
    a, b, c = x, 2 * x, x // 2

def test_shift():
    a, b, c = x, x << 1, x >> 1

def test_mixed():
    a, b, c = x, x * 2, x >> 1

def test_mixed_swapped():
    a, b, c = x, x << 1, x // 2

def observe(k):
    print(k)
    return {
        'naive': timeit.timeit(test_naive),
        'shift': timeit.timeit(test_shift),
        'mixed': timeit.timeit(test_mixed),
        'mixed_swapped': timeit.timeit(test_mixed_swapped),
    }

def get_observations():
    return pd.DataFrame([observe(k) for k in range(100)])
The question:
Is my test valid? If so, why is (multiply, shift) faster than (shift, shift)?
I run Python 3.5 on Ubuntu 14.04.
Edit
Above is the original statement of the question. Dan Getz provides an excellent explanation in his answer.
For the sake of completeness, here are sample illustrations for larger x when multiplication optimizations do not apply.
This seems to be because multiplication of small numbers is optimized in CPython 3.5, in a way that left shifts by small numbers are not. Positive left shifts always create a larger integer object to store the result, as part of the calculation, while for multiplications of the sort you used in your test, a special optimization avoids this and creates an integer object of the correct size. This can be seen in the source code of Python's integer implementation.
Because integers in Python are arbitrary-precision, they are stored as arrays of integer "digits", with a limit on the number of bits per digit. So in the general case, operations involving integers are not single operations, but instead need to handle the case of multiple "digits". In pyport.h, this bit limit is defined as 30 bits on a 64-bit platform, or 15 bits otherwise. (I'll just call this 30 from here on to keep the explanation simple. But note that if you were using Python compiled for 32-bit, your benchmark's result would depend on whether x were less than 32,768 or not.)
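On CPython you can inspect the digit size at runtime via sys.int_info (available since Python 3.1); a quick check:

import sys

# bits_per_digit is typically 30 on 64-bit builds and 15 on 32-bit builds.
print(sys.int_info)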
When an operation's inputs and outputs stay within this 30-bit limit, the operation can be handled in an optimized way instead of the general way. The beginning of the integer multiplication implementation is as follows:
static PyObject *
long_mul(PyLongObject *a, PyLongObject *b)
{
    PyLongObject *z;

    CHECK_BINOP(a, b);

    /* fast path for single-digit multiplication */
    if (Py_ABS(Py_SIZE(a)) <= 1 && Py_ABS(Py_SIZE(b)) <= 1) {
        stwodigits v = (stwodigits)(MEDIUM_VALUE(a)) * MEDIUM_VALUE(b);
#ifdef HAVE_LONG_LONG
        return PyLong_FromLongLong((PY_LONG_LONG)v);
#else
        /* if we don't have long long then we're almost certainly
           using 15-bit digits, so v will fit in a long. In the
           unlikely event that we're using 30-bit digits on a platform
           without long long, a large v will just cause us to fall
           through to the general multiplication code below. */
        if (v >= LONG_MIN && v <= LONG_MAX)
            return PyLong_FromLong((long)v);
#endif
    }
So when multiplying two integers where each fits in a 30-bit digit, this is done as a direct multiplication by the CPython interpreter, instead of working with the integers as arrays. (MEDIUM_VALUE() called on a positive integer object simply gets its first 30-bit digit.) If the result fits in a single 30-bit digit, PyLong_FromLongLong() will notice this in a relatively small number of operations, and create a single-digit integer object to store it.
In contrast, left shifts are not optimized this way, and every left shift deals with the integer being shifted as an array. In particular, if you look at the source code for long_lshift(), in the case of a small but positive left shift, a 2-digit integer object is always created, if only to have its length truncated to 1 later: (my comments in /*** ***/)
static PyObject *
long_lshift(PyObject *v, PyObject *w)
{
    /*** ... ***/
    wordshift = shiftby / PyLong_SHIFT;              /*** zero for small w ***/
    remshift = shiftby - wordshift * PyLong_SHIFT;   /*** w for small w ***/
    oldsize = Py_ABS(Py_SIZE(a));                    /*** 1 for small v > 0 ***/
    newsize = oldsize + wordshift;
    if (remshift)
        ++newsize;      /*** here newsize becomes at least 2 for w > 0, v > 0 ***/
    z = _PyLong_New(newsize);
    /*** ... ***/
}
Integer division
You didn't ask about the worse performance of integer floor division compared to right shifts, because that fit your (and my) expectations. But dividing a small positive number by another small positive number is not as optimized as small multiplications, either. Every // computes both the quotient and the remainder using the function long_divrem(). This remainder is computed for a small divisor with a multiplication, and is stored in a newly-allocated integer object, which in this situation is immediately discarded.
Or at least, that was the case when this question was originally asked. In CPython 3.6, a fast path for small int // was added, so // now beats >> for small ints too.
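If you want to reproduce the effect, a minimal comparison of a single-digit x against a large multi-digit x looks something like this (a sketch; absolute numbers vary by machine and CPython version):

import timeit

small = 12345        # fits in a single 30-bit digit
large = 10 ** 50     # many digits: the single-digit fast path does not apply

for name, x in [("small", small), ("large", large)]:
    mul = timeit.timeit("x * 2", globals={"x": x})
    shl = timeit.timeit("x << 1", globals={"x": x})
    print(name, "multiply:", round(mul, 3), "shift:", round(shl, 3))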

What is the difference between this C++ code and this Python code?

Answer
Thanks to @TheDark for spotting the overflow. The new C++ solution is pretty freakin' funny, too. It's extremely redundant:
if(2*i > n && 2*i > i)
replaced the old line of code if(2*i > n).
Background
I'm doing this problem on HackerRank, though the problem may not be entirely related to this question. If you cannot see the webpage, or have to make an account and don't want to, the problem is listed in plain text below.
Question
My C++ code is timing out, but my python code is not. I first suspected this was due to overflow, but I used sizeof to be sure that unsigned long long can reach 2^64 - 1, the upper limit of the problem.
I practically translated my C++ code directly into Python to see if it was my algorithms causing the timeouts, but to my surprise my Python code passed every test case.
C++ code:
#include <iostream>

bool pot(unsigned long long n)
{
    if (n % 2 == 0) return pot(n/2);
    return (n==1); // returns true if n is power of two
}

unsigned long long gpt(unsigned long long n)
{
    unsigned long long i = 1;
    while (2*i < n) {
        i *= 2;
    }
    return i; // returns greatest power of two less than n
}

int main()
{
    unsigned int t;
    std::cin >> t;
    std::cout << sizeof(unsigned long long) << std::endl;
    for (unsigned int i = 0; i < t; i++)
    {
        unsigned long long n;
        unsigned long long count = 1;
        std::cin >> n;
        while (n > 1) {
            if (pot(n)) n /= 2;
            else n -= gpt(n);
            count++;
        }
        if (count % 2 == 0) std::cout << "Louise" << std::endl;
        else std::cout << "Richard" << std::endl;
    }
}
Python 2.7 code:
def pot(n):
    while n % 2 == 0:
        n /= 2
    return n == 1

def gpt(n):
    i = 1
    while 2*i < n:
        i *= 2
    return i

t = int(raw_input())
for i in range(t):
    n = int(raw_input())
    count = 1
    while n != 1:
        if pot(n):
            n /= 2
        else:
            n -= gpt(n)
        count += 1
    if count % 2 == 0:
        print "Louise"
    else:
        print "Richard"
To me, both versions look identical. I still think I'm somehow being fooled and am actually getting overflow, causing timeouts, in my C++ code.
Problem
Louise and Richard play a game. They have a counter set to N. Louise gets the first turn and the turns alternate thereafter. In the game, they perform the following operations.
If N is not a power of 2, they reduce the counter by the largest power of 2 less than N.
If N is a power of 2, they reduce the counter by half of N.
The resultant value is the new N which is again used for subsequent operations.
The game ends when the counter reduces to 1, i.e., N == 1, and the last person to make a valid move wins.
Given N, your task is to find the winner of the game.
Input Format
The first line contains an integer T, the number of testcases.
T lines follow. Each line contains N, the initial number set in the counter.
Constraints
1 ≤ T ≤ 10
1 ≤ N ≤ 2^64 - 1
Output Format
For each test case, print the winner's name in a new line. So if Louise wins the game, print "Louise". Otherwise, print "Richard". (Quotes are for clarity)
Sample Input
1
6
Sample Output
Richard
Explanation
As 6 is not a power of 2, Louise reduces the counter by the largest power of 2 less than 6, i.e., 4, and hence the counter reduces to 2.
As 2 is a power of 2, Richard reduces the counter by half of 2 i.e., 1. Hence the counter reduces to 1.
As we reach the terminating condition with N == 1, Richard wins the game.
When n is greater than 2^63, your gpt function will eventually have i as 2^63 and then multiply 2^63 by 2, giving an overflow and a value of 0. This will then end up with an infinite loop, multiplying 0 by 2 each time.
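You can simulate that wrap-around from Python by masking to 64 bits (Python ints never overflow, so the mask is applied by hand here purely for illustration):

MASK = (1 << 64) - 1      # emulate unsigned 64-bit arithmetic
i = 1 << 63
print((2 * i) & MASK)     # 0 -> the loop condition 2*i < n stays true forever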
Try this bit-twiddling hack, which is probably slightly faster:
unsigned long largest_power_of_two_not_greater_than(unsigned long x) {
    for (unsigned long y; (y = x & (x - 1)); x = y) {}
    return x;
}
x&(x-1) is x without its least significant one-bit. So y will be zero (terminating the loop) exactly when x has been reduced to a power of two, which will be the largest power of two not greater than the original x. The loop is executed once for every 1-bit in x, which is on average half as many iterations as your approach. Also, this one has no issues with overflow. (It does return 0 if the original x was 0. That may or may not be what you want.)
Note that if the original x was a power of two, that value is simply returned immediately. So the function doubles as a test for whether x is a power of two (or 0).
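Since the second version in the question is Python, the same idea is a one-liner with int.bit_length() (a sketch of the equivalent, not part of the original answer; bit_length() - 1 is floor(log2(x)) for x > 0):

def largest_power_of_two_not_greater_than(x):
    # 1 << (bit_length - 1) is the highest set bit of x; returns 0 for x == 0,
    # matching the C version above.
    return 1 << (x.bit_length() - 1) if x else 0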
While that is fun and all, in real-life code you'd probably be better off finding your compiler's equivalent to this gcc built-in (unless your compiler is gcc, in which case here it is):
Built-in Function: int __builtin_clz (unsigned int x)
Returns the number of leading 0-bits in X, starting at the most
significant bit position. If X is 0, the result is undefined.
(Also available as __builtin_clzl for unsigned long arguments and __builtin_clzll for unsigned long long.)

Python frozenset hashing algorithm / implementation

I'm currently trying to understand the mechanism behind the hash function defined for Python's built-in frozenset data type. The implementation is shown at the bottom for reference. What I'm interested in particular is the rationale for the choice of this scattering operation:
lambda h: (h ^ (h << 16) ^ 89869747) * 3644798167
where h is the hash of each element. Does anyone know where these came from? (That is, was there any particular reason to pick these numbers?) Or were they simply chosen arbitrarily?
Here is the snippet from the official CPython implementation,
static Py_hash_t
frozenset_hash(PyObject *self)
{
    PySetObject *so = (PySetObject *)self;
    Py_uhash_t h, hash = 1927868237UL;
    setentry *entry;
    Py_ssize_t pos = 0;

    if (so->hash != -1)
        return so->hash;

    hash *= (Py_uhash_t)PySet_GET_SIZE(self) + 1;
    while (set_next(so, &pos, &entry)) {
        /* Work to increase the bit dispersion for closely spaced hash
           values. This is important because some use cases have many
           combinations of a small number of elements with nearby
           hashes so that many distinct combinations collapse to only
           a handful of distinct hash values. */
        h = entry->hash;
        hash ^= (h ^ (h << 16) ^ 89869747UL) * 3644798167UL;
    }
    hash = hash * 69069U + 907133923UL;
    if (hash == -1)
        hash = 590923713UL;
    so->hash = hash;
    return hash;
}
and an equivalent implementation in Python:
def _hash(self):
    MAX = sys.maxint
    MASK = 2 * MAX + 1
    n = len(self)
    h = 1927868237 * (n + 1)
    h &= MASK
    for x in self:
        hx = hash(x)
        h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167
        h &= MASK
    h = h * 69069 + 907133923
    h &= MASK
    if h > MAX:
        h -= MASK + 1
    if h == -1:
        h = 590923713
    return h
The problem being solved is that the previous hash algorithm in Lib/sets.py had horrendous performance on datasets that arise in a number of graph algorithms (where nodes are represented as frozensets):
# Old-algorithm with bad performance
def _compute_hash(self):
    result = 0
    for elt in self:
        result ^= hash(elt)
    return result

def __hash__(self):
    if self._hashcode is None:
        self._hashcode = self._compute_hash()
    return self._hashcode
A new algorithm was created because it had much better performance. Here is an overview of the salient parts of the new algorithm:
1) The xor-equal in h ^= (hx ^ (hx << 16) ^ 89869747) * 3644798167 is necessary so that the algorithm is commutative (the hash does not depend on the order that set elements are encountered). Since sets have an unordered equality test, the hash for frozenset([10, 20]) needs to be the same as for frozenset([20, 10]).
2) The xor with 89869747 was chosen for its interesting bit pattern 101010110110100110110110011 which is used to break up sequences of nearby hash values prior to multiplying by 3644798167, a randomly chosen large prime with another interesting bit pattern.
3) The xor with hx << 16 was included so that the lower bits had two chances to affect the outcome (resulting in better dispersion of nearby hash values). In this, I was inspired by how CRC algorithms shuffled bits back on to themselves.
4) If I recall correctly, the only one of the constants that is special is 69069. It had some history from the world of linear congruential random number generators. See https://www.google.com/search?q=69069+rng for some references.
5) The final step of computing hash = hash * 69069U + 907133923UL was added to handle cases with nested frozensets and to make the algorithm disperse in a pattern orthogonal to the hash algorithms for other objects (strings, tuples, ints, etc).
6) Most of the other constants were randomly chosen large prime numbers.
Though I would like to claim divine inspiration for the hash algorithm, the reality was that I took a bunch of badly performing datasets, analyzed why their hashes weren't dispersing, and then toyed with the algorithm until the collision statistics stopped being so embarrassing.
For example, here is an efficacy test from Lib/test/test_set.py that failed for algorithms with less diffusion:
def test_hash_effectiveness(self):
    n = 13
    hashvalues = set()
    addhashvalue = hashvalues.add
    elemmasks = [(i+1, 1<<i) for i in range(n)]
    for i in xrange(2**n):
        addhashvalue(hash(frozenset([e for e, m in elemmasks if m&i])))
    self.assertEqual(len(hashvalues), 2**n)
Other failing examples included powersets of strings and small integer ranges as well as the graph algorithms in the test suite: See TestGraphs.test_cuboctahedron and TestGraphs.test_cube in Lib/test/test_set.py.
Unless Raymond Hettinger (the code's author) chimes in, we'll never know for sure ;-) But there's usually less "science" in these things than you might expect: you take some general principles, and a test suite, and fiddle the constants almost arbitrarily until the results look "good enough".
Some general principles "obviously" at work here:
To get the desired quick "bit dispersion", you want to multiply by a large integer. Since CPython's hash result has to fit in 32 bits on many platforms, an integer that requires 32 bits is best for this. And, indeed, (3644798167).bit_length() == 32.
To avoid systematically losing the low-order bit(s), you want to multiply by an odd integer. 3644798167 is odd.
More generally, to avoid compounding patterns in the input hashes, you want to multiply by a prime. And 3644798167 is prime.
And you also want a multiplier whose binary representation doesn't have obvious repeating patterns. bin(3644798167) == '0b11011001001111110011010011010111'. That's pretty messed up, which is a good thing ;-)
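Those properties are easy to verify for yourself; here is a minimal sanity check in Python (trial division is plenty fast for a single 32-bit candidate):

def is_prime(n):
    # simple trial division; fine for one 32-bit number
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

m = 3644798167
print(m.bit_length())   # 32
print(m % 2)            # 1, so odd
print(is_prime(m))      # True
print(bin(m))           # 0b11011001001111110011010011010111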
The other constants look utterly arbitrary to me. The
if h == -1:
h = 590923713
part is needed for another reason: internally, CPython takes a -1 return value from an integer-valued C function as meaning "an exception needs to be raised"; i.e., it's an error return. So you'll never see a hash code of -1 for any object in CPython. The value returned instead of -1 is wholly arbitrary - it just needs to be the same value (instead of -1) each time.
EDIT: playing around
I don't know what Raymond used to test this. Here's what I would have used: look at hash statistics for all subsets of a set of consecutive integers. Those are problematic because hash(i) == i for a great many integers i.
>>> all(hash(i) == i for i in range(1000000))
True
Simply xor'ing hashes together will yield massive cancellation on inputs like that.
So here's a little function to generate all subsets, and another to do a dirt-simple xor across all hash codes:
def hashxor(xs):
    h = 0
    for x in xs:
        h ^= hash(x)
    return h

def genpowerset(xs):
    from itertools import combinations
    for length in range(len(xs) + 1):
        for t in combinations(xs, length):
            yield t
Then a driver, and a little function to display collision statistics:
def show_stats(d):
    total = sum(d.values())
    print "total", total, "unique hashes", len(d), \
          "collisions", total - len(d)

def drive(n, hasher=hashxor):
    from collections import defaultdict
    d = defaultdict(int)
    for t in genpowerset(range(n)):
        d[hasher(t)] += 1
    show_stats(d)
Using the dirt-simple hasher is disastrous:
>>> drive(20)
total 1048576 unique hashes 32 collisions 1048544
Yikes! OTOH, using the _hash() designed for frozensets does a perfect job in this case:
>>> drive(20, _hash)
total 1048576 unique hashes 1048576 collisions 0
Then you can play with that to see what does - and doesn't - make a real difference in _hash(). For example, it still does a perfect job on these inputs if
h = h * 69069 + 907133923
is removed. And I have no idea why that line is there. Similarly, it continues to do a perfect job on these inputs if the ^ 89869747 in the inner loop is removed - don't know why that's there either. And initialization can be changed from:
h = 1927868237 * (n + 1)
to:
h = n
without harm here too. That all jibes with what I expected: it's the multiplicative constant in the inner loop that's crucial, for reasons already explained. For example, add 1 to it (use 3644798168) and then it's no longer prime or odd, and the stats degrade to:
total 1048576 unique hashes 851968 collisions 196608
Still quite usable, but definitely worse. Change it to a small prime, like 13, and it's worse:
total 1048576 unique hashes 483968 collisions 564608
Use a multiplier with an obvious binary pattern, like 0b01010101010101010101010101010101, and worse again:
total 1048576 unique hashes 163104 collisions 885472
Play around! These things are fun :-)
In
(h ^ (h << 16) ^ 89869747) * 3644798167
the multiplicative integer is a large prime to reduce collisions. This is especially relevant since the operation is under modulo.
The rest is probably arbitrary; I see no reason for the 89869747 to be specific. The most important usage you would get out of that is enlarging hashes of small numbers (most integers hash to themselves). This prevents high collisions for sets of small integers.
That's all I can think of. What do you need this for?
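To make the point about small integers concrete, plugging a small hash into the scattering expression spreads it across many more bits (illustrative sketch only):

h = hash(2)                                       # small ints hash to themselves -> 2
print((h ^ (h << 16) ^ 89869747) * 3644798167)    # a much larger, well-mixed value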

Extendible hashing - most significant bits

I want to write extendible hashing. On wiki I have found a good implementation in Python. But this code uses the least significant bits, so when I have hash 1101, for d = 1 the value is 1 and for d = 2 the value is 01. I would like to use the most significant bits. For example: hash 1101, d = 1 value is 1, d = 2 value is 11. Is there any simple way to do that? I tried, but I can't.
Do you understand why it uses the least significant bits?
More or less. It makes things efficient when we are using arrays. OK, so for the hash function I would like to use the four lowest bits of a 4-byte integer, but read from left to right.
h = hash(k)
h = h & 0xf #use mask to get four least bits
p = self.pp[ h >> ( 4 - GD)]
And it doesn't work, and I don't know why.
Computing a hash using the least significant bits is the fastest way to compute a hash, because it only requires an AND bitwise operation. This makes it very popular.
Here is an implementation (in C) of a hash using the most significant bits. Since there is no direct way to know the most significant bit, it repeatedly shifts the value right until it fits in the specified number of bits.
int significantHash(int value, int bits) {
    int mask = (1 << bits) - 1;
    while (value > mask) {
        value >>= 1;
    }
    return value;
}
I recommend the overlapping hash, which makes use of all the bits of the number. Essentially, it cuts the number into parts with an equal number of bits and XORs them. It runs slower than the least-significant hash, but faster than the most-significant hash. Above all else, it offers better dispersion than the other two methods, making it a better candidate when the numbers that must be hashed have a certain bit-related pattern.
int overlappingHash(int value, int bits) {
    int mask = (1 << bits) - 1;
    int answer = 0;
    do {
        answer ^= (value & mask);
        value >>= bits;
    } while (value > 0);
    return answer;
}
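Since the question itself is about a Python implementation, here is a rough Python translation of both helpers (a sketch; the function names are mine, not from the original post):

def significant_hash(value, bits):
    # Keep shifting right until only `bits` bits remain, i.e. keep the
    # most significant bits of the value.
    mask = (1 << bits) - 1
    while value > mask:
        value >>= 1
    return value

def overlapping_hash(value, bits):
    # Fold the value onto itself `bits` bits at a time with XOR, so every
    # bit of the input can influence the result.
    mask = (1 << bits) - 1
    answer = 0
    while value > 0:
        answer ^= value & mask
        value >>= bits
    return answer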

Is CRC32 additive?

On several places I've read that crc32 is additive and so: CRC(A xor B) = CRC(A) xor CRC(B).
The above statement was disproven by the following code I wrote:
import zlib

def crc32(data):
    return zlib.crc32(data) & 0xffffffff

print crc32(chr(ord("A") ^ ord("B")))
print crc32("A") ^ crc32("B")
Program output:
1259060791
2567524794
Could someone provide a proper code proving this theory or point me where I've failed?
CRC is additive in the mathematical sense since the CRC hash is just a remainder value from a carryless division of all the data (treated as a giant integer) divided by the polynomial constant. Using your example, it's akin to this sort of thing:
7 mod 5 = 2
6 mod 5 = 1
(7 mod 5) + (6 mod 5) = 3
(7 + 6) mod 5 = 3
In that analogy, '5' is our CRC polynomial.
Here's an example to play with (gcc based):
#include <stdio.h>
#include <x86intrin.h>

int main(void)
{
    unsigned int crc_a = __builtin_ia32_crc32si( 0, 5);
    printf( "crc(5) = %08X\n", crc_a );

    unsigned int crc_b = __builtin_ia32_crc32si( 0, 7);
    printf( "crc(7) = %08X\n", crc_b );

    unsigned int crc_xor = crc_a ^ crc_b;
    printf( "crc(5) XOR crc(7) = %08X\n", crc_xor );

    unsigned int crc_xor2 = __builtin_ia32_crc32si( 0, 5 ^ 7);
    printf( "crc(5 XOR 7) = %08X\n", crc_xor2 );

    return 0;
}
The output is as expected:
plxc15034> gcc -mcrc32 -Wall -O3 crctest.c
plxc15034> ./a.out
crc(5) = A6679B4B
crc(7) = 1900B8CA
crc(5) XOR crc(7) = BF672381
crc(5 XOR 7) = BF672381
Because this code uses the x86 CRC32 instruction, it will only run on an Intel i7 or newer. The intrinsic function takes the running CRC hash as the first parameter and the new data to accumulate as the second parameter. The return value is the new running CRC.
The initial running CRC value of 0 in the code above is critical. Using any other initial value, then CRC is not "additive" in the practical sense because you have effectively thrown away information about the integer you are dividing into. And this is exactly what's happening in your example. CRC functions never initialize that initial running CRC value to zero, but usually -1. The reason is that an initial CRC of 0 allows any number of leading 0's in the data to simply fall through without changing the running CRC value, which remains 0. So, initializing the CRC to 0 is mathematically sound, but for practical purposes of calculating hash, it's the last thing you'd want.
The CRC-32 algorithm is based on polynomial division, with some extra steps added. Pure polynomial remainder is additive.
By that, I mean: mod(poly1 + poly2, poly3) = mod(mod(poly1, poly3) + mod(poly2, poly3), poly3)
The CRC-32 algorithm builds on this, and is non-additive. To compute the CRC-32 of a byte array m:
1) XOR the first 4 bytes by 0xFFFFFFFF.
2) Treat earlier bytes as higher polynomial powers and treat lower order bits as higher polynomial powers. For example, the bytes 0x01 0x04 would be the polynomial x^15 + x^3.
3) Multiply the polynomial by x^32.
4) Take the remainder of this polynomial divided by the CRC-32 polynomial, 0x104C11DB7. The remainder polynomial has degree < 32.
5) Treat lower powers as higher order bits. For example, the polynomial x^2 would be the 32-bit integer 0x40000000.
6) XOR the result by 0xFFFFFFFF.
The pure polynomial remainder operation is in step #4. It's steps #1 and #6 that make the CRC-32 algorithm non-additive. So if you undo the effect of steps #1 and #6, then you can modify the CRC-32 algorithm to be additive.
(See also: Python CRC-32 woes)
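For comparison, the steps above are equivalent to the standard bit-reflected formulation; here is a compact, table-free Python sketch (not from the original answer; 0xEDB88320 is the bit-reversed low 32 bits of the 0x104C11DB7 polynomial from step 4):

import zlib

def crc32_manual(data):
    crc = 0xFFFFFFFF                     # steps 1 and 6: pre- and post-XOR with all ones
    for byte in bytearray(data):
        crc ^= byte
        for _ in range(8):
            # conditionally XOR with the reflected polynomial, one bit at a time
            crc = (crc >> 1) ^ (0xEDB88320 & -(crc & 1))
    return crc ^ 0xFFFFFFFF

assert crc32_manual(b"A") == zlib.crc32(b"A") & 0xffffffff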
If a, b, and c are the same length, CRC(a) xor CRC(b) xor CRC(c) equals CRC(a xor b xor c). Returning to your original formulation, it means that CRC(a xor b) equals CRC(a) xor CRC(b) xor CRC(z), where z is a sequence of zeroes the same length as the other two sequences.
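Checking that identity with zlib (a quick sketch; the numbers match the output from the question):

import zlib

def crc32(data):
    return zlib.crc32(data) & 0xffffffff

a, b = b"A", b"B"
z = b"\x00"                                   # zeroes, same length as a and b
a_xor_b = bytes(x ^ y for x, y in zip(a, b))  # b"\x03"

print(crc32(a_xor_b))                         # 1259060791
print(crc32(a) ^ crc32(b) ^ crc32(z))         # 1259060791 as well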
If the simpler relationship CRC(a xor b) = CRC(a) xor CRC(b) held in general, it would imply that each bit position of the CRC result is only driven by the equivalent bit position in the input. Consider your example with B == 0.
The relationship you're describing is more likely to be true for some primitive xor or additive checksum algorithms.
