Transform bits into byte series - python

Given a Python integer that fits in 4 bits, how does one transform it – with bitwise arithmetic instead of string processing – into an integer that fits in 4 bytes, where each bit of the original corresponds to one byte consisting of that bit repeated 8 times?
For example: 0b1011 should become 0b11111111000000001111111111111111

With apologies to ncoghlan:
expanded_bits = [
0b00000000000000000000000000000000,
0b00000000000000000000000011111111,
0b00000000000000001111111100000000,
0b00000000000000001111111111111111,
0b00000000111111110000000000000000,
0b00000000111111110000000011111111,
0b00000000111111111111111100000000,
0b00000000111111111111111111111111,
0b11111111000000000000000000000000,
0b11111111000000000000000011111111,
0b11111111000000001111111100000000,
0b11111111000000001111111111111111,
0b11111111111111110000000000000000,
0b11111111111111110000000011111111,
0b11111111111111111111111100000000,
0b11111111111111111111111111111111,
]
Then just index this list with the nibble you want to transform:
>>> bin(expanded_bits[0b1011])
"0b11111111000000001111111111111111"

I'd just do a loop:
x = 0b1011
y = 0
for i in range(4):
    if x & (1 << i):
        y |= 255 << (i * 8)
print("%x" % y)

The following recursive solution uses only addition, left/right shift operators and bitwise & operator with integers:
def xform_rec(n):
    if n == 0:
        return 0
    else:
        if 0 == n & 0b1:
            return xform_rec(n >> 1) << 8
        else:
            return 0b11111111 + (xform_rec(n >> 1) << 8)
Or, as a one-liner:
def xform_rec(n):
    return 0 if n == 0 else (0 if 0 == n & 0b1 else 0b11111111) + (xform_rec(n >> 1) << 8)
Examples:
>>> print(bin(xform_rec(0b1011)))
0b11111111000000001111111111111111
>>> print(bin(xform_rec(0b0000)))
0b0
>>> print(bin(xform_rec(0b1111)))
0b11111111111111111111111111111111
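As a quick cross-check, the recursive version agrees with the lookup table from the first answer (assuming both expanded_bits and xform_rec are defined in the same session):
assert all(xform_rec(n) == expanded_bits[n] for n in range(16))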

Related

Python - Fastest way to strip the trailing zeros from the bit representation of a number

This is the python version of the same C++ question.
Given a number, num, what is the fastest way to strip off the trailing zeros from its binary representation?
For example, let num = 232. We have bin(num) equal to 0b11101000 and we would like to strip the trailing zeros, which would produce 0b11101. This can be done via string manipulation, but it'd probably be faster via bit manipulation. So far, I have thought of something using num & -num.
Assuming num != 0, num & -num produces the binary 0b1<trailing zeros>. For example,
 num   0b11101000
-num   0b00011000
   &   0b00001000
If we have a dict having powers of two as keys and the powers as values, we could use that to know by how much to right bit shift num in order to strip just the trailing zeros:
#         0b1    0b10   0b100  0b1000
POW2s = { 1: 0,  2: 1,  4: 2,  8: 3, ... }

def stripTrailingZeros(num):
    pow2 = num & -num
    pow_ = POW2s[pow2]  # equivalent to math.log2(pow2), but hopefully faster
    return num >> pow_
The use of dictionary POW2s trades space for speed - the alternative is to use math.log2(pow2).
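For reference, the math.log2 alternative mentioned above might look like this (a sketch with a made-up name, again assuming num != 0):
import math

def stripTrailingZeros_log2(num):
    pow2 = num & -num                    # isolate the lowest set bit
    return num >> int(math.log2(pow2))   # shift by its bit position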
Is there a faster way?
Perhaps another useful tidbit is num ^ (num - 1) which produces 0b1!<trailing zeros> where !<trailing zeros> means take the trailing zeros and flip them into ones. For example,
  num    0b11101000
num-1    0b11100111
    ^    0b00001111
Yet another alternative is to use a while loop
def stripTrailingZeros_iterative(num):
    while num & 0b1 == 0:  # equivalent to `num % 2 == 0`
        num >>= 1
    return num
Ultimately, I need to execute this function on a big list of numbers. Once I do that, I want the maximum. So if I have [64, 38, 22, 20] to begin with, I would have [1, 19, 11, 5] after performing the stripping. Then I would want the maximum of that, which is 19.
There's really no answer to questions like this in the absence of specifying the expected distribution of inputs. For example, if all inputs are in range(256), you can't beat a single indexed lookup into a precomputed list of the 256 possible cases.
If inputs can be two bytes, but you don't want to burn the space for 2**16 precomputed results, it's hard to beat (assuming that_table[i] gives the count of trailing zeroes in i):
low = i & 0xff
result = that_table[low] if low else 8 + that_table[i >> 8]
And so on.
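A sketch of how such a table might be precomputed (that_table is just the placeholder name used above):
# trailing-zero counts for every one-byte value; index 0 is never consulted
# because the overall input is assumed to be non-zero
that_table = [(i & -i).bit_length() - 1 if i else 0 for i in range(256)]

i = 232                        # 0b11101000
low = i & 0xff
result = that_table[low] if low else 8 + that_table[i >> 8]
print(result)                  # 3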
You do not want to rely on log2(). The accuracy of that is entirely up to the C library on the platform CPython is compiled for.
What I actually use, in a context where ints can be up to hundreds of millions of bits:
assert d
if d & 1 == 0:
    ntz = (d & -d).bit_length() - 1
    d >>= ntz
A while loop would be a disaster in this context, taking time quadratic in the number of bits shifted off. Even one needless shift in that context would be a significant expense, which is why the code above first checks that at least one bit needs to be shifted off. But if ints "are much smaller", that check would probably cost more than it saves. "No answer in the absence of specifying the expected distribution of inputs."
On my computer, a simple integer divide is fastest:
import timeit
timeit.timeit(setup='num=232', stmt='num // (num & -num)')
0.1088077999993402
timeit.timeit(setup='d = { 1: 0, 2 : 1, 4: 2, 8 : 3, 16 : 4, 32 : 5 }; num=232', stmt='num >> d[num & -num]')
0.13014470000052825
timeit.timeit(setup='import math; num=232', stmt='num >> int(math.log2(num & -num))')
0.2980690999993385
You say you "Ultimately, [..] execute this function on a big list of numbers to get odd numbers and find the maximum of said odd numbers."
So why not simply:
from random import randint
numbers = [randint(0, 10000) for _ in range(5000)]
odd_numbers = [n for n in numbers if n & 1]
max_odd = max(odd_numbers)
print(max_odd)
To do what you say you want to do ultimately, there seems to be little point in performing the "shift right until the result is odd" operation? Unless you want the maximum of the result of that operation performed on all elements, which is not what you stated?
I agree with @TimPeters' answer, but if you put Python through its paces and actually generate some data sets and try the various solutions proposed, they keep the same relative ranking at every integer size when using Python ints. So your best option is integer division for numbers up to 32 bits; after that, see the chart below:
from pandas import DataFrame
from timeit import timeit
import math
from random import randint

def reduce0(ns):
    return [n // (n & -n) for n in ns]

def reduce1(ns, d):
    return [n >> d[n & -n] for n in ns]

def reduce2(ns):
    return [n >> int(math.log2(n & -n)) for n in ns]

def reduce3(ns, t):
    return [n >> t.index(n & -n) for n in ns]

def reduce4(ns):
    return [n if n & 1 else n >> ((n & -n).bit_length() - 1) for n in ns]

def single5(n):
    while (n & 0xffffffff) == 0:
        n >>= 32
    if (n & 0xffff) == 0:
        n >>= 16
    if (n & 0xff) == 0:
        n >>= 8
    if (n & 0xf) == 0:
        n >>= 4
    if (n & 0x3) == 0:
        n >>= 2
    if (n & 0x1) == 0:
        n >>= 1
    return n

def reduce5(ns):
    return [single5(n) for n in ns]

numbers = [randint(1, 2 ** 16 - 1) for _ in range(5000)]
d = {2 ** n: n for n in range(16)}
t = tuple(2 ** n for n in range(16))
assert (reduce0(numbers) == reduce1(numbers, d) == reduce2(numbers)
        == reduce3(numbers, t) == reduce4(numbers) == reduce5(numbers))

df = DataFrame([{}, {}, {}, {}, {}, {}])
for p in range(1, 16):
    p = 2 ** p
    numbers = [randint(1, 2 ** p - 1) for _ in range(4096)]
    d = {2 ** n: n for n in range(p)}
    t = tuple(2 ** n for n in range(p))
    df[p] = [
        timeit(lambda: reduce0(numbers), number=100),
        timeit(lambda: reduce1(numbers, d), number=100),
        timeit(lambda: reduce2(numbers), number=100),
        timeit(lambda: reduce3(numbers, t), number=100),
        timeit(lambda: reduce4(numbers), number=100),
        timeit(lambda: reduce5(numbers), number=100),
    ]
    print(f'Complete for {p} bit numbers.')

print(df)
df.to_csv('test_results.csv')
Result (when plotted in Excel): [chart not reproduced here]
Note that the plot that was previously here was wrong! The code and data were not, though. The code has been updated to include @MarkRansom's solution, since it turns out to be the optimal solution for very large numbers (over 4k-bit numbers).
while (num & 0xffffffff) == 0:
    num >>= 32
if (num & 0xffff) == 0:
    num >>= 16
if (num & 0xff) == 0:
    num >>= 8
if (num & 0xf) == 0:
    num >>= 4
if (num & 0x3) == 0:
    num >>= 2
if (num & 0x1) == 0:
    num >>= 1
The idea here is to perform as few shifts as possible. The initial while loop handles numbers that are over 32 bits long, which I consider unlikely, but it has to be provided for completeness. After that, each statement shifts half as many bits; if you can't shift by 16, then the most you could shift is 15, which is 8 + 4 + 2 + 1. All possible cases are covered by those 5 if statements.
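For comparison with the snippet above, here is the same scheme wrapped in a function (mirroring single5 from the benchmark) and sanity-checked against the division trick from the timings earlier; a minimal sketch:
def strip_trailing_zeros(num):
    # shift off 32, 16, 8, 4, 2 and finally 1 bits, largest chunks first
    while (num & 0xffffffff) == 0:
        num >>= 32
    if (num & 0xffff) == 0:
        num >>= 16
    if (num & 0xff) == 0:
        num >>= 8
    if (num & 0xf) == 0:
        num >>= 4
    if (num & 0x3) == 0:
        num >>= 2
    if (num & 0x1) == 0:
        num >>= 1
    return num

assert all(strip_trailing_zeros(n) == n // (n & -n) for n in range(1, 1 << 16))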

What's the best way to modify a bitwise Karatsuba-Algorithm to work with negative numbers?

I wrote this bitwise Karatsuba multiplication algorithm. It does not use strings or math.pow. It's just divide-and-conquer-recursion, bitwise operations and addition:
def karatsuba(x, y):
    n = max(x.bit_length(), y.bit_length())
    if n < 2:
        return x & y
    # split in O(1)
    n = (n + 1) >> 1
    b = x >> n
    a = x - (b << n)
    d = y >> n
    c = y - (d << n)
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    abcd = karatsuba(a + b, c + d)
    return ac + ((abcd - ac - bd) << n) + (bd << (n << 1))
print(karatsuba(23,24))
print(karatsuba(-29,31))
# 552
# 381
It works absolutely fine with positive numbers, but obviously -29 * 31 does not equal 381.
What's the easiest way to fix the problem?
My first idea was to make the number positive with (~(-29)+1) = 29, store whether it was negative or not in a boolean and handle that boolean in my return statement, but is there a better (maybe bitwise) solution?
Thanks in advance
The issue is with your exit case, in particular x&y returns the wrong value for negative numbers:
-1 & 1 == 1 # Needs to return -1
So you can fix this by testing for it, or just returning:
if n < 2:
    return x*y
E.g.:
In []: print(karatsuba(-29,31))
Out[]: -899
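Alternatively, the sign-handling idea from the question can be kept outside the recursion as a thin wrapper around the positive-only routine; a sketch (karatsuba_signed is a made-up name):
def karatsuba_signed(x, y):
    # multiply the magnitudes, then restore the sign of the product
    sign = -1 if (x < 0) != (y < 0) else 1
    return sign * karatsuba(abs(x), abs(y))

print(karatsuba_signed(-29, 31))   # -899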

Efficient way to transpose the bit of an integer in python?

Consider a 6-bit integer
x = a b c d e f
that should be transposed into three 2-bit integers as follows:
x1 = a d
x2 = b e
x3 = c f
What is an efficient way to do this in python?
My current approach goes as follows:
bit_list = list(bin(x)[2:])  # to chop off '0b'
# pad beginning if necessary, to make sure bit_list contains 6 bits
nb_of_bit_to_pad_on_the_left = 6 - len(bit_list)
for i in xrange(nb_of_bit_to_pad_on_the_left):
    bit_list.insert(0, '0')
# transposition
transpose = [[], [], []]
for bit in xrange(0, 6, 3):
    for dimension in xrange(3):
        transpose[dimension].append(bit_list[bit + dimension])
for i in xrange(3):
    bit_in_string = ''.join(transpose[i])
    transpose[i] = int(bit_in_string, 2)
but this is slow when transposing a 5,000,000-bit integer into one million 5-bit integers.
Is there a better method?
Or some bit-shift magic <</>> that will be speedier?
This question arose from trying to make a Python implementation of Skilling's Hilbert curve algorithm.
This should work:
mask = 0b100100
for i in range(2, -1, -1):
    tmp = x & mask
    print(((tmp >> 3 + i) << 1) + ((tmp & (1 << i)) >> i))
    mask >>= 1
The first mask extracts only a and d, then it is shifted to extract only b and e and then c and f.
In the print statement the numbers are either x00y00 or 0x00y0 or 00x00y. The (tmp >> 3 + i) part (note that >> binds less tightly than +, so this is tmp >> (3 + i)) transforms these numbers into x, and then the << 1 obtains x0.
The ((tmp & (1 << i)) >> i)) first transforms those numbers into y00/y0 or y and then right-shifts to obtain simply y. Summing the two parts you get the xy number you want.
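A quick check with a concrete value, a b c d e f = 1 0 1 0 1 1, using the loop above:
x = 0b101011   # a=1 b=0 c=1 d=0 e=1 f=1
mask = 0b100100
for i in range(2, -1, -1):
    tmp = x & mask
    print(((tmp >> 3 + i) << 1) + ((tmp & (1 << i)) >> i))
    mask >>= 1
# prints 2 (0b10 = ad), then 1 (0b01 = be), then 3 (0b11 = cf)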
Slices will work if you're working with strings (bin(x)).
>>>
>>> HInt = 'ABCDEFGHIJKLMNO'
>>> x = []
>>> for i in [0, 1, 2]:
...     x.append(HInt[i::3])
...
>>> x[0]
'ADGJM'
>>> x[1]
'BEHKN'
>>> x[2]
'CFILO'
>>>
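Applied to the question's 6-bit case directly (a small sketch with made-up variable names):
x = 0b101011                  # a b c d e f = 1 0 1 0 1 1
bits = bin(x)[2:].zfill(6)    # '101011', left-padded to 6 characters
x1, x2, x3 = (int(bits[i::3], 2) for i in range(3))
print(x1, x2, x3)             # 2 1 3  ->  0b10 (ad), 0b01 (be), 0b11 (cf)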

Creating a bit mask with BigInts

Is there a more efficient way of performing the following calculation? It works fine, but something tells me that x &= (1 << 8) - 1 ^ 1 << 3 can be written to avoid some calculations and increase speed.
def unset_mask(width, index):
    return (1 << width) - 1 ^ 1 << index

x = 0b11111111
x &= unset_mask(8, 3)
assert x == 0b11110111
Actually, you don't need to state the width. Bigints behave the right way when you do this:
>>> bin(255 & ~(1 << 3))
'0b11110111'
>>> bin(65535 & ~(1 << 3))
'0b1111111111110111'
>>> bin(75557863725914323419135 & ~(1 << 3))
'0b1111111111111111111111111111111111111111111111111111111111111111111111110111'
It's because negative numbers have an "infinite" string of ones preceding them. So when you complement a positive number (which starts with an "infinite" string of zeros), you get a negative number (-(x + 1) to be exact). Just don't trust the bin representation of negative numbers; it doesn't reflect the actual bits in memory.
So you would rewrite unset_mask like so:
def unset_mask(index):
    return ~(1 << index)

x = 0b11111111
x &= unset_mask(3)
print(x == 0b11110111)  # prints True
You can use this to clear a bit in x:
x &= ~(1 << index)
This will unset the bit:
x ^= 1 << 3 & x
In a function:
def unset_bit(x, n):
    return 1 << n & x ^ x
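A quick check of unset_bit against the mask-based approach above:
x = 0b11111111
assert unset_bit(x, 3) == 0b11110111            # bit 3 cleared
assert unset_bit(0b11110111, 3) == 0b11110111   # already clear: unchanged
assert unset_bit(x, 3) == x & ~(1 << 3)         # agrees with x &= ~(1 << index)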

Please help me understand a bit shift operation

def different(s):
    x = len(s)
    for i in range(1, 1 << x):
        u.append([s[j] for j in range(x) if (i & (1 << j))])
It takes a list and makes different combinations:
(a, b, c) -> ((a, b, c), (a, b), (a, c), ...)
But what does the range do? From 1 to what? I don't understand the "<<".
And also, what does if (i & (1 << j)) do? Does it check i and 2 to the power of j? That doesn't make any sense to me.
The range function returns a list of numbers from zero to the given number minus one. It also has two- and three-argument forms (see the doc for more info):
range(n) == [0, 1, 2, ..., n - 1]
<< is the left-shift operator, and has the effect of multiplying the left hand side by two to the power of the right hand side:
x << n == x * 2**n
Thus the above range function (range(1, 1 << x)) returns [1, 2, 3, ..., 2**x - 1].
In the second usage of <<, the left-shift is being used as a bit-mask. It moves the 1-bit into the j-th bit, and performs a bit-wise and with i, so the result will be non-zero (and pass the if test) if and only if the j-th bit of i is set. For example:
j = 3
1 << j = 0b1000 (binary notation)
i = 41 = 0b101001
i & (1 << j) =   0b101001
               & 0b001000
               = 0b001000 (non-zero, the if-test passes)
i = 38 = 0b100110
i & (1 << j) =   0b100110
               & 0b001000
               = 0b000000 (zero, the if-test fails)
In short, x & (1 << y) is non-zero iff the y-th bit of x is set.
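To see the whole thing in action, here is the snippet from the question made self-contained (u turned into a local list and returned), purely for illustration:
def different(s):
    u = []
    x = len(s)
    for i in range(1, 1 << x):    # i runs over every non-empty bit pattern
        u.append([s[j] for j in range(x) if i & (1 << j)])
    return u

print(different(['a', 'b', 'c']))
# [['a'], ['b'], ['a', 'b'], ['c'], ['a', 'c'], ['b', 'c'], ['a', 'b', 'c']]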
<< is the binary left shift operator. 1 << x is a way of saying two to the power of x.
