I am trying to find a very fast way to find the next higher power of 2 for a very large number (1,000,000 digits). For example, given 1009, I want to find the next higher power of two, which is 1024 or 2**10.
I tried using a loop, but for large numbers this is very, very slow:
y=0
while (1<<y)<1009:
    y+=1
print(1<<y)
1024
While this works, it's slow for numbers larger than a million digits. Is there a faster algorithm to find the next higher power of 2 for a number this large?
ANSWERED BY #JonClements
Using 2**number.bit_length() works perfectly, so this works for large numbers as well. Thanks Jon.
Here's a code example from Jon's implementation:
j = 1009
2**j.bit_length()
1024
Here's a code example using the shift operator:
2<<(j.bit_length()-1)
1024
Here is the time difference using the million-digit number; the shift operator with bit_length is significantly faster:
len(str(aa))
1000000
def useBITLENGTHwithshiftoperator(hm):
    return 1<<hm.bit_length()-1<<1

def useBITLENGTHwithpowersoperator(hm):
    return 2**hm.bit_length()
import time

start = time.time()
l=useBITLENGTHwithpowersoperator(aa)
end = time.time()
print(end - start)
0.014303922653198242
start = time.time()
l=useBITLENGTHwithshiftoperator(aa)
end = time.time()
print(end - start)
0.0002968311309814453
Take 2^ceiling(logBase2(x)) - should work unless x is already a power of 2, and you can check for that with: if logBase2(x) == ceiling(logBase2(x)).
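A rough Python sketch of that idea (an illustration, not the commenter's code); math.log2 accepts arbitrarily large ints, but float rounding near exact powers of two is why bit_length() is the safer choice:
import math

def next_pow2_via_log(x):
    # 2^ceiling(log2(x)), bumped once more if x is already a power of two;
    # beware: float rounding can make the ceiling wrong for huge inputs
    e = math.ceil(math.log2(x))
    if 2**e == x:
        e += 1
    return 2**e

print(next_pow2_via_log(1009))   # 1024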
I do not code in Python, but millions of digits implies bignums, so:
try to look inside your bignum lib
It might return the number of words or bits used in O(1) as some number representations need it to speed up other stuff. In such case you can obtain your answer in O(1) for free.
As #JonClements suggested in a comment, try bit_length() and measure whether it is O(1) or O(log(n)) ...
Your while is O(n^3) instead of O(n^2)
You are bitshifting from 1 over and over again in each iteration. Why not just shift the last result by 1 bit instead? Something like
for (y=0,yy=1;yy<1009;y++,yy<<=1);
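In Python, that version of the loop might look like this (my translation; the answer's snippet is C-style):
# reuse the previous result instead of recomputing 1 << y from scratch
y, yy = 0, 1
while yy < 1009:
    y += 1
    yy <<= 1
print(yy)   # 1024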
using log2 might be faster
In case the bignum class you use has it implemented correctly, after some number-size threshold log2(1009) might be significantly faster. But that depends on the type of numbers you are using and the bignum implementation itself.
bit-shifting can be even faster
If you have some upper limit on your numbers you can use binary search, converting your bitshifting into O(n.log2(n)).
If not, you can start bitshifting by 32 bits instead of by 1, and once you reach the target size, bitshift by 1 bit. Or even use more layers like 1024/128/16/1 bits. The complexity would still be O(n^2), but the constant would be ~1024 times smaller, speeding up your code ~1024 times for big numbers...
Another option is to find the limit by shifting by 1 bit, then by 2, then by 4, 8, 16, 32, 64, ... until the result is bigger than your target number, and from there either bitshift back or use binary search. This one would be O(n.log2(n)) even without any upper limit.
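A Python sketch of that doubling-then-binary-search idea (an illustration only; in Python, 1 << x.bit_length() remains the simplest route):
def next_pow2(x):
    # smallest power of two >= x, mirroring the original while-loop's semantics
    if x <= 1:
        return 1
    lo, hi, step = 0, 1, 1
    while (1 << hi) < x:          # grow the candidate exponent exponentially
        lo = hi
        step *= 2
        hi += step
    # invariant: (1 << lo) < x <= (1 << hi)
    while lo + 1 < hi:            # binary search for the exact exponent
        mid = (lo + hi) // 2
        if (1 << mid) < x:
            lo = mid
        else:
            hi = mid
    return 1 << hi

print(next_pow2(1009))   # 1024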
However, all of these bring much higher overhead and will slow down the processing of smaller numbers.
Constructing 2^(y-1) < x <= 2^y might be possible to enhance too. For example, by using the bit-shifting approach to find y you get your answer as a byproduct for free. With floating-point or fixed-point numbers you can directly construct such a number by computing the exponent for 1, or by setting the correct bit in a zero value... But for arbitrary numbers (where the size of the number is dynamic) this is much harder/slower. So it all boils down to what kind of bignum class you have and what values you use.
I am trying to simulate a fixed-point filter implementation. I want to capture low-level hardware features like 2's-complement wraparound/overflow and fixed register widths. Some of the register widths are set by hardware features at unusual and long widths (i.e. 72 bits).
I've been making some progress using the built-in integers. The infinite width is incredibly useful... but I find myself fighting Python a lot, because it sometimes wants to interpret a binary value as a positive integer, and sometimes it seems to want to interpret a very similar binary value as a negative 2's-complement number. For example:
>>> a = 0b11111 # sign-extended -1
>>> b = 0b0011
>>> print("{0:b}".format(a*b))
1011101
>>> print("{0:b}".format((a*b)&a)) # Truncate to correct product length
11101 # == -3 in 2s complement. Great!
>>> print("{0:b}".format(~((a*b)&a)+1)) # Actually perform the 2's complement
-11101 # Arrrrggggghhh
>>> print("{0:b}".format((~((a*b)&a)&a)+1)) # Truncate with extreme prejudice
11 # OK. Fine.
I guess if I think hard enough I can figure out why all this works the way it does, but if I could just do it all in unsigned space without worrying about python adding sign bits it would make things easier and less error-prone. Anyone know if there's a relatively easy way to do this? I considered bit strings, but I have to do a lot of adds & multiplies in this application and built-in integer arithmetic is really useful for that.
~x is literally defined on arbitrary precision integers as -(x+1). It does not do bit arithmetic: ~0 is 255 in one-byte integers, 65535 in two-byte integers, 1023 for 10-bit integers etc; so defining ~ via bit inversion on stretchy integers is useless.
If a defines the fixed width of your integers (with 0b11111 saying you are working with five-bit numbers), bit inversion is as simple as a^x.
print("{0:b}".format(a ^ b)
# => 11100
Two's complement is meanwhile easiest done as a+1-b, or equivalently a^b+1:
print("{0:b}".format((a + 1) - b))
# => 11101
print("{0:b}".format((a ^ b) + 1))
# => 11101
tl;dr: Don't use ~ if you want to stay unsigned.
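Building on that, here is a minimal sketch of the kind of wrapper the question seems to be after, assuming a single fixed register width (WIDTH, wrap and to_signed are names I made up):
WIDTH = 5
MASK = (1 << WIDTH) - 1                 # 0b11111, the "a" above

def wrap(x):
    # truncate to WIDTH bits: two's-complement wraparound, always non-negative
    return x & MASK

def to_signed(x):
    # reinterpret a WIDTH-bit pattern as a signed value only when needed
    return x - (1 << WIDTH) if x & (1 << (WIDTH - 1)) else x

p = wrap(0b11111 * 0b0011)              # multiply, then wrap to 5 bits
print("{0:b}".format(p), to_signed(p))  # 11101 -3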
I've been playing around with proving certain SIMD vectorizations using Z3, and I'm running into a problem trying to model SIMD operations that conditionally move around bits or lanes (such as Intel's _mm_shuffle_epi8, for example).
The problem occurs when I try to use symbolic high and low with Extract, which does not seem to be supported:
assert a.sort() == BitVecSort(128)
assert b.sort() == BitVecSort(128)
Extract( Extract(i+3,i,b)*8+7, Extract(i+3,i,b)*8, a)
results in
z3.z3types.Z3Exception: Symbolic expressions cannot be cast to concrete Boolean values.
The problem appears to be two-fold:
Z3 does not appear to support symbolically sized BitVecs
>>> a = Int('a')
>>> b = BitVec('b', a)
ctypes.ArgumentError: argument 2: <class 'TypeError'>: wrong type
Would be neat, but alas. As a result, Extract needs to be able to know the precise BitVec sort of its return value, and demands that both high and low are concrete, even though it appears the only real requirement should be that simplify(high - low) results in a concrete value.
What's the correct way of doing this?
SMTLib bit-vector logic is only defined for concrete bit-sizes. This is not just an oversight: it is a fundamental limitation of the logic. There's no decision procedure that can decide the correctness of bit-vector formulas involving symbolic sizes, since the truth of a bit-vector formula can change depending on the size. The classic example is:
x <= 7
if x is a bitvector of size <= 3, then the above is true, otherwise it isn't. If that looks contrived, consider the following:
x*x <= x+x+x
Again, this is true if x is 2-bits wide, but not true if it is 3-bits wide. Thus, SMTLib requires all bit-vector sizes to be concrete at specification time. Note that you can write higher-level programs that work for arbitrary bit-sizes, but once they are rendered and sent to the solver, all bit-vector sizes must be known concrete constants.
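One way to see this concretely is with z3py (a sketch; note it uses the unsigned comparison ULE, for which the claim above holds):
from z3 import BitVec, ULE, prove

for width in (2, 3):
    x = BitVec('x', width)
    print(f"{width}-bit x:")
    prove(ULE(x * x, x + x + x))   # proved at 2 bits, counterexample at 3 bits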
Regarding your question about Extract. You're right, strictly speaking, concreteness of the final length is sufficient. But z3py is a thin-layer on top of SMTLib, and it doesn't do such simplifications. The "concreteness" requirement follows from the similar limitation on the corresponding SMTLib function:
All function symbols with declaration of the form
((_ extract i j) (_ BitVec m) (_ BitVec n))
where
- i, j, m, n are numerals
- m > i ≥ j ≥ 0,
- n = i - j + 1
See http://smtlib.cs.uiowa.edu/theories-FixedSizeBitVectors.shtml for details. Note that even the logic itself is called "FixedSizeBitVectors" for this very reason, not just "BitVectors".
However, it isn't really hard to extract a fixed-size chunk: simply right-shift by lo, and mask/extract the required number of bits:
((_ extract 7 0) (bvlshr x lo))
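In z3py the same shift-and-mask trick might look like this (a sketch, assuming a, b and the lane index i are 128-bit terms as in the question):
from z3 import BitVec, Extract, LShR

a = BitVec('a', 128)
b = BitVec('b', 128)
i = BitVec('i', 128)        # symbolic bit position of the 4-bit selector

# instead of Extract(i+3, i, b), shift b down by i and mask off 4 bits;
# the shift amount may be symbolic even though Extract's bounds may not
idx = LShR(b, i) & 0xF      # still a 128-bit term

# select the idx-th byte of a: shift right by idx*8, keep the low 8 bits
byte = Extract(7, 0, LShR(a, idx * 8))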
If your chunk size is not constant, then again you land in the world of symbolic bit-vector sizes, and SMTLib avoids this for the reasons I mentioned above. (And this is also the reason why extract takes concrete integers as arguments and is written in that funny SMTLib notation to indicate the arguments are concrete values.)
If you do have to work with "symbolic" word sizes, your best bet is to write your program and prove for each "symbolic" size of interest separately, by making sure the sizes are concrete in each case. (Essentially a case split for all the sizes you are interested in.)
Yep, high/low need to be constant so that the resulting type is known statically.
You can use shifts and/or masks to do what you want, though you'll need to fix a maximum size for the output.
I want to be able to access the sign bit of a number in Python. I can do something like n >> 31 in C, since an int is represented as 32 bits.
I can't make use of the conditional operator, or of > and <.
In Python 3, integers don't have a fixed size and aren't stored using the internal CPU representation (which allows Python to handle very large numbers without trouble).
So the best way is
signbit = 1 if n < 0 else 0
or
signbit = int(n < 0)
EDIT: if you cannot use < or > (which is ludicrous, but so be it) you could use the fact that a-b will be positive if a is greater than b, so you could do
abs(a-b) == a-b
That doesn't use < or >, at least not in the text (abs uses it internally, you can trust me on that).
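For instance, applied to the original sign-bit question (a sketch):
def signbit(n):
    # abs(n) == n exactly when n is non-negative, so this is 1 only for negative n
    return int(abs(n) != n)

print(signbit(-7), signbit(7), signbit(0))   # 1 0 0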
I would argue that in Python there is not really a concept of a sign bit. As far as the programmer is concerned, an int is just a type with certain behavior. You don't get access to the low-level representation. Even bin(-3) returns a "negative" binary representation: '-0b11'
However, there are ways to get the sign, or the bigger of two integers, without comparisons. The following approach abuses floating-point math to avoid comparisons.
def sign(a):
    try:
        return (1 - int(a / (a**2)**0.5)) // 2
    except ZeroDivisionError:
        return 0

def return_bigger(a, b):
    s = sign(b - a)
    return a * s + b * (1 - s)
assert sign(-33) == 1
assert sign(33) == 0
assert return_bigger(10, 15) == 15
assert return_bigger(25, 3) == 25
assert return_bigger(42, 42) == 42
(a**2)**0.5 could be replaced with abs but I bet internally this is implemented with a comparison.
The try/except is not needed if you don't care about 0 or equal integers (or there may be another horrible math workaround).
Finally, I'd like to point out that I have absolutely no idea why on earth anybody would want to implement something like that, except for the hell of it.
Conceptually, the bit representation of a negative integer is padded with an infinite number of 1 bits to the left (just like a non-negative number is regarded as padded with an infinite number of 0 bits). The operation n >> 31 does work (given that n is in the range of signed 32-bit numbers) in the sense that it places the sign bit (or if you prefer, one of the left-padding bits) in the lowest bit position. You just need to get rid of the rest of the left-padding bits, which you can do with a bitwise and operation like this:
n >> 31 & 1
Or you can make use of the fact that all one bits is how −1 is represented, and simply negate the result:
-(n >> 31)
Alternatively, you can cut off all but the lowest 32 1 bits before you do the shift, like this:
(n & 0xffffffff) >> 31
This is all under the assumption that you are working with numbers that fit into a signed 32-bit int. If n might need a 64-bit representation, shift by 63 places instead of 31, and if it's just 16-bit numbers, shifting by 15 places is enough. (If you use the (n & 0xffffffff) >> 31 variant, adjust the number of fs accordingly.)
On machine code level, and-ing/negating and shifting is potentially much more efficient than using comparison. The former is just a couple of machine instructions, while the latter would usually boil down to a branch. Branching doesn't just take more machine instructions, it also has a bad influence on the pipelining and out-of-order execution of modern CPUs. But Python execution takes place in a higher-level layer than machine code execution, and therefore it's difficult to say anything about the performance impact in Python: it may depend on the context – as it would in machine code – and may therefore also be difficult to test generally. (Caveat: I don't know much about how low-level execution happens in CPython, or in Python in general. For someone who does, this might not be so difficult to say.)
If you don't know how big n is (in Python, an integer is not required to fit into any specific number of bits), you can use bit_length() to find out. This will work for integers of any size:
-(n >> n.bit_length())
The bit_length() operation might boil down to a single machine instruction, or it might actually need a loop to find the result, depending on the implementation and the underlying machine architecture. In the latter case, this should be noticeably more costly than using a constant shift amount.
Final remark: in C, n >> 31 is actually not guaranteed to work like you assume, because the C language leaves it implementation-defined whether >> does logical right shift (like you assume) or arithmetic right shift (like Python does). In some languages, these are different operators, which makes it clear what you get. In Java, for instance, logical shift right is >>> and arithmetic shift right is >>.
How about this?
def is_negative(num, places):
    return not long(num * 10**places) & 0xFFFFFFFF == long(num * 10**places)
Not efficient, but not using < or > definitely restricts you to weirdness. Note that zero will evaluate as positive.
So in Ruby there is a trick to specify infinity:
1.0/0
=> Infinity
I believe in Python you can do something like this
float('inf')
These are just examples though, I'm sure most languages have infinity in some capacity. When would you actually use this construct in the real world? Why would using it in a range be better than just using a boolean expression? For instance
(0..1.0/0).include?(number) == (number >= 0) # True for all values of number
=> true
To summarize, what I'm looking for is a real world reason to use Infinity.
EDIT: I'm looking for real world code. It's all well and good to say this is when you "could" use it, when have people actually used it.
Dijkstra's Algorithm typically assigns infinity as the initial edge weights in a graph. This doesn't have to be "infinity", just some arbitrarily large constant, but in Java I typically use Double.POSITIVE_INFINITY. I assume Ruby could be used similarly.
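In Python, that initialization might look like this (a sketch; the adjacency-list layout of graph is an assumption):
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, edge_weight), ...]}
    # every node starts out "infinitely far away" until a real path is found
    dist = {node: float('inf') for node in graph}
    dist[source] = 0
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue                      # stale queue entry
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist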
Off the top of my head, it can be useful as an initial value when searching for a minimum value.
For example:
min = float('inf')
for x in somelist:
    if x < min:
        min = x
Which I prefer to setting min initially to the first value of somelist
Of course, in Python, you should just use the min() built-in function in most cases.
There seems to be an implied "Why does this functionality even exist?" in your question. And the reason is that Ruby and Python are just giving access to the full range of values that one can specify in floating point form as specified by IEEE.
This page seems to describe it well:
http://steve.hollasch.net/cgindex/coding/ieeefloat.html
As a result, you can also have NaN (Not-a-number) values and -0.0, while you may not immediately have real-world uses for those either.
In some physics calculations you can normalize irregularities (i.e., infinite numbers) of the same order with each other, canceling them both and allowing an approximate result to come through.
When you deal with limits, calculations like (infinity / infinity) -> approaching a finite number could be achieved. It's useful for the language to have the ability to override the regular divide-by-zero error.
Use Infinity and -Infinity when implementing a mathematical algorithm calls for it.
In Ruby, Infinity and -Infinity have nice comparative properties so that -Infinity < x < Infinity for any real number x. For example, Math.log(0) returns -Infinity, extending to 0 the property that x > y implies that Math.log(x) > Math.log(y). Also, Infinity * x is Infinity if x > 0, -Infinity if x < 0, and 'NaN' (not a number; that is, undefined) if x is 0.
For example, I use the following bit of code in part of the calculation of some log likelihood ratios. I explicitly reference -Infinity to define a value even if k is 0 or n AND x is 0 or 1.
Infinity = 1.0/0.0
def Similarity.log_l(k, n, x)
  unless x == 0 or x == 1
    k * Math.log(x.to_f) + (n-k) * Math.log(1.0-x)
  else
    -Infinity
  end
end
Alpha-beta pruning
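For example, the search window starts at (-infinity, +infinity) and the running best values are seeded the same way (a sketch; children() and evaluate() are placeholder helpers):
def alphabeta(node, depth, alpha=float('-inf'), beta=float('inf'), maximizing=True):
    if depth == 0 or not children(node):
        return evaluate(node)
    if maximizing:
        value = float('-inf')                 # nothing found yet: worst possible
        for child in children(node):
            value = max(value, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, value)
            if alpha >= beta:
                break                         # beta cutoff
        return value
    else:
        value = float('inf')
        for child in children(node):
            value = min(value, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, value)
            if alpha >= beta:
                break                         # alpha cutoff
        return value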
I use it to specify the mass and inertia of a static object in physics simulations. Static objects are essentially unaffected by gravity and other simulation forces.
In Ruby, infinity can be used to implement lazy lists. Say I want N numbers starting at 200 which get successively larger by 100 units each time:
Inf = 1.0 / 0.0
(200..Inf).step(100).take(N)
More info here: http://banisterfiend.wordpress.com/2009/10/02/wtf-infinite-ranges-in-ruby/
I've used it for cases where you want to define ranges of preferences / allowed.
For example, in 37signals apps you have something like a limit on the number of projects:
Infinity = 1 / 0.0
FREE = 0..1
BASIC = 0..5
PREMIUM = 0..Infinity
then you can do checks like
if PREMIUM.include? current_user.projects.count
  # do something
end
I used it for representing camera focus distance and, to my surprise, in Python:
>>> float("inf") is float("inf")
False
>>> float("inf") == float("inf")
True
I wonder why that is. (Presumably each call to float("inf") builds a new float object, so is compares two distinct objects even though == considers them equal.)
I've used it in the minimax algorithm. When I'm generating new moves, if the min player wins on that node then the value of the node is -∞. Conversely, if the max player wins then the value of that node is +∞.
Also, if you're generating nodes/game states and then trying out several heuristics, you can set all the node values to -∞/+∞, whichever makes sense, and then when you're running a heuristic it's easy to set the node value:
node_val = float('-inf')
node_val = max(heuristic1(node), node_val)
node_val = max(heuristic2(node), node_val)
node_val = max(heuristic3(node), node_val)
I've used it in a DSL similar to Rails' has_one and has_many:
has 0..1 :author
has 0..INFINITY :tags
This makes it easy to express concepts like Kleene star and plus in your DSL.
I use it when I have a Range object where one or both ends need to be open.
I've used symbolic values for positive and negative infinity in dealing with range comparisons to eliminate corner cases that would otherwise require special handling:
Given two ranges A=[a,b) and C=[c,d) do they intersect, is one greater than the other, or does one contain the other?
A > C iff a >= d
A < C iff b <= c
etc...
If you have values for positive and negative infinity that respectively compare greater than and less than all other values, you don't need to do any special handling for open-ended ranges. Since floats and doubles already implement these values, you might as well use them instead of trying to find the largest/smallest values on your platform. With integers, it's more difficult to use "infinity" since it's not supported by hardware.
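A sketch of that in Python, using None for an open end and normalizing it to ±infinity so the comparisons need no special cases (the helper names are mine):
def normalize(lo, hi):
    return (float('-inf') if lo is None else lo,
            float('inf') if hi is None else hi)

def intersects(a, c):
    # half-open ranges [lo, hi) intersect iff each starts before the other ends
    (a_lo, a_hi), (c_lo, c_hi) = normalize(*a), normalize(*c)
    return a_lo < c_hi and c_lo < a_hi

def contains(a, c):
    (a_lo, a_hi), (c_lo, c_hi) = normalize(*a), normalize(*c)
    return a_lo <= c_lo and c_hi <= a_hi

print(intersects((0, 10), (5, None)))   # True: [0, 10) overlaps [5, +inf)
print(contains((None, None), (3, 7)))   # True: (-inf, +inf) contains [3, 7)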
I ran across this because I'm looking for an "infinite" value to set for a maximum, if a given value doesn't exist, in an attempt to create a binary tree. (Because I'm selecting based on a range of values, and not just a single value, I quickly realized that even a hash won't work in my situation.)
Since I expect all numbers involved to be positive, the minimum is easy: 0. Since I don't know what to expect for a maximum, though, I would like the upper bound to be Infinity of some sort. This way, I won't have to figure out what "maximum" I should compare things to.
Since this is a project I'm working on at work, it's technically a "real world problem". It may be kind of rare, but like a lot of abstractions, it's convenient when you need it!
Also, to those who say that this (and other examples) are contrived, I would point out that all abstractions are somewhat contrived; that doesn't mean they aren't useful when you need them.
When working in a problem domain where trig is used (especially tangent), infinity is an answer that can come up. Trig ends up being used heavily in graphics applications, games, and geospatial applications, plus the obvious math applications.
I'm sure there are other ways to do this, but you could use Infinity to check for reasonable inputs in a String-to-Float conversion. In Java, at least, the Float.isNaN() static method will return false for numbers with infinite magnitude, indicating they are valid numbers, even though your program might want to classify them as invalid. Checking against the Float.POSITIVE_INFINITY and Float.NEGATIVE_INFINITY constants solves that problem. For example:
// Some sample values to test our code with
String stringValues[] = {
    "-999999999999999999999999999999999999999999999",
    "12345",
    "999999999999999999999999999999999999999999999"
};

// Loop through each string representation
for (String stringValue : stringValues) {
    // Convert the string representation to a Float representation
    Float floatValue = Float.parseFloat(stringValue);
    System.out.println("String representation: " + stringValue);
    System.out.println("Result of isNaN: " + floatValue.isNaN());
    // Check the result for positive infinity, negative infinity, and
    // "normal" float numbers (within the defined range for Float values).
    if (floatValue == Float.POSITIVE_INFINITY) {
        System.out.println("That number is too big.");
    } else if (floatValue == Float.NEGATIVE_INFINITY) {
        System.out.println("That number is too small.");
    } else {
        System.out.println("That number is jussssst right.");
    }
}
Sample Output:
String representation: -999999999999999999999999999999999999999999999
Result of isNaN: false
That number is too small.
String representation: 12345
Result of isNaN: false
That number is jussssst right.
String representation: 999999999999999999999999999999999999999999999
Result of isNaN: false
That number is too big.
It is used quite extensively in graphics. For example, any pixel in a 3D image that is not part of an actual object is marked as infinitely far away, so that it can later be replaced with a background image.
I'm using a network library where you can specify the maximum number of reconnection attempts. Since I want mine to reconnect forever:
my_connection = ConnectionLibrary(max_connection_attempts = float('inf'))
In my opinion, it's more clear than the typical "set to -1 to retry forever" style, since it's literally saying "retry until the number of connection attempts is greater than infinity".
Some programmers use Infinity or NaNs to show a variable has never been initialized or assigned in the program.
If you want the largest number from user input, but they might enter very large negative values: if I enter -13543124321.431 it still works out as the largest number so far, since it's bigger than -inf.
initial_value = float('-inf')
while True:
    try:
        x = input('gimmee a number or type the word, stop ')
    except KeyboardInterrupt:
        print("we done - by yo command")
        break
    if x == "stop":
        print("we done")
        break
    try:
        x = float(x)
    except ValueError:
        print('not a number')
        continue
    if x > initial_value: initial_value = x
print("The largest number is: " + str(initial_value))
You can use:
import decimal
decimal.Decimal("Infinity")
or:
from decimal import *
Decimal("Infinity")
For sorting
I've seen it used as a sort value, to say "always sort these items to the bottom".
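For example (a quick sketch of my own):
# items with no price sort after everything that has one
items = [{'name': 'a', 'price': 3}, {'name': 'b', 'price': None}, {'name': 'c', 'price': 1}]
items.sort(key=lambda i: float('inf') if i['price'] is None else i['price'])
print([i['name'] for i in items])   # ['c', 'a', 'b']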
To specify a non-existent maximum
If you're dealing with numbers, nil represents an unknown quantity, and should be preferred to 0 for that case. Similarly, Infinity represents an unbounded quantity, and should be preferred to (arbitrarily_large_number) in that case.
I think it can make the code cleaner. For example, I'm using Float::INFINITY in a Ruby gem for exactly that: the user can specify a maximum string length for a message, or they can specify :all. In that case, I represent the maximum length as Float::INFINITY, so that later when I check "is this message longer than the maximum length?" the answer will always be false, without needing a special case.