How to merge two binary numbers into a ternary number - python

I have two binary integers, x0 and x1 that are 8 bits (so they span from 0 to 255). This statement is always true about these numbers: x0 & x1 == 0. Here is an example:
bx0 = 100 # represented as 01100100 in binary
bx1 = 129 # represented as 10000001 in binary
So I need to do the following operation on these numbers. First, interpret these binary representations as ternary (base-3) numbers, as follows:
tx0 = ternary(bx0) # becomes 981 represented as 01100100 in ternary
tx1 = ternary(bx1) # becomes 2188 represented as 10000001 in ternary
Then, swap all of the 1 in the ternary representation of tx1 to 2:
tx1_swap = swap(tx1) # becomes 4376, represented as 20000002 in ternary
Then use a ternary version of OR on them to get the final combined number:
result = ternary_or(tx0, tx1_swap) # becomes 5357, represented as 21100102 in ternary
I don't need the ternary representation saved at any point, I only need the result, e.g. result = 5357. Of course I could code this by converting the numbers to binary strings, then to ternary, and so on, but I need this operation to be fast because I am doing it many times in my code. What would be a fast way to implement this in Python?

The fastest way to do this is probably with decimal addition:
a = 1100100
b = 10000001
result = int(str(a+2*b),3) #5357
You won't find ternary-wise operators in Python (or any other language that I know of). Since you need to go beyond bitwise operations, your next-fastest option is integer addition, which every computer on Earth is optimized to perform.
Other solutions that convert to ternary to get this done will require you to cast back and forth to strings which takes much longer than decimal addition. This only requires one string cast at the end, assuming you even need the decimal version of the final ternary number.
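If you'd rather start from the original integers x0 and x1 instead of their digit forms, the same trick can be wrapped up like this (a minimal sketch; format(x, 'b') just produces the digit string used above):
def ternary_or(x0, x1):
    # Write each integer's binary digits as a base-10 number with the same
    # digits (e.g. 100 -> 1100100), then combine exactly as above.
    a = int(format(x0, 'b'))
    b = int(format(x1, 'b'))
    return int(str(a + 2 * b), 3)

print(ternary_or(100, 129))  # 5357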

Re-explanation for dummies like me:
A straightforward way to "encode" two binary mutually exclusive numbers (w & b == 0) in ternary would be:
white_black_empty = lambda w, b: int(format(b, 'b'), base=3) + \
                                 int(format(w, 'b').replace('1', '2'), base=3)
Here are all possible 2-bit variants:
white_black_empty(0b00, 0b00) == 0
white_black_empty(0b00, 0b01) == 1
white_black_empty(0b01, 0b00) == 2
white_black_empty(0b00, 0b10) == 3
white_black_empty(0b00, 0b11) == 4
white_black_empty(0b01, 0b10) == 5
white_black_empty(0b10, 0b00) == 6
white_black_empty(0b10, 0b01) == 7
white_black_empty(0b11, 0b00) == 8
By observing that int(format(w, 'b').replace('1','2'), base=3) is actually equal to double int(format(w, 'b'), base=3) (for example, 2022002 in base 3 == 1011001 in base 3, times 2), we get the solution that @Mark Dickinson posted in the comments above:
white_black_empty = lambda w, b: int(format(b, 'b'), base=3) + \
                                 int(format(w, 'b'), base=3)*2
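As a quick check against the example at the top (x1 = 129 supplies the 2s, x0 = 100 the 1s):
white_black_empty = lambda w, b: int(format(b, 'b'), base=3) + int(format(w, 'b'), base=3) * 2
print(white_black_empty(129, 100))  # 5357, i.e. 21100102 in base 3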

Python inaccuracies with large numbers?

I have to write a function that uses another function, but the other function has to return integers, which get fairly inaccurate with large numbers.
My code:
import math

def reduce(n, d):
    m = min(n, d)
    for i in range(m, 1, -1):
        if n % i == 0 and d % i == 0:
            n = n // i
            d = d // i
    return (n, d)

def almost_square(n, d):
    f = n / d
    c = math.ceil(f)
    n *= c
    return reduce(n, d)

def destiny(n, d):
    b = n / d
    fraction = n, d
    while not b.is_integer():
        fraction = almost_square(fraction[0], fraction[1])
        b = fraction[0] / fraction[1]
    return int(b)
What the functions are supposed to do:
reduce: just simplifying the fraction, so 2/4 becomes 1/2 for example
almost_square: multiplying the fraction with the rounded up integer of the fraction
destiny: applying almost square on a fraction until it returns an integer.
The thing is, my uni uses a program that runs 50 test cases for each function, and you only complete the exercise when every function passes all 50 test cases. They expect the function 'reduce' to return a tuple of integers, but turning the numbers there into integers makes my function 'destiny' inaccurate, or at least I think so.
So out of the 50 test cases, all 50 work on the function reduce, all 50 work on the function almost_square, but 5 fail for the function destiny which are:
destiny(10, 6), my output: 1484710602474311424, expected output: 1484710602474311520
destiny(17, 13), my output: 59832260230817688435680083968, expected output: 59832260230817687634146563200
destiny(10, 3), my output: 1484710602474311424, expected output: 1484710602474311520
destiny(15, 9), my output: 1484710602474311424, expected output: 1484710602474311520
destiny(11, 5), my output: 494764640798827343035498496, expected output: 494764640798827359861461484
Anything that could fix this?
There is some floating point arithmetic in that code, which can slightly throw off the results, and apparently it did. Forget about float, don't use any "floats, but larger" libraries either, integer arithmetic is the way to go.
For example,
f = n/d
c = math.ceil(f)
n*=c
This code looks like it computes n * ⌈n / d⌉, but it only approximately computes that because it uses floating point arithmetic, requiring values to be rounded to the nearest float (for example, int(float(1484710602474311520)) is 1484710602474311424). It should be implemented using integer arithmetic, for example like this:
n *= (n + d - 1) // d
The destiny function should not use floating point division either, and it does not need to. The "is b an integer" test can be stated equivalently as "does d divide n", which can be implemented with integer arithmetic.
Also for that reduce function you can use math.gcd, or implement gcd yourself, the implementation that you have now is very slow.
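For instance, just the reduce piece rewritten with math.gcd might look like this (a sketch of that one function, not the full assignment solution):
from math import gcd

def reduce(n, d):
    # Divide out the greatest common divisor in one step
    # instead of trial division from min(n, d) downwards.
    g = gcd(n, d)
    return (n // g, d // g)

# Likewise, the float test b.is_integer() in destiny can become the
# integer test fraction[0] % fraction[1] == 0.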
With those changes, I get the right results for the test cases that you mentioned. I could show the code, but since it is an assignment, you should probably write the code yourself. Asking this question at all is already risky.
Integers don't get inaccurate with large numbers. Floating point numbers do. And you are using floating point numbers.
Rewrite your algorithm to only use integers.

How does this bitwise manipulation change a single bit at a particular index?

I understand what each of the individual operators does by itself, but I don't know how they interact in order to get the correct results.
def kill(n, k):
    # Takes int n and replaces the bit k from right with 0. Returns the new number
    return n & ~(1 << k-1)
I tested the program with the n as 37 and k as 3.
def b(n, s=""):
    print(str(format(n, 'b')) + " " + s)

def kill(n, k):
    b(n, "n ")
    b(1 << k-1, "1<<k-1")
    b(~(1 << k-1), "~(1<<k-1) ")
    b(n & ~(1 << k-1), " n & ~(1<<k-1) ")
    return n & ~(1 << k-1)

# TESTS
kill(37, 3)
I decided to run through it step by step.
I printed both the binary representations of both n and ~(1<<k-1) but after that I was lost. ~(1<<k-1) gave me -101 and I'm not sure how to visualize that in binary. Can someone go through it step by step with visualizations for the binary?
All numbers below are printed in binary representation.
Say, n has m digits in binary representation. Observe that n & 11...1 (m ones) would return n. Indeed, working bitwise, if x is a one-bit digit (0 or 1), then x & 1 = x.
Moreover, observe that x & 0 = 0. Therefore, to set the kth digit of n to 0, we need to AND (&) it with 11111..1011..1, where the 0 is exactly at the kth location from the end.
Now we need to generate 11111..1011..1. It has all ones except one digit. Its negation is 00000..0100..0, which is exactly what 1 << k-1 gives us.
All in all: 1 << k-1 gives us 00000..0100..0. Its negation provides 11111..1011..1. Finally, we do & with the input.
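To tie this back to the question's numbers, here is a concrete run for n = 37 and k = 3, printed at a fixed 6-bit width so the negated mask is easier to read than Python's signed -101 output (a small illustrative sketch):
def kill(n, k):
    # Clear bit k, counted from the right starting at 1.
    return n & ~(1 << (k - 1))

n, k = 37, 3
mask = 1 << (k - 1)
print(format(n, '06b'))                # 100101
print(format(mask, '06b'))             # 000100
print(format(mask ^ 0b111111, '06b'))  # 111011  (~mask restricted to 6 bits)
print(format(kill(n, k), '06b'))       # 100001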

Exact Value after Floating point not rounding up [duplicate]

I want to remove digits from a float to have a fixed number of digits after the dot, like:
1.923328437452 → 1.923
I need to output as a string to another function, not print.
Also I want to ignore the lost digits, not round them.
round(1.923328437452, 3)
See Python's documentation on the standard types. You'll need to scroll down a bit to get to the round function. Essentially the second number says how many decimal places to round it to.
First, the function, for those who just want some copy-and-paste code:
def truncate(f, n):
    '''Truncates/pads a float f to n decimal places without rounding'''
    s = '{}'.format(f)
    if 'e' in s or 'E' in s:
        return '{0:.{1}f}'.format(f, n)
    i, p, d = s.partition('.')
    return '.'.join([i, (d+'0'*n)[:n]])
This is valid in Python 2.7 and 3.1+. For older versions, it's not possible to get the same "intelligent rounding" effect (at least, not without a lot of complicated code), but rounding to 12 decimal places before truncation will work much of the time:
def truncate(f, n):
    '''Truncates/pads a float f to n decimal places without rounding'''
    s = '%.12f' % f
    i, p, d = s.partition('.')
    return '.'.join([i, (d+'0'*n)[:n]])
Explanation
The core of the underlying method is to convert the value to a string at full precision and then just chop off everything beyond the desired number of characters. The latter step is easy; it can be done either with string manipulation
i, p, d = s.partition('.')
'.'.join([i, (d+'0'*n)[:n]])
or the decimal module
str(Decimal(s).quantize(Decimal((0, (1,), -n)), rounding=ROUND_DOWN))
The first step, converting to a string, is quite difficult because there are some pairs of floating point literals (i.e. what you write in the source code) which both produce the same binary representation and yet should be truncated differently. For example, consider 0.3 and 0.29999999999999998. If you write 0.3 in a Python program, the compiler encodes it using the IEEE floating-point format into the sequence of bits (assuming a 64-bit float)
0011111111010011001100110011001100110011001100110011001100110011
This is the closest value to 0.3 that can accurately be represented as an IEEE float. But if you write 0.29999999999999998 in a Python program, the compiler translates it into exactly the same value. In one case, you meant it to be truncated (to one digit) as 0.3, whereas in the other case you meant it to be truncated as 0.2, but Python can only give one answer. This is a fundamental limitation of Python, or indeed any programming language without lazy evaluation. The truncation function only has access to the binary value stored in the computer's memory, not the string you actually typed into the source code.[1]
If you decode the sequence of bits back into a decimal number, again using the IEEE 64-bit floating-point format, you get
0.2999999999999999888977697537484345957637...
so a naive implementation would come up with 0.2 even though that's probably not what you want. For more on floating-point representation error, see the Python tutorial.
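Both the bit pattern and the decoded value above can be checked from Python itself; here is a quick verification sketch using struct and decimal:
import struct
from decimal import Decimal

# Reinterpret the 64-bit double 0.3 as an unsigned integer to see its bits.
bits = struct.unpack('<Q', struct.pack('<d', 0.3))[0]
print(format(bits, '064b'))
# 0011111111010011001100110011001100110011001100110011001100110011
print(Decimal(0.3))
# 0.299999999999999988897769753748434595763683319091796875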
It's very rare to be working with a floating-point value that is so close to a round number and yet is intentionally not equal to that round number. So when truncating, it probably makes sense to choose the "nicest" decimal representation out of all that could correspond to the value in memory. Python 2.7 and up (but not 3.0) includes a sophisticated algorithm to do just that, which we can access through the default string formatting operation.
'{}'.format(f)
The only caveat is that this acts like a g format specification, in the sense that it uses exponential notation (1.23e+4) if the number is large or small enough. So the method has to catch this case and handle it differently. There are a few cases where using an f format specification instead causes a problem, such as trying to truncate 3e-10 to 28 digits of precision (it produces 0.0000000002999999999999999980), and I'm not yet sure how best to handle those.
If you actually are working with floats that are very close to round numbers but intentionally not equal to them (like 0.29999999999999998 or 99.959999999999994), this will produce some false positives, i.e. it'll round numbers that you didn't want rounded. In that case the solution is to specify a fixed precision.
'{0:.{1}f}'.format(f, sys.float_info.dig + n + 2)
The number of digits of precision to use here doesn't really matter, it only needs to be large enough to ensure that any rounding performed in the string conversion doesn't "bump up" the value to its nice decimal representation. I think sys.float_info.dig + n + 2 may be enough in all cases, but if not that 2 might have to be increased, and it doesn't hurt to do so.
In earlier versions of Python (up to 2.6, or 3.0), the floating point number formatting was a lot more crude, and would regularly produce things like
>>> 1.1
1.1000000000000001
If this is your situation, if you do want to use "nice" decimal representations for truncation, all you can do (as far as I know) is pick some number of digits, less than the full precision representable by a float, and round the number to that many digits before truncating it. A typical choice is 12,
'%.12f' % f
but you can adjust this to suit the numbers you're using.
[1] Well... I lied. Technically, you can instruct Python to re-parse its own source code and extract the part corresponding to the first argument you pass to the truncation function. If that argument is a floating-point literal, you can just cut it off a certain number of places after the decimal point and return that. However this strategy doesn't work if the argument is a variable, which makes it fairly useless. The following is presented for entertainment value only:
import inspect
import io
import tokenize

def trunc_introspect(f, n):
    '''Truncates/pads the float f to n decimal places by looking at the caller's source code'''
    current_frame = None
    caller_frame = None
    s = inspect.stack()
    try:
        current_frame = s[0]
        caller_frame = s[1]
        gen = tokenize.tokenize(io.BytesIO(caller_frame[4][caller_frame[5]].encode('utf-8')).readline)
        for token_type, token_string, _, _, _ in gen:
            if token_type == tokenize.NAME and token_string == current_frame[3]:
                next(gen)  # left parenthesis
                token_type, token_string, _, _, _ = next(gen)  # float literal
                if token_type == tokenize.NUMBER:
                    try:
                        cut_point = token_string.index('.') + n + 1
                    except ValueError:  # no decimal in string
                        return token_string + '.' + '0' * n
                    else:
                        if len(token_string) < cut_point:
                            token_string += '0' * (cut_point - len(token_string))
                        return token_string[:cut_point]
                else:
                    raise ValueError('Unable to find floating-point literal (this probably means you called {} with a variable)'.format(current_frame[3]))
                break
    finally:
        del s, current_frame, caller_frame
Generalizing this to handle the case where you pass in a variable seems like a lost cause, since you'd have to trace backwards through the program's execution until you find the floating-point literal which gave the variable its value. If there even is one. Most variables will be initialized from user input or mathematical expressions, in which case the binary representation is all there is.
The result of round is a float, so watch out (example is from Python 2.6):
>>> round(1.923328437452, 3)
1.923
>>> round(1.23456, 3)
1.2350000000000001
You will be better off when using a formatted string:
>>> "%.3f" % 1.923328437452
'1.923'
>>> "%.3f" % 1.23456
'1.235'
n = 1.923328437452
str(n)[:4]
At my Python 2.7 prompt:
>>> int(1.923328437452 * 1000)/1000.0
1.923
The truly pythonic way of doing it is
from decimal import *

with localcontext() as ctx:
    ctx.rounding = ROUND_DOWN
    print(Decimal('1.923328437452').quantize(Decimal('0.001')))
or shorter:
from decimal import Decimal as D, ROUND_DOWN
D('1.923328437452').quantize(D('0.001'), rounding=ROUND_DOWN)
Update
Usually the problem is not in truncating floats itself, but in the improper usage of float numbers before rounding.
For example: int(0.7*3*100)/100 == 2.09.
If you are forced to use floats (say, you're accelerating your code with numba), it's better to use cents as "internal representation" of prices: (70*3 == 210) and multiply/divide the inputs/outputs.
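A tiny sketch of the cents idea (variable names are just for illustration):
price_cents = 70                 # 0.70 stored as an integer number of cents
total_cents = price_cents * 3    # 210, exact integer arithmetic
print(total_cents / 100)         # 2.1 -- divide only at the output boundary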
Simple python script -
n = 1.923328437452
n = float(int(n * 1000))
n /=1000
def trunc(num, digits):
    sp = str(num).split('.')
    return '.'.join([sp[0], sp[1][:digits]])
This should work. It should give you the truncation you are looking for.
So many of the answers given for this question are just completely wrong. They either round up floats (rather than truncate) or do not work for all cases.
This is the top Google result when I search for 'Python truncate float', a concept which is really straightforward, and which deserves better answers. I agree with Hatchkins that using the decimal module is the pythonic way of doing this, so I give here a function which I think answers the question correctly, and which works as expected for all cases.
As a side-note, fractional values, in general, cannot be represented exactly by binary floating point variables (see here for a discussion of this), which is why my function returns a string.
from decimal import Decimal, localcontext, ROUND_DOWN

def truncate(number, places):
    if not isinstance(places, int):
        raise ValueError("Decimal places must be an integer.")
    if places < 1:
        raise ValueError("Decimal places must be at least 1.")
    # If you want to truncate to 0 decimal places, just do int(number).
    with localcontext() as context:
        context.rounding = ROUND_DOWN
        exponent = Decimal(str(10 ** - places))
        return Decimal(str(number)).quantize(exponent).to_eng_string()
>>> from math import floor
>>> floor((1.23658945) * 10**4) / 10**4
1.2365
# divide and multiply by 10**number of desired digits
If you fancy some mathemagic, this works for +ve numbers:
>>> v = 1.923328437452
>>> v - v % 1e-3
1.923
I did something like this:
from math import trunc

def truncate(number, decimals=0):
    if decimals < 0:
        raise ValueError('truncate received an invalid value of decimals ({})'.format(decimals))
    elif decimals == 0:
        return trunc(number)
    else:
        factor = float(10**decimals)
        return trunc(number*factor)/factor
You can do:
import math

def truncate(f, n):
    return math.floor(f * 10 ** n) / 10 ** n
testing:
>>> f=1.923328437452
>>> [truncate(f, n) for n in range(5)]
[1.0, 1.9, 1.92, 1.923, 1.9233]
Just wanted to mention that the old "make round() with floor()" trick of
round(f) = floor(f+0.5)
can be turned around to make floor() from round()
floor(f) = round(f-0.5)
Although both these rules break around negative numbers, so using it is less than ideal:
def trunc(f, n):
    if f > 0:
        return "%.*f" % (n, (f - 0.5*10**-n))
    elif f == 0:
        return "%.*f" % (n, f)
    elif f < 0:
        return "%.*f" % (n, (f + 0.5*10**-n))
def precision(value, precision):
    """
    param: value: takes a float
    param: precision: int, number of decimal places
    returns a float
    """
    x = 10.0**precision
    num = int(value * x) / x
    return num
precision(1.923328437452, 3)
1.923
Short and easy variant
def truncate_float(value, digits_after_point=2):
    pow_10 = 10 ** digits_after_point
    return (float(int(value * pow_10))) / pow_10

>>> truncate_float(1.14333, 2)
1.14
>>> truncate_float(1.14777, 2)
1.14
>>> truncate_float(1.14777, 4)
1.1477
When using a pandas df this worked for me
import math

def truncate(number, digits) -> float:
    stepper = 10.0 ** digits
    return math.trunc(stepper * number) / stepper

df['trunc'] = df['float_val'].apply(lambda x: truncate(x, 1))
df['trunc'] = df['trunc'].map('{:.1f}'.format)
int(16.5)
this will give an integer value of 16, i.e. it truncates; it won't let you specify decimals, but I guess you can do that by
import math

def trunc(invalue, digits):
    return int(invalue*math.pow(10, digits))/math.pow(10, digits)
Here is an easy way:
from math import floor

def truncate(num, res=3):
    return (floor(num*pow(10, res)+0.5))/pow(10, res)
for num = 1.923328437452, this outputs 1.923
def trunc(f, n):
    return ('%.16f' % f)[:(n-16)]
A general and simple function to use:
def truncate_float(number, length):
    """Truncate float numbers, up to the number specified
    in length that must be an integer"""
    number = number * pow(10, length)
    number = int(number)
    number = float(number)
    number /= pow(10, length)
    return number
There is an easy workaround in Python 3. I defined where to cut with a helper variable decPlace to make it easy to adapt:
f = 1.12345
decPlace= 4
f_cut = int(f * 10**decPlace) /10**decPlace
Output:
f_cut = 1.1234
Hope it helps.
Most answers are way too complicated in my opinion, how about this?
digits = 2 # Specify how many digits you want
fnum = '122.485221'
truncated_float = float(fnum[:fnum.find('.') + digits + 1])
122.48
Simply scanning for the index of '.' and truncate as desired (no rounding).
Convert string to float as final step.
Or in your case if you get a float as input and want a string as output:
fnum = str(122.485221) # convert float to string first
truncated_float = fnum[:fnum.find('.') + digits + 1] # string output
I think a better version would be just to find the index of decimal point . and then to take the string slice accordingly:
def truncate(number, n_digits: int = 1) -> float:
    '''
    :param number: real number ℝ
    :param n_digits: Maximum number of digits after the decimal point after truncation
    :return: truncated floating point number with at least one digit after decimal point
    '''
    decimalIndex = str(number).find('.')
    if decimalIndex == -1:
        return float(number)
    else:
        return float(str(number)[:decimalIndex + n_digits + 1])
>>> int(1.923328437452 * 1000) / 1000
1.923
>>> int(1.9239 * 1000) / 1000
1.923
By multiplying the number by 1000 (10 ^ 3 for 3 digits) we shift the decimal point 3 places to the right and get 1923.3284374520001. When we convert that to an int the fractional part 3284374520001 will be discarded. Then we undo the shifting of the decimal point again by dividing by 1000 which returns 1.923.
use numpy.round
import numpy as np
precision = 3
floats = [1.123123123, 2.321321321321]
new_float = np.round(floats, precision)
Something simple enough to fit in a list-comprehension, with no libraries or other external dependencies. For Python >=3.6, it's very simple to write with f-strings.
The idea is to let the string-conversion do the rounding to one more place than you need and then chop off the last digit.
>>> nout = 3 # desired number of digits in output
>>> [f'{x:.{nout+1}f}'[:-1] for x in [2/3, 4/5, 8/9, 9/8, 5/4, 3/2]]
['0.666', '0.800', '0.888', '1.125', '1.250', '1.500']
Of course, there is rounding happening here (namely for the fourth digit), but rounding at some point is unavoidable. In case the transition between truncation and rounding is relevant, here's a slightly better example:
>>> nacc = 6 # desired accuracy (maximum 15!)
>>> nout = 3 # desired number of digits in output
>>> [f'{x:.{nacc}f}'[:-(nacc-nout)] for x in [2.9999, 2.99999, 2.999999, 2.9999999]]
['2.999', '2.999', '2.999', '3.000']
Bonus: removing zeros on the right
>>> nout = 3 # desired number of digits in output
>>> [f'{x:.{nout+1}f}'[:-1].rstrip('0') for x in [2/3, 4/5, 8/9, 9/8, 5/4, 3/2]]
['0.666', '0.8', '0.888', '1.125', '1.25', '1.5']
The core idea given here seems to me to be the best approach for this problem.
Unfortunately, it has received fewer votes, while the later answer that has more votes is not complete (as observed in the comments). Hopefully, the implementation below provides a short and complete solution for truncation.
def trunc(num, digits):
    l = str(float(num)).split('.')
    digits = min(len(l[1]), digits)
    return l[0] + '.' + l[1][:digits]
which should take care of all corner cases found here and here.
I am also a Python newbie and, after making use of some bits and pieces here, I offer my two cents
print str(int(time.time()))+str(datetime.now().microsecond)[:3]
str(int(time.time())) will take the time epoch as int and convert it to string and join with...
str(datetime.now().microsecond)[:3] which returns the microseconds only, convert to string and truncate to first 3 chars
# value: value to be truncated
# n: number of digits to keep after the decimal point
value = 0.999782
n = 3
float(int(value * 10**n)) / 10**n

Rounding scientific notation in python

I have a number like 2.32432432423e25 in python that is the result of a computation.
I want to round this to 3 decimal points to get the output:
2.324e25
I have tried to use:
x = 2.32432432423e25
number_rounded = round(x, 3)
But when I print number_rounded it outputs a number with the same format as x.
How do I limit the display of x to just 4 significant digits?
You'll need to use string formatting for this:
'{:0.3e}'.format(2.32432432423e25)
The reason is that round is for specifying the number of digits after the decimal point, which is not really relevant when your numbers are of order 10**25.
If you want to use Python's f-string syntax introduced in Python 3.6, specify the format after the variable, separated by :, e.g.:
>>> res = 2.32432432423e25
>>> f'The result is {res:.3e}'
'The result is 2.324e+25'
I was looking for an answer to this and mostly found string answers. While that is typically the best way to handle this question (because floats are always rounded to their defined precision regardless), there are situations where you'd like to round a float to a given decimal precision (plus whatever float imprecision gets added on), and I couldn't find a good answer. Here's what I came up with; I believe it handles all the possible cases: input of zero, input < 1, and input > 1, for both positive and negative numbers:
def precision_round(number, digits=3):
    power = "{:e}".format(number).split('e')[1]
    return round(number, -(int(power) - digits))
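For example, it keeps digits digits after the leading significant digit regardless of magnitude:
>>> precision_round(2.32432432423e25)
2.324e+25
>>> precision_round(0.00032432432423, 2)
0.000324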
Building on top of @Josh Duran's nice function/idea, here is the same function extended to handle up to 2-D arrays. Maybe someone can modify this for ndarrays.
import numpy as np

def precision_round(numbers, digits=3):
    '''
    Parameters:
    -----------
    numbers : scalar, 1D, or 2D array(-like)
    digits : number of digits after decimal point

    Returns:
    --------
    out : same shape as numbers
    '''
    numbers = np.asarray(np.atleast_2d(numbers))
    out_array = np.zeros(numbers.shape)  # the returning array

    for dim0 in range(numbers.shape[0]):
        powers = [int(F"{number:e}".split('e')[1]) for number in numbers[dim0, :]]
        out_array[dim0, :] = [round(number, -(int(power) - digits))
                              for number, power in zip(numbers[dim0, :], powers)]

    # returning the original shape of the `numbers`
    if out_array.shape[0] == 1 and out_array.shape[1] == 1:
        out_array = out_array[0, 0]
    elif out_array.shape[0] == 1:
        out_array = out_array[0, :]

    return out_array

Python: How to convert a string of zeros and ones to binary [duplicate]

I'd simply like to convert a base-2 binary number string into an int, something like this:
>>> '11111111'.fromBinaryToInt()
255
Is there a way to do this in Python?
You use the built-in int() function, and pass it the base of the input number, i.e. 2 for a binary number:
>>> int('11111111', 2)
255
Here is documentation for Python 2, and for Python 3.
Just type 0b11111111 into the Python interactive interpreter:
>>> 0b11111111
255
Another way to do this is by using the bitstring module:
>>> from bitstring import BitArray
>>> b = BitArray(bin='11111111')
>>> b.uint
255
Note that the unsigned integer (uint) is different from the signed integer (int):
>>> b.int
-1
Your question is really asking for the unsigned integer representation; this is an important distinction.
The bitstring module isn't a requirement, but it has lots of performant methods for turning input into and from bits into other forms, as well as manipulating them.
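Going the other way, the same BitArray type can render an integer back as a bit string; if I recall the keyword-constructor form correctly, it looks like this:
>>> from bitstring import BitArray
>>> BitArray(uint=255, length=8).bin  # build from an unsigned int of a given width
'11111111'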
Using int with base is the right way to go. I used to do this before I found out that int takes a base too. It is basically a reduce applied on a list comprehension of the primitive way of converting binary to decimal (e.g. 110 = 2**0 * 0 + 2**1 * 1 + 2**2 * 1):
from functools import reduce  # needed on Python 3; reduce is a builtin in Python 2

binstr = '11111111'
add = lambda x, y: x + y
reduce(add, [int(x) * 2 ** y for x, y in zip(list(binstr), range(len(binstr) - 1, -1, -1))])
If you wanna know what is happening behind the scene, then here you go.
class Binary():
    def __init__(self, binNumber):
        self._binNumber = binNumber
        self._binNumber = self._binNumber[::-1]
        self._binNumber = list(self._binNumber)
        self._x = [1]
        self._count = 1
        self._change = 2
        self._amount = 0
        print(self._ToNumber(self._binNumber))

    def _ToNumber(self, number):
        self._number = number
        for i in range(1, len(self._number)):
            self._total = self._count * self._change
            self._count = self._total
            self._x.append(self._count)
        self._deep = zip(self._number, self._x)
        for self._k, self._v in self._deep:
            if self._k == '1':
                self._amount += self._v
        return self._amount

mo = Binary('101111110')
Here's another concise way to do it not mentioned in any of the above answers:
>>> eval('0b' + '11111111')
255
Admittedly, it's probably not very fast, and it's a very very bad idea if the string is coming from something you don't have control over that could be malicious (such as user input), but for completeness' sake, it does work.
A recursive Python implementation:
def int2bin(n):
    return int2bin(n >> 1) + [n & 1] if n > 1 else [1]
If you are using Python 3.6 or later, you can use an f-string to do the conversion:
Binary to decimal:
>>> print(f'{0b1011010:#0}')
90
>>> bin_2_decimal = int(f'{0b1011010:#0}')
>>> bin_2_decimal
90
Binary to octal, hexadecimal, etc.:
>>> f'{0b1011010:#o}'
'0o132' # octal
>>> f'{0b1011010:#x}'
'0x5a' # hexadecimal
>>> f'{0b1011010:#0}'
'90' # decimal
Pay attention to the two pieces of information separated by the colon.
In this way, you can convert between {binary, octal, hexadecimal, decimal} by changing the format spec on the right side of the colon [:]:
:#b -> converts to binary
:#o -> converts to octal
:#x -> converts to hexadecimal
:#0 -> converts to decimal as above example
Try changing the left side of the colon to an octal/hexadecimal/decimal literal.
For a large matrix (10**5 rows and up) it is better to use vectorized matrix multiplication. Pass in all rows and cols in one shot. It is extremely fast; there is no looping in Python here. I originally designed it for converting many binary 0/1 columns, like the 10-or-so genre columns in MovieLens, into a single integer for each example row.
import numpy as np

def BitsToIntAFast(bits):
    m, n = bits.shape
    a = 2**np.arange(n)[::-1]  # [::-1] reverses the array of powers of 2, same length as a row of bits
    return bits @ a
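For example, with each row holding the bits of one number, most significant bit first (np as imported above):
bits = np.array([[1, 1, 1, 1, 1, 1, 1, 1],
                 [0, 0, 0, 0, 0, 1, 0, 1]])
print(BitsToIntAFast(bits))  # [255   5]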
For the record, to go back and forth in basic Python 3:
a = 10
bin(a)
# '0b1010'
int(bin(a), 2)
# 10
eval(bin(a))
# 10
