How do I get the string containing the binary IEEE 754 representation of a 32-bit float?
Example
1.00 -> '00111111100000000000000000000000'
You can do that with the struct module:

import struct

def binary(num):
    return ''.join('{:0>8b}'.format(c) for c in struct.pack('!f', num))

That packs the value as a network byte-ordered (big-endian) float, then converts each of the resulting bytes into its 8-bit binary representation and concatenates them; on Python 3 iterating over the packed bytes yields integers directly, while on Python 2 you would need ord(c) first:
>>> binary(1)
'00111111100000000000000000000000'
Edit:
There was a request to expand the explanation, so here it is again using intermediate variables, with a comment for each step. (This walkthrough uses Python 2, where struct.pack returns a str of characters; on Python 3 you iterate over ints directly, as in the one-liner above.)
def binary(num):
    # Struct can provide us with the float packed into bytes. The '!' ensures that
    # it's in network byte order (big-endian) and the 'f' says that it should be
    # packed as a float. Alternatively, for double-precision, you could use 'd'.
    packed = struct.pack('!f', num)
    print 'Packed: %s' % repr(packed)

    # For each character in the returned string, we'll turn it into its corresponding
    # integer code point
    #
    # [62, 163, 215, 10] = [ord(c) for c in '>\xa3\xd7\n']
    integers = [ord(c) for c in packed]
    print 'Integers: %s' % integers

    # For each integer, we'll convert it to its binary representation.
    binaries = [bin(i) for i in integers]
    print 'Binaries: %s' % binaries

    # Now strip off the '0b' from each of these
    stripped_binaries = [s.replace('0b', '') for s in binaries]
    print 'Stripped: %s' % stripped_binaries

    # Pad each byte's binary representation with 0s to make sure it has all 8 bits:
    #
    # ['00111110', '10100011', '11010111', '00001010']
    padded = [s.rjust(8, '0') for s in stripped_binaries]
    print 'Padded: %s' % padded

    # At this point, we have each of the bytes for the network byte ordered float
    # in an array as binary strings. Now we just concatenate them to get the total
    # representation of the float:
    return ''.join(padded)
And the result for a few examples:
>>> binary(1)
Packed: '?\x80\x00\x00'
Integers: [63, 128, 0, 0]
Binaries: ['0b111111', '0b10000000', '0b0', '0b0']
Stripped: ['111111', '10000000', '0', '0']
Padded: ['00111111', '10000000', '00000000', '00000000']
'00111111100000000000000000000000'
>>> binary(0.32)
Packed: '>\xa3\xd7\n'
Integers: [62, 163, 215, 10]
Binaries: ['0b111110', '0b10100011', '0b11010111', '0b1010']
Stripped: ['111110', '10100011', '11010111', '1010']
Padded: ['00111110', '10100011', '11010111', '00001010']
'00111110101000111101011100001010'
Here's an ugly one ...
>>> import struct
>>> bin(struct.unpack('!i',struct.pack('!f',1.0))[0])
'0b111111100000000000000000000000'
Basically, I just used the struct module to convert the float to an int ...
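If you want the full 32 characters with leading zeros (and no minus sign for negative inputs), a small tweak on the same idea is to unpack as an unsigned int and zero-pad when formatting, e.g.:

>>> import struct
>>> format(struct.unpack('!I', struct.pack('!f', 1.0))[0], '032b')
'00111111100000000000000000000000'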
Here's a slightly better one using ctypes:
>>> import ctypes
>>> bin(ctypes.c_uint32.from_buffer(ctypes.c_float(1.0)).value)
'0b111111100000000000000000000000'
Basically, I construct a float and use the same memory location, but I tag it as a c_uint32. The c_uint32's value is a python integer which you can use the builtin bin function on.
Note: by switching the types we can do the reverse operation as well
>>> ctypes.c_float.from_buffer(ctypes.c_uint32(int('0b111111100000000000000000000000', 2))).value
1.0
For a double-precision 64-bit float, the same trick works with ctypes.c_double and ctypes.c_uint64 instead.
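For instance, a quick sketch of that 64-bit variant, zero-padded to the full 64 bits for readability:

>>> import ctypes
>>> '{:064b}'.format(ctypes.c_uint64.from_buffer(ctypes.c_double(1.0)).value)
'0011111111110000000000000000000000000000000000000000000000000000'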
Found another solution using the bitstring module.
import bitstring
f1 = bitstring.BitArray(float=1.0, length=32)
print(f1.bin)
Output:
00111111100000000000000000000000
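If I read the bitstring API correctly, the same constructor with length=64 should give the double-precision pattern; a hedged sketch:

import bitstring

f2 = bitstring.BitArray(float=1.0, length=64)  # binary64 instead of binary32
print(f2.bin)  # expected: '001111111111' followed by 52 zeros for 1.0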
For the sake of completeness, you can achieve this with numpy using:

import numpy as np

f = 1.00
int32bits = np.asarray(f, dtype=np.float32).view(np.int32).item()  # item() optional

You can then print this, with padding, using the b format specifier:

print('{:032b}'.format(int32bits))
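The same idea should carry over to double precision by swapping in the 64-bit dtypes; a sketch under that assumption:

import numpy as np

f = 1.00
int64bits = np.asarray(f, dtype=np.float64).view(np.int64).item()
print('{:064b}'.format(int64bits))  # for 1.0: '001111111111' followed by 52 zeros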
With these two simple functions (Python >=3.6) you can easily convert a float number to binary and vice versa, for IEEE 754 binary64.
import struct

def bin2float(b):
    ''' Convert binary string to a float.

    Attributes:
        :b: Binary string to transform.
    '''
    h = int(b, 2).to_bytes(8, byteorder="big")
    return struct.unpack('>d', h)[0]


def float2bin(f):
    ''' Convert float to 64-bit binary string.

    Attributes:
        :f: Float number to transform.
    '''
    [d] = struct.unpack(">Q", struct.pack(">d", f))
    return f'{d:064b}'
For example:
print(float2bin(1.618033988749894))
print(float2bin(3.14159265359))
print(float2bin(5.125))
print(float2bin(13.80))
print(bin2float('0011111111111001111000110111011110011011100101111111010010100100'))
print(bin2float('0100000000001001001000011111101101010100010001000010111011101010'))
print(bin2float('0100000000010100100000000000000000000000000000000000000000000000'))
print(bin2float('0100000000101011100110011001100110011001100110011001100110011010'))
The output is:
0011111111111001111000110111011110011011100101111111010010100100
0100000000001001001000011111101101010100010001000010111011101010
0100000000010100100000000000000000000000000000000000000000000000
0100000000101011100110011001100110011001100110011001100110011010
1.618033988749894
3.14159265359
5.125
13.8
I hope you like it; it works perfectly for me.
This problem is more cleanly handled by breaking it into two parts.
The first is to convert the float into an int with the equivalent bit pattern:
import struct

def float32_bit_pattern(value):
    return sum(ord(b) << 8*i for i, b in enumerate(struct.pack('f', value)))

Python 3 doesn't require ord to convert the bytes to integers, so you can simplify the above a little:

def float32_bit_pattern(value):
    return sum(b << 8*i for i, b in enumerate(struct.pack('f', value)))

Next convert the int to a string:

def int_to_binary(value, bits):
    return bin(value).replace('0b', '').rjust(bits, '0')
Now combine them:
>>> int_to_binary(float32_bit_pattern(1.0), 32)
'00111111100000000000000000000000'
Piggy-backing on Dan's answer, here is a colored version for Python 3:
import struct

BLUE = "\033[1;34m"
CYAN = "\033[1;36m"
GREEN = "\033[0;32m"
RESET = "\033[0;0m"

def binary(num):
    return [bin(c).replace('0b', '').rjust(8, '0') for c in struct.pack('!f', num)]

def binary_str(num):
    bits = ''.join(binary(num))
    # fp32 layout: 1 sign bit, 8 exponent bits, 23 mantissa bits
    return ''.join([BLUE, bits[:1], GREEN, bits[1:9], CYAN, bits[9:], RESET])

def binary_str_fp16(num):
    bits = ''.join(binary(num))
    return ''.join([BLUE, bits[:1], GREEN, bits[1:10][-5:], CYAN, bits[10:][:11], RESET])
x = 0.7
print(x, "as fp32:", binary_str(0.7), "as fp16 is sort of:", binary_str_fp16(0.7))
After browsing through lots of similar questions I've written something which hopefully does what I wanted.
import struct

f = 1.00

negative = False
if f < 0:
    f = f * -1
    negative = True

s = struct.pack('>f', f)
p = struct.unpack('>l', s)[0]
hex_data = hex(p)

scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]
binrep is the result.
Each part will be explained.
f = 1.00

negative = False
if f < 0:
    f = f * -1
    negative = True

If the number is negative, this converts it to positive and sets the flag negative to True. The reason is that the difference between the positive and negative binary representations is just the first bit, and this was simpler than figuring out what goes wrong when running the whole process on negative numbers.
s = struct.pack('>f', f)        # '?\x80\x00\x00'
p = struct.unpack('>l', s)[0]   # 1065353216
hex_data = hex(p)               # '0x3f800000'

s holds the raw bytes of f, but not in the form I need. That's where p comes in: it is the integer with the same bit pattern, obtained by unpacking those bytes as a big-endian long. hex_data is then that integer as a readable hex string.
scale = 16
num_of_bits = 32
binrep = bin(int(hex_data, scale))[2:].zfill(num_of_bits)
if negative:
    binrep = '1' + binrep[1:]

scale is the base (16) of the hex string. num_of_bits is 32, since a float is 32 bits; it is used to zero-fill the result up to 32 places. Got the code for binrep from this question. If the number was negative, just set the first bit to 1.

I know this is ugly, but I didn't find a nicer way and I needed it fast. Comments are welcome.
This is a little more than was asked, but it was what I needed when I found this entry. This code will give the mantissa, exponent (called base here) and sign of an IEEE 754 32-bit float.
import ctypes

def binRep(num):
    binNum = bin(ctypes.c_uint.from_buffer(ctypes.c_float(num)).value)[2:]
    print("bits: " + binNum.rjust(32, "0"))
    mantissa = "1" + binNum[-23:]
    print("sig (bin): " + mantissa.rjust(24))
    mantInt = int(mantissa, 2) / 2**23
    print("sig (float): " + str(mantInt))
    base = int(binNum[-31:-23], 2) - 127
    print("base:" + str(base))
    sign = 1 - 2 * ("1" == binNum[-32:-31].rjust(1, "0"))
    print("sign:" + str(sign))
    print("recreate:" + str(sign * mantInt * (2**base)))
binRep(-0.75)
output:
bits: 10111111010000000000000000000000
sig (bin): 110000000000000000000000
sig (float): 1.5
base:-1
sign:-1
recreate:-0.75
Convert a float between 0 and 1:
def float_bin(n, places=3):
    if (n < 0 or n > 1):
        return "ERROR, n must be in 0..1"

    answer = "0."
    while n > 0:
        if len(answer) - 2 == places:
            return answer

        b = n * 2
        if b >= 1:
            answer += '1'
            n = b - 1
        else:
            answer += '0'
            n = b

    return answer
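For example, with the function above (note this gives the plain fractional binary expansion, not the IEEE 754 fields):

>>> float_bin(0.625, places=3)
'0.101'
>>> float_bin(0.1, places=5)
'0.00011'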
Several of these answers did not work as written with Python 3, or did not give the correct representation for negative floating point numbers. I found the following to work for me (though this gives the 64-bit representation, which is what I needed):
import struct

def float_to_binary_string(f):
    def int_to_8bit_binary_string(n):
        stg = bin(n).replace('0b', '')
        fillstg = '0' * (8 - len(stg))
        return fillstg + stg
    return ''.join(int_to_8bit_binary_string(int(b)) for b in struct.pack('>d', f))
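For example, 1.0 should come out as the standard binary64 pattern (sign 0, exponent 01111111111, zero mantissa):

>>> float_to_binary_string(1.0)
'0011111111110000000000000000000000000000000000000000000000000000'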
I made a very simple one. Please check it, and if you think there is any mistake please let me know. It works fine for me.
sds = float(input("Enter the number : "))
sf = float("0." + (str(sds).split(".")[-1]))
aa = []

while len(aa) < 15:
    dd = round(sf * 2, 5)
    if dd - 1 > 0:
        aa.append(1)
        sf = dd - 1
    else:
        sf = round(dd, 5)
        aa.append(0)

des = aa[:-1]
print("\n")
AA = ([str(i) for i in des])
print("So the Binary Of : %s>>>" % sds, bin(int(str(sds).split(".")[0])).replace("0b", '') + "." + "".join(AA))
Or, in the case of an integer number, just use bin(integer).replace("0b", '').
Let's use numpy!
import numpy as np

def binary(num, string=True):
    bits = np.unpackbits(np.array([num]).view('u1'))
    if string:
        return np.array2string(bits, separator='')[1:-1]
    else:
        return bits
For example (note that this shows the bytes in native memory order, so on a little-endian machine the least-significant byte comes first, unlike the big-endian output of the struct-based answers):
binary(np.pi)
# '0001100000101101010001000101010011111011001000010000100101000000'
binary(np.pi, string=False)
# array([0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0, 0, 1,
# 0, 0, 0, 1, 0, 1, 0, 1, 0, 0, 1, 1, 1, 1, 1, 0, 1, 1, 0, 0, 1, 0,
# 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0],
# dtype=uint8)
You can use .format for the easiest representation of bits, in my opinion. My code would look something like:
def fto32b(flt):
    # is given a 32 bit float value and converts it to a binary string
    if isinstance(flt, float):
        # THE FOLLOWING IS AN EXPANDED REPRESENTATION OF THE ONE LINE RETURN
        #   packed = struct.pack('!f', flt)          <- get the bytes in (!) big-endian format of a (f) float
        #   integers = []
        #   for c in packed:
        #       integers.append(ord(c))              <- change each entry into an int
        #   binaries = []
        #   for i in integers:
        #       binaries.append("{0:08b}".format(i)) <- get the 8-bit binary representation of each int (00100101)
        #   binarystring = ''.join(binaries)         <- join all the bytes together
        #   return binarystring
        return ''.join(["{0:08b}".format(i) for i in [ord(c) for c in struct.pack('!f', flt)]])
    return None
Output:
>>> a = 5.0
>>> fto32b(a)
'01000000101000000000000000000000'
>>> b = 1.0
>>> fto32b(b)
'00111111100000000000000000000000'
Related
Given the letter(s) of an Excel column header I need to output the column number.
It goes A-Z, then AA-AZ then BA-BZ and so on.
I want to go through it like it's base 26, I just don't know how to implement that.
It works fine for simple ones like AA, because 26^0 + 26^1 = 1 + 26 = 27.
But with something like ZA, if I do 26^26 (Z is the 26th letter) the output is obviously too large. What am I missing?
If we decode "A" as 0, "B" as 1, ... then "Z" is 25 and "AA" is 26.
So it is not a pure 26-base encoding, as then a prefixed "A" would have no influence on the value, and "AAAB" would have to be the same as "B", just like in the decimal system 0001 is equal to 1. But this is not the case here.
The value of "AA" is 1*261 + 0, and "ZA" is 26*261 + 0.
We can generalise and say that "A" should be valued 1, "B" 2, ...etc (with the exception of a single letter encoding). So in "AAA", the right most "A" represents a coefficient of 0, while the other "A"s represent ones: 1*262 + 1*261 + 0
This leads to the following code:
def decode(code):
    val = 0
    for ch in code:  # base-26 decoding "plus 1"
        val = val * 26 + ord(ch) - ord("A") + 1
    return val - 1
Of course, if we want the column numbers to start with 1 instead of 0, then just replace that final statement with:
return val
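With the 0-based version above, a few spot checks:

>>> decode("A")
0
>>> decode("Z")
25
>>> decode("AA")
26
>>> decode("ZA")
676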
sum of powers
You can sum the multiples of the powers of 26:
def xl2int(s):
    s = s.strip().upper()
    return sum((ord(c) - ord('A') + 1) * 26**i
               for i, c in enumerate(reversed(s)))
xl2int('A')
# 1
xl2int('Z')
# 26
xl2int('AA')
# 27
xl2int('ZZ')
# 702
xl2int('AAA')
# 703
int builtin
You can use a string translation table and the int builtin with the base parameter.
As this is not a pure positional base-26 encoding, you need to add 26**(n-1) + 26**(n-2) + ... + 26**0 for an input of length n, which you can obtain with int('11...1', base=26) where there are as many 1s as the length of the input string.
from string import ascii_uppercase, digits

t = str.maketrans(dict(zip(ascii_uppercase, digits + ascii_uppercase)))

def xl2int(s):
    s = s.strip().upper().translate(t)
    return int(s, base=26) + int('1' * len(s), base=26)
xl2int('A')
# 1
xl2int('Z')
# 26
xl2int('AA')
# 27
xl2int('ZZ')
# 702
xl2int('AAA')
# 703
How the translation works
It shifts each character so that A -> 0, B -> 1, ... J -> 9, K -> A, ... Z -> P. Then the result is converted to an integer using int with base 26. However, the number obtained that way is too small, as we are missing one unit of 26**x for each digit position x, so we add as many powers of 26 as there are digits in the input.
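To make the two steps concrete, here is the intermediate data for 'ZZ' (matching the xl2int('ZZ') result above):

>>> 'ZZ'.translate(t)
'PP'
>>> int('PP', base=26)   # 25*26 + 25
675
>>> int('11', base=26)   # the correction: 26**1 + 26**0
27
>>> 675 + 27
702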
Another way to do it, written in VBA:
Function nColumn(sColumn As String) As Integer
' Return column number for a given column letter.
' 676 = 26^2
' 64 = Asc("A") - 1
nColumn = _
(IIf(Len(sColumn) < 3, 0, Asc(Left( sColumn , 1)) - 64) * 676) + _
(IIf(Len(sColumn) = 1, 0, Asc(Left(Right(sColumn, 2), 1)) - 64) * 26) + _
(Asc( Right(sColumn , 1)) - 64)
End Function
Or you can do it directly in the worksheet:
=(if(len(<clm>) < 3, 0, code(left( <clm> , 1)) - 64) * 676) +
(if(len(<clm>) = 1, 0, code(left(right(<clm>, 2), 1)) - 64) * 26) +
(code( right(<clm> , 1)) - 64)
I've also posted the inverse operation done similarly.
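For reference, a minimal Python sketch of that inverse mapping (column number back to letters, assuming 1-based numbering as in the answers above):

def int2xl(n):
    # Hypothetical helper: convert a 1-based column number to letters,
    # e.g. 1 -> 'A', 26 -> 'Z', 27 -> 'AA', 703 -> 'AAA'.
    letters = ''
    while n > 0:
        n, r = divmod(n - 1, 26)  # bijective base 26: shift into the 0..25 range first
        letters = chr(ord('A') + r) + letters
    return letters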
Problem: take a number, for example 37, which is 100101 in binary.
Count the binary 1s, build a binary number made of that many 1s (here 111), and print its decimal value (7).
num = bin(int(input()))
st = str(num)
count = 0
for i in st:
    if i == "1":
        count += 1

del st
vt = ""
for i in range(count):
    vt = vt + "1"

vt = int(vt)
print(vt)
I am a newbie and stuck here.
I wouldn't recommend your approach, but to show where you went wrong:
num = bin(int(input()))
st = str(num)
count = 0
for i in st:
    if i == "1":
        count += 1

del st
# start the string representation of the binary value correctly
vt = "0b"
for i in range(count):
    vt = vt + "1"

# tell the `int()` function that it should consider the string as a binary number (base 2)
vt = int(vt, 2)
print(vt)
Note that the code below does the exact same thing as yours, but a bit more concisely:
ones = bin(int(input())).count('1')
vt = int('0b' + '1' * ones, 2)
print(vt)
It uses the standard string method count() to get the number of ones into ones, and it uses Python's ability to repeat a string a number of times with the multiplication operator *.
Try this once you have the required binary.
def binaryToDecimal(binary):
    decimal, i = 0, 0
    while binary != 0:
        dec = binary % 10
        decimal = decimal + dec * pow(2, i)
        binary = binary // 10
        i += 1
    print(decimal)
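For example, feeding it the 111 built above:

>>> binaryToDecimal(111)
7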
In one line:
print(int(format(int(input()), 'b').count('1') * '1', 2))
Let's break it down, inside out:
format(int(input()), 'b')
This built-in function takes an integer number from the input, and returns a formatted string according to the Format Specification Mini-Language. In this case, the argument 'b' gives us a binary format.
Then, we have
.count('1')
This str method returns the total number of occurrences of '1' in the string returned by the format function.
In Python, you can multiply a string times a number to get the same string repeatedly concatenated n times:
x = 'a' * 3
print(x) # prints 'aaa'
Thus, if we take the number returned by the count method and multiply the string '1' by it, we get a string containing exactly as many ones as our original input has in its binary form. Now we can interpret that string as a base-2 number, like this:
int(number_string, 2)
So, we have
int(format(int(input()), 'b').count('1') * '1', 2)
Finally, let's print the whole thing:
print(int(format(int(input()), 'b').count('1') * '1', 2))
I am new to Python. In Perl, to set specific bits to a scalar variable(integer), I can use vec() as below.
#!/usr/bin/perl -w
$vec = '';
vec($vec, 3, 4) = 1; # bits 0 to 3
vec($vec, 7, 4) = 10; # bits 4 to 7
vec($vec, 11, 4) = 3; # bits 8 to 11
vec($vec, 15, 4) = 15; # bits 12 to 15
print("vec() Has a created a string of nybbles,
in hex: ", unpack("h*", $vec), "\n");
Output:
vec() Has a created a string of nybbles,
in hex: 0001000a0003000f
I was wondering how to achieve the same in Python, without having to write bit manipulation code and using struct.pack manually?
Not sure how the vec function works in Perl (I haven't worked with it). However, according to the output you mentioned, the following Python code works. I do not see the significance of the second argument. Call the vec function this way: vec(value, size). Every time you do so, the output string is appended to the global final_vec variable.
final_vec = ''

def vec(value, size):
    global final_vec
    prefix = ''
    str_hex = str(hex(value)).replace('0x', '')
    str_hex_size = len(str_hex)
    for i in range(0, size - str_hex_size):
        prefix = prefix + '0'
    str_hex = prefix + str_hex
    final_vec = final_vec + str_hex
    return 0
vec(1, 4)
vec(10, 4)
vec(3, 4)
vec(15, 4)
print(final_vec)
If you really want to create a hex string from nibbles, you could solve it this way:

nibbles = [1, 10, 3, 15]
hex_str = '0x' + "".join(["%04x" % x for x in nibbles])  # '0x0001000a0003000f'
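If you would rather keep one integer value (closer in spirit to Perl's packed scalar) and only format it at the end, here is a minimal sketch; the 16-bit field width is an assumption taken from the 4-hex-digit groups in the expected output:

nibbles = [1, 10, 3, 15]

value = 0
for n in nibbles:
    value = (value << 16) | n  # make room for a 16-bit field, then drop n into it

print('0x%016x' % value)       # 0x0001000a0003000f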
I need to do some decimal place formatting in python. Preferably, the floating point value should always show at least a starting 0 and one decimal place. Example:
Input: 0
Output: 0.0
Values with more decimal places should continue to show them, until it gets 4 out. So:
Input: 65.53
Output: 65.53
Input: 40.355435
Output: 40.3554
I know that I can use "{:.4f}" to get it to print out to four decimal places, but it will pad with unwanted 0s. Is there a formatting code to tell it to print out up to a certain number of decimals, but to leave them blank if there is no data? I believe C# accomplishes this with something like:
floatValue.ToString("0.0###")
Where the # symbols represent a place that can be left blank.
What you're asking for should be addressed by rounding methods like the built-in round function. Then let the float number be naturally displayed with its string representation.
>>> round(65.53, 4)      # number of decimals <= precision, unchanged
65.53
>>> round(40.355435, 4)  # number of decimals > precision, rounded
40.3554
>>> round(float(0), 4)   # ensure a float so that 0 displays as 0.0
0.0
Sorry, the best I can do:

' {:0.4f}'.format(1./2.).rstrip('0')

Corrected, so that whole numbers keep one digit after the point:

ff = 1./2.
' {:0.4f}'.format(ff).rstrip('0') + '0'[0:(ff % 1 == 0)]
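A slightly more readable variant of the same idea, sketched as a small helper (the name trim and the default of 4 places are just illustrative):

def trim(value, places=4):
    # Format to a fixed number of places, strip trailing zeros,
    # then restore a single zero if everything after the point was stripped.
    s = '{:.{p}f}'.format(value, p=places).rstrip('0')
    return s + '0' if s.endswith('.') else s

print(trim(0))           # 0.0
print(trim(65.53))       # 65.53
print(trim(40.355435))   # 40.3554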
From trial and error I think :.15g is what you want:
In: f"{3/4:.15g}"
Out: '0.75'
In f"{355/113:.15g}"
Out: '3.14159292035398'
(while f"{3/4:.15f}" == '0.750000000000000')
>>> def pad(float, front = 0, end = 4):
...     s = '%%%s.%sf' % (front, end) % float
...     i = len(s)
...     while i > 0 and s[i - 1] == '0':
...         i -= 1
...     if s[i - 1] == '.' and len(s) > i:
...         i += 1  # for 0.0
...     return s[:i] + ' ' * (len(s) - i)
...
>>> pad(0, 3, 4)
'0.0 '
>>> pad(65.53, 3, 4)
'65.53 '
>>> pad(40.355435, 3, 4)
'40.3554'
How do I convert a hex string to a signed int in Python 3?
The best I can come up with is
h = '9DA92DAB'
b = bytes(h, 'utf-8')
ba = binascii.a2b_hex(b)
print(int.from_bytes(ba, byteorder='big', signed=True))
Is there a simpler way? Unsigned is so much easier: int(h, 16)
BTW, the origin of the question is itunes persistent id - music library xml version and iTunes hex version
In n-bit two's complement, bits have value:

bit 0 = 2^0
bit 1 = 2^1
...
bit n-2 = 2^(n-2)
bit n-1 = -2^(n-1)

But bit n-1 has value 2^(n-1) when unsigned, so the number is 2^n too high. Subtract 2^n if bit n-1 is set:
def twos_complement(hexstr, bits):
    value = int(hexstr, 16)
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value
print(twos_complement('FFFE', 16))
print(twos_complement('7FFF', 16))
print(twos_complement('7F', 8))
print(twos_complement('FF', 8))
Output:
-2
32767
127
-1
import struct
For Python 3 (with comments' help):
h = '9DA92DAB'
struct.unpack('>i', bytes.fromhex(h))
For Python 2:
h = '9DA92DAB'
struct.unpack('>i', h.decode('hex'))
or if it is little endian:
h = '9DA92DAB'
struct.unpack('<i', h.decode('hex'))
Here's a general function you can use for hex of any size:
import math

# hex string to signed integer
def htosi(val):
    uintval = int(val, 16)
    bits = 4 * (len(val) - 2)
    if uintval >= math.pow(2, bits - 1):
        uintval = int(0 - (math.pow(2, bits) - uintval))
    return uintval
And to use it:
h = str(hex(-5))
h2 = str(hex(-13589))
x = htosi(h)
x2 = htosi(h2)
This works for 16-bit signed ints; you can extend it for 32-bit ints. It uses the basic definition of 2's complement signed numbers. Also note that XOR with a mask of all 1s flips every bit, i.e. a binary negation.
# convert to unsigned
x = int('ffbf', 16)  # example (-65)
# check sign bit
if (x & 0x8000) == 0x8000:
    # if set, invert and add one to get the negative value, then add the negative sign
    x = -((x ^ 0xffff) + 1)
It's a very late answer, but here's a function to do the above. This will extend for whatever length you provide. Credit for portions of this to another SO answer (I lost the link, so please provide it if you find it).
def hex_to_signed(source):
    """Convert a string hex value to a signed integer.

    This assumes that source is the proper length, and the sign bit
    is the first bit in the first byte of the correct length.

    hex_to_signed("F") should return -1.
    hex_to_signed("0F") should return 15.
    """
    if not isinstance(source, str):
        raise ValueError("string type required")
    if 0 == len(source):
        raise ValueError("string is empty")

    sign_bit_mask = 1 << (len(source) * 4 - 1)
    other_bits_mask = sign_bit_mask - 1
    value = int(source, 16)
    return -(value & sign_bit_mask) | (value & other_bits_mask)
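A few spot checks of the function above (consistent with its docstring and the twos_complement examples earlier):

>>> hex_to_signed("F")
-1
>>> hex_to_signed("0F")
15
>>> hex_to_signed("FFFE")
-2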