I have a hex string, for instance: 0xb69958096aff3148
And I want to convert this to a signed integer like: -5289099489896877752
In Python, if I use the int() function on the above hex number, it returns a positive value, as shown below:
>>> int(0xb69958096aff3148)
13157644583812673864L
However, if I use the "Hex" to "Dec" feature on Windows Calculator, I get the value as: -5289099489896877752
And I need the above signed representation.
For 32-bit numbers, as I understand, we could do the following:
struct.unpack('>i', s)
How can I do it for 64-bit integers?
Thanks.
If you want to convert it to a 64-bit signed integer, you can still use struct: pack it as an unsigned integer ('Q'), then unpack it as signed ('q'):
>>> struct.unpack('<q', struct.pack('<Q', int('0xb69958096aff3148', 16)))
(-5289099489896877752,)
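Alternatively, on Python 3.2+ you can round-trip through int.to_bytes / int.from_bytes and skip struct entirely; a quick sketch:
>>> x = int('0xb69958096aff3148', 16)
>>> int.from_bytes(x.to_bytes(8, 'big'), 'big', signed=True)
-5289099489896877752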
I would recommend the bitstring package available through conda or pip.
from bitstring import BitArray
b = BitArray('0xb69958096aff3148')
b.int
# returns
-5289099489896877752
Want the unsigned int?
b.uint
# returns:
13157644583812673864
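If you start from an integer rather than a hex string, BitArray can also (as far as I can tell from the bitstring docs) be constructed from uint/length keyword arguments:
b = BitArray(uint=13157644583812673864, length=64)
b.int
# returns:
-5289099489896877752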
You could also do the 64-bit two's-complement conversion by hand, for example:
def signed_int(h):
    x = int(h, 16)
    if x > 0x7FFFFFFFFFFFFFFF:
        x -= 0x10000000000000000
    return x
print(signed_int('0xb69958096aff3148'))
Output
-5289099489896877752
I am using Python 3.6.8 and the Eclipse IDE. I'm trying to rearrange the hex value. My code snippet is as follows:
def RearrangeData(Data, dataType):
    Data = Data[6:] + Data[4:6] + Data[2:4] + Data[0:2]
    return Data
I'm calling this function as
#xxxxxxxx is a jumbled hex value and it is of type Float
RearrangeData(xxxxxxxx, Float)
In 3.6.8, the interpreter takes the input as an int and not as a string. How can this be fixed? And what can I use to decode a hex value, for example 0x3F800000, to its IEEE-754 equivalent value? In Python 2.7.5, I could use RearragedData.decode('hex'). What can I use in Python 3.6.8?
If you have a string with hexadecimal digits representing a floating point number in big-endian format, use a combination of struct and codecs to first unhexlify the hex string to binary, then unpack it into a float.
>>> import struct, codecs
>>> struct.unpack('>f', codecs.decode('3FA66666', 'hex'))
(1.2999999523162842,)
If you have an integer, 0x3FA66666 (i.e. 1067869798 in decimal), you can call hex() on it, strip the 0x prefix and do the above:
>>> v = 0x3FA66666
>>> v
1067869798
>>> v_hex = hex(v)[2:]
>>> v_hex
'3fa66666'
>>> struct.unpack('>f', codecs.decode(v_hex, 'hex'))
(1.2999999523162842,)
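On Python 3 you can also let bytes.fromhex do the unhexlify step instead of codecs:
>>> import struct
>>> struct.unpack('>f', bytes.fromhex('3FA66666'))
(1.2999999523162842,)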
Let's say I have integer x (-128 <= x <= 127). How to convert x to signed char?
I tried this, but it doesn't work:
>>> x = -1
>>> chr(x)
ValueError: chr() arg not in range(256)
Within Python, there aren't really any distinctions between signed, unsigned, char, int, long, etc. However, if your aim is to serialize data for use with a C module, function, or whatever, you can use the struct module to pack and unpack data. b, for example, packs data as a signed char type.
>>> import struct
>>> struct.pack('bbb', 1, -2, 4)
b'\x01\xfe\x04'
If you're directly talking to a C function, use ctypes types as Ashwin suggests; when you pass them Python translates them as appropriate and piles them on the stack.
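For instance, a small sketch of the ctypes route (ctypes.c_byte is ctypes' signed char type):
>>> import ctypes
>>> c = ctypes.c_byte(-1)
>>> c.value
-1
>>> bytes(c)
b'\xff'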
It does not work because x is not in range(256), i.e. between 0 and 255.
chr() must be given an int in that range.
In Python 3, chr() does not convert an int into a signed char (which Python doesn't have anyway). chr() returns the one-character Unicode string whose code point is the given int.
>>> chr(77)
'M'
>>> chr(690)
'ʲ'
The input can be written in any supported base, not just decimal:
>>> chr(0xBAFF)
'뫿'
Note: In Python 2, the int must be between 0 and 255, and will output a standard (non-Unicode) string. unichr() in Py2 behaves as above.
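If what you actually want is the byte a signed char would occupy, int.to_bytes with signed=True (Python 3.2+) produces it directly, and int.from_bytes reverses it:
>>> (-1).to_bytes(1, 'big', signed=True)
b'\xff'
>>> int.from_bytes(b'\xff', 'big', signed=True)
-1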
Is there a function in C++ or Python that is equivalent to MATLAB's num2hex function?
Python's float.hex() method and the built-in hex() function do not give the same result as MATLAB's num2hex function.
MATLAB
% single
>> num2hex(single(1))
ans =
3f800000
% double
>> num2hex(1)
ans =
3ff0000000000000
Python
>>> import struct
>>> x = 1.0
# 32-bit float
>>> hex(struct.unpack('!l', struct.pack('!f',x))[0])
'0x3f800000'
# 64-bit float
>>> hex(struct.unpack('!q', struct.pack('!d',x))[0])
'0x3ff0000000000000L'
Unfortunately, this does not work for negative numbers (for either the 32-bit or the 64-bit float), since a minus sign appears in the hex output after the signed unpacking. By changing 'l' to uppercase 'L' and 'q' to uppercase 'Q', unsigned representations are used instead.
Furthermore, the hexadecimal numbers are displayed in big-endian format. In order to have little-endian format use '<' instead of '!'.
# 32-bit float
>>> hex(struct.unpack('<L', struct.pack('<f', x))[0])
# 64-bit float
>>> hex(struct.unpack('<Q', struct.pack('<d', x))[0])
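Putting the pieces together, here is a minimal num2hex-style sketch (the helper names are mine) that formats the unsigned unpack result with zero padding, so negative inputs work too; on Python 3.5+, struct.pack('>f', x).hex() is an even shorter equivalent.
import struct

def num2hex_single(x):
    # pack as big-endian float32, reinterpret as unsigned, pad to 8 hex digits
    return format(struct.unpack('>I', struct.pack('>f', x))[0], '08x')

def num2hex_double(x):
    # same idea for float64, padded to 16 hex digits
    return format(struct.unpack('>Q', struct.pack('>d', x))[0], '016x')

print(num2hex_single(1.0))    # 3f800000
print(num2hex_double(1.0))    # 3ff0000000000000
print(num2hex_single(-2.5))   # c0200000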
Is there a difference between '0x604f' and b'\x60\x4f'?
1. How do I change '0x604f' into b'\x60\x4f' in Python?
2. How do I change b'\x60\x4f' into '0x604f' in Python?
I am on Python 3.3.
You can use int(s, 0) to convert a string of the form 0x... into an integer. The argument 0 in place of an integer base instructs int to convert according to the prefix: 0x for base 16, 0o for base 8, 0b for base 2, and no prefix for base 10.
Then, you can use struct.pack (from the struct module) to pack the integer into a bytestring.
Demonstration:
>>> import struct
>>> struct.pack('>H', int('0x604f', 0))
b'`O'
>>> b'\x60\x4f'
b'`O'
And in reverse, use hex and struct.unpack:
>>> hex(struct.unpack('>H', b'\x60\x4f')[0])
'0x604f'
Note that the format >H means "big-endian, 16-bit number". You can use a different format string for a different number of bytes (B for an 8-bit number, I for a 32-bit number, Q for a 64-bit number).
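For example, the 32-bit round trip only swaps the format character:
>>> struct.pack('>I', int('0xdeadbeef', 0))
b'\xde\xad\xbe\xef'
>>> hex(struct.unpack('>I', b'\xde\xad\xbe\xef')[0])
'0xdeadbeef'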
Since Python 3.2 you can do it directly using int.to_bytes as:
>>> int('0x604f', 16).to_bytes(2, 'big')
b'`O'
and the other way around using int.from_bytes:
>>> int.from_bytes(b'\x60\x4f', 'big')
24655
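On Python 3.5+ there is also bytes.hex() for the reverse direction, so you can avoid struct entirely (note these strings carry no 0x prefix):
>>> bytes.fromhex('604f')
b'`O'
>>> '0x' + b'\x60\x4f'.hex()
'0x604f'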
Input = 'FFFF' # 4 ASCII F's
desired result ... -1 as an integer
code tried:
hexstring = 'FFFF'
result = (int(hexstring,16))
print result #65535
Result: 65535
Nothing that I have tried seems to recognize that 'FFFF' is a representation of a negative number.
Python converts 'FFFF' at face value, to decimal 65535:
input = 'FFFF'
val = int(input,16) # is 65535
You want it interpreted as a 16-bit signed number.
The code below takes the lower 16 bits of any number and 'sign-extends', i.e. interprets them as a 16-bit signed value and delivers the corresponding integer:
val16 = ((val + 0x8000) & 0xFFFF) - 0x8000
This is easily generalized:
def sxtn(x, bits):
    h = 1 << (bits - 1)
    m = (1 << bits) - 1
    return ((x + h) & m) - h
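A quick check of both the one-liner and the generalized helper:
>>> val = int('FFFF', 16)
>>> ((val + 0x8000) & 0xFFFF) - 0x8000
-1
>>> sxtn(0xFFFF, 16)
-1
>>> sxtn(0x7FFF, 16)
32767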
In a language like C, 'FFFF' can be interpreted as either a signed (-1) or unsigned (65535) value. You can use Python's struct module to force the interpretation that you're wanting.
Note that there may be endianness issues that the code below makes no attempt to deal with, and it doesn't handle data that's more than 16 bits long, so you'll need to adapt if either of those cases applies to you.
import struct
input = 'FFFF'
# first, convert to an integer. Python's going to treat it as an unsigned value.
unsignedVal = int(input, 16)
assert(65535 == unsignedVal)
# pack that value into a format that the struct module can work with, as an
# unsigned short integer
packed = struct.pack('H', unsignedVal)
assert(b'\xff\xff' == packed)
# ..then UNpack it as a signed short integer
signedVal = struct.unpack('h', packed)[0]
assert(-1 == signedVal)
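On Python 3, the same signed interpretation is available without struct, via int.from_bytes with signed=True; a short sketch:
>>> int.from_bytes(bytes.fromhex('FFFF'), byteorder='big', signed=True)
-1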