I have pairs like these: (-102, -56), (123, -56). The first value in each pair represents the lower 8 bits and the second the upper 8 bits; both are in signed decimal form. I need to convert each pair into a single 16-bit value.
I think I was able to convert the (-102, -56) pair with:
l = bin(-102 & 0b1111111111111111)[-8:]
u = bin(-56 & 0b1111111111111111)[-8:]
int(u+l,2)
But when I try to do the same with the (123, -56) pair I get the following error:
ValueError: invalid literal for int() with base 2: '11001000b1111011'.
I understand that it's due to the different lengths of the binary strings for different values, and that I need to pad them to 8 bits.
Am I approaching this completely wrong? What's the best way to do this so it works both on negative and positive values?
UPDATE:
I was able to solve this by:
low_int = 123
up_int = -56
(low_int & 0xFF) | ((up_int & 0xFF) << 8)
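Wrapped up as a helper (a sketch; treating the combined result as a signed 16-bit value is my assumption, since the inputs are signed bytes):

def pair_to_int16(low, high):
    # combine the two bytes into an unsigned 16-bit value
    combined = (low & 0xFF) | ((high & 0xFF) << 8)
    # reinterpret as signed 16-bit (assumed to be the desired result)
    return combined - 0x10000 if combined & 0x8000 else combined

pair_to_int16(-102, -56)  # -14182
pair_to_int16(123, -56)   # -14213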
You can shift the upper value left by 8 bits; try the logic described here: https://stackoverflow.com/a/1857965/8947333
Just guessing
l, u = -102 & 255, -56 & 255
# shift the upper byte 8 bits to the left, then add the lower byte;
# the parentheses matter: << binds more loosely than +
(u << 8) + l
Bitwise operations are fine, but not strictly required.
In the most common 2's complement representation for 8 bits:
-1 signed == 255 unsigned
-2 signed == 254 unsigned
...
-127 signed == 129 unsigned
-128 signed == 128 unsigned
Put simply, the two absolute values always sum to 256.
Use this to convert negative values:
if b < 0:
    b += 256
and then combine the high and low byte:
value = 256 * hi8 + lo8
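Put together as a function (a minimal sketch, assuming the pair order from the question, low byte first):

def combine(lo8, hi8):
    # map negative signed bytes to their unsigned equivalents
    if lo8 < 0:
        lo8 += 256
    if hi8 < 0:
        hi8 += 256
    return 256 * hi8 + lo8

combine(-102, -56)  # 51354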
Related
I'm a little confused by the ~ operator. My code is below:
a = 1
~a #-2
b = 15
~b #-16
How does ~ do work?
I thought ~a would be something like this:
0001 = a
1110 = ~a
Why isn't it?
You are exactly right. It's an artifact of two's complement integer representation.
In 16 bits, 1 is represented as 0000 0000 0000 0001. Inverted, you get 1111 1111 1111 1110, which is -2. Similarly, 15 is 0000 0000 0000 1111. Inverted, you get 1111 1111 1111 0000, which is -16.
In general, ~n = -n - 1
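A quick numeric check of that identity (my own snippet, not from the answer):

for n in (0, 1, 15, -2, -16, 123):
    assert ~n == -n - 1  # holds for every Python int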
The '~' operator is defined as:
"The bit-wise inversion of x is defined as -(x+1). It only applies to integral numbers."Python Doc - 5.5
The important part of this sentence is that it applies to 'integral numbers' (also called integers). Your example represents a 4-bit number.
'0001' = 1
The integer range of a 4-bit number is '-8..0..7'. On the other hand, you could use 'unsigned integers', which do not include negative numbers; the range for your 4-bit number would then be '0..15'.
Since Python operates on integers, the behavior you described is expected. Integers are represented using two's complement. In the case of a 4-bit number, this looks like the following.
7 = '0111'
0 = '0000'
-1 = '1111'
-8 = '1000'
Python 2 represents plain integers with the platform's C long, which is 32 bits on a 32-bit OS. You can check the largest one with:
import sys
sys.maxint # (2^31)-1 for my system
(In Python 3, integers are arbitrary-precision and sys.maxint no longer exists.)
If you would like an unsigned integer returned for your 4-bit number, you have to mask.
'0001' = a # unsigned '1' / integer '1'
'1110' = ~a # unsigned '14' / integer -2
(~a & 0xF) # returns 14
If you want an unsigned 8-bit range (0..255) instead, just use:
(~a & 0xFF) # returns 254
It looks like I found a simpler solution that does what is desired:
uint8: x ^ 0xFF
uint16: x ^ 0xFFFF
uint32: x ^ 0xFFFFFFFF
uint64: x ^ 0xFFFFFFFFFFFFFFFF
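A small sketch showing that, for an in-range value, the XOR form agrees with masking the inverted value:

a = 1
assert a ^ 0xFF == ~a & 0xFF      # both 254
assert a ^ 0xFFFF == ~a & 0xFFFF  # both 65534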
You could also use unsigned ints (for example from the numpy package) to achieve the expected behaviour.
>>> import numpy as np
>>> bin( ~ np.uint8(1))
'0b11111110'
The problem is that the number represented by the result of applying ~ is not well defined as it depends on the number of bits used to represent the original value. For instance:
5 = 101
~5 = 010 = 2
5 = 0101
~5 = 1010 = 10
5 = 00101
~5 = 11010 = 26
However, the two's complement of ~5 is the same in all cases:
two_complement(~101) = 2^3 - 2 = 6
two_complement(~0101) = 2^4 - 10 = 6
two_complement(~00101) = 2^5 - 26 = 6
And given that the two's complement is used to represent negative values, it makes sense to consider ~5 as the negative value, -6, of its complement.
So, more formally, to arrive at this result we have:
1. flipped the zeros and ones (that's equivalent to taking the ones' complement)
2. taken the two's complement
3. applied the negative sign
And if x is an n-digit number:
~x = - two_complement(one_complement(x)) = - two_complement(2^n - 1 - x) = - (2^n - (2^n - 1 - x)) = - (x + 1)
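The derivation can be checked numerically; here is a short sketch for 4-bit values (the helper names are mine):

def ones_complement(x, n):
    return (2**n - 1) - x  # flip all n bits

def twos_complement(x, n):
    return 2**n - x

x, n = 5, 4
assert -twos_complement(ones_complement(x, n), n) == ~x == -(x + 1)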
I'm working with Python and I would like to simulate the effect of a C/C++ cast on an integer value.
For example, if I have the unsigned number 234 on 8 bits, I would like a formula to convert it to -22 (signed cast), and another function to convert -22 to 234 (unsigned cast).
I know numpy already has functions to do this, but I would like to avoid it.
You could use bitwise operations based on a 2^(n-1) mask value (for the sign bit):
size = 8
sign = 1 << (size - 1)  # mask for the sign bit: 0x80
number = 234
signed = (number & (sign - 1)) - (number & sign)
unsigned = (signed & (sign - 1)) | (signed & sign)
print(signed) # -22
print(unsigned) # 234
You can easily create such a function yourself:
def toInt8(value):
    valueUint8 = value & 255
    if valueUint8 & 128:
        return valueUint8 - 256
    return valueUint8
>>> toInt8(234)
-22
You can make a version that accepts the number of bits as a parameter, rather than it being hardcoded to 8:
def toSignedInt(value, bits):
    valueUnsigned = value & (2**bits - 1)
    if valueUnsigned & 2**(bits - 1):
        return valueUnsigned - 2**bits
    return valueUnsigned
>>> toSignedInt(234, 8)
-22
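For the opposite direction (signed back to unsigned, the other half of the question), masking alone does the job; a sketch in the same style:

def toUnsignedInt(value, bits):
    # the low `bits` bits are exactly the unsigned
    # two's-complement representation of the value
    return value & (2**bits - 1)

>>> toUnsignedInt(-22, 8)
234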
I have this in hex: 08
Which is this in binary: 0000 1000 (bit positions: 7,6,5,4,3,2,1,0)
Now I would like to make a bitmask in Python so that I get bit position 3,
i.e. the bit shown in quotes here: 0000 "1"000.
What shall I do to get only this bit?
Thanks
Shift right by the bit index to have that bit in the 0th position, then AND with 1 to isolate it.
val = 0b01001000 # note the extra `1` to prove this works
pos = 3
bit = (val >> pos) & 1
print(bit)
outputs 1
you could just do this:
def get_bit(n, pos):
    return (n >> pos) & 1
res = get_bit(n=8, pos=3)
# 1
shift the number n right by pos bits (>> pos) and then mask away everything else (& 1).
the doc on Bitwise Operations on Integer Types may help.
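If you want the mask itself, keeping the bit in place (the 0000 "1"000 form from the question) rather than shifting it down to 0 or 1, you can AND with 1 << pos; a small sketch:

n, pos = 8, 3
mask = 1 << pos      # 0b1000
bit_in_place = n & mask
bin(bit_in_place)    # '0b1000'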
I am trying to test this implementation of the xtea algorithm in Python. The only testvectors I have found are these.
How can I test the output of the algorithm so that I can compare it bytewise?
Which password/key should I choose? Which endian would be best?
(I am on 64 bit xubuntu/x86/little endian)
XTEA
import struct

# wrapper signature assumed; the snippet in the question is only the function body
def xtea_encrypt(key, block, n=32, endian="!"):
    # 64-bit block of data to encrypt
    v0, v1 = struct.unpack(endian + "2L", block)
    # 128-bit key
    k = struct.unpack(endian + "4L", key)
    sum, delta, mask = 0, 0x9e3779b9, 0xffffffff
    for round in range(n):
        v0 = (v0 + (((v1 << 4 ^ v1 >> 5) + v1) ^ (sum + k[sum & 3]))) & mask
        sum = (sum + delta) & mask
        v1 = (v1 + (((v0 << 4 ^ v0 >> 5) + v0) ^ (sum + k[sum >> 11 & 3]))) & mask
    return struct.pack(endian + "2L", v0, v1)
Initial 64 bit test input
# pack 000000 in 64 bit string
byte_string = ''
for c in range(56, -8, -8):
    byte_string += chr(000000 >> c & 0xff)
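As a side note, struct can build the same 64-bit zero block in one line, which also keeps the endianness explicit and consistent with the cipher code (a sketch, assuming big-endian):

import struct
byte_string = struct.pack('>Q', 0)  # eight zero bytes, big-endian
# equivalently: b'\x00' * 8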
Testvectors (copied from here)
tean values
These are made by starting with a vector of 6 zeroes,
data followed by key, and coding with one cycle then
moving the six cyclically so that n becomes n-1 modulo 6.
We repeat with 2-64 cycles printing at powers of 2 in
hexadecimal. The process is reversed decoding back
to the original zeroes which are printed.
1 0 9e3779b9 0 0 0 0
2 ec01a1de aaa0256d 0 0 0 0
4 bc3a7de2 4e238eb9 0 0 ec01a1de 114f6d74
8 31c5fa6c 241756d6 bc3a7de2 845846cf 2794a127 6b8ea8b8
16 1d8e6992 9a478905 6a1d78c8 8c86d67 2a65bfbe b4bd6e46
32 d26428af a202283 27f917b1 c1da8993 60e2acaa a6eb923d
64 7a01cbc9 b03d6068 62ee209f 69b7afc 376a8936 cdc9e923
1 0 0 0 0 0 0
The C code you linked to seems to assume that a long has 32 bits -- XTEA uses a 64-bit block made of two uint32; the code uses a couple of long and doesn't do anything to handle the overflow which happens when you sum/leftshift (and propagates into later computations).
The Python code lets you choose endianness, while the C code treats those numbers as... well, numbers, so if you want to compare them you need to pick an endianness (or, if you're lazy, try both and see if one matches :)
Regarding the key, I'm not sure what your problem is, so I'll guess: in case you're not a C programmer, the line static long pz[1024], n, m; is a static declaration, meaning that all those values are implicitly initialized to zero.
Anything else I missed?
How do I convert a hex string to a signed int in Python 3?
The best I can come up with is
h = '9DA92DAB'
b = bytes(h, 'utf-8')
ba = binascii.a2b_hex(b)
print(int.from_bytes(ba, byteorder='big', signed=True))
Is there a simpler way? Unsigned is so much easier: int(h, 16)
BTW, the origin of the question is itunes persistent id - music library xml version and iTunes hex version
In n-bit two's complement, the bits have these values:
bit 0 = 2^0
bit 1 = 2^1
...
bit n-2 = 2^(n-2)
bit n-1 = -2^(n-1)
But bit n-1 has value 2^(n-1) when unsigned, so the unsigned reading is 2^n too high. Subtract 2^n if bit n-1 is set:
def twos_complement(hexstr, bits):
    value = int(hexstr, 16)
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value
print(twos_complement('FFFE', 16))
print(twos_complement('7FFF', 16))
print(twos_complement('7F', 8))
print(twos_complement('FF', 8))
Output:
-2
32767
127
-1
import struct
For Python 3 (with help from the comments):
h = '9DA92DAB'
struct.unpack('>i', bytes.fromhex(h))  # note: unpack returns a 1-tuple
For Python 2:
h = '9DA92DAB'
struct.unpack('>i', h.decode('hex'))
or if it is little endian:
h = '9DA92DAB'
struct.unpack('<i', h.decode('hex'))
Here's a general function you can use for hex of any size:
import math

# hex string to signed integer
def htosi(val):
    uintval = int(val, 16)
    bits = 4 * (len(val) - 2)  # the '- 2' accounts for the '0x' prefix
    if uintval >= math.pow(2, bits - 1):
        uintval = int(0 - (math.pow(2, bits) - uintval))
    return uintval
And to use it:
h = str(hex(-5))
h2 = str(hex(-13589))
x = htosi(h)
x2 = htosi(h2)
This works for 16-bit signed ints; you can extend it for 32-bit ints. It uses the basic definition of 2's complement signed numbers. Also note that XORing a bit with 1 is the same as inverting it.
# convert to unsigned
x = int('ffbf', 16) # example (-65)
# check sign bit
if (x & 0x8000) == 0x8000:
    # if set, invert and add one to get the magnitude, then apply the negative sign
    x = -((x ^ 0xffff) + 1)
It's a very late answer, but here's a function to do the above. This will extend for whatever length you provide. Credit for portions of this to another SO answer (I lost the link, so please provide it if you find it).
def hex_to_signed(source):
    """Convert a string hex value to a signed integer.

    This assumes that source is the proper length, and the sign bit
    is the first bit in the first byte of the correct length.

    hex_to_signed("F") should return -1.
    hex_to_signed("0F") should return 15.
    """
    if not isinstance(source, str):
        raise ValueError("string type required")
    if 0 == len(source):
        raise ValueError("string is empty")
    sign_bit_mask = 1 << (len(source) * 4 - 1)
    other_bits_mask = sign_bit_mask - 1
    value = int(source, 16)
    return -(value & sign_bit_mask) | (value & other_bits_mask)
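Applied to the hex string from the question (my own usage example):

>>> hex_to_signed("9DA92DAB")
-1649857109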