>>> a = -2147458560
>>> bin(a)
'-0b1111111111111111001111000000000'
My intention is to manipulate a as a 32-bit signed binary value and return it. The correct conversion for -2147458560 would be '0b10000000000000000110001000000000'; how can I achieve that?
Bitwise AND (&) with 0xffffffff (2**32 - 1) first:
>>> a = -2147458560
>>> bin(a & 0xffffffff)
'0b10000000000000000110001000000000'
>>> format(a & 0xffffffff, '32b')
'10000000000000000110001000000000'
>>> '{:32b}'.format(a & 0xffffffff)
'10000000000000000110001000000000'
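If you need this in more than one place, the idiom wraps up neatly in a pair of helpers (to_bin32/from_bin32 are names I'm making up here, not standard library functions); note that '032b' zero-pads, whereas '32b' space-pads:
def to_bin32(n):
    # mask to 32 bits, then render as a zero-padded binary string
    return format(n & 0xffffffff, '032b')

def from_bin32(s):
    # inverse: reinterpret a 32-bit two's-complement string as a signed int
    u = int(s, 2)
    return u - 0x100000000 if u & 0x80000000 else u

print(to_bin32(-2147458560))              # 10000000000000000110001000000000
print(from_bin32(to_bin32(-2147458560)))  # -2147458560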
How can I split a 16-bit value into two 8-bit values in Python, for example 0x0300 into 0x03 and 0x00?
I tried this:
reg = 0x0300
print(reg[0:4], reg[4::])
but it shows TypeError: 'int' object is not subscriptable. Then I tried:
reg = 0x0316
regbinary = bin(reg)
print(regbinary)
regBotoom = regbinary[-8::]
print(regBotoom)
result = hex('0b'+regBotoom)
print(result)
which outputs:
0b1100010110
00010110
TypeError: 'str' object cannot be interpreted as an integer
You can get the low and high 8 bits with bitwise AND and shifting:
reg = 0x0316
lowBits = reg & 0xFF          # mask off the low byte:  0x16 == 22
highBits = (reg >> 8) & 0xFF  # shift the high byte down: 0x03 == 3
print(lowBits)   # 22
print(highBits)  # 3
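Equivalently (a variant I'm adding, not part of the original answer), divmod splits the value in one step:
>>> divmod(0x0316, 0x100)   # (high byte, low byte)
(3, 22)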
Bitwise Operators:
x = 0x4321
y = x >> 8      # 0x43
z = x & 0x00ff  # 0x21
You cannot subscript an int the way you are trying to without converting it to a string first.
Perhaps you're after the answer @jafarlihi gave, where reg is really just an int and you shift its value to extract the bytes.
However, you may also want to see this:
import struct
reg = 790
regb = struct.pack('h', reg)
print(regb)
lsb, msb = struct.unpack('bb', regb)  # native little-endian here, so the low byte comes first
print(lsb, msb)
regb_2 = struct.pack('bb', lsb, msb)
print(regb, regb_2)
value = struct.unpack('h', regb)
print(reg, value)
Result:
b'\x16\x03'
22 3
b'\x16\x03' b'\x16\x03'
790 (790,)
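One caveat worth adding (my note, not from the original answer): a plain 'h' format uses the machine's native byte order, so the b'\x16\x03' above assumes a little-endian platform. Prefix the format with '<' or '>' to pin the order down:
>>> import struct
>>> struct.pack('<h', 0x0316)   # little-endian on any platform
b'\x16\x03'
>>> struct.pack('>h', 0x0316)   # big-endian
b'\x03\x16'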
reg = 0x0300
print(hex((reg >> 8) & 0xFF))
print(hex(reg & 0xFF))
Output :
0x3
0x0
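If you want fixed-width output instead of the bare 0x3 / 0x0 above, format can zero-pad (a small addition of mine, assuming two hex digits per byte):
>>> reg = 0x0300
>>> format((reg >> 8) & 0xFF, '#04x')
'0x03'
>>> format(reg & 0xFF, '#04x')
'0x00'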
I have:
n = 257
a = n.to_bytes(2, 'little')
which gives a == b'\x01\x01'
How can I convert this back into 257?
Also, is there any way to use to_bytes without specifying how many bytes?
Use the complementary int.from_bytes and specify the byteorder again.
>>> n = 257
>>> n_bytes = n.to_bytes(2, "little")
>>> n_again = int.from_bytes(n_bytes, "little")
>>> n_again == n
True
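As for the second part of the question: to_bytes always needs a length, but you can compute the minimal one from bit_length (the max() guard handles n == 0, whose bit_length is 0):
>>> n = 257
>>> length = max(1, (n.bit_length() + 7) // 8)
>>> n.to_bytes(length, "little")
b'\x01\x01'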
I'm reverse engineering a proprietary network protocol that generates a (static) one-time pad on launch and then uses that to encode/decode each packet it sends/receives. It uses the one-time pad in a series of complex XORs, shifts, and multiplications.
I have produced the following C code after walking through the decoding function in the program with IDA. This function encodes/decodes the data perfectly:
void encodeData(char *buf)
{
    int i;
    size_t bufLen = *(unsigned short *)buf;
    unsigned long entropy = *((unsigned long *)buf + 2);
    int xorKey = 9 * (entropy ^ ((entropy ^ 0x3D0000) >> 16));
    unsigned short baseByteTableIndex = (60205 * (xorKey ^ (xorKey >> 4)) ^ (668265261 * (xorKey ^ (xorKey >> 4)) >> 15)) & 0x7FFF;

    //Skip first 24 bytes, as that is the header
    for (i = 24; i <= (signed int)bufLen; i++)
        buf[i] ^= byteTable[((unsigned short)i + baseByteTableIndex) & 2047];
}
Now I want to try my hand at making a Peach fuzzer for this protocol. Since I'll need a custom Python fixup to do the encoding/decoding prior to doing the fuzzing, I need to port this C code to Python.
I've made the following Python function but haven't had any luck with it decoding the packets it receives.
def encodeData(buf):
    newBuf = bytearray(buf)
    bufLen = unpack('H', buf[:2])
    entropy = unpack('I', buf[2:6])
    xorKey = 9 * (entropy[0] ^ ((entropy[0] ^ 0x3D0000) >> 16))
    baseByteTableIndex = (60205 * (xorKey ^ (xorKey >> 4)) ^ (668265261 * (xorKey ^ (xorKey >> 4)) >> 15)) & 0x7FFF
    #Skip first 24 bytes, since that is header data
    for i in range(24, bufLen[0]):
        newBuf[i] = xorPad[(i + baseByteTableIndex) & 2047]
    return str(newBuf)
I've tried with and without using array() or pack()/unpack() on various variables to force them to be the right size for the bitwise operations, but I must be missing something, because I can't get the Python code to work as the C code does. Does anyone know what I'm missing?
In case it would help you to try this locally, here is the one-time pad generating function:
from array import array
from struct import pack

xorPad = b''  # assumed empty starting pad; the question does not show its initialisation

def buildXorPad():
    global xorPad
    xorKey = array('H', [0xACE1])
    for i in range(0, 2048):
        xorKey[0] = -(xorKey[0] & 1) & 0xB400 ^ (xorKey[0] >> 1)
        xorPad = xorPad + pack('B', xorKey[0] & 0xFF)
And here is the hex-encoded original (encoded) and decoded packet.
Original: 20000108fcf3d71d98590000010000000000000000000000a992e0ee2525a5e5
Decoded: 20000108fcf3d71d98590000010000000000000000000000ae91e1ee25252525
Solution
It turns out that my problem didn't have much to do with the difference between C and Python types, but rather some simple programming mistakes.
from struct import unpack

def encodeData(buf):
    newBuf = bytearray(buf)
    bufLen = unpack('H', buf[:2])
    entropy = unpack('I', buf[8:12])
    xorKey = 9 * (entropy[0] ^ ((entropy[0] ^ 0x3D0000) >> 16))
    baseByteTableIndex = (60205 * (xorKey ^ (xorKey >> 4)) ^ (668265261 * (xorKey ^ (xorKey >> 4)) >> 15)) & 0x7FFF
    #Skip first 24 bytes, since that is header data
    for i in range(24, bufLen[0]):
        padIndex = (i + baseByteTableIndex) & 2047
        newBuf[i] ^= unpack('B', xorPad[padIndex])[0]
    return str(newBuf)
Thanks everyone for your help!
This line of C:
unsigned long entropy = *((unsigned long *)buf + 2);
should translate to
entropy = unpack('I', buf[8:12])
because buf is cast to an unsigned long pointer before 2 is added, so the address advances by the size of two unsigned longs (8 bytes, assuming a 4-byte unsigned long), not by 2 bytes.
Also:
newBuf[i] = xorPad[(i + baseByteTableIndex) & 2047]
should be
newBuf[i] ^= xorPad[(i + baseByteTableIndex) & 2047]
to match the C, otherwise the output isn't actually based on the contents of the buffer.
Python integers don't overflow: in Python 2 they are automatically promoted to arbitrary precision (long) when they exceed sys.maxint (or go below -sys.maxint - 1); in Python 3, all ints are arbitrary precision.
>>> sys.maxint
9223372036854775807
>>> sys.maxint + 1
9223372036854775808L
Using array and/or unpack does not seem to make a difference (as you discovered)
>>> array('H', [1])[0] + sys.maxint
9223372036854775808L
>>> unpack('H', '\x01\x00')[0] + sys.maxint
9223372036854775808L
To truncate your numbers, you'll have to simulate overflow by manually ANDing with an appropriate bitmask whenever you're increasing the size of the variable.
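For example, to emulate 32-bit unsigned overflow after a multiplication (a sketch of the masking idea, not code from the question):
MASK32 = 0xFFFFFFFF
a = 0xFFFFFFFF
print(a * 9)             # 38654705655 -- Python promotes, no overflow
print((a * 9) & MASK32)  # 4294967287  -- what a 32-bit unsigned C variable would hold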
I'm trying to calculate CRC-CCITT (0xFFFF) for a hex string and get the result back as a hex string. I tried binascii and crc16, but I get int values, and when I convert them to hex it's not the value I expected. I need this:
hex_string = "AA01"
crc_string = crccitt(hex_string)
print("CRC: ", crc_string)
>>> CRC: FF9B
You can use str.format / format to convert the int value to hexadecimal (crc16 is used here to compute the CRC):
>>> import binascii
>>> import crc16
>>> hex_string = 'AA01'
>>> crc = crc16.crc16xmodem(binascii.unhexlify(hex_string), 0xffff)
>>> '{:04X}'.format(crc & 0xffff)
'FF9B'
>>> format(crc & 0xffff, '04X')
'FF9B'
or using % operator:
>>> '%04X' % (crc & 0xffff)
'FF9B'
import binascii
import crc16

def crccitt(hex_string):
    byte_seq = binascii.unhexlify(hex_string)
    crc = crc16.crc16xmodem(byte_seq, 0xffff)
    return '{:04X}'.format(crc & 0xffff)
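Usage, matching the output asked for in the question:
crc_string = crccitt("AA01")
print("CRC: ", crc_string)   # prints: CRC:  FF9B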
I want to implement the IDEA algorithm in Python. Python puts no limit on the size of an integer, but I need to restrict values to a fixed number of bits, for example to do a cyclic left shift. What do you advise?
One way is to use the BitVector library.
Example of use:
>>> from BitVector import BitVector
>>> bv = BitVector(intVal = 0x13A5, size = 32)
>>> print bv
00000000000000000001001110100101
>>> bv << 6 #does a cyclic left shift
>>> print bv
00000000000001001110100101000000
>>> bv[0] = 1
>>> print bv
10000000000001001110100101000000
>>> bv << 3 #cyclic shift again, should be more apparent
>>> print bv
00000000001001110100101000000100
An 8-bit mask with a cyclic left shift:
shifted = number << 1                # bit 7 moves up into bit 8
overflowed = (shifted & 0x100) >> 8  # pull the carried-out bit back down to bit 0
shifted &= 0xFF                      # truncate back to 8 bits
result = overflowed | shifted        # wrap the bit around
You should be able to make a class that does this for you. With a bit more of the same, it can shift by an arbitrary amount within an arbitrary-sized value, as in the sketch below.
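A minimal sketch of that generalization (the rol name and signature are mine):
def rol(value, shift, width=8):
    # cyclic left shift of an unsigned value within a fixed bit width
    shift %= width
    mask = (1 << width) - 1
    return ((value << shift) | ((value & mask) >> (width - shift))) & mask

print(bin(rol(0b10000001, 1)))   # 0b11 -- the high bit wraps around to bit 0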
The bitstring module might be of help. This example creates a 22-bit bitstring and rotates the bits 3 to the right:
>>> from bitstring import BitArray
>>> a = BitArray(22) # creates 22-bit zeroed bitstring
>>> a.uint = 12345 # set the bits with an unsigned integer
>>> a.bin # view the binary representation
'0b0000000011000000111001'
>>> a.ror(3) # rotate to the right
>>> a.bin
'0b0010000000011000000111'
>>> a.uint # and back to the integer representation
525831
If you want the low 32 bits of a number, you can mask with bitwise AND like so:
>>> low32 = (1 << 32) - 1
>>> n = 0x12345678
>>> m = ((n << 20) | (n >> 12)) & low32
>>> "0x%x" % m
'0x67812345'
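To sanity-check (my addition), the same mask-and-shift idiom rotates the value back:
>>> "0x%x" % (((m >> 20) | (m << 12)) & low32)
'0x12345678'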