I am trying to write decryption logic for the encryption logic below.
import os
import string

keybytes = os.urandom(8)    # 8 random bytes: byte 0 picks the rotation, bytes 1-7 pick the key
bit = keybytes[0] % 7 + 1   # rotation amount, 1..7
kxor = []
key = ""
for i in range(1, 8):
    kxor.append(ord(string.ascii_letters[keybytes[i] % len(string.ascii_letters)]))
    key = key + chr(kxor[i - 1])
print("Key is %s rotated by %d bits." % (key, bit))
def rotatel(x, bit):
    # 8-bit rotate left: the high bits wrap around to the bottom
    return ((x << bit) & 0xff) | (x >> (8 - bit))

plaintext = "ABCDE"
bit = 6
kxor = [65, 115, 113, 107, 98, 75, 85]
encryptedText = []
for i in range(0, len(plaintext)):
    encryptedText.append(rotatel(ord(plaintext[i]), bit) ^ kxor[i % len(kxor)])
print(bytes(encryptedText))
I have hardcoded the bit and kxor values here because I am able to recover them; that is, if these were the values used for encryption, I can obtain kxor programmatically while writing the decryption logic.
Where I am struggling is the rotatel function.
I am trying to reverse that logic but am not able to figure out how, so I need some pointers on reversing the rotatel function.
Is that the right way, or am I approaching this entirely the wrong way?
Basically, my question is: how do I reverse return ((x << bit) & 0xff) | (x >> (8 - bit))?
For example (using decimal digits as a stand-in for bits):
bit = 6
x = "123456"
rotatel(x, bit) = "561234"
so what you need is:
y = "561234"
reverse_rotatel(y, bit) = "123456"
But you must remember that you XOR y (561234) with kxor after you rotatel. So you must XOR with kxor again to recover y before you reverse_rotatel.
I was finally able to figure it out. I wanted to write the reverse logic for rotatel:
def rotatel(x, bit):
    return ((x << bit) & 0xff) | (x >> (8 - bit))
This is how I did it:
def rotater(x, bit):
    # 8-bit rotate right: the low bits wrap around to the top
    return ((x >> bit) & 0xff) | ((x << (8 - bit)) & 0xff)
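Putting it together, a minimal decryption sketch (assuming the ciphertext bytes and the same bit and kxor values from the encryption above):
def decrypt(encrypted, kxor, bit):
    # XOR is its own inverse: undo the XOR first, then undo the rotation
    return "".join(
        chr(rotater(c ^ kxor[i % len(kxor)], bit))
        for i, c in enumerate(encrypted)
    )

# decrypt(bytes(encryptedText), kxor, 6) should return "ABCDE"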
I have an integer as the result of converting text to int. Here is the code in Python:
from Crypto.Util.number import bytes_to_long   # pycryptodome; int.from_bytes(data, 'big') is equivalent

def mix(data):
    b = len(data) * 8
    m = bytes_to_long(data)
    c = 0
    for i in range(b // 2):
        if (m >> i) & 1:
            c ^= m >> (b // 2) << i
    return c
So now I have to write an "unmix" function to decode the result back to its origin, but I'm still struggling with those binary shift operators. Can you help me, please?
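For reference, here is my reading of what mix computes (an interpretation, not something stated in the question): for every set bit i in the lower half of m, it XORs in the upper half shifted left by i, i.e. the carry-less (XOR) product of the two halves, so reversing it amounts to undoing a carry-less multiplication:
def clmul(x, y):
    # carry-less multiplication: schoolbook multiplication with XOR instead of addition
    c, i = 0, 0
    while y >> i:
        if (y >> i) & 1:
            c ^= x << i
        i += 1
    return c

def mix_equiv(data):
    b = len(data) * 8
    m = int.from_bytes(data, 'big')   # same as bytes_to_long(data)
    hi, lo = m >> (b // 2), m & ((1 << (b // 2)) - 1)
    return clmul(hi, lo)              # equals mix(data)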
Is there a fast way to reverse a binary number in Python?
Example: I have the number 11, in binary 0000000000001011 with 16 bits. Now I'm searching for a fast function f which returns 1101000000000000 (decimal 53248). Plain lookup tables are no solution since I want it to scale to 32-bit numbers. Thank you for your effort.
Edit:
Performance: I tested the code for all 2^16 patterns several times.
- Winner: the partial lookup tables, 30 ms.
- 2nd: int(format(num, '016b')[::-1], 2) from the comments, 56 ms.
- 3rd: the bit-twiddling x = ((x & 0x00FF) << 8) | (x >> 8) approach, 65 ms.
- I did not expect my own approach to be so horribly slow, but it is: approx. 320 ms, with a small improvement to 300 ms by using + instead of |.
- bytes(str(num).encode('utf-8')) fought for 2nd place, but somehow the code did not produce valid answers; most likely I made a mistake transforming them back into an integer.
Thank you very much for your input. I was quite surprised.
This might be faster using a small 8-bit lookup table:
num = 11
# One-time creation of the 8-bit lookup table
rev = [int(format(b, '08b')[::-1], base=2) for b in range(256)]
# Run for each number to be flipped: reverse each byte, then swap the two bytes
lower_rev = rev[num & 0xFF] << 8
upper_rev = rev[(num & 0xFF00) >> 8]
flipped = lower_rev + upper_rev   # 53248 for num = 11
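To scale the table-based approach to 32 bits (my own extension, not part of the original answer), reverse each of the four bytes with the same 8-bit table and reassemble them in the opposite order:
def reverse32_table(num):
    return ((rev[num & 0xFF] << 24) |
            (rev[(num >> 8) & 0xFF] << 16) |
            (rev[(num >> 16) & 0xFF] << 8) |
            rev[(num >> 24) & 0xFF])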
I think you can just use slicing to get what you are looking for, if you have the number as a binary string:
b = '0000000000001011'.encode('utf-8')
>>> b
b'0000000000001011'
>>> b[::-1]
b'1101000000000000'
There's this, but in Python it seems slower than Matthias' proposed int->str->int solution.
x = ((x & 0x5555) << 1) | ((x & 0xAAAA) >> 1)
x = ((x & 0x3333) << 2) | ((x & 0xCCCC) >> 2)
x = ((x & 0x0F0F) << 4) | ((x & 0xF0F0) >> 4)
x = ((x & 0x00FF) << 8) | (x >> 8)
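For the 32-bit case the question asks about, the same divide-and-conquer pattern extends with one more swap level; a sketch with the constants widened to 32 bits:
def reverse32(x):
    # swap adjacent bits, then pairs, nibbles, bytes, and finally the 16-bit halves
    x = ((x & 0x55555555) << 1) | ((x & 0xAAAAAAAA) >> 1)
    x = ((x & 0x33333333) << 2) | ((x & 0xCCCCCCCC) >> 2)
    x = ((x & 0x0F0F0F0F) << 4) | ((x & 0xF0F0F0F0) >> 4)
    x = ((x & 0x00FF00FF) << 8) | ((x & 0xFF00FF00) >> 8)
    return ((x & 0x0000FFFF) << 16) | (x >> 16)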
My current approach is to access the bits via shifting and masking, and to shift them into the mirrored number until they reach their destination. Still, I have the feeling that there is room for improvement.
num = 11
print(format(num, '016b'))
right = num
left = 0
for i in range(16):
    tmp = right & 1           # peel off the lowest bit
    left = (left << 1) | tmp  # push it onto the result from the right
    right = right >> 1
print(format(left, '016b'))
I have been trying to get my head around CRC32 calculations without much success; the values I get do not match what I should get.
I am aware that Python has libraries capable of generating these checksums (namely zlib and binascii), but I do not have the luxury of using them, as the CRC functionality does not exist in MicroPython.
So far I have the following code (Python 2):
import binascii
import zlib
from array import array

poly = 0xEDB88320
table = array('L')
for byte in range(256):
    crc = 0
    for bit in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(string):
    value = 0xffffffffL
    for ch in string:
        value = table[(ord(ch) ^ value) & 0x000000ffL] ^ (value >> 8)
    return value

teststring = "test"
print "binascii calc: 0x%08x" % (binascii.crc32(teststring) & 0xffffffff)
print "zlib calc: 0x%08x" % (zlib.crc32(teststring) & 0xffffffff)
print "my calc: 0x%08x" % (crc32(teststring))
Then I get the following output:
binascii calc: 0xd87f7e0c
zlib calc: 0xd87f7e0c
my calc: 0x2780810c
The binascii and zlib calculations agree, whereas mine doesn't. I believe the calculated table of bytes is correct, as I have compared it to examples available on the net, so the issue must be in the routine where each byte is processed. Could anyone point me in the right direction?
Thanks in advance!
I haven't looked closely at your code, so I can't pinpoint the exact source of the error, but you can easily tweak it to get the desired output:
import binascii
from array import array

poly = 0xEDB88320
table = array('L')
for byte in range(256):
    crc = 0
    for bit in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(string):
    value = 0xffffffffL
    for ch in string:
        value = table[(ord(ch) ^ value) & 0xff] ^ (value >> 8)
    return -1 - value
# test
data = (
    '',
    'test',
    'hello world',
    '1234',
    'A long string to test CRC32 functions',
)
for s in data:
    print repr(s)
    a = binascii.crc32(s)
    print '%08x' % (a & 0xffffffffL)
    b = crc32(s)
    print '%08x' % (b & 0xffffffffL)
    print
output
''
00000000
00000000
'test'
d87f7e0c
d87f7e0c
'hello world'
0d4a1185
0d4a1185
'1234'
9be3e0a3
9be3e0a3
'A long string to test CRC32 functions'
d2d10e28
d2d10e28
Here are a couple more tests that verify that the tweaked crc32 gives the same result as binascii.crc32.
from random import seed, randrange

print 'Single byte tests...',
for i in range(256):
    s = chr(i)
    a = binascii.crc32(s) & 0xffffffffL
    b = crc32(s) & 0xffffffffL
    assert a == b, (repr(s), a, b)
print('ok')

seed(42)
print 'Multi-byte tests...'
for width in range(2, 20):
    print 'Width', width
    r = range(width)
    for n in range(1000):
        s = ''.join([chr(randrange(256)) for i in r])
        a = binascii.crc32(s) & 0xffffffffL
        b = crc32(s) & 0xffffffffL
        assert a == b, (repr(s), a, b)
print('ok')
output
Single byte tests... ok
Multi-byte tests...
Width 2
Width 3
Width 4
Width 5
Width 6
Width 7
Width 8
Width 9
Width 10
Width 11
Width 12
Width 13
Width 14
Width 15
Width 16
Width 17
Width 18
Width 19
ok
As discussed in the comments, the source of the error in the original code is that this CRC-32 algorithm inverts the initial crc buffer, and then inverts the final buffer contents. So value is initialised to 0xffffffff instead of zero, and we need to return value ^ 0xffffffff, which can also be written as ~value & 0xffffffff, i.e. invert value and then select the low-order 32 bits of the result.
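Since the question mentions MicroPython, here is a minimal Python 3 sketch of the same table-driven routine with the fix applied (the L suffixes and print statements above are Python 2 only; this is an adaptation, not the exact code from the answer):
poly = 0xEDB88320
table = []
for byte in range(256):
    crc = 0
    for _ in range(8):
        if (byte ^ crc) & 1:
            crc = (crc >> 1) ^ poly
        else:
            crc >>= 1
        byte >>= 1
    table.append(crc)

def crc32(data):
    # data is a bytes object; the register is inverted at the start and at the end
    value = 0xffffffff
    for b in data:
        value = table[(b ^ value) & 0xff] ^ (value >> 8)
    return value ^ 0xffffffff

assert crc32(b'test') == 0xd87f7e0c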
If you are using binary data where the CRC is chained over multiple buffers, I used the following (based on the OP's table):
def crc32(data, crc=0xffffffff):
    for b in data:
        crc = table[(b ^ crc) & 0xff] ^ (crc >> 8)
    return crc
One can XOR the final result with 0xffffffff (i.e. invert it) to agree with the online calculators.
crc = crc32(b'test')
print('0x{:08x}'.format(crc))
crc = crc32(b'te')
crc = crc32(b'st', crc)
print('0x{:08x}'.format(crc))
print('xor: 0x{:08x}'.format(crc ^ 0xffffffff))
output
0x278081f3
0x278081f3
xor: 0xd87f7e0c
How do I convert a hex string to a signed int in Python 3?
The best I can come up with is
import binascii

h = '9DA92DAB'
b = bytes(h, 'utf-8')
ba = binascii.a2b_hex(b)
print(int.from_bytes(ba, byteorder='big', signed=True))
Is there a simpler way? Unsigned is so much easier: int(h, 16)
BTW, the origin of the question is itunes persistent id - music library xml version and iTunes hex version
In n-bit two's complement, bits have value:
bit 0 = 2^0
bit 1 = 2^1
...
bit n-2 = 2^(n-2)
bit n-1 = -2^(n-1)
But bit n-1 has value 2^(n-1) when unsigned, so the number is 2^n too high. Subtract 2^n if bit n-1 is set:
def twos_complement(hexstr, bits):
    value = int(hexstr, 16)
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value
print(twos_complement('FFFE', 16))
print(twos_complement('7FFF', 16))
print(twos_complement('7F', 8))
print(twos_complement('FF', 8))
Output:
-2
32767
127
-1
import struct
For Python 3 (with the comments' help):
h = '9DA92DAB'
struct.unpack('>i', bytes.fromhex(h))
For Python 2:
h = '9DA92DAB'
struct.unpack('>i', h.decode('hex'))
or, if it is little endian:
h = '9DA92DAB'
struct.unpack('<i', h.decode('hex'))
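Note that struct.unpack returns a 1-tuple, so index into it to get the integer; a quick check (the expected value is my own calculation, so verify it):
import struct
value = struct.unpack('>i', bytes.fromhex('9DA92DAB'))[0]
print(value)  # -1649857109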
Here's a general function you can use for hex of any size:
# hex string to signed integer (expects a '0x' prefix, as produced by hex())
def htosi(val):
    uintval = int(val, 16)
    bits = 4 * (len(val) - 2)       # subtract 2 for the '0x' prefix
    if uintval >= 1 << (bits - 1):  # integer shifts avoid the float precision loss of math.pow
        uintval -= 1 << bits
    return uintval
And to use it:
h = str(hex(-5))
h2 = str(hex(-13589))
x = htosi(h)
x2 = htosi(h2)
This works for 16-bit signed ints; you can extend it for 32-bit ints. It uses the basic definition of two's complement signed numbers. Also note that XORing a bit with 1 flips it, so XORing with all ones is a bitwise NOT.
# convert to unsigned
x = int('ffbf', 16)  # example (-65)
# check sign bit
if (x & 0x8000) == 0x8000:
    # if set, invert and add one to get the magnitude, then negate
    x = -((x ^ 0xffff) + 1)
It's a very late answer, but here's a function to do the above. It will extend to whatever length you provide. Credit for portions of this to another SO answer (I lost the link, so please provide it if you find it).
def hex_to_signed(source):
    """Convert a hex string to a signed integer.

    This assumes that source is the proper length, and the sign bit
    is the first bit in the first byte of the correct length.
    hex_to_signed("F") should return -1.
    hex_to_signed("0F") should return 15.
    """
    if not isinstance(source, str):
        raise ValueError("string type required")
    if 0 == len(source):
        raise ValueError("string is empty")
    sign_bit_mask = 1 << (len(source) * 4 - 1)
    other_bits_mask = sign_bit_mask - 1
    value = int(source, 16)
    return -(value & sign_bit_mask) | (value & other_bits_mask)
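Applied to the OP's example value (the expected signed result is my own calculation, so double-check it):
print(hex_to_signed('9DA92DAB'))  # -1649857109
print(hex_to_signed('F'))         # -1
print(hex_to_signed('0F'))        # 15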
I have a little problem with my script: I need to convert an IP in the form 'xxx.xxx.xxx.xxx' to its integer representation and back again.
import socket

def iptoint(ip):
    return int(socket.inet_aton(ip).encode('hex'), 16)

def inttoip(ip):
    return socket.inet_ntoa(hex(ip)[2:].decode('hex'))
In [65]: inttoip(iptoint('192.168.1.1'))
Out[65]: '192.168.1.1'
In [66]: inttoip(iptoint('4.1.75.131'))
---------------------------------------------------------------------------
error Traceback (most recent call last)
/home/thc/<ipython console> in <module>()
/home/thc/<ipython console> in inttoip(ip)
error: packed IP wrong length for inet_ntoa
Does anybody know how to fix that?
#!/usr/bin/env python
import socket
import struct

def ip2int(addr):
    return struct.unpack("!I", socket.inet_aton(addr))[0]

def int2ip(addr):
    return socket.inet_ntoa(struct.pack("!I", addr))

print(int2ip(0xc0a80164))  # 192.168.1.100
print(ip2int('10.0.0.1'))  # 167772161
Python 3 has the ipaddress module, which features very simple conversion:
import ipaddress

int(ipaddress.IPv4Address("192.168.0.1"))  # 3232235521
str(ipaddress.IPv4Address(3232235521))     # '192.168.0.1'
In pure Python, without using any additional module:
def IP2Int(ip):
    o = list(map(int, ip.split('.')))
    res = (16777216 * o[0]) + (65536 * o[1]) + (256 * o[2]) + o[3]
    return res

def Int2IP(ipnum):
    o1 = ipnum // 16777216 % 256
    o2 = ipnum // 65536 % 256
    o3 = ipnum // 256 % 256
    o4 = ipnum % 256
    return '%(o1)s.%(o2)s.%(o3)s.%(o4)s' % locals()

# Example
print('192.168.0.1 -> %s' % IP2Int('192.168.0.1'))
print('3232235521 -> %s' % Int2IP(3232235521))
Result:
192.168.0.1 -> 3232235521
3232235521 -> 192.168.0.1
You lose the left zero-padding, which breaks the decoding of your string.
Here's a working function:
def inttoip(ip):
    return socket.inet_ntoa(hex(ip)[2:].zfill(8).decode('hex'))
Below are the fastest and most straightforward (to the best of my knowledge) converters for IPv4 and IPv6. The original answer gave only the function bodies; the imports and wrapper names here are added to make them runnable:
import socket
import struct

def ipv4_to_int(val):
    try:
        _str = socket.inet_pton(socket.AF_INET, val)
    except socket.error:
        raise ValueError
    return struct.unpack('!I', _str)[0]

def int_to_ipv4(n):
    return socket.inet_ntop(socket.AF_INET, struct.pack('!I', n))

def ipv6_to_int(val):
    try:
        _str = socket.inet_pton(socket.AF_INET6, val)
    except socket.error:
        raise ValueError
    a, b = struct.unpack('!2Q', _str)
    return (a << 64) | b

def int_to_ipv6(n):
    a = n >> 64
    b = n & ((1 << 64) - 1)
    return socket.inet_ntop(socket.AF_INET6, struct.pack('!2Q', a, b))
Python code that does not use inet_ntop() and the struct module is roughly an order of magnitude slower than this, regardless of what it is doing.
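A quick round-trip check of the sketch above (using the wrapper names introduced there):
assert int_to_ipv4(ipv4_to_int('192.168.0.1')) == '192.168.0.1'
assert ipv6_to_int('::1') == 1
assert int_to_ipv6(1) == '::1'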
One line (Python 2; in Python 3, reduce lives in functools):
reduce(lambda out, x: (out << 8) + int(x), '127.0.0.1'.split('.'), 0)
Python 3 one-liner (based on Thomas Webber's Python 2 answer):
sum([int(x) << 8*i for i, x in enumerate(reversed(ip.split('.')))])
Left shifts are much faster than pow().
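For example, binding ip first:
ip = '127.0.0.1'
sum([int(x) << 8*i for i, x in enumerate(reversed(ip.split('.')))])  # 2130706433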
It can be done without using any library.
def iptoint(ip):
    h = list(map(int, ip.split(".")))
    return (h[0] << 24) + (h[1] << 16) + (h[2] << 8) + (h[3] << 0)

def inttoip(ip):
    return ".".join(map(str, [(ip >> 24) & 0xff, (ip >> 16) & 0xff, (ip >> 8) & 0xff, (ip >> 0) & 0xff]))

iptoint("8.8.8.8")    # 134744072
inttoip(134744072)    # 8.8.8.8
I used the following (Python 2; note the long literals):
ip2int = lambda ip: reduce(lambda a, b: long(a) * 256 + long(b), ip.split('.'))
ip2int('192.168.1.1')
# output
3232235777L

# from int to ip
int2ip = lambda num: '.'.join([str((num >> 8*i) % 256) for i in [3, 2, 1, 0]])
int2ip(3232235777L)
# output
'192.168.1.1'
Let me give a more understandable way (Python 2):
IP to int:
def str_ip2_int(s_ip='192.168.1.100'):
    lst = [int(item) for item in s_ip.split('.')]
    print lst
    # [192, 168, 1, 100]
    int_ip = lst[3] | lst[2] << 8 | lst[1] << 16 | lst[0] << 24
    return int_ip  # 3232235876
The line:
lst = [int(item) for item in s_ip.split('.')]
is equivalent to:
lst = map(int, s_ip.split('.'))
Likewise:
int_ip = lst[3] | lst[2] << 8 | lst[1] << 16 | lst[0] << 24
is equivalent to:
int_ip = lst[3] + (lst[2] << 8) + (lst[1] << 16) + (lst[0] << 24)
int_ip = lst[3] + lst[2] * pow(2, 8) + lst[1] * pow(2, 16) + lst[0] * pow(2, 24)
int to IP:
def int_ip2str(int_ip=3232235876):
    a0 = str(int_ip & 0xff)
    a1 = str((int_ip & 0xff00) >> 8)
    a2 = str((int_ip & 0xff0000) >> 16)
    a3 = str((int_ip & 0xff000000) >> 24)
    return ".".join([a3, a2, a1, a0])
or:
def int_ip2str(int_ip=3232235876):
    lst = []
    for i in xrange(4):
        shift_n = 8 * i
        lst.insert(0, str((int_ip >> shift_n) & 0xff))
    return ".".join(lst)
My approach is to look straightforwardly at the number the way it is stored, rather than the way it is displayed, and to convert between the display format and the stored format.
So, from an IP address to an int:
def convertIpToInt(ip):
    return sum([int(ipField) << 8*index for index, ipField in enumerate(reversed(ip.split('.')))])
This evaluates each field, shifts it to its correct offset, and sums them all up, neatly converting the IP address's display form into its numerical value.
In the opposite direction, from an int to an IP address:
def convertIntToIp(ipInt):
    return '.'.join([str(int(ipHexField, 16)) for ipHexField in map(''.join, zip(*[iter(hex(ipInt)[2:].zfill(8))]*2))])
The numerical representation is first converted into its hexadecimal string representation, which can be manipulated as a sequence and is therefore easier to break up. Pairs of hex digits are then extracted by mapping ''.join onto the tuples produced by zipping two references to the same iterator over the hex string (see How does zip(*[iter(s)]*n) work?). Each pair is converted from a hex string to an int string, and the results are joined with '.'.
from functools import reduce  # Python 3; reduce is a builtin in Python 2

def ip2int(ip):
    """
    Convert an IP string to an integer.
    :param ip: IP string
    :return: IP integer
    """
    return reduce(lambda x, y: x * 256 + y, map(int, ip.split('.')))

def int2ip(num):
    """
    Convert an IP integer to a string.
    :param num: IP integer
    :return: IP string
    """
    return '.'.join(map(lambda x: str(num // 256 ** x % 256), range(3, -1, -1)))
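A quick round trip with these two helpers:
print(ip2int('192.168.0.1'))  # 3232235521
print(int2ip(3232235521))     # 192.168.0.1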