Convert strings to int using many operators - python

I have an integer that is the result of converting text with the function below. Here is its code in Python:
# bytes_to_long is presumably from PyCryptodome (Crypto.Util.number)
from Crypto.Util.number import bytes_to_long

def mix(data):
    b = len(data) * 8
    m = bytes_to_long(data)
    c = 0
    for i in range(b // 2):
        if (m >> i) & 1:
            c ^= m >> (b // 2) << i
    return c
Now I need to make an "unmix" function to decode the result back to its origin, but I'm still struggling with the binary shift operators. Can you help me, please?
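An observation that may help with inverting it: the loop computes a carryless (GF(2), XOR-based) product of the low half of m with the high half. A sketch of that reading, with clmul and mix_equiv as illustrative helper names and int.from_bytes standing in for bytes_to_long:

def clmul(x, y):
    # carryless multiply: XOR a shifted copy of y for each set bit of x
    c = 0
    i = 0
    while x >> i:
        if (x >> i) & 1:
            c ^= y << i
        i += 1
    return c

def mix_equiv(data):
    # equals mix(data): the low half of m, clmul-multiplied by the high half
    b = len(data) * 8
    m = int.from_bytes(data, "big")  # same value as bytes_to_long(data)
    lo = m & ((1 << (b // 2)) - 1)
    hi = m >> (b // 2)
    return clmul(lo, hi)

Seen this way, "unmix" is the question of undoing a carryless product, not of undoing individual shifts.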


Crossover of two binary representations

I'm trying to get a quick implementation of the following problem, ideally such that it would work in a numba function. The problem is the following: I have two random integers a & b and consider their binary representation of length L, e.g.
L=4: a=10->1010, b=6->0110.
This is the information that is fed into the function. Then I cut both binary representations in two at the same random position and fuse the pieces crosswise, e.g.
L=4: a=1|010, b=0|110 ---> c=1110 or 0010.
One of the two outcomes is chosen with equal probability, and that is the output of the function. The cut occurs strictly inside the representation, between the first and the last bit.
This is currently my code:
import random

def func(a, b, l):
    bin_a = [int(i) for i in bin(a)[2:].zfill(l)]
    bin_b = [int(i) for i in bin(b)[2:].zfill(l)]
    randint = random.randint(1, l - 1)
    print("randint", randint)
    if random.random() < 0.5:
        result = bin_a[0:randint] + bin_b[randint:l]
    else:
        result = bin_b[0:randint] + bin_a[randint:l]
    return result
I have the feeling that there are possibly many shortcuts to this problem that I'm not seeing. Also, my code does not work in numba :/. Thanks for any help!
Edit: This is an update of my code, thanks to Prune's help! It also works as a numba function. If there are no further improvements to it, I will close the question.
def func2(a, b, l):
    randint = random.randint(1, l - 1)
    print("randint", randint)
    bitlist_l = [1] * randint + [0] * (l - randint)
    bitlist_r = [0] * randint + [1] * (l - randint)
    print("bitlist_l", bitlist_l)
    print("bitlist_r", bitlist_r)
    l_mask = 0
    r_mask = 0
    for i in range(l):
        l_mask = (l_mask << 1) | bitlist_l[i]
        r_mask = (r_mask << 1) | bitlist_r[i]
    print("l_mask", l_mask)
    print("r_mask", r_mask)
    if random.random() < 0.5:
        c = (a & l_mask) | (b & r_mask)
    else:
        c = (b & l_mask) | (a & r_mask)
    return c
You lose a lot of time converting between string and int. Try bit operations instead: mask the pieces you want and construct the output without all the conversions. Try these steps:
size = [length of the larger number in bits]. There are many ways to get this.
Make a mask template: size one-bits.
Pick your random position, pos. (randint is a poor name, as it shadows the function you're using.)
Make two masks: r_mask = mask >> pos and l_mask = mask ^ r_mask. This gives you two mutually exclusive and exhaustive bit-maps for your inputs.
Flip your random coin, the 50-50 chance. The < 0.5 result would be
(a & l_mask) | (b & r_mask)
For the >= 0.5 result, switch a and b in that expression.
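A minimal sketch assembling those steps into one function (the function name and exact mask construction are mine; it assumes a and b both fit in size bits):

import random

def crossover(a, b, size):
    mask = (1 << size) - 1           # size one-bits
    pos = random.randint(1, size - 1)
    r_mask = mask >> pos             # the low size-pos bits
    l_mask = mask ^ r_mask           # the remaining high bits
    if random.random() < 0.5:
        return (a & l_mask) | (b & r_mask)
    return (b & l_mask) | (a & r_mask)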
You can improve your code by realizing that you do not need a "human readable" binary representation to do binary operations.
For example, creating the mask:
m = (1<<randompos) - 1
The crossover can be done like so:
c = (a if coinflip else b) ^ ((a^b)&m)
And that's all.
Full example:
import numpy as np

# create random sample
a, b = np.random.randint(1 << 32, size=2)
randompos = np.random.randint(1, 32)
coinflip = np.random.randint(2)

randompos
# 12
coinflip
# 0

# do the crossover
m = (1 << randompos) - 1
c = (a if coinflip else b) ^ ((a ^ b) & m)

# check
for i in (a, b, m, c):
    print(f"{i:032b}")
# 11100011110111000001001111100011
# 11010110110000110010101001111011
# 00000000000000000000111111111111
# 11010110110000110010001111100011

Python string comparison doesn't short circuit?

The usual saying is that string comparison must be done in constant time when checking things like passwords or hashes, and thus it is recommended to avoid a == b.
However, I ran the following script, and the results don't support the hypothesis that a == b short-circuits on the first non-identical character.
from time import perf_counter_ns
import random

def timed_cmp(a, b):
    start = perf_counter_ns()
    a == b
    end = perf_counter_ns()
    return end - start

def n_timed_cmp(n, a, b):
    "average time for a==b done n times"
    ts = [timed_cmp(a, b) for _ in range(n)]
    return sum(ts) / len(ts)

def check_cmp_time():
    random.seed(123)
    # generate a random string of n characters
    n = 2 ** 8
    s = "".join([chr(random.randint(ord("a"), ord("z"))) for _ in range(n)])
    # generate a list of strings, which all differs from the original string
    # by one character, at a different position
    # only do that for the first 50 char, it's enough to get data
    diffs = [s[:i] + "A" + s[i+1:] for i in range(min(50, n))]
    timed = [(i, n_timed_cmp(10000, s, d)) for (i, d) in enumerate(diffs)]
    sorted_timed = sorted(timed, key=lambda t: t[1])
    # print the 10 fastest
    for x in sorted_timed[:10]:
        i, t = x
        print("{}\t{:3f}".format(i, t))
    print("---")
    i, t = timed[0]
    print("{}\t{:3f}".format(i, t))
    i, t = timed[1]
    print("{}\t{:3f}".format(i, t))

if __name__ == "__main__":
    check_cmp_time()
Here is the result of a run; re-running the script gives slightly different results, but nothing that supports the hypothesis.
# ran with cpython 3.8.3
6 78.051700
1 78.203200
15 78.222700
14 78.384800
11 78.396300
12 78.441800
9 78.476900
13 78.519000
8 78.586200
3 78.631500
---
0 80.691100
1 78.203200
I would've expected that the fastest comparison would be the one where the first differing character is at the beginning of the string, but that's not what I get.
Any idea what's going on?
There's a difference; you just don't see it on such small strings. Here's a small patch for your code: it uses longer strings and does about 10 checks, placing the A at evenly spaced positions in the original string, from the beginning to the end, like this:
A_______________________________________________________________
______A_________________________________________________________
____________A___________________________________________________
__________________A_____________________________________________
________________________A_______________________________________
______________________________A_________________________________
____________________________________A___________________________
__________________________________________A_____________________
________________________________________________A_______________
______________________________________________________A_________
____________________________________________________________A___
@@ -15,13 +15,13 @@ def n_timed_cmp(n, a, b):
 def check_cmp_time():
     random.seed(123)
     # generate a random string of n characters
-    n = 2 ** 8
+    n = 2 ** 16
     s = "".join([chr(random.randint(ord("a"), ord("z"))) for _ in range(n)])
     # generate a list of strings, which all differs from the original string
     # by one character, at a different position
     # only do that for the first 50 char, it's enough to get data
-    diffs = [s[:i] + "A" + s[i+1:] for i in range(min(50, n))]
+    diffs = [s[:i] + "A" + s[i+1:] for i in range(0, n, n // 10)]
     timed = [(i, n_timed_cmp(10000, s, d)) for (i, d) in enumerate(diffs)]
     sorted_timed = sorted(timed, key=lambda t: t[1])
and you'll get:
0 122.621000
1 213.465700
2 380.214100
3 460.422000
5 694.278700
4 722.010000
7 894.630300
6 1020.722100
9 1149.473000
8 1341.754500
---
0 122.621000
1 213.465700
Note that with your example, with only 2**8 characters, it's already noticeable. Apply this patch:
@@ -21,7 +21,7 @@ def check_cmp_time():
     # generate a list of strings, which all differs from the original string
     # by one character, at a different position
     # only do that for the first 50 char, it's enough to get data
-    diffs = [s[:i] + "A" + s[i+1:] for i in range(min(50, n))]
+    diffs = [s[:i] + "A" + s[i+1:] for i in [0, n - 1]]
     timed = [(i, n_timed_cmp(10000, s, d)) for (i, d) in enumerate(diffs)]
     sorted_timed = sorted(timed, key=lambda t: t[1])
to only keep the two extreme cases (first letter change vs last letter change) and you'll get:
$ python3 cmp.py
0 124.131800
1 135.566000
Numbers may vary, but most of the time test 0 is a tad faster than test 1.
Isolating more precisely which character is modified is possible, but only as long as memcmp works character by character, i.e. as long as it does not use wider integer comparisons; that typically happens on the last characters if the buffers are misaligned, or on really short strings, like the 8-character string I demo here:
from time import perf_counter_ns
from statistics import median
import random

def check_cmp_time():
    random.seed(123)
    # generate a random string of n characters
    n = 8
    s = "".join([chr(random.randint(ord("a"), ord("z"))) for _ in range(n)])
    # generate a list of strings, which all differ from the original string
    # by one character, at a different position
    diffs = [s[:i] + "A" + s[i + 1 :] for i in range(n)]
    values = {x: [] for x in range(n)}
    for _ in range(10_000_000):
        for i, diff in enumerate(diffs):
            start = perf_counter_ns()
            s == diff
            values[i].append(perf_counter_ns() - start)
    timed = [[k, median(v)] for k, v in values.items()]
    sorted_timed = sorted(timed, key=lambda t: t[1])
    # print the 10 fastest
    for x in sorted_timed[:10]:
        i, t = x
        print("{}\t{:3f}".format(i, t))
    print("---")
    i, t = timed[0]
    print("{}\t{:3f}".format(i, t))
    i, t = timed[1]
    print("{}\t{:3f}".format(i, t))

if __name__ == "__main__":
    check_cmp_time()
Which gives me:
1 221.000000
2 222.000000
3 223.000000
4 223.000000
5 223.000000
6 223.000000
7 223.000000
0 241.000000
The differences are so small that Python and perf_counter_ns may no longer be the right tools here.
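For what it's worth, here is a sketch of the same measurement using timeit, which amortizes the timer overhead over many repetitions (the string contents are illustrative):

import timeit

s = "a" * 100_000
first = "A" + s[1:]    # differs at the first character
last = s[:-1] + "A"    # differs at the last character

# each comparison runs 100_000 times; the first-character mismatch
# should come out dramatically cheaper
print(timeit.timeit("s == first", globals=globals(), number=100_000))
print(timeit.timeit("s == last", globals=globals(), number=100_000))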
See, to know why it doesn't short circuit, you'll have to do some digging. The simple answer is, of course, that it doesn't short circuit because the standard doesn't specify so. But you might think, "Why wouldn't the implementations choose to short circuit? Surely, it must be faster!". Not quite.
Let's take a look at CPython, for obvious reasons. Look at the code for the unicode_compare_eq function defined in unicodeobject.c:
static int
unicode_compare_eq(PyObject *str1, PyObject *str2)
{
    int kind;
    void *data1, *data2;
    Py_ssize_t len;
    int cmp;

    len = PyUnicode_GET_LENGTH(str1);
    if (PyUnicode_GET_LENGTH(str2) != len)
        return 0;

    kind = PyUnicode_KIND(str1);
    if (PyUnicode_KIND(str2) != kind)
        return 0;

    data1 = PyUnicode_DATA(str1);
    data2 = PyUnicode_DATA(str2);

    cmp = memcmp(data1, data2, len * kind);
    return (cmp == 0);
}
(Note: this function is only called after checking that str1 and str2 are not the same object; if they are, the comparison is simply True immediately.)
Focus on this line specifically:
cmp = memcmp(data1, data2, len * kind);
Ahh, we're back at another crossroads. Does memcmp short circuit? The C standard does not specify such a requirement, as seen in the Open Group docs and also in Section 7.24.4.1 of the C standard draft:
7.24.4.1 The memcmp function
Synopsis
#include <string.h>
int memcmp(const void *s1, const void *s2, size_t n);
Description
The memcmp function compares the first n characters of the object pointed to by s1 to
the first n characters of the object pointed to by s2.
Returns
The memcmp function returns an integer greater than, equal to, or less than zero,
accordingly as the object pointed to by s1 is greater than, equal to, or less than the object pointed to by s2.
Some C implementations choose not to short circuit. But why? Are we missing something; why would you not short circuit?
Because the comparison they use might not be as naive as a byte-by-byte check. The standard does not require the objects to be compared byte by byte. Therein lies the chance for optimization.
What glibc does is compare elements of type unsigned long int instead of singular bytes represented by unsigned char. Check out the implementation.
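To make the idea concrete, here is a rough Python illustration of word-wise comparison (a sketch of the concept only, not glibc's actual algorithm):

def memcmp_wordwise(b1: bytes, b2: bytes, word: int = 8) -> int:
    """Compare equal-length buffers one 8-byte word at a time."""
    assert len(b1) == len(b2)
    for i in range(0, len(b1), word):
        # big-endian words preserve lexicographic (memcmp) ordering
        w1 = int.from_bytes(b1[i:i + word], "big")
        w2 = int.from_bytes(b2[i:i + word], "big")
        if w1 != w2:
            return -1 if w1 < w2 else 1
    return 0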
There's a lot more going on under the hood - a discussion far outside the scope of this question; after all, this isn't even tagged as a C question ;). Though I found that this answer may be worth a look. But just know, the optimization is there, just in a much different form than the approach that may come to mind at first glance.
Edit: Fixed wrong function link
Edit: As @Konrad Rudolph has stated, glibc memcmp does apparently short circuit. I've been misinformed.
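As a final aside: for the constant-time comparison use case the question opens with, the standard library already has a dedicated function, hmac.compare_digest, which is designed not to leak where the first mismatch occurs:

import hmac

expected = b"expected-token"
candidate = b"expected-tokeX"
# runs in time independent of the position of the first difference
print(hmac.compare_digest(expected, candidate))  # False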

How do I do circular shifting (rotation) of an int in Python?

In Java right rotation is done using:
(bits >>> k) | (bits << (Integer.SIZE - k))
But how to do similar thing in Python?
I tried to do (as described here):
n = 13
d = 2
INT_BITS = 4
print(bin(n))
print(bin((n >> d)|(n << (INT_BITS - d)) & 0xFFFFFFFF))
Output:
0b1101
0b110111
But I could not interpret this as a right rotation.
Also, is it possible to perform the rotation excluding leading zeroes, for example:
rightRotation of (...0001101) = 1110 not 1000...110
It was my mistake: if you want to change INT_BITS to 4, you also need to change 0xFFFFFFFF to 0xF (one hex digit equals 4 bits):
n = 13
d = 2
INT_BITS = 4
print(bin(n))
print(bin((n >> d)|(n << (INT_BITS - d)) & 0xF))
will output:
0b1101
0b111
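For completeness, here is a general sketch of both rotations for an arbitrary width (the function names rotr and rotl are mine):

def rotr(n, d, width):
    # rotate the low `width` bits of n right by d
    d %= width
    mask = (1 << width) - 1
    return ((n >> d) | (n << (width - d))) & mask

def rotl(n, d, width):
    # rotate the low `width` bits of n left by d
    d %= width
    mask = (1 << width) - 1
    return ((n << d) | (n >> (width - d))) & mask

print(bin(rotr(0b1101, 2, 4)))  # 0b111, i.e. 0111

For the second part of the question, passing width=n.bit_length() rotates within the significant bits only: rotr(0b0001101, 1, (0b1101).bit_length()) gives 0b1110.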

Can't reproduce working C bitwise encoding function in Python

I'm reverse engineering a proprietary network protocol that generates a (static) one-time pad on launch and then uses that to encode/decode each packet it sends/receives. It uses the one-time pad in a series of complex XORs, shifts, and multiplications.
I have produced the following C code after walking through the decoding function in the program with IDA. This function encodes/decodes the data perfectly:
void encodeData(char *buf)
{
    int i;
    size_t bufLen = *(unsigned short *)buf;
    unsigned long entropy = *((unsigned long *)buf + 2);
    int xorKey = 9 * (entropy ^ ((entropy ^ 0x3D0000) >> 16));
    unsigned short baseByteTableIndex = (60205 * (xorKey ^ (xorKey >> 4)) ^ (668265261 * (xorKey ^ (xorKey >> 4)) >> 15)) & 0x7FFF;

    // Skip first 24 bytes, as that is the header
    for (i = 24; i <= (signed int)bufLen; i++)
        buf[i] ^= byteTable[((unsigned short)i + baseByteTableIndex) & 2047];
}
Now I want to try my hand at making a Peach fuzzer for this protocol. Since I'll need a custom Python fixup to do the encoding/decoding prior to doing the fuzzing, I need to port this C code to Python.
I've made the following Python function but haven't had any luck with it decoding the packets it receives.
from struct import unpack

def encodeData(buf):
    newBuf = bytearray(buf)
    bufLen = unpack('H', buf[:2])
    entropy = unpack('I', buf[2:6])
    xorKey = 9 * (entropy[0] ^ ((entropy[0] ^ 0x3D0000) >> 16))
    baseByteTableIndex = (60205 * (xorKey ^ (xorKey >> 4)) ^ (668265261 * (xorKey ^ (xorKey >> 4)) >> 15)) & 0x7FFF
    # Skip first 24 bytes, since that is header data
    for i in range(24, bufLen[0]):
        newBuf[i] = xorPad[(i + baseByteTableIndex) & 2047]
    return str(newBuf)
I've tried with and without using array() or pack()/unpack() on various variables to force them to be the right size for the bitwise operations, but I must be missing something, because I can't get the Python code to work as the C code does. Does anyone know what I'm missing?
In case it would help you to try this locally, here is the one-time pad generating function:
from array import array
from struct import pack

xorPad = b''  # assumed initial value; the original snippet doesn't show it

def buildXorPad():
    global xorPad
    xorKey = array('H', [0xACE1])
    for i in range(0, 2048):
        xorKey[0] = -(xorKey[0] & 1) & 0xB400 ^ (xorKey[0] >> 1)
        xorPad = xorPad + pack('B', xorKey[0] & 0xFF)
And here is the hex-encoded original (encoded) and decoded packet.
Original: 20000108fcf3d71d98590000010000000000000000000000a992e0ee2525a5e5
Decoded: 20000108fcf3d71d98590000010000000000000000000000ae91e1ee25252525
Solution
It turns out that my problem didn't have much to do with the difference between C and Python types, but rather some simple programming mistakes.
def encodeData(buf):
    newBuf = bytearray(buf)
    bufLen = unpack('H', buf[:2])
    entropy = unpack('I', buf[8:12])
    xorKey = 9 * (entropy[0] ^ ((entropy[0] ^ 0x3D0000) >> 16))
    baseByteTableIndex = (60205 * (xorKey ^ (xorKey >> 4)) ^ (668265261 * (xorKey ^ (xorKey >> 4)) >> 15)) & 0x7FFF
    # Skip first 24 bytes, since that is header data
    for i in range(24, bufLen[0]):
        padIndex = (i + baseByteTableIndex) & 2047
        newBuf[i] ^= unpack('B', xorPad[padIndex])[0]
    return str(newBuf)
Thanks everyone for your help!
This line of C:
unsigned long entropy = *((unsigned long *)buf + 2);
should translate to
entropy = unpack('I', buf[8:12])
because buf is cast to a pointer to unsigned long before 2 is added to the address, which advances it by the size of two unsigned longs, not by 2 bytes (assuming an unsigned long is 4 bytes in size).
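A quick sketch to sanity-check that offset arithmetic (the buffer contents are illustrative):

import struct

buf = bytes(range(16))
# (unsigned long *)buf + 2 advances the pointer by 2 * sizeof(unsigned long)
# = 8 bytes, so the C code reads the 4 bytes starting at offset 8:
entropy = struct.unpack_from('<I', buf, 8)[0]
print(hex(entropy))  # 0xb0a0908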
Also:
newBuf[i] = xorPad[(i + baseByteTableIndex) & 2047]
should be
newBuf[i] ^= xorPad[(i + baseByteTableIndex) & 2047]
to match the C, otherwise the output isn't actually based on the contents of the buffer.
Python integers don't overflow: they are automatically promoted to arbitrary precision when they exceed sys.maxint (or go below -sys.maxint - 1; this is Python 2, where int silently becomes long).
>>> sys.maxint
9223372036854775807
>>> sys.maxint + 1
9223372036854775808L
Using array and/or unpack does not seem to make a difference (as you discovered)
>>> array('H', [1])[0] + sys.maxint
9223372036854775808L
>>> unpack('H', '\x01\x00')[0] + sys.maxint
9223372036854775808L
To truncate your numbers, you'll have to simulate overflow by manually ANDing with an appropriate bitmask whenever you're increasing the size of the variable.
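A minimal sketch of that masking, assuming the C code's unsigned long is 32 bits wide (the entropy value below is just an example):

MASK32 = 0xFFFFFFFF

def u32(x):
    # truncate x to 32 bits, mimicking C unsigned overflow
    return x & MASK32

entropy = 0x12345678  # example value
xorKey = u32(9 * (entropy ^ ((entropy ^ 0x3D0000) >> 16)))
print(hex(xorKey))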

Hex string to signed int in Python

How do I convert a hex string to a signed int in Python 3?
The best I can come up with is
h = '9DA92DAB'
b = bytes(h, 'utf-8')
ba = binascii.a2b_hex(b)
print(int.from_bytes(ba, byteorder='big', signed=True))
Is there a simpler way? Unsigned is so much easier: int(h, 16)
BTW, the origin of the question is the iTunes persistent ID - the music library XML version vs. the iTunes hex version.
In n-bit two's complement, bits have value:
bit 0 = 2^0
bit 1 = 2^1
...
bit n-2 = 2^(n-2)
bit n-1 = -2^(n-1)
But bit n-1 has value 2^(n-1) when unsigned, so the unsigned reading is 2^n too high. Subtract 2^n if bit n-1 is set:
def twos_complement(hexstr, bits):
    value = int(hexstr, 16)
    if value & (1 << (bits - 1)):
        value -= 1 << bits
    return value

print(twos_complement('FFFE', 16))
print(twos_complement('7FFF', 16))
print(twos_complement('7F', 8))
print(twos_complement('FF', 8))
Output:
-2
32767
127
-1
For Python 3 (with comments' help):
import struct

h = '9DA92DAB'
struct.unpack('>i', bytes.fromhex(h))
For Python 2:
h = '9DA92DAB'
struct.unpack('>i', h.decode('hex'))
or if it is little endian:
h = '9DA92DAB'
struct.unpack('<i', h.decode('hex'))
Here's a general function you can use for hex of any size:
import math

# hex string to signed integer
def htosi(val):
    uintval = int(val, 16)
    bits = 4 * (len(val) - 2)  # assumes the '0x' prefix produced by hex()
    if uintval >= math.pow(2, bits - 1):
        uintval = int(0 - (math.pow(2, bits) - uintval))
    return uintval
And to use it:
h = str(hex(-5))
h2 = str(hex(-13589))
x = htosi(h)
x2 = htosi(h2)
This works for 16-bit signed ints; you can extend it for 32-bit ints. It uses the basic definition of two's-complement signed numbers. Also note that XORing each bit with 1 is the same as a binary NOT (inverting the bits).
# convert to unsigned
x = int('ffbf', 16)  # example (-65)
# check sign bit
if (x & 0x8000) == 0x8000:
    # if set, invert and add one to get the negative value,
    # then add the negative sign
    x = -((x ^ 0xffff) + 1)
It's a very late answer, but here's a function to do the above. It extends to whatever length you provide. Credit for portions of this to another SO answer (I lost the link, so please provide it if you find it).
def hex_to_signed(source):
    """Convert a hex string to a signed integer value.

    This assumes that source is the proper length, and the sign bit
    is the first bit in the first byte of the correct length.

    hex_to_signed("F") should return -1.
    hex_to_signed("0F") should return 15.
    """
    if not isinstance(source, str):
        raise ValueError("string type required")
    if 0 == len(source):
        raise ValueError("string is empty")
    sign_bit_mask = 1 << (len(source) * 4 - 1)
    other_bits_mask = sign_bit_mask - 1
    value = int(source, 16)
    return -(value & sign_bit_mask) | (value & other_bits_mask)
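For example (the last value assumes the usual 32-bit two's-complement reading of the h from the question):

>>> hex_to_signed("F")
-1
>>> hex_to_signed("0F")
15
>>> hex_to_signed("9DA92DAB")
-1649857109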
