Z3python XOR sum?

I'm currently trying to solve an equation with z3python, and I've run into a situation I can't deal with.
I need to XOR certain BitVecs with specific non-ASCII char values, then sum the results to check a checksum.
Here is an example:
pbInput = [BitVec("{}".format(i), 8) for i in range(KEY_LEN)]
password = "\xff\xff\xde\x8e\xae"
solver.add(Xor(pbInput[0], password[0]) + Xor(pbInput[3], password[3]) == 300)
It results in a z3 type exception:
z3.z3types.Z3Exception: Value cannot be converted into a Z3 Boolean value.
I found this post and tried to apply a function to my password string, adding this line to my script:
password = Function(password, StringSort(), IntSort(), BitVecSort(8))
But of course it fails as the string isn't an ASCII string.
I don't care about it being a string; I also tried to just do Xor(pbInput[x] ^ 0xff), but this doesn't work either. I could not find any documentation on this particular situation.
EDIT:
Here is the full traceback.
Traceback (most recent call last):
File "solve.py", line 18, in <module>
(Xor(pbInput[0], password[0])
File "/usr/local/lib/python2.7/dist-packages/z3/z3.py", line 1555, in Xor
a = s.cast(a)
File "/usr/local/lib/python2.7/dist-packages/z3/z3.py", line 1310, in cast
_z3_assert(self.eq(val.sort()), "Value cannot be converted into a Z3 Boolean value")
File "/usr/local/lib/python2.7/dist-packages/z3/z3.py", line 91, in _z3_assert
raise Z3Exception(msg)
z3.z3types.Z3Exception: Value cannot be converted into a Z3 Boolean value
Thanks in advance if you have any idea about how I could do this operation!

There are two problems in your code:
Xor is for Bool values only; for bit-vectors, simply use ^.
Use the function ord to convert characters to integers before XORing them.
You didn't give your full program (sharing it is always helpful!), but here's how you'd write that section in z3py as a full program:
from z3 import *
solver = Solver()
KEY_LEN = 10
pbInput = [BitVec("c_{}".format(i), 8) for i in range(KEY_LEN)]
password = "\xff\xff\xde\x8e\xae"
solver.add((pbInput[0] ^ ord(password[0])) + (pbInput[3] ^ ord(password[3])) == 300)
print solver.check()
print solver.model()
This prints:
sat
[c_3 = 0, c_0 = 97]
(I gave the variables better names to make them easier to distinguish.) So, it's telling us the solution is:
>>> (0xff ^ 97) + (0x8e ^ 0)
300
Which is indeed what you asked for.
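The same pattern extends to a checksum over the whole key. Here's a minimal sketch under the assumption that the checksum is the sum of all KEY_LEN XORed bytes (the target 300 is just carried over from the example above; ZeroExt widens each 8-bit term to 16 bits so the sum cannot wrap around modulo 256):

from z3 import *

solver = Solver()
KEY_LEN = 5
pbInput = [BitVec("c_{}".format(i), 8) for i in range(KEY_LEN)]
password = "\xff\xff\xde\x8e\xae"

# Widen each XORed byte to 16 bits, then sum; Python's sum() folds
# the bit-vector terms with +, starting from 0, which z3 coerces.
terms = [ZeroExt(8, pbInput[i] ^ ord(password[i])) for i in range(KEY_LEN)]
solver.add(sum(terms) == 300)

print(solver.check())
print(solver.model())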


Add binary operator to z3

I'm trying to parse and translate a string to its equivalent z3 form.
import z3
expr = 'x + y = 10'
p = my_parse_expr_to_z3(expr) # results in: ([x, '+', y], '==', [10])
p = my_flatten(p) # after flatten: [x, '+', y, '==', 10]
Type-checking of parsed string:
for e in p:
    print(type(e), e)
# -->
<class 'z3.z3.ArithRef'> x
<class 'str'> +
<class 'z3.z3.ArithRef'> y
<class 'str'> ==
<class 'int'> 10
When I now try:
s = z3.Solver()
s.add(*p)
I get:
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "...\venv\lib\site-packages\z3\z3.py", line 6938, in add
self.assert_exprs(*args)
File "..\venv\lib\site-packages\z3\z3.py", line 6926, in assert_exprs
arg = s.cast(arg)
File "..\venv\lib\site-packages\z3\z3.py", line 1505, in cast
_z3_assert(self.eq(val.sort()), "Value cannot be converted into a Z3 Boolean value")
File "..\venv\lib\site-packages\z3\z3.py", line 112, in _z3_assert
raise Z3Exception(msg)
z3.z3types.Z3Exception: Value cannot be converted into a Z3 Boolean value
It seems the equals and plus signs have the wrong type or usage. How can I translate this correctly?
Where's the definition of my_parse_expr_to_z3 coming from? It's definitely not something that comes with z3 itself, so you must be getting it from some third party, or perhaps you wrote it yourself. Without knowing how it's defined, it's impossible for anyone on Stack Overflow to give you any guidance.
In any case, as you suspected, its results are not something you can feed back to z3. It fails precisely because what you add to the solver must be constraints, i.e., expressions of type Bool in z3. Clearly, none of those constituents have that type.
So, long story short, this my_parse_expr_to_z3 doesn't seem to be designed to do what you intended. Contact its developer for further details on what the intended use case is.
If you're trying to load assertions from a string to z3, then you can do that using the so called SMTLib format. Something like:
from z3 import *
expr = """
(declare-const x Int)
(declare-const y Int)
(assert (= (+ x y) 10))
"""
p = parse_smt2_string(expr)
s = Solver()
s.add(p)
print(s.check())
print(s.model())
This prints:
sat
[y = 0, x = 10]
You can find more about SMTLib syntax at https://smtlib.cs.uiowa.edu/papers/smt-lib-reference-v2.6-r2021-05-12.pdf
Note that handling any other syntax (like your proposed 'x + y = 10') is going to require knowledge of the variables in the string (x and y in this case, but they can of course be arbitrary), of the symbols (+ and = in your case, but again there can be any number of different symbols), and of their precise meanings. Without knowing your exact needs it's hard to opine, but using anything other than the existing support for SMTLib syntax will require a non-trivial amount of work.
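That said, if your parser already produces a flat token list like [x, '+', y, '==', 10], a small fold over an operator table can turn it into a z3 expression. This is only a sketch for that exact shape (OPS and tokens_to_z3 are hypothetical names, and there is no operator-precedence handling at all):

import z3
import operator

# Hypothetical operator table; extend with whatever symbols your parser emits.
OPS = {'+': operator.add, '-': operator.sub, '==': operator.eq}

def tokens_to_z3(tokens):
    # Fold the flat list left to right: [x, '+', y, '==', 10]
    # becomes ((x + y) == 10). No precedence, so only toy inputs work.
    expr = tokens[0]
    for i in range(1, len(tokens), 2):
        expr = OPS[tokens[i]](expr, tokens[i + 1])
    return expr

x, y = z3.Ints('x y')
s = z3.Solver()
s.add(tokens_to_z3([x, '+', y, '==', 10]))
print(s.check())
print(s.model())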

Why do divisions not work in jsonpath-ng?

Jsonpath-ng offers basic arithmetic as in:
from jsonpath_ng import jsonpath
from jsonpath_ng.ext import parse
jsonpath_expr = parse('$.foo * 2')
target = {'foo': 2}
result = jsonpath_expr.find(target)
result = [match.value for match in result]
print(result)
result: [4]
However, if I change the expression to $.foo / 2, then I get a Parse Error:
Traceback (most recent call last):
File "test.py", line 4, in <module>
jsonpath_expr = parse('$.foo / 2')
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\ext\parser.py", line 172, in parse
return ExtentedJsonPathParser(debug=debug).parse(path)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 32, in parse
return self.parse_token_stream(lexer.tokenize(string))
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 55, in parse_token_stream
return new_parser.parse(lexer = IteratorToTokenStream(token_iterator))
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\ply\yacc.py", line 333, in parse
return self.parseopt_notrack(input, lexer, debug, tracking, tokenfunc)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\ply\yacc.py", line 1201, in parseopt_notrack
tok = call_errorfunc(self.errorfunc, errtoken, self)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\ply\yacc.py", line 192, in call_errorfunc
r = errorfunc(token)
File "C:\Users\micha\AppData\Roaming\Python\Python38\site-packages\jsonpath_ng\parser.py", line 69, in p_error
raise Exception('Parse error at %s:%s near token %s (%s)' % (t.lineno, t.col, t.value, t.type))
Exception: Parse error at 1:6 near token / (SORT_DIRECTION)
I can sometimes work around this issue by multiplying by the inverse value, so I would do $.foo * 0.5 to get the result [1.0]. But this doesn't work if the two sides of the expression are numeric values of different types (int or float). So 2 * 0.5 and 0.5 * 2 will result in a Parse error, but 2.0 * 0.5 will not.
How do I get divisions to work? And why can I not multiply a float by an integer?
Those are both grammar bugs.
Extended JSON paths are allowed to be suffixed with a bracketed expression containing a list of "sorts"; each sort starts with / or \. To make that work, the extended lexer recognises those two symbols as the token SORT_DIRECTION, which takes precedence over the recognition of / as an arithmetic operator. Consequently the use of / as an arithmetic operator is not allowed by the parser. (In fact, the problem goes deeper than that, but that's the essence.)
For some reason, the grammar author chose to separate NUMBER (actually, integer) and FLOAT in arithmetic expressions, which means that they had to enumerate the possible combinations. What they chose was:
jsonpath : NUMBER operator NUMBER
| FLOAT operator FLOAT
| ID operator ID
| NUMBER operator jsonpath
| FLOAT operator jsonpath
| jsonpath operator NUMBER
| jsonpath operator FLOAT
| jsonpath operator jsonpath
There are other problems with this grammar, but the essence here is that it permits NUMBER operator NUMBER and FLOAT operator FLOAT but does not permit NUMBER operator FLOAT or FLOAT operator NUMBER, which is what you observe. However, path expressions can work with either NUMBER or FLOAT.
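Until those grammar bugs are fixed upstream, one practical workaround is to keep the division (or the mixed-type arithmetic) out of the JSONPath expression and do it in Python instead. A minimal sketch:

from jsonpath_ng.ext import parse

target = {'foo': 2}

# Extract the raw value with a plain path, then divide in Python,
# sidestepping the lexer's treatment of '/' as SORT_DIRECTION.
jsonpath_expr = parse('$.foo')
result = [match.value / 2 for match in jsonpath_expr.find(target)]
print(result)  # [1.0]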

Convert Python's binascii.crc_hqx() back to ascii

I'm using the standard Python3 lib binascii, and specifically the crc_hqx() function
binascii.crc_hqx(data, value)
Compute a 16-bit CRC value of data, starting with value as the initial CRC, and return the result. This uses the CRC-CCITT polynomial x^16 + x^12 + x^5 + 1, often represented as 0x1021. This CRC is used in the binhex4 format.
I'm able to convert to CRC with this code:
import binascii
t = 'abcd'
z = binascii.crc_hqx(t.encode('ascii'), 0)
print(t,z)
which, as expected, prints the line
abcd 43062
But how do I convert back to ASCII?
I've tried variations with the a2b_hqx() function
binascii.a2b_hqx(string)
Convert binhex4 formatted ASCII data to binary, without doing RLE-decompression. The string should contain a complete number of binary bytes, or (in case of the last portion of the binhex4 data) have the remaining bits zero.
The simplest version would be:
y = binascii.a2b_hqx(str(z))
But I've also tried variations with bytearray() and str.encode(), etc.
For this code:
import binascii
t = 'abcd'
z = binascii.crc_hqx(t.encode('ascii'), 0)
print(t,z)
y = binascii.a2b_hqx(str(z))
The traceback:
abcd 43062
Traceback (most recent call last):
File "test.py", line 5, in <module>
y = binascii.a2b_hqx(str(z))
binascii.Incomplete: String has incomplete number of bytes
And with this code:
y = binascii.a2b_hqx(bytearray(z))
This traceback:
binascii.Error: Illegal char
What crc_hqx() generates is a checksum: a 16-bit digest of the data, not an encoding of it. Converting it back to the original ASCII is not possible.
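To see why, note that the CRC is only 16 bits wide, so countless different inputs map to the same value. A purely illustrative brute-force sketch that finds another four-letter string with the same CRC as 'abcd' (collisions are guaranteed by the pigeonhole principle, since 26**4 inputs map into only 2**16 CRC values):

import binascii
from itertools import product
from string import ascii_lowercase

target = binascii.crc_hqx(b'abcd', 0)

# Scan all four-letter lowercase strings for another one with the same CRC.
for chars in product(ascii_lowercase, repeat=4):
    candidate = ''.join(chars).encode('ascii')
    if candidate != b'abcd' and binascii.crc_hqx(candidate, 0) == target:
        print(candidate, 'also has CRC', target)
        break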

declare a variable as *not* an integer in sage/maxima solve

I am trying to solve symbolically a simple equation for x:
solve(x^K + d == R, x)
I am declaring these variables and assumptions:
var('K, d, R')
assume(K>0)
assume(K, 'real')
assume(R>0)
assume(R<1)
assume(d<R)
assumptions()
[K > 0, K is real, R > 0, R < 1, d < R]
Yet when I run the solve, I obtain the following error:
Error in lines 1-1
Traceback (most recent call last):
File
"/projects/sage/sage-7.3/local/lib/python2.7/site-packages/smc_sagews/sage_server.py",
line 957, in execute
exec compile(block+'\n', '', 'single') in namespace, locals
...
File "/projects/sage/sage-7.3/local/lib/python2.7/site-packages/sage/interfaces/interface.py",
line 671, in __init__
raise TypeError(x)
TypeError: Computation failed since Maxima requested additional constraints; using the 'assume' command before evaluation may help (example of legal syntax is 'assume(K>0)', see assume? for more details)
Is K an integer?
Apparently, Maxima is asking whether K is an integer. But I explicitly declared it 'real'!
How can I spell out to maxima that it should not assume that K is an integer?
I am simply expecting (R-d)^(1/K) or exp(log(R-d)/K) as answer.
The assumption framework in both Sage and Maxima is fairly weak, though in this case it doesn't matter, since integers are real numbers, right?
However, you might want to try assume(K, 'noninteger'), because apparently Maxima does support this particular assumption (I had not seen it before). I can't try this right now, unfortunately. Good luck!
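For reference, the full sequence with that extra assumption would look like this (a sketch following the suggestion above, untested just as the answer hedges; the 'noninteger' declaration is passed through to Maxima):

# Sage sketch: add 'noninteger' so Maxima stops asking whether
# K is an integer, then solve for x.
var('x, K, d, R')
assume(K > 0)
assume(K, 'real')
assume(K, 'noninteger')
assume(R > 0)
assume(R < 1)
assume(d < R)
solve(x^K + d == R, x)  # hoped-for answer: x == (R - d)^(1/K)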

How to treat numbers in list as integers in function

I'm trying to write a function that finds both the signed and unsigned max/min values given a bit length. But every time I execute the function I get an error. Here is my code:
#function to find max and min number of both signed and unsigned bit values
def minmax(n):
    for numbers in n:
        unsignedmax = (2**n)-1
        unsignedmin = 0
        signedmax = 2**(n-1)-1
        signedmin = -2**(n-1)
        print "number ", "unsigned ", "signed"
#Main
bitlist = [2, 3, 8, 10, 16, 20, 32]
minmax(bitlist)
The error is:
Traceback (most recent call last):
File "C:/Users/Issac94/Documents/Python Files/sanchez-hw07b.py", line 23, in <module>
minmax(bitlist)
File "C:/Users/Issac94/Documents/Python Files/sanchez-hw07b.py", line 6, in minmax
unsignedmax = (2**n)-1
TypeError: unsupported operand type(s) for ** or pow(): 'int' and 'list'
I wasn't done writing it, but I ran it to make sure there was no error in the logic part, and I get that one error when trying to find the values. Is there maybe a way to insert int() or something similar to have the number be treated as type integer and not type list, which I'm assuming is happening?
Change the method definition and first line to this:
def minmax(numbers):
    for n in numbers:
That is, in these two lines, replace 'n' with 'numbers' everywhere it appears and replace 'numbers' with 'n' where it appears.
As you have it written, the variable "numbers" holds the item in the list you want to process and the variable "n" holds the list. But the rest of your code is written with the reverse assumption.
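Putting it together, the corrected function would look like this (the final print line is a guess at the intended output, since the original wasn't finished):

# Function to print max and min values, both unsigned and signed,
# for each bit length in the list.
def minmax(numbers):
    print "number ", "unsigned ", "signed"
    for n in numbers:
        unsignedmax = (2**n) - 1
        unsignedmin = 0
        signedmax = 2**(n-1) - 1
        signedmin = -2**(n-1)
        print n, (unsignedmin, unsignedmax), (signedmin, signedmax)

#Main
bitlist = [2, 3, 8, 10, 16, 20, 32]
minmax(bitlist)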
