Z3: Find if variable is multiple of some other number - python

I would like to create a constraint that makes sure that a Real value is quantized to some tick value.
TICK = 0.5
x = Real('x')
solve(x % TICK == 0)
Unfortunately, this does not work with Real numbers (it works with Int and FP).
Another solution that I thought of was to create a set of valid numbers and check whether the number is part of the set; however, the set would need to be really big.
Is there any other solution?

As Christoph mentioned, computing the modulus of a real number isn't really meaningful. But your question is still valid: you're asking whether x is an integer multiple of TICK. You can do this as follows:
from z3 import *
TICK = 0.5
x = Real('x')
k = Int('k')
solve(x == 1, ToReal(k) * TICK == x)
solve(x == 1.2, ToReal(k) * TICK == x)
This prints:
[k = 2, x = 1]
no solution
Note that unless x is a constant, this'll lead to mixed integer-real arithmetic, and it might give rise to non-linear constraints. This can make it hard for the solver to answer your query, i.e., it can return unknown or take too long to respond. It all depends on what other constraints you have on x.
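One extra wrinkle worth noting (my addition, not from the original answer): TICK = 0.5 happens to be exactly representable as a float, but a tick like 0.1 is not. In that case it is safer to build the tick as an exact rational with z3's Q. A minimal sketch:
from z3 import *

x = Real('x')
k = Int('k')
TICK = Q(1, 10)  # exact rational 1/10; the float 0.1 would be slightly off

solve(x == Q(3, 10), ToReal(k) * TICK == x)  # expect k = 3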

Fast computation for changing the leftmost different bit

Given the two number in 8-bit:
x = 0b11110111
y = 0b11001010
What I want to do is compare x and y and change only the leftmost bit of x that differs from y. For example:
z = 0b11010111 (The leftmost bit where x and y differ is the third bit, so that bit of x is changed to match y and all the others stay the same.)
And my code is:
flag = True
for i in range(8):
    if flag and x[i] != y[i]:  # Change only the left-most different bit.
        flag = False
    else:
        y[i] = x[i]  # Otherwise, remain the same.
This works fine.
But the problem is that I have many pairs, like:
for (x, y) in nums:
    flag = True
    for i in range(8):
        if flag and x[i] != y[i]:  # Change only the left-most different bit.
            flag = False
        else:
            y[i] = x[i]  # Otherwise, remain the same.
When nums is large, this process becomes really slow.
So how can I improve it?
BTW, this is part of a deep learning project, so it can run on a GPU, but I don't know whether this operation can be parallelized on a GPU or not.
The function you're after:
from math import floor, log2

def my_fun(x, y):
    return x ^ (2 ** floor(log2(x ^ y)))

z = my_fun(0b11110111, 0b11001010)
print(f'{z:b}')
Output:
11010111
The function does the following:
compute the XOR of x and y; its most significant set bit is the most significant bit where x and y differ
take floor(log2(...)) of that value and raise 2 to that power, giving a number with only that one bit set
return the XOR of x with that number, which flips the relevant bit
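Since you have many pairs and mention GPUs: the same trick vectorizes. Here is a NumPy sketch of my own (not from the answer above; flip_leftmost_diff, xs and ys are hypothetical names). It isolates the highest set bit with shifts instead of log2, which also survives pairs where x == y:
import numpy as np

def flip_leftmost_diff(xs, ys):
    # xs, ys: uint8 arrays of equal shape
    d = xs ^ ys
    # smear the highest set bit into every lower position (8-bit version)
    d |= d >> 1
    d |= d >> 2
    d |= d >> 4
    highest = d ^ (d >> 1)  # only the most significant set bit remains (0 if xs == ys)
    return xs ^ highest

xs = np.array([0b11110111], dtype=np.uint8)
ys = np.array([0b11001010], dtype=np.uint8)
z = int(flip_leftmost_diff(xs, ys)[0])
print(f'{z:b}')  # 11010111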

Minimize function with scipy

I'm trying to do one task, but I just can't figure it out.
This is my function:
1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
I want that sum to be as close to 1 as possible.
And these are my input variables (x,y,z):
test = np.array([1.42, 5.29, 7.75])
So n is the only decision variable.
To summarize:
I have a situation like this right now:
1/(1.42**(1/1)) + 1/(5.29**(1/1)) + 1/(7.75**(1/1)) = 1.02229
And I want to get the following:
1/(1.42^(1/0.972782944446024)) + 1/(5.29^(1/0.972782944446024)) + 1/(7.75^(1/0.972782944446024)) = 0.999625
So far I have roughly nothing, and any help is welcome.
import numpy as np
from scipy.optimize import minimize
def objectiv(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))
test = np.array([1.42, 5.29, 7.75])
print(objectiv(test))
OUTPUT: 1.0222935270013889
How to properly define a constraint?
def conconstraint(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
And it is not at all clear to me what to do with n.
EDIT
I managed to do the following:
def objective(n, *args):
    x = odds[0]
    y = odds[1]
    z = odds[2]
    return abs((1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))) - 1)

odds = [1.42, 5.29, 7.75]
solve = minimize(objective, 1.0, args=(odds))
And my output:
fun: -0.9999999931706812
x: array([0.01864994])
And really when put in the formula:
(1/(1.42^(1/0.01864994)) + 1/(5.29^(1/0.01864994)) + 1/(7.75^(1/0.01864994))) -1 = -0.999999993171
Unfortunately I need a positive 1 and I have no idea what to change.
We want to find the n that gets our result, for fixed x, y, and z, as close as possible to 1. minimize looks for the lowest possible value of the objective, with no lower bound: -3 is better than -2, and so on.
So what we actually want is called least-squares optimization. Similar idea, though. The documentation for scipy.optimize.least_squares is a bit hard to understand, so I'll try to clarify:
All these optimization functions have a common design where you pass in a callable that takes at least one parameter, the one you want to optimize for (in your case, n). Then you can have it take more parameters, whose values will be fixed according to what you pass in.
In your case, you want to be able to solve the optimization problem for different values of x, y and z. So you make your callback accept n, x, y, and z, and pass the x, y, and z values to use when you call scipy.optimize.least_squares. You pass these using the args keyword argument (notice that it is not *args). We can also supply an initial guess of 1 for the n value, which the algorithm will refine.
The rest is customization that is not relevant for our purposes.
So, first let us make the callback:
def objective(n, x, y, z):
    # the residual: how far the sum of reciprocals is from the target 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
Now our call looks like:
best_n = least_squares(objective, 1.0, args=np.array([1.42, 5.29, 7.75]))
(You can call minimize the same way, and it will instead look for an n value that makes the objective function return as low a value as possible. With the residual above: as the guess for n trends towards zero, the exponents 1/n grow without bound, each term 1/(x**(1/n)) goes to zero, and the returned value approaches -1; values below -1 are not possible here. minimize will stop when it gets close to that, according to the default values for ftol, xtol and gtol. Understanding this part properly is beyond the scope of this answer; please try math.stackexchange.com.)
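Putting the pieces together, here is a complete runnable sketch (my assembly of the code above; I pass args as a plain tuple, which is what the documentation specifies, and the - 1 in the residual is what makes least_squares drive the sum towards 1):
from scipy.optimize import least_squares

def objective(n, x, y, z):
    # residual: how far the sum of reciprocals is from the target 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1

result = least_squares(objective, 1.0, args=(1.42, 5.29, 7.75))
print(result.x)    # best n, around 0.97 for these inputs
print(result.fun)  # residual at the solution, close to 0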

How do I evaluate this equation in z3 for python

I'm trying to evaluate a simple absolute value inequality like this using z3.
x = Int("x")
y = Int("y")
def abs(x):
return If(x >= 0,x,-x)
solve(abs( x / 1000 - y / 1000 ) < .01, y==1000)
The output is no solution every time. I know this is mathematically possible, I just can't figure out how z3 does stuff like this.
This is a common gotcha in z3py bindings. Constants are "promoted" to fit into the right type, following the usual Python methodology. But more often than not, it ends up doing the wrong conversion, and you end up with a very confusing situation.
Since your variables x and y are Int values, the comparison against .01 forces that constant to be 0 to fit the types, and that's definitely not what you wanted to say. The general advice is simply not to mix-and-match arithmetic like this: Cast this as a problem over real-values, not integers. (In general SMTLib doesn't allow mixing-and-matching types in numbers, though z3py does. I think that's misguided, but that's a different discussion.)
To address your issue, the simplest thing to do would be to wrap 0.01 into a real-constant, making the z3py bindings interpret it correctly. So, you'll have:
from z3 import *

x = Int("x")
y = Int("y")

def abs(x):
    return If(x >= 0, x, -x)

solve(abs(x / 1000 - y / 1000) < RealVal(.01), y == 1000)
Note the use of RealVal. This returns:
[x = 1000, y = 1000]
I guess this is what you are after.
But I'd, in general, recommend against using conversions like this. Instead, be very explicit yourself, and cast this as a problem, for instance, over Real values. Note that your division / 1000 is also interpreted in this equation as an integer division, i.e., one that produces an integer result. So, I'm guessing this isn't really what you want either. But I hope this gets you started on the right path.
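For instance, here is a sketch of the same constraints recast entirely over Real values (my illustration of that advice, not code from the answer):
from z3 import *

x = Real("x")
y = Real("y")

def abs_r(v):
    return If(v >= 0, v, -v)

# over Reals, / 1000 is true division, and the comparison constant
# can be given exactly as a string
solve(abs_r(x / 1000 - y / 1000) < RealVal("0.01"), y == 1000)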
Int('a') < 0.01 is turned (rightly or wrongly) into Int('a') < 0 and clearly the absolute value can never be smaller than 0.
I believe you want Int('a') <= 0 here.
Examples:

solve(Int('a') < 0.01, Int('a') > -1)
no solution

solve(Int('a') <= 0.01, Int('a') > -1)
[a = 0]

Printing the expression shows the conversion:

print(Int('a') < 0.01)
a < 0

Different implementations of Newton's method in floating point arithmetic

I'm solving a one-dimensional non-linear equation with Newton's method. I'm trying to figure out why one of the implementations of Newton's method is converging exactly within floating point precision, whereas another is not.
The following algorithm does not converge:

x_{n+1} = x_n - f(x_n) / f'(x_n)

whereas the following does converge:

x_{n+1} = (x_n * f'(x_n) - f(x_n)) / f'(x_n)

Note that the two updates are algebraically identical; they differ only in how they are evaluated in floating point.
You may assume that the functions f and f' are smooth and well behaved. The best explanation I was able to come up with is that this is somehow related to what's called iterative improvement (Golub and Van Loan, 1989). Any further insight would be greatly appreciated!
Here is a simple python example illustrating the issue
# Python
def f(x):
    return x*x - 2.

def fp(x):
    return 2.*x

xprev = 0.

# converges
x = 1.  # guess
while x != xprev:
    xprev = x
    x = (x*fp(x) - f(x))/fp(x)
print(x)

# does not converge
x = 1.  # guess
while x != xprev:
    xprev = x
    dx = -f(x)/fp(x)
    x = x + dx
print(x)
Note: I'm aware of how floating point numbers work (please don't post your favourite link to a website telling me to never compare two floating point numbers). Also, I'm not looking for a solution to a problem but for an explanation as to why one of the algorithms converges but not the other.
Update:
As @uhoh pointed out, there are many cases where the second method does not converge. However, I still don't know why the second method converges so much more easily in my real-world scenario than the first. All the test cases have very simple functions f, whereas the real-world f has several hundred lines of code (which is why I don't want to post it). So maybe the complexity of f is important. If you have any additional insight into this, let me know!
Neither method is perfect:
One situation in which both methods will tend to fail is if the root is about exactly midway between two consecutive floating-point numbers f1 and f2. Then both methods, having arrived to f1, will try to compute that intermediate value and have a good chance of turning up f2, and vice versa.
              / f(x)
             /
            /
           /
          /
  f1     /
---+----------+------> x
        /     f2
       /
      /
     /
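You can inspect a pair of consecutive floats straddling the root directly (my illustration; math.nextafter requires Python 3.9+):
import math

f1 = 1.4142135623730949        # 0x1.6a09e667f3bccp+0
f2 = math.nextafter(f1, 2.0)   # the very next float above it
print(f2 == math.sqrt(2))      # True: the true root sqrt(2) lies between f1 and f2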
"I'm aware of how floating point numbers work...". Perhaps the workings of floating-point arithmetic are more complicated than imagined.
This is a classic example of cycling of iterates using Newton's method. The comparison of a difference to an epsilon is "mathematical thinking" and can burn you when using floating-point. In your example, you visit several floating-point values for x, and then you are trapped in a cycle between two numbers. The "floating-point thinking" is better formulated as the following (sorry, my preferred language is C++)
std::set<double> visited;
xprev = 0.0;
x = 1.0;
while (x != prev)
{
xprev = x;
dx = -F(x)/DF(x);
x = x + dx;
if (visited.find(x) != visited.end())
{
break; // found a cycle
}
visited.insert(x);
}
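Since the rest of this page is Python, here is a direct translation of that cycle-detection idea (my sketch, not part of the original answer):
def newton_with_cycle_check(f, fp, x0=1.0):
    visited = set()
    xprev, x = None, x0
    while x != xprev:
        xprev = x
        x = x - f(x) / fp(x)
        if x in visited:
            break  # we are trapped in a cycle of iterates
        visited.add(x)
    return x

print(newton_with_cycle_check(lambda x: x*x - 2., lambda x: 2.*x))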
I'm trying to figure out why one of the implementations of Newton's method is converging exactly within floating point precision, whereas another is not.
Technically, it doesn't converge to the correct value. Try printing more digits, or using float.hex.
The first one gives
>>> print "%.16f" % x
1.4142135623730949
>>> float.hex(x)
'0x1.6a09e667f3bccp+0'
whereas the correctly rounded value is the next floating point value:
>>> print "%.16f" % math.sqrt(2)
1.4142135623730951
>>> float.hex(math.sqrt(2))
'0x1.6a09e667f3bcdp+0'
The second algorithm is actually alternating between the two values, so doesn't converge.
The problem is due to catastrophic cancellation in f(x): as x*x will be very close to 2, when you subtract 2, the result will be dominated by the rounding error incurred in computing x*x.
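A quick way to see the cancellation (my illustration, not from the answer):
x = 1.4142135623730949  # the value the first loop settles on
# x*x agrees with 2 in almost every bit, so the subtraction leaves
# only the rounding error committed when computing x*x
print(x*x - 2.)         # about -4.4e-16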
I think trying to force exact equality (instead of err < small) is going to fail frequently. In your example, for 100,000 random numbers between 1 and 10 (instead of your 2.0), the first method fails about 1/3 of the time and the second method about 1/6 of the time. I'll bet there's a way to predict that!
This takes ~30 seconds to run, and the results are cute!:
import numpy as np
import matplotlib.pyplot as plt

def f(x, a):
    return x*x - a

def fp(x):
    return 2.*x

def A(a):
    xprev = 0.
    x = 1.
    n = 0
    while x != xprev:
        xprev = x
        x = (x*fp(x) - f(x, a)) / fp(x)
        n += 1
        if n > 100:
            return n, x
    return n, x

def B(a):
    xprev = 0.
    x = 1.
    n = 0
    while x != xprev:
        xprev = x
        dx = -f(x, a) / fp(x)
        x = x + dx
        n += 1
        if n > 100:
            return n, x
    return n, x

n = 100000
aa = 1. + 9. * np.random.random(n)
data_A = np.zeros((2, n))
data_B = np.zeros((2, n))
for i, a in enumerate(aa):
    data_A[:, i] = A(a)
    data_B[:, i] = B(a)

bins = np.linspace(0, 110, 12)
hist_A = np.histogram(data_A[0], bins=bins)  # histogram the iteration counts only
hist_B = np.histogram(data_B[0], bins=bins)

print("A: n<10: ", hist_A[0][0], " n>=100: ", hist_A[0][-1])
print("B: n<10: ", hist_B[0][0], " n>=100: ", hist_B[0][-1])

plt.figure()
plt.subplot(1, 2, 1)
plt.scatter(aa, data_A[0])
plt.subplot(1, 2, 2)
plt.scatter(aa, data_B[0])
plt.show()

Numerical accuracy loss in Python

I wish to calculate the standard error of a series of numbers. Suppose the numbers are x[i] where i = 1 ... N. To do this
I set
averageX = 0.0
averageXSquared = 0.0
I then loop over all i=1,...N and for each I calculate
averageX += x[i]
averageXSquared += x[i]**2
I then divide by N
averageX = averageX / N
averageXSquared = averageXSquared/N
I then take the square root of the difference
stdX = math.sqrt(averageXSquared - averageX * averageX)
Mathematically, the argument here is always >= 0.
However, if I set all x[i] = 0.07 (for example), then I get a math domain error because the argument of the root function is negative. There seems to be some loss of precision.
The argument is of the order of 10e-15.
This does not look encouraging. I now have to check whether the result is negative before taking the root.
Or have I done something wrong?
This is not a Python problem, but a problem with finite precision in general. If you set all numbers to the same value, the standard error is mathematically 0, but not for a computer. A practical way to handle this is to clamp tiny negative values to 0:
import math

x = [0.7, 0.7, 0.7]
average = sum(x) / len(x)
sqav = sum(y**2 for y in x) / len(x)
stderr = math.sqrt(max(sqav - average**2, 0))
The correct way, of course, is to never subtract nearly equal large numbers in the first place. Make another pass over the data, which guarantees non-negativity (you need to do some algebra to realize that the result is mathematically the same):
y = [ v - average for v in x ]
dev = sum(v*v for v in y) / len(x)
stderr = math.sqrt(dev)
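If the data arrives as a stream and a second pass is not possible, a numerically stable one-pass alternative is Welford's algorithm; a sketch (my addition, not part of the answer above):
import math

def std_welford(xs):
    # one-pass, numerically stable running mean and squared deviations
    mean = 0.0
    m2 = 0.0
    n = 0
    for v in xs:
        n += 1
        delta = v - mean
        mean += delta / n
        m2 += delta * (v - mean)
    return math.sqrt(m2 / n)  # population standard deviation

print(std_welford([0.07] * 100))  # 0.0, no domain error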
