If I have 2 z3 Ints, for example x1 and x2, and a 2D array of numbers, for example:
list = [[1,2],[12,13],[45,7]]
I need to write a rule so that x1 and x2 are any of the pairs of numbers in the list; for example, x1 would be 1 and x2 would be 2, or x1 is 12 and x2 is 13.
I'm guessing it would be something like:
solver = Solver()
for i in range(0, len(list)):
    solver.add(And((x1 == list[i][0]), (x2 == list[i][1])))
but this would obviously just always be unsat, so I need to write it so that x1 and x2 can be any of the pairs in the list. It's worth noting that the number of pairs in the list could be anything, not just 3 pairs.
You're on the right track. Simply iterate and form the disjunction instead. Something like:
from z3 import *
list = [[1,2],[12,13],[45,7]]
s = Solver()
x1, x2 = Ints('x1 x2')
s.add(Or([And(x1 == p[0], x2 == p[1]) for p in list]))
while s.check() == sat:
    m = s.model()
    print("x1 = %2d, x2 = %2d" % (m[x1].as_long(), m[x2].as_long()))
    # block the current model so the next check finds a different pair
    s.add(Or(x1 != m[x1], x2 != m[x2]))
When run, this prints:
x1 = 1, x2 = 2
x1 = 12, x2 = 13
x1 = 45, x2 = 7
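If you ever need tuples wider than pairs, the same disjunction construction generalizes directly; here is a small sketch with illustrative values (the 3-element rows are just an example, not from the question):
from z3 import *

rows = [[1, 2, 3], [12, 13, 14], [45, 7, 9]]
xs = Ints('x1 x2 x3')

s = Solver()
# x1..x3 must jointly match one of the rows
s.add(Or([And([x == v for x, v in zip(xs, row)]) for row in rows]))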
I have a matrix with n rows and n+1 columns and need to construct such a system.
For example, the matrix is:
x4  x3  x2  x1  result
 1   1   0   1       0
 1   0   1   0       1
 0   1   0   1       1
 1   0   1   1       0
Then the equations will be (+ denotes XOR):
x4+x3+x1=0
x4+x2=1
x3+x1=1
x4+x2+x1=0
I need to return the answer as a list of x1, ..., xn.
How can we do it in Python?
You could make use of the Python interface (z3py) of Microsoft's Z3 solver:
from z3 import *
def xor2(a, b):
    return Xor(a, b)

def xor3(a, b, c):
    return Xor(a, Xor(b, c))
# define Boolean variables
x1 = Bool('x1')
x2 = Bool('x2')
x3 = Bool('x3')
x4 = Bool('x4')
s = Solver()
# every equation is expressed as one constraint
s.add(Not(xor3(x4, x3, x1)))
s.add(xor2(x4, x2))
s.add(xor2(x3, x1))
s.add(Not(xor3(x4, x2, x1)))
# solve and output results
print(s.check())
print(s.model())
Result:
sat
[x3 = False, x2 = False, x1 = True, x4 = True]
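If the matrix is not fixed in advance, the same constraints can be built programmatically for any n x (n+1) augmented matrix. A minimal sketch, assuming the column order x_n ... x_1, result from the question and that every row contains at least one 1:
from z3 import *

# augmented matrix from the question: columns are x4, x3, x2, x1, result
matrix = [[1, 1, 0, 1, 0],
          [1, 0, 1, 0, 1],
          [0, 1, 0, 1, 1],
          [1, 0, 1, 1, 0]]

n = len(matrix)
xs = [Bool('x%d' % (i + 1)) for i in range(n)]   # xs[0] is x1, xs[1] is x2, ...

s = Solver()
for row in matrix:
    coeffs, rhs = row[:-1], row[-1]
    # column j holds the coefficient of x_{n-j}, so it maps to xs[n - 1 - j]
    terms = [xs[n - 1 - j] for j, c in enumerate(coeffs) if c == 1]
    lhs = terms[0]
    for t in terms[1:]:
        lhs = Xor(lhs, t)
    s.add(lhs if rhs == 1 else Not(lhs))

if s.check() == sat:
    m = s.model()
    print([1 if is_true(m[x]) else 0 for x in xs])   # [x1, x2, ..., xn]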
Learn Gaussian elimination; it can also be used for XOR systems (i.e., over GF(2)). Then write a Gaussian elimination Python program.
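Following that suggestion, here is a minimal sketch of Gaussian elimination over GF(2) (row addition is XOR), assuming the same augmented-matrix layout as above and that a unique solution exists:
def solve_xor_system(matrix):
    """Gaussian elimination over GF(2) on an augmented n x (n+1) matrix.

    Assumes a unique solution exists; columns are x_n ... x_1, result.
    Returns the solution as [x1, x2, ..., xn].
    """
    a = [row[:] for row in matrix]          # work on a copy
    n = len(a)
    for col in range(n):
        # find a pivot row with a 1 in this column and move it into place
        pivot = next(r for r in range(col, n) if a[r][col] == 1)
        a[col], a[pivot] = a[pivot], a[col]
        # eliminate this column from every other row (XOR of rows)
        for r in range(n):
            if r != col and a[r][col] == 1:
                a[r] = [x ^ y for x, y in zip(a[r], a[col])]
    # after elimination, row i holds the value of the variable in column i;
    # column 0 is x_n, so reverse to return [x1, ..., xn]
    return [a[i][n] for i in range(n)][::-1]

matrix = [[1, 1, 0, 1, 0],
          [1, 0, 1, 0, 1],
          [0, 1, 0, 1, 1],
          [1, 0, 1, 1, 0]]
print(solve_xor_system(matrix))   # [x1, x2, x3, x4]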
Can anyone suggest an efficient way of reshaping a column (in a Python pandas DataFrame) into multiple columns, with alternating column assignment? I could do this with a loop, but I am wondering if there is a more elegant way. For example, consider the following:
Added: does anyone have a solution that will reshape every n values in a single column into n separate columns, e.g. reshaping from a single column with n variables to n columns?
Col
1 x1
2 y1
3 z1
4 x2
5 y2
6 z2
7 x3
8 y3
9 z3
..
to
x y z
1 x1 y1 z1
2 x2 y2 z2
3 x3 y3 z3
...
You can just reshape the underlying values, assuming that you have the correct number of values for the given shape and that you only care about ordering the values by their position, without regard to the values themselves.
s
Col
1 x1
2 y1
3 z1
4 x2
5 y2
6 z2
7 x3
8 y3
9 z3
pd.DataFrame(s.to_numpy().reshape(3, 3))
0 1 2
0 x1 y1 z1
1 x2 y2 z2
2 x3 y3 z3
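If the column length is not fixed, the same idea works with reshape(-1, n), which infers the number of rows; a short sketch that recreates the example data (the column names are just taken from the desired output):
import pandas as pd

s = pd.Series(['x1', 'y1', 'z1', 'x2', 'y2', 'z2', 'x3', 'y3', 'z3'], name='Col')

n = 3  # number of values per group, i.e. the number of output columns
df_wide = pd.DataFrame(s.to_numpy().reshape(-1, n), columns=['x', 'y', 'z'])
print(df_wide)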
You can use:
df_final=(pd.DataFrame(df.groupby(df.Col.str[-1])['Col'].apply(list)
.values.tolist(),columns=['x','y','z']))
x y z
0 x1 y1 z1
1 x2 y2 z2
2 x3 y3 z3
You can use auxiliary columns to serve as the row and column index, then apply df.pivot:
df1['aux'] = df1.Col.str[:-1]
df1['aux_idx'] = df1.Col.str[-1:]
print(df1.pivot(index= 'aux_idx', columns='aux', values='Col'))
Output:
aux x y z
aux_idx
1 x1 y1 z1
2 x2 y2 z2
3 x3 y3 z3
For the same result by just counting the number of elements, use integer division of df.index by n as the key:
df1['aux_idx'] = (df1.index-1)// 3
df1['aux'] = df1.Col.str[:-1]
print(df1.pivot(index= 'aux_idx', columns='aux', values='Col'))
Output:
aux x y z
aux_idx
0 x1 y1 z1
1 x2 y2 z2
2 x3 y3 z3
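If you also want to get rid of the aux_idx / aux axis labels that pivot leaves behind, you can clear them afterwards (a small cosmetic step, not part of the original answers):
# drop the axis labels left over from the pivot so the frame matches the target layout
out = df1.pivot(index='aux_idx', columns='aux', values='Col')
out.index.name = None
out.columns.name = None
print(out)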
I am trying to implement a simple quadratic program using CPLEX's Python API. The sample file qpex1 provided with CPLEX discusses this. The problem, as given in qpex.lp, is:
Maximize
obj: x1 + 2 x2 + 3 x3 + [ - 33 x1 ^2 + 12 x1 * x2 - 22 x2 ^2 + 23 x2 * x3
- 11 x3 ^2 ] / 2
Subject To
c1: - x1 + x2 + x3 <= 20
c2: x1 - 3 x2 + x3 <= 30
Bounds
0 <= x1 <= 40
End
The Python implementation passes a matrix qmat, which encodes the quadratic portion of the objective function. The matrix is:
qmat = [[[0, 1], [-33.0, 6.0]],
[[0, 1, 2], [6.0, -22.0, 11.5]],
[[1, 2], [11.5, -11.0]]]
p.objective.set_quadratic(qmat)
Can someone explain the structure of this matrix? What are the parts of the data structure that is being used, what are the components, and so on?
For each row, the first list is the set of column indices of the nonzero entries and the second list is the set of corresponding values, so the qmat matrix is:
-33 6 0
6 -22 11.5
0 11.5 -11
that results in:
             | -33    6     0  |   | x1 |
[x1 x2 x3] * |   6  -22  11.5  | * | x2 |  =  -33 x1^2 + 12 x1*x2 - 22 x2^2 + 23 x2*x3 - 11 x3^2
             |   0  11.5  -11  |   | x3 |
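To see where those index/value pairs come from, here is a small sketch (not from the CPLEX samples) that rebuilds the same structure from the dense symmetric matrix above, keeping only the nonzero entries of each row:
# dense symmetric Q matrix corresponding to the quadratic objective above
Q = [[-33.0,   6.0,   0.0],
     [  6.0, -22.0,  11.5],
     [  0.0,  11.5, -11.0]]

def to_qmat(Q):
    """Convert a dense symmetric matrix into the row-wise sparse format shown above:
    one [indices, values] pair per row, listing only the nonzero columns."""
    qmat = []
    for row in Q:
        indices = [j for j, v in enumerate(row) if v != 0.0]
        values = [row[j] for j in indices]
        qmat.append([indices, values])
    return qmat

print(to_qmat(Q))
# [[[0, 1], [-33.0, 6.0]], [[0, 1, 2], [6.0, -22.0, 11.5]], [[1, 2], [11.5, -11.0]]]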
I'd like to minimize a set of equations where the variables are known with their uncertainties. In essence I'd like to test the hypothesis that the given measured variables conform to the formula constraints given by the equations. This seems like something I should be able to do with scipy-optimize. For example I have three equations:
8 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4
4 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4
1 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4
And four measured unknowns with their 1-sigma uncertainty:
x1 = 0.246 ± 0.007
x2 = 0.749 ± 0.010
x3 = 1.738 ± 0.009
x4 = 2.248 ± 0.007
Looking for any pointers in the right direction.
This is my approach. Assuming x1-x4 are approximately normally distributed around each mean (1-sigma uncertainty), the problem turns into minimizing the sum of squared errors, with 3 linear constraint functions. Therefore, we can attack it using scipy.optimize.fmin_slsqp():
In [19]:
import numpy as np
import scipy.optimize as so

def eq_f1(x):
    return (x * np.array([0.5, 1.0, 1.5, 2.0])).sum() - 8

def eq_f2(x):
    return (x * np.array([0.0, 0.0, 1.0, 1.0])).sum() - 4

def eq_f3(x):
    return (x * np.array([1.0, 1.0, 0.0, 0.0])).sum() - 1

def error_f(x):
    # squared deviations from the measured values, weighted by the 1-sigma uncertainties
    error = (x - np.array([0.246, 0.749, 1.738, 2.248])) / np.array([0.007, 0.010, 0.009, 0.007])
    return (error * error).sum()
In [20]:
so.fmin_slsqp(error_f, np.array([0.246, 0.749, 1.738, 2.248]), eqcons=[eq_f1, eq_f2, eq_f3])
Optimization terminated successfully. (Exit mode 0)
Current function value: 2.17576389592
Iterations: 4
Function evaluations: 32
Gradient evaluations: 4
Out[20]:
array([ 0.25056582, 0.74943418, 1.74943418, 2.25056582])
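As a quick sanity check (not part of the original answer), you can verify that the returned point satisfies the three constraints:
import numpy as np

sol = np.array([0.25056582, 0.74943418, 1.74943418, 2.25056582])
print(np.dot([0.5, 1.0, 1.5, 2.0], sol))   # should be close to 8
print(np.dot([0.0, 0.0, 1.0, 1.0], sol))   # should be close to 4
print(np.dot([1.0, 1.0, 0.0, 0.0], sol))   # should be close to 1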
It appears to me that I have a very similar problem. I am relatively new to Python and have used it mostly to sort and reduce data with pandas.
I have a set of linear equations for which I want to find the best-fit parameters. However, the dataset has known uncertainties that need to be considered (given in parentheses).
x1*99(1)+x2*45(1)=52(0.2)
x1*1(0.5)+x2*16(1)=15(0.1)
Moreover there are constraints:
x1>=0
x2>=0
x1+x2=1
My approach would be to treat the equations as constraints and minimize the sum of the residuals, as shown in the example above.
Solving this without uncertainties is not the issue. I am asking for a hint on how to account for the uncertainties while finding the best-fit parameters.
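One way to follow that approach is sketched below, under stated assumptions: the two uncertain equations go into a weighted least-squares objective using only the right-hand-side uncertainties (the coefficient uncertainties are ignored here), while the exact constraints x1 >= 0, x2 >= 0 and x1 + x2 = 1 are enforced directly. This is only an illustrative sketch, not a vetted answer:
import numpy as np
import scipy.optimize as so

# coefficients and right-hand sides of the two uncertain equations
A = np.array([[99.0, 45.0],
              [ 1.0, 16.0]])
b = np.array([52.0, 15.0])
b_sigma = np.array([0.2, 0.1])   # 1-sigma uncertainties of the right-hand sides

def chi2(x):
    # weighted sum of squared residuals; coefficient uncertainties are ignored here
    residuals = (A.dot(x) - b) / b_sigma
    return (residuals ** 2).sum()

constraints = ({'type': 'eq', 'fun': lambda x: x[0] + x[1] - 1.0},)  # x1 + x2 = 1
bounds = [(0.0, None), (0.0, None)]                                  # x1 >= 0, x2 >= 0

result = so.minimize(chi2, np.array([0.5, 0.5]),
                     method='SLSQP', bounds=bounds, constraints=constraints)
print(result.x)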
As given, the problem has no solution. This is because if the inputs x1, x2, x3 and x4 are Gaussian, then the outputs:
y1 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 - 8.0
y2 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 - 4.0
y3 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 - 1.0
are also Gaussian.
Assuming that x1, x2, x3 and x4 are independent random variables, this is easy to see with OpenTURNS:
import openturns as ot
x1 = ot.Normal(0.246, 0.007)
x2 = ot.Normal(0.749, 0.010)
x3 = ot.Normal(1.738, 0.009)
x4 = ot.Normal(2.248, 0.007)
y1 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 - 8.0
y2 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 - 4.0
y3 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 - 1.0
The following script produces the graph:
graph1 = y1.drawPDF()
graph1.setLegends(["y1"])
graph2 = y2.drawPDF()
graph2.setLegends(["y2"])
graph3 = y3.drawPDF()
graph3.setLegends(["y3"])
graph1.add(graph2)
graph1.add(graph3)
graph1.setColors(["dodgerblue3",
"darkorange1",
"forestgreen"])
graph1.setXTitle("Y")
The previous script produces a plot of the PDFs of y1, y2 and y3 (the figure is not reproduced here).
Given the location of 0.0 in these distributions, I would say that solving the equations exactly is mathematically impossible, although the data are physically consistent with them.
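If you prefer a number to a picture, you can also evaluate where 0.0 sits in each output distribution. This is a small addition to the script above; computeCDF(0.0) returns P(Y <= 0):
# probability that each linear combination is at most zero
print(y1.computeCDF(0.0))
print(y2.computeCDF(0.0))
print(y3.computeCDF(0.0))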
Actually, I guess that the Gaussian distributions you gave for x1, ..., x4 are estimated from data. So I would rather reformulate the problem as follows:
Given a sample of observed values of x1, x2, x3, x4, what are the values of e1, e2, e3 such that:
y1 = 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 - 8 + e1 = 0
y2 = 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 - 4 + e2 = 0
y3 = 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 - 1 + e3 = 0
This turns the problem into an inversion problem, which can be solved by calibrating e1, e2, e3. Furthermore, given the finite sample size of x1, ..., x4, we might want to produce the distribution of e1, e2, e3. This can be done by bootstrapping the input / output pairs (x, y): the distribution of e1, e2, e3 then reflects the variability of these parameters depending on the sample at hand.
First, we have to generate a sample from the distribution (I suppose that you have this sample but have not published it so far):
distribution = ot.ComposedDistribution([x1, x2, x3, x4])
sampleSize = 10
xobs = distribution.getSample(sampleSize)
Then we define the model:
formulas = [
"y1 := 0.5 * x1 + 1.0 * x2 + 1.5 * x3 + 2.0 * x4 + e1 - 8.0",
"y2 := 0.0 * x1 + 0.0 * x2 + 1.0 * x3 + 1.0 * x4 + e2 - 4.0",
"y3 := 1.0 * x1 + 1.0 * x2 + 0.0 * x3 + 0.0 * x4 + e3 - 1.0"
]
program = ";".join(formulas)
g = ot.SymbolicFunction(["x1", "x2", "x3", "x4", "e1", "e2", "e3"],
["y1", "y2", "y3"],
program)
And set the observed outputs, which is a sample of zeros:
yobs = ot.Sample(sampleSize, 3)
We start with initial values equal to zero, and define the function to calibrate:
e1Initial = 0.0
e2Initial = 0.0
e3Initial = 0.0
thetaPrior = ot.Point([e1Initial,e2Initial,e3Initial])
calibratedIndices = [4, 5, 6]
mycf = ot.ParametricFunction(g, calibratedIndices, thetaPrior)
Then we can calibrate the model:
algo = ot.NonLinearLeastSquaresCalibration(mycf, xobs, yobs, thetaPrior)
algo.run()
calibrationResult = algo.getResult()
print(calibrationResult.getParameterMAP())
This prints:
[0.0265988,0.0153057,0.00495758]
This means that the errors e1, e2, e3 are rather small.
We can compute a confidence interval:
thetaPosterior = calibrationResult.getParameterPosterior()
print(thetaPosterior.computeBilateralConfidenceIntervalWithMarginalProbability(0.95)[0])
This prints:
[0.0110046, 0.0404756]
[0.00921992, 0.0210059]
[-0.00601084, 0.0156665]
The third parameter, e3, might be zero, but neither e1 nor e2 can be.
Finally, we can get the distribution of the errors:
thetaPosterior = calibrationResult.getParameterPosterior()
and draw it:
graph1 = thetaPosterior.getMarginal(0).drawPDF()
graph2 = thetaPosterior.getMarginal(1).drawPDF()
graph3 = thetaPosterior.getMarginal(2).drawPDF()
graph1.add(graph2)
graph1.add(graph3)
graph1.setColors(["dodgerblue3",
"darkorange1",
"forestgreen"])
graph1
This produces a plot of the posterior PDFs of e1, e2 and e3 (the figure is not reproduced here).
This shows that e3 might be zero given the variability in the observed inputs x1, ..., x4. But e1 and e2 cannot be zero. The conclusion for this sample is that the third equation is approximately solved by the observed values of x1, ..., x4, but not the first two equations.