Saving the "state" of a Z3 solver in SMT2 format - python

Is it possible, using the Z3 API (e.g. the Python API), to save the current state of a solver, including what the solver has learned (in SAT solving we would say the "learned clauses"), to a file in SMT2 format?
I would like to save the solver's state to a temporary file so that I can resume solving later, which would give me time to work out what further queries to make.
Many thanks in advance...

SMT2 has no provision for saving a given solver's state, which will no doubt differ widely from solver to solver. Each solver might have its own mechanism for doing so, but it will definitely not be in SMTLib2 format.
Since your question is entirely Z3-specific, I recommend asking it at https://github.com/Z3Prover/z3/issues to see if they have anything interesting. So far as I know, however, this isn't currently possible.

In the end, Levent was right :)
Below are some observations by Nikolaj Bjorner, from the Z3 github website.
"The state of the solver isn't fully serializable to SMT2 format.
You can print the solver to smt2 format based on the current assertions,
but not learned clauses/units using the sexpr() method on the Solver object."
...
"We don't expose ways to print internal state. You could perhaps interrupt the solver, then clone it using the "translate" methods and access the translated solver state using internal print utilities. You would have to change the code a bit to get to this state.
The print features on solvers don't access the internal state of any of the solvers, instead they look at the asserted formulas and print them.
I don't translate learned lemmas. For example, the code in smt_context.cpp line 176 is disabled because it didn't help with any performance enhancements. Similarly, the copy code in sat_solver does not copy learned clauses even though it retains unit literals and binary clauses that are learned."
You can see the above comments by Nikolaj at this link.

Related

Replace objetive function in Word2Vec Gensim

I'm doing my final degree project. I need to create an extended version of the word2vec algorithm, changing the default objective function of the original paper. This has already been done (check this paper). That paper only states the new objective function; it does not say how they ran the model.
Now I need to extend that model too, with another function, but I'm not sure whether I have to implement word2vec myself with the new function, or whether there is a way to replace it in the Gensim word2vec implementation.
I have checked the Gensim Word2Vec documentation but have not seen any parameter for this. Do you have any idea how to do it? Is it even possible?
I was unsure whether this StackExchange site was the correct one; maybe https://ai.stackexchange.com/ is more appropriate.
There's no official support in Gensim for simply dropping in your own objective function.
However, the full source code is available – https://github.com/RaRe-Technologies/gensim – so by editing it, or using it as a model for your own implementation, you could theoretically do anything.
Beware, though:
the code has gone through a lot of optimization & customization for new options that may not be relevant to your needs, so it may not be the cleanest & simplest starting point
for performance, the core routines are written in Cython – see the .pyx files – which can be especially hard to debug, and they rely on library bulk-array functions that may obscure how to implement your alternate function instead

Resources on how to find the appropriate algorithm for a minimization problem?

I have a Python function where I want to find the global minimum.
I am looking for a resource on how to choose the appropriate algorithm for it.
When looking at the scipy.optimize.minimize documentation, I find only some not very specific statements like
has proven good performance even for non-smooth optimizations
Suitable for large-scale problems
recommended for medium and large-scale problems
Apart from that, I am unsure whether basinhopping might be better for my function, or even bayesian-optimization. For the latter I am unsure whether it is only meant for machine-learning cost functions or can be used generally.
I am looking for something like a cheat sheet, but just for minimization problems.
Does something like that already exist?
If not, how can I choose the most appropriate algorithm given my constraints (an objective that is time-consuming to compute, 4 input variables, known boundaries)?
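For the constraints stated above (expensive objective, 4 variables, known bounds), one common first choice for global minimization is scipy's differential_evolution, which accepts bounds directly. A minimal sketch, with a stand-in quadratic playing the role of the real, expensive objective:

```python
# Illustrative sketch: global minimization over a box. The objective here
# is a hypothetical stand-in for the real, expensive function.
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    # Stand-in objective with a known minimum at x = (0.5, 0.5, 0.5, 0.5).
    return float(np.sum((x - 0.5) ** 2))

bounds = [(-1.0, 1.0)] * 4  # the known boundaries, one (low, high) per variable
result = differential_evolution(objective, bounds, seed=0, maxiter=50)
print(result.x, result.fun)
```

basinhopping is the alternative when bounds are soft and a good starting point is known; for a truly expensive objective, a surrogate-based (Bayesian) optimizer can also be applied to general functions, not only machine-learning losses.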

CPLEX 12.9: Strong branching is not available for mixed-integer problems?

I want to get strong branching scores with CPLEX and Python, and as a first step I tried to use "cplex.advanced.strong_branching" to solve a very simple MILP problem (my code followed the example usage of this function exactly). However, it reported "CPLEX Error 1017: Not available for mixed-integer problems", which confused me, because SB should be part of a traditional branch-and-bound algorithm. When I used it to solve an LP problem, it worked well.
The error seems to be raised from "CPXXstrongbranch", a base C/C++ API, which also made me wonder how CPLEX can make SB decisions when I set the branching strategy parameter to SB. Relatedly, I know the Python API doesn't have the important "CPXgetcallbacknodelp" function, so how could "cplex.advanced.strong_branching" work? Could that be the reason for this error?
I don't fully understand how "CPXstrongbranch" works in C, so the following information may be incorrect: I tried to use "CPXstrongbranch" in the user-set branch callback of the example "adlpex1.c", and the same error was raised; that stopped me from using "ctypes" to reach the "CPXgetcallbacknodelp" function.
Could it be a version problem? Does CPLEX block access to SB? I have read a paper that relied on the SB scores in CPLEX 12.6.1 and the C API. Or maybe I just made some mistakes.
My question is whether CPLEX can do SB and deliver its results to users on a MILP problem.
cplex.advanced.strong_branching does not carry out any branching. The doc is a bit confusing here. What this function does is compute the strong branching scores for the variables you pass (or for all variables if you don't pass a list).
This requires an LP because, in a MIP search tree, you would usually call this function on the current LP relaxation.
If you want to use CPLEX with strong branching then set the variable selection parameter to "strong branching":
import cplex

with cplex.Cplex() as cpx:
    # Select the strong-branching variable-selection strategy, then solve.
    cpx.parameters.mip.strategy.variableselect.set(
        cpx.parameters.mip.strategy.variableselect.values.strong_branching)
    cpx.solve()
The strong_branching function is only needed if you want to implement your own branching algorithm.

Is there a wbo-like SAT-Solver for Python?

Is there a Python module / program that solves a SAT problem? Preferably a weighted Boolean one (to be specific, something like wbo).
Or, if not, are there bindings or an API to use one of those solvers?
I don't think I could program one myself in the time I have right now.
In order to solve SAT problems I would suggest using MiniSat (http://minisat.se), Glucose (https://www.lri.fr/~simon/?page=glucose) or Picosat (http://fmv.jku.at/picosat), among others. In the case of pseudo-boolean optimization I know of MiniSat+ (http://minisat.se/MiniSat+.html) and Gurobi (http://www.gurobi.com). I think all of them are free except Gurobi, which offers trial and academic licences.
All of them offer a command line interface with input and output files that are easily generated/read from within Python. Moreover, Gurobi features a full Python shell.
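Generating the input files is straightforward: all of the solvers above accept CNF in DIMACS format on the command line. A minimal sketch of producing that format from Python (the helper name is my own):

```python
# Sketch: emit a CNF formula in DIMACS format, the common input format of
# MiniSat, Glucose, and Picosat. Literals are signed integers; a negative
# number means a negated variable; each clause ends with 0.
def to_dimacs(clauses, num_vars):
    lines = ['p cnf %d %d' % (num_vars, len(clauses))]
    for clause in clauses:
        lines.append(' '.join(map(str, clause)) + ' 0')
    return '\n'.join(lines)

# (x1 OR x2) AND (NOT x1 OR x3)
cnf = to_dimacs([[1, 2], [-1, 3]], 3)
print(cnf)
# p cnf 3 2
# 1 2 0
# -1 3 0
```

The solver's output (a satisfying assignment as a list of signed literals) is just as easy to parse back.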
Picosat has bindings for Python (pycosat).

Nash equilibrium in Python

Is there a Python library out there that solves for the Nash equilibrium of two-person zero-sum games? I know the solution can be written down in terms of linear constraints and, in theory, scipy should be able to optimize it. However, for two-person zero-sum games the solution is exact and unique, but some of the solvers fail to converge for certain problems.
Rather than trying every linear programming library listed on the Python website, I would like to know which library would be most effective in terms of ease of use and speed.
Raymond Hettinger wrote a recipe for solving zero-sum payoff matrices. It should serve your purposes alright.
As for a more general library for solving game theory, there's nothing specifically designed for that. But, like you said, scipy can tackle optimization problems like this. You might be able to do something with GarlicSim, which claims to be for "any kind of simulation: Physics, game theory..." but I've never used it before so I can't recommend it.
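Since the question already notes that the zero-sum case reduces to linear constraints, here is a minimal sketch (assuming SciPy; the helper name is my own) of solving it directly with scipy.optimize.linprog: maximize the row player's guaranteed value v subject to the strategy being a probability vector.

```python
# Sketch: two-player zero-sum game via linear programming.
import numpy as np
from scipy.optimize import linprog

def solve_zero_sum(payoff):
    """Row player's optimal mixed strategy and game value for a payoff matrix."""
    payoff = np.asarray(payoff, dtype=float)
    m, n = payoff.shape
    # Variables: m strategy probabilities followed by the value v.
    c = np.zeros(m + 1)
    c[-1] = -1.0                         # linprog minimizes, so minimize -v
    # For every opponent column j: v - sum_i payoff[i, j] * x_i <= 0
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    A_eq = np.ones((1, m + 1))
    A_eq[0, -1] = 0.0                    # probabilities sum to 1 (v excluded)
    bounds = [(0, None)] * m + [(None, None)]  # v may be negative
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
    return res.x[:m], res.x[-1]

# Matching pennies: uniform strategy, game value 0.
strategy, value = solve_zero_sum([[1, -1], [-1, 1]])
print(strategy, value)
```

This sidesteps the convergence issues mentioned above: the LP is solved exactly by the simplex/interior-point machinery rather than by a general nonlinear optimizer.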
There is Gambit, which is a little difficult to set up, but has a python API.
I've just started putting together some game theory python code: http://drvinceknight.github.com/Gamepy/
There's code which:
solves matching games,
calculates Shapley values in cooperative games,
runs agent based simulations to identify emergent behaviour in normal form games,
(clumsily – my Python foo is still growing) uses the lrs library (written in C: http://cgm.cs.mcgill.ca/~avis/C/lrs.html) to calculate the solutions to normal form games (this is, I believe, what you want).
The code is all available on github and that site (the first link at the beginning of this answer) explains how the code works and gives user examples.
You might also want to check out 'Gambit' which I've never used.
