I'm trying to use an SMT solver on a scheduling problem and could not find anything helpful in the documentation.
It seems that the following ways of setting parameters have no effect on the solver.
from z3 import *
set_param(logic="QF_UFIDL")
s = Optimize() # or even Solver()
or even
from z3 import *
s = Optimize()
s.set("parallel.enable", True)
So how can I set [global] parameters effectively in z3py? To be specific, I need to set the parameters below:
parallel.enable=True
auto_config=False
smtlib2_compliant=True
logic="QF_UFIDL"
Use global parameter statements like the following, on separate lines, before creating the Solver or Optimize object:
set_param('parallel.enable', True)
set_param('parallel.threads.max', 4) # default 10000
To set non-global parameters specific to a Solver or Optimize object, you can use the help() function to show available parameters:
o = Optimize()
o.help()
s = Solver()
s.help()
The following example shows how to set an Optimize parameter:
opt = Optimize()
opt.set(priority='pareto')
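Putting the pieces together for the specific parameters listed in the question, here is a minimal sketch. It assumes the global parameter names you listed (e.g. smtlib2_compliant) exist in your z3 build; SolverFor is the usual z3py way to fix a logic, rather than set_param:
from z3 import *

# Global parameters: set these before creating any Solver/Optimize object.
set_param('parallel.enable', True)
set_param('auto_config', False)
set_param('smtlib2_compliant', True)  # name taken from the question; verify it exists in your version

# A logic is normally attached to a plain solver via SolverFor, not a global parameter.
s = SolverFor('QF_UFIDL')

# Optimize has its own object-level options, set via .set() as shown above (see o.help()).
o = Optimize()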
Use set_param, as described here: https://z3prover.github.io/api/html/namespacez3py.html#a4ae524d5f91ad1b380d8821de01cd7c3
It isn't clear what's not working for you. Are you getting an error message back? From your description, I understand that the setting does indeed take place, but you don't see any change in behavior? For that, you'll have to provide a concrete example we can look at. Note that for most parameters, the effects will only be visible with benchmarks that trigger the option, and even then it'll be hard to tell what (if any) effect it had, unless you dig into verbose log output.
Also, the parallel-solving features, which you seem to be interested in, aren't going to gain you much. See Section 9.2 of https://z3prover.github.io/papers/z3internals.html: essentially it boils down to attempting to solve the same problem with different seeds to see if one of them goes faster. If you have many cores lying around it might be worth a try, but don't expect magic out of it.
My Problem
I am using SymPy 1.11.1 with Python 3.8.5 (Jupyter Notebook). I am dealing with a large Hessian in which terms such as these appear:
Pi+ and Pi- are complex Sympy symbols. However, one is the complex conjugate of the other, that is conjugate(Pi+) = Pi- and vice versa. This means that the product Pi+ * Pi- is real and the derivatives can be easily evaluated by removing the Re/Im (in one case Re(Pi+ * Pi-) = Pi+ * Pi-, in the other Im(Pi+ * Pi-) = 0).
My Question
Is it possible to tell Sympy that Pi+ and Pi- are related by a complex conjugate, and it can therefore simplify the derivatives as explained above? Or does there exist some other way to simplify my derivatives?
My Attempts
Ideally, I would like to find a way to express the above relation between Pi+ and Pi- to Python, such that it can make simplifications where needed throughout the code.
Initially I wanted to use SymPy's global assumptions and try to set an assumption that (Pi+ * Pi-) is real. However, when I try to use global assumptions it says name 'global_assumptions' is not defined, and when I try to import it explicitly (instead of import *), it says cannot import name 'global_assumptions' from 'sympy.assumptions'. I could not figure out the root of this problem.
My next attempt was to replace all instances of Re(Pi+ * Pi-) -> Pi+ * Pi- etc. manually with the Sympy function subs. The code replaced these instances successfully, but never evaluated the derivatives, so I got stuck with these instead:
Please let me know if any clarification is needed.
I found a similar question Setting Assumptions on Variables in Sympy Relative to Other Variables and it seems from the discussion there that there does not exist an efficient way to do this. However, seeing that this was asked back in 2013, and the discussions pointed towards the possibility of implementation of a new improved assumption system within Sympy in the near future, it would be nice to know if any new such useful methods exist.
Given one and other, try replacing one with conjugate(other):
>>> from sympy import symbols, conjugate, re, im
>>> x, y = symbols('x y')
>>> one = x; other = y
>>> p = one*other; q = p.subs(one, conjugate(other)); re(q), im(q)
(Abs(y)**2, 0)
If you want to get back the original symbol after the simplifications wrought by the first replacement, follow up with a second replacement:
>>> p.subs(one, conjugate(other)).subs(conjugate(other), one)
x*y
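If the same replacement is needed in many places (as in the question's Hessian), one option is to wrap the two substitutions in a small helper. A minimal sketch, where tie_conjugates and the Pi_plus/Pi_minus names are hypothetical stand-ins for the question's Pi+ and Pi-:
from sympy import symbols, conjugate

Pi_plus, Pi_minus = symbols('Pi_plus Pi_minus')

def tie_conjugates(expr, one=Pi_plus, other=Pi_minus):
    """Rewrite expr using one = conjugate(other), then restore the original symbol."""
    return expr.subs(one, conjugate(other)).subs(conjugate(other), one)
Whether the Re/Im wrappers disappear entirely after the rewrite depends on how far SymPy simplifies the particular expression.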
Problem
I have a function make_pipeline that accepts an arbitrary number of functions, which it then calls to perform sequential data transformation. The resulting call chain performs transformations on a pandas.DataFrame. Some, but not all, of the functions it may call need to operate on a sub-array of the DataFrame. I have written multiple selector functions. However, at present each member function of the chain has to be explicitly given the user-selected selector/filter function. This is VERY error-prone, and accessibility matters a lot because the end code is aimed at non-specialists (possibly with no Python/programming knowledge), so it must be "batteries-included". This entire project is written in a functional style (that's what has always worked for me).
Sample Code
filter_func = simple_filter()
# The API looks like this
make_pipeline(
    load_data("somepath", header=[1, 0]),
    transform1(arg1, arg2),
    transform2(arg1, arg2, data_filter=filter_func),  # This function needs access to the user-defined filter function
    transform3(arg1, arg2, data_filter=filter_func),  # This function needs access to the user-defined filter function
    transform4(arg1, arg2),
)
Expected API
filter_func = simple_filter()
# The API looks like this
make_pipeline(
    load_data("somepath", header=[1, 0]),
    transform1(arg1, arg2),
    transform2(arg1, arg2),
    transform3(arg1, arg2),
    transform4(arg1, arg2),
)
Attempted
I thought that if the data_filter alias is available in the caller's namespace, it would also become available (something similar to a closure) to all the functions it calls. This seems to happen with some toy examples but won't work in this case (it raises an unbound-variable error).
What's a good way to make a function defined in one place available to certain interested functions in the call chain? I'm trying to avoid global.
Notes/Clarification
I've had problems with OOP and mutable state in the past, and functional programming has worked quite well. Hence I've set a goal for myself to NOT use classes (to the extent that Python lets me, anyway). So no classes.
I should probably have clarified this initially: in the pipeline, the output of every function is a DataFrame, and the input of every function (except load_data, obviously) is a DataFrame. The functions are decorated with a wrapper that calls functools.partial, because we want the user to supply the args to each function but not execute it. The actual execution is done by a for loop in make_pipeline.
Each function accepts df: pandas.DataFrame plus all arguments that are specific to that function. The statement seen above, transform1(arg1, arg2, ...), actually calls the decorated transform1, which returns functools.partial(transform1, arg1, arg2, ...), which now has a signature like transform1(df: pandas.DataFrame).
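For concreteness, here is a hypothetical sketch of the decoration scheme just described (pipeline_step and the toy transform1 are illustrative, not the actual project code):
import functools

def pipeline_step(func):
    # Calling the decorated transform with its configuration arguments returns a
    # partial that only awaits the DataFrame.
    @functools.wraps(func)
    def configure(*args, **kwargs):
        return functools.partial(func, *args, **kwargs)
    return configure

@pipeline_step
def transform1(arg1, arg2, df=None):
    return df  # placeholder body; a real step would transform and return the DataFrame

step = transform1(1, 2)  # a partial whose only remaining parameter is effectively df
# step(df) would then execute the transformation on the DataFrame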
load_dataframe is just a convenience function to load the initial DataFrame so that all the other functions can begin operating on it. It just felt more intuitive to users to have it be part of the chain rather than a separate call.
The problem is this: I need a way for a filter function to be initialized (called) in only one place, such that every function in the call chain that needs access to the filter function gets it without it being explicitly passed as an argument to said function. If you're wondering why, it's because I feel that end users will find the explicit passing unintuitive and arbitrary (some functions need it, some don't), and I'm also pretty certain that they will make all kinds of errors, like passing different filters or forgetting it sometimes.
(Update) I've also tried inspect.signature() in make_pipeline to check whether each function accepts a data_filter argument and pass it on. However, this raises an incorrect-function-signature error for some unclear reason (likely because of the decorators/partial calls). If signature could return the non-partial function signature, this would solve the issue, but I couldn't find much info in the docs.
Turns out it was pretty easy. The solution is inspect.signature.
import inspect
from typing import Any, Callable, Optional

def make_pipeline(*args, data_filter: Optional[Callable[..., Any]] = None):
    d = args[0]
    for arg in args[1:]:
        # Pass the filter only to steps whose signature asks for it
        if "data_filter" in inspect.signature(arg).parameters:
            d = arg(d, data_filter=data_filter)
        else:
            d = arg(d)
    return d
Leaving this here mostly for reference because I think this is a mini design pattern. I've also seen function.__closure__ mentioned on an unrelated subject. That may also work, but will likely be more complicated.
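For what it's worth, inspect.signature does resolve functools.partial objects (and functools.wraps-decorated functions), so the check above can work even for pre-configured steps. A small sketch with an illustrative transform:
import functools
import inspect

def transform2(df, arg1, arg2, data_filter=None):
    return df  # placeholder body

step = functools.partial(transform2, arg1=1, arg2=2)
print("data_filter" in inspect.signature(step).parameters)  # True: the partial is seen through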
I often find myself writing code that takes a set of parameters, does some calculation, and then passes the result to another function, which also requires some of the parameters to do some other manipulation, and so on. I end up with a lot of functions where I have to pass around parameters, such as f(x, y, N, epsilon), which then calls g(y, N, epsilon), and so on. All the while I have to include the parameters N and epsilon in every function call and not lose track of them, which is quite tedious.
What I want is to avoid this endless passing around of parameters, while still being able, within a single for loop, to change some of these parameters, e.g.
for epsilon in [1, 2, 3]:
    f(..., epsilon)
I usually have around 10 parameters to keep track of (these are physics problems) and do not know beforehand which I have to vary and which I can keep to a default.
The options I thought of are
Creating a global settings = {'epsilon': 1, 'N': 100} object, which is used by every function. However, I have always been told that putting stuff in the global namespace is bad. I am also afraid that this will not play nice with modifying the settings object within the for loop.
Passing around a settings object as a parameter in every function. This means that I can keep track of the object as it passed around, and makes it play nice with the for loop. However, it is still passing around, which seems stupid to me.
Is there another, third, option that I have not considered? Most of the solutions I can find are for the case where your settings are set only once, as you start up the program, and are then unchanged.
I believe this is primarily a matter of preference among coding styles. I'm going to offer my opinion on the ones you posted as well as some other alternatives.
First, creating a global settings variable is not bad by itself. Problems arise if global settings are treated as mutable state rather than being immutable. As you want to modify parameters on the fly, it's a dangerous option.
Second, passing the settings around is quite common in functional languages, and it's not stupid, although it can look clumsy if you're not used to it. The advantage of passing state this way is that you can isolate changes to the settings dictionary you pass around without corrupting the original one. The downside is that Python complicates immutability a bit because of shared references, so you can end up making many deepcopy's to prevent unintended mutation, which is quite inefficient. Unless your dict is flat (not nested), I would not go that way.
import copy

settings = {'epsilon': 1, 'N': 100}

# Unsafe but OK for a plain (non-nested) dict: a shallow copy with epsilon overridden.
for x in [1, 2, 3]:
    f(..., dict(settings, epsilon=x))

# Safe way.
ephemeral = copy.deepcopy(settings)
for x in [1, 2, 3]:
    ephemeral['epsilon'] = x
    f(..., ephemeral)
Now, there's another option which kind of mixes the other two, probably the one I'll take. Make a global immutable settings variable and write your function signatures to accept optional keyword arguments. This way you get the advantages of both: the ability to avoid continuous variable passing and the ability to modify values on the fly without interference:
def f(..., **kwargs):
epsilon = kwargs.get('epsilon', settings['epsilon'])
...
You may also create a function that encapsulates the aforementioned behavior to decouple variable extraction from function definition. There are many possibilities.
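One hypothetical shape such a helper could take (get_setting and the toy body of f are illustrative, not part of the answer above):
settings = {'epsilon': 1, 'N': 100}

def get_setting(kwargs, name):
    """Use an explicitly passed value if present, otherwise fall back to the shared settings."""
    return kwargs.get(name, settings[name])

def f(x, **kwargs):
    epsilon = get_setting(kwargs, 'epsilon')
    N = get_setting(kwargs, 'N')
    return x * epsilon / N  # placeholder computation

print(f(10))             # uses the defaults
print(f(10, epsilon=3))  # overrides epsilon for this call only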
Hope this helps.
This is a short question, but google points me every time to the documentation where I can't find the answer.
I am using scipy.optimize.minimize. It works pretty well; all things are fine.
I can define a method to use, but it works even if I don't specify the method.
Is there any way to get output showing which method was used? I know the result class, but the method isn't mentioned there.
Here's an example:
solution = opt.minimize(functitionTOminimize, initialGuess,
                        constraints=cons, options={'disp': True, 'verbose': 2})
print(solution)
I could set the method value to something like slsqp or cobyla, but I want to see what the program chooses. How can I get this information?
According to the scipy-optimize-minimize-docs: if no method is specified, the default choice will be one of BFGS, L-BFGS-B, or SLSQP, depending on whether the problem has constraints or bounds. To get more details on the order in which methods are chosen, you should take a look at the scipy-optimize-minimize-source-code-line-480. From the source code, the order is the following:
if method is None:
    # Select automatically
    if constraints:
        method = 'SLSQP'
    elif bounds is not None:
        method = 'L-BFGS-B'
    else:
        method = 'BFGS'
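If you want the choice printed rather than inferred, one option is to mirror that selection logic yourself and pass method explicitly. A small sketch with a toy problem (your own functitionTOminimize and cons would replace the toy ones here):
import scipy.optimize as opt

def pick_default_method(constraints=(), bounds=None):
    # Mirrors the selection shown above so the chosen method can be logged.
    if constraints:
        return 'SLSQP'
    elif bounds is not None:
        return 'L-BFGS-B'
    return 'BFGS'

# Toy problem standing in for the question's functitionTOminimize / cons.
fun = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
cons = [{'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 1.0}]

method = pick_default_method(constraints=cons)
print("minimize will use:", method)
solution = opt.minimize(fun, [0.0, 0.0], constraints=cons, method=method,
                        options={'disp': True})
print(solution.x)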
Multiple symfit model instances share parameter objects with the same name. I'd like to understand where this behaviour comes from, what its intent is, and whether it's possible to deactivate it.
To illustrate what I mean, a minimal example:
import symfit as sf
# Create Parameters and Variables
a = sf.Parameter('a',value=0)
b = sf.Parameter('b',value=1,fixed=True)
x, y = sf.variables('x, y')
# Instantiate two models
model1 = sf.Model({y: a*x + b})
model2 = sf.Model({y: a*x + b})
# They are indeed not the same
id(model1) == id(model2)
>>False
# There are two parameters
print(model1.params)
>>[a,b]
print(model1.params[1].name, model1.params[1].value)
>>b 1
print(model2.params[1].name, model2.params[1].value)
>>b 1
# They are initially identical.
# We want to manually modify the fixed one in only one model:
model1.params[1].value = 3
# Both have changed
print(model1.params[1].name, model1.params[1].value)
>>b 3
print(model2.params[1].name, model2.params[1].value)
>>b 3
id(model1.params[1]) == id(model2.params[1])
>>True
# The parameter is the same object
I want to fit multiple data streams with different model instances, but with different fixed parameter values depending on the data stream. Renaming the parameters in each instance of the model would work, but is ugly given that the parameter represents the same quantity. Processing them sequentially and modifying the parameters in between is possible, but I worry about unintended interactions between steps.
PS: Can someone with sufficient reputation please create the symfit tag
Excellent question. In principle this is because Parameter objects are a subclass of sympy.Symbol, and from its docstring:
Symbols are identified by name and assumptions:
>>> from sympy import Symbol
>>> Symbol("x") == Symbol("x")
True
>>> Symbol("x", real=True) == Symbol("x", real=False)
False
This is fundamental to the inner working of sympy, and therefore something we also use in symfit. But the value and fixed arguments are not viewed as assumptions, so they are not used to distinguish parameters.
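In other words (a quick sketch, assuming Parameter inherits Symbol's equality semantics as described):
import symfit as sf

b1 = sf.Parameter('b', value=1, fixed=True)
b2 = sf.Parameter('b', value=3)
# Equality is decided by name (and assumptions), not by value/fixed,
# so symfit treats these as the same parameter.
print(b1 == b2)  # True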
Now, to your question on how this would affect fitting. Like you say, working sequentially is a good solution, and one that will not have any side effects:
model = sf.Model({y: a*x + b})
b.fixed = True

fit_results = []
for b_value, xdata, ydata in datastream:
    b.value = b_value
    fit = sf.Fit(model, x=xdata, y=ydata)
    fit_results.append(fit.execute())
So there is no need to define a new Parameter every iteration; the b.value attribute is fixed within each loop iteration, so there is no way this can go wrong. The only way I can imagine this going wrong is if you use threading, which will probably create some race conditions. But threading is not desirable for CPU-bound tasks anyway; multiprocessing is the way to go. And in that case, separate processes will be spawned, creating separate microcosms, so there should be no problem there either.
I hope this answers your question, if not let me know.
p.s. I'm slowly answering my way up to 1500 to make that tag, but if someone beats me to it I'd be all the happier for it of course ;)