My Problem
I am using SymPy v1.11.1 (in a Jupyter Notebook) with Python v3.8.5. I am dealing with a large Hessian in which unevaluated derivatives of terms such as Re(Pi+ * Pi-) and Im(Pi+ * Pi-) appear.
Pi+ and Pi- are complex SymPy symbols. However, one is the complex conjugate of the other, that is conjugate(Pi+) = Pi- and vice versa. This means that the product Pi+ * Pi- is real and the derivatives can be easily evaluated by removing the Re/Im (in one case Re(Pi+ * Pi-) = Pi+ * Pi-, in the other Im(Pi+ * Pi-) = 0).
My Question
Is it possible to tell Sympy that Pi+ and Pi- are related by a complex conjugate, and it can therefore simplify the derivatives as explained above? Or does there exist some other way to simplify my derivatives?
My Attempts
Optimally, I would like to find a way to express the above relation between Pi+ and Pi- to Python, such that it can make simplifications where needed throughout the code.
Initially I wanted to use SymPy's global assumptions and try to set an assumption that (Pi+ * Pi-) is real. However, when I try to use global assumptions it says "name 'global_assumptions' is not defined", and when I try to import it explicitly (instead of import *), it says "cannot import name 'global_assumptions' from 'sympy.assumptions'". I could not figure out the root of this problem.
My next attempt was to manually replace all instances of Re(Pi+ * Pi-) with Pi+ * Pi- (and likewise for the Im terms) using the SymPy function subs. The code replaced these instances successfully, but never evaluated the derivatives, so I was left with unevaluated Derivative terms instead.
Please let me know if any clarification is needed.
I found a similar question Setting Assumptions on Variables in Sympy Relative to Other Variables and it seems from the discussion there that there does not exist an efficient way to do this. However, seeing that this was asked back in 2013, and the discussions pointed towards the possibility of implementation of a new improved assumption system within Sympy in the near future, it would be nice to know if any new such useful methods exist.
Given one and the other, try replacing one with conjugate(other):
>>> from sympy import symbols, conjugate, re, im
>>> x, y = symbols("x y")
>>> one = x; other = y
>>> p = one*other; q = p.subs(one, conjugate(other)); re(q), im(q)
(Abs(y)**2, 0)
If you want to get back the original symbol after the simplifications wrought by the first replacement, follow up with a second replacement:
>>> p.subs(one, conjugate(other)).subs(conjugate(other), one)
x*y
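Applied to the question's setting, a minimal sketch (Pip and Pim are hypothetical stand-ins for Pi+ and Pi-, since + and - cannot appear in Python identifiers):
>>> from sympy import symbols, conjugate, re, im
>>> Pip, Pim = symbols("Pi^+ Pi^-")
>>> q = (Pip*Pim).subs(Pim, conjugate(Pip))  # encode the conjugate relation
>>> re(q), im(q)
(Abs(Pi^+)**2, 0)
>>> q.subs(conjugate(Pip), Pim)  # map back to the original symbol afterwards
Pi^+*Pi^-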
Related
I'm trying to use the SMT solver on a scheduling problem and could not find anything helpful in the documentation.
It seems the following ways of setting parameters do not have any effect on the solver.
from z3 import *
set_param(logic="QF_UFIDL")
s = Optimize() # or even Solver()
or even
from z3 import *
s = Optimize()
s.set("parallel.enable", True)
So how can I set [global] parameters effectively in z3py? To be most specific, I need to set the parameters below:
parallel.enable=True
auto_config=False
smtlib2_compliant=True
logic="QF_UFIDL"
Use global parameter statements like the following, on separate lines, before creating a Solver or Optimize object:
set_param('parallel.enable', True)
set_param('parallel.threads.max', 4) # default 10000
To set non-global parameters specific to a Solver or Optimize object, you can use the help() function to show available parameters:
o = Optimize()
o.help()
s = Solver()
s.help()
The following example shows how to set an Optimize parameter:
opt = Optimize()
opt.set(priority='pareto')
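Putting the question's four settings together, a sketch (assuming a recent z3py; the logic is not a parameter at all, but is chosen per solver, e.g. via SolverFor):
from z3 import set_param, SolverFor

# global parameters: set these before creating any solver
set_param('parallel.enable', True)
set_param('auto_config', False)
set_param('smtlib2_compliant', True)

# the logic is selected when the solver is created, not via set_param
s = SolverFor('QF_UFIDL')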
Use set_param, as described here: https://z3prover.github.io/api/html/namespacez3py.html#a4ae524d5f91ad1b380d8821de01cd7c3
It isn't clear what's not working for you. Are you getting an error message back? From your description, I understand that the setting does indeed take place, but you don't see any change in behavior? For that, you'll have to provide a concrete example we can look at. Note that for most parameters, the effects will only be visible with benchmarks that trigger the option, and even then it'll be hard to tell what (if any) effect it had, unless you dig into verbose log output.
Also, the parallel-solving features, which you seem to be interested in, aren't going to gain you much. See Section 9.2 of https://z3prover.github.io/papers/z3internals.html: essentially it boils down to attempting to solve the same problem with different seeds to see if one of them goes faster. If you have many cores lying around it might be worth a try, but don't expect magic out of it.
I am very new to Python (switching from Matlab) and I am currently working with the SymPy package. I realised that I can calculate the derivative of a function with f.diff(x), even though I have not imported the diff function. So, basically, f.diff(x) works but diff(f, x) returns an error.
from sympy import symbols
x = symbols('x')
f = x**2 + 1
f.diff(x)
The reason that I could think of was that diff is actually defined as a method attribute for the class Symbol and thus, f.diff(x) works as long as x is of Symbol type and f has been defined using x. Is there a way to somehow view the Symbol class definition in order to verify that a diff method attribute actually exists?
The reason that I could think of was that diff is actually defined as a method attribute for the class Symbol and thus, f.diff(x) works as long as x is of Symbol type and f has been defined using x.
This is mostly correct (corrections below).
In contrast to Matlab, Python uses namespaces. This means that you only have very basic functions, classes, etc. available by default and everything else needs to be imported into the main namespace or is only available with a “prefix” specifying the namespace. What you gain from this is that you avoid name clashes and it’s easy to trace from which module a function is coming. For instance, in your example, the reader can see that symbols was imported from the sympy module (into the main namespace). This module also has a diff function (not the method) that you could use after importing with from sympy import diff.
In this sense, each object comes along with its own namespace, which is for most practical purposes determined by its class¹.
Functions in this namespace are called methods and (usually) do something on the object itself or using the specifics of the object itself.
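For example, with the code from the question, both spellings work once diff has been imported:
from sympy import symbols, diff

x = symbols('x')
f = x**2 + 1

f.diff(x)    # method form: works without importing diff
diff(f, x)   # function form: available after the import above; both give 2*x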
Now, for the promised corrections or clarifications:
It is f’s class which is relevant here, not x’s.
You can see the class of f with type(f) and it is Add (residing in sympy.core.add).
This is because it is primarily a sum (of x**2 and 1).
More importantly, Add is a subclass of Expr (expression), which is the parent class for all SymPy expressions.
For example, the class Symbol is also a subclass of Expr.
(You can see this with type(f).mro().)
And this is the important thing here: All SymPy expressions have the diff method.
It is actually not relevant that the argument of f.diff is a Symbol or Expr.
It only needs to be something that SymPy can reasonably interpret as one.
For example f.diff("x") also works, because SymPy can translate the string "x" to a Symbol that is equivalent to your x.
Is there a way to somehow view the Symbol class definition in order to verify that a diff method attribute actually exists?
Yes. The easiest way is the basic Python function dir, which returns a list of all attributes (everything accessible by the . operator) of an object. Typically, most of these are methods. In your case, you can just call dir(f). Note that this list also contains quite a few attributes starting with _, which indicates that they are not designated for user consumption. In any reasonable programming environment (IDE, IPython, Jupyter), this list is also shown to you when you use tab completion (F, ., Tab).
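As a concrete check with f from above:
attrs = [name for name in dir(f) if not name.startswith("_")]
"diff" in attrs   # True: the diff method does exist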
However, while learning about a class by going through all its methods is usually a good approach, for SymPy expressions this is not feasible.
There are a lot of things somebody could want to do with these expressions, but you will only ever use a fraction of them.
Instead, you can guess the name of the method and thus narrow down your search considerably.
For example, you can guess that the method for differentiation starts with a d (be it for differentiate or derivative), and here the tab completion (F, ., D, Tab) only gives you four results instead of three hundred.
Another approach is to start searching the documentation (or the Internet in general) for your operation of interest (here, differentiation) instead of the object of your operation (here, SymPy expressions, i.e., instances of Expr). After all, SymPy is all about the latter, so that is kind of a given.
Finally, there is normally documentation of a class featuring all its methods.
For Expr, this is here.
Unfortunately, in the case of Expr the documentation is not exhaustive, e.g., it lacks the diff method.
While this is not ideal, it is somewhat understandable given the amount of methods as well as the duality of methods and functions of SymPy: For most methods of Expr, an analogous function can be directly imported from sympy.
¹ You can also just add stuff there (symbols.wrzlprmft = "foo"), but that’s a pretty advanced and rare usage. Also some classes are made to block this, e.g., you cannot do f.wrzlprmft = "foo".
I have an equation to solve. It can be written as ((1-x)/(N-1))**(1-x) * x**x = 2**(-S), where N and S are constants, for example N = 201 and S = 0.5. I use sympy in Python to solve it. The Python script is given as follows:
from sympy import *
x=Symbol('x')
print(solve((((1-x)/200) ** (1-x)) * x**x - 2**(-0.5), x))
However, there is a RuntimeError: maximum recursion depth exceeded in __instancecheck__
I have also tried to use Mathematica, and it can output a result of 0.963
http://www.wolframalpha.com/input/?i=(((1-x)%2F200)+(1-x))*+xx+-+2**(-0.5)+%3D+0
Any suggestion is welcome. Thanks.
Assuming that you don't want a symbolic solution, just a value you can work with (like WA's 0.963), you can use mpmath for this. I'm not sure if it's actually possible to express the solution in radicals - WA certainly didn't even try. You should already have mpmath installed, as it is a dependency of SymPy.
Specifically, mpmath.findroot seems to do what you want. It takes an actual callable Python object which is the function to find a root of, and a starting value for x. It also accepts some more parameters such as the minimum error tol and the solver to use which you could play around with, although they don't really seem necessary. You could quite simply use it like this:
import mpmath
f = lambda x: (((1-x)/200) **(1-x))* x**x - 2**(-0.5)
print(mpmath.findroot(f, 1))
I just used 1 as a starting value - you could probably think of a better one. Judging by the shape of your graph, there's only one root to be found and it can be approached quite easily, without much need for fancy solvers, so this should suffice. Also, considering that "mpmath is a Python library for arbitrary-precision floating-point arithmetic", you should be able to get a very high precision answer from this if you wished. It has the output of
(0.963904761592753 + 0.0j)
This is actually an mpmath complex or mpc object,
mpc(real='0.96390476159275343', imag='0.0')
If you know it will have an imaginary value of 0, you can just use either of the following methods:
In [6]: abs(mpmath.mpc(23, 0))
Out[6]: mpf('23.0')
In [7]: mpmath.mpc(23, 0).real
Out[7]: mpf('23.0')
to "extract" a single float in the format of an mpf.
I'm struggling with the fact that elements of sympy.MatrixSymbol don't seem to interact well with sympy's differentiation routines.
The reason I'm trying to work with elements of sympy.MatrixSymbol rather than "normal" sympy symbols is that I want to autowrap a large function, and this seems to be the only way to overcome argument-count limitations and enable input of a single array.
To give the reader a picture of the restrictions on possible solutions, I'll start with an overview of my intentions; however, the hasty reader might as well jump to the codeblocks below, which illustrate my problem.
Declare a vector or array of variables of some sort.
Build some expressions out of the elements of the former; these expressions are to make up the components of a vector valued function of said vector. In addition to this function, I'd like to obtain the Jacobian w.r.t. the vector.
Use autowrap (with the cython backend) to get numerical implementations of the vector function and its Jacobian. This puts some limitations on the former steps: (a) it is desired that the input of the function is given as a vector, rather than a list of symbols. (Both because there seems to be a limit to the number of inputs for an autowrapped function, and to ease interaction with scipy later on, i.e. avoid having to frequently unpack numpy vectors into lists.)
On my journey, I ran into 2 issues:
Cython does not seem to like some sympy functions, among them sympy.Max, upon which I heavily rely. Also, the "helpers" kwarg of autowrap seems unable to handle multiple helpers at once.
This is by itself not a big deal, as I learned to circumvent it using abs() or sign(), which cython readily understands.
(see also this question on the above)
As stated before, autowrap/cython do not accept more than 509 arguments in the form of symbols, at least not in my compiler setup. (See also here)
As I would prefer to give a vector rather than a list as input to the function anyways, I looked for a way to get the wrapped function to take a numpy array as input (comparable to DeferredVector + lambdify). It seems the natural way to do this is sympy.MatrixSymbol. (See thread linked above. I'm not sure there'd be an alternative, if so, suggestions are welcome.)
My latest problem then starts here: I realized that the elements of sympy.MatrixSymbol in many ways do not behave like "other" sympy symbols. One has to assign the properties real and commutative individually, which then seems to work fine though. However, my real trouble starts when trying to get the Jacobian; sympy seems not to get derivatives of the elements right out of the box:
import sympy

X = sympy.MatrixSymbol("X", 10, 1)
for element in X:
    element._assumptions.update({"real": True, "commutative": True})
X[0].diff(X[0])
Out[2]: Derivative(X[0, 0], X[0, 0])
X[1].diff(X[0])
Out[15]: Derivative(X[1, 0], X[0, 0])
The following block is a minimal example of what I'd like to do, but here using normal symbols:
(I think it captures all I need; if I forgot something, I'll add it later.)
import sympy
from sympy.utilities.autowrap import autowrap
X = sympy.symbols("X:2", real = True)
expr0 = X[1]*( (X[0] - abs(X[0]) ) /2)**2
expr1 = X[0]*( (X[1] - abs(X[1]) ) /2)**2
F = sympy.Matrix([expr0, expr1])
J = F.jacobian([X[0],X[1]])
J_num = autowrap(J, args = [X[0],X[1]], backend="cython")
And here is my (currently) best guess using sympy.MatrixSymbol, which then of course fails because of the Derivative expressions within J:
X= sympy.MatrixSymbol("X",2,1)
for element in X:
element._assumptions.update({"real":True, "commutative":True, "complex":False})
expr0 = X[1]*( (X[0] - abs(X[0]) ) /2)**2
expr1 = X[0]*( (X[1] - abs(X[1]) ) /2)**2
F = sympy.Matrix([expr0, expr1])
J = F.jacobian([X[0],X[1]])
J_num = autowrap(J, args = [X], backend="cython")
Here is what J looks like after running the above:
J
Out[50]:
Matrix([
[(1 - Derivative(X[0, 0], X[0, 0])*X[0, 0]/Abs(X[0, 0]))*(-Abs(X[0, 0])/2 + X[0, 0]/2)*X[1, 0], (-Abs(X[0, 0])/2 + X[0, 0]/2)**2],
[(-Abs(X[1, 0])/2 + X[1, 0]/2)**2, (1 - Derivative(X[1, 0], X[1, 0])*X[1, 0]/Abs(X[1, 0]))*(-Abs(X[1, 0])/2 + X[1, 0]/2)*X[0, 0]]])
Which, unsurprisingly, autowrap does not like:
[...]
wrapped_code_2.c(4): warning C4013: 'Derivative' undefined; assuming extern returning int
[...]
wrapped_code_2.obj : error LNK2001: unresolved external symbol Derivative
How can I tell sympy that X[0].diff(X[0])=1 and X[0].diff(X[1])=0? And perhaps even that abs(X[0]).diff(X[0]) = sign(X[0]).
Or is there any way around using sympy.MatrixSymbol that still yields a cythonized function whose input is a single vector rather than a list of symbols?
I would be grateful for any input; a workaround at any step of the process described above would be just as welcome. Thanks for reading!
Edit:
One short remark: One solution I could come up with myself is this:
Construct F and J using normal symbols; then replace the symbols in both expressions by the elements of some sympy.MatrixSymbol. This seems to get the job done, but the replacement takes considerable time, as J can reach dimensions of ~1000x1000 and above. I therefore would prefer to avoid such an approach.
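For reference, a minimal sketch of that substitution workaround, following the example from the question (it works, but as noted, the subs call is what becomes expensive for Jacobians of ~1000x1000 and above):
import sympy
from sympy.utilities.autowrap import autowrap

n = 2  # small n for illustration; the real use case is much larger
x = sympy.symbols("x:%d" % n, real=True)

# build F and J with ordinary symbols, where differentiation works
F = sympy.Matrix([x[1] * ((x[0] - abs(x[0])) / 2) ** 2,
                  x[0] * ((x[1] - abs(x[1])) / 2) ** 2])
J = F.jacobian(list(x))

# swap in the MatrixSymbol elements only after differentiating
X = sympy.MatrixSymbol("X", n, 1)
J_X = J.subs({x[i]: X[i, 0] for i in range(n)})
J_num = autowrap(J_X, args=[X], backend="cython")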
After more extensive research, it seems the problem I was describing above is already fixed in the development/GitHub version. After updating accordingly, all the Derivative terms involving MatrixElement are resolved correctly!
See here for reference.
This question is related to the answer given by @unutbu here: SymPy cannot lambdify Product.
I'd like to lambdify the derivative of a product. The exception is "global name 'Derivative' is not defined", and I assume the cause is similar to what happened with Product, i.e. there's no printer function defined for it. So I started trying to plug in a custom function:
import sympy.printing.lambdarepr as SPL

def _print_Derivative(self, expr):
    pass  # implementation

SPL.NumPyPrinter._print_Derivative = _print_Derivative
But I got immediately stuck, as the expr parameter looks something like Derivative(Product(x*z[i], (i, 0, _Dummy_4019)), x), i.e. the Product (even if correctly calculated and pretty printed) also seems not to have a lambda representation. I thought that, since the derivative is calculated by SymPy, I would just have to take care of the expansion of the product (with a loop), but I'm not sure. Since I don't know enough about SymPy's internals, I'm having a hard time figuring out how to do this properly.
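One possible workaround, assuming the upper limit of the Product can be made concrete: calling doit() evaluates both the Product and the Derivative symbolically before printing, so no custom printer is needed (the names below are illustrative, not taken from the linked answer):
import sympy

x, i = sympy.symbols("x i")
z = sympy.IndexedBase("z")
n = 4  # a concrete upper limit

p = sympy.Product(x * z[i], (i, 0, n))
d = sympy.diff(p, x).doit()   # expands the Product, then evaluates the Derivative
f = sympy.lambdify((x, z), d, modules="numpy")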