I'm struggling with the fact that elements of sympy.MatrixSymbol don't seem to interact well with sympy's differentiation routines.
The reason I'm trying to work with elements of sympy.MatrixSymbol rather than "normal" sympy symbols is that I want to autowrap a large function, and this seems to be the only way to overcome argument-count limitations and enable input of a single array.
To give the reader a picture of the restrictions on possible solutions, I'll start with an overview of my intentions; the hasty reader can jump straight to the code blocks below, which illustrate my problem.
1. Declare a vector or array of variables of some sort.
2. Build some expressions out of the elements of the former; these expressions are to make up the components of a vector-valued function of said vector. In addition to this function, I'd like to obtain the Jacobian w.r.t. the vector.
3. Use autowrap (with the cython backend) to get numerical implementations of the vector function and its Jacobian. This puts a restriction on the former steps: the input of the function should be given as a single vector rather than a list of symbols, both because there seems to be a limit on the number of inputs for an autowrapped function, and to ease interaction with scipy later on (i.e. avoid having to unpack numpy vectors into lists all the time).
On my journey, I ran into two issues:
Cython does not seem to like some sympy functions, among them sympy.Max, on which I heavily rely. Additionally, the "helpers" kwarg of autowrap seems unable to handle multiple helpers at once.
This is by itself not a big deal, as I learned to circumvent it using abs() or sign(), which cython readily understands.
(see also this question on the above)
As stated before, autowrap/cython do not accept more than 509 arguments in form of symbols, at least not in my compiler setup. (See also here)
As I would prefer to give a vector rather than a list as input to the function anyways, I looked for a way to get the wrapped function to take a numpy array as input (comparable to DeferredVector + lambdify). It seems the natural way to do this is sympy.MatrixSymbol. (See thread linked above. I'm not sure there'd be an alternative, if so, suggestions are welcome.)
My latest problem then starts here: I realized that the elements of sympy.MatrixSymbol in many ways do not behave like "other" sympy symbols. One has to assign the properties real and commutative individually, which then seems to work fine. However, my real trouble starts when trying to get the Jacobian; out of the box, sympy does not get the derivatives of the elements right:
import sympy

X = sympy.MatrixSymbol("X", 10, 1)
for element in X:
    element._assumptions.update({"real": True, "commutative": True})
X[0].diff(X[0])
Out[2]: Derivative(X[0, 0], X[0, 0])
X[1].diff(X[0])
Out[15]: Derivative(X[1, 0], X[0, 0])
The following block is a minimal example of what I'd like to do, but here using normal symbols:
(I think it captures all I need; if I forgot something, I'll add it later.)
import sympy
from sympy.utilities.autowrap import autowrap
X = sympy.symbols("X:2", real = True)
expr0 = X[1]*( (X[0] - abs(X[0]) ) /2)**2
expr1 = X[0]*( (X[1] - abs(X[1]) ) /2)**2
F = sympy.Matrix([expr0, expr1])
J = F.jacobian([X[0],X[1]])
J_num = autowrap(J, args = [X[0],X[1]], backend="cython")
And here is my (currently) best guess using sympy.MatrixSymbol, which then of course fails because of the Derivative expressions within J:
X = sympy.MatrixSymbol("X", 2, 1)
for element in X:
    element._assumptions.update({"real": True, "commutative": True, "complex": False})
expr0 = X[1]*( (X[0] - abs(X[0]) ) /2)**2
expr1 = X[0]*( (X[1] - abs(X[1]) ) /2)**2
F = sympy.Matrix([expr0, expr1])
J = F.jacobian([X[0],X[1]])
J_num = autowrap(J, args = [X], backend="cython")
Here is what J looks like after running the above:
J
Out[50]:
Matrix([
[(1 - Derivative(X[0, 0], X[0, 0])*X[0, 0]/Abs(X[0, 0]))*(-Abs(X[0, 0])/2 + X[0, 0]/2)*X[1, 0], (-Abs(X[0, 0])/2 + X[0, 0]/2)**2],
[(-Abs(X[1, 0])/2 + X[1, 0]/2)**2, (1 - Derivative(X[1, 0], X[1, 0])*X[1, 0]/Abs(X[1, 0]))*(-Abs(X[1, 0])/2 + X[1, 0]/2)*X[0, 0]]])
Which, unsurprisingly, autowrap does not like:
[...]
wrapped_code_2.c(4): warning C4013: 'Derivative' undefined; assuming extern returning int
[...]
wrapped_code_2.obj : error LNK2001: unresolved external symbol Derivative
How can I tell sympy that X[0].diff(X[0]) = 1 and X[0].diff(X[1]) = 0? And perhaps even that abs(X[0]).diff(X[0]) = sign(X[0])?
Or is there any way around using sympy.MatrixSymbol and still get a cythonized function, where the input is a single vector rather than a list of symbols?
I would be grateful for any input; a workaround at any step of the process described above would be welcome too. Thanks for reading!
Edit:
One short remark: one solution I could come up with myself is to construct F and J using normal symbols, and then replace the symbols in both expressions by the elements of some sympy.MatrixSymbol. This seems to get the job done, but the replacement takes considerable time, as J can reach dimensions of ~1000x1000 and above. I would therefore prefer to avoid such an approach.
After more extensive research, it seems the problem I was describing above is already fixed in the development/github version. After updating accordingly, all the Derivative terms involving MatrixElement are resolved correctly!
See here for reference.
My Problem
I am using Sympy v. 1.11.1 with Python v. 3.8.5 (in a Jupyter Notebook). I am dealing with a large Hessian, in which terms involving Re(Pi+ * Pi-) and Im(Pi+ * Pi-) appear.
Pi+ and Pi- are complex Sympy symbols. However, one is the complex conjugate of the other, that is, conjugate(Pi+) = Pi- and vice versa. This means that the product Pi+ * Pi- is real, and the derivatives can easily be evaluated by removing the Re/Im (in one case Re(Pi+ * Pi-) = Pi+ * Pi-, in the other Im(Pi+ * Pi-) = 0).
My Question
Is it possible to tell Sympy that Pi+ and Pi- are related by a complex conjugate, and it can therefore simplify the derivatives as explained above? Or does there exist some other way to simplify my derivatives?
My Attempts
Optimally, I would like to find a way to express the above relation between Pi+ and Pi- to Python, such that it can make simplifications where needed throughout the code.
Initially I wanted to use Sympy's global assumptions and try to set an assumption that (Pi+ * Pi-) is real. However, when I try to use global assumptions it says "name 'global_assumptions' is not defined", and when I try to import it explicitly (instead of import *), it says "cannot import name 'global_assumptions' from 'sympy.assumptions'". I could not figure out the root of this problem.
My next attempt was to manually replace all instances of Re(Pi+ * Pi-) -> Pi+ * Pi- etc. with the Sympy function subs. The code replaced these instances successfully, but it never evaluated the derivatives, so I was left with unevaluated Derivative terms instead.
Please let me know if any clarification is needed.
I found a similar question, Setting Assumptions on Variables in Sympy Relative to Other Variables, and from the discussion there it seems that no efficient way to do this exists. However, seeing that it was asked back in 2013, and that the discussion pointed towards a new, improved assumption system being implemented in Sympy in the near future, it would be nice to know whether any such methods exist now.
Given one and the other, try replacing one with conjugate(other):

>>> from sympy import symbols, conjugate, re, im
>>> x, y = symbols('x y')
>>> one = x; other = y
>>> p = one*other; q = p.subs(one, conjugate(other)); re(q), im(q)
(Abs(y)**2, 0)
If you want to get back the original symbol after the simplifications wrought by the first replacement, follow up with a second replacement:
>>> p.subs(one, conjugate(other)).subs(conjugate(other), one)
x*y
I've tried searching quite a lot on this one, but being relatively new to python I feel I am missing the required terminology to find what I'm looking for.
I have a function:
def my_function(x, y):
    # code...
    return (a, b, c)
Where x and y are numpy arrays of length 2000 and the return values are integers. I'm looking for a shorthand (one-liner) to loop over this function as such:
Output = [my_function(X[i],Y[i]) for i in range(len(Y))]
Where X and Y are of shape (135, 2000). However, after running this I currently have to do the following to separate 'Output' into three numpy arrays:
Output = np.asarray(Output)
a = Output.T[0]
b = Output.T[1]
c = Output.T[2]
Which I feel isn't the best practice. I have tried:
(a,b,c) = [my_function(X[i],Y[i]) for i in range(len(Y))]
But this doesn't seem to work. Does anyone know a quick way around my problem?
my_function(X[i], Y[i]) for i in range(len(Y))
On the verge of crossing the "opinion-based" border, ...Y[i]... for i in range(len(Y)) is usually a big no-no in Python, and an even bigger no-no when working with numpy arrays. One of the advantages of numpy is the 'vectorization' it provides, which pushes the for loop down to the (fast) C level rather than leaving it at the (slower) Python level.
So, if you rewrite my_function so it can handle arrays in a vectorized fashion, using the many tools and methods that numpy provides, you may not even need the "one-liner" you are looking for.
I have an equation to solve. It can be written as ((1-x)/(N-1))**(1-x) * x**x = 2**(-S), where N and S are constants, for example N = 201 and S = 0.5. I use sympy in Python to solve it. The script is given as follows:
from sympy import *
x=Symbol('x')
print solve( (((1-x)/200) **(1-x))* x**x - 2**(-0.5), x)
However, this raises a RuntimeError: maximum recursion depth exceeded in __instancecheck__.
I have also tried Wolfram Alpha, and it outputs a result of 0.963:
http://www.wolframalpha.com/input/?i=(((1-x)%2F200)+(1-x))*+xx+-+2**(-0.5)+%3D+0
Any suggestion is welcome. Thanks.
Assuming that you don't want a symbolic solution, just a value you can work with (like WA's 0.963), you can use mpmath for this. I'm not sure it's actually possible to express the solution in radicals; WA certainly didn't even try. You should already have mpmath installed, since SymPy depends on it (pip show sympy lists Requires: mpmath).
Specifically, mpmath.findroot seems to do what you want. It takes a callable Python object (the function to find a root of) and a starting value for x. It also accepts further parameters, such as the tolerance tol and the solver to use, which you could play around with, although they don't really seem necessary here. You could quite simply use it like this:
import mpmath
f = lambda x: (((1-x)/200) **(1-x))* x**x - 2**(-0.5)
print mpmath.findroot(f, 1)
I just used 1 as a starting value; you could probably think of a better one. Judging by the shape of your graph, there's only one root to be found, and it can be approached quite easily without fancy solvers, so this should suffice. Also, considering that "mpmath is a Python library for arbitrary-precision floating-point arithmetic", you can get a very high-precision answer from this if you wish. It has the output of
(0.963904761592753 + 0.0j)
This is actually an mpmath complex or mpc object,
mpc(real='0.96390476159275343', imag='0.0')
If you know it will have an imaginary value of 0, you can just use either of the following methods:
In [6]: abs(mpmath.mpc(23, 0))
Out[6]: mpf('23.0')
In [7]: mpmath.mpc(23, 0).real
Out[7]: mpf('23.0')
to "extract" a single float in the format of an mpf.
Say I have a function foo() that takes in a single float and returns a single float. What's the fastest/most pythonic way to apply this function to every element in a numpy matrix or array?
What I essentially need is a version of this code that doesn't use a loop:
import numpy as np
big_matrix = np.matrix(np.ones((1000, 1000)))
for i in xrange(np.shape(big_matrix)[0]):
    for j in xrange(np.shape(big_matrix)[1]):
        big_matrix[i, j] = foo(big_matrix[i, j])
I was trying to find something in the numpy documentation that will allow me to do this but I haven't found anything.
Edit: As I mentioned in the comments, specifically the function I need to work with is the sigmoid function, f(z) = 1 / (1 + exp(-z)).
If foo is really a black box that takes a scalar, and returns a scalar, then you must use some sort of iteration. People often try np.vectorize and realize that, as documented, it does not speed things up much. It is most valuable as a way of broadcasting several inputs. It uses np.frompyfunc, which is slightly faster, but with a less convenient interface.
The proper numpy way is to change your function so it works with arrays. That shouldn't be hard to do with the function in your comments
f(z) = 1 / (1 + exp(-z))
There's an np.exp function. The rest is simple math.
I'm running a model in Python and I'm trying to speed up the execution time. Through profiling the code I've found that a huge amount of the total processing time is spent in the cell_in_shadow function below. I'm wondering if there is any way to speed it up?
The aim of the function is to return a boolean stating whether the specified cell in the NumPy array is shadowed by another cell (in the x direction only). It does this by stepping backwards along the row, checking each cell against the height it would need to put the given cell in shadow. The values in shadow_map are calculated by another function not shown here; for this example, take shadow_map to be an array with values similar to:
[0] = 0 (not used)
[1] = 3
[2] = 7
[3] = 18
The add_x function is used to ensure that the array indices loop around (using clock-face arithmetic), as the grid has periodic boundaries (anything going off one side will re-appear on the other side).
def cell_in_shadow(x, y):
    """Returns True if the specified cell is in shadow, False if not."""
    # Get the global variables we need
    global grid
    global shadow_map
    global x_len
    # Record the original position and move to the left
    orig_x = x
    x = add_x(x, -1)
    while x != orig_x:
        # Get the height needed from the shadow_map (the array index is the
        # distance, using clock-face arithmetic)
        height_needed = shadow_map[(x - orig_x) % x_len]
        if grid[y, x] - grid[y, orig_x] >= height_needed:
            return True
        # Go to the cell to the left
        x = add_x(x, -1)
    return False

def add_x(a, b):
    """Adds the two numbers using clock-face arithmetic, modulo x_len."""
    global x_len
    return (a + b) % x_len
I do agree with Sancho that Cython will probably be the way to go, but here are a couple of small speed-ups:
A. Store grid[y, orig_x] in a variable before you start the while loop and use it instead. This will save a bunch of lookups into the grid array.
B. Since you are basically just starting at x_len - 1 in shadow_map and working down to 1, you can avoid using the modulus so much. Basically, change:

while x != orig_x:
    height_needed = shadow_map[(x - orig_x) % x_len]

to

for i in xrange(x_len - 1, 0, -1):
    height_needed = shadow_map[i]

or get rid of the height_needed variable altogether with:

if grid[y, x] - grid[y, orig_x] >= shadow_map[i]:
These are small changes, but they might help a little bit.
Also, if you plan on going the Cython route, I would consider having your function do this process for the whole grid, or at least a row at a time. That will save a lot of the function call overhead. However, you might not be able to really do this depending on how you are using the results.
Lastly, have you tried Psyco? It takes less work than Cython, though it probably won't give you as big a speed boost. I would certainly try it first.
If you're not limited to strict Python, I'd suggest using Cython for this. It allows static typing of the indices and efficient, direct access to a numpy array's underlying data buffer at C speed.
Check out a short tutorial/example at http://wiki.cython.org/tutorials/numpy
In that example, which is doing operations very similar to what you're doing (incrementing indices, accessing individual elements of numpy arrays), adding type information to the index variables cut the time in half compared to the original. Adding efficient indexing into the numpy arrays by giving them type information cut the time to about 1% of the original.
Most Python code is already valid Cython, so you can just use what you have and add annotations and type information where needed to give you some speed-ups.
I suspect you'd get the most out of adding type information to your indices x, y, and orig_x, and to the numpy arrays.
The following guide compares several different approaches to optimising numerical code in python:
Scipy PerformancePython
It is a bit out of date, but still helpful. Note that it refers to Pyrex, which has since been forked to create the Cython project, as mentioned by Sancho.
Personally I prefer f2py, because I think that fortran 90 has many of the nice features of numpy (e.g. adding two arrays together with one operation), but has the full speed of compiled code. On the other hand if you don't know fortran then this may not be the way to go.
I briefly experimented with Cython, and the trouble I found was that by default Cython generates code which can handle arbitrary Python types, but which is still very slow. You then have to spend time adding all the necessary Cython declarations to get it to be more specific and fast, whereas with C or Fortran you tend to get fast code straight out of the box. Again, this is biased by my already being familiar with those languages; Cython may be more appropriate if Python is the only language you know.