The Question:
Given a sympy expression, is there an easy way to generate python code (in the end I want a .py or perhaps a .pyc file)? I imagine this code would contain a function that is given any necessary inputs and returns the value of the expression.
Why
I find myself pretty frequently needing to generate python code to compute something that is nasty to derive, such as the Jacobian matrix of a nasty nonlinear function.
I can use sympy to derive the expression for the nonlinear thing I want: very good. What I then want is to generate python code from the resulting sympy expression, and save that python code to its own module. I've done this before, but I had to:
Call str(sympyResult)
Do custom things with regular expressions to get this string to look like valid python code
Write this python code to a file
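Roughly, that workflow looked something like this (a simplified sketch; the expression and the output filename are just placeholders):

import sympy as sp

x, y = sp.symbols('x y')
expr = sp.sin(x) * sp.exp(y)   # stand-in for the nasty derived expression

body = str(expr)               # e.g. 'exp(y)*sin(x)'
# ... hand-rolled regex fixes would go here (e.g. rewriting function names) ...

src = "from math import sin, exp\n\ndef f(x, y):\n    return " + body + "\n"
with open("generated_module.py", "w") as fh:
    fh.write(src)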
I note that sympy has code generation capabilities for several other languages, but not python. Is there an easy way to get python code out of sympy?
I know of several possible but problematic ways around this problem:
I know that I could just call evalf on the sympy expression and plug in the numbers I want. This has several unfortunate side effects:
dependency: my code now depends on sympy to run. This is bad.
efficiency: sympy now must run every time I numerically evaluate: even if I pickle and unpickle the expression, I still need evalf every time.
I also know that I could generate, say, C code and then wrap that code using a host of tools (python/C api, cython, weave, swig, etc...). This, however, means that my code now depends on there being an appropriate C compiler.
Edit: Summary
It seems that sympy.python, or possibly just str(expression), is what is available (see the answer from smichr and the comment from Oliver W.), and both work for simple scalar expressions.
That doesn't help much with things like Jacobians, but then it seems that sympy.printing.print_ccode chokes on matrices as well. I suppose code that could handle the printing of matrices to another language would have to assume matrix support in the destination language, which for python would probably mean reliance on the presence of things like numpy. It would be nice if such a way to generate numpy code existed, but it seems it does not.
If you don't mind having a SymPy dependency in your code itself, a better solution is to generate the SymPy expression in your code and use lambdify to evaluate it. This will be much faster than using evalf, especially if you use numpy.
You could also look at using the printer in sympy.printing.lambdarepr directly, which is what lambdify uses to convert an expression into a lambda function.
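For example, a Jacobian can be turned into a fast numerical function in a couple of lines (a minimal sketch; the expression here is just an illustration):

import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([x**2, sp.exp(y) + x])
J = F.jacobian([x, y])

# lambdify compiles the symbolic Jacobian into a numpy-backed function
jac = sp.lambdify((x, y), J, modules='numpy')
print(jac(2.0, 0.0))   # [[4. 0.] [1. 1.]]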
The function you are looking for to generate python code is python. Although it generates python code, that code will need some tweaking to remove dependence on SymPy objects as Oliver W pointed out.
>>> import sympy as sp
>>> x = sp.Symbol('x')
>>> y = sp.Symbol('y')
>>> print(sp.python(sp.Matrix([[x**2,sp.exp(y) + x]]).jacobian([x, y])))
x = Symbol('x')
y = Symbol('y')
e = MutableDenseMatrix([[2*x, 0], [1, exp(y)]])
Related
I am interested in rewriting an old Fortran code in Python. The code solves for a general field variable, call it F (velocity, temperature, pressure, etc.), but to solve for each variable we have to define an EQUIVALENCE between that variable and F.
For example, something like this:
EQUIVALENCE (F(1,1,1),TP(1,1)),(FOLD(1,1,1),TPOLD(1,1))
Is there a Python version of the above concept?
To my knowledge, there is no direct way to control memory layout or aliasing in Python the way EQUIVALENCE does.
You can perhaps simply use a list.
F=[]
and
FOLD=[]
When you do
F = FOLD
both names refer to the same list object, so changes made through one are visible through the other.
I would suggest using numpy and scipy to build the solvers and relying on Python idioms for efficiency, rather than trying to mimic Fortran concepts, especially very old ones.
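If you do move to numpy, the closest analogue to EQUIVALENCE is an array view: slicing gives a new array object that shares the same underlying memory with the original (a small sketch):

import numpy as np

F = np.zeros((3, 3, 3))   # the general field variable
TP = F[0]                 # a view: TP overlays part of F, no data is copied

TP[1, 1] = 42.0
print(F[0, 1, 1])         # 42.0 -- writing through the view also changes F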
I have this Python code which solves a 3 variable linear equation.
import numpy as np
from sympy import *
init_printing(use_latex='mathjax')
A = Matrix([[-2,3,-1],[2,2,3],[-4,-1,1]])
x,y,z= symbols('x,y,z')
X =Matrix([[x],[y],[z]])
B = Matrix([[1],[1],[1]])
solve(A*X-B)
I am happy as well as baffled by that output. I want to understand what steps sympy follows to solve this, and which solver it is using.
Part 1 of the question is How is sympy solving AX-B above?
Part 2: In general is there a method to see the disassembly for any python program (for the purpose of understanding it)?
There are two basic methods:
Read the source
The best way to understand it is to read the source. In IPython, you can type solve?? and it will show you the source code, as well as what file that source is in. You can also look at the SymPy GitHub.
solve in SymPy is a bit complicated, because it can solve many different types of equations. I believe in this case, you want to look at solve_linear_system, which uses row reduction. That will be replaced with linsolve in a future version, which uses essentially the same algorithm (Gauss-Jordan elimination).
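For comparison, on a SymPy version that has it you can call linsolve directly on the same system (a short sketch using the matrices from the question):

from sympy import Matrix, symbols, linsolve

x, y, z = symbols('x y z')
A = Matrix([[-2, 3, -1], [2, 2, 3], [-4, -1, 1]])
B = Matrix([1, 1, 1])

# linsolve accepts an (A, b) pair and row-reduces the augmented matrix
print(linsolve((A, B), x, y, z))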
Use a visual debugger
Another way to understand what is going on is to step through the code in a visual debugger. I recommend a debugger that can show you the code of the function that is being run, as well as a list of the variables along with their values (pdb is not a great debugger in this respect). I personally prefer PuDB, which runs in the terminal, but there are other good ones as well. The advantage of using a debugger is that you can see exactly what code paths are being traversed and what values the variables have at each step.
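For example, with PuDB installed (pip install pudb; the exact launcher command varies a little between versions) you can run the whole script under the debugger, or drop a breakpoint right before the call you want to step into:

# run an entire script under the debugger:
#     python -m pudb my_script.py

# or set a breakpoint programmatically:
import pudb; pudb.set_trace()

from sympy import Matrix, symbols, solve
x, y, z = symbols('x y z')
A = Matrix([[-2, 3, -1], [2, 2, 3], [-4, -1, 1]])
solve(A*Matrix([x, y, z]) - Matrix([1, 1, 1]))   # step into this call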
SymPy comes equipped with the nice sympify() function which can parse arbitrary strings into SymPy expressions. But it has two major drawbacks:
It is not safe, as it relies on the notorious eval()
It automatically simplifies the parsed expression, e.g. sympify('binomial(5,3)') will return the number 10.
So my questions are:
First, is there a way to "just parse" the string, without any additional computations? I want to achieve something like this effect:
latex(parse('binomial(5,3)')) #returns '{\\binom{5}{3}}'
Second, is there an accepted way to sympify (i.e. parse and compute) arbitrary user-generated strings while remaining safe? It is done by SymPy Gamma, so it's possible in practice, but the question is how much dirty work is needed.
Look at the internal functions in the SymPy parsing module.
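For instance, parse_expr accepts an evaluate=False flag, and individual functions can also be constructed unevaluated; whether a given function stays unevaluated when parsed depends on the SymPy version, so treat this as a sketch:

from sympy import binomial, latex
from sympy.parsing.sympy_parser import parse_expr

expr = parse_expr('binomial(5, 3)', evaluate=False)  # may stay unevaluated on newer versions
expr2 = binomial(5, 3, evaluate=False)               # explicitly unevaluated

print(latex(expr2))   # {\binom{5}{3}}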
There is no official way to do it. We need to rewrite sympify to avoid eval. Note that SymPy Gamma just uses sympify. It remains safe because it's sandboxed on App Engine.
I've been using the simplify function in Sympy to simplify some long complicated equations, but it's not proving sufficient, as it frequently does not simplify things as much as possible, giving my program numerical errors when it comes to solving the equations.
Does anyone know of any other symbolic engines with a simplify function that can be used instead?
Many thanks.
Maybe you could use Python's subprocess module to run Maxima on behalf of your python program? This is what maxima-mode in Emacs does, so just do something similar: start Maxima, keep file handles to its input/output, feed it equations and let it mangle them to your desire (Maxima has lots of equation-changing functions), and read back the result from the output file handle.
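A minimal sketch of that idea, assuming the maxima executable is on your PATH (flags and output handling may need adjusting for your Maxima version):

import subprocess

def maxima_simplify(expr):
    """Send one expression to Maxima's ratsimp and return the printed result."""
    cmd = "display2d:false$ print(ratsimp({}))$".format(expr)
    out = subprocess.run(["maxima", "--very-quiet"],
                         input=cmd, capture_output=True, text=True)
    return out.stdout.strip()

print(maxima_simplify("(x^2 - 1)/(x - 1)"))   # expect: x+1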
Sympy vs. Maxima
pyMaxima
I'm looking into speeding up my python code, which is all matrix math, using some form of CUDA. Currently my code is using Python and Numpy, so it seems like it shouldn't be too difficult to rewrite it using something like either PyCUDA or CudaMat.
However, on my first attempt using CudaMat, I realized I had to rearrange a lot of the equations in order to keep the operations all on the GPU. This included the creation of many temporary variables so I could store the results of the operations.
I understand why this is necessary, but it turns what were once easy-to-read equations into somewhat of a mess that is difficult to inspect for correctness. Additionally, I would like to be able to easily modify the equations later on, which isn't easy in their converted form.
The package Theano manages to do this by first creating a symbolic representation of the operations, then compiling them to CUDA. However, after trying Theano out for a bit, I was frustrated by how opaque everything was. For example, just getting the actual value for myvar.shape[0] is made difficult since the tree doesn't get evaluated until much later. I would also much prefer less of a framework in which my code must conform to a library that acts invisibly in the place of Numpy.
Thus, what I would really like is something much simpler. I don't want automatic differentiation (there are other packages like OpenOpt that can do that if I require it), or optimization of the tree, but just a conversion from standard Numpy notation to CudaMat/PyCUDA/somethingCUDA. In fact, I want to be able to have it evaluate to just Numpy without any CUDA code for testing.
I'm currently considering writing this myself, but before even considering such a venture, I wanted to see if anyone else knows of similar projects or a good starting place. The only other project I know of that might be close to this is SymPy, but I don't know how easy it would be to adapt to this purpose.
My current idea would be to create an array class that looks like a Numpy.array class. Its only function would be to build a tree. At any time, that symbolic array class could be converted to a Numpy array class and be evaluated (there would also be a one-to-one correspondence). Alternatively, the array class could be traversed and have CudaMat commands be generated. If optimizations are required they can be done at that stage (e.g. re-ordering of operations, creation of temporary variables, etc.) without getting in the way of inspecting what's going on.
Any thoughts/comments/etc. on this would be greatly appreciated!
Update
A usage case may look something like the following (where sym is the theoretical module and we are doing something such as calculating the gradient):
W = sym.array(np.random.rand(numVisible, numHidden))
delta_o = -(x - z)
delta_h = sym.dot(delta_o, W)*h*(1.0-h)
grad_W = sym.dot(X.T, delta_h)
In this case, grad_W would actually just be a tree containing the operations that needed to be done. If you wanted to evaluate the expression normally (i.e. via Numpy) you could do:
npGrad_W = grad_W.asNumpy()
which would just execute the Numpy commands that the tree represents. If on the other hand, you wanted to use CUDA, you would do:
cudaGrad_W = grad_W.asCUDA()
which would convert the tree into expressions that can be executed via CUDA (this could happen in a couple of different ways).
That way it should be trivial to: (1) test grad_W.asNumpy() == grad_W.asCUDA(), and (2) convert your pre-existing code to use CUDA.
Have you looked at the GPUArray portion of PyCUDA?
http://documen.tician.de/pycuda/array.html
While I haven't used it myself, it seems like it would be what you're looking for. In particular, check out the "Single-pass Custom Expression Evaluation" section near the bottom of that page.
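A rough sketch of that style, assuming PyCUDA and a working CUDA toolchain are installed (the array names just mirror the update in the question):

import numpy as np
import pycuda.autoinit                     # sets up a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# host data (placeholders for h and delta_o from the question's example)
h = np.random.rand(1024).astype(np.float32)
delta_o = np.random.rand(1024).astype(np.float32)

h_gpu = gpuarray.to_gpu(h)
delta_o_gpu = gpuarray.to_gpu(delta_o)

# one fused kernel for delta_o * h * (1 - h), instead of several temporaries
delta_h_kernel = ElementwiseKernel(
    "float *out, float *d_o, float *h",
    "out[i] = d_o[i] * h[i] * (1.0f - h[i])",
    "delta_h_kernel",
)

delta_h_gpu = gpuarray.empty_like(h_gpu)
delta_h_kernel(delta_h_gpu, delta_o_gpu, h_gpu)

# compare against the plain numpy evaluation
print(np.allclose(delta_h_gpu.get(), delta_o * h * (1.0 - h)))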