I've been using the simplify function in SymPy to simplify some long, complicated equations, but it is not proving sufficient: it frequently does not simplify expressions as far as possible, which leads to numerical errors when my program goes on to solve the equations.
Does anyone know of any other symbolic engines with a simplify function that can be used instead?
Many thanks.
Maybe you could use Python's subprocess module to run Maxima on behalf of your Python program? This is what maxima-mode in Emacs does; just do something similar. Start Maxima, keep file handles to its input/output, feed it equations and let it mangle them to your desire (Maxima has lots of expression-transforming functions), and read the result back from the output file handle.
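A rough sketch of that idea, assuming the maxima executable is on your PATH (the helper name maxima_simplify is just illustrative, and the output parsing will need tuning since Maxima's echoing and result formatting depend on its settings):

import subprocess

# Start one long-running Maxima process and talk to it over pipes.
maxima = subprocess.Popen(
    ["maxima", "--very-quiet"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    universal_newlines=True,
)
maxima.stdin.write("display2d:false$\n")  # ask for one-line output instead of 2-D ASCII art

def maxima_simplify(expression):
    # Feed one expression to Maxima and read back one line of output.
    maxima.stdin.write("ratsimp({});\n".format(expression))
    maxima.stdin.flush()
    return maxima.stdout.readline().strip()

print(maxima_simplify("(x^2 - 1)/(x - 1)"))  # expect x+1, up to Maxima's formatting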
I am interested in rewriting an old Fortran code in Python. The code solves for a general field variable, call it F (velocity, temperature, pressure, etc.), but to solve for each variable we have to declare an EQUIVALENCE between that variable and F.
For example, something like this:
EQUIVALENCE (F(1,1,1),TP(1,1)),(FOLD(1,1,1),TPOLD(1,1))
Is there a Python version of the above concept?
To my knowledge, there is no way to control memory layout like that in Python.
You can perhaps simply use a list.
F=[]
and
FOLD=[]
When you do
F=FOLD
F and FOLD will point to the same data.
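A small illustration of that aliasing (not from the original post):

FOLD = [1.0, 2.0, 3.0]
F = FOLD             # both names now refer to the same list object
F[0] = 99.0
print(FOLD[0])       # 99.0 -- a change through one name is visible through the other
print(F is FOLD)     # True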
I would suggest using numpy and scipy to build your solvers, and using Python concepts to make the code efficient, rather than trying to mimic Fortran concepts, especially very old ones.
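If the goal really is two names sharing the same storage, the closest numpy analogue of EQUIVALENCE is a view; a minimal sketch (shapes chosen arbitrarily):

import numpy as np

F = np.zeros((4, 4, 4))
TP = F[:, :, 0]       # a view, not a copy: TP and this slice of F share memory
TP[0, 0] = 42.0
print(F[0, 0, 0])     # 42.0 -- writes through TP show up in F
print(TP.base is F)   # True -- TP does not own its data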
I have an assignment in numerical analysis that consists of implementing algorithms for root-finding problems. Among them is Newton's method, which evaluates a function f(x) and its first derivative at each iteration.
For that method I need a way for the user of my application to enter a (mathematical) function, save it as a variable, and use it to evaluate that function at different points. I know just the very basics of Python programming and maybe this is pretty easy, but how can I do it?
If you trust the user, you could read in a string containing the complete function expression, written as it would be in Python, and then call eval() on that string; Python itself evaluates the expression. However, the user could use that string to do many things in your program, some of them very nasty, such as taking over your computer or deleting files.
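For example, a minimal sketch of the eval() approach for the Newton iteration described in the question (trusted input only; the helper names and the finite-difference derivative are illustrative choices, not part of the question):

import math

# Python 3's input(); on Python 2 use raw_input() instead
expr = input("Enter f(x) in Python syntax, e.g. x**3 - 2*x - 5: ")

def f(x):
    # eval() sees the user's expression with x bound to the argument
    # and the math module available for sin, cos, exp, ...
    return eval(expr, {"x": x, "math": math})

def newton(f, x0, tol=1e-10, max_iter=50):
    # Newton iteration with a central-difference estimate of f'(x)
    x = x0
    for _ in range(max_iter):
        h = 1e-6
        dfdx = (f(x + h) - f(x - h)) / (2*h)
        x = x - f(x)/dfdx
        if abs(f(x)) < tol:
            return x
    return x

print(newton(f, x0=2.0))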
If you do not trust the user, you have much more work to do. You could program a "function builder", much like the equation editor in Microsoft Word and similar programs. If you "know just the very basics of Python programming" this is beyond you, but you might be able to use a search engine to find one for Python.
One more possibility is to write your own expression evaluator. That would also be beyond you at this point, but again you might be able to find one you can use.
If you need more detail, show some more work of your own and then ask.
I have this Python code, which solves a linear system in three variables.
import numpy as np
from sympy import *
init_printing(use_latex='mathjax')
A = Matrix([[-2,3,-1],[2,2,3],[-4,-1,1]])
x, y, z = symbols('x,y,z')
X = Matrix([[x], [y], [z]])
B = Matrix([[1],[1],[1]])
solve(A*X-B)
I am happy with, as well as baffled by, that output. I want to understand what steps SymPy follows to solve this, and which solver it is using.
Part 1 of the question: how is SymPy solving A*X - B above?
Part 2: in general, is there a way to see the disassembly of any Python program (for the purpose of understanding it)?
There are two basic methods:
Read the source
The best way to understand it is to read the source. In IPython, you can type solve?? and it will show you the source code, as well as what file that source is in. You can also look at the SymPy GitHub.
solve in SymPy is a bit complicated, because it can solve many different types of equations. I believe in this case, you want to look at solve_linear_system, which uses row reduction. That will be replaced with linsolve in a future version, which uses essentially the same algorithm (Gauss-Jordan elimination).
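As an illustration (not taken from the answer itself), the system from the question can be handed to solve_linear_system directly as an augmented matrix, and the row reduction it relies on can be inspected with rref():

from sympy import Matrix, symbols, solve_linear_system

x, y, z = symbols('x y z')
A = Matrix([[-2, 3, -1], [2, 2, 3], [-4, -1, 1]])
B = Matrix([1, 1, 1])

augmented = A.row_join(B)                    # the augmented matrix [A | B]
print(solve_linear_system(augmented, x, y, z))
print(augmented.rref())                      # reduced row echelon form and pivot columns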
Use a visual debugger
Another way to understand what is going on is to step through the code in a visual debugger. I recommend a debugger that can show you the code of the function being run, as well as a list of the variables along with their values (pdb is not a great debugger in this respect). I personally prefer PuDB, which runs in the terminal, but there are other good ones as well. The advantage of using a debugger is that you can see exactly which code paths are taken and what values the variables have at each step.
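For instance, assuming PuDB is installed (pip install pudb) and the symbols from the question's session are defined, a breakpoint can be set just before the call of interest:

import pudb

pudb.set_trace()       # execution pauses here in the terminal debugger
sol = solve(A*X - B)   # now step into solve() to follow the code path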
The Question:
Given a sympy expression, is there an easy way to generate python code (in the end I want a .py or perhaps a .pyc file)? I imagine this code would contain a function that is given any necessary inputs and returns the value of the expression.
Why
I find myself fairly frequently needing to generate Python code to compute something that is nasty to derive by hand, such as the Jacobian matrix of a nasty nonlinear function.
I can use SymPy to derive the expression for the nonlinear thing I want: very good. What I then want is to generate Python code from the resulting SymPy expression and save that code to its own module. I've done this before, but I had to:
Call str(sympyResult)
Do custom things with regular expressions to get this string to look like valid Python code
Write this Python code to a file
I note that SymPy has code generation capabilities for several other languages, but not Python. Is there an easy way to get Python code out of SymPy?
I know of several possible but problematic ways around this problem:
I know that I could just call evalf on the SymPy expression and plug in the numbers I want. This has several unfortunate side effects:
dependency: my code now depends on sympy to run. This is bad.
efficiency: sympy now must run every time I numerically evaluate: even if I pickle and unpickle the expression, I still need evalf every time.
I also know that I could generate, say, C code and then wrap that code using a host of tools (python/C api, cython, weave, swig, etc...). This, however, means that my code now depends on there being an appropriate C compiler.
Edit: Summary
It seems that sympy.python, or possibly just str(expression), is what there is (see the answer from smichr and the comment from Oliver W.), and it works for simple scalar expressions.
That doesn't help much with things like Jacobians, but then it seems that sympy.printing.print_ccode chokes on matrices as well. I suppose code that could handle the printing of matrices to another language would have to assume matrix support in the destination language, which for python would probably mean reliance on the presence of things like numpy. It would be nice if such a way to generate numpy code existed, but it seems it does not.
If you don't mind having a SymPy dependency in your code itself, a better solution is to generate the SymPy expression in your code and use lambdify to evaluate it. This will be much faster than using evalf, especially if you use numpy.
You could also look at using the printer in sympy.printing.lambdarepr directly, which is what lambdify uses to convert an expression into a lambda function.
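A small sketch of the lambdify route (the expression here is an illustrative Jacobian, not something from the question):

import sympy as sp

x, y = sp.symbols('x y')
expr = sp.Matrix([[x**2, sp.exp(y) + x]]).jacobian([x, y])

jac = sp.lambdify((x, y), expr, modules='numpy')  # a plain Python function backed by numpy
print(jac(2.0, 0.0))  # returns a numpy array; no SymPy objects at call time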
The function you are looking for to generate Python code is python (i.e. sympy.python). Although it generates Python code, that code will need some tweaking to remove its dependence on SymPy objects, as Oliver W. pointed out.
>>> import sympy as sp
>>> x = sp.Symbol('x')
>>> y = sp.Symbol('y')
>>> print(sp.python(sp.Matrix([[x**2,sp.exp(y) + x]]).jacobian([x, y])))
x = Symbol('x')
y = Symbol('y')
e = MutableDenseMatrix([[2*x, 0], [1, exp(y)]])
I'm currently working on translating some C code to Python. This code is used to help identify errors arising from the CLEAN algorithm used in radio astronomy. To do this analysis, the values of the Fourier transforms of the intensity map, the Q Stokes map and the U Stokes map must be found at specific pixel values (given by ANT_pix). These maps are just 257*257 arrays.
The code below takes a few seconds to run in C but takes hours to run in Python. I'm pretty sure it is terribly optimized, as my knowledge of Python is quite poor.
Thanks for any help you can give.
Update: my question is whether there is a better way to implement the loops in Python that will speed things up. I've read quite a few answers to other Python questions that recommend avoiding nested for loops where possible, and I'm just wondering if anyone knows a good way of implementing something like the Python code below without the loops, or with better-optimised loops. I realise this may be a tall order though!
I've been using the FFT up till now, but my supervisor wants to see what sort of difference the DFT will make. This is because the antenna positions will not, in general, fall at exact pixel values, and using the FFT requires rounding to the closest pixel value.
I'm using Python because CASA, the computer program used to reduce radio astronomy datasets, is written in Python, and implementing Python scripts in it is far, far easier than doing so in C.
Original Code
import math
import numpy

def DFT_Vis(ANT_Pix="", IMap="", QMap="", UMap="", NMap="", Nvis=""):
    UV = numpy.zeros([Nvis, 6])
    Offset = (NMap + 1) / 2
    ANT = ANT_Pix + Offset
    SumI = 0
    SumRL = 0
    SumLR = 0
    RL = QMap + 1j*UMap
    LR = QMap - 1j*UMap
    # lookup table of the NMap complex exponentials exp(-2*pi*i*z/NMap)
    Factor = [math.e**(-2j*math.pi*z/NMap) for z in range(NMap)]
    for i in range(Nvis):
        X = ANT[i, 0]
        Y = ANT[i, 1]
        for l in range(NMap):
            for k in range(NMap):
                Temp = Factor[int((X*l) % NMap)]*Factor[int((Y*k) % NMap)]
                SumI += IMap[l, k]*Temp
                SumRL += RL[l, k]*Temp
                SumLR += IMap[l, k]*Temp
        UV[i, 0] = SumI.real
        UV[i, 1] = SumI.imag
        UV[i, 2] = SumRL.real
        UV[i, 3] = SumRL.imag
        UV[i, 4] = SumLR.real
        UV[i, 5] = SumLR.imag
        SumI = 0
        SumRL = 0
        SumLR = 0
    return UV
You should probably use numpy's Fourier transform code rather than writing your own: http://docs.scipy.org/doc/numpy/reference/routines.fft.html
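If the non-integer antenna positions really do call for a direct DFT, the nested l/k loops can still be pushed down into numpy array operations. The sketch below is one possible vectorization (not the poster's code); it assumes ANT_Pix has shape (Nvis, 2), the maps have shape (NMap, NMap), and that SumLR was meant to accumulate LR rather than IMap:

import numpy as np

def dft_vis_numpy(ANT_Pix, IMap, QMap, UMap, NMap, Nvis):
    UV = np.zeros((Nvis, 6))
    ANT = ANT_Pix + (NMap + 1) / 2
    RL = QMap + 1j*UMap
    LR = QMap - 1j*UMap
    idx = np.arange(NMap)
    Factor = np.exp(-2j*np.pi*idx/NMap)
    for i in range(Nvis):
        X, Y = ANT[i, 0], ANT[i, 1]
        # same truncation to integer indices as Factor[int((X*l) % NMap)] above
        wl = Factor[((X*idx) % NMap).astype(int)]   # shape (NMap,)
        wk = Factor[((Y*idx) % NMap).astype(int)]   # shape (NMap,)
        kernel = np.outer(wl, wk)                   # shape (NMap, NMap), replaces the l/k loops
        SumI = (IMap*kernel).sum()
        SumRL = (RL*kernel).sum()
        SumLR = (LR*kernel).sum()                   # assumes LR was intended here
        UV[i] = [SumI.real, SumI.imag,
                 SumRL.real, SumRL.imag,
                 SumLR.real, SumLR.imag]
    return UV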
If you are interested in boosting the performance of your script, Cython could be an option.
I am not an expert on the FFT, but my understanding is that the FFT is simply a fast way to compute the DFT. So to me your question sounds like you are trying to write a bubble sort algorithm to see if it gives a better answer than quicksort. They are both sorting algorithms that would give the same result!
So I am questioning your basic premise. I am wondering if you can just change your rounding on your data and get the same result from the SciPy FFT code.
Also, according to my DSP textbook, the FFT can produce a more accurate answer than computing the DFT the long way, simply because floating point operations are inexact, and the FFT invokes fewer floating point operations along the way to finding the correct answer.
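A quick numerical check of that point (illustrative, not from the answer): a brute-force DFT built from the definition and numpy's FFT agree to round-off on random data.

import numpy as np

x = np.random.rand(257)
n = np.arange(x.size)
dft_matrix = np.exp(-2j*np.pi*np.outer(n, n)/x.size)  # W[j, k] = exp(-2*pi*i*j*k/N)
dft = np.dot(dft_matrix, x)                           # O(N^2) transform from the definition
fft = np.fft.fft(x)                                   # O(N log N) transform
print(np.allclose(dft, fft))                          # True: same result up to round-off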
If you have some working C code that does the calculation you want, you could always wrap the C code to let you call it from Python. Discussion here: Wrapping a C library in Python: C, Cython or ctypes?
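As a hypothetical example of the ctypes route (the library name libdft.so and the function dft_vis are made up; the real C code would define its own interface):

import ctypes
import numpy as np

lib = ctypes.CDLL("./libdft.so")          # the C code, compiled as a shared library
lib.dft_vis.restype = None
lib.dft_vis.argtypes = [
    np.ctypeslib.ndpointer(dtype=np.float64, flags="C_CONTIGUOUS"),  # input map
    ctypes.c_int,                                                    # NMap
    np.ctypeslib.ndpointer(dtype=np.float64, flags="C_CONTIGUOUS"),  # output UV array
]

imap = np.ascontiguousarray(IMap, dtype=np.float64)
uv = np.zeros((Nvis, 6))
lib.dft_vis(imap, imap.shape[0], uv)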
To answer your actual question: as #ZoZo123 noted, it would be a big win to change from range() to xrange(). With range(), Python has to build a list of numbers and then destroy the list when done; with xrange(), Python just makes an object that yields up the numbers one at a time. (Note that in Python 3.x, range() is lazy like xrange() and there is no xrange().)
Also, if this code does not have to integrate with the rest of your code, you might try running this code under PyPy. This is exactly the sort of code that PyPy can best optimize. The problem with PyPy is that currently your project must be "pure" Python, and it looks like you are using NumPy. (There are projects to get NumPy and PyPy to work together, but that's not done yet.) http://pypy.org/
If this code does need to integrate with the rest of your code, then I think you need to look at Cython (as noted by #Krzysztof Rosiński).