Keep expression exactly as written for latex conversion - python

I want to get the expression of p__s_alpha = 1/k * Product(a_0,(i,1,O)) in latex. I use:
print sympy.latex(p__s_alpha)
When I run latex on the result I get the following equation: \frac{\prod_{i=1}^{O} a_{0}}{k}
However, I want to print this equation: \frac{1}{k} \prod_{i=1}^{O} a_{0}
Is there a way to keep the representation of the expression the way it is?

I started writing up an answer about how you could make your own custom printer that does this, but then I realized that there's already an option in latex that does what you want, the long_frac_ratio option. If the numerator of a fraction is long relative to its denominator (longer than that ratio allows), the fraction is printed as \frac{1}{b} a instead of \frac{a}{b}.
In [31]: latex(p__s_alpha, long_frac_ratio=1)
Out[31]: '\\frac{1}{k} \\prod_{i=1}^{O} a_{0}'
If you're interested, here is some of what I was going to write about writing a custom printer:
Internally, SymPy makes no distinction between a/b and a*1/b. They are both represented by the exact same object (see http://docs.sympy.org/latest/tutorial/manipulation.html).
However, the printing system is different. As you can see from that page, a/b is represented as Mul(a, Pow(b, -1)), i.e., a*b**-1, but it is the printer that converts this into a fraction format (this holds for any printer, not just the LaTeX one).
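For instance, you can inspect this internal form directly with srepr (a quick illustrative snippet):
from sympy import symbols, srepr
a, b = symbols('a b')
print(srepr(a/b))   # a/b is stored as a Mul of a and Pow(b, -1)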
The good news for you is that the printing system in SymPy is very extensible (and the other good news is that SymPy is BSD open source, so you can freely reuse the printing logic that is already there when extending it). To create a custom LaTeX printer that does what you want, you need to subclass LatexPrinter in sympy.printing.latex and override the _print_Mul function (because, as noted above, a/b is a Mul). The logic in this function is not really split up modularly, so you'll really need to copy the whole source and change the relevant parts of it [as I noted above, for this one there is already an option that does what you want, but in other cases there may not be].
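If you do need to go that route, a minimal skeleton might look like the following (just a sketch: the overridden method here only delegates to the parent, and in a real implementation you would paste in and modify the body of LatexPrinter._print_Mul):
from sympy import Symbol, Product
from sympy.printing.latex import LatexPrinter

class MyLatexPrinter(LatexPrinter):
    def _print_Mul(self, expr):
        # Copy the body of LatexPrinter._print_Mul here and change how
        # fractions are assembled; this sketch just defers to the default.
        return super(MyLatexPrinter, self)._print_Mul(expr)

def my_latex(expr, **settings):
    # Same calling convention as sympy.latex, but using the custom printer.
    return MyLatexPrinter(settings).doprint(expr)

k, a0, i, O = Symbol('k'), Symbol('a_0'), Symbol('i'), Symbol('O')
print(my_latex(Product(a0, (i, 1, O)) / k))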
And a final note: if you make a modification that would probably be wanted by a wider audience, we would love to have you submit it as a pull request to SymPy.

Related

Is there a Python function or extension that is similar to Matlab's format short?

The command format short in Matlab makes all the printouts in the command window be "Short, fixed-decimal format with 4 digits after the decimal point."
I know there is np.round, but I would like to have the functionality that Matlab offers in Python so I don't have to write round every time. This is in order to get a better overview of arrays/dataframes when they are printed.
I am interested in automatic rounding of numbers/floats printed in the terminal, without using np.round.
Ideally I would also like to be able to choose the number of digits (4).
Thanks
You can use numpy.set_printoptions (see its documentation), for example:
>>> import numpy as np
>>> np.set_printoptions(precision=4)
>>> print(np.array([1.123456789]))
[1.1235]
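If the printouts you want to control include pandas DataFrames as well (the question mentions dataframes; this assumes pandas is in use), pandas has an analogous display option:
import pandas as pd

pd.set_option('display.precision', 4)       # 4 digits of display precision for floats
print(pd.DataFrame({'x': [1.123456789]}))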

Highly precise division, multiplication, and exponentiation of large complex numbers

I am working on a project that requires highly precise division of large numbers which will sometimes be complex numbers. I need to do this in python, preferably python 3.7, but everything I have tried so far has not worked at all.
With real numbers, I can simply use the decimal module, but I found that the decimal module does not work for complex numbers. In addition, when I have tried to extend the decimal module to complex numbers, it has failed: I get inaccurate results with both large real and large complex inputs. And when I have tried to install external modules that provide this functionality, that has not worked either.
from decimal import *

def div(a, b):
    # Multiply by the conjugate of b so the denominator becomes real,
    # then convert the real and imaginary parts to Decimal and divide.
    y = b.real - (b.imag * 1j)   # conjugate of b
    a = a * y
    b = Decimal((b * y).real)
    return [Decimal(a.real) / b, Decimal(a.imag) / b]
Here is my code for using the decimal module on complex numbers. To demonstrate what I mean (and to demonstrate that this method of division works), I'll show some inputs and outputs below. The first shows the method working with a relatively small input, and the second shows it very much not working with a large input.
>>> div(13243,23)[0]*23
Decimal('13243.00000000000000000000000')
>>> div(15**17,23)[0]*23
Decimal('98526125335693355453.21739130')
The result from trying with 15**17 is not only off from 15**17 by a few thousand, but it's also not a whole number. This is very incorrect. As I said, I need this method to carry over to complex numbers, and as it stands, storing the parts of a complex number in a list is a pain and not ideal. It was necessary in order to use Decimal on the parts, but it clearly hasn't worked.
I thought at first that perhaps I just needed to set the precision higher, but even when it is set to 1000 it still fails.
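Presumably the damage happens before Decimal is even involved: a*y is evaluated in ordinary complex (double-precision float) arithmetic, and 15**17 needs more bits than a float can hold. A quick check seems to confirm that:
>>> a = 15**17
>>> a
98526125335693359375
>>> complex(a).real == a   # the conversion to complex/float already drops low-order digits
False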
At this point, I tried to find some modules that would allow me to do this. I found two: mpmath and gmpy. I tried to install gmpy via pip, on multiple versions of python and with multiple versions of gmpy, and each time I got an error message, usually one about a server "actively refusing connection", as well as others saying it wasn't supported, etc.
This kind of leaves me stuck. I can't get the modules that do it for me, and when I try to do it myself it quite blatantly isn't working. Is there another module that provides this functionality out there or is there something I am particularly doing wrong with my attempts that can be fixed somehow?
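For reference, if mpmath (one of the two modules mentioned above) can eventually be installed, a minimal sketch of arbitrary-precision complex division with it might look like this (mp.dps sets the working precision in decimal digits):
from mpmath import mp, mpc

mp.dps = 50                 # work with 50 significant decimal digits
a = mpc(15**17, 2)          # a complex value with a large real part
b = mpc(23, 1)
q = a / b
print(q * b)                # recovers a to within the working precision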

How to set a general option in Python to display only N digits everywhere?

I want to work with 3 digits after the decimal point in Python. What is the relevant setting to modify?
I want that 1.0 / 3 would return 0.333, and not 0.3333333333333333 like it is the case in my Jupyter Notebook, using python 2.7.11 and Anaconda 4.0.0.
In my research, I heard about the Decimal class, but I don't want to wrap every float I display in Decimal(x), nor use string formatting or the round function every time, though I do use round for the time being.
I think there should be a general solution: a setting that is configured only once.
There is no "one-time" solution to your problem.
And I think that your approach might be a little misguided.
I suppose that your interaction with Jupyter or IPython has led you to the conclusion that python is quite handy as a numerical calculator. Unfortunately, both of those programs are just wrappers or REPLs, and in the background they come with the full programming-language flexibility that Python offers.
Try the built-in round function:
round(1.0/3, 3)
or use string formatting, which only changes how the value is displayed:
>>> 1.0/3
0.3333333333333333
>>> '{:0.3f}'.format(1.0/3)
'0.333'

Generate python code from a sympy expression?

The Question:
Given a sympy expression, is there an easy way to generate python code (in the end I want a .py or perhaps a .pyc file)? I imagine this code would contain a function that is given any necessary inputs and returns the value of the expression.
Why
I find myself pretty frequently needing to generate python code to compute something that is nasty to derive, such as the Jacobian matrix of a nasty nonlinear function.
I can use sympy to derive the expression for the nonlinear thing I want: very good. What I then want is to generate python code from the resulting sympy expression, and save that python code to its own module. I've done this before, but I had to:
Call str(sympyResult)
Do custom things with regular expressions to get this string to look like valid python code
Write this python code to a file
I note that sympy has code generation capabilities for several other languages, but not python. Is there an easy way to get python code out of sympy?
I know of several possible but problematic ways around this problem:
I know that I could just call evalf on the sympy expression and plug in the numbers I want. This has several unfortunate side effects:
dependency: my code now depends on sympy to run. This is bad.
efficiency: sympy now must run every time I numerically evaluate: even if I pickle and unpickle the expression, I still need evalf every time.
I also know that I could generate, say, C code and then wrap that code using a host of tools (python/C api, cython, weave, swig, etc...). This, however, means that my code now depends on there being an appropriate C compiler.
Edit: Summary
It seems that sympy.python (or possibly just str(expression)) is what there is (see the answer from smichr and the comment from Oliver W.), and it works for simple scalar expressions.
That doesn't help much with things like Jacobians, but then it seems that sympy.printing.print_ccode chokes on matrices as well. I suppose code that could handle the printing of matrices to another language would have to assume matrix support in the destination language, which for python would probably mean reliance on the presence of things like numpy. It would be nice if such a way to generate numpy code existed, but it seems it does not.
If you don't mind having a SymPy dependency in your code itself, a better solution is to generate the SymPy expression in your code and use lambdify to evaluate it. This will be much faster than using evalf, especially if you use numpy.
You could also look at using the printer in sympy.printing.lambdarepr directly, which is what lambdify uses to convert an expression into a lambda function.
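For instance, a small sketch of the lambdify route, using the same Jacobian as the example further down (the numeric arguments here are arbitrary):
import sympy as sp

x, y = sp.symbols('x y')
J = sp.Matrix([[x**2, sp.exp(y) + x]]).jacobian([x, y])

# lambdify compiles the SymPy matrix into a function that uses numpy internally
f_J = sp.lambdify((x, y), J, modules='numpy')
print(f_J(2.0, 0.5))   # evaluates the Jacobian numerically as a numpy array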
The function you are looking for to generate python code is python. Although it generates python code, that code will need some tweaking to remove dependence on SymPy objects as Oliver W pointed out.
>>> import sympy as sp
>>> x = sp.Symbol('x')
>>> y = sp.Symbol('y')
>>> print(sp.python(sp.Matrix([[x**2,sp.exp(y) + x]]).jacobian([x, y])))
x = Symbol('x')
y = Symbol('y')
e = MutableDenseMatrix([[2*x, 0], [1, exp(y)]])

Automatic CudaMat conversion in Python

I'm looking into speeding up my python code, which is all matrix math, using some form of CUDA. Currently my code is using Python and Numpy, so it seems like it shouldn't be too difficult to rewrite it using something like either PyCUDA or CudaMat.
However, on my first attempt using CudaMat, I realized I had to rearrange a lot of the equations in order to keep the operations all on the GPU. This included the creation of many temporary variables so I could store the results of the operations.
I understand why this is necessary, but it turns what were once easy-to-read equations into somewhat of a mess that is difficult to inspect for correctness. Additionally, I would like to be able to easily modify the equations later on, which isn't easy in their converted form.
The package Theano manages to do this by first creating a symbolic representation of the operations, then compiling them to CUDA. However, after trying Theano out for a bit, I was frustrated by how opaque everything was. For example, just getting the actual value of myvar.shape[0] is made difficult since the tree doesn't get evaluated until much later. I would also much prefer something that is less of a framework, where my code must conform to a library that acts invisibly in the place of Numpy.
Thus, what I would really like is something much simpler. I don't want automatic differentiation (there are other packages like OpenOpt that can do that if I require it), or optimization of the tree, but just a conversion from standard Numpy notation to CudaMat/PyCUDA/somethingCUDA. In fact, I want to be able to have it evaluate to just Numpy without any CUDA code for testing.
I'm currently considering writing this myself, but before I even consider such a venture, I wanted to see if anyone else knows of similar projects or a good starting place. The only other project I know of that might be close to this is SymPy, but I don't know how easy it would be to adapt to this purpose.
My current idea would be to create an array class that looks like a Numpy array class. Its only function would be to build a tree. At any time, that symbolic array class could be converted to a Numpy array class and evaluated (there would also be a one-to-one parity). Alternatively, the array class could be traversed and have CudaMat commands generated from it. If optimizations are required they can be done at that stage (e.g. re-ordering of operations, creation of temporary variables, etc.) without getting in the way of inspecting what's going on.
Any thoughts/comments/etc. on this would be greatly appreciated!
Update
A usage case may look something like the following (where sym is the theoretical module), where we might be doing something such as calculating a gradient:
W = sym.array(np.random.rand(numVisible, numHidden))
delta_o = -(x - z)
delta_h = sym.dot(delta_o, W)*h*(1.0-h)
grad_W = sym.dot(X.T, delta_h)
In this case, grad_W would actually just be a tree containing the operations that needed to be done. If you wanted to evaluate the expression normally (i.e. via Numpy) you could do:
npGrad_W = grad_W.asNumpy()
which would just execute the Numpy commands that the tree represents. If on the other hand, you wanted to use CUDA, you would do:
cudaGrad_W = grad_W.asCUDA()
which would convert the tree into expressions that can be executed via CUDA (this could happen in a couple of different ways).
That way it should be trivial to: (1) test grad_W.asNumpy() == grad_W.asCUDA(), and (2) convert your pre-existing code to use CUDA.
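To make that concrete, here is a toy sketch (not code from the post) of what the very beginnings of such a tree-building wrapper might look like, supporting only subtraction and negation and evaluating back to Numpy on request:
import numpy as np

class SymArray:
    # Records operations in a tree instead of executing them immediately.
    def __init__(self, op, args):
        self.op, self.args = op, args

    @staticmethod
    def _wrap(value):
        return value if isinstance(value, SymArray) else SymArray('const', [value])

    def __sub__(self, other):
        return SymArray('sub', [self, SymArray._wrap(other)])

    def __neg__(self):
        return SymArray('neg', [self])

    def asNumpy(self):
        # Walk the tree and execute it with plain Numpy operations.
        if self.op == 'const':
            return np.asarray(self.args[0])
        if self.op == 'sub':
            return self.args[0].asNumpy() - self.args[1].asNumpy()
        if self.op == 'neg':
            return -self.args[0].asNumpy()
        raise NotImplementedError(self.op)

def array(values):
    return SymArray('const', [values])

# delta_o = -(x - z), built as a tree and only evaluated on request
x, z = array(np.ones(3)), array(np.zeros(3))
delta_o = -(x - z)
print(delta_o.asNumpy())   # [-1. -1. -1.]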
Have you looked at the GPUArray portion of PyCUDA?
http://documen.tician.de/pycuda/array.html
While I haven't used it myself, it seems like it would be what you're looking for. In particular, check out the "Single-pass Custom Expression Evaluation" section near the bottom of that page.
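Assuming PyCUDA is installed and a CUDA device is available, an (untested here) sketch of the GPUArray plus ElementwiseKernel approach might look like the following; the fused expression is just an example loosely modeled on the delta_h line above:
import numpy as np
import pycuda.autoinit                      # creates a CUDA context
import pycuda.gpuarray as gpuarray
from pycuda.elementwise import ElementwiseKernel

# GPUArray mimics a subset of the numpy interface, so simple expressions port directly
x = gpuarray.to_gpu(np.random.randn(1024).astype(np.float32))
z = gpuarray.to_gpu(np.random.randn(1024).astype(np.float32))
h = gpuarray.to_gpu(np.random.rand(1024).astype(np.float32))

delta_o = -(x - z)                          # runs on the GPU, one kernel per operation

# The "Single-pass Custom Expression Evaluation" idea: fuse several
# elementwise operations into a single kernel launch
fused = ElementwiseKernel(
    "float *out, float *x, float *z, float *h",
    "out[i] = -(x[i] - z[i]) * h[i] * (1.0f - h[i])",
    "fused_delta")
delta_h = gpuarray.empty_like(x)
fused(delta_h, x, z, h)
print(delta_h.get()[:5])                    # copy back to the host to inspect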
