I have a small Python script that I want to convert into a single file executable using pyinstaller.
The script essentially takes an input file, manipulates it and writes an output file.
However, I need to calculate the tangent function in the script, and when I do this using either numpy.tan() or math.tan(), the final executable ends up being ridiculously huge because pyinstaller bundles the whole numpy or math module into the executable.
Thus, my question is, is there a pure python method to calculate trig functions?
My first thought was that it must be possible to define sin, cos and tan purely mathematically, but I could not find a way to do this.
The Taylor series expansion is the usual numerical method for evaluating trig functions to arbitrary precision:
http://www.efunda.com/math/taylor_series/trig.cfm
By the way, unless you expect only well-behaved inputs, you should exploit the cyclic property of the trig functions and reduce the argument modulo 2*pi first, since the truncation error grows with the input (with some power of it, to be precise).
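For instance, a minimal range-reduction sketch; the hard-coded PI constant is an assumption here, since importing math for math.pi is exactly what you are avoiding:

# Hard-coded constant; we cannot import math for math.pi.
PI = 3.141592653589793
TWO_PI = 2.0 * PI

def reduce_angle(x):
    # Map x into [-pi, pi) so a truncated Taylor series stays accurate.
    x = x % TWO_PI   # float modulo is a plain Python operator, no import needed
    if x >= PI:
        x -= TWO_PI
    return x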
It's possible to express trigonometric functions with complex numbers: Euler's formula.
However, since this requires complex arithmetic, and importing cmath is just as much off the table as math, you would have to implement the complex operations yourself to go this way.
Otherwise, a simple approximation can be obtained by evaluating a Taylor series. Again, as import math is not an option, this requires you to implement some pretty fundamental stuff like powers and factorials on your own, as in the sketch below.
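A minimal sketch of such a series evaluation; note that you do not actually need standalone power or factorial routines, because each term can be built from the previous one:

def pure_sin(x, terms=15):
    # sin(x) = x - x^3/3! + x^5/5! - ...
    # Each term is the previous one times -x^2 / ((2n)(2n+1)).
    term = x
    total = x
    for n in range(1, terms):
        term *= -x * x / ((2 * n) * (2 * n + 1))
        total += term
    return total

def pure_cos(x, terms=15):
    # cos(x) = 1 - x^2/2! + x^4/4! - ...
    term = 1.0
    total = 1.0
    for n in range(1, terms):
        term *= -x * x / ((2 * n - 1) * (2 * n))
        total += term
    return total

def pure_tan(x, terms=15):
    # Reduce the argument first (see the sketch above) for accuracy.
    return pure_sin(x, terms) / pure_cos(x, terms)

With the argument reduced into [-pi, pi) first, around 15 terms are enough to get close to double precision.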
Related
I have a real non-diagonalizable matrix that I'm looking to decompose as tidily as possible. I would love to put it in Jordan normal form, but since that's problematic numerically I'm looking for the next best thing. I've discovered that there are FORTRAN and MATLAB routines that will do a block-diagonal Schur factorization of a matrix. The FORTRAN implementation in SLICOT is MB03RD and the MATLAB implementation is bdschur (which for all I know could just be a wrapper around MB03RD).
I don't have MATLAB on my computer, and the code that's generating my matrices is in Python, so I'm looking for an equivalent function in Python. Old documentation for Python Control Systems Library indicated that an emulation of bdschur was planned, but it doesn't show up anywhere in the current docs. The Slycot repository has a FORTRAN file for MB03RD, but I can't seem to find much documentation for Slycot, and when I import it very few functions actually appear to be wrapped as Python functions.
I would like to know if anyone knows of a way to call an equivalent routine in Python, or if there exists some other similar decomposition that has an implementation in Python.
I have numbers too large to use with Python's inbuilt types and as such am using the decimal library. I want to use scipy.optimize.brentq with a function that operates on Decimals, but when the function returns a Decimal it obviously cannot be used with the optimisation function's float-based internals. How can I get around this? How can I use scipy optimisation techniques with the Decimal class for big numbers?
You can't. Scipy heavily relies on numerical algorithms that only deal with true numerical data types, and it can't handle the Decimal class.
As a general rule of thumb: If your problem is well-defined and well-conditioned (that's something that numerical mathematicians define), you can just scale it so that it fits into normal python floats, and then you can apply scipy functionality to it.
If, however, your problem involves both very small numbers and numbers that can't fit into a float, there's usually little you can do about it numerically: it's hard to find a good solution.
If, however, your function only returns values that would fit into float, then you could just use
lambda x: float(your_function(x))
instead of your_function in brentq.
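A runnable sketch of that wrapper; the quadratic your_function below is just a stand-in for the real Decimal-based computation:

from decimal import Decimal
from scipy.optimize import brentq

def your_function(x):
    # Stand-in: do the real work in Decimal internally.
    d = Decimal(repr(x))
    return d * d - Decimal(2)   # root at sqrt(2)

# brentq only ever sees plain floats:
root = brentq(lambda x: float(your_function(x)), 0.0, 2.0)
print(root)   # ~1.4142135623730951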
I have a follow-up question to the post written a couple of days ago; thank you for the previous feedback:
Finding complex roots from set of non-linear equations in python
I have now set up the system of non-linear equations in python so that fsolve handles the real and imaginary parts independently. However, there are still problems with python's fsolve converging to the correct solution. I have exactly the same inputs that are used in Matlab, and after double checking, the set of equations is exactly the same as well. Matlab, no matter how I set the initial values, will always converge to the correct solution. With python, however, every initial condition produces a different result, and never the correct one. After a fraction of a second, the following warning appears:
/opt/local/Library/Frameworks/Python.framework/Versions/Current/lib/python2.7/site-packages/scipy/optimize/minpack.py:227:
RuntimeWarning: The iteration is not making good progress, as measured by the
improvement from the last ten iterations.
warnings.warn(msg, RuntimeWarning)
I was wondering if there are some known differences between the fsolve in python and Matlab, and if there are some known methods to optimize the performance in python.
Thank you very much
I don't think you should rely on the fact that the names are the same. I see from your other question that you are specifying that Matlab's fsolve use the 'levenberg-marquardt' algorithm rather than the default. Python's scipy.optimize.fsolve uses MINPACK's hybrd algorithm. Levenberg-Marquardt finds roots approximately by minimizing the sum of squares of the function and is quite robust. It is not a true root-finding method like Matlab's default 'trust-region-dogleg' algorithm. I don't know how the hybrd scheme works, but it claims to be a modification of Powell's method.
If you want something similar to what you're doing in Matlab, look for an optimization scheme that implements Levenberg-Marquardt, such as scipy.optimize.root with method='lm', which you were also using in your previous question. Is there a reason why you're not using that?
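A minimal sketch of that call; the residuals function below is a placeholder, not your actual system:

from scipy.optimize import root

def residuals(v):
    # Placeholder: return the real and imaginary residuals of your equations.
    x, y = v
    return [x**2 + y**2 - 4.0, x - y]

sol = root(residuals, [1.0, 1.0], method='lm')   # Levenberg-Marquardt
print(sol.x, sol.success)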
I'm trying to call upon the famous multilateration algorithm in order to pinpoint a radiation emission source given a set of arrival times for various detectors. I have the necessary data, but I'm still having trouble implementing this calculation; I am relatively new with Python.
I know that, if I were to do this by hand, I would use matrices and carry out elementary row operations in order to find my 3 unknowns (x,y,z), but I'm not sure how to code this. Is there a way to have Python implement ERO, or is there a better way to carry out my computation?
Depending on your needs, you could try:
NumPy if you're interested in numerical solutions. As far as I remember, it can solve linear systems, though I don't know how it deals with non-linear ones.
SymPy for symbolic math. It solves linear equations symbolically, according to its main page.
The two above are "generic" math packages. I doubt you will easily find any dedicated (and maintained) library for your specific need. There was already a question on that topic here: Multilateration of GPS Coordinates
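If you linearize the multilateration equations, the elementary-row-operation step is exactly what numpy.linalg.solve does. A small sketch; the coefficients are placeholders, not real detector data:

import numpy as np

# A x = b from the linearized arrival-time equations (placeholder values).
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [4.0, -1.0, 1.0]])
b = np.array([1.0, 5.0, 2.0])

xyz = np.linalg.solve(A, b)   # square system: one equation per unknown
# With more detectors than unknowns, use least squares instead:
# xyz, res, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(xyz)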
I am trying to numerically integrate an arbitrary function (known when I write the code) in my program. I am using Python 2.5.2 along with SciPy's numerical integration package. To get a feel for it, I decided to try integrating sin(x) and observed this behavior:
>>> from math import pi
>>> from scipy.integrate import quad
>>> from math import sin
>>> def integrand(x):
... return sin(x)
...
>>> quad(integrand, -pi, pi)
(0.0, 4.3998892617846002e-14)
>>> quad(integrand, 0, 2*pi)
(2.2579473462709165e-16, 4.3998892617846002e-14)
I find this behavior odd because:
1. In ordinary integration, integrating over the full cycle gives zero.
2. In numerical integration, this isn't necessarily the case, because you may just be approximating the total area under the curve.
In any case, whether assuming 1 is true or assuming 2 is true, I find the behavior inconsistent: both integrations (-pi to pi and 0 to 2*pi) should return 0.0 (the first value in the tuple is the result, the second is the error estimate), or both should return 2.257...
Can someone please explain why this is happening? Is this really an inconsistency? Can someone also tell me if I am missing something really basic about numerical methods?
In any case, in my final application, I plan to use the above method to find the arc length of a function. If someone has experience in this area, please advise me on the best policy for doing this in Python.
Edit
Note
I already have the first-derivative values at all points in the range stored in an array.
The current error is tolerable.
End note
I have read the Wikipedia article on this. As Dimitry has pointed out, I will be integrating sqrt(1 + diff(f(x), x)^2) to get the arc length. What I wanted to ask is: is there a better approximation, a best practice, or a faster way to do this? If more context is needed, I'll post it separately, or post it here, as you wish.
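Since the derivative values are already tabulated on a grid, one cheap option is a trapezoidal sum over sqrt(1 + f'(x)^2). A sketch with placeholder data; substitute your own grid and derivative array:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)   # placeholder grid
fprime = np.cos(x)                        # placeholder derivatives, e.g. f(x) = sin(x)

arc_length = np.trapz(np.sqrt(1.0 + fprime ** 2), x)
print(arc_length)   # ~7.6404 for one period of sin(x)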
The quad function comes from an old Fortran library (QUADPACK). It judges the flatness and slope of the function it is integrating to decide how to adapt the step size for numerical integration, in order to maximize efficiency. This means you may get slightly different answers from one region to the next even if they're analytically the same.
Without a doubt both integrations should return zero, and returning something on the order of 1/(10 trillion) is pretty close to zero! The slight differences are due to the way quad steps across sin and adapts its step sizes. For your planned task, quad will be all you need.
EDIT:
For what you're doing I think quad is fine. It is fast and pretty accurate. My final statement is use it with confidence unless you find something that really has gone quite awry. If it doesn't return a nonsensical answer then it is probably working just fine. No worries.
I think it is probably machine precision, since both answers are effectively zero.
If you want an answer from the horse's mouth, I would post this question on the scipy discussion board.
I would say that a number O(10^-14) is effectively zero. What's your tolerance?
It might be that the algorithm underlying quad isn't the best. You might try another method for integration and see if that improves things. A 5th order Runge-Kutta can be a very nice general purpose technique.
It could be just the nature of floating point numbers: "What Every Computer Scientist Should Know About Floating Point Arithmetic".
This output seems correct to me, since you have an absolute error estimate here. The integral of sin(x) should indeed be zero over a full period (any interval of length 2*pi), in both ordinary and numerical integration, and your results are close to that value.
To evaluate the arc length you should integrate sqrt(1 + diff(f(x), x)^2), where diff(f(x), x) is the derivative of f(x). See also Arc length.
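For example, the arc length of sin(x) over one period, using quad and the fact that the derivative of sin(x) is cos(x):

from math import sqrt, cos, pi
from scipy.integrate import quad

length, err = quad(lambda x: sqrt(1.0 + cos(x) ** 2), 0.0, 2.0 * pi)
print(length)   # ~7.6404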
0.0 == 2.3e-16 (absolute error tolerance 4.4e-14)
Both answers are the same and correct, i.e., zero within the given tolerance.
The difference comes from the fact that sin(x) = -sin(-x) holds exactly even in finite precision, whereas finite precision only gives sin(x) ≈ sin(x + 2*pi) approximately. Sure, it would be nice if quad were smart enough to figure this out, but it has no way of knowing a priori that the integrals over the two intervals you give are equivalent, or that the first is the better result.
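A quick check of both claims; the exact sign symmetry holds on typical libm implementations:

from math import sin, pi

x = 1.2345   # arbitrary test point
print(sin(-x) == -sin(x))           # True: the sign symmetry is exact
print(sin(x + 2*pi) == sin(x))      # typically False: 2*pi is not exactly representable
print(abs(sin(x + 2*pi) - sin(x)))  # residue on the order of machine epsilon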