My Problem
I am using SymPy v1.11.1 with Python 3.8.5 in a Jupyter Notebook. I am dealing with a large Hessian, in which terms such as these appear:
Pi+ and Pi- are complex SymPy symbols, but one is the complex conjugate of the other, that is, conjugate(Pi+) = Pi- and vice versa. This means that the product Pi+ * Pi- is real, so the derivatives can easily be evaluated by removing the Re/Im (in one case Re(Pi+ * Pi-) = Pi+ * Pi-, in the other Im(Pi+ * Pi-) = 0).
My Question
Is it possible to tell SymPy that Pi+ and Pi- are related by complex conjugation, so that it can simplify the derivatives as explained above? Or is there some other way to simplify my derivatives?
My Attempts
Ideally, I would like a way to express the above relation between Pi+ and Pi- to Python, so that it can apply the simplification wherever needed throughout the code.
Initially I wanted to use SymPy's global assumptions and set an assumption that Pi+ * Pi- is real. However, when I try to use global assumptions I get "name 'global_assumptions' is not defined", and when I try to import it explicitly (instead of using import *), I get "cannot import name 'global_assumptions' from 'sympy.assumptions'". I could not figure out the root of this problem.
My next attempt was to replace all instances of Re(Pi+ * Pi-) -> Pi+ * Pi-, etc., manually with the SymPy function subs. The code replaced these instances successfully, but it never evaluated the derivatives, so I was stuck with these instead:
Please let me know if any clarification is needed.
I found a similar question, Setting Assumptions on Variables in Sympy Relative to Other Variables, and from the discussion there it seems that no efficient way to do this exists. However, seeing that it was asked back in 2013, and that the discussion pointed toward a new, improved assumption system being implemented in SymPy in the near future, it would be nice to know whether any such methods now exist.
Given one and the other, try replacing one with conjugate(other):
>>> one = x; other = y
>>> p = one*other; q = p.subs(one, conjugate(other)); re(q), im(q)
(Abs(y)**2, 0)
If you want to get back the original symbol after the simplifications wrought by the first replacement, follow up with a second replacement:
>>> p.subs(one, conjugate(other)).subs(conjugate(other), one)
x*y
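Putting this together for the situation in the question, here is a minimal runnable sketch; the symbol names Pip and Pim are illustrative stand-ins for Pi+ and Pi-:

```python
from sympy import symbols, conjugate, re, im, Abs

# Pip and Pim are illustrative stand-ins for the Pi+ and Pi- symbols
Pip, Pim = symbols("Pip Pim")

p = Pip * Pim                       # the product appearing inside Re(...)/Im(...)
q = p.subs(Pim, conjugate(Pip))     # encode the relation Pi- = conjugate(Pi+)
print(re(q), im(q))                 # Abs(Pip)**2 0
# substitute back to recover the original symbol where desired
print(q.subs(conjugate(Pip), Pim))  # Pip*Pim
```

SymPy's re/im recognize the conjugate pair inside the product and collapse it to Abs(...)**2 and 0, which is exactly the simplification the question asks for.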
I am trying to find roots of a function in python using fsolve:
import math
import scipy.optimize

def f(a):
    # the expression from the question, split across lines for readability
    eq = (-2*a**2 - 2*a**2*(math.sin(25*a**(1/4)))**2
          - 2*a**2*(math.cos(25*a**(1/4)))**2
          - 2*math.exp(-25*a**(1/4))*a**2*math.cos(25*a**(1/4))
          - 2*math.exp(25*a**(1/4))*a**2*math.cos(25*a**(1/4)))
    return eq

print(f(scipy.optimize.fsolve(f, 10)))
and it returns the following value:
[1234839.75468454]
That doesn't seem very close to 0 to me... Does it simply lack the computational power to calculate more decimals for the root? If so, what would be a good alternative for fsolve that could also calculate roots, just more accurately?
To better understand what happens, a first step is to look at the diagnostic information for the run, which you can get by setting the full_output argument to True (see https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html for more details).
When starting from an initial point of 10 as you do, the algorithm claims to converge, and it does so in fewer evaluations than the maximum allocated, so it is not a problem of computing power.
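As an illustration, here is one way to pull those diagnostics for the function in the question (rewritten with numpy operations; the variable names are mine):

```python
import numpy as np
from scipy.optimize import fsolve

def f(a):
    # the function from the question, written with numpy operations
    s = a**(1/4)
    return (-2*a**2 - 2*a**2*np.sin(25*s)**2 - 2*a**2*np.cos(25*s)**2
            - 2*np.exp(-25*s)*a**2*np.cos(25*s)
            - 2*np.exp(25*s)*a**2*np.cos(25*s))

root, info, ier, msg = fsolve(f, 10, full_output=True)
print(ier, msg)      # ier == 1 means fsolve claims convergence
print(info["nfev"])  # number of function evaluations actually used
```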
Sometimes scipy.integrate.quad wrongly returns near-0 values. This has been addressed in this question, and it seems to happen when the integration routine never samples the function in the narrow range where it is significantly different from 0. In this and similar questions, the accepted solution was always to use the points parameter to tell scipy where to look. However, for me this seems to actually make things worse.
Integration of exponential distribution pdf (answer should be just under 1):
from scipy import integrate
import numpy as np
t2=.01
#initial problem: fails when f is large
f=5000000
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2)
#>>>(3.8816838175855493e-22, 7.717972744727115e-22)
Now, the "fix" makes it fail, even on smaller values of f where the original worked:
f=2000000
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2)
#>>>(1.00000000000143, 1.6485317987792634e-14)
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=[t2])
#>>>(1.6117047218907458e-17, 3.2045611390981406e-17)
integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=[t2,t2])
#>>>(1.6117047218907458e-17, 3.2045611390981406e-17)
What's going on here? How can I tell scipy what to do so that this will evaluate for arbitrary values of f?
It's not a generic solution, but I've been able to fix this for the given function by using points=np.geomspace to guide the numerical algorithm towards heavier sampling in the interesting region. I'll leave this open for a little bit to see if anyone finds a generic solution.
Generate random values for t2 and f, then check min and max values for subset that should be very close to 1:
>>> t2s=np.exp((np.random.rand(20)-.5)*10)
>>> fs=np.exp((np.random.rand(20)-.1)*20)
>>> min(integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=t2-np.geomspace(1/f,t2,40))[0] for f in fs for t2 in t2s if f>(1/t2)*10)
0.9999621825009719
>>> max(integrate.quad(lambda t:f*np.exp(-f*(t2-t)),0,t2,points=t2-np.geomspace(1/f,t2,40))[0] for f in fs for t2 in t2s if f>(1/t2)*10)
1.000000288722783
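Applied directly to the failing case from the question, the same trick looks like this; geomspace clusters the break points within roughly 1/f of t2, which is where practically all of the integrand's mass sits:

```python
import numpy as np
from scipy import integrate

t2 = 0.01
f = 5000000  # the value for which the plain quad call returned ~4e-22

# break points spaced geometrically, densest near t = t2
pts = t2 - np.geomspace(1/f, t2, 40)
val, err = integrate.quad(lambda t: f*np.exp(-f*(t2 - t)), 0, t2, points=pts)
print(val)  # just under 1, as expected for this pdf
```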
I'm taking in a function (e.g. y = x**2) and need to solve for x. I know I can painstakingly solve this manually, but I'm looking for a method to use instead. I've browsed numpy, scipy and sympy, but can't seem to find what I'm looking for. Currently I'm making a lambda out of the function, so it'd be nice if the method could keep that format, but it's not necessary.
Thanks!
If you are looking for numerical solutions (i.e. you are just interested in the numbers, not the symbolic closed-form solutions), then there are a few options for you in the scipy.optimize module. For something simple, the newton function is a pretty good start for simple polynomials, but you can take it from there.
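A minimal sketch of that numerical route (the target equation x**2 = 9 and the starting point are just for illustration):

```python
from scipy.optimize import newton

# solve x**2 = 9 numerically by finding a root of g(x) = x**2 - 9
root = newton(lambda x: x**2 - 9, x0=1.0)
print(root)  # 3.0 (to floating-point accuracy)
```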
For symbolic solutions (which is to say to get y = x**2 -> x = +/- sqrt(y)) SymPy solver gives you roughly what you need. The whole SymPy package is directed at doing symbolic manipulation.
Here is an example using the Python interpreter to solve the equation that is mentioned in the question. You will need to make sure that SymPy package is installed, then:
>>> from sympy import * # we are importing everything for ease of use
>>> x = Symbol("x")
>>> y = Symbol("y") # create the two variables
>>> equation = Eq(x ** 2, y) # create the equation
>>> solve(equation, x)
[y**(1/2), -y**(1/2)]
As you see the basics are fairly workable, even as an interactive algebra system. Not nearly as nice as Mathematica, but then again, it is free and you can incorporate it into your own programs. Make sure to read the Gotchas and Pitfalls section of the SymPy documentation on how to encode the appropriate equations.
If all this was to get a quick and dirty solutions to equations then there is always Wolfram Alpha.
Use Newton-Raphson via scipy.optimize.newton. It finds roots of an equation, i.e., values of x for which f(x) = 0. In the example, you can cast the problem as looking for a root of the function f(x) = x² - y. If you supply a lambda that computes y, you can provide a general solution thus:
from scipy.optimize import newton

def inverse(f, f_prime=None):
    def solve(y):
        return newton(lambda x: f(x) - y, 1, f_prime, (), 1E-10, int(1E6))
    return solve
Using this function is quite simple:
>>> sqrt = inverse(lambda x: x**2)
>>> sqrt(2)
1.4142135623730951
>>> import math
>>> math.sqrt(2)
1.4142135623730951
Depending on the input function, you may need to tune the parameters passed to newton(). The current version uses a starting guess of 1, a tolerance of 10⁻¹⁰ and a maximum iteration count of 10⁶.
For an additional speed-up, you can supply the derivative of the function in question:
>>> sqrt = inverse(lambda x: x**2, lambda x: 2*x)
In fact, without it, the function falls back to the secant method; Newton-Raphson proper, which relies on knowing the derivative, is used only when you supply one.
Check out SymPy, specifically the solver.
Which programming language or library is able to process infinite series (like geometric or harmonic ones)? It would perhaps need a database of well-known series, automatically returning the proper value in case of convergence and maybe raising an exception in case of divergence.
For example, in Python it could look like:
sum = 0
sign = -1.0
for i in range(1, Infinity, 2):
    sign = -sign
    sum += sign / i
then, sum must be math.pi/4 without doing any computations in the loop (because it's a well-known sum).
Most functional languages that evaluate lazily can simulate the processing of infinite series. Of course, on a finite computer it is not possible to process infinite series in their entirety, as I am sure you are aware. Off the top of my head, I guess Mathematica can do most of what you might want, I suspect that Maple can too, maybe Sage and other computer algebra systems as well, and I'd be surprised if you can't find a Haskell implementation that suits you.
EDIT to clarify for OP: I do not propose generating infinite loops. Lazy evaluation allows you to write programs (or functions) which simulate infinite series, programs which themselves are finite in time and space. With such languages you can determine many of the properties, such as convergence, of the simulated infinite series with considerable accuracy and some degree of certainty. Try Mathematica or, if you don't have access to it, try Wolfram Alpha to see what one system can do for you.
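In plain Python you can get a taste of this lazy style with generators; note that this sketch only sums a finite prefix numerically and does not recognize the closed form pi/4:

```python
import math
from itertools import count, islice

# lazily defined infinite series 1 - 1/3 + 1/5 - ... (Leibniz series for pi/4)
terms = ((-1)**k / (2*k + 1) for k in count())
partial = sum(islice(terms, 100000))
print(abs(partial - math.pi/4))  # small truncation error
```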
One place to look might be the Wikipedia category of Computer Algebra Systems.
There are two tools available in Haskell for this beyond simply supporting infinite lists.
First, there is a module that supports looking up sequences in the OEIS. This can be applied to the first few terms of your series and can help you identify a series whose closed form you don't know. The other is the 'CReal' library of computable reals. If you have the ability to generate an ever-improving bound on your value (i.e. by summing over the prefix), you can declare that as a computable real number, which admits a partial ordering, etc. In many ways this gives you a value that you can use like the sum above.
However in general computing the equality of two streams requires an oracle for the halting problem, so no language will do what you want in full generality, though some computer algebra systems like Mathematica can try.
Maxima can calculate some infinite sums, but in this particular case it doesn't seem to find the answer :-s
(%i1) sum((-1)^k/(2*k), k, 1, inf), simpsum;
       inf
       ====        k
       \      (- 1)
        >     ------
       /        k
       ====
       k = 1
(%o1)  ------------
             2
but for example, those work:
(%i2) sum(1/(k^2), k, 1, inf), simpsum;
         2
      %pi
(%o2) ----
        6
(%i3) sum((1/2^k), k, 1, inf), simpsum;
(%o3) 1
You can solve the series problem in Sage (a free Python-based math software system) exactly as follows:
sage: k = var('k'); sum((-1)^k/(2*k+1), k, 1, infinity)
1/4*pi - 1
Behind the scenes, this is really using Maxima (a component of Sage).
For Python check out SymPy, an open-source computer algebra system in the spirit of Mathematica.
There is also a heavier Python-based math-processing tool called Sage.
You need something that can do a symbolic computation like Mathematica.
You can also consider querying Wolfram Alpha: sum((-1)^i*1/i, i, 1, inf)
There is a library called mpmath (Python), an arbitrary-precision math library used by SymPy (I believe it also backs Sage), which provides support for evaluating series.
More specifically, all of the series stuff can be found here: Series documentation
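For instance, mpmath's nsum evaluates convergent series numerically to high precision (a numeric value, not a symbolic closed form):

```python
import math
from mpmath import nsum, inf

# sum_{k>=1} (-1)^(k+1)/k = ln 2
val = nsum(lambda k: (-1)**(int(k) + 1) / k, [1, inf])
print(val)
```

Behind the scenes mpmath applies series acceleration, so even this slowly converging alternating series comes out accurate to full working precision.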
The C++ iRRAM library performs real arithmetic exactly. Among other things it can compute limits exactly using the limit function. The homepage for iRRAM is here. Check out the limit function in the documentation. Note that I'm not talking about arbitrary precision arithmetic. This is exact arithmetic, for a sensible definition of exact. Here's their code to compute e exactly, pulled from the example on their web site:
//---------------------------------------------------------------------
// Compute an approximation to e=2.71.. up to an error of 2^p
REAL e_approx (int p)
{
  if ( p >= 2 ) return 0;
  REAL y=1,z=2;
  int i=2;
  while ( !bound(y,p-1) ) {
    y=y/i;
    z=z+y;
    i+=1;
  }
  return z;
};

//---------------------------------------------------------------------
// Compute the exact value of e=2.71..
REAL e()
{
  return limit(e_approx);
};
Clojure and Haskell off the top of my head.
Sorry I couldn't find a better link to Haskell's sequences; if someone else has one, please let me know and I'll update.
Just install SymPy on your computer, then run the following code:
from sympy.abc import i, k, m, n, x
from sympy import Sum, factorial, oo, IndexedBase, Function
Sum((-1)**k/(2*k+1), (k, 0, oo)).doit()
Result will be: pi/4
I have worked on a couple of huge data series for research purposes, using Matlab. I don't know whether it can process infinite series, but I think there is a possibility. You can try it. :)
This can be done in, for instance, SymPy and Sage (among the open-source alternatives). Below are a few examples using SymPy:
In [10]: summation(1/k**2, (k, 1, oo))
Out[10]:
  2
 π
 ──
 6

In [11]: summation(1/k**4, (k, 1, oo))
Out[11]:
  4
 π
 ───
 90

In [12]: summation((-1)**k/k, (k, 1, oo))
Out[12]: -log(2)

In [13]: summation((-1)**(k+1)/k, (k, 1, oo))
Out[13]: log(2)
Behind the scenes, this uses the theory of hypergeometric series; a nice introduction is the book "A = B" by Marko Petkovšek, Herbert S. Wilf and Doron Zeilberger, which you can find by googling. What is a hypergeometric series?
Everybody knows what a geometric series is: $x_1, x_2, x_3, \dots, x_k, \dots$ is geometric if the consecutive-terms ratio $x_{k+1}/x_k$ is constant. It is hypergeometric if the consecutive-terms ratio is a rational function of $k$! SymPy can handle basically all infinite sums where this last condition is fulfilled, but only very few others.
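For example, the exponential series sum of 1/k! has consecutive-terms ratio (1/(k+1)!)/(1/k!) = 1/(k+1), a rational function of k, so SymPy can sum it in closed form:

```python
from sympy import E, factorial, oo, summation, symbols

k = symbols('k')
# the term ratio 1/(k+1) is rational in k, so this sum is hypergeometric
result = summation(1/factorial(k), (k, 0, oo))
print(result)  # E
```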