I am trying to rewrite some exponential functions in an expression to cosh and sinh. The rewrite() function works to get from a hyperbolic function to its exponential representation. But it does not work to get back.
>>> import sympy
>>> x=sympy.Symbol('x')
>>> sympy.cosh(x).rewrite(sympy.exp)
exp(x)/2 + exp(-x)/2
>>> sympy.cosh(x).rewrite(sympy.exp).rewrite(sympy.cosh)
exp(x)/2 + exp(-x)/2
I would expect the result of the last command to be 'cosh(x)'. Can someone explain to me why it is not?
I tried to find some documentation on the rewrite() function but the only bit I found was the short section in http://docs.sympy.org/latest/tutorial/simplification.html that is not really helpful.
Applying .rewrite(sympy.cos) returns cosh(x), as you wanted. Apparently rewrite treats the hyperbolic cosine as a variant of the ordinary cosine.
Here is a reference on the rewrite method.
Alternatively, simplify(expr) also transforms exp(x)/2 + exp(-x)/2 into cosh(x).
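Putting the two suggestions together, a quick check using only the calls discussed above:

```python
import sympy

x = sympy.Symbol('x')
expr = sympy.cosh(x).rewrite(sympy.exp)
print(expr)                     # exp(x)/2 + exp(-x)/2

# rewriting in terms of cos also covers the hyperbolic functions
print(expr.rewrite(sympy.cos))  # cosh(x)

# simplify recognizes the exponential form as well
print(sympy.simplify(expr))     # cosh(x)
```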
I am trying to convert some code from MATLAB to Python, and came across a function named mynorm(x,y). I searched for a Python equivalent without success, so I looked for the implementation on the net and found a small mynorm.m file containing just this one-liner:
function L = mynorm(x,y)
%length of the vector [x,y]
L = sqrt(x^2 + y^2);
%note - if the input was a vector v, a better way to do this would be
%L = sqrt(dot(v,v))
%see help dot for the dot product. This would work for vectors of any size.
But the calls made to this function in the MATLAB file look like this:
feaNorm = mynorm(fea2, 1)
feaNorm = mynorm(iris(:,1:4),2);
which doesn't really look like the length computation implemented above.
Thus, I am hesitant to simply use the sqrt function in Python for these calls.
Can someone redirect me to the correct implementation or the equivalent python code?
I suggest you use the hypot function available in the math module; it computes sqrt(x*x + y*y) directly. You will find a description of the math module here.
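For the scalar case in mynorm.m, math.hypot is a direct replacement. The two-argument calls in the MATLAB script pass a matrix and a dimension, which the mynorm.m shown doesn't actually handle; if a per-column or per-row Euclidean norm is what's intended, numpy.linalg.norm with the axis keyword is the usual NumPy equivalent. That mapping is an assumption, since the real behavior of those calls isn't shown:

```python
import math
import numpy as np

# Scalar case: length of the vector [x, y], same as sqrt(x**2 + y**2)
print(math.hypot(3, 4))  # 5.0

# Hypothetical array case: Euclidean norm of each column
# (axis=0 corresponds to MATLAB's dim=1)
a = np.array([[3.0, 0.0],
              [4.0, 1.0]])
print(np.linalg.norm(a, axis=0))  # [5. 1.]
```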
I have an equation to solve. The equation is given by the formula above; N and S are constants, for example N = 201 and S = 0.5. I use SymPy in Python to solve it. The Python script is as follows:
from sympy import *
x=Symbol('x')
print(solve((((1 - x)/200)**(1 - x)) * x**x - 2**(-0.5), x))
However, this raises a RuntimeError: maximum recursion depth exceeded in __instancecheck__
I have also tried to use Mathematica, and it can output a result of 0.963
http://www.wolframalpha.com/input/?i=(((1-x)%2F200)+(1-x))*+xx+-+2**(-0.5)+%3D+0
Any suggestion is welcome. Thanks.
Assuming that you don't want a symbolic solution, just a value you can work with (like WA's 0.964), you can use mpmath for this. I'm not sure it's even possible to express the solution in radicals - WA certainly didn't try. You should already have mpmath installed, since SymPy requires it.
Specifically, mpmath.findroot seems to do what you want. It takes a callable Python object (the function to find a root of) and a starting value for x. It also accepts further parameters, such as the tolerance tol and the solver to use, which you could play around with, although they don't really seem necessary here. You could quite simply use it like this:
import mpmath
f = lambda x: (((1-x)/200) **(1-x))* x**x - 2**(-0.5)
print(mpmath.findroot(f, 1))
I just used 1 as a starting value - you could probably think of a better one. Judging by the shape of your graph, there's only one root to be found and it can be approached quite easily, without much need for fancy solvers, so this should suffice. Also, considering that "mpmath is a Python library for arbitrary-precision floating-point arithmetic", you should be able to get a very high precision answer from this if you wished. It has the output of
(0.963904761592753 + 0.0j)
This is actually an mpmath complex or mpc object,
mpc(real='0.96390476159275343', imag='0.0')
If you know it will have an imaginary value of 0, you can just use either of the following methods:
In [6]: abs(mpmath.mpc(23, 0))
Out[6]: mpf('23.0')
In [7]: mpmath.mpc(23, 0).real
Out[7]: mpf('23.0')
to "extract" a single float in the format of an mpf.
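Since mpmath works at arbitrary precision, you can also raise the number of significant digits before solving; a sketch (mp.dps = 30 is an arbitrary choice, not a required setting):

```python
import mpmath

mpmath.mp.dps = 30  # work with 30 significant digits
f = lambda x: (((1 - x) / 200)**(1 - x)) * x**x - mpmath.mpf(2)**mpmath.mpf('-0.5')
root = mpmath.findroot(f, 1)
print(root)
```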
I am using symbolic integration to integrate a function that combines a trigonometric function and a power function.
from sympy import *
import math
import numpy as np
t = Symbol('t')
integrate(0.000671813*(7/2*(1.22222222+sin(2*math.pi*t-math.pi/2))-6)**0.33516,t)
However, when I run it, it gives me an odd result:
0.000671813*Integral((3.0*sin(6.28318530717959*t - 1.5707963267949) - 2.33333334)**0.33516, t)
Why does this result contain Integral()? I checked online other functions and there is no Integral() in them.
An unevaluated Integral answer means that SymPy was unable to compute the integral.
Essentially you are trying to integrate a function that looks like
(sin(t) + a)**0.33516
where a is a constant number.
In general such an integral cannot be expressed in elementary functions; see, for example, http://www.sosmath.com/calculus/integration/fant/fant.html, especially the sentence on Chebyshev's theorem.
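To see the difference, here is a sketch contrasting the unevaluated symbolic result with a numerical evaluation of a definite integral (the constant 7/3 and the limits 0 and 1 are arbitrary choices for illustration):

```python
import sympy as sp

t = sp.Symbol('t')
a = sp.Rational(7, 3)  # a stand-in constant playing the role of the sine's offset
expr = (sp.sin(t) + a)**sp.Rational(1, 3)

# Symbolic integration: SymPy returns an unevaluated Integral,
# because no elementary antiderivative exists
antideriv = sp.integrate(expr, t)
print(antideriv)

# A definite integral can still be computed numerically with evalf()
value = sp.Integral(expr, (t, 0, 1)).evalf()
print(value)
```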
I'm trying to teach myself programming and am currently working my way through 'A Primer on Scientific Programming with Python' by Hans Petter Langtangen.
Right now I'm on Exercise 3.20. Unfortunately I don't have any solutions to the problems.
I know I can use an arbitrary (mathematical) function f in the definition of a method:
def diff(f, x, h=0.001):
    return (f(x + h) - f(x - h)) / (2 * h)
And when I call it I can use whatever function I wish:
print(diff(math.exp, 0))
print(diff(math.cos, 2*math.pi))
So here's my question:
Is there a way to accommodate more complex functions in this way?
Say for example I would like to approximate the derivative of a function like
x(t) = e^(-(t-4)^2)
like I did above for cos(x) and e^(x).
Edit: maybe I should be more clear. I'm not specifically trying to differentiate this one function e^(-(t-4)^2) but would like to define a method that takes ANY function of x as an argument and approximates a derivative at point x.
I was impressed when I learned that you could do it for simple functions like cos(x), sin(x) and exp(x) at all. So I thought if that works there must be a way to make it more general..
Sure, just define it first:
def x(t):
    return math.exp(-(t-4)**2)

print(diff(x, 0))
Instead of using def, it's often possible to use lambda if the function consists of a single expression that is returned:
print(diff(lambda t: math.exp(-(t-4)**2), 0))
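As a sanity check, here is a self-contained sketch comparing the central-difference estimate with the analytic derivative x'(t) = -2(t-4)e^(-(t-4)^2):

```python
import math

def diff(f, x, h=0.001):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

def x(t):
    return math.exp(-(t - 4)**2)

def x_prime(t):
    # analytic derivative, for comparison
    return -2 * (t - 4) * math.exp(-(t - 4)**2)

numeric = diff(x, 3.0)
exact = x_prime(3.0)
print(numeric, exact)  # both close to 2/e ≈ 0.7357588
```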
sure:
def x(t):
    return math.exp(-(t-4)**2)

diff(x, 0)
I'm taking in a function (e.g. y = x**2) and need to solve for x. I know I can painstakingly solve this manually, but I'd rather find a method to use. I've browsed NumPy, SciPy and SymPy, but can't seem to find what I'm looking for. Currently I'm making a lambda out of the function, so it would be nice to keep that format for the method, but it's not necessary.
Thanks!
If you are looking for numerical solutions (i.e. you are interested in the numbers, not symbolic closed-form solutions), there are a few options for you in the scipy.optimize module. For something simple, newton is a pretty good start for simple polynomials, but you can take it from there.
For symbolic solutions (which is to say to get y = x**2 -> x = +/- sqrt(y)) SymPy solver gives you roughly what you need. The whole SymPy package is directed at doing symbolic manipulation.
Here is an example using the Python interpreter to solve the equation that is mentioned in the question. You will need to make sure that SymPy package is installed, then:
>>> from sympy import *  # we are importing everything for ease of use
>>> x = Symbol("x")
>>> y = Symbol("y")  # create the two variables
>>> equation = Eq(x ** 2, y)  # create the equation
>>> solve(equation, x)
[y**(1/2), -y**(1/2)]
As you see the basics are fairly workable, even as an interactive algebra system. Not nearly as nice as Mathematica, but then again, it is free and you can incorporate it into your own programs. Make sure to read the Gotchas and Pitfalls section of the SymPy documentation on how to encode the appropriate equations.
If all this was to get a quick and dirty solutions to equations then there is always Wolfram Alpha.
Use Newton-Raphson via scipy.optimize.newton. It finds roots of an equation, i.e., values of x for which f(x) = 0. In the example, you can cast the problem as looking for a root of f(x) = x² - y. If you supply f as a lambda, you can build a general inverse like this:
from scipy.optimize import newton

def inverse(f, f_prime=None):
    def solve(y):
        return newton(lambda x: f(x) - y, 1.0, fprime=f_prime, tol=1e-10, maxiter=1000000)
    return solve
Using this function is quite simple:
>>> sqrt = inverse(lambda x: x**2)
>>> sqrt(2)
1.4142135623730951
>>> import math
>>> math.sqrt(2)
1.4142135623730951
Depending on the input function, you may need to tune the parameters to newton(). The current version uses a starting guess of 1, a tolerance of 1e-10 and a maximum of a million iterations.
For an additional speed-up, you can supply the derivative of the function in question:
>>> sqrt = inverse(lambda x: x**2, lambda x: 2*x)
In fact, without the derivative, the function falls back to the secant method; Newton-Raphson proper relies on knowing the derivative.
Check out SymPy, specifically the solver.