I'm trying to obtain one of the angles of Freudenstein equation (psi):
k1 * cos(psi) - k2 * cos(fi) + k3 - cos(psi - fi) = 0
I have k1, k2, k3 and fi values. I tried the following:
from sympy import Symbol, solve, cos
x = Symbol('x')
realPsi = solve(k1 * cos(x) - k2 * cos(fi) + k3 - cos(x - fi), x)
I receive this error:
File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 484, in solve solution = _solve(f, *symbols, **flags)
File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 700, in _solve soln = tsolve(f_num, symbol)
File "/usr/lib/python2.7/dist-packages/sympy/solvers/solvers.py", line 1143, in tsolve "(tsolve: at least one Function expected at this point")
NotImplementedError: Unable to solve the equation(tsolve: at least one Function expected at this point
I don't use this kind of tools before, maybe I'm doing something really wrong...
Any idea?
Thanks,
Héctor.
EDIT:
Thanks for the fast response.
I tried the following (simple equation with cos):
eq = 3.2 * cos(x + 0.2).rewrite(exp) + 1.7
eq
Out[1]: 1.6*exp(I*(-x - 0.2)) + 1.6*exp(I*(x + 0.2)) + 1.7
solve(1.6*exp(I*(-x - 0.2)) + 1.6*exp(I*(x + 0.2)) + 1.7, x)
NotImplementedError: Unable to solve the equation(tsolve: at least one Function expected at this point
Am I using correctly .rewrite??
Of course it should "just work" but here is a case where, with a little help for the "simpler equation with cos" given above you can get an answer:
>>> eq=3.2*cos(x+.2)+1.7
>>> [w.n(3,chop=True) for w in solve(expand(eq.rewrite(exp)))]
[-2.33, 1.93]
NotImplementedError means what it says, namely that a solver for this type of equations is "not implemented".
You can help SymPy a bit to find the solution:
>>> k * cos(x) - m * cos(y) + n - cos(x - y)
k*cos(x) - m*cos(y) + n - cos(x - y)
>>> _.rewrite(exp)
k*(exp(I*x)/2 + exp(-I*x)/2) - .....
>>> solve(_, x)
..... long solution
You can use rewrite to transform expressions written with trigonometric functions into expressions containing complex exponentials.
Related
Recently I got a long equation to solve that looks like that.
I've tried to solve this using sympy.solveset(), but it returned ConditionSet which means it couldn't handle this equation. How can I solve this equation using simpy library and if not at least in python? The code that I used:
import sympy as sp
t = sp.symbols('t')
a = 1.46
b = 1.2042 * 10**-4 * ((1.2275 * 10**-5 + t) * sp.ln(1.2275 * 10**-5 + t) - t)
result = sp.solveset(sp.Eq(a, b), t)
print(result)
This is a transcendental equation. It possibly has an analytic solution in terms of the Lambert W function but I'm not sure. I'll assume that you just want a numerical solution which you can get using nsolve:
In [42]: nsolve(a - b, t, 1)
Out[42]: 1857.54700584719
I am having some trouble with this question. I am given this system of equations
dx / dt = -y -z
dy / dt = x + a * y
dz / dt = b + z * (x - c)
and default values a=0.1, b=0.1, c=14 and also the Runge-Kutta algorithm:
def rk4(f, xvinit, Tmax, N):
T = np.linspace(0,Tmax,N+1)
xv = np.zeros( (len(T), len(xvinit)) )
xv[0] = xvinit
h = Tmax / N
for i in range(N):
k1 = f(xv[i])
k2 = f(xv[i] + h/2.0*k1)
k3 = f(xv[i] + h/2.0*k2)
k4 = f(xv[i] + h*k3)
xv[i+1] = xv[i] + h/6.0 *( k1 + 2*k2 + 2*k3 + k4)
return T, xv
I need to solve this system from t=0 to t=100 in time steps of 0.1 and using initial conditions (𝑥0,𝑦0,𝑧0)=(0,0,0) at 𝑡=0
I'm not really sure where to begin on this, I've tried defining a function to give the Oscillator:
def roessler(xyx, a=0.1, b=0.1, c=14):
xyx=(x,y,x)
dxdt=-y-z
dydt=x+a*y
dzdt=b+z*(x-c)
return dxdt ,dydt ,dzdt
which returns the right side of the equation with default values, i've then tried to solve by replacing f with roessler and filling in values for xvinit,Tmax and N with values i'm given but it's not working.
Any help is appreciated sorry if some of this is formatted wrong i'm new here.
Well, you almost got it already. Changing your roessler function to the following
def roessler(xyx, a=0.1, b=0.1, c=14):
x, y, z = xyx
dxdt=-y-z
dydt=x+a*y
dzdt=b+z*(x-c)
return np.array([dxdt, dydt, dzdt])
and then calling
T, sol = rk4(roessler, np.array([0, 0, 0]), 100, 1000)
makes it work.
Taking aside the typo in the first line of your roessler function, the key to solving this is to understand that you have a system of differential equations, i.e., you need to work with vectors. While you already had the input as the vector correct, you also need to make the output of roessler a vector and put in the initial value with the appropriate shape.
I have the following equations:
sqrt((x0 - x)^2 + (y0 - y)^2) - sqrt((x1 - x)^2 + (y1 - y)^2) = c1
sqrt((x3 - x)^2 + (y3 - y)^2) - sqrt((x4 - x)^2 + (y4 - y)^2) = c2
And I would like to find the intersection. I tried using fsolve, and transforming the equations into linear f(x) functions, and it worked for small numbers. I am working with huge numbers and to solve the linear equation there are lots of calculations performed, specifically the calculations reach to a square root of a subtraction, and when handling huge numbers precision is lost, and the left operand is smaller than the right one getting to a math value domain error trying to solve the square root of a negative number.
I am trying to solve this issue in different manners:
Trying to use bigger precision floats. Tried using numpy.float128 but fsolve wont allow using this.
Currently searching for a library that allows to solve non linear equations system, but no luck so far.
Any help/guidance/tip I will appreciate!!
Thanks!!
Taking all advice, i ended using code like the following:
for the the system:
0 = x + y - 8
0 = sqrt((-6 - x)^2 + (4 - y)^2) - sqrt((1 - x)^2 + y^) - 5
from math import sqrt
import numpy as np
from scipy.optimize import fsolve
def f(x):
y = np.zeros(2)
y[0] = x[1] + x[0] - 8
y[1] = sqrt((-6 - x[0]) ** 2 + (4 - x[1]) ** 2) - sqrt((1 - x[0]) ** 2 + x[1] ** 2) - 5
return y
x0 = np.array([0, 0])
solution = fsolve(f, x0)
print "(x, y) = (" + str(solution[0]) + ", " + str(solution[1]) + ")"
Note: the line x0 = np.array([0, 0]) corresponds to the seed that the method uses in fsolve in order to get to a solution. It is important to have a close seed to reach for a solution.
The example provided works :)
You might find some use in SymPy, which is a symbolic algebra manipulation in Python.
From it's home page:
SymPy is a Python library for symbolic mathematics. It aims to become a full-featured computer algebra system (CAS) while keeping the code as simple as possible in order to be comprehensible and easily extensible. SymPy is written entirely in Python.
As you have a non-linear equation you need some kind of optimizer to solve it. Probably you can use something like scipy.optimize (https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html). However, as I have no experience with that scipy function can offer you only a solution with the gradient descent method of the tensorflow library. You can find a short guide here: https://learningtensorflow.com/lesson7/ (check out the Gradient descent cahpter). Analog to the method described there you could do something like that:
# These arrays are pseudo code, fill in your values for x0,x1,y0,y1,...
x_array = [x0,x1,x3,x4]
y_array = [y0,y1,y3,y4]
c_array = [c1,c2]
# Tensorflow model starts here
x=tf.placeholder("float")
y=tf.placeholder("float")
z=tf.placeholder("float")
# the array [0,0] are initial guesses for the "correct" x and y that solves the equation
xy_array = tf.Variable([0,0], name="xy_array")
x0 = tf.constant(x_array[0], name="x0")
x1 = tf.constant(x_array[1], name="x1")
x3 = tf.constant(x_array[2], name="x3")
x4 = tf.constant(x_array[3], name="x4")
y0 = tf.constant(y_array[0], name="y0")
y1 = tf.constant(y_array[1], name="y1")
y3 = tf.constant(y_array[2], name="y3")
y4 = tf.constant(y_array[3], name="y4")
c1 = tf.constant(c_array[0], name="c1")
c2 = tf.constant(c_array[1], name="c2")
# I took your first line and subtracted c1 from it, same for the second line, and introduced d_1 and d_2
d_1 = tf.sqrt(tf.square(x0 - xy_array[0])+tf.square(y0 - xy_array[1])) - tf.sqrt(tf.square(x1 - xy_array[0])+tf.square(y1 - xy_array[1])) - c_1
d_2 = tf.sqrt(tf.square(x3 - xy_array[0])+tf.square(y3 - xy_array[1])) - tf.sqrt(tf.square(x4 - xy_array[0])+tf.square(y4 - xy_array[1])) - c_2
# this z_model should actually be zero in the end, in that case there is an intersection
z_model = d_1 - d_2
error = tf.square(z-z_model)
# you can try different values for the "learning rate", here 0.01
train_op = tf.train.GradientDescentOptimizer(0.01).minimize(error)
model = tf.global_variables_initializer()
with tf.Session() as session:
session.run(model)
# here you are creating a "training set" of size 1000, you can also make it bigger if you like
for i in range(1000):
x_value = np.random.rand()
y_value = np.random.rand()
d1_value = np.sqrt(np.square(x_array[0]-x_value)+np.square(y_array[0]-y_value)) - np.sqrt(np.square(x_array[1]-x_value)+np.square(y_array[1]-y_value)) - c_array[0]
d2_value = np.sqrt(np.square(x_array[2]-x_value)+np.square(y_array[2]-y_value)) - np.sqrt(np.square(x_array[3]-x_value)+np.square(y_array[3]-y_value)) - c_array[1]
z_value = d1_value - d2_value
session.run(train_op, feed_dict={x: x_value, y: y_value, z: z_value})
xy_value = session.run(xy_array)
print("Predicted model: {a:.3f}x + {b:.3f}".format(a=xy_value[0], b=xy_value[1]))
But be aware: This code will probably run a while... This is why haven't tested it...
Also I am currently not sure what will happen if there is no intersection. Probably you get the coordinates of the closest distance of the functions...
Tensorflow can be somewhat difficult if you haven't used it yet, but it is worth to learn it, as you can also use it for any deep learning application (actual purpose of this library).
Here's what I did:
from sympy import *
x = symbols("x")
y = Function("y")
dsolve(diff(y(x),x) - y(x)**x)
The answer I get (SymPy 1.0) is:
Eq(y(x), (C1 - x*(x - 1))**(1/(-x + 1)))
But that's wrong. Both Mathematica and Maple can't solve this ODE. What's happening here?
A bug. SymPy thinks it's a Bernoulli equation
y' = P(x) * y + Q(x) * y**n
without checking that the exponent n is constant. So the solution is wrong.
I raised an issue on SymPy tracker. It should be soon fixed in the development version of SymPy and subsequently in version 1.2. (As an aside, 1.0 is a bit old, many things have improved in 1.1.1 although not that one.)
With the correction, SymPy recognizes there is no explicit solution and resorts to power series method, producing a few terms of the power series:
Eq(y(x), x + x**2*log(C1)/2 + x**3*(log(C1)**2 + 2/C1)/6 + x**4*(log(C1)**3 + 9*log(C1)/C1 - 3/C1**2)/24 + x**5*(log(C1)**4 + 2*(log(C1) - 1/C1)*log(C1)/C1 + 2*(2*log(C1) - 1/C1)*log(C1)/C1 + 22*log(C1)**2/C1 - 20*log(C1)/C1**2 + 20/C1**2 + 8/C1**3)/120 + C1 + O(x**6))
You don't have to wait for the patch to get this power series, it can be obtained by giving SymPy a "hint":
dsolve(diff(y(x), x) - y(x)**x, hint='1st_power_series')
Works better with an initial condition:
dsolve(diff(y(x), x) - y(x)**x, ics={y(0): 1}, hint='1st_power_series')
returns
Eq(y(x), 1 + x + x**3/3 - x**4/8 + 7*x**5/30 + O(x**6))
I am trying to solve equations such as the following for x:
Here the alpha's and K are given, and N will be upwards of 1,000. Is there a way to specify the LHS given an np.array for the alpha's using sympy? My hope was to define:
eqn = Eq(LHR - K)
solve(eqn,x)
by telling sympy that LHS= sum( a_i + x).
Any tips on solvers which would do this the fastest would also be appreciated. Thanks!
I was hoping for something like:
from sympy import Symbol, symbols, solve, summation, log
import numpy as np
N=10
K=1
alpha=np.random.randn(N, 1)
x = Symbol('x')
i = Symbol('i')
eqn = summation(log(x+alpha[i]), (i, 1, N))
solve(eqn-K,x)
You can't index a NumPy array with a SymPy symbol. Since your sum is finite, just use the Python sum function:
>>> alpha=np.random.randn(1, N)
>>> sum([log(x + i) for i in alpha[0]])
log(x - 1.85289943713841) + log(x - 1.40121781484552) + log(x - 1.21850393539695) + log(x - 0.605693136420962) + log(x - 0.575839713282035) + log(x - 0.105389419698408) + log(x + 0.415055726774043) + log(x + 0.71601559149345) + log(x + 0.866995633213984) + log(x + 1.12521825562504)
But even so, I don't get why you don't just rewrite this as (x - alpha[0])*(x - alpha[1])*...*(x - alpha[N - 1]) - exp(K), as suggested by Warren Weckesser. You can then use a numerical solver like SymPy's nsolve or something from another library to solve this numerically
>>> nsolve(Mul(*[(x - i) for i in alpha[0]]) - exp(K), 1)
mpf('1.2696755961730152')
You could also solve the log expression numerically, but unless your logs can have negative arguments, these should be the same.