I'm trying to do one task, but I just can't figure it out.
This is my function:
1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
I want the sum of the three reciprocals to be as close to 1 as possible, i.e. the whole expression as close to 0 as possible.
And these are my input variables (x,y,z):
test = np.array([1.42, 5.29, 7.75])
So n is the only decision variable.
To summarize:
I have a situation like this right now:
1/(1.42**(1/1)) + 1/(5.29**(1/1)) + 1/(7.75**(1/1)) = 1.02229
And I want to get the following:
1/(1.42^(1/0.972782944446024)) + 1/(5.29^(1/0.972782944446024)) + 1/(7.75^(1/0.972782944446024)) = 0.999625
So far I have roughly nothing, and any help is welcome.
import numpy as np
from scipy.optimize import minimize
def objectiv(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))
test = np.array([1.42, 5.29, 7.75])
print(objectiv(test))
OUTPUT: 1.0222935270013889
How to properly define a constraint?
def conconstraint(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
And it is not at all clear to me what to do with n.
EDIT
I managed to do the following:
def objective(n, *args):
    x = odds[0]
    y = odds[1]
    z = odds[2]
    return abs((1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))) - 1)
odds = [1.42,5.29,7.75]
solve = minimize(objective,1.0,args=(odds))
And my output:
fun: -0.9999999931706812
x: array([0.01864994])
And indeed, when put into the formula:
(1/(1.42^(1/0.01864994)) + 1/(5.29^(1/0.01864994)) + 1/(7.75^(1/0.01864994))) -1 = -0.999999993171
Unfortunately I need the sum itself to come out at (positive) 1, not 0, and I have no idea what to change.
We want to find the n that gets our result, for fixed x, y, and z, as close as possible to 1. minimize tries to get the lowest possible value for something, with no lower bound: -3 is better than -2, and so on.
So what we actually want is called least-squares optimization. Similar idea, though. This documentation is a bit hard to understand, so I'll try to clarify:
All these optimization functions have a common design where you pass in a callable that takes at least one parameter, the one you want to optimize for (in your case, n). Then you can have it take more parameters, whose values will be fixed according to what you pass in.
In your case, you want to be able to solve the optimization problem for different values of x, y and z. So you make your callback accept n, x, y, and z, and pass the x, y, and z values to use when you call scipy.optimize.least_squares. You pass these using the args keyword argument (notice that it is not *args). We can also supply an initial guess of 1 for the n value, which the algorithm will refine.
The rest is customization that is not relevant for our purposes.
So, first let us make the callback:
def objective(n, x, y, z):
    # least_squares minimizes the sum of squares of this return value,
    # so return the residual: how far the sum of reciprocals is from 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
Now our call looks like:
from scipy.optimize import least_squares
best_n = least_squares(objective, 1.0, args=(1.42, 5.29, 7.75))
(You can call minimize the same way, but it will instead look for an n value that makes the callback return as low a value as possible. If I am thinking clearly: the guess for n should trend towards zero from above, making the exponents 1/n increase without bound, so the powers x**(1/n) blow up, the sum of the reciprocals goes towards zero, and the residual heads towards -1. However, it will stop once the steps become small enough, according to the default values for ftol, xtol and gtol. Understanding this part properly is beyond the scope of this answer; try math.stackexchange.com.)
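Putting it all together, here is a minimal runnable sketch (illustrative; it re-verifies the fitted n by plugging it back into the sum):

from scipy.optimize import least_squares

def objective(n, x, y, z):
    # residual: distance of the sum of reciprocals from 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1

result = least_squares(objective, 1.0, args=(1.42, 5.29, 7.75))
n = result.x[0]
print(n)  # the best-fit n
print(1/(1.42**(1/n)) + 1/(5.29**(1/n)) + 1/(7.75**(1/n)))  # should be close to 1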
Related
Use map to evaluate a given polynomial at a specific x-value.
Input:
p: A list of coefficients for increasing powers of x
x: The value of x to evaluate
Output: Number representing the value of the evaluated polynomial
Example: poly_eval([1, 2, 3], 2) = 1(2)^0 + 2(2)^1 + 3(2)^2 = 17
def poly_eval(coeff_list, x):
    total = 0
    for i, coeff in enumerate(coeff_list):
        total += coeff * x**i
    return total
or, if you really want to use map:
def poly_eval(coeff_list, x):
    n = len(coeff_list)
    return sum(map(lambda coeff, x, y: coeff*x**y, coeff_list, [x]*n, range(n)))
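Either version reproduces the worked example above:

print(poly_eval([1, 2, 3], 2))  # 17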
This is actually an interesting question. Since the answer is relatively simple and the pen-and-paper method is familiar to everybody, the real point tends to get overlooked.
As mentioned, most people would approach this the way it is done with pen and paper. However, there is a better way that is more suitable for coding, known as the Ruffini-Horner method. This is a perfect use case for reduce.
Write your polynomial in an array. So y = x^3-7x+7 would be var y = [1,0,-7,7].
Then a simple function:
var calcP = (y,x) => y.reduce((p,c) => p*x+c);
That's it.
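For reference, here is the same Ruffini-Horner fold written in Python (an illustrative translation; note the coefficients run from the highest power down to the constant, the reverse of poly_eval above):

from functools import reduce

def calc_p(coeffs, x):
    # Horner's scheme: fold left, multiplying the accumulator by x
    # and adding the next coefficient
    return reduce(lambda p, c: p*x + c, coeffs)

print(calc_p([1, 0, -7, 7], 2))  # x^3 - 7x + 7 at x = 2 gives 1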
Not sure what went wrong; it's not giving any output. I'm thinking of putting remainder(x-y, y) in the statement, but it's still not working.
def remainder(x, y):
    if x == y:
        return 0
    elif x > y:
        x = x - y
        if x < y:
            return x

x = int(input("x: "))
y = int(input("y: "))
print(f'Remainder: {remainder(x, y)}')
'Recursion', by definition, means that your function calls itself at some point. So far, yours doesn't. Consider the flow through your function for a set of inputs - say, x=22 and y=7 - and walk through each statement to see what it does. This usually helps with debugging.
When designing a recursive function, it's usually easiest to start with what you're doing to recur (in this case, subtracting y from x). Then, ask what would be the base case - what are the circumstances that make it impossible to recur further? In this case, if subtracting y from x would put the result below zero. In other words, if y is greater than x. Then, the rest of the function is just setting up for calling itself, and calling itself.
In this case, what you'd actually want to do to implement modulo via recursion is more simple:
def remainder(x, y):
    # base case
    if y > x:
        return x
    # recursive case
    return remainder(x-y, y)
Each recursive call subtracts another y from x, until y is larger than x, at which point whatever is left in x is returned as the remainder.
Note that this will not work for negative numbers, for what should be obvious reasons.
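Tracing the suggested inputs x=22, y=7 makes the flow concrete:

remainder(22, 7)  # 7 > 22 is false -> remainder(15, 7)
                  # 7 > 15 is false -> remainder(8, 7)
                  # 7 > 8  is false -> remainder(1, 7)
                  # 7 > 1  is true  -> returns 1, and indeed 22 % 7 == 1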
You can also use this approach; no recursion needed:
def remainder(x, y):
    ans = x/y
    rem = y * (ans - int(ans))  # y times the fractional part of x/y
    return round(rem)

remainder(132, 7)
will give you
6
You need to call the function to perform recursion
def remainder(x, y):
    if y == 0:  # guard to avoid infinite recursion
        return float("nan")
    if x >= y:
        return remainder(x-y, y)
    else:  # base case
        return x
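A quick, illustrative sanity check of this version against Python's built-in % operator (non-negative operands only, for the reason noted above):

for x, y in [(22, 7), (132, 7), (5, 5), (3, 10)]:
    assert remainder(x, y) == x % y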
I can be considered pretty much new to Python and coding in general, so forgive me for my ignorance.
I'm trying to solve a system of trigonometric functions in python, and I'm doing so using the solve command from sympy. However, this method returns only a finite number of solutions, two in this particular case.
I've read through the documentation and it seems that to get an expression for all the solutions solveset is to be used instead. However, I do not want all the solutions to be displayed, but rather only a finite amount which is contained within a certain range.
Here's the example:
from sympy import *
x, y = symbols('x, y')
eq1 = Eq(y - sin(x), 0)
eq2 = Eq(y - cos(x), 0)
sol = solve([eq1, eq2], [x, y])
print(sol)
which only returns the first two solutions in the positive x range.
How could I, for example, display all the solutions within the x range [-2pi, 2pi]?
I'd want them in explicit form rather than written in terms of some multiplier, since I then need to convert them into numerical form.
Thank you in advance.
SymPy can really take you down rabbit holes. I agree with kampmani's solution, but only if you can easily solve for y on your own. However, in more general cases and in higher dimensions, that solution does not hold.
For example, the following will be slightly more tricky:
eq1 = Eq(z - x*y, 0)
eq2 = Eq(z - cos(x) - sin(y), 0)
eq3 = Eq(z + x*y, 0)
So here I am, killing a fly with a bazooka. The problem is that here one was able to reduce the set of equations to a single equation in a single variable. But what if you can't do that (for example, if it were a larger system)?
In this case, one needs to use nonlinsolve to solve the system of equations. But this does not provide numeric solutions directly and does not have a domain argument.
So the following code unpacks the solutions. It goes through each tuple in the set of solutions and finds the numeric solutions for each component in the tuple. Then in order to get the full list, you need a Cartesian Product of each of those components. Repeat this for each tuple in the set of solutions.
The following should work for almost any system of equations in any dimension greater than 1. It produces numeric solutions in the cube whose boundaries are the domains variable.
from sympy import *
import itertools  # used for cartesian product

x, y, z = symbols('x y z', real=True)
domains = [Interval(-10, 10), Interval(-10, 10), Interval(-10, 10)]  # The domain for each variable
eq1 = z - x*y
eq2 = z - cos(x) - sin(y)
eq3 = z + x*y

solutions = nonlinsolve([eq1, eq2, eq3], [x, y, z])  # the recommended function for this situation
print("---------Solution set----------")
print(solutions)  # make sure the solution set is reasonable. If not, assertion error will occur

_n = Symbol("n", integer=True)  # the solution set often seems to contain these symbols
numeric_solutions = []
assert isinstance(solutions, Set)  # everything that I had tried resulted in a FiniteSet output
for solution in solutions.args:  # loop through the different kinds of solutions
    assert isinstance(solution, Tuple)  # each solution should be a Tuple if in 2D or higher
    list_of_numeric_values = []  # the list of lists of a single numerical value
    for i, element in enumerate(solution):
        if isinstance(element, Set):
            numeric_values = list(element.intersect(domains[i]))
        else:  # assume it is an Expr
            assert isinstance(element, Expr)
            if _n.name in [s.name for s in element.free_symbols]:  # if n is in the expression
                # change our own _n to the solutions _n since they have different hidden
                # properties and they cannot be substituted without having the same _n
                _n = [s for s in element.free_symbols if s.name == _n.name][0]
                numeric_values = [element.subs(_n, n)
                                  for n in range(-10, 10)  # just choose a bunch of sample values
                                  if element.subs(_n, n) in domains[i]]
            elif len(element.free_symbols) == 0:  # we just have a single, numeric number
                numeric_values = [element] if element in domains[i] else []
            else:  # otherwise we just have an Expr that depends on x or y
                # we assume this numerical value is in the domain
                numeric_values = [element]
        # note that we may have duplicates, so we remove them with `set()`
        list_of_numeric_values.append(set(numeric_values))
    # find the resulting cartesian product of all our numeric_values
    numeric_solutions += itertools.product(*list_of_numeric_values)

# remove duplicates again to be safe with `set()` but then retain ordering with `list()`
numeric_solutions = list(set(numeric_solutions))
print("--------`Expr` values----------")
for i in numeric_solutions:
    print(list(i))  # turn it into a `list` since the output below is also a `list`.
print("--------`float` values---------")
for i in numeric_solutions:
    print([N(j) for j in i])  # could have been converted into a `tuple` instead
In particular, it produces the following output for the new problem:
---------Solution set----------
FiniteSet((0, ImageSet(Lambda(_n, 2*_n*pi + 3*pi/2), Integers), 0))
--------`Expr` values----------
[0, -5*pi/2, 0]
[0, -pi/2, 0]
[0, 3*pi/2, 0]
--------`float` values---------
[0, -7.85398163397448, 0]
[0, -1.57079632679490, 0]
[0, 4.71238898038469, 0]
It was a lot of effort and probably not worth it, but oh well.
By using solveset you can restrict the solutions with the domain argument. To get numerical results, use .evalf() or a similar method.
from sympy import Interval, symbols, solveset, sin, cos, pi
x = symbols('x')
sol = solveset(cos(x) - sin(x), x, domain=Interval(-2*pi, 2*pi))
print(sol)
print(sol.evalf())
Output
FiniteSet(-7*pi/4, -3*pi/4, pi/4, 5*pi/4)
FiniteSet(-5.49778714378214, -2.35619449019234, 0.785398163397448, 3.92699081698724)
I hope this helps!
Thanks to the brilliant suggestion from @kampmani it is possible to achieve the desired result.
For a start, the FiniteSet elements cannot be indexed directly, so the FiniteSet has to be converted into a list:
solx_array = []
#
#
#
solx = solveset(cos(x) - sin(x), x, domain=Interval(-2*pi, 2*pi))
solx_array=list(solx)
The next step is to find the y coordinate of the intersection point given its x coordinate. The final code should look somewhat similar to this:
from sympy import Interval, symbols, solveset, sin, cos, pi

sol_array = []
x = symbols('x')
solx = solveset(cos(x) - sin(x), x, domain=Interval(-2*pi, 2*pi))
solx_array = list(solx)
for i in range(len(solx_array)):
    soly = cos(solx_array[i])
    sol_array.append(str(solx_array[i] + soly))
for i in range(len(sol_array)):
    print(sol_array[i])
Still don't know how to convert the results into numerical form, though; any ideas are very appreciated.
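For that last step, the N()/evalf() approach from the previous answer should work element by element; an illustrative sketch building on the loop above:

from sympy import N

for xi in solx_array:
    yi = cos(xi)
    print(N(xi), N(yi))  # numeric (x, y) coordinates of each intersection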
I'm solving a one-dimensional non-linear equation with Newton's method. I'm trying to figure out why one of the implementations of Newton's method is converging exactly within floating point precision, whereas another is not.
The following update does not converge:

x_new = x + dx, where dx = -f(x)/f'(x)

whereas the following does converge:

x_new = (x*f'(x) - f(x))/f'(x)

(The two are algebraically identical; they differ only in the order of the floating-point operations.)
You may assume that the functions f and f' are smooth and well behaved. The best explanation I was able to come up with is that this is somehow related to what's called iterative improvement (Golub and Van Loan, 1989). Any further insight would be greatly appreciated!
Here is a simple Python example illustrating the issue:
# Python
def f(x):
    return x*x - 2.

def fp(x):
    return 2.*x

xprev = 0.

# converges
x = 1.  # guess
while x != xprev:
    xprev = x
    x = (x*fp(x) - f(x))/fp(x)
    print(x)

# does not converge
x = 1.  # guess
while x != xprev:
    xprev = x
    dx = -f(x)/fp(x)
    x = x + dx
    print(x)
Note: I'm aware of how floating point numbers work (please don't post your favourite link to a website telling me to never compare two floating point numbers). Also, I'm not looking for a solution to a problem but for an explanation as to why one of the algorithms converges but not the other.
Update:
As @uhoh pointed out, there are many cases where the second method does not converge. However, I still don't know why the second method converges so much more easily in my real-world scenario than the first. All the test cases have very simple functions f, whereas the real-world f has several hundred lines of code (which is why I don't want to post it). So maybe the complexity of f is important. If you have any additional insight into this, let me know!
Neither method is perfect:
One situation in which both methods will tend to fail is when the root lies almost exactly midway between two consecutive floating-point numbers f1 and f2. Then both methods, having arrived at f1, will try to compute that intermediate value and have a good chance of turning up f2, and vice versa.
/f(x)
/
/
/
/
f1 /
--+----------------------+------> x
/ f2
/
/
/
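To make the picture concrete for f(x) = x*x - 2 (an illustrative aside; math.nextafter needs Python 3.9+):

import math

x1 = 1.4142135623730949        # the value the combined update settles on
x2 = math.nextafter(x1, 2.0)   # the very next representable double
print(x1, x2)                  # the true sqrt(2) lies strictly between them
print(math.sqrt(2.0) == x2)    # True: the correctly rounded sqrt(2) is x2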
"I'm aware of how floating point numbers work...". Perhaps the workings of floating-point arithmetic are more complicated than imagined.
This is a classic example of cycling of iterates in Newton's method. Comparing a difference to an epsilon is "mathematical thinking" that can burn you when using floating point. In your example, the iteration visits several floating-point values of x and is then trapped in a cycle between two numbers. The "floating-point thinking" is better formulated as the following (sorry, my preferred language is C++):
std::set<double> visited;
xprev = 0.0;
x = 1.0;
while (x != xprev)
{
    xprev = x;
    dx = -F(x)/DF(x);  // F is f, DF is its derivative f'
    x = x + dx;
    if (visited.find(x) != visited.end())
    {
        break;  // found a cycle
    }
    visited.insert(x);
}
I'm trying to figure out why one of the implementations of Newton's method is converging exactly within floating point precision, whereas another is not.
Technically, it doesn't converge to the correct value. Try printing more digits, or using float.hex.
The first one gives
>>> print "%.16f" % x
1.4142135623730949
>>> float.hex(x)
'0x1.6a09e667f3bccp+0'
whereas the correctly rounded value is the next floating point value:
>>> print "%.16f" % math.sqrt(2)
1.4142135623730951
>>> float.hex(math.sqrt(2))
'0x1.6a09e667f3bcdp+0'
The second algorithm is actually alternating between the two values, so it doesn't converge.
The problem is due to catastrophic cancellation in f(x): as x*x will be very close to 2, when you subtract 2, the result will be dominated by the rounding error incurred in computing x*x.
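The cancellation is easy to see directly (an illustrative aside):

x = 1.4142135623730949  # the value the first loop settles on
print(x*x - 2.0)        # on the order of 1e-16: the subtraction leaves almost pure rounding error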
I think trying to force an exact equality (instead of err < small) is always going to fail frequently. In your example, for 100,000 random numbers between 1 and 10 (instead of your 2.0), the first method fails about 1/3 of the time and the second about 1/6 of the time. I'll bet there's a way to predict that!
This takes ~30 seconds to run, and the results are cute:
import numpy as np
import matplotlib.pyplot as plt

def f(x, a):
    return x*x - a

def fp(x):
    return 2.*x

def A(a):
    xprev = 0.
    x = 1.
    n = 0
    while x != xprev:
        xprev = x
        x = (x * fp(x) - f(x, a)) / fp(x)
        n += 1
        if n > 100:
            return n, x
    return n, x

def B(a):
    xprev = 0.
    x = 1.
    n = 0
    while x != xprev:
        xprev = x
        dx = -f(x, a) / fp(x)
        x = x + dx
        n += 1
        if n > 100:
            return n, x
    return n, x

n = 100000
aa = 1. + 9. * np.random.random(n)
data_A = np.zeros((2, n))
data_B = np.zeros((2, n))
for i, a in enumerate(aa):
    data_A[:, i] = A(a)
    data_B[:, i] = B(a)

bins = np.linspace(0, 110, 12)
hist_A = np.histogram(data_A, bins=bins)
hist_B = np.histogram(data_B, bins=bins)
print("A: n<10: ", hist_A[0][0], " n>=100: ", hist_A[0][-1])
print("B: n<10: ", hist_B[0][0], " n>=100: ", hist_B[0][-1])

plt.figure()
plt.subplot(1, 2, 1)
plt.scatter(aa, data_A[0])
plt.subplot(1, 2, 2)
plt.scatter(aa, data_B[0])
plt.show()
Our assignment is to use sympy to evaluate the exact definite integral of a function and then compare it with the approximation of the definite integral obtained from another python function we wrote. With simple polynomial functions my code works fine, but with a complicated sine function it keeps either breaking or returning nan.
from numpy import *
def simpson(f, a, b, n):
    if n <= 0 or n % 2 != 0:
        print('Error: the number of subintervals must be a positive even number')
        return float('NaN')
    h = float(b - a) / float(n)
    x = arange(a, b + h, h)
    fx = f(x)
    fx[1:n:2] *= 4.0
    fx[2:n:2] *= 2.0
    return (h/3.) * sum(fx)
This is in one file (SimpAndTrap) and gives the approximation of the definite integral of f from a to b using a Simpson's rule approximation with n subintervals.
from pylab import *
def s(x):
    return x*sin(3./(x + (x == 0)))
This is the function giving me trouble, in a file called assignment8functions
import assignment8functions as a
import SimpAndTrap as st
import sympy as sp
x = sp.symbols('x')
Exact_int_q = sp.integrate(a.q(x),(x,0,2)).evalf(25)
Exact_int_s = sp.integrate(x*sp.sin(3./(x)),(x,0,2)).evalf(25)
q(x) is another function we're supposed to use, and everything works fine for it - it's just a polynomial. When I try to do the integration the same way for s it breaks, so I had to put the expression for s(x) directly into the call instead of importing the one from the other file.
n = a.array([10,100,1000,10000,10000,1000000])
s_error_simp_array = a.zeros(6)
for i in a.arange(6):
    s_error_simp_array[i] = abs(Exact_int_s - st.simpson(a.s, 0, 2, n[i]))
Here I try to find the error in the approximation. The problem is, first of all, that Exact_int_s is apparently -4.5*Si(zoo) + 8.16827746848576, and I have no idea what that's supposed to mean; and also that the simpson function always returns nan.
I know it's a lot of words and code, but does anybody know what's wrong?
To avoid the -4.5*Si(zoo) + 8.--- answer, just start the integration at a small positive number, e.g.:
x = sp.Symbol('x')
print(sp.integrate(x * sp.sin(3./x), (x, 0.000001, 2)))
and you'll get an answer like 1.0996940...
You can justify this because |s(x)| <= x for small positive x, so the interval [0, epsilon] can contribute at most epsilon**2/2 to the integral.
Btw - your simpson implementation seems to check out.