Python 3.4.1 - Doesn't show the final result? - python

Write a program to find the roots of a quadratic equation using the Bhaskara formula, calculating the square root as number ** 0.5.
I can enter a, b, and c, but when I run the program it doesn't show the result of the roots x1 and x2...
This is my code so far:
a = int(input("a "))
b = int(input("b "))
c = int(input("c "))
delta = b * b - 4 * a * c
if delta >= 0:
    x1 = (-b + delta ** 0.5) / (2 * a)
    x2 = (-b - delta ** 0.5) / (2 * a)
    print("x1: ", x1)
    print("x2: ", x2)

Every quadratic with real values a, b, and c has two roots (except in the case where the delta is 0), but sometimes the roots are complex numbers, not real numbers.
In particular, if I remember correctly:
If delta > 0, there are two real roots.
If delta == 0, there is only one root, the real number -b/(2*a).
If delta < 0, there are two complex roots (which are always conjugates).
If you do your math with complex numbers, you can use the same formula, (-b +/- delta**0.5) / 2a, for all three cases, and you'll get two real numbers, or 0 twice, or two complex numbers, as appropriate.
There are also ways to calculate the real and imaginary parts of the third case without doing complex math, but since Python makes complex math easy, why bother unless you're specifically trying to learn about those ways?
So, if you always want to print 2 roots, all you have to do is remove that if delta >= 0: line (and dedent the next few lines). Raising a negative float to the 0.5 power will give you a complex automatically, and that will make the rest of the expression complex. Like this:
delta = b * b - 4 * a * c
x1 = (-b + delta ** 0.5) / (2 * a)
x2 = (-b - delta ** 0.5) / (2 * a)
print("x1: ", x1)
print("x2: ", x2)
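For instance (using hypothetical sample coefficients a = 1, b = 2, c = 5, whose delta is negative), Python produces the complex conjugate pair automatically; note the parentheses around (2 * a), which operator precedence requires:

```python
# Hypothetical sample coefficients: x**2 + 2*x + 5 has no real roots
a, b, c = 1, 2, 5

delta = b * b - 4 * a * c            # -16
x1 = (-b + delta ** 0.5) / (2 * a)   # note the parentheses around 2 * a
x2 = (-b - delta ** 0.5) / (2 * a)
print("x1: ", x1)  # approximately (-1+2j)
print("x2: ", x2)  # approximately (-1-2j)
```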
If you only want 0-2 real roots, your code is already correct as-is. You might want to add a check for delta == 0, or just for x1 == x2, so you don't print the same value twice. Like this:
delta = b * b - 4 * a * c
if delta >= 0:
    x1 = (-b + delta ** 0.5) / (2 * a)
    x2 = (-b - delta ** 0.5) / (2 * a)
    print("x1: ", x1)
    if x1 != x2:
        print("x2: ", x2)
If you want some kind of error message, all you need to do is add an else clause. Something like this:
delta = b * b - 4 * a * c
if delta >= 0:
    x1 = (-b + delta ** 0.5) / (2 * a)
    x2 = (-b - delta ** 0.5) / (2 * a)
    print("x1: ", x1)
    print("x2: ", x2)
else:
    print("No real solutions because of negative delta: ", delta)
Which one do you want? I have no idea. That's a question for you to answer. Once you decide what output you want for, say, 3, 4, and 5, you can pick the version that gives you that output.


How to implement this equation in numpy

I'm new to numpy and am trying to implement the following equation.
The equation has two parts and should give a final value called Sigma.
The equation is taken from a paper; the image showing the formula for Sigma is not reproduced here.
I tried to implement it as below, but when running the code, the value of c comes out as nan:
c = np.sqrt(np.log(2 / np.sqrt( 16 * delta + 1 ) -1 ))
sigma = (c + np.sqrt(np.square(c) + epsilon) ) * s / (epsilon * np.sqrt(2))
I'd appreciate advice on how to implement it in numpy.
You missed a bracket in your code:
c = np.sqrt(np.log(2 / (np.sqrt(16 * delta + 1) - 1)))
sigma = (c + np.sqrt(np.square(c) + epsilon)) * s / (epsilon * np.sqrt(2))
To get a valid c value, you should input delta such that 0 < delta < 0.5.
You are missing a parenthesis. This is the correct formula:
c = np.sqrt(np.log(2/(np.sqrt(16*delta + 1) -1)))
Also, keep in mind that (as the paper states) this is defined only for 0 < delta < 0.5.
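A minimal runnable sketch of the corrected formula, using placeholder values delta = 0.25, epsilon = 1.0, and s = 1.0 (these are not values from the paper):

```python
import numpy as np

# Placeholder inputs; pick delta in the valid range 0 < delta < 0.5
delta, epsilon, s = 0.25, 1.0, 1.0

# The extra parentheses around (sqrt(16*delta + 1) - 1) are the fix
c = np.sqrt(np.log(2 / (np.sqrt(16 * delta + 1) - 1)))
sigma = (c + np.sqrt(np.square(c) + epsilon)) * s / (epsilon * np.sqrt(2))

print(c, sigma)  # both finite, no nan
```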

Predict the number of iterations required - iterated weighted average

Pardon me, but I couldn't find a better title. Please look at this super-simple Python program:
x = start = 1.0
target = 0.1
coeff = 0.999
for c in range(100000):
    print('{:5d} {:f}'.format(c, x))
    if abs(target - x) < abs((x - start) * 0.01):
        break
    x = x * coeff + target * (1 - coeff)
Brief explanation: this program moves x towards target by iteratively taking the weighted average of x and target, with coeff as the weight. It stops when the remaining difference drops below 1% of the distance already covered from start.
The number of iterations remains the same no matter the initial values of x and target.
How can I set coeff in order to predict how many iterations will take place?
Thanks a lot.
Let's make this a function, f.
f(0) is the initial value (start, in this case 1.0).
f(x) = f(x - 1) * c + T * (1 - c).
(So f(1) is the next value of x, f(2) is the one after that, and so on. We want to find the value of x where |T - f(x)| < 0.01 * |f(0) - f(x)|)
So let's rewrite f(x) to be linear:
f(x) = f(x - 1) * c + T * (1 - c)
= (f(x - 2) * c + T * (1 - c)) * c + T * (1 - c)
= (f(x - 2) * c ** 2 + T * c * (1 - c)) + T * (1 - c)
= ((f(x - 3) * c + T * (1 - c)) * c ** 2 + T * c * (1 - c)) + T * (1 - c)
= f(x - 3) * c ** 3 + T * c ** 2 * (1 - c) + T * c * (1 - c) + T * (1 - c)
= f(0) * c ** x + T * c ** (x - 1) * (1 - c) + T * c ** (x - 2) * (1 - c) + ... + T * c * (1 - c) + T * (1 - c)
= f(0) * c ** x + (T * (1 - c)) [(sum r = 0 to x - 1) (c ** r)]
# Summation of a geometric series
= f(0) * c ** x + (T * (1 - c)) ((1 - c ** x) / (1 - c))
= f(0) * c ** x + T (1 - c ** x)
So, the nth value of x will be start * c ** n + target * (1 - c ** n).
We want:
|T - f(x)| < 0.01 * |f(0) - f(x)|
|T - f(0) * c ** x - T (1 - c ** x)| < 0.01 * |f(0) - f(0) * c ** x - T (1 - c ** x)|
|(c ** x) * T - (c ** x) f(0)| < 0.01 * |(1 - c ** x) * f(0) - (1 - c ** x) * T|
(c ** x) * |T - f(0)| < 0.01 * (1 - c ** x) * |T - f(0)|
c ** x < 0.01 * (1 - c ** x)
c ** x < 0.01 - 0.01 * c ** x
1.01 * c ** x < 0.01
c ** x < 1 / 101
x > log (1 / 101) / log c
(The inequality flips in the last step because log c is negative. With c = 0.999, this gives x > 4612.8, and the loop indeed terminates on step 4613.)
In the end, it is independent of start and target.
Also, for a general percentage difference of p:
c ** x < p * (1 - c ** x)
c ** x < p - p * c ** x
(1 + p) * c ** x < p
c ** x < p / (1 + p)
x > log (p / (1 + p)) / log c
So for a coefficient of c, there will be about log (1 / 101) / log c steps.
If you have the number of steps you want, call it I, you have
I = log_c(1 / 101)
c ** I = 1 / 101
c = (1 / 101) ** (1 / I)
So c should be set to the Ith root of 1 / 101.
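As a sketch of that last result (with an arbitrary target of I = 500 iterations), setting coeff to the Ith root of 1/101 makes the loop from the question run for roughly I steps:

```python
# Derive coeff for a desired iteration count I (here 500, chosen arbitrarily)
I = 500
coeff = (1 / 101) ** (1 / I)

x = start = 1.0
target = 0.1
count = 0
for c in range(100000):
    if abs(target - x) < abs((x - start) * 0.01):
        count = c
        break
    x = x * coeff + target * (1 - coeff)

print(count)  # ~500, possibly off by one due to floating point
```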
Your code reduces the distance between x and the target by a factor of coeff in each execution of the loop. Thus, if start is greater than target, we get the formula
x - target = (start - target) * coeff ** c
where c is the number of loops we have done.
Your ending criterion is (again, if start is greater than target),
x - target < (start - x) * 0.01
Solving for x by algebra we get
x < (target + 0.01 * start) / (1 + 0.01)
Substituting that into our first expression and simplifying a bit makes both start and target drop out of the inequality--now you see why those values did not matter--and we get
coeff ** c < 0.01 / (1 + 0.01)
Solving for c we get
c > log(0.01 / (1 + 0.01), coeff)
So the final answer for the number of loops is
ceil(log(0.01 / (1 + 0.01), coeff))
or alternatively, if you do not like logarithms to an arbitrary base,
ceil(log(0.01 / (1 + 0.01)) / log(coeff))
You could replace that first logarithm in that last expression with its result, but I left it that way to see what different result you would get if you change the constant in your end criterion away from 0.01.
The result of that expression in your particular case is
4613
which is correct. Note that both the ceil and log functions are in Python's math module, so remember to import them before doing that calculation. Also note that Python's floating point calculations are not exact, so your actual number of loops may differ from that by one if you change the values of coeff or of 0.01.
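Putting that expression into a short script alongside the original loop (same start, target, and coeff as the question) lets you check the prediction directly:

```python
from math import ceil, log

start, target, coeff = 1.0, 0.1, 0.999

# Predicted number of loop iterations
predicted = ceil(log(0.01 / (1 + 0.01)) / log(coeff))

# Count the iterations the original loop actually performs
x = start
count = 0
for c in range(100000):
    if abs(target - x) < abs((x - start) * 0.01):
        count = c
        break
    x = x * coeff + target * (1 - coeff)

print(predicted, count)  # 4613 and 4613 (the loop may differ by one)
```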

Determine parabola with given arc length between two known points

Let (0,0) and (Xo,Yo) be two points on a Cartesian plane. We want to determine the parabolic curve, Y = AX^2 + BX + C, which passes through these two points and has a given arc length equal to S. Obviously, S > sqrt(Xo^2 + Yo^2). As the curve must pass through (0,0), it follows that C = 0. Hence, the curve equation reduces to Y = AX^2 + BX. How can I determine {A,B} knowing {Xo,Yo,S}? There are two solutions; I want the one with A > 0.
I have an analytical solution (a complicated one) that gives S for a given set of {A,B,Xo,Yo}, though here the problem is inverted... I can proceed by numerically solving a complicated system of equations... but perhaps there is a numerical routine out there that does exactly this?
Any useful Python library? Other ideas?
Thanks a lot :-)
Note that the arc length (line integral) of the quadratic y = a*x^2 + b*x from x = 0 to x = x0 is given by the integral of sqrt(1 + (2*a*x + b)^2). On solving the integral, its value is obtained as 0.5 * (I(u) - I(l)) / a, where u = 2*a*x0 + b; l = b; and I(t) = 0.5 * (t * sqrt(1 + t^2) + log(t + sqrt(1 + t^2))), the antiderivative of sqrt(1 + t^2).
Since y0 = a * x0^2 + b * x0, b = y0/x0 - a*x0. Substituting the value of b in u and l, u = y0/x0 + a*x0, l = y0/x0 - a*x0. Substituting u and l in the solution of the line integral (arc length), we get the arc length as a function of a:
s(a) = 0.5 * (I(y0/x0 + a*x0) - I(y0/x0 - a*x0)) / a
Now that we have the arc length as a function of a, we simply need to find the value of a for which s(a) = S. This is where my favorite root-finding algorithm, the Newton-Raphson method, comes into play yet again.
The working algorithm for the Newton-Raphson method of finding roots is as follows:
For a function f(x) whose root is to be obtained, if x(i) is the ith guess for the root,
x(i+1) = x(i) - f(x(i)) / f'(x(i))
Where f'(x) is the derivative of f(x). This process is continued till the difference between two consecutive guesses is very small.
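As a self-contained illustration of that update rule (on a toy function, f(x) = x^2 - 2, rather than the arc-length problem itself):

```python
def newton(f, df, x, eps=1e-10, max_iter=100):
    """Newton-Raphson: iterate x -> x - f(x)/f'(x) until the step is tiny."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) <= eps:
            break
    return x

# Toy example: the positive root of x**2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
print(root)  # ~1.4142135623730951
```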
In our case, f(a) = s(a) - S and f'(a) = s'(a). By simple application of the chain rule and the quotient rule,
s'(a) = 0.5 * (a*x0 * (I'(u) + I'(l)) + I(l) - I(u)) / (a^2)
Where I'(t) = sqrt(1 + t^2).
The only problem that remains is calculating a good initial guess. Due to the nature of the graph of s(a), the function is an excellent candidate for the Newton-Raphson method, and an initial guess of y0 / x0 converges to the solution in about 5-6 iterations for a tolerance/epsilon of 1e-10.
Once the value of a is found, b is simply y0/x0 - a*x0.
Putting this into code:
from math import sqrt, log

def find_coeff(x0, y0, s0):
    def dI(t):
        return sqrt(1 + t*t)
    def I(t):
        rt = sqrt(1 + t*t)
        return 0.5 * (t * rt + log(t + rt))
    def s(a):
        u = y0/x0 + a*x0
        l = y0/x0 - a*x0
        return 0.5 * (I(u) - I(l)) / a
    def ds(a):
        u = y0/x0 + a*x0
        l = y0/x0 - a*x0
        return 0.5 * (a*x0 * (dI(u) + dI(l)) + I(l) - I(u)) / (a*a)
    N = 1000
    EPSILON = 1e-10
    guess = y0 / x0
    for i in range(N):
        dguess = (s(guess) - s0) / ds(guess)
        guess -= dguess
        if abs(dguess) <= EPSILON:
            print("Break:", abs(s(guess) - s0))
            break
        print(i+1, ":", guess)
    a = guess
    b = y0/x0 - a*x0
    print(a, b, s(a))
Run the example on CodeSkulptor.
Note that due to the rational approximation of the arc lengths given as input to the function in the examples, the coefficients obtained may ever so slightly differ from the expected values.

Slow abs() function in Python. Explain?

So I was bored, and I decided to come up with a method to calculate pi. I implemented it, and it ran well. I wanted to optimize it, so I ran the profiler. It took about 26 seconds. I discovered that the abs() function accounted for a lot of the time, so I came up with a way to avoid it. After that, the program ran in 8 seconds! Can someone explain to me why the abs() function was taking so long?
Here is the code without abs():
def picalc(radius = 10000000):
    total = 0
    x = 0
    y = radius
    for i in range(radius + 1):
        x1 = i
        y1 = (radius ** 2 - x1 ** 2) ** 0.5
        total += ((x1 - x) ** 2 + (y1 - y) ** 2) ** 0.5
        x = x1
        y = y1
    print(total / (radius / 2))

import profile
profile.run('picalc()')
If I change the line total += ((x1 - x) ** 2 + (y1 - y) ** 2) ** 0.5 to total += (abs(x1 - x) ** 2 + abs(y1 - y) ** 2) ** 0.5, the operation runs MUCH slower.
EDIT: I know that the negatives cancel when squaring. That was a mistake I made.
EDIT x2: I tried substituting total += ((x1 - x) ** 2 + (y1 - y) ** 2) ** 0.5 with total += math.hypot(x1 - x, y1 - y), but the profiler tells me it took 10 seconds longer! I read the docs and they said that the math library contains thin wrappers to the C math library (at least in IDLE). How can C be slower than Python in this case?
First of all: the abs() calls are entirely redundant if you are squaring the result anyway.
Next, you may be reading the profile output wrong; don't mistake the cumulative times for the time spent only on the function call itself; you are calling abs() many, many times, so the accumulated time rises rapidly.
Moreover, profiling adds a lot of overhead to the interpreter. Use the timeit module to compare the performance between approaches, it gives you overall performance metrics so you can compare apples with apples.
It is not that the abs() function itself is slow; calling any function is (relatively) slow. Looking up a global name is slower than looking up locals, and then Python needs to push the current frame on the stack, execute the function, and pop the frame from the stack again.
You can alleviate one of those pain points by making abs() a local name outside the loop:
_abs = abs
for i in range(radius + 1):
    # ...
    total += (_abs(x1 - x) ** 2 + _abs(y1 - y) ** 2) ** 0.5
Not that abs() really takes such a huge toll on your performance when you time your functions properly. Using a radius of 1000 to make 100 repeats practical, timeit comparisons give me:
>>> from timeit import timeit
>>> def picalc(radius = 10000000):
...     total = 0
...     x = 0
...     y = radius
...     for i in range(radius + 1):
...         x1 = i
...         y1 = (radius ** 2 - x1 ** 2) ** 0.5
...         total += ((x1 - x) ** 2 + (y1 - y) ** 2) ** 0.5
...         x = x1
...         y = y1
...
>>> def picalc_abs(radius = 10000000):
...     total = 0
...     x = 0
...     y = radius
...     for i in range(radius + 1):
...         x1 = i
...         y1 = (radius ** 2 - x1 ** 2) ** 0.5
...         total += (abs(x1 - x) ** 2 + abs(y1 - y) ** 2) ** 0.5
...         x = x1
...         y = y1
...
>>> def picalc_abs_local(radius = 10000000):
...     total = 0
...     x = 0
...     y = radius
...     _abs = abs
...     for i in range(radius + 1):
...         x1 = i
...         y1 = (radius ** 2 - x1 ** 2) ** 0.5
...         total += (_abs(x1 - x) ** 2 + _abs(y1 - y) ** 2) ** 0.5
...         x = x1
...         y = y1
...
>>> timeit('picalc(1000)', 'from __main__ import picalc', number=100)
0.13862298399908468
>>> timeit('picalc(1000)', 'from __main__ import picalc_abs as picalc', number=100)
0.14540845900774002
>>> timeit('picalc(1000)', 'from __main__ import picalc_abs_local as picalc', number=100)
0.13702849800756667
Notice how there is very little difference between the three approaches now.

What's wrong with this function to solve cubic equations?

I am using Python 2 and the fairly simple method given in Wikipedia's article "Cubic function". This could also be a problem with the cube root function I have to define in order to create the function mentioned in the title.
# Cube root and cubic equation solver
#
# Copyright (c) 2013 user2330618
#
# This Source Code Form is subject to the terms of the Mozilla Public
# License, v. 2.0. If a copy of the MPL was not distributed with this
# file, you can obtain one at http://www.mozilla.org/MPL/2.0/.

from __future__ import division

import cmath
from cmath import log, sqrt

def cbrt(x):
    """Computes the cube root of a number."""
    if x.imag != 0:
        return cmath.exp(log(x) / 3)
    else:
        if x < 0:
            d = (-x) ** (1 / 3)
            return -d
        elif x >= 0:
            return x ** (1 / 3)

def cubic(a, b, c, d):
    """Returns the real roots to cubic equations in expanded form."""
    # Define the discriminants
    D = (18 * a * b * c * d) - (4 * (b ** 3) * d) + ((b ** 2) * (c ** 2)) - \
        (4 * a * (c ** 3)) - (27 * (a ** 2) * d ** 2)
    D0 = (b ** 2) - (3 * a * c)
    i = 1j  # Because I prefer i over j
    # Test for some special cases
    if D == 0 and D0 == 0:
        return -(b / (3 * a))
    elif D == 0 and D0 != 0:
        return [((b * c) - (9 * a * d)) / (-2 * D0),
                ((b ** 3) - (4 * a * b * c) + (9 * (a ** 2) * d)) / (-a * D0)]
    else:
        D1 = (2 * (b ** 3)) - (9 * a * b * c) + (27 * (a ** 2) * d)
        # More special cases
        if D != 0 and D0 == 0 and D1 < 0:
            C = cbrt((D1 - sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2)
        else:
            C = cbrt((D1 + sqrt((D1 ** 2) - (4 * (D0 ** 3)))) / 2)
        u_2 = (-1 + (i * sqrt(3))) / 2
        u_3 = (-1 - (i * sqrt(3))) / 2
        x_1 = (-(b + C + (D0 / C))) / (3 * a)
        x_2 = (-(b + (u_2 * C) + (D0 / (u_2 * C)))) / (3 * a)
        x_3 = (-(b + (u_3 * C) + (D0 / (u_3 * C)))) / (3 * a)
        if D > 0:
            return [x_1, x_2, x_3]
        else:
            return x_1
I've found that this function is capable of solving some simple cubic equations:
print cubic(1, 3, 3, 1)
-1.0
And a while ago I had gotten it to a point where it could solve equations with two roots. I've just done a rewrite and now it's gone haywire. For example, these coefficients are the expanded form of (2x - 4)(x + 4)(x + 2) and it should return [4.0, -4.0, -2.0] or something similar:
print cubic(2, 8, -8, -32)
[(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)]
Is this more a mathematical or a programming mistake I'm making?
Update: Thank you, everyone, for your answers, but there are more problems with this function than I've listed so far. For example, I often get an error relating to the cube root function:
print cubic(1, 2, 3, 4) # Correct solution: about -1.65
...
if x > 0:
TypeError: no ordering relation is defined for complex numbers
print cubic(1, -3, -3, -1) # Correct solution: about 3.8473
if x > 0:
TypeError: no ordering relation is defined for complex numbers
Wolfram Alpha confirms that the roots to your last cubic are indeed
(-4, -2, 2)
and not as you say
... it should return [4.0, -4.0, -2.0]
Notwithstanding that (I presume) typo, your program gives
[(-4+1.4802973661668753e-16j), (2+2.9605947323337506e-16j), (-2.0000000000000004-1.1842378929335002e-15j)]
which to an accuracy of 10**(-15) are the same roots as the correct solution. The tiny imaginary part is probably due, as others have said, to rounding.
Note that you'll have to use exact arithmetic to always cancel correctly if you are using a solution like Cardano's. This is one of the reasons why programs like MAPLE or Mathematica exist; there is often a disconnect between the formula and the implementation.
To get only the real portion of a number in pure Python, access its .real attribute. Example:
a = 3.0+4.0j
print a.real
>> 3.0
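If "numerically" is all you need, here is a sketch using numpy's polynomial root finder (numpy.roots, which takes coefficients in descending-degree order) on the cubic from the question:

```python
import numpy as np

# Coefficients of 2x^3 + 8x^2 - 8x - 32, i.e. (2x - 4)(x + 4)(x + 2)
roots = np.roots([2, 8, -8, -32])

# Drop the tiny imaginary parts left over from floating-point rounding
real_roots = sorted(r.real for r in roots)
print(real_roots)  # approximately [-4.0, -2.0, 2.0]
```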
Hooked's answer is the way to go if you want to do this numerically. You can also do it symbolically using sympy:
>>> from sympy import roots
>>> roots('2*x**3 + 8*x**2 - 8*x - 32')
{2: 1, -4: 1, -2: 1}
This gives you the roots and their multiplicity.
You are using integer values, which are not automatically converted to floats by Python.
The more generic solution is to write the coefficients in the function as floats: 18.0 instead of 18, etc. That will do the trick.
An illustration - from the code:
>>> 2**(1/3)
1
>>> 2**(1/3.)
1.2599210498948732
>>>
