Why is sin(180) not zero when using Python and NumPy?

Does anyone know why the below doesn't equal 0?
import numpy as np
np.sin(np.radians(180))
or:
np.sin(np.pi)
When I enter it into python it gives me 1.22e-16.

The number π cannot be represented exactly as a floating-point number. So, np.radians(180) doesn't give you π, it gives you 3.1415926535897931.
And sin(3.1415926535897931) is in fact something like 1.22e-16.
So, how do you deal with this?
You have to work out, or at least guess at, appropriate absolute and/or relative error bounds, and then instead of x == y, you write:
abs(y - x) < abs_bounds and abs(y - x) < rel_bounds * y
(This also means that you have to organize your computation so that the relative error is larger relative to y than to x. In your case, because y is the constant 0, that's trivial—just do it backward.)
Numpy provides a function that does this for you across a whole array, allclose:
np.allclose(x, y, rel_bounds, abs_bounds)
(This actually checks abs(y - x) < abs_bounds + rel_bounds * y, but that's almost always sufficient, and you can easily reorganize your code when it's not.)
In your case:
np.allclose(0, np.sin(np.radians(180)), rel_bounds, abs_bounds)
So, how do you know what the right bounds are? There's no way to teach you enough error analysis in an SO answer. Propagation of uncertainty at Wikipedia gives a high-level overview. If you really have no clue, you can use the defaults, which are 1e-5 relative and 1e-8 absolute.
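For instance, with the default bounds (a quick check of my own, not part of the original answer):
import numpy as np

# defaults: rtol=1e-5, atol=1e-8
print(np.allclose(0, np.sin(np.radians(180))))  # True
print(np.isclose(0, np.sin(np.pi)))             # True, scalar variant of the same check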

One solution is to switch to sympy when calculating sin's and cos's, then to switch back to numpy using sp.N(...) function:
>>> # Numpy: not exactly zero
>>> import numpy as np
>>> np.cos(np.pi/2)
6.123233995736766e-17
>>> # Sympy workaround
>>> import sympy as sp
>>> def scos(x): return sp.N(sp.cos(x))
>>> def ssin(x): return sp.N(sp.sin(x))
>>> scos(sp.pi/2)
0
Just remember to use sp.pi instead of np.pi when calling the scos and ssin functions.
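One more caveat (my addition, not the answerer's): sp.N returns a SymPy Float, so convert explicitly if the value has to go back into NumPy:
import numpy as np
import sympy as sp

value = float(sp.N(sp.cos(sp.pi/2)))  # symbolic cos(pi/2) is exactly 0
print(np.array([value]))              # [0.]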

I faced the same problem:
import math
import numpy as np
print(np.cos(math.radians(90)))
>> 6.123233995736766e-17
and tried this:
print(np.around(np.cos(math.radians(90)), decimals=5))
>> 0.0
This worked in my case. I set decimals=5 so as not to lose too much information; the rounding simply discards everything after the fifth decimal place.

Try this... it zeroes anything below a given tininess threshold:
import numpy as np

def zero_tiny(x, threshold):
    if x.dtype == complex:
        x_real = x.real
        x_imag = x.imag
        if np.abs(x_real) < threshold: x_real = 0
        if np.abs(x_imag) < threshold: x_imag = 0
        return x_real + 1j*x_imag
    else:
        return x if np.abs(x) > threshold else 0
value = np.cos(np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
value = np.exp(-1j*np.pi/2)
print(value)
value = zero_tiny(value, 10e-10)
print(value)
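If you need this over whole arrays rather than scalars, a vectorized variant along the same lines (my sketch, not the answerer's code):
import numpy as np

def zero_tiny_array(x, threshold):
    # zero out any real/imag parts whose magnitude is below the threshold
    x = np.asarray(x)
    if np.iscomplexobj(x):
        re = np.where(np.abs(x.real) < threshold, 0.0, x.real)
        im = np.where(np.abs(x.imag) < threshold, 0.0, x.imag)
        return re + 1j*im
    return np.where(np.abs(x) < threshold, 0.0, x)

print(zero_tiny_array(np.sin([0., np.pi/2, np.pi]), 1e-9))  # [0. 1. 0.]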

Python evaluates its trig functions with series approximations (think of the Taylor expansion), and since such an expansion has infinitely many terms, the result can only ever approximate the exact value.
For example:
sin(x) = x - x³/3! + x⁵/5! - ...
so sin(π) approaches 0 but never reaches it exactly.
That is my own reasoning for it.

Simple.
np.sin(np.pi).astype(int)
np.sin(np.pi/2).astype(int)
np.sin(3 * np.pi / 2).astype(int)
np.sin(2 * np.pi).astype(int)
returns
0
1
-1
0
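One caveat worth adding (mine, not the answerer's): .astype(int) truncates toward zero, so this only works because these particular results land exactly on ±1.0 or very close to 0. np.rint, which rounds to the nearest integer, is generally safer:
import numpy as np

print(np.float64(0.999999).astype(int))  # 0, truncation can surprise you
print(np.rint(np.sin(np.pi)))            # 0.0, rounded to the nearest integer (as a float)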

Related

Minimize function with scipy

I'm trying to do one task, but I just can't figure it out.
This is my function:
1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
I want that sum to be as close to 1 as possible.
And these are my input variables (x,y,z):
test = np.array([1.42, 5.29, 7.75])
So n is the only decision variable.
To summarize:
I have a situation like this right now:
1/(1.42**(1/1)) + 1/(5.29**(1/1)) + 1/(7.75**(1/1)) = 1.02229
And I want to get the following:
1/(1.42^(1/0.972782944446024)) + 1/(5.29^(1/0.972782944446024)) + 1/(7.75^(1/0.972782944446024)) = 0.999625
So far I have roughly nothing, and any help is welcome.
import numpy as np
from scipy.optimize import minimize
def objectiv(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))

test = np.array([1.42, 5.29, 7.75])
print(objectiv(test))
OUTPUT: 1.0222935270013889
How to properly define a constraint?
def conconstraint(xyz):
    x = xyz[0]
    y = xyz[1]
    z = xyz[2]
    n = 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
And it is not at all clear to me what to do with n, or how.
EDIT
I managed to do the following:
def objective(n, *args):
    x = odds[0]
    y = odds[1]
    z = odds[2]
    return abs((1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n))) - 1)

odds = [1.42, 5.29, 7.75]
solve = minimize(objective, 1.0, args=(odds))
And my output:
fun: -0.9999999931706812
x: array([0.01864994])
And really when put in the formula:
(1/(1.42^(1/0.01864994)) + 1/(5.29^(1/0.01864994)) + 1/(7.75^(1/0.01864994))) -1 = -0.999999993171
Unfortunately I need a positive 1 and I have no idea what to change.
We want to find the n that gets our result, for fixed x, y and z, as close as possible to 1. minimize tries to get the lowest possible value for something, with no lower bound; -3 is better than -2, and so on.
So what we actually want is called least-squares optimization. Similar idea, though. This documentation is a bit hard to understand, so I'll try to clarify:
All these optimization functions have a common design where you pass in a callable that takes at least one parameter, the one you want to optimize for (in your case, n). Then you can have it take more parameters, whose values will be fixed according to what you pass in.
In your case, you want to be able to solve the optimization problem for different values of x, y and z. So you make your callback accept n, x, y, and z, and pass the x, y, and z values to use when you call scipy.optimize.least_squares. You pass these using the args keyword argument (notice that it is not *args). We can also supply an initial guess of 1 for the n value, which the algorithm will refine.
The rest is customization that is not relevant for our purposes.
So, first let us make the callback:
def objective(n, x, y, z):
    # return a residual: least_squares drives this toward zero,
    # which pushes the sum of reciprocals toward 1
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1
Now our call looks like:
best_n = least_squares(objective, 1.0, args=np.array([1.42, 5.29, 7.75]))
(You can call minimize the same way, and it will instead look for an n value that makes the objective return as low a value as possible. If I am thinking clearly: the guess for n should trend towards zero, making the exponents 1/n increase without bound, making each reciprocal term go towards zero, so the residual approaches -1 but cannot go below it. However, it will stop when it gets close, according to the default values for ftol, xtol and gtol. Understanding this part properly is beyond the scope of this answer; please try math.stackexchange.com.)
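Putting it all together, a minimal runnable sketch (under the residual reading above):
from scipy.optimize import least_squares

def objective(n, x, y, z):
    # residual that least_squares drives toward zero
    return 1/(x**(1/n)) + 1/(y**(1/n)) + 1/(z**(1/n)) - 1

result = least_squares(objective, 1.0, args=(1.42, 5.29, 7.75))
print(result.x)  # best n; the sum of reciprocals is then ~1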

How do I evaluate this equation in z3 for python

I'm trying to evaluate a simple absolute value inequality like this using z3.
x = Int("x")
y = Int("y")
def abs(x):
return If(x >= 0,x,-x)
solve(abs( x / 1000 - y / 1000 ) < .01, y==1000)
The output is no solution every time. I know this is mathematically possible, I just can't figure out how z3 does stuff like this.
This is a common gotcha in z3py bindings. Constants are "promoted" to fit into the right type, following the usual Python methodology. But more often than not, it ends up doing the wrong conversion, and you end up with a very confusing situation.
Since your variables x and y are Int values, the comparison against .01 forces that constant to be 0 to fit the types, and that's definitely not what you wanted to say. The general advice is simply not to mix-and-match arithmetic like this: Cast this as a problem over real-values, not integers. (In general SMTLib doesn't allow mixing-and-matching types in numbers, though z3py does. I think that's misguided, but that's a different discussion.)
To address your issue, the simplest thing to do would be to wrap 0.01 into a real-constant, making the z3py bindings interpret it correctly. So, you'll have:
from z3 import *

x = Int("x")
y = Int("y")

def abs(x):
    return If(x >= 0, x, -x)

solve(abs(x / 1000 - y / 1000) < RealVal(.01), y == 1000)
Note the use of RealVal. This returns:
[x = 1000, y = 1000]
I guess this is what you are after.
But I'd recommend against conversions like this in general. Instead, be very explicit yourself, and cast this as a problem over, for instance, Real values. Note that your division / 1000 is also interpreted in this equation as an integer division, i.e., one that produces an integer result, so I'm guessing this isn't really what you want either. But I hope this gets you started on the right path.
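As a sketch of that more explicit formulation (my guess at the intended problem, with everything declared over Real):
from z3 import Real, If, solve

x = Real("x")
y = Real("y")

def zabs(e):
    # absolute value as a z3 expression (named to avoid shadowing Python's abs)
    return If(e >= 0, e, -e)

# real-valued division here, so x/1000 is no longer truncated
solve(zabs(x / 1000 - y / 1000) < 0.01, y == 1000)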
Int('a') < 0.01 is turned (rightly or wrongly) into Int('a') < 0 and clearly the absolute value can never be smaller than 0.
I believe you want Int('a') <= 0 here.
Examples:
>>> solve(Int('a') < 0.01, Int('a') > -1)
no solution
>>> solve(Int('a') <= 0.01, Int('a') > -1)
[a = 0]
>>> Int('a') < 0.01
a < 0

Different implementations of Newton's method in floating point arithmetic

I'm solving a one-dimensional non-linear equation with Newton's method. I'm trying to figure out why one of the implementations of Newton's method is converging exactly within floating point precision, whereas another is not.
The following algorithm does not converge:
x[n+1] = x[n] - f(x[n])/f'(x[n])
whereas the following does converge:
x[n+1] = (x[n]*f'(x[n]) - f(x[n]))/f'(x[n])
(The two updates are algebraically identical; they differ only in how they are evaluated in floating point, as the code below shows.)
You may assume that the functions f and f' are smooth and well behaved. The best explanation I was able to come up with is that this is somehow related to what's called iterative improvement (Golub and Van Loan, 1989). Any further insight would be greatly appreciated!
Here is a simple python example illustrating the issue
# Python
def f(x):
    return x*x - 2.

def fp(x):
    return 2.*x

xprev = 0.

# converges
x = 1.  # guess
while x != xprev:
    xprev = x
    x = (x*fp(x) - f(x))/fp(x)
    print(x)

# does not converge (it ends up alternating between two values)
x = 1.  # guess; xprev still holds the previous root, so the loop is entered
while x != xprev:
    xprev = x
    dx = -f(x)/fp(x)
    x = x + dx
    print(x)
Note: I'm aware of how floating point numbers work (please don't post your favourite link to a website telling me to never compare two floating point numbers). Also, I'm not looking for a solution to a problem but for an explanation as to why one of the algorithms converges but not the other.
Update:
As @uhoh pointed out, there are many cases where the second method does not converge. However, I still don't know why the second method converges so much more easily in my real-world scenario than the first. All the test cases have very simple functions f, whereas the real-world f has several hundred lines of code (which is why I don't want to post it). So maybe the complexity of f is important. If you have any additional insight into this, let me know!
Neither method is perfect:
One situation in which both methods will tend to fail is if the root is about exactly midway between two consecutive floating-point numbers f1 and f2. Then both methods, having arrived to f1, will try to compute that intermediate value and have a good chance of turning up f2, and vice versa.
           / f(x)
          /
         /
        /
   f1  /
 ---+--/--+------> x
      /   f2
     /
    /
   /
"I'm aware of how floating point numbers work...". Perhaps the workings of floating-point arithmetic are more complicated than imagined.
This is a classic example of cycling of iterates using Newton's method. The comparison of a difference to an epsilon is "mathematical thinking" and can burn you when using floating-point. In your example, you visit several floating-point values for x, and then you are trapped in a cycle between two numbers. The "floating-point thinking" is better formulated as the following (sorry, my preferred language is C++)
std::set<double> visited;
xprev = 0.0;
x = 1.0;
while (x != xprev)
{
    xprev = x;
    dx = -F(x)/DF(x);
    x = x + dx;
    if (visited.find(x) != visited.end())
    {
        break; // found a cycle
    }
    visited.insert(x);
}
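For reference, the same cycle-detection idea as a minimal Python sketch (mine, reusing the f and fp from the question):
def newton_with_cycle_check(f, fp, x0):
    # Newton iteration that also stops if an iterate repeats (a cycle)
    visited = set()
    xprev, x = None, x0
    while x != xprev:
        if x in visited:
            break  # found a cycle longer than a fixed point
        visited.add(x)
        xprev = x
        x = x - f(x)/fp(x)
    return x

print(newton_with_cycle_check(lambda x: x*x - 2., lambda x: 2.*x, 1.))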
I'm trying to figure out why one of the implementations of Newton's method is converging exactly within floating point precision, whereas another is not.
Technically, it doesn't converge to the correct value. Try printing more digits, or using float.hex.
The first one gives
>>> print "%.16f" % x
1.4142135623730949
>>> float.hex(x)
'0x1.6a09e667f3bccp+0'
whereas the correctly rounded value is the next floating point value:
>>> print "%.16f" % math.sqrt(2)
1.4142135623730951
>>> float.hex(math.sqrt(2))
'0x1.6a09e667f3bcdp+0'
The second algorithm is actually alternating between the two values, so doesn't converge.
The problem is due to catastrophic cancellation in f(x): as x*x will be very close to 2, when you subtract 2, the result will be dominated by the rounding error incurred in computing x*x.
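You can see that cancellation directly (a quick sketch; the digits assume standard IEEE-754 doubles):
import math

x = math.sqrt(2)   # 1.4142135623730951, the correctly rounded root
print(x*x - 2.0)   # about 4.44e-16: pure rounding error from computing x*x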
I think trying to force an exact equal (instead of err < small) is always going to fail frequently. In your example, for 100,000 random numbers between 1 and 10 (instead of your 2.0) the first method fails about 1/3 of the time, the second method about 1/6 of the time. I'll bet there's a way to predict that!
This takes ~30 seconds to run, and the results are cute!:
import numpy as np
import matplotlib.pyplot as plt

def f(x, a):
    return x*x - a

def fp(x):
    return 2.*x

def A(a):
    xprev = 0.
    x = 1.
    n = 0
    while x != xprev:
        xprev = x
        x = (x * fp(x) - f(x, a)) / fp(x)
        n += 1
        if n > 100:
            return n, x
    return n, x

def B(a):
    xprev = 0.
    x = 1.
    n = 0
    while x != xprev:
        xprev = x
        dx = -f(x, a) / fp(x)
        x = x + dx
        n += 1
        if n > 100:
            return n, x
    return n, x

n = 100000
aa = 1. + 9. * np.random.random(n)
data_A = np.zeros((2, n))
data_B = np.zeros((2, n))
for i, a in enumerate(aa):
    data_A[:, i] = A(a)
    data_B[:, i] = B(a)

bins = np.linspace(0, 110, 12)
hist_A = np.histogram(data_A, bins=bins)
hist_B = np.histogram(data_B, bins=bins)
print("A: n<10:", hist_A[0][0], " n>=100:", hist_A[0][-1])
print("B: n<10:", hist_B[0][0], " n>=100:", hist_B[0][-1])

plt.figure()
plt.subplot(1, 2, 1)
plt.scatter(aa, data_A[0])
plt.subplot(1, 2, 2)
plt.scatter(aa, data_B[0])
plt.show()

How can I use sympy to find the error in approximation of a definite integral?

Our assignment is to use sympy to evaluate the exact definite integral of a function and then compare it with the approximation of the definite integral obtained from another python function we wrote. With simple polynomial functions my code works fine, but with a complicated sine function it keeps either breaking or returning nan.
from numpy import *

def simpson(f, a, b, n):
    if n <= 0 or n % 2 != 0:
        print('Error: the number of subintervals must be a positive even number')
        return float('NaN')
    h = float(b - a) / float(n)
    x = arange(a, b + h, h)
    fx = f(x)
    fx[1:n:2] *= 4.0
    fx[2:n:2] *= 2.0
    return (h/3.)*sum(fx)
This is in one file (simpandtrap); it gives the approximation for the definite integral of f from a to b, using a Simpson's rule approximation with n subintervals.
from pylab import *

def s(x):
    return x*sin(3./(x + (x==0)))  # the (x==0) term bumps x=0 up to 1 to avoid division by zero
This is the function giving me trouble, in a file called assignment8functions
import assignment8functions as a
import SimpAndTrap as st
import sympy as sp
x = sp.symbols('x')
Exact_int_q = sp.integrate(a.q(x),(x,0,2)).evalf(25)
Exact_int_s = sp.integrate(x*sp.sin(3./(x)),(x,0,2)).evalf(25)
q(x) is another function we're supposed to use, and everything works fine for it - it's just a polynomial. When I try to do the integration of s the same way, it breaks, so I had to put the expression for s(x) directly into the call instead of importing the one from the other file.
n = a.array([10, 100, 1000, 10000, 10000, 1000000])
s_error_simp_array = a.zeros(6)
for i in a.arange(6):
    s_error_simp_array[i] = abs(Exact_int_s - st.simpson(a.s, 0, 2, n[i]))
Here I try to find the error in the approximation. The problem is, first of all, that Exact_int_s is apparently -4.5*Si(zoo) + 8.16827746848576, and I have no idea what that's supposed to mean; and also that the simpson function always returns nan.
I know it's a lot of words and code, but does anybody know what's wrong?
To avoid the answer -4.5*Si(zoo) + 8.168..., just start the integration at a small positive number, e.g.:
x = sp.Symbol('x')
print(sp.integrate(x * sp.sin(3./x), (x, 0.000001, 2)))
and you'll get an answer like 1.0996940...
You can justify this because |s(x)| <= x for small x, so the interval [0, epsilon] can't contribute that much.
Btw - your simpson implementation seems to check out.
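For what it's worth, a quick sanity check of that claim (my sketch; it uses np.linspace instead of arange so the float step can't produce a stray extra point):
import numpy as np

def simpson(f, a, b, n):
    h = float(b - a) / n
    x = np.linspace(a, b, n + 1)  # exactly n+1 sample points
    fx = f(x)
    fx[1:n:2] *= 4.0
    fx[2:n:2] *= 2.0
    return (h/3.)*np.sum(fx)

# Simpson's rule is exact for cubics: the integral of x^3 on [0, 1] is 0.25
print(simpson(lambda x: x**3, 0., 1., 10))  # 0.25 up to rounding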

for-loops in Python modules

I'm writing a function for an implicit scheme for solving a specific differential equation. The function looks like this:
import numpy as np

def scheme(N, T):
    y = np.zeros(N+1)  # Array for implicit scheme
    h = T/N            # Step length
    for i in range(N):
        y[i+1] = y[i] + h*(1 + 4*y[i])
    print(y)
I save the file and later import it the usual way, but when I run the scheme function, y = [0 ... 0] where ... are N-1 zeros. It seems like the values are lost in the scope of the for-loop.
If I instead write the whole function in the interpreter (which in my case is Spyder), everything works as it should.
Why doesn't it work when importing the function from the module?
h = T/N
is it possible that T and N are both integers and T < N? In that case h = 0 (and y stays all zeros), because it is an integer division (1/2 == 0).
Try to replace this line with
h = 1. * T / N
and see the results.
y[i+1] = y[i] + h*(1 + 4*y[i])
can be rewritten as
y[i+1] = y[i] + h + 4*h*y[i]
which means that for y[i] = 0, the new y[i+1] will just be the bare h term. If the integer division T/N makes h zero, then that is what you get.
Normally, if you divide two integers in Python (2.x), the result is also an integer, rounded towards minus infinity.
So
1/3 == 0
In your example, if T and N are integers and T < N, h will be 0.
If h is 0, then all elements of y will also be 0.
This can be fixed by casting one operand to float, i.e.
float(1)/3  # 0.3333...
In your case:
h = float(T)/N
I'm not familiar with Spyder, but a quick look at the documentation shows that it is aimed at scientists. Maybe that interpreter enables true (float) division by default.
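For completeness, the fix applied to the original function (my sketch; on Python 2 the explicit cast is what matters, while on Python 3 the / operator is already true division):
import numpy as np

def scheme(N, T):
    y = np.zeros(N+1)   # array for the implicit scheme
    h = float(T)/N      # force float division so h isn't truncated to 0
    for i in range(N):
        y[i+1] = y[i] + h*(1 + 4*y[i])
    return y

print(scheme(10, 1))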
