def nu(r):
    '''Returns the stellar density function.'''
    return 1 / (r * (1 + (r / a))**3)
mass_int = lambda r: 4 * r**2 * nu(r)
print(mass_int(0))
This gives me a divide by zero error, presumably because of the 1/r term being evaluated in isolation. Is using sympy to form the correct algebraic expression the only way around this? Seems absurd.
It's not doing anything wrong. Given r = 0:
1 / ( r * (1 + (r / a))**3)
= 1 / ( 0 * (1 + (0 / a))**3)
= 1 / ( 0 * (1 + 0 )**3)
= 1 / ( 0 * 1 **3)
= 1 / ( 0 * 1 )
= 1 / 0
So when you ask for nu(0) you get an error. Python doesn't look ahead and do symbolic algebra; it just evaluates the expression and raises the error. Python is not magic, and it's not absurd that you'd need sympy for this.
I suggest you just add a special case to mass_int for r == 0.
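That special case might look like the sketch below. Note that a (the scale length) is undefined in the question, so a placeholder default is used here:

```python
def mass_int(r, a=1.0):
    # the integrand 4*r**2 * nu(r) has a removable singularity at r = 0:
    # the limit there is 0, so return it explicitly instead of dividing by zero
    if r == 0:
        return 0.0
    return 4 * r**2 / (r * (1 + r / a)**3)
```

With this, mass_int(0) returns 0.0 instead of raising ZeroDivisionError, and every other value is unchanged.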
This isn't a python question, or even a computer programming question. It's simple math.
f(x) = 1 / (x * (1 + x/a)**3)
g(x) = 4x**2
h(x) = 4x / (1 + x/a)**3
Is there a difference between g(r) * f(r) and h(r)? Of course there is. Even though the plots look exactly the same, g(0) * f(0) is undefined, and on a plot it shows as a point discontinuity.
As an intelligent user of these functions, you can recognize that the point discontinuity is removable. If you choose, you can replace g(r) * f(r) with h(r). The only difference will be the desired one of having the function defined at 0.
Math doesn't do that on its own, and no programming language will reduce the product of functions for you on its own, unless you ask it to, because you are the user, and you're supposed to know what you want from a program. If you write a product of functions that has a point discontinuity, it's expected that you did so for a reason. The intelligence is expected to be in the programmer, not in the compiler. Either reduce the expression by yourself, or have something like sympy do it. Either way, it's up to you to make it explicit to the computer that these two functions should be related to each other.
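Doing the reduction by hand gives something like this sketch (again, a is the question's undefined scale parameter, given a placeholder default):

```python
def mass_int_reduced(r, a=1.0):
    # hand-reduced form h(r) = 4*r / (1 + r/a)**3;
    # the 1/r factor from nu has been cancelled algebraically,
    # so the function is now defined at r = 0
    return 4 * r / (1 + r / a)**3
```

For every r != 0 this agrees with the original product; at r = 0 it returns 0, which is exactly the limit of the original expression.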
I am having some trouble writing this equation in Python and having it produce correct results:
y = 2.95710^-7 * x^4 – 2.34310^-5 * x^3 – 1.67*10^-4 * x^2 + 0.04938 * x – 1.083.
I have tried the following code:
y = 0.0000002957*x**4 - 0.00002343*x**3 - 0.000167*x**2 + 0.04938*x - 1.083
and also:
y = (2.957*10**-7)*x**4 - (2.343*10**-5)x**3 - (1.67*10**4)*x**2 + 0.04938*x - 1.083
Any advice would be helpful. I think the problem might be the scientific notation or the exponents and the way I am inputting them.
EDIT
In response to the questions: the equation spits out a different number than what I get on a calculator.
What you have can easily be converted to Python in a very direct way. The equivalent Python statement (after fixing a flaw in the original equation...see below) is :
y = 2.95710e-7 * x**4 - 2.34310e-5 * x**3 - 1.67e-4 * x**2 + 0.04938 * x - 1.083
This involves only the simple substitution of the ^ characters, where you replace that character with e in the exponential floating point constants and with ** for raising x to an integer power.
For reference to be able to more easily compare the source with the result, here's the original equation with the one slight fix mentioned below:
y = 2.95710^-7 * x^4 – 2.34310^-5 * x^3 – 1.67^-4 * x^2 + 0.04938 * x – 1.083
UPDATE: Thanks to Pranav for pointing out a flaw in the source equation. The term 1.67*10^-4 should be changed to 1.67^-4 to match the other similar terms, and the equivalent fix made to the resulting Python equation. Those fixes were made and commented on above.
I always like to see what I'm working with in situations like this. I used matplotlib to plot this function, using an unchanged version (except for the function and the bounds on x) of this sample code. Here's what I got for x values between -500 and 500 (plot not shown).
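For a quick numerical sanity check, the converted statement can be wrapped in a function (x = 0 and x = 10 are arbitrary test points, not from the question):

```python
def y(x):
    # direct translation of the (fixed) equation: ^ becomes e inside the
    # float literals, and ** is used for the integer powers of x
    return (2.95710e-7 * x**4 - 2.34310e-5 * x**3
            - 1.67e-4 * x**2 + 0.04938 * x - 1.083)
```

At x = 0 every term but the constant vanishes, so y(0) is exactly -1.083, which is an easy value to compare against a calculator.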
Thanks for answering everybody. The most helpful answer was using "e" instead of scientific notation (*10^x).
I have a question about a user-defined sigmoid (logistic) function. I tried using both numpy.exp and math.exp for the sigmoid formula 1 / (1 + e^-x).
1 / (1 + numpy.exp(-x))
1 / (1 + math.exp(-x))
Both methods give the value 6.900207837141513e-36 for x = -80.96151181531121.
But, we are expecting a value between 0 & 1.
Am I missing anything?
6.900207837141513e-36 is scientific notation for 0.00...06900207837141513 (I omitted 32 zeroes; there are 35 zeros in total between the decimal point and the 6).
You can read it as "6.900... divided by 10^36". In this case you probably can treat value as 0.
See https://en.wikipedia.org/wiki/Scientific_notation#E_notation
>>> 6.900207837141513e-36 > 0
True
Note the e-36 on the end of 6.900207837141513e-36. This is equivalent to the scientific notation 6.900207837141513 * 10^-36. In other words, it is a very small value that is basically zero.
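To confirm, here's a small sketch using math.exp with the same x value as in the question:

```python
import math

def sigmoid(x):
    # logistic function 1 / (1 + e**-x)
    return 1 / (1 + math.exp(-x))

value = sigmoid(-80.96151181531121)
# value is strictly positive and strictly below 1 -- just extremely close to 0
```

So the function is behaving correctly; the result is between 0 and 1, merely displayed in E notation because it is so small.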
I have a simple factorization problem in sympy that I cannot sort out. I've had great success with sympy working with quite complex integrals, but I'm flummoxed by something simple.
How do I get
phi**2 - 2*phi*phi_0 + phi_0**2 - 8
to factor to
(phi - phi_0)**2 - 8
?
I've already tried the factor function
factor(phi**2 - 2*phi*phi_0 + phi_0**2 - 8,phi-phi_0)
which yields the same old solution.
As I noted in the comment, such "partial factorizations" are not unique (for instance, x**2 + 5*x + 7 equals (x + 2)*(x + 3) + 1 and (x + 1)*(x + 4) + 3, and once you understand what's going on it's not hard to come up with examples of your own).
There are ways to do such things manually, but it's hard to know what to tell you because I don't know what generality you are looking for. For instance, probably the easiest way to do this particular example is
>>> print(A.subs(phi, x + phi_0).factor().subs(x, phi - phi_0))
(phi - phi_0)**2 - 8
That is, let x = phi - phi_0 (SymPy isn't smart enough to replace phi - phi_0 with x, but it is smart enough to replace phi with x + phi_0, which is the same thing). This doesn't generalize as nicely if you want to factor in terms of a larger polynomial, or if you don't know what you are aiming for. But given the names of your variables, I'm guessing phi - phi_0 is all you care about.
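Putting that together, a self-contained sketch of the substitute-factor-substitute-back trick (A is the expression from the question):

```python
import sympy as sp

phi, phi_0, x = sp.symbols('phi phi_0 x')
A = phi**2 - 2*phi*phi_0 + phi_0**2 - 8

# substitute phi = x + phi_0 (so x stands for phi - phi_0),
# factor in terms of x, then substitute back
result = A.subs(phi, x + phi_0).factor().subs(x, phi - phi_0)
# result is (phi - phi_0)**2 - 8
```

The intermediate expression after the first subs is x**2 - 8, which factor leaves alone (it's irreducible over the rationals), so the back-substitution yields the desired form.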
Beyond this, I should point out that you can more or less do any kind of simplification you want by writing your own algorithms that dig into the expressions. Take a look at http://docs.sympy.org/latest/tutorial/manipulation.html to get started. Also take a look at the methods of Expr; there are quite a few useful helper functions if you end up writing such a thing, such as the various as_* methods.
I'm trying to code this expression in python but I'm having some difficulty.
This is the code I have so far and wanted some advice.
x = 1x2 vector
mu = 1x2 vector
Sigma = 2x2 matrix
xT = (x-mu).transpose()
sig = Sigma**(-1)
dotP = dot(xT ,sig )
dotdot = dot(dotP, (x-mu))
E = exp( (-1/2) dotdot )
Am I on the right track? Any suggestions?
Sigma ** (-1) isn't what you want. That would raise each element of Sigma to the -1 power, i.e. 1 / Sigma, whereas in the mathematical expression it means the inverse, which is written in Python as np.linalg.inv(Sigma).
(-1/2) dotdot is a syntax error; in Python, you need to always include * for multiplication, or just do - dotdot / 2. Since you're probably using python 2, division is a little wonky; unless you've done from __future__ import division (highly recommended), 1/2 will actually be 0, because it's integer division. You can use .5 to get around that, though like I said I do highly recommend doing the division import.
This is pretty trivial, but you're doing the x-mu subtraction twice where it's only necessary to do once. Could save a little speed if your vectors are big by doing it only once. (Of course, here you're doing it in two dimensions, so this doesn't matter at all.)
Rather than calling the_array.transpose() (which is fine), it's often nicer to use the_array.T, which is the same thing.
I also wouldn't use the name xT; it implies to me that it's the transpose of x, which is false.
I would probably combine it like this:
# near the top of the file
# you probably did some kind of `from somewhere import *`.
# most people like to only import specific names and/or do imports like this,
# to make it clear where your functions are coming from.
import numpy as np
centered = x - mu
prec = np.linalg.inv(Sigma)
E = np.exp(-.5 * np.dot(centered.T, np.dot(prec, centered)))
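A quick usage sketch with hypothetical values (the question doesn't give concrete x, mu, or Sigma, so these are made up for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0])        # hypothetical data point
mu = np.array([0.5, 1.5])       # hypothetical mean
Sigma = np.array([[2.0, 0.3],   # hypothetical positive-definite covariance
                  [0.3, 1.0]])

centered = x - mu
prec = np.linalg.inv(Sigma)
E = np.exp(-.5 * np.dot(centered.T, np.dot(prec, centered)))
# for any positive-definite Sigma the quadratic form is >= 0,
# so E is a scalar in (0, 1]
```

This matches the combined code above; the only addition is concrete inputs so you can run it end to end.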
I'm using scipy's optimize.fsolve function for the first time to find the roots to an equation. The problem is that whatever number I use as the guess/estimate value is what I get back as my answer (to within about 8 decimal places). When using full_output=True, I get the exitflag to be '1', which is supposed to mean that 'The solution converged', which to the best of my understanding should mean that the output is indeed a root of the equation.
I know there are a finite number of distinct roots (they are spaced out), since I can see them when I graph the equation. Also, fsolve fails (gives error exit flags) when the starting point is in a range that produces undefined values (divide by zero, square root of a negative number). But apart from that it always returns the starting point as the root.
I tested fsolve with a very simple equation and it worked fine, so I know that I'm importing everything I need and should be using fsolve correctly. I also tried messing around with some of the input arguments, but I don't understand them very well and nothing seemed to change.
Below is the relevant code (E is the only variable, everything else has a non-zero value):
def func(E):
    s = sqrt(c_sqr * (1 - E / V_0))
    f = s / tan(s) + sqrt(c_sqr - s**2)
    return f
guess = 3
fsolve(func, guess)
which just outputs '3' and says 'The solution converged.', even though the closest solutions should be at about 2.8 and 4.7.
Does anyone have any idea how to fix this and get a correct answer (using fsolve)?
I think your equation doesn't do what you think it does. For one thing, when I try it, it doesn't return the guess; it returns a number close to the guess. It's very unstable and that seems to be confusing fsolve. For example:
>>> V_0 = 100
>>> c_sqr = 3e8 ** 2
>>> guess = 5
>>> fsolve(func, guess)
array([ 5.00000079])
This is not 5. It is not even 5 within machine precision. It is also not a root of the equation:
>>> func(5.00000079)
2114979.3239706755
But the behavior of the equation is pretty unpredictable anyway:
>>> func(5.0000008)
6821403.0196130127
>>> func(5.0000006)
-96874198.203683496
So obviously there's a zero crossing somewhere around there. I'd say take a good look at your equation. Make sure you are specifying tan's argument in radians, for instance.
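When fsolve keeps "converging" near non-roots like this, one robust alternative (not part of the answer above, just a sketch with a smooth stand-in function, since the question's constants aren't given) is to scan a grid for sign changes and refine each bracket with a bracketing solver such as scipy.optimize.brentq, which cannot return a point that isn't a sign change:

```python
import numpy as np
from scipy.optimize import brentq

def f(x):
    # stand-in smooth function with well-separated roots
    # at pi/2, 3*pi/2, and 5*pi/2 on (0, 10)
    return np.cos(x)

# scan a coarse grid for sign changes, then refine each bracket with brentq
xs = np.linspace(0.0, 10.0, 1001)
roots = [brentq(f, lo, hi)
         for lo, hi in zip(xs[:-1], xs[1:])
         if f(lo) * f(hi) < 0]
```

Unlike a bare fsolve call, this finds every root in the scanned interval, and a spurious "converged" answer can't slip through because each result is pinned inside a bracket where the function changes sign.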
Did you try changing your function to something really trivial? Like this:
#!/usr/bin/python
from scipy.optimize import fsolve

def func(E):
    # s = sqrt(c_sqr * (1 - E / V_0))
    # f = s / tan(s) + sqrt(c_sqr - s**2)
    f = E**2 - 3.
    return f

guess = 9
sol = fsolve(func, guess)
print sol, func(sol)
For me the code above does converge to where it should.
Also, in the code you've provided, what are c_sqr and V_0? If your function in fact depends on more than one variable, and you're treating all but one of them as constant parameters, then use the args argument of fsolve, like this:
#!/usr/bin/python
from scipy.optimize import fsolve
from numpy import sqrt

def func(E, V_0):
    # s = sqrt(c_sqr * (1 - E / V_0))
    # f = s / tan(s) + sqrt(c_sqr - s**2)
    f = E**2 - V_0
    return f

VV = 4.
guess = 9
sol = fsolve(func, guess, args=(VV,))
print sol, func(sol, VV)
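As a sanity check of the args mechanism (same toy equation as above, Python 3 syntax):

```python
from scipy.optimize import fsolve

def func(E, V_0):
    # toy equation with roots at +sqrt(V_0) and -sqrt(V_0)
    return E**2 - V_0

VV = 4.0
sol = fsolve(func, 9.0, args=(VV,))
# from a positive guess, fsolve lands on the positive root, 2.0
```

Note the trailing comma in args=(VV,): args should be a tuple of the extra parameters, passed through to func after the unknown.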