Argument of a complex number in SymPy - Python

When I am working with symbolic variables in SymPy, I have to compute the argument of a complex number of SymPy type. For this I am using arg(), but when I try to compute the argument of zero I get NaN. Why does this happen? I need it to give me arg(0) = 0. What can I do? Thanks.
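SymPy's arg(0) evaluates to nan, presumably because the argument of zero is mathematically undefined. A minimal sketch of one possible workaround, assuming you simply want the convention arg(0) = 0 (the wrapper name safe_arg is my own, not part of SymPy):

from sympy import symbols, arg, nan, Piecewise, Eq, I

z = symbols('z')
print(arg(0))  # nan

# Numeric workaround: map the nan case to 0
def safe_arg(expr):
    result = arg(expr)
    return 0 if result is nan else result

print(safe_arg(0))      # 0
print(safe_arg(1 + I))  # pi/4

# Symbolic workaround: build the convention arg(0) = 0 into the expression
safe_arg_z = Piecewise((0, Eq(z, 0)), (arg(z), True))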

Related

Using an array as an input for a multivariable function

If I have a multivariable function such as
F = lambda x, y: x**2 + y**2
and I need to use the input x0 = np.array([1, 1]),
may I know how I should use x0 to get the value from F?
I understand that I could use something like F(x0[0], x0[1]),
but I would like to know whether there is a way to pass x0 directly rather than indexing each coordinate manually.
I appreciate your help.
Python lets you do this with F(*x0), which unpacks the array into the function's parameters. In other languages this is sometimes called "splatting".
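A quick demonstration of the unpacking, assuming NumPy is available:

import numpy as np

F = lambda x, y: x**2 + y**2
x0 = np.array([1, 1])

# The * operator unpacks x0 into separate positional arguments,
# so F(*x0) is equivalent to F(x0[0], x0[1])
print(F(*x0))  # 2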

Multiple variables and arguments in SciPy's optimize.fmin

I wish to use SciPy's optimize.fmin function to find the minimum of a function that depends both on the variables I wish to minimize over and on parameters that do not change (are not optimized over).
I am able to do this when optimizing over a single variable here:
from scipy import optimize
c1 = 4
c2 = -1
def f(x, c1, c2):
    return x**2 + c1 + c2
guess_f = 1
minimum = optimize.fmin(f, guess_f, args=(c1, c2), maxfun=400, maxiter=400, ftol=1e-2, xtol=1e-4)
However, I cannot get this to work when I add another variable to minimize over:
def g(x, y, c1, c2):
    return x*y + c1 + c2
guess_g = [1, 1]
minimum2 = optimize.fmin(g, guess_g, args=(c1, c2), maxfun=400, maxiter=400, ftol=1e-2, xtol=1e-4)
I get the following error message:
TypeError: g() missing 1 required positional argument: 'c2'
I did find Multiple variables in SciPy's optimize.minimize, where a solution is presented: the variables to be optimized over need to be grouped together into a single array. I tried something like this below:
def g(params, c1, c2):
    x, y = params
    # print(params)
    return x*y + c1*x + c2
guess_g = [1, 1]
minimum2 = optimize.fmin(g, guess_g, args=(c1, c2), maxfun=4000, maxiter=4000, ftol=1e-2, xtol=1e-4)
I no longer receive a TypeError, but I do get the "Warning: Maximum number of function evaluations has been exceeded." message, along with a RuntimeWarning: overflow encountered in double_scalars. (Additionally, I tried using the optimize.minimize command to do the same thing, but I was unable to get it to work when adding the extra arguments; I do not post that code here as the question is already getting long.)
So this does not seem to be the correct way to do this.
How do I go about optimizing with optimize.fmin function over multiple variables, while also giving my function additional arguments?
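For what it's worth, the grouped-parameter version above is the standard way to pass several optimization variables to optimize.fmin, and the warnings are consistent with the objective itself being the problem: x*y + c1*x + c2 is unbounded below, so the minimizer diverges until it overflows. A minimal sketch of the same pattern with an objective that does have a finite minimum (this bounded function is my own substitution, not from the question):

from scipy import optimize
c1 = 4
c2 = -1

def g(params, c1, c2):
    # Unpack the optimization variables from the single array argument
    x, y = params
    # Bounded objective (substituted for illustration); minimum at x = -c1/2, y = 0
    return x**2 + y**2 + c1*x + c2

guess_g = [1, 1]
minimum2 = optimize.fmin(g, guess_g, args=(c1, c2), maxfun=400, maxiter=400, ftol=1e-2, xtol=1e-4)
print(minimum2)  # approximately [-2., 0.]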

Is there a simple way to explain required argument and optional argument? Python

I am currently learning Python through the edX platform, and I came across two terms that got me confused: required argument and optional argument.
Would anyone be so kind as to explain the difference between the two?
Take the function round() for example.
the quiz stated: "The function round has two arguments. Select the two correct statements about these arguments."
here are the options:
number is a required argument.
number is an optional argument.
ndigits is a required argument.
ndigits is an optional argument.
According to what I have learned, I know that you have to specify an input in order to set ndigits, such as round(1.68, 1), which gives 1.7.
If I just write round(1.68), I will get 2.
Thus, to my understanding, ndigits is an optional argument: you supply it only when you want that extra behaviour; otherwise, the function just does what it is required to do by default, like an autopilot program.
Please give me some feedback if I am wrong, or share some links where I can learn more.
I am not a native English speaker, so the words "required" and "optional" really seemed confusing to me at one point. I hope to learn more from you all.
By the way, I got the right answer. I'll keep learning Python and hope to work in this field, cheers!
The docs for round() define the function as round(number[, ndigits])
The square brackets are common notation to show which arguments are optional.
In the case of round(), if the ndigits parameter is omitted or None, it reverts to a predefined behaviour - in this case, rounding to the nearest integer.
number on the other hand is required; the function cannot be called without this argument and will raise an error if it is missing.
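A quick demonstration of both cases (the calls follow the documented behaviour of round()):

# ndigits omitted: rounds to the nearest integer
print(round(1.68))     # 2

# ndigits supplied: rounds to one decimal place
print(round(1.68, 1))  # 1.7

# number is required: calling round() with no arguments raises an error
try:
    round()
except TypeError as e:
    print(e)  # complains that the number argument is missing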

Setting argument defaults from arguments in python

I'm trying to set a default value for an argument in a function I've defined. I also want another argument's default value to depend on the first argument. In my example, I'm trying to plot the quantum mechanical wavefunction for hydrogen, but you don't need to know the physics to help me.
def plot_psi(n, l, start=(0.001*bohr), stop=(20*bohr), step=(0.005*bohr)):
where n is the principal quantum number, l is the angular momentum, and start, stop, step define the array I calculate over. But what I need is for the default value of stop to depend on n, as n affects the size of the wavefunction.
def plot_psi(n, l, start=(0.001*bohr), stop=((30*n-10)*bohr), step=(0.005*bohr)):
is what I was going for, but n isn't yet defined because the line isn't complete. Any solutions, or ideas for another way to arrange it? Thanks.
Use None as the default value, and calculate the value inside the function, like this:
def plot_psi(n, l, start=(0.001*bohr), stop=None, step=(0.005*bohr)):
    if stop is None:
        stop = (30*n - 10)*bohr
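The None sentinel is needed because default values are evaluated once, at function definition time, when n does not yet exist. A self-contained sketch of the pattern, with a placeholder bohr constant and a dummy body standing in for the real plotting code:

import numpy as np

bohr = 5.29177e-11  # Bohr radius in metres (placeholder constant)

def plot_psi(n, l, start=0.001*bohr, stop=None, step=0.005*bohr):
    # stop depends on n, so its default is computed at call time
    if stop is None:
        stop = (30*n - 10)*bohr
    r = np.arange(start, stop, step)
    return r  # placeholder: the real function would evaluate psi over r

r = plot_psi(n=2, l=0)
print(r[-1] < (30*2 - 10)*bohr)  # True: stop was derived from n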

How should I use @pm.stochastic in PyMC?

Fairly simple question: how should I use @pm.stochastic? I have read some blog posts that claim @pm.stochastic expects a negative log value:
@pm.stochastic(observed=True)
def loglike(value=data):
    # some calculations that generate a numeric result
    return -np.log(result)
I tried this recently but found really bad results. Since I also noticed that some people used np.log instead of -np.log, I gave that a try and it worked much better. What is @pm.stochastic really expecting? I'm guessing there was some confusion about the required sign due to a very popular example that uses something like np.log(1/(1+t_1-t_0)), which was written as -np.log(1+t_1-t_0).
Another question: what is this decorator doing with the value argument? As I understand it, we start with some proposed value for the priors that need to enter the likelihood, and the idea of @pm.stochastic is basically to produce a number to compare this likelihood against the number generated in the previous iteration of the sampling process. The likelihood should receive the value argument and some values for the priors, but I'm not sure this is all that value is doing, because it is the only required argument and yet I can write:
@pm.stochastic(observed=True)
def loglike(value=[1]):
    data = [3, 5, 1]  # some data
    # some calculations that generate a numeric result
    return np.log(result)
And as far as I can tell, that produces the same result as before. Maybe it works this way because I added observed=True to the decorator. If I had tried this on a stochastic variable with the default observed=False, value would be changed in each iteration in an attempt to obtain a better likelihood.
@pm.stochastic is a decorator, so it expects a function. The simplest way to use it is to give it a function that includes value as one of its arguments and returns a log-likelihood.
You should use the @pm.stochastic decorator to define a custom prior for a parameter in your model, and the @pm.observed decorator to define a custom likelihood for data. Both of these decorators create a pm.Stochastic object, which takes its name from the function it decorates and has all the familiar methods and attributes (here is a nice article on Python decorators).
Examples:
A parameter a that has a triangular distribution a priori:
@pm.stochastic
def a(value=.5):
    if 0 <= value < 1:
        return np.log(1. - value)
    else:
        return -np.inf
Here value=.5 is used as the initial value of the parameter, and changing it to value=1 raises an exception, because it is outside of the support of the distribution.
A likelihood b that has a normal distribution centered at a, with a fixed precision:
@pm.observed
def b(value=[.2, .3], mu=a):
    return pm.normal_like(value, mu, 100.)
Here value=[.2,.3] is used to represent the observed data.
I've put this together in a notebook that shows it all in action here.
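For context, a minimal sketch of how these two variables might be sampled with PyMC2's MCMC machinery (the iteration counts are arbitrary):

import numpy as np
import pymc as pm

@pm.stochastic
def a(value=.5):
    # Log-density of a triangular prior on [0, 1)
    if 0 <= value < 1:
        return np.log(1. - value)
    else:
        return -np.inf

@pm.observed
def b(value=[.2, .3], mu=a):
    # Normal log-likelihood with fixed precision 100
    return pm.normal_like(value, mu, 100.)

mcmc = pm.MCMC([a, b])
mcmc.sample(10000, burn=1000)
print(mcmc.trace('a')[:].mean())  # posterior mean of a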
Yes, the confusion is easy to fall into, since the function decorated with @pm.stochastic returns a log-likelihood, which is essentially the opposite of an error. So you take the negative log of your custom error function and return THAT as your log-likelihood.
