I'm trying to set default values for the arguments of a function I've defined, and I want one argument's default value to depend on another argument. In my example I'm plotting the quantum mechanical wavefunction for hydrogen, but you don't need to know the physics to help me.
def plot_psi(n, l, start=0.001*bohr, stop=20*bohr, step=0.005*bohr):
where n is the principal quantum number, l is the angular momentum, and start, stop, step define the array I calculate over. What I need is for the default value of stop to depend on n, since n affects the size of the wavefunction.
def plot_psi(n, l, start=0.001*bohr, stop=(30*n - 10)*bohr, step=0.005*bohr):
is what I was going for, but n isn't defined yet at the point where the default for stop is evaluated. Any solutions, or ideas for another way to arrange it? Thanks.
Use None as the default value and calculate the real value inside the function, like this:
def plot_psi(n, l, start=0.001*bohr, stop=None, step=0.005*bohr):
    if stop is None:
        stop = (30*n - 10) * bohr
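A fuller sketch of how that might look in context; bohr is a stand-in constant and radial_psi is a hypothetical placeholder for whatever routine you use to evaluate the wavefunction:

import numpy as np
import matplotlib.pyplot as plt

bohr = 5.29177e-11  # Bohr radius in metres (stand-in for your constant)

def plot_psi(n, l, start=0.001*bohr, stop=None, step=0.005*bohr):
    if stop is None:
        stop = (30*n - 10) * bohr   # default that depends on n
    r = np.arange(start, stop, step)
    psi = radial_psi(n, l, r)       # hypothetical: your wavefunction routine
    plt.plot(r, psi)
    plt.show()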
When I am working with symbolic variables in SymPy, I have to compute the argument of a complex number of SymPy type. For this I'm using arg(), but when I try to compute the argument of zero I get NaN. Why does this happen? I need it to give me arg(0) = 0. What can I do? Thanks.
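One possible workaround, if you simply want arg(0) to be treated as 0, is to wrap arg in a Piecewise. A minimal sketch:

import sympy as sp

z = sp.Symbol('z')

# return 0 at z = 0 and fall back to arg(z) everywhere else
safe_arg = sp.Piecewise((0, sp.Eq(z, 0)), (sp.arg(z), True))

print(safe_arg.subs(z, 0))     # 0
print(safe_arg.subs(z, sp.I))  # pi/2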
I have a function get_knng_graph that takes two parameters: a set of points and an integer k. I want to generate a sequence of functions, each of which accepts only the set of points, but with a different value of k embedded inside each function.
Consider the code below:
# definition of get_knng_graph(....) here
graph_fns = []
for k in range(1, 5):
    def knng(pts):
        return get_knng_graph(pts, k)
    graph_fns.append(knng)
Is this reasonable code? By which I mean can I be assured that the values of the parameter k embedded inside each of the elements of graph_fns will continue to be different?
In the Haskell world, of course, this is nothing but currying, but this is the first time I am doing something like this in Python.
I tried it, and the code doesn't work: if I place a print(k) in the code above, then when I execute successive functions in the array, it keeps printing 4 for every function.
The problem you are seeing arises because the closure captures a reference to the name k, not its value, so your code is equivalent to this:
graph_fns = []

def knng(pts):
    return get_knng_graph(pts, k)

for k in range(1, 5):
    graph_fns.append(knng)
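You can see the same late-binding behaviour with a minimal stand-in function:

fns = []
for k in range(1, 5):
    def f():
        return k           # k is looked up when f is called, not when it is defined
    fns.append(f)

print([f() for f in fns])  # [4, 4, 4, 4] -- every closure sees the final k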
If you want to bind the value of k to the function, there are a couple of solutions.
The most trivial code change is to add an extra argument with a default value:
graph_fns = []
for k in range(1, 5):
    def knng(pts, k=k):
        return get_knng_graph(pts, k)
    graph_fns.append(knng)
You might also find it a bit cleaner to use functools.partial:
from functools import partial

graph_fns = []
for k in range(1, 5):
    knng = partial(get_knng_graph, k=k)
    graph_fns.append(knng)
and at that point you could just use a list comprehension:
from functools import partial
graph_fns = [partial(get_knng_graph, k=k) for k in range(1, 5)]
There are some other options discussed on this page, like creating a class for this.
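For completeness, a minimal sketch of the class-based approach; the class name is just illustrative, and it assumes get_knng_graph is in scope:

class KnngBuilder:
    def __init__(self, k):
        self.k = k                      # each instance stores its own k

    def __call__(self, pts):
        return get_knng_graph(pts, self.k)

graph_fns = [KnngBuilder(k) for k in range(1, 5)]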
In Python, scopes are function-wide; a for loop does not introduce a new nested scope. Thus in this example k is rebound on every iteration, the k in every knng closure refers to that same variable, and calling any of them after the loop has run its course will show its last value (4 in this case). The standard Python way to deal with this is to shadow it with a default argument:
graph_fns = []
for k in range(1, 5):
    def knng(pts, k=k):
        return get_knng_graph(pts, k)
    graph_fns.append(knng)
This works because default arguments are bound when the definition is executed and the closure is created.
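You can convince yourself of this with a small stand-in function:

fns = []
for k in range(1, 5):
    def f(k=k):          # the default is evaluated here, with the current k
        return k
    fns.append(f)

print([f() for f in fns])  # [1, 2, 3, 4]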
Seems to me, this is a good case for using partial from the functools module.
So say I have a function that takes an input, squares it and adds some variable to it before returning the result:
def x_squared_plus_n(x, n):
    return (x**2) + n
If I want to curry that function, modifying it so that a fixed number (say 5) is always squared and has a number n added to it, I can do so using partial:
from functools import partial

five_squared_plus_n = partial(x_squared_plus_n, 5)
Now, I have a new function five_squared_plus_n for which the first x parameter in the original function's parameter signature is fixed to x=5. The new function has a parameter signature containing only the remaining parameters, here n.
So calling:
five_squared_plus_n(15)
or equivalently,
five_squared_plus_n(n=15)
returns 40 (that is, 5² + 15).
Any combination of parameters can be fixed like this, and the resulting "curried" function can be assigned to a new name. It's a very powerful tool.
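For instance, you can just as easily fix the keyword parameter n and leave x free; a small sketch:

from functools import partial

# x_squared_plus_n as defined above
plus_five = partial(x_squared_plus_n, n=5)

print(plus_five(2))   # 2**2 + 5 = 9
print(plus_five(10))  # 10**2 + 5 = 105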
In your example, you could wrap your partial calls in a loop, fixing a different value on each iteration, and assign the resulting functions to entries in a dictionary. Using my simple example, that might look something like:
func_dict = {}
for k in range(1, 5):
    func_dict[k] = partial(x_squared_plus_n, k)
Which would prepare a series of functions, callable by reference to that dictionary - so:
func_dict[1](5)
would return 1² + 5 = 6, while
func_dict[3](12)
would return 3² + 12 = 21.
It is possible to assign proper Python names to these functions, but that's probably for a different question - here, just imagine that the dictionary holds a series of functions, accessible by key. I've used numeric keys, but you could use strings or other values to help access the functions you've prepared in this way.
Python's support for Haskell-style "functional" programming is fairly strong - you just need to dig around a little to access the appropriate hooks. Subjectively, I think there's perhaps less purity in terms of functional design, but for most practical purposes there is a functional solution.
I am setting up to use SciPy's basin-hopping global optimizer. Its documentation for the parameter T states:
T: float, optional
The “temperature” parameter for the accept or reject criterion. Higher “temperatures” mean that larger jumps in function value will be accepted. For best results T should be comparable to the separation (in function value) between local minima.
When it says "function value", does that mean the expected return value of the cost function func? Or the value passed to it? Or something else?
I read the source, and I see where T is passed to the Metropolis acceptance criterion, but I do not understand how it is used when converted to "beta".
I'm unfamiliar with the algorithm, but if you keep reading the documentation on the link you posted you'll find this:
Choosing T: The parameter T is the "temperature" used in the Metropolis criterion. Basinhopping steps are always accepted if func(xnew) < func(xold). Otherwise, they are accepted with probability exp(-(func(xnew) - func(xold)) / T). So, for best results, T should be comparable to the typical difference (in function values) between local minima. (The height of "walls" between local minima is irrelevant.)
So T should be chosen on the scale of the function you are trying to optimize, func, and specifically on the scale of the typical difference in func between local minima. This makes sense if you look at that probability expression: you are comparing a difference in function values to what acts as a kind of "upper bound" for accepted uphill steps. For example, if one local minimum has func = 10 and another has func = 14, you might consider T = 4 an appropriate value.
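As a concrete illustration, a minimal sketch of calling basinhopping with T set explicitly; the objective here is a made-up toy function whose local minima differ in value by roughly 1, hence T=1.0:

import numpy as np
from scipy.optimize import basinhopping

def f(x):
    # toy 1-D objective with many local minima
    return np.cos(14.5 * x[0] - 0.3) + (x[0] + 0.2) * x[0]

# T is set roughly at the scale of the differences between local minima
result = basinhopping(f, x0=[1.0], niter=200, T=1.0, stepsize=0.5)
print(result.x, result.fun)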
I wonder whether it is possible, and if so how, to pass an argument through as a function parameter. I would like to be able to include, among the parameters of my own function, the ord argument of numpy.linalg.norm(x, ord=...).
I want my function to take a parameter whose value determines which norm is used. Thanks.
If you want to declare a function that evaluates the norm of an array and allows you to pass in an order, you can use something like this:
import numpy

def norm_with_ord(x, order):
    return numpy.linalg.norm(x, ord=order)
Though that still requires you to pass in one of the valid ord values, as listed here.
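For example, calling it with a few of the ord values that numpy.linalg.norm accepts:

x = numpy.array([3.0, -4.0])

print(norm_with_ord(x, 2))           # Euclidean norm: 5.0
print(norm_with_ord(x, 1))           # sum of absolute values: 7.0
print(norm_with_ord(x, numpy.inf))   # largest absolute value: 4.0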
Fairly simple question: how should I use @pm.stochastic? I have read some blog posts claiming that @pm.stochastic expects a negative log value:
@pm.stochastic(observed=True)
def loglike(value=data):
    # some calculations that generate a numeric result
    return -np.log(result)
I tried this recently but got really bad results. Since I also noticed that some people used np.log instead of -np.log, I gave that a try and it worked much better. What does @pm.stochastic really expect? I'm guessing there was some confusion about the required sign because a very popular example uses something like np.log(1/(1+t_1-t_0)), which was written as -np.log(1+t_1-t_0).
Another question: what is this decorator doing with the value argument? As I understand it, we start with some proposed values for the priors that enter the likelihood, and the idea of @pm.stochastic is basically to produce a number so that this likelihood can be compared with the one generated in the previous iteration of the sampling process. The likelihood should receive the value argument plus some values for the priors, but I'm not sure that this is all value does, because it's the only required argument and yet I can write:
@pm.stochastic(observed=True)
def loglike(value=[1]):
    data = [3, 5, 1]  # some data
    # some calculations that generate a numeric result
    return np.log(result)
And as far as I can tell, that produces the same result as before. Maybe it works this way because I added observed=True to the decorator. If I had tried this on a stochastic variable with the default observed=False, value would change on each iteration as the sampler tries to obtain a better likelihood.
@pm.stochastic is a decorator, so it expects a function. The simplest way to use it is to give it a function that includes value as one of its arguments and returns a log-likelihood.
You should use the @pm.stochastic decorator to define a custom prior for a parameter in your model, and the @pm.observed decorator to define a custom likelihood for data. Both of these decorators create a pm.Stochastic object, which takes its name from the function it decorates and has all the familiar methods and attributes (here is a nice article on Python decorators).
Examples:
A parameter a that has a triangular distribution a priori:
@pm.stochastic
def a(value=.5):
    if 0 <= value < 1:
        return np.log(1. - value)
    else:
        return -np.inf
Here value=.5 is used as the initial value of the parameter, and changing it to value=1 raises an exception, because it is outside of the support of the distribution.
A likelihood b that is normally distributed, centered at a, with a fixed precision:
@pm.observed
def b(value=[.2, .3], mu=a):
    return pm.normal_like(value, mu, 100.)
Here value=[.2,.3] is used to represent the observed data.
I've put this together in a notebook that shows it all in action here.
Yes, confusion is easy here, since the function decorated with @pm.stochastic returns a log-likelihood, which is essentially the opposite of an error. So you take the negative log of your custom error function and return that as your log-likelihood.
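A minimal sketch of that pattern, in the PyMC2 style of the question; the data, the prior, and the error measure here are made up purely for illustration:

import numpy as np
import pymc as pm

data = np.array([1.2, 0.8, 1.1, 0.9])   # stand-in observed data
m = pm.Uniform('m', 0.0, 2.0)            # illustrative prior on a model parameter

@pm.stochastic(observed=True)
def loglike(value=data, m=m):
    error = np.sum((value - m) ** 2)     # hypothetical custom error measure
    return -np.log(error)                # negative log of the error as the log-likelihood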