What does this sentence in the Python time.time() documentation mean?

The documentation for time.time() (https://docs.python.org/3/library/time.html#time.time) says: "While this function normally returns non-decreasing values, it can return a lower value than a previous call if the system clock has been set back between the two calls."
Can somebody please explain this to me?
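In short: time.time() reads the system ("wall-clock") time, which the operating system, an administrator, or an NTP daemon can adjust backwards at any moment, so two successive calls are not guaranteed to be in order. For measuring elapsed time, time.monotonic() is the clock that cannot go backwards. A short sketch:

```python
import time

# time.time() reads the wall clock.  If the system clock is set back
# between these two calls (NTP correction, manual change), t2 can be
# smaller than t1 -- exactly the caveat in the documentation.
t1 = time.time()
t2 = time.time()

# time.monotonic() is guaranteed never to go backwards, which makes it
# the right tool for measuring durations:
m1 = time.monotonic()
m2 = time.monotonic()
assert m2 >= m1  # always holds, regardless of clock adjustments
```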

Returning boolean values in a backtracking algorithm

I am working on the famous knight's tour problem in Python using a backtracking algorithm. I'm confused as to why the computer goes back to the last working case if the current case returns False. Let me elaborate.
In the above program, the knight_tour() function calls the knight_tour_helper() function, which starts from index [0][0], checks all possible combinations from there on, and backtracks if it reaches a dead end. I'm unable to understand what role the boolean values play here. For example, if the knight_tour_helper() function returns False (line 11), what does the computer do? Why does it backtrack? What happens if the function returns False from line 5? Does it move on to the next values of x, y, and why?
Also, when does this loop end? We print the complete board at the end (line 6), so knight_tour_helper() must terminate somewhere for us to be able to print the final board. Why does the function stop there? My guess is that it stops because counter reaches the value n*n, but what makes it stop at that point? Is it the boolean value True, or some other influence?
Why does the function work recursively, given that all we are returning is True/False? A recursive function works like f(x) = f(x-1) + f(x-2). How is that applicable here? knight_tour_helper() is called again at line 8 to check a condition. How are these two events equivalent?
To put it concisely: I am asking for the sequence of events the computer performs in this algorithm, and what happens at each step, with the reasoning behind it.
I hope I have made my question clear. Let me know in the comments if any clarification is required. Any help here will be much appreciated.
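The program itself isn't reproduced above, so the line numbers can't be matched up exactly, but a typical backtracking solution following the question's naming (knight_tour, knight_tour_helper, counter, all inferred from the description, not the asker's exact code) looks roughly like this:

```python
def knight_tour(n):
    board = [[-1] * n for _ in range(n)]
    board[0][0] = 0  # the knight starts at index [0][0]
    moves = [(2, 1), (1, 2), (-1, 2), (-2, 1),
             (-2, -1), (-1, -2), (1, -2), (2, -1)]
    if knight_tour_helper(board, 0, 0, 1, moves, n):
        return board          # a complete tour was found; print it here
    return None               # no tour exists from this start square

def knight_tour_helper(board, row, col, counter, moves, n):
    if counter == n * n:      # every square visited: success bubbles up
        return True
    for dr, dc in moves:
        x, y = row + dr, col + dc
        if 0 <= x < n and 0 <= y < n and board[x][y] == -1:
            board[x][y] = counter                 # tentatively place the move
            if knight_tour_helper(board, x, y, counter + 1, moves, n):
                return True                       # a deeper call succeeded: stop searching
            board[x][y] = -1                      # dead end: undo ("backtrack") and try the next x, y
    return False              # no move from here works; the caller backtracks
```

The return value carries two meanings: True means "a complete tour was found below this point, stop searching and let success bubble all the way up"; False means "no move from this square leads anywhere, undo the last move and let the caller try its next candidate". The recursion bottoms out when counter reaches n*n.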

Is it implicit in Python that 'other' is another object in a list? The parameter has never been properly introduced

I just don't understand how an example in a book uses a parameter 'other' which is never introduced. When the function is called, does Python automatically understand it to be the other elements of the class?
See the example:
def get_neighbors(self, others, radius, angle):
    """Return the list of neighbors within the given radius and angle."""
    boids = []
    for other in others:
        if other is self:
            continue
        offset = other.pos - self.pos
        # if not in range, skip it
        if offset.mag > radius:
            continue
        # if not within viewing angle, skip it
        if self.vel.diff_angle(offset) > angle:
            continue
        # otherwise add it to the list
        boids.append(other)
    return boids
Nowhere else in the code is there any mention of 'other'.
Thanks, I'm just trying to understand the mechanism.
Updated answer, in response to comment
Python doesn't have any special behavior for a method parameter named "others", or for any of the other parameters in your example.
Most likely the book you're reading simply didn't explain (yet) how that function will be invoked. It's also possible that the book made a mistake (in which case, perhaps you should find a better book!).
Original answer (for posterity)
The name other is declared by the for statement:
for other in others:
    ...
From the Python documentation for the for statement:
The suite is then executed once for each item provided by the iterator, in the order of ascending indices. Each item in turn is assigned to the target list using the standard rules for assignments, and then the suite is executed.
Here, "the iterator" is derived from the list others, and "the target list" is simply the variable other. So on each iteration through the loop, the other variable is assigned ("using the standard rules for assignments") the next value from the list.
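A minimal, self-contained illustration of that assignment rule:

```python
others = ["alice", "bob", "carol"]

for other in others:     # `other` is created (or rebound) by the for statement
    print(other)

# After the loop, `other` still refers to the last item assigned:
print(other)             # -> carol
```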
The docstring for that method should include the list of arguments and explain the expected type of each (I am planning to update this code soon, and I will improve the documentation).
In this case, others should be a list (or other sequence) of objects that have an attribute named pos (probably the same type as other).
Note that there is nothing special about the name 'others'.

Setting argument defaults from arguments in python

I'm trying to set a default value for an argument of a function I've defined, and I want another argument's default value to depend on the first. In my example, I'm trying to plot the quantum mechanical wavefunction for hydrogen, but you don't need to know the physics to help me.
def plot_psi(n, l, start=0.001*bohr, stop=20*bohr, step=0.005*bohr):
where n is the principal quantum number, l is the angular momentum, and start, stop, step define the array I calculate over. What I need is for the default value of stop to depend on n, since n affects the size of the wavefunction.
def plot_psi(n, l, start=0.001*bohr, stop=(30*n - 10)*bohr, step=0.005*bohr):
is what I was going for, but n isn't defined yet when the default is evaluated, because default values are computed when the def line runs. Any solutions, or ideas for another way to arrange it? Thanks
Use None as the default value, and calculate the values inside the function, like this:
def plot_psi(n, l, start=0.001*bohr, stop=None, step=0.005*bohr):
    if stop is None:
        stop = (30*n - 10)*bohr
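Spelled out as a runnable sketch (bohr is given a placeholder value here purely for illustration):

```python
bohr = 5.29177e-11  # Bohr radius in metres (placeholder for this sketch)

def plot_psi(n, l, start=0.001*bohr, stop=None, step=0.005*bohr):
    # Default values are evaluated once, when `def` runs, so `stop`
    # cannot mention `n` there.  None acts as a sentinel meaning
    # "compute the real default now that n is known".
    if stop is None:
        stop = (30*n - 10)*bohr
    return start, stop, step

start, stop, step = plot_psi(2, 0)
# stop == (30*2 - 10)*bohr == 50*bohr
```

An explicit stop argument still overrides the computed default, exactly as with an ordinary default value.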

How should I use @pm.stochastic in PyMC?

Fairly simple question: how should I use @pm.stochastic? I have read some blog posts that claim @pm.stochastic expects a negative log value:
@pm.stochastic(observed=True)
def loglike(value=data):
    # some calculations that generate a numeric result
    return -np.log(result)
I tried this recently but got really bad results. Since I also noticed that some people used np.log instead of -np.log, I gave it a try and it worked much better. What does @pm.stochastic really expect? I'm guessing there was some confusion about the required sign, due to a very popular example using something like np.log(1/(1+t_1-t_0)) which was written as -np.log(1+t_1-t_0)
Another question: what is this decorator doing with the value argument? As I understand it, we start with some proposed values for the priors that enter the likelihood, and the idea of @pm.stochastic is basically to produce a number to compare against the number generated by the previous iteration of the sampling process. The likelihood should receive the value argument and some values for the priors, but I'm not sure this is all value does, because it's the only required argument and yet I can write:
@pm.stochastic(observed=True)
def loglike(value=[1]):
    data = [3, 5, 1]  # some data
    # some calculations that generate a numeric result
    return np.log(result)
And as far as I can tell, that produces the same result as before. Maybe it works this way because I added observed=True to the decorator. If I had tried this with a stochastic variable with observed=False (the default), value would be changed at each iteration in the search for a better likelihood.
@pm.stochastic is a decorator, so it expects a function. The simplest way to use it is to give it a function that takes value as one of its arguments and returns a log-likelihood.
You should use the @pm.stochastic decorator to define a custom prior for a parameter in your model, and the @pm.observed decorator to define a custom likelihood for data. Both of these decorators create a pm.Stochastic object, which takes its name from the function it decorates and has all the familiar methods and attributes (here is a nice article on Python decorators).
Examples:
A parameter a that has a triangular distribution a priori:
@pm.stochastic
def a(value=.5):
    if 0 <= value < 1:
        return np.log(1. - value)
    else:
        return -np.inf
Here value=.5 is used as the initial value of the parameter, and changing it to value=1 raises an exception, because it is outside of the support of the distribution.
A likelihood b that is normally distributed, centered at a, with a fixed precision:
@pm.observed
def b(value=[.2, .3], mu=a):
    return pm.normal_like(value, mu, 100.)
Here value=[.2,.3] is used to represent the observed data.
I've put this together in a notebook that shows it all in action here.
Yes, the confusion is easy to make, since the @pm.stochastic-decorated function returns a log-likelihood, which is essentially the opposite of an error. So you take the negative log of your custom error function and return that as your log-likelihood.
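To see why the sign matters, here is a plain-NumPy sketch (hand-rolled, not using PyMC, though it is intended to mirror what pm.normal_like computes): returning the log of the density rewards good fits, while returning the negated log of the density would make the sampler chase the worst fits.

```python
import numpy as np

def normal_loglike(value, mu, tau):
    # Log-density of independent normal observations with mean mu and
    # precision tau (tau = 1/sigma**2), summed over the data.
    value = np.asarray(value, dtype=float)
    return np.sum(0.5*np.log(tau/(2*np.pi)) - 0.5*tau*(value - mu)**2)

data = [0.1, -0.2, 0.05]

# The log-likelihood is highest near the true mean, so a sampler that
# maximizes it moves toward good parameter values:
good = normal_loglike(data, mu=0.0, tau=1.0)
bad = normal_loglike(data, mu=5.0, tau=1.0)
assert good > bad

# Negating the returned value would rank the poor fit above the good
# one -- the "really bad results" symptom described in the question.
assert -bad > -good
```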

Multi-recursive functions

I’d like to be pointed toward a reference that could better explain recursion when a function employs multiple recursive calls. I think I get how Python handles memory when a function employs a single instance of recursion. I can use print statements to track where the data is at any given point while the function processes the data. I can then walk each of those steps back to see how the resultant return value was achieved.
Once multiple instances of recursion are firing off during a single function call I am no longer sure how the data is actually being processed. The previously illuminating method of well-placed print statements reveals a process that looks quantum, or at least more like voodoo.
To illustrate my quandary, here are two basic examples: the Fibonacci and Towers of Hanoi problems.
def getFib(n):
    if n == 1 or n == 2:
        return 1
    return getFib(n-1) + getFib(n-2)
The Fibonacci example features two inline calls. Is getFib(n-1) resolved all the way through the stack first, then getFib(n-2) resolved similarly, each of the resultants being put into new stacks, and those stacks added together line by line, with those sums being totaled for the result?
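Recording the order in which calls begin answers this directly (a small instrumented sketch, not the original code): Python fully evaluates the left operand, including its entire subtree of recursive calls, before the right operand starts.

```python
order = []

def getFib(n):
    order.append(n)   # record when each call begins
    if n == 1 or n == 2:
        return 1
    # The left call (and everything beneath it) completes before the
    # right call starts; the partial result waits in this call's frame.
    return getFib(n - 1) + getFib(n - 2)

getFib(4)
print(order)  # [4, 3, 2, 1, 2]
```

So there is one call stack, not several: each frame pauses at the `+` until its left operand's whole subtree has returned, then starts the right operand, then adds the two results and returns the sum to its own caller.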
def hanoi(n, s, t, b):
    assert n > 0
    if n == 1:
        print('move', s, 'to', t)
    else:
        hanoi(n-1, s, b, t)
        hanoi(1, s, t, b)
        hanoi(n-1, b, t, s)
Hanoi presents a different problem, in that the recursive calls are on successive lines. When the function reaches the first call, does it resolve it all the way down to n=1, then move to the second call (which is already n=1), then to the third until n=1?
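Yes: each call runs to completion before the next line starts. Recording the moves in a list instead of printing them makes the sequencing visible (a sketch mirroring the code above):

```python
moves = []

def hanoi(n, s, t, b):
    # s = source peg, t = target peg, b = buffer (spare) peg
    assert n > 0
    if n == 1:
        moves.append((s, t))      # base case: move a single disk
    else:
        hanoi(n - 1, s, b, t)     # (1) finishes entirely first,
        hanoi(1, s, t, b)         # (2) then one move of the largest disk,
        hanoi(n - 1, b, t, s)     # (3) then the last call runs to completion

hanoi(3, 'A', 'C', 'B')
# 7 moves: [('A','C'), ('A','B'), ('C','B'), ('A','C'), ('B','A'), ('B','C'), ('A','C')]
```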
Again, just looking for reference material that can help me get smart on what’s going on under the hood here. I’m sure it’s likely a bit much to explain in this setting.
http://www.pythontutor.com/visualize.html
There's even a Hanoi link there so you can follow the flow of code.
This is a link to the Hanoi code that they show on their site, but it may have to be adapted to visualize your exact code.
http://www.pythontutor.com/visualize.html#code=%23+move+a+stack+of+n+disks+from+stack+a+to+stack+b,%0A%23+using+tmp+as+a+temporary+stack%0Adef+TowerOfHanoi(n,+a,+b,+tmp)%3A%0A++++if+n+%3D%3D+1%3A%0A++++++++b.append(a.pop())%0A++++else%3A%0A++++++++TowerOfHanoi(n-1,+a,+tmp,+b)%0A++++++++b.append(a.pop())%0A++++++++TowerOfHanoi(n-1,+tmp,+b,+a)%0A++++++++%0Astack1+%3D+%5B4,3,2,1%5D%0Astack2+%3D+%5B%5D%0Astack3+%3D+%5B%5D%0A++++++%0A%23+transfer+stack1+to+stack3+using+Tower+of+Hanoi+rules%0ATowerOfHanoi(len(stack1),+stack1,+stack3,+stack2)&mode=display&cumulative=false&heapPrimitives=false&drawParentPointers=false&textReferences=false&showOnlyOutputs=false&py=2&curInstr=0
