How do I input a math function without Python running it?

I want to define a function which takes a mathematical function as input (np.sin(x), for example), essentially stores it, and has a random number generator feed random numbers into it.
I know how to do this by writing the code in directly, but I want to know how to do it as a user who only sees the console.
So, using np.sin(x) again, in pseudocode:

def function(np.sin(x)):
    x = random.uniform(0, 1)
    return value of np.sin(whatever the random number was)

Pass the function itself as the parameter, not an evaluated call:

import random
import numpy as np

def function(func):
    x = random.uniform(0, 1)
    return func(x)

function(np.sin)
That said, it is not clear what you want to achieve; you'll get a different random output at each call. Also, for such a trivial case, you'd be better off calling the numpy function directly:

np.sin(random.uniform(0, 1))
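Since the question mentions a user who only sees the console, here is one possible sketch of accepting a function by name at runtime (ALLOWED and sample_function are illustrative names, not library features). Looking the typed name up in an explicit whitelist avoids calling eval() on raw user input:

import random
import numpy as np

# Map of function names the user is allowed to type.
ALLOWED = {"sin": np.sin, "cos": np.cos, "exp": np.exp}

def sample_function(name):
    func = ALLOWED[name]      # raises KeyError for anything not whitelisted
    x = random.uniform(0, 1)
    return func(x)

print(sample_function(input("function name: ")))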

Related

How do I use Newton's method in Python to solve a system of equations?

I have an assignment where I need to make a single defined function that runs Newton's method, so that I can plug other defined functions into it and it will solve them all. I wrote one that works for equations with one variable. I only need to solve for one variable from the system, but I don't know how to do that in code without solving for all four of them.
The function I wrote to run Newton's method is this:

def fnewton(function, dx, x, n):
    # defined the functions that need to be evaluated so that this code
    # can be applied to any function I call
    def f(x):
        f = eval(function)
        return f
    # eval is used to evaluate whatever I put in the function place when I recall fnewton
    # this won't work without eval to run the functions
    def df(x):
        df = eval(dx)
        return df
    for intercept in range(1, n):
        i = x - (f(x) / df(x))
        x = i
        # this is literally just Newton's method
        # to find an intercept you can literally input intercept in a for loop and it'll do it for you
        # I just found this out
        # putting n in the range makes it count iterations
    print('root is at')
    print(x)
    print('after this many iterations:')
    print(n)
my current system of equations function looks like this:

def c(x):
    T = x[0]
    y = x[1]
    nl = x[2]
    nv = x[3]
    RLN = .63 * Antoine(An, Bn, Cn, T) - y * 760
    RLH = (1 - .63) * Antoine(Ah, Bh, Ch, T) - (1 - y) * 760
    N = .63 * nl + y * nv - 50
    H = (1 - .63) * nl + (1 - y) * nv - 50
    return [RLN, RLH, N, H]
To use my function to solve this I've entered multiple variations of:

fnewton("c(x)", "dcdx(x)", (2, 2, 2, 2), 10)

Do I need to change the system of equations into one equation somehow? I don't know how to manipulate my code to make this work while also keeping it working for equations with only one variable.
To perform Newton's method in multiple dimensions, you must replace the simple derivative by a Jacobian matrix, that is, a matrix holding the derivatives of all components of your multidimensional function with respect to all of the variables. This is described here: https://en.wikipedia.org/wiki/Newton%27s_method#Systems_of_equations (or, perhaps more helpfully, in Sec. 1.4 here: https://web.mit.edu/18.06/www/Spring17/Multidimensional-Newton.pdf).
Instead of f(x)/f'(x), you need to work with the inverse of the Jacobian matrix applied to the vector function f: the update is x_new = x - J(x)^(-1) f(x). So the formula is actually quite similar!
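As a concrete illustration, here is a minimal sketch of that update (newton_system, the tol parameter, and the convergence check are illustrative additions, not part of the original code). Solving J * step = f(x) with np.linalg.solve avoids explicitly inverting the Jacobian:

import numpy as np

def newton_system(f, jac, x0, n=50, tol=1e-10):
    # f   : callable returning the residual vector f(x)
    # jac : callable returning the Jacobian matrix J(x)
    # x0  : initial guess
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        step = np.linalg.solve(jac(x), f(x))  # solve J * step = f(x)
        x = x - step
        if np.linalg.norm(step) < tol:        # stop once the update is tiny
            break
    return x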

How to define a Python function 'on the fly' for use with pymanopt/autodifferentiation

I had no idea how to phrase the title of this question, so apologies for any confusion there. I am using the pymanopt package for optimization and would like to create some sort of function/method that allows for generalized input (a variable number of input arrays). To use pymanopt, one has to provide a cost function defined in terms of the arrays that are to be optimized to minimize the cost.
For example, a cost function could be:
@pymanopt.function.Autograd
def f(A, B):
    return ((X - A @ B.T) ** 2).sum()
To do the optimization, the variable X is defined prior to f, then f is supplied as the cost function to the pymanopt solver. Optimization is done with respect to the arguments of f and these arrays are returned by pymanopt with values that minimize the cost function.
Ideally, I would like to make this definition more dynamic: instead of defining a function in terms of hard-coded arrays, I'd like to supply a list of variables to be optimized. So if my cost function were instead:
@pymanopt.function.Autograd
def f(L):
    return ((X - np.linalg.multi_dot(L)) ** 2).sum()
Where the arrays A,B,...,C would be stored in a list, L. However, as far as I can tell, the variables to be optimized have to be directly defined as individual arrays in the cost function supplied to the solver.
The only thing I can think of doing is to define the cost function by creating a string that contains the 'hard coded' function and executing it via exec() with something like this:

args = ','.join(['A{}'.format(i) for i in range(len(L))])
exec('@pymanopt.function.Autograd\ndef f({}):\n\treturn ((X - np.linalg.multi_dot([{}])) ** 2).sum()'.format(args, args))
but I understand that using this method should be avoided if possible. Any advice for navigating this sort of problem is greatly appreciated - thanks! Please let me know if anything is unclear/doesn't make sense.
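One exec-free pattern worth trying is a factory that returns a cost function taking the matrices as positional arguments. This is an assumption on my part, not something confirmed by the thread: whether pymanopt's decorators accept a variadic signature depends on the version, so treat this purely as a sketch of the idea (make_cost is a made-up name):

import numpy as np

def make_cost(X):
    # Variadic cost: works for any number of factor matrices.
    def cost(*matrices):
        return ((X - np.linalg.multi_dot(matrices)) ** 2).sum()
    return cost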

Possible issues with embedding functions inside functions in Python. Are there times when this should be avoided?

If you define a function in your Python script and later want to use it (and possibly others) inside a different function, would this always work, or are there cases where it might cause issues? Also, is there ever a case where this is considered bad practice?
E.g.:
Say I define a simple function to square a number, and then use it inside another function to sum those squared numbers. This seems to work, but are there any cases with more complex functions where embedding functions inside functions could cause an issue?
def square(a):
    c = a**2
    return c

square(2)

def sum_of_squares(d, e):
    x = square(d) + square(e)  # Using the square function defined earlier
    return x

sum_of_squares(2, 4)
Note: I'm not sure if this is the right forum for this question, so feel free to move it if it isn't.
Not only can you call functions inside other functions, a function can even call itself from within its own body, which is known as recursion. Check out this factorial function as an example of recursion:
def factorial(n):
    if n <= 1:  # <= 1 rather than == 1, so factorial(0) also terminates
        return 1
    else:
        return n * factorial(n - 1)
So there isn't any case where calling a function inside another function (or the same one) would fail, provided you use the right syntax and the called function has been defined by the time the call actually runs. :)
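For completeness, "embedding" can also mean literally defining one function inside another, which Python supports too; the inner function can capture variables from the enclosing scope, forming a closure. A small illustrative sketch (make_multiplier is a made-up name):

def make_multiplier(factor):
    # multiply is defined inside make_multiplier and captures `factor`
    # from the enclosing scope, forming a closure.
    def multiply(x):
        return x * factor
    return multiply

double = make_multiplier(2)
print(double(5))  # prints 10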

Generating random numbers for a probability density function in Python

I'm currently working on a project relating to Brownian motion, and trying to simulate some of it using Python (a language I'm admittedly very new to). Currently, my goal is to generate random numbers following a given probability density function. I've been trying to use the scipy library for it.
My current code looks like this:

>>> import math
>>> import scipy.stats as st
>>> class my_pdf(st.rv_continuous):
...     def _pdf(self, x, y):
...         return (1/math.sqrt(4*t*D*math.pi)) * math.exp(-(x**2/(4*D*t))) * (1/math.sqrt(4*t*D*math.pi)) * math.exp(-(y**2/(4*D*t)))
...
>>> def get_brown(a, b):
...     D, t = a, b
...     return my_pdf()
...
>>> get_brown(1, 1)
<__main__.my_pdf object at 0x000000A66400A320>
All attempts at launching the get_brown function end up giving me these hexadecimals (always starting at 0x000000A66400A with only the last three digits changing, no matter what parameters I give for D and t). I'm not sure how to interpret that. All I want is to get random numbers following the given PDF; what do these hexadecimals mean?
The result you see is the memory address of the object you have created. Now you might ask: which object? Your method get_brown(int, int) calls return my_pdf(), which creates an object of the class my_pdf and returns it. If you want to access the _pdf function of your class and calculate the value of the pdf, you can use this code:

get_brown(1, 1)._pdf(x, y)

On the object you have just created you can also use all the methods of the scipy.stats.rv_continuous class, documented in the SciPy reference.
For your situation you could also discard your current code and just use the normal distribution included in scipy, since Brownian motion is essentially a normal (Gaussian) random process.
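Concretely, the pdf in the question is a Gaussian with variance 2*D*t in each coordinate, so independent x and y displacements can be drawn directly; a minimal sketch (the variable names are illustrative):

import numpy as np
from scipy.stats import norm

D, t = 1.0, 1.0
sigma = np.sqrt(2 * D * t)            # std. dev. of a 1-D Brownian displacement
x, y = norm.rvs(scale=sigma, size=2)  # independent x and y samples
print(x, y)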
As noted, this is a memory location. Your function get_brown gets an instance of the my_pdf class, but doesn't evaluate the method inside that class.
What you probably want to do is call the _pdf method on that instance, rather than return the instance itself:

def get_brown(a, b):
    D, t = a, b  # what is D,t for?
    return my_pdf()._pdf(a, b)

I expect that the code you've posted is a simplification of what you're really doing, but functions don't need to live inside classes, so the _pdf function could stand on its own. Alternatively, you don't need the get_brown function at all: just instantiate the my_pdf class and call the calculation method.

Can I pass the objective and derivative functions to scipy.optimize.minimize as one function?

I'm trying to use scipy.optimize.minimize to minimize a complicated function. I noticed in hindsight that the minimize function takes the objective and derivative functions as separate arguments. Unfortunately, I've already defined a function which returns the objective function value and first-derivative values together -- because the two are computed simultaneously in a for loop. I don't think there is a good way to separate my function into two without the program essentially running the same for loop twice.
Is there a way to pass this combined function to minimize?
(FYI, I'm writing an artificial neural network backpropagation algorithm, so the for loop is used to loop over training data. The objective and derivatives are accumulated concurrently.)
Yes, you can pass them as a single function: set jac=True and return the objective value and the gradient together as a tuple:

import numpy as np
from scipy.optimize import minimize

def f(x):
    return np.sin(x) + x**2, np.cos(x) + 2*x

sol = minimize(f, [0], jac=True, method='L-BFGS-B')

With jac=True, minimize treats the first element of the returned tuple as the objective value and the second as the gradient, so your loop only runs once per evaluation point.
Something else that might work: you can memoize the function, meaning that if it gets called with the same inputs a second time, it simply returns the cached outputs for those inputs without doing any actual work. In the context of a nonlinear program there could be thousands of calls, which implies a large cache, but most memoizers let you specify a cache limit so the population is managed FIFO. In other words, you still benefit fully in your particular case, because the inputs will be the same only when you need the function value and the derivative around the same point in time, so a small cache should suffice.
You don't say whether you are using Python 2 or 3. In Python 3.2+, you can use functools.lru_cache as a decorator to provide this memoization. Then you write your code like this:

import functools

@functools.lru_cache()
def original_fn(x):
    # ... the expensive loop that computes both quantities ...
    return fnvalue, fnderiv

def new_fn_value(x):
    fnvalue, fnderiv = original_fn(x)
    return fnvalue

def new_fn_deriv(x):
    fnvalue, fnderiv = original_fn(x)
    return fnderiv
Then you pass each of the new functions to minimize. You still pay the penalty of a second call, but it does no work if x is unchanged. You will need to research what "unchanged" means in the context of floating-point numbers, particularly since the changes in x shrink as the minimization begins to converge. One caveat worth flagging: scipy passes x as a NumPy array, which is not hashable, so you may need to convert it to a tuple before lru_cache will accept it, or roll your own small cache as sketched below.
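Here is a minimal hand-rolled sketch of that idea (memoize_last is a made-up name, not a library function). It is keyed on the array's raw bytes so NumPy arrays work, and it caches only the most recent call, which is all this use case needs since the value and the derivative are requested back to back at the same x:

import numpy as np

def memoize_last(fn):
    cache = {"key": None, "result": None}
    def wrapper(x):
        key = np.asarray(x).tobytes()    # arrays aren't hashable; bytes are
        if key != cache["key"]:
            cache["key"] = key
            cache["result"] = fn(x)      # only recompute on a new x
        return cache["result"]
    return wrapper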
There are lots of recipes for memoization in py2.x if you look around a bit.
Did I make any sense at all?
