Let's say I have inputs 'A' and 'B' for my function, which outputs 'C'. For each value of A, I would like to find the value of B that results in the maximum value of C; I would then like to record the values of B and C. Is there a function that can perform this action? Perhaps something which depends on convergence mechanisms?
*In case you found this through one of the non-Python-related tags I applied, please note that I am using Python 3.x.
Let's define a function that takes parameters (A, B) and returns a value C. We can optimize this in Python by doing:
import numpy as np
from scipy import optimize

f = lambda a, b: ...  # your code which returns C

optimal_vals = np.zeros((2, len(list_of_all_A_values)))
for i, a in enumerate(list_of_all_A_values):  # assuming this list is defined above
    # Minimize -f(a, b) over b; full_output=True also returns the minimum found.
    b_opt, neg_c_opt, *rest = optimize.fmin(lambda b: -f(a, b), 0, full_output=True)
    optimal_vals[:, i] = [b_opt[0], -neg_c_opt]
This takes advantage of SciPy's fmin function, which relies on the convergence of the downhill simplex (Nelder-Mead) algorithm. Since fmin minimizes, it's crucial not to forget the minus sign on f(a, b): minimizing -f over b is equivalent to maximizing f.
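As a quick sanity check, here is a minimal sketch with a toy function whose maximum over b is known analytically (the function and the A values below are made up for illustration):

import numpy as np
from scipy import optimize

# Toy function: f(a, b) = a - (b - a)**2 is maximized at b = a, where C = a.
f = lambda a, b: a - (b - a) ** 2

list_of_all_A_values = [1.0, 2.0, 3.0]
optimal_vals = np.zeros((2, len(list_of_all_A_values)))
for i, a in enumerate(list_of_all_A_values):
    # fmin passes b as a length-1 array, so index it to keep f scalar-valued.
    b_opt, neg_c_opt, *rest = optimize.fmin(lambda b: -f(a, b[0]), 0,
                                            full_output=True, disp=False)
    optimal_vals[:, i] = [b_opt[0], -neg_c_opt]

print(optimal_vals)
# Row 0 (optimal b) and row 1 (maximal C) should both be close to [1, 2, 3].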
I'm trying to build code which finds the eigenvectors of a matrix given its eigenvalues. I need to write the code myself, so built-in functions are not an option here.
I already wrote simple code to calculate the eigenvalues of a given matrix, and I use these values to calculate the eigenvectors. The problem is that when I solve the homogeneous system (A − λI)v = 0, where λ is an eigenvalue of A and I is the identity matrix, the code returns an empty set, whereas the analytical solution is x = t, where t is a free parameter, and y = 0. The code I have is this:
import numpy as np
import sympy as sym
from sympy.solvers.solveset import linsolve

x, y = sym.symbols('x y')

A_1 = sym.Matrix([[0, 1], [0, 1]])
system = A, b = A_1[:, 0], A_1[:, -1]
linsolve(system, x, y)
This returns an empty set, as I said before. When I print b, however, I get the vector (1, 1), and I'm not sure why Python is returning this. I should emphasize that I'm only looking for non-trivial solutions here, as I don't want an eigenvector of zeros.
I don't think this statement does what you think it does:
system = A, b = A_1[:,0], A_1[:,-1]
That will be parsed like this:
A, b = A_1[:,0], A_1[:,-1]
system = A, b
That first statement is a tuple assignment. A will be assigned A_1[:,0], which is [0,0], and b will be assigned the value of A_1[:,-1], which is [1,1]. system is then assigned a tuple with both values.
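For completeness, here is a minimal sketch (using the matrix from the question and its eigenvalue λ = 0) of how the homogeneous system could be passed to linsolve, i.e. the full coefficient matrix (A − λI) together with a zero right-hand side; nullspace() is shown as a cross-check, since it returns basis eigenvectors directly:

import sympy as sym
from sympy.solvers.solveset import linsolve

x, y = sym.symbols('x y')
A_1 = sym.Matrix([[0, 1], [0, 1]])
lam = 0  # one eigenvalue of A_1

# Coefficient matrix of (A - lam*I) v = 0, with a zero right-hand side.
M = A_1 - lam * sym.eye(2)
print(linsolve((M, sym.zeros(2, 1)), x, y))   # {(x, 0)}: x is free, y = 0
print(M.nullspace())                          # [Matrix([[1], [0]])]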
I want to make my own specialized module in Python for scientific work, and a crucial step is to design my functions. For example, I would like to build a Freundlich adsorption isotherm function, output = K*(c^n), with K and n as constants and c, the concentration of a compound, as the variable.
def Freundlich(K, c, n):
    ads_Freundlich = K * c ** n
    return ads_Freundlich
However, with this code I can only pass K, c, and n as single numbers. I would like to know how I can run a function by giving the constant(s) as numbers and the variable(s) as lists (or pandas Series, etc.). In the end, I want the function to return a list. Thanks!
For vanilla Python you have to use something like this:
def Freundlich(K, c_list, n):
    return [K * c ** n for c in c_list]
If you pass a list to the function you wrote, it will not be automatically vectorized as you seem to expect; it will throw an error instead. However, as @juanpa says, such automatic vectorization is provided by the Python library NumPy.
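For reference, here is a minimal sketch of the NumPy version (the concentration values below are made up for illustration): if c is passed as a NumPy array, the arithmetic is applied elementwise and an array comes back.

import numpy as np

def Freundlich(K, c, n):
    # With c as a NumPy array, ** and * are applied elementwise,
    # so the result is an array of the same length as c.
    return K * c ** n

c = np.array([0.1, 0.5, 1.0, 2.0])   # example concentrations
print(Freundlich(2.0, c, 0.7))        # array of four adsorption values
# A plain Python list works too if converted first, e.g. np.asarray(c_list).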
I'd like to figure out how to code the following pseudo-code:
# Base-case
u_0(x) = x^3
for i in [0,5):
u_(i+1)(x) = u_(i)(x)^2
So that in the end I can call u_5(x), for example.
The difficulty I'm having with accomplishing the above is finding a way to index Python functions by i so that I can iteratively define each function.
I tried using recursion with two functions in place of indexing, but I get "maximum recursion depth exceeded".
Here is a minimal working example:
import math
import sympy as sym

a, b = sym.symbols('x y')

def f1(x, y):
    return sym.sin(x) + sym.cos(y)*sym.tan(x*y)

for i in range(0, 5):
    def f2(x, y):
        return sym.diff(f1(x, y), x) + sym.cos(sym.diff(f1(x, y), y, y))
    def f1(x, y):
        return f2(x, y)

print(f2(a, b))
Yes, the general idea would be to "index" the results in order to avoid recalculating them. The simplest way to achieve that is to "memoize", meaning telling a function to remember the result for values it has already calculated.
If f(i+1) is based on f(i) where i is a natural number, that can be especially effective.
In Python 3, doing it for a one-variable function is surprisingly simple, with a decorator:
import functools

@functools.lru_cache(maxsize=None)
def f(x):
    ...  # your computation here
    return ...  # the result for x
To learn more about this, you can consult "What is memoization and how can I use it in Python?". (If you are using Python 2.7, there is also a way to do it with a prepackaged decorator.)
Your specific case (if my understanding of your pseudo-code is correct) relies on a two-variable function, where i is an integer and x is a symbol (i.e. not supposed to be resolved here). So you would need to memoize along i.
To avoid blowing the stack when you ask directly for the value at i = 5 (not sure why it overflows, but no doubt there is more recursion than meets the eye), use a for loop to calculate the values over the range from 0 to 5 (in that order: 0, 1, 2, ...).
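A minimal sketch of what this could look like for the recurrence in your pseudo-code (u_0(x) = x^3, u_(i+1)(x) = u_i(x)^2), memoizing along the integer index i while x stays symbolic:

import functools
import sympy as sym

x = sym.symbols('x')

@functools.lru_cache(maxsize=None)
def u(i):
    # u(0) is the base case; u(i) is built from the cached u(i - 1).
    if i == 0:
        return x**3
    return u(i - 1)**2

# Fill the cache bottom-up so no deep recursion is ever needed.
for i in range(6):
    u(i)

print(u(5))  # x**96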
I hope this helps.
The answer is actually pretty simple:
Pseudocode:
u_0(x) = x^3
for i in [0,5):
u_(i+1)(x) = u_(i)(x)^2
Actual code:
import sympy as sym

u = [None] * 6  # A list of 6 placeholders, i.e., u[0], u[1], ..., u[5]
x = sym.symbols('x')

u[0] = lambda x: x**3
for i in range(0, 5):
    # The i=i in the lambda's arguments is necessary in Python to capture the
    # current value of i (late binding); for more about this, see this question.
    u[i+1] = lambda x, i=i: (u[i](x))**2

# Now the functions are stored in the list u. However, to call them (e.g., evaluate
# them, plot them, print them, etc.) we "lambdify" them, i.e., replace sympy
# functions with numpy functions, which the following loop accomplishes:
ulambdified = [None] * 6
for i in range(0, 6):
    ulambdified[i] = sym.lambdify(x, u[i](x), "numpy")

for i in range(0, 6):
    print(ulambdified[i](x))
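As a usage note, the lambdified functions can also be evaluated at numeric points or NumPy arrays (the value 2.0 and the small array below are just illustrations):

import numpy as np

for i in range(6):
    print(ulambdified[i](2.0))   # 8.0, 64.0, 4096.0, ...
print(ulambdified[0](np.array([1.0, 2.0, 3.0])))   # array([ 1.,  8., 27.])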
I am having trouble creating a function with two variables and three parameters. I want to perform a definite (numerical) integral over one of the variables (say t), and have it spit out an array F1(x;a,b,c), i.e. an array with a value associated with each entry in x, with scalar parameters a, b, and c. Ultimately I will need to fit the parameters (a,b,c) to data using leastsq, which I have done before using simpler functions.
Code looks like this:
from scipy import integrate
import numpy as np

def H1(t, x, a, b, c):  # integrand
    return (a function of the above, with parameters a, b, c, the dummy variable t to be integrated from 0 to inf, and x)

def F1(x, a, b, c):  # integrates H1 over 0 < t < inf
    # x is going to be an element of the array x_data
    f_int1 = integrate.quad(H1, 0., np.inf, args=(x, a, b, c))
    return f_int1
Now, for example, if I try to use F1 as a function:
F1(x_data,70.,.05,.1) #where x_data is an array of real numbers, between 0 and 500
I get the message:
quadpack.error: Supplied function does not return a valid float
I am hoping it will spit out an array: F1 evaluated at all the entries in x_data. If I just use a single scalar value for the first input to F1, e.g.:
F1(x_data[4],70.,.05,.1)
It spits out two numbers, which are the value of the integral at that point and the estimated error. This looks like part of what I want, but I need it to work when passing an array through. So: it works for a single scalar value, but I need it to accept an array (and therefore return an array).
I am guessing the problem arises when I try to pass an array through the function as an argument, though I am not sure of a better way to do this. I think I have to figure out a way to do it as a function, since I will be using leastsq in the next few lines of code. (I know how to use leastsq, I think!)
Anyone have any ideas on how to get around this?
scipy.integrate.quad does not accept array arguments or array-valued integrands: it computes one scalar integral at a time. Your best bet is to loop over the components of x (possibly with the syntactic sugar of numpy.vectorize).
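A minimal sketch of that approach, with a made-up integrand H1 standing in for the one in the question (its exact form isn't shown there); the integral has a closed form here, which makes the output easy to check:

import numpy as np
from scipy import integrate

def H1(t, x, a, b, c):
    # Hypothetical integrand, used only for illustration; its integral over
    # t from 0 to inf is a / (b + c * x).
    return a * np.exp(-(b + c * x) * t)

def F1_scalar(x, a, b, c):
    # quad returns (value, estimated error); keep only the value.
    val, _ = integrate.quad(H1, 0.0, np.inf, args=(x, a, b, c))
    return val

# Vectorize over x so that an array of x values yields an array of integrals,
# which is the shape a leastsq model/residual function needs.
F1 = np.vectorize(F1_scalar)

x_data = np.linspace(0.0, 500.0, 5)
print(F1(x_data, 70.0, 0.05, 0.1))   # approx. 70 / (0.05 + 0.1 * x_data)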
This is what my code looks like when simplified:
# This function returns some value depending on the (integer) index
# with which it is called.
def funct(index):
    value = some_process[index]  # placeholder for the actual computation/lookup
    # Return the value for this index.
    return value
where the allowed indexes are stored in a list:
# List of indexes.
x = [0, 1, 2, 3, ..., 1000]
I need to find the x index that returns the minimum value of funct. I could just apply a brute-force approach and loop through the full x list, storing all values in a new list, and then simply find the minimum with np.argmin():
import numpy as np

list_of_values = []
for indx in x:
    f_x = funct(indx)
    list_of_values.append(f_x)
# np.argmin returns the position (index) of the smallest value.
min_value = np.argmin(list_of_values)
I've tried this and it works, but it becomes quite expensive when x has too many elements. I'm looking for a way to optimize this process.
I know that scipy.optimize has some optimization functions to find a global minimum like anneal and basin-hopping but I've failed to correctly apply them to my code.
Can these optimization tools be used when I can only call a function with an integer (the index) or do they require the function to be continuous?
The Python builtin min accepts a key function:
min_idx = min(x, key=funct)
min_val = funct(min_idx)
This gives you an O(n) solution implemented about as well as you're going to get in Python.
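A quick toy illustration of the idea (the some_process values are made up so that the minimum is known in advance):

# Pretend the expensive function is just a lookup into precomputed values;
# the minimum is clearly at index 400.
some_process = [(i - 400) ** 2 for i in range(1001)]

def funct(index):
    return some_process[index]

x = list(range(1001))

min_idx = min(x, key=funct)
min_val = funct(min_idx)
print(min_idx, min_val)   # 400 0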