How to give an arbitrary initial condition to odeint in Python

I'm trying to solve a first-order linear differential equation in one variable, and am currently using the odeint function from scipy.integrate. However, the initial condition it takes, $y_0$, is evaluated at the initial boundary of the domain, $x_0$, while what I have is the value of $y$ at some arbitrary point $x$ inside the domain.
Suggestions on similar questions were to use solve_bvp, which doesn't quite solve my problem either.
How do I go about this?

Numerical integrators always march in only one direction from the initial point. To get a two-sided solution you have to call the integrator twice, once forward and once backward, for instance:
# integrate backward from x0 down to a (odeint accepts a decreasing time grid)
ta = np.linspace(x0, a, Na+1)
ya = odeint(f, y0, ta)
# integrate forward from x0 up to b
tb = np.linspace(x0, b, Nb+1)
yb = odeint(f, y0, tb)
You can keep these two parts separate for further use (e.g. plotting), or join them into a single array each:
t = np.concatenate([ta[::-1], tb[1:]])   # reverse the backward part, drop the duplicated x0 point
y = np.concatenate([ya[::-1], yb[1:]])
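Putting it together, here is a minimal runnable sketch of the two-sided approach, assuming a toy right-hand side f (dy/dx = -y) and made-up values for x0, y0, a and b:
import numpy as np
from scipy.integrate import odeint

def f(y, x):
    # toy right-hand side: dy/dx = -y
    return -y

a, b = 0.0, 5.0        # domain [a, b]
x0, y0 = 2.0, 1.0      # known value y(x0) = y0 at an interior point
Na, Nb = 40, 60

ta = np.linspace(x0, a, Na + 1)   # decreasing grid: backward part
ya = odeint(f, y0, ta)
tb = np.linspace(x0, b, Nb + 1)   # increasing grid: forward part
yb = odeint(f, y0, tb)

t = np.concatenate([ta[::-1], tb[1:]])
y = np.concatenate([ya[::-1], yb[1:]])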

Related

Constraint on parameters in lmfit

I am trying to fit 3 peaks using lmfit with a skewed-Voigt profile (this is not that important for my question). I want to set a constraint on the peak centers of the form:
from lmfit.models import SkewedVoigtModel

peak1 = SkewedVoigtModel(prefix='sv1_')
pars = peak1.make_params()
pars['sv1_center'].set(x)          # x is the common shift I want to fit
peak2 = SkewedVoigtModel(prefix='sv2_')
pars.update(peak2.make_params())
pars['sv2_center'].set(1000 + x)
peak3 = SkewedVoigtModel(prefix='sv3_')
pars.update(peak3.make_params())
pars['sv3_center'].set(2000 + x)
Basically I want them to be 1000 apart from each other, but I need to fit for the actual shift, x. I know that I can force some parameters to be equal using pars['sv2_center'].set(expr='sv1_center'), but what I would need is pars['sv2_center'].set(expr='sv1_center'+1000) (which doesn't work just like that). How can I achieve what I need? Thank you!
Just do:
pars['sv2_center'].set(expr='sv1_center+1000')
pars['sv3_center'].set(expr='sv1_center+2000')
The constraint expression is a Python expression that will be evaluated every time the constrained parameter needs to get its value.
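For context, here is a minimal sketch of how the constrained parameters might sit in a composite fit; the data arrays x_data and y_data and the initial guess for sv1_center are placeholders, not part of the original question:
from lmfit.models import SkewedVoigtModel

peak1 = SkewedVoigtModel(prefix='sv1_')
peak2 = SkewedVoigtModel(prefix='sv2_')
peak3 = SkewedVoigtModel(prefix='sv3_')
model = peak1 + peak2 + peak3

pars = model.make_params()
pars['sv1_center'].set(value=500)               # initial guess for the common shift
pars['sv2_center'].set(expr='sv1_center+1000')  # re-evaluated at every iteration
pars['sv3_center'].set(expr='sv1_center+2000')

result = model.fit(y_data, pars, x=x_data)      # x_data, y_data: your measured data
print(result.fit_report())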

How to create a Series of matrices in Python (with pandas and Gurobi)

I'm doing a linear optimization in Gurobi and trying to store my decision variables in a Series of matrices, using this code:
schedule = pd.Series(index=Weekdays)
for day in Weekdays:
    schedule[day] = m.addVars(Blocks, Departments, vtype=GRB.BINARY)
But it keeps throwing an error "cannot set using a list-like indexer with a different length than the value." How do I get around this to make a list of matrices?
If anyone comes across this, I figured out that the addVars method lets you pass all three index sets directly and returns a dict-like object keyed by index tuples. Therefore, you can simplify by writing:
schedule = m.addVars(Weekdays, Blocks, Departments, vtype=GRB.BINARY)
To reference an individual variable, all you need to do is write:
schedule[weekday, block, department]
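For illustration, a minimal sketch with made-up index sets (Weekdays, Blocks, Departments and the example constraint are only placeholders to show the tuple-based indexing):
import gurobipy as gp
from gurobipy import GRB

Weekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri']
Blocks = range(4)
Departments = ['A', 'B', 'C']

m = gp.Model('scheduling')
schedule = m.addVars(Weekdays, Blocks, Departments, vtype=GRB.BINARY, name='schedule')

# individual variables are referenced by their index tuple
x = schedule['Mon', 0, 'A']

# e.g. at most one department per (weekday, block) slot
m.addConstrs(
    (schedule.sum(day, block, '*') <= 1 for day in Weekdays for block in Blocks),
    name='one_dept_per_slot')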

Converting from R to Python, trying to understand a line

I have a fairly simple question. I have been converting some statistical analysis code from R to Python. Up until now, I have been doing just fine, but I have gotten stuck on this particular line:
nlsfit <- nls(N~pnorm(m, mean=mean, sd=sd),data=data4fit,start=list(mean=mu, sd=sig), control=list(maxiter=100,warnOnly = TRUE))
Essentially, the program is calculating the non-linear least-squares fit for a set of data via the "nls" command. In the original text the tilde looks like an "eñe" (ñ); I'm not sure if that is significant.
As I understand it, the equivalent of pnorm in Python is norm.cdf from scipy.stats. What I want to know is: what does the "tilde/eñe" do before the pnorm function is invoked? "m" is a predefined variable, while "mean" and "sd" are not.
I also found some code essentially reproducing nls in Python: nls Python code. However, given the date of the post (2013), I was wondering if there are any more recent equivalents, preferably written in Python 3.
Any advice is appreciated, thanks!
As you can see from ?nls, the first argument to nls is formula:
formula: a nonlinear model formula including variables and parameters.
Will be coerced to a formula if necessary
Now, if you do ?formula, you can read this:
The models fit by, e.g., the lm and glm functions are specified in a
compact symbolic form. The ~ operator is basic in the formation of
such models. An expression of the form y ~ model is interpreted as a
specification that the response y is modelled by a linear predictor
specified symbolically by model
Therefore, in your nls call the ~ joins the response/dependent/regressand variable on the left with the regressors/explanatory variables on the right of your nonlinear least-squares model.
Best!
This minimizes
sum((N - pnorm(m, mean=mean, sd=sd))^2)
using the starting values for mean and sd specified in start. It will perform a maximum of 100 iterations and, because warnOnly = TRUE is set, it will return with a warning instead of signalling an error if it stops before converging.
The first argument to nls is an R formula which specifies the regression where the left hand side of the tilde (N) is the dependent variable and the right side is the function of the parameters (mean, sd) and data (m) used to predict it.
Note that formula objects do not have a fixed meaning in R; rather, each function can interpret them in any way it likes. For example, formula objects used by nls are interpreted differently than formula objects used by lm. In nls the formula y ~ a + b * x would be used to specify a linear regression, but in lm the same regression would be expressed as y ~ x.
See ?pnorm, ?nls, ?nls.control and ?formula.
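For the Python side of the question, here is one possible sketch of the same fit using scipy.optimize.curve_fit; the arrays m_data and N_data and the starting guesses mu and sig stand in for the contents of data4fit:
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def model(m, mean, sd):
    # Python counterpart of pnorm(m, mean=mean, sd=sd)
    return norm.cdf(m, loc=mean, scale=sd)

# m_data, N_data, mu and sig are placeholders for your data and starting values
popt, pcov = curve_fit(model, m_data, N_data, p0=[mu, sig])
mean_fit, sd_fit = popt
curve_fit minimizes the same sum of squared residuals that nls does, so the fitted mean and sd should be directly comparable.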

Cubic Spline function in scipy.interpolate returns a numpy.ndarray

I need to use CubicSpline to interpolate between points. This is my code:
cs = CubicSpline(aTime, aControl)
u = cs(t)  # u is an ndarray of one element
I cannot convert u to a float: neither uu = float(u) nor uu = float(u[0]) works inside the function.
I can convert u to a float in the shell with float(u). This shouldn't work, because I have not provided an index, and yet I get an error if I use u[0].
I have read something about np.squeeze. I tried it but it didn't help.
I added a print("u=", u) statement after u = cs(t). The result was
u= [ 1.88006889e+09 5.39398193e-01 5.39398193e-01]
How can this be? I expect 1 value. The second and third numbers look about right.
I found the problem. A programming error, of course, but the error messages I got were very misleading. I was calling the interpolation function with 3 values, so it returned three values. Why I couldn't get just one of them afterwards is still a mystery, but now that I call the interpolation with just one value I get one float as expected. Overall this still didn't help, as the interpolate1d function is too slow, so I wrote my own cubic interpolation function that is MUCH faster.
Again, a programming error and poor error messages were the problem.
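For anyone hitting the same symptom, here is a small self-contained sketch (toy data, made-up names) showing why the length of the output matches the length of the input passed to the spline:
import numpy as np
from scipy.interpolate import CubicSpline

aTime = np.array([0.0, 1.0, 2.0, 3.0])
aControl = np.array([0.0, 0.5, 0.4, 0.9])
cs = CubicSpline(aTime, aControl)

u = cs(1.5)               # scalar input -> 0-d array, float(u) works
print(float(u))

u3 = cs([0.5, 1.5, 2.5])  # three inputs -> array of three values
print(u3)                 # float(u3) fails: more than one element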

Minimizing an array and value in Python

I have a vector of floats (coming from an operation on an array) and a float value (which is actually an element of the array, but that's unimportant), and I need to find the smallest float out of them all.
I'd love to be able to find the minimum between them in one line in a 'Pythony' way.
MinVec = N[i,:] + N[:,j]
Answer = min(min(MinVec),N[i,j])
Clearly I'm performing two minimisation calls, and I'd love to be able to replace this with one call. Perhaps I could eliminate the vector MinVec as well.
As an aside, this is for a short program in Dynamic Programming.
TIA.
EDIT: My apologies, I didn't specify I was using numpy. The variable N is an array.
You can append the value, then minimize. I'm not sure what the relative time considerations of the two approaches are, though - I wouldn't necessarily assume this is faster:
Answer = min(np.append(MinVec, N[i, j]))
This is the same idea as the answer above but without using numpy. Note that list.append returns None, so build the extended list instead of calling append inside min:
Answer = min([*MinVec, N[i, j]])
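For completeness, a small self-contained comparison of the three variants on toy data; the matrix N and the indices i, j are made up:
import numpy as np

N = np.array([[0.0, 3.0, 5.0],
              [2.0, 0.0, 4.0],
              [6.0, 1.0, 0.0]])
i, j = 0, 2

MinVec = N[i, :] + N[:, j]
answer_two_calls = min(min(MinVec), N[i, j])       # original two-call version
answer_numpy = min(np.append(MinVec, N[i, j]))     # one min over an extended array
answer_plain = min([*MinVec, N[i, j]])             # same idea without np.append

assert answer_two_calls == answer_numpy == answer_plain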
