To solve the traveling salesman problem with simulated annealing, we start with an initial solution: a random permutation of the cities, which is the order in which the salesman is supposed to visit them. We then move to a "neighboring solution" by swapping two cities, and there are further details around when to accept the neighboring solution (the acceptance rule and the cooling schedule), etc.
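For concreteness, the from-scratch loop I have in mind would look roughly like this (a toy sketch with random 2D cities, a simple geometric cooling schedule and a Metropolis acceptance test; the helpers and parameters are just illustrative):

import math
import random

def tour_length(order, cities):
    # total length of the closed tour visiting the cities in this order
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def anneal_tsp(cities, n_iter=50000, t_start=10.0, t_end=1e-3):
    order = list(range(len(cities)))
    random.shuffle(order)                                   # initial solution: random permutation
    best = tour_length(order, cities)
    for k in range(n_iter):
        temp = t_start * (t_end / t_start) ** (k / n_iter)  # geometric cooling schedule
        i, j = random.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]             # neighbor: swap two cities
        cand = tour_length(order, cities)
        if cand < best or random.random() < math.exp((best - cand) / temp):
            best = cand                                     # accept the move (sometimes even if worse)
        else:
            order[i], order[j] = order[j], order[i]         # reject: undo the swap
    return order, best

cities = [(random.random(), random.random()) for _ in range(30)]
print(anneal_tsp(cities)[1])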
I could implement all of this from scratch, along those lines. But I wanted to see if I could use the built-in SciPy annealing solver. Looking at the documentation, it appears the old anneal method has been deprecated, and the method that is supposed to have replaced it is basinhopping. But going through the documentation and source code, these routines seem geared towards optimizing a function of an array whose entries may take any float values (with local and global optima), and that is what all the examples in the documentation and code comments cover. I can't see how I would use any of these built-in routines to solve the famous traveling salesman problem itself, since the array there is a permutation array: if you just perturb its values by some floating-point numbers, you won't get a valid solution.
So, the conclusion seems to be that those standard routines are inapplicable to a combinatorial optimization problem like the traveling salesman? Or am I missing something?
For my coursework I've been told to write an algorithm in Python to solve the Euler-Lagrange equations with Dirichlet boundary conditions.
For reasons I don't understand, we have so far only studied finite difference methods (very briefly) in lectures. I was wondering if I should start with that algorithm, or would you recommend another method that is not too complicated but would be more efficient?
Thanks a lot!
Shooting methods, when they work, are fast and accurate. There are some problems for which the BVP is well-posed but the IVP used in the shooting method is not (e.g. the IVP has infinite solutions for some values of z).

That's chapter 13, page 13 of your lecture notes.
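If you do go with the finite-difference route mentioned in the question, a minimal sketch for the model problem u''(x) = f(x) with Dirichlet data u(a) = ua, u(b) = ub could look like this (my own toy example using standard second-order central differences, not taken from your notes):

import numpy as np

def solve_bvp_fd(f, a, b, ua, ub, n=100):
    # Solve u''(x) = f(x), u(a) = ua, u(b) = ub on n+1 equally spaced nodes.
    x = np.linspace(a, b, n + 1)
    h = (b - a) / n
    # Central differences give a tridiagonal system for the interior nodes:
    # (u[i-1] - 2*u[i] + u[i+1]) / h**2 = f(x[i])
    A = (np.diag(-2.0 * np.ones(n - 1))
         + np.diag(np.ones(n - 2), 1)
         + np.diag(np.ones(n - 2), -1)) / h**2
    rhs = np.asarray(f(x[1:-1]), dtype=float)
    rhs[0] -= ua / h**2                      # move the known boundary values
    rhs[-1] -= ub / h**2                     # to the right-hand side
    u = np.empty(n + 1)
    u[0], u[-1] = ua, ub
    u[1:-1] = np.linalg.solve(A, rhs)
    return x, u

# Test: u'' = -pi^2 sin(pi x), u(0) = u(1) = 0 has exact solution u = sin(pi x).
x, u = solve_bvp_fd(lambda x: -np.pi**2 * np.sin(np.pi * x), 0.0, 1.0, 0.0, 0.0)
print(np.max(np.abs(u - np.sin(np.pi * x))))   # small discretization error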
Hello people on the internet!
I have two 3D equations that I want to solve simultaneously. They are of the form F(x,y,z) = C and G(x,y,z) = 0. The solutions of these equations are supposed to describe curves (maybe even areas in some regions, I am not sure), and I want to obtain a discrete set of numerical solutions that "sample" these curves. I searched for a while, but the solution methods I stumbled upon only aim to find a single solution.
I thought about using a grid on 3D space and just checking the equations at each grid point, although that forces me to loosen the conditions a bit (accept approximate equality). But in the case (or in the regions) where the solution is a curve, the accepted points should resemble a curve after all.
For better reference, my functions are of the form:
with random parameters c_i, d_i, k_i, phi_i.
For tips, I would prefer native Python, but I am open to any possible solution. Any ideas appreciated! :)
You're going to want to start by sampling those functions on a 3D grid containing the portion of the solution set you are interested in.
Once you've identified which regions of the grid may contain potential solutions, you'll then use an iterative method to minimize the function (F(x) - C)^2 + (G(x))^2.
The key here is that you run the iterative algorithm for each grid region you identified as "interesting", each time initializing the method with values lying inside that region of interest.
Note: Sorry for the poor notation.
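To make that concrete, here is a rough sketch of the two-stage approach with made-up stand-ins for F and G (a sphere and a wavy surface) and scipy.optimize.minimize for the local refinement; the grid bounds and thresholds are arbitrary choices you would tune:

import numpy as np
from scipy.optimize import minimize

C = 1.0
F = lambda x, y, z: x**2 + y**2 + z**2           # stand-in for F(x,y,z) = C
G = lambda x, y, z: z - 0.3 * np.sin(3.0 * x)    # stand-in for G(x,y,z) = 0

def residual(p):
    x, y, z = p
    return (F(x, y, z) - C)**2 + G(x, y, z)**2

# 1) Coarse grid scan to flag the "interesting" regions.
grid = np.linspace(-1.5, 1.5, 25)
X, Y, Z = np.meshgrid(grid, grid, grid, indexing='ij')
coarse = (F(X, Y, Z) - C)**2 + G(X, Y, Z)**2
seeds = np.argwhere(coarse < 0.05)

# 2) Local minimization started from each flagged grid point.
points = []
for i, j, k in seeds:
    res = minimize(residual, x0=[grid[i], grid[j], grid[k]], method='Nelder-Mead')
    if res.fun < 1e-6:                           # keep only near-exact solutions
        points.append(res.x)

points = np.array(points)                        # discrete sample of the solution curve(s)
print(points.shape)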
I'm trying to use the well-known multilateration algorithm to pinpoint a radiation emission source, given a set of arrival times at various detectors. I have the necessary data, but I'm still having trouble implementing this calculation; I am relatively new to Python.
I know that, if I were to do this by hand, I would use matrices and carry out elementary row operations in order to find my 3 unknowns (x,y,z), but I'm not sure how to code this. Is there a way to have Python implement ERO, or is there a better way to carry out my computation?
Depending on your needs, you could try:
NumPy if you're interested in numerical solutions. As far as I remember, it can solve linear equations; I don't know how it deals with nonlinear systems. (A tiny example follows after this list.)
SymPy for symbolic math. It solves linear equations symbolically, according to their main page.
The two above are "generic" math packages. I doubt you will (easily) find any dedicated (and maintained) library for your specific need. There was already a question on that topic here: Multilateration of GPS Coordinates
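As a tiny illustration of the NumPy route, here is how you would solve a 3x3 linear system for (x, y, z) without doing the row operations by hand (made-up coefficients, not a real multilateration setup):

import numpy as np

# A @ [x, y, z] = b with arbitrary example coefficients
A = np.array([[ 2.0, -1.0,  0.5],
              [ 1.0,  3.0, -1.0],
              [-0.5,  2.0,  4.0]])
b = np.array([1.0, 2.0, 3.0])

xyz = np.linalg.solve(A, b)   # performs the elimination (row operations) internally
print(xyz)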
I'm looking for a good library that will integrate stiff ODEs in Python. The issue is, scipy's odeint gives me good solutions sometimes, but the slightest change in the initial conditions causes it to fall down and give up. The same problem is solved quite happily by MATLAB's stiff solvers (ode15s and ode23s), but I can't use it (even from Python, because none of the Python bindings for the MATLAB C API implement callbacks, and I need to pass a function to the ODE solver). I'm trying PyGSL, but it's horrendously complex. Any suggestions would be greatly appreciated.
EDIT: The specific problem I'm having with PyGSL is choosing the right step function. There are several of them, but no direct analogues to ode15s or ode23s (a BDF formula and a modified Rosenbrock method, if that makes sense). So what is a good step function to choose for a stiff system? I have to solve this system for a really long time to ensure that it reaches steady state, and the GSL solvers either choose a minuscule time step or one that's too large.
If you can solve your problem with MATLAB's ode15s, you should be able to solve it with the vode solver of scipy. To simulate ode15s, I use the following settings:

import scipy.integrate

ode15s = scipy.integrate.ode(f)
ode15s.set_integrator('vode', method='bdf', order=5, nsteps=3000)  # BDF supports order <= 5
ode15s.set_initial_value(u0, t0)
and then you can happily solve your problem with ode15s.integrate(t_final). It should work pretty well on a stiff problem.
(See also Link)
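For completeness, here is a self-contained version of that setup on a stand-in stiff problem (Robertson's chemical kinetics, my own choice rather than the asker's system):

from scipy.integrate import ode

def f(t, y):
    # Robertson kinetics: a classic stiff test problem
    y1, y2, y3 = y
    return [-0.04 * y1 + 1.0e4 * y2 * y3,
             0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2**2,
             3.0e7 * y2**2]

ode15s = ode(f)
ode15s.set_integrator('vode', method='bdf', order=5, nsteps=3000)
ode15s.set_initial_value([1.0, 0.0, 0.0], 0.0)

# March out to a long final time in a few chunks and check progress.
for t_out in [1.0, 1.0e2, 1.0e4]:
    y = ode15s.integrate(t_out)
    print(t_out, y, ode15s.successful())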
Python can call C. The industry standard is LSODE in ODEPACK. It is public-domain. You can download the C version. These solvers are extremely tricky, so it's best to use some well-tested code.
Added: Be sure you really have a stiff system, i.e. that the rates (eigenvalues) differ by more than 2 or 3 orders of magnitude. Also, if the system is stiff but you are only looking for a steady-state solution, these solvers give you the option of solving some of the equations algebraically. Otherwise, a good Runge-Kutta solver like DVERK will be a good, and much simpler, solution.
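One rough way to check that is to look at the eigenvalues of a finite-difference Jacobian of your right-hand side; the helper below is my own sketch (with Robertson's kinetics as a stand-in system, using the f(y, t) argument order that odeint expects):

import numpy as np

def stiffness_ratio(f, y, t=0.0, eps=1e-6):
    # Ratio of the largest to smallest nonzero |Re(eigenvalue)| of a
    # finite-difference Jacobian of dy/dt = f(y, t) evaluated at state y.
    y = np.asarray(y, dtype=float)
    f0 = np.asarray(f(y, t))
    J = np.empty((y.size, y.size))
    for j in range(y.size):
        dy = np.zeros_like(y)
        dy[j] = eps * max(1.0, abs(y[j]))
        J[:, j] = (np.asarray(f(y + dy, t)) - f0) / dy[j]
    re = np.abs(np.linalg.eigvals(J).real)
    re = re[re > 1e-9 * re.max()]            # drop (numerically) zero eigenvalues
    return re.max() / re.min()

# Robertson kinetics as a stand-in: the rates are several orders of magnitude apart.
frob = lambda y, t: np.array([-0.04 * y[0] + 1e4 * y[1] * y[2],
                               0.04 * y[0] - 1e4 * y[1] * y[2] - 3e7 * y[1]**2,
                               3e7 * y[1]**2])
print(stiffness_ratio(frob, [1.0, 1e-3, 1e-3]))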
Added here because it would not fit in a comment: This is from the DLSODE header doc:
C T :INOUT Value of the independent variable. On return it
C will be the current value of t (normally TOUT).
C
C TOUT :IN Next point where output is desired (.NE. T).
Also, yes, Michaelis-Menten kinetics is nonlinear. The Aitken acceleration works with it, though. (If you want a short explanation: first consider the simple case where Y is a scalar. Run the system to get 3 Y(t) points, fit an exponential curve through them (simple algebra), then set Y to the asymptote and repeat. Now generalize to Y being a vector: assume the 3 points lie in a plane; it's OK if they don't.) Besides, unless you have a forcing function (like a constant IV drip), the MM elimination will decay away and the system will approach linearity. Hope that helps.
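If it helps, a minimal sketch of that extrapolation step for a vector Y, with a scalar sanity check (my own toy example; fitting a + b*exp(-c*t) through three equally spaced points and taking the asymptote is just Aitken's delta-squared applied componentwise):

import numpy as np

def aitken_steady_state(y0, y1, y2):
    # Estimate the asymptote from three equally spaced state vectors,
    # assuming each component behaves like a + b*exp(-c*t).
    y0, y1, y2 = (np.asarray(y, dtype=float) for y in (y0, y1, y2))
    denom = y2 - 2.0 * y1 + y0
    safe = np.where(np.abs(denom) > 1e-12, denom, 1.0)
    accel = y2 - (y2 - y1)**2 / safe
    # Components whose denominator vanishes have already converged; keep y2 there.
    return np.where(np.abs(denom) > 1e-12, accel, y2)

# Scalar sanity check: y(t) = 3 + 2*exp(-0.5*t) sampled at t = 0, 1, 2.
y0, y1, y2 = (np.array([3.0 + 2.0 * np.exp(-0.5 * t)]) for t in (0.0, 1.0, 2.0))
print(aitken_steady_state(y0, y1, y2))   # recovers the asymptote, ~[3.0]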
PyDSTool wraps the Radau solver, which is an excellent implicit stiff integrator. This has more setup than odeint, but a lot less than PyGSL. The greatest benefit is that your RHS function is specified as a string (typically, although you can build a system using symbolic manipulations) and is converted into C, so there are no slow python callbacks and the whole thing will be very fast.
I am currently studying a bit about ODEs and their solvers, so your question is very interesting to me...
From what I have heard and read, for stiff problems the right way to go is to choose an implicit method as the step function (correct me if I am wrong, I am still learning the mysteries of ODE solvers). I cannot cite where I read this, because I don't remember, but here is a thread from gsl-help where a similar question was asked.
So, in short, it seems the bsimp method is worth a shot, although it requires a Jacobian function. If you cannot calculate the Jacobian, I would try rk2imp, rk4imp, or any of the gear methods.