I am working on a model describing a protein. The molecule may be found in one of several states, and transitions between states are described by a matrix of transition rates. The model can be solved as a set of ordinary differential equations, giving nice numeric results (adjacency matrix -> transition rate matrix -> Kolmogorov forward equations -> numeric integration). However, to capture the stochastic nature of the process, I would like to use a Monte Carlo method. As far as I know, the Gillespie algorithm is made for this. Are there any packages you would recommend for this kind of task?
You could just try using the random module and perform the Monte Carlo simulation yourself, using random floats for the values in your initial adjacency matrix. Just be sure to select numbers within the accepted range of what your matrix can take.
To iterate over the simulation, use a for or while loop, depending on how many simulations you want to run.
Besides working with the random module as proposed by @PeterWang and creating the matrices yourself, you can alternatively use the numpy module, which also provides random sampling. This way you can create random numbers in any matrix dimensions you prefer. Especially regarding your further tasks, a module that works with matrices such as numpy might be a good solution.
For more details, see the reference documentation of numpy.random.
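If you do roll your own, the Gillespie loop itself is short. Here is a minimal sketch with NumPy, assuming a hypothetical 3-state rate matrix Q where the off-diagonal entry Q[i, j] is the rate of the i -> j transition (the values are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state transition-rate matrix (off-diagonal: rate of i -> j).
Q = np.array([[0.0, 1.0, 0.5],
              [0.2, 0.0, 0.3],
              [0.4, 0.1, 0.0]])

def gillespie(Q, state, t_end):
    """Simulate one trajectory of the Markov jump process up to time t_end."""
    t, times, states = 0.0, [0.0], [state]
    while True:
        rates = Q[state].copy()
        rates[state] = 0.0
        total = rates.sum()
        if total == 0.0:
            break                                 # absorbing state
        t += rng.exponential(1.0 / total)         # waiting time ~ Exp(total rate)
        if t > t_end:
            break
        state = int(rng.choice(len(rates), p=rates / total))  # pick next state
        times.append(t)
        states.append(state)
    return np.array(times), np.array(states)

times, states = gillespie(Q, state=0, t_end=10.0)
```

Averaging many such trajectories should reproduce the occupation probabilities you get from the Kolmogorov forward equations.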
To solve the traveling salesman problem with simulated annealing, we need to start with an initial solution, a random permutation of the cities. This is the order in which the salesman is supposed to visit them. Then, we switch to a "neighboring solution" by swapping two cities. And then there are details around when to switch to the neighboring solution, etc.
I could implement all of this from scratch. But I wanted to see if I could use the built-in SciPy annealing solver. Looking at the documentation, it seems the old anneal method has been deprecated. The new method that is supposed to have replaced it is basinhopping. But looking through the documentation and source code, these seem geared towards optimizing a function of some array where any float values of that array are permissible (and then there are local and global optima). That's what all the examples in the documentation and code comments cover. I can't see how I would use any of these built-in routines to solve the famous traveling salesman problem itself, since the array there is a permutation array. If you just perturb its values by some floating point numbers, you won't get a valid solution.
So, the conclusion seems to be that those standard routines are inapplicable to a combinatorial optimization problem like the traveling salesman? Or am I missing something?
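For reference, the swap-neighbour scheme described above is only a few lines from scratch. A minimal sketch, with hypothetical city coordinates and an untuned temperature schedule:

```python
import math
import random

random.seed(0)

# Hypothetical city coordinates for illustration.
cities = [(0, 0), (1, 5), (2, 3), (5, 2), (6, 6), (3, 7)]

def tour_length(tour):
    """Total length of the closed tour visiting the cities in order."""
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def anneal(n_steps=20000, T0=10.0, cooling=0.9995):
    tour = list(range(len(cities)))
    random.shuffle(tour)                          # random initial permutation
    cur_len = tour_length(tour)
    best, best_len, T = tour[:], cur_len, T0
    for _ in range(n_steps):
        i, j = random.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]       # neighbour: swap two cities
        new_len = tour_length(tour)
        if new_len < cur_len or random.random() < math.exp((cur_len - new_len) / T):
            cur_len = new_len                     # accept the move
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        else:
            tour[i], tour[j] = tour[j], tour[i]   # reject: swap back
        T *= cooling                              # geometric cooling
    return best, best_len

best, best_len = anneal()
```

The state stays a valid permutation throughout, which is exactly what the float-perturbation steps in basinhopping cannot guarantee.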
I'm working on a project where I've been tasked with modeling the band structure of various potentials. I've worked through all of the math on paper and have had success hard-coding some results, but I'd like to be able to produce results from scratch, providing only the eigenvector, Hamiltonian, and potential being explored. I'm working in Python, by the way.
So far I've been able to represent the components of my Hamiltonian that depend only on (-h_bar^2/2m)(d^2/dx^2). I'm using NumPy to represent my bras as arrays and then using NumPy's inner product function to evaluate the components of my Hamiltonian that depend only on the second derivative and its respective constants. Issues arise when I try to evaluate the components of the Hamiltonian that are potential-dependent (the full Hamiltonian operator being (-h_bar^2/2m)(d^2/dx^2) + V(x)).
I'm not quite sure how to complete this part. I've tried evaluating the inner product in its integral form using SciPy, but I keep running into issues when trying to evaluate these integrals that have complex components (SciPy doesn't like that).
To get a more solid idea of what I'm doing, here's the PDF I'm working out of: https://era.library.ualberta.ca/items/4835014b-1dbc-48e1-bbec-c175c87c9b03/view/000b36c4-55ba-471b-aaf8-d9d1c5f5ee06/Pavelich_Robert_L_201609_MSc.pdf (pages 19-22 as numbered in the actual document)
Any help would be much appreciated, I'm fairly new to Quantum Mechanics as a whole and even greener when it comes to modeling it. Thanks a lot!
Another way to compute the Hamiltonian with your given potential is the finite difference method: discretize space, build the Hamiltonian as a matrix, and diagonalize it to obtain the eigenvalues and eigenstates, and therefore the spectrum.
This code might be helpful https://github.com/mholtrop/QMPython/blob/master/Finite%20Well%20Bound%20States.ipynb
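A minimal sketch of that approach with NumPy, assuming hbar = m = 1 and using a harmonic potential as a test case (the grid sizes are arbitrary):

```python
import numpy as np

# Discretize x, build H = -1/2 d^2/dx^2 + V(x) as a matrix, diagonalize.
N, L = 500, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                       # example: harmonic potential

# Second-derivative operator via central differences (tridiagonal matrix).
main = np.full(N, -2.0)
off = np.ones(N - 1)
D2 = (np.diag(main) + np.diag(off, 1) + np.diag(off, -1)) / dx**2

H = -0.5 * D2 + np.diag(V)           # kinetic + potential term
E, psi = np.linalg.eigh(H)           # eigenvalues sorted ascending

print(E[:3])                         # ≈ [0.5, 1.5, 2.5] for this potential
```

The potential enters only as a diagonal matrix, so swapping in a different V(x) (a periodic one for band structure, say) requires no other changes; the columns of psi are the corresponding eigenstates.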
I'm a total novice to this and I need some help implementing a solver in Python to optimize the following.
I want to minimize (1/4b)[(Π1-s)'K(Π1-s)+(Π'1-t)'K(Π'1-t)] - tr(KΠ) with respect to Π.
Π is an nxn matrix and 1 denotes the all-ones vector. Also, s and t are vectors of dimension n and b is a fixed scalar. So the only quantity that varies is Π, and for that matrix we have the constraint that all the entries sum to 1.
How would I do this? Or if this isn't the correct place, where should I ask this?
First you need to express your equation as Python code. Raw Python is not that great at pure number crunching, so you should consider a library like NumPy to do the heavy lifting for you.
Once you do that, you can try using one of the optimizers that come with SciPy (scipy.optimize).
If the domain of Π is weird (non-continuous, for example), try the HyperOpt package.
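As a concrete sketch with scipy.optimize: flatten Π into a vector, enforce the sum-to-1 constraint as an equality, and let a constrained solver do the rest. The K, s, t below are small random placeholders, and I additionally assume the entries of Π lie in [0, 1] (as in coupling matrices) so that the minimum is bounded; drop the bounds if that assumption doesn't hold for your problem.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, b = 4, 1.0                                    # assumed problem size
K = rng.standard_normal((n, n))
K = K @ K.T                                      # example positive semidefinite K
s, t = rng.random(n), rng.random(n)
one = np.ones(n)

def objective(p):
    Pi = p.reshape(n, n)
    r = Pi @ one - s                             # row sums minus s
    c = Pi.T @ one - t                           # column sums minus t
    return (r @ K @ r + c @ K @ c) / (4 * b) - np.trace(K @ Pi)

cons = {"type": "eq", "fun": lambda p: p.sum() - 1.0}   # entries sum to 1
x0 = np.full(n * n, 1.0 / n**2)                  # feasible starting point
res = minimize(objective, x0, method="SLSQP",
               bounds=[(0.0, 1.0)] * (n * n), constraints=cons)
Pi_opt = res.x.reshape(n, n)
```

SLSQP handles the mix of bounds and an equality constraint directly; for larger n you would want to supply the analytic gradient as well.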
I'm trying to call upon the famous multilateration algorithm in order to pinpoint a radiation emission source given a set of arrival times for various detectors. I have the necessary data, but I'm still having trouble implementing this calculation; I am relatively new with Python.
I know that, if I were to do this by hand, I would use matrices and carry out elementary row operations in order to find my 3 unknowns (x,y,z), but I'm not sure how to code this. Is there a way to have Python implement ERO, or is there a better way to carry out my computation?
Depending on your needs, you could try:
NumPy if you're interested in numerical solutions. As far as I remember, it can solve linear equations; I don't know how it deals with non-linear systems.
SymPy for symbolic math. It solves linear equations symbolically ... according to their main page.
The two above are "generic" math packages. I doubt you will (easily) find any dedicated (and maintained) library for your specific need. There was already a question on that topic here: Multilateration of GPS Coordinates
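For the row-reduction part specifically, NumPy already does it for you: once you've set up the linear system for the unknowns (x, y, z), numpy.linalg.solve gives the solution directly. A sketch with a made-up 3x3 system standing in for your linearized equations:

```python
import numpy as np

# Hypothetical linear system A @ p = rhs for the position p = (x, y, z).
# np.linalg.solve performs the elimination (row operations) internally.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0,  2.0],
              [3.0, 1.0,  4.0]])
rhs = np.array([1.0, 13.0, 17.0])

p = np.linalg.solve(A, rhs)      # → array([1., 2., 3.])
```

Note that TDOA multilateration is nonlinear in general; if your system doesn't reduce to a linear one, scipy.optimize.least_squares on the residuals is the usual alternative.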
I want to simulate a propagating wave with absorption and reflection on some bodies in three dimensional space. I want to do it with python. Should I use numpy? Are there some special libraries I should use?
How can I simulate the wave? Can I use the wave equation? But what if I have a reflection?
Is there a better method? Should I do it with vectors? But when the ray diverges, the intensity gets lower. Difficult.
Thanks in advance.
If you do any computationally intensive numerical simulation in Python, you should definitely use NumPy.
The most general algorithm to simulate an electromagnetic wave in arbitrarily-shaped materials is the finite-difference time domain method (FDTD). It solves the wave equation, one time-step at a time, on a 3-D lattice. It is quite complicated to program yourself, though, and you are probably better off using a dedicated package such as Meep.
There are books on how to write your own FDTD simulations: here's one, here's a document with some code for 1-D FDTD and explanations on more than 1 dimension, and Googling "writing FDTD" will find you more of the same.
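To give a flavor of the method, the core of a 1-D FDTD update is just a few lines. A minimal free-space sketch in normalized units (grid sizes arbitrary, simple reflecting boundaries, Courant number 0.5):

```python
import numpy as np

# 1-D FDTD: E and H fields on staggered grids, leapfrogged in time.
nx, nt = 200, 300
ez = np.zeros(nx)                 # electric field
hy = np.zeros(nx)                 # magnetic field

for t in range(nt):
    hy[:-1] += 0.5 * (ez[1:] - ez[:-1])             # update H from curl of E
    ez[1:] += 0.5 * (hy[1:] - hy[:-1])              # update E from curl of H
    ez[nx // 2] += np.exp(-((t - 30) / 10.0) ** 2)  # soft Gaussian source
```

A real simulation adds material coefficients (for absorption and reflection at your bodies) and absorbing boundary layers (PML) so waves don't bounce off the grid edges; that is exactly the bookkeeping a package like Meep handles for you.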
You could also approach the problem by assuming all your waves are plane waves, then you could use vectors and the Fresnel equations. Or if you want to model Gaussian beams being transmitted and reflected from flat or curved surfaces, you could use the ABCD matrix formalism (also known as ray transfer matrices). This takes into account the divergence of beams.
If you are solving 3D custom PDEs, I would recommend at least a look at FiPy. It'll save you the trouble of building a lot of your matrix conditioners and solvers from scratch. It uses numpy and/or trilinos. Here are some examples.
I recommend you use my project GarlicSim as the framework in which you build the simulation. You will still need to write your algorithm yourself, probably in Numpy, but GarlicSim may save you a bunch of boilerplate and allow you to explore your simulation results in a flexible way, similar to version control systems.
Don't use Python. I've tried using it for computationally expensive things and it just wasn't made for that.
If you need to simulate a wave in a Python program, write the necessary code in C/C++ and expose it to Python.
Here's a link to the C API: http://docs.python.org/c-api/
Be warned, it isn't the easiest API in the world :)