I'm working on a project where I've been tasked with modeling the band structure of various potentials. I've worked through all of the math on paper and have had success hard-coding some results, but I'd like to produce results from scratch given only the eigenvectors, the Hamiltonian, and the potential being explored. I'm working in Python, by the way.
So far I've been able to represent the components of my Hamiltonian that depend only on the kinetic term, (-hbar^2/2m)(d^2/dx^2). I'm using NumPy to represent my bras as arrays and then using NumPy's inner product function to evaluate those kinetic components and their respective constants. Issues arise when I try to evaluate the components of the Hamiltonian that depend on the potential (the full Hamiltonian operator being (-hbar^2/2m)(d^2/dx^2) + V(x)).
I'm not quite sure how to complete this part. I've tried evaluating the inner product in its integral form using SciPy, but I keep running into trouble because the integrands have complex components, which scipy.integrate.quad does not accept directly.
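From what I've read, the usual workaround is to split the integral into its real and imaginary parts, since quad only handles real-valued integrands. A minimal sketch, with a made-up plane-wave matrix element standing in for my actual integrand:

import numpy as np
from scipy.integrate import quad

# quad only integrates real-valued functions, so integrate the real and
# imaginary parts separately and recombine them afterwards.
def complex_quad(f, a, b, **kwargs):
    re, _ = quad(lambda x: np.real(f(x)), a, b, **kwargs)
    im, _ = quad(lambda x: np.imag(f(x)), a, b, **kwargs)
    return re + 1j * im

# Made-up example: <k'|V|k> over one period for V(x) = cos(2*pi*x).
k, kp = 1.0, 2.0
f = lambda x: np.exp(-1j * kp * x) * np.cos(2 * np.pi * x) * np.exp(1j * k * x)
print(complex_quad(f, 0.0, 1.0))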
To get a more solid idea of what I'm doing, here's the PDF I'm working out of: https://era.library.ualberta.ca/items/4835014b-1dbc-48e1-bbec-c175c87c9b03/view/000b36c4-55ba-471b-aaf8-d9d1c5f5ee06/Pavelich_Robert_L_201609_MSc.pdf (pages 19-22 as numbered in the document itself)
Any help would be much appreciated; I'm fairly new to quantum mechanics as a whole and even greener when it comes to modeling it. Thanks a lot!
Another way to compute the Hamiltonian for a given potential is the finite difference method: discretize the domain, build the Hamiltonian as a matrix, and diagonalize it to obtain the eigenvalues and eigenstates, and therefore the spectrum.
This code might be helpful: https://github.com/mholtrop/QMPython/blob/master/Finite%20Well%20Bound%20States.ipynb
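A minimal sketch of that finite difference idea, assuming hbar = m = 1 and a harmonic potential purely for illustration: the kinetic term becomes a tridiagonal matrix on a spatial grid, the potential sits on the diagonal, and numpy.linalg.eigh returns the spectrum.

import numpy as np

N, L = 500, 10.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]

V = 0.5 * x**2                                  # example potential (harmonic)

# -(1/2) d^2/dx^2 as a tridiagonal finite-difference matrix, V on the diagonal
main = 1.0 / dx**2 + V
off = np.full(N - 1, -0.5 / dx**2)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

E, psi = np.linalg.eigh(H)                      # eigenvalues and eigenstates
print(E[:4])                                    # should be close to 0.5, 1.5, 2.5, 3.5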
Related
Is there any place with a brief description of each of the algorithms available for the method parameter of the minimize function in the lmfit package? Neither there nor in the SciPy documentation is there an explanation of the details of each algorithm. Right now I know I can choose between them, but I don't know which one to choose...
My current problem
I am using lmfit in Python to minimize a function. I want to minimize the function within a finite and predefined range where the function has the following characteristics:
It is almost zero everywhere, which makes it numerically indistinguishable from zero almost everywhere.
It has a very, very sharp peak at some point.
The peak can be anywhere within the region.
This causes many minimization algorithms to fail. Right now I am using a combination of the brute force method (method="brute") to find a point close to the peak, and then feeding this value to the Nelder-Mead algorithm (method="nelder") to finally perform the minimization. It works approximately 50% of the time; the other 50% of the time it fails to find the minimum. I wonder if there are better algorithms for cases like this one...
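For reference, my two-stage approach looks roughly like this (a sketch with a hypothetical objective whose sharp peak sits at x0 = 3.7; my real function is different):

import numpy as np
import lmfit

# Hypothetical objective: numerically zero everywhere except a very
# narrow inverted Gaussian peak at x0 = 3.7 (width ~1e-3).
def objective(params):
    x = params['x'].value
    return -np.exp(-((x - 3.7) / 1e-3) ** 2)

params = lmfit.Parameters()
params.add('x', value=5.0, min=0.0, max=10.0, brute_step=5e-4)

coarse = lmfit.minimize(objective, params, method='brute')   # grid search
fine = lmfit.minimize(objective, coarse.params, method='nelder')
print(fine.params['x'].value)

If the brute grid is coarser than the peak width, every sampled value is numerically zero and the Nelder-Mead stage starts from an essentially arbitrary point, which is my guess as to why this only works about half the time.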
I think it is a fair point that docs for lmfit (such as https://lmfit.github.io/lmfit-py/fitting.html#fit-methods-table) and scipy.optimize (such as https://docs.scipy.org/doc/scipy/reference/tutorial/optimize.html#optimization-scipy-optimize) do not give detailed mathematical descriptions of the algorithms.
Then again, most of the docs for scipy, numpy, and related libraries describe how to use the methods, but do not describe in much mathematical detail how the algorithms work.
In fairness, the different optimization algorithms share many features and the differences between them can get pretty technical. All of these methods try to minimize some metric (often called "cost" or "residual") by changing the values of parameters for the supplied function.
It sort of takes a textbook (or at least a Wikipedia page) to establish the concepts and mathematical terms used for these methods, and then a paper (or at least a Wikipedia page) to describe how each method differs from the others. So, I think the basic answer would be to look up the different methods.
I would like to calculate the total variation distance (TVD) between two continuous probability distributions. I would like to point out that while there are two relevant questions (see here and here), they both deal with discrete distributions.
For those not familiar with TVD,
Informally, this is the largest possible difference between the probabilities that the two probability distributions can assign to the same event.
as it is described on the respective Wikipedia page. In the case of continuous distributions, the TVD is equal to half the integral of the absolute difference between the two densities (since I cannot add math notation, see this for a proof and for the notation).
So far, I haven't been able to find a tool for this job in Python, and I would be interested in one if it exists. Also, while I have no experience with R, I understand it is commonly used for such tasks, so I would be interested in an R tool as well (the TVD calculation is the final step of my algorithm, so I guess it won't be hard to read some data from a file, do the calculation, and print a number even if I am completely new to R).
I would like to add that I am mainly interested in normal distributions, so a tool strictly for those is more than welcome.
If no such tools exist, then any help adapting answers from this question to use the builtin probability functions will be of great help as well.
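For concreteness, the kind of direct computation I have in mind is a sketch like the following, using SciPy's built-in normal density and numerical integration of the half-integral formula (the integration limits are a made-up "10 sigma" window):

import numpy as np
from scipy import stats
from scipy.integrate import quad

def tvd_normal(mu1, sigma1, mu2, sigma2):
    # TVD = 0.5 * integral of |p(x) - q(x)| dx
    p = stats.norm(mu1, sigma1).pdf
    q = stats.norm(mu2, sigma2).pdf
    lo = min(mu1 - 10 * sigma1, mu2 - 10 * sigma2)   # wide enough to cover both
    hi = max(mu1 + 10 * sigma1, mu2 + 10 * sigma2)
    # the absolute value has a kink, so allow more subintervals
    integral, _ = quad(lambda t: abs(p(t) - q(t)), lo, hi, limit=200)
    return 0.5 * integral

print(tvd_normal(0.0, 1.0, 1.0, 1.0))   # ~0.3829 for N(0,1) vs N(1,1)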
Thank you in advance.
I am preconditioning a matrix using spilu. However, to pass this preconditioner into cg (the built-in conjugate gradient method), it is necessary to use the LinearOperator function. Can someone explain the matvec parameter to me, and why I need to use it? Below is my current code:
import scipy.sparse.linalg as scla

Ainv = scla.spilu(A, drop_tol=1e-7)
Ainv = scla.LinearOperator(Ainv.shape, matvec=Ainv)
scla.cg(A, b, maxiter=maxIterations, M=Ainv)
However, this doesn't work, and I am given the error TypeError: 'SuperLU' object is not callable. I have played around and tried
Ainv=scla.LinearOperator(Ainv.shape,matvec=Ainv.solve)
instead. This seems to work, but I want to know why matvec needs Ainv.solve rather than just Ainv, and whether that is the right thing to feed to LinearOperator.
Thanks for your time
Without having much experience with this part of scipy, some comments:
According to the docs, you don't have to use a LinearOperator; the signature reads M : {sparse matrix, dense matrix, LinearOperator}, so you can use explicit matrices too!
The idea/advantage of the LinearOperator:
Many iterative methods (e.g. cg, gmres) do not need to know the individual entries of a matrix to solve a linear system A*x=b. Such solvers only require the computation of matrix-vector products (docs)
Depending on the task, sometimes even matrix-free approaches are available, which can be much more efficient
The working approach you presented is indeed the correct one (some other sources do it similarly, and some course materials do it like that)
The point of using solve() here rather than the inverse matrix is to avoid forming the inverse explicitly (which might be very costly)
A similar idea is very common in BFGS-based optimization algorithms, although wiki might not give much insight here
scipy has an extra LinearOperator for exactly this kind of not-forming-the-inverse-explicitly trick (although I think it's only used for statistics / finishing off some optimization; but I successfully built some LBFGS-based optimizers with this one)
Source: a scicomp.stackexchange discussion of this, without touching scipy
And because of that I would assume spilu is going for this too (returning an object with a solve method)
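Putting the pieces together, a minimal self-contained sketch of the working pattern (the sparse test system here is made up; only the spilu -> LinearOperator -> cg wiring matters):

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as scla

# Made-up SPD test system: 1D Laplacian stencil.
n = 1000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
b = np.ones(n)

ilu = scla.spilu(A, drop_tol=1e-7)                    # SuperLU object with a .solve method
M = scla.LinearOperator(A.shape, matvec=ilu.solve)    # M(v) ~= A^-1 @ v, inverse never formed
x, info = scla.cg(A, b, M=M, maxiter=200)
print(info)   # 0 means the solver converged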
Using a Microsoft Kinect, I am collecting depth data about an object. From these data, I create a "cloud" of points (a point cloud) which, when plotted, allows me to view the object that I scanned using the Kinect.
However, I would like to be able to collect multiple point clouds from different "views" and align them. More specifically, I would like to use an algorithm such as Iterative Closest Point (ICP) to do so, transforming each point in my point cloud by calculating the rotation and translation between each cloud that I collect and the previously-collected cloud.
However, while I understand the process behind ICP, I do not understand how I would implement it in 3D. Perhaps it is my lack of mathematical experience or my lack of experience with frameworks such as OpenCV, but I cannot find a solution. I would like to avoid libraries such as the Point Cloud Library which does this sort of thing for me, since I would like to do it myself.
Any and all suggestions are appreciated (if there is a solution that involves OpenCV/python that I can work on, that would be even better!)
I am currently struggling with ICP myself. Here is what I have gathered so far:
ICP consists of three steps:
Given two point clouds A and B, find pairs of points between A and B that probably represent the same point in space. Often this is done simply by matching each point with its closest neighbor in the other cloud, but you can use additional features such as color, texture or surface normal to improve the matching. Optionally you can then discard the worst matches.
Given this list of correspondence pairs, find the optimal transformation from A to B
Apply this transformation to all points in A
Repeat these three steps until you converge on an acceptable solution.
Step one is easy, although there are lots of ways to optimize its speed (it is the major performance bottleneck of ICP) and to improve its accuracy (it is the main source of errors). OpenCV can help you there with the FLANN library.
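If you want to stay in Python, a k-d tree gives you the same nearest-neighbor matching; a minimal sketch using scipy.spatial.cKDTree in place of FLANN:

import numpy as np
from scipy.spatial import cKDTree

def match_closest(A, B):
    # For each point in A (n x 3), find its nearest neighbor in B (m x 3).
    tree = cKDTree(B)
    dist, idx = tree.query(A)      # distances and indices of the matches
    return dist, B[idx]

Discarding pairs whose distance exceeds a threshold implements the optional outlier rejection mentioned above.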
I assume your troubles are with step two, finding the best transformation given a list of correspondences.
One common approach works with Singular Value Decomposition (SVD). Here is a rough sketch of the algorithm. Searching for ICP & SVD will give a lot of further references.
Take the list of corresponding points A1..An and B1..Bn from step 1
Calculate the centroid Ca of all points in A and the centroid Cb of all points in B
Calculate the 3x3 covariance matrix M:
M = (A1 - Ca)(B1 - Cb)^T + ... + (An - Ca)(Bn - Cb)^T
Use SVD to factor M into the 3x3 matrices U, S and V, with M = U * S * V^T
(OpenCV has a function to perform SVD)
Calculate R = V * U^T.
This is your desired optimal rotation matrix. (If det(R) = -1, you got a reflection rather than a rotation; flip the sign of the last column of V and recompute R.)
Calculate the optimal translation as Cb - R*Ca
The optimal transformation is the combination of R and this translation
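Translated into NumPy, step two might look like this (an untested minimal sketch; the determinant check guards against the reflection case):

import numpy as np

def best_fit_transform(A, B):
    # A, B: (n, 3) arrays of corresponding points from step 1.
    Ca, Cb = A.mean(axis=0), B.mean(axis=0)        # centroids
    M = (A - Ca).T @ (B - Cb)                      # 3x3 covariance matrix
    U, S, Vt = np.linalg.svd(M)                    # M = U * diag(S) * Vt
    R = Vt.T @ U.T                                 # optimal rotation
    if np.linalg.det(R) < 0:                       # reflection, not rotation
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = Cb - R @ Ca                                # optimal translation
    return R, t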
Please note that I have not yet implemented this algorithm myself, so I am only paraphrasing what I read.
A very good introduction to ICP, including accelerated variants, can be found in Rusinkiewicz's old paper here.
A new ICP algorithm is now in OpenCV contrib (the surface matching module). It also benefits from variants of various types (including Rusinkiewicz's and more):
http://docs.opencv.org/3.0-beta/modules/surface_matching/doc/surface_matching.html
For MATLAB implementation:
http://www.mathworks.co.jp/matlabcentral/fileexchange/47152-icp-registration-using-efficient-variants-and-multi-resolution-scheme/content/icp_mod_point_plane_pyr.m
@tdirdal:
Ok then I may not be looking at the correct code.
I am talking about this package link:
The code starts by constructing a transformation matrix and then loads a *.ply file, which contains a mesh (faces and vertices). The rest of the code depends on the mesh that has been loaded.
I have a very simple problem, and I would appreciate it if you could let me know how I can solve it using the ICP method. I have the following two point clouds. P2 is a subset of P39, and I would like to find P2 in P39. Please let me know how I can use your MATLAB package to solve this problem.
P2:
11.2706 -5.3392 1.1903
13.6194 -4.8500 2.6222
8.8809 -3.8407 1.1903
10.7704 -2.1800 2.6222
8.5570 -1.0346 1.1903
13.1808 -2.5632 1.1903
P39:
-1.9977 -4.1434 -1.6750
-4.3982 -3.5743 -3.1069
-6.8065 -3.0071 -1.6751
-9.2169 -2.4386 -3.1070
-11.6285 -1.8696 -1.6751
-16.4505 -0.7305 -1.6751
-14.0401 -1.3001 -3.1070
-18.8577 -0.1608 -3.1070
-25.9398 -0.8647 -3.1070
-30.1972 -4.6857 -3.1069
-28.2349 -2.5200 -3.1069
-29.5843 -0.2496 -1.6751
-31.1688 -2.0974 -3.1070
-21.2580 0.4093 -1.6751
-23.6450 0.9838 -3.1070
-26.0636 1.5073 -1.6751
-28.4357 1.9258 -3.1070
I'm trying to call upon the famous multilateration algorithm in order to pinpoint a radiation emission source, given a set of arrival times for various detectors. I have the necessary data, but I'm still having trouble implementing the calculation; I am relatively new to Python.
I know that, if I were to do this by hand, I would use matrices and carry out elementary row operations (ERO) in order to find my 3 unknowns (x, y, z), but I'm not sure how to code this. Is there a way to have Python implement ERO, or is there a better way to carry out my computation?
Depending on your needs, you could try:
NumPy, if you're interested in numerical solutions. As far as I remember, it can solve linear equations. I don't know how it deals with non-linear problems.
SymPy for symbolic math. It solves linear equations symbolically... according to their main page.
The two above are "generic" math packages. I doubt you will (easily) find any dedicated (and maintained) library for your specific need. There was already a question on that topic here: Multilateration of GPS Coordinates
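As a minimal illustration of both, assuming a made-up 3x3 linear system in the unknowns (x, y, z):

import numpy as np
import sympy as sp

# Hypothetical coefficients; np.linalg.solve performs the elimination
# that elementary row operations would do by hand.
A = np.array([[2.0, 1.0, -1.0],
              [1.0, 3.0, 2.0],
              [1.0, 0.0, 1.0]])
b = np.array([8.0, 13.0, 3.0])
print(np.linalg.solve(A, b))           # -> [x, y, z]

# The same system solved symbolically with SymPy:
x, y, z = sp.symbols('x y z')
eqs = [2*x + y - z - 8, x + 3*y + 2*z - 13, x + z - 3]
print(sp.solve(eqs, [x, y, z]))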