I am performing confirmatory factor analysis (CFA) in Python using the factor_analyzer module.
I have searched high and low for a way to generate model diagnostics such as the root mean square error of approximation (RMSEA), the chi-square statistic, the CFI, and the Tucker-Lewis index. I'm not particularly mathematically inclined and relatively new to Python, but I have been able to muddle through for the most part.
I understand that the factor_analyzer module produces a number of objects that would, in theory, allow me to carry out additional calculations, and I have found this document, which provides most of the formulas I need. However, I do not know what to take (or calculate) from the fitted model to get the diagnostics I need.
The CFA code is
model_dict = {"F1": factor_1,
"F2": factor_2}# I have made these lists previously
model_spec = ModelSpecificationParser.parse_model_specification_from_dict(df[influence_scale],
model_dict)
cfa = ConfirmatoryFactorAnalyzer(model_spec, disp=False)
cfa.fit(df[influence_scale].values)
cfa_loadings = pd.DataFrame(cfa.loadings_)
The code runs without errors and gives me clean loadings on each factor, as I would have expected; however, I'm really stuck on getting the additional statistics I need.
If anyone can help me out I'd really appreciate it.
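For context, here is a rough sketch of how those diagnostics could be computed from the fitted cfa object using the standard maximum-likelihood fit-function formulas. The degrees-of-freedom bookkeeping (which parameters factor_analyzer treats as free) is an assumption on my part, so the resulting numbers should be checked against a reference implementation such as lavaan before being relied on.
import numpy as np

X = df[influence_scale].values
n, p = X.shape

S = np.cov(X, rowvar=False, ddof=1)       # sample covariance matrix
Sigma = cfa.get_model_implied_cov()       # model-implied covariance matrix

def ml_discrepancy(sample_cov, implied_cov):
    # Maximum-likelihood fit function F_ML.
    k = sample_cov.shape[0]
    return (np.log(np.linalg.det(implied_cov)) - np.log(np.linalg.det(sample_cov))
            + np.trace(sample_cov @ np.linalg.inv(implied_cov)) - k)

# Chi-square for the hypothesised model.
chi2 = (n - 1) * ml_discrepancy(S, Sigma)

# Assumed free parameters: the non-zero loadings, one error variance per item,
# and one covariance per factor pair -- verify this count for your model.
n_factors = len(model_dict)
n_loadings = sum(len(items) for items in model_dict.values())
n_free = n_loadings + p + n_factors * (n_factors - 1) // 2
dof = p * (p + 1) // 2 - n_free

# Baseline (independence) model: implied covariance is just diag(S).
chi2_base = (n - 1) * ml_discrepancy(S, np.diag(np.diag(S)))
dof_base = p * (p - 1) // 2

rmsea = np.sqrt(max(chi2 - dof, 0) / (dof * (n - 1)))
cfi = 1 - max(chi2 - dof, 0) / max(chi2_base - dof_base, chi2 - dof, 1e-12)
tli = ((chi2_base / dof_base) - (chi2 / dof)) / ((chi2_base / dof_base) - 1)

print(f"chi2 = {chi2:.3f} on {dof} df, RMSEA = {rmsea:.4f}, CFI = {cfi:.4f}, TLI = {tli:.4f}")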
After taking a couple advanced statistics courses, I decided to code some functions/classes to just automate estimating parameters for different distributions via MLE. In Matlab, the below is something I easily coded once:
function [ params, max, confidence_interval ] = newroutine( fun, data, guesses )
    lh = @(x,data) -sum(log(fun(x,data)));  % Negative log-likelihood from the user-defined pdf `fun`.
    options = optimset('Display', 'off', 'MaxIter', 1000000, 'TolX', 10^-20, 'TolFun', 10^-20);
    [theta, max1] = fminunc(@(x) lh(x,data), guesses, options);
    params = theta;
    max = max1;
end
Here I just have to correctly specify the underlying pdf as fun, and with a bit more code I can calculate p-values, confidence intervals, etc.
With Python, however, all the sources I've found on automating MLE (for example, here and here) insist that the easiest way to do this is to delve into OOP using a subclass of statsmodels' GenericLikelihoodModel, which seems way too complicated for me. My reasoning is that, since the log-likelihood can be built automatically from the pdf (at least for the vast majority of distributions), and scipy.stats."random_dist".fit() already easily returns MLE estimates, it seems ridiculous to have to write ~30 lines of class code each time you have a new distribution to fit.
I realize that doing it the way the two links suggest lets you automatically tap into statsmodels' functions, but it honestly does not seem simpler than tapping into scipy yourself and writing much simpler functions.
Am I missing an easier way to perform basic MLE, or is there a real good reason for the way statsmodels does this?
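For reference, the scipy.stats route I mean looks roughly like this (the gamma-distributed data here is made up purely for illustration):
import numpy as np
from scipy import stats

# Simulated data standing in for a real sample.
data = np.random.default_rng(0).gamma(shape=2.0, scale=3.0, size=1000)

# MLE for a gamma distribution; floc=0 fixes the location parameter at zero.
a_hat, loc_hat, scale_hat = stats.gamma.fit(data, floc=0)
print(a_hat, scale_hat)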
I wrote the first post outlining the various methods, and I think it is fair to say that while I recommend the statsmodels approach, I did so to leverage the postestimation tools it provides and to get standard errors every time a model is estimated.
When using minimize, the Python equivalent of fminunc (as you outline in your example), I am often forced to use "Nelder-Mead" or some other gradient-free method to get convergence. Since I need standard errors for statistical inference, this entails an additional step using numdifftools to recover the Hessian. So in the end, the method you propose has its complications too (for my work). If all you care about is the maximum likelihood estimate and not inference, then the approach you outline is probably best, and you are correct that you don't need the machinery of statsmodels.
FYI: in a later post, I use your approach combined with autograd for significant speedups of big maximum likelihood models. I haven't successfully gotten this to work with statsmodels.
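To make the trade-off concrete, here is a rough sketch of the non-statsmodels route described above: a generic wrapper around scipy.optimize.minimize with a numerical Hessian from numdifftools for standard errors. The fit_mle name and the pdf(params, data) convention are illustrative, not part of any library.
import numpy as np
from scipy import optimize, stats
import numdifftools as nd

def fit_mle(pdf, data, guesses):
    # Negative log-likelihood built from the user-supplied pdf.
    nll = lambda params: -np.sum(np.log(pdf(params, data)))
    res = optimize.minimize(nll, guesses, method="Nelder-Mead")
    # Standard errors from the inverse of the numerically estimated Hessian.
    hessian = nd.Hessian(nll)(res.x)
    std_errors = np.sqrt(np.diag(np.linalg.inv(hessian)))
    return res.x, -res.fun, std_errors

# Example: normal data, estimating (mu, sigma), started from the sample moments.
data = np.random.default_rng(0).normal(loc=5.0, scale=2.0, size=1000)
pdf = lambda p, x: stats.norm.pdf(x, loc=p[0], scale=p[1])
params, loglik, se = fit_mle(pdf, data, guesses=[data.mean(), data.std()])
print(params, se)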
I would like to understand machine learning techniques better. I have read and watched a bunch of things on Python, sklearn, and supervised feed-forward nets, but I am still struggling to see how I can apply all this to my project and where to start. Maybe it is a little too ambitious at this stage.
I have the following algorithm, which generates nice patterns as binary inputs in a CSV file. The goal is to predict the next row.
The simplified logic of this algorithm is that the prediction of the next line (the top line being the most recent one) would be 0,0,1,1,1,0, and the one after that would become either 0,0,0,1,1,0 or go back to its previous step 0,1,1,1,0. However, as you can see, the model is slightly more complex and noisy, which is why I would like to introduce some machine learning here. I am aware that, to get a reliable prediction, I will need to introduce other relevant inputs later.
Would someone please help me get started and stand on my feet here?
I don't like throwing this out here without being able to provide a single piece of code, but I am slightly confused about where to start.
Should I pass each previous line (line-1) as an input vector, with the associated output being the top line? Should I build the array manually from my whole dataset? (A rough sketch of this framing appears after this question.)
I guess I have to use the sigmoid function, and Python seems the most common way to do this, but for the synapses (or weights) I understand I also need to provide a constant; should this be 1?
Finally, assuming I want this to run continuously, what would be required?
Please could you share readings or simpler exercises that could help me build up my knowledge of all this.
Many thanks.
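As a starting point for the framing asked about above (previous row as input, the row above it as a multi-label target), something like scikit-learn's MLPClassifier could work. The rows below are made up purely to mimic the sliding-block pattern described; they are not the real CSV data.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative rows only, most recent first.
rows = np.array([
    [0, 0, 0, 1, 1, 1],
    [0, 0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0, 0],
    [1, 1, 1, 0, 0, 0],
    [0, 1, 1, 1, 0, 0],
    [0, 0, 1, 1, 1, 0],
])

# Supervised framing: each older row is the input, the row just above it is the target.
X = rows[1:]    # previous line
y = rows[:-1]   # line to predict (one 0/1 label per column)

clf = MLPClassifier(hidden_layer_sizes=(12,), activation="logistic",
                    max_iter=5000, random_state=0)
clf.fit(X, y)

# Predict the row expected to follow the most recent one.
print(clf.predict(rows[0].reshape(1, -1)))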
I am working on Python 2.7. I want to create nomograms based on the data of various variables in order to predict one variable. I have been looking into, and have installed, the PyNomo package.
However, from the documentation here and here and the examples, it seems that nomograms can only be made when you have equation(s) relating the variables, not from the data itself. For example, the examples here show how to use equations to create nomograms. What I want is to create a nomogram from the data and use that to predict things. How do I do that? In other words, how do I make the nomograph take data as input rather than a function? Is it even possible?
Any input would be helpful. If PyNomo cannot do it, please suggest some other package (in any language). For example, I am trying the nomogram function from the rms package in R, but am not having luck figuring out how to use it properly. I have asked a separate question about that here.
The term "nomogram" has become somewhat confused of late as it now refers to two entirely different things.
A classic nomogram performs a full calculation: you mark two scales, draw a straight line across the marks, and read your answer from a third scale. This is the type of nomogram that PyNomo produces, and, as you correctly say, you need a formula. Producing a nomogram like this from data is therefore a two-step process: first fit an equation to your data (for example by regression), then feed that equation to PyNomo.
The other use of the term (very popular recently) is to refer to regression nomograms. These are graphical depictions of regression models (usually logistic regression models). For these, a group of parallel predictor scales is depicted with a common scale on the bottom; for each predictor you read the 'score' from that scale and add the scores up. These types of nomograms have become very popular in the last few years, and that's what the rms package will draw. I haven't used it, but my understanding is that it works directly from the data.
Hope this is of some use! :-)
I've already posted this on robotics.stackexchange, but I got no relevant answers.
I'm currently developing SLAM software on a robot, and I tried the scan matching algorithm to solve the odometry problem.
I read this article:
Metric-Based Iterative Closest Point Scan Matching for Sensor Displacement Estimation
I found it really well explained, and I strictly followed the formulas given in the article to implement the algorithm.
You can see my Python implementation here:
ScanMatching.py
The problem I have is that, during my tests, the right rotation was found, but the translation was totally wrong: the translation values are extremely high.
Do you guys have any idea what the problem in my code could be?
Otherwise, should I post my question on Mathematics Stack Exchange?
The ICP part should be correct, as I have tested it many times, but the least-squares minimization doesn't seem to give good results.
As you may have noticed, I used many bigfloat.BigFloat values, because sometimes the maximum float was not big enough to hold some values.
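Not the metric-based minimization from the paper, but as a sanity check for the least-squares step, the standard closed-form (SVD/Kabsch) solution for a 2-D rigid transform between already-matched points is short enough to compare against; if this recovers the translation from your correspondences, the bug is more likely in the minimization code than in the matching.
import numpy as np

def rigid_transform_2d(P, Q):
    # Least-squares R, t such that R @ P + t ~ Q, for 2xN arrays of matched points.
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    H = (P - cp) @ (Q - cq).T
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Quick synthetic check with a known rotation and translation.
rng = np.random.default_rng(0)
P = rng.random((2, 50))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([[2.0], [-1.0]])
Q = R_true @ P + t_true
R_est, t_est = rigid_transform_2d(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))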
I don't know if you have already solved this issue.
I didn't read the full article, but I noticed it is rather old.
IMHO (I'm not an expert here), I would try combining specific algorithms: feature detection and description to get a point cloud, a descriptor matcher to relate points, and bundle adjustment to get the roto-translation matrix.
I am going to try sba (http://users.ics.forth.gr/~lourakis/sba/) myself, or more specifically cvsba (http://www.uco.es/investiga/grupos/ava/node/39/), because I'm using OpenCV.
If you have enough CPU/GPU power, give the AKAZE feature detector and descriptor a chance.
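In case it helps, the AKAZE detect/describe/match step in OpenCV looks roughly like this; frame1.png and frame2.png are placeholder filenames for two grayscale frames.
import cv2

# Placeholder filenames; load two consecutive grayscale frames.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

akaze = cv2.AKAZE_create()
kp1, des1 = akaze.detectAndCompute(img1, None)
kp2, des2 = akaze.detectAndCompute(img2, None)

# AKAZE descriptors are binary, so use a Hamming-distance brute-force matcher.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "matches")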
I'm trying to optimize a 4-dimensional function with scipy. Everything works so far, except that I'm not satisfied with the quality of the solution. Right now I have ground-truth data, which I use to verify my code. What I get so far is:
End error: 1.52606896507e-05
End Gradient: [ -1.17291295e-05 2.60362493e-05 5.15347856e-06 -2.72388430e-05]
Ground Truth: [0.07999999..., 0.0178329..., 0.9372903878..., 1.7756283966...]
Reconstructed: [ 0.08375729 0.01226504 1.13730592 0.21389899]
The error itself looks good, but as the values are totally wrong I want to force the optimization algorithm (BFGS) to take more steps.
In the documentation I found the options 'gtol' and 'norm', and I tried setting both to pretty small values (like 0.0000001), but it did not seem to change anything.
Background:
The problem is that I am trying to demodulate waves, so I have sine and cosine terms and potentially many local (or global) minima. I use a brute-force search to find a good starting point, which helps a lot, but it currently seems that most of the work is done by that brute-force search, as the optimization often takes only one iteration step. So I'm trying to improve that part of the calculation somehow.
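For reference, this is how I understand the tolerances are passed to BFGS through minimize; the toy objective and gradient below just stand in for the real demodulation problem.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Toy stand-in: a wavy surface with many shallow minima.
    return np.sum(np.sin(3 * x) ** 2)

def gradient(x):
    return 6 * np.sin(3 * x) * np.cos(3 * x)

x0 = 0.1 * np.ones(4)  # e.g. the point found by a brute-force search
res = minimize(objective, x0, method="BFGS", jac=gradient,
               options={"gtol": 1e-12, "norm": np.inf, "maxiter": 10000})
print(res.nit, res.x)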
Many local minima plus hardly any improvement after the brute-force search: that sounds bad. It's hard to say anything very specific with the level of detail you provide in the question, so here are some rough ideas to try (basically, what I'd do if I suspected my minimizer was getting stuck):
try manually starting the minimizer from a bunch of different initial guesses.
try using a stochastic minimizer. You tagged the question scipy, so try basinhopping (a minimal sketch follows below)
if worst comes to worst, just throw random points in a loop, leave it to work over the lunch break (or overnight)
Also, waves, sines and cosines: it might be useful to think about whether you can reformulate your problem in Fourier space.
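Here is a minimal basinhopping sketch along the lines of the second suggestion, again with a toy 4-D objective standing in for the real one:
import numpy as np
from scipy.optimize import basinhopping

def objective(x):
    # Toy stand-in with many local minima and one global basin near x = 0.5.
    return np.sum(np.sin(3 * x) ** 2) + 0.01 * np.sum((x - 0.5) ** 2)

x0 = np.zeros(4)  # e.g. the point returned by a brute-force search
result = basinhopping(objective, x0, niter=200,
                      minimizer_kwargs={"method": "BFGS"})
print(result.x, result.fun)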
I found out that the gradient at the starting point is already very flat (values around 10^-5), so I tried scaling the gradient function I had already provided. This seemed to be pretty effective: I could force the algorithm to take many more steps, and my results are far better now.
They are still not perfect, but a complete discussion of this is outside the scope of this question, so I might start a new one where I describe the whole problem from the bottom up.
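For anyone hitting the same issue, "scaling the gradient" here just means multiplying the objective and its gradient by the same constant, so the gradient norm at the flat starting point no longer falls below the fixed gtol stopping threshold. A rough sketch with a toy objective:
import numpy as np
from scipy.optimize import minimize

SCALE = 1.0e4  # chosen by hand; scaling does not move the minimum, it only delays the gtol stop

def objective(x):
    return np.sum(np.sin(3 * x) ** 2)

def gradient(x):
    return 6 * np.sin(3 * x) * np.cos(3 * x)

x0 = 1e-3 * np.ones(4)  # a near-flat starting region
res = minimize(lambda x: SCALE * objective(x), x0, method="BFGS",
               jac=lambda x: SCALE * gradient(x))
print(res.nit, res.x)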