I wanted to apply one of the minimization methods within scipy.optimize.minimize to a function which may not always provide smooth derivatives. I've gotten comfortable with the Nelder-Mead implementation of the simplex method, but it does not appear to accept the bounds argument: (..., bounds=[xmin, xmax], ...). Reading this documentation, it seems only the L-BFGS-B, TNC and SLSQP methods accept bounds, and all three of those are based in some way on Newton's method and will either calculate a numerical derivative or accept one.
I don't know the exact term, but I'm looking for a 'simplex-like' or derivative-free method in scipy that accepts bounds but will also be forgiving of functions that do not provide a smooth derivative (one example being staircase-like behavior). For now, I'm doing 1D. Later I may add dimensions, but that's not critical right now.
I would give lmfit a try (http://cars9.uchicago.edu/software/python/lmfit/).
While not part of scipy itself, it is built on top of it and offers bounded minimization. I use it for curve fitting and parameter extraction. Nevertheless, I couldn't tell how it would perform on your specific function.
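To give an idea of the shape of the API, here is a minimal sketch of a bounded Nelder-Mead run in lmfit; the staircase-like objective is a made-up toy standing in for your real function:

import lmfit
import numpy as np

# Hypothetical non-smooth objective with staircase-like behavior;
# returning a scalar lets lmfit run a scalar minimizer directly.
def objective(params):
    x = params['x'].value
    return np.floor(x - 2.7) ** 2 + 0.1 * x

params = lmfit.Parameters()
params.add('x', value=5.0, min=0.0, max=10.0)  # bounds live on the parameter

# method='nelder' runs a Nelder-Mead simplex; lmfit enforces the bounds.
result = lmfit.minimize(objective, params, method='nelder')
print(result.params['x'].value)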
I have a few lists of movement tracking data, which look something like this.
I want to create a list of outputs where I mark these large spikes, essentially indicating that there is movement at that point.
I applied a rolling standard deviation on the data with a window size of two and got this result
Now I can see the spikes that mark the points of interest, but I am not sure how to detect them in code. Is there a statistical tool to measure these spikes that can be used to flag them?
There are several approaches that you can use for an anomaly detection task.
The choice depends on your data.
If you want to use a statistical approach, you can use some measures like z-score or IQR.
Here you can find a tutorial for these measures.
Here instead, you can find another tutorial for a statistical approach which uses mean and variance.
Last but not least, I suggest you also check how to use a control chart, because in some cases it is enough.
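As a starting point, here is a minimal sketch of both statistical measures, assuming `data` is the 1-D numpy array of rolling standard deviations you computed above:

import numpy as np

def flag_spikes_zscore(data, threshold=3.0):
    # Flag points more than `threshold` standard deviations from the mean.
    z = (data - data.mean()) / data.std()
    return np.abs(z) > threshold  # boolean mask: True where a spike occurs

def flag_spikes_iqr(data, k=1.5):
    # Flag points beyond k * IQR outside the interquartile range.
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    return (data < q1 - k * iqr) | (data > q3 + k * iqr)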
I recently got interested in soccer statistics. Right now I want to implement the famous Dixon-Coles Model in Python 3.5 (paper-link).
The basic problem is that the model described in the paper yields a likelihood function with numerous parameters, which needs to be maximized.
For example, the likelihood function for one Bundesliga season would involve 37 parameters. Of course, I minimize the corresponding negative log-likelihood function instead. I know that this log-likelihood function is strictly convex, so the optimization should not be too difficult. I also included the analytic gradient, but once the number of parameters exceeds ~10, the optimization methods from the SciPy package fail (scipy.optimize.minimize()).
My question:
Which other optimization techniques are out there, and which are best suited for optimization problems involving ~40 independent parameters?
Some hints to other methods would be great!
You may want to have a look at convex optimization packages like https://cvxopt.org/ or https://www.cvxpy.org/. Both are Python-based, hence easy to use!
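To show the flavor of the cvxpy API, here is a toy sketch with 40 variables; the quadratic objective is just a strictly convex stand-in, and the real Dixon-Coles negative log-likelihood would have to be expressed in cvxpy's disciplined convex programming (DCP) form:

import cvxpy as cp

theta = cp.Variable(40)
# Stand-in objective: replace with your DCP-compliant likelihood terms.
problem = cp.Problem(cp.Minimize(cp.sum_squares(theta - 0.5)))
problem.solve()
print(theta.value[:5])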
You can make use of metaheuristic algorithms, which work on both convex and non-convex spaces. Probably the most famous of them is the genetic algorithm. It is also easy to implement, and the concept is straightforward. The beautiful thing about the genetic algorithm is that you can adapt it to solve most optimization problems.
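If you want to stay inside SciPy, differential evolution is one such evolutionary metaheuristic that ships with it; a sketch on a toy 40-parameter convex objective (a stand-in for your negative log-likelihood):

import numpy as np
from scipy.optimize import differential_evolution

# Toy strictly convex stand-in for the negative log-likelihood.
def neg_log_likelihood(theta):
    return np.sum((theta - 0.5) ** 2)

bounds = [(-5.0, 5.0)] * 40  # one (min, max) pair per parameter
result = differential_evolution(neg_log_likelihood, bounds, seed=0)
print(result.fun, result.x[:5])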
Is it better to implement my own K-means algorithm in Python or use the pre-implemented K-means algorithm from Python libraries such as scikit-learn?
Before answering which is better, here is a quick reminder of the algorithm:
"Choose" the number of clusters K
Initiate your first centroids
For each point, find the closest centroid
according to a distance function D
When all points are attributed to a cluster, calculate the barycenter of the cluster which become its new centroid
Repeat step 3. and step 4. until convergence
As stressed previously, the algorithm depends on various parameters:
The number of clusters
Your initial centroid positions
A distance function to calculate distance between any point and centroid
A function to calculate the barycenter of each new cluster
A convergence metric
...
If none of the above is familiar to you and you want to understand the role of each parameter, I would recommend re-implementing it on low-dimensional data sets. Moreover, the implemented Python libraries might not match your specific requirements, even though they provide good tuning possibilities.
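As a concrete starting point for such a re-implementation, here is a minimal sketch of steps 1-5, assuming Euclidean distance and mean-based barycenters; `points` is an (n_samples, n_features) numpy array:

import numpy as np

def kmeans(points, k, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    # Step 2: initialize centroids by picking k distinct random points.
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iter):
        # Step 3: assign each point to its closest centroid.
        dists = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Step 4: recompute each centroid as the barycenter of its cluster,
        # keeping the old centroid if a cluster ends up empty.
        new_centroids = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        # Step 5: stop once the centroids no longer move.
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids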
If your goal is to use it quickly with a big-picture understanding, you can use an existing implementation; scikit-learn would be a good choice.
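For the quick route, the scikit-learn call is essentially a one-liner (shown here on made-up 2-D data):

import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(100, 2)  # toy 2-D data set
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(km.labels_[:10])
print(km.cluster_centers_)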
for example:
a=1500
b=[500,400,200]
One answer is:
ans=[1,2,1]
because 1*500 + 2*400 + 1*200 = 1500. I want to write a program using a genetic algorithm with the best evaluation function to solve this problem for this array, using the pyevolve Python evolutionary tool.
Assuming that the coefficients in the answer must be integers, what you're describing is a linear Diophantine equation. It's not a good fit for a genetic algorithm, as the solution space is neither continuous nor smooth. (That is, there is not always a possible input between any two other inputs, and the "correct" answer will not necessarily be anywhere near other nearly-correct inputs.)
(If the coefficients in the answer can be real numbers, finding a solution is trivial to the point that a genetic algorithm would be overkill.)
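To illustrate the point: for small instances like the example above, and assuming the coefficients must be non-negative integers, a plain brute-force search already finds every solution, which underlines how overpowered a genetic algorithm would be here:

from itertools import product

a = 1500
b = [500, 400, 200]

# Each coefficient can be at most a // value.
ranges = [range(a // v + 1) for v in b]
solutions = [c for c in product(*ranges)
             if sum(x * y for x, y in zip(c, b)) == a]
print(solutions)  # includes (1, 2, 1)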
How can I fit my data to an asymptotic power law curve or an exponential approach curve in R or Python?
My data essentially show that the y-axis value increases continuously, but the delta (increase) decreases as x increases.
Any help will be much appreciated.
Using Python, if you have numpy and scipy installed, you could use curve_fit from the scipy.optimize module. It takes a user-defined function and x- as well as y-values (x_values and y_values in the code), and returns the optimized parameters and the covariance of the parameters.
import numpy
import scipy.optimize

# Model function: parameters a and b follow the independent variable x.
def exponential(x, a, b):
    return a * numpy.exp(b * x)

# x_values and y_values are your data as one-dimensional numpy arrays.
fit_params, covariance = scipy.optimize.curve_fit(
    exponential, x_values, y_values, (1., 1.))
This answer assumes your data are one-dimensional numpy arrays; you can easily convert them if not.
The last argument contains starting values for the optimization. If you don't supply them, curve_fit falls back to all-ones starting values, which can make the fit fail to converge for poorly scaled data.
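For the "increasing but flattening" shape described in the question, a saturating or power-law model may fit better than a growing exponential. These are hypothetical model functions (not from the original answer) that you can pass to curve_fit in exactly the same way:

def exponential_approach(x, a, b):
    return a * (1.0 - numpy.exp(-b * x))  # approaches a as x grows

def power_law(x, a, b):
    return a * x**b  # concave and increasing for 0 < b < 1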