I am new to machine learning and Python. I am trying to understand when to use the functions in sklearn.linear_model (LinearRegression and LogisticRegression) and when to implement my own code for the same. Any suggestions or references would be highly appreciated.
regards
Souvik
I recommend using the functions provided by sklearn or another ML library (I like TensorFlow) as much as possible. It is very difficult to match the performance of these libraries: their heavy computation happens in low-level compiled code, whereas most users never take their own number crunching outside the Python interpreter.
Moreover, Python by itself is not very efficient with its data structures. A plain list, for example, is an array of pointers to boxed Python objects rather than a contiguous block of numbers. The ML libraries do their computations with NumPy to get much better performance.
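To make that concrete, here is a minimal sketch (my own toy data, not from your question) comparing sklearn's LinearRegression with a hand-rolled least-squares fit in NumPy; the coefficients agree, but the library version also takes care of the numerics and edge cases for you:
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = X @ np.array([1.5, -2.0, 0.5]) + 0.1 * rng.randn(100)

# the library route: compiled NumPy/SciPy routines under the hood
model = LinearRegression().fit(X, y)

# the "do it yourself" route: ordinary least squares on an augmented design matrix
Xb = np.column_stack([X, np.ones(len(X))])
coef_hand = np.linalg.lstsq(Xb, y, rcond=None)[0]

print(model.coef_, model.intercept_)
print(coef_hand)   # same slopes; the last entry is the intercept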
I've been running some large logistic regression models in SAS, which take 4+ hours to converge. Recently however I acquired access to a Hadoop cluster and can use Python to fit the same models much faster (something more like 10-15 minutes).
Problematically, I have some complete/quasi-complete separation of data points in my data, which results in a failure to converge. I was using the FIRTH option in SAS to produce robust parameter estimates despite that, but there seems to be no equivalent in Python, either in sklearn or statsmodels (I'm mostly using the latter).
Is there another way to get around this problem in Python?
AFAIK, there is no Firth penalization available in Python. Statsmodels has an open issue but nobody is working on it at the moment.
As an alternative, it would be possible to use a different kind of penalization, e.g. as available in sklearn or maybe statsmodels.
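For example, a minimal sketch of that route in sklearn, on toy perfectly separated data of my own; note that an L2 (ridge) penalty is not equivalent to Firth, it is just a practical way to keep the coefficients finite under separation:
import numpy as np
from sklearn.linear_model import LogisticRegression

# toy, perfectly separated data (placeholder, just to show the call)
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])

# smaller C means stronger penalization
clf = LogisticRegression(penalty='l2', C=1.0, solver='lbfgs')
clf.fit(X, y)
print(clf.coef_, clf.intercept_)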
The other option is to change the observed response variable. Firth can be implemented by augmenting the dataset. However, I don't know of any recipe or prototype for this in Python.
https://github.com/statsmodels/statsmodels/issues/3561
Statsmodels has ongoing work on penalization, but currently the emphasis is on feature/variable selection (elastic net, SCAD) and on quadratic penalization for generalized additive models (GAM), especially for splines.
Firth uses data dependent penalization which does not fit the generic penalization framework where the penalization structure is a data independent "prior".
Conditional likelihood is another way to work around perfect separation. This is in a Statsmodels PR that is basically ready to use:
https://github.com/statsmodels/statsmodels/pull/5304
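If that PR has landed in the statsmodels version you have, usage would look roughly like the following; the import path, class name, and arguments here are my reading of the PR, so treat them as an assumption rather than documented API:
import numpy as np
from statsmodels.discrete.conditional_models import ConditionalLogit

rng = np.random.RandomState(0)
groups = np.repeat(np.arange(20), 5)              # 20 strata of 5 observations each
x = rng.randn(100, 2)
y = (x[:, 0] + rng.randn(100) > 0).astype(int)

# the likelihood conditions on the per-group outcome totals
model = ConditionalLogit(y, x, groups=groups)
result = model.fit()
print(result.summary())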
As the title says, what I need is a Python library, not MATLAB's BNT.
BNT is quite strong, but most of the time I use Python to clean data, and I have recently found that using two different languages for one task usually makes the problem much more complex. So I want a Python library that can fit the parameters of DBNs (dynamic Bayesian networks).
Thank you very much.
Has anyone succeeded in speeding up scikit-learn models using numba and JIT compilation? The specific models I am looking at are regression models such as logistic regression.
I am able to use numba to optimize the functions I write that call sklearn models, but the model functions themselves are not affected and are not optimized, so there is no notable increase in speed. Is there a way to optimize the sklearn functions themselves?
Any info about this would be much appreciated.
Scikit-learn makes heavy use of numpy, most of which is written in C and already compiled (hence not eligible for JIT optimization).
Further, the LogisticRegression model is essentially LinearSVC with the appropriate loss function. I could be slightly wrong about that, but in any case, it uses LIBLINEAR to do the solving, which is again a compiled C library.
The makers of scikit-learn also make heavy use of Cython (a descendant of Pyrex), one of the Python-to-compiled-code systems, which again results in optimized, machine-compiled code that is ineligible for JIT compilation.
The @numba.vectorize decorator for arrays and @numba.guvectorize for matrices may help, since they compile the element-wise loops for you. They generate so-called "ufuncs" that achieve this goal, but instead of you having to write the C code by hand, numba generates it from the Python input.
See: http://numba.pydata.org/numba-doc/dev/user/vectorize.html
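For illustration, a minimal sketch of the vectorize route; the function and the toy arrays are my own and have nothing to do with sklearn internals:
import numpy as np
from numba import vectorize

@vectorize(['float64(float64, float64)'])
def rel_diff(x, y):
    # written as scalar code, compiled by numba into a ufunc that loops in machine code
    return 2.0 * (x - y) / (x + y)

a = np.arange(1.0, 1000001.0)
b = a + 1.0
print(rel_diff(a, b)[:5])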
Sorry if this all seems noobish and unclear, but I'm currently learning NetLogo to model agent-based collective behavior and would love to hear some advice on alternative software choices. My main concern is that I'd very much like to take advantage of PyCUDA since, from what I understand, it enables parallel computation. However, does that mean I still have to write the numerical script in some other environment and implement the visuals in yet another one?
If so, my questions are:
What numerical package should I use? PyEvolve, DEAP, or something else? It appears that PyEvolve is no longer being developed and DEAP is just a wrapper on the outdated(?) EAP.
Graphics-wise, I find mayavi2 and VTK promising. The problem is, none of the numerical packages seems to bind to these readily. Is there no better alternative than saving the numerical output to data files and feeding them into, say, mayavi2?
Another option is to generate the data via NetLogo and feed them into a graphing package from (2). Is there any disadvantage to doing this?
Thank you so much for shedding light on this confusion.
You almost certainly do not want to use CUDA unless you are running into a significant performance problem. In general, CUDA is best used for solving floating-point linear algebra problems. If you are looking for a framework built around parallel computation, I'd look towards OpenCL, which can take advantage of GPUs if needed.
In terms of visualization, I'd strongly suggest targeting a specific data interchange format and then letting some other program do the rendering for you. The only reason I'd use something like VTK is if you need more control over the visualization process or you are looking for a real-time solution.
Probably the best choice for visualization would be to use an intermediate format and do it in another program. But for performance, I'd rather configure a JVM for a cluster and run NetLogo on it. I have not tried it yet, but I'm seriously thinking of trying NetLogo on a Beowulf-style cluster.
BTW, there is an ABM platform called Repast that is said to have a Python interface, if you're planning to implement your code in Python.
Long-time R and Python user here. I use R for my daily data analysis and Python for tasks heavier on text processing and shell scripting. I am working with increasingly large data sets, which often arrive as binary or text files. What I normally do is apply statistical/machine-learning algorithms and create statistical graphics. I use R with SQLite sometimes and write C for iteration-intensive tasks; before looking into Hadoop, I am considering investing some time in NumPy/SciPy because I've heard it has better memory management [and the transition to NumPy/SciPy for someone with my background seems not that big]. I wonder if anyone has experience using the two and could comment on the improvements in this area, and whether there are idioms in NumPy that deal with this issue. (I'm also aware of Rpy2 but wondering if NumPy/SciPy can handle most of my needs.) Thanks -
R's strength when looking for an environment to do machine learning and statistics is most certainly the diversity of its libraries. To my knowledge, SciPy + SciKits cannot be a replacement for CRAN.
Regarding memory usage, R uses a pass-by-value (copy-on-modify) paradigm while Python passes references. Pass-by-value can lead to more "intuitive" code, while pass-by-reference can help optimize memory usage. NumPy also lets you take "views" on arrays (a kind of subarray for which no copy is made).
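A small illustration of views (my own toy example): slicing does not copy, so writing to the view writes to the original array:
import numpy as np

a = np.zeros((4, 4))
v = a[1:3, 1:3]        # a view, no data copied
v += 1.0               # writes through to 'a'
print(a)
print(v.base is a)     # True: 'v' shares memory with 'a'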
Regarding speed, pure Python is faster than pure R for accessing individual elements in an array, but this advantage disappears when dealing with numpy arrays (benchmark). Fortunately, Cython lets one get serious speed improvements easily.
If working with Big Data, I find the support for storage-based arrays better with Python (HDF5).
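As a sketch of what a storage-backed array looks like with h5py (PyTables or pandas' HDFStore are alternatives; the file and dataset names are placeholders of mine):
import numpy as np
import h5py

# create a large on-disk dataset and write only a chunk of it
with h5py.File("big_array.h5", "w") as f:
    dset = f.create_dataset("data", shape=(1000000, 10), dtype="f8")
    dset[:1000, :] = np.random.randn(1000, 10)

# later, read back only the slice you need into memory
with h5py.File("big_array.h5", "r") as f:
    block = f["data"][:100, :]
    print(block.shape)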
I am not sure you should ditch one for the other but rpy2 can help you explore your options about a possible transition (arrays can be shuttled between R and Numpy without a copy being made).
I use NumPy daily and R nearly so.
For heavy number crunching, I prefer NumPy to R by a large margin (including R packages like 'Matrix'): I find the syntax cleaner, the function set larger, and computation quicker (although I don't find R slow by any means). NumPy's broadcasting functionality, for instance, I do not think has an analog in R.
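A toy illustration of broadcasting (my own example): a (3, 1) column and a length-4 row combine into a (3, 4) result with no explicit loop:
import numpy as np

col = np.array([[1.0], [2.0], [3.0]])       # shape (3, 1)
row = np.array([10.0, 20.0, 30.0, 40.0])    # shape (4,)
print(col + row)                            # shape (3, 4)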
For instance, reading in a data set from a CSV file and 'normalizing' it for input to an ML algorithm (e.g., mean-centering then re-scaling each dimension) requires just this:
import numpy as NP

data = NP.loadtxt(data1, delimiter=",")   # 'data1' is the path to the CSV file; 'data' is a NumPy array
data -= NP.mean(data, axis=0)             # mean-center each column
data /= NP.max(data, axis=0)              # re-scale each column
Also, I find that when coding ML algorithms I need data structures that I can operate on element-wise and that also understand linear algebra (e.g., matrix multiplication, transpose, etc.). NumPy gets this, and allows you to create these hybrid structures easily (no operator overloading, subclassing, etc.).
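A tiny sketch of that hybrid behaviour (my own example): the same ndarray supports element-wise arithmetic and linear algebra:
import numpy as np

A = np.arange(6.0).reshape(3, 2)
B = np.ones((3, 2))

elementwise = A * B + 2.0      # element-wise multiply and add
gram = A.T @ A                 # matrix product of the transpose with A
print(elementwise)
print(gram)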
You won't be disappointed by NumPy/SciPy, more likely you'll be amazed.
So, a few recommendations--in general and in particular, given the facts in your question:
Install both NumPy and SciPy. As a rough guide, NumPy provides the core data structures (in particular the ndarray) and SciPy (which is actually several times larger than NumPy) provides the domain-specific functions (e.g., statistics, signal processing, integration).
Install the repository versions, particularly with respect to NumPy, because the dev version is 2.0. Matplotlib and NumPy are tightly integrated; you can use one without the other, of course, but both are the best in their respective classes among Python libraries. You can get all three via easy_install, which I assume you already have.
NumPy/SciPy have several modules specifically directed to machine learning/statistics, including the Clustering package and the Statistics package.
There are also packages directed to general computation which nevertheless make coding ML algorithms a lot faster, in particular Optimization and Linear Algebra (see the short sketch at the end of this answer).
There are also the SciKits, which are not included in the base NumPy or SciPy libraries; you need to install them separately. Generally speaking, each SciKit is a set of convenience wrappers to streamline coding in a given domain. The SciKits you are likely to find most relevant are ann (approximate nearest neighbor) and learn (a set of ML/statistics regression and classification algorithms, e.g., logistic regression, multi-layer perceptron, support vector machine).
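As promised above, a short sketch of the Optimization and Linear Algebra pieces; the data and loss function are my own toy example, not anything from your workflow:
import numpy as np
from scipy import optimize, linalg

# toy data: y = 3*x + 1 plus noise
x = np.linspace(0, 1, 50)
y = 3.0 * x + 1.0 + 0.1 * np.random.randn(50)

# least-squares fit of y = a*x + b via a generic optimizer
def loss(params):
    a, b = params
    return np.sum((y - (a * x + b)) ** 2)

a_hat, b_hat = optimize.minimize(loss, x0=[0.0, 0.0]).x

# the same fit in closed form with scipy.linalg.lstsq
A = np.column_stack([x, np.ones_like(x)])
coef, *_ = linalg.lstsq(A, y)
print(a_hat, b_hat, coef)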