Using an R-modelled regression in Python [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
My data was modelled with a Cox regression in R, but since my knowledge of R is very limited I would like to use this model in a Python GUI. This way non-coders would be able to 'predict' survival rates based on our model.
What is the best way that I could use this model (combination of 3 different regressions) in python?

Do you want to predict values based on your estimates? In that case you can just copy the R outputs into Python and apply the respective procedures.
Do you want the user to be able to run "your R regression pipeline" from within Python? There are Python libraries that help with that. I find this source a useful start.
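To illustrate the first option, here is a minimal sketch of evaluating a Cox model in plain Python. The coefficient values and baseline survival below are hypothetical placeholders; in practice you would copy them from R's coxph() summary and survfit() output (and mind whether R centred the covariates):

```python
import numpy as np

# Hypothetical coefficients copied from the R coxph() summary output
coefs = {"age": 0.03, "treatment": -0.52}

# Hypothetical baseline survival S0(t) at some time t, e.g. from survfit() in R
baseline_survival = 0.85

def predict_survival(patient):
    """Cox model: S(t | x) = S0(t) ** exp(x . beta)."""
    linear_predictor = sum(coefs[name] * value for name, value in patient.items())
    relative_risk = np.exp(linear_predictor)
    return baseline_survival ** relative_risk

print(predict_survival({"age": 60, "treatment": 1}))
```

A combination of three regressions could be wrapped the same way, one coefficient dictionary per model, which keeps the GUI code free of any R dependency.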

Related

Input values return prediction with percentage [closed]

Closed 2 years ago.
Hi, I have written a machine learning model using a decision tree. I created a web app where the user can enter input; the web app calls the model through a Flask API and then shows the result, but the result is only Yes/No. Is it possible for the result to show a percentage of how likely this input is to be yes/no, for example Yes 76%?
You could use predict_proba() as it returns the class probabilities of the input samples X.
probs = clf.predict_proba(X)
# probs contains, for each sample, the predicted probability of each class.
You can read more about it in the documentation and in this other post with a similar question.
https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html
Scikit-Learn Decision Tree: Probability of prediction being a or b?
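As a minimal sketch of the suggestion above (using the built-in iris dataset in place of your own data, with a hypothetical max_depth just to keep the leaf probabilities from saturating at 0/1):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# predict_proba returns one row per sample, one column per class
probs = clf.predict_proba(X[:1])
best = probs[0].max()
print(f"Yes {best:.0%}")  # format the top class probability as a percentage
```

In your Flask endpoint you would return probs[0].max() (formatted as a percentage) alongside the Yes/No label instead of only the label.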

How to make predictions with a decision tree on a dataset without a target value? [closed]

Closed 3 years ago.
Every tutorial I have found about machine learning includes testing an algorithm on a dataset that has target values and then it finds how accurate the algorithm is by testing its predictions on the test set.
What if you then receive all of the data except for the target value, and you want to make target-value predictions to see if they come true in the future? Every tutorial I have seen uses data where the future target values are already known.
A decision tree is a supervised algorithm. That means you must use some target value (or label) to build the tree (splitting node values based on the information-gain rule). Once the tree has been trained on labelled data, you can apply it to new, unlabelled data to predict the missing target values.
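A minimal sketch of that workflow, where a held-out split of the iris dataset stands in for future data whose targets you do not know:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
# X_new plays the role of future data; we discard its targets to simulate
# not knowing them
X_train, X_new, y_train, _ = train_test_split(X, y, random_state=0)

# Train only on the labelled portion
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# Predict target values for the unlabelled data
predictions = clf.predict(X_new)
print(predictions[:5])
```

Whether those predictions "come true" can only be checked later, once the real target values become available.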

What would differ between these simple codes? [closed]

Closed 4 years ago.
So I'm kind of new to some concepts; can someone please briefly explain the difference between these two snippets?
regressor = LinearRegression()
regressor.fit(train_X, train_Y)
and
LinearRegression().fit(train_X, train_Y)
The main difference between the two is that the first creates a variable called regressor which you can later access. The second doesn't do this.
Otherwise the two are doing exactly the same thing.
The purpose of fitting (training) the regressor is to use it in the future for prediction. In your second example (LinearRegression().fit(train_X, train_Y)) you create an anonymous regressor, train it, and then immediately discard it. You cannot use it any more, because nothing holds a reference to it.
In the first example, you first create a regressor and assign it to a variable, then train the regressor that was previously created. You can later use it for prediction or any other purpose.
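A minimal sketch of the first pattern, with made-up training data, showing why keeping the reference matters:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Toy data following y = 2x exactly
train_X = np.array([[1.0], [2.0], [3.0], [4.0]])
train_Y = np.array([2.0, 4.0, 6.0, 8.0])

regressor = LinearRegression()
regressor.fit(train_X, train_Y)

# Because the fitted model is kept in a variable, it can be reused later:
print(regressor.predict(np.array([[5.0]])))  # close to [10.]
```

With the anonymous form there is no name to call .predict() on afterwards, so the trained model is lost as soon as the line finishes.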

Compare numerous figures and identify the similar ones [closed]

Closed 7 years ago.
I am working on a pattern recognition program using R/Python. What would be the best way to compare two or more figures and identify/recognize similar or duplicate figures based on pattern recognition?
There are lots of papers on the internet that show how to extract and process features in a fingerprint. For instance, http://www.cse.unr.edu/~bebis/CS790Q/PaperPresentations/MinutiaeDetection.pdf
Then you can use whatever classifier you want, such as a support vector machine.
If you need more ideas, you can visit http://dermatoglyphics.org/11-basic-patterns-of-fingerprint/ to generalize.
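To sketch the classifier step only: assuming you have already reduced each figure to a numeric feature vector (the hard part, covered in the papers above), an SVM can assign new figures to known pattern classes. The feature values and labels here are entirely made up for illustration:

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical feature vectors extracted from each figure
# (e.g. minutiae counts, orientation histograms); labels mark the pattern class.
features = np.array([[0.1, 0.9], [0.2, 0.8], [0.9, 0.1], [0.8, 0.2]])
labels = np.array([0, 0, 1, 1])

clf = SVC(kernel="rbf").fit(features, labels)

# A new figure's feature vector is classified into one of the known classes:
print(clf.predict(np.array([[0.15, 0.85]])))
```

For plain duplicate detection (rather than classification) a distance threshold between feature vectors may be enough, with no classifier at all.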

Spectrum Analyzer with Python? [closed]

Closed 2 years ago.
I want to create a music player with Python which uses OpenGL for visualizing the audio spectrum.
I already have the 3d engine set up and all I need is to get the spectrum data and feed it to the renderer.
I'd imagine it would be a list of numbers updated every few milliseconds or so.
I've heard you can get that info with FMOD, and there's the pyfmodex Python wrapper for it, but I can't access the FMOD documentation and pyfmodex is almost undocumented. I can't find what I need by just browsing the class/function names.
If there's another library which can get that info that will also work.
numpy has an FFT function (numpy.fft) that will compute a fast Fourier transform on a block of input samples. You can use the magnitude of its output to obtain your spectral information.
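A minimal sketch of that idea, using a synthetic 440 Hz sine wave in place of real audio samples from the player (the sample rate and block size are arbitrary choices):

```python
import numpy as np

SAMPLE_RATE = 44100   # samples per second
BLOCK_SIZE = 2048     # samples per analysis window (~46 ms at 44.1 kHz)

# A synthetic audio block standing in for samples pulled from the player
t = np.arange(BLOCK_SIZE) / SAMPLE_RATE
block = np.sin(2 * np.pi * 440.0 * t)

# rfft gives the spectrum of a real-valued signal; a Hann window reduces
# leakage. The magnitudes are the list of numbers to hand to the renderer.
spectrum = np.abs(np.fft.rfft(block * np.hanning(BLOCK_SIZE)))
freqs = np.fft.rfftfreq(BLOCK_SIZE, d=1.0 / SAMPLE_RATE)

print(freqs[np.argmax(spectrum)])  # peak lands near 440 Hz
```

In the player you would run this on each successive block of decoded audio and feed the spectrum array to the OpenGL renderer every frame.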
