If I have a dataset consisting of a list of images, each associated with a series of features, is there a model that, once trained, generates new images when a new list of features is entered?
I think you are looking for GANs (Generative Adversarial Networks), which were proposed in this paper.
A GAN is a type of algorithm that contains two different models: one, named the Discriminator, learns to determine whether its input data comes from the dataset or not, while the other, named the Generator, learns to generate data so that the Discriminator wrongly concludes it comes from the dataset. A minimal sketch of a conditional variant of this setup is given after the links below.
You can find more details from the following links:
generative adversarial network (GAN)
Generative Adversarial Networks (GANs): Engine and Applications
GAN by Example using Keras on Tensorflow Backend
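For the "features in, image out" part of your question, a conditional GAN is the usual variant: the generator receives a noise vector plus the feature vector, and the discriminator judges image/feature pairs. Below is a minimal Keras sketch; the 64x64 image size, 10-dimensional feature vector and 100-dimensional noise vector are illustrative assumptions, and the adversarial training loop is omitted.

# Minimal conditional-GAN sketch (shapes are assumptions, training loop omitted)
import numpy as np
from tensorflow.keras import layers, models

NOISE_DIM, FEATURE_DIM, IMG_SHAPE = 100, 10, (64, 64, 3)

def build_generator():
    noise = layers.Input(shape=(NOISE_DIM,))
    feats = layers.Input(shape=(FEATURE_DIM,))
    x = layers.Concatenate()([noise, feats])
    x = layers.Dense(8 * 8 * 128, activation="relu")(x)
    x = layers.Reshape((8, 8, 128))(x)
    x = layers.Conv2DTranspose(64, 4, strides=2, padding="same", activation="relu")(x)   # 16x16
    x = layers.Conv2DTranspose(32, 4, strides=2, padding="same", activation="relu")(x)   # 32x32
    img = layers.Conv2DTranspose(3, 4, strides=2, padding="same", activation="tanh")(x)  # 64x64
    return models.Model([noise, feats], img)

def build_discriminator():
    img = layers.Input(shape=IMG_SHAPE)
    feats = layers.Input(shape=(FEATURE_DIM,))
    x = layers.Conv2D(32, 4, strides=2, padding="same", activation="relu")(img)
    x = layers.Conv2D(64, 4, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    x = layers.Concatenate()([x, feats])
    out = layers.Dense(1, activation="sigmoid")(x)
    return models.Model([img, feats], out)

generator = build_generator()
discriminator = build_discriminator()
discriminator.compile(optimizer="adam", loss="binary_crossentropy")

# After adversarial training, generating images for a new list of features looks like:
new_feats = np.random.rand(5, FEATURE_DIM).astype("float32")
noise = np.random.normal(size=(5, NOISE_DIM)).astype("float32")
fake_images = generator.predict([noise, new_feats])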
I have a question regarding the model.fit method from the scikit-learn library and overfitting.
Does the generic sklearn method model.fit(X, y) return the score after fitting the model to the specified training data?
Also, is it overfitting when performance on the test data degrades as more training data is used to fit the model?
model.fit(X, y) doesn't explicitly give you the score; it returns the fitted estimator itself, which stores the learned parameters and training artifacts. You can get the score by calling model.score(X, y).
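For example (using LogisticRegression on the iris dataset purely for illustration):

# fit() returns the fitted estimator, not a score; score() computes accuracy for classifiers
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
fitted = model.fit(X_train, y_train)      # returns the estimator itself, not a score
print(fitted is model)                    # True

print(model.score(X_train, y_train))      # training accuracy
print(model.score(X_test, y_test))        # test accuracy; comparing the two hints at overfitting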
Overfitting, in simple words, means the variance of your model is too high, so it fails to generalize beyond the training data. There are ways to reduce overfitting, such as feature engineering, normalization, regularization, and ensemble methods.
I want to split a single CNN model into two separate CNN models. Let me name them CNN-A and CNN-B.
i.e., Original CNN model = CNN-A model + CNN-B model
During prediction, the raw input data is fed to CNN-A. The output of CNN-A is then fed as input to CNN-B. The output of the original model is the output of the CNN-B model.
To implement the above architecture, I would like to get your suggestions and ideas, please.
The implementation seems redundant. The reason is that the input to each CNN should be an image, and if the output of the first CNN is an image that you feed to the second CNN, this is the same as stacking multiple convolution layers (with additional dropouts and so on) inside one CNN model.
So, after all, implementing a single deep CNN will mimic the architecture you want.
You can also take a look at transfer learning, which lets you reuse a pre-trained model's layers, add your own final layers, and make adjustments. This is also similar to what you are describing.
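As a rough transfer-learning sketch in Keras (MobileNetV2, the 128x128 input size and the 10-class head are just placeholders for your own setup, not part of the original answer):

# Freeze a pre-trained base and add your own classification head on top
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

base = MobileNetV2(input_shape=(128, 128, 3), include_top=False, weights="imagenet")
base.trainable = False  # keep the pre-trained convolutional layers fixed

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dense(10, activation="softmax"),  # replace 10 with your number of classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, epochs=5)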
================ After Comment =====================
You could use a model architecture like MobileNet for a model to be deployed on a mobile device.
You could also apply transfer learning to existing pre-trained MobileNet models, which will save a lot of time and resources.
Lastly, you could deploy the (full-size) model on a server using Flask, then create an API that returns predictions when you send the relevant data to the server via a POST request. This is commonly done to reduce the load on mobile devices, and it is the approach I would prefer. This method is relatively efficient and easily scalable.
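A rough sketch of that last option (the /predict route, port and the model.h5 filename are assumptions, not a fixed recipe):

# Minimal Flask prediction endpoint: load the model once, serve predictions over POST
import numpy as np
from flask import Flask, request, jsonify
from tensorflow.keras.models import load_model

app = Flask(__name__)
model = load_model("model.h5")  # load the trained model once at startup

@app.route("/predict", methods=["POST"])
def predict():
    # expects JSON like {"image": [[...], ...]} matching the shape the model was trained on
    data = np.array(request.json["image"], dtype="float32")
    preds = model.predict(data[np.newaxis, ...])  # add a batch dimension
    return jsonify({"prediction": preds.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)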
I created an image dataset of 62,992 images at 128x128 px resolution that contains characters, numbers, and symbols in four font styles. How do I train on this dataset and create a pre-trained model for my OCR? Can you please help me?
If you want to train a custom OCR model, I would advise studying the TensorFlow Attention OCR implementation, which can be found here.
I've used this implementation in various projects and it gave satisfactory results in converting images to text.
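If you just want a starting point before diving into Attention OCR, a minimal Keras sketch for a single-character classifier could look like the following. It assumes each character class lives in its own sub-folder under dataset/, which may not match your layout.

# Simple CNN classifier for 128x128 character images, one sub-folder per class
import tensorflow as tf
from tensorflow.keras import layers, models

train_ds = tf.keras.utils.image_dataset_from_directory(
    "dataset/", image_size=(128, 128), batch_size=64)
num_classes = len(train_ds.class_names)

model = models.Sequential([
    layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)
model.save("char_classifier.h5")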
If you check my GitHub, I have successfully implemented CNN and KNN for classifying signal faults. For that, I took the signal, applied a little preprocessing for dimensionality reduction, and provided it to the network; using its class information I trained the network, and later the trained network was tested with test samples to determine the class and compute the accuracy.
My question here is how to input text information to a CNN or any other network. For inputs, I took the Twitter database from Kaggle and selected two columns that contain names and gender information. I have gone through some algorithms that classify gender based on blog data, but it wasn't clear how to apply them to my data (in my case, I just want to classify using names alone).
In some examples I understood, I saw a sparse matrix being computed for the text, but for 20,000 samples the sparse matrix is too huge to give as input. I have no problem implementing CNN architectures (I want to use them because no hand-crafted features are required) or any other network. I am stuck on how to input the data to the network. What kind of conversions can I make so that the names and gender information can be used to train the network?
If my way of thinking is wrong, please suggest which algorithm would be the best approach. Deep learning or any other methods are fine!
You could use character-level embeddings (i.e. your input classes are the different characters, so 'a' is class 1, 'b' is class 2, etc.). One-hot encoding the classes and then passing them through an embedding layer will yield unique representations for each character. A string can then be treated as a character sequence (or, equally, a vector sequence), which can be used as an input for either a recurrent or convolutional network. If you feel like reading, this paper by Kim et al. will provide you with all the necessary theoretical backbone.
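A minimal sketch of that idea in Keras (the toy name list, the vocabulary handling, and the layer sizes are purely illustrative):

# Character-level model: map each character to an integer, pad, embed, classify
import numpy as np
from tensorflow.keras import layers, models
from tensorflow.keras.preprocessing.sequence import pad_sequences

names = ["alice", "bob", "charlotte", "david"]   # toy example data
genders = np.array([1, 0, 1, 0])                 # 1 = female, 0 = male

chars = sorted({c for name in names for c in name})
char_to_idx = {c: i + 1 for i, c in enumerate(chars)}   # index 0 reserved for padding
max_len = max(len(n) for n in names)

X = pad_sequences([[char_to_idx[c] for c in n] for n in names], maxlen=max_len)

model = models.Sequential([
    layers.Embedding(input_dim=len(chars) + 1, output_dim=16),  # learned character embeddings
    layers.LSTM(32),                                             # or a Conv1D stack instead
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, genders, epochs=10, verbose=0)

The embedding layer keeps the input small (one integer per character) instead of a huge sparse matrix, which addresses the memory concern for 20,000 samples.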
I have a Python machine learning script that learns from firewall log data. I am using a Random Forest classifier in it. If I run it as a service, is it possible to train the model again and again, each day, whenever the previous day's firewall logs are received?
These are the estimators in sklearn that support incremental learning (i.e. partial_fit), and unfortunately for you Random Forest isn't one of them, so you will have to refit on the full dataset every time you add data.
If you insist on sticking with Random Forest, one option would be to reduce the number of features (based on the feature importances of the classifier you currently have) to speed up refitting your classifier.
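If daily full refits are acceptable, a rough sketch of that approach could look like this (the file layout, the "label" column, and running it from a daily scheduler are assumptions on my part):

# Daily full refit: load all logs collected so far, retrain, persist the model
import glob
import pandas as pd
import joblib
from sklearn.ensemble import RandomForestClassifier

def retrain():
    # concatenate every daily log file accumulated so far
    frames = [pd.read_csv(path) for path in glob.glob("logs/*.csv")]
    data = pd.concat(frames, ignore_index=True)
    X, y = data.drop(columns=["label"]), data["label"]

    clf = RandomForestClassifier(n_estimators=100, n_jobs=-1)
    clf.fit(X, y)                        # full refit; Random Forest has no partial_fit
    joblib.dump(clf, "rf_model.joblib")  # the service reloads this file after each retrain

if __name__ == "__main__":
    retrain()   # run once a day, e.g. from cron or a scheduler inside the service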