I am following this tutorial:
https://www.tensorflow.org/tutorials/keras/text_classification_with_hub
It only goes up to fitting the model, but I couldn't find how to use the fitted model on a new dataset to classify unlabeled data. I tried following other tutorials, but I couldn't get them to work, possibly because they are not text-based. The tutorial's model ends with a single-unit output layer:
model.add(tf.keras.layers.Dense(1))
I also run into an issue when I try to set that layer's units to 2 (one for positive, one for negative), but that doesn't work either.
I think you misunderstood the purpose of that tutorial. That tutorial applies what is known as "transfer learning": you take an already trained model and continue training it on other data. What you are doing is creating an entirely new model, which is not the purpose of that specific tutorial. Furthermore, that model needs a labeled dataset, which the tutorial provides via the TensorFlow Datasets library. To accomplish what you are trying to do, you should look at a different tutorial explaining how to train an LSTM model for text classification from scratch.
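That said, if all you need is to run the fitted tutorial model on new, unlabeled text, Keras models expose predict() for that. A minimal sketch, assuming the tutorial's architecture (a TF Hub embedding layer that accepts raw strings, and a single-logit Dense(1) output, so a sigmoid turns the logit into a probability):

```python
import tensorflow as tf

# `model` is the fitted model from the tutorial; it accepts raw strings
# because its first layer is a hub.KerasLayer text embedding.
new_texts = tf.constant([
    "An absolutely wonderful film.",
    "A complete waste of two hours.",
])

logits = model.predict(new_texts)        # shape: (num_examples, 1)
probs = tf.sigmoid(logits)               # probability of the positive class
labels = tf.cast(probs > 0.5, tf.int32)  # 1 = positive, 0 = negative
```

This is also why two units are unnecessary for this task: a single logit already encodes positive vs. negative.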
I'm trying to solve a prompt-learning task, but I'm not finding information on how to do it with Hugging Face Transformers. However, I've been thinking about it, and prompt learning is basically fine-tuning a masked language model, but only over a set of possible values for those masks, instead of using the masks provided by the tokenizer straight out of the box. Is there a way of doing that, e.g., to fine-tune a masked language model that only calculates the logits for a set of predefined words?
I've tried OpenPrompt, but I didn't quite like it; that's why I'm trying to use Hugging Face directly.
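Concretely, what I have in mind is something like this sketch (the model name and the verbalizer words are just placeholders):

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# The only words the mask is allowed to take (assumed to be single tokens).
verbalizers = ["great", "terrible"]
candidate_ids = tokenizer.convert_tokens_to_ids(verbalizers)

inputs = tokenizer("The movie was [MASK].", return_tensors="pt")
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)

logits = model(**inputs).logits             # (batch, seq_len, vocab_size)
mask_logits = logits[mask_pos]              # (num_masks, vocab_size)
restricted = mask_logits[:, candidate_ids]  # logits over the verbalizer set only

# Fine-tuning step: the label indexes into `verbalizers` (0 = "great").
loss = torch.nn.functional.cross_entropy(restricted, torch.tensor([0]))
loss.backward()
```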
Cheers!
I am currently working on a system that extracts certain features out of 3D objects (voxel grids, to be precise), and I would like to compare those features against automatically learned features in terms of classification performance in a TensorFlow CNN with some other data, but that is not the point here, just background.
My idea now is to take a dataset (ModelNet10), train a TensorFlow CNN to classify it, and then use what the network learned there on my own dataset, not to classify, but to extract features.
So I want to throw away everything the CNN does except for what it extracts from the objects.
Is there any way to get these features, and how do I do that? I honestly have no idea.
Yes, it is possible to use trained models purely for feature extraction. This falls under transfer learning: you can either train your own model and then extract the features, or you can extract features from a pre-trained model and use them in your task, provided your task is similar in nature to what the pre-trained model was trained for. You can find a lot of material online on these topics. The links below give details on how you can go about it, and a minimal code sketch follows them:
https://keras.io/api/applications/
https://keras.io/guides/transfer_learning/
https://machinelearningmastery.com/how-to-use-transfer-learning-when-developing-convolutional-neural-network-models/
https://www.pyimagesearch.com/2019/05/27/keras-feature-extraction-on-large-datasets-with-deep-learning/
https://www.kaggle.com/angqx95/feature-extractor-fine-tuning-with-keras
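A minimal Keras sketch of the first approach: train (or load) your ModelNet10 classifier, then build a second model that stops at an intermediate layer. The file name, layer name, and input batch are placeholders for whatever your network actually uses:

```python
import tensorflow as tf

# Assumed: your CNN trained on ModelNet10, saved to disk.
base = tf.keras.models.load_model("modelnet10_cnn.h5")

# Reuse everything up to the layer just before the classification head.
feature_extractor = tf.keras.Model(
    inputs=base.input,
    outputs=base.get_layer("global_pool").output,  # hypothetical layer name
)

# `my_voxel_batch` stands in for a batch of your voxel grids.
features = feature_extractor.predict(my_voxel_batch)  # (batch, feature_dim)
```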
I started experimenting with GANs, and on the internet there are a lot of options. What I worry about now is which one to use: Keras fit() or Keras train_on_batch()? Which is the proper way to train the model? Thanks.
According to TensorFlow's documentation, the combination of tf.data and model.train_on_batch() is the fastest way to train a model.
You can check more about tf.data in the link below:
https://www.tensorflow.org/guide/data
I have already implemented GANs with Keras. My implementations might help you get an idea of how to train those models:
https://github.com/kochlisGit/Keras-GAN
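In outline, a train_on_batch training step for a GAN looks like this sketch (the generator, discriminator, combined model, and dataset are placeholders you would build yourself):

```python
import tensorflow as tf

latent_dim = 100
batch_size = 64

# Assumed: `generator`, `discriminator`, and `gan` (generator followed by a
# frozen discriminator) are compiled Keras models, and `dataset` is a tf.data
# pipeline yielding batches of exactly `batch_size` real images.
for real_images in dataset:
    noise = tf.random.normal((batch_size, latent_dim))
    fake_images = generator.predict_on_batch(noise)

    # Train the discriminator on real and fake batches separately.
    discriminator.train_on_batch(real_images, tf.ones((batch_size, 1)))
    discriminator.train_on_batch(fake_images, tf.zeros((batch_size, 1)))

    # Train the generator through the combined model with "real" labels.
    gan.train_on_batch(noise, tf.ones((batch_size, 1)))
```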
I have reached the point where I have deployed my trained Keras model (based on TensorFlow). I have TensorFlow Serving running, and when I feed it the test data I get exactly the output I expect. Wonderful!
However, in the real world (the deployed scenario) I need to pass new data to the model that it has never seen before. In the training/testing setup I did categorization and one-hot encoding, so I need to transform the submitted data first. This I might be able to do.
I also did normalization (StandardScaler from sklearn), and here I have no clue what best practice is. To normalize, would I really need to run through the training data again plus the newly submitted data?
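The only pattern I have found so far is to persist the fitted scaler next to the model and apply only transform() at serving time, roughly like this sketch (paths and variables are placeholders):

```python
import joblib
from sklearn.preprocessing import StandardScaler

# --- at training time ---
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)  # learns mean_ and scale_
joblib.dump(scaler, "scaler.joblib")            # save next to the model

# --- at serving time ---
scaler = joblib.load("scaler.joblib")
X_new_scaled = scaler.transform(X_new)          # reuses the stored statistics
```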
Is that the right approach? I believe this can be solved in an elegant way. Any ideas?
How can I load a .pb protobuf model and then tweak the network as needed (especially the outer layers) in order to train a new model for completely different classes, effectively doing transfer learning?
I want to do things like training the outer layers with a bigger learning rate than the inner layers, among other things, so I need a way not only to load the graph with its variables, but also to alter the network's structure and hyperparameters.
If anyone has an example to follow with the Inception model, that would be amazing!
My question is very similar to this one.
I've searched all over the internet (TF docs, GitHub, Stack Overflow, Google...) but I can't seem to find something useful for a novice.
Thanks a lot!
This is the updated tutorial from the official TensorFlow website: https://www.tensorflow.org/hub/tutorials/image_retraining
It uses the pre-trained Inception V3 model and everything works fine. You can point the dataset folder at your own dataset.
tf.import_graph_def() is the function for loading a GraphDef:
https://www.tensorflow.org/versions/0.6.0/api_docs/python/framework.html#import_graph_def
Hopefully, once it is imported you can make the modifications to the graph you need. It would be easier, though, to modify the Python code that generated the graph in the first place, if you have access to it.
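A sketch of the loading step with the TF 1.x-era API (the file path and tensor name are placeholders; "pool_3:0" is just a typical bottleneck-tensor name in Inception-style graphs):

```python
import tensorflow as tf

# Read the serialized GraphDef from the .pb file.
with tf.io.gfile.GFile("frozen_model.pb", "rb") as f:
    graph_def = tf.compat.v1.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph; name="" keeps the original node names.
with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(graph_def, name="")

# Fetch an intermediate tensor by name to build new layers on top of it.
features = graph.get_tensor_by_name("pool_3:0")
```

From here you can attach new output layers to `features` and train only those, which is the usual transfer-learning route for frozen graphs.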