I am using Keras for a segmentation problem and train my model with fit_generator. I am unsure how to use class_weight together with fit_generator.
I actually want to calculate class weights for each batch rather than for the entire dataset, and I cannot find a way to do this with fit_generator without duplicating effort.
I tried using a generator to return class_weight for each batch, but that gives me
TypeError: object of type 'generator' has no len()
I also thought about using a customised callback to handle class weights. As far as I know, Keras callbacks need to be initialised with the training batches and labels. Is there a way to pass my training batches to a callback from within my fit_generator call?
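For illustration, this is roughly the per-batch weighting I have in mind, using the fact that a generator may yield (inputs, targets, sample_weights). It is only a sketch: load_batch, num_classes, and model are placeholders, and I have simplified the labels to one integer per sample (in my segmentation case they are masks).

import numpy as np

# Sketch only: load_batch, num_classes, and model are placeholders.
def weighted_batch_generator(batch_size=32):
    while True:
        x_batch, y_batch = load_batch(batch_size)               # hypothetical loader
        counts = np.bincount(y_batch, minlength=num_classes)    # labels seen in this batch
        counts = np.maximum(counts, 1)                          # avoid division by zero
        class_weights = len(y_batch) / (num_classes * counts)   # inverse-frequency weights
        sample_weights = class_weights[y_batch]                 # one weight per sample
        yield x_batch, y_batch, sample_weights

model.fit_generator(weighted_batch_generator(), steps_per_epoch=100, epochs=10)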
Thanks.
Related
Given that my dataset is too big to load into memory all at once, I opted for a custom generator with a batch size of 32, meaning my data will be loaded in batches of 32. If I understand correctly, when I feed this generator as a parameter to Keras's .fit(), in one epoch the model will see all of the training data in batches of 32. So my question is: if I specify a batch_size to the .fit() method so I can control when the weights are updated, how would that work? Or should I not specify it, so that the weights are updated for every batch from the generator?
In your case, I would leave it as it is. As the docs explain, when you define the batch size in model.fit(...), you are defining the:
[...] Number of samples per gradient update. [...]
However if you read on, it also says:
Do not specify the batch_size if your data is in the form of datasets, generators, or keras.utils.Sequence instances (since they generate batches).
So the batch size from your generator will be respected when you pass it to model.fit.
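As a minimal sketch (MyDataset, x_train, y_train, and model are placeholders, not your actual code): a Sequence that already yields batches of 32 is passed to model.fit without a batch_size argument, and each batch it returns corresponds to one gradient update.

import numpy as np
import tensorflow as tf

# Sketch: the Sequence does the batching, so model.fit gets no batch_size.
class MyDataset(tf.keras.utils.Sequence):
    def __init__(self, x, y, batch_size=32):
        self.x, self.y, self.batch_size = x, y, batch_size

    def __len__(self):
        # Number of batches per epoch.
        return int(np.ceil(len(self.x) / self.batch_size))

    def __getitem__(self, idx):
        # One batch = one gradient update.
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        return self.x[sl], self.y[sl]

# x_train, y_train, and model are assumed to exist already.
model.fit(MyDataset(x_train, y_train), epochs=10)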
TL;DR: During on_train_epoch_start, I want to get the model's output on ALL the training data, as part of some pre-training calculations. I'm asking what the lightning-friendly way to do that is.
This is an odd question.
In my project, every 10 epochs I select a subset of the full training data, and train only on that subset. During part of the calculation of which subset to use, I compute the model's output on every datapoint in the train dataset.
My question is, what's the best way to do this in PyTorch Lightning? Currently I have a callback with an on_train_epoch_start hook. During this hook, the callback makes its own dataloader from trainer.datamodule.train_dataloader() and manually iterates over it, computing the model outputs. That's not ideal, right?
This makes me run into problems with PyTorch Lightning. For instance, when training on the GPU I get an error, because my callback uses its own dataloader rather than the trainer's train_dataloader, so its batches are never moved to the GPU. However, I can't use the trainer's train_dataloader, since after my callback selects its subset it replaces the trainer's train_dataloader with just that subset instead of the full dataset.
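For reference, my current callback looks roughly like this. It is only a sketch: the class name and the subset-selection step are placeholders, and I'm assuming batches of the form (x, y). The manual device placement is the part that feels un-Lightning-like.

import torch
import pytorch_lightning as pl

class SubsetSelectionCallback(pl.Callback):
    # Sketch of my current workaround, not a recommended pattern.
    def on_train_epoch_start(self, trainer, pl_module):
        if trainer.current_epoch % 10 != 0:
            return
        # Build a fresh dataloader over the *full* training set.
        dataloader = trainer.datamodule.train_dataloader()
        pl_module.eval()
        outputs = []
        with torch.no_grad():
            for x, y in dataloader:           # assumes (inputs, targets) batches
                x = x.to(pl_module.device)    # manual device placement
                outputs.append(pl_module(x).cpu())
        pl_module.train()
        outputs = torch.cat(outputs)
        # ... select the subset from `outputs` and swap the train dataloader ...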
I guess I have two main questions:
Is there any way to avoid making a separate dataloader? Can I call into the training loop somehow? Getting a model's output on the full dataset seems like such a simple operation that I would expect it to be a one-liner.
How can I get/use a dataloader that picks up all of PyTorch Lightning's automatic handling (e.g. GPU/CPU placement, dataloader workers, pin_memory)?
I am trying to use Keras to fit a CNN model to classify images. The dataset has many more images from certain classes, so it is unbalanced.
I have read different things on how to weight the loss to account for this in Keras, e.g.:
https://datascience.stackexchange.com/questions/13490/how-to-set-class-weights-for-imbalanced-classes-in-keras, which is nicely explained. But it always explains the fit() function, not fit_generator().
Indeed, in the fit_generator() function we don't have a 'class_weights' parameter, but instead we have 'weighted_metrics', whose description I don't understand: "weighted_metrics: List of metrics to be evaluated and weighted by sample_weight or class_weight during training and testing."
How can I go from 'class_weights' to 'weighted_metrics'? Would anyone have a simple example?
We do have class_weight in fit_generator (Keras v2.2.2). According to the docs:
class_weight: Optional dictionary mapping class indices (integers) to a weight (float) value, used for weighting the loss function (during training only). This can be useful to tell the model to "pay more attention" to samples from an under-represented class.
Assuming you have two classes [positive and negative], you can pass class_weight to fit_generator as a dictionary keyed by class index:
model.fit_generator(gen, class_weight={0: 0.7, 1: 1.3})
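As a slightly fuller sketch (train_labels, train_gen, and model are placeholders for your own objects), the weights can be derived from the label counts, for example with inverse-frequency weighting:

import numpy as np

# Sketch: inverse-frequency class weights from an array of integer labels.
train_labels = np.array([0, 0, 0, 0, 0, 0, 0, 1, 1, 1])    # toy example: 7 negatives, 3 positives
counts = np.bincount(train_labels)                          # samples per class
class_weight = {cls: len(train_labels) / (len(counts) * n)  # here: {0: ~0.71, 1: ~1.67}
                for cls, n in enumerate(counts)}

model.fit_generator(train_gen,
                    steps_per_epoch=len(train_labels) // 32,
                    epochs=10,
                    class_weight=class_weight)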
Using Keras in TensorFlow 2.x, I have a model that predicts something on multiple scales.
However, independent of the scale, the predicted content is the same and thus shares the same label that is fed from a tf.data.Dataset to model.fit().
As a consequence, the output of the model is a list of multiple tensors, while the label from the tf.data.Dataset is a single tensor. This results in the following error:
ValueError: Error when checking model target: the list of Numpy arrays that you are passing to your model is not the size the model expected.
How can I make the model aware of the repetition of the label?
Ideally, I don't want to replicate the label inside the tf.data.Dataset (unless there is a way to tile my label without consuming more memory).
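For what it's worth, one option I'm weighing is to repeat the label lazily with a map. This is only a sketch (dataset stands for my actual tf.data.Dataset, and I'm assuming a model with three scale outputs), and since the map runs element by element it shouldn't materialise extra copies of the label up front:

import tensorflow as tf

# Sketch: duplicate the single label to match a model with three outputs.
# `dataset` is a placeholder for the existing tf.data.Dataset of (image, label) pairs.
def repeat_label(image, label):
    return image, (label, label, label)   # one copy per model output

dataset = dataset.map(repeat_label)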
I am training a model in tf.keras and I want to save all the activations of a given layer during training (at each batch, for instance) so that I can track boxplots/histograms of these activations in TensorBoard.
I am getting lost among the TensorBoard callback options and can't figure out how to use any of them for this purpose.
I have tried writing a custom callback, but I get an error when I call .numpy() on model.layers[i].output.
I have also tried custom metrics, but from the examples it seems they only store a variable with shape=().
I have found answers about visualizing activations at inference time, but not during training on the training data itself.
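For context, this is roughly the kind of custom callback I was attempting. It is a sketch only: layer_index, sample_batch, and logdir are placeholders, and I'm assuming a functional model so that model.input is available.

import tensorflow as tf

# Sketch of the attempt: a sub-model up to the layer of interest, whose
# activations are written as TensorBoard histograms. layer_index,
# sample_batch, and logdir are placeholders; on_train_batch_end could be
# used instead of on_epoch_end to log every batch.
class ActivationHistogramCallback(tf.keras.callbacks.Callback):
    def __init__(self, layer_index, sample_batch, logdir):
        super().__init__()
        self.layer_index = layer_index
        self.sample_batch = sample_batch
        self.writer = tf.summary.create_file_writer(logdir)

    def on_epoch_end(self, epoch, logs=None):
        # Sub-model that outputs the chosen layer's activations (assumes a functional model).
        activation_model = tf.keras.Model(
            inputs=self.model.input,
            outputs=self.model.layers[self.layer_index].output)
        activations = activation_model(self.sample_batch, training=False)
        with self.writer.as_default():
            tf.summary.histogram(
                "activations/layer_%d" % self.layer_index, activations, step=epoch)
            self.writer.flush()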
Thanks.