Tensorflow - tf.Layers : Error with the function .fit - python

I'm running the cnn_mnist example from GitHub that uses the layers module.
I can run the program, but a warning appears telling me that one of the functions is deprecated.
I couldn't find which new function should be used instead.
WARNING:tensorflow:From <ipython-input-14-ee49e8b76469>:25: calling BaseEstimator.fit (from tensorflow.contrib.learn.python.learn.estimators.estimator) with batch_size is deprecated and will be removed after 2016-12-01.
Instructions for updating:
Estimator is decoupled from Scikit Learn interface by moving into
separate class SKCompat. Arguments x, y and batch_size are only
available in the SKCompat class, Estimator will only accept input_fn.
Example conversion:
est = Estimator(...) -> est = SKCompat(Estimator(...))

It should be sufficient to modify that script like this:
# Create the Estimator
mnist_classifier = learn.SKCompat(learn.Estimator(
    model_fn=cnn_model_fn, model_dir="/tmp/mnist_convnet_model"))
However, please open a Github issue to update that sample so it doesn't mislead anyone else.
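For completeness, a minimal sketch of how the wrapped estimator would then be trained; train_data, train_labels, and logging_hook are the objects prepared earlier in the cnn_mnist sample, and the SKCompat wrapper keeps the x/y/batch_size signature that plain Estimator no longer accepts:
# Train via the SKCompat wrapper (hypothetical values mirroring the sample).
mnist_classifier.fit(
    x=train_data,
    y=train_labels,
    batch_size=100,
    steps=20000,
    monitors=[logging_hook])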

Related

ValueError and AttributeError in Pycaret

I am using pycaret on a new laptop, and after a gap of 6 months, so I'm not sure whether this problem is due to some issue with my laptop or due to changes in the pycaret package itself. Earlier I simply used to create an experiment using pycaret's setup method and it worked. But now it keeps raising one error after another. For example, I used the two lines below to set up an experiment.
from pycaret.classification import *
exp = setup(data=df.drop(['id'], axis=1), target='cancer', session_id=123)
But this gave an error:
ValueError: Setting a random_state has no effect since shuffle is False. You should leave random_state to its default (None), or set shuffle=True.
Then I changed my second line as below:
exp = setup(data=df.drop(['id'], axis=1), target='cancer', session_id=123, fold_shuffle=True, imputation_type='iterative')
Then it returned a new error:
AttributeError: 'Make_Time_Features' object has no attribute 'list_of_features'
I remember well that earlier I never had to use these arguments in my setup call. It looks like even the default values of the arguments in pycaret's setup method are not working. Can anyone suggest how to troubleshoot this?
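As a first troubleshooting step, one hedged suggestion: errors like these often come from a version mismatch between pycaret and scikit-learn, so it's worth checking which versions are actually installed before changing any setup arguments:
# Print the installed versions; a pycaret/scikit-learn incompatibility is a
# common cause of errors raised from setup() with previously working code.
from importlib.metadata import version
print(version('pycaret'), version('scikit-learn'))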

When using torch.autocast, how do I force individual layers to float32

I'm trying to train a model in mixed precision. However, I want a few of the layers to be in full precision for stability reasons. How do I force an individual layer to be float32 when using torch.autocast? In particular, I'd like this to be ONNX-compilable.
Is it something like:
with torch.cuda.amp.autocast(enabled=False, dtype=torch.float32):
    out = my_unstable_layer(inputs.float())
Edit:
Looks like this is indeed the official method. See the torch docs.
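For reference, a minimal sketch of that pattern inside a forward pass; MixedModel and its layers are hypothetical stand-ins, with my_unstable_layer's role played by self.unstable (assumes a CUDA device):
import torch

class MixedModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fast = torch.nn.Linear(8, 8)      # may run in half precision
        self.unstable = torch.nn.Linear(8, 8)  # kept in full precision

    def forward(self, x):
        x = self.fast(x)  # runs under the surrounding autocast
        # Disable autocast locally and cast the input up, as in the snippet above.
        with torch.cuda.amp.autocast(enabled=False):
            x = self.unstable(x.float())
        return x

model = MixedModel().cuda()
with torch.cuda.amp.autocast():
    out = model(torch.randn(2, 8, device='cuda'))
print(out.dtype)  # torch.float32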
I think the motivation of torch.autocast is to automate the reduction of precision (not the increase).
If you have functions that need a particular dtype, you should consider using custom_fwd:
import torch

@torch.cuda.amp.custom_fwd(cast_inputs=torch.complex128)
def get_custom(x):
    print(' Decorated function received', x.dtype)

def regular_func(x):
    print(' Regular function received', x.dtype)
    get_custom(x)

x = torch.tensor(0.0, dtype=torch.half, device='cuda')

with torch.cuda.amp.autocast(False):
    print('autocast disabled')
    regular_func(x)

with torch.cuda.amp.autocast(True):
    print('autocast enabled')
    regular_func(x)
Output:
autocast disabled
 Regular function received torch.float16
 Decorated function received torch.float16
autocast enabled
 Regular function received torch.float16
 Decorated function received torch.complex128
Edit: Using torchscript
I am not sure how much you can rely on this, due to a comment in the documentation; however, the comment is apparently outdated.
Here is an example where I trace the model with autocast enabled, freeze it, and then use it, and the value is indeed cast to the specified type:
class Cast(torch.nn.Module):
    @torch.cuda.amp.custom_fwd(cast_inputs=torch.float64)
    def forward(self, x):
        return x

x = torch.tensor(0.0, dtype=torch.half, device='cuda')

with torch.cuda.amp.autocast(True):
    model = torch.jit.trace(Cast().eval(), x)
    model = torch.jit.freeze(model)
print(model(x).dtype)

Output:
torch.float64
But I suggest you validate this approach before using it in a serious application.

difference between custom_loss and custom_metric in catboost CatBoostClassifier

I am trying to learn catboost, and I see two confusing terms with CatBoostClassifier:
custom_loss and custom_metric.
I have browsed the parameters list at https://catboost.ai/docs/concepts/python-reference_parameters-list.html#python-reference_parameters-list, which says:
custom_metric:
Metric values to output during training. These functions are not optimized and are displayed for informational purposes only. Some metrics support optional parameters (see the Objectives and metrics section for details on each metric).
but then what is custom_loss?
I see custom_loss defined in the R documentation: https://catboost.ai/docs/features/loss-functions-desc.html - but not in the python one.
Yet in the Python tutorial, they define custom_loss like so:
model = CatBoostClassifier(
    custom_loss=['Accuracy'],
    random_seed=42,
    logging_level='Silent'
)
Am I missing something here? In fact, custom_loss does not seem to be defined as a parameter anywhere in the Python docs: https://catboost.ai/docs/concepts/python-reference_parameters-list.html#python-reference_parameters-list
I infer the following from this link in the documentation.
I am almost certain that they refer to the same parameter, but custom_loss is the R name while custom_metric is for Python. Apparently they can be used interchangeably as long as they don't cause name collisions.
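A minimal sketch of that interchangeability, assuming the two names really are aliases (the Accuracy metric and other parameters mirror the tutorial snippet above):
from catboost import CatBoostClassifier

# Both spellings should be accepted by the Python package if they are aliases;
# passing both at once would cause the name collision mentioned above.
m_loss = CatBoostClassifier(custom_loss=['Accuracy'], random_seed=42,
                            logging_level='Silent')
m_metric = CatBoostClassifier(custom_metric=['Accuracy'], random_seed=42,
                              logging_level='Silent')
print(m_loss.get_params())
print(m_metric.get_params())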

Why are variables defined with 'self' automatically given a ListWrapper() while inheriting from tf.keras.Model?

I am not familiar with ListWrapper(), but it is being applied to every list attribute assigned to self when my class inherits from tf.keras.Model.
https://www.tensorflow.org/api_docs/python/tf/keras/models/Model
This is bad because it is causing an IndexError when I use the list in certain functions, or even just by passing it through my Tensorflow model. (I am using eager execution.)
A small reproduction of the problem can be seen with this code:
import tensorflow as tf

class my_class(tf.keras.Model):
    def __init__(self):
        super(my_class, self).__init__()
        self.x = [0]
        print(self.x)

model = my_class()
Output:
ListWrapper([0])
Switching the inheritance to be from object solves the issue, which is how I know it's tf.keras.Model that is causing this.
I tried looking it up but can't find anything on this. Any tips? Thanks!
It turns out this was a bug in Tensorflow, in the interaction between tf.keras.Model and eager execution.
This was not "how the tensorflow has 'patched' setting attributes", as suggested by a comment.
This is a link to the closed issue on Tensorflow:
https://github.com/tensorflow/tensorflow/issues/22853
If you have this problem, it should be fixed in the next Tensorflow update; the bug was present in version 1.11.0.

Tensorflow Estimator: How to run an operation within model_fn

I have a very short question, which probably has a very simple answer, but I just can't figure it out, although I've tried for hours now.
I'm using the Tensorflow Estimator and I want to access the global step within my model_fn. I've tried tf.train.get_global_step, which returns a Tensor. I need the global_step as an integer, though (or as a string)!
So I've tried eval() (i.e. tf.get_default_session().run(t)), but it doesn't work.
Cheers!
You can use tf.cast to cast the Tensor to an integer dtype, or tf.as_string to convert it to a string.
For example,
tf.cast(tf.train.get_global_step(), dtype=tf.int32)
See the reference here.
One way would be to parse it from the latest checkpoint file in the model_dir.
So, assuming you can pass the model_dir into the model_fn (either through the params argument of tf.estimator.Estimator(..., params={'model_dir': 'path/to/model_dir'}) or through tf.flags.FLAGS), you can then use this utility function:
import os
import tensorflow as tf

def get_global_step_from_model_dir(model_dir):
    latest_checkpoint_file = tf.train.latest_checkpoint(model_dir)
    if latest_checkpoint_file is None:
        return 0
    else:
        # Checkpoint paths end in '-<step>', e.g. 'model.ckpt-1234'.
        return int(os.path.basename(latest_checkpoint_file).split('-')[-1])
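A hypothetical usage sketch inside model_fn, assuming model_dir was passed via params as described above (note this reads the step once, when model_fn is called, not on every training batch):
def model_fn(features, labels, mode, params):
    # Read the last saved global step from the checkpoint file name.
    step = get_global_step_from_model_dir(params['model_dir'])
    print('Resuming from global step %d' % step)
    # ... build and return the EstimatorSpec as usual ...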
