I created a "Python" layer "myLayer" in caffe, and use it in the net train_val.prototxt I insert the layer like this:
layer {
  name: "my_py_layer"
  type: "Python"
  bottom: "in"
  top: "out"
  python_param {
    module: "my_module_name"
    layer: "myLayer"
  }
  include { phase: TRAIN } # THIS IS THE TRICKY PART!
}
Now, my layer only participates in the TRAIN phase of the net. How can I know that in my layer's setup function?
class myLayer(caffe.Layer):
    def setup(self, bottom, top):
        # I want to know here what is the phase?!!
        ...
PS, I posted this question on the "Caffe Users" google group as well. I'll update if anything pops up there.
As pointed out by galloguille, caffe now exposes the phase to the python layer class. This new feature makes this answer a bit redundant. Still, it is useful to know about param_str in a caffe python layer for passing other parameters to the layer.
Original answer:
AFAIK there is no trivial way of getting the phase. However, one can pass arbitrary parameters from the net prototxt to python. This can be done using the param_str parameter of python_param.
Here's how it's done:
layer {
  type: "Python"
  ...
  python_param {
    ...
    param_str: '{"phase":"TRAIN","numeric_arg":5}' # passing params as a STRING
  }
}
In python, you get param_str in the layer's setup function:
import caffe, json
class myLayer(caffe.Layer):
    def setup(self, bottom, top):
        param = json.loads(self.param_str)  # use JSON to convert string to dict
        self.phase = param['phase']
        self.other_param = int(param['numeric_arg'])  # I might want to use this as well...
This is a very good workaround, but if you are only interested in passing the phase as a parameter, you can now access the phase as an attribute of the layer. This feature was merged just 6 days ago: https://github.com/BVLC/caffe/pull/3995.
Specific commit: https://github.com/BVLC/caffe/commit/de8ac32a02f3e324b0495f1729bff2446d402c2c
With this new feature you just need to use the attribute self.phase. For example you can do the following:
class PhaseLayer(caffe.Layer):
    """A layer for checking attribute `phase`"""
    def setup(self, bottom, top):
        pass

    def reshape(self, bottom, top):
        top[0].reshape()

    def forward(self, bottom, top):
        top[0].data[()] = self.phase
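For example, a possible usage sketch (PhaseAwareLayer is just an illustrative name) would branch on self.phase by comparing it with the pycaffe constants caffe.TRAIN / caffe.TEST:
import caffe

class PhaseAwareLayer(caffe.Layer):
    def setup(self, bottom, top):
        # self.phase is an integer matching caffe's phase enum
        if self.phase == caffe.TRAIN:
            print("setting up for the TRAIN phase")
        else:
            print("setting up for the TEST phase")

    def reshape(self, bottom, top):
        pass

    def forward(self, bottom, top):
        pass

    def backward(self, top, propagate_down, bottom):
        pass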
I'm trying to train a model in mixed precision. However, I want a few of the layers to be in full precision for stability reasons. How do I force an individual layer to be float32 when using torch.autocast? In particular, I'd like this to be ONNX-compilable.
Is it something like:
with torch.cuda.amp.autocast(enabled=False, dtype=torch.float32):
    out = my_unstable_layer(inputs.float())
Edit:
Looks like this is indeed the official method. See the torch docs.
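For reference, here is a minimal self-contained sketch of that pattern (my_stable_layer, my_unstable_layer and the tensor shapes are made up for illustration): nest a disabled autocast region inside the enabled one and cast the inputs up explicitly.
import torch
import torch.nn as nn

my_stable_layer = nn.Linear(16, 16).cuda()
my_unstable_layer = nn.Linear(16, 16).cuda()  # hypothetical layer that needs float32
x = torch.randn(4, 16, device='cuda')

with torch.cuda.amp.autocast():
    out = my_stable_layer(x)                  # runs in float16 under autocast
    with torch.cuda.amp.autocast(enabled=False):
        out = my_unstable_layer(out.float())  # forced to run in float32

print(out.dtype)  # torch.float32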
I think the motivation of torch.autocast is to automate the reduction of precision (not the increase).
If you have functions that need a particular dtype, you should consider using custom_fwd:
import torch

@torch.cuda.amp.custom_fwd(cast_inputs=torch.complex128)
def get_custom(x):
    print(' Decorated function received', x.dtype)

def regular_func(x):
    print(' Regular function received', x.dtype)
    get_custom(x)

x = torch.tensor(0.0, dtype=torch.half, device='cuda')

with torch.cuda.amp.autocast(False):
    print('autocast disabled')
    regular_func(x)

with torch.cuda.amp.autocast(True):
    print('autocast enabled')
    regular_func(x)
autocast disabled
Regular function received torch.float16
Decorated function received torch.float16
autocast enabled
Regular function received torch.float16
Decorated function received torch.complex128
Edit: Using torchscript
I am not sure how much you can rely on this, due to a comment in the documentation. However, the comment is apparently outdated.
Here is an example where I trace the model with autocast enabled, freeze it, and then use it; the value is indeed cast to the specified type:
class Cast(torch.nn.Module):
    @torch.cuda.amp.custom_fwd(cast_inputs=torch.float64)
    def forward(self, x):
        return x

x = torch.tensor(0.0, dtype=torch.half, device='cuda')

with torch.cuda.amp.autocast(True):
    model = torch.jit.trace(Cast().eval(), x)
    model = torch.jit.freeze(model)

print(model(x).dtype)
torch.float64
But I suggest you validate this approach before using it for a serious application.
Is it possible to make the PyTorch distributions create their samples directly on GPU?
If I do
from torch.distributions import Uniform, Normal
normal = Normal(3, 1)
sample = normal.sample()
Then sample will be on CPU. Of course it is possible to do sample = sample.to(torch.device("cuda")) to make it on GPU. But is there a way to have the sample go directly to GPU without first creating it on CPU?
PyTorch distributions inherit from Object, not nn.Module, so they do not have a to method to put the distribution instance on GPU.
Any ideas?
Distributions use the reparametrization trick. Thus, passing tensors that already live on the GPU to the distribution constructor works, as follows:
normal = Normal(torch.tensor(0.0).to(device=torch.device("cuda")), torch.tensor(1.0).to(device=torch.device("cuda")))
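A short sanity check of this approach (the sample shape (5,) is just for illustration): once the parameters live on the GPU, samples are produced there directly.
import torch
from torch.distributions import Normal

device = torch.device("cuda")
normal = Normal(torch.tensor(0.0, device=device), torch.tensor(1.0, device=device))

sample = normal.sample((5,))
print(sample.device)  # cuda:0 -- no CPU round-trip needed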
In my case, I'm using a Normal distribution as my prior in a neural net model. I have a class called class1, for example, and in its init function I have to initialize my prior. However, calling .to('cuda') on an instance of class1 doesn't change the distribution's device and causes errors in later usage. Therefore, I could have used register buffers to manage it as follows.
class class1(nn.Module):
    def __init__(self):
        super().__init__()
        self.register_buffer("mean", torch.tensor(0.))
        self.register_buffer("var", torch.tensor(1.))

    def get_dist(self):
        return torch.distributions.Normal(self.mean, self.var)
However, I have several priors, and it's not possible to register_buffer a list. So, one option is to create the distributions inside the get_dist property, if you don't mind the time overhead of constructing them on every call. Instead, I decided to define a function for initializing the distributions and a try-except in get_dist to handle the different states: if the distributions variable is not assigned yet, or is on CPU while we expect it to be on GPU, it jumps to the except branch, where I initialize the distributions using torch.zeros(..).to(device). A minimal sketch of this pattern is shown below.
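Here is that lazy-initialization pattern as a sketch (the names Class1, _init_priors, n_priors and the Normal priors are illustrative, not taken from my actual model):
import torch
import torch.nn as nn

class Class1(nn.Module):
    def __init__(self, n_priors=3):
        super().__init__()
        self.n_priors = n_priors
        self.priors = None  # built lazily, on whatever device is requested

    def _init_priors(self, device):
        self.priors = [
            torch.distributions.Normal(torch.zeros(1, device=device),
                                       torch.ones(1, device=device))
            for _ in range(self.n_priors)
        ]

    def get_dist(self, i, device):
        try:
            dist = self.priors[i]
            if dist.loc.device.type != torch.device(device).type:
                raise RuntimeError("prior is on the wrong device")
        except (TypeError, RuntimeError):
            # priors not created yet, or created on a different device
            self._init_priors(device)
            dist = self.priors[i]
        return dist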
Overall, to handle this CPU/GPU device error, you need to construct the distribution from Tensor input parameters placed on the appropriate device. The main reason is that torch.distributions.Distribution unfortunately has no device attribute.
I just came across the same problem, and thanks to the other answers here for the pointers. I want to offer another option if you want a distribution inside a module, which is to override the to method of the module and manually call the to methods on the distribution's parameter tensors. I've only tested this with Uniform, but it works well here.
class MyModule(nn.Module):
    def __init__(self, ...):
        super().__init__()
        self.rng = Uniform(
            low=torch.zeros(3),
            high=torch.ones(3)
        )

    def to(self, *args, **kwargs):
        super().to(*args, **kwargs)
        self.rng.low = self.rng.low.to(*args, **kwargs)
        self.rng.high = self.rng.high.to(*args, **kwargs)
        return self  # keep `model = model.to(device)` working
Now you can put your model on the gpu as usual and self.rng.sample() will return a sample on the correct device.
You can solve the problem of "transferring non-parameter/buffer attributes to GPU" by overriding the _apply(self, fn) method of your network, like this:
def _apply(self, fn):
    # apply fn() to your modules
    for module in self.children():  # like 'ResNet_backbone'
        module._apply(fn)

    # apply fn() to your prior
    self.prior.attr1 = fn(self.prior.attr1)  # like 'MultivariateNormal.loc', needs to be a Tensor
    self.prior.attr2 = fn(self.prior.attr2)
    ···
    self.prior.attrN = fn(self.prior.attrN)

    # if we do not use register_buffer(Tensor),
    # apply fn() to your non-parameter/buffer attributes
    # (they need to be Tensors too)
    self.attr1 = fn(self.attr1)
    self.attr2 = fn(self.attr2)
    ···
    self.attrN = fn(self.attrN)

    return self  # nn.Module._apply is expected to return self
I have a script in tensorflow which contains custom tensorflow ops. I want to port the code to keras and I am not sure how to call the custom ops within keras code.
I want to use tensorflow within keras, so the tutorial I found so far describes the opposite of what I want: https://blog.keras.io/keras-as-a-simplified-interface-to-tensorflow-tutorial.html.
I also read about Lambda layers that can wrap an arbitrary custom function, yet I did not see an example for tf ops.
If you could provide a code snippet with a simple example of how to do that, I would be very grateful. For example, assume the tf op is:
outC = my_custom_op(inA, inB)
---EDIT:
A similar problem has been described here - essentially calling this custom op in keras - however I cannot grasp how to apply that solution to another example I want, for instance this one. This custom tf op is first compiled (for gpu) and then, so far, used within tensorflow as here, see line 40. It is clear to me how to use a custom (lambda) function wrapped in a Lambda layer; what I would like to understand is how to use the compiled custom ops if I use keras.
You can wrap arbitrary tensorflow functions in a keras Lambda layer and add them to your model. Minimal working example from this answer:
import tensorflow as tf
from keras.layers import Dense, Lambda, Input
from keras.models import Model
W = tf.random_normal(shape=(128,20))
b = tf.random_normal(shape=(20,))
inp = Input(shape=(10,))
x = Dense(128)(inp)
# Custom linear transformation
y = Lambda(lambda x: tf.matmul(x, W) + b, name='custom_layer')(x)
model = Model(inp, y)
I have a very short question which probably has a very simple answer, but I just can't figure it out, although I've tried for hours now.
I'm using the Tensorflow Estimator and I want to access the global step within my model_fn. I've tried tf.train.get_global_step, which returns a Tensor. I need the global_step as an integer, though (or as a string)!
So I've tried eval() (= tf.get_default_session().run(t)), but it doesn't work.
Cheers!
You can use tf.cast to cast the Tensor to int or string.
For example,
tf.cast(tf.train.get_global_step(), dtype=tf.int32)
See the reference here.
One way would be to parse it from the latest checkpoint file in the model_dir.
So assuming you can pass the model_dir into the model_fn (either through the params argument of tf.estimator.Estimator(..., params={'model_dir': 'path/to/model_dir'}) or through tf.flags.FLAGS), you can then use this utility function:
import os

import tensorflow as tf

def get_global_step_from_model_dir(model_dir):
    latest_checkpoint_file = tf.train.latest_checkpoint(model_dir)
    if latest_checkpoint_file is None:
        return 0
    else:
        return int(os.path.basename(latest_checkpoint_file).split('-')[-1])
Caffe has a layer type "Python".
For instance, this layer type can be used as a loss layer.
On other occasions it is used as an input layer.
What is this layer type?
How can this layer be used?
Prune's and Bharat's answers give the overall purpose of a "Python" layer: a general-purpose layer which is implemented in python rather than c++.
I intend this answer to serve as a tutorial for using "Python" layer.
A Tutorial for "Python" layer
what is a "Python" layer?
Please see the excellent answers of Prune and Bharat.
Pre-requisite
In order to use 'Python" layer you need to compile caffe with flag
WITH_PYTHON_LAYER := 1
set in 'Makefile.config'.
How to implement a "Python" layer?
A "Python" layer should be implemented as a python class derived from caffe.Layer base class. This class must have the following four methods:
import caffe

class my_py_layer(caffe.Layer):
    def setup(self, bottom, top):
        pass

    def reshape(self, bottom, top):
        pass

    def forward(self, bottom, top):
        pass

    def backward(self, top, propagate_down, bottom):
        pass
What are these methods?
def setup(self, bottom, top): This method is called once when caffe builds the net. This function should check that the number of inputs (len(bottom)) and the number of outputs (len(top)) are as expected.
You should also allocate internal parameters of the net here (i.e., self.add_blobs()), see this thread for more information.
This method has access to self.param_str - a string passed from the prototxt to the layer. See this thread for more information.
def reshape(self, bottom, top): This method is called whenever caffe reshapes the net. This function should allocate the outputs (each of the top blobs). The outputs' shape is usually related to the bottoms' shape.
def forward(self, bottom, top): This method implements the forward pass from bottom to top.
def backward(self, top, propagate_down, bottom): This method implements the backpropagation, it propagates the gradients from top to bottom. propagate_down is a Boolean vector of len(bottom) indicating to which of the bottoms the gradient should be propagated.
You can find some more information about the bottom and top inputs in this post.
Examples
You can see some examples of simplified python layers here, here and here.
Example of "moving average" output layer can be found here.
Trainable parameters
"Python" layer can have trainable parameters (like "Conv", "InnerProduct", etc.).
You can find more information on adding trainable parameters in this thread and this one. There's also a very simplified example in caffe git.
How to add a "Python" layer in a prototxt?
See Bharat's answer for details.
You need to add the following to your prototxt:
layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer' # python module name where your implementation is
    layer: 'AnchorTargetLayer'        # the name of the class implementation
    param_str: "'feat_stride': 16"    # optional parameters to the layer
  }
}
How to add a "Python" layer using pythonic NetSpec interface?
It's very simple:
import caffe
from caffe import layers as L
ns = caffe.NetSpec()
# define layers here...
ns.rpn_labels, ns.rpn_bbox_targets, \
    ns.rpn_bbox_inside_weights, ns.rpn_bbox_outside_weights = \
    L.Python(ns.rpn_cls_score, ns.gt_boxes, ns.im_info, ns.data,
             name='rpn-data',
             ntop=4,  # tell caffe to expect four output blobs
             python_param={'module': 'rpn.anchor_target_layer',
                           'layer': 'AnchorTargetLayer',
                           'param_str': '"\'feat_stride\': 16"'})
How to use a net with a "Python" layer?
Invoking python code from caffe is nothing you need to worry about. Caffe uses boost API to call python code from compiled c++.
What do you need to do?
Make sure the python module implementing your layer is in $PYTHONPATH so that it can be found when caffe imports it.
For instance, if your module my_python_layer.py is in /path/to/my_python_layer.py then
PYTHONPATH=/path/to:$PYTHONPATH $CAFFE_ROOT/build/tools/caffe train -solver my_solver.prototxt
should work just fine.
How to test my layer?
You should always test your layer before putting it to use.
Testing the forward function is entirely up to you, as each layer has a different functionality.
Testing the backward method is easy: since this method only implements the gradient of forward, it can be numerically tested automatically!
Check out test_gradient_for_python_layer testing utility:
import numpy as np
from test_gradient_for_python_layer import test_gradient_for_python_layer

# set the inputs
input_names_and_values = [('in_cont', np.random.randn(3, 4)),
                          ('in_binary', np.random.binomial(1, 0.4, (3, 1)))]
output_names = ['out1', 'out2']
py_module = 'folder.my_layer_module_name'
py_layer = 'my_layer_class_name'
param_str = 'some params'
propagate_down = [True, False]

# call the test
test_gradient_for_python_layer(input_names_and_values, output_names,
                               py_module, py_layer, param_str,
                               propagate_down)

# you are done!
Special Notice
It is worthwhile noting that python code runs on the CPU only. Thus, if you plan to have a Python layer in the middle of your net, you will see a significant degradation in performance when using a GPU. This happens because caffe needs to copy blobs from GPU to CPU before calling the python layer, and then copy them back to GPU to proceed with the forward/backward pass.
This degradation is far less significant if the python layer is either an input layer or the topmost loss layer.
Update: On Sep 19th, 2017 PR #5904 was merged into master. This PR exposes GPU pointers of blobs via the python interface.
You may access blob._gpu_data_ptr and blob._gpu_diff_ptr directly from python at your own risk.
Very simply, it's a layer in which you provide the implementation code, rather than using one of the pre-defined types -- which are all backed by efficient functions.
If you want to define a custom loss function, go ahead: write it yourself, and create the layer with type Python. If you have non-standard input needs, perhaps some data-specific pre-processing, no problem: write it yourself, and create the layer with type Python.
Python layers are different from C++ layers, which need to be compiled, have their parameters added to the proto file, and finally be registered in layer_factory. If you write a python layer, you don't need to worry about any of these things. Layer parameters can be defined as a string, which is accessible as a string in python. For example: if you have a parameter in a layer, you can access it using self.param_str, provided param_str was defined in your prototxt file. Like other layers, you need to define a class with the following functions:
Setup - Initialize your layer using parameters obtained from layer variables
Forward - Define the output of the layer given its input
Backward - Given the prediction and gradients from the next layer, compute the gradients for the previous layer
Reshape - Reshape your blob if needed
Prototxt example:
layer {
  name: 'rpn-data'
  type: 'Python'
  bottom: 'rpn_cls_score'
  bottom: 'gt_boxes'
  bottom: 'im_info'
  bottom: 'data'
  top: 'rpn_labels'
  top: 'rpn_bbox_targets'
  top: 'rpn_bbox_inside_weights'
  top: 'rpn_bbox_outside_weights'
  python_param {
    module: 'rpn.anchor_target_layer'
    layer: 'AnchorTargetLayer'
    param_str: "'feat_stride': 16"
  }
}
Here, the name of the layer is rpn-data, and bottom and top are the input and output details of the layer, respectively. python_param defines the parameters of the Python layer. 'module' specifies the file name of your layer: if the file 'anchor_target_layer.py' is located inside a folder called 'rpn', the parameter would be 'rpn.anchor_target_layer'. The 'layer' parameter is the name of your class, in this case 'AnchorTargetLayer'. 'param_str' is a parameter string for the layer, which here contains the value 16 for the key 'feat_stride'.
Unlike C++/CUDA layers, Python layers do not work in a multi-GPU setting in caffe as of now, so that is a disadvantage of using them.