AttributeError: module 'tensorflow' has no attribute 'py_func' - python

I'm trying to run code on Ubuntu that uses TensorFlow, and it gives me the error:
AttributeError: module 'tensorflow' has no attribute 'py_func'
How can I fix it?
The relevant part of the code:
for i in range(cfg.num_layers):
    neighbour_idx = tf.py_func(DP.knn_search, [batch_xyz, batch_xyz, cfg.k_n], tf.int32)
    sub_points = batch_xyz[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :]
    pool_i = neighbour_idx[:, :tf.shape(batch_xyz)[1] // cfg.sub_sampling_ratio[i], :]
    up_i = tf.py_func(DP.knn_search, [sub_points, batch_xyz, 1], tf.int32)
    input_points.append(batch_xyz)
    input_neighbors.append(neighbour_idx)
    input_pools.append(pool_i)
    input_up_samples.append(up_i)
    batch_xyz = sub_points
input_list = input_points + input_neighbors + input_pools + input_up_samples
input_list += [batch_features, batch_labels, batch_pc_idx, batch_cloud_idx]
It can't find tf.py_func.
PS: I tried using tf.compat.v1.py_func and it didn't work.

tf.py_func is designed for TensorFlow 1.
In TensorFlow 2, tf.numpy_function is a near-exact replacement; just drop the stateful argument (all tf.numpy_function calls are considered stateful). It is compatible with eager execution and tf.function.
tf.py_function is a close but not exact replacement: it passes TensorFlow tensors to the wrapped function instead of NumPy arrays, which provides gradients and can take advantage of accelerators.
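For example, a minimal sketch of the migration (np_square is a hypothetical stand-in for DP.knn_search, not part of the original code):
import numpy as np
import tensorflow as tf

# any plain NumPy function works; this one stands in for DP.knn_search
def np_square(x):
    return np.square(x).astype(np.int32)

x = tf.constant([1, 2, 3], dtype=tf.int32)

# TF1: y = tf.py_func(np_square, [x], tf.int32)   # removed in TF2
# TF2 near-exact replacement; the wrapped function still receives NumPy arrays:
y = tf.numpy_function(np_square, [x], tf.int32)

# TF2 alternative that passes tf.Tensors instead of NumPy arrays,
# so it can provide gradients and run on accelerators:
z = tf.py_function(lambda t: t * t, [x], tf.int32)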

Related

Correct way to run tensorflow function or any function using map on tf dataset

Below is the data augmentation function that I created.
import tensorflow as tf
import tensorflow_addons as tfa

def augment_data(ds):
    seed = tf.random.Generator.from_seed(1).normal([])
    seed_2d = (1, 2)

    # flipped images
    ds_flipped = ds.map(lambda img, lbl: (tf.image.flip_left_right(img), lbl))

    # induce random brightness
    ds_rnb = ds.map(lambda img, lbl:
                    (tf.image.stateless_random_brightness(img,
                                                          max_delta=0.65,
                                                          seed=seed_2d),
                     lbl))
    print('ds_flipped, ds_rnb ran successfully')

    # centre crop
    ds_cc = ds.map(lambda img, lbl:
                   (tf.image.central_crop(img,
                                          central_fraction=0.8),
                    lbl))

    ds_ran_zoom = ds.map(lambda img, lbl:
                         (tf.keras.preprocessing.image.random_zoom(img,
                                                                   zoom_range=(.30, .70)),
                          lbl))
    return ds_flipped, ds_rnb, ds_cc, ds_ran_zoom
The functions for flipped images and random brightness work fine, but tf.image.central_crop and tf.keras.preprocessing.image.random_zoom do not.
Calling augment_data(ds) gives the following errors.
Running tf.image.central_crop gives me:
ValueError: image should either be a Tensor with rank = 3 or
rank = 4. Had rank = None.
Running tf.keras.preprocessing.image.random_zoom gives me:
in transform_matrix_offset_center *
o_x = float(x) / 2 + 0.5
TypeError: float() argument must be a string or a number, not 'NoneType'
But if I run the central_crop function without map, the code below works fine
for image, label in train_data:
    _ = tf.image.central_crop(image, central_fraction=0.8)
print('tf.image.central_crop ran successfully')
outputs
tf.image.central_crop ran successfully
If we run tf.keras.preprocessing.image.random_zoom in the same way, we get this error
for image, label in train_data:
    _ = tf.keras.preprocessing.image.random_zoom(image, zoom_range=(.30, .70))
RuntimeError: affine matrix has wrong number of rows
Running tf.keras.preprocessing.image.random_zoom requires un-batching the dataset, so the code below works fine
for image, label in train_data.unbatch().take(1):
    _ = tf.keras.preprocessing.image.random_zoom(image, zoom_range=(.30, .70))
print('tf.keras.preprocessing.image.random_zoom ran successfully')
I have created a Google Colab notebook to replicate the issue.
What is the best way to run a TensorFlow function on a tf dataset using map?
How can I tell whether a given function can run on a tf dataset via map?
How can I create a function that runs on both batched and un-batched datasets?
As you can see above, most of the functions run on a single image, but when run through map, different functions throw different errors.
The problem is that the shape of the images and labels is unknown. You should call set_shape at the end of the read_tfrecord function, e.g. decoded_image.set_shape([img_x, img_y, channels]), and do the same for the label.
If you set the image and label shapes in the dataset, most TensorFlow functions will work through map, both batched and unbatched.
tf.keras.preprocessing.image.random_zoom is problematic because it only takes a 3D TensorFlow tensor as input and outputs a NumPy array.
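For illustration, a rough sketch of such a read_tfrecord function, assuming 224x224 RGB JPEGs and an integer label (the feature keys and sizes are placeholders, not taken from the question):
import tensorflow as tf

IMG_X, IMG_Y, CHANNELS = 224, 224, 3  # assumed image size

def read_tfrecord(example):
    features = {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    }
    parsed = tf.io.parse_single_example(example, features)
    decoded_image = tf.io.decode_jpeg(parsed["image"], channels=CHANNELS)
    decoded_image = tf.cast(decoded_image, tf.float32) / 255.0
    decoded_image.set_shape([IMG_X, IMG_Y, CHANNELS])  # give map() a static shape
    label = tf.cast(parsed["label"], tf.int32)
    label.set_shape([])  # scalar label
    return decoded_image, label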

tensorflow create variable during map_fn step

Is there some way to create variables in a map_fn loop, as shown in the code below? How can I solve this error while keeping a variable in the loop? The info log does not really help me either, so am I getting some TensorFlow concept fundamentally wrong here? [TensorFlow 1.14.0, Python 3.6.8]
import tensorflow as tf

### function called in map_fn
def opt_variable(theta):
    init_theta = lambda: theta
    var_theta = tf.get_variable(dtype=tf.float32, initializer=tf.Variable(init_theta))
    ### ... other steps which need variable type to optimize
    return tf.constant(3.)  # some return

def iterate_over_cols(theta):
    iter_cols = tf.range(5)
    map_theta = tf.map_fn(lambda x: (opt_variable(theta[x])),
                          iter_cols, dtype=tf.float32)
    return map_theta

### example run
t_test = tf.convert_to_tensor([1.4, 3.1, 4.6, 6.3], dtype=tf.float32)
iterate_over_cols(t_test)
leads to this error:
ValueError: Cannot use 'map_18/while/strided_slice' as input to
'map_18/while/Variable/Assign' because 'map_18/while/strided_slice' is
in a while loop. See info log for more details.
It seems that you cannot use nested while loops in this version; that means you cannot use the output of one map_fn as the input of another.
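Not from the answer above, but one common way to sidestep the error in TF 1.x is to create any variables once, outside map_fn, and only read them inside the mapped function; a rough sketch under that assumption:
import tensorflow as tf  # TensorFlow 1.x

# create the variable outside the map_fn while-loop
theta_var = tf.get_variable("theta_var", shape=[], dtype=tf.float32,
                            initializer=tf.zeros_initializer())

def opt_step(theta_i):
    # only reads theta_var; no variable creation inside the loop
    return theta_i + theta_var

t_test = tf.convert_to_tensor([1.4, 3.1, 4.6, 6.3], dtype=tf.float32)
mapped = tf.map_fn(opt_step, t_test, dtype=tf.float32)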

Using setTermCriteria of cv2 and svm

I am trying to use setTermCriteria with an SVM, but when I use it I get the error below:
AttributeError: 'cv2.ml_SVM' object has no attribute 'setTermCritera_MAX_ITER'
This is how I am using it:
svm.setTermCritera_MAX_ITER=10000
svm.setTermCriteria_EPS = 1e-3
I don't get an error, but it also has no effect, when I use it this way:
cv2.setTermCritera_MAX_ITER=10000
cv2.setTermCriteria_EPS = 1e-3
When I try the method below
svm.setTermCriteria(10000)
SystemError: new style getargs format but argument is not a tuple
Which is the right way to use it in Python with OpenCV?
The error message is clear: a tuple is needed. Let's look at the default value:
svm = cv2.ml.SVM_create()
svm.getTermCriteria()
returns (3, 1000, 1.1920928955078125e-07). So if you want to set only the maximum number of iterations, you should call:
svm.setTermCriteria((cv2.TermCriteria_MAX_ITER, 10000, 0))
and if you want to keep the same epsilon criterion and also set the maximum number of iterations:
svm.setTermCriteria((cv2.TermCriteria_MAX_ITER + cv2.TermCriteria_EPS, 10000, 1.1920928955078125e-07))

Use and modify variables in tensorflow bijectors

In the reference paper for TensorFlow Distributions (now Probability), it is mentioned that TensorFlow Variables can be used to construct Bijector and TransformedDistribution objects, i.e.:
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions
tf.enable_eager_execution()
shift = tf.Variable(1., dtype=tf.float32)
myBij = tfp.bijectors.Affine(shift=shift)
# Normal distribution centered in zero, then shifted to 1 using the bijection
myDistr = tfd.TransformedDistribution(
    distribution=tfd.Normal(loc=0., scale=1.),
    bijector=myBij,
    name="test")
# 2 samples of a normal centered at 1:
y = myDistr.sample(2)
# 2 samples of a normal centered at 0, obtained using inverse transform of myBij:
x = myBij.inverse(y)
I would now like to modify the shift variable (say, I might compute gradients of some likelihood function as a function of the shift and update its value), so I do:
shift.assign(2.)
gx = myBij.forward(x)
I would expect that gx = y + 1, but I see that gx = y... And indeed, myBij.shift still evaluates to 1.
If I try to modify the bijector directly, i.e.:
myBij.shift.assign(2.)
I get
AttributeError: 'tensorflow.python.framework.ops.EagerTensor' object has no attribute 'assign'
Computing gradients also does not work as expected:
with tf.GradientTape() as tape:
    gx = myBij.forward(x)
grad = tape.gradient(gx, shift)
Yields None, as well as this exception when the script ends:
Exception ignored in: <bound method GradientTape.__del__ of <tensorflow.python.eager.backprop.GradientTape object at 0x7f529c4702e8>>
Traceback (most recent call last):
File "~/.local/lib/python3.6/site-packages/tensorflow/python/eager/backprop.py", line 765, in __del__
AttributeError: 'NoneType' object has no attribute 'context'
What am I missing here?
Edit: I got it working with a graph/session, so it seems there is an issue with eager execution...
Note: I have tensorflow version 1.12.0 and tensorflow_probability version 0.5.0
If you are using eager mode, you will need to recompute everything from the variable forward. It is best to capture this logic in a function:
import tensorflow as tf
import tensorflow_probability as tfp
tfd = tfp.distributions

tf.enable_eager_execution()

shift = tf.Variable(1., dtype=tf.float32)

def f():
    myBij = tfp.bijectors.Affine(shift=shift)
    # Normal distribution centered in zero, then shifted to 1 using the bijection
    myDistr = tfd.TransformedDistribution(
        distribution=tfd.Normal(loc=0., scale=1.),
        bijector=myBij,
        name="test")
    # 2 samples of a normal centered at 1:
    y = myDistr.sample(2)
    # 2 samples of a normal centered at 0, obtained using inverse
    # transform of myBij:
    x = myBij.inverse(y)
    return x, y

x, y = f()
shift.assign(2.)
gx, _ = f()
Regarding gradients, you will need to wrap calls to f() in a GradientTape.
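For example, a minimal sketch of that (using the f() and shift defined above):
with tf.GradientTape() as tape:
    # rebuild the bijector and sample inside the tape, so the dependence
    # on `shift` is actually recorded
    x, y = f()
grad = tape.gradient(y, shift)  # now a real gradient instead of None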

Strange behavior of scan in Theano when outputs has last dimension equal to one

I get a strange error that I can't make sense of when compiling a scan operator in Theano.
When outputs_info is initialized with a last dimension equal to one, I get this error:
TypeError: ('The following error happened while compiling the node', forall_inplace,cpu,
scan_fn}(TensorConstant{4}, IncSubtensor{InplaceSet;:int64:}.0, <TensorType(float32, vector)>),
'\n', "Inconsistency in the inner graph of scan 'scan_fn' : an input and an output are
associated with the same recurrent state and should have the same type but have type
'TensorType(float32, (True,))' and 'TensorType(float32, vector)' respectively.")
while I don't get any error if this dimension is set to anything greater than one.
This error happens on both GPU and CPU targets, with Theano 0.7, 0.8.0 and 0.8.2.
Here is a piece of code to reproduce the error:
import theano
import theano.tensor as T
import numpy as np

def rec_fun(prev_output, bias):
    return prev_output + bias

n_steps = 4

# with state_size > 1, compilation runs smoothly
state_size = 2
bias = theano.shared(np.ones((state_size), dtype=theano.config.floatX))
(outputs, updates) = theano.scan(fn=rec_fun,
                                 sequences=[],
                                 outputs_info=T.zeros([state_size,]),
                                 non_sequences=[bias],
                                 n_steps=n_steps
                                 )
print outputs.eval()

# with state_size == 1, compilation fails
state_size = 1
bias = theano.shared(np.ones((state_size), dtype=theano.config.floatX))
(outputs, updates) = theano.scan(fn=rec_fun,
                                 sequences=[],
                                 outputs_info=T.zeros([state_size,]),
                                 non_sequences=[bias],
                                 n_steps=n_steps
                                 )
# compilation fails here
print outputs.eval()
Compilation thus behaves differently depending on state_size.
Is there a workaround that handles both cases, state_size == 1 and state_size > 1?
Changing
outputs_info=T.zeros([state_size,])
to
outputs_info=T.zeros_like(bias)
makes it work properly for the case of state_size == 1.
Minor explanation and different solution
I noticed a crucial difference between the two cases.
Add these lines of code right after the bias declaration in both cases:
bias = ....
print bias.broadcastable
print T.zeros([state_size,]).broadcastable
The results are, for the first case, where your code works:
(False,)
(False,)
and for the second case, where it seems to break down:
(False,)
(True,)
So what happened is that when you add the two tensors of the same dimensions (bias and T.zeros) but with different broadcastable patterns, the result inherits the broadcastable pattern from bias. This ends up causing Theano to decide that the recurrent state's input and output are not the same type.
T.zeros_like works because it uses the bias variable to generate the zeros tensor.
Another way to fix your problem is to change the broadcasting pattern, like so:
outputs_info=T.patternbroadcast(T.zeros([state_size,]), (False,)),
