Cannot find Transformer function in module in caffe - python

Here is the error I get when I run the transformer function for preprocessing the image:
Traceback (most recent call last):
File "tst.py", line 18, in
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
AttributeError: 'module' object has no attribute 'Transformer'

I actually figured it out.
The io.py file did not contain the class Transformer. Maybe it was missing in the modified caffe. I took the class from the original BVLC caffe and pasted it into my io.py file.
Link to io.py of BVLC caffe
https://github.com/BVLC/caffe/blob/master/python/caffe/io.py
This works for me.
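For reference, once the Transformer class is present in caffe/io.py, a typical preprocessing setup looks roughly like the sketch below; the prototxt, weights, and image file names are placeholders, not files from the question.
import caffe

# Hypothetical model files; substitute your own deploy prototxt and weights.
net = caffe.Net('deploy.prototxt', 'model.caffemodel', caffe.TEST)

# Standard preprocessing once caffe.io.Transformer is available.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))     # HxWxC -> CxHxW
transformer.set_raw_scale('data', 255)           # [0,1] floats -> [0,255]
transformer.set_channel_swap('data', (2, 1, 0))  # RGB -> BGR

image = caffe.io.load_image('cat.jpg')           # hypothetical image path
net.blobs['data'].data[...] = transformer.preprocess('data', image)
output = net.forward()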


TypeError: can't pickle weakref objects

I am trying to pickle.dump() an object with a model property that stores a TensorFlow model.
This is the minimal code to reproduce my error (at least on my machine, macOS Big Sur 11.4 with an Anaconda Python 3.6 environment and TensorFlow 2.3.1).
import tensorflow as tf
import pickle as pkl

class OBJ():
    def __init__(self):
        self.model = tf.keras.models.Sequential()

save_location = 'your save location here'
with open(save_location, 'wb') as f:
    # pickling the object fails because self.model holds a TensorFlow model
    pkl.dump(OBJ(), f)
At runtime this exception is thrown:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: can't pickle weakref objects
Reading online forums, I realised that this error has something to do with the fact that some objects (like TensorFlow models) cannot be pickled.
Does anybody know a method to pickle an "unpicklable" object? What makes these objects special, so that they can't be pickled?
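One common workaround, sketched below under the assumption that the model architecture can be rebuilt on load, is to keep the Keras model out of the pickled state: override __getstate__/__setstate__ so that only the model's weights travel through pickle. The file name and the trivial Sequential model are placeholders, not a definitive recipe.
import pickle as pkl
import tensorflow as tf

class OBJ():
    def __init__(self):
        self.model = tf.keras.models.Sequential()

    def __getstate__(self):
        # Drop the unpicklable Keras model; keep only its weights.
        state = self.__dict__.copy()
        state['model'] = None
        state['model_weights'] = self.model.get_weights()
        return state

    def __setstate__(self, state):
        weights = state.pop('model_weights', None)
        self.__dict__.update(state)
        self.model = tf.keras.models.Sequential()  # rebuild the architecture here
        if weights:
            self.model.set_weights(weights)

with open('obj.pkl', 'wb') as f:  # hypothetical file name
    pkl.dump(OBJ(), f)
Alternatively, the model can be saved separately with model.save() and only its path stored in the pickled state.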

Tensorflow Backend - Bug in "model._make_predict_function"

There is a bug while running the TensorFlow code; the error appears like this:
Traceback (most recent call last):
File "app.py", line 76, in <module>
model = deepmoji_emojis(maxlen, PRETRAINED_PATH)
File "/home/lifeofpy/LifeofPy/AI Photographer Project/Text-to-Color/deepmoji/model_def.py", line 35, in deepmoji_emojis
model._make_predict_function()
AttributeError: 'Functional' object has no attribute '_make_predict_function'
and the file app.py is like this:
# print('Loading model from {}.'.format(PRETRAINED_PATH))
model = deepmoji_emojis(maxlen, PRETRAINED_PATH)
model.summary()
model._make_predict_function()
I think the error is caused by the function model._make_predict_function.
I would appreciate any comments on this issue. Thanks!
Use: model.make_predict_function()
instead of: model._make_predict_function()
I tried to find _make_predict_function() with Google, and it seems it was a private function in old Keras (in keras.engine.training.py), but Keras is now part of TensorFlow and the function was removed from the code. I can't find _make_predict_function() in tensorflow.keras.engine.training.py.
Some old posts suggest using model.predict() instead of model._make_predict_function() before starting threads, while other posts suggest duplicating the model for every thread. But maybe the newer TensorFlow code resolved the problem with running it in threads, and maybe it doesn't need this function any more.
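For illustration, a minimal sketch of the suggested replacement on TF 2.x; the tiny Dense model here is just a stand-in for the deepmoji model loaded in app.py.
import tensorflow as tf

# Stand-in for model = deepmoji_emojis(maxlen, PRETRAINED_PATH)
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.summary()

# TF 2.x: public replacement for the removed private helper.
model.make_predict_function()

# In many cases simply calling predict() is enough; it builds the
# prediction function on first use.
print(model.predict(tf.zeros((1, 4))))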

pytorch torch.jit.trace returns function instead of torch.jit.ScriptModule

I need to run in c++ a pre-trained pytorch nn model (trained in python) to make predictions.
To do so, I'm following the instructions on how to load a pytorch model in c++ given here: https://pytorch.org/tutorials/advanced/cpp_export.html
But when I try to get the torch.jit.ScriptModule via tracing as stated in the first step of the tutorial:
traced_script_module = torch.jit.trace(model, (input_tensor_1, input_tensor_2))
Instead of returning a torch.jit.ScriptModule, it returns a function:
print(type(traced_script_module))
<type 'function'>
Which, when I run:
traced_script_module.save("model.pt")
then leads into the following error:
Traceback (most recent call last):
File "serialize_model.py", line 60, in <module>
traced_script_module.save("model.pt")
AttributeError: 'function' object has no attribute 'save'
Any ideas on what I'm doing wrong?
Thanks for asking, Jatentaki. I was using PyTorch 0.4 in Python, and when I updated to 1.0 it worked.
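For reference, a minimal sketch of the same workflow on PyTorch 1.0 or later; the two-input module and tensor shapes below are made up, but tracing an nn.Module this way returns a ScriptModule that can be saved and later loaded from C++ with torch::jit::load.
import torch
import torch.nn as nn

class TwoInputNet(nn.Module):
    # Hypothetical stand-in for the pre-trained model.
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 2)

    def forward(self, a, b):
        return self.fc(a + b)

model = TwoInputNet().eval()
input_tensor_1 = torch.randn(1, 8)
input_tensor_2 = torch.randn(1, 8)

traced_script_module = torch.jit.trace(model, (input_tensor_1, input_tensor_2))
print(type(traced_script_module))       # a ScriptModule subclass, not a plain function
traced_script_module.save("model.pt")   # serialized module for the C++ API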

Tensorflow's API: seq2seq

I have been following the https://github.com/kvfrans/twitch/blob/master/main.py tutorial to create and train an RNN-based chatbot using TensorFlow. From what I understand, the tutorial was written for an older version of TensorFlow, so some parts are outdated and give me errors like:
Traceback (most recent call last):
File "main.py", line 33, in <module>
outputs, last_state = tf.nn.seq2seq.rnn_decoder(inputs, initialstate, cell, loop_function=None, scope='rnnlm')
AttributeError: 'module' object has no attribute 'seq2seq'
I fixed some of them, but can't figure out what the alternative to tf.nn.seq2seq.rnn_decoder is and what the new module's parameters should be. What I have fixed so far:
tf.nn.rnn_cell.BasicLSTMCell(embedsize) changed to tf.contrib.rnn.BasicLSTMCell(embedsize)
tf.nn.rnn_cell.DropoutWrapper(lstm_cell, keep_prob) changed to tf.contrib.rnn.DropoutWrapper(lstm_cell, keep_prob)
tf.nn.rnn_cell.MultiRNNCell([lstm_cell] * numlayers) changed to tf.contrib.rnn.MultiRNNCell([lstm_cell] * numlayers)
Can someone please help me figure out what tf.nn.seq2seq.rnn_decoder should become?
I think this is the one you need:
tf.contrib.legacy_seq2seq.rnn_decoder
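For illustration, a rough sketch of how the decoder call might look on TF 1.x with the contrib module; the placeholder shapes, sequence length, and variable names are assumptions based on the question, not the tutorial's actual code.
import tensorflow as tf  # TF 1.x, where tf.contrib is still available

embedsize = 128
numlayers = 2
seq_len = 10

# Build separate LSTM cells per layer instead of reusing one cell object.
cell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.BasicLSTMCell(embedsize) for _ in range(numlayers)])

# `inputs` is a list of [batch, embedsize] tensors, one per time step.
inputs = [tf.placeholder(tf.float32, [None, embedsize]) for _ in range(seq_len)]
initialstate = cell.zero_state(tf.shape(inputs[0])[0], tf.float32)

# Drop-in replacement for tf.nn.seq2seq.rnn_decoder:
outputs, last_state = tf.contrib.legacy_seq2seq.rnn_decoder(
    inputs, initialstate, cell, loop_function=None, scope='rnnlm')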

NameError: name 'custom_data_home' is not defined

from sklearn.datasets import fetch_mldata
mnist = fetch_mldata('MNIST original', data_home=custom_data_home)
Traceback (most recent call last):
File "<pyshell#6>", line 1, in <module>
mnist = fetch_mldata('MNIST original', data_home=custom_data_home)
NameError: name 'custom_data_home' is not defined
I am getting a NameError. I searched the net for solutions but didn't get any relevant answers.
I even tried installing "custom_data_home" using easy_install; it says it could not find it.
Please help me with this.
I don't know anything about sklearn, but it looks like you are trying to use the example from this page: http://scikit-learn.org/stable/datasets/mldata.html
In that example, custom_data_home is a variable containing the path where you want the data stored. If you leave it off, it says it should default to just data.
Basically, in your script you have not defined custom_data_home. That is what the NameError is telling you.
If you are going to use a variable like custom_data_home, you have to define it in some way. Your script doesn't know what custom_data_home is.
custom_data_home = '/path/to/my/data'
mnist = fetch_mldata('MNIST original', data_home=custom_data_home)
That should work.
