How do I understand the steps parameter in predict_generator? - python

I'm not sure about the role of the steps parameter in predict_generator. My understanding is that steps represents the amount of data produced by the generator, but some people deny this and others confirm it.
I still couldn't find the right answer through practice. My approach is this: I use openslide to read a 5000x5000 image and cut it into 100x100 tiles to predict. Normally I can read 2500 tiles of size 100x100, but when I set steps=2500 it fails.
This is the code:
# coding=utf-8
from __future__ import division
from keras.models import load_model
import openslide
import numpy as np
import Get_file_name
import generator
import matplotlib.pyplot as plt

def predict_model(img):
    model = load_model(Get_file_name.model_path[0])
    y = model.predict_generator(generator.pre_gen(img), steps=30)
    print(y)

predict_model("cats_and_dogs_5")
This is the error:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 612, in data_generator_task
generator_output = next(self._generator)
StopIteration
Traceback (most recent call last):
File "/home/zh/视频/MitosisDetection/mitosisDetection/predict.py", line 17, in <module>
predict_model("cats_and_dogs_5")
File "/home/zh/视频/MitosisDetection/mitosisDetection/predict.py", line 12, in predict_model
y= model.predict_generator(generator.pre_gen(img),steps=2500)
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/models.py", line 1183, in predict_generator
verbose=verbose)
File "/usr/local/lib/python3.5/dist-packages/keras/legacy/interfaces.py", line 88, in wrapper
return func(*args, **kwargs)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 2108, in predict_generator
outs = self.predict_on_batch(x)
File "/usr/local/lib/python3.5/dist-packages/keras/engine/training.py", line 1696, in predict_on_batch
outputs = self.predict_function(ins)
File "/usr/local/lib/python3.5/dist-packages/keras/backend/tensorflow_backend.py", line 2229, in __call__
feed_dict=feed_dict)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 778, in run
run_metadata_ptr)
File "/usr/local/lib/python3.5/dist-packages/tensorflow/python/client/session.py", line 961, in _run
% (np_val.shape, subfeed_t.name, str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape () for Tensor 'conv2d_1_input:0', which has shape '(?, 64, 64, 3)'
Process finished with exit code 1
If steps represents the amount of data produced by my generator, why does setting steps=2500 fail? If steps does not represent the amount of data produced by the generator, what does it represent, and how do I control how much data my generator produces?
Any advice would be much appreciated; many people around me don't understand this either!
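For what it's worth, steps in Keras' predict_generator counts batches drawn from the generator, not individual samples, and the generator is expected to keep yielding for at least that many batches (the StopIteration in the traceback above means it was exhausted early). A minimal sketch of a generator shaped for this API; pre_gen's actual contents aren't shown in the question, so the tile list and batch size here are assumptions:

import numpy as np

def tile_batches(tiles, batch_size=25):
    """Yield batches of tiles forever; predict_generator pulls
    exactly `steps` batches from this generator."""
    while True:  # loop so the generator is never exhausted mid-prediction
        for start in range(0, len(tiles), batch_size):
            yield np.stack(tiles[start:start + batch_size])

# Hypothetical usage: 2500 tiles in batches of 25 means
# steps = 2500 / 25 = 100 batches, not 2500.
# y = model.predict_generator(tile_batches(tiles), steps=100)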

Related

TensorFlow inference from a SavedModel: Expecting int64_t value for attr strides, got numpy.int32

I'm trying to use a pre-trained TensorFlow model to classify an image.
I downloaded the EfficientNet model from TensorFlow Hub.
The Python code loads the model from the .pb file.
It then loads a sample image, resizes it to 224x224, scales the RGB values to [0, 1], and adds another dimension to make it 4D (a collection of images), as the model expects.
col_x is then used for inference. The final input shape given to the model is (1, 224, 224, 3).
import os
import tensorflow as tf
from tensorflow import keras
print(tf.version.VERSION)
path = os.path.join(os.getcwd(), 'efficientnet')
model = keras.models.load_model(path)
from matplotlib import pyplot as plt
import matplotlib.image as mpimg
from PIL import Image
import numpy as np
img = Image.open("data/zebra.jpg")
img = img.resize((224, 224), Image.ANTIALIAS)
x = tf.keras.preprocessing.image.img_to_array(img)
plt.imshow(img)
plt.show()
norm_x = x / 255
col_x = norm_x[np.newaxis,...]
plt.imshow(col_x[0])
plt.show()
model(col_x)
But I get this error:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 926, in conv2d
"dilations", dilations)
tensorflow.python.eager.core._FallbackException: Expecting int64_t value for attr strides, got numpy.int32
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<pyshell>", line 1, in <module>
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\keras\engine\network.py", line 719, in call
convert_kwargs_to_constants=base_layer_utils.call_context().saving)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\keras\engine\network.py", line 888, in _run_internal_graph
output_tensors = layer(computed_tensors, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\keras\engine\base_layer.py", line 968, in __call__
outputs = self.call(cast_inputs, *args, **kwargs)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\keras\layers\convolutional.py", line 207, in call
outputs = self._convolution_op(inputs, self.kernel)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 1106, in __call__
return self.conv_op(inp, filter)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 638, in __call__
return self.call(inp, filter)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 237, in __call__
name=self.name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\nn_ops.py", line 2014, in conv2d
name=name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 933, in conv2d
data_format=data_format, dilations=dilations, name=name, ctx=_ctx)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\ops\gen_nn_ops.py", line 1022, in conv2d_eager_fallback
ctx=ctx, name=name)
File "C:\Program Files\Python36\lib\site-packages\tensorflow\python\eager\execute.py", line 60, in quick_execute
inputs, attrs, num_outputs)
tensorflow.python.framework.errors_impl.UnknownError: Failed to get convolution algorithm. This is probably because cuDNN failed to initialize, so try looking to see if a warning log message was printed above. [Op:Conv2D]
I was able to reproduce your error. Please change the last line to predict, evaluate, or whichever function you want to use for inference:
model.predict(col_x)
You are using model(col_x), which calls the model object directly instead of going through its inference method.
Also, for the other error, I do not think your system is using the GPU even if one is available; please install matching versions of TensorFlow and CUDA.
Visit this answer, Which TensorFlow and CUDA version combinations are compatible?, to correct it.
Cheers.
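As a quick check on the cuDNN side, you can ask TensorFlow whether it sees a GPU at all (assuming TF 2.1 or later, which the eager-mode traceback suggests):

import tensorflow as tf

# An empty list means TensorFlow is running CPU-only, which points
# to a TensorFlow/CUDA version mismatch rather than a code problem.
print(tf.config.list_physical_devices('GPU'))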

Tensorflow Input Feeding Error when moving from Python 2 to 3

I had TensorFlow code that worked well with Python 2, and I need to switch to Python 3. Besides minor changes, this is the big error I am getting; something is causing an input feeding error.
Traceback (most recent call last):
File "main.py", line 530, in <module>
train(config)
File "main.py", line 337, in train
loss,train_op = trainer.step(sess,batch)
File "trainer.py", line 33, in step
loss, train_op = sess.run([self.loss,self.train_op],feed_dict=feed_dict)
File "anaconda2/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 905, in run
run_metadata_ptr)
File "anaconda2/envs/tf_gpu/lib/python3.6/site-packages/tensorflow/python/client/session.py", line 1116, in _run
str(subfeed_t.get_shape())))
ValueError: Cannot feed value of shape (0,) for Tensor 'simple_lstm/pre_emb_mat:0', which has shape '(?, 100)'
How could I resolve it?
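The cause isn't visible from the snippet, but the (0,) shape means an empty array reached the pre_emb_mat placeholder; Python 3 changes such as dict.values() and map() returning one-pass iterators are common culprits when porting code that builds an embedding matrix. A hedged diagnostic sketch; check_feed_dict is a hypothetical helper you could call in trainer.step just before sess.run:

import numpy as np

def check_feed_dict(feed_dict):
    """Print the shape fed to each placeholder so the empty (0,)
    value can be traced back to where it is built."""
    for placeholder, value in feed_dict.items():
        print(placeholder.name, np.asarray(value).shape)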

Problem with TensorFlow running in multithreading in Python: the function works well without a thread but not in a thread

I am doing object detection using a free Python library, which can be found at this link: https://github.com/OlafenwaMoses/ImageAI (it uses TensorFlow and Keras as the backend). The problem is that the object detection function works well in a single Python script, but when I call the same function in a thread (multithreading), I get a bunch of errors. After some debugging, I saw that the thread fails to load the model, i.e. detector.loadModel(detection_speed="flash"), which works when called in main. Even trying to declare and pass the model as a global results in an error.
I have also tried to load the detection model in the same thread, but in vain.
Here is my script:
import tensorflow as tf
#import for threading
import threading
import queue
#import for PIR
import time
import RPi.GPIO as GPIO
import os
#import for camera
from picamera import PiCamera
from PIL import Image
#import function objectDetect function from singularObjectDetection
from imageai.Detection import ObjectDetection
from singularObjectDetection import objectDetect
### Function calling the object detection function
def read():
    print("read " + os.getcwd())
    execution_path = os.getcwd()
    detector = ObjectDetection()
    detector.setModelTypeAsTinyYOLOv3()
    detector.setModelPath(os.path.join(execution_path, "yolo-tiny.h5"))
    detector.loadModel(detection_speed="flash")
    custom = detector.CustomObjects(person=True, dog=True)
    while True:
        objectDetect("image1.jpg")

print("in main")
q = queue.Queue()
t2 = threading.Thread(target=read, daemon=True)
t2.start()  # note: the original had t1.start(), but the thread variable is t2
Errors that I am getting:
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/pi/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1050, in _run
subfeed, allow_tensor=True, allow_operation=False)
File "/home/pi/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3488, in as_graph_element
return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
File "/home/pi/tensorflow/lib/python3.5/site-packages/tensorflow/python/framework/ops.py", line 3567, in _as_graph_element_locked
raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("Placeholder_12:0", shape=(64,), dtype=float32) is not an element of this graph.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/home/pi/tensorflow/lib/python3.5/site-packages/threadfinal.py", line 57, in read
detector.loadModel(detection_speed="flash")
File "/home/pi/tensorflow/lib/python3.5/site-packages/imageai/Detection/__init__.py", line 213, in loadModel
model.load_weights(self.modelPath)
File "/home/pi/tensorflow/lib/python3.5/site-packages/keras/engine/network.py", line 1166, in load_weights
f, self.layers, reshape=reshape)
File "/home/pi/tensorflow/lib/python3.5/site-packages/keras/engine/saving.py", line 1058, in load_weights_from_hdf5_group
K.batch_set_value(weight_value_tuples)
File "/home/pi/tensorflow/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py", line 2470, in batch_set_value
get_session().run(assign_ops, feed_dict=feed_dict)
File "/home/pi/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 887, in run
run_metadata_ptr)
File "/home/pi/tensorflow/lib/python3.5/site-packages/tensorflow/python/client/session.py", line 1053, in _run
'Cannot interpret feed_dict key as Tensor: ' + e.args[0])
TypeError: Cannot interpret feed_dict key as Tensor: Tensor Tensor("Placeholder_12:0", shape=(64,), dtype=float32) is not an element of this graph.
It looks like you're creating more than one graph, perhaps one graph per thread.
Instead of relying on TF's default implicit graph I recommend you create a graph and explicitly enter it with with g.as_default(): on all threads you're creating.
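A minimal sketch of that fix, assuming the TF1-era API shown in the traceback; the ObjectDetection calls come from the question, everything else is hypothetical:

import threading
import tensorflow as tf
from imageai.Detection import ObjectDetection

# Capture the default graph in the main thread so the worker thread can
# enter the same graph instead of implicitly creating its own.
graph = tf.get_default_graph()

def read():
    with graph.as_default():  # resolve tensors against the shared graph
        detector = ObjectDetection()
        detector.setModelTypeAsTinyYOLOv3()
        detector.setModelPath("yolo-tiny.h5")
        detector.loadModel(detection_speed="flash")

t2 = threading.Thread(target=read, daemon=True)
t2.start()
t2.join()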

writing out neural network inference test code

I am trying to modify inference code for a pruned SqueezeNet network.
However, I am facing the following error. Could anyone comment on how to get around this CPU/GPU backend error?
[kevin#linux SqueezeNet-Pruning]$ python predict.py --image "3_100.jpg" --model "model_prunned" --num_class "2"
prediction in progress
Traceback (most recent call last):
File "predict.py", line 63, in <module>
prediction = predict_image(imagepath)
File "predict.py", line 47, in predict_image
output = model(input)
File "/usr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/home/kevin/Documents/Grive/Personal/Coursera/Machine_Learning/pruning/Pruning-CNN/SqueezeNet-Pruning/finetune.py", line 39, in forward
x = self.features(x)
File "/usr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/usr/lib/python3.7/site-packages/torch/nn/modules/container.py", line 92, in forward
input = module(input)
File "/usr/lib/python3.7/site-packages/torch/nn/modules/module.py", line 477, in __call__
result = self.forward(*input, **kwargs)
File "/usr/lib/python3.7/site-packages/torch/nn/modules/conv.py", line 313, in forward
self.padding, self.dilation, self.groups)
RuntimeError: Expected object of backend CPU but got backend CUDA for argument #2 'weight'
[kevin#linux SqueezeNet-Pruning]$
I think it relates to the use of the GPU; the program might work with the correct GPU configuration. Or you can delete lines 39 and 40.
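The error says the convolution weights are on CUDA while the input tensor is on the CPU, so the two need to be moved to the same device. A minimal sketch, assuming predict_image builds the input on the CPU (the helper name is hypothetical):

import torch

def predict_on_device(model, batch):
    """Run inference with the model weights and the input batch
    on the same device."""
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device)
    batch = batch.to(device)
    with torch.no_grad():  # inference only, no gradient tracking
        return model(batch)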

Training py_faster_rcnn on VOC dataset with one class

I am using py_faster_rcnn to train the system for one class ('person'). Originally, it gave me an assertion error similar to this post:
How to train new fast-rcnn imageset
So I made the following changes to my imdb.py file:
for b in range(len(boxes)):
    if boxes[b][2] < boxes[b][0]:
        boxes[b][0] = 0
assert (boxes[:, 2] >= boxes[:, 0]).all()
After the above changes, I get this new error. Has anyone come across this error, or does anyone know what I may be doing wrong?
Process Process-1:
Traceback (most recent call last):
File "/usr/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/usr/lib/python2.7/multiprocessing/process.py", line 114, in run
self._target(*self._args, **self._kwargs)
File "./tools/train_faster_rcnn_alt_opt.py", line 130, in train_rpn
max_iters=max_iters)
File "/home/microway/test/pytest/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 134, in train_net
pretrained_model=pretrained_model)
File "/home/microway/test/pytest/py-faster-rcnn/tools/../lib/fast_rcnn/train.py", line 53, in __init__
self.solver.net.layers[0].set_roidb(roidb)
File "/home/microway/test/pytest/py-faster-rcnn/tools/../lib/roi_data_layer/layer.py", line 68, in set_roidb
self._shuffle_roidb_inds()
File "/home/microway/test/pytest/py-faster-rcnn/tools/../lib/roi_data_layer/layer.py", line 26, in _shuffle_roidb_inds
widths = np.array([r['width'] for r in self._roidb])
KeyError: 'width'
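The traceback shows the roidb entries are missing the 'width' key that roi_data_layer expects. With py-faster-rcnn this is often a stale cached roidb pickle under data/cache, so deleting the cache and regenerating is one fix; alternatively, a hedged sketch that fills the missing keys in place (imdb and roidb follow the names in the traceback, and the helper itself is hypothetical):

import PIL.Image

def add_image_sizes(imdb, roidb):
    """Fill in the 'width'/'height' keys that roi_data_layer expects."""
    for i, entry in enumerate(roidb):
        if 'width' not in entry:
            # PIL's Image.size is (width, height)
            entry['width'], entry['height'] = \
                PIL.Image.open(imdb.image_path_at(i)).size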
