Who is Jenkins? Error /Users/jenkins/miniconda... - python

This might be a noob question...
I'm following this tutorial on Emotion Recognition With Python, OpenCV and a Face Dataset
When I run the training code I get the following error:
OpenCV Error: Bad argument (Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4.) in predict, file /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp, line 132
Traceback (most recent call last):
File "trainModel.py", line 64, in <module>
correct = run_recognizer()
File "trainModel.py", line 52, in run_recognizer
pred, conf = fishface.predict(image)
cv2.error: /Users/jenkins/miniconda/1/x64/conda-bld/conda_1486587097465/work/opencv-3.1.0/build/opencv_contrib/modules/face/src/fisher_faces.cpp:132: error: (-5) Wrong input image size. Reason: Training and Test images must be of equal size! Expected an image with 122500 elements, but got 4. in function predict
It is complaining that the image size is not 350×350 = 122500, although all the images in my dataset folder are the correct size (350×350 px).
Also, my user name is not 'jenkins' as the path /Users/jenkins/miniconda… suggests; I'm not sure where that comes from or how to replace it with the correct path to fisher_faces.cpp.
Thanks for your help!

Don't worry about that path. The OpenCV library you are using was built on someone else's machine, and that machine's paths got baked into the error messages. The path is just telling you which OpenCV source file raised the error, namely fisher_faces.cpp.
(Jenkins is a popular build-automation server; the library was evidently built on a Jenkins build machine.)
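As for the size error itself: "Expected an image with 122500 elements, but got 4" suggests that whatever is being passed to predict() is not a 350×350 image at all; a 4-element value (for example a face rectangle's (x, y, w, h)) has exactly 4 elements. A minimal sanity check, sketched with NumPy (the helper name and the hard-coded 350×350 size are assumptions taken from the question):

```python
import numpy as np

EXPECTED_SHAPE = (350, 350)  # training size from the question (350*350 = 122500 elements)

def check_face_input(image):
    # Hypothetical pre-flight check before calling fishface.predict(image):
    # a 4-element input is typically a face rectangle (x, y, w, h),
    # not the cropped grayscale face itself.
    arr = np.asarray(image)
    if arr.shape != EXPECTED_SHAPE:
        raise ValueError(
            "predict() expects a %s grayscale image, got shape %s (%d elements)"
            % (EXPECTED_SHAPE, arr.shape, arr.size)
        )
    return arr
```

Running this on the variable you feed to predict() will tell you immediately whether you accidentally passed a rectangle instead of the cropped face.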

Related

"Read less bytes than requested" error while trying to load tensorflow using tf.saved_model.load()

I've been trying to run a web server locally that my colleague created, which contains a deployed model that processes object detection requests. There have been a lot of compatibility issues because he's on Windows and I'm on macOS.
I've been able to resolve all the compatibility issues with the dependencies and get the web server running, but I've been running into this issue when trying to load the object detection model from the saved_model directory. The structure of the directory is:
saved_model/
    variables/
        variables.data-00000-of-00001
        variables.index
    saved_model.pb
I created a test script to try to isolate the problem, which looks as follows:
import tensorflow as tf
model = tf.saved_model.load("./saved_model")
# print model summary
print("loaded model")
print(model.summary())
When I run the script, I get the following error, which is the same error that I get when running the web server.
Traceback (most recent call last):
File "/Users/alexandrospouroullis/programming-projects/elevat3d/backend/workspace/training_demo/exported-models/my_model_1024/testFile.py", line 4, in
model = tf.saved_model.load("./saved_model")
File "/Users/alexandrospouroullis/opt/miniconda3/envs/elevat3d-api/lib/python3.10/site-packages/tensorflow/python/saved_model/load.py", line 782, in load
result = load_partial(export_dir, None, tags, options)["root"]
File "/Users/alexandrospouroullis/opt/miniconda3/envs/elevat3d-api/lib/python3.10/site-packages/tensorflow/python/saved_model/load.py", line 912, in load_partial
loader = Loader(object_graph_proto, saved_model_proto, export_dir,
File "/Users/alexandrospouroullis/opt/miniconda3/envs/elevat3d-api/lib/python3.10/site-packages/tensorflow/python/saved_model/load.py", line 189, in init
self._restore_checkpoint()
File "/Users/alexandrospouroullis/opt/miniconda3/envs/elevat3d-api/lib/python3.10/site-packages/tensorflow/python/saved_model/load.py", line 507, in _restore_checkpoint
load_status = saver.restore(variables_path, self._checkpoint_options)
File "/Users/alexandrospouroullis/opt/miniconda3/envs/elevat3d-api/lib/python3.10/site-packages/tensorflow/python/training/tracking/util.py", line 1430, in restore
object_graph_string = reader.get_tensor(base.OBJECT_GRAPH_PROTO_KEY)
File "/Users/alexandrospouroullis/opt/miniconda3/envs/elevat3d-api/lib/python3.10/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 66, in get_tensor
return CheckpointReader.CheckpointReader_GetTensor(
IndexError: Read less bytes than requested
I understand that things like Docker were made for exactly this; I actually created a docker-compose.yaml file and managed to get it up and running as well, but it ran into this exact same issue.
I've looked around on the web, but this doesn't seem to be a common issue, and as far as I know no remedy exists.
What could be going wrong?
There are two things you should check:
Make sure your weights file (under the variables folder) is properly loaded into your Docker environment (check the size of the file). At times the file is not properly loaded and is just 2 KB, whereas the actual file will be at least 0.5 GB and upwards.
Make sure you have enough memory while loading the model. TF models take up considerable resources; at least 16 GB of RAM is recommended.
I faced a similar issue. Mine was because of a corrupt weights file - point 1.
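The first check can be automated before handing the directory to TensorFlow. A small sketch (the file name comes from the directory listing in the question; the 1 MB threshold is an arbitrary assumption, since a real detection model is usually hundreds of MB):

```python
import os

def check_saved_model(path="./saved_model", min_bytes=1_000_000):
    # Hypothetical pre-flight check: a weights file truncated while being
    # copied between machines often shows up as a few KB instead of
    # hundreds of MB, which later surfaces as "Read less bytes than requested".
    data_file = os.path.join(path, "variables", "variables.data-00000-of-00001")
    size = os.path.getsize(data_file)
    if size < min_bytes:
        raise RuntimeError(
            "%s is only %d bytes - likely truncated or corrupt" % (data_file, size)
        )
    return size
```

If this raises, re-copy the saved_model directory (ideally as an archive, to avoid line-ending or transfer mangling between Windows and macOS) before debugging anything else.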

Why can't I do TIN Interpolation in QGIS?

I want to do TIN Interpolation on a layer, but when I fill in all the fields with the right data (vector layer, interpolation attribute, extent, etc.), the algorithm does not run and shows me this message:
Traceback (most recent call last):
File "C:/PROGRA~1/QGIS 3.14/apps/qgis/./python/plugins\processing\algs\qgis\TinInterpolation.py", line 188, in processAlgorithm
writer.writeFile(feedback)
Exception: unknown
Execution failed after 0.08 seconds
Does anybody have an idea about it? Thank you.
I had the same issue. I converted a DXF file into a shapefile and then tried to use TIN interpolation, but it didn't work. Then I realized that my DXF file contained some very, very small lines and polylines; after removing them, the interpolation went just fine. I don't really have an explanation, but maybe this helps you.
It is because of some small lines in your file that the interpolation cannot handle. You can use the Generalizer3 plugin in QGIS to remove those lines.
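The idea behind both answers can be sketched without QGIS: measure each polyline and drop the degenerate ones before interpolating. (The coordinate-list representation and the min_length threshold are assumptions for illustration; in QGIS you would do the equivalent with a geometry filter or the Generalizer3 plugin.)

```python
from math import hypot

def polyline_length(coords):
    # coords is a list of (x, y) vertices
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(coords, coords[1:]))

def drop_tiny_lines(features, min_length=0.001):
    # Keep only polylines at least min_length long (in layer units);
    # very short or zero-length lines are what reportedly break
    # the TIN interpolation writer.
    return [f for f in features if polyline_length(f) >= min_length]
```

Tune min_length to your layer's units; in a projected CRS in metres, anything below a few millimetres is almost certainly DXF conversion noise.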

Dlib, Python, face detection with a neural network

Whenever I try to load the trained model of the CNN-based face detector in dlib, I get this error:
detector = dlib.simple_object_detector('mmod_human_face_detector.dat')
Traceback (most recent call last):
File "/home/hasans/Desktop/1/face_recognition1/face_detector.py", line 51, in <module>
detector = dlib.simple_object_detector('mmod_human_face_detector.dat')
RuntimeError: Unsupported version found when deserializing a scan_fhog_pyramid object
How do I get rid of this error?
I don't know where you got your code from, but the CNN-based face detector is used differently, as shown in this official demo.
Init looks like:
cnn_face_detection_model = dlib.cnn_face_detection_model_v1('mmod_human_face_detector.dat')
(I used it successfully)
Warning: the Python wrapper needed for this was only recently added (18.8.17) and, as of now (3 days later), is only available in git, not in any official release!
The DNN-based face detector is now officially available for use in Python in the latest release, 19.6. Everyone can download it from
https://dlib.net
Cheers!

CRITICAL: tensorflow:Category has no images - validation

I'm trying to retrain the Inception v3 model in TensorFlow on my own custom categories. I have downloaded some data and formatted it into directories. When I run the Python script, it creates bottlenecks for the images, and then on the first training step (step 0) it hits a critical error where it tries to modulo by 0. This happens in the get_image_path function when computing mod_index, which is index % len(category_list), so category_list must be empty, right?
Why is this happening and how can I prevent it?
EDIT: Here's the exact output I'm seeing inside Docker:
2016-07-04 01:27:52.005912: Step 0: Train accuracy = 40.0%
2016-07-04 01:27:52.006025: Step 0: Cross entropy = 1.109777
CRITICAL:tensorflow:Category has no images - validation.
Traceback (most recent call last):
File "tensorflow/examples/image_retraining/retrain.py", line 824, in <module>
tf.app.run()
File "/usr/local/lib/python2.7/dist-packages/tensorflow/python/platform/app.py", line 30, in run
sys.exit(main(sys.argv))
File "tensorflow/examples/image_retraining/retrain.py", line 794, in main
bottleneck_tensor))
File "tensorflow/examples/image_retraining/retrain.py", line 484, in get_random_cached_bottlenecks
bottleneck_tensor)
File "tensorflow/examples/image_retraining/retrain.py", line 392, in get_or_create_bottleneck
bottleneck_dir, category)
File "tensorflow/examples/image_retraining/retrain.py", line 281, in get_bottleneck_path
category) + '.txt'
File "tensorflow/examples/image_retraining/retrain.py", line 257, in get_image_path
mod_index = index % len(category_list)
ZeroDivisionError: integer division or modulo by zero
Fix:
The issue happens when one of your subfolders contains too few images.
I faced the same issue when the total number of images in a particular category was less than 30; try increasing the image count to resolve the issue.
Reason:
For each label (subfolder), TensorFlow splits the images into three sets (training, testing, and validation), placing each image based on a probability value (calculated from a hash of the image's file name).
An image is placed in a given set only if its probability value falls below that set's size (the training, testing, or validation percentage).
Now, if a label has few images (say 25), the validation percentage defaults to 10, and it can easily happen that no image's probability value falls below it, so no image is placed in the validation set.
Later, when all bottlenecks have been created and TF tries to calculate the validation accuracy, it first logs a fatal message:
CRITICAL:tensorflow:Category has no images - validation.
and then continues executing and crashes when it divides by the validation list size (which is 0).
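To see why a small category can end up with an empty validation set, here is a rough sketch of the split logic (this is a simplification, not the exact retrain.py code; the function name and the sha1-based percentage are stand-ins for what retrain.py computes from each file name):

```python
import hashlib

def assign_category(file_name, validation_pct=10, testing_pct=10):
    # A stable hash of the file name is mapped to a number in 0-99;
    # that bucket decides the split. With only a handful of images,
    # it is quite possible that no file hashes below validation_pct,
    # which leaves the validation set empty.
    digest = hashlib.sha1(file_name.encode("utf-8")).hexdigest()
    percentage_hash = int(digest, 16) % 100
    if percentage_hash < validation_pct:
        return "validation"
    if percentage_hash < validation_pct + testing_pct:
        return "testing"
    return "training"
```

Because the assignment is deterministic per file name, rerunning the script never fixes an empty set; only adding images (or patching the split, as below) does.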
I've modified retrain.py to ensure that there is at least one image in validation (line 201*):
if len(validation_images) == 0:
    validation_images.append(base_name)
elif percentage_hash < validation_percentage:
(*) The line number may change in future releases. Look at the comments.
I had the same problem when running retrain.py: I had set the --model_dir argument incorrectly, so the inception directory got created inside the flower_photos directory.
Please check if there are any directories in the flower_photos directory without any images.
This happens if you have too few images. Like Ashwin suggested, have at least 30 images.
Also, the names of your folders are important: somehow a folder name can't contain an underscore (_).
E.g. these names didn't work: dettol_bottle, dettol_soap, dove_soap, lifebuoy_bottle
These names worked: dettolbottle, dettolsoap, dovesoap, lifebuoybottle
For me, this error was caused by having folders in the training directory that did not have images in them. I was following the same "Poets" tutorial and ended up putting directories with subdirectories in the image dir. Once I removed those and placed only directories with images directly in them (no sub dirs) the error no longer occurred and I was able to successfully train my model.
I was trying to train using my own set of images (pictures of dogs instead of flowers), and ran into this same problem.
I identified that the problem for me ended up being that my folder names (category names) weren't present in the imagenet_synset_to_human_label_map.txt file that gets loaded in the inception data that we are modifying.
By changing the name of my image folder from bichon to poodle, this started working, since poodle is in the inception map and bichon is not.
For me it was a "-" in my folder names. The moment I corrected it, the error vanished.
As Ashwin Patti has answered, there is a possibility that the split directory for validation has no images due to a lack of images in the original label directory.
This explanation is supported by the warning when you try to retrain with labels that have less than 20 images:
WARNING: Folder has less than 20 images, which may cause issues.
This error went away for me after adding >50 images to each category
I would also like to add my own experience:
Don't have spaces
For me, it worked when a folder name contained only the characters a to z: no spaces, no symbols, no nothin'.
E.g. "I'm a folder" is wrong, but "imAFolder" would work.
As Matthieu said in the comments, the solution proposed:
# make sure none of the lists is empty, otherwise it will raise an error
# when validating / testing
if validation_percentage > 0 and not validation_images:
    validation_images.append(training_images.pop())
if testing_percentage > 0 and not testing_images:
    testing_images.append(training_images.pop())
works for me.
I'm wondering what the message "CRITICAL:tensorflow:Category has no images - validation" really means. Is it related to the error that was fixed, or could it mean a loss of accuracy? I mean, if few images were used, would the results not be as expected?
I had this exact same problem. My folders were named correctly; however, my files were named name_1.jpg, name_2.jpg. Removing the underscores fixed the issue.

PIL cannot deal with images generated by uvccapture

I use uvccapture to take pictures and want to process them with Python and the Python Imaging Library (PIL).
The problem is that PIL cannot open those images. It throws the following error message:
Traceback (most recent call last):
File "process.py", line 6, in <module>
im = Image.open(infile)
File "/usr/lib/python2.7/dist-packages/PIL/Image.py", line 1980, in open
raise IOError("cannot identify image file")
IOError: cannot identify image file
My python code looks like this:
import Image
infile = "snap.jpg"
im = Image.open(infile)
I tried saving the images in different formats before processing them, but this does not help. Changing file permissions and owners does not help either.
The only thing that helps is to open the images, for example with jpegoptim, and overwrite the old image with the optimized one. After this process, PIL can deal with the images.
What is the problem here? Are the files generated by uvccapture corrupt?
EDIT: I also found out that it is not possible to open the images generated with uvccapture with scipy either. Running the command
im = scipy.misc.imread("snap.jpg")
produces the same error.
IOError: cannot identify image file
I only found a workaround for this problem: I processed the captured picture with jpegoptim, and afterwards PIL could deal with the optimized image.
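One plausible explanation (an assumption on my part, but consistent with the jpegoptim workaround): MJPEG frames from webcams often omit the DHT (Define Huffman Table) segment and rely on a standard table, which older PIL builds cannot decode; re-encoding the file writes the table back in. A quick way to check your files (has_huffman_table is a hypothetical helper):

```python
def has_huffman_table(path):
    # Scan a JPEG file for a DHT marker (0xFF 0xC4). MJPEG-style frames
    # that omit it decode fine in many viewers but not in older PIL builds;
    # re-encoding (e.g. with jpegoptim) inserts the table.
    with open(path, "rb") as f:
        data = f.read()
    return b"\xff\xc4" in data
```

If the raw uvccapture file lacks the marker but the jpegoptim-processed copy has it, that confirms the missing-Huffman-table theory for your setup.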
