TensorFlow 2 Object Detection doesn't work? - python

I'm trying to train an object detection model in TensorFlow 2, but ever since I moved from TensorFlow 1 to TensorFlow 2 I've been having problems. Whenever I start training, I get the same error shown in the following GitHub thread: https://github.com/tensorflow/models/issues/9706
With numpy 1.20.0 I get:
NotImplementedError: Cannot convert a symbolic Tensor (cond_2/strided_slice:0) to a numpy array.
With numpy 1.19.5 I get:
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
I tried with TF 2.2.2 as well; both cases produce the same errors.
The only difference is that when I change Python to 3.6 I get the same output as the last error message (I'm also using Anaconda):
Traceback (most recent call last):
  File "model_main_tf2.py", line 31, in <module>
    import tensorflow.compat.v2 as tf
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\__init__.py", line 39, in <module>
    from tensorflow.python import pywrap_tensorflow as _pywrap_tensorflow
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 83, in <module>
    raise ImportError(msg)
ImportError: Traceback (most recent call last):
  File "D:\Maurice_Doc\AI\anaconda3\envs\tensorflow\lib\site-packages\tensorflow\python\pywrap_tensorflow.py", line 64, in <module>
    from tensorflow.python._pywrap_tensorflow_internal import *
ImportError: DLL load failed: The specified module could not be found.
Failed to load the native TensorFlow runtime.
See https://www.tensorflow.org/install/errors for some common reasons and solutions. Include the entire stack trace above this error message when asking for help.
I've followed the tutorial at https://tensorflow-object-detection-api-tutorial.readthedocs.io/en/latest/training.html from start to finish. It worked while the tutorial targeted TensorFlow 1.x, but for some reason, since it switched to TensorFlow 2.x, I've run into a lot of issues.
Does anybody know how to fix this issue?

Please try using Python 3.6.
This fixed it for other users reporting the same problem.
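For Anaconda users, recreating the environment on Python 3.6 might look like the sketch below. The environment name is hypothetical, and the version pins are assumptions based on this thread (the question mentions TF 2.2.2, and the symbolic-Tensor error is reported with numpy 1.20.0), not a verified recipe:

```shell
# Hypothetical environment name; pins are illustrative, not prescriptive.
conda create -n tf2-objdet python=3.6 -y
conda activate tf2-objdet
# Pin numpy below 1.20 because the symbolic-Tensor error above is
# reported with numpy 1.20.0.
pip install tensorflow==2.2.2 "numpy<1.20"
```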

Related

Python does not work when executed from Java on an M1 Mac

I have a bash script that runs a python script:
#!/bin/bash
restest-env/bin/python3 script.py $1 $2 $3
When executed from the terminal, everything works fine. However, when executed from a Java application with:
ProcessBuilder pb = new ProcessBuilder(command, String.join(" ",commandArgs));
Process proc = pb.start();
proc.getOutputStream();
String stdout = IOUtils.toString(proc.getInputStream(), Charset.defaultCharset());
String stderr = IOUtils.toString(proc.getErrorStream(), Charset.defaultCharset());
proc.waitFor();
I get this M1 chip-related error:
Traceback (most recent call last):
File "/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/__init__.py", line 23, in <module>
from . import multiarray
File "/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/multiarray.py", line 10, in <module>
from . import overrides
File "/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/overrides.py", line 6, in <module>
from numpy.core._multiarray_umath import (
ImportError: dlopen(/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "ml/python-scripts/al_predictor.py", line 2, in <module>
import numpy as np
File "/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/__init__.py", line 140, in <module>
from . import core
File "/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/__init__.py", line 49, in <module>
raise ImportError(msg)
ImportError:
IMPORTANT: PLEASE READ THIS FOR ADVICE ON HOW TO SOLVE THIS ISSUE!
Importing the numpy C-extensions failed. This error can happen for
many reasons, often due to issues with your setup or how NumPy was
installed.
We have compiled some common reasons and troubleshooting tips at:
https://numpy.org/devdocs/user/troubleshooting-importerror.html
Please note and check the following:
* The Python version is: Python3.8 from "/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/bin/python3"
* The NumPy version is: "1.23.1"
and make sure that they are the versions you expect.
Please carefully study the documentation linked above for further help.
Original error was: dlopen(/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so, 0x0002): tried: '/Users/giulianomirabella/Desktop/RESTest/ml/restest-env/lib/python3.8/site-packages/numpy/core/_multiarray_umath.cpython-38-darwin.so' (mach-o file, but is an incompatible architecture (have 'arm64', need 'x86_64'))
So the strange thing is that Python works fine when executed directly from the terminal, but fails when executed from Java. I cannot use conda. Any ideas?
I had the same issue: I was using Java's ProcessBuilder to run a terminal command that launches a Python project.
My command ran fine in the terminal but not when run from the Java program. Running 'uname -p' in a manually opened terminal returned 'arm', but when Java ran it, it returned 'x86_64 i386'. Basically, Java was running as an x86_64 process while my Python install was arm, so it failed.
To solve the issue I installed JDK 18.0.2 and ran my Java program using that JDK. It worked after this.
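A quick way to confirm this kind of architecture mismatch is to compare what the machine reports when the same command runs from a native terminal versus from the JVM ('arch' being a macOS utility is an assumption about your setup):

```shell
# Print the architecture the current process runs as.
# In a native Apple Silicon terminal this reports arm64; a child shell
# spawned by an x86_64 JVM (under Rosetta) reports x86_64 instead.
machine="$(uname -m)"
echo "running as: $machine"
```

On macOS you can also force a native run of a single command with 'arch -arm64 <command>', instead of switching JDKs.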

When importing keras on my Raspberry Pi 4 I get the same error

I have installed TensorFlow and all its libraries following various websites, and my terminal tells me they are installed and upgraded to their latest versions, but when I try to run the code in Python it keeps giving me the same error. In case you were wondering, the Python code itself works fine on my laptop.
The error reads as follows:
tensorflow/core/platform/hadoop/hadoop_file_system.cc:132] HadoopFileSystem load error: libhdfs.so: cannot open shared object file: No such file or directory
Traceback (most recent call last):
File "/media/pi/8F3B-E71C/TRachscope/livefeed_tensorflow_model.py", line 5, in <module>
from keras import models
File "/home/pi/.local/lib/python3.7/site-packages/keras/__init__.py", line 25, in <module>
from keras import models
File "/home/pi/.local/lib/python3.7/site-packages/keras/models.py", line 19, in <module>
from keras import backend
File "/home/pi/.local/lib/python3.7/site-packages/keras/backend.py", line 39, in <module>
from tensorflow.python.eager.context import get_config
ImportError: cannot import name 'get_config' from 'tensorflow.python.eager.context' (/home/pi/.local/lib/python3.7/site-packages/tensorflow_core/python/eager/context.py)

Loading Model Error using Pixellib in Python

For context, I am running an Apple Silicon Mac and have used the Rosetta terminal + Miniconda to create a venv that runs Python 3.7.
Here is the code I am trying to run.
from pixellib.instance import instance_segmentation
segment_image = instance_segmentation()
segment_image.load_model("mask_rcnn_coco.h5")
And this is the error below. I think it may be due to issues with GPU access, but I cannot be sure. I have been working on it for a few days.
If using Keras pass *_constraint arguments to layers.
Traceback (most recent call last):
File "/Users/USERNAME/PycharmProjects/test/main.py", line 16, in <module>
segment_image.load_model("mask_rcnn_coco.h5")
File "/Users/USERNAME/miniconda3/envs/cowsUpdate/lib/python3.7/site-packages/pixellib/instance/__init__.py", line 65, in load_model
self.model.load_weights(model_path, by_name= True)
File "/Users/USERNAME/miniconda3/envs/cowsUpdate/lib/python3.7/site-packages/pixellib/instance/mask_rcnn.py", line 2110, in load_weights
hdf5_format.load_weights_from_hdf5_group_by_name(f, layers)
File "/Users/USERNAME/miniconda3/envs/cowsUpdate/lib/python3.7/site-packages/tensorflow_core/python/keras/saving/hdf5_format.py", line 718, in load_weights_from_hdf5_group_by_name
original_keras_version = f.attrs['keras_version'].decode('utf8')
AttributeError: 'str' object has no attribute 'decode'
I encountered a similar issue when using tf.keras.models.load_weights(). I downgraded h5py from 2.10 to 2.8.0 on TensorFlow 2.0.0 and then it worked; maybe you can give that a try.
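In pip terms, that downgrade might look like the line below. The pin comes from the answer above; whether 2.8.0 specifically (as opposed to another pre-3.0 h5py release) is right for your TensorFlow version is an assumption to verify:

```shell
# Pin h5py to the version reported working with TF 2.0.0 above.
pip install "h5py==2.8.0"
```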

How do you fix the Python error when trying to import talib?

I'm trying to import TA-Lib but keep getting the error below. I'm using Python 3.8 and numpy 1.21. I tried several of the solutions I saw on here, but it isn't working. I did download the whl from this site, https://www.lfd.uci.edu/~gohlke/pythonlibs/#ta-lib, for my computer and Python version. It downloaded successfully, but when I import it I get this error:
import talib
Traceback (most recent call last):
File "<ipython-input-67-1ee486ccef90>", line 1, in <module>
import talib
File "C:\Users\thean\anaconda3\lib\site-packages\talib\__init__.py", line 52, in <module>
from ._ta_lib import (
File "talib\_ta_lib.pyx", line 1, in init talib._ta_lib
ValueError: numpy.ndarray size changed, may indicate binary incompatibility. Expected 88 from C header, got 80 from PyObject
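Note that this is the same "Expected 88 from C header, got 80 from PyObject" message as in the first question above: a compiled extension built against a different NumPy ABI than the NumPy actually installed. A commonly reported remedy (an assumption, not confirmed in this thread) is to reinstall so the two match:

```shell
# Upgrade numpy, then force-reinstall the TA-Lib wrapper so its compiled
# extension is rebuilt/refetched against the installed numpy ABI.
pip install --upgrade numpy
pip install --force-reinstall --no-cache-dir TA-Lib
```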

Tensorflow=1.12.0 AttributeError: module 'tensorflow' has no attribute 'feature_column'

I am trying to go through a TensorFlow tutorial that uses tf.feature_column, but when running it I encounter this error.
I have tensorflow=1.12.0 installed. I am running it on Python 3.6.8.
This looks to be the most recent stable package of tensorflow and the docs say that Python 3.6 is supported. I have also checked the tensorflow package files and found that feature_column is included.
Any idea why this error is still persisting?
Full error:
Traceback (most recent call last):
File "tensorflow.py", line 1, in <module>
import tensorflow as tf
File "/Users/blakecarroll/SFInsuretech/virtEnv1/tensorflow.py", line 50, in <module>
categorical_object_feat_cols = [tf.feature_column.embedding_column(tf.feature_column.categorical_column_with_hash_bucket(key=col,hash_bucket_size=1000), dimension = len(df[col].unique())) for col in categorical_columns if df[col].dtype=='O']
File "/Users/blakecarroll/SFInsuretech/virtEnv1/tensorflow.py", line 50, in <listcomp>
categorical_object_feat_cols = [tf.feature_column.embedding_column(tf.feature_column.categorical_column_with_hash_bucket(key=col,hash_bucket_size=1000), dimension = len(df[col].unique())) for col in categorical_columns if df[col].dtype=='O']
AttributeError: module 'tensorflow' has no attribute 'feature_column'
This is happening because your script is named tensorflow.py, which makes 'import tensorflow as tf' import your script itself. Rename your script to something else and this should be resolved.
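The shadowing effect is easy to reproduce with any module name. The sketch below uses the stdlib 'json' module as a stand-in for tensorflow, and a hypothetical temp directory:

```shell
# Create a local file whose name collides with a real module.
mkdir -p /tmp/shadow_demo
printf 'value = "this is the local file"\n' > /tmp/shadow_demo/json.py
cd /tmp/shadow_demo
# Python puts the current directory first on sys.path for 'python -c',
# so the local json.py wins over the standard library module.
python3 -c 'import json; print(json.__file__)'
# prints the path to /tmp/shadow_demo/json.py, not the stdlib json
```

The same resolution order is why a script named tensorflow.py shadows the real tensorflow package when run from its own directory.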
