Could not run the sound classification demo application in OpenVINO (Python)

To run the sound classification demo in OpenVINO, I followed the steps below:
cd /opt/intel/openvino_2021/install_dependencies
sudo -E ./install_openvino_dependencies.sh
for env setting: source /opt/intel/openvino_2021/bin/setupvars.sh
cd /opt/intel/openvino_2021/deployment_tools/model_optimizer/install_prerequisites
sudo ./install_prerequisites.sh
git clone https://github.com/openvinotoolkit/open_model_zoo.git
Then I placed the cloned repo in the deployment_tools directory.
sudo python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/downloader.py --name aclnet
sudo python3 /opt/intel/openvino_2021/deployment_tools/open_model_zoo/tools/downloader/converter.py --name aclnet
Here I got an error:
/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.4) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Unable to locate Model Optimizer. Use --mo or run setupvars.sh/setupvars.bat from the OpenVINO toolkit.
The aclnet model was downloaded and produced an .onnx file in the public folder. Now how do I convert the .onnx file to IR format (.xml and .bin)?
I tried the command below too, but I still get the same error:
sudo python3 ./mo.py --input_model ~/public/aclnet/aclnet_des_53.onnx --output_dir ~/public/aclnet
https://docs.openvinotoolkit.org/latest/omz_demos_sound_classification_demo_python.html
Can anyone please help with this?

This error happens because the script attempts to locate Model Optimizer using the environment variables set by the OpenVINO™ toolkit's setupvars.sh script. You can override this heuristic with the --mo option:
python3 converter.py --mo my/openvino/path/model_optimizer/mo.py --name aclnet
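The lookup the converter performs can be sketched roughly as follows (a minimal stdlib illustration, not the actual downloader code; the environment variable name and fallback path are assumptions based on the 2021 toolkit layout). Note also that running the tool with plain sudo typically drops the variables you exported in your user shell, which would explain why sourcing setupvars.sh beforehand did not help:

```python
import os
from pathlib import Path

def locate_mo(explicit_mo=None):
    """Rough sketch of how the converter finds Model Optimizer: an
    explicit --mo path always wins; otherwise it falls back to the
    environment set up by setupvars.sh (variable name assumed here)."""
    if explicit_mo is not None:
        return Path(explicit_mo)
    root = os.environ.get("INTEL_OPENVINO_DIR")
    if root is None:
        # This is the situation behind the "Unable to locate Model
        # Optimizer" message: the setupvars.sh variables are not
        # visible in the process environment (e.g. stripped by sudo).
        raise RuntimeError("Unable to locate Model Optimizer. "
                           "Use --mo or run setupvars.sh")
    return Path(root) / "deployment_tools" / "model_optimizer" / "mo.py"
```

Either passing --mo explicitly or running the converter in the same (non-sudo) shell where setupvars.sh was sourced should satisfy this lookup.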

Related

How to resolve CMake Error: Could not find a package configuration file provided by "boost_python3"

I tried to install the lanelet2 library according to the GitHub installation guide at https://github.com/fzi-forschungszentrum-informatik/Lanelet2.
When I perform catkin build I get the following error:
Errors << lanelet2_python:cmake /home/student/catkin_ws/logs/lanelet2_python/build.cmake.000.log
CMake Error at /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake:117 (find_package):
Could not find a package configuration file provided by "boost_python3"
(requested version 1.71.0) with any of the following names:
boost_python3Config.cmake
boost_python3-config.cmake
Add the installation prefix of "boost_python3" to CMAKE_PREFIX_PATH or set
"boost_python3_DIR" to a directory containing one of the above files. If
"boost_python3" provides a separate development package or SDK, be sure it
has been installed.
My OS is Ubuntu 20.04 with ROS noetic. The build is performed inside a venv with Python Version 3.8.10.
The command python is pointing to python3. I've also installed the following dependencies:
sudo apt-get install ros-noetic-rospack ros-noetic-catkin ros-noetic-mrt-cmake-modules
sudo apt-get install libboost-dev libeigen3-dev libgeographic-dev libpugixml-dev libpython3-dev libboost-python-dev python3-catkin-tools
Does anyone have an idea how to resolve this error?
See neutrinoyu's comment at https://github.com/ethz-asl/kalibr/issues/368#issuecomment-651726289
/kalibr/Schweizer-Messer/numpy_eigen/cmake/add_python_export_library.cmake:89
change
list(APPEND BOOST_COMPONENTS python3)
to
list(APPEND BOOST_COMPONENTS python)
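As background on why the rename works: Ubuntu 20.04 ships Boost 1.71, whose CMake package config exports per-version Python component names, and resolves the unversioned name through its own logic. A sketch of what that config can satisfy (the exact versioned component name is an assumption for a Python 3.8 system):

```cmake
# BoostConfig.cmake on Ubuntu 20.04 knows a component like "python38"
# (and can resolve plain "python" to it), but there is no
# boost_python3Config.cmake at all - hence the error above.
find_package(Boost 1.71 REQUIRED COMPONENTS python38)  # or: python
target_link_libraries(my_target PRIVATE Boost::python38)
```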

ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)

When I built my program with catkin build, I got the following error:
File "/opt/ros/melodic/lib/python2.7/dist-packages/cv_bridge/core.py", line 91, in encoding_to_cvtype2
from cv_bridge.boost.cv_bridge_boost import getCvType
ImportError: dynamic module does not define module export function (PyInit_cv_bridge_boost)
After searching for the cause, I found that cv_bridge is built with python2 by default.
My environment is as follows.
Virtualenv (python3.6)
ROS melodic
Jetson AGX xavier
I wanted to use cv_bridge with python3, so I used the link below to build cv_bridge with the following steps.
Unable to use cv_bridge with ROS Kinetic and Python3
sudo apt-get install python-catkin-tools python3-dev python3-catkin-pkg-modules python3-numpy python3-yaml ros-kinetic-cv-bridge
# Create catkin workspace
mkdir catkin_ws
cd catkin_ws
source ~/virtualenv/jetson/bin/activate
catkin init
# Instruct catkin to set cmake variables
catkin config -DPYTHON_EXECUTABLE=/usr/bin/python3 -DPYTHON_INCLUDE_DIR=/usr/include/python3.6m -DPYTHON_LIBRARY=/usr/lib/aarch64-linux-gnu/libpython3.6m.so
# Instruct catkin to install built packages into install place. It is $CATKIN_WORKSPACE/install folder
catkin config --install
# Clone cv_bridge src
git clone https://github.com/ros-perception/vision_opencv.git src/vision_opencv
# Find version of cv_bridge in your repository
apt-cache show ros-kinetic-cv-bridge | grep Version
cd src/vision_opencv/
git checkout 1.13.0  # maybe; I forgot the exact tag
cd ../../
# Build
catkin build cv_bridge
# Extend environment with new package
source install/setup.bash --extend
After performing this procedure, from cv_bridge.boost.cv_bridge_boost import getCvType no longer causes an error.
After that, I put the other nodes I created in catkin_ws and built them with catkin build. However, when I run them with roslaunch, I get the following error:
RLException: [sample.launch] is neither a launch file in package [sample_proc] nor is [sample_proc] a launch file name
Running source devel/setup.bash does not eliminate this error.
The ROS_PACKAGE_PATH when building cv_bridge with python3 is as follows.
(jetson) nvidia@nvidia:~/catkin_ws$ echo $ROS_PACKAGE_PATH
/home/nvidia/catkin_ws/install/share:/opt/ros/melodic/share
By default, ROS_PACKAGE_PATH was as follows:
(jetson) nvidia@nvidia:~/catkin_ws$ echo $ROS_PACKAGE_PATH
/opt/ros/melodic/share:/home/nvidia/catkin_ws/src/vision_opencv/cv_bridge:/home/nvidia/catkin_ws/src/ddynamic_reconfigure:/home/nvidia/catkin_ws/src/vision_opencv/image_geometry:/home/nvidia/catkin_ws/src/vision_opencv/opencv_tests:/home/nvidia/catkin_ws/src/pose_msgs:/home/nvidia/catkin_ws/src/output:/home/nvidia/catkin_ws/src/pose_check_proc:/home/nvidia/catkin_ws/src/post_proc:/home/nvidia/catkin_ws/src/realsense-ros/realsense2_camera:/home/nvidia/catkin_ws/src/realsense-ros/realsense2_description:/home/nvidia/catkin_ws/src/set_datetime_proc:/home/nvidia/catkin_ws/src/vision_opencv/vision_opencv
Do I need to do anything different if I use virtualenv?
Thanks!
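Regarding the two ROS_PACKAGE_PATH values above: plain `source setup.bash` rebuilds the variable from what that one workspace (and its parent) knows about, while `--extend` keeps the entries already present in the shell. Roughly, as a simplified stdlib sketch (not actual catkin code):

```python
import os

def source_setup(workspace_entries, current="", extend=False):
    """Simplified model of what sourcing a workspace's setup.bash does
    to ROS_PACKAGE_PATH: without --extend the variable is rebuilt from
    the workspace's own entries, dropping anything else; with --extend
    the entries already in the environment are preserved."""
    if not extend:
        return os.pathsep.join(workspace_entries)
    kept = [p for p in current.split(os.pathsep) if p]
    merged = list(workspace_entries)
    merged += [p for p in kept if p not in merged]
    return os.pathsep.join(merged)
```

This is consistent with the first output above: after sourcing the install space without `--extend`, only install/share and the underlying /opt/ros/melodic/share remain, so packages that live only under src/ are no longer on the path.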

No executable found for solver 'glpk' on pyomo

I have an optimization model written in Pyomo (Python 3.7 / Ubuntu 18.04), using:
from pyomo.opt import SolverFactory
opt = SolverFactory("gurobi")
results = opt.solve(model)
It works exactly as it should. However, when I try to use glpk as the solver, I get the following error:
ApplicationError: No executable found for solver 'glpk'.
Importing the package also returns an error:
ModuleNotFoundError: No module named 'glpk'
But when I do conda list on the terminal, I get this information for glpk package:
glpk 4.65 he80fd80_1002 conda-forge
How can I fix this?
It's been quite some time, but this might help future users with the same issue.
I had the same issue while trying to run Pyomo with glpk as the solver in a Debian-based container image.
I was getting the following error: Could not locate the 'glpsol' executable, which is required for solver 'glpk'. ApplicationError: No executable found for solver 'glpk'.
After installing glpk-utils along with glpk, my Python script executed successfully.
Part of my working Dockerfile is below:
FROM python:3.10-slim-bullseye
WORKDIR /opt/app
RUN apt update && apt install -y gcc libglpk-dev glpk-utils
COPY requirements.txt /opt/app/requirements.txt
RUN pip install --upgrade pip && pip install -r requirements.txt
# requirements.txt contents (Pyomo==6.4.2 and glpk==0.4.6 among others)
COPY . .
# utilizing .dockerignore to leave files/folders out of the container image
CMD [ "python", "main.py" ]
In a terminal, try running which glpsol.
This ought to return the path to your glpsol executable. I am guessing you won't get a result. If that's the case, you need to add the location of glpsol to your PATH variable. You should be able to find it by searching for where the glpk package was installed; it should be in the bin folder. Hopefully.
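The same check can be done from Python with the standard library (a small sketch mirroring `which glpsol`; Pyomo does its own lookup internally, this just diagnoses the PATH problem):

```python
import shutil

def solver_on_path(executable="glpsol"):
    """Return the full path of the solver executable if it is on PATH,
    else None - the Python equivalent of `which glpsol`."""
    return shutil.which(executable)
```

If this returns None, SolverFactory("glpk") will keep failing with the same ApplicationError until the directory containing glpsol is added to PATH (or glpk-utils is installed, as in the Dockerfile above).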

How to use transform_graph to optimize Tensorflow model

I used to use the optimize_for_inference library for optimizing frozen TensorFlow models. However, I have read from several sources that TensorFlow no longer supports it.
I came across transform_graph, and its documentation is found here: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#strip_unused_nodes
At first, I ran into errors and found out that I needed to install TensorFlow from source (https://www.tensorflow.org/install/install_sources#install_the_pip_package) instead of using pip.
I already reinstalled TensorFlow from source and ran this code in bash (in the tensorflow/tensorflow dir):
bazel build tensorflow/tools/graph_transforms:transform_graph
bazel-bin/tensorflow/tools/graph_transforms/transform_graph \
--in_graph=tensorflow_inception_graph.pb \
--out_graph=optimized_inception_graph.pb \
--inputs='Mul' \
--outputs='softmax' \
--transforms='
strip_unused_nodes(type=float, shape="1,299,299,3")
fold_constants(ignore_errors=true)
fold_batch_norms
fold_old_batch_norms
round_weights(num_steps=256)'
And again ran into this error:
-bash: bazel-bin/tensorflow/tools/graph_transforms/transform_graph: No such file or directory
What seems to be the problem?
That is weird.
The commands below install and use transform_graph on CentOS 7:
yum install epel-release
yum update
yum install patch
curl https://copr.fedorainfracloud.org/coprs/vbatts/bazel/repo/epel-7/vbatts-bazel-epel-7.repo -o /etc/yum.repos.d/vbatts-bazel-epel-7.repo
yum install bazel
curl -L -O https://github.com/tensorflow/tensorflow/archive/v1.8.0.tar.gz
tar xzf v1.8.0.tar.gz
cd tensorflow-1.8.0
./configure # interactive!
bazel build tensorflow/tools/graph_transforms:summarize_graph
bazel-bin/tensorflow/tools/graph_transforms/summarize_graph
After you install TensorFlow from source and finish the configure step, the bazel commands should work.
The error in your environment occurs when the TensorFlow source build did not finish, or when you ran the script from the wrong path.
Please check the configure step and the path of the TensorFlow root.
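The "No such file or directory" message simply means bazel never produced the binary at that relative path. A quick stdlib check of the expected output location (the bazel-bin layout is standard; the check itself is just a diagnostic sketch):

```python
from pathlib import Path

def transform_graph_binary(tf_root):
    """Return the path to the transform_graph binary that `bazel build`
    should have produced under the TensorFlow source root, raising a
    clear error if the build never completed there."""
    binary = (Path(tf_root) / "bazel-bin" / "tensorflow" / "tools"
              / "graph_transforms" / "transform_graph")
    if not binary.is_file():
        raise FileNotFoundError(
            "transform_graph not built - run `bazel build "
            "tensorflow/tools/graph_transforms:transform_graph` from the "
            "TensorFlow source root, and invoke it from that same directory")
    return binary
```

Since the bazel-bin path in the error is relative, running it from any directory other than the source root where the build was done will also fail this check.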

Error with pip : pip-compile does not support URLs as packages

I'm using pip with pip-compile (installed this way: pip install pip-tools)
I get the following error when I run the pip-compile -v command:
pip-compile does not support URLs as packages, unless they are editable. Perhaps add -e option? (constraint was:
aldryn-django==1.8.7.0 from
https://control.aldryn.com/api/v1/apps/serve/aldryn-django/1.8.7.0/592213b1-e515-4447-8ef0-850713571a42/aldryn-django-1.8.7.0.tar.gz#egg=aldryn-django==1.8.7.0
(from -r requirements.in (line 2)))
I have tried with the -e option, but this causes another problem.
pip.exceptions.InstallationError: https://control.aldryn.com/api/v1/apps/serve/aldryn-django/1.8.7.0/592213b1-e515-4447-8ef0-850713571a42/aldryn-django-1.8.7.0.tar.gz#egg=aldryn-django==1.8.7.0 should either be a path to a local project or a VCS url beginning with svn+, git+, hg+, or bzr+
Below is a short extract of my requirements.in file:
# <INSTALLED_ADDONS> # Warning: text inside the INSTALLED_ADDONS tags is auto-generated. Manual changes will be overwritten.
https://control.aldryn.com/api/v1/apps/serve/aldryn-django/1.8.7.0/592213b1-e515-4447-8ef0-850713571a42/aldryn-django-1.8.7.0.tar.gz#egg=aldryn-django==1.8.7.0
...
# </INSTALLED_ADDONS>
I'm using Docker container based on the python:2.7-slim image.
The same requirements.in works fine in another, similar Docker container.
I don't know why pip-compile fails in mine.
Do you have any idea?
Run the command from inside the Docker container. Divio somehow seem to have fixed this in their install.
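As a side note: recent pip-tools releases accept PEP 508 direct references, so on a newer toolchain (an assumption; the python:2.7-slim setup described here may predate that support) the same pin could be written without the #egg fragment:

```
aldryn-django @ https://control.aldryn.com/api/v1/apps/serve/aldryn-django/1.8.7.0/592213b1-e515-4447-8ef0-850713571a42/aldryn-django-1.8.7.0.tar.gz
```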
