Getting errors converting Open Model Zoo model - fastseg - python

Looking to convert the "FastSeg-large" public model from the Open Model Zoo to use with the image segmentation demo. I was able to use the downloader.py utility to download the original model files. Now I'm running the following command:
python3 converter.py --name fastseg-large
I'm getting the following error:
Module fastseg_large in C:\Users\david\Desktop\Neural Zoo\open_model_zoo\models\public\fastseg-large;C:\Users\david\Desktop\Neural Zoo\open_model_zoo\tools\downloader\public\fastseg-large/model doesn't exist. Check import path and name
cannot import name 'container_abcs' from 'torch._six' (C:\Users\david\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\torch\_six.py)
It looks like I'm getting two different errors, the first being trouble finding the path to the original torch model files. I confirmed that my path down to the "/model" directory exists. In that directory is another subdirectory, "fastseg\model", where the torch model files live.
The second error I'm not sure about; I double-checked that I installed Torch from pip.
Any suggestions?

I've validated that running converter.py for the FastSeg-large public model works fine in OpenVINO™ 2021.4.752.
Make sure you run the install_prerequisites_onnx.bat batch file to configure the Model Optimizer for ONNX:
cd "C:\Program Files (x86)\Intel\openvino_2021.4.752\deployment_tools\model_optimizer\install_prerequisites"
then
install_prerequisites_onnx.bat

Doing some more web searches, I saw that the error can be due to a newer version of Torch. So I uninstalled Torch and reinstalled version 1.8. That actually failed, and I went back to plain "pip install torch" without specifying a version. Then I saw that torchvision was also mentioned, so I ran "pip install torchvision".
Somewhere along the way I must have cleaned up something related to my torch libraries. Either way, that got this to work.
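For context, the 'cannot import name container_abcs' error comes from code written against an older torch API: torch._six.container_abcs was removed in newer torch releases. A minimal, hedged check of what the installed torch exposes (it simply reports False if torch isn't installed at all):

```python
import importlib


def torch_has_container_abcs():
    """Return True only if the installed torch still exposes
    torch._six.container_abcs (removed in newer releases)."""
    try:
        six_mod = importlib.import_module("torch._six")
    except ImportError:
        # torch missing, or the private _six module is gone entirely
        return False
    return hasattr(six_mod, "container_abcs")


print(torch_has_container_abcs())
```

If this prints False, the conversion script's dependency expects an older torch than the one installed.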

AllenNLP Semantic Role Label - SRLBert Error

I am trying to run the AllenNLP demo https://demo.allennlp.org/semantic-role-labeling. When I run the command line or the Python version via Jupyter, I get the error mentioned below.
I have installed the required libraries:
pip install allennlp==1.0.0 allennlp-models==1.0.0
Running this gives me an error:
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.11.19.tar.gz")
predictor.predict(
sentence="Did Uriah honestly think he could beat the game in under three hours?"
)
Update: if I use an older .tar.gz download, I do not get the error.
I grabbed this one from their public models site: bert-base-srl-2020.09.03.tar.gz
from allennlp.predictors.predictor import Predictor
predictor = Predictor.from_path("https://storage.googleapis.com/allennlp-public-models/bert-base-srl-2020.09.03.tar.gz")
text = "Did Uriah honestly think he could beat the game in under three hours?"
predictor.predict(sentence=text)
Error:
RuntimeError: Error(s) in loading state_dict for SrlBert:
Unexpected key(s) in state_dict: "bert_model.embeddings.position_ids".
OS & versions:
Python: 3.8.6
pip: 20.2.1
Mac OS: Catalina 10.15.7
Are there some dependencies I am maybe missing for BERT? I haven't had any issues with the other AllenNLP examples.
I added an answer but it was deleted. This is how you resolve it:
After posting on GitHub, I found out from the AllenNLP folks that it is a version issue. I needed to be using allennlp==1.3.0 and the latest model. Now it works as expected.
This should be fixed in the latest allennlp 1.3 release. Also, the latest archive file is structured-prediction-srl-bert.2020.12.15.tar.gz.
https://github.com/allenai/allennlp/issues/4876
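If upgrading isn't an option, errors like "Unexpected key(s) in state_dict" can often be worked around by dropping the offending keys from the checkpoint before loading it. A sketch of that pattern, with plain dicts standing in for real tensors (the key name is taken from the error above; everything else is illustrative):

```python
def strip_unexpected_keys(state_dict, unexpected):
    """Return a copy of the state dict without the keys the model rejects."""
    rejected = set(unexpected)
    return {k: v for k, v in state_dict.items() if k not in rejected}


# Toy stand-in for a checkpoint's state dict (values would be tensors).
checkpoint = {
    "bert_model.embeddings.position_ids": [0, 1, 2],
    "bert_model.embeddings.word_embeddings.weight": [[0.1, 0.2]],
}

cleaned = strip_unexpected_keys(
    checkpoint, ["bert_model.embeddings.position_ids"]
)
print(sorted(cleaned))  # the rejected key is gone
```

With a real model you would then pass the cleaned dict to load_state_dict; upgrading to the matching library version remains the cleaner fix.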

module 'caffe' has no attribute 'set_mode_gpu'

I have installed caffe from source, using CMake for the installation. I have updated the respective paths as well.
My caffe root directory is: /home/ashj/caffe
I have updated the PYTHON path as:
export PYTHONPATH=<caffe-home>/python:$PYTHONPATH
which in my case is:
export PYTHONPATH=/home/ashj/caffe/python:$PYTHONPATH
I can import the caffe module. However, I am not able to access any methods or layers inside caffe, like set_mode_gpu(), set_mode_cpu(), layers, or params. I am getting errors like the following.
When I used
import caffe
caffe.set_mode_gpu()
I am getting following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: module 'caffe' has no attribute 'set_mode_gpu'
PS: I have also tried using caffe.__caffe.set_mode_gpu() as mentioned in this link, but it is not working for me.
My system specs: Ubuntu 18.04
TIA
Though it may be late, I encountered this same problem and found a workaround:
import sys
sys.path.insert(0, '/path/to/caffe/python')
import caffe
caffe.set_mode_gpu()
namely, add the caffe/python path to your sys.path before importing caffe.
I wrote a post with a detailed analysis; hope it's helpful.
This problem may be the result of the caffe package's path.
For me, if I do the following from the Ubuntu terminal, everything goes fine:
but if I do it from the PyCharm IDE, errors occur:
note that I tested the package caffe's path in both ways, and got different results:
- in the Ubuntu terminal, namely the way that goes fine, I got
'/home/CVAR-B/softwares/caffe/caffe/python/caffe/__init__.pyc'
which is the expected result;
- in the PyCharm IDE way, namely the way where the error occurs, I got
'/usr/local/lib/python2.7/dist-packages/caffe/__init__.pyc'
which is not the expected result.
In view of this discovery, I did this one more thing to handle the error:
import sys
sys.path.insert(0, '/path/to/caffe/python')
import caffe
caffe.set_mode_gpu()
namely, add the caffe/python path to your sys.path before importing caffe.
The result shows this can be a workaround: checking caffe.__file__ now returns the expected path.
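The reason sys.path.insert(0, ...) works is that Python searches sys.path in order, so an entry at the front shadows a copy of the same package installed elsewhere. A self-contained demonstration with a throwaway module (the name shadow_demo is made up for illustration):

```python
import os
import sys
import tempfile

# Create a temporary directory holding a small module.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "shadow_demo.py"), "w") as f:
    f.write("WHERE = 'front-of-path copy'\n")

# Entries earlier in sys.path win the import search.
sys.path.insert(0, tmp)

import shadow_demo
print(shadow_demo.WHERE)                     # -> front-of-path copy
print(shadow_demo.__file__.startswith(tmp))  # -> True
```

This is exactly why checking caffe.__file__ tells you which copy of the package actually got imported.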
Try these steps and then set your Python path (you might've already done steps 1 and 3):
make all
make pycaffe
make distribute
mkdir ~/python
mv distribute/python/caffe ~/python
Set your PYTHONPATH after this; given the steps above, it should point to a directory like ~/python (the directory containing the caffe package).

Tensorflow no module named official

I am trying to use the nets from the official mnist directory of TensorFlow's models repository. On my Windows system I receive this error:
C:\Users\ry\Desktop\NNTesting\models\official\mnist>mnist_test.py
Traceback (most recent call last):
File "C:\Users\ry\Desktop\NNTesting\models\official\mnist\mnist_test.py", line 24, in <module>
from official.mnist import mnist
ModuleNotFoundError: No module named 'official'
I have followed their official directions and set my python path using
set PYTHONPATH="PYTHONPATH:"%cd%"
and can confirm that
PYTHONPATH="$PYTHONPATH:C:\Users\ry\Desktop\NNTesting\models"
and I have also installed the dependencies successfully. Does anyone have experience using these models on a Windows system who can help me with this pathing issue? I'm not sure what I have done incorrectly here.
Thanks
pip install tf-models-official
For Google Colab I needed to add the model dir to the system path as well:
!git clone https://github.com/tensorflow/models.git
import os
os.environ['PYTHONPATH'] += ":/content/models"
import sys
sys.path.append("/content/models")
If anyone has this problem, make sure the PYTHONPATH variable doesn't have quotation marks in it. For some reason, the readme puts quotation marks around it.
Here is the correct way to set it:
PYTHONPATH=path\to\models
I had exactly the same question as you did, and the following solution solved this problem.
There is an error in the tensorflow/models/official README.md
https://github.com/tensorflow/models/tree/master/official
Wrong
export PYTHONPATH="$PYTHONPATH:/path/to/models"
Correct
export PYTHONPATH=$PYTHONPATH:/path/to/models
The Official Models are made available as a Python module. To run the models and associated scripts, add the top-level /models folder to the Python path with the command: export PYTHONPATH="$PYTHONPATH:/path/to/models"
FROM README
I had the same problem. Are you using Windows 10? Make sure you run the command prompt as administrator. I used it in VS Code at first with no warning, and it didn't work. But it worked when I ran a separate prompt window as administrator:
set PYTHONPATH=path\to\models
then run the model.
I was setting up to run the NMT model and ran into the same problem.
It took me bit to figure out exactly which folder should be added to PYTHONPATH.
I tried several folders inside my example's directory with no luck.
I finally understood what that import was trying to tell me...
"from official.transformer.utils import tokenizer"
means
"add the parent of directory 'official' to PYTHONPATH".
For me, this was just the top-level 'models-master' directory that I obtained from GitHub. Once I added /path/to/models-master, I was past this obstacle.
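In other words, the directory to add is the one that contains official/, not official/ itself. A short sketch (the checkout location ~/models-master is an assumption; substitute wherever you cloned the repo):

```python
import os
import sys

# Hypothetical checkout location; substitute your own clone path.
models_root = os.path.expanduser("~/models-master")

# `from official.transformer.utils import tokenizer` resolves only if the
# directory that *contains* `official/` is on sys.path.
if models_root not in sys.path:
    sys.path.insert(0, models_root)

print(models_root in sys.path)  # -> True
```

The same logic applies whether you set PYTHONPATH in the shell or modify sys.path in code.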
Go to the models folder and do
export PYTHONPATH=$PYTHONPATH:$PWD
Add the model directory to PYTHONPATH:
import os
os.environ['PYTHONPATH'] += ':/content/models/research/:/content/models/research/slim/'
In Jupyter: I cloned from the official git repo and manually appended the path to sys.path.
!git clone https://github.com/tensorflow/models.git
import sys
sys.path.append("C:/Windows/System32/models")
sys.path

Windows Theano Keras - lazylinker_ext\mod.cpp: No such file or directory

I am installing Theano and Keras following How do I install Keras and Theano in Anaconda Python on Windows?, which worked fine for me with an older release. Now I have upgraded to the latest Theano version, and when validating its functionality using this command:
Python:
from theano import function, config, shared, sandbox
it resulted in really long error log containing:
g++.exe: error: C:\Users\John: No such file or directory
g++.exe: error: Dow\AppData\Local\Theano\compiledir_Windows-10-10.0.10240-Intel64_Family_6_Model_60_Stepping_3_GenuineIntel-2.7.12-64\lazylinker_ext\mod.cpp: No such file or directory
It seems the path to the user directory "John Dow" was split by g++ into two file paths, since there is a space in the name.
Is there any way to tell Python not to use the "C:\Users\John Dow" directory but e.g. "C:\mytempdir"? Setting the USERPROFILE Windows variable didn't help.
NOTE: I managed to fix the g++ command where it failed (by adding quotes around the output path), which successfully compiled the sources. Unfortunately, it didn't solve my problem, since when started again, it fails on this step.
It also seems to be an issue of Theano, since switching to a different Python version didn't help.
The answer is from here:
Theano: change `base_compiledir` to save compiled files in another directory
i.e., in the ~/.theanorc file (create it if it doesn't exist), add these lines:
[global]
base_compiledir=/some/path
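If you'd rather not edit a config file, the same setting can be passed through the THEANO_FLAGS environment variable, which must be set before theano is imported. A sketch (the target directory C:\mytempdir comes from the question; theano itself is not imported here):

```python
import os

# Must be set before `import theano`; base_compiledir is a documented
# Theano configuration flag.
os.environ["THEANO_FLAGS"] = "base_compiledir=C:/mytempdir"

print(os.environ["THEANO_FLAGS"])
```

Setting it in code like this only affects the current process; for a permanent fix the ~/.theanorc entry above is the usual approach.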

Issue with using protobufs with python ImportError: cannot import name descriptor_pb2

Context
Steps taken:
Environment Setup
I've installed protobuf via Homebrew.
I've also followed the steps in the protobuf Python folder's README on installing the Python protobuf bindings, namely running the python setup.py install command.
I'm using the protobuf-2.4.1 files.
Coding
I have a Python file (generated from a .proto file I compiled) that contains the following statement, among other import statements; I believe this is the one causing issues:
from google.protobuf import descriptor_pb2
I'm importing the above Python file in another Python file; it's this second file where I want to write the logic for parsing the protobuf data files I receive.
Error received
I get this error when running that file:
Steps taken to fix
Searched Google for that error; didn't find much.
Looked at this question/answer: Why do I see "cannot import name descriptor_pb2" error when using Google Protocol Buffers?
I don't really understand the above question's selected answer. I tried to run the command from it, protoc descriptor.proto --python_out=gen/, by copying and pasting it into the terminal in different places, but couldn't get it to work.
Question
How do I fix this error?
What is the underlying cause?
How do I check if the rest of my protobuf python compiler/classes are set up correctly?
I've discovered the issue. I had not run the Python install instructions the first time I tried to compile this file. I recompiled the file and the issue was fixed.
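As a quick sanity check after reinstalling, you can verify that Python can locate the protobuf runtime at all before re-running your generated file. A minimal sketch (it returns False rather than raising when the package is absent):

```python
import importlib.util


def protobuf_runtime_available():
    """True if Python can locate the google.protobuf package."""
    try:
        return importlib.util.find_spec("google.protobuf") is not None
    except ImportError:
        # find_spec imports the parent `google` package; absence means False
        return False


print(protobuf_runtime_available())
```

If this prints False, the import error is an installation problem rather than an issue with your generated _pb2 file.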
