deploy.prototxt in Caffe model - python

I am running into this problem when running my code:
model = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel")
cv2.error: OpenCV(4.7.0) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\caffe\caffe_io.cpp:1126: error: (-2:Unspecified error) FAILED: fs.is_open(). Can't open "deploy.prototxt" in function 'cv::dnn::ReadProtoFromTextFile'
I believe it comes from this line of code, and I am not sure what to do about it. I thought it was because I didn't have the file saved alongside the code, but I am not entirely sure what this file is or does:
# Load the SSD model
model = cv2.dnn.readNetFromCaffe("deploy.prototxt", "res10_300x300_ssd_iter_140000.caffemodel")
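For context: deploy.prototxt is the plain-text Caffe file that describes the network architecture, while res10_300x300_ssd_iter_140000.caffemodel holds the trained weights. The FAILED: fs.is_open() error just means OpenCV could not open deploy.prototxt at the given path, which is resolved relative to the current working directory. A minimal sketch that verifies both paths before loading, using the file names from the question:

import os
import cv2

# Resolve the paths relative to this script's folder rather than the
# (possibly different) current working directory.
base = os.path.dirname(os.path.abspath(__file__))
prototxt = os.path.join(base, "deploy.prototxt")
caffemodel = os.path.join(base, "res10_300x300_ssd_iter_140000.caffemodel")

for path in (prototxt, caffemodel):
    if not os.path.isfile(path):
        raise FileNotFoundError(f"Model file not found: {path}")

model = cv2.dnn.readNetFromCaffe(prototxt, caffemodel)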

Related

'tensorflow.compat.v2.__internal__.tracking' has no attribute 'TrackableSaver' error

I got this error after installing TensorFlow.js; previously the program was working. Could it be a version problem? I'm really curious as to what's causing it.
Thanks in advance.
File ~\OneDrive\Masaüstü\Bitirme Proje\neural_network(sinir_ağları).py:61
    model = build_model()
File ~\OneDrive\Masaüstü\Bitirme Proje\neural_network(sinir_ağları).py:29 in build_model
    model = keras.Sequential([
File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\trackable\base.py:205 in _method_wrapper
    result = method(self, *args, **kwargs)
File C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py:67 in error_handler
    raise e.with_traceback(filtered_tb) from None
File C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py:3331 in saver_with_op_caching
    return tf.__internal__.tracking.TrackableSaver(
AttributeError: module 'tensorflow.compat.v2.__internal__.tracking' has no attribute 'TrackableSaver'
I was planning to convert my model with TensorFlow.js and run it on the web, but after installing TensorFlow.js I got this error in the program.
Update Keras with pip install 'keras>=2.9.0' to pick up keras-team/keras#af70910, which replaced the failing call:
- self._trackable_saver = saver_with_op_caching(self)
+ self._checkpoint = tf.train.Checkpoint(root=weakref.ref(self))
I had to create another environment in Jupyter and reinstall all the libraries from scratch, since all the libraries seemed to have issues, not just TensorFlow. It now works with no errors.
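If you want to confirm a version mismatch before rebuilding an environment, a quick sketch (assuming the standard tensorflow and keras packages) is:

import tensorflow as tf
import keras

# The TrackableSaver failure is typical of keras and tensorflow being
# installed at incompatible versions; compare them side by side.
print("tensorflow:", tf.__version__)
print("keras:", keras.__version__)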

OpenVINO: __cinit__() got an unexpected keyword argument 'weights'

I'm trying to run the repo PPE-detector-Tiny-YOLOv3-Rasp.erry-PI-and-NCS2 but ran into an error.
I have never used Python before. When I run PPE_Detector_Pc.py I get an error like this:
PS D:\repository\PPE-detector-Tiny-YOLOv3-Rasp.erry-PI-and-NCS2> py PPE_Detector_Pc.py
loading the model...
loading plugin on Intel NCS2...
Traceback (most recent call last):
  File "PPE_Detector_Pc.py", line 406, in <module>
    sys.exit(main_IE_infer() or 0)
  File "PPE_Detector_Pc.py", line 201, in main_IE_infer
    net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
  File "ie_api.pyx", line 1598, in openvino.inference_engine.ie_api.IENetwork.__cinit__
TypeError: __cinit__() got an unexpected keyword argument 'weights'
This is the part that raises the error:
path_to_xml_file = "tiny_yolo_IR_500000_FP32.xml" #<--- MYRIAD
path_to_bin_file = os.path.splitext(path_to_xml_file)[0] + ".bin"
time.sleep(1)
print("loading plugin on Intel NCS2...")
plugin = IECore()
net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
input_blob = next(iter(net.inputs))
exec_net = plugin.load(network=net)
My path_to_xml_file and path_to_bin_file point to tiny_yolo_IR_500000_FP32.xml and tiny_yolo_IR_500000_FP32.bin, which I downloaded from the repo and put in the same folder. I only changed IEPlugin to IECore because IEPlugin is no longer supported in newer versions of OpenVINO.
Is there anything I missed?
The repository you've provided is not maintained by OpenVINO™ and uses deprecated APIs.
You're partially correct on the steps to replace IEPlugin with IECore. Here are the full steps required to read and load a network using the IECore API:
ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)
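From there the rest of the pipeline is unchanged; a sketch of the remaining steps with the file names from the question (the test image and preprocessing are assumptions, not taken from the repo):

import cv2
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="tiny_yolo_IR_500000_FP32.xml", weights="tiny_yolo_IR_500000_FP32.bin")
exec_net = ie.load_network(network=net, device_name="MYRIAD")  # NCS2

# input_info replaces the deprecated net.inputs attribute
input_blob = next(iter(net.input_info))
n, c, h, w = net.input_info[input_blob].input_data.shape

frame = cv2.imread("test.jpg")                       # any test image
blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)  # HWC -> CHW
result = exec_net.infer({input_blob: blob[np.newaxis, ...]})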
The IR models (.xml and .bin files) provided in the repository are also deprecated: they are IR version 5, which running the edited code in OpenVINO™ Development Tools 2022.1 reports as unsupported.
To avoid this error, you will need to convert the original model into the latest IR format (IR v11) using the OpenVINO™ 2022.1 Model Optimizer.
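As a rough sketch, assuming you have the original model as a frozen TensorFlow graph, the 2022.1 Model Optimizer is invoked like this (file names are placeholders):

mo --input_model frozen_tiny_yolo.pb --output_dir ir_v11/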

Getting errors when training sphinxtrain

I am trying to build a Sphinx model for a language, but I am getting some errors when training the system.
The following command is used in cmd to train Sphinx:
python ../sphinxtrain/scripts/sphinxtrain run
However, I get the following errors.
Can't open D:/sphinx/other/result/other-1-1.match
word_align.pl failed with error code -1 at D:/sphinx/sphinxtrain/scripts/decode/slave.pl line 173.
I also get these errors:
INFO: feat.c(715): Initializing feature stream to type: '1s_c_d_dd', ceplen=13, CMN='batch', VARNORM='no', AGC='none'
ERROR: "acmod.c", line 79: Folder 'D:/sphinx/other/model_parameters/other.ci_cont' does not contain acoustic model definition 'mdef'
FATAL: "batch.c", line 913: PocketSphinx decoder init failed
The log file sphinx/other/logdir/decode/other-1-1.log is also blank.
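A note on reading these errors: the FATAL decoder failure means the training stages never produced the context-independent model files, so decoding has no acoustic model to load, and word_align.pl then fails because no .match file was ever written. A small sketch to check what the training stage actually wrote (the expected names are the usual PocketSphinx acoustic-model files; adjust the path to your setup):

import os

model_dir = "D:/sphinx/other/model_parameters/other.ci_cont"
expected = ["mdef", "means", "variances", "mixture_weights", "transition_matrices"]
present = set(os.listdir(model_dir)) if os.path.isdir(model_dir) else set()
for name in expected:
    print(name, "OK" if name in present else "MISSING")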

cv2.VideoCapture can not read remote video file

I have a video file and need to load it with OpenCV.
import cv2
cap = cv2.VideoCapture('http://xx.xx.xxx.xxx:8080/xxx/2019-11-29.3gp')
print(cap.isOpened())  # prints False
However, I get the following error
[tcp # 000001a19a548b00] Connection to tcp://xx.xx.xxx.xxx:8080 failed: Error number -138 occurred
warning: Error opening file (/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:901)
warning: http://xx.xx.xxx.xxx:8080/xxx/2019-11-29.3gp
(/build/opencv/modules/videoio/src/cap_ffmpeg_impl.hpp:902)
How can I solve this?
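The log shows the TCP connection itself failing, so the problem is likely at the network level rather than in OpenCV. One way to narrow it down is to download the file over plain HTTP first and open it locally; a sketch using the placeholder URL from the question:

import cv2
import urllib.request

url = 'http://xx.xx.xxx.xxx:8080/xxx/2019-11-29.3gp'
local_path = '2019-11-29.3gp'

# If this download also fails, the server or network is the culprit,
# not OpenCV's FFmpeg backend.
urllib.request.urlretrieve(url, local_path)

cap = cv2.VideoCapture(local_path)
print(cap.isOpened())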

Gensim mallet bug? Fails to load the saved model more than once

I am trying to save and later reload a gensim LDA Mallet model:
ldamallet = gensim.models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=n_topics,id2word=id2word)
ldamallet.save('ldamallet')
When testing this for a new query (with the original corpus and dictionary), everything seems fine for the first load.
ques_vec = [dictionary.doc2bow(words) for words in data_words_list]
for i, row in enumerate(lda[ques_vec]):
    row = sorted(row, key=lambda x: (x[1]), reverse=True)
When I execute the same code again afterward, this error pops up:
java.io.FileNotFoundException: /tmp/9f371_corpus.mallet (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at cc.mallet.types.InstanceList.load(InstanceList.java:787)
        at cc.mallet.classify.tui.Csv2Vectors.main(Csv2Vectors.java:131)
Exception in thread "main" java.lang.IllegalArgumentException: Couldn't read InstanceList from file /tmp/9f371_corpus.mallet
        at cc.mallet.types.InstanceList.load(InstanceList.java:794)
        at cc.mallet.classify.tui.Csv2Vectors.main(Csv2Vectors.java:131)
Traceback (most recent call last):
  File "topic_modeling1.py", line 406, in <module>
    topic = get_label(text, id2word, first, ldamallet)
  File "topic_modeling1.py", line 237, in get_label
    for i, row in enumerate(lda[ques_vec]):
  File "/home/user/sjha/anaconda3/envs/conda_env/lib/python3.6/site-packages/gensim/models/wrappers/ldamallet.py", line 308, in __getitem__
    self.convert_input(bow, infer=True)
  File "/home/user/sjha/anaconda3/envs/conda_env/lib/python3.6/site-packages/gensim/models/wrappers/ldamallet.py", line 256, in convert_input
    check_output(args=cmd, shell=True)
  File "/home/user/sjha/anaconda3/envs/conda_env/lib/python3.6/site-packages/gensim/utils.py", line 1806, in check_output
    raise error
subprocess.CalledProcessError: Command '/home/user/sjha/projects/topic_modeling/mallet-2.0.8/bin/mallet import-file --preserve-case --keep-sequence --remove-stopwords --token-regex "\S+" --input /tmp/9f371_corpus.txt --output /tmp/9f371_corpus.mallet.infer --use-pipe-from /tmp/9f371_corpus.mallet' returned non-zero exit status 1.
Contents of my /tmp/ directory:
/tmp/9f371_corpus.txt
/tmp/9f371_doctopics.txt
/tmp/9f371_doctopics.txt.infer
/tmp/9f371_inferencer.mallet
/tmp/9f371_state.mallet.gz
/tmp/9f371_topickeys.txt
Also, the files /tmp/9f371_doctopics.txt.infer and /tmp/9f371_corpus.txt seem to get modified every time I load the model. What could be the source of the error? Or is it some kind of bug in gensim's mallet wrapper?
Mallet stores important model files (the corpus, etc.) in /tmp if prefix is unset, and when /tmp is cleared (say, by a restart) it fails because it needs those files to run. Deleting the model and rerunning the algorithm does not solve it; you first must reinstall gensim, e.g.
conda uninstall gensim
conda install gensim
or with whatever package manager you prefer.
Then delete your saved models (their corpus files are already gone).
Important: before rerunning, explicitly set the prefix parameter when initializing Mallet:
import os

prefix = "/your/prefix/dir/"  # any directory that survives reboots (not /tmp)
if not os.path.isdir(prefix):
    os.mkdir(prefix)
ldamallet = models.wrappers.LdaMallet(mallet_path, corpus=corpus, num_topics=n_topics, id2word=id2word, prefix=prefix)
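With the prefix on stable storage, reloading the saved wrapper survives restarts; a short sketch using gensim's standard save/load API:

from gensim.models.wrappers import LdaMallet

# Loads the model saved earlier with ldamallet.save('ldamallet');
# the prefix directory must still contain the Mallet corpus files.
ldamallet = LdaMallet.load('ldamallet')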
