I'm trying to use the dlib (v19.6) Python API to create a CNN face detector using the code:
cnn_face_detector = dlib.cnn_face_detection_model_v1('mmod_human_face_detector.dat')
However, I get an ArgumentError as follows:
---------------------------------------------------------------------------
ArgumentError Traceback (most recent call last)
<ipython-input-16-c2ca0a6e8dff> in <module>()
----> 1 cnn_face_detector = dlib.cnn_face_detection_model_v1('mmod_human_face_detector.dat')
ArgumentError: Python argument types in
cnn_face_detection_model_v1.__init__(cnn_face_detection_model_v1, str)
did not match C++ signature:
__init__(_object*, std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> >)
What might I be doing wrong? Can I not pass the filename of the model file simply as a string?
This works for me with a fresh build of that release, and your usage is correct!
This probably means that you either:
did something wrong during the install (you installed with python setup.py install? That would be the correct way!)
or: your Python interpreter is picking up some other dlib version without your knowledge.
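A quick way to check which dlib your interpreter actually resolves (a minimal diagnostic sketch; dlib exposes __version__, and any imported module exposes __file__):
import dlib
# Should print 19.6 if the freshly built release is in use
print(dlib.__version__)
# The path reveals whether a stale install is shadowing the new build
print(dlib.__file__)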
I had a similar issue after python setup.py install, due to Python using an older version of dlib from /opt/conda/lib/python3.6/site-packages/dlib.so.
Doing a simple
mv /opt/conda/lib/python3.6/site-packages/dlib.so /opt/conda/lib/python3.6/site-packages/dlib_old.so
solved it for me.
Currently I'm working on implementing the code from "Differentially Private Federated Learning: A Client Level Perspective", where the GitHub link is LINK.
However, I followed the instructions but got this error:
Traceback (most recent call last):
File "sample.py", line 5, in <module>
from MNIST_reader import Data
File "/content/drive/MyDrive/Colab_Notebooks/machine-learning-diff-private-federated-learning-main/MNIST_reader.py", line 20
raise ValueError, "dataset must be 'testing' or 'training'"
SyntaxError: invalid syntax
I just ran bash RUNME.sh and followed the instructions but still get an error!
!python sample.py --m 100, sigma 1
You're welcome to check the full code here.
Thanks a lot!!
The error
SyntaxError: invalid syntax
for the line
raise ValueError, "dataset must be 'testing' or 'training'"
suggests that the code was written for Python 2 but you are running it with Python 3.
You have to edit the file MNIST_reader.py and use () instead of , in that line:
raise ValueError("dataset must be 'testing' or 'training'")
BTW:
Some files on GitHub are 5 years old, so the code was likely created for Python 2, and you may expect it to have other problems with Python 3 (see the 2to3 sketch below).
The requirements show TensorFlow 1.4.1, and the code may have problems running with your TensorFlow 2.8.4, because some elements changed between those versions.
Maybe it will be simpler to run it with Python 2.
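If there are more Python 2 leftovers, the standard 2to3 tool can convert whole files in place (a sketch; 2to3 ships with CPython up to 3.12, and its raise fixer handles exactly this pattern):
2to3 -w MNIST_reader.py   # rewrites Python 2 syntax such as raise E, msg to raise E(msg)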
EDIT:
On GitHub, under Insights / Network, you can see all forks of this repo; the fork created by rosdyana has a last commit titled Fix code for python3.5, so maybe you should use that version.
But that version may still need TensorFlow 1.4.1, which in turn may need an older Python 3.5 or 3.6. I don't know if it works with the newest versions of Python.
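If you go the old-versions route on Python 3, a pinned environment might look like this (a sketch; it assumes TensorFlow 1.4.1 wheels are still installable for Python 3.6 on your platform, and dpfl is just an example environment name):
conda create -n dpfl python=3.6
conda activate dpfl
pip install tensorflow==1.4.1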
This is all about an issue when using the latest Python Protobuf (3.19.1) and Python 3.10 on Linux (tested on Fedora 35 and Ubuntu 20.04).
It broke our library, but it can easily be tested using the addressbook.proto from the Python Protobuf tutorial and trying to get the proto2 message class as follows:
import addressbook_pb2
from google.protobuf import (
    descriptor_database,
    descriptor_pb2,
    descriptor_pool,
    message_factory,
)
_DESCRIPTOR_DB = descriptor_database.DescriptorDatabase()
_DESCRIPTOR_POOL = descriptor_pool.DescriptorPool(_DESCRIPTOR_DB)
_DESCRIPTOR_DB.Add(
    descriptor_pb2.FileDescriptorProto.FromString(
        addressbook_pb2.DESCRIPTOR.serialized_pb
    )
)
factory = message_factory.MessageFactory()
cls = factory.GetPrototype(_DESCRIPTOR_POOL.FindMessageTypeByName("tutorial.Person"))
It raises the following error:
[libprotobuf ERROR google/protobuf/pyext/descriptor_database.cc:64] DescriptorDatabase method raised an error
SystemError: PY_SSIZE_T_CLEAN macro must be defined for '#' formats
Traceback (most recent call last):
File "/dev/protobuf/test/test.py", line 21, in <module>
ls = factory.GetPrototype(_DESCRIPTOR_POOL.FindMessageTypeByName("tutorial.Person"))
`KeyError: "Couldn't find message tutorial.Person"
Now, it works as expected if I use an older Python Protobuf version, such as 3.18.1.
I've opened a bug report (https://github.com/protocolbuffers/protobuf/issues/9245), but apparently it was not considered a bug.
Python Protobuf introduced the PY_SSIZE_T_CLEAN macro in 3.19.1 and broke something, probably by using int instead of Py_ssize_t when using # formats.
Has anyone had this issue, or can anyone confirm it?
Yes, I am also getting the same issue.
We changed the version as below:
protobuf >= 3.19 # Not working
protobuf >= 3.15.6, <= 3.20.1 # Working
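For example, pinning with pip to mirror the working range above:
pip install "protobuf>=3.15.6,<=3.20.1"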
There are actually two errors here:
SystemError: PY_SSIZE_T_CLEAN macro must be defined for '#' formats
This error is caused by Python 3.10 dropping support for old default conversions when passing data from the C side to the Python side. In the protobuf library's case, the error only occurs when passing an exception from C code to Python.
The python-protobuf library was fixed to work with Python 3.10 back in October 2021, and the fix should be included in python-protobuf 3.20.0 and later.
Try adding this to your script to check the version:
import google.protobuf
print(google.protobuf.__version__)
For me the error does not occur with the latest versions 3.19.4, 3.20.1 or 4.21.1, but does occur with 3.19.2 and older.
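As a side note: on protobuf 4.x, the MessageFactory().GetPrototype call from the snippet above is deprecated in favor of a module-level helper. A hedged sketch, assuming protobuf >= 4.21:
from google.protobuf import message_factory
# GetMessageClass replaces MessageFactory().GetPrototype in protobuf 4.x
cls = message_factory.GetMessageClass(
    _DESCRIPTOR_POOL.FindMessageTypeByName("tutorial.Person")
)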
I am trying to use the preprocessor library in order to clean text stored in a pandas DataFrame. I've installed the latest version (https://pypi.org/project/tweet-preprocessor/), but I receive this error message:
import preprocessor as p
#forming a separate feature for cleaned tweets
for i, v in enumerate(df['text']):
    df.loc[i, 'text'] = p.clean(v)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-183-94e08e1aff33> in <module>
1 #forming a separate feature for cleaned tweets
2 for i, v in enumerate(df['text']):
----> 3     df.loc[i, 'text'] = p.clean(v)
AttributeError: module 'preprocessor' has no attribute 'clean'
You probably have the preprocessor module installed as well, which is entirely distinct from the tweet-preprocessor module. However, confusingly, the import preprocessor as p statement can be used for both. When both modules are installed, Python ignores tweet-preprocessor and automatically opts for preprocessor, which does not contain a clean function, hence the error you received.
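You can confirm which module actually got imported before uninstalling anything (a quick diagnostic sketch):
import preprocessor as p
# The file path shows whether this is tweet-preprocessor or the name-clashing package
print(p.__file__)
# tweet-preprocessor exposes clean(); the other preprocessor module does not
print(hasattr(p, 'clean'))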
To resolve this, I had to uninstall both modules with the following commands:
pip uninstall preprocessor
pip uninstall tweet-preprocessor
Then I closed all shells for a fresh start and typed:
pip install tweet-preprocessor
And finally:
>>> import preprocessor as p
>>> p.clean('#this and that')
'and that'
Merely uninstalling preprocessor did not work. Python kept importing the module despite it being uninstalled. I am not sure why, but I suspect it has something to do with caches that Python keeps in the background.
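Once the right module is in place, the original DataFrame loop can also be written as a one-liner (a sketch assuming df has a 'text' column of strings):
# Clean every tweet in one pass instead of the manual enumerate loop
df['text'] = df['text'].apply(p.clean)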
Try installing first:
pip install tweet-preprocessor
Then:
import preprocessor as p
I am very new to sentiment analysis. I'm trying to use the Stanford Sentiment Treebank (sst) and ran into an error.
from nltk.tree import Tree
import os
import sst
trees = "C:\\Users\m\data\trees"
tree, score = next(sst.train_reader(trees))
[Output]:
AttributeError Traceback (most recent call last)
<ipython-input-19-4101f90b0b16> in <module>()
----> 1 tree, score = next(sst.train_reader(trees))
AttributeError: module 'sst' has no attribute 'train_reader'
I think you're looking for https://github.com/JonathanRaiman/pytreebank, not https://pypi.org/project/sst/.
On the Python side, that error is pretty clear. Even once you import the right package, though, I'm not sure I saw a train_reader, but I could be wrong.
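If you do switch to pytreebank, loading the treebank looks roughly like this (a sketch based on that repo's README; load_sst downloads the data on first use):
import pytreebank
# Download and parse the Stanford Sentiment Treebank
dataset = pytreebank.load_sst()
# Each split is a list of labeled trees; to_labeled_lines() flattens one
for label, sentence in dataset["train"][0].to_labeled_lines():
    print(label, sentence)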
UPDATE:
I'm not entirely sure why you're running into 'sst' not having the attribute train_reader. Make sure you didn't accidentally install the 'sst' package from PyPI if you're using conda. It looks like 'sst' refers to a privately created module, and that one should work.
I got your import working; what I did was:
Installed everything specified in the requirements.txt file.
import sst was still giving me an error, so I installed nltk and sklearn to resolve that issue. (FYI, I'm not using conda; I'm just using pip and virtualenv for my own private package settings. I ran pip install nltk and pip install sklearn.)
At this point, import sst worked for me.
I guess you're importing the sst package from selenium-simple-test, which is not what you're looking for.
Try sst.discover(); if you get the error
TypeError: discover() missing 4 required positional arguments: 'test_loader', 'package', 'dir_path', and 'names'
then you are using the selenium-simple-test package.
I need to perform a BULK whois query using the Shodan API.
I came across this code:
import shodan
api = shodan.Shodan('inserted my API-KEY- within single quotes')
info = api.host('8.8.8.8')
After running the module I get the following error:
Traceback (most recent call last):
File "C:/Users/PIPY/AppData/Local/Programs/Python/Python37/dam.py", line 1, in
import shodan
File "C:/Users/PIPY/AppData/Local/Programs/Python/Python37\shodan.py", line 2, in
api = shodan.Shodan('the above insereted API KEY')
AttributeError: module 'shodan' has no attribute 'Shodan'
I'm learning python and have limited scripting/programming experience.
Could you please help me out?
Cheers
You seem to have both dam.py and shodan.py; Python defaults to importing from the script's own directory first, so the installed shodan package gets masked.
Try renaming shodan.py to e.g. shodan_test.py (and of course fixing up any imports, etc.).
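To verify what is actually being imported, a quick diagnostic:
import shodan
# If this prints the path of your own shodan.py instead of site-packages,
# the local file is shadowing the installed package
print(shodan.__file__)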
I solved the issue by re-installing the shodan module:
C:\Users\PIPY\AppData\Local\Programs\Python\Python37\Scripts> pip install shodan
Thank you for the help AKX.
I had this same issue, but after renaming my file to something different than shodan.py, I also had to delete the compiled file shodan.pyc to avoid the error.
Also, if you have more than one version of Python installed, i.e. python2 and python3, use
python -m pip install shodan instead of pip install shodan, to ensure that you are installing the library for the same version of Python that you are using to execute your script.
If you are executing your script with python3 shodan_test.py, then use python3 -m pip install shodan.