I'm trying to run this repo but ran into an error:
PPE-detector-Tiny-YOLOv3-Rasp.erry-PI-and-NCS2
I have never used Python before. When I run PPE_Detector_Pc.py I get an error like this:
PS D:\repository\PPE-detector-Tiny-YOLOv3-Rasp.erry-PI-and-NCS2> py PPE_Detector_Pc.py
loading the model...
loading plugin on Intel NCS2...
Traceback (most recent call last):
File "PPE_Detector_Pc.py", line 406, in <module>
sys.exit(main_IE_infer() or 0)
File "PPE_Detector_Pc.py", line 201, in main_IE_infer
net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
File "ie_api.pyx", line 1598, in openvino.inference_engine.ie_api.IENetwork.__cinit__
TypeError: __cinit__() got an unexpected keyword argument 'weights'
This is the part that raises the error:
path_to_xml_file = "tiny_yolo_IR_500000_FP32.xml" #<--- MYRIAD
path_to_bin_file = os.path.splitext(path_to_xml_file)[0] + ".bin"
time.sleep(1)
print("loading plugin on Intel NCS2...")
plugin = IECore()
net = IENetwork(model=path_to_xml_file, weights=path_to_bin_file)
input_blob = next(iter(net.inputs))
exec_net = plugin.load(network=net)
My path_to_xml_file and path_to_bin_file point to tiny_yolo_IR_500000_FP32.xml and tiny_yolo_IR_500000_FP32.bin, which I downloaded from the repo and put in the same folder.
The only change I made was replacing IEPlugin with IECore, because IEPlugin is no longer supported in newer versions of OpenVINO.
Is there anything I missed?
The repository you've provided is not maintained by OpenVINO™ and uses deprecated APIs.
You're partially correct about replacing IEPlugin with IECore. Here are the full steps required to read and load a network using the IECore API:
ie = IECore()
net = ie.read_network(model=model_xml, weights=model_bin)
exec_net = ie.load_network(network=net, device_name="CPU", num_requests=2)
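Adapted to your script, a minimal sketch might look like this (device_name="MYRIAD" targets the NCS2; use "CPU" to test without the stick):
import os
from openvino.inference_engine import IECore

path_to_xml_file = "tiny_yolo_IR_500000_FP32.xml"
path_to_bin_file = os.path.splitext(path_to_xml_file)[0] + ".bin"

ie = IECore()
net = ie.read_network(model=path_to_xml_file, weights=path_to_bin_file)
input_blob = next(iter(net.input_info))  # net.inputs is deprecated in recent releases
exec_net = ie.load_network(network=net, device_name="MYRIAD")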
The provided IR models (the .xml and .bin files) from the repository are also in a deprecated format: they are IR version 5, which fails to load when running the edited code in OpenVINO™ Development Tools 2022.1.
To avoid this error, you will need to convert the original model into the latest IR format (IR v11) using the OpenVINO™ 2022.1 Model Optimizer.
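For reference, a hedged sketch of that conversion, assuming you have the frozen TensorFlow graph the repo's IR was originally produced from (the file name here is hypothetical, and YOLO-family models usually also need the transformations config shipped with Model Optimizer):
mo --input_model tiny_yolo.pb --output_dir ir_v11 --data_type FP32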
I am running the following Dataflow config:
test_dataflow = BeamRunPythonPipelineOperator(
    task_id="xxxx",
    runner="DataflowRunner",
    py_file=xxxxx,
    pipeline_options=dataflow_options,
    py_requirements=['apache-beam[gcp]==2.39.0'],
    py_interpreter='python3',
    dataflow_config=DataflowConfiguration(job_name="{{task.task_id}}", location=LOCATION, project_id=PROJECT, wait_until_finished=False, gcp_conn_id="google_cloud_default")
    # dataflow_config={"job_name": "{{task.task_id}}", "location": LOCATION, "project_id": PROJECT, "wait_until_finished": True, "gcp_conn_id": "google_cloud_default"}
)
It keeps throwing an error. I am on Airflow 2.2.5.
Error - Traceback (most recent call last):
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/apache/beam/operators/beam.py", line 287, in execute
) = self._init_pipeline_options(format_pipeline_options=True, job_name_variable_key="job_name")
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/apache/beam/operators/beam.py", line 183, in _init_pipeline_options
dataflow_job_name, pipeline_options, process_line_callback = self._set_dataflow(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/apache/beam/operators/beam.py", line 63, in _set_dataflow
pipeline_options = self.__get_dataflow_pipeline_options(
File "/opt/python3.8/lib/python3.8/site-packages/airflow/providers/apache/beam/operators/beam.py", line 92, in __get_dataflow_pipeline_options
if self.dataflow_config.service_account:
AttributeError: 'DataflowConfiguration' object has no attribute 'service_account'
If I pass service_account, it errors saying the parameter is invalid.
I ran into the same issue.
It is caused by an inconsistency between the DataflowConfiguration shipped with the Dataflow provider and the one the Beam operator expects: that version of DataflowConfiguration doesn't accept service_account.
I resolved my issue by upgrading Composer in place, so it picked up the latest Dataflow-related packages, where this has been fixed.
The service_account attribute has been added in this commit https://github.com/apache/airflow/commit/de65a5cc5acaa1fc87ae8f65d367e101034294a6
If you can't upgrade Composer, try updating the Google providers package to the latest version, or at least version 7.0.
You can check the commit in the commit log and identify the minimum version here: https://airflow.apache.org/docs/apache-airflow-providers-google/stable/commits.html#id6
Even though Composer uses its own fork, the OSS package should work. You can see the list of packages in the Composer version list (https://cloud.google.com/composer/docs/concepts/versioning/composer-versions); it says apache-airflow-providers-google==2022.5.18+composer instead of apache-airflow-providers-google==7.0.
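Once you're on a provider version that includes that commit, passing the service account should work; a hedged sketch (the account value is a placeholder):
from airflow.providers.google.cloud.operators.dataflow import DataflowConfiguration

dataflow_config = DataflowConfiguration(
    job_name="{{task.task_id}}",
    location=LOCATION,
    project_id=PROJECT,
    wait_until_finished=False,
    gcp_conn_id="google_cloud_default",
    service_account="my-sa@my-project.iam.gserviceaccount.com",  # placeholder; accepted once the fixed provider is installed
)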
I am trying to run the GitHub repo found at this link: https://github.com/HowieMa/DeepSORT_YOLOv5_Pytorch
I installed the requirements via pip install -r requirements.txt.
I am running this in a Python 3.8 virtual environment on a DJI Manifold 2-G, which runs on an NVIDIA Jetson TX2.
The following is the terminal output.
$ python main.py --cam 0 --display
Namespace(agnostic_nms=False, augment=False, cam=0, classes=[0], conf_thres=0.5, config_deepsort='./configs/deep_sort.yaml', device='', display=True, display_height=600, display_width=800, fourcc='mp4v', frame_interval=2, img_size=640, input_path='input_480.mp4', iou_thres=0.5, save_path='output/', save_txt='output/predict/', weights='yolov5/weights/yolov5s.pt')
Initialize DeepSORT & YOLO-V5
Using CPU
Using webcam 0
Traceback (most recent call last):
File "main.py", line 259, in <module>
with VideoTracker(args) as vdo_trk:
File "main.py", line 53, in __init__
cfg.merge_from_file(args.config_deepsort)
File "/home/dji/Desktop/targetTrackers/howieMa/DeepSORT_YOLOv5_Pytorch/utils_ds/parser.py", line 23, in merge_from_file
self.update(yaml.load(fo.read()))
TypeError: load() missing 1 required positional argument: 'Loader'
I have found some suggestions on GitHub, such as TypeError: load() missing 1 required positional argument: 'Loader' in Google Colab, which suggest changing yaml.load to yaml.safe_load.
This is the code block to modify:
class YamlParser(edict):
    """
    This is a yaml parser based on EasyDict.
    """

    def __init__(self, cfg_dict=None, config_file=None):
        if cfg_dict is None:
            cfg_dict = {}

        if config_file is not None:
            assert(os.path.isfile(config_file))
            with open(config_file, 'r') as fo:
                cfg_dict.update(yaml.load(fo.read()))

        super(YamlParser, self).__init__(cfg_dict)

    def merge_from_file(self, config_file):
        with open(config_file, 'r') as fo:
            self.update(yaml.load(fo.read()))

    def merge_from_dict(self, config_dict):
        self.update(config_dict)
However, changing yaml.load into yaml.safe_load leads me to this error instead:
$ python main.py --cam 0 --display
Namespace(agnostic_nms=False, augment=False, cam=0, classes=[0], conf_thres=0.5, config_deepsort='./configs/deep_sort.yaml', device='', display=True, display_height=600, display_width=800, fourcc='mp4v', frame_interval=2, img_size=640, input_path='input_480.mp4', iou_thres=0.5, save_path='output/', save_txt='output/predict/', weights='yolov5/weights/yolov5s.pt')
Initialize DeepSORT & YOLO-V5
Using CPU
Using webcam 0
Done..
Camera ...
Done. Create output file output/results.mp4
Illegal instruction (core dumped)
Has anyone encountered anything similar? Thank you!
Try this:
yaml.load(fo.read(), Loader=yaml.FullLoader)
It seems that pyyaml>=5.1 requires a Loader argument.
If you need FullLoader, you can also use the "sugar" method yaml.full_load().
And as of pyyaml>=5.4, it has no known critical vulnerabilities (see the pyyaml release status).
More about yaml.load(input) can be found in the PyYAML documentation.
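Applied to the YamlParser shown in the question, the patch might look like this (yaml.full_load(stream) is shorthand for yaml.load(stream, Loader=yaml.FullLoader)):
def merge_from_file(self, config_file):
    with open(config_file, 'r') as fo:
        # pass an explicit Loader, as required by pyyaml>=5.1
        self.update(yaml.load(fo.read(), Loader=yaml.FullLoader))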
My Prophet model runs on the test server, but the same model fails on the production server. I am using Python 3.7.6 and prophet 0.6; the other libraries are the same versions. In production I get this error:
Fatal Python error: Segmentation fault
Current thread 0x00007f5425728740 (most recent call first):
File "/vhosting/anaconda3/lib/python3.7/site-packages/pandas/core/indexers.py", line 211 in maybe_convert_indices
File "/vhosting/anaconda3/lib/python3.7/site-packages/pandas/core/internals/managers.py", line 1386 in take
File "/vhosting/anaconda3/lib/python3.7/site-packages/pandas/core/frame.py", line 4937 in sort_values
File "/vhosting/GBYATANOMALY/scripts/GBYATANOMALY.py", line 109 in pre_processing
File "/vhosting/GBYATANOMALY/scripts/GBYATANOMALY.py", line 318 in <module>
There is no problem with the data: the script runs fine with the production data on the development server.
The pre_processing function is below.
def pre_processing(df_upd):
    df_notnull = df_upd
    # parse the timestamp column
    df_notnull['TIME'] = pd.to_datetime(df_notnull['TIME_X'], format='%Y-%m-%d %H:%M:%S.%f')
    df_dt = df_notnull[['TIME', 'DIMENSION', 'TARGET']]
    df_dt['TARGET'] = df_dt['TARGET'].astype('float32')
    # the segfault in the traceback happens inside this sort_values call
    df_dt.sort_values(by='TIME', ascending=True, inplace=True)
    # keep only the rows holding the max TARGET per (TIME, DIMENSION) group
    df_dt = df_dt[df_dt['TARGET'] == df_dt.groupby(['TIME', 'DIMENSION'])['TARGET'].transform(max)]
    return df_dt
It sounds like a problem with the installation. I would try uninstalling and reinstalling pandas.
If that does not help, do the same for Python.
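A minimal sketch of that reinstall, assuming pip inside the Anaconda environment shown in the traceback (the version pin is hypothetical; match whatever the working development server reports):
python -m pip uninstall -y pandas
python -m pip install pandas==1.0.1  # hypothetical pin; use the dev server's version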
Still rather new to Python. I've been referencing a few blogs regarding the jnpr.junos packages, specifically from Jeremy Schulman (http://forums.juniper.net/t5/Automation/Python-for-Non-Programmers-Part-2/bc-p/277682). I'm simply trying to make sure I have the commands right: I'm attempting to pass simple set commands to an SRX650 cluster.
>>> from jnpr.junos.utils.config import Config
>>> from jnpr.junos import Device
>>> dev = Device(host='devip',user='myuser',password='mypwd')
>>> dev.open()
Device(devip)
>>> cu = Config(dev)
>>> cu
jnpr.junos.utils.Config(devip)
>>> set_cmd = 'set system login message "Hello Admin!"'
>>> cu.load(set_cmd,format='set')
Warning (from warnings module):
File "C:\Python27\lib\site-packages\junos_eznc-1.0.0- py2.7.egg\jnpr\junos\utils\config.py", line 273
if any([e.find('[error-severity="error"]') for e in rerrs]):
FutureWarning: The behavior of this method will change in future versions. Use specific 'len(elem)' or 'elem is not None' test instead.
Traceback (most recent call last):
File "<pyshell#8>", line 1, in <module>
cu.load(set_cmd,format='set')
File "C:\Python27\lib\site-packages\junos_eznc-1.0.0- py2.7.egg\jnpr\junos\utils\config.py", line 296, in load
return try_load(rpc_contents, rpc_xattrs)
File "C:\Python27\lib\site-packages\junos_eznc-1.0.0-py2.7.egg\jnpr\junos\utils\config.py", line 274, in try_load
raise err
RpcError
I've done quite a bit of searching and can't find anything explaining why this RpcError is raised. I've confirmed that the syntax is correct and read through the jnpr.junos documentation for Junos PyEZ.
I found that I was using an outdated version of junos-eznc. Running pip install -U junos-eznc updated me to junos-eznc 1.3.1. After doing this, my script worked properly.
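For anyone following along, the full working sequence on the newer junos-eznc looks roughly like this (host and credentials are placeholders; cu.commit() is required for the candidate config to take effect):
from jnpr.junos import Device
from jnpr.junos.utils.config import Config

dev = Device(host='devip', user='myuser', password='mypwd')
dev.open()
cu = Config(dev)
cu.load('set system login message "Hello Admin!"', format='set')
cu.pdiff()    # print the candidate configuration diff
cu.commit()
dev.close()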
I am working on creating a WebGL interface, for which I am trying to convert FBX models to the JSON file format in an automated process, using the Python file convert_fbx_three.py (from Mr.doob's GitHub project) from the command line.
When I try the following command to convert the FBX:
python convert_fbx_three.py Dolpine.fbx Dolpine
I get the following errors:
Error in cmd:
Traceback (most recent call last):
File "convert_fbx_three.py", line 1625, in <module>
sdkManager, scene = InitializeSdkObjects()
File "D:\xampp\htdocs\upload\user\fbx\FbxCommon.py", line 7, in InitializeSdkObjects
lSdkManager = KFbxSdkManager.Create()
NameError: global name 'FbxManager' is not defined
I am using Autodesk FBX SDK 2012.2 available here on Windows 7.
Can you please try the following:
import FbxCommon
...
lSdkManager, lScene = FbxCommon.InitializeSdkObjects()
You probably need to add environment variables (such as PYTHONPATH) pointing to the folder that contains fbx.pyd, FbxCommon.py, and fbxsip.pyd before calling anything in those modules.
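Alternatively, a minimal sketch of doing that from within the script itself (the SDK path is hypothetical and depends on where the FBX Python SDK is installed):
import sys
# make the FBX SDK bindings (fbx.pyd, FbxCommon.py, fbxsip.pyd) importable
sys.path.append(r"C:\Program Files\Autodesk\FBX\FbxSdk\2012.2\lib\Python26_x64")  # hypothetical install path

import FbxCommon
lSdkManager, lScene = FbxCommon.InitializeSdkObjects()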