It's my first time using the Azure Face Detection API, and I'm using this code:
import os
import io
import cv2
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials
from PIL import Image, ImageDraw, ImageFont
API_KEY = '...'
ENDPOINT = '...'
image = open('realmadrid.jpg', 'rb')
face_client = FaceClient(ENDPOINT, CognitiveServicesCredentials(API_KEY))
response_detected_faces = face_client.face.detect_with_stream(
    image=image,
    detection_model='detection_03',
    recognition_model='recognition_04',
    return_face_landmarks=True,
)
if not response_detected_faces:
    raise Exception("No face detected!")
print(f"Number of faces detected: {len(response_detected_faces)}")
The problem is that every time I run this code, it raises an exception:
/home/thecowmilk/dev/azure_faceapi/venv/bin/python /home/thecowmilk/dev/azure_faceapi/faceapi/starting.py
Traceback (most recent call last):
  File "/home/thecowmilk/dev/azure_faceapi/faceapi/starting.py", line 20, in <module>
    detected_faces = face_client.face.detect_with_stream(
  File "/home/thecowmilk/dev/azure_faceapi/venv/lib/python3.8/site-packages/azure/cognitiveservices/vision/face/operations/_face_operations.py", line 782, in detect_with_stream
    raise models.APIErrorException(self._deserialize, response)
azure.cognitiveservices.vision.face.models._models_py3.APIErrorException: (InvalidRequest) Invalid request has been sent.
Process finished with exit code 1
I don't know how to solve this. It looks like there isn't much info about the Azure Face Detection API. I'd appreciate your thoughts <3!
According to Get face landmarks, to get face landmark data, set the detectionModel parameter to DetectionModel.Detection01 and the returnFaceLandmarks parameter to true.
IList<DetectedFace> faces2 = await faceClient.Face.DetectWithUrlAsync(url: imageUrl, returnFaceId: true, returnFaceLandmarks: true, detectionModel: DetectionModel.Detection01);
Instead of detection_03, use detection_01 (detection_03 does not return face landmarks):
response_detected_faces = face_client.face.detect_with_stream(
    image=image,
    detection_model='detection_01',
    recognition_model='recognition_04',
    return_face_landmarks=True,
)
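For completeness, here's a minimal sketch of reading the landmark coordinates back from the response; it assumes the client and image from the snippet above and that at least one face was detected (the landmark names come from the SDK's FaceLandmarks model):

for i, face in enumerate(response_detected_faces):
    landmarks = face.face_landmarks
    # Each landmark is a Coordinate with x/y positions in pixels.
    print(f"Face {i}: left pupil at ({landmarks.pupil_left.x:.1f}, {landmarks.pupil_left.y:.1f}), "
          f"nose tip at ({landmarks.nose_tip.x:.1f}, {landmarks.nose_tip.y:.1f})")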
You can also refer to the recent changes/limitations to some Face API features: Responsible AI investments and safeguards for facial recognition
I am trying to write a subscriber that will take images from a camera in a Gazebo simulation and save them. I am able to take pictures and save them; however, I want to increment the image name each time, and I am finding it difficult to do so. I tried to create a class and increment a number (image_number) inside the class each time the image_callback function runs, but I get an error. I also tried defining a global variable and incrementing that, yet it did not recognize the variable inside the functions. I have attached the code and the error below; any help is greatly appreciated!
# rospy for the subscriber
import rospy, time
# ROS Image message
from sensor_msgs.msg import Image
# ROS Image message -> OpenCV2 image converter
from cv_bridge import CvBridge, CvBridgeError
# OpenCV2 for saving an image
import cv2
# Instantiate CvBridge
bridge = CvBridge()
class Image(object):
    def __init__(self):
        self.image_number = 0
        #rospy.init_node('image_listener')
        # Define your image topic
        image_topic = "/wamv/sensors/cameras/front_left_camera/image_raw"
        # Set up your subscriber and define its callback
        rospy.Subscriber(image_topic, Image, self.image_callback)
        #rospy.spin()
    def image_callback(self, msg):
        print("Received an image!")
        try:
            # Convert your ROS Image message to OpenCV2
            cv2_img = bridge.imgmsg_to_cv2(msg, "bgr8")
        except CvBridgeError as e:
            print(e)
        else:
            # Save your OpenCV2 image as a jpeg
            cv2.imwrite('croc_{}'.format(self.image_number)+'.png', cv2_img)
            print("Saved Image!")
            self.image_number += 1
        time.sleep(3.0)
if __name__ == '__main__':
    rospy.init_node('image_listener')
    image_node = Image()
and the error:
Traceback (most recent call last):
  File "/home/jehan/PycharmProjects/spawner/take_photo.py", line 73, in <module>
    image_node = Image()
  File "/home/jehan/PycharmProjects/spawner/take_photo.py", line 54, in __init__
    rospy.Subscriber(image_topic, Image, self.image_callback)
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 563, in __init__
    super(Subscriber, self).__init__(name, data_class, Registration.SUB)
  File "/opt/ros/noetic/lib/python3/dist-packages/rospy/topics.py", line 144, in __init__
    raise ValueError("data_class [%s] is not a message data class"%data_class.__class__.__name__)
ValueError: data_class [type] is not a message data class
Your problem isn't the counter. It's the fact that your class is called Image, just like the message type, which means you are redefining the imported message type. Then, when you go to create a subscriber, the Image type is actually your class, not a ROS message type. I'd suggest using a different class name, such as:
class ImageNode(object):
image_node = ImageNode()
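For completeness, here is a minimal sketch of the node from the question with only the class renamed; the topic name and saving logic are taken from the question unchanged, and rospy.spin() is re-enabled so callbacks keep firing:

import rospy, time
from sensor_msgs.msg import Image          # ROS message type keeps its name
from cv_bridge import CvBridge, CvBridgeError
import cv2

bridge = CvBridge()

class ImageNode(object):  # renamed so it no longer shadows sensor_msgs.msg.Image
    def __init__(self):
        self.image_number = 0
        image_topic = "/wamv/sensors/cameras/front_left_camera/image_raw"
        rospy.Subscriber(image_topic, Image, self.image_callback)

    def image_callback(self, msg):
        try:
            # Convert the ROS Image message to an OpenCV2 image
            cv2_img = bridge.imgmsg_to_cv2(msg, "bgr8")
        except CvBridgeError as e:
            print(e)
        else:
            # Save the frame with an incrementing filename
            cv2.imwrite('croc_{}.png'.format(self.image_number), cv2_img)
            self.image_number += 1
        time.sleep(3.0)

if __name__ == '__main__':
    rospy.init_node('image_listener')
    image_node = ImageNode()
    rospy.spin()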
Hi, I am trying to send a post to Instagram using the instagrapi module,
and I'm using photo_upload to do that, but it's not working.
Here is my code:
from instagrapi import Client
print("im gonna log in")
cl = Client()
cl.login("UserName", "Password")
cl.photo_upload("picture.png", "hello this is a test from instagrapi")
but I get this error:
Traceback (most recent call last):
  File "E:\HadiH2o\Documents\_MyProjects\Python\Test\Test.py", line 10, in <module>
  File "C:\Users\HadiH2o\AppData\Local\Programs\Python\Python39\lib\site-packages\instagrapi\mixins\photo.py", line 205, in photo_upload
    upload_id, width, height = self.photo_rupload(path, upload_id)
  File "C:\Users\HadiH2o\AppData\Local\Programs\Python\Python39\lib\site-packages\instagrapi\mixins\photo.py", line 170, in photo_rupload
    raise PhotoNotUpload(response.text, response=response, **last_json)
instagrapi.exceptions.PhotoNotUpload: {"debug_info":{"retriable":false,"type":"ProcessingFailedError","message":"Request processing failed"}}
help please!
I found the answer to the question.
To send a post to Instagram, the photo format must be JPG and the photo must be no larger than 1080 x 1080 pixels.
This is the code:
from pathlib import Path
from PIL import Image
from instagrapi import Client
image = Image.open("picture.jpg")
image = image.convert("RGB")
new_image = image.resize((1080, 1080))
new_image.save("new_picture.jpg")
cl = Client()
cl.login("UserName", "Password")
photo_path = "new_picture.jpg"
photo_path = Path(photo_path)
cl.photo_upload(photo_path, "hello this is a test from instagrapi")
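As a side note (my own suggestion, not part of the answer above): if you want to keep the original aspect ratio instead of forcing a 1080 x 1080 square, PIL's thumbnail() shrinks an image in place so that neither side exceeds the given size:

from pathlib import Path
from PIL import Image
from instagrapi import Client

image = Image.open("picture.jpg").convert("RGB")
image.thumbnail((1080, 1080))  # resizes in place, preserving aspect ratio
image.save("new_picture.jpg")

cl = Client()
cl.login("UserName", "Password")
cl.photo_upload(Path("new_picture.jpg"), "hello this is a test from instagrapi")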
I'm trying to make a GIF in VapourSynth. I followed tutorials yet keep getting a NameError. If anyone could explain what's wrong and how to fix it, I would appreciate it.
Failed to initialize script.
Failed to evaluate the script:
Python exception: name 'video' is not defined
Traceback (most recent call last):
  File "src\cython\vapoursynth.pyx", line 1927, in vapoursynth.vpy_evaluateScript
  File "src\cython\vapoursynth.pyx", line 1928, in vapoursynth.vpy_evaluateScript
  File "C:/Users/caitl/Pictures/bbh.vpy", line 12, in
    core.max_cache_size = 1000 #Use this command to limit the RAM usage. 1000 or 2000 is fine.
NameError: name 'video' is not defined
Code
import os
import vapoursynth as vs
import havsfunc as haf
import mvsfunc as mvs
import descale as descale
import muvsfunc as muvs
import resamplehq as rhq
import CSMOD as cs
import Dither as dither
core = vs.get_core()
video = core.std.Trim(video, a, b)
video = haf.QTGMC(video, Preset="Slower", TFF=True)
video = core.fmtc.resample(video, css="444")
video = descale.Debilinear(video, 629,354)
video = mvs.BM3D(video, sigma=8.84, radius1=1, profile1="fast", matrix="709")
video = hnw.FineSharp(video, sstr=1.13)
video = core.std.CropRel(video, left=72, top=52, right=107, bottom=52)
video = core.fmtc.bitdepth(video, bits=8)
video.set_output()
You didn't define video before the call to Trim, which takes it as a parameter.
The example script in the documentation says you need to create a video object, for example, by loading a file:
from vapoursynth import core
video = core.ffms2.Source(source='Rule6.mkv')
This loads a video file Rule6.mkv using the ffms2 plugin, which it assumes is installed correctly.
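Applied to the script in the question, the top would look something like the sketch below; the source filename and the Trim frame numbers are placeholders you would replace with your own values:

import vapoursynth as vs
import havsfunc as haf

core = vs.get_core()
core.max_cache_size = 1000  # limit RAM usage

# Define 'video' by loading a source clip before any filter uses it.
video = core.ffms2.Source(source='input.mkv')  # placeholder filename

# Now Trim has a clip to work on; first/last are frame numbers.
video = core.std.Trim(video, first=1000, last=1500)
video = haf.QTGMC(video, Preset="Slower", TFF=True)
video.set_output()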
I'm having an issue with my code in production. It works locally, and I'm not sure how to troubleshoot the issue.
From what I can tell, PIL is the same version in both environments. The Image module works as expected both locally and in production; ImageEnhance is causing issues.
Locally, the following code works as expected.
from PIL import Image
from PIL import ImageEnhance
image = Image.open("a.jpg")
newImage = ImageEnhance.Contrast(image)
newImage.enhance(1.5)
newImage.save("newImage.jpg")
However when trying this in my production environment, I get an error:
Traceback (most recent call last):
  File "analyse.py", line 95, in <module>
    processedImage = ImageEnhance.Sharpness(processedImage)
  File "/usr/lib/python2.7/dist-packages/PIL/ImageEnhance.py", line 97, in __init__
    self.degenerate = image.filter(ImageFilter.SMOOTH)
AttributeError: 'Contrast' object has no attribute 'filter'
The Contrast class doesn't create an image; it creates an enhancer object that can change an image. enhance() is what creates the new image:
from PIL import Image
from PIL import ImageEnhance
image = Image.open("a.jpg")
enhancer = ImageEnhance.Contrast(image)
new_image = enhancer.enhance(1.5)
new_image.save("newImage.jpg")
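The production traceback shows ImageEnhance.Sharpness being applied to a Contrast enhancer rather than to an image; a minimal sketch of chaining both enhancers correctly (file names are just examples) would be:

from PIL import Image, ImageEnhance

image = Image.open("a.jpg")
# enhance() returns a real Image, so its result can be fed to the next enhancer.
contrasted = ImageEnhance.Contrast(image).enhance(1.5)
sharpened = ImageEnhance.Sharpness(contrasted).enhance(2.0)
sharpened.save("newImage.jpg")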
I'm using Python to retrieve a Blob image from Azure storage and then send it to Custom Vision for a prediction.
This is the code:
import io
from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
block_blob_service = BlockBlobService(
    account_name=account_name,
    account_key=account_key
)
fp = io.BytesIO()
block_blob_service.get_blob_to_stream(
    container_name,
    blob_name,
    fp,
    max_connections=2
)
predictor = CustomVisionPredictionClient(
    cv_prediction_key,
    endpoint=cv_endpoint
)
# This call breaks with the below error message
results = predictor.predict_image(
    cv_project_id,
    fp.getvalue(),
    iteration_id=cv_iteration_id
)
However, executing the predict_image function results in the following error:
System.Private.CoreLib: Exception while executing function: Functions.ReloadPostgres. System.Private.CoreLib: Result: Failure
Exception: HttpOperationError: Operation returned an invalid status code 'Resource Not Found'
Stack:
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 288, in _handle__invocation_request
    self.__run_sync_func, invocation_id, fi.func, args)
  File "~/.pyenv/versions/3.6.8/lib/python3.6/concurrent/futures/thread.py", line 56, in run
    result = self.fn(*self.args, **self.kwargs)
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 347, in __run_sync_func
    return func(**params)
  File "~/py_func_app/ReloadPostgres/__init__.py", line 14, in main
    data_handler.fetch_prediction_data()
  File "~/py_func_app/Shared_Code/data_handler.py", line 127, in fetch_prediction_data
    cv_handler.predict_image(image_data.getvalue(), cv_model)
  File "~/py_func_app/Shared_Code/custom_vision.py", line 30, in predict_image
    raise e
  File "~/py_func_app/Shared_Code/custom_vision.py", line 26, in predict_image
    iteration_id=cv_model.cv_iteration_id
  File "~/.local/share/virtualenvs/py_func_app-GVYYSfCn/lib/python3.6/site-packages/azure/cognitiveservices/vision/customvision/prediction/custom_vision_prediction_client.py", line 215, in predict_image
    raise HttpOperationError(self._deserialize, response)
Below is a similar example using Custom Vision prediction with an image URL; you can change it to use an image file:
# -*- coding: utf-8 -*-
"""
Created on Tue Mar 19 11:04:54 2019
#author: moverm
"""
#from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
#block_blob_service = BlockBlobService(
# account_name=account_name,
# account_key=account_key
#)
#
#fp = io.BytesIO()
#block_blob_service.get_blob_to_stream(
# container_name,
# blob_name,
# fp,
# max_connections=2
#)
predictor = CustomVisionPredictionClient(
    "prediction-key",
    endpoint="https://southcentralus.api.cognitive.microsoft.com"
)
# This call breaks with the below error message
#results = predictor.predict_image(
# 'prediction-key',
# image_data.getvalue(),
# iteration_id=cv_iteration_id
#)
test_img_url = "https://pointsprizes-blog.s3-accelerate.amazonaws.com/316.jpg"
results = predictor.predict_image_url("project-Id", "Iteration-Id", url=test_img_url)
# Display the results.
for prediction in results.predictions:
    print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))
Basically, the issue is related to the endpoint. Use https://southcentralus.api.cognitive.microsoft.com as the endpoint.
It should work, and you should be able to see the prediction probability.
Hope it helps.
I tried to reproduce your issue and got a similar one, which was caused by using the incorrect endpoint from the Azure portal when I created a Cognitive Service in the Japan East region, as in the figure below.
As the figure above shows, the endpoint is https://japaneast.api.cognitive.microsoft.com/customvision/training/v1.0 for version 1, but the azure-cognitiveservices-vision-customvision PyPI page points out the correct endpoint, which should be https://{AzureRegion}.api.cognitive.microsoft.com, as in the figure below.
So I got a similar issue to yours when using the incorrect endpoint, as below. The code I used is the same as yours; the only difference is the running environment: yours is on Azure Functions, mine is a console script.
Meanwhile, according to the source code custom_vision_prediction_client.py of the Azure Cognitive Services SDK for Custom Vision, you can see the code base_url = '{Endpoint}/customvision/v2.0/Prediction', which concatenates your passed endpoint with /customvision/v2.0/Prediction to generate the real endpoint for calling the prediction API.
Therefore, as @MohitVerma-MSFT said, use https://<your cognitive service region>.api.cognitive.microsoft.com for the current version of the Python package.
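For reference, here is a minimal sketch that combines the blob-reading code from the question with the region-level endpoint described above; the account name, keys, container/blob names, and IDs are all placeholders:

import io
from azure.storage.blob import BlockBlobService
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient

block_blob_service = BlockBlobService(account_name="<account>", account_key="<key>")
fp = io.BytesIO()
block_blob_service.get_blob_to_stream("<container>", "<blob>", fp, max_connections=2)

# Region-level endpoint only; the SDK appends the /customvision/... prediction path itself.
predictor = CustomVisionPredictionClient(
    "<prediction-key>",
    endpoint="https://southcentralus.api.cognitive.microsoft.com"
)
results = predictor.predict_image("<project-id>", fp.getvalue(), iteration_id="<iteration-id>")
for prediction in results.predictions:
    print("\t" + prediction.tag_name + ": {0:.2f}%".format(prediction.probability * 100))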
As an additional note, there is an announcement of an important update for customvision.ai that you should know about; it may affect your current code soon.