I am currently able to send OpenCV image frames to my Flask server using the following code:
def sendtoserver(frame):
    imencoded = cv2.imencode(".jpg", frame)[1]
    headers = {"Content-type": "text/plain"}
    try:
        conn.request("POST", "/", imencoded.tostring(), headers)
        response = conn.getresponse()
    except conn.timeout as e:
        print("timeout")
    return response
But I want to send a unique_id along with the frame. I tried combining the frame and the id using JSON, but I get the following error: TypeError: Object of type 'bytes' is not JSON serializable. Does anybody have any idea how I can send some additional data along with the frame to the server?
UPDATED:
JSON format code:
def sendtoserver(frame):
    imencoded = cv2.imencode(".jpg", frame)[1]
    data = {"uid": "23", "frame": imencoded.tostring()}
    headers = {"Content-type": "application/json"}
    try:
        conn.request("POST", "/", json.dumps(data), headers)
        response = conn.getresponse()
    except conn.timeout as e:
        print("timeout")
    return response
I have actually solved this by using the Python requests module instead of the http.client module, making the following changes to my code above:
import requests

def sendtoserver(frame):
    imencoded = cv2.imencode(".jpg", frame)[1]
    file = {'file': ('image.jpg', imencoded.tostring(), 'image/jpeg', {'Expires': '0'})}
    data = {"id": "2345AB"}
    response = requests.post("http://127.0.0.1/my-script/", files=file, data=data, timeout=5)
    return response
This works because I was trying to send multipart/form-data, and the requests module can send both files and form data in a single request.
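For reference, a minimal sketch of what the Flask side could look like when receiving such a request (the route, field names, and return value here are assumptions based on the client code above, not the actual server):

from flask import Flask, request
import numpy as np
import cv2

app = Flask(__name__)

@app.route("/my-script/", methods=["POST"])
def receive_frame():
    # Form field sent through the `data` argument of requests.post.
    unique_id = request.form.get("id")
    # File part sent through the `files` argument.
    uploaded = request.files["file"]
    # Decode the JPEG bytes back into an OpenCV image.
    frame = cv2.imdecode(np.frombuffer(uploaded.read(), np.uint8), cv2.IMREAD_COLOR)
    return "received frame %s with shape %s" % (unique_id, frame.shape)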
You can try encoding your image as a base64 string:
import base64

with open("image.jpg", "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())
And send it as a normal string.
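Applied to the frame-sending code from the question, a rough sketch of that idea (reusing the same conn object from the question and assuming the Flask side base64-decodes the "frame" field) could look like:

import base64
import json
import cv2

def sendtoserver(frame, uid):
    # JPEG-encode the frame, then base64-encode it so it becomes a JSON-serializable string.
    imencoded = cv2.imencode(".jpg", frame)[1]
    frame_b64 = base64.b64encode(imencoded.tobytes()).decode("utf-8")
    data = {"uid": uid, "frame": frame_b64}
    headers = {"Content-type": "application/json"}
    conn.request("POST", "/", json.dumps(data), headers)
    return conn.getresponse()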
As others suggested, base64 encoding might be a good solution; however, if you can't or don't want to use it, you could add a custom header to the request, such as
headers = {"X-my-custom-header": "uniquevalue"}
Then on the flask side:
unique_value = request.headers.get('X-my-custom-header')
or
unique_value = request.headers['X-my-custom-header']
That way you avoid the overhead of processing your image data again (if that matters), and you can generate a unique id for each frame with something like the Python uuid module.
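A rough sketch of that approach with the requests module (the URL and field names are placeholders) might be:

import uuid
import cv2
import requests

def sendtoserver(frame):
    imencoded = cv2.imencode(".jpg", frame)[1]
    # Attach a per-frame unique id as a custom header instead of modifying the body.
    headers = {"X-my-custom-header": str(uuid.uuid4())}
    files = {"file": ("image.jpg", imencoded.tobytes(), "image/jpeg")}
    return requests.post("http://127.0.0.1/", files=files, headers=headers, timeout=5)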
Hope that helps
Related
I'm trying to output my response from Microsoft's Computer Vision API to a .json file; this works with all of the other APIs I've been using so far. With the code below, taken directly from Microsoft's documentation, I get an error:
Error: the JSON object must be str, not 'bytes'
Removing parsed = json.loads(data) and using print(json.dumps(data, sort_keys=True, indent=2)) prints out the information for the image that I want, but it also says Error; the output is prefixed with b (denoting it's bytes) and ends with is not JSON serializable.
I'm just trying to find out how I can get the response into a .json file like I'm able to do with other APIs, and am at a loss for how I can convert this in a way that will work.
import http.client, urllib.request, urllib.parse, urllib.error, base64, json

API_KEY = '{API_KEY}'
uri_base = 'westus.api.cognitive.microsoft.com'

headers = {
    'Content-Type': 'application/json',
    'Ocp-Apim-Subscription-Key': API_KEY,
}

params = urllib.parse.urlencode(
    {
        'visualFeatures': 'Categories, Description, Color',
        'language': 'en',
    }
)

body = "{'url': 'http://i.imgur.com/WgPtc53.jpg'}"

try:
    conn = http.client.HTTPSConnection(uri_base)
    conn.request('POST', '/vision/v1.0/analyze?%s' % params, body, headers)
    response = conn.getresponse()
    data = response.read()

    # 'data' contains the JSON data. The following formats the JSON data for display.
    parsed = json.loads(data)
    print("Response:")
    print(json.dumps(parsed, sort_keys=True, indent=2))
    conn.close()
except Exception as e:
    print('Error:')
    print(e)
Shortly after posting the question, I realized I had missed looking for something: just converting bytes to a string.
I found this: Convert bytes to a Python string
and was able to modify my code to:
parsed = json.loads(data.decode('utf-8'))
And it seems to have resolved my issue; it now runs error-free and I'm able to export to a .json file like I needed.
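For anyone doing the same, writing the parsed response out to a .json file is then just a matter of something like this (the output filename is only an example):

parsed = json.loads(data.decode('utf-8'))
with open('response.json', 'w') as out_file:
    json.dump(parsed, out_file, sort_keys=True, indent=2)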
I'm currently having issues sending a file to an API. I've manually tested my script's base64 output by printing it to the screen and pasting it directly into the API's sandbox, which works correctly, but when I package it up in JSON ready to send, it no longer works.
What I need is this to send to the API:
{
    "content": "mybase64encodedfilestuff"
}
and my python code is:
with open(filename, "rb") as image_file:
    encoded_string = base64.b64encode(image_file.read())
    encoded_string = encoded_string.decode("utf-8")

payload = {}
payload['content'] = encoded_string
json_payload = json.dumps(payload)
I then send this to the API as:
r = requests.post(url='https://api.example.com/uploads', data=payload,
                  headers={'Content-Type': 'application/json',
                           'Authorization': 'Basic ' + api_string}, timeout=5)
I feel like I've missed something simple but can't figure it out, as I just get an error 400, "please provide valid content first". If I make the payload a copy and paste of the printed output, it works.
It turned out I had converted my payload to a JSON string but was still sending the original dict, not the JSON-encoded one.
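In other words, the fix is to send the JSON-encoded string rather than the original dict, roughly like this:

# Send the string produced by json.dumps(payload), not the dict itself.
r = requests.post(url='https://api.example.com/uploads', data=json_payload,
                  headers={'Content-Type': 'application/json',
                           'Authorization': 'Basic ' + api_string}, timeout=5)

# Alternatively, let requests do the encoding and set the Content-Type header:
# r = requests.post(url='https://api.example.com/uploads', json=payload,
#                   headers={'Authorization': 'Basic ' + api_string}, timeout=5)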
I've been looking around for ways to upload a large file with additional data, but there doesn't seem to be any solution. To upload a file, I've been using this code, and it's been working fine with small files:
with open("my_file.csv", "rb") as f:
files = {"documents": ("my_file.csv", f, "application/octet-stream")}
data = {"composite": "NONE"}
headers = {"Prefer": "respond-async"}
resp = session.post("my/url", headers=headers, data=data, files=files)
The problem is that the code loads the whole file into memory before sending, and I run into a MemoryError when uploading large files. I've looked around, and the way to stream data is to set
resp = session.post("my/url", headers=headers, data=f)
but I need to add {"composite": "NONE"} to the data. If not, the server wouldn't recognize the file.
You can use the requests-toolbelt to do this:
import requests
from requests_toolbelt.multipart import encoder

session = requests.Session()
with open('my_file.csv', 'rb') as f:
    form = encoder.MultipartEncoder({
        "documents": ("my_file.csv", f, "application/octet-stream"),
        "composite": "NONE",
    })
    headers = {"Prefer": "respond-async", "Content-Type": form.content_type}
    resp = session.post(url, headers=headers, data=form)
session.close()
This will cause requests to stream the multipart/form-data upload for you.
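If you also want to see upload progress, requests-toolbelt provides MultipartEncoderMonitor, which wraps the encoder and invokes a callback as bytes are read; a minimal sketch reusing the session and url from above might look like:

from requests_toolbelt.multipart import encoder

def progress_callback(monitor):
    # monitor.bytes_read is how much of the multipart body has been sent so far.
    print("uploaded %d of %d bytes" % (monitor.bytes_read, monitor.len))

with open('my_file.csv', 'rb') as f:
    form = encoder.MultipartEncoder({
        "documents": ("my_file.csv", f, "application/octet-stream"),
        "composite": "NONE",
    })
    monitor = encoder.MultipartEncoderMonitor(form, progress_callback)
    headers = {"Prefer": "respond-async", "Content-Type": monitor.content_type}
    resp = session.post(url, headers=headers, data=monitor)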
I have a REST PUT request to upload a file using the Django REST framework. When I upload a file using the Postman REST client, it works fine:
But when I try to do this with my code:
import requests
API_URL = "http://123.316.118.92:8888/api/"
API_TOKEN = "1682b28041de357d81ea81db6a228c823ad52967"
URL = API_URL + 'configuration/configlet/31'
#files = {
files = {'file': open('configlet.txt','rb')}
print URL
print "Update Url ==-------------------"
headers = {'Content-Type' : 'text/plain','Authorization':API_TOKEN}
resp = requests.put(URL,files=files,headers = headers)
print resp.text
print resp.status_code
I am getting an error on the server side:
MultiValueDictKeyError at /api/configuration/31/
"'file'"
I am passing 'file' as the key but still getting the above error. Please let me know what I might be doing wrong here.
This is how my Django server view looks
def put(self, request, id, format=None):
    configlet = self.get_object(id)
    configlet.config_path.delete(save=False)

    file_obj = request.FILES['file']
    configlet.config_path = file_obj
    file_content = file_obj.read()
    params = parse_file(file_content)
    configlet.parameters = json.dumps(params)

    logger.debug("File content: " + str(file_content))
    configlet.save()
For this to work you need to send a multipart/form-data body. You should not be setting the content-type of the whole request to text/plain here; set only the mime-type of the one part:
files = {'file': ('configlet.txt', open('configlet.txt','rb'), 'text/plain')}
headers = {'Authorization': API_TOKEN}
resp = requests.put(URL, files=files, headers=headers)
This leaves setting the Content-Type header for the request as a whole to the library, and using files sets that to multipart/form-data for you.
In the code below, I am trying to POST data with urllib2. However, I am getting an HTTP 400 Bad Request error. Can anyone help me with why this might be the case? The URL is reachable from my computer and all relevant ports are open.
data = {'operation': 'all'}
results = an.post(an.get_cookie(), 'http://{}:8080/api/v1/data/controller/core/action/switch/update-host-stats'.format(an.TARGET), data)
print results

def post(session_cookie, url, payload):
    data = urllib.urlencode(payload)
    req = urllib2.Request(url, data)
    req.add_header('Cookie', 'session_cookie=' + session_cookie)
    try:
        returnedData = urllib2.urlopen(req, data, timeout=30)
        data = json.load(returnedData)
    except urllib2.URLError, e:
        print e.code
        print 'URL ERROR'
        return {}
    return data
The following code works for me:
import json
import urllib2
import logging

def post_json_request(url, post_data, optional_headers={}):
    """
    HTTP POST to server with JSON as parameter
    @param url: url to post the data to
    @param post_data: JSON formatted data
    @return: response as raw data
    """
    response = ""
    try:
        req = urllib2.Request(url, post_data, optional_headers)
        jsonDump = json.dumps(post_data)
        response = urllib2.urlopen(req, jsonDump)
    except Exception, e:
        logging.fatal("Exception while trying to post data to server - %s", e)
    return response
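For example, calling it could look something like this (the URL and payload here are placeholders):

headers = {'Content-Type': 'application/json'}
payload = {'operation': 'all'}
response = post_json_request('http://example.com:8080/api/endpoint', payload, headers)
print response.read()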
I'm using it on various stubborn platforms that insist on receiving data in a specific way.
Hope it will help,
Liron