I am trying to implement a Flask backend endpoint where two images can be uploaded: a background and a foreground (the foreground has a transparent background). The endpoint pastes the foreground on top of the background. I have the following Python code, which I have tested with local files and which works:
from io import BytesIO

from flask import request
from PIL import Image

background_name = request.args.get("background")
overlay_name = request.args.get("overlay")
output_name = request.args.get("output")
background_data = request.files["background"].read()
overlay_data = request.files["overlay"].read()
background = Image.open(BytesIO(background_data)).convert("RGBA")
overlay = Image.open(BytesIO(overlay_data)).convert("RGBA")
overlay = resize_image(overlay, background.size)  # resize_image is my own helper
# Passing the overlay as the mask makes paste() respect its alpha channel
background.paste(overlay, (0, 0), overlay)
background.save(output_name, "PNG")
However, when I try to upload the same images through Alamofire with the following code:
Alamofire.upload(multipartFormData: { formData in
    formData.append(bgData, withName: "background", fileName: "background.jpg", mimeType: "image/jpeg")
    formData.append(fgData, withName: "foreground", fileName: "foreground.png", mimeType: "image/png")
}, to: "http://localhost:8080/image_overlay?background=background%2Ejpg&overlay=overlay%2Epng&output=result%2Epng", encodingCompletion: { result in
    switch result {
    case .success(let upload, _, _):
        upload.validate().responseJSON(completionHandler: { response in
            switch response.result {
            case .success(let value): print("success: \(value)")
            case .failure(let error): print("response error \(error)")
            }
        })
    case .failure(let error):
        print("encoding error \(error)")
    }
})
The foreground appears with a white background instead of a transparent one, so the resulting image is just the foreground on white. How can I get Alamofire to send the transparency?
EDIT:
I tried translating this to a cURL request, and it works as expected. I used nc -l localhost 8080 to view the full request, and it seems that for the foreground picture, even though the Content-Type was set to "application/octet-stream", the next line was "?PNG" (the start of the PNG file signature). This line was not present in the Alamofire request. How can I get the request to recognize the image as a PNG?
I can't believe I was doing this, but when I was obtaining the image data, I was using
let fgData = UIImageJPEGRepresentation(UIImage(named: "overlay.png")!, 0.75)
This was the only way I had ever obtained data from an image in Swift, even though it obviously should have been what I changed it to:
let fgData = UIImagePNGRepresentation(UIImage(named: "overlay.png")!)  // PNG keeps the alpha channel; JPEG discards it
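For anyone debugging something similar: the missing "?PNG" line above is the PNG file signature, the eight bytes \x89PNG\r\n\x1a\n that begin every PNG file. A quick server-side check (a minimal sketch of my own, not part of the original endpoint) can confirm whether an upload really arrived as a PNG and still carries an alpha channel:

from io import BytesIO
from PIL import Image

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"  # every PNG file starts with these 8 bytes

def describe_upload(data: bytes) -> None:
    # A JPEG-encoded upload fails this check even if its filename ends in .png
    if not data.startswith(PNG_SIGNATURE):
        print("not a PNG (the alpha channel was probably lost at encode time)")
        return
    img = Image.open(BytesIO(data))
    print(img.format, img.mode)  # expect "PNG" and "RGBA" for transparency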
Related
I have a POST request in Flask that accepts an image file, and I want to return another image so I can retrieve it in Flutter and put it on screen.
In Flutter, I can send the image through the POST request, but I don't know how to retrieve an image and put it on screen.
I know I can save the image in Flask's static folder and retrieve the URL from Flutter, and that works, but I think it is too inefficient for what I'm doing.
So I want to send the image directly, without saving it.
This was my last attempt, but it didn't work.
import json
from io import BytesIO

import cv2
import numpy as np
from flask import request
from keras.preprocessing.image import array_to_img

@app.route("/send-image", methods=['POST'])
def send_image():
    if request.method == 'POST':
        user_image = request.files["image"]
        image = cv2.imdecode(
            np.frombuffer(user_image.read(), np.uint8), cv2.IMREAD_COLOR)
        # data is a NumPy array returned by the predict function; it is an image
        data = predict(image)
        data_object = {}
        data = data.reshape(data.shape[0], data.shape[1], 1)
        data2 = array_to_img(data)
        b = BytesIO()
        data2.save(b, format="jpeg")
        b.seek(0)
        data_object["img"] = str(b.read())
        return json.dumps(data_object)
Here I returned a Uint8List, because I read on the internet that I can pass it to Image.memory() to put the image on the screen.
Future<Uint8List> makePrediction(File photo) async {
  const url = "http://192.168.0.11:5000/send-image";
  try {
    FormData data = new FormData.fromMap({
      "image": await MultipartFile.fromFile(photo.path),
    });
    final response = await dio.post(url, data: data);
    String jsonResponse = json.decode(response.data)["img"].toString();
    List<int> bytes =
        utf8.encode(jsonResponse.substring(2, jsonResponse.length - 1));
    Uint8List dataResponse = Uint8List.fromList(bytes);
    return dataResponse;
  } catch (error) {
    print("ERRORRR: " + error.toString());
  }
}
Sorry if what I did here doesn't make sense; after trying a lot of things I wasn't thinking properly. I really need your help.
You can convert the image to base64 and display it with Flutter.
On server:
import base64
...
data_object["img"] = base64.b64encode(b.read()).decode('ascii')
...
On client:
...
String imageStr = json.decode(response.data)["img"].toString();
Image.memory(base64Decode(imageStr));
...
The problem with your server-side code is that it tries to coerce a bytes object to str using the str() function.
However, in Python 3, str() falls back to bytes.__repr__, since bytes.__str__ is not defined. This results in something like this:
str(b'\xf9\xf3') == "b'\\xf9\\xf3'"
It makes the JSON response look like:
{"img": "b'\\xf9\\xf3'"}
Without writing a dedicated parser, you cannot read this format of image data in Flutter. However, base64 is a well-known encoding for binary data, and Flutter already has a parser for it: base64Decode.
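Putting the fix together, the route from the question would look roughly like this (a minimal sketch; predict is the question's own helper, and the JSON shape matches the client snippet above):

import base64
import json
from io import BytesIO

@app.route("/send-image", methods=['POST'])
def send_image():
    user_image = request.files["image"]
    image = cv2.imdecode(np.frombuffer(user_image.read(), np.uint8), cv2.IMREAD_COLOR)
    data = predict(image)  # NumPy array, as in the question
    data2 = array_to_img(data.reshape(data.shape[0], data.shape[1], 1))
    b = BytesIO()
    data2.save(b, format="jpeg")
    b.seek(0)
    # base64-encode the raw bytes instead of calling str() on them
    return json.dumps({"img": base64.b64encode(b.read()).decode('ascii')})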
The Setup:
I have an NGINX web server. On a web page, I am running the following JavaScript to refresh the file tmp_image.jpg every half second:
var img = new Image();
img.onload = function() {
    var canvas = document.getElementById("x");
    var context = canvas.getContext("2d");
    context.drawImage(img, 0, 0);
    setTimeout(timedRefresh, 500);
};

function timedRefresh() {
    // just changing the src attribute always triggers the onload callback
    try {
        img.src = 'tmp_image.jpg?d=' + Date.now();
    } catch (e) {
        console.log(e);
    }
}

setTimeout(timedRefresh, 100);
With the HTML:
<canvas id="x" width="700" height="500"></canvas>
In another process, using a Raspberry Pi camera, I am running a script that writes an image to the file tmp_image.jpg with the picamera library:
for frame in cam.camera.capture_continuous(raw, format="rgb", use_video_port=True):
    img = frame.array
    cv2.imwrite("tmp_image.jpg", img)
The problem:
I am receiving the error:
GET http://192.168.1.234/images/tmp_img.jpg?d=1513855080653 net::ERR_CONTENT_LENGTH_MISMATCH
which then causes the refresh to stop (even though I have wrapped it in a try/catch).
I believe this is because NGINX is trying to read the file while it is being written to. So what should I do? And is this awfully inefficient?
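No answer was included here, but the diagnosis in the question points at a real race: the writer truncates and rewrites the file while NGINX is serving it, so the served byte count no longer matches the Content-Length header. The standard workaround is the write-to-temp-then-rename pattern (a sketch of that general technique, not code from this thread):

import os
import cv2

def write_frame_atomically(img, path="tmp_image.jpg"):
    # Write to a scratch file first (keeping a .jpg extension so OpenCV
    # still picks the JPEG encoder), then swap it into place.
    tmp_path = path + ".part.jpg"
    cv2.imwrite(tmp_path, img)
    os.replace(tmp_path, path)  # atomic on POSIX: readers see the old or the new file, never a partial one

In the capture loop above, the direct cv2.imwrite call would be replaced with write_frame_atomically(img).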
I have an iOS mobile application that sends an encoded image to a Python3 server.
static func prepareImageAndUpload(imageView: UIImageView) -> String? {
    if let image = imageView.image {
        // Create NSData from the image
        let imageData = UIImageJPEGRepresentation(image, 0.5)
        // Create a base64 string
        let base64String = imageData!.base64EncodedStringWithOptions([])
        // Percent-encode it to avoid any problems with special characters
        let encodeImg = base64String.stringByAddingPercentEncodingWithAllowedCharacters(.URLHostAllowedCharacterSet()) as String!
        return encodeImg
    }
    return nil
}
And I am trying to receive that image using the following code:
import base64

imageName = "imageToSave.jpg"
fh = open(imageName, "wb")
imgDataBytes = bytes(imgData, encoding="ascii")
imgDataBytesDecoded = base64.b64decode(imgDataBytes)
fh.write(imgDataBytesDecoded)
fh.close()
The image file is created successfully and nothing breaks. I can see that the file size is correct, but the image itself is not: it can't be opened and shows as broken.
I am not sure where the error can be, since the logic is as follows:
1. Encode the image with base64 on the iOS device
2. Send it
3. Decode the image with base64 on the Python3 server
4. Save the image from the decoded bytes
I have tried two new variants:
- Removing stringByAddingPercentEncodingWithAllowedCharacters; the result was the same
- Adding urldecode on the Python3 server; the result was the same
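No accepted answer is included here, but a frequent culprit with percent-encoded base64 is that any '+' in the payload arrives as a space after URL/form decoding. That corrupts the data while leaving its length unchanged, which matches the symptom above. A defensive decode on the server would look like this (my own sketch, assuming imgData arrives as a form field):

import base64

def decode_upload(img_data: str, out_path: str = "imageToSave.jpg") -> None:
    # Undo the '+' -> ' ' mangling introduced by URL/form decoding
    img_data = img_data.replace(" ", "+")
    with open(out_path, "wb") as fh:
        fh.write(base64.b64decode(img_data))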
I'd like to send an image (via URL or path) on request.
I use the source code here.
The code already has a sample for sending an image (via URL or path), but I don't get it, since I'm new to Python.
Here's the sample code snippet:
elif text == '/image':  # request
    img = Image.new('RGB', (512, 512))
    base = random.randint(0, 16777216)
    pixels = [base + i * j for i in range(512) for j in range(512)]  # generate a sample image
    img.putdata(pixels)
    output = StringIO.StringIO()
    img.save(output, 'JPEG')
    reply(img=output.getvalue())
Some API info can be found here.
Thanks for your patience.
To send a photo from URL:
bot.send_photo(chat_id=chat_id, photo='https://telegram.org/img/t_logo.png')
To send a photo from local Drive:
bot.send_photo(chat_id=chat_id, photo=open('tests/test.png', 'rb'))
Here is the reference documentation.
I was struggling to connect the python-telegram-bot examples with the above advice. In particular, while I had context and update, I couldn't find chat_id and bot. Now, my two cents:
pic = os.path.expanduser("~/a.png")
context.bot.send_photo(chat_id=update.effective_chat.id, photo=open(pic, 'rb'))
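Since the question's sample builds its image in memory, it is worth noting that send_photo also accepts a file-like object, so a generated image never has to touch disk (a sketch assuming python-telegram-bot and Pillow; the image itself is just a placeholder):

from io import BytesIO
from PIL import Image

def send_generated_photo(bot, chat_id):
    img = Image.new('RGB', (512, 512), color=(0, 128, 255))  # any generated image
    buf = BytesIO()
    img.save(buf, 'JPEG')
    buf.seek(0)  # rewind so the upload starts at the first byte
    bot.send_photo(chat_id=chat_id, photo=buf)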
I want to generate a dynamically created PNG image with Pycairo and serve it using Django. I read this: Serve a dynamically generated image with Django.
Is there a way to transport data from a Pycairo surface directly into the HTTP response? I'm doing this for now:
data = surface.to_rgba()
im = Image.frombuffer("RGBA", (width, height), data, "raw", "RGBA", 0, 1)
response = HttpResponse(content_type="image/png")
im.save(response, "PNG")
return response
But it actually doesn't work, because there is no to_rgba call (I found that call via Google Code Search, but it doesn't exist).
EDIT: to_rgba can be replaced by the correct call, get_data(), but I still want to know if I can bypass PIL altogether.
def someView(request):
    surface = cairo.ImageSurface(cairo.FORMAT_ARGB32, 100, 100)
    context = cairo.Context(surface)
    # Draw something ...
    response = HttpResponse(content_type="image/png")
    surface.write_to_png(response)
    return response
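This works because HttpResponse is file-like (it exposes a write() method) and Pycairo's write_to_png accepts any such object, so the PNG bytes stream straight into the response and PIL is bypassed entirely.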
You can try this:
http://www.stuartaxon.com/2010/02/03/using-cairo-to-generate-svg-in-django
It's about SVG, but I think it will be easy to adapt.