I'm trying to send an OpenCV image in JSON and receive it on the other end, but I'm running into endless problems encoding and decoding the image.
I send it in JSON in the following way:
dumps({"image": b64encode(image[y1:y2, x1:x2]).decode('utf-8')})
On the other end I try to decode it (I need it as a Pillow image):
image = Image.open(BytesIO(base64.b64decode(data['image'])))
But I'm getting the exception: cannot identify image file <_io.BytesIO object at 0x7fbd34c98a98>
Also tried:
nparr = np.fromstring(b64decode(data['image']), np.uint8)
image = cv2.imdecode(nparr, cv2.COLOR_BGR2RGB)
pil_image = Image.fromarray(image)
But then I get 'NoneType' object has no attribute '__array_interface__' coming from Image.fromarray
Any ideas what I'm doing wrong?
Hopefully, this should get you started. I think that sending the unadorned bytes of the Numpy array, as you tried, probably won't work because the receiver will not know the width, height and number of channels in the image, so I used pickle, which stores that information.
#!/usr/bin/env python3
import cv2
import numpy as np
import base64
import json
import pickle
from PIL import Image
def im2json(im):
    """Convert a Numpy array to a JSON string"""
    imdata = pickle.dumps(im)
    jstr = json.dumps({"image": base64.b64encode(imdata).decode('ascii')})
    return jstr

def json2im(jstr):
    """Convert a JSON string back to a Numpy array"""
    load = json.loads(jstr)
    imdata = base64.b64decode(load['image'])
    im = pickle.loads(imdata)
    return im
# Create solid red image
red = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)
# Make image into JSON string
jstr = im2json(red)
# Extract image from JSON string, and convert from OpenCV to PIL reversing BGR to RGB on the way
OpenCVim = json2im(jstr)
PILimage = Image.fromarray(OpenCVim[...,::-1])
PILimage.show()
As you haven't answered my question in the comments about why you want to do things this way, this may not be optimal: sending uncompressed, base64-encoded images across a network (presumably) is not very efficient. You might consider JPEG- or PNG-encoded data to save network bandwidth, for example.
On Python 2 you could also use cPickle instead.
Note that some folks disapprove of pickle, and the method above also uses a lot of network bandwidth. An alternative might be to JPEG-compress the image before sending and decompress it on the receiving end straight into a PIL Image; note that JPEG is lossy.
Or change the .JPG extension in the code to .PNG, which is lossless but may be slower, and will not work for images with floating-point or 16-bit data (although the latter could be accommodated).
You could also look at TIFF, but again, it depends on the nature of your data, the network bandwidth, the flexibility you need, your CPU's encoding/decoding performance...
#!/usr/bin/env python3
import cv2
import numpy as np
import base64
import json
from io import BytesIO
from PIL import Image
def im2json(im):
    """JPEG-compress a Numpy array and wrap the result in a JSON string"""
    _, imdata = cv2.imencode('.JPG', im)
    jstr = json.dumps({"image": base64.b64encode(imdata).decode('ascii')})
    return jstr

def json2im(jstr):
    """Extract the JPEG data from a JSON string and decompress into a PIL Image"""
    load = json.loads(jstr)
    imdata = base64.b64decode(load['image'])
    im = Image.open(BytesIO(imdata))
    return im
# Create solid red image
red = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)
# Make image into JSON string
jstr = im2json(red)
# Extract image from JSON string into PIL Image
PILimage = json2im(jstr)
PILimage.show()
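For a feel of the bandwidth difference, you can compare the payload sizes of the two approaches; a rough sketch, assuming the pickle-based and JPEG-based im2json functions above are in scope under the hypothetical names im2json_pickle and im2json_jpeg:
import numpy as np
# Same solid red test image as above
red = np.full((480, 640, 3), [0, 0, 255], dtype=np.uint8)
print(len(im2json_pickle(red)))  # roughly 1.2 MB: raw pixels, pickled, then base64-encoded
print(len(im2json_jpeg(red)))    # a few KB: JPEG-compressed before the base64 step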
As the question says, I need a way to decode a base64 image into the pixel values of that image. I tried using OpenCV's imdecode, and that's pretty much all I found about this.
The reason I want to do this is that I am writing a backend API which receives a base64-encoded image from a web application (written in TypeScript, if that matters, from an object) and then, in Python, I want to parse its pixel values to do some computation, like finding the average colour of the image.
The problem is that the data probably isn't sent just as raw pixels; it's a strange format and I don't know how to decode it.
Here is an example base64 image:
iVBORw0KGgoAAAANSUhEUgAAABQAAAAUCAYAAACNiR0NAAAAh0lEQVQ4T+3TMQ6AIAwF0PZIbIxwcljZuFENJhiDrX4imzKT109bWESEFh7+wdfdvO0hM18KPM0QAhtyxu/Q6Sl32EKnwForOecohEApJbXfMLjsyeNglgzF2iUNV5/cUllJSinkvT9qjPemwSblnCnGuKMwiH4ZCOxYT2GtiFYUXhs08QfBDdNTgsUZ/lOUAAAAAElFTkSuQmCC==
It can be decoded in browser-based decoders, which show the image properly; I just can't decode it in Python so that I can iterate through the pixel values.
import base64
import numpy as np
import cv2
base64_string = "INSERT BASE64 HERE"
decoded = base64.b64decode(base64_string)
buffer = np.frombuffer(decoded, np.uint8)  # image bytes are unsigned 8-bit, not float32
image_array = cv2.imdecode(buffer, cv2.IMREAD_COLOR)
cv2.imshow("img", image_array)
cv2.waitKey(0)
The base64 string is decoded, then np.frombuffer wraps the decoded bytes in a NumPy array, and finally cv2.imdecode decompresses that array back into pixel values.
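From there, computing something like the average colour mentioned in the question is a one-liner on the decoded array; a small sketch continuing from image_array above:
# image_array has shape (height, width, 3), channels in BGR order
mean_bgr = image_array.mean(axis=(0, 1))
print("average colour (B, G, R):", mean_bgr)
# Or iterate over individual pixels if you need per-pixel access
for row in image_array:
    for b, g, r in row:
        pass  # b, g, r are the channel values of one pixel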
I'm trying to send a JSON dict that should contain a Pillow image as one of its fields; to do that, I have to convert the image to a string.
I tried to use the Pillow function:
image.tobytes()
but still got it as bytes, so I tried to encode it:
buff = BytesIO()
image.save(buff, format="JPEG")
img_str = base64.b64encode(buff.getvalue())
but still got it as bytes.
How can I convert Pillow images to format that can be saved in json file?
In the comments, Mark Setchell suggests calling .decode('ascii') on the result of your b64encode call. I agree that this will work, but I think base64-encoding to begin with introduces an unnecessary extra step that complicates your code.*
Instead, I suggest directly decoding the bytes returned by image.tobytes. The only complication is that the bytes object can contain values of 128 or more, so you can't decode it with ascii. Try using an encoding that can handle values up to 255, such as latin1.
from PIL import Image
import json
# Create a sample image. You don't have to do this in your real code.
img = Image.new("RGB", (10, 10), "red")

# Decode the raw bytes into a str that json can serialize.
s = img.tobytes().decode("latin1")

# Serialize.
with open("outputfile.json", "w") as file:
    json.dump(s, file)
(*But, to my surprise, the base64 approach still produces a smaller JSON file than the latin1 encoding does, at least for my sample file. Use your own judgement to determine whether file size or program clarity is more important.)
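For completeness, a sketch of the receiving side: the raw bytes carry no mode or size, so those have to be known out of band; here they are hard-coded to match the 10x10 RGB sample above.
import json
from PIL import Image
with open("outputfile.json") as file:
    s = json.load(file)
# Encode the latin1 string back to the original raw bytes, then rebuild
# the image; mode and size must match what was serialized
img = Image.frombytes("RGB", (10, 10), s.encode("latin1"))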
I use the following to exchange Pillow images via json.
import json
from PIL import Image
import numpy as np
filename = "filename.jpeg"
image = Image.open(filename)
json_data = json.dumps(np.array(image).tolist())
new_image = Image.fromarray(np.array(json.loads(json_data), dtype='uint8'))
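Be aware that a JSON list of pixel values is verbose; if payload size matters, a base64-encoded PNG of the same image is usually far smaller. A quick sketch to compare, reusing image and json_data from above:
import base64
from io import BytesIO
buff = BytesIO()
image.save(buff, format="PNG")
png_json = json.dumps({"image": base64.b64encode(buff.getvalue()).decode("ascii")})
print(len(json_data), len(png_json))  # the list-of-lists payload is typically much larger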
How can I get the same effect as:
from PIL import Image
with Image.open(image_path) as image:
    image.thumbnail((200, 200), Image.ANTIALIAS)
    image.save(temporary_thumbnail_path)

with open(temporary_thumbnail_path, "rb") as thumbnail_file:
    thumbnail_as_string = base64.b64encode(thumbnail_file.read()).decode()
without having to write to disk?
i.e. I would like to get the bytes representation of the compressed image, but without having to resort to temporary_thumbnail_path.
I know that the PIL documentation recommends using
save(), with a BytesIO parameter for in-memory data.
but I am not sure I understand what this means and haven't found examples online.
It was not so hard:
import base64
import io
from PIL import Image

output = io.BytesIO()
with Image.open(image_path) as image:
    image.thumbnail((400, 400), Image.LANCZOS)  # Image.ANTIALIAS was removed in Pillow 10
    image.save(output, format="JPEG")
thumbnail_as_string = base64.b64encode(output.getvalue()).decode()
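To go back the other way, for example to check the round trip, the same BytesIO trick works in reverse; a minimal sketch reusing thumbnail_as_string from above:
thumbnail = Image.open(io.BytesIO(base64.b64decode(thumbnail_as_string)))
thumbnail.show()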
I've been handed a list of files from the backend of an application that are supposed to be JPEG files. However, for the life of me, I haven't been able to convert them into PIL Image objects. When I call
str(curimg)
I get back <type 'str'>. I have tried using open(), .read, io.BytesIO(img.read()) and also doing nothing to it, but it keeps seeing it as a string. When I print the string, I get unrecognizable characters. Does anyone know how to tell Python to interpret this string as a JPEG and convert it into a PIL Image on which I can call .size and np.array?
from PIL import Image
import io
Image.open(io.BytesIO(image))
Note:
If the image is on the web, you need to download it first.
import requests
image = requests.get(image_url).content #download image from web
And then pass it to io module.
io.BytesIO(image)
If the image is on your HD, you can open it directly with PIL:
Image.open('image_file.jpg') #image in your HD
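Putting those two pieces together (the URL is a placeholder):
import io
import requests
from PIL import Image

image = requests.get("https://example.com/picture.jpg").content  # placeholder URL
img = Image.open(io.BytesIO(image))
print(img.size, img.mode)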
You should be able to pass a StringIO object to PIL and open it that way, i.e.:
from PIL import Image
import StringIO
tempBuff = StringIO.StringIO()
tempBuff.write(curimg)
tempBuff.seek(0) #need to jump back to the beginning before handing it off to PIL
Image.open(tempBuff)
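Note that StringIO.StringIO exists only on Python 2; on Python 3 the equivalent is io.BytesIO, since image data is bytes:
import io
from PIL import Image

tempBuff = io.BytesIO()
tempBuff.write(curimg)  # on Python 3, curimg must be a bytes object
tempBuff.seek(0)        # rewind before handing it off to PIL
img = Image.open(tempBuff)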
For me, none of the solutions above worked.
I finally managed to read the string properly like this:
from PIL import Image
img = Image.frombytes('RGB', (640, 480), img_str, 'raw')
To test it, you can do something like
image = Image.open("some.png")
print(image.mode, image.size) # OUT: 'RGB' (640, 480)
image = Image.frombytes('RGB', (640, 480), image.tobytes(), 'raw')
image.show()
@CEO (per this comment): I don't know what role SQL plays here, and I'm not exactly sure what you're trying to achieve, but I recall I had similar issues, and this is what works for my case; hope it helps.
frame = self._rawNode.display_frame.copy()
width = int(self.customLayout.geometry().width())
height = int(frame.shape[0] * (width / frame.shape[1]))
display_frame = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
qImg = QtGui.QImage(display_frame.data, width, height, 3 * width, QtGui.QImage.Format_RGB888)
self.pixmap = QtGui.QPixmap(qImg)
self.Imagelabel.setPixmap(self.pixmap)
I have a char pointer to PNG data provided by a C library.
How do I create an image in Python from this data in memory?
The C function looks like this:
char *getImage(int *imgSize);
In Python I got the char* as follows:
imSize = c_int()
img = c_char_p()
img = c_char_p(my_c_api.getImage(byref(imSize)))
The char* is returned into the img variable, and the size of the image in bytes is returned in the imSize variable.
When I execute the following Python script:
im = Image.frombuffer("RGBA", (400,400), img.value, 'raw', "RGBA", 0, 1)
I get a ValueError: buffer is not large enough error.
I suspect the img variable in the frombuffer call.
What do I have to do with the img variable, to pass the image data correctly to the frombuffer call?
You'll need to put the data in a StringIO instance and have PIL parse the data from that.
from cStringIO import StringIO
imgfile = StringIO(img.value)
im = Image.open(imgfile)
.frombuffer assumes raw image data, not PNG-encoded data. The StringIO object provides a file-like object to wrap your data in, so PIL can work with it.
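One more thing to check with ctypes: c_char_p.value stops at the first NUL byte, and PNG data is full of NULs, so img.value may hand PIL a truncated buffer, which would also explain a "buffer is not large enough" error. A sketch that copies exactly imSize bytes instead, assuming the same getImage signature (use io.BytesIO instead of cStringIO on Python 3):
import ctypes
from ctypes import POINTER, byref, c_char, c_int
from cStringIO import StringIO  # io.BytesIO on Python 3
from PIL import Image

imSize = c_int()
my_c_api.getImage.restype = POINTER(c_char)     # avoids c_char_p's NUL truncation
img_ptr = my_c_api.getImage(byref(imSize))
data = ctypes.string_at(img_ptr, imSize.value)  # copy exactly imSize bytes

im = Image.open(StringIO(data))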