Load OpenCV image from binary string - python

What I mean by a binary string is the raw content of an image file (that's what wand.image.make_blob() returns).
Is there a way to load it in OpenCV?
Edit:
cv2.imdecode() doesn't work
img = cv2.imdecode( buf=wand_img.make_blob(), flags=cv2.IMREAD_UNCHANGED)
TypeError: buf is not a numpy array, neither a scalar

Have you tried cv2.imdecode which takes an image buffer and turns it into a CvMat object? Though I am not sure about this one.
See : http://docs.opencv.org/3.0-beta/modules/imgcodecs/doc/reading_and_writing_images.html
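A rough sketch, assuming the blob is a complete encoded image (PNG/JPEG bytes rather than raw pixels): wrap the bytes in a numpy array before handing them to cv2.imdecode.
import numpy as np
import cv2
# blob holds the encoded image bytes, e.g. wand_img.make_blob()
buf = np.frombuffer(blob, dtype=np.uint8)      # 1-D uint8 view of the bytes
img = cv2.imdecode(buf, cv2.IMREAD_UNCHANGED)  # decoded image array, or None if decoding fails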

Related

Why does cv2.imread output a matrix of zeros for a 32-bit image even when using cv.IMREAD_ANYDEPTH?

I'm using OpenCV version 4.1.1 in Python and cannot get a legitimate reading for a 32-bit image, even when I use cv.IMREAD_ANYDEPTH. Without cv.IMREAD_ANYDEPTH, it returns as None type; with it, I get a matrix of zeros. The issue persists after reinstalling OpenCV. os.path.isfile returns True. The error was replicated on another computer. The images open in ImageJ, so I wouldn't think they're corrupted. I would rather use Skimage since it reads the images just fine, but I have to use OpenCV for what I'm working on. Any advice is appreciated.
img = cv2.imread(file,cv2.IMREAD_ANYDEPTH)
Link for the image: https://drive.google.com/file/d/1IiHbemsmn2gLW12RG3i9fLYZQW2u8sQw/view?usp=sharing
It appears to be some bug in how OpenCV loads such TIFF images. Pillow seems to load the image in a sensible way. Running
from PIL import Image
import numpy as np
img_pil = Image.open('example_image.tiff')
img_pil_cv = np.array(img_pil)
print(img_pil_cv.dtype)
print(img_pil_cv.max())
I get
int32
40950
as an output, which looks reasonable enough.
When I do
import cv2
img_cv = cv2.imread('example_image.tiff', cv2.IMREAD_ANYDEPTH)
print(img_cv.dtype)
print(img_cv.max())
I get
float32
5.73832e-41
which is obviously wrong.
Nevertheless, the byte array holding the pixel data is correct; it's just not being interpreted correctly. You can use numpy.ndarray.view to reinterpret the datatype of a numpy array, so that it's treated as an array of 32-bit integers instead.
img_cv = cv2.imread('example_image.tiff', cv2.IMREAD_ANYDEPTH)
img_cv = img_cv.view(np.int32)
print(img_cv.dtype)
print(img_cv.max())
Which prints out
int32
40950
Since the maximum value is small enough to fit in a 16-bit integer, let's convert the array and see what it looks like:
img_cv_16bit = img_cv.astype(np.uint16)
cv2.imwrite('output_cv_16bit.png', img_cv_16bit)
OK, there are some bright spots, and a barely visible pattern. With a little adjustment, we can get something visible:
img_cv_8bit = np.clip(img_cv_16bit // 16, 0, 255).astype(np.uint8)
cv2.imwrite('output_cv_8bit.png', img_cv_8bit)
That looks quite reasonable now.

How can I import an image file into Python, read it as an array, then output the array as the same image file type

I am tasked with writing a program that can take an image file as input, encrypt the image with some secondary code that I have already written, and finally output the encrypted image file.
I would like to import an image, make it a 1D array of numbers, perform some encryption on this 1D array (which will turn it into a 2D array that I will flatten), and then output the encrypted 1D array as an image file, converting it back to whatever format it was in on input.
I am wondering how this can be done, what types of image files can be accepted, and what libraries may be required.
Thanks
EDIT:
This is some code I have used; img_arr stores the image in an array of integers, max 255. This is what I want, however I now need to convert back into the original format after I have performed some functions on img_arr.
from PIL import Image
img = Image.open('testimage.jfif')
print('img first: ',img)
img = img.tobytes()
img_arr=[]
for x in img:
    img_arr.append(x)
img2=Image.frombytes('RGB',(460,134),img)
print('img second: ',img2)
my outputs are slightly different
img first: <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=460x134 at 0x133C2D6F970>
img second: <PIL.Image.Image image mode=RGB size=460x134 at 0x133C2D49EE0>
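If the goal is array in, array out in the original format, here is a minimal sketch with Pillow and numpy, assuming an RGB image and the placeholder file name 'testimage.jfif':
import numpy as np
from PIL import Image

img = Image.open('testimage.jfif')
original_format = img.format                    # e.g. 'JPEG'; remembered so we can save in the same format
arr = np.array(img)                             # H x W x 3 uint8 array
flat = arr.flatten()                            # 1-D array to run the encryption over
# ... apply your encryption to flat here ...
restored = flat.reshape(arr.shape).astype(np.uint8)
Image.fromarray(restored).save('encrypted.jfif', format=original_format)
Note that a lossy format like JPEG will alter the stored pixel values, so if the encrypted data must survive the round trip exactly, a lossless format such as PNG is the safer target.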
In programming, Base64 is a group of binary-to-text encoding schemes that represent binary data (more specifically, a sequence of 8-bit bytes) in an ASCII string format by translating the data into a radix-64 representation.
Fortunately, you can encode and decode image binary files in Python using base64. The following link may help:
Encoding an image file with base64
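A minimal sketch of that idea, with placeholder file names:
import base64

with open('testimage.jfif', 'rb') as f:
    encoded = base64.b64encode(f.read())        # raw bytes -> ASCII-safe base64 bytes

with open('decoded_copy.jfif', 'wb') as f:
    f.write(base64.b64decode(encoded))          # base64 bytes -> original binary content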

OpenCV TypeError: Expected cv::UMat for argument 'src' - What is this?

Disclaimer: huge openCV noob
Traceback (most recent call last):
File "lanes2.py", line 22, in
canny = canny(lane_image)
File "lanes2.py", line 5, in canny
gray = cv2.cvtColor(imgUMat, cv2.COLOR_RGB2GRAY)
TypeError: Expected cv::UMat for argument 'src'
What exactly is 'src' referring to?
src is the first argument to cv2.cvtColor.
The error you are getting is because src is not in a form OpenCV accepts directly. Converting the array with np.float32() or wrapping it in cv2.UMat() both work, so your last line of code could read either:
gray = cv2.cvtColor(np.float32(imgUMat), cv2.COLOR_RGB2GRAY)
gray = cv2.cvtColor(cv2.UMat(imgUMat), cv2.COLOR_RGB2GRAY)
UMat is part of the Transparent API (TAPI), which helps you write a single code path for both CPU and OpenCL implementations.
The following can be used from numpy:
import numpy as np
image = np.array(image)
Your code is not the problem; this line is perfectly fine:
gray = cv2.cvtColor(imgUMat, cv2.COLOR_RGB2GRAY)
The problem is that imgUMat is None so you probably made a mistake when loading your image:
imgUMat = cv2.imread("your_image.jpg")
I suspect you just entered the wrong image path.
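A quick sanity check for that case could look like this (the path is just a placeholder):
import cv2

imgUMat = cv2.imread("your_image.jpg")          # returns None if the path is wrong or the file cannot be decoded
if imgUMat is None:
    raise FileNotFoundError("cv2.imread could not load your_image.jpg")
gray = cv2.cvtColor(imgUMat, cv2.COLOR_RGB2GRAY)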
Just add this at the start:
image = cv2.imread(image)
Convert your image matrix to a contiguous array using np.ascontiguousarray as below:
gray = cv2.cvtColor(np.ascontiguousarray(imgUMat), cv2.COLOR_RGB2GRAY)
Is canny your own function? Do you use Canny from OpenCV inside it? If yes, check whether you feed a suitable argument to Canny; its first argument should meet the following criteria:
type: <type 'numpy.ndarray'>
dtype: dtype('uint8')
being single channel, or simply grayscale, that is a 2D array, i.e. its shape should be a 2-tuple of ints (a tuple containing exactly 2 integers)
You can check these by printing, respectively:
type(variable_name)
variable_name.dtype
variable_name.shape
Replace variable_name with the name of the variable you feed as the first argument to Canny.
This is a general error, which is sometimes thrown when there is a mismatch between the types of the data you use. E.g. I tried to resize an image with OpenCV and it gave the same error. Here is a discussion about it.
Some dtypes are not supported by specific OpenCV functions. For example, inputs of dtype np.uint32 create this error. Try to convert the input to a supported dtype (e.g. np.int32 or np.float32).
That is referring to the expected dtype of your image.
image.astype('float32') should solve your issue.
Sometimes I have this error when the video stream from the imutils package doesn't recognize a frame or gives an empty frame. In that case, the solution is to figure out why you are getting such a bad frame, or to use the standard VideoCapture(0) method from opencv2.
If using ImageGrab
Verify that your image is not a 0x0 area due to an incorrect bbox.
Verify the application root folder is the same as the file you are attempting to run.
I got round this by writing/reading to a file. I guessed cv.imread would put it into the format it needed. This code is from an Anki Vector SDK program, but you get the idea.
tmpImage = robot.camera.latest_image.raw_image.save('temp.png')
pilImage = cv.imread('temp.png')
If you are using a bytes object instead of reading from a file, you can convert your image to a numpy array like this (this uses io, numpy and PIL.Image):
image = numpy.array(Image.open(io.BytesIO(image_bytes)))

Reading a PyCBitmap with OpenCV

I created an image from a window screenshot using Win32gui. The object has the type:
object 'PyCBitmap' - assoc is 000002AF9A64DB50, vi=<None>
I want to then pass this for analysis with OpenCV. I have had success reading in a saved .bmp file using:
cv2.imread(img_file, 0)
When trying to use cv2.imread on a PyCBitmap object I get the following error:
TypeError: bad argument type for built-in operation
My question is:
How can I convert the PyCBitmap object into an acceptable type for cv2.imread, without having to save the object as a .bmp file first?
Thanks in advance,
Behzad
P.S. I'm using OpenCV 3.1 with Python bindings; I'm happy to follow advice written in C++ or Python :)
I've been looking for the same thing, and I finally found it by combining several other SO answers:
PIL and Bitmap from WinAPI
https://stackoverflow.com/a/14140796/343381
Basically, the code I came up with is:
import numpy, cv2
from PIL import Image
bmpinfo = dataBitMap.GetInfo()
bmparray = numpy.asarray(dataBitMap.GetBitmapBits(), dtype=numpy.uint8)
pil_im = Image.frombuffer('RGB', (bmpinfo['bmWidth'], bmpinfo['bmHeight']), bmparray, 'raw', 'BGRX', 0, 1)
pil_array = numpy.array(pil_im)
cv_im = cv2.cvtColor(pil_array, cv2.COLOR_RGB2BGR)
Brief explanation: Python OpenCV just uses numpy arrays, so the trick is really getting the bytes into the right numpy array format. As it turns out, for this you need an image processing library like PIL that can handle the image-specific logic like cutting out the alpha channel. The input data is generally RGBX format, PIL converts to RGB, and OpenCV converts that to BGR which it likes.
I profiled this and unfortunately it is dramatically slow; most of the time is spent in GetBitmapBits() and in converting its tuple result to an array.

Converting uint8 array to image

I am trying to convert a uint8 array of a 48x48 image to its corresponding 48x48 image file (jpg/jpeg/gif). I tried converting the array contents to binary first and then writing it ('wb' mode) to a file, but that did not work out.
Is there a way I can accomplish this?
If you are producing the image in TensorFlow (as I'm inferring from your tag), you can use the tf.image.encode_jpeg() or tf.image.encode_png() ops to encode a uint8 tensor as an image:
uint8_data = ...
image_data = tf.image.encode_png(uint8_data)
The result of either op is a tf.string tensor that you can evaluate and write out to a file.
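A minimal sketch of that last step, assuming TensorFlow 2.x eager execution and placeholder data:
import tensorflow as tf

# stand-in for your 48x48 single-channel uint8 data
uint8_data = tf.zeros([48, 48, 1], dtype=tf.uint8)
image_data = tf.image.encode_png(uint8_data)   # serialized PNG bytes as a string tensor
tf.io.write_file('output.png', image_data)     # write the encoded bytes to disk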
I was able to do the same very easily using Octave.
Here I have generated a random 48x48 matrix and then saved the image in jpg format.
img = rand(48,48);
imwrite(img, "test.jpg")
You can save any type of image with this approach.
It would help if you could give some more details about what you want to achieve. Do you need to do it just once, or do you need it as part of a program?
Hope that helped.
