I have an image that I load using cv2.imread(). This returns a NumPy array. However, I need to pass this into a third-party API that requires the data in IplImage format.
I've scoured everything I could and I've found instances of converting from IplImage to CvMat, and some references to converting in C++, but nothing on converting from a NumPy array to IplImage in Python. Is there a provided function that can do this conversion?
You can do it like this:
source = cv2.imread("image.jpg")  # source is a NumPy array
bitmap = cv.CreateImageHeader((source.shape[1], source.shape[0]), cv.IPL_DEPTH_8U, 3)
cv.SetData(bitmap, source.tostring(),
           source.dtype.itemsize * 3 * source.shape[1])
bitmap here is a cv2.cv.iplimage.
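The third argument to cv.SetData is the row step in bytes; for a contiguous 8-bit, 3-channel image it equals itemsize * 3 * width, which you can sanity-check against NumPy's own strides. A small sketch, with a placeholder array standing in for the loaded image:

```python
import numpy as np

# placeholder for the array cv2.imread() would return: 480 rows, 640 cols, 3 channels
source = np.zeros((480, 640, 3), dtype=np.uint8)

step = source.dtype.itemsize * 3 * source.shape[1]  # bytes per row
# for a contiguous array this matches NumPy's own row stride
assert step == source.strides[0]  # 640 * 3 = 1920 bytes
```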
There are two ways to get the encoded bytes. Either encode the image in memory:
img = cv2.imread(img_path)
img_buf = cv2.imencode('.jpg', img)[1].tobytes()
or just read the image file directly:
img_buf = open(img_path, 'rb').read()
I have a NumPy array img_array of shape (h, w, 3) for one image, which is the result of some function. I want to convert this NumPy array directly to grayscale.
Possible solution:
Save img_array as an image using cv2.imwrite(path), then read it back with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
However, I am looking for something like this :
def convert_array_to_grayscale_array(img_array):
    # do something...
    return grayscale_version
I have already tried cv2.imread(img_array, cv2.IMREAD_GRAYSCALE), but it throws an error saying img_array must be a file path.
I think saving a separate image would consume more disk space. Is there a better way to do this, with or without the OpenCV library?
scikit-image has color conversion functions: https://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_rgb_to_gray.html
from skimage.color import rgb2gray
grayscale = rgb2gray(img_array)
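If you'd rather not add a dependency, the same ITU-R 601 weighting that rgb2gray applies is a one-liner in plain NumPy. A sketch, assuming an RGB-ordered array; the tiny placeholder array is just to make the snippet self-contained:

```python
import numpy as np

def convert_array_to_grayscale_array(img_array):
    # ITU-R BT.601 luma weights, the same coefficients skimage's rgb2gray uses
    weights = np.array([0.2125, 0.7154, 0.0721])
    return img_array[..., :3] @ weights  # float array of shape (h, w)

img_array = np.zeros((4, 4, 3), dtype=np.uint8)  # placeholder image
gray = convert_array_to_grayscale_array(img_array)
# gray.shape == (4, 4)
```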
Through some HTTP requests I have been able to receive an image in binary form, as
b'\xff\xd8\xff\xe0\x00\...
and with:
with open('image.jpg', 'wb') as out_file:
    out_file.write(binary_content)
where binary_content is a bytes object containing the data received through the request, I saved the image to a file.
Afterwards I can read this image with OpenCV methods. But I wanted to do a direct pass from the binary string to an OpenCV Mat without any in-betweens; the cv2.imdecode method didn't work for me.
io.BytesIO and PIL worked well, so I'm closing this question.
If you want to stay in the SciPy ecosystem, the imageio library (the recommended replacement for the deprecated scipy.misc.imread) works well.
from imageio import imread
image_array = imread("image_path.jpg")
The code above gives you a uint8 array; if you want a float array, you can cast it easily:
from imageio import imread
image_array = imread("image_path.jpg").astype(float)
I'm trying to use a MATLAB function in Python using the MATLAB Python engine. The MATLAB function is for processing an image. Here is my code:
import matlab.engine
import os
from PIL import Image
img_rows, img_cols = 256, 256
img_channels = 1
path0 = r'D:\NEW PICS\non_stressed'
path2 = r'D:\empty2'
listing = os.listdir(path0)
num_samples = len(listing)
print(num_samples)
eng = matlab.engine.start_matlab()
for file in listing:
    im = Image.open(path0 + '\\' + file)
    img = im.resize((img_rows, img_cols))
    gray = img.convert('L')
    # gray: this is the image I want to pass from Python to MATLAB
    reg = eng.LBP(gray)
    reg.save(path2 + '\\' + file, "JPEG")
But it gives me this error:
TypeError: unsupported Python data type: PIL.Image.Image
Please help me with this. Thank you.
As described in the MATLAB documentation on how to Pass Data to MATLAB from Python, only a certain number of types are supported. This includes scalar data types, such as int, float and more, as well as (partly) dict. Further, list, set, and tuple are automatically converted to MATLAB cell arrays.
But array.array and, more generally, module.type objects are not supported. This includes PIL.Image.Image, as in your case. You will have to convert the image to a supported data type before passing it to MATLAB.
For arrays, MATLAB recommends using their special MATLAB Array type for Python. You can convert the PIL Image to, e.g., a uint8 MATLAB array with:
from PIL import Image
import matlab.engine
image = Image.new('RGB', (1024, 1280))
image_mat = matlab.uint8(list(image.getdata()))
image_mat.reshape((image.size[0], image.size[1], 3))
The final reshape command is required because PIL's getdata() function returns a flattened list of pixel values, so the image width and height are lost. Now, you can call any MATLAB function on the image_mat array.
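The flattening that makes the final reshape necessary is easy to see with PIL alone. A small sketch; the blank image stands in for your processed frame:

```python
from PIL import Image

image = Image.new('RGB', (4, 2))  # width 4, height 2
data = list(image.getdata())
# getdata() returns one flat sequence of width * height pixel tuples,
# so the 2-D layout (and therefore the image shape) is lost
# len(data) == 8, each entry an (R, G, B) tuple
```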
Usually I would create an image in OpenCV as:
from cv2 import imread
img = imread("/home/nick/myfile.jpg")
But I already have the contents of the file in another variable, so how do I create an OpenCV image from it directly? e.g.
fc = open("/home/nick/myfile.jpg", "rb").read()
img = something(fc)
What is something? Is there an OpenCV or NumPy function to do this?
cv2.imdecode() can do that in memory. And yes, it wants a NumPy array as input.
I have a char pointer to PNG data provided by a C library.
How do I create an image in Python from this data in memory?
The C function looks like this:
char *getImage(int *imgSize);
In Python I obtained the char* as follows:
imSize = c_int()
img = c_char_p(my_c_api.getImage(byref(imSize)))
The char* is returned into the img variable and the size of the image in bytes is returned in the imSize variable.
When executing the following Python script:
im = Image.frombuffer("RGBA", (400,400), img.value, 'raw', "RGBA", 0, 1)
I get a ValueError: buffer is not large enough error.
I suspect the problem is the img variable in the frombuffer call. What do I have to do with img to pass the image data correctly to frombuffer?
You'll need to put the data in a StringIO instance and have PIL parse it from that:
from cStringIO import StringIO
imgfile = StringIO(img.value)
im = Image.open(imgfile)
.frombuffer assumes raw pixel data, not PNG-encoded data. The StringIO object provides a file-like wrapper around your data so PIL can work with it.
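On Python 3, where cStringIO no longer exists, the same idea uses io.BytesIO. A sketch, with in-memory PNG bytes standing in for img.value (with ctypes you would usually extract them as ctypes.string_at(img, imSize.value) so the byte length is known):

```python
import io
from PIL import Image

# stand-in for the PNG bytes returned by the C library
src = Image.new("RGBA", (4, 4))
buf = io.BytesIO()
src.save(buf, format="PNG")
png_bytes = buf.getvalue()

# let PIL parse the encoded stream from a file-like object
im = Image.open(io.BytesIO(png_bytes))
# im.size == (4, 4); im.format == "PNG"
```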