Pass image from Python to a MATLAB function

I'm trying to use a MATLAB function in Python using the MATLAB Python engine. The MATLAB function is for processing an image. Here is my code:
import matlab.engine
import os
from PIL import Image

img_rows, img_cols = 256, 256
img_channels = 1
path0 = r'D:\NEW PICS\non_stressed'
path2 = r'D:\empty2'

listing = os.listdir(path0)
num_samples = len(listing)
print(num_samples)

eng = matlab.engine.start_matlab()
for file in listing:
    im = Image.open(path0 + '\\' + file)
    img = im.resize((img_rows, img_cols))
    gray = img.convert('L')
    # gray: This is the image I want to pass from Python to MATLAB
    reg = eng.LBP(gray)
    reg.save(path2 + '\\' + file, "JPEG")
But it gives me this error:
TypeError: unsupported Python data type: PIL.Image.Image
Please help me with this. Thank you.

As described in the MATLAB documentation on how to Pass Data to MATLAB from Python, only a certain number of types are supported. This includes scalar data types, such as int, float and more, as well as (partly) dict. Further, list, set, and tuple are automatically converted to MATLAB cell arrays.
But: array.array, and any module.type objects are not supported. This includes PIL.Image.Image, like in your case. You will have to convert the image to a supported datatype before passing it to MATLAB.
For arrays, MATLAB recommends using its special MATLAB array types for Python. You can convert the PIL image to, e.g., a uint8 MATLAB array with:
from PIL import Image
import matlab.engine
image = Image.new('RGB', (1024, 1280))
image_mat = matlab.uint8(list(image.getdata()))
image_mat.reshape((image.size[0], image.size[1], 3))
The final reshape command is required because PIL's getdata() function returns a flattened list of pixel values, so the image width and height are lost. Now, you can call any MATLAB function on the image_mat array.
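For intuition on why the reshape is needed: getdata() hands back the pixels in row-major order with the dimensions thrown away, so they have to be supplied again afterwards. A small NumPy-only sketch (a hypothetical 2x3 image, no MATLAB engine required) illustrates the round trip:

```python
import numpy as np

# Hypothetical 2x3 RGB image: shape (height, width, 3)
img = np.arange(2 * 3 * 3, dtype=np.uint8).reshape(2, 3, 3)

# Flattening mimics what getdata() does: the pixel values survive,
# but the width/height information is lost.
flat = img.reshape(-1)
print(flat.shape)  # (18,)

# Restoring the original dimensions recovers the image exactly.
restored = flat.reshape(2, 3, 3)
print(np.array_equal(img, restored))  # True
```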

Related

Convert image numpy array into grayscale array directly without saving image

I have a NumPy array img_array of dimension (h,w,3) of one image, which is the result of some function. I want to convert this NumPy array directly into grayscale.
Possible Solution:
Save the img_array as an image using cv2.imwrite(path, img_array). Then read it again with cv2.imread(path, cv2.IMREAD_GRAYSCALE).
However, I am looking for something like this:
def convert_array_to_grayscale_array(img_array):
    # do something...
    return grayscale_version
I have already tried cv2.imread(img_array, cv2.IMREAD_GRAYSCALE), but it throws an error saying img_array must be a file pathname.
I think saving a separate image will consume more disk space. Is there any better way to do that, with or without using an OpenCV library function?
scikit-image has color conversion functions: https://scikit-image.org/docs/dev/auto_examples/color_exposure/plot_rgb_to_gray.html
from skimage.color import rgb2gray
grayscale = rgb2gray(img_array)
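If you want to avoid the scikit-image dependency entirely, rgb2gray is just a weighted sum of the three channels (the BT.709 luminance weights scikit-image uses). A NumPy-only sketch; note it assumes img_array holds uint8 values in [0, 255] and, like rgb2gray, returns floats in [0, 1]:

```python
import numpy as np

def rgb_to_gray(img_array):
    """Weighted channel sum using BT.709 luminance weights.

    Assumes img_array has shape (h, w, 3) with uint8 values.
    Returns a (h, w) float array in [0, 1], matching rgb2gray's convention.
    """
    weights = np.array([0.2125, 0.7154, 0.0721])
    return (img_array / 255.0) @ weights

# Tiny demo: a 2x2 image with one white pixel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[0, 0] = [255, 255, 255]
gray = rgb_to_gray(img)
print(gray.shape)   # (2, 2)
print(gray[0, 0])   # ≈ 1.0
```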

Writing seemingly same array to two image files leads to different results

I'm trying to read and write TIFF images with the PIL library. While testing, I noticed that saving the seemingly same numpy array generated in different ways leads to different images on disk. Why is this, and how can I fix it?
For testing purposes, I created an image with GIMP (upscaled from 8x8) which I saved as TIF, read to a numpy array and wrote back to a tif file:
from PIL import Image
import numpy as np

img_gimp = Image.open('img_gimp.tif')
imgarray_gimp = np.array(img_gimp)
img_gimp = Image.fromarray(imgarray_gimp, mode='I;16')
img_gimp.save('final_gimp.tif')
The result is as expected, it is the same image as the original. So far so good.
Now I generated the same image directly in python code:
imgarray_direct = np.zeros(shape=(8, 8)).astype(int)
for i in range(2):
    for j in range(2):
        imgarray_direct[i][j] = 65535
Writing this array to disk...
img_direct = Image.fromarray(imgarray_direct, mode = 'I;16')
img_direct.save('final_direct.tif')
doesn't give me the expected result, instead I find this:
[image: result generated by the for loop (upscaled from 8x8)]
Doing
print(np.array_equal(imgarray_gimp, imgarray_direct))
gives True, and looking at print(imgarray_gimp) and print(imgarray_direct), one cannot see any difference.
Is this intended behaviour? If yes, what's the reason for it?
Thank you for your answers!
As @MarkSetchell hinted in the comments, the issue is that the dtype of your numpy array of raw data does not match the PIL image mode string you supply afterwards. Changing the parameter passed to astype, or simply passing the right dtype on array creation, fixed this issue for me. Here is what the modified code looks like:
import numpy as np
from PIL import Image

# Generate raw image data (16-bit!)
image_data = np.zeros(shape=(8, 8), dtype=np.uint16)  # this is the big change
for i in range(2):
    for j in range(2):
        image_data[i][j] = 65535

# Save image as TIF to disk
image_direct = Image.fromarray(image_data, mode='I;16')
image_direct.save('final_direct.tif')
As a side note, I am surprised that the mode string I;16 you have used is valid; I could not find any mention about it in pillow's documentation.
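The mismatch is easy to see by inspecting dtypes: astype(int) gives a platform-native integer (typically 8 bytes per value on 64-bit systems), while the 'I;16' mode tells PIL to read 2 bytes per pixel. This also explains why the asker's np.array_equal check reported True; it compares values, not dtypes. A quick sketch:

```python
import numpy as np

a = np.zeros(shape=(8, 8)).astype(int)       # platform int, typically int64
b = np.zeros(shape=(8, 8), dtype=np.uint16)  # 2 bytes per pixel
a[0, 0] = 65535
b[0, 0] = 65535

print(b.dtype.itemsize)            # 2

# array_equal compares element values, so the two arrays look identical...
print(np.array_equal(a, b))        # True

# ...but the raw bytes that PIL actually interprets are very different.
print(a.tobytes() == b.tobytes())  # False
```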

Converting OpenCV code snippet from C++ to Python

I'm trying to convert this code to python.
can anyone help me?
cv::Mat image;
while (image.empty())
{
    image = cv::imread("capture.jpg", 1);
}
cv::imwrite("result.jpg", image);
In Python, the cv::Mat of C++ becomes a numpy array, and hence image manipulation becomes as simple as accessing a multidimensional array. However, the method names are the same in both C++ and Python.
import cv2  # import the OpenCV module

img = cv2.imread("capture.jpg", 1)  # read the whole image in color
cv2.imwrite("result.jpg", img)  # write the contents of img to a new file
EDIT: If you want to write the contents as soon as the image file is generated, you can use os.path.isfile(), which returns a bool depending on whether a file is present at the given path.
import cv2
import os.path

while not os.path.isfile("capture.jpg"):
    pass  # do nothing until the file is present

img = cv2.imread("capture.jpg", 1)
cv2.imwrite("result.jpg", img)
You can also refer to docs for detailed implementation of each method and basic image operations.

Create PIL image from memory stream provided by C library

I have a char pointer to PNG data provided by a C library.
How do I create an image in Python from this data in memory?
The C function looks like this:
char *getImage(int *imgSize);
In Python I got the char* as follows:
imSize = c_int()
img = c_char_p()
img = c_char_p(my_c_api.getImage(byref(imSize)))
The char* is returned into the img variable and the size of the image in bytes is returned in the imSize variable.
When executing the following python script:
im = Image.frombuffer("RGBA", (400,400), img.value, 'raw', "RGBA", 0, 1)
I get a ValueError: buffer is not large enough error.
I suspect the img variable in the frombuffer call.
What do I have to do with the img variable, to pass the image data correctly to the frombuffer call?
You'll need to put the data in an in-memory file object and have PIL parse the PNG from that. On Python 3 that is io.BytesIO (on Python 2 it was cStringIO.StringIO):
from io import BytesIO

imgfile = BytesIO(img.value)
im = Image.open(imgfile)
.frombuffer assumes raw, uncompressed pixel data, not PNG-encoded data. The BytesIO object provides a file-like wrapper around your bytes, so PIL can parse the PNG container itself.
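One caveat worth flagging: ctypes' c_char_p.value stops at the first zero byte, and PNG data is full of them, which by itself can produce a truncated buffer. Since the C function also returns the length, it is safer to read exactly imgSize bytes with ctypes.string_at (imgSize and img are the names from the question; the sketch below fakes the C buffer so it runs without the library):

```python
import ctypes
from io import BytesIO

# Stand-in for the buffer returned by getImage(): binary data containing
# zero bytes, as real PNG data would (8-byte PNG signature, then NULs).
raw = b"\x89PNG\r\n\x1a\n\x00\x00\x00\rIHDR" + b"\x00" * 16
buf = ctypes.create_string_buffer(raw, len(raw))
ptr = ctypes.cast(buf, ctypes.c_char_p)

# c_char_p.value truncates at the first NUL byte...
print(len(ptr.value))         # 8, not len(raw)

# ...while string_at reads exactly the number of bytes you ask for,
# e.g. ctypes.string_at(img, imSize.value) in the question's code.
data = ctypes.string_at(buf, len(raw))
print(data == raw)            # True

imgfile = BytesIO(data)       # ready for Image.open(imgfile)
```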

OpenCV: Converting from NumPy to IplImage in Python

I have an image that I load using cv2.imread(). This returns a NumPy array. However, I need to pass this to a 3rd-party API that requires the data in IplImage format.
I've scoured everything I could, and I've found instances of converting from IplImage to CvMat, and some references to converting in C++, but nothing on NumPy to IplImage in Python. Is there a provided function that can do this conversion?
You can do it like this:
source = cv2.imread(img_path)  # source is a numpy array
bitmap = cv.CreateImageHeader((source.shape[1], source.shape[0]),
                              cv.IPL_DEPTH_8U, 3)
cv.SetData(bitmap, source.tostring(),
           source.dtype.itemsize * 3 * source.shape[1])
bitmap here is a cv2.cv.iplimage. Note that this relies on the legacy cv API, which was removed in OpenCV 3, so it only works with OpenCV 2.x.
Two ways to get the encoded image bytes:
Encode an image you already have in memory:
img = cv2.imread(img_path)
img_buf = cv2.imencode('.jpg', img)[1].tostring()
Or just read the image file directly:
img_buf = open(img_path, 'rb').read()
