Display image without shape - PyQt, Python

I am trying to create a PyQt application which contains three windows:
Display the video stream from the camera in BGR format using QPixmap and OpenCV.
Display the masked image using QPixmap and OpenCV.
Display a screen grab using the PIL library, OpenCV and QPixmap.
The error I get while displaying the masked and screen-grab frames is given below.
screen_grab_height, screen_grab_width, screen_grab_channel = image_grab_frame.shape
ValueError: not enough values to unpack (expected 3, got 2)
When I checked frame.shape I found that it has only two values, i.e. image_height and image_width; it does not have an image_channel value.
I am attaching the code for both functions below.
Masked image
mask = cv2.inRange(hsv, lower_hue, upper_hue)
mask1 = cv2.bitwise_not(mask)
hsv_height, hsv_width, hsv_channel = hsv_image.shape
hsv_step = hsv_channel * hsv_width
mask_height, mask_width, mask_channel = mask_frame.shape
mask_step = mask_channel * mask_width
convertToQFormat = QImage(mask_frame.data, mask_frame.shape[1], mask_frame.shape[0], QImage.Format_RGB888)
pic = convertToQFormat.scaled(1280, 720, Qt.KeepAspectRatio)
self.normal_screen.setPixmap(QPixmap.fromImage(pic))
and
screen grab
screen_grab_height, screen_grab_width, screen_grab_channel = image_grab_frame.shape
screen_grab_step = screen_grab_channel * screen_grab_width
#Display image grab#
convertToQFormat = QImage(image_grab_frame.data, image_grab_frame.shape[1], image_grab_frame.shape[0], QImage.Format_RGB888)
image_grab_pic = convertToQFormat.scaled(1280, 720, Qt.KeepAspectRatio)
self.normal_screen.setPixmap(QPixmap.fromImage(image_grab_pic))
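For reference, a minimal sketch of how a single-channel frame could be displayed, assuming the 2-D shape comes from cv2.inRange returning a one-channel mask (variable names follow the question; QImage.Format_Grayscale8 requires Qt 5.5+):
# mask_frame is single-channel, so its shape is (height, width) only
mask_height, mask_width = mask_frame.shape
# Option 1: build a grayscale QImage directly (bytesPerLine == width for 8-bit data)
convertToQFormat = QImage(mask_frame.data, mask_width, mask_height, mask_width, QImage.Format_Grayscale8)
# Option 2: convert to 3 channels first, so the original 3-value unpacking works
mask_bgr = cv2.cvtColor(mask_frame, cv2.COLOR_GRAY2BGR)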

You have two options:
loss="binary_crossentropy" is correct, as you only have one class. Earlier, since the number of classes was unclear, I made a mistake there.
Change the model; the rest stays the same:
model.add(Dense(1))
model.add(Activation("sigmoid"))
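For completeness, a minimal sketch of compiling this single-output head with binary cross-entropy (the optimizer choice here is an assumption, not from the original answer):
# "adam" is an assumed optimizer choice; any other would do
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])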
DEFAULT_IMAGE_SIZE = (256, 256)
In this section
try:
    image = cv2.imread(image_dir)
    if image is not None:
        image = cv2.resize(image, DEFAULT_IMAGE_SIZE)
        return img_to_array(image)
    else:
        pass
If you have images that load as None, you have to filter them out first rather than just using a simple pass (otherwise the function implicitly returns None for them).
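As a rough sketch of that filtering (image_paths is a hypothetical list of file paths; cv2, DEFAULT_IMAGE_SIZE and img_to_array are as in the answer):
images = []
for p in image_paths:  # hypothetical list of image file paths
    image = cv2.imread(p)
    if image is None:  # unreadable or corrupt files come back as None
        continue
    image = cv2.resize(image, DEFAULT_IMAGE_SIZE)
    images.append(img_to_array(image))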
I am not sure why you are changing the mode to channels_first, but that would put the channels at index 0 of each image array. Note that in Keras, BatchNormalization's axis counts the batch dimension, so for channels_first the channel axis is 1 and it should be
model.add(BatchNormalization(axis=1))
In fact, just don't change K.image_data_format() and then you won't even need to pass the axis to BatchNormalization.
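A short sketch of both configurations, assuming standard Keras semantics where BatchNormalization's axis counts the batch dimension:
# Default channels_last: tensors are (batch, height, width, channels),
# so BatchNormalization's default axis=-1 already normalizes the channels.
model.add(BatchNormalization())
# With channels_first, tensors are (batch, channels, height, width),
# so the channel axis is 1.
model.add(BatchNormalization(axis=1))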

Related

OpenCV imshow() - ARGB32 to OpenCV image

I am trying to process images from Unity3D's WebCamTexture graphics format (ARGB32) using OpenCV in Python, but I am having trouble interpreting the image on the OpenCV side. The image is all blue (possibly due to the ARGB ordering).
try:
    while(True):
        data = sock.recv(480 * 640 * 4)
        if(len(data) == 480 * 640 * 4):
            image = numpy.fromstring(data, numpy.uint8).reshape(480, 640, 4)
            #imageNoAlpha = image[:,:,0:2]
            cv2.imshow('Image', image)  # further do image processing
            key = cv2.waitKey(1) & 0xFF
            if key == ord("q"):
                break
finally:
    sock.close()
The reason is the order of the channels. I think the sender read the image as an RGB image and you are showing it as a BGR image, or vice versa.
Changing the order of the R and B channels will solve the problem:
image = image[..., [0, 3, 2, 1]]  # swap indices 3 and 1, i.e. the B and R channels
You will meet this problem frequently if you work with PIL.Image and OpenCV: PIL.Image reads the image as RGB while cv2 reads it as BGR, which is why all the red points in your image become blue.
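As an illustrative sketch of that PIL/OpenCV round trip (the filename is hypothetical):
import cv2
import numpy as np
from PIL import Image

pil_img = Image.open("photo.jpg")                          # PIL reads RGB
bgr = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)   # correct colors for cv2.imshow
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)                 # and back again for PIL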
OpenCV uses BGR (BGRA when including alpha) ordering when working with color images [1][2]. This applies to images read/written with imread() and imwrite(), images acquired with VideoCapture, drawing functions such as ellipse() and rectangle(), and so on. This convention is self-consistent within the library: if you read an image with imread() and show it with imshow(), the correct colors will appear.
OpenCV is the only library I know of that uses this ordering; e.g. PIL and Matplotlib both use RGB. If you want to convert from one color space to another, use cvtColor(), for example:
# Convert RGB to BGR.
new_image = cvtColor(image, cv2.COLOR_RGB2BGR)
See the ColorConversionCodes enum for all supported conversion pairs. Unfortunately there is no ARGB-to-BGR code, but you can always manipulate the NumPy array manually anyway:
# Reverse channels ARGB to BGRA.
image_bgra = image[..., ::-1]
# Convert ARGB to BGR.
image_bgr = image[..., [3, 2, 1]]
There is also a mixChannels() function and a bunch of other array-manipulation utilities, but most of these are redundant in OpenCV Python since images are backed by NumPy arrays, so it's easier to just use the NumPy counterparts instead.
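For example, a small sketch of the same B/R swap done both ways (image_bgr is a hypothetical 3-channel array):
b, g, r = cv2.split(image_bgr)     # with OpenCV utilities...
image_rgb = cv2.merge([r, g, b])
image_rgb = image_bgr[..., ::-1]   # ...or with the equivalent NumPy indexing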
OpenCV uses BGR for seemingly historical reasons: Why OpenCV Using BGR Colour Space Instead of RGB.
References:
[1] OpenCV: Mat - The Basic Image Container (Search for 'BGR' under Storing methods.)
[2] OpenCV: How to scan images, lookup tables and time measurement with OpenCV
Image from [2] showing BGR layout in memory.
IMAGE_WIDTH = 640
IMAGE_HEIGHT = 480
IMAGE_SIZE = IMAGE_HEIGHT * IMAGE_WIDTH * 4

try:
    while(True):
        data = sock.recv(IMAGE_SIZE)
        dataLen = len(data)
        if(dataLen == IMAGE_SIZE):
            # numpy.frombuffer replaces the deprecated numpy.fromstring
            image = numpy.frombuffer(data, numpy.uint8).reshape(IMAGE_HEIGHT, IMAGE_WIDTH, 4)
            imageDisp = cv2.cvtColor(image, cv2.COLOR_RGBA2BGR)
            cv2.imshow('Image', imageDisp)
            key = cv2.waitKey(1) & 0xFF
            if key == ord("q"):
                break
finally:
    sock.close()
Edited as per the suggestions in the comments.

Why does resizing images end up changing the channels as well?

So I have this code in Python using TensorFlow, which opens an image and resizes it:
def parse_image(filename):
    label = filename
    image = tf.io.read_file(path + '/' + filename)
    print(image.shape)
    image = tf.image.decode_png(image)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize(image, [340, 340])
    print(image.shape)
    return image, label
When I open a test grayscale image using OpenCV:
img = cv2.imread(path+"/test.png")
print(img.shape) #returns (2133,3219,3)
But when I call the above function on the same image:
image, label = parse_image('test.png')
print(image.shape) #returns (340,340,1)
I know 340 x 340 is the width and height I just set, but why did the channels change? I'm trying to calculate the structural similarity, but the test image and the image I want to compare have different channel counts, which raises an error. The worst part is that it's this specific test image; other grayscale images work fine.
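One likely explanation: tf.image.decode_png keeps the PNG file's native channel count (1 for a grayscale PNG), while cv2.imread converts to 3 BGR channels by default. A minimal sketch of two ways to make the counts agree, assuming that is the cause:
# Force 3 channels on the TensorFlow side...
image = tf.image.decode_png(image, channels=3)
# ...or load single-channel on the OpenCV side:
img = cv2.imread(path + "/test.png", cv2.IMREAD_GRAYSCALE)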

Resizing JPG using PIL.resize gives a completely black image

I'm using PIL to resize a JPG. I'm expecting the same image, resized, as output, but instead I get a correctly sized black box. The new image file is completely devoid of any information, just an empty file. Here is an excerpt from my script:
basewidth = 300
img = Image.open(path_to_image)
wpercent = (basewidth/float(img.size[0]))
hsize = int((float(img.size[1])*float(wpercent)))
img = img.resize((basewidth,hsize))
img.save(dir + "/the_image.jpg")
I've tried resizing with Image.LANCZOS as the second argument (it defaults to Image.NEAREST with one argument), but it didn't make a difference. I'm running Python 3 on Ubuntu 16.04. Any ideas on why the image file is empty?
I also encountered this issue when trying to resize an image with a transparent background. The resize worked after I added a white background to the image.
Code to add a white background and then resize the image:
from PIL import Image

im = Image.open("path/to/img")
if im.mode == 'RGBA':
    alpha = im.split()[3]
    bgmask = alpha.point(lambda x: 255 - x)
    im = im.convert('RGB')
    im.paste((255, 255, 255), None, bgmask)
# Image.ANTIALIAS is called Image.LANCZOS in newer Pillow versions
im = im.resize((new_width, new_height), Image.ANTIALIAS)
ref:
Other's code for making thumbnail
Python: Image resizing: keep proportion - add white background
The simplest way to get to the bottom of this is to post your image! Failing that, we can check the various aspects of your image.
So, import NumPy and PIL, open your image, and convert it to a NumPy ndarray; you can then inspect its characteristics:
import numpy as np
from PIL import Image
# Open image
img = Image.open('unhappy.jpg')
# Convert to Numpy Array
n = np.array(img)
Now you can print and inspect the following things:
n.shape # we are expecting something like (1580, 1725, 3)
n.dtype # we expect dtype('uint8')
n.max() # if there's white in the image, we expect 255
n.min() # if there's black in the image, we expect 0
n.mean() # we expect some value between 50-200 for most images

How to get color image from point grey camera with Spinnaker in python?

I am trying to get color images (no difference between RGB or BGR in my case) from a Flea3 camera (model code "FL3-U3-32S2C-CS", which shows it is a color camera), but my code produces grayscale photos. What is wrong in the following code snippet? Any idea?
# Begin acquiring images
cam.BeginAcquisition()
# Retrieve next image and convert it
image_result = cam.GetNextImage()
img_converted = image_result.Convert(PySpin.PixelFormat_RGB8, PySpin.HQ_LINEAR)
# Convert the Image object to RGB array
width = image_result.GetWidth()
height = image_result.GetHeight()
rgb_array = img_converted.GetData()
rgb_array = rgb_array.reshape(height, width, 3)
I had the same issue but with a Blackfly S camera over USB. I had to use a specific format to get it to work. I also set the pixel format on the camera before acquisition.
cam.PixelFormat.SetValue(PySpin.PixelFormat_BGR8)
cam.BeginAcquisition()
image_result = cam.GetNextImage()
image_converted = image_result.Convert(PySpin.PixelFormat_BGR8)
# Convert the Image object to RGB array
width = image_result.GetWidth()
height = image_result.GetHeight()
rgb_array = image_converted.GetData()
rgb_array = rgb_array.reshape(height, width, 3)
The following shows how this could be done:
### Set Pixel Format to RGB8 ###
node_pixel_format = PySpin.CEnumerationPtr(nodemap.GetNode('PixelFormat'))
if not PySpin.IsAvailable(node_pixel_format) or not PySpin.IsWritable(node_pixel_format):
    print('Unable to set Pixel Format to RGB8 (enum retrieval). Aborting...')

node_pixel_format_RGB8 = node_pixel_format.GetEntryByName('RGB8')
if not PySpin.IsAvailable(node_pixel_format_RGB8) or not PySpin.IsReadable(node_pixel_format_RGB8):
    print('Unable to set Pixel Format to RGB8 (entry retrieval). Aborting...')

pixel_format_RGB8 = node_pixel_format_RGB8.GetValue()
node_pixel_format.SetIntValue(pixel_format_RGB8)

OpenCV resize fails on large image with "error: (-215) ssize.area() > 0 in function cv::resize"

I'm using OpenCV 3.0.0 and Python 3.4.3 to process a very large RGB image (107162, 79553, 3). When I try to resize it using the following code:
import cv2
image = cv2.resize(img, (0,0), fx=0.5, fy=0.5, interpolation=cv2.INTER_AREA)
I get this error message:
cv2.error: C:\opencv-3.0.0\source\modules\imgproc\src\imgwarp.cpp:3208: error: (-215) ssize.area() > 0 in function cv::resize
I'm certain there is image content in the array because I can save parts of it as small tiles in JPG format. When I try to resize just a small part of the image, there is no problem and I end up with a correctly resized image. (Taking a rather big chunk (50000,50000,3) still won't work, but it does work on a (10000,10000,3) chunk.)
What could cause this problem and how can I solve this?
So it turns out that the problem comes from one line in modules\imgproc\src\imgwarp.cpp:
CV_Assert( ssize.area() > 0 );
When the product of rows and columns of the image to be resized is larger than 2^31, ssize.area() results in a negative number. This appears to be a bug in OpenCV and will hopefully be fixed in a future release. A temporary fix is to build OpenCV with this line commented out. While not ideal, it works for me.
And I just recently found out that the above applies only to images whose width is larger than their height. For images whose height is larger than their width, it's the following line that causes the error:
CV_Assert( dsize.area() > 0 );
So this has to be commented out as well.
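If rebuilding OpenCV is not an option, here is a rough sketch of guarding against the overflow up front (Python ints do not overflow, so the check itself is safe):
h, w = img.shape[:2]
if h * w > 2**31 - 1:  # the area would overflow a signed 32-bit int inside cv::resize
    print("Image too large for a single cv2.resize; process it in tiles instead")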
Turns out for me this error was actually telling the truth: I was trying to resize a null image, usually the 'last' frame of a video file, so the assertion was valid.
Now I have an extra step before attempting the resize operation, which is to do the check myself:
def getSizedFrame(width, height):
    """Function to return an image with the size I want"""
    s, img = self.cam.read()
    # Only process valid image frames
    if s:
        img = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA)
    return s, img
Now I don't see the error.
Also pay attention to the dtype of your NumPy array; converting it using .astype('uint8') resolved the issue for me.
I know this is a very old thread, but I had the same problem, which was due to spaces in the image names.
e.g.
Image name: "hello o.jpg"
Weirdly, removing the spaces made the function work just fine.
Image name: "hello_o.jpg"
I am using OpenCV 3.4.3 on macOS and was getting the same error as above.
I changed my code from
frame = cv2.resize(frame, (0,0), fx=0.5, fy=0.5)
to
frame = cv2.resize(frame, None, fx=0.5, fy=0.5)
Now it's working fine for me.
This type of error can also occur simply because resize cannot read the image at all: the path to the image may be wrong. In my case I had left out a forward slash when providing the file location; after I added the slash, the problem was solved.
For me, the following work-around worked:
split the array up into smaller sub-arrays
resize the sub-arrays
merge the sub-arrays again
Here is the code:
def split_up_resize(arr, res):
    """
    Resize a large array by splitting it into two halves
    (a direct resize yields the error above).
    """
    # Compute destination resolutions for the sub-arrays
    # (integer division for Python 3)
    res_1 = (res[0], res[1] // 2)
    res_2 = (res[0], res[1] - res[1] // 2)
    # Get the sub-arrays (split along the row axis)
    arr_1 = arr[0 : len(arr) // 2]
    arr_2 = arr[len(arr) // 2 :]
    # Resize the sub-arrays
    arr_1 = cv2.resize(arr_1, res_1, interpolation=cv2.INTER_LINEAR)
    arr_2 = cv2.resize(arr_2, res_2, interpolation=cv2.INTER_LINEAR)
    # Initialize the resized array
    arr = np.zeros((res[1], res[0]))
    # Merge the resized sub-arrays
    arr[0 : len(arr) // 2] = arr_1
    arr[len(arr) // 2 :] = arr_2
    return arr
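Usage would then be a sketch like this (names hypothetical); note that the function assumes a 2-D, single-channel array, since the merged result is created with np.zeros((res[1], res[0])):
resized = split_up_resize(big_gray_array, (new_width, new_height))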
You can manually place a check in your code. Like this:
if result != []:
    for face in result:
        bounding_box = face['box']
        x, y, w, h = bounding_box[0], bounding_box[1], bounding_box[2], bounding_box[3]
        rect_face = cv2.rectangle(frame, (x, y), (x+w, y+h), (46, 204, 113), 2)
        face = rgb[y:y+h, x:x+w]
        # Check face size (exists or not)
        if face.shape[0] * face.shape[1] > 0:
            predicted_name, class_probability = face_recognition(face)
            print("Result: ", predicted_name, class_probability)
Turns out I had a .csv file at the end of the folder from which I was reading all the images. Once I deleted that file, it worked fine.
Make sure the folder contains only images and no other file types.
In my case I had made a wrong modification to the image. I was able to find the problem by checking the image shape:
print(img.shape)
In my case,
image = cv2.imread(filepath)
final_img = cv2.resize(image, size_img)
filepath was incorrect. cv2.imread didn't give any error in this case (it silently returns None for a bad path), but due to the wrong path cv2.resize was giving me the error.
I came across the same error message while trying to enlarge an image. Casting the image to uint8 did the trick for me, and I was able to resize the image to 30 times its original size. Here is an example as a reference for anyone else who has this issue.
scale_percent = 3000
width = int(img.shape[1] * scale_percent / 100)
height = int(img.shape[0] * scale_percent / 100)
dim = (width, height)
image = cv2.resize(img.astype('uint8'), dim, interpolation=cv2.INTER_AREA)
Same error message for me, but the issue was different:
the interpolation method 'INTER_AREA' was not compatible with int8!
cv2.resize(frame_rgb, tuple([None, None]))
gives a similar error. Notice the None values in the resizing tuple.
In my case there were some corrupt or unsupported images. What I simply did was check that the image is not None and then process it, as shown below.
img = cv2.imread(image_path)
if img is not None:
    img = cv2.resize(img, (150, 150))  # you can give your own desired image size
I was working with 3 files: the Python script, the image, and the trained model.
Everything worked once I moved these 3 files into their own folder instead of keeping them in the directory with the other Python scripts.
I had the same error. Resizing the images resolved the issue; however, I used online tools to resize them because resizing with Pillow did not solve my problem.
