Resizing a picture in OpenCV? - Python

I am writing a face detection program in OpenCV, and this is the code that produces the error:
result = img[rects[0]:rects[1], rects[2]:rects[3]]
result = cv2.resize(result, (100,100))
img is the original picture; the first step crops out the region of interest into result, and the second step resizes it to 100x100 pixels.
The error is:
result = cv2.resize(result, (100,100))
error: ..\..\..\src\opencv\modules\imgproc\src\imgwarp.cpp:1725: error: (-215) ssize.area() > 0
Hope someone can help me. Thanks a lot.

I had the same error in Python and found that the image was empty, so check whether imread worked, either by calling imshow on the result or by checking it for None, before using cv2.resize.


imshow() function is not giving output as expected in python

import cv2
img=cv2.imread('test.jpg')
cv2.imshow("frame1",img)
cv2.waitKey(0)
(input and output screenshots omitted)
Above is my code, and it is not giving the expected result (the complete image); only roughly 10% of the image appears in the window.
My input image is 1.24 MB. Are there any size limitations in OpenCV?
Thanks in advance for any help.
The problem is arising from cv2.imshow() function. As you are using cv2 from python, the command to use would be cv2.namedWindow('image',cv2.WINDOW_NORMAL) before cv2.imshow(). This should solve your problem. I tried your code the following way and it worked for me.
import cv2
img = cv2.imread('input1.jpg')
cv2.namedWindow('image', cv2.WINDOW_NORMAL)
# cv2.resizeWindow('image', 600, 600)
cv2.imshow("image", img)
k = cv2.waitKey(0)
if k == 27:  # Esc key
    cv2.imwrite('newImage2.png', img)
cv2.destroyAllWindows()
See if this can solve your problem.
You can use the namedWindow() function to control how the window fits the image.
In the cv2 API the call is:
cv2.namedWindow(name, flags=cv2.WINDOW_AUTOSIZE)
Pass cv2.WINDOW_NORMAL instead of the default cv2.WINDOW_AUTOSIZE to get a window you can resize to fit the image.

OpenCV Error: (-215:Assertion failed) VScn::contains(scn) && VDcn::contains(dcn) && VDepth::contains(depth) in function 'CvtHelper'

Traceback (most recent call last):
  File "demo.py", line 132, in
    result = find_strawberry(image)
  File "demo.py", line 63, in find_strawberry
    image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
cv2.error: OpenCV(3.4.2) /Users/travis/build/skvark/opencv-python/opencv/modules/imgproc/src/color.hpp:253: error: (-215:Assertion failed) VScn::contains(scn) && VDcn::contains(dcn) && VDepth::contains(depth) in function 'CvtHelper'
I personally have spent a lot of time on this question, hence I thought it relevant to post on Stack Overflow.
Question taken from: llSourcell/Object_Detection_demo_LIVE
I had the same problem, and the solution was quite easy. Remember one thing: if the RGB values of your image lie in the range 0-255, make sure they are not stored as type 'float'. OpenCV treats float pixel values as lying in the range 0-1, and clips any float value larger than 1, which is what produces errors like this. So convert the data type to uint8 if the values are in 0-255:
image = image.astype('uint8')
Check this Kaggle Kernel to learn more about it
Just in case anyone still gets the same error even after applying the fix above: check the depth of your image, i.e. whether it is grayscale or colored. cv2.COLOR_BGR2GRAY cannot convert an image that is already grayscale, and throws this same error.
Well, I was doing the Epipolar Geometry tutorial (find the link below) and I had this issue. I solved the error with one of these two methods:
First method - keeping the original colors:
A. Load the image in its original colors (BGR in OpenCV) by deleting the zero flag from cv2.imread:
img1 = cv2.imread('image.jpg')
B. You might need to edit how you unpack the shape, since the image now has three channels:
r, c, _ = img1.shape
C. Comment out the conversion:
# img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
Second method - loading as a grayscale image:
A. Load the image as grayscale by adding the zero flag to cv2.imread:
img1 = cv2.imread('image.jpg', 0)
B. You might need to edit how you unpack the shape, since the image now has a single channel:
r, c = img1.shape
C. Now you can convert the grayscale image back to BGR whenever you need three channels:
img1 = cv2.cvtColor(img1, cv2.COLOR_GRAY2BGR)
If the two methods do not work for you, check the links below; they might answer your question:
https://github.com/aleju/imgaug/issues/157
https://github.com/llSourcell/Object_Detection_demo_LIVE/issues/6
Epipolar Geometry
https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_calib3d/py_epipolar_geometry/py_epipolar_geometry.html
I got the same error while using a trackbar in OpenCV, but this method resolved it:
img = np.full((512,512,3), 12, np.uint8)
where img is your image

(scikit-image) HOG visualization image appears black when saved

I am new to computer vision and image processing and am using this code
from skimage.feature import hog
hog_list, hog_img = hog(test_img_gray,
                        orientations=8,
                        pixels_per_cell=(16, 16),
                        cells_per_block=(1, 1),
                        block_norm='L1',
                        visualise=True,
                        feature_vector=True)
plt.figure(figsize=(15,10))
plt.imshow(hog_img)
to get this HOG visualization image
I have 2 questions at this point:
When I try to save this image (as a .pdf or .jpg) the resulting image is pure black. Converting this image to PIL format and examining it with
hog_img_pil = Image.fromarray(hog_img)
hog_img_pil.show()
still shows the image as pure black. Why is this happening and how can I fix it?
When I try to run this code
hog_img = cv2.cvtColor(hog_img, cv2.COLOR_BGR2GRAY)
to convert the image to grayscale I get the error error: (-215) depth == CV_8U || depth == CV_16U || depth == CV_32F in function cvtColor. What do I need to do to get this image in grayscale and why would this be happening?
As additional information, running hog_img.shape returns (1632, 1224), which is just the size of the image. I had initially interpreted this to mean that the image is already in grayscale (since it appears to lack a dimension for a color channel). However, when I then tried to run
test_img_bw = cv2.adaptiveThreshold(
    src=hog_img,
    maxValue=255,
    adaptiveMethod=cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
    thresholdType=cv2.THRESH_BINARY,
    blockSize=115, C=4)
I got the error error: (-215) src.type() == CV_8UC1 in function adaptiveThreshold which this answer seems to indicate means that the image is not in grayscale.
Finally, another bit of useful information is that running print(hog_img.dtype) on the image returns float64.
I will continue to debug; in the meantime, thanks for any thoughts :)
Inverting the image with hog_img_inv = cv2.bitwise_not(hog_img) and using
plt.figure(figsize=(15,10))
plt.imshow(hog_img_inv)
showed that the lines were in fact there, but very faint (I've included the image here for completeness, but you can barely see it (but trust me, it's there)). I will have to do some more processing of the image to make the lines more distinguishable.
Running print(hog_img.dtype) showed that the dtype was float64 when (I think) it should have been uint8. I fixed this by running hog_img_uint8 = hog_img.astype(np.uint8), which seems to have fixed the problem with passing the image to other algorithms (e.g. cv2.adaptiveThreshold).
I had the same problem. But if you look inside the docs, they also use this code for better visualisation:
# Rescale histogram for better display
hog_image_rescaled = exposure.rescale_intensity(hog_image, in_range=(0, 0.02))
But I still have the same problem: visualisation with matplotlib is no problem, yet saving the image with OpenCV (or skimage) saves only a black image...
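The two fixes combine naturally: rescale the faint intensities first, then convert to uint8, so the saved file is not near-black. A minimal NumPy-only sketch of the same rescaling (the random array is a stand-in for a real HOG visualisation):

```python
import numpy as np

# Stand-in for a float64 HOG visualisation: faint values near zero.
hog_image = np.random.rand(32, 32) * 0.02

# Equivalent of exposure.rescale_intensity(..., in_range=(0, 0.02)):
# clip to the input range, stretch to 0-1, then scale to uint8 so
# imwrite/imsave stores visible pixels instead of near-black ones.
rescaled = np.clip(hog_image / 0.02, 0.0, 1.0)
hog_uint8 = (rescaled * 255).astype(np.uint8)
print(hog_uint8.dtype, hog_uint8.max())
```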

Displaying CR part of YCR_CB python opencv

I am trying to generate a colour space such that the red sections of a set of images are easily seen by converting it from BGR to YCR_CB using python and opencv like this:
img = cv2.imread(os.path.join(directory_to_cycle, filename), cv2.IMREAD_COLOR)
height, width = img.shape[0], img.shape[1]
img_yCrCb = cv2.cvtColor(img, cv2.COLOR_BGR2YCR_CB)
cv2.imshow('image', img_yCrCb[:,:,1])
but I am getting this error caused by cv2.imshow('image', img_yCrCb[:,:,1])
error: C:\slave\WinInstallerMegaPack\src\opencv\modules\core\src\array.cpp:2482: error: (-206) Unrecognized or unsupported array type
and I don't understand why, since others that have used this exact code have it working.
Does anyone have any ideas on how to fix this problem, or an alternative solution to what I am trying to do?

Grayscale image segmentation

I'm trying to segment a grayscale picture generated from field measurements, which is why it is not a conventional 3-channel picture.
I have tried this piece of code:
import cv2 #this is the openCV library
import numpy as np
# some code to generate img
ret,thresh = cv2.threshold(img ,0,255,cv2.THRESH_BINARY+cv2.THRESH_OTSU)
And it spits out this error:
cv2.error: ..\..\..\modules\imgproc\src\thresh.cpp:719: error: (-215) src.type() == CV_8UC1 in function cv::threshold
I have no idea how to solve this, since the usage seems to be pretty straightforward, so any idea is welcome.
The error is due to the following assert statement in thresh.cpp: CV_Assert( src.type() == CV_8UC1 );, meaning your input image is not of type CV_8UC1.
So make sure that your generated input image img is in fact CV_8UC1 (a single-channel 8-bit image).
So indeed the problem was the image type: it contains double values that need to be normalized to 0-255.
In my case 1000 is the maximum possible value:
img = cv2.convertScaleAbs(img / 1000.0 * 255)
This worked for me.
