There is a bug with mask bounds in cv2.seamlessClone. It occurs when the mask is filled with 255 only at its edges.
Does anyone know how to solve it?
For now I just skip images when I need to blend only the edges.
import cv2
import numpy as np

# black 100x100 background and an all-white object of the same size
background = np.zeros((100, 100, 3), dtype=np.uint8)
target_object = np.ones_like(background) * 255

# mask that is white only in the last column(s), i.e. touching the right edge
mask = np.zeros_like(target_object)
bound = -1
mask[:, bound:] = 255

# center point (x, y) in the destination image
center = (mask.shape[1] // 2, mask.shape[0] // 2)
cv2.seamlessClone(target_object, background, mask, center, flags=cv2.NORMAL_CLONE)
# bound:
# -1, -2 : "terminate called after throwing an instance of 'std::length_error'
# what(): vector::_M_default_append"
# -3 : "cv2.error: OpenCV(4.0.0) /io/opencv/modules/core/src/matrix_wrap.cpp:1669: error: (-215:Assertion failed) !fixedSize() in function 'release'"
# <= -4 : works
I've opened an issue for this bug in OpenCV:
https://github.com/opencv/opencv/issues/15294
As a workaround for your issue, I suggest manually adding a 1-pixel border.
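A minimal sketch of that workaround (the single-channel mask and the use of cv2.copyMakeBorder are my own choices, not from the original report): pad the object and the mask with one black pixel on every side so the white region of the mask no longer touches the image border.

import cv2
import numpy as np

background = np.zeros((100, 100, 3), dtype=np.uint8)
target_object = np.full_like(background, 255)

# mask whose white region touches the right edge (the case that crashes)
mask = np.zeros(background.shape[:2], dtype=np.uint8)
mask[:, -1:] = 255

# workaround: add a 1-pixel black border around the object and the mask
pad = 1
target_padded = cv2.copyMakeBorder(target_object, pad, pad, pad, pad,
                                   cv2.BORDER_CONSTANT, value=0)
mask_padded = cv2.copyMakeBorder(mask, pad, pad, pad, pad,
                                 cv2.BORDER_CONSTANT, value=0)

center = (background.shape[1] // 2, background.shape[0] // 2)
blended = cv2.seamlessClone(target_padded, background, mask_padded, center,
                            cv2.NORMAL_CLONE)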
I am attempting to crop the image above down to just the PVC pipe so that I can later determine the differences between this reference image and another image. (I will possibly ask another question later if any issues arise when I try to do that.) I am currently attempting to find any of the pink or white PVC in the image, and I have successfully done that by using NumPy to split the BGR array of the image. Here is my code so far:
import cv2
image = cv2.imread("CoralImgs/Original.png")
c = image.copy()
c = cv2.resize(c, (480, 380), interpolation=cv2.INTER_AREA)

for y in range(480):
    for x in range(720):
        b, g, r = image[:, :, 0], image[:, :, 1], image[:, :, 2]  # for BGR image

for y in range(len(r)):
    for x in range(len(r[y])):
        if r[y][x] < 170:
            r[y][x] = 0

# mask_b = cv2.bitwise_and(c, c, mask=r)
cv2.imshow("r", r)
cv2.waitKey(0)
However, when I uncomment the line mask_b = cv2.bitwise_and(c, c, mask=r) above, I get the following error. I understand that this error is usually caused by the source image being empty, but it only occurs when I uncomment that line.
cv2.error: OpenCV(4.5.3) C:\Users\runneradmin\AppData\Local\Temp\pip-req-build-sn_xpupm\opencv\modules\core\src\arithm.cpp:230:
error: (-215:Assertion failed) (mtype == CV_8U || mtype == CV_8S) && _mask.sameSize(*psrc1) in function 'cv::binary_op'
So what I am asking for here is help understanding why the error is occurring, and hopefully a fix for it so that I do not have this problem again.
Thank you!!!!
As a last-second thought, I wanted to add what the code returns when the line in question is commented out.
As Dan Mašek said in the comments (I do not know how to @ people; I need to figure that out), the reason I was getting that error was that my images were two different sizes. After modifying the code above a little bit, here is the result.
import cv2
image = cv2.imread("CoralImgs/Original.png")
c = image.copy()
c = cv2.resize(c, (480, 380), interpolation=cv2.INTER_AREA)

b, g, r = c[:, :, 0], c[:, :, 1], c[:, :, 2]  # for BGR image

for y in range(len(r)):
    for x in range(len(r[y])):
        if r[y][x] < 170:
            r[y][x] = 0

mask_b = cv2.bitwise_and(c, c, mask=r)
cv2.imshow("r", r)
cv2.imshow("mask_b", mask_b)
cv2.waitKey(0)
Dan Mašek Thanks Again!
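As a side note (my own addition, not part of the fix above), the per-pixel loop can be replaced by a single vectorized NumPy operation, which does the same thresholding much faster:

import cv2

image = cv2.imread("CoralImgs/Original.png")
c = cv2.resize(image, (480, 380), interpolation=cv2.INTER_AREA)

# r is a view into c, so zeroing it also modifies c, exactly like the loop above
r = c[:, :, 2]
r[r < 170] = 0

mask_b = cv2.bitwise_and(c, c, mask=r)
cv2.imshow("mask_b", mask_b)
cv2.waitKey(0)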
I am trying to replace some code which uses scipy.misc.imresize with skimage.transform.resize. However, I am struggling to understand some code which performs some math on the results:
import os
import numpy as np
from PIL import Image
from skimage.transform import resize as imresize
# load the image
filename = os.path.join('silhouettes', 'cat.jpg')
image = Image.open(filename)
data = np.asarray(image)
width, height, _ = data.shape
mask = imresize(data, (width, height), order=3).astype('float32')
print(type(mask))
# Perform binarization of mask
mask[mask <= 127] = 0
mask[mask > 128] = 255
# numpy.amax
# Return the maximum of an array or maximum along an axis.
max = np.amax(mask)
print(max)
# RuntimeWarning: invalid value encountered in true_divide
# Attempt to divide an numpy.ndarray by 0
mask /= max
The comments document the error I'm getting: The max value is 0, and I wind up trying to divide by 0. For reference, the original function was:
def load_mask_sil(invert_sil, shape):
    width, height, _ = shape
    mask = imresize(invert_sil, (width, height), interp='bicubic').astype('float32')

    # Perform binarization of mask
    mask[mask <= 127] = 0
    mask[mask > 128] = 255

    max = np.amax(mask)
    mask /= max
    return mask
According to the documentation on skimage.transform.resize, the values in the output are scaled to the interval [0.0 ... 1.0], whereas I assume that scipy.misc.imresize didn't change the data type or values at all (I don't have such an old version of SciPy at hand to verify that).
So, in the original version, you most likely had values in the range [0.0 ... 255.0], and some values above 128, so that the maximum was 255. In the new version, you only have values in the range [0.0 ... 1.0], thus all pixels will be set to 0, since they're all below 127. (On a side note: Why <= 127 and > 128? What about 128 itself?)
You can circumvent that issue by enabling the preserve_range flag in your skimage.transform.resize call:
mask = imresize(data, (width, height), order=3, preserve_range=True).astype('float32')
So, you again get values in the range [0.0 ... 255.0], which should resemble the original behaviour.
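Alternatively (a sketch of another option, not taken from the original code), you could keep skimage's [0.0 ... 1.0] output and adapt the thresholds and normalization to that range:

import os
import numpy as np
from PIL import Image
from skimage.transform import resize as imresize

data = np.asarray(Image.open(os.path.join('silhouettes', 'cat.jpg')))
width, height, _ = data.shape

mask = imresize(data, (width, height), order=3).astype('float32')  # values in [0.0, 1.0]

# binarize around 0.5 instead of 127/128
mask[mask <= 0.5] = 0.0
mask[mask > 0.5] = 1.0

# the mask already lies in [0.0, 1.0]; if you keep the normalization,
# guard against an all-zero mask
max_val = np.amax(mask)
if max_val > 0:
    mask /= max_val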
I am trying to apply a mask I have made to an image using OpenCV (3.3.1) in Python (3.6.5) to extract all the skin. I am looping over the photo, checking windows, and classifying them using two premade sklearn GMMs. If the window is skin, I change that area of the mask to True (255); otherwise I leave it as 0.
I initialize the NumPy array that holds the mask before the loop with the same dimensions as the image, but OpenCV keeps saying that the image and the mask do not have the same dimensions (the output and error message are below). I have seen other somewhat similar problems on the site, but none with solutions that have worked for me.
Here is my code:
# convert the image to HSV
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
delta = 6
# create an empty np array to make the mask
#mask = np.zeros((img.shape[0], img.shape[1], 1))
mask = np.zeros(img.shape[:2])
# loop through the image and classify each window
for i in range(0, hsv.shape[0], delta):
    for j in range(0, hsv.shape[1], delta):
        # get a copy of the window
        arr = np.copy(hsv[i:i+delta, j:j+delta, 0])
        # create a normalized hue histogram for the window
        if arr.sum() > 0:
            arr = np.histogram(np.ravel(arr/arr.sum()), bins=100, range=(0,1))
        else:
            arr = np.histogram(np.ravel(arr), bins=100, range=(0,1))
        # take the histogram and reshape it
        arr = arr[0].reshape(1,-1)
        # get the probabilities that the window is skin or not skin
        skin = skin_gmm.predict_proba(arr)
        not_skin = background_gmm.predict_proba(arr)
        if skin > not_skin:
            # because the window is more likely skin than not skin,
            # we fill that window of the mask with ones
            mask[i:i+delta, j:j+delta].fill(255)

# apply the mask to the original image to extract the skin
print(mask.shape)
print(img.shape)
masked_img = cv2.bitwise_and(img, img, mask=mask)
The output is:
(2816, 2112)
(2816, 2112, 3)
OpenCV Error: Assertion failed ((mtype == 0 || mtype == 1) &&
_mask.sameSize(*psrc1)) in cv::binary_op, file C:\ci\opencv_1512688052760
\work\modules\core\src\arithm.cpp, line 241
Traceback (most recent call last):
File "skindetector_hist.py", line 183, in <module>
main()
File "skindetector_hist.py", line 173, in main
skin = classifier_mask(img, skin_gmm, background_gmm)
File "skindetector_hist.py", line 63, in classifier_mask
masked_img = cv2.bitwise_and(img, img, mask = mask)
cv2.error: C:\ci\opencv_1512688052760\work\modules\core\src
\arithm.cpp:241: error: (-215) (mtype == 0 || mtype == 1) &&
_mask.sameSize(*psrc1) in function cv::binary_op
As you can see in the output, the image and mask have the same width and height. I have also tried making the mask have depth one (line 5) but that didn't help. Thank you for any help!
It is not only complaining about the size of the mask. It is complaining about the type of the mask. The error:
OpenCV Error: Assertion failed ((mtype == 0 || mtype == 1) &&
_mask.sameSize(*psrc1))
This means that either the type or the size of the mask (which in your case does match) is wrong. In the documentation we see:
mask – optional operation mask, 8-bit single channel array, that
specifies elements of the output array to be changed.
And this is consistent with the error that asks for a type 0 (CV_8U) or 1 (CV_8S).
Also, even though it is not stated, img should not be float either, since that will not give the desired result (it will probably run anyway).
The solution is probably as simple as changing:
mask = np.zeros(img.shape[:2])
to
mask = np.zeros(img.shape[:2], dtype=np.uint8)
A small test shows what type you will get:
np.zeros((10,10)).dtype
gives you dtype('float64'), which means doubles, not 8-bit values.
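An alternative I sometimes use (my own suggestion, not part of the answer above) is to cast an already-built float mask right before applying it, using the img and mask from the question:

# cast the float64 mask to 8-bit just before the call
masked_img = cv2.bitwise_and(img, img, mask=mask.astype(np.uint8))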
I'm using OpenCV/Python and I'm trying to add a number to an image.
My code is:
import cv2
import numpy as np
import math
from matplotlib import pyplot as plt
img = cv2.imread('messi.jpg',0)
img2 = img
img2 = cv2.add(img2, np.uint8([50]))
I get the following error:
OpenCV Error: Assertion failed (type2 == CV_64F && (sz2.height == 1 || sz2.heigh
t == 4)) in cv::arithm_op, file C:\builds\master_PackSlaveAddon-win64-vc12-stati
c\opencv\modules\core\src\arithm.cpp, line 1989
Traceback (most recent call last):
File "lab3_examples.py", line 27, in <module>
img2 = cv2.add(img, np.uint8([50]))
cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\core
\src\arithm.cpp:1989: error: (-215) type2 == CV_64F && (sz2.height == 1 || sz2.h
eight == 4) in function cv::arithm_op
The image I'm using is messi.jpg
Instead, if I use img2 = np.add(img2, np.uint8([50])), intensities that go past 255 wrap around modulo 256, e.g. 260 % 256 = 4, so the pixel's value is set to 4 instead of 255. As a result, white pixels are turned to black!
Here is the faulty resulting image.
Any ideas please?
In C++, saturate_cast(...) is used for this purpose.
In Python, simply
img2 = cv2.add(img2, 50)
will do if you want to increase the brightness of a grayscale image. If applied to a color image, the color balance will be shifted. For a color image, to preserve the balance, a good answer is the one by Alex, Bill Grates:
How to fast change image brightness with python + OpenCV?
The only remark: the following part of the code is not necessary:
v[v > 255] = 255
v[v < 0] = 0
in my case (Python 3, OpenCV 4).
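For reference, here is a sketch of that HSV-based approach (the function name and the value of 50 are my own, adapted from the linked answer); cv2.add saturates at 255, so no manual clipping is needed here:

import cv2

def increase_brightness(img, value=30):
    # work in HSV so only the value (brightness) channel changes
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    v = cv2.add(v, value)  # saturating addition, clamps at 255
    return cv2.cvtColor(cv2.merge((h, s, v)), cv2.COLOR_HSV2BGR)

brighter = increase_brightness(cv2.imread('messi.jpg'), value=50)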
I would suggest converting the BGR image to HSV:
hsv= cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
Then split the channels using:
h_channel, s_channel, v_channel = cv2.split(hsv)
Now play with the h_channel:
h_channel += 20  #---You can try any other value as well---
Now merge the channels back together again:
merged = cv2.merge((h_channel, s_channel, v_channel))
Finally convert the image back to BGR and display it:
Final_image = cv2.cvtColor(merged, cv2.COLOR_HSV2BGR)
cv2.imshow('Final output', Final_image)
You will see an enhanced or a dimmed image depending on the value you add.
Hope it helps.... :D
I've been trying to use the OpenCV implementation of the grab cut method via the Python bindings. I have tried using the version in both cv and cv2 but I am having trouble finding out the correct parameters to use to get the method to run correctly. I have tried several permutations of the parameters and nothing seems to work (basically every example I've seen on Github). Here are a couple examples I have tried to follow:
Example 1
Example 2
And here is the method's documentation and a known bug report:
Documentation
Known Grabcut Bug
I can get the code to execute using the example below, but it returns a blank (all black) image mask.
img = Image("pills.png")
mask = img.getEmpty(1)
bgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
fgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
for i in range(0, 13*5):
    cv.SetReal2D(fgModel, 0, i, 0)
    cv.SetReal2D(bgModel, 0, i, 0)
rect = (150, 70, 170, 220)
tmp1 = np.zeros((1, 13 * 5))
tmp2 = np.zeros((1, 13 * 5))
cv.GrabCut(img.getBitmap(), mask, rect, tmp1, tmp2, 5, cv.GC_INIT_WITH_RECT)
I am using SimpleCV to load the images. The mask type and return type from img.getBitmap() are:
iplimage(nChannels=1 width=730 height=530 widthStep=732 )
iplimage(nChannels=3 width=730 height=530 widthStep=2192 )
If someone has a working example of this code I would love to see it. For what it is worth I am running on OSX Snow Leopard, and my version of OpenCV was installed from the SVN repository (as of a few weeks ago). For reference my input image is this:
I've tried changing the result mask enum values to something more visible. It is not the return values that are the problem. This returns a completely black image. I will try a couple more values.
img = Image("pills.png")
mask = img.getEmpty(1)
bgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
fgModel = cv.CreateMat(1, 13*5, cv.CV_64FC1)
for i in range(0, 13*5):
    cv.SetReal2D(fgModel, 0, i, 0)
    cv.SetReal2D(bgModel, 0, i, 0)
rect = (150, 70, 170, 220)
tmp1 = np.zeros((1, 13 * 5))
tmp2 = np.zeros((1, 13 * 5))
cv.GrabCut(img.getBitmap(), mask, rect, tmp1, tmp2, 5, cv.GC_INIT_WITH_MASK)
mask[mask == cv.GC_BGD] = 0
mask[mask == cv.GC_PR_BGD] = 0
mask[mask == cv.GC_FGD] = 255
mask[mask == cv.GC_PR_FGD] = 255
result = Image(mask)
result.show()
result.save("result.png")
Kat, this version of your code seems to work for me.
import numpy as np
import matplotlib.pyplot as plt
import cv2
filename = "pills.png"
im = cv2.imread(filename)
h,w = im.shape[:2]
mask = np.zeros((h,w),dtype='uint8')
rect = (150,70,170,220)
tmp1 = np.zeros((1, 13 * 5))
tmp2 = np.zeros((1, 13 * 5))
cv2.grabCut(im,mask,rect,tmp1,tmp2,10,mode=cv2.GC_INIT_WITH_RECT)
plt.figure()
plt.imshow(mask)
plt.colorbar()
plt.show()
It produces a figure like this, with labels 0, 2, and 3.
Your mask is filled with the following values:
GC_BGD defines an obvious background pixel.
GC_FGD defines an obvious foreground (object) pixel.
GC_PR_BGD defines a possible background pixel.
GC_PR_FGD defines a possible foreground pixel.
Which are all part of an enum:
enum { GC_BGD    = 0,  // background
       GC_FGD    = 1,  // foreground
       GC_PR_BGD = 2,  // most probably background
       GC_PR_FGD = 3   // most probably foreground
     };
Which translates to the colors: completely black, very black, dark black, and black. I think you'll find that if you add the following code (taken from your example 1 and slightly modified) your mask will look nicer:
mask[mask == cv.GC_BGD] = 0       # certain background is black
mask[mask == cv.GC_PR_BGD] = 63   # possible background is dark grey
mask[mask == cv.GC_FGD] = 255     # foreground is white
mask[mask == cv.GC_PR_FGD] = 192  # possible foreground is light grey
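For completeness, here is a sketch (my addition; it assumes the im and mask variables from the cv2-based answer above) of turning the GrabCut labels into a binary mask and extracting only the foreground:

import numpy as np
import cv2

# definite and probable foreground become 255, everything else 0
bin_mask = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype('uint8')

# keep only the foreground pixels of the original image
foreground = cv2.bitwise_and(im, im, mask=bin_mask)
cv2.imwrite("foreground.png", foreground)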