How to apply opencv background subtraction to an image - python

I am trying to apply cv2.createBackgroundSubtractorMOG() to this image:
to eliminate all background brightness and leave only the two bright objects in the middle for further analysis. Is this the right approach for this task? If not, how would I do that?
import cv2
img = cv2.imread('image.png')
sharp_img = cv2.createBackgroundSubtractorMOG().apply(img)
Output:
Traceback (most recent call last):
File "/home/artur/Desktop/test.py", line 4, in <module>
sharp_img = cv2.createBackgroundSubtractorMOG().apply(img)
AttributeError: module 'cv2.cv2' has no attribute 'createBackgroundSubtractorMOG'
Edit:
MOG does not seem to work.
Code:
import cv2
img = cv2.imread('image.png')
sharp_img = cv2.bgsegm.createBackgroundSubtractorMOG().apply(img)
cv2.imwrite('image2.png', sharp_img)
Output:
Traceback (most recent call last):
File "/home/artur/Desktop/test.py", line 4, in <module>
sharp_img = cv2.bgsegm.createBackgroundSubtractorMOG().apply(img)
AttributeError: module 'cv2.cv2' has no attribute 'bgsegm'
MOG2 seems to work but with no satisfying result:
Code:
import cv2
img = cv2.imread('image.png')
sharp_img = cv2.createBackgroundSubtractorMOG2().apply(img)
cv2.imwrite('image2.png', sharp_img)
Output Image:
I tried playing around with the arguments of the MOG2 method from the docs, but with no change.

From the docs (which import cv2 as cv), try this:
sharp_img = cv2.bgsegm.createBackgroundSubtractorMOG().apply(img)
or
sharp_img = cv2.createBackgroundSubtractorMOG2().apply(img)
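Note that cv2.bgsegm only exists in the contrib build of OpenCV (the opencv-contrib-python package on pip); with plain opencv-python you get exactly the AttributeError shown above. Also, both MOG and MOG2 learn the background across a sequence of frames, so applying them to a single still image rarely gives a useful result. A minimal sketch of the intended usage, assuming a hypothetical input video video.mp4:

import cv2

# Requires the contrib build: pip install opencv-contrib-python
subtractor = cv2.bgsegm.createBackgroundSubtractorMOG()

cap = cv2.VideoCapture('video.mp4')  # hypothetical input file
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # The model updates with every frame; the mask marks moving foreground
    fg_mask = subtractor.apply(frame)
cap.release()

For a single image with two bright objects, plain thresholding (as in the answer below) is usually the better tool.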

import cv2
img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
max_val, min_val = img.max(), img.min()
print(max_val, min_val)  # helps in choosing thresholding values
blurred = cv2.medianBlur(img, 5)  # smooth before thresholding
_, threshold_img = cv2.threshold(blurred, 127, 255, cv2.THRESH_BINARY)  # a good starting point for t1 is half the max value of the image
cv2.imshow('threshold', threshold_img)
cv2.waitKey(0)
This approach is a good starting point in your case, since you have two bright peaks that you want to separate from the noise. Once you have identified suitable threshold limits, you should be able to isolate the two spots from the background noise. If needed, you can then use cv2.erode and cv2.dilate to remove remaining noise, as sketched below.
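A minimal sketch of that cleanup step (the threshold value and kernel size are assumptions you would tune for your image):

import cv2
import numpy as np

img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

kernel = np.ones((3, 3), np.uint8)  # small structuring element
mask = cv2.erode(mask, kernel, iterations=1)   # shrink blobs, removing speckles
mask = cv2.dilate(mask, kernel, iterations=1)  # grow surviving blobs back
cv2.imwrite('cleaned.png', mask)

Erosion followed by dilation with the same kernel is a morphological opening, which cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel) performs in one call.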

Related

OpenCV not accepting numpy array?

I want to do a matchTemplate from a screenshot (with mss)
from mss import mss
import cv2
import numpy
with mss() as sct:
    screenshot_numpy = numpy.array(sct.shot())
template = cv2.imread('./templates/player.png')
result = cv2.matchTemplate(screenshot_numpy,template,cv2.TM_CCOEFF_NORMED)
Error message:
Traceback (most recent call last):
File "main.py", line 14, in <module>
result = cv2.matchTemplate(screenshot_numpy,template,cv2.TM_CCOEFF_NORMED)
TypeError: image data type = 18 is not supported
From the mss examples page:
img = numpy.array(sct.grab(monitor))
So here we see the .grab() method used to get the raw pixel data. sct.grab() returns a ScreenShot object, and numpy.array() converts it into a numpy ndarray. (By contrast, sct.shot() saves a PNG to disk and returns its filename as a string, so numpy.array(sct.shot()) produced a string-typed array, which is the unsupported image data type in the error.)
Check the numpy ndarray dtype after you convert; e.g. if your code is ndarray_img = numpy.array(sct.grab(monitor)), check ndarray_img.dtype. If it's np.uint8 then you're done. If it's np.uint16, then you'll have to divide by 256 and convert to np.uint8 with ndarray_img = (ndarray_img // 256).astype(np.uint8).
Further down you'll see another example which flips the R and B channels of the image:
cv2.imshow(title, cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
except that this is actually backwards. In practice it doesn't matter, because either way the call just swaps the first and third channels, so BGR2RGB and RGB2BGR do exactly the same thing. But PIL (and most other libraries) give you RGB order, while OpenCV expects BGR order for display, so technically it should be
cv2.imshow(title, cv2.cvtColor(img, cv2.COLOR_RGB2BGR))
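Putting it together, a sketch of the corrected capture and match (the monitor index and template path are assumptions):

from mss import mss
import numpy
import cv2

with mss() as sct:
    # grab() returns raw BGRA pixel data; numpy converts it to a uint8 ndarray
    screenshot = numpy.array(sct.grab(sct.monitors[1]))

# Drop the alpha channel so the shape matches the 3-channel template
screenshot = cv2.cvtColor(screenshot, cv2.COLOR_BGRA2BGR)
template = cv2.imread('./templates/player.png')
result = cv2.matchTemplate(screenshot, template, cv2.TM_CCOEFF_NORMED)
print(result.max())  # best match score

In mss, sct.monitors[0] is the bounding box of all displays combined and sct.monitors[1] is the first physical monitor, hence the index above.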

From skimage import io error traceback

I am trying dlib for face recognition, but when I execute the program I get an error from skimage. Can somebody help me? I have tried to solve it but I can't.
from skimage.io import imread
import sys
import os
import dlib
import glob
import numpy

if len(sys.argv) != 4:
    print(
        "Call this program like this:\n"
        "   ./face_recognition.py shape_predictor_68_face_landmarks.dat dlib_face_recognition_resnet_model_v1.dat ../examples/faces\n"
        "You can download a trained facial shape predictor and recognition model from:\n"
        "    http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2\n"
        "    http://dlib.net/files/dlib_face_recognition_resnet_model_v1.dat.bz2")
    exit()

predictor_path = sys.argv[1]
face_rec_model_path = sys.argv[2]
faces_folder_path = sys.argv[3]

detector = dlib.get_frontal_face_detector()
sp = dlib.shape_predictor(predictor_path)
facerec = dlib.face_recognition_model_v1(face_rec_model_path)
win = dlib.image_window()

for f in glob.glob(os.path.join(faces_folder_path, "*.jpg")):
    print("Processing file: {}".format(f))
    img = imread(f)
    win.clear_overlay()
    win.set_image(img)

    # Ask the detector to find the bounding boxes of each face. The 1 in the
    # second argument indicates that we should upsample the image 1 time. This
    # will make everything bigger and allow us to detect more faces.
    dets = detector(img, 1)
    print("Number of faces detected: {}".format(len(dets)))

    # Now process each face we found.
    for k, d in enumerate(dets):
        print("Detection {}: Left: {} Top: {} Right: {} Bottom: {}".format(
            k, d.left(), d.top(), d.right(), d.bottom()))
        # Get the landmarks/parts for the face in box d.
        shape = sp(img, d)
        # Draw the face landmarks on the screen so we can see what face is
        # currently being processed.
        win.clear_overlay()
        win.add_overlay(d)
        win.add_overlay(shape)

        # Compute the 128D vector that describes the face in img identified by
        # shape. In general, if two face descriptor vectors have a Euclidean
        # distance between them less than 0.6 then they are from the same
        # person, otherwise they are from different people. Here we just print
        # the vector to the screen.
        face_descriptor = facerec.compute_face_descriptor(img, shape)
        print(face_descriptor)
        # It should also be noted that you can also call this function like this:
        #   face_descriptor = facerec.compute_face_descriptor(img, shape, 100)
        # The version of the call without the 100 gets 99.13% accuracy on LFW
        # while the version with 100 gets 99.38%. However, the 100 makes the
        # call 100x slower to execute, so choose whatever version you like. To
        # explain a little, the 3rd argument tells the code how many times to
        # jitter/resample the image. When you set it to 100 it executes the
        # face descriptor extraction 100 times on slightly modified versions of
        # the face and returns the average result. You could also pick a more
        # middle value, such as 10, which is only 10x slower but still gets an
        # LFW accuracy of 99.3%.

    dlib.hit_enter_to_continue()
And my error message looks like this:
Traceback (most recent call last):
File "C:/Users/Android/Downloads/Compressed/dlib-19.4/dlib-19.4/python_examples/face_recognition.py", line 48, in <module>
from skimage.io import imread
File "C:\Users\Android\AppData\Local\Programs\Python\Python35\lib\site-packages\skimage\io\__init__.py", line 11, in <module>
from ._io import *
File "C:\Users\Android\AppData\Local\Programs\Python\Python35\lib\site-packages\skimage\io\_io.py", line 7, in <module>
from ..color import rgb2grey
File "C:\Users\Android\AppData\Local\Programs\Python\Python35\lib\site-packages\skimage\color\__init__.py", line 1, in <module>
from .colorconv import (convert_colorspace,
File "C:\Users\Android\AppData\Local\Programs\Python\Python35\lib\site-packages\skimage\color\colorconv.py", line 59, in <module>
from scipy import linalg
File "C:\Users\Android\AppData\Local\Programs\Python\Python35\lib\site-packages\scipy\__init__.py", line 61, in <module>
from numpy._distributor_init import NUMPY_MKL # requires numpy+mkl
ImportError: cannot import name 'NUMPY_MKL'
Please help me with my problem. Thank you in advance.
Imread is available from the mahotas package.
Example:
import mahotas as mh
from mahotas.features import surf
image = mh.imread('zipper.jpg', as_grey=True)

I'm trying to enhance the contrast of an image using the following code

Here I'm trying to:
- apply adaptive filtering to the image,
- enhance the contrast of the image.
My code is as follows:
#!/usr/bin/python
import cv2
import numpy as np
import PIL
from PIL import ImageFilter
from matplotlib import pyplot as plt
img = cv2.imread('Crop.jpg',0)
cv2.imshow('original',img)
img = cv2.medianBlur(img,5)
th3 = cv2.adaptiveThreshold(img,255,cv2.ADAPTIVE_THRESH_GAUSSIAN_C,\
                            cv2.THRESH_BINARY,11,2)
cv2.imshow('image',th3)
th3 = th3.filter(ImageFilter.SMOOTH)
cv2.imshow('image',th3)
cv2.waitKey(0)
cv2.destroyAllWindows()
I am getting the following error:
Traceback (most recent call last):
File "./adaptive.py", line 22, in
th3 = th3.filter(ImageFilter.SMOOTH)
AttributeError: 'numpy.ndarray' object has no attribute 'filter'
You may be confusing this with cv.Smooth: th3 is a NumPy ndarray returned by OpenCV, not a PIL Image, so it has no .filter method.
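A minimal sketch of two ways around it, assuming the same Crop.jpg input: either stay in OpenCV for the smoothing, or explicitly round-trip through a PIL Image:

import cv2
import numpy as np
from PIL import Image, ImageFilter

img = cv2.imread('Crop.jpg', 0)
img = cv2.medianBlur(img, 5)
th3 = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                            cv2.THRESH_BINARY, 11, 2)

# Option 1: smooth with OpenCV directly on the ndarray
smooth_cv = cv2.blur(th3, (3, 3))

# Option 2: convert to a PIL Image, filter, and convert back
smooth_pil = np.array(Image.fromarray(th3).filter(ImageFilter.SMOOTH))

cv2.imshow('smoothed', smooth_cv)
cv2.waitKey(0)
cv2.destroyAllWindows()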

opencv-python basic ORB feature detection gives me error -215 in function cv::drawKeypoints

This is the basic code from the opencv-python documentation:
import numpy as np
import cv2
from matplotlib import pyplot as plt
img = cv2.imread('simple.jpg',0)
# Initiate STAR detector
orb = cv2.ORB()
# find the keypoints with ORB
kp = orb.detect(img,None)
# compute the descriptors with ORB
kp, des = orb.compute(img, kp)
# draw only keypoints location,not size and orientation
img2 = cv2.drawKeypoints(img,kp,color=(0,255,0), flags=0)
plt.imshow(img2),plt.show()
and it gives me this error:
Traceback (most recent call last):
File "C:\Python27\test.py", line 18, in <module>
img2 = cv2.drawKeypoints(img,kp,color=(0,255,0), flags=0)
error: ..\..\..\..\opencv\modules\features2d\src\draw.cpp:115: error: (-215) !outImage.empty() in function cv::drawKeypoints
I should mention this error happens in opencv-python. Can you help me out, please? I'm really struggling to make it work.
I've found the solution: it couldn't find the image. I changed
img = cv2.imread('simple.jpg',0)
to
img = cv2.imread('c:\\python27\\sample.jpg', cv2.IMREAD_GRAYSCALE)
and it worked. cv2.imread returns None instead of raising an error when the path is wrong, which is why the failure only surfaces later, in cv2.drawKeypoints. Note that the image I used for the sample was one of my own grayscale images.
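A small guard makes this failure obvious at the read instead. A sketch, assuming a modern OpenCV (3.x/4.x), where the factory is cv2.ORB_create and drawKeypoints takes an explicit output image argument:

import cv2

img = cv2.imread('c:\\python27\\sample.jpg', cv2.IMREAD_GRAYSCALE)
if img is None:
    raise IOError('could not read the image; check the path')

orb = cv2.ORB_create()          # cv2.ORB() in old 2.4.x builds
kp = orb.detect(img, None)      # find keypoints
kp, des = orb.compute(img, kp)  # compute descriptors
img2 = cv2.drawKeypoints(img, kp, None, color=(0, 255, 0), flags=0)
cv2.imwrite('keypoints.png', img2)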

how to use hough circles in cv2 with python?

I have the following code and I want to detect the circle.
img = cv2.imread("act_circle.png")
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray,cv2.CV_HOUGH_GRADIENT)
It looks like the module does not have this attribute, and the error is the following:
'module' object has no attribute 'CV_HOUGH_GRADIENT'
Does anybody know where this hidden parameter is?
Thanks
CV_HOUGH_GRADIENT belongs to the cv module, so you'll need to import that:
import cv2.cv as cv
and change your function call to
circles = cv2.HoughCircles(gray,cv.CV_HOUGH_GRADIENT)
Now in current cv2 versions:
import cv2
cv2.HOUGH_GRADIENT
In my case, I am using opencv 3.0.0 and it worked the following way:
circles = cv2.HoughCircles(gray_im, cv2.HOUGH_GRADIENT, 2, 10, np.array([]), 20, 60, m/10)[0]
i.e. instead of cv2.cv.CV_HOUGH_GRADIENT, I have used just cv2.HOUGH_GRADIENT.
If you use OpenCV 3, then use this code:
img = cv2.imread("act_circle.png")
gray = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
circles = cv2.HoughCircles(gray,cv2.HOUGH_GRADIENT) # change here
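For reference, a fuller sketch of the call in a current OpenCV (all numeric parameters are starting points to tune, not known-good values for this image):

import cv2
import numpy as np

img = cv2.imread('act_circle.png')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # HoughCircles is sensitive to noise

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=20,
                           param1=50, param2=30, minRadius=0, maxRadius=0)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (int(x), int(y)), int(r), (0, 255, 0), 2)  # draw each detected circle
cv2.imwrite('circles_found.png', img)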
