I have a wide-FOV camera with a field of view of approximately 195×130 degrees. The lens sits in a circular holder, and the lens should not see the holder. Here's an image of what I do not want.
I drew 4 rectangles in Paint. The 4 black spots are the holder. The solid red one is there for censorship; it is not actually in the image.
If the camera streams an image like that, that's a problem. I need to detect those black spots, and if the frame looks like this, the program should give me an error message or simply return 'false'. I searched Google and couldn't find anything on this. I'm a beginner on this subject, but if you explain how to do it, I can connect the dots.
Thank you for your help.
I get the stream via a USB capture card; it acts like a webcam.
#UPDATE1: I cropped the four corners of the image, then applied a threshold. With some basic if/else logic I got what I wanted. Thank you anyway.
Try detection by generating custom Haar filters.
Or do it more simply by applying a threshold (nearly black) and checking whether some tiny squares in the corners are completely black.
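A minimal sketch of that corner-threshold idea, assuming frames arrive through cv2.VideoCapture; the patch size and the darkness thresholds (patch, dark_thresh, dark_ratio) are made-up values to tune:

import cv2
import numpy as np

def corners_are_clear(frame, patch=40, dark_thresh=30, dark_ratio=0.8):
    # Return False if any corner patch is mostly black,
    # i.e. the holder is visible in the frame.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    corners = [
        gray[:patch, :patch],          # top-left
        gray[:patch, w - patch:],      # top-right
        gray[h - patch:, :patch],      # bottom-left
        gray[h - patch:, w - patch:],  # bottom-right
    ]
    for c in corners:
        # Fraction of pixels darker than dark_thresh in this corner
        if np.mean(c < dark_thresh) > dark_ratio:
            return False
    return True

cap = cv2.VideoCapture(0)  # the USB capture card shows up as a webcam
ok, frame = cap.read()
if ok and not corners_are_clear(frame):
    print('Error: holder visible in the frame')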
I am trying to create a line-tracking program for a drone with a forward-facing camera. I understood this could be a bit difficult, since the camera is not facing downward and picks up the environment, but I need it to face forward for a face-recognition algorithm, so I chose to make the line pink. I found some color-filtering parameters on this site. I thought they would overcompensate with the color range, but the tape doesn't show up as one full sheet; instead it appears as a ton of small boxes inside the tape.
import cv2
import numpy as np

def pinkThreshold(image):
    copy = image.copy()
    # Frames read with OpenCV are BGR; if this image came from cv2, COLOR_BGR2HSV may be the intended conversion
    copy = cv2.cvtColor(copy, cv2.COLOR_RGB2HSV)
    # Note: OpenCV hue only runs 0-179, so an upper hue of 225 behaves like 179
    lower_pink = np.array([125, 30, 100])
    upper_pink = np.array([225, 255, 255])
    pinkImage = cv2.inRange(copy, lower_pink, upper_pink)
    edges = cv2.Canny(pinkImage, 240, 255)
    return edges
The image I get is this:
I think it might have to do with the camera returning red squares, but I'm not completely sure what I should do about this, or whether it is even the issue. The red pattern areas look like what I have seen, but I'm not certain. If that is true, what would be a good color filter for pink and red? Also, would a large floodlight over the line to be tracked solve this?
The camera is attached to a DJI Tello drone. I can't change the equipment.
I think it might have to do with the camera returning red squares, but I'm not completely sure what I should do about this, or whether it is even the issue.
Let's adjust the contrast and use color substitution to see the actual problem:
As you can see, the color noise is huge. If you do color-based segmentation, or apply any other "color sensitive" logic targeting a specific color, that noise is going to be picked up:
You can always improve your lighting conditions and extend the range you defined, but there is another approach: use multiple colors to find the actual shape you need, use thresholding to boost specific areas, and so on:
Long story short:
improve lighting conditions
AND/OR do some specific preprocessing with a wider color range to properly address the noise and the overall quality of the picture (see the sketch below)
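As an illustration of the second point, here is a sketch of that kind of preprocessing: blur first so per-pixel color noise stops fragmenting the mask, threshold with a wider HSV range, then close the remaining gaps with morphology. The file name, the ranges, and the kernel sizes are assumptions to tune for the Tello's camera:

import cv2
import numpy as np

frame = cv2.imread('frame.jpg')  # hypothetical file name

# Blur first so per-pixel color noise does not fragment the mask
blurred = cv2.GaussianBlur(frame, (7, 7), 0)
hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

# Wider pink/red-ish range (OpenCV hue runs 0-179)
lower = np.array([140, 20, 80])
upper = np.array([179, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Close small gaps so the tape becomes one solid blob instead of boxes
kernel = np.ones((15, 15), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)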
I am quite new to Python, and I am trying to write some code for image analysis.
Here is my initial image:
Initial image
After splitting the image into its RGB channels, converting them to gradients, applying a threshold, and merging them back together, I get the following image:
Gradient/Threshold
Now I have to draw contours around the black areas and get the size of the enclosed areas. I just don't know how to do it, since my attempts with findContours/drawContours in OpenCV have not been successful at all.
Maybe someone also knows an easier way to get that from the initial image.
Hope someone can help me here!
I am coding in Python 3.
Try adaptive thresholding on the grayscale version of the input image.
Also play with the last two parameters of adaptive thresholding; you will find good results, as I have shown in the image. (Tip: create trackbars and play with the values; this is a quick and easy way to find the best values for these parameters.)
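A minimal sketch of that trackbar approach, assuming a hypothetical input file name; the two trackbars drive the blockSize and C parameters of cv2.adaptiveThreshold:

import cv2

img = cv2.imread('input.jpg')  # hypothetical file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

def on_change(_):
    # blockSize must be odd and at least 3
    block = max(3, cv2.getTrackbarPos('block', 'thresh') | 1)
    c = cv2.getTrackbarPos('C', 'thresh')
    th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                               cv2.THRESH_BINARY, block, c)
    cv2.imshow('thresh', th)

cv2.namedWindow('thresh')
cv2.createTrackbar('block', 'thresh', 11, 101, on_change)
cv2.createTrackbar('C', 'thresh', 2, 30, on_change)
on_change(0)
cv2.waitKey(0)

Once good parameters are found, cv2.findContours on the mask (inverted with cv2.bitwise_not if the regions of interest are black) together with cv2.contourArea gives the outlines and the sizes of those areas.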
I am working on a script that detects the bottom of a cooking pot, given a picture from above at a slightly slanted angle. This is a tough task due to lighting, lack of edges, and the glare of the metal pot.
This is a sample image I am working with:
image
import cv2

img = cv2.imread('img.jpg')
gray_img = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
The bottom of the pot is visible, but hard to detect.
So far, I was able to produce this image using:
th2 = cv2.adaptiveThreshold(gray_img,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,11,2)
And this image using:
edges = cv2.Canny(img,30,60)
The solution seems intuitive, since in both images the base can be detected easily, but I am not able to figure out the implementation.
My intuition tells me that I should cluster the white pixels starting from the center of the image and then trace the border of the segmentation generated by the clustering. I am not quite sure how to go about that. Any advice would be greatly appreciated.
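One sketch of that intuition, reusing the adaptive-threshold output from above: close small gaps, label the connected white regions, keep the one containing the image center, and trace its border. The kernel size is a guess, and this assumes the center pixel actually falls inside the base region:

import cv2
import numpy as np

img = cv2.imread('img.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th2 = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                            cv2.THRESH_BINARY, 11, 2)

# Close gaps so the pot base forms one connected white region
closed = cv2.morphologyEx(th2, cv2.MORPH_CLOSE, np.ones((9, 9), np.uint8))

# Label connected white regions and keep the one containing the image center
n, labels = cv2.connectedComponents(closed)
h, w = closed.shape
center_label = labels[h // 2, w // 2]
mask = np.uint8(labels == center_label) * 255

# Trace and draw the border of that region
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 255, 0), 2)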
WORK IN PROGRESS
What is your overall goal? Do you want to detect something at the bottom, count bubbles, or check for defects?
As you already mentioned, it's hard because of the reflections, so you could start by using less direct light, ideally from a diffuse source.
No single light bulb, no direct sun; maybe use a white canvas, or span a thin piece of white cloth between the current light source and the camera.
I have two images: one that contains a box and one without it. There is a small vertical disparity between the two pictures, since the camera was not at the same spot and was translated a bit. I want to cut out the box and fill the hole with information from the other picture.
I want to achieve something like this (a slide from a computer vision course):
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting one picture from the other does not work either, because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the areas that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods work better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image so it overlaps the first.
Replace the region with the human with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
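A rough sketch of those three steps with ORB features and RANSAC; the file names and the box rectangle are placeholders:

import cv2
import numpy as np

img1 = cv2.imread('with_box.jpg')     # image containing the box
img2 = cv2.imread('without_box.jpg')  # image without it

# 1. Match features and estimate the homography with RANSAC
orb = cv2.ORB_create(2000)
k1, d1 = orb.detectAndCompute(img1, None)
k2, d2 = orb.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2. Warp the second image into the first image's frame
h, w = img1.shape[:2]
warped = cv2.warpPerspective(img2, H, (w, h))

# 3. Replace the changed region (a hand-picked rectangle here) with warped pixels
x, y, bw, bh = 100, 150, 200, 180  # hypothetical box coordinates
img1[y:y + bh, x:x + bw] = warped[y:y + bh, x:x + bw]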
You can try this library for background subtraction problems: https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.
Hi, I want to use the Python Imaging Library to crop images to a specific size for a website. The catch is that these images show people's faces, so I need to crop automatically around them.
I know face detection is a difficult problem, so I'm thinking of using the face.com API (http://developers.face.com/tools/#faces/detect), which is fine for what I want to do.
I'm just a little stuck on how I would use the data it returns to crop a selected area based on the majority of the faces.
Can anybody help?
There is a library for Python with a concept of smart cropping that, among other options, can use face detection to do smarter cropping.
It uses OpenCV under the hood, but you are isolated from it.
https://github.com/globocom/thumbor
If you have a rectangle that you want to excise from an image, here's what I might try first:
(optional) If the image is large, do a rough square crop centered on the face, with side length sqrt(2) times the longer edge of the face rectangle. Worst case (a 45° rotation), it will still grab everything important.
Rotate based on the face orientation (something like rough_crop.rotate(math.degrees(math.atan(ydiff/xdiff))); trig is fun).
Do a final crop. If you did the initial crop, the face should be centered; otherwise you'll have to transform (rotate) all your old coordinates into the new image (more trig!).
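A sketch of those steps with PIL; the eye coordinates and the face box below are hypothetical stand-ins for whatever the detection API returns:

import math
from PIL import Image

# Hypothetical face data as a detection API might return it:
# eye positions plus a face bounding box (left, upper, right, lower)
left_eye, right_eye = (210, 305), (290, 300)
face_box = (180, 250, 320, 400)

img = Image.open('photo.jpg')  # hypothetical file name

# Rough square crop around the face, sqrt(2) larger than its longer edge
cx = (face_box[0] + face_box[2]) // 2
cy = (face_box[1] + face_box[3]) // 2
side = int(max(face_box[2] - face_box[0], face_box[3] - face_box[1]) * math.sqrt(2))
rough = img.crop((cx - side // 2, cy - side // 2, cx + side // 2, cy + side // 2))

# Rotate so the eye line becomes horizontal
ydiff = right_eye[1] - left_eye[1]
xdiff = right_eye[0] - left_eye[0]
rough = rough.rotate(math.degrees(math.atan2(ydiff, xdiff)))

# Final centered crop to the target size
target = 128
w, h = rough.size
final = rough.crop(((w - target) // 2, (h - target) // 2,
                    (w + target) // 2, (h + target) // 2))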