How to identify the unfinished rectangle by image processing? - python

I have a color image.
After several preprocessing I am able to get the following image.
However, as you can see, the door portion is not complete: only 3 lines are visible in the post-processed image. The 4th boundary line is missing because, in the original photo, the color was missing in that part.
Now I can identify the two windows, but how to identify the door?
Is there any way to complete the unfinished part of the door as a rectangle?
The yellow-ticked door also needs to be identified.

You want an image processing algorithm that given 3 sides of a rectangle will know to "close" the fourth one.
Suppose we give you such an algorithm, how do you expect it to differentiate between the green rectangle (the door you want to detect) and the red rectangle (you do not want)?

I believe you should somehow take a step back from your processed image and identify the car blocking the view of the door.
So it boils down to car-hood identification. Maybe use the high gradient of color change in that area: the color quickly changes from dark grey (door) to light grey (hood), indicating the last curve to be added to the three straight lines of the door.
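One crude way to look for that dark-to-light transition is to scan for image rows with an unusually strong vertical intensity change. This is only a sketch of the idea, not the answerer's actual method; the `frac` cutoff is a hypothetical parameter you would tune.

```python
import numpy as np

def strong_edge_rows(gray, frac=0.5):
    """Return row indices where the mean vertical intensity change is large,
    e.g. where a dark door meets a lighter hood.
    `frac` is a hypothetical cutoff relative to the strongest response."""
    # Absolute difference between consecutive rows, averaged across columns.
    dy = np.abs(np.diff(gray.astype(float), axis=0)).mean(axis=1)
    return np.where(dy >= frac * dy.max())[0]
```

In a real image you would restrict this to the region below the three detected door lines, and the strongest such row would be a candidate for the missing fourth boundary.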

Related

find camera/image obstructed by an object with opencv python

I have a FOV camera with approximately 195×130 degrees. The lens is put in a circular holder, and the lens should not see the holder. Here's an image of what I do not want.
I drew 4 rectangles in Paint. The 4 black spots are the holder. The solid red one is just for censorship; it isn't actually there.
If the camera streams an image like that, that's a no. I need to detect those black spots, and if they are present it should give me an error message or simply 'false'. I searched Google and couldn't find anything on this. I'm a noob on this subject, but if you explain how to do it, I can connect the dots.
Thank you for your help.
I get the stream via a USB capture card; it acts like a webcam.
#UPDATE1: I cropped the four corners of the image and then applied a threshold. With some basic if/else logic I got what I wanted. Thank you anyway.
Try detection by generating custom Haar filters.
Or keep it simple: apply a threshold (nearly black) and check whether some tiny squares in the corners are completely black.
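The corner-threshold idea can be sketched as follows, assuming a grayscale frame (convert a BGR webcam frame first, e.g. with `cv2.cvtColor`). The patch size, threshold, and ratio are hypothetical values to tune.

```python
import numpy as np

def corners_blocked(gray, patch=20, black_thresh=30, black_ratio=0.9):
    """Return True if all four corner patches are almost entirely black,
    i.e. the lens holder is visible in the frame."""
    h, w = gray.shape
    patches = [
        gray[:patch, :patch],          # top-left
        gray[:patch, w - patch:],      # top-right
        gray[h - patch:, :patch],      # bottom-left
        gray[h - patch:, w - patch:],  # bottom-right
    ]
    # A patch counts as "black" when most of its pixels fall below the threshold.
    return all((p < black_thresh).mean() >= black_ratio for p in patches)
```

If the holder only ever intrudes on some corners, change `all` to `any` to flag a frame as soon as one corner goes dark.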

In optical flow, why is there heavy motion at the image margins?

See some examples:
(1)
(2)
I am curious: why is there a high-magnitude area at the margins of the image? (Like the blue area in the first frame and the red area in the second.)
Also, I am seeing a similar trend around stationary objects in the frame. (For example, see the heading "RACE 9" in the first image and look at the high-magnitude area around it.)
Even though they are not moving very fast, why does this trend come up? Is it because the pixels are considered to have moved out of the frame, so optical flow assigns high motion to them? Any possible explanation will be highly appreciated.
What you see in the second picture is a scissor problem. Look at the white triangle at the bottom, which is just a part of the white strip. Imagine that in the next frame the camera moves up just a bit. Because the white strip is almost parallel to the edge of the frame, the point where the strip intersects the bottom edge of the frame will move a long distance to the right. It's similar to scissors, where the point of intersection of the blades moves very fast when the angle between the blades is small. So if you try to follow the motion of the bottom white triangle in your frame, you'll see that it slides to the right at high speed as the camera moves up.
This type of problem appears near the edge of the screen, and along straight edges in a picture in general. For example, you can shift the stripes of the American flag to the right or left without a significant change in the picture. This means that insignificant changes in the flag picture can just as well be interpreted as large shifts to the left or right.
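A common practical workaround (my suggestion, not part of the answer above) is simply to ignore a border band of the flow field before analyzing it, since the scissor/aperture artifacts concentrate there. A minimal sketch, with `margin` as a hypothetical band width:

```python
import numpy as np

def mask_margins(flow_mag, margin=16):
    """Zero out the flow-magnitude values in a border band where
    'scissor' and out-of-frame artifacts dominate."""
    out = flow_mag.copy()
    out[:margin, :] = 0
    out[-margin:, :] = 0
    out[:, :margin] = 0
    out[:, -margin:] = 0
    return out
```

You would apply this to the magnitude image computed from, e.g., `cv2.calcOpticalFlowFarneback` before thresholding or averaging motion.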

Detect region from edges

I've got a micrograph showing a number of grains with rather clear boundaries. I've used OpenCV-Python to detect these boundaries (with a Canny filter), and I think it was rather successful; see the figure. I would like to identify and mark the individual regions bounded by the detected edges, and then get the area (number of pixels) contained in those regions. My apologies if the question was asked (and answered) before, but I could not find any satisfying answers yet.
Thanks in advance
Original image
Original image overlain by the detected edges
If the grain makes no difference in the color (check on the raw data rather than a compressed format), you may want to use the Becke line to distinguish inside from outside. The borders of your grains appear dark on the inside and white on the outside, but this also depends on the focus of the microscope. See here.
In case your grains do not totally enclose a background spot, you can use a point-in-polygon approach.
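The labeling-and-area step the question asks about can be sketched with a plain flood fill over the non-edge pixels. This is a self-contained numpy/BFS version for illustration; in practice `cv2.connectedComponentsWithStats` on the inverted edge map does the same thing much faster.

```python
import numpy as np
from collections import deque

def label_regions(edges):
    """Label connected non-edge regions (4-connectivity).
    `edges` is a boolean array, True on detected edge pixels.
    Returns (labels, areas) where areas maps label -> pixel count."""
    h, w = edges.shape
    labels = np.zeros((h, w), dtype=int)
    areas = {}
    current = 0
    for sy in range(h):
        for sx in range(w):
            if edges[sy, sx] or labels[sy, sx]:
                continue                      # edge pixel or already labeled
            current += 1
            area = 0
            q = deque([(sy, sx)])
            labels[sy, sx] = current
            while q:                          # breadth-first flood fill
                y, x = q.popleft()
                area += 1
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not edges[ny, nx] and not labels[ny, nx]):
                        labels[ny, nx] = current
                        q.append((ny, nx))
            areas[current] = area
    return labels, areas
```

Note that Canny edges often have one-pixel gaps; a morphological closing of the edge map beforehand keeps neighboring grains from merging into one region.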

How to remove sun's reflection on a photo using image processing

I have multiple grayscale images, each containing the sun's reflection (also known as glare) as a bright white blob which I want to remove. It is basically the front of a car image where the sun's reflection falls on the car's front steel grille.
I want to remove this reflection as much as possible. I would appreciate it if anyone could point me to a good algorithm, preferably in Python, that I can leverage to remove this glare and pre-process the images as much as possible.
I tried applying a threshold to the image pixels and setting anything above 200 to a value of 128. It doesn't work very well because other parts of the image also contain white, and those get affected.
Do not forget to add some sample images.
I would first try to identify the sun spot by intensity and by the shape of the graph intensity = f(distance from spot center); it may have a distinct shape that could be used to identify the spot more reliably.
After that I would bleed colors from the area just outside the spot into its inside, recoloring the spot with its surrounding color:
- find all spot pixels that are next to non-spot pixels,
- recolor them to the average of their neighboring non-spot pixels,
- clear them from the spot mask,
- loop until no spot pixel is left in the mask.
[notes]
Without any input images to test, this is just a theory. Also, if you have the source RGB image rather than just grayscale, the color patterns may also help identify the spots, e.g. by checking for saturated white and/or a rainbow-like pattern.
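The iterative bleed described above can be sketched like this, assuming the glare mask is strictly smaller than the image (otherwise there are no known pixels to bleed from). OpenCV's `cv2.inpaint` implements a more robust version of the same idea.

```python
import numpy as np

def bleed_fill(gray, mask):
    """Recolor masked (glare) pixels with the average of their non-masked
    4-neighbors, repeating until the mask has been eaten away entirely."""
    img = gray.astype(float).copy()
    mask = mask.copy()
    h, w = img.shape
    while mask.any():
        padded = np.pad(img, 1, mode='edge')
        known = np.pad(~mask, 1)            # pad with False: outside is unknown
        neigh_sum = np.zeros_like(img)
        neigh_cnt = np.zeros_like(img)
        for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
            v = padded[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
            k = known[1 + dy:h + 1 + dy, 1 + dx:w + 1 + dx]
            neigh_sum += v * k              # only known neighbors contribute
            neigh_cnt += k
        border = mask & (neigh_cnt > 0)     # spot pixels touching known pixels
        img[border] = neigh_sum[border] / neigh_cnt[border]
        mask[border] = False                # these pixels are now known
    return np.clip(img, 0, 255).astype(np.uint8)
```

A simple mask to start from would be `mask = gray > 230`, refined by the intensity-profile check suggested above.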

Determine height of Coffee in the pot using Python imaging

We have a web-cam in our office kitchenette focused at our coffee maker. The coffee pot is clearly visible. Both the location of the coffee pot and the camera are static. Is it possible to calculate the height of coffee in the pot using image recognition? I've seen image recognition used for quite complex stuff like face-recognition. As compared to those projects, this seems to be a trivial task of measuring the height.
(That's my best guess and I have no idea of the underlying complexities.)
How would I go about this? Would this be considered a very complex job to partake? FYI, I've never done any kind of imaging-related work.
Since the coffee pot position is stationary, get a sample frame and locate a single column of pixels where the minimum and maximum coffee quantities can easily be seen, in a spot where there are no reflections. Check the green vertical line segment in the following picture:
(source: nullnetwork.net)
The easiest way is to have two frames, one with the pot empty and one with the pot full (obviously under the same lighting conditions, which typically would be the case), convert to grayscale (apply colorsys.rgb_to_hsv to each RGB pixel and keep only the v (3rd) component), and sum the luminosity of all pixels in the chosen line segment. Let's say the pot-empty case reaches a sum of 550 and the pot-full case a sum of 220 (coffee is dark). By comparing an input frame's sum to these two sums, you can get a rough estimate of the percentage of coffee in the pot.
I wouldn't bet my life on the accuracy of this method, though, and the fluctuations even from second to second might be wild :)
N.B: in my example, the green column of pixels should extend to the bottom of the pot; I just provided an example of what I meant.
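The calibration and interpolation described above can be sketched as follows; the function and parameter names are my own, and the column/segment coordinates would come from your chosen green line.

```python
import numpy as np

def column_sum(gray, x, y0, y1):
    """Sum of pixel intensities along the chosen vertical segment."""
    return int(gray[y0:y1, x].sum())

def coffee_fraction(frame_sum, empty_sum, full_sum):
    """Map a measured column-luminosity sum to a 0..1 fill estimate by
    linear interpolation between the calibrated empty and full sums
    (coffee is dark, so a fuller pot gives a smaller sum)."""
    frac = (empty_sum - frame_sum) / (empty_sum - full_sum)
    return min(1.0, max(0.0, frac))         # clamp noise outside calibration
```

With the example numbers from the answer, a frame sum of 385 lands exactly halfway between 550 (empty) and 220 (full), i.e. a half-full pot.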
Steps that I'd try:
Convert the image to grayscale.
Binarize the image, leaving only the coffee. You can discover a good threshold manually through experimentation.
Blob extraction. The blob's area (number of pixels) is one way to calculate the height, i.e. area / width.
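The three steps above can be sketched as follows, assuming the coffee is the darkest thing in the cropped pot region; the threshold value is a hypothetical starting point.

```python
import numpy as np

def coffee_height(gray, dark_thresh=80):
    """Binarize so only the dark coffee remains, then estimate its height
    as blob area divided by blob width."""
    coffee = gray < dark_thresh             # keep only dark (coffee) pixels
    if not coffee.any():
        return 0.0
    cols = np.where(coffee.any(axis=0))[0]
    width = cols[-1] - cols[0] + 1          # bounding-box width of the blob
    return coffee.sum() / width             # area / width ~ height in pixels
```

In practice you would crop the frame to the pot first, so reflections and shadows elsewhere don't end up in the blob.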
First do thresholding, then segmentation. Then you can more easily detect edges.
You're looking for edge detection. But you only need to do it between the brown/black of the coffee and the color of the background behind the pot.
Take pictures of the pot with different levels of coffee in it.
Downsample the images to maybe 4×10 pixels.
Do the same in a loop for each new live picture.
Calculate the difference of each pixel's value compared to the reference images.
Take the reference image with the smallest difference sum and you get the state of your coffee machine.
You might experiment whether a grayscale version, or only the red or green channel, gives better results.
If different light settings cause problems, this approach is useless. Just buy a spotlight for the coffee machine, or lighten or darken each picture until the sum of all pixels reaches a reference value.
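The reference-matching idea above can be sketched like this. The downsampling here is crude subsampling for self-containment; `cv2.resize` with `INTER_AREA` would average properly. All names are illustrative.

```python
import numpy as np

def classify_level(frame, references, shape=(10, 4)):
    """Downsample the live frame and each calibrated reference to a tiny
    grid, then return the label of the closest reference.
    `references` maps a label (e.g. a fill level) to a reference frame."""
    def tiny(img):
        h, w = img.shape
        ys = np.linspace(0, h - 1, shape[0]).astype(int)
        xs = np.linspace(0, w - 1, shape[1]).astype(int)
        return img[np.ix_(ys, xs)].astype(float)  # crude subsampling
    t = tiny(frame)
    # Pick the reference with the smallest total absolute pixel difference.
    return min(references, key=lambda k: np.abs(tiny(references[k]) - t).sum())
```

The per-frame brightness normalization suggested above (scaling each picture until its total pixel sum matches a reference value) would slot in just before `tiny` is applied.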
