how to implement tripwire in OpenCV using background subtraction - python

The following paper gives you an idea about tripwire.
I have subtracted the background from the foreground using the BackgroundSubtractorMOG2 method of OpenCV.
I am able to find and draw contours, but how do I draw a line such that whenever a person or vehicle crosses that line, it gets counted? You can see a sample video here: example video.
Any suggestions on drawing such a line?
Is the way I am approaching this correct? I am using Windows 7 with OpenCV 2 and OpenCV 3, in Python and C++.

For drawing a line you can use the cv::line function.
For counting objects passing the line, you can use the minimum and maximum (x, y) values of each contour or of its bounding box. Once the contour has passed the line, both the minimum and the maximum (x, y) will be on the other side of that line.
Can you show us the code of what you've already tried?
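In the meantime, here is a minimal sketch of that idea, assuming a horizontal tripwire at a fixed y coordinate and a hypothetical input file "video.mp4"; the coarse x-bucket "id" used to remember which side each blob was on is only a placeholder for proper object tracking:

import cv2

LINE_Y = 300                               # assumed y position of the tripwire
cap = cv2.VideoCapture("video.mp4")        # hypothetical input file
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))

count = 0
prev_sides = {}                            # last known side of the line per blob

while True:
    ok, frame = cap.read()
    if not ok:
        break
    fg = bg.apply(frame)
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]   # drop shadow pixels
    fg = cv2.morphologyEx(fg, cv2.MORPH_OPEN, kernel, iterations=2)

    # [-2] keeps this working across OpenCV versions with different return values
    contours = cv2.findContours(fg, cv2.RETR_EXTERNAL,
                                cv2.CHAIN_APPROX_SIMPLE)[-2]
    cv2.line(frame, (0, LINE_Y), (frame.shape[1], LINE_Y), (0, 0, 255), 2)

    for c in contours:
        if cv2.contourArea(c) < 500:       # ignore small blobs
            continue
        x, y, w, h = cv2.boundingRect(c)
        cy = y + h // 2                    # vertical centre of the blob
        key = x // 50                      # very crude "id": coarse x bucket
        side = cy > LINE_Y
        if key in prev_sides and prev_sides[key] != side:
            count += 1                     # the blob moved to the other side
        prev_sides[key] = side
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.putText(frame, "count: %d" % count, (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 255, 255), 2)
    cv2.imshow("tripwire", frame)
    if cv2.waitKey(30) & 0xFF == 27:
        break

cap.release()
cv2.destroyAllWindows()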

Related

How to use opencv functions on roi

My project is a REM sleep detector, and the provided pictures show the contour of my eyelid. As my eye looks in different directions, this contour moves in a distinct way. For lack of a better solution, my first attempt is to draw a grid of ROIs on my video stream; with that in place, I want to use the countNonZero function or blob detection on the ROIs. Depending on which ROIs in the grid change values, movement and direction are detected. (I am sure there is a better way.)
Problem: I cannot apply these functions to one or several ROIs of my choice; they always work on the entire image only. How do I retrieve values from each ROI specifically? The ROIs are set up by means of multiple rectangle calls. The code is in Python. Any help greatly appreciated.
Contour of eyelid:
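A minimal sketch of the grid-of-ROIs idea, assuming the frame has already been thresholded to a single-channel binary image; the function name grid_counts and the grid size are made up for illustration. In OpenCV's Python bindings an ROI is simply a NumPy slice, so cv2.countNonZero can be applied per cell:

import cv2
import numpy as np

def grid_counts(binary_img, rows=4, cols=4):
    """Return a rows x cols array with the non-zero pixel count of each grid cell."""
    h, w = binary_img.shape[:2]
    counts = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            roi = binary_img[y0:y1, x0:x1]        # the ROI is just a slice
            counts[r, c] = cv2.countNonZero(roi)
    return counts

Comparing the counts between consecutive frames shows which cells changed, and therefore roughly in which direction the contour moved.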

Opencv Line Length Detection

I need help with sample code, in Python, to get the lengths of the vertical and horizontal lines in an uploaded image.
I didn't get your question exactly, but here is something according to my understanding.
First, detect the lines:
Line detection using OpenCV
Second, use the formula to calculate the distance between two points:
length = norm(p2 - p1)
where p1 = (x1, y1) and p2 = (x2, y2).
Note: It will calculate length in pixels.
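For example, a short sketch combining probabilistic Hough line detection with that distance formula; "drawing.png" is a hypothetical input and the Canny/Hough parameters are starting values you would tune:

import cv2
import numpy as np

img = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
edges = cv2.Canny(img, 50, 150)

lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=50,
                        minLineLength=30, maxLineGap=5)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        length = np.hypot(x2 - x1, y2 - y1)      # norm(p2 - p1), in pixels
        kind = "vertical" if abs(y2 - y1) > abs(x2 - x1) else "horizontal"
        print(kind, round(float(length), 1))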
You can use cv2.findContours and cv2.approxPolyDP. This will get you a list of rectangles, and you can then analyse whether each one is vertical or horizontal.
Some info:
https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html
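And a sketch of that contour-based variant, assuming dark lines on a light background (hence the inverted Otsu threshold); the 3:1 aspect-ratio test is an arbitrary cut-off for "line-like" boxes:

import cv2

img = cv2.imread("drawing.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input file
_, binary = cv2.threshold(img, 0, 255,
                          cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# [-2] keeps this working across OpenCV versions with different return values
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                            cv2.CHAIN_APPROX_SIMPLE)[-2]
for c in contours:
    approx = cv2.approxPolyDP(c, 0.02 * cv2.arcLength(c, True), True)
    x, y, w, h = cv2.boundingRect(approx)
    if w > 3 * h:
        print("horizontal line, length ~%d px" % w)
    elif h > 3 * w:
        print("vertical line, length ~%d px" % h)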

Remove motion blur with real time performance on camera input

I have drawn a simple pattern of geometrical shapes on paper and placed it on an object as a marker. I'm able to detect and analyze the pattern successfully. However, when the object moves a little faster, motion blur is introduced, which can be rotational or linear. The detected regions then overlap; e.g. a strip of arrows moving in the direction of the arrows is detected as a single line once motion blur sets in. I therefore need to fix this somehow, so I can detect individual arrows and analyze them.
Below are images of markers with and without motion blur.
Is there any python module or open source implementation that can be used to solve it?
Motion can be in any direction at any speed, so the PSF is not known, which the Wiener and Lucy-Richardson methods require.
Also it is a realtime tracking problem so I need something that executes fast.
P.S. I'm using Python 2.7 and Opencv 3
This problem can be solved by limiting the exposure time of your camera. In OpenCV you can do this with:
cap.set(cv2.CAP_PROP_EXPOSURE,40)
or using the v4l2-ctl command line utility.
The first step is to check whether your camera responds to OpenCV capture properties such as
CAP_PROP_FRAME_WIDTH
CAP_PROP_FRAME_HEIGHT
in order to verify the camera is suitable.
The second step is to use CAP_PROP_EXPOSURE, e.g.
cap.set(cv2.CAP_PROP_EXPOSURE, 40)
The value can be changed accordingly to avoid motion blur.
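A short sketch of those two steps, assuming a webcam at index 0; whether the exposure value is honored, and what units it uses, depends entirely on the camera driver, so treat the constants below as guesses:

import cv2

cap = cv2.VideoCapture(0)

# step 1: check that the camera reports sensible values for capture properties
print("width   :", cap.get(cv2.CAP_PROP_FRAME_WIDTH))
print("height  :", cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
print("exposure:", cap.get(cv2.CAP_PROP_EXPOSURE))

# step 2: shorten the exposure to reduce motion blur; on many drivers you must
# first switch off auto exposure (the magic value 0.25 means "manual" on many
# V4L2 cameras -- this is driver specific)
cap.set(cv2.CAP_PROP_AUTO_EXPOSURE, 0.25)
cap.set(cv2.CAP_PROP_EXPOSURE, 40)

ok, frame = cap.read()
if ok:
    cv2.imwrite("test_frame.png", frame)
cap.release()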

OpenCV [Python] - Perspective Warp an image with more than 4 points

I'm trying to make a simple scanner program, which takes in an image of a piece of paper and creates a binary image based off of that. An example of what I am trying to do is below:
However, as you can see, the program uses the 4 corners of the paper to create the image. This means that the program doesn't take into account the curvature of the paper.
Is there any way to "warp" the image using more than four points? By this I mean: find the bounding rectangle, and if the contour line is outside the rectangle, shrink that row of pixels, and if it is inside, extend it.
I feel like this should exist in some way or form, but if it doesn't, it may be time to delve into the depths of OpenCV :D
Thanks!
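There does not seem to be a single OpenCV call for this, but you can build it yourself with cv2.remap: sample your detected contour to get the curved top and bottom edges of the page, then interpolate between them to make a dense pixel map. A rough sketch under those assumptions (unwarp_page, top_curve and bottom_curve are illustrative names, and the curves are assumed to be already extracted from your contour):

import cv2
import numpy as np

def unwarp_page(img, top_curve, bottom_curve, out_h, out_w):
    """Straighten a page whose top and bottom edges are curved.

    top_curve / bottom_curve: (out_w, 2) arrays giving, for every output
    column, the (x, y) source position of the page's top and bottom edge,
    assumed to come from sampling your detected contour.
    """
    t = np.linspace(0.0, 1.0, out_h).reshape(-1, 1)       # (out_h, 1)
    top = np.asarray(top_curve, dtype=np.float32)         # (out_w, 2)
    bot = np.asarray(bottom_curve, dtype=np.float32)
    # linear blend between the two curved edges for every output row
    map_x = ((1 - t) * top[:, 0] + t * bot[:, 0]).astype(np.float32)
    map_y = ((1 - t) * top[:, 1] + t * bot[:, 1]).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

If your OpenCV build includes the shape module, cv2.createThinPlateSplineShapeTransformer is another option for warps defined by many point correspondences, though its API takes some getting used to.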

Image Segmentation based on Pixel Density

I need some help developing some code that segments a binary image into components of a certain pixel density. I've been doing some research in OpenCV algorithms, but before developing my own algorithm to do this, I wanted to ask around to make sure it hasn't been made already.
For instance, in this picture, I have code that imports it as a binary image. However, is there a way to segment the objects from the lines? I would need to segment nodes (corners) and objects (the circle in this case). However, the object does not necessarily have to be a shape.
The solution I thought of was to use pixel density. Most of the picture is made up of lines, and the objects have a greater pixel density than the lines. Is there a way to segment based on that?
Below is a working example of the task.
Original Picture:
Resulting Images after Segmentation of Nodes (intersection of multiple lines) and Components (Electronic components like the Resistor or the Voltage Source in the picture)
You can use an integral image to quickly compute the density of black pixels in a rectangular region. Detection of regions with high density can then be performed with a moving window in varying scales. This would be very similar to how face detection works but using only one super-simple feature.
It might be beneficial to make all edges narrow with something like skeletonizing before computing the integral image to make the result insensitive to wide lines.
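A sketch of the integral-image idea, assuming black ink on a white background (hence the inversion); the window size, step and density threshold are guesses to tune for your schematic:

import cv2
import numpy as np

def dense_regions(binary, win=40, step=10, thresh=0.35):
    """Return (x, y, win, win) boxes whose fraction of ink pixels exceeds thresh."""
    ink = (binary == 0).astype(np.uint8)      # 1 where the drawing is black
    integral = cv2.integral(ink)              # (h+1, w+1) summed-area table
    boxes = []
    h, w = ink.shape
    for y in range(0, h - win, step):
        for x in range(0, w - win, step):
            # sum of the window via four lookups in the integral image
            s = (integral[y + win, x + win] - integral[y, x + win]
                 - integral[y + win, x] + integral[y, x])
            if s / float(win * win) > thresh:
                boxes.append((x, y, win, win))
    return boxes

Overlapping boxes from neighbouring window positions can then be merged, for instance with cv2.groupRectangles.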
OpenCV has some functionality for finding contours that is able to put the contours in a hierarchy. It might be what you are looking for. If not, please add some more information about your expected output!
If I understand correctly, you want to detect the lines and the circle in your image, right?
If it is the case, have a look at the Hough line transform and Hough circle transform.
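A small sketch of both transforms; "schematic.png" is a hypothetical input and the parameters are only starting points for a schematic-like drawing:

import cv2
import numpy as np

gray = cv2.imread("schematic.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input
edges = cv2.Canny(gray, 50, 150)

# straight wire segments
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=60,
                        minLineLength=40, maxLineGap=5)

# circular symbols (e.g. the voltage source)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=150, param2=30, minRadius=10, maxRadius=60)

vis = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(vis, (x1, y1), (x2, y2), (0, 255, 0), 2)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(vis, (x, y), r, (0, 0, 255), 2)
cv2.imwrite("hough_result.png", vis)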
