I have a video file of evening time (6pm-9pm), and I want to detect the movement of people on the road.
While trying to find the difference between a handful of images sampled from 10-minute clips of the video (10 equally time-spaced images within any 10-minute clip), I'm facing these challenges:
All the images come out different (and so raise an alert) because there is a plant that keeps moving in the wind.
All 10 images also come out different because the sun is setting, so due to the "natural light variation" the 10 images from a 10-minute clip differ even though there is no public/human movement.
How do I restrict my algorithm to focus only on movements in a certain area of the video rather than all of it? (I couldn't find anything on Google and don't know if there's an algorithm in OpenCV for this.)
This one is rather difficult to deal with. I recommend you blur the frames a little to reduce the noise from the moving plants. Also, if the range of the movement is not large, try changing the difference threshold and the area threshold (if your algorithm uses contour detection as the following step). Hope this helps a little.
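As a rough sketch of that idea, assuming BGR frames read with OpenCV (the blur kernel size, difference threshold, and minimum contour area are assumptions to tune for your footage):

import cv2

def motion_regions(prev_frame, curr_frame, diff_thresh=25, min_area=500):
    # Blur both frames to suppress small, high-frequency motion such as swaying leaves
    prev_blur = cv2.GaussianBlur(cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    curr_blur = cv2.GaussianBlur(cv2.cvtColor(curr_frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
    # Absolute difference followed by a binary threshold
    diff = cv2.absdiff(prev_blur, curr_blur)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    # Keep only contours large enough to plausibly be a person, not a plant
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) >= min_area]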
For detecting "movement" of people, a rate of 10 frames per 10 minutes is a little too low. The people in two consecutive frames can be totally different, which means you cannot track the movement of a single person; you can only find the differences between two frames. Since you are using such low-fps video, I recommend you try Background Subtraction to find people in the frames instead of people's movements between the frames. For Background Subtraction, to solve
All 10 images also come out different because the sun is setting, so due to the "natural light variation" the 10 images from a 10-minute clip differ even though there is no public/human movement.
you can try using the average image of all frames as the background_img in
difference = current_img - background_img
If the time span is longer, you can use the average of the images closest in time to current_img as background_img, and keep updating background_img while running the video.
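A minimal sketch of such a running-average background, assuming BGR frames and using OpenCV's accumulateWeighted (the learning rate alpha is an assumption to tune):

import cv2
import numpy as np

background = None
alpha = 0.05  # how quickly the background absorbs slow changes such as fading daylight

def background_diff(frame):
    global background
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
    if background is None:
        background = gray.copy()
    # Running average: the gradual light change is folded into the background
    cv2.accumulateWeighted(gray, background, alpha)
    # Difference between the current frame and the adaptive background
    return cv2.absdiff(gray, background).astype(np.uint8)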
If your ROI is a rectangle in the frame, use
cv::Rect my_ROI(x, y, width, height);   // rectangle describing the region of interest
cv::Mat ROI_img = frame(my_ROI);        // header pointing at that sub-region of the frame
If not, try using a mask.
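For a non-rectangular region, a mask-based sketch in Python could look like this (the polygon coordinates and file name are placeholders for your own road area and frame):

import cv2
import numpy as np

frame = cv2.imread("evening_frame.png")  # hypothetical frame grabbed from the video

# Polygon outlining the part of the road you care about (example coordinates)
road_polygon = np.array([[100, 400], [500, 400], [450, 200], [150, 200]], dtype=np.int32)

mask = np.zeros(frame.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [road_polygon], 255)

# Zero out everything outside the region before running the motion detection
roi_frame = cv2.bitwise_and(frame, frame, mask=mask)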
I think what you are looking for is pedestrian detection. You can do this easily in Python with the OpenCV package.
import cv2

frame = cv2.imread("street.jpg")  # hypothetical input frame

# Initialize a HOG descriptor
hog = cv2.HOGDescriptor()
# Set it for pedestrian detection
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
# Then use the detector; it returns bounding boxes and confidence weights
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), padding=(16, 16), scale=1.05)
Example: Pedestrian Detection OpenCV
Related
I'm currently working on my first assignment in image processing (using OpenCV in Python). My assignment is to calculate a precise score (to tenths of a point) of one or several shooting holes in an image uploaded by a user. One of the requirements is to transform the uploaded shooting-target image to a "bird's-eye view" for further processing. For that I have decided that I need to find the center coordinates of the numbers (7 & 8) and use them as the four points of my quadrilateral.
Unfortunately, there are several limitations that need to be taken into account.
Limitations:
resolution of the processed shooting target image can vary
the image can be taken in different lighting conditions
the image processed by this part of my algorithm will always be taken under an angle (extreme angles will be automatically rejected)
the image can be slightly rotated (+/- 10 degrees)
the shooting target can be just a part of the image
the image can be only of the center black part of the target, meaning the user doesn't have to take a photo of the whole shooting target (but there always has to be the center black part on it)
this algorithm can take a maximum of 2000ms runtime
What I have tried so far:
Template matching
here I quickly realized that it was unusable since the numbers could be slightly rotated and at a different scale
Feature matching
I have tried all of the different feature matching types (SIFT, SURF, ORB...)
unfortunately, the numbers do not have a very distinctive set of features, so they matched quite a lot of false positives, though I could possibly filter these out by adding shape matching, etc.
the biggest blocker was runtime: feature matching for a single number took around 5000 ms (even after optimizations, on a MacBook Pro 2017)
Optical character recognition
I mostly tried using pytesseract library
even after thresholding the image to inverted binary (so the text of numbers 7 and 8 is black and the background white) it failed to recognize them
I also tried several ways of preprocessing the image and I played a lot with the tesseract config parameter but it didn't seem to help whatsoever
Contour detection
I easily detected all of the wanted numbers (7 & 8) as single contours, but failed to filter out all of the false positives (since the image can come in different resolutions and there are two types of targets with different number sizes, I couldn't simply threshold a contour by its width, height, or area)
after detecting the numbers as contours I wanted to extract them as ROIs and then run OCR on them (but since there were so many false positives this would take a lot of time)
I also tried filtering them by using cv2.matchShapes function on both contours and cropped template / ROI but it seemed really unreliable
Example processed images: [six example images, with high-resolution versions linked, omitted]
As of right now, I'm lost on how to proceed. I have tried everything I could think of. I would be immensely happy if any of you image-recognition experts gave me any kind of advice, or even better a usable code example, to help me solve my problem.
Thank you all in advance.
Find the black disk by adaptive binarization and contour (possibly blur to erase the inner features);
Fit an ellipse to the outline, as accurate as possible;
Find at least one edge of the square (Hough lines);
Classify the edge as one of NWSE (according to angle);
Use the ellipse and the line information to reconstruct the perspective transformation (it is a homography);
Apply the inverse homography to straighten the image and obtain the exact target center and axis;
Again by adaptive binarization, find the bullet holes (center/radius);
Rate the holes by their distance to the center, relative to the black disk radius.
If the marking scheme is variable, detect the circles (Hough circles, using the known center, or detect peaks in an oblique profile starting from the center).
If necessary, you could OCR the digits, but it seems that the score is implicitly starting at one in the outer ring.
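A minimal sketch of the first few steps (adaptive binarization, largest contour, ellipse fit, and Hough lines) might look like the following; the blur size, threshold parameters, and file name are assumptions:

import cv2
import numpy as np

image = cv2.imread("target.jpg")  # hypothetical uploaded target photo
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (9, 9), 0)  # blur to erase inner features such as rings and digits

# Adaptive binarization, then take the largest contour as the black disk
binary = cv2.adaptiveThreshold(blur, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 51, 5)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
disk = max(contours, key=cv2.contourArea)

# Fit an ellipse to the disk outline, as accurately as possible
(cx, cy), (major, minor), angle = cv2.fitEllipse(disk)

# Find straight edges of the target square with the Hough transform
edges = cv2.Canny(gray, 50, 150)
lines = cv2.HoughLines(edges, 1, np.pi / 180, 200)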
My experiment involves subjecting a substance to pressure that makes the substance eventually crack. The crack grows with time and pressure applied. I have a set-up to take a picture of the substance at fixed intervals of time.
I need to measure how fast the crack grows. How do I go about this? (I can code in Python.)
Is there a way to measure, live or from one frame to another, the speed at which the crack grows?
Google drive link to series of pictures taken - https://drive.google.com/open?id=189cv8B4rm3lhSgT6OYfI_aN0Xmqi-tYi
Kindly advise.
I tried floodFill from OpenCV as per suggestions to this question, but the returned mask is as shown:
import cv2
import numpy as np

h, w = resized.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)
seed = (int(w / 2), int(h / 2))  # image centre (unused in the call below)
floodflags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)  # assumed flags; not defined in the original snippet
# Floodfill from point (0, 0) with a per-channel tolerance of 10
num, im, mask, rect = cv2.floodFill(resized, mask, (0, 0), (255, 0, 0),
                                    (10,) * 3, (10,) * 3, floodflags)
I thought if I can get the co-ordinates of the rectangle bounding box that encloses the crack, I can track its co-ordinates across frames and measure the size of the crack and eventually the speed.
I tried thresholding as below:
th, im_th = cv2.threshold(im, 100, 255, cv2.THRESH_BINARY)
This gives:
I'm unsure if this will let me filter out the background and draw a bounding box over the crack alone. Please advise.
Thanks in advance.
Depending on how slowly the crack forms, you probably don't need a video; you'll likely wind up sampling every X frames anyway and throwing the extra frames away. What you want is enough frames to capture "incremental" changes in the crack without so many frames that processing becomes too computationally expensive.
If you can carefully control the lighting conditions in your setup, then you're in luck! This becomes a very simple problem. You can take a histogram of the pixels (openCV has handles for this, but so does PIL and numpy); you should get two families of color; one that is the color of the outside of the substance, and another that is biased by the shadow in the crack.
You can also try dramatically increasing the contrast in each image/frame in order to get a binary mask of the crack, or running an edge detector over the image. These techniques will lead to frames that are substantially easier to process than the raw footage. You can even feed these into a skeletonization process in order to generate a vector-based representation of the line, in XY image coordinates.
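A rough sketch of the binary-mask-plus-skeleton idea (the file name and threshold value are assumptions; scikit-image's skeletonize is used here, though any thinning routine would do):

import cv2
import numpy as np
from skimage.morphology import skeletonize

gray = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frame from the series
# Dark crack on a lighter substance: inverted threshold makes the crack white
_, crack_mask = cv2.threshold(gray, 100, 255, cv2.THRESH_BINARY_INV)
# Thin the crack to a one-pixel-wide line and collect its XY image coordinates
skeleton = skeletonize(crack_mask > 0)
ys, xs = np.nonzero(skeleton)
crack_pixels = np.column_stack([xs, ys])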
If you can't control the lighting, or the sample is a similar color to the crack, you'll probably need to use object detection techniques, but it's unlikely there's an existing "crack detector," so you may either need to build your own, or look for what other detectors serve as a good proxy for the color and shape of the forming crack.
I'd highly recommend trying the first option if at all possible; pixel and histogram math is far easier than other techniques.
I appreciate you are only just getting started, but you have some issues with your video. Firstly, the lighting is not the best and it is not consistent, because people are moving around in front of it and casting shadows. It also doesn't illuminate the background behind the crack well; it would be better if it was at the height of the crack and shining more into it, so that it better illuminates the background behind the crack. Secondly, you could do without the camera moving partway through the experiment!
Finally, if you want to measure things you need to calibrate, which at the very least means putting a ruler in the image - or scale lines on your background at fixed intervals. If you are doing all that you may as well make life easy for yourself and put markers of a specific colour/pattern, both different, on the top and bottom of the frame plates that are applying the load.
Then you want to do something like a flood fill, or a fill just within the confines of your material (probably by masking), to fill the crack with a different colour. It is then pretty simple to measure the length of the crack and its left-most extent, as in the sketch below.
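A sketch of that measurement step, assuming you already have a binary mask of the filled crack and a pixels-per-millimetre factor from your ruler or markers (both names are placeholders):

import numpy as np

def crack_extent_mm(crack_mask, pixels_per_mm):
    # crack_mask: binary image in which the filled crack pixels are non-zero
    ys, xs = np.nonzero(crack_mask)
    if xs.size == 0:
        return 0.0, None
    length_px = xs.max() - xs.min()            # horizontal extent of the crack in pixels
    leftmost = (xs.min(), ys[np.argmin(xs)])   # left-most point of the crack
    return length_px / pixels_per_mm, leftmost

The growth speed is then just the change in that length between two frames divided by the capture interval.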
With a proper segmentation approach you are going to have a detailed geometry of the object extracted from a single frame.
If you process multiple frames you will be able to see the geometry evolve over time. Having that, it should be easy to compare polygons to find form changes, cracks, etc.
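As one possible sketch of that comparison, assuming each frame has already been segmented into a binary mask of the substance:

import cv2

def shape_change(mask_prev, mask_curr):
    # The largest contour in each segmented frame is taken as the object outline
    prev = max(cv2.findContours(mask_prev, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0],
               key=cv2.contourArea)
    curr = max(cv2.findContours(mask_curr, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0],
               key=cv2.contourArea)
    # Lower values mean more similar shapes; a jump between frames signals a form change
    return cv2.matchShapes(prev, curr, cv2.CONTOURS_MATCH_I1, 0.0)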
I used to work with 4K video to get all required details and good accuracy. You might not need all that data, but video is still way more flexible.
Here is a complete example: https://youtu.be/g2KyfrBtTA4
Provide some examples if you want to get more detailed recommendations.
Update
Real examples are always helpful. So you can segment a crack, a substance, or both.
Basically, you need to enhance the overall quality of the input (focus, background under the substance, etc.).
As Mark Setchell showed, you might get unwanted background as part of the result shape (the right side of the crack), so it is better to make sure that will not happen or just try to analyze only the substance.
Anyway, your task doesn't seem to be complex. It might be trivial if you can improve image quality and do some simplifications to the environment (some specific background, etc).
I am reading the slides for temporal filtering in my computer vision class (page 108) and I am wondering how we can do temporal filtering for videos.
For example, they say our data is a video in XYT, where X, Y are the spatial domain and T is time.
"How could we create a filter that keeps sharp objects that move at some velocity (vx, vy) while blurring the rest?"
They more or less derive the formula for that, but I'm confused about how to apply it.
How can we do the filtering in the Fourier domain, and how should we apply that in general? Can someone please help me understand how I should code it?
In that example, they're talking about a specific known speed. For example, if you know that a car is moving left at 2 pixels per frame. It's possible to make a video that blurs everything except that car.
Here's the idea: start at frame 0 of the video. At each pixel, look one frame into the future, and 2 pixels to the left. You will be looking at the same part of the moving car. Now, imagine you take the average color value between your current pixel and the future pixel (the one that is 2 pixels left and 1 frame in the future). If your pixel is on the moving car, both pixels will be the exact same color, so taking the average has no effect. On the other hand, if it's NOT on the moving car, they'll be different colors, so taking the average will have the effect of blurring between them.
Thus, the pixels of the car will be unchanged, but the rest of the video will get a blur. Repeat for each frame. You can also include more frames in your filter; e.g. you could look 2 frames in the future and 4 pixels left, or 1 frame in the past and 2 pixels right.
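A toy sketch of that idea, assuming the video is already loaded as a (T, H, W) grayscale array and the object moves vx pixels per frame horizontally and vy vertically (note that np.roll wraps around at the borders, which is fine for illustration):

import numpy as np

def velocity_tuned_blur(frames, vx, vy):
    # frames: array of shape (T, H, W); objects moving at (vx, vy) pixels/frame stay sharp
    out = np.empty_like(frames, dtype=np.float32)
    for t in range(len(frames) - 1):
        # Shift the next frame back by the expected motion so the moving object lines up,
        # then average it with the current frame
        shifted = np.roll(frames[t + 1], shift=(-vy, -vx), axis=(0, 1))
        out[t] = (frames[t].astype(np.float32) + shifted.astype(np.float32)) / 2.0
    out[-1] = frames[-1]
    return out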
Note: this was a teaching example; I don't think there are many real computer vision applications for this (at least, not as a standalone technique), because it's so fragile. If the car speeds up or slows down slightly, it gets blurred.
I am building a system which detects coins that are picked up from a tray. This tray will be kept in a public place. People will pick up one or more coins, but would be expected to keep them back after some time.
I would have a live stream through a webcam placed at the top. I will have a calibration step, say at the beginning of the day, that captures the initial state of the tray to be used for comparing with the live feed. A few slots might be empty to begin with, as you can see in the sample image.
I need to detect slots that had a coin initially, but are missing the same at any given point of time during the day.
I am trying out a few approaches using OpenCV:
SSIM difference: I can use SSIM to find the difference between my live image frame and the initial state (a per-slot sketch of this appears after this list). However, a number of slots are larger than the corresponding coin sizes (e.g. the top two rows). This means that if a coin was originally placed at the center, but was later put back touching one of the edges, we may get a false positive.
Blob detection: Alternatively, I can pre-feed (or detect) slot co-ordinates. Then do a blob detection within every slot. If a blob was present in the original state, but is missing in a camera frame, this would mean a coin has been picked up. However, accurate blob detection could be a challenge if the contrast between the coin and the tray is low.
I might also need to watch out for slight variations in lighting due to shadows of people moving around.
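For the SSIM approach mentioned above, a minimal per-slot comparison with scikit-image might look like this; the slot rectangle and the similarity threshold are assumptions:

import cv2
from skimage.metrics import structural_similarity

def slot_changed(initial_img, live_img, slot_rect, ssim_thresh=0.8):
    x, y, w, h = slot_rect  # pre-fed slot coordinates (assumed known)
    ref = cv2.cvtColor(initial_img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    cur = cv2.cvtColor(live_img[y:y + h, x:x + w], cv2.COLOR_BGR2GRAY)
    score = structural_similarity(ref, cur)
    # Low similarity suggests the slot's contents changed (coin picked up or moved)
    return score < ssim_thresh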
Any thoughts on these or any pointers on alternate approaches that can be tried out? Is there any analogous implementation that I can learn from?
Many thanks in advance.
Edit: Thanks to @I.Newton's suggestion. For those who stumble upon this question and would benefit from a sample implementation, look here: https://github.com/kewats/computer-vision-samples/tree/master/image-processing/missing-coins-detection
If you have complete control over the lighting conditions, you can use simple color thresholding to solve the problem.
First make a mask for the boxes. You can do it in multiple ways: by color threshold, adaptive threshold, Canny edges, etc. I did it by color threshold.
Then make a mask for the coins by the same method.
Now flood fill your box mask from the center of each of these coins. It'll retain only those boxes which do not have coins.
Now you can compare this with your initial mask to figure out if all the coins are present.
This does not involve frame subtraction, so you need not worry about a coin sitting in a different position within its box. The only thing you need to make sure of is the lighting conditions for making the masks. If you want to make sure the coins are returned to the same box, you should go for template matching etc., which again needs more effort.
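A rough sketch of that flood-fill idea (the HSV ranges and file name are placeholders to tune for your actual tray and coins):

import cv2
import numpy as np

frame = cv2.imread("tray.jpg")  # hypothetical webcam frame
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
# Colour-threshold masks; both ranges are placeholders
box_mask = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))     # bright tray slots
coin_mask = cv2.inRange(hsv, (10, 80, 80), (30, 255, 255))   # coins

# Flood-fill from the centre of every detected coin, so every slot that still
# holds a coin gets marked with a different value (128)
filled = cv2.bitwise_or(box_mask, coin_mask)
ff_mask = np.zeros((filled.shape[0] + 2, filled.shape[1] + 2), np.uint8)
coin_contours, _ = cv2.findContours(coin_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in coin_contours:
    m = cv2.moments(c)
    if m["m00"] > 0:
        cv2.floodFill(filled, ff_mask, (int(m["m10"] / m["m00"]), int(m["m01"] / m["m00"])), 128)

# Slots still at 255 were never reached from a coin centre; comparing them with
# the calibration mask tells you which coins are missing
empty_slots = cv2.inRange(filled, 255, 255)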
Okay, I know what might initially come to people's minds is that I should use OpenCV to solve my problem, except that unfortunately I can't implement it in this case (I'm using a Raspberry Pi + the camera module). I have a motion detection script, but the problem is that throughout the day shadows are cast, or sometimes the camera adjusts its sensitivity, which the algorithm detects as movement. I'm trying to figure out a way to get around this problem. Currently I take an image A, calculate the difference between that and image B, and compute the entropy of those images. I do this for ten images and average them to create a moving average, so that I can compare the current image to this average entropy; if it's outside of 1 standard deviation I take this to be indicative of motion.
I should add, since I don't really understand OpenCV: is there potentially a way to have it compare two images and detect motion from that? In that case, since I'm already generating images as the camera takes photos, I could find a way to feed them to OpenCV using Python and maybe get around my sunlight exposure issue.
My current plan of attack is to try to average, say, the last ten frames into one and then compute the difference; is there a good way to do this in Python (a sketch follows below)? Also, say a dog is found in image 10: does that mean that for the 10 following images, even if the dog is gone by image 11, motion will still be detected, because the average of those ten frames will contain the dog until it is removed from the queue?
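As a sketch of that plan, assuming grayscale frames arrive as NumPy arrays from the camera script, a fixed-length deque keeps only the last ten frames; the dog's contribution to the average therefore fades out gradually as its frames leave the queue:

from collections import deque
import numpy as np

window = deque(maxlen=10)   # only the 10 most recent frames are kept

def motion_score(new_frame, diff_thresh=25):
    # new_frame: grayscale frame as a uint8 NumPy array
    window.append(new_frame.astype(np.float32))
    background = np.mean(window, axis=0)                 # average of the last <= 10 frames
    diff = np.abs(new_frame.astype(np.float32) - background)
    # Fraction of pixels that differ noticeably from the running average
    return float(np.count_nonzero(diff > diff_thresh)) / diff.size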