cv2 CascadeClassifier parameters - python

Can someone give me an example of a fully parameterized classifier? I'm talking about the parameters; I just don't understand this example:
cv2.CascadeClassifier.detectMultiScale(image, rejectLevels, levelWeights[, scaleFactor[, minNeighbors[, flags[, minSize[, maxSize[, outputRejectLevels]]]]]]) → objects
I am detecting my face, but I need to set its minimum and maximum size. It looks like you have to set rejectLevels, levelWeights, etc. to do that.
I'm using the cv2 module.

For this problem, first create a collection file with bounding boxes for your positive images, along with a list of negative images. Then generate OpenCV samples and train your cascade. Once that is done, you can use the following code to detect faces:
# load the trained cascade file
cascade = cv2.CascadeClassifier("cascade.xml")
# detect objects; returns a list of (x, y, w, h) rectangles
rects = cascade.detectMultiScale(img)
Then you can iterate over your rect list.
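To answer the size question directly: in the Python binding you can pass minSize and maxSize as keyword arguments; rejectLevels and levelWeights belong to a different overload (they are output values, not settings) and are not needed here. A minimal sketch, with arbitrary size values and a placeholder image path:

import cv2

cascade = cv2.CascadeClassifier("cascade.xml")
img = cv2.imread("face.jpg")                       # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
rects = cascade.detectMultiScale(
    gray,
    scaleFactor=1.1,      # image pyramid step between scales
    minNeighbors=5,       # neighbors a candidate needs to be kept
    minSize=(30, 30),     # smallest face to accept, in pixels
    maxSize=(300, 300),   # largest face to accept, in pixels
)
for (x, y, w, h) in rects:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)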

Related

How can I make the annotation follow the rotation or random crop of the photo in Python?

I want to make datasets of my images for an object detection model.
However, I want to use random crops and rotations to create more data, and I am worried about whether the annotations can be derived from the original images' annotations.
What I mean is that I want the annotation to change automatically when I apply random crop and rotation.
If you have any ideas, please tell me.
Thank you
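One way to get annotations updated automatically is to let an augmentation library transform the boxes together with the image. A minimal sketch, assuming the albumentations package and pascal_voc-format [x_min, y_min, x_max, y_max] boxes (both assumptions, not from the original post):

import albumentations as A
import numpy as np

# dummy 480x640 image and one box, just to make the sketch runnable
image = np.zeros((480, 640, 3), dtype=np.uint8)
bboxes = [(100, 120, 300, 340)]   # pascal_voc: x_min, y_min, x_max, y_max
labels = ["object"]

transform = A.Compose(
    [A.RandomCrop(width=320, height=320), A.Rotate(limit=30)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)
out = transform(image=image, bboxes=bboxes, class_labels=labels)
# out["bboxes"] holds the boxes moved (or dropped) to match the new image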

Occlusion handling in object tracking

I am implementing a motion-based object tracking program which uses background subtraction, a Kalman filter, and the Hungarian algorithm. Everything works fine except for occlusions. When two objects are close enough to each other, the background subtraction recognizes them as one of the two objects. After they split, the program recognizes the two objects correctly again. I am looking for a solution/algorithm which will detect occlusion like the one shown in point c) of the example below.
I will appreciate any references or code examples referring to the occlusion detection problem when using background subtraction.
Object detection using a machine learning algorithm should reliably distinguish between these objects, even with significant occlusion. You haven't shared anything about your environment, so I don't know what constraints you have, but here is how I would tackle your problem with an ML approach.
import cv2
from sort import *  # SORT tracker (https://github.com/abewley/sort)

tracker = Sort()  # create an instance of the SORT tracker
cap = cv2.VideoCapture(0)
while True:
    ret, frame = cap.read()
    # Not sure what your environment is, but get your objects'
    # bounding boxes here; `detector` is pseudocode.
    detected_objects = detector.detect(frame)  # rows of [x1, y1, x2, y2, score]
    # Get tracking IDs for the boxes; SORT returns rows of [x1, y1, x2, y2, id]
    detected_objects_with_ids = tracker.update(detected_objects)
    ...
The above example uses SORT, which combines a Kalman filter with the Hungarian algorithm and can track multiple objects in real time.
Again, I'm not sure about your environment, but you can find pre-built object detection models on the TensorFlow site.

object detection: is object in the photo, python

I am trying to detect plants in photos. I've already labeled the photos containing plants (with labelImg), but I don't understand how to train the model with background-only photos, so that when there is no plant the model can tell me so.
Do I need to set the labeled box to the size of the whole image?
p.s. new to ml so don't be rude, please)
I recently had a problem where all my training images were zoomed in on the object. This meant that the training images all had very little background information. Since object detection models use space outside bounding boxes as negative examples of these objects, this meant that the model had no background knowledge. So the model knew what objects were, but didn't know what they were not.
So I disagree with #Rika, since sometimes background images are useful. In my case, introducing background images helped.
As I already said, object detection models use unlabeled space in an image as negative examples of a given object. So you have to save annotation files without bounding boxes for your background images. In labelImg, you can use Verify Image to save an annotation file for an image without boxes. The file then says the image should be included in training but contains no bounding box information, and the model uses the image as a negative example.
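For illustration, this is roughly what writing such a boxless annotation looks like; the Pascal VOC format, file names, and sizes are my assumptions, and labelImg's Verify Image produces the equivalent file for you:

from xml.etree import ElementTree as ET

ann = ET.Element("annotation")
ET.SubElement(ann, "filename").text = "background_001.jpg"
size = ET.SubElement(ann, "size")
for tag, val in (("width", "640"), ("height", "480"), ("depth", "3")):
    ET.SubElement(size, tag).text = val
# deliberately no <object> elements: the whole image is a negative example
ET.ElementTree(ann).write("background_001.xml")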
In your case, you don't need to do anything special in that regard. Just take the detection data you created and train your network with it. At test time you usually set a confidence threshold on the bounding boxes, because you may get lots of them and you only want the ones with the highest confidence.
Then you keep/show the boxes with the highest confidence, and there you go: you have your detection result and can do whatever you want with it, like cropping the detections using the bounding box coordinates you get.
If there are no plants, your network will likely produce only boxes with confidence below your threshold, and you just ignore them.
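A minimal sketch of that post-processing step; the detection format (a list of (box, score) pairs) and the 0.5 threshold are assumptions:

CONF_THRESHOLD = 0.5
# hypothetical raw output: ((x, y, w, h), confidence) pairs
detections = [((10, 20, 50, 80), 0.91), ((200, 40, 30, 60), 0.12)]
kept = [(box, score) for box, score in detections if score >= CONF_THRESHOLD]
if not kept:
    print("no plants in this photo")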

Random crop and bounding boxes in tensorflow

I want to add data augmentation for the WiderFace dataset, and I would like to know how to randomly crop an image and keep only the bounding boxes of faces whose centers are inside the crop, using TensorFlow.
I have already tried to implement a solution, but I use TFRecords and the TfExampleDecoder, and the shape of the input image is set to [None, None, 3] during the process, so there is no way to get the shape of the image and do it myself.
You can get the shape, but only at runtime: the shape is defined when you call sess.run and actually pass in the data.
So do the random crop manually in TensorFlow; basically, you want to reimplement tf.random_crop so that you can apply the same manipulations to the bounding boxes.
First, to get the shape, x = tf.shape(your_tensor)[0] will give you the first dimension. Unlike the static your_tensor.shape[0], which stays None here, this is a tensor that resolves to the actual value when you call sess.run. Now you can compute some random crop parameters using tf.random_uniform or whatever method you like. Lastly, you perform the crop with tf.slice.
If you want to choose whether to perform the crop or not you can use tf.cond.
With those components, you should be able to implement what you want using only TensorFlow constructs. Try it out, and if you get stuck along the way, post the code and the error you run into.
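A rough sketch of those pieces put together, written against the TF 1.x graph-mode API; the function name, the absolute-pixel [y_min, x_min, y_max, x_max] float32 box format, and the assumption that the image is at least crop-sized are mine, not from the question:

import tensorflow as tf  # TF 1.x style (tf.random_uniform, sess.run)

def random_crop_keep_centers(image, boxes, crop_h, crop_w):
    # dynamic shape: resolves to real values at run time even if the
    # static shape is [None, None, 3]
    shape = tf.shape(image)
    off_y = tf.random_uniform([], 0, shape[0] - crop_h + 1, dtype=tf.int32)
    off_x = tf.random_uniform([], 0, shape[1] - crop_w + 1, dtype=tf.int32)
    crop = tf.slice(image, [off_y, off_x, 0], [crop_h, crop_w, 3])

    # box centers, shifted into crop coordinates
    cy = (boxes[:, 0] + boxes[:, 2]) / 2.0 - tf.cast(off_y, tf.float32)
    cx = (boxes[:, 1] + boxes[:, 3]) / 2.0 - tf.cast(off_x, tf.float32)
    inside = tf.logical_and(
        tf.logical_and(cy >= 0.0, cy < float(crop_h)),
        tf.logical_and(cx >= 0.0, cx < float(crop_w)))
    kept = tf.boolean_mask(boxes, inside)

    # move surviving boxes into crop coordinates and clip to its edges
    offset = tf.cast(tf.stack([off_y, off_x, off_y, off_x]), tf.float32)
    limits = tf.constant([crop_h, crop_w, crop_h, crop_w], tf.float32)
    kept = tf.minimum(tf.maximum(kept - offset, 0.0), limits)
    return crop, kept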

Detect blob in very noisy image

I am working with a stack of noisy images, trying to isolate a blob in an image section. Below you can see the starting image, loaded and plotted with python, and the same image after some editing with Gimp.
What I want to do is to isolate and recognise the blob inside the light blue circle, editing the image and then using something like ndimage.label. Do you have any suggestion on how to edit the image? Thanks.
The background looks quite even, so you should be able to isolate the main object using thresholding, which lets you use array masking to identify regions within the main object. I would have a go with some of the tools from scikit-image and see where that gets you: http://scikit-image.org/docs/dev/auto_examples/
I would try Gaussian/median filtering followed by thresholding and gap filling. You could also try random walker segmentation, or perhaps texture classification would be more useful. Once you have a list of smaller objects within the main object, you can filter them by shape, size, roundness, etc.:
http://scikit-image.org/docs/dev/auto_examples/plot_label.html#example-plot-label-py
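A minimal sketch of that filter, threshold, and label route; the file name, filter size, and 200-pixel minimum blob size are placeholder assumptions:

import numpy as np
from scipy import ndimage
from skimage import filters

image = np.load("image.npy").astype(float)       # placeholder input
smoothed = ndimage.median_filter(image, size=5)  # suppress speckle noise
thresh = filters.threshold_otsu(smoothed)        # global Otsu threshold
mask = smoothed > thresh
labels, n = ndimage.label(mask)                  # connected components
# keep only components of at least 200 pixels (arbitrary cutoff)
sizes = ndimage.sum(mask, labels, range(1, n + 1))
keep = np.nonzero(sizes >= 200)[0] + 1
blob_mask = np.isin(labels, keep)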
