I am working on a program to crop an image around a rectangle in OpenCV. How could I go about doing this? I also need it to be able to turn multiple rectangles into cropped images.
I've tried using this tutorial: https://www.pyimagesearch.com/2016/02/08/opencv-shape-detection/, but I don't know how to get the borders of the shape and crop around it.
I hope to get an output of multiple images, each containing the contents of one rectangle.
Thank you in advance
I have just recently done this for one of my projects, and it worked perfectly.
Here is the technique I use to implement this in Python OpenCV:
Display the image using OpenCV's cv2.imshow() function.
Select 2 points (x, y) on the image. This can be done by capturing mouse click events with OpenCV: press the left mouse button where the first point is, drag towards the second point while holding the button down, and release it once the cursor is over the correct point. This selects the 2 points for you. In OpenCV, you can do this with cv2.EVENT_LBUTTONDOWN and cv2.EVENT_LBUTTONUP. You can write a function that records the two points from these mouse events and pass it to cv2.setMouseCallback().
Once you have your 2 coordinates, you can draw a rectangle using OpenCV's cv2.rectangle() function, where you can pass the image, the 2 points and additional parameters such as the colour of the rectangle to draw.
Once you're happy with those results, you can crop the selection using something like this:
image = cv2.imread("path_to_image")
cv2.namedWindow("image")                               # the callback needs an existing window
cv2.setMouseCallback("image", your_callback_function)  # the callback fills the points list
cv2.imshow("image", image)
cv2.waitKey(0)                                         # drag the selection, then press any key
# points[0] is where the button was pressed, points[1] where it was released
cropped_img = image[points[0][1]:points[1][1], points[0][0]:points[1][0]]
cv2.imshow("Cropped Image", cropped_img)
cv2.waitKey(0)
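For completeness, here is a minimal sketch of what the callback itself might look like; your_callback_function and the points list are names used for illustration, not part of the OpenCV API:
import cv2

points = []  # filled by the callback: [(x1, y1), (x2, y2)]

def your_callback_function(event, x, y, flags, param):
    # Hypothetical callback: record where the left button is pressed and released.
    if event == cv2.EVENT_LBUTTONDOWN:
        points[:] = [(x, y)]                 # first corner of the selection
    elif event == cv2.EVENT_LBUTTONUP:
        points.append((x, y))                # opposite corner
        clone = image.copy()                 # `image` from the snippet above
        cv2.rectangle(clone, points[0], points[1], (0, 255, 0), 2)
        cv2.imshow("image", clone)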
Here is one of the results I get on one of my images.
Before (original image):
Region of interest selected with a rectangle drawn around it:
After (cropped image):
I started by following this excellent tutorial on how to implement it before improving it further on my own, so you can get started here: Capturing mouse click events with Python and OpenCV. You should also read the comments at the bottom of that tutorial for simple ways to improve the code.
You can get the coordinates of the box using the cv2.boundingRect() function. Then use a slicing operation to extract the required part of the image.
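A minimal sketch of that idea, assuming the rectangles have already been found as contours (for example with cv2.findContours); the file names and threshold value are placeholders:
import cv2

image = cv2.imread("shapes.png")                      # placeholder file name
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
_, thresh = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)   # tune for your image
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x

for i, cnt in enumerate(contours):
    x, y, w, h = cv2.boundingRect(cnt)                # bounding box of this contour
    crop = image[y:y + h, x:x + w]                    # slice out that region
    cv2.imwrite("crop_{}.png".format(i), crop)        # one output image per rectangle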
I have a FOV camera that covers approximately 195*130 degrees. The lens will be put in a circular holder, and the lens should not see the holder. Here's an image of what I do not want.
I drew 4 rectangles in Paint. There are 4 black spots, which are the holder. The solid red area is only there for censorship; it is not actually in the image.
If the camera image streams like that, that's a no. I need to detect those black spots, and if the image looks like this it should give me an error message or simply 'false'. I searched Google and couldn't find this. I'm a noob on this subject, but if you explain how to do it I can connect the dots.
Thank you for your help.
I get the stream via a USB capture card. It acts like a webcam.
#UPDATE1: I cropped the four corners of the image and then thresholded them. With some basic if/else logic I got what I wanted. Thank you anyway.
Try detection by generating custom Haar filters.
Or do it simply by applying a threshold (nearly black) and checking whether some tiny squares in the corners are completely black.
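A minimal sketch of that corner-threshold idea, in line with the update above; the patch size, darkness threshold, and camera index are assumptions to tune for the actual setup:
import cv2
import numpy as np

def corners_are_blocked(frame, corner=40, dark_thresh=30, dark_ratio=0.9):
    # Return True if any of the 4 corner patches is almost completely dark (nearly black).
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    h, w = gray.shape
    patches = [gray[:corner, :corner], gray[:corner, w - corner:],
               gray[h - corner:, :corner], gray[h - corner:, w - corner:]]
    for p in patches:
        if np.count_nonzero(p < dark_thresh) >= dark_ratio * p.size:
            return True        # this corner is covered by the holder
    return False

cap = cv2.VideoCapture(0)       # the USB capture card shows up like a webcam
ok, frame = cap.read()
if ok and corners_are_blocked(frame):
    print("Error: lens holder is visible")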
I am developing an app for Android and I have this issue: after I have taken a picture with the camera, I need to crop it. But the coordinates of the rectangle I have dragged over the capturing area start from the screen's 0,0 coordinates, i.e. the touch coordinates do not match the actual picture's. If I try to crop the image with these using PIL, I get a partial result.
One possible solution would be to take a partial screenshot with these coordinates and get the cropped picture that way.
I tried to use pyscreenshot, but then I found out that it does not work on Android.
Any ideas how to capture a partial screenshot in Kivy?
Thank you
For anyone with the same issue: I did not find any good way to grab a screenshot and crop a selected portion out of it. I did, however, solve it this way: I move to a new screen, resize the picture to the device's screen resolution, and from then on I can crop it with touch without issues, because I no longer have to find a clever way to map screen coordinates to the image's. After cropping and saving I resize the image back to a fixed resolution. This is not an elegant solution, but it will serve my purpose.
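For illustration, a rough sketch of that resize-crop-resize flow with PIL; the file names, the fixed resolution, and the example touch box are assumptions, not code from the answer above:
from kivy.core.window import Window
from PIL import Image

FIXED_RES = (1280, 720)                      # resolution to restore afterwards (assumption)

img = Image.open("photo.jpg")                # picture taken by the camera (placeholder)

# 1. Resize the picture to the screen resolution, so touch coordinates
#    and image coordinates mean the same thing.
img_screen = img.resize((int(Window.size[0]), int(Window.size[1])))

# 2. Crop directly with the dragged rectangle: (left, top, right, bottom) in pixels.
touch_box = (100, 150, 600, 500)             # example values from the drag gesture
cropped = img_screen.crop(touch_box)

# 3. Resize back to a fixed resolution and save.
cropped.resize(FIXED_RES).save("cropped.jpg")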
Thank you to anyone who thought along with me.
My project is a REM sleep detector, and the provided pictures show the contour of my eyelid. As my eye looks in different directions, this contour moves in a distinct way. For lack of a better solution, my first attempt is to draw a grid of ROIs on my video stream; with that in place I want to use the countNonZero function or blob detection on the ROIs. Depending on which ROIs in the grid change value, movement and direction are detected. (I am sure there is a better way.)
Problem: I cannot specify one or several ROIs of my choice; the function always works only on the entire image. How do I retrieve values from each ROI specifically? The ROIs are set up by means of multiple rectangle functions. The code is in Python. Any help is greatly appreciated.
Contour of eyelid:
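A minimal sketch of the grid-of-ROIs idea described in the question, assuming a thresholded single-channel frame; cv2.countNonZero only sees what you pass it, so the trick is to pass a NumPy slice of each cell rather than the whole image. The grid size and file name are arbitrary examples:
import cv2

ROWS, COLS = 4, 6                                              # arbitrary grid size
frame = cv2.imread("eyelid_mask.png", cv2.IMREAD_GRAYSCALE)    # placeholder input
h, w = frame.shape
cell_h, cell_w = h // ROWS, w // COLS

counts = {}
for r in range(ROWS):
    for c in range(COLS):
        roi = frame[r * cell_h:(r + 1) * cell_h, c * cell_w:(c + 1) * cell_w]
        counts[(r, c)] = cv2.countNonZero(roi)                 # value for this cell only

# Comparing counts between frames shows which cells changed, i.e. where the contour moved.
print(counts)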
I'm trying to write a script that views an image, looks at lines on the image, and creates bounding boxes around the lines. Here is what I'm talking about...
I have this image:
I'm trying to have a way of intelligently cropping each section using a script. The best idea I came up with was to have colored tape around each of the sections like this:
Given this image with the colored tape, the program should be able to find the colored lines and identify where they intersect. Here is a visual of what the program should be able to locate (black lines are where the tape is, red dots are the intersection points):
The end game here is for the program to be able to use this data to
Know how many sections there are (in this case 9)
Know where the sections are and create a bounding box around each one
Visually something like this:
OpenCV has facial detection and feature detection so something like this with a static image should be fairly possible. What is the best method to accomplish this?
There are many ways to do what you want.
One is to use sift:
https://docs.opencv.org/3.3.0/da/df5/tutorial_py_sift_intro.html
You will need to use keypoint detection, something like:
sift = cv2.SIFT_create()      # in older OpenCV builds this was cv2.xfeatures2d.SIFT_create()
kp = sift.detect(img, None)
You can check if the points are right with:
img2 = cv2.drawKeypoints(img, kp, None)
Then you will need to use cv2.boundingRect:
pts = cv2.KeyPoint_convert(kp)    # boundingRect needs plain points, not KeyPoint objects
box = cv2.boundingRect(pts)       # (x, y, w, h)
If your marker is a different color than the rest of the image, you only need to make a color filter to find the points.
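A minimal sketch of that color-filter idea with cv2.inRange, assuming blue tape; the HSV range and the file name are placeholders to tune for the actual tape color:
import cv2
import numpy as np

image = cv2.imread("taped_sections.jpg")              # placeholder file name
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)

# Keep only pixels whose hue falls in the (assumed) blue range of the tape.
lower = np.array([100, 80, 80])
upper = np.array([130, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Each blob of tape color becomes a contour whose bounding box you can draw or crop.
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)  # OpenCV 4.x
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imshow("tape mask", mask)
cv2.waitKey(0)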
When we use an image processing library to rotate an image, the rotated image will always contain some black areas. For example, I use the following Python code to rotate an image:
from scipy import misc      # note: imread/imrotate/imsave were removed in newer SciPy releases

img = misc.imread('test.jpg')
img = misc.imrotate(img, 15)        # rotate by 15 degrees
misc.imsave('rotated.jpg', img)     # the rotated image must be passed to imsave
The image is as follows:
My question is: how can I rotate an image without producing black areas? I believe there exists some interpolation method to compensate for the missing area, which would make the image look more natural.
It would be appreciated if anyone can provide Python code to achieve this.
If you want to 'clone' or 'heal' the missing areas based on some part of the background, that's a complex problem, usually done with user intervention (in tools like Photoshop or GIMP).
Alternatives would be to fill the background with a calculated average colour - or just leave the original image. Neither will look 'natural' though.
The only approach that will work for all images will be to crop the rotated image to the largest rectangle within the rotated area. That will achieve your objective of having no black areas and looking natural, but at the cost of reducing the image size.
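For reference, a sketch of that crop-to-largest-rectangle approach; the closed-form size of the biggest axis-aligned rectangle inside a rotated w x h frame is standard geometry, and OpenCV is used for the rotation here as an assumption (the question used scipy.misc):
import math
import cv2

def largest_inner_rect(w, h, angle_deg):
    # Width/height of the largest axis-aligned rectangle that fits inside a
    # w x h image rotated by angle_deg, so no black corners remain after cropping.
    a = math.radians(angle_deg)
    sin_a, cos_a = abs(math.sin(a)), abs(math.cos(a))
    long_side, short_side = max(w, h), min(w, h)
    if short_side <= 2.0 * sin_a * cos_a * long_side or abs(sin_a - cos_a) < 1e-10:
        # Thin or 45-degree case: the crop touches only the two longer sides.
        x = 0.5 * short_side
        wr, hr = (x / sin_a, x / cos_a) if w >= h else (x / cos_a, x / sin_a)
    else:
        # General case: the crop touches all four rotated sides.
        cos_2a = cos_a * cos_a - sin_a * sin_a
        wr = (w * cos_a - h * sin_a) / cos_2a
        hr = (h * cos_a - w * sin_a) / cos_2a
    return int(wr), int(hr)

img = cv2.imread('test.jpg')
h, w = img.shape[:2]
angle = 15
M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)   # rotate about the centre
rotated = cv2.warpAffine(img, M, (w, h))

cw, ch = largest_inner_rect(w, h, angle)
cx, cy = w // 2, h // 2
crop = rotated[cy - ch // 2:cy + ch // 2, cx - cw // 2:cx + cw // 2]
cv2.imwrite('rotated_cropped.jpg', crop)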
Isn't there a simple flood-fill function in your "some image library"? Simply do that at all 4 corner pixels and then make them white or so.
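For what it's worth, a minimal sketch of that flood-fill idea with OpenCV (the file name and the loDiff/upDiff tolerances are assumptions); it fills the near-black corner regions with white:
import cv2
import numpy as np

img = cv2.imread('rotated.jpg')
h, w = img.shape[:2]
mask = np.zeros((h + 2, w + 2), np.uint8)    # floodFill requires a mask 2 px larger than the image

for seed in [(0, 0), (w - 1, 0), (0, h - 1), (w - 1, h - 1)]:
    # Fill from each corner; the small tolerances keep the fill inside the black triangle.
    cv2.floodFill(img, mask, seed, (255, 255, 255), loDiff=(5, 5, 5), upDiff=(5, 5, 5))

cv2.imwrite('rotated_white_corners.jpg', img)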