OpenCV Python: retrieve rectangle main area from image then crop image - python

I'm trying to isolate the main area art from Pokemon cards and crop them. For example,
I want to extract the main area, as seen in the bounding box below.
I want to use OpenCV in Python for the task. I've tried shape detection and corner detection, but I can't seem to make them work as intended; they pick up anything but the main area that I want to extract.
I can't hard-code the bounding box because I want to process many cards and the position of the main area is different per card.
What are the steps needed to extract the main area and save a png file of just the main area?

If the area of interest is always contained inside a somewhat contrasted, quasi-rectangular frame, you can try your luck with a Hough transform: keep the long horizontal and vertical edges (obtained separately) and try to reconstitute that frame.
To begin, process a small number of cards, observe the Hough results, and figure out which systematic rules you could use to select the right segments by length, position, alignment, or embedding in a larger frame...

Related

Creating angled grid with OpenCV in python

So, I need to do a very specific thing. I need to break an image into an angled grid. I need to be able to take the average color of all the pixels in this grid and put that average into another image with the same angle grid.
I have looked and I keep coming up with literally drawing a picture of a grid onto the image, which is not what I want.

How to use opencv functions on roi

My project is a REM sleep detector, and the provided pictures show the contour of my eyelid. As my eye looks in different directions, this contour moves in a distinct way. For lack of a better solution, my first attempt is to draw a grid of ROIs on my video stream; with that in place, I want to use the countNonZero function or blob detection on the ROIs. Depending on which ROIs in the grid change values, movement and direction are detected. (I am sure there is a better way.)
Problem: I cannot specify one or several ROIs of my choice; the functions always work on the entire image. How do I retrieve values from each ROI specifically? The ROIs are set up by means of multiple rectangle functions. The code is in Python. Any help greatly appreciated.
Contour of eyelid:

Recognizing Image Target Lines

I'm trying to write a script that views an image, looks at lines on the image, and creates bounding boxes around the lines. Here is what I'm talking about...
I have this image:
I'm trying to have a way of intelligently cropping each section using a script. The best idea I came up with was to have colored tape around each of the sections like this:
Given this image with the colored tape the program should be able to find the colored lines and identify where they intersect. Here is a visual of what the program should be able to locate: (Black lines are where the tape is, red dots are the intersecting positions)
The end game here is for the program to be able to use this data to
Know how many sections there are (in this case 9)
Know where the sections are and create a bounding box around each one
Visually something like this:
OpenCV has facial detection and feature detection so something like this with a static image should be fairly possible. What is the best method to accomplish this?
There are many ways to do what you want.
One is to use SIFT:
https://docs.opencv.org/3.3.0/da/df5/tutorial_py_sift_intro.html
You will need to use keypoint detection, something like:
sift = cv2.SIFT_create()  # cv2.SIFT() in very old OpenCV versions
kp = sift.detect(img, None)
You can check whether the points are right with:
img2 = cv2.drawKeypoints(img, kp, None)
Then you will need cv2.boundingRect; it expects an array of points, so convert the keypoints first:
pts = cv2.KeyPoint_convert(kp)
box = cv2.boundingRect(pts)
If your marker is a different color than the rest of the image, you only need to make a color filter to find the points.

OpenCV [Python] - Perspective Warp an image with more than 4 points

I'm trying to make a simple scanner program, which takes in an image of a piece of paper and create a binary image based off of that. An example of what I am trying to do is below:
However, as you can see, the program uses the 4 corners of the paper to create the image. This means that the program doesn't take into account the curvature of the paper.
Is there any way to "warp" the image with more than four points? By this, I mean: find the bounding rectangle, and where the contour line is outside the rectangle, shrink that row of pixels; where it is inside, extend it?
I feel like this should exist in some way or form, but if it doesn't, it may be time to delve into the depths of OpenCV :D
Thanks!

Using findContour and the resulting contours to approximate line segments to find line intersection

I am attempting to write a program to find the intersections of the lines outlining a rectangular object (e.g. a computer screen). I originally attempted to use Hough lines; however, due to variable lighting conditions, as well as content appearing on the screen, the lines that are drawn are not always the outline of the screen. Additionally, there may be a large number of other random objects in the frame.
My next approach was to use contours which always seem to outline both the screen as well as being able to handle the variable content encapsulated within it. How do I go about using the contours to approximate a line?
I used
print(len(contours))
and consistently got a fairly long contour (length > 200).
Feel free to comment asking for clarity.
EDIT
The green "lines" are the contours found by findContours. I am primarily interested in the contour surrounding the screen content. How can I use those contours to find a line approximating those contours and then find the point of intersection of the two lines? This is from a webcam stream so conditions, angle, and distance may not stay constant.
A first step might be to use the size of the contours to filter out those you are not interested in, since the smaller contours usually correspond to stuff on the screen.
Also, findContours can return the contours in a hierarchy of nested contours. This tells you which contour is contained within another and lets you get the outermost contour, the second outermost, etc. If you are trying to get the screen, it would have to be a large contour, possibly the second largest, nested just below the contour for the monitor.
After obtaining the potential contours, which are just lists of points, you can do a robust fit of a single rectangle to those points, either with RANSAC (using a model for rectangles, not a homography) or with a Hough transform modified for this case.
