PyAutoGUI ignores colors when locating images - python

When using pyautogui.locateAllOnScreen() to look for images on screen (a 2x2 block of black pixels), the function returns the coordinates of every 2x2 block of matching pixels on screen, completely ignoring their color.
The same thing happens with locateOnScreen(); it just returns the top-left corner of the first matching region.
Setting grayscale or confidence to various values did not help. Any advice?
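For reference, a minimal sketch of the kind of call being described; the needle file name is a placeholder, and the confidence keyword only takes effect when opencv-python is installed:

import pyautogui

# "needle.png" is a placeholder for the 2x2 black-pixel image being searched for.
# grayscale=False (the default) matches on full RGB; confidence needs opencv-python.
matches = pyautogui.locateAllOnScreen("needle.png", grayscale=False, confidence=0.999)
for box in matches:
    print(box)   # Box(left=..., top=..., width=2, height=2)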

Related

Creating angled grid with OpenCV in python

So, I need to do a very specific thing: break an image into an angled grid, take the average color of all the pixels in each grid cell, and put that average into another image with the same angled grid.
I have looked around, and I keep coming up with results about literally drawing a picture of a grid onto the image, which is not what I want.

How to use list of border pixels to obtain list of inside pixels?

I have an image that I have contoured using the active_contour method from skimage. However, I need to find the average of the pixels within the contour. The border created by the contour is multiple pixels thick and has an irregular closed shape.
I am able to obtain a list of all the border pixels. Is there a way to use this list to find the locations of all the pixels inside the border (despite it being multiple pixels thick)?
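One common approach (not from the original post) is to rasterize the border coordinates into a mask and fill the enclosed region with scipy; a sketch assuming the border forms a closed shape, with hypothetical names for the inputs:

import numpy as np
from scipy.ndimage import binary_fill_holes

# border_pixels is an iterable of (row, col) coordinates and image_shape the
# (height, width) of the source image - both hypothetical names.
def inside_pixels(border_pixels, image_shape):
    mask = np.zeros(image_shape, dtype=bool)
    rows, cols = zip(*border_pixels)
    mask[rows, cols] = True                    # mark the (thick) border
    filled = binary_fill_holes(mask)           # border + enclosed interior
    interior = filled & ~mask                  # interior only
    return np.argwhere(interior)               # (row, col) pairs inside the border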

Python - Remove black pixels originating from the border of an image

I am very new to image processing and I am trying to cleanse pictures similar to picture 1 of the black pixels originating from the border of the image.
The images are clipped characters from a PDF which I try to process with tesseract to retrieve the character. I already searched Stack Overflow for answers, but only found solutions for getting rid of black borders.
I need to overwrite all the black pixels coming in from the corners with white pixels, so tesseract can correctly recognize the character.
I cannot alter the bounding boxes used to clip the characters, since the characters are centered in different areas of the bounding box, and if I cut the bounding box down, I would cut off some characters, as seen below.
My first guess would have been to recursively track down pixels with a certain threshold of black in them, but I am worried about the computing time in that case and wouldn't really know where and how to start, except for using two two-dimensional arrays: one with the pixels, and one with an indicator of whether I have already worked on that pixel.
Help would be greatly appreciated.
Edit: some more pictures of cases where black pixels from the edge need to be cleared:
Edit: Code-Snippet to create Border Image:
import numpy
import cv2
from PIL import Image

@staticmethod
def __get_border_image(image: Image) -> Image:
    # Pad the character image with a 5 px constant (black by default) border
    data = numpy.asarray(image)
    border = cv2.copyMakeBorder(data, top=5, bottom=5, left=5, right=5, borderType=cv2.BORDER_CONSTANT)
    return Image.fromarray(border)
Try like this:
1. artificially add a 1px-wide black border all around the edge
2. flood-fill with white all black pixels, starting at the top-left corner
3. remove the 1px border from the first step (if necessary)
The point of adding the border is to allow the white to "flow" all around the edges of the image and reach any black items touching the edge.
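A minimal sketch of those three steps using OpenCV's floodFill, assuming the clipped character image is already binarized to pure black and white; the file names are placeholders:

import cv2
import numpy as np

# "character.png" is a placeholder for one of the clipped character images,
# assumed to already be binarized (black text on white).
img = cv2.imread("character.png", cv2.IMREAD_GRAYSCALE)

# 1. add a 1 px black border so every black blob touching an edge connects to the corner
padded = cv2.copyMakeBorder(img, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)

# 2. flood-fill with white from the (now black) top-left corner;
#    the mask must be 2 px larger than the image in each dimension
h, w = padded.shape
mask = np.zeros((h + 2, w + 2), np.uint8)
cv2.floodFill(padded, mask, (0, 0), 255)

# 3. remove the helper border again
cleaned = padded[1:-1, 1:-1]
cv2.imwrite("character_clean.png", cleaned)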

Get rectangular shape from very noisy image Opencv Python

I need to get rectangular shapes from a noisy color-segmented image.
The problem is that sometimes the object isn't uniformly the correct color, causing holes in the image, or a reflection of the object in the background causes noise/false positives for the color segmentation.
The object could be in any position of the image and of any unknown rectangular size, the holes can occur anywhere inside the object, and the noise could occur on any side of the object.
The only known constant is that the object is rectangular in shape.
What's the best way to filter out the noise to the left of the object and get a bounding box around the object?
Using erosion would remove the detail from the bottom of the object and would cause the size of the bounding box to be wrong.
I can't comment because of my rep, but I think you could try to analyse the colored image using other color spaces. Create an upper and a lower bound of the color you want until it selects the object, leaving you with less noise, which you can filter with erode/dilate/opening/closing.
For example, in my project I wanted to find the bounding box of a color-changing green rectangle, so I went and tried a lot of different color spaces with a lot of different upper/lower bounds until I finally got something worthwhile. Here is a nice read on what I'm talking about: Docs
You can also try filtering the object by area after dilating it (you dilate first so the closer points connect to one another, while the more distant ones, which are the noise, don't; this leaves one big rectangle plus small noise blobs, which you then filter out by requiring a large area).
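A rough sketch of that pipeline (HSV thresholding, dilation, then keeping only the large blob); the color bounds, kernel size, and file name are placeholder values that would need tuning:

import cv2
import numpy as np

img = cv2.imread("segmented_scene.png")          # placeholder file name
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Placeholder bounds for a green-ish object; tune per the advice above.
lower = np.array([40, 60, 60])
upper = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower, upper)

# Dilate so fragmented parts of the object merge into one blob.
kernel = np.ones((15, 15), np.uint8)
dilated = cv2.dilate(mask, kernel, iterations=1)

# Keep the largest contour and take its bounding box (OpenCV 4.x return values).
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
big = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(big)
cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)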
One method is to take histogram projections on both the horizontal and vertical axes, and select the intersection of the ranges that have high projections.
The projections are just totals of object pixels in each row and each column. When you are looking for only one rectangle, the values indicate the probability of the row/column belonging to the rectangle.
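A minimal sketch of the projection idea, assuming mask is the binary image produced by the color segmentation and using an arbitrary cutoff of half the peak value:

import numpy as np

col_proj = mask.sum(axis=0)   # object pixels per column
row_proj = mask.sum(axis=1)   # object pixels per row

# Keep only rows/columns whose projection is a large fraction of the peak;
# the 0.5 factor is an assumption to tune for the actual data.
cols = np.where(col_proj > 0.5 * col_proj.max())[0]
rows = np.where(row_proj > 0.5 * row_proj.max())[0]

x0, x1 = cols.min(), cols.max()
y0, y1 = rows.min(), rows.max()
bounding_box = (x0, y0, x1 - x0 + 1, y1 - y0 + 1)   # x, y, width, height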

Pygame rotate causes image to stutter

I honestly have no idea why this doesn't work. The rotation causes the image to scale up and down constantly. I have looked around and haven't found a solution to my problem.
Main http://tinypaste.com/1c5025fa
Module http://tinypaste.com/f42f9c58
Also can someone explain why this program's box abruptly stops rotating?
Etc 'http://tinypaste.com/82b3b30e' (remove the quotes, I'm not allowed to post more than 2 hyperlinks)
From what I can tell, the scaling that you're seeing is an artifact of how the rotation operation works. As a rectangle is rotated, the bounding box will necessarily be larger than the original rectangle. See, for example, the blue rectangle in the image below. R is the radius of the rectangle... so when it's rotated, the rectangle sweeps out the area covered by the red circle in the second image. The bounding box for the rotation is now the gray rectangle. pygame has to fill in both the red area and the gray area. What color does pygame use to fill in the padding area?
The pygame.transform.rotate docs say...
"Unless rotating by 90 degree increments, the image will be padded larger to hold the new size. If the image has pixel alphas, the padded area will be transparent. Otherwise pygame will pick a color that matches the Surface colorkey or the topleft pixel value."
So, the solution is to explicitly set the color key or alpha value for the image (in your case, when you construct your saved_image surface). Then, when the saved_image is rotated, the newly produced image will have the padding area filled with the appropriate color.
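A minimal sketch of that fix, together with the usual trick of re-centering the rotated surface so its growing bounding box doesn't make it look like it scales; the file name and window size are placeholders, not taken from the linked pastes:

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))      # placeholder window size

# "box.png" stands in for whatever surface Main/Module builds as saved_image.
saved_image = pygame.image.load("box.png").convert_alpha()   # keep per-pixel alpha
# For a surface without per-pixel alpha, a colorkey works too:
# saved_image.set_colorkey((0, 0, 0))

angle = 30
original_rect = saved_image.get_rect(center=(200, 200))

rotated = pygame.transform.rotate(saved_image, angle)
# Re-centre the rotated surface on the original midpoint so the padded
# bounding box doesn't make the image appear to grow and shift.
rotated_rect = rotated.get_rect(center=original_rect.center)

screen.blit(rotated, rotated_rect)
pygame.display.flip()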
Give it a go and see if that works.
