transparent effect python image library - python

I have been searching for days, but couldn't find a good answer to this.
My problem is how to draw a transparent red rectangle on top of a green rectangle.
Here is how I am doing it now:
from PIL import Image, ImageDraw

im = Image.new('RGBA', (400, 400), 'white')
draw = ImageDraw.Draw(im)
draw.rectangle((100, 100, 200, 200), fill=(0, 255, 0, 0))  # big 100x100 green rectangle
draw.rectangle((80, 80, 130, 130), fill=(255, 0, 0, 0))    # small 50x50 red rectangle
im.show()
What I get now is that the red rectangle totally covers the overlapping part of the green one, but I want the overlap to be transparent, so that I can see the green rectangle underneath the red one and the overlapping part becomes another color.
Any help would be appreciated!

There are several good answers in another thread.
They include an explanation that if you are going to use the 4th argument, the draw object needs to be created in 'RGBA' mode and the base image must be in 'RGB' mode for this to work.

If you just draw a rectangle, the 4th argument of fill is an alpha value, where 0 is fully transparent and 255 is fully opaque.
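For example, something along these lines should give a blended overlap (a minimal sketch; the colors and the 128 alpha are just illustrative):

from PIL import Image, ImageDraw

# base image in 'RGB' mode, draw object in 'RGBA' mode so the alpha is blended
im = Image.new('RGB', (400, 400), 'white')
draw = ImageDraw.Draw(im, 'RGBA')

draw.rectangle((100, 100, 200, 200), fill=(0, 255, 0, 255))  # opaque green
draw.rectangle((80, 80, 130, 130), fill=(255, 0, 0, 128))    # ~50% transparent red

im.show()

With the 128 alpha, the overlapping area comes out as a mix of red and green instead of the red completely covering the green.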

Related

How to identify the unfinished rectangle by image processing?

I have a color image.
After several preprocessing steps I am able to get the following image.
However, as you can see, the door portion is not complete: only 3 lines are visible in the post-processed image. The 4th boundary line is missing because that part of the colored area was missing in the original photo.
Now I can identify the two windows, but how do I identify the door?
Is there any way to complete the unfinished part of the door as a rectangle?
The yellow-ticked door also needs to be identified.
You want an image processing algorithm that, given 3 sides of a rectangle, will know to "close" the fourth one.
Suppose we give you such an algorithm: how do you expect it to differentiate between the green rectangle (the door you want to detect) and the red rectangle (which you do not want)?
I believe you should somehow take a step back from your processed image and identify the car blocking the view of the door.
So it boils down to identifying the car hood. Maybe use the high-gradient color change in this area (the color changes quickly from dark grey [door] to light grey [hood], indicating the last curve to be added to the three straight lines of the door).
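As a very rough sketch of that gradient idea (the file name, the region of interest and the kernel size are placeholders that would need tuning for the actual photo):

import cv2
import numpy as np

# Look for a strong dark-to-light transition (door panel -> car hood) inside a
# hypothetical region of interest around the bottom of the door.
img = cv2.imread('photo.jpg', cv2.IMREAD_GRAYSCALE)
roi = img[300:500, 100:400]                         # placeholder coordinates

grad_y = cv2.Sobel(roi, cv2.CV_64F, 0, 1, ksize=5)  # vertical intensity change
row_strength = np.abs(grad_y).sum(axis=1)           # total gradient per row

# The row with the strongest transition is a candidate for the missing 4th edge.
candidate_row = 300 + int(np.argmax(row_strength))
print('possible bottom edge of the door at y =', candidate_row)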

Python - Remove Black Pixels originating from the border of an image

I am very new to image processing and I am trying to clean pictures similar to picture 1 of the black pixels originating from the border of the image.
The images are clipped characters from a PDF which I try to process with tesseract to retrieve the character. I already searched Stack Overflow for answers, but only found solutions for getting rid of black borders.
I need to overwrite all the black pixels coming in from the corners with white pixels, so tesseract can correctly recognize the character.
I cannot alter the bounding boxes used to clip the characters, since the characters are centered in different areas of the bounding box, and if I cut the bounding box, I would cut some characters, as seen below.
My first guess would have been to recursively track down pixels above a certain blackness threshold, but I am worried about the computing time in that case and wouldn't really know where to start, except for using two two-dimensional arrays: one with the pixels and one with an indicator of whether I have already processed that pixel.
Help would be greatly appreciated.
Edit: some more pictures of cases where black pixels from the edge need to be cleared:
Edit: code snippet used to create the border image:
import numpy
import cv2
from PIL import Image

@staticmethod
def __get_border_image(image: Image.Image) -> Image.Image:
    data = numpy.asarray(image)
    border = cv2.copyMakeBorder(data, top=5, bottom=5, left=5, right=5, borderType=cv2.BORDER_CONSTANT)
    return Image.fromarray(border)
Try like this:
artificially add a 1px wide black border all around the edge
flood-fill all black pixels with white, starting at the top-left corner
remove the 1px border from the first step (if necessary)
The point of adding the border is to allow the white to "flow" all around all edges of the image and reach any black items touching the edge.
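A minimal sketch of those three steps with OpenCV, assuming a roughly binary character image (the function name, the grayscale conversion and the 128 threshold are my own assumptions):

import cv2
import numpy
from PIL import Image

def clear_border_blobs(image: Image.Image) -> Image.Image:
    # work on a binarized grayscale copy; the 128 threshold is a guess
    data = numpy.asarray(image.convert('L'))
    _, data = cv2.threshold(data, 128, 255, cv2.THRESH_BINARY)
    # 1. artificially add a 1 px black border all around the edge
    padded = cv2.copyMakeBorder(data, 1, 1, 1, 1, cv2.BORDER_CONSTANT, value=0)
    # 2. flood-fill with white starting at the top-left corner; every black
    #    pixel connected to the border gets painted white
    mask = numpy.zeros((padded.shape[0] + 2, padded.shape[1] + 2), numpy.uint8)
    cv2.floodFill(padded, mask, (0, 0), 255)
    # 3. remove the artificial 1 px border again
    return Image.fromarray(padded[1:-1, 1:-1])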

Remove uneven white border from images using OpenCV

I'm trying to remove uneven white borders from different sets of pictures. They all look like this:
What I'm doing right now is just drawing a rectangle around the picture in the hope that it covers the white area:
import cv2

# img is a single-channel (grayscale) image, hence the two-value unpack
h, w = img.shape
cv2.rectangle(img, (0, 0), (w, h), (0, 0, 0), 2)
Depending on the picture, it might or might not work. As there are a number of pictures in a similar situation, I'm looking for a more general solution that is applicable to all pictures with this kind of issue.
I think your approach is right, but it doesn't know whether it overlays any figures (you may increase the thickness if you know there won't be figures within that margin), and the desired thickness is unknown.
You may use cv2.findContours: find the "thick" figures (if you expect particular shapes, as in the picture), sort their extreme coordinates, and add some margin; that would set the maximum depth of the border.
However, it would then be better to draw a line per side rather than one rectangle, in case there are figures very close to the border.
Another option: first draw concentric black rectangles (or lines per side) to clear the unevenness, then draw the white lines/rectangle with the desired thickness.
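A rough sketch of the contour-based idea, assuming dark figures on a light background (the file name, threshold and margin are placeholders, and the OpenCV 4 return signature of findContours is used):

import cv2

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)

# binarize so the figures become white blobs on black
_, thresh = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# extreme coordinates of all detected figures
xs, ys, ws, hs = zip(*(cv2.boundingRect(c) for c in contours))
x0, y0 = min(xs), min(ys)
x1 = max(x + w for x, w in zip(xs, ws))
y1 = max(y + h for y, h in zip(ys, hs))

# black out a strip per side, up to the figures minus a safety margin
margin = 5
h, w = img.shape
img[:max(y0 - margin, 0), :] = 0   # top
img[min(y1 + margin, h):, :] = 0   # bottom
img[:, :max(x0 - margin, 0)] = 0   # left
img[:, min(x1 + margin, w):] = 0   # right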

Opencv: How to stitch four trapezoid images to make a square image?

I am currently trying very hard to figure out a way to make these four trapezoid images into one nice image. The final image should look something like this (I used Photoshop to make it):
The above image will be composed of four of these images:
The problem is that when I try to rotate and combine these images, the black surroundings come into the final image as well, like this:
How am I supposed to get rid of the blacked-out area or make it transparent? I've tried using a mask, but that only makes the black area white instead. I have also tried using the alpha channel, but that didn't work (although maybe I was doing it wrong). Any ideas on what I can do in OpenCV?
I did actually figure it out. I did it with these steps:
Create two SAME SIZED black backgrounds with numpy zeros
Put one image in each background where you want them (for me, it was left and top)
Then all you need to do is cv.add(first, second)
The reason it works is that black pixels are (0, 0, 0), so adding them to a pixel that is, say, (25, 62, 34) leaves that pixel unchanged, which gets rid of the black corners.
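A minimal sketch of those steps (the canvas size and file names are placeholders, and for simplicity both warped images are placed at the top-left of their canvases):

import cv2
import numpy as np

h, w = 800, 800                                    # placeholder canvas size
canvas_top = np.zeros((h, w, 3), dtype=np.uint8)   # same-sized black backgrounds
canvas_left = np.zeros((h, w, 3), dtype=np.uint8)

top = cv2.imread('warped_top.png')                 # placeholder file names
left = cv2.imread('warped_left.png')

# put each trapezoid on its own black background (here simply at the top-left)
canvas_top[:top.shape[0], :top.shape[1]] = top
canvas_left[:left.shape[0], :left.shape[1]] = left

# black pixels are (0, 0, 0), so adding leaves the other image's pixels intact
combined = cv2.add(canvas_top, canvas_left)
cv2.imwrite('combined.png', combined)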

Pygame rotate causes image to stutter

I honestly have no idea why this doesn't work. The rotation causes the image to scale up and down constantly. I looked around and haven't found a solution to my problem.
Main http://tinypaste.com/1c5025fa
Module http://tinypaste.com/f42f9c58
Also can someone explain why this program's box abruptly stops rotating?
Etc 'http://tinypaste.com/82b3b30e' (remove the quotes, I'm not allowed to post more than 2 hyperlinks)
From what I can tell, the scaling that you're seeing is an artifact of how the rotation operation works. As a rectangle is rotated, the bounding box will necessarily be larger than the original rectangle. See, for example, the blue rectangle in the image below. R is the radius of the rectangle... so when it's rotated, the rectangle sweeps out the area covered by the red circle in the second image. The bounding box for the rotation is now the gray rectangle. pygame has to fill in both the red area and the gray area. What color does pygame use to fill in that padding area?
The pygame.transform.rotate docs say...
"Unless rotating by 90 degree increments, the image will be padded larger to hold the new size. If the image has pixel alphas, the padded area will be transparent. Otherwise pygame will pick a color that matches the Surface colorkey or the topleft pixel value."
So, the solution is to explicitly set the colorkey or alpha value for the image (in your case, when you construct your saved_image surface). Then, when saved_image is rotated, the newly produced image will have the padding area filled with the appropriate color.
Give it a go and see if that works.
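A minimal sketch of that suggestion (the surface size, color and angle are placeholders standing in for saved_image from the question):

import pygame

pygame.init()
screen = pygame.display.set_mode((400, 400))

# hypothetical stand-in for the saved_image surface from the question
saved_image = pygame.Surface((120, 60))
saved_image.fill((200, 50, 50))

# tell pygame which color counts as "empty", so the padding added by
# rotate() is filled with a color that is transparent when blitted
saved_image.set_colorkey((0, 0, 0))
# (alternatively: saved_image = saved_image.convert_alpha() for per-pixel alpha)

rotated = pygame.transform.rotate(saved_image, 30)
# blit via a rect centered on the original position, so the growing bounding
# box does not make the image appear to pulse or jump
screen.blit(rotated, rotated.get_rect(center=(200, 200)))
pygame.display.flip()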
