crop unwanted black space of image - python

I have a set of grayscale images, like this:
This is an example image as I cannot post the original image. Each image has an area with a texture, a pure white watermark (pos), and lots of unwanted black space.
Ideally this image should be cropped to:
The watermark can be slightly different in each image, but is always very thin pure white text.
The pictures can look very different; here is another example:
This one only needs cropping on the left.
Another one:
This one needs to be cropped at the top and bottom:
And another one:
This one needs to be cropped at the top and right. Note that I left the watermark in this picture. Ideally the watermark would be removed as well, but I guess it is easier without.
Here is a picture of the watermark as it looks in reality.
The images vary in size, but are usually large (over 2000x2000).
I am looking for a solution in python (cv2 maybe).
My first idea was to use something like this:
Python & OpenCV: Second largest object
but that solution's code fails for me.

I work in C# and C++ rather than Python, but I can suggest the logic.
You need to run two scans of the image, one row-wise and the other column-wise.
Since you said the unwanted part of the image is always black, just read the pixel values in both scans. If all the pixels in a certain row are black, you can eliminate that row; the same steps apply to the column-wise scan.
We cannot simply delete rows and columns one at a time, so note down the redundant rows and columns and then crop the image with the following code (I will write it in C# with the Emgu CV library, but it is easy to follow for Python):
// Crop the original image to the bounding box found by the two scans.
Mat original_image = new Mat();
Rectangle ROI = new Rectangle(x, y, width, height);
Mat image_needed_to_crop = new Mat(original_image, ROI);
This code extracts only the region of interest from the original image.
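For the Python side, here is a minimal sketch of the same row/column scan with NumPy and OpenCV (the file names and the near-black threshold are assumptions; tune the threshold for your images):
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Treat pixels below a small threshold as "black" to tolerate compression noise.
non_black = img > 10  # threshold value is an assumption

# Keep only the rows and columns that contain at least one non-black pixel.
rows = np.where(non_black.any(axis=1))[0]
cols = np.where(non_black.any(axis=0))[0]

if rows.size and cols.size:
    cropped = img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    cv2.imwrite("cropped.png", cropped)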

Related

Python / Pillow cut image with larger mask

To create clean isometric tiles, I want to cut everything outside the mask.
So far I have:
from PIL import Image
img = Image.open('grass.png')
mask = Image.open('mask.png').convert('L')  # greyscale mask used as the alpha channel
img.putalpha(mask)                          # replaces the existing alpha channel with the mask
img.save('result.png')
Input
Mask
Result
Expected Result
It successfully cuts the bottom left and right edges, but the top part ends up colored black, and I want that to be transparent as well. So I only want to cut the parts of the input image that exceed the mask.
Of course, in that specific case I could have just created a mask for a bottom tile, but as I have many different ones, I want a generic mask. I then thought about simply removing black pixels afterwards, but there may be black pixels in my input images too, so that is not a good option either.
I have found a couple of similar questions here, but they only cover masks that are smaller than the input image, not bigger, which makes the difference in this case.
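One way to approach this, sketched below, is to combine the mask with the image's existing alpha channel (taking the pixel-wise minimum) instead of replacing it, so anything that is already transparent stays transparent. The top-left alignment of the mask crop is an assumption and may need an offset for your tiles:
from PIL import Image, ImageChops

img = Image.open('grass.png').convert('RGBA')
mask = Image.open('mask.png').convert('L')

# Bring the larger mask down to the image size (assumed top-left alignment).
mask = mask.crop((0, 0, img.width, img.height))

# Keep a pixel visible only where both the original alpha and the mask allow it.
alpha = ImageChops.darker(img.getchannel('A'), mask)
img.putalpha(alpha)
img.save('result.png')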

Best way to "fill in" an image in Python, using a reflection of the image itself?

For an analysis, I'd like to take a bunch of TIF images I have and fill in the black background to create a perfect square.
I would like to keep the same general pattern of the image when I fill in the black spots, instead of just filling in the black with random bits of white and blue. My first thought for doing this is to impose some sort of symmetrical "reflection" of the image onto the black portions - the concept is detailed below.
The thing is, I'm not sure how to go about this - my first thought was to convert the image to a NumPy array and copy the individual rows of pixels over for a pseudo-reflection, but that could take a lot of time since I would be accounting for the length of the black portion in each row, and it wouldn't even be the desired result. I was wondering if there was a package or method that did something like this already, perhaps in PIL.
Any ideas are appreciated, as I am not too familiar with image processing in Python (or in general) - thank you!
EDIT: Here is a google drive link for the example image in question.
EDIT 2: Here is a google drive link for another example, this time with two "overlapping" black background areas (in other words, the actual data has a corner)
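A minimal sketch of the reflection idea with OpenCV and NumPy, assuming the black background sits along the edges of a rectangular data region and is no wider than the data itself (the file name and black threshold are placeholders):
import cv2
import numpy as np

img = cv2.imread("tile.tif")  # placeholder file name

# Bounding box of the non-black content.
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
ys, xs = np.where(gray > 10)  # threshold is an assumption
top, bottom = ys.min(), ys.max() + 1
left, right = xs.min(), xs.max() + 1
content = img[top:bottom, left:right]

# Pad the content back to the original size by reflecting it, which keeps
# the general pattern of the image instead of filling with a flat colour.
filled = cv2.copyMakeBorder(
    content,
    top, img.shape[0] - bottom,
    left, img.shape[1] - right,
    cv2.BORDER_REFLECT)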

OpenCV get subimages by mask

Do you have any ideas how I can get sub-images of the original image from that mask, with Python and OpenCV? I need a separate sub-image for every white area.
Because the white areas are not rectangles, it is hard to separate them.
I think you are looking for connectedComponentsWithStats(), which finds the connected components of the mask (i.e. one label per white area) and returns a labelled image together with statistics such as the bounding box of each component.
From this, it is easy to extract the part of the image belonging to a specific label.
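A short sketch of how that could look (the file names are placeholders; label 0 is the background):
import cv2

image = cv2.imread("original.png")
mask = cv2.imread("mask.png", cv2.IMREAD_GRAYSCALE)

# stats holds x, y, width, height and area for every label.
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)

subimages = []
for label in range(1, num_labels):  # skip label 0, the background
    x, y, w, h, area = stats[label]
    subimages.append(image[y:y + h, x:x + w])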

OpenCV how to replace cut out object with background

I have two images, one image which contains a box and one without. There is a small vertical disparity between the two pictures since the camera was not at the same spot and was translated a bit. I want to cut out the box and replace the hole with the information from the other picture.
I want to achieve something like this (a slide from a computer vision course)
I thought about using the cv2.createBackgroundSubtractorMOG2() method, but it does not seem to work with only 2 pictures.
Simply subtracting the picture from another does not work either because of the disparity.
The course suggests using RANSAC to compute the most likely relationship between the two pictures and subtracting the area that changed a lot. But how do I actually fill in the holes?
Many thanks in advance!!
If you plan to use only a pair of images (or only a few images), image stitching methods are better than background subtraction.
The steps are:
Calculate homography between the two images.
Warp the second image to overlap the first.
Replace the region with the human with pixels from the warped image.
This link shows a basic example of image stitching. You will need extra work if both images have humans in different places, but otherwise it should not be hard to tweak this code.
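A rough sketch of those steps with OpenCV, using ORB features and a RANSAC homography (the file names and the mask of the region to replace are assumptions):
import cv2
import numpy as np

img_with_box = cv2.imread("with_box.jpg")
img_without = cv2.imread("without_box.jpg")

# 1. Match features and estimate the homography with RANSAC.
gray1 = cv2.cvtColor(img_without, cv2.COLOR_BGR2GRAY)
gray2 = cv2.cvtColor(img_with_box, cv2.COLOR_BGR2GRAY)
orb = cv2.ORB_create(5000)
kp1, des1 = orb.detectAndCompute(gray1, None)
kp2, des2 = orb.detectAndCompute(gray2, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 2. Warp the second image into the frame of the first.
h, w = img_with_box.shape[:2]
warped = cv2.warpPerspective(img_without, H, (w, h))

# 3. Replace the region containing the box with pixels from the warped image.
#    box_mask is a binary mask of that region (assumed to be known).
box_mask = cv2.imread("box_mask.png", cv2.IMREAD_GRAYSCALE) > 0
result = img_with_box.copy()
result[box_mask] = warped[box_mask]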
You can try this library for background subtraction issues. https://github.com/andrewssobral/bgslibrary
There are Python wrappers for this tool.

Opencv: How to stitch four trapezoid images to make a square image?

I am currently trying very hard to figure out a way to make these four trapezoid images into one nice image. The final image should look something like this (I used Photoshop to make it):
The above image will be composed of four of these images:
The problem is that when I try to rotate and combine these images, the black surroundings come into the final image as well, like this:
How am I supposed to get rid of the blacked-out area or make it transparent? I've tried using a mask, but that only makes the black area white instead. I have also tried using the alpha channel, but that didn't work (although maybe I was doing it wrong). Any ideas on what I can do in OpenCV?
I did actually figure it out. I did it with these steps:
Create two SAME SIZED black backgrounds with numpy zeros
Put one image in each background where you want them(for me, it was left and top)
Then all you need to do is cv.add(first, second)
The reason it works is that black pixels are (0, 0, 0), so adding them to a pixel that is, say, (25, 62, 34) doesn't change that pixel, which gets rid of the black corners.
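A minimal NumPy/OpenCV sketch of those steps (the file names, canvas size, and placement coordinates are placeholders):
import cv2
import numpy as np

left = cv2.imread("left_trapezoid.png")
top = cv2.imread("top_trapezoid.png")

# 1. Two same-sized black canvases.
h, w = 1000, 1000  # assumed final size
first = np.zeros((h, w, 3), dtype=np.uint8)
second = np.zeros((h, w, 3), dtype=np.uint8)

# 2. Place each trapezoid where it belongs on its own canvas.
first[0:left.shape[0], 0:left.shape[1]] = left
second[0:top.shape[0], 0:top.shape[1]] = top

# 3. Adding the canvases keeps both trapezoids: the black pixels (0, 0, 0)
#    do not change whatever they are added to.
combined = cv2.add(first, second)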
