To create clean isometric tiles, I want to cut everything outside the mask.
So far I have:
from PIL import Image

img = Image.open('grass.png')
mask = Image.open('mask.png').convert('L')  # grayscale mask
img.putalpha(mask)  # replaces the tile's alpha channel entirely
img.save('result.png')
(Images: input, mask, actual result, expected result)
It successfully cut the bottom left and right edges, but the top part came out colored black, and I want that transparent as well. In other words, I only want to cut the parts of the input image that exceed the mask.
Of course, in this specific case I could have just created a mask for a bottom tile, but since I have many different tiles, I want a generic mask. I then thought about simply removing black pixels afterwards, but there may be black pixels in my input images too, so that is not a good option either.
I have found a couple of similar questions here, but those only cover masks that are smaller than the input image, not bigger, which makes the difference in this case.
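One direction that should generalize (a minimal sketch, assuming Pillow; cropping the oversized mask from the top-left corner is an assumption, adjust the box as needed): multiply the tile's own alpha channel with the mask, so a pixel survives only where both agree.
from PIL import Image, ImageChops

img = Image.open('grass.png').convert('RGBA')
mask = Image.open('mask.png').convert('L')

# if the mask is bigger than the tile, cut it down to the tile's size first
# (cropping from the top-left corner is an assumption)
mask = mask.crop((0, 0, img.width, img.height))

# keep a pixel only where BOTH the tile's own alpha and the mask are opaque
combined = ImageChops.multiply(img.getchannel('A'), mask)
img.putalpha(combined)
img.save('result.png')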
Related
(Input image)
I need to group the region in green and get its coordinates, as in this output image. How can I do this in Python?
Please see the attached images for better clarity.
First, split out the green channel of the image and put a threshold on it to get a binary image. This binary image contains the objects of the green area. Then dilate the image with a suitable kernel; this makes adjacent objects stick to each other and merge into one big object. Next, use findContours to get the sizes of all objects, keep the biggest object and remove the others; this image is your mask. Now you can reconstruct the original image (green channel only) with this mask and fit a box to the remaining object.
You can easily find the code for each part.
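A rough sketch of those steps in Python with OpenCV (the filename, the threshold of 127, and the 15x15 kernel are assumptions, not values from the question; findContours is called with the OpenCV 4 return signature):
import cv2
import numpy as np

img = cv2.imread('input.png')                        # hypothetical filename
green = img[:, :, 1]                                 # OpenCV loads BGR, so index 1 is green
_, binary = cv2.threshold(green, 127, 255, cv2.THRESH_BINARY)  # assumed threshold
kernel = np.ones((15, 15), np.uint8)                 # assumed kernel size
dilated = cv2.dilate(binary, kernel)                 # merge adjacent objects into one blob
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)         # keep only the biggest object
x, y, w, h = cv2.boundingRect(largest)               # fit a box to it
print(x, y, w, h)                                    # coordinates of the green region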
I have a set of grayscale images, like this:
This is an example image as I cannot post the original image. Each image has an area with a texture, a pure white watermark (pos), and lots of unwanted black space.
Ideally this image should be cropped to:
The watermark can be slightly different in each image, but is always very thin pure white text.
The pictures can look very different. Here is another example:
This one only needs cropping on the left.
Another one:
This one needs to be cropped at the top and bottom:
And another one:
This one needs to be cropped at the top and right. Note that I left the watermark in this picture. Ideally the watermark would be removed as well, but I guess it is easier without it.
Here is a picture of how the watermark looks in reality.
The images vary in size, but are usually large (over 2000x2000).
I am looking for a solution in Python (cv2, maybe).
My first idea was to use something like this:
Python & OpenCV: Second largest object
but that solution's code fails for me.
I work in C# and C++ rather than Python, but I can suggest the logic.
You need to run two scans of the image, one row-wise and the other column-wise.
Since you said the unwanted part of the image is always black, just read the pixel values in both scans. If all the pixels in a certain row are black, you can eliminate that row. The same steps can be followed for the column-wise scan.
Now, we cannot just delete rows and columns that easily, so note down the redundant rows and columns, and then crop your image using the following code (I will code in C# with the Emgu CV library, but it is easy to translate to Python):
Mat original_image = new Mat();                           // source image
Rectangle ROI = new Rectangle(x, y, width, height);       // region to keep
Mat image_needed_to_crop = new Mat(original_image, ROI);  // crop to the ROI
This code extracts only the region of interest from the original image.
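For reference, the same logic as a sketch in Python with NumPy (the filename and the near-black threshold of 10 are assumptions):
import cv2
import numpy as np

img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
content = img > 10                     # anything above 10 counts as content (assumed threshold)
rows = np.any(content, axis=1)         # True for every row that contains content
cols = np.any(content, axis=0)         # True for every column that contains content
y0, y1 = np.where(rows)[0][[0, -1]]    # first and last non-black row
x0, x1 = np.where(cols)[0][[0, -1]]    # first and last non-black column
cropped = img[y0:y1 + 1, x0:x1 + 1]    # crop to the bounding box of the content
cv2.imwrite('cropped.png', cropped)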
I'm trying to blur around specific regions in a 2D image (the data is an array of size m x n).
The points are specified by an m x n mask. cv2 and scikit-image are available.
I tried:
Simply applying blur filters to the masked image. But that isn't working.
Extracting the points to blur by setting the rest to np.nan, blurring, and reassembling. That's also not working, because the blur obviously needs the surrounding points to work correctly.
Any ideas?
Cheers
What was the result in the first case? It sounds like a good approach. What did you expect, and what did you get?
You can also try something like this:
Either create a copy of the whole image or just a slightly bigger ROI (to include the samples that will be used for blurring)
Apply blur to the created image
Apply masks to the two images (from the original image take everything except the ROI, and from the blurred image take the ROI)
Add the two masked images
If you want a smoother transition, make sure the masks aren't binary. You can smooth them with another blur (blur one mask and create the second one by calculating mask2 = 1 - mask1; that way you can be sure the weights always add up to one).
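A minimal sketch of that blend (the filenames and the 15x15 Gaussian kernels are assumptions; the mask is 1.0 where blurring is wanted):
import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)  # hypothetical file
mask = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE) / 255.0             # 1.0 = blur here

blurred = cv2.GaussianBlur(img, (15, 15), 0)   # blur the whole image (assumed kernel)
soft = cv2.GaussianBlur(mask, (15, 15), 0)     # soften the mask for a smooth transition
result = soft * blurred + (1.0 - soft) * img   # per-pixel weights add up to one
cv2.imwrite('result.png', result.astype(np.uint8))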
I am currently trying very hard to figure out a way to combine these four trapezoid images into one nice image. The final image should look something like this (I used Photoshop to make it):
The image above will be composed from four of these images:
The problem is that when I try to rotate and combine the images, the black surroundings come into the final image as well, like this:
How am I supposed to get rid of the blacked-out area or make it transparent? I've tried using a mask, but that only makes the black area white instead. I have also tried using the alpha channel, but that didn't work (although maybe I was doing it wrong). Any ideas on what I can do in OpenCV?
I actually did figure it out, with these steps:
Create two SAME SIZED black backgrounds with numpy zeros
Put one image on each background where you want it (for me, it was left and top)
Then all you need to do is cv2.add(first, second)
The reason this works is that black pixels are (0,0,0), so adding one to a pixel that is, say, (25,62,34) doesn't change that pixel, which gets rid of the black corners.
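A minimal sketch of those steps (the canvas size, filenames, and paste positions are assumptions):
import cv2
import numpy as np

top = cv2.imread('top.png')      # hypothetical filenames
left = cv2.imread('left.png')

h, w = 400, 400                  # assumed canvas size, big enough for both pieces
first = np.zeros((h, w, 3), np.uint8)
second = np.zeros((h, w, 3), np.uint8)

# put each trapezoid on its own black background
first[:top.shape[0], :top.shape[1]] = top
second[:left.shape[0], :left.shape[1]] = left

# black pixels are (0,0,0), so adding leaves the other image's pixels unchanged
combined = cv2.add(first, second)
cv2.imwrite('combined.png', combined)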
I am trying to validate black and white images (clipart-style images, not photos) for an engraving machine.
One of the major things I need to take into consideration is the size of areas (or the width of lines), since the machine can't handle lines that are too thin. So I need to find areas that are thinner than a given threshold.
Take this image for example:
The strings of the harp might be too thin to engrave.
I am reading about MATLAB and OpenCV, but image processing is an area I am learning about for the first time.
I am a Java / C# developer, so an implementation in one of those languages would be best for me, but any direction will be greatly appreciated.
A solution using MATLAB, utilizing image morphological operations:
Define the minimal thickness of an allowed area, for example minThick = 4
BW = imread('http://i.stack.imgur.com/oXKep.jpg');
BW = BW(:,:,1) < 128; %// convert image to binary mask
se = strel('disk', minThick/2, 0); %// define a disk element
eBW = imerode( BW, se ); %// "chop" half thickness from mask
deBW = imdilate( eBW, se ); %// dilate the eroded mask
Eroding and then dilating should leave regions that are thicker than minThick unchanged, but it will remove the thinner areas.
invalidArea = BW & ~deBW; %// pixels that are in BW but not in deBW
Resulting in:
You can read more about imdilate and imerode in the linked documentation.
This is primarily for self-containment, but here is the Python equivalent of what @Shai did, using the numpy and OpenCV packages:
import numpy as np # Import numpy package
import cv2 # Import OpenCV package
orig = cv2.imread('oXKep.jpg') # Read in image from disk
BW = orig[:,:,2] < 128 # Threshold below 128 to invert image
minThick = 5 # Define minimum thickness
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (minThick,minThick)) # define a disk element
finalBW = 255*cv2.morphologyEx(BW.astype('uint8'), cv2.MORPH_OPEN, se) # "chop" half thickness from mask and dilate the eroded mask
# Find invalid area
invalidArea = 255*np.logical_and(BW, np.logical_not(finalBW)).astype('uint8')
# Show original image
cv2.imshow('Original', orig)
# Show opened result
cv2.imshow('Final', finalBW)
# Show invalid lines
cv2.imshow('Invalid Area', invalidArea)
# Wait for user input then close windows
cv2.waitKey(0)
cv2.destroyAllWindows()
A few intricacies that I need to point out:
OpenCV's imread function reads in colour channels in reverse order with respect to MATLAB. Specifically, the channels are read in with a blue-green-red order. This means that the first channel is blue, the second channel green and third channel red. In MATLAB, these are read in proper RGB order. Because this is a grayscale image, the RGB components are the same so it really doesn't matter which channel you use. However, in order to be consistent with Shai's method, the red channel is being accessed and so we need to access the last channel of the image through OpenCV.
The disk structuring element in MATLAB with a structure number of 0 is essentially a diamond shape. However, because OpenCV does not have this structuring element built-in, and I wanted to produce the minimum amount of code possible to get something going, the closest thing I could use was the elliptical shaped structuring element.
In order for the structuring element to be symmetric, you need to make sure that the size is odd, so I changed the size from 4 to 5 from Shai's example.
In order to show an image using OpenCV's Python bindings, the image must be at least an unsigned 8-bit integer type. OpenCV does not support displaying binary images directly, so I artificially converted the binary images to uint8 and multiplied the values by 255 before displaying them.
You can combine the erosion and dilation operations into a single operation using a morphological opening. Opening seeks to remove thin lines or disconnect thinly connected objects while maintaining the shape of the larger original objects. This is the point of eroding first, so that you can remove these lines (shrinking the objects a bit in terms of area), then dilating afterwards, so that the shapes are restored back to (mostly) their original size. I exploited that by performing a morphological opening via cv2.morphologyEx.
This is what I get: