Coloring only the inside of a shape - python

Let's say that you are given this image
and are instructed to programmatically color only its inside the appropriate color. The program would have to work not only on this shape and other primitives, but on any outlined shape, however complex, shaded or not.
This is the problem I am trying to solve, and here is where I'm stuck: it seems like it should be simple to teach a computer to see black lines and color inside them, but searching mostly turns up eigenface-style recognition algorithms, which to me seem like overfitting and far greater complexity than this problem needs, at least in its basic form.
I would like to frame this as a supervised learning classifier problem, the purpose of which is to feed my model a complete image and have it output smaller numpy arrays consisting of pixels classified as object or background. But to do that I would need training data, and it seems I would need to hand-label every pixel in my training set, which obviously defeats the purpose of the program.
Now that you have the background, here's my question: given this image, is there an efficient way to get two distinct arrays, each consisting of all adjacent pixels that do not contain any solid black (RGB(0,0,0)) pixels?
That would make one set all the pixels inside the circle, and the other all the pixels outside it.

You can use scipy.ndimage.measurements.label to do all the heavy lifting for you:
import scipy.ndimage
import scipy.misc
data = scipy.misc.imread(...)
assert data.ndim == 2, "Image must be monochromatic"
# find and number all disjoint white regions of the image
is_white = data > 128
labels, n = scipy.ndimage.measurements.label(is_white)
# get a set of all the region ids which are on the edge - we should not fill these
on_border = set(labels[:, 0]) | set(labels[:, -1]) | set(labels[0, :]) | set(labels[-1, :])
for label in range(1, n + 1):  # label 0 is all the black pixels
    if label not in on_border:
        # turn every pixel with that label to black
        data[labels == label] = 0
This will fill all closed shapes within the image, treating a shape cut by the edge of the image as not closed.
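Since the question asks for two distinct pixel arrays rather than a filled image, the labels array already encodes them. A minimal sketch, reusing data, labels and n from the code above (np.isin requires NumPy >= 1.13):
import numpy as np
# region ids touching the image border are "outside"; every other white region is "inside"
border_ids = set(labels[:, 0]) | set(labels[:, -1]) | set(labels[0, :]) | set(labels[-1, :])
inside_ids = [lab for lab in range(1, n + 1) if lab not in border_ids]
inside_mask = np.isin(labels, inside_ids)               # True for pixels enclosed by the outline
outside_mask = np.isin(labels, list(border_ids - {0}))  # True for pixels outside it
inside_pixels = data[inside_mask]    # all pixel values inside the shape
outside_pixels = data[outside_mask]  # all pixel values outside the shape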

Related

How can I search an area for pixel change?

This is the code I am using to detect if a pixel (in this case the pixel at (510, 510)) turns a certain color.
import PIL.ImageGrab
import mouse
while True:
    rgb = PIL.ImageGrab.grab(bbox=None)
    rgb2 = (253, 146, 134)
    print(rgb.getpixel((510, 510)))
    if rgb.getpixel((510, 510)) == rgb2:
        mouse.click()
I want to be able to search an area of my screen for any pixel that changes to a specified color, not just an individual pixel. How might I do that? I want to keep this running as fast as possible. I know most areas searched on an image or video would be rectangles, but could it be a triangle, to cut down on the number of pixels searched? If not, the next sentences are irrelevant. How so? Would it work if I gave the coordinates of each point in the triangle?
1. Make a black rectangular image just big enough to contain the shape you want to detect. Use np.zeros((h,w,3), np.uint8) to create it. It will be zero everywhere.
2. Draw the shape you want to detect in the black rectangle with colour=[1,1,1]. You now have an image that is 1 where you are interested in the pixels and 0 elsewhere. Do these first 2 steps outside your main loop.
3. Inside your loop, grab an area of screen the same size as your mask from steps 1 and 2. Multiply your image by the mask and all pixels you are not interested in will become zero. Test if your colour exists using np.where() or cv2.countNonZero(np.all(im==soughtColour, axis=-1)).
As an alternative to drawing with colour=[1,1,1] at the second step, draw with colour=[255,255,255] and then in the third step use cv2.bitwise_and() instead of multiplying.
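A minimal sketch of those three steps, assuming a triangular search region; the region size, screen offset, and target colour are placeholders you would adapt:
import numpy as np
import cv2
import PIL.ImageGrab

# steps 1 and 2: build the mask once, outside the loop
w, h = 200, 200                                   # size of the watched area (assumption)
mask = np.zeros((h, w, 3), np.uint8)
triangle = np.array([[0, h - 1], [w - 1, h - 1], [w // 2, 0]], dtype=np.int32)
cv2.fillPoly(mask, [triangle], (1, 1, 1))         # 1 inside the triangle, 0 elsewhere

sought = (253, 146, 134)                          # target colour from the question (RGB)
while True:
    # step 3: grab a same-sized screen area and zero out everything outside the triangle
    im = np.array(PIL.ImageGrab.grab(bbox=(400, 400, 400 + w, 400 + h)))[:, :, :3]
    im = im * mask
    if np.any(np.all(im == sought, axis=-1)):
        print("colour found")
        break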

How to convert rgb to labels for image segmentation

I have around 4000 RGB label images which are masks for some other images. I can use these image-label pairs in a deep-learning encoder-decoder architecture (e.g. UNet) with a regression approach, but I would like to take a segmentation approach instead. How can I convert these images for that?
Sample image:
(The sample image above should contain 3 classes: one oval-shaped part, the remaining red part, and the white background. This can go up to 7 classes in some other image pairs.)
There are supposed to be 7 classes, including background, for the entire dataset. But when I tried to find the unique values in an RGB label, more than 30 unique value pairs came up; otherwise I would have selected the unique RGB pairs and done the processing. How can I overcome this?
Here's one potential way to handle this (in MATLAB, but the approach is similar in other environments):
The image you have shared is rather pixelated, and hence quite difficult to handle. If your dataset contains similarly pixelated images, I'd explore some kind of pre-processing to get rid of spurious edge discolorations, as they mess up the clustering. For the sake of demonstration here, I've created a test image with exactly three colors.
% Create a test image - the one shared is very pixelated.
I = uint8(zeros(100, 100, 3));
I(10:20, 10:20, 1) = 255;
I(40:50, 40:50, 2) = 255;
If the number of colors here is unknown, but at most 7, here's a quick way to use imsegkmeans and its 'C' output to find the number of unique centers.
% Specify max clusters
maxNumClusters = 7;
% Run clustering using the max value
[~, C] = imsegkmeans(I, maxNumClusters);
nUniqueClusters = size(unique(C, 'rows'), 1);
'nUniqueClusters' should now contain the 'true' number of clusters in the image. In a way, this is almost like finding the number of unique pixel RGB triplets in the image itself - I think what's affecting your work is noise due to pixelation, which is a separate problem.
[L, C] = imsegkmeans(I, nUniqueClusters);
% Display the labeled image for further verification.
B = labeloverlay(I, L);
figure, imshow(B)
One way to attempt to fix the pixelation problem is to plot a histogram of your image pixels (for one of the three color planes) and then manage the low values somehow - possibly by marking all of them with a distinct new color that you know doesn't otherwise exist in your dataset ((0, 0, 0), for example) and marking its label as 'unknown'. This is slightly outside the scope of your original question - hence just a text description of it here.
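If you'd rather stay in Python, a rough equivalent is to cluster the RGB triplets with scikit-learn's KMeans and use the cluster index as the class label; the file name and the 7-cluster cap here are assumptions:
import numpy as np
import imageio
from sklearn.cluster import KMeans

rgb = imageio.imread('label.png')[:, :, :3]       # hypothetical mask file
pixels = rgb.reshape(-1, 3).astype(float)

# cluster with the maximum expected number of classes (7 per the question)
kmeans = KMeans(n_clusters=7, n_init=10).fit(pixels)
labels = kmeans.labels_.reshape(rgb.shape[:2])    # one integer class id per pixel

# centers that land very close together are likely pixelation noise and can be merged
print(np.round(kmeans.cluster_centers_))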

Binarize image data

I have 10 greyscale brain MRI scans from BrainWeb. They are stored as a 4d numpy array, brains, with shape (10, 181, 217, 181). Each of the 10 brains is made up of 181 slices along the z-plane (going through the top of the head to the neck) where each slice is 181 pixels by 217 pixels in the x (ear to ear) and y (eyes to back of head) planes respectively.
All of the brains are of dtype('float64'). The maximum pixel intensity across all brains is ~1328 and the minimum is ~0. For example, for the first brain, I calculate this by brains[0].max() giving 1328.338086605072 and brains[0].min() giving 0.0003886114541273855. Below is a plot of a slice of brains[0]:
I want to binarize all these brain images by rescaling the pixel intensities from [0, 1328] to {0, 1}. Is my method correct?
I do this by first normalising the pixel intensities to [0, 1]:
normalized_brains = brains/1328
And then by using the binomial distribution to binarize each pixel:
binarized_brains = np.random.binomial(1, (normalized_brains))
The plotted result looks correct:
A 0 pixel intensity represents black (background) and 1 pixel intensity represents white (brain).
I experimented with another method to normalise an image from this post, but it gave me just a black image. This is because np.finfo(np.float64).max is 1.7976931348623157e+308, so the normalization step
normalized_brains = brains/1.7976931348623157e+308
just returned an array of zeros, which in the binarization step also led to an array of zeros.
Am I binarising my images using a correct method?
Your method of converting the image to a binary image basically amounts to random dithering, which is a poor method of creating the illusion of grey values on a binary medium. Old-fashioned print is a binary medium, and printers have fine-tuned the methods to represent grey-value photographs in print over centuries. This process is called halftoning, and it is shaped in part by properties of ink on paper that we do not have to deal with in binary images.
So what methods have people come up with outside of print? Ordered dithering (mostly with a Bayer matrix) and error diffusion dithering. Read more about dithering on Wikipedia. I wrote a blog post showing how to implement all of these methods in MATLAB some years ago.
I would recommend you use error diffusion dithering for your particular application. Here is some MATLAB code (taken from my blog post linked above) for the Floyd-Steinberg algorithm; I hope that you can translate it to Python:
img = imread('https://i.stack.imgur.com/d5E9i.png');
img = img(:,:,1);
out = double(img);
sz = size(out);
for ii=1:sz(1)
    for jj=1:sz(2)
        old = out(ii,jj);
        %new = 255*(old >= 128); % Original Floyd-Steinberg
        new = 255*(old >= 128+(rand-0.5)*100); % Simple improvement
        out(ii,jj) = new;
        err = new-old;
        if jj<sz(2)
            % right
            out(ii ,jj+1) = out(ii ,jj+1)-err*(7/16);
        end
        if ii<sz(1)
            if jj<sz(2)
                % right-down
                out(ii+1,jj+1) = out(ii+1,jj+1)-err*(1/16);
            end
            % down
            out(ii+1,jj ) = out(ii+1,jj )-err*(5/16);
            if jj>1
                % left-down
                out(ii+1,jj-1) = out(ii+1,jj-1)-err*(3/16);
            end
        end
    end
end
imshow(out)
Resampling the image before applying the dithering greatly improves the results:
img = imresize(img,4);
% (repeat code above)
imshow(out)
NOTE that the above process expects the input to be in the range [0,255]. It is easy to adapt to a different range, say [0,1328] or [0,1], but it is also easy to scale your images to the [0,255] range.
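For reference, a rough Python translation of the loop above, with the same error weights; it assumes a 2-D array already scaled to [0, 255]:
import numpy as np

def floyd_steinberg(img):
    out = img.astype(float).copy()
    h, w = out.shape
    for i in range(h):
        for j in range(w):
            old = out[i, j]
            new = 255.0 * (old >= 128)      # original Floyd-Steinberg threshold
            out[i, j] = new
            err = new - old
            if j < w - 1:
                out[i, j + 1] -= err * 7 / 16          # right
            if i < h - 1:
                if j < w - 1:
                    out[i + 1, j + 1] -= err * 1 / 16  # right-down
                out[i + 1, j] -= err * 5 / 16          # down
                if j > 0:
                    out[i + 1, j - 1] -= err * 3 / 16  # left-down
    return out
The noisy threshold variant (128 + (rand - 0.5) * 100 in the MATLAB code) works the same way, with np.random.rand() in place of rand.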
Have you tried a threshold on the image?
This is a common way to binarize images, rather than trying to apply a random binomial distribution. You could try something like:
binarized_brains = (brains > threshold_value).astype(int)
which returns an array of 0s and 1s according to whether the image value was less than or greater than your chosen threshold value.
You will have to experiment with the threshold value to find the best one for your images, but the data does not need to be normalized first.
If this doesn't work well, you can also experiment with the thresholding options available in the skimage filters package.
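For instance, skimage can pick the threshold for you with Otsu's method; a minimal sketch using the brains array from the question:
from skimage import filters

t = filters.threshold_otsu(brains[0])    # data-driven threshold for one brain
binarized = (brains[0] > t).astype(int)  # 0/1 image, no normalization needed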
It is easy in OpenCV. As mentioned, a very common way is to define a threshold, but your result looks like you are assigning random values to your intensities instead of thresholding.
import cv2
im = cv2.imread('brain.png', cv2.IMREAD_GRAYSCALE)
# let Otsu's method choose the threshold automatically
(th, brain_bw) = cv2.threshold(im, 128, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
# or define a threshold yourself:
# th = ...  (choose a value for your data)
# (_, brain_bw) = cv2.threshold(im, th, 255, cv2.THRESH_BINARY)
cv2.imwrite('binBrain.png', brain_bw)
(Images: brain, the input, and binBrain, the binarized result.)

Python - matplotlib - imshow - How to influence displayed value of unzoomed image

I need to search for outliers in more or less homogeneous images representing some physical array. The images have a resolution much higher than the screen resolution, so every pixel on screen originates from a block of image pixels. Is there a possibility to customize the algorithm that calculates the displayed value for such a block? In particular, the possibility to use either the lowest or the highest value would be helpful.
Thanks in advance
Scipy provides several such filters. To get a new image (new) whose pixels are the maximum/minimum over a w*w block of an original image (img), you can use:
new = scipy.ndimage.filters.maximum_filter(img, w)
new = scipy.ndimage.filters.minimum_filter(img, w)
scipy.ndimage.filters has several other filters available.
If the standard filters don't fit your requirements, you can roll your own. To get you started here is an example that shows how to get the minimum in each block in the image. This function reduces the size of the full image (img) by a factor of w in each direction. It returns a smaller image (new) in which each pixel is the minimum pixel in a w*w block of pixels from the original image. The function assumes the image is in a numpy array:
import numpy as np
def condense(img, w):
    new = np.zeros((img.shape[0] // w, img.shape[1] // w))
    for i in range(0, img.shape[1] // w):
        col1 = i * w
        # each w-wide column band reshapes into rows of w*w blocks
        new[:, i] = img[:, col1:col1 + w].reshape(-1, w * w).min(1)
    return new
If you wanted the maximum, replace min with max.
For the condense function to work well, the size of the full image must be a multiple of w in each direction. The handling of non-square blocks or images that don't divide exactly is left as an exercise for the reader.
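If the Python loop is too slow on large images, the same block minimum can be computed in one vectorized step with reshape; a sketch under the same assumption that both dimensions are multiples of w:
import numpy as np

def condense_fast(img, w):
    h2, w2 = img.shape[0] // w, img.shape[1] // w
    # view the image as an (h2, w, w2, w) grid of w*w blocks, then reduce each block
    return img[:h2 * w, :w2 * w].reshape(h2, w, w2, w).min(axis=(1, 3))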

Scipy Binary Closing - Edge Pixels lose value

I am attempting to fill holes in a binary image. The image is rather large so I have broken it into chunks for processing.
When I use the scipy.ndimage.morphology.binary_fill_holes function, it also fills larger holes that belong in the image. So I tried scipy.ndimage.morphology.binary_closing, which gave the desired result of filling small holes in the image. However, when I put the chunks back together to create the entire image, I end up with seam lines, because the binary_closing function removes values from the border pixels of each chunk.
Is there any way to avoid this effect?
Yes.
1. Label your image using ndimage.label (first invert the image, holes=black).
2. Find the hole object slices with ndimage.find_objects.
3. Filter the list of object slices based on your size criteria.
4. Invert your image back and perform binary_fill_holes on the slices that meet your criteria.
That should do it, without needing to chop the image up. For example:
Input image:
Output image (Middle size holes are gone):
Here is the code (inequality is set to remove the middle size blobs):
import scipy.misc
from scipy import ndimage
import numpy as np

im = scipy.misc.imread('cheese.png', flatten=1)
invert_im = np.where(im == 0, 1, 0)
label_im, num = ndimage.label(invert_im)
holes = ndimage.find_objects(label_im)
small_holes = [hole for hole in holes if 500 < im[hole].size < 1000]
for hole in small_holes:
    # grow each slice by one pixel so the hole has a border all the way around
    a, b, c, d = (max(hole[0].start - 1, 0),
                  min(hole[0].stop + 1, im.shape[0] - 1),
                  max(hole[1].start - 1, 0),
                  min(hole[1].stop + 1, im.shape[1] - 1))
    im[a:b, c:d] = ndimage.binary_fill_holes(im[a:b, c:d]).astype(int) * 255
Also note that I had to increase the size of the slices so that the holes would have a border all the way around.
Operations that involve information from neighboring pixels, such as closing, will always have trouble at the edges. In your case, this is very easy to get around: just process subimages that are slightly larger than your tiling, and keep only the good parts when stitching them back together.
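A minimal sketch of that overlap-and-crop idea, where the chunk and pad sizes are assumptions (pad must be at least the radius of the closing's structuring element):
import numpy as np
from scipy import ndimage

def closing_in_chunks(image, chunk=1024, pad=16):
    out = np.zeros_like(image)
    for r in range(0, image.shape[0], chunk):
        for c in range(0, image.shape[1], chunk):
            # take a window slightly larger than the chunk, clipped at the image edges
            r0, c0 = max(r - pad, 0), max(c - pad, 0)
            r1 = min(r + chunk + pad, image.shape[0])
            c1 = min(c + chunk + pad, image.shape[1])
            closed = ndimage.binary_closing(image[r0:r1, c0:c1])
            # keep only the interior part, discarding the padded border
            out[r:r + chunk, c:c + chunk] = closed[r - r0:r - r0 + chunk,
                                                   c - c0:c - c0 + chunk]
    return out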
