Determine how colorblind-friendly an image is - python

I want to create a function that determines how colorblind-friendly an image is (on a scale from 0 to 1). I have several color-related functions that can perform the following tasks:
Take an image (as a PIL image, filename, or RGB array) and transform it into an image representative of what a colorblind person would see (for the different types of colorblindness)
Take an image and determine the RGB color of each pixel (transform it into a NumPy array of RGB values)
Determine the color palette associated with an image
Find the similarity between two RGB arrays (using CIELAB; see the colormath package)
My first instinct was to transform both the image and its colorblind version into RGB arrays and then use the CIELAB function to measure the similarity between the two. However, that doesn't really solve the problem, since it can't pick out things like readability (e.g. whether text and background colors end up very similar after adjusting for colorblindness).
Any ideas for how to determine how colorblind-friendly an image is?
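For reference, here is a minimal sketch of that first-instinct metric, using scikit-image's vectorized Lab conversion and delta-E rather than per-pixel colormath calls (which are slow at image scale). The simulate parameter is a hypothetical stand-in for your existing colorblindness transform:

```python
import numpy as np
from skimage.color import rgb2lab, deltaE_cie76

def colorblind_friendliness(image, simulate):
    """Score an image from 0 to 1 by how little the colorblind
    simulation perturbs it perceptually (1.0 = unchanged)."""
    rgb = np.asarray(image.convert('RGB'), dtype=float) / 255.0
    # simulate() is assumed to map a PIL image to its colorblind rendering
    sim = np.asarray(simulate(image).convert('RGB'), dtype=float) / 255.0
    de = deltaE_cie76(rgb2lab(rgb), rgb2lab(sim))  # per-pixel CIELAB distance
    # 100 is a rough upper bound for delta-E, so the score lands in [0, 1]
    return 1.0 - min(float(de.mean()) / 100.0, 1.0)
```

As you point out, a global mean cannot see contrast loss; one extension would be to also compute the pairwise delta-E between the palette colors before and after simulation, so that a text/background pair collapsing together drags the score down.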

Related

Shape detection with cv2

I am trying to detect shapes in pixelated images using cv2. I am currently following examples such as https://www.geeksforgeeks.org/how-to-detect-shapes-in-images-in-python-using-opencv/, but this approach only identifies each individual pixel rather than the larger pattern. I've attempted to add blur, but that causes cv2 to identify the shapes as circles rather than squares.
Example images:
What would be the best way to process these? Ideally, for the two images above, there are 3 distinct square patterns.
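One direction that may help, sketched as an assumption rather than a definitive pipeline (the filename is hypothetical): a morphological closing merges the individual pixels into solid blobs without rounding their corners the way a blur does, after which approxPolyDP can classify each contour by its corner count:

```python
import cv2

img = cv2.imread('pattern.png', cv2.IMREAD_GRAYSCALE)  # hypothetical filename
_, thresh = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Closing fills the gaps between neighbouring pixels so clusters become
# solid regions, unlike a Gaussian blur, which rounds corners off.
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)

contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    # Approximate the contour; four vertices suggests a square/rectangle
    approx = cv2.approxPolyDP(cnt, 0.04 * cv2.arcLength(cnt, True), True)
    if len(approx) == 4:
        x, y, w, h = cv2.boundingRect(approx)
        print('square pattern at', (x, y), 'size', (w, h))
```

The kernel size controls how far apart two pixels can be and still merge into one pattern, so it will need tuning to the pixel spacing in your images.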

How to create a difference mask between two images and then superimpose that mask over another image

I have two textures in DDS format: one original and one with different colors (changed brightness, color saturation, etc.). Unfortunately, I don't know which color settings were altered in the second texture, so I can't transfer them to another, similar DDS texture.
I would like to compare these two textures, the original and the changed one, and extract the difference between them, creating a so-called mask. Then I want to apply this mask to another texture that has the same dimensions as the original but the wrong colors, so that it gets the same color saturation, brightness, etc. as the changed image.
I tried to do this with Compressonator. I managed to get a difference, but I can't combine my image with the difference to get the same effect as in the changed image.
Below is a link to the original and modified textures and the difference I was able to extract from them; I would like to get the same effect as the changed texture by applying the difference to the original:
https://mega.nz/folder/G5AXjSAB#7NXZ0CVMJTs4mo0YgKwy3g
Thanks for the help.
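If the edit happens to be additive, a minimal Pillow + NumPy sketch looks like the following; it assumes the textures have first been converted to a lossless format such as PNG (Pillow can read DDS, but saving DDS reliably may need an external tool), and all the filenames are hypothetical:

```python
import numpy as np
from PIL import Image

# int16 so the difference can go negative without wrapping around
original = np.asarray(Image.open('original.png'), dtype=np.int16)
changed = np.asarray(Image.open('changed.png'), dtype=np.int16)
target = np.asarray(Image.open('other_texture.png'), dtype=np.int16)

# The per-channel difference is the "mask": what the edit added to or
# removed from each pixel of the original.
diff = changed - original

# Apply the same difference to the other texture and clamp to 0-255.
result = np.clip(target + diff, 0, 255).astype(np.uint8)
Image.fromarray(result).save('result.png')
```

Note that brightness and saturation edits are often multiplicative rather than additive, so if the straight difference doesn't reproduce the changed look, a per-channel ratio (changed / original, multiplied into the target) may match better.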

How to group the image regions of the same color and get their coordinates, ignoring the background color, using python

Input image
I need to group the region in green and get its coordinates, like in this output image. How can I do this in Python?
Please see the attached images for better clarity
First, split out the green channel of the image, put a threshold on it, and get a binary image. This binary image contains the objects of the green area. Then dilate the image with a suitable kernel; this makes adjacent objects stick to each other and merge into one big object. Next, use findContours to get the sizes of all objects, keep the biggest object, and remove the others; that image is your mask. Now you can reconstruct the original image (green channel only) with this mask and fit a box to the remaining objects.
You can easily find code for each part; a rough sketch of the whole pipeline follows.
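For example, in OpenCV (the threshold value, kernel size, and filenames are guesses that will need tuning to the actual image):

```python
import cv2
import numpy as np

img = cv2.imread('input.png')   # hypothetical filename
green = img[:, :, 1]            # OpenCV loads BGR, so index 1 is green
_, binary = cv2.threshold(green, 127, 255, cv2.THRESH_BINARY)

# Dilation makes neighbouring green blobs stick together into one object
kernel = np.ones((15, 15), np.uint8)
dilated = cv2.dilate(binary, kernel, iterations=1)

# Keep only the biggest object and fit a box around it
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
biggest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(biggest)
print('green region:', (x, y, x + w, y + h))

cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)
cv2.imwrite('output.png', img)
```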

How to extract the relative colour intensity in a black and white image in python?

Suppose I have a black and white image. How do I convert the colour intensity at each point into a numerical value that represents its relative intensity?
I checked somewhere on the web and found the following:
Intensity = np.asarray(PIL.Image.open('test.jpg'))
What's the difference between asarray and array?
Besides, the shape of the array Intensity is (181L, 187L, 3L). The size of the image test.jpg is 181x187, so what does the extra 3 represent?
And are there any other better ways of extracting the colour intensity of an image?
Thank you.
The image is being opened as a color image, not as a black and white one. That is why the shape is 181x187x3: each pixel is an RGB value. Quite often, black-and-white images are actually stored in an RGB format. For an image array image, if np.all(image[:,:,0]==image[:,:,1]) and so on, then you can just use any one of the channels (e.g. image[:,:,0]). Alternatively, you could take the mean with np.mean(image,axis=2).
Note too that the range of values will depend on the format, so depending on what you mean by color intensity, you may need to normalize them. In the case of a JPEG, they are probably uint8, so you may want image[:,:,0].astype('float')/255 or something similar.
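Putting those points together, a short sketch (reusing the test.jpg from the question):

```python
import numpy as np
from PIL import Image

image = np.asarray(Image.open('test.jpg'))  # shape (181, 187, 3), dtype uint8

# If all three channels are identical, the file is grayscale stored as RGB
if image.ndim == 3 and (image[:, :, 0] == image[:, :, 1]).all() \
        and (image[:, :, 1] == image[:, :, 2]).all():
    intensity = image[:, :, 0]      # any single channel will do
else:
    intensity = image.mean(axis=2)  # otherwise average the channels

# Normalize the uint8 range into [0, 1] as a relative intensity
relative = intensity.astype(float) / 255.0
```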

What does matplotlib `imshow(interpolation='nearest')` do?

I use the imshow function with interpolation='nearest' on a grayscale image and get a nice color picture as a result; it looks like it does some sort of color segmentation for me. What exactly is going on there?
I would also like to get something like this for image processing. Is there some function on numpy arrays like interpolate('nearest') out there?
EDIT: Please correct me if I'm wrong, but it looks like it does simple pixel clustering (the clusters are the colors of the corresponding colormap), and the word 'nearest' means that it takes the nearest colormap color (probably in RGB space) to decide which cluster a pixel belongs to.
interpolation='nearest' simply displays the image without trying to interpolate between pixels when the display resolution is not the same as the image resolution (which is most often the case). The result is an image in which each image pixel is displayed as a square of multiple display pixels.
There is no relation between interpolation='nearest' and the grayscale image being displayed in color. By default, imshow uses the jet colormap to display an image. If you want it displayed in grayscale, call the gray() method to select the gray colormap.
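A quick way to see both effects side by side, using a small random array so the pixel blocks are obvious:

```python
import numpy as np
import matplotlib.pyplot as plt

img = np.random.rand(8, 8)  # small grayscale array

plt.subplot(1, 2, 1)
plt.imshow(img, interpolation='nearest')  # blocky pixels, default colormap
plt.title("interpolation='nearest'")

plt.subplot(1, 2, 2)
plt.imshow(img, interpolation='bilinear', cmap='gray')  # smoothed, gray colormap
plt.title("bilinear, cmap='gray'")

plt.show()
```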
