I'm using Python and PIL (or Pillow) and want to run code on files that contain two pixels of a given intensity, RGB (0,0,255). The pixels may also be close to (0,0,255) but slightly adjusted, e.g. (0,1,255). I'd like to overwrite the two pixels closest to (0,0,255) with exactly (0,0,255).
Is this possible? If so, how?
Here's an example image, and here it is zoomed in on the pixels I want to make "more blue".
The attempt at code I'm looking at comes from here:
# import the necessary packages
import numpy as np
import scipy.spatial as sp
import matplotlib.pyplot as plt
import cv2
from PIL import Image, ImageDraw, ImageFont
# Store all the RGB values of the main colors in an array
# main_colors = [(0,0,0),
#                (255,255,255),
#                (255,0,0),
#                (0,255,0),
#                (0,0,255),
#                (255,255,0),
#                (0,255,255),
#                (255,0,255),
#               ]
main_colors = [(0,0,0),
               (0,0,255),
               (255,255,255),
              ]
background = Image.open("test-small.tiff").convert('RGBA')
background.save("test-small.png")
retina = cv2.imread("test-small.png")
#convert BGR to RGB image
retina = cv2.cvtColor(retina, cv2.COLOR_BGR2RGB)
h,w,bpp = np.shape(retina)
#Change colors of each pixel
#reference :https://stackoverflow.com/a/48884514/9799700
for py in range(0,h):
    for px in range(0,w):
        ########################
        # Used this part to find the nearest color
        # reference: https://stackoverflow.com/a/22478139/9799700
        input_color = (retina[py][px][0], retina[py][px][1], retina[py][px][2])
        tree = sp.KDTree(main_colors)  # note: rebuilding the tree per pixel is part of why this is slow
        distance, result = tree.query(input_color)
        nearest_color = main_colors[result]
        ###################
        retina[py][px][0] = nearest_color[0]
        retina[py][px][1] = nearest_color[1]
        retina[py][px][2] = nearest_color[2]
        print(str(px), str(py))
# show image
plt.figure()
plt.axis("off")
plt.imshow(retina)
plt.savefig('color_adjusted.png')
My logic is to reduce the array of candidate colours so it only contains (0,0,255) (my desired blue) and perhaps (255,255,255) for white - this way only pixels that are black, white, or blue come through.
I've run the code on a smaller image, and it converts the input to the output as desired.
However, the code runs through every pixel, which is slow for larger images (I'm using images of 4000 x 4000 pixels). I would also like to output and save images with the same dimensions as the original file (which I expect to be an option when using plt.savefig).
If this could be optimized, that would be ideal. Similarly, picking the two "most blue" (i.e. closest to (0,0,255)) pixels and rewriting them with (0,0,255) should be quicker and just as effective for me.
As your image is largely unsaturated greys with just a few blue pixels, it will be miles faster to convert to HLS colourspace and look for saturated pixels. You can do further tests easily enough on the identified pixels if you want to narrow it down to just two (a sketch of one way to do that follows the output below):
#!/usr/bin/env python3
import cv2
import numpy as np
# Load image
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
# Convert to HLS, so we can find saturated blue pixels
HLS = cv2.cvtColor(im,cv2.COLOR_BGR2HLS)
# Get x,y coordinates of pixels that have high saturation
SatPix = np.where(HLS[:,:,2]>60)
print(SatPix)
# Make them pure blue and save result (note: OpenCV uses BGR order, so blue is [255,0,0])
im[SatPix] = [255,0,0]
cv2.imwrite('result.png',im)
Output
(array([157, 158, 158, 272, 272, 273, 273, 273]), array([55, 55, 56, 64, 65, 64, 65, 66]))
That means pixels 157,55 and 158,55, and 158,56 and so on are blue. The conversion to HLS colourspace, identification of saturated pixels and setting them to solid blue takes 758 microseconds on my Mac.
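One possible way to narrow it down to just the two most saturated pixels - a minimal sketch, assuming "most saturated" is the tie-breaker you want:
# Hypothetical follow-on: keep only the two most saturated pixels
flat = np.argpartition(HLS[:,:,2].ravel(), -2)[-2:]  # flat indices of the two highest saturations
rows, cols = np.unravel_index(flat, HLS[:,:,2].shape)
im[rows, cols] = [255,0,0]                           # pure blue in BGR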
You can achieve the same type of thing without writing any Python just using ImageMagick on the command line:
magick eye.png -colorspace hsl -channel g -separate -auto-level result.png
Here's a different way to do it. Use SciPy's cdist() to work out the Euclidean distance from each pixel to Blue, then pick the nearest two:
#!/usr/bin/env python3
import cv2
import numpy as np
from scipy.spatial.distance import cdist
# Load image, save shape, reshape as a tall column of 3-channel (BGR) values
im = cv2.imread('eye.png', cv2.IMREAD_COLOR)
origShape = im.shape
im = im.reshape(-1,3)
# Work out distance to pure Blue for each pixel
blue = np.full((1,3), [255, 0, 0])       # pure blue, in BGR order
d = cdist(im, blue, metric='euclidean')  # THIS LINE DOES ALL THE WORK
indexNearest = np.argmin(d)              # get index of pixel nearest to blue
im[indexNearest] = [0,0,255]             # make it red (BGR order)
d[indexNearest] = 99999                  # make it appear further so we don't find it again
indexNearest = np.argmin(d)              # get index of pixel second nearest to blue
im[indexNearest] = [0,0,255]             # make it red too
# Reshape back to original shape and save result
im = im.reshape(origShape)
cv2.imwrite('result.png',im)
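As an aside, you could pick the two smallest distances in a single call with np.argpartition rather than masking and repeating argmin - a minimal sketch, operating on the flat im and d from before the reshape:
nearestTwo = np.argpartition(d.ravel(), 2)[:2]  # flat indices of the two smallest distances
im[nearestTwo] = [0,0,255]                      # make both red (BGR order)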
Related
import numpy as np
from imageio import imread, imwrite
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')[...,:3]
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')[...,:3]
result = np.hstack((im1,im2))
imwrite('result.jpg', result)
Here are the original images, opened directly from the URLs (I'm trying to concatenate the two images into one and keep the background white):
As can be seen, both have no background, but when joining the two via Python the background becomes this moss green:
I tried modifying the channel selection:
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')[...,:1]
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')[...,:1]
But the result is black & white, with the background still looking like it was converted from the previous green, even though the PNGs don't have such a background colour.
How should I proceed to solve my need?
There is a 4th channel in your images - transparency. You are discarding that channel with [...,:3] (and again with [...,:1]). This is a mistake.
If you retain the alpha channel this will work fine:
import numpy as np
from imageio import imread, imwrite
im1 = imread('https://api.sofascore.app/api/v1/team/2697/image')
im2 = imread('https://api.sofascore.app/api/v1/team/2692/image')
result = np.hstack((im1,im2))
imwrite('result.png', result)
However, if you try to make a jpg, you will have a problem:
>>> imwrite('test.jpg', result)
OSError: JPEG does not support alpha channel.
This is correct, as JPGs do not do transparency. If you would like to use transparency and also have your output be a JPG, I suggest a priest.
You can replace the transparent pixels by using np.where and looking for places that the alpha channel is 0:
result = np.hstack((im1,im2))
result[np.where(result[...,3] == 0)] = [255, 255, 255, 255]
imwrite('result.png', result)
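If you do need a JPEG at the end, a minimal sketch would be to whiten the transparent pixels as above and only then drop the alpha channel:
result[np.where(result[...,3] == 0)] = [255, 255, 255, 255]
imwrite('result.jpg', result[...,:3])  # alpha dropped only after whitening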
If you want to improve image quality, here is a solution:
# External libraries used for
# Image IO
from PIL import Image
# Morphological filtering
from skimage.morphology import opening
from skimage.morphology import disk
# Data handling
import numpy as np
# Connected component filtering
import cv2
black = 0
white = 255
threshold = 160
# Open input image in grayscale mode and get its pixels.
img = Image.open("image.jpg").convert("LA")
pixels = np.array(img)[:,:,0]
# Threshold: pixels above the threshold become white, the rest black
pixels[pixels > threshold] = white
pixels[pixels <= threshold] = black
# Morphological opening
blobSize = 1 # Select the maximum radius of the blobs you would like to remove
structureElement = disk(blobSize) # you can define different shapes, here we take a disk shape
# We need to invert the image such that black is background and white foreground to perform the opening
pixels = np.invert(opening(np.invert(pixels), structureElement))
# Create and save new image.
newImg = Image.fromarray(pixels).convert('RGB')
newImg.save("newImage1.PNG")
# Find the connected components (black objects in your image)
# Because the function searches for white connected components on a black background, we need to invert the image
nb_components, output, stats, centroids = cv2.connectedComponentsWithStats(np.invert(pixels), connectivity=8)
# For every connected component in your image, you can obtain the number of pixels from the stats variable in the last
# column. We remove the first entry from sizes, because this is the entry of the background connected component
sizes = stats[1:,-1]
nb_components -= 1
# Define the minimum size (number of pixels) a component should consist of
minimum_size = 100
# Create a new image
newPixels = np.full(pixels.shape, white, dtype=np.uint8)
# Iterate over all components in the image, only keep the components larger than minimum size
# (sizes was offset by removing the background entry, so component i+1 has size sizes[i])
for i in range(nb_components):
    if sizes[i] > minimum_size:
        newPixels[output == i+1] = 0
# Create and save new image.
newImg = Image.fromarray(newPixels).convert('RGB')
newImg.save("new_img.PNG")
If you want to change the background of an image, pixellib is a good solution because it is a reasonable and easy library to use.
import pixellib
from pixellib.tune_bg import alter_bg
change_bg = alter_bg()
change_bg.load_pascalvoc_model("deeplabv3_xception_tf_dim_ordering_tf_kernels.h5")
change_bg.color_bg("sample.png", colors=(255,255,255), output_image_name="colored_bg.png")
This code requires pixellib version 0.6.1 or higher.
I am trying to write an algorithm to systematically determine how many different "curves" are in an image. Example Image. I'm specifically interested in the white lines here, so I've used a color threshold to mask the rest of the image and only get the white pixels. These lines represent a path run by a player (wide receivers in the NFL), so I'm interested in the x and y coordinates that the path represents - and each "curve" represents a different path that the player took (or "route"). All curves should start on or behind the blue line.
However, while I can get just the white pixels, I can't figure out how to systematically identify the separate curves. In this example image, there are 8 white curves (or routes) present. I've identified those curves in this image. I tried edge detection, and then using scipy ndimage to get the number of connected components, but because the curves overlap it counts them as connected and only gives me 3 labeled components for this image as opposed to eight. Here's what the edge detection output looks like. Is there a better way to go about this? Here is my sample code.
import cv2
from skimage.morphology import skeletonize
import numpy as np
from scipy import ndimage
#Read in image
image = cv2.imread('example_image.jpeg')
#Color boundary to get white pixels
lower_white = np.array([230, 230, 230])
upper_white = np.array([255, 255, 255])
#mask image for white pixels
mask = cv2.inRange(image, lower_white, upper_white)
c_pixels = cv2.bitwise_and(image, image, mask=mask)
#clip pixel values to 0/1 so skeletonize can use them
c_pixels = c_pixels.clip(0,1)
ske_c = skeletonize(c_pixels[:,:,1]).astype(np.uint8)
#Edge detection
inputImage = ske_c*255
edges = cv2.Canny(inputImage, 100, 200, apertureSize=7)
#Show edges
cv2.imshow('edges', edges)
cv2.waitKey(0)
cv2.destroyAllWindows()
#Find number of components
# smooth the image (to remove small objects); set the threshold
edgesf = ndimage.gaussian_filter(edges, 1)
T = 50 # set threshold by hand to avoid installing `mahotas` or
# `scipy.stsci.image` dependencies that have threshold() functions
# find connected components
labeled, nr_objects = ndimage.label(edgesf > T) # `dna[:,:,0]>T` for red-dot case
print("Number of objects is %d " % nr_objects)
I have a huge dataset of images like this:
I would like to change the colors on these. All white should stay white, all purple should turn white and everything else should turn black. The desired output would look like this:
I've made the code below and it does what I want, but it takes way too long to go through the number of pictures I have. Is there another and faster way of doing this?
path = r"C:path"
for f in os.listdir(path):
    f_name = os.path.join(path, f)
    if f_name.endswith(".png"):
        im = Image.open(f_name)
        fn, fext = os.path.splitext(f_name)
        print(fn)
        im = im.convert("RGBA")
        for x in range(im.size[0]):
            for y in range(im.size[1]):
                if im.getpixel((x, y)) == (255, 255, 255, 255):
                    im.putpixel((x, y), (255, 255, 255, 255))
                elif im.getpixel((x, y)) == (128, 64, 128, 255):
                    im.putpixel((x, y), (255, 255, 255, 255))
                else:
                    im.putpixel((x, y), (0, 0, 0, 255))
        im.show()
Your images seem to be palettised, as they represent segmentations, or labelled classes, and there are typically fewer than 256 classes. As such, each pixel is just a label (or class number) and the actual colours are looked up in a 256-element table, i.e. the palette.
Have a look here if you are unfamiliar with palettised images.
So, you don't need to iterate over all 12 million pixels; you can instead just iterate over the palette, which is only 256 elements long...
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image
# Load image
im = Image.open('image.png')
# Check it is palettised as expected
if im.mode != 'P':
sys.exit("ERROR: Was expecting a palettised image")
# Get palette and make into Numpy array of 256 entries of 3 RGB colours
palette = np.array(im.getpalette(),dtype=np.uint8).reshape((256,3))
# Name our colours for readability
purple = [128,64,128]
white = [255,255,255]
black = [0,0,0]
# Go through palette, setting purple to white
palette[np.all(palette==purple, axis=-1)] = white
# Go through palette, setting anything not white to black
palette[~np.all(palette==white, axis=-1)] = black
# Apply our modified palette and save
im.putpalette(palette.ravel().tolist())
im.save('result.png')
That takes 290ms including loading and saving the image.
If you have many thousands of images to do, and you are on a decent OS, you can use GNU Parallel. Change the above code to accept a command-line parameter which is the name of the image, and save it as recolour.py then use:
parallel ./recolour.py {} ::: *.png
It will keep all CPU cores on your CPU busy till they are all processed.
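For reference, a minimal sketch of recolour.py with the filename taken from the command line (the in-place save is an assumption - adjust to taste):
#!/usr/bin/env python3
import sys
import numpy as np
from PIL import Image

# Usage: ./recolour.py image.png
fname = sys.argv[1]
im = Image.open(fname)
palette = np.array(im.getpalette(), dtype=np.uint8).reshape((256,3))
palette[np.all(palette==[128,64,128], axis=-1)] = [255,255,255]  # purple -> white
palette[~np.all(palette==[255,255,255], axis=-1)] = [0,0,0]      # non-white -> black
im.putpalette(palette.ravel().tolist())
im.save(fname)  # overwrites in place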
Keywords: Image processing, Python, Numpy, PIL, Pillow, palette, getpalette, putpalette, classes, classification, label, labels, labelled image.
If you're open to using NumPy, you can heavily speed up pixel manipulations:
from PIL import Image
import numpy as np
# Open PIL image
im = Image.open('path/to/your/image.png').convert('RGBA')
# Convert to NumPy array
pixels = np.array(im)
# Get logical indices of all white and purple pixels
idx_white = (pixels == (255, 255, 255, 255)).all(axis=2)
idx_purple = (pixels == (128, 64, 128, 255)).all(axis=2)
# Generate black image; set alpha channel to 255
out = np.zeros(pixels.shape, np.uint8)
out[:, :, 3] = 255
# Set white and purple pixels to white
out[idx_white | idx_purple] = (255, 255, 255, 255)
# Convert back to PIL image
im = Image.fromarray(out)
That code generates the desired output, and takes around 1 second on my machine, whereas your loop code needs 33 seconds.
Hope that helps!
I want to calculate the percentage of black pixels and white pixels in the picture; it's a colourful one.
import cv2
import numpy as np
import matplotlib.pyplot as plt
image = cv2.imread("image.png")
cropped_image = image[183:779,0:1907,:]
You don't want to run for loops over images - it is dog slow - no disrespect to dogs. Use Numpy.
#!/usr/bin/env python3
import numpy as np
import random
# Generate a random image 640x150 with many colours but no black or white
im = np.random.randint(1,255,(150,640,3), dtype=np.uint8)
# Draw a white rectangle 100x100
im[10:110,10:110] = [255,255,255]
# Draw a black rectangle 10x10
im[120:130,200:210] = [0,0,0]
# Count white pixels
sought = [255,255,255]
white = np.count_nonzero(np.all(im==sought,axis=2))
print(f"white: {white}")
# Count black pixels
sought = [0,0,0]
black = np.count_nonzero(np.all(im==sought,axis=2))
print(f"black: {black}")
Output
white: 10000
black: 100
If you mean you want the tally of pixels that are either black or white, you can either add the two numbers above together, or test for both in one go like this:
blackorwhite = np.count_nonzero(np.all(im==[255,255,255],axis=2) | np.all(im==[0,0,0],axis=2))
If you want the percentage, bear in mind that the total number of pixels is easily calculated with:
total = im.shape[0] * im.shape[1]
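from which, for example:
pct_white = 100 * white / total  # percentage of white pixels
pct_black = 100 * black / total  # percentage of black pixels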
As regards testing, it is the same as any software development - get used to generating test data and using it :-)
white_pixels = np.logical_and(255==cropped_image[:,:,0],np.logical_and(255==cropped_image[:,:,1],255==cropped_image[:,:,2]))
num_white = np.sum(white_pixels)
and the same with 0 for the black ones
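Spelled out for reference, the black counterpart would be:
black_pixels = np.logical_and(0==cropped_image[:,:,0],np.logical_and(0==cropped_image[:,:,1],0==cropped_image[:,:,2]))
num_black = np.sum(black_pixels)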
Keep variables for white_count and black_count and just iterate through the image matrix. Whenever you encounter 255, increase white_count, and whenever you encounter 0, increase black_count. Try it yourself; if you have no success I'll post the code here :)
P.S. Keep the dimensionality of the image in mind.
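A minimal sketch of that loop, assuming img is a greyscale NumPy array (for an RGB image, compare whole pixels instead):
white_count = 0
black_count = 0
for row in img:
    for px in row:
        if px == 255:
            white_count += 1
        elif px == 0:
            black_count += 1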
You can use the getcolors() function from PIL Image; this function returns a list of tuples with the colours found in the image and the amount of each one. I'm using the following function to return a dictionary with colour as key and count as value.
from PIL import Image
def getcolordict(im):
    w, h = im.size
    colors = im.getcolors(w*h)
    colordict = {x[1]: x[0] for x in colors}
    return colordict
im = Image.open('image.jpg')
colordict = getcolordict(im)
# get the amount of black pixels in image
# in RGB black is 0,0,0 (default to 0 if the colour is absent)
blackpx = colordict.get((0,0,0), 0)
# get the amount of white pixels in image
# in RGB white is 255,255,255
whitepx = colordict.get((255,255,255), 0)
# percentage
w,h = im.size
totalpx = w*h
whitepercent=(whitepx/totalpx)*100
blackpercent=(blackpx/totalpx)*100
I found something reasonably close to what I want to do here:
Python: PIL replace a single RGBA color
However, in my scenario I have images that were originally grayscale with color annotations added to the image (an x-ray with notes in color). I would like to replace any pixel that is not grayscale with random noise. My main problem is replacing values with noise and not a single color.
Edit: I figured out the random noise part, now just trying to figure out how to separate the color pixels from the pixels that were originally in grayscale.
from PIL import Image
import numpy as np
im = Image.open('test.jpg')
data = np.array(im) # "data" is a height x width x 3 numpy array
red, green, blue = data.T # Temporarily unpack the bands for readability
# Replace white with random noise...
white_areas = (red == 255) & (blue == 255) & (green == 255)
Z = np.random.randint(0, 256, data[white_areas.T].shape, dtype=np.uint8)  # uint8 noise
data[white_areas.T] = Z
im2 = Image.fromarray(data)
im2.show()
You could try
col_areas = np.logical_or(np.not_equal(red, blue), np.not_equal(red, green))
which flags any pixel whose channels are not all equal, i.e. any pixel that is not a pure grey.
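A sketch of how that mask might plug into the noise-replacement code above (same names and shapes as the earlier example; uint8 noise is an assumption):
col_areas = np.logical_or(np.not_equal(red, blue), np.not_equal(red, green))
data[col_areas.T] = np.random.randint(0, 256, data[col_areas.T].shape, dtype=np.uint8)
im2 = Image.fromarray(data)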
You could use this Pixel Editing python module
from PixelMenu import ChangePixels as cp
im = Image.open('test.jpg')
grayscalergb=(128, 128, 128) #RGB value of gray in your image
noise=(100,30,5) #You can adjust the noise based upon your requirements
outputimg=cp(im, col=grayscalergb, col2=noise, save=False,tolerance=100) #Adjust the tolerance until you get the right amount of noise in your image
Also:
I'd suggest using PNG images instead of JPEG, because JPEG compression is lossy - the RGB values change slightly every time the image is saved, which makes exact colour matching unreliable.