Customize (change) image (pixel) colours - Python

I want to customize the image color to make a similar image with color variants.
Example :
For the above image, I want to replace the red color with other colors like blue, green, yellow, black, etc.
I tried :
from PIL import Image

filename = "./Logo.jpg"
picture = Image.open(filename, 'r')
_colors = [(255, 255, 255), (128, 128, 0), (128, 128, 128), (192, 128, 0), (128, 64, 0), (0, 192, 0), (128, 64, 128), (255, 255, 255)]
width, height = picture.size
for x in range(width):
    for y in range(height):
        current_color = picture.getpixel((x, y))
        # print(current_color)
        if current_color in _colors:
            picture.putpixel((x, y), (255, 5, 255))
            # print("Y")
picture.save("./test/change.png")
The above code is what is commonly suggested for this task, but it is hard to make it work here: it only replaces pixels whose colour exactly matches one of the entries in the "_colors" list.
The output image is :
Any solution to the above problem? Any smart way to deal with this using machine learning? Any solution using another programming language?

I'm not familiar with PIL, and I've heard it's slow, so here's an OpenCV version:
import cv2
import numpy as np

img = cv2.imread('./Logo.jpg')  # BGR image

# For red it's easier to work with the negative image,
# since red hue is split across [170, 180] and [0, 10]
hsv_inv = cv2.cvtColor(255 - img, cv2.COLOR_BGR2HSV)

# These are the limits for cyan, but we're working on the negative image, so
# they select red in the original
lower_range = np.array([80, 0, 0])
upper_range = np.array([100, 255, 255])

# mask the red
mask = cv2.inRange(hsv_inv, lower_range, upper_range)

# replace red by green (shift hue by +60)
green_hsv = hsv_inv.copy()
green_hsv[np.where(mask)] += np.array([60, 0, 0], dtype=np.uint8)
green_img = 255 - cv2.cvtColor(green_hsv, cv2.COLOR_HSV2BGR)

# replace red by purple (shift hue by -30)
purple_hsv = hsv_inv.copy()
purple_hsv[np.where(mask)] -= np.array([30, 0, 0], dtype=np.uint8)
purple_img = 255 - cv2.cvtColor(purple_hsv, cv2.COLOR_HSV2BGR)
And the result (please ignore the axis ticks; I displayed the images with matplotlib).
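If you'd rather stay with plain NumPy and skip OpenCV, the same idea (mask the target colour within a tolerance, then recolour it) can be sketched as below. The tiny in-memory test image, the target colour, and the tolerance value are all made up for illustration:

```python
import numpy as np

# Build a tiny test image in memory (a red block on white) instead of
# loading Logo.jpg, so the sketch runs on its own.
img = np.full((4, 4, 3), 255, dtype=np.uint8)
img[1:3, 1:3] = (255, 0, 0)  # red block

# Mask every pixel within `tolerance` of pure red, then recolour it.
target = np.array([255, 0, 0])
tolerance = 40
mask = np.all(np.abs(img.astype(int) - target) <= tolerance, axis=-1)
img[mask] = (0, 128, 255)  # replace red with a blue
```

The tolerance makes this robust to JPEG compression artifacts, which the exact-match `getpixel` loop in the question is not.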


How to lower transparency to line in Pillow?

How do I lower the opacity of a line? I would like to lower the opacity of one of the lines in the example below.
from PIL import Image, ImageDraw
img = Image.new('RGB', (100, 100), (255, 255, 255))
draw = ImageDraw.Draw(img)
draw.line((100, 30, 0, 30), (0, 0, 0), 20)
draw.line((100, 70, 0, 70), (0, 0, 0), 20)
img.show()
I have seen in one example they created opacity like this...
TRANSPARENCY = .25 # Degree of transparency, 0-100%
OPACITY = int(255 * TRANSPARENCY)
But don't know how to apply to one of lines. Any ideas?
EDIT
I made some changes (based on the answer by Pedro Maia), but it still doesn't work: it just changes the colour, it doesn't lower the opacity to let the background colour show through.
from PIL import Image, ImageDraw
img = Image.new('RGBA', (500, 500), (255, 255, 255))
draw = ImageDraw.Draw(img)
TRANSPARENCY = .25 # Degree of transparency, 0-100%
draw.line((200, 0, 200, 600),(255, 0, 0), 60)
draw.line((500, 100, 0, 100), (0, 0, 0, int(255 * TRANSPARENCY)), 60)
draw.line((500, 400, 0, 400),(0, 0, 0), 60)
img
And I have to convert it to RGB to export it as 'jpg'
You would have to do something like the following, which works similarly to the example code you found, to do what (I think) you want. I changed the code you added to your question in the EDIT slightly so it better demonstrates that lines with different amounts of transparency can be drawn.
from PIL import Image, ImageDraw

RED = (255, 0, 0)
BLACK = (0, 0, 0)
WHITE = (255, 255, 255)

# Calculate alpha given a 0-100% opacity value.
opacity = lambda transparency: (int(255 * (transparency/100.)),)  # Returns a 1-tuple.

def draw_transp_line(image, xy, color, width=1, joint=None):
    """ Draw a line with a transparent color on the specified image. """
    if len(color) < 4:  # Missing alpha?
        color += opacity(100)  # Opaque since alpha wasn't specified.
    # Make an overlay image the same size as the specified image, initialized
    # to a fully transparent (0% opaque) version of the line color, then draw
    # a semi-transparent line on it.
    overlay = Image.new('RGBA', image.size, color[:3] + opacity(0))
    draw = ImageDraw.Draw(overlay)  # Create a context for drawing things on it.
    draw.line(xy, color, width, joint)
    # Alpha composite the overlay image onto the original.
    image.alpha_composite(overlay)

# Create opaque white RGBA background image.
img = Image.new('RGBA', (500, 500), WHITE + opacity(100))
draw_transp_line(img, ((200, 0), (200, 600)), RED + opacity(100), 60)
draw_transp_line(img, ((500, 100), (0, 100)), BLACK + opacity(25), 60)
draw_transp_line(img, ((150, 50), (600, 400)), BLACK + opacity(50), 60)

img = img.convert("RGB")  # Remove alpha for saving in JPG format.
img.save('transparent_lines.jpg')
img.show()
JPG image created:
With draw.line you can pass either RGB or RGBA as the colour argument; just include the transparency value:
draw.line((100, 30, 0, 30), (0, 0, 0, int(255 * TRANSPARENCY)), 20)
Also when creating the image set it as RGBA:
img = Image.new('RGBA', (100, 100), (255, 255, 255))

How to get the bounding box of regions excluding specific RGB values

I am currently using PIL.Image.Image.getbbox() to get the bounding box for the non-zero (non-transparent) regions of an image.
What if I have an image that has a background of a specific color? How can I get the bounding box of the image then? Same idea as getbbox() but instead of non-zero, I specify the RGB values.
I'm afraid my comment didn't properly express what I wanted to suggest, so here's a full answer:
1. Make a copy of your image (the one that has a specific background color).
2. On that copy, replace the specific background color with black.
3. Call getbbox on that copy.
Maybe, the following code and examples make things more clear:
import numpy as np
from PIL import Image, ImageDraw

# Black background
img = Image.new('RGB', (400, 400), (0, 0, 0))
draw = ImageDraw.Draw(img)
draw.rectangle((40, 40, 160, 160), (255, 0, 0))
draw.rectangle((280, 260, 380, 330), (0, 255, 0))
img.save('black_bg.png')
print(img.getbbox(), '\n')

# Specific color background
bg_color = (255, 255, 0)
img = Image.new('RGB', (400, 400), bg_color)
draw = ImageDraw.Draw(img)
draw.rectangle((40, 40, 160, 160), (255, 0, 0))
draw.rectangle((280, 260, 380, 330), (0, 255, 0))
img.save('color_bg.png')
print(img.getbbox(), '\n')

# Suggested color replacing (on image copy) - Pillow only, slow
img_copy = img.copy()
for y in range(img_copy.size[1]):
    for x in range(img_copy.size[0]):
        if img_copy.getpixel((x, y)) == bg_color:
            img_copy.putpixel((x, y), (0, 0, 0))
print(img_copy.getbbox(), '\n')

# Suggested color replacing (on image copy) - NumPy, fast
img_copy = np.array(img)
img_copy[np.all(img_copy == bg_color, axis=2), :] = 0
print(Image.fromarray(img_copy).getbbox())
There's one image with black background:
The corresponding output of getbbox is:
(40, 40, 381, 331)
Also, there's an image with a specific background color (yellow):
Calling getbbox on that image, obviously, returns:
(0, 0, 400, 400)
By simply replacing yellow with black in some copy of the second image, we again get the correct results from getbbox (both proposed methods):
(40, 40, 381, 331)
(40, 40, 381, 331)
Since iterating single pixels in Pillow is kinda slow, you could also use NumPy's vectorization abilities to speed up that task.
----------------------------------------
System information
----------------------------------------
Platform: Windows-10-10.0.16299-SP0
Python: 3.8.5
NumPy: 1.19.5
Pillow: 8.1.0
----------------------------------------

Why does Pillow convert return colours outside the specified palette?

Using Pillow 5.4.1, Python 3.6.8
Given an image image.png with 9 distinct colours, and a palette of 5 distinct colours, one would expect that asking Pillow to reduce the image to that palette would produce an image containing only colours from the palette.
However, the im.im.convert method returns an image with colours outside the specified palette; specifically, the extra colours are always greyscale (R==G==B values).
Sample Code, outputting the unique set of colours for the original image, palette, and converted image.
from PIL import Image

im = Image.open("image.png")

# Create palette from raw data.
# Colours: Red, Green, Blue, Black, and White (5 total)
RGBBW = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 0), (255, 255, 255)]
data = sum([list(x) for x in RGBBW], [])[:256]
pimg = Image.new("P", (16, 16))
pimg.putpalette(data)

# Hack
im.convert("RGB")
cim_ = im.im.convert("P", 0, pimg.im)
cim = im._new(cim_).convert("RGB")

def colors(im):
    cs = []
    for x in range(im.width):
        for y in range(im.height):
            cs.append(im.getpixel((x, y)))
    return list(set(cs))

print("Original: %s" % colors(im))
print("Palette: %s" % RGBBW)
print("Convert: %s" % colors(cim))
Input image: a 3x3 pixel image, all pixels unique colours (a larger version was shown for visualisation only).
Output:
Original: [(85, 85, 85, 255), (0, 0, 255, 255), (0, 0, 0, 255), (255, 0, 0, 255), (0, 255, 255, 255), (255, 255, 255, 255), (255, 255, 0, 255), (255, 0, 255, 255), (0, 255, 0, 255)]
Palette: [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 0), (255, 255, 255)]
Convert: [(252, 252, 252), (0, 0, 255), (255, 0, 0), (0, 0, 0), (170, 170, 170), (0, 255, 0), (84, 84, 84)]
(Note that the hack to prevent dither is a workaround, pending a fix I've contributed to master (yet to be cut into a new release))
The values [(170, 170, 170), (84, 84, 84), (252, 252, 252)] appear in the converted image, but were not specified in the original palette. They all happen to be greyscale.
I think there's something in src/libImaging/Palette.c that's affecting this, but I'm not sure whether this is a bug in the code or a 'feature' of libjpeg.
Turns out this issue is both user error and an unexpected initialisation issue.
The initialisation issue: As pointed out in the comments, the palette for a new image is specifically initialised as greyscale.
If we replace the entire palette with our own, then we're fine. Except I wasn't replacing the entire palette.
data = sum([list(x) for x in RGBBW], [])[:256]
This line is logically incorrect.
The palette expects a flattened list of up to 256 RGB triples, that is, an array of max length 768. If the array is anything shorter than that, the remaining greyscale entries are still in play.
The better way to re-initialise the palette is to repeat one of our values so that all of the greyscale entries are overridden.
In this case:
data = (sum([list(x) for x in RGBBW], []) + (list(RGBBW[-1]) * (256 - len(RGBBW))))[:256*3]
That is:
data = (
    sum([list(x) for x in RGBBW], [])         # flatten the nested list
    + (list(RGBBW[-1]) * (256 - len(RGBBW)))  # extend with the last value, to the required length, if needed
)[:256*3]                                     # and trim back, if needed
This will result in the palette always being 768 length.
Using the last value from the provided list is an arbitrary choice; it is only used as valid padding.
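For reference, the same result is reachable through the public Image.quantize(palette=...) API on current Pillow versions, avoiding the im.im.convert hack. A minimal self-contained sketch with the fully padded palette (the solid-red test image here is made up for illustration; the original question used image.png):

```python
from PIL import Image

RGBBW = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (0, 0, 0), (255, 255, 255)]

# Flatten and pad to the full 768 entries by repeating the last colour,
# so none of the default greyscale palette survives.
flat = sum([list(x) for x in RGBBW], [])
flat += list(RGBBW[-1]) * (256 - len(RGBBW))

pimg = Image.new("P", (16, 16))
pimg.putpalette(flat)

# Quantize a test image against that palette via the public API.
src = Image.new("RGB", (3, 3), (255, 0, 0))  # solid red, present in the palette
out = src.quantize(palette=pimg).convert("RGB")
```

With the padded palette, every colour in `out` comes from the five requested entries rather than from leftover greys.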

How to correctly test RGB values using images (OpenCV & Python)?

I'm currently working on a project which uses color detection (OpenCV). I'm very new to Python and OpenCV so I'm not getting it work exactly as I want it to.
I have a class ColourDetection (any suggestions for fine-tuning the HSV values?) which contains the static method detect_color, which we use to detect a certain colour. Here it is:
#!/usr/bin/env python
import cv2
import numpy as np

class ColourDetection(object):
    # HSV 180-255-255 max values in OpenCV (recalculate from GIMP)
    # these need adjusting
    BOUNDARIES = {
        'red': ([170, 160, 60], [180, 255, 255]),
        'blue': ([110, 50, 50], [130, 255, 255]),
        'green': ([38, 50, 50], [75, 255, 255]),
        'yellow': ([103, 50, 50], [145, 255, 255])
    }

    @staticmethod
    def detect_color(detection_image):
        img_hsv = cv2.cvtColor(detection_image, cv2.COLOR_BGR2HSV)
        # loop over all defined colours
        for k, v in ColourDetection.BOUNDARIES.iteritems():
            # convert to numpy arrays
            lower_color = np.array(v[0])
            upper_color = np.array(v[1])
            # create mask from colour bounds
            mask = cv2.inRange(img_hsv, lower_color, upper_color)
            # count found colour pixels
            amount_not_zero = cv2.countNonZero(mask)
            if amount_not_zero > 9000:
                return k
            else:
                return "No colour found"
The first two tests pass. However, the test that should return red fails for these RGB values. It seems the HSV values need some fine-tuning. Can anyone help me?
from unittest import TestCase
from ColourDetection import ColourDetection
import numpy as np

__author__ = 'user'

class TestColourDetection(TestCase):
    def test_detect_color_not_found(self):
        image = np.zeros((512, 512, 3), np.uint8)
        color = ColourDetection.detect_color(image)
        self.assertEqual("No colour found", color)

    def test_detect_color_is_red(self):
        image = np.zeros((512, 512, 3), np.uint8)
        image[:, 0:512] = (0, 0, 255)
        color = ColourDetection.detect_color(image)
        self.assertEqual("red", color)

    def test_detect_color_is_blue(self):
        image = np.zeros((512, 512, 3), np.uint8)
        image[:, 0:512] = (255, 0, 0)
        color = ColourDetection.detect_color(image)
        self.assertEqual("blue", color)

    def test_detect_color_is_green(self):
        image = np.zeros((512, 512, 3), np.uint8)
        image[:, 0:512] = (0, 255, 0)
        color = ColourDetection.detect_color(image)
        self.assertEqual("green", color)

    def test_detect_color_is_yellow(self):
        image = np.zeros((512, 512, 3), np.uint8)
        image[:, 0:512] = (0, 255, 255)
        color = ColourDetection.detect_color(image)
        self.assertEqual("yellow", color)
The reason you're only detecting blue is probably a bug in your detect_color function.
@staticmethod
def detect_color(detection_image):
    for k, v in ColourDetection.BOUNDARIES.iteritems():
        # . . .
        if amount_not_zero > 9000:
            return k
        else:
            return "No colour found"
Notice that you will always return a value in your first iteration over the k,v pairs.
That is, it returns either the first k that iteritems() yields, or "No colour found".
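To illustrate the fix, here is a stripped-down sketch with the "not found" return moved after the loop, so every colour range gets checked. The cv2 calls are replaced by a plain NumPy stand-in for inRange so the control flow can be shown in isolation; the bounds and the 9000-pixel threshold come from the question:

```python
import numpy as np

BOUNDARIES = {
    'red': ([170, 160, 60], [180, 255, 255]),
    'blue': ([110, 50, 50], [130, 255, 255]),
}

def in_range(img_hsv, lower, upper):
    # Stand-in for cv2.inRange: boolean mask of pixels inside the bounds.
    return np.all((img_hsv >= np.array(lower)) & (img_hsv <= np.array(upper)),
                  axis=-1)

def detect_color(img_hsv, threshold=9000):
    # Check EVERY colour range before concluding that nothing matched.
    for name, (lower, upper) in BOUNDARIES.items():
        if in_range(img_hsv, lower, upper).sum() > threshold:
            return name
    return "No colour found"  # only reached after the whole loop

# A 512x512 HSV image whose values fall inside the 'blue' bounds.
blue_hsv = np.full((512, 512, 3), (120, 200, 200), dtype=np.uint8)
```

(`items()` replaces Python 2's `iteritems()` here; in the original class the same restructuring applies inside detect_color.)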

Merging background with transparent image in PIL

I have a png image as background, and I want to add a transparent mesh to this background, but it doesn't work as expected: the background image becomes transparent in the places where I apply the transparent mesh.
I am doing:
from PIL import Image, ImageDraw
map_background = Image.open(MAP_BACKGROUND_FILE).convert('RGBA')
map_mesh = Image.new('RGBA', (width, height), (0, 0, 0, 0))
draw = ImageDraw.Draw(map_mesh)
# Create mesh using: draw.line([...], fill=(255, 255, 255, 50), width=1)
...
map_background.paste(map_mesh, (0, 0), map_mesh)
But the result is:
You can see a chessboard pattern if you look carefully (graphics programs use it to indicate "no background"). The transparent lines make the background layer transparent too in the places where the two layers meet, but I only want the transparent line drawn on top of the background.
I can solve it with:
map_background.paste((255,255,255), (0, 0), map_mesh)
but as I use different colors for different lines, I would have to repeat this process for every color. If I had 100 colors, I would need 100 layers, which is not a good solution.
What you are trying to do is to composite the grid onto the background, and for that you need to use Image.blend or Image.composite. Here's an example using the latter to composite red lines with random alpha values onto a white background:
import random
from PIL import Image, ImageDraw

background = Image.new('RGB', (100, 100), (255, 255, 255))
foreground = Image.new('RGB', (100, 100), (255, 0, 0))
mask = Image.new('L', (100, 100), 0)
draw = ImageDraw.Draw(mask)
for i in range(5, 100, 10):
    draw.line((i, 0, i, 100), fill=random.randrange(256))
    draw.line((0, i, 100, i), fill=random.randrange(256))
result = Image.composite(background, foreground, mask)
From left to right: [background] [mask] [foreground] [result]
(If you are happy to write the result back to the background image, then you can use one of the masked versions of Image.paste, as pointed out by Paulo Scardine in a deleted answer.)
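On recent Pillow versions, another option is Image.alpha_composite, which blends the overlay's alpha into the background instead of copying it the way paste does, so one RGBA mesh layer covers any number of line colours. A minimal sketch (sizes and colours here are arbitrary):

```python
from PIL import Image, ImageDraw

# Opaque green background and a fully transparent mesh layer.
background = Image.new('RGBA', (100, 100), (0, 128, 0, 255))
mesh = Image.new('RGBA', (100, 100), (0, 0, 0, 0))
draw = ImageDraw.Draw(mesh)
draw.line((0, 50, 100, 50), fill=(255, 255, 255, 50), width=3)

# Blend the mesh over the background; the result stays fully opaque.
result = Image.alpha_composite(background, mesh)
```

Unlike paste, the line's alpha of 50 here controls how strongly it tints the background rather than punching a semi-transparent hole in it.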
I had trouble getting the above examples to work well. Instead, this worked for me:
import numpy as np
from PIL import Image, ImageDraw

def add_craters(image, craterization=20, width=256, height=256):
    foreground = Image.new('RGBA', (width, height), (0, 0, 0, 0))
    draw = ImageDraw.Draw(foreground)
    for c in range(craterization):
        x = np.random.randint(10, width - 10)
        y = np.random.randint(10, height - 10)
        radius = np.random.randint(2, 10)
        dark_color = (0, 0, 0, 128)
        draw.ellipse((x - radius, y - radius, x + radius, y + radius), fill=dark_color)
    image_new = Image.composite(foreground, image, foreground)
    return image_new
