Pygame Changing Hue of Image - python

I have a question to do with the Python module pygame.
I want to change the hue of an image for a sprite, like applying a filter to the image. I have seen many posts about changing specific pixels from one color to another, but that is not what I want to do. I want something similar to what simple photo editing software such as paint.net can do: change the overall color of an image. I could of course change the hue of the image in photo editing software, but that would mean lots of images needing to be made, loaded and managed, which would quickly become very tedious. I am hoping there is some way to change the hue of an image in pygame.

You can do this with Python PIL. Take a look at this question and answer, and especially the original question and answer that they link to:
Changing the color of an image based on RGB value

"""
Original post: https://www.reddit.com/r/pygame/comments/hprkpr/how_to_change_the_color_of_an_image_in_pygame/
"""
import pygame

blue_rgb = (0, 0, 255)
red_rgb = (255, 0, 0)

img = pygame.image.load("sky_picture.png")  # loads the picture from the given path
var = pygame.PixelArray(img)

# var.replace([colour you want to replace], [colour you want])
var.replace(blue_rgb, red_rgb)  # replaces all blue in the picture with red
del var

"""
If the picture has some unchanged pixels left, it's probably because they are not
EXACTLY the RGB given; for example (254, 0, 0) is not (255, 0, 0) and won't be changed.
(To fix this you would have to allow for approximate matches or just edit the source picture.)
"""
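As a hedged follow-up to the note about near-miss colours: PixelArray.replace() takes an optional distance argument (a fraction of the maximum colour distance), so approximate matches such as (254, 0, 0) can be caught without looping yourself. The file name is just the one used above; the distance value is illustrative.
import pygame

blue_rgb = (0, 0, 255)
red_rgb = (255, 0, 0)

img = pygame.image.load("sky_picture.png")
pixels = pygame.PixelArray(img)
# distance=0.0 means exact matches only; small values also catch near-miss colours
pixels.replace(blue_rgb, red_rgb, distance=0.05)
del pixels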

The hue of each pixel in an image can be shifted using PixelArray to iterate over each pixel, Surface.unmap_rgb to get a Color object from each pixel, and Color.hsla to do the hue shift.
# Get the pixels
pixels = PixelArray(surface)
# Iterate over every pixel
for x in range(surface.get_width()):
    for y in range(surface.get_height()):
        # Turn the pixel data into an RGB tuple
        rgb = surface.unmap_rgb(pixels[x][y])
        # Get a new color object using the RGB tuple and convert to HSLA
        color = Color(*rgb)
        h, s, l, a = color.hsla
        # Add 120 to the hue (or however much you want) and wrap to under 360
        color.hsla = (int(h) + 120) % 360, int(s), int(l), int(a)
        # Assign directly to the pixel
        pixels[x][y] = color
# The old way of closing a PixelArray object
del pixels
If the surface is small, this could be run in real time. However, it would be better to run once at load time, especially if the surface is large.
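As a minimal sketch of the load-time approach (the function name and default shift amount are illustrative, not from the answer above), the loop can be wrapped in a helper that returns a hue-shifted copy of the sprite's surface:
import pygame
from pygame import Color, PixelArray

def hue_shifted(surface, degrees=120):
    # Work on a copy so the original surface is untouched
    shifted = surface.copy()
    pixels = PixelArray(shifted)
    for x in range(shifted.get_width()):
        for y in range(shifted.get_height()):
            color = Color(*shifted.unmap_rgb(pixels[x][y]))
            h, s, l, a = color.hsla
            color.hsla = (int(h) + degrees) % 360, int(s), int(l), int(a)
            pixels[x][y] = color
    pixels.close()
    return shifted

# e.g. once at load time: red_sprite = hue_shifted(pygame.image.load("sprite.png"), 120)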

Related

Set particular channel of Image to certain value according to other image

I have this image which has 4 channels. What I want to do is, to reduce this image's opacity. I want the image to remain transparent, however, I just want to reduce the opacity of the VW logo part. (Using opencv/numpy and python)
Here's what I tried :
logo = cv2.imread(FILE_PATH, cv2.IMREAD_UNCHANGED)
logo[:, :, 3] = 50
But this assigns the value 50 to the whole image, giving me this result. Notice how the remaining part of the image is no longer transparent (I want it to remain like the original one).
I thought of something like:
#This is my logic NOT ANY CODE, I want to do something like this:
if (any of) other 3 channels are non zero, make alpha channel of that pixel = 50.
else, keep that pixel as it is. (This pixel would be part of logo)
Is there any way of achieving this result, by using opencv / numpy in python? My last option would be to iterate through all the pixels and look for above conditions, but I feel that would be inefficient.
This answer is exactly what I DON'T want. I want only the logo part's (the coloured pixels') alpha channel to be set to 50.
Just fleshing out the comment from @yann-ziselman...
Here are the RGBA channels of your image side-by-side, with a red border so you can see the extent on Stack Overflow's background:
Here's how to do what you ask:
import cv2
# Load image including alpha channel
im = cv2.imread('logo.png', cv2.IMREAD_UNCHANGED)
# Set the alpha channel to 50 wherever any of the BGR channels is non-zero
im[(im[..., :3] != 0).any(2), 3] = 50
# Save result
cv2.imwrite('result.png', im)
Result
Result split into RGBA channels side-by-side
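As a tiny hedged illustration of what that one-liner does (a made-up 1x2 BGRA array, not your logo): (im[..., :3] != 0).any(2) is a 2-D boolean mask that is True wherever at least one of the B, G, R values is non-zero, and indexing with that mask plus the channel index 3 selects the alpha values of exactly those pixels.
import numpy as np

# one coloured pixel and one fully transparent black pixel
im = np.array([[[10, 20, 30, 255], [0, 0, 0, 0]]], dtype=np.uint8)
mask = (im[..., :3] != 0).any(2)   # True where any BGR channel is non-zero
im[mask, 3] = 50
print(im[0, 0, 3], im[0, 1, 3])    # prints "50 0": only the coloured pixel changed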

Is there another way to fill the area outside a rotated image with white color? 'fillcolor' does not work with older versions of Python

I want to rotate a black and white image. I am trying to use the rotate function as follows:
image.rotate(angle, fillcolor=255)
I am required to use older versions of Python and Pillow, and they do not support the 'fillcolor' argument. I cannot upgrade to the newer versions due to certain restrictions and cannot use any external libraries.
Is there another way to fill the area outside the rotated image with white color using Pillow?
Rotated image has black color in the area outside the rotated part. I want to fill it with white color.
Original: Original image
Rotated: Rotated image
You can try compositing the rotated image with a white image of the same size via Image.composite() to get rid of the black bars/borders.
from PIL import Image

img = Image.open(r"Image_Path")
original_mode = img.mode
img = img.convert("RGBA")  # alpha is required for this process
angle = 30
img = img.rotate(angle)
# a white image of the same size (255 in every channel, fully opaque)
new_img = Image.new('RGBA', img.size, 'white')
# composite the rotated image over the white one, using its own alpha as the mask
Alpha_Image = Image.composite(img, new_img, img)
# convert back to the original colour mode
Alpha_Image = Alpha_Image.convert(original_mode)
Alpha_Image.show()
The above code takes in an image, converts it to mode RGBA (alpha is required for this process), and then rotates it by 30 degrees. It then creates an empty Image object of mode RGBA with the same dimensions as the rotated image, with every pixel set to 255 in each channel (pure white in RGB, and full opacity in the alpha channel). It then composites the rotated image over this white one, using the rotated image's own transparency as the mask. This produces the desired image, where the black bars/edges are replaced by white. Finally, the result is converted back to the original colour mode.
ORIGINAL IMAGE:-
IMAGE AFTER ROTATING 30 DEGREES:-
An awkward option that has always worked for me (with my tools I always get a light grey "border" around the rotated image that interferes with filling):
1. Add a border to the non-rotated image and use the fill color with that border. The bordering operation is lossless and filling will be exact (and easy).
2. Rotate the bordered image. The seam will now also be correct (but not exact unless you rotate by 45° or 90°).
3. Calculate the size of the rotated border using trigonometry (see the sketch below). The result will not be exact (e.g. "131.12 pixels"). Usually you can do this in reverse: start with an exact border on the rotated image, calculate the border you need to add, and adjust the border width so that the non-rotated border is exact. Example: with a rotated border of 170 pixels you get a non-rotated border of 140.3394 pixels, so you use a 510 pixel rotated border instead, which requires adding a 421.018 pixel non-rotated border; that is close enough to 421 pixels to be acceptable.
4. Remove the rotated border.
This also helps avoid some artefacts near the cut parts of the image that fall off the rotated image.
It has the drawback that you end up with a larger rotation, with higher memory use and computation time, especially if you use larger borders to increase precision.
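A hedged sketch of the trigonometry mentioned in step 3, assuming the rotation expands the bounding box (expand=True): under that assumption each side of the rotated bounding box grows by roughly border * (|cos(angle)| + |sin(angle)|), so the non-rotated border needed for a desired rotated border is the inverse. This formula is my reading of the step, not code from the answer.
import math

def unrotated_border_for(rotated_border, angle_degrees):
    # assumed relationship: rotated_border = w * (|cos a| + |sin a|)
    a = math.radians(angle_degrees)
    scale = abs(math.cos(a)) + abs(math.sin(a))
    return rotated_border / scale

# round the result so the border you actually add is a whole number of pixels,
# then recompute the exact rotated border from it, as the answer describes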
Edit: As no external libraries are allowed, I would suggest cropping the rectangle you want and pasting it onto the original image. This can be done with magic numbers (the rectangle's coordinates). It works for me (you might need to tweak it a little):
from PIL import Image

im = Image.open("mFul4.png")
rotated = im.rotate(105)
box = (55, 65, 200, 210)   # magic numbers: the rectangle's coordinates
d = rotated.crop(box=box)  # crop the rotated part you want to keep
im.paste(d, box=box)       # paste it back onto the original image
im.save("ex.bmp")
and the output
Edit 2: This is the ugliest way, but it works. You might need to tweak the magic numbers a bit to make it more precise; I was working on your given image, so I couldn't tell when I was overdoing it. It produces the same output.
from PIL import Image

im = Image.open("mFul4.png")
angle = 105
cos = 0.240959049  # -cos(angle)
d = im.rotate(angle)
pix = d.load()
tri_x = 120
for i in range(4):  # 4 triangles
    for j in range(tri_x, -1, -1):
        for k in range(int((tri_x - j) * cos) + 1, -1, -1):
            x, y = (j, k) if i < 1 else (d.size[0] - j - 1, d.size[1] - k - 1)
            if i in [2, 3]:
                y, x = (d.size[0] - j - 2, k) if i < 3 else (j, d.size[1] - k)
            pix[x, y] = (255, 255, 255, 255)
d.show()

How to get border pixels of an image in python?

I have an image, using steganography I want to save the data in border pixels only.
In other words, I want to save data only in the least significant bits(LSB) of border pixels of an image.
Is there any way to get border pixels to store data( max 15 characters text) in the border pixels?
Plz, help me out...
OBTAINING BORDER PIXELS:
Masking operations are one of many ways to obtain the border pixels of an image. The code would be as follows:
import cv2
import numpy as np

a = cv2.imread('cal1.jpg')
bw = 20  # width of border required
mask = np.ones(a.shape[:2], dtype="uint8")
cv2.rectangle(mask, (bw, bw), (a.shape[1] - bw, a.shape[0] - bw), 0, -1)
output = cv2.bitwise_and(a, a, mask=mask)
cv2.imshow('out', output)
cv2.waitKey(5000)
After I get an array of ones with the same dimensions as the input image, I use the cv2.rectangle function to draw a rectangle of zeros. The first argument is the image you want to draw on, the second is the start (x, y) point, and the third is the end (x, y) point. The fourth argument is the color, and the last, '-1', is the thickness of the rectangle drawn (-1 fills the rectangle). You can find the documentation for the function here.
Now that we have our mask, you can use the cv2.bitwise_and function (documentation) to perform an AND operation on the pixels. Basically, pixels that are ANDed with '1' pixels in the mask retain their values, while pixels ANDed with '0' pixels in the mask are set to 0. This way you will have the following output:
The input image was :
You have the border pixels now!
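Tying this back to the steganography question, here is a hedged sketch (not part of the answer above) that reuses the same mask to find the border pixel coordinates and overwrite their least significant bits; the payload bits and file names are illustrative.
import cv2
import numpy as np

img = cv2.imread('cal1.jpg')
bw = 20
mask = np.ones(img.shape[:2], dtype="uint8")
cv2.rectangle(mask, (bw, bw), (img.shape[1] - bw, img.shape[0] - bw), 0, -1)

ys, xs = np.where(mask == 1)        # coordinates of every border pixel
bits = [1, 0, 1, 1, 0, 0, 1, 0]     # example payload bits (illustrative)
for (y, x), bit in zip(zip(ys, xs), bits):
    img[y, x, 0] = (img[y, x, 0] & 0xFE) | bit  # set the LSB of the blue channel
cv2.imwrite('stego.png', img)       # lossless format, so the LSBs survive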
Using LSB planes to store your info is not a good idea, and it makes sense when you think about it: a simple lossy compression would affect most of your hidden data, so saving the image as JPEG would result in lost or severely corrupted info. If you still want to try LSB, look into bit-plane slicing. Through bit-plane slicing, you obtain the bit planes (from MSB to LSB) of the image. (image from researchgate.net)
I have done it in Matlab and am not quite sure about doing it in Python. In Matlab, the function 'bitget(image, 1)' returns the LSB of the image. I found a question on bit-plane slicing using Python here. Though unanswered, you might want to look into the posted code.
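For reference, here is a hedged numpy sketch of bit-plane slicing, the rough Python counterpart of Matlab's bitget(image, 1); the file name is illustrative and the image is read as 8-bit grayscale.
import cv2

gray = cv2.imread('cal1.jpg', cv2.IMREAD_GRAYSCALE)
planes = [(gray >> bit) & 1 for bit in range(8)]  # planes[0] is the LSB, planes[7] the MSB
lsb_plane = planes[0] * 255                       # scale 0/1 to 0/255 so it is visible
cv2.imwrite('lsb_plane.png', lsb_plane)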
To access the border pixels and write data into them:
The shape of an image is accessed with t = img.shape; it returns a tuple of the number of rows, columns, and channels. The component index selects the colour channel (OpenCV stores pixels as B, G, R, i.e. indices 0, 1, 2). int(r[0]) is the value to be stored in the first corner pixel.
import cv2

img = cv2.imread('xyz.png')
t = img.shape
print(t)
component = 2  # channel to write to (B, G, R = 0, 1, 2)
r = [10, 20, 30, 40]  # example values to embed in the four corners (r was not defined in the original)
img.itemset((0, 0, component), int(r[0]))
img.itemset((0, t[1] - 1, component), int(r[1]))
img.itemset((t[0] - 1, 0, component), int(r[2]))
img.itemset((t[0] - 1, t[1] - 1, component), int(r[3]))
print(img.item(0, 0, component))
print(img.item(0, t[1] - 1, component))
print(img.item(t[0] - 1, 0, component))
print(img.item(t[0] - 1, t[1] - 1, component))
cv2.imwrite('output.png', img)

Color list to gtk Image

How can I make a gtk.Image from a list of gtk.gdk.Color?
color_list = [gtk.gdk.Color(100*i,100*i,100*i) for i in range(10)]
image = gtk.Image()
draw_image_from_list(color_list, image)
I need that procedure and the image would be just one column of pixels:
|color1|
|color2|
|color3|
...
Here's a widget that almost does what you want: https://github.com/ptomato/LaserCam/blob/master/src/ColorMapIndicator.py
It's a gtk.DrawingArea instead of a gtk.Image, but I decided against using an image since they are more meant for displaying image files, not for drawing on. It displays the colors in a horizontal bar 128 pixels wide and 10 high, but that should be easy enough to change. And finally, it does it from an array of numbers, not of gtk.gdk.Color, but it looks like you generate those from numbers anyway in your example.
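If you do want to stick with a gtk.Image, here is a hedged sketch (the helper name matches your example, but the approach is mine, not from the linked widget): build a gtk.gdk.Pixbuf, fill it one row per colour, and hand it to the image. gtk.gdk.Color channels are 16-bit, so they are scaled down to 8-bit.
import gtk

def draw_image_from_list(color_list, image, width=10):
    height = len(color_list)
    pixbuf = gtk.gdk.Pixbuf(gtk.gdk.COLORSPACE_RGB, False, 8, width, height)
    for row, c in enumerate(color_list):
        # pack the colour as 0xRRGGBBAA for Pixbuf.fill()
        pixel = ((c.red // 257) << 24) | ((c.green // 257) << 16) | ((c.blue // 257) << 8) | 0xFF
        pixbuf.subpixbuf(0, row, width, 1).fill(pixel)
    image.set_from_pixbuf(pixbuf)

color_list = [gtk.gdk.Color(100 * i, 100 * i, 100 * i) for i in range(10)]
image = gtk.Image()
draw_image_from_list(color_list, image)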

PIL Converting an image's hue, then saving out in Python

I'm loading and saving images with PIL just fine, but I can't seem to change the "overall" hue of a given image. Google and this site revealed an answer, sort of, using the numpy module, but that's not an option for me.
It should be quite simple: given a grey image with alpha, I'd like to make its hue red.
I think you want a mono-hue image. Is this true?
It's not clear what you want done with the existing bands (alpha and greyscale/level). Do you want alpha to remain alpha and the greyscale to become red saturation? Do you want the alpha to become your red saturation? Do you want greyscale to be the image lightness and the alpha to become the saturation?
Edit:
I've changed the output based on your comment. You wanted the darkest shade of the greyscale band to represent fully saturated red and the lightest grey to represent white (in other words full-saturated with all colors). You also indicated that you wanted alpha to be preserved as alpha in the output. I've made that change too.
This is possible with some band swapping:
from PIL import Image

# get an image that is greyscale with alpha
i = Image.open('hsvwheel.png').convert('LA')
# get the two bands
L, A = i.split()
# a fully saturated band
S = Image.new('L', i.size, 255)
# re-combine the bands
# this keeps the alpha channel in the new image
i2 = Image.merge('RGBA', (S, L, L, A))
# save
i2.save('test.png')
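As an alternative sketch (my suggestion, not part of the answer above): PIL's ImageOps.colorize maps the darkest grey to one colour and the lightest to another, which gives the same "dark = saturated red, light = white" result; the alpha band is re-attached afterwards. File names are illustrative.
from PIL import Image, ImageOps

img = Image.open('hsvwheel.png').convert('LA')
grey, alpha = img.split()
red = ImageOps.colorize(grey, black='red', white='white').convert('RGBA')
red.putalpha(alpha)
red.save('test_colorize.png')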
