I want to create an image from a selection of coordinates I have. Each coordinate should be drawn as a dot of a particular size and colour, say black and 2x2 pixels, placed at the pixel it represents.
How would I go about this?
Will the putpixel function work for what I want to do?
Thanks in advance
Doing this with putpixel will be inconvenient but not impossible. Since you say you want to make dots of more than a single pixel, it would be better to use ImageDraw.rectangle() or ellipse() instead.
For example:
from PIL import Image, ImageDraw

img = Image.new("RGB", (400, 400), "white")
draw = ImageDraw.Draw(img)

coords = [(100, 70), (220, 310), (200, 200)]
dotSize = 2

# Draw each coordinate as a dotSize x dotSize black square
for (x, y) in coords:
    draw.rectangle([x, y, x + dotSize - 1, y + dotSize - 1], fill="black")

img.show()
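If you prefer round dots, ImageDraw.ellipse() takes the same bounding box. A minimal variant of the loop above (a sketch, reusing the same coords and dotSize):

for (x, y) in coords:
    draw.ellipse([x, y, x + dotSize - 1, y + dotSize - 1], fill="black")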
I'm currently working on a little project in which I'm trying to generate a random image of a set size with a set palette. Everything generates fine and dandy, but zooming in on the image shows the pixels are interpolated in a way that doesn't look great. I would prefer if the pixels had hard edges, as with nearest neighbor interpolation. How would I go about doing that? See code below:
#!/usr/bin/env python3
import random
from PIL import Image

colors = ["#ffffff", "#898d90", "#000000", "#cf0530", "#2450a4", "#7eed56", "#ffd635", "#6134e1",
          "#ffa800", "#6d482f", "#ff3881", "#51e9f4", "#fff8b8", "#94b3ff", "#158d62", "#515252"]

def main():
    size = width, height = 128, 128
    image = Image.new("RGB", size)
    fillRand(image, size)
    image.show()
    del image

def randColor():
    color = random.choice(colors)
    return color

def hex_to_rgb(hex):
    return tuple(int(hex.lstrip('#')[i:i+2], 16) for i in (0, 2, 4))

def fillRand(image, size):
    pixel_access_object = image.load()  # load once, outside the loops
    for x in range(size[0]):
        for y in range(size[1]):
            pixel_access_object[x, y] = hex_to_rgb(randColor())

if __name__ == "__main__":
    main()
Thank you in advance!
P.S. - Python isn't my main language and I'm rusty in general, so apologies if my code is wack.
I think that's a problem with your image viewer. I opened the result in ImageGlass and the pixels had crisp edges there. A possible workaround is to make your pixels bigger, so you colour a 16x16 block the same colour instead of a single pixel; then the interpolation wouldn't be as visible.
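A minimal sketch of that idea, assuming it is added to main() right after fillRand(image, size): upscale the result with nearest-neighbour resampling, so every generated pixel becomes a hard-edged block no matter how the viewer interpolates.

scale = 16  # assumed block size
big = image.resize((image.width * scale, image.height * scale), Image.NEAREST)
big.show()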
I've got a function that takes an image and adjusts all of the RGB values by a given amount:
def colour(self, img, col):
    img = self.my_image.copy()
    col = (col[0], col[1], col[2], 0)
    img.fill(col, None, pygame.BLEND_RGBA_ADD)
    return img
I want to apply this to a partially transparent image, but when I do, the transparent pixels become coloured. I want those pixels to stay transparent while any opaque pixels are coloured as normal. Is there any way I can do this without looping through every pixel and checking whether it is transparent before adjusting it? Thanks
fill treats the colour as solid; you have to use blit instead.
Create a transparent surface the size of the source image, fill it with the colour, and then blend the two:
def colour(self, img, col):
    img = self.my_image.copy()
    # Per-pixel alpha surface filled with the colour at alpha 0
    color_img = pygame.Surface(img.get_size(), pygame.SRCALPHA)
    color_img.fill((col[0], col[1], col[2], 0))
    # Additive blend: RGB is added, alpha is unchanged, so transparent pixels stay transparent
    img.blit(color_img, (0, 0), special_flags=pygame.BLEND_RGBA_ADD)
    return img
See also Is it possible to change sprite colours in Pygame?
I want to analyse a specific part of an image; as an example, I'd like to focus on the bottom-right 200x200 section and count all the black pixels. So far I have:
im1 = Image.open(path)
rgb_im1 = im1.convert('RGB')
for pixel in rgb_im1.getdata():
Whilst you could do this with cropping and a pair of for loops, that is really slow and not ideal.
I would suggest you use Numpy as it is very commonly available, very powerful and very fast.
For the example below I used a 400x300 black rectangle with a 1-pixel red border, saved as image.png:
#!/usr/bin/env python3
import numpy as np
from PIL import Image
# Open the image and make into Numpy array
im = Image.open('image.png')
ni = np.array(im)
# Declare an ROI - Region of Interest as the bottom-right 200x200 pixels
# This is called "Numpy slicing" and is near-instantaneous https://www.tutorialspoint.com/numpy/numpy_indexing_and_slicing.htm
ROI = ni[-200:,-200:]
# Calculate total area of ROI and subtract non-zero pixels to get number of zero pixels
# Numpy.count_nonzero() is highly optimised and extremely fast
black = 200*200 - np.count_nonzero(ROI)
print(f'Black pixel total: {black}')
Sample Output
Black pixel total: 39601
Yes, you can make it shorter, for example:
h, w = 200, 200
im = np.array(Image.open('image.png'))
black = h*w - np.count_nonzero(im[-h:, -w:])
If you want to debug it, you can take the ROI and make it into a PIL Image which you can then display. So just use this line anywhere after you make the ROI:
# Display image to check
Image.fromarray(ROI).show()
You can try cropping the image to the specific part that you want:
img = Image.open(r"Image_location")
x,y = img.size
img = img.crop((x-200, y-200, x, y))
The above code takes an input image and crops it to its bottom-right 200x200 pixels (make sure the image dimensions are larger than 200x200, otherwise the crop box will extend outside the image).
You can then use this cropped image to count the number of black pixels. What counts as a BLACK pixel depends on your use case: an exact value like (0, 0, 0) or a range/threshold such as (0-15, 0-15, 0-15). A sketch of that counting step is shown below.
P.S.: The final image will always have dimensions of 200x200 pixels.
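A minimal sketch of the counting step, assuming a per-channel threshold of 15 counts as black (adjust to your use case) and using the cropped img from above:

threshold = 15
black = sum(
    1
    for (r, g, b) in img.convert("RGB").getdata()
    if r <= threshold and g <= threshold and b <= threshold
)
print(f"Black pixel total: {black}")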
from PIL import Image
img = Image.open("ImageName.jpg")
crop_area = (a, b, c, d)  # box as (left, upper, right, lower) pixel coordinates
cropped_img = img.crop(crop_area)
I have a gallery application where users upload photos and my code gives each one a border, writes some of the photo's attributes on the border, and stores it.
image2 = Image.open('media/' + str(image.file))
width, height = image2.size
image2 = ImageOps.expand(
    image2,
    border=(int(width/25), int(height/20), int(width/25), int(height/10)),
    fill='rgb(0,0,0)'
)
(Note that the bottom border is taller than the top one because I write the attributes on it.)
Now I'm building an edit feature for the uploaded images where the user can change the attributes of the uploaded images. But the attributes that are already written on the border have to be overwritten.
So my approach is to put a black patch over the bottom border and re-write the new attributes, without changing the top and side borders and without changing the aspect ratio. All of this has to be done using PIL.
Question is how do I put a black box on the bottom border?
I tried ImageOps.fit() as mentioned here https://pillow.readthedocs.io/en/3.3.x/reference/ImageOps.html#PIL.ImageOps.fit, but the aspect ratio doesn't come out right, and I want to draw a black box over the bottom border, not crop the photo.
To me it seems like the easiest solution is to just draw the black pixels in the area you want with a couple of loops and Image.putpixel:
from PIL import Image

img = Image.open('red.png')
# Paint the bottom 40 rows black
for x in range(img.width):
    for y in range(img.height - 40, img.height):
        img.putpixel((x, y), (0, 0, 0))
img.save('red2.png')
The simplest way, in my opinion, is to create a new black image and paste it onto your existing image:
from PIL import Image
im = Image.open('test.png')
blackBox = Image.new(im.mode, (im.width, 50), '#000')
im.paste(blackBox, (0, im.height - blackBox.height))
Alternatively, you could use ImageDraw (http://pillow.readthedocs.io/en/5.2.x/reference/ImageDraw.html), which lets you draw rectangles and other shapes:
from PIL import Image, ImageDraw
im = Image.open('test.png')
d = ImageDraw.Draw(im)
d.rectangle((0, im.height - 50, im.width, im.height), fill='#000')
I have seen an image of a girl that is made up of many smaller images of her, so I want to achieve the same thing using a Python script. (I am completely new to image processing.)
I am using the PIL library for this script.
import sys, os
from PIL import Image

img = Image.open("DSC_0149.jpg")
pixels = img.load()
for i in range(img.size[0]):
    for j in range(img.size[1]):
        pixels[i, j] = (i, j, 100)  # I will change this to some pic image.
img.show()
First I am just trying to change the colour of each pixel while retaining the picture, but this code didn't work.
Can anyone guide me on how to achieve this?
Edit: I want to fill the picture with multiple pictures and yet RETAIN the original picture.
Something like this: http://www.photoshopessentials.com/photo-effects/photo-fill/ but in a much better way.
So first you need to be able to change the colour of each pixel.
If the image is RGB:
img.putpixel((10, 15), (r, g, b))
or, faster, via the pixel access object:
pixels[1, 1] = (r, g, b)
Otherwise, see:
Is it possible to change the color of one individual pixel in Python?
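A tiny sketch of the faster pixel-access route (the coordinates and colour here are only examples; pixels comes from img.load()):

pixels = img.load()           # pixel access object
pixels[10, 15] = (255, 0, 0)  # set one pixel to red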
Once you know how to edit each pixel, you need to create a small, resized copy of each image you will tile with, like this:
Copy Image:
# Not tested: make sure it's RGB
img = Image.new('RGB', (img.size[0], img.size[1]), "black")  # create a new black image
pixels = img.load()  # create the pixel map
for i in range(img.size[0]):  # for every pixel:
    for j in range(img.size[1]):
        pixels[i, j] = other_image_pixel[i, j]  # set the colour accordingly
https://opensource.com/life/15/2/resize-images-python
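A short sketch of the resize step itself; tile_source.jpg and the 32x32 tile size are just assumptions for illustration:

from PIL import Image

tile = Image.open("tile_source.jpg").resize((32, 32))  # small copy of a source picture
other_image_pixel = tile.load()  # pixel map used by the copy loop above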
Apply a colour filter to each small image so that it matches the colour of the area it will replace (one possible approach is sketched below).
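One possible way to do that matching (a sketch, not the method the linked project uses): take the average colour of the region being replaced with ImageStat and blend the tile towards it.

from PIL import Image, ImageStat

def tint_to_match(tile, target_region, strength=0.5):
    # Average colour of the region this tile will replace
    avg = tuple(int(c) for c in ImageStat.Stat(target_region.convert("RGB")).mean)
    # Blend the tile towards a solid image of that average colour
    overlay = Image.new("RGB", tile.size, avg)
    return Image.blend(tile.convert("RGB"), overlay, strength)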
The best way to understand the whole process is to take the time to read this project's code, which is in the same language and only around 200 lines:
https://github.com/codebox/mosaic
Hope it solves your problem.