PIL transparency doesn't seem to work - python

I'm trying to make the black pixels in the background of an image transparent. For this, I wrote the function below. I then overlay the image (a car in the center surrounded by black pixels) on a copy of itself and move the first one slowly. I was expecting the first image to reveal the copy underneath without showing the rectangular boundary since that part should have been transparent. However, I don't get the desired effect. Does anyone know what I'm missing?
def makeImageTransparent(img):
    img = img.convert("RGBA")
    datas = img.getdata()
    newData = []
    ii = 0
    for item in datas:
        if item[0] == 0 and item[1] == 0 and item[2] == 0:
            newData.append((0, 0, 0, 0))
            ii = ii + 1
        else:
            newData.append(item)
    print str(ii)
    img.putdata(newData)
    return img

I couldn't figure out how to make the transparency work, so I just created my own paste method that updates the bytes of the destination image directly, and this got me the desired effect.
def pasteImage(img, bigim, posn):
    pixdata = img.load()
    width, height = img.size
    mainpixdata = bigim.load()
    for y in xrange(height):
        for x in xrange(width):
            if pixdata[x, y] != (0, 0, 0, 0):
                mainpixdata[x+posn[0], y+posn[1]] = pixdata[x, y]
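For reference: plain Image.paste() copies the source pixels, alpha included, straight into the destination unless a mask is supplied, so the "transparent" pixels still overwrite whatever is underneath. Passing the RGBA image as its own mask gives the built-in paste the same behaviour as the manual loop above. A minimal sketch, with hypothetical file names and offset:
from PIL import Image

# The third argument tells paste() to use the car's own alpha channel as the mask.
background = Image.open("background.png").convert("RGBA")
car = makeImageTransparent(Image.open("car.png"))
background.paste(car, (50, 50), car)
background.save("combined.png")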

Related

Python PIL cut words out so it becomes transparent PNG

I would like to know if it's possible with Python PIL to cut text out of an image and make that area transparent. Here is an example to explain it better: picture1.png is a circle, and picture2.png is a text image that just says "thanks". What I would like to do is put picture2 into picture1 and cut that area out so it becomes transparent, so it should look like picture3.png.
This is what I have so far, but it only pastes the image. I am new to PIL and not sure how to tell Python I want this cut out like that.
from PIL import Image

img = Image.open('picture1.png').convert("RGBA")
bg_w, bg_h = img.size
layer = Image.open('picture2.png').convert("RGBA")  # this file is the transparent one
img_w, img_h = layer.size
offset = ((bg_w - img_w) / 2, (bg_h - img_h) / 3)
img.paste(layer, offset, mask=layer)
img.save('Picture3.png')
I figured it out. Here is the answer:
from PIL import Image, ImageDraw

img = Image.open('circle.png').convert("RGBA")
layer = Image.open('Thanks.png').convert("RGBA")  # this file is the transparent one
img.paste(layer, mask=layer)
img.save('Picture3White.png')

img = Image.open('Picture3White.png')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
    if item[0] == 255 and item[1] == 255 and item[2] == 255:
        newData.append((255, 255, 255, 0))
    else:
        newData.append(item)
img.putdata(newData)
img.save("Transparent.png", "PNG")

Problems cropping entire white lines from .png file

What I want to do is crop out the white lines above a given Instagram screenshot. I tried doing that by finding the center of the image and going up, line by line, until I found the first line that is entirely white. Any idea why my code is not working?
from PIL import Image

image_file = "test.png"
im = Image.open(image_file)
width, height = im.size
centerLine = height // 2
entireWhiteLine = set()
entireWhiteLine.add(im.getpixel((0, 0)))
terminateUpperCrop = 1
while terminateUpperCrop != 2:
    for i in range(centerLine, 1, -1):
        entireLine = set()
        upperBorder = i - 1
        for j in range(0, width, 1):
            entireLine.add((im.getpixel((i, j))))
        if entireLine == im.getpixel((0, 0)):
            box = (0, upperBorder, width, height)
            crop = im.crop((box))
            crop.save('test2.png')
            terminateUpperCrop = 2
Your getpixel() call has the coordinates the wrong way around, so in effect you were scanning for the left edge. You could use the following approach instead. The list comprehension collects an entry for every white pixel in a row; if its length equals the width, you know the whole row is white.
from PIL import Image

image_file = "test.png"
im = Image.open(image_file)
width, height = im.size
centerLine = height // 2
white = (255, 255, 255)
for y in range(centerLine, 0, -1):
    if len([1 for x in range(width) if im.getpixel((x, y)) == white]) == width:
        box = (0, y, width, height)
        crop = im.crop(box)
        crop.save('test2.png')
        break
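Equivalently, the row test can be written with all(), which stops at the first non-white pixel (a small sketch, assuming an RGB image so getpixel() returns 3-tuples):
for y in range(centerLine, 0, -1):
    # all() short-circuits as soon as one pixel in the row is not white
    if all(im.getpixel((x, y)) == white for x in range(width)):
        im.crop((0, y, width, height)).save('test2.png')
        break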

Residual white pixels in transparent background from PIL

I used the following code from another Stack Overflow post
from PIL import Image as image

img = image.open('output.png')
img = img.convert("RGBA")
datas = img.getdata()
newData = []
for item in datas:
    if item[0] == 255 and item[1] == 255 and item[2] == 255:
        newData.append((255, 255, 255, 0))
    else:
        newData.append(item)
img.putdata(newData)
img.save("img2.png", "PNG")
to make my PNG's background transparent. However, when I tried to add some shapes in PowerPoint underneath the transparent image, it still has some residual white pixels left. Does anyone know how to solve this?
Those pixels are not exactly "white". The color you are testing against and removing from the image is pure white, #FFFFFF. But those slanted lines are heavily antialiased, "fading" from the pure white of the background to the pure color of the center of the lines.
This can be seen when zooming in just a teeny bit:
You can lower the threshold of when to make a pixel entirely transparent:
if item[0] > 240 and item[1] > 240 and item[2] > 240:
    newData.append((255, 255, 255, 0))
else:
    newData.append(item)
but no matter how much you do this, you will always end up with either visibly lighter pixels around the lines, or – when only matching the center "line" color exactly – with disconnected pixels, not resembling the original lines anymore.
But there is no reason to use a Yes/No mask with PNG images! PNG supports full 8-bit transparency, and so you can make the 'solid' center lines fully opaque, the solid white fully transparent, and have the gradually darkening pixels fade between these values.
This works best if you know the exact original color that the lines were drawn with. Measuring it with Adobe Photoshop, I get something like #818695. Plugging these values into your program and mapping the 'tint' (towards white) to transparency, scaled over the full possible range, I suggest this code:
from PIL import Image as image

img = image.open('input.png')
img = img.convert("RGBA")
datas = img.getdata()
retain = (0x81, 0x86, 0x95)
retain_gray = (39*retain[0] + 50*retain[1] + 11*retain[2])
newData = []
for item in datas:
    if item[0] > retain[0] and item[1] > retain[1] and item[2] > retain[2]:
        # convert to grayscale
        val = 39*item[0] + 50*item[1] + 11*item[2]
        # invert
        val = 25500 - val
        # difference with 'retain'
        val = retain_gray - val
        # scale down
        val = 255*val/retain_gray
        # invert to act as transparency
        transp = 255 - val
        # apply transparency to original 'full' color value
        newData.append((retain[0], retain[1], retain[2], transp))
    else:
        newData.append(item)
img.putdata(newData)
img.save("output.png", "PNG")
print "done"
What it essentially does is convert the input image to grayscale and scale it (because the range from darkest to lightest should map onto the full transparency range of 0..255), then use this as the transparency byte. The result is way better than your on/off approach:

Using PIL or a Numpy array, how can I remove entire rows from an image?

I would like to know how I could remove entire rows from an image, preferably based on the color of the row?
Example: I have an image that is 5 pixels in height, the top two rows and the bottom two rows are white and the middle row is black. I would like to know how I could get PIL to identify this row of black pixels, then, remove the entire row and save the new image.
I have some knowledge of Python and have so far been editing my images by listing the result of getdata(), so answers with pseudocode will hopefully be enough. Thanks.
I wrote you the following code that removes every row which is completely black. It uses the else clause of the for loop, which is executed when the loop is not exited by a break.
from PIL import Image

def find_rows_with_color(pixels, width, height, color):
    rows_found = []
    for y in xrange(height):
        for x in xrange(width):
            if pixels[x, y] != color:
                break
        else:
            rows_found.append(y)
    return rows_found

old_im = Image.open("path/to/old/image.png")
if old_im.mode != 'RGB':
    old_im = old_im.convert('RGB')
pixels = old_im.load()
width, height = old_im.size[0], old_im.size[1]
rows_to_remove = find_rows_with_color(pixels, width, height, (0, 0, 0))  # Remove black rows

new_im = Image.new('RGB', (width, height - len(rows_to_remove)))
pixels_new = new_im.load()
rows_removed = 0
for y in xrange(old_im.size[1]):
    if y not in rows_to_remove:
        for x in xrange(new_im.size[0]):
            pixels_new[x, y - rows_removed] = pixels[x, y]
    else:
        rows_removed += 1
new_im.save("path/to/new/image.png")
If you have questions just ask :)
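Since the question also mentions NumPy, here is a minimal NumPy sketch of the same idea, assuming an RGB image (the file paths are placeholders, as above):
import numpy as np
from PIL import Image

arr = np.array(Image.open("path/to/old/image.png").convert("RGB"))
# A row is "black" when every channel of every pixel in it is 0.
black_rows = (arr == 0).all(axis=(1, 2))
# Keep only the rows that are not entirely black and save the result.
Image.fromarray(arr[~black_rows]).save("path/to/new/image.png")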

How do you compare pixels?

I am using PIL to take an image with a black background and make a mask out of it. What I want the program to do is iterate through all the pixels in the image and, if the pixel is black, make it white; if it is any other color, make it black. But I am not sure how to appropriately compare pixel values to determine what to do with the pixel.
Here is my code so far which creates an all black image.
import os, sys
import Image

filename = "C:\Users\pdiffley\Dropbox\C++2\Code\Test\BallSpriteImage.bmp"
height = 50
width = 50

im = Image.open(filename)
im = im.load()

i = 0
j = 0
while i < height:
    while j < width:
        if im[j, i] == (0, 0, 0):
            im[j, i] = (255, 255, 255)
        else:
            im[j, i] = (0, 0, 0)
        j = j + 1
    i = i + 1

mask = Image.new('RGB', (width, height))
newfile = filename.partition('.')
newfile = newfile[0] + "Mask.bmp"
mask.save(newfile)
I believe the problem is in the if statement comparing im[j,i] to the RGB value (0,0,0), which always evaluates to false. What is the correct way to compare the pixel?
The pixel data comparison is correct, but there are two problems with the logic:
1. When you are finished with a row, you should reset j to 0.
2. You are modifying the object "im", but writing "mask".
This should work (as long as you have no alpha channel - as andrewdski pointed out):
img = Image.open(filename)
im = img.load()

i = 0
while i < height:
    j = 0
    while j < width:
        if im[j, i] == (0, 0, 0):
            im[j, i] = (255, 255, 255)
        else:
            im[j, i] = (0, 0, 0)
        j = j + 1
    i = i + 1

newfile = filename.partition('.')
newfile = newfile[0] + "Mask.png"
img.save(newfile)
Here's how I'd rewrite it, which avoids a pixel index reset problem by using for loops, writes the data to a separate mask image rather than back onto the source, and removes the hardcoded image size. I also added an r prefix to the filename string to handle the backslashes in it.
import os, sys
import Image

BLACK = (0, 0, 0)
WHITE = (255, 255, 255)

filename = r"C:\Users\pdiffley\Dropbox\C++2\Code\Test\BallSpriteImage.bmp"
img = Image.open(filename)
width, height = img.size
im = img.load()

mask = Image.new('RGB', (width, height))
msk = mask.load()

for y in xrange(height):
    for x in xrange(width):
        if im[x, y] == BLACK:
            msk[x, y] = WHITE
        else:  # not really needed since mask's initial color is black
            msk[x, y] = BLACK

newfilename = filename.partition('.')
newfilename = newfilename[0] + "Mask.bmp"
mask.save(newfilename)
The following function uses the .point method and works separately on each band of the image:
CVT_TABLE = (255,) + 255 * (0,)

def do_convert(img):
    return img.point(CVT_TABLE * len(img.getbands()))
Working separately on each band means that a picture like this:
will be converted into this:
However, you can get almost what you want if you convert the image to mode "L" first:
CVT_TABLE = (255,) + 255 * (0,)

def do_convert(img):
    return img.convert("L").point(CVT_TABLE)
producing the following result:
The only drawback is that a few darkest colors (e.g. #000001, the darkest blue possible) will probably be converted to black by the mode conversion.
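For completeness, a hypothetical call site for the mode-"L" variant of do_convert above (the file names are made up):
import Image

img = Image.open("BallSpriteImage.bmp")
mask = do_convert(img)   # pixel value 0 becomes 255, everything else becomes 0
mask.save("BallSpriteImageMask.bmp")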
