Image Gurus: Optimize my Python PNG transparency function

I need to replace all the white(ish) pixels in a PNG image with alpha transparency.
I'm using Python on App Engine, so I do not have access to libraries like PIL, ImageMagick, etc. App Engine does have an image library, but it is pitched mainly at image resizing.
I found the excellent little pyPNG module and managed to knock up a little function that does what I need:
make_transparent.py
pseudo-code for the main loop would be something like:
for each pixel:
    if pixel looks "quite white":
        set pixel values to transparent
    otherwise:
        keep existing pixel values
and (assuming 8-bit values) "quite white" would be:
    where each r, g, b value is greater than 240
    AND each r, g, b value is within 20 of each other
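As a concrete reading of that test, here is a minimal sketch (the function name is hypothetical; assumes 8-bit channel values):

def is_quite_white(r, g, b):
    # every channel above 240, and all channels within 20 of one another
    return min(r, g, b) > 240 and (max(r, g, b) - min(r, g, b)) <= 20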
This is the first time I've worked with raw pixel data in this way, and although it works, it performs extremely poorly. It seems like there must be a more efficient way of processing the data than iterating over each pixel in this manner. (Matrices?)
I was hoping someone with more experience in dealing with these things might be able to point out some of my more obvious mistakes/improvements in my algorithm.
Thanks!

This still visits every pixel, but may be faster:
from array import array

new_pixels = []
for row in pixels:
    new_row = array('B', row)
    i = 0
    while i < len(new_row):
        r = new_row[i]
        g = new_row[i + 1]
        b = new_row[i + 2]
        if r > threshold and g > threshold and b > threshold:
            m = (r + g + b) // 3
            if nearly_eq(r, m, tolerance) and nearly_eq(g, m, tolerance) and nearly_eq(b, m, tolerance):
                new_row[i + 3] = 0  # zero alpha: fully transparent
        i += 4
    new_pixels.append(new_row)
It avoids the slicing generator, which would copy the entire row of pixels for every pixel (less one pixel each time).
It also pre-allocates the output row by directly copying the input row, and then only writes the alpha value of the pixels that have changed.
Even faster would be to not allocate a new set of pixels at all, and just write directly over the pixels in the source image (assuming you don't need the source image for anything else).
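A minimal sketch of that in-place variant, assuming the same threshold, tolerance, and nearly_eq helper as above, and that the rows of pixels are mutable sequences of flat RGBA values:

for row in pixels:
    # walk the flat RGBA row four values at a time, mutating it directly
    for i in range(0, len(row), 4):
        r, g, b = row[i], row[i + 1], row[i + 2]
        if r > threshold and g > threshold and b > threshold:
            m = (r + g + b) // 3
            if nearly_eq(r, m, tolerance) and nearly_eq(g, m, tolerance) and nearly_eq(b, m, tolerance):
                row[i + 3] = 0  # zero the alpha byte, leave RGB untouched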

Honestly, the only heuristic I could conceive of is picking a few arbitrary, random points on your image and using a flood fill.
This only works well if your image has large contiguous white portions (if your image is an object with few or no holes in front of a background, then you're in luck -- you actually have a heuristic for which points to flood fill from).
(disclaimer: I am no image guru =/ )
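If it helps, a minimal stack-based sketch of that idea (all names are hypothetical; assumes pyPNG-style flat, mutable RGBA rows and a looks_white predicate like the test in the question):

def flood_fill_transparent(pixels, width, height, seed_x, seed_y, looks_white):
    # spread out from the seed, zeroing alpha while pixels still look white
    stack = [(seed_x, seed_y)]
    seen = set()
    while stack:
        x, y = stack.pop()
        if (x, y) in seen or not (0 <= x < width and 0 <= y < height):
            continue
        seen.add((x, y))
        row = pixels[y]
        r, g, b = row[4 * x], row[4 * x + 1], row[4 * x + 2]
        if looks_white(r, g, b):
            row[4 * x + 3] = 0
            stack.extend([(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)])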

I'm quite sure there is no shortcut for this. You have to visit every single pixel.

The issue seems to have more to do with loops in Python than with images.
Python loops are extremely slow; it is best to avoid them and use built-in looping constructs instead.
Here, if you were willing to copy the image, you could use a list comprehension:
def make_transparent(pixel):
    r, g, b, a = pixel
    if min(r, g, b) > 240 and (max(r, g, b) - min(r, g, b)) <= 20:  # looks "quite white"
        return (r, g, b, 0)
    return pixel

new_image = [make_transparent(p) for p in old_image]
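pyPNG, however, hands back flat RGBA rows rather than pixel tuples, so in practice the comprehension would run per row. A sketch of one way to adapt it, reusing make_transparent from above (the row variable is a placeholder for pyPNG's materialized row sequences):

def transparent_row(row):
    # regroup the flat RGBA row into 4-tuples, map each through
    # make_transparent, then flatten the result back out
    out = []
    for px in zip(row[0::4], row[1::4], row[2::4], row[3::4]):
        out.extend(make_transparent(px))
    return out

new_rows = [transparent_row(row) for row in rows]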

Related

Pillow masked extraction without pixel subscripting?

I have one jpeg, call it template.jpg that is a blank ballot. I need to compare this with thousands of scanned ballots, imagexxxx.jpg to see any marks that the user put on the form, even those extraneous to the purpose.
One way to do this would be a nested loop across all pixels like this:
from PIL import Image

tmpl = Image.open('template.jpg')
tmpl = tmpl.convert(mode='1', dither=0)
im = Image.open('imagexxxx.jpg')
pxtmpl = tmpl.load()
pxim = im.load()
for i in range(tmpl.size[0]):
    for j in range(tmpl.size[1]):
        if pxtmpl[i, j] == 0:
            pxim[i, j] = (255, 255, 255)
im.show()
However pixel access is slow.
Needless to say I am willing to spend almost any amount of CPU creating the template file in its optimal format in order to minimize the per-ballot time.
I have been hoping to find some channel-op or clever use of the .point method, but haven't found one that works.
Any suggestions for how to get this done with minimal per-ballot processing time?
(For this discussion I am ignoring the issue of misregistration of ballots vs the template in scanning.)
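For what it's worth, one way to avoid per-pixel access entirely is to turn the template into a paste mask, using exactly the .point call mentioned above. A minimal sketch, assuming the same tmpl and im as in the loop version (untested against real ballot scans):

from PIL import Image

tmpl = Image.open('template.jpg').convert(mode='1', dither=0)
im = Image.open('imagexxxx.jpg')

# build a mask that is 255 exactly where the template is black;
# .point maps each template value v to 255 - v
mask = tmpl.convert('L').point(lambda v: 255 - v)

# paste white through the mask in a single C-level operation
im.paste((255, 255, 255), mask=mask)
im.show()

Since the template never changes, the mask can be built once up front and reused for every ballot, so the per-ballot cost is just one open and one paste.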

Image Sharpening Algorithm coded in Python

I was hoping someone could take a look at this sharpening algorithm I devised using Pillow and explain to me why it is not giving a desirable sharpening effect on images. It really just looks like crap when applied to my sample images. I've worked on this for several days, but haven't made much progress in either improving the quality of the sharpening effect or the efficiency of the algorithm itself. Ideally, I'm looking for a subtle sharpening effect or something that can be scaled easily. I really appreciate any help or insight that can be provided. Here are the sources that I used to come up with this algorithm:
http://lodev.org/cgtutor/filtering.html#Sharpen
http://www.foundalis.com/res/imgproc.htm
from PIL import Image
import os

os.chdir(r"C:")

filter1 = 9
filter2 = -1

def sharpen2(photo, height, width, filter1, filter2):
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            # centre pixel, weighted by filter1
            (r, g, b) = photo.getpixel((x, y))
            r_sum = int(r * filter1)
            g_sum = int(g * filter1)
            b_sum = int(b * filter1)
            # the eight neighbours, each weighted by filter2
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    if dx == 0 and dy == 0:
                        continue
                    (nr, ng, nb) = photo.getpixel((x + dx, y + dy))
                    r_sum += int(nr * filter2)
                    g_sum += int(ng * filter2)
                    b_sum += int(nb * filter2)
            # clamp each channel to the valid 0..255 range
            rfPixel = max(0, min(255, r_sum))
            gfPixel = max(0, min(255, g_sum))
            bfPixel = max(0, min(255, b_sum))
            # note: this writes back into the same image it is reading from
            photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))
    return photo

photo = Image.open("someImage.jpg").convert("RGB")
photo2 = photo.copy()
height = photo.height
width = photo.width
x = sharpen2(photo, height, width, filter1, filter2)
One problem is likely that you're saving the results to the same image you are getting pixel data from. By the time you get to a pixel, some of its neighbors have been replaced by the filtered data, and some have not. The error is small at first but adds up.
To fix: save the results to a different image, say filtered_photo.putpixel(...). You'd have to create a blank filtered_photo first.
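A minimal sketch of that fix, reusing the names from the code above:

# blank output image with the same size and mode as the source
filtered_photo = Image.new("RGB", photo.size)
# inside the loops: keep reading neighbours from photo, but write here instead
# filtered_photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))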
Another big problem (mentioned by @Mark Ransom) is that you probably want filter1 = 1.1 and filter2 = -0.1 or something along those lines. Using 9 and -1 will push most values out of range.
A better implementation: don't loop over each pixel in python code, use numpy to process the whole image at once, it will be much faster (and shorter code). The usual implementation of sharpen is to subtract the gaussian-filtered image from the original image, which is a one-liner using numpy and ndimage (or skimage).
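For illustration, a minimal numpy/ndimage sketch of that approach (unsharp masking; the sigma and amount values here are arbitrary, not tuned):

import numpy as np
from scipy import ndimage
from PIL import Image

img = np.asarray(Image.open("someImage.jpg").convert("RGB"), dtype=float)
# blur only the two spatial axes, not the colour channels
blurred = ndimage.gaussian_filter(img, sigma=(2, 2, 0))
amount = 1.0
sharpened = np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)
Image.fromarray(sharpened).save("sharpened.jpg")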

Defining color range for histologic image mask within HSV colorspace (Python, OpenCV, Image-Analysis)

In an effort to separate histologic slides into several layers based on color, I modified some widely distributed code (1) available through OpenCV's community. Our staining procedure marks different cell types of tissue cross sections with different colors (B cells are red, macrophages are brown, background nuclei have a bluish color).
I'm interested in selecting only the magenta-colored and brown parts of the image.
Here's my attempt to create a mask for the magenta pigment:
import cv2
import numpy as np

def mask_builder(filename, hl, hh, sl, sh, vl, vh):
    # load image, convert to HSV
    bgr = cv2.imread(filename)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # set lower and upper bounds of the range according to the arguments
    lower_bound = np.array([hl, sl, vl], dtype=np.uint8)
    upper_bound = np.array([hh, sh, vh], dtype=np.uint8)
    return cv2.inRange(hsv, lower_bound, upper_bound)

mask = mask_builder('sample 20 138 1.jpg', 170, 180, 0, 200, 0, 230)
cv2.imwrite('mask.jpg', mask)
So far, a trial-and-error approach has produced poor results.
Can anyone suggest a smarter method to threshold within the HSV colorspace? I've done my best to search for answers in previous posts, but it seems that these color ranges are particularly difficult to define due to the nature of the image.
References:
Separation with Colorspaces: http://opencv-python-tutroals.readthedocs.org/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
python opencv color tracking
BGR separation: http://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/
UPDATE:
I've found a working solution to my problem. I increased the lower bounds of 'S' and 'V' in regular intervals using a simple FOR loop, output the results for each test image, and chose the best. I found that my lower bounds for S and V should be set at 100 and 125. This systematic method of trial and error produced better results.
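A minimal sketch of that sweep, reusing mask_builder from the code above (the step size, the 255 upper bounds, and the output filenames are placeholders):

for s_lo in range(0, 256, 25):
    for v_lo in range(0, 256, 25):
        mask = mask_builder('sample 20 138 1.jpg', 170, 180, s_lo, 255, v_lo, 255)
        # write one mask per (S, V) pair so they can be compared by eye
        cv2.imwrite('mask_s%d_v%d.jpg' % (s_lo, v_lo), mask)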
I am happy you found your answer.
I will suggest an alternate method that might work. Unfortunately I am not proficient with Python, so you'll need to work out how to code it there (it's basic).
If I had the first image you get after the HSV threshold, I would use morphological operations to extract the information I want.
I would probably give "closing" a go first, but if that doesn't work I would dilate, then fill, and then erode by the same amount I first dilated.
Probably after this first step you'll need to delete the small "noise" blobs around the image, and then you'll have your result.
This is how it would be in Matlab (showing this mainly so you can see the results):
I=imread('http://i.stack.imgur.com/RlH4V.jpg');
I=I>230; % Create black-and-white image (thresholding needed because Stack Overflow stores it as a jpg)
ker=strel('square',3); % Create a 3x3 square kernel
I1=imdilate(I,ker); % Dilate
I2=imfill(I1,'holes'); % Close
I3=imerode(I2,ker); % Erode
Ilabel=bwlabel(I3,8); % Get a label per independent blob
% Get maximum area blob (you can do this with a for in python easily)
areas = regionprops(Ilabel,'Centroid','Area','PixelIdxList');
[~,index] = max([areas.Area]); % Get the maximum area
Imask=Ilabel==index; % Get the image with only the max area.
% Plot: This is just matlab code, no relevance
figure;
subplot(131)
title('Dilated')
imshow(I1);
subplot(132)
title('Closed')
imshow(I2);
subplot(133)
title('Eroded')
imshow(I3);
figure;
imshow(imread('http://i.stack.imgur.com/ZqrF9.jpg'))
hold on
h=imshow(bwperim(Imask));
set(h,'alphadata',Imask/2)
Note that I started from the "bad" HSV segmentation. If you try a better one the results may improve. Also, play with the kernel size for the erosion and dilation.
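Since the answer above leaves the Python port as an exercise, here is a rough OpenCV/numpy sketch of the same pipeline (untested; the input filename is a placeholder, connectedComponentsWithStats needs OpenCV 3 or later, and the Matlab hole-filling step has no one-line OpenCV equivalent, so it is approximated here with a plain closing):

import cv2
import numpy as np

I = cv2.imread('mask.jpg', cv2.IMREAD_GRAYSCALE)
bw = np.uint8(I > 230) * 255             # black-and-white image, as in the Matlab code
ker = np.ones((3, 3), np.uint8)          # 3x3 square kernel
closed = cv2.morphologyEx(bw, cv2.MORPH_CLOSE, ker)  # dilate then erode

# keep only the largest blob (label 0 is the background)
n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed, connectivity=8)
largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
Imask = np.uint8(labels == largest) * 255
cv2.imwrite('Imask.jpg', Imask)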
Through trial and error (stepping down and up the "S" and "V" scales), I found that my desired colors require a relaxed range for "S" and "V" values. I'll refrain from sharing the particular values I use because I don't think anyone would find them useful.
Note that the original code shared works fine once more representative ranges are used.

Cases where Morphological Opening and Closing yields the same results?

I would like to know if there are any examples or cases where the morphological Opening and Closing operations on a single image produce the same results.
As an example, let's say we have an image X, and we have done an opening operation to produce Y. Similarly, we have done a closing operation on the original X to produce the same Y. I would like to know if there are examples of these types of images X. Programming examples in Python or MATLAB are also appreciated.
Yes there are. As one small example, consider a binary image consisting of a bunch of squares that are disconnected and distinct. Provided that you specify a square structuring element, and choose it so that it is smaller than the smallest square in the image, doing either operation will give you the same results.
If you did an opening on this image and a closing on this image, you would produce the same results. Remember, an opening is an erosion followed by a dilation, while a closing is a dilation followed by an erosion. In terms of analyzing the shapes, erosion slightly shrinks the area of the image while dilation slightly enlarges it.
By doing an erosion followed by a dilation (opening), you're shrinking the object and then growing it again. This brings the image back to where it was before, provided that you choose the structuring element as described above. Similarly, if you did a dilation followed by an erosion (closing), you're growing the object and then shrinking it again, also bringing the image back to where it was before, following that same guideline.
If you were to choose a structuring element larger than the smallest object, doing an opening would remove this object from the image, so you wouldn't get the original image back. You also need to make sure that the objects are far enough apart that the structuring element never overlaps two of them as you slide over and do the morphology operations. The reason is that if you were to do a closing, you would join those objects together, and that wouldn't give you the same results either!
Here's an example image that I generated that is binary:
To generate this image in MATLAB, you can do:
A = false(200,200);
A(30:60,30:60) = true;
A(90:110,90:110) = true;
A(10:30, 135:155) = true;
A(150:180,100:120) = true;
In Python, you can do this with numpy:
import numpy as np
A = np.zeros((200,200), dtype='uint8')
A[29:60,29:60] = 255
A[89:110,89:110] = 255
A[9:30, 134:155] = 255
A[149:180, 99:120] = 255
The reason why I had to create the array as uint8 in numpy is because when we want to show this image, I'm going to use OpenCV and it requires that the image be at least a uint8 type.
Now, let's choose a 5 x 5 square structuring element, and let's perform a closing and an opening with this image. We will display the results in a single figure going from left to right:
se = strel('square', 5);
A_close = imclose(A, se);
A_open = imopen(A, se);
figure;
subplot(1,3,1);
imshow(A);
title('Original');
subplot(1,3,2);
imshow(A_close);
title('Closed');
subplot(1,3,3);
imshow(A_open);
title('Open');
This is the result:
It certainly looks the same! To really show the difference, let's subtract the closed and opened result from the original image. You should get a blank image in the end if they're both equal to the original image.
figure;
subplot(1,2,1);
imshow(abs(double(A) - double(A_close)));
subplot(1,2,2);
imshow(abs(double(A) - double(A_open)));
Bear in mind that I converted the images to double to facilitate subtraction, and I used abs to ensure that negative differences are reflected. This is what I get:
As you can see, both results are totally blank, meaning they're exact copies of the original image after each result.
The equivalent code in Python for the first part is the following:
import cv2
se = np.ones((5,5), dtype='uint8')
A_close = cv2.morphologyEx(A, cv2.MORPH_CLOSE, se)
A_open = cv2.morphologyEx(A, cv2.MORPH_OPEN, se)
cv2.imshow('Original', A)
cv2.imshow('Close', A_close)
cv2.imshow('Open', A_open)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here's what I get:
You'll need to install the OpenCV package for this Python code. I displayed all of the images as three separate figures, then left the windows there until you choose any one of them and push a key. Once you do this, all of the windows will close. If you want to show the subtraction stuff, this is the code in Python:
A_close_diff = A - A_close
A_open_diff = A - A_open
cv2.imshow('Close Diff', A_close_diff)
cv2.imshow('Open Diff', A_open_diff)
cv2.waitKey(0)
cv2.destroyAllWindows()
I didn't name the figures in MATLAB because what we're showing is obvious, but for OpenCV you need to name the windows, so I gave each one a name describing what it shows. I also didn't need to take the absolute value here: in numpy, arithmetic that overflows or underflows a uint8 simply wraps around, while in MATLAB the values get clipped. That's why in MATLAB I had to convert to double and take the absolute value: imshow doesn't display negative intensities, and if we computed 0 - 1, the clipped output would be 0, so you couldn't see that this location differed. With Python, 0 - 1 on uint8 wraps to 255, so the difference certainly shows up, and there's no need for any of the abs and casting work we did in MATLAB. Here's what I get:
In general, you can reproduce what I did with any kind of shape and any size shape, so long as you choose a structuring element that mimics the properties of the shape that is in your image, and you choose a structuring element that is smaller than the smallest shape seen in that image. I'm sure there are many more examples that don't have to follow these specific guidelines, but this is the best example that I can think of at this moment.
This should hopefully get you started.
Good luck!
Yes, there are such images. One of the properties of opening (mentioned in the Wikipedia article, for example) is that it is an anti-extensive operation, i.e. if Y is the opening of X, then Y ⊆ X. Similarly, closing is an extensive operation (i.e. X ⊆ Y), therefore for any such image X = Y. Any image invariant to both opening and closing will satisfy your requirement (and, as I have just shown, only such images will).
Concrete examples depend on the structuring element used when performing the erosion or dilation. For example, if it is a square n x n matrix with all elements equal to 1, then any rectangle with both height and width greater than n (and located far enough from the image edges, i.e. at least n/2 pixels) will satisfy this requirement.
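As a quick sanity check of that rectangle example, a small OpenCV/numpy sketch (the sizes are arbitrary but chosen to satisfy the conditions above):

import cv2
import numpy as np

X = np.zeros((100, 100), np.uint8)
X[30:70, 20:80] = 255           # rectangle much larger than the SE, away from the edges
se = np.ones((5, 5), np.uint8)  # 5 x 5 all-ones structuring element

assert (cv2.morphologyEx(X, cv2.MORPH_OPEN, se) == X).all()
assert (cv2.morphologyEx(X, cv2.MORPH_CLOSE, se) == X).all()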

Drawing fast lines in pygame

I'm trying to draw fast lines using pygame that aren't rendered directly to the screen. I've got a Python list as large as the number of pixels at the desired resolution, in which I store integer values corresponding to the number of times each pixel was hit by the line algorithm. Using this, a 2D heat map is built up: rather than drawing a flat pixel value, pixel values are incremented based on the number of times a line runs through them, and "hot" pixels get brighter colours.
The reason for doing it this way is that we don't know in advance how many of these lines are going to get drawn, and what the maximum number of times any given pixel is going to be hit. Since we'd like to scale the output so that each rendering has the correct maximum and minimum RGB values, we can't just draw to the screen.
Is there a better way to draw these lines than a relatively naive Bresenham's algorithm? Here's the critical part of the drawLine function:
# before the loop, to save repeated multiplications
xm = []
for i in range(resolution[0]):
    xm.append(i * resolution[0])

# inside of drawLine, index into the f list, of size resolution[0] * resolution[1]
for x in range(x0, x1 + 1):
    if steep:
        idx = y + xm[x]
    else:
        idx = x + xm[y]
    f[idx] += 1
The end result is scaled and drawn to the screen based on the maximum value inside of f. For example, if the maximum value is 1000, then you can assume the RGB value of each of the pixels is (f[i] * 255) / 1000.
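In code, that final scaling step might look like this minimal sketch (guarding against an all-zero buffer):

max_f = max(f) or 1                       # avoid dividing by zero before any lines are drawn
scaled = [(v * 255) // max_f for v in f]  # one grey level per heat-map cell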
The profile information says that runtime is dominated by the index lookups into f. I've used previous questions here to prove that these basic lists are faster than numpy arrays or arrays in Python, but for drawing lines like this, it still seems like there's room to improve.
What's a good and fast method for drawing an unknown number of lines to the screen, knowing that you'll be scaling the output in the end to render to the screen? Is there a good way to get rid of the index overhead?
Try Cython or something similar. (If you do, I would be interested in knowing if/how much it helped.)
Cython is a programming language to simplify writing C and C++ extension modules for the CPython Python runtime. Strictly speaking, Cython syntax is a superset of Python syntax, additionally supporting: direct calling of C functions, or C++ functions/methods, from Cython code; strong typing of Cython variables, classes, and class attributes as C types. Cython compiles to C or C++ code rather than Python, and the result is used as a Python Extension Module or as a stand-alone application embedding the CPython runtime. (http://en.wikipedia.org/wiki/Cython)
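To make the suggestion concrete, here is a rough Cython sketch of the hot loop (the file and function names are hypothetical; assumes f and xm are moved into integer arrays with a dtype matching the platform's C long so they can be passed as typed memoryviews):

# drawline.pyx
# cython: boundscheck=False, wraparound=False
def draw_line_span(long[:] f, long[:] xm, int x0, int x1, int y, bint steep):
    # same logic as the Python loop, but compiled to C with typed indexing
    cdef int x
    for x in range(x0, x1 + 1):
        if steep:
            f[y + xm[x]] += 1
        else:
            f[x + xm[y]] += 1

Building this with cythonize and calling draw_line_span from the existing drawLine would leave the rest of the code unchanged.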
