Pillow masked extraction without pixel subscripting? - python

I have one JPEG, call it template.jpg, that is a blank ballot. I need to compare it with thousands of scanned ballots, imagexxxx.jpg, to see any marks that the user put on the form, even those extraneous to its purpose.
One way to do this would be a nested loop across all pixels like this:
from PIL import Image

tmpl = Image.open('template.jpg')
tmpl = tmpl.convert(mode='1', dither=0)
im = Image.open('imagexxxx.jpg')
pxtmpl = tmpl.load()
pxim = im.load()
for i in range(tmpl.size[0]):
    for j in range(tmpl.size[1]):
        if pxtmpl[i, j] == 0:
            pxim[i, j] = (255, 255, 255)
im.show()
However, pixel access is slow.
Needless to say, I am willing to spend almost any amount of CPU creating the template file in its optimal format in order to minimize the per-ballot time.
I have been hoping to find some channel operation or clever use of the .point method, but haven't found one that works.
Any suggestions for how to get this done with minimal per-ballot processing time?
(For this discussion I am ignoring the issue of misregistration of ballots vs the template in scanning.)
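One way to avoid the per-pixel loop entirely is a mask-based paste: build a mask from the template once, then paste solid white through it in a single C-speed call per ballot. A minimal sketch (untested against your files; the .point trick for inverting the bilevel template is an assumption):

from PIL import Image

tmpl = Image.open('template.jpg').convert(mode='1', dither=0)
# Mask is 255 exactly where the blank form prints (template pixel == 0)
mask = tmpl.point(lambda p: 255 if p == 0 else 0)
im = Image.open('imagexxxx.jpg')
# Paste solid white through the mask: one call per ballot, no Python loop
im.paste((255, 255, 255), mask=mask)
im.show()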


How to best apply `skimage.util.apply_parallel` to remove small objects in large datasets?

For a study where I'm processing a fairly large cohort of tomographically scanned teeth I'd like to implement a cropping function to extract each tooth from its surroundings.
For this I'm using scipy.ndimage.find_objects(thresholded_data)[0] and following what's outlined in the documentation.
To reliably extract the tooth from the full dataset I need to remove small objects from the thresholded data prior to extracting the tooth with find_object.
I achieve this with
def cropper(image, threshold=66):
    thresholded = image > threshold
    despeckled = skimage.morphology.remove_small_objects(thresholded, min_size=500)
    cropdimensions = scipy.ndimage.find_objects(despeckled)[0]
    return image[cropdimensions]
My datasets are quite large (each with a size of around 4000 x 1632 x 1632 voxels) and I'm using dask to store a representation of the full datasets on disk.
To make my function above work I thus have to deliberately use .compute() at the beginning to load the full dataset into RAM.
I thought I should be able to apply the skimage.util.apply_parallel function to speed up my processing (and make it possible to run on a machine with less than the 196 GB of RAM that I fortunately have).
If I read the documentation correctly, it seems that I can 'simply' expand my function and use the code below.
def cropper(image, threshold=66):
    thresholded = image > threshold
    despeckled = skimage.util.apply_parallel(skimage.morphology.remove_small_objects,
                                             thresholded,
                                             extra_keywords={'min_size': 500})
    cropdimensions = skimage.util.apply_parallel(scipy.ndimage.find_objects,
                                                 thresholded)
    cropdimensions = cropdimensions[0]
    return image[cropdimensions]
But I get an AttributeError that a 'list' object has no attribute 'dtype'.
If I remove the find_objects, I can get out the despeckled image like so
def cropper(image, threshold=66):
    thresholded = image > threshold
    despeckled = skimage.util.apply_parallel(skimage.morphology.remove_small_objects,
                                             thresholded,
                                             extra_keywords={'min_size': 500})
    return despeckled
Can I not use two consecutive apply_parallels in the same function?
After a while of debugging: obviously you can't do it like that, because you're not using a skimage filter to find the object! scipy.ndimage.find_objects returns a list of slices rather than an array, which is exactly where the AttributeError above comes from.
If you change the relevant line to cropdimensions = scipy.ndimage.find_objects(despeckled)[0] it works as expected, you silly!
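For reference, the full working function would then look something like this (a sketch combining the code above with the fix; whether remove_small_objects behaves well across chunk borders under apply_parallel is an assumption worth verifying):

import scipy.ndimage
import skimage.morphology
import skimage.util

def cropper(image, threshold=66):
    thresholded = image > threshold
    # Despeckle chunk-wise in parallel, as before
    despeckled = skimage.util.apply_parallel(skimage.morphology.remove_small_objects,
                                             thresholded,
                                             extra_keywords={'min_size': 500})
    # find_objects returns a list of slices, not an array, so it is
    # called directly rather than wrapped in apply_parallel
    cropdimensions = scipy.ndimage.find_objects(despeckled)[0]
    return image[cropdimensions]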

Image Sharpening Algorithm coded in Python

I was hoping someone could take a look at this sharpening algorithm I devised using Pillow and explain to me why it is not giving a desirable sharpening effect on images. It really just looks like crap when applied to my sample images. I've worked on this for several days, but haven't made much progress in either improving the quality of the sharpening effect or the efficiency of the algorithm itself. Ideally, I'm looking for a subtle sharpening effect or something that can be scaled easily. I really appreciate any help or insight that can be provided. Here are the sources that I used to come up with this algorithm:
http://lodev.org/cgtutor/filtering.html#Sharpen
http://www.foundalis.com/res/imgproc.htm
from PIL import Image
import os

os.chdir(r"C:")

filter1 = 9
filter2 = -1

def sharpen2(photo, height, width, filter1, filter2):
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            (r, g, b) = photo.getpixel((x, y))
            r = int(r * filter1)
            g = int(g * filter1)
            b = int(b * filter1)
            (r1, g1, b1) = photo.getpixel((x - 1, y - 1))
            r1 = int(r1 * filter2)
            g1 = int(g1 * filter2)
            b1 = int(b1 * filter2)
            (r2, g2, b2) = photo.getpixel((x, y - 1))
            r2 = int(r2 * filter2)
            g2 = int(g2 * filter2)
            b2 = int(b2 * filter2)
            (r3, g3, b3) = photo.getpixel((x + 1, y - 1))
            r3 = int(r3 * filter2)
            g3 = int(g3 * filter2)
            b3 = int(b3 * filter2)
            (r4, g4, b4) = photo.getpixel((x - 1, y))
            r4 = int(r4 * filter2)
            g4 = int(g4 * filter2)
            b4 = int(b4 * filter2)
            (r5, g5, b5) = photo.getpixel((x + 1, y))
            r5 = int(r5 * filter2)
            g5 = int(g5 * filter2)
            b5 = int(b5 * filter2)
            (r6, g6, b6) = photo.getpixel((x - 1, y + 1))
            r6 = int(r6 * filter2)
            g6 = int(g6 * filter2)
            b6 = int(b6 * filter2)
            (r7, g7, b7) = photo.getpixel((x, y + 1))
            r7 = int(r7 * filter2)
            g7 = int(g7 * filter2)
            b7 = int(b7 * filter2)
            (r8, g8, b8) = photo.getpixel((x + 1, y + 1))
            r8 = int(r8 * filter2)
            g8 = int(g8 * filter2)
            b8 = int(b8 * filter2)
            rfPixel = r + r1 + r2 + r3 + r4 + r5 + r6 + r7 + r8
            if rfPixel > 255:
                rfPixel = 255
            elif rfPixel < 0:
                rfPixel = 0
            gfPixel = g + g1 + g2 + g3 + g4 + g5 + g6 + g7 + g8
            if gfPixel > 255:
                gfPixel = 255
            elif gfPixel < 0:
                gfPixel = 0
            bfPixel = b + b1 + b2 + b3 + b4 + b5 + b6 + b7 + b8
            if bfPixel > 255:
                bfPixel = 255
            elif bfPixel < 0:
                bfPixel = 0
            photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))
    return photo

photo = Image.open("someImage.jpg").convert("RGB")
photo2 = photo.copy()
height = photo.height
width = photo.width
x = sharpen2(photo, height, width, filter1, filter2)
One problem is likely that you're saving the results to the same image you are getting pixel data from. By the time you get to a pixel, some of its neighbors have been replaced by the filtered data, and some have not. The error is small at first but adds up.
To fix: save the results to a different image, say filtered_photo.putpixel(...). You'd have to create a blank filtered_photo first.
Another big problem (mentioned by @Mark Ransom) is that you probably want filter1 = 1.1 and filter2 = -0.1, or something along those lines. Using 9 and -1 will push most values out of range.
A better implementation: don't loop over each pixel in Python code; use numpy to process the whole image at once, which will be much faster (and shorter code). The usual way to sharpen is unsharp masking: subtract a Gaussian-blurred copy from the original and add that difference back to the original, which is nearly a one-liner using numpy and ndimage (or skimage).
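A minimal sketch of that numpy route (the sigma and amount values here are illustrative, not tuned):

import numpy as np
from PIL import Image
from scipy import ndimage

photo = np.asarray(Image.open("someImage.jpg").convert("RGB"), dtype=float)

# Unsharp masking: add back a scaled high-pass (original minus blurred)
amount = 1.0                                               # smaller = subtler effect
blurred = ndimage.gaussian_filter(photo, sigma=(2, 2, 0))  # no blur across channels
sharpened = np.clip(photo + amount * (photo - blurred), 0, 255).astype(np.uint8)

Image.fromarray(sharpened).show()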

Manipulating data arrays and adding noise

I just cannot seem to get this right! :-(
I have code that is supposed to read in a .fits file, add normally distributed noise to it, and then re-save the file. So far, it just does not seem to be working at all. There's a lot of extra code, so I only posted the portion that's relevant. Assume that everything this slice of code needs to read in exists, because it does. The goal of this code is to take a .fits file, add normally distributed noise to the pixels, then save that file. "poisson" is a previously inputted variable, i.e. a "poisson" value of 1 corresponds to one standard deviation from the mean of zero. Yes, the word "poisson" is a bit of a misnomer, and I should really overhaul my code to amend that.
My first issue is: what does im0 = im[0] mean? It doesn't seem like it's the first row of pixels in the .fits file, because when I change the integer in the brackets to anything besides "0", I get an index error. On top of that, the normalNoise = np.random.normal(0,poisson) call is incomplete because I'm missing a third parameter, "size" (tuple of ints), and I have no idea what that means. My images are 130 pixels x 130 pixels, if that means anything.
im = pf.open(name)
im0 = im[0]
normalNoise = np.random.normal(0,poisson)
print im0.data
test = im0.data + normalNoise
print test
im0.data = test
stringee = 'NOISE'
pf.writeto(stringee+str(poisson)+name, data=test, clobber=True, header=im0.header)
print poisson
This should ideally spit out the same image but with added noise, except it doesn't!
I do not know the underlying library, but if pf is a FITS file, im[0] is probably creating a reference to the primary HDU; if there is only a primary HDU (as for any simple-minded FITS file with only a single image), any higher index leads to an error.
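On the missing size parameter: passing the data's shape draws one independent sample per pixel rather than a single scalar added uniformly to the whole image. A sketch, assuming pf is pyfits/astropy.io.fits and keeping the question's variables (name, poisson):

import numpy as np
from astropy.io import fits as pf  # assumption: pf is pyfits/astropy.io.fits

im = pf.open(name)
im0 = im[0]  # primary HDU

# size=im0.data.shape draws a sample per pixel (130 x 130 here);
# without it, normal() returns one scalar added to every pixel alike
normalNoise = np.random.normal(0, poisson, size=im0.data.shape)
test = im0.data + normalNoise
# 'clobber' as in the question; newer astropy spells it 'overwrite'
pf.writeto('NOISE' + str(poisson) + name, data=test, clobber=True,
           header=im0.header)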

How to find a template in an image using a mask (or transparency) with OpenCV and Python?

Let us assume we are looking for this template:
The corners of our template are transparent, so the background will vary, like so:
Assuming we could use the following mask with our template:
It would be very easy to find it.
What I have tried:
I have tried matchTemplate but it doesn't support masks (as far as I know), and using the alpha channel (transparency) in the template does not achieve this, as it compares the alpha channels instead of ignoring those pixels.
I have also looked into "region of interest", which I thought would be the solution, but with it you can only specify a rectangular area. I'm not even sure if it works on the template or not.
I'm sure this is possible to do by writing my own algorithm, but I was hoping this is possible via standard OpenCV to avoid reinventing the wheel. Not to mention, it would most likely be more optimised than my own.
So, how could I do something like this with OpenCV + Python?
This could be achieved using only the matchTemplate function, but a little workaround is needed.
Let's analyse the default metric (CV_TM_SQDIFF_NORMED). According to the matchTemplate documentation,
the metric (leaving the normalisation aside) looks like this:
R(x, y) = sum (I(x+x', y+y') - T(x', y'))^2
where I is the image matrix, T is the template and R is the result matrix. Summation is done over the template coordinates x' and y'.
So, let's alter this metric by inserting a weight matrix W, which has the same dimensions as T:
Q(x, y) = sum W(x', y')*(I(x+x', y+y') - T(x', y'))^2
In this case, setting W(x', y') = 0 makes that pixel be ignored. So, how do we compute such a metric? With simple math:
Q(x, y) = sum W(x', y')*(I(x+x', y+y') - T(x', y'))^2
= sum W(x', y')*(I(x+x', y+y')^2 - 2*I(x+x', y+y')*T(x', y') + T(x', y')^2)
= sum {W(x', y')*I(x+x', y+y')^2} - sum{W(x', y')*2*I(x+x', y+y')*T(x', y')} + sum{W(x', y')*T(x', y')^2}
So, we have divided the Q metric into three separate sums, and all of those sums can be calculated
with the matchTemplate function (using the CV_TM_CCORR method). Namely:
sum {W(x', y')*I(x+x', y+y')^2} = matchTemplate(I^2, W, method=2)
sum{W(x', y')*2*I(x+x', y+y')*T(x', y')} = matchTemplate(I, 2*W*T, method=2)
sum{W(x', y')*T(x', y')^2} = matchTemplate(T^2, W, method=2) = sum(W*T^2)
The last element is a constant, so it has no effect on the minimisation. On the other hand, it can still be useful for checking whether our template has a perfect match (Q approaching zero). Nonetheless, for the last element we do not actually need the matchTemplate function, since it can be calculated directly.
The final pseudocode looks like this:
result = matchTemplate(I^2, W, method=2) - matchTemplate(I, 2*W*T, method=2) + as.scalar(sum(W*T^2))
Does it really do exactly as defined? Mathematically, yes. Practically, there is some small rounding error, because the matchTemplate function works in 32-bit floating point, but I believe it is not a big problem.
Please note that you can extend this analysis to obtain weighted equivalents of any metric offered by matchTemplate.
This actually worked for me. I am sorry I don't give actual code; I am working in R, so I don't have it in Python. But the idea is quite straightforward.
I hope this will help.
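Since the answer above stops short of Python, here is a minimal sketch of the same construction (the grayscale float32 assumption and the helper name masked_sqdiff are mine, not the answerer's):

import cv2
import numpy as np

def masked_sqdiff(I, T, W):
    # I: image, T: template, W: weight/mask of T's shape
    # (0 = ignore that template pixel, 1 = count it)
    I = I.astype(np.float32)
    T = T.astype(np.float32)
    W = W.astype(np.float32)
    term1 = cv2.matchTemplate(I * I, W, cv2.TM_CCORR)      # sum W * I^2
    term2 = cv2.matchTemplate(I, 2 * W * T, cv2.TM_CCORR)  # sum 2 * W * I * T
    term3 = np.sum(W * T * T)                              # constant: sum W * T^2
    return term1 - term2 + term3

# The best match minimises the weighted squared difference:
# min_val, _, min_loc, _ = cv2.minMaxLoc(masked_sqdiff(image, template, mask))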
What worked for me the one time I needed this was to fill the "mask" areas with white noise. Then it gets effectively washed out of the correlation when looking for matches. Otherwise I got, as I presume you did, false matches on the masked areas.
One answer to your question is convolution: use the template as a kernel and filter the image.
The destination Mat will have dense bright areas where your template might be; you'll have to cluster the results (e.g. mean-shift).
That way, you'll have a very simplistic implementation of the Generalized Hough Transform, or of template-based convolution matching; a rough sketch follows.
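Something along these lines (the zero-mean normalisation of the kernel is my own assumption, to keep uniformly bright regions from dominating the response):

import cv2
import numpy as np

image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)

# Use the template as a filter kernel; subtracting its mean keeps
# flat bright areas from producing spurious peaks
kernel = template - template.mean()
response = cv2.filter2D(image, -1, kernel)

# Dense bright peaks in `response` are candidate template locations,
# to be clustered (e.g. with mean-shift) as described above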
ImageMagick 7.0.3.9 now has a masked compare capability so that you can limit the template matching region. See http://www.imagemagick.org/discourse-server/viewtopic.php?f=4&t=31053
Also, I see that OpenCV 3.0 now has masked template matching. See http://docs.opencv.org/3.0.0/df/dfb/group__imgproc__object.html#ga586ebfb0a7fb604b35a23d85391329be
However, it is only for method == CV_TM_SQDIFF and method == CV_TM_CCORR_NORMED. See python opencv matchTemplate is mask feature implemented?
ImageMagick has logic for finding subimages in other images and it works quite well.
compare -verbose -dissimilarity-threshold 0.1 -subimage-search subimage bigimage
I've used it to find and blur watermarks off some products. Don't ask.
(Sometimes you have to do what you have to do..)
2021 Update: I've been trying to find a solution for transparency in templates throughout the day, and I think I finally found a way to do it. matchTemplate() has a mask parameter, which apparently works exactly like OP wants it to: ignore certain pixels from a template when searching for it in another image. And since my templates already contain transparency in them, I decided to use my template as both a template and mask parameter. Surprisingly, it worked.
I'm using JavaScript with opencv4nodejs, so the following python code snippet might be completely off, but the theory is there and I'm fairly positive it should work.
# Import OpenCV
import cv2 as cv
# Read both the image and the template
image = cv.imread("image.png", cv.IMREAD_COLOR)
template = cv.imread("template.png", cv.IMREAD_COLOR)
# Match with template as both template and mask parameter
result = cv.matchTemplate(image, template, cv.TM_CCORR_NORMED, None, template)
Here's a gist for JavaScript with opencv4nodejs if you're interested.
Now that I think about it, it seems really stupid and way too good to be true, but I've been getting good matches (0.98+) on most tests. Hope this helps!

Image Gurus: Optimize my Python PNG transparency function

I need to replace all the white(ish) pixels in a PNG image with alpha transparency.
I'm using Python on AppEngine and so do not have access to libraries like PIL, ImageMagick, etc. AppEngine does have an image library, but it is pitched mainly at image resizing.
I found the excellent little pyPNG module and managed to knock up a little function that does what I need:
make_transparent.py
Pseudo-code for the main loop would be something like:

for each pixel:
    if pixel looks "quite white":
        set pixel values to transparent
    otherwise:
        keep existing pixel values

and (assuming 8-bit values) "quite white" would be:
where each r, g, b value is greater than 240
AND each r, g, b value is within 20 of each other
This is the first time I've worked with raw pixel data in this way, and although it works, it also performs extremely poorly. It seems like there must be a more efficient way of processing the data without iterating over each pixel in this manner? (Matrices?)
I was hoping someone with more experience in dealing with these things might be able to point out some of my more obvious mistakes/improvements in my algorithm.
Thanks!
This still visits every pixel, but may be faster:
from array import array

new_pixels = []
for row in pixels:
    new_row = array('B', row)
    i = 0
    while i < len(new_row):
        r = new_row[i]
        g = new_row[i + 1]
        b = new_row[i + 2]
        if r > threshold and g > threshold and b > threshold:
            m = int((r + g + b) / 3)
            if nearly_eq(r, m, tolerance) and nearly_eq(g, m, tolerance) and nearly_eq(b, m, tolerance):
                new_row[i + 3] = 0
        i += 4
    new_pixels.append(new_row)
It avoids the slice generator, which would copy the entire row of pixels for every pixel (less one pixel each time).
It also pre-allocates the output row by directly copying the input row, and then only writes the alpha value of pixels which have changed.
Even faster would be to not allocate a new set of pixels at all, and just write directly over the pixels in the source image (assuming you don't need the source image for anything else).
Honestly, the only heuristic I could conceive is picking a few arbitrary, random points on your image and using a flood fill.
This only works well if your image has large contiguous white portions (if your image is an object with few or no holes in front of a background, then you're in luck: you actually have a heuristic for which points to flood fill from).
(disclaimer: I am no image guru =/ )
I'm quite sure there is no shortcut for this; you have to visit every single pixel.
The issue seems to have more to do with loops in Python than with images.
Python loops are extremely slow; it is best to avoid them and use built-in loop operations instead.
Here, if you were willing to copy the image, you could use a list comprehension:
def make_transparent(pixel):
    if pixel looks "quite white": return transparent
    else: return pixel

newImage = [make_transparent(p) for p in oldImage]
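If numpy is an option in your environment, the whole test vectorises; a sketch, assuming the rows have been stacked into a (height, width, 4) uint8 RGBA array (threshold and tolerance are the question's values):

import numpy as np

def make_transparent(pixels, threshold=240, tolerance=20):
    # pixels: (height, width, 4) uint8 RGBA array
    rgb = pixels[..., :3].astype(np.int16)                     # avoid uint8 overflow
    bright = (rgb > threshold).all(axis=-1)                    # every channel above 240
    flat = (rgb.max(axis=-1) - rgb.min(axis=-1)) < tolerance   # channels within 20
    pixels[bright & flat, 3] = 0                               # zero alpha where "quite white"
    return pixels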
