I want to compare two images and save a difference image, where the differences are marked in red.
Unfortunately I get the following error:
Traceback (most recent call last):
  File "pythontest.py", line 216, in <module>
    nDiff = compare(sPathCur, sPathRef, sPathDif)
  File "pythontest.py", line 88, in compare
    pix_diff[y, x] = (255, 0, 0)
TypeError: function takes exactly 1 argument (3 given)
from PIL import Image, ImageChops

def compare(sPathCur, sPathRef, sPathDif):
    im_cur = Image.open(sPathCur)
    im_ref = Image.open(sPathRef)
    im_dif = im_cur.convert('L')  # convert image to greyscale
    delta = ImageChops.difference(im_cur, im_ref)
    width, height = delta.size
    pix_delta = delta.load()
    pix_diff = im_dif.load()
    for y in range(width):
        for x in range(height):
            r, g, b = pix_delta[y, x]
            if r > 0 or g > 0 or b > 0:
                pix_diff[y, x] = (255, 0, 0)
    im_dif.save(sPathDif)
Once you have performed the conversion to a greyscale image, each pixel is assigned a single value, rather than an RGB triplet.
Taken from http://effbot.org/imagingbook/image.htm:
When converting from a colour image to black and white, the library
uses the ITU-R 601-2 luma transform:
L = R * 299/1000 + G * 587/1000 + B * 114/1000
So if your pixel at [x, y] = [0, 0] had an (R, G, B) value of (100, 150, 200), then after converting to greyscale it would contain the single value 140.75 (which is then rounded to an integer).
You can verify this by checking the value of pix_diff[0, 0] before your nested loops. It should return only a single value.
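You can also check the maths with a quick one-liner (a minimal sketch; Pillow rounds to the nearest integer here, so it should print 141):

from PIL import Image

# 1x1 RGB image with the example colour, converted to greyscale
px = Image.new('RGB', (1, 1), (100, 150, 200)).convert('L').getpixel((0, 0))
print(px)  # 0.299*100 + 0.587*150 + 0.114*200 = 140.75 -> 141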
So you either need to assign a single greyscale value to each pixel in pix_diff[y, x], or convert your pix_diff image back into an RGB-compatible format before you can assign each pixel your value of (255, 0, 0).
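For the second option, here is a minimal sketch of your compare function with the greyscale copy converted back to RGB (assuming both inputs are RGB images):

from PIL import Image, ImageChops

def compare(sPathCur, sPathRef, sPathDif):
    im_cur = Image.open(sPathCur)
    im_ref = Image.open(sPathRef)
    # grey values, but stored as RGB triplets again, so assigning
    # (255, 0, 0) is legal
    im_dif = im_cur.convert('L').convert('RGB')
    delta = ImageChops.difference(im_cur, im_ref)
    width, height = delta.size
    pix_delta = delta.load()
    pix_diff = im_dif.load()
    for x in range(width):
        for y in range(height):
            if pix_delta[x, y] != (0, 0, 0):
                pix_diff[x, y] = (255, 0, 0)  # mark the difference in red
    im_dif.save(sPathDif)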
Related
I am trying to change the value of each pixel in an image to the closest value in my list, and I can't figure out why I can't change the pixel value.
I've tried converting the image to RGB or RGBA, and for some reason it sometimes takes 3 arguments and sometimes 4.
from PIL import Image, ImageGrab

im = Image.open('rick.png')  # Can be many different formats.
rgb_im = im.convert('RGBA')
pix = im.load()
height, width = im.size
image = ImageGrab.grab()

COLORS = (
    (0, 0, 0),
    (127, 127, 127),
    (136, 0, 21),
    (237, 28, 36),
    (255, 127, 39),
)

def closest_color(r, g, b, COLORS):
    min_diff = 9999
    answer = None
    for color in COLORS:
        cr, cg, cb = color
        color_diff = abs(r - cr) + abs(g - cg) + abs(b - cb)
        if color_diff < min_diff:
            answer = color
            min_diff = color_diff
    return answer

def read_color(height, width, COLORS, pix):
    for x in range(height):
        for y in range(width):
            r, g, b, a = rgb_im.getpixel((x, y))
            color = closest_color(r, g, b, COLORS)  # color is returned as a tuple
            pix[x, y] = color  # Changing the color value? -Here I get the error-

read_color(height, width, COLORS, pix)
im.save('try.png')
I keep getting this error even though closest_color returns a single tuple, and I don't know why. Thank you for your help!
COLORS is a list of colors; I've tested the closest_color() function and it works well.
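For what it's worth, a quick sanity check of closest_color with an arbitrary reddish input (values chosen purely for illustration):

# |230-237| + |30-28| + |40-36| = 13, smaller than for any other entry
print(closest_color(230, 30, 40, COLORS))  # -> (237, 28, 36)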
Error message:
Exception has occurred: TypeError
function takes exactly 1 argument (3 given)
  File "C:\Users\user\Desktop\תוכנות שעשיתי\program.py", line 133, in read_color
    pix[x,y] = color
  File "C:\Users\user\Desktop\תוכנות שעשיתי\program.py", line 137, in <module>
    read_color(height,width, COLORS, pix)
EDIT!
Apparently the code works for most images, but not all of them; for example, this image doesn't work and I get this error.
You are being inconsistent by reading the pixels from the RGBA-converted image but setting the pixels in the original, maybe-not-RGBA image. Fixing that makes your code work with the sample image.
pix = rgb_im.load()
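Putting it together, a minimal sketch of the corrected flow (keeping your file names, and writing the alpha back explicitly so the tuple always matches RGBA mode):

from PIL import Image

im = Image.open('rick.png')
rgb_im = im.convert('RGBA')
pix = rgb_im.load()  # read and write the *same* converted image

width, height = rgb_im.size  # note: PIL's size is (width, height)
for x in range(width):
    for y in range(height):
        r, g, b, a = rgb_im.getpixel((x, y))
        cr, cg, cb = closest_color(r, g, b, COLORS)
        pix[x, y] = (cr, cg, cb, a)  # 4-tuple matches RGBA mode

rgb_im.save('try.png')  # save the image that was actually modified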
I get the error "setting an array element with a sequence" and I'm not sure how I can fix it. The line that triggers the error is:
ret[q,l] = (50,255,20)
ValueError: setting an array element with a sequence.
What I'm trying to do is change every pixel from the thresh array (the located edges) in a greyscale image to a green color, but I keep getting this error.
Also, here is the rest of my edge-detection code:
def edgeDetect(img, threshold):
    ret = img.copy()
    thresh = []
    h = img.shape[0]
    w = img.shape[1]
    for y in range(1, h-2):
        for x in range(1, w-2):
            gx = 0
            gy = 0
            gx = (img.item(y, x)*0 + img.item(y+1, x)*0 + img.item(y-1, x)*0
                  + img.item(y, x+1)*2 + img.item(y, x-1)*-2
                  + img.item(y+1, x+1)*1 + img.item(y-1, x-1)*-1
                  + img.item(y+1, x-1)*-1 + img.item(y-1, x+1)*1)
            gy = (img.item(y, x)*0 + img.item(y+1, x)*-2 + img.item(y-1, x)*2
                  + img.item(y, x+1)*0 + img.item(y, x-1)*0
                  + img.item(y+1, x+1)*-1 + img.item(y-1, x-1)*1
                  + img.item(y+1, x-1)*-1 + img.item(y-1, x+1)*1)
            result = gx**2 + gy**2
            if result > float(threshold)**2:
                thresh.append([y, x])
    for p in range(len(thresh)):
        print()
        q, l = thresh[p]
        img[q, l] = (50, 255, 20)
    return ret
You are probably trying to set a 3-channel value (RGB, or BGR if using OpenCV) on a single-channel (i.e., grayscale) array.
In other words, what's the shape of the img array? If your image is 256x256 pixels, is the shape (256,256) or (256,256,3)? If the latter, you should be able to set the pixel value (a 3-tuple) as img[row,col,:] = (r,g,b).
If my assumption is right and img is only single-channel, then what you need to do is convert the existing single-channel image to 3 channels and then set the color.
OpenCV's cvtColor(...) is an easy way to convert from grayscale images to BGR/RGB, but you can do the same with straightforward numpy commands.
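For example, a minimal sketch (the file name is a placeholder):

import cv2
import numpy as np

gray = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)  # shape (h, w)
ret = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)          # shape (h, w, 3)
ret[10, 20] = (50, 255, 20)                           # a 3-tuple is now valid

# the same promotion with plain numpy:
ret2 = np.dstack([gray, gray, gray])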
I would like to transform a .jpg into a categorical array. For each pixel of the image I have RGB values, and I would like to associate these values with a unique value (see images). Do you have any idea how to do this? I've done some research in scikit-image and other image processing modules, but without success.
The first part of the solution is found in https://stackoverflow.com/a/30524039/3104727. It is reproduced here in order to make it work with this image.
from PIL import Image
import operator
from collections import defaultdict
from functools import reduce  # reduce is no longer a builtin in Python 3
import numpy as np

input_path = 'TI_test.jpg'
output_path = 'TI_output.png'
size = (200, 200)

# Then we declare the palette - this should contain all colours.
palette = [(112, 137, 98),  # green
           (96, 97, 115),   # blue
           (140, 129, 49),  # gold
           (184, 31, 36),   # red
           ]
while len(palette) < 256:
    palette.append((0, 0, 0))

# The code below will declare the palette for PIL, since PIL needs a flat
# array rather than an array of tuples:
flat_palette = reduce(lambda a, b: a + b, palette)
assert len(flat_palette) == 768

# Now we can declare an image that will hold the palette. We'll use
# it to reduce the colours from the original image later.
palette_img = Image.new('P', (1, 1), 0)
palette_img.putpalette(flat_palette)

# Here we open the image and quantize it. We scale it to a size eight
# times bigger than needed, since we're going to sample the average
# output later.
multiplier = 8
img = Image.open(input_path)
img = img.resize((size[0] * multiplier, size[1] * multiplier), Image.BICUBIC)
img = img.quantize(palette=palette_img)  # reduce the palette

# We need to convert it back to RGB so that we can sample pixels now:
img = img.convert('RGB')

# Now we're going to construct our final image. To do this, we'll
# sample how many pixels of each palette colour each square in the
# bigger image contains. Then we'll choose the colour that occurs most
# often.
out = Image.new('RGB', size)
for x in range(size[0]):
    for y in range(size[1]):
        # sample to get the average colour in the corresponding square
        histogram = defaultdict(int)
        for x2 in range(x * multiplier, (x + 1) * multiplier):
            for y2 in range(y * multiplier, (y + 1) * multiplier):
                histogram[img.getpixel((x2, y2))] += 1
        color = max(histogram.items(), key=operator.itemgetter(1))[0]  # items() in Python 3
        out.putpixel((x, y), color)
The following code is added to transform the RGB image to grayscale and then into an array of categorical values (0 to n colours).

out2 = out.convert('L')

List of unique grayscale values:

color = list(set(list(out2.getdata())))

Associate a categorical value (0 to n colours) with each pixel:

for x in range(size[0]):
    for y in range(size[1]):
        if out2.getpixel((x,y)) == color[0]:
            out2.putpixel((x,y), 0)
        elif out2.getpixel((x,y)) == color[1]:
            out2.putpixel((x,y), 1)
        elif out2.getpixel((x,y)) == color[2]:
            out2.putpixel((x,y), 2)
        else:
            out2.putpixel((x,y), 3)

Transform the image to a numpy array:

pix = np.array(out2)
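As a side note, the whole if/elif mapping can be collapsed with numpy; a sketch assuming out2 is the grayscale image from above:

import numpy as np

pix = np.array(out2)
# return_inverse gives, for every pixel, the index of its value in the
# array of unique grayscale values, i.e. a categorical 0..n-1 array
categories = np.unique(pix, return_inverse=True)[1].reshape(pix.shape)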
This code is supposed to convert my RGB image to black/white and give me the RGB value, which should be (0, 0, 0) or (255, 255, 255).
import cv2
import numpy as np

template = cv2.imread(r'C:\colorbars.png')  # raw string avoids escape issues
gray = cv2.cvtColor(template, cv2.COLOR_RGB2GRAY)
gray = cv2.resize(gray, (640, 480))
ret, gray = cv2.threshold(gray, 120, 255, 0)
gray2 = gray.copy()
mask = np.zeros(gray.shape, np.uint8)

contours, hier = cv2.findContours(gray, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
for cnt in contours:
    if 200 < cv2.contourArea(cnt) < 5000:
        cv2.drawContours(gray2, [cnt], 0, (0, 255, 0), 2)
        cv2.drawContours(mask, [cnt], 0, (0, 255, 0), -1)

cv2.bitwise_not(gray2, gray2, mask)

y = 250
x = 200
r, g, b = gray2[y, x]
print r, g, b
It works if I check the RGB value of the colored image with the line r, g, b = template[y,x]; however, as soon as I want the RGB value of the black/white image, the following error message appears:

  File "C:\Python27\Lib\site-packages\myprogram.py", line 22, in <module>
    r, g, b = gray2[y,x]
TypeError: 'numpy.uint8' object is not iterable

I assume this means there are not enough objects in the array, and that the problem lies somewhere in the conversion from color to B/W.
Your "gray" variable is a 2D matrix (because of the grayscale), so when you ask for gray2[x,y], it returns a single unsigned integer in 8 bit (np.unint8) corresponding to the grayscale value of the [x,y] pixel.
When you do : r,g,b =gray2[x,y], you expect 3 values (r, g, b), but it returns only 1 so you get an error.
You should precise what you are trying to do, as asking for the RGB values of a grayscale image makes no sense.
Please try to use just ONE channel to get the result instead of 3 channels,
for example: r = gray2[x,y]
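A small sketch to illustrate (the file name is a placeholder):

import cv2

gray2 = cv2.imread('binary.png', cv2.IMREAD_GRAYSCALE)
print(gray2.shape)   # (height, width) - no channel axis
v = gray2[250, 200]  # a single uint8; 0 or 255 after thresholding
print(v)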
I'm stuck on a problem with the Python wrapper for OpenCV.
I have this function that returns 1 if the number of black pixels is greater than a threshold:
def checkBlackPixels(img, threshold):
    width = img.width
    height = img.height
    nchannels = img.nChannels
    step = img.widthStep
    dimtot = width * height
    data = img.imageData

    black = 0
    for i in range(0, height):
        for j in range(0, width):
            r = data[i*step + j*nchannels + 0]
            g = data[i*step + j*nchannels + 1]
            b = data[i*step + j*nchannels + 2]
            if r == 0 and g == 0 and b == 0:
                black = black + 1

    if black >= threshold * dimtot:
        return 1
    else:
        return 0
The loop (which scans each pixel of a given image) works well when the input is an RGB image, but if the input is a single-channel image I get this error:
for j in range( width ):
TypeError: Nested sequences should have 2 or 3 dimensions
The input single-channel image (called 'rg' in the next example) is taken from an RGB image called 'src', processed with cvSplit and then cvAbsDiff:
cvSplit( src, r, g, b, 'NULL' )
rg = cvCreateImage( cvGetSize(src), src.depth, 1 ) # R - G
cvAbsDiff( r, g, rg )
I've also noticed that the problem comes from the difference image obtained from cvSplit...
Can anyone help me?
Thank you
widthStep and imageData are no longer valid attributes of the IplImage object. Thus, the correct way to loop through each pixel and grab its color value would be:

for i in range(0, height):
    for j in range(0, width):
        pixel_value = cv.Get2D(img, i, j)
        # Since OpenCV loads color images in BGR, not RGB
        b = pixel_value[0]
        g = pixel_value[1]
        r = pixel_value[2]
        # cv.Set2D(result, i, j, value)
        # ^ to store the results of per-pixel
        #   operations at (i, j) in a 'result' image

Hope you find this useful.
What version of OpenCV and which Python wrapper are you using? I recommend using OpenCV 2.1 or 2.2 with the Python interface that comes with the library.
I also recommend that you avoid scanning pixels manually and instead use the low-level functions provided by OpenCV (see the Operations on Arrays part of the OpenCV docs). That way is less error-prone and much faster.
If you want to count the number of black pixels in a single-channel image or in a color image with the COI set (so that the color image is effectively treated as a single-channel one), you could use the function CountNonZero:
def countBlackPixels(grayImg):
    (w, h) = cv.GetSize(grayImg)
    size = w * h
    return size - cv.CountNonZero(grayImg)
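Your original checkBlackPixels can then be rewritten on top of this; a sketch using the same old-style cv interface as above:

def checkBlackPixels(grayImg, threshold):
    # count black pixels without a manual per-pixel loop
    (w, h) = cv.GetSize(grayImg)
    size = w * h
    black = size - cv.CountNonZero(grayImg)
    return 1 if black >= threshold * size else 0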