"AttributeError: 'PixelAccess' object has no attribute 'mode'" when merging image - python

I'm learning Python 3 in an academic course, and we are currently studying the PIL library.
In a certain task I need to take a couple of images and run some tests on them; one of the tests is to merge a grayscale image into an RGB image using Image.merge().
I have copied the part that is relevant to my problem here, but I might be wrong and have copied too little:
imag1 = Image.open(im1_file)
imag2 = Image.open(im2_file)
# convert the images to grayscale
img1 = imag1.convert('L')
img2 = imag2.convert('L')
# calculate the images' histograms
mat1 = img1.load()
mat2 = img2.load()
hist1 = img1.histogram()
hist2 = img2.histogram()
# create lists for bright pixels in the images
hist1count = hist1[128:]
hist2count = hist2[128:]
sum1 = sum(hist1count)
sum2 = sum(hist2count)
# calculate the mean brightness for each image in those bright pixels
mean1 = sum1/len(hist1count)
mean2 = sum2/len(hist2count)
# compare the images' total number of bright pixels and assign the image
if sum1 > sum2:
    mat_saved = mat1
    img_saved = img1
elif sum2 > sum1:
    mat_saved = mat2
    img_saved = img2
# and the mean brightness of the image with the larger amount of bright pixels to variables for later use
if mean1 > mean2:
    rmean_saved = mean1
elif mean2 > mean1:
    rmean_saved = mean2
else:
    r_mean_saved = Image.new('L', (w, h), 0)
# create image for the new red channel in the output image
new_img = img_saved.copy()
w, h = new_img.size
new_mat = new_img.load()
# create a red rectangle border on the selected output image
for x in range(w):
    for y in range(h):
        new_mat[x, 0] = rmean_saved
        new_mat[x, h-1] = rmean_saved
        new_mat[0, y] = rmean_saved
        new_mat[w-1, y] = rmean_saved
# merge the red channel image with blue and green channel images to create the output image
img_mixed = Image.merge("RGB", (new_mat, mat_saved, mat_saved))
It gives me back an error which says:
Traceback (most recent call last):
  File "./Root/src/main.py", line 103, in <module>
    c, selected_image, marked_img = compare_and_mark_images(im1_file, im2_file)
  File "./Root/src/main.py", line 89, in compare_and_mark_images
    img_mixed = Image.merge("RGB", (new_mat, mat_saved, mat_saved))
  File "/usr/lib/python3/dist-packages/PIL/Image.py", line 2118, in merge
    if im.mode != getmodetype(mode):
AttributeError: 'PixelAccess' object has no attribute 'mode'
Note: I think the code works fine based on small tests I did while writing; the problem starts in the last line, the merge() function.
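For reference, Image.merge() expects actual Image objects in mode 'L', while load() returns PixelAccess objects - which is exactly what the traceback complains about. A minimal sketch of a working call (using freshly created images as stand-ins, since the original files aren't available):

```python
from PIL import Image

w, h = 4, 4
red = Image.new('L', (w, h), 200)    # stand-in for new_img
gray = Image.new('L', (w, h), 100)   # stand-in for img_saved

# pass the Image objects themselves, not the result of .load()
mixed = Image.merge('RGB', (red, gray, gray))
print(mixed.mode, mixed.getpixel((0, 0)))
```

Applied to the code above, that would mean merging (new_img, img_saved, img_saved) rather than (new_mat, mat_saved, mat_saved).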

Overlaying a translucent image over another image python

I am trying to overlay a translucent B&W image over a coloured image, but none of the sample code I've found on this website has worked so far ;-;
At this point I'm just trying to implement the basic functionality of overlaying one image over another, but it keeps reporting errors.
# import the opencv library
import cv2
import numpy as np

def canny(frame):
    # Convert to grayscale
    img_gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    max_value = np.max(img_gray)
    threshold_value = int(max_value * 0.75)
    ret, thresh = cv2.threshold(img_gray, threshold_value, 255, cv2.THRESH_BINARY)
    # extract alpha channel from foreground image as mask and make 3 channels
    alpha = frame[:,:,3]
    alpha = cv2.merge([alpha,alpha,alpha])
    # extract bgr channels from foreground image
    front = frame[:,:,0:3]
    # blend the two images using the alpha channel as controlling mask
    result = np.where(alpha==(0,0,0), thresh, front)
    # Display Canny Edge Detection Image
    cv2.imshow('Canny Edge Detection', result)

# define a video capture object
vid = cv2.VideoCapture("videoplayback.mp4")
while(True):
    # Capture the video frame by frame
    ret, frame = vid.read()
    canny(frame)
    # the 'q' button is set as the quitting button;
    # you may use any desired button of your choice
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
# After the loop release the cap object
vid.release()
# Destroy all the windows
cv2.destroyAllWindows()
As of right now, the error is:
Traceback (most recent call last):
  File "edge.py", line 34, in <module>
    canny(frame)
  File "edge.py", line 13, in canny
    alpha = frame[:,:,3]
IndexError: index 3 is out of bounds for axis 2 with size 3
When I change it to 2, it reports:
Traceback (most recent call last):
  File "edge.py", line 34, in <module>
    canny(frame)
  File "edge.py", line 20, in canny
    result = np.where(alpha==(0,0,0), thresh, front)
  File "<__array_function__ internals>", line 180, in where
ValueError: operands could not be broadcast together with shapes (360,480,3) (360,480) (360,480,3)
Please let me know how I can fix this, or if there is another way to write this code ;-;
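Two things seem to be going on: frames decoded from an mp4 are 3-channel BGR, so there is no frame[:,:,3] alpha channel to extract (hence the IndexError), and thresh from cv2.threshold is single-channel while front is 3-channel, so np.where cannot broadcast them together (hence the ValueError). A NumPy-only sketch of the shape fix, using stand-in arrays (cv2.cvtColor(thresh, cv2.COLOR_GRAY2BGR) achieves the same channel expansion):

```python
import numpy as np

# stand-ins for the arrays inside canny(): a 3-channel frame and a 1-channel threshold map
front = np.full((360, 480, 3), 7, np.uint8)
thresh = np.zeros((360, 480), np.uint8)
mask = np.zeros((360, 480, 3), np.uint8)   # hypothetical mask, since the mp4 frame has no alpha

# give thresh a channel axis so all three operands are (360, 480, 3)
thresh3 = np.repeat(thresh[:, :, None], 3, axis=2)
result = np.where(mask == 0, thresh3, front)
print(result.shape)
```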

Highlight shape differences between two images with color change

I am trying to make my code more robust compared to my first revision. The goal is to compare image A and image B to generate a single final image, C. Currently I am working on showing the differences between images composed of black lines; in this case, that would be images A and B. I have a working method with the pre-processing done (resizing, noise reduction, etc.). The code I developed to show the differences (image C) is shown below:
np_image_A = np.array(image_A)
np_image_B = np.array(image_B)
# Set the green and red channels respectively to 0. Leaves a blue image
np_image_A[:, :, 1] = 0
np_image_A[:, :, 2] = 0
# Set the blue channels to 0.
np_image_B[:, :, 0] = 0
# Add the np images after color modification
overlay_image = cv2.add(np_image_A, np_image_B)
I don't feel this is robust enough, and it may lead to issues down the line. I want a method that shows the differences between images A and B in a single image, where differences from image A are assigned one color and differences from image B another (such as blue and red, with black representing areas that are the same). This is highlighted in the image below:
To remedy this, I received some help from StackOverflow and now have a method that uses masking and merging in OpenCV. The issue I have found is that only additive changes are shown; if an item is removed, it is not shown in the difference image.
Here is the updated code that gets me part of the way to the solution I am seeking. The issue with this code is that it produces what is found in image D, not image C. I tried essentially running this block of code twice, swapping image A and image B, but the output was mangled for some reason.
# load image A as color image
img = cv2.imread('1a.png')
# load A and B as grayscale
imgA = cv2.imread('1a.png',0)
imgB = cv2.imread('1b.png',0)
# invert grayscale images for subtraction
imgA_inv = cv2.bitwise_not(imgA)
imgB_inv = cv2.bitwise_not(imgB)
# subtract the original (A) for the new version (B)
diff = cv2.subtract(imgB_inv, imgA_inv)
# split color image A into blue,green,red color channels
b,g,r = cv2.split(img)
# merge channels back into image, subtracting the diff from
# the blue and green channels, leaving the shape of diff red
res = cv2.merge((b-diff,g-diff,r))
# display result
cv2.imshow('Result',res)
cv2.waitKey(0)
cv2.destroyAllWindows()
The result that I am looking for is image C, but currently I can only achieve image D with the revised code.
Edit: Here are the test images A and B for use.
You're almost there, but you need to create two separate diffs. One diff represents the black pixels that are in A but not in B, and the other diff represents the black pixels that are in B but not in A.
Result:
import cv2
import numpy as np
# load A and B as grayscale
imgA = cv2.imread('1a.png',0)
imgB = cv2.imread('1b.png',0)
# invert grayscale images for subtraction
imgA_inv = cv2.bitwise_not(imgA)
imgB_inv = cv2.bitwise_not(imgB)
# create two diffs, A - B and B - A
diff1 = cv2.subtract(imgB_inv, imgA_inv)
diff2 = cv2.subtract(imgA_inv, imgB_inv)
# create a combined image of the two inverted
combined = cv2.add(imgA_inv, imgB_inv)
combined_inv = cv2.bitwise_not(combined)
# convert the combined image back to RGB,
# so that we can modify individual color channels
combined_rgb = cv2.cvtColor(combined_inv, cv2.COLOR_GRAY2RGB)
# split combined image into blue,green,red color channels
b,g,r = cv2.split(combined_rgb)
# merge channels back into image, adding the first diff to
# the red channel and the second diff to the blue channel
res = cv2.merge((b+diff2,g,r+diff1))
# display result
cv2.imshow('Result',res)
cv2.waitKey(0)
cv2.destroyAllWindows()

OpenCV Python: Detecting lines only in ROI

I'd like to detect lines inside a region of interest. My output image should display the original image and the detected lines in the selected ROI. So far it has not been a problem to find lines in the original image or select a ROI but finding lines only inside the ROI did not work. My MWE reads an image, converts it to grayscale and lets me select a ROI but gives an error when HoughLinesP wants to read roi.
import cv2
import numpy as np

img = cv2.imread('example.jpg',1)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Select ROI
fromCenter = False
roi = cv2.selectROI(gray, fromCenter)
# Crop ROI
roi = img[int(roi[1]):int(roi[1]+roi[3]), int(roi[0]):int(roi[0]+roi[2])]
# Find lines
minLineLength = 100
maxLineGap = 30
lines = cv2.HoughLinesP(roi,1,np.pi/180,100,minLineLength,maxLineGap)
for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(img,(x1,y1),(x2,y2),(237,149,100),2)
cv2.imshow('Image',img)
cv2.waitKey(0) & 0xFF
cv2.destroyAllWindows()
The console shows:
lines = cv2.HoughLinesP(roi,1,np.pi/180,100,minLineLength,maxLineGap)
error: OpenCV(3.4.1) C:\Miniconda3\conda-bld\opencv-suite_1533128839831\work\modules\imgproc\src\hough.cpp:441: error: (-215) image.type() == (((0) & ((1 << 3) - 1)) + (((1)-1) << 3)) in function cv::HoughLinesProbabilistic
My assumption is that roi does not have the correct format. I am using Python 3.6 with Spyder 3.2.8.
Thanks for any help!
The function cv2.HoughLinesP expects a single-channel image, so the cropped region should be taken from the gray image instead; that removes the error:
# Crop the image
roi = list(map(int, roi)) # Convert to int for simplicity
cropped = gray[roi[1]:roi[1]+roi[3], roi[0]:roi[0]+roi[2]]
Note that I'm changing the output name from roi to cropped, because you're still going to need the roi box. The points x1, x2, y1, and y2 are pixel positions in the cropped image, not the full image. To get the lines drawn correctly on the full image, you can just add the upper-left corner position from roi.
Here's the for loop with relevant edits:
# Find lines
minLineLength = 100
maxLineGap = 30
lines = cv2.HoughLinesP(cropped, 1, np.pi/180, 100, minLineLength=minLineLength, maxLineGap=maxLineGap)
for x in range(0, len(lines)):
    for x1,y1,x2,y2 in lines[x]:
        cv2.line(img, (x1+roi[0], y1+roi[1]), (x2+roi[0], y2+roi[1]), (237,149,100), 2)

TypeError: 'int' object is not subscriptable?

When I run my PIL code, it raises this error:
from PIL import Image,ImageDraw, ImageColor, ImageChops
# Load images
im1 = Image.open('im1.png')
im2 = Image.open('im2.png')
# Flood fill white edges of image 2 with black
seed = (0, 0)
black = ImageColor.getrgb("black")
ImageDraw.floodfill(im2, seed, black, thresh=127)
# Now select lighter pixel of image1 and image2 at each pixel location and
result = ImageChops.lighter(im1, im2)
result.save('result.png')
the error is in my image processing:
Traceback (most recent call last):
  File "C:\Users\Martin Ma\Desktop\test\36\light_3_global\close_open\gray\main.py", line 96, in <module>
    ImageDraw.floodfill(im2, seed, black, thresh=127)
  File "E:\python\lib\site-packages\PIL\ImageDraw.py", line 346, in floodfill
    if _color_diff(value, background) <= thresh:
  File "E:\python\lib\site-packages\PIL\ImageDraw.py", line 386, in _color_diff
    return abs(rgb1[0]-rgb2[0]) + abs(rgb1[1]-rgb2[1]) + abs(rgb1[2]-rgb2[2])
TypeError: 'int' object is not subscriptable
how can I solve it? thanks a lot !
You have changed image type without thinking about the consequences. JPEG and PNG are fundamentally different beasts, and you need to be aware of that:
JPEG images are saved lossily, so your data will not generally be read back with the same values you wrote - this seems to shock everyone. People threshold an image so that all values above 127 go white and the rest go black, giving a true binary image; they then save it as JPEG and are amazed that, on reloading, the image has 78 colours despite having been thresholded.
JPEG images have all sorts of artefacts - chunky blocks of noise which will mess up your processing - especially if you look at saturation.
PNG images are often palettised where each pixel stores an index into a 256-colour palette, rather than an RGB triplet. Most operations will fail on palettised images because you are comparing an index with an RGB colour triplet.
PNG images are often greyscale - so there is only one channel and comparisons with RGB triplets will fail because the number of channels differs.
So, in answer to your question, I suspect your PNG image is palettised (especially likely when it only has 2 colours). You therefore need to convert it to RGB or maybe Luminance mode on opening:
im1 = Image.open('im1.png').convert('RGB')
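A quick way to confirm the diagnosis is to print im.mode before flood-filling: 'P' means palettised, and convert('RGB') gives _color_diff the triplets it expects. A small sketch with a freshly created palettised image (the asker's files aren't available):

```python
from PIL import Image

im = Image.new('P', (4, 4))            # palettised, as 2-colour PNGs often are
print(im.mode)                          # 'P'
rgb = im.convert('RGB')
print(rgb.mode, rgb.getpixel((0, 0)))   # 'RGB' and a 3-tuple pixel value
```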

OpenCV/Python: Error in cv2.add() to add a number to image

I'm using OpenCV/Python and I'm trying to add a number to image.
My code is:
import cv2
import numpy as np
import math
from matplotlib import pyplot as plt
img = cv2.imread('messi.jpg',0)
img2 = img
img2 = cv2.add(img2, np.uint8([50]))
I get the following error:
OpenCV Error: Assertion failed (type2 == CV_64F && (sz2.height == 1 || sz2.height == 4)) in cv::arithm_op, file C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\core\src\arithm.cpp, line 1989
Traceback (most recent call last):
  File "lab3_examples.py", line 27, in <module>
    img2 = cv2.add(img, np.uint8([50]))
cv2.error: C:\builds\master_PackSlaveAddon-win64-vc12-static\opencv\modules\core\src\arithm.cpp:1989: error: (-215) type2 == CV_64F && (sz2.height == 1 || sz2.height == 4) in function cv::arithm_op
The image I'm using is messi.jpg
Instead, if I use img2 = np.add(img2, np.uint8([50])), intensities that pass 255 wrap around modulo 256, e.g. 260 % 256 = 4, so the pixel's value is set to 4 instead of 255. As a result, white pixels are turned black!
Here is the faulty resulted image.
Any ideas please?
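The wraparound described above is ordinary uint8 modular arithmetic and can be reproduced with NumPy alone - a minimal sketch (cv2.add is the saturating alternative the answers use):

```python
import numpy as np

a = np.uint8([250])
wrapped = a + np.uint8(50)   # uint8 arithmetic wraps: 300 % 256 = 44
# emulate saturation: widen, add, clip, narrow again
saturated = np.clip(a.astype(np.int16) + 50, 0, 255).astype(np.uint8)
print(wrapped[0], saturated[0])
```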
In C++, saturate_cast<uchar>(...) is used for this purpose.
In Python, simply
img2 = cv2.add(img2, 50)
will do, if you want to increase the brightness of a grayscale image. If applied to a colour image, it will shift the colour balance. For a colour image, to preserve the balance, a good answer is the one by Alex, Bill Grates:
How to fast change image brightness with python + OpenCV?
One remark: the following part of the code was not necessary in my case (Python 3, OpenCV 4):
v[v > 255] = 255
v[v < 0] = 0
I would suggest convert the BGR image to HSV image:
hsv= cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
Then split the channels using:
h_channel, s_channel, v_channel = cv2.split(hsv)
Now adjust the v_channel (the value/brightness channel - adding to the hue channel would shift the colours rather than the brightness):
v_channel = cv2.add(v_channel, 20)  # ---You can try any other value as well; cv2.add saturates at 255---
Now merge the channels back together again:
merged = cv2.merge((h_channel , s_channel , v_channel ))
Finally convert the image back to BGR and display it:
Final_image = cv2.cvtColor(merged, cv2.COLOR_HSV2BGR)
cv2.imshow('Final output', Final_image)
You will see an enhanced or a dimmed image depending on the value you add.
Hope it helps.... :D
