Comparing two images/pictures, and marking the difference - python

I am learning to compare two images/pictures. I found the post Compare two images the python/linux way very useful, and I have some questions regarding the technique.
Question 1:
The post shows ways to compare 2 pictures/images. Probably the easiest way is:
from PIL import Image
from PIL import ImageChops
im1 = Image.open("file1.jpg")
im2 = Image.open("file2.jpg")
diff = ImageChops.difference(im2, im1).getbbox()
print(diff)
When I have 2 look-alike pictures and run the above, it gives this result:
(389, 415, 394, 420)
It’s the position on the picture where the difference between the 2 pictures lies. So my question is: would it be possible to mark the difference on the picture (for example, draw a circle)?
Question 2:
import math, operator
from functools import reduce
from PIL import Image

def compare(file1, file2):
    image1 = Image.open(file1)
    image2 = Image.open(file2)
    h1 = image1.histogram()
    h2 = image2.histogram()
    rms = math.sqrt(reduce(operator.add, map(lambda a, b: (a - b) ** 2, h1, h2)) / len(h1))
    return rms

if __name__ == '__main__':
    import sys
    file1 = ('c:\\a.jpg')  # added line
    file2 = ('c:\\b.jpg')  # added line
    file1, file2 = sys.argv[1:]
    print(compare(file1, file2))
When I run above, it gives an error “ValueError: need more than 0 values to unpack”, and the problem lies in this line:
file1, file2 = sys.argv[1:]
How can I correct it? I also tried the line below, but it doesn't work either.
print(compare('c:\\a.jpg', 'c:\\b.jpg'))
Update
Added question following Matt's help.
It can draw a rectangle to mark the difference on the two images/pictures. But when the two images/pictures look generally the same with small difference spots spread around, it draws one big rectangle covering the area that contains all of the spots. Is there a way to mark the differences individually?
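One way to mark the spots individually (a sketch, not from the original answers) is to split the difference mask into connected regions and compute one bounding box per region. A minimal pure-Python flood-fill sketch, assuming a 2D boolean mask with True where pixels differ (e.g. derived from ImageChops.difference):

```python
from collections import deque

def difference_regions(mask):
    """Group True cells of a 2D boolean mask into connected regions
    and return one bounding box (left, top, right, bottom) per region."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    boxes = []
    for y in range(rows):
        for x in range(cols):
            if mask[y][x] and not seen[y][x]:
                # Breadth-first flood fill over 4-connected neighbours.
                queue = deque([(y, x)])
                seen[y][x] = True
                min_x = max_x = x
                min_y = max_y = y
                while queue:
                    cy, cx = queue.popleft()
                    min_x, max_x = min(min_x, cx), max(max_x, cx)
                    min_y, max_y = min(min_y, cy), max(max_y, cy)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1)):
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # PIL-style box: right/bottom are exclusive.
                boxes.append((min_x, min_y, max_x + 1, max_y + 1))
    return boxes
```

Each returned box can then be passed to draw.rectangle individually, instead of drawing one big rectangle around everything.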

Regarding your first question:
from PIL import ImageDraw
draw = ImageDraw.Draw(im2)
draw.rectangle(diff)
im2.show()
Regarding your second question:
The error states that sys.argv does not contain enough values to be assigned to file1 and file2. You need to pass the names of the two files you want to compare to your Python script (the variable sys.argv contains the name of your script and all the command-line parameters):
python name_of_your_script.py file1.jpg file2.jpg
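As a side note (not part of the original answer), the histogram RMS from Question 2 can be written without the global reduce, which moved to functools in Python 3. A sketch over plain lists, assuming h1 and h2 come from Image.histogram() and have equal length:

```python
import math

def histogram_rms(h1, h2):
    """Root-mean-square difference of two equal-length histograms."""
    squared_diffs = [(a - b) ** 2 for a, b in zip(h1, h2)]
    return math.sqrt(sum(squared_diffs) / len(h1))
```

Identical histograms give 0.0, and larger values mean the two images differ more.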

Question 1: ImageDraw
from PIL import Image, ImageDraw

image = Image.open("x.png")
draw = ImageDraw.Draw(image)
# x, y is the circle's centre and r its radius
draw.ellipse((x - r, y - r, x + r, y + r))
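The x, y and r above are the circle's centre and radius, which the snippet assumes you already have. One way to obtain them (an assumption, not part of the original answer) is to derive them from the bbox returned by ImageChops.difference:

```python
import math

def bbox_to_circle(bbox):
    """Given a (left, top, right, bottom) bounding box, return the
    centre (x, y) and radius r of a circle that encloses it."""
    left, top, right, bottom = bbox
    x = (left + right) / 2
    y = (top + bottom) / 2
    # Half the diagonal reaches the box corners, so the circle encloses the box.
    r = math.hypot(right - left, bottom - top) / 2
    return x, y, r
```

With the example bbox (389, 415, 394, 420) from Question 1, this gives a circle centred at (391.5, 417.5).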


How to calculate unnested watersheds in GRASS GIS?

I am running into a few issues using the GRASS GIS module r.accumulate while running it in Python. I use the module to calculate subwatersheds for over 7000 measurement points. Unfortunately, the output of the algorithm is nested, so all subwatersheds overlap each other. Running the r.accumulate subwatershed module takes roughly 2 minutes for either one or multiple points; I assume the bottleneck is loading the direction raster.
I was wondering if there is an unnested variant available in GRASS GIS and, if not, how to overcome the bottleneck of loading the direction raster every time you call r.accumulate. Below is a code snippet of what I have tried so far (resulting in a nested variant):
locations = VectorTopo('locations', mapset='PERMANENT')
locations.open('r')
points = []
for i in range(len(locations)):
    points.append(locations.read(i + 1).coords())
for j in range(0, len(points), 255):
    output = "watershed_batch_{}@Watersheds".format(j)
    gs.run_command("r.accumulate", direction='direction@PERMANENT', subwatershed=output, overwrite=True, flags="r", coordinates=points[j:j+255])
    gs.run_command('r.stats', flags="ac", input=output, output="stat_batch_{}.csv".format(j), overwrite=True)
Any thoughts or ideas are very welcome.
I already replied to your email, but now I see your Python code and better understand your "overlapping" issue. In this case, you don't want to feed individual outlet points one at a time. You can just run
r.accumulate direction=direction@PERMANENT subwatershed=output outlet=locations
r.accumulate's outlet option can handle multiple outlets and will generate non-overlapping subwatersheds.
The answer provided via email was very useful. To share it, I have provided the code below to do an unnested subwatershed calculation. A small remark: I had to feed the coordinates in batches, as the list of coordinates exceeded the maximum command-line length Windows can handle.
Thanks to @Huidae Cho, the call to r.accumulate to calculate subwatersheds and the longest flow path can now be done in one call instead of two separate calls.
The output is unnested basins, where the larger subwatersheds are separated from the smaller subbasins instead of being split up into them. This has to do with the fact that the output is in raster format, where each cell can represent only one basin.
gs.run_command('g.mapset', mapset='Watersheds')
gs.run_command('g.region', rast='direction@PERMANENT')
StationIds = list(gs.vector.vector_db_select('locations_snapped_new', columns='StationId')["values"].values())
XY = list(gs.vector.vector_db_select('locations_snapped_new', columns='x_real,y_real')["values"].values())
for j in range(0, len(XY), 255):
    output_ws = "watershed_batch_{}@Watersheds".format(j)
    output_lfp = "lfp_batch_{}@Watersheds".format(j)
    output_lfp_unique = "lfp_unique_batch_{}@Watersheds".format(j)
    gs.run_command("r.accumulate", direction='direction@PERMANENT', subwatershed=output_ws, flags="ar", coordinates=XY[j:j+255], lfp=output_lfp, id=StationIds[j:j+255], id_column="id", overwrite=True)
    gs.run_command("r.to.vect", input=output_ws, output=output_ws, type="area", overwrite=True)
    gs.run_command("v.extract", input=output_lfp, where="1 order by id", output=output_lfp_unique, overwrite=True)
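The range(0, len(XY), 255) loop above batches the coordinates to stay under the Windows command-line length limit. The same chunking can be factored into a small helper (a generic sketch, not part of the GRASS API):

```python
def chunked(items, size):
    """Yield consecutive slices of `items` with at most `size` elements each."""
    for start in range(0, len(items), size):
        yield items[start:start + size]
```

Each yielded batch (e.g. from chunked(XY, 255)) can then drive one r.accumulate call.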
To export the unique watersheds I used the following code. I had to transform the longest_flow_path to points, as some of the longest flow paths intersected the corner boundary of the neighbouring watershed; those longest flow paths were thus not fully within their subwatershed. See the image below, where the red line (longest flow path) touches the subwatershed boundary:
gs.run_command('g.mapset', mapset='Watersheds')
lfps = gs.list_grouped('vect', pattern='lfp_unique_*')['Watersheds']
ws = gs.list_grouped('vect', pattern='watershed_batch*')['Watersheds']
files = np.stack((lfps, ws)).T
#print(files)
for file in files:
    print(file)
    ids = list(gs.vector.vector_db_select(file[0], columns="id")["values"].values())
    for idx in ids:
        idx = int(idx[0])
        expr = f'id="{idx}"'
        gs.run_command('v.extract', input=file[0], where=expr, output="tmp_lfp", overwrite=True)
        gs.run_command("v.to.points", input="tmp_lfp", output="tmp_lfp_points", use="vertex", overwrite=True)
        gs.run_command('v.select', ainput=file[1], binput="tmp_lfp_points", output="tmp_subwatersheds", overwrite=True)
        gs.run_command('v.db.update', map="tmp_subwatersheds", col="value", value=idx)
        gs.run_command('g.mapset', mapset='vector_out')
        gs.run_command('v.dissolve', input="tmp_subwatersheds@Watersheds", output="subwatersheds_{}".format(idx), col="value", overwrite=True)
        gs.run_command('g.mapset', mapset='Watersheds')
        gs.run_command("g.remove", flags="f", type="vector", name="tmp_lfp,tmp_subwatersheds")
I ended up with a vector for each subwatershed.

Trying to run a program in PyCharm

I have to test out some code in PyCharm (at the bottom of the question), but I cannot figure out how to run it in PyCharm without getting this error:
usage: color.py [-h] -i IMAGE
color.py: error: the following arguments are required: -i/--image
I know that if I were using IDLE I would have written this:
/Users/syedrishad/Downloads/python-project-color-detection/color_detection.py -i
Users/syedrishad/Downloads/python-project-color-detection/colorpic.jpg
But I don't know how to run it in PyCharm.
I use a Mac, and for some reason I always have to put the full path name or it doesn't work (if that makes a difference).
This program makes it so if I double click on a part of the image, it shows the exact color name. All the color names are stored in this file:
/Users/syedrishad/Downloads/python-project-color-detection/colors.csv
The actual code is here:
import cv2
import numpy as np
import pandas as pd
import argparse

# Creating argument parser to take image path from command line
ap = argparse.ArgumentParser()
ap.add_argument('-i', '--image', required=True, help="Image Path")
args = vars(ap.parse_args())
img_path = args['image']

# Reading the image with opencv
img = cv2.imread(img_path)

# declaring global variables (are used later on)
clicked = False
r = g = b = xpos = ypos = 0

# Reading csv file with pandas and giving names to each column
index = ["color", "color_name", "hex", "R", "G", "B"]
csv = pd.read_csv('colors.csv', names=index, header=None)

# function to calculate minimum distance from all colors and get the most matching color
def getColorName(R, G, B):
    minimum = 10000
    for i in range(len(csv)):
        d = abs(R - int(csv.loc[i, "R"])) + abs(G - int(csv.loc[i, "G"])) + abs(B - int(csv.loc[i, "B"]))
        if d <= minimum:
            minimum = d
            cname = csv.loc[i, "color_name"]
    return cname

# function to get x,y coordinates of mouse double click
def draw_function(event, x, y, flags, param):
    if event == cv2.EVENT_LBUTTONDBLCLK:
        global b, g, r, xpos, ypos, clicked
        clicked = True
        xpos = x
        ypos = y
        b, g, r = img[y, x]
        b = int(b)
        g = int(g)
        r = int(r)

cv2.namedWindow('image')
cv2.setMouseCallback('image', draw_function)

while True:
    cv2.imshow("image", img)
    if clicked:
        # cv2.rectangle(image, startpoint, endpoint, color, thickness); thickness=-1 fills the entire rectangle
        cv2.rectangle(img, (20, 20), (750, 60), (b, g, r), -1)
        # Creating text string to display ( Color name and RGB values )
        text = getColorName(r, g, b) + ' R=' + str(r) + ' G=' + str(g) + ' B=' + str(b)
        # cv2.putText(img, text, start, font(0-7), fontScale, color, thickness, lineType)
        cv2.putText(img, text, (50, 50), 2, 0.8, (255, 255, 255), 2, cv2.LINE_AA)
        # For very light colours we will display text in black colour
        if r + g + b >= 600:
            cv2.putText(img, text, (50, 50), 2, 0.8, (0, 0, 0), 2, cv2.LINE_AA)
        clicked = False
    # Break the loop when user hits 'esc' key
    if cv2.waitKey(20) & 0xFF == 27:
        break

cv2.destroyAllWindows()
If you know how to run it, can you please tell me? I have been searching for an answer but have come up empty-handed.
As @Yves Daoust mentioned, you basically have two options:
A) Either change the code you are testing to provide the required argument or
B) Use Run->Run...->Edit Configurations... to provide the argument as you would from the command line.
Let's examine the options you have in more detail:
A) The easiest way would be to provide the path to the image you want to open like this:
# replace img_path = args['image'] with
img_path = 'path/to/the/image'
which has the advantage of being extremely easy, but it breaks the argparse functionality.
A more versatile solution would be to provide a default parameter and edit it each time you want to open a different image.
import argparse
# Add a global variable here. It will provide the argument if no other is given (via `Run->Run...->Edit Configurations...`)
IMAGE_PATH = 'path/to/image'
#Creating argument parser to take image path from command line
ap = argparse.ArgumentParser()
ap.add_argument('-i', '--image', required=False, help="Image Path", default=IMAGE_PATH)  # the option must not be required, otherwise the default never applies
which has the advantage of keeping the argparse functionality intact, if you are interested in that.
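You can check the required/default interaction in isolation. A minimal stand-alone sketch (using a hypothetical path, no PyCharm involved) shows that the default only applies when the option is not required:

```python
import argparse

# Hypothetical fallback path; replace with a real image path.
IMAGE_PATH = 'path/to/image'

ap = argparse.ArgumentParser()
# required is left off (defaults to False), so the default can take effect.
ap.add_argument('-i', '--image', help="Image Path", default=IMAGE_PATH)

# No command-line arguments given: the default is used.
args = vars(ap.parse_args([]))
print(args['image'])  # path/to/image
```

Passing `-i some.jpg` on the command line (or in PyCharm's Parameters: field) would override the default as usual.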
B) This option means you provide the parameters yourself, just like you would in the console. The parameters go in the Parameters: field.
For example you could pass:
--image="path/to/image"
PyCharm provides an Apply button for the changes you made, which in this case means the parameters stay stored for this script's run configuration.
Hope this clarifies things a bit.
You have to go to Run > Run and then select the program you want to run. If it still gives an error, check the file path; otherwise, you have an error in your code. I hope that answers your question.

How to tell if matchTemplate succeeds? [duplicate]

I'm attempting to find an image in another.
im = cv.LoadImage('1.png', cv.CV_LOAD_IMAGE_UNCHANGED)
tmp = cv.LoadImage('e1.png', cv.CV_LOAD_IMAGE_UNCHANGED)
w,h = cv.GetSize(im)
W,H = cv.GetSize(tmp)
width = w-W+1
height = h-H+1
result = cv.CreateImage((width, height), 32, 1)
cv.MatchTemplate(im, tmp, result, cv.CV_TM_SQDIFF)
print result
When I run this, everything executes just fine and no errors get thrown. But I'm unsure what to do from here. The docs say that result stores "A map of comparison results". I tried printing it, but it only gives me width, height, and step.
How do I use this information to find whether or not one image is in another/where it is located?
This might work for you! :)
import cv2
import numpy as np

def FindSubImage(im1, im2):
    needle = cv2.imread(im1)
    haystack = cv2.imread(im2)
    # matchTemplate expects the larger image first, then the template
    result = cv2.matchTemplate(haystack, needle, cv2.TM_CCOEFF_NORMED)
    y, x = np.unravel_index(result.argmax(), result.shape)
    return x, y
CCOEFF_NORMED is just one of many comparison methods.
See: http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
for full list.
Not sure if this is the best method, but it is fast and works just fine for me! :)
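The np.unravel_index step above just converts the flat argmax index back into a (row, column) pair; for a 2D result this is plain integer division (a stdlib sketch, independent of NumPy):

```python
def unravel_2d(flat_index, num_cols):
    """Convert a flat index into (row, col) for a row-major 2D array."""
    return divmod(flat_index, num_cols)

# Locate the best match in a row-major similarity map:
scores = [0.1, 0.2, 0.9, 0.3, 0.4, 0.5]  # 2 rows x 3 cols, flattened
best = max(range(len(scores)), key=scores.__getitem__)
y, x = unravel_2d(best, 3)
print(y, x)  # 0 2
```

The resulting (x, y) is the top-left corner of where the template best fits inside the larger image.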
MatchTemplate returns a similarity map, not a location.
You can then use this map to find a location.
If you are only looking for a single match you could do something like this to get a location:
minVal,maxVal,minLoc,maxLoc = cv.MinMaxLoc(result)
Then minLoc has the location of the best match and minVal describes how well the template fits. You need to come up with a threshold for minVal to determine whether you consider this result a match or not.
If you are looking for more than one match per image, you need to use algorithms like non-maximum suppression.

Question about blending 2 images together

I am looking at the sample code from the link below (Example 2: Italy Street Image with Wave Style).
https://github.com/dipanjanS/practical-machine-learning-with-python/blob/master/notebooks/Ch12_Deep_Learning_for_Computer_Vision/Neural%20Style%20Transfer%20Results%20-%20High%20Resolution.ipynb
Also, looking at input images from the link below.
https://github.com/dipanjanS/practical-machine-learning-with-python/tree/master/notebooks/Ch12_Deep_Learning_for_Computer_Vision/results/italy%20street
My question is this. Why is iteration_1.png to iteration_20.png being read into the code?
is_content_image = io.imread('results/italy street/italy_street.jpg')
is_style_image = io.imread('results/italy street/style1.png')
is_iter1 = io.imread('results/italy street/style_transfer_result_italy_street_gatys_at_iteration_1.png')
is_iter5 = io.imread('results/italy street/style_transfer_result_italy_street_gatys_at_iteration_5.png')
is_iter10 = io.imread('results/italy street/style_transfer_result_italy_street_gatys_at_iteration_10.png')
is_iter15 = io.imread('results/italy street/style_transfer_result_italy_street_gatys_at_iteration_15.png')
is_iter20 = io.imread('results/italy street/style_transfer_result_italy_street_gatys_at_iteration_20.png')
It seems like you would read in italy_street.jpg and style1.png, and the code should do the work of blending the images together, so that after several iterations they are merged into a combination of the two images. This seems more intuitive than doing 'io.imread' for several images and then saving each image after each iteration. Am I missing something basic here? Or is there a better demo out there which shows how to blend two images together? Thanks for the look.

im.getcolors() returns None

I am using a simple code to compare an image to a desktop screenshot through the function getcolors() from PIL. When I open an image, it works:
im = Image.open('sprites\Bowser\BowserOriginal.png')
current_sprite = im.getcolors()
print(current_sprite)
However, using either pyautogui.screenshot() or ImageGrab.grab() for the screenshot, my code returns None. I have tried using the RGB conversion as shown here: Cannot use im.getcolors.
Additionally, even when I save the screenshot to a .png, it STILL returns None.
i = pyautogui.screenshot('screenshot.png')
f = Image.open('screenshot.png')
im = f.convert('RGB')
search_image = im.getcolors()
print(search_image)
First time posting, help is much appreciated.
Pretty old question, but for those who see this now:
Image.getcolors() takes a parameter: "maxcolors – Maximum number of colors." (from the docs here).
The maximum number of colors an image can have equals the number of pixels it contains.
For example, a 50*60px image can have at most 3,000 colors.
To translate it into code, try this:
from PIL import Image

# Open the image.
img = Image.open("test.jpg")
# Set the maxcolors number to the image's pixel count.
colors = img.getcolors(img.size[0] * img.size[1])
If you check the docs, getcolors returns None if the number of colors in the image is greater than the maxcolors parameter, which defaults to 256.
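getcolors is essentially a colour-frequency count with a cutoff. The behaviour can be mimicked with a plain stdlib sketch over a list of pixel tuples (an illustration of the semantics, not Pillow's actual implementation):

```python
from collections import Counter

def get_colors(pixels, maxcolors=256):
    """Return (count, color) pairs like PIL's Image.getcolors,
    or None if there are more than `maxcolors` distinct colors."""
    counts = Counter(pixels)
    if len(counts) > maxcolors:
        return None
    return [(n, color) for color, n in counts.items()]
```

A screenshot easily contains thousands of distinct RGB values, which is why the default maxcolors of 256 yields None for it while a small sprite works fine.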
