How does scipy.misc.toimage change the image domain? - python

There's some code that I've been working on, and I came across a part of it that I don't understand. I'd really appreciate it if someone could explain to me how it works.
The first line normalizes the image into [0, 1], but the third line maps it into e.g. [2, 89] (it depends on the input image).
1- My question is about line 3: how does it map the image into the new domain?
2- If I want to take it back to [0, 255], how do I undo line 3 (i.e. after normalizing it, get back the original image)?
img = img.astype(np.float32) / 255.0
sc = np.power(np.power(2.0, -3), 0.5)
img = scipy.misc.toimage(sc * np.squeeze(img), cmin=0.0, cmax=1.0)
img = np.asarray(img)

So, finally, after a few days I figured it out. I'll answer it here in case someone has the same question :)
It works like scipy.misc.bytescale, and the math behind it is as below:
((I - Cmin) / (Cmax - Cmin)) * 255
The I parameter is the value of the pixel. For a test, make a small matrix (e.g. 3 by 3) and change Cmax and Cmin; I'm sure you'll understand it better.
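For example, here is a minimal sketch of that formula on a small array (the values are just illustrative), including the inverse mapping asked about in question 2:
import numpy as np

img = np.array([[0.00, 0.25, 0.50],
                [0.60, 0.80, 1.00]], dtype=np.float32)
cmin, cmax = 0.0, 1.0
# forward: the bytescale-style mapping into [0, 255]
scaled = ((img - cmin) / (cmax - cmin)) * 255.0
bytes_img = scaled.astype(np.uint8)
# inverse: map [0, 255] back into [cmin, cmax]
restored = bytes_img.astype(np.float32) / 255.0 * (cmax - cmin) + cmin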

Related

Corner Refine Method, CORNER_REFINE_SUBPIX, from ArUco with Python and OpenCV

Hi there,
I want to increase the accuracy of the marker detection from aruco.detectMarkers, so I want to use the corner refinement method CORNER_REFINE_SUBPIX, but I do not understand how to set it in Python.
Sample code:
import cv2 as cv
from cv2 import aruco

# (dictionary is an aruco dictionary object created elsewhere)
frame = cv.imread("test.png")
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
para = aruco.DetectorParameters_create()
det_corners, ids, rejected = aruco.detectMarkers(gray, dictionary, parameters=para)
aruco.drawDetectedMarkers(frame, det_corners, ids)
Things I have tried:
para.cornerRefinementMethod()
para.cornerRefinementMethod(aruco.CORNER_REFINE_SUBPIX)
para.cornerRefinementMethod.CORNER_REFINE_SUBPIX
para = aruco.DetectorParameters_create(aruco.CORNER_REFINE_SUBPIX)
para = aruco.DetectorParameters_create(para.cornerRefinementMethod(aruco.CORNER_REFINE_SUBPIX))
They did not work. I'm pretty new to Python ArUco, so I hope there is a simple and obvious solution.
I would also like to implement enclosed markers like in the documentation (page 4). Do you happen to know if there is a way to generate these enclosed markers in Python?
Concerning the first part of your question, you were pretty close: I assume your trouble is in switching and tweaking the "para" options. If so, you only need to set the corresponding attribute on the parameters object, like
para.cornerRefinementMethod = aruco.CORNER_REFINE_SUBPIX
Note that "aruco.CORNER_REFINE_SUBPIX" is simply an integer. You can verify this by typing type(aruco.CORNER_REFINE_SUBPIX) in the console. Thus, assigning values to the "para" object works as shown above.
You might also want to tweak para.cornerRefinementWinSize, which seems to be implemented in units of code pixels, not actual image pixels.
Concerning the second part, you might have to write a function that adds the boxes at the corner points, which you can get using the detectMarkers function. Note that the corner points are always ordered clockwise, so you can easily assign the correct offset values (like "up & left", "up & right", etc.).
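For completeness, a minimal sketch of the whole pipeline with subpixel refinement enabled (the dictionary choice and window size here are assumptions; adjust them to your setup):
import cv2 as cv
from cv2 import aruco

dictionary = aruco.Dictionary_get(aruco.DICT_6X6_250)  # assumed dictionary
para = aruco.DetectorParameters_create()
para.cornerRefinementMethod = aruco.CORNER_REFINE_SUBPIX
para.cornerRefinementWinSize = 5  # optional tuning, see the note above

frame = cv.imread("test.png")
gray = cv.cvtColor(frame, cv.COLOR_BGR2GRAY)
det_corners, ids, rejected = aruco.detectMarkers(gray, dictionary, parameters=para)
aruco.drawDetectedMarkers(frame, det_corners, ids)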
para.cornerRefinementMethod = 1
may work.

Variable limits when cutting pixels from a FITS image with imcopy (IRAF) in Python (PyRAF)

I am trying to use PyRAF to run IRAF's imcopy task from a Python script. My problem is that you have to specify the x and y ranges of the region you want to cut out of the image, but I want those limits to be variables, since everything is inside a loop and I have to copy several regions.
For example, I have this:
img_seg = raw_input('Name of the segmentation image? ')
iraf.imcopy(input=img_seg+'[200:220,300:400]', output='out')
But if I try with for example:
x1 = 200
x2 = 220
y1 = 300
y2 = 400
iraf.imcopy(input=img_seg+'[x1:x2,y1:y2]', output='out')
that does not work: it gives me a syntax error and the message ERROR (1, "Number of input and output images not the same").
I have been trying for a while, but I have not been able to make it work, so it would be nice if someone could explain to me how to do this. Thanks in advance!
P.S. My question is similar to this one: How to run a function on a list of objects in python/pyraf?, but the answer there is basically what is not working for me.
OK, so the point is that IRAF needs to see only the final section string, but you can use Python to build that string as needed.
So all one needs to do is:
iraf.imcopy(input=img_seg+'['+str(x1)+':'+str(x2)+','+str(y1)+':'+str(y2)+']',output='out')
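A slightly more readable way to build the same section string (same idea, just using string formatting):
section = '[{0}:{1},{2}:{3}]'.format(x1, x2, y1, y2)
iraf.imcopy(input=img_seg + section, output='out')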

Image Sharpening Algorithm coded in Python

I was hoping someone could take a look at this sharpening algorithm I devised using Pillow and explain to me why it is not giving a desirable sharpening effect on images. It really just looks like crap when applied to my sample images. I've worked on this for several days, but haven't made much progress in improving either the quality of the sharpening effect or the efficiency of the algorithm itself. Ideally, I'm looking for a subtle sharpening effect or something that can be scaled easily. I really appreciate any help or insight that can be provided. Here are the sources that I used to come up with this algorithm:
http://lodev.org/cgtutor/filtering.html#Sharpen
http://www.foundalis.com/res/imgproc.htm
from PIL import Image
import os

os.chdir(r"C:")

# kernel weights: filter1 for the centre pixel, filter2 for each of the 8 neighbours
filter1 = 9
filter2 = -1

def sharpen2(photo, height, width, filter1, filter2):
    for y in range(1, height - 1):
        for x in range(1, width - 1):
            # centre pixel
            (r, g, b) = photo.getpixel((x, y))
            r = int(r * filter1)
            g = int(g * filter1)
            b = int(b * filter1)
            # the 8 neighbouring pixels
            (r1, g1, b1) = photo.getpixel((x - 1, y - 1))
            r1 = int(r1 * filter2)
            g1 = int(g1 * filter2)
            b1 = int(b1 * filter2)
            (r2, g2, b2) = photo.getpixel((x, y - 1))
            r2 = int(r2 * filter2)
            g2 = int(g2 * filter2)
            b2 = int(b2 * filter2)
            (r3, g3, b3) = photo.getpixel((x + 1, y - 1))
            r3 = int(r3 * filter2)
            g3 = int(g3 * filter2)
            b3 = int(b3 * filter2)
            (r4, g4, b4) = photo.getpixel((x - 1, y))
            r4 = int(r4 * filter2)
            g4 = int(g4 * filter2)
            b4 = int(b4 * filter2)
            (r5, g5, b5) = photo.getpixel((x + 1, y))
            r5 = int(r5 * filter2)
            g5 = int(g5 * filter2)
            b5 = int(b5 * filter2)
            (r6, g6, b6) = photo.getpixel((x - 1, y + 1))
            r6 = int(r6 * filter2)
            g6 = int(g6 * filter2)
            b6 = int(b6 * filter2)
            (r7, g7, b7) = photo.getpixel((x, y + 1))
            r7 = int(r7 * filter2)
            g7 = int(g7 * filter2)
            b7 = int(b7 * filter2)
            (r8, g8, b8) = photo.getpixel((x + 1, y + 1))
            r8 = int(r8 * filter2)
            g8 = int(g8 * filter2)
            b8 = int(b8 * filter2)
            # sum the weighted values and clamp to [0, 255]
            rfPixel = r + r1 + r2 + r3 + r4 + r5 + r6 + r7 + r8
            if rfPixel > 255:
                rfPixel = 255
            elif rfPixel < 0:
                rfPixel = 0
            gfPixel = g + g1 + g2 + g3 + g4 + g5 + g6 + g7 + g8
            if gfPixel > 255:
                gfPixel = 255
            elif gfPixel < 0:
                gfPixel = 0
            bfPixel = b + b1 + b2 + b3 + b4 + b5 + b6 + b7 + b8
            if bfPixel > 255:
                bfPixel = 255
            elif bfPixel < 0:
                bfPixel = 0
            # writes the result back into the same image that is being read
            photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))
    return photo

photo = Image.open("someImage.jpg").convert("RGB")
photo2 = photo.copy()
height = photo.height
width = photo.width
x = sharpen2(photo, height, width, filter1, filter2)
One problem is likely that you're saving the results to the same image you are getting pixel data from. By the time you get to a pixel, some of its neighbors have been replaced by the filtered data, and some have not. The error is small at first but adds up.
To fix: save the results to a different image, say filtered_photo.putpixel(...). You'd have to create a blank filtered_photo first.
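A minimal sketch of that fix (filtered_photo is a name I made up; the loop body stays the same, only the write target changes):
from PIL import Image

photo = Image.open("someImage.jpg").convert("RGB")
filtered_photo = Image.new("RGB", photo.size)  # blank destination image
# inside the loops: keep reading with photo.getpixel(...),
# but write the result to the new image instead:
#     filtered_photo.putpixel((x, y), (rfPixel, gfPixel, bfPixel))
# and return filtered_photo at the end of sharpen2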
Another big problem (mentioned by @Mark Ransom) is that you probably want filter1 = 1.1 and filter2 = -0.1, or something along those lines. Using 9 and -1 will make most values come out of range.
A better implementation: don't loop over each pixel in Python code; use numpy to process the whole image at once, which will be much faster (and shorter code). The usual implementation of sharpening is to subtract the Gaussian-filtered image from the original, which is a one-liner using numpy and ndimage (or skimage).
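A sketch of that numpy/ndimage approach (unsharp masking; the sigma and amount values here are just illustrative):
import numpy as np
from scipy import ndimage
from PIL import Image

img = np.asarray(Image.open("someImage.jpg").convert("RGB"), dtype=float)
blurred = ndimage.gaussian_filter(img, sigma=(2, 2, 0))  # don't blur across channels
amount = 1.0  # sharpening strength; scale this for a subtler or stronger effect
sharpened = np.clip(img + amount * (img - blurred), 0, 255).astype(np.uint8)
Image.fromarray(sharpened).save("sharpened.jpg")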

How to find a template in an image using a mask (or transparency) with OpenCV and Python?

Let us assume we are looking for this template:
The corners of our template are transparent, so the background will vary, like so:
Assuming we could use the following mask with our template:
It would be very easy to find it.
What I have tried:
I have tried matchTemplate but it doesn't support masks (as far as I know), and using the alpha channel (transparency) in the template does not achieve this, as it compares the alpha channels instead of ignoring those pixels.
I have also looked into "region of interest", which I thought would be the solution, but with it you can only specify a rectangular area. I'm not even sure if it works on the template or not.
I'm sure this is possible to do by writing my own algorithm, but I was hoping it is possible via standard OpenCV, to avoid reinventing the wheel. Not to mention, it would most likely be more optimised than my own.
So, how could I do something like this with OpenCV + Python?
This can be achieved using only the matchTemplate function, but a little workaround is needed.
Let's analyse the default metric (CV_TM_SQDIFF_NORMED). According to the matchTemplate documentation, the default metric looks like this:
R(x, y) = sum (I(x+x', y+y') - T(x', y'))^2
where I is the image matrix, T is the template and R is the result matrix. Summation is done over the template coordinates x' and y'.
So, let's alter this metric by inserting a weight matrix W, which has the same dimensions as T:
Q(x, y) = sum W(x', y')*(I(x+x', y+y') - T(x', y'))^2
In this case, by setting W(x', y') = 0 you can actually make a pixel be ignored. So, how do we compute such a metric? With simple math:
Q(x, y) = sum W(x', y')*(I(x+x', y+y') - T(x', y'))^2
= sum W(x', y')*(I(x+x', y+y')^2 - 2*I(x+x', y+y')*T(x', y') + T(x', y')^2)
= sum {W(x', y')*I(x+x', y+y')^2} - sum{W(x', y')*2*I(x+x', y+y')*T(x', y')} + sum{W(x', y')*T(x', y')^2}
So, we have divided the Q metric into three separate sums. And all of those sums can be calculated with the matchTemplate function (using the CV_TM_CCORR method). Namely:
sum {W(x', y')*I(x+x', y+y')^2} = matchTemplate(I^2, W, method=2)
sum{W(x', y')*2*I(x+x', y+y')*T(x', y')} = matchTemplate(I, 2*W*T, method=2)
sum{W(x', y')*T(x', y')^2} = matchTemplate(T^2, W, method=2) = sum(W*T^2)
The last element is a constant, so for minimisation it does not have any effect. On the other hand, it still might be useful to see whether our template has a perfect match (if Q is approaching zero). Nonetheless, for the last element we actually do not need the matchTemplate function, since it can be calculated directly.
The final pseudocode looks like this:
result = matchTemplate(I^2, W, method=2) - matchTemplate(I, 2*W*T, method=2) + as.scalar(sum(W*T^2))
Does it really do exactly what is defined above? Mathematically, yes.
Practically, there is some small rounding error, because the matchTemplate function works on 32-bit floating-point numbers, but I believe it is not a big problem.
Please note that you can extend this analysis and derive weighted equivalents for any metric offered by matchTemplate.
This actually worked for me. I am sorry I can't give actual code: I am working in R, so I don't have the code in Python. But the idea is quite straightforward.
I hope this will help.
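For reference, a rough Python translation of the idea (an untested sketch on my part; I, T and W are single-channel float32 arrays, with W in [0, 1] and shaped like T):
import cv2
import numpy as np

def masked_sqdiff(I, T, W):
    # Q(x, y) = sum W*(I - T)^2, assembled from three CCORR correlations
    I = I.astype(np.float32)
    T = T.astype(np.float32)
    W = W.astype(np.float32)
    term1 = cv2.matchTemplate(I * I, W, cv2.TM_CCORR)        # sum W * I^2
    term2 = cv2.matchTemplate(I, 2.0 * W * T, cv2.TM_CCORR)  # sum 2 * W * I * T
    term3 = np.sum(W * T * T)                                # constant: sum W * T^2
    return term1 - term2 + term3  # minima mark the best matches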
What worked for me the one time I needed this was to fill the "mask" areas with white noise. Then it gets effectively washed out of the correlation when looking for matches. Otherwise I got, as I presume you did, false matches on the masked areas.
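Roughly like this, assuming a 0/255 mask image where white marks the pixels to keep (the file names here are illustrative):
import cv2
import numpy as np

template = cv2.imread("template.png", cv2.IMREAD_COLOR)
mask = cv2.imread("mask.png", cv2.IMREAD_COLOR)  # white = keep, black = ignore
noise = np.random.randint(0, 256, size=template.shape, dtype=np.uint8)
template_noisy = np.where(mask > 0, template, noise)  # masked-out pixels become noise
# then search with the noise-filled template as usual:
# result = cv2.matchTemplate(image, template_noisy, cv2.TM_CCOEFF_NORMED)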
One answer to your question is convolution: use the template as a kernel and filter the image.
The destination Mat will have dense bright areas where your template might be. You'll have to cluster the results (e.g. mean-shift).
That way, you'll have a very simplistic implementation of the Generalized Hough Transform, or template-based convolution matching.
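A minimal sketch of that idea (note that cv2.filter2D actually computes correlation, which is what we want here; zero-meaning the kernel is my own tweak so that flat regions score low):
import cv2
import numpy as np

image = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
template = cv2.imread("template.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
kernel = template - template.mean()  # zero-mean kernel
response = cv2.filter2D(image, -1, kernel)  # bright peaks = candidate locations
# cluster the bright areas of 'response' (e.g. mean-shift) to get match positions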
Imagemagick 7.0.3.9 now has a masked compare capability so that you can limit the template matching region. See http://www.imagemagick.org/discourse-server/viewtopic.php?f=4&t=31053
Also, I see that OpenCV 3.0 now has masked template matching. See http://docs.opencv.org/3.0.0/df/dfb/group__imgproc__object.html#ga586ebfb0a7fb604b35a23d85391329be
However, it works only for method == CV_TM_SQDIFF and method == CV_TM_CCORR_NORMED. See python opencv matchTemplate is mask feature implemented?
ImageMagick has logic for finding subimages in other images and it works quite well.
compare -verbose -dissimilarity-threshold 0.1 -subimage-search subimage bigimage
I've used it to find and blur watermarks off some products. Don't ask.
(Sometimes you have to do what you have to do..)
2021 Update: I've been trying to find a solution for transparency in templates throughout the day, and I think I finally found a way to do it. matchTemplate() has a mask parameter, which apparently works exactly like OP wants it to: ignore certain pixels from a template when searching for it in another image. And since my templates already contain transparency in them, I decided to use my template as both a template and mask parameter. Surprisingly, it worked.
I'm using JavaScript with opencv4nodejs, so the following python code snippet might be completely off, but the theory is there and I'm fairly positive it should work.
# Import OpenCV
import cv2 as cv
# Read both the image and the template
image = cv.imread("image.png", cv.IMREAD_COLOR)
template = cv.imread("template.png", cv.IMREAD_COLOR)
# Match with template as both template and mask parameter
result = cv.matchTemplate(image, template, cv.TM_CCORR_NORMED, None, template)
Here's a gist for JavaScript with opencv4nodejs if you're interested.
Now that I think about it, it seems really stupid and way too good to be true, but I've been getting good matches (0.98+) on most tests. Hope this helps!

calculating mean of several numpy masked arrays (masked_all)

First of all, I'm new to Python and programming, but you guys have already helped me a lot, so thanks a lot! But I've come to a problem I haven't found an answer for so far:
I have data from several plates, where the data represents the pressure at a large number of different spots on each plate. The thing is, these plates aren't perfectly round because of the sensors measuring the pressure, and sometimes these sensors even produce an error, so I don't have any data for some spots within the plate.
When I just have to plot one plate, I'll do it like that:
import numpy.ma as ma

matrix = ma.masked_all((160, 65), float)
for x in range(len(plate.X)):
    matrix[(plate.Y[x], plate.X[x])] = data.index(plate.measurementname[x])
image.pcolormesh(matrix, min, max)
This works fine. Now that I have several plates, I'd like to plot the mean pressure at each spot. Because I don't know of any mean function, I thought of adding all the plates together and dividing by the number of plates... I tried the following:
import numpy.ma as ma

meanmatrix = ma.masked_all((160, 65), float)
for plate in plateslist:
    matrix = ma.masked_all((160, 65), float)
    for x in range(len(plate.X)):
        matrix[(plate.Y[x], plate.X[x])] = data.index(plate.measurementname[x])
    meanmatrix += matrix
meanmatrix = meanmatrix / len(plateslist)
image.pcolormesh(meanmatrix, min, max)
This works pretty well, but there's one problem I can't solve. As I said, sometimes some plates are missing data, so there's a "hole" at some spots in the plot. Now my meanmatrix has a hole wherever one of the plates had a hole, even if all the others had data at that spot.
How can I make sure I won't get these holes, or is there an even smoother way of getting my "meanmatrix"? (I hope my question is clear enough...)
Edit:
The problem is not that I don't get the mean of the data; that actually works (well, I don't like how I did it, but it works). The problem is that I get these "holes" I described before. That's what bothers me.
EDIT: Sorry, I misinterpreted the question. Try this:
allplates = ma.masked_all((160, 65, numplates))
# fill in allplates
meanplate = allplates.mean(axis=2)
This will compute the mean over the last dimension of the array, i.e., average the plates together. Missing values are ignored.
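A tiny self-contained example of that pattern (two 2x2 "plates" just for illustration):
import numpy.ma as ma

allplates = ma.masked_all((2, 2, 2), float)    # (rows, cols, numplates)
allplates[:, :, 0] = [[1.0, 2.0], [3.0, 4.0]]  # plate 0: complete data
allplates[0, 0, 1] = 5.0                       # plate 1: only one sensor worked
meanplate = allplates.mean(axis=2)
# meanplate[0, 0] == 3.0 (mean of 1 and 5); the other spots come from
# plate 0 alone, with no holes introduced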
Earlier answer: You can take the mean of a masked array, and it will ignore the missing values:
>>> X = ma.masked_all((160, 65))
>>> X.mean()
masked
>>> X[0, 0] = 1
>>> X.mean()
1.0
Try to avoid using matrix as a variable name, though, because it also refers to a NumPy data structure.
OK, I got an answer:
import numpy.ma as ma

allplates = ma.masked_all((160, 65), float)
for plate in plateslist:
    for x in range(len(plate.X)):
        allplates[(plate.Y[x], plate.X[x])] += data.index(plate.measurementname[x])
allplates = allplates / len(plateslist)
image.pcolormesh(allplates, min, max)
This actually works! So I guess there was a mistake when adding two masked_all arrays... ("Stupid is as stupid does")
If someone has a better approach to get the mean of all plates at each single spot, it would be nice to read it.
