How would I translate this equation into code? - python

I am working in Python, and I am trying to compute a weight matrix for a graph of pixels, where the weight of each edge depends on the pixels' "feature" similarity (F(i) - F(j)) and their location similarity (X(i) - X(j)). "Features" include intensity, color, and texture.
Right now I have it implemented and working, but not for color images. At first I tried to simply average the RGB values of each pixel to convert the entire image to greyscale. That didn't work as I had hoped, and I have read through a paper that suggests a different method.
They say to use this: F(i) = [v, v*s*sin(h), v*s*cos(h)](i)
where h, s, and v are the HSV color values.
I am just confused by the notation. What is this supposed to mean? What does it mean to have three different terms separated by commas inside square brackets? I'm also confused by what the (i) at the end is supposed to mean. Shouldn't F(i) for any given pixel be a single number, so that I can carry out F(i) - F(j)?
I'm not asking for someone to do this for me I just need some clarification.

Features can be vectors, and you can calculate the distance between vectors:
import numpy
f1 = numpy.array([1, 2, 3])
f2 = numpy.array([0, 2, 3])
distance = numpy.linalg.norm(f1 - f2)
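So F(i) here is a three-element vector per pixel, not a single number, and F(i) - F(j) is the difference of two such vectors, whose norm you take as above. A rough, untested sketch of the paper's F(i), assuming skimage is used for the HSV conversion:
import numpy as np
from skimage.color import rgb2hsv

def features(rgb_image):
    # rgb2hsv returns h, s, v all scaled to [0, 1], so h is converted
    # to an angle in radians before taking sin/cos
    hsv = rgb2hsv(rgb_image)
    h = hsv[..., 0] * 2 * np.pi
    s = hsv[..., 1]
    v = hsv[..., 2]
    # result has shape (rows, cols, 3): one feature vector F(i) per pixel i
    return np.stack([v, v * s * np.sin(h), v * s * np.cos(h)], axis=-1)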

Related

Measuring shift between two images along one direction only

I have to measure shifts between two monochromatic images.
These images are actually spectra before calibration, which are very noisy and full of unwanted features, but they basically look like the following:
I know that the images are shifted relative to each other along the x-direction, but not along the y-direction, and I want to know the amount of the x-shift between them.
Luckily I found a function in skimage, register_translation, which can be used with arbitrary subpixel precision. The problem is that I want to know the shift along the x-direction only, and I want the resulting y-direction shift to be 0, but the program finds the shift in x and y at the same time, presumably along the direction perpendicular to the features (marked as a blue arrow in the figure).
So, I am wondering:
Is there any function or package in Python that measures the shift between two images along one direction only, or even with any prior knowledge?
What is a correct way of finding shifts between two noisy images? Would finding the maximum cross-correlation value in FFT space do the job?
Some simple maths should do in this situation, given that register_translation gives you the xy shift (in vector or component form). You can calculate the movement in x that would be required if the y shift were non-existent, which is what you want. I am travelling so unfortunately can't give you a diagram right now; I would recommend drawing the triangles out.
The extra x shift required (x_extra) is defined by:
x_extra = y_shift * tan(arctan(y_shift / x_shift))
Which is simplified to:
x_extra = y_shift^2 / x_shift
Therefore, the total shift in x is:
x_shift_total = x_shift + x_extra
Where the x_shift is given to you by register_translation.
If you then move imageA by x_shift_total, it should be aligned with imageB, assuming the x_shift given by register_translation is correct.
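For example, if register_translation reports x_shift = 4 and y_shift = 2 (in pixels), then x_extra = 2^2 / 4 = 1, so x_shift_total = 5.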
@jni I would be keen to implement this as an option in register_translation!
I'm not positive it will work, but: one of the benefits of open source is that you can look at the implementation details of register_translation, then try to adapt it to your case. In your case, I would replace the fftn call with a 1-D FFT along only the columns axis (e.g. np.fft.fft(..., axis=1)). Then, multiply the two FFT signals together (this is equivalent to the convolution of each line, as suggested by @CypherX). Finally, you have to find a way to "coalesce" the shifts found along each line into a single measurement. One idea would be to take each shift (the maximum along that line) and plot a histogram. One would hope that you get a sharp peak around the true x shift.
If it works, it would be a pretty great contribution to scikit-image to add an "axis" keyword argument to register_translation. You can read the how to contribute guide and propose a change accordingly!
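To make that concrete, here is a minimal, untested sketch of the row-wise idea (it is not the actual register_translation implementation; image1 and image2 are assumed to be 2-D NumPy arrays of the same shape):
import numpy as np

def rowwise_x_shift(image1, image2):
    # 1-D FFT of every row (axis=1 is the x/column axis)
    f1 = np.fft.fft(image1, axis=1)
    f2 = np.fft.fft(image2, axis=1)

    # cross-correlate each row by multiplying one FFT with the conjugate of the other
    xcorr = np.fft.ifft(f1 * np.conj(f2), axis=1).real

    # per-row shift estimate: position of the correlation maximum, wrapped to +/- ncols/2
    ncols = image1.shape[1]
    shifts = np.argmax(xcorr, axis=1)
    shifts[shifts > ncols // 2] -= ncols

    # "coalesce" the per-row estimates, e.g. by taking the most common value
    values, counts = np.unique(shifts, return_counts=True)
    return values[np.argmax(counts)]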
Another, much faster and simpler, approach would be to calculate the horizontal profile at the same location in both images. That would give you a 1D profile for each image horizontally. Simple peak finding will then give you the location of the lines, and the difference between the peak indexes will tell you the shift solely in the x-axis.
I use this approach routinely to do shift detection similar to your problem, and it is very very fast, very simple, and very robust.
import numpy as np

# pick a row (or a small band of rows) to use
row = 10
x_profile1 = np.mean(image1[row:row + 5, :], axis=0)
x_profile2 = np.mean(image2[row:row + 5, :], axis=0)

# 'get_peaks' is a placeholder for a peak-finding routine - several
# implementations are around (see the sketch below)
peaks1 = get_peaks(x_profile1)
peaks2 = get_peaks(x_profile2)
x_shift = peaks1[0] - peaks2[0]
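Here get_peaks is only a placeholder; one possible implementation (assuming the spectral lines are darker than the background, so the profile is inverted first) is a thin wrapper around scipy.signal.find_peaks:
import numpy as np
from scipy.signal import find_peaks

def get_peaks(profile, prominence=10):
    # invert the profile so dark lines become peaks;
    # tune 'prominence' to the noise level of your data
    peaks, _ = find_peaks(-np.asarray(profile, dtype=float), prominence=prominence)
    return peaks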
Method-1
You could use convolution between the two images to find where you get a maximum. You can picture this as sliding the non-shifted image over the shifted image from left to right; the convolution will produce maxima corresponding to the scenario where identical sections of each image lie on top of one another. Take a look at scipy.ndimage.convolve and scipy.signal.convolve and see which one suits your needs better.
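In practice this sliding comparison is usually written as a cross-correlation; a minimal 1-D illustration on a single row of each image (image1 and image2 assumed to be NumPy arrays) might look like:
import numpy as np
from scipy.signal import correlate

row_a = image1[100, :].astype(float)  # the same row taken from each image
row_b = image2[100, :].astype(float)

# lag of the cross-correlation maximum = shift of row_a relative to row_b
corr = correlate(row_a - row_a.mean(), row_b - row_b.mean(), mode="full")
lags = np.arange(-len(row_b) + 1, len(row_a))
x_shift = lags[np.argmax(corr)]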
Method-2
On the other hand, you could take a horizontal slice from each image and find the positions of the peaks (assuming the black strips are 1's and the white regions are 0's).
Calculate the centroids of these peaks in each image. The difference between the positions of these centroids is the shift you are looking for.
For robustness, you could then apply this to various rows of the image pair; the average of all such differences would be a more statistically reliable measure of the horizontal shift.

How to normalize colors acquiring a single color?

I have to build an algorithm that takes an RGB image and returns the image turned into a wood-like mosaic. For this, I was given some wood tablet samples, as seen in the image below:
I'd like to know how I can normalize the colors of each tablet, resulting in a single color, so I can build a map of reference colors to convert the input image colors to.
I've searched for how to achieve that, but I only found a Wikipedia article, which I couldn't understand much of.
Thanks in advance for all help you might provide me.
PS: I'm considering using Python to develop this. So if you come up with something done using this language, I'd really appreciate it.
The way to get the average color is to simply take the average of the RGB values.
To get a more accurate average you should do this with linear color values. Usually RGB values are gamma corrected, but you can easily undo the gamma and then redo it once you have the average. Here's how you'd do it with Python's PIL using a gamma of 2.2:
def average_color(sample):
    pix = sample.load()
    totals = [0.0, 0.0, 0.0]
    for y in range(sample.size[1]):
        for x in range(sample.size[0]):
            color = pix[x, y]
            for c in range(3):
                totals[c] += color[c] ** 2.2
    count = sample.size[0] * sample.size[1]
    color = tuple(int(round((totals[c] / count) ** (1 / 2.2))) for c in range(3))
    return color
For the sample in the upper left of your examples, the result is (144, 82, 66). Here's a visual of all of them:
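A hypothetical usage, cropping one tablet out of a sample sheet with PIL (the file name and crop box are made up):
from PIL import Image

sheet = Image.open("wood_samples.png").convert("RGB")
tablet = sheet.crop((0, 0, 100, 100))  # box around the upper-left tablet
print(average_color(tablet))           # something like (144, 82, 66)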
To make one color represent a tile, a simple option would be to find the mean color of a random sample of pixels in a specific tile. You can choose an appropriate sample size as a trade-off between speed and accuracy.
For your specific use case, I'd recommend further division of the tiles, say into 3 columns (because of the top-to-bottom design of most wood panels). Find the mean color of each column and eliminate any that is beyond a certain measure of variance. This is to try to ensure that tiles such as the rightmost one in the 4th row don't get mapped to the darker shade.
An alternate approach would be to convert both your input image and these wood tiles to grayscale and carry out your processing there. The opencv library has simple functions for RGB2GRAY conversion.
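A quick sketch of the random-sample idea above, assuming tile is an RGB image already loaded as a NumPy array of shape (height, width, 3):
import numpy as np

def mean_color_of_sample(tile, n_samples=500, seed=0):
    rng = np.random.default_rng(seed)
    h, w, _ = tile.shape
    ys = rng.integers(0, h, n_samples)  # random pixel coordinates
    xs = rng.integers(0, w, n_samples)
    return tile[ys, xs].mean(axis=0)    # mean (r, g, b) of the sampled pixels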
One trivial way to normalize the colors is to simply force the mean and standard deviation of RGB values in all images to be the same.
Here is an example with the two panels at the top of the left column in the example image. I'm using MATLAB with DIPimage 3.0, because that is what I know, but this is trivial enough to implement in Python with NumPy, or any other desired language/library:
img = readim('https://i.stack.imgur.com/HK6VY.png')
tab1 = dipcrop; % Interactive cropping of a tile from the displayed image
tab2 = dipcrop;
m1 = mean(tab1);
s1 = std(tab1);
m2 = mean(tab2);
s2 = std(tab2);
tab2b = (tab2 - m2) ./ s2 .* s1 + m1;
What the code does to the image tab2 is, on a per-channel basis, to subtract the mean and divide by the standard deviation. Next, it multiplies each channel by the standard deviation of the corresponding channel of the template image, and adds the mean of that channel.
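Since the answer notes this is trivial with NumPy, the same per-channel normalization might look like this in Python (tab1 and tab2 assumed to be float arrays of shape (height, width, 3)):
import numpy as np

# per-channel mean and standard deviation over the spatial axes
m1, s1 = tab1.mean(axis=(0, 1)), tab1.std(axis=(0, 1))
m2, s2 = tab2.mean(axis=(0, 1)), tab2.std(axis=(0, 1))

# give tab2 the same per-channel statistics as tab1
tab2b = (tab2 - m2) / s2 * s1 + m1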

reconstruction: why undistort image AND normalize coordinates?

From the Tutorial: https://programtalk.com/vs2/?source=python/8176/opencv-python-blueprints/chapter4/scene3D.py
I don't understand why they first undistort the images
# undistort the images
self.img1 = cv2.undistort(self.img1, self.K, self.d)
self.img2 = cv2.undistort(self.img2, self.K, self.d)
and: Compute the Essential Matrix
def _find_fundamental_matrix(self):
    self.F, self.Fmask = cv2.findFundamentalMat(self.match_pts1,
                                                self.match_pts2,
                                                cv2.FM_RANSAC, 0.1, 0.99)

def _find_essential_matrix(self):
    self.E = self.K.T.dot(self.F).dot(self.K)
and also Normalize the coordinates:
first_inliers = []
second_inliers = []
for i in range(len(self.Fmask)):
    if self.Fmask[i]:
        # normalize and homogenize the image coordinates
        first_inliers.append(self.K_inv.dot([self.match_pts1[i][0],
                                             self.match_pts1[i][1], 1.0]))
        second_inliers.append(self.K_inv.dot([self.match_pts2[i][0],
                                              self.match_pts2[i][1], 1.0]))
Shouldn't it be one or the other? Or do I have some misunderstanding here?
Can someone please help me with that?
The first step, undistort, does a number of things to reverse the typical warping caused by small camera lenses. See the Wikipedia article on distortion (optics) for more background.
The last step, homogenizing the coordinates, is a completely different thing. The Wikipedia article on homogeneous coordinates explains it, but the basic idea is that you add an extra fake axis that lets you do all affine and projective transformations with chained simple matrix multiplications, and then just project back to 3D at the end. Normalizing is just a step you do to make that math easier: basically, you want your extra coordinate to start off as 1.0 (multiply by the inverse of the projective norm).
The requirement for normalization is explained on page 107 of Multiple View Geometry (Hartley and Zisserman). The normalization is required in addition to the undistortion.
If you are using raw pixel values in homogeneous coordinates, the Z-coordinate, which is 1, will be small compared to the X and Y coordinates, e.g. (X=320, Y=220, Z=1).
But if the homogenized coordinates are the image pixel positions normalized to a standard range, i.e. -1.0 to 1.0, then we are talking about coordinate values that are all roughly in the same range, e.g. (0.75, -0.89, 1.0).
If the image coordinates are of dramatically different ranges (as in the unnormalized case), then the DLT matrix produced will have a bad condition number, and consequently small variations in the input image pixel positions could produce wide variations in the result.
Please see page 107 for a very good explanation.
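As a rough illustration of the range issue (the camera matrix here is made up, not taken from the tutorial):
import numpy as np

# hypothetical intrinsics: focal length ~800 px, principal point (320, 240)
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
K_inv = np.linalg.inv(K)

p_raw = np.array([620.0, 100.0, 1.0])  # raw homogeneous pixel coordinate
p_norm = K_inv.dot(p_raw)              # about (0.375, -0.175, 1.0): comparable magnitudes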

Regarding extracting feature vector for image processing

First, pardon me if the question is a little bit hard to understand, since I am still a novice and trying my best to express my problem.
I am trying to implement the road-lane detection method from the thesis "Effective lane detection and tracking method using statistical modeling of color and lane edge-orientation" (maybe not all of you can access this thesis).
From the thesis:
"A proposed lane segmentation method uses two distinctive features when there is an input image f (x, y) ,Z = [z1, z2 ]T for classifying lane pixels: lane HSV color feature z1 and lane edge-orientation feature z2, which can be defines as Z = [z1, z2]T = [I'(x, y), ø(x,y)]T
What I want to know is: does Z itself have only two elements, with each element corresponding to a pixel, which would also mean I have a Z(x, y) feature vector per pixel?
Or will I have only one feature vector Z, which already contains a long list of the I' and ø values of every pixel?
And how can I store this feature vector with Python (using some library)? I have already searched a bit but am still a little confused. It would be helpful if someone could at least give me a keyword so I can search deeper.
@Hilman, the first thing is to understand what a feature vector is. A feature is a description of your data (in this case a pixel) using some property of that data, for example its mean or variance (or its color [r, g, b]), or the output of applying some transformation function (such as a color space conversion) to the data, which converts it into a form more appropriate for classification or prediction.
What I understand from your question description is that the proposed algorithm takes the HSV value of each pixel (via a color space conversion) along with its gradient direction (phase). If you combine them, you get a 4-element vector for each pixel, so the feature vector Z will consist of [H, S, V, phase] for each pixel, along with the pixel's class annotation.
In Python, if you want to store a feature vector, you can write a NumPy array to a CSV file.
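A hypothetical sketch of building the per-pixel [H, S, V, phase] array and saving it, assuming OpenCV for the HSV conversion and Sobel gradients for the phase (the file names are made up):
import numpy as np
import cv2

img = cv2.imread("road.png")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# gradient direction (phase) of the grayscale image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0)
gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1)
phase = np.arctan2(gy, gx)

# one row per pixel: [H, S, V, phase]
features = np.column_stack([hsv.reshape(-1, 3), phase.reshape(-1, 1)])
np.savetxt("features.csv", features, delimiter=",")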
Thank You

Resizing image algorithm in python

So, I'm teaching myself Python with this tutorial and I'm stuck on exercise number 13, which says:
Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image the scaling factor should be between 0 and 1; to enlarge the image the scaling factor should be greater than 1.
This is not meant as a question about PIL, but to ask which algorithm to use so I can code it myself.
I've found some similar questions like this one, but I don't know how to translate that into Python.
Any help would be appreciated.
I've come to this:
import image

win = image.ImageWin()
img = image.Image("cy.png")
factor = 2
W = img.getWidth()
H = img.getHeight()
newW = int(W * factor)
newH = int(H * factor)
newImage = image.EmptyImage(newW, newH)
for col in range(newW):
    for row in range(newH):
        p = img.getPixel(col, row)
        newImage.setPixel(col * factor, row * factor, p)
newImage.draw(win)
win.exitonclick()
I should do this in a function, but that doesn't matter right now. The arguments for the function would be (image, factor). You can try it in the ActiveCode environment of the tutorial above. It produces a stretched image with empty columns.
Your code as shown is simple and effective for what's known as a Nearest Neighbor resize, except for one little bug:
p = img.getPixel(col/factor,row/factor)
newImage.setPixel(col,row,p)
Edit: since you're sending a floating point coordinate into getPixel you're not limited to Nearest Neighbor - you can implement any interpolation algorithm you want inside. The simplest thing to do is simply truncate the coordinates to int which will cause pixels to be replicated when factor is greater than 1, or skipped when factor is less than 1.
Mark has the correct approach. To get a smoother result, you replace:
p = img.getPixel(col/factor,row/factor)
with a function that takes floating point coordinates and returns a pixel interpolated from several neighboring points in the source image. For linear interpolation it takes the four nearest neighbors; for higher-order interpolation it takes a larger number of surrounding pixels.
For example, if col/factor = 3.75 and row/factor = 1.9, a linear interpolation would take the source pixels at (3,1), (3,2), (4,1), and (4,2) and give a result between those 4 rgb values, weighted most heavily to the pixel at (4,2).
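For instance, a minimal bilinear replacement for getPixel might look like the sketch below; it assumes getPixel returns a plain (r, g, b) tuple (with the tutorial's Pixel objects you would read the channels with getRed()/getGreen()/getBlue() instead):
def get_pixel_bilinear(img, x, y):
    # integer corners surrounding the floating point coordinate
    x0, y0 = int(x), int(y)
    x1 = min(x0 + 1, img.getWidth() - 1)
    y1 = min(y0 + 1, img.getHeight() - 1)
    fx, fy = x - x0, y - y0

    def blend(a, b, t):
        # weighted average of two rgb tuples
        return tuple(a[c] * (1 - t) + b[c] * t for c in range(3))

    top = blend(img.getPixel(x0, y0), img.getPixel(x1, y0), fx)
    bottom = blend(img.getPixel(x0, y1), img.getPixel(x1, y1), fx)
    return tuple(int(round(c)) for c in blend(top, bottom, fy))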
You can do that using the Python Imaging Library.
Image.resize() should do what you want.
See http://effbot.org/imagingbook/image.htm
EDIT
Since you want to program this yourself without using a module, I have added an extra solution.
You will have to use the following algorithm (a short sketch follows the list).
load your image
extract its size
calculate the desired size (height * factor, width * factor)
create a new EmptyImage with the desired size
Use a nested loop to go through the pixels (row by column) of your image.
Then (for shrinking) you remove some pixels every once in a while, or (for enlarging) you duplicate some pixels in your image.
If you want to get fancy, you could smooth the added or removed pixels by averaging their rgb values with those of their neighbours.
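Putting those steps together, a nearest-neighbour sketch using the tutorial's image module (with the same int(col / factor) mapping suggested in the first answer) could look like:
import image

def resize(img, factor):
    newW = int(img.getWidth() * factor)
    newH = int(img.getHeight() * factor)
    newImage = image.EmptyImage(newW, newH)
    for col in range(newW):
        for row in range(newH):
            # truncating skips source pixels when shrinking and repeats them when enlarging
            p = img.getPixel(int(col / factor), int(row / factor))
            newImage.setPixel(col, row, p)
    return newImage

win = image.ImageWin()
resize(image.Image("cy.png"), 0.5).draw(win)
win.exitonclick()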
