So, I'm teaching myself Python with this tutorial and I'm stuck on exercise number 13, which says:
Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image, the scaling factor should be between 0 and 1; to enlarge the image, the scaling factor should be greater than 1.
This is not meant as a question about PIL, but to ask which algorithm to use so I can code it myself.
I've found some similar questions like this one, but I don't know how to translate them into Python.
Any help would be appreciated.
I've come to this:
import image
win = image.ImageWin()
img = image.Image("cy.png")
factor = 2
W = img.getWidth()
H = img.getHeight()
newW = int(W*factor)
newH = int(H*factor)
newImage = image.EmptyImage(newW, newH)
for col in range(newW):
    for row in range(newH):
        p = img.getPixel(col,row)
        newImage.setPixel(col*factor,row*factor,p)
newImage.draw(win)
win.exitonclick()
I should do this in a function, but that doesn't matter right now. The arguments for the function would be (image, factor). You can try it on the tutorial's ActiveCode. It produces a stretched image with empty columns.
Your code as shown is simple and effective for what's known as a Nearest Neighbor resize, except for one little bug:
p = img.getPixel(col/factor,row/factor)
newImage.setPixel(col,row,p)
Edit: since you're sending floating-point coordinates into getPixel, you're not limited to Nearest Neighbor - you can implement any interpolation algorithm you want inside. The simplest thing to do is to truncate the coordinates to int, which will cause pixels to be replicated when the factor is greater than 1, or skipped when the factor is less than 1.
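For concreteness, here is a minimal sketch of the corrected loop wrapped in a function, assuming the same ActiveCode image module as in the question and truncating the coordinates to int (the names are illustrative, not the tutorial's reference solution):
import image

def resize(img, factor):
    # Nearest-neighbour resize: each output pixel samples the source pixel
    # whose coordinates are the output coordinates divided by the factor.
    newW = int(img.getWidth() * factor)
    newH = int(img.getHeight() * factor)
    newImage = image.EmptyImage(newW, newH)
    for col in range(newW):
        for row in range(newH):
            p = img.getPixel(int(col / factor), int(row / factor))
            newImage.setPixel(col, row, p)
    return newImage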
Mark has the correct approach. To get a smoother result, you replace:
p = img.getPixel(col/factor,row/factor)
with a function that takes floating-point coordinates and returns a pixel interpolated from several neighboring points in the source image. For linear (bilinear) interpolation it takes the four nearest neighbors; for higher-order interpolation it takes a larger number of surrounding pixels.
For example, if col/factor = 3.75 and row/factor = 1.9, a linear interpolation would take the source pixels at (3,1), (3,2), (4,1), and (4,2) and give a result between those 4 rgb values, weighted most heavily to the pixel at (4,2).
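A minimal sketch of such a bilinear sampling helper, assuming getPixel returns an (r, g, b) tuple (with the ActiveCode image module you would read the channels from the pixel object instead); the function name is illustrative:
def bilinear_pixel(img, x, y):
    # Integer corners surrounding (x, y), clamped to the image bounds.
    W, H = img.getWidth(), img.getHeight()
    x0 = min(int(x), W - 1); x1 = min(x0 + 1, W - 1)
    y0 = min(int(y), H - 1); y1 = min(y0 + 1, H - 1)
    fx, fy = x - x0, y - y0
    p00, p10 = img.getPixel(x0, y0), img.getPixel(x1, y0)
    p01, p11 = img.getPixel(x0, y1), img.getPixel(x1, y1)
    # Weight the four neighbours by how close (x, y) is to each of them.
    return tuple(int(round(p00[c] * (1 - fx) * (1 - fy)
                           + p10[c] * fx * (1 - fy)
                           + p01[c] * (1 - fx) * fy
                           + p11[c] * fx * fy))
                 for c in range(3))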
You can do that using the Python Imaging Library.
Image.resize() should do what you want.
See http://effbot.org/imagingbook/image.htm
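For example, a quick sketch with PIL/Pillow, using the same file name as in the question:
from PIL import Image

def scale(path, factor):
    img = Image.open(path)
    w, h = img.size
    return img.resize((int(w * factor), int(h * factor)))

scaled = scale("cy.png", 2)
scaled.show()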
EDIT
Since you want to program this yourself without using a module, I have added an extra solution.
You will have to use the following algorithm.
load your image
extract its size
calculate the desired size (height * factor, width * factor)
create a new EmptyImage with the desired size
Use a nested loop to go through the pixels (row by column) in your image.
Then, for shrinking, remove some pixels every once in a while, or, for enlarging, duplicate some pixels in your image.
If you want to get fancy, you could smooth the added or removed pixels by averaging the RGB values with their neighbours.
I have a 2D Gaussian whose center has been destroyed by pixel saturation. I need the center to be filled in, because a poorly filled-in center will confuse a neural network I'm trying to train. See below:
The scattered NaN values I can handle fairly easily, but the large cluster at the Gaussian's center I cannot.
I've tried various methods to correct this, but none seem to work in the sense that the gaussian is filled in correctly.
Here are some other similar answers that I've tried:
Python Image Processing - How to remove certain contour and blend the value with surrounding pixels?
https://docs.astropy.org/en/stable/convolution/index.html
These work well for the small discrete NaNs floating around the image, but they don't adequately address the center cluster.
This is what I get with convolution infilling:
I've taken slices of the centers as well.
I do actually have a reference image that does not have NaNs. However, the scaling of the pixel values is not constant, so I've made a function that takes into account the different scaling of each pixel.
def mult_mean_surround(s_arr, c_arr, coord):
    directions = np.array([[1,0],[-1,0],[0,1],[0,-1],[1,1],[1,-1],[-1,-1],[-1,1]])
    s = np.array([])
    for i in directions:
        try:
            if not np.isnan(s_arr[coord[0]+i[0], coord[1]+i[1]]):
                s = np.append(s, s_arr[coord[0]+i[0], coord[1]+i[1]] / c_arr[coord[0]+i[0], coord[1]+i[1]])
        except IndexError:
            pass
    if len(s) != 0:
        s_arr[coord[0], coord[1]] = c_arr[coord[0], coord[1]] * np.mean(s)
It takes the corresponding pixel value from the reference image and scales it by the mean ratio of the surrounding pixels.
Ideally, it would look something like this:
The center is brighter than the rim and it looks more like a Gaussian. However, this method is also substantially slower than the rest, so I'm not sure how to get around either of my issues. I've tried boosting the speed with CuPy with no luck, as shown here: Boosting algorithm with cupy
If anyone has any helpful ideas, that would be great.
I am assuming that you are filling the 'hole' with only one Gaussian.
First make a mask of all the NaNs, i.e. NaN = 1, not NaN = 0.
You can do a neighbor-count check to remove all mask pixels with no neighbors, then use a clustering algorithm (like DBSCAN) to find the largest cluster of pixels.
Calculate the centroid, width (max x - min x), and height (max y - min y) of the resulting cluster.
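A minimal sketch of that masking and clustering step, assuming scikit-learn's DBSCAN is available (the eps/min_samples values and the helper name are illustrative):
import numpy as np
from sklearn.cluster import DBSCAN

def largest_nan_cluster(img):
    # Coordinates (row, col) of every NaN pixel, i.e. the mask.
    coords = np.argwhere(np.isnan(img))
    # eps=1.5 links 8-connected neighbours; isolated NaNs become noise (-1).
    labels = DBSCAN(eps=1.5, min_samples=2).fit(coords).labels_
    counts = np.bincount(labels[labels >= 0])
    cluster = coords[labels == np.argmax(counts)]
    centroid_y, centroid_x = cluster.mean(axis=0)
    height = cluster[:, 0].max() - cluster[:, 0].min()
    width = cluster[:, 1].max() - cluster[:, 1].min()
    return centroid_x, centroid_y, width, height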
You can then use the following code:
import math

def gaussian_fit(query_x, query_y,
                 centroid_x, centroid_y,
                 filter_w, filter_h,
                 sigma_at_edge=1.0):
    x_coord = (query_x - centroid_x) * 2 / (filter_w * sigma_at_edge)
    y_coord = (query_y - centroid_y) * 2 / (filter_h * sigma_at_edge)
    return math.exp(-1.0 * (x_coord ** 2 + y_coord ** 2))
You may need to rescale the result by some constant.
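A hedged usage sketch tying it together, with the largest_nan_cluster helper sketched above and a crude amplitude estimate (the brightest non-NaN pixel) standing in for that rescaling constant:
cx, cy, w, h = largest_nan_cluster(img)
amplitude = np.nanmax(img)  # rough rescaling constant; adjust as needed
for y, x in np.argwhere(np.isnan(img)):
    img[y, x] = amplitude * gaussian_fit(x, y, cx, cy, w, h)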
I have to build an algorithm that takes an RGB image and returns the image turned into a wood-like mosaic. For this, I was given some wood tablet samples, as seen in the image below:
I'd like to know how I can normalize the colors of each tablet, resulting in a single color, so I can build a map of reference colors to convert the input image colors to.
I've searched for how to achieve that, but I only found a Wikipedia article, and I couldn't understand much of it.
Thanks in advance for any help you might provide.
PS: I'm considering using Python to develop this, so if you come up with something in this language, I'd really appreciate it.
The way to get the average color is to simply take the average of the RGB values.
To get a more accurate average you should do this with linear color values. Usually RGB uses a gamma corrected value, but you can easily undo it then redo it once you have the average. Here's how you'd do it with Python's PIL using a gamma of 2.2:
def average_color(sample):
    pix = sample.load()
    totals = [0.0, 0.0, 0.0]
    for y in range(sample.size[1]):
        for x in range(sample.size[0]):
            color = pix[x, y]
            for c in range(3):
                # Undo the gamma so we average (approximately) linear values.
                totals[c] += color[c] ** 2.2
    count = sample.size[0] * sample.size[1]
    # Re-apply the gamma to the averaged channels.
    color = tuple(int(round((totals[c] / count) ** (1 / 2.2))) for c in range(3))
    return color
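A quick usage sketch, assuming each tablet has been cropped out into its own RGB image file (the file name is illustrative):
from PIL import Image

sample = Image.open("tablet_0.png").convert("RGB")
print(average_color(sample))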
For the sample in the upper left of your examples, the result is (144, 82, 66). Here's a visual of all of them:
To make one color represent a tile, a simple option would be to find the mean color of a random sample of pixels in a specific tile. You can choose an appropriate sample size as a trade-off between speed and accuracy.
For your specific use case, I'd recommend further division of the tiles, say into 3 columns (because of the top-to-bottom design of most wood panels). Find the mean color of each column and eliminate any column whose variance is beyond a certain threshold. This is to try to ensure that tiles such as the rightmost one in the 4th row don't get mapped to the darker shade.
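A minimal sketch of that idea with NumPy; the 3-column split and the variance threshold are assumptions to tune:
import numpy as np

def tile_color(tile, n_cols=3, max_std=30.0):
    # tile: H x W x 3 uint8 array holding a single wood tablet.
    h, w, _ = tile.shape
    columns = [tile[:, i * w // n_cols:(i + 1) * w // n_cols].reshape(-1, 3)
               for i in range(n_cols)]
    means = np.array([c.mean(axis=0) for c in columns])
    stds = np.array([c.std(axis=0).mean() for c in columns])
    # Discard columns whose colour varies too much (e.g. a dark streak).
    keep = stds < max_std
    chosen = means[keep] if keep.any() else means
    return tuple(chosen.mean(axis=0).astype(int))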
An alternative approach would be to convert both your input image and the wood tiles to grayscale and carry out your processing there. The OpenCV library has various simple functions for RGB-to-grayscale conversion.
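For instance, a one-line sketch with OpenCV (note that cv2.imread loads images in BGR order):
import cv2

img = cv2.imread("input.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)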
One trivial way to normalize the colors is to simply force the mean and standard deviation of RGB values in all images to be the same.
Here is an example with the two panels at the top of the left column in the example image. I'm using MATLAB with DIPimage 3.0, because that is what I know, but this is trivial enough to implement in Python with NumPy, or any other desired language/library:
img = readim('https://i.stack.imgur.com/HK6VY.png')
tab1 = dipcrop; % Interactive cropping of a tile from the displayed image
tab2 = dipcrop;
m1 = mean(tab1);
s1 = std(tab1);
m2 = mean(tab2);
s2 = std(tab2);
tab2b = (tab2 - m2) ./ s2 .* s1 + m1;
What the code does to the image tab2 is, on a per-channel basis, to subtract the mean and divide by the standard deviation. Next, it multiplies each channel by the standard deviation of the corresponding channel of the template image, and adds the mean of that channel.
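An equivalent NumPy sketch of that per-channel normalization, assuming the two tiles have already been cropped into float arrays of shape (H, W, 3):
import numpy as np

def match_mean_std(tab2, tab1):
    # Per-channel mean and standard deviation over all pixels.
    m1, s1 = tab1.mean(axis=(0, 1)), tab1.std(axis=(0, 1))
    m2, s2 = tab2.mean(axis=(0, 1)), tab2.std(axis=(0, 1))
    # Standardize tab2, then impose tab1's statistics.
    return (tab2 - m2) / s2 * s1 + m1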
From the Tutorial: https://programtalk.com/vs2/?source=python/8176/opencv-python-blueprints/chapter4/scene3D.py
I don't understand why they first undistort the images
# undistort the images
self.img1 = cv2.undistort(self.img1, self.K, self.d)
self.img2 = cv2.undistort(self.img2, self.K, self.d)
and then compute the Essential Matrix:
def _find_fundamental_matrix(self):
    self.F, self.Fmask = cv2.findFundamentalMat(self.match_pts1,
                                                self.match_pts2,
                                                cv2.FM_RANSAC, 0.1, 0.99)

def _find_essential_matrix(self):
    self.E = self.K.T.dot(self.F).dot(self.K)
and also Normalize the coordinates:
first_inliers = []
second_inliers = []
for i in range(len(self.Fmask)):
    if self.Fmask[i]:
        # normalize and homogenize the image coordinates
        first_inliers.append(self.K_inv.dot([self.match_pts1[i][0],
                                             self.match_pts1[i][1], 1.0]))
        second_inliers.append(self.K_inv.dot([self.match_pts2[i][0],
                                              self.match_pts2[i][1], 1.0]))
Shouldn't it be one or the other? Or do I have some misunderstanding here?
Can someone please help me with that?
The first step, undistort, does a number of things to reverse the typical warping caused by small camera lenses. See the Wikipedia article on distortion (optics) for more background.
The last step, homogenizing the coordinates, is a completely different thing. The Wikipedia article on homogeneous coordinates explains it, but the basic idea is that you add an extra fake axis that lets you do all affine and projective transformations with chained simple matrix multiplications and then just project back to 3D at the end. Normalizing is just a step you do to make that math easier: basically, you want your extra coordinate to start off as 1.0 (multiply by the inverse of the projective norm).
The requirement for normalization is explained on page 107 of Multiple View Geometry (Hartley and Zisserman). The normalization is required in addition to the undistortion.
If you are using raw pixel values in homogeneous coordinates, the Z-coordinate, which is 1, will be small compared to the X and Y coordinates, e.g. (X=320, Y=220, Z=1).
But if the homogenized coordinates are the image pixel positions normalized to a standard range, i.e. -1.0 to 1.0, then all the coordinate values are roughly in the same range, e.g. (0.75, -0.89, 1.0).
If the image coordinates have dramatically different ranges (as in the unnormalized case), the DLT matrix produced will have a bad condition number, and consequently small variations in the input image pixel positions could produce wide variations in the result.
Please see page 107 for a very good explanation.
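For illustration, here is a minimal sketch of the classic normalization recipe from that book (translate the points to their centroid and scale them so the mean distance from the origin is sqrt(2)); this is not the tutorial's code, just the standard construction:
import numpy as np

def normalize_points(pts):
    # pts: N x 2 array of pixel coordinates.
    centroid = pts.mean(axis=0)
    mean_dist = np.linalg.norm(pts - centroid, axis=1).mean()
    scale = np.sqrt(2) / mean_dist
    # Similarity transform that centres and scales the points.
    T = np.array([[scale, 0.0, -scale * centroid[0]],
                  [0.0, scale, -scale * centroid[1]],
                  [0.0, 0.0, 1.0]])
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return (T @ pts_h.T).T, T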
I am perplexed by the API to scipy.ndimage.interpolation.affine_transform. And judging by this issue I'm not the only one. I'm actually wanting to do more interesting things with affine_transform than just rotating an image, but a rotation would do for starters. (And yes I'm well aware of scipy.ndimage.interpolation.rotate, but figuring out how to drive affine_transform is what interests me here).
When I want to do this sort of thing in systems like OpenGL, I think in terms of computing the transform which applies a 2x2 rotation matrix R about a centre c, and therefore think of points p being transformed as (p-c)R+c = pR+c-cR, which gives a c-cR term to be used as the translation component of the transform. However, according to the issue above, scipy's affine_transform does "offset first", so we actually need to compute an offset s such that (p-c)R+c=(p+s)R, which with a bit of rearrangement gives s=(c-cR)R' where R' is the inverse of R.
If I plug this into an ipython notebook (pylab mode; code below maybe needs some additional imports):
img=scipy.misc.lena()
#imshow(img,cmap=cm.gray);show()
centre=0.5*array(img.shape)
a=15.0*pi/180.0
rot=array([[cos(a),sin(a)],[-sin(a),cos(a)]])
offset=(centre-centre.dot(rot)).dot(linalg.inv(rot))
rotimg=scipy.ndimage.interpolation.affine_transform(
    img,rot,order=2,offset=offset,cval=0.0,output=float32
)
imshow(rotimg,cmap=cm.gray);show()
I get
which unfortunately isn't rotated about the centre.
So what's the trick I'm missing here?
Once treddy's answer got me a working baseline, I managed to get a better working model of affine_transform. It's not actually as odd as the issue linked in the original question hints.
Basically, each point (coordinate) p in the output image is transformed to pT+s where T and s are the matrix and offset passed to the function.
So if we want point c_out in the output to be mapped to and sampled from c_in in the input image, with rotation R and (possibly anisotropic) scaling S, we need pT+s = (p-c_out)RS+c_in, which can be rearranged to yield s = c_in - c_out T (with T=RS).
For some reason I then need to pass transform.T to affine_transform but I'm not going to worry about that too much; probably something to do with row-coordinates with transforms on the right (assumed above) vs column-coordinates with transforms on the left.
So here's a simple test rotating a centred image:
src=scipy.misc.lena()
c_in=0.5*array(src.shape)
c_out=array((256.0,256.0))
for i in xrange(0,7):
    a=i*15.0*pi/180.0
    transform=array([[cos(a),-sin(a)],[sin(a),cos(a)]])
    offset=c_in-c_out.dot(transform)
    dst=scipy.ndimage.interpolation.affine_transform(
        src,transform.T,order=2,offset=offset,output_shape=(512,512),cval=0.0,output=float32
    )
    subplot(1,7,i+1);axis('off');imshow(dst,cmap=cm.gray)
show()
Here it is modified for different image sizes:
src=scipy.misc.lena()[::2,::2]
c_in=0.5*array(src.shape)
c_out=array((256.0,256.0))
for i in xrange(0,7):
    a=i*15.0*pi/180.0
    transform=array([[cos(a),-sin(a)],[sin(a),cos(a)]])
    offset=c_in-c_out.dot(transform)
    dst=scipy.ndimage.interpolation.affine_transform(
        src,transform.T,order=2,offset=offset,output_shape=(512,512),cval=0.0,output=float32
    )
    subplot(1,7,i+1);axis('off');imshow(dst,cmap=cm.gray)
show()
And here's a version with anisotropic scaling to compensate for the anisotropic resolution of the source image.
src=scipy.misc.lena()[::2,::4]
c_in=0.5*array(src.shape)
c_out=array((256.0,256.0))
for i in xrange(0,7):
    a=i*15.0*pi/180.0
    transform=array([[cos(a),-sin(a)],[sin(a),cos(a)]]).dot(diag([0.5,0.25]))
    offset=c_in-c_out.dot(transform)
    dst=scipy.ndimage.interpolation.affine_transform(
        src,transform.T,order=2,offset=offset,output_shape=(512,512),cval=0.0,output=float32
    )
    subplot(1,7,i+1);axis('off');imshow(dst,cmap=cm.gray)
show()
Based on the insight from @timday that matrix and offset are defined in the output coordinate system, I would offer the following reading of the issue, which fits with standard notation in linear algebra and makes it possible to understand the scaling of images as well. I use T.inv=T^-1 as pseudo-Python notation for the inverse of a matrix and * for the dot product.
For each point o in the output image, affine_transform finds the corresponding point i in the input image as i=T.inv*o+s, where matrix=T.inv is the inverse of the 2x2 transformation matrix that one would use to define the forward affine transformation, and offset=s is the translation defined in the output coordinates. For a pure rotation T=R=[[cos,-sin],[sin,cos]], and in this special case matrix=T.inv=T.T, which is the reason why @timday still had to apply the transposition (alternatively one could just use the negative angle).
The value for the offset s is found exactly the way described by @timday: if c_in is supposed to be positioned, after the affine transformation, at c_out (e.g. the input centre should be placed at the output centre), then c_in=T.inv*c_out+s, or s=c_in-T.inv*c_out (note the conventional mathematical order of the matrix product used here, matrix*vector, which is why @timday, who used the reverse order, didn't need a transposition at this point in his code).
If one wants a scaling S first and then a rotation R, it holds that T=R*S and therefore T.inv=S.inv*R.inv (note the reversed order). For example, if one wants to make the image twice as wide in the column direction ('x'), then S=diag((1, 2)), hence S.inv=diag((1, 0.5)).
src = scipy.misc.lena()
c_in = 0.5 * array(src.shape)
dest_shape = (512, 1028)
c_out = 0.5 * array(dest_shape)
for i in xrange(0, 7):
    a = i * 15.0 * pi / 180.0
    rot = array([[cos(a), -sin(a)], [sin(a), cos(a)]])
    invRot = rot.T
    invScale = diag((1.0, 0.5))
    invTransform = dot(invScale, invRot)
    offset = c_in - dot(invTransform, c_out)
    dest = scipy.ndimage.interpolation.affine_transform(
        src, invTransform, order=2, offset=offset, output_shape=dest_shape, cval=0.0, output=float32
    )
    subplot(1, 7, i + 1); axis('off'); imshow(dest, cmap=cm.gray)
show()
If the image is to be first rotated, then stretched, the order of the dot product needs to be reversed:
invTransform = dot(invRot, invScale)
Just doing some quick & dirty testing I noticed that taking the negative value of your offset seems to rotate about the centre.
I have code which creates a square image with dimensions 4x4 arcsec, running from -2 arcsec to +2 arcsec, on an 80x80 grid. To this I want to add another image.
This second image is created through an FFT of an 80x80 grid and thus starts out in Fourier space. After the FFT, I want the image to have exactly the same dimensions in real space as the first image.
Because Fourier space represents the scales, and the wavenumber is defined as k = 2pi/x (although in this case numpy.fft uses the definition where I think k = 1/x), I thought the largest scale would have to have the smallest k-value and the smallest scale the largest k-value.
So if x_max = 2 (the dimensions in the x-direction of the first image) and dim_x = 80 (the number of columns in the grid):
k_x,max = 1/(2*x_max/dim_x)
k_x,min = 1/(2*x_max)
and let the grid in Fourier-space run from k_x,min to k_x,max (same for the y-direction)
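For reference, a minimal sketch of how numpy itself lays out the frequency grid for this sampling (np.fft.fftfreq returns frequencies in cycles per unit length, so multiply by 2*pi if you want angular wavenumbers):
import numpy as np

dim_x = 80
x = np.linspace(-2.0, 2.0, dim_x)   # the 4x4 arcsec grid, -2 to +2
dx = x[1] - x[0]                    # sample spacing
k_x = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(dim_x, d=dx))
print(k_x.min(), k_x.max())         # smallest (most negative) and largest wavenumbers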
I hope I explained this clearly enough, but I haven't been able to find any confirmation or explanation for this in the literature about FFTs, and I would really like to know if this is correct.
Thanks in advance
This is not correct. For even N, the k-space values will range from -N/2*omega_0 up to (N/2 - 1)*omega_0, where omega_0 is set by the total sample length, omega_0 = 2*pi/(max(x)-min(x)), and N is the number of samples. So for your case you get something along the lines of this:
import numpy as np

N = len(x)
L = x[-1] - x[0]                          # total sample length
omega_0 = 2 * np.pi / L                   # spacing between k-values
k = np.linspace(-N/2, N/2 - 1, N) * omega_0