I have code that creates a square image with dimensions 4x4 arcsec, running from -2 arcsec to +2 arcsec, on an 80x80 grid. To this I want to add another image.
This second image is created through an FFT of an 80x80 grid and thus starts out in Fourier space. After the FFT, I want the image to have exactly the same dimensions in real space as the first image.
Because Fourier space represents scales, and the wavenumber is defined as k = 2*pi/x (although numpy.fft, I think, uses the convention where k = 1/x), I figured the largest scale should correspond to the smallest k-value and the smallest scale to the largest k-value.
So if x_max = 2 (the maximum x-coordinate of the first image) and dim_x = 80 (the number of columns in the grid):
k_x,max = 1/(2*x_max/dim_x)
k_x,min = 1/(2*x_max)
and I would let the grid in Fourier space run from k_x,min to k_x,max (and similarly for the y-direction).
I hope I explained this clearly enough, but I haven't been able to find any confirmation or explanation for this in the literature about FFTs, and I would really like to know whether this is correct.
Thanks in advance
This is not correct. The k-space values will range from -N/2*omega_0 to (N-1)/2*omega_0, where omega_0 = 2*pi/(max(x)-min(x)) is the fundamental frequency set by the total sample length and N is the number of samples. So for your case you get something along the lines of this:
import numpy as np

N = len(x)
dx = x[-1] - x[0]                                 # total sample length
k = np.linspace(-N*np.pi/dx, (N-1)*np.pi/dx, N)   # from -N/2*omega_0 to (N-1)/2*omega_0
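If it helps, numpy can also build this grid for you; a minimal sketch for the 80x80 case from the question (numpy.fft.fftfreq works in the k = 1/x convention, so multiply by 2*pi if you want angular wavenumbers):

import numpy as np

dim_x = 80                               # grid size from the question
x = np.linspace(-2.0, 2.0, dim_x)        # real-space axis in arcsec
d = x[1] - x[0]                          # sample spacing

# Frequencies in cycles per arcsec (numpy's k = 1/x convention), reordered
# with fftshift so the most negative frequency comes first.
k_cyc = np.fft.fftshift(np.fft.fftfreq(dim_x, d=d))

# Angular wavenumbers, if you prefer the k = 2*pi/x convention.
k_ang = 2 * np.pi * k_cyc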
I'm sorry for this easy question. I have a little knot in my head.
I have a 2D array in Python, and I would like to divide this array into squares of size n and compute the mean for each square.
My messy pseudo-code looks more or less like this so far:
def mean(pic, n):
    """
    n: size of the square
    """
    npixels_r = pic.height // n
    npixels_c = pic.width // n
    new_pic = picture(npixels_c, npixels_r)
    # fill the new image
    # define the indexes for each quadrant
    for s in range(0, w, n):
        for z in range(0, h, n):
            vals = []
            # for each pixel in the quadrant
            for i in range(s, s+n):
                for j in range(z, z*n):
                    # get color at each pixel
                    val = pic[i][j]
                    vals.append(val)
            m = mean(vals)
            new_pic.setValue(m)
But it's not working. In the first nested for-loop I wanted to iterate over the squares, and in the second nested for-loop over each pixel of the old image, and then compute the mean.
This is apparently not a good idea, but I can't think of any other solution at the moment :/
You're gonna kick yourself: change for j in range(z, z*n) to for j in range(z, z+n)
I would use an image kernel to solve this one though. You would create an nxn kernel with each pixel value 1/n^2 and apply it to the image. Your proposed solution is essentially the same, but less generalizable (what if you want to do edge finding instead of averaging?)
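For reference, here is a minimal sketch of both routes with NumPy/SciPy; the function names are just for illustration, and the block version simply crops any leftover rows and columns that don't fill a whole square:

import numpy as np
from scipy.ndimage import uniform_filter

def block_mean(pic, n):
    """Mean of each non-overlapping n x n block of a 2D array."""
    h, w = pic.shape
    cropped = pic[:h - h % n, :w - w % n]
    return cropped.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

def kernel_mean(pic, n):
    """Sliding n x n average, i.e. convolution with a kernel of 1/n^2 values."""
    return uniform_filter(pic.astype(float), size=n)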
I would like to compute the average luminescence value vs distance to the center of an image. The approach I am thinking about is to
compute the distance between pixels in image and image center
group pixels with same distance
compute the average value of pixels for each group
plot graph of distance vs average intensity
To compute the first step I use this function:
dist_img = np.zeros(gray.shape, dtype=np.uint8)
for y in range(0, h):
    for x in range(0, w):
        cy = gray.shape[0]/2
        cx = gray.shape[1]/2
        dist = math.sqrt(((x-cx)**2)+((y-cy)**2))
        dist_img[y,x] = dist
Unfortunately it gives a different result from the one I compute with this:
distance = math.sqrt(((1 - gray.shape[0]/2)**2 )+((1 - gray.shape[1]/2 )**2))
When I test it for pixel (1,1) I receive 20 from the first code and 3605 from the second.
I would appreciate suggestions on how to correct the loop and hints on how to proceed with the other points. Or maybe there is another way to achieve what I would like to do.
You are setting up dist_img with an np.uint8 dtype. This 8-bit unsigned integer can only hold values between 0 and 255, so 3605 cannot be represented properly. Use a higher bit depth for your distance image dtype, such as np.uint32.
distance = math.sqrt(((1 - gray.shape[0]/2)**2 )+((1 - gray.shape[1]/2 )**2))
Careful: gray.shape will give you (height, width), i.e. (y, x). The other code correctly assigns gray.shape[0]/2 to the y center; this one mixes it up and uses the height for the x coordinate.
Your algorithm seems good enough; I would suggest you stick with it. You could achieve something similar to the first two steps by converting the image to polar space (e.g. with OpenCV's linearPolar), but that may be harder to debug.
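If it helps, steps 2 to 4 can also be done without explicit loops once the distance image is computed; a minimal NumPy sketch, assuming gray is a 2D grayscale array and distances are binned to the nearest whole pixel:

import numpy as np
import matplotlib.pyplot as plt

h, w = gray.shape
cy, cx = h / 2, w / 2

# distance of every pixel to the image centre (kept as float, so nothing overflows)
y, x = np.indices((h, w))
dist = np.sqrt((x - cx) ** 2 + (y - cy) ** 2)

# group pixels by integer distance and average their intensities
r = dist.astype(int).ravel()
radial_mean = np.bincount(r, weights=gray.ravel().astype(float)) / np.bincount(r)

plt.plot(radial_mean)
plt.xlabel("distance to centre [px]")
plt.ylabel("mean intensity")
plt.show()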
I was wondering, how would you, mathematically speaking, generate x points at random positions on a 3D surface, knowing the number of triangle polygons composing the surface (their dimensions, positions, normals, etc.)? In how many steps would you proceed?
I'm trying to create a "scatterer" in Maya (with Python and API), but I don't even know where to start in terms of concept. Should I generate the points first, and then check if they belong to the surface? Should I create the points directly on the surface (and how, in this case)?
Edit: I want to achieve this without using 2D projection or UVs, as far as possible.
You should compute the area of each triangle, and use those as weights to determine the destination of each random point. It is probably easiest to do this as a batch operation:
import random

def sample_areas(triangles, samples):
    # compute and sum triangle areas
    totalA = 0.0
    areas = []
    for t in triangles:
        a = t.area()
        areas.append(a)
        totalA += a
    # compute and sort random numbers from [0,1)
    rands = sorted([random.random() for x in range(samples)])
    # sample based on area
    area_limit = 0.0
    rand_index = 0
    rand_value = rands[rand_index]
    for i in range(len(areas)):
        area_limit += areas[i]
        while rand_value * totalA < area_limit:
            # sample randomly over current triangle
            triangles[i].add_random_sample()
            # advance to next sorted random number
            rand_index += 1
            if rand_index >= samples:
                return
            rand_value = rands[rand_index]
Note that ridged or wrinkled regions may appear to have higher point density, simply because they have more surface area in a smaller space.
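The add_random_sample call above is left abstract; a common way to draw a uniformly distributed point inside a single triangle is the square-root barycentric trick. A minimal sketch, assuming the triangle is given by its three corner points:

import random

def random_point_in_triangle(p0, p1, p2):
    """Uniform random point in the triangle (p0, p1, p2); points are (x, y, z) tuples."""
    r1, r2 = random.random(), random.random()
    s = r1 ** 0.5                                # sqrt warps r1 so area is covered uniformly
    u, v, w = 1.0 - s, s * (1.0 - r2), s * r2    # barycentric weights, sum to 1
    return tuple(u * a + v * b + w * c for a, b, c in zip(p0, p1, p2))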
If the constraint is that all of the output points be on the surface, you want a consistent method of addressing the surface itself rather than worrying about converting arbitrary 3D points onto the surface.
The hacktastic way to do that would be to create a UV map for your 3D object, and then scatter points randomly in 2 dimensions (throwing away points which happen not to land inside a valid UV shell). Once your UV shells are filled up as much as you'd like, you can convert your UV points to barycentric coordinates to turn those 2D points back into 3D points: effectively you say "I am 30% vertex A, 30% vertex B, and 40% vertex C, so my position is .3A + .3B + .4C".
Besides simplicity, another advantage of using a UV map is that it allows you to customize the density and relative importance of different parts of the mesh: a larger UV face will get a lot of scattered points, a smaller one fewer, even if that doesn't match the physical size of the faces.
Going to 2D will introduce some artifacts because you probably will not be able to come up with a UV map that is both stretch-free and seam-free, so you'll get variations in the density of your scatter because of that. However for many applications this will be fine, since the algorithm is really simple and the results easy to hand tune.
I have not used this one but this looks like it's based on this general approach: http://www.shanemarks.co.za/uncategorized/uv-scatter-script/
If you need a more mathematically rigorous method, you'd need a fancier method of mesh parameterization: a way to turn your 3D collection of triangles into a consistent space. There is a lot of interesting work in that field, but it would be hard to pick a particular path without knowing the application.
Pick 2 random edges from a random triangle.
Create 2 random points on those edges.
Create a new random point between them.
My ugly mel script:
//Select poly and target object
{
$sel = `ls -sl -fl`; select $sel[0];
polyTriangulate -ch 0;
$poly_s = `polyListComponentConversion -toFace`;$poly_s = `ls -fl $poly_s`;//poly flat list
int $numPoly[] = `polyEvaluate -fc`;//max random from number of poly
int $Rand = rand($numPoly[0]);//random number
$vtx_s =`polyListComponentConversion -tv $poly_s[$Rand]`;$vtx_s=`ls -fl $vtx_s`;//3 vertices from random poly flat list
undo; //for polyTriangulate
vector $A = `pointPosition $vtx_s[0]`;
vector $B = `pointPosition $vtx_s[1]`;
vector $C = `pointPosition $vtx_s[2]`;
vector $AB = $B-$A; $AB = $AB/mag($AB); //direction vector and normalize
vector $AC = $A-$C; $AC = $AC/mag($AC); //direction vector and normalize
$R_AB = mag($B-$A) - rand(mag($B-$A)); vector $AB = $A + ($R_AB * $AB);//new position
$R_AC = mag($A-$C) - rand(mag($A-$C)); vector $AC = $C + ($R_AC * $AC);//new position
vector $ABC = $AB-$AC; $ABC = $ABC/mag($ABC); //direction vector and normalize
$R_ABC = mag($AB-$AC) - rand(mag($AB-$AC)); //random
vector $ABC = $AC + ($R_ABC * $ABC);
float $newP2[] = {$ABC.x,$ABC.y,$ABC.z};//back to float
move $newP2[0] $newP2[1] $newP2[2] $sel[1];
select -add $sel[1];
}
PS: the UV method is better.
Here is pseudo code that might be a good starting point:
Let N = the number of vertices of the 3D face you are working with.
Generate N random numbers, compute their sum, and divide each one by the sum. Now you have N random numbers whose sum is 1.0.
Using these random numbers as weights, take a linear combination of the 3D vertices of the face you are interested in. This gives you a random 3D point on the face.
Repeat until you have a sufficient number of random points on the face.
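A minimal NumPy sketch of this recipe, assuming the face's vertices are given as an (N, 3) array (note that for a triangle this weighting is not exactly uniform over area, unlike the square-root trick sketched earlier, but every point does land on the face):

import numpy as np

def random_points_on_face(vertices, count=1):
    """vertices: (N, 3) array of the face's corner points; returns (count, 3) points."""
    n = len(vertices)
    weights = np.random.random((count, n))
    weights /= weights.sum(axis=1, keepdims=True)   # N weights per point, summing to 1
    return weights @ vertices                       # convex combination of the vertices

# example: 10 points on a triangle
tri = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pts = random_points_on_face(tri, 10)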
For a class, I've written a Laplacian of Gaussian edge detector that works in the following way.
Make a Laplacian of Gaussian mask given the variance of the Gaussian and the size of the mask
Convolve it with the image
Find the zero crossings in a really shoddy manner; these are the edges of the image
If you so desire, the code for this program can be viewed here, but the most important part is where I create my Gaussian mask which depends on two functions that I've reproduced here for your convenience:
# Function for calculating the laplacian of the gaussian at a given point and with a given variance
def l_o_g(x, y, sigma):
    # Formatted this way for readability
    nom = ( (y**2)+(x**2)-2*(sigma**2) )
    denom = ( (2*math.pi*(sigma**6) ))
    expo = math.exp( -((x**2)+(y**2))/(2*(sigma**2)) )
    return nom*expo/denom
# Create the laplacian of the gaussian, given a sigma
# Note the recommended size is 7 according to this website http://homepages.inf.ed.ac.uk/rbf/HIPR2/log.htm
# Experimentally, I've found 6 to be much more reliable for images with clear edges and 4 to be better for images with a lot of little edges
def create_log(sigma, size = 7):
    w = math.ceil(float(size)*float(sigma))
    # If the dimension is an even number, make it uneven
    if(w%2 == 0):
        print "even number detected, incrementing"
        w = w + 1
    # Now make the mask
    l_o_g_mask = []
    w_range = int(math.floor(w/2))
    print "Going from " + str(-w_range) + " to " + str(w_range)
    for i in range_inc(-w_range, w_range):
        for j in range_inc(-w_range, w_range):
            l_o_g_mask.append(l_o_g(i,j,sigma))
    l_o_g_mask = np.array(l_o_g_mask)
    l_o_g_mask = l_o_g_mask.reshape(w,w)
    return l_o_g_mask
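For reference, the nested loops above can be vectorized; a minimal sketch of an equivalent mask builder using NumPy broadcasting, keeping the same size-to-width rule (the name create_log_vectorized is just for illustration):

import math
import numpy as np

def create_log_vectorized(sigma, size=7):
    w = int(math.ceil(float(size) * float(sigma)))
    if w % 2 == 0:                      # keep the mask width odd
        w += 1
    r = w // 2
    coords = np.arange(-r, r + 1)
    xx, yy = np.meshgrid(coords, coords)             # coordinate grids
    rr2 = xx**2 + yy**2
    # same formula as l_o_g above, evaluated on the whole grid at once
    return (rr2 - 2*sigma**2) / (2*math.pi*sigma**6) * np.exp(-rr2 / (2.0*sigma**2))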
All in all, it works relatively well, even if it is extremely slow because I don't know how to leverage NumPy. However, whenever I change the size of the Gaussian mask, the thickness of the edges I detect changes drastically.
Here is the image run with a mask size equal to 4 times the given variance of the Gaussian:
Here is the same image run with a mask size equal to 6 times the variance:
I'm kind of baffled, because the only thing the size parameter should change is the accuracy of the approximation of the Laplacian of Gaussian mask before I convolve it with the image. So I ran a test where I wanted to visualize how my mask looks for different size parameters.
Here it is with a size of 4:
Here it is with a size of 6:
The shape of the function seems to be the same as far as I can tell from the zero crossings (they happen to be spaced around four pixels apart) and their peaks. Is there a better way to check?
Any suggestions as to why this issue might be occurring or how to investigate further are appreciated.
It turns out your concept of the effect of increasing the mask size is wrong. Increasing the size doesn't actually improve the quality of the approximation or the resolution of the function. To explain, instead of using a complicated 2D function like the Laplacian of the Gaussian, let's take things back down to one dimension and pretend we are approximating the function f(x) = x^2.
Now your code for calculating the function would look like this:
def derp(sigma, size):
    w = math.ceil(float(size)*float(sigma))
    # If the dimension is an even number, make it uneven
    if(w%2 == 0):
        print "even number detected, incrementing"
        w = w + 1
    # Now make the mask
    x_mask = []
    w_range = int(math.floor(w/2))
    print "Going from " + str(-w_range) + " to " + str(w_range)
    for i in range_inc(-w_range, w_range):
        x_mask.append(i**2)   # f(x) = x^2 sampled at integer x
If you were to increase the "size" of this function, you wouldn't be increasing the resolution; you'd actually be increasing the range of x values that you're sampling. For example, for a size of 3 you're evaluating -1, 0, 1; for a size of 5 you're evaluating -2, -1, 0, 1, 2. Notice this doesn't change the spacing between the pixels. This is what you're actually seeing when you observe the zero crossings occurring the same number of pixels apart.
Consequently, when convolving with this really silly mask, you would get really different results. But what if we went back to the Laplacian of the Gaussian?
Well, the nice property of the Laplacian of the Gaussian is that the farther out you go, the closer its values get to zero. So unlike our silly x^2 function, you should be getting essentially the same results once the mask is large enough.
Now, I think the reason you didn't see this in your test cases is that they were too limited in size: your program is too slow for you to really see the difference between size=15 and size=20, but if you were to actually run those cases I think you would see that the image doesn't change that much.
This still doesn't answer what you should be doing; for that, we're going to have to look to the professionals, namely the implementation of gaussian_filter in SciPy (source here).
When you look at their source code, the first thing you'll notice is that when creating the mask they're basically doing the same thing as you: they always use an integer step size, and they scale the size of the mask by its standard deviation.
As to why they do it that way, I can't answer, since I don't have that much in-depth knowledge of image processing or SciPy. However, this may make for a good new question to ask on SO.
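For reference, SciPy also ships a ready-made Laplacian-of-Gaussian filter that sidesteps building the mask by hand; a minimal usage sketch with a random array standing in for the image (the crude sign-change test at the end is just one way to mark zero crossings, not part of SciPy):

import numpy as np
from scipy import ndimage

img = np.random.rand(256, 256)            # stand-in for a grayscale image

# Laplacian of Gaussian; sigma controls the scale of the edges picked up.
log = ndimage.gaussian_laplace(img, sigma=2.0)

# Zero crossings of the LoG response mark candidate edges.
edges = np.zeros_like(log, dtype=bool)
edges[:-1, :] |= np.signbit(log[:-1, :]) != np.signbit(log[1:, :])
edges[:, :-1] |= np.signbit(log[:, :-1]) != np.signbit(log[:, 1:])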
So, I'm teaching myself Python with this tutorial and I'm stuck on exercise number 13, which says:
Write a function to uniformly shrink or enlarge an image. Your function should take an image along with a scaling factor. To shrink the image the scale factor should be between 0 and 1; to enlarge the image the scaling factor should be greater than 1.
This is not meant as a question about PIL, but to ask which algorithm to use so I can code it myself.
I've found some similar questions like this one, but I don't know how to translate them into Python.
Any help would be appreciated.
I've come to this:
import image

win = image.ImageWin()
img = image.Image("cy.png")
factor = 2
W = img.getWidth()
H = img.getHeight()
newW = int(W*factor)
newH = int(H*factor)
newImage = image.EmptyImage(newW, newH)
for col in range(newW):
    for row in range(newH):
        p = img.getPixel(col,row)
        newImage.setPixel(col*factor,row*factor,p)
newImage.draw(win)
win.exitonclick()
I should do this in a function, but that doesn't matter right now. The arguments for the function would be (image, factor). You can try it in ActiveCode in the tutorial linked above. It produces a stretched image with empty columns.
Your code as shown is simple and effective for what's known as a Nearest Neighbor resize, except for one little bug:
p = img.getPixel(col/factor,row/factor)
newImage.setPixel(col,row,p)
Edit: since you're sending a floating point coordinate into getPixel, you're not limited to Nearest Neighbor; you can implement any interpolation algorithm you want inside. The simplest thing to do is truncate the coordinates to int, which will cause pixels to be replicated when factor is greater than 1, or skipped when factor is less than 1.
Mark has the correct approach. To get a smoother result, you replace:
p = img.getPixel(col/factor,row/factor)
with a function that takes floating point coordinates and returns a pixel interpolated from several neighboring points in the source image. For linear interpolation it takes the four nearest neighbors; for higher-order interpolation it takes a larger number of surrounding pixels.
For example, if col/factor = 3.75 and row/factor = 1.9, a linear interpolation would take the source pixels at (3,1), (3,2), (4,1), and (4,2) and give a result between those 4 rgb values, weighted most heavily to the pixel at (4,2).
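A minimal sketch of that weighting, using a NumPy array as a stand-in for the image (the tutorial's image module returns pixel objects instead, so you would adapt the lookups accordingly; the name bilinear_sample is just for illustration):

import numpy as np

def bilinear_sample(img, x, y):
    """img: 2D (or HxWx3) array; x, y: float coordinates like col/factor, row/factor."""
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    fx, fy = x - x0, y - y0
    # weight the four nearest pixels by how close (x, y) is to each of them
    return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
            + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])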
You can do that using the Python Imaging Library.
Image.resize() should do what you want.
See http://effbot.org/imagingbook/image.htm
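A short usage sketch with Pillow, the maintained fork of PIL (the file name is taken from the question; the output name is just an example):

from PIL import Image

factor = 2
img = Image.open("cy.png")
w, h = img.size
resized = img.resize((int(w * factor), int(h * factor)))
resized.save("cy_resized.png")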
EDIT
Since you want to program this yourself without using a module, I have added an extra solution.
You will have to use the following algorithm:
load your image
extract its size
calculate the desired size (height * factor, width * factor)
create a new EmptyImage with the desired size
use a nested loop over the pixels (row by column) of your image
then (for shrinking) you remove some pixels every once in a while, or (for enlarging) you duplicate some pixels in your image
If you want to get fancy, you could smooth the added or removed pixels by averaging their rgb values with their neighbours.