I'm writing a simple 2D Brownian motion simulator in Python. It's obviously easy to draw values for x displacement and y displacement from a distribution, but I have to set it up so that the 2D displacement (i.e., the hypotenuse) is drawn from a distribution, and then translate this into new x and y coordinates. This is probably trivial and I'm just too far removed from trigonometry to remember how to do it correctly. Am I going to need to generate a value for the hypotenuse and then translate it into x and y displacements with sin and cos? (How do you do this correctly?)
This is best done by using polar coordinates (r, theta) for your distributions (where r is your "hypotenuse"), and then converting the result to (x, y) using x = r cos(theta) and y = r sin(theta). That is, select r from whatever distribution you like, then select a theta, usually from a flat 0 to 360 degree distribution, and convert these values to x and y.
Going the other way around (i.e., constructing correlated (x, y) distributions that give a direction-independent hypotenuse) would be very difficult.
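For example, a minimal sketch of that recipe (the exponential magnitude distribution is only a placeholder; substitute whatever step-length distribution your model calls for):
import math
import random

def random_step(scale=1.0):
    # Draw the step length r from a chosen distribution (exponential here,
    # purely for illustration), and the direction uniformly on [0, 2*pi).
    r = random.expovariate(1.0 / scale)
    theta = random.uniform(0.0, 2.0 * math.pi)
    return r * math.cos(theta), r * math.sin(theta)

x = y = 0.0
for _ in range(1000):  # accumulate 1000 Brownian steps
    dx, dy = random_step()
    x += dx
    y += dy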
If you have a hypotenuse in the form of a line segment, then you have two points. From two points P0 = (x0, y0) and P1 = (x1, y1), you can get the x and y displacements by subtracting x0 from x1 and y0 from y1.
If your hypotenuse is actually a vector in a polar coordinate plane, then yes, you'll have to take the sin of the angle and multiply it by the magnitude of the vector to get the y displacement and likewise with cos for the x displacement.
I've been trying to generate points on the surface of a sphere of radius "inner_radius", such that they're uniformly spread out. The algorithm works as expected for a radius of 1, but generates fewer points than expected for larger radii.
I have looked through similar questions on here, but they seem to be about generating points throughout the volume and not just on the surface of the sphere.
import numpy as np

PI = np.pi

def spherical_to_cartesian(pol_ang, azim_ang, radius):
    #Converts the given spherical coordinates (theta, phi and radius) to Cartesian coordinates.
    return np.array((radius * np.sin(pol_ang) * np.cos(azim_ang),
                     radius * np.sin(pol_ang) * np.sin(azim_ang),
                     radius * np.cos(pol_ang)))

def get_electron_coordinates_list(inner_radius, electron_count):
    #Algorithm mostly taken from https://www.cmu.edu/biolphys/deserno/pdf/sphere_equi.pdf . Explanations in code added by me.
    electron_coordinate_list = []
    inner_area = 4 * (PI * inner_radius**2)
    area_per_electron = inner_area / electron_count
    pseudo_length_per_electron = np.sqrt(area_per_electron)  #Side length of a square whose area is the area per electron on the sphere.
    #We need an angular spacing such that the spacing between electrons along latitudes and longitudes is equal.
    #As a first step, take a whole-number approximation of the ratio between PI and the pseudo_length.
    #This gives the number of possible latitudes.
    possible_count_of_lats = np.round(PI / pseudo_length_per_electron)
    approx_length_per_electron_lat = PI / possible_count_of_lats  #Spacing between electrons along a latitude
    approx_length_per_electron_long = area_per_electron / approx_length_per_electron_lat  #Spacing between electrons along a longitude
    for electron_num_lat in range(int(possible_count_of_lats.item())):  #int(...item()) because range() cannot take the numpy value directly.
        #The original algorithm recommends pol_ang = PI*(electron_num_lat+0.5)/possible_count_of_lats; the 0.5
        #appears to be added to get a larger number of coordinates. Not sure if removing it affects results;
        #it didn't do so drastically. This gets the polar angle as PI*(latitude number)/(total number of latitudes).
        pol_ang = PI * (electron_num_lat + 0.5) / possible_count_of_lats
        possible_count_of_longs = np.round(2 * PI * np.sin(pol_ang) / approx_length_per_electron_long)
        for electron_num_long in range(int(possible_count_of_longs.item())):
            azim_ang = (2 * PI) * electron_num_long / possible_count_of_longs  #Azimuthal angle as 2*PI*(longitude number)/(total number of longitudes)
            electron_coordinate = spherical_to_cartesian(pol_ang, azim_ang, inner_radius)  #Convert to Cartesian so Manim can easily handle them.
            electron_coordinate_list.append(electron_coordinate)
            print("Got coordinates: ", electron_coordinate)  #Print the coordinate received.
    print(len(electron_coordinate_list), " points generated.")  #Print how many points were generated. Comment the two print lines out if you don't need them.
    return electron_coordinate_list

get_electron_coordinates_list(1, 100)
get_electron_coordinates_list(2, 100)
spherical_to_cartesian() does nothing other than convert the spherical points to Cartesian.
For 100 points and radius 1, it generates 99 points.
But only 26 points are made if the radius is 2 and 100 points are requested.
If you can generate points uniformly in the sphere's volume, then to get a uniform distribution on the sphere's surface, you can simply normalize the vectors so their radius equals the sphere's radius.
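For example, a minimal numpy sketch of that approach (rejection sampling from the enclosing cube is my choice here; any method that gives points uniform in the ball works):
import numpy as np

def sample_sphere_surface_via_ball(n, radius=1.0, seed=None):
    #Draw points uniformly inside the unit ball by rejection from the
    #enclosing cube, then rescale each onto the surface of the target sphere.
    rng = np.random.default_rng(seed)
    points = []
    while len(points) < n:
        p = rng.uniform(-1.0, 1.0, size=3)
        norm = np.linalg.norm(p)
        if 0 < norm <= 1.0:  #keep only points inside the ball (and not at the origin)
            points.append(radius * p / norm)
    return np.array(points)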
Alternatively, you can use the fact that independent identically-distributed normal distributions are rotationally invariant. If you sample from 3 normal distributions with mean 0 and standard deviation 1, and then likewise normalize the vector, it will be uniform on the sphere's surface. Here's an example:
import random

def sample_sphere_surface(radius=1):
    x, y, z = (random.normalvariate(0, 1) for i in range(3))
    scalar = radius / (x**2 + y**2 + z**2) ** 0.5
    return (x * scalar, y * scalar, z * scalar)
To be absolutely foolproof, we can handle the astronomically unlikely case of a division-by-zero error when x, y and z all happen to be zero:
def sample_sphere_surface(radius=1):
    while True:
        try:
            x, y, z = (random.normalvariate(0, 1) for i in range(3))
            scalar = radius / (x**2 + y**2 + z**2) ** 0.5
            return (x * scalar, y * scalar, z * scalar)
        except ZeroDivisionError:
            pass
The element of area is, in polar coordinates, sin(Θ) dΘ dφ. Hence the azimuth angle φ can be uniformly distributed, while the inclination Θ must be redistributed. Using the inverse transform sampling trick, Θ = arccos(u), where u is drawn uniformly from [-1, 1), will do.
Hence in Cartesian coordinates, (√(1-u²) cos v, √(1-u²) sin v, u) where u is drawn from [-1,1) and v from [0,2π).
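In numpy, that recipe might look like the following sketch (vectorized over n samples; the function name is my own):
import numpy as np

def sample_sphere_inverse_transform(n, radius=1.0, seed=None):
    #Uniform points on a sphere: u = cos(inclination) uniform on [-1, 1),
    #azimuth v uniform on [0, 2*pi), then convert to Cartesian.
    rng = np.random.default_rng(seed)
    u = rng.uniform(-1.0, 1.0, size=n)
    v = rng.uniform(0.0, 2.0 * np.pi, size=n)
    s = np.sqrt(1.0 - u**2)  #sin(inclination), since u = cos(inclination)
    return radius * np.column_stack((s * np.cos(v), s * np.sin(v), u))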
I'd like to fit my 2D numpy array (image) data to a Gaussian. I've read a lot of examples using scipy.optimize, and I've tried but the fit has never been good -- this is probably because my background is non-zero, and sometimes I have other peaks too. I think it might be easier for me to simply generate a Gaussian that has the parameters of the correct peak. I already have the subpixel centroid coordinates x and y of the peak I want, and can easily get the amplitude of the peak with data[y][x], although I guess I would have to round the coordinates. What I'm stuck on now is the x and y widths. My Gaussian function looks like this:
import numpy as np

def gaussian_func(xy, x0, y0, width_x, width_y, amp):  #x0 and y0 are the centroid coordinates
    x = xy[0]
    y = xy[1]
    offset = np.min(data)  #should this be a median value of the background instead?
    a = 1 / (2 * width_x**2)
    c = 1 / (2 * width_y**2)
    exp_term = a * (x - x0)**2 + c * (y - y0)**2
    return (offset + amp * np.exp(-exp_term)).ravel()

x, y = np.arange(0, np.shape(data)[1], 1), np.arange(0, np.shape(data)[0], 1)
xx, yy = np.meshgrid(x, y)
gaussian = gaussian_func((xx, yy), x0, y0, width_x, width_y, amp)
gaussian = np.reshape(gaussian, np.shape(data))
So I'm basically just confused on what to insert for width_x and width_y. I know these terms are supposed to be interchangeable with the standard deviations in x and y, but when I tried simply using np.std(data), I got bad results. Do the widths correspond to the actual physical widths of the peak? If so, how do I find those? Thanks!
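One common way to estimate the widths (not from the question; a moment-based approach I'd suggest) is to compute intensity-weighted second moments in a small cutout around the centroid. The cutout size and the median background subtraction below are assumptions for illustration:
import numpy as np

def estimate_widths(data, x0, y0, half_size=10):
    #Estimate sigma_x and sigma_y from intensity-weighted second moments
    #in a (2*half_size+1)-pixel cutout centred on the peak.
    yi, xi = int(round(y0)), int(round(x0))
    cut = data[yi - half_size:yi + half_size + 1,
               xi - half_size:xi + half_size + 1].astype(float)
    cut = cut - np.median(cut)   #crude background removal
    cut = np.clip(cut, 0, None)  #negative weights would skew the moments
    yy, xx = np.indices(cut.shape)
    total = cut.sum()
    mx, my = (cut * xx).sum() / total, (cut * yy).sum() / total
    width_x = np.sqrt((cut * (xx - mx)**2).sum() / total)
    width_y = np.sqrt((cut * (yy - my)**2).sum() / total)
    return width_x, width_y
The widths in your function are the Gaussian standard deviations, which do relate to the physical width of the peak: for a Gaussian, FWHM = 2*sqrt(2*ln 2)*sigma ≈ 2.355*sigma. np.std(data) gives bad results because it measures the spread of all pixel values in the image, not the spatial width of the peak.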
I'm working in Python 2.7 with 3D numpy arrays, and trying to retrieve only the pixels that fall on a 2D tilted disc.
Here is my code to plot the border of the disc (= a circle) I am interested in:
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

#creating a 3d numpy array (empty in this example, but will represent a binary 3D image in my application)
space = np.zeros((40, 40, 20))

r = 8  #radius of the circle
theta = np.pi / 4  #"tilt" of the circle
phirange = np.linspace(0, 2 * np.pi)  #to make a full circle

#center of the circle
center = [20, 20, 10]

#computing the values of the circle in spherical coordinates and converting them back to cartesian
for phi in phirange:
    x = r * np.cos(theta) * np.cos(phi) + center[0]
    y = r * np.sin(phi) + center[1]
    z = r * np.sin(theta) * np.cos(phi) + center[2]
    space[int(round(x)), int(round(y)), int(round(z))] = 1

x, y, z = space.nonzero()

#plotting
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z, zdir='z', c='red')
plt.show()
The plot gives the following figure:
which is a good start, but now I want a way to retrieve only the values of the pixels of space which are located in the disc defined by the circle : the ones in the pink zone in the following image (in my application, space will be a 3D binary image, here it is numpy.zeros() just to be able to plot and show you the disc I want):
How should I proceed?
I guess there is some numpy masking involved, and I understand how you would do it in 2D (like this question), but I'm having trouble applying this to 3D.
One easy way would be to calculate the normal vector to your disc plane. You can use your spherical coordinates for that: leave out the centre, set phi to zero, swap cos(theta) and sin(theta), and put a minus sign on the sin.
Let's call that vector v. The plane is then given by v0*x0 + v1*x1 + v2*x2 == c; you can calculate c by inserting a point from your circle for x.
Next, you can make a 2D grid for x0 and x1 and solve for x2. This gives you the height x2 as a function of the (x0, x1) mesh. For these points you can calculate the distance from your disc centre and discard the points that are too far off. This you would indeed do using a mask.
Finally, depending on how precisely you want to plot, you could round the x2 values to grid units, but for a surface plot, for example, I wouldn't do that.
To get a 3D mask as you describe, you would round x2 and then, starting from an all-zero space, set the disc pixels via space[x0, x1, x2] = True. This assumes that you have masked x0, x1, x2 as described earlier. A rough sketch of the idea follows.
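This sketch masks the full 3D grid directly rather than solving for x2 on a 2D grid, but it implements the same plane-plus-radius test; it reuses r, theta, center and space from the question, and the half-voxel plane tolerance is my assumption:
import numpy as np

#Normal to the disc plane, read off from the circle parametrisation
#(phi = 0, cos(theta) and sin(theta) swapped, minus sign on the sin):
v = np.array([-np.sin(theta), 0.0, np.cos(theta)])  #already unit length
c = np.dot(v, center)  #plane equation: v . x == c, anchored at the disc centre

#Evaluate every voxel of space against the plane and the circle radius.
x0, x1, x2 = np.indices(space.shape)
dist_to_plane = v[0] * x0 + v[1] * x1 + v[2] * x2 - c
dist_to_center = np.sqrt((x0 - center[0])**2 +
                         (x1 - center[1])**2 +
                         (x2 - center[2])**2)
disc_mask = (np.abs(dist_to_plane) < 0.5) & (dist_to_center <= r)
disc_values = space[disc_mask]  #the pixel values lying on the tilted disc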
Well, that is a math problem; you should ask it on the Mathematics Stack Exchange site.
From my perspective, you should first find the surface your disc lies in, and do the area calculation within that surface, for example by the method you mentioned in the linked question.
Neither numpy nor matplotlib is responsible for the projection here; you are.
Without clearly identifying which surface (or which kind of surface) the points lie in, and given that the equation does not guarantee it is a plane, the area does not mean anything.
What is the most effective way of generating an equation for a surface where x, y and z are known? There seem to be many ways to interpolate spline by spline from a few data points; however, I have all the data points that represent a smooth surface and would still not have a single equation representing the whole surface. Each spline is fairly simple in that it rises and falls once.
I have generated an equation from a least squares example on Brandon Stafford's blog, but the resulting equation does not represent the more complicated form.
I realize the cross terms are missing. How can I add each cross term (xy, xy^2, x^2y, x^3y, x^3y^2, y^3x, y^3x^2) into the script? Once I have cross terms, do I need to add degrees for them?
# Set up the canonical least squares form
Ax = np.vander(X, degree)
Ay = np.vander(Y, degree)
A = np.hstack((Ax, Ay))

# Solve for the least squares estimate
(coeffs, residuals, rank, sing_vals) = np.linalg.lstsq(A, Z)

# Extract coefficients and create polynomials in x and y
xcoeffs = coeffs[0:degree]
ycoeffs = coeffs[degree:2 * degree]

fx = np.poly1d(xcoeffs)
fy = np.poly1d(ycoeffs)

print(fx)
print(fy)
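To include the cross terms, one option is to build the design matrix explicitly instead of stacking two one-dimensional Vandermonde matrices; a sketch (the function names and degree defaults are mine, not from the blog post):
import numpy as np

def fit_poly_surface(X, Y, Z, deg_x=3, deg_y=3):
    #Least-squares fit of Z ~ sum of c_ij * X**i * Y**j over all i <= deg_x,
    #j <= deg_y, which includes every cross term such as x*y, x*y**2, x**2*y.
    terms = [(i, j) for i in range(deg_x + 1) for j in range(deg_y + 1)]
    A = np.column_stack([X**i * Y**j for i, j in terms])
    coeffs, residuals, rank, sv = np.linalg.lstsq(A, Z, rcond=None)
    return dict(zip(terms, coeffs))

def eval_poly_surface(coeffs, x, y):
    return sum(c * x**i * y**j for (i, j), c in coeffs.items())
Because each cross term gets its own column in A, no extra "degrees" are needed beyond choosing how high i and j may go.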
This is a bit of a complicated problem, so I'll do my best to break it down into chunks.
I'm writing a 3D Python library for the sake of learning / fun (as opposed to one that I'd intend for others to use). In the system I've developed, three-dimensional points are generally flattened to the image as follows:
Increasing the Z index by width moves the point halfway to the vanishing point in the center.
At Z = 0, the X and Y values correspond directly to the pixel at X, Y.
(There might be a name for this method, but if there is, I'm not familiar with it.)
In Python:
# vx and vy are the vanishing point's coordinates
def flatten_point(width, vx, vy, x, y, z):
    distance = (x - vx, y - vy)
    flat_distance = [d / (1 + float(z) / width) for d in distance]
    return (vx + flat_distance[0], vy + flat_distance[1])
At this point, I'm able to create triangles somewhat efficiently by flattening its vertices and using barycentric coordinates to find and fill in the pixels that fall between those three points. This works well enough if I don't need to know anything about the actual points on the triangle that those pixels correspond to, but if I want to shade the triangle so that deeper points are drawn darker, I need to know what unflattened point on the triangle the pixel corresponds to.
joriki on math.stackexchange recommended using the barycentric coordinates as weights to find the original point. This did appear to work for a while -- and it probably would work if I were using a linear depth system -- but it falls apart when the depths of the triangle's points differ by enough. The triangle appears to approach the greatest depth more quickly than it actually does, as if it were curved backwards.
So, in short: how can I reverse the point flattening function to get the actual 3D point of an arbitrary 2D pixel on a flattened triangle? Alternatively, if there is a better / more efficient way to flatten triangles without losing the depth of each pixel, that would work too.
You are right that the problem lies in your depth values not being linear. Fortunately, the solution is simple, but a little expensive if calculated per pixel.
Using your barycentric coordinates, rather than interpolating the three Z components directly, you need to interpolate their inverses and then invert the result again. This is called perspective correction.
Example for Z only:
def GetInterpolatedZ(triangle, u, v):
    z0 = 1.0 / triangle[0].z
    z1 = 1.0 / triangle[1].z
    z2 = 1.0 / triangle[2].z
    z = z0 + u * (z1 - z0) + v * (z2 - z0)
    return 1.0 / z
With triangle a list of three vectors and u and v the barycentric coordinates for triangle[1] and triangle[2] respectively. You will need to remap your Zs before and after the divisions if they are offset.
If you want to interpolate the actual X and Y coordinates, you do something similar. You will need to interpolate x/z and y/z and relinearize the result by multiplying by z.
def GetInterpolatedZ(tri, u, v):
    t0 = Vec3(tri[0].x / tri[0].z, tri[0].y / tri[0].z, 1.0 / tri[0].z)
    t1 = Vec3(tri[1].x / tri[1].z, tri[1].y / tri[1].z, 1.0 / tri[1].z)
    t2 = Vec3(tri[2].x / tri[2].z, tri[2].y / tri[2].z, 1.0 / tri[2].z)
    inter = t0 + u * (t1 - t0) + v * (t2 - t0)
    inter.z = 1.0 / inter.z
    inter.x *= inter.z
    inter.y *= inter.z
    return inter
Again, tri is a list of the three vectors and u, v are the barycentric coordinates for tri[1], tri[2]. Vec3 is a regular 3-component Euclidean vector type.
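Since Vec3 is not defined in the answer, here is a minimal stand-in (my assumption; any vector class supporting these operations will do) plus a usage example for the second function:
class Vec3:
    #Minimal 3-component vector supporting +, - and scalar multiplication.
    def __init__(self, x, y, z):
        self.x, self.y, self.z = x, y, z
    def __add__(self, other):
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)
    def __sub__(self, other):
        return Vec3(self.x - other.x, self.y - other.y, self.z - other.z)
    def __rmul__(self, s):
        return Vec3(s * self.x, s * self.y, s * self.z)

tri = [Vec3(0.0, 0.0, 2.0), Vec3(4.0, 0.0, 4.0), Vec3(0.0, 4.0, 8.0)]
p = GetInterpolatedZ(tri, 0.25, 0.25)  #perspective-correct point at (u, v) = (0.25, 0.25)
print(p.x, p.y, p.z)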