I was wondering: how would you, mathematically speaking, generate x points at random positions on a 3D surface, knowing the triangle polygons composing that surface (their number, dimensions, positions, normals, etc.)? In how many steps would you proceed?
I'm trying to create a "scatterer" in Maya (with Python and API), but I don't even know where to start in terms of concept. Should I generate the points first, and then check if they belong to the surface? Should I create the points directly on the surface (and how, in this case)?
Edit: I want to achieve this without using 2D projection or UVs, as far as possible.
You should compute the area of each triangle, and use those as weights to determine the destination of each random point. It is probably easiest to do this as a batch operation:
import random

def sample_areas(triangles, samples):
    # compute and sum triangle areas
    totalA = 0.0
    areas = []
    for t in triangles:
        a = t.area()
        areas.append(a)
        totalA += a
    # compute and sort random numbers from [0,1)
    rands = sorted([random.random() for x in range(samples)])
    # sample based on area
    area_limit = 0.0
    rand_index = 0
    rand_value = rands[rand_index]
    for i in range(len(areas)):
        area_limit += areas[i]
        while rand_value * totalA < area_limit:
            # sample randomly over current triangle
            triangles[i].add_random_sample()
            # advance to next sorted random number
            rand_index += 1
            if rand_index >= samples:
                return
            rand_value = rands[rand_index]
Note that ridged or wrinkled regions may appear to have higher point density, simply because they have more surface area in a smaller space.
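The add_random_sample call above is left to the triangle object. As a hedged sketch, here is one common way to draw a uniform random point inside a triangle; the Triangle class and its fields are hypothetical, with vertices a, b, c stored as (x, y, z) tuples:

import math
import random

class Triangle(object):
    def __init__(self, a, b, c):
        # a, b, c: the three vertices as (x, y, z) tuples
        self.a, self.b, self.c = a, b, c
        self.samples = []

    def area(self):
        # half the magnitude of the cross product of two edge vectors
        ab = [self.b[i] - self.a[i] for i in range(3)]
        ac = [self.c[i] - self.a[i] for i in range(3)]
        cross = [ab[1]*ac[2] - ab[2]*ac[1],
                 ab[2]*ac[0] - ab[0]*ac[2],
                 ab[0]*ac[1] - ab[1]*ac[0]]
        return 0.5 * math.sqrt(sum(c*c for c in cross))

    def add_random_sample(self):
        # the square-root trick gives barycentric weights that are uniform over the triangle
        r1, r2 = random.random(), random.random()
        u = 1.0 - math.sqrt(r1)
        v = math.sqrt(r1) * (1.0 - r2)
        w = math.sqrt(r1) * r2
        self.samples.append([u*self.a[i] + v*self.b[i] + w*self.c[i] for i in range(3)])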
If the constraint is that all of the output points be on the surface, you want a consistent method of addressing the surface itself rather than worrying about the 3D-to-surface conversion for your points.
The hacktastic way to do that would be to create a UV map for your 3D object, and then scatter points randomly in 2 dimensions (throwing away points which happened not to land inside a valid UV shell). Once your UV shells are filled up as much as you'd like, you can convert your UV points to barycentric coordinates to convert those 2D points back to 3D points: effectively you say "I am 30% vertex A, 30% vertex B, and 40% vertex C, so my position is (.3A + .3B + .4C)".
Besides simplicity, another advantage of using a UV map is that it would allow you to customize the density and relative importance of different parts of the mesh: a larger UV face will get a lot of scattered points, and a smaller one fewer -- even if that doesn't match the physical size of the faces.
Going to 2D will introduce some artifacts because you probably will not be able to come up with a UV map that is both stretch-free and seam-free, so you'll get variations in the density of your scatter because of that. However for many applications this will be fine, since the algorithm is really simple and the results easy to hand tune.
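As a hedged sketch of that approach (not the linked script), assuming the UV layout is available as a list of 2D UV triangles paired with their corresponding 3D vertex triangles; all names below are hypothetical:

import random

def barycentric_in_uv_triangle(p, uv_tri):
    # barycentric weights of 2D point p inside UV triangle uv_tri, or None if p is outside
    (x0, y0), (x1, y1), (x2, y2) = uv_tri
    det = (y1 - y2)*(x0 - x2) + (x2 - x1)*(y0 - y2)
    u = ((y1 - y2)*(p[0] - x2) + (x2 - x1)*(p[1] - y2)) / det
    v = ((y2 - y0)*(p[0] - x2) + (x0 - x2)*(p[1] - y2)) / det
    w = 1.0 - u - v
    return (u, v, w) if (u >= 0 and v >= 0 and w >= 0) else None

def scatter_via_uv(uv_triangles, xyz_triangles, samples):
    points = []
    while len(points) < samples:
        p = (random.random(), random.random())          # random point in UV space
        for uv_tri, xyz_tri in zip(uv_triangles, xyz_triangles):
            bary = barycentric_in_uv_triangle(p, uv_tri)
            if bary is not None:                        # landed inside a valid UV shell
                u, v, w = bary
                a, b, c = xyz_tri
                points.append(tuple(u*a[i] + v*b[i] + w*c[i] for i in range(3)))
                break
        # points that miss every UV triangle are simply thrown away
    return points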
I have not used this one but this looks like it's based on this general approach: http://www.shanemarks.co.za/uncategorized/uv-scatter-script/
If you need a more mathematically rigorous method, you'd need a fancier method of mesh parameterization : a way to turn your 3-d collection of triangles into a consistent space. There is a lot of interesting work in that field but it would be hard to pick a particular path without knowing the application.
Pick 2 random edges from a random triangle.
Create 2 random points on those edges.
Create a new random point between them.
My ugly mel script:
//Select poly and target object
{
$sel = `ls -sl -fl`; select $sel[0];
polyTriangulate -ch 0;
$poly_s = `polyListComponentConversion -toFace`;$poly_s = `ls -fl $poly_s`;//poly flat list
int $numPoly[] = `polyEvaluate -fc`;//max random from number of poly
int $Rand = rand($numPoly[0]);//random number
$vtx_s =`polyListComponentConversion -tv $poly_s[$Rand]`;$vtx_s=`ls -fl $vtx_s`;//3 vertex from random poly flat list
undo; //for polyTriangulate
vector $A = `pointPosition $vtx_s[0]`;
vector $B = `pointPosition $vtx_s[1]`;
vector $C = `pointPosition $vtx_s[2]`;
vector $AB = $B-$A; $AB = $AB/mag($AB); //direction vector and normalize
vector $AC = $A-$C; $AC = $AC/mag($AC); //direction vector and normalize
$R_AB = mag($B-$A) - rand(mag($B-$A)); $AB = $A + ($R_AB * $AB);//new position
$R_AC = mag($A-$C) - rand(mag($A-$C)); $AC = $C + ($R_AC * $AC);//new position
vector $ABC = $AB-$AC; $ABC = $ABC/mag($ABC); //direction vector and normalize
$R_ABC = mag($AB-$AC) - rand(mag($AB-$AC)); //random
$ABC = $AC + ($R_ABC * $ABC);
float $newP2[] = {$ABC.x,$ABC.y,$ABC.z};//back to float
move $newP2[0] $newP2[1] $newP2[2] $sel[1];
select -add $sel[1];
}
P.S. The UV method is better.
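For readers who prefer Python, a minimal sketch of the same three steps (a random point on two edges of a random triangle, then a random point between them); the triangle below is hypothetical and given as three 3-component vertices:

import random

def lerp(p, q, t):
    # point at parameter t along the segment from p to q
    return tuple(p[i] + t*(q[i] - p[i]) for i in range(3))

def random_point_two_edges(a, b, c):
    p_ab = lerp(a, b, random.random())            # random point on edge AB
    p_ca = lerp(c, a, random.random())            # random point on edge CA
    return lerp(p_ab, p_ca, random.random())      # random point between them

# usage sketch with a hypothetical triangle
print(random_point_two_edges((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)))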
Here is pseudo code that might be a good starting point:
Let N = the number of vertices of the 3D face you are working with.
Just generate N random numbers, compute their sum, and divide each one by the sum. Now you have N random numbers whose sum is 1.0.
Using those random numbers, take a linear combination of the 3D vertices of the face you are interested in. This gives you a random 3D point on the face.
Repeat until you have a sufficient number of random points on the 3D face.
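A minimal Python sketch of that pseudocode, assuming the face is given as a list of (x, y, z) vertices:

import random

def random_point_on_face(vertices):
    weights = [random.random() for _ in vertices]
    total = sum(weights)
    weights = [w / total for w in weights]            # N random numbers summing to 1.0
    # convex combination of the face's vertices
    return tuple(sum(w * v[i] for w, v in zip(weights, vertices)) for i in range(3))

# usage sketch with a hypothetical triangular face
face = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
samples = [random_point_on_face(face) for _ in range(100)]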
Related
I have a Shapely polygon. I want to cut this polygon into n polygons, which all have more-or-less equally sized areas. Equally sized would be best, but an approximation would be okay too.
I have tried to use the two methods described here, which are both a step in the right direction but not what I need. Neither allows for a target n.
I looked into Voronoi diagrams, with which I am largely unfamiliar. The resulting shapes this analysis gives would be ideal, but it requires points, not a shape, as input.
This is the best I could manage. It does not result in equal surface area per polygon, but it turned out to work for what I needed. This populates a shape with a specific number of points (if the parameters are kept constant, the number of points will be too). Then the points are converted to a Voronoi diagram, which is then turned into triangles.
import numpy as np
from shapely import affinity
from shapely.geometry import LineString, MultiLineString
from shapely.geometry.multipolygon import MultiPolygon
from shapely.ops import triangulate
from scipy.spatial import Voronoi

# Voronoi doesn't work properly with points below (0,0), so set the lowest point to (0,0)
shape = affinity.translate(shape, -shape.bounds[0], -shape.bounds[1])
points = shape_to_points(shape)
vor = points_to_voronoi(points)
triangles = MultiPolygon(triangulate(MultiLineString(vor)))
def shape_to_points(shape, num=10, smaller_versions=10):
    points = []
    # Take the shape, shrink it by a factor (first iteration factor=1), and then
    # take points around the contours
    for shrink_factor in range(0, smaller_versions, 1):
        # calculate the shrinking factor
        shrink_factor = smaller_versions - shrink_factor
        shrink_factor = shrink_factor / float(smaller_versions)
        # actually shrink - first iteration it remains at 1:1
        smaller_shape = affinity.scale(shape, shrink_factor, shrink_factor)
        # Interpolate numbers around the boundary of the shape
        for i in range(0, int(num*shrink_factor), 1):
            i = i / int(num*shrink_factor)
            x, y = smaller_shape.interpolate(i, normalized=True).xy
            points.append((x[0], y[0]))
    # add the centroid
    x, y = smaller_shape.centroid.xy
    points.append((x[0], y[0]))  # near, but usually not at, (0,0)
    points = np.array(points)
    return points
def points_to_voronoi(points):
    vor = Voronoi(points)
    vertices = [x for x in vor.ridge_vertices if -1 not in x]
    # For some reason, some vertices were seen as super, super long. Probably also infinite lines, so take them out
    lines = [LineString(vor.vertices[x]) for x in vertices if not vor.vertices[x].max() > 50000]
    return MultiLineString(lines)
This is the input shape:
This is after shape_to_points:
This is after points_to_voronoi:
And then we can triangulate the Voronoi diagram:
Just combining the response and basic polyfill docs provided by #user3496060 (very helpful for me, thank you), here's a simple function.
And here's a great notebook from the h3 repo. Check out the "Census Polygon to Hex" section for how they use polyfill().
import h3

def h3_fill_shapely_poly(poly=shape, res=10):
    """
    inputs:
    - poly: must be a shapely Polygon, cannot be any other shapely object
    - res: resolution (higher means more specific zoom)
    output:
    - h3_fill: a Python set() object, generated by polyfill
    """
    coordinates = [[i[0], i[1]] for i in poly.exterior.coords]
    geo_json = {
        "type": "Polygon",
        "coordinates": [coordinates]
    }
    h3_fill = h3.polyfill(geo_json, res, geo_json_conformant=False)
    print(f'h3_fill =\n{type(h3_fill), h3_fill}')
    return h3_fill
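A minimal usage sketch, assuming the h3 v3 API (where h3.polyfill exists) and assuming that with geo_json_conformant=False the coordinate pairs are read in (lat, lng) order; the polygon here is hypothetical:

from shapely.geometry import Polygon

poly = Polygon([(37.77, -122.42), (37.78, -122.42), (37.78, -122.41), (37.77, -122.41)])
cells = h3_fill_shapely_poly(poly, res=9)
print(len(cells), 'hexagons')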
Another out-of-the-box option out there is the h3 polyfill function. Basically any repeating structure would work (triangle, square, hex), but Uber's library uses hexes so you're stuck with that unless you write a module to do the same thing with one of the other shapes. You still have the issue with "n" not being specified directly though (only indirectly through the discrete zoom level options).
For a project, I need to be able to sample random points uniformly from linear subspaces (i.e. lines and hyperplanes) within a certain radius. Since these are linear subspaces, they must go through the origin. This should work for any dimension n; the subspaces are drawn from R^n.
I want my range of values to be from -0.5 to 0.5 (i.e., all the points should fall within a hypercube centered at the origin with side length 1). I have tried the following to generate random subspaces and then points from those subspaces, but I don't think it's exactly correct (I think I'm missing some form of normalization for the points):
import numpy as np

def make_pd_line_in_rn(p, n, amount=1000):
    # n is the dimension we draw our subspaces from
    # p is the dimension of the subspace we want to draw (eg p=2 => line, p=3 => plane, etc)
    # assume that n >= p
    coeffs = np.random.rand(n, p) - 0.5
    t = np.random.rand(amount, p) - 0.5
    return np.matmul(t, coeffs.T)
I'm not really good at visualizing the 3D stuff and have been banging my head against the wall for a couple of days.
Here is a perfect example of what I'm trying to achieve:
I think I'm missing some form of normalization for the points
Yes, you identified the issue correctly. Let me sum up your algorithm as it stands:
Generate a random subspace basis coeffs made of p random vectors in dimension n;
Generate coordinates t for amount points in the basis coeffs
Return the coordinates of the amount points in R^n, which is the matrix product of t and coeffs.
This works, except for one detail: the basis coeffs is not an orthonormal basis. The vectors of coeffs do not define a hypercube of side length 1; instead, they define a random parallelepiped.
To fix your code, you need to generate a random orthonormal basis instead of coeffs. You can do that using scipy.stats.ortho_group.rvs, or if you don't want to import scipy.stats, refer to the accepted answer to that question: How to create a random orthonormal matrix in python numpy?
from scipy.stats import ortho_group  # ortho_group.rvs random orthogonal matrix
import numpy as np                   # np.random.rand random matrix

def make_pd_line_in_rn(p, n, amount=1000):
    # n is the dimension we draw our subspaces from
    # p is the dimension of the subspace we want to draw (eg p=2 => line, p=3 => plane, etc)
    # assume that n >= p
    coeffs = ortho_group.rvs(n)[:p]
    t = np.random.rand(amount, p) - 0.5
    return np.matmul(t, coeffs)
Please note that this method returns a rotated hypercube, aligned with the subspace. This makes sense; for instance, if you want to draw a square on a plane embedded in R^3, then the square has to be aligned with the plane (otherwise it's not in the plane).
If what you wanted instead is the intersection of a dimension-n hypercube with the dimension-p subspace, as suggested in the comments, then please do clarify your question.
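A small usage sketch of the function above (the dimensions are arbitrary):

pts = make_pd_line_in_rn(p=2, n=3, amount=1000)   # points on a random plane through the origin in R^3
print(pts.shape)                                  # (1000, 3)
print(np.linalg.norm(pts, axis=1).max())          # at most sqrt(2)/2, the half-diagonal of the unit square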
I am new to calculating these values and am having a hard time figuring out how to calculate a (global?) Moran's I value for an increasing neighbour distance between points. Specifically, I'm not really sure how to set this lag/neighbour distance so that I can plot a correlogram.
The data I have is for the variation of single parameter in a 2D list (matrix). This can be plotted simply as a colorplot where the axes represent the points/pixels in each direction of the image, and the colormap shows the value of this parameter for each box across the 2D surface. As they seem to be clumping, I would like to see how long this 'parameter clump length' is using a correlogram.
So far I have managed to create another colorplot which I don't know exactly how to interpret.
y = array_2d                       # the 2D parameter matrix (placeholder name)
w = pysal.lat2W(rows, cols, rook=False, id_type="float")
lm = pysal.Moran_Local(y, w)
moran_significance = np.reshape(lm.p_sim, np.shape(y))
plt.imshow(moran_significance)
I have also managed to obtain the global Moran I value by converting the 2D_Array into a 1D list.
y = array_2d.flatten()             # the same values converted to 1D
w = pysal.lat2W(rows, cols)
mi = pysal.Moran(y, w, two_tailed=False)
But what I am really looking for is how Moran's I changes as the neighbour order n = 1, 2, 3, 4, ... increases, where n = 1 is the nearest neighbour, n = 2 the next nearest, and so on. Here is an example of what I'd like: https://creativesciences.files.wordpress.com/2015/05/morins-i-e1430616786173.png
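A sketch of the kind of loop the question describes, assuming the classic PySAL 1.x namespace used above and that pysal.higher_order is available for building order-n contiguity weights (the variable names are the placeholders from the snippets above):

import numpy as np
import pysal
import matplotlib.pyplot as plt

rows, cols = array_2d.shape
y = array_2d.flatten()
w1 = pysal.lat2W(rows, cols, rook=False)              # order-1 (nearest) neighbours

orders = range(1, 11)
moran_i = []
for n in orders:
    wn = w1 if n == 1 else pysal.higher_order(w1, n)  # neighbours exactly n steps away
    wn.transform = 'r'                                # row-standardise the weights
    moran_i.append(pysal.Moran(y, wn, two_tailed=False).I)

plt.plot(list(orders), moran_i, marker='o')           # the correlogram
plt.xlabel('neighbour order n')
plt.ylabel("Moran's I")
plt.show()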
I want to generate random points on the surface of a cylinder such that the distance between the points falls in the range 230 to 250. I used the following code to generate random points on the surface of a cylinder:
import random, math

H = 300
R = 20
s = random.random()
#theta = random.random()*2*math.pi
for i in range(0, 300):
    theta = random.random()*2*math.pi
    z = random.random()*H
    r = math.sqrt(s)*R
    x = r*math.cos(theta)
    y = r*math.sin(theta)
    z = z
    print('C', x, y, z)
How can I generate random points such that they fall within that range (on the surface of the cylinder)?
This is not a complete solution, but an insight that should help. If you "unroll" the surface of the cylinder into a rectangle of width w = 2*pi*r and height h, the task of finding the distance between points is simplified. You have not explained how to measure "distance along the surface" between points on the top of the cylinder and on the side - this is a slightly tricky bit of geometry.
As for computing the distance along the surface where we created an artificial "seam", just use both (x1 - x2) and (w - x1 + x2) - whichever gives the shorter distance is the one you want.
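A small sketch of that unrolled-surface distance, assuming points are given as (x, z) pairs on the unrolled rectangle of width w = 2*pi*r:

import math

def surface_distance(p1, p2, r):
    # p1, p2: (x, z) positions on the unrolled side of the cylinder
    w = 2*math.pi*r
    dx = abs(p1[0] - p2[0])
    dx = min(dx, w - dx)          # take the shorter way around the seam
    dz = p1[1] - p2[1]
    return math.hypot(dx, dz)

# usage sketch: two points on a cylinder of radius 20
print(surface_distance((10.0, 0.0), (120.0, 100.0), 20.0))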
I do think that #VincentNivoliers' suggestion to use Poisson disk sampling is very good, but with the constraints of h=300 and r=20 you will get terrible results no matter what.
The basic way of creating a set of random points with constraints on the positions between them is to have a function that modulates the probability of points being placed at a certain location. This function starts out as a constant, and whenever a point is placed, the forbidden areas surrounding it are set to zero. That is difficult to do with continuous variables, but reasonably easy if you discretize your problem.
The other thing to be careful about is the being on a cylinder part. It may be easier to think of it as random points on a rectangular area that repeats periodically. This can be handled in two different ways:
the simplest is to take into consideration not only the rectangular tile where you are placing the points, but also its neighbouring ones. Whenever you place a point in your main tile, you also place one in the neighboring ones and compute their effect on the probability function inside your tile.
A more sophisticated approach considers the probability function as the convolution of a kernel that encodes the forbidden areas with a sum of delta functions corresponding to the points already placed. If this is computed using FFTs, the periodicity is a natural by-product.
The first approach can be coded as follows:
from __future__ import division
import numpy as np

r, h = 20, 300
w = 2*np.pi*r
int_w = int(np.rint(w))
mult = 10
pdf = np.ones((h*mult, int_w*mult), bool)
points = []
min_d, max_d = 230, 250
available_locs = pdf.sum()
while available_locs:
    new_idx = np.random.randint(available_locs)
    new_idx = np.nonzero(pdf.ravel())[0][new_idx]
    new_point = np.array(np.unravel_index(new_idx, pdf.shape))
    points += [new_point]
    min_mask = np.ones_like(pdf)
    if max_d is not None:
        max_mask = np.zeros_like(pdf)
    else:
        max_mask = True
    for p in [new_point - [0, int_w*mult], new_point + [0, int_w*mult],
              new_point]:
        rows = ((np.arange(pdf.shape[0]) - p[0]) / mult)**2
        cols = ((np.arange(pdf.shape[1]) - p[1]) * 2*np.pi*r/int_w/mult)**2
        dist2 = rows[:, None] + cols[None, :]
        min_mask &= dist2 > min_d*min_d
        if max_d is not None:
            max_mask |= dist2 < max_d*max_d
    pdf &= min_mask & max_mask
    available_locs = pdf.sum()
points = np.array(points) / [mult, mult*int_w/(2*np.pi*r)]
If you run it with your values, the output is usually just one or two points, as the large minimum distance forbids all others. But if you run it with more reasonable values, e.g.
min_d, max_d = 50, 200
Here's how the probability function looks after placing each of the first 5 points:
Note that the points are returned as pairs of coordinates, the first being the height, the second the distance along the cylinder's circumference.
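If 3D coordinates are needed, a small sketch of mapping those (height, arc length) pairs back onto the cylinder, reusing r and points from the snippet above:

def unrolled_to_xyz(point, radius):
    # point: (height, distance along the circumference), as returned above
    z, s = point
    theta = s / radius            # arc length back to an angle
    return (radius*np.cos(theta), radius*np.sin(theta), z)

xyz = [unrolled_to_xyz(p, r) for p in points]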
I'm working on a problem where I have a large set (>4 million) of data points located in a three-dimensional space, each with a scalar function value. This is represented by four arrays: XD, YD, ZD, and FD. The tuple (XD[i], YD[i], ZD[i]) refers to the location of data point i, which has a value of FD[i].
I'd like to superimpose a rectilinear grid of, say, 100x100x100 points in the same space as my data. This grid is set up as follows.
[XGrid, YGrid, ZGrid] = np.mgrid[Xmin:Xmax:Xstep, Ymin:Ymax:Ystep, Zmin:Zmax:Zstep]
XG = XGrid[:,0,0]
YG = YGrid[0,:,0]
ZG = ZGrid[0,0,:]
XGrid is a 3D array of the x-value at each point in the grid. XG is a 1D array of the x-values going from Xmin to Xmax, separated by a distance of XStep.
I'd like to use an interpolation algorithm I have to find the value of the function at each grid point based on the data surrounding it. In this algorithm I require 20 data points closest (or at least close) to my grid point of interest. That is, for grid point (XG[i], YG[j], ZG[k]) I want to find the 20 closest data points.
The only way I can think of is to loop over each grid point and, for each one, have an embedded for loop going through all (so many!) data points, calculating the Euclidean distance, and picking out the 20 closest ones.
for i in range(XG.shape[0]):
    for j in range(YG.shape[0]):
        for k in range(ZG.shape[0]):
            Distance = np.zeros(XD.shape)
            for a in range(XD.shape[0]):
                Distance[a] = (XD[a] - XG[i])**2 + (YD[a] - YG[j])**2 + (ZD[a] - ZG[k])**2
            B = np.zeros([20], int)
            for a in range(20):
                indx = np.argmin(Distance)
                B[a] = indx
                Distance[indx] = np.inf
This would give me an array, B, of the indices of the data points closest to the grid point. I feel like this would take too long to go through each data point at each grid point.
I'm looking for any suggestions, such as how I might be able to organize the data points before calculating distances, which could cut down on computation time.
Have a look at a seemingly similar but 2D problem and see if you cannot improve with ideas from there.
From the top of my head, I'm thinking that you can sort the points according to their coordinates (three separate arrays). When you need the closest points to the [X, Y, Z] grid point you'll quickly locate points in those three arrays and start from there.
Also, you don't really need the full Euclidean distance, since you are only interested in relative distance, which can be approximated as:
abs(deltaX) + abs(deltaY) + abs(deltaZ)
and so you save on the expensive powers and square roots...
No need to iterate over your data points for each grid location: Your grid locations are inherently ordered, so just iterate over your data points once, and assign each data point to the eight grid locations that surround it. When you're done, some grid locations may have too few data points. Check the data points of adjacent grid locations. If you have plenty of data points to go around (it depends on how your data is distributed), you can already select the 20 closest neighbors during the initial pass.
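A minimal sketch of that single pass, using the arrays XD, YD, ZD and the grid axes XG, YG, ZG from the question (np.searchsorted finds the grid cell each point falls in):

from collections import defaultdict
import numpy as np

# index of the grid cell that contains each data point
ix = np.clip(np.searchsorted(XG, XD) - 1, 0, len(XG) - 2)
iy = np.clip(np.searchsorted(YG, YD) - 1, 0, len(YG) - 2)
iz = np.clip(np.searchsorted(ZG, ZD) - 1, 0, len(ZG) - 2)

# attach every data point to the eight grid locations surrounding it
near = defaultdict(list)
for a, (i, j, k) in enumerate(zip(ix, iy, iz)):
    for di in (0, 1):
        for dj in (0, 1):
            for dk in (0, 1):
                near[(i + di, j + dj, k + dk)].append(a)

# near[(i, j, k)] now holds the candidate neighbours of grid point (XG[i], YG[j], ZG[k])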
Addendum: You may want to reconsider other parts of your algorithm as well. Your algorithm is a kind of piecewise-linear interpolation, and there are plenty of relatively simple improvements. Instead of dividing your space into evenly spaced cubes, consider allocating a number of center points and dynamically repositioning them until the average distance of data points from the nearest center point is minimized, like this:
1. Allocate each data point to its closest center point.
2. Reposition each center point to the coordinates that would minimize the average distance from "its" points (i.e. to the "centroid" of that data subset).
3. Some data points now have a different closest center point. Repeat steps 1 and 2 until you converge (or near enough).
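A minimal sketch of that repositioning loop (essentially a k-means style iteration), assuming the data is stacked into an (N, 3) array; for millions of points the distance matrix would need to be computed in chunks:

import numpy as np

def reposition_centers(data, n_centers, iterations=20, seed=0):
    # data: (N, 3) array of point coordinates
    rng = np.random.default_rng(seed)
    centers = data[rng.choice(len(data), n_centers, replace=False)].astype(float)
    for _ in range(iterations):
        # step 1: allocate each data point to its closest center point
        d2 = ((data[:, None, :] - centers[None, :, :])**2).sum(axis=2)
        owner = d2.argmin(axis=1)
        # step 2: move each center to the centroid of "its" points
        for c in range(n_centers):
            mine = data[owner == c]
            if len(mine):
                centers[c] = mine.mean(axis=0)
    return centers, owner

# usage sketch: data = np.column_stack([XD, YD, ZD]); centers, owner = reposition_centers(data, 100)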