What units does the Plotly camera center (layout.scene.camera.center) use?

In a 3D Plotly plot the camera center defaults to (0, 0, 0), which, as far as I understand, refers to the center of the 3D volume occupied by the plot, not to the data coordinate (0, 0, 0).
These values can be changed via layout.scene.camera.center as documented here and here. However, I can't work out what units are being used, nor can I find this information in the documentation.
E.g. if I change the camera center to (1,1,1), where is this in relation to my plot? From a bit of experimenting I have discovered that:
(1,1,1) puts the camera center outside the volume occupied by my plot, but I can't figure out how far outside,
(0.5, 0.5, 0.5) puts the camera center near, but not exactly on, one of the edges of the volume occupied by the plot; sometimes it is near a corner of the volume, sometimes it is along an edge.

Note: I'm not 100% sure that my answer relates to plotly-python, but it works that way in plotly-js so I suppose it should be the same.
By default the camera's center is set to (0, 0, 0), which is the visual center of your plot. So, assuming the following edge values on the axes:
x: [10, 110],
y: [0, 50],
z: [1, 11],
the center point will have coordinates (60, 25, 6) (e.g. for x: (10 + 110) / 2 == 60).
To calculate the camera coordinates corresponding to some point within your plot's axes, you can use the following formula (the example is for the x axis, but it is valid for any axis):
multiplier = 0.5 * aspectratio.x
x = ((point.x - center.x) / halfLengthOfAxisX) * multiplier
So, in our example, if we wanted to center the camera on point (1, 2, 3), given aspect ratio 1, we would have:
multiplier = 0.5
halfLengthOfAxisX = 50 // Math.abs(center.x - Math.min(x))
x = ((1 - 60) / 50) * 0.5 // -0.59
You mentioned that (0.5, 0.5, 0.5) puts the camera near, but not exactly on, one of the edges. That's probably caused by not taking aspectratio into consideration. As far as I know, there is no way to retrieve it if it's calculated by Plotly (at least using Plotly.js; it could work differently in Python), so you may need to set it manually.
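For plotly-python, a minimal sketch of this formula might look like the following (the helper name and argument layout are my own; ranges holds your axis limits, and the aspect ratio is assumed to be set manually):
def data_to_camera_center(point, ranges, aspectratio=(1, 1, 1)):
    # point: (x, y, z) in data coordinates
    # ranges: {'x': (min, max), 'y': (min, max), 'z': (min, max)}
    center = {}
    for axis, p, ar in zip('xyz', point, aspectratio):
        lo, hi = ranges[axis]
        mid = (lo + hi) / 2   # visual center along this axis
        half = (hi - lo) / 2  # half the axis length
        center[axis] = (p - mid) / half * (0.5 * ar)
    return center

# the worked example above: centering on (1, 2, 3) gives x == -0.59
print(data_to_camera_center((1, 2, 3),
                            {'x': (10, 110), 'y': (0, 50), 'z': (1, 11)}))
# then e.g. fig.update_layout(scene_camera_center=...)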

Related

Applying Perlin noise to plane multiple times/ sphere

I have some questions regarding the Perlin noise and the pv.sample_function in general.
How would you go about applying Perlin noise to a sphere? I would like to have a slightly deformed sphere.
Can you apply Perlin noise to a mesh (sphere/plane) multiple times? I would like to have a plane with some rough 'waves' and high detailed noise on top of them (thus having big waves with little waves in them).
What exactly does the third parameter in the frequency do? After playing around with some values I haven't noticed how it affected the noise.
These are the two different frequencies/Perlin noises that I would like to apply to one plane, along with the plane each of them creates:
import pyvista as pv

def smooth_and_plot(sampled: pv.core.grid.UniformGrid):
    mesh = sampled.warp_by_scalar('scalars')
    mesh = mesh.extract_surface()
    # clean and smooth a little to reduce Perlin noise artifacts
    mesh = mesh.smooth(n_iter=100, inplace=True, relaxation_factor=0.07)
    mesh.plot()

def gravel_plane():
    freq = [180, 180, 50]
    noise = pv.perlin_noise(0.2, freq, (0, 0, 0))
    sampled = pv.sample_function(noise,
                                 bounds=(-10, 2, -10, 10, -10, 10),
                                 dim=(500, 500, 1))
    smooth_and_plot(sampled)

def bumpy_plane():
    freq = [0.5, 0.7, 0]
    noise = pv.perlin_noise(0.5, freq, (-10, -10, -10))
    sampled = pv.sample_function(noise,
                                 bounds=(-10, 2, -10, 10, -10, 10),
                                 dim=(500, 500, 1))
    smooth_and_plot(sampled)
Let me answer your questions in reverse order for didactical reasons.
What exactly does the third parameter in the frequency do? After playing around with some values I haven't noticed how it affected the noise.
You didn't see an effect because you were looking at 2d samples, and changing the behaviour along the third axis. The three frequencies specify the granularity of the noise along the x, y and z axes, respectively. In other words, the generated implicit function is a scalar function of three variables. It's just that your sampling reduces the dimensionality to 2.
Frequency might be a surprising quantity to attach to something spatial, but it works the same way as for time. High temporal frequency means a short oscillation period, and low temporal frequency means a long oscillation period. High spatial frequency means a short wavelength, and low spatial frequency means a long wavelength. To be specific, wavelength and frequency are inversely proportional.
So you'll see the effect of the third frequency when you start slicing along the z axis:
import pyvista as pv

freq = [0.5, 0.5, 2]
noise = pv.perlin_noise(0.5, freq, (0, 0, 0))
noise_cube = pv.sample_function(noise,
                                bounds=(-10, 10, -10, 10, -10, 10),
                                dim=(200, 200, 200))
noise_cube.slice_orthogonal(-9, -9, -9).plot()
As you can see, the blobs in the xy plane are circular, because the two in-plane frequencies are equal. But in both vertical planes the blobs are elongated: they are flatter in the z direction. This is because the frequency along the z axis is four times larger, leading to a wavelength that is four times smaller. This will lead to random blobs having a roughly 4:1 aspect ratio.
Can you apply Perlin noise to a mesh (sphere/plane) multiple times? I would like to have a plane with some rough 'waves' and high detailed noise on top of them (thus having big waves with little waves in them).
All that happens in your snippets is that a function is sampled on a pre-defined rectangular grid, and the resulting values are stored as scalars on the grid. If you want to superimpose two functions, all you have to do is sum up the scalars from two such function calls. This will be somewhat wasteful, as you are generating the same grid twice (and discarding one of the copies), but this is the least exhausting solution from a development point of view:
def bumpy_gravel_plane():
    bounds = (-10, 2, -10, 10, -10, 10)
    dim = (500, 500, 1)

    freq = [180, 180, 50]
    noise = pv.perlin_noise(0.2, freq, (0, 0, 0))
    sampled_gravel = pv.sample_function(noise, bounds=bounds, dim=dim)

    freq = [0.5, 0.7, 0]
    noise = pv.perlin_noise(0.5, freq, (-10, -10, -10))
    sampled_bumps = pv.sample_function(noise, bounds=bounds, dim=dim)

    sampled = sampled_gravel
    sampled['scalars'] += sampled_bumps['scalars']
    smooth_and_plot(sampled)
How would you go about applying Perlin noise to a sphere? I would like to have a slightly deformed sphere.
The usual solution of generating a 2d texture and applying that to a sphere won't work here, because the noise is not periodic, so you can't easily close it like that. But if you think about it, the generated Perlin noise is a 3d function. You can just sample this 3d function directly on your sphere!
There's one small problem: I don't think you can do that with just pyvista. We'll have to get our hands slightly dirty, and by that I mean using a bare vtk method (namely EvaluateFunction() of the noise). Generate your sphere, and then query the noise function of your choice on its points. If you want the result to look symmetric, you'll have to set the same frequency along all three Cartesian axes:
def bumpy_sphere(R=10):
    freq = [0.5, 0.5, 0.5]
    noise = pv.perlin_noise(0.5, freq, (0, 0, 0))
    sampled = pv.Sphere(radius=R, phi_resolution=100, theta_resolution=100)
    # query the noise at each point manually
    sampled['scalars'] = [noise.EvaluateFunction(point)
                          for point in sampled.points]
    smooth_and_plot(sampled)

Failed to display all the spheres in perspective projection of 3D animation

I have generated an optic flow animation, with spheres (circles) that move towards the viewer in a 3D coordinate space. For some reason, although I define 8 spheres, it never displays all of them when I run the code; sometimes it displays 1, sometimes 4 (like in the gif). The number of displayed spheres appears to be random, between 1 and 8.
My code is available on Github
In perspective projection, the viewing volume is a frustum. So the spheres are probably being clipped (falling outside the frustum) at its sides, especially when they are close to the near plane.
Note that most of the spheres "leave" the window at its borders when they come closer to the camera (except the ones that leave the frustum through the near plane).
For debugging, set the initial z coordinate of the spheres to its maximum (the far plane):
for sphere in spheres:
    sphere.position.xy = np.random.uniform(-25, 25, size=2)
    # sphere.position.z = np.random.uniform(0.0, -50.0)
    sphere.position.z = -50  # the far plane in this setup
If you still don't "see" all the spheres, then the range for the x and y coordinates ([-25, 25]) is too large.
To compensate for the initial clipping, you can scale the x and y components by the distance. The factor z/-50 is 1 at the far plane (z = -50) and shrinks toward 0 for spheres that start closer to the camera, which keeps the initial positions inside the frustum:
for sphere in spheres:
    sphere.position.xy = np.random.uniform(-25, 25, size=2)
    z = np.random.uniform(0.0, -50.0)
    sphere.position.z = z
    sphere.position.xy[0] *= z/-50
    sphere.position.xy[1] *= z/-50

Generate a point within a sphere of a certain radius that's around a SPECIFIC given point (not at 0,0,0) - Python

I have a function that creates a uniform random point inside a sphere of a specific radius:
import numpy

radius = 5
r = radius * (numpy.random.random()**(1./3.))
phi = numpy.random.uniform(0, 2*numpy.pi)
costheta = numpy.random.uniform(-1, 1)
theta = numpy.arccos(costheta)
x = numpy.sin(theta) * numpy.cos(phi)
y = numpy.sin(theta) * numpy.sin(phi)
z = numpy.cos(theta)
point = numpy.array([x, y, z]) * r
However, I'm trying to figure out how to generate the point within a sphere that's centered around a specific point in space, rather than around the origin (0, 0, 0) as it is now. I'm not math savvy at all, so I'm not sure how to do this.
There are plenty of examples of how to generate a random point in a sphere of a specific radius (i.e. Generate a random point within a circle (uniformly)), but I haven't seen any in Python that show how to do so inside a sphere around a user-specified point (or perhaps I'm just misunderstanding the math that gets used...).
There was one question asked and answered here (generate a random cluster of points around a given point python), but it didn't really help; there were some other somewhat similar questions, but they were in either Java or C# (i.e. Randomly generate clustered points given a center coordinate in 3D).
I put in a simple drawing of what I have right now (on the left) and what I'm trying to do (on the right).
Any help or examples would be greatly appreciated!!
You're really overthinking this quite a bit. It's simpler to demonstrate in 2D, and the logic is the same:
In the figure above, we have a circle centered at (0, 0), and a point located at (0, 1).
Now, let's center the circle at (1, 1) and move the point to the same relative position:
The new center of our circle is (1, 1), and the point is now located at (1, 2). All you need to do to get this transformation is:
1 + 0 = 1 # new_center_x + point_x
1 + 1 = 2 # new_center_y + point_y
It really is that simple!
Now, numpy has built-in functionality to make this even easier, because you can simply add numpy arrays. So if you have your new center and your initial point, you can calculate the new point like so:
import numpy as np

new_center = np.array([1, 1])
original_point = np.array([0, 1])
new_center + original_point
# array([1, 2])
This translates easily to 3D surfaces:
Here we have a sphere centered at (0, 0, 0), and a point at (0, 0, 10). We can use the same logic to move this sphere to be centered at (5, 5, 5), with the point still in the same relative position:
new_center = np.array([5, 5, 5])
original_point = np.array([0, 0, 10])
new_center + original_point
# array([ 5, 5, 15])
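Putting the translation together with your original sampling code, a complete sketch might look like this (the function name is my own):
import numpy as np

def random_point_in_sphere(center, radius):
    # radius scaled by the cube root of a uniform sample -> uniform in volume
    r = radius * np.random.random()**(1./3.)
    # uniform random direction on the unit sphere
    phi = np.random.uniform(0, 2*np.pi)
    costheta = np.random.uniform(-1, 1)
    theta = np.arccos(costheta)
    direction = np.array([np.sin(theta) * np.cos(phi),
                          np.sin(theta) * np.sin(phi),
                          np.cos(theta)])
    # translate from the origin to the requested center
    return np.asarray(center) + r * direction

print(random_point_in_sphere([5, 5, 5], 5))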

Latitude and Longitude to X and Y in python

I am using a picture from Google Maps, roughly 200x200 feet in size (the size of a house and its property). My goal is to take an input coordinate (e.g. [37.211817, -86.682670]) and place a marker on the Google Maps picture, using my own math. I have looked at and tried many methods. I simply want to take a lat/lon and proportionally place it within a square X by Y in size.
OK, I found the answer, and I will share it as it turned out to be more complicated than I anticipated. The solution was to rotate the GPS coordinates 90° clockwise, then reflect over the X axis: (x, y) -> (y, -x) -> (y, x).
EDIT
So really, all that has to happen is swapping x and y: the coordinates are lat-lon, not lon-lat.
Then it's a simple scaling formula to fit it to your screen. Here's the code:
top_left_raw = GPS_COORD      # lat/lon of the image's top-left corner
bottom_right_raw = GPS_COORD  # lat/lon of the image's bottom-right corner
maprect = [0, 0, 400, 500]    # picture of map's width and height

def translate(pos):
    # rot = (pos[1], pos[0]*-1)
    # reflect = (rot[0], rot[1]*-1)
    # return reflect
    return (pos[1], pos[0])

def gps_to_coord(pos):
    pos1 = translate((pos[0] - top_left_raw[0], pos[1] - top_left_raw[1]))
    pos2 = translate((bottom_right_raw[0] - top_left_raw[0],
                      bottom_right_raw[1] - top_left_raw[1]))
    x = (maprect[2] * pos1[0]) / pos2[0]
    y = (maprect[3] * pos1[1]) / pos2[1]
    return (x, y)

gps_to_coord(GPS_COORD)
Let's assume for the sake of simplicity that GPS coordinates can scale to another coordinate system linearly. You'll need the GPS coordinates of the top left-most point on the image and the bottom right-most point:
Pseudocode:
input: gps_marker
let gps1 = lat/lon of location corresponding to top left of image.
let gps2 = lat/lon of location corresponding to bottom right of image.
let x_offset = (gps_marker.lon - gps1.lon)/(gps2.lon - gps1.lon) * image.width
let y_offset = (gps_marker.lat - gps1.lat)/(gps2.lat - gps1.lat) * image.height
// x_offset and y_offset are the x,y for your marker within the image.
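In Python, under the same linear-scaling assumption, the pseudocode could be sketched like this (names are illustrative; GPS points are (lat, lon) pairs):
def gps_to_pixel(gps_marker, gps1, gps2, width, height):
    # gps1 = top left of the image, gps2 = bottom right
    x_offset = (gps_marker[1] - gps1[1]) / (gps2[1] - gps1[1]) * width
    y_offset = (gps_marker[0] - gps1[0]) / (gps2[0] - gps1[0]) * height
    return x_offset, y_offset

# made-up corner coordinates, just to show the call shape
print(gps_to_pixel((37.211817, -86.682670),
                   (37.212300, -86.683200),
                   (37.211300, -86.682100), 400, 500))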

Finding edge from 2d points Python

I have several 2d sets of scattered data that I would like to find the edges of. Some edges may be open lines, others may be polygons.
For example, here is one plot that has an open edge that I would like to be able to keep. I would actually like to create a polygon from the open edges so I can use point_in_poly to check if another point lies inside. The points that would close the polygon are the boundaries of my plot area, btw.
Any ideas on where to get started?
EDIT:
Here is what I have already tried:
KernelDensity from sklearn. The point density at the edges varies significantly enough that it is not entirely distinguishable from the bulk of the points.
import numpy as np
from sklearn.neighbors import KernelDensity

kde = KernelDensity()
kde.fit(my_data)
dens = np.exp(kde.score_samples(ds))
dmax = dens.max()
dens_mask = (0.4 * dmax < dens) & (dens < 0.8 * dmax)
ax.scatter(ds[dens_mask, 0], ds[dens_mask, 1], ds[dens_mask, 2],
           c=dens[dens_mask], depthshade=False, marker='o', edgecolors='none')
Incidentally, the 'gap' in the left side of the color plot is the same one that is in the black and white plot above. I am also pretty sure that I could be using KDE better. For example, I would like to get the density for a much smaller volume, more like using radius_neighbors from sklearn's NearestNeighbors().
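For what it's worth, a rough sketch of that radius-based idea might look like the following, assuming my_data is the same (n, 2) point array as above (the radius and threshold values are made up):
import numpy as np
from sklearn.neighbors import NearestNeighbors

nn = NearestNeighbors(radius=1.0).fit(my_data)
# for each point, the indices of all neighbors within the radius
neighborhoods = nn.radius_neighbors(my_data, return_distance=False)
local_density = np.array([len(idx) for idx in neighborhoods])
# treat the sparsest points as edge candidates
edge_mask = local_density < np.percentile(local_density, 20)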
ConvexHull from scipy. I tried removing points from semi-random data (for practice) while still keeping a point of interest (here, (0, 0)) inside the convex set. This wasn't terribly effective. I had no sophisticated way of excluding points from an iteration and only removed the ones that were used in the last convex hull. This code and accompanying image show the first and last hulls made while keeping the point of interest in the set.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import path
from scipy.spatial import ConvexHull

hull = ConvexHull(pts)
contains = True
while contains:
    temp_pts = np.delete(pts, hull.vertices, 0)
    temp_hull = ConvexHull(temp_pts)
    tp = path.Path(np.hstack((temp_pts[temp_hull.vertices, 0][np.newaxis].T,
                              temp_pts[temp_hull.vertices, 1][np.newaxis].T)))
    if not tp.contains_point([0, 0]):
        contains = False
        hull = ConvexHull(pts)
        plt.plot(pts[hull.vertices, 0], pts[hull.vertices, 1])
    else:
        pts = temp_pts
        hull = temp_hull  # use the new hull for the next peeling step
        plt.plot(pts[hull.vertices, 0], pts[hull.vertices, 1], 'r-')
plt.show()
Ideally, the goal for the convex hull approach would be to maximize the area inside the hull while keeping only the point of interest inside the set, but I haven't been able to code this.
KMeans() from sklearn.cluster. Using n=3 clusters, I tried just running the class with default settings and got three horizontal groups of points. I haven't learned how to train the data to recognize points that form edges.
Here is a piece of the model where the data points are coming from. The solid areas contain points while the voids do not.
Here, and here are some other questions I have asked that show some more of what I have been looking at.
So I was able to do this in a roundabout way.
I used images of slices of the model in the xy plane generated from SolidWorks to distinguish the areas of interest.
If you look closely, there are points in the corners of the picture that I placed in the model at known distances for reference. These points allowed me to determine the number of pixels per millimeter. From there, I mapped the points in my analysis set to pixels and checked the color of each pixel. If the pixel is white, it is masked.
import numpy as np
import matplotlib.pyplot as plt

def mask_z_level(xi, yi, msk_img, x0=-14.3887, y0=5.564):
    im = plt.imread(msk_img)
    msk = np.zeros(xi.shape, dtype='bool')
    # locate the (red) reference points to determine pixels per millimeter
    pxmm = np.zeros((3, 2))
    p = 0
    for row in range(im.shape[0]):
        for col in range(im.shape[1]):
            if tuple(im[row, col]) == (1., 0., 0.):
                pxmm[p] = (row, col)
                p += 1
    pxx = pxmm[1, 1] / 5.5
    pxy = pxmm[2, 0] / 6.5
    print(pxx, pxy)
    # mask every grid point that maps to a white pixel
    for j in range(xi.shape[1]):
        for i in range(xi.shape[0]):
            x, y = xi[i, j], yi[i, j]
            dx, dy = x - x0, y - y0
            dpx = np.round(dx * pxx).astype('int')
            dpy = -np.round(dy * pxy).astype('int')
            if tuple(im[dpy, dpx]) == (1., 1., 1.):
                msk[i, j] = True
    return msk
Here is a plot showing the effects of the masking:
I am still fine-tuning the borders, but the task is now very manageable since the mask is largely complete. The remaining issue is that some mask points are incorrect, resulting in banding.
