I already have a rectangle triangulated by a scipy.spatial.Delaunay() object. I managed to stretch and bend it so that it looks like an annulus cut along a line. Here is some code that makes something with the same topology:
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay
NR = 22
NTheta = 36
Rin = 1
Rout = 3
alphaFactor = 33/64
alpha = np.pi/alphaFactor # opening angle of wedge
u = np.linspace(np.pi/2, np.pi/2 + alpha, NTheta)
v = np.linspace(Rin, Rout, NR)
u, v = np.meshgrid(u, v)
u = u.flatten()
v = v.flatten()
# Evaluate the parameterization at the flattened u and v
x = v*np.cos(u)
y = v*np.sin(u)
# Define 2D points, as input data for the Delaunay triangulation of U
points2D = np.vstack([u, v]).T
xy0 = np.vstack([x, y]).T
triLattice = Delaunay(points2D)  # triangulate the rectangle U
triSimplices = triLattice.simplices
plt.figure()
plt.triplot(x, y, triSimplices, linewidth=0.5)
Starting from this topology, I now want to join up the two open edges and make a closed annulus (that is, change the topology). How do I manually add new triangles to the existing triangulation?
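(For reference: the Delaunay object itself cannot be modified in place, but for plotting or further processing its simplices are just a plain (n, 3) integer index array, so extra triangles can be appended by stacking additional index triples. A minimal sketch; the indices here are hypothetical, not actual gap-bridging triangles:)
extraTriangles = np.array([[0, 1, 2]])  # hypothetical point indices
allSimplices = np.vstack([triSimplices, extraTriangles])
plt.triplot(x, y, allSimplices, linewidth=0.5)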
A solution is to merge the points around the gap. Here is a way to do this, keeping track of the indices of the corresponding points:
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay
import numpy as np
NR = 4
NTheta = 16
Rin = 1
Rout = 3
alphaFactor = 33/64 # -- set to .5 to close the gap
alpha = np.pi/alphaFactor # opening angle of wedge
u = np.linspace(np.pi/2, np.pi/2 + alpha, NTheta)
v = np.linspace(Rin, Rout, NR)
u_grid, v_grid = np.meshgrid(u, v)
u = u_grid.flatten()
v = v_grid.flatten()
# Get the indexes of the points on the first and last columns:
idx_grid_first = (np.arange(u_grid.shape[0]),
                  np.zeros(u_grid.shape[0], dtype=int))
idx_grid_last = (np.arange(u_grid.shape[0]),
                 (u_grid.shape[1]-1)*np.ones(u_grid.shape[0], dtype=int))
# Convert these 2D indexes to 1D indexes on the flattened array:
idx_flat_first = np.ravel_multi_index(idx_grid_first, u_grid.shape)
idx_flat_last = np.ravel_multi_index(idx_grid_last, u_grid.shape)
# Evaluate the parameterization at the flattened u and v
x = v * np.cos(u)
y = v * np.sin(u)
# Define 2D points, as input data for the Delaunay triangulation of U
points2D = np.vstack([u, v]).T
triLattice = Delaunay(points2D) # triangulate the rectangle U
triSimplices = triLattice.simplices
# Replace the 'last' index by the corresponding 'first':
triSimplices_merged = triSimplices.copy()
for i_first, i_last in zip(idx_flat_first, idx_flat_last):
    triSimplices_merged[triSimplices == i_last] = i_first
# Graph
plt.figure(figsize=(7, 7))
plt.triplot(x, y, triSimplices, linewidth=0.5)
plt.triplot(x, y, triSimplices_merged, linewidth=0.5, color='k')
plt.axis('equal');
plt.plot(x[idx_flat_first], y[idx_flat_first], 'or', label='first')
plt.plot(x[idx_flat_last], y[idx_flat_last], 'ob', label='last')
plt.legend();
which gives a plot of the original triangulation overlaid with the merged one (drawn in black):
Maybe you will have to adjust the definition of the alphaFactor so that the gap has the right size.
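For instance, here is a minimal sketch, under the assumption that you want the leftover gap to equal exactly one angular step so that the merged annulus is uniformly spaced: with NTheta points spanning alpha, the step is alpha/(NTheta - 1) and the gap is 2*pi - alpha; equating the two gives alpha = 2*pi*(NTheta - 1)/NTheta:
NTheta = 16
alpha = 2*np.pi*(NTheta - 1)/NTheta  # gap equals one angular step
alphaFactor = np.pi/alpha            # = NTheta/(2*(NTheta - 1)), ~0.533 for NTheta = 16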
I am trying to make a hexagonal fill using a Voronoi diagram. One problem I find is that although the plot it produces is a hexagonal diagram, the distances between the points vary.
The first function is meant to produce a Voronoi diagram of exact hexagons. I then try to assign a uniform initial distance between cells, to be used as the spring rest length between them.
My problem is that the initial hexagonal diagram gives non-uniform distances between cells. This can be seen from the output of the 'print(a)' line in the code. However, I assigned the coordinates of the points by 'x = (col + (0.5 * (row % 2))) * np.sqrt(3)' and 'y = row * 0.5', which should give exact hexagons, so I don't understand how I am getting different distances between points.
My code follows; the second function is mostly about finding the neighbors of each cell and computing the distances between each cell and its neighbors, which are printed by the 'print(a)' line.
import numpy as np
import freud
import matplotlib.pyplot as plt
from scipy.spatial import Delaunay
from collections import defaultdict
import itertools
# Source: https://freud.readthedocs.io/en/v2.10.0/gettingstarted/examples/module_intros/locality.Voronoi.html
def hexagonal_lattice(rows=3, cols=3, noise=.0, seed=None):
    if seed is not None:
        np.random.seed(seed)
    # Assemble a hexagonal lattice
    points = []
    for row in range(rows * 2):
        for col in range(cols):
            x = (col + (0.5 * (row % 2))) * np.sqrt(3)
            y = row * 0.5  # These x, y are allocated to produce exact hexagons
            points.append((x, y, 0))
    points = np.asarray(points)
    points += np.random.multivariate_normal(
        mean=np.zeros(3), cov=np.eye(3) * noise, size=points.shape[0]
    )
    # Set z=0 again for all points after adding Gaussian noise
    # points[:, 2] = 0  # does not seem necessary; box.wrap below resets the z coordinate anyway
    # Wrap the points into the box
    box = freud.box.Box(Lx=cols * np.sqrt(3), Ly=rows, is2D=True)
    points = box.wrap(points)  # wrap the points into the given box using periodic boundaries
    return box, points
# Compute the Voronoi diagram and plot
box1, pts1 = hexagonal_lattice(rows=12, cols=12, seed=2) # Noise = 0
voro = freud.locality.Voronoi()
voro.compute((box1, pts1))
plt.figure()
ax = plt.gca()
voro.plot(ax=ax, cmap="RdBu")
ax.scatter(pts1[:, 0], pts1[:, 1], s=2, c='k')
plt.show()
# This part is for the stability check of the initial exact hexagons diagram
def cell_movement(box, points, time_length, Lambda=0.01):
    time = 1
    while time <= time_length:
        # 2D projection + neighboring cells
        points_2d = []
        for point in points:
            points_2d.append([point[0], point[1]])  # projection to 2D for the neighbor list
        points_2d = np.asarray(points_2d)
        tri = Delaunay(points_2d)
        neiList = defaultdict(set)  # Neighbor list for each cell
        for p in tri.simplices:  # 'simplices' replaces the deprecated 'vertices' attribute
            for i, j in itertools.combinations(p, 2):
                neiList[i].add(j)
                neiList[j].add(i)
        neiborList = sorted(neiList.items())  # Sorted neighbor array
        spring = np.ones((len(points[:, 0]), len(points[:, 0])))  # Initial spring rest lengths
        rintervec = np.empty((len(points[:, 0]), len(points[:, 0]), 2))  # Spring length array
        for i in range(len(neiborList)):
            for j in list(neiborList[i][1]):
                j = int(j)
                rintervec[i, j] = points_2d[i] - points_2d[j]  # Distance vector between cells i and j
                a = np.linalg.norm(rintervec[i, j])  # Distance between neighboring cells
                if a != 0:
                    print(a)  # These are the printed numbers
                    spring[i, j] = np.linalg.norm(rintervec[i, j])  # Assign a as the spring rest length
                    points_2d[i] += Lambda * rintervec[i, j] * (  # moves points by equation (8)
                        spring[i, j] - np.linalg.norm(rintervec[i, j])) / np.linalg.norm(rintervec[i, j])
            points[i] = np.append(points_2d[i], np.array([0]))  # write the updated 2D position back into the 3D array
        # diagram
        points = box.wrap(points)  # wrap the points into the given box using periodic boundaries
        voro.compute((box, points))  # Compute the Voronoi diagram
        # figure
        plt.figure()
        ax = plt.gca()
        voro.plot(ax=ax, cmap="RdBu")
        ax.scatter(points[:, 0], points[:, 1], s=2, c='k')
        plt.savefig("C:\\doit\\pythonPractice\\At time %s.png" % time)  # saves the diagrams
        plt.show()
        time = time + 1
cell_movement(box1, pts1, time_length=5)
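For what it's worth, here is a minimal check (my addition, using scipy's cKDTree, which the code above does not use): print the distinct nearest-neighbor distances of the noise-free lattice directly. If more than one value appears, the spacing is indeed non-uniform.
from scipy.spatial import cKDTree

box1, pts1 = hexagonal_lattice(rows=12, cols=12, seed=2)
dists, _ = cKDTree(pts1[:, :2]).query(pts1[:, :2], k=2)  # k=2: nearest neighbor besides the point itself
print(np.unique(np.round(dists[:, 1], 6)))  # more than one value => non-uniform spacing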
I wrote some code that creates randomised patches from graphs in matplotlib. It works by creating a graph from nodes taken from a circle, using the parametric equation of a circle, and then randomly displacing each node along the vector from (0,0) to that node on the circumference. That way you can be certain to avoid lines crossing each other once the circle is drawn. At the end you just append the first (x, y) coordinate to the list of coordinates to close the circle.
What I want to do next is find a way to fill that circular graph with a solid colour, so that I can create a "stamp" that can be used to make randomised patches on a canvas, hopefully without crossing edges. I want to use this to make procedural risk maps in SVG format, because a lot of those are uploaded as raster images (JPEG) with terrible edges.
I am pretty sure the node information I have should be sufficient to make that happen, but I have no idea how to implement it. Can anyone help?
import numpy as np
import matplotlib.pyplot as plt

def node_circle(r=0.5, res=100):
    # Create arrays (x and y coordinates) for the nodes on the circumference
    # of a circle, using the parametric equation: x = r cos(t), y = r sin(t)
    t = np.linspace(0, 2*np.pi, res)
    x = r*np.cos(t)
    y = r*np.sin(t)
    return t, x, y

def sgn(x, x_shift=-0.5, y_shift=1):
    # A shifted sign function to use as a switching function,
    # in order to avoid shifts lower than -0.5, which is
    # the radius of the circle.
    return -0.5*(np.abs(x - x_shift)/(x - x_shift)) + y_shift

def displacer(x, y, low=-0.5, high=0.5, maxrad=0.5):
    # Displaces the node points of the circle
    shift = 0
    shift_increment = 0
    for i in range(len(x)):
        shift_increment = np.random.uniform(low, high)
        shift += shift_increment*sgn(maxrad)
        x[i] += x[i]*shift
        y[i] += y[i]*shift
    x = np.append(x, x[0])
    y = np.append(y, y[0])
    return x, y

def plot():
    # Actually visualises everything
    fig, ax = plt.subplots(figsize=(4, 4))
    # np.random.seed(1)
    ax.axis('off')
    t, x, y = node_circle(res=100)
    x, y = displacer(x, y, low=-0.15, high=0.15)
    ax.plot(x, y, 'r-')
    # ax.scatter(x, y)
    plt.show()

plot()
Got it: the answer is to use matplotlib.patches.Polygon.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Polygon

def node_circle(r=0.5, res=100):
    # Create arrays (x and y coordinates) for the nodes on the circumference
    # of a circle, using the parametric equation: x = r cos(t), y = r sin(t)
    t = np.linspace(0, 2*np.pi, res)
    x = r*np.cos(t)
    y = r*np.sin(t)
    return x, y

def sgn(x, x_shift=-0.5, y_shift=1):
    # A shifted sign function to use as a switching function,
    # in order to avoid shifts lower than -0.5, which is
    # the radius of the circle.
    return -0.5*(np.abs(x - x_shift)/(x - x_shift)) + y_shift

def displacer(x, y, low=-0.5, high=0.5, maxrad=0.5):
    # Displaces the node points of the circle
    shift = 0
    shift_increment = 0
    for i in range(len(x)):
        shift_increment = np.random.uniform(low, high)
        shift += shift_increment*sgn(maxrad)
        x[i] += x[i]*shift
        y[i] += y[i]*shift
    x = np.append(x, x[0])
    y = np.append(y, y[0])
    return x, y

def patch_distributor(M, N, res, grid='square'):
    # Distribute patches based on a specified pattern/grid.
    if grid == 'square':
        data = np.zeros(shape=(M, N, 2, res+1))
        for i in range(M):
            for j in range(N):
                x, y = displacer(*node_circle(res=res), low=-0.2, high=0.2)
                data[i, j, 0, :] = x
                data[i, j, 1, :] = y
        return data

def plot(res):
    # Actually visualises everything
    fig, ax = plt.subplots(figsize=(4, 4))
    # np.random.seed(1)
    ax.axis('off')
    # x, y = node_circle(res=res)
    # x, y = displacer(x, y, low=-0.15, high=0.15)
    # xy = np.zeros((len(x), 2))
    # xy[:, 0] = x
    # xy[:, 1] = y
    patch_data = patch_distributor(10, 10, res)
    for i in range(patch_data.shape[0]):
        for j in range(patch_data.shape[1]):
            x, y = patch_data[i, j]
            x += i*0.5
            y += j*0.5
            xy = np.zeros((len(x), 2))
            xy[:, 0] = x
            xy[:, 1] = y
            patch = Polygon(xy, fc='w', ec='k', lw=2, zorder=np.random.randint(2), antialiased=False)
            ax.add_patch(patch)
    ax.autoscale_view()
    # ax.plot(x, y, 'r-')
    # ax.scatter(x, y)
    plt.savefig('lol.png')

plot(res=40)
# Displace circle along the line of (0,0) -> (cos(t),sin(t))
# Make the previous step influence the next to avoid jaggedness
# limit displacement level to an acceptable amount
# Random displaced cubic grid as placing points for stamps.
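One small note on the SVG goal from the question: Matplotlib writes vector output directly when the filename ends in .svg, so the raster 'lol.png' line can simply be swapped:
plt.savefig('patches.svg')  # vector output, no rasterisation artefacts at the edges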
Say we have a 2D grid that is projected onto a 3D surface, resulting in a 3D numpy array, like in the image below. What is the most efficient way to calculate a surface normal for each point of this grid?
I can give you an example with simulated data:
I show your approach with three points: from three points you can always build two vectors and take their cross product to get a vector perpendicular to the surface. The order of the points only affects the sign of the resulting normal.
I took the liberty of also adding the PCA approach using the predefined sklearn functions. You can implement PCA yourself (a good exercise to understand what happens under the hood), but this works fine. The benefit of this approach is that it is easy to increase the number of neighbors while still being able to calculate the normal vector. It is also possible to select the neighbors within a radius instead of taking the N nearest neighbors.
If you need more explanation about the working of the code please let me know.
from functools import partial
import numpy as np
from sklearn.neighbors import KDTree
from mpl_toolkits.mplot3d import axes3d
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
# Grab some test data.
X, Y, Z = axes3d.get_test_data(0.25)
X, Y, Z = map(lambda x: x.flatten(), [X, Y, Z])
plt.plot(X, Y, Z, '.')
plt.show(block=False)
data = np.array([X, Y, Z]).T
tree = KDTree(data, metric='minkowski')  # minkowski with p=2 (euclidean)
# Get indices and distances:
dist, ind = tree.query(data, k=3) #k=3 points including itself
def calc_cross(p1, p2, p3):
    # Normal from three points: cross product of two edge vectors, normalised
    v1 = p2 - p1
    v2 = p3 - p1
    v3 = np.cross(v1, v2)
    return v3 / np.linalg.norm(v3)

def PCA_unit_vector(array, pca=PCA(n_components=3)):
    # The component with the smallest explained variance is the normal direction
    pca.fit(array)
    eigenvalues = pca.explained_variance_
    return pca.components_[np.argmin(eigenvalues)]
combinations = data[ind]
normals = list(map(lambda x: calc_cross(*x), combinations))
# lazy with map
normals2 = list(map(PCA_unit_vector, combinations))
## NEW ##
def calc_angle_with_xy(vectors):
    '''
    Assuming unit vectors!
    '''
    l = np.sum(vectors[:, :2]**2, axis=1) ** 0.5
    return np.arctan2(vectors[:, 2], l)
dist, ind = tree.query(data, k=5)  # k=5 points including itself
combinations = data[ind]
# map with functools
pca = PCA(n_components=3)
normals3 = list(map(partial(PCA_unit_vector, pca=pca), combinations))
print( combinations[10] )
print(normals3[10])
n = np.array(normals3)
n[calc_angle_with_xy(n) < 0] *= -1  # flip normals pointing below the xy-plane so all orientations are consistent
def set_axes_equal(ax):
    '''Make axes of a 3D plot have equal scale so that spheres appear as spheres,
    cubes as cubes, etc. This is one possible solution to Matplotlib's
    ax.set_aspect('equal') and ax.axis('equal') not working for 3D.

    Input
      ax: a matplotlib axis, e.g., as output from plt.gca().

    FROM: https://stackoverflow.com/questions/13685386/matplotlib-equal-unit-length-with-equal-aspect-ratio-z-axis-is-not-equal-to
    '''
    x_limits = ax.get_xlim3d()
    y_limits = ax.get_ylim3d()
    z_limits = ax.get_zlim3d()

    x_range = abs(x_limits[1] - x_limits[0])
    x_middle = np.mean(x_limits)
    y_range = abs(y_limits[1] - y_limits[0])
    y_middle = np.mean(y_limits)
    z_range = abs(z_limits[1] - z_limits[0])
    z_middle = np.mean(z_limits)

    # The plot bounding box is a sphere in the sense of the infinity
    # norm, hence I call half the max range the plot radius.
    plot_radius = 0.5*max([x_range, y_range, z_range])

    ax.set_xlim3d([x_middle - plot_radius, x_middle + plot_radius])
    ax.set_ylim3d([y_middle - plot_radius, y_middle + plot_radius])
    ax.set_zlim3d([z_middle - plot_radius, z_middle + plot_radius])
u, v, w = n.T
fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# ax.set_aspect('equal')
# Make the grid
ax.quiver(X, Y, Z, u, v, w, length=10, normalize=True)
set_axes_equal(ax)
plt.show()
The surface normal for a point cloud is not well defined. One way to define it is via the surface normals of a mesh reconstructed by triangulation (which can introduce artefacts depending on your specific input). A relatively simple and fast solution is to use VTK for this, more specifically vtkSurfaceReconstructionFilter and vtkPolyDataNormals. Depending on your needs, it may be useful to apply other filters as well.
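A minimal sketch of that pipeline (my addition, assuming a data array of shape (N, 3) of points, as in the other answer; vtkSurfaceReconstructionFilter produces an implicit distance volume, so a contour at value 0.0 extracts the reconstructed surface before the normals are computed):
import numpy as np
import vtk
from vtk.util.numpy_support import numpy_to_vtk

# Wrap the (N, 3) point cloud in a vtkPolyData
pts = vtk.vtkPoints()
pts.SetData(numpy_to_vtk(np.ascontiguousarray(data, dtype=np.float64), deep=True))
cloud = vtk.vtkPolyData()
cloud.SetPoints(pts)

# Reconstruct an implicit surface, extract it at the zero level set,
# then compute point normals on the resulting mesh
recon = vtk.vtkSurfaceReconstructionFilter()
recon.SetInputData(cloud)
contour = vtk.vtkContourFilter()
contour.SetInputConnection(recon.GetOutputPort())
contour.SetValue(0, 0.0)
normals = vtk.vtkPolyDataNormals()
normals.SetInputConnection(contour.GetOutputPort())
normals.ComputePointNormalsOn()
normals.Update()
mesh = normals.GetOutput()  # mesh with per-point normals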
BLUF:
I'm having trouble computing zonal statistics with a rotated array using the rasterstats package. I'm guessing the problem is with my affine matrix, but I'm not completely sure. Below is the affine transform matrix:
| 951.79, 0.45, 2999993.57|
| 0.00,-996.15,-1985797.84|
| 0.00, 0.00, 1.00|
Background:
I am creating files for a groundwater flow model and need to compute zonal statistics for each model grid cell, using some .csv data from the Puerto Rico Agricultural Water Management web portal. These data are available at a daily timestep for numerous parameters (e.g. ET, tmax, tmin, precip). The files are not georeferenced, but ancillary files are available that specify the lon/lat of each cell, which can then be projected using pyproj:
import pandas as pd
import numpy as np
import pyproj
url_base = 'http://academic.uprm.edu/hdc/GOES-PRWEB_RESULTS'
# Load some data
f = '/'.join([url_base, 'actual_ET', 'actual_ET20090101.csv'])
array = pd.read_csv(f, header=None).values
# Read longitude values
f = '/'.join([url_base, 'NON_TRANSIENT_PARAMETERS', 'LONGITUDE.csv'])
lon = pd.read_csv(f, header=None).values
# Read latitude values
f = '/'.join([url_base, 'NON_TRANSIENT_PARAMETERS', 'LATITUDE.csv'])
lat = np.flipud(pd.read_csv(f, header=None).values)
# Project to x/y coordinates (North America Albers Equal Area Conic)
aea = pyproj.Proj('+init=ESRI:102008')  # in pyproj >= 2, 'esri:102008' replaces the deprecated '+init=' form
x, y = aea(lon, lat)
Before I can compute zonal statistics, I need to create the affine transform that relates row/column coordinates to projected x/y coordinates. I specify the 6 parameters to create the Affine object using the affine library:
import math
from affine import Affine
length_of_degree_longitude_at_lat_mean = 105754.71 # 18.25 degrees via http://www.csgnetwork.com/degreelenllavcalc.html
length_of_degree_latitude_at_lat_mean = 110683.25 # 18.25 degrees via http://www.csgnetwork.com/degreelenllavcalc.html
# Find the upper left x, y
xul, yul = aea(lon[0][0], lat[0][0])
xll, yll = aea(lon[-1][0], lat[-1][0])
xur, yur = aea(lon[0][-1], lat[0][-1])
xlr, ylr = aea(lon[-1][-1], lat[-1][-1])
# Compute pixel width
a = abs(lon[0][1] - lon[0][0]) * length_of_degree_longitude_at_lat_mean
# Row rotation
adj = abs(xlr - xll)
opp = abs(ylr - yll)
b = math.atan(opp/adj)
# x-coordinate of the upper left corner
c = xul - a / 2
# Compute pixel height
e = -abs(lat[1][0] - lat[0][0]) * length_of_degree_latitude_at_lat_mean
# Column rotation
d = 0
# y-coordinate of the upper left corner
f = yul - e / 2
affine = Affine(a, b, c, d, e, f)
where:
a = width of a pixel
b = row rotation (typically zero)
c = x-coordinate of the upper-left corner of the upper-left pixel
d = column rotation (typically zero)
e = height of a pixel (typically negative)
f = y-coordinate of the upper-left corner of the upper-left pixel
(from https://www.perrygeo.com/python-affine-transforms.html)
The resulting affine matrix looks reasonable, and I've tried passing the row and column rotation as both radians and degrees, with little change in the result. Link to grid features: grid_2km.geojson
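For what it's worth, a rotated transform can also be composed from translation, rotation (in degrees), and scale factors with the same affine library, instead of hand-assembling the b and d coefficients. This is my sketch, not part of the original workflow; the rotation angle here is a hypothetical placeholder, not derived from the data:
from affine import Affine

angle_deg = 0.03  # hypothetical grid rotation, in degrees
affine_composed = Affine.translation(c, f) * Affine.rotation(angle_deg) * Affine.scale(a, e)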
import rasterstats
import geopandas as gpd
import matplotlib.pyplot as plt

grid_f = 'grid_2km.geojson'
gdf = gpd.read_file(grid_f)
zs = rasterstats.zonal_stats(gdf,
                             array,
                             affine=affine,
                             stats=['mean'])
df = pd.DataFrame(zs).fillna(value=np.nan)
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(131, aspect='equal')
ax.pcolormesh(x, y, np.zeros_like(array))
ax.pcolormesh(x, y, array)
ax.set_title('Projected Data')
ax = fig.add_subplot(132, aspect='equal')
gdf.plot(ax=ax)
ax.set_title('Projected Shapefile')
ax = fig.add_subplot(133, aspect='equal')
ax.imshow(df['mean'].values.reshape((gdf.row.max(), gdf.col.max())))
ax.set_title('Zonal Statistics Output')
plt.tight_layout()
plt.show()
Further, there is a discrepancy between the x, y value pairs transformed using the affine object and those derived from the native lon, lat values using pyproj:
rr = np.array([np.ones(x.shape[1], dtype=int) * i for i in range(x.shape[0])])
cc = np.array([np.arange(x.shape[1]) for i in range(x.shape[0])])
x1, y1 = affine * (cc, rr)
fig = plt.figure(figsize=(14, 6))
ax = fig.add_subplot(111, aspect='equal')
ax.scatter(x, y, s=.2)
ax.scatter(x1, y1, s=.2)
plt.show()
From a complex 3D shape, I have obtained with tricontourf the equivalent top view of my shape.
I now wish to export this result to a 2D array.
I have tried this :
import numpy as np
from shapely.geometry import Polygon
import skimage.draw as skdraw
import matplotlib.pyplot as plt
x = [...]
y = [...]
z = [...]
levels = [....]
cs = plt.tricontourf(x, y, triangles, z, levels=levels)
image = np.zeros((100,100))
for i in range(len(cs.collections)):
    p = cs.collections[i].get_paths()[0]
    v = p.vertices
    x = v[:, 0]
    y = v[:, 1]
    z = cs.levels[i]
    # to see the polygon at level i
    poly = Polygon([(i[0], i[1]) for i in zip(x, y)])
    x1, y1 = poly.exterior.xy
    plt.plot(x1, y1)
    plt.show()
    rr, cc = skdraw.polygon(x, y)
    image[rr, cc] = z

plt.imshow(image)
plt.show()
but unfortunately, only one polygon per level is created from the contour vertices (I think), which in the end generates an incorrect projection of my contourf into the 2D array.
Do you have an idea how to correctly represent the contourf in a 2D array?
Using an inner loop, for path in ...get_paths(), as suggested by Andreas, things are better... but not completely fixed.
My code is now :
import numpy as np
import matplotlib.pyplot as plt
import cv2
x = [...]
y = [...]
z = [...]
levels = [....]
...
cs = plt.tricontourf(x, y, triangles, z, levels=levels)
nbpixels = 1024
image = np.zeros((nbpixels,nbpixels))
pixel_size = 0.15 # relation between a pixel and its physical size
for i, collection in enumerate(cs.collections):
    z = cs.levels[i]
    for path in collection.get_paths():
        verts = path.to_polygons()
        for v in verts:
            v = v/pixel_size + 0.5*nbpixels  # centre and convert the vertices from physical space to image pixels
            poly = np.array([v], dtype=np.int32)  # an integer dtype is necessary for the next instruction
            cv2.fillPoly(image, poly, z)
The final image is not so far from the original one (returned by plt.tricontourf). Unfortunately, some small empty spaces remain in the final image (compare the contourf output and the final image).
Is path.to_polygons() responsible for that (since it only builds polygons from arrays with size > 2, ignores 'crossed' polygons, and passes over isolated single pixels)?
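One possible contributor worth ruling out (an assumption on my part, not a confirmed cause): np.array([v], dtype=np.int32) truncates the pixel coordinates toward zero, which can leave one-pixel gaps along the shared boundaries of adjacent filled polygons. Rounding to the nearest pixel before the fill may shrink them:
v = v/pixel_size + 0.5*nbpixels
poly = np.rint(v).astype(np.int32)  # round to the nearest pixel instead of truncating
cv2.fillPoly(image, [poly], z)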