Plotting parametric objects as a grid in PyVista - python

I am stuck on what is probably a simple problem, but after reading the PyVista docs I am still looking for an answer. I am trying to plot a grid in which each cell is a mesh defined as a parametric shape, i.e. a supertorus. In an early version of PyVista, I defined "my own" supertorus as below:
import numpy as np
import pyvista

def supertorus(yScale, xScale, Height, InternalRadius, Vertical, Horizontal,
               deltaX=0, deltaY=0, deltaZ=0):
    # initial range for values used in parametric equation
    n = 100
    u = np.linspace(-np.pi, np.pi, n)
    t = np.linspace(-np.pi, np.pi, n)
    u, t = np.meshgrid(u, t)

    # a1: Y Scale <0, 2>
    a1 = yScale
    # a2: X Scale <0, 2>
    a2 = xScale
    # a3: Height <0, 5>
    a3 = Height
    # a4: Internal radius <0, 5>
    a4 = InternalRadius
    # e1: Vertical squareness <0.25, 1>
    e1 = Vertical
    # e2: Horizontal squareness <0.25, 1>
    e2 = Horizontal

    # Definition of parametric equation for supertorus
    x = a1 * (a4 + np.sign(np.cos(u)) * np.abs(np.cos(u)) ** e1) *\
        np.sign(np.cos(t)) * np.abs(np.cos(t)) ** e2
    y = a2 * (a4 + np.sign(np.cos(u)) * np.abs(np.cos(u)) ** e1) *\
        np.sign(np.sin(t)) * np.abs(np.sin(t)) ** e2
    z = a3 * np.sign(np.sin(u)) * np.abs(np.sin(u)) ** e1

    grid = pyvista.StructuredGrid(x + deltaX + 5, y + deltaY + 5, z + deltaZ)
    return grid
I could use deltaX, deltaY and deltaZ to position the supertori at locations of my choice, roughly as in the sketch below.
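For illustration, a minimal sketch of how it was being called (the scale, squareness and offset values here are arbitrary placeholders, not the real ones):
import pyvista

# sketch: lay out a 2 x 2 arrangement of supertori by shifting each one
plotter = pyvista.Plotter()
for i in range(2):
    for j in range(2):
        torus = supertorus(1, 1, 1, 3, 0.5, 0.5,
                           deltaX=10 * i, deltaY=10 * j, deltaZ=0)
        plotter.add_mesh(torus)
plotter.show()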
Unfortunately, this approach was not efficient, and I am planning to use the supertoroidal meshes provided by PyVista (https://docs.pyvista.org/examples/00-load/create-parametric-geometric-objects.html?highlight=supertoroid). My question is: how can I place multiple meshes (like supertori) at locations defined by coordinates x, y, z?

I believe what you're looking for are glyphs. You can pass your own dataset as the glyph geometry, which will then be plotted at each point of the supermesh. Without going into details of orienting your glyphs, colouring them according to scalars and whatnot, here's a simple "alien invasion" scenario as an example:
import numpy as np
import pyvista as pv
# get dataset for the glyphs: supertoroid in xy plane
saucer = pv.ParametricSuperToroid(ringradius=0.5, n2=1.5, zradius=0.5)
saucer.rotate_y(90)
# saucer.plot() # <-- check how a single saucer looks like
# get dataset where to put glyphs
x,y,z = np.mgrid[-1:2, -1:2, :2]
mesh = pv.StructuredGrid(x, y, z)
# construct the glyphs on top of the mesh
glyphs = mesh.glyph(geom=saucer, factor=0.3)
# glyphs.plot() # <-- simplest way to plot it
# create Plotter and add our glyphs with some nontrivial lighting
plotter = pv.Plotter(window_size=(1000, 800))
plotter.add_mesh(glyphs, color=[0.2, 0.2, 0.2], specular=1, specular_power=15)
plotter.show()
I've added some strong specular lighting to make the saucers look more menacing:
But the key point for your problem was creating the glyphs from your supermesh by passing it as the geom keyword of mesh.glyph. The other keywords such as orient and scale are useful for arrow-like glyphs where you can use the glyph to denote vectorial information of your dataset.
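For instance, a minimal sketch of that arrow-style usage (my addition, not from the original answer; whether orient accepts an array name directly may depend on your PyVista version):
import numpy as np
import pyvista as pv

# sketch: arrow glyphs oriented by a point-wise vector field
rng = np.random.default_rng(0)
mesh = pv.StructuredGrid(*np.mgrid[-1:2, -1:2, -1:2].astype(float))
mesh['vectors'] = rng.normal(size=(mesh.n_points, 3))
arrows = mesh.glyph(orient='vectors', scale=False, factor=0.3)  # default geometry is an arrow
arrows.plot()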
You've asked in comments whether it's possible to vary the glyphs along the dataset. I was certain that this was not possible; however, the VTK docs clearly mention the possibility of defining a collection of glyphs to use:
More than one glyph may be used by creating a table of source objects, each defining a different glyph. If a table of glyphs is defined, then the table can be indexed into by using either scalar value or vector magnitude.
It turns out that PyVista doesn't expose this functionality (yet), but the base vtk package lets us get our hands dirty. Here's a proof of concept based on DataSetFilters.glyph, which I'll float by the PyVista devs to see if there's interest in exposing this functionality.
import numpy as np
import pyvista as pv
from pyvista.core.filters import _get_output  # just for this standalone example
import vtk
pyvista = pv  # just for this standalone example

# below: adapted from core/filters.py
def multiglyph(dataset, orient=True, scale=True, factor=1.0,
               tolerance=0.0, absolute=False, clamping=False, rng=None,
               geom_datasets=None, geom_values=None):
    """Copy a geometric representation (called a glyph) to every point in the input dataset.

    The glyphs may be oriented along the input vectors, and they may be scaled according to scalar
    data or vector magnitude.

    Parameters
    ----------
    orient : bool
        Use the active vectors array to orient the glyphs
    scale : bool
        Use the active scalars to scale the glyphs
    factor : float
        Scale factor applied to scaling array
    tolerance : float, optional
        Specify tolerance in terms of fraction of bounding box length.
        Float value is between 0 and 1. Default is 0.0. If ``absolute``
        is ``True`` then the tolerance can be an absolute distance.
    absolute : bool, optional
        Control if ``tolerance`` is an absolute distance or a fraction.
    clamping : bool
        Turn on/off clamping of "scalar" values to range.
    rng : tuple(float), optional
        Set the range of values to be considered by the filter when scalars
        values are provided.
    geom_datasets : tuple(vtk.vtkDataSet), optional
        The geometries to use for the glyphs in table mode
    geom_values : tuple(float), optional
        The value to assign to each geometry dataset, optional
    """
    # Clean the points before glyphing
    small = pyvista.PolyData(dataset.points)
    small.point_arrays.update(dataset.point_arrays)
    dataset = small.clean(point_merging=True, merge_tol=tolerance,
                          lines_to_points=False, polys_to_lines=False,
                          strips_to_polys=False, inplace=False,
                          absolute=absolute)
    # Make glyphing geometry
    if not geom_datasets:
        arrow = vtk.vtkArrowSource()
        arrow.Update()
        geom_datasets = arrow.GetOutput(),
        geom_values = 0,
    # check if the geometry datasets are consistent
    if not len(geom_datasets) == len(geom_values):
        raise ValueError('geom_datasets and geom_values must have the same length!')
    # TODO: other kinds of sanitization, e.g. check for sequences etc.

    # Run the algorithm
    alg = vtk.vtkGlyph3D()
    if len(geom_values) == 1:
        # use a single glyph
        alg.SetSourceData(geom_datasets[0])
    else:
        alg.SetIndexModeToScalar()
        # TODO: index by vectors?
        # TODO: SetInputArrayToProcess for arbitrary arrays, maybe?
        alg.SetRange(min(geom_values), max(geom_values))
        # TODO: different Range?
        for val, geom in zip(geom_values, geom_datasets):
            alg.SetSourceData(val, geom)
    if isinstance(scale, str):
        dataset.active_scalars_name = scale
        scale = True
    if scale:
        if dataset.active_scalars is not None:
            if dataset.active_scalars.ndim > 1:
                alg.SetScaleModeToScaleByVector()
            else:
                alg.SetScaleModeToScaleByScalar()
    else:
        alg.SetScaleModeToDataScalingOff()
    if isinstance(orient, str):
        dataset.active_vectors_name = orient
        orient = True
    if rng is not None:
        alg.SetRange(rng)
    alg.SetOrient(orient)
    alg.SetInputData(dataset)
    alg.SetVectorModeToUseVector()
    alg.SetScaleFactor(factor)
    alg.SetClamping(clamping)
    alg.Update()
    return _get_output(alg)

def example():
    """Small glyph example"""
    rng = np.random.default_rng()

    # get dataset for the glyphs: supertoroid in xy plane
    # use N random kinds of toroids over a mesh with 27 points
    N = 5
    values = np.arange(N)  # values for scalars to look up glyphs by
    geoms = [pv.ParametricSuperToroid(n1=n1, n2=n2) for n1, n2 in rng.uniform(0.5, 2, size=(N, 2))]
    for geom in geoms:
        # make the disks horizontal for aesthetics
        geom.rotate_y(90)

    # get dataset where to put glyphs
    x, y, z = np.mgrid[-1:2, -1:2, -1:2]
    mesh = pv.StructuredGrid(x, y, z)

    # add random scalars
    mesh.point_arrays['scalars'] = rng.integers(0, N, size=x.size)

    # construct the glyphs on top of the mesh; don't scale by scalars now
    glyphs = multiglyph(mesh, geom_datasets=geoms, geom_values=values, scale=False, factor=0.3)

    # create Plotter and add our glyphs with some nontrivial lighting
    plotter = pv.Plotter(window_size=(1000, 800))
    plotter.add_mesh(glyphs, specular=1, specular_power=15)
    plotter.show()

if __name__ == "__main__":
    example()
The multiglyph function in the above is mostly the same as mesh.glyph, but I've replaced the geom keyword with two keywords, geom_datasets and geom_values. These define an index -> geometry mapping that is then used to look up each glyph based on array scalars.
You asked whether you can colour the glyphs independently: you can. In the above proof of concept the choice of glyph is tied to the scalars (choosing vectors would be equally easy; I'm not so sure about arbitrary arrays). However you can easily choose what arrays to colour by when you call pv.Plotter.add_mesh, so my suggestion is to use something other than the proper scalars to colour your glyphs.
Here's a typical output:
I kept the scalars for colouring to make it easier to see the differences between the glyphs. You can see that there are five different kinds of glyphs being chosen randomly based on the random scalars. If you set non-integer scalars it will still work; I suspect vtk chooses the closest scalar or something similar for lookup.

Counterclockwise sorting of x, y data

I have a set of points in a text file: random_shape.dat.
The initial order of points in the file is random. I would like to sort these points in a counter-clockwise order as follows (the red dots are the xy data):
I tried to achieve that by using the polar coordinates: I calculate the polar angle of each point (x,y) then sort by the ascending angles, as follows:
"""
Script: format_file.py
Description: This script will format the xy data file accordingly to be used with a program expecting CCW order of data points, By soting the points in Counterclockwise order
Example: python format_file.py random_shape.dat
"""
import sys
import numpy as np
# Read the file name
filename = sys.argv[1]
# Get the header name from the first line of the file (without the newline character)
with open(filename, 'r') as f:
header = f.readline().rstrip('\n')
angles = []
# Read the data from the file
x, y = np.loadtxt(filename, skiprows=1, unpack=True)
for xi, yi in zip(x, y):
angle = np.arctan2(yi, xi)
if angle < 0:
angle += 2*np.pi # map the angle to 0,2pi interval
angles.append(angle)
# create a numpy array
angles = np.array(angles)
# Get the arguments of sorted 'angles' array
angles_argsort = np.argsort(angles)
# Sort x and y
new_x = x[angles_argsort]
new_y = y[angles_argsort]
print("Length of new x:", len(new_x))
print("Length of new y:", len(new_y))
with open(filename.split('.')[0] + '_formatted.dat', 'w') as f:
print(header, file=f)
for xi, yi in zip(new_x, new_y):
print(xi, yi, file=f)
print("Done!")
By running the script:
python format_file.py random_shape.dat
Unfortunately I don't get the expected results in random_shape_formatted.dat! The points are not sorted in the desired order.
Any help is appreciated.
EDIT: The expected results:
Create a new file named: filename_formatted.dat that contains the sorted data according to the image above (The first line contains the starting point, the next lines contain the points as shown by the blue arrows in counterclockwise direction in the image).
EDIT 2: The xy data added here instead of using github gist:
random_shape
0.4919261070361315 0.0861956168831175
0.4860816807027076 -0.06601587301587264
0.5023029456281289 -0.18238249845392662
0.5194784026079869 0.24347943722943777
0.5395164357511545 -0.3140611471861465
0.5570497147514262 0.36010146103896146
0.6074231036252226 -0.4142604617604615
0.6397066014669927 0.48590810704447085
0.7048302091822873 -0.5173701298701294
0.7499157837544145 0.5698170011806378
0.8000108666123336 -0.6199254449254443
0.8601249660418364 0.6500974025974031
0.9002010323281716 -0.7196585989767801
0.9703341483292582 0.7299242424242429
1.0104102146155935 -0.7931355765446666
1.0805433306166803 0.8102046438410078
1.1206193969030154 -0.865251869342778
1.1907525129041021 0.8909386068476981
1.2308285791904374 -0.9360074773711129
1.300961695191524 0.971219008264463
1.3410377614778592 -1.0076702085792988
1.4111708774789458 1.051499409681228
1.451246943765281 -1.0788793781975592
1.5213800597663678 1.1317798110979933
1.561456126052703 -1.1509956709956706
1.6315892420537896 1.2120602125147582
1.671665308340125 -1.221751279024005
1.7417984243412115 1.2923406139315234
1.7818744906275468 -1.2943211334120424
1.8520076066286335 1.3726210153482883
1.8920836729149686 -1.3596340023612745
1.9622167889160553 1.4533549783549786
2.0022928552023904 -1.4086186540731989
2.072425971203477 1.5331818181818184
2.1125020374898122 -1.451707005116095
2.182635153490899 1.6134622195985833
2.2227112197772345 -1.4884454939000387
2.292844335778321 1.6937426210153486
2.3329204020646563 -1.5192876820149541
2.403053518065743 1.774476584022039
2.443129584352078 -1.5433264462809912
2.513262700353165 1.8547569854388037
2.5533387666395 -1.561015348288075
2.6234718826405867 1.9345838252656438
2.663547948926922 -1.5719008264462806
2.7336810649280086 1.9858362849271942
2.7737571312143436 -1.5750757575757568
2.8438902472154304 2.009421487603306
2.883966313501766 -1.5687258953168035
2.954099429502852 2.023481896890988
2.9941754957891877 -1.5564797323888229
3.0643086117902745 2.0243890200708385
3.1043846780766096 -1.536523022432113
3.1745177940776963 2.0085143644234558
3.2145938603640314 -1.5088557654466737
3.284726976365118 1.9749508067689887
3.324803042651453 -1.472570838252656
3.39493615865254 1.919162731208186
3.435012224938875 -1.4285753640299088
3.5051453409399618 1.8343467138921687
3.545221407226297 -1.3786835891381335
3.6053355066557997 1.7260966810966811
3.655430589513719 -1.3197205824478546
3.6854876392284703 1.6130086580086582
3.765639771801141 -1.2544077134986225
3.750611246943765 1.5024152236652237
3.805715838087476 1.3785173160173163
3.850244800627849 1.2787337662337666
3.875848954088563 -1.1827449822904361
3.919007794704616 1.1336638361638363
3.9860581363759846 -1.1074537583628485
3.9860581363759846 1.0004485329485333
4.058012891753723 0.876878197560016
4.096267318663407 -1.0303482880755608
4.15638141809291 0.7443374218374221
4.206476500950829 -0.9514285714285711
4.256571583808748 0.6491902794175526
4.3166856832382505 -0.8738695395513574
4.36678076609617 0.593855765446675
4.426894865525672 -0.7981247540338443
4.476989948383592 0.5802489177489183
4.537104047813094 -0.72918339236521
4.587199130671014 0.5902272727272733
4.647313230100516 -0.667045454545454
4.697408312958435 0.6246979535615904
4.757522412387939 -0.6148858717040526
4.807617495245857 0.6754968516332154
4.8677315946753605 -0.5754260133805582
4.917826677533279 0.7163173947264858
4.977940776962782 -0.5500265643447455
5.028035859820701 0.7448917748917752
5.088149959250204 -0.5373268398268394
5.138245042108123 0.7702912239275879
5.198359141537626 -0.5445838252656432
5.2484542243955445 0.7897943722943728
5.308568323825048 -0.5618191656828015
5.358663406682967 0.8052154663518301
5.41877750611247 -0.5844972451790631
5.468872588970389 0.8156473829201105
5.5289866883998915 -0.6067217630853987
5.579081771257811 0.8197294372294377
5.639195870687313 -0.6248642266824076
5.689290953545233 0.8197294372294377
5.749405052974735 -0.6398317591499403
5.799500135832655 0.8142866981503349
5.859614235262157 -0.6493565525383702
5.909709318120076 0.8006798504525783
5.969823417549579 -0.6570670995670991
6.019918500407498 0.7811767020857934
6.080032599837001 -0.6570670995670991
6.13012768269492 0.7562308146399057
6.190241782124423 -0.653438606847697
6.240336864982342 0.7217601338055886
6.300450964411845 -0.6420995670995664
6.350546047269764 0.6777646595828419
6.410660146699267 -0.6225964187327819
6.4607552295571855 0.6242443919716649
6.520869328986689 -0.5922077922077915
6.570964411844607 0.5548494687131056
6.631078511274111 -0.5495730027548205
6.681173594132029 0.4686727666273125
6.7412876935615325 -0.4860743801652889
6.781363759847868 0.3679316979316982
6.84147785927737 -0.39541245791245716
6.861515892420538 0.25880333951762546
6.926639500135833 -0.28237987012986965
6.917336127605076 0.14262677798392165
6.946677533279001 0.05098957832291173
6.967431210462995 -0.13605442176870675
6.965045730326905 -0.03674603174603108
I find that an easy way to sort points with x,y coordinates like that is to sort them by the angle between the horizontal axis and the line from the center of mass of the whole polygon to each point, which is called alpha in the example. The coordinates of the center of mass (x0 and y0) can easily be calculated by averaging the x,y coordinates of all points. Then you calculate the angle, using numpy.arccos for instance. When y-y0 is larger than 0 you take the angle directly, otherwise you subtract the angle from 360° (2𝜋). I have used numpy.where for the calculation of the angle and then numpy.argsort to produce a mask for indexing the initial x,y values. The following function sort_xy sorts all x and y coordinates with respect to this angle. If you want to start from any other point you could add an offset angle for that. In your case that would be zero though.
import numpy as np

def sort_xy(x, y):
    x0 = np.mean(x)
    y0 = np.mean(y)
    r = np.sqrt((x-x0)**2 + (y-y0)**2)
    angles = np.where((y-y0) > 0, np.arccos((x-x0)/r), 2*np.pi - np.arccos((x-x0)/r))
    mask = np.argsort(angles)
    x_sorted = x[mask]
    y_sorted = y[mask]
    return x_sorted, y_sorted
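Usage would look something like this (my sketch, assuming x and y are the NumPy arrays loaded from the .dat file as in the question):
import matplotlib.pyplot as plt

x_sorted, y_sorted = sort_xy(x, y)
plt.plot(x_sorted, y_sorted)  # points now traced in order around the center of mass
plt.show()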
Plotting x, y before sorting using matplotlib.pyplot.plot (points are obviously not sorted):
Plotting x, y using matplotlib.pyplot.plot after sorting with this method:
If it is certain that the curve does not cross the same X coordinate (i.e. any vertical line) more than twice, then you could visit the points in X-sorted order and append a point to one of two tracks you follow: to the one whose last end point is the closest to the new one. One of these tracks will represent the "upper" part of the curve, and the other, the "lower" one.
The logic would be as follows:
dist2 = lambda a,b: (a[0]-b[0])*(a[0]-b[0]) + (a[1]-b[1])*(a[1]-b[1])

z = list(zip(x, y))  # get the list of coordinate pairs
z.sort()             # sort by x coordinate

cw = z[0:1]   # first point in clockwise direction
ccw = z[1:2]  # first point in counter clockwise direction
# reverse the above assignment depending on how first 2 points relate
if z[1][1] > z[0][1]:
    cw = z[1:2]
    ccw = z[0:1]

for p in z[2:]:
    # append to the list to which the next point is closest
    if dist2(cw[-1], p) < dist2(ccw[-1], p):
        cw.append(p)
    else:
        ccw.append(p)

cw.reverse()
result = cw + ccw
This would also work for a curve with steep fluctuations in the Y-coordinate, for which an angle-look-around from some central point would fail, like here:
No assumption is made about the range of the X nor of the Y coordinate: like for instance, the curve does not necessarily have to cross the X axis (Y = 0) for this to work.
Counter-clock-wise order depends on the choice of a pivot point. From your question, one good choice of the pivot point is the center of mass.
Something like this:
# Find the Center of Mass: data is a numpy array of shape (Npoints, 2)
mean = np.mean(data, axis=0)
# Compute angles
angles = np.arctan2((data-mean)[:, 1], (data-mean)[:, 0])
# Transform angles from [-pi,pi] -> [0, 2*pi]
angles[angles < 0] = angles[angles < 0] + 2 * np.pi
# Sort
sorting_indices = np.argsort(angles)
sorted_data = data[sorting_indices]
Not really a Python question I think, but you could still try sorting by -sign(y) * x, doing something like:
def counter_clockwise_sort(points):
    return sorted(points, key=lambda point: point['x'] * (-1 if point['y'] >= 0 else 1))
should work fine, assuming you read your points properly into a list of dicts of format {'x': 0.12312, 'y': 0.912}
EDIT: This will work as long as you cross the X axis only twice, like in your example.
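For example, loading the points into that dict format might look like this (a sketch I've added, assuming the same whitespace-separated .dat file with a one-line header):
import numpy as np

x, y = np.loadtxt('random_shape.dat', skiprows=1, unpack=True)
points = [{'x': xi, 'y': yi} for xi, yi in zip(x, y)]
ordered = counter_clockwise_sort(points)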
If:
the shape is arbitrarily complex and
the point spacing is ~random
then I think this is a really hard problem.
For what it's worth, I have faced a similar problem in the past, and I used a traveling salesman solver. In particular, I used the LKH solver. I see there is a Python repo for solving the problem, LKH-TSP. Once you have an order to the points, I don't think it will be too hard to decide on a clockwise vs counterclockwise ordering.
If we want to answer your specific problem, we need to pick a pivot point.
Since you want to sort according to the starting point you picked, I would take a pivot in the middle (x=4,y=0 will do).
Since we're sorting counterclockwise, we'll take arctan2(-(y-pivot_y), -(x-pivot_x)) (we're flipping the x axis).
We get the following, with a gradient colored scatter to prove correctness (fyi I removed the first line of the dat file after downloading):
import numpy as np
import matplotlib.pyplot as plt
points = np.loadtxt('points.dat')
#oneliner for ordering points (transform, adjust for 0 to 2pi, argsort, index at points)
ordered_points = points[np.argsort(np.apply_along_axis(lambda x: np.arctan2(-x[1],-x[0]+4) + np.pi*2, axis=1,arr=points)),:]
#color coding 0-1 as str for gray colormap in matplotlib
plt.scatter(ordered_points[:,0], ordered_points[:,1],c=[str(x) for x in np.arange(len(ordered_points)) / len(ordered_points)],cmap='gray')
Result (in the colormap, 1 is white and 0 is black); the points are numbered in the 0-1 range by order:
For points with comparable distances between their neighbouring pts, we can use KDTree to get two closest pts for each pt. Then draw lines connecting those to give us a closed shape contour. Then, we will make use of OpenCV's findContours to get contour traced always in counter-clockwise manner. Now, since OpenCV works on images, we need to sample data from the provided float format to uint8 image format. Given, comparable distances between two pts, that should be pretty safe. Also, OpenCV handles it well to make sure it traces even sharp corners in curvatures, i.e. smooth or not-smooth data would work just fine. And, there's no pivot requirement, etc. As such all kinds of shapes would be good to work with.
Here's the implementation -
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import pdist
from scipy.spatial import cKDTree
import cv2
from scipy.ndimage.morphology import binary_fill_holes

def counter_clockwise_order(a, DEBUG_PLOT=False):
    b = a-a.min(0)
    d = pdist(b).min()
    c = np.round(2*b/d).astype(int)
    img = np.zeros(c.max(0)[::-1]+1, dtype=np.uint8)
    d1,d2 = cKDTree(c).query(c,k=3)
    b = c[d2]
    p1,p2,p3 = b[:,0],b[:,1],b[:,2]
    for i in range(len(b)):
        cv2.line(img,tuple(p1[i]),tuple(p2[i]),255,1)
        cv2.line(img,tuple(p1[i]),tuple(p3[i]),255,1)
    img = (binary_fill_holes(img==255)*255).astype(np.uint8)
    if int(cv2.__version__.split('.')[0])>=3:
        _,contours,hierarchy = cv2.findContours(img.copy(),cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
    else:
        contours,hierarchy = cv2.findContours(img.copy(),cv2.RETR_TREE,cv2.CHAIN_APPROX_NONE)
    cont = contours[0][:,0]
    f1,f2 = cKDTree(cont).query(c,k=1)
    ordered_points = a[f2.argsort()[::-1]]
    if DEBUG_PLOT==1:
        NPOINTS = len(ordered_points)
        for i in range(NPOINTS):
            plt.plot(ordered_points[i:i+2,0],ordered_points[i:i+2,1],alpha=float(i)/(NPOINTS-1),color='k')
        plt.show()
    return ordered_points
Sample run -
# Load data in a 2D array with 2 columns
a = np.loadtxt('random_shape.csv',delimiter=' ')
ordered_a = counter_clockwise_order(a, DEBUG_PLOT=1)
Output -

Center a colormap around 0 in Mayavi

I'm plotting a point cloud and coloring by residual error. I'd like the colormap to remain centered on 0, so that 0 error is white.
I see answers for matplotlib. What about Mayavi?
from mayavi import mlab
mlab.points3d(x, y, z, e, colormap='RdBu')
You can set the vmin and vmax of the colormap explicitly with mlab.points3d. So, you could just make sure that vmin = -vmax. Something like this:
mylimit = 10
mlab.points3d(x, y, z, e, colormap='RdBu',vmin=-mylimit,vmax=mylimit)
Or, you could set the limit automatically with something like:
mylimit = max(abs(e.min()),abs(e.max()))
In case anybody wishes to do this in a way that uses the full extent of the colorbar, here is a solution I made (with help from here) for Mayavi that stretches the colorbar so that its centre is on zero:
#Mayavi surface
s = mlab.surf(data)
#Get the lut table of the data
lut = s.module_manager.scalar_lut_manager.lut.table.asarray()
maxd = np.max(data)
mind = np.min(data)
#Data range
dran = maxd - mind
#Proportion of the data range at which the centred value lies
zdp = abs(mind / dran)
#The +0.5's here are because floats are rounded down when converted to ints
#index equal portion of distance along colormap
cmzi = int(zdp * 255 + 0.5)
#linspace from zero to 127, with number of points matching portion to side of zero
topi = np.linspace(0, 127, cmzi) + 0.5
#and for other side
boti = np.linspace(128, 255, 255 - cmzi) + 0.5
#convert these linspaces to ints and map the new lut from these
shift_index = np.hstack([topi.astype(int), boti.astype(int)])
s.module_manager.scalar_lut_manager.lut.table = lut[shift_index]
#Force update of the figure now that we have changed the LUT
mlab.draw()
Note that if you wish to do this multiple times for the same surface (ie. if you're modifying the mayavi scalars rather than redrawing the plot) you need to make a record of the initial lut table and modify that each time.
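A minimal sketch of that note (my addition; it assumes the s, lut and shift_index variables from the snippet above):
# keep a copy of the untouched LUT once, right after creating the surface
orig_lut = s.module_manager.scalar_lut_manager.lut.table.asarray().copy()

# later, after the scalars change: recompute cmzi/shift_index from the new data
# range, then always index into the original table, not the already-shifted one
s.module_manager.scalar_lut_manager.lut.table = orig_lut[shift_index]
mlab.draw()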

How to reshape a networkx graph in Python?

So I created a really naive (probably inefficient) way of generating Hasse diagrams.
Question:
I have 4 dimensions: p, q, r, s.
I want to display it uniformly (tesseract) but I have no idea how to reshape it. How can one reshape a networkx graph in Python?
I've seen some examples of people using spring_layout() and draw_circular(), but they don't shape it the way I'm looking for because they aren't uniform.
Is there a way to reshape my graph and make it uniform? (i.e. reshape my Hasse diagram into a tesseract shape, preferably using nx.draw())
Here's what mine currently look like:
Here's my code to generate the Hasse diagram of N dimensions:
#!/usr/bin/python
import networkx as nx
import matplotlib.pyplot as plt
import itertools

H = nx.DiGraph()
axis_labels = ['p','q','r','s']
D_len_node = {}

#Iterate through axis labels
for i in xrange(0,len(axis_labels)+1):
    #Create edge from empty set
    if i == 0:
        for ax in axis_labels:
            H.add_edge('O',ax)
    else:
        #Create all non-overlapping combinations
        combinations = [c for c in itertools.combinations(axis_labels,i)]
        D_len_node[i] = combinations
        #Create edge from len(i-1) to len(i) #eg. pq >>> pqr, pq >>> pqs
        if i > 1:
            for node in D_len_node[i]:
                for p_node in D_len_node[i-1]:
                    #if set.intersection(set(p_node),set(node)): Oops
                    if all(p in node for p in p_node) == True: #should be this!
                        H.add_edge(''.join(p_node),''.join(node))

#Show Plot
nx.draw(H,with_labels = True,node_shape = 'o')
plt.show()
I want to reshape it like this:
If anyone knows of an easier way to make Hasse Diagrams, please share some wisdom but that's not the main aim of this post.
This is a pragmatic, rather than purely mathematical answer.
I think you have two issues - one with layout, the other with your network.
1. Network
You have too many edges in your network for it to represent the unit tesseract. Caveat: I'm not an expert on the maths here - I just came to this from the plotting angle (matplotlib tag). Please explain if I'm wrong.
Your desired projection and, for instance, the wolfram mathworld page for a Hasse diagram for n=4 have only 4 edges connected to each node, whereas you have 6 edges to the 2-bit and 7 edges to the 3-bit nodes. Your graph fully connects each "level", i.e. 4-D vectors with zero 1-values connect to all vectors with one 1-value, which then connect to all vectors with two 1-values, and so on. This is most obvious in the projection based on the Wikipedia answer (2nd image below); see the sketch right after this paragraph for an edge rule that avoids it.
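For reference, a sketch (my addition, not part of the original answer) of an edge rule that only links each combination to the one-label-larger combinations containing it, which gives the usual Hasse-diagram edge structure; it is essentially the all(p in node for p in p_node) check already hinted at in the question's code:
# sketch: replace the set.intersection test with a subset test
if i > 1:
    for node in D_len_node[i]:
        for p_node in D_len_node[i-1]:
            if set(p_node).issubset(set(node)):
                H.add_edge(''.join(p_node), ''.join(node))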
2. Projection
I couldn't find a pre-written algorithm or library to automatically project the 4D tesseract onto a 2D plane, but I did find a couple of examples, e.g. Wikipedia. From this, you can work out a co-ordinate set that would suit you and pass that into the nx.draw() call.
Here is an example - I've included two co-ordinate sets, one that looks like the projection you show above, one that matches this one from wikipedia.
import networkx as nx
import matplotlib.pyplot as plt
import itertools

H = nx.DiGraph()
axis_labels = ['p','q','r','s']
D_len_node = {}

#Iterate through axis labels
for i in xrange(0,len(axis_labels)+1):
    #Create edge from empty set
    if i == 0:
        for ax in axis_labels:
            H.add_edge('O',ax)
    else:
        #Create all non-overlapping combinations
        combinations = [c for c in itertools.combinations(axis_labels,i)]
        D_len_node[i] = combinations
        #Create edge from len(i-1) to len(i) #eg. pq >>> pqr, pq >>> pqs
        if i > 1:
            for node in D_len_node[i]:
                for p_node in D_len_node[i-1]:
                    if set.intersection(set(p_node),set(node)):
                        H.add_edge(''.join(p_node),''.join(node))

#This is manual two options to project tesseract onto 2D plane
# - many projections are available!!
wikipedia_projection_coords = [(0.5,0),(0.85,0.25),(0.625,0.25),(0.375,0.25),
                               (0.15,0.25),(1,0.5),(0.8,0.5),(0.6,0.5),
                               (0.4,0.5),(0.2,0.5),(0,0.5),(0.85,0.75),
                               (0.625,0.75),(0.375,0.75),(0.15,0.75),(0.5,1)]

#Build the "two cubes" type example projection co-ordinates
half_coords = [(0,0.15),(0,0.6),(0.3,0.15),(0.15,0),
               (0.55,0.6),(0.3,0.6),(0.15,0.4),(0.55,1)]
#make the coords symmetric
example_projection_coords = half_coords + [(1-x,1-y) for (x,y) in half_coords][::-1]
print example_projection_coords

def powerset(s):
    ch = itertools.chain.from_iterable(itertools.combinations(s, r) for r in range(len(s)+1))
    return [''.join(t) for t in ch]

pos={}
for i,label in enumerate(powerset(axis_labels)):
    if label == '':
        label = 'O'
    pos[label]= example_projection_coords[i]

#Show Plot
nx.draw(H,pos,with_labels = True,node_shape = 'o')
plt.show()
Note - unless you change what I've mentioned in 1. above, they still have your edge structure, so they won't look exactly the same as the examples from the web. Here is what it looks like with your existing network generation code - you can see the extra edges if you compare it to your example (e.g. I don't think pr should be connected to pqs):
'Two cube' projection
Wikimedia example projection
Note
If you want to get into the maths of doing your own projections (and building up pos mathematically), you might look at this research paper.
EDIT:
Curiosity got the better of me and I had to search for a mathematical way to do this. I found this blog - the main result of which is the projection matrix:
This led me to develop this function for projecting each label, taking the label containing 'p' to mean the point has value 1 on the 'p' axis, i.e. we are dealing with the unit tesseract. Thus:
import math

def construct_projection(label):
    r1 = r2 = 0.5
    theta = math.pi / 6
    phi = math.pi / 3
    x = int('p' in label) + r1 * math.cos(theta) * int('r' in label) - r2 * math.cos(phi) * int('s' in label)
    y = int('q' in label) + r1 * math.sin(theta) * int('r' in label) + r2 * math.sin(phi) * int('s' in label)
    return (x, y)
Gives a nice projection into a regular 2D octagon with all points distinct.
This will run in the above program, just replace
pos[label] = example_projection_coords[i]
with
pos[label] = construct_projection(label)
This gives the result:
play with r1,r2,theta and phi to your heart's content :)

mayavi 3d object in matplotlib Axes3D

I sometimes find myself frustrated with the lack of certain rendering features in matplotlib's mplot3d. In most of these cases, I do find that I can get what I want in mayavi, but still the matplotlib 3d axes are preferable, if only for aesthetics, like LaTeX-ified labels and visual consistency with my other figures.
My question here is about the obvious hack: is it possible to draw some 3d object (a surface or 3d scatter plot or whatever) in mayavi without axes, export that image, then place it in a matplotlib Axes3D of correct size, orientation, coordinate projection, etc.? Can anyone think of an outline of what would be needed to accomplish this, or perhaps even offer a skeleton solution?
I fiddled around with this some time ago and found I had no trouble in exporting a transparent background mayavi figure and placing it in an empty matplotlib Axes3D (with ticks, labels, and so on), but I didn't get far in getting the camera configurations of mayavi and matplotlib to match. Simply setting the three common parameters of azimuth, elevation, and distance equal in both environments didn't do the trick; presumably what's needed is some consideration of the perspective (or other) transformations going on to render the whole scene, and I'm fairly clueless in that area.
It seems like this might be useful:
http://docs.enthought.com/mayavi/mayavi/auto/example_mlab_3D_to_2D.html
I produced a proof-of-concept solution for Mayavi -> PGFPlots using the mlab_3D_to_2D.py example and the "Support for External Three-Dimensional Graphics" section of the PGFPlots manual.
Procedure:
Run the modified mlab_3D_to_2D.py with Mayavi to generate img.png. Four random points are printed to the console; copy these to the clipboard. Note that the figure size and resolution are hard-coded into the script; these should be edited or automatically extracted for different image sizes.
Paste the points into mlab_pgf.tex.
Run LaTeX on mlab_pgf.tex.
Result:
Modified mlab_3D_to_2D.py:
# Modified mlab_3D_to_2D.py from https://docs.enthought.com/mayavi/mayavi/auto/example_mlab_3D_to_2D.html
# Original copyright notice:
# Author: S. Chris Colbert <sccolbert#gmail.com>
# Copyright (c) 2009, S. Chris Colbert
# License: BSD Style

from __future__ import print_function

# this import is here because we need to ensure that matplotlib uses the
# wx backend and having regular code outside the main block is PyTaboo.
# It needs to be imported first, so that matplotlib can impose the
# version of Wx it requires.
import matplotlib
# matplotlib.use('WXAgg')
import pylab as pl

import numpy as np
from mayavi import mlab
from mayavi.core.ui.mayavi_scene import MayaviScene

def get_world_to_view_matrix(mlab_scene):
    """returns the 4x4 matrix that is a concatenation of the modelview transform and
    perspective transform. Takes as input an mlab scene object."""
    if not isinstance(mlab_scene, MayaviScene):
        raise TypeError('argument must be an instance of MayaviScene')

    # The VTK method needs the aspect ratio and near and far clipping planes
    # in order to return the proper transform. So we query the current scene
    # object to get the parameters we need.
    scene_size = tuple(mlab_scene.get_size())
    clip_range = mlab_scene.camera.clipping_range
    aspect_ratio = float(scene_size[0])/float(scene_size[1])

    # this actually just gets a vtk matrix object, we can't really do anything with it yet
    vtk_comb_trans_mat = mlab_scene.camera.get_composite_projection_transform_matrix(
        aspect_ratio, clip_range[0], clip_range[1])

    # get the vtk mat as a numpy array
    np_comb_trans_mat = vtk_comb_trans_mat.to_array()

    return np_comb_trans_mat

def get_view_to_display_matrix(mlab_scene):
    """ this function returns a 4x4 matrix that will convert normalized
    view coordinates to display coordinates. It's assumed that the view should
    take up the entire window and that the origin of the window is in the
    upper left corner"""
    if not (isinstance(mlab_scene, MayaviScene)):
        raise TypeError('argument must be an instance of MayaviScene')

    # this gets the client size of the window
    x, y = tuple(mlab_scene.get_size())

    # normalized view coordinates have the origin in the middle of the space
    # so we need to scale by width and height of the display window and shift
    # by half width and half height. The matrix accomplishes that.
    view_to_disp_mat = np.array([[x/2.0,      0.,   0.,   x/2.0],
                                 [   0., -y/2.0,    0.,   y/2.0],
                                 [   0.,     0.,    1.,      0.],
                                 [   0.,     0.,    0.,      1.]])

    return view_to_disp_mat

def apply_transform_to_points(points, trans_mat):
    """a function that applies a 4x4 transformation matrix to an of
    homogeneous points. The array of points should have shape Nx4"""
    if not trans_mat.shape == (4, 4):
        raise ValueError('transform matrix must be 4x4')

    if not points.shape[1] == 4:
        raise ValueError('point array must have shape Nx4')

    return np.dot(trans_mat, points.T).T

def test_surf():
    """Test surf on regularly spaced co-ordinates like MayaVi."""
    def f(x, y):
        sin, cos = np.sin, np.cos
        return sin(x + y) + sin(2 * x - y) + cos(3 * x + 4 * y)

    x, y = np.mgrid[-7.:7.05:0.1, -5.:5.05:0.05]
    z = f(x, y)
    s = mlab.surf(x, y, z)
    #cs = contour_surf(x, y, f, contour_z=0)
    return x, y, z, s

if __name__ == '__main__':
    f = mlab.figure()
    f.scene.parallel_projection = True

    N = 4

    # x, y, z, m = test_mesh()
    x, y, z, s = test_surf()

    mlab.move(forward=2.0)

    # now were going to create a single N x 4 array of our points
    # adding a fourth column of ones expresses the world points in
    # homogenous coordinates
    W = np.ones(x.flatten().shape)
    hmgns_world_coords = np.column_stack((x.flatten(), y.flatten(), z.flatten(), W))

    # applying the first transform will give us 'unnormalized' view
    # coordinates we also have to get the transform matrix for the
    # current scene view
    comb_trans_mat = get_world_to_view_matrix(f.scene)
    view_coords = \
        apply_transform_to_points(hmgns_world_coords, comb_trans_mat)

    # to get normalized view coordinates, we divide through by the fourth
    # element
    norm_view_coords = view_coords / (view_coords[:, 3].reshape(-1, 1))

    # the last step is to transform from normalized view coordinates to
    # display coordinates.
    view_to_disp_mat = get_view_to_display_matrix(f.scene)
    disp_coords = apply_transform_to_points(norm_view_coords, view_to_disp_mat)

    # at this point disp_coords is an Nx4 array of homogenous coordinates
    # where X and Y are the pixel coordinates of the X and Y 3D world
    # coordinates, so lets take a screenshot of mlab view and open it
    # with matplotlib so we can check the accuracy
    img = mlab.screenshot(figure=f, mode='rgba', antialiased=True)
    pl.imsave("img.png", img)
    pl.imshow(img)
    # mlab.close(f)

    idx = np.random.choice(range(disp_coords[:, 0:2].shape[0]), N, replace=False)
    for i in idx:
        # print('Point %d: (x, y) ' % i, disp_coords[:, 0:2][i], hmgns_world_coords[:, 0:3][i])
        a = hmgns_world_coords[:, 0:3][i]
        a = str(list(a)).replace('[', '(').replace(']', ')').replace(' ', ',')
        # See note below about 298.
        b = np.array([0, 298]) - disp_coords[:, 0:2][i]
        b = b * np.array([-1, 1])
        # Important! These values are not constant.
        # The image is 400 x 298 pixels, or 288 x 214.6 pt.
        b[0] = b[0] / 400 * 288
        b[1] = b[1] / 298 * 214.6
        b = str(list(b)).replace('[', '(').replace(']', ')').replace(' ', ',')
        print(a, "=>", b)
        pl.plot([disp_coords[:, 0][i]], [disp_coords[:, 1][i]], 'ro')
    pl.show()

    # you should check that the printed coordinates correspond to the
    # proper points on the screen
    mlab.show()

#EOF
mlab_pgf.tex:
\documentclass{standalone}
\usepackage{pgfplots}
\pgfplotsset{compat=1.17}
\begin{document}
\begin{tikzpicture}
\begin{axis}[
grid=both,minor tick num=1,
xlabel=$x$,ylabel=$y$,zlabel=$z$,
xmin=-7,
xmax=7,
ymin=-5,
ymax=5,
zmin=-3,
zmax=3,
]
\addplot3 graphics [
points={% important, paste points generated by `mlab_3D_to_2D.py`
(5.100000000000001, -3.8, 2.9491697063900895) => (69.82857610254948, 129.60245304203693)
(-6.2, -3.0999999999999996, 0.6658335107904079) => (169.834990346303, 158.6375879061911)
(-1.7999999999999998, 0.4500000000000002, -1.0839565197346115) => (162.75120267070378, 103.53696636434113)
(-5.3, -4.9, 0.6627774166307937) => (147.33354714145847, 162.93938533017257)
},
] {img.png};
\end{axis}
\end{tikzpicture}
\end{document}

generating a hemispherical surface with triangular_mesh and representing data (as values or as colors) at each vertex

I want to generate a surface which should look like a hemisphere. What I have done so far is to read an already existing BEM mesh and try to show the scalar values on it. But now I have to show the scalar values on a hemisphere instead of the BEM mesh, and I don't know how to generate a triangular mesh that looks like a hemisphere.
This hemisphere needs to contain a set of N points (x, y, z) [using mlab.triangular_mesh], and at each vertex I need to represent one of N data values (floats), either as a number or via variations in a colormap (e.g. blue for the lowest value of the data to red for the highest). data is an array of 2562 floats and could be randomly generated, as it comes from other code. The points also come from other code; they have shape (2562, 3), but that shape is not a hemisphere.
This was the program I used for viewing using the BEM surface
fname = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif'
surfaces = mne.read_bem_surfaces(fname, add_geom=True)
print "Number of surfaces : %d" % len(surfaces)

head_col = (0.95, 0.83, 0.83)  # light pink
colors = [head_col]

try:
    from enthought.mayavi import mlab
except:
    from mayavi import mlab

mlab.figure(size=(600, 600), bgcolor=(0, 0, 0))
for c, surf in zip(colors, surfaces):
    points = surf['rr']
    faces = surf['tris']
    s = data
    mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], faces,
                         color=c, opacity=1, scalars=s[:, 0])
    #mesh = mlab.triangular_mesh(x,y,z,triangles,representation='wireframe',opacity=0)
    #point_data = mesh.mlab_source.dataset.point_data
    #point_data.scalars = t
    #point_data.scalars.name = 'Point data'
    #mesh2 = mlab.pipeline.set_active_attribute(mesh,point_scalars='Point data')
As others have pointed out your question is not very clear, and does not include an easily reproducible example -- your example would take considerable work for us to reproduce and you have not described the steps you have taken very clearly.
What you are trying to do is easy. Scalars can be defined for each vertex (i.e., each VTK point):
surf = mlab.triangular_mesh(x,y,z,triangles)
surf.mlab_source.scalars = t
And you need to set a flag to get them to appear, which I think might be your problem:
surf.actor.mapper.scalar_visibility=True
Here is some code to generate a half-sphere. It produces a VTK polydata. I'm not 100% sure if the mayavi source is the same source type as triangular_mesh but I think it is.
res = 250. #desired resolution (number of samples on sphere)
phi,theta = np.mgrid[0:np.pi:np.pi/res, 0:np.pi:np.pi/res]
x=np.cos(theta) * np.sin(phi)
y=np.sin(theta) * np.sin(phi)
z=np.cos(phi)
mlab.mesh(x,y,z,color=(1,1,1))
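Putting the two answers together, here is a minimal sketch (my own illustration, not from the original answers) that colours an upper-hemisphere grid by a per-vertex scalar; the random t below stands in for your data array:
import numpy as np
from mayavi import mlab

res = 100.
# upper hemisphere: polar angle 0..pi/2, azimuth 0..2*pi
phi, theta = np.mgrid[0:np.pi/2:np.pi/(2*res), 0:2*np.pi:np.pi/res]
x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)

t = np.random.rand(*x.shape)  # stand-in for your per-vertex data
surf = mlab.mesh(x, y, z, scalars=t, colormap='RdBu')
surf.actor.mapper.scalar_visibility = True  # make sure colours come from the scalars
mlab.show()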
