How to render a simple 2D vtkImageData object in Paraview? - python

I would like to use ParaView to plot simple 2D meshes with either different colors per cell or different colors per vertex. As far as I can tell, the ParaView documentation does not explain how to Show() a user-defined VTK object.
I read in the ParaView guide how the VTK data model works, and in the VTK User's Guide how to generate a vtkImageData object.
From what I could gather, the following code should yield a vtkImageData object of a 10x5 2D mesh spanning [0.;10.]x[0.;5.] with 50 blue elements.
But now I don't know how to actually plot it in Paraview.
from paraview import vtk
import paraview.simple as ps
import numpy as np
from paraview.vtk.util.numpy_support import numpy_to_vtk
def main():
    # create the vtkImageData object
    myMesh = vtk.vtkImageData()
    myMesh.SetOrigin(0., 0., 0.)
    myMesh.SetExtent(0, 10, 0, 5, 0, 0)
    myMesh.SetSpacing(1., 1., 0.)
    # create the numpy colors for each cell
    blue = np.array([15, 82, 186], dtype=np.ubyte)  # 8 bits each [0, 255]
    npColors = np.tile(blue, (myMesh.GetNumberOfCells(), 1))
    # transform them to a vtkUnsignedCharArray
    # organized as 50 tuples of 3
    vtkColors = numpy_to_vtk(npColors, deep=1, array_type=vtk.VTK_UNSIGNED_CHAR)
    # allocate the sets of 3 scalars to each cell
    myMesh.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 3)  # set 3 scalars per cell
    myMesh.GetCellData().SetScalars(vtkColors)  # returns 0, the index to which
                                                # vtkColors is assigned
    # Something... generate a proxy using servermanager??
    ps.Show(myMesh?)
    ps.Interact()  # or ps.Render()

if __name__ == "__main__":
    main()
From what I could gather, I have to apply a geometric filter first, such as vtkImageDataGeometryFilter(). But this does not exist in paraview.vtk, only by importing the vtk module directly.
Another option, according to a VTK C++ SO question, is to use vtkMarchingSquares.
Either way, paraview.simple.Show apparently only accepts a proxy object as input, which raises the question of how to create a proxy out of the filtered vtkImageData object. To be honest, I have not quite grasped how the visualization pipeline works, despite reading the docs.
Thus far I've only found ways of visualizing VTK objects using vtk directly, via the Kitware examples on GitHub, without using the higher-level features of ParaView.
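For reference, the 50-element count in the setup above follows from the extent alone. Here is a quick plain-Python check that mirrors VTK's convention for mapping an extent to point and cell counts (this is an illustration of the convention, not a call into VTK itself):

```python
# Plain-Python sanity check of how a VTK extent maps to point/cell counts
# (mirrors vtkImageData's convention; VTK itself is not needed).
extent = (0, 10, 0, 5, 0, 0)  # (xmin, xmax, ymin, ymax, zmin, zmax), point indices

points_per_axis = [extent[2 * i + 1] - extent[2 * i] + 1 for i in range(3)]
# a collapsed axis (z here) contributes one layer of cells, not zero
cells_per_axis = [max(n - 1, 1) for n in points_per_axis]

n_points = points_per_axis[0] * points_per_axis[1] * points_per_axis[2]
n_cells = cells_per_axis[0] * cells_per_axis[1] * cells_per_axis[2]
print(n_points, n_cells)  # 66 50
```

So GetNumberOfCells() on the mesh above should report 50, one cell per colored element.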

ProgrammableSource is what you want to use: it runs a Python script on the server side to produce a dataset that ParaView can then show like any other source.

I managed to render it using a TrivialProducer object and its .GetClientSideObject() method, which interfaces ParaView with server-side objects.
Sources: the source code and the tip given by Mathieu Westphal from ParaView support.
from paraview import simple as ps
from paraview import vtk
from paraview.vtk.util.numpy_support import numpy_to_vtk
import numpy as np
def main():
    # Create an image (this is a data object)
    myMesh = vtk.vtkImageData()
    myMesh.SetOrigin(0., 0., 0.)
    myMesh.SetSpacing(0.1, 0.1, 0.)
    myMesh.SetExtent(0, 10, 0, 5, 0, 0)

    # coloring
    blue = np.array([15, 82, 186], dtype=np.ubyte)
    # numpy colors
    scalarsnp = np.tile(blue, (myMesh.GetNumberOfCells(), 1))
    scalarsnp[[9, 49]] = np.array([255, 255, 0], dtype=np.ubyte)  # yellow

    # vtk array colors. Organized as 50 tuples of 3
    scalarsvtk = numpy_to_vtk(scalarsnp, deep=1, array_type=vtk.VTK_UNSIGNED_CHAR)
    scalarsvtk.SetName("colorsArray")

    # allocate the scalars to the vtkImageData object
    # myMesh.AllocateScalars(vtk.VTK_UNSIGNED_CHAR, 3)  # set 3 scalars per cell
    # myMesh.GetCellData().SetScalars(scalarsvtk)  # do not use this in ParaView!!
    colorArrayID = myMesh.GetCellData().AddArray(scalarsvtk)
    myMesh.GetCellData().SetActiveScalars(scalarsvtk.GetName())

    # TrivialProducer to interface ParaView to serverside objects
    tp_mesh = ps.TrivialProducer(registrationName="tp_mesh")
    myMeshClient = tp_mesh.GetClientSideObject()

    # link the vtkImageData object to the proxy manager
    myMeshClient.SetOutput(myMesh)
    tp_mesh.UpdatePipeline()

    # Filter for showing the ImageData to a plane
    mapTexture2Plane = ps.TextureMaptoPlane(registrationName="TM2P_mesh", Input=tp_mesh)

    renderViewMesh = ps.CreateView("RenderView")
    renderViewMesh.Background = [1, 1, 1]
    renderViewMesh.OrientationAxesVisibility = 0
    display = ps.Show(proxy=mapTexture2Plane, view=renderViewMesh)
    display.SetRepresentationType("Surface")
    display.MapScalars = 0  # necessary so as to not generate a colormap
    ps.Interact()  # or just ps.Render()

if __name__ == "__main__":
    main()

How to set Axes limits on OpenTurns Viewer?

I'm using openturns to find the best fit distribution for my data. I got to plot it alright, but the X limit is far bigger than I'd like. My code is:
import statsmodels.api as sm
import openturns as ot
import openturns.viewer as otv
data = in_seconds
sample = ot.Sample(data, 1)
tested_factories = ot.DistributionFactory.GetContinuousUniVariateFactories()
best_model, best_bic = ot.FittingTest.BestModelBIC(sample, tested_factories)
print(best_model)
graph = ot.HistogramFactory().build(sample).drawPDF()
bestPDF = best_model.drawPDF()
bestPDF.setColors(["blue"])
graph.add(bestPDF)
name = best_model.getImplementation().getClassName()
graph.setLegends(["Histogram",name])
graph.setXTitle("Latências (segundos)")
graph.setYTitle("Frequência")
otv.View(graph)
I'd like to set the X limits with something like "graph.setXLim", as we'd do in matplotlib, but I'm stuck since I'm new to OpenTURNS.
Thanks in advance.
Any OpenTURNS graph has a getBoundingBox method which returns the bounding box as a dimension 2 Interval. We can get/set the lower and upper bounds of this interval with getLowerBound and getUpperBound. Each of these bounds is a Point with dimension 2. Hence, we can set the bounds of the graphics prior to the use of the View class.
In the following example, I create a simple graph containing the PDF of the gaussian distribution.
import openturns as ot
import openturns.viewer as otv
n = ot.Normal()
graph = n.drawPDF()
_ = otv.View(graph)
Suppose that I want to set the lower X axis to -1.
The script:
boundingBox = graph.getBoundingBox()
lb = boundingBox.getLowerBound()
print(lb)
produces:
[-4.10428,-0.0195499]
The first value in the Point is the X lower bound and the second is the Y lower bound. The following script sets the first component of the lower bound to -1, wraps the lower bound into the bounding box and sets the bounding box into the graph.
lb[0] = -1.0
boundingBox.setLowerBound(lb)
graph.setBoundingBox(boundingBox)
_ = otv.View(graph)
This produces the following graph.
The advantage of these methods is that they configure the graphics from the library, before the rendering is done by Matplotlib. The drawback is that they are a little more verbose than the Matplotlib counterpart.
Here is a minimal example adapted from the OpenTURNS examples (see http://openturns.github.io/openturns/latest/examples/graphs/graphs_basics.html) that narrows the x range from the initial [-4, 4] to [-2, 2]:
import openturns as ot
import openturns.viewer as viewer
from matplotlib import pylab as plt
n = ot.Normal()
# To configure the look of the plot, we can first observe the type
# of object returned by the `drawPDF` method: it is a `Graph`.
graph = n.drawPDF()
# The `Graph` class provides several methods to configure the legends,
# the title and the colors. Since a graph can contain several sub-graphs,
# `setColors` takes a list of colors as input: each item of
# the list corresponds to one sub-graph.
graph.setXTitle("N")
graph.setYTitle("PDF")
graph.setTitle("Probability density function of the standard gaussian distribution")
graph.setLegends(["N"])
graph.setColors(["blue"])
# Combine several graphics
# In order to combine several graphics, we can use the `add` method.
# Let us create an empirical histogram from a sample.
sample = n.getSample(100)
histo = ot.HistogramFactory().build(sample).drawPDF()
# Then we add the histogram to the `graph` with the `add` method.
# The `graph` then contains two plots.
graph.add(histo)
# Using openturns.viewer
view = viewer.View(graph)
# Get the matplotlib.axes.Axes member with getAxes()
# Similarly, there is a getFigure() method as well
axes = view.getAxes() # axes is a matplotlib object
_ = axes[0].set_xlim(-2.0, 2.0)
plt.show()
You can read the definition of the View object here :
https://github.com/openturns/openturns/blob/master/python/src/viewer.py
As you will see, the View class contains matplotlib objects such as axes and figure. Once accessed by the getAxes (or getFigure) you can use the matplotlib methods.
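To make the last point concrete, once you have the Matplotlib Axes the usual Matplotlib calls apply. Here is a minimal Matplotlib-only illustration of the same `set_xlim` call, with the normal PDF computed by hand so OpenTURNS is not needed (the `Agg` backend is used so it runs headless):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-4, 4, 200)
pdf = np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)  # standard normal PDF

fig, ax = plt.subplots()
ax.plot(x, pdf)
ax.set_xlim(-2.0, 2.0)  # same call applied to view.getAxes()[0] above
print(ax.get_xlim())  # (-2.0, 2.0)
```

This is exactly what `axes[0].set_xlim(-2.0, 2.0)` does in the OpenTURNS example: the `Graph` is rendered into ordinary Matplotlib objects, which you can then adjust freely.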

Finding 3D position of an object given 2 Images

Hello, I would like to find the 3D position of an object given two different views of it.
Things that I can provide here are:
I can calculate the intrinsic matrix of each camera.
I also know the 2D coordinates of the object in each image.
I can provide bounding boxes of the object.
Things that I can not provide here are:
3D position or relative position of the 2 cameras.
3D position of the object.
Measurements of the object.
These are the methods I may be able to use to obtain the center coordinates relative to each camera and the intrinsic parameters:
# This function uses a custom-trained Faster R-CNN model to detect the object;
# the center of the object is computed from the bounding box.
# For simplicity the centers are hardcoded, since the object won't move.
def calculateCenterAndBoundingBox(image):
    ...
    boundingBox1 = [(715.329, 383.64413), (746.09143, 402.87524)]
    boundingBox2 = [(303.78778, 391.57953), (339.4821, 412.69092)]
    if image == 1:
        return (730.7102, 393.2597), boundingBox1
    else:
        return (321.63495, 402.13522), boundingBox2

# for simplicity, both cameras share the same intrinsics
def calculateIntrinsic():
    ...
    return [[512, 0.0, 512],
            [0.0, 483.0443151, 364],
            [0.0, 0.0, 1.0]]
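As a quick consistency check, the hardcoded centers above are just the midpoints of the corresponding bounding boxes, which can be verified in a few lines of plain Python (the `bbox_center` helper is mine, introduced only for this check):

```python
# Verify that the hardcoded center is the bounding-box midpoint.
boundingBox1 = [(715.329, 383.64413), (746.09143, 402.87524)]

def bbox_center(bb):
    (x0, y0), (x1, y1) = bb
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

cx, cy = bbox_center(boundingBox1)
print(cx, cy)  # close to the hardcoded (730.7102, 393.2597)
```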
I tried to determine the position of my object with the 8-point algorithm, so I decided to create some feature keypoints with SIFT using this implementation.
%matplotlib inline
import matplotlib.pylab as plt
import numpy as np
import pysift
import math
import cv2
def myPlot(img):
    plt.figure(figsize=(15, 20))  # display the output image
    plt.imshow(img)
    plt.xticks([])
    plt.yticks([])
    plt.show()

pathToImage1 = "testImage1.png"
c1, bb1 = calculateCenterAndBoundingBox(1)
originalImage1 = cv2.imread(pathToImage1)
img1 = cv2.imread(pathToImage1, 0)
originalImage1 = originalImage1[math.floor(bb1[0][1]):math.floor(bb1[1][1]), math.floor(bb1[0][0]):math.floor(bb1[1][0])]
img1 = img1[math.floor(bb1[0][1]):math.floor(bb1[1][1]), math.floor(bb1[0][0]):math.floor(bb1[1][0])]
keypoints, descriptors = pysift.computeKeypointsAndDescriptors(img1)
img1 = cv2.drawKeypoints(img1, keypoints, originalImage1)
myPlot(img1)

pathToImage2 = "testImage2.png"
c2, bb2 = calculateCenterAndBoundingBox(2)
originalImage2 = cv2.imread(pathToImage2)
img2 = cv2.imread(pathToImage2, 0)
originalImage2 = originalImage2[math.floor(bb2[0][1]):math.floor(bb2[1][1]), math.floor(bb2[0][0]):math.floor(bb2[1][0])]
img2 = img2[math.floor(bb2[0][1]):math.floor(bb2[1][1]), math.floor(bb2[0][0]):math.floor(bb2[1][0])]
keypoints, descriptors = pysift.computeKeypointsAndDescriptors(img2)
img2 = cv2.drawKeypoints(img2, keypoints, originalImage2)
myPlot(img2)
However, I only got 1 feature keypoint instead of 8 or more.
So apparently I can't use the 8-point algorithm in this case.
But I have no other ideas for how to solve this problem given the constraints above.
Is it even possible to calculate the 3D position given only 2D points and the intrinsic matrix of each camera?
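For completeness, the normalized eight-point algorithm the question refers to can be sketched in plain NumPy. This is a minimal illustration (function names are mine), and it still requires at least eight good correspondences, which is exactly what the SIFT step failed to produce here. The synthetic check at the bottom projects random 3D points through two known cameras (reusing the intrinsics from the question) and verifies the epipolar constraint x2ᵀ F x1 ≈ 0 on the estimated F:

```python
import numpy as np

def _normalize(pts):
    """Shift/scale points so the centroid is at the origin and the
    mean distance from it is sqrt(2); returns homogeneous pts and the 3x3 T."""
    c = pts.mean(axis=0)
    scale = np.sqrt(2) / np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
    T = np.array([[scale, 0, -scale * c[0]],
                  [0, scale, -scale * c[1]],
                  [0, 0, 1.0]])
    ph = np.column_stack([pts, np.ones(len(pts))])
    return (T @ ph.T).T, T

def eight_point(x1, x2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from N >= 8 matches."""
    n1, T1 = _normalize(x1)
    n2, T2 = _normalize(x2)
    # Each correspondence yields one linear equation in the 9 entries of F.
    A = np.column_stack([
        n2[:, 0] * n1[:, 0], n2[:, 0] * n1[:, 1], n2[:, 0],
        n2[:, 1] * n1[:, 0], n2[:, 1] * n1[:, 1], n2[:, 1],
        n1[:, 0], n1[:, 1], np.ones(len(n1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2 (fundamental matrices are singular).
    U, S, Vt2 = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt2
    return T2.T @ F @ T1  # undo the normalization

# Synthetic two-view check with the intrinsics from the question.
K = np.array([[512, 0.0, 512], [0.0, 483.0443151, 364], [0.0, 0.0, 1.0]])
th = 0.1  # small rotation about y, plus a translation
R = np.array([[np.cos(th), 0, np.sin(th)], [0, 1, 0], [-np.sin(th), 0, np.cos(th)]])
t = np.array([[1.0], [0.2], [0.1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(-1, 1, (20, 3)) + [0, 0, 5], np.ones(20)])
x1 = (P1 @ X.T).T; x1 = x1[:, :2] / x1[:, 2:]
x2 = (P2 @ X.T).T; x2 = x2[:, :2] / x2[:, 2:]
F = eight_point(x1, x2)
residuals = [abs(np.r_[q2, 1] @ F @ np.r_[q1, 1]) for q1, q2 in zip(x1, x2)]
print(max(residuals))  # tiny for exact, noise-free correspondences
```

Note that without the relative pose of the cameras, F (or the essential matrix derived from it via K) is the most you can recover, and the reconstruction is only defined up to scale.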

How to export a 3D vtk rendered scene to paraview using python?

I've written code to produce cylinder objects using vtk in Python. This code works fine: it produces a 3D scene where I can zoom and orbit around the cylinders I have made. The problem is that I want to export this rendered scene to ParaView, to view and save it for later work. How can I do this?
Here is the code that produce a Y-shape with cylinders:
import vtk
import numpy as np

'''
Adding multiple Actors to one renderer scene using the VTK package with the Python API.
Each cylinder is an Actor with three input specifications: start point, end point and radius.
After creating all the Actors, the preferred Actors will be added to a list and that list will be our input to the renderer scene.
A list or numpy array with appropriate 3*1 shape could be used to specify starting and ending points.

There are two alternative ways to apply the transform.
1) Use vtkTransformPolyDataFilter to create a new transformed polydata.
   This method is useful if the transformed polydata is needed
   later in the pipeline.
   To do this, set USER_MATRIX = True
2) Apply the transform directly to the actor using vtkProp3D's SetUserMatrix.
   No new data is produced.
   To do this, set USER_MATRIX = False
'''
USER_MATRIX = True

def cylinder_object(startPoint, endPoint, radius, my_color="DarkRed"):
    colors = vtk.vtkNamedColors()
    # Create a cylinder.
    # Cylinder height vector is (0,1,0).
    # Cylinder center is in the middle of the cylinder.
    cylinderSource = vtk.vtkCylinderSource()
    cylinderSource.SetRadius(radius)
    cylinderSource.SetResolution(50)
    # Generate a random start and end point
    # startPoint = [0] * 3
    # endPoint = [0] * 3
    rng = vtk.vtkMinimalStandardRandomSequence()
    rng.SetSeed(8775070)  # For testing.
    # Compute a basis
    normalizedX = [0] * 3
    normalizedY = [0] * 3
    normalizedZ = [0] * 3
    # The X axis is a vector from start to end
    vtk.vtkMath.Subtract(endPoint, startPoint, normalizedX)
    length = vtk.vtkMath.Norm(normalizedX)
    vtk.vtkMath.Normalize(normalizedX)
    # The Z axis is an arbitrary vector cross X
    arbitrary = [0] * 3
    for i in range(0, 3):
        rng.Next()
        arbitrary[i] = rng.GetRangeValue(-10, 10)
    vtk.vtkMath.Cross(normalizedX, arbitrary, normalizedZ)
    vtk.vtkMath.Normalize(normalizedZ)
    # The Y axis is Z cross X
    vtk.vtkMath.Cross(normalizedZ, normalizedX, normalizedY)
    matrix = vtk.vtkMatrix4x4()
    # Create the direction cosine matrix
    matrix.Identity()
    for i in range(0, 3):
        matrix.SetElement(i, 0, normalizedX[i])
        matrix.SetElement(i, 1, normalizedY[i])
        matrix.SetElement(i, 2, normalizedZ[i])
    # Apply the transforms
    transform = vtk.vtkTransform()
    transform.Translate(startPoint)    # translate to starting point
    transform.Concatenate(matrix)      # apply direction cosines
    transform.RotateZ(-90.0)           # align cylinder to x axis
    transform.Scale(1.0, length, 1.0)  # scale along the height vector
    transform.Translate(0, .5, 0)      # translate to start of cylinder
    # Transform the polydata
    transformPD = vtk.vtkTransformPolyDataFilter()
    transformPD.SetTransform(transform)
    transformPD.SetInputConnection(cylinderSource.GetOutputPort())
    # Create a mapper and actor for the cylinder
    mapper = vtk.vtkPolyDataMapper()
    actor = vtk.vtkActor()
    if USER_MATRIX:
        mapper.SetInputConnection(cylinderSource.GetOutputPort())
        actor.SetUserMatrix(transform.GetMatrix())
    else:
        mapper.SetInputConnection(transformPD.GetOutputPort())
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(colors.GetColor3d(my_color))
    return actor

def render_scene(my_actor_list):
    renderer = vtk.vtkRenderer()
    for arg in my_actor_list:
        renderer.AddActor(arg)
    namedColors = vtk.vtkNamedColors()
    renderer.SetBackground(namedColors.GetColor3d("SlateGray"))
    window = vtk.vtkRenderWindow()
    window.SetWindowName("Oriented Cylinder")
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    # Visualize
    window.Render()
    interactor.Start()

if __name__ == '__main__':
    my_list = []
    p0 = np.array([0, 0, 0])
    p1 = np.array([0, 10, 0])
    p2 = np.array([7, 17, 0])
    p3 = np.array([-5, 15, 0])
    my_list.append(cylinder_object(p0, p1, 1, "Red"))
    my_list.append(cylinder_object(p1, p2, 0.8, "Green"))
    my_list.append(cylinder_object(p1, p3, 0.75, "Navy"))
    render_scene(my_list)
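As an aside, the basis construction inside cylinder_object (X from start to end, Z from a cross product with an arbitrary vector, Y = Z × X) can be sanity-checked in plain NumPy without VTK. Variable names here are mine, and the arbitrary vector is fixed rather than random:

```python
import numpy as np

start, end = np.array([0.0, 0.0, 0.0]), np.array([0.0, 10.0, 0.0])

# X axis: from start to end, normalized
x = end - start
length = np.linalg.norm(x)
x = x / length

# Z axis: X crossed with an arbitrary non-parallel vector, normalized
arbitrary = np.array([1.0, 2.0, 3.0])
z = np.cross(x, arbitrary)
z = z / np.linalg.norm(z)

# Y axis: Z cross X completes the right-handed frame
y = np.cross(z, x)

R = np.column_stack([x, y, z])  # direction-cosine matrix, columns = axes
print(np.allclose(R.T @ R, np.eye(3)))  # True: the frame is orthonormal
```

The resulting R is exactly what the loop over matrix.SetElement builds into the vtkMatrix4x4, and `length` is the scale applied along the cylinder's height vector.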
I have multiple actors that are all rendered together in one render scene. Can I pass each actor into a vtk.vtkSTLWriter? This does not seem to work!
What you're looking for is subclasses of the vtkExporter class which, as per the linked documentation:
vtkExporter is an abstract class that exports a scene to a file. It is very similar to vtkWriter except that a writer only writes out the geometric and topological data for an object, where an exporter can write out material properties, lighting, camera parameters etc.
As you can see from the inheritance diagram of the class there's about 15 classes that support exporting such a scene into a file that can be viewed in appropriate readers.
IMHO the one you'll have the most luck with is the vtkVRMLExporter class, as VRML is a fairly common format. That being said, I don't believe ParaView supports VRML files (at least based on some pretty ancient posts I've found), but I'm pretty sure MayaVi does.
Alternatively you could, as you mentioned, export objects into STL files, but STL files simply contain triangle coordinates and info on how they connect. Such files cannot describe scene information such as camera parameters or lighting. Also, last I checked, a single STL file can only contain a single object, so your three cylinders would end up as one merged object; that's probably not what you want.
I added this code and it created a VRML file from my rendered scene:
exporter = vtk.vtkVRMLExporter()
exporter.SetRenderWindow(window)
exporter.SetFileName("cylinders.wrl")
exporter.Write()
exporter.Update()

How to save custom vertex properties with openmesh in Python?

I am working with openmesh installed in Python 3.6 via pip. I need to add custom properties to vertices of a mesh in order to store some data at each vertex. My code goes as follows :
import openmesh as OM
import numpy as np
mesh = OM.TriMesh()
#Some vertices
vh0 = mesh.add_vertex(np.array([0,0,0]));
vh1 = mesh.add_vertex(np.array([1,0,0]));
vh2 = mesh.add_vertex(np.array([1,1,0]));
vh3 = mesh.add_vertex(np.array([0,1,0]));
#Some data
data = np.arange(mesh.n_vertices)
#Add custom property
for vh in mesh.vertices():
    mesh.set_vertex_property('prop1', vh, data[vh.idx()])
#Check properties have been added correctly
print(mesh.vertex_property('prop1'))
OM.write_mesh('mesh.om',mesh)
print returns [0, 1, 2, 3]. So far, so good. But when I read the mesh back, the custom property has disappeared:
mesh1 = OM.TriMesh()
mesh1 = OM.read_trimesh('mesh.om')
print(mesh1.vertex_property('prop1'))
returns [None, None, None, None]
I have two guesses :
1 - The property was not saved in the first place
2 - The reader does not know there is a custom property when it reads the file mesh.om
Does anybody know how to properly save and read a mesh with custom vertex properties with openmesh in Python? Or is it even possible (has anybody done it before)?
Or is there something wrong with my code?
Thanks for your help,
Charles.
The OM writer currently does not support custom properties. If you are working with numeric properties, it is probably easiest to convert the data to a NumPy array and save it separately.
Say your mesh and properties are set up like this:
import openmesh as om
import numpy as np
# create example mesh
mesh1 = om.TriMesh()
v00 = mesh1.add_vertex([0,0,0])
v01 = mesh1.add_vertex([0,1,0])
v10 = mesh1.add_vertex([1,0,0])
v11 = mesh1.add_vertex([1,1,0])
mesh1.add_face(v00, v01, v11)
mesh1.add_face(v00, v11, v01)
# set property data
mesh1.set_vertex_property('color', v00, [1,0,0])
mesh1.set_vertex_property('color', v01, [0,1,0])
mesh1.set_vertex_property('color', v10, [0,0,1])
mesh1.set_vertex_property('color', v11, [1,1,1])
You can extract the property data as a numpy array using one of the *_property_array methods and save it alongside the mesh using NumPy's save function.
om.write_mesh('mesh.om', mesh1)
color_array1 = mesh1.vertex_property_array('color')
np.save('color.npy', color_array1)
Loading is similar:
mesh2 = om.read_trimesh('mesh.om')
color_array2 = np.load('color.npy')
mesh2.set_vertex_property_array('color', color_array2)
# verify property data is equal
for vh1, vh2 in zip(mesh1.vertices(), mesh2.vertices()):
    color1 = mesh1.vertex_property('color', vh1)
    color2 = mesh2.vertex_property('color', vh2)
    assert np.allclose(color1, color2)
When you store the data, you should set the property's set_persistent flag to true, as below.
(Sorry for using C++; I don't know the Python API.)
OpenMesh::VPropHandleT<float> vprop_float;
mesh.add_property(vprop_float, "vprop_float");
mesh.property(vprop_float).set_persistent(true);
OpenMesh::IO::write_mesh(mesh, "tmesh.om");
and then you have to request this custom property in your mesh before loading the file with the OM reader. Order is important.
TriMesh readmesh;
OpenMesh::VPropHandleT<float> vprop_float;
readmesh.add_property(vprop_float, "vprop_float");
OpenMesh::IO::read_mesh(readmesh, "tmesh.om");
I referred to the documentation below:
https://www.openmesh.org/media/Documentations/OpenMesh-4.0-Documentation/a00062.html
https://www.openmesh.org/media/Documentations/OpenMesh-4.0-Documentation/a00060.html

How can I create a graph in python (pyqtgraph) where verticies display images?

I would like to construct an interactive graph (as in G = (V, E) with vertices and edges) using python, and I would like to display images on top of each vertex.
I'm using this to visualize a medium-to-large clustering problem, so I'd like whatever backend I use to be very fast (networkx doesn't seem to cut it).
I'm basically looking at creating a set of vertices and assigning an image (or a path to an image, or a function that creates an image) to each. Then I want to specify connections between vertices and their weights.
It's not the end of the world if I have to specify the position of each vertex, but ideally I'd like a layout to be generated automatically from the edge weights.
It would also be cool if I could move the nodes with my mouse, but again, not the end of the world if I have to build that in myself. I just need to get to a starting point.
I was using this demo code to build a graph.
import pyqtgraph as pg
import numpy as np
# Enable antialiasing for prettier plots
pg.setConfigOptions(antialias=True)
w = pg.GraphicsWindow()
w.setWindowTitle('pyqtgraph example: GraphItem')
v = w.addViewBox()
v.setAspectLocked()
g = pg.GraphItem()
v.addItem(g)
## Define positions of nodes
pos = np.array([
    [0, 0],
    [10, 0],
    [0, 10],
    [10, 10],
    [5, 5],
    [15, 5]
])
## Define the set of connections in the graph
adj = np.array([
    [0, 1],
    [1, 3],
    [3, 2],
    [2, 0],
    [1, 5],
    [3, 5],
])
## Define the symbol to use for each node (this is optional)
symbols = ['o', 'o', 'o', 'o', 't', '+']
## Define the line style for each connection (this is optional)
lines = np.array([
    (255, 0, 0, 255, 1),
    (255, 0, 255, 255, 2),
    (255, 0, 255, 255, 3),
    (255, 255, 0, 255, 2),
    (255, 0, 0, 255, 1),
    (255, 255, 255, 255, 4),
], dtype=[('red', np.ubyte), ('green', np.ubyte), ('blue', np.ubyte), ('alpha', np.ubyte), ('width', float)])
## Update the graph
g.setData(pos=pos, adj=adj, pen=lines, size=1, symbol=symbols, pxMode=False)
I tried changing symbols to use a pyqtgraph image item, but that did not seem to work.
# My Code to make an image
img = ibs.get_annot_chips(cm.qaid)
img_item = pg.ImageItem(img)
# ....
# breaks...
symbols = [img_item,'o','o','o','t','+']
Any input or advice on how to do this?
PyQtGraph does not support this, but it would be a very nice feature. If you look in graphicsItems/ScatterPlotItem.py, near the top is a function called renderSymbol() which generates a QImage based on the symbol parameters specified by the user. You could probably modify this by adding:
if isinstance(symbol, QtGui.QImage):
    return symbol
to the top of the function and expect everything to work as you expect (you might need to correct some type checking elsewhere as well).
