I am using HARFANG for a scientific visualization project in VR, with the Python API. I based my work on the tutorial given here: https://github.com/harfang3d/tutorials-hg2/blob/master/scene_vr.py
But there is one thing I can't do :(
Is it possible to display vertices and lines in the VR view?
To do this in the render pipeline, I figured out from the tutorials that the line vid = hg.GetSceneForwardPipelinePassViewId(passId, hg.SFPP_Opaque) would let me get the exact render pass into which I could inject my line draws.
However, I can't get it to work in the VR code. The best I've been able to do so far is to desynchronize the views of the two eyes...
What happens here is that your views are overwritten by the drawing pass of your lines.
You have to understand how BGFX works: it uses a (very simple) system of views, indexed by IDs going from 0 to 255.
As detailed in the manual:
Each call to a drawing function such as DrawLines or DrawModel is queued as multiple draw commands on the specified view. When Frame is called all views are processed in order, from the lowest to the highest id.
What you need to do is to increment the view id before using it for your specific drawing commands:
vid = 0 # keep track of the next free view id
passId = hg.SceneForwardPipelinePassViewId()
# Prepare view-independent render data once
vid, passId = hg.PrepareSceneForwardPipelineCommonRenderData(vid, scene, render_data, pipeline, res, passId)
vr_eye_rect = hg.IntRect(0, 0, vr_state.width, vr_state.height)
# Prepare the left eye render data then draw to its framebuffer
vid, passId = hg.PrepareSceneForwardPipelineViewDependentRenderData(vid, left, scene, render_data, pipeline, res, passId)
vid, passId = hg.SubmitSceneToForwardPipeline(vid, scene, vr_eye_rect, left, pipeline, render_data, res, vr_left_fb.GetHandle())
# Prepare the right eye render data then draw to its framebuffer
vid, passId = hg.PrepareSceneForwardPipelineViewDependentRenderData(vid, right, scene, render_data, pipeline, res, passId)
vid, passId = hg.SubmitSceneToForwardPipeline(vid, scene, vr_eye_rect, right, pipeline, render_data, res, vr_right_fb.GetHandle())
# Display the lines on top of each eye's framebuffer:
hg.SetViewFrameBuffer(vid, vr_left_fb.GetHandle())
hg.SetViewRect(vid, 0, 0, vr_state.width, vr_state.height)
hg.SetViewClear(vid, 0, 0, 1.0, 0)  # flags=0: clear nothing, keep the scene already rendered
hg.SetViewTransform(vid, left.view, left.proj)
draw_line(vid, hg.Vec3(-2, 0.5, 0), hg.Vec3(2, 0.5, 0), hg.Color.Red, hg.Color.Blue)
vid += 1  # move on to a free view id for the right eye
hg.SetViewFrameBuffer(vid, vr_right_fb.GetHandle())
hg.SetViewRect(vid, 0, 0, vr_state.width, vr_state.height)
hg.SetViewClear(vid, 0, 0, 1.0, 0)  # clear nothing here either
hg.SetViewTransform(vid, right.view, right.proj)
draw_line(vid, hg.Vec3(-2, 0.5, 0), hg.Vec3(2, 0.5, 0), hg.Color.Red, hg.Color.Blue)
vid += 1
And from here you can go on with the rest of the code snippet you mentioned.
As stated in the manual, the lines will be drawn after the scene (thanks to the view id being incremented), so you can either clear the depth buffer (the lines will then show through your objects) or keep it (the lines will be properly occluded by them).
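For instance, here is a minimal sketch (assuming the hg.CF_Depth clear flag from the HARFANG API) of clearing only the depth buffer before the line pass, so that the lines always show on top:

# Replace the SetViewClear calls above with this to draw the lines unoccluded:
# the color buffer is kept, the depth buffer is cleared to 1.0.
hg.SetViewClear(vid, hg.CF_Depth, 0x0, 1.0, 0)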
Please also note that you have to increment the view id yourself because you are doing custom rendering operations. Most of the time the API does it for you (as PrepareSceneForwardPipelineViewDependentRenderData or SubmitSceneToForwardPipeline do, for example).
Related
I'm new to OpenGL and I'm trying to move the camera like in a first-person shooter game. I want to use gluLookAt for movement and for looking around the scene, but I can't figure out the camera part.
gl.glMatrixMode(gl.GL_MODELVIEW)
gl.glLoadIdentity()
glu.gluLookAt(current_player.position[0], current_player.position[1], current_player.position[2],
              look_at_position[0], look_at_position[1], 0,
              0, 1, 0)
The look_at_position is the mouse position, but I can't calculate the last value, so I've temporarily set it to 0.
I just want to know how to move the player and the camera using gluLookAt.
Works the same as glm::lookAt(). The first argument is the position you are viewing from (you have that right), then the position you are looking at, and then the up vector (also correct). Here's what I invoke:
//this code is in the mouse callback, both yaw and pitch are mouse inputs
glm::vec3 front;
glm::vec3 right;
front.x = cos(glm::radians(yaw)) * cos(glm::radians(pitch));
front.y = sin(glm::radians(pitch));
front.z = sin(glm::radians(yaw)) * cos(glm::radians(pitch));
cameraFront = glm::normalize(front);
front.x = cos(glm::radians(yaw));
front.z = sin(glm::radians(yaw));
movementFront = glm::normalize(front);
//this is in int main()
view = glm::lookAt(cameraPos, cameraPos + cameraFront, cameraUp);
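Since the question uses the Python GL bindings, here is a rough Python translation of the same idea (my sketch, not the original code; yaw and pitch in degrees come from your mouse input, pos is the player position):

import math

def camera_vectors(yaw, pitch):
    yaw_r, pitch_r = math.radians(yaw), math.radians(pitch)
    # Direction the camera looks along (already unit length)
    front = (math.cos(yaw_r) * math.cos(pitch_r),
             math.sin(pitch_r),
             math.sin(yaw_r) * math.cos(pitch_r))
    # Direction to walk along (yaw only, so movement stays on the ground plane)
    move = (math.cos(yaw_r), 0.0, math.sin(yaw_r))
    return front, move

# The camera sits at the player position and looks at position + front:
# front, move = camera_vectors(yaw, pitch)
# glu.gluLookAt(pos[0], pos[1], pos[2],
#               pos[0] + front[0], pos[1] + front[1], pos[2] + front[2],
#               0, 1, 0)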
I've written code that produces cylinder objects using VTK in Python. It works fine: it produces a 3D scene where I can zoom in and orbit around the cylinders I've made. The problem is that I want to export this rendered scene to ParaView to view it and save it for later work. How can I do this?
Here is the code that produces a Y-shape with cylinders:
import vtk
import numpy as np
'''
Adding multiple Actors to one renderer scene using VTK package with python api.
Each cylinder is an Actor with three input specifications: Startpoint, Endpoint and radius.
After creating all the Actors, the preferred Actors will be added to a list and that list will be our input to the
renderer scene.
A list or numpy array with appropriate 3*1 shape could be used to specify starting and ending points.
There are two alternative ways to apply the transform.
1) Use vtkTransformPolyDataFilter to create a new transformed polydata.
This method is useful if the transformed polydata is needed
later in the pipeline
To do this, set USER_MATRIX = True
2) Apply the transform directly to the actor using vtkProp3D's SetUserMatrix.
No new data is produced.
To do this, set USER_MATRIX = False
'''
USER_MATRIX = True
def cylinder_object(startPoint, endPoint, radius, my_color="DarkRed"):
    colors = vtk.vtkNamedColors()
    # Create a cylinder.
    # Cylinder height vector is (0,1,0).
    # Cylinder center is in the middle of the cylinder
    cylinderSource = vtk.vtkCylinderSource()
    cylinderSource.SetRadius(radius)
    cylinderSource.SetResolution(50)
    # Generate a random start and end point
    # startPoint = [0] * 3
    # endPoint = [0] * 3
    rng = vtk.vtkMinimalStandardRandomSequence()
    rng.SetSeed(8775070)  # For testing.
    # Compute a basis
    normalizedX = [0] * 3
    normalizedY = [0] * 3
    normalizedZ = [0] * 3
    # The X axis is a vector from start to end
    vtk.vtkMath.Subtract(endPoint, startPoint, normalizedX)
    length = vtk.vtkMath.Norm(normalizedX)
    vtk.vtkMath.Normalize(normalizedX)
    # The Z axis is an arbitrary vector cross X
    arbitrary = [0] * 3
    for i in range(0, 3):
        rng.Next()
        arbitrary[i] = rng.GetRangeValue(-10, 10)
    vtk.vtkMath.Cross(normalizedX, arbitrary, normalizedZ)
    vtk.vtkMath.Normalize(normalizedZ)
    # The Y axis is Z cross X
    vtk.vtkMath.Cross(normalizedZ, normalizedX, normalizedY)
    matrix = vtk.vtkMatrix4x4()
    # Create the direction cosine matrix
    matrix.Identity()
    for i in range(0, 3):
        matrix.SetElement(i, 0, normalizedX[i])
        matrix.SetElement(i, 1, normalizedY[i])
        matrix.SetElement(i, 2, normalizedZ[i])
    # Apply the transforms
    transform = vtk.vtkTransform()
    transform.Translate(startPoint)    # translate to starting point
    transform.Concatenate(matrix)      # apply direction cosines
    transform.RotateZ(-90.0)           # align cylinder to x axis
    transform.Scale(1.0, length, 1.0)  # scale along the height vector
    transform.Translate(0, .5, 0)      # translate to start of cylinder
    # Transform the polydata
    transformPD = vtk.vtkTransformPolyDataFilter()
    transformPD.SetTransform(transform)
    transformPD.SetInputConnection(cylinderSource.GetOutputPort())
    # Create a mapper and actor for the cylinder
    mapper = vtk.vtkPolyDataMapper()
    actor = vtk.vtkActor()
    if USER_MATRIX:
        mapper.SetInputConnection(cylinderSource.GetOutputPort())
        actor.SetUserMatrix(transform.GetMatrix())
    else:
        mapper.SetInputConnection(transformPD.GetOutputPort())
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(colors.GetColor3d(my_color))
    return actor

def render_scene(my_actor_list):
    renderer = vtk.vtkRenderer()
    for arg in my_actor_list:
        renderer.AddActor(arg)
    namedColors = vtk.vtkNamedColors()
    renderer.SetBackground(namedColors.GetColor3d("SlateGray"))
    window = vtk.vtkRenderWindow()
    window.SetWindowName("Oriented Cylinder")
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    # Visualize
    window.Render()
    interactor.Start()

if __name__ == '__main__':
    my_list = []
    p0 = np.array([0, 0, 0])
    p1 = np.array([0, 10, 0])
    p2 = np.array([7, 17, 0])
    p3 = np.array([-5, 15, 0])
    my_list.append(cylinder_object(p0, p1, 1, "Red"))
    my_list.append(cylinder_object(p1, p2, 0.8, "Green"))
    my_list.append(cylinder_object(p1, p3, 0.75, "Navy"))
    render_scene(my_list)
I have multiple actors that are all rendered together in one scene. Can I pass each actor into a vtk.vtkSTLWriter? This doesn't seem to work!
What you're looking for is the subclasses of the vtkExporter class which, as per the linked docs:
vtkExporter is an abstract class that exports a scene to a file. It is very similar to vtkWriter except that a writer only writes out the geometric and topological data for an object, where an exporter can write out material properties, lighting, camera parameters etc.
As you can see from the inheritance diagram of the class, there are about 15 classes that support exporting such a scene into a file that can be viewed in an appropriate reader.
IMHO the one you'll have the most luck with is the vtkVRMLExporter class, as it's a fairly common format. That being said, I don't believe ParaView supports VRML files (at least based on some pretty ancient posts I've found), but I'm pretty sure MayaVi does.
Alternatively you could, as you mentioned, export objects into STL files, but STL files simply contain triangle coordinates and info on how they connect. Such files cannot describe scene-level info such as camera or lighting settings. Also, last I checked, a single STL file can only contain a single object, so your three cylinders would end up merged into one object; that's probably not what you want.
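If per-object geometry is all you need, though, here is a minimal sketch of writing each cylinder to its own STL file (my addition; it assumes USER_MATRIX = False so transformPD bakes the transform into the polydata, and that you keep a reference to each transformPD):

writer = vtk.vtkSTLWriter()
writer.SetInputConnection(transformPD.GetOutputPort())  # one cylinder's transformed polydata
writer.SetFileName("cylinder_0.stl")  # hypothetical name, one file per cylinder
writer.Write()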
I added this code and it created a VRML file from my rendered scene.
exporter = vtk.vtkVRMLExporter()
exporter.SetRenderWindow(window)
exporter.SetFileName("cylinders.wrl")
exporter.Write()  # a separate Update() call is redundant here; for a vtkExporter it just triggers Write() again
I'm looking for a Python function or script that checks the borders of all UV shells in the scene, flagging any shell that exceeds the border or comes too close to it.
The scripts I found are mainly used to find all uv shells in a selected object.
https://polycount.com/discussion/196753/maya-python-get-a-list-of-all-uv-shells-in-a-selected-object
But I want to check the borders of all UV shells, and if there are any errors in the scene, it should show me exactly which model is irregular.
Thanks,
This is a very rudimentary example. It loops over all the meshes in the scene, collecting their UV bounding boxes using cmds.polyEvaluate. If it finds anything that sticks outside the supplied bounding box, it adds it to a list. It returns two things: first, the UV bounds of the entire scene; second, the list of items outside the target bounding box.
import maya.cmds as cmds

def scene_uv_bounds(target=(0, 0, 1, 1)):
    umin, vmin, umax, vmax = 0, 0, 0, 0
    out_of_bounds = []  # kept outside the loop so results accumulate across meshes
    for item in cmds.ls(type='mesh'):
        # polyEvaluate -b2 returns [(umin, umax), (vmin, vmax)]
        uvals, vvals = cmds.polyEvaluate(item, b2=True)
        # unpack into separate values
        uumin, uumax = uvals
        vvmin, vvmax = vvals
        if uumin < target[0] or vvmin < target[1] or uumax > target[2] or vvmax > target[3]:
            out_of_bounds.append(item)
        umin = min(umin, uumin)
        umax = max(umax, uumax)
        vmin = min(vmin, vvmin)
        vmax = max(vmax, vvmax)
    return (umin, vmin, umax, vmax), out_of_bounds

# usage
uv_bounds, out_of_bounds_meshes = scene_uv_bounds()
Depending on your content you might need to manage the active UV set on the different items, but for simple single-UV-set content this catches most problems.
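For example, a short sketch of forcing a particular UV set to be current before evaluating (my addition; 'map1' is just the common default set name):

# polyEvaluate reports bounds for the current UV set, so make the set
# you care about current on each mesh first.
for item in cmds.ls(type='mesh'):
    uv_sets = cmds.polyUVSet(item, query=True, allUVSets=True) or []
    if 'map1' in uv_sets:
        cmds.polyUVSet(item, currentUVSet=True, uvSet='map1')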
I am trying to display images obtained with a CT scan using numpy/VTK. To do so, I followed this sample code and the answer to this question, but I do not get good results and I do not know why.
I have checked that I load the data correctly, so it seems I am doing something wrong when rendering. Any help is highly appreciated. This is my result so far:
Thanks in advance.
This is my code:
import os
import sys
import pylab
import glob
import vtk
import numpy as np
#We order all the directories by name
path="data/Images/"
tulip_files = [t for t in os.listdir(path)]
tulip_files.sort()  # os.listdir does not return the files in the right order, so we need to sort them
# Function that opens all the images in a folder and stacks them into a volume
def imageread(filePath):
    filenames = [img for img in glob.glob(filePath)]
    filenames.sort()
    temp = pylab.imread(filenames[0])
    d, w = temp.shape
    h = len(filenames)
    print 'width, depth, height : ', w, d, h
    volume = np.zeros((w, d, h), dtype=np.uint16)
    k = 0
    for img in filenames:  # assuming tif
        im = pylab.imread(img)
        assert im.shape == (500, 500), 'Image with an unexpected size'
        volume[:, :, k] = im
        k += 1
    return volume
#We create the data we want to render: a 3D image built from an X-ray CT scan of an object. We store the values of each
#slice and stack them into the volume along the z axis
matrix_full = imageread(path+'Image15/raw/reconstruction/*.tif')
# For VTK to be able to use the data, it must be stored as a VTK-image. This can be done by the vtkImageImport-class which
# imports raw data and stores it.
dataImporter = vtk.vtkImageImport()
# The previously created array is converted to a string of chars and imported.
data_string = matrix_full.tostring()
dataImporter.CopyImportVoidPointer(data_string, len(data_string))
# The type of the newly imported data is set to unsigned short (uint16)
dataImporter.SetDataScalarTypeToUnsignedShort()
# Because the data that is imported only contains an intensity value (it isn't RGB-coded or something similar), the importer
# must be told this is the case.
dataImporter.SetNumberOfScalarComponents(1)
# The following two functions describe how the data is stored and the dimensions of the array it is stored in.
w, h, d = matrix_full.shape
dataImporter.SetDataExtent(0, h-1, 0, d-1, 0, w-1)
dataImporter.SetWholeExtent(0, h-1, 0, d-1, 0, w-1)
# This class stores color data and can create color tables from a few color points.
colorFunc = vtk.vtkPiecewiseFunction()
colorFunc.AddPoint(0, 0.0);
colorFunc.AddPoint(65536, 1);
# The following class is used to store transparency-values for later retrieval.
alphaChannelFunc = vtk.vtkPiecewiseFunction()
#Create transfer mapping scalar value to opacity
alphaChannelFunc.AddPoint(0, 0.0);
alphaChannelFunc.AddPoint(65536, 1);
# The previous two classes stored properties. Because we want to apply these properties to the volume we want to render,
# we have to store them in a class that stores volume properties.
volumeProperty = vtk.vtkVolumeProperty()
volumeProperty.SetColor(colorFunc)
volumeProperty.SetScalarOpacity(alphaChannelFunc)
#volumeProperty.ShadeOn();
# This class describes how the volume is rendered (through ray tracing).
compositeFunction = vtk.vtkVolumeRayCastCompositeFunction()
# We can finally create our volume. We also have to specify the data for it, as well as how the data will be rendered.
volumeMapper = vtk.vtkVolumeRayCastMapper()
volumeMapper.SetMaximumImageSampleDistance(0.01) # function to reduce the spacing between each image
volumeMapper.SetVolumeRayCastFunction(compositeFunction)
volumeMapper.SetInputConnection(dataImporter.GetOutputPort())
# The class vtkVolume is used to pair the previously declared volume as well as the properties to be used when rendering that volume.
volume = vtk.vtkVolume()
volume.SetMapper(volumeMapper)
volume.SetProperty(volumeProperty)
# With almost everything else ready, it's time to initialize the renderer and window, as well as creating a method for exiting the application
renderer = vtk.vtkRenderer()
renderWin = vtk.vtkRenderWindow()
renderWin.AddRenderer(renderer)
renderInteractor = vtk.vtkRenderWindowInteractor()
renderInteractor.SetRenderWindow(renderWin)
# We add the volume to the renderer ...
renderer.AddVolume(volume)
# ... set background color to white ...
renderer.SetBackground(1,1,1)
# ... and set window size.
renderWin.SetSize(550, 550)
renderWin.SetMultiSamples(4)
# A simple function to be called when the user decides to quit the application.
def exitCheck(obj, event):
if obj.GetEventPending() != 0:
obj.SetAbortRender(1)
# Tell the application to use the function as an exit check.
renderWin.AddObserver("AbortCheckEvent", exitCheck)
renderInteractor.Initialize()
# Because nothing will be rendered without any input, we order the first render manually before control is handed over to the main-loop.
renderWin.Render()
renderInteractor.Start()
Finally I found a solution. I made two important changes:
Change the opacity values. I have a lot of near-to-black voxels, so I modified the opacity to make them fully transparent (0.0):
alphaChannelFunc.AddPoint(15000, 0.0);
alphaChannelFunc.AddPoint(65536, 1);
Change the array order. It seems that the array order in VTK is Fortran order, so I changed the following calls to define the axes correctly:
dataImporter.SetDataExtent(0, h-1, 0, d-1, 0, w-1)
dataImporter.SetWholeExtent(0, h-1, 0, d-1, 0, w-1)
And now it works!
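A side note (my addition, not part of the original fix): VTK stores image data with the x index varying fastest in memory, so another option is to keep the extents in the array's own (w, d, h) order and flatten the numpy volume in Fortran order so its first axis varies fastest:

# Hypothetical alternative, where (w, d, h) = matrix_full.shape:
data_string = matrix_full.tostring(order='F')  # first axis varies fastest in the buffer
dataImporter.CopyImportVoidPointer(data_string, len(data_string))
dataImporter.SetDataExtent(0, w-1, 0, d-1, 0, h-1)
dataImporter.SetWholeExtent(0, w-1, 0, d-1, 0, h-1)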
I want to reserve some space on the screen for my Gtk application written in Python. I've written this function:
import xcb, xcb.xproto
import struct
def reserve_space(xid, data):
    connection = xcb.connect()
    atom_cookie = connection.core.InternAtom(True, len("_NET_WM_STRUT_PARTIAL"),
                                             "_NET_WM_STRUT_PARTIAL")
    type_cookie = connection.core.InternAtom(True, len("CARDINAL"), "CARDINAL")
    atom = atom_cookie.reply().atom
    atom_type = type_cookie.reply().atom
    data_p = struct.pack("I I I I I I I I I I I I", *data)
    strat_cookie = connection.core.ChangeProperty(xcb.xproto.PropMode.Replace, xid,
                                                  atom, xcb.xproto.Atom.CARDINAL, 32, len(data_p), data_p)
    connection.flush()
Its call looks like this:
utils.reserve_space(xid, [0, 60, 0, 0, 0, 0, 24, 767, 0, 0, 0, 0])
Unfortunately, it doesn't work. Where is the error in my code?
UPD:
Here is my xprop output. My WM is Compiz.
I have uploaded a gist that demonstrates how to specify a strut across the top of the current monitor for what might be a task-bar. It may help explain some of this.
The gist of my gist is below:
window = gtk.Window()
window.show_all()
topw = window.get_toplevel().window
topw.property_change("_NET_WM_STRUT","CARDINAL",32,gtk.gdk.PROP_MODE_REPLACE,
[0, 0, bar_size, 0])
topw.property_change("_NET_WM_STRUT_PARTIAL","CARDINAL",32,gtk.gdk.PROP_MODE_REPLACE,
[0, 0, bar_size, 0, 0, 0, 0, 0, x, x+width, 0, 0])
I found the strut arguments confusing at first, so here is an explanation that I hope is clearer:
We set _NET_WM_STRUT, the older mechanism, as well as _NET_WM_STRUT_PARTIAL, but window managers ignore the former if they support the latter. The numbers in the array are as follows:
0, 0, bar_size, 0 are the number of pixels to reserve along each edge of the screen given in the order left, right, top, bottom. Here the size of the bar is reserved at the top of the screen and the other edges are left alone.
_NET_WM_STRUT_PARTIAL also supplies a further four pairs, each being a start and end position for the strut (they don't need to occupy the entire edge).
In the example, we set the top start to the current monitor's x co-ordinate and the top-end to the same value plus that monitor's width. The net result is that space is reserved only on the current monitor.
Note that co-ordinates are specified relative to the screen (i.e. all monitors together).
(see the referenced gist for the full context)
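For the x and width values used in the strut above, here is a sketch of reading the current monitor's geometry in PyGTK (my addition; the gist may obtain them differently):

# Geometry of the monitor the window is on, in screen coordinates.
screen = window.get_screen()
monitor = screen.get_monitor_at_window(topw)
geo = screen.get_monitor_geometry(monitor)
x, width = geo.x, geo.width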
Changing the call to ChangePropertyChecked() and then checking the result gives a BadLength exception.
I think the bug here is that the ChangeProperty() parameter data_len is the number of elements of the size given by format, not the number of bytes, in the property data.
Slightly modified code which works for me:
def reserve_space(xid, data):
    connection = xcb.connect()
    atom_cookie = connection.core.InternAtom(False, len("_NET_WM_STRUT_PARTIAL"),
                                             "_NET_WM_STRUT_PARTIAL")
    atom = atom_cookie.reply().atom
    data_p = struct.pack("12I", *data)
    # data_len is the number of 32-bit elements, not the number of bytes
    strat_cookie = connection.core.ChangePropertyChecked(xcb.xproto.PropMode.Replace, xid,
                                                         atom, xcb.xproto.Atom.CARDINAL, 32, len(data_p) // 4, data_p)
    strat_cookie.check()
    connection.flush()