Maya – Select objects in viewport via Python

Can anyone help me? Is it possible to write a Python script that automatically selects every object visible in Maya's viewport?

It's very possible, though you have to use Maya's API to do it. You can use OpenMayaUI.MDrawTraversal to collect all objects within a camera's frustum.
This may seem more long-winded than using OpenMaya.MGlobal.selectFromScreen, but it gives you a few benefits:
You can do it with any camera, even if it isn't being used by an active view.
You can do everything you need in memory, without selecting anything or forcing a redraw.
OpenMaya.MGlobal.selectFromScreen is interface-dependent, meaning it can't be executed in Maya batch jobs. MDrawTraversal works in either case.
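If you want to branch on that yourself, a minimal guard (my own sketch, not part of the original approach) could look like this:
import maya.cmds as cmds

# In batch mode there is no active viewport, so a screen-based selection
# isn't available and the API traversal shown below is the way to go.
if cmds.about(batch=True):
    print("Running headless; use MDrawTraversal instead of selectFromScreen.")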
That being said, here's an example that will create a bunch of random boxes, create a camera looking at them, then select all boxes that are within the camera's view:
import random
import maya.cmds as cmds
import maya.OpenMaya as OpenMaya
import maya.OpenMayaUI as OpenMayaUI
# Create a new camera.
cam, cam_shape = cmds.camera()
cmds.move(15, 10, 15, cam)
cmds.rotate(-25, 45, 0, cam)
cmds.setAttr("{}.focalLength".format(cam), 70)
cmds.setAttr("{}.displayCameraFrustum".format(cam), True)
# Create a bunch of boxes at random positions.
val = 10
for i in range(50):
    new_cube, _ = cmds.polyCube()
    cmds.move(random.uniform(-val, val), random.uniform(-val, val), random.uniform(-val, val), new_cube)
# Add camera to MDagPath.
mdag_path = OpenMaya.MDagPath()
sel = OpenMaya.MSelectionList()
sel.add(cam)
sel.getDagPath(0, mdag_path)
# Create frustum object with camera.
draw_traversal = OpenMayaUI.MDrawTraversal()
draw_traversal.setFrustum(mdag_path, cmds.getAttr("defaultResolution.width"), cmds.getAttr("defaultResolution.height")) # Use render's resolution.
draw_traversal.traverse() # Traverse scene to get all objects in the camera's view.
frustum_objs = []
# Loop through objects within frustum.
for i in range(draw_traversal.numberOfItems()):
    # The traversal returns shape nodes, so fetch each shape's transform.
    shape_dag_path = OpenMaya.MDagPath()
    draw_traversal.itemPath(i, shape_dag_path)
    transform_dag_path = OpenMaya.MDagPath()
    OpenMaya.MDagPath.getAPathTo(shape_dag_path.transform(), transform_dag_path)
    # Get the object's long name and make sure it's a valid transform.
    obj = transform_dag_path.fullPathName()
    if cmds.objExists(obj):
        frustum_objs.append(obj)
# At this point we have a list of objects that we can filter by type and do whatever we want.
# In this case just select them.
cmds.select(frustum_objs)
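As a small follow-up to the "filter by type" comment above, here is one way you could keep only the transforms that carry a mesh shape (my own sketch, using plain cmds calls):
# Hypothetical filter: keep only transforms with a mesh shape underneath.
mesh_transforms = [obj for obj in frustum_objs
                   if cmds.listRelatives(obj, shapes=True, type="mesh")]
cmds.select(mesh_transforms)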
Hope that gives you a better direction.

You can try the following script:
import maya.OpenMaya as om
import maya.OpenMayaUI as omUI
view = omUI.M3dView.active3dView()
om.MGlobal.selectFromScreen(0, 0, view.portWidth(), view.portHeight(), om.MGlobal.kReplaceList)
I found this snippet on https://forums.cgsociety.org/t/list-objects-in-viewport/1463426, and it seems to do the trick. You can read through the discussion for more information.
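If you also want the result back as a list of names (rather than just changing the active selection), one simple variation, untested but using only standard calls, is to read the selection with cmds right after the call:
import maya.cmds as cmds
import maya.OpenMaya as om
import maya.OpenMayaUI as omUI

view = omUI.M3dView.active3dView()
om.MGlobal.selectFromScreen(0, 0, view.portWidth(), view.portHeight(), om.MGlobal.kReplaceList)
# Read the resulting selection back as long node names.
objects_in_view = cmds.ls(selection=True, long=True)
print(objects_in_view)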

Related

Maya Python - How to set input mesh on MASH programmatically?

I'm trying to create a MASH in Maya and set the input mesh using the Python API. This is incredibly simple in the GUI but I've spent hours and can't figure out how to make it work in the API. Here's my code so far:
from maya import cmds
import MASH.api as mapi
#create backplate
backplate = cmds.polyPlane(w=10,h=10)
#create cube
cube = cmds.polyCube(w=10,h=10)
#create mash
cmds.select(cube[0])
mashNetwork = mapi.Network()
mashNetwork.createNetwork()
#set mash to mesh distribution type
cmds.setAttr(mashNetwork.distribute + '.arrangement', 4)
What do I do after this? I want the backplate to be the input mesh to the MASH. I know the parameter I need to set can be accessed by this: mashNetwork.distribute + '.inputMesh'
But no matter what I try I get an error. I've tried setAttr, connectAttr, all with no luck. Anyone know how to do this?
You need to connect the outMesh of the shape node to the inputMesh attribute of the MASH_Distribute node. You can make the connection manually first and inspect it in the Node Editor to see how it works before scripting it.
cmds.connectAttr('pPlaneShape1.outMesh', mashNetwork.distribute + '.inputMesh')
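Since the plane in your script isn't guaranteed to be named pPlaneShape1, it's a bit safer to look the shape up from the backplate variable you already have; a small sketch (untested) along those lines:
# backplate[0] is the transform returned by polyPlane; get its shape node.
backplate_shape = cmds.listRelatives(backplate[0], shapes=True, fullPath=True)[0]
cmds.connectAttr(backplate_shape + '.outMesh', mashNetwork.distribute + '.inputMesh')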

Blender: Rendering images and producing an animation

I am new to Blender and I’m having a bit of a tough time understanding its key concepts. I am using Blender 2.82 and working with Python scripting. My project consists of using Python to do the following:
Move object slightly.
Take picture with camera 1, camera 2, camera 3, and camera 4.
Repeat.
I had a script that did that. However, I wanted to save the position of my object (a sphere) every time I changed it during the loop, as an animation, so I could later see what I did. When I try to insert keyframes in my loop, it looks as if my sphere never moved. Below is my code. When I remove the lines that include frame_set and keyframe_insert, my sphere moves, as I can see from my rendered images. I think I am confusing some kind of concept… Any help would be appreciated. The goal is to produce the images I would obtain from four cameras placed around a moving object, so as to simulate a mocap system.
Why does inserting a keyframe change all of the images being rendered?
import bpy, bgl, blf, sys
import numpy as np
from bpy import data, ops, props, types, context
cameraNames=''
# Loop all command line arguments and try to find "cameras=east" or similar
for arg in sys.argv:
    words = arg.split('=')
    if words[0] == 'cameras':
        cameraNames = words[1]
sceneKey = bpy.data.scenes.keys()[0]
# Loop all objects and try to find Cameras
bpy.data.scenes[sceneKey].render.image_settings.file_format = 'JPEG'
bpy.data.scenes[sceneKey].cycles.max_bounces=12
bpy.data.scenes[sceneKey].render.tile_x=8
bpy.data.scenes[sceneKey].render.tile_y=8
bpy.data.scenes[sceneKey].cycles.samples = 16
bpy.data.scenes[sceneKey].cycles.caustics_reflective = False
bpy.data.scenes[sceneKey].cycles.caustics_refractive = False
bpy.data.objects['Sphere'].location=[1,1,1]
frame_num=0
for i in range(0, 2):  # nframes
    bpy.context.scene.frame_set(frame_num)
    for obj in bpy.data.objects:
        # Find cameras that match cameraNames
        if obj.type == 'CAMERA' and (cameraNames == '' or obj.name.find(cameraNames) != -1):
            # Set scene's camera and output filename
            bpy.data.scenes[sceneKey].camera = obj
            bpy.data.scenes[sceneKey].render.filepath = '//' + obj.name + "_" + str(i)
            # Render scene and store the image
            bpy.ops.render.render(write_still=True)
    bpy.data.objects['Sphere'].keyframe_insert(data_path="location", index=-1)
    frame_num += 1
    bpy.data.objects['Sphere'].location = [2, 2, 1]
I have no knowledge of Python, but you could do the keyframe animation manually and write a script that renders the pictures after each set of keyframes (i.e. whenever the object has moved to a new location).
It is not too hard (I'm talking only about the animation): just press the circle button near the play button on the timeline. This turns on auto keyframing, so you only have to go to the desired frame and move the object as needed.
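If you'd rather keep everything scripted, one thing worth trying (a sketch of the usual ordering, not verified against your exact scene) is to set the frame first, then move the object, then insert the keyframe, so the keyframe records the position you just set and frame_set won't snap the sphere back to an older keyframed value:
import bpy

sphere = bpy.data.objects['Sphere']
positions = [(1, 1, 1), (2, 2, 1)]  # example positions; substitute your own motion

for frame, pos in enumerate(positions):
    bpy.context.scene.frame_set(frame)       # go to the frame first
    sphere.location = pos                    # then move the sphere
    sphere.keyframe_insert(data_path="location", index=-1)  # record it at this frame
    # ...render from each camera here, exactly as in your existing loop...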

Getting, Storing, Setting and Modifying Transform Attributes through PyMel

I'm working on something that gets and stores the transforms of an object moved by the user, and then lets the user click a button to return the object to those stored values.
So far, I have figured out how to get the attribute and set it. However, I can only get and set once. Is there a way to do this multiple times within a single run of the script, or do I have to keep rerunning the script? This is a vital question for me to get crystal clear on.
basically:
btn1 = button(label="Get x Shape", parent=layout, command='GetPressed()')
btn2 = button(label="Set x Shape", parent=layout, command='SetPressed()')

def GetPressed():
    print gx  # to see value
    gx = PyNode('object').tx.get()  # to get the attr

def SetPressed():
    PyNode('object').tx.set(gx)  # set the attr???
I'm not 100% sure how to do this correctly, or whether I'm going about it the right way.
Thanks
You aren't passing the variable gx, so SetPressed() will fail if you run it as written (it might work sporadically if you executed the gx = ... line directly in the listener before running the whole thing, but it will be erratic). You'll need to provide a value to your SetPressed() function so the set operation has something to work with.
As an aside, using string names to invoke your button functions isn't a good way to go: your code will work when executed from the listener but will not work if bundled into a function. When you use a string name for the command, Maya will only find the function if it lives in the global namespace; that's where your listener commands go, but it's hard to reach from other functions.
Here's a minimal example of how to do this by keeping all of the functions and variables inside another function:
import maya.cmds as cmds
import pymel.core as pm
def example_window():
    # make the UI
    with pm.window(title='example') as w:
        with pm.rowLayout(nc=3) as cs:
            field = pm.floatFieldGrp(label='value', nf=3)
            get_button = pm.button('get')
            set_button = pm.button('set')

    # define these after the UI is made, so they inherit the names
    # of the UI elements
    def get_command(_):
        sel = pm.ls(sl=True)
        if not sel:
            cmds.warning("nothing selected")
            return
        value = sel[0].t.get() + [0]
        pm.floatFieldGrp(field, e=True, v1=value[0], v2=value[1], v3=value[2])

    def set_command(_):
        sel = pm.ls(sl=True)
        if not sel:
            cmds.warning("nothing selected")
            return
        value = pm.floatFieldGrp(field, q=True, v=True)
        sel[0].t.set(value[:3])

    # edit the existing UI to attach the commands. They'll remember the UI
    # pieces they are connected to
    pm.button(get_button, e=True, command=get_command)
    pm.button(set_button, e=True, command=set_command)

    w.show()

# open the window
example_window()
In general, this kind of thing is the trickiest part of doing Maya GUI work: you need to make sure that all of the functions and handlers can see each other and share information. In this example the function shares that information by defining the handlers after the UI exists, so they inherit the names of the UI pieces and know what to work on. There are other ways to do this (classes are the most sophisticated and complex), but this is the minimalist approach. There's a deeper dive on how to do this here.
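For comparison, here's a bare-bones sketch of the class-based version mentioned above (my own illustration, not tested in your scene): the state lives on the instance instead of in a closure.
import pymel.core as pm

class TransformStore(object):
    """Minimal example: store and restore a translate value via instance state."""

    def __init__(self):
        self.stored = None
        with pm.window(title='store example') as self.win:
            with pm.columnLayout():
                pm.button(label='Get', command=self.get_pressed)
                pm.button(label='Set', command=self.set_pressed)
        self.win.show()

    def get_pressed(self, *_):
        sel = pm.ls(sl=True)
        if sel:
            self.stored = sel[0].t.get()  # remember the current translate

    def set_pressed(self, *_):
        sel = pm.ls(sl=True)
        if sel and self.stored is not None:
            sel[0].t.set(self.stored)     # restore the remembered translate

TransformStore()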

VTK update position of multiple render windows

I'm running into a bit of a problem when trying to run multiple render windows in a Python VTK application I'm writing. The application is an attempt to render a 3D model in two separate views for a stereo application (i.e. Left render and right render), but I'm having an issue with updating the cameras of each window simultaneously. I currently have two nearly identical pipelines set up, each with its own vtkCamera, vtkRenderWindow, vtkRenderer, and vtkRenderWindowInteractor, the only difference being that the right camera is positionally shifted 30 units along the X axis.
Each of the render window interactors is being updated via the vtkRenderWindowInteractor.AddObserver() method that calls a simple function to reset the cameras to their original positions and orientations. The biggest issue is that this only seems to occur on one window at a time, specifically the window in focus at the time. It's as if the interactor's timer just shuts off once the interactor loses focus. In addition, when I hold down the mouse (And thus move the camera around), the rendered image begins to 'drift', resetting to a less and less correct position even though I have hardcoded the coordinates into the function.
Obviously I'm very new to VTK, and much of what goes on is fairly confusing as so much is hidden in the backend, so it would be amazing to acquire some assistance on the matter. My code is below. Thanks guys!
from vtk import *
from parse import *
import os
import time, signal, threading

def ParseSIG(signum, stack):
    print signum
    return

class vtkGyroCallback():
    def __init__(self):
        pass

    def execute(self, obj, event):
        # Modified segment to accept input for leftCam position
        gyro = raw_input()
        xyz = parse("{} {} {}", gyro)
        # This still prints every 100ms, but the camera doesn't update!
        print xyz
        # These arguments are updated and the call is made.
        self.leftCam.SetPosition(float(xyz[0]), float(xyz[1]), float(xyz[2]))
        self.leftCam.SetFocalPoint(0, 0, 0)
        self.leftCam.SetViewUp(0, 1, 0)
        self.leftCam.OrthogonalizeViewUp()
        self.rightCam.SetPosition(10, 40, 100)
        self.rightCam.SetFocalPoint(0, 0, 0)
        self.rightCam.SetViewUp(0, 1, 0)
        self.rightCam.OrthogonalizeViewUp()
        # Just a guess
        obj.Update()
        return
def main():
    # create two cameras
    cameraR = vtkCamera()
    cameraR.SetPosition(0, 0, 200)
    cameraR.SetFocalPoint(0, 0, 0)
    cameraL = vtkCamera()
    cameraL.SetPosition(40, 0, 200)
    cameraL.SetFocalPoint(0, 0, 0)
    # create a rendering window and renderer
    renR = vtkRenderer()
    renR.SetActiveCamera(cameraR)
    renL = vtkRenderer()
    renL.SetActiveCamera(cameraL)
    # create source
    reader = vtkPolyDataReader()
    path = "/home/compilezone/Documents/3DSlicer/SlicerScenes/LegoModel-6_25/Model_5_blood.vtk"
    reader.SetFileName(path)
    reader.Update()
    # create render window
    renWinR = vtkRenderWindow()
    renWinR.AddRenderer(renR)
    renWinR.SetWindowName("Right")
    renWinL = vtkRenderWindow()
    renWinL.AddRenderer(renL)
    renWinL.SetWindowName("Left")
    # create a render window interactor
    irenR = vtkRenderWindowInteractor()
    irenR.SetRenderWindow(renWinR)
    irenL = vtkRenderWindowInteractor()
    irenL.SetRenderWindow(renWinL)
    # mapper
    mapper = vtkPolyDataMapper()
    mapper.SetInput(reader.GetOutput())
    # actor
    actor = vtkActor()
    actor.SetMapper(mapper)
    # assign actor to the renderer
    renR.AddActor(actor)
    renL.AddActor(actor)
    # enable user interface interactor
    renWinR.Render()
    renWinL.Render()
    irenR.Initialize()
    irenL.Initialize()
    # Create callback object for camera manipulation
    cb = vtkGyroCallback()
    cb.rightCam = cameraR
    cb.leftCam = cameraL
    renWinR.AddObserver('InteractionEvent', cb.execute)
    renWinL.AddObserver('InteractionEvent', cb.execute)
    irenR.AddObserver('TimerEvent', cb.execute)
    irenL.AddObserver('TimerEvent', cb.execute)
    timerIDR = irenR.CreateRepeatingTimer(100)
    timerIDL = irenL.CreateRepeatingTimer(100)
    irenR.Start()
    irenL.Start()

if __name__ == '__main__':
    main()
EDIT:
Upon further viewing it seems like the TimerEvents aren't firing more than once in a row after a MouseClickEvent and I have no idea why.
EDIT 2: Scratch that, they are most definitely firing, as per some test outputs I embedded in the code. I modified the code to accept user input for the self.leftCam.SetPosition() call within the vtkGyroCallback.execute() method (thus replacing the hardcoded "10, 40, 100" parameters with three input variables), then piped the output of a script that simply prints three random values into my main program. What this should have accomplished is a render window whose camera constantly changes position. Instead, nothing happens until I click on the window, at which point the expected behaviour begins. The whole time, timer events are still firing and input is still being accepted, yet the cameras refuse to update until a mouse event occurs within the scope of their window. What is the deal?
EDIT 3: I've dug around some more and found that within the vtkObject::InvokeEvent() method that is called within every interaction event there is a focus loop that overrides all observers that do not pertain to the object in focus. I'm going to investigate if there is a way to remove focus so that it will instead bypass this focus loop and go to the unfocused loop that handles non focused objects.
So the solution was surprisingly simple, but thanks to the lack of quality documentation provided by VTK, I was left to dig through the source to find it. Effectively all you have to do is pseudo-thread Render() calls from each of the interactors via whatever callback method you're using to handle your TimerEvents. I did this using ID properties added to each interactor (seen in code provided below). You can see that every time a TimerEvent is fired from the irenR interactor's internal timer (irenR handles the right eye), the irenL's Render() function is called, and vice versa.
To solve this I first realized that the standard interactor functionalities (Mouse events and the like), worked normally. So I dug around the source in vtkRenderWindowInteractor.cxx and realized that those methods were abstracted to the individual vtkInteractorStyle implementations. After rooting around in the vtkInteractorStyleTrackball.cxx source, I found that there was actually a Render() function within the vtkRenderWindowInteractor class. Go figure! The documentation sure didn't mention that!
Unfortunately, two renders at once is actually very slow. If I do this method with just one window (At which point it becomes unnecessary), it runs wonderfully. Framerate tanks with a second window though. Oh well, what can you do?
Here's my corrected code (Finally I can start working on what I was supposed to be developing):
from vtk import *
from parse import *
import os
import time, signal, threading

def ParseSIG(signum, stack):
    print signum
    return

class vtkGyroCallback():
    def __init__(self):
        pass

    def execute(self, obj, event):
        # Modified segment to accept input for leftCam position
        gyro = raw_input()
        xyz = parse("{} {} {}", gyro)
        #print xyz
        # "Thread" the renders. Left is called on a right TimerEvent and right is called on a left TimerEvent.
        if obj.ID == 1 and event == 'TimerEvent':
            self.leftCam.SetPosition(float(xyz[0]), float(xyz[1]), float(xyz[2]))
            self.irenL.Render()
            #print "Left"
        elif obj.ID == 2 and event == 'TimerEvent':
            self.rightCam.SetPosition(float(xyz[0]), float(xyz[1]), float(xyz[2]))
            self.irenR.Render()
            #print "Right"
        return
def main():
    # create two cameras
    cameraR = vtkCamera()
    cameraR.SetPosition(0, 0, 200)
    cameraR.SetFocalPoint(0, 0, 0)
    cameraL = vtkCamera()
    cameraL.SetPosition(40, 0, 200)
    cameraL.SetFocalPoint(0, 0, 0)
    # create a rendering window and renderer
    renR = vtkRenderer()
    renR.SetActiveCamera(cameraR)
    renL = vtkRenderer()
    renL.SetActiveCamera(cameraL)
    # create source
    reader = vtkPolyDataReader()
    path = "/home/compilezone/Documents/3DSlicer/SlicerScenes/LegoModel-6_25/Model_5_blood.vtk"
    reader.SetFileName(path)
    reader.Update()
    # create render window
    renWinR = vtkRenderWindow()
    renWinR.AddRenderer(renR)
    renWinR.SetWindowName("Right")
    renWinL = vtkRenderWindow()
    renWinL.AddRenderer(renL)
    renWinL.SetWindowName("Left")
    # create a render window interactor
    irenR = vtkRenderWindowInteractor()
    irenR.SetRenderWindow(renWinR)
    irenL = vtkRenderWindowInteractor()
    irenL.SetRenderWindow(renWinL)
    # mapper
    mapper = vtkPolyDataMapper()
    mapper.SetInput(reader.GetOutput())
    # actor
    actor = vtkActor()
    actor.SetMapper(mapper)
    # assign actor to the renderer
    renR.AddActor(actor)
    renL.AddActor(actor)
    # enable user interface interactor
    renWinR.Render()
    renWinL.Render()
    irenR.Initialize()
    irenL.Initialize()
    # Create callback object for camera manipulation
    cb = vtkGyroCallback()
    cb.rightCam = renR.GetActiveCamera()  # cameraR
    cb.leftCam = renL.GetActiveCamera()  # cameraL
    cb.irenR = irenR
    cb.irenL = irenL
    irenR.ID = 1
    irenL.ID = 2
    irenR.AddObserver('TimerEvent', cb.execute)
    irenL.AddObserver('TimerEvent', cb.execute)
    timerIDR = irenR.CreateRepeatingTimer(100)
    timerIDL = irenL.CreateRepeatingTimer(100)
    irenL.Start()
    irenR.Start()

if __name__ == '__main__':
    main()

SVG interaction in python with cairo, opengl and rsvg

I render a huge SVG file with a lot of elements using Cairo, OpenGL and rsvg: I draw the SVG onto a cairo surface via rsvg and create an OpenGL texture to draw it. Everything is fine. Now I have to interact with elements of the SVG. For example, I want to hit-test an element by coordinates, and I want to change the background of some path in the SVG. For changing the background I think I can change the SVG DOM and somehow re-render part of the SVG, but for hit-testing elements I'm completely stuck.
So, is there a Python library to interact with SVG? Is it possible to stay with cairo and rsvg, and how could I implement it myself? Or is there a better way to render SVG in OpenGL and interact with it in Python? All I want is to load an SVG, manipulate its DOM, and render it.
I don't know much about librsvg, but it does not appear to have been updated since 2005, and so I would be inclined to recommend using a different implementation.
If you don't have dependencies on any Python libraries outside of the standard library, then you could use Jython together with Batik. This allows you to add event handlers, as well as change the DOM after rendering.
For an example of how to do this with Java, see this link.
Here's a quick port to Jython 2.2.1 (runs, but not thoroughly tested):
from java.awt.event import WindowAdapter
from java.awt.event import WindowEvent
from javax.swing import JFrame
from org.apache.batik.swing import JSVGCanvas
from org.apache.batik.swing.svg import SVGLoadEventDispatcherAdapter
from org.apache.batik.swing.svg import SVGLoadEventDispatcherEvent
from org.apache.batik.script import Window
from org.w3c.dom import Document
from org.w3c.dom import Element
from org.w3c.dom.events import Event
from org.w3c.dom.events import EventListener
from org.w3c.dom.events import EventTarget

class SVGApplication:
    def __init__(self):
        class MySVGLoadEventDispatcherAdapter(SVGLoadEventDispatcherAdapter):
            def svgLoadEventDispatcherListener(e):
                # At this time the document is available...
                self.document = self.canvas.getSVGDocument()
                # ...and the window object too.
                self.window = self.canvas.getUpdateManager().getScriptingEnvironment().createWindow()
                # Registers the listeners on the document
                # just before the SVGLoad event is dispatched.
                self.registerListeners()
                # It is time to pack the frame.
                self.frame.pack()

        def windowAdapter(e):
            # The canvas is ready to load the base document
            # now, from the AWT thread.
            self.canvas.setURI("doc.svg")

        self.frame = JFrame(windowOpened=windowAdapter, size=(800, 600))
        self.canvas = JSVGCanvas()
        # Forces the canvas to always be dynamic even if the current
        # document does not contain scripting or animation.
        self.canvas.setDocumentState(JSVGCanvas.ALWAYS_DYNAMIC)
        self.canvas.addSVGLoadEventDispatcherListener(MySVGLoadEventDispatcherAdapter())
        self.frame.getContentPane().add(self.canvas)
        self.frame.show()

    def registerListeners(self):
        # Gets an element from the loaded document.
        elt = self.document.getElementById("elt-id")
        t = elt

        def eventHandler(e):
            print e, type(e)
            self.window.setTimeout(500, run=lambda: self.window.alert("Delayed Action invoked!"))
            #window.setInterval(Animation(), 50)

        # Adds an 'onload' listener
        t.addEventListener("SVGLoad", False, handleEvent=eventHandler)
        # Adds an 'onclick' listener
        t.addEventListener("click", False, handleEvent=eventHandler)

if __name__ == "__main__":
    SVGApplication()
Run with:
jython -Dpython.path=/usr/share/java/batik-all.jar:/home/jacob/apps/batik-1.7/lib/xml-apis-ext.jar test.py
An alternative approach would be to use Blender. It supports SVG import and interaction using Python, though I don't think it will allow you to edit the DOM after import.
I had to do the same (changing an element's color, for instance) and ended up modifying the rsvg library, because all those nice features exist but are hidden. You have to add a new interface to expose them.
