Maya Python - How to set input mesh on MASH programmatically?

I'm trying to create a MASH in Maya and set the input mesh using the Python API. This is incredibly simple in the GUI but I've spent hours and can't figure out how to make it work in the API. Here's my code so far:
from maya import cmds
import MASH.api as mapi
#create backplate
backplate = cmds.polyPlane(w=10,h=10)
#create cube
cube = cmds.polyCube(w=10,h=10)
#create mash
cmds.select(cube[0])
mashNetwork = mapi.Network()
mashNetwork.createNetwork()
#set mash to mesh distribution type
cmds.setAttr(mashNetwork.distribute + '.arrangement', 4)
What do I do after this? I want the backplate to be the input mesh to the MASH. I know the parameter I need to set can be accessed by this: mashNetwork.distribute + '.inputMesh'
But no matter what I try I get an error. I've tried setAttr, connectAttr, all with no luck. Anyone know how to do this?

You need to connect the shape node's outMesh attribute to the inputMesh attribute of the MASH_Distribute node. Before scripting it, you can make the connection manually and inspect it in the Node Editor to see how it works.
cmds.connectAttr('pPlaneShape1.outMesh', mashNetwork.distribute + '.inputMesh')
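For completeness, here is a sketch that builds on the question's own setup and resolves the plane's shape node from the backplate variable instead of hard-coding 'pPlaneShape1' (which only works while the plane keeps its default name):
from maya import cmds
import MASH.api as mapi

# Create the backplate and the cube to scatter.
backplate = cmds.polyPlane(w=10, h=10)
cube = cmds.polyCube(w=10, h=10)

# Create the MASH network from the selected cube.
cmds.select(cube[0])
mashNetwork = mapi.Network()
mashNetwork.createNetwork()

# Switch the distribute node to the Mesh arrangement.
cmds.setAttr(mashNetwork.distribute + '.arrangement', 4)

# Look up the plane's shape node and wire it into the distribute node.
plane_shape = cmds.listRelatives(backplate[0], shapes=True)[0]
cmds.connectAttr(plane_shape + '.outMesh', mashNetwork.distribute + '.inputMesh')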

Related

Click Drag Function Maya Python

I am trying to write my first Maya Python script. I know I have a ways to go before I stop getting dozens of syntax errors, but it's a simple script with no UI, and even small tidbits or clues that get it working will set me on my way with scripting.
It's just a simple function for clicking and dragging the visibilities of the Display Layer boxes on or off. The main command I am trying to define it around is:
draggerContext?
or
cmds.selectPref(clickBoxSize=True)
How do I implement the on/off function? Any clues would help, as the trickiest part of this script is defining the click-drag select behavior.
Thanks
My script, which does not work yet:
import maya.cmds as cmds
#Click and drag to Turn On/Off Visibilities of the Display Layers
draggerContext_id = "dga"
def dga():
    cmds.selectPref(clickBoxSize=True)
    cmds.selectcmds.selectPref(clickBoxSize=True)
    if:
        clickdragBoxSize.sel=True(layerEditorLayerButtonVisibilityChange):
        cmds.selectPref(clickBoxSize=True)
    else:
        cmds.select(layerEditorLayerButtonVisibilityChange=True)
SelectLayerEditorButtonVisibility()
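For reference, a minimal working draggerContext skeleton looks like the sketch below. It only prints the screen-space press and drag points; the context name layerDragCtx is a made-up placeholder, and the actual Display Layer visibility toggling would still have to be built on top of it:
import maya.cmds as cmds

CTX = 'layerDragCtx'  # hypothetical context name

def on_press():
    # anchorPoint is the screen-space point where the drag started.
    print(cmds.draggerContext(CTX, query=True, anchorPoint=True))

def on_drag():
    # dragPoint is the current screen-space point while dragging.
    print(cmds.draggerContext(CTX, query=True, dragPoint=True))

if cmds.draggerContext(CTX, exists=True):
    cmds.draggerContext(CTX, edit=True, pressCommand=on_press, dragCommand=on_drag)
else:
    cmds.draggerContext(CTX, pressCommand=on_press, dragCommand=on_drag, cursor='hand')
cmds.setToolTo(CTX)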

Blender: Rendering images and producing an animation

I am new to Blender and I’m having a bit of a tough time understanding its key concepts. I am using Blender 2.82 and working with Python scripting. My project consists of using Python to do the following:
Move object slightly.
Take picture with camera 1, camera 2, camera 3, and camera 4.
Repeat.
I had a script that did that. However, I wanted to save the position of my object (a sphere) in an animation every time I changed it during the loop, so I could later review what I did. When I try to insert keyframes in my loop, it seems as if my sphere doesn't move. Below is my code. When I remove the lines with frame_set and keyframe_insert, my sphere moves, as I can see from my rendered images. I think I am confusing some kind of concept… Any help would be appreciated. The goal is to produce the images I would obtain from four cameras placed around a moving object, so as to simulate a mocap system.
Why does inserting a keyframe change all of the images being rendered?
import bpy, bgl, blf, sys
import numpy as np
from bpy import data, ops, props, types, context
cameraNames = ''
# Loop over all command line arguments and try to find "cameras=east" or similar
for arg in sys.argv:
    words = arg.split('=')
    if words[0] == 'cameras':
        cameraNames = words[1]
sceneKey = bpy.data.scenes.keys()[0]
# Render settings
bpy.data.scenes[sceneKey].render.image_settings.file_format = 'JPEG'
bpy.data.scenes[sceneKey].cycles.max_bounces = 12
bpy.data.scenes[sceneKey].render.tile_x = 8
bpy.data.scenes[sceneKey].render.tile_y = 8
bpy.data.scenes[sceneKey].cycles.samples = 16
bpy.data.scenes[sceneKey].cycles.caustics_reflective = False
bpy.data.scenes[sceneKey].cycles.caustics_refractive = False
bpy.data.objects['Sphere'].location = [1, 1, 1]
frame_num = 0
for i in range(0, 2):  # nframes
    bpy.context.scene.frame_set(frame_num)
    # Loop over all objects and try to find cameras
    for obj in bpy.data.objects:
        # Find cameras that match cameraNames
        if (obj.type == 'CAMERA') and (cameraNames == '' or obj.name.find(cameraNames) != -1):
            # Set the scene's camera and output filename
            bpy.data.scenes[sceneKey].camera = obj
            bpy.data.scenes[sceneKey].render.filepath = '//' + obj.name + "_" + str(i)
            # Render the scene and write the image
            bpy.ops.render.render(write_still=True)
    bpy.data.objects['Sphere'].keyframe_insert(data_path="location", index=-1)
    frame_num += 1
    bpy.data.objects['Sphere'].location = [2, 2, 1]
I have no knowledge of Python, but you could do the keyframe animation manually and make a script that renders the pictures after each set of keyframes (whenever the object has moved to a new location).
It is not too hard (I'm talking about the animation only): just press the circle button near the play button on the timeline. This turns on auto keyframing, so you only have to go to the desired frame and move the object as needed.
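In bpy terms, the likely explanation for the symptom is that frame_set() re-evaluates the animation data, so once a location keyframe exists, every subsequent frame_set() call snaps the sphere back to its keyframed position and discards the manual location assignment. A minimal sketch of the keyframe-first, render-after ordering, assuming the object is named 'Sphere' as in the question:
import bpy

scene = bpy.context.scene
sphere = bpy.data.objects['Sphere']  # object name taken from the question

# Pass 1: move the object and record a keyframe on every frame.
for frame in range(2):
    scene.frame_set(frame)
    sphere.location = (1 + frame, 1 + frame, 1)  # example motion
    sphere.keyframe_insert(data_path="location", index=-1)

# Pass 2: render each frame; frame_set() now drives the recorded animation.
for frame in range(2):
    scene.frame_set(frame)
    scene.render.filepath = '//sphere_' + str(frame)
    bpy.ops.render.render(write_still=True)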

Maya – Select objects in viewport via Python

Can anyone help me? Is it possible to make a Python script that automatically selects every object in Maya's viewport?
It's very possible, though you have to use Maya's API to do it. You can use OpenMayaUI.MDrawTraversal to collect all objects within a camera's frustum.
This may seem more long-winded than using OpenMaya.MGlobal.selectFromScreen, but it gives you a few benefits:
You can run it on any camera, even one that isn't the active view.
You can do everything in memory, without changing the selection or forcing a redraw.
OpenMaya.MGlobal.selectFromScreen is interface-dependent, meaning it can't be executed in Maya batch jobs. This approach works in either case.
That being said, here's an example that will create a bunch of random boxes, create a camera looking at them, then select all boxes that are within the camera's view:
import random
import maya.cmds as cmds
import maya.OpenMaya as OpenMaya
import maya.OpenMayaUI as OpenMayaUI
# Create a new camera.
cam, cam_shape = cmds.camera()
cmds.move(15, 10, 15, cam)
cmds.rotate(-25, 45, 0, cam)
cmds.setAttr("{}.focalLength".format(cam), 70)
cmds.setAttr("{}.displayCameraFrustum".format(cam), True)
# Create a bunch of boxes at random positions.
val = 10
for i in range(50):
    new_cube, _ = cmds.polyCube()
    cmds.move(random.uniform(-val, val), random.uniform(-val, val), random.uniform(-val, val), new_cube)
# Add camera to MDagPath.
mdag_path = OpenMaya.MDagPath()
sel = OpenMaya.MSelectionList()
sel.add(cam)
sel.getDagPath(0, mdag_path)
# Create frustum object with camera.
draw_traversal = OpenMayaUI.MDrawTraversal()
draw_traversal.setFrustum(mdag_path, cmds.getAttr("defaultResolution.width"), cmds.getAttr("defaultResolution.height")) # Use render's resolution.
draw_traversal.traverse() # Traverse scene to get all objects in the camera's view.
frustum_objs = []
# Loop through objects within frustum.
for i in range(draw_traversal.numberOfItems()):
    # It will return shapes at first, so we need to fetch each shape's transform.
    shape_dag_path = OpenMaya.MDagPath()
    draw_traversal.itemPath(i, shape_dag_path)
    transform_dag_path = OpenMaya.MDagPath()
    OpenMaya.MDagPath.getAPathTo(shape_dag_path.transform(), transform_dag_path)
    # Get the object's long name and make sure it's a valid transform.
    obj = transform_dag_path.fullPathName()
    if cmds.objExists(obj):
        frustum_objs.append(obj)
# At this point we have a list of objects that we can filter by type and do whatever we want with.
# In this case, just select them.
cmds.select(frustum_objs)
Hope that gives you a better direction.
You can try the following script:
import maya.OpenMaya as om
import maya.OpenMayaUI as omUI
view = omUI.M3dView.active3dView()
om.MGlobal.selectFromScreen(0, 0, view.portWidth(), view.portHeight(), om.MGlobal.kReplaceList)
I found this snippet on https://forums.cgsociety.org/t/list-objects-in-viewport/1463426, and it seems to do the trick. You can read through the discussion for more information.
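As a small follow-up: selectFromScreen changes the active selection, so if you want the result back as a Python list, read it out afterwards with ls (standard cmds usage):
import maya.cmds as cmds

# Retrieve the objects that selectFromScreen just selected.
selected = cmds.ls(selection=True, long=True)
print(selected)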

Is it possible to parent to only one or two axes in Blender?

I'm in the process of creating a 2D platformer using the Blender Game Engine. I'm having trouble getting the camera to follow my character and keep him in the center of the screen. Initially, I tried simply parenting the camera to my character, but whenever my character turns (rotates 180 degrees around the Z-axis), so does my camera, making it face the back of the level. So, I was wondering if there is a way to "parent" only one or two axes of an object to another, or to restrain an axis from moving even when it is parented. That way I could keep the camera from rotating, but still have it follow on the Y and Z axes.
One thing I looked into was using Python code. I came up with...
import bpy
char = bpy.data.objects['HitBox']
obj = bpy.data.objects['Camera']
obj.location.x = 69.38762 # this is the set distance from the character to camera
obj.location.y = char.location.y
obj.location.z = char.location.z
bpy.data.scenes[0].update()
I realize I need a loop for this after assigning the 'char' variable, but I can't get any Python loop working that would run through the entire game, as 'while' loops crash the BGE. If you could help with either the parenting issue or the Python code, I'd really appreciate it.
You just need to use the bge module, because it is the one made for the game engine. Your problem is that you used Blender (bpy) Python, not BGE Python. Try to reach the camera with cam = bge.logic.getCurrentScene().active_camera. ... so this should work:
import bge
def main():
    cam = bge.logic.getCurrentScene().active_camera
    obj = bge.logic.getCurrentController().owner
    # Follow the owner ('HitBox') on Y and Z only; X and rotation are left alone.
    cam.worldPosition.y = obj.worldPosition.y
    cam.worldPosition.z = obj.worldPosition.z
main()
(Attach this script to your 'HitBox' with a True-level-triggered Always sensor so it runs every logic tick.)
Another solution: you can try vertex-parenting the camera to your player.
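In the UI, that means selecting the camera, shift-selecting the player, entering Edit Mode on the player, selecting a single vertex, and pressing Ctrl+P. A single-vertex parent follows that vertex's position without inheriting the parent's rotation. A rough bpy sketch of the same relationship, with the object names taken from the question and the vertex index as an assumption:
import bpy

cam = bpy.data.objects['Camera']
char = bpy.data.objects['HitBox']

# Single-vertex parent: the camera follows the vertex's position but
# does not inherit HitBox's rotation. Note this direct assignment does
# not compute a parent inverse, so the camera may jump to a new offset.
cam.parent = char
cam.parent_type = 'VERTEX'
cam.parent_vertices = [0, 0, 0]  # the API expects three indices; only the first is used (assumed vertex 0)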

Blender's internal data won't update after a scale operation

I have the following script:
import bpy
import os
print("Starter")
selection = bpy.context.selected_objects
for obj in selection:
    print("Obj selected")
    me = obj.data
    for edge in me.edges:
        vert1 = me.vertices[edge.vertices[0]]
        vert2 = me.vertices[edge.vertices[1]]
        print("<boundingLine p1=\"{0}f,0.0f,{1}f,1.0f\" p2=\"{2}f,0.0f,{3}f,1.0f\" />".format(vert1.co.x, vert1.co.y, vert2.co.x, vert2.co.y))
Pretty basic, right? It just prints all the edges to the console for me to copy-paste into an XML document.
When I scale an object and run this script on it, I get the OLD, unscaled values printed to the console, from before it was scaled. I have tried moving every vertex of the object along all axes, which results in the output values being the unscaled coordinates transformed by my movement.
If I press N to check the vertices' global values, they are properly scaled.
Why am I not getting the correct values?!
This script was supposed to save time, but getting anything to work in Blender is a CHORE! It does not help that they have just updated their API, so all the example code out there is outdated!
Alright, this is the deal: when you scale, translate, or rotate an object in Blender, or otherwise perform a transformation, that transformation is "stored" separately from the mesh data. What you need to do is select the object you transformed, press CTRL + A, and then apply your transformation.
...
So there was no actual inconsistency between the internal data accessible through the Blender API and the values displayed in the interface.
I am sure this design makes sense, but right now I want to punch the guy that came up with it in the throat. If I scale something, I intend the thing that got scaled to be scaled!
But anyway, the reason I got weird values was that the scaling was not applied, which you do with CTRL + A once you have selected the scaled object in Object Mode.
I'm not really a Blender user (but a Maya one). I think you could try something different (slower too, I would say...): just iterate over the selected vertices, creating a locator or a null object, constraining it to each vertex position, and reading its x, y, z coordinates. I've done it in Maya and it works.
Let's say something like this:
data_list = []
selection = #selection code here#
for v in selection:
    loc = locator()
    pointconstraint(v, loc)
    data_list.append(loc.translation_attributes)
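Translated into actual Maya commands, the locator round-trip can even be skipped, since xform can query a vertex's world-space position directly; a quick sketch, assuming some mesh vertices are currently selected:
import maya.cmds as cmds

data_list = []
# flatten=True expands ranges like pSphere1.vtx[0:7] into single vertices.
for v in cmds.ls(selection=True, flatten=True):
    # World-space position with the object's scale and rotation baked in.
    data_list.append(cmds.xform(v, query=True, translation=True, worldSpace=True))
print(data_list)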
Mesh objects have an internal coordinate system for their vertices, as well as global translation, scaling, and rotation transforms that apply to the entire object. You can apply the global scaling matrix to the mesh data, and convert the vertex coordinates to the global coordinate system as follows:
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.transform_apply(scale=True)
bpy.ops.object.select_all(action='DESELECT')
Other options to transform_apply() allow rotation and translation matrices to be applied as well.
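Alternatively, if you would rather leave the object's transform untouched, you can convert each vertex to world space on the fly with the object's matrix_world. A sketch of that idea (in Blender 2.8+ matrix multiplication uses the @ operator; older versions use *):
import bpy

obj = bpy.context.active_object
me = obj.data
for edge in me.edges:
    # Multiply by matrix_world to get world-space coordinates without
    # modifying the mesh data itself.
    v1 = obj.matrix_world @ me.vertices[edge.vertices[0]].co
    v2 = obj.matrix_world @ me.vertices[edge.vertices[1]].co
    print(v1, v2)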
