I'm trying to create a script that will select a bunch of NURBS curves
and measure the length of those curves.
Ideally I could then select the shortest curve, leaving the longer curves unselected (or the opposite).
So far I have this:
import maya.cmds

for i in maya.cmds.ls(selection=True):
    shapeNodes = maya.cmds.listRelatives(i, shapes=True)
    for shape in shapeNodes:
        if maya.cmds.nodeType(shape) == "nurbsCurve":
            print "Curve: %s is %s units long" % (shape, maya.cmds.arclen(shape))
            # note: CVs = spans + degree, so spans + 1 only holds for degree-1 curves
            cvs = maya.cmds.getAttr(shape + '.spans') + 1
            print "The: %s has %s cvs" % (shape, cvs)
        else:
            print "Wrong: %s is a %s" % (shape, maya.cmds.nodeType(shape))
You can get out of doing the loops with a list comprehension. Collect all of the shapes and their lengths into a list of (length, shape) pairs and sort that - that gives you the shortest curve:
import maya.cmds as cmds

sel = cmds.ls(sl=True)
shapeNodes = cmds.listRelatives(sel, shapes=True)
shapeNodes = cmds.ls(shapeNodes, type='nurbsCurve', l=True)  # long paths to avoid confusion
selectable = [(cmds.arclen(item), item) for item in shapeNodes]
if selectable:
    selectable.sort()
    cmds.select(selectable[0][-1])
else:
    cmds.select(cl=True)
You could also just make this into a function and return the selectable list for processing elsewhere.
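For example, a minimal sketch of such a function (the name shortest_curves is just illustrative, not from the original answer):

import maya.cmds as cmds

def shortest_curves():
    """Return (length, shape) pairs for selected NURBS curves, shortest first."""
    sel = cmds.ls(sl=True)
    if not sel:
        return []
    shapes = cmds.listRelatives(sel, shapes=True) or []
    curves = cmds.ls(shapes, type='nurbsCurve', l=True)
    return sorted((cmds.arclen(c), c) for c in curves)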
I recommend starting to use PyMEL, for the obvious reasons of simplicity and ease of use:
import pymel.core as pm

curveInfo = pm.createNode('curveInfo')
for thisCurve in pm.ls(sl=True):
    # get the shape node
    thisShape = thisCurve.getShape()
    # connect the world space to the curve info
    thisShape.worldSpace >> curveInfo.inputCurve
    # this is how you get the value
    print curveInfo.arcLength.get()
    # from here you can put the value in whatever you need
# delete the curve info
pm.delete(curveInfo)
I'm creating a Python script around the instance command whose primary purpose is to generate objects in neat horizontal rows. Even though I can randomize rotation and set the range, I still can't figure out how to get the objects to appear in horizontal rows.
I have already tried using the xform command to move the objects along the x coordinates.
import maya.cmds as MC
import random as RN

def ChainmailGenerator():
    thing = MC.ls(sl=True)
    print thing
    if not thing:
        MC.error("*** Error - you need to select an object ***")
    # create a group node
    grp = MC.group(empty=True, name=thing[0] + '_grp#')
    # loop 25 times, creating a randomly rotated instance each iteration
    for i in range(0, 25):
        instanceObj = MC.instance(thing, name=thing[0] + 'instance#', smartTransform=True)
        rx = RN.uniform(-1, 1) * 5
        ry = RN.uniform(-1, 1) * 5
        rz = RN.uniform(-1, 1) * 5
        MC.rotate(rx, ry, rz, instanceObj)
        MC.xform(r=True, ro=(90, 0, 0))
        tx = 5
        MC.xform(instanceObj, t=(0, 15 + 1, 0))
        MC.parent(instanceObj, grp)
    print "*** chainmail ***"

ChainmailGenerator()
The expectations are for the objects to generate in horizontal increments as if they're forming neat rows.
Here is an example that creates 10 spheres along x, moving them with xform:
import maya.cmds as cmds

step = 1
tx = 0
for x in range(10):
    sphere = cmds.polySphere()[0]
    cmds.xform(sphere, t=[tx, 0, 0])
    tx += step
The reason yours is placing everything in the same place is that you aren't multiplying the offset by a value that keeps increasing. You could hard-code some value to space each one out, but that would yield inconsistent results across differently sized objects.
Here's a generic way to go about it that seems to work with any object.
The idea is to use the mesh's bounding box to determine what the spacing should be, by looking at the size of its x axis. You can also move each instance into place with xform, but you do need to include its worldspace parameter so that the move isn't relative to the instance's current position.
import maya.cmds as cmds

def cloneInRow(count):
    # Get selection.
    thing = cmds.ls(sl=True)
    if not thing:
        cmds.error("You need to select an object")

    # Get selection's mesh shape.
    mesh = cmds.listRelatives(thing[0], shapes=True, f=True, ni=True, type="mesh")
    if not mesh:
        cmds.error("Unable to find a mesh on the selected object")

    # Determine spacing from the object's bounding box, using its x-axis size.
    bb_min = cmds.getAttr(mesh[0] + ".boundingBoxMin")[0]
    bb_max = cmds.getAttr(mesh[0] + ".boundingBoxMax")[0]
    spacing = bb_max[0] - bb_min[0]

    # Create a root transform to parent to.
    grp = cmds.group(empty=True, name=thing[0] + '_grp#')

    # Create instances and move them into place.
    for i in range(0, count):
        instanceObj = cmds.instance(thing[0], name=thing[0] + 'instance' + str(i), smartTransform=True)
        cmds.xform(instanceObj, ws=True, t=(i * spacing, 0, 0))
        cmds.parent(instanceObj, grp)

    cmds.select(grp)

cloneInRow(10)
With this I can take something like a crossbow model, clone any of its objects, and get nice spacing.
The only catch is rotation. If your pivot isn't centered to the mesh, then randomizing its rotation will make the object lose its place in space (since rotating also affects its position!). So if you have weird pivots, it won't look nice when you add rotations back on.
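If that's an issue, one workaround (a sketch, not part of the original answer) is to center each pivot before applying random rotations:

import maya.cmds as cmds

# Center the pivot on each selected object so that random rotations
# don't shift the object's position in space.
for obj in cmds.ls(sl=True):
    cmds.xform(obj, centerPivots=True)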
I've been stuck figuring out how to split the environment into foreground and background. I've been thinking of getting the value of "distance from camera" (through "Display > Heads Up Display > Object Details") so I can split foreground from background, using the character's distance from the camera as a guide.
The problem is that I don't know how to get its value in Python. Can someone help me please?
I'm using Maya 2016.
I get None from this command:
import maya.cmds as cmds
print cmds.headsUpDisplay('HUDObjDetDistFromCam', q=1)
Rather than trying to hijack object distance from camera, you can just calculate it yourself.
import math
import maya.cmds as cmds

def distance_to_camera(obj, cam):
    cam_pos = cmds.xform(cam, t=True, ws=True, q=True)
    object_pos = cmds.xform(obj, t=True, ws=True, q=True)
    raw_dist = [a - b for a, b in zip(cam_pos, object_pos)]
    return math.sqrt(sum([a**2 for a in raw_dist]))

distance_to_camera('pCube1', 'persp')
raw_dist = [a - b for a, b in zip(cam_pos, object_pos)] takes two lists of 3 numbers (the positions) and subtracts each item in one list from its counterpart in the other.
math.sqrt(sum([a**2 for a in raw_dist])) is the square root of the sum of the squares of the three numbers in raw_dist -- that is, the distance. You could also do this using the Maya API, but this version doesn't require any extra imports besides math.
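For comparison, a minimal sketch of the same computation using the API's vector class (assuming maya.api.OpenMaya is available, which it is in Maya 2016; the function name is just illustrative):

import maya.api.OpenMaya as om
import maya.cmds as cmds

def distance_to_camera_api(obj, cam):
    # MVector accepts a 3-item sequence; subtracting two vectors
    # and taking the length gives the distance directly.
    cam_pos = om.MVector(cmds.xform(cam, t=True, ws=True, q=True))
    obj_pos = om.MVector(cmds.xform(obj, t=True, ws=True, q=True))
    return (cam_pos - obj_pos).length()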
I have the following piece of code:
from abaqus import *
from odbAccess import *
from abaqusConstants import *
import visualization
import fileinput
import os
import shutil
import numpy as np

def tupled_list(mylist, n):
    '''group every n elements of the list into a tuple'''
    return list(zip(*[iter(mylist)]*n))

# Open odb
odb_filename = 'AbaqusResults.odb'
work_directory = os.getcwd()
odb_filepath = os.path.abspath(os.path.join(os.getcwd(), '..', 'outputs', odb_filename))
my_odb = session.openOdb(name=odb_filepath, readOnly=FALSE)

# Automate the process of reading the step, no matter what its name is
StepNames = my_odb.steps.keys()
lastStep = StepNames[-1]

# Automate the instance naming in the same way
AllInstances = my_odb.rootAssembly.instances.keys()
MyInstance = AllInstances[-1]
SubmodelInstance = my_odb.rootAssembly.instances[MyInstance]

# Get dataset containing element labels and calculated quantities from text file
dataset = np.loadtxt(os.path.join(os.getcwd(), 'AbaqusDataInput.txt'), delimiter=',')

# List of all element labels
elLabels = dataset[0, :].astype(int).tolist()

# Count the number of integration points for a single element
gausspoints = elLabels.count(1)

# Get damage data from dataset and format into a list of tuples; tuple length
# depends on the number of integration points for each element
damage = tupled_list(list(dataset[1, :]), gausspoints)
damage = [list(elem) for elem in damage]

# Remove duplicates from the elLabels list
elLabels = list(set(elLabels))

for i in range(len(my_odb.steps[lastStep].frames)):
    new_field_damage = my_odb.steps[lastStep].frames[i].FieldOutput(
        name='Damage',
        description='Damage sustained after one repetition of the loading history',
        type=SCALAR)
    new_field_damage.addData(position=INTEGRATION_POINT, instance=SubmodelInstance,
                             labels=elLabels, data=damage)

my_odb.save()
my_odb.close()
but I get the following error:
Data at 141968 locations expected. Data at 17746 locations provided. Element data request failed. Element type is C3D20R. Data at 8 integration points per element expected
elLabels is a list with 17746 items, and
damage is a list of tuples with 17746 items, where each item holds 8 values (one for each integration point), e.g. [(x,x,x,x,x,x,x,x), (x,x,x,x,x,x,x,x), …]
Thus I provide Abaqus with 17746 element labels and 17746 * 8 = 141968 data points in total, as asked, so I don't see why I get the error.
I have tried a list of lists instead of a list of tuples, but the same error occurs. I suspect it is merely an issue of using the correct data types, but the Abaqus documentation seems severely lacking.
The script works fine for odb files with C3D8R elements, which have 1 integration point. There I provide the same elLabels list and a damage list where each tuple has only 1 value (e.g. [(x, ), (x, ), …]), and that works fine.
Do you know how I can import my data into the elements integration points?
This is an old question but I stumbled across a similar problem. The correct format seems to be just plain lists:
elLabels = [1, 2, ...]
damage = [1_1, 1_2, 1_3, 1_4, 1_5, 1_6, 1_7, 1_8, 2_1, 2_2, ...]
where i_j is the data of element i at integration point j.
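So if you already have damage as a list of 8-value tuples per element, as in the question, a one-line sketch to flatten it into that layout would be:

# Flatten [(e1_ip1, ..., e1_ip8), (e2_ip1, ..., e2_ip8), ...] into one
# flat list, element by element, integration point by integration point.
damage = [value for element in damage for value in element]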
By the way, it's probably better to remove duplicates like this, to maintain the correct order:
from collections import OrderedDict
elLabels = list(OrderedDict.fromkeys(elLabels))
I think you should first confirm whether the data in Abaqus is indeed stored element-first, with integration-point order within each element. For S4R elements, for example, the S22 stresses are stored for the bottom surface of all elements first, and then for the top surface of all elements. That is, they are not in the sequence element 1 (bottom, top), element 2 (bottom, top), ..., element n (bottom, top); rather, it is: bottom(element 1), bottom(element 2), ..., bottom(element n), top(element 1), top(element 2), ..., top(element n).
Try:
S22_ES = (o1.steps[LoadStep].frames[loadDir + 1].fieldOutputs['S']
          .getScalarField(componentLabel="S22")
          .getSubset(region=Elset, position=INTEGRATION_POINT))
Let's say I have one locator above a polyPlane. What I want to do is a lookup or trace from the locator in negative or positive y until it hits the polyPlane, and return the position of the closest point/vertex/UV.
I imagine this has been done a million times, but the only examples I have found work by locating the closest point based on all axes, which in my case is close to useless.
I would appreciate any help I could get!
Edit:
Added an image of the difference between the first suggested solution and what I want to achieve.
What we can do is use OpenMaya (Maya's API) to loop over the face verts gathered in an array and check each one's distance from the locator position; if it is shorter than the last shortest distance, save it as the closestVertex variable.
import maya.OpenMaya as OpenMaya
from pymel.core import *

geo = PyNode('pSphere1')
pos = PyNode('locator1').getRotatePivot(space='world')

nodeDagPath = OpenMaya.MObject()
try:
    selectionList = OpenMaya.MSelectionList()
    selectionList.add(geo.name())
    nodeDagPath = OpenMaya.MDagPath()
    selectionList.getDagPath(0, nodeDagPath)
except:
    warning('OpenMaya.MDagPath() failed on %s' % geo.name())

mfnMesh = OpenMaya.MFnMesh(nodeDagPath)

pointA = OpenMaya.MPoint(pos.x, pos.y, pos.z)
pointB = OpenMaya.MPoint()
space = OpenMaya.MSpace.kWorld

util = OpenMaya.MScriptUtil()
util.createFromInt(0)
idPointer = util.asIntPtr()

mfnMesh.getClosestPoint(pointA, pointB, space, idPointer)
idx = OpenMaya.MScriptUtil(idPointer).asInt()

faceVerts = [geo.vtx[i] for i in geo.f[idx].getVertices()]

closestVertex = None
minLength = None
for v in faceVerts:
    thisLength = (pos - v.getPosition(space='world')).length()
    if minLength is None or thisLength < minLength:
        minLength = thisLength
        closestVertex = v

select(closestVertex)
This could probably be done in Python without the API, but if you've got Maya, you've got access to the API :)
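For reference, a brute-force sketch without the API (the names pSphere1 and locator1 are just placeholders) that simply measures every vertex with cmds.xform:

import maya.cmds as cmds

loc = cmds.xform('locator1', q=True, ws=True, t=True)
closest, best_d2 = None, None
for vtx in cmds.ls('pSphere1.vtx[*]', flatten=True):
    p = cmds.xform(vtx, q=True, ws=True, t=True)
    # compare squared distances; no need for the square root
    d2 = sum((a - b) ** 2 for a, b in zip(loc, p))
    if best_d2 is None or d2 < best_d2:
        closest, best_d2 = vtx, d2
cmds.select(closest)

This checks every vertex of the mesh rather than just the verts of the closest face, so it will be much slower on dense geometry.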
I hope this helps
I wrote a function in Python 2.7 (on 64-bit Windows) to calculate the mean value of the intersection area between a reference polygon (Ref) and one or more segmented (Seg) polygons in ESRI shapefile format. The code is quite slow because I have more than 2000 reference polygons, and for each Ref polygon the function runs over all Seg polygons (more than 7000). I am sorry, but the function is a prototype.
I wish to know if multiprocessing can help me increase the speed of my loop, or if there are better-performing solutions. If multiprocessing is a possible solution, I wish to know the best way to optimize the following function:
import os
import numpy as np
import osgeo.ogr
from shapely.geometry import Polygon

def AreaInter(reference, segmented, outFile):
    # open shapefiles
    ref = osgeo.ogr.Open(reference)
    if ref is None:
        raise SystemExit('Unable to open %s' % reference)
    seg = osgeo.ogr.Open(segmented)
    if seg is None:
        raise SystemExit('Unable to open %s' % segmented)
    ref_layer = ref.GetLayer()
    seg_layer = seg.GetLayer()
    # create outfile
    if not os.path.split(outFile)[0]:
        file_path, file_name_ext = os.path.split(os.path.abspath(reference))
        outFile_filename = os.path.splitext(os.path.basename(outFile))[0]
        file_out = open(os.path.abspath("{0}\\{1}.txt".format(file_path, outFile_filename)), "w")
    else:
        file_path_name, file_ext = os.path.splitext(outFile)
        file_out = open(os.path.abspath("{0}.txt".format(file_path_name)), "w")
    # for each reference object i
    for index in xrange(ref_layer.GetFeatureCount()):
        ref_feature = ref_layer.GetFeature(index)
        # get FID (= Feature ID)
        FID = str(ref_feature.GetFID())
        ref_geometry = ref_feature.GetGeometryRef()
        pts = ref_geometry.GetGeometryRef(0)
        points = []
        for p in xrange(pts.GetPointCount()):
            points.append((pts.GetX(p), pts.GetY(p)))
        # convert to a shapely polygon
        ref_polygon = Polygon(points)
        # get the area
        ref_Area = ref_polygon.area
        # create two empty lists
        seg_Area, intersect_Area = ([] for _ in range(2))
        # for each segmented object j
        for segment in xrange(seg_layer.GetFeatureCount()):
            seg_feature = seg_layer.GetFeature(segment)
            seg_geometry = seg_feature.GetGeometryRef()
            pts = seg_geometry.GetGeometryRef(0)
            points = []
            for p in xrange(pts.GetPointCount()):
                points.append((pts.GetX(p), pts.GetY(p)))
            seg_polygon = Polygon(points)
            seg_Area.append(seg_polygon.area)
            # intersection (overlap) of reference object with the segmented object
            intersect_polygon = ref_polygon.intersection(seg_polygon)
            # area of intersection (= 0 means no intersection)
            intersect_Area.append(intersect_polygon.area)
        # average over all segmented objects (because 1 or more segmented
        # polygons can intersect with the reference polygon)
        seg_Area_average = np.average(seg_Area)
        intersect_Area_average = np.average(intersect_Area)
        file_out.write(" ".join(["%s" % i for i in [FID, ref_Area, seg_Area_average, intersect_Area_average]]) + "\n")
    file_out.close()
You can use the multiprocessing package, and especially the Pool class. First create a function that does all the stuff you want to do within the for loop, and that takes as an argument only the index:
def process_reference_object(index):
    ref_feature = ref_layer.GetFeature(index)
    # all your code goes here
    return (" ".join(["%s" % i for i in [FID, ref_Area, seg_Area_average, intersect_Area_average]]) + "\n")
Note that this doesn't write to a file itself; that would be messy, because you'd have multiple processes writing to the same file at the same time. Instead, it returns the string that needs to be written. Also note that there are objects in this function, like ref_layer or ref_geometry, that will need to reach it somehow; how you do that is up to you (you could make process_reference_object a method of a class initialized with them, or it could be as ugly as just defining them globally).
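A third option (a sketch, assuming each worker can reopen the shapefiles itself) is a Pool initializer that sets the shared objects up once per worker process:

from multiprocessing import Pool
import osgeo.ogr

def init_worker(reference, segmented):
    # keep the datasource references alive as globals; otherwise OGR
    # garbage-collects them and the layers become invalid
    global ref_ds, seg_ds, ref_layer, seg_layer
    ref_ds = osgeo.ogr.Open(reference)
    seg_ds = osgeo.ogr.Open(segmented)
    ref_layer = ref_ds.GetLayer()
    seg_layer = seg_ds.GetLayer()

p = Pool(initializer=init_worker, initargs=(reference, segmented))

This p would then take the place of the plain Pool() constructed below.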
Then, you create a pool of process resources, and run all of your indices using Pool.imap_unordered (which will itself allocate each index to a different process as necessary):
from multiprocessing import Pool

p = Pool()  # run multiple processes
for l in p.imap_unordered(process_reference_object, range(ref_layer.GetFeatureCount())):
    file_out.write(l)
This will parallelize the independent processing of your reference objects across multiple processes, and write them to the file (in an arbitrary order, note).
Threading can help to a degree, but first you should make sure you can't simplify the algorithm. If you're checking each of 2000 reference polygons against all 7000 segmented polygons (perhaps I misunderstood), then you should start there. Anything that runs in O(n^2) is going to be slow, so maybe you can prune away pairs that definitely don't intersect, or find some other way to speed things up. Otherwise, running multiple processes or threads will only improve things linearly, while your workload grows quadratically.
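For that kind of pruning, here is a sketch using shapely's STRtree spatial index, assuming ref_polygons and seg_polygons are lists of shapely Polygon objects built as in the question (note that in shapely 2.x, query() returns indices rather than geometries):

from shapely.strtree import STRtree

# build the index once over all segmented polygons...
tree = STRtree(seg_polygons)
for ref_polygon in ref_polygons:
    # ...then only test candidates whose bounding boxes overlap
    candidates = tree.query(ref_polygon)
    intersect_Area = [ref_polygon.intersection(c).area for c in candidates]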