My question is: how can I select a random number of points of, for example (in this script), a lattice, or of an object's vertices or faces, or anything, or make a totally random selection from a list in Python?
Please first explain random selection on this line of the script:
cm.select('ffd1Lattice.pt[:][:][:]', r=True)
and after that, please explain random selection on lists and on each of the other methods.
Best regards
import maya.cmds as cm
import sys
import random as rand
myList = cm.ls(sl=True)

def softIt(*arg):
    cm.polySmooth(c=True, dv=1, kb=False)

def randomize(*arg):
    myList = cm.ls(sl=True)
    for i in myList:
        cm.lattice(dv=(4, 5, 4), oc=True)
        cm.select('ffd1Lattice.pt[:][:][:]', r=True)
        cm.xform(r=True, t=[rand.uniform(-1, 1), rand.uniform(-1, 1), rand.uniform(-1, 1)])
    cm.headsUpMessage('this script just works with one object at a time', verticalOffset=250)

cm.window(t='Randomaize', s=False)
cm.columnLayout()
cm.button(l='do it', c=randomize, w=200)
cm.button(l='soft it', c=softIt, w=200)
cm.showWindow()
sys.stdout.write("this script just works with one object at a time\t\"script by Amin khormaei\"")
If you don't understand that little snippet, you should first understand how list slicing works. These links should point you in the right direction.
The line:
cm.select('ffd1Lattice.pt[:][:][:]',r=True)
says, basically, 'select all the control points in the lattice'. There seems to be a bug, however, in Maya 2011 (don't know about later versions) which will try to select points that don't exist. A 1x1x1 lattice should have 8 points, numbered 0-1 in all three dimensions - but using your snippet and then calling ls on the result will report
// result: 'ffdLatticeShape.pt[0:2][0:2][0:2]' //
... at least it does on my machine -- and that should be impossible. I'd avoid using the [:] form with lattices (actually the [*] form has the same bug too. Ouch.). It works fine with regular vertices, however.
For your specific application, do you want to move all of the points by the same random amount -- which is what your snippet suggests -- or all of the points by different random amounts? Or random sets of points by a random amount? The code would be different for each alternative.
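To illustrate the alternatives, here is a rough, untested sketch. It assumes a lattice named 'ffd1Lattice', and note that the [*] form may hit the same bug described above. The random.sample idea at the end works on any Python list, including lists of vertices or faces returned by cm.ls:

import maya.cmds as cm
import random as rand

# Flatten the point range into one list entry per lattice point.
# (Assumes ls expands lattice points the way it expands mesh vertices.)
pts = cm.ls('ffd1Lattice.pt[*][*][*]', flatten=True)

# 1) Move every point by its own random amount:
for p in pts:
    cm.xform(p, r=True, t=[rand.uniform(-1, 1) for _ in range(3)])

# 2) Move a random subset of points (here 10 of them) by one shared random amount:
subset = rand.sample(pts, min(10, len(pts)))
cm.select(subset, r=True)
cm.xform(r=True, t=[rand.uniform(-1, 1) for _ in range(3)])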
I have an array of time values. I want to know how many values fall in each 0.05-second window.
For example, some values of my array are: -1.9493, -1.9433, -1.911, -1.8977, -1.8671, ...
In the first interval of 0.050 seconds (from -1.9493 to -1.8993) I'm expecting to have 3 elements.
I already created another array with the 0.050-second steps.
a = max(array)
b = min(array)
ventanalinea1 = np.arange(b, a, 0.05)
v1 = np.array(ventanalinea1)
In other words, I would like to compare my original array with this one.
I would like to know if there is a way to ask python to evaluate my array within a given dynamic range.
One of the variants:
import numpy as np

# original array
a = [-1.9493, -1.9433, -1.911, -1.8977, -1.8671]
step = 0.05
bounds = np.arange(min(a), max(a) + step, step)
result = [
    list(filter(lambda x: bounds[i] <= x <= bounds[i + 1], a))
    for i in range(len(bounds) - 1)
]
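If you only need the counts per window rather than the values themselves, numpy's histogram does the binning in one call. A minimal sketch using the question's sample data:

import numpy as np

a = [-1.9493, -1.9433, -1.911, -1.8977, -1.8671]
step = 0.05
bounds = np.arange(min(a), max(a) + step, step)
# counts[i] is the number of values in [edges[i], edges[i+1])
counts, edges = np.histogram(a, bins=bounds)
for i, c in enumerate(counts):
    print("[{:.4f} : {:.4f}] : {}".format(edges[i], edges[i + 1], c))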
I have found a cool Python library, python-intervals, that simplifies your problem a lot:
Install it with pip install python-intervals and try the code below.
import intervals as I

# This is a recursive function
def counter(timevalues, w=0.050):
    if not timevalues:
        return ""  # stops the recursion when timevalues is empty
    # Make an interval object that provides convenient operations like 'contains'
    window = I.closed(timevalues[0], timevalues[0] + w)
    interval = list(filter(window.contains, timevalues))
    count = len(interval)
    timevalues = timevalues[count:]
    print(f"[{interval[0]} : {interval[-1]}] : {count}")
    return counter(timevalues)

if __name__ == "__main__":
    times = [-1.9493, -1.9433, -1.911, -1.8977, -1.8671]
    print(counter(times))
Adapt it as you wish; for example, you might want to return a dictionary rather than a string.
You could get by without the python-intervals library here, but I have introduced it because you will very likely need other, more complex interval operations along the way.
I have the following piece of code:
from abaqus import *
from odbAccess import *
from abaqusConstants import *
import visualization
import fileinput
import os
import shutil
import numpy as np

def tupled_list(mylist, n):
    '''group every n elements of the list into a tuple'''
    return list(zip(*[iter(mylist)] * n))

# Open odb
odb_filename = 'AbaqusResults.odb'
work_directory = os.getcwd()
odb_filepath = os.path.abspath(os.path.join(os.getcwd(), '..', 'outputs', odb_filename))
my_odb = session.openOdb(name=odb_filepath, readOnly=FALSE)

# Automate the process of reading the step, no matter what its name is
StepNames = my_odb.steps.keys()
lastStep = StepNames[-1]

# Automate the instance naming in the same way
AllInstances = my_odb.rootAssembly.instances.keys()
MyInstance = AllInstances[-1]
SubmodelInstance = my_odb.rootAssembly.instances[MyInstance]

# Get the dataset containing element labels and calculated quantities from a text file
dataset = np.loadtxt(os.path.join(os.getcwd(), 'AbaqusDataInput.txt'), delimiter=',')

# List of all element labels
elLabels = dataset[0, :].astype(int).tolist()

# Count the number of integration points for a single element
gausspoints = elLabels.count(1)

# Get damage data from the dataset and format it into a list of tuples; the tuple
# length depends on the number of integration points for each element
damage = tupled_list(list(dataset[1, :]), gausspoints)
damage = [list(elem) for elem in damage]

# Remove duplicates from the elLabels list
elLabels = list(set(elLabels))

for i in range(len(my_odb.steps[lastStep].frames)):
    new_field_damage = my_odb.steps[lastStep].frames[i].FieldOutput(
        name='Damage',
        description='Damage sustained after one repetition of the loading history',
        type=SCALAR)
    new_field_damage.addData(position=INTEGRATION_POINT, instance=SubmodelInstance,
                             labels=elLabels, data=damage)

my_odb.save()
my_odb.close()
but I get the following error:
Data at 141968 locations expected. Data at 17746 locations provided. Element data request failed. Element type is C3D20R. Data at 8 integration points per element expected
elLabels is a list with 17746 items, and
damage is a list of tuples with 17746 items, where each item has 8 values (one for each integration point), e.g. [(x,x,x,x,x,x,x,x), (x,x,x,x,x,x,x,x), ...]
Thus I provide Abaqus with 17746 element labels and 17746 * 8 = 141968 locations in total, as asked, so I don't see why I get the error.
I have tried a list of lists instead of a list of tuples, but the same error occurs. I suspect it is merely an issue of using the correct datatypes, but the Abaqus documentation seems severely lacking.
The script works fine for odb files with C3D8R elements, which have 1 integration point. I then provide the same elLabels list and damage list, but each tuple only has 1 value (e.g. [(x,), (x,), ...]), which works fine.
Do you know how I can import my data into the elements' integration points?
This is an old question but I stumbled across a similar problem. The correct format seems to be just plain lists:
elLabels = [1, 2, ...]
damage = [1_1, 1_2, 1_3, 1_4, 1_5, 1_6, 1_7, 1_8, 2_1, 2_2, ...]
where i_j is the data of element i at integration point j.
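If that is indeed the expected layout, flattening the question's list of 8-tuples into one plain list would look like this (an untested sketch, reusing the names from the question's script):

# element-major order: all 8 integration-point values of element 1, then element 2, ...
flat_damage = [value for element_tuple in damage for value in element_tuple]
new_field_damage.addData(position=INTEGRATION_POINT, instance=SubmodelInstance,
                         labels=elLabels, data=flat_damage)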
By the way, it's probably better to remove duplicates like this to maintain a correct order:
from collections import OrderedDict
elLabels = list(OrderedDict.fromkeys(elLabels))
I think you can first try to confirm whether the data in Abaqus is indeed stored element-first, followed by the integration-point sequence within each element. For the S4R type, for example, the S22 stresses are stored for the bottom surface of all elements and then for the top surface of all elements. That is, they are not in the sequence element 1 (bottom, top), element 2 (bottom, top), ..., element n (bottom, top); rather, it is: bottom(element 1), bottom(element 2), ..., bottom(element n), top(element 1), top(element 2), ..., top(element n).
Try:
S22_ES = (o1.steps[LoadStep].frames[loadDir + 1].fieldOutputs['S']
          .getScalarField(componentLabel="S22")
          .getSubset(region=Elset, position=INTEGRATION_POINT))
Thanks for taking a moment to read this! So, my first issue is that I'm trying to create a function, "selectionData()", to get an object's translate, rotate, and scale values and then return them as a dictionary, but find myself struggling a bit. Using the code listed below, all I keep getting is the object's name. How can I modify it to get its translate, rotate, and scale values to return as a dictionary?
My second issue is that I'm trying to create a function, "setData(data)", that takes as input the aforementioned dictionary built with selectionData() and restores the data in the dictionary to the selected objects. How do I do that?
My apologies if either of these questions are foolish, but thank you for your time, regardless! Cheers!
Current code is listed below:
from maya import cmds
sel = cmds.ls(sl=1)
meshes = []
for s in sel:
    shape = cmds.listRelatives(s, shapes=1)
    if shape:
        if cmds.nodeType(shape[0]) == "mesh":
            meshes.append(s)

meshData = {}
for m in meshes:
    pos = cmds.xform(m, q=1, ws=1, t=1)
    rot = cmds.xform(m, q=1, ws=1, rotation=1)
    scl = cmds.getAttr(m + '.s')[0]
    currentDict = {
        "pos": pos,
        "rot": rot,
        "scl": scl,
    }
    meshData[m] = currentDict

def selectionData(selectionDict):
    for k in selectionDict:
        print k

selectionData(meshData)
First of all, to list all the objects with transforms you can use this command:
cmds.ls(selection=True, transforms=True, dagObjects=True)
If you want to read/write the transforms, you don't need to get the translation, then rotation and scale separately. You can read the composite transformation matrix like this:
xform_matrix = cmds.xform(source_object, query=True, matrix=True)
You'll get a list of 16 float numbers that are ready to be applied to other objects:
cmds.xform(destination_object, matrix=xform_matrix)
I am not sure how you want to map the transforms from one set of selected objects to another set. If you describe it, I'll be able to post the complete code.
The following script will collect the composite transformation matrix of all selected objects in the dictionary with long object names as the keys:
selected_objects_matrix = {}
for current_object in cmds.ls(selection=True, transforms=True, dagObjects=True):
    # cmds.ls returns a list, so take [0] to get the long name as a string key
    selected_objects_matrix[cmds.ls(current_object, long=True)[0]] = cmds.xform(
        current_object,
        query=True,
        matrix=True)
Then if you move/rotate/scale the objects in Maya, you can revert like this:
for current_object in selected_objects_matrix:
    cmds.xform(
        current_object,
        matrix=selected_objects_matrix[current_object])
You really just want to do what you have, in reverse.
Setting q=1 puts the command into query mode; removing that flag defaults the command to edit mode.
getAttr and setAttr are the respective get/set commands, but you need to be aware of the data you're setting. cmds.getAttr(m + '.s') returns a list containing a single (sx, sy, sz) tuple, so the [0] above gives you all three scale values, and setting them back needs the double3 form:
cmds.xform(m, ws=1, t=posValues)
cmds.xform(m, ws=1, rotation=rotValues)
cmds.setAttr(m + '.s', *sclValues, type='double3')  # sclValues is the (sx, sy, sz) tuple
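Putting the pieces together, here is a minimal sketch of the two functions the question asks for (function names taken from the question; flags as in the snippets above):

from maya import cmds

def selectionData():
    """Return {objectName: {"pos": [...], "rot": [...], "scl": [...]}} for the selection."""
    data = {}
    for obj in cmds.ls(sl=True, transforms=True):
        data[obj] = {
            "pos": cmds.xform(obj, q=True, ws=True, t=True),
            "rot": cmds.xform(obj, q=True, ws=True, rotation=True),
            "scl": list(cmds.getAttr(obj + '.s')[0]),
        }
    return data

def setData(data):
    """Restore transform values captured by selectionData()."""
    for obj, values in data.items():
        if not cmds.objExists(obj):
            continue  # skip objects that no longer exist
        cmds.xform(obj, ws=True, t=values["pos"])
        cmds.xform(obj, ws=True, rotation=values["rot"])
        cmds.setAttr(obj + '.s', *values["scl"], type='double3')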
I give a lot of information on the methods that I used to write my code. If you just want to read my question, skip to the quotes at the end.
I'm working on a project that has a goal of detecting sub populations in a group of patients. I thought this sounded like the perfect opportunity to use association rule mining as I'm currently taking a class on the subject.
There are 42 variables in total. Of those, 20 are continuous and had to be discretized. For each variable, I used the Freedman-Diaconis rule to determine how many categories to divide a group into.
def Freedman_Diaconis(column_values):
    # sort the list first
    column_values[1].sort()
    first_quartile = int(len(column_values[1]) * .25)
    third_quartile = int(len(column_values[1]) * .75)
    fq_value = column_values[1][first_quartile]
    tq_value = column_values[1][third_quartile]
    iqr = tq_value - fq_value
    # float exponent: -1/3 floors to -1 under Python 2 integer division
    n_to_pow = len(column_values[1]) ** (-1.0 / 3)
    h = 2 * iqr * n_to_pow
    # range = max - min (index 0 after sorting)
    retval = (column_values[1][-1] - column_values[1][0]) / h
    test = int(retval + 1)
    return test
From there I used min-max normalization
from sklearn import preprocessing

def min_max_transform(column_of_data, num_bins):
    min_max_normalizer = preprocessing.MinMaxScaler(feature_range=(1, num_bins))
    data_min_max = min_max_normalizer.fit_transform(column_of_data[1])
    data_min_max_ints = take_int(data_min_max)
    return data_min_max_ints
to transform my data, and then I simply took the integer portion to get the final categorization.
def take_int(list_of_float):
    ints = []
    for flt in list_of_float:
        asint = int(flt)
        ints.append(asint)
    return ints
I then also wrote a function that I used to combine this value with the variable name.
def string_transform(prefix, column, index):
    transformed_list = []
    transformed = ""
    if index < 4:
        for entry in column[1]:
            transformed = prefix + str(entry)
            transformed_list.append(transformed)
    else:
        prefix_num = prefix.split('x')
        for entry in column[1]:
            transformed = str(prefix_num[1]) + 'x' + str(entry)
            transformed_list.append(transformed)
    return transformed_list
This was done to differentiate variables that have the same value, but appear in different columns. For example, having a value of 1 for variable x14 means something different from getting a value of 1 in variable x20. The string transform function would create 14x1 and 20x1 for the previously mentioned examples.
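For instance, a quick check of the index >= 4 branch (a hypothetical call; column is assumed to be a (name, values) pair where column[1] holds the values, as in the other functions):

# prefix 'x14' is split on 'x', so each value 1, 2 becomes '14x1', '14x2'
print string_transform('x14', ('x14', [1, 2]), 14)  # prints ['14x1', '14x2']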
After this, I wrote everything to a file in basket format
import csv
import os

def create_basket(list_of_lists, headers):
    #for filename in os.listdir("."):
    #    if filename.e
    if not os.path.exists('baskets'):
        os.makedirs('baskets')
    down_length = len(list_of_lists[0])
    with open('baskets/dataset.basket', 'w') as basketfile:
        basket_writer = csv.DictWriter(basketfile, fieldnames=headers)
        for i in range(0, down_length):
            basket_writer.writerow({"trt": list_of_lists[0][i], "y": list_of_lists[1][i], "x1": list_of_lists[2][i],
                                    "x2": list_of_lists[3][i], "x3": list_of_lists[4][i], "x4": list_of_lists[5][i],
                                    "x5": list_of_lists[6][i], "x6": list_of_lists[7][i], "x7": list_of_lists[8][i],
                                    "x8": list_of_lists[9][i], "x9": list_of_lists[10][i], "x10": list_of_lists[11][i],
                                    "x11": list_of_lists[12][i], "x12": list_of_lists[13][i], "x13": list_of_lists[14][i],
                                    "x14": list_of_lists[15][i], "x15": list_of_lists[16][i], "x16": list_of_lists[17][i],
                                    "x17": list_of_lists[18][i], "x18": list_of_lists[19][i], "x19": list_of_lists[20][i],
                                    "x20": list_of_lists[21][i], "x21": list_of_lists[22][i], "x22": list_of_lists[23][i],
                                    "x23": list_of_lists[24][i], "x24": list_of_lists[25][i], "x25": list_of_lists[26][i],
                                    "x26": list_of_lists[27][i], "x27": list_of_lists[28][i], "x28": list_of_lists[29][i],
                                    "x29": list_of_lists[30][i], "x30": list_of_lists[31][i], "x31": list_of_lists[32][i],
                                    "x32": list_of_lists[33][i], "x33": list_of_lists[34][i], "x34": list_of_lists[35][i],
                                    "x35": list_of_lists[36][i], "x36": list_of_lists[37][i], "x37": list_of_lists[38][i],
                                    "x38": list_of_lists[39][i], "x39": list_of_lists[40][i], "x40": list_of_lists[41][i]})
and I used the apriori package in Orange to see if there were any association rules.
import Orange

rules = Orange.associate.AssociationRulesSparseInducer(patient_basket, support=0.3, confidence=0.3)
print "%4s %4s %s" % ("Supp", "Conf", "Rule")
for r in rules:
    my_rule = str(r)
    split_rule = my_rule.split("->")
    if 'trt' in split_rule[1]:
        print 'treatment rule'
        print "%4.1f %4.1f %s" % (r.support, r.confidence, r)
Using this technique, I found quite a few association rules in my testing data.
THIS IS WHERE I HAVE A PROBLEM
When I read the notes for the training data, there is this note
...That is, the only
reason for the differences among observed responses to the same treatment across patients is
random noise. Hence, there is NO meaningful subgroup for this dataset...
My question is,
why do I get multiple association rules that imply there are subgroups, when according to the notes I shouldn't see anything?
I'm getting lift numbers above 2, as opposed to the 1 you should expect if everything were random, as the notes state.
Supp Conf Rule
0.3 0.7 6x0 -> trt1
Even though my code runs, I'm not getting results anywhere close to what should be expected. This leads me to believe that I messed something up, but I'm not sure what it is.
After some research, I realized that my sample size is too small for the number of variables that I have. I would need a much larger sample size to really use the method I was using. In fact, the method I tried to use was developed with the assumption that it would be run on databases with hundreds of thousands, or millions, of rows.
Let's say I have one locator above a polyPlane. What I want to do is a lookup or trace from the locator in negative or positive Y until it hits the polyPlane, and return the position of the closest point/vertex/uv/
I imagine this has been done a million times, but the only examples I have found work by locating the closest point based on all axes, which in my case is close to useless.
I would appreciate any help I could get!
Edit:
Added image of the difference between the first suggested solution and what I want to achieve
What we can do is use OpenMaya (Maya's API) to loop over the face verts gathered in an array and check each one's distance from the locator position; if it is shorter than the last shortest distance, save it as the closestVertex variable.
import maya.OpenMaya as OpenMaya
from pymel.core import *

geo = PyNode('pSphere1')
pos = PyNode('locator1').getRotatePivot(space='world')

nodeDagPath = OpenMaya.MObject()
try:
    selectionList = OpenMaya.MSelectionList()
    selectionList.add(geo.name())
    nodeDagPath = OpenMaya.MDagPath()
    selectionList.getDagPath(0, nodeDagPath)
except:
    warning('OpenMaya.MDagPath() failed on %s' % geo.name())

mfnMesh = OpenMaya.MFnMesh(nodeDagPath)

pointA = OpenMaya.MPoint(pos.x, pos.y, pos.z)
pointB = OpenMaya.MPoint()
space = OpenMaya.MSpace.kWorld

util = OpenMaya.MScriptUtil()
util.createFromInt(0)
idPointer = util.asIntPtr()

mfnMesh.getClosestPoint(pointA, pointB, space, idPointer)
idx = OpenMaya.MScriptUtil(idPointer).asInt()

faceVerts = [geo.vtx[i] for i in geo.f[idx].getVertices()]
closestVertex = None
minLength = None
for v in faceVerts:
    thisLength = (pos - v.getPosition(space='world')).length()
    if minLength is None or thisLength < minLength:
        minLength = thisLength
        closestVertex = v
select(closestVertex)
This could probably be done with python without the API, but if you've got maya, you've got access to the API :)
I hope this helps
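For the straight-down trace the question's edit asks about, a ray intersection is closer to the goal than a closest-point query. Below is a rough, untested sketch in the same API 1.0 style as above: it fires a ray from the locator in -Y and reports the hit point. It assumes mfnMesh and pos from the snippet above; from the hit face you could reuse the same face-vert loop to find the closest vertex.

import maya.OpenMaya as OpenMaya

raySource = OpenMaya.MFloatPoint(pos.x, pos.y, pos.z)
rayDirection = OpenMaya.MFloatVector(0, -1, 0)  # trace straight down in Y
hitPoint = OpenMaya.MFloatPoint()

gotHit = mfnMesh.closestIntersection(
    raySource, rayDirection,
    None, None,               # faceIds, triIds: no face filtering
    False,                    # idsSorted
    OpenMaya.MSpace.kWorld,
    99999.0,                  # maxParam: maximum ray length
    False,                    # testBothDirections: True would also trace +Y
    None,                     # accelParams
    hitPoint,
    None, None, None, None, None)  # hitRayParam, hitFace, hitTriangle, hitBary1, hitBary2

if gotHit:
    print hitPoint.x, hitPoint.y, hitPoint.z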