Maya Scale Constraint - python

I have two lists:
mainCTRL = ['nurbsCircle1','nurbsCircle2','nurbsCircle3']
grpCTRL = ['group1','group2','group3']
For each object in mainCTRL, I am trying to apply a scale constraint to the object at the same index in grpCTRL. The constraints should be applied in order, like this:
'nurbsCircle1' should apply a scale constraint to 'group1'
'nurbsCircle2' should apply a scale constraint to 'group2'
'nurbsCircle3' should apply a scale constraint to 'group3'
How can I do this? How can I tell Python to apply this command to each nurbsCircle and its matching group?
cmds.scaleConstraint('eachNurbsCircle', 'eachGrp')
I am a newbie to python and learning things as I go. Any help is really appreciated.
Thank you very much :)

This is relatively easy with Python's built-in zip function.
import maya.cmds as cmds

mainCTRL = ['nurbsCircle1', 'nurbsCircle2', 'nurbsCircle3']
grpCTRL = ['group1', 'group2', 'group3']

for ctrl, grp in zip(mainCTRL, grpCTRL):
    cmds.scaleConstraint(ctrl, grp)
You can print list(zip(mainCTRL, grpCTRL)) to see what it actually returns (in Python 3, zip returns an iterator, so wrap it in list() to see the pairs).
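For example, pairing the two lists above gives:

print(list(zip(mainCTRL, grpCTRL)))
# [('nurbsCircle1', 'group1'), ('nurbsCircle2', 'group2'), ('nurbsCircle3', 'group3')]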

Related

How do I convert an lpSum to an LpConstraint?

In PuLP I have constraints of the form:
lpSum([decision_vars[an_id][item] for item in a_vector]) == count_req[an_id], f'constraint_{an_id}'
and I want to convert this to use LpConstraint as a stepping stone to making this constraint elastic. i.e. LpConstraint(...).makeElasticSubProblem(...)
LpConstraint(
    e=pl.lpSum([decision_vars[an_id][item] for item in a_vector]),
    sense=LpConstraintEQ,
    rhs=count_req[an_id],
    name=f'constraint_{an_id}'
)
Are these equivalent?
Is there some cleaner example or documentation for converting an lpSum to an LpConstraint?
It's maybe not the answer you're looking for, but I recommend against using PuLP. cvxpy is easier to learn and use (free variables instead of variables bound to models) and allows you to easily extend your models to nonlinear convex systems, which gives you much more bang for your buck in terms of effort put into learning versus capability obtained.
For instance, in cvxpy, your problem might be cast as:
from collections import defaultdict

import cvxpy as cp

# defaultdict(dict) lets us assign decision_vars[an_id][item] directly.
decision_vars = defaultdict(dict)
for an_id in ids:
    for item in a_vector:
        decision_vars[an_id][item] = cp.Variable(pos=True)

constraints = []
for an_id in ids:
    my_sum = sum(decision_vars[an_id].values())
    constraints.append(count_req[an_id] * lower_proportion <= my_sum)
    constraints.append(my_sum <= count_req[an_id] * upper_proportion)

problem = cp.Problem(cp.Minimize(OBJECTIVE_HERE), constraints)
optimal_value = problem.solve()

for an_id in ids:
    for item in a_vector:
        print(f"{an_id} {item} {decision_vars[an_id][item].value}")
Note that the use of free variables means that we can use standard Python constructs like dictionaries, sums, and comparison operators to build our problem. Sure, you have to construct the elastic constraint manually, but that's not challenging.
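For completeness, here is the same sketch made runnable with made-up data (two ids, three items, and a placeholder objective that just minimizes the grand total; the proportions and counts are purely illustrative):

import cvxpy as cp
from collections import defaultdict

ids = ['id1', 'id2']
a_vector = ['a', 'b', 'c']
count_req = {'id1': 10, 'id2': 6}
lower_proportion, upper_proportion = 0.9, 1.1

decision_vars = defaultdict(dict)
for an_id in ids:
    for item in a_vector:
        decision_vars[an_id][item] = cp.Variable(pos=True)

constraints = []
for an_id in ids:
    my_sum = sum(decision_vars[an_id].values())
    constraints.append(count_req[an_id] * lower_proportion <= my_sum)
    constraints.append(my_sum <= count_req[an_id] * upper_proportion)

# Placeholder objective: minimize the grand total of all variables.
total = sum(v for d in decision_vars.values() for v in d.values())
problem = cp.Problem(cp.Minimize(total), constraints)
print(problem.solve())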

(hairSimulation/CFX) selecting multiple curves, each with its own duplicate, and blendShaping them at the same time

In Maya I am simulating hair and I want to lock the curves that are jittering. I duplicated those curves to make blend shapes, but there are too many to blend shape individually. Is there a way to solve this with a script? I think the way is to slice to get the names/numbers of the curves and blendShape all of them in a loop, but since I'm new to scripting I need help.
You can get the follicles with this command:
fols = cmds.ls(type='follicle')
After that, find the simulated curves:
crvs = cmds.ls(fols, dag=True, type='nurbsCurve')
Loop through those curves and use:
dup = cmds.duplicate(c)[0]  # where c is the loop variable of the for loop over crvs
then:
bs = cmds.blendShape(dup, c)
Each of these commands has a few more flags that should help you, like name, weight, and a few others.
I don't have Maya for a few weeks, so I hope this helps.
EDIT:
Note that follicle curves are set as intermediate objects; for blendshaping, you might need to temporarily unset that:
cmds.setAttr(c+'.io', 0)
bs = cmds.blendShape(dup, c)
cmds.setAttr(c+'.io', 1)
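Putting the pieces together, the whole loop might look like this (a sketch assembled from the commands above; untested, as noted, and '.io' is the short name of the intermediateObject attribute):

import maya.cmds as cmds

# Find all follicles, then the simulated curves under them.
fols = cmds.ls(type='follicle')
crvs = cmds.ls(fols, dag=True, type='nurbsCurve')

for c in crvs:
    # Duplicate the simulated curve to use as the blendShape target.
    dup = cmds.duplicate(c)[0]
    # Follicle curves are intermediate objects; temporarily unset the
    # flag so blendShape accepts the curve, then restore it.
    cmds.setAttr(c + '.io', 0)
    bs = cmds.blendShape(dup, c)
    cmds.setAttr(c + '.io', 1)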

Nurse rostering using ortools constraint solver

I went through the tutorial from Google and I seem to understand most of the code. My problem is that it chooses solutions based only on hard constraints. Most papers also use soft constraints, and every constraint has its own coefficient. The sum of all constraint violations, each multiplied by its coefficient, gives the cost of the roster, and the goal is to minimize this value. My question is: how can I add this to the code?
# Create the decision builder.
db = solver.Phase(shifts_flat, solver.CHOOSE_FIRST_UNBOUND,
                  solver.ASSIGN_MIN_VALUE)
# Create the solution collector.
solution = solver.Assignment()
solution.Add(shifts_flat)
collector = solver.AllSolutionCollector(solution)
solver.Solve(db, [collector])
I'm not sure what the decision builder does (or what its parameters are), nor what solver.Assignment() or solver.AllSolutionCollector(solution) do.
The only thing I found is this, but I'm not sure how to use it. (Maybe call solver.Minimize(cost, ?) instead of the assignment?)
If you look at:
https://github.com/google/or-tools/blob/stable/examples/python/shift_scheduling_sat.py
The data defines employee requests:
https://github.com/google/or-tools/blob/stable/examples/python/shift_scheduling_sat.py#L219
The model directly creates one bool var for each tuple (employee, day, shift).
Thus adding that to the objective is straightforward:
# Employee requests
for e, s, d, w in requests:
    obj_bool_vars.append(work[e, s, d])
    obj_bool_coeffs.append(w)
This is used in the minimize code:
# Objective
model.Minimize(
    sum(obj_bool_vars[i] * obj_bool_coeffs[i]
        for i in range(len(obj_bool_vars))) +
    sum(obj_int_vars[i] * obj_int_coeffs[i]
        for i in range(len(obj_int_vars))))
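To see the pattern in isolation, here is a minimal self-contained sketch in the same CP-SAT style (the employees, days, and the weight of 4 are made up for illustration):

from ortools.sat.python import cp_model

model = cp_model.CpModel()

# One bool var per (employee, day); 1 means the employee works that day.
num_employees, num_days = 3, 2
work = {(e, d): model.NewBoolVar('work_%i_%i' % (e, d))
        for e in range(num_employees) for d in range(num_days)}

# Hard constraint: exactly one employee works each day.
for d in range(num_days):
    model.Add(sum(work[e, d] for e in range(num_employees)) == 1)

# Soft constraint: employee 0 requests day 1 off. Scheduling them anyway
# is allowed, but adds 4 (a made-up coefficient) to the objective.
obj_bool_vars = [work[0, 1]]
obj_bool_coeffs = [4]
model.Minimize(sum(v * c for v, c in zip(obj_bool_vars, obj_bool_coeffs)))

solver = cp_model.CpSolver()
status = solver.Solve(model)
if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
    print('cost =', solver.ObjectiveValue())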

How to remove the mean of some values in the 3rd dimension of a matrix?

I have been stuck trying to do this with numpy, with no luck. I am trying to move from MATLAB to Python; however, the transition hasn't been easy. Anyway, that doesn't matter.
I am trying to code the Python analog of this simple MATLAB line of code:
A(:,:,condtype==1 & Mat(:,9)==contra(ii)) = A(:,:, condtype ==1 & Mat(:,9)==contra(ii))-mean(A(:,:, condtype ==1 & Mat(:,9)==contra(ii)),3);
Right, so the above convoluted line of code does the following: it indexes the slices along the 3rd dimension of A that match a condition (half of them), subtracts the mean of those slices, and writes the mean-removed values back into A in place.
How would one go about doing this in Python?
I actually figured it out. I was trying to use Python's and operator when I should have been using np.equal. Also, I needed to use keepdims=True for the mean. Here it is for anyone who wants to see:
import numpy as np

def RmContrastMean(targettype, trialsMat, Contrastlvls, dX):
    present = targettype == 1
    absent = targettype == 0
    for i in range(0, Contrastlvls.size):
        CurrentContrast = trialsMat[:, 8] == Contrastlvls[i]
        preIdx = np.equal(present, CurrentContrast)
        absIdx = np.equal(absent, CurrentContrast)
        # mean
        dX[:, :, preIdx] = dX[:, :, preIdx] - np.mean(dX[:, :, preIdx], axis=2, keepdims=True)
        dX[:, :, absIdx] = dX[:, :, absIdx] - np.mean(dX[:, :, absIdx], axis=2, keepdims=True)
        # std
        dX[:, :, preIdx] = dX[:, :, preIdx] / np.std(dX[:, :, preIdx], axis=2, keepdims=True)
        dX[:, :, absIdx] = dX[:, :, absIdx] / np.std(dX[:, :, absIdx], axis=2, keepdims=True)
    return dX
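A quick way to exercise the function (with made-up shapes: 4x5 data for 100 trials and two contrast levels; the values are purely illustrative):

import numpy as np

rng = np.random.default_rng(0)
targettype = rng.integers(0, 2, size=100)            # 1 = present, 0 = absent
trialsMat = rng.random((100, 10))
trialsMat[:, 8] = rng.choice([0.25, 0.5], size=100)  # contrast column
Contrastlvls = np.array([0.25, 0.5])
dX = rng.random((4, 5, 100))

out = RmContrastMean(targettype, trialsMat, Contrastlvls, dX)
print(out.shape)  # (4, 5, 100); note dX is also modified in place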

Google TensorFlow crash course: issues with Representation programming exercises, Task 2: Make Better Use of Latitude

Hi, I got into another roadblock in the TensorFlow crash course, at the Representation programming exercises on this page:
https://developers.google.com/…/repres…/programming-exercise
I'm at Task 2: Make Better Use of Latitude
It seems I've narrowed the issue down to where I convert the raw latitude data into "buckets" or ranges, which are represented as 1 or 0 in my features. The actual code and the issue I have are in the pastebin. Any advice would be great! Thanks!
https://pastebin.com/xvV2A9Ac
This converts the raw latitude data in my pandas DataFrame into "buckets" or ranges, as Google calls them:
LATITUDE_RANGES = zip(xrange(32, 44), xrange(33, 45))
In the above code I replaced xrange with plain range, since xrange was removed in Python 3.
Could this be the problem, using range instead of xrange? See below for my conundrum.
def select_and_transform_features(source_df):
    selected_examples = pd.DataFrame()
    selected_examples["median_income"] = source_df["median_income"]
    for r in LATITUDE_RANGES:
        selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
            lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
    return selected_examples
The next two lines run the above function and convert my existing training and validation data sets into latitude ranges or buckets:
selected_training_examples = select_and_transform_features(training_examples)
selected_validation_examples = select_and_transform_features(validation_examples)
This is the training model:
_ = train_model(
    learning_rate=0.01,
    steps=500,
    batch_size=5,
    training_examples=selected_training_examples,
    training_targets=training_targets,
    validation_examples=selected_validation_examples,
    validation_targets=validation_targets)
THE PROBLEM:
OK, so here is how I understand the problem. When I run the training model it throws this error:
ValueError: Feature latitude_32_to_33 is not in features dictionary.
So I inspected selected_training_examples and selected_validation_examples, and here's what I found. If I run
selected_training_examples = select_and_transform_features(training_examples)
then I get the proper dataset when I inspect selected_training_examples, which yields all the feature "buckets", including latitude_32_to_33.
But when I run the next call,
selected_validation_examples = select_and_transform_features(validation_examples)
it yields no buckets or ranges, resulting in the
`ValueError: Feature latitude_32_to_33 is not in features dictionary.`
So I next tried disabling the first call,
selected_training_examples = select_and_transform_features(training_examples)
and just ran the second one:
selected_validation_examples = select_and_transform_features(validation_examples)
If I do this, I then get the desired dataset for selected_validation_examples.
The problem is that the first call now no longer gives me the "buckets", and I'm back to where I began. I guess my question is: how are the two calls affecting each other, and preventing each other from giving me the datasets I need, when I run them together?
Thanks in advance!
A Python developer gave me the solution, so I just wanted to share. LATITUDE_RANGES = zip(range(32, 44), range(33, 45)) can only be consumed once the way it was written, so I placed it inside the succeeding select_and_transform_features(source_df) function, which solved the issue. Thanks again everyone.
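In code, the fix looks something like this (a sketch of the described change; a zip object in Python 3 is a one-shot iterator, so rebuilding it inside the function gives every call a fresh one):

def select_and_transform_features(source_df):
    # Rebuild on every call: a zip object is exhausted after one pass,
    # which is why the second call previously saw no buckets.
    LATITUDE_RANGES = zip(range(32, 44), range(33, 45))
    selected_examples = pd.DataFrame()
    selected_examples["median_income"] = source_df["median_income"]
    for r in LATITUDE_RANGES:
        selected_examples["latitude_%d_to_%d" % r] = source_df["latitude"].apply(
            lambda l: 1.0 if l >= r[0] and l < r[1] else 0.0)
    return selected_examples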
