With pyparsing, how do you ignore nested structures?

I am having trouble finding a way to ignore a structure if it is nested in another type of structure. In the example below I have a structure_a that I am trying to parse for, but in my results I am also getting matches for structure_a that are nested in another structure. I don't want pyparsing to match those unless I match the outer structure first. How would I go about doing that?
self.LBRACE, self.RBRACE, self.LBRACK, self.RBRACK, self.SEMI, self.COMMA, self.DOUBLEQUOTE = map(pp.Suppress, '{}[];,"')

def parse(self, data):
    template = CaselessKeyword("structure_a")
    words = Word(alphanums + "_" + "." + "[" + "]")
    recursive_grammar = Forward()
    recursive_grammar <<= (
        Group(words("type") + words("name") + self.LBRACE +
              ZeroOrMore(recursive_grammar) + self.RBRACE |
              words("name") + self.LBRACE +
              ZeroOrMore(recursive_grammar) + self.RBRACE |
              self.LBRACE + ZeroOrMore(recursive_grammar) + self.RBRACE |
              self.LBRACE + ZeroOrMore(words("type")) + self.RBRACE) |
        Group(words("name") + self.EQUAL + recursive_grammar |
              ZeroOrMore(words("type")) + words("name") + self.EQUAL +
              words("value") + Optional(self.COMMA) |
              words("name") + self.EQUAL + words("value") +
              Optional(self.COMMA))
    )
    grammar = (template("category") + words("type") + words("name") +
               self.LBRACE + ZeroOrMore(recursive_grammar)("members") +
               self.RBRACE + Optional(cStyleComment)("short_description"))
    result = grammar.searchString(data)
    return result
# I want to match this structure
structure_a type name {
    variable = 1
}
structure_b name {
    # I only want to match a nested structure_a if I create a new
    # grammar to match structure_b that has structure_a nested in it.
    # Otherwise I want to ignore the nested structure_a
    structure_a type name {
        variable = 2
    }
}
Currently my grammar matches elements inside structure_b as well as top-level elements. I don't want pyparsing to match anything in structure_b unless I explicitly match structure_b first.

After writing the question out, posting it, and taking some time away from the problem, I think I have come up with a solution. The reason it was matching the nested structure_a is that when the parser can't find a match for the outer structure_b, it simply moves on to the next span of text, so it has no way of knowing that the nested structure_a is nested. I rewrote my code as follows, and it seems to work.
self.LBRACE, self.RBRACE, self.LBRACK, self.RBRACK, self.SEMI, self.COMMA, self.DOUBLEQUOTE = map(pp.Suppress, '{}[];,"')

def parse(self, data):
    template1 = CaselessKeyword("structure_a")
    template2 = CaselessKeyword("structure_b")
    words = Word(alphanums + "_" + "." + "[" + "]")
    recursive_grammar = Forward()
    recursive_grammar <<= (
        Group(words("type") + words("name") + self.LBRACE +
              ZeroOrMore(recursive_grammar) + self.RBRACE |
              words("name") + self.LBRACE + ZeroOrMore(recursive_grammar) +
              self.RBRACE |
              self.LBRACE + ZeroOrMore(recursive_grammar) + self.RBRACE |
              self.LBRACE + ZeroOrMore(words("type")) + self.RBRACE |
              # added the nested structure to my recursive grammar
              template1("category") + words("type") + words("name") +
              self.LBRACE + ZeroOrMore(recursive_grammar)("members") +
              self.RBRACE + Optional(cStyleComment)("short_description")) |
        Group(words("name") + self.EQUAL + recursive_grammar |
              ZeroOrMore(words("type")) + words("name") + self.EQUAL +
              words("value") + Optional(self.COMMA) |
              words("name") + self.EQUAL + words("value") + Optional(self.COMMA))
    )
    grammar = (template1("category") + words("type") + words("name") +
               self.LBRACE + ZeroOrMore(recursive_grammar)("members") +
               self.RBRACE + Optional(cStyleComment)("short_description") |
               # Match structure_b
               template2("category") + words("name") + self.LBRACE +
               ZeroOrMore(recursive_grammar)("members") + self.RBRACE +
               Optional(cStyleComment)("short_description")
               )
    result = grammar.searchString(data)
    return result
# Same example from question...
structure_a type name {
    variable = 1
}
structure_b name {
    structure_a type name {
        variable = 2
    }
}
# Results...
Name: name
Category: structure_a
Type: type
['variable', '=', '1']
Name: name
Category: structure_b
Type:
['structure_a', 'type', 'name', ['variable', '=', '2']]
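The key idea can be shown in a minimal, self-contained sketch (a simplified grammar written for illustration, not the code above): searchString skips any text it cannot match, so a lone structure_a grammar also fires on the nested copy; adding the outer structure_b as an alternative lets the scanner consume the nested structure_a as part of structure_b's body.

```python
import pyparsing as pp

LBRACE, RBRACE, EQ = map(pp.Suppress, "{}=")
word = pp.Word(pp.alphanums + "_.")

# body of a structure: assignments or nested structure_a blocks
body = pp.Forward()
assignment = pp.Group(word + EQ + word)
nested_a = pp.Group(pp.CaselessKeyword("structure_a") + word + word +
                    LBRACE + body + RBRACE)
body <<= pp.ZeroOrMore(assignment | nested_a)

struct_a = pp.Group(pp.CaselessKeyword("structure_a") + word + word +
                    LBRACE + body + RBRACE)
struct_b = pp.Group(pp.CaselessKeyword("structure_b") + word +
                    LBRACE + body + RBRACE)

data = """
structure_a type name {
    variable = 1
}
structure_b name {
    structure_a type name {
        variable = 2
    }
}
"""

# Searching for struct_a alone also finds the nested copy ...
print(len(struct_a.searchString(data)))  # 2

# ... but offering the outer structure_b as an alternative consumes
# the nested structure_a as part of structure_b's body.
hits = (struct_a | struct_b).searchString(data)
print([h[0][0] for h in hits])  # ['structure_a', 'structure_b']
```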

Related

Facing "attribute error:'dict' object has no attribute 'dtype' " in google colab

I have tried "Data Augmentation" for a numeric datatset. Augmentation is successful but while exporting the augmented dataset from google colab I am facing the Attribute Error:'dict' object has no attribute 'dtype'
The part of the code and the error message is given below:
A = []
A.append(df)
for _ in range(5):
    for _, row in df.iterrows():
        temp = {
'PERCENT_PUB_DATA': row['PERCENT_PUB_DATA'] + np.random.uniform(percst),
'ACCESS_TO_PUB_DATA':row['ACCESS_TO_PUB_DATA'] + np.random.uniform(accest),
'COUPLING_BETWEEN_OBJECTS':row['COUPLING_BETWEEN_OBJECTS'] + np.random.uniform(coupst),
'DEPTH':row['DEPTH'] + np.random.uniform(deptst),
'LACK_OF_COHESION_OF_METHODS':row['LACK_OF_COHESION_OF_METHODS'] + np.random.uniform(lackst),
'NUM_OF_CHILDREN':row['NUM_OF_CHILDREN'] + np.random.uniform(numost),
'DEP_ON_CHILD':row['DEP_ON_CHILD'] + np.random.uniform(depost),
'FAN_IN':row['FAN_IN'] + np.random.uniform(fanist),
'RESPONSE_FOR_CLASS':row['RESPONSE_FOR_CLASS'] + np.random.uniform(respst),
'WEIGHTED_METHODS_PER_CLASS':row['WEIGHTED_METHODS_PER_CLASS'] + np.random.uniform(weigst),
'minLOC_BLANK':row['minLOC_BLANK'] + np.random.uniform(blankst),
'minBRANCH_COUNT':row['minBRANCH_COUNT'] + np.random.uniform(branchst),
'minLOC_CODE_AND_COMMENT':row['minLOC_CODE_AND_COMMENT'] + np.random.uniform(codest),
'minLOC_COMMENTS':row['minLOC_COMMENTS'] + np.random.uniform(comentsst),
'minCYCLOMATIC_COMPLEXITY':row['minCYCLOMATIC_COMPLEXITY'] + np.random.uniform(cyclost),
'minDESIGN_COMPLEXITY':row['minDESIGN_COMPLEXITY'] + np.random.uniform(desist),
'minESSENTIAL_COMPLEXITY':row['minESSENTIAL_COMPLEXITY'] + np.random.uniform(essest),
'minLOC_EXECUTABLE':row['minLOC_EXECUTABLE'] + np.random.uniform(execst),
'minHALSTEAD_CONTENT':row['minHALSTEAD_CONTENT'] + np.random.uniform(contst),
'minHALSTEAD_DIFFICULTY':row['minHALSTEAD_DIFFICULTY'] + np.random.uniform(diffest),
'minHALSTEAD_EFFORT':row['minHALSTEAD_EFFORT'] + np.random.uniform(effortsst),
'minHALSTEAD_ERROR_EST':row['minHALSTEAD_ERROR_EST'] + np.random.uniform(errost),
'minHALSTEAD_LENGTH':row['minHALSTEAD_LENGTH'] + np.random.uniform(lengtst),
'minHALSTEAD_LEVEL':row['minHALSTEAD_LEVEL'] + np.random.uniform(levst),
'minHALSTEAD_PROG_TIME':row['minHALSTEAD_PROG_TIME'] + np.random.uniform(progst),
'minHALSTEAD_VOLUME':row['minHALSTEAD_VOLUME'] + np.random.uniform(volust),
'minNUM_OPERANDS':row['minNUM_OPERANDS'] + np.random.uniform(operanst),
'minNUM_OPERATORS':row['minNUM_OPERATORS'] + np.random.uniform(operatst),
'minNUM_UNIQUE_OPERANDS':row['minNUM_UNIQUE_OPERANDS'] + np.random.uniform(uoperandst),
'minNUM_UNIQUE_OPERATORS' :row['minNUM_UNIQUE_OPERATORS'] + np.random.uniform(uoperatorst),
'minLOC_TOTAL' :row['minLOC_TOTAL'] + np.random.uniform(totst),
'maxLOC_BLANK' :row['maxLOC_BLANK'] + np.random.uniform(mblankst),
'maxBRANCH_COUNT' :row['maxBRANCH_COUNT'] + np.random.uniform(branchcountst),
'maxLOC_CODE_AND_COMMENT' :row['maxLOC_CODE_AND_COMMENT'] + np.random.uniform(mcodest),
'maxLOC_COMMENTS' :row['maxLOC_COMMENTS'] + np.random.uniform(mcommentst),
'maxCYCLOMATIC_COMPLEXITY' :row['maxCYCLOMATIC_COMPLEXITY'] + np.random.uniform(mcyclost),
'maxDESIGN_COMPLEXITY' :row['maxDESIGN_COMPLEXITY'] + np.random.uniform(mdesist),
'maxESSENTIAL_COMPLEXITY' :row['maxESSENTIAL_COMPLEXITY'] + np.random.uniform(messenst),
'maxLOC_EXECUTABLE' :row['maxLOC_EXECUTABLE'] + np.random.uniform(mlocst),
'maxHALSTEAD_CONTENT' :row['maxHALSTEAD_CONTENT'] + np.random.uniform(mhalconst),
'maxHALSTEAD_DIFFICULTY' :row['maxHALSTEAD_DIFFICULTY'] + np.random.uniform(mhaldiffst),
'maxHALSTEAD_EFFORT' :row['maxHALSTEAD_EFFORT'] + np.random.uniform(mhaleffst),
'maxHALSTEAD_ERROR_EST' :row['maxHALSTEAD_ERROR_EST'] + np.random.uniform(mhalerrst),
'maxHALSTEAD_LENGTH' :row['maxHALSTEAD_LENGTH'] + np.random.uniform(mhallenst),
'maxHALSTEAD_LEVEL' :row['maxHALSTEAD_LEVEL'] + np.random.uniform(mhallevst),
'maxHALSTEAD_PROG_TIME' :row['maxHALSTEAD_PROG_TIME'] + np.random.uniform(mhalpst),
'maxHALSTEAD_VOLUME' :row['maxHALSTEAD_VOLUME'] + np.random.uniform(mhalvst),
'maxNUM_OPERANDS' :row['maxNUM_OPERANDS'] + np.random.uniform(mnumopst),
'maxNUM_OPERATORS' :row['maxNUM_OPERATORS'] + np.random.uniform(mnopst),
'maxNUM_UNIQUE_OPERANDS':row['maxNUM_UNIQUE_OPERANDS'] + np.random.uniform(muopst),
'maxNUM_UNIQUE_OPERATORS':row['maxNUM_UNIQUE_OPERATORS'] + np.random.uniform(muoprst),
'maxLOC_TOTAL':row['maxLOC_TOTAL'] + np.random.uniform(mloctst),
'avgLOC_BLANK' :row['avgLOC_BLANK'] + np.random.uniform(alocbst),
'avgBRANCH_COUNT' :row['avgBRANCH_COUNT'] + np.random.uniform(abcst),
'avgLOC_CODE_AND_COMMENT' :row['avgLOC_CODE_AND_COMMENT'] + np.random.uniform(aloccodest),
'avgLOC_COMMENTS' :row['avgLOC_COMMENTS'] + np.random.uniform(aloccommst),
'avgCYCLOMATIC_COMPLEXITY' :row['avgCYCLOMATIC_COMPLEXITY'] + np.random.uniform(acyclost),
'avgDESIGN_COMPLEXITY' :row['avgDESIGN_COMPLEXITY'] + np.random.uniform(adesigst),
'avgESSENTIAL_COMPLEXITY' :row['avgESSENTIAL_COMPLEXITY'] + np.random.uniform(aessest),
'avgLOC_EXECUTABLE' :row['avgLOC_EXECUTABLE'] + np.random.uniform(alocexest),
'avgHALSTEAD_CONTENT' :row['avgHALSTEAD_CONTENT'] + np.random.uniform(ahalconst),
'avgHALSTEAD_DIFFICULTY' :row['avgHALSTEAD_DIFFICULTY'] + np.random.uniform(ahaldifficst),
'avgHALSTEAD_EFFORT' :row['avgHALSTEAD_EFFORT'] + np.random.uniform(ahaleffortst),
'avgHALSTEAD_ERROR_EST' :row['avgHALSTEAD_ERROR_EST'] + np.random.uniform(ahalestst),
'avgHALSTEAD_LENGTH' :row['avgHALSTEAD_LENGTH'] + np.random.uniform(ahallenst),
'avgHALSTEAD_LEVEL' :row['avgHALSTEAD_LEVEL'] + np.random.uniform(ahallevst),
'avgHALSTEAD_PROG_TIME' :row['avgHALSTEAD_PROG_TIME'] + np.random.uniform(ahalprogst),
'avgHALSTEAD_VOLUME' :row['avgHALSTEAD_VOLUME'] + np.random.uniform(ahalvolst),
'avgNUM_OPERANDS' :row['avgNUM_OPERANDS'] + np.random.uniform(ahalnumost),
'avgNUM_OPERATORS' :row['avgNUM_OPERATORS'] + np.random.uniform(ahalnumopst),
'avgNUM_UNIQUE_OPERANDS' :row['avgNUM_UNIQUE_OPERANDS'] + np.random.uniform(anumoperanst),
'avgNUM_UNIQUE_OPERATORS' :row['avgNUM_UNIQUE_OPERATORS'] + np.random.uniform(anumuniquest),
'avgLOC_TOTAL' :row['avgLOC_TOTAL'] + np.random.uniform(aloctst),
'sumLOC_BLANK' :row['sumLOC_BLANK'] + np.random.uniform(alocbst),
'sumBRANCH_COUNT' :row['sumBRANCH_COUNT'] + np.random.uniform(sumbst),
'sumLOC_CODE_AND_COMMENT' :row['sumLOC_CODE_AND_COMMENT'] + np.random.uniform(sunlst),
'sumLOC_COMMENTS' :row['sumLOC_COMMENTS'] + np.random.uniform(sumlcommst),
'sumCYCLOMATIC_COMPLEXITY' :row['sumCYCLOMATIC_COMPLEXITY'] + np.random.uniform(sumcyclost),
'sumDESIGN_COMPLEXITY' :row['sumDESIGN_COMPLEXITY'] + np.random.uniform(sumdesist),
'sumESSENTIAL_COMPLEXITY' :row['sumESSENTIAL_COMPLEXITY'] + np.random.uniform(sumessst),
'sumLOC_EXECUTABLE' :row['sumLOC_EXECUTABLE'] + np.random.uniform(sumexst),
'sumHALSTEAD_CONTENT' :row['sumHALSTEAD_CONTENT'] + np.random.uniform(sumconst),
'sumHALSTEAD_DIFFICULTY' :row['sumHALSTEAD_DIFFICULTY'] + np.random.uniform(sumdiffest),
'sumHALSTEAD_EFFORT' :row['sumHALSTEAD_EFFORT'] + np.random.uniform(sumeffst),
'sumHALSTEAD_ERROR_EST' :row['sumHALSTEAD_ERROR_EST'] + np.random.uniform(sumerrost),
'sumHALSTEAD_LENGTH' :row['sumHALSTEAD_LENGTH'] + np.random.uniform(sumlengst),
'sumHALSTEAD_LEVEL' :row['sumHALSTEAD_LEVEL'] + np.random.uniform(sumlevst),
'sumHALSTEAD_PROG_TIME' :row['sumHALSTEAD_PROG_TIME'] + np.random.uniform(sumprogst),
'sumHALSTEAD_VOLUME' :row['sumHALSTEAD_VOLUME'] + np.random.uniform(sumvolust),
'sumNUM_OPERANDS' :row['sumNUM_OPERANDS'] + np.random.uniform(sumoperst),
'sumNUM_OPERATORS' :row['sumNUM_OPERATORS'] + np.random.uniform(sumoperandst),
'sumNUM_UNIQUE_OPERANDS' :row['sumNUM_UNIQUE_OPERANDS'] + np.random.uniform(sumuopst),
'sumNUM_UNIQUE_OPERATORS' :row['sumNUM_UNIQUE_OPERATORS'] + np.random.uniform(sumuoprst),
'sumLOC_TOTAL' :row['sumLOC_TOTAL'] + np.random.uniform(sumtolst),
'DEFECTT' :row['DEFECTT'] + np.random.uniform(deftst),
'DEFECT5' :row['DEFECT5'] + np.random.uniform(defest),
'NUMDEFECTS' :row['NUMDEFECTS'] + np.random.uniform(ndefst)
        }
        A.append(temp)
print(len(A), "dataset created")
df = pd.DataFrame(A)
df.to_csv("A1.csv")
The output is as follows
726 dataset created
AttributeError Traceback (most recent call
last)
in ()
1 print(len(A), "dataset created")
----> 2 df=pd. DataFrame(A)
3 df.to_csv("A1.csv")
5 frames
/usr/local/lib/python3.7/dist-packages/pandas/core/dtypes/cast.py in
maybe_convert_platform(values)
122 arr = values
123
--> 124 if arr.dtype == object:
125 arr = cast(np.ndarray, arr)
126 arr = lib.maybe_convert_objects(arr)
AttributeError: 'dict' object has no attribute 'dtype'
Any help is appreciated
Thank You!
If you use curly braces {} rather than square brackets [] in
A = []
then the problem
AttributeError: 'dict' object has no attribute 'dtype'
will be solved. Use this:
A = {}
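For what it's worth, the traceback points at pd.DataFrame(A): one likely cause is that A mixes a whole DataFrame (df, appended first) with plain dicts. A minimal sketch of a variant that avoids the error, using two of the question's column names on toy data and a hypothetical jitter bound of 0.1:

```python
import numpy as np
import pandas as pd

# Toy frame with two of the question's columns (values made up here)
df = pd.DataFrame({"DEPTH": [1.0, 2.0], "FAN_IN": [3.0, 4.0]})

# Start from the original rows as dicts instead of appending df itself,
# then append the augmented rows as dicts too.
A = df.to_dict("records")
for _ in range(5):
    for _, row in df.iterrows():
        A.append({
            "DEPTH": row["DEPTH"] + np.random.uniform(0.1),
            "FAN_IN": row["FAN_IN"] + np.random.uniform(0.1),
        })

# Now every element of A is a dict, so this constructs cleanly.
augmented = pd.DataFrame(A)
print(augmented.shape)  # (12, 2): 2 original rows + 5 * 2 augmented rows
```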

How to dynamically create a JSON string?

I am trying to create a JSON string that I can send in a PUT request but build it dynamically. For example, once I am done, I'd like the string to look like this:
{
"request-id": 1045058,
"db-connections":
[
{
"db-name":"Sales",
"table-name":"customer"
}
]
}
I'd like to use constants for the keys, for example, for request-id, I'd like to use CONST_REQUEST_ID. So the keys will be,
CONST_REQUEST_ID = "request-id"
CONST_DB_CONNECTIONS = "db_connections"
CONST_DB_NAME = "db-name"
CONST_TABLE_NAME = "table-name"
The values for the keys will be taken from various variables. For example we can take value "Sales" from a var called dbname with the value "Sales".
I have tried json.load and I am getting an exception.
Help would be appreciated please as I am a bit new to Python.
Create a normal Python dictionary, then convert it to JSON.
raw_data = {
    CONST_DB_NAME: getDbNameValue()
}
# ...
json_data = json.dumps(raw_data)
# Use json_data in your PUT request.
You can concatenate the string using + with variables.
CONST_REQUEST_ID = "request-id"
CONST_DB_CONNECTIONS = "db_connections"
CONST_DB_NAME = "db-name"
CONST_TABLE_NAME = "table-name"

request_id = "1045058"
db_name = "Sales"
table_name = 'customer'

json_string = '{' + \
    '"' + CONST_REQUEST_ID + '": ' + request_id \
    + ',' + \
    '"db-connections":' \
    + '[' \
    + '{' \
    + '"' + CONST_DB_NAME + '":"' + db_name + '",' \
    + '"' + CONST_TABLE_NAME + '":"' + table_name + '"' \
    + '}' \
    + ']' \
    + '}'
print(json_string)
And this is the result
python st_ans.py
{"request-id": 1045058,"db-connections":[{"db-name":"Sales","table-name":"customer"}]}

Pass a variable between several functions | Python 3

Thanks for reading this post. I want to make an advanced TicTacToe game with AI and other features. I need to pass the spots (s1-s9) variable between different functions. I have been researching for quite a while now and have not yet found an answer. Here is part of the code I need to execute:
def set_spots(s1, s2, s3, s4, s5, s6, s7, s8, s9):
    return s1, s2, s3, s4, s5, s6, s7, s8, s9

def print_spots():
    print('\n')
    print(str(s1) + ' | ' + str(s2) + ' | ' + str(s3))
    print('--+---+--')
    print(str(s4) + ' | ' + str(s5) + ' | ' + str(s6))
    print('--+---+--')
    print(str(s7) + ' | ' + str(s8) + ' | ' + str(s9))

def game_loop():
    set_spots(1,2,3,4,5,6,7,8,9)
    print_spots()

game_loop()
I want to be able to set the spots from any function, for example from a turnX function. Like if I had:
def turnx():  # This isn't in the code above, though
    # if stuff == other stuff (just an example):
    set_spots('X','O',3,4,5,6,7,8,9)
But the output is:
NameError: name 's1' is not defined
So basically, I need the program to ask the user where their x or o would be placed on the board (which you don't have to worry about) then have that value stored to be printed out. Like if I change 1 to X in the game, it needs to be stored so it can be printed out.
I believe you should try passing the variables from set_spots into print_spots:
def print_spots(s1, s2, s3, s4, s5, s6, s7, s8, s9):
    print('\n')
    print(str(s1) + ' | ' + str(s2) + ' | ' + str(s3))
    print('--+---+--')
    print(str(s4) + ' | ' + str(s5) + ' | ' + str(s6))
    print('--+---+--')
    print(str(s7) + ' | ' + str(s8) + ' | ' + str(s9))

def game_loop():
    print_spots(1,2,3,4,5,6,7,8,9)

game_loop()
If that doesn't work, then I'm not sure.
You can do this without the unnecessary set_spots() function.
All you need is a list:
def turnx(s):
    if 1 == 1:  # (a condition)
        s = ['X','O',3,4,5,6,7,8,9]
    return s

def print_spots(s):
    print('\n')
    print(str(s[0]) + ' | ' + str(s[1]) + ' | ' + str(s[2]))
    print('--+---+--')
    print(str(s[3]) + ' | ' + str(s[4]) + ' | ' + str(s[5]))
    print('--+---+--')
    print(str(s[6]) + ' | ' + str(s[7]) + ' | ' + str(s[8]))

def game_loop():
    spots = [1,2,3,4,5,6,7,8,9]  # Use a list
    print_spots(spots)
    spots = turnx(spots)
    print_spots(spots)

game_loop()
This outputs:
1 | 2 | 3
--+---+--
4 | 5 | 6
--+---+--
7 | 8 | 9
X | O | 3
--+---+--
4 | 5 | 6
--+---+--
7 | 8 | 9
Do yourself a favor and represent the spots as a list. Then store that list in a variable when you call set_spots and pass it on to print_spots:
def set_spots(s1, s2, s3, s4, s5, s6, s7, s8, s9):
    return [s1, s2, s3, s4, s5, s6, s7, s8, s9]

def print_spots(spots):
    s1, s2, s3, s4, s5, s6, s7, s8, s9 = spots
    print('\n')
    print(str(s1) + ' | ' + str(s2) + ' | ' + str(s3))
    print('--+---+--')
    print(str(s4) + ' | ' + str(s5) + ' | ' + str(s6))
    print('--+---+--')
    print(str(s7) + ' | ' + str(s8) + ' | ' + str(s9))

def game_loop():
    spots = set_spots(1,2,3,4,5,6,7,8,9)
    print_spots(spots)

game_loop()
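A related point, worth noting because it explains the original NameError: names assigned inside set_spots are local to that function, so print_spots can never see them. Since lists are mutable, a turn function can also update the shared board in place instead of returning a new one (a small sketch, not taken from the answers above):

```python
def turnx(spots):
    # Lists are mutable, so in-place updates are visible to the caller
    # without returning anything.
    spots[0], spots[1] = 'X', 'O'

spots = [1, 2, 3, 4, 5, 6, 7, 8, 9]
turnx(spots)
print(spots)  # ['X', 'O', 3, 4, 5, 6, 7, 8, 9]
```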

Querying a shading node Maya Python

I am currently having a problem where I want to query the 'inputX' of a multiplyDivide node in Maya and put the queried number into the 'inputX' of another multiplyDivide node.
The script makes a stretchy IK setup for an arm: using a distanceBetween node between the shoulder and the wrist (at a certain point, which is what I want to query), the bones stretch. So obviously, I don't want to connect the two attributes together.
def stretchyIK(firstJointStore, lastJointStore, side, limb):
    GlobalMoveRig = cmds.rename('GlobalMove_Grp_01')
    locFirstJoint = cmds.spaceLocator(n='Loc_' + firstJointStore + '_0#')
    locLastJoint = cmds.spaceLocator(n='Loc_' + lastJointStore + '_0#')
    pointLoc1 = cmds.pointConstraint(side + '_Fk_' + firstJointStore + suffix, locFirstJoint)
    pointLoc2 = cmds.pointConstraint(side + '_Fk_' + lastJointStore + suffix, locLastJoint)
    cmds.delete(pointLoc1, pointLoc2)
    cmds.pointConstraint(side + '_FK_' + firstJointStore + suffix, locFirstJoint)
    cmds.pointConstraint(ikCtr, locLastJoint)
    cmds.parent(locFirstJoint, locLastJoint, 'Do_Not_Touch')

    # Creating Nodes for Stretchy IK
    IkStretch_DisNode = cmds.shadingNode('distanceBetween', asUtility=True, n='DistBet_IkStretch_' + side + limb + '_#')
    cmds.connectAttr(locFirstJoint[0] + '.translate', IkStretch_DisNode + '.point1')
    cmds.connectAttr(locLastJoint[0] + '.translate', IkStretch_DisNode + '.point2')
    IkStretch_DivNode = cmds.shadingNode('multiplyDivide', asUtility=True, n='Div_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_DivNode + '.operation', 2)
    input = cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_DivNode + '.input1.input1X')  ######## HELP NEEDED HERE
    cmds.setAttr(ikCtr + '.translateX', 2)
    IkStretch_MultNode = cmds.shadingNode('multiplyDivide', asUtility=True, n='Mult_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_MultNode + '.input1X', IkStretch_DivNode + '.input1.input1X')  # WAIT FOR PAUL
    cmds.connectAttr(GlobalMoveRig + '.scaleY', IkStretch_MultNode + '.input2X')
    cmds.connectAttr(IkStretch_MultNode + '.outputX', IkStretch_DivNode + '.input2X')
    IkStretch_Cond_Equ = cmds.shadingNode('condition', asUtility=True, n='Cond_Equ_IkStretch_' + side + limb + '_#')
    IkStretch_Cond_GrtEqu = cmds.shadingNode('condition', asUtility=True, n='Cond_GrtEqu_IkStretch_' + side + limb + '_#')
    cmds.setAttr(IkStretch_Cond_GrtEqu + '.operation', 3)
    cmds.connectAttr(ikCtr + '.Enable', IkStretch_Cond_Equ + '.firstTerm')
    cmds.setAttr(IkStretch_Cond_Equ + '.secondTerm', 1)
    cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_Cond_GrtEqu + '.firstTerm')
    cmds.connectAttr(IkStretch_MultNode + '.outputX', IkStretch_Cond_GrtEqu + '.secondTerm')
    cmds.connectAttr(IkStretch_DivNode + '.outputX', IkStretch_Cond_GrtEqu + '.colorIfTrueR')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', IkStretch_Cond_Equ + '.colorIfTrueR')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', side + '_Ik_' + secondJointStore + suffix + '.scaleX')
    cmds.connectAttr(IkStretch_Cond_GrtEqu + '.outColorR', side + '_Ik_' + firstJointStore + suffix + '.scaleX')
Yes, your error makes perfect sense... The attribute you're looking for is actually just '.input1X' rather than '.input1.input1X'.
I know, that isn't very clear, but you'll know in the future. An easy way of figuring out stuff like this, by the way, is manually connecting stuff in Maya and seeing the MEL output in the script editor. You'll get the real deal every time, and translating that stuff to Python afterwards is quick.
So:
cmds.connectAttr(IkStretch_DisNode + '.distance', IkStretch_DivNode + '.input1X')
By the way, I'm not sure why you were assigning the result to input. I admit I'm not sure what that would return, but I don't see how it could be any useful!
Additionally: To answer your direct question, you can use getAttr to query the value.
cmds.setAttr(
    IkStretch_MultNode + '.input1X',
    cmds.getAttr(IkStretch_DivNode + '.input1X')
)
In my case, the variable being assigned as the new attribute value was not being evaluated properly: setAttr interpreted the variable's value as text, even though the value was input as a float. So I simply cast the variable to float within the command:
cmds.setAttr(Node + '.input1X', float(variable))

How to get pyparser to work in a particular form

Sorry for the sorry title. I could not think of anything better
I am trying to implement a DSL with pyparsing that has the following requirements:
1. Variables: all of them begin with v_
2. Unary operators: +, -
3. Binary operators: +, -, *, /, %
4. Constant numbers
5. Functions, which behave like normal functions when they have just one variable
6. Functions need to distribute over binary operations: foo(v_1+v_2) = foo(v_1) + foo(v_2), foo(bar(10*v_6)) = foo(bar(10)) * foo(bar(v_6)). It should be the case for any binary operation.
I am able to get 1-5 working.
This is the code I have so far
from pyparsing import *

exprstack = []

#~ #traceParseAction
def pushFirst(tokens):
    exprstack.insert(0, tokens[0])

# define grammar
point = Literal( '.' )
plusorminus = Literal( '+' ) | Literal( '-' )
number = Word( nums )
integer = Combine( Optional( plusorminus ) + number )
floatnumber = Combine( integer +
                       Optional( point + Optional( number ) ) +
                       Optional( integer ) )
ident = Combine("v_" + Word(nums))
plus = Literal( "+" )
minus = Literal( "-" )
mult = Literal( "*" )
div = Literal( "/" )
cent = Literal( "%" )
lpar = Literal( "(" ).suppress()
rpar = Literal( ")" ).suppress()
addop = plus | minus
multop = mult | div | cent
expop = Literal( "^" )
band = Literal( "#" )

# define expr as Forward, since we will reference it in atom
expr = Forward()
fn = Word( alphas )
atom = ( ( floatnumber | integer | ident | ( fn + lpar + expr + rpar ) ).setParseAction(pushFirst) |
         ( lpar + expr.suppress() + rpar ) )
factor = Forward()
factor << atom + ( ( band + factor ).setParseAction( pushFirst ) | ZeroOrMore( ( expop + factor ).setParseAction( pushFirst ) ) )
term = factor + ZeroOrMore( ( multop + factor ).setParseAction( pushFirst ) )
expr << term + ZeroOrMore( ( addop + term ).setParseAction( pushFirst ) )
print(expr)
bnf = expr
pattern = bnf + StringEnd()

def test(s):
    del exprstack[:]
    bnf.parseString(s, parseAll=True)
    print(exprstack)

test("avg(+10)")
test("v_1+8")
test("avg(v_1+10)+10")
Here is what I want. My functions are of this type:
foo(v_1+v_2) = foo(v_1) + foo(v_2)
The same behaviour is expected for any other binary operation as well. I have no idea how to make the parser do this automatically.
Break out the function call as a separate sub expression:
function_call = fn + lpar + expr + rpar
Then add a parse action to function_call that pops the operators and operands from exprstack and pushes them back onto the stack:
- if an operand, push the operand and then the function
- if an operator, push the operator
Since you are only doing binary operations, you might be better off trying a simpler approach first:
expr = Forward()
identifier = Word(alphas+'_', alphanums+'_')
function_call = Group(identifier + LPAR + Group(expr) + RPAR)
unop = oneOf("+ -")
binop = oneOf("+ - * / %")
operand = Group(Optional(unop) + (function_call | number | identifier))
binexpr = operand + binop + operand
expr << (binexpr | operand)
bnf = expr
This gives you a simpler structure to work with, without having to mess with exprstack.
def test(s):
    exprtokens = bnf.parseString(s, parseAll=True)
    print(exprtokens)
test("10")
test("10+20")
test("avg(10)")
test("avg(+10)")
test("column_1+8")
test("avg(column_1+10)+10")
Gives:
[['10']]
[['10'], '+', ['20']]
[[['avg', [['10']]]]]
[[['avg', [['+', '10']]]]]
[['column_1'], '+', ['8']]
[[['avg', [['column_1'], '+', ['10']]]], '+', ['10']]
You want to expand fn(a op b) to fn(a) op fn(b), but fn(a) should be left alone, so you need to test on the length of the parsed expression argument:
def distribute_function(tokens):
    # unpack function name and arguments
    fname, args = tokens[0]
    # if args contains an expression, expand it
    if len(args) > 1:
        ret = ParseResults([])
        for i, a in enumerate(args):
            if i % 2 == 0:
                # even args are operands to be wrapped in the function
                ret += ParseResults([ParseResults([fname, ParseResults([a])])])
            else:
                # odd args are operators, just add them to the results
                ret += ParseResults([a])
        return ParseResults([ret])

function_call.setParseAction(distribute_function)
Now your calls to test will look like:
[['10']]
[['10'], '+', ['20']]
[[['avg', [['10']]]]]
[[['avg', [['+', '10']]]]]
[['column_1'], '+', ['8']]
[[[['avg', [['column_1']]], '+', ['avg', [['10']]]]], '+', ['10']]
This should even work recursively with a call like fna(fnb(3+2)+fnc(4+9)).
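For reference, stitching the excerpts above into one runnable script (with the LPAR, RPAR, and number definitions that the excerpts assume filled in) also lets you check the recursive case:

```python
import pyparsing as pp

LPAR, RPAR = map(pp.Suppress, "()")
number = pp.Word(pp.nums)
identifier = pp.Word(pp.alphas + "_", pp.alphanums + "_")

expr = pp.Forward()
function_call = pp.Group(identifier + LPAR + pp.Group(expr) + RPAR)
unop = pp.oneOf("+ -")
binop = pp.oneOf("+ - * / %")
operand = pp.Group(pp.Optional(unop) + (function_call | number | identifier))
binexpr = operand + binop + operand
expr <<= (binexpr | operand)

def distribute_function(tokens):
    fname, args = tokens[0]
    if len(args) > 1:
        ret = pp.ParseResults([])
        for i, a in enumerate(args):
            if i % 2 == 0:
                # even positions are operands: wrap each one in the function
                ret += pp.ParseResults([pp.ParseResults([fname, pp.ParseResults([a])])])
            else:
                # odd positions are operators: pass them through unchanged
                ret += pp.ParseResults([a])
        return pp.ParseResults([ret])

function_call.setParseAction(distribute_function)

print(expr.parseString("avg(v_1+10)+10", parseAll=True))
print(expr.parseString("fna(fnb(3+2)+fnc(4+9))", parseAll=True))
```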
