Testing if an object is dependent on another object - python

Is there a way to check if an object is dependent via parenting, constraints, or connections to another object? I would like to do this check prior to parenting an object to see if it would cause dependency cycles or not.
I remember 3DsMax had a command to do this exactly. I checked OpenMaya but couldn't find anything. There is cmds.cycleCheck, but this only works when there currently is a cycle, which would be too late for me to use.
The tricky thing is that these 2 objects could be anywhere in the scene hierarchy, so they may or may not have direct parenting relationships.
EDIT
It's relatively easy to check if the hierarchy will cause any issues:
children = cmds.listRelatives(obj1, ad = True, f = True) or []
if obj2 in children:
    print("Can't parent to its own children!")
Checking for constraints or connections is another story though.

depending on what you're looking for, cmds.listHistory or cmds.listConnections will tell you what's coming into a given node. listHistory is limited to a subset of possible connections that drive shape node changes, so if you're interested in constraints you'll need to traverse the listConnections for your node and see what's upstream. The list can be arbitrarily large because it may include lots of hidden nodes, like unit translations, group parts and so on, that you probably don't want to care about.
Here's a simple way to troll the incoming connections of a node and get a tree of incoming connections:
def input_tree(root_node):
    visited = set()  # so we don't get into loops

    # recursively extract input connections
    def upstream(node, depth=0):
        if node not in visited:
            visited.add(node)
            children = cmds.listConnections(node, s=True, d=False)
            if children:
                grandparents = ()
                for history_node in children:
                    grandparents += tuple(d for d in upstream(history_node, depth + 1))
                yield node, tuple(g for g in grandparents if len(g))

    # unfold the recursive generation of the tree
    tree_iter = tuple(i for i in upstream(root_node))
    # return the grandparent array of the first node
    return tree_iter[0][-1]
Which should produce a nested list of input connections like
((u'pCube1_parentConstraint1',
((u'pSphere1',
((u'pSphere1_orientConstraint1', ()),
(u'pSphere1_scaleConstraint1', ()))),)),
(u'pCube1_scaleConstraint1', ()))
in which each level contains a list of inputs. You can then troll through that to see if your proposed change includes that item.
This won't tell you if the connection will cause a real cycle, however: that's dependent on the data flow within the different nodes. Once you identify the possible cycle you can work your way back to see if the cycle is real (two items affecting each other's translation, for example) or harmless (I affect your rotation and you affect my translation).
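The pre-check itself doesn't need Maya: with a plain dictionary mapping each node to its incoming connections (the node names below are hypothetical, standing in for what listConnections would return), a proposed connection src -> dst creates a cycle exactly when dst is already upstream of src. A minimal sketch:

```python
def upstream_nodes(graph, node, visited=None):
    """Collect every node reachable by walking incoming connections."""
    if visited is None:
        visited = set()
    for source in graph.get(node, ()):
        if source not in visited:
            visited.add(source)
            upstream_nodes(graph, source, visited)
    return visited

def would_cycle(graph, src, dst):
    """True if connecting src -> dst would close a loop."""
    return dst == src or dst in upstream_nodes(graph, src)

# incoming-connection map: each key lists the nodes that feed it
graph = {"pCube1": ["pCube1_parentConstraint1"],
         "pCube1_parentConstraint1": ["pSphere1"]}

# pSphere1 already drives pCube1, so driving pSphere1 from pCube1 would cycle
print(would_cycle(graph, "pCube1", "pSphere1"))  # True
```

In Maya the `graph` dict would be built from cmds.listConnections output, but the reachability test stays the same.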

This is not the most elegant approach, but it's a quick and dirty way that seems to be working ok so far. The idea is that if a cycle happens, then just undo the operation and stop the rest of the script. Testing with a rig, it doesn't matter how complex the connections are, it will catch it.
# Class to use to undo operations
class UndoStack():
    def __init__(self, inputName=''):
        self.name = inputName

    def __enter__(self):
        cmds.undoInfo(openChunk=True, chunkName=self.name, length=300)

    def __exit__(self, type, value, traceback):
        cmds.undoInfo(closeChunk=True)


# Create a sphere and a box
mySphere = cmds.polySphere()[0]
myBox = cmds.polyCube()[0]

# Parent box to the sphere
myBox = cmds.parent(myBox, mySphere)[0]

# Set constraint from sphere to box (will cause cycle)
with UndoStack("Parent box"):
    cmds.parentConstraint(myBox, mySphere)

# If there's a cycle, undo it
hasCycle = cmds.cycleCheck([mySphere, myBox])
if hasCycle:
    cmds.undo()
    cmds.warning("Can't do this operation, a dependency cycle has occurred!")

Related

Graphs - what is the functional difference between "pure" Dijkstra and my hybrid BFS-Dijkstra solution?

I am (or at least was supposed to be) making a Dijkstra implementation, but upon reviewing what I've done it looks more like a breadth-first search. But I wonder if I have come across a way to kind of do both things at the same time?
Essentially by using an OOP approach I can perform a BFS that also preserves knowledge of the shortest weighted path, thereby eliminating the need to determine during the search process whether some node has a lower cost than its alternatives like Dijkstra does.
I've searched for clues as to why a more "pure" implementation of Dijkstra should be faster than this, most prominently the answers in these two threads:
What is difference between BFS and Dijkstra's algorithms when looking for shortest path?
Why use Dijkstra's Algorithm if Breadth First Search (BFS) can do the same thing faster?
I haven't seen anything there that answers what I'm wondering, though.
My approach is essentially this:
Start at whatever node we require, this becomes a "path"
Step out to each adjacent node, and each of these steps create a new path that we store in a path collection
Every path contains an ordered list of which nodes it has visited from the starting node all the way to wherever it is, as well as the associated weight/cost
Mark the "parent" path as closed (stop iterating on it)
Select the first open path in the path collection and repeat the above
When there are no more open paths, delete all paths that didn't make it to the destination node
Compare the lengths and return the path with the lowest weight
I struggle to see where the performance difference between this and pure Dijkstra would come from. Dijkstra would still have to iterate over all possible paths, right? So I perform exactly the same number of steps with this implementation as if I change returnNextOpenPath() (serving the function of a normal queue, just not implemented as one) to a more priority queue-looking returnShortestOpenPath().
There's presumably a marginal performance penalty at the end where I examine all the collected, non-destroyed paths before I can print a result instead of just popping from a queue - but aside from that, am I just not seeing where else my implementation would also be worse than "pure" Dijkstra?
I don't think it matters, but in case it does: The actual code I have for this is gigantic so my first instinct is to hold off on posting it for now, but here's a stripped down version of it.
from copy import deepcopy
from typing import Dict

class DijkstraNode:
    def getNeighbors(self):
        # returns a Dict of all adjacent nodes and their cost
        ...

    def weightedDistanceToNeighbor(self, neighbor):
        # returns the cost associated with traversing from current node to the chosen neighbor node
        return int(self.getNeighbors()[neighbor])


class DijkstraEdge:
    def __init__(self, startingNode: DijkstraNode, destinationNode: DijkstraNode):
        self.start = startingNode
        self.goal = destinationNode

    def weightedLength(self):
        return self.start.weightedDistanceToNeighbor(self.goal)


class DijkstraPath:
    def __init__(self, startingNode: DijkstraNode, destinationNode: DijkstraNode):
        self.visitedNodes: list[DijkstraNode] = [startingNode]
        self.previousNode = startingNode
        self.goal = destinationNode
        self.edges: list[DijkstraEdge] = []

    def addNode(self, node: DijkstraNode):
        # if the node we're inspecting is new, add it to all the lists
        if node not in self.visitedNodes:
            self.edges.append(DijkstraEdge(self.previousNode, node))
            self.visitedNodes.append(node)
            self.previousNode = node
        # if the node we just added above is our destination, stop iterating on this path
        if node == self.goal:
            self.closed = True
            self.valid = True


class DijkstraTree:
    def bruteforceAllPaths(self, startingNode: DijkstraNode, destinationNode: DijkstraNode):
        self.pathlist = []
        self.pathlist.append(DijkstraPath(startingNode, destinationNode))
        cn: DijkstraNode
        # iterate over all open paths
        while self.hasOpenPaths():
            currentPath = self.returnNextOpenPath()
            neighbors: Dict = currentPath.lastNode().getNeighbors()
            for c in neighbors:
                cn = self.returnNode(c)
                # copy the current path
                tmpPath = deepcopy(currentPath)
                # add the child node onto the newly made path
                tmpPath.addNode(cn)
                # add the new path to pathlist
                if tmpPath.isOpen() or tmpPath.isValid():
                    self.pathlist.append(tmpPath)
            # then we close the parent path
            currentPath.close()
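For contrast, here is a minimal sketch of the "pure" version with a priority queue (Python's heapq), run on a made-up adjacency dict. The performance difference comes from the heap always handing back the cheapest frontier entry: each node is settled once and the search can stop the first time the goal is popped, instead of copying and extending every possible path.

```python
import heapq

def dijkstra(graph, start, goal):
    """graph: {node: {neighbor: weight}}; returns (cost, path) or None."""
    heap = [(0, [start])]          # entries are (cost so far, path taken)
    settled = set()
    while heap:
        cost, path = heapq.heappop(heap)
        node = path[-1]
        if node == goal:
            return cost, path      # first pop of the goal is the cheapest route
        if node in settled:
            continue               # a cheaper route already settled this node
        settled.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in settled:
                heapq.heappush(heap, (cost + weight, path + [neighbor]))
    return None

graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5},
         "C": {"D": 1}, "D": {}}
print(dijkstra(graph, "A", "D"))  # (3, ['A', 'B', 'C', 'D'])
```

The brute-force approach keeps every path alive until the end; the heap version discards a node's worse routes as soon as a cheaper one is settled, which is where the asymptotic gap opens up.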

While traversing an ancestor tree starting from two target nodes, can I mark nodes I've seen in recursive calls to find their lowest common ancestor?

I'm solving a problem where we're given a tree, its root and two target nodes (descendantOne and descendantTwo) within the tree.
I am asked to return the lowest common ancestor of the two target nodes.
However, we are also told that our tree is an instance of AncestralTree, which is given by:
class AncestralTree:
def __init__(self, name):
self.name = name
self.ancestor = None
i.e. for every node in the tree, we only have pointers going upwards to the parents (as opposed to a normal tree which has pointer from parent to child!)
My idea for solving this problem is to start from both target nodes and move upwards, marking each node that we visit. At some point we are bound to visit a node twice, and the first time we do, that node is our lowest common ancestor!
Here is my code:
def getYoungestCommonAncestor(topAncestor, descendantOne, descendantTwo):
    lowestCommonAncestor = None

    def checkAncestors(topAncestor, descendantOne, descendantTwo, descendantOneSeen, descendantTwoSeen):
        if descendantOneSeen and descendantTwoSeen:
            return descendantOne
        else:
            return None

    while not lowestCommonAncestor:
        **lowestCommonAncestor = checkAncestors(topAncestor, descendantOne.ancestor, descendantTwo, True, False)
        if lowestCommonAncestor:
            break
        **lowestCommonAncestor = checkAncestors(topAncestor, descendantOne, descendantTwo.ancestor, False, True)
        if descendantOne.ancestor == topAncestor:
            pass
        else:
            descendantOne = descendantOne.ancestor
        if descendantTwo.ancestor == topAncestor:
            pass
        else:
            descendantTwo = descendantTwo.ancestor
    return lowestCommonAncestor
I have put stars ** next to the two recursion calls in my code, because I believe this is the issue.
As I run the calls (say we have already seen descendantOne), the call for descendantTwo passes descendantOneSeen as False again, so descendantOneSeen and descendantTwoSeen are never both true at the same time.
And when I run the above code, I do get an infinite loop, and I do see why.
Is there any way to amend my code to achieve what I want WITHOUT using global variables?
Indeed, it will not work like that, as descendantOneSeen and descendantTwoSeen are never both true. But even if you fixed that part of the logic, the two nodes may be at very different distances from their lowest common ancestor... so you need a different algorithm.
One way is to walk to the top of the tree in tandem like you did, but when a reference reaches the top, continue it from the other starting node. Once both references have made this jump, they will have visited exactly the same number of nodes at the moment they meet each other at the lowest common ancestor.
This leads to a very simple algorithm:
def getYoungestCommonAncestor(topAncestor, descendantOne, descendantTwo):
    nodeOne = descendantOne
    nodeTwo = descendantTwo
    while nodeOne is not nodeTwo:
        nodeOne = descendantTwo if nodeOne is topAncestor else nodeOne.ancestor
        nodeTwo = descendantOne if nodeTwo is topAncestor else nodeTwo.ancestor
    return nodeOne
This may look dodgy, as it looks like a matter of luck that these node references will ever be equal. But both nodeOne and nodeTwo references will walk from both starting points (descendantOne and descendantTwo) -- it is just the order in which they do this that is inverted. But that still means they will visit the same number of nodes by the time they visit the common ancestor the second time.
Here is your example graph, where the two starting nodes are C and I. I have removed some of the nodes, as they are unreachable from these two nodes, so they don't play a role:
So the idea is that we start the traversal at nodes I and C. By applying the rule that when a traversal reaches the root, it continues from the other starting node, we see that from I we will first follow the red edges and then the green one, while the path that starts from C will first follow the green edge and then the red edges.
From this it is clear that these two traversals will take an equal number of steps to visit both the green and the red edges (just in a different order) and so they will reach node A at the same time when they each visit it for the second time.
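As a sanity check, here is the algorithm run on a small hand-wired tree (the class and function are repeated from above so the snippet is self-contained; the tree shape below is made up for illustration, not the question's graph):

```python
class AncestralTree:
    def __init__(self, name):
        self.name = name
        self.ancestor = None

def getYoungestCommonAncestor(topAncestor, descendantOne, descendantTwo):
    nodeOne = descendantOne
    nodeTwo = descendantTwo
    while nodeOne is not nodeTwo:
        nodeOne = descendantTwo if nodeOne is topAncestor else nodeOne.ancestor
        nodeTwo = descendantOne if nodeTwo is topAncestor else nodeTwo.ancestor
    return nodeOne

# A is the root; B and C are its children; D hangs off B
nodes = {name: AncestralTree(name) for name in "ABCD"}
nodes["B"].ancestor = nodes["A"]
nodes["C"].ancestor = nodes["A"]
nodes["D"].ancestor = nodes["B"]

print(getYoungestCommonAncestor(nodes["A"], nodes["D"], nodes["C"]).name)  # A
print(getYoungestCommonAncestor(nodes["A"], nodes["D"], nodes["B"]).name)  # B
```

Note the second call: when one node is an ancestor of the other, the swapped traversals still meet at that ancestor, with no special-casing needed.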

An efficient way to implement circular priority queue?

I have designed a circular priority queue, but it took me a while because it is so conditional and its time complexity is a bit high.
I implemented it using a list, but I need a more efficient circular priority queue implementation.
I'll illustrate my queue structure; it may be helpful for someone looking for code to understand circular priority queues.
class PriorityQueue:
    def __init__(self, n, key=None):
        if key is None:
            key = lambda x: x
        self.maxsize = n
        self.key = key
        self.arr = list(range(self.maxsize))
        self.rear = -1
        self.front = 0
        self.nelements = 0

    def isPQueueful(self):
        return self.nelements == self.maxsize

    def isPQueueempty(self):
        return self.nelements == 0

    def insert(self, item):
        if not self.isPQueueful():
            pos = self.rear + 1
            scope = range(self.rear - self.maxsize, self.front - self.maxsize - 1, -1)
            if self.rear == 0 and self.rear < self.front:
                scope = range(0, self.front - self.maxsize - 1, -1)
            for i in scope:
                if self.key(item) > self.key(self.arr[i]):
                    self.arr[i + 1] = self.arr[i]
                    pos = i
                else:
                    break
            self.rear += 1
            if self.rear == self.maxsize:
                self.rear = 0
            if pos == self.maxsize:
                pos = 0
            self.arr[pos] = item
            self.nelements += 1
        else:
            print("Priority Queue is full")

    def remove(self):
        revalue = None
        if not self.isPQueueempty():
            revalue = self.arr[self.front]
            self.front += 1
            if self.front == self.maxsize:
                self.front = 0
            self.nelements -= 1
        else:
            print("Priority Queue is empty")
        return revalue
I would really appreciate it if someone could say whether what I designed is suitable for use in production code. I suspect it is not very efficient.
If so, can you point me to how to design an efficient circular priority queue?
So, think of the interface and implementation separately.
The interface to a circular priority queue will make you think that the structure is a circular queue. It has a "highest" priority head and the next one is slightly lower, and then you get to the end, and the next one is the head again.
The methods you write need to act that way.
But the implementation doesn't actually need to be any kind of queue, list, array or linear structure.
For the implementation, you are trying to maintain a set of nodes that are always sorted by priority. For that, it would be better to use some kind of balanced tree (for example a red-black tree).
You hide that detail below your interface -- when you get to the end, you just reset yourself to the beginning -- your interfaces makes it look circular.
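In Python specifically, a heap gets you most of the way without any hand-rolled element shifting. Here is a sketch of a bounded max-priority queue backed by the standard-library heapq module (the class and method names are my own, not a drop-in replacement for the code above): insert and remove are O(log n), versus the O(n) shifting in the list version.

```python
import heapq
import itertools

class HeapPriorityQueue:
    """Bounded max-priority queue backed by a binary heap."""
    def __init__(self, maxsize, key=lambda x: x):
        self.maxsize = maxsize
        self.key = key
        self._heap = []                    # entries: (negated key, tiebreak, item)
        self._counter = itertools.count()  # preserves insertion order on equal keys

    def insert(self, item):
        if len(self._heap) == self.maxsize:
            raise OverflowError("Priority Queue is full")
        # negate the key so Python's min-heap pops the largest key first
        heapq.heappush(self._heap, (-self.key(item), next(self._counter), item))

    def remove(self):
        if not self._heap:
            raise IndexError("Priority Queue is empty")
        return heapq.heappop(self._heap)[-1]

pq = HeapPriorityQueue(5)
for value in (3, 1, 4, 1, 5):
    pq.insert(value)
print([pq.remove() for _ in range(5)])  # [5, 4, 3, 1, 1]
```

The "circular" behavior from the interface point of view is then just: when remove empties the heap, start over from whatever you consider the head.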

Bi-Directional Binary Search Trees?

I have tried to implement a BST. As of now it only adds keys according to the BST property (left: lower, right: bigger), though I implemented it in a different way.
This is how I think BST's are supposed to be
Single Direction BST
How I have implemented my BST
Bi-Directional BST
The question is whether or not is it the correct implementation of BST?
(The way I see it, in double-sided BSTs it would be easier to search, delete and insert.)
import pdb


class Node:
    def __init__(self, value):
        self.value = value
        self.parent = None
        self.left_child = None
        self.right_child = None


class BST:
    def __init__(self, root=None):
        self.root = root

    def add(self, value):
        # pdb.set_trace()
        new_node = Node(value)
        self.tp = self.root
        if self.root is not None:
            while True:
                if self.tp.parent is None:
                    break
                else:
                    self.tp = self.tp.parent
            # the self.tp variable always is at the first node.
            while True:
                if new_node.value >= self.tp.value:
                    if self.tp.right_child is None:
                        new_node.parent = self.tp
                        self.tp.right_child = new_node
                        break
                    elif self.tp.right_child is not None:
                        self.tp = self.tp.right_child
                        print("Going Down Right")
                        print(new_node.value)
                elif new_node.value < self.tp.value:
                    if self.tp.left_child is None:
                        new_node.parent = self.tp
                        self.tp.left_child = new_node
                        break
                    elif self.tp.left_child is not None:
                        self.tp = self.tp.left_child
                        print("Going Down Left")
                        print(new_node.value)
        self.root = new_node


newBST = BST()
newBST.add(9)
newBST.add(10)
newBST.add(2)
newBST.add(15)
newBST.add(14)
newBST.add(1)
newBST.add(3)
Edit: I have used while loops instead of recursion. Could someone please elaborate on why using while loops instead of recursion is a bad idea in this particular case and in general?
BSTs with parent links are used occasionally.
The benefit is not that the links make it easier to search or update (they don't really), but that you can insert before or after any given node, or traverse forward or backward from that node, without having to search from the root.
It becomes convenient to use a pointer to a node to represent a position in the tree, instead of a full path, even when the tree contains duplicates, and that position remains valid as updates or deletions are performed elsewhere.
In an abstract data type, these properties make it easy, for example, to provide iterators that aren't invalidated by mutations.
You haven't described how you gain anything with the parent pointer. An algorithm that cares about rewinding to the parent node will do so by crawling back up the call stack.
I've been there -- in my data structures class, I implemented my stuff with bi-directional pointers. When we got to binary trees, those pointers ceased to be useful. Proper use of recursion replaces the need to follow a link back up the tree.
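To make that last point concrete, here is a minimal recursive BST (a sketch, not a rewrite of the code above) where the call stack does the job of the parent pointers: insert returns the subtree root, and the in-order traversal walks back up for free.

```python
class Node:
    def __init__(self, value):
        self.value = value
        self.left_child = None
        self.right_child = None

def insert(node, value):
    """Return the (possibly new) subtree root after inserting value."""
    if node is None:
        return Node(value)
    if value >= node.value:
        node.right_child = insert(node.right_child, value)
    else:
        node.left_child = insert(node.left_child, value)
    return node

def inorder(node):
    """Yield values in sorted order; recursion handles the way back up."""
    if node is not None:
        yield from inorder(node.left_child)
        yield node.value
        yield from inorder(node.right_child)

root = None
for v in (9, 10, 2, 15, 14, 1, 3):
    root = insert(root, v)
print(list(inorder(root)))  # [1, 2, 3, 9, 10, 14, 15]
```

No node ever needs to know its parent: each recursive call returns control to exactly the frame that holds the parent.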

Recursive Depth First Search in Python

So I've been trying to implement recursive depth-first search in Python. In my program I'm aiming to return the parent array (the parents of the vertices in the graph).
def dfs_tree(graph, start):
    a_list = graph.adjacency_list
    parent = [None]*len(a_list)
    state = [False]*len(a_list)
    state[start] = True
    for item in a_list[start]:
        if state[item[0]] == False:
            parent[item[0]] = start
            dfs_tree(graph, item[0])
    state[start] = True
    return parent
It says that the maximum recursion depth is reached. How do I fix this?
The main reason for such behavior, as I can see, is the re-initialization of the state (and parent) arrays on each recursive call. You need to initialize them only once per traversal. The common approach is to add these arrays to the function's argument list, default them to None, and replace them with lists on the first call:
def dfs_tree(graph, start, parent=None, state=None):
    a_list = graph.adjacency_list
    if parent is None:
        parent = [None]*len(a_list)
    if state is None:
        state = [False]*len(a_list)
    state[start] = True
    for item in a_list[start]:
        if state[item[0]] == False:
            parent[item[0]] = start
            dfs_tree(graph, item[0], parent, state)
    state[start] = True
    return parent
After that change the algorithm seems to be correct. But the problem mentioned by Esdes is still there. If you want to correctly traverse graphs of size n you need to do sys.setrecursionlimit(n + 10). The 10 there stands for the maximum number of nested function calls outside dfs_tree, including hidden initial calls; increase it if you need to.
But that's not a final solution. The Python interpreter has a limit on stack memory itself, which differs depending on the OS and some settings. So if you increase the recursion limit above some threshold you will start to get "segmentation fault" errors when the interpreter's memory limit is exceeded. I don't remember the approximate value of this threshold, but it looks to be about 3000 calls; I don't have an interpreter at hand to check.
In that case you may want to consider re-implementing your algorithm as a non-recursive version using a stack, or try to change the Python interpreter's stack memory limit.
P.S. You may want to change if state[item[0]] == False: to if not state[item[0]]:. Comparing booleans is usually considered as a bad practice.
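For completeness, the non-recursive rewrite with an explicit stack is a small change. This sketch assumes the question's adjacency-list format, where each entry is a tuple whose first element is the vertex index, and takes the list directly instead of a graph object:

```python
def dfs_tree_iterative(adjacency_list, start):
    """Iterative DFS returning the parent array; no recursion limit to hit."""
    parent = [None] * len(adjacency_list)
    state = [False] * len(adjacency_list)
    state[start] = True
    stack = [start]
    while stack:
        node = stack.pop()
        for item in adjacency_list[node]:
            child = item[0]
            if not state[child]:
                state[child] = True
                parent[child] = node
                stack.append(child)
    return parent

# adjacency list in the question's format: lists of (vertex, ...) tuples
adj = [[(1,), (2,)], [(0,), (3,)], [(0,)], [(1,)]]
print(dfs_tree_iterative(adj, 0))  # [None, 0, 0, 1]
```

The explicit stack replaces the call stack, so graph size is bounded by heap memory rather than sys.setrecursionlimit.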
To fix this error, just increase the default recursion depth limit (by default it is 1000):
import sys

sys.setrecursionlimit(2000)