What is the difference between object and node in Maya (scripting)? - python

This question may sound silly, I know, but I would like to know exactly what the difference is and how the hierarchy works in their case.
If I create for instance a polyCylinder and bind it to a variable
exVar = cmds.polyCylinder(name='cylinder_01')
And now I print exVar, I get a list with two Unicode string items: one for the name of the object and another one for the name of the node.
[u'cylinder_01', u'polyCylinder1']
If I go to the Outliner I can only see cylinder_01; I cannot see the polyCylinder1 item.
What do they mean? Is there any way to visualize them in the Outliner or the Hypergraph?
Thanks in advance.

cylinder_01 is the transform, which handles translation, rotation, scale, etc.
polyCylinder1 is the construction-history (input) node, which stores the creation parameters (radius, height, subdivisions) that generate the geometry. The geometry itself lives in a shape node, cylinder_01Shape, which holds the vertices, polygons, shader connections, etc.
The shape is parented to the transform; you can see it in the Outliner if you enable Display > Shapes. History nodes like polyCylinder1 don't appear in the Outliner, but you can see them in the Hypergraph (connections view) or the Node Editor.
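You can verify all three pieces from script; a small sketch (the shape name cylinder_01Shape is what Maya generates by default):

import maya.cmds as cmds

exVar = cmds.polyCylinder(name='cylinder_01')
print(exVar)                                        # [u'cylinder_01', u'polyCylinder1']
shapes = cmds.listRelatives(exVar[0], shapes=True)  # [u'cylinder_01Shape'], the geometry
print(cmds.nodeType(shapes[0]))                     # 'mesh'
print(cmds.listHistory(exVar[0]))                   # construction history; includes polyCylinder1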

Related

Getting (t, c, k) values from OpenCascade surfaces

I've created a library for creating and using b-spline surfaces in Python, utilizing parallel scipy.interpolate.RectBivariateSpline() instances to hold the knot vectors, (X, Y, Z) control point mesh, and degrees in u and v (the (t, c, k) tuple against which surface evaluation is performed). I also wrote a STEP parser to read surface data exported from CAD packages; I take the (t, c, k) values from the b_spline_surface_with_knots entities in the file and stuff them into my own objects. The surface library works pretty well for me, but the STEP parser is a pain and fails in various ways almost every time I use it. So I've tried using a 'real' STEP parser, like this:
import sys
from OCC.STEPControl import STEPControl_Reader
from OCC.IFSelect import IFSelect_RetDone, IFSelect_ItemsByEntity

step_reader = STEPControl_Reader()
status = step_reader.ReadFile('c:/LPT/nomdata/lpt3.stp')
if status == IFSelect_RetDone:  # check status
    failsonly = False
    step_reader.PrintCheckLoad(failsonly, IFSelect_ItemsByEntity)
    step_reader.PrintCheckTransfer(failsonly, IFSelect_ItemsByEntity)
    ok = step_reader.TransferRoot(1)
    _nbs = step_reader.NbShapes()
    aResShape = step_reader.Shape(1)
else:
    print("Error: can't read file.")
    sys.exit(0)
Now I have this aResShape object, but no amount of poking and prodding it in IPython (nor googling) reveals how to get at the (t, c, k) values that define the surface.
Can someone please point me to the method that will reveal these values? Or is there possibly another Python-based STEP parser that's a little less opaque?
The question is a bit old, but just in case anybody else ends up here with a similar problem...
The result of step_reader.Shape() is a TopoDS_Shape, a topological entity that can be divided into the following component topologies:
Vertex – a zero-dimensional shape corresponding to a point in geometry;
Edge – a shape corresponding to a curve, and bound by a vertex at each extremity;
Wire – a sequence of edges connected by their vertices;
Face – part of a plane (in 2D geometry) or a surface (in 3D geometry) bounded by a closed wire;
Shell – a collection of faces connected by some edges of their wire boundaries;
Solid – a part of 3D space bound by a shell;
Compound solid – a collection of solids.
Typically, you'd query it with the method TopoDS_Shape::ShapeType() in order to know what that shape is (vertex? edge? ...).
If the model is formed by a single b-spline surface, the shape should be a TopoDS_Face, which you can get with a downcast:
face = topods_Face(aResShape)  # from OCC.TopoDS import topods_Face
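If the file contains more than one face (the reader usually hands you a compound), you can walk the shape instead; a sketch using the same old-style OCC imports as the question:

from OCC.TopExp import TopExp_Explorer
from OCC.TopAbs import TopAbs_FACE
from OCC.TopoDS import topods_Face

faces = []
explorer = TopExp_Explorer(aResShape, TopAbs_FACE)
while explorer.More():
    faces.append(topods_Face(explorer.Current()))  # downcast each sub-shape to a face
    explorer.Next()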
Once you have the TopoDS_Face at hand, you can get the underlying geometry (Geom_Surface) like this:
surface = BRepAdaptor_Surface(face).Surface().BSpline()  # from OCC.BRepAdaptor import BRepAdaptor_Surface
Now that you have had access to the underlying geometry, you can call this object's methods and they will provide you with the information you need.
They are documented here:
https://www.opencascade.com/doc/occt-7.1.0/refman/html/class_geom___b_spline_surface.html
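For instance, extracting (t, c, k)-style data might look like this (a sketch; on older pythonocc builds BSpline() returns a handle, so you may need surface = surface.GetObject() first):

# OCCT indices are 1-based; knots are stored as distinct values plus multiplicities.
tu = [surface.UKnot(i) for i in range(1, surface.NbUKnots() + 1)]
mu = [surface.UMultiplicity(i) for i in range(1, surface.NbUKnots() + 1)]
tv = [surface.VKnot(i) for i in range(1, surface.NbVKnots() + 1)]
mv = [surface.VMultiplicity(i) for i in range(1, surface.NbVKnots() + 1)]
ku, kv = surface.UDegree(), surface.VDegree()
# scipy-style knot vectors repeat each knot value according to its multiplicity.
t_u = [t for t, m in zip(tu, mu) for _ in range(m)]
t_v = [t for t, m in zip(tv, mv) for _ in range(m)]
# Control points form an NbUPoles x NbVPoles grid of gp_Pnt values.
c = [[(surface.Pole(i, j).X(), surface.Pole(i, j).Y(), surface.Pole(i, j).Z())
      for j in range(1, surface.NbVPoles() + 1)]
     for i in range(1, surface.NbUPoles() + 1)]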
OpenCASCADE documentation may seem confusing, but I think you might be interested in this topic:
https://www.opencascade.com/doc/occt-7.0.0/overview/html/occt_user_guides__modeling_data.html#occt_modat_3
Hope it helps.

How to use igraph python's metamagic class?

The Python interface of igraph has a class called metamagic, whose purpose is to collect graphical parameters for plotting. I am writing a module using igraph, and I had almost started to write my own wrapper functions for this purpose when I found metamagic in the documentation. But after searching and trying, it's still not clear to me how to use these classes. If I define an AttributeCollectorBase class for edges, like this:
class VisEdge(igraph.drawing.metamagic.AttributeCollectorBase):
    width = 0.002
    color = "#CCCCCC44"
Then, is there an easy way to pass all these parameters to the igraph.plot() function? Or can I only do it one by one, like this: plot(graph, edge_color=VisEdge(graph.es).color)?
And what if I would like to use not constant parameters, but ones calculated by a custom function? For example, vertex_size proportional to degree. The func parameter of the AttributeSpecification class is supposed to do this, isn't it? But I haven't seen any example of how to use it. If I define an AttributeSpecification instance, like this:
ds = igraph.drawing.metamagic.AttributeSpecification(name="vertex_size", alt_name="size", default=2, func='degree')
How do I then pass it to an AttributeCollector, and finally to plot()?
(To put things in context: I am the author of the Python interface of igraph).
I'm not sure whether the metamagic package is the right tool for you. The only purpose of the AttributeCollectorBase class is to allow the vertex and edge drawers in igraph (see the igraph.drawing.vertex and igraph.drawing.edge packages) to define what vertex and edge attributes they are able to treat as visual properties in a nice and concise manner (without me having to type too much). So, for instance, if you take a look at the DefaultVertexDrawer class in igraph.drawing.vertex, you can see that I construct a VisualVertexBuilder class by deriving it from AttributeCollectorBase as follows:
class VisualVertexBuilder(AttributeCollectorBase):
    """Collects some visual properties of a vertex for drawing"""
    _kwds_prefix = "vertex_"
    color = ("red", self.palette.get)
    frame_color = ("black", self.palette.get)
    frame_width = 1.0
    ...
Later on, when the DefaultVertexDrawer is being used in DefaultGraphDrawer, I simply construct a VisualVertexBuilder as follows:
vertex_builder = vertex_drawer.VisualVertexBuilder(graph.vs, kwds)
where graph.vs is the vertex sequence of the graph (so the vertex builder can get access to the vertex attributes) and kwds is the set of keyword arguments passed to plot(). The vertex_builder variable then allows me to retrieve the calculated, effective visual properties of vertex i by writing something like vertex_builder[i].color; here, it is the responsibility of the VisualVertexBuilder to determine the effective color by looking at the vertex and checking its color attribute as well as looking at the keyword arguments and checking whether it contains vertex_color.
The bottom line is that the AttributeCollectorBase class is likely to be useful to you only if you are implementing a custom graph, vertex or edge drawer and you want to specify which vertex attributes you wish to treat as visual properties. If you only want to plot a graph and derive the visual properties of that particular graph from some other data, then AttributeCollectorBase is of no use to you. For instance, if you want the size of the vertex be proportional to the degree, the preferred way to do it is either this:
sizes = rescale(graph.degree(), out_range=(0, 10))
plot(graph, vertex_size=sizes)
or this:
graph.vs["size"] = rescale(graph.degree(), out_range=(0, 10))
plot(g)
If you have many visual properties, the best way is probably to collect them into a dictionary first and then pass that dictionary to plot(); e.g.:
visual_props = dict(
    vertex_size=rescale(graph.degree(), out_range=(0, 10)),
    edge_width=rescale(graph.es["weight"], out_range=(0, 5), scale=log10)
)
plot(graph, **visual_props)
Take a look at the documentation of the rescale function for more details. If you want to map some vertex property into the color of the vertex, you can still use rescale to map the property into the range 0-255, then round them to the nearest integer and use a palette when plotting:
palette = palettes["red-yellow-green"]
colors = [round(x) for x in rescale(g.degree(), out_range=(0, len(palette)-1))]
plot(g, vertex_color=colors, palette=palette)

Recognize which model object is clicked in Tkinter

I have many overlapping shapes representing irrelevant background items on a canvas. I also have a pattern of non-overlapping circles, each of which is a "hole". Each "hole" sprite (circle) has an associated "hole" object, though never explicitly in the code. (side note: I would love to have a logical association between model and view with these objects, but haven't found a smart way to do that). Each "hole" is different, and has different effects.
There is a small circular "ball" which can be dragged into any "hole". I found how to drag and drop from this question. I need to find which hole the ball went into.
The best way I have found to do that so far is to:
create a dict mapping the coordinates of the center of the hole sprite to the hole object
tag each hole like this:
t=("hole", "hole_at_{}_{}".format(x, y))
on releasing the ball, do this:
def on_ball_release(self, event):
    '''Process button event when user releases mouse holding ball.'''
    # use small invisible rectangle and find all overlapping items
    items = self._canvas.find_overlapping(event.x - 10, event.y - 10, event.x + 10, event.y + 10)
    for item in items:
        # there should only be 1 overlapping hole
        if "hole" in self._canvas.gettags(item):
            # get the coordinates from the tag
            coords = tuple([int(i) for i in self._canvas.gettags(item)[1].replace("hole_at_", "").split("_")])
            # get associated object using dictionary established before
            hole = self._hole_dict[coords]
            hole.process_ball()
            return
That seems very messy. I feel there should be some smarter way to do this.
Disclaimer: I don't use Python, but many Tkinter questions can be answered usefully from experience with Tcl/Tk, which I have. In this case, it takes some more work to figure out whether what I would do in Tcl is easy to express with Tkinter.
First, I wouldn't add "identifier tags" (hole_at_...): if I have model objects corresponding to canvas items, I would use the item id (which canvas returns during item creation) as an index, to be able to find an object for an item id without parsing tags. (And if I had to add string identifiers, even if I decided to make them from coordinates, I would use that very string as my dictionary key, to avoid reparsing it. Do we need coordinates later? Then make them properties of the hole object).
Second, I would use the pathName find subcommand with multiple criteria to find (the canvas id of) the item which is tagged as hole and is nearest to the given point (overlapping is fine when we want to ignore drops too far from any hole; closest is for the case where the nearest hole should be used even if it's not very near). Here is the problematic part: does Tkinter support multiple criteria in a canvas's $pathName find?
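In Tkinter, find_overlapping takes only coordinates, so the tag criterion has to be checked in Python. A minimal sketch of the id-keyed lookup described above (the _hole_by_id name is hypothetical):

# At creation time: key holes by the canvas item id instead of coordinate tags.
item_id = self._canvas.create_oval(x - r, y - r, x + r, y + r, tags=("hole",))
self._hole_by_id[item_id] = hole          # plain dict: canvas item id -> model object

def on_ball_release(self, event):
    '''Look the hole object up directly from the canvas item id.'''
    items = self._canvas.find_overlapping(event.x - 10, event.y - 10,
                                          event.x + 10, event.y + 10)
    for item in items:
        if "hole" in self._canvas.gettags(item):
            self._hole_by_id[item].process_ball()
            return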

HTML like layouting

I'm trying to implement my own little flow-based layout engine. It should imitate the behavior of HTML layouting, but only the render-tree, not the DOM part. The base class for elements in the render-tree is the Node class. It has:
A link to the element in the DOM (for the ones that build a render-tree with that library)
A reference to its parent (which is a ContainerNode instance or None, see later)
A reference to the layouting-options
X, Y, width and height (the position is computed in layout() after the size has been computed in compute_size(); the position is set by the parent's layout() method, while the size is determined by the options reference, for instance).
Its methods are:
reflow(), which invokes compute_size() and layout()
compute_size(), which is intended to compute the width and height of the node
layout(), which is intended to position the sub-nodes of the node, not the node itself
paint(), which is there to be overridden by the user of the library
The ContainerNode class implements the handling of sub-nodes. It provides a new method called add_node(), which adds the passed node to the container's children. The method also accepts a parameter force, which defaults to False, because the container is allowed to reject the passed node unless force is set to True.
These two classes do not implement any layouting algorithm. My aim was to create different classes for the different types of layouts (in CSS, mainly defined by the display attribute). I did some tests with text-layouting last night and you can find my code at pastebin.com (requires pygame). You can save it to a Python script file and invoke it like this:
python text_test block -c -f "Georgia" -s 15
Note: The code is really really crappy. I appreciate comments on deep lying misconceptions.
The class InlineNodeRow from the code mentioned above actually represents my idea of how to implement the node that lays out similar to the display:inline attribute (in combination with the NodeBox).
Problem 1 - Margin & Padding for inline-text
Back to my current approach in the library: A single word from a text would also be represented as a single node (just like in the code above). But I noticed two things about margins and paddings in a <span> tag.
When margin is set, only horizontal margin is taken in account, the vertical margin is ignored.
The padding is overflowing the parent container and does not "move" the span node.
See http://jsfiddle.net/CeRkT/1/.
I see the problem here: when I want to compute the size of the InlineNodeBox, I ask a text-node for its size and add it to the size of the node. But the text-node's size includes its margin and padding, which is not included in the HTML renderer's positioning. Therefore the following code would not be right:
def compute_size(self):
    # Propagates the computation to the child-nodes.
    super(InlineNodeBox, self).compute_size()
    self.w = 0
    self.h = 0
    for node in self.nodes:
        self.w += node.w
        if self.h < node.h:
            self.h = node.h
node.w would include the margin and padding. The next problem I see is that, to lay out the text correctly, I wanted to split it into a single TextNode per word, but the margin and padding would then be applied to each of these nodes, while in HTML the margin and padding apply to the <span> tag as a whole.
I think my current idea of putting each word into a separate node is not ideal. How do browsers structure their render-tree, or do you have a better idea?
Problem 2 - Word too long, put it into the next line.
The InlineNodeBox class currently only organizes a single line. In the code example above, I've created a new InlineNodeBox from within the NodeBox when the former refused to accept the node (which means it didn't fit in). I cannot do this with my current approach, as I do not want to rebuild the render-tree all over again. When a node was accepted once, but exceeds the InlineNodeBox on the next reflow, how do I properly manage to put the word into the next line (assuming I keep the idea of the InlineNodeBox class only organizing a single line of nodes)?
I really hope this all makes sense. Feel free to ask if you do not understand my concept. I'm also very open to criticism and ideas for other concepts, links to resources, documentations, publications and alike.
Problem 2:
You can do it like HTML renderers do and render multiple lines (e.g. check whether the new word to be added exceeds the width and start a new line if it does). You can do it in your InlineNodeRow by keeping track of height too and wrapping words when they exceed the max width, as in the sketch below.
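A minimal wrapping sketch along those lines, reusing the x/y/w/h fields from the question (max_width is assumed to come from the parent container):

def layout(self, max_width):
    x = y = line_height = 0
    for node in self.nodes:
        if x > 0 and x + node.w > max_width:  # word doesn't fit: start a new line
            x, y = 0, y + line_height
            line_height = 0
        node.x, node.y = x, y
        x += node.w
        line_height = max(line_height, node.h)
    self.h = y + line_height                  # total height spans all wrapped lines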
Problem 1:
If you do figure out problem 2 for text, then you can put in the offset (horizontal padding) only for the first line.
Although <span> doesn't take height into consideration, it does take line-height, so your calculation could be that the default height is the font height unless you have a line-height option available.
Mind you, if you have two or more successive InlineNodeRow representing spans, you'd need some smart logic to make the second one continue from where the first one ended :)
As a side note, from what I remember of Qt's rich text label, each set of words with the same rendering properties is considered to be a node, and its render function takes care of calculating all the rest. Your approach is a bit more granular, and its only disadvantage from what I see is that you can't split words.
HTH,
I may have found a solution to problem 1 in the box model documentation (you may want to check out the documentation about clearance, and the one for overflow as well, for problem 2).
"margins of absolutely positioned boxes do not collapse."
You can see this jsfiddle for an example.

How to apply a modifier in Python, creating a new mesh?

Let's say I have a bpy.types.Object containing a bpy.types.Mesh data field; how can I apply one of the modifiers associated with the object in order to obtain a NEW bpy.types.Mesh, possibly contained within a NEW bpy.types.Object, thus leaving the original scene unchanged?
I'm interested in applying the EdgeSplit modifier right before exporting vertex data to my custom format; the reason why I want to do this is to have Blender automatically and transparently duplicate the vertices shared by two faces with very different orientations.
I suppose you're using the 2.6 API.
bpy.ops.object.modifier_apply(modifier='EdgeSplit')
...applies to the currently active object its Edge Split modifier. Note that it's object.modifier_apply(...)
You can use
bpy.context.scene.objects.active = my_object
to set the active object. Note that it's objects.active.
Also, check the modifier_apply docs. Lots of stuff you can only do with bpy.ops.*.
EDIT: Just saw you need a new (presumably temporary) mesh object. Just do
bpy.ops.object.duplicate()
after you set the active object; the new active object then becomes the duplicate (it retains any added modifiers; if the original was an object named 'Cube', the duplicate becomes active and is named 'Cube.001'), to which you can then apply the modifier. Hope this was clear enough :)
EDIT: Note that bpy.ops.object.duplicate() operates on the selected objects, not the active one. To ensure the correct object is selected and duplicated, do this:
bpy.ops.object.select_all(action='DESELECT')
my_object.select = True
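Putting the pieces together, a sketch (assumes my_object from above and a modifier named 'EdgeSplit'):

bpy.ops.object.select_all(action='DESELECT')
my_object.select = True                              # duplicate() works on the selection
bpy.context.scene.objects.active = my_object
bpy.ops.object.duplicate()                           # the duplicate is now selected and active
bpy.ops.object.modifier_apply(modifier='EdgeSplit')  # applied to the duplicate only
new_object = bpy.context.active_object               # e.g. 'Cube.001'; the original is untouched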
There is another way, which seems better suited for custom exporters: Call the to_mesh method on the object you want to export. It gives you a copy of the object's mesh with all the modifiers applied. Use it like this:
mesh = your_object.to_mesh(scene = bpy.context.scene, apply_modifiers = True, settings = 'PREVIEW')
Then use the returned mesh to write any data you need into your custom format. The original object (including its data) will stay unchanged, and the returned mesh can be discarded after the export is finished.
Check the Blender Python API Docs for more info.
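For an exporter the whole thing can stay read-only; a sketch (write_vertex is a hypothetical writer for your custom format):

obj = bpy.context.active_object
mesh = obj.to_mesh(scene=bpy.context.scene, apply_modifiers=True, settings='PREVIEW')
try:
    for vertex in mesh.vertices:
        write_vertex(vertex.co.x, vertex.co.y, vertex.co.z)
finally:
    bpy.data.meshes.remove(mesh)  # discard the temporary mesh when done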
There is one possible issue with this method. I'm not sure you can use it to apply only one specific modifier, if you have more than one defined. It seems to apply all of them, so it might not be useful in your case.
