Let's say I have a bpy.types.Object containing a bpy.types.Mesh data field; how can I apply one of the modifiers associated with the object in order to obtain a NEW bpy.types.Mesh, possibly contained within a NEW bpy.types.Object, thus leaving the original scene unchanged?
I'm interested in applying the EdgeSplit modifier right before exporting vertex data to my custom format; the reason why I want to do this is to have Blender automatically and transparently duplicate the vertices shared by two faces with very different orientations.
I suppose you're using the 2.6 API.
bpy.ops.object.modifier_apply(modifier='EdgeSplit')
...applies the Edge Split modifier to the currently active object. Note that it's object.modifier_apply(...).
You can use
bpy.context.scene.objects.active = my_object
to set the active object. Note that it's objects.active.
Also, check the modifier_apply docs. Lots of stuff you can only do with bpy.ops.*.
EDIT: Just saw you need a new (presumably temporary) mesh object. Just do
bpy.ops.object.duplicate()
after you set the active object; the duplicate then becomes the new active object (it retains any added modifiers, and if the original was named 'Cube', the duplicate is made active and named 'Cube.001'), and you can then apply the modifier to it. Hope this was clear enough :)
EDIT: Note that bpy.ops.object.duplicate() operates on the selected objects, not the active one. To ensure the correct object is selected and duplicated, do this:
bpy.ops.object.select_all(action='DESELECT')
my_object.select = True
There is another way, which seems better suited for custom exporters: Call the to_mesh method on the object you want to export. It gives you a copy of the object's mesh with all the modifiers applied. Use it like this:
mesh = your_object.to_mesh(scene=bpy.context.scene, apply_modifiers=True, settings='PREVIEW')
Then use the returned mesh to write any data you need into your custom format. The original object (including its data) will stay unchanged, and the returned mesh can be discarded after the export is finished.
Check the Blender Python API Docs for more info.
There is one possible issue with this method: I'm not sure you can use it to apply only one specific modifier if you have more than one defined. It seems to apply all of them, so it might not be useful in your case.
For my application (modelling hardware registers with named bit fields), I'd like to support syntax like this to access the fields:
device.REG0.F0 = 1 # access some defined subset of bits
print device.REG0.F0
but also allow access to the whole register as an integer:
device.REG0 = 123 # access all bits
print device.REG0
To support this, __getattr__() on the outer object needs to determine whether it is part of an access to some innermost field (in which case return the register object for further __get/setattr__() processing), or simply an access to a whole register (in which case return the integer value).
I have a half-assed proof-of-concept working by looking at the source text of caller's context via the inspect module, but it's easily broken. Is there some more reliable way to get maybe AST or other syntactic information about the 'current spot' in the code?
Or, are there alternative approaches which give the desired syntax:
some way for an object to implicitly yield an integer in appropriate contexts?
some way to simulate properties of a property?
some other magic?
Note: I'm aware that I can achieve the required functionality with different syntax. It's this specific syntax which is important to me.
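One of the alternative bullets above ("implicitly yield an integer in appropriate contexts") can be sketched without any caller inspection: keep device.REG0 bound to a register object whose bit fields are attributes, intercept whole-register assignment in the device's __setattr__, and give the register __int__/__index__ so it converts to an integer wherever one is expected. All names here (Register, Device, REG0, F0, the field layout) are hypothetical; this is a minimal sketch, not the asker's implementation.

```python
class Register:
    """Hypothetical register; fields maps name -> (shift, width)."""
    def __init__(self, fields):
        # bypass our own __setattr__ while initializing
        object.__setattr__(self, '_fields', fields)
        object.__setattr__(self, '_value', 0)

    def __getattr__(self, name):
        # called only for names not found normally, i.e. the bit fields
        shift, width = self._fields[name]
        return (self._value >> shift) & ((1 << width) - 1)

    def __setattr__(self, name, val):
        shift, width = self._fields[name]
        mask = ((1 << width) - 1) << shift
        object.__setattr__(self, '_value',
                           (self._value & ~mask) | ((val << shift) & mask))

    # behave like an int in integer contexts
    def __int__(self):
        return self._value

    def __index__(self):
        return self._value

    def __repr__(self):
        return str(self._value)


class Device:
    def __init__(self):
        # REG0 has one hypothetical 4-bit field F0 at bit 0
        object.__setattr__(self, 'REG0', Register({'F0': (0, 4)}))

    def __setattr__(self, name, val):
        # 'device.REG0 = 123' writes all bits instead of rebinding the name
        object.__setattr__(self.__dict__[name], '_value', int(val))


device = Device()
device.REG0.F0 = 5
print(device.REG0)       # prints 5
device.REG0 = 123
print(device.REG0.F0)    # prints 11 (the low 4 bits of 123)
```

The one compromise is that device.REG0 is a Register, not a literal int; it only behaves as one where Python asks for an integer (int(), indexing, formatting), which may or may not be close enough to the required syntax.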
The (probably not so well written) question is: is there any way to get object data right after it is loaded through the bpy.ops.import_scene.obj operator?
I mean, when I import an .obj file with this operator, I need to apply some further transformations to it. When I select an object via the name 'Mesh' (the default name of an object after import), those functions also affect the other objects named 'Mesh' in my scene. I tried to get the last object from the scene's object list, but the list is arranged alphabetically, so that didn't work well. When I tried to change object.name and apply the next functions to it, it worked only for one; all earlier instances of the imported object went back to default.
How do I solve this problem? Is there an option to get the last added object from the scene? Or maybe some way to 'mark' the *.obj object right after it is imported, before the next functions are applied? Or maybe there is a way to import *.obj data straight into a previously created blank object.
cheers,
regg
PS: Working on Blender 2.63
Operators don't return data they load, but you can use tagging this way...
for obj in bpy.data.objects:
    obj.tag = True
bpy.ops.import_scene.obj(filepath="somefile.obj")
imported_objects = [obj for obj in bpy.data.objects if obj.tag is False]
From what I saw after importing things, the default tag is True for all objects (including those that already exist in the scene). So it seems that in order to mark objects, you have to assign them a False value, then import, and then add them to imported_objects if their tag is True, not the other way around. So I'm not sure this answer is accurate.
So I know this is a bit of a workaround and there's probably a better way to do this, but here's the deal. I've simplified the code from where it's gathering this info and just given solid values.
curSel = nuke.selectedNodes()
knobToChange = "label"
codeIn = "[value in]"
kcPrefix = "x"
kcStart = "['"
kcEnd = "']"
changerString = kcPrefix + kcStart + knobToChange + kcEnd
for x in curSel:
    changerString.setValue(codeIn)
But I get the error I figured I would: that a string has no attribute "setValue".
It's because if I just type x['label'] instead of changerString, it works; but even though changerString says the exact same thing, it's being read as a string instead of code.
Any ideas?
It looks like you're looking for something to evaluate the string into a python object based on your current namespace. One way to do that would be to use the globals dictionary:
globals()['x']['label'].setValue(...)
In other words, globals()['x']['label'] is the same thing as x['label'].
Or to spell it out explicitly for your case:
globals()[kcPrefix][knobToChange].setValue(codeIn)
Others might suggest eval:
eval('x["label"]').setValue(...) #insecure and inefficient
but globals is definitely a better idea here.
Finally, usually when you want to do something like this, you're better off using a dictionary or some other sort of data structure in the first place to keep your data more organized.
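To illustrate that last point with a plain-Python stand-in (no Nuke; the nodes dict and its keys here are hypothetical placeholders for node objects): keep things in a dict keyed by name instead of as loose variables, and the string you built becomes an ordinary lookup key.

```python
# Hypothetical stand-in for Nuke nodes: a dict keyed by node name,
# each holding a dict of knob names to values.
nodes = {"x": {"label": "old value"}}

knobToChange = "label"
codeIn = "[value in]"

# No string-building, no globals(), no eval(): just index with the name.
nodes["x"][knobToChange] = codeIn
print(nodes["x"]["label"])   # prints [value in]
```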
Righto, there are two things you're falling afoul of. Firstly, in your original code where you are trying to do the setValue() call on a string, you're right that it won't work. Ideally use one of the two calls (x.knob('name_of_the_knob') or x['name_of_the_knob'], whichever is consistent with your project/facility/personal style) to get and set the value of the knob object.
From the comments, your code would look like this (my comments added for other people who aren't quite as au fait with Nuke):
# select all the nodes
curSel = nuke.selectedNodes()

# nuke.thisNode() returns the script's context,
# i.e. the node from which the script was invoked
knobToChange = nuke.thisNode()['knobname'].getValue()
codeIn = nuke.thisNode()['codeinput'].getValue()

for x in curSel:
    x.knob(knobToChange).setValue(codeIn)
Using this sample UI with the values in the two fields as shown and the button firing off the script...
...this code is going to give you an error message of 'Nothing is named "foo"' when you execute it, because the .getValue() call actually returns the evaluated result of the knob: in this case the error message, since Nuke tries to execute the TCL [value foo] and finds that there isn't any object named foo.
What you should ideally do is instead invoke .toScript() which returns the raw text.
# select all the nodes
curSel = nuke.selectedNodes()

# nuke.thisNode() returns the script's context,
# i.e. the node from which the script was invoked
knobToChange = nuke.thisNode()['knobname'].toScript()
codeIn = nuke.thisNode()['codeinput'].toScript()

for x in curSel:
    x.knob(knobToChange).setValue(codeIn)
You can sidestep this problem, as you've noted, by building up a string, adding in square brackets etc. as per your original code, but yes, it's a pain, a maintenance nightmare, and it starts down the route of building objects up from strings (which @mgilson explains how to do with either a globals() or an eval() approach).
For those who haven't had the joy of working with Nuke, here's a small screencap that may (or may not..) provide more context:
I have a nested dictionary containing a bunch of data on a number of different objects (where I mean object in the non-programming sense of the word). The format of the dictionary is allData[i][someDataType], where i is a number designation of the object that I have data on, and someDataType is a specific data array associated with the object in question.
Now, I have a function that I have defined that requires a particular data array for a calculation to be performed for each object. The data array is called cleanFDF. So I feed this to my function, along with a bunch of other things it requires to work. I call it like this:
rm.analyze4complexity(allData[i]['cleanFDF'], other data, other data, other data)
Inside the function itself, I straight away re-assign the cleanFDF data to another variable name, namely clFDF. I.e. The end result is:
clFDF = allData[i]['cleanFDF']
I then have to zero out all of the data that lies below a certain threshold, as such:
clFDF[ clFDF < threshold ] = 0
OK - the function works as it is supposed to. But now when I try to plot the original cleanFDF data back in the main script, the entries that got zeroed out in clFDF are also zeroed out in allData[i]['cleanFDF']. WTF? Obviously something is happening here that I do not understand.
To make matters even weirder (from my point of view), I've tried to do a bodgy kludge to get around this by 'saving' the array to another variable before calling the function. I.e. I do
saveFDF = allData[i]['cleanFDF']
then run the function, then update the cleanFDF entry with the 'saved' data:
allData[i].update( {'cleanFDF':saveFDF} )
but somehow, simply performing clFDF[ clFDF < threshold ] = 0 within the function modifies clFDF, saveFDF and allData[i]['cleanFDF'] in the main friggin' script, zeroing out all the entries at the same array indexes! It is like they are all associated global variables somehow, but I've made no such declarations anywhere...
I am a hopeless Python newbie, so no doubt I'm not understanding something about how it works. Any help would be greatly appreciated!
You are passing the value at allData[i]['cleanFDF'] by reference (decent explanation at https://stackoverflow.com/a/430958/337678). Any changes made to it will be made to the object it refers to, which is still the same object as the original, just assigned to a different variable.
Making a deep copy of the data will likely fix your issue (Python's copy module has a deepcopy function that should do the trick ;)).
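A minimal sketch of the fix, assuming the cleanFDF entries are NumPy arrays (which the boolean-mask assignment in the question suggests); the data values here are made up:

```python
import copy

import numpy as np

# Hypothetical stand-in for the nested data structure in the question
allData = {0: {'cleanFDF': np.array([0.1, 0.5, 0.9])}}
threshold = 0.4

# Copy first, then threshold: the stored array is left alone.
clFDF = copy.deepcopy(allData[0]['cleanFDF'])
clFDF[clFDF < threshold] = 0

print(clFDF)                   # [0.  0.5 0.9]
print(allData[0]['cleanFDF'])  # [0.1 0.5 0.9]  (unchanged)
```

For a single NumPy array, allData[i]['cleanFDF'].copy() is enough; copy.deepcopy earns its keep when you need to duplicate a whole nested structure at once.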
Everything is a reference in Python.
def function(y):
    y.append('yes')
    return y

example = list()
function(example)
print(example)
it would print ['yes'] even though I am not directly changing the variable example.
See Why does list.append evaluate to false?, Python append() vs. + operator on lists, why do these give different results?, Python lists append return value.
I have a ListStore in PyGTK, which has a bunch of rows. There is a background job processing the data represented by the rows, and when it finishes, it needs to update the row. Of course, to do this, it needs to know which row to update, and is thus keeping an iterator to the row around. However, during the background jobs life, the user might remove the row. This is OK — we just replace the stored iterator with "None", and the background job continues along merrily. The problem is that when the row is removed, the iterators don't compare as equal, and nothing gets set to None. In fact, no two iterators, AFAIK, compare equal. The problem, in a minimal example, is this:
>>> store = gtk.ListStore(int)
>>> store.insert(1)
<GtkTreeIter at 0x1d49600>
>>> print store[0].iter == store[0].iter
False
False, yet they're the same iterator! (I'm aware they are different instances, but they represent the same thing, and they define a __eq__ method.) What am I missing here, and how do I keep track of rows in a ListStore for later updating?
Try using the list store's .get_path(iter) method, and compare the resulting paths, instead of comparing the iterators directly.
UPDATE: You can just call set_value with the invalid iter. gtk will give you a warning but will not throw an exception or anything. It probably just checks whether it's a valid iter anyway.
I would approach this differently — here's what I've done in a similar situation:
The underlying data object represented in each row is an instance of a GObject
This GObject subclass has a bunch of properties
When the property changes, it emits the notify::myproperty signal
At the same time:
My ListStore stores these objects, and uses the gtk.TreeViewColumn.set_cell_data_func() method to render each column (see note below)
For each object/row, my object managing the TreeView connects to the notify::myproperty
The function connected to this notify::... signal triggers the row-changed signal on the ListStore
Some code:
def on_myprop_changed(self, iter, prop):
    path = self.model.get_path(iter)
    self.model.row_changed(path, iter)

def on_thing_processed(self, thingdata):
    # Model is a ListStore
    tree_iter = self.model.append((thingdata,))

    # You might want to connect to many 'notify::...' signals here,
    # or even have your underlying object emit a single signal when
    # anything is updated.
    hid = thingdata.connect_object('notify::myprop',
                                   self.on_myprop_changed,
                                   tree_iter)
    self.hids.add((thingdata, hid))
I keep the hids in a list so I can disconnect them when the table is cleared. If you let individual rows get removed, you'll probably need to store them in a map (path -> hid, or object -> hid).
Note: You need to remember that set_cell_data_func causes the row to re-check its information every time there's a redraw, so the underlying function should just be a lookup function, not an intensive computation. Practically speaking, because of this you could get away with not doing the "connect-to-signal/emit-row-changed" procedure, but personally I feel better knowing that there won't be any edge cases.