Blender's internal data won't update after a scale operation - python

I have the following script:
import bpy
import os

print("Starter")
selection = bpy.context.selected_objects
for obj in selection:
    print("Obj selected")
    me = obj.data
    for edge in me.edges:
        vert1 = me.vertices[edge.vertices[0]]
        vert2 = me.vertices[edge.vertices[1]]
        print("<boundingLine p1=\"{0}f,0.0f,{1}f,1.0f\" p2=\"{2}f,0.0f,{3}f,1.0f\" />".format(vert1.co.x, vert1.co.y, vert2.co.x, vert2.co.y))
Pretty basic, right? It just prints out all the edges to the console, for me to copy-paste into an XML document.
When I scale an object and run this script on it, I get the OLD, unscaled values for the object output to the console, from before it was scaled. I have tried moving every vertex in the object along all axes, which results in the output values being the unscaled ones, transformed according to my movement.
If I press N to check the vertices' global values, they are properly scaled.
Why am I not getting the correct values?!
This script was supposed to save time, but getting anything to work in Blender is a CHORE! It does not help that they have just updated their API, so all the example code out there is outdated!

Alright, this is the deal: when you scale, translate, or rotate an object in Blender, or otherwise perform a transformation, that transformation is "stored" somehow. What you need to do is select the object to which you applied the transformation, use the shortcut CTRL + A, and then apply your transformation.
...
So there was no inconsistency between the internal data accessible through the Blender API and the values actually displayed.
I am sure this design makes sense, but right now I want to punch the guy that came up with it in the throat. If I scale something, I intend the thing that got scaled to be scaled!
But anyway, the reason I got weird values was that the scaling was not applied, which you do with CTRL + A in Object Mode, after selecting the object that you scaled.

I'm not really a Blender user (I'm a Maya one), but I think you could try something different (slower too, I would say): just iterate over the selected vertices, creating a locator or a null object, constraining it to the vertex position, and reading its x, y, z coordinates. I've done it in Maya and it works.
Let's say something like this:
data_list = []
selection = #selection code here#
for v in selection:
    loc = locator()
    pointconstraint(v, loc)
    data_list.append(loc.translation_attributes)
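In maya.cmds terms, the same idea can be sketched without the constraint step, since xform can query a component's position directly. A rough sketch, assuming the vertices are already selected (the direct xform query is a swapped-in shortcut, since pointConstraint wants a transform as its target):
import maya.cmds as cmds

data_list = []
# flatten=True expands ranges like pCube1.vtx[0:3] into individual vertices
for v in cmds.ls(selection=True, flatten=True):
    # query this vertex's position in world space
    data_list.append(cmds.xform(v, query=True, worldSpace=True, translation=True))
print(data_list)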

Mesh objects have an internal coordinate system for their vertices, as well as global translation, scaling, and rotation transforms that apply to the entire object. You can apply the global scaling matrix to the mesh data, and convert the vertex coordinates to the global coordinate system as follows:
bpy.ops.object.select_all(action='SELECT')
bpy.ops.object.transform_apply(scale=True)
bpy.ops.object.select_all(action='DESELECT')
Other options to transform_apply() allow rotation and translation matrices to be applied as well.
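Alternatively, if you'd rather not modify the mesh data at all, each vertex can be converted to world space on the fly by multiplying it with the object's world matrix. A minimal sketch of the original loop with that change (assuming Blender 2.8+, where @ is the matrix multiplication operator; older versions use * instead):
import bpy

for obj in bpy.context.selected_objects:
    me = obj.data
    for edge in me.edges:
        # matrix_world bakes the object's location, rotation, and scale
        # into the vertex coordinates
        v1 = obj.matrix_world @ me.vertices[edge.vertices[0]].co
        v2 = obj.matrix_world @ me.vertices[edge.vertices[1]].co
        print("<boundingLine p1=\"{0}f,0.0f,{1}f,1.0f\" p2=\"{2}f,0.0f,{3}f,1.0f\" />".format(v1.x, v1.y, v2.x, v2.y))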

Related

Blender: Rendering images and producing an animation

I am new to Blender and I’m having a bit of a tough time understanding its key concepts. I am using Blender 2.82 and working with Python scripting. My project consists of using Python to do the following:
Move object slightly.
Take picture with camera 1, camera 2, camera 3, and camera 4.
Repeat.
I had a script that did that. However, I wanted to save the position of my object (a sphere) every time I changed it during the loop, as an animation, so I could later see what I did. When I try to insert keyframes for the animation in my loop, it seems as if my sphere doesn't move. Below is my code. When I remove the lines that include frame_set and keyframe_insert, my sphere moves, as I can see from my rendered images. I think I am confusing some kind of concept… Any help would be appreciated. The goal is to produce the images I would obtain from four cameras placed around a moving object, so as to simulate a mocap system.
Why does inserting a keyframe change all of the images being rendered?
import bpy, bgl, blf, sys
import numpy as np
from bpy import data, ops, props, types, context

cameraNames = ''
# Loop all command line arguments and try to find "cameras=east" or similar
for arg in sys.argv:
    words = arg.split('=')
    if words[0] == 'cameras':
        cameraNames = words[1]

sceneKey = bpy.data.scenes.keys()[0]

# Render settings
bpy.data.scenes[sceneKey].render.image_settings.file_format = 'JPEG'
bpy.data.scenes[sceneKey].cycles.max_bounces = 12
bpy.data.scenes[sceneKey].render.tile_x = 8
bpy.data.scenes[sceneKey].render.tile_y = 8
bpy.data.scenes[sceneKey].cycles.samples = 16
bpy.data.scenes[sceneKey].cycles.caustics_reflective = False
bpy.data.scenes[sceneKey].cycles.caustics_refractive = False

bpy.data.objects['Sphere'].location = [1, 1, 1]
frame_num = 0
for i in range(0, 2):  # nframes
    bpy.context.scene.frame_set(frame_num)
    # Loop all objects and try to find cameras that match cameraNames
    for obj in bpy.data.objects:
        if obj.type == 'CAMERA' and (cameraNames == '' or obj.name.find(cameraNames) != -1):
            # Set the scene's camera and output filename
            bpy.data.scenes[sceneKey].camera = obj
            bpy.data.scenes[sceneKey].render.filepath = '//' + obj.name + "_" + str(i)
            # Render scene and write the image
            bpy.ops.render.render(write_still=True)
    bpy.data.objects['Sphere'].keyframe_insert(data_path="location", index=-1)
    frame_num += 1
    bpy.data.objects['Sphere'].location = [2, 2, 1]
I have no knowledge of Python, but you can try doing the keyframe animation manually and make a script which renders the pictures after each set of keyframes (whenever the object has moved to a new location).
It is not too hard (I'm talking about only the animation): just press the circle button near the play button on the timeline. This turns on auto keyframing, and you then just go to the desired frame and move the object as needed.
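If you do want to keep the keyframing in the script, one plausible explanation for the symptom is ordering: once location has keyframes, frame_set() re-evaluates the animation and overrides any location assigned earlier. A sketch of the loop with the move and the keyframe done after frame_set and before rendering (the positions list is illustrative, and the camera/filepath handling is omitted):
import bpy

sphere = bpy.data.objects['Sphere']
positions = [(1, 1, 1), (2, 2, 1)]  # hypothetical per-frame positions

for frame_num, pos in enumerate(positions):
    bpy.context.scene.frame_set(frame_num)
    sphere.location = pos
    # key the location at this frame before rendering, so the animation
    # system and the manual assignment agree on where the sphere is
    sphere.keyframe_insert(data_path="location", index=-1)
    bpy.ops.render.render(write_still=True)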

Incorrect hover area with QGraphicsPathItem and QPainterPathStroker

I am writing a class inheriting from QGraphicsItemGroup whose main child is a QGraphicsPathItem. The whole thing is used to draw polylines. I wanted to have a proper hover system for the lines, so I reimplemented the shape method of the object as follows:
def shape(self):
    stroker = QtWidgets.QPainterPathStroker()
    stroker.setWidth(10 * self.resolution)  # resolution handles zoom and stuff
    path = stroker.createStroke(self.__path.path()).simplified()
    return path
In the above snippet, self.__path is the QGraphicsPathItem I mentioned earlier.
To make things simple, here are a few pictures (not reproduced here): the line I drew as it appears on screen; the hover area I want; and the hover area I currently get with the reimplemented shape method shown above.
As you guessed, such a selection area is hardly useful for any purpose. Strangest of all, I used the exact same method to generate the outline of the line, then used toFillPolygon to generate a polygon that I rendered in the same object by adding a QGraphicsPolygonItem child: the shape that appears on my screen is exactly what I want, but when I use the same path to create the hover area via shape, it gives me the useless hover area (image 3) instead.
So, do you know why the path obtained with the QPainterPathStroker lets me display a polygon that corresponds exactly to the hover area I want, yet produces a wonky hover area when used in shape? If so, do you know how to fix this problem?
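For reference, the debug visualisation described above could look something like this (a sketch assuming PyQt5, where QPainterPathStroker lives in QtGui; the method name show_hover_area is made up, and the code is assumed to live in the same class, since self.__path is name-mangled):
from PyQt5 import QtGui, QtWidgets

def show_hover_area(self):
    # build the same stroked path used in shape()
    stroker = QtGui.QPainterPathStroker()
    stroker.setWidth(10 * self.resolution)
    outline = stroker.createStroke(self.__path.path()).simplified()
    # turn it into a polygon and display it as a translucent child item
    debug_item = QtWidgets.QGraphicsPolygonItem(outline.toFillPolygon(), self)
    debug_item.setBrush(QtGui.QBrush(QtGui.QColor(0, 120, 255, 60)))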

Tkinter canvas create_image and create_oval optimization

Background
I am trying - and succeeding - to create a simple plot using the Canvas object within tkinter. I am trying to use as many tools that ship with Python 3 as possible. Matplotlib and others are great, but they are pretty large installs for something that I'm trying to keep a bit smaller.
The plots are updated every 0.5 s based on input from a hardware device. The previous 128 points are deleted and the current 128 points are drawn. See my most recent blog post for a couple of screenshots. I have successfully created the plots using canvas.create_oval(), but as the program was running, I heard my PC fans ramp up a bit (I have them on an aggressive thermal profile) and realized that I was using 15% of the CPU, which seemed odd.
The Problem
After running cProfile, I found that the canvas.create_oval() was taking more cumulative time than I would have expected.
After reading a bit about optimization in the tkinter canvas (there isn't much out there except 'use something else'), I came across a post suggesting that one might use an image of a dot with canvas.create_image() instead of canvas.create_oval(). I tried that, and the time spent in create_image() was a bit less, but still quite significant.
For completeness, I will include the code fragment. Note that this method is part of a class called Plot4Q which is a subclass of tk.Canvas:
def plot_point(self, point, point_format=None, fill='green', tag='data_point'):
    x, y = point
    x /= self.x_per_pixel
    y /= self.y_per_pixel
    x_screen, y_screen = self.to_screen_coords(x, y)
    if fill == 'blue':
        self.plot.create_image((x_screen, y_screen), image=self.blue_dot, tag=tag)
    else:
        self.plot.create_image((x_screen, y_screen), image=self.green_dot, tag=tag)
The Profile
I am a profiling newb, so it would be prudent to include some portion of the output of that profiler. I have sorted by 'cumtime' and highlighted the relevant methods.
update_plots calls scatter
scatter calls plot_point (above)
Note that scatter consumes 11.6% of the total run time.
The Question
Is there a more efficient method of creating points (and deleting them, though that doesn't take very long in tkinter) on a canvas?
If not, is there a more efficient way of creating the plot and embedding it into the tkinter interface?
I am somewhat open to using a different library, but I would like to keep it small and fast. I had thought that the tk canvas would be small and fast since it was functioning competently on machines with 1/10th of the power that a modern PC has.
More Info
After applying a helpful answer below (Bryan Oakley's), I have updated results.
To explain the updated code a bit: I am using ovals again (I like the color control). I check whether the tag exists. If it does not, a new oval is created at the specified point. If it does, the new coordinates are calculated and the move function is called.
def plot_point(self, point, fill='green', tag='data_point'):
    if not fill:
        fill = self.DEFAULT_LINE_COLOR
    point_width = 2

    # find the location of the point on the canvas
    x, y = point
    x /= self.x_per_pixel
    y /= self.y_per_pixel
    x_screen, y_screen = self.to_screen_coords(x, y)
    x0 = x_screen - point_width
    y0 = y_screen - point_width
    x1 = x_screen + point_width
    y1 = y_screen + point_width

    # if the tag exists, then move the point, else create the point
    point_ids = self.plot.find_withtag(tag)
    if point_ids:
        point_id = point_ids[0]
        # coords() returns the oval's bounding box, so compute the
        # offset from the current top-left corner to the new one
        current_x, current_y = self.plot.coords(point_id)[:2]
        self.plot.move(point_id, x0 - current_x, y0 - current_y)
    else:
        point = self.plot.create_oval(x0, y0, x1, y1,
                                      outline=fill,
                                      fill=fill,
                                      tag=tag)
The improvement is only slight, 10.4% vs. 11.6%.
The canvas has performance problems when many items are created (more specifically, when new object ids are created). Deleting objects doesn't help; the problem is the ever-increasing object ids, which are never reused. This problem usually doesn't appear until you have tens of thousands of items. If you're creating 256 per second, you'll start to bump into it in just a minute or two.
You can completely eliminate this overhead if you create 128 objects off screen once, and then simply move them around rather than destroying and recreating them.
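A minimal self-contained sketch of that approach (the canvas size, point count, and update function are illustrative, not from the original class):
import tkinter as tk

NUM_POINTS = 128
POINT_WIDTH = 2

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg='black')
canvas.pack()

# create every oval once, parked off screen; no new canvas ids are
# ever allocated after this point
points = [canvas.create_oval(-10, -10, -10 + 2 * POINT_WIDTH, -10 + 2 * POINT_WIDTH,
                             fill='green', outline='green')
          for _ in range(NUM_POINTS)]

def update_points(data):
    """data: list of (x, y) screen coordinates, one pair per point."""
    for item, (x, y) in zip(points, data):
        # coords() with arguments repositions the existing item in place
        canvas.coords(item, x - POINT_WIDTH, y - POINT_WIDTH,
                      x + POINT_WIDTH, y + POINT_WIDTH)

update_points([(i * 3, 150) for i in range(NUM_POINTS)])
root.mainloop()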

Get pixel location from mouse click in Tkinter

I'm quite new to Python and have been unsuccessful in finding a way around this problem. I have a GUI using Tkinter that displays an image in a Label. I would like the user to be able to click on two places in the image and use those two pixel locations elsewhere.
Below is the basic code I'm using so far, but I'm unable to return the pixel locations. I believe bind is not what I want to use; is there another option?
px = []
py = []

def onmouse(event):
    px.append(event.x)
    py.append(event.y)
    return px, py

self.ImgPanel.bind('<Button-1>', onmouse)
If I try to use:
px, py = self.ImgPanel.bind('<Button-1>', onmouse)
I get the error "too many values to unpack".
bind is what you want if you want to capture the x, y coordinates of the click. However, functions called from bindings don't "return". Technically they do, but the value goes to the internals of Tkinter.
What you need to do is set an instance or global variable within the bound function. In the code you included in your question, the handler already appends to the module-level px and py lists, so you can simply read those lists from other code after the clicks have happened.
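A minimal runnable sketch of the instance-variable version (the widget, names, and the two-click limit are illustrative):
import tkinter as tk

class ImageClicks:
    def __init__(self, root):
        self.clicks = []  # filled by the binding instead of being "returned"
        self.panel = tk.Label(root, text='click me twice', width=40, height=10,
                              relief='ridge')
        self.panel.pack(padx=10, pady=10)
        self.panel.bind('<Button-1>', self.onmouse)

    def onmouse(self, event):
        self.clicks.append((event.x, event.y))
        if len(self.clicks) == 2:
            # both points collected; other code can now read self.clicks
            print('selected pixels:', self.clicks)

root = tk.Tk()
app = ImageClicks(root)
root.mainloop()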

Hide unselected in Maya/Python

I've been trying for a while to get this sorted out in Maya:
I want a script which can hide my unselected lights, for example. The only way which comes to mind (and doesn't work) is this one:
lt = cmds.ls(lt=True, sl=False)
cmds.hide(lt)
I see that passing False to the selection argument doesn't work, so I want to find out about some other ways... thanks
@goncalops' answer will work if you select the light shapes, but not their transforms.
Try:
lights = cmds.ls(type='light') or []
lights = set(cmds.listRelatives(*lights, p=True) or [])
for item in lights.difference(set(cmds.ls(sl=True))):
    cmds.hide(item)
I think most of the answers go into over-engineering land. The question is how to hide the non-selected lights at the end of your operation; nothing says you cannot hide them all and then bring the selected lights back. So it is conceptually easier (and slightly faster, but that's beside the point) to do:
cmds.hide(cmds.ls(lights=True, dag=True))
cmds.showHidden()
One comment: there's no need to fetch the shapes separately in this case, as ls has the dag flag for this. Conceptually, Maya items are separate packets: a transform and a shape. However, converting between the two is such a common need that ls offers the dag and shapes flags for it.
Second comment: if you do not pass a list to a Maya command, it operates on the selection; that's why showHidden works without any data.
PS: conceptually, neither my answer nor @theodox's will work in all cases, as you MAY indeed have selected the shape. However, most users will not, so it will commonly work this way.
Reading the documentation for the ls command in Maya 2011, it doesn't seem to have either lt or sl parameters, although it has lights and selection.
Further, it seems the selection argument only serves the purpose of returning the selected objects, not of filtering out unselected ones.
OTOH, the hide method accepts a single argument.
Try this:
lights = set(cmds.ls(lights=True)) - set(cmds.ls(selection=True))
for light in lights:
    cmds.hide(light)
This will work for your condition:
hide_light = set(cmds.ls(lights=True, l=True)) - set(cmds.ls(sl=True, dag=True, l=True, leaf=True))
for each_lit in hide_light:
    cmds.setAttr("%s.visibility" % each_lit, 0)
Let's discuss the problem a bit:
There are a few things to consider. When users select a light (from the viewport or the Outliner), most of the time they are really selecting the transform node of the light.
When we perform cmds.ls(type='light'), we are actually getting the shape nodes. This is in line with what @theodox is saying.
I don't know about you, but when I hide lights manually, I select lights in Outliner/Viewport. When I hide them (ctrl-h), they grey out in the outliner. What I've done is hidden their transform nodes (not their shape nodes).
To make things more complicated, Maya actually lets us hide shape nodes too. But the transform node will not grey out when the shape node is hidden.
Imagine if my script were to hide the light shape nodes: in the Outliner there would be no indication that those lights are hidden, if the Outliner is not set to display shape nodes (the default setting). Without the greying-out to indicate that the lights are hidden, many artists, especially less experienced ones, would assume the lights are turned on when they have already been disabled and hidden. This is going to cause a lot of confusion, wasted time, and frustration; basically not what we want.
Thus when I write a script like this I'll expect the user to be selecting transform nodes. Also when I hide lights, I will hide the transform nodes of the lights instead of hiding the light shapes directly. That would be my game plan.
import maya.cmds as mc

def hideDeselected(targetNodeType):
    # selectedNodeTransforms will contain the transform nodes
    # of all target node type shapes that are selected
    selectedNodeTransforms = []
    for selNode in mc.ls(sl=True):
        if targetNodeType in mc.nodeType(selNode):
            # selected node is the correct type;
            # add its transform node (first and only parent)
            # to selectedNodeTransforms
            selectedNodeTransforms.append(mc.listRelatives(selNode, parent=True)[0])
        elif mc.listRelatives(selNode, children=True, type=targetNodeType):
            # selected node is a transform node
            # with a child node of the correct type;
            # add the transform node to selectedNodeTransforms
            selectedNodeTransforms.append(selNode)
    if selectedNodeTransforms:
        # only if something is selected, do the hiding thing.
        # If we do not do this check, and nothing is selected,
        # all transform nodes of targetNodeType will be hidden
        print('selected objects: {}'.format(selectedNodeTransforms))
        for thisNode in mc.ls(type=targetNodeType):
            # loop through all target shapes in the scene
            # and get the transform node
            thisNodeTransform = mc.listRelatives(thisNode, parent=True)[0]
            if thisNodeTransform not in selectedNodeTransforms:
                print('hiding {}'.format(thisNodeTransform))
                mc.hide(thisNodeTransform)
    else:
        print('nothing is selected')

hideDeselected('light')
In the code above, I've made it a function so we can pass in any DAG node type that can have a parent in the scene, and the code will work.
Thus, to hide all the other cameras in the scene that are not currently selected, we just have to call the function with the camera node type:
hideDeselected('camera')
