How to rotate a camera around Wavefront .obj file contents? - python

I have an .obj file. I do not know its contents' bounding box beforehand. I want to load it into Blender and rotate the camera around it over K frames (e.g. 15 frames). How can I do this in Blender using the Python API?

A common way to do an object turnaround is to add an empty and make it the parent of the camera; animating the z-rotation of the empty will then rotate the camera around the object. You can give the camera a Track To constraint so that the camera always points at the target object.
You can use the object's bound_box to find its outer limits, then add a bit more so the object stays inside the view, and position the camera with that. Making the extra distance proportional to the object size should work for most objects.
The addon I made for this answer shows how to make a bounding box around multiple objects, which may be helpful if you have multiple objects at once.
To do that in Python:
import bpy
from math import radians

scn = bpy.context.scene

bpy.ops.import_scene.obj(filepath='obj1.obj')
target = bpy.context.selected_objects[0]
scn.objects.active = target
# centring the origin gives a better bounding box and rotation point
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')

# place the camera a bit outside the bounding box on each axis
cam_x_pos = max([v[0] for v in target.bound_box]) * 2.5
cam_y_pos = max([v[1] for v in target.bound_box]) * 2.5
cam_z_pos = max([v[2] for v in target.bound_box]) * 2.5

# empty that the camera is parented to and that gets rotated
rot_centre = bpy.data.objects.new('rot_centre', None)
scn.objects.link(rot_centre)
rot_centre.location = target.location

camera = bpy.data.objects.new('camera', bpy.data.cameras.new('camera'))
scn.objects.link(camera)
camera.location = (cam_x_pos, cam_y_pos, cam_z_pos)
camera.parent = rot_centre

# keep the camera pointed at the imported object
m = camera.constraints.new('TRACK_TO')
m.target = target
m.track_axis = 'TRACK_NEGATIVE_Z'
m.up_axis = 'UP_Y'

# keyframe a full 360 degree rotation of the empty
rot_centre.rotation_euler.z = 0.0
rot_centre.keyframe_insert('rotation_euler', index=2, frame=1)
rot_centre.rotation_euler.z = radians(360.0)
rot_centre.keyframe_insert('rotation_euler', index=2, frame=101)

# set linear interpolation for constant rotation speed
for c in rot_centre.animation_data.action.fcurves:
    for k in c.keyframe_points:
        k.interpolation = 'LINEAR'

scn.frame_end = 100
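The question mentions a K-frame turnaround (e.g. 15 frames); only the keyframe section of the script above needs to change for that. A small sketch reusing the names from the script, with K as a placeholder value:
K = 15  # number of frames for the full turnaround (placeholder)
rot_centre.rotation_euler.z = 0.0
rot_centre.keyframe_insert('rotation_euler', index=2, frame=1)
rot_centre.rotation_euler.z = radians(360.0)
# frame K + 1 matches frame 1, so ending playback at frame K avoids a duplicate frame
rot_centre.keyframe_insert('rotation_euler', index=2, frame=K + 1)
scn.frame_end = K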

Related

Vedo: Is there a way to add a camera to scenes and see images from perspective?

I'm using Vedo in Python to visualize some 3D scans of indoor locations.
I would like to, e.g., add a 'camera' at (0,0,0), look left 90 degrees (or wherever), and see the camera's output.
Can this be done with Vedo? If not, is there a different python programming framework where I can open .obj files and add a camera and view through it programmatically?
I usually use this scheme:
...
plt = Plotter(bg='bb', interactive=False)
camera = plt.camera
plt.show(actors, axes=4, viewup='y')
for i in range(360):
    camera.Azimuth(1)
    camera.Roll(-1)
    plt.render()
...
plt.interactive().close()
Good Luck
You can plot the same object in an embedded renderer and control its behaviour via a simple callback function:
from vedo import *
settings.immediateRendering = False # can be faster for multi-renderers
# (0,0) is the bottom-left corner of the window, (1,1) the top-right
# the order in the list defines the priority when overlapping
custom_shape = [
    dict(bottomleft=(0.00,0.00), topright=(1.00,1.00), bg='wheat', bg2='w' ),  # ren0
    dict(bottomleft=(0.01,0.01), topright=(0.15,0.30), bg='blue3', bg2='lb'),  # ren1
]
plt = Plotter(shape=custom_shape, size=(1600,800), sharecam=False)
s = ParametricShape(0)  # whatever object to be shown
plt.show(s, 'Renderer0', at=0)
plt.show(s, 'Renderer1', at=1)

def update(event):
    cam = plt.renderers[1].GetActiveCamera()  # vtkCamera of renderer1
    cam.Azimuth(1)                            # add one degree in azimuth

plt.addCallback("Interaction", update)
interactive()
Check out a related example here.
Check out the vtkCamera object methods here.
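If you specifically want to place the camera at (0,0,0) and turn it 90 degrees to the left rather than orbiting it, here is a minimal sketch using the vtkCamera methods mentioned above (the .obj file name and output path are placeholders):
from vedo import Mesh, Plotter

scan = Mesh('indoor_scan.obj')          # vedo reads Wavefront .obj files
plt = Plotter(interactive=False)
plt.show(scan, resetcam=False)          # keep the manual camera placement below

cam = plt.camera                        # the underlying vtkCamera
cam.SetPosition(0, 0, 0)
cam.SetFocalPoint(0, 0, -1)             # start by looking down -Z
cam.SetViewUp(0, 1, 0)
cam.Yaw(90)                             # turn the view 90 degrees to the left
plt.renderer.ResetCameraClippingRange() # avoid clipping after the manual move
plt.render()

plt.screenshot('camera_view.png')       # save what this camera sees
plt.interactive().close()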

How to find the Y face of a cube in Maya with Python

Sorry for such a specific question, guys; I think only people with knowledge of Maya will answer, though. In Maya I have cubes of different sizes and I need to find with Python which face of a cube is pointing down along the Y axis (the pivot is at the center). Any tips will be appreciated.
Thanks a lot :)
import re

from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'

# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with the matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if the vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. This will select all the faces in a mesh that are pointing along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x, y, z direction you want and tolerance is the slop in degrees you'll tolerate. To get the downward-facing faces (negative Y, since Maya's default up axis is Y) you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
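If you need the face names afterwards, you can read them back from the selection the constraint leaves behind (a small follow-up sketch, not part of the original function):
select_faces_by_axis('your_mesh_here', (0, -1, 0))   # select the downward faces
down_faces = cmds.ls(selection=True, flatten=True)   # e.g. ['your_mesh_here.f[3]']
print(down_faces)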
With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)
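If you only want the single face whose world-space normal points most strongly down the Y axis (which is what the question asks for), a small variation on the same idea, sketched:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
# the face with the most negative world-space Y component of its normal
down_face = min(n.faces, key=lambda f: f.getNormal(space='world')[1])
pm.select(down_face)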

How to find face neighbours in Maya?

I have a problem where I need to select faces that are next to one pre-selected face.
This may be done easily but the problem is that when I get a neighbour face I need to know in which direction it is facing.
So now I am able to select faces which are connected with an edge but I can't get the face that is for example left or right from the first selected face. I have tried multiple approaches but can't find the solution.
I tried with:
pickWalk - cmds.pickWalk() - the problem with this is that its behaviour can't be predicted, since it walks the mesh from the camera perspective.
polyInfo - cmds.polyInfo() - this is a very useful function and the closest to an answer. In this approach I try to extract the edges from a face and then see which faces are neighbours via those edges with edgeToFace(). This works well but doesn't solve my problem. To elaborate, when polyInfo returns faces that share edges, it doesn't return them in a way that always tells me that edgesList[0] (for example) is the edge that points left or right. Hence if I use this on different faces, the resulting face may be facing a different direction in each case.
The hard way, with many conversions from vertex to edge and then to face, etc. But it is still the same problem: I don't know which edge is the top or left one.
The connectedFaces() method, which I call on the selected face; it returns the faces connected to the first face, but it is still the same problem: I don't know which face is facing which way.
To be clear, I'm not using a pre-selected list of faces and checking them; I need to find the faces without knowing or keeping their names somewhere. Does someone know a way that works with a selection of faces?
To elaborate my question I made an image to make it clear:
As you can see from the example, if there is a selected face I need to be able to select any one of the pointed-to faces, and it must be exactly the face I ask for. Other methods select all neighbour faces, but I need a method where I can say "select right" and it will select the face to the right of the first selected face.
This is one solution that would be fairly consistent under the rule that up/down/left/right is aligned with the mesh's transformation (local space), though it could be world space too.
The first thing I would do is build a face relative coordinate system for every mesh face using the average face vertex position, face normal, and world space Y axis of the mesh's transformation. This involves a little vector math, so I will use the API to make this easier. This first part will make a coordinate system for each face that we will store into lists for future querying. See below.
from maya import OpenMaya, cmds

meshTransform = 'polySphere'
meshShape = cmds.listRelatives(meshTransform, c=True)[0]
meshMatrix = cmds.xform(meshTransform, q=True, ws=True, matrix=True)
primaryUp = OpenMaya.MVector(*meshMatrix[4:7])
# have a secondary up vector for faces that are facing the same way as the original up
secondaryUp = OpenMaya.MVector(*meshMatrix[8:11])

sel = OpenMaya.MSelectionList()
sel.add(meshShape)
meshObj = OpenMaya.MObject()
sel.getDependNode(0, meshObj)

meshPolyIt = OpenMaya.MItMeshPolygon(meshObj)
faceNeighbors = []
faceCoordinates = []

while not meshPolyIt.isDone():
    normal = OpenMaya.MVector()
    meshPolyIt.getNormal(normal)
    # use the secondary up if the normal is facing the same direction as the object Y
    up = primaryUp if (1 - abs(primaryUp * normal)) > 0.001 else secondaryUp
    center = meshPolyIt.center()

    faceArray = OpenMaya.MIntArray()
    meshPolyIt.getConnectedFaces(faceArray)
    faceNeighbors.append([faceArray[i] for i in range(faceArray.length())])

    # build a face-local coordinate system from the normal, the up vector and the centre
    xAxis = up ^ normal
    yAxis = normal ^ xAxis
    matrixList = [xAxis.x, xAxis.y, xAxis.z, 0,
                  yAxis.x, yAxis.y, yAxis.z, 0,
                  normal.x, normal.y, normal.z, 0,
                  center.x, center.y, center.z, 1]
    faceMatrix = OpenMaya.MMatrix()
    OpenMaya.MScriptUtil.createMatrixFromList(matrixList, faceMatrix)
    faceCoordinates.append(faceMatrix)

    meshPolyIt.next()
These functions will look up and return the face that lies next to the given one in a particular direction (X and Y) relative to that face. A dot product is used to see which neighbour lies most in that particular direction. This should work with any number of neighbouring faces, but it will only return the one face that lies most in the requested direction.
def getUpFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, 1, 0))

def getDownFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, -1, 0))

def getRightFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(1, 0, 0))

def getLeftFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(-1, 0, 0))

def getDirectionalFace(faceIndex, axis):
    faceMatrix = faceCoordinates[faceIndex]
    closestDotProd = -1.0
    nextFace = -1
    for n in faceNeighbors[faceIndex]:
        # neighbour centre expressed in the local space of the given face
        nMatrix = faceCoordinates[n] * faceMatrix.inverse()
        nVector = OpenMaya.MVector(nMatrix(3, 0), nMatrix(3, 1), nMatrix(3, 2))
        dp = nVector * axis
        if dp > closestDotProd:
            closestDotProd = dp
            nextFace = n
    return nextFace
So you would call it like this:
getUpFace(123)
where the number is the index of the face you start from; the function returns the index of the face that is "up" from it.
Give this a try and see if it satisfies your needs.
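For example, to turn the returned index back into an actual face selection on the same mesh (a small follow-up sketch using the variables defined above):
up_index = getUpFace(123)
if up_index != -1:
    cmds.select('%s.f[%d]' % (meshTransform, up_index))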
Another option is polyListComponentConversion:
import pprint
init_face = cmds.ls(sl=True)
#get edges
edges = cmds.polyListComponentConversion(init_face, ff=True, te=True)
#get neighbour faces
faces = cmds.polyListComponentConversion(edges, fe=True, tf=True, bo=True)
# show neighbour faces
cmds.select(faces)
# print face normal of each neighbour face
pprint.pprint(cmds.polyInfo(faces, fn=True))
The easiest way of doing this is using Pymel's connectedFaces() on the MeshFace:
http://download.autodesk.com/us/maya/2011help/pymel/generated/classes/pymel.core.general/pymel.core.general.MeshFace.html
import pymel.core as pm
sel = pm.ls(sl=True)[0]
pm.select(sel.connectedFaces())

Convert VTK to raster image (Ruby or Python)

I have the results of a simulation on an unstructured 2D mesh. I usually export the results in VTK and visualize them with Paraview. This is what results look like.
I would like to obtain a raster image from the results (with or without interpolation) to use it as a texture for visualization in a 3D software. From reading around I have gathered that I need to do some kind of resampling in order to convert from the unstructured grid to a 2d regular grid for the raster image.
VTK can export to raster, but it exports only a full scene without any defined boundary so it requires manual tweaking to fit the image.
Ideally I would like to export only the results within the results bounding box and 'map' them to a raster image programmatically with Ruby or Python.
This script uses ParaView and creates an image perfectly centered and scaled so that it can be used as a texture. Notice the 855 value for the vertical size: it seems to be related to the screen resolution, and according to the ParaView mailing list it is needed only on OS X.
It should be run with the ParaView Python interpreter, pvbatch.
import sys, json
#### import the simple module from the paraview
from paraview.simple import *
#### disable automatic camera reset on 'Show'
paraview.simple._DisableFirstRenderCameraReset()
args = json.loads(sys.argv[1])
# create a new 'Legacy VTK Reader'
vtk_file = args["file"]
data = LegacyVTKReader(FileNames=[vtk_file])
# get active view
renderView1 = GetActiveViewOrCreate('RenderView')
# uncomment following to set a specific view size
xc = float(args["center"][0])
yc = float(args["center"][1])
zc = float(args["center"][2])
width = float(args["width"])
height = float(args["height"])
output_file = args["output_file"]
scalar = args["scalar"]
colormap_min = float(args["colormap_min"])
colormap_max = float(args["colormap_max"])
ratio = height / width
magnification = 2
height_p = 855 * magnification
width_p = int(height_p * 1.0 / ratio / magnification)
renderView1.ViewSize = [width_p , height_p]
# show data in view
dataDisplay = Show(data, renderView1)
# trace defaults for the display properties.
dataDisplay.ColorArrayName = ['CELLS', scalar]
# set scalar coloring
ColorBy(dataDisplay, ('CELLS', scalar))
# rescale color and/or opacity maps used to include current data range
dataDisplay.RescaleTransferFunctionToDataRange(True)
# get color transfer function/color map for 'irradiation'
irradiationLUT = GetColorTransferFunction(scalar)
# Rescale transfer function
irradiationLUT.RescaleTransferFunction(colormap_min, colormap_max)
irradiationLUT.LockDataRange = 1
irradiationLUT.ColorSpace = 'RGB'
irradiationLUT.NanColor = [0.498039, 0.0, 0.0]
#changing interaction mode based on data extents
renderView1.InteractionMode = '2D'
renderView1.CameraPosition = [xc, yc, 10000.0 + zc]
renderView1.CameraFocalPoint = [xc, yc, zc]
# hide color bar/color legend
dataDisplay.SetScalarBarVisibility(renderView1, False)
# current camera placement for renderView1
renderView1.InteractionMode = '2D'
#renderView1.CameraPosition = [3.641002, 197.944122, 10001.75]
#renderView1.CameraFocalPoint = [3.641002, 197.944122, 1.75]
renderView1.CameraParallelScale = (height / 2.0)
# save screenshot
SaveScreenshot(output_file, magnification=magnification, quality=100, view=renderView1)
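The script reads a single JSON string from its first argument, so an invocation could look like the following sketch; the script name, file names and values are placeholders, and the keys match the args parsed at the top of the script:
import json
import subprocess

args = {
    "file": "results.vtk",
    "center": [0.0, 0.0, 0.0],
    "width": 10.0,
    "height": 8.0,
    "output_file": "texture.png",
    "scalar": "irradiation",
    "colormap_min": 0.0,
    "colormap_max": 1000.0,
}
subprocess.check_call(["pvbatch", "render_texture.py", json.dumps(args)])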
I have a DIY solution. Usually I open my mesh as a polygon layer in QGIS and do the following:
1. calculate the mesh centroids in QGIS (Vector/Geometry Tools/Polygon Centroids)
2. right click on the newly created layer, select Save As, select the CSV format and under Layer options/GEOMETRY select xy or xyz
3. with a simple Python script, associate the VTK data (e.g. water depth) with the centroids (be aware that ParaView numbers the nodes with a -1 offset with respect to QGIS, so node 2 in ParaView is node 3 in QGIS) - see the sketch below
4. eventually, again in QGIS, interpolate a raster from the vector points, e.g. with the GRASS GIS module v.to.rast.attribute
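The association step could look something like this minimal sketch, assuming the ParaView cell values were exported as a CSV (File > Save Data) and the QGIS centroid layer was saved as a CSV with an id column; all file and column names here are hypothetical:
import csv

# cell values exported from ParaView, one row per cell (hypothetical 'depth' column)
with open('cell_values.csv') as f:
    values = [float(row['depth']) for row in csv.DictReader(f)]

# centroids exported from QGIS with columns id, X, Y (hypothetical names)
rows_out = []
with open('centroids.csv') as f:
    for row in csv.DictReader(f):
        qgis_id = int(row['id'])
        # ParaView numbering is offset by -1 with respect to QGIS
        row['depth'] = values[qgis_id - 1]
        rows_out.append(row)

with open('centroids_with_depth.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=list(rows_out[0].keys()))
    writer.writeheader()
    writer.writerows(rows_out)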

Rotation between keyframes

I need to do a full rotation of the camera around some object, starting at frame 1 and ending at frame 1147. To interpolate automatically, I need to use keyframes. How do I insert keyframes at frames 1 and 1147 and rotate the camera between these keyframes using a Python script? Any help would be appreciated.
An easy way to rotate a camera around an object is to add an empty at the same location as the object of attention, parent the camera to the empty, use a Track To constraint to keep the camera pointed at the object, and then rotate the empty.
This can be done in Python as:
import bpy
import math

scene = bpy.context.scene
cam = scene.camera

# empty used as the rotation pivot, placed at the object of attention
bpy.ops.object.empty_add()
target = bpy.context.active_object
target.name = 'focus point'
target.location = bpy.data.objects['focusObj'].location
cam.parent = target

# keep the camera pointed at the empty
tc = cam.constraints.new(type='TRACK_TO')
tc.target = target
tc.up_axis = 'UP_Y'
tc.track_axis = 'TRACK_NEGATIVE_Z'

# keyframe a full rotation between frames 1 and 1147
scene.frame_current = 1
target.rotation_euler = (0, 0, 0)
target.keyframe_insert(data_path="rotation_euler")
scene.frame_current = 1147
target.rotation_euler = (0, 0, math.radians(360))
target.keyframe_insert(data_path="rotation_euler")

for fc in target.animation_data.action.fcurves:
    fc.extrapolation = 'LINEAR'
    for kp in fc.keyframe_points:
        kp.interpolation = 'LINEAR'
You will need to adjust the name "focusObj".
By setting the interpolation to linear you will get a constant rotation speed, not an ease in and out at the start and end. Setting the extrapolation to linear means it will continue to rotate endlessly.
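If you also want the scene's playback range to match the keyframes (and optionally render the turnaround), a small follow-up sketch reusing the scene variable from the script above; the output path is a placeholder:
scene.frame_start = 1
scene.frame_end = 1147
# scene.render.filepath = '//turnaround_'   # hypothetical output path prefix
# bpy.ops.render.render(animation=True)     # render every frame of the animation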
