Rotation between keyframes - python

I need to fully rotate the camera around an object, starting at frame 1 and ending at frame 1147. To interpolate automatically I need to use keyframes. How do I insert keyframes at frames 1 and 1147 and rotate the camera between them using a Python script? Any help would be appreciated.

An easy way to rotate a camera around an object is to add an empty at the same location as the object of attention, parent the camera to the empty, use a Track To constraint to keep the camera pointed at the object, and then rotate the empty.
This can be done in Python as:
import bpy
import math

scene = bpy.context.scene
cam = scene.camera

# add an empty at the location of the object of interest
bpy.ops.object.empty_add()
target = bpy.context.active_object
target.name = 'focus point'
target.location = bpy.data.objects['focusObj'].location

# parent the camera to the empty and keep it aimed at the target
cam.parent = target
tc = cam.constraints.new(type='TRACK_TO')
tc.target = target
tc.up_axis = 'UP_Y'
tc.track_axis = 'TRACK_NEGATIVE_Z'

# keyframe a full turn between frames 1 and 1147
scene.frame_current = 1
target.rotation_euler = (0, 0, 0)
target.keyframe_insert(data_path="rotation_euler")
scene.frame_current = 1147
target.rotation_euler = (0, 0, math.radians(360))
target.keyframe_insert(data_path="rotation_euler")

for fc in target.animation_data.action.fcurves:
    fc.extrapolation = 'LINEAR'
    for kp in fc.keyframe_points:
        kp.interpolation = 'LINEAR'
You will need to adjust the name "focusObj".
By setting the interpolation to linear you get a constant rotation speed, rather than an ease-in and ease-out at the start and end. Setting the extrapolation to linear means the rotation will continue endlessly.
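That constant-speed behaviour is just linear interpolation of the angle over the frame range; a standalone sketch (no Blender required, `lerp_rotation` is a hypothetical helper) of what the linear f-curve evaluates to:

```python
import math

def lerp_rotation(frame, start=1, end=1147, full_turn=2 * math.pi):
    # Linear interpolation: constant angular speed between the two keyframes
    t = (frame - start) / (end - start)
    return t * full_turn
```

Frame 574 is the exact midpoint of 1..1147, so it evaluates to half a turn.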

Related

Vedo: Is there a way to add a camera to scenes and see images from perspective?

I'm using Vedo in Python to visualize some 3D scans of indoor locations.
I would like to, e.g., add a 'camera' at (0,0,0), look left 90 degrees (or wherever), and see the camera's output.
Can this be done with Vedo? If not, is there a different python programming framework where I can open .obj files and add a camera and view through it programmatically?
I usually use this scheme:
...
plt = Plotter(bg='bb', interactive=False)
camera = plt.camera
plt.show(actors, axes=4, viewup='y')
for i in range(360):
    camera.Azimuth(1)
    camera.Roll(-1)
    plt.render()
...
plt.interactive().close()
Good Luck
You can plot the same object in an embedded renderer and control its behaviour via a simple callback function:
from vedo import *
settings.immediateRendering = False # can be faster for multi-renderers
# (0,0) is the bottom-left corner of the window, (1,1) the top-right
# the order in the list defines the priority when overlapping
custom_shape = [
    dict(bottomleft=(0.00, 0.00), topright=(1.00, 1.00), bg='wheat', bg2='w'),   # ren0
    dict(bottomleft=(0.01, 0.01), topright=(0.15, 0.30), bg='blue3', bg2='lb'),  # ren1
]
plt = Plotter(shape=custom_shape, size=(1600,800), sharecam=False)
s = ParametricShape(0) # whatever object to be shown
plt.show(s, 'Renderer0', at=0)
plt.show(s, 'Renderer1', at=1)
def update(event):
    cam = plt.renderers[1].GetActiveCamera()  # vtkCamera of renderer1
    cam.Azimuth(1)  # add one degree in azimuth

plt.addCallback("Interaction", update)
interactive()
Check out a related example here.
Check out the vtkCamera object methods here.

How to rotate a camera around obj wavefront file contents?

I have an .obj file and do not know its contents' bounding box beforehand. I want to load it into Blender and rotate the camera around it over K frames (e.g. 15 frames). How can I do such a thing with the Blender Python API?
A common way to do an object turnaround is to add an empty and make it the parent of the camera; animating the z-rotation of the empty will then rotate the camera around the object. You can give the camera a Track To constraint so that it always points at the target object.
You can use the objects bound_box to find its outer limits, then add a bit more so the object stays inside the view and position the camera with that. Making the extra distance proportional to the object size should work for most objects.
The addon I made for this answer shows how to make a bounding box around multiple objects, which may be helpful if you have multiple objects at once.
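The bound-box positioning described above boils down to taking the per-axis maxima of the eight corner points and scaling them out by a factor; a standalone sketch on plain tuples (`camera_position` is a hypothetical helper, the 2.5 factor matches the script below):

```python
def camera_position(bound_box, factor=2.5):
    # bound_box: the 8 corner points of an object's bounding box, as
    # (x, y, z) tuples (the shape of Blender's Object.bound_box)
    return tuple(max(corner[i] for corner in bound_box) * factor
                 for i in range(3))
```

For a unit cube centred at the origin this places the camera at (2.5, 2.5, 2.5), comfortably outside the geometry.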
To do that in Python:
import bpy
from math import radians

scn = bpy.context.scene
bpy.ops.import_scene.obj(filepath='obj1.obj')
target = bpy.context.selected_objects[0]
scn.objects.active = target
# centring the origin gives a better bounding box and rotation point
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')
cam_x_pos = max([v[0] for v in target.bound_box]) * 2.5
cam_y_pos = max([v[1] for v in target.bound_box]) * 2.5
cam_z_pos = max([v[2] for v in target.bound_box]) * 2.5
rot_centre = bpy.data.objects.new('rot_centre', None)
scn.objects.link(rot_centre)
rot_centre.location = target.location
camera = bpy.data.objects.new('camera', bpy.data.cameras.new('camera'))
scn.objects.link(camera)
camera.location = (cam_x_pos, cam_y_pos, cam_z_pos)
camera.parent = rot_centre
m = camera.constraints.new('TRACK_TO')
m.target = target
m.track_axis = 'TRACK_NEGATIVE_Z'
m.up_axis = 'UP_Y'
rot_centre.rotation_euler.z = 0.0
rot_centre.keyframe_insert('rotation_euler', index=2, frame=1)
rot_centre.rotation_euler.z = radians(360.0)
rot_centre.keyframe_insert('rotation_euler', index=2, frame=101)
# set linear interpolation for constant rotation speed
for c in rot_centre.animation_data.action.fcurves:
    for k in c.keyframe_points:
        k.interpolation = 'LINEAR'
scn.frame_end = 100
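The script above hardcodes a 100-frame turnaround; adapting it to the K frames asked about in the question is simple arithmetic on the keyframe placement (a sketch; `turnaround_keys` is a hypothetical helper):

```python
from math import radians

def turnaround_keys(k_frames, total_degrees=360.0):
    # Keyframe at frame 1 with angle 0, and at frame k_frames + 1 with a
    # full turn; playing frames 1..k_frames then loops seamlessly, since
    # frame k_frames + 1 would repeat the pose of frame 1
    return [(1, 0.0), (k_frames + 1, radians(total_degrees))]
```

For the 15-frame example, the keys land at frames 1 and 16, with `scn.frame_end` set to 15.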

how to find Y face of the cube in Maya with Python

Sorry for such a specific question, guys; I think only people with knowledge of Maya will answer. In Maya I have cubes of different sizes, and I need to find with Python which face of a cube is pointing down the Y axis (the pivot is at the center). Any tips will be appreciated.
Thanks a lot :)
import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'
# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))
# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. This will select all the faces in a mesh that point along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x,y,z axis you want and tolerance is the slop in degrees you'll tolerate. To get the downward faces you'd do
select_faces_by_axis ('your_mesh_here', (0,0,-1))
or
select_faces_by_axis ('your_mesh_here', (0,0,-1), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
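What the orient constraint tests amounts to comparing the angle between a face normal and the chosen axis against the tolerance; a pure-Python sketch of that check (`within_tolerance` is a hypothetical helper, not part of the Maya API):

```python
import math

def within_tolerance(normal, axis, tolerance_deg=45.0):
    # Angle in degrees between the face normal and the axis,
    # compared against the tolerance
    dot = sum(n * a for n, a in zip(normal, axis))
    norm = (math.sqrt(sum(n * n for n in normal))
            * math.sqrt(sum(a * a for a in axis)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    return angle <= tolerance_deg
```

A straight-down normal passes against a downward axis; an upward one does not.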
With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)

maya python iterating a big number of vertex

I am writing a script in python for maya to swap vertex position from one side to another.
Since I want the flipping to be topology based I am using the topological symmetry selection tool to find the vertex correspondence.
I managed to do that using filterExpand and xform.
The problem is that it is quite slow on a large poly count mesh and I was wondering how this could be done using openMaya instead.
import maya.cmds as cmds

def flipMesh():
    sel = cmds.ls(sl=1)
    axis = {'x': 0, 'y': 1, 'z': 2}
    reverse = [1.0, 1.0, 1.0]
    # querying the active symmetry axis
    activeaxis = cmds.symmetricModelling(q=1, axis=1)
    reverse[axis[activeaxis]] = -1.0
    # getting the vertex count
    verts = cmds.polyEvaluate(v=1)
    # selecting all vertices
    cmds.select(sel[0] + '.vtx[0:' + str(verts) + ']')
    # getting all the positive vertices
    posit = cmds.filterExpand(sm=31, ex=1, smp=1)
    seam = cmds.filterExpand(sm=31, ex=1, sms=1)
    # swapping position on the positive side with the negative side
    for pos in posit:
        cmds.select(pos, sym=True)
        neg = cmds.filterExpand(sm=31, ex=1, smn=1)
        posT = cmds.xform(pos, q=1, t=1)
        negT = cmds.xform(neg[0], q=1, t=1)
        cmds.xform(pos, t=[a * b for a, b in zip(negT, reverse)])
        cmds.xform(neg[0], t=[a * b for a, b in zip(posT, reverse)])
    # inverting position on the seam
    for each in seam:
        seamP = cmds.xform(each, q=1, t=1)
        seaminvP = [a * b for a, b in zip(seamP, reverse)]
        cmds.xform(each, t=seaminvP)
    cmds.select(sel)
Thanks
Maurizio
You can try out OpenMaya.MFnMesh to get and set your vertices.
Here's an example that will simply mirror all points of a selected object along their z axis:
import maya.OpenMaya as OpenMaya

# Get selected object
mSelList = OpenMaya.MSelectionList()
OpenMaya.MGlobal.getActiveSelectionList(mSelList)
sel = OpenMaya.MItSelectionList(mSelList)
path = OpenMaya.MDagPath()
sel.getDagPath(path)

# Attach to MFnMesh
MFnMesh = OpenMaya.MFnMesh(path)

# Create empty point array to store new points
newPointArray = OpenMaya.MPointArray()

for i in range(MFnMesh.numVertices()):
    # Create a point, and mirror it
    newPoint = OpenMaya.MPoint()
    MFnMesh.getPoint(i, newPoint)
    newPoint.z = -newPoint.z
    newPointArray.append(newPoint)

# Set new points to mesh all at once
MFnMesh.setPoints(newPointArray)
Instead of moving them one at a time, you can use MFnMesh.setPoints to set them all at once. You'll have to adapt your logic to this, but hopefully it helps you manipulate meshes with Maya's API. I should also note that you will have to resolve normals afterwards.
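The mirroring itself is independent of the API; on plain tuples, building the whole replacement point list in one pass looks like this (a sketch; `mirror_points_z` is a hypothetical helper):

```python
def mirror_points_z(points):
    # Build the full replacement point list in a single pass, mirroring z,
    # analogous to filling the MPointArray before one setPoints call
    return [(x, y, -z) for (x, y, z) in points]
```

The per-axis `reverse` multiplier from the question's script generalizes this to whichever symmetry axis is active.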

Convert VTK to raster image (Ruby or Python)

I have the results of a simulation on an unstructured 2D mesh. I usually export the results in VTK and visualize them with ParaView.
I would like to obtain a raster image from the results (with or without interpolation) to use it as a texture for visualization in a 3D software. From reading around I have gathered that I need to do some kind of resampling in order to convert from the unstructured grid to a 2d regular grid for the raster image.
VTK can export to raster, but it exports only a full scene without any defined boundary so it requires manual tweaking to fit the image.
Ideally I would like to export only the results within the results bounding box and 'map' them to a raster image programmatically with Ruby or Python.
This script uses ParaView and creates an image perfectly centered and scaled so that it can be used as a texture. Notice the 855 value for the vertical size: it seems to be related to the screen resolution and, according to the ParaView mailing list, is needed only on OS X.
It should be run with the ParaView Python interpreter pvbatch.
import sys, json
#### import the simple module from the paraview
from paraview.simple import *
#### disable automatic camera reset on 'Show'
paraview.simple._DisableFirstRenderCameraReset()
args = json.loads(sys.argv[1])
# create a new 'Legacy VTK Reader'
vtk_file = args["file"]
data = LegacyVTKReader(FileNames=[vtk_file])
# get active view
renderView1 = GetActiveViewOrCreate('RenderView')
# uncomment following to set a specific view size
xc = float(args["center"][0])
yc = float(args["center"][1])
zc = float(args["center"][2])
width = float(args["width"])
height = float(args["height"])
output_file = args["output_file"]
scalar = args["scalar"]
colormap_min = float(args["colormap_min"])
colormap_max = float(args["colormap_max"])
ratio = height / width
magnification = 2
height_p = 855 * magnification
width_p = int(height_p * 1.0 / ratio / magnification)
renderView1.ViewSize = [width_p , height_p]
# show data in view
dataDisplay = Show(data, renderView1)
# trace defaults for the display properties.
dataDisplay.ColorArrayName = ['CELLS', scalar]
# set scalar coloring
ColorBy(dataDisplay, ('CELLS', scalar))
# rescale color and/or opacity maps used to include current data range
dataDisplay.RescaleTransferFunctionToDataRange(True)
# get color transfer function/color map for 'irradiation'
irradiationLUT = GetColorTransferFunction(scalar)
# Rescale transfer function
irradiationLUT.RescaleTransferFunction(colormap_min, colormap_max)
irradiationLUT.LockDataRange = 1
irradiationLUT.ColorSpace = 'RGB'
irradiationLUT.NanColor = [0.498039, 0.0, 0.0]
#changing interaction mode based on data extents
renderView1.InteractionMode = '2D'
renderView1.CameraPosition = [xc, yc, 10000.0 + zc]
renderView1.CameraFocalPoint = [xc, yc, zc]
# hide color bar/color legend
dataDisplay.SetScalarBarVisibility(renderView1, False)
# current camera placement for renderView1
renderView1.InteractionMode = '2D'
#renderView1.CameraPosition = [3.641002, 197.944122, 10001.75]
#renderView1.CameraFocalPoint = [3.641002, 197.944122, 1.75]
renderView1.CameraParallelScale = (height / 2.0)
# save screenshot
SaveScreenshot(output_file, magnification=magnification, quality=100, view=renderView1)
I have a DIY solution. Usually, I do as follows:
Open my mesh as a polygon layer in QGIS and do the following:
calculate mesh centroids in QGIS (Vector/Geometry Tools/Polygon Centroids)
right click on the newly created layer, select Save As, select CSV format and under Layer options/GEOMETRY select xy or xyz
Then, with a simple Python script, I associate the VTK data (e.g. water depth) to the centroids (be aware that ParaView numbers the nodes with a -1 offset with respect to QGIS, so node 2 in ParaView is node 3 in QGIS).
Finally, back in QGIS, I interpolate a raster from the vector points, e.g. with the GRASS GIS module v.to.rast.attribute.
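The association step can be sketched with nothing but the csv module; the 'id' column name is an assumption (use whatever your QGIS export actually produced), and the -1 offset is the ParaView/QGIS numbering mismatch noted above:

```python
import csv

def attach_values(centroid_csv, vtk_values):
    # Join a per-node VTK value (e.g. water depth) onto QGIS centroid rows.
    # Assumes an 'id' column holding the 1-based QGIS node number; ParaView
    # data is 0-based, hence the -1 offset.
    rows = []
    with open(centroid_csv, newline="") as f:
        for row in csv.DictReader(f):
            row["value"] = vtk_values[int(row["id"]) - 1]
            rows.append(row)
    return rows
```

Each returned row keeps its x/y coordinates from the export plus the joined value, ready to be re-imported as a point layer.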
