I have the results of a simulation on an unstructured 2D mesh. I usually export the results in VTK and visualize them with ParaView. This is what the results look like.
I would like to obtain a raster image from the results (with or without interpolation) to use as a texture for visualization in 3D software. From reading around I have gathered that I need to do some kind of resampling to convert from the unstructured grid to a 2D regular grid for the raster image.
VTK can export to raster, but it exports only a full scene without any defined boundary, so it requires manual tweaking to fit the image.
Ideally I would like to export only the results within the results' bounding box and 'map' them to a raster image programmatically with Ruby or Python.
This script uses ParaView and creates an image perfectly centered and scaled so that it can be used as a texture. Notice the 855 value for the vertical size. It seems to be related to the screen resolution and, according to the ParaView mailing list, it is needed only on OSX.
It should be run with the ParaView Python interpreter pvbatch.
import sys, json
#### import the simple module from the paraview
from paraview.simple import *
#### disable automatic camera reset on 'Show'
paraview.simple._DisableFirstRenderCameraReset()
args = json.loads(sys.argv[1])
# create a new 'Legacy VTK Reader'
vtk_file = args["file"]
data = LegacyVTKReader(FileNames=[vtk_file])
# get active view
renderView1 = GetActiveViewOrCreate('RenderView')
# read the export parameters from the JSON argument
xc = float(args["center"][0])
yc = float(args["center"][1])
zc = float(args["center"][2])
width = float(args["width"])
height = float(args["height"])
output_file = args["output_file"]
scalar = args["scalar"]
colormap_min = float(args["colormap_min"])
colormap_max = float(args["colormap_max"])
ratio = height / width
magnification = 2
# 855 is the empirical vertical size mentioned above (needed on OSX only)
height_p = 855 * magnification
width_p = int(height_p * 1.0 / ratio / magnification)
renderView1.ViewSize = [width_p, height_p]
# show data in view
dataDisplay = Show(data, renderView1)
# trace defaults for the display properties.
dataDisplay.ColorArrayName = ['CELLS', scalar]
# set scalar coloring
ColorBy(dataDisplay, ('CELLS', scalar))
# rescale color and/or opacity maps used to include current data range
dataDisplay.RescaleTransferFunctionToDataRange(True)
# get color transfer function/color map for 'irradiation'
irradiationLUT = GetColorTransferFunction(scalar)
# Rescale transfer function
irradiationLUT.RescaleTransferFunction(colormap_min, colormap_max)
irradiationLUT.LockDataRange = 1
irradiationLUT.ColorSpace = 'RGB'
irradiationLUT.NanColor = [0.498039, 0.0, 0.0]
#changing interaction mode based on data extents
renderView1.InteractionMode = '2D'
renderView1.CameraPosition = [xc, yc, 10000.0 + zc]
renderView1.CameraFocalPoint = [xc, yc, zc]
# hide color bar/color legend
dataDisplay.SetScalarBarVisibility(renderView1, False)
# current camera placement for renderView1
renderView1.InteractionMode = '2D'
#renderView1.CameraPosition = [3.641002, 197.944122, 10001.75]
#renderView1.CameraFocalPoint = [3.641002, 197.944122, 1.75]
renderView1.CameraParallelScale = (height / 2.0)
# save screenshot
SaveScreenshot(output_file, magnification=magnification, quality=100, view=renderView1)
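For reference, a hypothetical invocation (the script name and all values below are made up for illustration) would look like:
pvbatch export_texture.py '{"file": "results.vtk", "center": [0.0, 200.0, 0.0], "width": 100, "height": 100, "output_file": "texture.png", "scalar": "irradiation", "colormap_min": 0, "colormap_max": 1000}'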
I have a DIY solution. Usually, I do as follows:
Open my mesh as a polygon layer in QGIS and do the following:
calculate mesh centroids in QGIS (Vector/Geometry Tools/Polygon Centroids)
right click on the newly created layer, select Save As, select CSV format and under Layer options/GEOMETRY select xy or xyz
Then, with a simple Python script, I associate the VTK data (e.g. water depth) to the centroids (be aware that ParaView numbers the nodes with a -1 offset with respect to QGIS, so node 2 in ParaView is node 3 in QGIS); a minimal sketch of such a script is shown after this list.
Eventually, again in QGIS, I interpolate a raster from the vector points, e.g. with the GRASS GIS module v.to.rast.attribute
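For the scripted join, here is a minimal sketch (file and column names are hypothetical; it assumes the per-cell values were exported from ParaView as a CSV in cell order):
import csv

# centroids exported from QGIS, with an 'id' column (QGIS numbering)
with open('centroids.csv') as f:
    centroids = list(csv.DictReader(f))

# per-cell values exported from ParaView (0-based cell order)
with open('depth.csv') as f:
    depths = [row['depth'] for row in csv.DictReader(f)]

# join on the cell id, compensating for the -1 offset mentioned above
with open('centroids_with_depth.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['X', 'Y', 'depth'])
    for c in centroids:
        writer.writerow([c['X'], c['Y'], depths[int(c['id']) - 1]])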
Related
I have a satellite image of a terrain, and I have the latitude and longitude of its four corner coordinates. How can I add this data to the GeoTIFF file using Python or Matlab?
You can open the file in update mode and set the geotransform.
It does require your image to be a regular grid, and often the assumption is that it isn't rotated (which you can tell if you know all four corner coordinates).
The geotransform also requires the x/y size in pixels, which you can calculate from the image and the corner coordinates if you don't already know them.
If this is more or less where you're starting from:
ulx = -180
uly = 90
lrx = 180
lry = -90
fn = "somefile.tif"
You can calculate and set the geotransform with:
from osgeo import gdal
ds = gdal.OpenEx(fn, gdal.OF_UPDATE)
# calculate the resolution
xres = (lrx - ulx) / ds.RasterXSize
yres = (lry - uly) / ds.RasterYSize
# define the geotransform
gt = (ulx, xres, 0, uly, 0, yres)
# set the geotransform in the metadata
ds.SetGeoTransform(gt)
# close file
ds = None
Since you're modifying an existing file, it would probably be best to first do it on a copy and make sure it's doing what you expect.
If the geotransform is missing, your file might also lack other metadata like the projection. If that's the case you can set it similarly with ds.SetProjection(srs_wkt).
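For example, a minimal sketch, assuming the file should be plain WGS84 lon/lat (swap in your actual EPSG code):
from osgeo import osr

srs = osr.SpatialReference()
srs.ImportFromEPSG(4326)  # assumed CRS, for illustration only
ds.SetProjection(srs.ExportToWkt())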
I have a geotiff file that I'm opening with gdal in Python, and I need to find the single pixel closest to a specified latitude/longitude. I was previously working with an unrelated file type for similar data, so I'm completely new to both gdal and geotiff.
How does one do this? What I have so far is
from osgeo import gdal
import numpy as np
ds = gdal.Open('foo.tiff')
width = ds.RasterXSize
height = ds.RasterYSize
gt = ds.GetGeoTransform()
gp = ds.GetProjection()
data = np.array(ds.ReadAsArray())
print(gt)
print(gp)
which produces (for my files)
(-3272421.457337171, 2539.703, 0.0, 3790842.1060354356, 0.0, -2539.703)
and
PROJCS["unnamed",GEOGCS["Coordinate System imported from GRIB file",DATUM["unnamed",SPHEROID["Sphere",6371200,0]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]]],PROJECTION["Lambert_Conformal_Conic_2SP"],PARAMETER["latitude_of_origin",25],PARAMETER["central_meridian",265],PARAMETER["standard_parallel_1",25],PARAMETER["standard_parallel_2",25],PARAMETER["false_easting",0],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]
Ideally, there'd be a single simple function call, and it would also return an indication whether the specified location falls outside the bounds of the raster.
My fallback is to obtain a grid from another source containing the latitudes and longitudes for each pixel and then do a brute force search for the desired location, but I'm hoping there's a more elegant way.
Note: I think what I'm trying to do is equivalent to the command line
gdallocationinfo -wgs84 foo.tif <longitude> <latitude>
which returns results like
Report:
Location: (1475P,1181L)
Band 1:
Value: 66
This suggests to me that the functionality is probably already in the gdal module, if I can just find the right method to call.
You basically need two steps:
Convert the lat/lon point to the raster-projection
Convert the mapx/mapy (in raster proj) to pixel coordinates
Given the code you already posted above, defining both projection systems can be done with:
from osgeo import gdal, osr
point_srs = osr.SpatialReference()
point_srs.ImportFromEPSG(4326) # hardcode for lon/lat
# GDAL>=3: make sure it's x/y
# see https://trac.osgeo.org/gdal/wiki/rfc73_proj6_wkt2_srsbarn
point_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
file_srs = osr.SpatialReference()
file_srs.ImportFromWkt(gp)
Create the coordinate transformation, and use it to convert the point from lon/lat to mapx/mapy coordinates (whatever projection it is) with:
ct = osr.CoordinateTransformation(point_srs, file_srs)
point_x = -114.06138 # lon
point_y = 51.03163 # lat
mapx, mapy, z = ct.TransformPoint(point_x, point_y)
To go from map coordinates to pixel coordinates, the geotransform needs to be inverted first; the inverted geotransform can then be used to retrieve the pixel coordinates like:
gt_inv = gdal.InvGeoTransform(gt)
pixel_x, pixel_y = gdal.ApplyGeoTransform(gt_inv, mapx, mapy)
Rounding those pixel coordinates should allow you to use them for indexing the data array. You might need to clip them if the point you're querying is outside the raster.
# round to pixel
pixel_x = round(pixel_x)
pixel_y = round(pixel_y)
# clip to file extent
pixel_x = max(min(pixel_x, width-1), 0)
pixel_y = max(min(pixel_y, height-1), 0)
pixel_data = data[pixel_y, pixel_x]
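Putting the steps together, here is a hypothetical helper (the function name and return convention are my own choices, not a GDAL API) that also reports whether the point falls inside the raster, as the question asked:
from osgeo import gdal, osr

def sample_at_lonlat(ds, lon, lat, band=1):
    # lon/lat (WGS84) -> map coordinates in the file's projection
    point_srs = osr.SpatialReference()
    point_srs.ImportFromEPSG(4326)
    point_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
    file_srs = osr.SpatialReference()
    file_srs.ImportFromWkt(ds.GetProjection())
    ct = osr.CoordinateTransformation(point_srs, file_srs)
    mapx, mapy, _ = ct.TransformPoint(lon, lat)
    # map coordinates -> pixel coordinates
    inv_gt = gdal.InvGeoTransform(ds.GetGeoTransform())
    px, py = gdal.ApplyGeoTransform(inv_gt, mapx, mapy)
    px, py = int(round(px)), int(round(py))
    if not (0 <= px < ds.RasterXSize and 0 <= py < ds.RasterYSize):
        return None, False
    value = ds.GetRasterBand(band).ReadAsArray(px, py, 1, 1)[0, 0]
    return value, True

# usage: value, inside = sample_at_lonlat(ds, -114.06138, 51.03163)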
I am trying to convert my image visual to polar form using VisPy.
Something similar to the images below.
Original image, generated using the VisPy code below:
scene.visuals.Image(self.img_data, interpolation=interpolation, parent=self.viewbox.scene, cmap=self.cmap, method='subdivide', clim=(-65, 40))
Required polar image:
I did try to implement the polar transform using the PolarTransform example from VisPy but couldn't succeed.
Can anyone please guide me on how to do a polar transform of the above image using VisPy?
Thanks
Reply to #djhoese:
Image generated by Vispy before PolarTransform
Image generated by Vispy after PolarTransform
Code for PolarTransform:
self.img.transform = PolarTransform()
ImageVisual and PolarTransform do not play well together without some nudging.
VisPy has two methods of drawing, subdivide and impostor. I'll concentrate on subdivide here. This won't work with impostor.
First, create the ImageVisual like this:
img = visuals.ImageVisual(image,
                          grid=(1, N),
                          method='subdivide')
For N use a reasonably high number (e.g. 360). Playing with that number, you'll immediately see how the polar resolution is affected.
Further, you need to set up a specific transform chain:
transform = (
    # move to final location and scale to your liking
    STTransform(scale=(scx, scy), translate=(xoff, yoff))
    # 0
    # just a plain simple polar transform
    * PolarTransform()
    # 1
    # pre-scale image to work with the polar transform
    # (PolarTransform does not work without this):
    # scale vertex coordinates to 2*pi
    * STTransform(scale=(2 * np.pi / img.size[0], 1.0))
    # 2
    # origin switch via translate.y, fix translate.x
    * STTransform(translate=(img.size[0] * (ori0 % 2) * 0.5,
                             -img.size[1] * (ori0 % 2)))
    # 3
    # location change via translate.x
    * STTransform(translate=(img.size[0] * (-loc0 - 0.25), 0.0))
    # 4
    # direction switch via inverting scale.x
    * STTransform(scale=(-dir0, 1.0))
)
# set transform
img.transform = transform
dir0 - direction, cw/ccw (takes values -1/1, respectively)
loc0 - location of zero (value between 0 and 2 * np.pi, counter-clockwise)
ori0 - side which will be transformed to the center of the polar image (takes values 0 or 1, for top or bottom respectively)
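For illustration, one hypothetical set of values (my own choices, not from the VisPy docs):
dir0 = 1               # counter-clockwise
loc0 = 0.0             # zero location
ori0 = 0               # top side maps to the center
scx, scy = 1.0, 1.0    # overall scale
xoff, yoff = 0.0, 0.0  # overall position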
The bottom four STTransforms can surely be simplified. They are split apart to show the different changes and how they have to be applied.
An example will be added to the VisPy examples section later.
I've written code to produce cylinder objects using VTK in Python. This code works fine: it produces a 3D scene where I can zoom in on and turn around the cylinders I have made. The problem is that I want to export this rendered scene to ParaView to view and save it for later work. How can I do this?
Here is the code that produces a Y-shape with cylinders:
import vtk
import numpy as np
'''
Adding multiple Actors to one renderer scene using the VTK package with the Python API.
Each cylinder is an Actor with three input specifications: start point, end point and radius.
After creating all the Actors, the preferred Actors are added to a list, and that list is the input to the
renderer scene.
A list or numpy array with the appropriate 3*1 shape can be used to specify the start and end points.
There are two alternative ways to apply the transform.
1) Use vtkTransformPolyDataFilter to create a new transformed polydata.
This method is useful if the transformed polydata is needed
later in the pipeline
To do this, set USER_MATRIX = True
2) Apply the transform directly to the actor using vtkProp3D's SetUserMatrix.
No new data is produced.
To do this, set USER_MATRIX = False
'''
USER_MATRIX = True


def cylinder_object(startPoint, endPoint, radius, my_color="DarkRed"):
    colors = vtk.vtkNamedColors()

    # Create a cylinder.
    # Cylinder height vector is (0,1,0).
    # Cylinder center is in the middle of the cylinder.
    cylinderSource = vtk.vtkCylinderSource()
    cylinderSource.SetRadius(radius)
    cylinderSource.SetResolution(50)

    # Generate a random start and end point
    # startPoint = [0] * 3
    # endPoint = [0] * 3
    rng = vtk.vtkMinimalStandardRandomSequence()
    rng.SetSeed(8775070)  # For testing.

    # Compute a basis
    normalizedX = [0] * 3
    normalizedY = [0] * 3
    normalizedZ = [0] * 3

    # The X axis is a vector from start to end
    vtk.vtkMath.Subtract(endPoint, startPoint, normalizedX)
    length = vtk.vtkMath.Norm(normalizedX)
    vtk.vtkMath.Normalize(normalizedX)

    # The Z axis is an arbitrary vector cross X
    arbitrary = [0] * 3
    for i in range(0, 3):
        rng.Next()
        arbitrary[i] = rng.GetRangeValue(-10, 10)
    vtk.vtkMath.Cross(normalizedX, arbitrary, normalizedZ)
    vtk.vtkMath.Normalize(normalizedZ)

    # The Y axis is Z cross X
    vtk.vtkMath.Cross(normalizedZ, normalizedX, normalizedY)
    matrix = vtk.vtkMatrix4x4()

    # Create the direction cosine matrix
    matrix.Identity()
    for i in range(0, 3):
        matrix.SetElement(i, 0, normalizedX[i])
        matrix.SetElement(i, 1, normalizedY[i])
        matrix.SetElement(i, 2, normalizedZ[i])

    # Apply the transforms
    transform = vtk.vtkTransform()
    transform.Translate(startPoint)    # translate to starting point
    transform.Concatenate(matrix)      # apply direction cosines
    transform.RotateZ(-90.0)           # align cylinder to x axis
    transform.Scale(1.0, length, 1.0)  # scale along the height vector
    transform.Translate(0, .5, 0)      # translate to start of cylinder

    # Transform the polydata
    transformPD = vtk.vtkTransformPolyDataFilter()
    transformPD.SetTransform(transform)
    transformPD.SetInputConnection(cylinderSource.GetOutputPort())

    # Create a mapper and actor for the arrow
    mapper = vtk.vtkPolyDataMapper()
    actor = vtk.vtkActor()
    if USER_MATRIX:
        mapper.SetInputConnection(cylinderSource.GetOutputPort())
        actor.SetUserMatrix(transform.GetMatrix())
    else:
        mapper.SetInputConnection(transformPD.GetOutputPort())
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(colors.GetColor3d(my_color))
    return actor


def render_scene(my_actor_list):
    renderer = vtk.vtkRenderer()
    for arg in my_actor_list:
        renderer.AddActor(arg)
    namedColors = vtk.vtkNamedColors()
    renderer.SetBackground(namedColors.GetColor3d("SlateGray"))

    window = vtk.vtkRenderWindow()
    window.SetWindowName("Oriented Cylinder")
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)

    # Visualize
    window.Render()
    interactor.Start()


if __name__ == '__main__':
    my_list = []
    p0 = np.array([0, 0, 0])
    p1 = np.array([0, 10, 0])
    p2 = np.array([7, 17, 0])
    p3 = np.array([-5, 15, 0])
    my_list.append(cylinder_object(p0, p1, 1, "Red"))
    my_list.append(cylinder_object(p1, p2, 0.8, "Green"))
    my_list.append(cylinder_object(p1, p3, 0.75, "Navy"))
    render_scene(my_list)
I have multiple actors that are all rendered together in one render scene. Can I pass each actor into a vtk.vtkSTLWriter? That doesn't seem to work!
What you're looking for is the subclasses of the vtkExporter class which, as per the linked documentation:
vtkExporter is an abstract class that exports a scene to a file. It is very similar to vtkWriter except that a writer only writes out the geometric and topological data for an object, where an exporter can write out material properties, lighting, camera parameters etc.
As you can see from the inheritance diagram of the class, there are about 15 classes that support exporting such a scene into a file that can be viewed in appropriate readers.
IMHO the one you'll have the most luck with is the vtkVRMLExporter class, as it's a fairly common format. That being said, I don't believe ParaView supports VRML files (at least based on some pretty ancient posts I've found), but I'm pretty sure MayaVi does.
Alternatively you could, as you mentioned, export objects into STL files, but STL files simply contain triangle coordinates and info on how they connect. Such files cannot describe scene info such as camera or lighting information. Also, last I checked, a single STL file can only contain a single object, so your three cylinders would end up being a merged object; that's probably not what you want.
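One more option, as a sketch only: since ParaView natively reads VTK's own file formats, you could merge the transformed polydata from each cylinder (the USER_MATRIX = False path, where the geometry itself is transformed) and write a .vtp file; transform_filters below is a hypothetical list standing in for your vtkTransformPolyDataFilter instances:
import vtk

append = vtk.vtkAppendPolyData()
for tf in transform_filters:  # your vtkTransformPolyDataFilter objects
    append.AddInputConnection(tf.GetOutputPort())

writer = vtk.vtkXMLPolyDataWriter()
writer.SetFileName('cylinders.vtp')
writer.SetInputConnection(append.GetOutputPort())
writer.Write()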
I added this code and it created a VRML file from my rendered scene.
exporter = vtk.vtkVRMLExporter()
exporter.SetRenderWindow(window)
exporter.SetFileName("cylinders.wrl")
exporter.Write()
I have an .obj file. I do not know its contents' bounding box beforehand. I want to load it into Blender and rotate the camera around it over K frames (e.g. 15 frames). How can I do such a thing in Blender using the Python API?
A common way to do an object turnaround is to add an empty and make it the parent of the camera; animating the z-rotation of the empty will then rotate the camera around the object. You can give the camera a track-to constraint so that the camera always points at the target object.
You can use the object's bound_box to find its outer limits, then add a bit more so the object stays inside the view, and position the camera with that. Making the extra distance proportional to the object size should work for most objects.
The addon I made for this answer shows how to make a bounding box around multiple objects, which may be helpful if you have multiple objects at once.
To do that in Python:
import bpy
from math import radians

scn = bpy.context.scene
bpy.ops.import_scene.obj(filepath='obj1.obj')
target = bpy.context.selected_objects[0]
scn.objects.active = target
# centring the origin gives a better bounding box and rotation point
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')
cam_x_pos = max([v[0] for v in target.bound_box]) * 2.5
cam_y_pos = max([v[1] for v in target.bound_box]) * 2.5
cam_z_pos = max([v[2] for v in target.bound_box]) * 2.5
rot_centre = bpy.data.objects.new('rot_centre', None)
scn.objects.link(rot_centre)
rot_centre.location = target.location
camera = bpy.data.objects.new('camera', bpy.data.cameras.new('camera'))
scn.objects.link(camera)
camera.location = (cam_x_pos, cam_y_pos, cam_z_pos)
camera.parent = rot_centre
m = camera.constraints.new('TRACK_TO')
m.target = target
m.track_axis = 'TRACK_NEGATIVE_Z'
m.up_axis = 'UP_Y'
rot_centre.rotation_euler.z = 0.0
rot_centre.keyframe_insert('rotation_euler', index=2, frame=1)
rot_centre.rotation_euler.z = radians(360.0)
rot_centre.keyframe_insert('rotation_euler', index=2, frame=101)
# set linear interpolation for constant rotation speed
for c in rot_centre.animation_data.action.fcurves:
    for k in c.keyframe_points:
        k.interpolation = 'LINEAR'
scn.frame_end = 100
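If you want the K frames from the question (e.g. K = 15) instead of 100, a hypothetical adjustment is to replace the second keyframe_insert call and the frame_end line with:
K = 15  # assumed frame count from the question
rot_centre.keyframe_insert('rotation_euler', index=2, frame=K + 1)
scn.frame_end = K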