Here's a rough explanation of what I do in vtk:
Create a surface (a minimal surface, not too relevant what it is, the geometry is important though: the gyroid has two labyrinths that are completely shut off from each other).
Use vtkClipClosedSurface to shut off one of the labyrinths so that I get an object that has no open surfaces anymore. A regular surface looks like this; with a closed surface it looks like this.
Here's my problem: For more complicated versions of my structure, I get this:
Can you see how on the top left it works fine and near the bottom right it stops creating surfaces? Sometimes I also get really weird triangles in that last part.
To my understanding, vtkClipClosedSurface knows from the surface normals where to close a surface and where not. The thing is, the normals of my structure are fine and they all point in the right direction. If you take a closer look at the structure you will notice that the lower part is basically an inversion of the top part that changes gradually, all in one surface.
I tried to modify my structure before cutting with many filters, like vtkSmoothPolyDataFilter, vtkCleanPolyData and vtkPolyDataNormals. I even tried extracting the boundary surfaces with vtkFeatureEdges, which led to an even worse result, and vtkFillHolesFilter didn't yield any acceptable results either. My surface seems flawless, and it should be easy enough to create a boundary from it.
I have no idea what else to try. This happens for other structures, too. Fixing it with a CAD tool is out of the question, because it is supposed to work out of the box. Please help me!
Here's another example of a geometry that doesn't close the surface properly. This time I used vtkFillHolesFilter, which results in surfaces on the inside of the structure, while they should only occupy the boundary of the object.
In case you need a more detailed rundown of my pipeline, here goes:
create surface using mayavi.mlab.contour3d
get the PolyData by extracting the actor.mapper.input
convert format from tvtk to regular vtk
vtkClipClosedSurface with a plane collection that cuts away part of the structure (errors occur when the plane collection is the same as the structure boundary)
visualize it
Edit: Okay, this did not receive enough attention, so I constructed a minimal, complete and verifiable working example that reproduces the behaviour:
import numpy as np
import vtk # VTK version 7.0
from mayavi import mlab # mayavi version 4.4.4
from mayavi.api import Engine, OffScreenEngine
from tvtk.api import tvtk
def schwarz_D(x, y, z, linear_term=0):
"""This is the function for the Schwarz Diamond level surface."""
return (np.sin(x) * np.sin(y) * np.sin(z) + np.sin(x) * np.cos(y) * np.cos(z) +
np.cos(x) * np.sin(y) * np.cos(z) + np.cos(x) * np.cos(y) * np.sin(z)) - linear_term * z
def plane_collection(xn, x, yn, y, zn, z):
"""Defines the 6 planes for cutting rectangular objects to the right size."""
plane1 = vtk.vtkPlane()
plane1.SetOrigin(x, 0, 0)
plane1.SetNormal(-1, 0, 0)
plane2 = vtk.vtkPlane()
plane2.SetOrigin(0, y, 0)
plane2.SetNormal(0, -1, 0)
plane3 = vtk.vtkPlane()
plane3.SetOrigin(0, 0, z)
plane3.SetNormal(0, 0, -1)
plane4 = vtk.vtkPlane()
plane4.SetOrigin(xn, 0, 0)
plane4.SetNormal(1, 0, 0)
plane5 = vtk.vtkPlane()
plane5.SetOrigin(0, yn, 0)
plane5.SetNormal(0, 1, 0)
plane6 = vtk.vtkPlane()
plane6.SetOrigin(0, 0, zn)
plane6.SetNormal(0, 0, 1)
plane_list = [plane4, plane1, plane5, plane2, plane6, plane3]
planes = vtk.vtkPlaneCollection()
for item in plane_list:
planes.AddItem(item)
return planes
[nx, ny, nz] = [2, 2, 8] # amount of unit cells
cell_size = 1
gradient_value = 0.04 # only values below 0.1 produce the desired geometry; this term is essential
x, y, z = np.mgrid[-cell_size*(nx + 1)/2:cell_size*(nx + 1)/2:100j,
-cell_size*(ny + 1)/2:cell_size*(ny + 1)/2:100j,
-cell_size*(nz + 1)/2:cell_size*(nz + 1)/2:100*2j] * np.pi / (cell_size/2)
# engine = Engine()
engine = OffScreenEngine() # do not start mayavi GUI
engine.start()
fig = mlab.figure(figure=None, engine=engine)
contour3d = mlab.contour3d(x, y, z, schwarz_D(x, y, z, gradient_value), figure=fig)
scene = engine.scenes[0]
actor = contour3d.actor.actors[0]
iso_surface = scene.children[0].children[0].children[0]
iso_surface.contour.minimum_contour = 0
iso_surface.contour.number_of_contours = 1
iso_surface.compute_normals = False
iso_surface.contour.auto_update_range = False
mlab.draw(fig)
# mlab.show() # enable if you want to see the mayavi GUI
polydata = tvtk.to_vtk(actor.mapper.input) # convert tvtkPolyData to vtkPolyData
# Move object to the coordinate center to make clipping easier later on.
center_coords = np.array(polydata.GetCenter())
center = vtk.vtkTransform()
center.Translate(-center_coords[0], -center_coords[1], -center_coords[2])
centerFilter = vtk.vtkTransformPolyDataFilter()
centerFilter.SetTransform(center)
centerFilter.SetInputData(polydata)
centerFilter.Update()
# Reverse normals in order to receive a closed surface after clipping
reverse = vtk.vtkReverseSense()
reverse.SetInputConnection(centerFilter.GetOutputPort())
reverse.ReverseNormalsOn()
reverse.ReverseCellsOn()
reverse.Update()
bounds = np.asarray(reverse.GetOutput().GetBounds())
clip = vtk.vtkClipClosedSurface()
clip.SetInputConnection(reverse.GetOutputPort())
clip.SetTolerance(10e-3)
# clip.TriangulationErrorDisplayOn() # enable to see errors for not watertight surfaces
clip.SetClippingPlanes(plane_collection(bounds[0] + cell_size/2, bounds[1] - cell_size/2,
bounds[2] + cell_size/2, bounds[3] - cell_size/2,
bounds[4] + cell_size/2, bounds[5] - cell_size/2))
clip.Update()
# Render the result
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(clip.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer = vtk.vtkRenderer()
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindowInteractor = vtk.vtkRenderWindowInteractor()
renderWindowInteractor.SetRenderWindow(renderWindow)
renderer.AddActor(actor)
renderWindow.Render()
renderWindowInteractor.Start()
This really is as short as it gets, I stripped as much as I could. The problem still persists and I can't figure out a solution.
Try using pymeshfix. I had a very similar problem with some low-res mandelbulbs I was generating.
You may also want to check out pyvista; it's a nice python wrapper for vtk.
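For what it's worth, here is a minimal sketch of the pymeshfix route, picking up the clipped output from the example above. I'm going from memory on the pymeshfix API, so treat the MeshFix constructor and the .mesh property as assumptions to check against its docs:

import pymeshfix
import pyvista as pv

surf = pv.wrap(clip.GetOutput()).triangulate()  # the clipped vtkPolyData from the pipeline above
mf = pymeshfix.MeshFix(surf)
mf.repair()         # fill holes and remove self-intersections
repaired = mf.mesh  # repaired, watertight surface as a pyvista PolyData
repaired.plot(color='w')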
Great problem and thanks for the example.
I was able to get this to work in pyvista with some modifications:
import numpy as np
import pyvista as pv
def schwarz_D(x, y, z, linear_term=0):
"""This is the function for the Schwarz Diamond level surface."""
return (np.sin(x) * np.sin(y) * np.sin(z) + np.sin(x) * np.cos(y) * np.cos(z) +
np.cos(x) * np.sin(y) * np.cos(z) + np.cos(x) * np.cos(y) * np.sin(z)) - linear_term * z
# Create the grid
[nx, ny, nz] = [2, 2, 8] # amount of unit cells
cell_size = 1
gradient_value = 0.04 # only values below 0.1 produce the desired geometry; this term is essential
x, y, z = np.mgrid[-cell_size*(nx + 1)/2:cell_size*(nx + 1)/2:100j,
-cell_size*(ny + 1)/2:cell_size*(ny + 1)/2:100j,
-cell_size*(nz + 1)/2:cell_size*(nz + 1)/2:100*2j] * np.pi / (cell_size/2)
# make a grid and exclude cells below 0.1
grid = pv.StructuredGrid(x, y, z)
grid['scalars'] = schwarz_D(x, y, z, gradient_value).ravel(order='F')
contour = grid.clip_scalar(value=0.1)
# make a bunch of clips
# bounds = contour.bounds
# contour.clip(origin=(bounds[0] + cell_size/2, 0, 0), normal='-x', inplace=True)
# contour.clip(origin=(0, bounds[1] - cell_size/2, 0), normal='-y', inplace=True)
# contour.clip(origin=(0, 0, bounds[2] + cell_size/2), normal='-z', inplace=True)
# contour.clip(origin=(bounds[3] - cell_size/2, 0, 0), normal='x', inplace=True)
# contour.clip(origin=(0, bounds[4] + cell_size/2, 0), normal='y', inplace=True)
# contour.clip(origin=(0, 0, bounds[5] - cell_size/2), normal='z', inplace=True)
contour.plot(smooth_shading=True, color='w')
I'm not sure why you're using clipping planes; I think you would be better off simply limiting the x, y, and z ranges used to create the grid. That way, you wouldn't have to clip the final mesh.
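That could look something like the following sketch, which reuses schwarz_D and the constants from the snippet above: build the grid only over the extents you want to keep, so the isosurface already ends at the right bounds and no plane clipping is needed.

# grid spans exactly the region of interest, so nothing has to be trimmed afterwards
half_x = cell_size * nx / 2
half_y = cell_size * ny / 2
half_z = cell_size * nz / 2
x, y, z = np.mgrid[-half_x:half_x:100j,
                   -half_y:half_y:100j,
                   -half_z:half_z:200j] * np.pi / (cell_size / 2)
grid = pv.StructuredGrid(x, y, z)
grid['scalars'] = schwarz_D(x, y, z, gradient_value).ravel(order='F')
grid.clip_scalar(value=0.1).plot(smooth_shading=True, color='w')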
Related
I am attempting to write a Python code to simulate many particles in a confined box. These particles behave in such a way that they move in the box in straight lines with a slight angular noise (small changes in the direction of the particle path). They should interact by acknowledging the other particle and 'shuffle/squeeze' past each other and continue on their intended path, much like humans on a busy street. Eventually, the particles should cluster together when the density of particles (or packing fraction) reaches a certain value, but I haven't got to this stage yet.
The code currently has particle interactions, and I have attempted the angular noise, but without success so far. The two problems I would like some help with are the angular noise and the particle interactions. My idea for the angular noise was to multiply the data by a number between roughly 0.9 and 1.1 to change the direction of the particle slightly, but after adding the lines in, nothing changed (a sketch of a more standard way to add angular noise appears after the code below). The particles do interact, but they seem to move in a fast semicircle around the other interacting particle, which is not what I want.
I don't think any knowledge or understanding of ABM is needed to write the angular noise code but some may be needed for the forces, I am not 100% sure though.
If anyone has any improvements for the code speed or ideas which may help with the interactions and/or angular noise that would be much appreciated. I will also leave an example of an animation which is my aim: https://warwick.ac.uk/fac/sci/physics/staff/research/cwhitfield/abpsimulations
The above link shows the animation I am looking for, although I don't need the sliders, just the box, and moving particles. The whole code is shown below:
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
def set_initial_coordinates():
x_co = [np.random.uniform(0, 2) for i in range(n_particles)]
y_co = [np.random.uniform(0, 2) for i in range(n_particles)]
return x_co, y_co
def set_initial_velocities():
x_vel = np.array([np.random.uniform(-1, 1) for i in range(n_particles)])
y_vel = np.array([np.random.uniform(-1, 1) for i in range(n_particles)])
return x_vel, y_vel
def init():
ax.set_xlim(-0.05, 2.05)
ax.set_ylim(-0.07, 2.07)
return ln,
def update(dt):
xdata = initialx + vx * dt
ydata = initialy + vy * dt
fx = np.abs((xdata + 2) % 4 - 2)
fy = np.abs((ydata + 2) % 4 - 2)
for i in range(n_particles):
for j in range(n_particles):
if i == j:
continue
dx = fx[j] - fx[i] # distance in x direction
dy = fy[j] - fy[i] # distance in y direction
dr = np.sqrt((dx ** 2) + (dy ** 2)) # distance between x
if dr <= r:
force = k * ((2 * r) - dr) # size of the force if distance is less than or equal to radius
# Imagine a unit vector going from i to j
x_comp = dx / dr # x component of force
y_comp = dy / dr # y component of force
fx[i] += -x_comp * force # x force
fy[i] += -y_comp * force # y force
ln.set_data(fx, fy)
return ln,
# theta = np.random.uniform(0, 2) for i in range(n_particles)
n_particles = 10
initialx, initialy = set_initial_coordinates()
vx, vy = set_initial_velocities()
fig, ax = plt.subplots()
x_co, y_co = [], []
ln, = plt.plot([], [], 'bo', markersize=15) # radius 0.05
plt.xlim(0, 2)
plt.ylim(0, 2)
k = 1
r = 0.1
t = np.linspace(0, 10, 1000)
ani = FuncAnimation(fig, update, t, init_func=init, blit=True, repeat=False)
plt.show()
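On the angular-noise point specifically: a more standard approach than scaling positions is to give each particle a heading angle, kick that heading with a small random rotation every step, and rebuild the velocity from it. A minimal self-contained sketch of just that idea, separate from the code above (eta is a made-up noise-strength parameter to tune):

import numpy as np

n_particles = 10
speed = 1.0
dt = 0.01
eta = 0.1  # hypothetical noise strength, in radians

theta = np.random.uniform(0, 2 * np.pi, n_particles)  # one heading per particle
x = np.random.uniform(0, 2, n_particles)
y = np.random.uniform(0, 2, n_particles)

for step in range(1000):
    # the angular noise: a small Gaussian kick to each heading
    theta += eta * np.sqrt(dt) * np.random.randn(n_particles)
    x += speed * np.cos(theta) * dt
    y += speed * np.sin(theta) * dt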
I'm visualizing some fluid motion. I have a point field, and each point is associated with a velocity and an acceleration vector. This question is about whether my code can be improved speed-wise, as I'm trying to visualize it as an animation, which takes quite some time for larger fields; that is unsurprising given how the code is set up. There is a lot of repetitive work that, in my opinion, should be possible to perform more efficiently, I just do not know how to do it.
Working example:
import vtk
import numpy as np
renderer = vtk.vtkRenderer()
renderer.SetBackground(0.5, 0.5, 0.5)
renderWindow = vtk.vtkRenderWindow()
renderWindow.AddRenderer(renderer)
renderWindowInteractor = vtk.vtkRenderWindowInteractor()
renderWindowInteractor.SetRenderWindow(renderWindow)
lut = vtk.vtkLookupTable()
lutNum = 2560
lut.SetNumberOfTableValues(lutNum)
ctf = vtk.vtkColorTransferFunction()
ctf.SetColorSpaceToDiverging()
ctf.AddRGBPoint(0.0, 0, 0, 1.0)
ctf.AddRGBPoint(1.0, 1.0, 0, 0)
for i in range(lutNum):
    ss = float(i) / lutNum
    cc = ctf.GetColor(ss)
    lut.SetTableValue(i, cc[0], cc[1], cc[2], 1.0)
lut.SetRampToLinear()
lut.SetVectorModeToMagnitude()
xx = np.arange(-7, 7, 1)
yy = np.arange(-7, 7, 1)
zz = np.arange(-40, 0, 1)
no_datapts = len(xx) * len(yy) * len(zz)
renderWindowInteractor.Initialize()
points = vtk.vtkPoints()
vel = vtk.vtkFloatArray()
vel.SetNumberOfComponents(3)
vel.SetNumberOfTuples(no_datapts)
acc = vtk.vtkFloatArray()
acc.SetNumberOfComponents(3)
acc.SetNumberOfTuples(no_datapts)
index = 0
for x in xx:
    for y in yy:
        for z in zz:
points.InsertPoint(index, x, y, z)
acc_x = 2 * z + 0.1 * x
acc_z = 1.2 * z + 0.2 * x
vel_x = 0.5 * z + 0.7 * x
vel_z = 0.7 * z + 0.3 * x
acc.SetTuple3(index, acc_x, 0, acc_z)
vel.SetTuple3(index, vel_x, 0, vel_z)
index += 1
grid = vtk.vtkStructuredGrid()
grid.SetPoints(points)
grid.GetPointData().SetVectors(vel)
arrow = vtk.vtkArrowSource()
arrow.SetTipResolution(16)
arrow.SetTipLength(0.3)
arrow.SetTipRadius(0.1)
glyph = vtk.vtkGlyph3D()
glyph.SetInputData(grid)
glyph.SetSourceConnection(arrow.GetOutputPort())
glyph.SetVectorModeToUseVector()
glyph.SetColorModeToColorByVector()
glyph.OrientOn()
glyph.Update()
mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(glyph.GetOutputPort())
mapper.SetLookupTable(lut)
mapper.ScalarVisibilityOn()
mapper.SetScalarModeToUsePointData()
mapper.SetScalarRange(grid.GetPointData().GetVectors().GetRange(-1))
actor = vtk.vtkActor()
actor.SetMapper(mapper)
renderer.AddActor(actor)
renderWindow.Render()
renderWindowInteractor.Start()
That should work; I have included some coloring as well, even though it is not really necessary for the example.
The "interesting" thing is that I have a 3D grid, which is basically just a 2D x-z grid that is being represented in multiple y positions.
Hence the question: is there any way I can avoid this triple loop by creating my structured grid differently? Do I have to set the points and the tuples individually?
I hope the question is clear, and I will, of course, update the question if you find it unclear and if I find a different solution myself.
I would be most grateful for any help you are able to provide.
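(A note on the triple-loop question: the usual trick is to build the point and vector arrays with numpy and hand them to VTK in one go via vtk.util.numpy_support. A sketch, assuming the same grid as above; note that when dimensions are set on a vtkStructuredGrid, VTK expects the x index to vary fastest:)

import numpy as np
import vtk
from vtk.util import numpy_support

xx = np.arange(-7, 7, 1.0)
yy = np.arange(-7, 7, 1.0)
zz = np.arange(-40, 0, 1.0)

# indexing='ij' gives arrays indexed [x, y, z]; ravel(order='F') makes the
# x index vary fastest, which is the point order vtkStructuredGrid expects
X, Y, Z = np.meshgrid(xx, yy, zz, indexing='ij')
pts = np.column_stack([X.ravel(order='F'), Y.ravel(order='F'), Z.ravel(order='F')])

# same field as the loop, evaluated on whole arrays at once
vel = np.column_stack([0.5 * pts[:, 2] + 0.7 * pts[:, 0],
                       np.zeros(len(pts)),
                       0.7 * pts[:, 2] + 0.3 * pts[:, 0]]).astype(np.float32)

points = vtk.vtkPoints()
points.SetData(numpy_support.numpy_to_vtk(pts, deep=True))

grid = vtk.vtkStructuredGrid()
grid.SetDimensions(len(xx), len(yy), len(zz))
grid.SetPoints(points)
grid.GetPointData().SetVectors(numpy_support.numpy_to_vtk(vel, deep=True))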
I'd like to plot two profiles through the highest intensity point in a 2D numpy array, which is an image of a blob (i.e. a line through the semi-major axis, and another line through the semi-minor axis). The blob is rotated at an angle theta counterclockwise from the standard x-axis and is asymmetric.
It is a 600x600 array with a max intensity of 1 (at only one pixel) that is located right at the center at (300, 300). The angle rotation from the x-axis (which then gives the location of the semi-major axis when rotated by that angle) is theta = 89.54 degrees. I do not want to use scipy.ndimage.rotate because it uses spline interpolation, and I do not want to change any of my pixel values. But I suppose a nearest-neighbor interpolation method would be okay.
I tried generating lines corresponding to the major and minor axes across the image, but the result was not right at all (the peak was far less than 1), so maybe I did something wrong. The code for this is below:
import numpy as np
import matplotlib.pyplot as plt
from scipy import ndimage
def profiles_at_angle(image, axis, theta):
theta = np.deg2rad(theta)
if axis == 'major':
x_0, y_0 = 0, 300-300*np.tan(theta)
x_1, y_1 = 599, 300+300*np.tan(theta)
elif axis=='minor':
x_0, y_0 = 300-300*np.tan(theta), 599
x_1, y_1 = 300+300*np.tan(theta), -599
num = 600
x, y = np.linspace(x_0, x_1, num), np.linspace(y_0, y_1, num)
z = ndimage.map_coordinates(image, np.vstack((x,y)))
fig, axes = plt.subplots(nrows=2)
axes[0].imshow(image, cmap='gray')
axes[0].axis('image')
axes[1].plot(z)
plt.xlim(250,350)
plt.show()
profiles_at_angle(image, 'major', theta)
Did I do something obviously wrong in my code above? Or how else can I accomplish this? Thank you.
Edit: Here are some example images. Sorry for the bad quality; my browser crashed every time I tried uploading them anywhere so I had to take photos of the screen.
Figure 1: This is the result of my code above, which is clearly wrong since the peak should be at 1. I'm not sure what I did wrong though.
Figure 2: I made this plot below by just taking the profiles through the standard x and y axes, ignoring any rotation (this only looks good coincidentally because the real angle of rotation is so close to 90 degrees, so I was able to just switch the labels and get this). I want my result to look something like this, but taking the correction rotation angle into account.
Edit: It could be useful to run tests on this method using data very much like my own (it's a 2D Gaussian with nearly the same parameters):
image = np.random.random((600,600))
def generate(data_set):
xvec = np.arange(0, np.shape(data_set)[1], 1)
yvec = np.arange(0, np.shape(data_set)[0], 1)
X, Y = np.meshgrid(xvec, yvec)
return X, Y
def gaussian_func(xy, x0, y0, sigma_x, sigma_y, amp, theta, offset):
x, y = xy
a = (np.cos(theta))**2/(2*sigma_x**2) + (np.sin(theta))**2/(2*sigma_y**2)
b = -np.sin(2*theta)/(4*sigma_x**2) + np.sin(2*theta)/(4*sigma_y**2)
c = (np.sin(theta))**2/(2*sigma_x**2) + (np.cos(theta))**2/(2*sigma_y**2)
inner = a * (x-x0)**2
inner += 2*b*(x-x0)*(y-y0)
inner += c * (y-y0)**2
return (offset + amp * np.exp(-inner)).ravel()
xx, yy = generate(image)
image = gaussian_func((xx.ravel(), yy.ravel()), 300, 300, 5, 4, 1, 1.56, 0)
image = np.reshape(image, (600, 600))
This should do it for you. You just did not properly compute your lines.
import numpy as np
import scipy.ndimage
import matplotlib.pyplot as plt

theta = 65  # demo angle; substitute your own (e.g. 89.54)
peak = np.argwhere(image==1)[0]
x = np.linspace(peak[0]-100,peak[0]+100,1000)
y = lambda x: (x-peak[1])*np.tan(np.deg2rad(theta))+peak[0]
y_maj = np.linspace(y(peak[1]-100),y(peak[1]+100),1000)
y = lambda x: -(x-peak[1])/np.tan(np.deg2rad(theta))+peak[0]
y_min = np.linspace(y(peak[1]-100),y(peak[1]+100),1000)
del y
z_min = scipy.ndimage.map_coordinates(image, np.vstack((x,y_min)))
z_maj = scipy.ndimage.map_coordinates(image, np.vstack((x,y_maj)))
fig, axes = plt.subplots(nrows=2)
axes[0].imshow(image)
axes[0].plot(x,y_maj)
axes[0].plot(x,y_min)
axes[0].axis('image')
axes[1].plot(z_min)
axes[1].plot(z_maj)
plt.show()
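One more note: since the question asked for nearest-neighbour sampling, map_coordinates accepts an order argument; the default (order=3) is a cubic spline, while order=0 gives nearest-neighbour so the pixel values are never changed:

z_min = scipy.ndimage.map_coordinates(image, np.vstack((x, y_min)), order=0)
z_maj = scipy.ndimage.map_coordinates(image, np.vstack((x, y_maj)), order=0)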
The idea behind this is to create a detection area for a security camera. Currently, I know how to find and use the modelview matrix data, as shown below in the function matrixTransformation. The matrix should then be recalculated for each increment of the security camera's rotation in the initialization function.
I would like to know how you would find coordinates of the edges of each security camera, a cylinder shape, using the matrix. I am using Pygame 1.9.2, Python 3.5 and PyOpenGL-3.1.0.
Picture of coordinates on the security camera which need to be calculated
def matrixTransformation(x,y,z):
matrix = (GLfloat * 16)()
glGetFloatv(GL_MODELVIEW_MATRIX, matrix)
xp = matrix[0] * x + matrix[4] * y + matrix[8] * z + matrix[12]
yp = matrix[1] * x + matrix[5] * y + matrix[9] * z + matrix[13]
zp = matrix[2] * x + matrix[6] * y + matrix[10] * z + matrix[14]
wp = matrix[3] * x + matrix[7] * y + matrix[11] * z + matrix[15]
xp /= wp
yp /= wp
zp /= wp
return xp,yp,zp
def init():
securityCameraRotation=380
glEnable(GL_DEPTH_TEST)
multipleRotations=0
result=[]
glPushMatrix()
glTranslatef(-4,1.5,5.5)
glRotate(315,1,1,1)
while True:
if securityCameraRotation>=380:
clockwise=True
multipleRotations+=1
elif securityCameraRotation<=310:
clockwise=False
glRotate(securityCameraRotation,0,1,0)
#append the transformed coordinates to result
if clockwise==True:
securityCameraRotation-=0.2
elif clockwise==False:
securityCameraRotation+=0.2
if multipleRotations>1:
#End the loop when one complete rotation between 310 and 380 has occured
break
glPopMatrix()
return result
def securityCamera(radius, height, num_slices,frontCircleColour,backCircleColour,tubeColour):
r = radius
h = height
n = float(num_slices)
circle_pts = []
for i in range(int(n) + 1):
angle = 2 * math.pi * (i/n)
x = r * math.cos(angle)
y = r * math.sin(angle)
pt = (x, y)
circle_pts.append(pt)
glBegin(GL_TRIANGLE_FAN) #drawing the back circle
glColor(backCircleColour)
glVertex(0, 0, h/2.0)
for (x, y) in circle_pts:
z = h/2.0
glVertex(x, y, z)
glEnd()
glBegin(GL_TRIANGLE_FAN) #drawing the front circle
glColor(frontCircleColour)
glVertex(0, 0, h/2.0)
for (x, y) in circle_pts:
z = -h/2.0
glVertex(x, y, z)
glEnd()
glBegin(GL_TRIANGLE_STRIP) #draw the tube
glColor(tubeColour)
for (x, y) in circle_pts:
z = h/2.0
glVertex(x, y, z)
glVertex(x, y, -z)
glEnd()
In OpenGL, there are a bunch of transformations that occur. First, we treat the object as if it is in model space, where the object is centered at the origin and we draw the mesh (in this case, the cylinder). Then, we apply a model matrix transform (where we translate/rotate/scale our cylinder) and a view matrix transform (where we shift our scene relative to the imaginary camera). Finally we apply the projection matrix that adds the "3d perspective" to our scene by creating a matrix with gluPerspective or some more modern means. All of these matrix multiplications basically put the coordinates of your 3d models in the right place on our 2d screens (sort of, more detailed info here).
In terms of the model space, the yellow points you highlighted in your picture are actually just (0, 0, -h/2.0) and (0, 0, h/2.0). This is fine if you are just drawing your yellow points with glBegin(GL_POINTS) in your securityCamera function. However, you are probably more interested in calculating where these yellow points are located in world space (that is, after multiplication by the modelview matrix).
One simple way to get these world space coordinates is to multiply the yellow points' model space coordinates by the modelview matrix. Use your matrixTransformation function on (0, 0, -h/2.0) and (0, 0, h/2.0) and that should work!
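Concretely, that could look like this (a sketch; h is the height you pass to securityCamera, and the calls must happen while the camera's transform is still on the modelview stack, i.e. inside the glPushMatrix/glPopMatrix block that positions it):

front_centre = matrixTransformation(0, 0, -h / 2.0)  # front circle centre
back_centre = matrixTransformation(0, 0, h / 2.0)    # back circle centre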
Alternatively, as I hinted at in the comments, matrices like your modelview matrix actually contain useful information that results from the accumulated multiplication of translation, rotation, and scaling matrices. I pointed to this picture:
Each of these column axes actually corresponds to the rows of your numpy array (which is interesting, since numpy is row-major while OpenGL is column-major). You can get the axes describing how your model is oriented in world space with the following snippet:
mv_matrix = glGetFloatv(GL_MODELVIEW_MATRIX)
left, up, forward = [v / np.linalg.norm(v) for v in mv_matrix[:3, :3]]
position = mv_matrix[3, :3]
Note that I cut off the last component of each row and normalized the three axis vectors (position is a point, not a direction, so it is left unnormalized). If you take the forward array you get from there, you get the direction in world space of where that particular camera is pointing, while the position array gives you the world space position of the center (the model space equivalent of (0, 0, 0)) of the security camera. Multiply the normalized forward array by h/2.0 and add that to position and you should get the world space position of the front of your security camera. This is not too useful for rendering to the screen, but it could be used for "behind the scenes" math involving the security camera.
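In code, that last step is just (again assuming h is the cylinder height):

# world-space centre of the camera's front face: start at the cylinder
# centre and step half the height along the normalized forward axis
front_of_camera = position + (h / 2.0) * forward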
As a side note, I realized I made a sign error in this line for the cylinder drawing code:
glBegin(GL_TRIANGLE_FAN) #drawing the front circle
glColor(frontCircleColour)
glVertex(0, 0, -h/2.0)#this was + in the original code!!!
Let me know if this helps you make sense of my earlier comments!
I am attempting to generate map overlay images that would assist in identifying hot-spots, that is areas on the map that have high density of data points. None of the approaches that I've tried are fast enough for my needs.
Note: I forgot to mention that the algorithm should work well under both low and high zoom scenarios (or low and high data point density).
I looked through numpy, pyplot and scipy libraries, and the closest I could find was numpy.histogram2d. As you can see in the image below, the histogram2d output is rather crude. (Each image includes points overlaying the heatmap for better understanding)
My second attempt was to iterate over all the data points and calculate the hot-spot value as a function of distance. This produced a better-looking image, however it is too slow to use in my application. Since it's O(n) in the number of data points for every output pixel, it works OK with 100 points, but blows out when I use my actual dataset of 30000 points.
My final attempt was to store the data in a KDTree, and use the nearest 5 points to calculate the hot-spot value. Each query is roughly O(log n), so this is much faster with a large dataset. It's still not fast enough though: it takes about 20 seconds to generate a 256x256 bitmap, and I would like this to happen in around 1 second.
Edit
The boxsum smoothing solution provided by 6502 works well at all zoom levels and is much faster than my original methods.
The gaussian filter solution suggested by Luke and Neil G is the fastest.
You can see all four approaches below, using 1000 data points in total, at 3x zoom there are around 60 points visible.
Complete code that generates my original 3 attempts, the boxsum smoothing solution provided by 6502 and gaussian filter suggested by Luke (improved to handle edges better and allow zooming in) is here:
import matplotlib
import numpy as np
import matplotlib.cm as cm
import matplotlib.pyplot as plt
import math
from scipy.spatial import KDTree
import time
import scipy.ndimage as ndi
def grid_density_kdtree(xl, yl, xi, yi, dfactor):
zz = np.empty([len(xi),len(yi)], dtype=np.uint8)
    kdtree = KDTree(list(zip(xl, yl)))
for xci in range(0, len(xi)):
xc = xi[xci]
for yci in range(0, len(yi)):
yc = yi[yci]
density = 0.
retvalset = kdtree.query((xc,yc), k=5)
for dist in retvalset[0]:
density = density + math.exp(-dfactor * pow(dist, 2)) / 5
zz[yci][xci] = min(density, 1.0) * 255
return zz
def grid_density(xl, yl, xi, yi):
ximin, ximax = min(xi), max(xi)
yimin, yimax = min(yi), max(yi)
xxi,yyi = np.meshgrid(xi,yi)
#zz = np.empty_like(xxi)
zz = np.empty([len(xi),len(yi)])
for xci in range(0, len(xi)):
xc = xi[xci]
for yci in range(0, len(yi)):
yc = yi[yci]
density = 0.
for i in range(0,len(xl)):
xd = math.fabs(xl[i] - xc)
yd = math.fabs(yl[i] - yc)
if xd < 1 and yd < 1:
dist = math.sqrt(math.pow(xd, 2) + math.pow(yd, 2))
density = density + math.exp(-5.0 * pow(dist, 2))
zz[yci][xci] = density
return zz
def boxsum(img, w, h, r):
st = [0] * (w+1) * (h+1)
    for x in range(w):
        st[x+1] = st[x] + img[x]
    for y in range(h):
        st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w]
        for x in range(w):
            st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x]
    for y in range(h):
        y0 = max(0, y - r)
        y1 = min(h, y + r + 1)
        for x in range(w):
            x0 = max(0, x - r)
            x1 = min(w, x + r + 1)
            img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1]
def grid_density_boxsum(x0, y0, x1, y1, w, h, data):
kx = (w - 1) / (x1 - x0)
ky = (h - 1) / (y1 - y0)
r = 15
border = r * 2
imgw = (w + 2 * border)
imgh = (h + 2 * border)
img = [0] * (imgw * imgh)
for x, y in data:
ix = int((x - x0) * kx) + border
iy = int((y - y0) * ky) + border
if 0 <= ix < imgw and 0 <= iy < imgh:
img[iy * imgw + ix] += 1
    for p in range(4):
boxsum(img, imgw, imgh, r)
a = np.array(img).reshape(imgh,imgw)
b = a[border:(border+h),border:(border+w)]
return b
def grid_density_gaussian_filter(x0, y0, x1, y1, w, h, data):
kx = (w - 1) / (x1 - x0)
ky = (h - 1) / (y1 - y0)
r = 20
border = r
imgw = (w + 2 * border)
imgh = (h + 2 * border)
img = np.zeros((imgh,imgw))
for x, y in data:
ix = int((x - x0) * kx) + border
iy = int((y - y0) * ky) + border
if 0 <= ix < imgw and 0 <= iy < imgh:
img[iy][ix] += 1
return ndi.gaussian_filter(img, (r,r)) ## gaussian convolution
def generate_graph():
n = 1000
# data points range
data_ymin = -2.
data_ymax = 2.
data_xmin = -2.
data_xmax = 2.
# view area range
view_ymin = -.5
view_ymax = .5
view_xmin = -.5
view_xmax = .5
# generate data
xl = np.random.uniform(data_xmin, data_xmax, n)
yl = np.random.uniform(data_ymin, data_ymax, n)
zl = np.random.uniform(0, 1, n)
# get visible data points
xlvis = []
ylvis = []
for i in range(0,len(xl)):
if view_xmin < xl[i] < view_xmax and view_ymin < yl[i] < view_ymax:
xlvis.append(xl[i])
ylvis.append(yl[i])
fig = plt.figure()
# plot histogram
plt1 = fig.add_subplot(221)
plt1.set_axis_off()
    t0 = time.perf_counter()
    zd, xe, ye = np.histogram2d(yl, xl, bins=10, range=[[view_ymin, view_ymax],[view_xmin, view_xmax]], density=True)
    plt.title('numpy.histogram2d - '+str(time.perf_counter()-t0)+"sec")
plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
plt.scatter(xlvis, ylvis)
# plot density calculated with kdtree
plt2 = fig.add_subplot(222)
plt2.set_axis_off()
xi = np.linspace(view_xmin, view_xmax, 256)
yi = np.linspace(view_ymin, view_ymax, 256)
    t0 = time.perf_counter()
    zd = grid_density_kdtree(xl, yl, xi, yi, 70)
    plt.title('function of 5 nearest using kdtree\n'+str(time.perf_counter()-t0)+"sec")
cmap=cm.jet
A = (cmap(zd/256.0)*255).astype(np.uint8)
#A[:,:,3] = zd
plt.imshow(A , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
plt.scatter(xlvis, ylvis)
# gaussian filter
plt3 = fig.add_subplot(223)
plt3.set_axis_off()
    t0 = time.perf_counter()
    zd = grid_density_gaussian_filter(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl))
    plt.title('ndi.gaussian_filter - '+str(time.perf_counter()-t0)+"sec")
plt.imshow(zd , origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
plt.scatter(xlvis, ylvis)
# boxsum smoothing
plt3 = fig.add_subplot(224)
plt3.set_axis_off()
    t0 = time.perf_counter()
    zd = grid_density_boxsum(view_xmin, view_ymin, view_xmax, view_ymax, 256, 256, zip(xl, yl))
    plt.title('boxsum smoothing - '+str(time.perf_counter()-t0)+"sec")
plt.imshow(zd, origin='lower', extent=[view_xmin, view_xmax, view_ymin, view_ymax])
plt.scatter(xlvis, ylvis)
if __name__=='__main__':
generate_graph()
plt.show()
This approach is along the lines of some previous answers: increment a pixel for each spot, then smooth the image with a gaussian filter. A 256x256 image runs in about 350ms on my 6-year-old laptop.
import numpy as np
import scipy.ndimage as ndi
data = np.random.rand(30000,2) ## create random dataset
inds = (data * 255).astype('uint') ## convert to indices
img = np.zeros((256,256)) ## blank image
for i in range(data.shape[0]): ## draw pixels
img[inds[i,0], inds[i,1]] += 1
img = ndi.gaussian_filter(img, (10,10))
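As a side note, the pixel-increment loop can also be vectorised; np.add.at is the safe way to do that, since a plain fancy-indexed += would drop repeated indices (a sketch):

## equivalent to the loop above, but vectorised;
## np.add.at accumulates correctly even when inds contains duplicates
np.add.at(img, (inds[:, 0], inds[:, 1]), 1)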
A very simple implementation that could be done (with C) in realtime and that only takes fractions of a second in pure python is to just compute the result in screen space.
The algorithm is
Allocate the final matrix (e.g. 256x256) with all zeros
For each point in the dataset increment the corresponding cell
Replace each cell in the matrix with the sum of the values of the matrix in an NxN box centered on the cell. Repeat this step a few times.
Scale result and output
The computation of the box sum can be made very fast and independent of N by using a sum table. Every computation just requires two scans of the matrix... total complexity is O(S + WHP) where S is the number of points; W, H are the width and height of the output and P is the number of smoothing passes.
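Concretely, if st[y][x] holds the sum of all pixels above and to the left of (x, y), then any box sum collapses to four table lookups:

box = st[y1][x1] - st[y0][x1] - st[y1][x0] + st[y0][x0]

which is exactly what the second half of the boxsum function below computes.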
Below is the code for a pure python implementation (also very un-optimized); with 30000 points and a 256x256 output grayscale image the computation is 0.5sec including linear scaling to 0..255 and saving of a .pgm file (N = 5, 4 passes).
def boxsum(img, w, h, r):
st = [0] * (w+1) * (h+1)
    for x in range(w):
        st[x+1] = st[x] + img[x]
    for y in range(h):
        st[(y+1)*(w+1)] = st[y*(w+1)] + img[y*w]
        for x in range(w):
            st[(y+1)*(w+1)+(x+1)] = st[(y+1)*(w+1)+x] + st[y*(w+1)+(x+1)] - st[y*(w+1)+x] + img[y*w+x]
    for y in range(h):
        y0 = max(0, y - r)
        y1 = min(h, y + r + 1)
        for x in range(w):
            x0 = max(0, x - r)
            x1 = min(w, x + r + 1)
            img[y*w+x] = st[y0*(w+1)+x0] + st[y1*(w+1)+x1] - st[y1*(w+1)+x0] - st[y0*(w+1)+x1]
def saveGraph(w, h, data):
X = [x for x, y in data]
Y = [y for x, y in data]
x0, y0, x1, y1 = min(X), min(Y), max(X), max(Y)
kx = (w - 1) / (x1 - x0)
ky = (h - 1) / (y1 - y0)
img = [0] * (w * h)
for x, y in data:
ix = int((x - x0) * kx)
iy = int((y - y0) * ky)
img[iy * w + ix] += 1
for p in xrange(4):
boxsum(img, w, h, 2)
mx = max(img)
k = 255.0 / mx
    out = open("result.pgm", "wb")
    out.write(b"P5\n%i %i 255\n" % (w, h))
    out.write(bytes(int(v*k) for v in img))
out.close()
import random
data = [(random.random(), random.random())
        for i in range(30000)]
saveGraph(256, 256, data)
Edit
Of course the very definition of density in your case depends on a resolution radius, or is the density just +inf when you hit a point and zero when you don't?
The following is an animation built with the above program with just a few cosmetic changes:
used sqrt(average of squared values) instead of sum for the averaging pass
color-coded the results
stretched the result to always use the full color scale
drew antialiased black dots where the data points are
made an animation by incrementing the radius from 2 to 40
The total computing time of the 39 frames of the following animation with this cosmetic version is 5.4 seconds with PyPy and 26 seconds with standard Python.
Histograms
The histogram way is not the fastest, and can't tell the difference between an arbitrarily small separation of points and 2 * sqrt(2) * b (where b is bin width).
Even if you construct the x bins and y bins separately (O(N)), you still have to perform an a*b convolution (where a and b are the numbers of bins each way), which is close to N^2 for any dense system, and even bigger for a sparse one (well, a*b >> N^2 in a sparse system).
Looking at the code above, you seem to have a loop in grid_density() which runs over the number of bins in y inside a loop over the number of bins in x, which is why you're getting O(N^2) performance (although if you are already order N, which you can check by plotting run time against different numbers of elements, then you're just going to have to run less code per cycle).
If you want an actual distance function then you need to start looking at contact detection algorithms.
Contact Detection
Naive contact detection algorithms come in at O(N^2) in either RAM or CPU time, but there is an algorithm, rightly or wrongly attributed to Munjiza at St. Mary's college London, which runs in linear time and RAM.
You can read about it and implement it yourself from his book, if you like.
In fact, I have written this code myself: a python-wrapped C implementation in 2D, which is not really ready for production (it is still single-threaded, etc.), but it will run as close to O(N) as your dataset will allow. You set the "element size", which acts as a bin size (the code will call interactions on everything within b of another point, and sometimes between b and 2 * sqrt(2) * b), give it an array (a native python list) of objects with x and y properties, and my C module will call back to a python function of your choice to run an interaction function for matched pairs of elements. It's designed for running contact-force DEM simulations, but it will work fine on this problem too.
As I haven't released it yet, because the other bits of the library aren't ready, I'll have to give you a zip of my current source, but the contact detection part is solid. The code is LGPL'd.
You'll need Cython and a C compiler to make it work, and it's only been tested and working under *nix environments; if you're on Windows, you'll need the mingw C compiler for Cython to work at all.
Once Cython's installed, building/installing pynet should be a case of running setup.py.
The function you are interested in is pynet.d2.run_contact_detection(py_elements, py_interaction_function, py_simulation_parameters) (and you should check out the classes Element and SimulationParameters at the same level if you want it to throw fewer errors; look in the file at archive-root/pynet/d2/__init__.py to see the class implementations, they're trivial data holders with useful constructors.)
(I will update this answer with a public mercurial repo when the code is ready for more general release...)
Your solution is okay, but one clear problem is that you're getting dark regions despite there being a point right in the middle of them.
I would instead center an n-dimensional Gaussian on each point and evaluate the sum over each point you want to display. To reduce it to linear time in the common case, use query_ball_point to consider only points within a couple standard deviations.
If you find that the KDTree is really slow, why not call query_ball_point once every five pixels with a slightly larger threshold? It doesn't hurt too much to evaluate a few too many Gaussians.
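A rough sketch of that idea, in its straightforward per-pixel form (scipy.spatial.cKDTree does the neighbour lookups; the grid size and sigma here are illustrative):

import numpy as np
from scipy.spatial import cKDTree

def gaussian_density(points, size=256, sigma=3.0):
    """Sum a Gaussian per data point at each pixel, but only over the
    points within 3 sigma of that pixel (found via query_ball_point)."""
    tree = cKDTree(points)
    img = np.zeros((size, size))
    for iy in range(size):
        for ix in range(size):
            idx = tree.query_ball_point((ix, iy), 3 * sigma)
            if idx:
                d2 = ((points[idx] - (ix, iy)) ** 2).sum(axis=1)
                img[iy, ix] = np.exp(-d2 / (2 * sigma ** 2)).sum()
    return img

# usage: 30000 points already scaled to pixel coordinates
pts = np.random.rand(30000, 2) * 255
density = gaussian_density(pts)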
You can do this with a 2D, separable convolution (scipy.ndimage.convolve1d) of your original image with a gaussian shaped kernel. With an image size of MxM and a filter size of P, the complexity is O(PM^2) using separable filtering. The "Big-Oh" complexity is no doubt greater, but you can take advantage of numpy's efficient array operations which should greatly speed up your calculations.
Just a note, the histogram2d function should work fine for this. Did you play around with different bin sizes? Your initial histogram2d plot seems to just use the default bin sizes... but there's no reason to expect the default sizes to give you the representation you want. Having said that, many of the other solutions are impressive too.