In pyqtgraph you can scatter-plot each item by itself or a whole bunch of them in bulk (using spots). Working with large datasets I prefer the latter method, since the figure stays light and can be moved around without lagging all over the screen.
My problem:
Some of my symbols need an angle. That isn't much of a problem in itself, but if I add them to the plot separately it results in a laggy figure. So my problem is that I am currently unable to find a suitable way to subclass the whole thing and implement a small method for a "rotation"/"angle" keyword argument. Has anyone finished this task already, or does someone have an idea?
Thank you very much in advance!
After another look today I finally found that it was way too simple: just rotating my symbol before adding it to the ScatterPlotItem did the trick. For the sake of documentation, and maybe for other struggling programmers, a snippet:
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtGui

# define a bowtie-style symbol
_mos = np.asarray([
    [0.5, 0.25],
    [0.5, -0.25],
    [-0.5, 0.25],
    [-0.5, -0.25],
    [0.5, 0.25]
])
my_symbol = pg.arrayToQPath(_mos[:, 0], _mos[:, 1], connect='all')

# define color and stuff for your items
exit_item = pg.ScatterPlotItem(
    size=20,
    pen=pg.mkPen(128, 128, 128, 255),
    brush=pg.mkBrush(255, 255, 255, 255),
)

# calculate the angle between two points (example coordinates,
# replace with your own)
x0, y0, x1, y1 = 0.0, 0.0, 1.0, 1.0
angle = np.arctan2(y1 - y0, x1 - x0) * 180 / np.pi

# rotate the symbol by that angle
tr = QtGui.QTransform()
tr.rotate(angle)
my_rotated_symbol = tr.map(my_symbol)

# may be a whole list of spots with different angles and positions
exit_spots = []
exit_spots.append({
    'pos': (0, 0),
    'symbol': my_rotated_symbol
})

# add the spots to the item
exit_item.addPoints(exit_spots)

# create a plot and add the content
# (in newer pyqtgraph, GraphicsWindow is replaced by GraphicsLayoutWidget)
win = pg.GraphicsWindow()
plot = win.addPlot()
plot.addItem(exit_item)
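If every point needs its own orientation, the same trick extends to a loop over the spots, so everything is still added in one bulk call. A minimal sketch (the xs, ys, and angles arrays are made-up example data):

# hypothetical per-point data: positions and headings in degrees
xs = np.random.uniform(0, 10, 100)
ys = np.random.uniform(0, 10, 100)
angles = np.random.uniform(0, 360, 100)

spots = []
for x, y, a in zip(xs, ys, angles):
    tr = QtGui.QTransform()
    tr.rotate(a)
    # each spot carries its own pre-rotated copy of the base symbol
    spots.append({'pos': (x, y), 'symbol': tr.map(my_symbol)})
exit_item.addPoints(spots)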
I am quite intrigued by the idea of a homography and am trying to get it to work in a minimal example with Python and OpenCV. Yet my tests do not pass and I am not quite sure why. I pass a set of corresponding points into the findHomography function according to this, and then multiply by the homography matrix to receive my new point.
So the idea behind it is to find the planar coordinate transformation and then transform the points with
X' = H @ X
where X' are the coordinates in the new frame and X are the coordinates in the original frame.
Here is a minimal code example:
import cv2
import numpy as np
import matplotlib.pyplot as plt

points = np.array([
    [675, 585],
    [675, 1722],
    [3155, 580],
    [3162, 1722],
])
t_points = np.array([
    [0, 0],
    [0, 8.23],
    [23.77, 0],
    [23.77, 8.23]
])
pt = np.array([675, 580 + (1722 - 580) / 2, 0])
pt_test = np.array([0, 8.23 / 2, 0])

def get_h_matrix(src_list, dst_list):
    src_pts = np.array(src_list).reshape(-1, 1, 2)
    dst_pts = np.array(dst_list).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src_pts, dst_pts)
    return H

H = get_h_matrix(points, t_points)
transformed = H @ pt

plt.scatter(t_points[:, 0], t_points[:, 1], color='blue')
plt.scatter(transformed[0], transformed[1], color='orange')
plt.scatter(pt_test[0], pt_test[1], color='green')
plt.show()
plt.scatter(points[:, 0], points[:, 1], color='blue')
plt.scatter(pt[0], pt[1], color='orange')
plt.show()
where the output corresponds to the following plot:
[Plot of the coordinate transformation.]
We can see that the green point, where the transformed point actually should be, is not even close to the orange point, to which the homography transformed it.
Maybe somebody can see the error in my train of thought.
Your help is kindly appreciated.
EDIT: I swapped the points array a few times because I thought I had made a mistake, but the transformation is still wrong.
As Micka mentioned in the comments, the problem is the representation of the test point. It has to be given in homogeneous coordinates, i.e.
pt = [x, y, 1]
instead of
pt = [x, y, 0]
After the transformation, the result is converted back from homogeneous coordinates by
pt' = pt' / pt'[2]
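Applied to the snippet above, the fix looks like this (a sketch reusing the same H and points):

# test point in homogeneous coordinates: [x, y, 1] instead of [x, y, 0]
pt = np.array([675, 580 + (1722 - 580) / 2, 1])
transformed = H @ pt
# convert back from homogeneous coordinates
transformed = transformed / transformed[2]
print(transformed[:2])  # should land near pt_test = [0, 8.23/2]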
I appreciate the help.
I am plotting a 3D image using GLSurfacePlotItem.
My Z-axis data lies between 0 and 255:
gl.GLSurfacePlotItem(x=x[:, 0], y=y[0, :], shader='heightColor', computeNormals=False, smooth=False)
Following is the ColorMap:
p4.shader()['colorMap'] = np.array([0.45, 0, 0.1, 0.005, 0.5, 2, 0, 0.05, 0.2])
I do get some color shading in the output image, but I would like to know how I can enable multiple colors in the same plot.
Thanks
I came across the following answer on Google Groups, which I am copying here, as it is the first item I have come across on the web giving details of how the colormap works in pyqtgraph for 3D plots.
Copied across from Google Groups:
Hi,
Yeah, the GL stuff doesn't use the same colour maps as the 2D images. I can't see any documentation on the shaders (http://www.pyqtgraph.org/documentation/3dgraphics/glmeshitem.html) that are used to colour the surface. Looking at the code under shaders.py does help though. For the "heightColor" shader, the 9 numbers in the array that are used (as in the Surface Plot example) are variables used in a formula to compute the RGB colour.
From comment in code:
## colors fragments by z-value.
## This is useful for coloring surface plots by height.
## This shader uses a uniform called "colorMap" to determine how to map the colors:
## red = pow(z * colorMap[0] + colorMap[1], colorMap[2])
## green = pow(z * colorMap[3] + colorMap[4], colorMap[5])
## blue = pow(z * colorMap[6] + colorMap[7], colorMap[8])
## (set the values like this: shader['uniformMap'] = array([...]))
I assume the output RGB values are expressed as a range from 0 to 1. So to tweak the example to work with ranges from say zmin to zmax, I think something like:
p4.shader()['colorMap'] = np.array([0.2*(zmax - zmin), 2 - zmin, 0.5, 0.2*(zmax - zmin), 1 - zmin, 1, 0.2*(zmax - zmin), 0 - zmin, 2])
Patrick
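To preview what a given nine-number array will produce, the quoted formula can be evaluated directly in NumPy. A small sketch (my own addition, it just evaluates the formula above for a few z values):

import numpy as np

def height_color(z, cmap):
    # evaluates the quoted shader formula for a single z value
    r = (z * cmap[0] + cmap[1]) ** cmap[2]
    g = (z * cmap[3] + cmap[4]) ** cmap[5]
    b = (z * cmap[6] + cmap[7]) ** cmap[8]
    return np.clip([r, g, b], 0, 1)

cmap = np.array([0.45, 0, 0.1, 0.005, 0.5, 2, 0, 0.05, 0.2])
for z in (0.0, 0.5, 1.0):
    print(z, height_color(z, cmap))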
I'm trying to combine HoloViews' PointDraw functionality with its sample functionality (I couldn't find a specific page, but it is shown in action here: http://holoviews.org/gallery/demos/bokeh/mandelbrot_section.html).
Specifically, I want to have two subplots with interactivity. The one on the left shows a colormap, and the one on the right shows a sample (a line cut) of the colormap, achieved with .sample. Inside this right plot I'd like to have points that can be drawn, moved, and removed, as is typically done with PointDraw. I'd then also like to access their coordinates once I am done moving them, which is possible when following the example from the documentation.
Now, I've got the two working independently, following the examples above. But when combined in the way that I have, the result is a plot that looks like this:
It has the elements I am looking for, except the points cannot be interacted with. This is somehow related to HoloViews' streams, but I am not sure how to solve it. Would anyone be able to help out?
The code that generates the above:
%%opts Points (color='color' size=10) [tools=['hover'] width=400 height=400]
%%opts Layout [shared_datasource=True] Table (editable=True)
import param
import numpy as np
import holoviews as hv
hv.extension('bokeh', 'matplotlib')
from holoviews import streams
def lorentzian(x, x0, gamma):
    return 1/np.pi*1/2*gamma/((x-x0)**2+(1/2*gamma)**2)
xs = np.arange(0,4*np.pi,0.05)
ys = np.arange(0,4*np.pi,0.05)
data = hv.OrderedDict({'x': [2., 2., 2.], 'y': [0.5, 0.4, 0.2], 'color': ['red', 'green', 'blue']})
z = lorentzian(xs.reshape(len(xs),1),2*np.sin(ys.reshape(1,len(ys)))+5,1) + lorentzian(xs.reshape(len(xs),1),-2*np.sin(ys.reshape(1,len(ys)))+5,1)
def dispersions(f0):
    points = hv.Points(data, vdims=['color']).redim.range(x=(xs[0], xs[-1]), y=(np.min(z), np.max(z)))
    point_stream = streams.PointDraw(data=points.columns(), source=points, empty_value='black')
    image = hv.Image(z, bounds=(xs[0], ys[0], xs[-1], ys[-1]))
    return image * hv.VLine(x=f0) + image.sample(x=f0) * points
dmap = hv.DynamicMap(dispersions, kdims=['f0'])
dmap.redim.range(f0=(0,10)).redim.step(f0=(0.1))
I apologize for the weird function that we are plotting; I couldn't immediately come up with a simpler one.
Based on your example it's not yet quite clear to me what you will be doing with the points but I do have some suggestions on structuring the code better.
In general it is better to compose plots from several separate DynamicMaps than to create a single DynamicMap that does everything. Not only is it more composable, but you also get handles on the individual objects, allowing you to set up streams to listen to changes on each component. Most importantly, it's more efficient: only the plots that need to be updated will be updated. In your example I'd split up the code as follows:
def lorentzian(x, x0, gamma):
    return 1/np.pi*1/2*gamma/((x-x0)**2+(1/2*gamma)**2)

xs = np.arange(0, 4*np.pi, 0.05)
ys = np.arange(0, 4*np.pi, 0.05)
# z must be defined before points and image, which both reference it
z = lorentzian(xs.reshape(len(xs), 1), 2*np.sin(ys.reshape(1, len(ys)))+5, 1) + lorentzian(xs.reshape(len(xs), 1), -2*np.sin(ys.reshape(1, len(ys)))+5, 1)

data = hv.OrderedDict({'x': [2., 2., 2.], 'y': [0.5, 0.4, 0.2], 'color': ['red', 'green', 'blue']})
points = hv.Points(data, vdims=['color']).redim.range(x=(xs[0], xs[-1]), y=(np.min(z), np.max(z)))
image = hv.Image(z, bounds=(xs[0], ys[0], xs[-1], ys[-1]))
def vline(f0):
    return hv.VLine(x=f0)

def sample(f0):
    return image.sample(x=f0)

dim = hv.Dimension('f0', step=0.1, range=(0, 10))
vline_dmap = hv.DynamicMap(vline, kdims=[dim])
sample_dmap = hv.DynamicMap(sample, kdims=[dim])
point_stream = streams.PointDraw(data=points.columns(), source=points, empty_value='black')
(image * vline_dmap + sample_dmap * points)
Since the Image and Points are not themselves dynamic, there is no reason to put them inside the DynamicMap, and the VLine and the sampled Curve are easily split out. The PointDraw stream doesn't do anything yet, but you can now set it up as yet another DynamicMap which you can compose with the rest.
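For example, a minimal sketch of that last step (my addition; record_points is a made-up name): a DynamicMap driven by the stream that mirrors the drawn points into a Table, so their coordinates stay visible as you move them:

def record_points(data):
    # 'data' holds the point columns the PointDraw stream keeps in sync
    return hv.Table(data, ['x', 'y'], 'color')

table = hv.DynamicMap(record_points, streams=[point_stream])
(image * vline_dmap + sample_dmap * points + table).cols(2)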
I'm wondering if something is possible using VisPy, or if I should start looking for other alternatives.
Here's what's going on: I'm writing an undergraduate thesis on some of the paradox-like situations of Special Relativity. What I am doing is essentially this: I'll have a script in Python producing arrays of ordered triplets (points in 3D space) that change with a time-like variable (the "time-like" variable is actually velocity, but it will count on just like time). I need to animate these points, which in the simpler cases will form rods, 2D squares, and 3D cubes, and of course produce a .gif or similar.
Coding is definitely not my strong suit, but I have been using Python for a while. I have looked into VisPy and like that it uses OpenGL capability, and generally like how nice the various examples are (which I can get to work).
My question is: is VisPy the best tool for what I need to do? I am having a hard time figuring out how to make VisPy turn individual points into 3D objects and such. I have played around with the various geometry helpers like create_cube, but it doesn't look like I can shift just the vertices around.
If anyone has any suggestions on where to start, or if Mayavi or another thing would be easier please let me know.
Update: I did figure out how to make a really nice (and simple) 3D cube outline, which is exactly what I'm wanting. I am still not sure how to animate it, as the VisPy animation examples are quite different.
Does anyone have any direction? In the code below, the initial array of points needs to update each frame (the points can be imported or calculated in the script, either way):
import numpy as np
import vispy
import vispy.scene
from vispy.scene import visuals
from vispy import app
canvas = vispy.scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
# generate data
pos = np.array([[0, 0, 0], [0.5, 0.5, 0.5], [0, 0.5, 0.5], [0.5, 0, 0.5],
[0.5, 0.5, 0], [0, 0, 0.5], [0, 0.5, 0], [0.5, 0, 0]])
# These are the data that need to be updated each frame --^
scatter = visuals.Markers()
scatter.set_data(pos, edge_color=None, face_color=(1, 1, 1, .5), size=10)
view.add(scatter)
view.camera = 'turntable'
# just makes the axes
axis = visuals.XYZAxis(parent=view.scene)
if __name__ == '__main__':
    import sys
    if sys.flags.interactive != 1:
        vispy.app.run()
I had the same problem and it was tough to find a solution. I checked the VisPy API and changed the code, and it works well.
Hope this is helpful for you and others.
import sys
import numpy as np
import vispy
import vispy.scene
from vispy.scene import visuals
from vispy import app

canvas = vispy.scene.SceneCanvas(keys='interactive', show=True)
view = canvas.central_widget.add_view()
view.camera = 'turntable'

# generate data: the positions returned here are the data that
# need to be updated each frame
def solver(t):
    pos = np.array([[0.5 + t/10000, 0.5, 0], [0, 0, 0.5], [0, 0.5, 0], [0.5, 0, 0]])
    return pos

scatter = visuals.Markers()
view.add(scatter)
#view.camera = scene.TurntableCamera(up='z')

# just makes the axes
axis = visuals.XYZAxis(parent=view.scene)

t = 0.0
def update(ev):
    global t
    t += 1.0
    scatter.set_data(solver(t), edge_color=None, face_color=(1, 1, 1, .5), size=10)

timer = app.Timer()
timer.connect(update)
timer.start(0)

if __name__ == '__main__':
    canvas.show()
    if sys.flags.interactive == 0:
        app.run()
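Since the original goal was to produce a .gif, one possible way (my addition, a sketch using imageio, not part of the answer above) is to grab each frame with canvas.render() inside the update callback and write the stack out once enough frames are collected:

import imageio

frames = []

def update_and_record(ev):
    update(ev)
    # render() returns the current canvas contents as an RGBA array
    frames.append(canvas.render())
    if len(frames) >= 100:  # stop after 100 frames and write the gif
        timer.stop()
        imageio.mimsave('animation.gif', frames)

# connect this callback instead of the plain update() above
timer = app.Timer()
timer.connect(update_and_record)
timer.start(0)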
I want to generate a surface that looks like a hemisphere. What I have done so far is read an existing BEM mesh and show the scalar values on it. But now I have to show the scalar values on a hemisphere instead of the BEM mesh, and I don't know how to generate a triangular mesh that looks like a hemisphere.
This hemisphere needs to contain a set of N points (x, y, z) [using mlab.triangular_mesh], and at each vertex I need to represent a data value (float), either as a number or through variations in a colormap (e.g. blue for the lowest value to red for the highest). data is an array of 2562 float values; it could be randomly generated, as it comes from other code. The points also come from another piece of code; their shape is (2562, 3), but they do not form a hemisphere.
This was the program I used for viewing using the BEM surface
fname = data_path + '/subjects/sample/bem/sample-5120-5120-5120-bem-sol.fif'
surfaces = mne.read_bem_surfaces(fname, add_geom=True)
print("Number of surfaces : %d" % len(surfaces))
head_col = (0.95, 0.83, 0.83)  # light pink
colors = [head_col]
try:
    from enthought.mayavi import mlab
except ImportError:
    from mayavi import mlab
mlab.figure(size=(600, 600), bgcolor=(0, 0, 0))
for c, surf in zip(colors, surfaces):
    points = surf['rr']
    faces = surf['tris']
    s = data
    mlab.triangular_mesh(points[:, 0], points[:, 1], points[:, 2], faces,
                         color=c, opacity=1, scalars=s[:, 0])
    #mesh = mlab.triangular_mesh(x, y, z, triangles, representation='wireframe', opacity=0)
    #point_data = mesh.mlab_source.dataset.point_data
    #point_data.scalars = t
    #point_data.scalars.name = 'Point data'
    #mesh2 = mlab.pipeline.set_active_attribute(mesh, point_scalars='Point data')
As others have pointed out your question is not very clear, and does not include an easily reproducible example -- your example would take considerable work for us to reproduce and you have not described the steps you have taken very clearly.
What you are trying to do is easy. Scalars can be defined for each vertex (i.e., each VTK point):
surf = mlab.triangular_mesh(x,y,z,triangles)
surf.mlab_source.scalars = t
And you need to set a flag to get them to appear, which I think might be your problem:
surf.actor.mapper.scalar_visibility=True
Here is some code to generate a half-sphere. It produces a VTK polydata. I'm not 100% sure if the mayavi source is the same source type as triangular_mesh but I think it is.
res = 250.  # desired resolution (number of samples on the sphere)
phi, theta = np.mgrid[0:np.pi:np.pi/res, 0:np.pi:np.pi/res]
x = np.cos(theta) * np.sin(phi)
y = np.sin(theta) * np.sin(phi)
z = np.cos(phi)
mlab.mesh(x, y, z, color=(1, 1, 1))
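To color such a hemisphere by data, mlab.mesh also accepts per-vertex scalars directly. A sketch (my addition, with random values standing in for real data; with triangular_mesh, the surf.mlab_source.scalars assignment shown earlier works the same way):

# one scalar per grid vertex, mapped through a blue-to-red colormap
surf = mlab.mesh(x, y, z, scalars=np.random.rand(*z.shape), colormap='blue-red')
mlab.colorbar(surf)
mlab.show()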