I have a problem with entity priorities.
I created two 2D 'quad' entities with different colors, but when they appear on the screen, sometimes the color of one prevails and sometimes the other.
from ursina import *
app = Ursina()
o = Entity(model='quad', scale_x=0.3, scale_y=0.3, collider="box", color=color.red, position=(-0.45, -1.4))
Re_bianco = Entity(model='quad', scale_x=0.3, scale_y=0.3, collider="box", position=(-0.5, -1.4))
app.run()
I tried to use the z axis, but the result is not optimal: offsetting along z makes the quad look distorted, and even when they overlap you can still see both.
from ursina import *
app = Ursina()
o = Entity(model='quad', scale_x=0.3, scale_y=0.3, collider="box", color=color.red, position=(-0.5, -1.4, 0))
Re_bianco = Entity(model='quad', scale_x=0.3, scale_y=0.3, collider="box", position=(-0.5, -1.4, 1))
app.run()
Thanks for your help
Currently both entities are at the same depth, 0. How can the program know which one should be rendered above the other?
You can specify this by giving a third position value, for example position=(-0.45, -1.4, -1), which would be in front of an entity at position (-0.5, -1.4, 0).
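To make the idea concrete, here is a tiny plain-Python sketch (no ursina needed; the entity dicts are hypothetical stand-ins) of why distinct z values make the result deterministic: the renderer can sort entities by depth and draw them back to front, so the entity with the smaller z ends up on top, matching the position=(-0.45, -1.4, -1) suggestion above.

```python
# Hypothetical stand-ins for the two quads; only the z value matters here.
red_quad = {'name': 'red', 'z': 0}
white_quad = {'name': 'white', 'z': -1}  # smaller z = closer to the camera

# Painter's algorithm: draw far-to-near so nearer entities overwrite farther ones.
draw_order = sorted([red_quad, white_quad], key=lambda e: e['z'], reverse=True)
top_most = draw_order[-1]  # the last entity drawn appears on top
print(top_most['name'])  # white
```

With both quads at z = 0 there is no such ordering, and the result depends on whatever order the renderer happens to pick.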
I'm trying to scale a QPolygonF that is on a QGraphicsScene's QGraphicsView around its origin.
However, even after translating the polygon (poly_2) to its origin (using QPolygonF.translate() and the center coordinates of its bounding rect, (x + width) / 2 and (y + height) / 2), the new polygon is still placed at the wrong location.
The blue polygon should be scaled according to the origin of poly_2 (please see the image below: black is the original polygon, blue is the result of the code below, and orange represents the intended outcome).
I thought the issue might be that the coordinates are global and should be local, but unfortunately that does not solve the issue.
Here's the code:
import sys
import PyQt5
from PyQt5.QtCore import *  # QPointF, QRectF
from PyQt5.QtGui import *  # QPainterPath, QPolygonF, QBrush, QPen, QFont, QColor, QTransform
from PyQt5.QtWidgets import *  # QApplication, QGraphicsScene, QGraphicsView, QGraphicsSimpleTextItem
poly_2_coords= [PyQt5.QtCore.QPointF(532.35, 274.98), PyQt5.QtCore.QPointF(525.67, 281.66), PyQt5.QtCore.QPointF(518.4, 292.58), PyQt5.QtCore.QPointF(507.72, 315.49), PyQt5.QtCore.QPointF(501.22, 326.04), PyQt5.QtCore.QPointF(497.16, 328.47), PyQt5.QtCore.QPointF(495.53, 331.71), PyQt5.QtCore.QPointF(488.24, 339.02), PyQt5.QtCore.QPointF(480.94, 349.56), PyQt5.QtCore.QPointF(476.09, 360.1), PyQt5.QtCore.QPointF(476.89, 378.76), PyQt5.QtCore.QPointF(492.3, 393.35), PyQt5.QtCore.QPointF(501.22, 398.21), PyQt5.QtCore.QPointF(527.17, 398.21), PyQt5.QtCore.QPointF(535.28, 390.1), PyQt5.QtCore.QPointF(540.96, 373.89), PyQt5.QtCore.QPointF(539.64, 356.93), PyQt5.QtCore.QPointF(541.46, 329.0), PyQt5.QtCore.QPointF(543.39, 313.87), PyQt5.QtCore.QPointF(545.83, 300.89), PyQt5.QtCore.QPointF(545.83, 276.56), PyQt5.QtCore.QPointF(543.39, 267.64), PyQt5.QtCore.QPointF(537.81, 268.91)]
def main():
    app = QApplication(sys.argv)
    scene = QGraphicsScene()
    view = QGraphicsView(scene)
    pen = QPen(QColor(0, 20, 255))
    scene.addPolygon(QPolygonF(poly_2_coords))
    poly_2 = QPolygonF(poly_2_coords)
    trans = QTransform().scale(1.5, 1.5)
    # poly_22 = trans.mapToPolygon(QRect(int(poly_2.boundingRect().x()), int(poly_2.boundingRect().y()), int(poly_2.boundingRect().width()), int(poly_2.boundingRect().height())))
    # trans.mapToPolygon()
    # scene.addPolygon(QPolygonF(poly_22), QPen(QColor(0, 20, 255)))
    poly_2.translate((poly_2.boundingRect().x() + poly_2.boundingRect().width()) / 2,
                     (poly_2.boundingRect().y() + poly_2.boundingRect().height()) / 2)
    print(f'poly_2.boundingRect().x() {poly_2.boundingRect().x()}+poly_2.boundingRect().width(){poly_2.boundingRect().width()}')
    trans = QTransform().scale(1.4, 1.4)
    # poly_2.setTransformOriginPoint()
    poly_22 = trans.map(poly_2)
    scene.addPolygon(poly_22, QPen(QColor(0, 20, 255)))
    view.show()
    sys.exit(app.exec_())

if __name__ == "__main__":
    main()
Edit: I've tried saving the polygon as a QGraphicsItem, setting its transformation origin point according to the bounding box's middle X, Y, and then mapping from global to scene coordinates, but no luck: the new polygon is still drawn in the wrong place.
poly_2 = QPolygonF(poly_2_coords)
poly = scene.addPolygon(poly_2)
point = QPointF((poly_2.boundingRect().x() + poly_2.boundingRect().width()) / 2,
                (poly_2.boundingRect().y() + poly_2.boundingRect().height()) / 2)
poly.setTransformOriginPoint(point)
poly.setScale(3)
If I set point to just the X, Y of the bounding rectangle, the result seems closer to what I need; however, in that case the origin point is obviously wrong. Is it just random luck that this seems closer to what I need?
Before considering the problem of the translation, there is a more important aspect to consider: if you want to create a transformation based on the center of a polygon, you must first find that center. That point is called the centroid, the geometric center of any polygon.
While there are simple formulas for all basic geometric shapes, finding the centroid of a (possibly irregular) polygon with an arbitrary number of vertices is a bit more complex.
Using the arithmetic mean of vertices is not a viable option, as even in a simple square you might have multiple points on a single side, which would move the computed "center" towards those points.
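To illustrate the problem, a quick plain-Python check (the coordinates are made up): adding two collinear points along one side of a unit square shifts the vertex mean away from the true center, even though the shape itself is unchanged.

```python
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
# the same square, with two extra points along the bottom edge
square_extra = [(0, 0), (0.25, 0), (0.75, 0), (1, 0), (1, 1), (0, 1)]

def vertex_mean(points):
    # plain arithmetic mean of the vertices -- NOT the centroid
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

print(vertex_mean(square))        # (0.5, 0.5) -- happens to be the true center
print(vertex_mean(square_extra))  # pulled towards the bottom edge
```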
The formula can be found in the Wikipedia article linked above, while a valid python implementation is available in this answer.
I modified the formula of that answer in order to accept a sequence of QPoints, while improving readability and performance, but the concept remains the same:
def centroid(points):
    if len(points) < 3:
        raise ValueError('At least 3 points are required')
    # https://en.wikipedia.org/wiki/Centroid#Of_a_polygon
    # https://en.wikipedia.org/wiki/Shoelace_formula
    # the computation uses consecutive pairs from the sequence, with the
    # last point paired with the first one:
    # (p[0], p[1]), (p[1], p[2]) ... (p[n], p[0])
    area = cx = cy = 0
    p1 = points[0]
    for p2 in points[1:] + [p1]:
        shoelace = p1.x() * p2.y() - p2.x() * p1.y()
        area += shoelace
        cx += (p1.x() + p2.x()) * shoelace
        cy += (p1.y() + p2.y()) * shoelace
        p1 = p2
    A = 0.5 * area
    factor = 1 / (6 * A)
    return cx * factor, cy * factor
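The function can be sanity-checked without Qt by feeding it objects that only expose x() and y(); the tiny stand-in class below is an assumption made purely for testing, and the function body is repeated so the snippet is self-contained. Note how an extra collinear point no longer skews the result:

```python
# Minimal QPoint-like stand-in (assumption: only .x() and .y() are needed).
class P:
    def __init__(self, x, y):
        self._x, self._y = x, y
    def x(self):
        return self._x
    def y(self):
        return self._y

def centroid(points):
    if len(points) < 3:
        raise ValueError('At least 3 points are required')
    area = cx = cy = 0
    p1 = points[0]
    for p2 in points[1:] + [p1]:
        shoelace = p1.x() * p2.y() - p2.x() * p1.y()
        area += shoelace
        cx += (p1.x() + p2.x()) * shoelace
        cy += (p1.y() + p2.y()) * shoelace
        p1 = p2
    factor = 1 / (6 * 0.5 * area)
    return cx * factor, cy * factor

# the extra collinear point on the bottom edge does not skew the centroid
square = [P(0, 0), P(0.5, 0), P(1, 0), P(1, 1), P(0, 1)]
print(centroid(square))  # (0.5, 0.5)
```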
Then, you have two options, depending on what you want to do with the resulting item.
Scale the item
In this case, you create a QGraphicsPolygonItem like the original one, then set its transform origin point using the formula above, and scale it:
poly_2 = QtGui.QPolygonF(poly_2_coords)
item2 = scene.addPolygon(poly_2, QtGui.QPen(QtGui.QColor(0, 20, 255)))
item2.setTransformOriginPoint(*centroid(poly_2_coords))
item2.setScale(1.5)
Use a QTransform
With Qt transformations some special care must be taken, as scaling always uses (0, 0) as the origin point.
To scale around a specified point, you must first translate the matrix to that point, then apply the scale, and finally restore the matrix translation to its origin:
poly_2 = QtGui.QPolygonF(poly_2_coords)
cx, cy = centroid(poly_2_coords)
trans = QtGui.QTransform()
trans.translate(cx, cy)
trans.scale(1.5, 1.5)
trans.translate(-cx, -cy)
poly_2_scaled = trans.map(poly_2)
scene.addPolygon(poly_2_scaled, QtGui.QPen(QtGui.QColor(0, 20, 255)))
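The translate–scale–translate chain is the standard "scale about a point" composition: each point p maps to c + s·(p − c), where c is the centroid and s the scale factor. A plain-Python sketch of that arithmetic (no Qt required):

```python
def scale_about(point, center, s):
    # translate to origin, scale, translate back: p' = c + s * (p - c)
    px, py = point
    cx, cy = center
    return (cx + s * (px - cx), cy + s * (py - cy))

# the centroid itself is a fixed point of the transform
print(scale_about((2.0, 3.0), (2.0, 3.0), 1.5))  # (2.0, 3.0)
# other points move away from the centroid by the scale factor
print(scale_about((4.0, 3.0), (2.0, 3.0), 1.5))  # (5.0, 3.0)
```

This is why the centroid stays put while the rest of the polygon spreads out around it.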
This is exactly what QGraphicsItems do when using the basic setScale() and setRotation() transformations.
Shape origin point and item position
Remember that QGraphicsItems are always created with their position at 0, 0.
This might not seem obvious, especially for basic shapes: when you create a QGraphicsRectItem giving its x, y, width, height, the position will still be 0, 0. When dealing with complex geometry management, it's usually better to create basic shapes with the origin/reference at 0, 0 and then move the item to x, y.
For complex polygons like yours, a possibility is to translate the centroid of the polygon to 0, 0, and then move the item to the actual centroid coordinates:
item = scene.addPolygon(polygon.translated(-cx, -cy))
item.setPos(cx, cy)
item.setScale(1.5)
This might make things easier for development (the mapped points will always be consistent with the item position), and the fact that you don't need to change the transform origin point anymore makes reverse mapping even simpler.
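The equivalence is easy to verify with plain arithmetic (made-up coordinates): the world position of every vertex is the item position plus the local vertex, so translating the vertices by −centroid and setting the item position to the centroid leaves the drawn shape exactly where it was:

```python
verts = [(4.0, 2.0), (6.0, 2.0), (5.0, 5.0)]
cx, cy = 5.0, 3.0  # assume this is the polygon's centroid

# local coordinates with the centroid moved to the item origin
local = [(x - cx, y - cy) for x, y in verts]
item_pos = (cx, cy)

# world position of each vertex: item position + local coordinate
world = [(item_pos[0] + x, item_pos[1] + y) for x, y in local]
print(world == verts)  # True
```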
First project in Qt.
I'm having trouble translating/rotating a rect along a line. Basically, I want to align the rect with the position of the line. When I change the position of the circle, the rect should translate along the line. See the images below.
What I have at the moment:
w_len = len(str(weight)) / 3 * r + r / 3
weight_v = Vector(r if w_len <= r else w_len, r)
weight_rectangle = QRectF(*(mid - weight_v), *(2 * weight_v))
painter.drawRect(weight_rectangle)
mid is just a vector with the coordinates of the middle of the link; weight_v is a vector based on the text size.
Any pointers? Should I look at adding a translate to the painter? Whenever I try to add a translation to the painter, it breaks the other shapes as well.
t = QTransform()
t.translate(-5 ,-5)
t.rotate(90)
painter.setTransform(t)
painter.drawRect(weight_rectangle)
painter.resetTransform()
Update:
With the answer below I was able to fix the rotation, many thanks. However, it looks like my text is not displaying correctly.
I have the following code:
painter.translate(center_of_rec_x, center_of_rec_y)
painter.rotate(-link_paint.angle())
rx = -(weight_v[0] * 0.5)
ry = -weight_v[1]
new_rec = QRect(rx, ry, weight_v[0], 2 * weight_v[1])
painter.drawRect(new_rec)
painter.drawText(new_rec, Qt.AlignCenter, str(weight))
Update2:
All is fine, it was a mistake in my code: I was taking the wrong link angle.
Thanks.
Rotation is always done around the origin point (0, 0), so you need to translate to the desired origin of the rotation and then apply it.
Also, when applying any temporary change to the painter, save() and restore() should be used: this stores the current state of the painter and restores it afterwards, undoing any transformation applied in the meantime. Painter states can be nested, and you can save multiple times to apply multiple "layers" of painter state modifications. Just remember that all states must be restored to the base state before releasing (ending) the painter.
Since you didn't provide an MRE, I created a small widget to show how this works:
class AngledRect(QtWidgets.QWidget):
    def __init__(self):
        QtWidgets.QWidget.__init__(self)
        self.setMinimumSize(200, 200)

    def paintEvent(self, event):
        qp = QtGui.QPainter(self)
        qp.setRenderHints(qp.Antialiasing)
        contents = self.contentsRect()
        # draw a line from the top left to the bottom right of the widget
        line = QtCore.QLineF(contents.topLeft(), contents.bottomRight())
        qp.drawLine(line)
        # save the current state of the painter
        qp.save()
        # translate to the center of the painting rectangle
        qp.translate(contents.center())
        # apply an inverted rotation, since the line angle is counterclockwise
        qp.rotate(-line.angle())
        # create a rectangle that is centered at the origin point
        rect = QtCore.QRect(-40, -10, 80, 20)
        qp.setPen(QtCore.Qt.white)
        qp.setBrush(QtCore.Qt.black)
        qp.drawRect(rect)
        qp.drawText(rect, QtCore.Qt.AlignCenter, '{:.05f}'.format(line.angle()))
        qp.restore()
        # ... other painting ...
For simple transformations, using translate and rotate is usually enough, but the above is almost identical to:
transform = QtGui.QTransform()
transform.translate(contents.center().x(), contents.center().y())
transform.rotate(-line.angle())
qp.save()
qp.setTransform(transform)
# ...
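The translate-then-rotate composition can also be checked numerically: after translate(center) and rotate(angle), a point at local (x, y) lands at center + R(angle)·(x, y). A standalone sketch with math only (the numbers are arbitrary, and Qt's y-down screen convention is ignored here since it only flips the sign of the angle):

```python
import math

def map_point(x, y, cx, cy, angle_deg):
    # translate to (cx, cy), then rotate by angle_deg about that point,
    # mirroring painter.translate(center) followed by painter.rotate(angle)
    a = math.radians(angle_deg)
    rx = x * math.cos(a) - y * math.sin(a)
    ry = x * math.sin(a) + y * math.cos(a)
    return (cx + rx, cy + ry)

# the rectangle's own center (local 0, 0) stays pinned at the widget center
print(map_point(0, 0, 100, 100, 45))  # (100.0, 100.0)
# a corner of the 80x20 rect rotates around that center
x, y = map_point(40, 10, 100, 100, 90)
print(round(x, 6), round(y, 6))  # 90.0 140.0
```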
I've written code to produce cylinder objects using VTK in Python. The code works fine: it produces a 3D scene where I can zoom or orbit around the cylinders I've made. The problem is that I want to export this rendered scene to ParaView to view it and save it for later work. How can I do this?
Here is the code that produces a Y-shape with cylinders:
import vtk
import numpy as np

'''
Adding multiple actors to one renderer scene using the VTK package with the Python API.
Each cylinder is an actor with three input specifications: start point, end point and radius.
After creating all the actors, the preferred actors are added to a list, and that list is the
input to the renderer scene.
A list or numpy array with an appropriate 3x1 shape can be used to specify the start and end points.

There are two alternative ways to apply the transform.
1) Use vtkTransformPolyDataFilter to create a new transformed polydata.
   This method is useful if the transformed polydata is needed later in the pipeline.
   To do this, set USER_MATRIX = True
2) Apply the transform directly to the actor using vtkProp3D's SetUserMatrix.
   No new data is produced.
   To do this, set USER_MATRIX = False
'''
USER_MATRIX = True

def cylinder_object(startPoint, endPoint, radius, my_color="DarkRed"):
    colors = vtk.vtkNamedColors()
    # Create a cylinder.
    # The cylinder height vector is (0, 1, 0).
    # The cylinder center is in the middle of the cylinder.
    cylinderSource = vtk.vtkCylinderSource()
    cylinderSource.SetRadius(radius)
    cylinderSource.SetResolution(50)
    # Generate a random start and end point
    # startPoint = [0] * 3
    # endPoint = [0] * 3
    rng = vtk.vtkMinimalStandardRandomSequence()
    rng.SetSeed(8775070)  # For testing.
    # Compute a basis
    normalizedX = [0] * 3
    normalizedY = [0] * 3
    normalizedZ = [0] * 3
    # The X axis is a vector from start to end
    vtk.vtkMath.Subtract(endPoint, startPoint, normalizedX)
    length = vtk.vtkMath.Norm(normalizedX)
    vtk.vtkMath.Normalize(normalizedX)
    # The Z axis is an arbitrary vector cross X
    arbitrary = [0] * 3
    for i in range(0, 3):
        rng.Next()
        arbitrary[i] = rng.GetRangeValue(-10, 10)
    vtk.vtkMath.Cross(normalizedX, arbitrary, normalizedZ)
    vtk.vtkMath.Normalize(normalizedZ)
    # The Y axis is Z cross X
    vtk.vtkMath.Cross(normalizedZ, normalizedX, normalizedY)
    matrix = vtk.vtkMatrix4x4()
    # Create the direction cosine matrix
    matrix.Identity()
    for i in range(0, 3):
        matrix.SetElement(i, 0, normalizedX[i])
        matrix.SetElement(i, 1, normalizedY[i])
        matrix.SetElement(i, 2, normalizedZ[i])
    # Apply the transforms
    transform = vtk.vtkTransform()
    transform.Translate(startPoint)    # translate to the starting point
    transform.Concatenate(matrix)      # apply the direction cosines
    transform.RotateZ(-90.0)           # align the cylinder to the x axis
    transform.Scale(1.0, length, 1.0)  # scale along the height vector
    transform.Translate(0, .5, 0)      # translate to the start of the cylinder
    # Transform the polydata
    transformPD = vtk.vtkTransformPolyDataFilter()
    transformPD.SetTransform(transform)
    transformPD.SetInputConnection(cylinderSource.GetOutputPort())
    # Create a mapper and actor
    mapper = vtk.vtkPolyDataMapper()
    actor = vtk.vtkActor()
    if USER_MATRIX:
        mapper.SetInputConnection(cylinderSource.GetOutputPort())
        actor.SetUserMatrix(transform.GetMatrix())
    else:
        mapper.SetInputConnection(transformPD.GetOutputPort())
    actor.SetMapper(mapper)
    actor.GetProperty().SetColor(colors.GetColor3d(my_color))
    return actor

def render_scene(my_actor_list):
    renderer = vtk.vtkRenderer()
    for arg in my_actor_list:
        renderer.AddActor(arg)
    namedColors = vtk.vtkNamedColors()
    renderer.SetBackground(namedColors.GetColor3d("SlateGray"))
    window = vtk.vtkRenderWindow()
    window.SetWindowName("Oriented Cylinder")
    window.AddRenderer(renderer)
    interactor = vtk.vtkRenderWindowInteractor()
    interactor.SetRenderWindow(window)
    # Visualize
    window.Render()
    interactor.Start()

if __name__ == '__main__':
    my_list = []
    p0 = np.array([0, 0, 0])
    p1 = np.array([0, 10, 0])
    p2 = np.array([7, 17, 0])
    p3 = np.array([-5, 15, 0])
    my_list.append(cylinder_object(p0, p1, 1, "Red"))
    my_list.append(cylinder_object(p1, p2, 0.8, "Green"))
    my_list.append(cylinder_object(p1, p3, 0.75, "Navy"))
    render_scene(my_list)
I have multiple actors, all rendered together in one scene. Can I pass each actor into a vtk.vtkSTLWriter? That doesn't seem to work!
What you're looking for is the subclasses of the vtkExporter class which, as per the linked documentation:
vtkExporter is an abstract class that exports a scene to a file. It is very similar to vtkWriter except that a writer only writes out the geometric and topological data for an object, where an exporter can write out material properties, lighting, camera parameters etc.
As you can see from the inheritance diagram of the class there's about 15 classes that support exporting such a scene into a file that can be viewed in appropriate readers.
IMHO the one you'll have the most luck with is the vtkVRMLExporter class, as it's a fairly common format. That being said, I don't believe ParaView supports VRML files (at least based on some pretty ancient posts I've found), but I'm pretty sure MayaVi does.
Alternatively you could, as you mentioned, export objects into STL files, but STL files simply contain triangle coordinates and information on how they connect. Such files cannot describe scene information such as camera or lighting settings. Also, last I checked, a single STL file can only contain a single object, so your three cylinders would end up as one merged object, which is probably not what you want.
I added this code and it created a VRML file from my rendered scene.
exporter = vtk.vtkVRMLExporter()
exporter.SetRenderWindow(window)
exporter.SetFileName("cylinders.wrl")
exporter.Write()
exporter.Update()
In pyqtgraph you can scatter-plot each item by itself or a whole bunch of them in bulk (using spots). Working with large datasets, I prefer the latter method, since the figure stays light and can be moved around without lagging all over the screen.
My problem: for some of my symbols I need an angle. That isn't much of a problem by itself, but if I add them separately to the plot it results in a laggy figure. So my problem is that I'm currently unable to find a suitable way to subclass the whole thing and implement a small method for a "rotation"/"angle" keyword argument. Has anyone already solved this, or does someone have an idea?
Thank you very much in advance!
After another look today I finally found that it was way too simple: just rotating my symbol before adding it to the ScatterPlotItem did the trick. For the sake of documentation, and maybe for other struggling programmers, a snippet:
import numpy as np
import pyqtgraph as pg
from pyqtgraph.Qt import QtGui  # provides QTransform

# define a bowtie-style symbol
_mos = np.asarray([
    [0.5, 0.25],
    [0.5, -0.25],
    [-0.5, 0.25],
    [-0.5, -0.25],
    [0.5, 0.25]
])
my_symbol = pg.arrayToQPath(_mos[:, 0], _mos[:, 1], connect='all')

# define color and style for the items
exit_item = pg.ScatterPlotItem(
    size=20,
    pen=pg.mkPen(128, 128, 128, 255),
    brush=pg.mkBrush(255, 255, 255, 255),
)

# calculate the angle between two points; replace these example
# coordinates with your own segment endpoints
x0, y0, x1, y1 = 0, 0, 1, 1
angle = np.arctan2(np.asarray(y1 - y0), np.asarray(x1 - x0)) * 180 / np.pi

# rotate the symbol by that angle
tr = QtGui.QTransform()
angle_rot = tr.rotate(angle)
my_rotated_symbol = angle_rot.map(my_symbol)

# may be a whole list of spots with different angles and positions
exit_spots = []
exit_spots.append({
    'pos': (0, 0),
    'symbol': my_rotated_symbol
})

# add the spots to the item
exit_item.addPoints(exit_spots)

# create a plot and add the content
win = pg.GraphicsWindow()
plot = win.addPlot()
plot.addItem(exit_item)
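The angle computation is the only subtle part: np.arctan2(dy, dx) returns radians, which are converted to the degrees that QTransform.rotate() expects. A standalone check (numpy only, with made-up coordinates standing in for x0, y0, x1, y1):

```python
import numpy as np

def symbol_angle(x0, y0, x1, y1):
    # angle of the segment (x0, y0) -> (x1, y1) in degrees,
    # as expected by QTransform.rotate()
    return np.arctan2(np.asarray(y1 - y0), np.asarray(x1 - x0)) * 180 / np.pi

print(symbol_angle(0, 0, 1, 0))   # 0.0  (pointing right)
print(symbol_angle(0, 0, 0, 1))   # 90.0 (pointing up in data coordinates)
print(symbol_angle(0, 0, -1, 0))  # 180.0
```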
Sorry for such a specific question, guys; I think only people with knowledge of Maya will be able to answer. In Maya I have cubes of different sizes, and I need to find with Python which face of each cube is pointing down along the Y axis (the pivot is in the center). Any tips will be appreciated.
Thanks a lot :)
import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'
# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get the face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply by the matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if the vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. This will select all the faces in a mesh that point along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x, y, z axis you want, and tolerance is the slop in degrees you'll tolerate. To get the downward-facing faces you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)