Scaling QPolygon on its origin - python

I'm trying to scale a QPolygonF that is shown in a QGraphicsView (via a QGraphicsScene) around its own origin.
However, even after translating the polygon (poly_2) to its origin, using QPolygonF.translate() and the center coordinates obtained from the bounding rect as (x+width)/2 and (y+height)/2, the new polygon is still placed at the wrong location.
The blue polygon should be scaled around the origin of poly_2 (please see the image below: black is the original polygon, blue is the result of the code below, and orange represents the intended outcome).
I thought the issue might be that the coordinates are global and should be local, but unfortunately that did not solve the issue.
Here's the code:
import sys

import PyQt5
from PyQt5 import QtCore
from PyQt5.QtCore import *      # QPointF, QRectF
from PyQt5.QtGui import *       # QPainterPath, QPolygonF, QBrush, QPen, QFont, QColor, QTransform
from PyQt5.QtWidgets import *   # QApplication, QGraphicsScene, QGraphicsView, QGraphicsSimpleTextItem
poly_2_coords= [PyQt5.QtCore.QPointF(532.35, 274.98), PyQt5.QtCore.QPointF(525.67, 281.66), PyQt5.QtCore.QPointF(518.4, 292.58), PyQt5.QtCore.QPointF(507.72, 315.49), PyQt5.QtCore.QPointF(501.22, 326.04), PyQt5.QtCore.QPointF(497.16, 328.47), PyQt5.QtCore.QPointF(495.53, 331.71), PyQt5.QtCore.QPointF(488.24, 339.02), PyQt5.QtCore.QPointF(480.94, 349.56), PyQt5.QtCore.QPointF(476.09, 360.1), PyQt5.QtCore.QPointF(476.89, 378.76), PyQt5.QtCore.QPointF(492.3, 393.35), PyQt5.QtCore.QPointF(501.22, 398.21), PyQt5.QtCore.QPointF(527.17, 398.21), PyQt5.QtCore.QPointF(535.28, 390.1), PyQt5.QtCore.QPointF(540.96, 373.89), PyQt5.QtCore.QPointF(539.64, 356.93), PyQt5.QtCore.QPointF(541.46, 329.0), PyQt5.QtCore.QPointF(543.39, 313.87), PyQt5.QtCore.QPointF(545.83, 300.89), PyQt5.QtCore.QPointF(545.83, 276.56), PyQt5.QtCore.QPointF(543.39, 267.64), PyQt5.QtCore.QPointF(537.81, 268.91)]
def main():
    app = QApplication(sys.argv)
    scene = QGraphicsScene()
    view = QGraphicsView(scene)
    pen = QPen(QColor(0, 20, 255))
    scene.addPolygon(QPolygonF(poly_2_coords))

    poly_2 = QPolygonF(poly_2_coords)
    trans = QTransform().scale(1.5, 1.5)
    #poly_22 = trans.mapToPolygon(QRect(int(poly_2.boundingRect().x()), int(poly_2.boundingRect().y()), int(poly_2.boundingRect().width()), int(poly_2.boundingRect().height())))
    #trans.mapToPolygon()
    #scene.addPolygon(QPolygonF(poly_22), QPen(QColor(0, 20, 255)))

    poly_2.translate((poly_2.boundingRect().x() + poly_2.boundingRect().width()) / 2,
                     (poly_2.boundingRect().y() + poly_2.boundingRect().height()) / 2)
    print(f'poly_2.boundingRect().x() {poly_2.boundingRect().x()}+poly_2.boundingRect().width(){poly_2.boundingRect().width()}')

    trans = QTransform().scale(1.4, 1.4)
    #poly_2.setTransformOriginPoint()
    poly_22 = trans.map(poly_2)
    scene.addPolygon(poly_22, QPen(QColor(0, 20, 255)))

    view.show()
    sys.exit(app.exec_())


if __name__ == "__main__":
    main()
Edit: I've tried keeping the polygon as a QGraphicsItem, setting its transformation origin point according to the bounding box's middle X, Y, and then mapping from global to scene coordinates, yet no luck: the new polygon is still drawn in the wrong place.
poly_2 = QPolygonF(poly_2_coords)
poly = scene.addPolygon(poly_2)
point = QPoint((poly_2.boundingRect().x()+poly_2.boundingRect().width())/2,(poly_2.boundingRect().y()+poly_2.boundingRect().height())/2)
poly.setTransformOriginPoint(point)
poly.setScale(3)
If I replace point with just the X, Y of the bounding rectangle, the result seems closer to what I need. However, in that case the origin point is obviously wrong. Is it just random luck that this attempt seems closer to what I need?

Before addressing the translation problem, there is a more important aspect to consider: if you want to create a transformation based on the center of a polygon, you must find that center. That point is called the centroid, the geometric center of any polygon.
While there are simple formulas for all basic geometric shapes, finding the centroid of a (possibly irregular) polygon with an arbitrary number of vertices is a bit more complex.
Using the arithmetic mean of vertices is not a viable option, as even in a simple square you might have multiple points on a single side, which would move the computed "center" towards those points.
The formula can be found in the Wikipedia article linked above, while a valid python implementation is available in this answer.
I modified the formula of that answer in order to accept a sequence of QPoints, while improving readability and performance, but the concept remains the same:
def centroid(points):
    if len(points) < 3:
        raise ValueError('At least 3 points are required')
    # https://en.wikipedia.org/wiki/Centroid#Of_a_polygon
    # https://en.wikipedia.org/wiki/Shoelace_formula
    # computation uses concatenated pairs from the sequence, with the
    # last point paired to the first one:
    # (p[0], p[1]), (p[1], p[2]) [...] (p[n], p[0])
    area = cx = cy = 0
    p1 = points[0]
    for p2 in points[1:] + [p1]:
        shoelace = p1.x() * p2.y() - p2.x() * p1.y()
        area += shoelace
        cx += (p1.x() + p2.x()) * shoelace
        cy += (p1.y() + p2.y()) * shoelace
        p1 = p2
    A = 0.5 * area
    factor = 1 / (6 * A)
    return cx * factor, cy * factor
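As a quick check, here is a minimal sketch (the square coordinates are made up for illustration) of the arithmetic-mean pitfall mentioned above: an extra vertex on one side of a square pulls the vertex average off-center, while the centroid stays put:
from PyQt5.QtCore import QPointF

# unit square with an extra (illustrative) vertex in the middle of its bottom edge
square = [QPointF(0, 0), QPointF(0.5, 0), QPointF(1, 0), QPointF(1, 1), QPointF(0, 1)]

# the arithmetic mean of the vertices is pulled towards the bottom edge
mean_x = sum(p.x() for p in square) / len(square)
mean_y = sum(p.y() for p in square) / len(square)
print(mean_x, mean_y)    # 0.5 0.4

# the centroid is still the geometric center of the square
print(centroid(square))  # (0.5, 0.5)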
Then, you have two options, depending on what you want to do with the resulting item.
Scale the item
In this case, you create a QGraphicsPolygonItem like the original one, then set its transform origin point using the formula above, and scale it:
poly_2 = QtGui.QPolygonF(poly_2_coords)
item2 = scene.addPolygon(poly_2, QtGui.QPen(QtGui.QColor(0, 20, 255)))
item2.setTransformOriginPoint(*centroid(poly_2_coords))
item2.setScale(1.5)
Use a QTransform
With Qt transformations some special care must be taken, as scaling always uses 0, 0 as origin point.
To scale around a specified point, you must first translate the matrix to that point, then apply the scale, and finally restore the matrix translation to its origin:
poly_2 = QtGui.QPolygonF(poly_2_coords)
cx, cy = centroid(poly_2_coords)
trans = QtGui.QTransform()
trans.translate(cx, cy)
trans.scale(1.5, 1.5)
trans.translate(-cx, -cy)
poly_2_scaled = trans.map(poly_2)
scene.addPolygon(poly_2_scaled, QtGui.QPen(QtGui.QColor(0, 20, 255)))
This is exactly what QGraphicsItems do when using the basic setScale() and setRotation() transformations.
Shape origin point and item position
Remember that QGraphicsItems are always created with their position at 0, 0.
This might not seem obvious, especially for basic shapes: when you create a QGraphicsRectItem giving its x, y, width, height, the position will still be 0, 0. When dealing with complex geometry management, it's usually better to create basic shapes with the origin/reference at 0, 0 and then move the item to x, y.
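For example, a minimal sketch (the coordinates are made up for illustration) of the difference:
# creating the rect at 50, 30 leaves the item position at 0, 0:
#     rect_item = scene.addRect(50, 30, 100, 80)
# creating it at 0, 0 and then moving the item keeps geometry and position in sync:
rect_item = scene.addRect(0, 0, 100, 80)
rect_item.setPos(50, 30)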
For complex polygons like yours, a possibility could be to translate the polygon so that its centroid is at 0, 0, and then move the item to the actual centroid coordinates:
item = scene.addPolygon(polygon.translated(-cx, -cy))
item.setPos(cx, cy)
item.setScale(1.5)
This might make things easier for development (the mapped points will always be consistent with the item position), and the fact that you don't need to change the transform origin point anymore makes reverse mapping even simpler.
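For instance, a minimal sketch (reusing item, cx and cy from the snippet above) of how that mapping stays intuitive:
# the item's local origin (the centroid) maps straight to the centroid in the scene
scene_center = item.mapToScene(QtCore.QPointF(0, 0))      # -> QPointF(cx, cy)
# and the scene-space centroid maps back to the item's local origin
local_origin = item.mapFromScene(QtCore.QPointF(cx, cy))  # -> QPointF(0, 0)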

Related

Open3d Ray Casting Generates Error in Python

I am using Open3d in Python to cast shadows and determine intersections on an object. In the example below I use a 2-twist Mobius strip from the Open3d library as the object and create a ray for each point on the Mobius strip. The ray origin is the point on the object, and the direction is the same for all rays: [1, 0, 0]. Thus, roughly speaking, things to the left (negative x direction) should generally intersect with the object, and things to the right (positive x direction) generally will not. On a macro level I do get this result, as you can see in the two images below. But the lighted (yellow) section is very spotty for some reason. I have tried this with several shapes and get the same result. Why does the ray casting in Open3d generate such an incorrect and spotty intersection result?
Code:
import numpy as np
import pandas as pd
import open3d as o3d
import plotly.graph_objects as go
import plotly.io as pio
from scipy.interpolate import griddata

#create mobius mesh and points
mesh = o3d.geometry.TriangleMesh.create_mobius(twists=2)
mesh.compute_vertex_normals()
pcd = mesh.sample_points_uniformly(number_of_points=1000000)
points = pd.DataFrame(np.asarray(pcd.points))

#create a scene and add the triangle mesh for ray tracing
cube = o3d.t.geometry.TriangleMesh.from_legacy(mesh)
scene = o3d.t.geometry.RaycastingScene()
cube_id = scene.add_triangles(cube)

#create ray direction
ray = [1, 0, 0]
ray = ray / np.linalg.norm(ray)

#create array with one copy of the ray direction per point
array = np.ones((len(points), 3)) * ray[0]
array = pd.DataFrame(array)
array.loc[:, 1] = ray[1]
array.loc[:, 2] = ray[2]

#create tensor rows with origin at each point of the mobius strip and the same direction for all
tensorrays = np.array([points.loc[:, 0].values.T, points.loc[:, 1].values.T, points.loc[:, 2].values.T,
                       array.loc[:, 0].values.T, array.loc[:, 1].values.T, array.loc[:, 2].values.T]).T
rays = o3d.core.Tensor([[tensorrays]], dtype=o3d.core.Dtype.Float32)
ans = scene.cast_rays(rays)

#determine if ray intersected the object
intersections = ans['t_hit'].numpy()[0][0]
intersections[intersections == float('inf')] = 1
intersections[intersections != 1] = 0

pts = 1000000
[x, y] = np.meshgrid(np.linspace(np.min(points.loc[:, 0].values), np.max(points.loc[:, 0].values), int(np.sqrt(pts))),
                     np.linspace(np.min(points.loc[:, 1].values), np.max(points.loc[:, 1].values), int(np.sqrt(pts))))
z = griddata((points.loc[:, 0].values, points.loc[:, 1].values), points.loc[:, 2].values, (x, y), method='linear', rescale=True)
color = griddata((points.loc[:, 0].values, points.loc[:, 1].values), intersections, (x, y), method='linear', rescale=True)

#create surface for mobius, colored by 0's and 1's for ray intersection
trace = go.Surface(x=x, y=y, z=z, surfacecolor=color)
fig_data = [trace]

#plot
layout = go.Layout(margin={'l': 0, 'r': 0, 'b': 0, 't': 0})
fig = go.Figure(data=fig_data, layout=layout)
path = r'C:\Users\JosephKenrick\test.html'
pio.write_html(fig, file=path, auto_open=True, validate=False)

How to use ezdxf to find location of mirrored entities like blocks/circles?

How do you calculate the location of a block or an insert entity that has been mirrored?
There is a circle inside a 'wb' insert/block entity. I'm trying to identify its location in the modelspace (msp) and draw a circle at it. There are 2 'wb' blocks in the attached DXF file, one of which is mirrored.
DXF File link: https://drive.google.com/file/d/1T1XFeH6Q2OFdieIZdfIGNarlZ8tQK8XE/view?usp=sharing
import ezdxf
from ezdxf.math import Vector

DXFFILE = 'washbasins.dxf'
OUTFILE = 'encircle.dxf'

dwg = ezdxf.readfile(DXFFILE)
msp = dwg.modelspace()
dwg.layers.new(name='MyCircles', dxfattribs={'color': 4})

def get_first_circle_center(block_layout):
    block = block_layout.block
    base_point = Vector(block.dxf.base_point)
    circles = block_layout.query('CIRCLE')
    if len(circles):
        circle = circles[0]  # take first circle
        center = Vector(circle.dxf.center)
        return center - base_point
    else:
        return Vector(0, 0, 0)

# block definition to examine
block_layout = dwg.blocks.get('wb')
offset = get_first_circle_center(block_layout)

for e in msp.query('INSERT[name=="wb"]'):
    scale = e.get_dxf_attrib('xscale', 1)  # assume uniform scaling
    _offset = offset.rotate_deg(e.get_dxf_attrib('rotation', 0)) * scale
    location = e.dxf.insert + _offset
    msp.add_circle(center=location, radius=3, dxfattribs={'layer': 'MyCircles'})

dwg.saveas(OUTFILE)
The above code doesn't work for the block that is mirrored in the AutoCAD file. Its circle is drawn at a very different location. For a block placed through the mirror command, entity.dxf.insert and entity.dxf.rotation return a point and rotation that differ from what they would be if the block had been placed there by copying and rotating.
Kindly help in such cases. Similarly, how should lines and circle entities be handled? Kindly share Python functions/code for the same.
Since you are obtaining the circle center relative to the block definition base point, you will need to construct a 4x4 transformation matrix which encodes the X-Y-Z scale, rotation & orientation of each block reference encountered within your for loop.
The ezdxf library usefully includes the Matrix44 class which will take care of the matrix multiplication for you. The construction of such a matrix will be something along the lines of the following:
import math

import ezdxf
from ezdxf.math import OCS, Matrix44

ocs = OCS(e.dxf.extrusion)
mat = Matrix44.chain(
    Matrix44.ucs(ocs.ux, ocs.uy, ocs.uz),
    # DXF stores the rotation in degrees, Matrix44.z_rotate() expects radians
    Matrix44.z_rotate(math.radians(e.get_dxf_attrib('rotation', 0))),
    Matrix44.scale(
        e.get_dxf_attrib('xscale', 1),
        e.get_dxf_attrib('yscale', 1),
        e.get_dxf_attrib('zscale', 1)
    )
)
You can then use this matrix to transform the coordinates of the circle centre from the coordinate system relative to the block definition, to that relative to the block reference, i.e. the Object Coordinate System (OCS).
After transformation, you will also need to translate the coordinates using a vector calculated as the difference between the block reference insertion point and the block definition base point following transformation using the above matrix.
vec = e.dxf.insert - mat.transform(block.dxf.base_point)
Then the final location becomes:
location = mat.transform(circle.dxf.center) + vec

how to find Y face of the cube in Maya with Python

Sorry for such a specific question, guys; I think only people with knowledge of Maya will be able to answer. In Maya I have cubes of different sizes, and I need to find with Python which face of a cube is pointing down along the Y axis (the pivot is in the center). Any tips will be appreciated.
Thanks a lot :)
import re

from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'

# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math and Pymel or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to get only the ones pointing the right way. This will select all the faces in a mesh that are pointing along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn the constraint off!
The axis is the x, y, z axis you want and tolerance is the slop in degrees you'll tolerate. Since Maya's up axis is Y, to get the downward-facing ones you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ code, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
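If you need the resulting faces back in Python rather than just selected in the viewport, a small follow-up sketch (the cube name pCube1 is assumed, as in the snippet further above) could read the selection once the constraint has run:
# run the constrained selection for faces within 1 degree of -Y, then grab them
select_faces_by_axis('pCube1', axis=(0, -1, 0), tolerance=1)
down_faces = cmds.ls(selection=True, flatten=True)
print(down_faces)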
With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)

Mayavi Contour 3d

If I plot 3D data using the contour3d option of Mayavi, there are 3 default contours, but how are they spaced? I understand the number of contours can be changed, but can they be at user-specified values (I would surely guess that is possible)? I would like to know how the default 3 contours are drawn: does it depend on the maximum value of the scalar and how it is distributed?
As it happens I just had the same problem and found a solution.
Here is some sample code:
import numpy as np
from mayavi import mlab
from mayavi.api import Engine

def fun(x, y, z):
    return np.cos(x) * np.cos(y) * np.cos(z)

# sample grid (assumed; the original snippet did not define x, y, z)
x, y, z = np.mgrid[-2:2:50j, -2:2:50j, -2:2:50j]

# create engine and assign figure to it
engine = Engine()
engine.start()
fig = mlab.figure(figure=None, engine=engine)
contour3d = mlab.contour3d(x, y, z, fun, figure=fig)
scene = engine.scenes[0]

# get a handle for the plot
iso_surface = scene.children[0].children[0].children[0]
# the following line will print you everything that you can modify on that object
iso_surface.contour.print_traits()

# now let's modify the number of contours and the min/max
# you can also do these steps manually in the mayavi pipeline editor
iso_surface.compute_normals = False  # without this only 1 contour will be displayed
iso_surface.contour.number_of_contours = 2
iso_surface.contour.minimum_contour = -1.3
iso_surface.contour.maximum_contour = 1.3
Now about the meaning of the contours. The number obviously says how many contours are created, and the values for min/max define a linear space over which those contours are spread. Each contour value determines where along the scalar field the corresponding isosurface is drawn, which in practice makes the surface shrink or expand along its normals.
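Regarding user-specified values: mlab.contour3d also accepts an explicit list of values through its contours keyword, so a minimal sketch (reusing x, y, z and fun from the snippet above) to place the isosurfaces exactly where you want could be:
# three isosurfaces at arbitrary explicit scalar values instead of an evenly spaced set
values = fun(x, y, z)
mlab.contour3d(x, y, z, values, contours=[-0.5, 0.0, 0.5], figure=fig)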
Edit: Here's a tip. When you got your plot window, click on the mayavi pipeline icon in the top left. There you can modify your object (usually lowest in the tree). When you press the red record button and start modifying things it will give you the corresponding lines of code.

Correct method and Python package that can find width of an image's feature

The input is a spectrum with colorful (sorry) vertical lines on a black background. Given the approximate x coordinate of that band (as marked by X), I want to find the width of that band.
I am unfamiliar with image processing. Please direct me to the correct method of image processing and a Python image processing package that can do the same.
I am thinking of PIL; OpenCV gave me the impression of being overkill for this particular application.
What if I want to make this an expert system that can classify them in the future?
I'll give a complete minimal working example (as suggested by sega_sai). I don't have access to your original image, but you'll see it doesn't really matter! The peak distributions found by the code below are:
Mean values at: 26.2840960523 80.8255092125
import Image
from scipy import *
from scipy.optimize import leastsq

# Load the picture with PIL, process if needed
pic = asarray(Image.open("band2.png"))

# Average the pixel values along vertical axis
pic_avg = pic.mean(axis=2)
projection = pic_avg.sum(axis=0)

# Set the min value to zero for a nice fit
projection /= projection.mean()
projection -= projection.min()

# Fit function, two gaussians, adjust as needed
def fitfunc(p, x):
    return p[0]*exp(-(x-p[1])**2/(2.0*p[2]**2)) + \
           p[3]*exp(-(x-p[4])**2/(2.0*p[5]**2))

errfunc = lambda p, x, y: fitfunc(p, x) - y

# Use scipy to fit, p0 is initial guess
p0 = array([0, 20, 1, 0, 75, 10])
X = xrange(len(projection))
p1, success = leastsq(errfunc, p0, args=(X, projection))
Y = fitfunc(p1, X)

# Output the result
print "Mean values at: ", p1[1], p1[4]

# Plot the result
from pylab import *
subplot(211)
imshow(pic)
subplot(223)
plot(projection)
subplot(224)
plot(X, Y, 'r', lw=5)
show()
Below is a simple thresholding method to find the lines and their width; it should work quite reliably for any number of lines. The yellow and black image below was processed using this script, and the red/black plot illustrates the found lines using parameters of threshold = 0.3 and min_line_width = 5.
The script averages down the rows of the image for each x position (column), and then determines the basic start and end positions of each line based on a threshold (which you can set between 0 and 1) and a minimum line width (in pixels). By using thresholding and a minimum line width you can easily filter your input images to get the lines out of them. The first function, find_lines, returns all the lines in an image as a list of tuples containing the start, end, center, and width of each line. The second function, find_closest_band_width, is called with the specified x_position and returns the width of the line closest to this position (assuming you want the distance to each line's centre). As the lines are saturated (255 cut-off per channel), their cross-sections are not far from a uniform distribution, so I don't believe trying to fit any kind of distribution is really going to help too much; it just unnecessarily complicates things.
import Image, ImageStat

def find_lines(image_file, threshold, min_line_width):
    im = Image.open(image_file)
    width, height = im.size
    hist = []
    lines = []
    start = end = 0
    for x in xrange(width):
        column = im.crop((x, 0, x + 1, height))
        stat = ImageStat.Stat(column)
        ## normalises by 2 * 255 as in your example the colour is yellow
        ## if your images start using white lines change this to 3 * 255
        hist.append(sum(stat.sum) / (height * 2 * 255))
    for index, value in enumerate(hist):
        if value > threshold and end >= start:
            start = index
        if value < threshold and end < start:
            if index - start < min_line_width:
                start = 0
            else:
                end = index
                center = start + (end - start) / 2.0
                width = end - start
                lines.append((start, end, center, width))
    return lines

def find_closest_band_width(x_position, lines):
    distances = [((value[2] - x_position) ** 2) for value in lines]
    index = distances.index(min(distances))
    return lines[index][3]

## set your threshold, and min_line_width for finding lines
lines = find_lines("8IxWA_sample.png", 0.7, 4)

## sets x_position to 59th pixel
print 'width of nearest line:', find_closest_band_width(59, lines)
I don't think that you need anything fancy for your particular task.
I would just use PIL + scipy. That should be enough.
You essentially need to take your image, make a 1D projection of it,
and then fit a Gaussian or something like that to it. The information about the approximate location of the band should be used as a first guess for the fitter.
