How to draw a circular shape in PyBox2D - python

I have code which draws a figure consisting of a few polygon shapes using pyBox2D and PyGame. I have defined bodies and joints, and it works well; it does what it is supposed to do. The problem occurs when I want to change the head from a polygon to a circle shape: I cannot draw it, because I draw using vertices and a circle shape has no vertices.
The problem occurs in this part of the code (the final drawing):
for body in world.bodies:  # or: (ground_body, dynamic_body)
    # The body gives us the position and angle of its shapes
    for fixture in body.fixtures:
        shape = fixture.shape
        vertices = [(body.transform * v) * PPM for v in shape.vertices]
        vertices = [(v[0], SCREEN_HEIGHT - v[1]) for v in vertices]
        pygame.draw.polygon(screen, colors[body.type], vertices)
As I said above, the problem is that Box2D.b2CircleShape does not have vertices. How can I draw a circle, or add vertices to that shape?
Thank you very much.
EDIT: The suggested "duplicate" does not answer my question. Could you please show me how to define a circular body? I tried this:
import Box2D  # The main library
# Box2D.b2 maps Box2D.b2Vec2 to vec2 (and so on)
from Box2D.b2 import (world, polygonShape, staticBody, dynamicBody, circleShape)
from Box2D import (b2FixtureDef, b2PolygonShape, b2CircleShape)

chest_body = world.CreateDynamicBody(
    position=(10, 6.5),
    fixtures=b2FixtureDef(
        shape=b2PolygonShape(box=(0.5, 1.5)), density=120),
    angle=0)  # This is a rectangular body which is defined correctly

circle = world.CreateDynamicBody(
    position=(10, 6.5),
    fixtures=b2FixtureDef(
        shape=b2CircleShape(0.5),
        angle=0))  # I tried this after checking your manual; this does not work
The problem may be caused (and probably is caused) by the fact that I do not know how to get IntelliSense working for pyBox2D, or whether IntelliSense exists for it at all. That means I do not know which parameters are needed.
Any help appreciated.

There is a manual for Python too. Check this:
circle = b2CircleShape(pos=(1, 2), radius=0.5)
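To define the circular body and then draw it with PyGame, a common pattern (a minimal sketch based on the pybox2d examples; it reuses world, PPM, SCREEN_HEIGHT, colors and screen from the question, and the head_body name and density value are only illustrative) is to give the circle shape to a fixture and branch on the shape type when drawing:

import Box2D
import pygame

# The angle belongs to the body, not to the fixture definition.
head_body = world.CreateDynamicBody(
    position=(10, 6.5),
    angle=0,
    fixtures=Box2D.b2FixtureDef(
        shape=Box2D.b2CircleShape(radius=0.5),
        density=120))

for body in world.bodies:
    for fixture in body.fixtures:
        shape = fixture.shape
        if isinstance(shape, Box2D.b2CircleShape):
            # shape.pos is the circle's centre in body-local coordinates
            center = body.transform * shape.pos * PPM
            center = (int(center[0]), int(SCREEN_HEIGHT - center[1]))
            pygame.draw.circle(screen, colors[body.type], center, int(shape.radius * PPM))
        else:
            vertices = [(body.transform * v) * PPM for v in shape.vertices]
            vertices = [(v[0], SCREEN_HEIGHT - v[1]) for v in vertices]
            pygame.draw.polygon(screen, colors[body.type], vertices)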

Related

Vedo: Is there a way to add a camera to scenes and see images from perspective?

I'm using Vedo in Python to visualize some 3D scans of indoor locations.
I would like to, e.g., add a 'camera' at (0,0,0), look left 90 degrees (or wherever), and see the camera's output.
Can this be done with Vedo? If not, is there a different python programming framework where I can open .obj files and add a camera and view through it programmatically?
I usually use this scheme:
...
plt = Plotter(bg='bb', interactive=False)
camera = plt.camera
plt.show(actors, axes=4, viewup='y')
for i in range(360):
    camera.Azimuth(1)
    camera.Roll(-1)
    plt.render()
...
plt.interactive().close()
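If the goal is specifically to place a camera at (0, 0, 0) and look 90 degrees to one side, the same plt.camera (a plain vtkCamera) can also be positioned explicitly. A minimal sketch under that assumption; the file path and the actors variable are placeholders:

from vedo import Plotter, load

actors = load('scan.obj')               # placeholder path to your .obj scan

plt = Plotter(bg='bb', interactive=False)
plt.show(actors, axes=4, viewup='y')

cam = plt.camera                        # a vtkCamera
cam.SetPosition(0, 0, 0)                # put the camera at the origin
cam.SetFocalPoint(-1, 0, 0)             # look along -X (adjust for the direction you want)
cam.SetViewUp(0, 1, 0)
plt.render()

plt.screenshot('view_from_origin.png')  # save what this camera sees
plt.interactive().close()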
Good Luck
You can plot the same object in an embedded renderer and control its behaviour via a simple callback function:
from vedo import *

settings.immediateRendering = False  # can be faster for multi-renderers

# (0,0) is the bottom-left corner of the window, (1,1) the top-right;
# the order in the list defines the priority when overlapping
custom_shape = [
    dict(bottomleft=(0.00, 0.00), topright=(1.00, 1.00), bg='wheat', bg2='w'),   # ren0
    dict(bottomleft=(0.01, 0.01), topright=(0.15, 0.30), bg='blue3', bg2='lb'),  # ren1
]

plt = Plotter(shape=custom_shape, size=(1600, 800), sharecam=False)

s = ParametricShape(0)  # whatever object to be shown
plt.show(s, 'Renderer0', at=0)
plt.show(s, 'Renderer1', at=1)

def update(event):
    cam = plt.renderers[1].GetActiveCamera()  # vtkCamera of renderer1
    cam.Azimuth(1)                            # add one degree in azimuth

plt.addCallback("Interaction", update)
interactive()
Check out a related example here.
Check out the vtkCamera object methods here.

Why is transforming a shapely polygon not working in some cases?

I'm trying to calculate the size of a polygon of geographic coordinates using shapely, which seems to require a transformation into a suitable projection to yield a result in square meters. I found a couple of examples online, but I couldn't get it working for my example polygon.
I therefore tried to use the same example polygons that came with the code snippets I found, and I noticed that it works for some but not for others. To reproduce the results, here's the minimal example code:
import json
import pyproj
from shapely.ops import transform
from shapely.geometry import Polygon, mapping
from functools import partial

coords1 = [(-97.59238135821987, 43.47456565304017),
           (-97.59244690469288, 43.47962399877412),
           (-97.59191951546768, 43.47962728271748),
           (-97.59185396090983, 43.47456565304017),
           (-97.59238135821987, 43.47456565304017)]
coords1 = reversed(coords1)  # Not sure if important, but https://geojsonlint.com says it's wrong handedness
                             # Doesn't seem to affect the error message though

coords2 = [(13.65374516425911, 52.38533382814119),
           (13.65239769133293, 52.38675829106993),
           (13.64970274383571, 52.38675829106993),
           (13.64835527090953, 52.38533382814119),
           (13.64970274383571, 52.38390931824483),
           (13.65239769133293, 52.38390931824483),
           (13.65374516425911, 52.38533382814119)]

coords = coords1   # DOES NOT WORK
#coords = coords2  # WORKS

polygon = Polygon(coords)

# Print GeoJSON to check on https://geojsonlint.com
print(json.dumps(mapping(polygon)))

projection = partial(pyproj.transform,
                     pyproj.Proj('epsg:4326'),
                     pyproj.Proj('esri:54009'))

transform(projection, polygon)
Both coords1 and coords2 are just copied from code snippets that supposedly work. However, only coords2 works for me. I've used https://geojsonlint.com to check whether there's a difference between the two polygons, and it flags the handedness/orientation of the first polygon as invalid GeoJSON. I don't know if shapely even cares, but reversing the order -- https://geojsonlint.com then says it's valid GeoJSON and shows the polygon on the map -- does not change the error.
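Side note, in case orientation matters downstream: shapely itself does not require a particular ring orientation, but GeoJSON (RFC 7946) expects counter-clockwise exterior rings, and shapely.geometry.polygon.orient can normalise a polygon. A small sketch using the coords1 list from above:

from shapely.geometry import Polygon
from shapely.geometry.polygon import orient

p = Polygon(coords1)
p_ccw = orient(p, sign=1.0)  # exterior ring counter-clockwise, as GeoJSON expects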
So, it works with coords2, but when I use coords1 I get the following error:
~/env/anaconda3/envs/py36/lib/python3.6/site-packages/shapely/geometry/base.py in _repr_svg_(self)
398 if xmin == xmax and ymin == ymax:
399 # This is a point; buffer using an arbitrary size
--> 400 xmin, ymin, xmax, ymax = self.buffer(1).bounds
401 else:
402 # Expand bounds by a fraction of the data ranges
ValueError: not enough values to unpack (expected 4, got 0)
I assume there's something different about coords1 (and the example polygon from my own data) that causes the problem, but I cannot tell what could be different compared to coords2.
In short, what's the difference between coords1 and coords2, with one working and the other not?
UPDATE: I got it working by adding always_xy=True to the definition of the transformer. Together with the newer pyproj syntax, avoiding partial, the working snippet looks like this:
project = pyproj.Transformer.from_proj(
    pyproj.Proj('epsg:4326'),  # source coordinate system
    pyproj.Proj('epsg:3857'),  # destination coordinate system
    always_xy=True
)

transform(project.transform, polygon)
To be honest, even after reading the docs, I don't really know what always_xy is doing. Hence I don't want to provide it as an answer.
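For what it's worth, the usual explanation is axis order: EPSG:4326 is formally defined with latitude/longitude axis order, while shapely stores coordinates as (x, y) = (lon, lat); always_xy=True makes the transformer accept and return coordinates in x, y order. A small sketch (the values are just the first point of coords1):

from pyproj import Transformer

lon, lat = -97.59238135821987, 43.47456565304017

strict = Transformer.from_crs('epsg:4326', 'esri:54009')
xy_order = Transformer.from_crs('epsg:4326', 'esri:54009', always_xy=True)

# Without always_xy the transformer expects the authority-defined axis order, i.e. (lat, lon) ...
print(strict.transform(lat, lon))
# ... with always_xy it expects and returns (lon, lat) = (x, y), which matches how
# shapely stores coordinates, so transform(project.transform, polygon) works as intended.
print(xy_order.transform(lon, lat))

Both calls print the same projected coordinates. This is also the likely reason coords1 failed while coords2 seemed to work: read as latitudes, longitudes near -97 are out of range and produce invalid values, whereas values near 13 and 52 stay in range either way.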
I think you did well; the only issue is that reversed() does not create a new list, it returns an iterator. Try a function like this to create a reversed copy of the list:

def rev_slice(mylist):
    '''
    Return a reversed copy of a list.
    mylist: a list
    '''
    a = mylist[::-1]
    return a

Execute the function like so:

coords = rev_slice(coords1)

How to use ezdxf to find location of mirrored entities like blocks/circles?

How do you calculate the location of a block or an insert entity that has been mirrored?
There is a circle inside a 'wb' insert/block entity. I'm trying to identify its location in modelspace (msp) and draw a circle around it. There are two 'wb' blocks in the attached DXF file, one of which is mirrored.
DXF File link: https://drive.google.com/file/d/1T1XFeH6Q2OFdieIZdfIGNarlZ8tQK8XE/view?usp=sharing
import ezdxf
from ezdxf.math import Vector

DXFFILE = 'washbasins.dxf'
OUTFILE = 'encircle.dxf'

dwg = ezdxf.readfile(DXFFILE)
msp = dwg.modelspace()
dwg.layers.new(name='MyCircles', dxfattribs={'color': 4})

def get_first_circle_center(block_layout):
    block = block_layout.block
    base_point = Vector(block.dxf.base_point)
    circles = block_layout.query('CIRCLE')
    if len(circles):
        circle = circles[0]  # take first circle
        center = Vector(circle.dxf.center)
        return center - base_point
    else:
        return Vector(0, 0, 0)

# block definition to examine
block_layout = dwg.blocks.get('wb')
offset = get_first_circle_center(block_layout)

for e in msp.query('INSERT[name=="wb"]'):
    scale = e.get_dxf_attrib('xscale', 1)  # assume uniform scaling
    _offset = offset.rotate_deg(e.get_dxf_attrib('rotation', 0)) * scale
    location = e.dxf.insert + _offset
    msp.add_circle(center=location, radius=3, dxfattribs={'layer': 'MyCircles'})

dwg.saveas(OUTFILE)
The above code doesn't work for the block that is mirrored in the AutoCAD file: its circle is drawn at a very different location. For a block placed with the MIRROR command, entity.dxf.insert and entity.dxf.rotation return a point and a rotation that differ from what they would be if the block had been placed there by copying and rotating.
Kindly help with such cases. Similarly, how should LINE and CIRCLE entities be handled? Kindly share Python functions/code for the same.
Since you are obtaining the circle center relative to the block definition base point, you will need to construct a 4x4 transformation matrix which encodes the X-Y-Z scale, rotation & orientation of each block reference encountered within your for loop.
The ezdxf library usefully includes the Matrix44 class which will take care of the matrix multiplication for you. The construction of such a matrix will be something along the lines of the following:
import math

import ezdxf
from ezdxf.math import OCS, Matrix44

# e is the INSERT (block reference) entity from the loop in the question
ocs = OCS(e.dxf.extrusion)

mat = Matrix44.chain(
    Matrix44.ucs(ocs.ux, ocs.uy, ocs.uz),
    Matrix44.z_rotate(math.radians(e.get_dxf_attrib('rotation', 0))),  # DXF rotation is in degrees
    Matrix44.scale(
        e.get_dxf_attrib('xscale', 1),
        e.get_dxf_attrib('yscale', 1),
        e.get_dxf_attrib('zscale', 1),
    ),
)
You can then use this matrix to transform the coordinates of the circle centre from the coordinate system relative to the block definition, to that relative to the block reference, i.e. the Object Coordinate System (OCS).
After transformation, you will also need to translate the coordinates using a vector calculated as the difference between the block reference insertion point and the block definition base point following transformation using the above matrix.
mat = Matrix44.chain ...
vec = e.dxf.insert - mat.transform(block.dxf.base_point)
Then the final location becomes:
location = mat.transform(circle.dxf.center) + vec
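As a hedged alternative (assuming a reasonably recent ezdxf release), the library can do this bookkeeping itself: Insert.virtual_entities() yields the block's entities already transformed by the insert's scaling, rotation, mirroring and extrusion, so the circle centre can be read off directly:

for e in msp.query('INSERT[name=="wb"]'):
    for ve in e.virtual_entities():
        # note: a non-uniformly scaled circle may come back as an ELLIPSE instead
        if ve.dxftype() == 'CIRCLE':
            msp.add_circle(center=ve.dxf.center, radius=3,
                           dxfattribs={'layer': 'MyCircles'})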

how to find Y face of the cube in Maya with Python

Sorry for such a specific question, guys; I think only people with knowledge of Maya will answer. In Maya I have cubes of different sizes, and I need to find with Python which face of a cube is pointing down the Y axis (the pivot is in the center). Any tips will be appreciated.
Thanks a lot :)
import re
from maya import cmds
from pymel.core.datatypes import Vector, Matrix, Point

obj = 'pCube1'

# Get the world transformation matrix of the object
obj_matrix = Matrix(cmds.xform(obj, query=True, worldSpace=True, matrix=True))

# Iterate through all faces
for face in cmds.ls(obj + '.f[*]', flatten=True):
    # Get face normal in object space
    face_normals_text = cmds.polyInfo(face, faceNormals=True)[0]
    # Convert to a list of floats
    face_normals = [float(digit) for digit in re.findall(r'-?\d*\.\d*', face_normals_text)]
    # Create a Vector object and multiply with matrix to get world space
    v = Vector(face_normals) * obj_matrix
    # Check if vector faces downwards
    if max(abs(v[0]), abs(v[1]), abs(v[2])) == -v[1]:
        print(face, v)
If you just need a quick solution without vector math, PyMEL, or the API, you can use cmds.polySelectConstraint to find the faces aligned with a normal. All you need to do is select all the faces, then use the constraint to keep only the ones pointing the right way. This will select all the faces in a mesh that are pointing along a given axis:
import maya.cmds as cmds

def select_faces_by_axis(mesh, axis=(0, 1, 0), tolerance=45):
    cmds.select(mesh + ".f[*]")
    cmds.polySelectConstraint(mode=3, type=8, orient=2, orientaxis=axis, orientbound=(0, tolerance))
    cmds.polySelectConstraint(dis=True)  # remember to turn constraint off!
The axis is the x, y, z axis you want and tolerance is the slop in degrees you'll tolerate. To get the downward-pointing faces (negative Y) you'd do
select_faces_by_axis('your_mesh_here', (0, -1, 0))
or
select_faces_by_axis('your_mesh_here', (0, -1, 0), 1)
# this would get faces only within 1 degree of downward
This method has the advantage of operating mostly in Maya's C++ core, so it's going to be faster than Python-based methods that loop over all the faces in a mesh.
With pymel the code can be a bit more compact. Selecting the faces pointing downwards:
import pymel.core as pm

n = pm.PyNode("pCubeShape1")
s = []
for f in n.faces:
    if f.getNormal(space='world')[1] < 0.0:
        s.append(f)
pm.select(s)

How to find face neighbours in Maya?

I have a problem where I need to select faces that are next to one pre-selected face.
This may be done easily but the problem is that when I get a neighbour face I need to know in which direction it is facing.
So now I am able to select faces which are connected with an edge but I can't get the face that is for example left or right from the first selected face. I have tried multiple approaches but can't find the solution.
I tried with:
pickWalk - cmds.pickWalk() - the problem with this is that its behaviour can't be predicted, since it walks the mesh from the camera's perspective.
polyInfo - cmds.polyInfo() - this is a very useful function and the closest to the answer. In this approach I extract edges from a face and then see which faces are neighbours across those edges with edgeToFace(). This works well but doesn't solve my problem. To elaborate: when polyInfo returns faces that share edges, it doesn't return them in a way that lets me always know that edgesList[0] (for example) is the edge pointing left or right. Hence if I use this on different faces, the resulting face may be facing in a different direction in each case.
The hard way, with many conversions from vertex to edge and then to face, etc. But it's still the same problem: I don't know which edge is the top or left one.
connectedFaces() - a method I call on the selected face; it returns the faces connected to the first face, but it's still the same problem: I don't know which face is facing which way.
To be clear, I'm not using a pre-selected list of faces and checking them; I need to find the faces without knowing or keeping their names somewhere. Does someone know a way that works from a selection of faces?
To elaborate on my question, I made an image to make it clear:
As you can see from the example, given the selected face I need to select one of the pointed-to faces, and it must be exactly the face I ask for. Other methods select all neighbouring faces, but I need a method where I can say "select right" and it will select the face to the right of the first selected face.
This is one solution that would be fairly consistent under the rule that up/down/left/right is aligned with the mesh's transformation (local space), though could be world space too.
The first thing I would do is build a face relative coordinate system for every mesh face using the average face vertex position, face normal, and world space Y axis of the mesh's transformation. This involves a little vector math, so I will use the API to make this easier. This first part will make a coordinate system for each face that we will store into lists for future querying. See below.
from maya import OpenMaya, cmds

meshTransform = 'polySphere'
meshShape = cmds.listRelatives(meshTransform, c=True)[0]
meshMatrix = cmds.xform(meshTransform, q=True, ws=True, matrix=True)
primaryUp = OpenMaya.MVector(*meshMatrix[4:7])
# have a secondary up vector for faces that are facing the same way as the original up
secondaryUp = OpenMaya.MVector(*meshMatrix[8:11])

sel = OpenMaya.MSelectionList()
sel.add(meshShape)
meshObj = OpenMaya.MObject()
sel.getDependNode(0, meshObj)

meshPolyIt = OpenMaya.MItMeshPolygon(meshObj)
faceNeighbors = []
faceCoordinates = []

while not meshPolyIt.isDone():
    normal = OpenMaya.MVector()
    meshPolyIt.getNormal(normal)
    # use the secondary up if the normal is facing the same direction as the object Y
    up = primaryUp if (1 - abs(primaryUp * normal)) > 0.001 else secondaryUp

    center = meshPolyIt.center()

    faceArray = OpenMaya.MIntArray()
    meshPolyIt.getConnectedFaces(faceArray)
    meshPolyIt.next()

    faceNeighbors.append([faceArray[i] for i in range(faceArray.length())])

    xAxis = up ^ normal
    yAxis = normal ^ xAxis

    matrixList = [xAxis.x, xAxis.y, xAxis.z, 0,
                  yAxis.x, yAxis.y, yAxis.z, 0,
                  normal.x, normal.y, normal.z, 0,
                  center.x, center.y, center.z, 1]
    faceMatrix = OpenMaya.MMatrix()
    OpenMaya.MScriptUtil.createMatrixFromList(matrixList, faceMatrix)
    faceCoordinates.append(faceMatrix)
These functions will look up and return which face is next to the one given in a particular direction (X and Y) relative to the face. This uses a dot product to see which face is more in that particular direction. This should work with any number of faces but it will only return one face that is in the most of that direction.
def getUpFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, 1, 0))

def getDownFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(0, -1, 0))

def getRightFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(1, 0, 0))

def getLeftFace(faceIndex):
    return getDirectionalFace(faceIndex, OpenMaya.MVector(-1, 0, 0))

def getDirectionalFace(faceIndex, axis):
    faceMatrix = faceCoordinates[faceIndex]
    closestDotProd = -1.0
    nextFace = -1
    for n in faceNeighbors[faceIndex]:
        nMatrix = faceCoordinates[n] * faceMatrix.inverse()
        nVector = OpenMaya.MVector(nMatrix(3, 0), nMatrix(3, 1), nMatrix(3, 2))
        dp = nVector * axis
        if dp > closestDotProd:
            closestDotProd = dp
            nextFace = n
    return nextFace
So you would call it like this:
getUpFace(123)
With the number being the face index you want to get the face that is "up" from it.
Give this a try and see if it satisfies your needs.
You can also use polyListComponentConversion:

import pprint
from maya import cmds

init_face = cmds.ls(sl=True)

# get edges
edges = cmds.polyListComponentConversion(init_face, ff=True, te=True)

# get neighbour faces
faces = cmds.polyListComponentConversion(edges, fe=True, tf=True, bo=True)

# show neighbour faces
cmds.select(faces)

# print face normal of each neighbour face
pprint.pprint(cmds.polyInfo(faces, fn=True))
The easiest way of doing this is using Pymel's connectedFaces() on the MeshFace:
http://download.autodesk.com/us/maya/2011help/pymel/generated/classes/pymel.core.general/pymel.core.general.MeshFace.html
import pymel.core as pm
sel = pm.ls(sl=True)[0]
pm.select(sel.connectedFaces())
