player_stand = pygame.image.load('graphics/player/player_stand.png').convert_alpha()
player_stand = pygame.transform.scale(player_stand,(136, 168))
player_stand_rect = player_stand.get_rect(center = (400,200))
How can I get the exact coordinates of the player_stand_rect? It moves around.
A pygame.Rect object has a set of virtual attributes that can be read and written. The center's current x-coordinate is player_stand_rect.centerx and its y-coordinate is player_stand_rect.centery. The center as an (x, y) tuple is available via the center attribute:
x, y = player_stand_rect.center
I am using Open3d in Python to cast shadows and determine intersections on an object. In the example below I use a 2 twist mobius strip from the Open3d library as the object and create a tensor for each point on the mobius strip. The tensor origin is the point on the object, and the direction is the same for all tensors: [1,0,0]. Thus, roughly speaking, things to the left (negative x direction) should generally intersect with the object, and things to the right (positive x direction) will generally not intersect. On a macro level I yield this result, as you can see in the two images below. But the lighted (yellow) section is very spotty for some reason. I have tried this with several shapes and get the same result. Why does the Raycasting in Open3d generate such an incorrect and spotty intersection result?
Code:
#imports assumed by this snippet
import numpy as np
import open3d as o3d
import pandas as pd
import plotly.graph_objects as go
import plotly.io as pio
from scipy.interpolate import griddata

#create mobius mesh and points
mesh=o3d.geometry.TriangleMesh.create_mobius(twists=2)
mesh.compute_vertex_normals()
pcd = mesh.sample_points_uniformly(number_of_points=1000000)
points=pd.DataFrame(np.asarray(pcd.points)) #DataFrame so the .loc indexing below works
#create a scene and add the triangle mesh for ray tracing
cube = o3d.t.geometry.TriangleMesh.from_legacy(mesh)
scene = o3d.t.geometry.RaycastingScene()
cube_id=scene.add_triangles(cube)
#create ray
ray=[1,0,0]
ray=ray/np.linalg.norm(ray)
#create array of rays: one row per point, all with the same direction
array=np.tile(ray,(len(points),1))
#create rays with origin at each point in mobius strip and the same direction for all
tensorrays=np.column_stack([np.asarray(points), array])
rays = o3d.core.Tensor([[tensorrays]],dtype=o3d.core.Dtype.Float32)
ans = scene.cast_rays(rays)
#determine if ray intersected the object
intersections=ans['t_hit'].numpy()[0][0]
intersections[intersections==float('inf')]=1
intersections[intersections!=1]=0
pts=1000000
[x,y]=np.meshgrid(np.linspace(np.min(points.loc[:,0].values),np.max(points.loc[:,0].values),int(np.sqrt(pts))), np.linspace(np.min(points.loc[:,1].values),np.max(points.loc[:,1].values),int(np.sqrt(pts))))
z = griddata((points.loc[:,0].values, points.loc[:,1].values), points.loc[:,2].values, (x, y), method='linear',rescale=True)
color = griddata((points.loc[:,0].values, points.loc[:,1].values), intersections, (x, y), method='linear',rescale=True)
#create surface for mobius, colored by 0's and 1's for ray intersection
trace = go.Surface(x=x,y=y,z=z, surfacecolor=color)
fig_data=[trace]
#plot
layout=go.Layout(margin={'l': 0, 'r': 0, 'b': 0, 't': 0})
fig=go.Figure(data=fig_data,layout=layout)
path=r'C:\Users\JosephKenrick\test.html'
pio.write_html(fig, file=path, auto_open=True, validate=False)
I have a geotiff file that I'm opening with gdal in Python, and I need to find the single pixel closest to a specified latitude/longitude. I was previously working with an unrelated file type for similar data, so I'm completely new to both gdal and geotiff.
How does one do this? What I have so far is
import numpy as np
from osgeo import gdal
ds = gdal.Open('foo.tiff')
width = ds.RasterXSize
height = ds.RasterYSize
gt = ds.GetGeoTransform()
gp = ds.GetProjection()
data = np.array(ds.ReadAsArray())
print(gt)
print(gp)
which produces (for my files)
(-3272421.457337171, 2539.703, 0.0, 3790842.1060354356, 0.0, -2539.703)
and
PROJCS["unnamed",GEOGCS["Coordinate System imported from GRIB file",DATUM["unnamed",SPHEROID["Sphere",6371200,0]],PRIMEM["Greenwich",0],UNIT["degree",0.0174532925199433,AUTHORITY["EPSG","9122"]]],PROJECTION["Lambert_Conformal_Conic_2SP"],PARAMETER["latitude_of_origin",25],PARAMETER["central_meridian",265],PARAMETER["standard_parallel_1",25],PARAMETER["standard_parallel_2",25],PARAMETER["false_easting",0],PARAMETER["false_northing",0],UNIT["metre",1,AUTHORITY["EPSG","9001"]],AXIS["Easting",EAST],AXIS["Northing",NORTH]]
Ideally, there'd be a single simple function call, and it would also return an indication whether the specified location falls outside the bounds of the raster.
My fallback is to obtain a grid from another source containing the latitudes and longitudes for each pixel and then do a brute force search for the desired location, but I'm hoping there's a more elegant way.
Note: I think what I'm trying to do is equivalent to the command line
gdallocationinfo -wgs84 foo.tif <longitude> <latitude>
which returns results like
Report:
Location: (1475P,1181L)
Band 1:
Value: 66
This suggests to me that the functionality is probably already in the gdal module, if I can just find the right method to call.
You basically need two steps:
Convert the lat/lon point to the raster-projection
Convert the mapx/mapy (in raster proj) to pixel coordinates
Given the code you already posted above, defining both projection systems can be done with:
from osgeo import gdal, osr
point_srs = osr.SpatialReference()
point_srs.ImportFromEPSG(4326) # hardcode for lon/lat
# GDAL>=3: make sure it's x/y
# see https://trac.osgeo.org/gdal/wiki/rfc73_proj6_wkt2_srsbarn
point_srs.SetAxisMappingStrategy(osr.OAMS_TRADITIONAL_GIS_ORDER)
file_srs = osr.SpatialReference()
file_srs.ImportFromWkt(gp)
Create the coordinate transformation, and use it to convert the point from lon/lat to mapx/mapy coordinates (whatever projection the file uses):
ct = osr.CoordinateTransformation(point_srs, file_srs)
point_x = -114.06138 # lon
point_y = 51.03163 # lat
mapx, mapy, z = ct.TransformPoint(point_x, point_y)
To go from map coordinates to pixel coordinates, the geotransform needs to be inverted first; it can then be used to retrieve the pixel coordinates:
gt_inv = gdal.InvGeoTransform(gt)
pixel_x, pixel_y = gdal.ApplyGeoTransform(gt_inv, mapx, mapy)
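Under the hood, these two calls are just an affine apply and a 2×2 affine inverse. A pure-Python sketch of the same math (not GDAL's implementation, just the formulas it computes):

```python
def apply_geotransform(gt, px, py):
    # pixel/line -> map coordinates, using GDAL's affine convention
    return (gt[0] + gt[1] * px + gt[2] * py,
            gt[3] + gt[4] * px + gt[5] * py)

def inv_geotransform(gt):
    # invert the affine transform (2x2 linear part plus offset)
    det = gt[1] * gt[5] - gt[2] * gt[4]
    if det == 0:
        raise ValueError("geotransform is not invertible")
    return ((gt[2] * gt[3] - gt[5] * gt[0]) / det, gt[5] / det, -gt[2] / det,
            (gt[4] * gt[0] - gt[1] * gt[3]) / det, -gt[4] / det, gt[1] / det)
```

With the geotransform printed in the question, a round trip pixel → map → pixel returns the original indices.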
Rounding those pixel coordinates should allow you to use them for indexing the data array. You might need to clip them if the point you're querying is outside the raster.
# round to pixel
pixel_x = round(pixel_x)
pixel_y = round(pixel_y)
# clip to file extent
pixel_x = max(min(pixel_x, width-1), 0)
pixel_y = max(min(pixel_y, height-1), 0)
pixel_data = data[pixel_y, pixel_x]
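If you would rather get the out-of-bounds indication the question asks for, instead of silently clipping, the check is a plain range test on the rounded indices. A small sketch (hypothetical helper name, independent of GDAL):

```python
def pixel_index(pixel_x, pixel_y, width, height):
    # round to the nearest pixel and report whether it falls inside the raster
    px, py = round(pixel_x), round(pixel_y)
    in_bounds = 0 <= px < width and 0 <= py < height
    return px, py, in_bounds
```

data[py, px] is only safe to evaluate when in_bounds is True.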
I have an .obj file. I do not know its contents' bounding box beforehand. I want to load it into Blender and rotate the camera around it over K frames (e.g. 15 frames). How can I do this with the Blender Python API?
A common way to do an object turnaround is to add an empty and make it the parent of the camera; animating the z-rotation of the empty will then rotate the camera around the object. You can give the camera a Track To constraint so that it always points at the target object.
You can use the object's bound_box to find its outer limits, then add a bit more so the object stays inside the view, and position the camera with that. Making the extra distance proportional to the object size should work for most objects.
The addon I made for this answer shows how to make a bounding box around multiple objects, which may be helpful if you have multiple objects at once.
To do that in python -
import bpy
from math import radians
scn = bpy.context.scene
bpy.ops.import_scene.obj(filepath='obj1.obj')
target = bpy.context.selected_objects[0]
scn.objects.active = target
# centring the origin gives a better bounding box and rotation point
bpy.ops.object.origin_set(type='ORIGIN_GEOMETRY')
cam_x_pos = max([v[0] for v in target.bound_box]) * 2.5
cam_y_pos = max([v[1] for v in target.bound_box]) * 2.5
cam_z_pos = max([v[2] for v in target.bound_box]) * 2.5
rot_centre = bpy.data.objects.new('rot_centre', None)
scn.objects.link(rot_centre)
rot_centre.location = target.location
camera = bpy.data.objects.new('camera', bpy.data.cameras.new('camera'))
scn.objects.link(camera)
camera.location = (cam_x_pos, cam_y_pos, cam_z_pos)
camera.parent = rot_centre
m = camera.constraints.new('TRACK_TO')
m.target = target
m.track_axis = 'TRACK_NEGATIVE_Z'
m.up_axis = 'UP_Y'
rot_centre.rotation_euler.z = 0.0
rot_centre.keyframe_insert('rotation_euler', index=2, frame=1)
rot_centre.rotation_euler.z = radians(360.0)
rot_centre.keyframe_insert('rotation_euler', index=2, frame=101)
# set linear interpolation for constant rotation speed
for c in rot_centre.animation_data.action.fcurves:
    for k in c.keyframe_points:
        k.interpolation = 'LINEAR'
scn.frame_end = 100
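The camera placement above boils down to "largest bound_box corner times a margin factor". Isolated as a plain function (a sketch; the 2.5 factor is just the answer's margin, tune to taste):

```python
def camera_position(bound_box, factor=2.5):
    # bound_box: the 8 corner coordinates, as in Blender's Object.bound_box
    # returns an (x, y, z) camera location outside the object's extents
    return tuple(max(corner[i] for corner in bound_box) * factor
                 for i in range(3))
```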
I was wondering if there is a way to access the symmetry table of the MRichSelection, getting as a result the positive, the seam and the negative side, with the positive and the negative ordered by vertex-id correspondence. E.g. vertex id 15 is the symmetry counterpart of vertex id 350, and they are both at index 5 in the positive and negative lists.
I know I can achieve something similar using filterExpand, but I believe the lists are not ordered in a way that lets me access the opposite vertex.
I don't know if you ever found a solution to this, but I will post mine for future TDs looking for one.
Let's assume you want to get the corresponding verts between left and right across the YZ plane. You have two options: use the MRichSelection to handle the symmetry table for you, or calculate the match yourself by finding the smallest distance vector on the opposite side. Note: if you use the MRichSelection method, make sure symmetry mode is enabled in the viewport.
I will show both answers, so let's get started.
Also note: I will be calculating across the YZ plane, as mentioned earlier, so adjust to your liking if needed.
Solution 1(Calculating yourself):
#importing the OpenMaya module
import maya.cmds as cmds
from maya.api import OpenMaya as om

#converting selected object into MObject and MFnMesh function set
mSel=om.MSelectionList()
mSel.add(cmds.ls(sl=1)[0])
mObj=mSel.getDagPath(0)
mfnMesh=om.MFnMesh(mObj)

#getting our basePoints
baseShape = mfnMesh.getPoints()
#this function can be used to revert the object back to the baseShape
mfnMesh.setPoints(baseShape)

#getting l and r verts
mtol=0.02 #mid tolerance, in case the mesh is not completely symmetric at the mid
lVerts=[] #for storing left verts
rVerts=[] #for storing right verts
mVerts=[] #for storing mid verts
corrVerts={} #for storing corresponding verts

for i in range(mfnMesh.numVertices): #iterating through all the verts on the mesh
    thisPoint = mfnMesh.getPoint(i) #getting current point position
    if thisPoint.x > 0 + mtol: #if the x value is bigger than 0 + midTolerance
        lVerts.append((i, thisPoint)) #append to left vert list (i = vert index, thisPoint = vert MPoint position)
    elif thisPoint.x < 0 - mtol: #opposite of the left vert check
        rVerts.append((i, thisPoint))
    else: #if none of the above, assign to mid verts
        mVerts.append((i, thisPoint))

rVertspoints=[p for v, p in rVerts] #the MPoint positions of the right side
for vert, mp in lVerts: #going through our left points, unpacking vert index and MPoint position
    nmp=om.MPoint(-mp.x, mp.y, mp.z) #the mirrored MPoint of the left-side vert
    rp = mfnMesh.getClosestPoint(nmp) #getting the closest point on the mesh
    if rp[0] in rVertspoints: #checking if the point is on the right side
        corrVerts[vert] = rVerts[rVertspoints.index(rp[0])][0] #adding it if it is
    else: #if it is not, calculate the closest vert
        #iterate through rVertspoints and find the smallest distance
        dList=[nmp.distanceTo(rVert) for rVert in rVertspoints] #distance to each right-side vert
        mindist = min(dList) #the closest distance
        corrVerts[vert] = rVerts[dList.index(mindist)][0] #adding the vert
#corrVerts now stores the corresponding vertices from left to right
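The core matching step of Solution 1 (mirror each left point across x = 0 and take the nearest right-side point) can be sketched without Maya at all, using plain dicts of coordinates (hypothetical helper, brute-force nearest neighbour):

```python
def match_mirrored(left, right):
    # left/right: {vert_id: (x, y, z)}; returns {left_id: right_id}
    def dist2(a, b):
        # squared distance is enough for nearest-neighbour comparison
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    matches = {}
    for lid, (x, y, z) in left.items():
        mirrored = (-x, y, z)  # flip across the YZ plane
        matches[lid] = min(right, key=lambda rid: dist2(right[rid], mirrored))
    return matches
```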
Solution 2(using MRichSelection):
#MAKE SURE SYMMETRY IN THE VIEWPORT IS TURNED ON TO WORK! (will also work with topological symmetry)
#importing the OpenMaya module
import maya.cmds as cmds
from maya.api import OpenMaya as om

#converting selected object into MObject and MFnMesh function set
mSel=om.MSelectionList()
mSel.add(cmds.ls(sl=1)[0])
mObj=mSel.getDagPath(0)
mfnMesh=om.MFnMesh(mObj)

#getting our basePoints
baseShape = mfnMesh.getPoints()
#this function can be used to revert the object back to the baseShape
mfnMesh.setPoints(baseShape)

#getting left verts
mtol=0.02 #mid tolerance, in case the mesh is not completely symmetric at the mid
lVerts=[] #for storing left verts
corrVerts={} #for storing corresponding verts

for i in range(mfnMesh.numVertices): #iterating through all the verts on the mesh
    thisPoint = mfnMesh.getPoint(i) #getting current point position
    if thisPoint.x > 0 + mtol: #if the x value is bigger than 0 + midTolerance
        lVerts.append((i, thisPoint)) #append to left vert list (i = vert index, thisPoint = vert MPoint position)

#selecting our verts with symmetry on
cmds.select(["%s.vtx[%s]" % (mObj, i) for i, v in lVerts], sym=True)
#getting the rich selection; it stores the symmetry information for us
mRichBase = om.MGlobal.getRichSelection()
lCor = mRichBase.getSelection() #our left-side verts as an MSelectionList
rCor = mRichBase.getSymmetry() #the symmetry verts as an MSelectionList
mitL = om.MItSelectionList(lCor) #creating iterators so we can get the components
mitR = om.MItSelectionList(rCor)
while not mitL.isDone(): #iterating through the left list
    mitLComp = mitL.getComponent() #getting dag path and components of the left side
    mitRComp = mitR.getComponent() #getting dag path and components of the right side
    mitLCorVert = om.MItMeshVertex(mitLComp[0], mitLComp[1]) #creating our vertex iterators
    mitRCorVert = om.MItMeshVertex(mitRComp[0], mitRComp[1])
    while not mitLCorVert.isDone(): #iterating through our verts
        corrVerts[mitLCorVert.index()] = mitRCorVert.index() #adding corresponding verts to our dictionary
        mitLCorVert.next() #advance; needed to end the loop
        mitRCorVert.next()
    mitL.next() #go to the next selection in the list, if any
    mitR.next()
cmds.select(cl=1) #deselecting our verts
#corrVerts now stores the corresponding vertices from left to right
Hope this helps anyone looking for a solution.
Cheers,
Bjarke Rauff, rigging TD.
The answer by @Bjarke Rauff was very helpful; I wanted to add a note about speed.
MFnMesh.getClosestPoint() builds an octree to find the point efficiently, but it rebuilds it on every call; a mesh with 100k points can take up to 45 s to process.
Use an MMeshIntersector() to cache the data between lookups. This speeds up the table creation by about 900x, to 0.05 s for 100k points.
# mesh:        MDagPath to the poly object
# flip_matrix: matrix that mirrors the point across the symmetry axis
# (snippet assumes it lives inside a function)
mesh.extendToShape()
itMesh = om.MItMeshPolygon(mesh)
matrix = mesh.inclusiveMatrix()
node = mesh.node()
intersector = om.MMeshIntersector()
intersector.create(node, matrix)
if not intersector.isCreated:
    print("Failed to create mesh intersector")
    return

flipped_ids = {}
while not itMesh.isDone():
    id = itMesh.index()
    face_center = itMesh.center()
    # flip the point across the axis
    flipped_point = face_center * flip_matrix
    MpointOnMesh = intersector.getClosestPoint(flipped_point)
    if MpointOnMesh is not None:
        # get the face id property from MPointOnMesh
        flipped_id = MpointOnMesh.face
        flipped_ids[id] = flipped_id
    else:
        print("No intersection")
    itMesh.next()
NOTE
I tried hash tables with a tuple of the point as the key, but the point positions had slight variations, even with rounding, which created different hashes.
I've tested the MRichSelection approach and it doesn't actually work consistently in practice. It seems like it works when you have a perfectly mirrored mesh, but that can't be assumed. The component lists are not necessarily in sync.
I have created an array of Circle patches in Matplotlib. I need to get a list of the centers of these circle patches for some computation.
On the documentation page of the circle patch (see matplotlib.patches.Circle on this page), there don't seem to be any methods for extracting the center of the circle, say mycircle.get_center(). There is one for the radius, but not for the center. Any suggestions?
EDIT:
Here is some code. Basically, what I want to do is create an interactive app in which the user clicks disks onto the screen with the mouse. The only constraint on positioning these disks is that they should all be disjoint, so when the user tries to insert a disk with a mouse click, I want to check whether the new disk intersects the already inputted disks.
I am storing all the circle patches in an array called disk_arrangement. Sure, I could keep a separate array recording the centers to do my job, but that seems ugly. That's why I hope Matplotlib has a method to extract the center of a given circle patch.
def place_disk(event, disk_arrangement=[]):
    def is_inside_an_existing_disk(center_x, center_y):
        if disk_arrangement != []:
            for existing_disk in disk_arrangement:
                if existing_disk.contains(event): #### How to do this????
                    return True
        return False
    if event.name == 'button_press_event' and \
       event.dblclick == True and \
       event.xdata != None and \
       event.ydata != None and \
       is_inside_an_existing_disk(event.xdata, event.ydata) == False:
        cursor_circle = mpl.patches.Circle((event.xdata, event.ydata),
                                           radius=0.3,
                                           facecolor='green')
        disk_arrangement.append(cursor_circle)
        ax.add_patch(cursor_circle)
        fig.canvas.draw()
I am using Python 2.7.11 on Ubuntu 14.04
Try the center attribute, e.g. for a patch initialized with:
from matplotlib.patches import Circle
circ = Circle((1, 2), radius=1)
circ.center == (1,2) #should return True
To determine all the attributes of an object you can use dir, e.g. dir(circ) gives all attributes of the circ object including center, radius, etc.
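Once the center attribute gives you the coordinates, the question's disjointness test reduces to a plain distance check. A matplotlib-free sketch (names are illustrative):

```python
import math

def circles_disjoint(center_a, radius_a, center_b, radius_b):
    # two circles are disjoint iff the distance between their centers
    # exceeds the sum of their radii
    dx = center_a[0] - center_b[0]
    dy = center_a[1] - center_b[1]
    return math.hypot(dx, dy) > radius_a + radius_b
```

In place_disk, a new disk is admissible only if circles_disjoint returns True against every existing patch's center and radius.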