Turtle module - Detecting collisions with polygons - python

I'm doing a small experiment with turtle, basically a small game where you can control a turtle... I have trajectory calculations already, so I can control it pretty decently!
However, I'd like to define obstacles, mostly polygons defined by a set of points.
Those would trigger collisions if the turtle hit them (elastic collisions).
My question is...
How can I detect if I'm on the border of a polygon? Say I run straight into the middle of the hypotenuse of a triangle, how can I know this is actually a line which I cannot go through, at the right moment, in order to trigger the collision code?
Thanks!

Your polygons are defined by line segments. The turtle move is also a line segment, so you merely need to check for intersections between the turtle line segment and the list of polygon line segments.
You can probably do this for a hundred or a thousand line segments fairly quickly. Beyond that you'll need to look for smarter algorithms, e.g. the Bentley-Ottmann algorithm.
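A minimal sketch of such a check, assuming the turtle's old and new positions are plain (x, y) tuples and each obstacle is a list of vertices (the function names are illustrative, not part of the turtle module):

    def ccw(a, b, c):
        # Cross product: positive if a, b, c make a counter-clockwise turn.
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def segments_intersect(p1, p2, p3, p4):
        d1 = ccw(p3, p4, p1)
        d2 = ccw(p3, p4, p2)
        d3 = ccw(p1, p2, p3)
        d4 = ccw(p1, p2, p4)
        # Proper intersection: the endpoints of each segment lie on
        # opposite sides of the other segment (grazing a vertex exactly
        # is ignored here).
        return (d1 * d2 < 0) and (d3 * d4 < 0)

    def turtle_hits_polygon(old_pos, new_pos, polygon):
        # polygon is a list of (x, y) vertices; test the move against every edge.
        for i in range(len(polygon)):
            a = polygon[i]
            b = polygon[(i + 1) % len(polygon)]
            if segments_intersect(old_pos, new_pos, a, b):
                return True
        return False

Call turtle_hits_polygon(old_pos, new_pos, obstacle) before committing each move; if it returns True, run the bounce/collision code instead of moving.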

Related

How to determine the distance in pixels between a circle and various obstacles

So I have a circle created in pygame as my main character, and I'm looking to find the shortest distance between its sides and the various surrounding obstacles. You can see what I mean in the picture. I have a left quadrant and a right quadrant, and I want to see what exists in these quadrants. Whichever obstacle is closest to the circle, pixel-wise, that pixel count becomes the left or right value (i.e. if the closest obstacle in the right quadrant is 40 pixels away, right = 40). I also have a front value that will be scanning for things directly in front of the circle.
I've seen things for looking for collision with a circle (creating a circular field around the object as a whole) and I've also seen stuff that uses Pythagorean theorem to look for distances, but I'm not sure how to tackle it in a more "spotlight" scope, if that makes sense.
Any suggestions on how to go about this would be much appreciated! The overall goal is for the circle to move automatically around these obstacles by avoiding them, hence why I want it to scan in various areas to determine how to move about the space.
The entire space is enclosed with a wall, and all the obstacles are randomly placed squares within the wall.
The distance to a circle is the same as the distance to the center minus the radius.
Now, the distance from a rectangle to a point is found by considering the nine regions defined by the supporting lines of the sides.
Depending on the region, the shortest distance is axis-aligned (from a side to the point) or oblique (from a corner to the point) and the formulas are easy.
The discussion is even simpler for a rectangular hole or inner angle.
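A minimal sketch of that computation, assuming an axis-aligned rectangle given pygame-style as (left, top, width, height); the two max() calls below collapse the nine-region case analysis, since each delta is zero whenever the point lies between the corresponding pair of sides:

    import math

    def point_to_rect_distance(point, rect):
        x, y = point
        left, top, w, h = rect
        right, bottom = left + w, top + h
        # Axis-aligned cases: dx/dy are zero when the point is between the sides.
        dx = max(left - x, 0, x - right)
        dy = max(top - y, 0, y - bottom)
        # Oblique (corner) case when both dx and dy are non-zero.
        return math.hypot(dx, dy)

    def circle_to_rect_distance(center, radius, rect):
        # Distance from the circle's edge: distance to the centre minus the radius.
        # Negative means the circle already overlaps the rectangle.
        return point_to_rect_distance(center, rect) - radius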

Separating Axis Theorem in Python/Pyglet

I'm trying to make a (sort-of) clone of Asteroids in Python using Pyglet. I figured I'd try to get a little fancy and implement the separating axis theorem to do collision. I got it to work, but the problem is that it's miserably slow. I test collision between the bullets that the player shoots and the asteroids on the screen in a double for-loop, which I believe is quadratic time, but the frame rate drops from about 60 to 30 fps by the time there are about 6 asteroids and 6 bullets on the screen, which seems incredibly slow, even for a non-optimized way of detecting collisions.
So I ran a profiler to determine where, exactly, in the code the program is getting hung up. It seems to be hung up in the method where I transform shape vertices into world space (I define the shapes around the origin and use OpenGL code to transform to world space for drawing, which I believe is the right way to do it). I grab the transformation matrix from OpenGL, turn it into a NumPy array, and then multiply each vertex by this matrix to get the transformed vertices. It's worth noting that I do this every collision check: I used to use XNA, and when I implemented the SAT in that (I made an asteroids clone there, too), the vertices were also defined around the origin and then you had to transform them using a world matrix.
Is it best to store the vertices around (0, 0) and transform each call, or just store the transformed vertices? I feel like the algorithm shouldn't be THIS slow, so I'm willing to bet I screwed up implementing something. If I was better at profiling (I'm pretty unfamiliar with it) I might be able to get a more complete picture, but I was hoping you guys might have some idea.
Here's a direct link to the file with the Shape class in it, where all the collision logic happens: shape.py. The specific method that the profiler seemed to mark as the bottleneck was __get_transformed_verts. Obviously you can get to the entire repo from there too, but just be aware that there's still a good deal not commented.
As Nico suggests in comments, a quick way to get a good speed-up would be to check simpler geometry first. For an Asteroids clone I guess a circle will be a good fit (or sphere for 3D). If the circles (at least large enough to cover your actual shape) don't overlap, then there is no need to do the more expensive geometry test.
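A rough sketch of such a pre-check, assuming each shape exposes a world-space centre and a bounding radius large enough to cover all of its vertices (sat_collision stands in for your existing SAT routine; all names here are illustrative):

    def circles_overlap(center_a, radius_a, center_b, radius_b):
        dx = center_a[0] - center_b[0]
        dy = center_a[1] - center_b[1]
        r = radius_a + radius_b
        # Compare squared distances so no square root is needed.
        return dx * dx + dy * dy <= r * r

    def collides(shape_a, shape_b):
        # Cheap rejection first; only run the expensive SAT test when the
        # bounding circles actually overlap.
        if not circles_overlap(shape_a.center, shape_a.radius,
                               shape_b.center, shape_b.radius):
            return False
        return sat_collision(shape_a, shape_b)  # your existing SAT check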
If you have many objects, you will probably want to avoid doing n*n tests every frame. Take a look at space partitioning structures/algorithms. The simplest scheme with a lot of moving objects in 2D would be a grid. Then you only need to test objects belonging to the same - or neighbouring - grid cells for collision.
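A rough sketch of a uniform-grid broad phase, assuming objects with x/y attributes (the cell size and names are illustrative):

    from collections import defaultdict
    from itertools import combinations

    CELL = 64  # cell size; should be at least as large as your biggest object

    def build_grid(objects):
        # Bucket each object by the cell containing its centre.
        grid = defaultdict(list)
        for obj in objects:
            grid[(int(obj.x // CELL), int(obj.y // CELL))].append(obj)
        return grid

    def candidate_pairs(grid):
        # Yield each pair at most once: pairs within a cell, plus pairs between
        # a cell and four of its eight neighbours (the other four directions are
        # covered when those cells take their turn).
        offsets = [(1, 0), (1, 1), (0, 1), (-1, 1)]
        for (cx, cy), bucket in grid.items():
            yield from combinations(bucket, 2)
            for dx, dy in offsets:
                for a in bucket:
                    for b in grid.get((cx + dx, cy + dy), ()):
                        yield a, b

Rebuild the grid each frame (it is cheap) and run the circle and SAT tests only on the pairs it yields.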
Another thing I noticed: You generate the transformed vertices every time you test for collision. It would be quicker to generate them only once per timestep (frame) for each object that fails the circle-circle test.

Getting outline of a graph as a polygon in PyGame

I have a feeling that this is impossible (or at least very complicated), but I currently have a generated graph that looks something like this (excuse my terrible paint skills):
Now, I'd like to be able to create a polygon of the outline. I have the coordinates of all the nodes, but not the intersections. The best I can manage so far is the Gift Wrapping algorithm, which gives more of a rough outline of the polygon than anything else.
Does anyone have any ideas as to how I could go about this?
(I'm currently using PyGame)
You're going to want to figure out where intersections happen and make new nodes there.
Then you want to find an edge that's on the outer polygon. I suggest running a random ray in from infinity until it strikes an edge.
Then imagine yourself walking along that edge, keeping your left hand on the boundary and your right hand outside. Start walking.
When you hit a node, you turn such that you don't cross any edge. That is, you begin to traverse the next edge in counterclockwise order. (A simple implementation of this would be to sort the edges at the node by direction, using atan2().)
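A rough sketch of that turn step, assuming node objects with x/y attributes and a list of adjacent nodes (nothing here is PyGame-specific). Depending on your coordinate system you may need to flip the sense of the turn, since screen coordinates have y growing downwards:

    import math

    def next_edge(prev_node, node, neighbours):
        # Having walked prev_node -> node, pick the neighbour that comes first
        # in counter-clockwise order after the edge back to prev_node.
        back = math.atan2(prev_node.y - node.y, prev_node.x - node.x)

        def ccw_angle(n):
            a = math.atan2(n.y - node.y, n.x - node.x)
            # Angle measured counter-clockwise from the "back" edge; the back
            # edge itself gets 2*pi so we only return to it at a dead end.
            return (a - back) % (2 * math.pi) or 2 * math.pi

        return min(neighbours, key=ccw_angle)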
It's all basic high school algebra and trigonometry, but it might be a little rough if this is your first time programming anything of this nature. You'll learn a lot, though.

directing a mass of enemies at once

I am working on a simple 2d game in python + pygame where many enemies continually spawn and chase the player or players. A problem I ran into, and one many people who have programmed this type of game have run into, is that the enemies converge very quickly. I have made a temporary solution with a function that randomly pushes apart any two enemies that are too close to each other. This works well, but it is roughly an O(n^2) algorithm that runs every frame, and with many enemies the program starts to slow down.
When my program runs with this function the enemies seem to form a round object I nicknamed a "clump". The clump is usually roughly elliptical but may actually be more complex (not symmetrical), because as the player moves the enemies are pulled in different directions. I do like the way this clump behaves, but I am wondering if there is a more efficient way to calculate it. Currently every enemy in the clump (often >100) is first moved towards the player and then pushed apart. If there were instead a way to calculate the figure that the clump creates, and how it moves, it would save a lot of computation.
I am not exactly sure how to approach the problem. It may be possible to calculate where the border of the figure moves, and then expand it to make sure the area stays the same.
Also, here are the two functions currently being used to move the enemies:
import math
import random

# CLUMP (the minimum spacing between enemies) is defined elsewhere in the game.

def moveEnemy(enemy, player, speed):
    # Step the enemy towards the player at a constant speed.
    a = player.left - enemy.left
    b = player.top - enemy.top
    r = speed / math.hypot(a, b)
    return enemy.move(r * a, r * b)

def clump(enemys):
    # Push apart every pair of enemies closer than CLUMP (Manhattan distance);
    # this double loop is the O(n^2) part.
    for p in range(len(enemys)):
        for q in range(len(enemys) - p - 1):
            a = enemys[p]
            b = enemys[p + q + 1]
            if abs(a.left - b.left) + abs(a.top - b.top) < CLUMP:
                xChange = (random.random() - .5) * CLUMP
                yChange = ((CLUMP / 2) ** 2 - xChange ** 2) ** .5
                enemys[p] = enemys[p].move(int(xChange + .5), int(yChange + .5))
                enemys[p + q + 1] = enemys[p + q + 1].move(-int(xChange + .5), -int(yChange + .5))
    return enemys
Edit: some screen shots of how the clump looks:
http://imageshack.us/photo/my-images/651/elip.png/
http://imageshack.us/photo/my-images/832/newfni.png/
http://imageshack.us/photo/my-images/836/gamewk.png/
The clump seems to be mostly a round object, just stretched (like an ellipse, but it may be stretched in multiple directions); however, it currently has straight edges due to the rectangular enemies.
There are several ways to go about this, depending on your game. Here are some ideas for improving the performance:
Allow for some overlap.
Only run the distance checks every fixed number of frames rather than on every frame.
Improve your distance checking formula. If you are using the standard distance formula, this can be optimized in many ways. For one, get rid of the square root: precision doesn't matter, only the relative distance (see the sketch after this list).
Each unit can keep track of a list of nearby units. Only perform your calculations between the units in that list. Every so often, update that list by checking against all units.
Depending on how your game is setup, you can split the field up into areas, such as quadrants or cells. Units only get tested against other units in that cell.
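For the squared-distance idea above, a minimal sketch (the CLUMP value here is illustrative, standing in for the question's own spacing threshold):

    CLUMP = 30                 # spacing threshold; value is illustrative
    CLUMP_SQ = CLUMP * CLUMP   # precompute once

    def too_close(a, b):
        dx = a.left - b.left
        dy = a.top - b.top
        # Equivalent to math.hypot(dx, dy) < CLUMP, but with no sqrt call.
        return dx * dx + dy * dy < CLUMP_SQ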
EDIT: When the units get close to their target, they might not behave correctly. I would suggest that, rather than having them home in on the exact target from far away, they actually seek a randomized nearby target, like an offset from their real target.
I'm sure there are many other ways to improve this, it is pretty open ended after all. I should also point out Boids and flocking which could be of interest.
You could define a clump as a separate object with a fixed number of spatial "slots" for each enemy unit in the clump. Each slot would have a set of coordinates relative to the clump center and would either be empty or hold a reference to one unit.
A new unit trying to join the clump would move towards the innermost free slot, and once it got there it would "stay in formation", its position always being the position of the slot it occupied. Clumps would have a radius much larger than a single unit, would adjust position to avoid overlapping other clumps or loose units that weren't trying to join the clump, etc.
At some point, though, you'd need to deal with interactions for the individual units in the clump, so I'm not sure it's worthwhile. I think Austin Henley's suggestion of splitting the field up into cells/regions and only testing against units in nearby cells is the most practical approach.
I think you're looking for flocking.
The best intro to flocking / steering-behavior movement is http://www.red3d.com/cwr/steer/ ; see the paper linked there, and the related OpenSteer library.
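As a flavour of what the separation part of flocking looks like, here is a rough sketch, assuming enemy objects with x/y attributes (nothing here is taken from OpenSteer itself):

    import math

    def separation_force(enemy, neighbours, radius=40.0):
        # Each nearby neighbour pushes the enemy away, weighted by how close
        # it is; the result is a small steering vector to add to the normal
        # "chase the player" velocity.
        fx = fy = 0.0
        for other in neighbours:
            dx = enemy.x - other.x
            dy = enemy.y - other.y
            d = math.hypot(dx, dy)
            if 0 < d < radius:
                # Closer neighbours push harder (1/d weighting on the unit vector).
                fx += dx / (d * d)
                fy += dy / (d * d)
        return fx, fy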

Native PyGame method for automatically scaling inputs to a surface resolution?

This question is related to this other one.
In my program (which uses pygame to draw objects on the screen) I have two representations of my world:
A physical one that I use to make all the calculations involved in the simulation, and in which objects are located on a 1000x1000 metre surface.
A visual one which I use to draw on the screen, in which my objects are located in a window measuring 100x100 pixels.
What I want to achieve is to be able to pass to my pygame drawing functions (which normally accept inputs in pixels) my physical/real-word coordinates. In other words, I would like to be able to say:
Draw a 20m radius circle at coordinates (200m, 500m)
using the precise pygame syntax:
pygame.draw.circle(surface, (255,255,255), (200,500), 20)
and get my circle of 2px radius centred on pixel (20, 50).
Please note that this question is about a native pygame way to do this, not about some sort of workaround to achieve that result (if you want to answer with a workaround, you should take a look at the question I already mentioned instead).
Thanks in advance for your time and support.
There is no native pygame way to do this.
You may be misunderstanding the function of pygame. It is not for drawing vector objects. It is for writing pixels into video surfaces.
Since you have vector objects, you must define how they will be converted into pixels. Doing this is not a workaround - it's how you are intended to use pygame.
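A minimal sketch of such a conversion layer, using the 1000 m / 100 px figures from the question (the scale constant and helper names are made up, not part of pygame):

    import pygame

    METRES_TO_PX = 100.0 / 1000.0   # 1000x1000 m world drawn on a 100x100 px surface

    def to_px(value):
        return int(round(value * METRES_TO_PX))

    def draw_circle_m(surface, colour, centre_m, radius_m):
        # Accepts metres, converts, and delegates to the normal pygame call.
        centre_px = (to_px(centre_m[0]), to_px(centre_m[1]))
        pygame.draw.circle(surface, colour, centre_px, to_px(radius_m))

    # Usage: draw_circle_m(surface, (255, 255, 255), (200, 500), 20)
    # draws a 2 px radius circle centred on pixel (20, 50).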
Since it seems that PyGame developers do not hang around here too much, I brought the question to the Pygame mailing list, where it spawned a long thread and the issue was debated at length.
The summary would be:
At present there is not such a feature.
There is interest to implement it, or at least to try to implement it...
...although it is not a priority for the core devs in any way
There is more than one way to skin a cat:
should the scaling happen both ways (when passing coordinates in and when reading them back)?
how to deal with lines that have no thickness but that should be visible?
how to deal with visibility of objects at the edge of the image? which of their points should be taken as reference to know if a pixel should be lit or not for them?
and more (see linked thread).
