I'm working on a Minecraft-style game, and I need a way to reduce the amount of the world that gets rendered. Currently I'm using a naive render-everything approach, which has obvious scaling problems. I need a way to take an array of blocks and find out which ones are touching air, water, or any other translucent block.
I'm open to using external modules like NumPy or SciPy, though some of their documentation is a bit over my head. Alternatively, it would also be acceptable to iterate through each block and get a list of neighbors, though the performance cost of doing these calculations in Python instead of C would be pretty hefty.
For the record, I've already tried looking at NetworkX, but it seems to be more for scientific analysis or pathfinding than visibility checking.
If you only need to do this once, performance should not be an issue. If you then incrementally update an .isBoundary property on each block whenever the world changes, you will never have to recompute it.
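Since you mention NumPy, here is a rough sketch of what that one-off pass could look like using whole-array comparisons instead of a Python loop. The block IDs, array layout, and function name are assumptions about your world format, not anything prescribed:

```python
import numpy as np

# Hypothetical block IDs; the world is assumed to be a 3D array of them.
AIR, WATER, STONE, DIRT = 0, 1, 2, 3
TRANSPARENT_IDS = np.array([AIR, WATER])

def boundary_mask(world):
    """True wherever a solid block touches a transparent block (or the
    edge of the world) on any of its 6 faces."""
    transparent = np.isin(world, TRANSPARENT_IDS)
    solid = ~transparent

    # Pad with "transparent" so blocks on the world edge count as boundary.
    padded = np.pad(transparent, 1, mode="constant", constant_values=True)

    neighbour_transparent = (
        padded[2:, 1:-1, 1:-1] | padded[:-2, 1:-1, 1:-1] |  # +x / -x
        padded[1:-1, 2:, 1:-1] | padded[1:-1, :-2, 1:-1] |  # +y / -y
        padded[1:-1, 1:-1, 2:] | padded[1:-1, 1:-1, :-2]    # +z / -z
    )
    return solid & neighbour_transparent

world = np.random.randint(0, 4, size=(64, 64, 64))
render_these = np.argwhere(boundary_mask(world))  # (N, 3) block coordinates
```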
However, you still run into issues if your world is very large, or full of holes, caverns, and transparent blocks interleaved with non-transparent ones. If you need to dynamically determine what is visible, you can keep an octree (http://en.wikipedia.org/wiki/Octree) in which giant expanses of air/water/etc. are stored as a single node (one giant block) labeled "transparent". Then use the "paint bucket" (flood-fill) algorithm, modified along the lines of Dijkstra's algorithm so it is easy to detect when you've "gone around a corner" by checking whether any blocks exist between the current block and the origin, to quickly figure out which blocks are in sight. Updates for things far in the distance can be delayed significantly if the player is moving slowly.
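And here is a minimal sketch of the flood-fill part on its own: a plain BFS through transparent blocks outward from the camera, marking every solid block it bumps into as visible. It leaves out the octree and the Dijkstra-style "around the corner" check described above, and the names are placeholders:

```python
from collections import deque

def visible_blocks(transparent, start, max_dist=64):
    """BFS ("paint bucket") through transparent cells from the camera cell
    `start`; any solid cell we bump into is marked visible."""
    sx, sy, sz = transparent.shape
    visible = set()
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        (x, y, z), dist = queue.popleft()
        if dist >= max_dist:
            continue
        for nx, ny, nz in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                           (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
            if not (0 <= nx < sx and 0 <= ny < sy and 0 <= nz < sz):
                continue
            if (nx, ny, nz) in seen:
                continue
            seen.add((nx, ny, nz))
            if transparent[nx, ny, nz]:
                queue.append(((nx, ny, nz), dist + 1))
            else:
                visible.add((nx, ny, nz))  # a solid face the camera can reach
    return visible
```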
You could use a Z-buffer solution. As for speed, I'd write as much as possible in Python and run it under PyPy. EVE Online (a successful 3D MMO) was written in Stackless Python, I believe.
With OpenCV I've written a script that:
captures a running application in real-time and displays it in a window
searches for a specific image within the window
if it finds the image, it draws a box around it; otherwise it does nothing
I want to extend this functionality to be able to identify multiple images on the same window at once.
As I understand it I can either:
store the images as a list and iterate over the list, doing something at each iteration (see the sketch at the end of this question)
use the same functionality noted above, in a different thread.
What do I need to take into consideration when deciding on the approach? Is there a best practice to follow or things I need to be aware of before deciding on a course?
I've found a number of topics on the subject, but they're all very old and mostly deal with sensors or performance, or are in other languages. It's a 2D image I'm detecting, so I don't think performance is going to be an issue.
When is it appropriate to multi-thread?
When to thread and when to WAR?
Python: Multi-threading or Loops?
When multi-threading is a bad idea?
To multi-thread or not to multi-thread!
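For reference, this is roughly what I mean by the list-iteration approach, using cv2.matchTemplate; the variable names and the threshold are placeholders:

```python
import cv2

def find_templates(frame, templates, threshold=0.8):
    """frame: the captured window (BGR image); templates: a list of
    (name, image) pairs loaded with cv2.imread()."""
    hits = []
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for name, tmpl in templates:
        tmpl_gray = cv2.cvtColor(tmpl, cv2.COLOR_BGR2GRAY)
        result = cv2.matchTemplate(gray, tmpl_gray, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val >= threshold:
            h, w = tmpl_gray.shape
            hits.append((name, max_loc, (max_loc[0] + w, max_loc[1] + h)))
    # Draw a box around each template that was found.
    for name, top_left, bottom_right in hits:
        cv2.rectangle(frame, top_left, bottom_right, (0, 255, 0), 2)
    return hits
```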
I was thinking about creating Minesweeper in Python/Pygame. However, I was stumped when trying to figure out a way to guarantee a large swath of empty space on the first move (such as in Minesweeper on Windows XP). Does anyone have a method for doing this? I don't want code, just words.
Thank you in advance
Firstly, that doesn't happen on Windows XP (or in any common Minesweeper implementation) every time. It's very likely to happen if you are playing on a low difficulty, though.
There are some ideas, though:
Generate the map after the first click. By tweaking your mine-placement algorithm to avoid the area around where the user clicked, you get the large swath you desire (see the sketch at the end of this answer).
Generate the map, but change it if insufficient space would be exposed. This will (probably) result in a faster reaction to the first click, as the map will likely already be generated.
Don't do this. As mentioned previously, this is not how Windows XP worked, but there was a high likelihood of it just happening naturally on easier difficulties. It might be worth recalculating the map if the user clicks on a mine on the first move, but otherwise leave it to your random distribution. Remember that (except in some custom modes) there are going to be many more empty squares than ones with mines.
Hopefully that will get you started.
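Since you asked for words rather than code, treat this only as a tiny sketch of the first idea; the function and parameter names are made up:

```python
import random

def place_mines(width, height, n_mines, first_click, clearance=1):
    """Place mines only after the first click, excluding the clicked cell
    and its neighbours so that cell is guaranteed to be a 0 and will
    auto-reveal a swath around it."""
    cx, cy = first_click
    excluded = {(cx + dx, cy + dy)
                for dx in range(-clearance, clearance + 1)
                for dy in range(-clearance, clearance + 1)}
    candidates = [(x, y) for x in range(width) for y in range(height)
                  if (x, y) not in excluded]
    return set(random.sample(candidates, n_mines))
```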
I'm trying to make a (sort-of) clone of Asteroids in Python using Pyglet. I figured I'd try to get a little fancy and implement the separating axis theorem to do collision. I got it to work, but the problem is that it's miserably slow. I test collision between the bullets the player shoots and the asteroids on the screen in a double for-loop, which I believe is quadratic time, but the frame rate drops from about 60 to 30 fps by the time there are about 6 asteroids and 6 bullets on the screen, which seems incredibly slow, even for a non-optimized way of detecting collisions.
So I ran a profiler to determine where exactly the program is getting hung up. It seems to be the method where I transform shape vertices into world space (I define the shapes around the origin and use OpenGL code to transform them to world space for drawing, which I believe is the right way to do it). I grab the transformation matrix from OpenGL, turn it into a NumPy array, and then multiply each vertex by this matrix to get the transformed vertices. It's worth noting that I do this on every collision check: I used to use XNA, and when I implemented the SAT there (I made an Asteroids clone there, too), the vertices were also defined around the origin and then had to be transformed using a world matrix.
Is it best to store the vertices around (0, 0) and transform each call, or just store the transformed vertices? I feel like the algorithm shouldn't be THIS slow, so I'm willing to bet I screwed up implementing something. If I was better at profiling (I'm pretty unfamiliar with it) I might be able to get a more complete picture, but I was hoping you guys might have some idea.
Here's a direct link to the file with the Shape class in it, where all the collision logic happens: shape.py. The specific method that the profiler seemed to mark as the bottleneck was __get_transformed_verts. Obviously you can get to the entire repo from there too, but just be aware that there's still a good deal not commented.
As Nico suggests in comments, a quick way to get a good speed-up would be to check simpler geometry first. For an Asteroids clone I guess a circle will be a good fit (or sphere for 3D). If the circles (at least large enough to cover your actual shape) don't overlap, then there is no need to do the more expensive geometry test.
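A minimal sketch of such a broad-phase check, assuming each object exposes x, y, and a bounding radius large enough to enclose its polygon (the attribute names are guesses, not your actual API):

```python
def circles_overlap(a, b):
    """Cheap broad-phase test using squared distances (no sqrt needed)."""
    dx = a.x - b.x
    dy = a.y - b.y
    r = a.radius + b.radius
    return dx * dx + dy * dy <= r * r

# Only fall back to the expensive SAT test when the circles overlap:
# if circles_overlap(bullet, asteroid) and sat_collide(bullet, asteroid): ...
```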
If you have many objects, you will probably want to avoid doing n*n tests every frame. Take a look at space partitioning structures/algorithms. The simplest scheme with a lot of moving objects in 2D would be a grid. Then you only need to test objects belonging to the same - or neighbouring - grid cells for collision.
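A uniform grid can be as simple as a dict keyed by integer cell coordinates; this is only a sketch with made-up names:

```python
from collections import defaultdict
from itertools import product

def build_grid(objects, cell_size):
    """Bucket objects into grid cells keyed by integer (cx, cy) coords."""
    grid = defaultdict(list)
    for obj in objects:
        grid[(int(obj.x // cell_size), int(obj.y // cell_size))].append(obj)
    return grid

def candidate_pairs(grid):
    """Yield each pair of objects in the same or neighbouring cells once."""
    seen = set()
    for (cx, cy), bucket in grid.items():
        for dx, dy in product((-1, 0, 1), repeat=2):
            for other in grid.get((cx + dx, cy + dy), ()):
                for obj in bucket:
                    if obj is other:
                        continue
                    key = frozenset((id(obj), id(other)))
                    if key not in seen:
                        seen.add(key)
                        yield obj, other
```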
Another thing I noticed: You generate the transformed vertices every time you test for collision. It would be quicker to generate them only once per timestep (frame) for each object that fails the circle-circle test.
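One way to structure that caching, sketched with made-up names rather than your actual Shape class:

```python
import math

class CachedShape:
    """Recompute world-space vertices at most once per move, not once per
    collision test."""
    def __init__(self, local_verts):
        self.local_verts = local_verts      # defined around the origin
        self.x = self.y = 0.0
        self.angle = 0.0
        self._world_verts = None            # cache of transformed vertices

    def move_to(self, x, y, angle):
        self.x, self.y, self.angle = x, y, angle
        self._world_verts = None            # position changed: invalidate

    @property
    def world_verts(self):
        if self._world_verts is None:
            c, s = math.cos(self.angle), math.sin(self.angle)
            self._world_verts = [(self.x + c * vx - s * vy,
                                  self.y + s * vx + c * vy)
                                 for vx, vy in self.local_verts]
        return self._world_verts
```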
I'm interested in using Python to make diagrams representing the size of values based on the size of squares (and optionally their colour). Basically I'm looking for a way to make overviews of a bunch of values like the good old program WinDirStat does with hard-drive usage (it basically makes a big square representing your hard drive, and then smaller squares making up the area inside of it representing different programs; the bigger the square, the larger the file, and colour indicates the type of file). I'm fairly familiar with matplotlib, and I don't think it's possible to do something like this with it. Is there any other Python package that would help? Any suggestions for something more low-level if there isn't? I guess I could do it manually if I could find a way to draw the boxes programmatically (I don't really care about the format, but the option to export SVG as well as PNG would be nice).
Ultimately, it would be nice to have it be interactive like WinDirStat is, where if you hover over a particular square you get more information on it, and if you click on it maybe you go in and see the makeup of that particular square. I'm only familiar with wxPython for GUI stuff; not sure if it could be used for something like this. For now I'd be happy with just outputting them, though.
Thanks a lot!
Alex
Edit:
Thanks guys, both your answers helped a lot.
You're looking for Treemapping algorithms. Once implemented, you can transform the output (which should be rectangles) into plotting commands to anything that can draw layered rectangles.
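For example, even a crude single-level layout (proportional slices along one axis, not the squarified algorithm) can be drawn with matplotlib rectangle patches, which also gives you SVG and PNG export. Everything below is a sketch with made-up names; a real treemap would recurse into children, alternating the slicing direction or using the squarified approach:

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

def slice_layout(values, x, y, w, h):
    """Split the rectangle (x, y, w, h) into vertical slices whose widths
    are proportional to the values."""
    total = float(sum(values))
    rects, offset = [], 0.0
    for v in values:
        frac = v / total
        rects.append((x + offset * w, y, frac * w, h))
        offset += frac
    return rects

values = [500, 300, 120, 50, 30]
fig, ax = plt.subplots()
for (rx, ry, rw, rh), v in zip(slice_layout(values, 0, 0, 1, 1), values):
    ax.add_patch(Rectangle((rx, ry), rw, rh, edgecolor="black",
                           facecolor=plt.cm.viridis(v / max(values))))
    ax.text(rx + rw / 2, ry + rh / 2, str(v), ha="center", va="center")
ax.set_xlim(0, 1)
ax.set_ylim(0, 1)
ax.set_axis_off()
fig.savefig("treemap.svg")   # or .png
```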
Edit:
More links and information:
The browser-based d3 library provides 'squarified' treemaps (a JS implementation). If you don't mind reading papers, they reference this paper by Bruls, Huizing, and van Wijk (it is also citation 3 on the Wikipedia article).
I'd search on the algorithms listed on the linked Wikipedia article. For instance, they also link to this article, which describes an algorithm for "mixed treemaps". The paper also includes some interesting portions at the end describing transformations into other-than-rectangular shapes.
Squarified certainly appears to be the most common variety around. The above links should give you enough to work towards a solution or, even, directly port the d3 implementation. However, the cost of grokking d3's model (which is something like a declarative form of jQuery) may be somewhat high. At first glance, though, the implementation appears relatively straightforward.
Squaremap does this. I haven't used it (I only know it from RunSnakeRun) and its documentation is severely lacking, but it seems to work.
I am planning to write a simple 3D (isometric view) game in Java using jMonkeyEngine - nothing too fancy, I just want to learn something about OpenGL and writing efficient algorithms (random map generation ones).
When I was planning what to do, I started wondering about switching to Python. I know that Python didn't come into existence to be a tool for writing 3D games, but is it possible to write good-looking games with this language?
I have in mind 3D graphics, nice effects, and free CPU time left over to power the rest of the game engine. I have seen good-looking Java games - and to be honest, I was rather shocked when I saw the level of detail achieved in RuneScape HD.
On the other hand, pygame.org has only 2D games, with some fledgling 3D projects. Are there any efficient 3D game engines for Python? Is PyOpenGL the only alternative? Are good-looking games in Python just not popular, or not possible to achieve?
I would be grateful for any information / feedback.
If you are worried about 3D performance: Most of the performance-critical parts will be handled by OpenGL (in a C library or even in hardware), so the language you use to drive it should not matter too much.
To really find out if performance is a problem, you'd have to try it. But there is no reason why it cannot work in principle.
At any rate, you could still optimize the critical parts, either in Python or by dropping down to C. You still get Python's benefits for most of the game engine, which is less performance-critical.
Yes. Eve Online does it.
http://support.eve-online.com/Pages/KB/Article.aspx?id=128
I did a EuroPython talk about my amateur attempts to drive OpenGL from Python:
http://pyvideo.org/video/381/pycon-2011--algorithmic-generation-of-opengl-geom
The latest version of the code I'm talking about is here:
https://github.com/tartley/gloopy
It's billed as a 'library', but that was naive of me: It's a bunch of personal experimental code.
Nevertheless, it demonstrates that you can move around hundreds of bits of geometry at 60fps from Python.
Although the demo above is fairly bare-bones in that it uses simple geometry and untextured faces, one thing I found is that more detailed geometry, texture mapping, or other more modern graphics effects don't substantially affect the framerate. Or at least they don't affect it any worse than using the same effects in a C program. These run on the GPU, so it makes no difference at all that your program is written in Python.
One thing that is performance-sensitive in Python is creating dynamic geometry on the CPU side, e.g. moving individual vertices within a shape by bending or melting it. Doing this sort of per-vertex calculation in Python, then constructing a new ctypes array from the result, then shunting that geometry to the GPU, every frame, will be slow. Instead, you should probably be doing this in a vertex shader.
On the other hand, if you just want affine transformations (moving objects around, rotating them, opening chests of drawers, rotating car wheels, bending a jointed robot arm) then all of this can be done by the GPU and the fact your program is written in Python makes little difference to the performance.
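As a rough illustration of that last point, here is a minimal pyglet sketch where the translation and rotation are applied through fixed-function OpenGL calls rather than by touching vertex data in Python. It assumes pyglet 1.x, whose pyglet.gl module still exposes these legacy calls (pyglet 2.x removed them in favour of shaders):

```python
import pyglet
from pyglet.gl import (GL_TRIANGLES, glBegin, glEnd, glLoadIdentity,
                       glRotatef, glTranslatef, glVertex2f)

window = pyglet.window.Window(640, 480)
angle = 0.0

@window.event
def on_draw():
    window.clear()
    glLoadIdentity()
    glTranslatef(320, 240, 0)   # transforms are applied driver/GPU side;
    glRotatef(angle, 0, 0, 1)   # the Python-side vertex data never changes
    glBegin(GL_TRIANGLES)
    glVertex2f(0, 100)
    glVertex2f(-87, -50)
    glVertex2f(87, -50)
    glEnd()

def update(dt):
    global angle
    angle += 90 * dt            # degrees per second

pyglet.clock.schedule_interval(update, 1 / 60.0)
pyglet.app.run()
```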
You might want to check out Python-Ogre. I just messed with it myself, nothing serious, but seems pretty good.
I would recommend pyglet, which is a similar system to Pygame but with full bindings to OpenGL. You can start with simple 2D games to get the hang of the system and work up to 3D later. It is a more modern system than Pygame, which is built around SDL, itself a bit long in the tooth these days.
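A minimal pyglet 2D sketch to get started with; the image filename is just a placeholder:

```python
import pyglet

window = pyglet.window.Window(800, 600, caption="pyglet demo")
batch = pyglet.graphics.Batch()
image = pyglet.image.load("player.png")          # any image you have handy
player = pyglet.sprite.Sprite(image, x=400, y=300, batch=batch)

@window.event
def on_draw():
    window.clear()
    batch.draw()

def update(dt):
    player.rotation += 45 * dt                   # simple per-frame update

pyglet.clock.schedule_interval(update, 1 / 60.0)
pyglet.app.run()
```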
Perhaps a wee bit off topic, but if your goal is to learn Python, how about creating a game using IronPython and XNA? XNA is not OpenGL, but I find it an extremely simple 2D/3D engine which is fast and supports Shader Model 3.0.
Check out the Frets on Fire project -- an open source Guitar Hero alternative. It's written in Python and has decent 3D graphics in OpenGL. I would suggest checking out its sources for hints on libraries etc.
There was a Vampires game out a few years ago where most, if not all, of the code was in Python. Not sure if the 3D routines were in Python, but it worked fine.