Many games can be viewed as a bunch of objects spread out through space, and a very common operation is picking all the objects in a sub-area. The typical example would be a game with tons of units across a large map, and an explosion that affects units within a certain radius: applying the explosion's effects requires picking every unit inside that radius.
Now, there are several ways to store objects that allow efficiently picking a sub-area. The easiest method is probably to divide the map into a grid; picking units in an area would then involve selecting only the grid tiles the area touches, and doing a fine-grained coordinate check on objects in tiles that aren't 100% inside the area.
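For concreteness, here is a minimal sketch in Python of the grid approach I mean (all the names are placeholders of my own):

    import math
    from collections import defaultdict

    class UniformGrid:
        """Buckets objects by tile so a radius query only touches nearby tiles."""

        def __init__(self, cell_size):
            self.cell_size = cell_size
            self.cells = defaultdict(set)  # (cx, cy) -> objects in that tile
            self.positions = {}            # object -> (x, y)

        def _cell(self, x, y):
            # math.floor (not int()) so negative coordinates bucket correctly
            return (math.floor(x / self.cell_size),
                    math.floor(y / self.cell_size))

        def insert(self, obj, x, y):
            self.positions[obj] = (x, y)
            self.cells[self._cell(x, y)].add(obj)

        def query_radius(self, x, y, r):
            """Return all objects within distance r of (x, y)."""
            cx0, cy0 = self._cell(x - r, y - r)
            cx1, cy1 = self._cell(x + r, y + r)
            hits = []
            for cx in range(cx0, cx1 + 1):
                for cy in range(cy0, cy1 + 1):
                    for obj in self.cells.get((cx, cy), ()):
                        ox, oy = self.positions[obj]
                        # fine-grained check for tiles only partly inside
                        if (ox - x) ** 2 + (oy - y) ** 2 <= r * r:
                            hits.append(obj)
            return hits

So an explosion would boil down to something like grid.query_radius(ex, ey, blast_radius).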
What I don't like about this approach is answering "How large should the grid tiles be?" Too large, and efficiency may become a real problem. Too small, and the grid takes up tons of memory if the game world is large enough (and can become ridiculous if the game is 3d). There may not even be a suitable golden mean.
The obvious solution to the above is to make a large grid with some kind of intelligent subdivision, like a pseudo tree-structure. And it is at this point I know for sure I am far into premature optimization. (Then there are proper dynamic quad/octrees, but that's even more complex to code and I'm not even confident it will perform any better.)
So my question is: Is there a standard solution to the above problem? Something along the lines of an STL container, that can just store any object with a coordinate, and retrieve a list of objects within a certain area? It doesn't have to be different from what I described above, as long as it's something that has been thought out and deemed "good enough" for a start.
Bonus points if there is an implementation of the algorithm in Python, but C would also do.
The first step to writing a practical program is accepting that choices for some constants come from real-world considerations and not transcendent mathematical truths. This especially applies to game design/world simulation type coding, where you'd never get anywhere if you persisted in trying to optimally model the real world. :-)
If your objects will all be of fairly uniform size, I would just choose a grid size proportional to the average object size, and go with that. It's the simplest - and keep in mind simplicity will buy you some speed even if you end up searching over a few more objects than absolutely necessary!
Things get a bit harder if your objects vary greatly in size - for example if you're trying to use the same engine to deal with bullets, mice, humans, giant monsters, vehicles, asteroids, planets, etc. If that's the case, a commonly accepted (but ugly) approach is to have different 'modes' of play depending on the type of situation you're in. Short of that, one idea might be to use a large grid with a binary-tree subdivision of grid cells after they accumulate too many small objects.
One aside: if you're using floating point coordinates, you need to be careful with precision and rounding issues for your grid size, since points close to the origin have a lot more precision than those far away, which could lead to errors where grid cells miss some objects.
Here is a free book available online that will answer your question.
Specifically look at Chapter 18 on collision detection and intersection.
I don't know anything about games programming, but I would imagine (based on intuition and what I've read in the past) that a complete grid will get very inefficient for large spaces; you'll lose out both in storage and in time, because you'll melt the cache.
STL containers are fundamentally one-dimensional. Yes, things like set and map allow you to define arbitrary sort relationships, but it's still ordered in only one dimension. If you want to do better, you'll probably need to use a quad-tree, a kd-tree, or something like that.
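Since you asked for Python: scipy's spatial module ships a kd-tree that already supports radius queries, which is about the closest thing to an off-the-shelf container for this (the coordinates below are made up):

    import numpy as np
    from scipy.spatial import cKDTree

    points = np.random.rand(1000, 2) * 500.0   # object positions on the map
    tree = cKDTree(points)

    # indices of all objects within radius 25 of the point (100, 100)
    hits = tree.query_ball_point([100.0, 100.0], r=25.0)

One caveat: the tree is built once and is static, so a game with moving units would have to rebuild it every frame - which is often still fast enough, but worth measuring.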
The Problem
I have a function of 2 independent variables, from which I need to construct a 2-D grid with parameters of my choice. These parameters will be the ranges of the two independent variables, the smallest/largest cell size, etc.
Visually this can be thought of as a space partition data structure whose geometry is provided by my function. To each cell I will then assign some properties (which will also be determined by the function and other things).
My Idea
Once such a data structure is prepared, I can simulate a photon (in a Monte-Carlo based approach) which will travel to a cell randomly (with some constraints, given by the function and the cell properties). In that cell it will be absorbed or scattered (re-emitted) with some probabilities, at which point I will solve the radiative transfer equation in that cell. If re-emitted, the photon, its wavelength now different, moves to a neighbouring cell, and it keeps moving until it either escapes the grid (whose boundaries were decided by the parameters) or is completely absorbed in one of the cells. So my data structure should also allow accessing the nearest neighbouring cells efficiently.
What I have done
I looked into scipy.spatial.kdtree, but I am not sure I will be able to assign properties to cells the way I want (though if someone can explain that, it would be really helpful, as it is very good at accessing nearest neighbours). I have also looked at other tree-based algorithms, but I am a bit lost as to how to implement them. Numpy arrays are, at the end of the day, matrices, which do not suit my function (they lead to wastage of memory).
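For reference, this is roughly how I imagined pairing a kd-tree with per-cell properties; the property names below are placeholders of mine, and I am not sure this is the intended usage:

    import numpy as np
    from scipy.spatial import cKDTree

    # cell centres generated by my function (placeholder: random points)
    centres = np.random.rand(500, 2)
    tree = cKDTree(centres)

    # per-cell properties live in parallel arrays, indexed like `centres`
    absorption = np.random.rand(500)   # placeholder property
    temperature = np.zeros(500)        # placeholder property

    # the 5 nearest cells to cell 10 (the first hit is cell 10 itself)
    dists, idx = tree.query(centres[10], k=5)
    neighbour_absorption = absorption[idx]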
So any suggestions on data structures I can use (and a nudge towards how to implement them) would be really nice.
scipy.constants.physical_constants returns (value, unit, uncertainty) tuples for many specific physical constants. The units are given in the form of a string. (For example, one of the options for the universal gas constant has a unit field of 'J kg^-1 K^-1'.)
At first blush, this seems pretty useful. Keeping track of your units is very important in scientific calculations, but, for the life of me, I haven't been able to find any facilities for parsing these strings into something that can be tracked. Without that, there's no way to simplify the combined units after different values have been added, subtracted, etc. with each other.
I know I can manually declare the units of constants with separate libraries such as what's available in SymPy, but that would make SciPy's own units completely useless (maybe just a convenience for printouts). That sounds pretty absurd. I can't imagine that SciPy doesn't know how to deal with units.
What am I missing?
Edit:
I know that SciPy is a stack, and I am well aware of what libraries are part of it. My question is whether SciPy knows how to work with the very units it spits out with its constants (or whether I have to throw out those units and manually redefine everything). As far as I can see, it can't actually parse its own unit strings (and nothing else in the ecosystem seems to know how to make heads or tails of them either). This doesn't make sense to me: if SciPy proper can't deal with these units, why are they there in the first place? Keeping track of your units across your calculations is exactly the kind of thing you need to do in science, so forcing manual redefinition of all the units someone went to the trouble of associating with these constants doesn't make sense.
No, scipy the library does not have any notion of quantities with units and makes no guarantees when operating on quantities with units (from e.g. pint, astropy.Quantity or other objects from other unit-handling packages).
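If you do pull in pint, you can try feeding scipy's unit strings straight to its parser. As far as I know pint accepts the space-separated, caret-exponent format that scipy uses, but treat that as an assumption and verify it for each constant you care about:

    from scipy.constants import physical_constants
    import pint

    ureg = pint.UnitRegistry()

    value, unit_str, uncertainty = physical_constants['molar gas constant']
    # unit_str is 'J mol^-1 K^-1'; pint appears to parse this format,
    # but that is an assumption - check it per constant
    R = value * ureg(unit_str)
    print(R)                    # quantity with units attached
    print(R.to_base_units())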
To begin with, this might be more of a conceptual question than a programming one. I will not post my code below, but if you really want to see it you can take a look at this similar code link. The code is not the important thing here, so don't get bogged down trying to understand all the physics (electrostatics).
Let's say I want to solve this problem. I know the values on the boundaries, and I guess an initial solution on the rectangular grid inside these boundaries; see the figure below.
[Figure: potential = 10 on the boundaries; potential = 0 inside]
I know that eventually the field will converge to the same potential as the boundaries. If I let the program iterate long enough the whole area on the inside will also become yellow. The figure below shows an intermediate step towards equilibrium:
Now, in the project I am working on, I am supposed to stop the simulation when the accuracy of the simulation is 1%. Is there a general definition of accuracy in these cases, when working with a two-dimensional grid? There are many grid nodes, all with different values; are all of them supposed to be within 1% of equilibrium (all yellow)?
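For concreteness, the kind of stopping test I have in mind looks like this; whether this per-sweep definition is the right one is exactly my question:

    import numpy as np

    def relax_until(grid, tol=0.01, max_iters=100_000):
        """Jacobi relaxation: stop when the largest change per sweep,
        relative to the field's magnitude, drops below tol (here 1%).
        Boundary rows/columns are left fixed."""
        for _ in range(max_iters):
            new = grid.copy()
            new[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1]
                                      + grid[1:-1, :-2] + grid[1:-1, 2:])
            change = np.max(np.abs(new - grid)) / np.max(np.abs(new))
            grid = new
            if change < tol:
                break
        return grid

Note that this measures the change between successive sweeps, not the distance from the true equilibrium - which is the distinction I am unsure about.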
(This might be the wrong forum, I am aware of that.)
Best regards
First, a bit of background. This is a problem we are facing while building the software pipeline of a newly launched spacecraft.
The telescopes on board are looking at a specific target. However, as you might expect, the telescope is not exactly stable and wobbles slightly. Hence at different time instants it is looking at SLIGHTLY different portions of the sky.
To correct for this, we have a lab-made template (basically a 2-D array of zeros and ones) that tells us which portion of the sky is being looked at at a specific time instant (let's say t). It looks like this.
Here the white portion signifies the part of the telescope that is actually observing. The array is actually 2400x2400, for accuracy; it can't be reduced, because that would cause loss of information. (Also, it is not really an array of 0s and 1s but of real numbers, because of other effects.) Now, knowing the wobbles of the telescope, we also know that this template will wobble by the same amount. Hence we need to shift the array (using np.roll) in the x or y direction accordingly (sometimes even rotate it, if the spacecraft is rotating) and accumulate the results, so that we know which portion of the sky has been observed for how long. However, this process is EXTREMELY time consuming and lengthy (even with the numpy implementations of add and roll). Moreover, we need to do this in the pipeline at least 500 times a second. Is there a way to avoid it? We are looking for an algorithmic solution, maybe a fundamentally new way of approaching the whole problem. Any help is welcome. Also, if any part is unclear, let me know; I will happily explain it further.
A previous question related to the same topic: Click Here
We are implementing the pipeline in Python (probably a bad choice, I know).
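For reference, the core of what we currently do per time step is roughly this (simplified; the names and shift values are placeholders):

    import numpy as np

    N = 2400
    template = np.random.rand(N, N)        # placeholder for the lab template
    shifts = [(3, -1), (2, 0), (-1, 4)]    # placeholder (dy, dx) wobble offsets
    exposure = np.zeros((N, N))

    for dy, dx in shifts:
        # roll the template by the wobble, then accumulate observing time
        exposure += np.roll(template, (dy, dx), axis=(0, 1))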
If you want to use the shifted array contents for some calculation (applying a mask, etc.), you don't need to move the data physically - just use a modified index scheme to address the same elements.
For example, to virtually shift the array by dx to the right, use in your calculations
A[y][x-dx] instead of A[y][x]
This method becomes somewhat more complex when rotation takes place, but it is still workable (one should compare the time for a real array rotation against that of coordinate recalculation).
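A sketch of what this looks like for the accumulate step, using slices instead of np.roll (names are placeholders; note that edges are clipped rather than wrapped, which differs from np.roll but is probably what you want for a telescope mask anyway):

    import numpy as np

    def accumulate_shifted(exposure, template, dy, dx):
        """Add `template`, shifted by (dy, dx), into `exposure`
        without allocating a shifted copy of the whole array."""
        h, w = template.shape
        dst_y = slice(max(dy, 0), h + min(dy, 0))   # destination rows
        dst_x = slice(max(dx, 0), w + min(dx, 0))   # destination cols
        src_y = slice(max(-dy, 0), h + min(-dy, 0)) # source rows
        src_x = slice(max(-dx, 0), w + min(-dx, 0)) # source cols
        exposure[dst_y, dst_x] += template[src_y, src_x]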
I am working on a fluid dynamics simulation tool in Python. Traditionally (at least in my field), integer indices refer to the center of a cell. Some quantities are stored on the faces between cells, and in the literature are denoted by half-integer indices. In codes, however, these are shifted to integers to fit into arrays. The problem is, the shift is not always consistent: do you add a half or subtract? If you switch between codes enough, you quickly lose track of the conventions for any particular code. And honestly, there are enough quirky conventions in each code that eliminating a few would be a big help... but not if the remedy is worse than the original problem.
I have thought about using even indices for cell centers and odd for faces, but that is counterintuitive. Also, it's rare for a quantity to exist on both faces and in cell centers, so you never use half of your indices. You could also implement functions plus_half(i) and minus_half(i), but that gets verbose and inelegant (at least in my opinion). And of course floating point comparisons are always problematic in case someone gets cute in how they calculate 1/2.
Does anyone have a suggestion for an elegant way to implement half-integer indices in Python? I'm sure I'm not the first person to wish for this, but I've never seen it done in a simple way that is obvious to a user (without requiring the user to memorize the shift convention you've chosen).
And just to clarify: I assume there is likely to be a remap step hidden from the user to get to integer indices (I intend to wrap NumPy arrays for my data grids). I'm thinking of the interface exposed to the user, rather than how the data is stored.
You can write a __getitem__ that takes arbitrary objects as indices (and floating-point numbers in particular).
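A minimal sketch of that idea, mapping integer indices to cell centres and half-integer floats to faces; the storage layout (one array for centres, one for faces) is my own assumption:

    import numpy as np

    class StaggeredField:
        """1-D field with values at cell centres (integer indices) and
        at faces (half-integer indices, e.g. 2.5 is the face between
        cells 2 and 3)."""

        def __init__(self, n_cells):
            self.centres = np.zeros(n_cells)
            self.faces = np.zeros(n_cells + 1)  # face i - 0.5 lives at slot i

        def _slot(self, index):
            doubled = index * 2
            if doubled != round(doubled):
                raise IndexError(f"{index} is neither integer nor half-integer")
            doubled = round(doubled)
            if doubled % 2 == 0:
                return self.centres, doubled // 2   # centre value
            return self.faces, (doubled + 1) // 2   # face value

        def __getitem__(self, index):
            arr, i = self._slot(index)
            return arr[i]

        def __setitem__(self, index, value):
            arr, i = self._slot(index)
            arr[i] = value

Usage would be f = StaggeredField(10); f[3] = 1.0; f[2.5] = 0.5. Half-integers are exactly representable in binary floating point, so the equality check is safe for indices written as literal halves (i + 0.5), though values computed in roundabout ways might not survive it.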