Python iterator for unique arrangements of Quarto game board

I'm working on a programmatic solution to a combinatorics problem involving the board game Quarto. In Quarto there are sixteen pieces, each with four binary properties. This means we can represent each piece as a tuple (i, j, k, l) where each element is either zero or one. To solve my problem I need to iterate over each unique way to arrange all of the pieces on a 4x4 playing board. I could do something like
from itertools import permutations, product

pieces = list(product((0, 1), repeat=4))  # the sixteen pieces as (i, j, k, l) tuples
for board_orientation in permutations(pieces, 16):
    do_stuff(board_orientation)  # takes 1 or 2 full seconds
but this would mean 16! (over 20 trillion) iterations. To avoid this I'm trying to create a generator that yields only unique board orientations - that is, orientations that are unique under rotation, reflection, and inversion of one or more properties (the first two operations are described by the dihedral group D4). I have found a similar question for Tic-Tac-Toe, but I'm struggling to extend it to this more complex iteration problem.
I think the solution involves mapping each board orientation to a numerical value via a hash tree, and then seeing how the number changes under the various symmetry operations, but I'm struggling to convert this into code.

A board is 'isomorphic' to 16 boards by applying inversions, and to at most 8 boards by applying rotations and mirroring. That is, a set of isomorphic boards has at most 16*8 = 128 members, so there are at least 16!/128 = 15!/8 (about 1.6 * 10^11) board configurations.
Using inversions, each board can be 'transformed' into a board with 0 in the top-left corner. Fixing one corner covers all of the symmetries except mirroring on the diagonal through the top-left (and lower-right) corner.
That symmetry can be covered by choosing two fields that are 'opposite' with respect to that diagonal (like (1,2) and (2,1)) and requiring the smaller value in one of them (e.g. B[1,2] < B[2,1]). That means: if B[1,2] > B[2,1], then
perform the diagonal mirroring. A board transformed in the described way can be coded by a string of 15 hexadecimal digits (the top-left field is always 0). Call this coding normalization by the top-left corner.
In the same way a board can be normalized by the other corners. One board has 4 corner normalizations; call the minimum of these normalizations the board ID. That ID uniquely codes a group of isomorphic boards.
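A minimal sketch of that coding, assuming the board is stored row-major as a list of 16 integers 0-15 (piece (i, j, k, l) read as a 4-bit number); the function names are mine:

def normalize_corner(board):
    """Top-left normalization: XOR every piece with the top-left piece so the
    top-left becomes 0 (property inversion), then mirror across the main
    diagonal if needed so that B[1,2] < B[2,1]. Returns a 16-digit hex string
    (the leading digit is always 0, so it could be dropped)."""
    b = [p ^ board[0] for p in board]
    if b[1 * 4 + 2] > b[2 * 4 + 1]:
        b = [b[c * 4 + r] for r in range(4) for c in range(4)]  # diagonal mirror
    return "".join(f"{p:x}" for p in b)

def rotate90(board):
    """Rotate the row-major 4x4 board 90 degrees clockwise."""
    return [board[(3 - c) * 4 + r] for r in range(4) for c in range(4)]

def board_id(board):
    """Board ID: the minimum of the four corner normalizations, obtained by
    rotating each corner into the top-left position."""
    ids, b = [], list(board)
    for _ in range(4):
        ids.append(normalize_corner(b))
        b = rotate90(b)
    return min(ids)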
Now the nice part :-) : it is not necessary to store the generated IDs during the generation process. It is enough to generate boards in lexicographic order of one corner's normalized form (e.g. the top-left),
calculate the other three normalizations, and if any of them is lower than the generated one, then we have already passed that configuration. That works because configurations are generated in lexicographic order.
Note: it is possible to optimize the code by checking normalization values during the board creation process, instead of creating the whole board and performing the checks above. For example, fill the two ordered fields ((1,2), (2,1)), then fill another corner with its two ordered fields; if the normalization of that second corner would have to be smaller than the normalization of the top-left corner (checking a prefix of only two fields), then there is no need to generate further. For that, the coding has to have the ordered fields as its first two digits. The extension is to next fill the third corner's fields and perform the check, then the fourth corner's fields and perform the check.
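The filter itself is then a one-liner on top of the helpers sketched above (again my naming, not from the original answer):

def is_canonical(board):
    """Keep a board generated in top-left normalized form only if no other
    corner normalization is lexicographically smaller; otherwise its symmetry
    class was already produced earlier in the enumeration."""
    return board_id(board) == normalize_corner(board)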

Related

Fastest polygon union when all polygons are rectangles?

At my job we have to union many polygons together for some spatial aggregation. One problem we have is limited runtime and memory constraints (AWS Lambda), so for larger feature collections our current geopandas implementations hit their limits.
My main question is: is there a faster polygon union algorithm I could be using other than shapely's unary union (which I assume is what geopandas' dissolve uses) that could take advantage of the fact that all polygons are rectangles with no holes? (I.e. I'm hoping that unary union having to account for arbitrary shapes may leave performance on the table.)
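For reference, the current baseline in shapely terms looks roughly like this (a sketch with made-up rectangles; the question assumes this is what dissolve calls under the hood):

from shapely.geometry import box
from shapely.ops import unary_union

# Three made-up axis-aligned rectangles; box(minx, miny, maxx, maxy).
rects = [box(0, 0, 2, 1), box(1, 0, 3, 2), box(5, 5, 6, 6)]

merged = unary_union(rects)  # general-purpose union, no use of rectangularity
print(merged.geom_type, merged.area)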
The following algorithm doesn't really use the rectangular property of the items.
The initial step is to create additional points for each intersection: where two rectangle sides intersect, the side(s) may be split into 2 or more polygon segments.
The next step consists in finding the top-left point of all polygons and
assuming that a virtual previous point lies to the left of this point.
To find the next point of the envelope, choose, among all polygons sharing the current point, the vector CN (current point to next point) whose angle CP^CN is the maximum.
When the next point is the top-left starting point again, the envelope is complete.
Process all points and remove those that are inside the envelope(s).
If there are remaining points, use the same algorithm to find other envelopes (archipelago case).
For performance, I recommend maintaining a relation between each point and the list of [polygon, point index] pairs to which it belongs.
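As a rough illustration of the 'choose the next point' step only (a sketch assuming standard y-up coordinates, points as (x, y) tuples, and a candidate list built from the neighbours of the current point on every polygon sharing it; the function name is mine):

import math

def next_envelope_point(prev_pt, cur_pt, candidates):
    """Pick the candidate whose angle CP^CN is the largest, where CP is the
    vector current->previous and CN is current->candidate, measured
    counter-clockwise in [0, 2*pi)."""
    px, py = prev_pt[0] - cur_pt[0], prev_pt[1] - cur_pt[1]  # CP
    def ccw_angle(cand):
        nx, ny = cand[0] - cur_pt[0], cand[1] - cur_pt[1]    # CN
        return math.atan2(px * ny - py * nx, px * nx + py * ny) % (2 * math.pi)
    return max(candidates, key=ccw_angle)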

Verlet integration angular constraint in 3D

I'm trying to code Verlet physics from scratch in Python. I have already done it in two-dimensional space and it works as expected. My code contains points and edges, which limit the distance between points. As you know, it's quite easy to achieve this with Verlet integration. And if we want some points to maintain a certain angle between each other, we can easily add an invisible edge between them and play with its length.
Now let's imagine I have a chain of 10 points in 3D, consecutively connected with edges. I need to maintain a specific angle between the first and third points. I can't figure out how to do this with three rotational axes, because, as you can imagine, a single invisible edge is not enough in this case. And, of course, adding two more lines from the same points would make no sense.
I also should NOT create invisible edges to non-adjacent points, because if we are talking about the angle between the first and third points, it should not directly affect the middle or the end of the chain.
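For reference, the 'invisible edge' trick described above boils down to a distance constraint whose rest length comes from the law of cosines, and the relaxation step itself is dimension-agnostic; a minimal sketch (function names and the stiffness factor are mine):

import math

def invisible_edge_length(len_a, len_b, angle):
    """Rest length of the extra p1-p3 edge that keeps the angle at p2 (between
    edges p1-p2 of length len_a and p2-p3 of length len_b) equal to `angle`
    radians, via the law of cosines."""
    return math.sqrt(len_a ** 2 + len_b ** 2 - 2 * len_a * len_b * math.cos(angle))

def relax_distance(p, q, rest, stiffness=0.5):
    """One relaxation step nudging points p and q (mutable [x, y] or [x, y, z]
    lists) toward separation `rest`."""
    d = [q[i] - p[i] for i in range(len(p))]
    dist = math.sqrt(sum(c * c for c in d)) or 1e-9
    corr = stiffness * (dist - rest) / dist
    for i in range(len(p)):
        p[i] += d[i] * corr * 0.5
        q[i] -= d[i] * corr * 0.5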

What is the best way to represent honeycomb lattice?

I am trying to simulate a honeycomb lattice, i.e. each particle interacts with each of its 3 neighbors. I want to represent it in an array in such a way that it is automated, i.e. if any atom is picked at random, the code should evaluate some parameter based on the connected neighbors.
I can think of it in this way:
initialize an n*n array.
every particle interacts with the particles next to it on the right and left sides
a particle on an odd row and odd column interacts with its bottom particle
a particle on an odd row and even column interacts with its top particle
a particle on an even row and odd column interacts with its top particle
a particle on an even row and even column interacts with its bottom particle
The equivalence is shown below in the pictures.
Is there a better method to represent the same?
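A minimal sketch of the parity rule listed above, assuming 0-based indices (with which the rule reduces to: the vertical neighbour is below when row + column is even, above otherwise); names and the example parameter are mine:

import numpy as np

def honeycomb_neighbours(row, col, n):
    """Neighbours of site (row, col) on an n x n brick-wall honeycomb grid:
    always left and right, plus one vertical neighbour chosen by parity.
    Out-of-range sites are dropped."""
    neighbours = [(row, col - 1), (row, col + 1)]
    neighbours.append((row + 1, col) if (row + col) % 2 == 0 else (row - 1, col))
    return [(r, c) for r, c in neighbours if 0 <= r < n and 0 <= c < n]

# Example: pick a random site and gather the values of its connected neighbours.
n = 6
lattice = np.random.rand(n, n)
r, c = np.random.randint(n, size=2)
coupled = [lattice[site] for site in honeycomb_neighbours(r, c, n)]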
There is a reference here that might be of use:
https://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/MARTIN/Hex.pdf
Also, there is a SE answer that might be of help:
Generate, fill and plot a hexagonal lattice in Python
Also, this:
https://www.redblobgames.com/grids/hexagons/

Calculating object labelling consensus area

Scenario: four users are annotating images with one of four labels each. These are stored in a fairly complex format - either as polygons or as centre-radius circles. I'm interested in quantifying, for each class, the area of agreement between individual raters – in other words, I'm looking to get an m x n matrix, where M_i,j will be some metric, such as the IoU (intersection over union), between i's and j's ratings (with a 1 diagonal, obviously). There are two problems I'm facing.
One, I don't know what works best in Python for this. Shapely doesn't implement circles too well, for instance.
Two, is there a more efficient way for this than comparing it annotator-by-annotator?
IMO the simplest is to fill the shapes using polygon filling / circle filling (this is simple, you can roll your own) / path filling (from a seed). Then finding the area of overlap is an easy matter.
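For instance, with circles rasterised onto boolean pixel masks (a sketch with made-up coordinates; the grid resolution is an assumption you would match to your images, and polygons could be filled the same way, e.g. with skimage.draw.polygon):

import numpy as np

def circle_mask(shape, cx, cy, radius):
    """Boolean mask of a filled circle on a pixel grid of the given shape."""
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    return (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2

def iou(mask_a, mask_b):
    """Intersection over union of two boolean masks."""
    union = np.logical_or(mask_a, mask_b).sum()
    return np.logical_and(mask_a, mask_b).sum() / union if union else 0.0

# Two raters' circle annotations for one class, rasterised on a 512x512 grid.
shape = (512, 512)
print(iou(circle_mask(shape, 200, 200, 80), circle_mask(shape, 220, 210, 80)))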

Linear shift between 2 sets of coordinates

My problem is the following:
For my work I need to compare images of scanned photographic plates with a catalogue of a sample of known stars within the general area of the sky the plates cover (I call it the master catalogue). To that end I extract information, like the brightness on the image and the position in the sky, of the objects in the images and save it in tables. I then use python to create a polynomial fit for the calibration of the magnitude of the stars in the image.
That works up to a certain accuracy pretty well, but unfortunately not well enough, since there is a small shift between the coordinates the object has in the photographic plates and in the master catalogue.
Here the green circles indicate the positions (center of the circle) of objects in the master catalogue. As you can see the actual stars are always situated to the upper left of the objects in the master catalogue.
I have looked a little bit in the comparison of images (i.e. How to detect a shift between images) but I'm a little at a loss now, because I'm not actually comparing images but arrays with the coordinates of the objects. An additional problem here is that (as you can see in the image) there are objects in the master catalogue that are not visible on the plates and not all plates have the same depth (meaning some show more stars than others do).
What I would like to know is a way to find and correct the linear shift between the 2 arrays of different size of coordinates in python. There shouldn't be any rotations, so it is just a shift in x and y directions. The arrays are normal numpy recarrays.
I would change @OphirYoktan's suggestion slightly. You have these circles. I assume you know the radius, and you have that radius value for a reason.
Instead of randomly choosing points, filter the master catalogue for x, y within that radius of your sample. Then compute however many vectors you need for all possible master catalogue entries within range of your sample. Do the same thing repeatedly, then collect a histogram of the vectors. Presumably a small number of them will occur repeatedly; those are the likely true translations. (Ideally, "small number" == 1.)
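A sketch of that idea (the search radius, bin size, and names are placeholders, not from the answer):

import numpy as np
from collections import Counter

def estimate_shift(plate_xy, catalog_xy, search_radius=5.0, bin_size=0.5):
    """For every plate object, collect the offsets to all catalogue entries
    within search_radius, histogram the offsets in bins of bin_size, and
    return the most frequent one (the candidate x/y shift).
    plate_xy, catalog_xy: (N, 2) and (M, 2) arrays of coordinates."""
    votes = Counter()
    for p in plate_xy:
        d = catalog_xy - p                                   # offsets plate -> catalogue
        near = d[np.hypot(d[:, 0], d[:, 1]) < search_radius]
        for dx, dy in np.round(near / bin_size).astype(int):
            votes[(dx, dy)] += 1
    if not votes:
        return None
    (bx, by), _ = votes.most_common(1)[0]
    return bx * bin_size, by * bin_size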
There are several possible solutions.
Note: these are high-level pointers; you'll need some work to convert them into working code.
The original solution (cross correlation) can be adapted to the current data structure, and should work.
I believe that RANSAC will be better in your case.
Basically it means:
create a model based on a small number of data points (the minimal number required to define a relevant model), and verify its correctness using the full data set.
Specifically, if you have only translation to consider (and not scale), the steps are (see the sketch after this list):
select one of your points
match it to a random point in the catalogue [you may make "educated guesses" if you have some prior about which translation is more likely]
this matching gives you the translation
verify this translation matches the rest of your points
repeat until you find a good match
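A sketch of those steps for the pure-translation case (the trial count, tolerance, and names are mine):

import numpy as np

def ransac_shift(plate_xy, catalog_xy, tol=1.0, n_trials=500, seed=None):
    """Hypothesise a translation from one random plate/catalogue pairing, count
    how many plate objects then land within tol of some catalogue entry, and
    keep the best hypothesis over n_trials attempts."""
    rng = np.random.default_rng(seed)
    best_shift, best_inliers = None, -1
    for _ in range(n_trials):
        p = plate_xy[rng.integers(len(plate_xy))]
        c = catalog_xy[rng.integers(len(catalog_xy))]
        moved = plate_xy + (c - p)                       # candidate translation applied
        dists = np.hypot(moved[:, None, 0] - catalog_xy[None, :, 0],
                         moved[:, None, 1] - catalog_xy[None, :, 1])
        inliers = int((dists.min(axis=1) < tol).sum())   # plate objects with a close match
        if inliers > best_inliers:
            best_shift, best_inliers = c - p, inliers
    return best_shift, best_inliers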
I'm assuming here the objects aren't necessarily in the same order in both the photo plate and master catalogue.
Consider the set of position vectors, A, of the objects in the photo plate, and the set of position vectors, B, of the objects in the master catalogue. You're looking for a vector, v, such that for each a in A, a + v is approximately some element of B.
The most obvious algorithm to me would be to say for each a, for each b, let v = b - a. Now, for each element in A, check that there is a corresponding element in B that is sufficiently close (within some distance e that you choose) to that element + v. Once you find the v that meets this condition, v is your shift.
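A literal sketch of that double loop (e and the names are mine; since some catalogue objects have no counterpart on the plate, in practice you might accept the v that matches the largest fraction of A rather than all of it):

import numpy as np

def find_shift(A, B, e=1.0):
    """Try v = b - a for every pair and return the first v for which every
    element of A, shifted by v, has some element of B within distance e.
    A and B are (N, 2) and (M, 2) coordinate arrays."""
    for a in A:
        for b in B:
            v = b - a
            shifted = A + v
            d = np.hypot(shifted[:, None, 0] - B[None, :, 0],
                         shifted[:, None, 1] - B[None, :, 1]).min(axis=1)
            if np.all(d < e):
                return v
    return None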
