What is the best way to represent a honeycomb lattice? - python

I am trying to simulate a honeycomb lattice, i.e., each particle interacts with each of its 3 neighbors. I want to represent it in an array so that the lookup is automated, i.e., if any atom is picked at random, the code should evaluate some parameter based on its connected neighbors.
I can think of it in this way:
initialize an n*n array.
every particle interacts with the particles to its left and right
a particle on an odd row and odd column interacts with the particle below it
a particle on an odd row and even column interacts with the particle above it
a particle on an even row and odd column interacts with the particle above it
a particle on an even row and even column interacts with the particle below it
The equivalence is shown below in the pictures.
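A minimal sketch of this scheme (assuming an n*n NumPy array, 0-based indices, and periodic boundaries, which may not match the intended boundary conditions) would look something like:

import numpy as np

def neighbors(i, j, n):
    # Left and right neighbors on the same row (periodic wrap; drop the % n
    # for open boundaries).
    result = [(i, (j - 1) % n), (i, (j + 1) % n)]
    # Vertical neighbor: rows and columns with the same parity link downward,
    # different parity links upward (the rule above, which is unchanged by
    # 0- vs 1-based counting since both indices shift together).
    if i % 2 == j % 2:
        result.append(((i + 1) % n, j))   # bottom neighbor
    else:
        result.append(((i - 1) % n, j))   # top neighbor
    return result

# Example: pick a random site and sum some parameter over its 3 neighbors.
n = 6
lattice = np.random.rand(n, n)
i, j = np.random.randint(n, size=2)
value = sum(lattice[a, b] for a, b in neighbors(i, j, n))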
Is there a better method to represent the same?

There is a reference here that might be of use:
https://homepages.inf.ed.ac.uk/rbf/CVonline/LOCAL_COPIES/AV0405/MARTIN/Hex.pdf
Also, there is a SE answer that might be of help:
Generate, fill and plot a hexagonal lattice in Python
Also, this:
https://www.redblobgames.com/grids/hexagons/

Related

Verlet integration angular constraint in 3D

Guys! I'm trying to code Verlet physics from scratch in Python. I have already done it in two-dimensional space and it works as expected. My code contains points and edges, which limit the distance between points. As you know, it's quite easy to achieve this with Verlet integration. And if we want some points to maintain a certain angle between each other, we can easily add an invisible edge between them and play with its length.
Now let's imagine I have a chain of 10 points in 3D, consecutively connected by edges. I need to maintain a specific angle between the first and third points, and I can't figure out a way to do this with three rotational axes because, as you can imagine, a single invisible edge is not enough in this case. And, of course, adding two more lines from the same points would make no sense.
I also should NOT create invisible edges to non-adjacent points, because if we are talking about the angle between the first and third points, it should not directly affect the middle or the end of the chain.
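For context, the kind of distance-constraint step described above (a generic sketch that works the same in 2D and 3D; the names and the half-and-half correction split are illustrative, not the poster's actual code) looks roughly like this:

import numpy as np

def satisfy_edge(points, i, j, rest_length):
    # Move both endpoints half of the error along the edge direction so that
    # the distance between points i and j relaxes toward rest_length.
    delta = points[j] - points[i]
    dist = np.linalg.norm(delta)
    if dist == 0.0:
        return
    correction = 0.5 * (dist - rest_length) / dist
    points[i] += delta * correction
    points[j] -= delta * correction

# points is an (N, 3) array of current positions; this constraint is applied
# (typically several times per step) after the Verlet position update.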

Select a smaller sample of "uniformly" distributed co-ordinates, out of a larger population of co-ordinates

I have a set of co-ordinates (latitudes and longitudes) of different buildings of a city. The sample size is around 16,000. I plan to use these co-ordinates as the central points of their locality/neighbourhood and do some analysis on the different neighbourhoods of the city. The "radius/size" for each neighbourhood is still undecided as of now.
However, a lot of these co-ordinates are too close to each other. So, many of them actually represent the same locality/neighbourhood.
As a result, I want to select a smaller sample (say, 3-6k) of co-ordinates that will be more evenly spread out.
Example: if two of the co-ordinates represent two neighbouring buildings, I don't want to include both, as they pretty much represent the same area. So we must select only one of them.
This way, I was hoping to reduce the population to a smaller size, while at the same time being able to cover most of the city through the remaining co-ordinates.
One way I was imagining the solution is to plot these co-ordinates on a 2D graph (for visualisation). Then, we can select different values of "radius" to see how many co-ordinates would remain. But I do not know how to implement such a "graph".
I am doing this analysis in Python. Is there a way I can obtain such a sample of these co-ordinates that are evenly distributed with minimal overlap?
Thanks for your help,
It seems like for your use case, you might need clustering instead of sampling to reduce your analysis set.
Given that you'd want to reduce your "houses" data to "neighborhoods" data, I'd suggest exploring geospatial clustering to cluster houses that are closer together and then take your ~3-4K clusters as your data set to begin with.
That being said, if your objective is still to remove houses that are close together, you can of course create an N*N matrix of the pairwise geospatial distances between houses and drop one house from every pair whose distance lies within (0, X], where X is your threshold.
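For example, a geospatial clustering pass could be sketched with scikit-learn's DBSCAN and the haversine metric (the 200 m radius, the 6371 km Earth radius, and the input file name are assumptions to adapt):

import numpy as np
from sklearn.cluster import DBSCAN

# coords is an (n, 2) array of [latitude, longitude] in degrees.
coords = np.loadtxt("buildings.csv", delimiter=",")   # illustrative input

# The haversine metric works in radians; eps is the neighbourhood radius in
# radians, here roughly 200 m assuming an Earth radius of 6371 km.
eps = 0.2 / 6371.0
db = DBSCAN(eps=eps, min_samples=1, metric="haversine", algorithm="ball_tree")
labels = db.fit_predict(np.radians(coords))

# Keep one representative (here the centroid) per cluster as the reduced sample.
representatives = np.array([coords[labels == k].mean(axis=0)
                            for k in np.unique(labels)])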

Using Python to get unblocked area

I have a rather complicated problem. Suppose I have the shape below. You can think of the red dot as a person and the pointy shape inside the big polygon as an obstacle. My goal is to compute the total unblocked vision of the person inside the big polygon, which is the area of the polygon minus the red shaded area.
I want to write a function that takes in the coordinates of the person, the coordinates of the ordered vertices of the obstacle(s) and those of the ordered vertices of the big polygon, and returns the area of the unblocked vision.
I have tried multiple things, and I'm aware of the shoelace algorithm, but the only way I can come up with is Monte Carlo. Can I get a hint on a more intelligent and efficient way to compute the area in closed form?
I think your problem is solved by finding the "Visibility polygon".
https://en.wikipedia.org/wiki/Visibility_polygon
You can use https://karlobermeyer.github.io/VisiLibity1/ library to compute the visibility area.
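A rough sketch of that approach follows; the call names and the vertex-orientation convention (outer boundary counter-clockwise, holes clockwise) are taken from the library's bundled Python example and should be double-checked against the VisiLibity documentation:

import visilibity as vis

epsilon = 1e-7
# Toy geometry: a 10x10 outer polygon (counter-clockwise), a triangular
# obstacle (clockwise), and the person at (2, 2).
outer = vis.Polygon([vis.Point(0, 0), vis.Point(10, 0),
                     vis.Point(10, 10), vis.Point(0, 10)])
obstacle = vis.Polygon([vis.Point(4, 4), vis.Point(5, 6), vis.Point(6, 4)])
env = vis.Environment([outer, obstacle])

observer = vis.Point(2, 2)
observer.snap_to_boundary_of(env, epsilon)
observer.snap_to_vertices_of(env, epsilon)

# The visibility polygon of the observer; its area is the unblocked vision.
isovist = vis.Visibility_Polygon(observer, env, epsilon)
pts = [(isovist[k].x(), isovist[k].y()) for k in range(isovist.n())]
area = abs(sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2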
The first task is to get the two extreme lines of sight from the person.
Simple brute-force checking should do; I doubt there's a better method unless you need this calculation at each frame (see (a) below).
Calculate the angle (relative to the X-axis, or any reference you like) of the person-to-vertex line for every obstacle vertex.
Find the lowest and highest values. This can be tricky if the obstacle somehow wraps around the person.
In that case you can calculate the angle between each pair of sight lines (a combinatorial check) and take the pair with the maximum angle. Use the dot product for this job.
The second task is to get the area of the shaded region.
You need to get the two intersections of the sight lines with the outer polygon, then build a list of the polygon's vertices lying between those two intersections and add the intersections themselves to that list.
The area can be calculated as the sum of the areas of the triangles from the person to each edge (each pair of consecutive points in that list). Since you have all the coordinates, an easier way is to use the shoelace algorithm.
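For reference, a small shoelace helper (plain Python, vertices given in order) might look like:

def shoelace_area(vertices):
    # Absolute area of a simple polygon given as an ordered list of (x, y) pairs.
    area = 0.0
    n = len(vertices)
    for k in range(n):
        x1, y1 = vertices[k]
        x2, y2 = vertices[(k + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# e.g. shoelace_area([(0, 0), (4, 0), (4, 3)]) == 6.0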
(a) If the obstacle has thousands of vertices and the person moves continuously, I'd try to reduce the number of pairs to check. You can maintain a list of shown/hidden vertices, and when the person moves, check the last two used vertices and their neighbours until you get a new pair of extreme vertices.

Python iterator for unique arrangements of Quarto game board

I'm working on a programmatic solution to a combinatorics problem involving the board game Quarto. In Quarto there are sixteen pieces, each with four binary properties. This means we can represent each piece as a tuple (i, j, k, l) where each element is either zero or one. To solve my problem I need to iterate over every unique way to arrange all of the pieces on a 4x4 playing board. I could do something like
from itertools import permutations
for board_orientation in permutations(pieces, 16):
    do_stuff(board_orientation)  # takes 1 or 2 full seconds
but this would mean 16! (over 20 trillion) iterations. To avoid this I'm trying to create a generator that yields only unique board orientations - that is, orientations that are unique under rotation, reflection, and inversion of one or more properties (the first two operations are described by the dihedral group D4). I have found a similar question for Tic-Tac-Toe, but I'm struggling with how to extend it to this more complex iteration problem.
I think the solution involves mapping each board orientation to a numerical value via a hash tree and then seeing how the number changes under the various symmetry operations, but I'm struggling to convert this into code.
A board is 'isomorphic' to 16 boards by applying inversions, and to at most 8 boards by applying rotations and mirroring, so a class of isomorphic boards has at most 16*8 = 128 members. That means there are at least 16!/128 = 15!/8 (about 1.6 * 10^11) distinct board configurations.
Using inversions, each board can be 'transformed' into a board with 0 in the top-left corner. Fixing one corner covers all of the symmetries except mirroring across the diagonal through the top-left (and bottom-right) corner.
That remaining symmetry can be covered by choosing two fields that are 'opposite' with respect to that diagonal (like (1,2) and (2,1)) and requiring the smaller value in one of them (e.g. B[1,2] < B[2,1]). That means if B[1,2] > B[2,1], perform the diagonal mirroring. A board transformed in the described way can be coded as a 15-hexadecimal-digit string (the top-left field is always 0). Call this coding normalization by the top-left corner.
In the same way, a board can be normalized by the other corners. One board has 4 corner normalizations; call the minimum of these normalizations the board ID. That ID uniquely codes a group of isomorphic boards.
Now the nice part :-): it is not necessary to store the generated IDs during the configuration-generation process. It is enough to generate boards in lexicographic order of one corner-normalized form (e.g. top-left), calculate the other three normalizations, and if any of them is lower than the generated one, then we have already passed that configuration. That holds because the configurations are generated in lexicographic order.
Note: it is possible to optimize the code by checking normalization values while the board is being built, instead of creating the whole board and performing the checks above. For example, fill the two ordered fields ((1,2), (2,1)), then fill another corner with its two ordered fields; if the normalization of that second corner is already forced to be smaller than the normalization of the top-left corner (checking a prefix of only two fields), there is no need to generate further. For that, the coding has to have the ordered fields as its first two digits. The extension is to fill the third corner's fields next and perform the check, then the fourth corner's fields and perform the check.
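A rough sketch of this corner normalization and board ID (assuming the board is a 4x4 NumPy array of piece codes 0-15; the encoding keeps the leading 0 digit, and ties between the two ordered fields are left unresolved, as in the description above):

import numpy as np

def normalize_top_left(board):
    # Inversion step: XOR everything with the top-left piece so that corner is 0.
    b = board ^ board[0, 0]
    # Resolve the remaining diagonal mirror symmetry via the two ordered fields.
    if b[1, 2] > b[2, 1]:
        b = b.T
    # Encode as a hex string (the leading digit is always 0 and could be dropped).
    return "".join(f"{int(v):x}" for v in b.flatten())

def board_id(board):
    # Normalize by each of the four corners (rotations bring each corner to the
    # top-left) and take the minimum as the ID of the whole symmetry class.
    return min(normalize_top_left(np.rot90(board, k)) for k in range(4))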

Smallest circles enclosing points with minimized cost

I am trying to find the smallest circles enclosing points using a hierarchical search (in a tree). I have searched a lot and I only seem to find smallest enclosing circle (singular) algorithms online. This is for a university class, so I am asking for possible solutions and ideas more than actual code.
My problem is that I have a formula that involves two constants and the radius of a circle to compute its cost, and I need to minimise the total cost. This means that for a set of points (x, y), I could find one circle enclosing all the points, or multiple circles, each enclosing a subset of the points, depending on the cost of each circle.
As an example, if the formula is 1 + 2*radius**2, my answer will surely consist of multiple small circles. Every point must be inside a circle at the end.
My goal is to use a graph search algorithm like A*, branch and bound, or breadth-first search, and build a tree using a state and its possible actions.
I am currently trying to write my possible actions as adding a circle, removing a circle, and changing a circle's radius. To limit compute time, I decided to only try those actions on positions that are between two points or between two sets of points (where the centers of my circles could be). But this algorithm seems to be far from optimal. If you have any ideas, it would really help me.
Thanks anyway for your help.
If the question is unclear, please tell me.
I'm going to focus on finding optimal solutions. You have a lot more options if you're open to approximate solutions, and I'm sure there will be other answers.
I would approach this problem by formulating it as an integer program. Abstractly, the program looks like
variable x_C: 1 if circle C is chosen; 0 if circle C is not chosen
minimize sum_C cost(C) * x_C
subject to
for all points p, sum_{C containing p} x_C >= 1
for all circles C, x_C in {0, 1}.
Now, there are of course infinitely many circles, but assuming that one circle that contains strictly more area than another costs more, there are O(n^3) circles that can reasonably be chosen, where n is the number of points. These are the degenerate circles covering exactly one point; the circles with two points forming a diameter; and the circles that pass through three points. You'll write code to expand the abstract integer program into a concrete one in a format accepted by an integer program solver (e.g., GLPK) and then run the solver.
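For small instances, that expansion could be sketched as follows, using PuLP as one possible modelling layer (PuLP, the example cost function, and the helper names are just one choice; any MILP solver with a Python interface works similarly):

import itertools
import math
import pulp  # CBC/GLPK can serve as the back-end solver

def circumcircle(a, b, c):
    # Center and radius of the circle through three points, or None if collinear.
    (ax, ay), (bx, by), (cx, cy) = a, b, c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax*ax + ay*ay) * (by - cy) + (bx*bx + by*by) * (cy - ay)
          + (cx*cx + cy*cy) * (ay - by)) / d
    uy = ((ax*ax + ay*ay) * (cx - bx) + (bx*bx + by*by) * (ax - cx)
          + (cx*cx + cy*cy) * (bx - ax)) / d
    return (ux, uy), math.hypot(ax - ux, ay - uy)

def candidate_circles(points):
    # The O(n^3) candidates: single points, two-point diameters, circumcircles.
    for p in points:
        yield p, 0.0
    for a, b in itertools.combinations(points, 2):
        yield ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2), math.dist(a, b) / 2
    for a, b, c in itertools.combinations(points, 3):
        cc = circumcircle(a, b, c)
        if cc is not None:
            yield cc

def solve_cover(points, cost=lambda r: 1 + 2 * r**2, eps=1e-9):
    circles = list(candidate_circles(points))
    prob = pulp.LpProblem("circle_cover", pulp.LpMinimize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(circles))]
    prob += pulp.lpSum(cost(r) * x[i] for i, (_, r) in enumerate(circles))
    for p in points:  # every point must lie in at least one chosen circle
        prob += pulp.lpSum(x[i] for i, (c, r) in enumerate(circles)
                           if math.dist(p, c) <= r + eps) >= 1
    prob.solve()
    return [circles[i] for i in range(len(circles)) if x[i].value() > 0.5]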
The size of the integer program is O(n^4), which is prohibitively expensive for your larger instances. To get the cost down, you'll want to do column generation. This is where you'll need to figure out your solver's programmatic interface. You'll be looking for an option that, when solving the linear relaxation of the integer program, calls your code back with the current price of each point and expects an optional circle whose cost is less than the sum of the prices of the points that it encloses.
The naive algorithm to generate columns is still O(n^4), but if you switch to a sweep algorithm, the cost will be O(n^3 log n). Given a pair of points, imagine all of the circles passing through those points. All of the circle centers lie on the perpendicular bisector. For each other point, there is an interval of centers for which the circle encloses this point. Compute all of these event points, sort them, and then process the events in order, updating the current total price of the enclosed points as you go. (Since the circles are closed, process arrivals before departures.)
If you want to push this even further, look into branch and price. The high-level branching variables would be the decision to cover two points with the same circle or not.
