Is it possible to vectorize scipy.optimize.fminbound?

I have some data points of a trajectory that is parameterized by time, and I want to know the shortest distance of each point to the curve fitted to them. There seem to be several ways to go about this (e.g. here or here), but I chose scipy.optimize.fminbound, as proposed here (because it seemed to be the slickest way of going about it, and because I actually got it to work):
import numpy as np
import pandas as pd
from scipy.optimize import fminbound
data = np.array(
[
(212.82275865, 1650.40828168, 0., 0),
(214.22056952, 1649.99898924, 10.38, 0),
(212.86786868, 1644.25228805, 116.288, 0),
(212.78680031, 1643.87461108, 122.884, 0),
(212.57489485, 1642.01124032, 156.313, 0),
(212.53483954, 1641.61858242, 162.618, 0),
(212.43922274, 1639.58782771, 196.314, 0),
(212.53726315, 1639.13842423, 202.619, 0),
(212.2888428, 1637.24641296, 236.306, 0),
(212.2722447, 1636.92307229, 242.606, 0),
(212.15559302, 1635.0529813, 276.309, 0),
(212.17535631, 1634.60618711, 282.651, 0),
(212.02545613, 1632.72139574, 316.335, 0),
(211.99988779, 1632.32053329, 322.634, 0),
(211.33419846, 1631.07592039, 356.334, 0),
(211.58972239, 1630.21971902, 362.633, 0),
(211.70332122, 1628.2088542, 396.316, 0),
(211.74610735, 1627.67591368, 402.617, 0),
(211.88819518, 1625.67310022, 436.367, 0),
(211.90709414, 1625.39410321, 442.673, 0),
(212.00090348, 1623.42655008, 476.332, 0),
(211.9249017, 1622.94540583, 482.63, 0),
(212.34321938, 1616.32949197, 597.329, 0),
(213.09638942, 1615.2869643, 610.4, 0),
(219.41313491, 1580.22234313, 1197.332, 0),
(220.38660128, 1579.20043302, 1210.37, 0),
(236.35472669, 1542.30863041, 1798.267, 0),
(237.41755384, 1541.41679119, 1810.383, 0),
(264.08373622, 1502.66620597, 2398.244, 0),
(265.65655239, 1501.64308908, 2410.443, 0),
(304.66999824, 1460.94068336, 2997.263, 0),
(306.22550945, 1459.75817211, 3010.38, 0),
(358.88879764, 1416.472238, 3598.213, 0),
(361.14046402, 1415.40942931, 3610.525, 0),
(429.96379858, 1369.7972467, 4198.282, 0),
(432.06565776, 1368.22265539, 4210.505, 0),
(519.30493383, 1319.01141844, 4798.277, 0),
(522.12134083, 1317.68234967, 4810.4, 0),
(630.00294242, 1265.05368942, 5398.236, 0),
(633.67624272, 1263.63633508, 5410.431, 0),
(766.29767476, 1206.91262814, 5997.266, 0),
(770.78300649, 1205.48393374, 6010.489, 0),
(932.92308019, 1145.87780431, 6598.279, 0),
(937.54373403, 1141.55438694, 6609.525, 0),
], dtype=[
('x', 'f8'), ('y', 'f8'), ('t', 'f8'), ('dmin', 'f8'),
]
)
# fyi my data comes as a structured array; unfortunately, simply passing
# data[['x', 'y']] to np.polyfit does not work. using
# pd.DataFrame(data[['x', 'y']]).values seems to be the easiest solution:
# https://stackoverflow.com/a/26175750/5472354
coeffs = np.polyfit(
    data['t'], pd.DataFrame(data[['x', 'y']]).values, 3
)
def curve(t):
    # this can probably also be done in one statement, but I don't know how
    x = np.polyval(coeffs[:, 0], t)
    y = np.polyval(coeffs[:, 1], t)
    return x, y

def f(t, p):
    x, y = curve(t)
    return np.hypot(x - p['x'], y - p['y'])

# instead of this:
for point in data:
    tmin = fminbound(f, -50, 6659.525, args=(point, ))
    point['dmin'] = f(tmin, point)
# do something like this:
# tmin = fminbound(f, -50, 6659.525, args=(data, ))
# data['dmin'] = f(tmin, data)
But as you can see, I use a for-loop to calculate the shortest distances for each data point, which slows down my program significantly, as this is performed several thousand times. Thus I would like to vectorize the operation / improve the performance, but haven't found a way. There are related posts to this (e.g. here or here), but I don't know how the suggested solutions apply in my case.

No, it is not possible to vectorize fminbound since it expects a scalar function of one variable. However, you can still vectorize the loop by reformulating the underlying mathematical optimization problem:
The N scalar optimization problems

min f_i(t)   s.t.   t_l <= t <= t_u,   i = 1, ..., N

for scalar functions f_i are equivalent to the single optimization problem in N variables

min f_1(t_1)**2 + ... + f_N(t_N)**2   s.t.   t_l <= t_i <= t_u   for all i = 1, ..., N

(squaring changes nothing here, since each f_i is a nonnegative distance and each term depends only on its own variable t_i),
which can be solved by means of scipy.optimize.minimize. Depending on your whole algorithm, you could use this approach to further eliminate more loops, i.e. you only solve one large-scale optimization problem instead of multiple thousands of scalar optimization problems.
After cleaning up your code, this can be done as follows:
import numpy as np
from scipy.optimize import minimize
data = np.array([
(212.82275865, 1650.40828168, 0., 0),
(214.22056952, 1649.99898924, 10.38, 0),
(212.86786868, 1644.25228805, 116.288, 0),
(212.78680031, 1643.87461108, 122.884, 0),
(212.57489485, 1642.01124032, 156.313, 0),
(212.53483954, 1641.61858242, 162.618, 0),
(212.43922274, 1639.58782771, 196.314, 0),
(212.53726315, 1639.13842423, 202.619, 0),
(212.2888428, 1637.24641296, 236.306, 0),
(212.2722447, 1636.92307229, 242.606, 0),
(212.15559302, 1635.0529813, 276.309, 0),
(212.17535631, 1634.60618711, 282.651, 0),
(212.02545613, 1632.72139574, 316.335, 0),
(211.99988779, 1632.32053329, 322.634, 0),
(211.33419846, 1631.07592039, 356.334, 0),
(211.58972239, 1630.21971902, 362.633, 0),
(211.70332122, 1628.2088542, 396.316, 0),
(211.74610735, 1627.67591368, 402.617, 0),
(211.88819518, 1625.67310022, 436.367, 0),
(211.90709414, 1625.39410321, 442.673, 0),
(212.00090348, 1623.42655008, 476.332, 0),
(211.9249017, 1622.94540583, 482.63, 0),
(212.34321938, 1616.32949197, 597.329, 0),
(213.09638942, 1615.2869643, 610.4, 0),
(219.41313491, 1580.22234313, 1197.332, 0),
(220.38660128, 1579.20043302, 1210.37, 0),
(236.35472669, 1542.30863041, 1798.267, 0),
(237.41755384, 1541.41679119, 1810.383, 0),
(264.08373622, 1502.66620597, 2398.244, 0),
(265.65655239, 1501.64308908, 2410.443, 0),
(304.66999824, 1460.94068336, 2997.263, 0),
(306.22550945, 1459.75817211, 3010.38, 0),
(358.88879764, 1416.472238, 3598.213, 0),
(361.14046402, 1415.40942931, 3610.525, 0),
(429.96379858, 1369.7972467, 4198.282, 0),
(432.06565776, 1368.22265539, 4210.505, 0),
(519.30493383, 1319.01141844, 4798.277, 0),
(522.12134083, 1317.68234967, 4810.4, 0),
(630.00294242, 1265.05368942, 5398.236, 0),
(633.67624272, 1263.63633508, 5410.431, 0),
(766.29767476, 1206.91262814, 5997.266, 0),
(770.78300649, 1205.48393374, 6010.489, 0),
(932.92308019, 1145.87780431, 6598.279, 0),
(937.54373403, 1141.55438694, 6609.525, 0)])
# the coefficients
coeffs = np.polyfit(data[:, 2], data[:, 0:2], 3)
# the points
points = data[:, :2]
# vectorized version of your objective function
# i.e. evaluates f_1, ..., f_N
def f(t, points):
    poly_x = np.polyval(coeffs[:, 0], t)
    poly_y = np.polyval(coeffs[:, 1], t)
    return np.hypot(poly_x - points[:, 0], poly_y - points[:, 1])

# the scalar objective function we want to minimize
def obj_vec(t, points):
    vals = f(t, points)
    return np.sum(vals**2)
# variable bounds
bnds = [(-50, 6659.525)]*len(points)
# solve the optimization problem
res = minimize(lambda t: obj_vec(t, points), x0=np.zeros(len(points)), bounds=bnds)
dmins = f(res.x, points)
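As a quick sanity check, you can compare dmins against your original per-point fminbound loop. This is only a sketch reusing coeffs and points from above; f_scalar is a hypothetical helper, not part of your code:
from scipy.optimize import fminbound

# scalar distance from the point (px, py) to the curve at parameter t
def f_scalar(t, px, py):
    return np.hypot(np.polyval(coeffs[:, 0], t) - px,
                    np.polyval(coeffs[:, 1], t) - py)

ref = np.array([f_scalar(fminbound(f_scalar, -50, 6659.525, args=(px, py)), px, py)
                for px, py in points])
print(np.abs(dmins - ref).max())  # small, up to the tolerances of both solvers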
In order to further accelerate the optimization, it's highly recommended to pass the exact gradient of the objective function to minimize. Currently, the gradient is approximated by finite differences, which is quite slow:
In [7]: %timeit res = minimize(lambda t: obj_vec(t, points), x0=np.zeros(len(points)), bounds=bnds)
91.5 ms ± 868 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)
Since the objective is a sum of squares, the gradient is easily computed by the chain rule; its i-th component is the derivative of f_i(t_i)**2 with respect to t_i:
def grad(t, points):
    poly_x = np.polyval(coeffs[:, 0], t)
    poly_y = np.polyval(coeffs[:, 1], t)
    poly_x_deriv = np.polyval(np.polyder(coeffs[:, 0], m=1), t)
    poly_y_deriv = np.polyval(np.polyder(coeffs[:, 1], m=1), t)
    return 2*poly_x_deriv*(poly_x - points[:, 0]) + 2*poly_y_deriv*(poly_y - points[:, 1])
and passing it to minimize significantly reduces the runtime:
In [9]: %timeit res = minimize(lambda t: obj_vec(t, points), jac=lambda t: grad(t, points), x0=np.zeros(len(points)),
...: bounds=bnds)
6.13 ms ± 63.2 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Finally, setting another starting point might also lead to fewer iterations, as you already noted in the comments.
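For example, since each measurement's own time stamp is already close to the parameter value of its nearest curve point, the t column of the data makes a natural warm start. A sketch, reusing the arrays defined above:
# warm start: each point's own time stamp as the initial guess for its t_i
res = minimize(lambda t: obj_vec(t, points),
               jac=lambda t: grad(t, points),
               x0=data[:, 2], bounds=bnds)
dmins = f(res.x, points)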

Related

Generate grid of coordinate tuples

Assume a d-dimensional integer grid, containing n^d (n >= 1) points.
I am trying to write a function that takes the number of domain points n and the number of dimensions d and returns a set that contains all the coordinate points in the grid, as tuples.
Example: intGrid (n=2, dim=2) should return the set:
{(0,0), (0,1), (1,0), (1,1)}
Note: I cannot use numpy or any external imports.
Python's standard library provides most of the basic functionality you will need to get things done.
One such module is itertools, where you will find all sorts of functions for iteration and combinatorics. The perfect fit here is product, which you can use as below:
from itertools import product
def grid(n, dim):
    return set(product(range(n), repeat=dim))
print(grid(2, 2))
# {(0, 0), (0, 1), (1, 0), (1, 1)}
print(grid(2, 3))
# {(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)}
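For reference, product is easy to emulate with a short recursive set comprehension if you want to see what it does under the hood; grid_recursive is my own hypothetical name for this sketch:
def grid_recursive(n, dim):
    # pure-Python equivalent of set(product(range(n), repeat=dim))
    if dim == 0:
        return {()}
    # prepend each coordinate value to every point of the (dim-1)-grid
    return {(i,) + rest for i in range(n) for rest in grid_recursive(n, dim - 1)}

print(grid_recursive(2, 2))
# {(0, 0), (0, 1), (1, 0), (1, 1)}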

More Pythonic way to build random clusters in Python

I would like to create a function which will create a uniform distribution random cluster centered around a set of co-ordinates and with a specified radius, I have done this with the below:
import numpy as np
# create cluster builder
def cluster(center, radius=10, n=50):
    xx = np.random.uniform(center[0]-radius, center[0]+radius, size=n)
    yy = np.random.uniform(center[1]-radius, center[1]+radius, size=n)
    zz = np.random.uniform(center[2]-radius, center[2]+radius, size=n)
    return xx, yy, zz
# create random cluster
xx1, yy1, zz1 = cluster((25, 15, 5))
This works as expected, but I just feel that there must be a more Pythonic way to build the cluster function. Does anyone have any suggestions?
np.random.uniform also accepts low and high as arrays/lists. Hence, we can simply do -
c = np.asarray(center)
xx,yy,zz = np.random.uniform(c-radius, c+radius, size=(n,3)).T
If an older NumPy version only supports scalar low and high, we can use some scaling -
xx,yy,zz = np.random.uniform(size=(3,n))*radius*2 + c[:,None] - radius
You can define cluster() so that it vectorizes all operations
def cluster(center, radius=10, n=50):
    center = np.atleast_1d(np.asarray(center))[:, None, None]
    radius = np.atleast_1d(np.asarray(radius))[None, :, None]
    shape = (center.shape[0], radius.shape[1], n)
    return np.random.uniform(center - radius, center + radius, size=shape)
and get all values in one single call:
cluster(25, 10).shape # (1, 1, 50)
cluster((25, 15, 5), 10).shape # (3, 1, 50)
cluster((25, 15, 5), (10, 5)).shape # (3, 2, 50)
cluster((25, 15, 5), (10, 5), n=100).shape # (3, 2, 100)
of course you can still separate the result into xx, yy, zz:
xx, yy, zz = cluster((25, 15, 5), (10, 5), n=100)
xx.shape # (2, 100)
yy.shape # (2, 100)
zz.shape # (2, 100)
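If you are on NumPy >= 1.17, the same one-liner also works with the Generator API, which additionally makes the clusters reproducible; a sketch under that version assumption:
import numpy as np

rng = np.random.default_rng(seed=42)  # seeded for reproducibility
c = np.asarray((25, 15, 5))
radius, n = 10, 50
xx, yy, zz = rng.uniform(c - radius, c + radius, size=(n, 3)).T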

pyspark reduce key being a tuple values nested lists

My problem is the following: I am parsing user interactions; each time an interaction is detected, I emit ((user1,user2),((date1,0),(0,1))). The zeros encode the direction of the interaction.
I cannot figure out why I cannot reduce this output with the following reduce function:
def myFunc2(x1, x2):
    return (min(x1[0][0], x2[0][0]), max(x1[0][0], x2[0][0]),
            min(x1[0][1], x2[0][1]), max(x1[0][1], x2[0][1]),
            x1[1][0] + x2[1][0], x1[1][1] + x2[1][1])
The output of my mapper (flatMap(myFunc)) is correct:
((7401899, 5678002), ((1403185440.0, 0), (1, 0)))
((82628194, 22251869), ((0, 1403185452.0), (0, 1)))
((2162276, 98056200), ((1403185451.0, 0), (1, 0)))
((0509420, 4827510), ((1403185449.0, 0), (1, 0)))
((7974923, 9235930), ((1403185450.0, 0), (1, 0)))
((250259, 6876774), ((0, 1403185450.0), (0, 1)))
((642369, 6876774), ((0, 1403185450.0), (0, 1)))
((82628194, 22251869), ((0, 1403185452.0), (0, 1)))
((2162276, 98056200), ((1403185451.0, 0), (1, 0)))
But running
lines.flatMap(myFunc) \
     .map(lambda x: (x[0], x[1])) \
     .reduceByKey(myFunc2)
gives me the error
return (min(x1[0][0],x2[0][0]),max(x1[0][0],x2[0][0]),min(x1[0][1],x2[0][1]),max(x1[0][1],x2[0][1]),x1[1][0]+x2[1][0],x1[1][1]+x2[1][1])
TypeError: 'int' object has no attribute '__getitem__'
I guess I am messing something up in my keys, but I don't know why (I tried to recast the key to a tuple as suggested here, but I get the same error).
Any ideas? Thanks a lot.
Okay, I think the problem here is that you are indexing too deep in items that don't go as deep as you think.
Let's examine myFunc2
def myFunc2(x1, x2):
    return (min(x1[0][0], x2[0][0]), max(x1[0][0], x2[0][0]),
            min(x1[0][1], x2[0][1]), max(x1[0][1], x2[0][1]),
            x1[1][0] + x2[1][0], x1[1][1] + x2[1][1])
Given your question above, the input data will look like this:
((467401899, 485678002), ((1403185440.0, 0), (1, 0)))
Let's go ahead and assign that data row equal to a variable.
x = ((467401899, 485678002), ((1403185440.0, 0), (1, 0)))
What happens when we run x[0]? We get (467401899, 485678002). When we run x[1]? We get ((1403185440.0, 0), (1, 0)). That's what your map statement is doing, I believe.
Okay. That's clear.
In your function myFunc2, you have two parameters, x1 and x2. Those correspond to the variables above: x1 = x[0] = (467401899, 485678002) and x2 = x[1] = ((1403185440.0, 0), (1, 0))
Now let's examine just the first part of your return statement in your function.
min(x1[0][0], x2[0][0])
So, x1 = (467401899, 485678002). Cool. Now, what's x1[0]? Well, that's 467401899. Obviously. But wait! What's x1[0][0]? You're trying to get the zeroth index of the item at x1[0], but the item at x1[0] isn't a list or a tuple, it's just an int. And objects of <type 'int'> don't have a method called __getitem__.
To summarize: you're digging too deep into objects that are not nested that deeply. Think carefully about what you are passing into myFunc2, and how deep your objects are.
I think the first part of the return statement for myFunc2 should look like:
return min(x1[0], x2[0][0]). You can index deeper on x2 because x2 has more deeply nested tuples!
When I run the following, it works just fine:
a = sc.parallelize([((7401899, 5678002), ((1403185440.0, 0), (1, 0))),
((82628194, 22251869), ((0, 1403185452.0), (0, 1))),
((2162276, 98056200), ((1403185451.0, 0), (1, 0))),
((1509420, 4827510), ((1403185449.0, 0), (1, 0))),
((7974923, 9235930), ((1403185450.0, 0), (1, 0))),
((250259, 6876774), ((0, 1403185450.0), (0, 1))),
((642369, 6876774), ((0, 1403185450.0), (0, 1))),
((82628194, 22251869), ((0, 1403185452.0), (0, 1))),
((2162276, 98056200), ((1403185451.0, 0), (1, 0)))])
b = a.map(lambda x: (x[0], x[1])).reduceByKey(myFunc2)
b.collect()
[((1509420, 4827510), ((1403185449.0, 0), (1, 0))),
((2162276, 98056200), (1403185451.0, 1403185451.0, 0, 0, 2, 0)),
((7974923, 9235930), ((1403185450.0, 0), (1, 0))),
((7401899, 5678002), ((1403185440.0, 0), (1, 0))),
((642369, 6876774), ((0, 1403185450.0), (0, 1))),
((82628194, 22251869), (0, 0, 1403185452.0, 1403185452.0, 0, 2)),
((250259, 6876774), ((0, 1403185450.0), (0, 1)))]
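As an aside, the deeper pitfall with reduceByKey is that the reducer's output must have the same shape as its inputs, because with three or more values per key the result of one merge is fed back in as an argument to the next. One way around that is to normalize every value to a flat tuple first with mapValues, then merge like with like. A sketch with hypothetical helper names:
# normalize each emitted value to a flat 6-tuple:
# (min_date_out, max_date_out, min_date_in, max_date_in, count_out, count_in)
def to_stats(v):
    (d_out, d_in), (c_out, c_in) = v
    return (d_out, d_out, d_in, d_in, c_out, c_in)

# merge two flat 6-tuples; output shape == input shape, so repeated merges work
def merge(a, b):
    return (min(a[0], b[0]), max(a[1], b[1]),
            min(a[2], b[2]), max(a[3], b[3]),
            a[4] + b[4], a[5] + b[5])

result = lines.flatMap(myFunc).mapValues(to_stats).reduceByKey(merge)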

Python / Shapely Touching Contains

I'm trying to write an algorithm, that can check if a polygon is within another, but they share a border.
from shapely.geometry import Polygon
poly = Polygon(((0, 0), (0, 2), (2, 2), (2, 0)))
poly2 = Polygon(((0, 0), (0, 1), (1, 1), (1, 0)))
# poly.contains(poly2) will return False
Is there a other way than checking if at least one point from poly2 is within poly and they don't cross (points within and outside poly)?
from shapely.geometry import Polygon
poly = Polygon(((0, 0), (0, 2), (2, 2), (2, 0)))
poly2 = Polygon(((0, 0), (0, 1), (1, 1), (1, 0)))
poly3 = Polygon(((0, 0), (0, 1), (-1, 1), (-1, 0)))
# desired result:
# poly.func(poly2) == True
# poly.func(poly3) == False
You can examine detailed spatial relationships using the DE-9IM model, on which predicates like "intersects" and "contains" are based. The full relationship matrix can be obtained from relate:
A = poly
B = poly2
rel = A.relate(B)
print(rel) # 212F11FF2
(The same matrix can be inspected graphically in JTS TestBuilder.)
So to find "at least one point from poly2 is within poly and they don't cross (points within and outside poly)" while the two "share a border", in spatial-predicate-speak all of the following conditions must hold:
I(A) ∩ I(B) = 2, i.e. rel[0] == '2': the interiors overlap in an area, so at least one point of B is within A
B(A) ∩ B(B) ∈ {0, 1}, i.e. rel[4] in '01': the boundaries share a border (point or line)
E(A) ∩ I(B) = F, i.e. rel[6] == 'F': no part of B's interior lies outside A
So, make your own spatial predicate:
def my_criteria(A, B):
    rel = A.relate(B)
    return rel[0] == '2' and rel[4] in '01' and rel[6] == 'F'
my_criteria(poly, poly2) # True
my_criteria(poly2, poly) # False
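For this particular containment-with-shared-border question, Shapely's built-in covers predicate (True when no point of B lies in the exterior of A) may already be enough, although unlike my_criteria it does not insist on an areal overlap or a shared border; a quick sketch:
print(poly.covers(poly2))  # True: poly2 lies within poly, shared border allowed
print(poly.covers(poly3))  # False: poly3 extends outside poly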

python comprehension with multiple 'for' clauses and single 'if'

Imagine a discrete x,y,z space : I am trying to create an iterator which will return all points which lie within a sphere of some radial distance from a point.
My approach was to first look at all points within a larger cube which is guaranteed to contain all the points needed and then cull or skip points which are too far away.
My first attempt was:
x,y,z=(0,0,1)
dist=2
#this doesn't work
it_0=((x+xp,y+yp,z+zp) for xp in range(-dist,dist+1) for yp in range(-dist,dist+1) for zp in range(-dist,dist+1) if ( ((x-xp)**2+(y-yp)**2+(z-zp)**2) <= dist**2+sys.float_info.epsilon ) )
a simple
for d,e,f in it_0:
    #print(d,e,f)
    print( ((x-d)**2+(y-e)**2+(z-f)**2) <= dist**2+sys.float_info.epsilon, d,e,f)
verifies that it_0 does not produce correct results. I believe it is applying the conditional only to the third (i.e. z) 'for' clause.
The following works:
it_1=((x+xp,y+yp,z+zp) for xp in range(-dist,dist+1) for yp in range(-dist,dist+1) for zp in range(-dist,dist+1))
it_2=filter( lambda p: ((x-p[0])**2+(y-p[1])**2+(z-p[2])**2) <= dist**2+sys.float_info.epsilon, it_1)
It collects all the points, then filter those which don't fit the conditional.
I was hoping there might be a way to correct the first attempted implementation, or make these expressions more readable or compact.
First of all, I suggest you replace the triply-nested for loop with itertools.product(), like so:
import itertools as it
it_1 = it.product(range(-dist, dist+1), repeat=3)
If you are using Python 2.x, you should use xrange() here instead of range().
Next, instead of using filter() you could just use a generator expression:
it_2 = ((x+xp, y+yp, z+zp) for xp, yp, zp in it_1
        if xp**2 + yp**2 + zp**2 <= dist**2 + sys.float_info.epsilon)
This would avoid some overhead in Python 2.x (since filter() builds a list), but for Python 3.x would be about the same; and even in Python 2.x you could use itertools.ifilter().
But for readability, I would package the whole thing up into a generator, like so:
import itertools as it
import sys
def sphere_points(radius=0, origin=(0,0,0), epsilon=sys.float_info.epsilon):
    x0, y0, z0 = origin
    limit = radius**2 + epsilon
    for x, y, z in it.product(range(-radius, radius+1), repeat=3):
        if (x**2 + y**2 + z**2) <= limit:
            yield (x+x0, y+y0, z+z0)
This is only lightly changed from your original code. Each range for x, y, and z is centered around the origin point. When I test this code with a radius of 0, I correctly get back a single point: the origin.
Note that I provided arguments to the function letting you specify radius, origin point, and even the value to use for epsilon, with defaults for each. I also unpacked the origin point tuple into explicit variables; I'm not sure if Python would optimize away the indexing operation or not, but this way we know there won't be any indexing going on inside the loop. (I think the Python compiler would probably hoist the limit calculation out of the loop, but I actually prefer it on its own line as shown here, for readability.)
I think the above is about as fast as you can write it in native Python, and I think it is a big improvement in readability.
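A quick usage check (the radius-0 case mentioned above, plus a radius-2 sphere around (0, 0, 1), which contains 33 integral points):
print(list(sphere_points(0, origin=(0, 0, 1))))       # [(0, 0, 1)]
print(len(list(sphere_points(2, origin=(0, 0, 1)))))  # 33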
P.S. This code would probably run a lot faster if it was redone using Cython.
http://cython.org/
EDIT: Code simplified as suggested by #eryksun in comments.
You've confused the meanings of xp, yp, zp in your generator expression:
it_0=((x+xp,y+yp,z+zp) for xp in range(-dist,dist+1) for yp in range(-dist,dist+1) for zp in range(-dist,dist+1) if ( ((x-xp)**2+(y-yp)**2+(z-zp)**2) <= dist**2+sys.float_info.epsilon ) )
xp, yp, and zp are already the distances from the center of the sphere along the various axes. Thus you should not be taking the difference from x, y, z again. The expression should be:
it_0=((x+xp,y+yp,z+zp) for xp in range(-dist,dist+1) for yp in range(-dist,dist+1) for zp in range(-dist,dist+1) if ( (xp**2+yp**2+zp**2) <= dist**2+sys.float_info.epsilon ) )
Your use of epsilon is a bit off. The documentation for sys.float_info describes it as the difference between 1 and the next representable float, so it is too small to make a difference on 2, let alone 2**2.
Secondly, all your points are measured from x,y,z in the filter/if then offset by it in the result expression (x+xp,y+yp,z+zp). You'll get a rather off-centre sphere that way, so try (xp,yp,zp).
Others have pointed out the logic errors; I'll address readability. Also note that with the data given, there is no floating point involved, so an epsilon is not needed.
from itertools import product
from pprint import pprint
x,y,z = 0,0,1
r = 2
def points_in_circle(radius):
    '''return a generator of all integral points in a sphere of given radius.'''
    return ((x, y, z)
            for x, y, z in product(range(-radius, radius+1), repeat=3)
            if x**2 + y**2 + z**2 <= radius**2)

# List integral points of radius r around point (x,y,z).
pprint([(x+xp, y+yp, z+zp) for xp, yp, zp in points_in_circle(r)])
Output
[(-2, 0, 1),
(-1, -1, 0),
(-1, -1, 1),
(-1, -1, 2),
(-1, 0, 0),
(-1, 0, 1),
(-1, 0, 2),
(-1, 1, 0),
(-1, 1, 1),
(-1, 1, 2),
(0, -2, 1),
(0, -1, 0),
(0, -1, 1),
(0, -1, 2),
(0, 0, -1),
(0, 0, 0),
(0, 0, 1),
(0, 0, 2),
(0, 0, 3),
(0, 1, 0),
(0, 1, 1),
(0, 1, 2),
(0, 2, 1),
(1, -1, 0),
(1, -1, 1),
(1, -1, 2),
(1, 0, 0),
(1, 0, 1),
(1, 0, 2),
(1, 1, 0),
(1, 1, 1),
(1, 1, 2),
(2, 0, 1)]
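If NumPy is available after all, the same cube-then-mask idea vectorizes neatly; a sketch with my own function name:
import numpy as np

def sphere_points_np(radius, origin=(0, 0, 0)):
    # integer offsets in the bounding cube, masked by squared distance
    ax = np.arange(-radius, radius + 1)
    dx, dy, dz = np.meshgrid(ax, ax, ax, indexing='ij')
    mask = dx**2 + dy**2 + dz**2 <= radius**2
    offsets = np.stack([dx[mask], dy[mask], dz[mask]], axis=1)
    return offsets + np.asarray(origin)

print(len(sphere_points_np(2, (0, 0, 1))))  # 33, matching the list above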