Cannot iterate over reverb replay buffer due to dimensionality issue - python

I am trying to follow tensorflow's REINFORCE agent tutorial. It works when I use their code, but when I substitute my own environment I get this error:
Received incompatible tensor at flattened index 0 from table 'uniform_table'. Specification has (dtype, shape): (int32, [?]). Tensor has (dtype, shape): (int32, [92,1]).
Table signature: 0: Tensor<name: 'step_type/step_type', dtype: int32, shape: [?]>, 1: Tensor<name: 'observation/observation', dtype: double, shape: [?,18]>, 2: Tensor<name: 'action/action', dtype: float, shape: [?,2]>, 3: Tensor<name: 'next_step_type/step_type', dtype: int32, shape: [?]>, 4: Tensor<name: 'reward/reward', dtype: float, shape: [?]>, 5: Tensor<name: 'discount/discount', dtype: float, shape: [?]> [Op:IteratorGetNext]
This is interesting because 92 is exactly the number of steps in the episode.
The table signature when using my environment is:
Trajectory(
{'action': BoundedTensorSpec(shape=(None, 2), dtype=tf.float32, name='action', minimum=array(0., dtype=float32), maximum=array(3.4028235e+38, dtype=float32)),
'discount': BoundedTensorSpec(shape=(None,), dtype=tf.float32, name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)),
'next_step_type': TensorSpec(shape=(None,), dtype=tf.int32, name='step_type'),
'observation': BoundedTensorSpec(shape=(None, 18), dtype=tf.float64, name='observation', minimum=array([0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00, 0.0000e+00,
7.5189e+02, 6.1000e-01, 1.0860e+01, 1.0000e+00, 0.0000e+00,
0.0000e+00, 0.0000e+00, 0.0000e+00]), maximum=array(1.79769313e+308)),
'policy_info': (),
'reward': TensorSpec(shape=(None,), dtype=tf.float32, name='reward'),
'step_type': TensorSpec(shape=(None,), dtype=tf.int32, name='step_type')})
And when using the working tutorial environment:
Trajectory(
{'action': BoundedTensorSpec(shape=(None,), dtype=tf.int64, name='action', minimum=array(0), maximum=array(1)),
'discount': BoundedTensorSpec(shape=(None,), dtype=tf.float32, name='discount', minimum=array(0., dtype=float32), maximum=array(1., dtype=float32)),
'next_step_type': TensorSpec(shape=(None,), dtype=tf.int32, name='step_type'),
'observation': BoundedTensorSpec(shape=(None, 4), dtype=tf.float32, name='observation', minimum=array([-4.8000002e+00, -3.4028235e+38, -4.1887903e-01, -3.4028235e+38],
dtype=float32), maximum=array([4.8000002e+00, 3.4028235e+38, 4.1887903e-01, 3.4028235e+38],
dtype=float32)),
'policy_info': (),
'reward': TensorSpec(shape=(None,), dtype=tf.float32, name='reward'),
'step_type': TensorSpec(shape=(None,), dtype=tf.int32, name='step_type')})
The only dimensional differences are that in my case the agent produces an action composed of 2 scalar numbers while in the tutorial the action is composed of only one, and my observation is longer. Regardless, the unknown dimension precedes the known dimension.
The trajectories used as input for the replay buffer also match up; I printed their dimensions as they were created, first for my version:
[(92, 1), (92, 1, 18), (92, 1, 2), (92, 1), (92, 1), (92, 1)]
[(92, 1), (92, 1, 18), (92, 1, 2), (92, 1), (92, 1), (92, 1)]
[(92, 1), (92, 1, 18), (92, 1, 2), (92, 1), (92, 1), (92, 1)]
[(92, 1), (92, 1, 18), (92, 1, 2), (92, 1), (92, 1), (92, 1)]
and then for the tutorial version:
[(9, 1), (9, 1, 4), (9, 1), (9, 1), (9, 1), (9, 1)]
[(11, 1), (11, 1, 4), (11, 1), (11, 1), (11, 1), (11, 1)]
[(10, 1), (10, 1, 4), (10, 1), (10, 1), (10, 1), (10, 1)]
[(10, 1), (10, 1, 4), (10, 1), (10, 1), (10, 1), (10, 1)]
[(10, 1), (10, 1, 4), (10, 1), (10, 1), (10, 1), (10, 1)]
[(10, 1), (10, 1, 4), (10, 1), (10, 1), (10, 1), (10, 1)]
[(9, 1), (9, 1, 4), (9, 1), (9, 1), (9, 1), (9, 1)]
[(9, 1), (9, 1, 4), (9, 1), (9, 1), (9, 1), (9, 1)]
[(9, 1), (9, 1, 4), (9, 1), (9, 1), (9, 1), (9, 1)]
[(9, 1), (9, 1, 4), (9, 1), (9, 1), (9, 1), (9, 1)]
So for both versions, each entry in the trajectory has the shape (number of steps, batch size, value length if the entry is itself a vector).
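(For reference, the shape lists above were produced with a small inspection helper along these lines; this is just a sketch of the printing step, not part of the tutorial code.)
import tensorflow as tf

# Sketch: print the shape of every field of a tf_agents Trajectory (a namedtuple of tensors).
def print_field_shapes(traj):
    print([tuple(t.shape) for t in tf.nest.flatten(traj)])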
I get the error mentioned at the start of the question when running the second of these two lines of code:
iterator = iter(replay_buffer.as_dataset(sample_batch_size=1))
trajectories, _ = next(iterator)
However, these lines of code run successfully using the tutorial's code, and 'trajectories' is as follows:
Trajectory(
{'action': <tf.Tensor: shape=(1, 50), dtype=int64, numpy=
array([[0, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0,
1, 1, 0, 1, 0, 1, 0, 1, 0, 0, 0, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1, 0,
1, 0, 0, 0, 1, 1]])>,
'discount': <tf.Tensor: shape=(1, 50), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
0., 1.]], dtype=float32)>,
'next_step_type': <tf.Tensor: shape=(1, 50), dtype=int32, numpy=
array([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 2, 0]], dtype=int32)>,
'observation': <tf.Tensor: shape=(1, 50, 4), dtype=float32, numpy=
array([[[ 0.02992676, 0.01392324, 0.03861422, -0.04107672],
[ 0.03020522, -0.18173054, 0.03779269, 0.26353496],
[ 0.02657061, -0.37737098, 0.04306339, 0.56789446],
[ 0.01902319, -0.18287869, 0.05442128, 0.2890832 ],
[ 0.01536562, 0.01142669, 0.06020294, 0.0140486 ],
[ 0.01559415, 0.20563589, 0.06048391, -0.25904846],
[ 0.01970687, 0.39984456, 0.05530294, -0.53205734],
[ 0.02770376, 0.59414685, 0.04466179, -0.80681443],
[ 0.0395867 , 0.39844212, 0.02852551, -0.50042385],
[ 0.04755554, 0.2029299 , 0.01851703, -0.19888948],
[ 0.05161414, 0.39778218, 0.01453924, -0.48567408],
[ 0.05956978, 0.59269595, 0.00482576, -0.7737395 ],
[ 0.0714237 , 0.39750797, -0.01064903, -0.47954214],
[ 0.07937386, 0.5927786 , -0.02023987, -0.7755622 ],
[ 0.09122943, 0.3979408 , -0.03575112, -0.48931554],
[ 0.09918825, 0.20334099, -0.04553743, -0.20811091],
[ 0.10325507, 0.39908352, -0.04969965, -0.5148037 ],
[ 0.11123674, 0.59486884, -0.05999572, -0.82272476],
[ 0.12313411, 0.40061677, -0.07645022, -0.54949903],
[ 0.13114645, 0.20664726, -0.0874402 , -0.2818491 ],
[ 0.1352794 , 0.01287431, -0.09307718, -0.0179748 ],
[ 0.13553688, -0.18079808, -0.09343667, 0.24395113],
[ 0.13192092, -0.37446988, -0.08855765, 0.50576115],
[ 0.12443152, -0.17821889, -0.07844243, 0.18653633],
[ 0.12086715, 0.01793264, -0.0747117 , -0.12982464],
[ 0.1212258 , -0.17604397, -0.0773082 , 0.13838378],
[ 0.11770492, 0.02009523, -0.07454053, -0.17765227],
[ 0.11810682, -0.17388523, -0.07809357, 0.09061581],
[ 0.11462912, 0.02226418, -0.07628125, -0.22564775],
[ 0.1150744 , -0.17168939, -0.08079421, 0.04203164],
[ 0.11164062, 0.02449259, -0.07995357, -0.27500907],
[ 0.11213046, -0.16940299, -0.08545376, -0.00857614],
[ 0.10874241, -0.36320207, -0.08562528, 0.2559689 ],
[ 0.10147836, -0.5570038 , -0.0805059 , 0.52046335],
[ 0.09033829, -0.3608463 , -0.07009663, 0.20353697],
[ 0.08312136, -0.55489945, -0.06602589, 0.47331032],
[ 0.07202338, -0.7490298 , -0.05655969, 0.7444739 ],
[ 0.05704278, -0.5531748 , -0.04167021, 0.43454146],
[ 0.04597928, -0.35748845, -0.03297938, 0.12901925],
[ 0.03882951, -0.16190998, -0.03039899, -0.17388314],
[ 0.03559131, 0.03363356, -0.03387666, -0.47599885],
[ 0.03626398, 0.22921707, -0.04339663, -0.77916366],
[ 0.04084833, 0.42490798, -0.05897991, -1.0851783 ],
[ 0.04934648, 0.6207563 , -0.08068347, -1.39577 ],
[ 0.06176161, 0.4267255 , -0.10859887, -1.1293658 ],
[ 0.07029612, 0.623089 , -0.13118619, -1.4540412 ],
[ 0.0827579 , 0.42979917, -0.16026701, -1.205056 ],
[ 0.09135389, 0.23706956, -0.18436813, -0.96658343],
[ 0.09609528, 0.04483784, -0.2036998 , -0.7370203 ],
[ 0.09699203, 0.24210311, -0.2184402 , -1.0862749 ]]],
dtype=float32)>,
'policy_info': (),
'reward': <tf.Tensor: shape=(1, 50), dtype=float32, numpy=
array([[1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
1., 0.]], dtype=float32)>,
'step_type': <tf.Tensor: shape=(1, 50), dtype=int32, numpy=
array([[0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
1, 1, 1, 1, 1, 2]], dtype=int32)>})
So when everything works correctly, feeding trajectories whose entries have shape (number of steps, batch size, value length) to the replay buffer produces a dataset in which each entry has shape (batch size, number of steps, value length).
However, in my version each entry of the dataset keeps its original shape, causing the error. Does anyone experienced with Reverb know why this might be happening?

I did a lot more digging into the TensorFlow backend. The problem is caused by the fact that the cartpole gym wrapper creates a non-batched Python environment, whereas the default is a batched environment. So when I run my code, an additional (batch) dimension is added to the trajectories before they are stored in the Reverb table. However, since I am using the same table signature, pulling an entry out of the table raises an exception because that signature conflicts with the actual shape of the stored entries.
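For anyone hitting the same mismatch, a minimal sketch of one way to reconcile the shapes is to strip the extra batch axis from each trajectory field before it is written to the table (the axis=1 squeeze below assumes the (steps, 1, ...) shapes printed above; adjusting the table signature to the batched data spec instead would work equally well):
import tensorflow as tf

def squeeze_batch_dim(trajectory):
    # Drop the singleton batch axis (axis 1) added by the batched environment,
    # so the stored tensors match a signature with shapes like [num_steps, ...].
    return tf.nest.map_structure(lambda t: tf.squeeze(t, axis=1), trajectory)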

Related

How to implement Multinomial conditional distributions depending on the conditional binary value in Tensorflow Probability?

I am trying to build a graphical model in TensorFlow Probability, where we first sample a number of positive (1) and negative (0) examples (count_i) from a Categorical distribution and then construct a Multinomial distribution (Y_i) depending on the value of count_i. These events (Y_i) are mutually exclusive:
Y_1 ~ Multinomial([0.9, 0.1, 0.05, 0.05, 0.1], total_count = tf.reduce_sum(tf.cast(count == 1, tf.float32)))
Y_2 ~ Multinomial([0.99, 0.01, 0., 0., 0.], total_count = tf.reduce_sum(tf.cast(count == 0, tf.float32)))
I have read these tutorials; however, I am stuck on two issues:
This code generates two arrays of length 500, whereas I only need one array of 500. What should I change so that only one sample is drawn from the Categorical distribution, and the Multinomial is then constructed from the overall count of the value we are conditioning on?
The sample from the Categorical distribution gives only values of 0, whereas it should be a mix of 0s and 1s. What am I doing wrong here?
My code is as follows. You can run it to replicate the behaviour:
import tensorflow as tf
from tensorflow_probability import distributions as tfd

def simplied_model():
    return tfd.JointDistributionSequential([
        tfd.Uniform(low=0., high=1., name='e'),  # e
        lambda e: tfd.Sample(tfd.Categorical(probs=tf.stack([e, 1. - e], 0)), sample_shape=500),  # count; should it be independent?
        lambda count: tfd.Multinomial(
            probs=tf.constant([[.9, 0.1, 0.05, 0.05, 0.1], [0.99, 0.01, 0., 0., 0.]]),
            total_count=tf.cast(tf.stack([tf.reduce_sum(tf.cast(count == 1, tf.float32)),
                                          tf.reduce_sum(tf.cast(count == 0, tf.float32))], 0), dtype=tf.float32))
    ])

tt = simplied_model()
tt.resolve_graph()
tt.sample(1)
The first array will be your Y_1 and the second will be your Y_2. The key is that your output will always be of shape (2, 5), because that is the shape of the probability table you are passing to tfd.Multinomial.
Code:
import tensorflow as tf
import tensorflow_probability as tfp
from tensorflow_probability import distributions as tfd

# helper function
def _get_counts(vec):
    zeros = tf.reduce_sum(tf.cast(vec == 0, tf.float32))
    ones = tf.reduce_sum(tf.cast(vec == 1, tf.float32))
    return tf.stack([ones, zeros], 0)

joint = tfd.JointDistributionSequential([
    tfd.Sample(  # sample from uniform to make it 2D
        tfd.Uniform(0., 1., name="e"), 1),
    lambda e: tfd.Sample(
        tfd.Categorical(probs=tf.stack([e, 1. - e], -1)), 500),
    lambda c: tfd.Multinomial(
        probs=[
            [0.9, 0.1, 0.05, 0.05, 0.1],
            [0.99, 0.01, 0., 0., 0.],
        ],
        total_count=_get_counts(c),
    )
])

joint.sample(5)  # or however many you want to sample
Output:
# [<tf.Tensor: shape=(5, 1), dtype=float32, numpy=
# array([[0.5611458 ],
# [0.48223293],
# [0.6097224 ],
# [0.94013655],
# [0.14861858]], dtype=float32)>,
# <tf.Tensor: shape=(5, 1, 500), dtype=int32, numpy=
# array([[[1, 0, 0, ..., 1, 0, 1]],
#
# [[1, 1, 1, ..., 1, 0, 0]],
#
# [[0, 0, 0, ..., 1, 0, 0]],
#
# [[0, 0, 0, ..., 0, 0, 0]],
#
# [[1, 0, 1, ..., 1, 0, 1]]], dtype=int32)>,
# <tf.Tensor: shape=(2, 5), dtype=float32, numpy=
# array([[ 968., 109., 0., 0., 0.],
# [1414., 9., 0., 0., 0.]], dtype=float32)>]
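As a quick sanity check on how to read a single draw from the joint distribution above (variable names are just illustrative):
e, counts, y = joint.sample()  # one draw: e ~ Uniform, counts ~ Categorical, y ~ Multinomial
y1, y2 = y[0], y[1]            # Y_1 and Y_2, each a length-5 vector of counts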

How to find adjacent lines on a regular 3D grid in python

I have the coordinates of a bunch of points and want to create surfaces out of them in a Python package. I want to arrange my data before importing it into the package. The points come from a regular grid. First, I create lines based on the locations of the points; in this step I just define which point numbers make up each line. My input data is:
coord = np.array(
[[0., 0., 2.], [0., 1., 3.], [0., 2., 2.], [1., 0., 1.], [1., 1., 3.],
[1., 2., 1.], [2., 0., 1.], [2., 1., 1.], [3., 0., 1.], [4., 0., 1.]])
The figure below shows the numbers of the grid points (gray) and the numbers of the lines (blue and red).
The lines are modeled through dictionaries, in which the key is the line number and the value is a tuple with the start and end point numbers:
In [906]: blue_line
Out[906]: {1: (1, 2), 2: (2, 3), 3: (4, 5), 4: (5, 6), 5: (7, 8)}
In [907]: red_line
Out[907]:
{6: (1, 4),
7: (2, 5),
8: (3, 6),
9: (4, 7),
10: (5, 8),
11: (7, 9),
12: (9, 10)}
To learn more about how the lines are generated, check out this thread. The lines that are used to create the surfaces are stored in a list:
surfaces = [(1, 6, 3, 7), (2, 7, 4, 8), (3, 9, 5, 10)]
As the last step, I want to find the numbers of the lines which are not used in creating the surfaces, or are used but are closer than a limit to the dashed line in the figure above. Again, I have the coordinates of the two points defining that dashed line:
coord_dash = [(2., 2., 2.), (5., 0., 1.)]
adjacency_threshold = 2
I want to have these adjacent lines as another list (shown by a red arrow in the figure):
adjacent_lines = [4, 10, 5, 11, 12]
I have only this rough idea and do not know how to code it in Python. I can only create the line numbers and surfaces, and I need help finding those nearby lines.
Determining what lines have not been used is straightforward (NumPy's setdiff1d comes in handy for this task):
In [924]: all_line = {**blue_line, **red_line}
In [925]: lines = list(all_line.keys())
In [926]: lines
Out[926]: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
In [927]: used_lines = np.ravel(surfaces)
In [928]: used_lines
Out[928]: array([ 1, 6, 3, 7, 2, 7, 4, 8, 3, 9, 5, 10])
In [929]: unused_lines = np.setdiff1d(lines, used_lines)
In [930]: unused_lines
Out[930]: array([11, 12])
The adjacent lines can be obtained with NumPy's linalg.norm, comparing the midpoint of each line against the midpoint of the dashed line:
In [954]: midpoints
Out[954]:
{1: array([0. , 0.5, 2.5]),
2: array([0. , 1.5, 2.5]),
3: array([1. , 0.5, 2. ]),
4: array([1. , 1.5, 2. ]),
5: array([2. , 0.5, 1. ]),
6: array([0.5, 0. , 1.5]),
7: array([0.5, 1. , 3. ]),
8: array([0.5, 2. , 1.5]),
9: array([1.5, 0. , 1. ]),
10: array([1.5, 1. , 2. ]),
11: array([2.5, 0. , 1. ]),
12: array([3.5, 0. , 1. ])}
In [955]: mid_dash = np.array(coord_dash).mean(axis=0)
In [956]: mid_dash
Out[956]: array([3.5, 1. , 1.5])
In [957]: adjacent_lines = []
...: for idx, point in midpoints.items():
...: if np.linalg.norm(point - mid_dash) < adjacency_threshold:
...: adjacent_lines.append(idx)
In [958]: adjacent_lines
Out[958]: [5, 11, 12]
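For completeness, the midpoints dictionary above can be built from coord and the line dictionaries along these lines (a sketch; it assumes the point numbers in blue_line/red_line are 1-based while coord is 0-indexed, which reproduces the values shown):
import numpy as np

# Midpoint of each line = mean of the coordinates of its two end points.
all_line = {**blue_line, **red_line}
midpoints = {line_id: coord[[p1 - 1, p2 - 1]].mean(axis=0)
             for line_id, (p1, p2) in all_line.items()}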

What is Tensorflow equivalent of pytorch's conv1d?

Just wondering how I can perform 1D convolution in TensorFlow. Specifically, I am looking to translate this code to TensorFlow:
inputs = F.pad(inputs, (kernel_size-1,0), 'constant', 0)
output = F.conv1d(inputs, weight, padding=0, groups=num_heads)
The TensorFlow equivalent of PyTorch's torch.nn.functional.conv1d() is tf.nn.conv1d(), and the equivalent of torch.nn.functional.pad() is tf.pad().
For Example:
(PyTorch code)
import torch
import torch.nn.functional as F

inputs = torch.tensor([1, 0, 2, 3, 0, 1, 1], dtype=torch.float32)
filters = torch.tensor([2, 1, 3], dtype=torch.float32)
inputs = inputs.unsqueeze(0).unsqueeze(0)    # torch.Size([1, 1, 7])
filters = filters.unsqueeze(0).unsqueeze(0)  # torch.Size([1, 1, 3])
conv_res = F.conv1d(inputs, filters, padding=0, groups=1)    # torch.Size([1, 1, 5])
pad_res = F.pad(conv_res, (1, 1), mode='constant', value=0)  # torch.Size([1, 1, 7])
output:
tensor([[[ 0., 8., 11., 7., 9., 4., 0.]]])
(Tensorflow code)
import tensorflow as tf
tf.enable_eager_execution()
i = tf.constant([1, 0, 2, 3, 0, 1, 1], dtype=tf.float32)
k = tf.constant([2, 1, 3], dtype=tf.float32, name='k')
data = tf.reshape(i, [1, int(i.shape[0]), 1], name='data')
kernel = tf.reshape(k, [int(k.shape[0]), 1, 1], name='kernel')
res = tf.nn.conv1d(data, kernel, 1, 'VALID')
res = tf.pad(res[0], [[1, 1], [0, 0]], "CONSTANT")
output:
<tf.Tensor: id=555, shape=(7, 1), dtype=float32, numpy=
array([[ 0.],
[ 8.],
[11.],
[ 7.],
[ 9.],
[ 4.],
[ 0.]], dtype=float32)>
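If the goal is the exact left-padding pattern from the question (pad by kernel_size - 1 on the left, then a 'VALID' convolution), a sketch building on the same data and kernel tensors would be:
kernel_size = 3
# Pad only the time axis (axis 1 in NWC layout) on the left, then convolve.
data_padded = tf.pad(data, [[0, 0], [kernel_size - 1, 0], [0, 0]], "CONSTANT")
causal_res = tf.nn.conv1d(data_padded, kernel, 1, 'VALID')  # shape (1, 7, 1)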

Get an index of a sorted matrix

I have a 2D np.array:
array([[ 1523., 172., 1613.],
[ 3216., 117., 1999.],
[ 85., 1271., 4.]])
I would like to extract the indices of this np.array, sorted by value.
The result should be (for example):
[[2,2],[2,0],[1,1],[0,1],[2,1],[0,0],[0,2],[1,2],[1,0]]
I already saw how to extract the min:
np.unravel_index(np.argmin(act),act.shape) #(2,2)
Thank you
Using numpy.argsort with axis=None (which sorts the flattened array):
>>> import numpy as np
>>>
>>> act = np.array([[ 1523., 172., 1613.],
... [ 3216., 117., 1999.],
... [ 85., 1271., 4.]])
>>> n = act.shape[1]
>>> list(zip(*divmod(np.argsort(act, axis=None), n)))
[(2, 2), (2, 0), (1, 1), (0, 1), (2, 1), (0, 0), (0, 2), (1, 2), (1, 0)]
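An equivalent way, reusing the np.unravel_index that the question already mentions (and which generalizes to any number of dimensions):
>>> list(zip(*np.unravel_index(np.argsort(act, axis=None), act.shape)))
[(2, 2), (2, 0), (1, 1), (0, 1), (2, 1), (0, 0), (0, 2), (1, 2), (1, 0)]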

Split NumPy array according to values in the array (a condition)

I have an array:
arr = [(1,1,1), (1,1,2), (1,1,3), (1,1,4)...(35,1,22),(35,1,23)]
I want to split my array according to the third value in each tuple, with each third value of 1 starting a new array. The results should be:
[(1,1,1), (1,1,2),...(1,1,35)][(1,2,1), (1,2,2),...(1,2,46)]
and so on. I know numpy.split should do the trick but I'm lost as to how to write the condition for the split.
Here's a quick idea, working with a 1d array. It can be easily extended to work with your 2d array:
In [385]: x=np.arange(10)
In [386]: I=np.where(x%3==0)
In [387]: I
Out[387]: (array([0, 3, 6, 9]),)
In [389]: np.split(x,I[0])
Out[389]:
[array([], dtype=float64),
array([0, 1, 2]),
array([3, 4, 5]),
array([6, 7, 8]),
array([9])]
The key is to use where to find the indices at which you want split to act.
For a 2d arr
First make a sample 2d array, with something interesting in the 3rd column:
In [390]: arr=np.ones((10,3))
In [391]: arr[:,2]=np.arange(10)
In [392]: arr
Out[392]:
array([[ 1., 1., 0.],
[ 1., 1., 1.],
...
[ 1., 1., 9.]])
Then use the same where and boolean to find indexes to split on:
In [393]: I=np.where(arr[:,2]%3==0)
In [395]: np.split(arr,I[0])
Out[395]:
[array([], dtype=float64),
array([[ 1., 1., 0.],
[ 1., 1., 1.],
[ 1., 1., 2.]]),
array([[ 1., 1., 3.],
[ 1., 1., 4.],
[ 1., 1., 5.]]),
array([[ 1., 1., 6.],
[ 1., 1., 7.],
[ 1., 1., 8.]]),
array([[ 1., 1., 9.]])]
I cannot think of any NumPy functions or tricks to do this. A simple solution using a for loop would be:
In [48]: arr = [(1,1,1), (1,1,2), (1,1,3), (1,1,4),(1,2,1),(1,2,2),(1,2,3),(1,3,1),(1,3,2),(1,3,3),(1,3,4),(1,3,5)]
In [49]: result = []
In [50]: for i in arr:
....: if i[2] == 1:
....: tempres = []
....: result.append(tempres)
....: tempres.append(i)
....:
In [51]: result
Out[51]:
[[(1, 1, 1), (1, 1, 2), (1, 1, 3), (1, 1, 4)],
[(1, 2, 1), (1, 2, 2), (1, 2, 3)],
[(1, 3, 1), (1, 3, 2), (1, 3, 3), (1, 3, 4), (1, 3, 5)]]
From looking at the documentation, it seems like specifying the indices to split on will work best. For your specific example, the following works if arr is already a 2-dimensional NumPy array:
np.split(arr, np.where(arr[:,2] == 1)[0])
arr[:,2] returns an array of the 3rd entry in each row: the colon says to take every row, and the 2 says to take the 3rd column, which is the 3rd component.
We then use np.where to return all the places where the 3rd coordinate is a 1. We have to do np.where()[0] to get at the array of locations directly.
We then plug in the indices we've found where the 3rd coordinate is 1 to np.split which splits at the desired locations.
Note that because the first entry has a 1 in the 3rd coordinate it will split before the first entry. This gives us one extra "split" array which is empty.
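If that leading empty array is unwanted, one small variation (a sketch, assuming arr is already a 2-dimensional NumPy array as above) is to drop a split index of 0 before calling np.split:
import numpy as np

idx = np.where(arr[:, 2] == 1)[0]
parts = np.split(arr, idx[idx > 0])  # skipping index 0 avoids the empty leading array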
