How can I quickly fancy-reorder a flattened "jagged" numpy array - python

So I have lots of data in a single, flat array that is grouped into irregularly sized chunks. The sizes of these chunks are given in another array. What I need to do is rearrange the chunks based on a third index array (think fancy indexing).
These chunks are always >= 3 long, usually 4, but technically unbounded, so it's not feasible to pad up to a max length and mask. Also, due to technical reasons I only have access to numpy, so nothing like scipy or pandas.
To keep this example easy to read, the data here is easily grouped. In the real data, the numbers can be anything and do not follow this pattern.
[EDIT] Updated with less confusing data
data = np.array([1,2,3,4, 11,12,13, 21,22,23,24, 31,32,33,34, 41,42,43, 51,52,53,54])
chunkSizes = np.array([4, 3, 4, 4, 3, 4])
newOrder = np.array([0, 5, 4, 5, 2, 1])
The expected output in this case would be
np.array([1,2,3,4, 51,52,53,54, 41,42,43, 51,52,53,54, 21,22,23,24, 11,12,13])
Since the real data can be millions long, I'm hoping for some kind of numpy magic that can do this without python loops.

Approach #1
Here's a vectorized one based on creating a regular array and masking -
def chunk_rearrange(data, chunkSizes, newOrder):
    # Mask of valid positions in a (num_chunks, max_chunk_len) regular array
    m = chunkSizes[:,None] > np.arange(chunkSizes.max())
    # Fill the valid slots row by row with the flat data
    d1 = np.empty(m.shape, dtype=data.dtype)
    d1[m] = data
    # Reorder the rows, then flatten back using the reordered mask
    return d1[newOrder][m[newOrder]]
Output for given sample -
In [4]: chunk_rearrange(data, chunkSizes, newOrder)
Out[4]:
array([ 1,  2,  3,  4, 51, 52, 53, 54, 41, 42, 43, 51, 52, 53, 54, 21, 22,
       23, 24, 11, 12, 13])
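To see what the masking does, here is a small sketch of the regular mask m the function builds for this sample (data and chunkSizes as defined in the question):
m = chunkSizes[:, None] > np.arange(chunkSizes.max())
# Each row i has chunkSizes[i] leading True values; the padding slots are False
print(m.astype(int))
# [[1 1 1 1]
#  [1 1 1 0]
#  [1 1 1 1]
#  [1 1 1 1]
#  [1 1 1 0]
#  [1 1 1 1]]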
Approach #2
Another vectorized one based on cumsum and with smaller footprint for those very-ragged chunksizes -
def chunk_rearrange_cumsum(data, chunkSizes, newOrder):
    # Setup ID array that will hold specific values at those interval starts,
    # such that a final cumsum would lead us to the indices which when indexed
    # by the input array gives us the re-arranged o/p
    idar = np.ones(len(data), dtype=int)

    # New chunk lengths
    newlens = chunkSizes[newOrder]

    # Original chunk intervals
    c = np.r_[0, chunkSizes[:-1].cumsum()]

    # Indices from original order that form the interval starts in new arrangement
    d1 = c[newOrder]

    # Starts of chunks in new arrangement where those from d1 are to be assigned
    c2 = np.r_[0, newlens[:-1].cumsum()]

    # Offset required for the starts in new arrangement for final cumsum to work
    diffs = np.diff(d1) + 1 - np.diff(c2)
    idar[c2[1:]] = diffs
    idar[0] = d1[0]

    # Final cumsum and indexing leads to desired new arrangement
    out = data[idar.cumsum()]
    return out
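Applied to the sample from the question, this reproduces the expected output (a quick check, with data, chunkSizes and newOrder as defined above):
out = chunk_rearrange_cumsum(data, chunkSizes, newOrder)
print(out)
# [ 1  2  3  4 51 52 53 54 41 42 43 51 52 53 54 21 22 23 24 11 12 13]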

You can use np.split to create views into your data array corresponding to the chunkSizes, if you build up the indices with np.cumsum. You can then reorder the views according to the newOrder indices using fancy indexing. This should be reasonably efficient since the data is only copied to the new array when you call np.concatenate on the reordered views:
import numpy as np
data = np.array([1,2,3,4, 11,12,13, 21,22,23,24, 31,32,33,34, 41,42,43, 51,52,53,54])
chunkSizes = np.array([4, 3, 4, 4, 3, 4])
newOrder = np.array([0, 5, 4, 5, 2, 1])
cumIndices = np.cumsum(chunkSizes)
# dtype=object keeps the ragged list of per-chunk views as a 1-D object array
# (newer NumPy versions refuse to build ragged arrays implicitly)
splitArray = np.array(np.split(data, cumIndices[:-1]), dtype=object)
targetArray = np.concatenate(splitArray[newOrder])
# >>> targetArray
# array([ 1,  2,  3,  4, 51, 52, 53, 54, 41, 42, 43, 51, 52, 53, 54, 21, 22,
#        23, 24, 11, 12, 13])

Related

Transform broadcasting into something calculable - matrix np.multiply

I'm trying to perform this type of calculation:
arr = np.arange(4)
# array([0, 1, 2, 3])
arr_t =arr.reshape((-1,1))
# array([[0],
# [1],
# [2],
# [3]])
mult_arr = np.multiply(arr,arr_t) # <<< the multiplication
# array([[0, 0, 0, 0],
# [0, 1, 2, 3],
# [0, 2, 4, 6],
# [0, 3, 6, 9]])
My goal is to eventually apply this to every row of a bigger matrix, and to sum all the matrices produced by the calculation:
arr = np.random.random((600,150))
arr_t =arr.reshape((-1,arr.shape[1],1))
mult = np.multiply(arr[:,None],arr_t)
summed = np.sum(mult,axis=0)
summed
Up to here it's all pure awesomeness; the problem starts when I try to move to a bigger dataset, for example this array instead:
arr = np.random.random((6000,1500))
I get the following error - MemoryError: Unable to allocate 101. GiB for an array with shape (6000, 1500, 1500) and data type float64
which makes sense, but my question is:
can I get around this somehow without being forced to use loops that slow the process down entirely?
My question is mainly about performance; any solution that takes longer than about 30 seconds to run is not an option.
Looks like you are simply trying to perform a dot product:
arr.T @ arr
or
arr.T.dot(arr)
Checking that this is what you want:
arr = np.random.random((600,150))
arr_t =arr.reshape((-1,arr.shape[1],1))
mult = np.multiply(arr[:,None],arr_t)
summed = np.sum(mult,axis=0)
np.allclose(arr.T @ arr, summed)
# True
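For the larger case from the question, the same dot product avoids the huge broadcasted intermediate entirely; a rough sketch (shapes taken from the question):
arr = np.random.random((6000, 1500))
# Computes the sum of the per-row outer products as a single (1500, 1500)
# matrix product, instead of materialising a 101 GiB (6000, 1500, 1500) array
summed = arr.T @ arr
print(summed.shape)  # (1500, 1500)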

Use information of two arrays to create a third one

I have two numpy arrays and want to create a third one from the information in these two.
Here is a simple example:
have = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
use = np.array([[2], [3]])
solution = np.array([[1, 1, 3, 4], [5, 5, 5, 8]])
What I want is to use the "use" array, which tells me how many times the first element of each row of my "have" array should be repeated at the start of that row.
So the 2 in "use" means that I want two 1s in my new array "solution". Similarly, for the 3 in "use", I want my new array to have three 5s. The rest of "have" should stay the same.
It is important to use the "use" array for doing this (or a numpy array in general).
Do you have some ideas?
If the data structures are small and performance is not an issue, you can do it as simply as:
np.array([ [a[0]]*b[0]+list(a[b[0]:]) for a,b in zip(have,use)])
Simply iterate through have and replace the values based on use.
Use:
for i in range(use.shape[0]):
    have[i, :use[i, 0]] = np.repeat(have[i, 0], use[i, 0])
Using only numpy operations:
First create a boolean mask of the same size as have. mask[i, j] is True if j < use[i, 0], otherwise it is False. So mask is True exactly at the indices that are to be replaced by the first column value. Now use np.where to replace.
n, m = have.shape
mask = np.repeat(np.arange(m)[None, :], n, axis = 0) < use
have = np.where(mask, have[:, 0:1], have)
Output:
>>> have
array([[1, 1, 3, 4],
[5, 5, 5, 8]])
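For the sample have and use, the mask built above works out to:
print(mask)
# [[ True  True False False]
#  [ True  True  True False]]
# Row 0 repeats its first value twice, row 1 three times, matching use.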
If performance matters, you can use np.apply_along_axis().
import numpy as np
have = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
use = np.array([[2], [3]])
def rep1st(arr):
    rep = arr[0]
    res = np.repeat(arr[1], rep)
    res = np.concatenate([res, arr[rep+1:]])
    return res
solution = np.apply_along_axis(rep1st, 1, np.concatenate([use, have], axis=1))
Update:
As #hpaulj said, the apply_along_axis method above is actually not as efficient as I expected; I had misunderstood it. Reference: numpy np.apply_along_axis function speed up?.
However, I ran some tests on the current methods:
import numpy as np
from timeit import timeit
def rep1st(arr):
    rep = arr[0]
    res = np.repeat(arr[1], rep)
    res = np.concatenate([res, arr[rep + 1:]])
    return res

def test(row, col, run):
    have = np.random.randint(0, 100, size=(row, col))
    use = np.random.randint(0, col, size=(row, 1))
    d = locals()
    d.update(globals())
    # method by me
    t1 = timeit("np.apply_along_axis(rep1st, 1, np.concatenate([use, have], axis=1))", number=run, globals=d)
    # method by #quantummind
    t2 = timeit("np.array([[a[0]] * b[0] + list(a[b[0]:]) for a, b in zip(have, use)])", number=run, globals=d)
    # method by #Amit Vikram Singh
    t3 = timeit(
        "np.where(np.repeat(np.arange(have.shape[1])[None, :], have.shape[0], axis=0) < use, have[:, 0:1], have)",
        number=run, globals=d
    )
    print(f"{t1:8.6f}, {t2:8.6f}, {t3:8.6f}")
test(1000, 10, 10)
test(100, 100, 10)
test(10, 1000, 10)
test(1000000, 10, 1)
test(100000, 100, 1)
test(10000, 1000, 1)
test(1000, 10000, 1)
test(100, 100000, 1)
test(10, 1000000, 1)
results (t1: apply_along_axis, t2: list comprehension, t3: np.where):
0.062488, 0.028484, 0.000408
0.010787, 0.013811, 0.000270
0.001057, 0.009146, 0.000216
6.146863, 3.210017, 0.044232
0.585289, 1.186013, 0.034110
0.091086, 0.961570, 0.026294
0.039448, 0.917052, 0.022553
0.028719, 0.919377, 0.022751
0.035121, 1.027036, 0.025216
It shows that the np.where method proposed by #Amit Vikram Singh (the t3 column) consistently performs well, even when the arrays are huge.

Numpy reshape - automatic filling or removal

I would like to find a reshape function that can transform my arrays of different shapes into arrays of the same shape. Let me explain:
import numpy as np
a = np.array([[[1,2,3,3],[1,2,3,3]],[[1,2,3,3],[1,2,3,3]]])
b = np.array([[[1,2,3,3],[1,2,3,3]],[[1,2,3,3],[1,2,3,3]],[[1,2,3,3],[1,2,3,4]]])
c = np.array([[[1,2,3,3],[1,2,3,3]]])
I would like to be able to make the shapes of b and c equal to the shape of a. However, np.reshape throws an error because, as explained here (Numpy resize or Numpy reshape), the function requires the total number of elements to stay the same.
I would like some version of that function that adds zeros at the start of the first dimension if the shape is smaller, or removes elements from the start if the shape is bigger. For my example it would look like this:
b = np.array([[[1,2,3,3],[1,2,3,3]],[[1,2,3,3],[1,2,3,4]]])
c = np.array([[[0,0,0,0],[0,0,0,0]],[[1,2,3,3],[1,2,3,3]]])
Do I need to write my own function to do that?
This is similar to the other solution here, but it will also work if the lower dimensions don't match:
def custom_reshape(a, b):
    result = np.zeros_like(a).ravel()
    result[-min(a.size, b.size):] = b.ravel()[-min(a.size, b.size):]
    return result.reshape(a.shape)
custom_reshape(a,b)
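For the sample arrays in the question, this reproduces the desired b and c (a quick check, using a, b, c as defined above):
custom_reshape(a, b)
# array([[[1, 2, 3, 3],
#         [1, 2, 3, 3]],
#        [[1, 2, 3, 3],
#         [1, 2, 3, 4]]])
custom_reshape(a, c)
# array([[[0, 0, 0, 0],
#         [0, 0, 0, 0]],
#        [[1, 2, 3, 3],
#         [1, 2, 3, 3]]])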
I would write a function like this:
def align(a, b):
    out = np.zeros_like(a)
    x = min(a.shape[0], b.shape[0])
    out[-x:] = b[-x:]
    return out
Output:
align(a,b)
# array([[[1, 2, 3, 3],
# [1, 2, 3, 3]],
# [[1, 2, 3, 3],
# [1, 2, 3, 4]]])
align(a,c)
# array([[[0, 0, 0, 0],
# [0, 0, 0, 0]],
# [[1, 2, 3, 3],
# [1, 2, 3, 3]]])

In TensorFlow, how can I assert the values of a list are in certain set?

I have a one-dimensional tf.uint8 tensor x and want to assert that all values inside that tensor are in a set s that I define. s is fixed at graph definition time, so it's not a dynamically computed tensor.
In plain Python, I want to do something like the following:
x = [1, 2, 3, 1, 11, 3, 5]
s = {1, 2, 3, 11, 12, 13}
assert all(el in s for el in x), "This should fail, as 5 is not in s"
I know that I can use tf.Assert for the assertion part, but I'm struggling with defining the condition part (el in s). What's the simplest/most canonical way to do this?
The older answer Determining if A Value is in a Set in TensorFlow is not sufficient for me: first of all, it's complex to write down and understand, and second, it uses a broadcasted tf.equal, which is computationally more expensive than a proper set-based check.
A simple way could be something like this:
import tensorflow as tf
x = [1, 2, 3, 1, 11, 3, 5]
s = {1, 2, 3, 11, 12, 13}
x_t = tf.constant(x, dtype=tf.uint8)
s_t = tf.constant(list(s), dtype=tf.uint8)
# Check every value in x against every value in s
xs_eq = tf.equal(x_t[:, tf.newaxis], s_t)
# Check every element in x is equal to at least one element in s
assert_op = tf.Assert(tf.reduce_all(tf.reduce_any(xs_eq, axis=1)), [x_t])
with tf.control_dependencies([assert_op]):
    ...  # Use x_t...
This creates an intermediate tensor with size (len(x), len(s)). If that is problematic, you could also split the problem into independent tensors, for example like this:
import tensorflow as tf
x = [1, 2, 3, 1, 11, 3, 5]
s = {1, 2, 3, 11, 12, 13}
x_t = tf.constant(x, dtype=tf.uint8)
# Count where each x matches each s
x_in_s = [tf.cast(tf.equal(x_t, si), tf.int32) for si in s]
# Add matches and check there is at least one match per x
assert_op = tf.Assert(tf.reduce_all(tf.add_n(x_in_s) > 0), [x_t])
EDIT:
Actually, since you said your values are tf.uint8, you can make things even better with boolean arrays:
import tensorflow as tf
x = [1, 2, 3, 1, 11, 3, 5]
s = {1, 2, 3, 11, 12, 13}
x_t = tf.constant(x, dtype=tf.uint8)
s_t = tf.constant(list(s), dtype=tf.uint8)
# One-hot vectors of values included in x and s
x_bool = tf.scatter_nd(tf.cast(x_t[:, tf.newaxis], tf.int32),
                       tf.ones_like(x_t, dtype=tf.bool), [256])
s_bool = tf.scatter_nd(tf.cast(s_t[:, tf.newaxis], tf.int32),
                       tf.ones_like(s_t, dtype=tf.bool), [256])
# Check that all values in x are in s
assert_op = tf.Assert(tf.reduce_all(tf.equal(x_bool, x_bool & s_bool)), [x_t])
This takes linear time and constant memory.
EDIT 2: While the last method is theoretically the best in this case, a couple of quick benchmarks only show a significant performance difference once I go up to hundreds of thousands of elements, and in any case all three are still quite fast with tf.uint8.
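A minimal usage sketch, assuming TF 1.x graph mode as implied by tf.Assert and tf.control_dependencies above: evaluating any of the assert_op variants should raise, since 5 is not in s.
with tf.Session() as sess:
    try:
        sess.run(assert_op)
    except tf.errors.InvalidArgumentError as err:
        # The assert data ([x_t]) is included in the error message
        print("Assertion failed:", err.message)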

Binning a numpy array

I have a numpy array which contains time series data. I want to bin that array into equal partitions of a given length (it is fine to drop the last partition if it is not the same size) and then calculate the mean of each of those bins.
I suspect there is numpy, scipy, or pandas functionality to do this.
example:
data = [4,2,5,6,7,5,4,3,5,7]
for a bin size of 2:
bin_data = [(4,2),(5,6),(7,5),(4,3),(5,7)]
bin_data_mean = [3,5.5,6,3.5,6]
for a bin size of 3:
bin_data = [(4,2,5),(6,7,5),(4,3,5)]
bin_data_mean = [3.67,6,4]
Just use reshape and then mean(axis=1).
As the simplest possible example:
import numpy as np
data = np.array([4,2,5,6,7,5,4,3,5,7])
print(data.reshape(-1, 2).mean(axis=1))
More generally, we'd need to do something like this to drop the last bin when it's not an even multiple:
import numpy as np
width = 3
data = np.array([4,2,5,6,7,5,4,3,5,7])
result = data[:(data.size // width) * width].reshape(-1, width).mean(axis=1)
print(result)
Since you already have a numpy array, to avoid for loops, you can use reshape and treat each row of the reshaped array as a bin:
In [33]: data.reshape(-1, 2)
Out[33]:
array([[4, 2],
       [5, 6],
       [7, 5],
       [4, 3],
       [5, 7]])
In [34]: data.reshape(-1, 2).mean(axis=1)
Out[34]: array([3. , 5.5, 6. , 3.5, 6. ])
Actually this will just work if the size of data is divisible by n. I'll edit a fix.
Looks like Joe Kington has an answer that handles that.
Try this, using standard Python (NumPy isn't necessary for this). Assuming Python 2.x is in use:
data = [ 4, 2, 5, 6, 7, 5, 4, 3, 5, 7 ]
# example: for n == 2
n=2
partitions = [data[i:i+n] for i in xrange(0, len(data), n)]
partitions = partitions if len(partitions[-1]) == n else partitions[:-1]
# the above produces a list of lists
partitions
=> [[4, 2], [5, 6], [7, 5], [4, 3], [5, 7]]
# now the mean
[sum(x)/float(n) for x in partitions]
=> [3.0, 5.5, 6.0, 3.5, 6.0]
I just wrote a function that applies to any array size or dimension you want.
data is your array
axis is the axis you want to bin
binstep is the number of points between each bin (allowing overlapping bins)
binsize is the size of each bin
func is the function you want to apply to the bin (np.max for max pooling, np.mean for an average, ...)
def binArray(data, axis, binstep, binsize, func=np.nanmean):
    data = np.array(data)
    dims = np.array(data.shape)
    argdims = np.arange(data.ndim)
    argdims[0], argdims[axis] = argdims[axis], argdims[0]
    data = data.transpose(argdims)
    data = [func(np.take(data, np.arange(int(i*binstep), int(i*binstep+binsize)), 0), 0)
            for i in np.arange(dims[axis] // binstep)]
    data = np.array(data).transpose(argdims)
    return data
In your case it would be:
data = [4,2,5,6,7,5,4,3,5,7]
bin_data_mean = binArray(data, 0, 2, 2, np.mean)
or for the bin size of 3:
bin_data_mean = binArray(data, 0, 3, 3, np.mean)
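For the sample data, both calls reproduce the means from the question (a quick check of binArray as defined above):
print(binArray(data, 0, 2, 2, np.mean))  # [3.  5.5 6.  3.5 6. ]
print(binArray(data, 0, 3, 3, np.mean))  # [3.66666667 6.         4.        ]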
