Remove small values from matrix - python

I have a (n, m) tensor X where I want to zero out all values smaller than some threshold t. I.e.,
X = X * tf.cast(tf.greater(X, t), X.dtype)
I was wondering, is there a more efficient way to do this? X in my setup is huge and, as I understand it, tf.cast(tf.greater(X, t), X.dtype) constructs another tensor that needs as much memory as X.

What is wrong with the good old
for i in range(n):
    for j in range(m):
        if X[i][j] < t:
            X[i][j] = 0

I am not sure if this will be more efficient:
x = tf.constant([1, 2, 3, 4, 5, 6, 7])
y = tf.where(tf.greater(x, tf.constant(5)),
             x,                 # if true
             tf.zeros_like(x))  # if false

with tf.Session() as sess:
    a = sess.run(y)
    # a is [0, 0, 0, 0, 0, 6, 7]

If X is your matrix (a NumPy array, I assume) you can try:
X[X < small_value] = 0
If creating the boolean mask takes too much memory, you can do it column by column in a loop.
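A minimal sketch of that column-by-column idea, assuming X is a 2-D NumPy array and t is the threshold (the function name is made up):
import numpy as np

def threshold_by_column(X, t):
    """Zero out entries of X smaller than t, one column at a time.

    Only an (n,) boolean mask per column is materialised, instead of
    an (n, m) mask for the whole matrix.
    """
    for j in range(X.shape[1]):
        col = X[:, j]      # a view, no copy
        col[col < t] = 0   # in-place assignment through the view
    return X

X = np.array([[0.1, 2.0], [3.0, 0.4]])
print(threshold_by_column(X, 1.0))
# [[0. 2.]
#  [3. 0.]]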

foo = tf.constant([1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
threshold_map = tf.greater(foo, tf.constant(5.))
threshold_map_index = tf.reshape(tf.where(threshold_map), [-1])
foo_threshold = tf.gather(foo, threshold_map_index)
# foo_threshold = [6., 7., 8., 9., 10.]
(Note that this won't work with more than one dimension.)

Related

What does -1 mean in pytorch view?

As the question says, what does -1 do in pytorch view?
>>> a = torch.arange(1, 17)
>>> a
tensor([ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.,
        11., 12., 13., 14., 15., 16.])
>>> a.view(1, -1)
tensor([[ 1.,  2.,  3.,  4.,  5.,  6.,  7.,  8.,  9., 10.,
         11., 12., 13., 14., 15., 16.]])
>>> a.view(-1,1)
tensor([[ 1.],
[ 2.],
[ 3.],
[ 4.],
[ 5.],
[ 6.],
[ 7.],
[ 8.],
[ 9.],
[ 10.],
[ 11.],
[ 12.],
[ 13.],
[ 14.],
[ 15.],
[ 16.]])
Does it (-1) generate an additional dimension?
Does it behave the same as numpy reshape -1?
Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.
For instance:
import torch
x = torch.arange(6)
print(x.view(3, -1)) # inferred size will be 2 as 6 / 3 = 2
# tensor([[ 0., 1.],
# [ 2., 3.],
# [ 4., 5.]])
print(x.view(-1, 6)) # inferred size will be 1 as 6 / 6 = 1
# tensor([[ 0., 1., 2., 3., 4., 5.]])
print(x.view(1, -1, 2)) # inferred size will be 3 as 6 / (1 * 2) = 3
# tensor([[[ 0., 1.],
# [ 2., 3.],
# [ 4., 5.]]])
# print(x.view(-1, 5)) # throw error as there's no int N so that 5 * N = 6
# RuntimeError: invalid argument 2: size '[-1 x 5]' is invalid for input with 6 elements
print(x.view(-1, -1, 3)) # throw error as only one dimension can be inferred
# RuntimeError: invalid argument 1: only one dimension can be inferred
I love the answer that Benjamin gives https://stackoverflow.com/a/50793899/1601580
Yes, it does behave like -1 in numpy.reshape(), i.e. the actual value for this dimension will be inferred so that the number of elements in the view matches the original number of elements.
but I think the weird edge case that might not be intuitive (at least it wasn't for me) is calling it with a single -1, i.e. tensor.view(-1).
My guess is that it works exactly the same way as always, except that since you are giving a single number to view, it assumes you want a single dimension. If you had tensor.view(-1, Dnew) it would produce a tensor of two dimensions/indices, but would make sure the first dimension is of the correct size according to the original dimensions of the tensor. Say you had (D1, D2) and Dnew = D1*D2; then the new first dimension would be 1.
For real examples with code you can run:
import torch
x = torch.randn(1, 5)
x = x.view(-1)
print(x.size())
x = torch.randn(2, 4)
x = x.view(-1, 8)
print(x.size())
x = torch.randn(2, 4)
x = x.view(-1)
print(x.size())
x = torch.randn(2, 4, 3)
x = x.view(-1)
print(x.size())
output:
torch.Size([5])
torch.Size([1, 8])
torch.Size([8])
torch.Size([24])
History/Context
A good example (a common case early on in PyTorch, before the flatten layer was officially added) was this common code:
class Flatten(nn.Module):
    def forward(self, input):
        # input.size(0) usually denotes the batch size, so we want to keep that
        return input.view(input.size(0), -1)
used inside nn.Sequential, as sketched below. In this view, x.view(-1) is a weird flatten layer but missing the squeeze (i.e. adding a dimension of 1). Adding or removing this squeeze is usually important for the code to actually run.
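A minimal usage sketch of that Flatten module inside nn.Sequential; the layer sizes and input shape are made up for illustration:
import torch
import torch.nn as nn

class Flatten(nn.Module):
    def forward(self, input):
        # keep the batch dimension, flatten everything else
        return input.view(input.size(0), -1)

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3),  # (N, 8, 26, 26) for a 28x28 input
    nn.ReLU(),
    Flatten(),                       # (N, 8*26*26)
    nn.Linear(8 * 26 * 26, 10),
)

x = torch.randn(4, 1, 28, 28)        # a batch of 4 fake images
print(model(x).shape)                # torch.Size([4, 10])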
Example2
If you are wondering what x.view(-1) does, it flattens the tensor. Why? Because it has to construct a new view with only one dimension and infer its size -- so it flattens it. In addition, this operation avoids the very nasty bugs that .resize() brings, since the order of the elements is preserved. FYI, PyTorch now has a dedicated op for flattening: https://pytorch.org/docs/stable/generated/torch.flatten.html
#%%
"""
Summary: view(-1, ...) keeps the remaining dimensions as given and infers the size at the -1 location so that the total number
of elements of the original tensor is respected. If it's only .view(-1), then the single dimension has to hold all the elements, so it ends
up flattening the tensor.
ref: my answer https://stackoverflow.com/a/66500823/1601580
"""
import torch
x = torch.arange(6)
print(x)
x = x.reshape(3, 2)
print(x)
print(x.view(-1))
output
tensor([0, 1, 2, 3, 4, 5])
tensor([[0, 1],
[2, 3],
[4, 5]])
tensor([0, 1, 2, 3, 4, 5])
see the original tensor is returned!
I guess this works similarly to np.reshape:
The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.
If you have a = torch.arange(1, 19) (18 elements) you can view it in various ways, like a.view(-1, 6), a.view(-1, 9), a.view(3, -1), etc.
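A quick check of those shapes (a minimal sketch, assuming the 18-element tensor above):
import torch

a = torch.arange(1, 19)        # 18 elements
print(a.view(-1, 6).shape)     # torch.Size([3, 6])
print(a.view(-1, 9).shape)     # torch.Size([2, 9])
print(a.view(3, -1).shape)     # torch.Size([3, 6])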
From the PyTorch documentation:
>>> x = torch.randn(4, 4)
>>> x.size()
torch.Size([4, 4])
>>> y = x.view(16)
>>> y.size()
torch.Size([16])
>>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions
>>> z.size()
torch.Size([2, 8])
-1 is inferred to be 2, for instance, if you have
>>> a = torch.rand(4, 4)
>>> a.size()
torch.Size([4, 4])
>>> y = a.view(16)
>>> y.size()
torch.Size([16])
>>> z = a.view(-1, 8)  # -1 is inferred as 2, i.e. (2, 8)
>>> z.size()
torch.Size([2, 8])
-1 is a PyTorch alias for "infer this dimension given the others have all been specified" (i.e. the quotient of the original product by the new product). It is a convention taken from numpy.reshape().
Hence t.view(1,16) in the example would be equivalent to t.view(1,-1) or t.view(-1,16).
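A short sketch verifying that equivalence with the 16-element tensor from the question:
import torch

t = torch.arange(1, 17)                            # 16 elements
print(torch.equal(t.view(1, 16), t.view(1, -1)))   # True
print(torch.equal(t.view(1, 16), t.view(-1, 16)))  # True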

How to make sum with variable finite number of elements?

This is one of the first things I am trying to code in Python (or any programming language), and my first question here, so I hope I provide everything necessary for you to help me.
I have an upper triangular matrix and I need to solve the system of equations Wx = y, where W (a 3x3 matrix) and y (a vector) are given. I cannot use numpy.linalg functions, so I am trying to implement this myself -- by back substitution, of course.
After several failed attempts, I limited my task to a 3x3 matrix. Without a loop, the code looks like this:
x[0,2]=y[2]/W[2,2]
x[0,1]=(y[1]-W[1,2]*x[0,2])/W[1,1]
x[0,0]=(y[0]-W[0,2]*x[0,2]-W[0,1]*x[0,1])/W[0,0]
Now, every new sum contains more elements; they follow a pattern, but nevertheless need to be defined somehow. I suppose there must be a sum function in numpy (but not in linalg) which does such things, but I cannot find it.
My newest, partial "attempt" begins with something like this:
n=3
for k in range(n):
    for i in range(n-k-1):
        x[0,n-k-1]=y[n-k-1]/W[n-k-1,n-k-1]
Which, of course, contains only first element of each sum.
I would be thankful for any assistance.
Example I am working on:
y=np.array([ 0.80064077, 2.64300842, -0.74912957])
W=np.array([[6.244998,2.88230677,-5.44435723],[0.,2.94827198,2.26990852],[0.,0.,0.45441135]])
n=W.shape[1]
x=np.zeros((1,n), dtype=float)
Proper solution should look like:
[-2.30857143 2.16571429 -1.64857143]
Here's one approach that handles a generic n with one loop -
def one_loop(y, W, n):
    out = np.zeros((1,n))
    for i in range(n-1,-1,-1):
        sums = (W[i,i+1:]*out[0,i+1:]).sum()
        out[0,i] = (y[i] - sums)/W[i,i]
    return out
For performance, we can replace that sum-reduction step with a dot-product. Thus, sums could be alternatively computed like so -
sums = W[i,i+1:].dot(out[0,i+1:])
Sample runs
1) n = 3 :
In [149]: y
Out[149]: array([ 5., 8., 7.])
In [150]: W
Out[150]:
array([[ 6., 6., 2.],
[ 3., 3., 3.],
[ 4., 8., 5.]])
In [151]: x = np.zeros((1,3))
...: x[0,2]=y[2]/W[2,2]
...: x[0,1]=(y[1]-W[1,2]*x[0,2])/W[1,1]
...: x[0,0]=(y[0]-W[0,2]*x[0,2]-W[0,1]*x[0,1])/W[0,0]
...:
In [152]: x
Out[152]: array([[-0.9 , 1.26666667, 1.4 ]])
In [154]: one_loop(y, W, n=3)
Out[154]: array([[-0.9 , 1.26666667, 1.4 ]])
2) n = 4 :
In [156]: y
Out[156]: array([ 5., 8., 7., 6.])
In [157]: W
Out[157]:
array([[ 6., 2., 3., 3.],
[ 3., 4., 8., 5.],
[ 8., 6., 6., 4.],
[ 8., 4., 2., 2.]])
In [158]: x = np.zeros((1,4))
...: x[0,3]=y[3]/W[3,3]
...: x[0,2]=(y[2]-W[2,3]*x[0,3])/W[2,2]
...: x[0,1]=(y[1]-W[1,3]*x[0,3]-W[1,2]*x[0,2])/W[1,1]
...: x[0,0]=(y[0]-W[0,3]*x[0,3]-W[0,2]*x[0,2]-W[0,1]*x[0,1])/W[0,0]
...:
In [159]: x
Out[159]: array([[-0.22222222, -0.08333333, -0.83333333, 3. ]])
In [160]: one_loop(y, W, n=4)
Out[160]: array([[-0.22222222, -0.08333333, -0.83333333, 3. ]])
One more take (now updated to the state-of-the-art provided by Divakar in another answer):
import numpy as np
y=np.array([ 0.80064077, 2.64300842, -0.74912957])
W=np.array([[6.244998,2.88230677,-5.44435723],[0.,2.94827198,2.26990852],[0.,0.,0.45441135]])
n=W.shape[1]
x=np.zeros((1,n), dtype=float)
for i in range(n-1, -1, -1):
    x[0,i] = (y[i]-W[i,i+1:].dot(x[0,i+1:]))/W[i,i]
print(x)
gives:
[[-2.30857143 2.16571429 -1.64857143]]
My take
n=3
for k in range(n):
    print("s=y[%d]" % (n-k-1))
    s = y[n-k-1]
    for i in range(0,k):
        print("s - W[%d,%d]*x[0,%d]" % (n-k-1, n-i-1, n-i-1))
        s = s - W[n-k-1,n-i-1]*x[0,n-i-1]
    print("x[0,%d] = s/W[%d,%d]" % (n-k-1,n-k-1,n-k-1))
    x[0,n-k-1] = s/W[n-k-1,n-k-1]
print(x)
and without print statements
n=3
for k in range(n):
    s = y[n-k-1]
    for i in range(0,k):
        s = s - W[n-k-1,n-i-1]*x[0,n-i-1]
    x[0,n-k-1] = s/W[n-k-1,n-k-1]
print(x)
Output
s=y[2]
x[0,2] = s/W[2,2]
s=y[1]
s - W[1,2]*x[0,2]
x[0,1] = s/W[1,1]
s=y[0]
s - W[0,2]*x[0,2]
s - W[0,1]*x[0,1]
x[0,0] = s/W[0,0]
[[-2.30857143 2.16571429 -1.64857143]]

'numpy.float64' object is not iterable

I'm trying to iterate an array of values generated with numpy.linspace:
slX = numpy.linspace(obsvX, flightX, numSPts)
slY = np.linspace(obsvY, flightY, numSPts)
for index,point in slX:
    yPoint = slY[index]
    arcpy.AddMessage(yPoint)
This code worked fine on my office computer, but I sat down this morning to work from home on a different machine and this error came up:
File "C:\temp\gssm_arcpy.1.0.3.py", line 147, in AnalyzeSightLine
for index,point in slX:
TypeError: 'numpy.float64' object is not iterable
slX is just an array of floats, and the script has no problem printing the contents -- it just apparently cannot iterate through them this way. Any suggestions for what is causing it to break, and possible fixes?
numpy.linspace() gives you a one-dimensional NumPy array. For example:
>>> my_array = numpy.linspace(1, 10, 10)
>>> my_array
array([ 1., 2., 3., 4., 5., 6., 7., 8., 9., 10.])
Therefore:
for index, point in my_array:
cannot work. You would need some kind of two-dimensional array with two elements in the second dimension:
>>> two_d = numpy.array([[1, 2], [4, 5]])
>>> two_d
array([[1, 2], [4, 5]])
Now you can do this:
>>> for x, y in two_d:
...     print(x, y)
...
1 2
4 5
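For the original loop, the likely fix is to use enumerate so the index/value unpacking works; a minimal sketch with made-up endpoint values:
import numpy as np

obsvX, flightX, obsvY, flightY, numSPts = 0.0, 10.0, 0.0, 5.0, 6  # illustrative values
slX = np.linspace(obsvX, flightX, numSPts)
slY = np.linspace(obsvY, flightY, numSPts)

# enumerate yields (index, value) pairs, so the two-name unpacking works
for index, point in enumerate(slX):
    yPoint = slY[index]
    print(point, yPoint)  # arcpy.AddMessage(yPoint) in the original script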

Better way to shuffle two numpy arrays in unison

I have two numpy arrays of different shapes, but with the same length (leading dimension). I want to shuffle each of them, such that corresponding elements continue to correspond -- i.e. shuffle them in unison with respect to their leading indices.
This code works, and illustrates my goals:
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    shuffled_a = numpy.empty(a.shape, dtype=a.dtype)
    shuffled_b = numpy.empty(b.shape, dtype=b.dtype)
    permutation = numpy.random.permutation(len(a))
    for old_index, new_index in enumerate(permutation):
        shuffled_a[new_index] = a[old_index]
        shuffled_b[new_index] = b[old_index]
    return shuffled_a, shuffled_b
For example:
>>> a = numpy.asarray([[1, 1], [2, 2], [3, 3]])
>>> b = numpy.asarray([1, 2, 3])
>>> shuffle_in_unison(a, b)
(array([[2, 2],
[1, 1],
[3, 3]]), array([2, 1, 3]))
However, this feels clunky, inefficient, and slow, and it requires making a copy of the arrays -- I'd rather shuffle them in-place, since they'll be quite large.
Is there a better way to go about this? Faster execution and lower memory usage are my primary goals, but elegant code would be nice, too.
One other thought I had was this:
def shuffle_in_unison_scary(a, b):
    rng_state = numpy.random.get_state()
    numpy.random.shuffle(a)
    numpy.random.set_state(rng_state)
    numpy.random.shuffle(b)
This works...but it's a little scary, as I see little guarantee it'll continue to work -- it doesn't look like the sort of thing that's guaranteed to survive across numpy versions, for example.
You can use NumPy's array indexing:
def unison_shuffled_copies(a, b):
    assert len(a) == len(b)
    p = numpy.random.permutation(len(a))
    return a[p], b[p]
This will create separate, unison-shuffled copies of the arrays.
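A quick usage sketch with the arrays from the question (numpy imported as np here):
import numpy as np

def unison_shuffled_copies(a, b):
    assert len(a) == len(b)
    p = np.random.permutation(len(a))
    return a[p], b[p]

a = np.asarray([[1, 1], [2, 2], [3, 3]])
b = np.asarray([1, 2, 3])
a2, b2 = unison_shuffled_copies(a, b)
print(a2)   # rows of a in some random order
print(b2)   # b reordered with the same permutation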
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y, random_state=0)
To learn more, see http://scikit-learn.org/stable/modules/generated/sklearn.utils.shuffle.html
Your "scary" solution does not appear scary to me. Calling shuffle() for two sequences of the same length results in the same number of calls to the random number generator, and these are the only "random" elements in the shuffle algorithm. By resetting the state, you ensure that the calls to the random number generator will give the same results in the second call to shuffle(), so the whole algorithm will generate the same permutation.
If you don't like this, a different solution would be to store your data in one array instead of two right from the beginning, and create two views into this single array simulating the two arrays you have now. You can use the single array for shuffling and the views for all other purposes.
Example: Let's assume the arrays a and b look like this:
a = numpy.array([[[ 0.,  1.,  2.],
                  [ 3.,  4.,  5.]],
                 [[ 6.,  7.,  8.],
                  [ 9., 10., 11.]],
                 [[12., 13., 14.],
                  [15., 16., 17.]]])
b = numpy.array([[0., 1.],
                 [2., 3.],
                 [4., 5.]])
We can now construct a single array containing all the data:
c = numpy.c_[a.reshape(len(a), -1), b.reshape(len(b), -1)]
# array([[ 0., 1., 2., 3., 4., 5., 0., 1.],
# [ 6., 7., 8., 9., 10., 11., 2., 3.],
# [ 12., 13., 14., 15., 16., 17., 4., 5.]])
Now we create views simulating the original a and b:
a2 = c[:, :a.size//len(a)].reshape(a.shape)
b2 = c[:, a.size//len(a):].reshape(b.shape)
The data of a2 and b2 is shared with c. To shuffle both arrays simultaneously, use numpy.random.shuffle(c).
In production code, you would of course try to avoid creating the original a and b at all and right away create c, a2 and b2.
This solution could be adapted to the case where a and b have different dtypes, as sketched below.
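A minimal sketch of that adaptation, using a structured array so the two fields can keep different dtypes (the shapes and dtypes are made up for illustration):
import numpy as np

a = np.arange(18, dtype=np.float64).reshape(3, 2, 3)   # float data
b = np.arange(6, dtype=np.int32).reshape(3, 2)         # int labels

# one record per sample, with an 'a' field and a 'b' field of different dtypes
c = np.empty(len(a), dtype=[('a', np.float64, a.shape[1:]),
                            ('b', np.int32, b.shape[1:])])
c['a'] = a
c['b'] = b

np.random.shuffle(c)       # shuffles both fields together, in-place

a2 = c['a']                # views into the structured array
b2 = c['b']
print(a2.shape, b2.shape)  # (3, 2, 3) (3, 2)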
Very simple solution:
randomize = np.arange(len(x))
np.random.shuffle(randomize)
x = x[randomize]
y = y[randomize]
The two arrays x, y are now both randomly shuffled in the same way.
James wrote an sklearn solution in 2015 which is helpful. But he added a random state variable, which is not needed. In the code below, the random state from numpy is used automatically.
X = np.array([[1., 0.], [2., 1.], [0., 0.]])
y = np.array([0, 1, 2])
from sklearn.utils import shuffle
X, y = shuffle(X, y)
from numpy.random import permutation
from sklearn.datasets import load_iris
iris = load_iris()
X = iris.data #numpy array
y = iris.target #numpy array
# Data is currently unshuffled; we should shuffle
# each X[i] with its corresponding y[i]
perm = permutation(len(X))
X = X[perm]
y = y[perm]
Shuffle any number of arrays together, in-place, using only NumPy.
import numpy as np

def shuffle_arrays(arrays, set_seed=-1):
    """Shuffles arrays in-place, in the same order, along axis=0

    Parameters:
    -----------
    arrays : List of NumPy arrays.
    set_seed : Seed value if int >= 0, else seed is random.
    """
    assert all(len(arr) == len(arrays[0]) for arr in arrays)
    seed = np.random.randint(0, 2**(32 - 1) - 1) if set_seed < 0 else set_seed

    for arr in arrays:
        rstate = np.random.RandomState(seed)
        rstate.shuffle(arr)
And can be used like this
a = np.array([1, 2, 3, 4, 5])
b = np.array([10,20,30,40,50])
c = np.array([[1,10,11], [2,20,22], [3,30,33], [4,40,44], [5,50,55]])
shuffle_arrays([a, b, c])
A few things to note:
- The assert ensures that all input arrays have the same length along their first dimension.
- Arrays are shuffled in-place along their first dimension - nothing is returned.
- The random seed stays within the positive int32 range.
- If a repeatable shuffle is needed, the seed value can be set.
After the shuffle, the data can be split using np.split or referenced using slices - depending on the application; a small sketch follows.
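A small sketch of that split-after-shuffle idea, reusing shuffle_arrays from above (the 80/20 split point is just an assumption):
import numpy as np

x = np.arange(10).reshape(10, 1)
y = np.arange(10) * 10

shuffle_arrays([x, y])                       # in-place, same order for both

# split the shuffled data, e.g. 80/20
x_train, x_test = np.split(x, [int(0.8 * len(x))])
y_train, y_test = np.split(y, [int(0.8 * len(y))])
print(x_train.shape, x_test.shape)           # (8, 1) (2, 1)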
You can make an index array like:
s = np.arange(0, len(a), 1)
then shuffle it:
np.random.shuffle(s)
Now use this s to index your arrays. The same shuffled indices return the arrays shuffled in the same way.
x_data = x_data[s]
x_label = x_label[s]
There is a well-known function that can handle this:
from sklearn.model_selection import train_test_split
X, _, Y, _ = train_test_split(X,Y, test_size=0.0)
Just setting test_size to 0 will avoid splitting and give you shuffled data.
Though it is usually used to split train and test data, it does shuffle them too.
From the documentation:
Split arrays or matrices into random train and test subsets. Quick utility that wraps input validation and next(ShuffleSplit().split(X, y)) and application to input data into a single call for splitting (and optionally subsampling) data in a oneliner.
This seems like a very simple solution:
import numpy as np
def shuffle_in_unison(a, b):
    assert len(a) == len(b)
    c = np.arange(len(a))
    np.random.shuffle(c)
    return a[c], b[c]
a = np.asarray([[1, 1], [2, 2], [3, 3]])
b = np.asarray([11, 22, 33])
shuffle_in_unison(a,b)
Out[94]:
(array([[3, 3],
[2, 2],
[1, 1]]),
array([33, 22, 11]))
One way to do in-place shuffling for connected arrays is to use a seed (it could be random) and numpy.random.shuffle to do the shuffling.
# Set seed to a random number if you want the shuffling to be non-deterministic.
def shuffle(a, b, seed):
    np.random.seed(seed)
    np.random.shuffle(a)
    np.random.seed(seed)
    np.random.shuffle(b)
That's it. This will shuffle both a and b in the exact same way. This is also done in-place which is always a plus.
EDIT: don't use np.random.seed(); use np.random.RandomState instead
def shuffle(a, b, seed):
    rand_state = np.random.RandomState(seed)
    rand_state.shuffle(a)
    rand_state.seed(seed)
    rand_state.shuffle(b)
When calling it just pass in any seed to feed the random state:
a = [1,2,3,4]
b = [11, 22, 33, 44]
shuffle(a, b, 12345)
Output:
>>> a
[1, 4, 2, 3]
>>> b
[11, 44, 22, 33]
Edit: Fixed code to re-seed the random state
Say we have two arrays: a and b.
a = np.array([[1,2,3],[4,5,6],[7,8,9]])
b = np.array([[9,1,1],[6,6,6],[4,2,0]])
We can first obtain row indices by permuting the first dimension
indices = np.random.permutation(a.shape[0])
[1 2 0]
Then use advanced indexing.
Here we are using the same indices to shuffle both arrays in unison.
a_shuffled = a[indices[:,np.newaxis], np.arange(a.shape[1])]
b_shuffled = b[indices[:,np.newaxis], np.arange(b.shape[1])]
This is equivalent to
np.take(a, indices, axis=0)
[[4 5 6]
[7 8 9]
[1 2 3]]
np.take(b, indices, axis=0)
[[6 6 6]
[4 2 0]
[9 1 1]]
If you want to avoid copying arrays, then I would suggest that instead of generating a permutation list, you go through every element in the array, and randomly swap it to another position in the array
for old_index in range(len(a)):
    new_index = numpy.random.randint(old_index + 1)
    a[old_index], a[new_index] = a[new_index], a[old_index]
    b[old_index], b[new_index] = b[new_index], b[old_index]
This implements the Knuth-Fisher-Yates shuffle algorithm.
Shortest and easiest way in my opinion, use seed:
import random

random.seed(seed)
random.shuffle(x_data)
# reset the same seed to get the identical random sequence and shuffle the y
random.seed(seed)
random.shuffle(y_data)
Most solutions above work; however, if you have column vectors you have to transpose them first. Here is an example:
def shuffle(self) -> None:
    """
    Shuffles X and Y
    """
    x = self.X.T
    y = self.Y.T
    p = np.random.permutation(len(x))
    self.X = x[p].T
    self.Y = y[p].T
With an example, this is what I'm doing:
from random import shuffle

combo = []
for i in range(60000):
    combo.append((images[i], labels[i]))
shuffle(combo)

im = []
lab = []
for c in combo:
    im.append(c[0])
    lab.append(c[1])

images = np.asarray(im)
labels = np.asarray(lab)
I extended python's random.shuffle() to take a second arg:
def shuffle_together(x, y):
    assert len(x) == len(y)
    for i in reversed(range(1, len(x))):
        # pick an element in x[:i+1] with which to exchange x[i]
        j = int(random.random() * (i + 1))
        x[i], x[j] = x[j], x[i]
        y[i], y[j] = y[j], y[i]
That way I can be sure that the shuffling happens in-place, and the function is not all too long or complicated.
Just use numpy...
First merge the two input arrays (the 1D array holds the labels y, the 2D array holds the data x), shuffle them with NumPy's shuffle method, then split them again and return.
import numpy as np

def shuffle_2d(a, b):
    rows = a.shape[0]
    if b.shape != (rows, 1):
        b = b.reshape((rows, 1))
    S = np.hstack((b, a))
    np.random.shuffle(S)
    b, a = S[:, 0], S[:, 1:]
    return a, b

features, samples = 2, 5
x, y = np.random.random((samples, features)), np.arange(samples)
x, y = shuffle_2d(x, y)

Growing matrices columnwise in NumPy

In pure Python you can grow matrices column by column pretty easily:
data = []
for i in something:
    newColumn = getColumnDataAsList(i)
    data.append(newColumn)
NumPy's array doesn't have the append function. The hstack function doesn't work on zero sized arrays, thus the following won't work:
data = numpy.array([])
for i in something:
    newColumn = getColumnDataAsNumpyArray(i)
    data = numpy.hstack((data, newColumn))  # ValueError: arrays must have same number of dimensions
So, my options are either to move the initialization inside the loop with an appropriate condition:
data = None
for i in something:
    newColumn = getColumnDataAsNumpyArray(i)
    if data is None:
        data = newColumn
    else:
        data = numpy.hstack((data, newColumn))  # works
... or to use a Python list and convert it later to an array:
data = []
for i in something:
    newColumn = getColumnDataAsNumpyArray(i)
    data.append(newColumn)
data = numpy.array(data)
Both variants seem a little bit awkward to me. Are there nicer solutions?
NumPy actually does have an append function, which it seems might do what you want, e.g.,
import numpy as NP
my_data = NP.random.randint(0, 10, 9).reshape(3, 3)
new_col = NP.array((5, 5, 5)).reshape(3, 1)
res = NP.append(my_data, new_col, axis=1)
Your second snippet (hstack) will work if you add another line, e.g.,
my_data = NP.random.randint(0, 10, 16).reshape(4, 4)
# the line to add--does not depend on array dimensions
new_col = NP.zeros_like(my_data[:,-1]).reshape(-1, 1)
res = NP.hstack((my_data, new_col))
hstack gives the same result as concatenate((my_data, new_col), axis=1); I'm not sure how they compare performance-wise.
While that's the most direct answer to your question, I should mention that looping through a data source to populate a target via append, while just fine in Python, is not idiomatic NumPy. Here's why:
initializing a NumPy array is relatively expensive, and with this conventional python pattern, you incur that cost, more or less, at each loop iteration (i.e., each append to a NumPy array is roughly like initializing a new array with a different size).
For that reason, the common pattern in NumPy for iterative addition of columns to a 2D array is to initialize an empty target array once (or pre-allocate a single 2D NumPy array having all of the empty columns), then successively populate those empty columns by setting the desired column-wise offset (index) -- much easier to show than to explain:
>>> # initialize your skeleton array using 'empty' for lowest-memory footprint
>>> M = NP.empty(shape=(10, 5), dtype=float)
>>> # create a small function to mimic step-wise populating this empty 2D array:
>>> fnx = lambda v : NP.random.randint(0, 10, v)
Populate the NumPy array as in the OP, except that each iteration just re-sets the values of M at successive column-wise offsets:
>>> for index, itm in enumerate(range(5)):
...     M[:, index] = fnx(10)
>>> M
array([[ 1., 7., 0., 8., 7.],
[ 9., 0., 6., 9., 4.],
[ 2., 3., 6., 3., 4.],
[ 3., 4., 1., 0., 5.],
[ 2., 3., 5., 3., 0.],
[ 4., 6., 5., 6., 2.],
[ 0., 6., 1., 6., 8.],
[ 3., 8., 0., 8., 0.],
[ 5., 2., 5., 0., 1.],
[ 0., 6., 5., 9., 1.]])
Of course, if you don't know in advance what size your array should be, just create one much bigger than you need and trim the 'unused' portions when you finish populating it:
>>> M[:3,:3]
array([[ 9., 3., 1.],
[ 9., 6., 8.],
[ 9., 7., 5.]])
Usually you don't keep resizing a NumPy array when you create it. What don't you like about your third solution? If it's a very large matrix/array, then it might be worth allocating the array before you start assigning its values:
x = len(something)
y = getColumnDataAsNumpyArray.someLengthProperty
data = numpy.zeros( (x,y) )
for i in something:
    data[i] = getColumnDataAsNumpyArray(i)
The hstack can work on zero sized arrays:
import numpy as np
N = 5
M = 15
a = np.ndarray(shape = (N, 0))
for i in range(M):
    b = np.random.rand(N, 1)
    a = np.hstack((a, b))
Generally it is expensive to keep reallocating the NumPy array - so your third solution is really the best performance-wise.
However, I think hstack will do what you want - the clue is in the error message,
ValueError: arrays must have same number of dimensions
I'm guessing that newColumn has two dimensions (rather than being a 1D vector), so you need data to also have two dimensions, for example data = np.array([[]]); or alternatively make newColumn a 1D vector (generally, if things are 1D it is better to keep them 1D in NumPy, so broadcasting etc. work better), in which case np.squeeze(newColumn) and hstack or vstack should work with your original definition of data. A sketch of the second option follows.
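A minimal sketch of that second option, with a made-up getColumnDataAsNumpyArray returning 1-D columns; vstack stacks them as rows and the final .T turns them back into columns:
import numpy as np

def getColumnDataAsNumpyArray(i):
    # made-up stand-in for the OP's column source
    return np.full(4, float(i))          # a 1-D column of length 4

something = range(3)

rows = []
for i in something:
    newColumn = np.squeeze(getColumnDataAsNumpyArray(i))  # ensure 1-D
    rows.append(newColumn)

data = np.vstack(rows).T                 # stack as rows, then transpose to columns
print(data.shape)                        # (4, 3): one column per item in `something`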
