After defining an array a filled with zeros, I can create a view of its leftmost column with the following code:
import numpy as np

a = np.zeros((5, 5))
a_left_col = a[:, 0]
a_left_col[:] = 2.
After this, a prints as:
array([[2., 0., 0., 0., 0.],
       [2., 0., 0., 0., 0.],
       [2., 0., 0., 0., 0.],
       [2., 0., 0., 0., 0.],
       [2., 0., 0., 0., 0.]])
If I subsequently reinitialize a with
a = np.zeros((5, 5))
then the view still exists, but it refers to nothing anymore. How does Python handle the situation if I do a_left_col[:] = 2 again? Is this undefined behavior like in C or C++, or does Python handle it properly, and if so, why doesn't it throw an error?
The original object still exists because it is referenced by the view. (Although it can no longer be accessed through the variable a.)
Let's have a detailed look at the object's reference counts:
import sys
import numpy as np
a = np.zeros((5, 5))
print(sys.getrefcount(a)) # 2
a_left_col = a[:, 0]
print(sys.getrefcount(a)) # 3
print(sys.getrefcount(a_left_col.base)) # 3
print(a_left_col.base is a) # True
a = np.ones((5, 5))
print(sys.getrefcount(a_left_col.base)) # 2
Note that a_left_col.base is the reference to the original array. When we reassign a, the reference count on the object decreases, but the object still exists because it is reachable through a_left_col.
The behaviour is not undefined. You are merely binding the name a to a new object. The old one is not deallocated; it still exists in memory, since a_left_col still references it. Once you reassign or delete a_left_col, the original array can be deallocated.
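To make this concrete, a minimal sketch reusing the names from the question:

import numpy as np

a = np.zeros((5, 5))
a_left_col = a[:, 0]    # view into the original array
a = np.zeros((5, 5))    # rebinds the name; the old array survives via the view

a_left_col[:] = 2.      # writes into the *old* array, which is still alive
print(a)                # the new array is untouched: all zeros
print(a_left_col.base)  # the old array, with its first column set to 2.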
I have a .npy file here
It's just a file with an object that is a list of images and their labels. For example:
{
 '2007_002760': array([0., 0., 0., 0., 0., 0., 0., 1., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.], dtype=float32),
 '2008_004036': array([0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0., 0., 0., 0., 0.], dtype=float32)
}
I want to open the file and get its length, and then possibly add to it or modify it.
I am able to open the file, but I can't get the number of items in it.
Here's how I open it:
import numpy as np
file = np.load('cls_labels.npy', allow_pickle = True)
print(file.size)
What am I missing here?
Your file contains a dictionary wrapped inside a 0-dimensional numpy object array. The magic to extract the actual information is:
my_dictionary = file[()]
This is a standard dictionary whose keys are strings like '2008_004036' and whose values are numpy arrays.
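A minimal round trip illustrating the 0-dimensional wrapper (the file name demo.npy is made up for the example):

import numpy as np

d = {'a': np.zeros(3, dtype=np.float32)}
np.save('demo.npy', d)                        # np.save wraps the dict in a 0-d object array

file = np.load('demo.npy', allow_pickle=True)
print(file.shape, file.size)                  # () 1  -- a 0-d array, so len(file) fails
my_dictionary = file[()]                      # index with an empty tuple to unwrap
print(len(my_dictionary))                     # 1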
Edit: And as mentioned above, you shouldn't be saving dictionaries with numpy.save(); you should be using pickle for that. Otherwise you end up with horrors like file[()].
Here is the correct and easiest way to do it:
cls_labels = np.load('cls_labels.npy', allow_pickle = True).item()
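If you control the saving side, here is a sketch of the pickle route instead (the file name cls_labels.pkl and the dict contents are just placeholders):

import pickle
import numpy as np

cls_labels = {'2007_002760': np.zeros(20, dtype=np.float32)}  # placeholder data

with open('cls_labels.pkl', 'wb') as f:
    pickle.dump(cls_labels, f)

with open('cls_labels.pkl', 'rb') as f:
    loaded = pickle.load(f)

print(len(loaded))  # a plain dict comes back; no [()] or .item() needed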
During initialization, I tried to reduce repetition in my code, so instead of:
output = (torch.zeros(2, 3),
          torch.zeros(2, 3))
I wrote:
z = torch.zeros(2, 3)
output = (z, z)
However, I find that the second method is wrong. If I unpack the tuple into variables h, c, any change to h is also applied to c:
h, c = output
print(h, c)
h += torch.ones(2, 3)
print('-----------------')
print(h, c)
Results of the test above:
tensor([[0., 0., 0.],
        [0., 0., 0.]]) tensor([[0., 0., 0.],
        [0., 0., 0.]])
-----------------
tensor([[1., 1., 1.],
        [1., 1., 1.]]) tensor([[1., 1., 1.],
        [1., 1., 1.]])
Is there a more elegant way to initialize two independent variables?
I agree that your initial line needs no modification, but if you do want an alternative, consider:
z = torch.zeros(2, 3)
output = (z, z.clone())
The reason the other version (output = (z, z)) doesn't work, as you've correctly discovered, is that no copy is made: both entries of the tuple hold the same reference to z.
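Running the question's test against the clone-based tuple shows the two tensors are now independent:

import torch

z = torch.zeros(2, 3)
output = (z, z.clone())  # second entry is an independent copy

h, c = output
h += torch.ones(2, 3)    # in-place update of h (and therefore z)
print(h)                 # all ones
print(c)                 # still all zeros: the clone is unaffected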
Assign them in a single statement, but with a separate tensor for each, as below:
h, c = torch.zeros(2, 3), torch.zeros(2, 3)
I am trying to replace/overwrite values in an array using the following commands:
import numpy as np
test = np.array([[4,5,0],[0,0,0],[0,0,6]])
test
Out[20]:
array([[4., 5., 0.],
       [0., 0., 0.],
       [0., 0., 6.]])
test[np.where(test[...,0] != 0)][...,1:3] = np.array([[10,11]])
test
Out[22]:
array([[4., 5., 0.],
       [0., 0., 0.],
       [0., 0., 6.]])
However, as one can see in Out[22], the array test has not been modified. So I conclude that it is not possible to simply overwrite a part of an array or just a few cells.
Nevertheless, in other contexts it is possible to overwrite a few cells of an array. For example, in the code below:
test = np.array([[1,2,0],[0,0,0],[0,0,3]])
test
Out[11]:
array([[1., 2., 0.],
       [0., 0., 0.],
       [0., 0., 3.]])
test[test>0]
Out[12]:
array([1., 2., 3.])
test[test>0] = np.array([4,5,6])
test
Out[14]:
array([[4., 5., 0.],
       [0., 0., 0.],
       [0., 0., 6.]])
Therefore, my 2 questions:
1- Why does the first command
test[np.where(test[...,0] != 0)][...,1:3] = np.array([10,11])
not modify the array test? Why does it not allow accessing the array's cells and overwriting them?
2- How could I make it work, considering that my code needs to select the cells using the command above?
Many thanks!
I'll do you one up. This does work:
test[...,1:3][np.where(test[...,0] != 0)] = np.array([[10,11]])
array([[ 4, 10, 11],
       [ 0,  0,  0],
       [ 0,  0,  6]])
Why? It's the combination of two factors: numpy indexing and .__setitem__ calls.
The Python interpreter sort of reads assignments backwards: when it gets to the =, it calls .__setitem__ on the object to the left of the final [...]. __setitem__ is (hopefully) a method of that object, and it takes two inputs: the indices (whatever is between the [...] just before the =) and the value being assigned.
a[b] = c  # is interpreted as
a.__setitem__(b, c)
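A quick runnable check of that equivalence:

import numpy as np

a = np.zeros(3)
a[1] = 5.0             # syntactic sugar for...
a.__setitem__(2, 7.0)  # ...an explicit __setitem__ call
print(a)               # [0. 5. 7.]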
Now, when we index into a numpy array, there are three basic ways to do it:
slicing (returns views)
'advanced indexing' (returns copies)
'simple indexing' of individual elements (also returns copies)
One major difference between "advanced" and "simple" indexing is that a numpy array's __setitem__ method can interpret advanced indexes. And views share their data addresses with the original array, so we don't need __setitem__ to reach them.
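A small check of the view-versus-copy distinction, using np.shares_memory:

import numpy as np

a = np.zeros((3, 3))
s = a[:, 1:3]                  # slice: a view
f = a[np.array([0, 2]), :]     # advanced index: a copy
print(np.shares_memory(s, a))  # True  -- the view shares a's memory
print(np.shares_memory(f, a))  # False -- the copy has its own data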
So:
test[np.where(test[...,0] != 0)][...,1:3] = np.array([[10,11]])  # is interpreted as
(test[np.where(test[...,0] != 0)]).__setitem__((Ellipsis, slice(1, 3)),
                                               np.array([[10,11]]))
But, since np.where(test[...,0] != 0) is an advanced index, test[np.where(test[...,0] != 0)] returns a copy, which is then lost because it is never assigned to anything. The assignment does set the elements we want to [10, 11], but only on that temporary copy, which is immediately discarded.
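Spelling that out step by step with the question's data:

import numpy as np

test = np.array([[4, 5, 0], [0, 0, 0], [0, 0, 6]])
tmp = test[np.where(test[..., 0] != 0)]  # advanced index: a fresh copy
tmp[..., 1:3] = np.array([[10, 11]])     # modifies only the copy
print(test)                              # original is unchanged
print(tmp)                               # [[ 4 10 11]]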
If we do:
test[..., 1:3][np.where(test[..., 0] != 0)] = np.array([[10, 11]])  # is interpreted as
(test[..., 1:3]).__setitem__(np.where(test[..., 0] != 0), np.array([[10, 11]]))
test[...,1:3] is a view, so it still points to the same memory. Now __setitem__ looks for the locations in test[...,1:3] that correspond to np.where(test[...,0] != 0) and sets them equal to np.array([[10,11]]). And everything works.
You can also do this:
test[np.where(test[...,0] != 0), 1:3] = np.array([10, 11])
Now, since all the indexing is in one set of brackets, it's calling test.__setitem__ on those indices, which sets the data correctly as well.
Even simpler (and most pythonic) would be:
test[test[...,0] != 0, 1:3] = np.array([10,11])
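Putting that last version together with the question's data:

import numpy as np

test = np.array([[4, 5, 0],
                 [0, 0, 0],
                 [0, 0, 6]])

# Boolean mask and slice inside one set of brackets: a single
# test.__setitem__ call, so the assignment reaches the original array.
test[test[..., 0] != 0, 1:3] = np.array([10, 11])
print(test)
# [[ 4 10 11]
#  [ 0  0  0]
#  [ 0  0  6]]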
I have a for loop that does some calculations and creates one slice/2D array, say of shape (x = 3, y = 3), per iteration, and I would like to append/stack these along a third dimension within the same loop.
I have been trying numpy's stack, vstack, hstack, and dstack, but I still can't figure out how to combine the slices along the third dimension the way I want.
So at the end I would like to have something like this, with shape (z = 10, x = 3, y = 3):
array([[[0., 0., 0.],
        [0., 0., 0.],
        [0., 0., 0.]],

       [[1., 1., 1.],
        [1., 1., 1.],
        [1., 1., 1.]],

       [[2., 2., 2.],
        [2., 2., 2.],
        [2., 2., 2.]],

       ...
      ])
Thanks,
You can do it like this:
import numpy as np

arrays = []
for i in range(5):
    arr = np.full((3, 3), i)  # one (3, 3) slice per iteration
    arrays.append(arr)

result = np.asarray(arrays)   # shape (5, 3, 3)
If you want, you can call np.asarray(arrays) inside the loop, but it will not be very efficient. Note that np.concatenate also effectively creates a new numpy array each time, so the efficiency would be similar. Doing this operation once, outside the loop, is better.
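If you prefer one of the stacking functions mentioned in the question, np.stack along axis 0 produces the same (z, x, y) layout; a minimal sketch:

import numpy as np

arrays = [np.full((3, 3), i, dtype=float) for i in range(10)]
result = np.stack(arrays, axis=0)  # shape (10, 3, 3)
print(result.shape)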
For example, I need 30x30 numpy arrays created from images to feed to a neural net. If I have a directory of images to predict on, I should be able to loop through the directory, get the image data, and create an np array of shape (n, 30, 30).
This is my current method; I intend to reshape each row before feeding it to the model:
import os
import numpy as np
from PIL import Image

def get_image_vectors(path):
    img_list = os.listdir(path)
    print(img_list)
    X = np.empty((900,))
    for img_file in img_list:
        img = Image.open(os.path.join(path, img_file))
        img_grey = img.convert("L")
        resized = img_grey.resize((30, 30))
        flattened = np.array(resized.getdata())
        # print(flattened.shape)
        X = np.vstack((X, flattened))
        print(img_file, '=>', X.shape)
    return X[1:, :]
Instead of appending to an existing array, it will probably be better to use a list initially, append to it, and convert it to an array at the end, thus avoiding many redundant reallocations of np arrays.
Here is a toy example:
import numpy as np

def get_image_vectors():
    X = []                    # create an empty list
    for i in range(10):
        flattened = np.zeros(900)
        X.append(flattened)   # append a np array to it
    return np.array(X)        # create the array from the list
With result:
array([[0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       ...,
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.],
       [0., 0., 0., ..., 0., 0., 0.]])
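Applied to the earlier image-loading function (the PIL calls and 30x30 size come from the question itself), the same pattern might look like this:

import os
import numpy as np
from PIL import Image

def get_image_vectors(path):
    X = []  # collect flattened images in a plain list
    for img_file in os.listdir(path):
        img = Image.open(os.path.join(path, img_file))
        resized = img.convert("L").resize((30, 30))   # greyscale, 30x30
        X.append(np.array(resized.getdata()))         # flattened 900-vector
    return np.array(X)  # single conversion at the end: shape (n, 900)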