PyTorch in-place operator issue in NumPy conversion

I converted a PyTorch Tensor into a NumPy ndarray, and as shown below, 'a' and 'b' have different ids, so I assumed they point to different memory locations.
I know that in PyTorch a trailing underscore indicates an in-place operation, so a.add_(1) does not change the memory location of a. But why does it also change the content of the array b, given that their ids differ?
import torch
a = torch.ones(5)
b = a.numpy()
a.add_(1)
print(id(a), id(b))
print(a)
print(b)
Results:
139851293933696 139851293925904
tensor([2., 2., 2., 2., 2.])
[2. 2. 2. 2. 2.]
PyTorch documentation:
Converting a torch Tensor to a NumPy array and vice versa is a breeze.
The torch Tensor and NumPy array will share their underlying memory
locations, and changing one will change the other.
But why?
(They have different IDs so must be independent) :(

If you use numpy.ndarray.__array_interface__ and torch.Tensor.data_ptr(), you can find the memory address of each object's underlying data buffer.
a = torch.ones(5)
b = a.numpy()
a.add_(1)
print(b.__array_interface__)
# > {'data': (1848226926144, False),
# 'strides': None,
# 'descr': [('', '<f4')],
# 'typestr': '<f4',
# 'shape': (5,),
# 'version': 3}
print(a.data_ptr())
# > 1848226926144
a.data_ptr() == b.__array_interface__["data"][0]
# > True
Both the array and the tensor share the same data buffer.
We can reasonably conclude that torch.Tensor.numpy() creates the numpy array by handing it a reference to the tensor's data buffer rather than copying it.
b[0] = 4
print(a)
# > tensor([4., 2., 2., 2., 2.])

My guess is that b shares a's data, and the addresses id() returns are just where the Python wrapper objects themselves are stored, not where the data lives. Try using copy.deepcopy() to create b from a, and that should make them entirely separate.
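A minimal sketch of that idea (b = a.numpy().copy() would work just as well as copy.deepcopy()):
import copy
import torch
a = torch.ones(5)
b = copy.deepcopy(a.numpy())  # deep copy: b gets its own data buffer
a.add_(1)
print(a)  # tensor([2., 2., 2., 2., 2.])
print(b)  # [1. 1. 1. 1. 1.] -- unchanged, nothing is shared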

Related

Appending an empty list to a numpy array changes its dtype

I have a numpy array of integers. In my code I need to append some other integers from a list, which works fine and gives me back an array of dtype int64, as expected. But it may happen that the list of integers to append is empty. In that case, numpy returns an array of float64 values. Example code below:
import numpy as np
a = np.arange(10, dtype='int64')
np.append(a, [10]) # dtype is int64
# array([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
np.append(a, []) # dtype is float64
# array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.])
Is this expected behaviour? If so, what is the rationale behind this? Could this even be a bug?
The documentation for np.append states that the return value is
A copy of arr with values appended to axis.
Since there are no values to append, shouldn't it just return a copy of the array?
(Numpy version: 1.22.4, Python version: 3.8.0)
The default dtype for a numpy array constructed from an empty list is float64:
>>> np.array([])
array([], dtype=float64)
A float64 dtype will "win" over the int64 dtype and promote everything to float64, because converting the other way around would cause loss of precision (i.e. truncate any float64). Of course this is an extreme case because there is no value to truncate in an empty array, but the check is only done on the dtype. More info on this is in the doc for numpy.result_type().
In fact:
>>> a = np.array([], dtype='int64')
>>> a
array([], dtype=int64)
>>> b = np.array([])
>>> b
array([], dtype=float64)
>>> np.result_type(a, b)
dtype('float64')
The np.promote_types() function is used to determine the type to promote to:
>>> np.promote_types('int64', 'float64')
dtype('float64')
See also: How the dtype of numpy array is calculated internally?
You can cast your array when appending and specify the type you want:
import numpy as np
a = np.arange(10, dtype="int64")
# array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64)
np.append(a, [])
# array([0., 1., 2., 3., 4., 5., 6., 7., 8., 9.], dtype=float64)
np.append(a, np.array([], dtype=a.dtype))
# array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=int64)
In the source code for numpy.append, we can see that it calls numpy.concatenate. Looking at the documentation for that function, we see that it accepts two type-related arguments: dtype, which defaults to None, and casting, which defaults to 'same_kind' and allows safe casting within a kind, as described here. Since numpy.append provides values for neither, the defaults are assumed. Because an empty Python list carries no type information, my guess is that it falls back to the default float64, and concatenate then promotes the output to the widest type that fits both inputs.
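For illustration, a sketch that calls numpy.concatenate directly with those keyword arguments (available since NumPy 1.20; casting='unsafe' is needed because float64 to int64 is not a same-kind cast):
import numpy as np
a = np.arange(10, dtype="int64")
# Forcing the output dtype, so the empty float64 array no longer wins:
out = np.concatenate([a, np.array([])], dtype="int64", casting="unsafe")
print(out.dtype)  # int64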
If you want to avoid this, you should probably append using numpy arrays instead of Python lists:
np.append(a, np.array([], dtype = 'int64'))
Can you avoid appending the empty list with some sort of if statement?
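A sketch of such a guard, skipping the no-op append so the dtype survives:
import numpy as np
a = np.arange(10, dtype="int64")
extra = []  # may or may not be empty
if len(extra) > 0:
    a = np.append(a, extra)
print(a.dtype)  # int64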
I think it's expected behaviour, as [] is still a value that gets appended. Obviously [] can't be an integer, so it is presumably treated as a float (the default dtype).

Add a level to Numpy array

I have a problem with a numpy array.
In particular, suppose I have a matrix
x = np.array([[1., 2., 3.], [4., 5., 6.]])
with shape (2,3), I want to wrap each float in a list of its own, so as to obtain the array [[[1.], [2.], [3.]], [[4.], [5.], [6.]]] with shape (2,3,1).
I tried converting each float to a list (i.e., x[0][0] = [x[0][0]]), but it does not work.
Can anyone help me? Thanks
What you want is adding another dimension to your numpy array. One way of doing it is using reshape:
x = x.reshape(2,3,1)
output:
[[[1.]
  [2.]
  [3.]]

 [[4.]
  [5.]
  [6.]]]
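If you'd rather not hard-code the shape, a small sketch that works for any input shape:
import numpy as np
x = np.array([[1., 2., 3.], [4., 5., 6.]])
y = x.reshape(x.shape + (1,))  # append a trailing axis of length 1
print(y.shape)  # (2, 3, 1)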
There is a function in NumPy to perform exactly what @Valdi_Bo mentions. You can use np.expand_dims and add a new dimension along axis 2, as follows:
x = np.expand_dims(x, axis=2)
Refer:
np.expand_dims
Actually, you want to add a dimension (not level).
To do it, run:
result = x[...,np.newaxis]
Its shape is just (2, 3, 1).
Or save the result back under x.
You are trying to add a new dimension to the numpy array. There are multiple ways of doing this, as other answers mentioned: np.expand_dims, np.newaxis, np.reshape, etc. But I usually use the following, as I find it the most readable, especially when you are vectorizing multiple tensors and doing complex operations involving broadcasting (check this Bounty question that I solved with this method).
x[:,:,None].shape
(2,3,1)
x[None,:,None,:,None].shape
(1,2,1,3,1)
Well, maybe this is overkill for the array you have, but a solution that is guaranteed not to copy any data is np.lib.stride_tricks.as_strided: it builds a view directly from a shape and strides.
import numpy as np
x = np.array([[1., 2., 3.], [4., 5., 6.]])
newshape = x.shape[:-1] + (x.shape[-1], 1)
newstrides = x.strides + x.strides[-1:]
a = np.lib.stride_tricks.as_strided(x, shape=newshape, strides=newstrides)
results in:
array([[[1.],
        [2.],
        [3.]],

       [[4.],
        [5.],
        [6.]]])
>>> a.shape
(2, 3, 1)
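A quick check (a sketch) that the strided result really is a view of x and that no data was copied:
import numpy as np
x = np.array([[1., 2., 3.], [4., 5., 6.]])
a = np.lib.stride_tricks.as_strided(
    x, shape=x.shape + (1,), strides=x.strides + x.strides[-1:])
print(np.shares_memory(a, x))  # True -- a is a view over x's buffer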

What are the differences between these Numpy array creation functions? [duplicate]

What is the difference between NumPy's np.array and np.asarray? When should I use one rather than the other? They seem to generate identical output.
The definition of asarray is:
def asarray(a, dtype=None, order=None):
    return array(a, dtype, copy=False, order=order)
So it is like array, except it has fewer options, and copy=False. array has copy=True by default.
The main difference is that array (by default) will make a copy of the object, while asarray will not unless necessary.
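A two-line check of that difference (a sketch):
import numpy as np
a = np.zeros(3)
print(np.array(a) is a)    # False -- array made a copy by default
print(np.asarray(a) is a)  # True  -- asarray returned a unchanged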
Since other questions that ask about asanyarray or other array creation routines are being redirected to this one, it's probably worth having a brief summary of what each of them does.
The differences are mainly about when to return the input unchanged, as opposed to making a new array as a copy.
array offers a wide variety of options (most of the other functions are thin wrappers around it), including flags to determine when to copy. A full explanation would take just as long as the docs (see Array Creation), but briefly, here are some examples:
Assume a is an ndarray, and m is a matrix, and they both have a dtype of float32:
np.array(a) and np.array(m) will copy both, because that's the default behavior.
np.array(a, copy=False) and np.array(m, copy=False) will copy m but not a, because m is not a base-class ndarray.
np.array(a, copy=False, subok=True) and np.array(m, copy=False, subok=True) will copy neither, because m is a matrix, which is a subclass of ndarray.
np.array(a, dtype=int, copy=False, subok=True) will copy both, because the dtype is not compatible.
Most of the other functions are thin wrappers around array that control when copying happens:
asarray: The input will be returned uncopied iff it's a compatible ndarray (copy=False).
asanyarray: The input will be returned uncopied iff it's a compatible ndarray or subclass like matrix (copy=False, subok=True).
ascontiguousarray: The input will be returned uncopied iff it's a compatible ndarray in contiguous C order (copy=False, order='C').
asfortranarray: The input will be returned uncopied iff it's a compatible ndarray in contiguous Fortran order (copy=False, order='F').
require: The input will be returned uncopied iff it's compatible with the specified requirements string.
copy: The input is always copied.
fromiter: The input is treated as an iterable (so, e.g., you can construct an array from an iterator's elements, instead of an object array with the iterator); always copied.
There are also convenience functions, like asarray_chkfinite (same copying rules as asarray, but raises ValueError if there are any nan or inf values), and constructors for subclasses like matrix or for special cases like record arrays, and of course the actual ndarray constructor (which lets you create an array directly out of strides over a buffer).
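A quick sketch of the pass-through rules (np.matrix is deprecated in current NumPy, but it still illustrates the subclass case):
import numpy as np
a = np.arange(3, dtype=np.float32)
m = np.matrix(np.ones((2, 2), dtype=np.float32))
print(np.asarray(a) is a)     # True -- compatible ndarray, no copy
print(np.asanyarray(m) is m)  # True -- subclass passed through
print(type(np.asarray(m)))    # <class 'numpy.ndarray'> -- forced to base class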
The difference can be demonstrated by this example:
Generate a matrix.
>>> A = numpy.matrix(numpy.ones((3, 3)))
>>> A
matrix([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])
Use numpy.array to modify A. Doesn't work because you are modifying a copy.
>>> numpy.array(A)[2] = 2
>>> A
matrix([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 1.,  1.,  1.]])
Use numpy.asarray to modify A. It worked because you are modifying A itself.
>>> numpy.asarray(A)[2] = 2
>>> A
matrix([[ 1.,  1.,  1.],
        [ 1.,  1.,  1.],
        [ 2.,  2.,  2.]])
The differences are mentioned quite clearly in the documentation of array and asarray. The differences lie in the argument list and hence the action of the function depending on those parameters.
The function definitions are :
numpy.array(object, dtype=None, copy=True, order=None, subok=False, ndmin=0)
and
numpy.asarray(a, dtype=None, order=None)
The following arguments are those that may be passed to array and not asarray, as mentioned in the documentation:
copy : bool, optional If true (default), then the object is copied.
Otherwise, a copy will only be made if __array__ returns a copy, if
obj is a nested sequence, or if a copy is needed to satisfy any of the
other requirements (dtype, order, etc.).
subok : bool, optional If True, then sub-classes will be
passed-through, otherwise the returned array will be forced to be a
base-class array (default).
ndmin : int, optional Specifies the minimum number of dimensions that
the resulting array should have. Ones will be pre-pended to the shape
as needed to meet this requirement.
asarray(x) is like array(x, copy=False)
Use asarray(x) when you want to ensure that x will be an array before any other operations are done. If x is already an array then no copy would be done. It would not cause a redundant performance hit.
Here is an example of a function that ensures x is converted into an array first.
def mysum(x):
    return np.asarray(x).sum()
Here's a simple example that demonstrates a subtlety: asarray only avoids the copy when it can.
import numpy as np
a = np.arange(0.0, 10.2, 0.12)
int_cvr = np.asarray(a, dtype=np.int64)
Because the requested dtype (int64) differs from a's dtype (float64), asarray has to make a copy here. The contents of the array a therefore remain untouched, and we can perform any operation on the data through the other object without modifying the content of the original array.
Let's understand the difference between np.array() and np.asarray() with an example:
np.array(): Converts input data (list, tuple, array, or other sequence type) to an ndarray and copies the input data by default.
np.asarray(): Converts input data to an ndarray but does not copy if the input is already an ndarray.
# Create an array...
arr = np.ones(5)  # array([1., 1., 1., 1., 1.])
# Now try to modify `arr` through `array`. Let's see...
np.array(arr)[3] = 200
print(arr)  # array([1., 1., 1., 1., 1.])
No change in the array, because we modified a copy of arr.
Now modify arr through asarray():
np.asarray(arr)[3] = 200
print(arr)  # array([  1.,   1.,   1., 200.,   1.])
The change occurs in this array because we are working with the original array now.

Is there a tensorflow equivalent to np.empty?

Numpy has this helper function, np.empty, which will:
Return a new array of given shape and type, without initializing entries.
I find it pretty useful when I want to create a tensor using tf.concat since:
The number of dimensions of the input tensors must match, and all dimensions except axis must be equal.
So it comes in handy to start with an empty tensor of an expected shape. Is there any way to achieve this in tensorflow?
[edit]
A simplified example of why I want this
netInput = np.empty([0, 4])
netTarget = np.empty([0, 4])
frames_width = 2
for step in range(data.shape.as_list()[-2] - frames_width - 1):
    netInput = tf.concat([netInput, data[0, step:step + frames_width, :]], -2)
    netTarget = tf.concat([netTarget, data[0, step + frames_width + 1:step + frames_width + 2, :]], -2)
In this example, if netInput or netTarget start out initialized with real values, I'll be concatenating an extra example coming from that initialization. And to initialize them with the first value instead, I'd need to hack the loop. Nothing major, I just wondered if there is a 'tensorflow' way to solve this.
In TF 2,
tensor = tf.reshape(tf.convert_to_tensor(()), (0, n))
worked for me.
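For instance, a sketch of the concat-in-a-loop pattern from the question, assuming TF 2 and a fixed row width n:
import tensorflow as tf
n = 4
acc = tf.reshape(tf.convert_to_tensor(()), (0, n))  # empty (0, 4) float32 tensor
for row in [[1., 2., 3., 4.], [5., 6., 7., 8.]]:
    acc = tf.concat([acc, [row]], axis=0)
print(acc.shape)  # (2, 4)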
If you're creating an empty tensor, tf.zeros will do
>>> a = tf.zeros([0, 4])
>>> tf.concat([a, [[1, 2, 3, 4], [5, 6, 7, 8]]], axis=0)
<tf.Tensor: shape=(2, 4), dtype=float32, numpy=
array([[1., 2., 3., 4.],
       [5., 6., 7., 8.]], dtype=float32)>
The closest thing you can do is create a variable that you do not initialize. If you use tf.global_variables_initializer() to initialize your variables, disable putting your variable in the list of global variables during initialization by setting collections=[].
For example,
import numpy as np
import tensorflow as tf
x = tf.Variable(np.empty((2, 3), dtype=np.float32), collections=[])
y = tf.Variable(np.empty((2, 3), dtype=np.float32))
sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
# y has been initialized with the content of "np.empty"
y.eval()
# x is not initialized; evaluating it now would raise an error,
# so run its initializer yourself first
x.initializer.run()
x.eval()
Here np.empty is provided to x only to specify its shape and type, not for initialization.
Now for operations such as tf.concat, you actually don't have to (indeed cannot) manage the memory yourself -- you cannot preallocate the output as some numpy functions allow you to. TensorFlow already manages memory and does smart tricks such as reusing a memory block for the output if it detects it can do so.

Assigning values to two dimensional array from two one dimensional ones

Most probably somebody else already asked this but I couldn't find it. The question is how can I assign values to a 2D array from two 1D arrays. For example:
import numpy as np
# a is the 2D array. b is the 1D array and should be assigned
# to the second coordinate. In this example the first coordinate is 1.
a=np.zeros((3,2))
b=np.asarray([1,2,3])
c=np.ones(3)
a=np.vstack((c,b)).T
output:
[[ 1.  1.]
 [ 1.  2.]
 [ 1.  3.]]
I know the way I am doing it is naive, but I am sure there should be a one-line way of doing this.
P.S. In the real case I am dealing with, this is a subarray of a larger array, so I cannot set the first coordinate to one from the beginning. The whole array's first coordinates are different, but after applying np.where they become constant.
How about 2 lines?
>>> c = np.ones((3, 2))
>>> c[:, 1] = [1, 2, 3]
And the proof it works:
>>> c
array([[ 1.,  1.],
       [ 1.,  2.],
       [ 1.,  3.]])
Or, perhaps you want np.column_stack:
>>> np.column_stack(([1.,1,1],[1,2,3]))
array([[ 1.,  1.],
       [ 1.,  2.],
       [ 1.,  3.]])
First, there's absolutely no reason to create the original zeros array that you stick in a, never reference, and replace with a completely different array with the same name.
Second, if you want to create an array the same shape and dtype as b but with all ones, use ones_like.
So:
b = np.array([1,2,3])
c = np.ones_like(b)
d = np.vstack((c, b)).T
You could of course expand b to a 3x1-array instead of a 3-array, in which case you can use hstack instead of needing to vstack then transpose… but I don't think that's any simpler:
b = np.array([1,2,3])
b = np.expand_dims(b, 1)
c = np.ones_like(b)
d = np.hstack((c, b))
If you insist on 1 line, use fancy indexing:
>>> a[:, 0], a[:, 1] = [1, 1, 1], [1, 2, 3]
