Why does the NumPy diag function behave weirdly? - python

The diag function does not seem to return an independent copy of the diagonal; later changes to the array show up in the result:
import numpy as np
A = np.random.rand(4,4)
d = np.diag(A)
print(d)
# above gives the diagonal entries of A
# let us change one entry
A[0, 0] = 0
print(d)
# above gives updated diagonal entries of A
Why does the diag function behave in this fashion?

np.diag returns a view into the original array. This means later changes to the original array are reflected in the view. (The upside, however, is that the operation is much faster than creating a copy.)
Note that this is only the behavior in some versions of numpy; in others, a copy is returned.
To "freeze" the result, make a copy: d = np.diag(A).copy()
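For instance, here is a minimal sketch of the difference (assuming a NumPy version where diag returns a view):
import numpy as np
A = np.random.rand(4, 4)
d_view = np.diag(A)         # may be a (read-only) view into A
d_copy = np.diag(A).copy()  # independent snapshot of the diagonal
A[0, 0] = 0
print(d_view[0])  # reflects the change: 0.0
print(d_copy[0])  # still the original value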

The sum of the products of two two-dimensional arrays - python

I have 2 arrays of a million elements each (created from an image, holding the brightness of each pixel).
I need to get a single number: the sum of the products of the elements at the same positions. That is, A(1,1) * B(1,1) + A(1,2) * B(1,2)...
In my loop, python runs through every value of the innermost loop variable (j1), then increments the next variable and runs through the innermost one again, and so on, so every element gets paired with every other element. How can I make it pair up only the elements at the same positions?
res1, res2 - arrays (specifically - numpy.ndarray)
Perhaps there is a ready-made function for this, but I need to make it as open as possible, without a ready-made one.
sum = 0
for i in range(len(res1)):
    for j in range(len(res2[i])):
        for i1 in range(len(res2)):
            for j1 in range(len(res1[i1])):
                sum += res1[i][j]*res2[i1][j1]
In the first part of my answer I'll explain how to fix your code directly. Your code is almost correct but contains one big mistake in logic. In the second part of my answer I'll explain how to solve your problem using numpy. numpy is the standard python package to deal with arrays of numbers. If you're manipulating big arrays of numbers, there is no excuse not to use numpy.
Fixing your code
Your code uses 4 nested for-loops, with indices i and j to iterate on the first array, and indices i1 and j1 to iterate on the second array.
Thus you're multiplying every element res1[i][j] from the first array, with every element res2[i1][j1] from the second array. This is not what you want. You only want to multiply every element res1[i][j] from the first array with the corresponding element res2[i][j] from the second array: you should use the same indices for the first and the second array. Thus there should only be two nested for-loops.
s = 0
for i in range(len(res1)):
    for j in range(len(res1[i])):
        s += res1[i][j] * res2[i][j]
Note that I called the variable s instead of sum. This is because sum is the name of a builtin function in python. Shadowing the name of a builtin is heavily discouraged. Here is the list of builtins: https://docs.python.org/3/library/functions.html; do not name a variable with a name from that list.
Now, in general, in python, we dislike using range(len(...)) in a for-loop. If you read the official tutorial and its section on for loops, it suggests that for-loops can be used to iterate on elements directly, rather than on indices.
For instance, here is how to iterate on one array, to sum the elements on an array, without using range(len(...)) and without using indices:
# sum the elements in an array
s = 0
for row in res1:
    for x in row:
        s += x
Here row is a whole row, and x is an element. We don't refer to indices at all.
Useful tools for looping are the builtin functions zip and enumerate:
- enumerate can be used if you need access both to the elements and to their indices;
- zip can be used to iterate on two arrays simultaneously.
I won't show an example with enumerate, but zip is exactly what you need since you want to iterate on two arrays:
s = 0
for row1, row2 in zip(res1, res2):
    for x, y in zip(row1, row2):
        s += x * y
You can also use builtin function sum to write this all without += and without the initial = 0:
s = sum(x * y for row1, row2 in zip(res1, res2) for x, y in zip(row1, row2))
Using numpy
As I mentioned in the introduction, numpy is a standard python package to deal with arrays of numbers. In general, operations on arrays using numpy are much, much faster than loops on arrays in core python. Plus, code using numpy is usually easier to read than code using core python only, because there are a lot of useful functions and convenient notations. For instance, here is a simple way to achieve what you want:
import numpy as np
# convert to numpy arrays
res1 = np.array(res1)
res2 = np.array(res2)
# multiply elements with corresponding elements, then sum
s = (res1 * res2).sum()
Relevant documentation:
- sum: .sum() or np.sum();
- pointwise multiplication: np.multiply() or *;
- dot product: np.dot.
Solution 1:
import numpy as np
a, b = np.array(range(100)), np.array(range(100))
print((a * b).sum())
Solution 2 (more open, because of the use of pd.DataFrame):
import pandas as pd
import numpy as np
a, b = np.array(range(100)), np.array(range(100))
df = pd.DataFrame({'col1': a, 'col2': b})
df['vect_product'] = df.col1 * df.col2
print(df['vect_product'].sum())
Two simple and fast options using numpy are: (A*B).sum() and np.dot(A.ravel(),B.ravel()). The first method sums all elements of the element-wise multiplication of A and B. np.sum() defaults to sum(axis=None), so we will get a single number. In the second method, you create a 1D view into the two matrices and then apply the dot-product method to get a single number.
import numpy as np
A = np.random.rand(1000,1000)
B = np.random.rand(1000,1000)
s = (A*B).sum() # method 1
s = np.dot(A.ravel(), B.ravel()) # method 2
The second method should be extremely fast, as it doesn't create new copies of A and B but only views into them, so there are no extra memory allocations.
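As a rough check, you can time the two methods yourself with the standard timeit module (exact numbers depend on your machine):
import timeit
import numpy as np
A = np.random.rand(1000, 1000)
B = np.random.rand(1000, 1000)
# method 1 allocates a temporary 1000x1000 array for A*B
t1 = timeit.timeit(lambda: (A * B).sum(), number=100)
# method 2 works on 1-D views, no large temporary
t2 = timeit.timeit(lambda: np.dot(A.ravel(), B.ravel()), number=100)
print(t1, t2)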

Numpy - Different behavior for 1-d and 2-d arrays

I was reviewing some numpy code and came across this issue. numpy is exhibiting different behavior for a 1-d array and a 2-d array. In the first case, it creates a reference, while in the second it creates a deep copy.
Here's the code snippet
import numpy as np
# Case 1: when using 1d-array
arr = np.array([1,2,3,4,5])
slice_arr = arr[:3] # taking first three elements, behaving like reference
slice_arr[2] = 100 # modifying the value
print(slice_arr)
print(arr) # here also the value gets changed
# Case 2: when using 2d-array
arr = np.array([[1,2,3],[4,5,6],[7,8,9]])
slice_arr = arr[:,[0,1]] # taking all rows and first two columns, behaving like deep copy
slice_arr[0,1] = 100 # modifying the value
print(slice_arr)
print() # newline for clarity
print(arr) # here the value doesn't change
Can anybody explain the reason for this behavior?
The reason is that you are not slicing in the same way; it's not about 1D vs 2D.
slice_arr = arr[:3]
Here you are using the slicing operator, so numpy can make a view on your original data and return it.
slice_arr = arr[:,[0,1]]
Here you are using a list of the indices you want, which is not a slice (even if it could be represented by one); in that case, numpy returns a copy.
All of these are getters, so they may return either a view or a copy.
Setters, on the other hand, always modify the original array.
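If you want to check for yourself whether an indexing operation returned a view or a copy, np.shares_memory is handy; a quick sketch:
import numpy as np
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
basic = arr[:, :2]      # basic slicing -> view
fancy = arr[:, [0, 1]]  # integer-array ("fancy") indexing -> copy
print(np.shares_memory(arr, basic))  # True
print(np.shares_memory(arr, fancy))  # False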

numpy.ndarray sent as argument doesn't need loop for iteration?

In this code, np.linspace() assigns 200 evenly spaced numbers from -20 to 20 to inputs.
This function works. What I am not understanding is how could it work. How can inputs be sent as an argument to output_function() without needing a loop to iterate over the numpy.ndarray?
import numpy as np
import matplotlib.pyplot as plt

def output_function(x):
    return 100 - x ** 2

inputs = np.linspace(-20, 20, 200)
plt.plot(inputs, output_function(inputs), 'b-')
plt.show()
numpy works by defining operations on vectors the way that you really want to work with them mathematically. So, I can do something like:
a = np.arange(10)
b = np.arange(10)
c = a + b
And it works as you might hope: each element of a is added to the corresponding element of b, and the result is stored in a new array c. If you want to know how numpy accomplishes this, it's all done via the magic methods in the python data model. Specifically, in my example case, it is the __add__ method of numpy's ndarray that provides the desired behavior.
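To make the mechanism concrete, here is a toy sketch (not numpy's actual implementation) of a class whose __add__ adds elements pairwise:
class Vec:
    def __init__(self, data):
        self.data = list(data)
    def __add__(self, other):
        # called for `self + other`: pair up corresponding elements
        return Vec(x + y for x, y in zip(self.data, other.data))
    def __repr__(self):
        return "Vec({})".format(self.data)

print(Vec([1, 2, 3]) + Vec([10, 20, 30]))  # Vec([11, 22, 33])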
What you want to use is numpy.vectorize, which behaves similarly to the python builtin map.
Here is one way you can use numpy.vectorize:
outputs = np.vectorize(output_function)(inputs)
You asked why it works: it works because numpy arrays can perform operations on their elements en masse. For example:
a = np.array([1,2,3,4]) # gives you a numpy array of 4 elements [1,2,3,4]
b = a - 1 # this operation on a numpy array will subtract 1 from every element resulting in the array [0,1,2,3]
Because of this property of numpy arrays, you can perform certain operations on every element of a numpy array very quickly without using a loop (like the loop you would need for a regular python list).

Is there any way to use the "out" argument of a Numpy function when modifying an array in place?

If I want to get the dot product of two arrays, I can get a performance boost by specifying an array to store the output in, instead of creating a new array (if I am performing this operation many times).
import numpy as np
a = np.array([[1.0,2.0],[3.0,4.0]])
b = np.array([[2.0,2.0],[2.0,2.0]])
out = np.empty([2,2])
np.dot(a, b, out=out)
Is there any way I can take advantage of this feature if I need to modify an array in place? For instance, if I want:
out = np.array([[3.0,3.0],[3.0,3.0]])
out *= np.dot(a,b)
Yes, you can use the out argument to modify an array (e.g. array=np.ones(10)) in-place, e.g. np.multiply(array, 3, out=array).
You can even use in-place operator syntax, e.g. array *= 2.
To confirm that the array was updated in place, you can check the memory address array.ctypes.data before and after the modification.
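A small sketch putting those pieces together (the printed addresses will differ per run, but should match before and after):
import numpy as np
array = np.ones(10)
before = array.ctypes.data
np.multiply(array, 3, out=array)  # result written into array's own buffer
array *= 2                        # in-place operator syntax, also reuses the buffer
after = array.ctypes.data
print(before == after)  # True: no new array was allocated
print(array)            # [6. 6. 6. 6. 6. 6. 6. 6. 6. 6.]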

Swap Array Data in NumPy

I have many large multidimensional NumPy arrays (2D and 3D) used in an algorithm. The algorithm runs for numerous iterations, and during each iteration the arrays are recalculated by performing calculations and saving the results into temporary arrays of the same size. At the end of a single iteration the contents of the temporary arrays are copied into the actual data arrays.
Example:
global A, B  # ndarrays
A_temp = numpy.zeros(A.shape)
B_temp = numpy.zeros(B.shape)
for i in range(num_iters):
    # Calculate new values from A and B, storing them in A_temp and B_temp...
    # Then copy the values from the temps back into A and B
    A[:] = A_temp
    B[:] = B_temp
This works fine; however, it seems a bit wasteful to copy all those values when A and B could just swap.
A, A_temp = A_temp, A
B, B_temp = B_temp, B
However there can be other references to the arrays in other scopes which this won't change.
It seems like NumPy could have an internal method for swapping the internal data pointer of two arrays, such as numpy.swap(A, A_temp). Then all variables pointing to A would be pointing to the changed data.
Even though your way should work just as well (I suspect the problem is somewhere else), you can try doing it explicitly:
import numpy as np
A, A_temp = np.frombuffer(A_temp), np.frombuffer(A)
It's not hard to verify that your method works as well:
>>> import numpy as np
>>> arr = np.zeros(100)
>>> arr2 = np.ones(100)
>>> print(arr.__array_interface__['data'][0], arr2.__array_interface__['data'][0])
152523144 152228040
>>> arr, arr2 = arr2, arr
>>> print(arr.__array_interface__['data'][0], arr2.__array_interface__['data'][0])
152228040 152523144
... pointers successfully switched
Perhaps you could solve this by adding a level of indirection.
You could have an "array holder" class. All that would do is keep a reference to the underlying NumPy array. Implementing a cheap swap operation for a pair of these would be trivial.
If all external references are to these holder objects and not directly to the arrays, none of those references would get invalidated by a swap.
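A minimal sketch of such a holder (the name ArrayHolder is made up for illustration):
import numpy as np

class ArrayHolder:
    """Thin wrapper so a swap only exchanges references, never data."""
    def __init__(self, arr):
        self.arr = arr
    def swap(self, other):
        self.arr, other.arr = other.arr, self.arr

A = ArrayHolder(np.zeros(5))
A_temp = ArrayHolder(np.ones(5))
A.swap(A_temp)
print(A.arr)  # [1. 1. 1. 1. 1.]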
I realize this is an old question, but for what it's worth you could also swap data between two ndarray buffers (without a temp copy) by performing an xor swap:
A_bytes = A.view('ubyte')
A_temp_bytes = A_temp.view('ubyte')
A_bytes ^= A_temp_bytes
A_temp_bytes ^= A_bytes
A_bytes ^= A_temp_bytes
Since this was done on views, if you look at the original A and A_temp arrays (in whatever their original dtype was), their values should be correctly swapped. This is basically equivalent to the numpy.swap(A, A_temp) you were looking for. It's unfortunate that it requires three passes over the data; if this were implemented as a ufunc (maybe it should be), it would be a lot faster.
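As a quick sanity check of the xor trick (a sketch; it assumes matching dtypes and non-overlapping, contiguous buffers):
import numpy as np
A = np.array([1.0, 2.0, 3.0])
A_temp = np.array([4.0, 5.0, 6.0])
addr = A.ctypes.data
A_bytes = A.view('ubyte')
A_temp_bytes = A_temp.view('ubyte')
A_bytes ^= A_temp_bytes
A_temp_bytes ^= A_bytes
A_bytes ^= A_temp_bytes
print(A, A_temp)               # [4. 5. 6.] [1. 2. 3.]
print(A.ctypes.data == addr)   # True: same buffer, swapped contents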
