How do I compute with (numpy) arrays eloquently in Python

How can I express this construct in a more efficient way?
x = [2, 4, 6, 8, 10]
for p in x:
    x = x/2
print(x)
there has to be a good way to do this.

If you are trying to divide every element of x by 2, then the following will do it:
x = np.array([2, 4, 6, 8, 10])
x //= 2
The resulting value of x is array([1, 2, 3, 4, 5]).
Note that the above uses integer (floor) division, done in place. (With /=, current NumPy refuses to store the floating-point result of true division back into an integer array.) If you want floating-point division, either make x a floating-point array:
x = np.array([2, 4, 6, 8, 10], dtype='float64')
x /= 2
or divide by a float, which returns a new floating-point array:
x = x / 2.0
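A quick check of the floating-point variant (the exact repr formatting may vary slightly between NumPy versions):
>>> np.array([2, 4, 6, 8, 10]) / 2.0
array([1., 2., 3., 4., 5.])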

If it is a numpy array, you can do it all at once:
In [4]: from numpy import array
In [5]: x = array([2, 4, 6, 8, 10])
In [6]: print(x // 2)
[1 2 3 4 5]
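For completeness: if x stays a plain Python list (as in the original question) rather than a numpy array, a list comprehension does the same element-wise division; a minimal sketch:
x = [2, 4, 6, 8, 10]
x = [p / 2 for p in x]
print(x)  # [1.0, 2.0, 3.0, 4.0, 5.0]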

Related

How to reverse a numpy array and then also switch each 'pair' of positions?

For example, how would you do this sequence of operations on a 1D numpy array, x:
[1,2,3,4,5,6,7,8]
[8,7,6,5,4,3,2,1]
[7,8,5,6,3,4,1,2]
The transition from state 1 to state 2 can be done with numpy.flip(x):
x = numpy.flip(x)
How can you go from this intermediate state to the final state, in which each 'pair' of positions is swapped?
Notes: this is a variable-length array, it will always be 1D, and it can be assumed that the length is always even.
In that case you only need to reshape, reverse and flatten:
>>> ar = np.arange(1, 9)
>>> ar.reshape(-1, 2)[::-1].ravel()
array([7, 8, 5, 6, 3, 4, 1, 2])
This always creates a copy: after the reversal the elements of the original array are no longer contiguous in memory, so ndarray.ravel() cannot return a view and has to copy the data.
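A quick way to confirm the copy (assuming ar is the array from above) is np.shares_memory:
>>> out = ar.reshape(-1, 2)[::-1].ravel()
>>> np.shares_memory(ar, out)
False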
If you need to go from state 2 to state 3 instead:
>>> ar = ar[::-1]
>>> ar # state 2
array([8, 7, 6, 5, 4, 3, 2, 1])
>>> ar.reshape(-1, 2)[:, ::-1].ravel()
array([7, 8, 5, 6, 3, 4, 1, 2])
This should work (assuming you have an even number of elements; otherwise you might want to check that first):
x = x.reshape((len(x)//2, 2))  # split into two columns
x[:,0], x[:,1] = x[:,1], x[:,0].copy()  # swap the columns
x = x.reshape(-1)  # flatten back into a 1D array
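For example, starting from the state-2 array, these three lines give the expected result:
>>> x = np.array([8, 7, 6, 5, 4, 3, 2, 1])
>>> x = x.reshape((len(x)//2, 2))
>>> x[:,0], x[:,1] = x[:,1], x[:,0].copy()
>>> x.reshape(-1)
array([7, 8, 5, 6, 3, 4, 1, 2])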
You can do:
import numpy as np
arr = np.array([8,7,6,5,4,3,2,1])
result = np.vstack((arr[1::2], arr[::2])).T.flatten()
Output:
array([7, 8, 5, 6, 3, 4, 1, 2])

numpy union that preserves order

Two arrays have been produced by dropping random values from an original array (with unique and unsorted elements):
orig = np.array([2, 1, 7, 5, 3, 8])
Let's say these arrays are:
a = np.array([2, 1, 7, 8])
b = np.array([2, 7, 3, 8])
Given just these two arrays, I need to merge them so that the dropped values end up in their correct positions.
The result should be:
result = np.array([2, 1, 7, 3, 8])
Another example:
a1 = np.array([2, 1, 7, 5, 8])
b1 = np.array([2, 5, 3, 8])
# the result should be: [2, 1, 7, 5, 3, 8]
Edit:
This question is ambiguous because it is unclear what to do in this situation:
a2 = np.array([2, 1, 7, 8])
b2 = np.array([2, 5, 3, 8])
# the result should be: ???
What I have in reality + solution:
Elements of these arrays are indices of two data frames containing time series. I can use pandas.merge_ordered in order to achieve the ordered indices as I want.
My previous attempts:
numpy.union1d is not suitable, because it always sorts:
np.union1d(a, b)
# array([1, 2, 3, 7, 8]) - not what I want
Maybe pandas could help?
These methods use the first array in full, and then append the leftover values of the second one:
pd.concat([pd.Series(index=a, dtype=int), pd.Series(index=b, dtype=int)], axis=1).index.to_numpy()
pd.Index(a).union(b, sort=False).to_numpy() # jezrael's version
# array([2, 1, 7, 8, 3]) - not what I want
The idea is to interleave both arrays (flatten them in column-major order) and then remove duplicates while keeping the order of first appearance:
a = np.array([2, 1, 7, 8])
b = np.array([2, 7, 3, 8])
c = np.vstack((a, b)).ravel(order='F')
_, idx = np.unique(c, return_index=True)
c = c[np.sort(idx)]
print(c)
[2 1 7 3 8]
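Wrapped up as a small reusable helper (ordered_union is just a hypothetical name; np.vstack requires both arrays to have the same length, as in the example above):
import numpy as np

def ordered_union(a, b):
    # interleave a and b column-wise: [a0, b0, a1, b1, ...]
    c = np.vstack((a, b)).ravel(order='F')
    # np.unique returns the index of the first occurrence of each value;
    # sorting those indices restores the order of appearance
    _, idx = np.unique(c, return_index=True)
    return c[np.sort(idx)]

print(ordered_union(np.array([2, 1, 7, 8]), np.array([2, 7, 3, 8])))
# [2 1 7 3 8]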
Pandas solution:
c = pd.DataFrame([a,b]).unstack().unique()
print(c)
[2 1 7 3 8]
If the arrays have a different number of values:
a = np.array([2, 1, 7, 8])
b = np.array([2, 7, 3])
c = pd.DataFrame({'a':pd.Series(a), 'b':pd.Series(b)}).stack().astype(int).unique()
print(c)
[2 1 7 3 8]

A tricky way of inserting a numpy array into another array by replacing elements

I want the inserted array to replace as many elements at the start of the target array as the inserted array contains, just like this:
x = np.array([1, 2, 3, 4, 5])
y = np.array([6, 7, 8])
# Doing unknown stuff
x = array([6, 7, 8, 4, 5])
Is there some numpy method or something tricky to implement this?
I would also like to know if this can be done regardless of how the lengths of the arrays relate to each other, for instance like this:
x = np.array([1, 2])
y = np.array([6, 7, 8])
# Doing unknown stuff
x = array([6, 7])
Another solution:
x[:len(y)] = y[:len(x)]
First case
>>> x
array([6, 7, 8, 4, 5])
Second case
>>> x
array([6, 7])
Here is one solution:
x = np.append(y, x[len(y):])
(This assumes len(y) <= len(x); if y is longer than x, the slice x[len(y):] is empty and the result is simply y.)
NumPy can do this relatively fast with concatenate:
x = np.array([1, 2, 3, 4, 5])
y = np.array([6, 7, 8])
xnew = np.concatenate((y, x[len(y):]))
>>> xnew
array([6, 7, 8, 4, 5])
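If you want to verify the speed claim yourself, a rough timing sketch with timeit (array sizes are arbitrary and no specific numbers are claimed here):
import timeit

setup = "import numpy as np; x = np.arange(1_000_000); y = np.arange(1_000)"

# build a new array with concatenate vs. overwrite the slice in place
print(timeit.timeit("np.concatenate((y, x[len(y):]))", setup=setup, number=1000))
print(timeit.timeit("x[:len(y)] = y", setup=setup, number=1000))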

Inconsistent Numpy array aliasing behavior

The following behavior is expected and is what I get. This is consistent with how aliasing works for native Python objects like lists.
>>> x = np.array([1, 2, 3])
>>> y = x
>>> x
array([1, 2, 3])
>>> y
array([1, 2, 3])
>>> x = x + np.array([2, 3, 4])
>>> x
array([3, 5, 7])
>>> y
array([1, 2, 3])
But the following behavior is unexpected when x = x + np.array([2, 3, 4]) is changed to x += np.array([2, 3, 4]):
>>> x += np.array([2, 3, 4])
>>> x
array([3, 5, 7])
>>> y
array([3, 5, 7])
The NumPy version on my machine is 1.16.4. Is this a bug or a feature? If it is a feature, how does x = x + np.array([2, 3, 4]) differ from x += np.array([2, 3, 4])?
Your line y = x doesn't create a copy of the array; it simply tells y to point to the same data as x, which you can see if you look at their ids:
x = np.array([1,2,3])
y = x
print(id(x), id(y))
140644627505280 140644627505280
x = x + np.array([2, 3, 4]) will do a reassignment of x to a new id, while x += np.array([2, 3, 4]) will modify it in place. Thus, the += will also modify y, while x = x + ... won't.
x += np.array([2, 3, 4])
print(id(x))
print(x, y)
x = x + np.array([2, 3, 4])
print(id(x))
print(x, y)
140644627505280
[3 5 7] [3 5 7]
140644627175744
[ 5 8 11] [3 5 7]
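If the two names are supposed to stay independent, make an explicit copy instead of an alias; a minimal sketch:
import numpy as np

x = np.array([1, 2, 3])
y = x.copy()                    # y gets its own data buffer
x += np.array([2, 3, 4])
print(x, y)                     # [3 5 7] [1 2 3]
print(np.shares_memory(x, y))   # False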

How to compare two arrays and create a new array with multiple conditional statements?

I have two arrays x and y with the same dimensions. I want to combine multiple comparisons of the values in these arrays (with an or operator) and generate a new array with the same dimensions, holding values assigned by me. Here is a little demonstration of what I am trying to do:
In [1]: import numpy
In [2]: import numpy as np
In [3]: x = np.array([5, 2, 3, 1, 4, 5])
In [4]: y = np.array([2, 3, 3, 8, 8, 6])
In [5]: result_array = [y > 3] or [x < 5]
In [6]: print(result_array)
[array([False, False, False, True, True, True], dtype=bool)]
I am able to combine the comparisons into a new array. However, I would like to replace True with the value 10. When I try this line, it gives me an error:
result_array = 10 if [y > 3] or [x < 5]:
File "<ipython-input-21-780bf095bc56>", line 1
result_array = 10 if [y > 3] or [x < 5]:
^
SyntaxError: invalid syntax
What I am expecting is:
[array([False, False, False, 10, 10, 10], dtype=bool)]
Any help is appreciated
You need to convert your boolean result to integers so that the array can hold the value 10:
x = np.array([5, 2, 3, 1, 4, 5])
y = np.array([2, 3, 3, 8, 8, 6])
result_array = np.logical_or(y > 3, x < 5)
res = result_array.astype(int)
res[result_array] = 10
print(res)
Output:
[ 0 10 10 10 10 10]
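The same result can also be produced in one step with np.where and the combined condition (a variant, not part of the original answer):
result = np.where((y > 3) | (x < 5), 10, 0)
print(result)  # [ 0 10 10 10 10 10]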
You can get close to the result you said you were expecting using this:
import numpy as np
x = np.array([5, 2, 3, 1, 4, 5])
y = np.array([2, 3, 3, 8, 8, 6])
result_array = np.where(y > 3, 10, False)
print(result_array)
Result:
[ 0 0 0 10 10 10]
Note that the output contains 0s instead of False, because a numeric array can only hold numbers.
