Regarding direct multiplication of a scalar with a list - python

The snippet below was supposed to store/print instantaneous voltage/current values generated by NumPy's sin and arange functions. I first had tolist() after those functions, but multiplying by the magnitude (230 in the case of voltage, 5 in the case of current) had no effect on the values unless I removed the tolist(). Why does that occur?
import numpy as np

V_magnitude = 230
I_magnitude = 5
voltage = V_magnitude*np.sin(np.arange(0,10,0.01)).tolist()
current = I_magnitude*np.sin(np.arange(-0.3,9.7,0.01))
What I've tried:
-> making the magnitudes the second operand of the multiplication
-> with and without tolist()

When you multiply a Python list by X, you repeat the list: the result contains the original values X times over, unchanged.
When you multiply a numpy array by X, you multiply each value inside the array by X.
Try it with a simple example
import numpy as np

lst = [1, 2, 3]
print(lst * 3)            # [1, 2, 3, 1, 2, 3, 1, 2, 3]
print(np.array(lst) * 3)  # [3 6 9]
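Applied to the original snippet, one way to keep the scaling and still end up with a list is to multiply while the data is still an array and only then call tolist(); a small sketch of that fix:
import numpy as np

V_magnitude = 230
# Scale the NumPy array first, then convert; 230 * array scales the values,
# whereas 230 * list would just repeat the list 230 times.
voltage = (V_magnitude * np.sin(np.arange(0, 10, 0.01))).tolist()
print(voltage[:3])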

Related

List of lists equivalent for numpy 2D indexing [:,0]

I need to access the first element of each list within a list. Usually I do this via numpy arrays by indexing:
import numpy as np
nparr=np.array([[1,2,3],[4,5,6],[7,8,9]])
first_elements = nparr[:,0]
because:
print(nparr[0,:])
[1 2 3]
print(nparr[:,0])
[1 4 7]
Unfortunately I have to tackle non-rectangular dynamic arrays now, so numpy won't work.
But Python's standard lists behave strangely (at least to me):
pylist=[[1,2,3],[4,5,6],[7,8,9]]
print(pylist[0][:])
[1, 2, 3]
print(pylist[:][0])
[1, 2, 3]
I guess either lists don't support this (which would lead to a second question: what to use instead?) or I got the syntax wrong?
You have a few options. Here's one.
pylist=[[1,2,3],[4,5,6],[7,8,9]]
print(pylist[0]) # [1, 2, 3]
print([row[0] for row in pylist]) # [1, 4, 7]
Alternatively, if you want to transpose pylist (make its rows into columns), you could do the following.
pylist_transpose = [*zip(*pylist)]
print(pylist_transpose[0]) # (1, 4, 7)
pylist_transpose will always be rectangular (a list of tuples), with a number of rows equal to the length of the shortest row in pylist.
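Since the question is about non-rectangular lists, here is how both options behave on a ragged input (a small sketch, not part of the original answer):
ragged = [[1, 2, 3], [4, 5], [6]]
# The comprehension works as long as every row is non-empty:
print([row[0] for row in ragged])  # [1, 4, 6]
# zip truncates to the shortest row, so only one full "column" survives:
print([*zip(*ragged)])             # [(1, 4, 6)]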

How to Expand or "scale up" a 1d array?

I have a piece of C code that can only handle an array of size 20. The array that my instrument outputs is much smaller than what the function requires. Is there a numpy or math function that can "scale up" an array to any specific size while maintaining its structural integrity? For example:
I have an 8-element array that is basically a two-ramp "sawtooth", meaning its values are:
[1 2 3 4 1 2 3 4]
What I need for the C code is a 20-element array. So I can scale it by padding linear intervals of the original array with 0s, like:
[1,0,0,2,0,0,3,0,0,4,0,0,1,0,0,2,0,0,3,0]
so it adds up to 20 elements. I would think this process is the opposite of "decimation". (I apologize, I'm simplifying the process so it will be a bit more understandable.)
Based on your example, I guess the following approach could be tweaked to do what you want:
-> Upsample with 0s: upsampled_l = [[i, 0, 0] for i in l], with l being your initial list.
-> Flatten the array: flat_l = flatten(upsampled_l), using a method from How to make a flat list out of a list of lists? for instance.
-> Get the expected length: final_l = flat_l[:20]
For instance, the following code gives the output you gave in your example:
l = [1, 2, 3, 4, 1, 2, 3, 4]
upsampled_l = [[i, 0, 0] for i in l]
flat_l = [item for sublist in upsampled_l for item in sublist]
final_l = flat_l[:20]
However, the final element of the initial list (the second 4) is missing from the final list. Perhaps it's worth upsampling with only one 0 in between ([i, 0] instead of [i, 0, 0]) and finally doing final_l.extend([0 for _ in range(20 - len(final_l))]).
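For reference, that [i, 0] variant might look like the following sketch:
l = [1, 2, 3, 4, 1, 2, 3, 4]
# one zero between samples keeps every original value within the first 20 slots
upsampled_l = [[i, 0] for i in l]
final_l = [item for sublist in upsampled_l for item in sublist][:20]
# pad with zeros up to the required length of 20
final_l.extend([0 for _ in range(20 - len(final_l))])
print(final_l)  # [1, 0, 2, 0, 3, 0, 4, 0, 1, 0, 2, 0, 3, 0, 4, 0, 0, 0, 0, 0]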
Hope this helps!
You can manage it in a one-liner by adding zeros as another axis, then flattening:
sm = np.array([1, 2, 3, 4, 1, 2, 3, 4])
# two zero columns after the value column give the [1, 0, 0, 2, 0, 0, ...] pattern; trim to 20
np.concatenate([np.reshape(sm, (8, 1)), np.zeros((8, 2), dtype=sm.dtype)], axis=1).flatten()[:20]
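If "maintaining its structural integrity" means resampling the waveform rather than inserting zeros, np.interp can stretch the array to any length by linear interpolation; this sketch assumes interpolated values are acceptable to the C code:
import numpy as np

sm = np.array([1, 2, 3, 4, 1, 2, 3, 4])
target_len = 20
# interpolate the 8 samples onto 20 evenly spaced positions over the same range
scaled = np.interp(np.linspace(0, len(sm) - 1, target_len),
                   np.arange(len(sm)), sm)
print(scaled.round(2))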

Python: Obtain a matrix containing all differences between all elements from another matrix

I need to calculate all differences (in absolute value) between all combinations of the elements from a given matrix. For example, given some matrix M such as
M = [[1 2]
     [5 8]]
I need to obtain a matrix X defined as
X = [[1, 4, 7]
     [1, 3, 6]
     [4, 3, 3]
     [7, 6, 3]]
where I associated each row with each element of M, and each column with its subtraction from every other element (except itself). I've been trying to write some for loops, but I haven't been able to get anything close to what I want.
In my code, I used numpy, thus defining all the matrices as
M = np.zeros([nx, ny])
and then replacing the values as the code progresses such as
M[i, j] = 5
While this can be easily done by looping over elements, here is a faster and more pythonic way.
import numpy as np
M = np.array([[1,2],[5,8]])
# flatten M matrix
flatM = M.reshape(-1)
# get all pairwise differences
X = flatM[:,None] - flatM[None,:]
# remove diagonal elements, since you don't want differences between same elements
mask = np.where(~np.eye(X.shape[0],dtype=bool))
X = X[mask]
# reshape X into desired form
X = X.reshape(len(flatM),-1)
# take absolute values
X = np.abs(X)
print(X)
Removal of the diagonal elements was done using the approach suggested here: How to get indices of non-diagonal elements of a numpy array?
Essentially the same approach as the very good answer of @wizzzz1, but with a simpler syntax and faster execution:
a = M.ravel()
X = (abs(a - a[:, None])
     [~np.eye(M.size, dtype=bool)]
     .reshape(-1, M.size-1)
     )
Output:
array([[1, 4, 7],
       [1, 3, 6],
       [4, 3, 3],
       [7, 6, 3]])
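For reference, np.subtract.outer builds the same pairwise-difference matrix before the diagonal is removed, so an equivalent sketch is:
import numpy as np

M = np.array([[1, 2], [5, 8]])
flatM = M.ravel()
# all pairwise absolute differences, then drop the diagonal (self-differences)
D = np.abs(np.subtract.outer(flatM, flatM))
X = D[~np.eye(M.size, dtype=bool)].reshape(M.size, -1)
print(X)
# [[1 4 7]
#  [1 3 6]
#  [4 3 3]
#  [7 6 3]]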

Array prints like a list but it's a single integer in the variable explorer? Why?

When I print out the following code, Q prints like it's supposed to (3 5 7 9), the sum of each number with the next one. But in the variable explorer it's a single integer. I want to get the result Q as an array like
Q = [3, 5, 7, 9]
import numpy as np
A = [1, 2, 3, 4, 5]
for i in range(0,4):
    Q = np.array(A[i]+A[i+1])
    print(Q)

for i in range(0,4):
    Q = []
    Q.append(Q[i] + A[i]+A[i+1])
    print(Q)
This also doesn't work.
Currently you're just re-declaring Q each time, and it's never added to any collection of values.
Instead, start with an empty list (or perhaps a numpy array in your case) outside of your loop, and append the values to it on each loop iteration, as sketched below.
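With a plain list, that fix might look like this minimal sketch:
A = [1, 2, 3, 4, 5]
Q = []                       # create the list once, outside the loop
for i in range(4):
    Q.append(A[i] + A[i+1])  # append each pairwise sum
print(Q)                     # [3, 5, 7, 9]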
Q is a numpy array, but it's not what you're expecting!
It has no dimensions and only references a single value
>>> type(Q)
<class 'numpy.ndarray'>
>>> print(repr(Q))
array(9)
>>> import numpy as np
>>> A = [1, 2, 3, 4, 5]
>>> Q = np.array([], dtype=np.uint8)
>>> for i in range(4):
...     Q = np.append(Q, A[i]+A[i+1])  # reassign each time for np
...
>>> print(Q)
[3 5 7 9]
Note that numpy arrays should be reassigned via np.append, while a normal python list has a .append() method (which does not return the list, but directly appends to it)
>>> l = ['a', 'b', 'c'] # start with a list of values
>>> l.append('d') # use the append method
>>> l # display resulting list
['a', 'b', 'c', 'd']
If you're not forced to use a numpy array to begin with, this can be done with a list comprehension
The resulting list can also be made into a numpy array afterwards
>>> [(x + x + 1) for x in range(1, 5)]
[3, 5, 7, 9]
All together with simplified math
>>> np.array([x*2+3 for x in range(4)])
array([3, 5, 7, 9])
If you want to use Numpy, then use Numpy. Start with a Numpy array (one-dimensional, containing the values), which looks like this:
A = np.array([1, 2, 3, 4, 5])
(Yes, you initialize it from the list).
Or you can create that kind of patterned data using Numpy's built-in tool:
A = np.arange(1, 6) # it works similarly to the built-in `range` type,
# but it does create an actual array.
Now we can get the values to use on the left-hand and right-hand sides of the addition:
# You can slice one-dimensional Numpy arrays just like you would lists.
# With more dimensions, you can slice in each dimension.
X = A[:-1]
Y = A[1:]
And add the values together element-wise:
Q = X + Y # yes, really that simple!
And that last line is the reason you would use Numpy to solve a problem like this. Otherwise, just use a list comprehension:
A = list(range(1, 6)) # same as [1, 2, 3, 4, 5]
# Same slicing, but now we have to do more work for the addition,
# by explaining the process of pairing up the elements.
Q = [x + y for x, y in zip(A[:-1], A[1:])]

Is there any easy way to sparsely store a matrix with a redundant pattern in python?

The type of matrix I am dealing with was created from a vector as shown below:
Start with a 1-d vector V of length L.
To create a matrix A from V with N rows, make the i'th column of A the N entries of V starting from the i'th entry of V, so long as there are enough entries left in V to fill up the column. This means A has L - N + 1 columns.
Here is an example:
V = [0, 1, 2, 3, 4, 5]
N = 3
A = [0 1 2 3
     1 2 3 4
     2 3 4 5]
Representing the matrix this way requires more memory than my machine has. Is there any reasonable way of storing this matrix sparsely? I am currently storing N * (L - N + 1) values, when I only need to store L values.
You can take a view of your original vector as follows:
>>> import numpy as np
>>> from numpy.lib.stride_tricks import as_strided
>>>
>>> v = np.array([0, 1, 2, 3, 4, 5])
>>> n = 3
>>>
>>> a = as_strided(v, shape=(n, len(v)-n+1), strides=v.strides*2)
>>> a
array([[0, 1, 2, 3],
       [1, 2, 3, 4],
       [2, 3, 4, 5]])
This is a view, not a copy of your original data, e.g.
>>> v[3] = 0
>>> v
array([0, 1, 2, 0, 4, 5])
>>> a
array([[0, 1, 2, 0],
       [1, 2, 0, 4],
       [2, 0, 4, 5]])
But you have to be careful not to do any operation on a that triggers a copy, since that would send your memory use through the roof.
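On NumPy 1.20 and newer, the same read-only view can be built with np.lib.stride_tricks.sliding_window_view, which is harder to misuse than as_strided; a sketch under that version assumption:
>>> from numpy.lib.stride_tricks import sliding_window_view
>>> v = np.array([0, 1, 2, 3, 4, 5])
>>> n = 3
>>> # each row is a window of length len(v)-n+1; the result is a read-only view
>>> sliding_window_view(v, len(v) - n + 1)
array([[0, 1, 2, 3],
       [1, 2, 3, 4],
       [2, 3, 4, 5]])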
If you're already using numpy, use its strided or sparse arrays, as Jaime explained.
If you're not already using numpy, you may want to strongly consider using it.
If you need to stick with pure Python, there are three obvious ways to do this, depending on your use case.
For strided or sparse-but-clustered arrays, you could do effectively the same thing as numpy.
Or you could use a simple run-length-encoding scheme, plus maybe a higher-level list of runs, or a list of pointers to every Nth element, or even a whole stack of such lists (one for every 100 elements, one for every 10000, etc.).
But for arrays whose non-default values are scattered rather than clustered, the easiest thing is to simply store a dict or defaultdict mapping indices to values (see the sketch below). Random-access lookups and updates are still O(1), albeit with a higher constant factor, and the storage you waste keeping (in effect) a hash, key, and value instead of just a value for each non-default element is more than made up for by not storing the default elements at all, as long as fewer than roughly a third of the elements are non-default.
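A minimal sketch of that dict-based option (the class name and interface here are illustrative, not from the original answer):
class SparseVector:
    """Store only the non-default entries of a conceptually long vector."""

    def __init__(self, length, default=0):
        self.length = length
        self.default = default
        self._data = {}  # index -> value, only for non-default entries

    def __getitem__(self, i):
        return self._data.get(i, self.default)

    def __setitem__(self, i, value):
        if value == self.default:
            self._data.pop(i, None)  # don't store default values at all
        else:
            self._data[i] = value

v = SparseVector(10**6)
v[3] = 7
print(v[3], v[4])  # 7 0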
