python two dimensional array with rows and columns - python

Create a two-dimensional array named A with ROWS rows and COLS columns. ROWS and COLS are specified by the user at run time. Fill A with randomly chosen integers from the range [-10, 99], then repeatedly perform the following steps until end-of-file: (1) input an integer x; (2) search for x in A; (3) when x is found in A, output the coordinate (row, col) where x is found, otherwise output the message "x not found!"
I need help: how can we define a two-dimensional array named A with ROWS rows and COLS columns, where ROWS and COLS are specified by the user at run time, in the latest version of Python?
#--------------------------------------
#Hw 7
#E80
#---------------------------------------
A = [[ROWS], [COLS]]  # I really don't know how to define this part
for i in range(-10, 99):  # don't worry about this, it's just the logic, not the actual code
    x = int(input("Enter a number : "))
    if x is found in A:
        output the coordinate (row, col)
    otherwise output "x is not found"

The idiomatic way to create a 2D array in Python is:
rows,cols = 5,10
A = [[0]*cols for _ in range(rows)]
Explanation:
>>> A = [0] * 5 # Multiplication on a list creates a new list with duplicated entries.
>>> A
[0, 0, 0, 0, 0]
>>> A = [[0] * 5 for _ in range(2)] # Create multiple lists, in a list, using a comprehension.
>>> A
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> A[0][0] = 1
>>> A
[[1, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
Note that you do not want to create the outer list by multiplying a list of lists: that duplicates the list reference, so you end up with multiple references to the same inner list:
>>> A = [[0] * 5] * 2
>>> A
[[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]]
>>> A[0][0] = 1
>>> A
[[1, 0, 0, 0, 0], [1, 0, 0, 0, 0]] # both rows changed!
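Putting this together for the original exercise, a minimal sketch might look like the following (detecting end-of-file via EOFError, and reporting every occurrence of x, are both assumptions about the assignment):

import random

ROWS = int(input("Enter number of rows: "))
COLS = int(input("Enter number of columns: "))

# Build the matrix with a comprehension so every row is a distinct list.
A = [[random.randint(-10, 99) for _ in range(COLS)] for _ in range(ROWS)]

while True:
    try:
        x = int(input("Enter a number : "))
    except EOFError:  # end-of-file ends the loop
        break
    found = False
    for row in range(ROWS):
        for col in range(COLS):
            if A[row][col] == x:
                print(f"{x} found at ({row}, {col})")
                found = True
    if not found:
        print(f"{x} not found!")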

Related

Replace all but the first 1 in an array with 0

I am trying to find a way to replace all but the first 1 in each row with 0. As an example:
[[0,1,0,1,0],
[1,0,0,1,0],
[1,1,1,0,1]]
Should become:
[[0,1,0,0,0],
[1,0,0,0,0],
[1,0,0,0,0]]
I found a similar problem, but the solution does not seem to work: numpy: setting duplicate values in a row to 0.
Assuming the array contains only zeros and ones, you can find the position of the first maximum value per row using numpy.argmax, then use advanced indexing to copy just those values into a zeros array:
import numpy as np

arr = np.array([[0, 1, 0, 1, 0],
                [1, 0, 0, 1, 0],
                [1, 1, 1, 0, 1]])
res = np.zeros_like(arr)
idx = (np.arange(len(res)), np.argmax(arr, axis=1))
res[idx] = arr[idx]
res
array([[0, 1, 0, 0, 0],
[1, 0, 0, 0, 0],
[1, 0, 0, 0, 0]])
Try looping through each row of the grid
In each row, find all the 1s. In particular you want their indices (positions within the row). You can do this with a list comprehension and enumerate, which automatically gives an index for each element.
Then, still within that row, go through every 1 except for the first, and set it to zero.
grid = [[0, 1, 0, 1, 0], [1, 0, 0, 1, 0], [1, 1, 1, 0, 1]]
for row in grid:
    ones = [i for i, element in enumerate(row) if element == 1]
    for i in ones[1:]:
        row[i] = 0
print(grid)
Gives: [[0, 1, 0, 0, 0], [1, 0, 0, 0, 0], [1, 0, 0, 0, 0]]
You can use cumsum:
(arr.cumsum(axis=1).cumsum(axis=1) == 1) * 1
This creates a cumulative sum; by then checking whether a value is 1, you can find the first 1 in each row.
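For example, applied to the same array as above (a sketch that assumes the array contains only zeros and ones):

import numpy as np

arr = np.array([[0, 1, 0, 1, 0],
                [1, 0, 0, 1, 0],
                [1, 1, 1, 0, 1]])

# The double cumulative sum equals 1 only at the first 1 in each row.
res = (arr.cumsum(axis=1).cumsum(axis=1) == 1) * 1
print(res)
# [[0 1 0 0 0]
#  [1 0 0 0 0]
#  [1 0 0 0 0]]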

Python: Integrate the number of columns into a variable

I am new to python and could need your help.
I have the variable 'sequence' which shows the optimal order of products.
sequence = seq_2
with for example:
seq_2 = [[0, 0], [1, 0]]
seq_4 = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
What I want to do is to change the '2' according to the number of columns of a matrix I have generated.
For example, if the matrix has 6 columns (= 6 products), the variable should be:
sequence = seq_6
I know that the number of columns can be generated with:
columns = len(df.columns)
But how do I connect that result to my "sequence" variable?
Best regards
Amy
Is this the answer you are looking for?
sequence = 'seq_' + str(len(df.columns))
With str(), 'seq_' can be concatenated with the number of columns in the df.
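Note that this gives you the string 'seq_6', not the list stored in a variable of that name. If you need the list itself, one alternative sketch is to key the sequences by column count in a dictionary instead of building variable names (df and the seq_ lists are taken from the question):

seq_2 = [[0, 0], [1, 0]]
seq_4 = [[0, 0, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]

# Map the number of columns to the corresponding sequence.
sequences = {2: seq_2, 4: seq_4}
sequence = sequences[len(df.columns)]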

Python - Create constant array of unique elements [duplicate]

This question already has answers here: List of lists changes reflected across sublists unexpectedly (17 answers). Closed 5 years ago.
I recently tried to instantiate a 4x4 constant (0's) array by using
a = [[0] * 4] * 4
which instantiates the array a as
[[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0]]
However, this does not create an array of unique elements as altering an element in any of the arrays changes all of them, e.g.:
a[0][0] = 1
alters a to
[[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0],
[1, 0, 0, 0]]
I think I understand why this happens (copying a list copies a reference to the list and does not create a separate copy unless you ask for one, unlike ints, etc.), but am left wondering:
Is there any quick and easy way to instantiate a constant array (without using any external modules, such as NumPy) with unique elements that can later be altered by simple a[i][j] = x addressing?
a = [[0 for _ in xrange(4)] for _ in xrange(4)]
should do it; it will create four separate lists (on Python 3, use range instead of xrange).
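You can check that the rows really are distinct objects, for example:
>>> a = [[0 for _ in range(4)] for _ in range(4)]
>>> len(set(id(row) for row in a))  # four different list objects
4
>>> a[0][0] = 1
>>> a
[[1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]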
Just for completeness: what is going on here? When one does
>>> a = [[0] * 4] * 4
first, one creates a single list [0] * 4 with four 0 elements. Let's call this list li.
Then, doing [li] * 4 creates a list that refers four times to the same object. See:
>>> [id(el) for el in a]
[8696976, 8696976, 8696976, 8696976] # in my case
Hence the (not so) curious result one gets when assigning entry-wise, like so:
>>> a[0][0] = 1
>>> a
[[1, 0, 0, 0],
 [1, 0, 0, 0],
 [1, 0, 0, 0],
 [1, 0, 0, 0]]
A solution is simply to ensure that each element of the outer list really is a distinct object. For example:
#Python2
>>> a = map(lambda _: [0]*4, range(4))
#Python3
>>> a = list(map(lambda _: [0]*4, range(4)))
#Python2&3
>>> a[0][0] = 1
>>> a
[[1, 0, 0, 0],
 [0, 0, 0, 0],
 [0, 0, 0, 0],
 [0, 0, 0, 0]]

What is wrong with the following program code, attempting to initialize a 4 x 4 matrix of integers?

What is wrong with the following program code, attempting to initialize a 4 x 4 matrix of integers? How should the initialization be done?
line = [0] * 4
matrix = [line, line, line, line]
Use a list comprehension:
>>> line = [[0]*4 for _ in xrange(4)]
>>> line
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
Don't do this though:
>>> line = [[0]*4]*4
>>> line
[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
The output looks the same, but the problem here is that all the inner lists are actually the same object repeated 4 times:
>>> [id(x) for x in line]
[156931756, 156931756, 156931756, 156931756]
So, changing one of them is going to affect all:
>>> line[2][0] = 10
>>> line
[[10, 0, 0, 0], [10, 0, 0, 0], [10, 0, 0, 0], [10, 0, 0, 0]]
The same thing applies to your code:
>>> line = [0] * 4
>>> matrix = [line, line, line, line]
>>> [id(x) for x in matrix]
[158521804, 158521804, 158521804, 158521804]
If line contains only immutable objects, then you can change your code to:
>>> matrix = [line[:] for _ in xrange(4)]
But if line itself contains mutable objects, then you'd have to use either copy.deepcopy or, better, create a new line object inside the list comprehension.
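For instance, a minimal sketch of the difference when line itself holds mutable objects (the nested single-element lists are just for illustration):

import copy

line = [[0], [0], [0], [0]]  # a row whose elements are themselves mutable

shallow = [line[:] for _ in range(4)]           # new rows, but inner lists are shared
deep = [copy.deepcopy(line) for _ in range(4)]  # fully independent copies

line[0][0] = 9
print(shallow[0][0])  # [9] - the inner object is shared with line
print(deep[0][0])     # [0] - unaffected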
You could use NumPy for this if you want to perform computations on your matrix:
import numpy as np
zeros = np.zeros([4,4])
The problem is:
>>> line = [0] * 4
>>> matrix = [line, line, line, line]
>>> matrix[0][0] = 5
>>> matrix
[[5, 0, 0, 0], [5, 0, 0, 0], [5, 0, 0, 0], [5, 0, 0, 0]]
You have an array of references to the same vector.
What is wrong here is that you create a list of four references to the line list. If you change any of the sub-lists, or line itself, you affect all sub-lists of matrix, since the sub-lists are essentially the same list.
Here is a little demonstration:
In [108]: line = [0] * 4
In [109]: matrix = [line, line, line, line]
In [110]: line[1]=2
In [111]: matrix
Out[111]: [[0, 2, 0, 0], [0, 2, 0, 0], [0, 2, 0, 0], [0, 2, 0, 0]]
In [112]: matrix[1][3] = 4
In [113]: matrix
Out[113]: [[0, 2, 0, 4], [0, 2, 0, 4], [0, 2, 0, 4], [0, 2, 0, 4]]
In [114]: for row in matrix:
   .....:     print id(row)
   .....:
3065415660
3065415660
3065415660
3065415660
You need to do this:
matrix = [[0 for row in range(4)] for col in range(4)]
Your code runs, but it's not flexible: if you want to create a 5x5 matrix, you have to explicitly add more line objects to the matrix initialization code. Using a comprehension over range (or xrange) is more suitable.
Also, I'm not sure about your use case, but be aware of using the same object (list) for every matrix line: if you change one of its elements, the change shows up in all rows:
matrix[0][0] = 'new value'
print matrix
[['new value', 0, 0, 0], ['new value', 0, 0, 0], ['new value', 0, 0, 0], ['new value', 0, 0, 0]]

Set rows of scipy.sparse matrix that meet certain condition to zeros

I wonder what the best way is to replace, for sparse matrices, rows that do not satisfy a certain condition with zeros. For example (I use plain arrays for illustration):
I want to replace every row whose sum is greater than 10 with a row of zeros
a = np.array([[0,0,0,1,1],
[1,2,0,0,0],
[6,7,4,1,0], # sum > 10
[0,1,1,0,1],
[7,3,2,2,8], # sum > 10
[0,1,0,1,2]])
I want to replace a[2] and a[4] with zeros, so my output should look like this:
array([[0, 0, 0, 1, 1],
[1, 2, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 1, 1, 0, 1],
[0, 0, 0, 0, 0],
[0, 1, 0, 1, 2]])
This is fairly straightforward for dense matrices:
row_sum = a.sum(axis=1)
to_keep = row_sum >= 10
a[to_keep] = np.zeros(a.shape[1])
However, when I try:
s = sparse.csr_matrix(a)
s[to_keep, :] = np.zeros(a.shape[1])
I get this error:
raise NotImplementedError("Fancy indexing in assignment not "
NotImplementedError: Fancy indexing in assignment not supported for csr matrices.
Hence, I need a different solution for sparse matrices. I came up with this:
def zero_out_unfit_rows(s_mat, limit_row_sum):
    row_sum = s_mat.sum(axis=1).T.A[0]
    to_keep = row_sum <= limit_row_sum
    to_keep = to_keep.astype('int8')
    temp_diag = get_sparse_diag_mat(to_keep)
    return temp_diag * s_mat

def get_sparse_diag_mat(my_diag):
    N = len(my_diag)
    my_diags = my_diag[np.newaxis, :]
    return sparse.dia_matrix((my_diags, [0]), shape=(N, N))
This relies on the fact that if we set the 2nd and 4th elements of the diagonal of the identity matrix to zero, then the corresponding rows of the pre-multiplied matrix are set to zero.
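As a quick illustration of that idea (a standalone sketch: the imports and the small example matrix are added here, and sparse.diags is used to build the diagonal):

import numpy as np
from scipy import sparse

m = sparse.csr_matrix(np.arange(1, 13).reshape(4, 3))

# Keep rows 0 and 2; zero the diagonal entries for rows 1 and 3.
keep = np.array([1, 0, 1, 0], dtype='int8')
d = sparse.diags(keep)

print((d * m).toarray())  # rows 1 and 3 come out as all zeros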
However, I feel that there is a better, more scipynic, solution. Is there a better solution?
Not sure if it is very scithonic, but a lot of the operations on sparse matrices are better done by accessing the guts directly. For your case, I personally would do:
import numpy as np
import scipy.sparse as sps

a = np.array([[0, 0, 0, 1, 1],
              [1, 2, 0, 0, 0],
              [6, 7, 4, 1, 0],  # sum > 10
              [0, 1, 1, 0, 1],
              [7, 3, 2, 2, 8],  # sum > 10
              [0, 1, 0, 1, 2]])
sps_a = sps.csr_matrix(a)
# get sum of each row:
row_sum = np.add.reduceat(sps_a.data, sps_a.indptr[:-1])
# set values to zero
row_mask = row_sum > 10
nnz_per_row = np.diff(sps_a.indptr)
sps_a.data[np.repeat(row_mask, nnz_per_row)] = 0
# ask scipy.sparse to remove the zeroed entries
sps_a.eliminate_zeros()
>>> sps_a.toarray()
array([[0, 0, 0, 1, 1],
[1, 2, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 1, 1, 0, 1],
[0, 0, 0, 0, 0],
[0, 1, 0, 1, 2]])
>>> sps_a.nnz # it does remove the entries, not simply set them to zero
10
