How does this Python HackerRank function work? - python

I am a complete beginner in programming and I just started learning my first language, Python. Lately, I've been practising solving problems on HackerRank and I got stuck on a "diagonal difference" problem.
The problem is completely new to me, so I searched the web for some answers and came across this function that somebody posted on GitHub.
def diagonalDifference(arr):
    prim = 0
    sec = 0
    length = len(arr[0])
    i = 0  #what does i=0 even do here?
    for count in range(length):
        prim += arr[count][count]  #don't understand what "[count][count]" mean
        sec += arr[count][(length-count-1)]  #don't understand this either
    return abs(prim-sec)

Here is the same code with further explanation. Basically, this function sums the elements of the upper-left to bottom-right diagonal, storing the running total in prim, and sums the elements of the upper-right to bottom-left diagonal, storing the running total in sec. Then the absolute value of the difference is returned. For an array, the indexes are arr[row][column], each running from 0 to one less than the length of the array. Hope it helps.
import numpy as np

def diagonalDifference(arr):
    prim = 0
    sec = 0
    length = len(arr[0])
    for i in range(length):
        print("Iteration:", i,
              "UL to BR Diagonal:", arr[i][i],
              "UR to BL Diagonal:", arr[i][(length-i-1)])
        # Get value of arr in the ith row and ith column (i.e. the UL to BR diagonal)
        # Add to the cumulative sum
        prim = prim + arr[i][i]
        # Get the value of arr in the ith row and the (length-i-1)th column
        # Columns traverse right to left (i.e. the UR to BL diagonal)
        sec = sec + arr[i][(length-i-1)]
    print("UL to BR Diagonal Sum:", prim,
          "----",
          "UR to BL Diagonal Sum:", sec)
    # Take the absolute value of the difference between the running totals
    return abs(prim-sec)

a = np.array([[1, 2, 4], [3, 4, 6], [3, 8, 1]])
print(a)
diagonalDifference(a)
Output:
[[1 2 4]
 [3 4 6]
 [3 8 1]]
Iteration: 0 UL to BR Diagonal: 1 UR to BL Diagonal: 4
Iteration: 1 UL to BR Diagonal: 4 UR to BL Diagonal: 4
Iteration: 2 UL to BR Diagonal: 1 UR to BL Diagonal: 3
UL to BR Diagonal Sum: 6 ---- UR to BL Diagonal Sum: 11
(The call diagonalDifference(a) returns abs(6 - 11) = 5, but that value is not shown above because the call is not wrapped in print.)

First of all, i here is unnecessary. Now, let's say we have a square matrix:
arr = [[1, 2, 4],
       [3, 5, 8],
       [6, 2, 1]]
The indices will be:
[[(0,0), (0,1), (0,2)],
 [(1,0), (1,1), (1,2)],
 [(2,0), (2,1), (2,2)]]
So the primary diagonal is [(0,0), (1,1), (2,2)]
And the secondary diagonal is [(0,2), (1,1), (2,0)]
Now, in the function:
length = len(arr[0])
arr[0] is [1, 2, 4], i.e. the first row, so length = 3.
for count in range(length):
so count will take the values [0, 1, 2].
Now, over all the iterations:
arr[count][count] will yield arr[0][0], arr[1][1] and arr[2][2],
hence giving the first diagonal.
And
arr[count][(length-count-1)] will yield arr[0][(3-0-1)], arr[1][(3-1-1)]
and arr[2][(3-2-1)], i.e. arr[0][2], arr[1][1] and arr[2][0],
which is the second diagonal.
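If numpy is available anyway (as in the answer above), the same computation can be written more compactly; this is just an illustrative sketch using np.trace and np.fliplr, not the HackerRank template itself:
import numpy as np

def diagonal_difference_np(arr):
    arr = np.asarray(arr)
    prim = np.trace(arr)             # sum of arr[i][i]
    sec = np.trace(np.fliplr(arr))   # flipping left-right turns the secondary diagonal into the main one
    return abs(prim - sec)

print(diagonal_difference_np([[1, 2, 4], [3, 5, 8], [6, 2, 1]]))  # prints 8, i.e. |7 - 15|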

Related

Error when trying to implement MERGE algorithm merging two sorted lists of integers in Python?

I'm new to both algorithms AND programming.
As an intro to merge sort, the chapter first introduces the MERGE algorithm by itself. It merges and sorts an array consisting of 2 sorted sub-arrays.
I did the pseudocode on paper according to the book:
Source: "Introduction to Algorithms, Third Edition" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest and Clifford Stein.
Since I am implementing it in Python 3, I had to change some lines, given that indexing in Python starts at 0, unlike in the pseudocode example of the book.
Keep in mind that the input is one array that contains 2 SORTED sub-arrays, which are then merged, sorted and returned. I kept the prints in my code, so you can see my checks...
#!/anaconda3/bin/python3
import math
import argparse

# For now only MERGE slides ch 2 -- I'm defining p, q and r WITHIN the function
# But for MERGE_SORT p, q and r are defined as parameters!

def merge(ar):
    '''
    Takes as input an array. This array consists of 2 subarrays that ARE ALREADY sorted
    (small to large). When splitting the array into half, the left
    part will be longer by one if not divisible by 2. These subarrays will be
    called left and right. Each of the subarrays must already be sorted. Merge() then
    merges these sorted arrays into one big sorted array. The sorted array is returned.
    '''
    print(ar)
    p = 0  # for now defining always as 0
    if len(ar) % 2 == 0:
        q = len(ar)//2 - 1  # because indexing starts from ZERO in py
    else:
        q = len(ar)//2  # left sub array will be 1 item longer
    r = len(ar) - 1  # again -1 because indexing starts from ZERO in py
    print('p', p, 'q', q, 'r', r)
    # let's see if n1 and n2 check out
    n_1 = q - p + 1  # length of left subarray
    n_2 = r - q      # length of right subarray
    print('n1 is: ', n_1)
    print('n2 is: ', n_2)
    left = [0]*(n_1+1)  # initiating zero list of length n1
    right = [0]*(n_2+1)
    print(left, len(left))
    print(right, len(right))
    # filling left and right
    for i in range(n_1):  # because last value will always be infinity
        left[i] = ar[p+i]
    for j in range(n_2):
        right[j] = ar[q+j+1]
        #print(ar[q+j+1])
        #print(right[j])
    # inserting infinity at last index for each subarray
    left[n_1] = math.inf
    right[n_2] = math.inf
    print(left)
    print(right)
    # merging: initiating indexes at 0
    i = 0
    j = 0
    print('p', p)
    print('r', r)
    for k in range(p, r):
        if left[i] <= right[j]:
            ar[k] = left[i]
            # increase i
            i += 1
        else:
            ar[k] = right[j]
            # increase j
            j += 1
    print(ar)

#############################################################################################################################
# Adding parser
#############################################################################################################################
parser = argparse.ArgumentParser(description='MERGE algorithm from ch 2')
parser.add_argument('-a', '--array', type=str, metavar='', required=True, help='One list of integers composed of 2 sorted halves. Sorting must start from smallest to largest for each of the halves.')
args = parser.parse_args()

args_list_st = args.array.split(',')  # list of strings
args_list_int = []
for i in args_list_st:
    args_list_int.append(int(i))

if __name__ == "__main__":
    merge(args_list_int)
The problem:
When I try to sort the array as shown in the book, the merged array that is returned contains two 6s and the 7 is lost.
$ ./2.merge.py -a=2,4,5,7,1,2,3,6
[2, 4, 5, 7, 1, 2, 3, 6]
p 0 q 3 r 7
n1 is: 4
n2 is: 4
[0, 0, 0, 0, 0] 5
[0, 0, 0, 0, 0] 5
[2, 4, 5, 7, inf]
[1, 2, 3, 6, inf]
p 0
r 7
[1, 2, 2, 3, 4, 5, 6, 6]
This does, however, not happen with arrays whose last number is higher than 6.
$ ./2.merge.py -a=2,4,5,7,1,2,3,8
[2, 4, 5, 7, 1, 2, 3, 8]
p 0 q 3 r 7
n1 is: 4
n2 is: 4
[0, 0, 0, 0, 0] 5
[0, 0, 0, 0, 0] 5
[2, 4, 5, 7, inf]
[1, 2, 3, 8, inf]
p 0
r 7
[1, 2, 2, 3, 4, 5, 7, 8]
I showed it to a colleague in my class without success, and I've walked through it manually with numbers on paper snippets, but without success. I hope someone can find my silly mistake, because I'm completely stuck.
Thanks
As r is the index of the last value in ar, you need to add one to it to make a range that also includes that final index:
for k in range(p, r + 1):
#                 ^^^^^
Note that your code could be greatly reduced if you used list slicing.
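For illustration, a minimal slicing-based sketch of the same merge (a hypothetical rewrite, not your exact function): the halves come straight from slices, math.inf is appended as the sentinel, and the loop covers every index including the last one:
import math

def merge_sliced(ar):
    q = (len(ar) + 1) // 2            # left half is one longer when the length is odd
    left = ar[:q] + [math.inf]        # sentinel at the end of each half
    right = ar[q:] + [math.inf]
    i = j = 0
    for k in range(len(ar)):          # equivalent to range(p, r + 1)
        if left[i] <= right[j]:
            ar[k] = left[i]
            i += 1
        else:
            ar[k] = right[j]
            j += 1
    return ar

print(merge_sliced([2, 4, 5, 7, 1, 2, 3, 6]))  # [1, 2, 2, 3, 4, 5, 6, 7]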
Brother, you made a very small mistake in this line:
for k in range(p, r):
Here your loop runs from p to r-1, so your last index, i.e. r, never gets iterated.
So you have to use:
for k in range(p, r+1):
As for the second test case, a = [2,4,5,7,1,2,3,8]: you are getting the correct output even with the wrong loop because you overwrite the values in the array ar in place. The current code sorts the array up to index r-1, and the number at index r is simply the same one that was there before merge ran, i.e. 8.
Try using this test case: [2, 4, 5, 8, 1, 2, 3, 7]
Your output will be [1, 2, 2, 3, 4, 5, 7, 7].
Hope this helped.

Leetcode Problem: Sort the Matrix Diagonally | Help (but not too much)

Problem:
A matrix diagonal is a diagonal line of cells starting from some cell in either the topmost row or leftmost column and going in the bottom-right direction until reaching the matrix's end. For example, the matrix diagonal starting from mat[2][0], where mat is a 6 x 3 matrix, includes cells mat[2][0], mat[3][1], and mat[4][2].
Given an m x n matrix mat of integers, sort each matrix diagonal in ascending order and return the resulting matrix.
My Problem:
I figured out that the difference between the row and column of diagonal cells is the same for one specific diagonal line. For example, the difference for the line 3, 2, 1 (mat[0][0] ... mat[i][i]) is obviously 0; beyond that it's -1, -2, etc.
I can extract all cells with a difference < 0, but that's not helpful, since I want the order of the diagonal cells/lines.
lst = [[3,3,1,1],[2,2,1,2],[1,1,1,2]]
new_list = []
for i in range(len(lst)):
    for j in range(len(lst[0])):
        if i-j < 0:
            new_list.append(lst[i][j])
print(new_list)
output -> [3, 1, 1, 1, 2, 2]
So far so good, but I can't think of code that prioritises the specific difference. I want all the -1 cells first, then the -2 cells, etc. I could write a lot of if-statements, but that's not the optimal solution...
I'm pretty much stuck here. Please help me with a few fresh ideas. Please don't tell me the complete solution, just push me in the right direction; I appreciate the help!
What you could do is to identify each position with a diagonal ID. Then sort the flattened values by diagonal-ID/value and place them back in a resulting matrix:
def sortDiag(M):
    rows, cols = len(M), len(M[0])      # matrix dimensions
    diags = range(rows + cols - 1)      # diagonal identifiers
    diagNo = [(d, r, c) for r in range(rows)
                        for c, d in enumerate(diags[-cols-r:][:cols])]
    diagVal = [(d, M[r][c]) for d, r, c in diagNo]  # values per diagonal
    result = [[None]*cols for _ in range(rows)]
    for (_, v), (_, r, c) in zip(sorted(diagVal), sorted(diagNo)):
        result[r][c] = v                # assign position with sorted value
    return result
output:
M = [[3,3,1,1],[2,2,1,2],[1,1,1,2]]
print(*M,sep="\n")
[3, 3, 1, 1]
[2, 2, 1, 2]
[1, 1, 1, 2]
print(*sortDiag(M),sep="\n")
[1, 1, 1, 1]
[1, 2, 2, 2]
[1, 2, 3, 3]
In this example, the diagonal identifiers are numbers from 0 to 5 and the coordinate list in diagNo corresponds to the following assignments:
[2, 3, 4, 5]
[1, 2, 3, 4]
[0, 1, 2, 3]
By mapping the sorted positions (per diagonal) with the sorted values (per diagonal) we can assign the sorted values to the positions corresponding to each diagonal.
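Once you accept that i - j identifies a diagonal, a shorter sketch is also possible (a hypothetical alternative, not the answer above): group the cells with a defaultdict keyed by i - j, sort each group, then write the values back in top-left-to-bottom-right order.
from collections import defaultdict

def diagonal_sort(mat):
    m, n = len(mat), len(mat[0])
    groups = defaultdict(list)
    for i in range(m):
        for j in range(n):
            groups[i - j].append(mat[i][j])
    for vals in groups.values():
        vals.sort(reverse=True)         # so pop() yields the smallest value first
    return [[groups[i - j].pop() for j in range(n)] for i in range(m)]

print(*diagonal_sort([[3, 3, 1, 1], [2, 2, 1, 2], [1, 1, 1, 2]]), sep="\n")
# [1, 1, 1, 1]
# [1, 2, 2, 2]
# [1, 2, 3, 3]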

Python: Efficiently calculate mean of off-diagonal elements for Cadzow filter

I am currently implementing a Cadzow filter in Python.
To put it in some context: you begin with a 1-dimensional array (let's take range(10) as an example) and build a Hankel-like matrix out of it, like
H = [[0, 1, 2, 3, 4, 5],
     [1, 2, 3, 4, 5, 6],
     [2, 3, 4, 5, 6, 7],
     [3, 4, 5, 6, 7, 8],
     [4, 5, 6, 7, 8, 9]]
Afterwards you do some linear algebra with this matrix, which is no problem. Then comes the most time-consuming step, which is an averaging problem.
In a new matrix B you average the elements of the resulting matrix. In the first row you average all elements along the path given by their occurrences in H, so something like the off-diagonals, but going from top right to bottom left. In the second row you ignore the first line, and so on.
The matrix H would be invariant under this averaging step, but, for example, the matrix
1 2 2 1
1 1 1 1
1 1 1 1
would become
1 1.5 1.33 1
1 1 1 1
1 1 1 1
Okay, I hope you understand the problem. My (working but inefficient) code is:
import numpy as np

def av_diag(A, i, j):
    dim = A.shape
    # get the "borders" of A
    lim = min((dim[0]-i, j+1))
    # calculate the mean
    return np.mean([A[i+it, j-it] for it in range(lim)])

def avHankel(A):
    # get the mean for all elements by nested list comprehension
    return np.array([[av_diag(A, i, j) for j in range(len(A[0]))] for i in range(len(A))])
This takes a while for my data, which contains 2048 data points and results in a 1024x1023 matrix.
I would be glad for any tricks to speed this up.
Thanks
You can convolve your input matrix with a filter matrix to speed up your code. The filter matrix can be defined so that, at each step of the convolution, it extracts only the anti-diagonal at the given coordinates. Basically, your filter matrix is simply an anti-identity matrix. Finally, since the convolution will only sum the elements of the anti-diagonals, you have to divide the output by the correct number of samples to obtain the mean:
import numpy as np
from scipy.signal import fftconvolve
from time import time

def av_diag(A, i, j):
    dim = A.shape
    lim = min((dim[0]-i, j+1))
    return np.mean([A[i+it, j-it] for it in range(lim)])

def avHankel(A):
    return np.array([[av_diag(A, i, j) for j in range(len(A[0]))] for i in range(len(A))])

def fast_avHankel(A):
    m, n = A.shape
    filt = np.eye(m)[:, ::-1]
    Apad = np.pad(A, ((0, m-1), (m-1, 0)), mode="constant", constant_values=0)
    Asum = fftconvolve(Apad, filt, mode="valid")
    Adiv = np.array([[min(m-i, j+1) for j in range(n)] for i in range(m)])
    return Asum / Adiv

if __name__ == "__main__":
    A = np.random.rand(500, 500)
    starttime = time()
    Hold = avHankel(A)
    print(time() - starttime)  # 10.6 seconds on a laptop
    starttime = time()
    Hnew = fast_avHankel(A)
    print(time() - starttime)  # 0.26 seconds on a laptop
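As a quick sanity check, a line like the following could be appended to the main block above to confirm that the two implementations agree up to floating-point error (it reuses the Hold and Hnew arrays already computed there):
    # the convolution result should match the nested-loop result
    print(np.allclose(Hold, Hnew))  # expected: True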

Largest Subset whose sum is less than or equal to a given sum

A list is defined as follows: [1, 2, 3]
and its sub-lists are:
[1], [2], [3],
[1, 2],
[1, 3],
[2, 3],
[1, 2, 3]
Given k (for example, 3), the task is to find the largest length of a sub-list whose sum of elements is less than or equal to k.
I am aware of itertools in Python, but it results in a segmentation fault for larger lists. Is there any other efficient algorithm to achieve this? Any help would be appreciated.
My code is as follows:
from itertools import combinations

def maxLength(a, k):
    #print a, k
    l = []
    i = len(a)
    while i >= 0:
        lst = list(combinations(sorted(a), i))
        for j in lst:
            #print list(j)
            lst = list(j)
            #print sum(lst)
            sum1 = 0
            sum1 = sum(lst)
            if sum1 <= k:
                return len(lst)
        i = i - 1
You can use the dynamic programming solution that #Apy linked to. Here's a Python example:
def largest_subset(items, k):
    res = 0
    # We can form subset with value 0 from empty set,
    # items[0], items[0...1], items[0...2]
    arr = [[True] * (len(items) + 1)]
    for i in range(1, k + 1):
        # Subset with value i can't be formed from empty set
        cur = [False] * (len(items) + 1)
        for j, val in enumerate(items, 1):
            # cur[j] is True if we can form a set with value of i from
            # items[0...j-1]
            # There are two possibilities
            # - Set can be formed already without even considering item[j-1]
            # - There is a subset with value i - val formed from items[0...j-2]
            cur[j] = cur[j-1] or ((i >= val) and arr[i-val][j-1])
        if cur[-1]:
            # If subset with value of i can be formed store
            # it as current result
            res = i
        arr.append(cur)
    return res

ITEMS = [5, 4, 1]
for i in range(sum(ITEMS) + 1):
    print('{} -> {}'.format(i, largest_subset(ITEMS, i)))
Output:
0 -> 0
1 -> 1
2 -> 1
3 -> 1
4 -> 4
5 -> 5
6 -> 6
7 -> 6
8 -> 6
9 -> 9
10 -> 10
In the above, arr[i][j] is True if a set with value i can be chosen from items[0...j-1]. Naturally, arr[0] contains only True values, since the empty set can always be chosen. Similarly, for all the successive rows the first cell is False, since a non-zero value can't be formed from the empty set.
For the rest of the cells there are two options:
If there already is a subset with value i even without considering items[j-1], the value is True.
If there is a subset with value i - items[j-1], then we can add the item to it and have a subset with value i.
As far as I can see (since you treat a sub-array as any subset of items of the initial array), you can use a greedy algorithm with O(N*log(N)) complexity (you have to sort the array):
1. Assign the entire array to the sub-array.
2. If sum(sub-array) <= k, then stop and return the sub-array.
3. Remove the maximum item from the sub-array.
4. Go to 2.
Example
[1, 2, 3, 5, 10, 25]
k = 12
Solution
sub array = [1, 2, 3, 5, 10, 25], sum = 46 > 12, remove 25
sub array = [1, 2, 3, 5, 10], sum = 21 > 12, remove 10
sub array = [1, 2, 3, 5], sum = 11 <= 12, stop and return
As an alternative, you can start with an empty sub-array and add items from minimum to maximum while the sum is less than or equal to k (a short sketch of this variant follows the trace below):
sub array = [], sum = 0 <= 12, add 1
sub array = [1], sum = 1 <= 12, add 2
sub array = [1, 2], sum = 3 <= 12, add 3
sub array = [1, 2, 3], sum = 6 <= 12, add 5
sub array = [1, 2, 3, 5], sum = 11 <= 12, add 10
sub array = [1, 2, 3, 5, 10], sum = 21 > 12, stop,
return prior one: [1, 2, 3, 5]
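Here is a minimal sketch of that second variant, assuming all items are positive (the helper name is mine, not from the question):
def max_length_greedy(items, k):
    total = 0
    length = 0
    for x in sorted(items):           # take the smallest remaining item each time
        if total + x > k:
            break
        total += x
        length += 1
    return length

print(max_length_greedy([1, 2, 3, 5, 10, 25], 12))  # 4, i.e. [1, 2, 3, 5]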
Look, generating the power-set takes O(2^n) time, which is pretty bad. You can instead use the dynamic programming approach.
Check here for the algorithm:
http://www.geeksforgeeks.org/dynamic-programming-subset-sum-problem/
And yes, https://www.youtube.com/watch?v=s6FhG--P7z0 (Tushar explains everything well) :D
Assume everything is positive (handling negatives is a simple extension of this and is left to the reader as an exercise). There exists an O(n) algorithm for the described problem. Using the O(n) median select, we partition the array around the median. We find the sum of the left side. If that is greater than k, then we cannot take all of its elements, so we recurse on the left half to try to take a smaller set. Otherwise, we subtract the sum of the left half from k and recurse on the right half to see how many more elements we can take.
Partitioning the array around the median and recursing on only one of the halves yields a runtime of n + n/2 + n/4 + n/8 + ..., which sums geometrically to O(n).
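A rough sketch of this idea follows; it uses a random pivot in place of a true O(n) median select, so it is only expected linear time, and the helper name and structure are my own, not taken from the answer above:
import random

def max_count_within(items, k):
    # how many of the smallest positive items fit into budget k
    if not items or k <= 0:
        return 0
    pivot = random.choice(items)
    left = [x for x in items if x < pivot]
    mid = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    left_sum = sum(left)
    if left_sum > k:
        return max_count_within(left, k)           # too heavy: shrink on the left
    count, k = len(left), k - left_sum
    take = min(len(mid), k // pivot)               # copies of the pivot value that still fit
    if take < len(mid):
        return count + take
    return count + len(mid) + max_count_within(right, k - pivot * len(mid))

print(max_count_within([1, 2, 3, 5, 10, 25], 12))  # 4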

Remove elements that appear more often than once from numpy array

The question is: how can I completely remove elements that appear more than once in an array? Below you see an approach that is very slow when it comes to bigger arrays.
Any idea how to do this the numpy way? Thanks in advance.
import numpy as np

count = 0
result = []
input = np.array([[1,1], [1,1], [2,3], [4,5], [1,1]])  # array with points [x, y]

# count appearance of elements with same x and y coordinate
# append to result if element appears just once
for i in input:
    for j in input:
        if (j[0] == i[0]) and (j[1] == i[1]):
            count += 1
    if count == 1:
        result.append(i)
    count = 0

print np.array(result)
UPDATE: BECAUSE OF FORMER OVERSIMPLIFICATION
Again, to be clear: how can I remove elements that appear more than once with respect to a certain attribute from an array/list? Here: a list whose elements have length 6; if the combination of the first and second entry of an element appears more than once in the list, remove all the elements concerned. I hope I'm not too confusing. Eumiro helped me a lot with this, but I can't manage to flatten the output list as it should be :(
import numpy as np
import collections

input = [[1,1,3,5,6,6], [1,1,4,4,5,6], [1,3,4,5,6,7], [3,4,6,7,7,6], [1,1,4,6,88,7], [3,3,3,3,3,3], [456,6,5,343,435,5]]
# here, input[0], input[1] and input[4] should be removed, because their
# first and second entries appear more than once in the list, got it? :)

d = {}
for a in input:
    d.setdefault(tuple(a[:2]), []).append(a[2:])
outputDict = [list(k)+list(v) for k, v in d.iteritems() if len(v) == 1]

result = []

def flatten(x):
    if isinstance(x, collections.Iterable):
        return [a for i in x for a in flatten(i)]
    else:
        return [x]

# I took flatten(x) from http://stackoverflow.com/a/2158522/1132378
# And I need it, because the output is a nested list :(
for i in outputDict:
    result.append(flatten(i))

print np.array(result)
So this works, but it's impractical with big lists.
First I got
RuntimeError: maximum recursion depth exceeded in cmp
and after applying
sys.setrecursionlimit(10000)
I got
Segmentation fault
How could I implement Eumiro's solution for big lists (> 100000 elements)?
np.array(list(set(map(tuple, input))))
returns
array([[4, 5],
       [2, 3],
       [1, 1]])
UPDATE 1: If you want to remove the [1, 1] too (because it appears more than once), you can do:
from collections import Counter
np.array([k for k, v in Counter(map(tuple, input)).iteritems() if v == 1])
returns
array([[4, 5],
       [2, 3]])
UPDATE 2: with input=[[1,1,2], [1,1,3], [2,3,4], [4,5,5], [1,1,7]]:
input=[[1,1,2], [1,1,3], [2,3,4], [4,5,5], [1,1,7]]
d = {}
for a in input:
    d.setdefault(tuple(a[:2]), []).append(a[2])
d is now:
{(1, 1): [2, 3, 7],
 (2, 3): [4],
 (4, 5): [5]}
so we want to take all key-value pairs, that have single values and re-create the arrays:
np.array([k+tuple(v) for k,v in d.iteritems() if len(v) == 1])
returns:
array([[4, 5, 5],
       [2, 3, 4]])
UPDATE 3: For larger arrays, you can adapt my previous solution to:
import numpy as np
input = [[1,1,3,5,6,6], [1,1,4,4,5,6], [1,3,4,5,6,7], [3,4,6,7,7,6], [1,1,4,6,88,7], [3,3,3,3,3,3], [456,6,5,343,435,5]]
d = {}
for a in input:
    d.setdefault(tuple(a[:2]), []).append(a)
np.array([v for v in d.itervalues() if len(v) == 1])
returns:
array([[[456,   6,   5, 343, 435,   5]],
       [[  1,   3,   4,   5,   6,   7]],
       [[  3,   4,   6,   7,   7,   6]],
       [[  3,   3,   3,   3,   3,   3]]])
This is a corrected, faster version of Hooked's answer. count_unique counts the number of occurrences of each unique key (row) in keys.
import numpy as np

input = np.array([[1,1,3,5,6,6],
                  [1,1,4,4,5,6],
                  [1,3,4,5,6,7],
                  [3,4,6,7,7,6],
                  [1,1,4,6,88,7],
                  [3,3,3,3,3,3],
                  [456,6,5,343,435,5]])

def count_unique(keys):
    """Finds an index to each unique key (row) in keys and counts the number
    of occurrences of each key"""
    order = np.lexsort(keys.T)
    keys = keys[order]
    diff = np.ones(len(keys)+1, 'bool')
    diff[1:-1] = (keys[1:] != keys[:-1]).any(-1)
    count = np.where(diff)[0]
    count = count[1:] - count[:-1]
    ind = order[diff[1:]]
    return ind, count

key = input[:, :2]
ind, count = count_unique(key)
print key[ind]
#[[  1   1]
# [  1   3]
# [  3   3]
# [  3   4]
# [456   6]]
print count
#[3 1 1 1 1]
ind = ind[count == 1]
output = input[ind]
print output
#[[  1   3   4   5   6   7]
# [  3   3   3   3   3   3]
# [  3   4   6   7   7   6]
# [456   6   5 343 435   5]]
Updated Solution:
From the comments below, the new solution is:
idx = argsort(A[:, 0:2], axis=0)[:, 1]
kidx = where(sum(A[idx, :][:-1, 0:2] != A[idx, :][1:, 0:2], axis=1) == 0)[0]
kidx = unique(concatenate((kidx, kidx+1)))

for n in arange(0, A.shape[0], 1):
    if n not in kidx:
        print A[idx, :][n]

> [1 3 4 5 6 7]
  [3 3 3 3 3 3]
  [3 4 6 7 7 6]
  [456 6 5 343 435 5]
kidx is an index list of the elements you don't want. This preserves rows where the first two inner elements do not match any other inner element. Since everything is done with indexing, it should be fast(ish), though it requires a sort on the first two elements. Note that the original row order is not preserved, though I don't think this is a problem.
Old Solution:
If I understand it correctly, you simply want to filter out the results of a list of lists where the first element of each inner list is equal to the second element.
With your input from your update A=[[1,1,3,5,6,6],[1,1,4,4,5,6],[1,3,4,5,6,7],[3,4,6,7,7,6],[1,1,4,6,88,7],[3,3,3,3,3,3],[456,6,5,343,435,5]], the following line removes A[0],A[1] and A[4]. A[5] is also removed since that seems to match your criteria.
[x for x in A if x[0]!=x[1]]
If you can use numpy, there is a really slick way of doing the above. Assume that A is an array; then
A[A[:, 0] != A[:, 1]]
will pull out the same rows. This is probably faster than the solution listed above if you want to loop over it.
Why not create another array to hold the output?
Iterate through your main list and for each element check whether it is already in your other array; if not, append it.
This way, your new array will not contain more than one copy of each element.
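A literal sketch of this idea (assuming plain Python lists; note that, as written, it keeps one copy of each repeated element rather than dropping all copies, so it answers the original de-duplication reading of the question):
result = []
for row in [[1, 1], [1, 1], [2, 3], [4, 5], [1, 1]]:
    if row not in result:       # membership test on plain lists
        result.append(row)
print(result)  # [[1, 1], [2, 3], [4, 5]]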
