Hello, I've been coding for a couple of months now and know the basics, but I'm having a set membership problem for which I can't find a solution.
I have a list of lists of pairs of integers, and I want to remove the lists that have the integer "a" in them. I thought using sets was the easiest way. Below is the code:
## This is the item to test against.
a = set([3])
## This is the list to test.
groups = [[3, 2], [3, 4], [1, 2], [5, 4], [4, 3]]
## This is a list that will contain the lists present
## in groups which do not contain "a"
groups_no_a = []
for group in groups:
    group = set(group)
    if a in group:
        groups_no_a.append(group)
    ## I thought the problem had something to do with
    ## clearing the variable so I put this in,
    ## but to no remedy.
    group.clear()
print groups_no_a
I had also tried using s.issubset(t), until I realized that this tests whether every element of s is in t.
Thank you!
You want to test if there is no intersection:
if not a & group:
or
if not a.intersection(group):
or, inversely, that the sets are disjoint:
if a.isdisjoint(group):
The method forms take any iterable; you don't even have to turn group into a set for that. The following one-liner would work too:
groups_no_a = [group for group in groups if a.isdisjoint(group)]
Demo:
>>> a = set([3])
>>> groups = [[3, 2], [3, 4], [1, 2], [5, 4], [4, 3]]
>>> [group for group in groups if a.isdisjoint(group)]
[[1, 2], [5, 4]]
If all you are testing for is one element, then creating the sets may well cost more in performance than what you gain in membership testing, and just doing:
3 not in group
may be faster, where group is a short list.
You can use the timeit module to compare pieces of Python code to see what works best for your specific typical list sizes.
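As a rough sketch of such a comparison (the names and sizes here are made up for illustration, and the timings will vary with your Python version and data):
import timeit

setup = "a = {3}; group = [5, 4]"
print(timeit.timeit("a.isdisjoint(group)", setup=setup))  # set-based test
print(timeit.timeit("3 not in group", setup=setup))       # plain list membership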
Maybe you could use a list comprehension:
a = 3
groups = [[3, 2], [3, 4], [1, 2], [5, 4], [4, 3]]
print [x for x in groups if a not in x]
Edit based on a comment:
Well, to those curious, what I want to do is: I have a list like the following: [ [error, [ [group_item_1, group_item_2], [...], [...], [...] ] ], [more like this previous], [...] ], and I want to get the item with the least error that doesn't have "a" in group_item_1 or group_item_2. The lists are already sorted by error. I sorta almost got it :D
This should do the trick:
from itertools import chain, ifilter

def flatten(listOfLists):
    "Flatten one level of nesting"
    return chain.from_iterable(listOfLists)

errors_list = [ ['error0', [ [30, 2], [3, 4], [1, 2], [5, 4], [4, 3] ] ], ['error1', [ [31, 2], [3, 4], [1, 2], [5, 4], [4, 3] ] ] ]
a = 30
result = next(ifilter(lambda err: a not in flatten(err[1]), reversed(errors_list)), None)
print result  # finds error1, as it has no 30 in its list
Rather than making a = set([3]), why not do the following?
a = 3
groups = [[3, 2], [3, 4], [1, 2], [5, 4], [4, 3]]
groups_no_a = [group for group in groups if a not in group]
You don't need to use sets here; you can test for membership of elements in lists directly. You also seem to have in where I think you should have not in.
This code is similar to yours, and should work:
## This is the item to test against.
a = 3
## This is the list to test.
groups = [[3, 2], [3, 4], [1, 2], [5, 4], [4, 3]]
## This is a list that will contain the lists present
## in groups which do not contain a
groups_no_a = []
for group in groups:
    if a not in group:
        groups_no_a.append(group)
print groups_no_a
However, a shorter, more Pythonic way uses list comprehensions:
groups_no_a = [i for i in groups if a not in i]
If you are testing whether an item is in a much longer list, you should use sets instead for performance.
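A minimal sketch of that idea, assuming a long list that you query repeatedly (the names and sizes below are invented for illustration):
long_list = list(range(100000))   # stand-in for a much longer list
lookup = set(long_list)           # one-off O(n) conversion
print(99999 in lookup)            # each lookup is then O(1) on average -> True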
Related
I have a fairly large array of arrays of length 2 (List[List[int, int]]).
How can I keep only the unique arrays among them (treating reversed pairs as duplicates)? Preferably without using extra libraries.
I've seen several solutions that use numpy, but I'm unlikely to be able to use it in olympiads.
# Example input:
nums = [[2, 9], [3, 6], [9, 2], [6, 3]]
for i in nums:
    # some code here
# Output:
# nums = [[2, 9], [3, 6]]
I tried doing this but I guess it's not a very fast solution
# Example input:
nums = [[2, 9], [3, 6], [9, 2], [6, 3]]
unique = []
for i in nums:
    if sorted(i) not in unique:
        unique.append(sorted(i))
# Output:
print(unique) # [[2, 9], [3, 6]]
To deal with sets not being hashable, you can create a set of frozensets this way:
unique = {frozenset(i) for i in nums}
Then you can use whichever means to turn the results into the objects you want; for example:
unique = [list(i) for i in unique]
Turn each pair into a sorted tuple, put them all into a set, then turn that back into a list of lists.
>>> nums = [[2, 9], [3, 6], [9, 2], [6, 3]]
>>> {tuple(sorted(n)) for n in nums}
{(2, 9), (3, 6)}
>>> [list(t) for t in {tuple(sorted(n)) for n in nums}]
[[2, 9], [3, 6]]
The tuple is necessary because a set (which is created via the {} set comprehension expression) needs to contain hashable (immutable) objects.
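To see why, here is a small interactive illustration (traceback abbreviated):
>>> {[2, 9]}        # a list is mutable, hence unhashable
Traceback (most recent call last):
  ...
TypeError: unhashable type: 'list'
>>> {(2, 9)}        # a tuple is hashable, so this works
{(2, 9)}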
Note on the answer below: this answer is wrong; the set constructor does not sort its elements.
As #Swifty mentioned, you can use set to solve this problem. Send each pair through the set constructor to sort the pair, then convert it to tuple to make it hashable and use set again to remove duplicate tuples.
nums = [[2, 9], [3, 6], [9, 2], [6, 3]]
num_tuples = set(tuple(set(pair)) for pair in nums)
print(num_tuples) # {(9, 2), (3, 6)}
Warning: As pointed out by #Samwise
This is a little dodgy because you're assuming that tuple(set(pair)) will deterministically create the same tuple ordering for any given set. This is probably true in practice (IIRC when you iterate over a set the items always come out in hash order, at least in CPython 3) but I'm not sure it's necessarily guaranteed by the Python spec.
I have a dictionary where each key has a list of lists (a nested list) as its value. For example, imagine we have:
x = {1: [[1, 2], [3, 5]], 2: [[2, 1], [2, 6]], 3: [[1, 5], [5, 4]]}
My question is: how can I access each element of the dictionary and concatenate the inner lists that share the same index? For example, the first list from all keys:
[1, 2] from the first key,
[2, 1] from the second, and
[1, 5] from the third.
How can I do this?
You can access your nested lists easily while iterating through your dictionary, append them to a new list, and then apply the sum function.
Code:
x = {1: [[1, 2], [3, 5]], 2: [[2, 1], [2, 6]], 3: [[1, 5], [5, 4]]}
ans = []
for key in x:
    ans += x[key][0]
print(sum(ans))
Output:
12
Assuming you want a list of the first elements, you can do:
>>> x={1: [[1,2],[3,5]] , 2:[[2,1],[2,6]], 3:[[1,5],[5,4]]}
>>> y = [a[0] for a in x.values()]
>>> y
[[1, 2], [2, 1], [1, 5]]
If you want the second element, you can use a[1], etc.
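For instance, continuing the same session:
>>> [a[1] for a in x.values()]
[[3, 5], [2, 6], [5, 4]]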
The output you expect is not entirely clear (do you want to sum? concatenate?), but what seems clear is that you want to handle the values as matrices.
You can use numpy for that:
summing the values
import numpy as np
sum(map(np.array, x.values())).tolist()
output:
[[4, 8], [10, 15]] # [[1+2+1, 2+1+5], [3+2+5, 5+6+4]]
concatenating the matrices (horizontally)
import numpy as np
np.hstack(list(map(np.array, x.values()))).tolist()
output:
[[1, 2, 2, 1, 1, 5], [3, 5, 2, 6, 5, 4]]
As explained in How to iterate through two lists in parallel?, zip does exactly that: iterates over a few iterables at the same time and generates tuples of matching-index items from all iterables.
In your case, the iterables are the values of the dict. So just unpack the values to zip:
x = {1: [[1, 2], [3, 5]], 2: [[2, 1], [2, 6]], 3: [[1, 5], [5, 4]]}
for y in zip(*x.values()):
    print(y)
Gives:
([1, 2], [2, 1], [1, 5])
([3, 5], [2, 6], [5, 4])
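If by "concatenate" you mean joining each group of same-index lists into one flat list, here is a small sketch building on the zip approach (assuming Python 3.7+ so the dict preserves insertion order):
from itertools import chain

x = {1: [[1, 2], [3, 5]], 2: [[2, 1], [2, 6]], 3: [[1, 5], [5, 4]]}
concatenated = [list(chain.from_iterable(group)) for group in zip(*x.values())]
print(concatenated)  # [[1, 2, 2, 1, 1, 5], [3, 5, 2, 6, 5, 4]]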
I have two lists like the following:
A = [[1, 2, 3], [1, 2, 4], [4, 5, 6]]
and
B = [[1, 2, 3], [1, 2, 6], [4, 5, 6], [4, 3, 6]]
And I wish to calculate the difference, which is equal to the following:
A - B = [[1, 2, 4]]
In other words, I want to treat A and B as sets of lists (all of the same size; in this example it is 3) and find the difference (i.e. remove from A all lists which are also in B, and return the rest).
Is there a faster way than using multiple for loops for this?
A simple list comprehension will do the trick:
[a for a in A if a not in B]
output:
[[1, 2, 4]]
If you convert the second list to a set first, then membership tests are asymptotically faster; the downside is you have to convert the rows to tuples so that they can be in a set. (Consider having the rows as tuples instead of lists in the first place.)
def list_of_lists_subtract(a, b):
    b_set = {tuple(row) for row in b}
    return [row for row in a if tuple(row) not in b_set]
Note that "asymptotically faster" only means this should be faster for large inputs; the simpler version will likely be faster for small inputs. If performance is critical then it's up to you to benchmark the alternatives on realistic data.
You can try this:
1. Convert the first list of lists to a set of tuples S1.
2. Convert the second list of lists to a set of tuples S2.
3. Use the difference method, or simply S1 - S2, to get the set of tuples that are present in S1 but not in S2.
4. Convert the result to the desired format (in this case, a list of lists).
# (Untested)
A = [[1, 2, 3], [1, 2, 4], [4, 5, 6]]
B = [[1, 2, 3], [1, 2, 6], [4, 5, 6], [4, 3, 6]]
set_A = set([tuple(item) for item in A])
set_B = set([tuple(item) for item in B])
difference_set = set_A - set_B
difference_list = [list(item) for item in sorted(difference_set)]
print(difference_list)
I have a list of pairs given as:
a = [[0, 1], [1, 3], [2, 1], [3, 1]]
I would like to return unique matches as a new list for all numbers inside 'a'. As an example, I'd like to create something like below - i.e. where I can select any number from 'a' and see all other numbers associated with it from all of the pairs. The list of lists, b, below is an example of what I'd like to achieve:
b = [ [0,[1]] , [1,[0,2,3]] , [2,[1]] , [3,[1]] ]
I am open to more efficient/better ways of displaying this. The example above is just one way that came to mind.
from collections import defaultdict
a = [[0, 1], [1, 3], [2, 1], [3, 1]]
c = defaultdict(set)  # default dicts let you manipulate keys as if they already exist
for item in a:
    # using sets avoids duplicates easily
    c[item[0]].update([item[1]])
    c[item[1]].update([item[0]])
# turn the sets into lists
b = [[item[0], list(item[1])] for item in c.items()]
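To check the result against the b from the question, using the c built above (key order follows first appearance and the inner values come from sets, so print them sorted for a stable view):
for key, neighbours in sorted(c.items()):
    print(key, sorted(neighbours))
# 0 [1]
# 1 [0, 2, 3]
# 2 [1]
# 3 [1]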
In case your list contains lists of different lengths:
from collections import defaultdict
a = [[0, 1], [1, 3], [2, 1], [3, 1]]
b = defaultdict(set)
for items in a:
    for item in items:
        b[item] |= set(items) ^ {item}
To get the exact output you asked for, use:
c = [[key, list(value)] for key, value in b.items()]
I have a list of lists representing a connectivity graph in Python. The list looks like an n*2 matrix:
example = [[1, 2], [1, 5], [1, 8], [2, 1], [2, 9], [2, 5]]
What I want to do is find the values of the first elements of the lists whose second element is equal to a user-defined value. For instance:
input 1 returns [2] (because [2,1])
input 5 returns [1,2] (because [1,5] and [2,5])
input 7 returns []
In MATLAB, I could use
output = example(example(:,2)==input, 1);
but I would like to do this in Python (in the most pythonic and efficient way)
You can use a list comprehension as a filter, like this:
>>> example = [[1, 2], [1, 5], [1, 8], [2, 1], [2, 9], [2,5]]
>>> n = 5
>>> [first for first, second in example if second == n]
[1, 2]
You can work with the Python functions map and filter very comfortably:
>>> example = [[1, 2], [1, 5], [1, 8], [2, 1], [2, 9], [2,5] ]
>>> n = 5
>>> map(lambda x: x[0], filter(lambda x: n in x, example))
[1,2]
With lambda you can define anonymous functions.
Syntax:
lambda arg0, arg1, ...: e
arg0, arg1, ... are the parameters of the function, and e is the expression.
Lambda functions are mostly used with functions like map, reduce, filter, etc.
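For instance, a throwaway lambda used with map and filter (this example is not from the question):
nums = [1, 2, 3, 4]
print(list(map(lambda x: x * 2, nums)))          # [2, 4, 6, 8]
print(list(filter(lambda x: x % 2 == 0, nums)))  # [2, 4]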
exemple = [[1, 2], [1, 5], [1, 8], [2, 1], [2, 9], [2, 5]]
foundElements = []
input = [...]  ## list of inputs
for item in exemple:
    if item[1] in input:
        foundElements.append(item[0])