I am trying to generate the unordered pairs of disjoint subsets of a set S of integers.
As shown in [1], when S consists of n integers, this yields around 3^n / 2 pairs.
Now, I know how to generate all 2^n subsets of S (i.e. the powerset of S), and for every subset (consisting of k integers) I could thus generate the C(k, 2) (k choose 2) possible pairs.
But this is inefficient, because pairs will end up being generated more than once.
Therefore, I am wondering: is there an efficient (recursive) way to generate these pairs of subsets from S? I could not find any existing implementations, and my own attempts using, for example, Python's itertools were not successful so far.
[1] Total number of unordered pairs of disjoint subsets of S (MathOverflow)
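For what it's worth, one way to avoid duplicates entirely is to assign each element of S to one of three bins: in A, in B, or in neither. That enumerates the 3^n ordered pairs directly, and keeping only the assignments where A precedes B in a canonical order leaves each unordered pair exactly once. A minimal sketch of that idea (the function name and the tuple-comparison tiebreak are my own choices, not from the question):

from itertools import product

def disjoint_subset_pairs(S):
    # Each element goes to bin 0 (subset A), bin 1 (subset B), or bin 2 (neither).
    S = list(S)
    for bins in product((0, 1, 2), repeat=len(S)):
        A = tuple(x for x, b in zip(S, bins) if b == 0)
        B = tuple(x for x, b in zip(S, bins) if b == 1)
        if A <= B:  # canonical order: keep one representative per unordered pair
            yield A, B

The work is proportional to the 3^n assignments, i.e. to the number of pairs actually produced, rather than to the overcounting of the powerset approach.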
I have a list of tuples that I want to randomly choose a subset from, using weights to affect how likely each element is to be chosen, and without replacement.
I've tried random.choices(), which handles the subset size and the weights, but it samples with replacement, so I get the same element repeatedly in the subset. For example, if my larger set is [red (10%), orange (10%), blue (10%), yellow (10%), green (50%)] and I want a subset of 3 of them, random.choices often results in [green, green, blue].
I've also looked at random.sample(), which doesn't use replacement but doesn't allow for weighting, and at numpy.random.choice(), which requires a 1D array (which a list of tuples is not).
Is there another method I should be looking at?
You can do this with NumPy:
import numpy as np
from numpy.random import choice

list_len = len(tuple_list)
np_list = np.arange(list_len)
draw = choice(np_list, number_of_items_to_pick, p=list_prob, replace=False)
selected = []
for n in draw:
    selected.append(tuple_list[n])
This will choose the tuples without replacement (replace=False), with tuple_list being your list of tuples and list_prob being their probabilities of being chosen. Creating a list of indexes with np.arange(list_len) gets around choice()'s 1-D requirement: you randomly select the indexes of the tuples you want rather than the tuples themselves.
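For the color example in the question, the pieces would look something like this (the names and probabilities are just the question's example; the probabilities must sum to 1):

import numpy as np

tuple_list = [("red",), ("orange",), ("blue",), ("yellow",), ("green",)]
list_prob = [0.10, 0.10, 0.10, 0.10, 0.50]
number_of_items_to_pick = 3
draw = np.random.choice(np.arange(len(tuple_list)), number_of_items_to_pick,
                        p=list_prob, replace=False)
print([tuple_list[n] for n in draw])  # e.g. [('green',), ('blue',), ('red',)]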
Given a data set consisting of lists of varying lengths, similar to this example: pairwise comparisons within a dataset.
More context: Given a set of update requests for a table, how can I use this to split requests?
I'm looking to output the set of pairwise disjoint sets such that if you union all the fields, they are unique (I'm unsure whether the term "pairwise disjoint" is correct for this output).
For example, the input
[1,2,5,6,8]
[4,5,7,9]
[10,11]
[23,45]
can give an output of
[1,2,4,5,6,7,8,9] -- combined, as 5 is common to the first two lists
[10,11]
[23,45]
Ideally, I'm splitting the given datasets into unique sets that are disjoint.
My list size can be ~150.
Total dataset can be of size ~700.
I tried this: Find in python combinations of mutually exclusive sets from a list's elements, but it does not do what I need.
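One standard way to get this output is to treat it as a connected-components problem: each input list links its elements together, and every connected component becomes one output set. A sketch using union-find (the function name is mine; this is one possible approach, not the only one):

def merge_overlapping(lists):
    parent = {}

    def find(x):
        # walk to the root, halving paths as we go
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for lst in lists:
        items = list(lst)
        for x in items:
            parent.setdefault(x, x)
        for x in items[1:]:
            ra, rb = find(items[0]), find(x)
            if ra != rb:
                parent[rb] = ra  # union the two components

    groups = {}
    for x in parent:
        groups.setdefault(find(x), set()).add(x)
    return list(groups.values())

print(merge_overlapping([[1,2,5,6,8], [4,5,7,9], [10,11], [23,45]]))
# -> [{1, 2, 4, 5, 6, 7, 8, 9}, {10, 11}, {23, 45}] (order may vary)

With union-find this is close to linear in the total number of elements, which comfortably handles ~700 lists of ~150 items.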
I have roughly 300,000 (300K) sets, each containing 0-100 elements.
s1={a,b,x,y}
s2={a}
s3={a,x,y}
s4={x,y}
My question is: how do I efficiently find a collection of sets (say, 5000 sets out of the 300K) where the intersection of elements among those selected sets is maximum?
i.e.
Among all possible combinations of 5000 sets that can be picked from the 300K sets, I need the one collection of 5000 sets whose intersection (number of common elements) is at least as large (>=) as that of any other combination of 5000 sets.
For example, from the sets shown above:
Say I need 2 sets with maximum intersection of elements among them. The resulting collection would be C = {s1, s3}, with common elements = {a, x, y} and common-element count = 3.
Say I need 3 sets with maximum intersection of elements among them. The resulting collection would be C = {s1, s3, s4}, with common elements = {x, y} and common-element count = 2.
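Just to make the objective concrete, this is what brute force looks like at toy scale; it checks every k-subset of the collection and is shown purely for illustration (variable names are mine):

from itertools import combinations

sets = {
    "s1": {"a", "b", "x", "y"},
    "s2": {"a"},
    "s3": {"a", "x", "y"},
    "s4": {"x", "y"},
}

def common(names):
    # intersection of the named sets
    return set.intersection(*(sets[n] for n in names))

def best_collection(k):
    return max(combinations(sets, k), key=lambda names: len(common(names)))

print(best_collection(2))  # ('s1', 's3'); common elements {a, x, y}, count 3
print(best_collection(3))  # ('s1', 's3', 's4'); common elements {x, y}, count 2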
Brute force is not an option, since the total number of possible combinations of 5000 sets from a collection of 300K sets is huge:
C(300000, 5000) is on the order of 10^11041.
Are there any smart data structures and algorithms that I can use to get the desired collection of sets?
Also, is there any available Python library that I can use for this?
I'm trying to write a function that generates all possible configurations of a list by swapping certain allowable pairs of elements.
For example, if we have the list:
lst = [1, 2, 3, 4, 5]
And we only allow the swapping of the following pairs of elements:
pairs = [[0, 2], [4, 1]]
i.e., we can only swap the 0th element of the list with the 2nd, and the 4th element with the 1st (there can be any number of allowed pairs of swaps).
I would like the function to return the number of distinct configurations of the list given the allowable swaps.
Since I'm planning on running this for large lists and many allowable swaps, it would be preferable for the function to be as efficient as possible.
I've found examples that generate permutations by swapping all the elements, two at a time, but I can't find a way to specify certain pairs of allowable swaps.
You've been lured off other productive paths by the common term "swap". Switch your attack. Instead, note that each allowed pair can appear in its original order or swapped, so what you need is the product of the two orderings of (a[0], a[2]) with the two orderings of (a[1], a[4]). Take each of these products (four of them) and distribute the elements into the result in the proper positions. It will look vaguely like this; I'm using Python as pseudo-code, to some extent.
import itertools

a = [1, 2, 3, 4, 5]
seq = itertools.product([(a[0], a[2]), (a[2], a[0])],
                        [(a[1], a[4]), (a[4], a[1])])
for (p0, p2), (p1, p4) in seq:
    # Each solution fixes the values at positions 0 and 2 (first pair)
    # and at positions 1 and 4 (second pair); position 3 is untouched.
    b = [p0, p1, p2, a[3], p4]
Can you take it from there? That's the idea; I'll leave you to generalize the algorithm.
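In case it helps, here is one shape the generalization might take, assuming the allowed pairs are position-disjoint as in the example (overlapping pairs generate a larger set of configurations and would need something like a breadth-first search over reachable states instead); the function name is mine:

import itertools

def configurations(lst, pairs):
    # Each pair contributes two orderings: as-is or swapped.
    options = [((lst[i], lst[j]), (lst[j], lst[i])) for i, j in pairs]
    for choice in itertools.product(*options):
        b = list(lst)
        for (i, j), (vi, vj) in zip(pairs, choice):
            b[i], b[j] = vi, vj
        yield b

lst = [1, 2, 3, 4, 5]
pairs = [[0, 2], [4, 1]]
print(list(configurations(lst, pairs)))                     # all 4 configurations
print(len({tuple(b) for b in configurations(lst, pairs)}))  # distinct count: 4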
If I have a variable number of sets (let's call the number n), which have at most m elements each, what's the most efficient way to calculate the pairwise intersections for all pairs of sets? Note that this is different from the intersection of all n sets.
For example, if I have the following sets:
A={"a","b","c"}
B={"c","d","e"}
C={"a","c","e"}
I want to be able to find:
intersect_AB={"c"}
intersect_BC={"c", "e"}
intersect_AC={"a", "c"}
Another acceptable format (if it makes things easier) would be a map of items in a given set to the sets that contain that same item. For example:
intersections_C={"a": {"A", "C"},
"c": {"A", "B", "C"}
"e": {"B", "C"}}
I know that one way to do this would be to create a dictionary mapping each value in the union of all n sets to a list of the sets in which it occurs, and then iterate through those values to build maps such as intersections_C above (a sketch of this idea follows the background notes below). But I'm not sure how that scales as n increases and the sets become very large.
Some additional background information:
Each of the sets is of roughly the same length, but is also very large (large enough that storing them all in memory is a realistic concern; an algorithm that avoids that would be preferred, though it is not required)
The size of the intersections between any two sets is very small compared to the size of the sets themselves
If it helps, we can assume anything we need to about the ordering of the input sets.
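A sketch of the dictionary idea mentioned above: one pass over all elements builds the element-to-sets map, and filtering for entries that occur in two or more sets yields exactly the second (map) output format (set names follow the example):

from collections import defaultdict

sets = {"A": {"a", "b", "c"}, "B": {"c", "d", "e"}, "C": {"a", "c", "e"}}

index = defaultdict(set)
for name, s in sets.items():
    for x in s:
        index[x].add(name)

intersections = {x: names for x, names in index.items() if len(names) > 1}
print(intersections)  # {'a': {'A', 'C'}, 'c': {'A', 'B', 'C'}, 'e': {'B', 'C'}}

This is a single pass over all n*m elements; the main cost is the memory for the index itself.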
This ought to do what you want.
import random as RND
import string
import itertools as IT
Mock some data:
fnx = lambda: set(RND.sample(string.ascii_uppercase, 7))
S = [fnx() for c in range(5)]
Generate an index list for the sets in S, so the sets can be referenced more concisely below:
idx = range(len(S))
Get all possible unique pairs of the items in S; since set intersection is commutative, we want combinations rather than permutations:
pairs = IT.combinations(idx, 2)
Write a function to perform the set intersection:
nt = lambda a, b: S[a].intersection(S[b])
Fold this function over the pairs, keying the result of each call to its arguments:
res = dict([ (t, nt(*t)) for t in pairs ])
The result, formatted per the first option recited in the OP, is a dictionary in which each value is the set intersection of two sequences, keyed to a tuple of the two indices of those sequences.
This solution is really just two lines of code: (i) calculate the combinations; (ii) apply the intersection function over each pair, storing the returned values in a key-value container.
The memory footprint of this solution is minimal, but you can do even better by returning a generator expression in the last step instead, i.e.
res = ( (t, nt(*t)) for t in pairs )
Notice that with this approach, neither the sequence of pairs nor the corresponding intersections is ever materialized in memory; both pairs and res are iterators (so recreate pairs before switching between the two variants, since it is consumed on first use).
If we can assume that the input sets are ordered, a pseudo-mergesort approach seems promising. Treating each set as a sorted stream, advance the streams in parallel, always only advancing those where the value is the lowest among all current iterators. Compare each current value with the new minimum every time an iterator is advanced, and dump the matches into your same-item collections.
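A minimal sketch of that idea, assuming each input is available as a sorted list or stream; the heap holds one pending value per stream, so only one element per input is buffered at any time (names follow the question's example, and the function name is mine):

import heapq

def shared_items(named_streams):
    heap, iters = [], {}
    for name, stream in named_streams.items():
        it = iter(stream)
        iters[name] = it
        first = next(it, None)
        if first is not None:
            heapq.heappush(heap, (first, name))
    where = {}  # value -> names of the inputs containing it
    while heap:
        value, name = heapq.heappop(heap)
        where.setdefault(value, set()).add(name)
        nxt = next(iters[name], None)
        if nxt is not None:
            heapq.heappush(heap, (nxt, name))
    # keep only values shared by at least two inputs
    return {v: names for v, names in where.items() if len(names) > 1}

data = {"A": ["a", "b", "c"], "B": ["c", "d", "e"], "C": ["a", "c", "e"]}
print(shared_items(data))  # {'a': {'A', 'C'}, 'c': {'A', 'B', 'C'}, 'e': {'B', 'C'}}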
How about using the intersection method of set? See below:
A={"a","b","c"}
B={"c","d","e"}
C={"a","c","e"}
intersect_AB = A.intersection(B)
intersect_BC = B.intersection(C)
intersect_AC = A.intersection(C)
print(intersect_AB, intersect_BC, intersect_AC)