Python list of combinations of positions

I'm trying to generate a list of all possible 1-dimensional positions for an arbitrary number of identical objects. I want it formatted so each coordinate is the distance from the previous object, so for 3 objects (0,5,2) would mean one object is at position 0, another is at position 5 and another is at position 7.
So the main constraint is that the sum of the coordinates is <= D. Nested for loops work well for this. For example, with 3 objects and maximum coordinate D:
def positions(D):
    output = []
    for i in range(D + 1):
        for j in range(D + 1 - i):
            for k in range(D + 1 - i - j):
                output.append((i, j, k))
    return output
What's the best way to extend this to an arbitrary number of objects? I can't find a good way without explicitly writing a specific number of for loops.

I think you can combine itertools.combinations, which will give you the locations, with taking the difference, which should give you your "distance from the previous object" behaviour. For example, using
def diff(loc):
    return [y - x for x, y in zip((0,) + loc, loc)]
we have
In [114]: list(itertools.combinations(range(4), 3))
Out[114]: [(0, 1, 2), (0, 1, 3), (0, 2, 3), (1, 2, 3)]
for the possible positions, and then
In [115]: [diff(x) for x in itertools.combinations(range(4), 3)]
Out[115]: [[0, 1, 1], [0, 1, 2], [0, 2, 1], [1, 1, 1]]
for your relative-distance version.
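Putting the pieces together for n objects and maximum coordinate D, a sketch of a generalised function could look like this. Note that I've swapped in itertools.combinations_with_replacement (my suggestion, not part of the answer above) so that zero gaps, i.e. objects sharing a position, are allowed, matching the behaviour of the original nested loops; plain combinations would force every object to a distinct position.
import itertools

def diff(loc):
    return [y - x for x, y in zip((0,) + loc, loc)]

def positions(D, n):
    # relative-distance lists for n objects with maximum coordinate D;
    # non-decreasing position tuples give gaps >= 0 whose sum is <= D
    return [diff(loc) for loc in itertools.combinations_with_replacement(range(D + 1), n)]

print(positions(3, 3))
For D = 3 and n = 3 this produces the same set of gap combinations as the three-loop version above (as lists rather than tuples, and in a different order).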

Unpack `np.unravel_index()` in for loop [duplicate]

This question already has answers here: python matrix transpose and zip (7 answers).
I'm trying to directly unpack the tuple returned from np.unravel_index() in a for loop definition, but I'm running into the following issue:
for i, j in np.unravel_index(range(10), (2, 5)):
    print(i, j)
returns:
ValueError: too many values to unpack (expected 2)
I can solve the issue by doing:
idx = np.unravel_index(range(10), (2, 5))
for i, j in zip(idx[0], idx[1]):
    print(i, j)
but I really feel I should be able to do it all in the for loop assignment.
I have looked through StackOverflow and found nothing that could help me with my specific question.
Solution:
As in the accepted answer, this does exactly what I want, i.e., unpack directly in the for loop assignment and without prior knowledge of the dimensions of idx:
for i, j in zip(*np.unravel_index(range(10), (2, 5))):
    print(i, j)
Your idx is a tuple of arrays:
In [559]: idx = np.unravel_index(np.arange(5),(2,5))
In [560]: idx
Out[560]: (array([0, 0, 0, 0, 0]), array([0, 1, 2, 3, 4]))
The tuple works great for indexing, e.g. data[idx]. In fact that's its intended purpose.
Your zip turns that into a list/iteration of 2-element tuples:
In [561]: list(zip(*idx))
Out[561]: [(0, 0), (0, 1), (0, 2), (0, 3), (0, 4)]
np.transpose can also turn it into a (n,2) array:
In [562]: np.transpose(idx)
Out[562]:
array([[0, 0],
[0, 1],
[0, 2],
[0, 3],
[0, 4]])
Iteration on [562] will be just as slow as the iteration on the zip, possibly slower. But if you don't need to iterate, [562] may be better.
Notice I used zip(*idx) above, so your expression could be written as:
for i, j in zip(*np.unravel_index(range(len(neuron_sample)), (2, 5))):
    print(i, j)
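As an aside (my addition, not from the answer): if all you need is to visit every (row, column) index of the (2, 5) shape, np.ndindex yields those pairs directly and gives the same sequence as the zip(*np.unravel_index(...)) expression here:
import numpy as np

# the accepted fix: transpose the tuple of index arrays into (i, j) pairs
for i, j in zip(*np.unravel_index(range(10), (2, 5))):
    print(i, j)

# np.ndindex iterates over all indices of the shape directly, in the same row-major order
for i, j in np.ndindex(2, 5):
    print(i, j)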

Is there a pattern to itertools.permutations

When I am iterating over itertools.permutations, I would like to know at what indexes specific combinations of numbers will show up, without slowly iterating over the whole thing.
For example:
When I have a list, foo, which equals list(itertools.permutations(range(10))), I would like to know at which indexes the first element will be a zero and the seventeenth a three. A simple way to do this would be to check every permutation and see whether it fits my requirement.
import itertools

n = 10
foo = list(itertools.permutations(range(n)))
solutions = []
for i, permutation in enumerate(foo):
    if permutation[0] == 0 and permutation[16] == 3:  # requires n >= 17
        solutions.append(i)
However, as n gets larger, this becomes incredibly slow, and very memory inefficient.
Is there some pattern that I could use so that, instead of creating a long list, I could simply say that if (a*i+b)%c == 0 then I know it will fit my pattern?
EDIT: in reality I will have many conditions, some of which involve more than 2 positions; therefore I hope that by combining those conditions I can limit the number of possibilities to the point where this becomes doable. Also, the 100 might have been a bit big; I am not expecting n to get larger than 20.
You need a mapping between permutations of the non-fixed elements and the corresponding full permutations with the fixed cells filled in. For example, if you take permutations over the list [0, 1, 2, 3, 4] and require the value 1 at cell zero and the value 2 at cell three, the permutation (0, 4, 3) will be mapped to (1, 0, 4, 2, 3). Tuples are not friendly for this because they are immutable, but lists have an insert method which is pretty useful here; that's why I convert them to lists and then back to tuples.
import itertools

def item_padding(item, cells):
    # returns padding of item, e.g. (0, 4, 3) -> (1, 0, 4, 2, 3)
    listed_item = list(item)
    for idx in sorted(cells):
        listed_item.insert(idx, cells[idx])
    return tuple(listed_item)

array = range(5)
cells = {0: 1, 3: 2}  # indexes and their fixed values
remaining_items = set(array) - set(cells.values())
print(list(map(lambda x: item_padding(x, cells), itertools.permutations(remaining_items))))
Output:
[(1, 0, 3, 2, 4), (1, 0, 4, 2, 3), (1, 3, 0, 2, 4), (1, 3, 4, 2, 0), (1, 4, 0, 2, 3), (1, 4, 3, 2, 0)]
To sum up, the list conversions are quite slow, as is the iteration. Despite that, I think this algorithm is a conceptually good example of what can be done here. Use numpy instead if you really need to optimise it.
Update:
It runs in about 6 seconds on my laptop when array is range(12) (3628800 permutations). That's roughly three times longer than returning the unpadded tuples.
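As for the original "at which index" part of the question: when the input to itertools.permutations is sorted, the permutations come out in lexicographic order, so the index of a given permutation in list(itertools.permutations(range(n))) can be computed directly from its Lehmer code, without building the list. A sketch (my addition, not part of the answer above):
from math import factorial

def lex_rank(perm):
    # index of perm within itertools.permutations(sorted(perm)),
    # computed from the Lehmer code instead of by iteration
    rank = 0
    n = len(perm)
    for i, v in enumerate(perm):
        smaller = sum(1 for w in perm[i + 1:] if w < v)  # later elements smaller than v
        rank += smaller * factorial(n - 1 - i)
    return rank

print(lex_rank((1, 0, 2)))  # 2, i.e. the third permutation of range(3)
Applying lex_rank to each padded permutation produced above would give the positions those permutations occupy in the full list.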

Find the points with the steepest slope

I have a list of float points such as [x1,x2,x3,x4,....xn] that are plotted as a line graph. I would like to find the set of points where the slope is the steepest.
Right now, I'm calculating the differences between consecutive points in a loop and using the max() function to determine the maximum.
Any other elegant way of doing this?
Assuming points is the list of your values, you can calculate the differences in a single line using:
max_slope = max([x - z for x, z in zip(points[:-1], points[1:])])
But what you gain in compactness, you probably lose in readability.
What happens in this list comprehension is the following:
Two lists are created based on the original one, namely points[:-1] and points[1:]. points[:-1] starts from the beginning of the original list and goes up to the second-to-last item (inclusive). points[1:] starts from the second item and goes all the way to the last item (again inclusive).
Example
example_list = [1, 2, 3, 4, 5]
ex_a = example_list[:-1] # [1, 2, 3, 4]
ex_b = example_list[1:] # [2, 3, 4, 5]
Then you zip the two lists, creating an object from which you can draw x, z pairs to calculate your differences. Note that zip does not create a list in Python 3, so you need to pass its return value to list() if you want to see the pairs.
Like:
example_list = [1, 2, 3, 4, 5]
ex_a = example_list[:-1] # [1, 2, 3, 4]
ex_b = example_list[1:] # [2, 3, 4, 5]
print(list(zip(ex_a, ex_b))) # [(1, 2), (2, 3), (3, 4), (4, 5)]
Finally, you calculate the differences using the created pairs, store the results in a list and get the maximum value.
If the location of the max slope is also interesting, you can get the index from the created list using the .index() method. In that case, though, it would probably be better to save the list created by the comprehension rather than using it inline.
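For example (a small sketch with made-up data):
points = [1.0, 3.5, 2.0, 7.5, 7.0]  # example data
slopes = [x - z for x, z in zip(points[:-1], points[1:])]
max_slope = max(slopes)
where = slopes.index(max_slope)  # index of the segment with the largest drop
print(where, max_slope)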
Numpy has a number of tools for working with arrays. For example, you could:
import numpy as np
xx = np.array([x1, x2, x3, x4, ...]) # your list of values goes in there
print(np.argmax(xx[:-1] - xx[1:]))  # index of the largest drop between consecutive values
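A related variant (my suggestion, not from the answer): np.diff computes the forward differences xx[1:] - xx[:-1], so its sign convention is the opposite of the snippets above, and taking the absolute value picks out the steepest segment whether it rises or falls:
import numpy as np

points = [1.0, 3.5, 2.0, 7.5, 7.0]   # example data
diffs = np.diff(points)              # diffs[i] == points[i+1] - points[i]
i = int(np.argmax(np.abs(diffs)))    # steepest segment, rising or falling
print(i, points[i], points[i + 1], diffs[i])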

Check if two nested lists are equivalent upon substitution

For some context, I'm trying to enumerate the number of unique situations that can occur when calculating the Banzhaf power indices for four players, when there is no dictator and there are either four or five winning coalitions.
I am using the following code to generate a set of lists that I want to iterate over.
from itertools import chain, combinations

def powerset(iterable):
    s = list(iterable)
    return chain.from_iterable(map(list, combinations(s, r)) for r in range(2, len(s) + 1))

def superpowerset(iterable):
    s = powerset(iterable)
    return chain.from_iterable(map(list, combinations(s, r)) for r in range(4, 6))

set_of_lists = superpowerset([1, 2, 3, 4])
However, two lists in this set shouldn't be considered distinct if they are equivalent under a remapping of the values.
Using the following list as an example:
[[1, 2], [1, 3], [2, 3], [1, 2, 4]]
If each element 2 is renamed to 3 and vice-versa, we would get:
[[1, 3], [1, 2], [3, 2], [1, 3, 4]]
The order in each sub-list is unimportant, and the order of the sub-lists is also unimportant. Thus, the swapped list can be rewritten as:
[[1, 2], [1, 3], [2, 3], [1, 3, 4]]
There are 4 values, so there are P(4,4)=24 possible remappings that could occur (including the trivial mapping).
Is there any way to check this easily? Or, even better, is there are way to avoid generating these lists to begin with?
I'm not even sure how I would go about transforming the first list into the second list (but could brute force it from there). Also, I'm not restricted to data type (to a certain extent) and using frozenset would be fine.
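For reference, the brute force hinted at here (trying all 24 relabelings of the values) could look like this sketch:
from itertools import permutations

def equivalent(a, b):
    # brute force: is there a relabeling of the values that maps a onto b,
    # ignoring the order inside sub-lists and the order of the sub-lists?
    values = sorted(set(x for grp in a for x in grp) | set(x for grp in b for x in grp))
    b_key = sorted(sorted(grp) for grp in b)
    for perm in permutations(values):
        relabel = dict(zip(values, perm))
        a_key = sorted(sorted(relabel[x] for x in grp) for grp in a)
        if a_key == b_key:
            return True
    return False

print(equivalent([[1, 2], [1, 3], [2, 3], [1, 2, 4]],
                 [[1, 3], [1, 2], [3, 2], [1, 3, 4]]))  # True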
Edit: The solution offered by tobias_k answers the "checking" question but, as noted in the comments, I think I have the wrong approach to this problem.
This is probably no complete solution yet, but it might show you a direction to investigate further.
You could map each element to some characteristics concerning the "topology", i.e. how it is "connected" with other elements. You have to be careful not to take the ordering in the sets into account, or, obviously, the element itself. You could, for example, consider how often the element appears, in what sizes of groups it appears, and so on. Combine those metrics into a key function, sort the elements by that key, and assign them new names in that order.
import itertools

def normalize(lists):
    items = set(x for y in lists for x in y)
    counter = itertools.count()
    sorter = lambda x: sorted(len(y) for y in lists if x in y)
    mapping = {k: next(counter) for k in sorted(items, key=sorter)}
    return tuple(sorted(tuple(sorted(mapping[x] for x in y)) for y in lists))
This maps your two example lists to the same "normalized" list:
>>> normalize([[1, 2], [1, 3], [2, 3], [1, 2, 4]])
((0, 1), (0, 2), (1, 2), (1, 2, 3))
>>> normalize([[1, 3], [1, 2], [3, 2], [1, 3, 4]])
((0, 1), (0, 2), (1, 2), (1, 2, 3))
When applied to all the lists, it gets the count down from 330 to 36. I don't know if this is minimal, but it looks like a good start.
>>> normalized = set(map(normalize, set_of_lists))
>>> len(normalized)
36

python appending a list to a tuple

I have following tuple:
vertices = ([0,0],[0,0],[0,0]);
And on each loop I want to append the following list:
[x, y]
How should I approach it?
You can't append a list to a tuple because tuples are "immutable" (they can't be changed). It is however easy to append a tuple to a list:
vertices = [(0, 0), (0, 0), (0, 0)]
for x in range(10):
    vertices.append((x, y))
You can add tuples together to create a new, longer tuple, but that strongly goes against the purpose of tuples, and will slow down as the number of elements gets larger. Using a list in this case is preferred.
You can't modify a tuple. You'll either need to replace the tuple with a new one containing the additional vertex, or change it to a list. A list is essentially a modifiable tuple.
vertices = [[0, 0], [0, 0], [0, 0]]
for ...:
    vertices.append([x, y])
You can concatenate two tuples:
>>> vertices = ([0,0],[0,0],[0,0])
>>> lst = [10, 20]
>>> vertices = vertices + tuple([lst])
>>> vertices
([0, 0], [0, 0], [0, 0], [10, 20])
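Equivalently, you can wrap the list in a one-element tuple literal instead of calling tuple([lst]):
>>> vertices = ([0, 0], [0, 0], [0, 0])
>>> lst = [10, 20]
>>> vertices + (lst,)
([0, 0], [0, 0], [0, 0], [10, 20])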
You probably want a list, as mentioned above. But if you really need a tuple, you can create a new tuple by concatenating tuples:
vertices = ([0, 0], [0, 0], [0, 0])
for x in (1, 2):
    for y in (3, 4):
        vertices += ([x, y],)
Alternatively, and for more efficiency, use a list while you're building the tuple and convert it at the end:
vertices = ([0, 0], [0, 0], [0, 0])
# ...
vlist = list(vertices)
for x in (1, 2):
    for y in (3, 4):
        vlist.append([x, y])
vertices = tuple(vlist)
At the end of either one, vertices is:
([0, 0], [0, 0], [0, 0], [1, 3], [1, 4], [2, 3], [2, 4])
Not sure I understand you, but if you want to add x and y to each vertex's coordinates you can do something like:
vertices = ([0, 0], [0, 0], [0, 0])
for v in vertices:
    v[0] += x
    v[1] += y
