Python set matrix add operation

I have an input matrix that looks like this:
grid = [[1,1,2],[1,2,3],[3,2,4]]
I am using the following code to construct a matrix of sets.
m, n = len(grid), len(grid[0])
valuesets = [[set()]*n for _ in range(m)]
for j in range(n):
    s = sum(grid[0][:j+1])
    valuesets[0][j].add(s)
    print valuesets[0][0]
The output gives me
set([1])
set([1, 2])
set([1, 2, 4])
I am wondering why valuesets[0][0] is updated on every iteration of the loop instead of staying at set([1]) as I expected. Thanks.

The reason is that [set()] * n does not create n separate sets; it creates a list of n references to the same set() instance, so adding to any element adds to all of them.
If you want a list of distinct set objects, use [set() for _ in range(n)] instead.
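A minimal sketch (Python 3 syntax, not from the question itself) that makes the aliasing visible:

row_shared = [set()] * 3                   # three references to one set object
row_distinct = [set() for _ in range(3)]   # three independent set objects

row_shared[0].add(1)
row_distinct[0].add(1)

print(row_shared)    # [{1}, {1}, {1}]  -- adding to one "cell" changed all of them
print(row_distinct)  # [{1}, set(), set()]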

Related

using loops to create a new array that gets its values from a different array, starting on the second element and multiplying by the previous element

I am trying to make a function that creates a new array from the function's original array, starting at the second element and multiplying each element by the previous one.
Example
input: [2,3,4,5]
output: [6,12,20]
I am trying to use a loop to get this done, and here is my code so far:
def funct(array1):
    newarray = []
    for x in array1[1:]:
        newarray.append(x*array1)
    return newarray
I am at a loss, as I am just learning Python, and I've tried various other options with no success. Any help is appreciated.
Try:
A = [2, 3, 4, 5]
output = [a*b for a, b in zip(A[1:], A[:-1])]
You can use a list comprehension like so:
inp = [2,3,4,5]
out = [j * inp[i-1] for i,j in enumerate(inp) if i != 0]
Output:
[6,12,20]
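For completeness, a minimal sketch of the original function fixed with the same zip idea (the name funct is taken from the question):

def funct(array1):
    # pair each element with its predecessor and multiply them
    return [curr * prev for prev, curr in zip(array1, array1[1:])]

print(funct([2, 3, 4, 5]))  # [6, 12, 20]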

Assign values to array during loop - Python

I am currently learning Python (I have a strong background in Matlab). I would like to write a loop in Python, where the size of the array increases with every iteration (i.e., I can assign a newly calculated value to a different index of a variable). For the sake of this question, I am using a very simple loop to generate the vector t = [1 2 3 4 5]. In Matlab, programming my desired loop would look something like this:
t = [];
for i = 1:5
    t(i,1) = i;
end
I have managed to achieve the same thing in Python with the following code:
result_t = []
for i in range(1,5):
    t = i
    result_t.append(t)
Is there a more efficient way to assign values to an array as we iterate in Python? Why is it not possible to do t[i,1] = i (error: list indices must be integers or slices, not tuple) or t.append(t) = t (error: 'int' object has no attribute 'append')?
Finally, I have used the example above for simplicity. I am aware that if I wanted to generate the vector [1 2 3 4 5] in Python, I could use np.arange(1, 6).
Thanks in advance for your assistance!
-> My real intention isn't to produce the vector [1 2 3 4 5], but rather to assign calculated values to the index of the vector variable. For example:
result_b = []
b = 2
for i in range(1,5):
    t = i + b*t
    result_b.append(t)
Why can I not directly write t.append(t) or use indexing (i.e., t[i] = i + b*t)?
Appending elements with append() while looping is correct; append() is a built-in method of Python lists.
However, you can get the same result in other ways:
Using list comprehension:
result_t = [k for k in range(1,6)]
print(result_t)
>>> [1, 2, 3, 4, 5]
Using + operator:
result_t = []
for k in range(1,6):
    result_t += [k]
print(result_t)
>>> [1, 2, 3, 4, 5]
Using special method __iadd__:
result_t = []
for k in range(1,6):
    result_t.__iadd__([k])
print(result_t)
>>> [1, 2, 3, 4, 5]
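As a side note, a quick sketch (not from the answer above) showing that += on a list is an in-place mutation, which is exactly what __iadd__ implements:

result_t = []
before = id(result_t)
result_t += [1]
print(id(result_t) == before)  # True -- the same list object was extended, no copy was made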
In Python 3, the range function returns a lazy range object rather than a list. The list function converts it into a list. So the following will fill your list with the values 1 to 5:
result_t = list(range(1,6)) # yields [1, 2, 3, 4, 5]
Note that in order to include 5 in the list, the range argument has to be 6.
Your last example doesn't run unless you assign t a value before the loop (otherwise it raises a NameError). Assuming you do that, what you're doing in that case is modifying t each time through the loop, not just producing a linear range. You can get this effect using the map function:
t = 0
b = 2

def f(i):
    global t
    t = i + b*t
    return t

result_b = list(map(f, range(1, 5)))  # Yields [1, 4, 11, 26]
The map function applies the f function to each element of the range and returns an iterator, which is converted into a list using the list function. Of course, this version is more verbose than the loop for this small example, but the technique itself is useful.
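If you prefer to avoid the global, one possible sketch (using itertools.accumulate, which is not mentioned above; b = 2 as in your example) keeps the running value inside the accumulator:

from itertools import accumulate

b = 2
# accumulate feeds the running value t and the next i into the lambda
result_b = list(accumulate(range(1, 5), lambda t, i: i + b * t))
print(result_b)  # [1, 4, 11, 26]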
Here is a more concrete example, from UI testing with Selenium:
print('Assert Pagination buttons displayed?')
all_spans = self.web_driver.find_elements_by_tag_name('span')
# Identify the button texts
pagination_buttons = ["Previous Page", "Next Page", "First Page"]
# Filter from all spans, only the required ones.
filtered_spans = [s for s in all_spans if s.text in pagination_buttons]
# From the filtered spans, assert all for is_displayed()
for a_span in filtered_spans:
    assert a_span.is_displayed()
print('Asserted Pagination buttons displayed.')
You can try this:
data = ['Order-'+str(i) for i in range(1,6)]
print(data)
>>> ['Order-1', 'Order-2', 'Order-3', 'Order-4', 'Order-5']

How to unpack a tuple for looping without being dimension specific

I'd like to do something like this:
if dim==2:
    a,b=grid_shape
    for i in range(a):
        for j in range(b):
            A[i,j] = ...things...
where dim is simply the number of elements in my tuple grid_shape. A is a numpy array of dimension dim.
Is there a way to do it without being dimension specific?
Without having to write ugly code like
if dim==2:
    a,b=grid_shape
    for i in range(a):
        for j in range(b):
            A[i,j] = ...things...
if dim==3:
    a,b,c=grid_shape
    for i in range(a):
        for j in range(b):
            for k in range(c):
                A[i,j,k] = ...things...
Using itertools, you can do it like this:
for index in itertools.product(*(range(x) for x in grid_shape)):
    A[index] = ...things...
This relies on a couple of tricks. First, itertools.product() is a function which generates tuples from iterables.
for i in range(a):
    for j in range(b):
        index = i,j
        do_something_with(index)
can be reduced to
for index in itertools.product(range(a),range(b)):
    do_something_with(index)
This works for any number of arguments to itertools.product(), so you can effectively create nested loops of arbitrary depth.
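For instance, a small runnable sketch (the dimensions here are chosen arbitrarily) collapsing three nested loops into a single product call:

import itertools

for i, j, k in itertools.product(range(2), range(3), range(4)):
    print(i, j, k)  # visits all 2*3*4 = 24 index combinations in nested-loop order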
The other trick is to convert your grid shape into the arguments for itertools.product:
(range(x) for x in grid_shape)
is equivalent to
(range(grid_shape[0]),range(grid_shape[1]),...)
That is, it is a tuple of ranges for each grid_shape dimension. Using * then expands this into the arguments.
itertools.product(*(range(x1),range(x2),...))
is equivalent to
itertools.product(range(x1),range(x2),...)
Also, since A[i,j,k] is equivalent to A[(i,j,k)], we can just use A[index] directly.
As DSM points out, since you are using numpy, you can reduce
itertools.product(*(range(x) for x in grid_shape))
to
numpy.ndindex(grid_shape)
So the final loop becomes
for index in numpy.ndindex(grid_shape):
    A[index] = ...things...
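As a concrete, runnable sketch (the shape and the values filled in are made up for illustration):

import numpy as np

grid_shape = (2, 3, 4)          # any number of dimensions works
A = np.zeros(grid_shape)
for index in np.ndindex(grid_shape):
    A[index] = sum(index)       # stand-in for ...things...; index is a tuple like (i, j, k)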
You can catch the rest of the tuple by putting a star in front of the last variable: the first element of the tuple goes to a (here the tuple (1, 2), because of the parentheses around it), and the remaining elements are collected into the list b.
>>> tupl = ((1, 2), 3, 4, 5, 6)
>>> a, *b = tupl
>>> a
(1, 2)
>>> b
[3, 4, 5, 6]
>>>
And then you can loop through b. So it would look something like
a,*b=grid_shape
for i in a:
    for j in range(i):
        for k in b:
            for l in range(k):
                A[j, l] = ...things...

Random generation in heavily nested loops

I have a little game I've been making for a school project, and it has worked up until now.
I used a very messy nested list system for multiple screens, each with a 2D array for objects on screen. These 2D "level" arrays are also arranged in their own 2D array, which makes up the "world". The strings correspond to an object tile, which is drawn using pygame.
My problem is that every level array is the same in the world array, and I can't understand why that is.
def generate_world(load):
    # This bit not important
    if load is True:
        in_array()
    # This is
    else:
        for world_y in Game_world.game_array:
            for world_x in world_y:
                generate_clutter(world_x)
        print Game_world.game_array
        out_array()
        # Current_level.array = Level.new_level_array

def generate_clutter(world_x):
    for level_y in world_x:
        for level_x, _ in enumerate(level_y):
            ### GENERATE CLUTTER ###
            i = randrange(1, 24)
            if i == 19 or i == 20:
                level_y[level_x] = "g1"
            elif i == 21 or i == 22:
                level_y[level_x] = "g2"
            elif i == 23:
                level_y[level_x] = "c1"
            else:
                level_y[level_x] = "-"
I'm sure it's something simple I'm overlooking, but to me it seems the random generation should be carried out for every single list item individually, so I can't understand the duplication.
I know quadruple nested lists aren't pretty, but I think I'm in too deep to make any serious changes now.
EDIT:
This is the gist of how the lists/arrays are initially created. Their size doesn't ever change, existing strings are just replaced.
class World:
    def __init__(self, name, load):
        if load is False:
            n = [["-" for x in range(20)] for x in range(15)]
            self.game_array = [[n, n, n, n, n, n, n],
                               [n, n, n, n, n, n, n],
                               [n, n, n, n, n, n, n]]
In Python, everything is an object - even integer values. How you initialize an 'empty' array can have some surprising results.
Consider this initialization:
>>> l=[[1]*2]*2
>>> l
[[1, 1], [1, 1]]
You appear to have created a 2x2 matrix with each cell containing the value 1. In fact, you have created a list of two lists (each containing [1,1]). Deeper still, you have created a list of two references to a single list [1,1].
The results of this can be seen if you now modify one of the cells
>>> l[0][0]=2
>>> l
[[2, 1], [2, 1]]
>>>
Notice that both l[0][0] and l[1][0] were modified.
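You can confirm the sharing directly (a tiny sketch, not part of the original question):

l = [[1] * 2] * 2
print(l[0] is l[1])  # True -- both "rows" are the same list object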
To avoid this effect, you need to jump through some hoops
>>> l2 = [[1 for _ in range(2)] for _ in range(2)]
>>> l2
[[1, 1], [1, 1]]
>>> l2[0][0]=2
>>> l2
[[2, 1], [1, 1]]
>>>
If you used the former approach to initialize Game_world.game_array, every assignment to level_y[level_x] will modify multiple cells in your array.
Just as an additional comment, your generate_clutter function can be simplified slightly using a dict:
def generate_clutter(world_x):
    clutter_map = {19: "g1", 20: "g1", 21: "g2", 22: "g2", 23: "c1"}
    for level_y in world_x:
        for level_x, _ in enumerate(level_y):
            level_y[level_x] = clutter_map.get(randrange(1, 24), '-')
This separates the logic of selecting the clutter representation from the actual mapping of values and will be much easier to expand and maintain.
Looking at your edit, the initialization needs to be something like:
self.game_array = [
[
[
["-" for x in range(20)]
for x in range(15)
]
for x in range(7)
]
for x in range(3)
]
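A rough alternative sketch (the names template and game_array are placeholders of mine; the shape matches the question's __init__) that builds one template level and deep-copies it, so no two levels share storage:

import copy

template = [["-" for x in range(20)] for x in range(15)]
game_array = [[copy.deepcopy(template) for _ in range(7)] for _ in range(3)]
# game_array[0][0] and game_array[0][1] are now independent levels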

numpy array equivalent for += operator

I often do the following:
import numpy as np
def my_generator_fun():
    yield x  # some magically generated x

A = []
for x in my_generator_fun():
    A += [x]
A = np.array(A)
Is there a better solution to this which operates on a numpy array from the start and avoids the creation of a standard python list?
Note that the += operator allows me to extend an empty and dimensionless array with an arbitrarily dimensioned array, whereas np.append and np.concatenate demand equally dimensioned arrays.
Use np.fromiter:
def f(n):
    for j in range(n):
        yield j
>>> np.fromiter(f(5), dtype=np.intp)
array([0, 1, 2, 3, 4])
If you know beforehand the number of items the iterator is going to return, you can speed things up using the count keyword argument:
>>> np.fromiter(f(5), dtype=np.intp, count=5)
array([0, 1, 2, 3, 4])
To get the same array A, do:
A = numpy.arange(5)
Arrays are not in general meant to be dynamically sized, but you could use numpy.concatenate.
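For example, a rough sketch of growing an array with np.concatenate (reusing the f generator defined above); note that this copies the whole array on every iteration, so np.fromiter or building a list first is usually faster:

import numpy as np

A = np.empty(0, dtype=np.intp)
for x in f(5):
    A = np.concatenate((A, [x]))  # each call allocates a new, larger array
# A is now array([0, 1, 2, 3, 4])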
