Python 3: initializing a 2D array ignores the index specification range(start, end, step)

I want to initialize a 2D array in Python 3 with an index that does NOT start at zero.
x_length=16
y_length=4
x_start=-4
y_start=-400
# Later passing those variables with:
level_matrix=generate_area(x_length,y_length,x_start,y_start)
Part of the function:
def generate_area(xlen,ylen,xstart,ystart):
    matrix=[[0 for y in range(ystart, ystart+ylen)] for x in range(xstart, xstart+xlen)]
    for index, value in enumerate(matrix):
        print(f"S1_vec('{index}')<= {value}")
    for index, value in enumerate(matrix[0]):
        print(f"S1_vec('{index}')<= {value}")
    for x in range(xstart, xstart+xlen):
        for y in range(ystart, ystart+ylen):
            print("Y:"+str(y))
            print("X:"+str(x))
Output:
S1_vec('0')<= [0, 0, 0, 0]
S1_vec('1')<= [0, 0, 0, 0]
S1_vec('2')<= [0, 0, 0, 0]
S1_vec('3')<= [0, 0, 0, 0]
S1_vec('4')<= [0, 0, 0, 0]
S1_vec('5')<= [0, 0, 0, 0]
S1_vec('6')<= [0, 0, 0, 0]
S1_vec('7')<= [0, 0, 0, 0]
S1_vec('8')<= [0, 0, 0, 0]
S1_vec('9')<= [0, 0, 0, 0]
S1_vec('10')<= [0, 0, 0, 0]
S1_vec('11')<= [0, 0, 0, 0]
S1_vec('12')<= [0, 0, 0, 0]
S1_vec('13')<= [0, 0, 0, 0]
S1_vec('14')<= [0, 0, 0, 0]
S1_vec('15')<= [0, 0, 0, 0]
S1_vec('0')<= 0
S1_vec('1')<= 0
S1_vec('2')<= 0
S1_vec('3')<= 0
Y:-400
X:-4
IndexError: list assignment index out of range
Well, as you can clearly see, there are no negative indexes in the array. It does not work properly with a positive offset either. The loops try to access the offset index values and the script obviously fails, since the indexes only get created from 0 up to the number of elements, even when I write range(var1, var2). This makes no sense to me, since range should work like range(starting_point, end_point, step_if_needed). And the same copy-pasted for-loop syntax is used successfully later in the script in multiple places.
What causes such weird behavior, and how can I fix it without changing anything except the initialization of the array? I need the 2D array to work exactly within the specified region. Is there a simple solution?
Edit:
Just to clarify the goal:
I need a 2D array with negative index capabilities. The range is known, but each entry needs to be defined. Append is useless, because it will not add a specific negative index for me.
If, for example, I need to define matrix[-4][-120]="Jeff", this needs to work. I do not even care if the solution looks like Bash, where you have to write matrix["-4,-120"]; I just need a reasonable way to address such entries.

You can use a virtual indexing strategy to achieve this. Here is my approach:
offset = 5
size = 4
a = [x for x in range(offset,offset+size)]
print(a) # [5, 6, 7, 8]
def get_loc(idx):
    return a[idx-offset]
def set_loc(idx, val):
    a[idx-offset] = val
set_loc(6,15)
print(a) # [5, 15, 7, 8]
set_loc(8, 112)
print(a) # [5, 15, 7, 112]
That code was just to illustrate what I mean by a virtual indexing strategy; you can simply use the following to get and set values:
# to get a[n] use a[n-offset]
print(a[8-offset]) # 112
# to set a[n] = x | Use a[n-offset] = x
a[7-offset] = 21
print(a) # [5, 15, 21, 112]
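If you want the matrix[-4][-120] = "Jeff" style of addressing from the question in two dimensions, the same offset trick can be wrapped in a small class. This is only a sketch of the idea; the OffsetGrid name and the grid[x, y] tuple-key convention are my own:
class OffsetGrid:
    """A 2D grid addressed by indices that start at arbitrary offsets."""
    def __init__(self, x_start, y_start, x_len, y_len, fill=0):
        self.x_start, self.y_start = x_start, y_start
        self.data = [[fill] * y_len for _ in range(x_len)]

    def __getitem__(self, xy):
        x, y = xy  # addressed as grid[x, y]
        return self.data[x - self.x_start][y - self.y_start]

    def __setitem__(self, xy, value):
        x, y = xy
        self.data[x - self.x_start][y - self.y_start] = value

grid = OffsetGrid(x_start=-4, y_start=-400, x_len=16, y_len=4)
grid[-4, -400] = "Jeff"
print(grid[-4, -400])  # Jeff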

Related

Counting gaps for a Tetris AI in Python

I am trying to make a simple Tetris AI in Python (no genetic algorithms).
I want to count the gaps in the grid and make the best choice depending on it.
By gap I mean a cell where you won't be able to place a piece without clearing some lines.
My grid is something like this:
[0, 0, 0, 0, 0]
['#ff0000', ....]
[...]
0 represents a blank space, while a hex code means the cell is covered by a block.
I have tried to calculate gaps like this:
def grid_gaps(grid):
    gaps = 0
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and \
               (y > 0 and grid[y - 1][x] != 0):
                gaps += 1
    return gaps
It works well when the grid is like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0]
1 is some color; it correctly tells me that there are 3 gaps, but when the grid is something like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 1, 0]
It again returns 3 but I want it to return 6.
I think the problem is that and grid[y - 1][x] != 0 is only looking at the cell directly above the current cell, so your bottom 3 cells in the second example aren't being counted.
One quick fix I can think of is to set a gap cell to some non-zero value once it's counted, that way the gap cells below will be counted too. (Then set them back to 0 after you're done, if you're using the same grid and not a copy for the rest of the game.)
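A sketch of that mark-and-restore idea, building on the grid_gaps function from the question (the -1 sentinel value is my own choice):
def grid_gaps(grid):
    gaps = 0
    counted = []
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and y > 0 and grid[y - 1][x] != 0:
                gaps += 1
                grid[y][x] = -1          # mark so the cell below sees a non-zero cell
                counted.append((y, x))
    for y, x in counted:                 # restore the grid for the rest of the game
        grid[y][x] = 0
    return gaps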
The problem is that you're looking "up" to see whether there's a blocker, but you're only looking up one row. I think you want to reorganize this so you iterate over columns, and for each column, iterate down until you hit a 1, and then continue iterating and add to the gap count for each 0 that's encountered.
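A minimal sketch of that column-by-column scan (my own code, assuming the same grid layout as the question):
def count_gaps(grid):
    gaps = 0
    for x in range(len(grid[0])):        # one column at a time
        seen_block = False
        for y in range(len(grid)):       # scan from the top down
            if grid[y][x] != 0:
                seen_block = True        # everything below sits under a block
            elif seen_block:
                gaps += 1                # a 0 underneath some block is a gap
    return gaps
This returns 6 for the second example grid, as desired.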

Slicing a Python list beyond its length

I have to slice a Python list/NumPy array from a center index minus dx to the same index plus dx, where dx is a constant. For example (the 1 only marks the center index, for illustration):
A = [0, 0, 0, 0, 1, 0, 0, 0, 0]
dx=3
print(A[4-dx: 4+dx+1]) # 4 is the position of '1'
>>>[0, 0, 0, 1, 0, 0, 0]
But for this case,
B = [0, 1, 0, 0, 0 ,0, 0 ,0, 0]
print(B[1-dx: 1+dx+1])
>>>[] # because 1-dx <0.
but what I need from case B is [0, 1, 0, 0, 0]
so I did something like this to prevent an empty list/array, where n is the center index:
if n-dx < 0:
    result = B[:n+dx+1]
Although the above method works fine, the original code is quite complicated, and I have to put this if...#complicated version# everywhere.
Is there another way around this? Maybe I missed something.
Thank you!
You can use the max() function to bound the index at 0.
print(A[max(0,4-dx): 4+dx+1])
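If the clamped slice is needed in many places, one option is to wrap it once in a small helper (the name window is mine, not from the original answer):
def window(seq, n, dx):
    """Slice seq from n-dx to n+dx inclusive.

    Only the lower bound needs clamping: Python slicing already
    tolerates an upper bound past the end of the sequence.
    """
    return seq[max(0, n - dx): n + dx + 1]

B = [0, 1, 0, 0, 0, 0, 0, 0, 0]
print(window(B, 1, 3))  # [0, 1, 0, 0, 0]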

Count number of tails since the last head

Consider a sequence of coin tosses: 1, 0, 0, 1, 0, 1 where tail = 0 and head = 1.
The desired output is the sequence: 0, 1, 2, 0, 1, 0
Each element of the output sequence counts the number of tails since the last head.
I have tried a naive method:
def timer(seq):
    if seq[0] == 1: time = [0]
    if seq[0] == 0: time = [1]
    for x in seq[1:]:
        if x == 0: time.append(time[-1] + 1)
        if x == 1: time.append(0)
    return time
Question: Is there a better method?
Using NumPy:
import numpy as np
seq = np.array([1,0,0,1,0,1,0,0,0,0,1,0])
arr = np.arange(len(seq))
result = arr - np.maximum.accumulate(arr * seq)
print(result)
yields
[0 1 2 0 1 0 1 2 3 4 0 1]
Why arr - np.maximum.accumulate(arr * seq)? The desired output seemed related to a simple progression of integers:
arr = np.arange(len(seq))
So the natural question is: if seq = np.array([1, 0, 0, 1, 0, 1]) and the expected result is expected = np.array([0, 1, 2, 0, 1, 0]), then what value of x makes
arr - x = expected
Since
In [220]: expected - arr
Out[220]: array([ 0, 0, 0, -3, -3, -5])
it looks like x should be the cumulative max of arr * seq (the negation of the differences above):
In [234]: arr * seq
Out[234]: array([0, 0, 0, 3, 0, 5])
In [235]: np.maximum.accumulate(arr * seq)
Out[235]: array([0, 0, 0, 3, 3, 5])
Step 1: Invert l:
In [311]: l = [1, 0, 0, 1, 0, 1]
In [312]: out = [int(not i) for i in l]; out
Out[312]: [0, 1, 1, 0, 1, 0]
Step 2: List comp; keep a running count of tails, resetting whenever the current value is 0 (a head). Note that simply zipping adjacent pairs, e.g. [x + y if y else y for x, y in zip(out[:-1], out[1:])], only looks one element back and so undercounts runs of three or more tails; a running accumulator is needed. With an assignment expression (Python 3.8+):
In [319]: run = 0
In [320]: [run := run + y if y else 0 for y in out]
Out[320]: [0, 1, 2, 0, 1, 0]
This still avoids windy ifs by folding the reset into a single conditional expression.
Using itertools.accumulate:
>>> from itertools import accumulate
>>> a = [1, 0, 0, 1, 0, 1]
>>> b = [1 - x for x in a]
>>> list(accumulate(b, lambda total,e: total+1 if e==1 else 0))
[0, 1, 2, 0, 1, 0]
accumulate is only defined in Python 3. There's the equivalent Python code in the above documentation, though, if you want to use it in Python 2.
It's required to invert a because the first element returned by accumulate is the first list element, independently of the accumulator function:
>>> list(accumulate(a, lambda total,e: 0))
[1, 0, 0, 0, 0, 0]
The required output is an array of the same length as the input, so the algorithm must be at least O(n) just to build the output array. Furthermore, for this specific problem every input value has to be examined. All of these operations are O(n), so it will not get any more efficient: constants may differ, but your method is already O(n) and will not go any lower.
Using reduce (functools.reduce in Python 3):
from functools import reduce
time = reduce(lambda l, r: l + [(l[-1]+1)*(not r)], seq, [0])[1:]
I have tried to be clear in the following code; it differs from the original in using an explicit accumulator.
>>> s = [1,0,0,1,0,1,0,0,0,0,1,0]
>>> def zero_run_length_or_zero(seq):
        "Return the run length of zeroes so far in the sequence, or zero"
        accumulator, answer = 0, []
        for item in seq:
            accumulator = 0 if item == 1 else accumulator + 1
            answer.append(accumulator)
        return answer
>>> zero_run_length_or_zero(s)
[0, 1, 2, 0, 1, 0, 1, 2, 3, 4, 0, 1]
>>>

Python — Randomly fill 2D array with set number of 1's

Suppose I have a 2D array (8x8) of 0's. I would like to fill this array with a predetermined number of 1's, but in a random manner. For example, suppose I want to place exactly 16 1's in the grid at random, resulting in something like this:
[[0, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 1]]
The resulting placement of the 1's does not matter in the slightest, as long as it is random (or as random as Python will allow).
My code technically works, but I imagine it's horrendously inefficient. All I'm doing is setting the probability of each number becoming a 1 to n/s, where n is the number of desired 1's and s is the size of the grid (i.e. number of elements), and then I check to see if the correct number of 1's was added. Here's the code (Python 2.7):
import random

length = 8
numOnes = 16
while True:
    board = [[(random.random() < float(numOnes)/(length**2))*1 for x in xrange(length)] for x in xrange(length)]
    if sum([subarr.count(1) for subarr in board]) == 16:
        break
print board
While this works, it seems like a roundabout method. Is there a better (i.e. more efficient) way of doing this? I foresee running this code many times (hundreds of thousands if not millions), so speed is a concern.
Either shuffle a list of 16 1s and 48 0s:
board = [1]*16 + 48*[0]
random.shuffle(board)
board = [board[i:i+8] for i in xrange(0, 64, 8)]
or fill the board with 0s and pick a random sample of 16 positions to put 1s in:
board = [[0]*8 for i in xrange(8)]
for pos in random.sample(xrange(64), 16):
    board[pos//8][pos%8] = 1
I made the ones, made the zeros, concatenated them, shuffled them, and reshaped.
import numpy as np
def make_board(shape, ones):
    o = np.ones(ones, dtype=int)
    z = np.zeros(np.prod(shape) - ones, dtype=int)
    board = np.concatenate([o, z])
    np.random.shuffle(board)
    return board.reshape(shape)
make_board((8,8), 16)
Edit.
For what it's worth, user2357112's approach with numpy is fast...
def make_board(shape, ones):
    size = np.prod(shape)
    board = np.zeros(size, dtype=int)
    # replace=False is needed so the sampled positions are distinct
    # and the board gets exactly `ones` ones
    i = np.random.choice(np.arange(size), ones, replace=False)
    board[i] = 1
    return board.reshape(shape)
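If speed matters as much as the question suggests, a quick comparison harness might look like this (a sketch only; it assumes the two variants above have been renamed make_board_shuffle and make_board_choice so both can be defined at once):
import timeit

# hypothetical names: the shuffle-based and choice-based versions
# above, renamed so both can exist at the same time
for name in ("make_board_shuffle", "make_board_choice"):
    t = timeit.timeit(f"{name}((8, 8), 16)", globals=globals(), number=10_000)
    print(f"{name}: {t:.3f} s for 10,000 boards")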

How to extract the bits of larger numeric Numpy data types

Numpy has a library function, np.unpackbits, which will unpack a uint8 into a bit vector of length 8. Is there a correspondingly fast way to unpack larger numeric types? E.g. uint16 or uint32. I am working on a question that involves frequent translation between numbers, for array indexing, and their bit vector representations, and the bottleneck is our pack and unpack functions.
You can do this with view and unpackbits.
Input:
from numpy import arange, uint16, uint8, unpackbits
unpackbits(arange(2, dtype=uint16).view(uint8))
Output:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
For a = arange(int(1e6), dtype=uint16) this is pretty fast at around 7 ms on my machine
%%timeit
unpackbits(a.view(uint8))
100 loops, best of 3: 7.03 ms per loop
As for endianness, you'll have to look at http://docs.scipy.org/doc/numpy/user/basics.byteswapping.html and apply the suggestions there depending on your needs.
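For instance, on a little-endian machine the bytes of each uint16 arrive least-significant first; if you want the bits in most-significant-first order per value, one option (my suggestion, not part of the original answer) is to byteswap before viewing:
import numpy as np

a = np.arange(2, dtype=np.uint16)
# byteswap() reverses the byte order of each element, so on a
# little-endian machine the unpacked bits come out with the most
# significant byte of each uint16 first
print(np.unpackbits(a.byteswap().view(np.uint8)))
# [0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1]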
This is the solution I use:
import numpy as np

def unpackbits(x, num_bits):
    if np.issubdtype(x.dtype, np.floating):
        raise ValueError("numpy data type needs to be int-like")
    xshape = list(x.shape)
    x = x.reshape([-1, 1])
    mask = 2**np.arange(num_bits, dtype=x.dtype).reshape([1, num_bits])
    return (x & mask).astype(bool).astype(int).reshape(xshape + [num_bits])
This is a completely vectorized solution that works with any dimension ndarray and can unpack however many bits you want.
I have not found a function for this either, but maybe Python's built-in struct.unpack can help make the custom function faster than shifting and AND-ing longer uints (note that I am using uint64):
>>> import struct
>>> import numpy as np
>>> N = np.uint64(2 + 2**10 + 2**18 + 2**26)
>>> struct.unpack('>BBBBBBBB', N)
(2, 4, 4, 4, 0, 0, 0, 0)
The idea is to convert those to uint8, use unpackbits, concatenate the result. Or, depending on your application, it may be more convenient to use structured arrays.
There is also the built-in bin() function, which produces a string of 0s and 1s, but I am not sure how fast it is, and it requires postprocessing too.
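For completeness, the bin() route with the postprocessing it needs could look like the sketch below (my own helper; it works one Python int at a time, so it is likely slower than the vectorized options):
def int_to_bits(n, width):
    # bin() yields e.g. '0b10': strip the '0b' prefix, then left-pad
    # with zeros to the requested width (most significant bit first)
    return [int(b) for b in bin(int(n))[2:].zfill(width)]

print(int_to_bits(2 + 2**10 + 2**18 + 2**26, 64))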
This works for arbitrary arrays of arbitrary uint (i.e. also for multidimensional arrays and also for numbers larger than the uint8 max value).
It cycles over the number of bits, rather than over the number of array elements, so it is reasonably fast.
import numpy

def my_ManyParallel_uint2bits(in_intAr, Nbits):
    '''convert (numpy array of uint => array of Nbits bits) for many bits in parallel'''
    inSize_T = in_intAr.shape
    in_intAr_flat = in_intAr.flatten()
    out_NbitAr = numpy.zeros((len(in_intAr_flat), Nbits))
    for iBits in range(Nbits):  # was xrange in the original Python 2 code
        out_NbitAr[:, iBits] = (in_intAr_flat >> iBits) & 1
    out_NbitAr = out_NbitAr.reshape(inSize_T + (Nbits,))
    return out_NbitAr
A=numpy.arange(256,261).astype('uint16')
# array([256, 257, 258, 259, 260], dtype=uint16)
B=my_ManyParallel_uint2bits(A,16).astype('uint16')
# array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
# [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
# [0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
# [1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
# [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]], dtype=uint16)
