Suppose I have a 2D array (8x8) of 0's. I would like to fill this array with a predetermined number of 1's, but in a random manner. For example, suppose I want to place exactly 16 1's in the grid at random, resulting in something like this:
[[0, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 1]]
The resulting placement of the 1's does not matter in the slightest, as long as it is random (or as random as Python will allow).
My code technically works, but I imagine it's horrendously inefficient. All I'm doing is setting the probability of each number becoming a 1 to n/s, where n is the number of desired 1's and s is the size of the grid (i.e. number of elements), and then I check to see if the correct number of 1's was added. Here's the code (Python 2.7):
import random

length = 8
numOnes = 16
while True:
    board = [[(random.random() < float(numOnes) / (length**2)) * 1
              for x in xrange(length)] for x in xrange(length)]
    if sum(subarr.count(1) for subarr in board) == numOnes:
        break
print board
While this works, it seems like a roundabout method. Is there a better (i.e. more efficient) way of doing this? I foresee running this code many times (hundreds of thousands if not millions), so speed is a concern.
Either shuffle a list of 16 1s and 48 0s:
board = [1] * 16 + [0] * 48
random.shuffle(board)
board = [board[i:i + 8] for i in xrange(0, 64, 8)]
or fill the board with 0s and pick a random sample of 16 positions to put 1s in:
board = [[0]*8 for i in xrange(8)]
for pos in random.sample(xrange(64), 16):
    board[pos // 8][pos % 8] = 1
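A quick sanity check of both suggestions (a Python 3 sketch using range; the original answer targets Python 2's xrange):

```python
import random

length, num_ones = 8, 16

# Approach 1: shuffle a flat list of 1s and 0s, then cut it into rows.
board = [1] * num_ones + [0] * (length**2 - num_ones)
random.shuffle(board)
board = [board[i:i + length] for i in range(0, length**2, length)]
assert sum(row.count(1) for row in board) == num_ones

# Approach 2: sample distinct flat positions and set only those to 1.
board = [[0] * length for _ in range(length)]
for pos in random.sample(range(length**2), num_ones):
    board[pos // length][pos % length] = 1
assert sum(row.count(1) for row in board) == num_ones
```

Both run in a single pass with no retry loop, so the count of 1s is exact by construction rather than by rejection sampling.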
I made the ones, made the zeros, concatenated them, shuffled them, and reshaped.
import numpy as np

def make_board(shape, ones):
    o = np.ones(ones, dtype=int)
    z = np.zeros(np.prod(shape) - ones, dtype=int)
    board = np.concatenate([o, z])
    np.random.shuffle(board)
    return board.reshape(shape)

make_board((8, 8), 16)
Edit.
For what it's worth, user2357112's approach with numpy is fast...
def make_board(shape, ones):
    size = np.prod(shape)
    board = np.zeros(size, dtype=int)
    # replace=False guarantees `ones` distinct positions; the default
    # samples with replacement and can produce fewer than `ones` 1s
    i = np.random.choice(size, ones, replace=False)
    board[i] = 1
    return board.reshape(shape)
Related
I want to initialize a 2D array in Python 3 with indexes NOT starting at zero.
x_length=16
y_length=4
x_start=-4
y_start=-400
# Later passing those variables with:
level_matrix=generate_area(x_length,y_length,x_start,y_start)
Part of the function:
def generate_area(xlen, ylen, xstart, ystart):
    matrix = [[0 for y in range(ystart, ystart + ylen)] for x in range(xstart, xstart + xlen)]
    for index, value in enumerate(matrix):
        print(f"S1_vec('{index}')<= {value}")
    for index, value in enumerate(matrix[0]):
        print(f"S1_vec('{index}')<= {value}")
    for x in range(xstart, xstart + xlen):
        for y in range(ystart, ystart + ylen):
            print("Y:" + str(y))
            print("X:" + str(x))
Output:
S1_vec('0')<= [0, 0, 0, 0]
S1_vec('1')<= [0, 0, 0, 0]
S1_vec('2')<= [0, 0, 0, 0]
S1_vec('3')<= [0, 0, 0, 0]
S1_vec('4')<= [0, 0, 0, 0]
S1_vec('5')<= [0, 0, 0, 0]
S1_vec('6')<= [0, 0, 0, 0]
S1_vec('7')<= [0, 0, 0, 0]
S1_vec('8')<= [0, 0, 0, 0]
S1_vec('9')<= [0, 0, 0, 0]
S1_vec('10')<= [0, 0, 0, 0]
S1_vec('11')<= [0, 0, 0, 0]
S1_vec('12')<= [0, 0, 0, 0]
S1_vec('13')<= [0, 0, 0, 0]
S1_vec('14')<= [0, 0, 0, 0]
S1_vec('15')<= [0, 0, 0, 0]
S1_vec('0')<= 0
S1_vec('1')<= 0
S1_vec('2')<= 0
S1_vec('3')<= 0
Y:-400
X:-4
IndexError: list assignment index out of range
Well, as you can clearly see, there are no negative indexes in the array. This also does not work properly with a positive offset. The loops try to access the offset index values and the script obviously fails, since the indices only get created from 0 upward even when using range(var1, var2). This makes no sense to me, since range should work like range(starting_point, end_point, steps_if_needed). And the same copy-pasted for-loop syntax is used successfully later in the script in multiple instances.
What causes such weird behavior and how to fix this without changing anything else except the initialization of the array in the code? I need 2D arrays to exactly work within the specified region. Is there a simple solution?
Edit:
Just to clarify the goal:
I need a 2D array with negative index capabilities. The range is known, but each entry needs to be defined. Append is useless, because it will not add a specific negative index for me.
If I for example need to define matrix[-4][-120]="Jeff", this needs to work. I do not even care at all, if there is a solution like in Bash, where you have to write matrix["-4,-120"]. Yet I need a reasonable way to address such entries.
You can use a virtual indexing strategy to do the same. Here is my approach:
offset = 5
size = 4
a = [x for x in range(offset, offset + size)]
print(a)  # [5, 6, 7, 8]

def get_loc(idx):
    return a[idx - offset]

def set_loc(idx, val):
    a[idx - offset] = val

set_loc(6, 15)
print(a)  # [5, 15, 7, 8]
set_loc(8, 112)
print(a)  # [5, 15, 7, 112]
That code was just to illustrate what I mean by a virtual indexing strategy; in practice you can simply use the following to get and set values:
# to get a[n] use a[n-offset]
print(a[8-offset]) # 112
# to set a[n] = x | Use a[n-offset] = x
a[7-offset] = 21
print(a) # [5, 15, 21, 112]
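The asker's follow-up example, matrix[-4][-120] = "Jeff", needs true 2D negative indexing. A hedged sketch of the same virtual-indexing idea using a dict keyed by (x, y) tuples (the Grid class name and its API are my own invention, not part of the question):

```python
class Grid:
    """A 2D grid addressable by arbitrary (possibly negative) integer coords."""

    def __init__(self, x_start, y_start, x_len, y_len, fill=0):
        # Pre-create every cell in the declared region, as the asker requires.
        self._cells = {
            (x, y): fill
            for x in range(x_start, x_start + x_len)
            for y in range(y_start, y_start + y_len)
        }

    def __getitem__(self, xy):
        # xy is an (x, y) tuple, e.g. m[-4, -400]
        if xy not in self._cells:
            raise IndexError("outside the declared region: %r" % (xy,))
        return self._cells[xy]

    def __setitem__(self, xy, value):
        if xy not in self._cells:
            raise IndexError("outside the declared region: %r" % (xy,))
        self._cells[xy] = value

m = Grid(x_start=-4, y_start=-400, x_len=16, y_len=4)
m[-4, -400] = "Jeff"
print(m[-4, -400])  # Jeff
```

A dict keeps every entry in the declared region defined up front and raises IndexError for coordinates outside it, which avoids the offset arithmetic leaking into every access.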
I am trying to make a simple Tetris AI in Python (no genetic algorithms).
I want to count the gaps in the grid and make the best choice based on that.
By gap I mean a cell where you won't be able to place a piece without clearing some lines.
My grid is something like this:
[0, 0, 0, 0, 0]
['#ff0000', ....]
[...]
0 represents a blank space, while a hex code means the cell is covered by a block.
I have tried to calculate gaps like this:
def grid_gaps(grid):
    gaps = 0
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and \
               (y > 0 and grid[y - 1][x] != 0):
                gaps += 1
    return gaps
It works well when the grid is like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0]
1 is some color; it correctly tells me that there are 3 gaps, but when the grid is something like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 1, 0]
It again returns 3 but I want it to return 6.
I think the problem is that and grid[y - 1][x] != 0 is only looking at the cell directly above the current cell, so your bottom 3 cells in the second example aren't being counted.
One quick fix I can think of is to set a gap cell to some non-zero value once it's counted, that way the gap cells below will be counted too. (Then set them back to 0 after you're done, if you're using the same grid and not a copy for the rest of the game.)
The problem is that you're looking "up" to see whether there's a blocker, but you're only looking up one row. I think you want to reorganize this so you iterate over columns, and for each column, iterate down until you hit a 1, and then continue iterating and add to the gap count for each 0 that's encountered.
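That column-wise scan might look like this (my own sketch of the approach described above, not the asker's original function):

```python
def grid_gaps(grid):
    """Count empty cells that have at least one filled cell anywhere above them."""
    gaps = 0
    for x in range(len(grid[0])):        # iterate over columns
        blocked = False                  # have we passed a filled cell in this column yet?
        for y in range(len(grid)):       # scan the column top to bottom
            if grid[y][x] != 0:
                blocked = True
            elif blocked:                # empty cell somewhere below a filled one
                gaps += 1
    return gaps

grid = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [1, 1, 1, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 1, 0],
]
print(grid_gaps(grid))  # 6
```

Because the blocked flag stays set for the rest of the column, every empty cell under a filled one is counted, not just the one directly beneath it, and the grid is never mutated.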
I have to slice a Python list/numpy array from an index to -dx and +dx, where dx is a constant. For example (the position/index that contains 1 is only for illustration, as the center index):
A = [0, 0, 0, 0, 1, 0, 0, 0, 0]
dx=3
print(A[4-dx: 4+dx+1]) # 4 is the position of '1'
>>>[0, 0, 0, 1, 0, 0, 0]
But for this case,
B = [0, 1, 0, 0, 0 ,0, 0 ,0, 0]
print(B[1-dx: 1+dx+1])
>>> []  # because 1-dx < 0
but what I need from case B is [0, 1, 0, 0, 0]
so I did something like this to prevent an empty list/array, where n is the center index:
if n - dx < 0:
    result = B[:n + dx + 1]
Although the above method works fine, the original code is quite complicated, and I have to put this if...#complicated version# everywhere.
Is there any other way around it? Maybe I missed something.
Thank you!
You can use the max() function to bound the index at 0.
print(A[max(0,4-dx): 4+dx+1])
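If this pattern recurs throughout the code, wrapping it in a tiny helper keeps the call sites clean (window is a hypothetical name of my own, not from the question):

```python
def window(seq, center, dx):
    """Return seq[center-dx : center+dx+1], clamped at the left edge so a
    negative start index never wraps around to the end of the sequence."""
    return seq[max(0, center - dx): center + dx + 1]

A = [0, 0, 0, 0, 1, 0, 0, 0, 0]
B = [0, 1, 0, 0, 0, 0, 0, 0, 0]
dx = 3
print(window(A, 4, dx))  # [0, 0, 0, 1, 0, 0, 0]
print(window(B, 1, dx))  # [0, 1, 0, 0, 0]
```

Only the lower bound needs clamping: a stop index past the end of a list is already handled gracefully by Python slicing.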
I need to generate a two-dimensional python list. It would consist of 10 columns, with each column value being either '1' or '0'. Given these conditions, my list needs to be an exhaustive list of all the combinations that can be formed in this way. It'd naturally end up being 1024 rows long (2^10). However, I have no clue where to get started on this. Can anyone help?
So this is how I worked through this problem. First I saw that we wanted to loop over all 1024 combinations of 0s and 1s. This is essentially counting from 0 to 1023 in binary. So I made a for loop from 0 to 1023 and, at each iteration, converted the loop variable i into binary with format(i, 'b') and then turned it into a list with the list constructor. A number like 1 gets converted into ['1'], but we want ['0', '0', '0', '0', '0', '0', '0', '0', '0', '1'], which is what line 4 does. Finally, we append each result to the table variable.
table = []
for i in range(1024):
    binaryRepresentation = list(format(i, 'b'))
    finalRepresentation = ['0'] * (10 - len(binaryRepresentation)) + binaryRepresentation
    table.append(finalRepresentation)
You can use combinations from the itertools module, which creates a list of tuples (not a list of lists):
from itertools import combinations
# Generate all the combinations of 0 and 1
# in a list of tuples where each tuple is formed by 10 elements
# Which leads to 184756 combinations
gen_list = combinations([0,1]*10, 10)
# remove duplicates
unique_elements = list(set(gen_list))
# len(unique_elements)
# >>> 1024
An overview of the created list:
>>> unique_elements
[
(0, 1, 0, 1, 0, 1, 0, 1, 0, 1)
(0, 1, 0, 1, 0, 1, 0, 1, 0, 0)
...
(1, 1, 1, 0, 1, 0, 0, 1, 1, 1)
]
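For comparison, itertools.product generates each of the 1024 tuples exactly once, in lexicographic order, with no duplicates to filter out:

```python
from itertools import product

# Every length-10 tuple over {0, 1}, each produced exactly once
table = list(product([0, 1], repeat=10))
print(len(table))  # 1024
print(table[0])    # (0, 0, 0, 0, 0, 0, 0, 0, 0, 0)
print(table[-1])   # (1, 1, 1, 1, 1, 1, 1, 1, 1, 1)
```

This sidesteps the combinations-plus-set approach entirely: there is nothing to deduplicate, and the output order is deterministic.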
You should use numpy for this. For array creation, you could do:
import numpy as np
rows, cols = 1024, 10
arr = np.zeros((rows, cols))
Now, for setting certain values to 1 based on your condition, this could be used:
# this should be changed based on your needs
arr[arr > 0] = 1 # set all values greater than 0 to 1
If you need to create an array initialized with random binary data, you could do:
arr = np.random.randint(2, size=(rows,cols))
If performance isn't a concern, you could always count from 0 to 1023, create the string representation of that number in base 2 then convert to a list.
def binary_cols(n_cols):
    for i in range(2**n_cols):
        k = [int(j) for j in "{:b}".format(i)]
        k = [0] * (n_cols - len(k)) + k
        yield k

for col in binary_cols(10):
    print(col)
gives
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 0, 1]
[0, 0, 0, 0, 0, 0, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
...
[1, 1, 1, 1, 1, 1, 1, 1, 0, 1]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
[1, 1, 1, 1, 1, 1, 1, 1, 1, 1]
print(len(list(binary_cols(10))))
1024
edit: I've just noticed that my answer is essentially a duplicate of Saleh Hindi's. Leaving mine here as there is a big enough difference in the specific tools used for the task, but their answer has a better explanation.
Numpy has a library function, np.unpackbits, which will unpack a uint8 into a bit vector of length 8. Is there a correspondingly fast way to unpack larger numeric types? E.g. uint16 or uint32. I am working on a question that involves frequent translation between numbers, for array indexing, and their bit vector representations, and the bottleneck is our pack and unpack functions.
You can do this with view and unpackbits
Input:
unpackbits(arange(2, dtype=uint16).view(uint8))
Output:
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0 0]
For a = arange(int(1e6), dtype=uint16) this is pretty fast at around 7 ms on my machine
%%timeit
unpackbits(a.view(uint8))
100 loops, best of 3: 7.03 ms per loop
As for endianness, you'll have to look at http://docs.scipy.org/doc/numpy/user/basics.byteswapping.html and apply the suggestions there depending on your needs.
This is the solution I use:
def unpackbits(x, num_bits):
    if np.issubdtype(x.dtype, np.floating):
        raise ValueError("numpy data type needs to be int-like")
    xshape = list(x.shape)
    x = x.reshape([-1, 1])
    mask = 2**np.arange(num_bits, dtype=x.dtype).reshape([1, num_bits])
    return (x & mask).astype(bool).astype(int).reshape(xshape + [num_bits])
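A quick self-contained check of this approach (restating the function without the dtype guard so the snippet runs on its own). Note that the result is least-significant-bit first, unlike np.unpackbits, which is MSB-first within each byte:

```python
import numpy as np

def unpackbits(x, num_bits):
    # Broadcast each value against a mask of powers of two: bit k of each
    # element lands in column k, so the output is LSB-first.
    xshape = list(x.shape)
    x = x.reshape([-1, 1])
    mask = 2**np.arange(num_bits, dtype=x.dtype).reshape([1, num_bits])
    return (x & mask).astype(bool).astype(int).reshape(xshape + [num_bits])

x = np.array([1, 2, 513], dtype=np.uint16)
print(unpackbits(x, 10))
# [[1 0 0 0 0 0 0 0 0 0]
#  [0 1 0 0 0 0 0 0 0 0]
#  [1 0 0 0 0 0 0 0 0 1]]
```

Here 513 = 512 + 1, so bits 0 and 9 are set in the last row.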
This is a completely vectorized solution that works with any dimension ndarray and can unpack however many bits you want.
I have not found a function for this either, but maybe Python's built-in struct.unpack can help make the custom function faster than shifting and ANDing longer uints (note that I am using uint64):
>>> import struct
>>> N = np.uint64(2 + 2**10 + 2**18 + 2**26)
>>> struct.unpack('>BBBBBBBB', N)
(2, 4, 4, 4, 0, 0, 0, 0)
The idea is to convert those to uint8, use unpackbits, and concatenate the result. Or, depending on your application, it may be more convenient to use structured arrays.
There is also the built-in bin() function, which produces a string of 0s and 1s, but I am not sure how fast it is, and it requires postprocessing too.
This works for arbitrary arrays of arbitrary uints (i.e., also for multidimensional arrays and for numbers larger than the uint8 max value).
It cycles over the number of bits rather than over the number of array elements, so it is reasonably fast.
import numpy

def my_ManyParallel_uint2bits(in_intAr, Nbits):
    '''Convert a numpy array of uints to an array of Nbits bits, for many bits in parallel.'''
    inSize_T = in_intAr.shape
    in_intAr_flat = in_intAr.flatten()
    out_NbitAr = numpy.zeros((len(in_intAr_flat), Nbits))
    for iBits in range(Nbits):
        out_NbitAr[:, iBits] = (in_intAr_flat >> iBits) & 1
    out_NbitAr = out_NbitAr.reshape(inSize_T + (Nbits,))
    return out_NbitAr

A = numpy.arange(256, 261).astype('uint16')
# array([256, 257, 258, 259, 260], dtype=uint16)
B = my_ManyParallel_uint2bits(A, 16).astype('uint16')
# array([[0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
#        [1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
#        [0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
#        [1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0],
#        [0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0]], dtype=uint16)