I am trying to make a simple Tetris AI in Python (no genetic algorithms).
I want to count the gaps in the grid and use that count to pick the best move.
By "gap" I mean a cell where you won't be able to place a piece without clearing some lines first.
My grid is something like this:
[0, 0, 0, 0, 0]
['#ff0000', ....]
[...]
0 represents a blank space, while a hex color code means the cell is covered by a block.
I have tried to calculate the gaps like this:
def grid_gaps(grid):
    gaps = 0
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and \
                    (y > 0 and grid[y - 1][x] != 0):
                gaps += 1
    return gaps
It works well when the grid is like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0]
1 is some color; it correctly tells me that there are 3 gaps, but when the grid is something like this:
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[1, 1, 1, 0, 0],
[0, 0, 0, 1, 0],
[0, 0, 0, 1, 0]
It again returns 3, but I want it to return 6.
I think the problem is that and grid[y - 1][x] != 0 only looks at the cell directly above the current cell, so the bottom 3 cells in your second example aren't being counted.
One quick fix I can think of is to set a gap cell to some non-zero value once it's counted, so the gap cells below it are counted too. (Then set them back to 0 after you're done, if you're using the same grid, and not a copy, for the rest of the game.)
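A minimal sketch of that tweak, assuming it is acceptable to write a temporary sentinel into the grid and restore it afterwards:
def grid_gaps(grid):
    gaps = 0
    marked = []
    for x in range(len(grid[0])):
        for y in range(len(grid)):
            if grid[y][x] == 0 and y > 0 and grid[y - 1][x] != 0:
                gaps += 1
                grid[y][x] = 'gap'  # sentinel: lets the cell below see a blocker
                marked.append((y, x))
    for y, x in marked:  # restore the grid
        grid[y][x] = 0
    return gaps
On the second example this returns 6, because each counted gap acts as a blocker for the cell beneath it.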
The problem is that you're looking "up" to see whether there's a blocker, but you're only looking up one row. I think you want to reorganize this so you iterate over columns, and for each column iterate down until you hit a block, then continue iterating and add to the gap count for each 0 that's encountered, as in the sketch below.
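A sketch of that column-scan version; once the first block in a column has been seen, every 0 below it counts as a gap:
def grid_gaps(grid):
    gaps = 0
    for x in range(len(grid[0])):
        seen_block = False
        for y in range(len(grid)):
            if grid[y][x] != 0:
                seen_block = True  # first block in this column reached
            elif seen_block:
                gaps += 1  # empty cell somewhere below a block
    return gaps
This returns 3 for the first example grid and 6 for the second.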
I want to initialize an array in Python 3 with two dimensions and indexes that do NOT start at zero.
x_length = 16
y_length = 4
x_start = -4
y_start = -400
# Later passing those variables with:
level_matrix = generate_area(x_length, y_length, x_start, y_start)
Part of the function:
def generate_area(xlen, ylen, xstart, ystart):
    matrix = [[0 for y in range(ystart, ystart + ylen)] for x in range(xstart, xstart + xlen)]
    for index, value in enumerate(matrix):
        print(f"S1_vec('{index}')<= {value}")
    for index, value in enumerate(matrix[0]):
        print(f"S1_vec('{index}')<= {value}")
    for x in range(xstart, xstart + xlen):
        for y in range(ystart, ystart + ylen):
            print("Y:" + str(y))
            print("X:" + str(x))
Output:
S1_vec('0')<= [0, 0, 0, 0]
S1_vec('1')<= [0, 0, 0, 0]
S1_vec('2')<= [0, 0, 0, 0]
S1_vec('3')<= [0, 0, 0, 0]
S1_vec('4')<= [0, 0, 0, 0]
S1_vec('5')<= [0, 0, 0, 0]
S1_vec('6')<= [0, 0, 0, 0]
S1_vec('7')<= [0, 0, 0, 0]
S1_vec('8')<= [0, 0, 0, 0]
S1_vec('9')<= [0, 0, 0, 0]
S1_vec('10')<= [0, 0, 0, 0]
S1_vec('11')<= [0, 0, 0, 0]
S1_vec('12')<= [0, 0, 0, 0]
S1_vec('13')<= [0, 0, 0, 0]
S1_vec('14')<= [0, 0, 0, 0]
S1_vec('15')<= [0, 0, 0, 0]
S1_vec('0')<= 0
S1_vec('1')<= 0
S1_vec('2')<= 0
S1_vec('3')<= 0
Y:-400
X:-4
IndexError: list assignment index out of range
Well, as you can clearly see, there are no negative indexes in the array. It also does not work properly with a positive offset. The loops try to access the offset index values and the script obviously fails, since the indexes only get created from 0 up to the list's length, even when using range(var1, var2). This makes no sense to me, since range should work like range(starting_point, end_point, steps_if_needed), and the same copy-pasted for-loop syntax is used successfully later in the script in multiple places.
What causes such weird behavior, and how can I fix it without changing anything else except the initialization of the array? I need the 2D arrays to work exactly within the specified region. Is there a simple solution?
Edit:
Just to clarify the goal:
I need a 2D array with negative-index capability. The range is known, but each entry needs to be defined. append is useless, because it will not add a specific negative index for me.
If, for example, I need to define matrix[-4][-120] = "Jeff", this needs to work. I don't even care if the solution looks like Bash, where you have to write matrix["-4,-120"]. I just need a reasonable way to address such entries.
You can use a virtual indexing strategy to do the same. Here is my approach:
offset = 5
size = 4
a = [x for x in range(offset, offset + size)]
print(a)  # [5, 6, 7, 8]

def get_loc(idx):
    return a[idx - offset]

def set_loc(idx, val):
    a[idx - offset] = val

set_loc(6, 15)
print(a)  # [5, 15, 7, 8]
set_loc(8, 112)
print(a)  # [5, 15, 7, 112]
This code was just to illustrate what I mean by a virtual indexing strategy; you can simply use the following to get and set values:
# to get a[n], use a[n - offset]
print(a[8 - offset])  # 112
# to set a[n] = x, use a[n - offset] = x
a[7 - offset] = 21
print(a)  # [5, 15, 21, 112]
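The same offset idea extends to two dimensions. A sketch, assuming a small wrapper class is acceptable (the name OffsetGrid and the tuple-index interface are illustrative, not part of the answer above):
class OffsetGrid:
    def __init__(self, xlen, ylen, xstart, ystart, fill=0):
        self.xstart, self.ystart = xstart, ystart
        self.cells = [[fill] * ylen for _ in range(xlen)]

    def __getitem__(self, xy):
        x, y = xy  # real coordinates, possibly negative
        return self.cells[x - self.xstart][y - self.ystart]

    def __setitem__(self, xy, val):
        x, y = xy
        self.cells[x - self.xstart][y - self.ystart] = val

matrix = OffsetGrid(16, 4, -4, -400)
matrix[-4, -400] = "Jeff"
print(matrix[-4, -400])  # Jeff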
I have to slice a Python list/numpy array around a given index, from -dx to +dx, where dx is a constant. For example (the position/index that contains 1 is only for illustration, as the center index):
A = [0, 0, 0, 0, 1, 0, 0, 0, 0]
dx=3
print(A[4-dx: 4+dx+1]) # 4 is the position of '1'
>>>[0, 0, 0, 1, 0, 0, 0]
But for this case,
B = [0, 1, 0, 0, 0, 0, 0, 0, 0]
print(B[1-dx: 1+dx+1])
>>>[] # because 1-dx <0.
But what I need from case B is [0, 1, 0, 0, 0].
So, to prevent an empty list/array, I did something like this, where n is the center index:
if n - dx < 0:
    result = B[:n + dx + 1]
Although the above method works fine, the original code is quite complicated, and I have to put this if ... #complicated version# everywhere.
Is there another way around it? Maybe I missed something.
Thank you!
You can use the max() function to bound the lower index at 0.
print(A[max(0,4-dx): 4+dx+1])
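If the clamped slice is needed in many places, the bound can live in one small helper instead of being repeated at every call site. A minimal sketch (the name window is made up for illustration):
def window(seq, n, dx):
    # clamp the lower bound at 0 so a center near the left edge
    # still returns the available neighbors
    return seq[max(0, n - dx): n + dx + 1]

B = [0, 1, 0, 0, 0, 0, 0, 0, 0]
print(window(B, 1, 3))  # [0, 1, 0, 0, 0]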
A teacher is in the process of generating a few reports based on the marks scored by the students of her class in a project-based assessment.
Assume that the marks of her 10 students are available in a tuple. The marks are out of 25.
Write a python program to implement the following functions:
find_more_than_average(): Find and return the percentage of students who have scored more than the average mark of the class
sort_marks(): Sort the marks in the increasing order from 0 to 25. The sorted values should be populated in a list and returned
generate_frequency(): Find how many students have scored the same marks; for example, how many have scored 0, how many have scored 1, how many have scored 3, ..., how many have scored 25. The result should be populated in a list and returned.
I got the average and sorted parts correct, but for the frequency, if an element is repeated twice I get a frequency of 1.
list_of_marks = (12, 18, 25, 24, 2, 5, 18, 20, 20, 21)

def find_more_than_average():
    sumi = 0
    count = 0
    sumi = sum(list_of_marks)
    avg = sumi / len(list_of_marks)
    for i in list_of_marks:
        if i > avg:
            count = count + 1
    morethanavg = (count / len(list_of_marks)) * 100
    return morethanavg

def sort_marks():
    return sorted(list_of_marks)

def generate_frequency():
    gener = []
    for i in range(0, 26):
        if i in list_of_marks:
            gener.append(1)
        else:
            gener.append(0)
    return gener

print(find_more_than_average())
print(generate_frequency())
print(sort_marks())
expected: [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 2, 1, 0, 0, 1, 1]
actual: [0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 1]
When you do:
if i in list_of_marks:
    gener.append(1)
else:
    gener.append(0)
it should be clear that you can never get a value other than 0 or 1. But you want the counts of those values, not just a 1 indicating that the value is in the list. One option is to create a list of zeros first, then step through the marks and add one to the index corresponding to each mark:
def generate_frequency():
    gener = [0] * 26
    for m in list_of_marks:
        gener[m] += 1
    return gener
Now when you see 20 twice you will increase gener[20] twice, with the result:
[0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 2, 1, 0, 0, 1, 1]
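For comparison, a sketch of the same counting using collections.Counter from the standard library (an alternative, not part of the answer above):
from collections import Counter

def generate_frequency():
    counts = Counter(list_of_marks)  # mark -> number of students
    return [counts[i] for i in range(26)]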
Suppose I have a 2D array (8x8) of 0's. I would like to fill this array with a predetermined number of 1's, but in a random manner. For example, suppose I want to place exactly 16 1's in the grid at random, resulting in something like this:
[[0, 0, 0, 1, 0, 0, 1, 0],
[1, 0, 0, 0, 0, 0, 0, 1],
[0, 0, 1, 1, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0, 0],
[0, 1, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 0, 1, 1, 0, 0],
[0, 1, 0, 0, 0, 1, 0, 0],
[0, 1, 1, 0, 0, 0, 0, 1]]
The resulting placement of the 1's does not matter in the slightest, as long as it is random (or as random as Python will allow).
My code technically works, but I imagine it's horrendously inefficient. All I'm doing is setting the probability of each number becoming a 1 to n/s, where n is the number of desired 1's and s is the size of the grid (i.e. number of elements), and then I check to see if the correct number of 1's was added. Here's the code (Python 2.7):
import random

length = 8
numOnes = 16
while True:
    board = [[(random.random() < float(numOnes) / (length ** 2)) * 1 for x in xrange(length)] for x in xrange(length)]
    if sum([subarr.count(1) for subarr in board]) == numOnes:
        break
print board
While this works, it seems like a roundabout method. Is there a better (i.e. more efficient) way of doing this? I foresee running this code many times (hundreds of thousands if not millions), so speed is a concern.
Either shuffle a list of 16 1s and 48 0s:
board = [1]*16 + 48*[0]
random.shuffle(board)
board = [board[i:i+8] for i in xrange(0, 64, 8)]
or fill the board with 0s and pick a random sample of 16 positions to put 1s in:
board = [[0] * 8 for i in xrange(8)]
for pos in random.sample(xrange(64), 16):
    board[pos // 8][pos % 8] = 1
I made the ones, made the zeros, concatenated them, shuffled them, and reshaped.
import numpy as np

def make_board(shape, ones):
    o = np.ones(ones, dtype=int)
    z = np.zeros(np.prod(shape) - ones, dtype=int)
    board = np.concatenate([o, z])
    np.random.shuffle(board)
    return board.reshape(shape)

make_board((8, 8), 16)
Edit: For what it's worth, user2357112's approach with numpy is fast...
def make_board(shape, ones):
    size = np.prod(shape)
    board = np.zeros(size, dtype=int)
    # replace=False: sample distinct positions, so exactly `ones` cells are set
    i = np.random.choice(np.arange(size), ones, replace=False)
    board[i] = 1
    return board.reshape(shape)
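A quick usage sketch; the seed is only there to make the example reproducible:
np.random.seed(0)  # optional: fixed layout for the example
board = make_board((8, 8), 16)
print(board)
print(board.sum())  # exactly 16 ones, by construction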
Let's say I have two arrays, both with values representing the brightness of the sun. The first array has values measured in the morning and the second one has values measured in the evening. In the real case I have around 80 arrays. I'm going to plot the pictures using matplotlib. The plotted circle will (in both cases) be the same size, but the position of the image will shift a bit because of the Earth's motion, and this should be corrected for.
>>> array1
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 3, 1, 0]
[0, 0, 1, 1, 2, 0]
[0, 0, 1, 1, 1, 0]
[0, 0, 0, 0, 0, 0]
>>> array2
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 2, 1, 0]
[0, 0, 1, 1, 4, 0]
[0, 0, 1, 1, 1, 0]
In the example above, larger values mean brighter spots and zero values are plotted as black space. The arrays are always the same size. How do I align the significant (non-zero) values in array2 with the ones in array1? The outcome should be like this:
>>> array2(aligned)
[0, 0, 0, 0, 0, 0]
[0, 0, 0, 0, 0, 0]
[0, 0, 1, 2, 1, 0]
[0, 0, 1, 1, 4, 0]
[0, 0, 1, 1, 1, 0]
[0, 0, 0, 0, 0, 0]
This must be done in order to post-process the arrays in a meaningful way, e.g. calculating the average or sum. Note: finding a mass center point and aligning accordingly doesn't work, because of possible high values on the edges that change during the day.
One thing that may cause problems with this kind of data is that the images are not nicely aligned with the pixels. I'll try to illustrate my point with two arrays, each containing a square:
array1:
0 0 0 0 0
0 2 2 2 0
0 2 2 2 0
0 2 2 2 0
0 0 0 0 0
array2:
0 0 0 0 0
0 1 2 2 1
0 1 2 2 1
0 1 2 2 1
0 0 0 0 0
As you see, the limited resolution is a challenge, as the image has moved 0.5 pixels.
Of course, it is easy to calculate the COG of both of these and see that it is (row, column) = (2, 2) for the first array and (2, 2.5) for the second. But if we move the second array by 0.5 to the left, we get:
array2_shifted:
0 0 0 0 0
0.5 1.5 2.0 1.5 0.5
0.5 1.5 2.0 1.5 0.5
0.5 1.5 2.0 1.5 0.5
0 0 0 0 0
So things start to spread out.
Of course, it may be that your arrays are large enough so that you can work without worrying about subpixels, but if you only have a few or a few dozen pixels in each direction, this may become a nuisance.
One way out of this is to first increase the image size by suitable interpolation (as done in an image processing program; the cv2 module is full of possibilities for this). Then the images can be fitted together with single-pixel precision and downsampled back.
In any case you'll need a method to find out where the fit between the images is best. There are a lot of choices to make. One important thing to note is that you may not want to align the images with the first image; you may want to align all the images with a reference. The reference could in this case be a perfect circle in the center of the image. Then you would just need to move all the images to match the reference.
Once you have chosen your reference, you need to choose the method which gives you some metric of the alignment of the images. There are several possibilities, but you may start with these:
Calculate the center of gravity of the image.
Calculate the correlation between an image and the reference; the highest point(s) of the resulting correlation array give you the best match (see the sketch after this list).
Do either of the above, but only after doing some processing on the image (typically limiting the dynamic range at one or both ends).
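As a sketch of the correlation option, scipy.signal.correlate2d can locate the offset of an image relative to the reference; the helper name correlation_offset is mine, and the sign convention is worth verifying against a known shift:
import numpy as np
from scipy.signal import correlate2d

def correlation_offset(image, reference):
    corr = correlate2d(image, reference, mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = (corr.shape[0] // 2, corr.shape[1] // 2)
    # the displacement of the peak from the center estimates the shift
    # of image relative to reference, per axis
    return peak[0] - center[0], peak[1] - center[1]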
I would start with something like this:
possibly upsample the image (if the resolution is low)
limit the high end of the dynamic range (e.g. clipped = np.clip(image, 0, max_intensity))
calculate the center of gravity (e.g. scipy.ndimage.center_of_mass(clipped))
translate the image by the offset of the center of gravity
Translation of a 2D array requires a bit of code but should not be excessively difficult. If you are sure you have black all around, you can use:
translated = np.roll(np.roll(original, deltar, axis=0), deltac, axis=1)
This rolls the leftmost pixels to the right (or vice versa). If that is a problem, you'll need to zero them out (or have a look at: python numpy roll with padding).
A word of warning about the alignment procedures: the simplest ones (COG, correlation) fail if you have an intensity gradient across the image. Because of this you may want to look for edges first and then correlate. The intensity limiting also helps here, if your background is really black.
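A minimal sketch pulling the clip / center-of-gravity / roll steps together; choosing the geometric center of the array as the reference point is my assumption, and it relies on the borders really being black:
import numpy as np
from scipy import ndimage

def align_to_center(image, max_intensity):
    clipped = np.clip(image, 0, max_intensity)  # limit the high end
    r, c = ndimage.center_of_mass(clipped)      # center of gravity
    deltar = int(round((image.shape[0] - 1) / 2.0 - r))
    deltac = int(round((image.shape[1] - 1) / 2.0 - c))
    # roll wraps pixels around the edges; fine only if the borders are black
    return np.roll(np.roll(image, deltar, axis=0), deltac, axis=1)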