Stuck on a simple task; any help would be much appreciated. The program creates a graphics window and, depending on where the user clicks, draws a different colour circle: clicks on the right half should be yellow and clicks on the left half should be red. However, I can't get my if statement to work, and all it returns is 10 yellow circles. Any help would be appreciated, thanks.
def circles():
    win = GraphWin("circles", 400, 100)
    for i in range(10):
        point = win.getMouse()
        circleFill = Circle(point, 10)
        circleFill.draw(win)
        if str(point) >= str(200):
            circleFill = Circle(point, 10)
            circleFill.setFill("Yellow")
            circleFill.draw(win)
        else:
            circleFill = Circle(point, 10)
            circleFill.setFill("Red")
            circleFill.draw(win)
You're trying to compare a point to a number. This doesn't make any sense. Is the upper-right corner of your screen more than 200? What about the lower-right? Or the upper-left?
Of course you can convert them both to strings and then compare those, because you can always compare strings, but then you're just asking whether something like 'Point(1600, 0)' would come before or after '200' in the dictionary, which doesn't tell you anything useful.
Your next attempt, trying to compare a point to a point, still doesn't make any sense. Is (1600, 20) more or less than (100, 1280)? Of course there are various ways you could answer that (e.g., you could treat them as vectors rather than points and ask for their norms), but nothing that seems relevant to your question.
I think what you might want to do here is to compare the X coordinate of the point to a number:
if point.getX() >= 200:
That makes sense. That covers the whole right part of the screen, whether way up at the top or way down at the bottom, because whether you're at (1600, 0) or (1600, 1200), that 1600 part is bigger than 200.
That may not actually be what you want, but hopefully if it isn't, it gives you the idea to get unstuck.
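Putting the pieces together, here's a minimal sketch of the fixed loop (assuming Zelle's graphics.py, which the GraphWin and Circle calls suggest):

from graphics import GraphWin, Circle

def circles():
    win = GraphWin("circles", 400, 100)
    for i in range(10):
        point = win.getMouse()
        circleFill = Circle(point, 10)
        # 200 is half the window's width of 400, so this splits left from right
        if point.getX() >= 200:
            circleFill.setFill("Yellow")
        else:
            circleFill.setFill("Red")
        circleFill.draw(win)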
I have a simple game where a ball bounces on the screen; the player can move left and right and shoot an arrow up to pop the ball. Every time the player hits a ball, it bursts and splits into two smaller balls, until the balls reach a minimum size and disappear.
I am trying to solve this game with a genetic algorithm based on the Python neat library, following this tutorial on Flappy Bird: https://www.youtube.com/watch?v=MMxFDaIOHsE&list=PLzMcBGfZo4-lwGZWXz5Qgta_YNX3_vLS2. I have a configuration file in which I must specify how many input nodes the network has. I had thought to give as inputs the player's x coordinate, the distance between the player's x coordinate and the ball's x coordinate, and the distance between the player's y coordinate and the ball's y coordinate.
My problem is that at the beginning of the game there is only one ball, but after a few moves there could be several balls on the screen, so I would need a greater number of input nodes: the more balls on screen, the more input coordinates I have to provide to the network.
So how can I set the number of input nodes in a variable way?
config-feedforward.txt file
"""
# network parameters
num_hidden = 0
num_inputs = 3 #this needs to be variable
num_outputs = 3
"""
Python file
for index, player in enumerate(game.players):
    balls_array_x = []
    balls_array_y = []
    for ball in game.balls:
        balls_array_x.append(ball.x)
        balls_array_y.append(ball.y)
    output = np.argmax(nets[index].activate(("there may be a number of variable arguments here")))
    # other...
Final code
for index, player in enumerate(game.players):
    balls_array_x = []
    balls_array_y = []
    for ball in game.balls:
        balls_array_x.append(ball.x)
        balls_array_y.append(ball.y)
    distance_list = []
    player_x = player.x
    player_y = player.y
    i = 0
    while i < len(balls_array_x):
        # Euclidean distance from the player to ball i
        dist = math.sqrt((balls_array_x[i] - player_x) ** 2 + (balls_array_y[i] - player_y) ** 2)
        distance_list.append(dist)
        i += 1
    if len(distance_list) > 0:
        nearest_ball = min(distance_list)  # feed only the closest ball to the net
        output = np.argmax(nets[index].activate((player.x, player.y, nearest_ball)))
This is a good question and, as far as I can tell from a quick Google search, it hasn't been addressed for simple ML algorithms like NEAT.
Traditional resizing methods for deep NNs (padding, cropping, RNNs, middle layers, etc.) obviously cannot be applied here, since NEAT explicitly encodes every single neuron and connection.
I am also not aware of any general method/trick to make the input size mutable in the traditional NEAT algorithm, and frankly I don't think there is one. I can think of a couple of changes to the algorithm that would make this possible, but that's of no help to you, I suppose.
In my opinion you therefore have 3 options:
1. You increase the input size to the maximum number of balls the algorithm should track, and set the x-diff/y-diff values of non-existent balls to an otherwise impossible number (e.g. -1). When balls come into existence you set the values of those x-diff/y-diff input neurons, and set them back to -1 when the balls are gone. Then you let NEAT figure it out (see the sketch after this list). It is also worth thinking about concatenating 2 separate NEAT NNs: the first NN has 2 inputs and 1 output, and the second NN has 1 (player pos) + x (max number of balls) inputs and 2 outputs (left, right). The first NN produces an output for each ball position (and is identical for each ball), and the second NN takes the first NN's outputs and turns them into an action. Also: the maximum number of balls doesn't have to be the maximum number of displayable balls; it can be limited to, say, 10, considering only the 10 closest balls.
2. You only consider 1 ball for each action side (making your input 1 + 2*2). This could be the lowest ball on each side or the closest ball on each side. Such preprocessing can, however, make simple NN tasks quite easy to solve. Maybe you can add inertia to your test environment, adding a non-linearity so that it is not so straightforward to always hurry to the lowest ball.
3. You input the whole observation space into NEAT (or a uniformly downsampled fraction), e.g. the whole game at whatever resolution is lowest but still sensible. I know that this observation space is huge, but NEAT works quite well at handling such spaces.
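For option 1, a minimal sketch of how the fixed-size input vector could be built (player, balls, and MAX_BALLS are placeholder names, not part of the neat library; the -1 sentinel is an assumption):

import math

MAX_BALLS = 10   # fixed number of ball slots the network can see
MISSING = -1.0   # sentinel value for an empty slot

def build_inputs(player, balls):
    # Sort by distance so the network always sees the closest balls first
    balls = sorted(balls, key=lambda b: math.hypot(b.x - player.x, b.y - player.y))
    inputs = [player.x]
    for i in range(MAX_BALLS):
        if i < len(balls):
            inputs.append(balls[i].x - player.x)  # x-diff for slot i
            inputs.append(balls[i].y - player.y)  # y-diff for slot i
        else:
            inputs.append(MISSING)
            inputs.append(MISSING)
    return inputs  # always 1 + 2*MAX_BALLS values

With this scheme, num_inputs in config-feedforward.txt would be fixed at 1 + 2*10 = 21.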
I know that this is not the variable input size option of NEAT that you might have hoped for, but I don't know about any such general option/trick without changing the underlying NEAT algorithm significantly.
However, I am very happy to be corrected if someone knows a better option!
This is my first question ever, and I am a complete and utter beginner, so please don't eat me :) What I am trying to do is draw a fibonacci sequence using the Python turtle module. My code is as follows:
import turtle
zuf = turtle.Turtle()
while True:
    zuf.forward(10)
    zuf.left(3.1415)
This, however, just drives around in circles. I have tried to create a variable, say x, assign the fibonacci rule x_n = x_{n-1} + x_{n-2} to it, and then put it in zuf.forward(x), but it doesn't work. I tried multiple variations of that, but none seems to work. Please don't give a whole solution, only some hint, thanks a lot.
I think I can get you from where you are to where you want to be. First, your invocation of:
zuf.left(3.1415)
seems to indicate you're thinking in radians, which is fine. But you need to tell your turtle that:
zuf = turtle.Turtle()
zuf.radians()
this will still make your code go in circles, but very different circles. Next, we want to replace 10 with our fibonacci value. Before the while loop, initialize your fibonacci counters:
previous, current = 0, 1
as the last statement in the while loop, bump them up:
previous, current = current, current + previous
and in your forward() call, replace 10 with current. Next, we need to turn the line that it's drawing into a square. To do this, we need to do two things. First, loop the drawing code four times:
for i in range(4):
    zuf.forward(current)
    zuf.left(3.1415)
And second, replace your angle with pi/2 instead:
zuf.left(3.1415 / 2)
If you assemble this all correctly, you should end up with a figure (image omitted here) showing the increasing size of the fibonacci values. It's not the greatest looking image; you'll still have to do some work on it to clean it up and make it look nice.
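For reference, the hints above assemble into something like this (a minimal sketch, still drawing forever like the original):

import turtle

zuf = turtle.Turtle()
zuf.radians()  # interpret angles as radians, matching the 3.1415 turns

previous, current = 0, 1
while True:
    for i in range(4):  # four sides make a square
        zuf.forward(current)
        zuf.left(3.1415 / 2)  # quarter turn, roughly pi/2
    previous, current = current, current + previous  # bump the fibonacci pair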
Finally, I was so impressed with the fibonacci drawing code that #IvanS95 linked to in his comment that I wrote a high-speed version of it which uses stamping instead of drawing:
from turtle import Screen, Turtle

SCALE = 5
CURSOR_SIZE = 20

square = Turtle('square', visible=False)
square.fillcolor('white')
square.speed('fastest')
square.right(90)
square.penup()

previous_scaled, previous, current = 0, 0, 1

for _ in range(10):
    current_scaled = current * SCALE
    square.forward(current_scaled/2 + previous_scaled/2)
    square.shapesize(current_scaled / CURSOR_SIZE)
    square.left(90)
    square.forward(current_scaled/2 - previous_scaled/2)
    square.stamp()
    previous_scaled, previous, current = current_scaled, current, current + previous

screen = Screen()
screen.exitonclick()
This is not a whole solution for you, only a hint of what can be done once you're drawing your squares; besides, this is a stamp-based solution, which plays by different rules.
This is my folium code:
import folium
mp = folium.Map(location=[37, -102],
                zoom_start=1,
                tiles="Stamen Terrain")
display(mp)
This is the output I get (screenshot omitted). There are two problems with the leaflet map:
The continents are displayed 2 times or more in a loop.
The map can be panned endlessly from left to right or vice-versa, in a loop.
Both of these are a nuisance. The first issue can be addressed temporarily by setting zoom_start to something other than 1, but even then, zooming out of the map brings the issue back. The less said about the second one the better.
Now what I want is to limit the boundary of my map to, say, [-150, 150, -70, 70] or smaller. I don't want anything displayed beyond this bound, whether by panning or zooming. Neither do I want my map to pan infinitely in a loop.
Is it possible to do that in Folium?
It's possible! Just use the min_zoom (and max_zoom for the opposite problem) attribute!
f = folium.Figure(width=1000, height=500)
m = folium.Map(location=initial_location, tiles="openstreetmap",
               zoom_start=zoom_start_defined, min_zoom=min_zoom_defined).add_to(f)
I think a min_zoom of 2 should do the job.
One easy way is to use the max_bounds parameter in the Map() function and set it to True. Using this parameter restricts the map to one view of the continents.
Here's an example :
m = folium.Map(location=loc,max_bounds=True)
Thanks for this discussion. I've been trying to get my maps to look better and control better, so this is helpful. For what it's worth,
max_bounds = True
doesn't keep me from zooming out to see multiple versions of the different continents. If I grab the map and move left/right, I can do so, but the view springs back to keep my initial map (say North America) towards the middle of the screen.
f = folium.Figure(width=1000, height=500)
m = folium.Map(location=initial_location,
               tiles="openstreetmap",
               zoom_start=zoom_start_defined,
               min_zoom=min_zoom_defined
               ).add_to(f)
This works in so far as it keeps the user from zooming out too far and revealing multiple copies of the same continent. But if the user grabs the map and moves left/right, it is possible to scroll to a different version of (say) North America.
But when I combined these two ideas, it did work:
f = folium.Figure(width=1000, height=500)
m = folium.Map(location=initial_location,
               tiles="openstreetmap",
               zoom_start=zoom_start_defined,
               min_zoom=min_zoom_defined,
               max_bounds=True
               ).add_to(f)
This keeps the user from zooming out too far. The user can still grab the map and scroll left/right to get to another version of (say) North America, but as soon as the mouse button is released, the view springs back to the original version of NA. This isn't as nice as not allowing the user to scroll to the second (or third) copy at all, but since it jumps back, it is an improvement.
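For reference, here is the combined version as one self-contained snippet (the location and zoom values are illustrative, not taken from the answers above):

import folium

f = folium.Figure(width=1000, height=500)
m = folium.Map(
    location=[37, -102],   # illustrative center
    tiles="openstreetmap",
    zoom_start=4,          # illustrative starting zoom
    min_zoom=2,            # keeps the user from zooming out into tiled copies
    max_bounds=True,       # springs the view back after panning away
).add_to(f)
display(f)                 # as in the question, assuming a Jupyter notebook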
I made the original battleship and now I'm looking to upgrade my AI from random guessing to guessing statistically probable locations. I'm having trouble finding algorithms online, so my question is: what kinds of algorithms already exist for this application, and how would I implement one?
Ships: 5, 4, 3, 3, 2
Field: 10x10
Board:
OCEAN = "O"
FIRE = "X"
HIT = "*"
SIZE = 10
SEA = [] # Blank Board
for x in range(SIZE):
    SEA.append([OCEAN] * SIZE)
If you'd like to see the rest of the code, I posted it here: (https://github.com/Dbz/Battleship/blob/master/BattleShip.py); I didn't want to clutter the question with a lot of irrelevant code.
The ultimate naive solution would be to go through every possible placement of ships (legal, given what information is known) and count the number of times each square is occupied.
Obviously, on a relatively empty board this will not work, as there are too many permutations, but a good start might be:
For each square on the board, go through all ships and count in how many different ways each fits over that square, i.e. for each square of the ship's length, check whether it fits horizontally and vertically.
An improvement might be to also check, for each possible ship placement, whether the rest of the ships can be placed legally while covering all known 'hits' (places known to contain a ship).
To improve performance: if only one ship can be placed in a given spot, you no longer need to test it in other spots. Also, when there are many 'hits', it might be quicker to first cover all known 'hits' and, for each possible cover, go through the rest.
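A minimal sketch of that counting pass, using the question's board constants and treating "X" (a fired miss) as blocking (known hits and the legality check from the improvement above are left out):

SHIPS = [5, 4, 3, 3, 2]

def placement_counts(sea):
    size = len(sea)
    counts = [[0] * size for _ in range(size)]
    for length in SHIPS:
        for x in range(size):
            for y in range(size):
                # horizontal placement starting at (x, y)
                if y + length <= size and all(sea[x][y + k] != "X" for k in range(length)):
                    for k in range(length):
                        counts[x][y + k] += 1
                # vertical placement starting at (x, y)
                if x + length <= size and all(sea[x + k][y] != "X" for k in range(length)):
                    for k in range(length):
                        counts[x + k][y] += 1
    return counts  # the highest count marks the most promising square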
Edit: you might want to look into DFS.
Edit: Elaboration on OP's (#Dbz) suggestion in the comments:
Hold a set of dismissed ship placements ('dismissed'). A placement can be represented as a string, say "4V5x3" for the vertical placement of the length-4 ship covering 5x3, 5x4, 5x5, 5x6. After a guess, add all the placements the guess dismisses. Also hold, for each square, the set of placements that intersect it ('placements[x,y]'). The probability for a square would then be:
(34 - |intersection(placements[x,y], dismissed)|) / (3400 - |dismissed|)
To add to the dismissed list:
If the guess at (X,Y) is a miss, add placements[x,y].
If the guess at (X,Y) is a hit, add the neighboring placements (assuming that ships cannot be placed adjacently), i.e. add:
<(2,3a,3b,4,5)>H<X+1>x<Y>, <(2,3a,3b,4,5)>V<X>x<Y+1>
<(2,3a,3b,4,5)>H<X-(2,3,3,4,5)>x<Y>, <(2,3a,3b,4,5)>V<X>x<Y-(2,3,3,4,5)>
2H<X+-1>x<Y+(-2 to 1)>, 3aH<X+-1>x<Y+(-3 to 1)> ...
2V<X+(-2 to 1)>x<Y+-1>, 3aV<X+(-3 to 1)>x<Y+-1> ...
If |intersection(placements[x,y], dismissed)| == 33, i.e. only one placement is possible, add the ship (see below).
Check whether any of the previous hits has only one possible placement left; if so, add the ship.
Check whether any of the ships has only one possible placement left; if so, add the ship.
Adding a ship:
Add all other placements of that ship to 'dismissed'.
For each (x,y) of the ship's placement, add placements[x,y] (without the actual placement).
For each (x,y) of the ship's placement, mark it as a hit guess (if not already known) and run stage 2.
For each (x,y) neighboring the ship's placement, mark it as a miss guess (if not already known) and run stage 1.
Run stages 3 and 4.
I might have overcomplicated this, and there may be some redundant actions, but you get the point.
Nice question, and I like your idea for statistical approach.
I think I would have tried a machine learning approach for this problem as follows:
First model your problem as a classification problem.
The classification problem is: Given a square (x,y) - you want to tell the likelihood of having a ship in this square. Let this likelihood be p.
Next, you need to develop some 'features'. You can take the surrounding of (x,y) [as you might have partial knowledge on it] as your features.
For example, the features of the middle of the following mini-board (+ indicates the square you want to determine if there is a ship or not in):
OO*
O+*
?O?
can be something like:
f1 = (0,0) = false
f2 = (0,1) = false
f3 = (0,2) = true
f4 = (1,0) = false
(note: (1,1) is skipped, since it's the square being classified)
f5 = (1,2) = true
f6 = (2,0) = unknown
f7 = (2,1) = false
f8 = (2,2) = unknown
I'd implement the features relative to the point of origin (in this case (1,1)) and not as absolute locations on the board (so, when classifying the square at (3,3), the square directly above it would also be f2).
Now, create a training set. The training set is a 'labeled' set of features - based on some real boards. You can create it manually (create a lot of boards), automatically by a random generator of placements, or by some other data you can gather.
Feed the training set to a learning algorithm. The algorithm should be able to handle 'unknowns' and be able to give probability of "true" and not only a boolean answer. I think a variation of Naive Bayes can fit well here.
After you have got a classifier - exploit it with your AI.
When it's your turn, choose to fire upon the square with the maximal value of p. At first the shots will be kinda random, but the more shots you fire, the more information you have on the board, and the AI will exploit it for better predictions.
Note that I gave features based on a surrounding square of size 1. You can of course choose any k and find features on that bigger square; it will give you more features, but each might be less informative. There is no rule of thumb for which will be better; it should be tested.
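A minimal sketch of the feature extraction for k=1 (the numeric encoding of hit/miss/unknown is my own arbitrary choice; cells use the question's "O"/"X"/"*" constants):

def neighborhood_features(board, x, y, k=1):
    # One value per surrounding square, relative to (x, y):
    # 1.0 = known hit, 0.0 = known miss, 0.5 = unknown or off-board
    features = []
    for dx in range(-k, k + 1):
        for dy in range(-k, k + 1):
            if dx == 0 and dy == 0:
                continue  # skip the square being classified
            nx, ny = x + dx, y + dy
            if 0 <= nx < len(board) and 0 <= ny < len(board):
                cell = board[nx][ny]
                features.append(1.0 if cell == "*" else (0.0 if cell == "X" else 0.5))
            else:
                features.append(0.5)
    return features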
The main question is how you are going to find the statistically probable locations. Are they already known, or do you want to figure them out?
Either way, I'd just make the grid weighted. In your case, the initial weight for each slot would be 1.0/(SIZE^2). The sum of the weights must equal 1.
You can then adjust weights based on the statistics gathered from N last played games.
Now, when your AI makes a choice, it chooses a coordinate to hit based on the weighted probabilities. The quick and simple way to do that would be:
Generate a random number R in range [0..1]
Start from slot (0, 0) adding the weights, i.e. S = W(0, 0) + W(0, 1) + .... where W(n, m) is the weight of the corresponding slot. Once S >= R, you've got the coordinate to hit.
This can be optimised by pre-calculating cumulative weights for each row, have fun :)
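A minimal sketch of that weighted pick (weights is assumed to be a SIZE x SIZE grid of probabilities summing to 1):

import random

def pick_target(weights):
    r = random.random()  # R in [0, 1)
    s = 0.0
    for x in range(len(weights)):
        for y in range(len(weights[x])):
            s += weights[x][y]
            if s >= r:
                return x, y
    return len(weights) - 1, len(weights[-1]) - 1  # guard against float rounding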
Find out which ships are still alive:
alive = [2,2,3,4] # length of alive ships
Find out spots where you have not shot, for example with a numpy.where()
Loop over spots where you can shoot.
Check the sides of the given position: going left and right, how many free spaces are there? Going up and down? If you can fit a boat in that many spaces, you can fit any smaller boat, so run this loop from the largest ship downwards, and add a +1 to this position's count for each alive ship that fits.
Once you have done all of this, the position with the most points should be the most probable one to attack and hit something.
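A minimal sketch of that side-counting score (assumes the question's SEA board with "O" for unshot water; names are illustrative):

def free_run(sea, x, y, dx, dy):
    # Count contiguous unshot squares from (x, y), exclusive, in direction (dx, dy)
    n = 0
    x, y = x + dx, y + dy
    while 0 <= x < len(sea) and 0 <= y < len(sea) and sea[x][y] == "O":
        n += 1
        x, y = x + dx, y + dy
    return n

def score(sea, x, y, alive):
    horizontal = free_run(sea, x, y, 0, -1) + 1 + free_run(sea, x, y, 0, 1)
    vertical = free_run(sea, x, y, -1, 0) + 1 + free_run(sea, x, y, 1, 0)
    # each alive ship scores a point per orientation it fits in
    return sum((length <= horizontal) + (length <= vertical) for length in alive)

The open spot with the highest score(SEA, x, y, alive) is then the one to attack.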
Of course, it can get as complicated as you want. You can also ask yourself, instead of "which is my next hit?", which combination of hits will give you victory in the fewest shots, or any other parametrization of the problem. Good luck!
First of all, I'm fairly sure that snapping to a grid is fairly easy; however, I've run into some odd trouble in this situation, and my maths is too weak to work out specifically what is wrong.
Here's the situation
I have an abstract concept of a grid, with Y steps exactly Y_STEP apart (the x steps are working fine so ignore them for now)
The grid is in an abstract coordinate space, and to get things to line up I've got a magic offset in there, let's call it Y_OFFSET
To snap to the grid I've got the following code (Python):
def snapToGrid(originalPos, offset, step):
    index = int((originalPos - offset) / step)  # truncates the remainder away
    return index * step + offset
So I pass the cursor position, Y_OFFSET and Y_STEP into that function, and it returns the nearest floored Y position on the grid.
That appears to work fine in the original scenario, however when I take into account the fact that the view is scrollable things get a little weird.
Scrolling is made as basic as I can get it, I've got a viewPort that keeps count of the distance scrolled along the Y Axis and just offsets everything that goes through it.
Here's a snippet of the cursor's mouseMotion code:
def mouseMotion(self, event):
    pixelPos = event.pos[Y]
    odePos = Scroll.pixelPosToOdePos(pixelPos)
    self.tool.positionChanged(odePos)
So there's two things to look at there, first the Scroll module's translation from pixel position to the abstract coordinate space, then the tool's positionChanged function which takes the abstract coordinate space value and snaps to the nearest Y step.
Here's the relevant Scroll code
def pixelPosToOdePos(self, pixelPos):
    offsetPixelPos = pixelPos - self.viewPortOffset
    return pixelsToOde(offsetPixelPos)

def pixelsToOde(pixels):
    return float(pixels) / float(pixels_in_an_ode_unit)
And the tools update code
def positionChanged(self, newPos):
    self.snappedPos = snapToGrid(originalPos, Y_OFFSET, Y_STEP)
The last relevant chunk is when the tool goes to render itself. It goes through the Scroll object, which transforms the tool's snapped coordinate space position into an onscreen pixel position, here's the code:
# in Tool
def render(self, screen):
    Scroll.render(screen, self.image, self.snappedPos)

# in Scroll
def render(self, screen, image, odePos):
    pixelPos = self.odePosToPixelPos(odePos)
    screen.blit(image, pixelPos)  # screen is a surface from pygame, for the curious

def odePosToPixelPos(self, odePos):
    offsetPos = odePos + self.viewPortOffset
    return odeToPixels(offsetPos)

def odeToPixels(odeUnits):
    return int(odeUnits * pixels_in_an_ode_unit)
Whew, that was a long explanation. Hope you're still with me...
The problem I'm now getting is that when I scroll up, the drawn image loses alignment with the cursor. It starts snapping to the Y step exactly one step below the cursor. Additionally, it appears to phase in and out of alignment: at some scroll positions it is out by one, at others it is spot on. It's never out by more than one, and it's always snapping to a valid grid location.
Best guess I can come up with is that somewhere I'm truncating some data in the wrong spot, but no idea where or how it ends up with this behavior.
Anyone familiar with coordinate spaces, scrolling and snapping?
OK, I'm answering my own question here. As alexk mentioned, using int() to truncate was my mistake; the behaviour I'm after is best modeled by math.floor().
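The difference only shows up for negative values, which is exactly what scrolling up produces: int() truncates toward zero, while math.floor() always rounds down. A minimal sketch:

import math

print(int(-0.5), math.floor(-0.5))  # 0 -1  <- int() snaps up, floor() snaps down
print(int(0.5), math.floor(0.5))    # 0  0  <- identical for positive values

def snapToGrid(originalPos, offset, step):
    index = math.floor((originalPos - offset) / step)  # floors even when negative
    return index * step + offset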
Apologies, the original question does not contain enough information to really work out what the problem is. I didn't have the extra bit of information at that point.
With regards to the typo note, I think I may be using the context in a confusing manner... From the perspective of the positionChanged() function, the parameter is a new position coming in.
From the perspective of the snapToGrid() function the parameter is an original position which is being changed to a snapped position.
The language is like that because part of it is in my event handling code and the other part is in my general services code. I should have changed it for the example
Do you have a typo in positionChanged() ?
def positionChanged(self, newPos):
    self.snappedPos = snapToGrid(newPos, Y_OFFSET, Y_STEP)
I guess you are off by one pixel because of the accuracy problems during float division. Try changing your snapToGrid() to this:
def snapToGrid(originalPos, offset, step):
    EPS = 1e-6
    index = int((originalPos - offset) / step + EPS)  # truncates the remainder away
    return index * step + offset
Thanks for the answer, there may be a typo, but I can't see it...
Unfortunately the change to snapToGrid didn't make a difference, so I don't think that's the issue.
It's not off by one pixel, but rather off by Y_STEP. Playing around with it some more, I've found that I can't get it to be exact at any point while the screen is scrolled up, and also that it happens towards the top of the screen, which I suspect is ODE position zero. So I'm guessing my problem is around small or negative values.