I am trying to create a rectangle tool for a paint program using Python. Basically, I would like the user to be able to click on the canvas and draw rectangles from that specific point, just as with the rectangle tool in any paint program. This is the code I have right now. It currently gives me a very small cross-like structure. I am not sure what is causing this output and would just like some insight into how the problem can be fixed. Thank you.
    if mb[0] == 1 and canvas.collidepoint(mx, my):
        screen.set_clip(canvas)
        if tool == "rectangle":
            screen.blit(copy, (0, 0))
            x, y = mouse.get_pos()
            mx, my = mouse.get_pos()
            draw.rect(screen, c, (x, y, mx - x, my - y), sz)
        screen.set_clip(None)
Instead of grabbing the current position every frame, use mouse events.
On mouse down, store the start coordinate.
On mouse up, draw the rectangle from the start coordinate to the current one.
You could draw in-progress rectangles with MOUSEMOTION's coordinates.
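A rough sketch of that event flow, reusing the names canvas, copy, c and sz from your snippet (so it assumes they exist):

    import pygame
    from pygame.locals import MOUSEBUTTONDOWN, MOUSEMOTION, MOUSEBUTTONUP

    start = None                                      # set on mouse down, cleared on mouse up

    for event in pygame.event.get():
        if event.type == MOUSEBUTTONDOWN and canvas.collidepoint(event.pos):
            start = event.pos                         # remember where the drag began
            copy = screen.copy()                      # snapshot so the preview can be erased
        elif event.type in (MOUSEMOTION, MOUSEBUTTONUP) and start:
            screen.blit(copy, (0, 0))                 # wipe the previous preview rectangle
            rect = pygame.Rect(start, (event.pos[0] - start[0],
                                       event.pos[1] - start[1]))
            rect.normalize()                          # allow dragging up or to the left too
            pygame.draw.rect(screen, c, rect, sz)
            if event.type == MOUSEBUTTONUP:
                start = None                          # the rectangle is now committed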
I am making a game using python and pygame. A few days ago I needed to make my game resizable while maintaining the aspect ratio, with everything on the screen resized proportionately. Luckily I found a quick solution: create two different pygame surfaces. One is the screen visible to the user, and the other is a fake screen that handles all the blitting. Everything is blitted onto the fake screen, which is then itself blitted to the main screen using

    main_screen.blit(pygame.transform.scale(fake_screen, main_screen.get_rect().size), [0, 0])

The main problem is that the MOUSEBUTTONDOWN events are now triggered at main-screen coordinates, but the clicks are processed according to the fake screen. This means that when I click a button after resizing, the button appears to be there, but it is actually at its original position on the fake screen. This makes all the buttons lose their functionality after the VIDEORESIZE event. Can anyone help me out with this? I hope I was able to explain.
Easy answer: use the pygame.SCALED display flag.
It resizes the main screen for you and scales the mouse events too, without your program needing to know anything about it. Documented on this page: https://www.pygame.org/docs/ref/display.html
Using this means you wouldn't need a fake screen at all, or to do anything with scaling on your end.
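For example, a minimal sketch using the 1000x600 size that the fake screen in this question uses:

    import pygame

    pygame.init()
    # SCALED keeps the game's logical resolution fixed; pygame stretches the window
    # and converts mouse coordinates back into the 1000x600 space for you.
    screen = pygame.display.set_mode((1000, 600), pygame.SCALED | pygame.RESIZABLE)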
DIY answer:
If you still want to control the scaling yourself, you just have to scale the mouse events along with the screen, in the opposite direction from how you scale the fake screen.
In your case that would mean dividing the mouse event x by the ratio of the window width to the fake screen width, and the same with y (using the heights, of course).
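A sketch of that conversion, assuming the fake screen is 1000x600 as elsewhere in this question (the names are placeholders):

    # window coordinates -> fake screen coordinates
    window_w, window_h = main_screen.get_size()
    mouse_x, mouse_y = event.pos
    fake_x = mouse_x / (window_w / 1000)    # divide by the window/fake width ratio
    fake_y = mouse_y / (window_h / 600)     # same idea with the heights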
I found a very easy solution to this myself. After getting the mouse x and y coordinates, I convert them to the proportionally corresponding points with some simple math: if the x coordinate is 15% of the main screen width, I convert it to 15% of the fake screen width. This way the fake screen gets properly scaled coordinates. The equations can be as follows:
    mouse_x = mouse_x / (xd / 100)
    mouse_x *= 10
    mouse_y = mouse_y / (yd / 100)
    mouse_y *= 6
Here xd and yd are the width and height of the resizable main screen, respectively, and 10 and 6 are 1% of 1000 and 600, which are the width and height of the fake screen.
This solved my problem and the game is now working perfectly.
Thank you.
I am working on a tiny program to capture a screen print, and I want it to work much like the Windows Snipping Tool: first overlay all screens with a 50% opacity layer, then draw a rectangle with the mouse and read the vertex coordinates. Honestly, I have no idea how to approach this. I tried win32api / win32gui, which is great for getting mouse coordinates, but I was still unable to draw a rectangle. One of my many ideas is to take shots of both displays (using PIL / ImageGrab), put an overlay on them, and show them full screen across all displays, but I failed while doing this. Another idea is to take the image grab and create two new full-screen windows using BeeWare / Toga (the GUI framework I am using), but I was unable to find any method to open a window on the second display. Any ideas and hints will be greatly appreciated; I am really counting on you, as I feel I have reached a dead end.
Well, it is quite easy to do with tkinter.
OK, this is the principle I used when making my screenshot application:
The user presses a button to start.
Make a new window whose width and height cover all the screens, and hide the title bar (if that is hard to achieve, maybe use width=9999 and height=9999).
Take a screenshot of the whole desktop (you can use ImageGrab.grab(all_screens=True) to do that).
Show the screenshot in a Canvas (I know toga has this widget).
Start your mouse listener thread and save the position where the button was pressed.
When the user moves the mouse, create a rectangle (toga's Canvas has a rect() function; maybe use rect(pressed_x, pressed_y, move_x, move_y)) and delete the previous rectangle, so only one rectangle is shown at a time.
When the user releases the mouse, save the release position and use ImageGrab.grab((pressed_x, pressed_y, released_x, released_y), all_screens=True) to crop the selected area.
If you want to show it in the application interface, toga has a widget called ImageView; you can put the image in it.
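If tkinter is an option, here is a rough single-screen sketch of those steps; instead of grabbing again on release it crops the screenshot taken at the start, and the multi-monitor / all_screens coordinate offsets are left out for brevity:

    import tkinter as tk
    from PIL import ImageGrab

    shot = ImageGrab.grab()                          # grab the desktop before the overlay appears

    root = tk.Tk()
    root.attributes("-fullscreen", True)             # borderless window covering the screen
    root.attributes("-alpha", 0.5)                   # 50% dim, like the Snipping Tool overlay

    canvas = tk.Canvas(root, cursor="cross", highlightthickness=0)
    canvas.pack(fill="both", expand=True)

    sel = {"start": None, "rect": None}

    def on_press(event):
        sel["start"] = (event.x, event.y)            # save the position of the press
        sel["rect"] = canvas.create_rectangle(event.x, event.y, event.x, event.y,
                                              outline="red", width=2)

    def on_move(event):
        x0, y0 = sel["start"]                        # resize the single rubber-band rectangle
        canvas.coords(sel["rect"], x0, y0, event.x, event.y)

    def on_release(event):
        x0, y0 = sel["start"]
        x1, y1 = event.x, event.y                    # save the release position and crop
        box = (min(x0, x1), min(y0, y1), max(x0, x1), max(y0, y1))
        shot.crop(box).save("snip.png")
        root.destroy()

    canvas.bind("<ButtonPress-1>", on_press)
    canvas.bind("<B1-Motion>", on_move)
    canvas.bind("<ButtonRelease-1>", on_release)
    root.mainloop()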
I am trying to create a simple 3D scene in Python where you have a cube in front of you and are able to rotate it around with the mouse.
I understand that you should rotate the complete scene to mimic camera movement, but I can't figure out how to do this.
Just to clarify, I want the camera (or scene) to move a bit like in Blender.
Thanks in advance.
OK, I think I have found what you should do.
Just for the people who have trouble with this like I did, this is the way you should do it.
To rotate the camera around a cube in OpenGL:
your x mouse value has to be added to the z rotator of your scene,
the cosine of your y mouse value has to be added to the x rotator,
and the sine of your y mouse value has to be subtracted from the y rotator.
That should do it.
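A rough sketch of one way to read that recipe, assuming a pygame + PyOpenGL setup; splitting the vertical mouse movement by the cosine and sine of the accumulated z rotation is my interpretation of the last two steps:

    import math
    import pygame
    from pygame.locals import MOUSEMOTION
    from OpenGL.GL import glLoadIdentity, glTranslatef, glRotatef

    rot_x = rot_y = rot_z = 0.0

    def handle_mouse(event):
        # Accumulate mouse movement into the scene's rotation angles.
        global rot_x, rot_y, rot_z
        if event.type == MOUSEMOTION and event.buttons[0]:      # drag with the left button held
            dx, dy = event.rel
            rot_z += dx                                          # x mouse movement -> z rotator
            rot_x += math.cos(math.radians(rot_z)) * dy          # y movement added to the x rotator...
            rot_y -= math.sin(math.radians(rot_z)) * dy          # ...and subtracted from the y rotator

    def apply_camera():
        # Rotate the whole scene before drawing the cube, mimicking an orbiting camera.
        glLoadIdentity()
        glTranslatef(0.0, 0.0, -5.0)                             # step back so the cube stays in view
        glRotatef(rot_x, 1.0, 0.0, 0.0)
        glRotatef(rot_y, 0.0, 1.0, 0.0)
        glRotatef(rot_z, 0.0, 0.0, 1.0)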
Hi, I am trying to make a punny cookie-clicker-type game called py clicker, and I made an invisible circle over the sprite, which is a pie. How do I detect whether the mouse is within the circle, so that when the user clicks, it checks if the click is inside the circle and adds one to the counter?
If you know the x, y of the center of the circle and its radius, then you can calculate the distance from the center of the circle to your mouse pointer when you click. If it's greater than the radius, then you are outside. There is a built-in function called math.hypot that, given the x and y differences, returns the straight-line distance between two points.
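For example (the center and radius of the invisible circle are placeholder values):

    import math

    def clicked_pie(mouse_pos, center, radius):
        # True if the click landed inside the invisible circle over the pie.
        distance = math.hypot(mouse_pos[0] - center[0], mouse_pos[1] - center[1])
        return distance <= radius

    # e.g. on a MOUSEBUTTONDOWN event:
    # if clicked_pie(event.pos, (200, 150), 48):
    #     counter += 1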
You could try pygame.sprite.collide_circle(). But you will need another Sprite with a small radius placed at the mouse position.
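A sketch of that idea; pie_sprite and counter are assumed to exist in your game, and the pie sprite to have rect and radius attributes:

    import pygame

    class MouseProbe(pygame.sprite.Sprite):
        # A 1-pixel sprite parked at the mouse position, used only for collision tests.
        def __init__(self):
            super().__init__()
            self.rect = pygame.Rect(0, 0, 1, 1)
            self.radius = 1

    probe = MouseProbe()
    probe.rect.center = pygame.mouse.get_pos()
    if pygame.sprite.collide_circle(probe, pie_sprite):    # pie_sprite is your pie's Sprite (assumed)
        counter += 1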
You can use the graphics library and its getMouse() method.
I made a 2D project with a lot of tile sprites, and one player sprite. I'm trying to get the camera to follow the player, and for the most part it's working. However, there's one problem:
If you go to the edge of the map, it scrolls normally, but instead of the black background it displays copies of the sprites at the edge of the map. The same problem occurs if I leave some squares empty: when I move, the screen shows a copy of the tile that was previously there.
The camera works like this:
Select sprites that should be visible
Do sprite.visible = 1 for them, and sprite.visible = 0 for all other sprites
Set the position sprite.rect of all sprites to coords - offset
Update the screen (I use flip(), because the camera moves every turn, so the whole screen has to be updated every turn)
All DirtySprites have dirty = 2.
Does anyone know why it's displaying copies of the sprites on the edge instead of the background?
Help would be appreciated!
Unless you manually clear your screen surface, flip will not change its content.
Thus, if you neglect to draw to a certain location, it will remain the same.
If you want to get rid of this effect, usually called "hall of mirrors", you will have to keep track of what portions of the screen have not been drawn to yet and draw over these yourself.
It may be easier to define background sprites around your map's contours and block your camera from going off too far.
Since you use a "dirty/clean" approach, only redrawing what's changed, you don't have the option of simply filling the whole screen surface before you draw your frame, because that would draw over everything that has stayed the same since the last frame.
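One concrete way to handle the "draw over it yourself" part, assuming your DirtySprites live in a pygame.sprite.LayeredDirty group (the group name here is made up): register a plain black background with the group so every area a sprite vacates gets repainted black.

    import pygame

    background = pygame.Surface(screen.get_size())
    background.fill((0, 0, 0))                       # plain black "nothing here" backdrop

    all_sprites = pygame.sprite.LayeredDirty()       # ...with your tiles and player added to it
    all_sprites.clear(screen, background)            # background used to erase old sprite positions

    # each turn:
    all_sprites.update()
    dirty = all_sprites.draw(screen)
    pygame.display.update(dirty)                     # or flip(), as in your current loop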