Python: Find when figure window has been moved

Good evening. I need to get the x and y coordinates of the figure window when I move it. Is there a way to know when a window has been moved/dragged across the screen in Python?
I use fig.canvas.manager.window.winfo_geometry() to read the x and y coordinates, but how do I get a callback when the window is moved?
Thanks.
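One possible approach, as a minimal sketch: assuming the TkAgg backend (so fig.canvas.manager.window is a Tk widget), Tk's <Configure> event fires whenever the window is moved or resized and can be bound to a callback. The callback name here is mine.

import matplotlib
matplotlib.use("TkAgg")   # assumption: Tk backend
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
window = fig.canvas.manager.window

def on_configure(event):
    # <Configure> fires on both moves and resizes;
    # winfo_x()/winfo_y() give the window's current screen position.
    print("window at", window.winfo_x(), window.winfo_y())

window.bind("<Configure>", on_configure)
plt.show()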

Related

Overlay all screens and draw rectangle with a mouse

I am working on a tiny program to capture a screen print, and I want it to work in a similar fashion to the Windows Snipping Tool. First I need to overlay all screens with a 50% opacity layer and then, using the mouse, draw a rectangle and read its vertex coordinates. Honestly, I have no idea how to approach this. I tried win32api / win32gui, which is great for getting mouse coordinates, but I was still unable to draw a rectangle. One of my many ideas is to take shots of both displays (using PIL / ImageGrab), put an overlay on them, and show them full screen in windows of their own, but I failed at this. Another idea is to grab the image and create two new full-screen windows with BeeWare / Toga (the GUI framework I am using), but I was unable to find any method to open a window on the second display. Any ideas and hints will be greatly appreciated; I am really counting on you, as I feel I have reached a dead end.
Well, it is very easy to do with tkinter.
This is the principle I followed when I made my screenshot application (a rough sketch follows the steps):
The user presses a button to start.
Make a new window whose width and height cover all the screens, and hide the title bar (if that is hard to achieve, you could fall back to something like width=9999 and height=9999).
Take a screenshot of the whole desktop; you can use ImageGrab.grab(all_screens=True) for that.
Show the screenshot in a Canvas (I know Toga has this widget).
Start your mouse listener and save the position where the button was pressed.
While the user moves the mouse, create a rectangle (Toga's Canvas has a rect() function), e.g. rect(pressed_x, pressed_y, move_x, move_y), and delete the previous rectangle so that only one rectangle is shown at a time.
When the user releases the mouse, save the release position and use ImageGrab.grab((pressed_x, pressed_y, released_x, released_y), all_screens=True) to crop the selected area.
If you want to show the result in the application interface, Toga has a widget called ImageView; you can put the image in it.
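A rough, self-contained sketch of those steps using tkinter and Pillow (the widget names and the choice to crop the stored screenshot rather than grab again are mine; all_screens=True is Windows-only):

import tkinter as tk
from PIL import ImageGrab, ImageTk

root = tk.Tk()
root.overrideredirect(True)                      # hide the title bar

full = ImageGrab.grab(all_screens=True)          # screenshot of the whole desktop
root.geometry(f"{full.width}x{full.height}+0+0") # cover all screens

canvas = tk.Canvas(root, width=full.width, height=full.height, highlightthickness=0)
canvas.pack()
photo = ImageTk.PhotoImage(full)                 # keep a reference so Tk does not discard it
canvas.create_image(0, 0, image=photo, anchor="nw")

start = [0, 0]
rect_id = None

def on_press(event):
    start[0], start[1] = event.x, event.y        # position where the mouse was pressed

def on_move(event):
    global rect_id
    if rect_id is not None:
        canvas.delete(rect_id)                   # only one rectangle visible at a time
    rect_id = canvas.create_rectangle(start[0], start[1], event.x, event.y,
                                      outline="red", width=2)

def on_release(event):
    # Crop the selected area from the screenshot taken earlier and close the overlay.
    box = (min(start[0], event.x), min(start[1], event.y),
           max(start[0], event.x), max(start[1], event.y))
    full.crop(box).save("snip.png")
    root.destroy()

canvas.bind("<ButtonPress-1>", on_press)
canvas.bind("<B1-Motion>", on_move)
canvas.bind("<ButtonRelease-1>", on_release)
root.mainloop()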

remove pyqt data points with mouse clicks

I have a pyqtgraph.PlotWidget with several curves and want to remove data points within a range specified via mouse clicks. I can get the position of the mouse clicks with
def mousePressEvent(self, QMouseEvent):
    pos = QMouseEvent.pos()
but this position is of course in pixels of the widget and not in the units of the plot (time and amplitude). To find the data points that should be removed, I now have to either transform this pixel range to plot units, or access the pixels where the data points are displayed.
As this code should be integrated into a rather big project, I cannot mess too much with the given classes. I have read about the mapToDevice method as used in this question, but I was not able to get that working.
Does someone have an idea? Can someone explain to me how to use the mapTo* methods properly in this case? Or can someone point me to a proper tutorial on interactive pyqtgraph plots?
Thanks in advance.
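For what it's worth, a minimal sketch of the coordinate mapping, assuming a PlotWidget subclass (the class name and the print are mine): the widget pixel position is first mapped into the scene, then into the ViewBox's data coordinates.

import pyqtgraph as pg

class ClickablePlot(pg.PlotWidget):
    def mousePressEvent(self, event):
        pos = event.pos()                                      # pixels, relative to the widget
        scene_pos = self.mapToScene(pos)                       # widget -> scene coordinates
        view_pos = self.plotItem.vb.mapSceneToView(scene_pos)  # scene -> plot units
        print("clicked at", view_pos.x(), view_pos.y())        # time / amplitude
        super().mousePressEvent(event)                         # keep the default pan/zoom behaviour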

PyOpenGL how to rotate a scene with the mouse

I am trying to create a simple scene in 3d (in python) where you have a cube in front of you, and you are able to rotate it around with the mouse.
I understand that you should rotate the complete scene to mimic camera movement, but I can't figure out how you should do this.
Just to clarify I want the camera (or scene) to move a bit like blender (the program).
Thanks in advance
OK, I think I have found what you should do.
Just for the people who have trouble with this like I did, this is the way you should do it.
To rotate the camera around a cube in OpenGL:
add your x mouse value to the z rotation of your scene,
add the cosine of your y mouse value to the x rotation,
and subtract the sine of your y mouse value from the y rotation.
That should do it. (A rough sketch of a simpler variant follows.)
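A minimal sketch of mouse-driven scene rotation with pygame and PyOpenGL. Note that it uses the more common mapping (horizontal drag rotates about the y axis, vertical drag about the x axis) rather than the exact formula above, and the cube-drawing helper is assumed to exist.

import pygame
from pygame.locals import DOUBLEBUF, OPENGL, MOUSEMOTION, QUIT
from OpenGL.GL import (glRotatef, glTranslatef, glClear,
                       GL_COLOR_BUFFER_BIT, GL_DEPTH_BUFFER_BIT)
from OpenGL.GLU import gluPerspective

pygame.init()
pygame.display.set_mode((800, 600), DOUBLEBUF | OPENGL)
gluPerspective(45, 800 / 600, 0.1, 50.0)
glTranslatef(0.0, 0.0, -5.0)

running = True
while running:
    for event in pygame.event.get():
        if event.type == QUIT:
            running = False
        elif event.type == MOUSEMOTION and event.buttons[0]:
            dx, dy = event.rel
            # Apply the mouse deltas as incremental rotations of the whole scene;
            # the matrix is never reset, so the rotations accumulate.
            glRotatef(dx * 0.5, 0, 1, 0)   # horizontal drag -> yaw
            glRotatef(dy * 0.5, 1, 0, 0)   # vertical drag -> pitch

    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT)
    # draw_cube()  # assumed helper that issues the cube's draw calls
    pygame.display.flip()
    pygame.time.wait(10)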

Getting windows (client coordinates) for python window

I have loaded an image using Tkinter in Python. It opens up in a Python window. If I try to get the coordinates of a part of the image, it gives me coordinates relative to the Python window that opened up. However, I would like to be able to find the client coordinates of that part of the image, i.e. coordinates in relation to the actual computer screen.
I looked into using win32gui and somehow trying to get coordinates from the device context, however I could not figure out a way.
Any help is much appreciated!
You can use root.winfo_rootx() and root.winfo_rooty() to get the coordinates of the top-left corner of the tkinter window. You can add those to event.x and event.y to get the screen coordinates.
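A minimal sketch of that, assuming the image sits in a Label (here winfo_rootx()/winfo_rooty() are called on the widget itself so any offset inside the window is included):

import tkinter as tk

root = tk.Tk()
label = tk.Label(root, text="pretend this is the image")   # stand-in for the PhotoImage label
label.pack(padx=20, pady=20)

def on_click(event):
    # event.x / event.y are relative to the widget; winfo_rootx()/winfo_rooty()
    # give the widget's top-left corner on the screen, so the sum is the
    # absolute screen position of the click.
    screen_x = label.winfo_rootx() + event.x
    screen_y = label.winfo_rooty() + event.y
    print("widget:", (event.x, event.y), "screen:", (screen_x, screen_y))

label.bind("<Button-1>", on_click)
root.mainloop()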

Rectangle Tool in Python

I am trying to create a rectangle tool for a paint program using Python. Basically, I would like the user to be able to click on the canvas and draw rectangles from that specific point, just as with the rectangle tool in any paint program. This is the code I have right now. It currently gives me a very small cross-like structure. I am not sure what is causing this output and would like some insight into how the problem can be fixed. Thank you.
if mb[0] == 1 and canvas.collidepoint(mx,my):
    screen.set_clip(canvas)
    if tool == "rectangle":
        screen.blit(copy,(0,0))
        x,y = mouse.get_pos()
        mx,my = mouse.get_pos()
        draw.rect(screen,(c),(x,y,mx-x,my-y),sz)
    screen.set_clip(None)
Instead of grabbing the current position, use mouse events:
On mouse down (MOUSEBUTTONDOWN), store the start coordinate.
On mouse up (MOUSEBUTTONUP), draw the rect from the start to the current coordinates.
You could draw in-progress rectangles with MOUSEMOTION's coordinates. A rough sketch follows.
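A standalone sketch of that event-driven approach (the window size, colour and line width are arbitrary choices, not taken from the asker's program):

import pygame

pygame.init()
screen = pygame.display.set_mode((640, 480))
screen.fill((255, 255, 255))

start = None        # set on MOUSEBUTTONDOWN
snapshot = None     # copy of the canvas taken when the drag starts

def drag_rect(a, b):
    # Normalise so the rectangle is valid whichever direction the user drags.
    return pygame.Rect(min(a[0], b[0]), min(a[1], b[1]),
                       abs(b[0] - a[0]), abs(b[1] - a[1]))

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
        elif event.type == pygame.MOUSEBUTTONDOWN and event.button == 1:
            start = event.pos
            snapshot = screen.copy()          # remember the canvas underneath
        elif event.type == pygame.MOUSEMOTION and start is not None:
            screen.blit(snapshot, (0, 0))     # erase the previous in-progress rectangle
            pygame.draw.rect(screen, (0, 0, 0), drag_rect(start, event.pos), 2)
        elif event.type == pygame.MOUSEBUTTONUP and event.button == 1 and start is not None:
            screen.blit(snapshot, (0, 0))
            pygame.draw.rect(screen, (0, 0, 0), drag_rect(start, event.pos), 2)
            start = None
    pygame.display.flip()

pygame.quit()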
