I am using pyautogui to move a window to the top left corner of the screen. For some applications, such as Excel and Skype, this works correctly. For other applications, such as Chrome and Notepad, the window instead ends up 10 pixels to the right of the upper left-hand corner. Why is this happening?
Note that I am using python 3.6 and Windows 10.
See code example below.
import pyautogui
window = pyautogui.getWindow('MyChromeWindow')
window.move(0,0)
#window instead moves to (10, 0).
I can't recreate the problem, but you could work around it by moving relative to a reference picture (pyautogui's image-location functions), as I suppose (0, 0) is just the standard background colour.
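A minimal sketch of that picture-based workaround, assuming you have saved a small reference image corner.png (a hypothetical file name) of something visible near the target position; depending on the pyautogui version, a failed match returns None or raises ImageNotFoundException:

import pyautogui

# locateOnScreen searches the screen for a previously saved reference image and
# returns a Box(left, top, width, height) when it finds a match
box = pyautogui.locateOnScreen("corner.png")
if box:
    pyautogui.moveTo(box.left, box.top)   # move the mouse to the matched spot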
I'm trying to develop code that opens an image where you can select a point with the mouse and drag to form a rectangle until you release the left button.
Then from Python I should receive the starting coordinates and the width and height in pixels of the rectangle. How can I do that?
I saw that the packages argparse and cv2 can be used, but I don't really know how to approach it.
I won't do the job for you but I'm willing to help.
You will need 2 blocks of code:
an image displayer
a mouse-event listener
To start, you may forget about the image displayer. You may concentrate on the mouse listener while you draw your rectangle anywhere on the screen.
Select a mouse listener library. There are many on pypi.org.
I propose pynput because it is easy to work with and is well documented.
read documentation (focus on "on_click")
write your code to implement your mouse listener. It's simple (less than 10 lines; a sketch follows these steps). At the end of your program, add a statement:
input(">")
run your program. Click anywhere on the screen and drag to another point. Release.
your on_click() function will be called twice (once for button press and once for button release). Record the two sets of X-Y coordinates (unit is pixels).
once the button is released, compute the size of the rectangle (in pixels).
press Enter on the keyboard to end the program.
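A minimal sketch of the listener part, assuming pynput is installed (the variable names are just one way to do the bookkeeping):

from pynput import mouse

clicks = []  # holds (x, y) for the press and the release

def on_click(x, y, button, pressed):
    # called once when a button is pressed and once when it is released
    clicks.append((int(x), int(y)))
    if not pressed and len(clicks) >= 2:
        (x1, y1), (x2, y2) = clicks[-2], clicks[-1]
        print(f"start: ({min(x1, x2)}, {min(y1, y2)}), size: {abs(x2 - x1)} x {abs(y2 - y1)} px")

listener = mouse.Listener(on_click=on_click)
listener.start()

input(">")       # keep the program alive; press Enter to end it
listener.stop()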
Once your program is working you may work on the imager. If the image is large, you may have to use a scaling factor to reduce it. You will have to introduce the scaling factor in your sizing equations.
Once a program skeleton exists, do not hesitate to ask questions.
Asking for help when there is no visible sweat will not bring you many answers.
I am working on a tiny program to capture screen prints, and I want it to work in a similar fashion to the Windows Snipping Tool. First I need to overlay all screens with a 50% opacity layer and then, using the mouse, draw a rectangle and read its vertex coordinates. Honestly, I have no idea how to approach this. I tried with win32api / gui, and it is great for getting mouse coordinates, but I was still unable to draw a rectangle.

One of my many ideas is to take shots of both displays (using PIL / ImageGrab), put an overlay on them, and show them full screen as windows on all displays, but I failed at doing this. Another idea is to take the image grab and create two new windows using BeeWare / Toga (that is the GUI framework I am using) in full screen, but I was unable to find any method to open a window on the second display. Any ideas and hints will be greatly appreciated; I am really counting on you, as I feel I have reached a dead end.
Well, it is very easy to do with tkinter.
OK, here is the principle I follow when I make my screenshot application:
User presses the button to start.
Make a new window whose width and height fully cover all the screens, and hide the title bar (if that is hard to achieve, maybe use width=9999 and height=9999).
Take a screenshot of the whole desktop (you can use ImageGrab.grab(all_screens=True) to do that).
Show the screenshot in a Canvas (I know that toga has this widget).
Start your mouse listener thread and save the position where the button was pressed.
When the user moves the mouse, create a rectangle (toga's Canvas has a function rect()); maybe use rect(pressed_x, pressed_y, move_x, move_y). Also delete the previous rectangle, so only one rectangle is shown at a time.
When the user releases the mouse, save the release position and use ImageGrab.grab((pressed_x, pressed_y, released_x, released_y), all_screens=True) to crop the selected area (see the sketch after these steps).
If you want to show it in the application interface, toga has a widget called ImageView; you can put the image in it.
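A minimal sketch of the grab-and-crop part of those steps, assuming the mouse listener has already filled in pressed_x/pressed_y and released_x/released_y (placeholder values below) and that Pillow 6.2+ is installed:

from PIL import ImageGrab

pressed_x, pressed_y = 100, 100      # example press position
released_x, released_y = 500, 400    # example release position

# normalise the corners so the bounding box is always (left, top, right, bottom)
left, right = sorted((pressed_x, released_x))
top, bottom = sorted((pressed_y, released_y))

# all_screens=True makes the grab span every monitor (Windows only)
selection = ImageGrab.grab(bbox=(left, top, right, bottom), all_screens=True)
selection.save("selection.png")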
I'm looking to make a Python program, that while running in the background (e.g. started through command line), will change the screen resolution of Windows (and shift the screen position). And then the user is free to continue to use their computer in this different resolution.
E.g.: (fake code below)
import os
os.changeResolution(800,600)
# the entire windows desktop resolution changes to a (800,600) box, with black/empty around it
os.changeScreenPosition(100,200)
# shifts the (800,600) window of the desktop to position (100,200)
while 1:
    # do nothing, just keep the screen like we set it above while this little python program is running somewhere
    continue
[Before/after picture: after the change, the screen is shrunk to the new resolution, the position is offset, and the surrounding background is black.]
Now, while this program is running minimized somewhere, the user can go about their other desktop tasks. Is this possible with Python and Windows 10?
As a follow-up, what if I wanted to change the shape, from say a rectangular box, to a circle? E.g. to distort / bulge the screen.
Resizing the window will only make it fill your monitor at a lower resolution.
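If all you need is the plain resolution switch itself, a minimal sketch with pywin32 (assuming win32api/win32con are available) looks roughly like this; it simply re-fills the monitor at 800x600 rather than producing the shrunken, black-bordered desktop from the picture:

import win32api
import win32con

# read the current display mode and change only the width/height fields
devmode = win32api.EnumDisplaySettings(None, win32con.ENUM_CURRENT_SETTINGS)
devmode.PelsWidth = 800
devmode.PelsHeight = 600
devmode.Fields = win32con.DM_PELSWIDTH | win32con.DM_PELSHEIGHT

win32api.ChangeDisplaySettings(devmode, 0)   # apply the 800x600 mode

input("Press Enter to restore the original resolution...")
win32api.ChangeDisplaySettings(None, 0)      # revert to the registry settings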
You can mess about with the magnify function to make it larger or live copy an area at the same resolution.
You can use thumbnails (the way windows 10 shows a preview when you hover over a window in the task bar) to make it look smaller, but it won't pass control to a smaller window.
Both are handled by the DWM (Desktop Window Manager) and don't let you intervene with the image, short of adjusting the colour (magnify can tint the window or make it black and white).
Distorting to a round window is a whole can of worms. There are a few options explored in my old post below. I've still to give it a go, when I get some time, but think I'll be going down the route of trying to hook into DWM.
Realtime video processing for the complete Windows desktop
I have loaded an image using Tkinter in Python. It opens up in a python window. If I try to get coordinates of a part of the image, it gives me coordinates relative to the python window that opened up. However, I would like to actually be able to find the client coordinates of that part of the image i.e. coordinates in relation to the actual computer screen.
I looked into using win32gui and somehow trying to get coordinates from the device context, however I could not figure out a way.
Any help is much appreciated!
You can use root.winfo_rootx() and root.winfo_rooty() to get the coordinates of the top left corner of the tkinter window. You can add those to event.x and event.y to get the screen coordinates.
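A minimal sketch, assuming the image sits in a Label; binding the click to the widget that shows the image and using that widget's winfo_rootx()/winfo_rooty() gives the same result as using root's when the widget sits at the window's top left corner:

import tkinter as tk

root = tk.Tk()
# imagine the image is displayed in this widget
label = tk.Label(root, text="image goes here", width=40, height=10)
label.pack()

def on_click(event):
    # event.x / event.y are relative to the widget; winfo_rootx()/winfo_rooty()
    # give the widget's top left corner in screen coordinates
    screen_x = event.widget.winfo_rootx() + event.x
    screen_y = event.widget.winfo_rooty() + event.y
    print(f"screen coordinates: ({screen_x}, {screen_y})")

label.bind("<Button-1>", on_click)
root.mainloop()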
Is there a way to check if part or an entire window is over/under another window in python?
I have two windows and I'd like to make them not appear over each other. This is in Windows, using Tkinter.
You can use the methods winfo_rootx and winfo_rooty to get the x/y in the upper left corner. You can use winfo_width and winfo_height to get the width and height of the window. From that it's just a little math to figure out if two windows overlap. You can then use the geometry method to position the windows anywhere on the screen.
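A minimal sketch of that math, assuming two tkinter windows (the root window and a Toplevel):

import tkinter as tk

def window_box(win):
    # (left, top, right, bottom) of the window's client area in screen coordinates
    win.update_idletasks()               # make sure geometry info is up to date
    x, y = win.winfo_rootx(), win.winfo_rooty()
    return x, y, x + win.winfo_width(), y + win.winfo_height()

def windows_overlap(a, b):
    ax1, ay1, ax2, ay2 = window_box(a)
    bx1, by1, bx2, by2 = window_box(b)
    # they overlap unless one is entirely to the left of, or above, the other
    return not (ax2 <= bx1 or bx2 <= ax1 or ay2 <= by1 or by2 <= ay1)

root = tk.Tk()
other = tk.Toplevel(root)

if windows_overlap(root, other):
    # push the second window just to the right of the first with geometry()
    left, top, right, bottom = window_box(root)
    other.geometry(f"+{right + 10}+{top}")

root.mainloop()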