Why don't I get the real resolution of my screen? [duplicate] - python

How do I detect the current screen resolution from the Winapi (in C or C++)?
Some background:
I want to start a new OpenGL fullscreen window, but I want it opened with the same horizontal and vertical size that the desktop is already set to. (Now that everyone uses LCD screens, I figure this is the best way to get the native resolution of the screen.)
I don't desperately need to also know the desktop color depth, although that would be a nice bonus.

Size of the primary monitor: GetSystemMetrics with SM_CXSCREEN / SM_CYSCREEN (GetDeviceCaps can also be used)
Size of all monitors (combined): GetSystemMetrics with SM_CXVIRTUALSCREEN / SM_CYVIRTUALSCREEN
Size of the work area (the screen excluding the taskbar and other docked bars) on the primary monitor: SystemParametersInfo with SPI_GETWORKAREA
Size of a specific monitor (work area and "screen"): GetMonitorInfo
Edit:
It is important to remember that a monitor does not always "begin" at 0x0, so just knowing the size is not enough to position your window. You can use MonitorFromWindow to find the monitor your window is on and then call GetMonitorInfo.
If you want to go the low-level route or change the resolution, you need to use EnumDisplayDevices, EnumDisplaySettings and ChangeDisplaySettings (this is the only way to get the refresh rate AFAIK, but GetDeviceCaps will tell you the color depth).
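For reference, here is a minimal Python ctypes sketch of the work-area and per-monitor calls listed above (my own illustration rather than code from the answer; the numeric constants are the standard WinAPI values, and hwnd stands for whatever window handle you already have):

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

# Work area of the primary monitor (screen minus taskbar and other docked bars).
SPI_GETWORKAREA = 48
work = wintypes.RECT()
user32.SystemParametersInfoW(SPI_GETWORKAREA, 0, ctypes.byref(work), 0)

# Size and position of the monitor a given window is on
# (the MonitorFromWindow + GetMonitorInfo route mentioned above).
class MONITORINFO(ctypes.Structure):
    _fields_ = [("cbSize", wintypes.DWORD),
                ("rcMonitor", wintypes.RECT),
                ("rcWork", wintypes.RECT),
                ("dwFlags", wintypes.DWORD)]

user32.MonitorFromWindow.restype = ctypes.c_void_p
user32.MonitorFromWindow.argtypes = [ctypes.c_void_p, wintypes.DWORD]
user32.GetMonitorInfoW.argtypes = [ctypes.c_void_p, ctypes.POINTER(MONITORINFO)]

def monitor_rect_for_window(hwnd):
    MONITOR_DEFAULTTONEAREST = 2
    hmon = user32.MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST)
    info = MONITORINFO()
    info.cbSize = ctypes.sizeof(MONITORINFO)
    user32.GetMonitorInfoW(hmon, ctypes.byref(info))
    return info.rcMonitor  # remember: this rectangle may not start at 0x0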

When the system uses DPI virtualization (Vista and above), GetSystemMetrics or GetWindowRect will fail to get the real screen resolution (you will get the virtualized resolution) unless you have created a DPI-aware application.
So the best option here (simple and backward compatible) is to use EnumDisplaySettings with ENUM_CURRENT_SETTINGS.

It's GetSystemMetrics with these parameters:
SM_CXSCREEN -> width
SM_CYSCREEN -> height
As the documentation says for SM_CXSCREEN:
The width of the screen of the primary display monitor, in pixels. This is the same value obtained by calling GetDeviceCaps as follows: GetDeviceCaps(hdcPrimaryMonitor, HORZRES).
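A minimal ctypes sketch of the same pair of calls (my own illustration; SM_CXSCREEN=0, SM_CYSCREEN=1, HORZRES=8 and VERTRES=10 are the standard constants, and the screen DC from GetDC(None) stands in for hdcPrimaryMonitor):

import ctypes

user32 = ctypes.windll.user32
gdi32 = ctypes.windll.gdi32
user32.GetDC.restype = ctypes.c_void_p
user32.ReleaseDC.argtypes = [ctypes.c_void_p, ctypes.c_void_p]
gdi32.GetDeviceCaps.argtypes = [ctypes.c_void_p, ctypes.c_int]

# Note: both values may be DPI-virtualized; see the answer above.
width = user32.GetSystemMetrics(0)   # SM_CXSCREEN
height = user32.GetSystemMetrics(1)  # SM_CYSCREEN

# The equivalent GetDeviceCaps calls:
hdc = user32.GetDC(None)
HORZRES, VERTRES = 8, 10
width2 = gdi32.GetDeviceCaps(hdc, HORZRES)
height2 = gdi32.GetDeviceCaps(hdc, VERTRES)
user32.ReleaseDC(None, hdc)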

I think SystemParametersInfo might be useful.
Edit: Look at GetMonitorInfo too.

MFC example: multiple monitor support with GetSystemMetrics, EnumDisplayMonitors and GetMonitorInfo.
Follow this link: Monitor enumeration with source code

I use the GetSystemMetrics function:
GetSystemMetrics(SM_CXSCREEN) returns the screen width (in pixels)
GetSystemMetrics(SM_CYSCREEN) returns the screen height (in pixels)
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724385%28v=vs.85%29.aspx

Here's a good one for situations where the user has decided to run their desktop at a lower resolution (bad idea) or for corner cases where a person has a monitor that their graphics controller can't take full advantage of:
// get actual size of desktop
RECT actualDesktop;
GetWindowRect(GetDesktopWindow(), &actualDesktop);

To get the real monitor resolution:
void GetMonitorRealResolution(HMONITOR monitor, int* pixelsWidth, int* pixelsHeight)
{
    MONITORINFOEX info = { sizeof(MONITORINFOEX) };
    winrt::check_bool(GetMonitorInfo(monitor, &info));
    DEVMODE devmode = {};
    devmode.dmSize = sizeof(DEVMODE);
    winrt::check_bool(EnumDisplaySettings(info.szDevice, ENUM_CURRENT_SETTINGS, &devmode));
    *pixelsWidth = devmode.dmPelsWidth;
    *pixelsHeight = devmode.dmPelsHeight;
}
It will return the native resolution in any case, even if the OS tries to lie to you because of the DPI awareness of the process.
To get the scaling ratio between the virtual resolution and the real resolution:
float GetMonitorScalingRatio(HMONITOR monitor)
{
    MONITORINFOEX info = { sizeof(MONITORINFOEX) };
    winrt::check_bool(GetMonitorInfo(monitor, &info));
    DEVMODE devmode = {};
    devmode.dmSize = sizeof(DEVMODE);
    winrt::check_bool(EnumDisplaySettings(info.szDevice, ENUM_CURRENT_SETTINGS, &devmode));
    return (info.rcMonitor.right - info.rcMonitor.left) / static_cast<float>(devmode.dmPelsWidth);
}
This will give you a ratio of the real resolution relative to the virtual resolution of the given monitor.
If the DPI scaling of the main monitor is 225% and the second monitor's is 100%, and you run this function for the second monitor, you will get 2.25, because 2.25 * real resolution = the virtual resolution of that monitor.
If the second monitor has 125% scaling (while the main monitor still has 225% scaling), then this function will return 1.79999995, because 125% relative to 225% is that value (225/125 = 1.8), and again 1.8 * real resolution = the virtual resolution at 125%.
To get the real DPI value (not relative to anything):
Given that monitor A has 225% DPI and monitor B has 125% DPI, then as I said above, you will not get 1.25 if you run the function on the second monitor (you will get 1.8).
To overcome this, use this function:
float GetRealDpiForMonitor(HMONITOR monitor)
{
    return GetDpiForSystem() / 96.0 / GetMonitorScalingRatio(monitor);
}
This function depends on GetMonitorScalingRatio above, which you need to copy as well. This will give you the correct value.

Related

Python GTK3 How to bring widgets to the front?

I have an application (actually a plugin for another application) that presents a GTK notebook. Each tab contains a technical drawing of an operation, with a set of SpinButtons that allow you to alter the dimensions of the operation.
If you need more context, it's here: https://forum.linuxcnc.org/41-guis/26550-lathe-macros?start=150#82743
As can be seen above, this all worked fine in GTK2. The widgets (first iteration in a GTK_Fixed, then moved to a GTK_Table) were pre-positioned and the image (a particular layer of a single SVG) was plonked in behind.
Then we updated to GTK3 (and Python 3) and it stopped working. The SVG image now appears on top of the input widgets, and they can no longer be seen or operated.
I am perfectly happy to change the top level container[1], if that will help. But the code that used to work (and now doesn't) is:
def on_expose(self, nb, data=None):
    tab_num = nb.get_current_page()
    tab = nb.get_nth_page(tab_num)
    cr = tab.get_property('window').cairo_create()
    cr.set_operator(cairo.OPERATOR_OVER)
    alloc = tab.get_allocation()
    x, y, w, h = (alloc.x, alloc.y, alloc.width, alloc.height)
    sw = self.svg.get_dimensions().width
    sh = self.svg.get_dimensions().height
    cr.translate(0, y)
    cr.scale(1.0 * w / sw, 1.0 * h / sh)
    # TODO: gtk3 drawing works, but the svg is drawn over the UI elements
    self.svg.render_cairo_sub(cr=cr, id='#layer%i' % tab_num)
[1] In fact I will probably go back to GTK_Fixed and move the elements about in the handler when the window resizes, scaled according to the original position. The GTK_Table (deprecated) version takes over 2 minutes to open in the Glade editor.
Unless there is a more elegant way to do this too?

Detect DPI/scaling factor in Python TkInter application

I'd like my application to be able to detect if it's running on a HiDPI screen, and if so, scale itself up so as to be usable. As said in this question, I know I need to set a scaling factor, and that this factor should be my DPI divided by 72; my trouble is in getting my DPI. Here's what I have:
def get_dpi(window):
    MM_TO_IN = 1 / 25.4
    pxw = window.master.winfo_screenwidth()
    inw = window.master.winfo_screenmmwidth() * MM_TO_IN
    return pxw / inw

root = Tk()
root.tk.call('tk', 'scaling', get_dpi(root) / 72)
This doesn't work (testing on my 4k laptop screen). Upon further inspection, I realized get_dpi() was returning 96.0, and that winfo_screenmmwidth() was returning 1016! (Thankfully, my laptop is not over a meter wide).
I assume that TkInter is here calculating the width in mm from some internally-detected DPI, wrongly detected as 96, but I'm not sure where it's getting this; I'm currently on Linux, and xrdb -query returns a DPI of 196, so it's not getting the DPI from the X server.
Does anyone know a cross-platform way to get my screen DPI, or to make TkInter be able to get it properly? Or, more to the point: how can I make TkInter play nice with HiDPI screens and also work fine on normal ones? Thanks!
This answer is from this link and left as a comment above, but it took hours of searching to find. I have not had any issues with it yet, but please let me know if it does not work on your system!
import tkinter
root = tkinter.Tk()
dpi = root.winfo_fpixels('1i')
The documentation for this says:
winfo_fpixels(number)
# Return the number of pixels for the given distance NUMBER (e.g. "3c") as float
A distance number is a digit followed by a unit, so 3c means 3 centimeters, and the function gives the number of pixels on 3 centimeters of the screen (as found here).
So to get dpi, we ask the function for the number of pixels in 1 inch of screen ("1i").
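Tying that back to the dpi/72 rule from the question, a minimal sketch (assuming your widget sizes are specified in points, so Tk's scaling factor actually has an effect):

root.tk.call('tk', 'scaling', dpi / 72)  # dpi comes from root.winfo_fpixels('1i') above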
I know I'm answering this question late, but I'd like to expand upon @Andrew Pye's idea. You are right: GUIs made with tkinter look different across monitors with different DPIs any time you use a 'width', 'height', 'pady' or anything else that is measured in pixels. I noticed this when I made a GUI on my desktop but then later ran the same GUI on my 4K laptop (the window and the widgets appeared much smaller on the laptop). This is what I did to fix it, and it worked for me.
from tkinter import *

ORIGINAL_DPI = 240.23645320197045  # the DPI of the computer you're making/testing the script on

def get_dpi():
    screen = Tk()
    current_dpi = screen.winfo_fpixels('1i')
    screen.destroy()
    return current_dpi

SCALE = get_dpi() / ORIGINAL_DPI  # this is the scale factor you were mentioning

# Every time you use a dimension in pixels, replace it with scaled(<pixel dimension>)
def scaled(original_width):
    return round(original_width * SCALE)

if __name__ == '__main__':
    root = Tk()
    # This window now has the same size across monitors; the scale factor is 1
    # if the script is run on the same computer as ORIGINAL_DPI.
    root.geometry(f'{scaled(500)}x{scaled(500)}')
    root.mainloop()
I'm using TclTk, not TkInter, and the only way I know how to do this is to work it out from the font metrics...
% font metrics Tk_DefaultFont
-ascent 30 -descent 8 -linespace 38 -fixed 0
The linespace is approximately 0.2x the DPI (currently set to 192 here)
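For tkinter users, a rough sketch of the same font-metrics idea, using the standard TkDefaultFont named font (the 0.2 factor is only the approximation quoted above, so treat the result as an estimate):

import tkinter
import tkinter.font as tkfont

root = tkinter.Tk()
linespace = tkfont.Font(name="TkDefaultFont", exists=True).metrics("linespace")
approx_dpi = linespace / 0.2  # linespace is roughly 0.2x the DPI, per the observation above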

tkinter winfo_screenwidth() when used with dual monitors

With tkinter canvas, to calculate the size of the graphics I display, I normally use the function winfo_screenwidth(), and size my objects accordingly.
But when used on a system with two monitors, winfo_screenwidth() returns the combined width of both monitors -- which messes up my graphics.
How can I find out the screen width in pixels of each monitor, separately?
I have had this problem with several versions of Python 3.x and several versions of tkinter (all 8.5 or above) on a variety of Linux machines (Ubuntu and Mint).
For example, the first monitor is 1440 pixels wide. The second is 1980 pixels wide. winfo_screenwidth() returns 3360.
I need to find a way to determine the screenwidth for each monitor independently.
Thanks!
It is an old question, but still: for a cross-platform solution, you could try the screeninfo module, and get information about every monitor with:
import screeninfo
screeninfo.get_monitors()
If you need to know on which monitor one of your windows is located, you could use:
def get_monitor_from_coord(x, y):
    monitors = screeninfo.get_monitors()
    for m in reversed(monitors):
        if m.x <= x <= m.width + m.x and m.y <= y <= m.height + m.y:
            return m
    return monitors[0]

# Get the screen which contains top
current_screen = get_monitor_from_coord(top.winfo_x(), top.winfo_y())

# Get the monitor's size
print(current_screen.width, current_screen.height)
(where top is your Tk root)
Based on this slightly different question, I would suggest the following:
t.state('zoomed')
m_1_height= t.winfo_height()
m_1_width= t.winfo_width() #this is the width you need for monitor 1
That way the window will zoom to fill one screen. The other monitor's width is just winfo_screenwidth() - m_1_width.
I would also point you to the excellent ctypes method of finding monitor sizes for Windows found here. NOTE: unlike that post says, ctypes is in the stdlib! No need to install anything.
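Not necessarily the method from that link, but here is a minimal ctypes sketch of getting per-monitor sizes on Windows with EnumDisplayMonitors (handles are passed as raw pointers; _collect and monitor_sizes are just illustrative names):

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32

# BOOL CALLBACK MonitorEnumProc(HMONITOR, HDC, LPRECT, LPARAM)
MonitorEnumProc = ctypes.WINFUNCTYPE(
    wintypes.BOOL, ctypes.c_void_p, ctypes.c_void_p,
    ctypes.POINTER(wintypes.RECT), wintypes.LPARAM)
user32.EnumDisplayMonitors.argtypes = [
    ctypes.c_void_p, ctypes.c_void_p, MonitorEnumProc, wintypes.LPARAM]

monitor_sizes = []

def _collect(hmon, hdc, rect_ptr, lparam):
    r = rect_ptr.contents
    monitor_sizes.append((r.right - r.left, r.bottom - r.top))
    return True  # keep enumerating

user32.EnumDisplayMonitors(None, None, MonitorEnumProc(_collect), 0)
print(monitor_sizes)  # one (width, height) tuple per monitor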

Issue with Paint Event on Mac OSX 10.8.5 in wxPython

In my program I have an image (bitmap) loaded into a wxScrolledWindow. I'm trying to draw a grid over the image but I just cannot get it to work. My job is to port this program, which was originally developed on Windows, over to Mac, and it is a bigger pain in the butt than I expected.
def OnPaint(self, event):
    dc = wx.BufferedPaintDC(self.staticBitmap, self.staticBitmap.GetBitmap())
    dc.Clear()
    dc.DrawBitmap(self.wxBitmap, 0, 0)
    self.drawGrid(dc)
    event.Skip()

def drawGrid(self, dc):
    gridWid, gridHgt = self.staticBitmap.GetBitmap().GetSize()
    numRows, numCols = self.gridSize, self.gridSize
    if self.controlPanel.showGridBox.IsChecked():
        dc.SetPen(wx.Pen(self.gridColor, self.gridThickness))
        dc.SetTextForeground(self.gridColor)
        cellWid = float(gridWid - 1) / numRows
        cellHgt = float(gridHgt - 1) / numCols
        for rowNum in xrange(numRows + 1):
            dc.DrawLine(0, rowNum * cellHgt, gridWid, rowNum * cellHgt)
        for colNum in xrange(numCols + 1):
            dc.DrawLine(colNum * cellWid, 0, colNum * cellWid, gridHgt)
This code works just fine on Windows 7, but I keep getting this error when running it on Mac:
Traceback (most recent call last):
File "/Users/kyra/Documents/workspace/ADAPT/src/GUI.py", line 1617, in OnPaint
dc = wx.BufferedPaintDC(self.staticBitmap, self.staticBitmap.GetBitmap())
File "/usr/local/lib/wxPython-3.0.2.0/lib/python2.7/site-packages/wx-3.0-osx_cocoa/wx/_gdi.py", line 5290, in __init__
_gdi_.BufferedPaintDC_swiginit(self,_gdi_.new_BufferedPaintDC(*args, **kwargs))
wx._core.PyAssertionError: C++ assertion "window->MacGetCGContextRef() != NULL" failed at /BUILD/wxPython-src-3.0.2.0/src/osx/carbon/dcclient.cpp(195) in wxPaintDCImpl(): using wxPaintDC without being in a native paint event
self.staticBitmap is a wxStaticBitmap, and self.wxBitmap is the same exact image. My guess is it has something to do with a GraphicsContext, perhaps? There was a similar question asked here: How to send PaintEvent in wxpython but this did not help me. I did what they suggested with self.Refresh() but I have the same error coming up. Why would this be working on Windows but not on Mac? No drawing seems to be occurring on the image.
First, you shouldn't be handling the paint event for a native widget. Sometimes it will work, like this case on Win7, but other times it won't, and it is not officially supported by wxWidgets. (The behavior is "undefined".)
Second, why bother painting the wx.StaticBitmap at all? If you need to change the bitmap that the widget is displaying you can just give it a new one with its SetBitmap method. In this case if the grid you are drawing is dynamic (needs to change over time) then you could use a wx.MemoryDC to make a new bitmap with the grid (IOW, draw the bitmap and call drawGrid on the memory DC) and then pass that new bitmap to SetBitmap.
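For example, a rough sketch of that approach (bitmap_with_grid and its arguments are made-up names for illustration, and the grid math is simplified compared to the question's drawGrid):

import wx

def bitmap_with_grid(src_bitmap, grid_color, grid_size, thickness=1):
    # Copy the source bitmap so the original stays untouched.
    bmp = src_bitmap.ConvertToImage().ConvertToBitmap()
    dc = wx.MemoryDC()
    dc.SelectObject(bmp)
    dc.SetPen(wx.Pen(grid_color, thickness))
    w, h = bmp.GetWidth(), bmp.GetHeight()
    for i in range(grid_size + 1):
        y = i * (h - 1) // grid_size
        x = i * (w - 1) // grid_size
        dc.DrawLine(0, y, w, y)  # horizontal line
        dc.DrawLine(x, 0, x, h)  # vertical line
    dc.SelectObject(wx.NullBitmap)  # release the bitmap before handing it out
    return bmp

# later, e.g. whenever the grid settings change:
# self.staticBitmap.SetBitmap(bitmap_with_grid(self.wxBitmap, self.gridColor, self.gridSize))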
Third, you don't usually see calls to event.Skip in paint event handlers. There may be cases where this could cause problems too, unless the base classes are expecting it.
Fourth, it's not really a problem, but using wx.BufferedPaintDC on Mac is superfluous as the platform already double-buffers everything. GTK does so in most cases as well. There is a wx.AutoBufferedPaintDC that will be either a PaintDC or a BufferedPaintDC depending on whether buffering is needed on the given platform. Or you can decide which to use in your own code by looking at the return value of window.IsDoubleBuffered().
Finally, if you would rather handle this problem using paint events instead of generating and swapping images in the wx.StaticBitmap then another approach you could take is to make a custom class similar to wx.StaticBitmap that simply paints a bitmap on itself, but also knows how to manage drawing the grid when it is needed, then you could use that class in place of the wx.StaticBitmap. You could use the wx.lib.statbmp module as a starting point.

Resize wx.Dialog horizontally only

Is there a way to allow a custom wx.Dialog to be resized in the horizontal direction only? I've tried using GetSize() and then setting the min and max height of the window using SetSizeHints(), but for some reason it always allows the window to be resized just a little, and it looks rather tacky. The only other alternative I have come up with is to hard-code the min and max height, but I don't think that would be such a good idea...
Relevant code:
class SomeDialog(wx.Dialog):
    def __init__(self, parent):
        wx.Dialog.__init__(self, parent, title="blah blah",
                           style=wx.DEFAULT_DIALOG_STYLE | wx.RESIZE_BORDER)
        size = self.GetSize()
        self.SetSizeHints(minW=size.GetWidth(), minH=size.GetHeight(),
                          maxH=size.GetHeight())
os: Windows 7
python version: 2.5.4
wxPython version: 2.8.10
If you don't want the height to change, why would it be a bad idea to set min and max height to the same value (the one you want to force)? You can of course get the system's estimate of the "best value" with GetBestSize or related methods. Though I find it peculiar that setting the size hints doesn't have the effect I think it should... what platform are you using wxPython on, and what versions of Python, wxPython, and the platform/OS itself...?
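A minimal sketch of that suggestion, using the same SetSizeHints keywords as the code in the question (classic wxPython 2.8; the class name is just illustrative):

import wx

class FixedHeightDialog(wx.Dialog):
    def __init__(self, parent):
        wx.Dialog.__init__(self, parent, title="blah blah",
                           style=wx.DEFAULT_DIALOG_STYLE | wx.RESIZE_BORDER)
        size = self.GetSize()
        # Lock the height by giving min and max the same value;
        # leave maxW at -1 (unbounded) so only horizontal resizing is allowed.
        self.SetSizeHints(minW=size.GetWidth(), minH=size.GetHeight(),
                          maxW=-1, maxH=size.GetHeight())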
