Only mirror screen rotation - Python

I have a problem with programmatic monitor rotation:
import win32api as win32
import win32con
MY_SCREEN_NUMBER = 1
device = win32.EnumDisplayDevices(None,MY_SCREEN_NUMBER)
dm = win32.EnumDisplaySettings(device.DeviceName,win32con.ENUM_CURRENT_SETTINGS)
dm.DisplayOrientation = win32con.DMDO_180
dm.Fields = dm.Fields & win32con.DM_DISPLAYORIENTATION
win32.ChangeDisplaySettingsEx(device.DeviceName,dm)
The code only rotates the screen to the opposite orientation. For example,
if the screen is in win32con.DMDO_DEFAULT it can only be rotated to win32con.DMDO_180,
and if it is in win32con.DMDO_90 it can only be rotated to win32con.DMDO_270;
any other target orientation gives me the -2 (DISP_CHANGE_BADMODE) error code.
But I can successfully rotate it through the Windows graphical display settings.
How do I rotate the screen correctly?

def rotateTo(degree):
    import win32api as win32
    import win32con
    MY_SCREEN_NUMBER = 1
    device = win32.EnumDisplayDevices(None, MY_SCREEN_NUMBER)
    dm = win32.EnumDisplaySettings(device.DeviceName, win32con.ENUM_CURRENT_SETTINGS)
    newDisplayOrientationInWin32Format = degree // 90  # the DMDO_* constants are degrees / 90
    currentDisplayOrientationInWin32Format = dm.DisplayOrientation
    # If the new orientation is perpendicular to the current one,
    # the mode's width and height must be swapped.
    if (newDisplayOrientationInWin32Format + currentDisplayOrientationInWin32Format) % 2 == 1:
        dm.PelsHeight, dm.PelsWidth = dm.PelsWidth, dm.PelsHeight
    dm.DisplayOrientation = newDisplayOrientationInWin32Format
    return win32.ChangeDisplaySettingsEx(device.DeviceName, dm)
I checked https://msdn.microsoft.com/en-us/library/ms812499.aspx and found a C example showing that I need to swap height and width, but it doesn't test whether the new orientation is perpendicular to the previous one. I updated my code with this test and it started working for me in every possible screen orientation.
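For example, a quick sanity check of the function above (ChangeDisplaySettingsEx returns win32con.DISP_CHANGE_SUCCESSFUL, i.e. 0, on success):

import win32con

result = rotateTo(180)
if result == win32con.DISP_CHANGE_SUCCESSFUL:  # 0 means the mode change was accepted
    print("rotated")
else:
    print("rotation failed with code", result)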

Related

Communication with Thorlabs uc480 camera

I was able to get the current image from a Thorlabs uc480 camera using instrumental. My issue is when I try to adjust the parameters for grab_image: I can change cx and left to any value and still get an image, but cy and top only work if cy=600 and top=300. The purpose is to create a GUI so that the user can select values for these parameters to zoom in/out of an image.
Here is my code
import instrumental
from instrumental.drivers.cameras import uc480
from matplotlib.figure import Figure
import matplotlib.pyplot as plt
paramsets = instrumental.list_instruments()
cammer = instrumental.instrument(paramsets[0])
plt.figure()
framer = cammer.grab_image(timeout='1s', copy=True, n_frames=1, exposure_time='5ms',
                           cx=640, left=10, cy=600, top=300)
plt.pcolormesh(framer)
The above code does not give an image if I choose cy=600 and top=10. Is there a particular set of values to use for these parameters? How can I get an image of the full sensor size?
Thorlabs has a Python programming interface available as a download on their website. It is very well documented, and can be installed locally via pip.
Link:
https://www.thorlabs.com/software_pages/ViewSoftwarePage.cfm?Code=ThorCam
Here is an example of a simple capture algorithm that might help get you started:
import numpy as np

from thorlabs_tsi_sdk.tl_camera import TLCameraSDK
from thorlabs_tsi_sdk.tl_mono_to_color_processor import MonoToColorProcessorSDK
from thorlabs_tsi_sdk.tl_camera_enums import SENSOR_TYPE

# open the TLCameraSDK dll
with TLCameraSDK() as sdk:
    cameras = sdk.discover_available_cameras()
    if len(cameras) == 0:
        print("Error: no cameras detected!")

    with sdk.open_camera(cameras[0]) as camera:
        #camera.disarm()  # ensure any previous session is closed

        # setup the camera for continuous acquisition
        camera.frames_per_trigger_zero_for_unlimited = 0
        camera.image_poll_timeout_ms = 2000  # 2 second timeout
        camera.arm(2)

        # need to save the image width and height for color processing
        image_width = camera.image_width_pixels
        image_height = camera.image_height_pixels

        # initialize a mono to color processor if this is a color camera
        is_color_camera = (camera.camera_sensor_type == SENSOR_TYPE.BAYER)
        mono_to_color_sdk = None
        mono_to_color_processor = None
        if is_color_camera:
            mono_to_color_sdk = MonoToColorProcessorSDK()
            mono_to_color_processor = mono_to_color_sdk.create_mono_to_color_processor(
                camera.camera_sensor_type,
                camera.color_filter_array_phase,
                camera.get_color_correction_matrix(),
                camera.get_default_white_balance_matrix(),
                camera.bit_depth
            )

        # begin acquisition
        camera.issue_software_trigger()

        # get the next frame
        frame = camera.get_pending_frame_or_null()

        # if the frame is null, keep polling until a frame arrives
        # or until max_attempts is reached
        frame_attempts = 0
        max_attempts = 10
        while frame is None:
            frame = camera.get_pending_frame_or_null()
            frame_attempts += 1
            if frame_attempts == max_attempts:
                raise TimeoutError("Timeout was reached while polling for a frame, program will now exit")

        image_data = frame.image_buffer
        if is_color_camera:
            # transform the raw image data into RGB color data
            color_data = mono_to_color_processor.transform_to_24(image_data, image_width, image_height)
            save_data = np.reshape(color_data, (image_height, image_width, 3))

        camera.disarm()
You can also process the image after capture with the PIL library.
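For instance, a minimal sketch (assuming the Pillow package is installed, and reusing the save_data array from the snippet above) that writes the color frame to disk:

from PIL import Image

# save_data is the (height, width, 3) RGB array built in the capture snippet above
img = Image.fromarray(save_data.astype('uint8'), mode='RGB')
img.save('capture.png')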

ROS programming: Trying to make my turtlebot wander without running into obstacles

I'm new to ROS.
I'm trying to make my turtlebot move forward until it is within a minimal distance from an obstacle, then rotate until the path is clear, then move forward again and so on...
I wrote this code:
import rospy, sys
from stopper import Stopper

if __name__ == "__main__":
    rospy.init_node("stopper_node", argv=sys.argv)
    forward_speed = 0.5
    angular_speed = 45
    if rospy.has_param('~forward_speed'):
        forward_speed = rospy.get_param('~forward_speed')
    if rospy.has_param('~angular_speed'):
        angular_speed = rospy.get_param('~angular_speed')
    my_stopper = Stopper(forward_speed, angular_speed)
    my_stopper.start_moving()
and this:
import rospy
import math
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

class Stopper(object):
    def __init__(self, forward_speed, angular_speed):
        self.forward_speed = forward_speed
        self.angular_speed = angular_speed
        self.min_scan_angle = -30 / 180 * math.pi
        self.max_scan_angle = 30 / 180 * math.pi
        self.min_dist_from_obstacle = 1
        self.keep_moving = True
        self.keep_rotating = False
        self.command_pub = rospy.Publisher("/cmd_vel_mux/input/teleop", Twist, queue_size=10)
        self.laser_subscriber = rospy.Subscriber("scan", LaserScan, self.scan_callback, queue_size=1)

    def start_moving(self):
        print "started moving1"
        rate = rospy.Rate(1)
        rospy.loginfo("Starting to move")
        while not rospy.is_shutdown() and self.keep_moving:
            self.move_forward()
            rate.sleep()
        self.start_rotating()

    def move_forward(self):
        move_msg = Twist()
        move_msg.linear.x = self.forward_speed
        self.command_pub.publish(move_msg)

    def start_rotating(self):
        print "started rotating1"
        rate = rospy.Rate(1)
        rospy.loginfo("Starting to rotate")
        while not rospy.is_shutdown() and self.keep_rotating:
            self.rotate()
            rate.sleep()
        self.start_moving()

    def rotate(self):
        move_msg = Twist()
        move_msg.angular.z = self.angular_speed * 2 * math.pi / 360
        self.command_pub.publish(move_msg)

    def scan_callback(self, scan_msg):
        for dist in scan_msg.ranges:
            print dist
            if self.keep_moving and dist < self.min_dist_from_obstacle:
                self.keep_moving = False
                print "keep moving is now false"
                self.keep_rotating = True
                print "keep rotating is now true"
                break
            if self.keep_rotating and math.isnan(dist):
                self.keep_rotating = False
                print "keep rotating is now false"
                self.keep_moving = True
                print "keep moving is now true"
                break
But even though I can't find any logical mistakes in it, it just does not work and occasionally bumps into stuff. I am running it with the Gazebo simulation "turtlebot_world". I would love to get some help.
Thanks!
I have a solution using the bug navigation algorithms; I hope it helps you:
With these repositories (agn_gazebo, bugs), you can bring up a Gazebo simulation with a bounded map, some obstacles, and a mobile wheeled robot platform (Pioneer P3dx) which is equipped with a laser scanner (Hokuyo URG) for environmental perception.
USAGE:
After cloning these repositories into ~/catkin_ws/src using:
git clone https://github.com/agn-7/agn_gazebo.git
git clone https://github.com/agn-7/bugs.git
Then build them with catkin_make in your catkin workspace.
After building the packages, you need to change the agn_gazebo/worlds/final.world file:
Open it and replace every /home/agn/catkin_ws/src/... with your own path, using Ctrl + F or Ctrl + H.
I've opened an issue to make this a dynamic path instead of a static one, but so far I haven't been able to.
Then bring up the simulator: roslaunch agn_gazebo gazebo.launch
Run any of the bug algorithms with a position target:
rosrun bugs bug.py bug0 11 0
or
rosrun bugs bug.py bug1 11 0
or
rosrun bugs bug.py bug2 11 0
Okay, I've implemented something that should work pretty well.
I have three files that make the wandering turtlebot work as you asked.
wander_bot.launch
Responsibility: This is a launch file that runs the Gazebo world and stores the configurable parameters.
turtlebot_node.py
Responsibility: This Python file initializes the ROS node, loads the configurable parameters, and then runs the code that makes the turtlebot move.
turtlebot_logic.py
Responsibility: This Python file listens to the laser and implements all the logic to make the wander bot work.
wander_bot.launch
<launch>
    <param name="/use_sim_time" value="true" />

    <!-- Launch turtle bot world -->
    <include file="$(find turtlebot_gazebo)/launch/turtlebot_world.launch"/>

    <!-- Launch stopper node -->
    <node name="wander_bot" pkg="wander_bot" type="turtlebot_node.py" output="screen">
        <param name="forward_speed" type="double" value="0.2"/>
        <param name="minimum_distance_from_obstacle" type="double" value="1.0"/>
        <param name="rotation_speed" type="int" value="2"/>
        <param name="minimum_angle" type="int" value="-30"/>
        <param name="maximum_angle" type="int" value="30"/>
    </node>
</launch>
turtlebot_node.py
#!/usr/bin/python
import rospy
import sys
from turtlebot_logic import RosTurtlebotLogic

def loadParams():
    forwardSpeed = 0.2
    rotationSpeed = 2
    minDistanceFromObstacle = 1.0
    minimumAngle = -30
    maximumAngle = 30
    if rospy.has_param('~forward_speed'):
        forwardSpeed = rospy.get_param('~forward_speed')
    if rospy.has_param('~rotation_speed'):
        rotationSpeed = rospy.get_param('~rotation_speed')
    if rospy.has_param('~minimum_distance_from_obstacle'):
        minDistanceFromObstacle = rospy.get_param('~minimum_distance_from_obstacle')
    if rospy.has_param('~minimum_angle'):
        minimumAngle = rospy.get_param('~minimum_angle')
    if rospy.has_param('~maximum_angle'):
        maximumAngle = rospy.get_param('~maximum_angle')
    return forwardSpeed, rotationSpeed, minDistanceFromObstacle, minimumAngle, maximumAngle

if __name__ == "__main__":
    rospy.loginfo("Started program.")
    rospy.init_node("stopper_node", argv=sys.argv)
    forwardSpeed, rotationSpeed, minDistanceFromObstacle, minimumAngle, maximumAngle = loadParams()
    rospy.loginfo("Finished import parameters.")
    my_stopper = RosTurtlebotLogic(forwardSpeed, rotationSpeed, minDistanceFromObstacle, minimumAngle, maximumAngle)
    my_stopper.startMoving()
turtlebot_logic.py
#!/usr/bin/python
import math
import rospy
from geometry_msgs.msg import Twist
from sensor_msgs.msg import LaserScan

class RosTurtlebotLogic(object):
    def __init__(self, forwardSpeed, rotationSpeed, minDistanceFromObstacle, minimumAngle, maximumAngle):
        self.forwardSpeed = forwardSpeed
        self.rotationSpeed = rotationSpeed
        self.minDistanceFromObstacle = minDistanceFromObstacle
        # Keeps the current minimum distance to an obstacle from the laser.
        self.minimumRangeAhead = 5
        # In which direction to rotate, left or right.
        self.rotationDirection = 0
        self.minimumIndexLaser = 360 * (90 + minimumAngle) / 90
        self.maximumIndexLaser = 360 * (90 + maximumAngle) / 90 - 1
        self.keepMoving = True
        self.commandPub = rospy.Publisher("/cmd_vel_mux/input/teleop", Twist, queue_size=10)
        rospy.Subscriber("scan", LaserScan, self.scanCallback, queue_size=10)

    def startMoving(self):
        rate = rospy.Rate(10)
        while not rospy.is_shutdown():
            if self.keepMoving:
                if self.minimumRangeAhead < self.minDistanceFromObstacle:
                    self.keepMoving = False
            else:
                if self.minimumRangeAhead > self.minDistanceFromObstacle:
                    self.keepMoving = True
            twist = Twist()
            if self.keepMoving:
                twist.linear.x = self.forwardSpeed
            else:
                twist.angular.z = self.rotationDirection
            self.commandPub.publish(twist)
            rate.sleep()

    def scanCallback(self, scan_msg):
        minimum = 100
        index = 0
        # Find the minimum distance to an obstacle, keeping both the distance and its index.
        # We need the minimum distance in order to know whether we can go forward or not.
        # We need the index to know in which direction to rotate.
        if not math.isnan(scan_msg.ranges[self.minimumIndexLaser]):
            minimum = scan_msg.ranges[self.minimumIndexLaser]
        for i in range(self.minimumIndexLaser, self.maximumIndexLaser + 1):
            if not math.isnan(scan_msg.ranges[i]) and scan_msg.ranges[i] < minimum:
                minimum = scan_msg.ranges[i]
                index = i
        self.minimumRangeAhead = minimum
        if index <= 359:
            self.rotationDirection = self.rotationSpeed
        else:
            self.rotationDirection = -self.rotationSpeed
The laser covers 180 degrees: the far right is index 0 in the ranges array, straight ahead is index 359, and the far left is index 719. The ranges array comes from the subscriber:
rospy.Subscriber("scan", LaserScan, self.scanCallback, queue_size=10)
The callback function is scanCallback
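The index arithmetic in __init__ follows from this mapping: roughly four ray indices per degree over the 180-degree sweep. As an illustrative (hypothetical) helper under the same 720-ray assumption:

def angle_to_index(angle_deg):
    # -90 deg (far right) -> 0, 0 deg (forward) -> ~359, +90 deg (far left) -> 719
    return int(round((angle_deg + 90) / 180.0 * 719))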
Well, it's kind of straightforward; I don't think I need to explain the code further. It's well documented and easy to understand, but for any further questions please comment here.

How to take snapshot from given region of the screen with python?

Here are my thoughts:
1) Take a snapshot of a given region of the screen, e.g. (100, 100, 80, 60), and save the result as an image
2) Process the image with the OpenCV Python interface
I'm new to Python and wonder if this is a good solution; specifically, I wonder how to take the snapshot with Python.
Thanks,
It's fairly simple using CGRectMake in Apple's CoreGraphics API:
CG.CGRectMake(x, y, w, h)
This allows you to define the horizontal/vertical positions, and width/height respectively.
Code:
#!/usr/bin/python

import Quartz
import LaunchServices
from Cocoa import NSURL
import Quartz.CoreGraphics as CG

def screenshot(path, dpi, region=None):
    if region is None:
        region = CG.CGRectInfinite

    image = CG.CGWindowListCreateImage(
        region,
        CG.kCGWindowListOptionOnScreenOnly,
        CG.kCGNullWindowID,
        CG.kCGWindowImageDefault)

    url = NSURL.fileURLWithPath_(path)
    dest = Quartz.CGImageDestinationCreateWithURL(
        url,
        LaunchServices.kUTTypePNG, 1, None
    )
    prop = {
        Quartz.kCGImagePropertyDPIWidth: dpi,
        Quartz.kCGImagePropertyDPIHeight: dpi,
    }
    Quartz.CGImageDestinationAddImage(dest, image, prop)
    Quartz.CGImageDestinationFinalize(dest)

spec = (0, 0, 800, 600)                  # x, y, w, h
path = '/path/to/screenshot_region.png'  # save path
area = CG.CGRectMake(*spec)              # set area to CGRect
screenshot(path, dpi=72, region=area)    # call function
To use, just call the function:
screenshot(path, dpi=72, region=area)
*dpi sets the image output resolution; leave out the region argument for full-screen capture. As for the OpenCV portion, I don't use it often enough to provide anything at this time.
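That said, a minimal sketch of the OpenCV side (assuming the standard opencv-python package, and reusing the save path from above) is simply to read the saved PNG back for processing:

import cv2

img = cv2.imread('/path/to/screenshot_region.png')   # BGR ndarray
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)         # e.g. convert to grayscale
cv2.imwrite('/path/to/screenshot_gray.png', gray)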

Python + GStreamer: scale video to window

I'm having some trouble rescaling video output of GStreamer to the dimension of the window the video is displayed in (retaining aspect ratio of the video). The problem is that I first need to preroll the video to be able to determine its dimensions by retrieving the negotiated caps, and then calculate the dimensions it needs to be displayed in to fit the window. Once I have prerolled the video and got the dimension caps, I cannot change the video's dimension anymore. Setting the new caps still results in the video being output in its original size. What must I do to solve this?
Just to be complete. In the current implementation I cannot render to an OpenGL texture which would have easily solved this problem because you could simply render output to the texture and scale it to fit the screen. I have to draw the output on a pygame surface, which needs to have the correct dimensions. pygame does offer functionality to scale its surfaces, but I think such an implementation (as I have now) is slower than retrieving the frames in their correct size directly from GStreamer (am I right?)
This is my code for loading and displaying the video (I omitted the main loop stuff):
def calcScaledRes(self, screen_res, image_res):
    """Calculate image size so it fits the screen

    Args:
        screen_res (tuple) - display window size/resolution
        image_res (tuple) - image width and height

    Returns:
        tuple - width and height of the image scaled to the window/screen
    """
    rs = screen_res[0] / float(screen_res[1])
    ri = image_res[0] / float(image_res[1])
    if rs > ri:
        return (int(image_res[0] * screen_res[1] / image_res[1]), screen_res[1])
    else:
        return (screen_res[0], int(image_res[1] * screen_res[0] / image_res[0]))
def load(self, vfile):
    """
    Loads a video file and makes it ready for playback

    Arguments:
    vfile -- the URI of the file to be played
    """
    # Info required for color space conversion (YUV->RGB)
    # masks are necessary for correct display on unix systems
    _VIDEO_CAPS = ','.join([
        'video/x-raw-rgb',
        'red_mask=(int)0xff0000',
        'green_mask=(int)0x00ff00',
        'blue_mask=(int)0x0000ff'
    ])
    self.caps = gst.Caps(_VIDEO_CAPS)

    # Create videoplayer and load URI
    self.player = gst.element_factory_make("playbin2", "player")
    self.player.set_property("uri", vfile)

    # Enable deinterlacing of video if necessary
    self.player.props.flags |= (1 << 9)

    # Reroute frame output to Python
    self._videosink = gst.element_factory_make('appsink', 'videosink')
    self._videosink.set_property('caps', self.caps)
    self._videosink.set_property('async', True)
    self._videosink.set_property('drop', True)
    self._videosink.set_property('emit-signals', True)
    self._videosink.connect('new-buffer', self.__handle_videoframe)
    self.player.set_property('video-sink', self._videosink)

    # Preroll movie to get dimension data
    self.player.set_state(gst.STATE_PAUSED)

    # If the movie is loaded correctly, info about the clip should be available
    if self.player.get_state(gst.CLOCK_TIME_NONE)[0] == gst.STATE_CHANGE_SUCCESS:
        pads = self._videosink.pads()
        for pad in pads:
            caps = pad.get_negotiated_caps()[0]
            self.vidsize = caps['width'], caps['height']
    else:
        raise exceptions.runtime_error("Failed to retrieve video size")

    # Calculate size of video when fit to screen
    self.scaledVideoSize = self.calcScaledRes((self.screen_width, self.screen_height), self.vidsize)
    # Calculate the top left corner of the video (to later center it vertically on screen)
    self.vidPos = ((self.screen_width - self.scaledVideoSize[0]) / 2, (self.screen_height - self.scaledVideoSize[1]) / 2)

    # Add width and height info to video caps and reload caps
    _VIDEO_CAPS += ", width={0}, height={1}".format(self.scaledVideoSize[0], self.scaledVideoSize[1])
    self.caps = gst.Caps(_VIDEO_CAPS)
    self._videosink.set_property('caps', self.caps)  # ??? not working, video still displayed in original size

def __handle_videoframe(self, appsink):
    """
    Callback method for handling a video frame

    Arguments:
    appsink -- the sink to which gst supplies the frame (not used)
    """
    buffer = self._videosink.emit('pull-buffer')
    img = pygame.image.frombuffer(buffer.data, self.vidsize, "RGB")

    # Upscale image to a new surface if presented fullscreen
    # Create the surface if it doesn't exist yet and keep rendering to this surface
    # for future frames (should be faster)
    if not hasattr(self, "destSurf"):
        self.destSurf = pygame.transform.scale(img, self.destsize)
    else:
        pygame.transform.scale(img, self.destsize, self.destSurf)
    self.screen.blit(self.destSurf, self.vidPos)

    # Swap the buffers
    pygame.display.flip()

    # Increase frame counter
    self.frameNo += 1
I'm pretty sure that your issue was (as it is a very long time since you asked this question) that you never hooked up the bus to watch for the messages that were emitted.
The code for this is usually something like this:
def some_function(self):
    # code defining Gplayer (the pipeline)
    #
    # here
    Gplayer.set_property('flags', self.GST_VIDEO|self.GST_AUDIO|self.GST_TEXT|self.GST_SOFT_VOLUME|self.GST_DEINTERLACE)
    # more code
    #
    # finally
    # Create the bus to listen for messages
    bus = Gplayer.get_bus()
    bus.add_signal_watch()
    bus.enable_sync_message_emission()
    bus.connect('message', self.OnBusMessage)
    bus.connect('sync-message::element', self.OnSyncMessage)

# Listen for gstreamer bus messages
def OnBusMessage(self, bus, message):
    t = message.type
    if t == Gst.MessageType.ERROR:
        pass
    elif t == Gst.MessageType.EOS:
        print ("End of Audio")
    return True

def OnSyncMessage(self, bus, msg):
    if msg.get_structure() is None:
        return True
    message_name = msg.get_structure().get_name()
    if message_name == 'prepare-window-handle':
        imagesink = msg.src
        imagesink.set_property('force-aspect-ratio', True)
        imagesink.set_window_handle(self.panel1.GetHandle())
The key bit for your issue is setting up a callback for the sync message and, in that callback, setting the property force-aspect-ratio to True.
This property ensures that the video fits the window it is being displayed in at all times.
Note that the self.panel1.GetHandle() call names the panel in which you are displaying the video.
I appreciate that you will have moved on, but hopefully this will help someone else trawling through the archives.

Get monitor count and resolutions in Python without xrandr command line tool

I'm running Ubuntu and I want to get the number of attached monitors, their current resolution and, if possible, their position in relation to each other.
Because I don't like parsing the console output of the xrandr command line tool (at least not if I don't have to), I would like to do this with Python-XLib or a similar Pythonic approach.
This is the xrandr output for my display config:
$ xrandr
Screen 0: minimum 320 x 200, current 2960 x 1050, maximum 8192 x 8192
DVI-0 connected 1680x1050+0+0 (normal left inverted right x axis y axis) 473mm x 296mm
1680x1050 60.0*+
[some lines cut]
VGA-0 connected 1280x1024+1680+26 (normal left inverted right x axis y axis) 376mm x 301mm
1280x1024 60.0 + 75.0*
[some more lines cut]
I want to get these values with Python, in a way like this:
monitors = get_monitors()
print monitors[0].width # out: 1680
print monitors[1].width # out: 1280
print monitors[0].x_position # out: 0
print monitors[1].x_position # out: 1680
When trying to get information via Python-XLib (or other libs like PyGTK and pygame), it seems that all monitors are always handled as one single display. For example, this is what I got with XLib so far:
import Xlib
import Xlib.display
display = Xlib.display.Display(':0')
print display.screen_count() # output: 1
root = display.screen().root
print root.get_geometry().width # output: 2960 -> no way to get width of single monitor?
print root.get_geometry().height # output: 1050
But as I said, I would prefer a cleaner approach that avoids parsing console output.
Is there really no way to get (detailed) display information with Python without having to parse xrandr output?
xrandr is just a client to access the "RandR" X11 extension from the command line. You can access the functionality directly from Python-Xlib. Here's an example (from Python-Xlib's own code!).
Just in case the URL changes again, here's a minimal piece of code that gets us the display modes. We need to create a window (its size etc. does not matter):
from __future__ import print_function
from Xlib import X, display
from Xlib.ext import randr
d = display.Display()
s = d.screen()
window = s.root.create_window(0, 0, 1, 1, 1, s.root_depth)
Then we can query the screen resources using it, e.g. following the OP's example:
res = randr.get_screen_resources(window)
for mode in res.modes:
    w, h = mode.width, mode.height
    print("Width: {}, height: {}".format(w, h))
On my computer I get:
$ python minimal.py
Xlib.protocol.request.QueryExtension
Width: 1600, height: 900
Width: 1440, height: 900
Width: 1360, height: 768
Width: 1360, height: 768
Width: 1152, height: 864
Width: 1024, height: 768
Width: 800, height: 600
Width: 800, height: 600
Width: 640, height: 480
Here is a more recent snippet. It extracts all available modes, plus the current resolution, from all connected monitors.
from Xlib import display
from Xlib.ext import randr

def find_mode(id, modes):
    for mode in modes:
        if id == mode.id:
            return "{}x{}".format(mode.width, mode.height)

def get_display_info():
    d = display.Display(':0')
    screen_count = d.screen_count()
    default_screen = d.get_default_screen()
    result = []
    screen = 0
    info = d.screen(screen)
    window = info.root

    res = randr.get_screen_resources(window)
    for output in res.outputs:
        params = d.xrandr_get_output_info(output, res.config_timestamp)
        if not params.crtc:
            continue
        crtc = d.xrandr_get_crtc_info(params.crtc, res.config_timestamp)
        modes = set()
        for mode in params.modes:
            modes.add(find_mode(mode, res.modes))
        result.append({
            'name': params.name,
            'resolution': "{}x{}".format(crtc.width, crtc.height),
            'available_resolutions': list(modes)
        })
    return result

print(get_display_info())
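With the display configuration from the question, this should print something along these lines (illustrative output, mode lists abbreviated):

[{'name': 'DVI-0', 'resolution': '1680x1050', 'available_resolutions': ['1680x1050', ...]},
 {'name': 'VGA-0', 'resolution': '1280x1024', 'available_resolutions': ['1280x1024', ...]}]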
You just need to pip3 install python-xlib and use this code to get the info:
from Xlib import display

d = display.Display()
screen_count = d.screen_count()
default_screen = d.get_default_screen()
for screen in range(0, screen_count):
    info = d.screen(screen)
    print("Screen: %s. Default: %s" % (screen, screen == default_screen))
    print("Width: %s, height: %s" % (info.width_in_pixels, info.height_in_pixels))
The list of properties you get from there can be found here:
http://python-xlib.sourceforge.net/doc/html/python-xlib_16.html#SEC15
Use the Python Xlib library instead of calling the command line tool. They both do the same thing.
import Xlib.display

display = Xlib.display.Display()
root = display.screen().root
for m in root.xrandr_get_monitors().monitors:
    connector = display.get_atom_name(m.name)
    print(f'{connector}, {m.width_in_pixels}x{m.height_in_pixels}, '
          f'x={m.x}, y={m.y}')
returns for me:
HDMI-0, 1920x1080, x=0, y=1050
DVI-0, 1680x1050, x=1920, y=1050
DVI-1, 1680x1050, x=0, y=0
Note that “display” and “screen” are X11 technical terms which do not correlate with real-world usage. “Monitor” is often used for the real thing; in X11, though, you can join multiple monitors into virtual monitors with Xrandr...

- An X11 display refers to the entire X server
  - a different X11 display on the same host → probably a different user
  - all keyboards and mice a user has (usually one of each) belong to one X11 display
- There is one X11 screen per X11 display these days
  - multiple X11 screens per X11 display refers to a concept which died out:
    - imagine a mouse and keyboard shared by otherwise totally different output devices
    - windows could not be shared or moved between X11 screens
    - you could run different window managers per X11 screen, or at least had to run totally independent instances of a window manager
    - it did not prove very useful
  - today’s X11 screen has to encompass all monitors, usually a big rectangle enclosing them all
- There is also a “desktop” term from the (X11) EWMH spec
  - it is up to the window manager whether it considers the X11 screen one desktop, or thinks of each monitor as a desktop
  - an X11 application (“client”) wanting to move its window(s) between (X11 EWMH) desktops may need to do it differently for different window managers
  - but apps mostly leave it to the window manager and user to full-screen or organize windows over the (X11 EWMH) desktops or monitors
To get a display count on Windows, you can use:
import win32api
print(len(win32api.EnumDisplayMonitors()))
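EnumDisplayMonitors also returns each monitor's handle and screen rectangle, so resolutions and relative positions are available as well; a minimal sketch (assuming pywin32's win32api.GetMonitorInfo, which returns a dict with 'Monitor' and 'Device' entries):

import win32api

# each entry is (monitor handle, device context handle, position rect)
for hmon, hdc, rect in win32api.EnumDisplayMonitors():
    info = win32api.GetMonitorInfo(hmon)
    left, top, right, bottom = info['Monitor']
    print(info['Device'], '%dx%d' % (right - left, bottom - top), 'at', (left, top))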
