How can I monitor and read Ubuntu's "Night Light" status via D-Bus using Python with dasbus? I can't figure out from the API docs how to read a property or subscribe to a signal.
Likely candidates:
dasbus.client.property.get()
GLibClient.subscribe()
The following is adapted from the basic examples and prints the interfaces and properties/signals of the object:
#!/usr/bin/env python3
from dasbus.connection import SessionMessageBus
bus = SessionMessageBus()
# dasbus.client.proxy.ObjectProxy
proxy = bus.get_proxy(
    "org.gnome.SettingsDaemon.Color",   # bus name
    "/org/gnome/SettingsDaemon/Color",  # object path
)
print(proxy.Introspect())
# read and print properties "NightLightActive" and "Temperature" from interface "org.gnome.SettingsDaemon.Color" in (callback) function
# subscribe to signal "PropertiesChanged" in interface "org.freedesktop.DBus.Properties" / register callback function
Resources
https://pypi.org/project/dbus-python/
What is recommended to use pydbus or dbus-python and what are the differences?
https://wiki.python.org/moin/DbusExamples
Migration from dbus to GDbus in Python 3
Looking at the dasbus examples and the introspection data, dasbus is Pythonic about property access, so proxy.<property name> works. For your example of NightLightActive it would be:
print("Night light active?", proxy.NightLightActive)
For the signal you need to connect to it on the proxy, which takes the form proxy.<signal name>.connect, for example:
proxy.PropertiesChanged.connect(callback)
And this will need to have an EventLoop running.
My entire test was:
from dasbus.connection import SessionMessageBus
from dasbus.loop import EventLoop
bus = SessionMessageBus()
loop = EventLoop()
# dasbus.client.proxy.ObjectProxy
proxy = bus.get_proxy(
    "org.gnome.SettingsDaemon.Color",   # bus name
    "/org/gnome/SettingsDaemon/Color",  # object path
)
print("Night light active?", proxy.NightLightActive)
print("Temperature is set to:", proxy.Temperature)
def callback(iface, prop_changed, prop_invalidated):
    print("The notification:",
          iface, prop_changed, prop_invalidated)
proxy.PropertiesChanged.connect(callback)
loop.run()
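To react only to the two properties in question, the callback can filter the PropertiesChanged payload. A minimal pure-Python sketch (note: depending on the dasbus version, the changed-property values may arrive as plain Python values or as GLib.Variant objects; this sketch assumes plain values):

```python
COLOR_IFACE = "org.gnome.SettingsDaemon.Color"
WATCHED = ("NightLightActive", "Temperature")

def format_changes(iface, prop_changed):
    """Return human-readable lines for watched properties on the colour interface."""
    if iface != COLOR_IFACE:
        return []
    return [f"{name} changed to {prop_changed[name]}"
            for name in WATCHED if name in prop_changed]

def callback(iface, prop_changed, prop_invalidated):
    # Ignore notifications from other interfaces sharing the object path.
    for line in format_changes(iface, prop_changed):
        print(line)
```

This keeps the filtering logic separate from the printing, so it is easy to swap the print for any other action.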
Related
I am programming a BLE device and therefore need to get some information from the org.freedesktop.DBus.Properties interface, but I can't get it to work from the Python dbus API. From the console this is no problem. For example, with dbus-send I can invoke the following method call successfully (with the correct MAC address of course):
$ dbus-send --system --dest=org.bluez --print-reply "/org/bluez/hci0/dev_XX_XX_XX_XX_XX_XX" org.freedesktop.DBus.Properties.Get string:'org.bluez.Device1' string:'Paired'
>> method return time=1645780543.222377 sender=:1.7 -> destination=:1.329 serial=1113 reply_serial=2
variant boolean true
Now, what I'm trying to do is actually something like this:
import dbus
bus = dbus.SystemBus()
connected = bus.call_blocking(
    'org.bluez',                              # bus_name
    '/org/bluez/hci0/dev_XX_XX_XX_XX_XX_XX',  # object_path
    'org.freedesktop.DBus.Properties',        # dbus_interface
    'Get',                                    # method
    signature='(ss)',                         # signature
    args=['org.bluez.Device1', 'Connected'],  # args
)
print(connected)
which gives me the error: ERROR:dbus.connection:Unable to set arguments ['org.bluez.Device1', 'Paired'] according to signature '(ss)': <class 'TypeError'>: Fewer items found in struct's D-Bus signature than in Python arguments
I also tried with no signature, without success. I also found a similar question here, but for the C API, so I tried to adapt it to the Python dbus API, but still can't get it to work. Moreover, the official documentation isn't very helpful either, as there is no clear statement on how the argument mechanism works, nor a reference to such an explanation. This is pretty annoying, since I can invoke a blocking call on, for instance, the GetManagedObjects method from the org.freedesktop.DBus.ObjectManager interface that way, but that one takes no arguments of course...
Any help appreciated.
There are more Pythonic libraries for D-Bus, such as pydbus:
import pydbus

device_path = "/org/bluez/hci0/dev_XX_XX_XX_XX_XX_XX"
bus = pydbus.SystemBus()
device = bus.get('org.bluez', device_path)
print(device.Connected)
If you did want to do it with the deprecated python-dbus library, then I have always done it this way:
import dbus

BLUEZ_SERVICE_NAME = 'org.bluez'
DEVICE_INTERFACE = 'org.bluez.Device1'
bus = dbus.SystemBus()
remote_device_path = device_path
remote_device_obj = bus.get_object(BLUEZ_SERVICE_NAME,
                                   remote_device_path)
remote_device_props = dbus.Interface(remote_device_obj,
                                     dbus.PROPERTIES_IFACE)
print(remote_device_props.Get(DEVICE_INTERFACE, 'Connected'))
If you want to do it with PyGObject library then an example of that is:
from gi.repository import Gio, GLib
bus_type = Gio.BusType.SYSTEM
bus_name = 'org.bluez'
object_path = '/org/bluez/hci0'
prop_iface = 'org.freedesktop.DBus.Properties'
adapter_iface = 'org.bluez.Adapter1'
adapter_props_proxy = Gio.DBusProxy.new_for_bus_sync(
    bus_type=bus_type,
    flags=Gio.DBusProxyFlags.NONE,
    info=None,
    name=bus_name,
    object_path=object_path,
    interface_name=prop_iface,
    cancellable=None)
all_props = adapter_props_proxy.GetAll('(s)', adapter_iface)
print(all_props)
powered = adapter_props_proxy.Get('(ss)', adapter_iface, 'Powered')
print(powered)
Just for completeness, in case someone stumbles upon this:
You can get the deprecated API call to work if you change the signature to just ss instead of (ss). In a D-Bus method signature the characters describe the flat argument list, so ss means two separate string arguments, whereas (ss) means a single struct containing two strings. That is why the original call complained about fewer items in the struct's signature than in the Python arguments.
connected = bus.call_blocking(
    'org.bluez',                              # bus_name
    '/org/bluez/hci0/dev_XX_XX_XX_XX_XX_XX',  # object_path
    'org.freedesktop.DBus.Properties',        # dbus_interface
    'Get',                                    # method
    signature='ss',                           # signature
    args=['org.bluez.Device1', 'Connected'],  # args
)
I need to produce a small script that will watch for accidental changes made by users to a large shared file structure.
I have found I can get the change events using the ReadDirectoryChanges API as per
http://timgolden.me.uk/python/win32_how_do_i/watch_directory_for_changes.html
However, I can't see how to identify the user account that made the changes so that I can send out a notification.
Is it possible to get the name of the user account that moved the file/directory?
Tricky question; I'll answer it in two parts.
First, as an optional part, you can watch the file modifications themselves and add custom actions.
Here is an example of file modification tracking that works on Windows / Linux / Mac / BSD:
import time
import watchdog.events
import watchdog.observers
class StateHandler(watchdog.events.PatternMatchingEventHandler):
    def on_modified(self, event):
        print(event.event_type)
        print(event.key)
        print(event.src_path)
        # Add your code here to do whatever you want on file modification

    def on_created(self, event):
        pass

    def on_moved(self, event):
        pass

    def on_deleted(self, event):
        pass
fs_event_handler = StateHandler()
fs_observer = watchdog.observers.Observer()
fs_observer.schedule(fs_event_handler, r'C:\Users\SomeUser\SomeFolder', recursive=True)
fs_observer.start()
try:
    while True:
        time.sleep(2)
except KeyboardInterrupt:
    fs_observer.stop()
fs_observer.join()
Using the above filesystem observer, you can trigger security event log reviews.
You might also trigger them as scheduled task, but it's more fun to trigger them on file system modifications.
In order for security event logs to contain file modification information, you need to enable file auditing for the required directories using SACL lists (right click on your folder, security, auditing).
Then you can search the security logs for file events.
Going through security logs can be done with windows_tools.
Get it installed with python -m pip install windows_tools.wmi_queries (obviously only works under Windows)
Then do the following:
from windows_tools.wmi_queries import *
result = query_wmi('SELECT * FROM Win32_NTLogEvent WHERE Logfile="Security" AND TimeGenerated > "{}"'.format(create_current_cim_timestamp(hour_offset=1)))
for r in result:
    print(r)
You can add WHERE clauses like EventCode={integer} in order to filter only the events (file modifications or else) you need.
Usually the event codes you're searching for are 4656 (handle to an object requested), 4660 (object deleted), 4663 (attempt made to access an object), 4670 (permissions on an object changed).
See the Microsoft article on the event log class to find out which WHERE clauses it accepts.
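A small sketch of building such a filtered WQL query string (build_event_query is a hypothetical helper; in practice the timestamp would come from create_current_cim_timestamp):

```python
# Hypothetical helper: build a WQL query restricted to specific
# file-audit event codes and a starting CIM timestamp.
def build_event_query(event_codes, since_cim_timestamp):
    code_filter = " OR ".join(f"EventCode={c}" for c in event_codes)
    return (
        'SELECT * FROM Win32_NTLogEvent WHERE Logfile="Security" '
        f'AND ({code_filter}) '
        f'AND TimeGenerated > "{since_cim_timestamp}"'
    )

# The common file-audit codes mentioned above:
query = build_event_query([4656, 4660, 4663, 4670],
                          "20240101000000.000000+000")
print(query)
```

The resulting string can then be passed to query_wmi in place of the unfiltered query.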
DISCLAIMER: I'm the author of windows_tools package.
On a UNIX environment you can use os and pwd to retrieve the owner of the changed file (note that st_uid gives the file's owner, which is not necessarily the account that made the most recent change):
import os
import pwd
file_stat = os.stat("<changed_file>")
user_name = pwd.getpwuid(file_stat.st_uid).pw_name
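As a quick self-contained check of the lookup, stat a file you just created yourself; the owner reported will be the current user:

```python
import os
import pwd
import tempfile

# Create a throwaway file; its owner is the user running this script.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    path = tmp.name

file_stat = os.stat(path)
user_name = pwd.getpwuid(file_stat.st_uid).pw_name
print("Owner:", user_name)

os.unlink(path)  # clean up the temporary file
```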
Our server logical structure is based on the concept of "cluster" and "instance". A cluster is a set of instances of Weblogic and/or Karaf and/or Apache and/or Nginx and/or Jboss. The instances "MAY or MAY-NOT" be spread across multiple hosts.
A classic cluster would be:
hosts: a, b
Instances on a: tst13-apache, tst13-weblogic-ecom, tst13-weblogic-fulfill
Instances on b: tst13-karaf-rest-bus, tst13-karaf-rest-svcs, tst13-nginx-cache
I have created a super-class module (myMod) and 5 subclasses (myApache, myNginx, myJboss, myKaraf, myWeblogic). The "instance" information (inventory variables) are kept in an ldap database. The instances are not "registered" in rc.d directories on the hosts. Each server has "generic scripts" for starting an apache instance and a weblogic instance and a JBoss instance and ... by "instance name" so calling Ansible modules such as "httpd" was not practical.
My module will be used for "stop, start, status, kill, etc" (actions) taken against a list of instances.
Ideally, I would like to create an SSH connection to the destination host for EACH instance of software to be actioned on that host (which I believe would mean one module instance per instance of server software needing an action); however, thus far, it seems Ansible makes one SSH connection per destination host.
Since a "host" can be running many instances of different types of software (2 apache instances, 4 weblogic instances, and 3 karaf instances could all be on 1 host); it seemed practical to create my "myMod" as follows:
#####
## <ANSIBLE_HOME>/modules/controller.py
#####
import socket
from ansible.module_utils.ldapData import ldapData
from ansible.module_utils.myMod import myMod
from ansible.module_utils.myModules import myApache, myNginx, myWeblogic, myKaraf, myJboss
def main():
    arg_spec = dict(
        action=dict(default='status',
                    choices=['start', 'stop', 'kill', 'status', 'warm',
                             'startsession', 'stopsession']),
        instances=dict(default='#', required=False, type='str'),
    )
    host = socket.gethostname()
    serverdata = ldapData()
    cluster = serverdata.hosts[host]["cluster"]
    clusterData = serverdata.clusters[cluster]
    hostData = serverdata.hosts[host]
    classList = list(myMod.__subclasses__())
    for classType in classList:
        localMod = classType(arg_spec, hostData, clusterData)
        localMod.process()

if __name__ == '__main__':
    main()
The process method in myMod looks like this:
#####
## <ANSIBLE_HOME>/module_utils/myMod.py
#####
def process(self):
    fPointers = {
        'start': self.start,
        'stop': self.stop,
        'kill': self.kill,
        'status': self.status,
        'stopsession': self.stopsession,
        'startsession': self.startsession,
        'warm': self.warm
    }
    act = self.params['action']
    for inst in self.instances:
        func = fPointers.get(act)  # .get() avoids a KeyError on unknown actions
        if func:
            func(inst)
        else:
            undefined(inst)
    self.exit_json(changed=self.changed, msg=self.out, warnings=self.warn, errors=self.err)
Currently, I am only getting back the "out" and "warn" and "err" of the first module that gets "process" called. Other modules are either not executed or not reported. I am expecting that exit_json is "exiting the entire Ansible run" and not just "the current module".
Am I correct? Is there a concept at play that I am not aware of? What is the best change here to ensure that the ansible "report" at the end of the run shows "all output of all instances"?
For every task, Ansible packages the module with all required libs and parameters before sending it for execution via SSH, so your main module (wrapper) should do all the work in a single run.
The exit_json method stops the whole process with sys.exit(0), so the first time you call it your module's run is finished.
I'd recommend making one AnsibleModule class and a pack of helpers that return data to the parent module. Then you combine all the helpers' results into a single object and flush it to stdout (with a single exit_json call).
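A structural sketch of that pattern in plain Python (process_apache and process_weblogic are hypothetical helpers standing in for the real subclasses, and the final exit_json call is shown only as a comment, since this is not a runnable Ansible module):

```python
# Each helper handles one software type and RETURNS its results
# instead of calling exit_json itself.
def process_apache(instances):
    return {"out": [f"apache {i}: ok" for i in instances], "warn": [], "err": []}

def process_weblogic(instances):
    return {"out": [f"weblogic {i}: ok" for i in instances], "warn": [], "err": []}

def merge_results(results):
    """Combine per-helper result dicts into one object for a single exit_json."""
    combined = {"out": [], "warn": [], "err": []}
    for res in results:
        for key in combined:
            combined[key].extend(res[key])
    return combined

results = [
    process_apache(["tst13-apache"]),
    process_weblogic(["tst13-weblogic-ecom", "tst13-weblogic-fulfill"]),
]
combined = merge_results(results)
# In a real module, this is where the one and only exit_json call happens:
# module.exit_json(changed=True, msg=combined["out"],
#                  warnings=combined["warn"], errors=combined["err"])
print(combined["out"])
```

Because exit_json is called exactly once, the Ansible report at the end of the run contains the output of every instance.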
I'm trying to modify the simple python script provided with my Pipsta Printer so that instead of printing a single line of text, it prints out a list of things.
I know there are probably better ways to do this, but using the script below, could somebody please tell me what changes I need to make around the "txt = " part of the script, so that I can print out a list of items and not just one item?
Thanks.
# BasicPrint.py
# Copyright (c) 2014 Able Systems Limited. All rights reserved.
'''This simple code example is provided as-is, and is for demonstration
purposes only. Able Systems takes no responsibility for any system
implementations based on this code.
This very simple python script establishes USB communication with the Pipsta
printer sends a simple text string to the printer.
Copyright (c) 2014 Able Systems Limited. All rights reserved.
'''
import argparse
import platform
import sys
import time
import usb.core
import usb.util
FEED_PAST_CUTTER = b'\n' * 5
USB_BUSY = 66
# NOTE: The following section establishes communication to the Pipsta printer
# via USB. YOU DO NOT NEED TO UNDERSTAND THIS SECTION TO PROGRESS WITH THE
# TUTORIALS! ALTERING THIS SECTION IN ANY WAY CAN CAUSE A FAILURE TO COMMUNICATE
# WITH THE PIPSTA. If you are interested in learning about what is happening
# herein, please look at the following references:
#
# PyUSB: http://sourceforge.net/apps/trac/pyusb/
# ...which is a wrapper for...
# LibUSB: http://www.libusb.org/
#
# For full help on PyUSB, at the IDLE prompt, type:
# >>> import usb
# >>> help(usb)
# 'Deeper' help can be trawled by (e.g.):
# >>> help(usb.core)
#
# or at the Linux prompt, type:
# pydoc usb
# pydoc usb.core
PIPSTA_USB_VENDOR_ID = 0x0483
PIPSTA_USB_PRODUCT_ID = 0xA053
def parse_arguments():
    '''Parse the arguments passed to the script looking for a font file name
    and a text string to print. If either are missing defaults are used.
    '''
    txt = 'Hello World from Pipsta!'
    parser = argparse.ArgumentParser()
    parser.add_argument('text', help='the text to print',
                        nargs='*', default=txt.split())
    args = parser.parse_args()
    return ' '.join(args.text)
def main():
    """The main loop of the application. Wrapping the code in a function
    prevents it being executed when various tools import the code.
    """
    if platform.system() != 'Linux':
        sys.exit('This script has only been written for Linux')

    # Find the Pipsta's specific Vendor ID and Product ID
    dev = usb.core.find(idVendor=PIPSTA_USB_VENDOR_ID,
                        idProduct=PIPSTA_USB_PRODUCT_ID)
    if dev is None:  # if no such device is connected...
        raise IOError('Printer not found')  # ...report error

    try:
        # Linux requires USB devices to be reset before configuring, may not be
        # required on other operating systems.
        dev.reset()

        # Initialisation. Passing no arguments sets the configuration to the
        # currently active configuration.
        dev.set_configuration()
    except usb.core.USBError as ex:
        raise IOError('Failed to configure the printer', ex)

    # The following steps get an 'Endpoint instance'. It uses
    # PyUSB's versatile find_descriptor functionality to claim
    # the interface and get a handle to the endpoint
    # An introduction to this (forming the basis of the code below)
    # can be found at:
    cfg = dev.get_active_configuration()  # Get a handle to the active interface
    interface_number = cfg[(0, 0)].bInterfaceNumber

    # added to silence Linux complaint about unclaimed interface, it should be
    # released automatically
    usb.util.claim_interface(dev, interface_number)
    alternate_setting = usb.control.get_interface(dev, interface_number)
    interface = usb.util.find_descriptor(
        cfg, bInterfaceNumber=interface_number,
        bAlternateSetting=alternate_setting)

    usb_endpoint = usb.util.find_descriptor(
        interface,
        custom_match=lambda e:
        usb.util.endpoint_direction(e.bEndpointAddress) ==
        usb.util.ENDPOINT_OUT
    )
    if usb_endpoint is None:  # check we have a real endpoint handle
        raise IOError("Could not find an endpoint to print to")

    # Now that the USB endpoint is open, we can start to send data to the
    # printer.
    txt = parse_arguments()
    usb_endpoint.write(b'\x1b!\x00')

    # Print a char at a time and check the printer's buffer isn't full
    for x in txt:
        usb_endpoint.write(x)  # write all the data to the USB OUT endpoint
        res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)
        while res[0] == USB_BUSY:
            time.sleep(0.01)
            res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)

    usb_endpoint.write(FEED_PAST_CUTTER)
    usb.util.dispose_resources(dev)
Pipsta uses the Linux line feed ('\n', 0x0A, 10 in decimal) to mark a new line. For a quick test, change BasicPrint.py as follows:
#txt = parse_arguments()
txt = 'Receipt:\n========\n1. food 1 - 100$\n2. drink 1 - 200$\n\n\n\n\n'
usb_endpoint.write(b'\x1b!\x00')
for x in txt:
    usb_endpoint.write(x)
    res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)
    while res[0] == USB_BUSY:
        time.sleep(0.01)
        res = dev.ctrl_transfer(0xC0, 0x0E, 0x020E, 0, 2)
I commented out the parameter parsing and injected a test string with multi-line content.
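To print an actual Python list of items rather than a hard-coded string, join the list with '\n' before handing it to the byte-at-a-time loop. A small sketch (the items list here is a made-up example):

```python
# Hypothetical list of items to print; joining with '\n' puts each
# item on its own line when sent to the printer.
items = ['1. food 1 - 100$', '2. drink 1 - 200$', '3. dessert - 50$']
txt = 'Receipt:\n========\n' + '\n'.join(items) + '\n'
print(txt)
```

The resulting txt can be fed to the existing `for x in txt:` loop unchanged.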
How can I display the SOAP message generated by ZSI.ServiceProxy, and the response from a Web Service, when a Web Service method is invoked?
Here is some documentation on the ServiceProxy class. The constructor accepts a tracefile argument which can be any object with a write method, so this looks like what you are after. Modifying the example from the documentation:
from ZSI import ServiceProxy
import BabelTypes
import sys
dbgfile = open('dbgfile', 'w') # to log trace to a file, or
dbgfile = sys.stdout # to log trace to stdout
service = ServiceProxy('http://www.xmethods.net/sd/BabelFishService.wsdl',
                       tracefile=dbgfile,
                       typesmodule=BabelTypes)
value = service.BabelFish('en_de', 'This is a test!')
dbgfile.close()