How to export an object on a custom dbus using Python?

I want to provide D-Bus methods and signals on a custom bus (i.e. not the SessionBus or SystemBus). If I start a test copy of the dbus-daemon from the command line, as described in the dbus-daemon man page, like so:
dbus-daemon --session --print-address
then it prints an address such as:
unix:abstract=/tmp/dbus-vthAiAw4am,guid=60da6b6ef244a0dbdb9710a800002218
I can use this address in d-feet to "Connect to Other Bus", and the bus is empty, as expected. Now I would like to claim a name on that bus and export objects that provide D-Bus methods and signals, using Python. I have tried reading the code behind dbus.service.BusName, where I would normally pass in the Session or System bus, but I simply get lost. Does anyone know how to do this (if it is even possible)?

Looking at the source code for d-feet was, of course, an easier way to find the answer than browsing the entire dbus-python library. An address like the one in the question can be used when claiming a bus name by passing in a dbus.bus.BusConnection object constructed from that address, like so:
import dbus
import dbus.service

bus_name = dbus.service.BusName(
    'my.testbus.test',
    dbus.bus.BusConnection('unix:abstract=/tmp/dbus-vthAiAw4am'))
I can then export methods and emit signals on this bus.
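For completeness, here is a minimal sketch of exporting an object on such a connection (the Example class, object path, and interface name are made up for illustration; the bus address is the one from the question):

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# The main loop glue must be set up before the connection is created.
DBusGMainLoop(set_as_default=True)

bus = dbus.bus.BusConnection('unix:abstract=/tmp/dbus-vthAiAw4am')
bus_name = dbus.service.BusName('my.testbus.test', bus)

class Example(dbus.service.Object):
    @dbus.service.method('my.testbus.test', in_signature='s', out_signature='s')
    def Echo(self, text):
        # A trivial method, visible in d-feet once the loop is running.
        return text

    @dbus.service.signal('my.testbus.test', signature='s')
    def Heartbeat(self, message):
        pass  # calling this method emits the signal on the bus

Example(bus, '/my/testbus/test')
GLib.MainLoop().run()

Once the loop is running, the object appears under my.testbus.test when connecting d-feet to the same address.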

Related

Zabbix Action - How to use Default Field in Custom Script

I am writing a Python script which is run by a Zabbix action.
I want to set values in the Default subject and Default message fields of the action and then use those values in my script. So I run the script and forward all the needed macros as script parameters, like:
python /path/script.py -A "{HOST.NAME}" -B "{ALERT.MESSAGE}" -C "{ALERT.SUBJECT}"
but I can only get the HOST.NAME value; for the others I get only the macro name, not its value.
Any idea where the problem is? Are those macros unavailable to custom scripts?
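(For reference, a minimal sketch of how the receiving side might parse those parameters; the option letters mirror the command above, and the destination names are made up:)

#!/usr/bin/env python3
# Parse the -A/-B/-C values passed on the command line by the Zabbix action.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('-A', dest='host_name')
parser.add_argument('-B', dest='alert_message')
parser.add_argument('-C', dest='alert_subject')
args = parser.parse_args()

print(args.host_name, args.alert_subject, args.alert_message)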
After doing some research & testing myself, it seems as if these Alert macros are indeed not available in a custom script operation.[1]
You have two options for a workaround:
If you need to be able to execute this script on the host itself, the quick option is to simply replace the macros with the actual text of your subject and alert names. Some testing is definitely necessary to make sure it works in your environment, and it's not the most elegant solution, but something like this may well work with little extra effort:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
Verify, of course, that the embedded newlines do not break your custom script in your environment.
It doesn't look pretty, but it may well be the easiest option.
If you can run the command on any host, the nicer option is to create a new Media type, which will let you use these variables and may even make adding this script to other hosts much easier. These macros can definitely be used as part of a custom Media type (see Zabbix Documentation - Media Types) which can include custom scripts.
You'll need to make a bash or similar script file for the Zabbix server to run (which means doing anything on a host outside the Zabbix server itself is going to be more difficult, but not impossible).
Once the media type is set up, as a bit of a workaround (not ideal, of course) you'll need a user to 'send' to; assign that media type to the user, and 'sending' the alert to that user with that media type should execute your script with the macros resolved, just like executing the custom command.
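For reference, a minimal sketch of such an alert script (Zabbix invokes custom alert scripts from the server's AlertScriptsPath directory with the recipient, subject, and message as three positional arguments; the script body here is illustrative):

#!/usr/bin/env python3
# Zabbix calls custom alert scripts as: script.py <to> <subject> <message>
import sys

def main(argv):
    to, subject, message = argv[1], argv[2], argv[3]
    # The macros in subject/message arrive here already resolved.
    print('to=%s subject=%s' % (to, subject))
    print(message)
    return 0

if __name__ == '__main__':
    sys.exit(main(sys.argv))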
[1]: While I did do my own testing on this, I couldn't find any documentation which specifically states that these macros aren't supported in this case, and they definitely look like they should be. I'm more than happy to edit/revoke this answer if anyone can find documentation that confirms or denies this.
I should also explain how it works now. I did something like:
python /path/script.py -A "{HOST.NAME}" -B "Problem: {EVENT.NAME}" -C "Problem started at {EVENT.TIME} on {EVENT.DATE}
Problem name: {EVENT.NAME}
Host: {HOST.NAME}
Severity: {EVENT.SEVERITY}
Original problem ID: {EVENT.ID}
{TRIGGER.URL}"
works for me :)

Does twisted epollreactor use non-blocking dns lookup?

It seems obvious that it would use the Twisted Names API and not any blocking way to resolve host names.
However, digging in the source code, I have been unable to find the place where the name resolution occurs. Could someone point me to the relevant source code where the host resolution occurs (when trying to do a connectTCP, for example)?
I really need to be sure that connectTCP won't use blocking DNS resolution.
It seems obvious, doesn't it?
Unfortunately:
Name resolution is not always configured in the obvious way. You think you just have to read /etc/resolv.conf? Even in the specific case of Linux and DNS, you might have to look in an arbitrary number of files looking for name servers.
Name resolution is much more complex than just DNS. You have to do mDNS resolution, possibly look up some LDAP computer records, and then you have to honor local configuration dictating the ordering between these such as /etc/nsswitch.conf.
Name resolution is not exposed via a standard or useful non-blocking API. Even the glibc-specific getaddrinfo_a exposes its non-blockingness via SIGIO, not just a file descriptor you can watch. Which means that, like POSIX AIO, it's probably just a kernel thread behind your back anyway.
For these reasons, among others, Twisted defaults to using a resolver that just calls gethostbyname in a thread.
However, if you know that for your application it is appropriate to have DNS-only hostname resolution, and you'd like to use twisted.names rather than your platform resolver - in other words, if scale matters more to you than esoteric name-resolution use-cases - that is supported. You can install a resolver from twisted.names.client onto the reactor, appropriately configured for your application and all future built-in name resolutions will be made with that resolver.
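For example, a minimal sketch (createResolver() can also be given explicit servers appropriate for your application):

from twisted.internet import reactor
from twisted.names import client

# Replace the reactor's default resolver (which blocks in a thread)
# with a pure-DNS resolver from twisted.names.
reactor.installResolver(client.createResolver())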
I'm not massively familiar with Twisted; I only recently started using it. It looks like it doesn't block, though, but only on platforms that support threading.
In twisted.internet.base, ReactorBase does the resolving through its resolve method, which returns a Deferred from self.resolver.getHostByName.
self.resolver is an instance of BlockingResolver by default, which does block, but if the platform supports threading the resolver instance is replaced by ThreadedResolver in the ReactorBase._initThreads method.
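As a quick illustration of that entry point (the hostname is just an example):

from twisted.internet import reactor

# resolve() returns a Deferred that fires with the IP address string;
# with the default resolver the lookup runs in a thread pool.
d = reactor.resolve('example.com')
d.addCallback(print)
d.addBoth(lambda _: reactor.stop())
reactor.run()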

python: how to tell socket.gethostbyaddr() which dns server to use

Is there any way to specify which DNS server should be used by socket.gethostbyaddr()?
Please correct me if I'm wrong, but isn't this the operating system's responsibility? gethostbyaddr is just part of libc, and according to the man page:
The gethostbyname(), gethostbyname2() and gethostbyaddr() functions each return a pointer to an object with the following structure describing an internet host referenced by name or by address, respectively. This structure contains either the information obtained from the name server, named(8), or broken-out fields from a line in /etc/hosts. If the local name server is not running, these routines do a lookup in /etc/hosts.
So I would say there's no way of simply telling Python (from the code's point of view) to use a particular DNS server, since that is part of the system's configuration.
Take a look at PyDNS.
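As a concrete sketch, the dnspython package (a different library from PyDNS) lets you direct queries at a specific server; the nameserver address below is just an example, and this assumes dnspython 2.x for resolver.resolve():

import dns.resolver
import dns.reversename

# Build a resolver that ignores /etc/resolv.conf and asks a
# specific server instead.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ['8.8.8.8']  # example server

# Reverse lookup: turn the address into a PTR query name.
name = dns.reversename.from_address('8.8.8.8')
answer = resolver.resolve(name, 'PTR')
print(answer[0])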

Django and root processes

In my Django project I need to be able to check whether a host on the LAN is up using an ICMP ping. I found this SO question which answers how to ping something in Python, and this SO question which links to resources explaining how to use the sudoers file.
The Setting
A Device model stores an IP address for a host on the LAN, and after adding a new Device instance to the DB (via a custom view, not the admin) I envisage checking to see if the device responds to a ping using an AJAX call to an API which exposes the capability.
The Problem
However (from the docstring of a library suggested in the first SO question): "Note that ICMP messages can only be sent from processes running as root."
I don't want to run Django as the root user, since that is bad practice. However, this part of the process (sending an ICMP ping) needs to run as root. If I wish to send off a ping packet from a Django view to test the liveness of a host, then Django itself would have to run as root, since that is the process invoking the ping.
Solutions
These are the solutions I can think of; my question is whether there are any better ways to run only select parts of a Django project as root, other than these:
Run Django as root (please no!)
Put a "ping request" in a queue that another processes -- run as root -- can periodically check and fulfil. Maybe something like celery.
Is there not a simpler way?
I want something like a "Django run as root" library; is this possible?
Absolutely no way, do not run the Django code as root!
I would run a daemon as root (written in Python, why not) and then IPC between the Django instance and your daemon. As long as you're sure to validate the content and properly handle it (e.g. use subprocess.call with an array etc) and only pass in data (not commands to execute) it should be fine.
Here is an example client and server, using web.py
Server: http://gist.github.com/788639
Client: http://gist.github.com/788658
You'll need to install web.py (webpy.org), but it's worth having around anyway. If you can hard-wire the IP (or hostname) into the server and remove the argument, all the better.
What's your OS here? You might be able to write a little program that does what you want given a parameter, add it to the sudoers file, and give your Django user permission to run it as root.
/etc/sudoers
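A sketch of that approach (the helper path, user name, and sudoers line are all hypothetical):

# Hypothetical entry in /etc/sudoers (edit with visudo):
#   djangouser ALL=(root) NOPASSWD: /usr/local/bin/ping_check
import subprocess

def ping_as_root(ip_address):
    # sudo runs only the whitelisted helper, so the Django process
    # itself never holds root privileges.
    return subprocess.call(['sudo', '/usr/local/bin/ping_check', ip_address]) == 0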
I don't know what kind of system you're on, but on any box I've encountered, one does not have to be root to run the command-line ping program (it has the suid bit set, so it becomes root as necessary). So you could just invoke that. It's a bit more overhead, but probably negligible compared to network latency.
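For example, a sketch invoking the system ping binary (the -c/-W flags are Linux iputils syntax; adjust for your OS):

import subprocess

def host_is_up(ip_address, timeout_s=1):
    # Send one echo request; return code 0 means the host answered.
    result = subprocess.call(
        ['ping', '-c', '1', '-W', str(timeout_s), ip_address],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    return result == 0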

How do I create a D-Bus service that dynamically creates multiple objects?

I'm new to D-Bus (and to Python, double whammy!) and I am trying to figure out the best way to do something that was discussed in the tutorial.
However, a text editor application could as easily own multiple bus names (for example, org.kde.KWrite in addition to generic TextEditor), have multiple objects (maybe /org/kde/documents/4352 where the number changes according to the document), and each object could implement multiple interfaces, such as org.freedesktop.DBus.Introspectable, org.freedesktop.BasicTextField, org.kde.RichTextDocument.
For example, say I want to create a wrapper around flickrapi such that the service can expose a handful of Flickr API methods (say, urls_lookupGroup()). This is relatively straightforward if I want to assume that the service will always be specifying the same API key and that the auth information will be the same for everyone using the service.
Especially in the latter case, I cannot really assume this will be true.
Based on the documentation quoted above, I am assuming there should be something like this:
# Get the connection proxy object.
flickrConnectionService = bus.get_object("com.example.FlickrService",
                                         "/Connection")

# Ask the connection object to connect; the return value would be
# maybe something like "/connection/5512" ...
flickrObjectPath = flickrConnectionService.connect("MY_APP_API_KEY",
                                                   "MY_APP_API_SECRET",
                                                   flickrUsername)

# Get the service proxy object.
flickrService = bus.get_object("com.example.FlickrService",
                               flickrObjectPath)

# Ask the flickr service object to get group information.
groupInfo = flickrService.getFlickrGroupInfo('s3a-belltown')
So, my questions:
1) Is this how this should be handled?
2) If so, how will the service know when the client is done? Is there a way to detect if the current client has broken connection so that the service can cleanup its dynamically created objects? Also, how would I create the individual objects in the first place?
3) If this is not how this should be handled, what are some other suggestions for accomplishing something similar?
I've read through a number of D-Bus tutorials and various documentation and about the closest I've come to seeing what I am looking for is what I quoted above. However, none of the examples look to actually do anything like this so I am not sure how to proceed.
1) Mostly yes; I would only change one thing in the connect method, as I explain in 2).
2) D-Bus connections are not persistent; everything is done with request/response messages, and no connection state is stored unless you implement it in separate objects, as you do with your flickrObject. The D-Bus objects in the Python bindings are mostly proxies that abstract the remote objects as if you were "connected" to them, but what they really do is build messages based on the information you give at object instantiation (object path, interface, and so on). So the service cannot know when the client is done unless the client announces it with an explicit call.
To handle unexpected client termination, you can create a D-Bus object in the client and send its object path to the service when connecting; change your connect method to also accept an object path parameter. The service can listen to the NameOwnerChanged signal to know when a client has died.
To create the individual objects, you only have to instantiate objects in the same service, as you do with your "/Connection", but you have to be sure you are using a path that doesn't already exist. You could have a "/Connection/Manager" and various "/Connection/1", "/Connection/2"... (see the sketch after this answer).
3) If you need to store connection state, you have to do something like the above.
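Here is a minimal sketch of that pattern with dbus-python, reusing the bus name and the connect method from the question; everything else (the paths, the stub group-info method, the credential handling) is illustrative:

import dbus
import dbus.service
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

IFACE = 'com.example.FlickrService'

class Connection(dbus.service.Object):
    # One per-client object; holds that client's API key and auth info.
    def __init__(self, bus, path, api_key, api_secret, username):
        super().__init__(bus, path)
        self._creds = (api_key, api_secret, username)

    @dbus.service.method(IFACE, in_signature='s', out_signature='s')
    def getFlickrGroupInfo(self, group_name):
        return 'info for %s (stub)' % group_name  # illustrative stub

class Manager(dbus.service.Object):
    def __init__(self, bus):
        super().__init__(bus, '/Connection')
        self._bus = bus
        self._next_id = 1
        self._by_owner = {}  # unique bus name -> list of Connection
        # NameOwnerChanged tells us when a client's unique name vanishes.
        bus.add_signal_receiver(self._owner_changed,
                                signal_name='NameOwnerChanged',
                                dbus_interface='org.freedesktop.DBus')

    @dbus.service.method(IFACE, in_signature='sss', out_signature='o',
                         sender_keyword='sender')
    def connect(self, api_key, api_secret, username, sender=None):
        path = '/Connection/%d' % self._next_id
        self._next_id += 1
        obj = Connection(self._bus, path, api_key, api_secret, username)
        self._by_owner.setdefault(sender, []).append(obj)
        return dbus.ObjectPath(path)

    def _owner_changed(self, name, old_owner, new_owner):
        # An old owner with no new owner means that client left the bus;
        # drop its per-client objects.
        if old_owner and not new_owner:
            for obj in self._by_owner.pop(old_owner, []):
                obj.remove_from_connection()

DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()
name = dbus.service.BusName(IFACE, bus)
Manager(bus)
GLib.MainLoop().run()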
