Check if Rhythmbox is running via Python

I am trying to extract information from Rhythmbox via dbus, but I only want to do so, if Rhythmbox is running. Is there a way to check if Rhythmbox is running via Python without starting it if it is not running?
Whenever I invoke the dbus code like this:
bus = dbus.Bus()
obj = bus.get_object("org.gnome.Rhythmbox", "/org/gnome/Rhythmbox/Shell")
iface = dbus.Interface(obj, "org.gnome.Rhythmbox.Shell")
and Rhythmbox is not running, it then starts it.
Can I check via dbus if Rhythmbox is running without actually starting it? Or is there any other way, other than parsing the list of currently running processes, to do so?

This is similar to Rosh Oxymoron's answer, but arguably neater (albeit untested):
bus = dbus.SessionBus()
if bus.name_has_owner('org.gnome.Rhythmbox'):
    # ...
If you want to be notified when Rhythmbox starts or stops, you can use:
def rhythmbox_owner_changed(new_owner):
    if new_owner == '':
        print 'Rhythmbox is no longer running'
    else:
        print 'Rhythmbox is now running'

bus.watch_name_owner('org.gnome.Rhythmbox', rhythmbox_owner_changed)
See the documentation for dbus.bus.BusConnection for more details.
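Putting it together, an untested sketch of the notification wiring: name-owner callbacks are only delivered while a main loop is running, so this assumes dbus-python's GLib integration and the legacy gobject bindings (on newer systems you would use from gi.repository import GLib instead):
import dbus
import gobject
from dbus.mainloop.glib import DBusGMainLoop

# Install the GLib main loop so the bus connection can deliver signals.
DBusGMainLoop(set_as_default=True)
bus = dbus.SessionBus()

def rhythmbox_owner_changed(new_owner):
    if new_owner == '':
        print 'Rhythmbox is no longer running'
    else:
        print 'Rhythmbox is now running'

bus.watch_name_owner('org.gnome.Rhythmbox', rhythmbox_owner_changed)
gobject.MainLoop().run()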

import dbus

bus = dbus.SessionBus()
dbus_main_object = bus.get_object("org.freedesktop.DBus", "/")
dbus_names = dbus_main_object.ListNames(dbus_interface='org.freedesktop.DBus')
if 'org.gnome.Rhythmbox' in dbus_names:
    do_whatever()


Start systemd timer from Python

I'm trying to start a systemd timer from Python.
At the moment I am able to launch it through this script:
import dbus
from subprocess import call
def scheduleWall( time, message ):
    call(['systemd-run --on-active=' + str(time) + ' --unit=scheduled-message --description="' + message + '" wall "' + message + '"'], shell=True)
I'd like to stop using "call" and use "StartTransientUnit" instead, but I wasn't able to understand the format of the call at all! I'm rather new to dbus and Python.
def scheduleWall( time, message ):
    try:
        bus = dbus.SystemBus()
        systemd1 = bus.get_object("org.freedesktop.systemd1", "/org/freedesktop/systemd1")
        manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')
        obj = manager.StartTransientUnit('scheduled-message.timer', 'fail', [????], [????])
    except:
        pass
Is StartTransientUnit the right method to call? How should I call it?
TL;DR: stick to systemd-run :)
I don’t think StartTransientUnit is quite the right method – you need to create two transient units, after all: the timer unit, and the service unit that it will start (which will run wall later). Perhaps you can use StartTransientUnit for the timer, but at least not for the service. You also need to set all the properties that the two units need (OnActiveSec= for the timer, ExecStart= for the service, probably some more…) – you can see how systemd-run does it by running busctl monitor org.freedesktop.systemd1 and then doing systemd-run --on-active=1s /bin/true in another terminal. (The main signals to watch for seem to be UnitNew and JobNew.)
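For reference, here is an untested sketch of what the call shape might look like, based on the documented StartTransientUnit signature (ssa(sv)a(sa(sv))) and the transient-unit property names from the systemd D-Bus documentation; treat every property name, unit, and marshalling detail below as an assumption, not verified code:
import dbus

def schedule_wall(time_sec, message):
    bus = dbus.SystemBus()
    systemd1 = bus.get_object('org.freedesktop.systemd1',
                              '/org/freedesktop/systemd1')
    manager = dbus.Interface(systemd1, 'org.freedesktop.systemd1.Manager')
    # ExecStart for transient units has signature a(sasb):
    # (binary path, argv, ignore-failure flag).
    exec_start = dbus.Array([('/usr/bin/wall', ['/usr/bin/wall', message], False)],
                            signature='(sasb)')
    manager.StartTransientUnit(
        'scheduled-message.timer', 'fail',
        # Timer unit properties; systemd expects time values in microseconds.
        [('Description', message),
         ('OnActiveSec', dbus.UInt64(int(time_sec) * 1000000))],
        # Auxiliary units: the service the timer will activate.
        [('scheduled-message.service',
          [('Description', message),
           ('ExecStart', exec_start)])])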
I’ll admit, to me this seems rather complicated, and if systemd-run already exists to do the job for you, why not use it? The only change I would make is to eliminate the shell part and pass an array of arguments instead of a single space-separated string, with something like this (untested):
subprocess.run(['systemd-run', '--on-active', str(time), '--unit', 'scheduled-message', '--description', message, 'wall', message])

Communication between two separate Python engines

The problem statement is as follows:
I am working with Abaqus, a program for analyzing mechanical problems. It is basically a standalone Python interpreter with its own objects etc. Within this program, I run a python script to set up my analysis (so this script can be modified). It also contains a method which has to be executed when an external signal is received. These signals come from the main script that I am running in my own Python engine.
For now, I have the following workflow:
The main script sets a boolean to True when the Abaqus script has to execute a specific function, and pickles this boolean into a file. The Abaqus script regularly checks this file to see whether the boolean has been set to true. If so, it does an analysis and pickles the output, so that the main script can read this output and act on it.
I am looking for a more efficient way to signal the other process to start the analysis, since there is a lot of unnecessary checking going on right now. Data exchange via pickle is not an issue for me, but a more efficient solution is certainly welcome.
Search results always give me solutions with subprocess or the like, which applies when both processes are started from the same interpreter. I have also looked at ZeroMQ, since it is supposed to achieve things like this, but I think it is overkill and I would prefer a pure-Python solution. Both interpreters are running Python 2.7 (although different versions).
Edit:
Like @MattP, I'll add this statement of my understanding:
Background
I believe that you are running a product called abaqus. The abaqus product includes a linked-in python interpreter that you can access somehow (possibly by running abaqus python foo.py on the command line).
You also have a separate python installation, on the same machine. You are developing code, possibly including numpy/scipy, to run on that python installation.
These two installations are different: they have different binary interpreters, different libraries, different install paths, etc. But they live on the same physical host.
Your objective is to enable the "plain python" programs, written by you, to communicate with one or more scripts running in the "Abaqus python" environment, so that those scripts can perform work inside the Abaqus system, and return results.
Solution
Here is a socket based solution. There are two parts, abqlistener.py and abqclient.py. This approach has the advantage that it uses a well-defined mechanism for "waiting for work." No polling of files, etc. And it is a "hard" API. You can connect to a listener process from a process on the same machine, running the same version of python, or from a different machine, or from a different version of python, or from ruby or C or perl or even COBOL. It allows you to put a real "air gap" into your system, so you can develop the two parts with minimal coupling.
The server part is abqlistener. The intent is that you would copy some of this code into your Abaqus script. The abq process would then become a server, listening for connections on a specific port number, and doing work in response. Sending back a reply, or not. Et cetera.
I am not sure if you need to do setup work for each job. If so, that would have to be part of the connection. This would just start ABQ, listen on a port (forever), and deal with requests. Any job-specific setup would have to be part of the work process. (Maybe send in a parameter string, or the name of a config file, or whatever.)
The client part is abqclient. This could be moved into a module, or just copy/pasted into your existing non-ABQ program code. Basically, you open a connection to the right host:port combination, and you're talking to the server. Send in some data, get some data back, etc.
This stuff is mostly scraped from example code on-line. So it should look real familiar if you start digging into anything.
Here's abqlistener.py:
# The below usage example is completely bogus. I don't have abaqus, so
# I'm just running python2.7 abqlistener.py [options]

usage = """
abaqus python abqlistener.py [--host 127.0.0.1 | --host mypc.example.com ] \\
                             [ --port 2525 ]

Sets up a socket listener on the host interface specified (default: all
interfaces), on the given port number (default: 2525). When a connection
is made to the socket, begins processing data.
"""

import argparse

parser = argparse.ArgumentParser(description='Abaqus listener',
                                 add_help=True,
                                 usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
                    help='Interface IP address or name (default: empty string, all interfaces)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
                    help='port number of listener (default: 2525)')
args = parser.parse_args()

import json
import SocketServer


class AbqRequestHandler(SocketServer.BaseRequestHandler):
    """Request handler for our socket server.

    This class is instantiated whenever a new connection is made, and
    must override `handle(self)` in order to handle communicating with
    the client.
    """

    def do_work(self, data):
        "Do some work here. Call abaqus, whatever."
        print "DO_WORK: Doing work with data!"
        print data
        return {'desc': 'low-precision natural constants', 'pi': 3, 'e': 3}

    def handle(self):
        # Allow the client to send a 1kb message (file path?)
        self.data = self.request.recv(1024).strip()
        print "SERVER: {} wrote:".format(self.client_address[0])
        print self.data
        result = self.do_work(self.data)
        self.response = json.dumps(result)
        print "SERVER: response to {}:".format(self.client_address[0])
        print self.response
        self.request.sendall(self.response)


if __name__ == '__main__':
    print args
    server = SocketServer.TCPServer((args.host, args.port), AbqRequestHandler)
    print "Server starting. Press Ctrl+C to interrupt..."
    server.serve_forever()
And here's abqclient.py:
usage = """
python2.7 abqclient.py [--host HOST] [--port PORT]
Connect to abqlistener on HOST:PORT, send a message, wait for reply.
"""
import argparse
parser = argparse.ArgumentParser(description='Abacus listener',
add_help=True,
usage=usage)
parser.add_argument('-H', '--host', metavar='INTERFACE', default='',
help='Interface IP address or name, or (default: empty string)')
parser.add_argument('-P', '--port', metavar='PORTNUM', type=int, default=2525,
help='port number of listener (default: 2525)')
args = parser.parse_args()
import json
import socket
message = "I get all the best code from stackoverflow!"
print "CLIENT: Creating socket..."
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
print "CLIENT: Connecting to {}:{}.".format(args.host, args.port)
s.connect((args.host, args.port))
print "CLIENT: Sending message:", message
s.send(message)
print "CLIENT: Waiting for reply..."
data = s.recv(1024)
print "CLIENT: Got response:"
print json.loads(data)
print "CLIENT: Closing socket..."
s.close()
And here's what they print when I run them together:
$ python2.7 abqlistener.py --port 3434 &
[2] 44088
$ Namespace(host='', port=3434)
Server starting. Press Ctrl+C to interrupt...
$ python2.7 abqclient.py --port 3434
CLIENT: Creating socket...
CLIENT: Connecting to :3434.
CLIENT: Sending message: I get all the best code from stackoverflow!
CLIENT: Waiting for reply...
SERVER: 127.0.0.1 wrote:
I get all the best code from stackoverflow!
DO_WORK: Doing work with data!
I get all the best code from stackoverflow!
SERVER: response to 127.0.0.1:
{"pi": 3, "e": 3, "desc": "low-precision natural constants"}
CLIENT: Got response:
{u'pi': 3, u'e': 3, u'desc': u'low-precision natural constants'}
CLIENT: Closing socket...
References:
argparse, SocketServer, json, socket are all "standard" Python libraries.
To be clear, my understanding is that you are running Abaqus/CAE via a Python script as an independent process (let's call it abq.py), which checks for, opens, and reads a trigger file to determine if it should run an analysis. The trigger file is created by a second Python process (let's call it main.py). Finally, main.py waits to read the output file created by abq.py. You want a more efficient way to signal abq.py to run an analysis, and you're open to different techniques to exchange data.
As you mentioned, subprocess or multiprocessing might be an option. However, I think a simpler solution is to combine your two scripts, and optionally use a callback function to monitor the solution and process your output. I'll assume there is no need to have abq.py constantly running as a separate process, and that all analyses can be started from main.py whenever it is appropriate.
Let main.py have access to the Abaqus Mdb. If it's already built, you open it with:
mdb = openMdb(FileName)
A trigger file is not needed if main.py starts all analyses. For example:
if SomeCondition:
    j = mdb.Job(name=MyJobName, model=MyModelName)
    j.submit()
    j.waitForCompletion()
Once complete, main.py can read the output file and continue. This is straightforward if the data file was generated by the analysis itself (e.g. .dat or .odb files). On the other hand, if the output file is generated by some code in your current abq.py, then you can probably just include that code in main.py instead.
If that doesn't provide enough control, instead of the waitForCompletion method you can add a callback function to the monitorManager object (which is automatically created when you import the abaqus module: from abaqus import *). This allows you to monitor and respond to various messages from the solver, such as COMPLETED, ITERATION, etc. The callback function is defined like:
def onMessage(jobName, messageType, data, userData):
    if messageType == COMPLETED:
        # do stuff
        pass
    else:
        # other stuff
        pass
The callback is then registered with the monitorManager, and the job is submitted:
monitorManager.addMessageCallback(jobName=MyJobName,
                                  messageType=ANY_MESSAGE_TYPE,
                                  callback=onMessage, userData=MyDataObj)
j = mdb.Job(name=MyJobName, model=MyModelName)
j.submit()
One of the benefits to this approach is that you can pass in a Python object as the userData argument. This could potentially be your output file, or some other data container. You could probably figure out how to process the output data within the callback function - for example, access the Odb and get the data, then do any manipulations as needed without needing the external file at all.
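As an example of that idea, here is a hedged, untested sketch: the step/field names and the assumption that userData is a dict are placeholders, and the Odb access calls follow the general Abaqus scripting API.
from abaqusConstants import *
from odbAccess import openOdb

def onMessage(jobName, messageType, data, userData):
    if messageType == COMPLETED:
        # Pull results straight from the output database instead of
        # writing an intermediate file.
        odb = openOdb(jobName + '.odb')
        try:
            step = odb.steps.values()[-1]       # last analysis step
            frame = step.frames[-1]             # final frame of that step
            disp = frame.fieldOutputs['U']      # displacement field
            userData['U'] = [v.data for v in disp.values]
        finally:
            odb.close()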
I agree with the answer, except for some minor syntax problems.
Defining instance variables inside the handler is a no-no, not to mention they are not being defined in any sort of __init__() method. Subclass TCPServer and define your instance variables in TCPServer.__init__(). Everything else will work the same.

Check how the application has been launched?

My question is simple - is it possible with Python to check which way an application has been launched/spawned?
More information:
I have an application, something.exe, and now I need to check whether something.exe has been launched directly by the user or by some third-party application running it as a child process.
Is it possible to check?
With the standard library it is not possible on Windows. On Unix-like systems all processes (except init) have a parent:
import os
parent = os.getppid()
You can try to check os.environ. Different launch methods can set a slightly different environment, or leave some variables unset.
Also look at psutil. It has many functions for process management.
On Windows I tried this approach using psutil:
import psutil

def get_process_mode(process_name):
    process_mode = None
    plist = psutil.get_process_list()
    for process in plist:
        try:
            if process.name == process_name:
                if process.parent:
                    process_mode = "third party app"
                else:
                    process_mode = "user launched"
                break
        except psutil.AccessDenied:
            print "'%s' Process is not allowing us to check its parent!" % process
    return process_mode

get_process_mode("something.exe")
But it didn't work well in some cases...
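Note that the snippet above relies on the old psutil API (get_process_list(), and name/parent as attributes), which has since been removed; current psutil exposes these as process_iter() and methods. A rough, untested equivalent against the modern API (the function name and return strings are just illustrative, and the parent-based heuristic is unchanged and still imperfect):
import psutil

def get_process_mode(process_name):
    for process in psutil.process_iter(['name']):
        try:
            if process.info['name'] == process_name:
                # A living parent process suggests it was spawned by another app.
                if process.parent():
                    return "third party app"
                return "user launched"
        except psutil.AccessDenied:
            print "'%s' is not allowing us to check its parent!" % process
    return None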

Is it possible to have a python shell in a py2exe build?

I distribute a software package for Windows via distutils and py2exe. For development purposes, I'd like to be able to have access to a Python console inside the py2exe build. I see there is a python27.dll file in the py2exe build, so I hope I can leverage that to launch a Python terminal.
Is it possible to take an existing build, or to modify distutils/py2exe, so that the end user gets access to a Python shell in the py2exe environment?
There's a really bare-bones way to accomplish this as documented by Matt Anderson from the pymntos google group. I've seen some variations on it, but this one came up first when I googled. :p
The juice is in the stdlib code module, leveraging code.InteractiveInterpreter. The only thing you'd have to do is add this in as a thread as the application starts. Then, when the app starts, you can telnet localhost 7777 and you should drop into a Python interpreter.
The problem with doing it as a thread, though: you can't easily twiddle variables/data in the main thread without some sort of queue and passing things around.
You could alternatively have an async socket - that way you could twiddle stuff as a main-thread participant. That's inherently dangerous for a host of reasons. But, we're talking bare metal.
If you use the Twisted library, you could use Twisted Conch, which allows you to create an SSH or Telnet server that can talk to the rest of your app. However, this might be a problem since you're using the event loop from the UI to process events - you can't have two event loops. If you're using Qt, there is a Twisted Qt Reactor event loop. If it's Windows or something else... I have no idea. But this should at least give you a few things to consider.
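If you go the Twisted route, a minimal telnet manhole looks roughly like this (adapted from the recipe in Twisted's manhole demos; untested here, and the port number and namespace contents are placeholders):
from twisted.conch.insults import insults
from twisted.conch.manhole import ColoredManhole
from twisted.conch.telnet import TelnetTransport, TelnetBootstrapProtocol
from twisted.internet import protocol, reactor

# Objects placed in this dict become visible inside the manhole session.
namespace = {'app': None}

factory = protocol.ServerFactory()
factory.protocol = lambda: TelnetTransport(TelnetBootstrapProtocol,
                                           insults.ServerProtocol,
                                           ColoredManhole,
                                           namespace)
reactor.listenTCP(7777, factory)
reactor.run()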
Original link is: https://groups.google.com/forum/?fromgroups#!topic/pymntos/-Mjviu7R2bs
import socket
import code
import sys

class MyConsole(code.InteractiveConsole):

    def __init__(self, rfile, wfile, locals=None):
        self.rfile = rfile
        self.wfile = wfile
        code.InteractiveConsole.__init__(
            self, locals=locals, filename='<MyConsole>')

    def raw_input(self, prompt=''):
        self.wfile.write(prompt)
        return self.rfile.readline().rstrip()

    def write(self, data):
        self.wfile.write(data)

netloc = ('', 7777)
servsock = socket.socket()
servsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
servsock.bind(netloc)
servsock.listen(5)
print 'listening'
sock, _ = servsock.accept()
print 'accepted'
rfile = sock.makefile('r', 0)
sys.stdout = wfile = sock.makefile('w', 0)
console = MyConsole(rfile, wfile)
console.interact()
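To run this as a thread at application startup, as suggested above, something like the following untested sketch could sit alongside the class definition (serve_console is an illustrative name; note that the sys.stdout redirection in the original snippet is global, so output from evaluated expressions will affect the whole app unless handled more carefully):
import threading

def serve_console(local_vars=None):
    servsock = socket.socket()
    servsock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, True)
    servsock.bind(('', 7777))
    servsock.listen(1)
    while True:
        sock, _ = servsock.accept()
        rfile = sock.makefile('r', 0)
        wfile = sock.makefile('w', 0)
        # Expose the caller's namespace to the interactive session.
        MyConsole(rfile, wfile, locals=local_vars).interact()

t = threading.Thread(target=serve_console, kwargs={'local_vars': globals()})
t.daemon = True
t.start()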

developing for modularity & reusability: how to handle While True loops?

I've been playing around with the pybluez module recently to scan for nearby Bluetooth devices. What I want to do now is extend the program to also find nearby WiFi client devices.
The WiFi client scanner will need to have a while True loop to continually monitor the airwaves. If I were to write this as a straight-up, one-file program, it would be easy:
import ...
while True:
    client = scan()
    print client['mac']
What I want, however, is to make this a module. I want to be able to reuse it later and, possibly, have others use it too. What I can't figure out is how to handle the loop.
import mymodule
scan()
Assuming the first example code was 'mymodule', this program would simply print out the data to stdout. I would want to be able to use this data in my program instead of having the module print it out...
How should I code the module?
I think the best approach is going to be to have the scanner run on a separate thread from the main program. The module should have methods that start and stop the scanner, and another that returns the current access point list (using a lock to synchronize). See the threading module.
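A minimal sketch of that idea (untested; the Scanner class and its method names are illustrative, and scan() is assumed to be the blocking scan call from the one-file version):
import threading

class Scanner(object):

    def __init__(self):
        self._lock = threading.Lock()
        self._clients = []
        self._running = False
        self._thread = None

    def _run(self):
        while self._running:
            client = scan()  # assumed blocking scan function
            with self._lock:
                self._clients.append(client['mac'])

    def start(self):
        self._running = True
        self._thread = threading.Thread(target=self._run)
        self._thread.daemon = True
        self._thread.start()

    def stop(self):
        self._running = False
        self._thread.join()

    def get_clients(self):
        # Snapshot of everything seen so far.
        with self._lock:
            return list(self._clients)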
How about something pretty straightforward like:
mymodule.py
import ...
def scanner():
    while True:
        client = scan()
        yield client['mac']
othermodule.py
import mymodule
for mac in mymodule.scanner():
    print mac
If you want something more useful than that, I'd also suggest a background thread as @kindall did.
Two interfaces would be useful:
1. scan() itself, which returns a list of found devices, so that I could call it to get an instantaneous snapshot of available Bluetooth. It might take a max_seconds_to_search or a max_num_to_return parameter.
2. A "notify on found" function that accepts a callback. For instance (maybe typos, I just wrote this off the cuff):
import time

def find_bluetooth(callback_func, time_to_search=5.0):
    already_found = []
    start_time = time.clock()
    while 1:
        if time.clock() - start_time > time_to_search:
            break
        found = scan()
        for entry in found:
            if entry not in already_found:
                callback_func(entry)
                already_found.append(entry)
which would be used by doing this:
def my_callback(new_entry):
    print new_entry  # or something more interesting...

find_bluetooth(my_callback)
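The first interface (the snapshot-style scan()) could be built on the same loop. An untested sketch, using the parameter names suggested above; scan_snapshot is an illustrative name chosen to avoid shadowing the underlying scan():
import time

def scan_snapshot(max_seconds_to_search=5.0, max_num_to_return=None):
    found = []
    start_time = time.clock()
    while time.clock() - start_time < max_seconds_to_search:
        for entry in scan():
            if entry not in found:
                found.append(entry)
                # Stop early once we have collected enough devices.
                if max_num_to_return and len(found) >= max_num_to_return:
                    return found
    return found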
If I get your question, you want scan() in a separate file, so that it can be reused later.
Create utils.py
def scan():
    # write code for scan here.
    pass
Create WiFi.py
import utils
def scan_wifi():
    while True:
        cli = utils.scan()
        # ... do something with cli here ...
