import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib
DBusGMainLoop(set_as_default=True)
dbus_loop = DBusGMainLoop()
loop = GLib.MainLoop()
bus2 = dbus.SystemBus(mainloop=dbus_loop)
bus2_object = bus2.get_object('org.freedesktop.DBus', '/org/freedesktop/DBus')
def handler():
    print("Received")
bus2.add_signal_receiver(handler, "idle", 'org.freedesktop.DBus', None, '/org/freedesktop/DBus')
loop.run()
I have tried going through several examples; this is a small demo program I created to show what I am currently trying, but I get the same error in all of them.
RuntimeError: To make asynchronous calls, receive signals or export objects, D-Bus connections must be attached to a main loop by passing mainloop=... to the constructor or calling dbus.set_default_main_loop(...)
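For reference, this is the attachment pattern the error message refers to, boiled down to a minimal sketch (NameOwnerChanged is used here only because it is a well-known signal on the org.freedesktop.DBus interface; it is not part of the original snippet):
import dbus
from dbus.mainloop.glib import DBusGMainLoop
from gi.repository import GLib

# the default main loop must be set before any Bus object is constructed
DBusGMainLoop(set_as_default=True)

bus = dbus.SystemBus()

def handler(*args):
    print("Received", args)

bus.add_signal_receiver(handler,
                        signal_name="NameOwnerChanged",
                        dbus_interface="org.freedesktop.DBus",
                        path="/org/freedesktop/DBus")

GLib.MainLoop().run()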
So I have been struggling with this one pickle error, which is driving me crazy. I have the following masterEngine class:
import eventlet
import socketio
import multiprocessing
from multiprocessing import Queue
from multi import SIOSerever


class masterEngine:
    if __name__ == '__main__':
        serverObj = SIOSerever()

        try:
            receiveData = multiprocessing.Process(target=serverObj.run)
            receiveData.start()

            receiveProcess = multiprocessing.Process(target=serverObj.fetchFromQueue)
            receiveProcess.start()

            receiveData.join()
            receiveProcess.join()

        except Exception as error:
            print(error)
and I have another file called multi, which looks like the following:
import multiprocessing
from multiprocessing import Queue
import eventlet
import socketio


class SIOSerever:
    def __init__(self):
        self.cycletimeQueue = Queue()

        self.sio = socketio.Server(cors_allowed_origins='*', logger=False)
        self.app = socketio.WSGIApp(self.sio, static_files={'/': 'index.html'})
        self.ws_server = eventlet.listen(('0.0.0.0', 5000))

        @self.sio.on('production')
        def p_message(sid, message):
            self.cycletimeQueue.put(message)
            print("I logged : " + str(message))

    def run(self):
        eventlet.wsgi.server(self.ws_server, self.app)

    def fetchFromQueue(self):
        while True:
            cycle = self.cycletimeQueue.get()
            print(cycle)
As you can see, I am trying to create two processes, one for run and one for fetchFromQueue, which I want to run independently.
My run function starts the python-socketio server, to which I send some data from an HTML web page (this runs perfectly without multiprocessing). I then push the received data onto a Queue so that my other function can retrieve it and work with it.
I have a set of time-consuming operations to carry out on the data received from the socket, which is why I push it all into a Queue.
On running the masterEngine class I receive the following:
Can't pickle <class 'threading.Thread'>: it's not the same object as threading.Thread
I ended!
[Finished in 0.5s]
Can you please help with what I am doing wrong?
From the multiprocessing programming guidelines:
Explicitly pass resources to child processes
On Unix using the fork start method, a child process can make use of a shared resource created in a parent process using a global resource. However, it is better to pass the object as an argument to the constructor for the child process.
Apart from making the code (potentially) compatible with Windows and the other start methods this also ensures that as long as the child process is still alive the object will not be garbage collected in the parent process. This might be important if some resource is freed when the object is garbage collected in the parent process.
Therefore, I slightly modified your example by removing everything unnecessary, but showing an approach where the shared queue is explicitly passed to all processes that use it:
import multiprocessing

MAX = 5


class SIOSerever:
    def __init__(self, queue):
        self.cycletimeQueue = queue

    def run(self):
        for i in range(MAX):
            self.cycletimeQueue.put(i)

    @staticmethod
    def fetchFromQueue(cycletimeQueue):
        while True:
            cycle = cycletimeQueue.get()
            print(cycle)
            if cycle >= MAX - 1:
                break


def start_server(queue):
    server = SIOSerever(queue)
    server.run()


if __name__ == '__main__':
    try:
        queue = multiprocessing.Queue()

        receiveData = multiprocessing.Process(target=start_server, args=(queue,))
        receiveData.start()

        receiveProcess = multiprocessing.Process(target=SIOSerever.fetchFromQueue, args=(queue,))
        receiveProcess.start()

        receiveData.join()
        receiveProcess.join()
    except Exception as error:
        print(error)
0
1
...
python-running-autobahnpython-asyncio-websocket-server-in-a-separate-subproce
can-an-asyncio-event-loop-run-in-the-background-without-suspending-the-python-in
I was trying to solve my issue with the two links above, but I have not managed to.
I get the following error: RuntimeError: There is no current event loop in thread 'Thread-1'.
Here is the code sample (Python 3):
from autobahn.asyncio.wamp import ApplicationSession
from autobahn.asyncio.wamp import ApplicationRunner
from asyncio import coroutine
import time
import threading


class PoloniexWebsocket(ApplicationSession):

    def onConnect(self):
        self.join(self.config.realm)

    @coroutine
    def onJoin(self, details):

        def on_ticker(*args):
            print(args)

        try:
            yield from self.subscribe(on_ticker, 'ticker')
        except Exception as e:
            print("Could not subscribe to topic:", e)


def poloniex_worker():
    runner = ApplicationRunner("wss://api.poloniex.com:443", "realm1")
    runner.run(PoloniexWebsocket)


def other_worker():
    while True:
        print('Thank you')
        time.sleep(2)


if __name__ == "__main__":
    polo_worker = threading.Thread(None, poloniex_worker, None, (), {})
    thank_worker = threading.Thread(None, other_worker, None, (), {})

    polo_worker.start()
    thank_worker.start()

    polo_worker.join()
    thank_worker.join()
So, my final goal is to have two threads launched at the start. Only one needs to use ApplicationSession and ApplicationRunner. Thank you.
A separate thread must have its own event loop. So if poloniex_worker needs to listen to a websocket, it needs its own event loop:
import asyncio

def poloniex_worker():
    # each thread needs its own event loop
    asyncio.set_event_loop(asyncio.new_event_loop())
    runner = ApplicationRunner("wss://api.poloniex.com:443", "realm1")
    runner.run(PoloniexWebsocket)
But if you're on a Unix machine, you will face another error if you try to do this. Autobahn asyncio uses Unix signals, but those Unix signals only work in the main thread. You can simply turn off Unix signals if you don't plan on using them. To do that, you have to go to the file where ApplicationRunner is defined. That is wamp.py in python3.5 > site-packages > autobahn > asyncio on my machine. You can comment out the signal handling section of the code like so:
# try:
#     loop.add_signal_handler(signal.SIGTERM, loop.stop)
# except NotImplementedError:
#     # signals are not available on Windows
#     pass
All this is a lot of work. If you don't absolutely need to run your ApplicationSession in a separate thread from the main thread, it's better to just run the ApplicationSession in the main thread.
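For example, a minimal sketch of that simpler layout, reusing the PoloniexWebsocket class from the question (the daemon flag is just one way of letting the program exit; adjust to taste):
import threading
import time

from autobahn.asyncio.wamp import ApplicationRunner


def other_worker():
    while True:
        print('Thank you')
        time.sleep(2)


if __name__ == "__main__":
    # run the side work in a background thread...
    thank_worker = threading.Thread(target=other_worker, daemon=True)
    thank_worker.start()

    # ...and let ApplicationRunner drive the asyncio event loop in the main thread
    runner = ApplicationRunner("wss://api.poloniex.com:443", "realm1")
    runner.run(PoloniexWebsocket)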
So I am trying to use threads to handle a blocking operation in a Python 3 based application.
#!/usr/bin/env python3
import gi, os, threading, Skype4Py
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib, GObject

skype = Skype4Py.Skype()


def ConnectSkype():
    skype.Attach()


class Contacts_Listbox_Row(Gtk.ListBoxRow):
    def __init__(self, name):
        # super is not a good idea, needs replacement.
        super(Gtk.ListBoxRow, self).__init__()
        self.names = name
        self.add(Gtk.Label(label=name))


class MainInterfaceWindow(Gtk.Window):
    """The Main User UI"""
    def __init__(self):
        Gtk.Window.__init__(self, title="Python-GTK-Frontend")

        # Set up Grid object
        main_grid = Gtk.Grid()
        self.add(main_grid)

        # Create a listbox which will contain selectable contacts
        contacts_listbox = Gtk.ListBox()
        for handle, name in self.GetContactTuples():
            GLib.idle_add(contacts_listbox.add, Contacts_Listbox_Row(name))
        GLib.idle_add(main_grid.add, contacts_listbox)

        # Test label for debug
        label = Gtk.Label()
        label.set_text("Test")
        GLib.idle_add(main_grid.attach_next_to, label, contacts_listbox, Gtk.PositionType.TOP, 2, 1)

    def GetContactTuples(self):
        """
        Returns a list of tuples in the form: (username, display name).
        Return -1 if failure.
        """
        print([(user.Handle, user.FullName) for user in skype.Friends])  # debug
        return [(user.Handle, user.FullName) for user in skype.Friends]


if __name__ == '__main__':
    threads = []
    thread = threading.Thread(target=ConnectSkype)  # potentially blocking operation
    thread.start()
    threads.append(thread)

    main_window = MainInterfaceWindow()
    main_window.connect("delete-event", Gtk.main_quit)
    main_window.show_all()
    print('Calling Gtk.main')
    Gtk.main()
The basic idea is that this simple program should fetch a list of contacts from the Skype API and build a list of tuples. The GetContactTuples function succeeds in its design; the print call I placed verifies that. However, the program hangs indefinitely and never renders an interface. Sometimes it will yield random errors involving threads and/or resource availability. One such error is
(example.py:31248): Gdk-WARNING **: example.py: Fatal IO error 11 (Resource temporarily unavailable) on X server :1.
I know it is related to the use of threads, but based on the documentation here, it seems like just adding GLib.idle_add calls before interface updates should be sufficient. So the questions are, why does this not work, and how could I correct the above sample?
UPDATE:
If GLib.idle_add is prepended to every line that interacts with GTK where it can be, I get a different error.
[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
python: xcb_io.c:179: dequeue_pending_request: Assertion '!xcb_xlib_unknown_req_in_deq' failed.
Aborted (core dumped)
Depending on your library version (this is no longer necessary as of GObject 3.10.2), you might actually need to explicitly initialize threading support using GObject.threads_init(), as below:
if __name__ == '__main__':
    threads = []
    thread = threading.Thread(target=ConnectSkype)  # potentially blocking operation
    thread.start()
    threads.append(thread)

    main_window = MainInterfaceWindow()
    main_window.connect("delete-event", Gtk.main_quit)
    GObject.threads_init()
    main_window.show_all()
    print('Calling Gtk.main')
    Gtk.main()
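Separately, the pattern the PyGObject documentation describes is to do the blocking work in the background thread and hand every widget update back to the GTK main loop with GLib.idle_add. A rough standalone sketch of that shape (fetch_names_somehow is a hypothetical stand-in for the slow Skype lookup, not a real API):
import threading

import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk, GLib


def add_row(listbox, name):
    # runs inside the GTK main loop, so touching widgets is safe here
    listbox.add(Gtk.Label(label=name))
    listbox.show_all()
    return False  # one-shot idle callback


def worker(listbox):
    # runs in the background thread: do the slow, blocking work here
    names = fetch_names_somehow()  # hypothetical blocking call
    for name in names:
        # hand each widget update back to the main loop
        GLib.idle_add(add_row, listbox, name)


window = Gtk.Window(title="idle_add sketch")
listbox = Gtk.ListBox()
window.add(listbox)
window.connect("delete-event", Gtk.main_quit)
window.show_all()

threading.Thread(target=worker, args=(listbox,), daemon=True).start()
Gtk.main()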
I'm automating Minitab 17 using Python's win32com library, and while all of the commands execute correctly, I can't seem to get the Mtb.exe process started by the script to exit when my script ends. My structure looks like
from myapi import get_data

import pythoncom
from win32com.client import gencache


def process_data(data):
    # In case of threading
    pythoncom.CoInitialize()

    app = gencache.EnsureDispatch('Mtb.Application')

    try:
        # do some processing
        pass
    finally:
        # App-specific command that is supposed to close the software
        app.Quit()

        # Ensure the object is released
        del app

        # In case of threading
        pythoncom.CoUninitialize()


def main():
    data = get_data()
    process_data(data)


if __name__ == '__main__':
    main()
I don't get any exceptions raised or error messages printed, but the Mtb.exe process is still listed in the task manager. Even more frustrating, if I run the following in an IPython session:
>>> from win32com.client import gencache
>>> app = gencache.EnsureDispatch('Mtb.Application')
>>> ^D
The Minitab process is closed immediately. I observe the same behavior in a normal python interactive session. Why would the process get closed correctly when running in an interactive session but not in a standalone script? What is done differently there that isn't being performed in my script?
I've also tried running process_data in a threading.Thread and in a multiprocessing.Process with no luck.
EDIT:
If I have a script containing nothing but
from win32com.client import gencache
app = gencache.EnsureDispatch('Mtb.Application')
then when I run it I see the Mtb.exe process in task manager, but once the script exits the process is killed. So instead my question is why does it matter if this COM object is declared at top-level vs. inside a function?
I don't have Minitab so I can't verify, but try forcing a shutdown of the COM server by setting app = None just after the call to app.Quit(). Python uses reference counting to manage object life cycles, so assuming there are no other references to app, setting it to None should cause it to be finalized immediately. I have seen that cause similar issues. You should not need a weak reference; something else is going on. The following, based on your answer, should work:
def process_data(mtb, data):
    try:
        mtb.do_something(data)
    finally:
        mtb.Quit()


def main(mtb):
    data = get_data()
    process_data(mtb, data)


if __name__ == '__main__':
    pythoncom.CoInitialize()
    mtb = gencache.EnsureDispatch('Mtb.Application')

    main(mtb)

    mtb.Quit()
    mtb = None

    pythoncom.CoUninitialize()
The problem was that the garbage collector couldn't clean up the reference to the underlying IUnknown object (the base type for all COM objects), and without the GC doing its job the process stayed alive. I solved the problem by using the weakref module to immediately wrap the COM object in a weak reference so it could be more easily dereferenced:
from myapi import get_data

import weakref
from win32com.client import gencache
import pythoncom


def process_data(mtb_ref, data):
    try:
        mtb_ref().do_something(data)
    finally:
        mtb_ref().Quit()


def main(mtb_ref):
    data = get_data()
    process_data(mtb_ref, data)


if __name__ == '__main__':
    pythoncom.CoInitialize()

    mtb_ref = weakref.ref(gencache.EnsureDispatch('Mtb.Application'))

    main(mtb_ref)

    pythoncom.CoUninitialize()
I'm not sure I understand fully why this makes a difference, but I believe it's because there's never a direct reference to the object, only a weak reference, so all the functions that use the COM object only do so indirectly, allowing the GC to know that the object can be collected sooner. For whatever reason it still needs to be created at the top level of the module, but this at least makes it possible for me to write more reusable code that cleanly exits.
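As an aside on the mechanism being relied on here, a tiny generic illustration of how a weak reference behaves (plain Python, nothing to do with win32com):
import weakref


class Resource:
    pass


r = Resource()
ref = weakref.ref(r)

print(ref() is r)  # True: the target is still alive through the strong reference r
del r              # drop the only strong reference
print(ref())       # None: the weak reference did not keep the object alive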
After pythoncom.CoUninitialize() I still see the process.
For me, the following helped (based):
import logging

from comtypes.automation import IDispatch
from ctypes import c_void_p, cast, POINTER, byref

logger = logging.getLogger(__name__)


# note: apparently lifted from a class, hence the unused self parameter
def release_reference(self, obj):
    logger.debug("release com object")
    oleobj = obj._oleobj_
    # recover the raw IDispatch address from the PyIDispatch repr
    addr = int(repr(oleobj).split()[-1][2:-1], 16)
    pointer = POINTER(IDispatch)()
    cast(byref(pointer), POINTER(c_void_p))[0] = addr
    pointer.Release()
I have written this little script to show the track currently playing in xmms2 in a notification widget, using the xmms client library and pynotify. When I run it I can see the widget pop up with the current artist and title obtained via xmmsclient methods.
Can anybody give some hints on how to detect a track change, so the notification fires automatically without having to run the script manually?
You connect the client library to a main loop and register as a listener via the broadcast_playback_current_id method. If you want the currently playing id when the script starts as well, you can call the playback_current_id method.
Here is a small adaptation of tutorial6 in the xmms2-tutorial.git repository, which uses the GLib MainLoop to drive the connection:
import xmmsclient
import xmmsclient.glib
import os
import sys
import gobject


def cb(result):
    if not result.is_error():
        print "Current: %(artist)s - %(title)s" % result.value()

ml = gobject.MainLoop(None, False)

xc = xmmsclient.XMMS("stackoverflow")
xc.connect()

conn = xmmsclient.glib.GLibConnector(xc)
xc.broadcast_playback_current_id(lambda r: xc.medialib_get_info(r.value(), cb))

ml.run()
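If you also want the notification for the track that is already playing when the script starts, as mentioned above, you could additionally request the current id once before ml.run(); something along these lines (a sketch, assuming the one-shot call accepts a callback the same way the broadcast call does):
# one-off request for the id playing right now, reusing the same callback chain
xc.playback_current_id(lambda r: xc.medialib_get_info(r.value(), cb))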