Emitting Signals on dbus using Python-dbus

I want a simple script to enable or disable an external monitor on my netbook. I am running Fedora 17 with XFCE as my desktop. It looks like I should be able to use Python and python-dbus to toggle the Active setting on and off. My problem is that I can't figure out how to emit a signal to get the new setting to go active. Unfortunately, Python is not a language I use often. The code I have in place is:
import dbus
item = 'org.xfce.Xfconf'
path = '/org/xfce/Xfconf'
channel = 'displays'
base = '/'
setting = '/Default/VGA1/Active'
bus = dbus.SessionBus()
remote_object = bus.get_object(item, path)
remote_interface = dbus.Interface(remote_object, "org.xfce.Xfconf")
if remote_interface.GetProperty(channel, setting):
    remote_interface.SetProperty(channel, setting, '0')
    remote_object.PropertyChanged(channel, setting, '0')
else:
    remote_interface.SetProperty(channel, setting, '1')
    remote_object.PropertyChanged(channel, setting, '0')
It is failing and kicking out:
Traceback (most recent call last):
  File "./vgaToggle", line 31, in <module>
    remote_object.PropertyChanged(channel, setting, '0')
  File "/usr/lib/python2.7/site-packages/dbus/proxies.py", line 140, in __call__
    **keywords)
  File "/usr/lib/python2.7/site-packages/dbus/connection.py", line 630, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.UnknownMethod: Method "PropertyChanged" with signature "sss" on interface "(null)" doesn't exist
I spent a bit of time searching and I am not finding many python examples doing anything close to this. Thanks in advance.

PropertyChanged is a signal, not a method. The services you are communicating with are responsible for emitting signals; in this case, PropertyChanged should fire automatically whenever the value of a property on the respective object or interface changes.
That should happen implicitly when you call remote_interface.SetProperty(...), so you should not need to explicitly "call" the signal like a method.
If you are interested in receiving the signals, you will need to set up a glib main loop and call connect_to_signal on your proxy object, passing it a callback method to invoke.
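A sketch of both halves (assuming python-dbus, a running xfconfd, and the channel/property names from the question; `toggled`, `toggle_vga_active`, and `watch_property_changes` are hypothetical helper names). SetProperty alone should be enough to make xfconfd emit PropertyChanged; the last function shows how a client would receive the signal with connect_to_signal and a GLib main loop:

```python
def toggled(value):
    # Pure helper: compute the flipped value of a boolean-ish xfconf property.
    return not bool(value)

def toggle_vga_active():
    # Sketch only: requires python-dbus and a running xfconfd session daemon.
    import dbus
    bus = dbus.SessionBus()
    proxy = bus.get_object('org.xfce.Xfconf', '/org/xfce/Xfconf')
    xfconf = dbus.Interface(proxy, 'org.xfce.Xfconf')
    current = xfconf.GetProperty('displays', '/Default/VGA1/Active')
    # SetProperty is enough: xfconfd itself emits PropertyChanged afterwards.
    xfconf.SetProperty('displays', '/Default/VGA1/Active', toggled(current))

def watch_property_changes():
    # Sketch only: how a client would *receive* the signal.
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SessionBus()
    proxy = bus.get_object('org.xfce.Xfconf', '/org/xfce/Xfconf')

    def on_property_changed(channel, prop, value):
        print('changed:', channel, prop, value)

    proxy.connect_to_signal('PropertyChanged', on_property_changed,
                            dbus_interface='org.xfce.Xfconf')
    GLib.MainLoop().run()
```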


Inherit class Worker on Odoo15

In one of my Odoo installations I need to set the socket_timeout variable of the WorkerHTTP class directly from Python code, bypassing the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
If you never read about it, you can check here for more info: https://github.com/odoo/odoo/commit/49e3fd102f11408df00f2c3f6360f52143911d74#diff-b4207a4658979fdb11f2f2fa0277f483b4e81ba59ed67a5e84ee260d5837ef6d
In Odoo 15, which I'm using, the Worker classes are located at odoo/service/server.py.
My idea was to inherit the constructor of the Worker class and simply set self.sock_timeout = 10 (or another value), but I can't make it work with inheritance.
EDIT: I almost managed to make it work, but I have problems with static methods.
STEP 1:
Inherit the WorkerHTTP constructor and add self.sock_timeout = 10.
Then, I also have to inherit PreforkServer and override the process_spawn() method so I can pass WorkerHttpExtend instead of WorkerHTTP as the argument to worker_spawn().
class WorkerHttpExtend(WorkerHTTP):
    """Set up the sock_timeout class variable when a WorkerHTTP object gets initialized."""
    def __init__(self, multi):
        super(WorkerHttpExtend, self).__init__(multi)
        self.sock_timeout = 10
        logging.info(f'SOCKET TIMEOUT: {self.sock_timeout}')

class PreforkServerExtend(PreforkServer):
    """I have to inherit PreforkServer and override the process_spawn()
    method so I can pass WorkerHttpExtend
    instead of WorkerHTTP as the argument for the worker_spawn() method.
    """
    def process_spawn(self):
        if config['http_enable']:
            while len(self.workers_http) < self.population:
                self.worker_spawn(WorkerHttpExtend, self.workers_http)
            if not self.long_polling_pid:
                self.long_polling_spawn()
        while len(self.workers_cron) < config['max_cron_threads']:
            self.worker_spawn(WorkerCron, self.workers_cron)
STEP 2:
The start() function should initialize the server with PreforkServerExtend, not PreforkServer (last line in the code below). This is where I start to have problems.
def start(preload=None, stop=False):
    """Start the odoo http server and cron processor."""
    global server
    load_server_wide_modules()
    if odoo.evented:
        server = GeventServer(odoo.service.wsgi_server.application)
    elif config['workers']:
        if config['test_enable'] or config['test_file']:
            _logger.warning("Unit testing in workers mode could fail; use --workers 0.")
        server = PreforkServer(odoo.service.wsgi_server.application)
STEP 3:
At this point, if I want to go further (which I did), I have to copy the whole start() method and import every package it needs to make it work:
import odoo
from odoo.service.server import WorkerHTTP, WorkerCron, PreforkServer, load_server_wide_modules, \
    GeventServer, _logger, ThreadedServer, inotify, FSWatcherInotify, watchdog, FSWatcherWatchdog, _reexec
from odoo.tools import config
I did that, and then in my custom start() method I wrote the line
server = PreforkServerExtend(odoo.service.wsgi_server.application)
but even then, how do I tell Odoo to execute my start() method instead of the original one?
I'm sure this would eventually work (maybe not safely, but it would work), because at one point, not being 100% sure of what I was doing, I put my inherited classes WorkerHttpExtend and PreforkServerExtend directly into the original odoo/service/server.py and initialized the server object with PreforkServerExtend instead of PreforkServer:
server = PreforkServerExtend(odoo.service.wsgi_server.application)
It works then: I get the custom socket timeout value and the logging output when the Odoo service starts, because PreforkServerExtend calls the custom classes in cascade at that point; otherwise my inherited classes are there but never get called.
So I guess that if I could tell the system to run my start() method, I would be done.
STEP 4 (not reached yet):
I'm pretty sure that start() method is called in odoo/cli/server.py, in main() method:
rc = odoo.service.server.start(preload=preload, stop=stop)
I could go deeper, but I don't think the effort is worth it for what I need.
So technically, if I were able to tell the system which start() method to choose, I would have done it. I'm still not sure it is a safe procedure (probably not, but at this point I was just experimenting), and I wonder whether there is an easier way to set the socket timeout without using the environment variable ODOO_HTTP_SOCKET_TIMEOUT.
I'm pretty sure there is an easier method than what I'm doing, with low-level Python or maybe even with a class in odoo/service/server.py, but I can't figure it out for now. If someone has an idea, let me know!
Working solution: I was introduced to monkey-patching in this post:
Possible for a class to look down at subclass constructor?
That solved my problem; I'm now able to patch the process_request method of the WorkerHTTP class:
import errno
import fcntl
import socket

import odoo
import odoo.service.server as srv

class WorkerHttpProcessRequestPatch(srv.WorkerHTTP):
    def process_request(self, client, addr):
        client.setblocking(1)
        # client.settimeout(self.sock_timeout)
        client.settimeout(10)  # patching timeout setup to a needed value
        client.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        flags = fcntl.fcntl(client, fcntl.F_GETFD) | fcntl.FD_CLOEXEC
        fcntl.fcntl(client, fcntl.F_SETFD, flags)
        self.server.socket = client
        try:
            self.server.process_request(client, addr)
        except IOError as e:
            if e.errno != errno.EPIPE:
                raise
        self.request_count += 1

# Switch the process_request class attribute - this is what I needed to make it work
odoo.service.server.WorkerHTTP.process_request = WorkerHttpProcessRequestPatch.process_request
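The reason this works is that the assignment rebinds process_request on the original class object, so every place that instantiates WorkerHTTP picks up the patched behaviour without touching start() at all. A stripped-down illustration of the mechanism, with toy classes rather than Odoo's:

```python
class Worker:
    # Stand-in for odoo.service.server.WorkerHTTP (toy class, not Odoo's).
    def process_request(self):
        return 'timeout=2'  # original, hard-coded behaviour

class WorkerPatch(Worker):
    def process_request(self):
        return 'timeout=10'  # the behaviour we want instead

# Rebind the attribute on the *original* class, exactly as done
# above with odoo.service.server.WorkerHTTP.process_request.
Worker.process_request = WorkerPatch.process_request

# Code that instantiates Worker directly now runs the patched method.
print(Worker().process_request())
```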

redis Python catching exception that doesn't inherit from BaseException

We're seeing exceptions in our log like the following:
ERROR Exception ignored in: <function Connection.__del__ at 0x7f9b70a5cc20>
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python3.7/site-packages/redis/connection.py", line 537, in __del__
  File "/app/.heroku/python/lib/python3.7/site-packages/redis/connection.py", line 667, in disconnect
TypeError: catching classes that do not inherit from BaseException is not allowed
According to the Redis source code, the offending line is the except in the following snippet:
try:
    if os.getpid() == self.pid:
        shutdown(self._sock, socket.SHUT_RDWR)
    self._sock.close()
except socket.error:
    pass
Which would indicate that socket.error doesn't inherit from BaseException. However, as far as I can tell (based on the docs and the mro class method), socket.error does inherit from BaseException.
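The ancestry is easy to confirm; on Python 3, socket.error is just an alias of OSError:

```python
import socket

# On Python 3.3+, socket.error is an alias of OSError,
# which inherits from Exception and BaseException.
print(socket.error.__mro__)
assert issubclass(socket.error, BaseException)
```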
Why is this happening? What can I do to prevent it?
By the way, our code doesn't call Redis directly. We are using Redis Queue (rq), which is implemented using Redis.
This would happen if you didn't close the redis client explicitly; that's why you see __del__ in the traceback. During interpreter shutdown, module globals (such as the socket module referenced in that except clause) may already have been torn down, so the lookup in except socket.error no longer yields an exception class.
I'm not using rq but I'll take celery as an example, yet the idea could also be applied to rq.
# tasks/__init__.py
from celery.signals import worker_shutdown
from celeryapp import redis_client

@worker_shutdown.connect  # this signal fires when the worker is about to shut down
def cleanup(**kwargs):
    redis_client.close()  # without this you may see the error

# celeryapp.py
from celery import Celery
import redis

app = Celery(config_source="celeryconfig")
redis_client = redis.StrictRedis()

Gui with button to open port scanner

I am making a GUI with tkinter with a button that runs a port scan. I have a port-scanner script that works correctly on its own, and I've managed to launch it from the button on the GUI, but then I receive an error that I don't get when running the port scanner alone.
Exception in Tkinter callback
Traceback (most recent call last):
  File "C:\Users\Steve\AppData\Local\Programs\Python\Python35-32\lib\tkinter\__init__.py", line 1550, in __call__
    return self.func(*args)
  File "<string>", line 51, in Scan
NameError: name 'IP_Input' is not defined
My code:
class CallWrapper:
    """Internal class. Stores function to call when some user
    defined Tcl function is called e.g. after an event occurred."""
    def __init__(self, func, subst, widget):
        """Store FUNC, SUBST and WIDGET as members."""
        self.func = func
        self.subst = subst
        self.widget = widget
    def __call__(self, *args):
        """Apply first function SUBST to arguments, than FUNC."""
        try:
            if self.subst:
                args = self.subst(*args)
            return self.func(*args)  # THIS IS THE ERROR #
        except SystemExit:
            raise
        except:
            self.widget._report_exception()

class XView:
    """Mix-in class for querying and changing the horizontal position
    of a widget's window."""
    def xview(self, *args):
        """Query and change the horizontal position of the view."""
        res = self.tk.call(self._w, 'xview', *args)

This is the code that follows, containing the line 51 error:

def Scan():
    print('Scan Called.')  # Debugging
    IP = str(IP_Input.get(0.0, tkinter.END))  # THIS IS ERROR LINE 51 #
    print(IP)  # Debugging
    Start = int(PortS.get(0.0, tkinter.END))
    End = int(PortE.get(0.0, tkinter.END))
    TestSocket = socket.socket()
    CurrentPort = Start
    OpenPorts = 0
    print('Starting scan...')
    HowFar = int(CurrentPort / End * 100)
    ProgText = HowFar, r'%'
    Label1.config(text=('Percentage Done:', ProgText))
The problem is with your exec statement. You're opening your other .py file named port_scanner.py and calling exec(open("./port scanner.py").read()).
This just isn't going to work.
Why this doesn't work:
When you do exec(open("path to .py file").read()), exec is of course executing this code, but the problem is that the global variables defined in that file aren't within scope.
So, to make this work (which I don't recommend) you'd have to use:
exec(open(path).read(), globals())
From the documentation
If the globals dictionary does not contain a value for the key __builtins__, a reference to the dictionary of the built-in module builtins is inserted under that key. That way you can control what builtins are available to the executed code by inserting your own __builtins__ dictionary into globals before passing it to exec().
If you really want to call your file this way then you should just use os.system.
Alternative approach:
You really don't need to call your file this way; as it stands you have two instances of Tk() running. If you need another window, a widget is provided for exactly this purpose: the Toplevel widget. You can restructure your code so that the button click creates a Toplevel instance containing the port-scanner app. For example, build the port-scanner app around a Toplevel widget (in your other file if you wish), then import the app into your main file and have the button click initialize it.
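A minimal sketch of that approach (widget names here are hypothetical; the real port-scanner widgets would live in the new window):

```python
import tkinter

def open_scanner(parent):
    # Create a second window owned by the existing root,
    # instead of starting a second Tk() instance.
    win = tkinter.Toplevel(parent)
    win.title('Port Scanner')
    tkinter.Label(win, text='IP address:').pack()
    ip_input = tkinter.Entry(win)
    ip_input.pack()
    return win

def main():
    root = tkinter.Tk()
    tkinter.Button(root, text='Open scanner',
                   command=lambda: open_scanner(root)).pack()
    root.mainloop()

if __name__ == '__main__':
    main()
```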
Additional Notes:
You're running a while loop, and if it runs for any noticeable amount of time it will block the GUI's main event loop and cause your GUI to "hang".
Your first guess should not be that a part of the widely tested and used Python standard library is flawed. The problem is (99.9% of the time)
while True:
    print("In your own code.")
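One common way to keep long-running work like a port scan from freezing the window is to do one small step at a time and reschedule it with the widget's after() method instead of looping (names here are hypothetical):

```python
import tkinter

def percent_done(port, end):
    # Pure helper so the progress maths is testable on its own.
    return port * 100 // end

def scan_step(root, label, port, end):
    # Do one small unit of work, then reschedule with after();
    # the event loop stays responsive between steps.
    label.config(text='Percentage done: %d%%' % percent_done(port, end))
    if port < end:
        root.after(10, scan_step, root, label, port + 1, end)

def main():
    root = tkinter.Tk()
    label = tkinter.Label(root, text='Idle')
    label.pack()
    scan_step(root, label, 1, 1024)
    root.mainloop()

if __name__ == '__main__':
    main()
```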

python: httplib.CannotSendRequest when nesting threaded SimpleXMLRPCServers

I am intermittently receiving a httplib.CannotSendRequest exception when using a chain of SimpleXMLRPCServers that use the SocketServer.ThreadingMixin.
What I mean by 'chain' is the following:
I have a client script which uses xmlrpclib to call a function on a SimpleXMLRPCServer. That server, in turn, calls another SimpleXMLRPCServer. I realise how convoluted that sounds, but there are good reasons that this architecture has been selected, and I don't see a reason it shouldn't be possible.
(testclient)client_script ---calls-->
(middleserver)SimpleXMLRPCServer ---calls--->
(finalserver)SimpleXMLRPCServer --- does something
If I do not use SocketServer.ThreadingMixin then this issue doesn't occur (but I need the requests to be multi-threaded so this doesn't help.)
If I only have a single level of services (ie just client script calling final server directly) this doesn't happen.
I have been able to reproduce the issue in the simple test code below. There are three snippets:
finalserver:
import SocketServer
import time
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
class AsyncXMLRPCServer(SocketServer.ThreadingMixIn,SimpleXMLRPCServer): pass
# Create server
server = AsyncXMLRPCServer(('', 9999), SimpleXMLRPCRequestHandler)
server.register_introspection_functions()
def waste_time():
    time.sleep(10)
    return True
server.register_function(waste_time, 'waste_time')
server.serve_forever()
middleserver:
import SocketServer
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
import xmlrpclib
class AsyncXMLRPCServer(SocketServer.ThreadingMixIn,SimpleXMLRPCServer): pass
# Create server
server = AsyncXMLRPCServer(('', 8888), SimpleXMLRPCRequestHandler)
server.register_introspection_functions()
s = xmlrpclib.ServerProxy('http://localhost:9999')
def call_waste():
    s.waste_time()
    return True
server.register_function(call_waste, 'call_waste')
server.serve_forever()
testclient:
import xmlrpclib
s = xmlrpclib.ServerProxy('http://localhost:8888')
print s.call_waste()
To reproduce, the following steps should be used:
Run python finalserver.py
Run python middleserver.py
Run python testclient.py
While (3) is still running, run another instance of python testclient.py
Quite often (almost every time) you will get the error below the first time you try to run step 4. Interestingly, if you immediately try to run step (4) again the error will not occur.
Traceback (most recent call last):
  File "testclient.py", line 6, in <module>
    print s.call_waste()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1224, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1578, in __request
    verbose=self.__verbose
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1264, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1297, in single_request
    return self.parse_response(response)
  File "/usr/lib64/python2.7/xmlrpclib.py", line 1473, in parse_response
    return u.close()
  File "/usr/lib64/python2.7/xmlrpclib.py", line 793, in close
    raise Fault(**self._stack[0])
xmlrpclib.Fault: <Fault 1: "<class 'httplib.CannotSendRequest'>:">
The internet appears to say that this exception can be caused by multiple calls to httplib.HTTPConnection.request without intervening getresponse calls. However, the internet doesn't discuss this in the context of SimpleXMLRPCServer. Any pointers in the direction of resolving the httplib.CannotSendRequest issue would be appreciated.
===========================================================================================
ANSWER:
Okay, I'm a bit stupid. I think I stared at the code for so long that I missed the obvious solution staring me in the face (quite literally, because the answer is actually in the question itself).
Basically, CannotSendRequest occurs when an httplib.HTTPConnection is interrupted by an intervening request operation. Each httplib.HTTPConnection.request must be paired with a .getresponse() call; if that pairing is interrupted by another request, the second request produces the CannotSendRequest error. So:
connection = httplib.HTTPConnection(...)
connection.request(...)
connection.request(...)
will fail because you have two requests on the same connection before any getresponse is called.
Linking that back to my question:
the only place in the three programs where such connections are being made are in the serverproxy calls.
the problem only occurs during threading, so it's likely a race condition.
the only place a serverproxy call is shared is in middleserver.py
The solution, then, is obviously to have each thread create its own serverproxy. The fixed version of middleserver is below, and it works:
import SocketServer
from SimpleXMLRPCServer import SimpleXMLRPCServer
from SimpleXMLRPCServer import SimpleXMLRPCRequestHandler
import xmlrpclib
class AsyncXMLRPCServer(SocketServer.ThreadingMixIn,SimpleXMLRPCServer): pass
# Create server
server = AsyncXMLRPCServer(('', 8888), SimpleXMLRPCRequestHandler)
server.register_introspection_functions()
def call_waste():
    # Each call to this function creates its own serverproxy.
    # If this function is called by concurrent threads, each thread
    # will safely have its own serverproxy.
    s = xmlrpclib.ServerProxy('http://localhost:9999')
    s.waste_time()
    return True
server.register_function(call_waste, 'call_waste')
server.serve_forever()
Since this version results in each thread having its own xmlrpclib.serverproxy, there is no risk of the same instance of serverproxy invoking HTTPConnection.request more than once in succession. The programs work as intended.
Sorry for the bother.

How can I debug pserve using Eclipse?

I'm getting started with Pyramid development on Windows. I have Python 2.7 installed. I used virtualenv to create a nice sandbox for my Pyramid app, and I installed PyDev 2.4 on Eclipse Indigo. I also created a separate PyDev interpreter just for my virtualenv, so it should have access to all the directories.
I set up a new debug configuration.
Project: testapp (the only project in the workspace)
Main module: ${workspace_loc:testapp/Scripts/pserve-script.py}
Args: development.ini
Working dir: Other: ${workspace_loc:testapp/testapp}
When I hit Debug, the output is:
pydev debugger: starting Starting server in PID 2208.
Unhandled exception in thread started by
Traceback (most recent call last):
File "C:\Tools\eclipse-cpp-indigo-SR1-incubation-win32-x86_64\eclipse\plugins\org.python.pydev.debug_2.3.0.2011121518\pysrc\pydevd.py", line 200, in __call__ Unhandled exception in thread started by
Traceback (most recent call last):
Unhandled exception in thread started by
Traceback (most recent call last):
File "C:\Tools\eclipse-cpp-indigo-SR1-incubation-win32-x86_64\eclipse\plugins\org.python.pydev.debug_2.3.0.2011121518\pysrc\pydevd.py", line 200, in __call__ self.original_func(*self.args, **self.kwargs)
Unhandled exception in thread started by
File "C:\Tools\eclipse-cpp-indigo-SR1-incubation-win32-x86_64\eclipse\plugins\org.python.pydev.debug_2.3.0.2011121518\pysrc\pydevd.py", line 200, in __call__
TypeErrorTraceback (most recent call last):
self.original_func(*self.args, **self.kwargs) :
File "C:\Tools\eclipse-cpp-indigo-SR1-incubation-win32-x86_64\eclipse\plugins\org.python.pydev.debug_2.3.0.2011121518\pysrc\pydevd.py", line 200, in __call__ self.original_func(*self.args, **self.kwargs)
TypeErrorThreadedTaskDispatcher object argument after ** must be a mapping, not tuple
TypeError: self.original_func(*self.args, **self.kwargs) : ThreadedTaskDispatcher object argument after ** must be a mapping, not tuple
TypeErrorThreadedTaskDispatcher object argument after ** must be a mapping, not tuple :
ThreadedTaskDispatcher object argument after ** must be a mapping, not tuple
serving on http://0.0.0.0:6543
Even though it says the server is running, it's not. Nothing is listening on that port.
Any idea on how to fix this? Debugging certainly isn't necessary, but I like having a fully set up development environment. Thanks!
Pyramid includes remarkably good debug support in the form of the debug toolbar.
Make sure the line
pyramid.includes = pyramid_debugtoolbar
in your development.ini isn't commented out, to enable it. It doesn't support Eclipse breakpoints, but it gives almost everything else you'd want.
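For reference, the relevant part of a scaffold-generated development.ini looks roughly like this (surrounding settings vary by scaffold version):

```ini
[app:main]
use = egg:testapp

pyramid.reload_templates = true
pyramid.debug_authorization = false
pyramid.includes =
    pyramid_debugtoolbar
```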
I haven't run into that error, but in hard-to-debug environments the remote debugger (http://pydev.org/manual_adv_remote_debugger.html) can usually be used. That way it works somewhat like pdb: you add code to set a breakpoint, so until that point your program runs as usual.
Pyramid's pserve seems to use multiple threads, as Fabio suggests might be the case. I found I could make breakpoints work by monkey-patching the ThreadedTaskDispatcher before invoking pserve:
# Allow attaching PyDev to the web app
import sys; sys.path.append('..../pydev/2.5.0-2/plugins/org.python.pydev.debug_2.4.0.201208051101/pysrc/')

# Monkey patch the thread task dispatcher, so it sets up the tracer in the worker threads
from waitress.task import ThreadedTaskDispatcher

_prev_start_new_thread = ThreadedTaskDispatcher.start_new_thread

def start_new_thread(ttd, fn, args):
    def settrace_and_call(*args, **kwargs):
        import pydevd; pydevd.settrace(suspend=False)
        return fn(*args, **kwargs)
    from thread import start_new_thread
    start_new_thread(settrace_and_call, args)

ThreadedTaskDispatcher.start_new_thread = start_new_thread
Note, I also tried:
set_trace(..., trace_only_current_thread=False)
But this either makes the app unusably slow, or doesn't work for some other reason.
Having done the above, the app will automatically register itself with a PyDev debug server running locally when it runs. See:
http://pydev.org/manual_adv_remote_debugger.html
