Python Pika blocking connection in a dedicated class - callback not being called

When I run a simple app (producer/consumer), everything works: it connects to RabbitMQ, consumes messages, etc.
When I run the code below inside a class (so it doesn't block the Flask app), the callback is not being called.
I added some prints and everything seems fine during connection initialization.
The messages are consumed (the queue is being emptied), but the callback is never even called.
Any idea?
TIA
Here is the code:
currentprices = {"1": 5.5, "2": 3.5}

def initialparams():  # called from outside
    print('S-init')
    RMQ = RMQPriceListener()
    RMQ.create_currentprices_channel()
    print('F-init')

class RMQPriceListener():
    def create_currentprices_channel(self):
        try:
            credential_params = pika.PlainCredentials('un', 'pw')
            connection_params = pika.ConnectionParameters(
                host='127.0.0.1', port=5555,
                credentials=credential_params)  # I use port 5555
            connection = pika.BlockingConnection(connection_params)
            channel = connection.channel()
            channel.queue_declare(queue='formatedrates')
            print('c0')  # printed
            channel.basic_consume(queue='formatedrates',
                                  on_message_callback=self.callback,
                                  auto_ack=True)  # without this (above) self,
                                                  # the app gets stuck here (which is logical)
            print('c1')  # printed
            channel.start_consuming()  # works fine and drains the queue
            return 'lost connection to RMQ'
        except:
            return 'No connection to RMQ'

** the problematic function - works fine in a standalone app, but not here inside the class **

    def callback(ch, method, properties, msg):
        print(f'msg={msg}')   # never printed
        global currentprices  # has an initial value, is read by the
                              # getcurrentrates method below, and never changes
        currentprices = msg   # never updated
        print(currentprices)  # never printed

def getcurrentrates():
    print('get' + str(currentprices))  # printed
    print(type(currentprices))         # printed - 'dict'
    return currentprices               # returned with initial value

Since you register the callback as on_message_callback=self.callback, you have to define the callback function as an instance method with self as its first parameter, as below:
def callback(self, ch, method, properties, msg):
    ....
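For reference, here is a minimal sketch of the corrected consumer class (reusing the credentials, port, and queue name from the question; error handling omitted for brevity):

import pika

currentprices = {"1": 5.5, "2": 3.5}

class RMQPriceListener():
    def create_currentprices_channel(self):
        credentials = pika.PlainCredentials('un', 'pw')
        params = pika.ConnectionParameters(host='127.0.0.1', port=5555,
                                           credentials=credentials)
        connection = pika.BlockingConnection(params)
        channel = connection.channel()
        channel.queue_declare(queue='formatedrates')
        channel.basic_consume(queue='formatedrates',
                              on_message_callback=self.callback,
                              auto_ack=True)
        channel.start_consuming()

    # with self first, pika's four arguments bind to ch, method, properties, msg
    def callback(self, ch, method, properties, msg):
        global currentprices
        currentprices = msg
        print(f'msg={msg}')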


Do I need to explicitly del this object?

class Tokenizer():
    def __init__(self):
        self.name = 'MyTokenizer'
        self.tokenizer = Language.create_tokenizer(nlp)

    def __call__(self, text):
        if text:
            with CoreClient(timeout=60000) as client:
                doc = client.annotate(text, output_format='json')
        else:
            doc = Document("")
        ...
The question I have is with the creation of 'CoreClient', which makes an HTTP request to a server. The current code, wrapped in "with ... as client", ensures that the client is destroyed once 'client.annotate' has finished. The problem, however, is that a new 'client' object has to be created for every 'text' being processed. To avoid this, I would rather create the object in the __init__ method:
self.client = CoreClient(timeout=60000)
But then:
1) How do I destroy the 'client' after all requests have been completed? OR
2) Is the current way of creating a CoreClient for each request OK? Creating the object is heavy and requires a lot of initialization.
EDIT:
def __enter__(self):
    self.start()
    return self

def start(self):
    if self.start_cmd:
        if self.be_quiet:
            # Issue #26: subprocess.DEVNULL isn't supported in python 2.7.
            stderr = open(os.devnull, 'w')
        else:
            stderr = self.stderr
        print(f"Starting server with command: {' '.join(self.start_cmd)}")
        self.server = subprocess.Popen(self.start_cmd,
                                       stderr=stderr,
                                       stdout=stderr)
To make it clearer, I added the implementation of the __enter__ method. It seems to simply return the object 'self'.
You only need to create the instance of CoreClient once. The with statement just ensures that the __enter__ and __exit__ methods of that instance are called before and after the body of the with statement; you don't need to create a new instance each time.
class Tokenizer():
    def __init__(self):
        self.name = 'MyTokenizer'
        self.tokenizer = Language.create_tokenizer(nlp)
        self.client = CoreClient(timeout=60000)  # Create client here

    def __call__(self, text):
        if text:
            with self.client:
                doc = self.client.annotate(text, output_format='json')
        else:
            doc = Document("")
It appears that __enter__ and __exit__ together spin up and tear down a new server each time the CoreClient instance is used as a context manager.
The client will be collected when the Tokenizer instance gets collected. However, unless you are in an active with statement, the CoreClient instance isn't doing anything.
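For instance (a sketch reusing the answer's Tokenizer, with CoreClient and Document assumed to behave as in the question), the single client is shared across calls:

tokenizer = Tokenizer()
doc1 = tokenizer("First text to annotate.")   # enters/exits the one shared client
doc2 = tokenizer("Second text to annotate.")  # same client instance, no new CoreClient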
In this case I wouldn't worry about it, because when the reference count goes to zero Python will take care of it. Also, del does not actually delete an object. It might, but it might not. del decrements the reference count of an object.
Take this for example:
In [1]: class Test:
   ...:     def __del__(self):
   ...:         print('deleted')
   ...:
In [2]: t = Test()
In [3]: del t
deleted
In [4]: t = Test()
In [5]: t1 = t
In [6]: del t # Nothing gets printed here because t1 still exists
In [7]: del t1 # reference count goes to 0 and now gets printed
deleted
This is why I think you should just let Python handle the destruction of your objects. Python keeps track of objects' reference counts and knows when they are no longer needed, so let it take care of that stuff for you.

Python: Passing a class member function to another class's callback

Can I pass class A into class B so that B can run a callback using A's member function?
I am trying to write a Python leg class for a robot I am building. I am using a Raspberry Pi as the main computer, and Martin O'Hanlon's KY040 rotary encoder library to detect each 1/4 rotation of the leg. To this end, I watch for the first of several clicks, sleep for a short time, stop the servo, and a 1/4 rotation has been achieved. In standalone, unthreaded code this works fine, but creating a class has been a challenge.
Details:
A threaded sentinel loop watches a boolean (quarterTurn) to signal that a rotation must be carried out.
def run(self):
    print "leg running"
    while self._running:
        sleep(.0001)
        if self.quarterTurn:
            print "quarterTurn is: " + str(self.quarterTurn)
            self.qTurn(self.quarterCount)
qTurn accesses a PWM controller to activate the motors, and resets quarterTurn to False.
def qTurn(self, quarters):
    count = 0
    while count < quarters:
        sleep(.0001)
        self.setMotor(self.maxPulse)
        if self.clickedOnce:
            count = count + 1
            sleep(.17)
            self.parkMotor()
            sleep(.04)
            self.clickedOnce = False
    self.quarterTurn = False
The trick is that O'Hanlon's class is already threaded. On one hand, it is convenient, on the other, it makes my class more complex. The KY040 makes use of a callback function to provide feedback, but using this within my class is the source of my trouble.
I need the callback to modify a boolean in my leg class, but this function is only called by the KY040 class, which tries to pass itself into the function.
def rotaryChange(self, pin):
    self.clickedOnce = True
Since the code is open source (thank you, O'Hanlon), I thought I could modify the constructor of the KY040 to let me pass my leg class into it, so that I could modify the correct data.
O'Hanlon's Original Constructor:
def __init__(self, clockPin, dataPin, switchPin=None, rotaryCallback=None,
             switchCallback=None, rotaryBouncetime=250, switchBouncetime=300):
    # persist values
    self.clockPin = clockPin
    self.dataPin = dataPin
    self.switchPin = switchPin
    self.rotaryCallback = rotaryCallback
    self.switchCallback = switchCallback
    self.rotaryBouncetime = rotaryBouncetime
    self.switchBouncetime = switchBouncetime

    # setup pins
    GPIO.setup(clockPin, GPIO.IN)
    GPIO.setup(dataPin, GPIO.IN)
    if None != self.switchPin:
        GPIO.setup(switchPin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
I added a "host" variable, into which I pass the leg class:
def __init__(self, clockPin, dataPin, switchPin=None, rotaryCallback=None,
             switchCallback=None, host=None, rotaryBouncetime=250, switchBouncetime=300):
    # persist values
    self.clockPin = clockPin
    self.dataPin = dataPin
    self.switchPin = switchPin
    self.rotaryCallback = rotaryCallback
    self.switchCallback = switchCallback
    self.rotaryBouncetime = rotaryBouncetime
    self.switchBouncetime = switchBouncetime
    # My change
    self.host = host

    # setup pins
    GPIO.setup(clockPin, GPIO.IN)
    GPIO.setup(dataPin, GPIO.IN)
    if None != self.switchPin:
        GPIO.setup(switchPin, GPIO.IN, pull_up_down=GPIO.PUD_UP)
The modified constructor would be called like so:
self.encoder = KY040(self.clockPin, self.dataPin, rotaryCallback=self.rotaryChange, host=self)
O'Hanlon's callback now passes the host along:
def _clockCallback(self, pin):
    # My change
    self.rotaryCallback(pin, self.host)
My new callback:
def rotaryChange(pin, host):
    host.clickedOnce = True
Unfortunately, after making sure the modified code is installed with the setup script, it doesn't seem to acknowledge my new additions. I run my program and receive the following error:
Traceback (most recent call last):
  File "ctf.py", line 18, in <module>
    LR = leg.leg(lr_chan, lr_max, lr_park, lr_clk, lr_data);
  File "/home/[user]/hexacrescentapod/leg.py", line 47, in __init__
    self.encoder = KY040(self.clockPin, self.dataPin,
                         rotaryCallback=self.rotaryChange, host=self)
TypeError: __init__() got an unexpected keyword argument 'host'
This is a little confusing because of your wording. Are you actually trying to pass a class in as you say, or an instance of that class as you seem to be doing? Which class is rotaryChange defined in?
Anyway, it looks like what you're actually trying to do is pass self.rotaryChange as a callback.
This already works, without any changes. self.rotaryChange is a bound method, meaning it knows what that self was when it was created, and will pass it when it's called. This may be easier to see with an example:
>>> class Spam:
...     def eggs(self):
...         pass
>>> spam = Spam()
>>> spam
<__main__.Spam at 0x119947630>
>>> spam.eggs
<bound method Spam.eggs of <__main__.Spam object at 0x119947630>>
Notice that it's a bound method of the spam object. When you call spam.eggs(), that spam object will be passed as the self argument.
This means you don't need to pass a host in, because it's already available as self. And, since that's the only thing you do with host, you don't need to pass around host in the first place. Which means you can revert all of your changes to the library code.
You do need to define your callback method as a proper method, with self as the first argument. But that's it. Then you can just pass rotaryCallback=self.rotaryChange to the constructor, and everything will work.
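A minimal sketch of what that looks like on the leg side (assuming the unmodified KY040 constructor and the attribute names used in the question):

class leg:
    def __init__(self, clockPin, dataPin):
        self.clockPin = clockPin
        self.dataPin = dataPin
        self.clickedOnce = False
        # self.rotaryChange is a bound method, so self travels with it
        self.encoder = KY040(self.clockPin, self.dataPin,
                             rotaryCallback=self.rotaryChange)

    # proper instance method: KY040 calls it with just the pin,
    # and Python supplies self automatically
    def rotaryChange(self, pin):
        self.clickedOnce = True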
At first look, it looks like your new callback is missing a self parameter?
The original function was
def rotaryChange(self, pin):
    self.clickedOnce = True
But your implementation is:
def rotaryChange(pin, host):
    host.clickedOnce = True
If this function sits inside a class, it needs to have a self parameter.

Keeping context-manager object alive through function calls

I am running into a bit of an issue with keeping a context manager open through function calls. Here is what I mean:
There is a context manager defined in a module which I use to open SSH connections to network devices. The "setup" code opens the SSH session and handles any issues, and the teardown code gracefully closes the SSH session. I normally use it as follows:
from manager import manager

def do_stuff(device):
    with manager(device) as conn:
        output = conn.send_command("show ip route")
        # process output...
        return processed_output
In order to keep the SSH session open and not have to re-establish it across function calls, I would like to add an argument to "do_stuff" so that it can optionally return the SSH session along with the processed output, as follows:
def do_stuff(device, return_handle=False):
    with manager(device) as conn:
        output = conn.send_command("show ip route")
        # process output...
        if return_handle:
            return (processed_output, conn)
        else:
            return processed_output
I would like to be able to call this function "do_stuff" from another function, as follows, such that it signals to "do_stuff" that the SSH handle should be returned along with the output.
def do_more_stuff(device):
    data, conn = do_stuff(device, return_handle=True)
    output = conn.send_command("show users")
    # process output...
    return processed_output
However, the issue I am running into is that the SSH session is closed, because the do_stuff function "returning" triggers the teardown code in the context manager (which gracefully closes the SSH session).
I have tried converting "do_stuff" into a generator, so that its state is suspended, in the hope that the context manager would stay open:
def do_stuff(device, return_handle=False):
    with manager(device) as conn:
        output = conn.send_command("show ip route")
        # process output...
        if return_handle:
            yield (processed_output, conn)
        else:
            yield processed_output
And calling it as such:
def do_more_stuff(device):
    gen = do_stuff(device, return_handle=True)
    data, conn = next(gen)
    output = conn.send_command("show users")
    # process output...
    return processed_output
However this approach does not seem to be working in my case, as the context-manager gets closed, and I get back a closed socket.
Is there a better way to approach this problem? Maybe my generator needs some more work...I think using a generator to hold state is the most "obvious" way that comes to mind, but overall should I be looking into another way of keeping the session open across function calls?
Thanks
I found this question because I was looking for a solution to an analogous problem where the object I wanted to keep alive was a pyvirtualdisplay.display.Display instance with selenium.webdriver.Firefox instances in it.
I also wanted any opened resources to die if an exception were raised during the display/browser instance creations.
I imagine the same could be applied to your SSH connection.
I recognize this is probably only a partial solution and contains less-than-best practices. Help is appreciated.
This answer is the result of an ad lib spike using the following resources to patch together my solution:
https://docs.python.org/3/library/contextlib.html#contextlib.ContextDecorator
http://www.wefearchange.org/2013/05/resource-management-in-python-33-or.html
(I do not yet fully grok what is described here though I appreciate the potential. The second link above eventually proved to be the most helpful by providing analogous situations.)
from pyvirtualdisplay.display import Display
from selenium.webdriver import Firefox
from contextlib import contextmanager, ExitStack

RFBPORT = 5904

def acquire_desktop_display(rfbport=RFBPORT):
    display_kwargs = {'backend': 'xvnc', 'rfbport': rfbport}
    display = Display(**display_kwargs)
    return display

def release_desktop_display(self):
    print("Stopping the display.")
    # browsers apparently die with the display so no need to call quits on them
    self.display.stop()

def check_desktop_display_ok(desktop_display):
    print("Some checking going on here.")
    return True

class XvncDesktopManager:
    max_browser_count = 1

    def __init__(self, check_desktop_display_ok=None, **kwargs):
        self.rfbport = kwargs.get('rfbport', RFBPORT)
        self.acquire_desktop_display = acquire_desktop_display
        self.release_desktop_display = release_desktop_display
        self.check_desktop_display_ok = check_desktop_display_ok \
            if check_desktop_display_ok is None else check_desktop_display_ok

    @contextmanager
    def _cleanup_on_error(self):
        with ExitStack() as stack:
            """push adds a context manager's __exit__() method
            to stack's callback stack."""
            stack.push(self)
            yield
            # The validation check passed and didn't raise an exception.
            # Accordingly, we want to keep the resource, and pass it
            # back to our caller.
            stack.pop_all()

    def __enter__(self):
        url = 'http://stackoverflow.com/questions/30905121/'\
              'keeping-context-manager-object-alive-through-function-calls'
        self.display = self.acquire_desktop_display(self.rfbport)
        with ExitStack() as stack:
            # add XvncDesktopManager instance's exit method to callback stack
            stack.push(self)
            self.display.start()
            self.browser_resources = [
                Firefox() for x in range(self.max_browser_count)
            ]
            for browser_resource in self.browser_resources:
                for url in (url, ):
                    browser_resource.get(url)
            """This is the last bit of magic.
            ExitStacks have a .close() method which unwinds
            all the registered context managers and callbacks
            and invokes their exit functionality."""
            # capture the function that calls all the exits;
            # it will be called later, outside the context in which it was captured
            self.close_all = stack.pop_all().close
        # if something fails in this context in __enter__, clean up
        with self._cleanup_on_error() as stack:
            if not self.check_desktop_display_ok(self):
                msg = "Failed validation for {!r}"
                raise RuntimeError(msg.format(self.display))
        # self is assigned to the variable after "as";
        # manually call close_all to unwind the callback stack
        return self

    def __exit__(self, *exc_details):
        # had to comment this out, unable to add this to callback stack
        # self.release_desktop_display(self)
        pass
I had a semi-expected result with the following:
kwargs = {
    'rfbport': 5904,
}
_desktop_manager = XvncDesktopManager(check_desktop_display_ok=check_desktop_display_ok, **kwargs)

with ExitStack() as stack:
    # context entered and what is inside the __enter__ method is executed
    # desktop_manager will have an attribute "close_all" that can be called
    # explicitly to unwind the callback stack
    desktop_manager = stack.enter_context(_desktop_manager)

# I was able to manipulate the browsers inside of the display
# and outside of the context
# before calling desktop_manager.close_all()
browser, = desktop_manager.browser_resources
browser.get(url)

# close everything down when finished with the resource
desktop_manager.close_all()  # does nothing, not in callback stack

# this functioned as expected
desktop_manager.release_desktop_display(desktop_manager)
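Applied back to the question's SSH scenario, here is a minimal sketch of the same idea (assuming only the manager() context manager and conn.send_command() from the question): the caller owns an ExitStack and decides when teardown happens.

from contextlib import ExitStack
from manager import manager

def do_stuff(device, stack=None):
    # if the caller supplies an ExitStack, register the connection on it so it
    # stays open after this function returns; otherwise open and close it here
    if stack is not None:
        conn = stack.enter_context(manager(device))
        output = conn.send_command("show ip route")
        # process output...
        return output, conn
    with manager(device) as conn:
        return conn.send_command("show ip route")

def do_more_stuff(device):
    with ExitStack() as stack:
        data, conn = do_stuff(device, stack=stack)
        output = conn.send_command("show users")
        # process output...
    # the SSH session is closed here, when the caller's ExitStack exits
    return output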

How to make the spdylay module work like httplib/http.client?

I have to test a server based on Jetty. This server works with its own protocol, HTTP, HTTPS, and lately it has started to support SPDY. I have some stress tests based on httplib/http.client -- each thread starts with a similar URL (some data in the query string varies), adds the execution time to a global variable, and every few seconds shows some statistics. The code looks like:
t_start = time.time()
connection.request("GET", path)
resp = connection.getresponse()
t_stop = time.time()
check_response(resp)
QRY_TIMES.append(t_stop - t_start)
The client working with the native protocol shares the httplib API, so the connection may be native, an HTTPConnection, or an HTTPSConnection.
Now I want to add a SPDY test using the spdylay module, but its interface is opaque and I don't know how to turn it into something similar to the httplib interface. I have made a test client based on an example, but since the 2nd argument to spdylay.urlfetch() is a class name and not an object, I do not know how to use it with my tests. I have already added my tests to the on_close() method of my class, which extends spdylay.BaseSPDYStreamHandler, but that is not compatible with the other tests. If it were an instance, I would use it outside of the spdylay.urlfetch() call.
How can I use spdylay in code that works with the httplib interface?
My only idea is to use a global dictionary where the URL is the key and the handler object is the value. It is not ideal because:
new queries with the same URL will overwrite the previous response
it is easy to forget to free the handler from the global dictionary
But it works!
import sys
import spdylay

CLIENT_RESULTS = {}

class MyStreamHandler(spdylay.BaseSPDYStreamHandler):
    def __init__(self, url, fetcher):
        super().__init__(url, fetcher)
        self.headers = []
        self.whole_data = []

    def on_header(self, nv):
        self.headers.append(nv)

    def on_data(self, data):
        self.whole_data.append(data)

    def get_response(self, charset='UTF8'):
        return (b''.join(self.whole_data)).decode(charset)

    def on_close(self, status_code):
        CLIENT_RESULTS[self.url] = self

def spdy_simply_get(url):
    spdylay.urlfetch(url, MyStreamHandler)
    data_handler = CLIENT_RESULTS[url]
    result = data_handler.get_response()
    del CLIENT_RESULTS[url]
    return result

if __name__ == '__main__':
    if '--test' in sys.argv:
        spdy_response = spdy_simply_get('https://localhost:8443/test_spdy/get_ver_xml.hdb')
I hope somebody can do spdy_simply_get(url) better.
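One possible refinement (a sketch, assuming the same spdylay.urlfetch() and BaseSPDYStreamHandler API used above) is to build the handler class inside the function, so results land in a local container instead of a global dictionary:

import spdylay

def spdy_simply_get(url, charset='UTF8'):
    results = []  # local container captured by the handler class below

    class _OneShotHandler(spdylay.BaseSPDYStreamHandler):
        def __init__(self, url, fetcher):
            super().__init__(url, fetcher)
            self.whole_data = []

        def on_data(self, data):
            self.whole_data.append(data)

        def on_close(self, status_code):
            results.append(b''.join(self.whole_data).decode(charset))

    # urlfetch still receives a class, but each call gets its own class object,
    # so repeated fetches of the same URL no longer collide in a shared dict
    spdylay.urlfetch(url, _OneShotHandler)
    return results[0] if results else None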

Callbacks and events in python

I'm making a bot to link IRC and DC (Direct Connect) together. There is an existing implementation in C++ I've been following, but it doesn't have all the features we're after.
I'm using an IRC library for python which is really well coded. I can register some callback handlers for various IRC events (specifically receiving a public message). This callback function is able to reference objects created in the main python execution from the thread within the IRC library.
Here are my callbacks:
def on_connect(connection, event):
    connection.join(ircSettings['channel'])

def on_disconnect(connection, event):
    sys.exit()

def on_pubmsg(connection, event):
    hubClient.sendMessage(event.source.split('!')[0] + ': ' + event.arguments[0])
And here's how I set them up:
# Create the IRC client
ircClient = irc.client.IRC()
try:
    ircConnection = ircClient.server().connect(ircSettings['server'], ircSettin$
except irc.client.ServerConnectionError, x:
    print x
    sys.exit()

# Set the IRC event handlers
ircConnection.add_global_handler("welcome", on_connect)
ircConnection.add_global_handler("pubmsg", on_pubmsg)
ircConnection.add_global_handler("disconnect", on_disconnect)
I really like this solution, as it makes for very tidy code (particularly in this example). However, I have no idea how to modify my DC library to generate these events.
The main point of interest is the callback's ability to reference the hubClient, which is created in the main python program like so:
# Create the DC client
hubClient = DC.DirectConnect(dcSettings)
hubClient.connect(dcSettings['hub'])
Initially, I passed a function pointer to my DC library to run whenever a message is received:
def messageHandler(nick, msg):
    if nick is not ircSettings['nick']:
        ircConnection.privmsg(ircSettings['channel'], nick + ': ' + msg)

dcSettings = {
    'nick': 'dans_bot',
    'sharesize': 10*1024**3,  # 10GB
    'ip': '0.0.0.0',  # XXX: This might not matter, but needed for library
    'hub': ('192.168.1.129', 411),
    'handler': messageHandler
}
But I get the error:
NameError: global name 'ircConnection' is not defined
How can I set up my DC client to create a callback in a way that I can still reference these local (to the main execution) objects?
Edit: I added a declaration for 'ircConnection'.
I suppose ircConnection is a third-party module, and a simple import of that module may solve this "global name 'ircConnection' is not defined" error. Try import ircConnection in your main module.
The only problem in your code is that ircConnection is first assigned inside the try-except block, so if that fails the name never gets defined. Just write ircConnection = None before the try.
# Create the IRC client
ircClient = irc.client.IRC()
ircConnection = None
try:
    ircConnection = ircClient.server().connect(ircSettings['server'], ircSettin$
except irc.client.ServerConnectionError, x:
    print x
    sys.exit()

# Set the IRC event handlers
ircConnection.add_global_handler("welcome", on_connect)
ircConnection.add_global_handler("pubmsg", on_pubmsg)
ircConnection.add_global_handler("disconnect", on_disconnect)
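If the NameError persists because messageHandler lives in another module or runs before the connection exists, another option (a sketch only, assuming the irc and DC APIs used above; Bridge is a made-up name, and the second argument to connect() is assumed since the original line is truncated) is to keep the shared objects on a class and pass bound methods as callbacks:

class Bridge:
    def __init__(self, ircSettings, dcSettings):
        self.ircSettings = ircSettings
        self.ircClient = irc.client.IRC()
        self.ircConnection = self.ircClient.server().connect(
            ircSettings['server'], ircSettings['port'])  # 'port' key assumed
        self.ircConnection.add_global_handler("pubmsg", self.on_pubmsg)
        # the DC library calls back into a bound method, which can reach
        # self.ircConnection without any globals
        dcSettings['handler'] = self.messageHandler
        self.hubClient = DC.DirectConnect(dcSettings)
        self.hubClient.connect(dcSettings['hub'])

    def on_pubmsg(self, connection, event):
        self.hubClient.sendMessage(
            event.source.split('!')[0] + ': ' + event.arguments[0])

    def messageHandler(self, nick, msg):
        if nick != self.ircSettings['nick']:
            self.ircConnection.privmsg(
                self.ircSettings['channel'], nick + ': ' + msg)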
