This implementation of the AWS MQTT broker is confusing me.
In the code, there is this function definition:
def on_message_received(topic, payload, **kwargs):
    print("Received message from topic '{}': {}".format(topic, payload))
    global received_count
    received_count += 1
    if received_count == args.count:
        received_all_event.set()
And then the function is called like this:
subscribe_future, packet_id = mqtt_connection.subscribe(
    topic=args.topic,
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received
)
subscribe_result = subscribe_future.result()
There are two things that confuse me:
1. Why is the on_message_received function called without parameters?
2. If I want to pass a variable to on_message_received and do something with it from within that function, what would be the correct approach?
About point (2), consider the following example, which seems to be working:
last_message = ''
def on_message_received(topic, payload, **kwargs):
    print("Received message from topic '{}': {}".format(topic, payload))
    last_message = payload
subscribe_future, packet_id = mqtt_connection.subscribe(
    topic=args.topic,
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=on_message_received
)
subscribe_result = subscribe_future.result()
But I don't think this is the correct approach, and I don't know how I should pass the external variable to the function.
The example you posted doesn't actually work: assigning to last_message inside the function creates a new local variable, so the module-level last_message is never updated (you would need a global declaration for that). I'd suggest putting it in a handler object, like this:
class MessageHandler:
    def __init__(self):
        self.last_message = ''

    def on_message_received(self, topic, payload, **_kwargs):
        print(f"Received message from topic '{topic}': {payload}")
        self.last_message = payload

handler = MessageHandler()

subscribe_future, packet_id = mqtt_connection.subscribe(
    topic=args.topic,
    qos=mqtt.QoS.AT_LEAST_ONCE,
    callback=handler.on_message_received
)
subscribe_result = subscribe_future.result()

print("Last message:", handler.last_message)
The on_message_received function is not called by your code; it is passed as an argument to mqtt_connection.subscribe. The API is what will call it later, when a message arrives, and at that time it provides the topic and payload arguments to your function.
Calling subscribe_future.result() causes your program to stop and wait until the subscribe operation has completed (i.e. it waits for the Future object to produce a result). Note that in real life you usually don't want to block on a future immediately: the whole point of a future is that it represents something that will complete later, and you probably have other things to do while you wait for it.
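If you don't want to block right away, one alternative (a minimal sketch, assuming subscribe_future is a standard concurrent.futures.Future) is to attach a completion callback and keep going:

# Sketch only, not from the sample: attach a completion callback instead of
# blocking on subscribe_future.result() right away.
def on_subscribe_done(fut):
    result = fut.result()  # re-raises here if the subscribe failed
    print("Subscribe finished:", result)

subscribe_future.add_done_callback(on_subscribe_done)
# The rest of the program keeps running; received messages still arrive
# through handler.on_message_received on the MQTT client's own thread.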
I have an API that returns a paginated response with only 10 records at a time. I want to process the first 10 records (index=0 and limit=10), then the next 10 (index=10 and limit=10), and so on until it returns an empty array.
I want to do this asynchronously.
I am using the following deps:
yarl==1.6.0
Mako==1.1.3
asyncio==3.4.3
aiohttp==3.6.2
The code is:
loop = asyncio.get_event_loop()
loop.run_until_complete(getData(id, token, 0, 10))
logger.info("processed all data")
async def getData(id, token, index, limit):
    try:
        async with aiohttp.ClientSession() as session:
            response = await fetch_data_from_api(session, id, token, index, limit)
            if response == []:
                logger.info('Fetched all data')
            else:
                # process data(response)
                getData(session, id, limit, limit+10)
    except Exception as ex:
        raise Exception(ex)
async def fetch_data_from_api(
    session, id, token, index, limit
):
    try:
        url = f"http://localhost:8080/{id}?index={index}&limit={limit}"
        async with session.post(
            url=url,
            headers={"Authorization": token}
        ) as response:
            response.raise_for_status()
            response = await response.json()
            return json.loads(json.dumps(response))
    except Exception as ex:
        raise Exception(
            f"Exception {ex} occurred"
        )
The issue is that it works fine the first time, but when getData(session, id, limit, limit+10) is called again from within async def getData(id, token, index, limit), it never actually runs.
How can I resolve this?
There are a few issues I see in your code.
First, and this is the one you are asking about, is the getData method.
Looking at the code, it is a bit unclear to me what that "second" getData is. In the function definition your arguments are getData(id, token, index, limit), but when you call it from within the function you call it as getData(session, id, limit, limit+10), where id is the second argument. Is that intentional? It looks to me like either there is another getData method, or it's a bug.
In case it's the first option: (a) you probably need to show us that code as well, as it's important for us to be able to give you better responses, and (b), more importantly, it will not work. Python doesn't support overloading, and the getData you are referencing from within the wrapping getData is the same wrapping method.
In case it's the second option: (a) you might have an issue with the function parameters, and (b) you are missing an await before the getData call (i.e. await getData(...)). This is probably also relevant if it turns out to be the "first option".
Other than that, your exception handling is redundant. You basically just re-raise the exception, so I don't see any point in having the try/except blocks. Even more, for some reason in the first method you create a new Exception from the base exception class (not to be confused with BaseException). Just drop the try blocks.
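For illustration, here is a minimal sketch of the awaited, loop-based version. It reuses your fetch_data_from_api; process() is just a placeholder for whatever you do with each page:

import asyncio
import aiohttp

async def get_all_data(id, token, limit=10):
    # Sketch only: loop over pages instead of recursing, awaiting each call.
    async with aiohttp.ClientSession() as session:
        index = 0
        while True:
            response = await fetch_data_from_api(session, id, token, index, limit)
            if not response:       # empty list -> all pages fetched
                break
            process(response)      # hypothetical placeholder for your processing
            index += limit         # advance to the next page

loop = asyncio.get_event_loop()
loop.run_until_complete(get_all_data(id, token))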
I'm following this Route_Guide sample.
The sample in question fires off and reads messages without replying to a specific message. The latter is what I'm trying to achieve.
Here's what i have so far:
import grpc
...
channel = grpc.insecure_channel(conn_str)
try:
    grpc.channel_ready_future(channel).result(timeout=5)
except grpc.FutureTimeoutError:
    sys.exit('Error connecting to server')
else:
    stub = MyService_pb2_grpc.MyServiceStub(channel)
    print('Connected to gRPC server.')
    this_is_just_read_maybe(stub)
def this_is_just_read_maybe(stub):
    responses = stub.MyEventStream(stream())
    for response in responses:
        print(f'Received message: {response}')
        if response.something:
            # okay, now what? how do i send a message here?

def stream():
    yield my_start_stream_msg
    # this is fine, i receive this server-side
    # but i can't check for incoming messages here
I don't seem to have a read() or write() method on the stub; everything seems to be implemented with iterators.
How do I send a message from this_is_just_read_maybe(stub)?
Is that even the right approach?
My Proto is a bidirectional stream:
service MyService {
    rpc MyEventStream (stream StreamingMessage) returns (stream StreamingMessage) {}
}
What you're trying to do is perfectly possible and will probably involve writing your own request iterator object that can be given responses as they arrive rather than using a simple generator as your request iterator. Perhaps something like
class MySmarterRequestIterator(object):

    def __init__(self):
        self._lock = threading.Lock()
        self._responses_so_far = []

    def __iter__(self):
        return self

    def _next(self):
        # some logic that depends upon what responses have been seen
        # before returning the next request message
        return <your message value>

    def __next__(self):  # Python 3
        return self._next()

    def next(self):  # Python 2
        return self._next()

    def add_response(self, response):
        with self._lock:
            self._responses_so_far.append(response)
that you then use like
my_smarter_request_iterator = MySmarterRequestIterator()
responses = stub.MyEventStream(my_smarter_request_iterator)
for response in responses:
    my_smarter_request_iterator.add_response(response)
There will probably be locking and blocking in your _next implementation to handle the situation of gRPC Python asking your object for the next request that it wants to send, and your responding (in effect) "wait, hold on, I don't know what request I want to send until after I've seen how the next response turned out".
Instead of writing a custom iterator, you can also use a blocking queue to implement send- and receive-like behaviour for the client stub:
import queue
...
send_queue = queue.SimpleQueue()  # or queue.Queue if using Python before 3.7
my_event_stream = stub.MyEventStream(iter(send_queue.get, None))

# send
send_queue.put(StreamingMessage())

# receive
response = next(my_event_stream)  # type: StreamingMessage
This makes use of the sentinel form of iter, which converts a regular function into an iterator that stops when it reaches a sentinel value (in this case None).
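As a rough usage sketch (handle() is just a placeholder), you would typically drain the responses on a background thread while sending from wherever is convenient:

import threading

def receive_loop():
    # Each item is a StreamingMessage from the server.
    for response in my_event_stream:
        handle(response)  # hypothetical response handler

threading.Thread(target=receive_loop, daemon=True).start()

send_queue.put(StreamingMessage())  # send whenever you need to
# ...
send_queue.put(None)                # the sentinel ends the request stream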
I would like to improve my coding style with a more robust grasp of try, except and raise when designing an API, and with less verbose code.
I have nested functions, and when one catches an exception, I pass it on to the next one, and so on.
But this way, the same error can end up being checked multiple times as it propagates.
I am referring to "Using try vs if in python" for the cost of a try operation.
How would you handle an error only once across nested functions?
E.g.:
- I have a function f(key) doing some operations on key; the result is passed to other functions g(), h().
- If the result complies with the expected data structure, g() ... h() will manipulate it and return the updated result.
- A decorator will return the final result, or return the first error that was met, pointing out in which method it was raised (f(), g() or h()).
I am doing something like this:
def f(key):
    try:
        # do something
        return {'data': 'data_structure'}
    except:
        return {'error': 'there is an error'}
@application.route('/')
def api_f(key):
    data = f(key)
    try:
        # do something on data
        return jsonify(data)
    except:
        return jsonify({'error': 'error in key'})
IMO try/except is the best way to go for this use case. Whenever you want to handle an exceptional case, put in a try/except. If you can’t (or don’t want to) handle the exception in some sane way, let it bubble up to be handled further up the stack. Of course there are various reasons to take different approaches (e.g. you don’t really care about an error and can return something else without disrupting normal operation; you expect “exceptional” cases to happen more often than not; etc.), but here try/except seems to make the most sense:
In your example, it’d be best to leave the try/except out of f() unless you want to…
raise a different error (be careful with this, as this will reset your stack trace):
try:
    ### Do some stuff
except:
    raise CustomError('Bad things')
do some error handling (e.g. logging; cleanup; etc.):
try:
    ### Do some stuff
except:
    logger.exception('Bad things')
    cleanup()
    ### Re-raise the same error
    raise
Otherwise, just let the error bubble up.
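As a sketch of that last point (the route pattern and names are only illustrative), f() simply does its work and any exception propagates to whatever handler is registered further up:

def f(key):
    # do something that may raise; no local try/except
    return {'data': 'data_structure'}

@application.route('/<key>')
def api_f(key):
    data = f(key)           # an exception here propagates up
    return jsonify(data)    # handled by the app's error handler if it raises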
Subsequent functions (e.g. g(); h()) would operate the same way. In your case, you’d probably want to have some jsonify helper function that jsonifies when possible but also handles non-json data:
def handle_json(data):
    try:
        return json.dumps(data)
    except TypeError as e:
        logger.exception('Could not decode json from %s: %s', data, e)
        # Could also re-raise the same error
        raise CustomJSONError('Bad things')
Then, you would have handler(s) further up the stack to handle either the original error or the custom error, ending with a global handler that can handle any error. In my Flask application, I created custom error classes that my global handler is able to parse and do something with. Of course, the global handler is configured to handle unexpected errors as well.
For instance, I might have a base class for all http errors…
### Not to be raised directly; raise sub-class instances instead
class BaseHTTPError(Exception):

    def __init__(self, message=None, payload=None):
        Exception.__init__(self)
        if message is not None:
            self.message = message
        else:
            self.message = self.default_message
        self.payload = payload

    def to_dict(self):
        """
        Call this in the error handler to serialize the
        error for the json-encoded http response body.
        """
        payload = dict(self.payload or ())
        payload['message'] = self.message
        payload['code'] = self.code
        return payload
…which is extended for various http errors:
class NotFoundError(BaseHTTPError):
    code = 404
    default_message = 'Resource not found'

class BadRequestError(BaseHTTPError):
    code = 400
    default_message = 'Bad Request'

class InternalServerError(BaseHTTPError):
    code = 500
    default_message = 'Internal Server Error'

### Whatever other http errors you want
And my global handler looks like this (I am using flask_restful, so this gets defined as a method on my extended flask_restful.Api class):
class RestAPI(flask_restful.Api):

    def handle_error(self, e):
        code = getattr(e, 'code', 500)
        message = getattr(e, 'message', 'Internal Server Error')
        to_dict = getattr(e, 'to_dict', None)
        if code == 500:
            logger.exception(e)
        if to_dict:
            data = to_dict()
        else:
            data = {'code': code, 'message': message}
        return self.make_response(data, code)
With flask_restful, you may also just define your error classes and pass them as a dictionary to the flask_restful.Api constructor, but I prefer the flexibility of defining my own handler that can add payload data dynamically. flask_restful automatically passes any unhandled errors to handle_error. As such, this is the only place I’ve needed to convert the error to json data, because that is what flask_restful needs in order to return an HTTP status and payload to the client. Notice that even if the error type is unknown (e.g. to_dict is not defined), I can return a sane HTTP status and payload to the client without having had to convert errors lower down the stack.
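For completeness, a minimal sketch of that dictionary-based alternative, assuming flask_restful's errors= constructor argument (keyed by exception class name, using the classes defined above):

errors = {
    'NotFoundError': {'message': 'Resource not found', 'status': 404},
    'BadRequestError': {'message': 'Bad Request', 'status': 400},
}
api = flask_restful.Api(app, errors=errors)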
Again, there are reasons to convert errors to some useful return value at other places in your app, but for the above, try/except works well.
I'm confused as to when to return self from inside a class and when to return a value, which may or may not be used to check that the method ran correctly.
def api_request(self, data):
    # api web request code
    return response.text

def connect(self):
    # login to api, set some vars defined in __init__
    return self

def send_message(self, message):
    # send msg code
    return self
So above there are a few examples. For api_request I know having the text response is a must. But what should send_message return? (The response is then converted to a dict to check that a key exists, else an error is raised.)
Should it return True, the response converted to a dict, or self?
Thanks in advance.
Since errors tend to be delivered as exceptions and hence success/fail return values are rarely useful, a lot of object-modifier functions wind up with no return value at all—or more precisely, return None, since you can't return nothing-at-all. (Consider some of Python's built-in objects, like list, where append and extend return None, and dict, where dict.update returns None.)
Still, returning self is convenient for chaining method calls, even if some Pythonistas don't like it. See kindall's answer in Should internal class methods return values or just modify instance variables in python? for example.
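To make the chaining point concrete, returning self lets callers write something like this (purely illustrative API):

# Hypothetical fluent usage enabled by returning self from each method:
api = API().connect().send_message('hello')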
Edit to add some examples based on comment:
What you "should" return—or raise an exception, in which case, "what exception"—depends on the problem. Do you want send_message() to wait for a response, validate that response, and verify that it was good? If so, do you want it to raise an error if there is no response, the validation fails, or the response was valid but says "message rejected"? If so, do you want different errors for each failure, etc? One reasonable (for some value of reasonable) method is to capture all failures with a "base" exception, and make each "type" of failure a derivative of that:
class ZorgError(Exception):       # catch-all "can't talk via the Zorg-brand XML API"
    pass

class ZorgRemoteDown(ZorgError):  # connect or send failed, or no response/timeout
    pass

class ZorgNuts(ZorgError):        # remote response incomprehensible
    pass

class ZorgDenied(ZorgError):      # remote says "permission denied"
    pass

# add more if needed
Now some of your functions might look something like this (note, none of this is tested):
def connect(self):
    """connect to server, log in"""
    ...  # do some prep work
    addr = self._addr
    try:
        self._sock.connect(addr)
    except socket.error as err:
        if err.errno == errno.ECONNREFUSED:  # server is down
            raise ZorgRemoteDown(addr)       # translate that to our Zorg error
        # add more special translation here if needed
        raise  # some other problem, propagate it
    ...  # do other stuff now that we're connected, including trying to log in
    response = self._get_response()
    if response == 'login denied':  # or whatever that looks like
        raise ZorgDenied()  # maybe say what exactly was denied, maybe not
    # all went well, return None by not returning anything

def send_message(self, msg):
    """encode the message in the way the remote likes, send it, and wait for
    a response from the remote."""
    response = self._send_and_wait(self._encode(msg))
    if response == 'ok':
        return
    if response == 'permission denied':
        raise ZorgDenied()
    # don't understand what we got back, so say the remote is crazy
    raise ZorgNuts(response)
Then you need some "internal" functions like these:
def _send_and_wait(self, raw_xml):
    """send raw XML to server"""
    try:
        self._sock.sendall(raw_xml)
    except socket.error as err:
        if err.errno in (errno.EHOSTDOWN, errno.ENETDOWN):  # add more if needed
            raise ZorgRemoteDown(self._addr)
        raise
    return self._get_response()

def _get_response(self):
    """wait for a response, which is supposedly XML-encoded"""
    ...  # some code here
    if we_got_a_timeout_while_waiting:
        raise ZorgRemoteDown(self._addr)
    try:
        return some_xml_decoding_stuff(raw_xml)
    except SomeXMLDecodeError:
        raise ZorgNuts(raw_xml)  # or something else suitable for debug
You might choose not to translate socket.errors at all, and not have all your own errors; perhaps you can squeeze your errors into ValueError and KeyError and so on, for instance.
These choices are what programming is all about!
Generally, objects in Python are mutable. You therefore do not need to return self, as the modifications you make in a method are reflected in the object itself.
To use your example:
api = API()             # initialise the API
if api.connect():       # perhaps return a bool, indicating that the connection succeeded
    api.send_message()  # you now know that this API instance is connected, and can send messages
I'm trying to make an IRC bot using the twisted.words.protocols.irc module.
The bot will read messages from a channel and parse them for command strings.
Everything works fine except when I need the bot to identify a nick by sending a whois command. The whois reply will not be handled until the privmsg method (the method from which I'm doing the parsing) returns.
example:
from twisted.words.protocols import irc

class MyBot(irc.IRCClient):
    ..........

    def privmsg(self, user, channel, msg):
        """This method is called when the client receives a message"""
        if msg.startswith(':whois '):
            nick = msg.split()[1]
            self.whois(nick)
            print(self.whoislist)

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        """This method is called when the client receives a reply for whois"""
        self.whoislist[prefix] = params
Is there a way to somehow make the bot wait for a reply after self.whois(nick)?
Perhaps use a thread (I don't have any experience with those).
Deferred is a core concept in Twisted; you must be familiar with it to use Twisted.
Basically, your whois-checking function should return a Deferred that will be fired when you receive the whois reply.
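A minimal sketch of what that could look like (whois_deferred is a hypothetical helper, not part of IRCClient; whois(), the irc_RPL_* handlers and Deferred are the real Twisted pieces):

from twisted.internet import defer
from twisted.words.protocols import irc

class MyBot(irc.IRCClient):

    def whois_deferred(self, nick):
        """Send a WHOIS and return a Deferred that fires with the collected
        reply lines once RPL_ENDOFWHOIS arrives."""
        self.whoislist = {}
        self._whois_d = defer.Deferred()
        self.whois(nick)
        return self._whois_d

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        self.whoislist[prefix] = params

    def irc_RPL_ENDOFWHOIS(self, prefix, params):
        d, self._whois_d = self._whois_d, None
        if d is not None:
            d.callback(self.whoislist)

    def privmsg(self, user, channel, msg):
        if msg.startswith(':whois '):
            nick = msg.split()[1]
            self.whois_deferred(nick).addCallback(
                lambda whoislist: print(whoislist))  # runs when the reply is complete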
I managed to fix this by running all handler methods as threads and then, following kirelagin's suggestion, setting a field before running a whois query and modifying the method that receives the data to change the field when it receives a reply. It's not the most elegant solution, but it works.
Modified code:
class MyBot(irc.IRCClient):
    ..........

    def privmsg(self, user, channel, msg):
        """This method is called when the client receives a message"""
        if msg.startswith(':whois '):
            nick = msg.split()[1]
            self.whois_status = 'REQUEST'
            self.whois(nick)
            while not self.whois_status == 'ACK':
                sleep(1)
            print(self.whoislist)

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        """This method is called when the client receives a reply for whois"""
        self.whoislist[prefix] = params

    def handleCommand(self, command, prefix, params):
        """Determine the function to call for the given command and call
        it with the given arguments.
        """
        method = getattr(self, "irc_%s" % command, None)
        try:
            # all handler methods are now threaded.
            if method is not None:
                thread.start_new_thread(method, (prefix, params))
            else:
                thread.start_new_thread(self.irc_unknown, (prefix, command, params))
        except:
            irc.log.deferr()

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        """docstring for irc_RPL_WHOISCHANNELS"""
        self.whoislist[prefix] = params

    def irc_RPL_ENDOFWHOIS(self, prefix, params):
        self.whois_status = 'ACK'