I'm following this Route_Guide sample.
The sample in question fires off messages and reads incoming ones without ever replying to a specific message; replying to a specific message is what I'm trying to achieve.
Here's what I have so far:
import grpc
...

channel = grpc.insecure_channel(conn_str)
try:
    grpc.channel_ready_future(channel).result(timeout=5)
except grpc.FutureTimeoutError:
    sys.exit('Error connecting to server')
else:
    stub = MyService_pb2_grpc.MyServiceStub(channel)
    print('Connected to gRPC server.')
    this_is_just_read_maybe(stub)

def this_is_just_read_maybe(stub):
    responses = stub.MyEventStream(stream())
    for response in responses:
        print(f'Received message: {response}')
        if response.something:
            # okay, now what? how do I send a message here?

def stream():
    yield my_start_stream_msg
    # this is fine, I receive this server-side
    # but I can't check for incoming messages here
I don't seem to have a read() or write() on the stub; everything seems to be implemented with iterators.
How do I send a message from this_is_just_read_maybe(stub)?
Is that even the right approach?
My Proto is a bidirectional stream:
service MyService {
    rpc MyEventStream (stream StreamingMessage) returns (stream StreamingMessage) {}
}
What you're trying to do is perfectly possible and will probably involve writing your own request iterator object that can be given responses as they arrive rather than using a simple generator as your request iterator. Perhaps something like
import threading

class MySmarterRequestIterator(object):

    def __init__(self):
        self._lock = threading.Lock()
        self._responses_so_far = []

    def __iter__(self):
        return self

    def _next(self):
        # some logic that depends upon what responses have been seen
        # before returning the next request message
        return <your message value>

    def __next__(self):  # Python 3
        return self._next()

    def next(self):  # Python 2
        return self._next()

    def add_response(self, response):
        with self._lock:
            self._responses_so_far.append(response)
that you then use like
my_smarter_request_iterator = MySmarterRequestIterator()
responses = stub.MyEventStream(my_smarter_request_iterator)
for response in responses:
    my_smarter_request_iterator.add_response(response)
There will probably be locking and blocking in your _next implementation to handle the situation of gRPC Python asking your object for the next request that it wants to send and your responding (in effect) "wait, hold on, I don't know what request I want to send until after I've seen how the next response turned out".
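For example, a minimal sketch of that blocking, assuming you have an initial start_request to send first and a hypothetical build_request_from helper that turns the latest response into the next request, could use a threading.Condition:

import threading

class MySmarterRequestIterator(object):

    def __init__(self, start_request):
        self._condition = threading.Condition()
        self._responses_so_far = []
        self._start_request = start_request

    def __iter__(self):
        return self

    def __next__(self):
        with self._condition:
            if self._start_request is not None:
                # the first request has to go out before any response exists
                request, self._start_request = self._start_request, None
                return request
            # afterwards, block until add_response has handed us something to react to
            while not self._responses_so_far:
                self._condition.wait()
            latest = self._responses_so_far.pop(0)
        return build_request_from(latest)  # hypothetical helper

    next = __next__  # Python 2 compatibility

    def add_response(self, response):
        with self._condition:
            self._responses_so_far.append(response)
            self._condition.notify()

This relies on gRPC Python consuming the request iterator on a separate thread, so __next__ can safely block there while your loop over the responses keeps calling add_response.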
Instead of writing a custom iterator, you can also use a blocking queue to implement send- and receive-like behaviour on the client side:
import queue
...
send_queue = queue.SimpleQueue() # or Queue if using Python before 3.7
my_event_stream = stub.MyEventStream(iter(send_queue.get, None))
# send
send_queue.put(StreamingMessage())
# receive
response = next(my_event_stream) # type: StreamingMessage
This makes use of the sentinel form of iter, which turns a callable into an iterator that calls it repeatedly and stops as soon as it returns the sentinel value (in this case None).
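One consequence of the sentinel is that putting None on the queue is also how you close the request side of the stream once you're done sending; a sketch reusing the names above:

# send a few messages, then close the request side of the stream
send_queue.put(StreamingMessage())
send_queue.put(StreamingMessage())
send_queue.put(None)  # iter(send_queue.get, None) stops here, ending the request stream

# drain whatever responses are still coming back
for response in my_event_stream:
    print('Received message:', response)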
Related
When I run my gRPC client and it attempts to stream a request to the server I get this error: "TypeError: has type list_iterator, but expected one of: bytes, unicode"
Do I need to encode the text I'm sending in some way? The error message makes some sense, as I am definitely passing in an iterator; I assumed from the gRPC documentation that this is what was needed (https://grpc.io/docs/tutorials/basic/python.html#request-streaming-rpc). Anyway, sending a list or a plain string yields a similar error.
At the moment I am sending a small test list of strings to the server in the request, but I plan to stream requests with very large amounts of text in the future.
Here's some of my client code.
def gen_tweet_space(text):
    for tweet in text:
        yield tweet

def run():
    channel = grpc.insecure_channel('localhost:50050')
    stub = ProseAndBabel_pb2_grpc.ProseAndBabelStub(channel)
    while True:
        iterator = iter(block_of_text)
        response = stub.UserMarkov(ProseAndBabel_pb2.UserTweets(tweets=iterator))
Here's relevant server code:
def UserMarkov(self, request_iterator, context):
    return ProseAndBabel_pb2.Babel(prose=markov.get_sentence(request_iterator.tweets))
Here's the proto where the rpc and messages are defined:
service ProseAndBabel {
    rpc GetHaiku (BabelRequest) returns (Babel) {}
    rpc GetBabel (BabelRequest) returns (Babel) {}
    rpc UserMarkov (stream UserTweets) returns (UserBabel) {}
}

message BabelRequest {
    string ask = 1;
}

message Babel {
    string prose = 1;
}

message UserTweets {
    string tweets = 1;
}

message UserBabel {
    string prose = 1;
}
I've been successful getting the non-streaming RPCs to work, but I'm having trouble finding walkthroughs for request-side streaming in Python applications, so I'm sure I'm missing something here. Any guidance/direction appreciated!
You need to pass the iterator of requests to the gRPC client stub, not to the protobuf constructor. The current code tries to instantiate a UserTweets protobuf with an iterator rather than an individual string, resulting in the type error.
response = stub.UserMarkov(ProseAndBabel_pb2.UserTweets(tweets=iterator))
You'll instead need to have your iterator return instances of ProseAndBabel_pb2.UserTweets, each of which wraps one of the request strings you would like to send, and then pass the iterator itself to the stub. Something like:
iterator = iter([ProseAndBabel_pb2.UserTweets(tweets=x) for x in block_of_text])
response = stub.UserMarkov(iterator)
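Equivalently, since a generator is just an iterator, the gen_tweet_space helper from the question could be adapted to yield the wrapped messages directly (a sketch, assuming block_of_text is an iterable of strings):

def gen_tweet_space(block_of_text):
    # wrap each string in a UserTweets message so gRPC can serialize it
    for tweet in block_of_text:
        yield ProseAndBabel_pb2.UserTweets(tweets=tweet)

response = stub.UserMarkov(gen_tweet_space(block_of_text))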
I have an API code snippet:
@app.route("/do_something", method=['POST', 'OPTIONS'])
# CORS is enabled
def initiate_trade():
    '''
    post json
    some Args: *input
    '''
    if request.method == 'OPTIONS':
        yield {}
    else:
        response.headers['Content-type'] = 'application/json'
        data = request.json
        print data
        for dump in json.dumps(function(input)): yield dump
The corresponding function is:
def function(*input):
    #========= All about processing foo input ==========#
    ....
    #========= All about processing foo input ends ==========#
    worker = []
    for this in foo_data:
        # doing something
        for _ in xrange(this):
            # doing smthng again
            worker.append(gevent.spawn(foo_fi, args))
        result = gevent.joinall(worker)
        some_dict.update({this: [t.value for t in worker]})
        gevent.killall(worker)
        worker = []
        yield {this: some_dict[this]}
        # gevent.sleep(2)
When I run the DHC REST client without the gevent.sleep(2), it returns everything at once, as if it were a synchronous return value. But with the gevent.sleep(2) uncommented, nothing comes back at all.
What's wrong?
I thought the sleep would just add a delay and the "dump" values would be streamed one by one as they become available.
Also, I'm no JavaScript guy, but I can read the code somewhat, and even AJAX wouldn't receive anything if the server isn't returning it. So I'm assuming that rules out a client-side malfunction and that the problem is entirely in this code snippet.
Please note that if, instead of yielding, I just return the value, as in
def function(*input):
    .
    .
    return some_dict
and on the API side I do
return json.dumps(function(input))
then everything works fine on the client side.
I'm confused about when to return self inside a class and when to return a value that may or may not be used to check whether the method ran correctly.
def api_request(self, data):
    # api web request code
    return response.text

def connect(self):
    # login to api, set some vars defined in __init__
    return self

def send_message(self, message):
    # send msg code
    return self
So above there are a few examples. With api_request I know having the text response is a must. But with send_message, what should I return? The response is converted to a dict to check that a key exists, and an error is raised otherwise.
Should it return True, the response dict, or self?
Thanks in advance
Since errors tend to be delivered as exceptions and hence success/fail return values are rarely useful, a lot of object-modifier functions wind up with no return value at all—or more precisely, return None, since you can't return nothing-at-all. (Consider some of Python's built-in objects, like list, where append and extend return None, and dict, where dict.update returns None.)
Still, returning self is convenient for chaining method calls, even if some Pythonistas don't like it. See kindall's answer in "Should internal class methods return values or just modify instance variables in python?" for example.
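For illustration, a tiny sketch of what returning self buys you, using the hypothetical API class from the question:

api = API()

# because connect() and send_message() each return self,
# the calls can be chained in a single expression:
api.connect().send_message('hello').send_message('world')

# with methods that return None instead, each call needs its own statement:
api.connect()
api.send_message('hello')
api.send_message('world')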
Edit to add some examples based on comment:
What you "should" return—or raise an exception, in which case, "what exception"—depends on the problem. Do you want send_message() to wait for a response, validate that response, and verify that it was good? If so, do you want it to raise an error if there is no response, the validation fails, or the response was valid but says "message rejected"? If so, do you want different errors for each failure, etc? One reasonable (for some value of reasonable) method is to capture all failures with a "base" exception, and make each "type" of failure a derivative of that:
class ZorgError(Exception):  # catch-all "can't talk via the Zorg-brand XML API"
    pass

class ZorgRemoteDown(ZorgError):  # connect or send failed, or no response/timeout
    pass

class ZorgNuts(ZorgError):  # remote response incomprehensible
    pass

class ZorgDenied(ZorgError):  # remote says "permission denied"
    pass

# add more if needed
Now some of your functions might look something like this (note, none of this is tested):
def connect(self):
    """connect to server, log in"""
    ...  # do some prep work
    addr = self._addr
    try:
        self._sock.connect(addr)
    except socket.error as err:
        if err.errno == errno.ECONNREFUSED:  # server is down
            raise ZorgRemoteDown(addr)  # translate that to our Zorg error
        # add more special translation here if needed
        raise  # some other problem, propagate it
    ...  # do other stuff now that we're connected, including trying to log in
    response = self._get_response()
    if response == 'login denied':  # or whatever that looks like
        raise ZorgDenied()  # maybe say what exactly was denied, maybe not
    # all went well, return None by not returning anything

def send_message(self, msg):
    """encode the message in the way the remote likes, send it, and wait for
    a response from the remote."""
    response = self._send_and_wait(self._encode(msg))
    if response == 'ok':
        return
    if response == 'permission denied':
        raise ZorgDenied()
    # don't understand what we got back, so say the remote is crazy
    raise ZorgNuts(response)
Then you need some "internal" functions like these:
def _send_and_wait(self, raw_xml):
    """send raw XML to server"""
    try:
        self._sock.sendall(raw_xml)
    except socket.error as err:
        if err.errno in (errno.EHOSTDOWN, errno.ENETDOWN):  # add more if needed
            raise ZorgRemoteDown(self._addr)
        raise
    return self._get_response()

def _get_response(self):
    """wait for a response, which is supposedly XML-encoded"""
    ... some code here ...
    if we_got_a_timeout_while_waiting:
        raise ZorgRemoteDown(self._addr)
    try:
        return some_xml_decoding_stuff(raw_xml)
    except SomeXMLDecodeError:
        raise ZorgNuts(raw_xml)  # or something else suitable for debug
You might choose not to translate socket.errors at all, and not have all your own errors; perhaps you can squeeze your errors into ValueError and KeyError and so on, for instance.
These choices are what programming is all about!
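To round this out, a hypothetical caller built on the methods above can rely on the shared base class to catch everything in one place, or handle specific failures separately:

client = Zorg()  # hypothetical client class exposing connect()/send_message()
try:
    client.connect()
    client.send_message('hello')
except ZorgDenied:
    print('permission denied, check your credentials')
except ZorgError as err:
    # ZorgRemoteDown, ZorgNuts, and any future ZorgError subclass land here
    print('talking to the remote failed:', err)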
Generally, objects in Python are mutable. You therefore do not need to return self, as the modifications you make in a method are reflected in the object itself.
To use your example:
api = API() # initialise the API
if api.connect(): # perhaps return a bool, indicating that the connection succeeded
    api.send_message()  # you now know that this API instance is connected, and can send messages
I'm trying to make an IRC bot using the twisted.words.protocols.irc module.
The bot reads messages from a channel and parses them for command strings.
Everything works fine except when I need the bot to identify a nick by sending a whois command: the whois reply is not handled until the privmsg method (the method from which I'm doing the parsing) returns.
example:
from twisted.words.protocols import irc

class MyBot(irc.IRCClient):
    ..........

    def privmsg(self, user, channel, msg):
        """This method is called when the client receives a message"""
        if msg.startswith(':whois '):
            nick = msg.split()[1]
            self.whois(nick)
            print(self.whoislist)

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        """This method is called when the client receives a reply for whois"""
        self.whoislist[prefix] = params
Is there a way to make the bot wait for a reply after self.whois(nick)?
Perhaps by using a thread (I don't have any experience with those)?
Deferred is a core concept in Twisted; you must be familiar with it to use Twisted.
Basically, your whois-checking function should return a Deferred that is fired when you receive the whois reply.
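A minimal sketch of that idea, assuming a single outstanding whois at a time and using the same irc_RPL_* handlers the question already defines, might look like this:

from twisted.internet import defer
from twisted.words.protocols import irc

class MyBot(irc.IRCClient):

    _pending_whois = None  # (Deferred, collected replies) for the whois in flight

    def whois_info(self, nick):
        """Send a whois and return a Deferred that fires with the collected replies."""
        d = defer.Deferred()
        self._pending_whois = (d, {})
        self.whois(nick)
        return d

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        if self._pending_whois is not None:
            self._pending_whois[1][prefix] = params

    def irc_RPL_ENDOFWHOIS(self, prefix, params):
        if self._pending_whois is not None:
            d, replies = self._pending_whois
            self._pending_whois = None
            d.callback(replies)

    def _show_whois(self, replies):
        print(replies)

    def privmsg(self, user, channel, msg):
        if msg.startswith(':whois '):
            nick = msg.split()[1]
            # don't block; register a callback that runs once the reply arrives
            self.whois_info(nick).addCallback(self._show_whois)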
I managed to fix this by running all handler methods as threads and, following kirelagin's suggestion, setting a field before running a whois query and modifying the method that receives the data to change the field when a reply arrives. It's not the most elegant solution, but it works.
Modified code:
import thread
from time import sleep

class MyBot(irc.IRCClient):
    ..........

    def privmsg(self, user, channel, msg):
        """This method is called when the client receives a message"""
        if msg.startswith(':whois '):
            nick = msg.split()[1]
            self.whois_status = 'REQUEST'
            self.whois(nick)
            while not self.whois_status == 'ACK':
                sleep(1)
            print(self.whoislist)

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        """This method is called when the client receives a reply for whois"""
        self.whoislist[prefix] = params

    def handleCommand(self, command, prefix, params):
        """Determine the function to call for the given command and call
        it with the given arguments.
        """
        method = getattr(self, "irc_%s" % command, None)
        try:
            # all handler methods are now threaded.
            if method is not None:
                thread.start_new_thread(method, (prefix, params))
            else:
                thread.start_new_thread(self.irc_unknown, (prefix, command, params))
        except:
            irc.log.deferr()

    def irc_RPL_WHOISCHANNELS(self, prefix, params):
        """docstring for irc_RPL_WHOISCHANNELS"""
        self.whoislist[prefix] = params

    def irc_RPL_ENDOFWHOIS(self, prefix, params):
        self.whois_status = 'ACK'
Update: It seems to be related to the way untagged responses are handled by Twisted; the only example I have found seems to iterate through the received data and somehow collect the responses to its command, though I am not sure how...
I am trying to implement the IMAP4 quota commands as defined in RFC 2087 (https://www.rfc-editor.org/rfc/rfc2087).
Code - ImapClient
class SimpleIMAP4Client(imap4.IMAP4Client):
    """
    A client with callbacks for greeting messages from an IMAP server.
    """
    greetDeferred = None

    def serverGreeting(self, caps):
        self.serverCapabilities = caps
        if self.greetDeferred is not None:
            d, self.greetDeferred = self.greetDeferred, None
            d.callback(self)

    def lineReceived(self, line):
        print "<" + str(line)
        return imap4.IMAP4Client.lineReceived(self, line)

    def sendLine(self, line):
        print ">" + str(line)
        return imap4.IMAP4Client.sendLine(self, line)
Code - QUOTAROOT Implementation
def cbExamineMbox(result, proto):
    """
    Callback invoked when examine command completes.
    Retrieve the subject header of every message in the mailbox.
    """
    print "Fetching storage space"
    cmd = "GETQUOTAROOT"
    args = _prepareMailboxName("INBOX")
    resp = ("QUOTAROOT", "QUOTA")
    d = proto.sendCommand(Command(cmd, args, wantResponse=resp))
    d.addCallback(cbFetch, proto)
    return d

def cbFetch(result, proto):
    """
    Finally, display headers.
    """
    print "Got Quota"
    print result
Output
Fetching storage space
>0005 GETQUOTAROOT INBOX
<* QUOTAROOT "INBOX" ""
<* QUOTA "" (STORAGE 171609 10584342)
<0005 OK Success
Got Quota
([], 'OK Success')
So I am getting the data, but the result doesn't contain it; I'm thinking that's because they are untagged responses?
Since the IMAP4 protocol mixes together lots of different kinds of information as "untagged responses", you probably also need to update some other parts of the parsing code in the IMAP4 client implementation.
Specifically, take a look at twisted.mail.imap4.Command and its finish method. Also look at twisted.mail.imap4.IMAP4Client._extraInfo, which is what is passed as the unusedCallback to Command.finish.
To start, you can check to see if the untagged responses to the QUOTA command are being sent to _extraInfo (and then dropped (well, logged)).
If so, I suspect you want to teach Command to recognize QUOTA and QUOTAROOT untagged responses to the QUOTA command, so that it collects them and sends them as part of the result it fires its Deferred with.
If not, you may need to dig a bit deeper into the logic of Command.finish to see where the data does end up.
You may also want to actually implement the Command.wantResponse feature, which appears to be only partially formed currently (i.e., lots of client code tries to send interesting values into Command to initialize that attribute, but as far as I can tell nothing actually uses the value of that attribute).
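As a starting point for that check, one quick way is to override _extraInfo in the client subclass from the question and log what arrives there (untested, and it assumes _extraInfo receives the unclaimed untagged lines as its argument):

class QuotaProbeIMAP4Client(SimpleIMAP4Client):

    def _extraInfo(self, lines):
        # print whatever untagged responses the Command machinery did not claim,
        # then fall through to the normal (log-and-drop) handling
        print "unclaimed untagged responses:", lines
        return imap4.IMAP4Client._extraInfo(self, lines)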