I have a simple bidirectional RPC proto interface, such as:
rpc RouteData(stream RouteNote) returns (stream ProcessedRouteNote) {}
The thing is that it might take a while until I can return a ProcessedRouteNote.
I would like to know the recommended way to store away a connected client so I can stream back a response (i.e. a ProcessedRouteNote) at a later time.
"def RouteData(self, request_iterator, servicer_context)"
It seems that saving the "request_iterator" passed to "def RouteData", which is registered as a "stream_stream" "RpcMethodHandler", and then calling "stream_stream" directly would do the job.
Will appreciate any feedback.
I could probably simplify this question further by asking: how can I send data/a response to a specific client that has previously sent a (bidirectional) request to the server? It should be noted that the intention is to send the response outside the context of the server's RPC handler. Moreover, there could be dozens of requests but only a single response, so I have no interest in blocking the RPC handler to wait for the response. I really hope this is possible with gRPC, otherwise it is a deal breaker for us.
Thanks,
Mike
You can have a generator that yields response values and waits on a threading.Event object to trigger; the event might be stored in a hashtable somewhere, depending on your application logic.
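A minimal sketch of that idea follows. Everything here apart from context.peer() and the handler signature is application logic, not grpc API; the peer-keyed dict and push_response are illustrative names:

import threading

class RouteServicer(object):  # in practice, inherit from the generated servicer base class
    def __init__(self):
        self.clients = {}  # peer id -> (threading.Event, list of queued responses)

    def RouteData(self, request_iterator, context):
        event, queued = threading.Event(), []
        self.clients[context.peer()] = (event, queued)
        while True:
            event.wait()       # the generator parks here until a response is pushed
            event.clear()
            while queued:
                yield queued.pop(0)

    def push_response(self, peer, response):
        # called from anywhere outside the RPC handler to stream a response back
        event, queued = self.clients[peer]
        queued.append(response)
        event.set()

Note that the handler thread stays parked on the event rather than doing work of its own; a production version would also need locking around the shared dict and a way to end the stream when the client disconnects.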
I've set up a running Pymodbus server based on the 'Updating Server' (v2.5.3) example.
https://pymodbus.readthedocs.io/en/v2.5.3/source/example/updating_server.html
Everything works ok.
Now I want to trigger a function (which will simply increment a value by 1) when (any) client requests to read/poll the contents of the holding registers.
Console output when a client requests function code 3:
My knowledge of Pymodbus is limited, so any help would be great.
There is likely a way to overload the StartTCPServer function, though looking through the source code it seems tricky, since it's built on asyncio (not that that's bad, just less conducive to easily overloading this one thing you're trying to do).
However there are several modbus libraries you can use, and sourceperl's pyModbusTCP makes it very easy to do exactly what you're trying to do. Check out the example https://github.com/sourceperl/pyModbusTCP/blob/master/examples/server_change_log.py
There they use the built-in hooks for doing just that, but you could easily overload any part of the server if you wanted to, for example to capture the incoming Modbus message before it gets processed.
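For the read-triggered counter specifically, a sketch along the lines of that example might look like this. It assumes pyModbusTCP's v0.2.x API, where the server takes a data_bank argument and holding-register reads go through DataBank.get_holding_registers:

from pyModbusTCP.server import ModbusServer, DataBank

class CountingDataBank(DataBank):
    """Increments a counter every time the holding registers are read."""
    def __init__(self):
        super().__init__()
        self.read_count = 0

    def get_holding_registers(self, address, number=1, srv_info=None):
        # called for every function-code-3 request before the reply is built
        self.read_count += 1
        print('holding register read #%d (addr=%d, qty=%d)'
              % (self.read_count, address, number))
        return super().get_holding_registers(address, number, srv_info)

server = ModbusServer(host='0.0.0.0', port=502, data_bank=CountingDataBank())
server.start()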
I have used Python's requests module to do a POST call (within a loop) to a single URL, with a varying set of data in each iteration. I have already used session reuse so that the underlying TCP connection is shared by each call to the URL during the loop's iterations.
However, I want to further speed up my processing by:
1. caching the URL and the authentication values (user ID and password), as they remain the same in each call
2. spawning multiple sub-processes which could take a certain number of calls and pass them as a group, thus allowing for parallel processing of these smaller sub-processes
Please note that I pass my authentication as headers in base64 format and a pseudo code of my post call would typically look like this:
import requests

s = requests.Session()
url = 'https://example.net/'

for record in data_records:  # loop through data records
    headers = {'authorization': authbase64string, 'other-header': 'value'}
    data = record  # the data for this iteration
    # POST call, reusing the session's TCP connection
    r = s.post(url, data=data, headers=headers)
    response = r.json()
# end of loop and program
Please review the scenario and suggest any techniques/tips which might be of help.
Thanks in advance,
Abhishek
You can:
do it as you described (if you want to make it faster, you can run it using multiprocessing), and e.g. set the headers on the session, not on each request (see the sketch after this list)
modify the target server to allow sending one POST request with multiple data records (so you limit the time spent on connecting, etc.)
do some optimizations on the server side, so it replies faster (or just store the requests and send you the response using some callback)
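A sketch of the first point, reusing the placeholder names from the question (the URL, authbase64string, and the record list are stand-ins for your real values):

import requests
from multiprocessing import Pool

url = 'https://example.net/'
session = None  # one session per worker process

def init_worker(auth_header):
    global session
    session = requests.Session()
    # set the constant headers once on the session instead of on every request
    session.headers.update({'authorization': auth_header})

def post_record(record):
    return session.post(url, data=record).json()

if __name__ == '__main__':
    authbase64string = 'dXNlcjpwYXNz'  # placeholder auth value
    records = ['data for iteration %d' % i for i in range(100)]  # placeholder records
    pool = Pool(processes=4, initializer=init_worker, initargs=(authbase64string,))
    results = pool.map(post_record, records)
    pool.close()
    pool.join()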
It would be much easier if you described the use case :)
I am using Python 2 with requests. This question is more of a curiosity about how I can improve this performance.
The issue now is that I must send a cryptographic signature in the header of the request to an HTTPS server. This signature includes a "nonce", which must be a timestamp and must ALWAYS increase (on the server side).
Obviously this can wreak havoc when running multiple HTTP sessions on multiple threads: requests end up being sent out of order because threads get interrupted between generating the headers and sending the HTTPS POST request.
The solution is to hold a lock from before creating the signature until the end of receiving the HTTPS data. Ideally, I would like to release the lock after the HTTP request was SENT, without having to wait for the data to be received. Is there any way I can release the lock, using requests, after just the HTTP headers are SENT? See the code sample:
self.lock is a threading.Lock. This instance of this class (self) is shared amongst multiple threads.
def get_nonce(self):
    return int(1000 * time.time())

def do_post_request(self, endpoint, parameters):
    with self.lock:
        url = self.base + endpoint
        urlpath = endpoint
        # the nonce must be generated and sent while no other thread can interleave
        parameters['nonce'] = self.get_nonce()
        postdata = urllib.urlencode(parameters)
        message = urlpath + hashlib.sha256(str(parameters['nonce']) + postdata).digest()
        signature = hmac.new(base64.b64decode(self.secret_key), message, hashlib.sha512)
        headers = {
            'API-Key': self.api_key,
            'API-Sign': base64.b64encode(signature.digest())
        }
        data = urllib.urlencode(parameters)
        # the lock is held until the full response has been received
        response = requests.post(url, data=data, headers=headers, verify=True).json()
    return response
It sounds like the requests library doesn't have any support for sending asynchronously. From the documentation:
With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.
If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks. Two excellent examples are grequests and requests-futures.
I saw in a comment that you hesitate to add more dependencies, so the only suggestions I have are:
Add retry logic for when your nonce is rejected (a sketch follows this list). This seems like the most pythonic solution, and should work fine as long as the nonce isn't rejected very often.
Throttle the nonce generator. Hold the timestamp used for the previous nonce, and sleep if it hasn't been long enough when the next nonce is requested.
Batch the messages. If the protocol allows it, you may find that throughput actually goes up when you add a delay to wait for other messages and send them as a batch.
Change the server so the nonce values don't have to increase. If you control the server, making the messages independent of each other will give you a much more flexible protocol.
Use a session pool. I'm guessing that the nonce values only have to increase within a single session. If you create a thread pool and have each thread open its own session, you could still have reasonable throughput without the timing problems you currently have.
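For the retry option, a minimal sketch wrapping the do_post_request method from the question (the rejection check is hypothetical; match it to whatever your server actually returns for a bad nonce):

def do_post_request_with_retry(self, endpoint, parameters, retries=3):
    for attempt in range(retries):
        response = self.do_post_request(endpoint, parameters)
        # hypothetical rejection check; adapt to the server's error format
        if 'invalid nonce' not in response.get('error', ''):
            return response
    raise RuntimeError('nonce rejected %d times in a row' % retries)

Each retry generates a fresh, larger nonce, because get_nonce is called again inside do_post_request.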
Obviously, you'd have to measure the performance results of making these changes.
Even if you do decide to add a dependency that lets you release the lock after sending the headers, you may still find that you occasionally have timing issues. The message packets with the headers could be delayed on their way to the server.
I'm using the Django Python framework with the Django REST Framework. When a new instance of a model is saved, I need to generate a PDF that is saved locally on the server. Is there a way that I can branch off the task of generating the PDF so that the user immediately gets a 201 response while the server is generating the PDF? I don't know if this would be a suitable situation for multithreading.
The parent's save function is called before starting the PDF generation, so right in between there it would be safe to return 201.
def save(self, *args, **kwargs):
    set_pdf = False
    if self.id is None and self.nda_pdf is not None and len(self.nda_pdf) > 0:
        set_pdf = True
    super(Visitor, self).save(*args, **kwargs)
    if set_pdf:
        generate_pdf(self)
I want to call that generate_pdf(self) function after returning something to the client.
Depending on how long it takes to generate the PDF, you may want to block the response until the file is generated and only then return HTTP 201.
Multithreading has no real influence here, neither for the client nor for the server:
The client should do non-blocking requests anyway (or at least do them from a thread different from the one which handles UI events). Moreover, if the client doesn't care about the response (i.e. whether the PDF is generated correctly or not), it's up to the client to send the request without waiting for the response.
The server... well, the server has to do the PDF generation anyway. Returning HTTP 201 immediately won't change anything. Also, the fact that the server is currently responding to one request doesn't mean it won't process another one (unless you have too many requests or a very weirdly configured HTTP server).
If PDF generation actually takes a long time (say, more than a minute), then returning HTTP 202 Accepted (and not HTTP 201!) can be a solution, to avoid timeouts or situations where clients won't understand why the server is not responding for so long.
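If you do decide to branch the work off anyway, here is a minimal fire-and-forget sketch using only the standard library (in production a task queue such as Celery is the more common choice, and this version loses error reporting to the client):

import threading

def save(self, *args, **kwargs):
    set_pdf = (self.id is None and self.nda_pdf is not None
               and len(self.nda_pdf) > 0)
    super(Visitor, self).save(*args, **kwargs)
    if set_pdf:
        # the HTTP response can return while the PDF is still being generated
        threading.Thread(target=generate_pdf, args=(self,), daemon=True).start()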
I'm really having trouble wrapping my head around this, and maybe someone can point me in the right direction:
I'm using Python (the Django framework) for a web application, and I have an additional web-socket server that receives chunked binary data from the browser. I want to send (or stream) those chunks to another server using the python-requests library.
According to the official documentation, you have to provide a generator as the data argument:
arr = []

def streamer():
    global arr
    for i in arr:
        yield i

# let's say this function will get called when a "stream-start" message is sent to the web-socket server
def onStart():
    resp = requests.post("http://some.url/chunked", data=streamer())

# let's say this function will get called when a chunk of binary data is sent to the web-socket server
def onChunk(chunk):
    arr.append(chunk)
In this scenario, how would I possibly be able to send anything, since arr is empty when I send the request? How can I keep the connection open, so that every chunk will be sent?
I think there is some major issue that I don't understand about streaming in general. So, next to hints about solving my actual problem, I would also appreciate any recommendations for tutorials or a good read on this subject.
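For what it's worth, a common pattern (a sketch, not tied to any particular web-socket library; onStart/onChunk/onEnd mirror the hypothetical callbacks above) is to back the generator with a blocking queue, so it waits for chunks instead of iterating over a snapshot of a list:

import queue
import threading
import requests

chunks = queue.Queue()
DONE = object()  # sentinel marking the end of the stream

def streamer():
    while True:
        chunk = chunks.get()  # blocks until the next chunk arrives
        if chunk is DONE:
            return            # ending the generator ends the chunked request body
        yield chunk

def onStart():
    # requests.post blocks until the stream ends, so run it in its own thread
    threading.Thread(target=requests.post,
                     args=("http://some.url/chunked",),
                     kwargs={"data": streamer()}).start()

def onChunk(chunk):
    chunks.put(chunk)

def onEnd():
    chunks.put(DONE)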