Python - Using nonces with multithreading

I am using Python 2 with requests. This question is mostly a curiosity about how I can improve performance.
The issue is that I must send a cryptographic signature in the header of each request to an HTTPS server. This signature includes a "nonce", which must be a timestamp and must ALWAYS increase (as checked on the server side).
Obviously this can wreak havoc when running multiple HTTP sessions across multiple threads. Requests end up being sent out of order because threads get interrupted between generating the headers and sending the HTTPS POST request.
The solution is to hold a lock from just before creating the signature until the HTTPS data has been received. Ideally, I would like to release the lock once the HTTP request has been SENT, rather than waiting for the response data to be received. Is there any way, using requests, to release the lock after just the HTTP headers are sent? See the code sample:
self.lock is a threading.Lock. The instance of this class (self) is shared among multiple threads.
import time
import urllib
import hashlib
import hmac
import base64
import requests

def get_nonce(self):
    # Millisecond timestamp used as the ever-increasing nonce
    return int(1000 * time.time())

def do_post_request(self, endpoint, parameters):
    with self.lock:
        url = self.base + endpoint
        urlpath = endpoint
        parameters['nonce'] = self.get_nonce()
        postdata = urllib.urlencode(parameters)
        # Message to sign: URL path + SHA-256 digest of the nonce and POST body
        message = urlpath + hashlib.sha256(str(parameters['nonce']) + postdata).digest()
        signature = hmac.new(base64.b64decode(self.secret_key), message, hashlib.sha512)
        headers = {
            'API-Key': self.api_key,
            'API-Sign': base64.b64encode(signature.digest())
        }
        data = urllib.urlencode(parameters)
        # The lock is held until the full response has been received
        response = requests.post(url, data=data, headers=headers, verify=True).json()
    return response

It sounds like the requests library doesn't have any support for sending asynchronously. From the requests documentation (Blocking Or Non-Blocking?):
With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.
If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks. Two excellent examples are grequests and requests-futures.
I saw in a comment that you hesitate to add more dependencies, so the only suggestions I have are:
Add retry logic for when your nonce is rejected. This seems like the most Pythonic solution, and it should work fine as long as the nonce isn't rejected very often.
Throttle the nonce generator. Hold the timestamp used for the previous nonce, and sleep if it hasn't been long enough when the next nonce is requested (see the sketch at the end of this answer).
Batch the messages. If the protocol allows it, you may find that throughput actually goes up when you add a delay to wait for other messages and send them as a batch.
Change the server so the nonce values don't have to increase. If you control the server, making the messages independent of each other will give you a much more flexible protocol.
Use a session pool. I'm guessing that the nonce values only have to increase within a single session. If you create a thread pool and have each thread open its own session, you could still have reasonable throughput without the timing problems you currently have.
Obviously, you'd have to measure the performance results of making these changes.
Even if you do decide to add a dependency that lets you release the lock after sending the headers, you may still find that you occasionally have timing issues. The message packets with the headers could be delayed on their way to the server.
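To illustrate the throttling option, here is a minimal sketch of a nonce source that hands out strictly increasing millisecond timestamps across threads; the class and method names are illustrative, not from the original code:

import threading
import time

class NonceSource(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._last = 0

    def next_nonce(self):
        # Serialize nonce generation so timestamps never repeat or go backwards
        with self._lock:
            nonce = int(1000 * time.time())
            if nonce <= self._last:
                # Another thread already used this millisecond; wait it out
                time.sleep((self._last + 1 - nonce) / 1000.0)
                nonce = self._last + 1
            self._last = nonce
            return nonce

Note that this only guarantees that generated nonces increase; it does not by itself guarantee that the requests carrying them arrive at the server in order, so it still needs to be combined with the retry or locking approaches above.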

Related

How to handle callbacks in Python 3?

I have a custom HTTP method/verb (let's say LISTEN) which allows me to listen for updates on a resource stored on a remote server. The API for this exposes a blocking call that keeps my client code listening for updates until I interrupt the execution of that call. Just to provide an example, if I were to perform a curl as follows:
curl -X LISTEN http://<IP-Address>:<Port>/resource
Executing this creates a blocking call that provides me with updates on the resource whenever a new value is pushed to the server (similar to a pub-sub model). The response would look similar to this:
{"data":"value update 1","id":"id resource"}
{"data":"value update 2","id":"id resource"}
(...)
If I were to write code to handle this in Python, how do I call my url using this custom verb and handle the blocking call/call back while ensuring that this does not block the execution of the rest of my code?
If you're using the Python requests lib with a custom HTTP verb and need to read streamed content, you can do something like this:
import json
import requests  # sudo pip3 install requests

url = "http://........."
r = requests.request('LISTEN', url, stream=True)
for line in r.iter_lines():
    # filter out keep-alive new lines
    if line:
        decoded_line = line.decode('utf-8')
        print(json.loads(decoded_line))
Note: by default all requests calls are blocking, so you need to run this code in a separate thread/process to avoid that.
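For example, a minimal sketch of running the loop above in a background thread, reusing the imports from the snippet above (the function name and daemon setup are illustrative):

import threading

def stream_updates(url):
    # Same streaming loop as above, factored out so it can run in a thread
    r = requests.request('LISTEN', url, stream=True)
    for line in r.iter_lines():
        if line:
            print(json.loads(line.decode('utf-8')))

listener = threading.Thread(target=stream_updates, args=(url,))
listener.daemon = True  # don't keep the process alive just for the listener
listener.start()
# ... the rest of the program continues while updates print in the background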
...while ensuring that this does not block the execution of the rest of my code
Since you provided no details about your application, I will try to list some general thoughts on the question.
Your task can be solved in many ways; the right solution depends on your app architecture.
If this is a web server, you can take a look at tornado (see streaming callbacks) or the aiohttp streaming examples (sketched below).
On the other hand, you can run the code above in a separate process and communicate with other applications/services using, for example, RabbitMQ (or another IPC mechanism).
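As a rough sketch of the aiohttp route (the URL placeholder and custom verb come from the question; everything else is illustrative), the same stream can be consumed without blocking the event loop:

import asyncio
import json
import aiohttp

async def listen(url):
    async with aiohttp.ClientSession() as session:
        async with session.request('LISTEN', url) as resp:
            # resp.content is a StreamReader; iterating it yields lines
            async for line in resp.content:
                if line.strip():
                    print(json.loads(line.decode('utf-8')))

asyncio.get_event_loop().run_until_complete(listen("http://........."))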

Storing client connection on bidirectional call

I have a simple bidirectional RPC proto interface, such as:
rpc RouteData(stream RouteNote) returns (stream ProcessedRouteNote) {}
The thing is that it might take a while until I can return a ProcessedRouteNote.
I would like to know the recommended way to store away a connected client so I can stream back a response (i.e. a "ProcessedRouteNote") at a later time.
"def RouteData(self, request_iterator, servicer_context)"
It seems that saving the "request_iterator" passed to "def RouteData" (the "RpcMethodHandler") and then calling stream_stream directly would do the job.
Will appreciate any feedback.
I could probably simplify this question further by asking: how can I send data/a response to a specific client that has previously sent a (bidirectional) request to the server? It should be noted that the intention is to send the response outside the context of the server's RPC handler. Moreover, there could be dozens of requests but only a single response, so I have no interest in blocking the RPC handler while waiting for a response. I really hope this is possible with grpc; otherwise it is a deal breaker for us.
Thanks,
Mike
You can have a generator that yields response values, waiting on a threading.Event object (which might be stored in a hashtable somewhere, depending on your application logic) until a response is ready to be sent.
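A minimal sketch of that idea, using a per-client queue instead of a bare Event so the response values themselves can be handed over (the servicer class, the respond_to helper, and deriving a client id from the first message are all assumptions, not part of the original question):

import Queue  # Python 2; use "import queue" on Python 3

class RouteServicer(object):
    def __init__(self):
        self.clients = {}  # client id -> Queue of pending ProcessedRouteNote

    def RouteData(self, request_iterator, servicer_context):
        outbox = Queue.Queue()
        first = next(request_iterator)
        self.clients[first.id] = outbox  # assumes RouteNote carries an id field
        while True:
            note = outbox.get()  # blocks until respond_to() supplies a value
            if note is None:
                break  # sentinel: end the response stream
            yield note

    def respond_to(self, client_id, processed_note):
        # Called from anywhere outside the RPC handler to push a response
        self.clients[client_id].put(processed_note)

The generator still blocks inside gRPC's worker thread, but your own application code never does; it simply puts a value on the right queue whenever the response becomes ready.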

Python requests caching authentication headers

I have used Python's requests module to do a POST call (within a loop) to a single URL, with a varying set of data in each iteration. I have already used session reuse so that the underlying TCP connection is reused across the loop's iterations.
However, I want to further speed up my processing by:
1. caching the URL and the authentication values (user id and password), as they remain the same in each call;
2. spawning multiple sub-processes, each taking a certain number of calls and handling them as a group, thus allowing these smaller batches to be processed in parallel.
Please note that I pass my authentication as headers in base64 format, and pseudocode for my POST call would typically look like this:
import requests

s = requests.Session()
url = 'https://example.net/'
for record in data_records:  # loop through data records
    headers = {'authorization': authbase64string}  # plus other headers
    data = record  # data for this loop
    # POST call
    r = s.post(url, data=data, headers=headers)
    response = r.json()
# end of loop and program
Please review the scenario and suggest any techniques/tips which might be of help.
Thanks in advance,
Abhishek
You can:
do it as you described (if you want to make it faster, run it using multiprocessing) and, e.g., add the headers to the session rather than to each request (see the sketch below).
modify the target server to allow sending one POST request with multiple data records (limiting the time spent on connecting, etc.).
do some optimizations on the server side so it replies faster (or have it just store the requests and send you the response later using some callback).
It would be much easier if you described the use case :)
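A minimal sketch of the first suggestion, setting the unchanging authorization header once on the session (authbase64string and data_records are taken from the question's pseudocode):

import requests

s = requests.Session()
s.headers.update({'authorization': authbase64string})  # sent with every request
url = 'https://example.net/'
for record in data_records:
    r = s.post(url, data=record)
    response = r.json()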

How to run a function after returning a 201 from a view

I'm using the Django Python framework with Django REST Framework. When a new instance of a model is saved, I need to generate a PDF that is saved locally on the server. Is there a way I can branch off the task of generating the PDF so that the user immediately gets a 201 response while the server is generating the PDF? I don't know if this would be a suitable situation for multithreading.
The parent's save function is called before the PDF generation starts, so right in between the two it would be safe to return 201.
def save(self, *args, **kwargs):
    set_pdf = False
    if self.id is None and self.nda_pdf is not None and len(self.nda_pdf) > 0:
        set_pdf = True
    super(Visitor, self).save(*args, **kwargs)
    if set_pdf: generate_pdf(self)
I want to call that generate_pdf(self) function after returning something to the client.
Depending on how long it takes to generate the PDF, you may want to block the response until the file is generated and only then return HTTP 201.
This has no real bearing on multithreading, either for the client or for the server:
The client should do non-blocking requests anyway (or at least do them from a thread different from the one handling UI events). Moreover, if the client doesn't care about the response (i.e. whether the PDF is generated correctly or not), it's up to the client to send the request without waiting for the response.
The server... well, the server has to do the PDF generation anyway. Returning HTTP 201 immediately won't change that. Also, the fact that the server is currently responding to one request doesn't mean it won't process another one (unless you have too many concurrent requests or a very weirdly configured HTTP server).
If PDF generation actually takes a long time (say more than a minute), then returning HTTP 202 Accepted (and not HTTP 201!) can be a solution in order to avoid timeouts or situations where clients won't understand why the server is not responding for too long.
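If you do go the deferred route, a minimal sketch of kicking off the PDF generation without blocking the response might look like this, reworking the question's save() (in production a task queue such as Celery is a safer choice than a bare thread inside the web worker):

import threading

def save(self, *args, **kwargs):
    set_pdf = False
    if self.id is None and self.nda_pdf:  # truthy check covers None and ''
        set_pdf = True
    super(Visitor, self).save(*args, **kwargs)
    if set_pdf:
        # Return to the caller immediately; generate the PDF in the background
        threading.Thread(target=generate_pdf, args=(self,)).start()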

Can you hook pyramid into twisted, skipping the wsgi part?

Given that:
WSGI doesn't play very well with async.
Twisted ergonomics suck.
Pyramid is very clean and component oriented.
How could I use Pyramid and Twisted?
I can imagine writing a Twisted protocol to get the raw HTTP request. But then I can't see how to parse it into a Pyramid request object. All the documented Pyramid tools seem to expect some WSGI interface at some point.
I could use waitress code to parse the request and turn it into a WSGI environ, then pass that environ to Pyramid, but that's a lot of work, with many issues down the road that I'm sure I can't even imagine.
I know Twisted includes a WSGI server, but it implies synchronicity in the app code, which does not serve my purpose. I want to be able to use the request and response objects, renderers, routers, and other Pyramid tools in an asynchronous Twisted protocol, with asynchronous, non-blocking app code as well. Hence I don't want to use WSGI.
The Twisted API is verbose, heavy, and unintuitive compared to any other asynchronous toolkit you'll find in Python, or even in other languages; hence the criticism of its ergonomics. I can use it, but training newcomers on my team to do so has a high cost. I wish to lower it.
Indeed, it packs a lot of power that I want to use.
To elaborate on my needs: I'm building a tool using crossbar.io and cyclone to get a WAMP/HTTP framework that is a bit friendlier to my team than the current tools. But cyclone is not as complete as Pyramid, and I was hoping Pyramid's components were decoupled enough that the WSGI paradigm was not enforced, so I could leverage the tremendous work they did on it. All I need is an entry point: somewhere to get the raw HTTP text and parse it into a request object, and somewhere to take a response object and return HTTP to the client. I wish I didn't have to write a protocol manually for this; HTTP is tricky and I'm sure I'd get it wrong in many ways.
One precision: I don't wish to use the full Pyramid framework, just some components here and there, such as routing, cookie parsing, CSRF protection, etc. I won't use their view system, as it assumes a synchronous API.
Looking at Pyramid, I can see that it expects the entire request to be parsed and turned into a request object. It also returns the response as an object. So part of the problem in hooking Twisted and Pyramid together is to:
get the HTTP request text as one big chunk from Twisted;
parse it into the request object somehow (I couldn't find a simple function to do this, but if I can turn it into a WSGI environ + request object, Pyramid can convert it to its own format);
get the Pyramid response object and turn it into a generator of strings (an adaptor can be found, since that's what WSGI does);
send the response back with Twisted from this generator of strings.
An alternative could be to use something simpler than Pyramid, like Werkzeug, for the glue.
Twisted Web lets you interpret HTTP request bodies (regardless of content-type, HTML or otherwise) incrementally as they're received - but it doesn't make doing so very easy. There's a very old ticket that we never seem to make much progress on for improving this situation. Until it's resolved, there probably isn't a better answer than the one I'm about to give. This incremental HTTP request body delivery, I think, is what you're looking for here (because you said you expect to get the request "as one big chunk").
The hook for incremental request body handling is Request.handleContentChunk. You can see a complete demonstration of its use in my answer to Python server for streaming request body content.
This gives you the data as it arrives at the server. If you want to use Pyramid, you'll have to construct a Pyramid request that uses this data. Most of the initialization of the Pyramid request object should be straightforward (e.g. filling the environ dictionary with the request headers; you can take these from Request.requestHeaders). The slightly trickier part will be initializing the Pyramid request object's body - which is supposed to be a file-like object that provides synchronous access to the request body.
On the one hand, if you dispatch the request before the request body has been completely received then you avoid the cost of buffering the entire request body in memory. On the other hand, if you let application code begin to read the request body then you have to deal with the circumstance that it tries to read beyond the point in the data which has actually arrived at the server. This can be dealt with. The body file-like object is expected to present a blocking interface. All you have to do is block until the data is available.
Here's a brief (incomplete, not meant to actually work) sketch of what I mean:
# XXX Note: Queue.Queue itself is thread-safe, but this sketch assumes
# a single reading thread.
from Queue import Queue

class Body(object):
    def __init__(self):
        self._buffer = Queue()
        self._pending = b""
        self._eof = False

    def read(self, how_many):
        if self._eof:
            return b""
        if self._pending == b"":
            # Block until the next chunk arrives from the Twisted side
            data = self._buffer.get()
            if data is None:
                self._eof = True
                return b""
            else:
                self._pending = data
        # Hand back at most how_many bytes, keeping the rest for later
        result = self._pending[:how_many]
        self._pending = self._pending[how_many:]
        return result

    def _add_data(self, data):
        # Called as chunks arrive; put None to signal end of body
        self._buffer.put(data)
You can create an instance of this type, initialize the Pyramid request object's body attribute with it, and then call _add_data on it in the Twisted Request class's handleContentChunk callback.
You could also implement this as an enhancement to Twisted's own WSGI server. For the sake of simplicity, Twisted's WSGI server does read the entire request body before dispatching the request to the WSGI application - but it doesn't have to. If this is the only problem with WSGI then it'd be better to improve the quality of the WSGI implementation and keep the interface rather than both implementing the improvement and stepping outside of the interface (tying you more closely to both Twisted and Pyramid - unnecessarily).
The second half of the problem, generating response bodies incrementally, shouldn't really be a problem. Twisted's WSGI container will write out response data as the WSGI application object yields it. Or if you use twisted.web.resource instead of the WSGI interface, you can call request.write as many times as you like, at any time you like (up until you call request.finish). The only trick is that if you want to do this you must return NOT_DONE_YET from the render method.
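For reference, here is a minimal sketch of the twisted.web.resource approach described above (the resource class, port, and timing are illustrative):

from twisted.internet import reactor
from twisted.web.resource import Resource
from twisted.web.server import NOT_DONE_YET, Site

class Streaming(Resource):
    isLeaf = True

    def render_GET(self, request):
        # Write the first chunk now, promise more later
        request.write(b"first chunk\n")
        reactor.callLater(1.0, self._more, request)
        return NOT_DONE_YET

    def _more(self, request):
        request.write(b"second chunk\n")
        request.finish()

reactor.listenTCP(8080, Site(Streaming()))
reactor.run()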
