I have used Python's requests module to do a POST call (within a loop) to a single URL, with a varying set of data in each iteration. I am already reusing a session so that the underlying TCP connection is shared across the calls to the URL during the loop's iterations.
However, I want to further speed up my processing by: 1. caching the URL and the authentication values (user ID and password), as they remain the same in each call; 2. spawning multiple sub-processes, each of which takes a group of calls, allowing these smaller groups to be processed in parallel.
Please note that I pass my authentication as headers in base64 format, and pseudocode for my POST call would typically look like this:
import requests

s = requests.Session()
url = 'https://example.net/'
for record in data_records:  # loop through data records
    headers = {'authorization': authbase64string, 'other': 'headers'}
    data = record  # the data for this iteration
    # POST call
    r = s.post(url, data=data, headers=headers)
    response = r.json()
# end of loop and program
Please review the scenario and suggest any techniques/tips that might be of help.
Thanks in advance,
Abhishek
You can:
do it as you described (if you want to make it faster, you can run it using multiprocessing) and e.g. add the headers to the session rather than to each request (see the sketch after this list).
modify the target server to allow sending one POST request with multiple data records (so you limit the time spent on connecting, etc.)
do some optimizations on the server side so that it replies faster (or just have it store the requests and send you the responses via some callback)
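For the first option, a minimal sketch of what that could look like; the chunking scheme, worker count, and placeholder values are assumptions, not part of your code:

import requests
from multiprocessing import Pool

URL = 'https://example.net/'
authbase64string = '...'  # your base64-encoded credentials

def post_chunk(records):
    # each worker process builds its own session; the auth header is set
    # once on the session instead of being rebuilt for every request
    s = requests.Session()
    s.headers.update({'authorization': authbase64string})
    return [s.post(URL, data=data).json() for data in records]

if __name__ == '__main__':
    data_records = ['data1', 'data2', 'data3', 'data4']  # stand-ins for your records
    n = 4  # assumed number of worker processes; tune for your workload
    chunks = [data_records[i::n] for i in range(n)]
    with Pool(n) as pool:
        results = pool.map(post_chunk, chunks)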
It would be much easier if you described the use case :)
I have a Cloud Function (Python) that does some long (but not heavy) calculation that depends on other external APIs, so the response might take some time (30 seconds).
def test(request):
    request_json = request.get_json()
    for x in y:
        r = get_external_api_respond()
        # calculate r and return a partial response
The problems and questions are:
Is there a way to start returning results to the web client as they arrive in the Function? Right now I know an HTTP request can only return once and then the connection closes.
Pagination in this case would be too complicated to achieve, as results depend on previous results, etc. Are there any solutions in Google Cloud to return live results as they come? Another type of Function?
Will it be very expensive if the function stays open for a minute, even though it does no heavy calculation, just multiple API requests in a loop?
You need to use some intermediary storage, which you top up from your function and read from in the HTTP requests made by your web page. I wouldn't really call it a producer-consumer pattern, as you produce once but consume as many times as you need to.
You can use Table Storage or Blob Storage if you use Azure.
https://learn.microsoft.com/en-us/azure/storage/tables/table-storage-overview
https://azure.microsoft.com/en-gb/products/storage/blobs/
With Table, you can just add records as you get them calculated.
With Blob, you can use the Append blob type, or just read and write the blob again (it seems like you have a single producer).
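For the Table option, a minimal sketch using the azure-data-tables package; the table name, the job_id/seq scheme, and the helper function are assumptions for illustration:

from azure.data.tables import TableServiceClient

conn_str = "..."  # your storage account connection string
service = TableServiceClient.from_connection_string(conn_str)
table = service.get_table_client("results")  # assumed table name

def publish_partial_result(job_id, seq, payload):
    # one entity per partial result; the web client polls by PartitionKey
    table.create_entity({
        "PartitionKey": job_id,
        "RowKey": str(seq).zfill(6),  # zero-padded so rows sort in arrival order
        "data": payload,
    })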
As a bonus, you can distribute your task across multiple functions and get results much faster. This is called scale-out.
Would it be possible to implement a rate-limiting feature in my Tornado app? For example, limiting the number of HTTP requests from a specific client if they are identified as sending too many requests per second (which red-flags them as bots).
I think I could do it manually by storing the requests in a database and analyzing the requests per IP address, but I was just checking if there is already an existing solution for this feature.
I tried checking the GitHub page of Tornado; I have the same questions as this post but no explicit answer was provided. I checked Tornado's wiki links as well, but I think rate limiting is not handled yet.
Instead of storing them in the DB, it would be better to keep them in a dictionary held in memory for easy access.
Also, can you share whether the API sits behind a load balancer, and which web server is used?
The enterprise-grade solution to your problem is Ambassador.
You can use Ambassador's solutions, like the Envoy proxy and Edge Stack, and set them up to do the needful.
Additionally, to store the data you can use any popular cache DB that stores key:value pairs, for example Redis (a sketch follows below).
If you are doing this for a very small project, you can use some npm/pip packages.
Read the docs: https://www.getambassador.io/products/edge-stack/api-gateway/
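If you go the Redis route, here is a minimal fixed-window sketch with redis-py; the key scheme, limit, and window are assumptions:

import redis

r = redis.Redis()  # assumes a Redis instance on localhost

def allow(ip, limit=10, window=1):
    # count requests per IP in a fixed window; the key expires on its own
    key = "rate:%s" % ip
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)  # window in seconds
    return count <= limit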
You should probably do this before your requests reach Tornado.
But if it's an application level feature (limiting requests depending on level of subscription), then you can do it in Tornado in lots of ways, depending on how complex you want the rate limiting to be.
Probably the simplest way is to have a dict on your tornado.web.Application that uses the IP as the key and the timestamp of the last request as the value, and to check every request against it in prepare: if not enough time has passed since the last request, raise a tornado.web.HTTPError(429) (ideally with a Retry-After header). If you do this, you will still need to clean up the dict now and then to remove entries that have not made a request recently, or it will keep growing (you could do it in on_finish on every request).
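A minimal sketch of that approach; the one-second interval and the 429 handling details are assumptions:

import time
import tornado.ioloop
import tornado.web

MIN_INTERVAL = 1.0  # assumed: at most one request per second per IP

class ThrottledHandler(tornado.web.RequestHandler):
    def prepare(self):
        last_seen = self.application.last_seen  # dict kept on the Application
        ip = self.request.remote_ip
        now = time.time()
        if now - last_seen.get(ip, 0.0) < MIN_INTERVAL:
            raise tornado.web.HTTPError(429)
        last_seen[ip] = now

    def write_error(self, status_code, **kwargs):
        # headers are cleared before an error is written, so set Retry-After here
        if status_code == 429:
            self.set_header("Retry-After", "1")
        super().write_error(status_code, **kwargs)

    def get(self):
        self.write("ok")

if __name__ == "__main__":
    app = tornado.web.Application([(r"/", ThrottledHandler)])
    app.last_seen = {}  # IP -> timestamp of last request
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()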
If you have another fast/in-memory storage attached (memcache, redis, sqlite), you should use that, but you definitely should not use an RDBMS as all those writes will not be great for its performance.
I have a custom HTTP method/verb (let's say LISTEN) that allows me to listen for updates on a resource stored on a remote server. The API available for this makes a blocking call, which keeps my client code listening for updates until I interrupt the execution of that call. Just to provide an example, if I were to perform a curl as follows:
curl -X LISTEN http://<IP-Address>:<Port>/resource
The execution of this creates a blocking call, providing me updates on the resource whenever a new value for this resource is pushed to the server (similar to a pub-sub model), the response for that would look similar to this:
{"data":"value update 1","id":"id resource"}
{"data":"value update 2","id":"id resource"}
(...)
If I were to write code to handle this in Python, how do I call my URL using this custom verb and handle the blocking call/callback while ensuring that it does not block the execution of the rest of my code?
If you're using Python requests lib with a custom HTTP verb and need to read stream content, you can do something like this:
import json
import requests  # sudo pip3 install requests

url = "http://........."

r = requests.request('LISTEN', url, stream=True)
for line in r.iter_lines():
    # filter out keep-alive new lines
    if line:
        decoded_line = line.decode('utf-8')
        print(json.loads(decoded_line))
Note: by default, all requests calls are blocking, so you need to run this code in a separate thread/process to avoid that; a sketch follows.
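A minimal sketch of running the stream in a background thread; the handle_update callback is a hypothetical placeholder for your own logic:

import json
import threading
import requests

def handle_update(update):
    print(update)  # hypothetical: replace with whatever your app needs

def listen_for_updates(url):
    r = requests.request('LISTEN', url, stream=True)
    for line in r.iter_lines():
        if line:
            handle_update(json.loads(line.decode('utf-8')))

# daemon thread: the blocking stream runs here while the main thread stays free
t = threading.Thread(target=listen_for_updates, args=("http://.........",), daemon=True)
t.start()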
...while ensuring that this does not block the execution of the rest of my code
Since you provided no details about your application, I will try to list some general thoughts on the question.
Your task can be solved in many ways; the right solution depends on your app's architecture.
If this is a web server, you can take a look at Tornado (see its streaming callback support) or the aiohttp streaming examples.
On the other hand, you can run the code above in a separate process and communicate with other applications/services using, for example, RabbitMQ (or another IPC mechanism).
I am using Python 2 with requests. This question is more of a curiosity about how I can improve this performance.
The issue is that I must send a cryptographic signature in the header of the request to an HTTPS server. This signature includes a "nonce", which must be a timestamp and must ALWAYS increase (on the server side).
Obviously this can wreak havoc when running multiple HTTP sessions on multiple threads: requests end up being sent out of order because they get interrupted between generating the headers and sending the HTTPS POST request.
The solution is to hold a lock from before creating the signature until the HTTPS data has been received. Ideally, I would like to release the lock after the HTTP request has been SENT, rather than having to wait for the data to be received. Is there any way, using requests, to release the lock after just the HTTP headers are SENT? See code sample:
self.lock is a threading.Lock. The instance of this class (self) is shared among multiple threads.
def get_nonce(self):
    return int(1000 * time.time())

def do_post_request(self, endpoint, parameters):
    with self.lock:
        url = self.base + endpoint
        urlpath = endpoint
        parameters['nonce'] = self.get_nonce()
        postdata = urllib.urlencode(parameters)
        message = urlpath + hashlib.sha256(str(parameters['nonce']) + postdata).digest()
        signature = hmac.new(base64.b64decode(self.secret_key), message, hashlib.sha512)
        headers = {
            'API-Key': self.api_key,
            'API-Sign': base64.b64encode(signature.digest())
        }
        data = urllib.urlencode(parameters)
        response = requests.post(url, data=data, headers=headers, verify=True).json()
    return response
It sounds like the requests library doesn't have any support for sending asynchronously.
With the default Transport Adapter in place, Requests does not provide any kind of non-blocking IO. The Response.content property will block until the entire response has been downloaded. If you require more granularity, the streaming features of the library (see Streaming Requests) allow you to retrieve smaller quantities of the response at a time. However, these calls will still block.
If you are concerned about the use of blocking IO, there are lots of projects out there that combine Requests with one of Python’s asynchronicity frameworks. Two excellent examples are grequests and requests-futures.
I saw in a comment that you hesitate to add more dependencies, so the only suggestions I have are:
Add retry logic when your nonce is rejected. This seems like the most pythonic solution, and should work fine as long as the nonce isn't rejected very often.
Throttle the nonce generator. Hold the timestamp used for the previous nonce, and sleep if not enough time has passed when the next nonce is requested (see the sketch after this list).
Batch the messages. If the protocol allows it, you may find that throughput actually goes up when you add a delay to wait for other messages and send them as a batch.
Change the server so the nonce values don't have to increase. If you control the server, making the messages independent of each other will give you a much more flexible protocol.
Use a session pool. I'm guessing that the nonce values only have to increase within a single session. If you create a thread pool and have each thread open its own session, you could still have reasonable throughput without the timing problems you currently have.
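A minimal sketch of that throttled generator, in Python 2 style to match your code; the class is a hypothetical helper, not part of your API:

import threading
import time

class NonceGenerator(object):
    def __init__(self):
        self._lock = threading.Lock()
        self._last = 0

    def next(self):
        # hand out strictly increasing millisecond timestamps
        with self._lock:
            nonce = int(1000 * time.time())
            if nonce <= self._last:
                # the clock hasn't moved on yet; wait just long enough
                time.sleep((self._last + 1 - nonce) / 1000.0)
                nonce = self._last + 1
            self._last = nonce
            return nonce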
Obviously, you'd have to measure the performance results of making these changes.
Even if you do decide to add a dependency that lets you release the lock after sending the headers, you may still find that you occasionally have timing issues. The message packets with the headers could be delayed on their way to the server.
I always had the idea that doing a HEAD request instead of a GET request was faster (no matter the size of the resource) and therefore had its advantages in certain solutions.
However, while making a HEAD request in Python (to a 5+ MB dynamically generated resource), I realized that it took the same time as a GET request (almost 27 seconds instead of the 'less than 2 seconds' I was hoping for).
I used some urllib2 solutions to make a HEAD request found here, and I even used pycurl (setting headers and nobody to True). Both of them took the same time.
Am I missing something conceptually? Is it possible, using Python, to do a 'quick' HEAD request?
The server is taking the bulk of the time, not your requester or the network. If it's a dynamic resource, it's likely that the server doesn't know all the header information - in particular, Content-Length - until it's built it. So it has to build the whole thing whether you're doing HEAD or GET.
The response time is dominated by the server, not by your request. A HEAD request returns less data (just the headers), so conceptually it should be faster, but in practice many static resources are cached, so there is almost no measurable difference (just the time for the additional packets to come down the wire).
Chances are, the bulk of that request time is actually whatever process generates the 5+MB response on the server rather than the time to transfer it to you.
In many cases, a web application will still execute the full script when responding to a HEAD request; it just won't send the full body back to the requester.
If you have access to the code that processes the request, you may be able to add a condition to handle it differently depending on the method, which could speed things up dramatically.
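For example, a minimal sketch with Flask; the route and the expensive generator are hypothetical stand-ins:

from flask import Flask, Response, request

app = Flask(__name__)

def generate_expensive_report():
    # stand-in for whatever builds the 5+ MB dynamic response
    return '{"data": "..."}'

@app.route('/report', methods=['GET', 'HEAD'])  # hypothetical endpoint
def report():
    if request.method == 'HEAD':
        # skip the expensive generation step entirely; send headers only
        return Response(status=200, mimetype='application/json')
    return Response(generate_expensive_report(), mimetype='application/json')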