aiohttp: How to efficiently check HTTP headers before downloading response body?

I am writing a web crawler using asyncio/aiohttp. I want the crawler to download only HTML content and skip everything else. I wrote a simple function to filter URLs based on their extensions, but this is not reliable because many download links do not include a filename/extension at all.
I could use aiohttp.ClientSession.head() to send a HEAD request, check the Content-Type field to make sure it's HTML, and then send a separate GET request. But this will increase the latency by requiring two separate requests per page (one HEAD, one GET), and I'd like to avoid that if possible.
Is it possible to just send a regular GET request, put aiohttp into "streaming" mode so that only the headers are downloaded, and then proceed with the body download only if the MIME type is correct? Or is there some (fast) alternative method for filtering out non-HTML content that I should consider?
UPDATE
As requested in the comments, I've included some example code of what I mean by making two separate HTTP requests (one HEAD request and one GET request):
import asyncio
import aiohttp

urls = ['http://www.google.com', 'http://www.yahoo.com']

async def get_urls_async(urls):
    loop = asyncio.get_running_loop()
    async with aiohttp.ClientSession() as session:
        tasks = []
        for u in urls:
            print(f"This is the first (HEAD) request we send for {u}")
            tasks.append(loop.create_task(session.head(u)))
        results = []
        for t in asyncio.as_completed(tasks):
            response = await t
            url = response.url
            if "text/html" in response.headers["Content-Type"]:
                print("Sending the 2nd (GET) request to retrieve the body")
                r = await session.get(url)
                results.append((url, await r.read()))
            else:
                print(f"Not HTML, rejecting: {url}")
        return results

results = asyncio.run(get_urls_async(urls))

This is a protocol problem: if you do a GET, the server wants to send the body. If you don't retrieve the body, the connection has to be discarded (this is in fact what aiohttp does if you don't call read() before __aexit__ on the response).
So a single GET where you check the headers and only call read() when the Content-Type is HTML should do more or less what you want. Note that the server may already send more than just the headers in the first chunk.
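A minimal sketch of that single-GET approach (not the answer's original code; the URLs and the crawl helper are placeholders, and error handling is omitted):
import asyncio
import aiohttp

async def fetch_if_html(session, url):
    # One GET per URL: the headers are available as soon as the request
    # completes, before the body has been read.
    async with session.get(url) as response:
        if "text/html" in response.headers.get("Content-Type", ""):
            return url, await response.text()
        # Leaving the body unread means aiohttp discards the connection on
        # __aexit__ instead of returning it to the pool, as noted above.
        return url, None

async def crawl(urls):
    async with aiohttp.ClientSession() as session:
        return await asyncio.gather(*(fetch_if_html(session, u) for u in urls))

pages = asyncio.run(crawl(['http://www.google.com', 'http://www.yahoo.com']))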

Related

Python requests URL response too slow, how can I make it quicker?

I have this code in Python:
import requests

session = requests.Session()
for i in range(0, len(df_1)):
    page = session.head(df_1['listing_url'].loc[i], allow_redirects=False, stream=True)
    if page.status_code == 200:
        df_1['condition'][i] = 'active'
    else:
        df_1['condition'][i] = 'false'
df_1 is my data frame, and the column "listing_url" has more than 500 rows.
I want to check whether each URL is active and record the result in my data frame, but this code takes a long time. How can I speed it up?
The problem with your current approach is that the requests are sent sequentially (synchronously), which means that a new request can't be sent before the prior one has finished.
What you are looking for is handling those requests asynchronously. Sadly, the requests library does not support asynchronous requests. A newer library that has a similar API to requests but can do that is httpx. aiohttp is another popular choice. With httpx you can do something like this:
import asyncio
import httpx

listing_urls = list(df_1['listing_url'])

async def do_tasks():
    async with httpx.AsyncClient() as client:
        tasks = [client.head(url) for url in listing_urls]
        responses = await asyncio.gather(*tasks)
        return {r.url: r.status_code for r in responses}

url_2_status = asyncio.run(do_tasks())
This will give you a mapping of {url: status_code}. You should be able to go from there.
This solution assumes you are using Python 3.7 or newer. Also remember to install httpx.
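One possible way to go from that mapping back into the dataframe (a rough sketch, not part of the original answer; it assumes the same "200 means active" rule as the original loop and that httpx did not rewrite the URLs):
# httpx returns httpx.URL objects as keys, so convert them to plain strings
# before looking them up against the values stored in the dataframe.
status_by_url = {str(url): code for url, code in url_2_status.items()}

df_1['condition'] = [
    'active' if status_by_url.get(url) == 200 else 'false'
    for url in df_1['listing_url']
]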

FastAPI - How to get the response body in Middleware

Is there any way to get the response content in a middleware?
The following code is a copy from here.
#app.middleware("http")
async def add_process_time_header(request: Request, call_next):
start_time = time.time()
response = await call_next(request)
process_time = time.time() - start_time
response.headers["X-Process-Time"] = str(process_time)
return response
The response body is an iterator which, once it has been consumed, cannot be iterated again. Thus, you either have to save all the iterated data to a list (or bytes variable) and use that to return a custom Response, or initiate the iterator again. The options below demonstrate both approaches. In case you would like to get the request body inside the middleware as well, please have a look at this answer.
Option 1
Save the data to a list and use iterate_in_threadpool to initiate the iterator again, as described here - which is what StreamingResponse uses, as shown here.
from fastapi import Request
from starlette.concurrency import iterate_in_threadpool

@app.middleware("http")
async def some_middleware(request: Request, call_next):
    response = await call_next(request)
    response_body = [chunk async for chunk in response.body_iterator]
    response.body_iterator = iterate_in_threadpool(iter(response_body))
    print(f"response_body={response_body[0].decode()}")
    return response
Note 1: If your code uses StreamingResponse, response_body[0] would return only the first chunk of the response. To get the entire response body, you should join that list of bytes (chunks), as shown below (.decode() returns a string representation of the bytes object):
print(f"response_body={(b''.join(response_body)).decode()}")
Note 2: If you have a StreamingResponse streaming a body that wouldn't fit into your server's RAM (for example, a response of 30GB), you may run into memory errors when iterating over response.body_iterator (this applies to both options listed in this answer), unless you loop through response.body_iterator (as shown in Option 2) but, instead of storing the chunks in an in-memory variable, store them somewhere on disk. You would then need to retrieve the entire response data from that disk location and load it into RAM in order to send it back to the client, which could extend the delay in responding to the client even more. In that case, you could load the contents into RAM in chunks and use StreamingResponse, similar to what has been demonstrated here, here, as well as here, here and here (in Option 1, you can just pass your iterator/generator function to iterate_in_threadpool). However, I would not suggest following that approach; instead, exclude endpoints that return large streaming responses from the middleware, as described in this answer.
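For completeness, a rough sketch of that disk-based variant (this is not from the original answer; the temporary-file handling and chunk size are assumptions):
import os
import tempfile

from fastapi import Request
from starlette.background import BackgroundTask
from starlette.responses import StreamingResponse

@app.middleware("http")
async def some_middleware(request: Request, call_next):
    response = await call_next(request)

    # Spool the response body to a temporary file instead of keeping it in RAM.
    tmp = tempfile.NamedTemporaryFile(delete=False)
    async for chunk in response.body_iterator:
        tmp.write(chunk)  # the chunk could also be inspected/logged here
    tmp.close()

    def read_file_in_chunks(path, chunk_size=64 * 1024):
        with open(path, "rb") as f:
            while chunk := f.read(chunk_size):
                yield chunk

    # Stream the file back to the client in chunks and delete it afterwards.
    return StreamingResponse(
        read_file_in_chunks(tmp.name),
        status_code=response.status_code,
        headers=dict(response.headers),
        media_type=response.media_type,
        background=BackgroundTask(os.remove, tmp.name),
    )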
Option 2
The example below demonstrates another approach, where the response body is stored in a bytes object (instead of a list, as shown above) and is used to return a custom Response directly (along with the status_code, headers and media_type of the original response).
#app.middleware("http")
async def some_middleware(request: Request, call_next):
response = await call_next(request)
response_body = b""
async for chunk in response.body_iterator:
response_body += chunk
print(f"response_body={response_body.decode()}")
return Response(content=response_body, status_code=response.status_code,
headers=dict(response.headers), media_type=response.media_type)

httpx: How to access specific responses from gathered request tasks?

I want to use HTTPX (within FastAPI, if that matters) to make asynchronous http requests to an outside API and store the responses as individual variables for processing in slightly different ways depending on which URL was fetched. I'm modifying the code from this StackOverflow answer.
import asyncio
import httpx

async def perform_request(client, url):
    response = await client.get(url)
    return response.text

async def gather_tasks(*urls):
    async with httpx.AsyncClient() as client:
        tasks = [perform_request(client, url) for url in urls]
        result = await asyncio.gather(*tasks)
        return result

async def f():
    url1 = "https://api.com/object=562"
    url2 = "https://api.com/object=383"
    url3 = "https://api.com/object=167"
    url4 = "https://api.com/object=884"
    result = await gather_tasks(url1, url2, url3, url4)
    # print(result[0])
    # print(result[1])
    # DO THINGS WITH url2, SOMETHING ELSE WITH url4, ETC.

if __name__ == '__main__':
    asyncio.run(f())
What's the best way to access the individual responses? (If I use result[n] I wouldn't know which response I'm working with.)
And I'm pretty new to httpx and async operations in general so please share if you have any suggestions for how to achieve it in a better way.
Regardless of asyncio, I would probably put the logic inside gather_tasks. There you know which URL each response came from, so you can define all the if/else logic you need to proceed down the right path.
In my opinion you have two options:
1 - Process the request right away
In this case, f would only initialize the urls and trigger the processing; everything else would happen inside gather_tasks.
2 - "Enrich" the response
In gather_tasks you can understand which kind of operation to do next, and "attach" to the response some sort of code to define it. For example, you could return a dict with two keys: response and operation. This would be the most explicit way of doing this, but you could also use a list or a tuple, you just need to know where the response and the "next step code" is within them.
This is useful if the further processing must happen later instead of right away.
Makes sense?
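As an illustration of option 2, a sketch of what "enriching" the response could look like (the operation labels and the URL-to-operation mapping below are made up for the example):
import asyncio
import httpx

# Hypothetical mapping from URL to the kind of processing it needs later.
OPERATIONS = {
    "https://api.com/object=562": "parse_summary",
    "https://api.com/object=383": "extract_price",
}

async def perform_request(client, url):
    response = await client.get(url)
    # "Enrich" the result: return the response text together with a code
    # that tells the caller which processing step to apply to it.
    return {
        "url": url,
        "operation": OPERATIONS.get(url, "default"),
        "response": response.text,
    }

async def gather_tasks(*urls):
    async with httpx.AsyncClient() as client:
        return await asyncio.gather(*(perform_request(client, url) for url in urls))

async def f():
    results = await gather_tasks("https://api.com/object=562", "https://api.com/object=383")
    for item in results:
        if item["operation"] == "parse_summary":
            ...  # do one thing with item["response"]
        elif item["operation"] == "extract_price":
            ...  # do something else with it

if __name__ == "__main__":
    asyncio.run(f())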

Long running requests with asyncio and aiohttp

Apologies for asking what may be considered a redundant question, but I'm finding it extremely difficult to figure out the current recommended best practices for using asyncio and aiohttp.
I'm working with an API that ultimately returns a link to a generated CSV file. There are two steps in using the API.
Submit a request that triggers a long-running process and returns a status URL.
Poll the status URL until the status_code is 201 and then get the URL of the CSV file from the headers.
Here's a stripped down example of how I can successfully do this synchronously with requests.
import time
import requests

def submit_request(id):
    """Submit request to create CSV for specified id"""
    body = {'id': id}
    response = requests.get(
        url='https://www.example.com/endpoint',
        json=body
    )
    response.raise_for_status()
    return response

def get_status(request_response):
    """Check whether the CSV has been created."""
    status_response = requests.get(
        url=request_response.headers['Location']
    )
    status_response.raise_for_status()
    return status_response

def get_data_url(id, poll_interval=10):
    """Submit request to create CSV for specified ID, wait for it to finish,
    and return the URL of the CSV.

    Wait between status requests based on poll_interval.
    """
    response = submit_request(id)
    while True:
        status_response = get_status(response)
        if status_response.status_code == 201:
            break
        time.sleep(poll_interval)
    data_url = status_response.headers['Location']
    return data_url
What I'd like to do is be able to submit a group of requests at once, and then wait on all of them to be finished. But I'm not clear on how to structure this with asyncio and aiohttp.
One option would be to first submit all of the requests and then use asyncio.gather (or something) to get all of the status URLs. Then start another event loop where I continuously poll the status URLs until they have all completed and I end up with a list of data URLs.
Alternatively, I suppose I could create a single function that submits the request, gets the status URL, and then polls that until it completes. In that case I would just have a single event loop where I submit each of the IDs that I want processed.
If some pseudo code for those options would be useful I can try to provide it. I've looked at a lot of different examples where you submit requests for a bunch of URLs asynchronously -- this for example -- but I'm finding that I get a bit lost when trying to translate them to this slightly more complicated scenario where I submit the request and then get back a new URL to poll.
FYI based on the comments above my current solution is something like this.
import asyncio
import aiohttp

async def get_data_url(session, id):
    url = 'https://www.example.com/endpoint'
    body = {'id': id}
    async with session.post(url=url, json=body) as response:
        response.raise_for_status()
        status_url = response.headers['Location']
    while True:
        async with session.get(url=status_url) as status_response:
            status_response.raise_for_status()
            if status_response.status == 201:
                return status_response.headers['Location']
        await asyncio.sleep(10)

async def main(access_token, id):
    headers = {'token': access_token}
    async with aiohttp.ClientSession(headers=headers) as session:
        data_url = await get_data_url(session, id)
        return data_url
This works, though I'm still not sure about best practices for submitting a set of IDs. I think asyncio.gather would work, but it looks like it's deprecated. Ideally I would have a queue of, say, 100 IDs and only have 5 requests running at any given time. I've found some examples like this, but they depend on asyncio.Queue, which is also deprecated.
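For what it's worth, asyncio.gather and asyncio.Queue themselves are not deprecated (only their explicit loop= arguments were). One way to keep at most five requests in flight at a time is an asyncio.Semaphore; a rough sketch that reuses the get_data_url coroutine above (the ID list in the usage comment is made up):
import asyncio
import aiohttp

async def get_data_url_limited(semaphore, session, id):
    # Only `limit` coroutines can be inside this block at the same time;
    # the rest wait here until a slot frees up.
    async with semaphore:
        return await get_data_url(session, id)

async def main(access_token, ids, limit=5):
    headers = {'token': access_token}
    semaphore = asyncio.Semaphore(limit)
    async with aiohttp.ClientSession(headers=headers) as session:
        return await asyncio.gather(
            *(get_data_url_limited(semaphore, session, id) for id in ids)
        )

# e.g. data_urls = asyncio.run(main(access_token, list_of_ids, limit=5))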

How to make client request to external server avoiding cache using aiohttp

We are using aiohttp to make multiple requests to various website vendors to grab their latest data.
Some of the content providers serve the data from a cache. Is it possible to request the data from the server directly? We have tried to pass in the headers parameter with no luck.
from aiohttp import ClientSession

async def fetch(url):
    global response
    headers = {'Cache-Control': 'no-cache'}
    async with ClientSession() as session:
        async with session.get(url, headers=headers, proxy="OUR-PROXY") as response:
            return await response.read()
The goal is to get the Last-Modified date header, which the cached response does not include.
Try adding an extra query parameter with a dynamic value (e.g. a timestamp) to the URL.
This will prevent caching on the server side even if it ignores Cache-Control.
Example:
from: https://example.com/test
to: https://example.com/test?timestamp=20180724181234
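With aiohttp, that cache-busting parameter can be added through params, roughly like this (the parameter name timestamp and the fetch_uncached helper are just illustrative):
import time
import aiohttp

async def fetch_uncached(url):
    # Add a throwaway query parameter so intermediary caches treat every
    # request as a distinct URL, e.g. https://example.com/test?timestamp=1721844754
    params = {'timestamp': str(int(time.time()))}
    headers = {'Cache-Control': 'no-cache'}
    async with aiohttp.ClientSession() as session:
        async with session.get(url, params=params, headers=headers) as response:
            last_modified = response.headers.get('Last-Modified')
            return last_modified, await response.read()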
