I'm building a new retry feature in my Orchestrate script, and I want to know how many times my request method retried and, if possible, what error it got when trying to connect to a specific URL.
For now, I need this for logging purposes: I'm working on a messaging system, and since I work in a micro-service environment, I may need this retry information to understand when and why I'm facing problems with HTTP requests.
So far, I have debugged and verified that retries work as expected (I have a mocked Flask server for all the micro-services we use), but I couldn't find a way to get the retry history.
In other words, I want to see, for example, whether a specific micro-service only responded after the third request, and that kind of thing.
Below is the code that I'm using now:
from requests import exceptions, Session
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter

def open_request_session():
    # Default retry configs
    toggle = True  # loaded from config file
    session = Session()
    if toggle:
        # loaded from config file as well
        parameters = {'total': 5,
                      'backoff_factor': 0.0,
                      'http_error_to_retry': [400]}
        retries = Retry(total=parameters['total'],
                        backoff_factor=parameters['backoff_factor'],
                        status_forcelist=parameters['http_error_to_retry'],
                        # Do not force an exception when False
                        raise_on_status=False)
        session.mount('http://', HTTPAdapter(max_retries=retries))
        session.mount('https://', HTTPAdapter(max_retries=retries))
    return session
# Request
with open_request_session() as request:
    my_response = request.get(url, timeout=10)
I see in the urllib3 documentation that Retry has a history attribute, but when I try to inspect that attribute it is empty.
I don't know if I'm doing something wrong or forgetting something, since software development is not my strongest skill.
So, I have two questions:
Does anyone know a way to get this history information?
How can I create tests to verify that the retry behavior is working as expected? (So far I only test in debug mode.)
I'm using Python 3.6.8.
I know that I could write a while loop to 'control' this, but I'm trying to avoid complexity. That's why I'm here: I'm looking for an alternative based on Python and community best practices.
A bit late, but I just figured this out myself so thought I'd share what I found.
Short answer:
response.raw.retries.history
will get you what you are looking for.
Long answer:
You cannot get the history off the original Retry instance created. Under the covers, urllib3 creates a new Retry instance for every attempt.
urllib3 does store the last Retry instance on the response when one is returned. However, the response from the requests library is a wrapper around the urllib3 response. Luckily, requests stores the original urllib3 response on the raw field.
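For illustration, here's a minimal sketch of how that access could look (the URL is a placeholder, and I'm assuming at least one retry actually happened):
with open_request_session() as session:
    response = session.get('http://example.com/api', timeout=10)
    # requests exposes the underlying urllib3 response on `raw`,
    # which carries the last Retry instance used for this request.
    for attempt in response.raw.retries.history:
        # Each entry is a urllib3 RequestHistory namedtuple with
        # method, url, error, status and redirect_location fields.
        print(attempt.status, attempt.error)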
I would like to test a site whose APIs, at this stage of development, are classified as "not secure".
The Locust documentation says:
Safe mode:
The HTTP client is configured to run in safe_mode. What this does is that any request that fails due to a connection error, timeout, or similar will not raise an exception, but rather return an empty dummy Response object. The request will be reported as a failure in User’s statistics. The returned dummy Response’s content attribute will be set to None, and its status_code will be 0.
I would like to know if there is a configuration that lets you disable this option.
Thank you for your time
safe_mode has nothing to do with SSL/TLS; it refers to requests not throwing an exception when the request fails (for whatever reason).
To make HttpUser/requests ignore SSL issues, add verify=False to your call.
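For example (a minimal sketch; the host and endpoint are placeholders):
from locust import HttpUser, task

class InsecureSiteUser(HttpUser):
    host = 'https://my-insecure-site.example'  # placeholder

    @task
    def index(self):
        # verify=False is passed through to requests, so TLS
        # certificate validation is skipped for this call.
        self.client.get('/', verify=False)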
——
You're looking at very old Locust documentation, right? I don't think the text you mention has been there for quite a while.
safe_mode has been removed from requests (it used to look like this: https://github.com/psf/requests/pull/604), but the same behaviour is implemented by Locust here: https://github.com/locustio/locust/blob/51b1d5038a2be6e2823b2576c4436f2ff9f7c7c2/locust/clients.py#L195
I am testing some code with pytest and using the vcrpy decorator as follows:
@pytest.mark.vcr(record_mode='none')
def test_something():
    make_requests_in_a_thread_and_save_to_queue()
    logged_responses = log_responses_from_queue_in_a_thread()
    assert logged_responses == expected_logged_responses
The test fails because the logged_responses are new responses, which are the results of new HTTP requests that have been made during test_something().
I have a cassette saved in the correct place, but this is probably irrelevant, because even if I didn't, I should be getting a vcrpy CassetteError rather than a failed test.
Does record_mode='none' not apply to code executed within threads?
If not, how should I approach the testing problem? Thank you!
I found out what the problem was: I was using a stream API rather than sending HTTP requests, and record_mode='none' applies only to HTTP requests.
By default, boto3 (the AWS Python SDK) implements an incremental back-off retry strategy for (all?) clients. That can be customized via the retries.max_attempts entry in the botocore Config, and it works pretty well for me in many scenarios. But I have no trace of how many attempts were actually required, beyond what I can infer from the client latencies.
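For reference, the customization I mean looks roughly like this (a sketch; the max_attempts value is just illustrative):
import boto3
from botocore.config import Config

# Illustrative: allow up to 10 attempts per API call.
config = Config(retries={'max_attempts': 10})
client = boto3.client('s3', config=config)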
So, is there any way to consistently get the number of retries used after a successful request to a boto3 client?
Inspecting the code, it looks like handler.context.attempt_number stores that, but I have no idea how to reach it after a successful invocation of a client.
It looks like the ResponseMetadata from every client comes with that information :-)
Minimal code example:
import logging
import boto3

# Make sure INFO-level messages are actually emitted.
logging.basicConfig(level=logging.INFO)

client = boto3.client('s3')
response = client.list_buckets()
# Every boto3 response reports its retry count in ResponseMetadata.
logging.info("Retry Attempts: %d", response['ResponseMetadata']['RetryAttempts'])
I'm using soundcloud-python https://github.com/soundcloud/soundcloud-python for Soundcloud API on Ubuntu Server 16.04.1 (installed with pip install soundcloud).
Soundcloud API Rate Limits official page https://developers.soundcloud.com/docs/api/rate-limits#play-requests says that, in case an app exceeds the API rate limits, the body of the 429 Client Error response would be a JSON object, containing some additional info.
I'm interested in getting reset_time field, to inform the user when the block will be over.
The problem is that when, for example, the rate limit is exceeded, doing response = client.put('/me/favorites/%d' % song_id) crashes the app, and response is never set.
How can I get the JSON response body?
Why don't you read the package's source code and find out by yourself?
Let's see... You don't explain how you got that client object, but browsing the source code we can see there's a "client.py" module that defines a Client class. This class doesn't explicitly define a put method, but it defines the __getattr__ hook:
def __getattr__(self, name, **kwargs):
    """Translate an HTTP verb into a request method."""
    if name not in ('get', 'post', 'put', 'head', 'delete'):
        raise AttributeError
    return partial(self._request, name, **kwargs)
Ok, so Client.put(...) returns a partial object wrapping Client._request, which is a quite uselessly convoluted way to define Client.put(**kwargs) as return self._request("put", **kwargs).
Now let's look at Client._request: it basically makes a couple of sanity checks, updates **kwargs and returns wrapped_resource(make_request(method, url, kwargs)).
Looking up the imports at the beginning of the module, we can see that make_request comes from "request.py" and wrapped_resource from "resources.py".
You mention that doing an API call while over the rate limit "crashes the application" - I assume you mean "raises an exception" (BTW, please post exceptions and tracebacks when asking about such problems) - so, assuming this is handled at the lower level, let's start with request.make_request. A lot of data formatting / massaging, obviously, and finally the interesting part: a call to response.raise_for_status(). This is a hint that we are actually delegating to the famous python-requests package, which is confirmed a few lines above and in the requirements file.
If we read the python-requests fine manual, we find out what raise_for_status does: it raises a requests.exceptions.HTTPError for client (4XX) and server (5XX) status codes.
Ok, now we know which exception we have. Note that you already had all this information in your exception and traceback, which would have saved us a lot of pain here had you posted it.
But anyway... It looks like we won't get the response content, doesn't it? Well, wait, we're not done yet - python-requests is a fairly well-designed package, so chances are we can still rescue our response. And indeed, if we look at the requests.exceptions source code, we find out that HTTPError is a subclass of RequestException, and that RequestException is "initialize(d)" with "request and response objects".
Hurray, we do have our response - in the exception. So all we have to do is catch the exception and check its response attribute - which should contain the "additional information".
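Concretely, that could look something like this (a sketch only - I haven't run it against the SoundCloud API, and the exact layout of the JSON body is whatever their rate-limits page documents):
from requests.exceptions import HTTPError

try:
    response = client.put('/me/favorites/%d' % song_id)
except HTTPError as exc:
    if exc.response is not None and exc.response.status_code == 429:
        # The 429 body is documented to be a JSON object with
        # rate-limit details, including the reset_time field.
        print(exc.response.json())
    else:
        raise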
Now, please understand that this took me more than half an hour to write, but about 7 minutes to sort out without the traceback - with the traceback it would have boiled down to a mere 2 minutes, the time to go to the requests.exceptions source code and make sure it kept the request and response. Ok, I'm cheating - I'm used to reading source code and I use python-requests a lot - but still: you could have solved this by yourself in less than an hour, especially with Python's interactive shell, which lets you explore and test live objects in real time.
Related question about an older version of requests: Can I set max_retries for requests.request?
I have not seen an example to cleanly incorporate max_retries in a requests.get() or requests.post() call.
Would love a
requests.get(url, max_retries=num_max_retries)
implementation
A quick search of the python-requests docs will reveal exactly how to set max_retries when using a Session.
To pull the code directly from the documentation:
import requests

s = requests.Session()
# Retry each request up to 3 times, one adapter per scheme.
a = requests.adapters.HTTPAdapter(max_retries=3)
b = requests.adapters.HTTPAdapter(max_retries=3)
s.mount('http://', a)
s.mount('https://', b)
s.get(url)
What you're looking for, however, is not configurable for several reasons:
Requests no longer provides a means for configuration
The number of retries is specific to the adapter being used, not to the session or the particular request.
If one request needs one particular maximum number of retries, that should be sufficient for a different request.
This change was introduced in requests 1.0 over a year ago. We kept it for 2.0 purposefully because it makes the most sense. We also will not be introducing a parameter to configure the maximum number of retries or anything else, in case you were thinking of asking.
Edit: Using a similar method, you can achieve much finer control over how retries work. You can read this to get a good feel for it. In short, you'll need to import the Retry class from urllib3 (see below) and tell it how to behave. We pass that on to urllib3, and you will have a better set of options to deal with retries.
from urllib3.util.retry import Retry
import requests
# Create a session
s = requests.Session()
# Define your retries for http and https urls
http_retries = Retry(...)
https_retries = Retry(...)
# Create adapters with the retry logic for each
http = requests.adapters.HTTPAdapter(max_retries=http_retries)
https = requests.adapters.HTTPAdapter(max_retries=https_retries)
# Replace the session's original adapters
s.mount('http://', http)
s.mount('https://', https)
# Start using the session
s.get(url)
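As a concrete illustration, the Retry(...) placeholders above could be filled in like this (the values are arbitrary examples, not recommendations):
from urllib3.util.retry import Retry

# Example only: up to 5 attempts, exponential backoff, and
# retries on common transient server errors.
http_retries = Retry(total=5,
                     backoff_factor=0.2,
                     status_forcelist=[500, 502, 503, 504])
https_retries = Retry(total=5,
                      backoff_factor=0.2,
                      status_forcelist=[500, 502, 503, 504])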