By default, boto3 (the AWS Python SDK) implements an incremental back-off retry strategy for (all?) clients. That can be customized via the retries.max_attempts entry in the botocore Config. That works pretty well for me in many scenarios, but I have no record of how many attempts were actually required, beyond what you can infer from client latencies.
So, is there any way to consistently get the number of retries used after a successful request to a boto3 client?
Inspecting the code, it looks like handler.context.attempt_number stores that, but I have no idea how to reach it after a successful call to a client.
It looks like the ResponseMetadata returned by every client comes with that information :-)
Minimal code example:
import logging

import boto3

# the default logging level (WARNING) would swallow the INFO message below
logging.basicConfig(level=logging.INFO)

client = boto3.client('s3')
response = client.list_buckets()

# RetryAttempts is 0 when the first attempt succeeds
logging.info("Retry Attempts: %d", response['ResponseMetadata']['RetryAttempts'])
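For completeness, the retry ceiling mentioned in the question can be adjusted like this (a minimal sketch; the retries dictionary follows the botocore Config documentation):

import boto3
from botocore.config import Config

# allow up to 10 attempts instead of the default
config = Config(retries={'max_attempts': 10})
client = boto3.client('s3', config=config)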
I am testing some code with pytest and using the vcrpy decorator as follows:
@pytest.mark.vcr(record_mode='none')
def test_something():
    make_requests_in_a_thread_and_save_to_queue()
    logged_responses = log_responses_from_queue_in_a_thread()
    assert logged_responses == expected_logged_responses
The test fails because the logged_responses are new responses, which are the results of new HTTP requests that have been made during test_something().
I have a cassette saved in the correct place, but this is probably irrelevant because even if I didn't I should be getting a vcrpy CassetteError rather than a failed test.
Does record_mode='none' not apply to code executed within threads?
If not, how should I approach the testing problem? Thank you!
I found out what the problem was: I was using a streaming API rather than sending HTTP requests, and record_mode='none' applies only to HTTP requests.
I'm building a new retry feature into my Orchestrate script, and I want to know how many times, and if possible with what error, my request method retried when trying to connect to a specific URL.
For now I need this for logging purposes: I'm working on a messaging system in a micro-service environment, and I may need this retry information to understand when and why I'm facing any kind of problem with HTTP requests.
So far I have debugged and verified that the retries are working as expected (I have a mocked Flask server for all the micro-services that we use), but I couldn't find a way to get the retry history data.
In other words, I want to see, for example, whether a specific micro-service only responded after the third request, and that kind of thing.
Below is the code that I'm using now:
from requests import exceptions, Session
from urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter


def open_request_session():
    # Default retries configs
    toggle = True  # loaded from config file
    session = Session()
    if toggle:
        # loaded from config file as well
        parameters = {'total': 5,
                      'backoff_factor': 0.0,
                      'http_error_to_retry': [400]}
        retries = Retry(total=parameters['total'],
                        backoff_factor=parameters['backoff_factor'],
                        status_forcelist=parameters['http_error_to_retry'],
                        # Do not force an exception when False
                        raise_on_status=False)
        session.mount('http://', HTTPAdapter(max_retries=retries))
        session.mount('https://', HTTPAdapter(max_retries=retries))
    return session


# Request (url is defined elsewhere)
with open_request_session() as session:
    my_response = session.get(url, timeout=10)
I see in the urllib3 documentation that Retry has a history attribute, but when I inspect the attribute it is empty.
I don't know if I'm doing something wrong or forgetting something, since software development is not my strongest skill.
So, I have two questions:
Does anyone know a way to get this history information?
How can I create tests to verify that the retry behavior is working as expected? (So far I have only tested in debug mode.)
I'm using Python 3.6.8.
I know that I could write a while loop to 'control' this, but I'm trying to avoid complexity. That's why I'm here: I'm looking for an alternative based on Python and community best practices.
A bit late, but I just figured this out myself so thought I'd share what I found.
Short answer:
response.raw.retries.history
will get you what you are looking for.
Long answer:
You cannot get the history off the original Retry instance you created. Under the covers, urllib3 creates a new Retry instance for every attempt.
urllib3 does store the last Retry instance on the response when one is returned. However, the response from the requests library is a wrapper around the urllib3 response. Luckily, requests stores the original urllib3 response on the raw field.
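Putting it together, a minimal sketch (the target URL and status list are only illustrative, and it assumes httpbin.org is reachable):

from requests import Session
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

retries = Retry(total=5,
                backoff_factor=0.1,
                status_forcelist=[500, 502, 503],
                raise_on_status=False)
session = Session()
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))

response = session.get('https://httpbin.org/status/503', timeout=10)

# history is a tuple of RequestHistory namedtuples:
# (method, url, error, status, redirect_location)
for attempt in response.raw.retries.history:
    print(attempt.method, attempt.url, attempt.status)
print('requests made:', len(response.raw.retries.history) + 1)

For the testing question, the same idea works against your mocked Flask server: hit an endpoint that returns one of the status_forcelist codes a known number of times and assert on the length of response.raw.retries.history.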
I am fighting with Tornado and the official Python oauth2client, gcloud... modules.
These modules accept an alternate HTTP client passed with http=, as long as it has a request method that any of these libraries can call whenever an HTTP request must be sent to Google and/or the access tokens must be renewed using the refresh tokens.
I have created a simple class which has a self.client = AsyncHTTPClient(), and whose request method returns self.client.fetch(...).
My goal is to be able to yield any of these libraries' calls, so that Tornado will execute them asynchronously.
The thing is that they are highly dependent on what the default client (set to httplib2.Http()) returns: (response, content).
I am really stuck and cannot find a clean way of making this async.
If anyone already found a way, please help.
Thank you in advance
These libraries do not support asynchronous operation, and the porting process is not always easy.
oauth2client
Depending on what you want to do, maybe Tornado's GoogleOAuth2Mixin or tornado-alf will be enough.
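For illustration, a handler along the lines of the GoogleOAuth2Mixin example in the Tornado docs (the redirect URI is a placeholder, and the google_oauth application settings must contain your key/secret):

from tornado import gen, web
from tornado.auth import GoogleOAuth2Mixin


class GoogleLoginHandler(web.RequestHandler, GoogleOAuth2Mixin):
    @gen.coroutine
    def get(self):
        if self.get_argument('code', False):
            # second leg: exchange the authorization code for tokens
            user = yield self.get_authenticated_user(
                redirect_uri='http://localhost:8888/auth/google',
                code=self.get_argument('code'))
            self.write(user)
        else:
            # first leg: redirect the user to Google's consent screen
            yield self.authorize_redirect(
                redirect_uri='http://localhost:8888/auth/google',
                client_id=self.settings['google_oauth']['key'],
                scope=['email'],
                response_type='code',
                extra_params={'approval_prompt': 'auto'})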
gcloud
Since I am not aware of any Tornado/asyncio implementation of gcloud-python, you could:
write it yourself. Again, it's not a simple transport change of Connection.http or request; all the stuff around it must be able to use/yield futures/coroutines.
wrap it in a ThreadPoolExecutor (as @Apero mentioned; see the sketch at the end of this answer). This is a high-level API, so any nested API calls within that yield will be executed in the same thread (not using the pool). It could work well.
run it as an external app (with ProcessPoolExecutor or Popen).
When I had a similar problem with AWS a couple of years ago, I ended up executing the CLI asynchronously (Tornado + subprocess.Popen + some CLI (awscli, or boto based)) and handling simple cases (like S3, basic EC2 operations) with a plain AsyncHTTPClient.
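To illustrate the ThreadPoolExecutor route, a minimal sketch (blocking_gcloud_call is a placeholder for whatever synchronous oauth2client/gcloud call you need; Tornado coroutines can yield concurrent.futures.Future objects directly):

import time
from concurrent.futures import ThreadPoolExecutor

from tornado import gen
from tornado.ioloop import IOLoop

executor = ThreadPoolExecutor(max_workers=4)


def blocking_gcloud_call():
    # stands in for a synchronous gcloud/oauth2client operation
    time.sleep(1)
    return 'done'


@gen.coroutine
def do_work():
    # the blocking call runs in the pool; the IOLoop stays responsive
    result = yield executor.submit(blocking_gcloud_call)
    raise gen.Return(result)


print(IOLoop.current().run_sync(do_work))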
I am trying to add authentication to an xmlrpc server (which will be running on nodes of a P2P network) without using user:password@host, as this would reveal the password to all attackers. The point of the authentication is basically to create a private network, preventing unauthorised users from accessing it.
My solution was to create a challenge-response system very similar to this, but I have no clue how to add it to the xmlrpc server code.
I found a similar question (Where custom authentication was needed) here.
So I tried creating a module that would be called whenever a client connected to the server. It would connect to a challenge-response server running on the client, and if the client responded correctly it would return True. The only problem was that I could only call the module once; after that I got a "reactor cannot be restarted" error. So is there some way of having a class that connects and does this check whenever its check() function is called?
Would the simplest thing be to connect using SSL? Would that protect the password? This solution would not be optimal, though, as I am trying to avoid generating SSL certificates for all the nodes.
Don't invent your own authentication scheme. There are plenty of great schemes already, and you don't want to become responsible for doing the security research into what vulnerabilities exist in your invention.
There are two very widely supported authentication mechanisms for HTTP (over which XML-RPC runs, therefore they apply to XML-RPC). One is "Basic" and the other is "Digest". "Basic" is fine if you decide to run over SSL. Digest is more appropriate if you really can't use SSL.
Both are supported by Twisted Web via twisted.web.guard.HTTPAuthSessionWrapper, with copious documentation.
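For instance, putting Digest auth in front of an XML-RPC resource looks roughly like this (a sketch based on the guard documentation; the realm string, port, and in-memory credentials checker are placeholders, and the checker in particular is for demonstration only):

from zope.interface import implementer

from twisted.cred import checkers, portal
from twisted.internet import reactor
from twisted.web import guard, server
from twisted.web.resource import IResource
from twisted.web.xmlrpc import XMLRPC


class Echo(XMLRPC):
    def xmlrpc_echo(self, value):
        return value


@implementer(portal.IRealm)
class XMLRPCRealm(object):
    def requestAvatar(self, avatarId, mind, *interfaces):
        if IResource in interfaces:
            return IResource, Echo(), lambda: None
        raise NotImplementedError()


checker = checkers.InMemoryUsernamePasswordDatabaseDontUse(alice='secret')
wrapper = guard.HTTPAuthSessionWrapper(
    portal.Portal(XMLRPCRealm(), [checker]),
    [guard.DigestCredentialFactory('md5', 'p2p network')])

reactor.listenTCP(7080, server.Site(wrapper))
reactor.run()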
Based on your problem description, it sounds like the Secure Remote Password Protocol might be what you're looking for. It's a password-based mechanism that provides strong, mutual authentication without the complexity of SSL certificate management. It may not be quite as flexible as SSL certificates but it's easy to use and understand (the full protocol description fits on a single page). I've often found it a useful tool for situations where a trusted third party (aka Kerberos/CA authorities) isn't appropriate.
For anyone who was looking for a full example, below is mine (thanks to Rakis for pointing me in the right direction). In this example the user and password are stored in a file called 'passwd' (see the first useful link for more details and how to change that).
Server:
#!/usr/bin/env python
import bjsonrpc
from SRPSocket import SRPSocket
import SocketServer
from bjsonrpc.handlers import BaseHandler
import time


class handler(BaseHandler):
    def time(self):
        return time.time()


class SecureServer(SRPSocket.SRPHost):
    def auth_socket(self, socket):
        server = bjsonrpc.server.Server(socket, handler_factory=handler)
        server.serve()


s = SocketServer.ForkingTCPServer(('', 1337), SecureServer)
s.serve_forever()
Client:
#! /usr/bin/env python
import bjsonrpc
from bjsonrpc.handlers import BaseHandler
from SRPSocket import SRPSocket
import time


class handler(BaseHandler):
    def time(self):
        return time.time()


socket, key = SRPSocket.SRPSocket('localhost', 1337, 'dht', 'testpass')
connection = bjsonrpc.connection.Connection(socket, handler_factory=handler)
test = connection.call.time()
print test
time.sleep(1)
Some useful links:
http://members.tripod.com/professor_tom/archives/srpsocket.html
http://packages.python.org/bjsonrpc/tutorial1/index.html
Please advise a library for working with SOAP in Python.
Right now I'm trying to use suds, and I can't understand how to get the HTTP headers from the server's reply.
Code example:
from suds.client import Client
url = "http://10.1.0.36/money_trans/api3.wsdl"
client = Client(url)
login_res = client.service.Login("login", "password")
The variable login_res contains the XML answer but does not contain the HTTP headers. I need to get the session id from them.
I think you actually want to look in the Cookie Jar for that.
client = Client(url)
login_res = client.service.Login("login", "password")

for c in client.options.transport.cookiejar:
    if "sess" in str(c).lower():
        print "Session cookie:", c
I'm not sure; I'm still a suds noob myself. But this is what my gut tells me.
The response from Ishpeck is on the right path. I just wanted to add a few things about the Suds internals.
The suds client is a big fat abstraction layer on top of a urllib2 HTTP opener. The HTTP client, cookiejar, headers, request and responses are all stored in the transport object. The problem is that none of this activity is cached or stored inside of the transport other than, maybe, the cookies within the cookiejar, and even tracking these can sometimes be problematic.
If you want to see what's going on when debugging, my suggestion would be to add this to your code:
import logging
logging.basicConfig(level=logging.INFO)
logging.getLogger('suds.client').setLevel(logging.DEBUG)
logging.getLogger('suds.transport').setLevel(logging.DEBUG)
Suds makes use of the native logging module, so by turning on debug logging you get to see all of the activity being performed underneath, including headers, variables, payload, URLs, etc. This has saved me many times.
Outside of that, if you really need to definitively track state on your headers, you're going to need to create a custom subclass of suds.transport.http.HttpTransport, overload some of the default behavior, and then pass that to the Client constructor.
Here is a super-over-simplified example:
from suds.transport.http import HttpTransport, Reply, TransportError
from suds.client import Client


class MyTransport(HttpTransport):
    # custom stuff done here; for example, one (sketch-level) way to keep
    # the headers of the last reply around for inspection:
    def send(self, request):
        reply = HttpTransport.send(self, request)
        self.last_headers = reply.headers
        return reply


mytransport_instance = MyTransport()
myclient = Client(url, transport=mytransport_instance)
I think the suds library has poor documentation, so I recommend you use Zeep, a SOAP client library for Python. Its documentation isn't perfect, but it's much clearer than the suds docs.
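For the original problem (pulling the session id out of the reply), a minimal zeep sketch; it assumes the server returns the session as a cookie, since zeep's default transport is backed by a requests Session:

from zeep import Client

client = Client('http://10.1.0.36/money_trans/api3.wsdl')
login_res = client.service.Login('login', 'password')

# cookies set by the server accumulate in the underlying requests Session
for cookie in client.transport.session.cookies:
    print(cookie.name, cookie.value)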