I run test scripts for AWS IoT in a Bitbucket pipeline using Python + boto3.
It worked fine until recently; now I get the following error:
Traceback (most recent call last):
File "/localDebugRepo/tests/aws/test_iot_api.py", line 119, in test_set_get_owner
self.iot_util.set_owner(owner, self.test_thing)
File "/localDebugRepo/aws/iot_api.py", line 176, in set_owner
self.iot_data.update_thing_shadow(thingName=thing, payload=payload)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 663, in _make_api_call
operation_model, request_dict, request_context)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 682, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 256, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/usr/local/lib/python3.6/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/usr/local/lib/python3.6/site-packages/botocore/httpsession.py", line 281, in send
raise SSLError(endpoint_url=request.url, error=e)
botocore.exceptions.SSLError: SSL validation failed for https://data.iot.eu-central-1.amazonaws.com/things/thing-unittest/shadow [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)
I cannot reproduce this on my local system, but I can reproduce the error with the default python:3.6.4 Docker image, which indicates that there might be an invalid certificate.
Interestingly, running the following command in the pipeline is successful:
openssl s_client -connect data.iot.eu-central-1.amazonaws.com:443
root@f30a34330be5:/localDebugRepo# openssl s_client -connect data.iot.eu-central-1.amazonaws.com:443
CONNECTED(00000003)
depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = "(c) 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class 3 Public Primary Certification Authority - G5
verify return:1
depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, CN = Symantec Class 3 Secure Server CA - G4
verify return:1
depth=0 C = US, ST = Washington, L = Seattle, O = "Amazon.com, Inc.", CN = *.iot.eu-central-1.amazonaws.com
verify return:1
140686038922896:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
---
Certificate chain
0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=*.iot.eu-central-1.amazonaws.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
---
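Something I plan to try next (a sketch, not yet run in the pipeline): botocore validates against certifi's CA bundle rather than the system store that the openssl CLI uses, so repeating the handshake through Python and certifi should show whether the bundle is the problem:
import socket
import ssl

import certifi

host = "data.iot.eu-central-1.amazonaws.com"
ctx = ssl.create_default_context(cafile=certifi.where())
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # If certifi no longer trusts this chain, this raises CERTIFICATE_VERIFY_FAILED
        # just like botocore does; otherwise it prints the issuer of the server cert.
        print(tls.getpeercert()["issuer"])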
Any advice on how I can debug this further would be greatly appreciated.
It would appear that AWS has had bad certs for the last several hours.
I do not subscribe to a support tier, so I don't know how to tell them.
I am getting the same problem; boto3 reports the bad cert (which you can verify in a browser).
All of my IoT functions are affected, though if I run it locally (not as a Lambda), it seems to work.
Perhaps someone has a way to tell Amazon about their little problem?
Edit:
See:
https://forums.aws.amazon.com/thread.jspa?messageID=967311
and
https://github.com/boto/boto3/issues/2686
for the fix. You shouldn't use the defaults when creating your data-plane client, because certifi (Python) has been fixed to distrust the Symantec CA behind that URL, and Amazon isn't going to fix it.
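In practice that means creating the iot-data client against your account's ATS endpoint instead of letting boto3 default to the legacy data.iot.<region>.amazonaws.com host. A rough sketch (region and variable names are illustrative):
import boto3

# Resolve the account-specific ATS data endpoint; the legacy default endpoint
# serves the Symantec-signed certificate that certifi no longer trusts.
iot = boto3.client("iot", region_name="eu-central-1")
ats = iot.describe_endpoint(endpointType="iot:Data-ATS")["endpointAddress"]

iot_data = boto3.client("iot-data",
                        region_name="eu-central-1",
                        endpoint_url="https://" + ats)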
The solution pointed out by Eric Lyons did not work for me directly. The problem was with the endpoint provided by:
iot_client = boto3.client("iot", region_name=os.getenv("IOT_REGION"))
iot_client.describe_endpoint(endpointType="iot:Data-ATS").get("endpointAddress")
It fails during authentication.
I fixed it by getting the endpoint directly from the IoT Core settings page:
import boto3

iot_data = boto3.client('iot-data',
                        aws_access_key_id='<MY ACCESS KEY>',
                        aws_secret_access_key='<MY ACCESS SECRET KEY>',
                        endpoint_url='<MY ENDPOINT>')
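With that client, the shadow update from the failing test should go through again. A quick sketch (thing name and payload are illustrative):
import json

payload = json.dumps({"state": {"reported": {"owner": "new-owner"}}})
iot_data.update_thing_shadow(thingName="thing-unittest", payload=payload)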
Related
I am trying to connect to AWS S3 using the following steps, but the command s3.meta.client.head_bucket hung for almost 30 minutes. Is there any way to know the reason for the hang? Can we run any checks before connecting to S3 to make sure the connection is good, or can we set a timeout?
import boto3
import botocore
boto3.setup_default_session(profile_name='aws_profile')
s3=boto3.resource('s3')
s3.meta.client.head_bucket(Bucket='pha-bucket')
Traceback (most recent call last):
File "", line 1, in
File "/opt/freeware/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/freeware/lib/python2.7/site-packages/botocore/client.py", line 531, in _make_api_call
operation_model, request_dict)
File "/opt/freeware/lib/python2.7/site-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/opt/freeware/lib/python2.7/site-packages/botocore/endpoint.py", line 170, in _send_request
success_response, exception):
File "/opt/freeware/lib/python2.7/site-packages/botocore/endpoint.py", line 249, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/opt/freeware/lib/python2.7/site-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/opt/freeware/lib/python2.7/site-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 183, in call
if self._checker(attempts, response, caught_exception):
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 251, in call
caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 317, in call
caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 223, in call
attempt_number, caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.ap-south-1.amazonaws.com/pha-bucket"
Logs from the logging module:
02-12-2021T04:30:35|connectionpool.py[735]|INFO:Starting new HTTPS connection (1): s3.ap-south-1.amazonaws.com
02-12-2021T04:35:55|connectionpool.py[735]|INFO:Starting new HTTPS connection (2): s3.ap-south-1.amazonaws.com
02-12-2021T04:41:17|connectionpool.py[735]|INFO:Starting new HTTPS connection (3): s3.ap-south-1.amazonaws.com
02-12-2021T04:46:40|connectionpool.py[735]|INFO:Starting new HTTPS connection (4): s3.ap-south-1.amazonaws.com
02-12-2021T04:52:07|connectionpool.py[735]|INFO:Starting new HTTPS connection (5): s3.ap-south-1.amazonaws.com
Another way to access AWS resources is to use a session.
A session manages state about a particular configuration. By default,
a session is created for you when needed. However, it's possible and
recommended that in some scenarios you maintain your own session.
Sessions typically store the following:
Credentials
AWS Region
Other configuration related to your profile (for example, assuming a role with more or fewer permissions depending on the use case)
Here is an example:
import boto3
session = boto3.Session(
aws_access_key_id='AWS_ACCESS_KEY_ID', #set manually or by envvar
aws_secret_access_key='AWS_SECRET_ACCESS_KEY', #set manually or by envvar
)
s3 = session.resource('s3')
bucket = s3.Bucket('my-personal-test')
for my_bucket_object in bucket.objects.all():
    print(my_bucket_object)
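If a call such as head_bucket hangs the way it does in the question, you can also pass a botocore Config with explicit timeouts and a retry cap so it fails fast instead of blocking for half an hour. A sketch with illustrative values:
import boto3
from botocore.config import Config

# Fail fast: short connect/read timeouts and only a couple of retry attempts.
cfg = Config(connect_timeout=5, read_timeout=10, retries={'max_attempts': 2})
s3 = boto3.resource('s3', config=cfg)
s3.meta.client.head_bucket(Bucket='pha-bucket')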
You need to pass the access key and secret key when connecting to a resource if you haven't previously configured AWS credentials with the aws configure command.
s3 = boto3.resource('s3', aws_access_key_id='xyz', aws_secret_access_key='scz')
I just tried the same code as you have shared, and it just worked:
In [1]: import boto3
In [2]: import botocore
In [3]: boto3.setup_default_session(profile_name='myprofile')
In [4]: s3=boto3.resource('s3')
...: s3.meta.client.head_bucket(Bucket='zappaadsaziuis7v4f')
Out[4]:
{'ResponseMetadata': {'RequestId': '0CCCB0E3D17D9948',
'HostId': 'Eu96QWMyG+Ip9XedndlUBemQ7eE9Ps9Lzl1q2NOqi3fbcADEbdo=',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amz-id-2': 'Eu96QWMyG+Ip9XedndlUBemQ7eE9fbcADEbdo=',
'x-amz-request-id': '0CCCB0E3D17D9948',
'date': 'Tue, 16 Feb 2021 20:46:57 GMT',
'x-amz-bucket-region': 'eu-central-1',
'content-type': 'application/xml',
'server': 'AmazonS3'},
'RetryAttempts': 1}}
In [7]: s3.meta.client.head_bucket(Bucket='mytestbucketzpl')
Out[7]:
{'ResponseMetadata': {'RequestId': '4F10A0EBE7577A78',
'HostId': 'vfA4aUVnbcrO1glIGe7rm9WMyvwg7b5ZT1NTrq',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amz-id-2': '',
'x-amz-request-id': '',
'date': 'Tue, 16 Feb 2021 20:36:00 GMT',
'x-amz-bucket-region': 'ap-south-1',},
'RetryAttempts': 0}}
I would suggest taking a look at this: How can I troubleshoot the "Could not connect to the endpoint URL" error when I run the sync command on my Amazon S3 bucket?
Or try an alternate approach as described in the boto3 docs:
Accessing a bucket
# Boto3
import boto3
import botocore

s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket')
exists = True
try:
    s3.meta.client.head_bucket(Bucket='mybucket')
except botocore.exceptions.ClientError as e:
    # If a client error is thrown, then check that it was a 404 error.
    # If it was a 404 error, then the bucket does not exist.
    error_code = e.response['Error']['Code']
    if error_code == '404':
        exists = False
I'm trying to use my AWS credentials file in boto but can't seem to get it to work. I'm new to Python and boto, so I'm looking at a bunch of stuff online trying to understand this.
All I'm trying to do right now is just get all EC2 instances... here is my Python code:
import boto
from boto import ec2
ec2conn = ec2.connection.EC2Connection(profile_name='profile_name')
ec2conn.get_all_instances()
When I run that, I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 1170, in get_list
response = self.make_request(action, params, path, verb)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 1116, in make_request
return self._mexe(http_request)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 913, in _mexe
self.is_secure)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 705, in get_http_connection
return self.new_http_connection(host, port, is_secure)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 747, in new_http_connection
connection = self.proxy_ssl(host, is_secure and 443 or 80)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 835, in proxy_ssl
ca_certs=self.ca_certificates_file)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 943, in wrap_socket
ciphers=ciphers)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 611, in __init__
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 840, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:661)
I've also tried ec2conn.get_all_reservations() but got the same result...
In boto3, I can do this which works:
import boto3
session = boto3.Session(profile_name='profile_name')
dev_ec2 = session.client('ec2')
dev_ec2.describe_instances()
------EDIT--------
So I found this link on Stack Overflow: Recommended way to manage credentials with multiple AWS accounts?, and what I did was export my AWS_PROFILE var:
export AWS_PROFILE="profile_nm"
that worked when I did this:
>>> import boto
>>> conn = boto.connect_s3()
>>> conn.get_all_buckets()
And I got all of the s3 buckets back...
But when I did the above to get all the EC2 instances back... I still got the ssl.SSLEOFError above. It seems to work with S3 but not EC2 now... So, is the way I get all the EC2 instances wrong?
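For reference, the usual boto 2 pattern is to iterate the reservations and then their instances. A sketch (it won't get around the SSL error itself, which looks proxy-related given the proxy_ssl frames in the traceback):
import boto
from boto import ec2

conn = ec2.connection.EC2Connection(profile_name='profile_name')
for reservation in conn.get_all_reservations():
    for instance in reservation.instances:
        print(instance.id, instance.state)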
I'm trying to send Apple push notifications with the django-ios-notifications app, but I've run into a WantReadError.
Here is the stack trace:
File ".../local/lib/python2.7/site-packages/ios_notifications/models.py", line 110, in push_notification_to_devices
self._write_message(notification, devices, chunk_size)
File ".../local/lib/python2.7/site-packages/ios_notifications/models.py", line 132, in _write_message
self._connect()
File ".../local/lib/python2.7/site-packages/ios_notifications/models.py", line 100, in _connect
return super(APNService, self)._connect(self.certificate, self.private_key, self.passphrase)
File ".../local/lib/python2.7/site-packages/ios_notifications/models.py", line 64, in _connect
self.connection.do_handshake()
File ".../local/lib/python2.7/site-packages/OpenSSL/SSL.py", line 1076, in do_handshake
self._raise_ssl_error(self._ssl, result)
File ".../local/lib/python2.7/site-packages/OpenSSL/SSL.py", line 847, in _raise_ssl_error
raise WantReadError()
WantReadError
I have no idea how to fix it.
I've created the required APNService with the correct certificate and private key; the hostname is gateway.sandbox.push.apple.com.
A command like this works fine on Ubuntu:
openssl s_client -CApath /etc/ssl/certs/ -connect gateway.sandbox.push.apple.com:2195 -cert my.pem
It returns:
Start Time: 1458724825
Timeout : 300 (sec)
Verify return code: 0 (ok)
UPDATE: the problem was solved.
According to https://github.com/stephenmuss/django-ios-notifications/issues/11, it's required to install https://github.com/mjs/gevent_openssl.
I am trying to make a local HTTPS connection to an XML-RPC API. Since I upgraded to Python 2.7.9, which enables certificate verification by default, I get a CERTIFICATE_VERIFY_FAILED error when I use my API:
>>> test=xmlrpclib.ServerProxy('https://admin:bz15h9v9n@localhost:9999/API', verbose=False, use_datetime=True)
>>> test.list_satellites()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1233, in __call__
return self.__send(self.__name, args)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1591, in __request
verbose=self.__verbose
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1273, in request
return self.single_request(host, handler, request_body, verbose)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1301, in single_request
self.send_content(h, request_body)
File "/usr/local/lib/python2.7/xmlrpclib.py", line 1448, in send_content
connection.endheaders(request_body)
File "/usr/local/lib/python2.7/httplib.py", line 997, in endheaders
self._send_output(message_body)
File "/usr/local/lib/python2.7/httplib.py", line 850, in _send_output
self.send(msg)
File "/usr/local/lib/python2.7/httplib.py", line 812, in send
self.connect()
File "/usr/local/lib/python2.7/httplib.py", line 1212, in connect
server_hostname=server_hostname)
File "/usr/local/lib/python2.7/ssl.py", line 350, in wrap_socket
_context=self)
File "/usr/local/lib/python2.7/ssl.py", line 566, in __init__
self.do_handshake()
File "/usr/local/lib/python2.7/ssl.py", line 788, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
>>> import ssl
>>> ssl._create_default_https_context = ssl._create_unverified_context
>>> test.list_satellites()
[{'paired': True, 'serial': '...', 'enabled': True, 'id': 1, 'date_paired': datetime.datetime(2015, 5, 26, 16, 17, 6)}]
Does there exist a pythonic way to disable default certificate verification in Python 2.7.9?
I don't really know if it's a good idea to change a "private" global SSL attribute (ssl._create_default_https_context = ssl._create_unverified_context).
You have to provide an unverified SSL context, constructed by hand or using the private function _create_unverified_context() from the ssl module:
import xmlrpclib
import ssl
test = xmlrpclib.ServerProxy('https://admin:bz15h9v9n@localhost:9999/API',
verbose=False, use_datetime=True,
context=ssl._create_unverified_context())
test.list_satellites()
Note: this code only works with Python >= 2.7.9 (the context parameter was added in Python 2.7.9).
If you want code compatible with earlier Python versions, you have to use the transport parameter:
import xmlrpclib
import ssl
context = hasattr(ssl, '_create_unverified_context') and ssl._create_unverified_context() \
or None
test = xmlrpclib.ServerProxy('https://admin:bz15h9v9n@localhost:9999/API',
verbose=False, use_datetime=True,
transport=xmlrpclib.SafeTransport(use_datetime=True,
context=context))
test.list_satellites()
It's possible to disable verification using the public ssl APIs available in Python 2.7.9+:
import xmlrpclib
import ssl
ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE
test = xmlrpclib.ServerProxy('https://admin:bz15h9v9n@localhost:9999/API',
verbose=False, use_datetime=True,
context=ssl_ctx)
test.list_satellites()
I think another way to disable certificate verification could be:
import xmlrpclib
import ssl
s=ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
s.verify_mode=ssl.CERT_NONE
test=xmlrpclib.Server('https://admin:bz15h9v9n@localhost:9999/API', verbose=0, context=s)
With Python 2.6.6 for example:
s = xmlrpclib.ServerProxy('https://admin:bz15h9v9n@localhost:9999/API', transport=None, encoding=None, verbose=0, allow_none=0, use_datetime=0)
It works for me...
I am trying to visit gateway.playneverwinter.com with splinter
from splinter import Browser
browser = Browser()
browser.visit('https://gateway.playneverwinter.com')
if browser.is_text_present('Neverwinter'):
print("Yes, we made it to the entrance of the Prime Material Plane!")
else:
print("Fumble")
browser.quit()
It fails with:
File "gateway_bot.py", line 10, in <module>
browser.visit('https://gateway.playneverwinter.com')
File "/usr/local/lib/python3.4/dist-packages/splinter/driver/webdriver/__init__.py", line 53, in visit
self.connect(url)
File "/usr/local/lib/python3.4/dist-packages/splinter/request_handler/request_handler.py", line 23, in connect
self._create_connection()
File "/usr/local/lib/python3.4/dist-packages/splinter/request_handler/request_handler.py", line 53, in _create_connection
self.conn.endheaders()
File "/usr/lib/python3.4/http/client.py", line 1061, in endheaders
self._send_output(message_body)
File "/usr/lib/python3.4/http/client.py", line 906, in _send_output
self.send(msg)
File "/usr/lib/python3.4/http/client.py", line 841, in send
self.connect()
File "/usr/lib/python3.4/http/client.py", line 1205, in connect
server_hostname=server_hostname)
File "/usr/lib/python3.4/ssl.py", line 364, in wrap_socket
_context=self)
File "/usr/lib/python3.4/ssl.py", line 578, in __init__
self.do_handshake()
File "/usr/lib/python3.4/ssl.py", line 805, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:598)
Firefox is able to connect and browse this site without issue, though. After some diagnostics:
$ openssl s_client -connect gateway.playneverwinter.com:443
CONNECTED(00000003)
139745006343840:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
I found that it looked like a fixed issue in OpenSSL, and that forcing either SSLv3 or TLSv1 allowed me to connect (and that I could then download the target with cURL), e.g. either of:
openssl s_client -ssl3 -connect gateway.playneverwinter.com:443
openssl s_client -tls1 -connect gateway.playneverwinter.com:443
According to the comments in the OpenSSL ticket, I expect that the issue is on the server side, but as I do not have access to it, it is quite unhelpful. So, for a quick fix, is there a way to force splinter to use SSLv3 or TLSv1?
After looking into it, the only way I can think of doing that would be to go into that client.py file and change the initialization of their ssl stuff.
Following @Natecat's suggestion, I wrote a monkey patch to force SSLv3 when this error occurs:
# Monkey patch splinter to force SSLv3 on `ssl.SSLEOFError`
from splinter import request_handler
import ssl
from http import client as http_client
_old_req = request_handler.request_handler.RequestHandler._create_connection
def _splinter_sslv3_patch(self):
    try:
        _old_req(self)
    except ssl.SSLEOFError:
        self.conn = http_client.HTTPSConnection(self.host, self.port,
                                                context=ssl.SSLContext(ssl.PROTOCOL_SSLv3))
        self.conn.putrequest('GET', self.path)
        self.conn.putheader('User-agent', 'python/splinter')
        if self.auth:
            self.conn.putheader("Authorization", "Basic %s" % self.auth)
        self.conn.endheaders()
request_handler.request_handler.RequestHandler._create_connection = _splinter_sslv3_patch
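With the patch applied before the Browser is created, the original snippet can be retried unchanged; since SSLv3 is long deprecated, swapping in ssl.PROTOCOL_TLSv1 (which also worked with the openssl CLI above) is probably the safer protocol to force. A usage sketch:
from splinter import Browser

browser = Browser()
browser.visit('https://gateway.playneverwinter.com')
print(browser.is_text_present('Neverwinter'))
browser.quit()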