I am trying to connect to AWS S3 using the following steps, but the s3.meta.client.head_bucket call hung for almost 30 minutes. Is there any way to find out the reason for the hang? Can we run any checks before connecting to AWS S3 to verify the connection, or can we set a timeout?
import boto3
import botocore
boto3.setup_default_session(profile_name='aws_profile')
s3 = boto3.resource('s3')
s3.meta.client.head_bucket(Bucket='pha-bucket')
Traceback (most recent call last):
File "", line 1, in
File "/opt/freeware/lib/python2.7/site-packages/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/freeware/lib/python2.7/site-packages/botocore/client.py", line 531, in _make_api_call
operation_model, request_dict)
File "/opt/freeware/lib/python2.7/site-packages/botocore/endpoint.py", line 141, in make_request
return self._send_request(request_dict, operation_model)
File "/opt/freeware/lib/python2.7/site-packages/botocore/endpoint.py", line 170, in _send_request
success_response, exception):
File "/opt/freeware/lib/python2.7/site-packages/botocore/endpoint.py", line 249, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/opt/freeware/lib/python2.7/site-packages/botocore/hooks.py", line 227, in emit
return self._emit(event_name, kwargs)
File "/opt/freeware/lib/python2.7/site-packages/botocore/hooks.py", line 210, in _emit
response = handler(**kwargs)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 183, in call
if self._checker(attempts, response, caught_exception):
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 251, in call
caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 317, in call
caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 223, in call
attempt_number, caught_exception)
File "/opt/freeware/lib/python2.7/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "https://s3.ap-south-1.amazonaws.com/pha-bucket"
Logs from the logging module:
02-12-2021T04:30:35|connectionpool.py[735]|INFO:Starting new HTTPS connection (1): s3.ap-south-1.amazonaws.com
02-12-2021T04:35:55|connectionpool.py[735]|INFO:Starting new HTTPS connection (2): s3.ap-south-1.amazonaws.com
02-12-2021T04:41:17|connectionpool.py[735]|INFO:Starting new HTTPS connection (3): s3.ap-south-1.amazonaws.com
02-12-2021T04:46:40|connectionpool.py[735]|INFO:Starting new HTTPS connection (4): s3.ap-south-1.amazonaws.com
02-12-2021T04:52:07|connectionpool.py[735]|INFO:Starting new HTTPS connection (5): s3.ap-south-1.amazonaws.com
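To answer the timeout part of the question directly: you can pass a botocore Config so the client fails fast instead of retrying for half an hour. A minimal sketch, reusing the question's placeholder profile and bucket names:
import boto3
from botocore.config import Config

boto3.setup_default_session(profile_name='aws_profile')

# Bound each connection attempt and cap retries so a dead endpoint
# surfaces as an EndpointConnectionError in seconds, not ~30 minutes.
config = Config(connect_timeout=5, read_timeout=10, retries={'max_attempts': 2})
s3 = boto3.resource('s3', config=config)
s3.meta.client.head_bucket(Bucket='pha-bucket')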
Another way to access AWS resources is to use a session.
A session manages state for a particular configuration. By default,
a session is created for you when needed; however, in some scenarios
it is possible, and recommended, to maintain your own session.
Sessions typically store the following:
Credentials
AWS Region
Other configuration related to your profile (for example, assuming a role with more or fewer permissions depending on the use case)
Here is an example:
import boto3
session = boto3.Session(
    aws_access_key_id='AWS_ACCESS_KEY_ID',          # set manually or by env var
    aws_secret_access_key='AWS_SECRET_ACCESS_KEY',  # set manually or by env var
)
s3 = session.resource('s3')
bucket = s3.Bucket('my-personal-test')
for my_bucket_object in bucket.objects.all():
    print(my_bucket_object)
You need to pass the access key and secret key when connecting to a resource if you haven't previously configured the AWS CLI with the aws configure command. Note that the keyword arguments are aws_access_key_id and aws_secret_access_key:
s3 = boto3.resource('s3', aws_access_key_id='xyz', aws_secret_access_key='scz')
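A session can equally be built from a named profile plus an explicit region, which matches the setup_default_session(profile_name=...) call in the question ('aws_profile' and the region are the question's placeholders):
import boto3

session = boto3.Session(profile_name='aws_profile', region_name='ap-south-1')
s3 = session.resource('s3')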
I just tried the same code you shared, and it just worked:
In [1]: import boto3
In [2]: import botocore
In [3]: boto3.setup_default_session(profile_name='myprofile')
In [4]: s3=boto3.resource('s3')
...: s3.meta.client.head_bucket(Bucket='zappaadsaziuis7v4f')
Out[4]:
{'ResponseMetadata': {'RequestId': '0CCCB0E3D17D9948',
'HostId': 'Eu96QWMyG+Ip9XedndlUBemQ7eE9Ps9Lzl1q2NOqi3fbcADEbdo=',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amz-id-2': 'Eu96QWMyG+Ip9XedndlUBemQ7eE9fbcADEbdo=',
'x-amz-request-id': '0CCCB0E3D17D9948',
'date': 'Tue, 16 Feb 2021 20:46:57 GMT',
'x-amz-bucket-region': 'eu-central-1',
'content-type': 'application/xml',
'server': 'AmazonS3'},
'RetryAttempts': 1}}
In [7]: s3.meta.client.head_bucket(Bucket='mytestbucketzpl')
Out[7]:
{'ResponseMetadata': {'RequestId': '4F10A0EBE7577A78',
'HostId': 'vfA4aUVnbcrO1glIGe7rm9WMyvwg7b5ZT1NTrq',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amz-id-2': '',
'x-amz-request-id': '',
'date': 'Tue, 16 Feb 2021 20:36:00 GMT',
'x-amz-bucket-region': 'ap-south-1'},
'RetryAttempts': 0}}
I would suggest taking a look at the AWS Knowledge Center article How can I troubleshoot the "Could not connect to the endpoint URL" error when I run the sync command on my Amazon S3 bucket?
Or try an alternate approach as described in the boto3 docs:
Accessing a bucket
# Boto3
import boto3
import botocore

s3 = boto3.resource('s3')
bucket = s3.Bucket('mybucket')
exists = True
try:
    s3.meta.client.head_bucket(Bucket='mybucket')
except botocore.exceptions.ClientError as e:
    # If a client error is thrown, check whether it was a 404 error.
    # If it was a 404, the bucket does not exist.
    # (A 403 would mean the bucket exists but you lack access to it.)
    error_code = e.response['Error']['Code']
    if error_code == '404':
        exists = False
Related
I am building a Python client-side application that uses Firestore. I have successfully used Google Identity Platform to sign up and sign in to the Firebase project, and created a working Firestore client using google.cloud.firestore.Client that is authenticated as a user:
import json
import requests
from requests.exceptions import HTTPError
import google.oauth2.credentials
from google.cloud import firestore

# (This runs inside a class; AuthError is the app's own exception type.)
request_url = f"https://identitytoolkit.googleapis.com/v1/accounts:signInWithPassword?key={self.__api_key}"
headers = {"Content-Type": "application/json; charset=UTF-8"}
data = json.dumps({"email": self.__email, "password": self.__password, "returnSecureToken": True})
response = requests.post(request_url, headers=headers, data=data)
try:
    response.raise_for_status()
except (HTTPError, Exception):
    content = response.json()
    error = f"error: {content['error']['message']}"
    raise AuthError(error)
json_response = response.json()
self.__token = json_response["idToken"]
self.__refresh_token = json_response["refreshToken"]
credentials = google.oauth2.credentials.Credentials(self.__token,
                                                    self.__refresh_token,
                                                    client_id="",
                                                    client_secret="",
                                                    token_uri=f"https://securetoken.googleapis.com/v1/token?key={self.__api_key}")
self.__db = firestore.Client(self.__project_id, credentials)
I have the problem, however, that when the token has expired, I get the following error:
Traceback (most recent call last):
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 826, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/usr/local/lib/python3.7/dist-packages/grpc/_channel.py", line 729, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "Missing or invalid authentication."
debug_error_string = "{"created":"#1613043524.699081937","description":"Error received from peer ipv4:172.217.16.74:443","file":"src/core/lib/surface/call.cc","file_line":1055,"grpc_message":"Missing or invalid authentication.","grpc_status":16}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/usr/lib/python3.7/threading.py", line 917, in _bootstrap_inner
self.run()
File "/home/my_app/src/controllers/im_alive.py", line 20, in run
self.__device_api.set_last_updated(utils.device_id())
File "/home/my_app/src/api/firestore/firestore_device_api.py", line 21, in set_last_updated
"lastUpdatedTime": self.__firestore.SERVER_TIMESTAMP
File "/home/my_app/src/api/firestore/firestore.py", line 100, in update
ref.update(data)
File "/usr/local/lib/python3.7/dist-packages/google/cloud/firestore_v1/document.py", line 382, in update
write_results = batch.commit()
File "/usr/local/lib/python3.7/dist-packages/google/cloud/firestore_v1/batch.py", line 147, in commit
metadata=self._client._rpc_metadata,
File "/usr/local/lib/python3.7/dist-packages/google/cloud/firestore_v1/gapic/firestore_client.py", line 1121, in commit
request, retry=retry, timeout=timeout, metadata=metadata
File "/usr/local/lib/python3.7/dist-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/usr/local/lib/python3.7/dist-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/usr/local/lib/python3.7/dist-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/usr/local/lib/python3.7/dist-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.Unauthenticated: 401 Missing or invalid authentication.
I have tried omitting the token and only specifying the refresh token, and then calling credentials.refresh(), but the expires_in in the response from the https://securetoken.googleapis.com/v1/token endpoint is a string instead of a number (docs here), which makes _parse_expiry(response_data) in google.oauth2._client.py:257 raise an exception.
Is there any way to use the firestore.Client from either google.cloud or firebase_admin and have it automatically handle refreshing tokens, or do I need to switch to the manually calling the Firestore RPC API and refreshing tokens at the correct time?
Note: There are no users interacting with the python app, so the solution must not require user interaction.
Can't you just cast the string to an integer, i.e. _parse_expiry(int(float(response_data)))?
If that is not working, you could try to make the call and refresh the token after getting a 401 error; see my answer for the general idea of how to handle tokens.
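The general idea, as a rough sketch (make_client is a hypothetical helper standing in for re-running the signInWithPassword flow from the question and rebuilding the client):
from google.api_core.exceptions import Unauthenticated

def update_with_refresh(make_client, doc_path, data):
    # make_client() signs in and returns a fresh, authenticated firestore.Client.
    client = make_client()
    try:
        client.document(doc_path).update(data)
    except Unauthenticated:
        # The token expired: sign in again, rebuild the client, retry once.
        client = make_client()
        client.document(doc_path).update(data)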
As mentioned by @Marco, it is recommended to use a service account if the code is going to run in an environment without a user. When you use a service account, you can just set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the location of the service account JSON file and instantiate the Firestore client without any credentials (they will be picked up automatically):
from google.cloud import firestore

client = firestore.Client()
and run it as (assuming Linux):
$ export GOOGLE_APPLICATION_CREDENTIALS=/path/to/credentials.json
$ python file.py
Still, if you really want to use user credentials for the script, you can install the Google Cloud SDK, then:
$ gcloud auth application-default login
This will open a browser for you to select an account and log in. After logging in, it creates a "virtual" service account file corresponding to your user account (which will also be loaded automatically by clients). Here too, you don't need to pass any parameters to your client.
See also: Difference between “gcloud auth application-default login” and “gcloud auth login”
I run test scripts for AWS IoT in a Bitbucket pipeline using Python + boto3.
It worked fine until recently; now I get the following error:
Traceback (most recent call last):
File "/localDebugRepo/tests/aws/test_iot_api.py", line 119, in test_set_get_owner
self.iot_util.set_owner(owner, self.test_thing)
File "/localDebugRepo/aws/iot_api.py", line 176, in set_owner
self.iot_data.update_thing_shadow(thingName=thing, payload=payload)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 663, in _make_api_call
operation_model, request_dict, request_context)
File "/usr/local/lib/python3.6/site-packages/botocore/client.py", line 682, in _make_request
return self._endpoint.make_request(operation_model, request_dict)
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 102, in make_request
return self._send_request(request_dict, operation_model)
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 137, in _send_request
success_response, exception):
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 256, in _needs_retry
caught_exception=caught_exception, request_dict=request_dict)
File "/usr/local/lib/python3.6/site-packages/botocore/hooks.py", line 356, in emit
return self._emitter.emit(aliased_event_name, **kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/hooks.py", line 228, in emit
return self._emit(event_name, kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/hooks.py", line 211, in _emit
response = handler(**kwargs)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 183, in __call__
if self._checker(attempts, response, caught_exception):
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 251, in __call__
caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 277, in _should_retry
return self._checker(attempt_number, response, caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 317, in __call__
caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 223, in __call__
attempt_number, caught_exception)
File "/usr/local/lib/python3.6/site-packages/botocore/retryhandler.py", line 359, in _check_caught_exception
raise caught_exception
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 200, in _do_get_response
http_response = self._send(request)
File "/usr/local/lib/python3.6/site-packages/botocore/endpoint.py", line 269, in _send
return self.http_session.send(request)
File "/usr/local/lib/python3.6/site-packages/botocore/httpsession.py", line 281, in send
raise SSLError(endpoint_url=request.url, error=e)
botocore.exceptions.SSLError: SSL validation failed for https://data.iot.eu-central-1.amazonaws.com/things/thing-unittest/shadow [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:777)
While I cannot reproduce this on my local system, the error does reproduce with the default python:3.6.4 Docker image, indicating that there might be an invalid certificate.
Interestingly, running the following command in the pipeline is successful:
openssl s_client -connect data.iot.eu-central-1.amazonaws.com:443
root#f30a34330be5:/localDebugRepo# openssl s_client -connect data.iot.eu-central-1.amazonaws.com:443
CONNECTED(00000003)
depth=2 C = US, O = "VeriSign, Inc.", OU = VeriSign Trust Network, OU = "(c) 2006 VeriSign, Inc. - For authorized use only", CN = VeriSign Class 3 Public Primary Certification Authority - G5
verify return:1
depth=1 C = US, O = Symantec Corporation, OU = Symantec Trust Network, CN = Symantec Class 3 Secure Server CA - G4
verify return:1
depth=0 C = US, ST = Washington, L = Seattle, O = "Amazon.com, Inc.", CN = *.iot.eu-central-1.amazonaws.com
verify return:1
140686038922896:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:177:
---
Certificate chain
0 s:/C=US/ST=Washington/L=Seattle/O=Amazon.com, Inc./CN=*.iot.eu-central-1.amazonaws.com
i:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
1 s:/C=US/O=Symantec Corporation/OU=Symantec Trust Network/CN=Symantec Class 3 Secure Server CA - G4
i:/C=US/O=VeriSign, Inc./OU=VeriSign Trust Network/OU=(c) 2006 VeriSign, Inc. - For authorized use only/CN=VeriSign Class 3 Public Primary Certification Authority - G5
---
Any advice on how I can debug this further would be greatly appreciated.
It would appear that AWS has had bad certs for the last several hours.
I do not subscribe to a support tier, so I don't know how to tell them.
I am getting the same problem; boto3 reports the bad cert (which you can verify in a browser).
All of my IoT functions are affected, though if I run it locally (not as a lambda), it seems to work.
Perhaps someone has a way to tell Amazon their little problem?
Edit:
See:
https://forums.aws.amazon.com/thread.jspa?messageID=967311
and
https://github.com/boto/boto3/issues/2686
for the fix. You shouldn't use the defaults when creating your data-plane client, because certifi (the Python CA bundle) has been fixed to reject the Symantec CA behind the default endpoint URL, and Amazon isn't going to fix it.
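A sketch of that approach, assuming your things live in eu-central-1 as in the traceback: ask AWS IoT for the ATS endpoint and point the data-plane client at it explicitly.
import boto3

iot = boto3.client('iot', region_name='eu-central-1')
ats = iot.describe_endpoint(endpointType='iot:Data-ATS')['endpointAddress']

# The ATS endpoint is signed by an Amazon Trust Services CA,
# which certifi still accepts.
iot_data = boto3.client('iot-data', region_name='eu-central-1',
                        endpoint_url='https://' + ats)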
The solution pointed out by Eric Lyons did not work for me directly. The problem was with the endpoint provided by:
iot_client = boto3.client("iot", region_name=os.getenv("IOT_REGION"))
iot_client.describe_endpoint(endpointType="iot:Data-ATS").get("endpointAddress")
It fails during authentication. I fixed it by getting the endpoint directly from the IoT Core settings page:
import boto3

iot_data = boto3.client(
    'iot-data',
    aws_access_key_id='<MY ACCESS KEY>',
    aws_secret_access_key='<MY ACCESS SECRET KEY>',
    endpoint_url='<MY ENDPOINT>',  # copied from the IoT Core settings page
)
I'm trying to access a Google Cloud Storage bucket from a Cloud Functions (Python) instance and it's throwing a cryptic 500 error.
I have given the service account the Editor role too. It didn't make any difference.
I also checked whether any quota was being exceeded. The limits were not even close.
Can anyone help me find the cause of this error?
Here is the code:
from google.cloud import storage
import os
import base64

storage_client = storage.Client()

def init_analysis(event, context):
    print("event", event)
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print(pubsub_message)
    bucket_name = 'my-bucket'
    bucket = storage_client.get_bucket(bucket_name)
    blobs = bucket.list_blobs()
    for blob in blobs:
        print(blob.name)
Error:
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 99, in refresh
    service_account=self._service_account_email)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 208, in get_service_account_token
    'instance/service-accounts/{0}/token'.format(service_account))
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 140, in get
    url, response.status, response.data), response)
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token from the Google Compute Engine metadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token\\n'", <google.auth.transport.requests._Response object at 0x2b0ef9edf438>)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 383, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 214, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 21, in init_analysis
    bucket = storage_client.get_bucket(bucket_name)
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/client.py", line 227, in get_bucket
    bucket.reload(client=self)
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/_helpers.py", line 130, in reload
    _target_object=self,
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 315, in api_request
    target_object=_target_object,
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 192, in _make_request
    return self._do_request(method, url, headers, data, target_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 221, in _do_request
    return self.http.request(url=url, method=method, headers=headers, data=data)
  File "/env/local/lib/python3.7/site-packages/google/auth/transport/requests.py", line 205, in request
    self._auth_request, method, url, request_headers)
  File "/env/local/lib/python3.7/site-packages/google/auth/credentials.py", line 122, in before_request
    self.refresh(request)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 102, in refresh
    six.raise_from(new_exc, caught_exc)
  File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token from the Google Compute Engine metadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token\\n'", <google.auth.transport.requests._Response object at 0x2b0ef9edf438>)
The error you are getting is because your Cloud Functions service account doesn't have the cloudfunctions.serviceAgent role. As you can see in the documentation:
Authenticating as the runtime service account from inside your function may fail if you change the Cloud Functions service account's permissions.
However, I found that sometimes you cannot add this role, as it doesn't show up as an option. I have reported this issue to the Google Cloud Functions engineering team and they are working to solve it.
Nevertheless, you can add the role again using this gcloud command:
gcloud projects add-iam-policy-binding <project_name> --role=roles/cloudfunctions.serviceAgent --member=serviceAccount:service-<project_number>@gcf-admin-robot.iam.gserviceaccount.com
I'm using the BigQuery API client library for Python from Google Compute Engine. While making a query, it throws a network-unreachable error.
[INFO:2018-01-02 16:16:04,887:oauth2client.transport] Attempting refresh to obtain initial access_token
[INFO:2018-01-02 16:16:04,924:oauth2client.client] Refreshing access_token
Traceback (most recent call last):
File "/test/data/reports/ga_bigquery.py", line 130, in ga_table_string
if bq_dataset.table("ga_sessions_{}".format(str_date)).exists():
File "/test/data/reports/.venv/lib/python2.7/site-packages/gcloud/bigquery/table.py", line 472, in exists
query_params={'fields': 'id'})
File "/test/data/reports/.venv/lib/python2.7/site-packages/gcloud/connection.py", line 343, in api_request
target_object=_target_object)
File "/test/data/reports/.venv/lib/python2.7/site-packages/gcloud/connection.py", line 241, in _make_request
return self._do_request(method, url, headers, data, target_object)
File "/test/data/reports/.venv/lib/python2.7/site-packages/gcloud/connection.py", line 270, in _do_request
body=data)
File "/test/data/reports/.venv/lib/python2.7/site-packages/oauth2client/transport.py", line 153, in new_request
credentials._refresh(orig_request_method)
File "/test/data/reports/.venv/lib/python2.7/site-packages/oauth2client/client.py", line 765, in _refresh
self._do_refresh_request(http_request)
File "/test/data/reports/.venv/lib/python2.7/site-packages/oauth2client/client.py", line 797, in _do_refresh_request
self.token_uri, method='POST', body=body, headers=headers)
File "/test/data/reports/.venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1609, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/test/data/reports/.venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1351, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/test/data/reports/.venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1272, in _conn_request
conn.connect()
File "/test/data/reports/.venv/lib/python2.7/site-packages/httplib2/__init__.py", line 1075, in connect
raise socket.error, msg
socket.error: [Errno 101] Network is unreachable
What could be the reason?
I ran the code with the following options:
[root@myserver]# strace -ff -e poll,select,connect,recvfrom,sendto python run.py --date=20180102 >> strace.log
[root@myserver]# cat strace.log | grep unreachable
connect(3, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "2404:6800:4003:c02::54", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
connect(4, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "2404:6800:4003:803::200a", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
connect(5, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "2404:6800:4003:808::200d", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
connect(6, {sa_family=AF_INET6, sin6_port=htons(443), inet_pton(AF_INET6, "2404:6800:4003:c03::5f", &sin6_addr), sin6_flowinfo=0, sin6_scope_id=0}, 28) = -1 ENETUNREACH (Network is unreachable)
From the log it looks like the compute instance is blocking outgoing IPv6 requests.
Is there any way we can unblock IPv6 requests in Google Compute Engine?
First, if you are using the BigQuery API Client Library for Python, you may want to try the BigQuery Cloud Client Library for Python, instead.
One reason for this could be that the API client library uses httplib2 to make requests to BigQuery, which has problems in some network setups such as behind proxies. The Cloud library uses the more standard requests library, so it should be more reliable.
Second, even if you don't switch libraries, you should change what authentication library you are using. Your stack trace shows oauth2client, but oauth2client is deprecated. Use google-auth-httplib2 to use the API client libraries with the google-auth library.
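For reference, a minimal sketch of the same table-existence check against the Cloud Client Library (google-cloud-bigquery); the dataset and table names mirror the question's placeholders:
from google.api_core.exceptions import NotFound
from google.cloud import bigquery

client = bigquery.Client()  # authenticates via google-auth, not oauth2client

try:
    client.get_table('my_dataset.ga_sessions_20180102')  # replaces the old .exists()
    exists = True
except NotFound:
    exists = False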
So I'm running my google endpoint locally with dev_appserver.py.
I use the API explorer to test the application.
The code I'm using to create the Service, so I can call the API is the following:
from apiclient.discovery import build
from oauth2client.client import GoogleCredentials
credentials = GoogleCredentials.get_application_default()
service = build('speech', 'v1beta1', credentials=credentials)
I receive an SSL error (Invalid and/or missing SSL certificate), even though when I access the stated URL via browser it works fine (that is, the green padlock shows up).
I'm not sure what changed, but this was working fine not long ago.
I tried to disable SSL checking, but was unable to.
Full logs below:
INFO 2017-01-02 03:12:02,724 discovery.py:267] URL being requested: GET https://www.googleapis.com/discovery/v1/apis/speech/v1beta1/rest?userIp=0.2.0.3
ERROR 2017-01-02 03:12:03,022 wsgi.py:263]
Traceback (most recent call last):
File "/home/vini/opt/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 240, in Handle
handler = _config_handle.add_wsgi_middleware(self._LoadHandler())
File "/home/vini/opt/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 299, in _LoadHandler
handler, path, err = LoadObject(self._handler)
File "/home/vini/opt/google-cloud-sdk/platform/google_appengine/google/appengine/runtime/wsgi.py", line 85, in LoadObject
obj = __import__(path[0])
File "/mnt/b117/home/vini/udacity/cerci-endpoint/api.py", line 28, in <module>
service = build('speech', 'v1beta1', credentials=credentials)
File "/mnt/b117/home/vini/udacity/cerci-endpoint/lib/oauth2client/_helpers.py", line 133, in positional_wrapper
return wrapped(*args, **kwargs)
File "/mnt/b117/home/vini/udacity/cerci-endpoint/lib/googleapiclient/discovery.py", line 222, in build
cache)
File "/mnt/b117/home/vini/udacity/cerci-endpoint/lib/googleapiclient/discovery.py", line 269, in _retrieve_discovery_doc
resp, content = http.request(actual_url)
File "/mnt/b117/home/vini/udacity/cerci-endpoint/lib/httplib2/__init__.py", line 1609, in request
(response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
File "/mnt/b117/home/vini/udacity/cerci-endpoint/lib/httplib2/__init__.py", line 1351, in _request
(response, content) = self._conn_request(conn, request_uri, method, body, headers)
File "/mnt/b117/home/vini/udacity/cerci-endpoint/lib/httplib2/__init__.py", line 1307, in _conn_request
response = conn.getresponse()
File "/home/vini/opt/google-cloud-sdk/platform/google_appengine/google/appengine/dist27/gae_override/httplib.py", line 532, in getresponse
raise HTTPException(str(e))
HTTPException: Invalid and/or missing SSL certificate for URL: https://www.googleapis.com/discovery/v1/apis/speech/v1beta1/rest?userIp=0.2.0.3
Any ideas what could be causing this problem?
Do I have to "install" or update the SSL certificates used by Python?
According to App Engine issue 13477, it seems that some of the certificates in the urlfetch_cacerts.txt included in the App Engine Python SDK / gcloud SDK expired on 2017-01-01.
As a temporary workaround, you can replace the contents of <your-cloud-sdk-path>/platform/google_appengine/lib/cacerts/urlfetch_cacerts.txt with https://curl.haxx.se/ca/cacert.pem
To build on the answer by @danielx, for those on macOS this is what worked for me. The path to the certificates for me was:
/Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/cacerts/urlfetch_cacerts.txt
To update it, I used the following steps:
cd /Applications/GoogleAppEngineLauncher.app/Contents/Resources/GoogleAppEngine-default.bundle/Contents/Resources/google_appengine/lib/cacerts
mv urlfetch_cacerts.txt urlfetch_cacerts.bup
curl -o urlfetch_cacerts.txt -k https://curl.haxx.se/ca/cacert.pem
If you don't have curl installed, you can manually download the certificates and move them to the folder above.
Don't forget to restart the App Engine dev server if it is already running.
Got this error in a local dev environment as recently as August 2017. The fix is to update all urlfetch calls to force validation of the certs:
urlfetch.fetch(url=url, validate_certificate=True)
Did not have to touch the gcloud certs (macOS). See Issuing an HTTPS request.