I am using Python and boto. This is my code:
key = bucket.get_key(key_name)
if not key:
    print 'error, key does not exist'
    return
data = key.get_contents_as_string()
Sometimes (seemingly at random) I get this exception:
S3ResponseError: S3ResponseError: 404 Not Found
NOTE: the file is uploaded by one server, and immediately afterwards another server (located on a different continent) runs the code above.
The traceback:
Traceback (most recent call last):
  File "/test.py", line 222, in _process_response
    data = key.get_contents_as_string()
  File "/usr/lib/python2.6/site-packages/boto-2.1.1-py2.6.egg/boto/s3/key.py", line 1201, in get_contents_as_string
    response_headers=response_headers)
  File "/usr/lib/python2.6/site-packages/boto-2.1.1-py2.6.egg/boto/s3/key.py", line 1093, in get_contents_to_file
    response_headers=response_headers)
  File "/usr/lib/python2.6/site-packages/boto-2.1.1-py2.6.egg/boto/s3/key.py", line 996, in get_file
    override_num_retries=override_num_retries)
  File "/usr/lib/python2.6/site-packages/boto-2.1.1-py2.6.egg/boto/s3/key.py", line 211, in open
    override_num_retries=override_num_retries)
  File "/usr/lib/python2.6/site-packages/boto-2.1.1-py2.6.egg/boto/s3/key.py", line 165, in open_read
    self.resp.reason, body)
S3ResponseError: S3ResponseError: 404 Not Found
<Error><Code>NoSuchKey</Code><Message>The specified key does not exist.</Message><Key>key_name</Key><RequestId>id</RequestId><HostId>host_id</HostId></Error>
So I get the key, but when I try to read from it I get "not found". Any ideas?
This is expected behaviour, according to the Amazon S3 Developer Guide:
... However, information about the changes might not immediately replicate across Amazon S3 and you might observe the following behaviors: A process writes a new object to Amazon S3 and immediately attempts to read it. Until the change is fully propagated, Amazon S3 might report "key does not exist."
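A common workaround while the object propagates is to retry the read with a short backoff instead of giving up on the first 404. A minimal sketch with boto 2 (the helper name, retry count, and delay are illustrative, not from the original code):
import time
from boto.exception import S3ResponseError

def get_contents_with_retry(bucket, key_name, attempts=5, delay=2):
    """Retry until the key becomes visible; return its contents or None."""
    for attempt in range(attempts):
        key = bucket.get_key(key_name)
        if key is not None:
            try:
                return key.get_contents_as_string()
            except S3ResponseError as e:
                if e.status != 404:
                    raise  # a real error, not a propagation delay
        time.sleep(delay * (attempt + 1))  # simple linear backoff
    return None
If this still returns None after several attempts, the object genuinely isn't visible yet (or the key name is wrong).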
I'm new to AWS and DynamoDB, and I'm trying to send data to a table. I'm running this code:
import boto3

db = boto3.resource('dynamodb')
table = db.Table('Whales')
table.put_item(
    Item={
        "id": "1573138502",
        "transaction_type": "transfer",
    })
and I get this error:
Traceback (most recent call last):
File "/Users/---/Desktop/---/---/test.py", line 6, in <module>
table.put_item(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/boto3/resources/factory.py", line 520, in do_action
response = action(self, *args, **kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/boto3/resources/action.py", line 83, in __call__
response = getattr(parent.meta.client, operation_name)(*args, **params)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/botocore/client.py", line 676, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ResourceNotFoundException: An error occurred (ResourceNotFoundException) when calling the PutItem operation: Requested resource not found
I installed boto3 and authenticated with the AWS CLI. I also tried running the code from AWS Cloud9 EC2 instead, but it didn't work: same error.
I have not been able to send anything to the database from Python, and I don't understand where the problem comes from or what's causing it.
You most likely have the DynamoDB table in a different region than the one specified in your ~/.aws/config.
Run cat ~/.aws/config and check which region you are connecting with, e.g. us-east-1.
Verify that your DynamoDB table is in the same region using the AWS Console or other CLI tools.
From what I can see, your code is not syntactically wrong.
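To rule out a region mismatch without editing ~/.aws/config, you can also pass the region explicitly when creating the resource. A minimal sketch (us-east-1 is just a placeholder for whichever region the Whales table actually lives in):
import boto3

# Point the resource at the region where the table was created.
db = boto3.resource('dynamodb', region_name='us-east-1')
table = db.Table('Whales')

table.put_item(
    Item={
        "id": "1573138502",
        "transaction_type": "transfer",
    })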
I'm trying to access a Google Cloud Storage bucket from a Cloud Functions (Python) instance and it's throwing a cryptic 500 error.
I have given the service account the Editor role too; it didn't make any difference.
I also checked whether any quota was being exceeded; the limits were not even close.
Can anyone help me find the cause of this error?
Here is the code:
from google.cloud import storage
import os
import base64

storage_client = storage.Client()

def init_analysis(event, context):
    print("event", event)
    pubsub_message = base64.b64decode(event['data']).decode('utf-8')
    print(pubsub_message)
    bucket_name = 'my-bucket'
    bucket = storage_client.get_bucket(bucket_name)
    blobs = bucket.list_blobs()
    for blob in blobs:
        print(blob.name)
Error:
Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 99, in refresh
    service_account=self._service_account_email)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 208, in get_service_account_token
    'instance/service-accounts/{0}/token'.format(service_account))
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/_metadata.py", line 140, in get
    url, response.status, response.data), response)
google.auth.exceptions.TransportError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token from the Google Compute Engine metadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token\\n'", <google.auth.transport.requests._Response object at 0x2b0ef9edf438>)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 383, in run_background_function
    _function_handler.invoke_user_function(event_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
    return call_user_function(request_or_event)
  File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 214, in call_user_function
    event_context.Context(**request_or_event.context))
  File "/user_code/main.py", line 21, in init_analysis
    bucket = storage_client.get_bucket(bucket_name)
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/client.py", line 227, in get_bucket
    bucket.reload(client=self)
  File "/env/local/lib/python3.7/site-packages/google/cloud/storage/_helpers.py", line 130, in reload
    _target_object=self,
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 315, in api_request
    target_object=_target_object,
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 192, in _make_request
    return self._do_request(method, url, headers, data, target_object)
  File "/env/local/lib/python3.7/site-packages/google/cloud/_http.py", line 221, in _do_request
    return self.http.request(url=url, method=method, headers=headers, data=data)
  File "/env/local/lib/python3.7/site-packages/google/auth/transport/requests.py", line 205, in request
    self._auth_request, method, url, request_headers)
  File "/env/local/lib/python3.7/site-packages/google/auth/credentials.py", line 122, in before_request
    self.refresh(request)
  File "/env/local/lib/python3.7/site-packages/google/auth/compute_engine/credentials.py", line 102, in refresh
    six.raise_from(new_exc, caught_exc)
  File "<string>", line 3, in raise_from
google.auth.exceptions.RefreshError: ("Failed to retrieve http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token from the Google Compute Engine metadata service. Status: 500 Response:\nb'Could not fetch URI /computeMetadata/v1/instance/service-accounts/my-project@appspot.gserviceaccount.com/token\\n'", <google.auth.transport.requests._Response object at 0x2b0ef9edf438>)
The error you are getting is because your Cloud Functions service account doesn't have the cloudfunctions.serviceAgent role. As you can see in the documentation:
Authenticating as the runtime service account from inside your function may fail if you change the Cloud Functions service account's permissions.
However, I found that sometimes you cannot add this role because it doesn't show up as an option. I have reported this issue to the Google Cloud Functions engineering team and they are working to solve it.
Nevertheless, you can add the role again using this gcloud command:
gcloud projects add-iam-policy-binding <project_name> --role=roles/cloudfunctions.serviceAgent --member=serviceAccount:service-<project_number>@gcf-admin-robot.iam.gserviceaccount.com
Can anybody help me? I want to create a SolrCloud cluster on AWS using this code: https://github.com/LucidWorks/solr-scale-tk. I try to build it with the command [fab demo:demo1,n=1] and get the error below; it happens while launching instances, after connecting to the Amazon server.
ERROR: boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
Appreciate your help
thanks in advance
root@adminuser-VirtualBox:/opt/febric/solr-scale-tk# fab demo:demo1,n=1
Going to launch 1 new EC2 m3.medium instances using AMI ami-8d52b9e6
Setup Instance store BlockDeviceMapping: /dev/sdb -> ephemeral0
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/fabric/main.py", line 743, in main
*args, **kwargs
File "/usr/local/lib/python2.7/dist-packages/fabric/tasks.py", line 427, in execute
results['<local-only>'] = task.run(*args, **new_kwargs)
File "/usr/local/lib/python2.7/dist-packages/fabric/tasks.py", line 174, in run
return self.wrapped(*args, **kwargs)
File "/opt/febric/solr-scale-tk/fabfile.py", line 1701, in demo
ec2hosts = new_ec2_instances(cluster=demoCluster, n=n, instance_type=instance_type)
File "/opt/febric/solr-scale-tk/fabfile.py", line 1163, in new_ec2_instances
placement_group=placement_group)
File "/usr/local/lib/python2.7/dist-packages/boto/ec2/connection.py", line 973, in run_instances
verb='POST')
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1208, in get_object
raise self.ResponseError(response.status, response.reason, body)
boto.exception.EC2ResponseError: EC2ResponseError: 400 Bad Request
<?xml version="1.0" encoding="UTF-8"?>
<Response><Errors><Error><Code>InvalidParameterValue</Code><Message>Value () for parameter groupId is invalid. The value cannot be empty</Message></Error></Errors><RequestID>ca03b6d4-ce0e-46d3-99e3-ccad4a43c4ff</RequestID></Response>
You need to create a security group in your region named exactly "solr-scale-tk", with the required ports opened up. Please refer to this blog and follow the instructions in the Setup Amazon Account part.
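If you would rather create the group from Python than from the console, boto can do it. A rough sketch, assuming us-east-1 and a typical Solr/ZooKeeper port set; check the blog post for the ports your setup actually needs:
import boto.ec2

# Connect to the same region the fab script launches instances in.
conn = boto.ec2.connect_to_region('us-east-1')

# Create the group the solr-scale-tk fabfile looks up by name.
sg = conn.create_security_group('solr-scale-tk',
                                'Security group for solr-scale-tk clusters')

# Example rules: SSH plus Solr and ZooKeeper ports (adjust as needed).
for port in (22, 8983, 8984, 2181):
    sg.authorize(ip_protocol='tcp', from_port=port, to_port=port,
                 cidr_ip='0.0.0.0/0')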
import boto
from boto.s3.connection import S3Connection
from boto.s3.connection import OrdinaryCallingFormat
conn = S3Connection(access_key, secret_key, calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket(file_name)
print(bucket.name)
And the console displays:
raise err
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
I have seen many posts about the same problem but I can't figure out how to solve it...
Note that I am not the owner of the bucket, but I managed to connect and download the file with a GUI tool. I need to process it by script for automation.
EDIT:
I managed to connect, but I'm still struggling...
I'm starting to hesitate between processing it automatically and just doing it manually...
conn = S3Connection(access_key, secret_key, calling_format=OrdinaryCallingFormat())
bucket = conn.get_bucket(bucket_name, validate=False)
print('bucket:', bucket)
print('bucket.name:', bucket.name)
key = bucket.get_key(file_name)
print("key: {name}\t{size}\t{modified}".format(name = key.name,size = key.size,modified = key.last_modified))
print('bucket.list():',bucket.list(prefix='GA-Exports/Events_3112/DEV'))
for key in bucket.list(prefix='DEV/', delimiter='/'):
    print('bucket list -> key:', key)
Console:
bucket: <Bucket: GA-Exports/Events_3112/>
bucket.name: GA-Exports/Events_3112/
key: DEV/EVENTS_3113_120002892.csv.gz 3826 Sat, 16 May 2015 10:05:44 GMT
bucket.list(): <boto.s3.bucketlistresultset.BucketListResultSet object at 0x0000000004E9F7F0>
Traceback (most recent call last):
File "D:\Python\lib\xml\sax\expatreader.py", line 207, in feed
self._parser.Parse(data, isFinal)
xml.parsers.expat.ExpatError: no element found: line 1, column 0
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\Francois\OneDrive\IDE\Workspace\eclipse\Python_test\etltest.py", line 31, in <module>
for key in bucket.list(prefix='DEV/',delimiter='/'):
File "D:\Python\lib\site-packages\boto\s3\bucketlistresultset.py", line 34, in bucket_lister
encoding_type=encoding_type)
File "D:\Python\lib\site-packages\boto\s3\bucket.py", line 472, in get_all_keys
'', headers, **params)
File "D:\Python\lib\site-packages\boto\s3\bucket.py", line 406, in _get_all
xml.sax.parseString(body, h)
File "D:\Python\lib\xml\sax\__init__.py", line 46, in parseString
parser.parse(inpsrc)
File "D:\Python\lib\xml\sax\expatreader.py", line 107, in parse
xmlreader.IncrementalParser.parse(self, source)
File "D:\Python\lib\xml\sax\xmlreader.py", line 125, in parse
self.close()
File "D:\Python\lib\xml\sax\expatreader.py", line 217, in close
self.feed("", isFinal = 1)
File "D:\Python\lib\xml\sax\expatreader.py", line 211, in feed
self._err_handler.fatalError(exc)
File "D:\Python\lib\xml\sax\handler.py", line 38, in fatalError
raise exception
xml.sax._exceptions.SAXParseException: <unknown>:1:0: no element found
By default, boto will attempt to validate the bucket when you call get_bucket by performing a HEAD request on the bucket. You may not have permission to do this even though you may have permission to retrieve objects from the bucket. Try this to disable the validation step:
bucket = conn.get_bucket(bucket_name, validate=False)
Also, make sure you are passing in the name of the bucket. Your example code passes in file_name, which doesn't look right.
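Putting that together, a minimal sketch of fetching a single object when you lack bucket-level permissions; the credentials, bucket name, object key, and local path are all placeholders:
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

access_key = 'YOUR_ACCESS_KEY'                   # placeholder
secret_key = 'YOUR_SECRET_KEY'                   # placeholder
bucket_name = 'your-bucket-name'                 # the bucket, not the file
file_name = 'DEV/EVENTS_3113_120002892.csv.gz'   # the object key

conn = S3Connection(access_key, secret_key,
                    calling_format=OrdinaryCallingFormat())

# Skip the HEAD request on the bucket that needs extra permissions.
bucket = conn.get_bucket(bucket_name, validate=False)

# Fetch the object directly by its key name.
key = bucket.get_key(file_name)
if key is not None:
    key.get_contents_to_filename('/tmp/events.csv.gz')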
There are a few other questions on this issue:
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
S3ResponseError: S3ResponseError: 403 Forbidden
S3ResponseError: 403 Forbidden using boto
Python: Amazon S3 cannot get the bucket: says 403 Forbidden
However, it seems I may be having a different problem: clock skew is not an issue, I already tried setting validate=False, and I believe I have the correct key and secret key, because trying a bogus key or secret key gives me different errors. Here is my script:
import boto
import sys
from boto.s3.key import Key
BUCKET_NAME = sys.argv[1]
AWS_ACCESS_KEY_ID = sys.argv[2]
AWS_SECRET_ACCESS_KEY = sys.argv[3]
conn = boto.connect_s3(AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY)
bucket = conn.get_bucket(BUCKET_NAME, validate=False)
k = Key(bucket)
k.key = 'barbaz'
k.set_contents_from_filename('/tmp/barbaz.txt')
And the result:
Traceback (most recent call last):
File "/home/jonderry/sdmain/src/scripts/jenkins/upload_to_s3.py", line 16, in <module>
k.set_contents_from_filename('/tmp/barbaz.txt')
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1360, in set_contents_from_filename
encrypt_key=encrypt_key)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 1291, in set_contents_from_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 748, in send_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 949, in _send_file_internal
query_args=query_args
File "/usr/local/lib/python2.7/dist-packages/boto/s3/connection.py", line 664, in make_request
retry_handler=retry_handler
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 1068, in make_request
retry_handler=retry_handler)
File "/usr/local/lib/python2.7/dist-packages/boto/connection.py", line 939, in _mexe
request.body, request.headers)
File "/usr/local/lib/python2.7/dist-packages/boto/s3/key.py", line 882, in sender
response.status, response.reason, body)
boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>***someRequestId***</RequestId><HostId>***someHostId</HostId></Error>
Any ideas what the problem is, or how to diagnose it further?
This will also happen if your machine's time settings are incorrect.
It looks like you do not have permission to write to this bucket. What is the bucket policy? Can you make sure that this IAM user is allowed to put objects into this bucket?
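One quick way to check from code whether the credentials can write at all is to attempt a tiny upload and inspect the error. A minimal sketch (the probe key name is arbitrary):
from boto.s3.key import Key
from boto.exception import S3ResponseError

def can_put(bucket):
    """Return True if the current credentials can write a test object."""
    probe = Key(bucket, 'permission-probe.txt')
    try:
        probe.set_contents_from_string('probe')
        probe.delete()  # clean up the test object
        return True
    except S3ResponseError as e:
        print('HTTP %s %s' % (e.status, e.error_code))  # e.g. 403 AccessDenied
        return False
If this prints AccessDenied, the fix is on the bucket policy or IAM side, not in your code.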
I had this issue too. I tried validate=False, ntpdate, and giving "Authenticated Users" permission to upload/delete on AWS. My resolution is probably rare, but just in case anyone else did this:
I started running my Django app with credentials in my environment for my bucket 'xyz'. Then I changed the credentials to upload to my friend's bucket 'abc'. There was a mismatch between these credentials, so all I needed to do was restart gunicorn.
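If you suspect a stale-environment problem like this, it can help to log which access key the running process actually sees before blaming the bucket policy. A tiny sketch (prints only a prefix so the key isn't leaked into logs):
import os

# Show just enough of the key to tell which credentials are loaded.
access_key = os.environ.get('AWS_ACCESS_KEY_ID', '<not set>')
print('Using AWS access key: %s...' % access_key[:4])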