I'm trying to use my AWS credentials file in boto but can't seem to get it to work. I'm new to Python and boto, so I've been reading a lot of material online trying to understand this.
All I'm trying to do right now is get all EC2 instances. Here is my Python code:
import boto
from boto import ec2
ec2conn = ec2.connection.EC2Connection(profile_name='profile_name')
ec2conn.get_all_instances()
when I run that, I get the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/ec2/connection.py", line 585, in get_all_instances
max_results=max_results)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/ec2/connection.py", line 681, in get_all_reservations
[('item', Reservation)], verb='POST')
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 1170, in get_list
response = self.make_request(action, params, path, verb)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 1116, in make_request
return self._mexe(http_request)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 913, in _mexe
self.is_secure)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 705, in get_http_connection
return self.new_http_connection(host, port, is_secure)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 747, in new_http_connection
connection = self.proxy_ssl(host, is_secure and 443 or 80)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/boto/connection.py", line 835, in proxy_ssl
ca_certs=self.ca_certificates_file)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 943, in wrap_socket
ciphers=ciphers)
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 611, in __init__
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/ssl.py", line 840, in do_handshake
self._sslobj.do_handshake()
ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:661)
I've also tried ec2conn.get_all_reservations() but got the same result.
In boto3, I can do this, which works:
import boto3
session = boto3.Session(profile_name='profile_name')
dev_ec2 = session.client('ec2')
dev_ec2.describe_instances()
------EDIT--------
So I found this question on Stack Overflow, "Recommended way to manage credentials with multiple AWS accounts?", and what I did was export my AWS_PROFILE variable:
export AWS_PROFILE="profile_nm"
That worked when I did this:
>>> import boto
>>> conn = boto.connect_s3()
>>> conn.get_all_buckets()
And I got all of the S3 buckets back.
But when I did the above to get all the EC2 instances back, I still got the ssl.SSLEOFError shown above. It seems to work with S3 but not EC2. So, is the way I'm getting all the EC2 instances wrong?
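One detail worth noting: the traceback runs through boto's proxy_ssl, which means boto is picking up a proxy (from http_proxy/https_proxy or the boto config) for the EC2 endpoint. A minimal sketch that clears any proxy settings inherited from the environment and connects through boto.ec2.connect_to_region, assuming the region is us-east-1 (adjust to yours):

import os
import boto.ec2

# assumption: the SSLEOFError comes from the proxy hop, so drop any
# proxy variables inherited from the environment before connecting
for var in ('http_proxy', 'https_proxy', 'HTTP_PROXY', 'HTTPS_PROXY'):
    os.environ.pop(var, None)

conn = boto.ec2.connect_to_region('us-east-1', profile_name='profile_name')
reservations = conn.get_all_reservations()
instances = [inst for r in reservations for inst in r.instances]
print(instances)

If this works, the proxy configuration (rather than the credentials file) is the likely culprit.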
I am using MongoDB Atlas for my Discord bot but recently ran into an error. On the hosting platform (Heroku) everything works without errors; I first updated all the modules, but locally the error has not disappeared.
I am using Motor as the driver to work with MongoDB Atlas.
I checked the database connection URL; everything is correct.
Python version is 3.10 (on Heroku too).
Traceback (most recent call last):
File "D:\Проекты\AkainuBot\main.py", line 23, in <module>
mongo = AsyncIOMotorClient(
File "D:\Python\lib\site-packages\motor\core.py", line 159, in __init__
delegate = self.__delegate_class__(*args, **kwargs)
File "D:\Python\lib\site-packages\pymongo\mongo_client.py", line 718, in __init__
self.__options = options = ClientOptions(
File "D:\Python\lib\site-packages\pymongo\client_options.py", line 165, in __init__
self.__pool_options = _parse_pool_options(options)
File "D:\Python\lib\site-packages\pymongo\client_options.py", line 132, in _parse_pool_options
ssl_context, ssl_match_hostname = _parse_ssl_options(options)
File "D:\Python\lib\site-packages\pymongo\client_options.py", line 98, in _parse_ssl_options
ctx = get_ssl_context(
File "D:\Python\lib\site-packages\pymongo\ssl_support.py", line 159, in get_ssl_context
ctx.load_verify_locations(certifi.where())
File "D:\Python\lib\site-packages\pymongo\pyopenssl_context.py", line 276, in load_verify_locations
self._callback_data.trusted_ca_certs = _load_trusted_ca_certs(cafile)
File "D:\Python\lib\site-packages\pymongo\ocsp_support.py", line 79, in _load_trusted_ca_certs
_load_pem_x509_certificate(cert_data, backend))
File "D:\Python\lib\site-packages\cryptography\x509\base.py", line 436, in load_pem_x509_certificate
return rust_x509.load_pem_x509_certificate(data)
ValueError: error parsing asn1 value: ParseError { kind: InvalidValue, location: ["RawCertificate::tbs_cert", "TbsCertificate::serial"] }
I solved my "error parsing asn1 value" problem by adding the ssl_cert_reqs option on the MongoClient (and importing ssl):

import ssl
import pymongo

dbClient = pymongo.MongoClient(uri, ssl_cert_reqs=ssl.CERT_NONE)
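Note that the traceback fails inside ctx.load_verify_locations(certifi.where()), i.e. pymongo could not parse certifi's CA bundle, so upgrading certifi (pip install --upgrade certifi) may fix the root cause. If you would rather keep certificate verification on instead of disabling it, here is a sketch assuming a pymongo version that supports the tlsCAFile option:

import certifi
import pymongo

# keep TLS verification, but pass the (freshly upgraded) certifi
# bundle explicitly instead of turning certificate checks off
dbClient = pymongo.MongoClient(uri, tlsCAFile=certifi.where())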
My certificate is present in the /etc/certs/mycer.pem file, and I am able to authenticate with LDAP using the code below:
import ldap
import os
ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_DEMAND)
ls = ldap.initialize('ldaps://<ip>:10636')
ls.set_option(ldap.OPT_REFERRALS, 0)
ls.set_option(ldap.OPT_PROTOCOL_VERSION, 3)
ls.set_option(ldap.OPT_X_TLS_CACERTFILE, "/etc/certs/mycer.pem") # line 7
ls.set_option(ldap.OPT_X_TLS_NEWCTX, 0)
ls.simple_bind_s('uid=admin,ou=system', 'secret')
But if I change OPT_X_TLS_CACERTFILE to OPT_X_TLS_CACERTDIR in line 7 as:
ls.set_option(ldap.OPT_X_TLS_CACERTDIR, "/etc/certs/")
the code is no longer able to authenticate and throws this error:
>>> ls.simple_bind_s('uid=admin,ou=system', 'secret')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.6/dist-packages/ldap/ldapobject.py", line 445, in simple_bind_s
msgid = self.simple_bind(who,cred,serverctrls,clientctrls)
File "/usr/local/lib/python3.6/dist-packages/ldap/ldapobject.py", line 439, in simple_bind
return self._ldap_call(self._l.simple_bind,who,cred,RequestControlTuples(serverctrls),RequestControlTuples(clientctrls))
File "/usr/local/lib/python3.6/dist-packages/ldap/ldapobject.py", line 331, in _ldap_call
reraise(exc_type, exc_value, exc_traceback)
File "/usr/local/lib/python3.6/dist-packages/ldap/compat.py", line 44, in reraise
raise exc_value
File "/usr/local/lib/python3.6/dist-packages/ldap/ldapobject.py", line 315, in _ldap_call
result = func(*args,**kwargs)
ldap.SERVER_DOWN: {'desc': "Can't contact LDAP server", 'info': '(unknown error code)'}
There is only one file in the directory. If this is not the correct way, how can python-ldap use multiple certificate files?
As I understand it, python-ldap hands this off to OpenSSL, and OpenSSL's directory lookup does not scan OPT_X_TLS_CACERTDIR for arbitrary file names: it expects each certificate to be reachable under its subject-hash file name (the hash.0 style links that c_rehash or openssl rehash create).
But I cannot for the life of me find that spelled out in the python-ldap documentation.
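If the hash-name requirement is indeed the problem, here is a sketch that prepares the directory from Python, assuming the openssl command-line tool is on the PATH and the CA files end in .pem:

import os
import subprocess

certdir = "/etc/certs/"
for name in os.listdir(certdir):
    if not name.endswith(".pem"):
        continue
    path = os.path.join(certdir, name)
    # ask openssl for the subject hash this certificate must be filed under
    out = subprocess.check_output(["openssl", "x509", "-hash", "-noout", "-in", path])
    link = os.path.join(certdir, out.decode().strip() + ".0")
    if not os.path.exists(link):
        os.symlink(name, link)  # same effect as running c_rehash on the directory

After this, OPT_X_TLS_CACERTDIR should be able to find mycer.pem through its hash link.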
Edit:
I found an old script of mine that contains both:
conn.set_option(ldap.OPT_X_TLS_CACERTFILE, "")
conn.set_option(ldap.OPT_X_TLS_CACERTDIR, "/etc/ssl/cacerts")
Could it be that this empty OPT_X_TLS_CACERTFILE setting is somehow required?
I'm trying to access objects in my S3 bucket with s3cmd using path-style URLs. This is no problem with the Java SDK, for example:
s3Client.setS3ClientOptions(S3ClientOptions.builder()
.setPathStyleAccess(true).build());
I want to do the same with s3cmd. I have set this up in my s3cmd config file:
host_base = s3.eu-central-1.amazonaws.com
host_bucket = s3.eu-central-1.amazonaws.com/%(bucket)s
This works for bucket listing with:
$ s3cmd ls
2016-08-24 12:36 s3://test
When trying to list all objects of a bucket I get the following error:
Traceback (most recent call last):
File "/usr/local/bin/s3cmd", line 2919, in <module>
rc = main()
File "/usr/local/bin/s3cmd", line 2841, in main
rc = cmd_func(args)
File "/usr/local/bin/s3cmd", line 120, in cmd_ls
subcmd_bucket_list(s3, uri)
File "/usr/local/bin/s3cmd", line 153, in subcmd_bucket_list
response = s3.bucket_list(bucket, prefix = prefix)
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 297, in bucket_list
for dirs, objects in self.bucket_list_streaming(bucket, prefix, recursive, uri_params):
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 324, in bucket_list_streaming
response = self.bucket_list_noparse(bucket, prefix, recursive, uri_params)
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 343, in bucket_list_noparse
response = self.send_request(request)
File "/usr/local/lib/python2.7/site-packages/S3/S3.py", line 1081, in send_request
conn = ConnMan.get(self.get_hostname(resource['bucket']))
File "/usr/local/lib/python2.7/site-packages/S3/ConnMan.py", line 192, in get
conn.c.connect()
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 836, in connect
self.timeout, self.source_address)
File "/usr/local/Cellar/python/2.7.11/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 557, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
gaierror: [Errno 8] nodename nor servname provided, or not known
Assuming that there is no other issue with your configuration, the value that you used for "host_bucket" is wrong: host_bucket must be a bare host name (s3cmd resolves it via DNS), so embedding a path like /%(bucket)s produces a name that getaddrinfo cannot resolve, which is exactly the gaierror in your traceback.
It should be:
host_bucket = %(bucket)s.s3.eu-central-1.amazonaws.com
or
host_bucket = s3.eu-central-1.amazonaws.com
The second one will force "path style" to be used. But if you are using Amazon S3 and the first host_bucket value that I propose, s3cmd will automatically use DNS-based or path-based buckets depending on what characters you use in your bucket name.
Is there a particular reason why you want to use only path-style access?
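Putting it together, a minimal config fragment for pure path-style access in eu-central-1 (a sketch based on the values above; adjust the region to yours):

host_base = s3.eu-central-1.amazonaws.com
host_bucket = s3.eu-central-1.amazonaws.com

With that in place, s3cmd ls s3://test should list the bucket's objects using path-style requests.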
I use Duplicity to run backups from my local server to Amazon S3. This has been working fine for over a year. Three days ago, I started getting the following errors:
Traceback (most recent call last):
File "/usr/lib64/python2.7/site-packages/duplicity/backends/_boto_multi.py", line 204, in _upload
num_cb=max(2, 8 * bytes / (1024 * 1024))
File "/usr/lib/python2.7/site-packages/boto/s3/multipart.py", line 260, in upload_part_from_file
query_args=query_args, size=size)
File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 1291, in set_contents_from_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 748, in send_file
chunked_transfer=chunked_transfer, size=size)
File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 949, in _send_file_internal
query_args=query_args
File "/usr/lib/python2.7/site-packages/boto/s3/connection.py", line 664, in make_request
retry_handler=retry_handler
File "/usr/lib/python2.7/site-packages/boto/connection.py", line 1068, in make_request
retry_handler=retry_handler)
File "/usr/lib/python2.7/site-packages/boto/connection.py", line 939, in _mexe
request.body, request.headers)
File "/usr/lib/python2.7/site-packages/boto/s3/key.py", line 842, in sender
http_conn.send(chunk)
File "/usr/lib64/python2.7/httplib.py", line 805, in send
self.sock.sendall(data)
File "/usr/lib64/python2.7/ssl.py", line 229, in sendall
v = self.send(data[count:])
File "/usr/lib64/python2.7/ssl.py", line 198, in send
v = self._sslobj.write(data)
error: [Errno 104] Connection reset by peer
These are still occurring even after I tried the following:
--added "s3-use-multiprocessing" to my script file
--added the following two lines to /etc/sysctl.conf:
net.ipv4.tcp_wmem = 4096 16384 512000
net.ipv4.tcp_rmem = 4096 87380 512000
-- ran sysctl -p to start using the above.
Three days ago, I started running Duplicity on a couple of other servers, backing up to a different bucket on the same account. That was when THIS server started reporting connection reset errors. The other servers are working fine, and all of them are using the same versions of Duplicity and Python. They are in different locations on different subnets, but that shouldn't make a difference.
The original chunk size on the problem server was 25MB. It's 250MB on the others. What else can I look for? I'm guessing Amazon is resetting the connection, but why single out this server?
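To narrow it down, it may help to take Duplicity out of the loop and exercise the same upload path with plain boto from the problem server. A minimal sketch ('my-backup-bucket' is a hypothetical name; substitute your real bucket):

import boto

# upload a small test object directly with boto, bypassing Duplicity
conn = boto.connect_s3()
bucket = conn.get_bucket('my-backup-bucket')
key = bucket.new_key('connection-reset-test')
key.set_contents_from_string('x' * 1024 * 1024)  # 1 MB test payload
print('upload ok')

If plain uploads also get reset, the problem is network-level rather than Duplicity's; if they succeed, matching the other servers' 250MB setting via Duplicity's --volsize option would be a reasonable next experiment.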
I am trying to download a file over FTP using Python. I was able to successfully move into the directory but can't download the file.
The command I use is ftp.retrbinary('master.idx', open(fname,'wb').write)
The error is below. It looks like the command is looking for MASTER.IDX instead of master.idx.
The full path to the file I want to download is ftp://ftp.sec.gov/edgar/full-index/2011/QTR2/master.idx
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python2.7/ftplib.py", line 406, in retrbinary
conn = self.transfercmd(cmd, rest)
File "/usr/lib/python2.7/ftplib.py", line 368, in transfercmd
return self.ntransfercmd(cmd, rest)[0]
File "/usr/lib/python2.7/ftplib.py", line 331, in ntransfercmd
resp = self.sendcmd(cmd)
File "/usr/lib/python2.7/ftplib.py", line 244, in sendcmd
return self.getresp()
File "/usr/lib/python2.7/ftplib.py", line 219, in getresp
raise error_perm, resp
ftplib.error_perm: 500 MASTER.IDX not understood
The name appears in uppercase because, without the 'RETR ' prefix, the server treats 'master.idx' itself as the FTP command verb and echoes the unrecognized command back in uppercase. In any case, when using FTP, I do it like this; it may help you:
server = "URL.of.server"
directory = "directory/where/the/file/is"
filename = "nameoffile.txt"
from ftplib import FTP
ftp = FTP(server) #Set server address
ftp.login() # Connect to server
ftp.cwd(directory) # Move to the desired folder in server
ftp.retrbinary('RETR ' + filename,open(filename, 'wb').write) # Download file from server
ftp.close() # Close connection
I think the problem is the missing 'RETR '; if you don't include it, the server doesn't understand what you want to do.
Alternatively, you can use Python's wget module instead. Here is an example snippet, using the full URL from the question:

import wget

# wget.download accepts a full URL, including ftp://
url = 'ftp://ftp.sec.gov/edgar/full-index/2011/QTR2/master.idx'
wget.download(url)