Get elasticloadbalancers names with boto3 - python

When I try to print the load balancers from AWS I get a huge dictionary with a lot of keys, but when I try to print only the 'LoadBalancerName' value I get None. I want to print all the load balancer names in our environment. How can I do it? Thanks!
What I tried:
import boto3
client = boto3.client('elbv2')
elb = client.describe_load_balancers()
Name = elb.get('LoadBalancerName')
print(Name)

The way in which you were handling the response object was incorrect, and you'll need to put it in a loop if you want all the names and not just one. What you'll need is this:
import boto3
client = boto3.client('elbv2')
elb = client.describe_load_balancers()
for i in elb['LoadBalancers']:
    print(i['LoadBalancerArn'])
    print(i['LoadBalancerName'])
However, if you're still getting None as a value, it would be worth double-checking which region the load balancers are in, as well as whether you need to pass in a profile, as in the sketch below.
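A minimal sketch of passing the region and profile explicitly (the profile and region names below are placeholders, not values from the question):
import boto3
# Hypothetical profile/region names; substitute your own.
session = boto3.session.Session(profile_name='my-profile', region_name='us-east-1')
client = session.client('elbv2')
for lb in client.describe_load_balancers()['LoadBalancers']:
    print(lb['LoadBalancerName'])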

Related

Unable to append data into Azure storage using Python

I am using the code below to append data to an Azure blob using Python.
from azure.storage.blob import AppendBlobService
append_blob_service = AppendBlobService(account_name='myaccount', account_key='mykey')
# The same containers can hold all types of blobs
append_blob_service.create_container('mycontainer')
# Append blobs must be created before they are appended to
append_blob_service.create_blob('mycontainer', 'myappendblob')
append_blob_service.append_blob_from_text('mycontainer', 'myappendblob', u'Hello, world!')
append_blob = append_blob_service.get_blob_to_text('mycontainer', 'myappendblob')
The above code works fine, but when I try to insert new data, the old data gets overwritten.
Is there any way I can append data to 'myappendblob'?
Considering you are calling the same code to append the data, the issue is with the following line of code:
append_blob_service.create_blob('mycontainer', 'myappendblob')
If you read the documentation for create_blob method, you will notice the following:
Creates a blob or overrides an existing blob. Use if_none_match=* to
prevent overriding an existing blob.
So essentially you are overriding the blob every time you call your code.
You should call this method with if_none_match="*" as the documentation suggests. If the blob exists, your code will throw an exception which you will need to handle.
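A minimal sketch of that approach, reusing the names from the question; the exact exception class raised when the blob already exists is an assumption here (AzureHttpError is the old SDK's generic HTTP error base), so adjust it to whatever your SDK version actually raises:
from azure.common import AzureHttpError
from azure.storage.blob import AppendBlobService
append_blob_service = AppendBlobService(account_name='myaccount', account_key='mykey')
try:
    # if_none_match='*' makes create_blob fail instead of silently overwriting
    append_blob_service.create_blob('mycontainer', 'myappendblob', if_none_match='*')
except AzureHttpError:
    # The blob already exists, so just keep appending to it
    pass
append_blob_service.append_blob_from_text('mycontainer', 'myappendblob', u'More data')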
Alternatively, try this code, which is taken from the document referenced below and was given by #Harsh Jain:
from azure.storage.blob import AppendBlobService

def append_data_to_blob(data):
    service = AppendBlobService(account_name="<storage account name>",
                                account_key="<storage account key>")
    try:
        service.append_blob_from_text(container_name="<name of container>", blob_name="<name of file>", text=data)
    except:
        # The blob does not exist yet, so create it and then append.
        service.create_blob(container_name="<name of container>", blob_name="<name of file>")
        service.append_blob_from_text(container_name="<name of container>", blob_name="<name of file>", text=data)
    print('Data got appended')

append_data_to_blob('Hi blob')
Reference taken from:
https://www.educative.io/answers/how-to-append-data-in-blob-storage-in-azure-using-python

Softlayer Object Storage Python API Time To Live

How do I set time to live for a file on object storage?
Looking at the code in https://github.com/softlayer/softlayer-object-storage-python/blob/master/object_storage/storage_object.py it takes in (self, data, check_md5) with no TTL option.
sl_storage = object_storage.get_client(
    username=environment['slos_username'],
    password=environment['api_key'],
    auth_url=environment['auth_url']
)
# get container
sl_container = sl_storage.get_container(environment['object_container'])
# create "pointer" to container file fabfile.zip
sl_file = sl_container[filename]
myzip = open(foldername + filename, 'rb')
sl_file.create()
sl_file.send(myzip, TIME_TO_LIVE_PARAM=100)
I also tried according to https://github.com/softlayer/softlayer-object-storage-python/blob/master/object_storage/container.py
sl_file['ttl'] = timetolive
But it doesn't work.
Thanks!
You need to make sure that the "ttl" header is present in the object's headers; the "TTL" header is only available when your container has CDN enabled.
To verify that the ttl header exists, you can use this line of code:
sl_storage['myContainerName']['MyFileName'].headers
Then you can update the ttl using this line of code:
sl_storage['myContainerName']['MyFileName'].update({'x-cdn-ttl':'3600'})
In case the ttl value does not exist and you have the CDN enabled, try to create the header using this line of code:
sl_storage['myContainerName']['MyFileName'].create({'x-cdn-ttl':'3600'})
Regards
You need to set the header "X-Delete-At: 1417341600", where 1417341600 is a Unix timestamp; see more information here: http://docs.openstack.org/developer/swift/overview_expiring_objects.html
Using the Python client you can use the update method:
https://github.com/softlayer/softlayer-object-storage-python/blob/master/object_storage/storage_object.py#L210-L216
sl_storage['myContainerName']['MyFileName'].update({'X-Delete-At': 1417341600})
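For example, a small sketch of computing that timestamp relative to now (the one-hour delay is just illustrative):
import time
# X-Delete-At takes an absolute Unix timestamp; expire the object one hour from now
expire_at = int(time.time()) + 3600
sl_storage['myContainerName']['MyFileName'].update({'X-Delete-At': str(expire_at)})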
Regards

How to use pyfig module in python?

This is the part of the mailer.py script:
config = pyfig.Pyfig(config_file)
svnlook = config.general.svnlook #svnlook path
sendmail = config.general.sendmail #sendmail path
From = config.general.from_email #from email address
To = config.general.to_email #to email address
What does this config variable contain? Is there a way to get the values for the config variable without pyfig?
In this case config is a pyfig.Pyfig object initialised with the contents of the file named by the string config_file.
To find out what that object does and contains, you can either look at the documentation and/or the source code, both here, or you can print it out after the initialisation, e.g.:
config = pyfig.Pyfig(config_file)
print "Config Contains:\n\t", '\n\t'.join(dir(config))
if hasattr(config, "keys"):
    print "Config Keys:\n\t", '\n\t'.join(config.keys())
or if you are using Python 3,
config = pyfig.Pyfig(config_file)
print("Config Contains:\n\t", '\n\t'.join(dir(config)))
if hasattr(config, "keys"):
    print("Config Keys:\n\t", '\n\t'.join(config.keys()))
To get the same data without pyfig you would need to read and parse the content of the file referenced by config_file within your own code.
N.B.: pyfig seems to be more or less abandoned (no updates in over 5 years, the web site no longer exists, etc.), so I would strongly recommend converting the code to use a JSON configuration file instead.
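A minimal sketch of that conversion, assuming a JSON file whose structure mirrors the keys used in mailer.py (the layout shown is an assumption, not pyfig's actual format):
import json
# config_file points at a JSON document such as:
# {"general": {"svnlook": "...", "sendmail": "...", "from_email": "...", "to_email": "..."}}
with open(config_file) as f:
    config = json.load(f)
svnlook = config["general"]["svnlook"]     # svnlook path
sendmail = config["general"]["sendmail"]   # sendmail path
From = config["general"]["from_email"]     # from email address
To = config["general"]["to_email"]         # to email address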

Downloading via boto

I am using the boto client to download and upload my files to S3, and to do a whole bunch of other things like copying from one folder key to another. The problem arises when I try to copy a key whose size is 0 bytes. The code that I use to copy is below:
# Get the connection to the bucket
conn = boto.connect_s3(AWS_KEY, SECRET_KEY)
bucket = conn.get_bucket('mybucket')
# bucket.name is the name of my bucket
# candidate is the source key
destination_key = "destination/path/on/s3"
candidate = "the/file/to/copy"
# now copy the key
bucket.copy_key(destination_key, bucket.name, candidate) # --> This throws an exception
# just in case, see if the key ended up in the destination.
copied_key = bucket.lookup(destination_key)
The exception that I get is
S3ResponseError: 404 Not Found
<Error><Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>the/file/to/copy</Key><RequestId>ABC123</RequestId><HostId>XYZ123</HostId>
</Error>
Now I have verified that the key in fact exists by logging into the AWS console and navigating to the source key location; the key is there and the console shows that its size is 0 (there are cases in my application where I may end up with empty files, but I need them on S3).
So the upload works fine and boto uploads the key without any issue, but when I attempt to copy it I get the error that the key does not exist.
Is there any other logic that I should be using to copy such keys? Any help in this regard would be appreciated.
Make sure you include the bucket of the source key; it should be something like bucket/path/to/file/to/copy.
Try this:
from boto.s3.key import Key
download_path = '/tmp/dest_test.jpg'
bucket_key = Key(bucket)
bucket_key.key = file_key # e.g. images/source_test.jpg
bucket_key.get_contents_to_filename(download_path)
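If the copy itself is still the goal, a hedged alternative sketch (boto 2.x, reusing the bucket and key names from the question) is to look the source key up first, which also confirms it exists, and copy through the Key object:
# get_key returns None when the key is missing, so this doubles as an existence check
src_key = bucket.get_key(candidate)
if src_key is not None:
    src_key.copy(bucket.name, destination_key)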

Link generator using django or any python module

I want to generate temporary download links for my users.
Is it OK if I use Django to generate the links using URL patterns?
Would that be the correct way to do it? It may be that I don't understand how some of the processes work, and that it will overflow my memory or something else. Some kind of example or tools would be appreciated; some nginx or Apache modules, probably?
So, what I want to achieve is to make a URL pattern which depends on the user and time, decrypt it, and return a file in the view.
A simple scheme might be to use a hash digest of username and timestamp:
from datetime import datetime
from hashlib import sha1
user = 'bob'
time = datetime.now().isoformat()
plain = user + '\0' + time
token = sha1(plain)
print token.hexdigest()
"1e2c5078bd0de12a79d1a49255a9bff9737aa4a4"
Next, you store that token in memcache with an expiration time. This way any of your web servers can reach it and the token will auto-expire. Finally, add a Django URL handler for '^download/.+' where the controller just looks up that token in memcache to determine whether it is valid. You can even store the filename to be downloaded as the token's value in memcache.
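A minimal sketch of that flow using Django's cache framework (assuming a reasonably recent Django and a memcached-backed cache; the helper and view names are illustrative):
from datetime import datetime
from hashlib import sha1
from django.core.cache import cache
from django.http import FileResponse, Http404

def make_download_token(username, filepath, timeout=3600):
    plain = '%s\0%s' % (username, datetime.now().isoformat())
    token = sha1(plain.encode('utf-8')).hexdigest()
    cache.set(token, filepath, timeout)   # the entry auto-expires after `timeout` seconds
    return token

def download(request, token):
    filepath = cache.get(token)
    if filepath is None:
        raise Http404("Link expired or invalid")
    return FileResponse(open(filepath, 'rb'), as_attachment=True)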
Yes, it would be OK to let Django generate the URLs, but that is separate from handling the URLs in urls.py. Typically you don't want Django to handle the serving of files (see the static file docs[1] about this), so get the notion of serving files through URL patterns out of your head.
What you might want to do is generate a random key using a hash like md5/sha1. Store the file, the key, and the datetime it was added in the database, and create the download directory under a root directory that's reachable by your web server (apache or nginx; I suggest nginx). Since the link is temporary, you'll want to add a cron job that checks whether the time since the URL was generated has expired, cleans up the file, and removes the db entry. This should be a Django command for manage.py.
Please note this is example code written just for this answer and not tested! It may not work exactly the way you were planning on achieving this goal, but it illustrates the idea. If you also want the download to be password protected, look into HTTP basic auth: you can generate and remove entries on the fly in an httpd.auth file using htpasswd and the subprocess module when you create the link or at registration time.
import hashlib, random, datetime, os, shutil
# Model to hold link info. It has these fields: key (CharField), filepath (FilePathField),
# date (DateTimeField), url (CharField), newpath (FilePathField), and orgpath (FilePathField
# of the original path, or a ForeignKey to the files model).
from models import MyDlLink
# settings.py for the app
from myapp import settings as myapp_settings

# filepath is the full path and name of the file to download.
def genUrl(filepath):
    # create a one-time salt for randomness
    salt = ''.join('{0}'.format(random.randrange(10)) for i in range(10))
    key = hashlib.sha1('{0}{1}'.format(salt, filepath)).hexdigest()
    newpath = os.path.join(myapp_settings.DL_ROOT, key)
    shutil.copy2(filepath, newpath)
    newlink = MyDlLink()
    newlink.key = key
    newlink.date = datetime.datetime.now()
    newlink.orgpath = filepath
    newlink.newpath = newpath
    newlink.url = "{0}/{1}/{2}".format(myapp_settings.DL_URL, key, os.path.basename(filepath))
    newlink.save()
    return newlink

# in a management command
def check_url_expired():
    maxage = datetime.timedelta(days=7)
    now = datetime.datetime.now()
    for link in MyDlLink.objects.all():
        if (now - link.date) > maxage:
            os.remove(link.newpath)
            link.delete()
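Illustrative usage of the helper above (the path is made up):
link = genUrl('/srv/files/report.zip')
print(link.url)   # hand this temporary URL to the user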
[1] http://docs.djangoproject.com/en/1.2/howto/static-files/
It sounds like you are suggesting using some kind of dynamic url conf.
Why not forget your concerns by simplifying and setting up a single url that captures a large encoded string that depends on user/time?
(r'^download/(?P<encrypted_id>.*)/$', 'download_file'),  # use your own regexp

def download_file(request, encrypted_id):
    decrypted = decrypt(encrypted_id)
    _file = get_file(decrypted)
    return _file
A lot of sites just use a get param too.
www.example.com/download_file/?09248903483o8a908423028a0df8032
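A hedged sketch of that variant, treating the raw query string as the token exactly as in the example URL above (decrypt() and get_file() are assumed from the earlier snippet):
def download_file(request):
    # the whole query string is the encoded id in this URL style
    encrypted_id = request.META.get('QUERY_STRING', '')
    decrypted = decrypt(encrypted_id)
    return get_file(decrypted)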
If you are concerned about performance, look at the answers in this post: Having Django serve downloadable files,
where the use of the Apache X-Sendfile module is highlighted.
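A hedged sketch of that handoff; the header name depends on the web server (X-Sendfile for Apache's mod_xsendfile, X-Accel-Redirect for nginx), and decrypt() is assumed from the snippet above:
import os
from django.http import HttpResponse

def download_file(request, encrypted_id):
    path = decrypt(encrypted_id)   # resolves to a file path on disk
    response = HttpResponse()
    response['Content-Disposition'] = 'attachment; filename="%s"' % os.path.basename(path)
    response['X-Sendfile'] = path  # Apache mod_xsendfile; use X-Accel-Redirect for nginx
    return response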
Another alternative is to simply redirect to the static file served by whatever means from django.
