I'm trying to configure a virtual machine (with Vagrant and Ansible) that needs a file.py for the full, correct configuration of the machine (according to the book I'm studying). I was using the DigitalOcean API v2, but since I don't have a valid credit card my account is blocked, so I had to switch from DigitalOcean to AWS, as the company where I work has an AWS account. Now I take the 'client id' and 'api key' from the AWS VM, but the earlier problems returned... when I run the "python file.py" command the output says again:
dopy.manager.DoError: Unable to authenticate you.
**the file.py:**
"""
dependencias:
sudo pip install dopy pyopenssl ndg-httpsclient pyasn1
"""
import os
from dopy.manager import DoManager
import urllib3.contrib.pyopenssl
urllib3.contrib.pyopenssl.inject_into_urllib3()
api_version = os.getenv("DO_API_VERSION")
api_token=os.getenv("DO_API_KEY")
#do = DoManager(cliend_id, api_key)
do = DoManager(None, api_token, api_version=2)
keys = do.all_ssh_keys()
print "ssh key name\tid"
for key in keys:
print "%s\t%d" % (key["name"], key["id"])
print "Image name\tid"
imgs = do.all_images()
for img in imgs:
if img["slug"] == "ubuntu-14-04-x64":
print "%s\t%d" % (img["name"], img["id"])
print "Region name\tid"
regions = do.all_regions()
for region in regions:
if region["slug"] == "nyc2":
print "%s\t%d" % (region["slug"], region["id"])
print "Size name\tid"
sizes = do.sizes()
for size in sizes:
if size["slug"] == "512mb":
print "%s\t%d" % (size["slug"], size["id"])
I appreciate any help.
Try removing the quotes around api_token:
do = DoManager(None, api_token, api_version=2)
Otherwise your token is always the literal string "api_token", not the value of the variable api_token.
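For example, a minimal sketch of the difference (the quoted form is what triggers the "Unable to authenticate you" error):

# Wrong: passes the literal string "api_token" as the token
do = DoManager(None, "api_token", api_version=2)

# Right: passes the value read from the DO_API_KEY environment variable
do = DoManager(None, api_token, api_version=2)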
Hello Stack Overflow friends!
I'm trying to use this conversion cloud module (https://github.com/groupdocs-conversion-cloud/groupdocs-conversion-cloud-python), but it keeps returning the error: ModuleNotFoundError: No module named 'groupdocs_conversion_cloud'
I've installed the package through the terminal with no problem, yet the error keeps coming back.
Here's my code:
import groupdocs_conversion_cloud

# Get your app_sid and app_key at https://dashboard.groupdocs.cloud (free registration is required).
app_sid = "3e9***ca"
app_key = "f0***ad407e1"

# Create instances of the APIs
convert_api = groupdocs_conversion_cloud.ConvertApi.from_keys(app_sid, app_key)
file_api = groupdocs_conversion_cloud.FileApi.from_keys(app_sid, app_key)

try:
    # Upload source file to storage
    filename = 'FT_Manteiga virgem de cacau.pdf'
    remote_name = 'FT_Manteiga virgem de cacau.pdf'
    output_name = 'FT_Manteiga virgem de cacau.docx'
    strformat = 'docx'

    request_upload = groupdocs_conversion_cloud.UploadFileRequest(remote_name, filename)
    response_upload = file_api.upload_file(request_upload)

    # Convert PDF to Word document
    settings = groupdocs_conversion_cloud.ConvertSettings()
    settings.file_path = remote_name
    settings.format = strformat
    settings.output_path = output_name

    loadOptions = groupdocs_conversion_cloud.PdfLoadOptions()
    loadOptions.hide_pdf_annotations = True
    loadOptions.remove_embedded_files = False
    loadOptions.flatten_all_fields = True
    settings.load_options = loadOptions

    convertOptions = groupdocs_conversion_cloud.DocxConvertOptions()
    convertOptions.from_page = 1
    convertOptions.pages_count = 1
    settings.convert_options = convertOptions

    request = groupdocs_conversion_cloud.ConvertDocumentRequest(settings)
    response = convert_api.convert_document(request)
    print("Document converted successfully: " + str(response))
except groupdocs_conversion_cloud.ApiException as e:
    print("Exception when calling get_supported_conversion_types: {0}".format(e.message))
I've installed this lib using pip install groupdocs_conversion_cloud
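A common cause of this symptom is pip installing the package for a different Python interpreter than the one running the script. A quick check, just a sketch assuming a standard pip setup:

# Print which interpreter is actually running this script.
import sys
print(sys.executable)

# Then, in a terminal, install against that same interpreter
# (the path below is a placeholder for whatever the line above printed):
#   /path/to/python -m pip install groupdocs_conversion_cloud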
I'm trying to install CDH5 parcels on a Hadoop cluster using the Cloudera Manager Python API. I'm doing this with the following code:
test_cluster = ...  # configuring cluster
# adding hosts ...
for parcel in test_cluster.get_all_parcels():
    # note: "and 'cdh5'" is always truthy, so this effectively only checks product == 'CDH'
    if parcel.product == 'CDH' and 'cdh5':
        parcel.start_download().wait()
        parcel.start_distribution().wait()
        success = parcel.activate().wait().success
But I get the following error:
cm_api.api_client.ApiException: Parcel for CDH : 5.8.0-1.cdh5.8.0.p0.42 is not available on UBUNTU_TRUSTY. (error 400)
The CDH 5.8.0-1.cdh5.8.0.p0.42 parcel was in AVAILABLE_REMOTELY, as we can see by printing a string representation of this parcel:
<ApiParcel>: CDH-5.8.0-1.cdh5.8.0.p0.42 (stage: AVAILABLE_REMOTELY) (state: None) (cluster: TestCluster)
After the code runs, the parcel changes its stage to DOWNLOADED.
It seems I should add a new parcel repository compatible with Ubuntu Trusty (14.04), but I don't know how to do this using the Cloudera Manager API.
How can I specify a new repository so that the correct CDH is installed?
You may want to be more specific about the parcel you are acting on. I use something like this for the same purpose; the important part for your question is the combined check on parcel.version and parcel.product. After that (yes, I am verbose in my output) I print the list of parcels to verify that I am only trying to install the one parcel I want.
I'm sure you've been here, but if not, the cm_api GitHub site has some helpful examples too.
cdh_version = "CDH5"
cdh_version_number = "5.6.0"
# CREATE THE LIST OF PARCELS TO BE INSTALLED (CDH)
parcels_list = []
for parcel in cluster.get_all_parcels():
if parcel.version.startswith(cdh_version_number) and parcel.product == "CDH":
parcels_list.append(parcel)
for parcel in parcels_list:
print "WILL INSTALL " + parcel.product + ' ' + parcel.version
# DISTRIBUTE THE PARCELS
print "DISTRIBUTING PARCELS..."
for p in parcels_list:
cmd = p.start_distribution()
if not cmd.success:
print "PARCEL DISTRIBUTION FAILED"
exit(1)
# MAKE SURE THE DISTRIBUTION FINISHES
for p in parcels_list:
while p.stage != "DISTRIBUTED":
sleep(5)
p = get_parcel(api, p.product, p.version, cluster_name)
print p.product + ' ' + p.version + " DISTRIBUTED"
# ACTIVATE THE PARCELS
for p in parcels_list:
cmd = p.activate()
if not cmd.success:
print "PARCEL ACTIVATION FAILED"
exit(1)
# MAKE SURE THE ACTIVATION FINISHES
for p in parcels_list:
while p.stage != "ACTIVATED":
p = get_parcel(api, p.product, p.version, cluster_name)
print p.product + ' ' + p.version + " ACTIVATED"
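On the repository part of the question: I believe the Cloudera Manager setting REMOTE_PARCEL_REPO_URLS holds the comma-separated list of parcel repositories, and it can be changed through cm_api. A rough sketch, where the host, credentials and repository URL are placeholders:

from cm_api.api_client import ApiResource

# Placeholder connection details
api = ApiResource('cm-host.example.com', username='admin', password='admin')
cm = api.get_cloudera_manager()

# Read the current comma-separated repo list and append a repository that
# carries parcels built for Ubuntu Trusty (URL below is a placeholder).
config = cm.get_config()
repos = config.get('REMOTE_PARCEL_REPO_URLS', '')
new_repo = 'http://archive.cloudera.com/cdh5/parcels/5.8.0/'
cm.update_config({'REMOTE_PARCEL_REPO_URLS': repos + ',' + new_repo})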
I have developed an application which uses udisks version 1 to find and list details of connected USB drives. The details include the device (/dev/sdb1, etc.), mount point, and free space. However, I found that modern distros have udisks2 installed by default. Here is the little code found on another SO thread:-
#!/usr/bin/python2.7
import dbus
bus = dbus.SystemBus()
ud_manager_obj = bus.get_object('org.freedesktop.UDisks2', '/org/freedesktop/UDisks2')
om = dbus.Interface(ud_manager_obj, 'org.freedesktop.DBus.ObjectManager')
for k, v in om.GetManagedObjects().iteritems():
    drive_info = v.get('org.freedesktop.UDisks2.Drive', {})
    if drive_info.get('ConnectionBus') == 'usb' and drive_info.get('Removable'):
        if drive_info['MediaRemovable']:
            print("Device Path: %s" % k)
It produces:-
[sundar#arch ~]$ ./udisk2.py
Device Path: /org/freedesktop/UDisks2/drives/JetFlash_Transcend_8GB_GLFK4LYSFG3HZZ48
The above result is fine, but how can I connect to org.freedesktop.UDisks2.Block and get the properties of the devices?
http://udisks.freedesktop.org/docs/latest/gdbus-org.freedesktop.UDisks2.Block.html
After a lot of hit and trial, I got what I wanted. Just posting it so that someone can benefit in the future. Here is the code:-
#!/usr/bin/python2.7
# coding: utf-8
import dbus

def get_usb():
    devices = []
    bus = dbus.SystemBus()
    ud_manager_obj = bus.get_object('org.freedesktop.UDisks2', '/org/freedesktop/UDisks2')
    om = dbus.Interface(ud_manager_obj, 'org.freedesktop.DBus.ObjectManager')
    try:
        for k, v in om.GetManagedObjects().iteritems():
            drive_info = v.get('org.freedesktop.UDisks2.Block', {})
            if drive_info.get('IdUsage') == "filesystem" and not drive_info.get('HintSystem') and not drive_info.get('ReadOnly'):
                device = drive_info.get('Device')
                device = bytearray(device).replace(b'\x00', b'').decode('utf-8')
                devices.append(device)
    except:
        print "No device found..."
    return devices

def usb_details(device):
    bus = dbus.SystemBus()
    bd = bus.get_object('org.freedesktop.UDisks2', '/org/freedesktop/UDisks2/block_devices%s' % device[4:])
    try:
        device = bd.Get('org.freedesktop.UDisks2.Block', 'Device', dbus_interface='org.freedesktop.DBus.Properties')
        device = bytearray(device).replace(b'\x00', b'').decode('utf-8')
        print "printing " + device
        label = bd.Get('org.freedesktop.UDisks2.Block', 'IdLabel', dbus_interface='org.freedesktop.DBus.Properties')
        print 'Name of partition is %s' % label
        uuid = bd.Get('org.freedesktop.UDisks2.Block', 'IdUUID', dbus_interface='org.freedesktop.DBus.Properties')
        print 'UUID is %s' % uuid
        size = bd.Get('org.freedesktop.UDisks2.Block', 'Size', dbus_interface='org.freedesktop.DBus.Properties')
        print 'Size is %s' % size
        file_system = bd.Get('org.freedesktop.UDisks2.Block', 'IdType', dbus_interface='org.freedesktop.DBus.Properties')
        print 'Filesystem is %s' % file_system
    except:
        print "Error detecting USB details..."
The complete block device properties can be found here http://udisks.freedesktop.org/docs/latest/gdbus-org.freedesktop.UDisks2.Block.html
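For what it's worth, a minimal way to exercise the two functions above (assuming at least one removable filesystem is plugged in):

if __name__ == '__main__':
    for dev in get_usb():    # e.g. /dev/sdb1
        usb_details(dev)     # prints label, UUID, size and filesystem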
Edit
Note that the Block object does not have ConnectionBus or Removable properties. You will have to change the code to remove references to Drive object properties for the code to work.
/Edit
If you want to connect to Block, not Drive, then instead of
drive_info = v.get('org.freedesktop.UDisks2.Drive', {})
try
drive_info = v.get('org.freedesktop.UDisks2.Block', {})
Then you can iterate through drive_info and output its properties. For example, to get the Id property, you could:
print("Id: %s" % drive_info['Id'])
I'm sure there is a nice pythonic way to iterate through all the property key/value pairs and display the values, but I'll leave that to you. The key would be 'Id' and the value the string stored in drive_info['Id']. Good luck.
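One possible sketch of that iteration, inside the same GetManagedObjects loop (so drive_info here is the Block property dictionary):

for prop_name, prop_value in drive_info.iteritems():
    print("%s: %s" % (prop_name, prop_value))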
I'm having some issues with the EC2 bit of Boto (Boto v2.8.0, Python v2.6.7).
The first command returns a list of S3 Buckets - all good! The second command to get a list of EC2 instances blows up with a 403 with "Query-string authentication requires the Signature, Expires and AWSAccessKeyId parameters"
s3_conn = S3Connection(AWSAccessKeyId, AWSSecretKey)
print s3_conn.get_all_buckets()
ec2_conn = EC2Connection(AWSAccessKeyId, AWSSecretKey)
print ec2_conn.get_all_instances()
Also, my credentials are all good (Full admin) - I tested them using the Ruby aws-sdk, both EC2 and S3 work fine.
I also noticed that the host attribute in the ec2_conn object is s3-eu-west-1.amazonaws.com, "s3"...? Surely that's wrong? I've tried retrofitting it to the correct endpoint, but no luck.
Any help would be greatly appreciated.
Thanks
Here's some working code I use to list all my instances across potentially multiple regions.
It's doing a lot more than you need, but maybe you can pare it down to what you want.
#!/usr/bin/python
import boto
import boto.ec2
import sys
class ansi_color:
    red = '\033[31m'
    green = '\033[32m'
    reset = '\033[0m'
    grey = '\033[1;30m'

def name(i):
    if 'Name' in i.tags:
        n = i.tags['Name']
    else:
        n = '???'
    n = n.ljust(16)[:16]
    if i.state == 'running':
        n = ansi_color.green + n + ansi_color.reset
    else:
        n = ansi_color.red + n + ansi_color.reset
    return n

def pub_dns(i):
    return i.public_dns_name.rjust(43)

def pri_dns(i):
    return i.private_dns_name.rjust(43)

def print_instance(i):
    print ' ' + name(i) + '| ' + pub_dns(i) + ' ' + pri_dns(i)

regions = sys.argv[1:]
if len(regions) == 0:
    regions = ['us-east-1']
if len(regions) == 1 and regions[0] == "all":
    rr = boto.ec2.regions()
else:
    rr = [boto.ec2.get_region(x) for x in regions]

for reg in rr:
    print "========"
    print reg.name
    print "========"
    conn = reg.connect()
    reservations = conn.get_all_instances()
    for r in reservations:
        # print ansi_color.grey + str(r) + ansi_color.reset
        for i in r.instances:
            print_instance(i)
There is the connect_to_region command:
import boto.ec2
connection = boto.ec2.connect_to_region('eu-west-1',
                                        aws_access_key_id=AWSAccessKeyId,
                                        aws_secret_access_key=AWSSecretKey)
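With that region-specific connection, the call from the question should then hit the right endpoint (assuming the credentials have EC2 permissions):

print connection.get_all_instances()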
The Boto tutorial gives another way. That method would basically work like this:
import boto.ec2
for region in boto.ec2.regions():
    if region.name == 'my-favorite-region':
        connection = region.connect()
        break
This has not been working on older versions of Boto.
Do you have your IAM credentials in order? The given access key should have rights for EC2. If you're not sure, you can add the policy AmazonEC2FullAccess to test, and later tune this down.
This is a copy of the code in the Mining the Social Web book.
I am new to this field and to Redis too. I want to understand what $ means in this context. Also, what does the print with %s mean?
This is the source code below (from: https://github.com/ptwobrussell/Mining-the-Social-Web):
import sys
import redis
from twitter__util import getRedisIdByScreenName
# A pretty-print function for numbers
from twitter__util import pp
r = redis.Redis()
screen_names=['user1','user2']
def friendsFollowersInCommon(screen_names):
    r.sinterstore('temp$friends_in_common',
                  [getRedisIdByScreenName(screen_name, 'friend_ids')
                   for screen_name in screen_names])
    r.sinterstore('temp$followers_in_common',
                  [getRedisIdByScreenName(screen_name, 'follower_ids')
                   for screen_name in screen_names])

    print 'Friends in common for %s: %s' % (', '.join(screen_names),
                                            pp(r.scard('temp$friends_in_common')))
    print 'Followers in common for %s: %s' % (', '.join(screen_names),
                                              pp(r.scard('temp$followers_in_common')))

    # Clean up scratch workspace
    r.delete('temp$friends_in_common')
    r.delete('temp$followers_in_common')

if __name__ == "__main__":
    if len(screen_names) < 2:
        print >> sys.stderr, "Please supply at least two screen names."
        sys.exit(1)
    friendsFollowersInCommon(screen_names[1:])
The $ symbol is just part of the key name. It separates name parts. I usually use : for the same purpose (e.g. users:123).
The %s part is Python's string formatting.
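A minimal illustration of both points (the names and numbers here are made up):

name, count = 'user1', 42
key = 'temp$friends_in_common'   # '$' is just a character inside the key name
print 'Friends in common for %s: %s' % (name, count)
# -> Friends in common for user1: 42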