pyvmomi deploy OVF template - python

I am trying to deploy an OVF image in vCenter using the code at the link below:
https://github.com/vmware/pyvmomi-community-samples/blob/master/samples/deploy_ovf.py
python vm_deploy.py -s 'vcenter_url' -u 'username' -p 'password' -v '.vmdk path' -f '.ovf path'
But it is failing with the traceback below:
Traceback (most recent call last):
  File "orig.py", line 218, in <module>
    exit(main())
  File "orig.py", line 185, in main
    objs = get_objects(si, args)
  File "orig.py", line 150, in get_objects
    resource_pool_obj = cluster_obj.resourcePool
AttributeError: 'vim.Folder' object has no attribute 'resourcePool'
I'm not able to find much help on this error online other than the link below:
https://github.com/vmware/pyvmomi-community-samples/issues/201
I can see that dir(cluster_obj) has no resourcePool in it, but I'm not sure how to get this working.

This script works only if you have manually created a resource pool in your environment.
However, with a small modification you can target the default resource pool of a specific host.
In the get_objects function, replace this (around line 140):
# Get cluster object.
cluster_list = datacenter_obj.hostFolder.childEntity
if args.cluster_name:
    cluster_obj = get_obj_in_list(args.cluster_name, cluster_list)
elif len(cluster_list) > 0:
    cluster_obj = cluster_list[0]
else:
    print "No clusters found in DC (%s)." % datacenter_obj.name

hosts = datacenter_obj.hostFolder.childEntity
resource_pool = hosts[0].resourcePool

# Generate resource pool.
resource_pool_obj = cluster_obj.resourcePool
with this:
for computeResource in datacenter_obj.hostFolder.childEntity:
    if computeResource.name == 'ip or hostname here':
        resource_pool = computeResource.resourcePool
        break
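If you are not sure of the exact host name in advance, a slight variation falls back to the default resource pool of the first compute resource in the datacenter. This is only a sketch; 'esxi-host-01' is a placeholder name, not something from the original script:
# Sketch: prefer a named host/cluster, otherwise fall back to the first compute
# resource. Every vim.ComputeResource exposes a default resourcePool.
target_name = 'esxi-host-01'   # placeholder, replace with your host or cluster name
compute_resources = datacenter_obj.hostFolder.childEntity
resource_pool = None
for compute_resource in compute_resources:
    if compute_resource.name == target_name:
        resource_pool = compute_resource.resourcePool
        break
if resource_pool is None and compute_resources:
    resource_pool = compute_resources[0].resourcePool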
In the script you linked, datacenter_obj is obtained with something like this:
datacenter_obj = si.content.rootFolder.childEntity[0]
Index 0 simply takes the first datacenter found.
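If you have more than one datacenter, a name-based lookup is safer than taking index 0. A minimal sketch, assuming 'MyDatacenter' is a placeholder name and si is the connected ServiceInstance from the sample:
from pyVmomi import vim

# Sketch: pick a datacenter by name instead of blindly taking the first entry.
# Nested folders in large inventories are not descended into here.
datacenter_obj = None
for entity in si.content.rootFolder.childEntity:
    if isinstance(entity, vim.Datacenter) and entity.name == 'MyDatacenter':
        datacenter_obj = entity
        break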
I hope this helps, even if the answer comes a little late.

Related

Cambrionix PowerPad15S is not showing connected devices in Ubuntu

I am using a Cambrionix PowerPad15s for my devices, but while running their first example, which finds all the devices connected to the USB ports, I am having an issue in the jsonrpc file (which is provided by the company itself).
I have to import this:
from cbrxapi import cbrxapi
This code gets all the devices connected to the USB ports and saves them in the result variable:
result = cbrxapi.cbrx_discover("local")
The rest of the code is:
if result == False:
    print "No Cambrionix unit found."
    sys.exit(0)

unitId = result[0]
handle = cbrxapi.cbrx_connection_open(unitId)
nrOfPorts = cbrxapi.cbrx_connection_get(handle, "nrOfPorts")
cbrxapi.cbrx_connection_close(handle)

print "The Cambrionix unit " + unitId + " has " + str(nrOfPorts) + " ports."
The error I am facing is:
Traceback (most recent call last):
  File "cbrx_api_quickstart.py", line 9, in <module>
    result = cbrxapi.cbrx_discover("local")
  File "/usr/local/share/cbrxapi/jsonrpc-0.1/jsonrpc.py", line 936, in call
    return self.__req(self.__name, args, kwargs)
  File "/usr/local/share/cbrxapi/jsonrpc-0.1/jsonrpc.py", line 908, in __req
    raise RPCTransportError(err)
jsonrpc.RPCTransportError: [Errno 111] Connection refused
The product I am using is the Cambrionix PowerPad15s.
Sorry for not explaining properly; I am still in the learning phase.
Found the solution:
I had to install one more package on my system to get the code working:
$ sudo apt-get install avahi-daemon
I also needed to ensure that one more script, install_service.sh in /usr/local/share/cbrxd/setup, had been run on my system.
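For anyone hitting the same Connection refused error, a small defensive wrapper around the discovery call makes the failure easier to diagnose. This is only a sketch, reusing the cbrxapi module from the quickstart; the broad except is deliberate because the exact exception type comes from the bundled jsonrpc transport:
import sys

from cbrxapi import cbrxapi

try:
    result = cbrxapi.cbrx_discover("local")
except Exception as err:
    # [Errno 111] Connection refused here usually means the cbrxd daemon is not
    # running; install avahi-daemon and run install_service.sh as described above.
    print "Could not reach the Cambrionix daemon: " + str(err)
    sys.exit(1)

if result == False:
    print "No Cambrionix unit found."
    sys.exit(0)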

How do I get debug_info from lttng ctf trace using babeltrace python bindings?

I'm using the Babeltrace Python 3 bindings to read an LTTng-UST trace that contains debug_info. When I run Babeltrace from the shell, I see the debug_info in the output:
[13:28:29.998652878] (+0.000000321) hsm-dev lttng_ust_cyg_profile:func_exit: { cpu_id = 1 }, { ip = 0x4008E5, debug_info = { bin = "a.out#0x4008e5", func = "foo+0" }, vpid = 28208, vtid = 28211 }, { addr = 0x4008E5, call_site = 0x400957 }
From the Python bindings I can get the other event fields (cpu_id, ip, addr, call_site, ...), but I get KeyErrors when trying to access debug_info, bin or func.
import babeltrace

collection = babeltrace.TraceCollection()
collection.add_traces_recursive('lttng-traces/a.out-20170624-132829/', 'ctf')

for e in collection.events:
    if e.name == 'lttng_ust_cyg_profile:func_entry':
        print(e['addr'])
        print(e['func'])
Traceback (most recent call last):
  File "fields.py", line 9, in <module>
    print(e['func'])
  File "/usr/lib/python3/dist-packages/babeltrace.py", line 865, in __getitem__
    raise KeyError(field_name)
KeyError: 'func'
Is there a way to get those fields from Python?
I'm using Babeltrace 1.5.2
Not yet. It is possible with the Babeltrace 2 Python bindings, after building the appropriate processing graph and running it, but that major revision has not been released as of this writing (it is at the pre-release stage).
There is a hack for debug information in Babeltrace 1 in which the text output "injects" virtual fields at print time, but they are not available before that, which is why you can't access e['func'], for example.
Your best bet for the moment is to spawn a babeltrace CLI subprocess and, one line of output at a time, use a regex to pull out the fields you need. Ugly, but that's what's available today.
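A rough sketch of that CLI workaround is below; the trace path is the one from the question, and the regex simply extracts the two debug_info sub-fields from babeltrace's text output:
import re
import subprocess

trace_path = 'lttng-traces/a.out-20170624-132829/'
debug_info_re = re.compile(r'debug_info = \{ bin = "([^"]*)", func = "([^"]*)" \}')

# Read the text output of the babeltrace CLI line by line.
proc = subprocess.Popen(['babeltrace', trace_path],
                        stdout=subprocess.PIPE, universal_newlines=True)
for line in proc.stdout:
    if 'lttng_ust_cyg_profile:' not in line:
        continue
    match = debug_info_re.search(line)
    if match:
        bin_field, func_field = match.groups()
        print(bin_field, func_field)
proc.wait()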

Using nmap library in Python program

I have a simple Python program which uses the nmap library to do port scanning.
from optparse import OptionParser
import nmap
from threading import *

screenLock = Semaphore(value=1)

def nmapScan(thost, tport):
    x = nmap.PortScanner()
    x.scan(thost, tport)
    state = x[thost]['tcp'][int(tport)]['state']
    print "[*]" + thost + "tcp/" + tport + " " + state

def main():
    parser = OptionParser('usage %prog -H <target host> -p <target port>')
    parser.add_option('-H', dest='thost', type='string', help='specify target host')
    parser.add_option('-p', dest='tports', type='string', help='specify target port[s] seperated by comma')
    (options, args) = parser.parse_args()
    thost = options.thost
    tports = options.tports
    tports = tports.split(',')
    if (thost == None) | (tports == None):
        print parser.usage
        exit(0)
    for i in tports:
        nmapScan(thost, i)

main()
When I run the program, I get the following error:
akshayrajmacbookpro$ python nmapScanner.py -H 192.168.1.60 -p 80,443
Traceback (most recent call last):
  File "nmapScanner.py", line 28, in <module>
    main()
  File "nmapScanner.py", line 26, in main
    nmapScan(thost,i)
  File "nmapScanner.py", line 10, in nmapScan
    state=x[thost]['tcp'][int(tport)]['state']
  File "build/bdist.macosx-10.11-intel/egg/nmap/nmap.py", line 555, in __getitem__
KeyError: '192.168.1.60'
I tried using a URL instead of the IP on the command line, but I get the same error. Being new to Python, I am not able to understand and resolve this.
x (an instance of nmap.PortScanner) does not contain those keys. To iterate over the scan results you can do this:
for host, result in x._scan_result['scan'].items():
    print "[*]" + thost + "tcp/" + tport + " " + result['status']['state']
It's best to look at the docs or source code of python-nmap to see what other useful info is available, e.g. the name and version of the service listening on that port.
More info here: https://bitbucket.org/xael/python-nmap/src/f368486a2cf12ce2bf3d5978614586e89c49c417/nmap/nmap.py?at=default&fileviewer=file-view-default#nmap.py-381
The host does not exist as a key in the dictionary that forms part of the scan data, which, in my opinion, is probably the reason for the error. Thanks
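Building on both answers, here is a guarded version of nmapScan. It is only a sketch: it uses python-nmap's all_hosts() helper and assumes the nmap binary is on PATH; note that when you scan a hostname, the results may be keyed by IP address:
import nmap

def nmapScan(thost, tport):
    x = nmap.PortScanner()
    x.scan(thost, tport)
    # all_hosts() lists the hosts that actually appear in the scan results,
    # so we can avoid the KeyError when the host is down or unresolvable.
    if thost not in x.all_hosts():
        print "[!] No scan results for " + thost
        return
    state = x[thost]['tcp'][int(tport)]['state']
    print "[*] " + thost + " tcp/" + tport + " " + state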

Trying to make a dynamic host file for ansible in python

This is my first post here, so if there are any questions or if something is unclear, don't hesitate to ask.
I am trying to use a dynamic host file so I can build multiple Vagrant machines without having to manage the host file first. This is what I found online:
#!/usr/bin/env python
# Adapted from Mark Mandel's implementation
# https://github.com/ansible/ansible/blob/devel/plugins/inventory/vagrant.py
import argparse
import json
import paramiko
import subprocess
import sys


def parse_args():
    parser = argparse.ArgumentParser(description="Vagrant inventory script")
    group = parser.add_mutually_exclusive_group(required=True)
    group.add_argument('--list', action='store_true')
    group.add_argument('--host')
    return parser.parse_args()


def list_running_hosts():
    cmd = "vagrant status --machine-readable"
    status = subprocess.check_output(cmd.split()).rstrip()
    hosts = []
    for line in status.split('\n'):
        (_, host, key, value) = line.split(',')
        if key == 'state' and value == 'running':
            hosts.append(host)
    return hosts


def get_host_details(host):
    cmd = "vagrant ssh-config {}".format(host)
    p = subprocess.Popen(cmd.split(), stdout=subprocess.PIPE)
    config = paramiko.SSHConfig()
    config.parse(p.stdout)
    c = config.lookup(host)
    return {'ansible_ssh_host': c['hostname'],
            'ansible_ssh_port': c['port'],
            'ansible_ssh_user': c['user'],
            'ansible_ssh_private_key_file': c['identityfile'][0]}


def main():
    args = parse_args()
    if args.list:
        hosts = list_running_hosts()
        json.dump({'vagrant': hosts}, sys.stdout)
    else:
        details = get_host_details(args.host)
        json.dump(details, sys.stdout)


if __name__ == '__main__':
    main()
However, when I run this I get the following error:
ERROR! The file inventory/vagrant.py is marked as executable, but failed to execute correctly. If this is not supposed to be an executable script, correct this with `chmod -x inventory/vagrant.py`.
ERROR! Inventory script (inventory/vagrant.py) had an execution error: Traceback (most recent call last):
File "/home/sebas/Desktop/playbooks/inventory/vagrant.py", line 52, in <module>
main()
File "/home/sebas/Desktop/playbooks/inventory/vagrant.py", line 45, in main
hosts = list_running_hosts()
File "/home/sebas/Desktop/playbooks/inventory/vagrant.py", line 24, in list_running_hosts
(_, host, key, value) = line.split(',')
ValueError: too many values to unpack
ERROR! inventory/vagrant.py:4: Expected key=value host variable assignment, got: argparse
Does anybody know what I did wrong? Thank you guys in advance!
I guess the problem is that the vagrant status command only works inside a directory with a Vagrantfile, or if the ID of a target machine is specified.
To get the state of all active Vagrant environments on the system, vagrant global-status should be used instead. But global-status has a drawback: it uses a cache and does not actively verify the state of each machine.
So to reliably determine the state, we first need to get the IDs of all VMs with vagrant global-status and then check those IDs with vagrant status ID, as sketched below.
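One possible way to wire that into the inventory script's list_running_hosts() is sketched here. It assumes the human-readable column layout of vagrant global-status (id, name, provider, state, directory), which is not a stable machine interface, and simply re-checks each machine with vagrant status <id>:
import subprocess

def list_running_hosts():
    # Enumerate machines system-wide instead of relying on a Vagrantfile
    # in the current directory.
    output = subprocess.check_output(["vagrant", "global-status"]).decode("utf-8")
    hosts = []
    for line in output.splitlines()[2:]:   # skip the header row and dashed rule
        if not line.strip():               # a blank line marks the end of the table
            break
        machine_id, name = line.split()[:2]
        # global-status can be stale, so confirm the state per machine.
        status = subprocess.check_output(["vagrant", "status", machine_id]).decode("utf-8")
        if "running" in status:
            hosts.append(name)
    return hosts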

Python import error

I am trying to run a Python file from the command line with a single parameter in Ubuntu 12.04. The program works if I simply run it from the IDE and pass the parameter in the code. However, if I call 'python readFromSerial1.py 3' at the command prompt, I get:
Traceback (most recent call last):
  File "readFromSerial1.py", line 62, in <module>
    main()
  File "readFromSerial1.py", line 6, in main
    readDataFromUSB(time)
  File "readFromSerial1.py", line 9, in readDataFromUSB
    import usb.core
ImportError: No module named usb.core
I'm a little confused, as the module imports correctly if I run from the IDE. I downloaded the pyUSB module and extracted it (its folder name is pyusb-1.0.0a3). I then copied this folder into
/usr/local/lib/python2.7/site-packages/. Is that the correct procedure? I have a feeling the issue is that Python simply cannot find the usb module and I need to put it in the correct location. My code is below, and any help would be greatly appreciated:
readFromSerial1.py
import sys

def main():
    time = sys.argv[1]
    #time = 1
    readDataFromUSB(time)

def readDataFromUSB(time):
    import usb.core

    #find device
    dev = usb.core.find(idVendor=0x099e, idProduct=0x0001) #GPS info

    #Was the device found?
    if dev is None:
        raise ValueError('Device not found')
    else:
        print "Device found!"

    #Do this to avoid 'Errno16: Resource is busy'
    if dev.is_kernel_driver_active(0):
        try:
            dev.detach_kernel_driver(0)
        except usb.core.USBError as e:
            sys.exit("Could not detach kernel driver: %s" % str(e))

    #Sets default config
    dev.set_configuration()

    #Gets default endpoint
    endpoint = dev[0][(0,0)][0]

    writeObject = open("InputData.txt", "w")

    #iterate for time purposes
    for i in range(0, (time*6)): #sys.argv is command line variable for time input
        data = dev.read(endpoint.bEndpointAddress, endpoint.wMaxPacketSize, 0, 100000)
        sret = ''.join([chr(x) for x in data])
        writeObject.write(sret);
        print sret
        '''
        newStr = ''.join(sret[7:14])
        compareStr = ",*4F"
        if (newStr == compareStr):
            print "The GPS is not reading in any values right now. Try going somewhere else with better reception."
        else:
            print sret[7:14]
        '''
    writeObject.close()

main()
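One way to see why the IDE finds the module while the command line does not is to compare the interpreter's module search path in both environments. This small diagnostic sketch is not part of the original script:
import sys
print sys.path            # directories this interpreter searches for modules
import usb.core           # fails right here if pyusb is not on sys.path
print usb.core.__file__   # shows which installed copy is actually being used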
