Pyinfra - host/config objects - python

I was trying to run through some of the examples from pyinfra's docs to see if this tool would work for me (so I could manage some servers using good old Python), and I even found they had an example for installing VirtualBox on a host machine, but I can't even get the example working. I'm getting a weird error that makes me think the tool has been updated since the examples, and even the docs, were written. Just wondering if anyone else has a way to get this to work with pyinfra (currently trying with version 1.5).
The example code:
from pyinfra import config, host
from pyinfra.facts.server import LinuxDistribution, LinuxName, OsVersion
from pyinfra.operations import apt, python, server

config.SUDO = True
virtualbox_version = '6.1'


def verify_virtualbox_version(state, host, version):
    command = '/usr/bin/virtualbox --help'
    status, stdout, stderr = host.run_shell_command(state, command=command, sudo=config.SUDO)
    assert status is True  # ensure the command executed OK
    if version not in str(stdout):
        raise Exception('`{}` did not work as expected. stdout: {} stderr: {}'.format(
            command, stdout, stderr))


if host.get_fact(LinuxName) == 'Ubuntu':
    code_name = host.get_fact(LinuxDistribution)['release_meta'].get('DISTRIB_CODENAME')

    apt.packages(
        name='Install packages',
        packages=['wget'],
        update=True,
    )

    apt.key(
        name='Install VirtualBox key',
        src='https://www.virtualbox.org/download/oracle_vbox_2016.asc',
    )

    apt.repo(
        name='Install VirtualBox repo',
        src='deb https://download.virtualbox.org/virtualbox/debian {} contrib'.format(code_name),
    )

    # install kernel headers
    # Note: host.get_fact(OsVersion) is the same as `uname -r` (ex: '4.15.0-72-generic')
    apt.packages(
        {
            'Install VirtualBox version {} and '
            'kernel headers for {}'.format(virtualbox_version, host.get_fact(OsVersion)),
        },
        [
            'virtualbox-{}'.format(virtualbox_version),
            'linux-headers-{}'.format(host.get_fact(OsVersion)),
        ],
        update=True,
    )

    server.shell(
        name='Run vboxconfig which will stop/start VirtualBox services and build kernel modules',
        commands='/sbin/vboxconfig',
    )

    python.call(
        name='Verify VirtualBox version',
        function=verify_virtualbox_version,
        version=virtualbox_version,
    )
Here is the error that I'm seeing:
File "install_virtualbox.py", line 21, in <module>
if host.get_fact(LinuxName) == 'Ubuntu':
AttributeError: module 'pyinfra.api.host' has no attribute 'get_fact'
I checked the source, and I can't argue with my editor when it says it cannot find a reference to 'config' in '__init__', as I don't see it there either. But then how is anyone getting facts about hosts? It looks like the logic may have been moved into the 'pyinfra.api.host' and 'pyinfra.api.config' modules, but the code there looks totally different.
Hoping I'm just missing something that someone who's been using the tool can help explain to me.
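For anyone else who hits this, my best guess (I'm new to the tool too, so treat this as a hunch): the deploy file isn't meant to be executed with python directly. The pyinfra CLI is what binds the "from pyinfra import host" import to the current host context; under a plain "python install_virtualbox.py" run, host is still just a module, which is exactly what the AttributeError says. A minimal sketch of the difference, assuming pyinfra 1.x and a local test target:

# test_facts.py -- a minimal deploy sketch; run it through the pyinfra CLI,
# not the Python interpreter:
#   pyinfra @local test_facts.py
from pyinfra import host
from pyinfra.facts.server import LinuxName

# Under the CLI, `host` resolves to the Host object for the current target
# and get_fact works; under `python test_facts.py` it is only the
# pyinfra.host module, which has no get_fact attribute.
if host.get_fact(LinuxName) == 'Ubuntu':
    pass  # operations would go here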

Linux package management with Python

I am a complete Python beginner trying to write a Python script to automate the setup of an SDK on Linux machines from remote GitHub repositories.
The script starts by performing some basic preliminary operations, especially the check/setup of several packages (git, docker, pip, etc.).
For now, I target Debian (Stretch, Buster), CentOS (6, 7) and Ubuntu Server 18.04 LTS.
Of course, I want the script to run on the widest range of Linux machines.
Today I rely on the available package managers (apt-get and yum), roughly invoked through subprocess.call() statements.
I customize the related commands using nasty script configuration variables like below:
import platform
import subprocess

distribution = platform.dist()[0]
version = platform.dist()[1]

if distribution == 'debian':
    pkgInstaller = 'dpkg'
    pkgManager = 'apt-get'
    checkIfInstalled = '-s'
    installPackage = 'install'
    yesToAll = '-y'
    dockerPackage = 'docker-ce'
elif distribution == 'centos':
    pkgInstaller = 'rpm'
    pkgManager = 'yum'
    checkIfInstalled = '-q'
    installPackage = 'install'
    yesToAll = '-y'
    dockerPackage = 'docker'
I then simply loop on an array containing the names of packages to be installed, then run the command through subprocess.call() :
prerequisites = ['git', dockerPackage, 'doxygen', 'python2-pip']
for pkg in prerequisites:
    pkgInstallation = subprocess.call(['sudo', pkgManager, yesToAll, installPackage, pkg])
While this approach may have the benefit of not depending too much on third-party Python modules, I'm guessing there are smarter ways of doing such a simple operation?
Usually when doing switch statements like this, a dictionary might be a bit more useful. Also, normally I'm not one to try to PEP-8 things, but this is an instance where PEP-8 might really help your readability by not matching up your equals signs for all of your lines of code.
The dict will hold your distro as the key and your vars as values wrapped in a tuple:
options = {
    'debian': ('dpkg', 'apt-get', '-s', 'install', '-y', 'docker-ce'),
    'centos': ('rpm', 'yum', '-q', 'install', '-y', 'docker'),
}

# unpack this function call here
distribution, version, *_ = platform.dist()

# now get the match
pkg_installer, pkg_manager, check, install_pkg, yes_to_all, docker = options[distribution]

requisites = ['git', docker, 'doxygen', 'python2-pip']
for pkg in requisites:
    pkg_installation = subprocess.call(['sudo', pkg_manager, yes_to_all, install_pkg, pkg])
The options[distribution] call will raise a KeyError for unsupported distributions, so you can probably catch that and raise something a bit more useful like:
try:
    pkg_installer, pkg_manager, check, install_pkg, yes_to_all, docker = options[distribution]
except KeyError as e:
    raise ValueError(f"Got unsupported OS, expected one of {', '.join(options.keys())}") from e
To make it less verbose: docker is the only value you need to pull out by name, so you can house all of the others in a single variable and unpack the command pieces from it:
try:
    *args, docker = options[distribution]
except KeyError as e:
    raise ValueError(f"Got unsupported OS, expected one of {', '.join(options.keys())}") from e

# args holds (pkg_installer, pkg_manager, check, install_pkg, yes_to_all);
# only the manager and its flags belong in the install command.
_, pkg_manager, _, install_pkg, yes_to_all = args

requisites = ['git', docker, 'doxygen', 'python2-pip']
for pkg in requisites:
    pkg_installation = subprocess.call(['sudo', pkg_manager, yes_to_all, install_pkg, pkg])
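Incidentally, check (dpkg -s / rpm -q) is unpacked but never used in either version; presumably it's there to skip packages that are already installed. A sketch of that, reusing the six-variable unpacking above and assuming both commands exit 0 when the package is present:

import subprocess

for pkg in requisites:
    # dpkg -s / rpm -q exit with 0 when the package is already installed
    already_installed = subprocess.call(
        [pkg_installer, check, pkg],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    ) == 0
    if not already_installed:
        subprocess.call(['sudo', pkg_manager, yes_to_all, install_pkg, pkg])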

Using WMI-Client-Wrapper to execute an exe and get output logs

Objective:
I am on Ubuntu 16.04 and am using the WMI-Client-Wrapper module to connect to a remote Windows machine, send an executable to it (e.g. Process Explorer), execute it, and fetch the logs it creates back to my Linux machine for further processing. Using WMI-Client-Wrapper is the only option available, as the WMI module doesn't work on Linux.
Problem:
I am able to send the file to the remote Windows machine by establishing a connection with WMI-Client-Wrapper and using SMB for the file transfer. After that, when I try to create a process and execute it, I get an error stating that some of the attributes WMI actually has are not supported by WMI-Client-Wrapper.
What I tried
Python Code:
import os
import time
from socket import *

import wmi_client_wrapper as wmic

wmic = wmic.WmiClientWrapper(
    host="192.168.115.128",
    username="LegalWrongDoer",
    password="sasuke14",
)

SW_SHOWNORMAL = 1

str = "smbclient //192.168.115.128/C$ -U LegalWrongDoer%sasuke14 -c 'put \"procexp64.exe\"'"
os.system(str)
print("Folder sent")

process_startup = wmic.Win32_ProcessStartup.new()
process_startup.ShowWindow = SW_SHOWNORMAL

process_id, result = wmic.Win32_Process.Create(
    CommandLine="C:/procexp64.exe",
    ProcessStartupInformation=process_startup,
)
if result == 0:
    print("Process started successfully")
else:
    print("Sorry, but can't execute Process!")
When I run this Python file, the initial smbclient transfer works, but creating the Win32_ProcessStartup object fails.
Traceback:
Traceback (most recent call last):
  File "WMIClient.py", line 22, in <module>
    process_startup = wmic.Win32_ProcessStartup.new()
AttributeError: 'WmiClientWrapper' object has no attribute 'Win32_ProcessStartup'
I'd be extremely grateful if any of you could help me through this. Thanks in advance :)
Well, I finally managed to get a workaround for this whole scenario; it might look a little messy, but it sure does work for me.
Firstly I use smbclient to transfer the executable to the end-point where I want to execute it. Inside my code I use os.system() calls to make this happen.
import os
str1 = "smbclient //'<HostMachineIP>'/admin$ -U '<domain>\\<username>%<password>' -c \'lcd /usr/local/acpl/bin/endPoint/; put \"EndPointForeignsics.exe\"\'"
os.system(str1)
This puts the executable in a shared folder that the user (Admin, in my case) has access to. I then use a tool called Winexe to get at the console/command prompt of the endpoint, with another os.system() call:
str2 = r'/usr/local/bin/winexe -U "<domain>\\<username>%<password>" //<HostMachineIP> "cmd /c c:\windows\EndPointForeignsics.exe"'
os.system(str2)
P.S.: Winexe is a tool that you'll have to download off the internet and compile yourself. That may take some time and effort, but it is quite achievable; StackOverflow and the tool's documentation will help a lot.
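As a side note, the same winexe call can be made with subprocess.run instead of os.system, which sidesteps the quoting headaches above and also captures the remote console output (a sketch, assuming Python 3.7+; the placeholders are still placeholders):

import subprocess

result = subprocess.run(
    [
        '/usr/local/bin/winexe',
        '-U', r'<domain>\<username>%<password>',
        '//<HostMachineIP>',
        r'cmd /c c:\windows\EndPointForeignsics.exe',
    ],
    capture_output=True,
    text=True,
)
print(result.stdout)  # whatever the remote command printed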

Calling gcloud from bazel genrule

I am having some issues getting gcloud to run in a Bazel genrule. It looks like a Python path issue.
genrule(
    name = "foo",
    outs = ["bar"],
    srcs = [":bar.enc"],
    cmd = "gcloud decrypt --location=global --keyring=foo --key=bar --plaintext-file $@ --ciphertext-file $(location bar.enc)",
)
The exception is:
ImportError: No module named traceback
From:
try:
  gcloud_main = _import_gcloud_main()
except Exception as err:  # pylint: disable=broad-except
  # We want to catch *everything* here to display a nice message to the user
  # pylint:disable=g-import-not-at-top
  import traceback
  # We DON'T want to suggest `gcloud components reinstall` here (ex. as
  # opposed to the similar message in gcloud_main.py), as we know that no
  # commands will work.
  sys.stderr.write(
      ('ERROR: gcloud failed to load: {0}\n{1}\n\n'
       'This usually indicates corruption in your gcloud installation or '
       'problems with your Python interpreter.\n\n'
       'Please verify that the following is the path to a working Python 2.7 '
       'executable:\n'
       '    {2}\n\n'
       'If it is not, please set the CLOUDSDK_PYTHON environment variable to '
       'point to a working Python 2.7 executable.\n\n'
       'If you are still experiencing problems, please reinstall the Cloud '
       'SDK using the instructions here:\n'
       '    https://cloud.google.com/sdk/\n').format(
           err,
           '\n'.join(traceback.format_exc().splitlines()[2::2]),
           sys.executable))
  sys.exit(1)
My questions are:
How do I best call gcloud from a genrule?
What are the parameters needed to specify the python path?
How is Bazel blocking this?
Update:
I was able to get it to run by specifying CLOUDSDK_PYTHON.
Indeed, Bazel runs in a sandbox, hence gcloud cannot find its dependencies. Actually, I'm surprised gcloud can be invoked at all.
To proceed, I would wrap gcloud in a Bazel py_binary and refer to it with the tools attribute in the genrule. You also need to wrap it with $(location ...) in the cmd. In the end, you will have:
genrule(
    name = "foo",
    outs = ["bar"],
    srcs = [":bar.enc"],
    cmd = "$(location //third_party/google/gcloud) decrypt --location=global --keyring=foo --key=bar --plaintext-file $@ --ciphertext-file $(location bar.enc)",
    tools = ["//third_party/google/gcloud"],
)
And for that you define, in third_party/google/gcloud/BUILD (or anywhere you want, I just used a path that makes sense to me):
py_binary(
    name = "gcloud",
    srcs = ["gcloud.py"],
    main = "gcloud.py",
    visibility = ["//visibility:public"],
    deps = [
        ":gcloud_sdk",
    ],
)

py_library(
    name = "gcloud_sdk",
    srcs = glob(
        ["**/*.py"],
        exclude = ["gcloud.py"],
        # maybe exclude tests and unrelated code, too
    ),
    deps = [
        # whatever extra deps are needed by gcloud to compile
    ],
)
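gcloud.py itself would be a small wrapper that puts the vendored SDK on sys.path and hands off to the SDK's entry point. A sketch, assuming the SDK sources live next to the BUILD file; the entry-point module name is taken from the SDK's own lib/gcloud.py and may differ between SDK versions:

# third_party/google/gcloud/gcloud.py -- a sketch, not the SDK's own file
import os
import sys

# Make the vendored SDK importable from wherever Bazel runs the tool.
sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))

from googlecloudsdk import gcloud_main  # assumed entry point

if __name__ == '__main__':
    sys.exit(gcloud_main.main())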
I had a similar issue; running this command worked for me:
export CLOUDSDK_PYTHON=/usr/bin/python
(This was answered above as an update, but I felt like posting the whole command for future people coming here. Note that genrules run with a scrubbed environment in the sandbox, so depending on your setup you may also need to pass the variable through explicitly, e.g. with --action_env=CLOUDSDK_PYTHON=/usr/bin/python.)

Executable out of script containing serial_for_url

I have developed a Python script for serial communication with a digital pump, and I now need to make an executable out of it. It works perfectly well when run with Python, and py2exe produces the .exe without complaint, but when I run the executable the following error occurs:
  File "pump_model.pyc", line 96, in connect_new
  File "serial\__init__.pyc", line 71, in serial_for_url
ValueError: invalid URL, protocol 'loop' not known
The relevant piece of my code is the following:
# New serial connection
def connect_new(self, port_name):
    """Function for configuring a new serial connection."""
    try:
        self.ser = serial.Serial(port=port_name,
                                 baudrate=9600,
                                 parity='N',
                                 stopbits=1,
                                 bytesize=8,
                                 timeout=self.timeout_time)
    except serial.SerialException:
        self.ser = serial.serial_for_url('loop://',
                                         timeout=self.timeout_time)  # This line BLOWS!
    except:
        print sys.exc_info()[0]
    finally:
        self.initialize_pump()
I should note that the application was written on OS X and was tested on Windows with the Canopy Python distribution.
I had the exact same problem with "socket://" rather than "loop://".
I wasn't able to get the accepted answer to work; however, the following seems to succeed:
1) Add an explicit import of the offending urlhandler.* module:
import serial
# explicit import for py2exe - to fix "socket://" url issue
import serial.urlhandler.protocol_socket
# explicit import for py2exe - to fix "loop://" url issue (the OP's particular problem)
import serial.urlhandler.protocol_loop

# use serial_for_url in the normal manner
self._serial = serial.serial_for_url('socket://192.168.1.99:12000')
2) Generate a setup script for py2exe (see https://pypi.python.org/pypi/py2exe/) -- I've installed py2exe into a virtualenv:
path\to\env\Scripts\python.exe -m py2exe myscript.py -W mysetup.py
3) Edit mysetup.py to include the option
zipfile="library.zip"  # default generated value is None
(see also http://www.py2exe.org/index.cgi/ListOfOptions)
4) Build it:
path\to\env\Scripts\python.exe mysetup.py py2exe
5) Run it:
dist\myscript.exe
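For reference, after the edit in step 3) mysetup.py ends up looking something like this (a sketch -- the file py2exe actually generates will have more options in it):

# mysetup.py -- sketch of the py2exe setup script after editing
from distutils.core import setup
import py2exe  # registers the py2exe command with distutils

setup(
    console=['myscript.py'],
    zipfile='library.zip',  # default generated value is None
)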
Found it!
It seems that for some reason the 'loop://' argument isn't recognised after the .exe is produced.
By studying pyserial's __init__.py I figured out that issuing the command serial.serial_for_url('loop://') essentially calls:
sys.modules['serial.urlhandler.protocol_loop'].Serial('loop://')
So you have to first import serial.urlhandler.protocol_loop and then issue that call in place of the malfunctioning one. So you can now type:
__import__('serial.urlhandler.protocol_loop')
sys.modules['serial.urlhandler.protocol_loop'].Serial('loop://')
After this minor workaround it worked fine.
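For what it's worth, the plain import form from the "socket://" answer above achieves the same thing with less machinery:

import serial
import serial.urlhandler.protocol_loop  # force py2exe to bundle the handler

ser = serial.serial_for_url('loop://', timeout=1)

Once the handler module is bundled, serial_for_url's own dynamic import succeeds again.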

CherryPy3 and IIS 6.0

I have a small Python web application using the Cherrypy framework. I am by no means an expert in web servers.
I got Cherrypy working with Apache using mod_python on our Ubuntu server. This time, however, I have to use Windows 2003 and IIS 6.0 to host my site.
The site runs perfectly as a stand-alone server; I am just lost when it comes to getting IIS to run it. I have spent the past day Googling and blindly trying anything and everything to get this running.
I have all the various tools installed that websites have told me to (Python 2.6, CherryPy 3, ISAPI-WSGI, PyWin32) and have read all the documentation I can. This blog was the most helpful:
http://whatschrisdoing.com/blog/2008/07/10/turbogears-isapi-wsgi-iis/
But I am still lost as to what I need to run my site. I can't find any thorough examples or how-to's to even start with. I hope someone here can help!
Cheers.
I run CherryPy behind my IIS sites. There are several tricks to get it to work.
When running as the IIS Worker Process identity, you won't have the same permissions as you do when you run the site from your user process. Things will break. In particular, anything that wants to write to the file system will probably not work without some tweaking.
If you're using setuptools, you probably want to install your components with the -Z option (unzips all eggs).
Use win32traceutil to track down problems. Be sure that in your hook script you're importing win32traceutil. Then, when you're attempting to access the web site, anything that goes wrong and gets printed to standard out will be logged to the trace utility. Use 'python -m win32traceutil' to see the output from the trace.
It's important to understand the basic process to get an ISAPI application running. I suggest first getting a hello-world WSGI application running under ISAPI_WSGI. Here's an early version of a hook script I used to validate that I was getting CherryPy to work with my web server.
#!python
"""
Things to remember:
easy_install munges permissions on zip eggs.
anything that's installed in a user folder (i.e. setup develop) will probably not work.
There may still exist an issue with static files.
"""
import sys
import os
import isapi_wsgi

# change this to '/myapp' to have the site installed to only a virtual
# directory of the site.
site_root = '/'

if hasattr(sys, "isapidllhandle"):
    import win32traceutil

appdir = os.path.dirname(__file__)
egg_cache = os.path.join(appdir, 'egg-tmp')
if not os.path.exists(egg_cache):
    os.makedirs(egg_cache)
os.environ['PYTHON_EGG_CACHE'] = egg_cache
os.chdir(appdir)

import cherrypy
import traceback


class Root(object):
    @cherrypy.expose
    def index(self):
        return 'Hai Werld'


def setup_application():
    print "starting cherrypy application server"
    #app_root = os.path.dirname(__file__)
    #sys.path.append(app_root)
    app = cherrypy.tree.mount(Root(), site_root)
    print "successfully set up the application"
    return app


def __ExtensionFactory__():
    "The entry point for when the ISAPIDLL is triggered"
    try:
        # import the wsgi app creator
        app = setup_application()
        return isapi_wsgi.ISAPISimpleHandler(app)
    except:
        import traceback
        traceback.print_exc()
        f = open(os.path.join(appdir, 'critical error.txt'), 'w')
        traceback.print_exc(file=f)
        f.close()


def install_virtual_dir():
    import isapi.install
    params = isapi.install.ISAPIParameters()
    # Setup the virtual directories - this is a list of directories our
    # extension uses - in this case only 1.
    # Each extension has a "script map" - this is the mapping of ISAPI
    # extensions.
    sm = [
        isapi.install.ScriptMapParams(Extension="*", Flags=0)
    ]
    vd = isapi.install.VirtualDirParameters(
        Server="CherryPy Web Server",
        Name=site_root,
        Description="CherryPy Application",
        ScriptMaps=sm,
        ScriptMapUpdate="end",
    )
    params.VirtualDirs = [vd]
    isapi.install.HandleCommandLine(params)


if __name__ == '__main__':
    # If run from the command-line, install ourselves.
    install_virtual_dir()
This script does several things: it (a) acts as the installer, installing itself into IIS [install_virtual_dir]; (b) contains the entry point for when IIS loads the DLL [__ExtensionFactory__]; and (c) creates the CherryPy WSGI instance consumed by the ISAPI handler [setup_application].
If you place this in your \inetpub\cherrypy directory and run it, it will attempt to install itself to the root of your IIS web site named "CherryPy Web Server".
You're also welcome to take a look at my production web site code, which has refactored all of this into different modules.
OK, I got it working, thanks to Jason and all his help. I needed to call:
cherrypy.config.update({
    'tools.sessions.on': True,
})
return cherrypy.tree.mount(Root(), '/', config=path_to_config)
I had this in the config file under [/], but for some reason it did not like that. Now I can get my web app up and running; next I will try to work out why it needs that config.update call and doesn't like the config file I have...
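If I had to guess at the why (unverified): cherrypy.config.update() sets the global, server-wide config, while a config file passed to tree.mount() only configures that one application, so the same key can behave differently depending on where it lives. A sketch of the two scopes side by side:

import cherrypy

# Global (server-wide) config -- what cherrypy.config.update() touches.
cherrypy.config.update({'tools.sessions.on': True})

# Per-application config -- what the [/] section of a file handed to
# tree.mount() touches.
app = cherrypy.tree.mount(Root(), '/', config={
    '/': {'tools.sessions.on': True},
})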
