I need to use Fabric to do some operations on a website that uses one machine for the filesystem and another machine for the database server. I need to handle two hosts. How can I do that?
I have some code, but I cannot get the environment definitions to work.
The idea is to connect to the remote filesystem server to get the files, and then connect to the remote database server to get the database schema.
The code that I have for now is something like this:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm

'''
Here I define where my audit ("aid") file structure is
'''
local_root = '/home/andre/test'  # This is the root folder for the audits
code_location = '/remote_code'   # This is the root folder for the customer code inside each audit

#
# ENVIRONMENT CONFIGURATIONS
#

'''
Here I configure where the remote file server is
'''
def file_server():
    env.user = 'andre'
    env.hosts = ['localhost']

'''
Here I configure where the database server is
'''
def database_server():
    env.user = 'andre'
    env.hosts = ['192.168.5.1']

#
# START SCRIPT
#
def get_install(remote_location, aid):
    ### I will get the files
    '''
    Here I need to load the file_server() definitions
    '''
    working_folder = local_root + '/%s' % aid             # I will define the working folder
    local('mkdir ' + working_folder)                      # I will create the working folder for this audit
    local('mkdir ' + working_folder + code_location)      # I will create the folder to receive the code
    get(remote_location, working_folder + code_location)  # I will download the code to my machine

    ### I will get the database
    '''
    Here I need to load the database_server() definitions
    '''
    local('dir')  # Just to test
How can I load the file_server() and database_server() environment definitions from inside get_install()?
Best Regards,
I don't understand exactly what you are trying to do, but maybe you can split up your get_install function into two functions, one for each server.
Then limit those functions to the correct servers with the fabric.decorators.hosts(*host_list) decorator:
For example, the following will ensure that, barring an override on the command line, my_func will be run on host1, host2 and host3, and with specific users on host1 and host3:
@hosts('user1@host1', 'host2', 'user2@host3')
def my_func():
    pass
(For more info see http://readthedocs.org/docs/fabric/en/1.1.0/api/core/decorators.html#fabric.decorators.hosts)
And you can then call those two functions in one go by defining your get_install method as:
def get_install():
    func1()
    func2()
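For completeness, here is a minimal, untested sketch of how that could look for the original task, assuming Fabric 1.3+ so that execute() honours the @hosts decorator; the pg_dump call and the db_name argument are placeholders for however you actually export the schema:
from fabric.api import env, execute, get, hosts, local, run

env.user = 'andre'

@hosts('localhost')
def fetch_files(remote_location, destination):
    # Runs on the file server and downloads the remote code locally
    get(remote_location, destination)

@hosts('192.168.5.1')
def fetch_schema(destination, db_name='mydb'):
    # Runs on the database server; dump the schema and download it
    run('pg_dump --schema-only %s > /tmp/schema.sql' % db_name)
    get('/tmp/schema.sql', destination)

def get_install(remote_location, aid):
    working_folder = '/home/andre/test/%s' % aid
    local('mkdir -p %s/remote_code' % working_folder)
    execute(fetch_files, remote_location, working_folder + '/remote_code')
    execute(fetch_schema, working_folder)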
You should be able to do this with fab database_server get_install. Basically, fab [environment] [command] should do what you want.
I am using Fabric 2 and I don't see an exists method in it to check whether a folder path exists on the remote server. How can I achieve this in Fabric 2 (http://docs.fabfile.org/en/stable/)?
I have seen a similar question, Check If Path Exists Using Fabric, but that is for the Fabric 1.x version.
You can run the test command remotely with the -d option to check that the path exists and is a directory, passing warn=True to the run method so execution doesn't stop on a non-zero exit status. The failed attribute of the result will then be True if the folder doesn't exist and False otherwise.
from fabric import Connection

c = Connection('host')
folder = '/path/to/folder'
if c.run('test -d {}'.format(folder), warn=True).failed:
    # Folder doesn't exist
    c.run('mkdir {}'.format(folder))
The exists method from fabric.contrib.files was moved to patchwork.files with a small signature change, so you can use it like this:
from fabric2 import Connection
from patchwork.files import exists

conn = Connection('host')
if exists(conn, SOME_REMOTE_DIR):
    do_something()
The code below checks for the existence of a file (-f); just change it to -d to check for the existence of a directory.
from fabric import Connection

c = Connection(host="host")
if c.run('test -f /opt/mydata/myfile', warn=True).failed:
    do.thing()
You can find it in the Fabric 2 documentation below:
https://docs.fabfile.org/en/2.5/getting-started.html?highlight=failed#bringing-it-all-together
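If you need this check in several places, you could wrap the pattern above in a small helper; a minimal sketch, assuming a Fabric 2 Connection object and the hypothetical helper names remote_dir_exists and ensure_remote_dir:
from fabric import Connection

def remote_dir_exists(c, path):
    # True if the remote path exists and is a directory
    return not c.run('test -d {}'.format(path), warn=True).failed

def ensure_remote_dir(c, path):
    # Create the directory (and any parents) only if it is missing
    if not remote_dir_exists(c, path):
        c.run('mkdir -p {}'.format(path))

c = Connection('host')
ensure_remote_dir(c, '/path/to/folder')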
That's not so difficult; you can use traditional Python code to check whether a path already exists.
from pathlib import Path
from fabric import Connection as connection, task
import os

@task
def deploy(ctx):
    parent_deploy_dir = '/var/www'
    deploy_dir = '/var/www/my_folder'
    host = 'REMOTE_HOST'
    user = 'USER'
    with connection(host=host, user=user) as c:
        with c.cd(parent_deploy_dir):
            if not os.path.isdir(Path(deploy_dir)):
                c.run('mkdir -p ' + deploy_dir)
I have a class that I instantiate in a request (it's an ML model that loads and takes a bit of time to configure on startup). The idea is to only do that once and have each request use the model for predictions. Does gunicorn instantiate the app every time?
Aka, will the model retrain every time a new request comes in?
It sounds like you could benefit from the application preloading:
http://docs.gunicorn.org/en/stable/settings.html#preload-app
This will let you load app code before spinning off your workers.
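A minimal sketch of a config file that enables it (the file name, worker count and bind address are just examples):
# gunicorn.conf.py
preload_app = True      # import the app (and load the model) once, before forking workers
workers = 4             # the forked workers then reuse the already-loaded application
bind = '0.0.0.0:8000'
You would then start the server with something like gunicorn -c gunicorn.conf.py app:app.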
For those who are looking for how to share a variable between gunicorn workers without using Redis or sessions, here is a good alternative with the awesome python-dotenv:
The principle is to read and write shared variables from a file; that could be done with plain open(), but dotenv is perfect in this situation.
pip install python-dotenv
In app directory, create .env file:
├── .env
└── app.py
.env:
var1="value1"
var2="value2"
app.py: # flask app
from flask import Flask
import os
from dotenv import load_dotenv

app = Flask(__name__)

# define the path explicitly if the .env file is not in the same folder
env_path = os.path.dirname(os.path.realpath(__file__)) + '/.env'

def getdotenv(env):
    try:
        load_dotenv(dotenv_path=env_path, override=True)  # re-read the file on every call
        val = os.getenv(env)
        return val
    except Exception:
        return None

def setdotenv(key, value):  # strings
    global env_path
    if key:
        if not value:
            value = '\'\''
        cmd = 'dotenv -f ' + env_path + ' set -- ' + key + ' ' + value  # set env variable via the dotenv CLI
        os.system(cmd)

@app.route('/get')
def index():
    var1 = getdotenv('var1')  # retrieve the value of variable var1
    return var1

@app.route('/set')
def update():
    setdotenv('var2', 'newValue2')  # set variable var2='newValue2'
    return getdotenv('var2')
Ansible-playbook has a --list-hosts CLI switch that just outputs the hosts affected by each play in a playbook. I am looking for a way to access the same information through the Python API.
The (very) basic script I am using to test right now is
#!/usr/bin/python
import ansible.runner
import ansible.playbook
import ansible.inventory
from ansible import callbacks
from ansible import utils
import json
# hosts list
hosts = ["127.0.0.1"]
# set up the inventory, if no group is defined then 'all' group is used by default
example_inventory = ansible.inventory.Inventory(hosts)
pm = ansible.runner.Runner(
    module_name = 'command',
    module_args = 'uname -a',
    timeout = 5,
    inventory = example_inventory,
    subset = 'all'  # name of the hosts group
)
out = pm.run()
print json.dumps(out, sort_keys=True, indent=4, separators=(',', ': '))
I just can't figure out what to add to ansible.runner.Runner() to make it output affected hosts and exit.
I'm not sure what you are trying to achieve, but ansible.runner.Runner is actually a single task, not a playbook.
Your script is more of an ansible CLI equivalent than ansible-playbook.
And ansible doesn't have any kind of --list-hosts, while ansible-playbook does.
You can see how listhosts is done here.
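If all you want is the resolved host list itself, a minimal sketch (assuming the same pre-2.0 Python API that the question uses, where Inventory exposes a list_hosts() method) would be to query the inventory directly rather than the Runner:
import ansible.inventory

hosts = ["127.0.0.1"]
inventory = ansible.inventory.Inventory(hosts)

# list_hosts() returns the host names matching a pattern ('all' by default)
print inventory.list_hosts('all')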
Original Question
I've got some python scripts which have been using Amazon S3 to upload screenshots taken following Selenium tests within the script.
Now we're moving from S3 to use GitHub so I've found GitPython but can't see how you use it to actually commit to the local repo and push to the server.
My script builds a directory structure similar to \images\228M\View_Use_Case\1.png in the workspace, and when uploading to S3 it was a simple process:
for root, dirs, files in os.walk(imagesPath):
    for name in files:
        filename = os.path.join(root, name)
        k = bucket.new_key('{0}/{1}/{2}'.format(revisionNumber, images_process, name))  # returns a new key object
        k.set_contents_from_filename(filename, policy='public-read')  # opens local file buffers to key on S3
        k.set_metadata('Content-Type', 'image/png')
Is there something similar for this or is there something as simple as a bash type git add images command in GitPython that I've completely missed?
Updated with Fabric
So I've installed Fabric on kracekumar's recommendation but I can't find docs on how to define the (GitHub) hosts.
My script is pretty simple, just trying to get the upload to work:
from __future__ import with_statement
from fabric.api import *
from fabric.contrib.console import confirm
import os

def git_server():
    env.hosts = ['github.com']
    env.user = 'git'
    env.password = 'password'

def test():
    process = 'View Employee'
    os.chdir('\Work\BPTRTI\main\employer_toolkit')
    with cd('\Work\BPTRTI\main\employer_toolkit'):
        result = local('ant viewEmployee_git')
        if result.failed and not confirm("Tests failed. Continue anyway?"):
            abort("Aborting at user request.")

def deploy():
    process = "View Employee"
    os.chdir('\Documents and Settings\markw\GitTest')
    with cd('\Documents and Settings\markw\GitTest'):
        local('git add images')
        local('git commit -m "Latest Selenium screenshots for %s"' % (process))
        local('git push -u origin master')

def viewEmployee():
    #test()
    deploy()
It Works \o/ Hurrah.
You should look into Fabric: http://docs.fabfile.org/en/1.4.1/index.html. It is an automated server deployment tool. I have been using it for quite some time and it works pretty well.
Here is one of my applications which uses it: https://github.com/kracekumar/sachintweets/blob/master/fabfile.py
It looks like you can do this:
index = repo.index
index.add(['images'])
new_commit = index.commit("my commit message")
and then, assuming you have origin as the default remote:
origin = repo.remotes.origin
origin.push()
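Putting that together with the original os.walk loop, a rough sketch (the repository path is an assumption, and it presumes a remote named origin) could look like this:
import os
from git import Repo

repo = Repo('/path/to/local/clone')       # assumed local working copy
images_root = os.path.join(repo.working_tree_dir, 'images')

# collect every screenshot under images/ as a path relative to the repo root
to_add = []
for root, dirs, files in os.walk(images_root):
    for name in files:
        full_path = os.path.join(root, name)
        to_add.append(os.path.relpath(full_path, repo.working_tree_dir))

repo.index.add(to_add)
repo.index.commit('Latest Selenium screenshots')
repo.remotes.origin.push()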
I have a Django application which needs to call psql. I do this in a celery task which looks like this:
@task()
def insert_sqldump_threaded(username, database, file):
    host = database.server.db_address
    work = subprocess.Popen([settings.PSQL,
                             "-f%s" % file,
                             "-d%s" % database.db_name,
                             "-h%s" % host,
                             "-U%s" % settings.DB_ADMIN_USER
                             ], env={'PGPASSFILE': settings.DB_PASSFILE}
                            )
    work.wait()
    return work.returncode
On my development server the PGPASSFILE looks like this:
localhost:5432:*:postgres:postgres
which should be fine.
The problem is that all I get when this function gets called is an error from psql:
psql: could not translate host name "localhost" to address: Unknown server error
And now it gets really strange: when I don't pass the env argument, psql does recognize the host. At least it then asks for a password.
Any ideas on how to solve this?
I think postgresql needs other environment variables that you clear when you pass env. You can simply change os.environ or make a copy of it beforehand as in the following code:
import os

@task()
def insert_sqldump_threaded(username, database, file):
    d = dict(os.environ)
    d['PGPASSFILE'] = settings.DB_PASSFILE
    host = database.server.db_address
    work = subprocess.Popen([settings.PSQL,
                             "-f%s" % file,
                             "-d%s" % database.db_name,
                             "-h%s" % host,
                             "-U%s" % settings.DB_ADMIN_USER
                             ], env=d
                            )
    work.wait()
    return work.returncode
When you don't pass env, the subprocess picks up the environment variables from the shell you're running in - see os.environ. psql must depend on one of those to look up localhost. You'll need to include that variable in the dictionary, or just copy in everything from os.environ.
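Alternatively, a sketch of the in-place variant mentioned above, which mutates os.environ before Popen and omits env= entirely so the child inherits everything else (it reuses the names from the task above):
import os
import subprocess

os.environ['PGPASSFILE'] = settings.DB_PASSFILE   # only add the one variable we need
work = subprocess.Popen([settings.PSQL,
                         "-f%s" % file,
                         "-d%s" % database.db_name,
                         "-h%s" % host,
                         "-U%s" % settings.DB_ADMIN_USER
                         ])  # no env= argument, so the full environment is inherited
work.wait()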