Fabric: check whether a user has permissions on a file/folder - Python

Is there an equivalent in Fabric to os.access('/path/to/folder', os.W_OK) to check whether a remote folder has the correct permissions for a given remote user/group?
Currently I could try entering the folder with cd and catching the exception, but I don't like that approach...

You cannot use the os lib for obvious reasons (it runs locally, not on the remote host); but you can use the shell's test command. I made a quick two-minute example of how to use it.
from fabric.api import env, task, run, sudo as _sudo, settings, hide

env.user = 'vagrant'
env.key_filename = '~/.vagrant/machines/default/virtualbox/private_key'
env.host_string = '127.0.0.1'
env.port = '2222'

def is_file_writable(filepath, sudo=False):
    # use sudo() to test as root, run() to test as the connecting user
    fn = run if not sudo else _sudo
    with settings(warn_only=True), hide('everything'):
        response = fn('test -w ' + filepath)
    return response.return_code == 0

@task
def sometask():
    print is_file_writable('/etc/sudoers')
    print is_file_writable('/etc/sudoers', sudo=True)
Output:

$ fab sometask
False
True

Done.
Disconnecting from 127.0.0.1:2222... done.
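If you need more than writability, the same trick extends to the other test(1) flags (-r readable, -x executable, -e exists), mirroring os.R_OK, os.X_OK and os.F_OK. A minimal sketch along those lines; remote_access is a hypothetical helper name, not part of Fabric:

from fabric.api import run, sudo as _sudo, settings, hide

def remote_access(filepath, flag='w', use_sudo=False):
    # flag is one of 'r', 'w', 'x', 'e' -- see test(1)
    fn = _sudo if use_sudo else run
    with settings(warn_only=True), hide('everything'):
        return fn('test -%s %s' % (flag, filepath)).return_code == 0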

Related

sudo/suid non-root nesting fails

I have a python script (which must be called as root), which calls a bash script (which must be called as non-root), which sometimes needs to call sudo. This does not work - the "leaf" sudo calls give the message "$user is not in the sudoers file. This incident will be reported." How can I make this work?
The code (insert your non-root username in place of "your_username_here"):
tezt.py:
#!/usr/bin/python3
import os
import pwd
import subprocess

def run_subshell_as_user(cmd_args, user_name=None, **kwargs):
    cwd = os.getcwd()
    user_obj = pwd.getpwnam(user_name)
    # Set up the child process environment
    new_env = os.environ.copy()
    new_env["PWD"] = cwd
    if user_name is not None:
        new_env["HOME"] = user_obj.pw_dir
        new_env["LOGNAME"] = user_name
        new_env["USER"] = user_name
    # This function is run after the fork and before the exec in the child
    def suid_func():
        os.setgid(user_obj.pw_gid)
        os.setuid(user_obj.pw_uid)
    return subprocess.Popen(
        cmd_args,
        preexec_fn=suid_func,
        cwd=cwd,
        env=new_env,
        **kwargs).wait() == 0

run_subshell_as_user(["./tezt"], "your_username_here")  # <-- HERE
tezt:
#!/bin/bash
sudo ls -la /root
Then run it as:
sudo ./tezt.py
Does anyone know why this doesn't work? The user can run sudo under normal circumstances. Why does "user -sudo-> root -suid-> user" work fine, but sudo fails from there?
I'd suggest using sudo itself to drop privileges rather than doing so yourself -- where possible that's a bit more thorough, modifying the effective IDs as opposed to only the real uid and gid. (To modify the full set yourself, you might try changing setuid() to setreuid(), and likewise setgid() to setregid().)
...this would mean passing something akin to the following to Popen:

["sudo", "-u", "your_username_here", "--"] + cmd_args

How to run the same task in parallel on multiple hosts with different parameters using fabric?

demo.py

from fabric.api import env, run, execute

env.hosts = ['10.1.1.100', '10.1.1.200']
env.remotePath = {'10.1.1.100': '/home', '10.1.1.200': '/var'}
env.parallel = True

def mytask(remotePath):
    run('ls %s' % remotePath)

def test():
    execute(mytask, env.remotePath[env.host])

fab -f demo.py test
I want to execute ls /home on 10.1.1.100 and ls /var on 10.1.1.200 in parallel using the @parallel decorator. Is there any way to make this possible?
Use env.host_string to get the current host, then look up the command/argument you want to use.

from fabric.api import env, parallel, run

@parallel
def mytask():
    host_ip = env.host_string
    remote_path = env.remotePath[host_ip]
    run('ls %s' % remote_path)
According to Fabric's API doc on host_string:

Defines the current user/host/port which Fabric will connect to when executing run, put and so forth. This is set by fab when iterating over a previously set host list, and may also be manually set when using Fabric as a library.
Hope this helps :)
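Putting the answer together with the question's setup, a minimal self-contained demo.py might look like this (an untested sketch; note that env.host holds just the hostname part of env.host_string, so the dictionary lookup keeps working even if the host strings later grow user@ or :port parts):

from fabric.api import env, parallel, run

env.hosts = ['10.1.1.100', '10.1.1.200']
env.remotePath = {'10.1.1.100': '/home', '10.1.1.200': '/var'}

@parallel
def mytask():
    # env.host is the bare hostname part of the current host string
    run('ls %s' % env.remotePath[env.host])

Run it with fab -f demo.py mytask and the two ls commands execute in parallel.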

Fabric executing command on remote server doesn't work

I want to use Fabric in Python to execute commands on a remote server. I wrote this:
from fabric.api import *
from fabric.tasks import execute

def do_some_thing():
    run("ls -lh")

if __name__ == '__main__':
    execute(do_some_thing, hosts=['root@10.18.103.102'])
But it doesn't work; it just leaves me at the remote login prompt. This is the output:

➜  ~ python test.py
[root@10.18.103.102] Executing task 'do_some_thing'
[root@10.18.103.102] run: ls -lh
[root@10.18.103.102] out: root@svn:~#
[root@10.18.103.102] out: root@svn:~#
Make use of the env variable -

from fabric.api import *
from fabric.contrib.files import *

def myserver():
    env.hosts = ['10.18.103.102']
    env.user = 'root'
    # if you have key based authentication, uncomment and point to private key
    # env.key_filename = '~/.ssh/id_rsa'
    # if you have password based authentication
    env.password = 'ThEpAsAwOrd'

def ls():
    run('ls -al')
Now save these in a file called fabfile.py and execute (in the same directory) -

$ fab myserver ls

Fabric will execute both functions one after the other, so when it executes ls() it already has the server details in env.
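If you'd rather keep the script-style invocation from the question, the same details can be set on env before calling execute -- a minimal sketch, assuming you fill in either key or password auth:

from fabric.api import env, run
from fabric.tasks import execute

def do_some_thing():
    run("ls -lh")

if __name__ == '__main__':
    env.user = 'root'
    # either key based auth...
    # env.key_filename = '~/.ssh/id_rsa'
    # ...or password based auth
    # env.password = 'ThEpAsAwOrd'
    execute(do_some_thing, hosts=['10.18.103.102'])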

How to prevent fabric from waiting for the process to return

I always use Fabric to deploy my processes from my local PC to remote servers.
Say I have a Python script like this:
test.py:
import time

while True:
    print "Hello world."
    time.sleep(1)
Obviously, this script runs forever. I deploy it to the remote server and run it from my fabric script like this:

...
sudo("python test.py")
Fabric will always wait for test.py to return, so it never exits. How can I make the fabric script finish immediately and ignore the output of test.py?
Usually Celery is preferred for this kind of asynchronous task processing. This explains in detail the use of Celery and Fabric together.
from fabric.api import hosts, env, execute, run
from celery import task

env.skip_bad_hosts = True
env.warn_only = True

@task()
def my_celery_task(testhost):
    host_string = "%s@%s" % (testhost.SSH_user_name, testhost.IP)

    @hosts(host_string)
    def my_fab_task():
        env.password = testhost.SSH_password
        run("ls")

    try:
        result = execute(my_fab_task)
        if isinstance(result.get(host_string, None), BaseException):
            raise result.get(host_string)
    except Exception as e:
        print "my_celery_task -- %s" % e.message
sudo("python test.py 2>/dev/null >/dev/null &")
or redirect the output to some other file instead of /dev/null
This code worked for me:

fabricObj.execute("(nohup python your_file.py > /dev/null < /dev/null &)&")

where fabricObj is an instance of an internally defined class that wraps the Fabric calls.
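In plain Fabric the usual recipe is the same idea without the wrapper: nohup plus full stream redirection to detach the process, and pty=False so closing the SSH session doesn't kill it. A minimal sketch (the log path is arbitrary):

from fabric.api import run

def start_longrunning():
    # nohup + redirecting stdin/stdout/stderr detaches the process;
    # pty=False keeps the remote shell from tying it to this SSH session
    run("nohup python test.py > /tmp/test.log 2>&1 < /dev/null &", pty=False)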

Passing a Fabric env.hosts string as a variable does not work in a function

Passing a Fabric env.hosts string as a variable does not work in a function.
demo.py
#!/usr/bin/env python
from fabric.api import env, run

def deploy(hosts, command):
    print hosts
    env.hosts = hosts
    run(command)
main.py
#!/usr/bin/env python
from demo import deploy

hosts = ['localhost']
command = 'hostname'
deploy(hosts, command)
python main.py
['localhost']
No hosts found. Please specify (single) host string for connection:
But env.host_string works!
demo.py
#!/usr/bin/env python
from fabric.api import env, run

def deploy(host, command):
    print host
    env.host_string = host
    run(command)
main.py
#!/usr/bin/env python
from demo import deploy

host = 'localhost'
command = 'hostname'
deploy(host, command)
python main.py
localhost
[localhost] run: hostname
[localhost] out: heydevops-workspace
But env.host_string is not enough for us; it's a single host. Maybe we could use env.host_string in a loop, but that's not good, because we also want to set the number of concurrent tasks and run them in parallel.
Right now in ddep (my deployment engine), I just use MySQLdb to get the parameters and then execute the fab command like:

os.system("fab -f service/%s.py -H %s -P -z %s %s" % (project, host, number, task))

This is a simple way but not a good one, because if I shell out to the fab command I can't catch the exceptions and failures of the results in Python, so ddep can't "retry" the failed hosts. If I use "from demo import deploy", I can control and capture them from Python code.
So now env.hosts is the trouble. Can somebody give me a solution? Thanks a lot.
Here's my insight. According to the docs, if you're calling Fabric tasks from Python scripts, you should use fabric.tasks.execute. It should be something like this:
demo.py
from fabric.api import run
from fabric.tasks import execute

def deploy(hosts, command):
    execute(execute_deploy, command=command, hosts=hosts)

def execute_deploy(command):
    run(command)
main.py
from demo import deploy

hosts = ['localhost']
command = 'hostname'
deploy(hosts, command)
Then, just run python main.py. Hope that helps.
Finally, I fixed this problem by using execute() and exec.
main.py
#!/usr/bin/env python
from demo import FabricSupport

hosts = ['localhost']
myfab = FabricSupport()
myfab.execute("df", hosts)
demo.py
#!/usr/bin/env python
from fabric.api import env, run, execute

class FabricSupport:
    def __init__(self):
        pass

    def hostname(self):
        run("hostname")

    def df(self):
        run("df -h")

    def execute(self, task, hosts):
        get_task = "task = self.%s" % task
        exec get_task
        execute(task, hosts=hosts)
python main.py
[localhost] Executing task 'hostname'
[localhost] run: hostname
[localhost] out: heydevops-workspace
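A side note on that execute() method: the exec statement works under Python 2, but getattr does the same name-to-method lookup without it -- a minimal variant of the class:

from fabric.api import run, execute

class FabricSupport:
    def df(self):
        run("df -h")

    def execute(self, task, hosts):
        # getattr looks the bound method up by name; no exec needed
        execute(getattr(self, task), hosts=hosts)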
