Python's Fabric lets you invoke Fabric tasks outside of the fab utility via the execute function. A problem arises when execute is invoked inside a function that was itself called through execute: Fabric loses the context of the outer execute when the inner one runs and never recovers it. For example:
env.roledefs = {
    'webservers': ['web1', 'web2'],
    'load_balancer': ['lb1']
}

@roles('webserver')
def deploy_code():
    # ship over tar.gz of code to unpack
    ...
    execute(remove_webserver_from_load_balancer, sHost=env.host_string)
    ...
    # shut down the web server, unpack files, and restart the web server
    ...
    execute(add_webserver_to_load_balancer, sHost=env.host_string)

@roles('load_balancer')
def remove_webserver_from_load_balancer(sHost=None):
    ssh("remove_host %s" % sHost)

execute(deploy_code)
After the first call to execute, Fabric completely loses its context and runs all further commands within the deploy_code function with host_string='lb1' instead of 'web1'. How can I get Fabric to remember the original host?
I came up with this hack, but I feel like it could break in future releases:
with settings(**env):
    execute(remove_webserver_from_load_balancer, sHost=env.host_string)
This effectively saves all state and restores it after the call, but seems like an unintended use of the function. Is there a better way to tell Fabric that it's in a nested execute and to use a settings stack or an equivalent method to remember state?
Thanks!
You're not using Fabric as intended: you'd normally just call fab deploy_code instead of running the fabfile as if it were a plain Python script. I'd suggest going through the tutorial for a better idea of how to structure your fabfile.
Anyhow, you can look here for how to use execute(), and here for more of the specifics.
You have a typo in that you've dropped the 's' from the webservers role, which might account for you not having a good host string when you want it on the second task.
But that aside, you can also set roles and hosts in the execute() command itself.
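For example, something along these lines (a sketch against Fabric 1.x, reusing the role names from the question) keeps the inner call pinned to the load balancer without relying on the outer context:

from fabric.api import env, execute, roles, run

@roles('webservers')
def deploy_code():
    # Pass roles= (or hosts=) explicitly so the nested execute targets
    # the load balancer regardless of the surrounding host context.
    execute(remove_webserver_from_load_balancer, env.host_string,
            roles=['load_balancer'])

def remove_webserver_from_load_balancer(sHost):
    run("remove_host %s" % sHost)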
I found myself having to implement the following use case: I need to run a webapp in which users can submit C programs, which need to be run safely on my backend.
I'm trying to get this done using Node. In the past, I had to do something similar but the user-submitted code was JavaScript code, and I got away with using Node vm2 module. Essentially, I would create a VM and call its run method with the user submitted code as a string argument, then collect the output and do whatever I had to.
I'm trying to understand whether the same module could help me with C code as well. The idea would be to use exec to first call gcc and compile the user code; afterwards, I would use a VM to run exec again, this time passing the generated executable. Would this be safe?
I don't understand vm2 deeply enough to know whether the safety is only limited to executing JS code or if it can be trusted to also run any arbitrary shell command safely.
In case vm2 isn't appropriate, what would be another way to run an executable in a sandboxed fashion in Node? Feel free to also suggest Python-based solutions, if you know any. Please note that the code will still be executed in a separate container from the main app regardless, but I want to make extra sure users cannot easily tear it down at will.
Thank you in advance.
I am currently facing the same challenge as you, trying to safely execute some untrusted code using spawn. What I can tell you is that vm2 only works for JS/TS code; it can't control what happens in a new process created by spawn, fork, or exec.
For now I haven't found any good solution, but I'm thinking of trying to run the process as a user with limited rights.
As you seem to have access to the C source code, I would advise you to search how to run untrusted C programs (in plain C), and see if you can manipulate the C code in order to have a safer environment from this point of view.
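Since you said Python-based suggestions are welcome: one way to approximate the "limited rights" idea is to run the compiled binary as an unprivileged user with resource limits. A rough sketch only, not a complete sandbox; it assumes Python 3.9+ for the user= argument and that the parent process is allowed to switch to the nobody account:

import resource
import subprocess

def set_limits():
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))             # 2 seconds of CPU time
    resource.setrlimit(resource.RLIMIT_AS, (256 * 2**20,) * 2)  # 256 MiB of address space

result = subprocess.run(
    ["./user_program"],        # the binary produced by gcc
    user="nobody",             # drop privileges (Python 3.9+)
    preexec_fn=set_limits,     # apply the rlimits in the child before exec
    capture_output=True,
    timeout=5,                 # wall-clock limit on top of the CPU limit
)
print(result.returncode, result.stdout)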
This is my fabric code:
from fabric import Connection, task

server = Connection(host="username@server.com:22", connect_kwargs={"password": "mypassword"})

@task
def dostuff(somethingmustbehere):
    server.run("uname -a")
This code works just fine. When I execute fab dostuff it does what I want it to do.
When I remove somethingmustbehere however I get this error message:
raise TypeError("Tasks must have an initial Context argument!")
TypeError: Tasks must have an initial Context argument!
I never defined somethingmustbehere anywhere in my code. I just put it in, the error went away, and everything works. But why? What is this variable? Why do I need it? Why is it so important? And if it is so important, why can it just be empty? I am really lost here. Yes, it works, but I cannot run code that I don't understand. It drives me insane. :-)
Please be aware that I'm talking about the Python 3(!) version of Fabric!
The Fabric version is 2.4.0
To be able to run a @task you need a context argument. Fabric uses Invoke's task(), which expects to see a context object. Normally the variable is named c or ctx (I always use ctx to make it clearer; I don't use c because I normally use it for a connection).
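For example, the task from the question rewritten so the context is accepted explicitly (a minimal sketch for Fabric 2.x, where the host is supplied on the command line instead of via a module-level Connection):

from fabric import task

@task
def dostuff(c):
    # Fabric/Invoke passes the context (a Connection when run with -H)
    # as the first positional argument; the task never creates it itself.
    c.run("uname -a")

# invoked as:  fab -H username@server.com dostuff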
Check this line on GitHub from the invoke package repo: you will see that it raises an exception when the context argument is not present, but it doesn't explain why!
To learn more about the Context object, what it is and why we need it, you can read the following on the pyinvoke site:
Aside: what exactly is this ‘context’ arg anyway? A common problem task runners face is transmission of “global” data - values loaded from configuration files or other configuration vectors, given via CLI flags, generated in ‘setup’ tasks, etc.

Some libraries (such as Fabric 1.x) implement this via module-level attributes, which makes testing difficult and error prone, limits concurrency, and increases implementation complexity.

Invoke encapsulates state in explicit Context objects, handed to tasks when they execute. The context is the primary API endpoint, offering methods which honor the current state (such as Context.run) as well as access to that state itself.
Check both of these links:
Context
what exactly is this ‘context’ arg anyway?
To be honest, I wasted a lot of time figuring out what the context is and why my code wouldn't run without it. But at some point I just gave up and started using it so that my code runs without errors.
Assume I have Python code like
def my_great_func(an_arg):
    a_file = open("/user/or/root/file", "w")
    a_file.write("bla")
which I want to maintain without paying attention to whether it is invoked with or without privileges. At the same time, I don't want to invoke the script with sudo or enforce invocation with sudo (although this would be a legitimate practice), or enable setuid for my Python interpreter (generally a bad idea...). One idea is to start a second instance of the Python interpreter and communicate with it over processes/pipes. In order to maximize the maintainability of the code, it would be nice to simply pass the callable to that instance (e.g. started with subprocess.Popen and addressed by its PID), like I would pass it to multiprocessing.Process (which I can't use because I can't setuid in the subprocess). I imagine something like
# please consider this pseudo python code
pid = subprocess.Popen(["sudo", "python"]).get_pid()
thelib.pass_callable(pid, target, args)
or even
interpreter_instance = greatlib.Python(target, args)
interpreter_instance.start()
interpreter_instance.wait()
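To make the idea more concrete, something like the following is roughly what I am picturing (a rough sketch only; run_as_root is a made-up helper, and it assumes passwordless sudo and that the callable lives in a module the child interpreter can import, since pickle serializes functions by reference):

import pickle
import subprocess

def run_as_root(target, *args):
    # Serialize the callable (by reference) plus its arguments and pipe them
    # into a privileged child interpreter, which unpickles and calls it.
    payload = pickle.dumps((target, args))
    child = subprocess.Popen(
        ["sudo", "python3", "-c",
         "import sys, pickle; fn, args = pickle.load(sys.stdin.buffer); fn(*args)"],
        stdin=subprocess.PIPE,
    )
    child.communicate(payload)
    return child.returncode

# run_as_root(my_great_func, "an arg")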
Is that possible and covered by existing libs?
Generally speaking, you don't want any script to run as Super User unless the script invoking it was called with Super User. This is not only an issue of good practice and secure programming, but also programmer etiquette. If any part of your program requires use of Super User, this intention should be made known before you even begin the program.
With that in mind, the Python thread library should work just fine for this.
How can I access a running Python script's variable, or call a function in it to set that variable? I want to access it from the command line or from another Python script; it doesn't matter which.
For example,
I have one script, run_motor.py, running with a variable called mustRun. When the user pushes the stop button, it should access the variable mustRun and change it to False.
If you want to interact with a running python script and modify some variables in it (I don't know why you want to do that, but... meh) you can have a look at Pyrasite.
Here is a demo of Pyrasite on asciinema
This is damn impressive.
By the way, just so you know, that's NOT best practice for what you want to do. I assume this is for testing purposes, because using that kind of script in production or anything like that wouldn't be safe at all...
Easiest way of accomplishing this is to run a small TCP server in a thread and have it change the variable you want to change when it receives a command to do so. Then write a python script that sends the stop command to that TCP server.
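A minimal sketch of that idea (the port number and the "stop" command are just placeholders):

import socket
import threading

must_run = True  # the flag the motor loop checks

def control_server(host="127.0.0.1", port=9999):
    # Tiny TCP listener: the first client that sends "stop" flips the flag.
    global must_run
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    while must_run:
        conn, _ = srv.accept()
        with conn:
            if conn.recv(64).strip() == b"stop":
                must_run = False

threading.Thread(target=control_server, daemon=True).start()

# ... the main motor loop elsewhere keeps checking:  while must_run: ...

# The stop button (another script or a one-liner) just connects and sends "stop":
#   python -c "import socket; s = socket.create_connection(('127.0.0.1', 9999)); s.sendall(b'stop')"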
I'm working on a grid system which has a number of very powerful computers. These can be used to execute python functions very quickly. My users have a number of python functions which take a long time to calculate on workstations, ideally they would like to be able to call some functions on a remote powerful server, but have it appear to be running locally.
Python has an old function called "apply" - it's mostly useless these days now that python supports the extended-call syntax (e.g. **arguments), however I need to implement something that works a bit like this:
rapply = Rapply(server_hostname)       # Set up a connection
result = rapply(fn, args, kwargs)      # Remotely call the function
assert result == fn(*args, **kwargs)   # Just as a test, verify that it has the expected value
Rapply should be a class which can be used to remotely execute some arbitrary code (fn could be literally anything) on a remote server. It will send back the result which the rapply function will return. The "result" should have the same value as if I had called the function locally.
Now let's suppose that fn is a user-provided function; I need some way of sending it over the wire to the execution server. If I could guarantee that fn was always something simple, it could just be a string containing Python source code... but what if it were not so simple?
What if fn has local dependencies? It could be a simple function which uses a class defined in a different module. Is there a way of encapsulating fn and everything fn requires that is not in the standard library? An ideal solution would not require the users of this system to have much knowledge of Python development. They simply want to write their function and call it.
Just to clarify, I'm not interested in discussing what kind of network protocol might be used to implement the communication between the client & server. My problem is how to encapsulate a function and its dependencies as a single object which can be serialized and remotely executed.
I'm also not interested in the security implications of running arbitrary code on remote servers - let's just say that this system is intended purely for research and it is within a heavily firewalled environment.
Take a look at Pyro (Python Remote Objects). It can set up services on all the computers in your cluster and invoke them directly, or indirectly through a name server and a publish-subscribe mechanism.
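A minimal sketch of what the direct-invocation route looks like with the Pyro4 API (the Worker class and its crunch method are placeholders; shipping arbitrary user functions to the server is a separate problem you would still need to solve):

# server.py - runs on the powerful machine
import Pyro4

@Pyro4.expose
class Worker:
    def crunch(self, numbers):
        # stand-in for one of the users' expensive functions
        return sum(n * n for n in numbers)

daemon = Pyro4.Daemon(host="0.0.0.0")
uri = daemon.register(Worker)
print("Worker available at", uri)
daemon.requestLoop()

# client.py - runs on the workstation
#   import Pyro4
#   worker = Pyro4.Proxy("PYRO:...")   # the URI printed by the server
#   print(worker.crunch(list(range(10))))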
It sounds like you want to do the following.
1. Define a shared filesystem space.
2. Put ALL your Python source in this shared filesystem space.
3. Define simple agents or servers that will "execfile" a block of code.
4. Your client then contacts the agent (a REST protocol with POST methods works well for this) with the block of code.
5. The agent saves the block of code and does an execfile on that block of code.

Since all agents share a common filesystem, they all have the same Python library structure.

We do this with a simple WSGI application we call the "batch server". We have a RESTful protocol for creating and checking on remote requests.
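A stripped-down sketch of such an agent, standard library only (exec stands in for Python 2's execfile here, and the convention that the POSTed code leaves its answer in a variable named result is purely for this example):

# agent.py - a minimal "run this block of code" agent
from http.server import BaseHTTPRequestHandler, HTTPServer

class CodeRunner(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers["Content-Length"])
        source = self.rfile.read(length).decode("utf-8")
        namespace = {}
        exec(source, namespace)  # trusted environment, as stated in the question
        self.send_response(200)
        self.end_headers()
        self.wfile.write(repr(namespace.get("result")).encode("utf-8"))

HTTPServer(("0.0.0.0", 8000), CodeRunner).serve_forever()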
Stackless Python had the ability to pickle and unpickle running code, but unfortunately the current implementation doesn't support this feature.
You could use a ready-made clustering solution like Parallel Python. You can relatively easily set up multiple remote slaves and run arbitrary code on them.
You could use an SSH connection to the remote PC and run the commands on the other machine directly. You could even copy the Python code to the machine and execute it there.
Syntax:

cat ./test.py | sshpass -p 'password' ssh user@remote-ip "python - script-arguments-if-any"

1) Here test.py is the local Python script; python - tells the remote interpreter to read the script from stdin, and any arguments for test.py follow it.
2) sshpass is used to pass the SSH password to the ssh connection.