I am writing a fabfile in which I want to set an environment variable for the virtualenv that I'm using for development. I don't want to have to put
with shell_env(venv=VIRTUALENV):
    ...
in every single function. I'd like to be able to add this at the beginning of the file and have it apply to all tasks globally. Is there a way to do this?
For example, I have the tasks
def setup_dev_env():
    with shell_env(venv=VIRTUALENV):
        local('virtualenv $venv')
        with prefix('workon $venv'):
            local('pip install -r requirements.txt')
    test()

def test():
    with shell_env(venv=VIRTUALENV):
        with prefix('workon $venv'):
            local('python3 manage.py test')
and it would be nice if I didn't have to repeat exactly the same line at the beginning of both functions.
I'm not sure if this is considered a hack of command_prefixes, but it works:
from fabric.api import env, run

env.command_prefixes = ["export myvar='Hello world'"]

def echo_env():
    run(r"echo $myvar")
Output:
C:\Users\swozn\PycharmProjects\aaetuea>fab echo_env
[swozn@localhost] Executing task 'echo_env'
[swozn@localhost] run: echo $myvar
[swozn@localhost] out: Hello world
The advantage is that it is impossible to forget, because it is automatically prepended to all your commands.
If you look at prefix() and shell_env(), you'll notice that they are both just thin wrappers around _setenv(...). You can combine everything with settings(), wrap it in a function and call it once, or keep it all in one line; up to you.
https://github.com/fabric/fabric/blob/5217b12f8aca3bc071206f7f4168e62c003509d1/fabric/context_managers.py#L370
https://github.com/fabric/fabric/blob/5217b12f8aca3bc071206f7f4168e62c003509d1/fabric/context_managers.py#L160
https://github.com/fabric/fabric/blob/5217b12f8aca3bc071206f7f4168e62c003509d1/fabric/context_managers.py#L443
It will look something like:
def setup_dev_env():
    with settings(..all your options..):
        local(..your command...)
or if you really want to be cool:
@with_settings(..all your same options from before..)
def setup_dev_env():
    local(..command..)
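For instance, a minimal sketch of the combined approach (assuming Fabric 1.x, where settings() accepts other context managers as positional arguments; VIRTUALENV and the task body are placeholders):

from fabric.api import local, prefix, settings, shell_env

VIRTUALENV = 'myenv'  # assumption: your virtualenv name

def venv():
    # One reusable context manager: settings() nests the given
    # context managers for you.
    return settings(shell_env(venv=VIRTUALENV),
                    prefix('workon %s' % VIRTUALENV))

def test():
    with venv():
        local('python3 manage.py test')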
I would like to play around in the python interpreter but with a bunch of imports and object setup completed. Right now I'm launching the interpreter on the command line and doing the setup work every time. Is there any way to launch the command line interpreter with all the initialization work done?
Ex:
# Done automatically.
import foo
import baz
l = [1,2,3,4]
# Launch the interpreter.
launch_interpreter()
>>> print l
[1,2,3,4]
You can create a script with the code you wish to run automatically, then use python -i to run it. For example, create a script (let's call it script.py) with this:
import foo
import baz
l = [1,2,3,4]
Then run the script
$ python -i script.py
>>> print l
[1, 2, 3, 4]
After the script has completed running, python leaves you in an interactive session with the results of the script still around.
If you really want some things done every time you run python, you can set the environment variable PYTHONSTARTUP to a script which will be run every time you start python. See the documentation on the interactive startup file.
I use PYTHONSTARTUP.
My .bash_profile points it at a .pyrc file in my home folder, which has the import statements in it.
https://docs.python.org/3/using/cmdline.html#envvar-PYTHONSTARTUP
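A minimal sketch of that setup (.pyrc is just a conventional name; PYTHONSTARTUP can point at any Python file):

# in ~/.bash_profile
export PYTHONSTARTUP=$HOME/.pyrc

The .pyrc file then contains ordinary Python, e.g. the imports from the question.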
I came across this question when trying to configure a new desk for my research and found that the answers above didn't quite suit my desire: to contain the entire desk configuration within one file (meaning I wouldn't create a separate script.py as suggested by @srgerg).
This is how I ended up achieving my goal:
export PYTHONPATH=$READ_GEN_PATH:$PYTHONPATH
alias prepy="python3 -i -c \"
from naive_short_read_gen import ReadGen
from neblue import neblue\""
In this case neblue is in the CWD (so no path extension is required there), whereas naive_short_read_gen is in an arbitrary directory on my system, which is specified via $READ_GEN_PATH.
You could do this in a single line if necessary: alias prepy="PYTHONPATH=$EXTRA_PATH:$PYTHONPATH python3 -i -c ...".
You can use the -s option when starting the command line; the details are given in the Python command-line documentation.
I think I know what you want to do. You might want to check out IPython, because you cannot start the Python interpreter this way without giving the -i option (at least not directly).
This is what I did in my project:
def ipShell():
    '''Starts the interactive IPython shell'''
    import IPython
    from IPython.config.loader import Config
    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, display_banner=False)

# Then add the following line where you want to start the shell:
ipShell()
You need to be careful, though, because the shell will have the namespace of the module where ipShell() is defined. If you put the definition in the file you run, then you will be able to access the globals() you want. There could be other workarounds to inject the namespace you want; see the edit below.
EDIT
The following function defaults to the caller's namespace (__main__.__dict__).
def ipShell():
    '''Starts the interactive IPython shell
    with the namespace __main__.__dict__'''
    import IPython
    from __main__ import __dict__ as ns
    from IPython.config.loader import Config
    cfg = Config()
    cfg.TerminalInteractiveShell.confirm_exit = False
    IPython.embed(config=cfg, user_ns=ns, display_banner=False)
without any extra arguments.
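In other words, with ipShell() defined in the file you run, the script from the question would simply end like this (foo, baz and l are the question's placeholders):

# script.py, ending with the embedded shell
import foo
import baz
l = [1, 2, 3, 4]

ipShell()  # drops into IPython with foo, baz and l in scope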
I am trying to mock our git wrapper so that we can test it. I plan to use the mockproc Python library, which provides the functionality to mock any process name with a provided script. It works something like this:
self.scripts.append('process-name', returncode=0, stdout="output to process")

with self.scripts:
    run_and_handle_result()
I need to add a decorator layer over this so that I can do some extra things like handle retries. What I want is something like this:
@mockproc('git')  # tells that we are mocking git
def test_something(mock_proc):
    mock_proc.set_script("sleep (60)")
    # Run some git command
    mock_proc.check_exit_signal()
The problem is that I want my decorator to handle the with self.scripts part. So the decorator should set the process name to git (which is simple), then run the test function, which adds the script, and wrap with self.scripts around the git command before resuming.
Is there any way to do this? Is a decorator a bad way to implement it? This is not a cosmetic requirement: some of my commands have retry logic, for which I need to provide more than one script to mockproc and run the command multiple times.
If I understood you correctly, you want to override a named free variable of a function. You can use fun.func_globals[some_name] = some_value (on Python 3 the attribute is called fun.__globals__). E.g.
def x(a):
    return pow2(a)

x.func_globals['pow2'] = lambda y: y*y
x(3) == 9
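As for the decorator part of the question: the general pattern is a wrapper that enters a context manager around the test body. A minimal, hypothetical sketch (run_within is a made-up name, and whether mockproc accepts scripts registered after the with block has been entered is something to verify against its docs):

import functools
import unittest

def run_within(get_ctx):
    # Decorator factory: run the wrapped test inside the context
    # manager returned by get_ctx(self).
    def decorator(test_fn):
        @functools.wraps(test_fn)
        def wrapper(self, *args, **kwargs):
            with get_ctx(self):
                return test_fn(self, *args, **kwargs)
        return wrapper
    return decorator

class GitWrapperTest(unittest.TestCase):
    # assume setUp() created self.scripts, as in the question
    @run_within(lambda self: self.scripts)
    def test_retry(self):
        # register scripts and run the git command here
        pass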
I have done hours of research on the following question but wasn't able to find an answer at all, though there seem to be many fellows having problems with this. I hope I will receive some help from the community. ;)
I have a C shell script from which I need to call a Python 3 script, passing along a variable.
.csh
#!/bin/csh -f
set variable = value
/../geos.py $variable
So far, so good. In my Python 3 script I take this variable, do some calculations, and now want to pass the new_variable back to the very same C shell script in order to continue processing my data.
.py
import os
...
new_variable = 'foobar'
os.environ['new_variable'] = new_variable
return new_variable
My actual goal is that my C Shell script:
#!/bin/csh -f
set variable = value
/../geos.py $variable
echo $new_variable
doesn't fail with 'Undefined variable'. So obviously my code doesn't work. Sure, I might be able to save the Python calculations to a temporary file, but that seems quite unappealing. Also, I understand that it is just not possible to manipulate an environment variable of the parent shell from a child process, but I still only want to pass back a normal variable. There must be a way, no?
If it is possible at all, I wasn't able to figure out a solution using subprocess.check_call. What am I missing?
EDIT:
Thank you very much.
I knew that there must have been an easy solution. Thanks a lot!
For CSHELL the following code worked:
set new_variable=`../geos.py $variable`
echo $new_variable
For BASH the following code worked:
new_variable=`../geos.py $variable`
echo $new_variable
In the Python script itself you don't need to do anything but print your desired value to standard output, e.g. print(you_even_can_name_them_as_you_want). No os.environ or whatever necessary. Made my day. SOLVED
In bash I'd use:
new_variable=$(../geos.py $variable)
Have the Python script produce the new value on standard output (i.e. print(new_variable)).
In csh I don't know; maybe you would have to use backquotes instead of $()?
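For completeness, the Python side can be this minimal sketch (the calculation is a placeholder; the key points are reading the argument from sys.argv and printing the result to stdout, which the backquotes capture):

#!/usr/bin/env python3
# geos.py (sketch)
import sys

variable = sys.argv[1]            # the value handed over by the csh script
new_variable = variable + '_new'  # placeholder for the real calculation
print(new_variable)               # stdout becomes $new_variable in csh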
In fabric, the cd context manager works like
with cd("dir"):
run("command")
and the command will be run after changing to the dir directory. This works fine, but the issue is that it relies on global state. For example, suppose I have a helper function, which needs to use cd:
def helper():
    with cd("foo"):
        run("some command")
If I call helper from another function like
def main_function():
    helper()
    ...
it works fine. But if I do something like
def main_function():
    with cd("bar"):
        helper()
it breaks, because the run("some command") from helper is now run from bar/foo instead of just foo.
Any tips on how to get around this? I tried using absolute paths in cd, but that didn't work. What I really want is for the cd context to only extend to the function scope.
So apparently absolute paths do work. The issue is that paths with ~ do not work (they are treated like relative paths, which IMHO is a bug in fabric), and that was what I was trying. So you have to spell out the full path, e.g. cd('/home/vagrant/foo') if you are using vagrant.
Probably you can get away with relative paths in nested context managers
def func():
    with cd("/home/vagrant/foo"):
        stuff()
        with cd("bar"):  # cd /home/vagrant/foo/bar
            more_stuff()
because you know exactly what the current working directory is when you call the cd('bar'). But for top-level cds, if the function can ever be called from within another function (not just directly from fab), you should use absolute paths.
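Following that advice, a defensive version of the helper from the question could look like this sketch (BASE is an assumed remote home directory):

from fabric.api import cd, run

BASE = '/home/vagrant'  # assumption: the remote project home

def helper():
    # Anchor on an absolute path so any cd() already on the context
    # stack cannot change where this command runs.
    with cd(BASE + '/foo'):
        run("some command")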