How do you define a custom prompt to use when activating a Python virtual environment?
I have a bash script for activating a virtualenv I use when calling specific Fabric commands. I want the shell prompt to say something like "(fab)" so I can easily distinguish it from other shells I have open. Following this example, I've tried:
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". .env/bin/activate; PS1='(fab) '; exec /bin/bash -i"
but there's no change to the prompt. What am I doing wrong?
The prompt is set in the virtualenv's activate script (located in the bin folder under the virtualenv). If you only want to change the prompt some of the time, you could set an environment variable before calling activate (make sure to clear it in the corresponding deactivate file). If you simply want the prompt to be different all the time, you can do that right in activate, at the line that looks like
set "PROMPT=(virtualenvname) %PROMPT%"
If you're using virtualenvwrapper, you could do all of this in the postactivate and postdeactivate scripts as well.
I couldn't find any way to do this via a script executed as a child process. Calling a separate bash process seems to forget any previously set PS1. However, it turned out to be trivial if I just sourced the script:
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
. .env/bin/activate
PS1="(fab) "
It appears the
exec /bin/bash -i
is resetting the PS1 variable. When I run
export PS1="foo "; bash
it resets it too. Curiously, when I look into the bash sources (shell.c and variables.c) it appears to use
set_if_not ("PS1", primary_prompt);
to init it. But I'm not exactly sure what happens between this and main(). Giving up.
I tried this on Cygwin and on Linux (Red Hat/CentOS) as well, and found a solution for both.
CYGWIN
After some investigation I found that the problem is that PS1 is set by /etc/bash.bashrc, which overrides the PS1 environment variable. So you need to stop bash from reading that file by using:
/bin/bash -c ". .env/bin/activate; PS1='(fab) ' exec /bin/bash -i --norc"
or
/bin/bash -c ". .env/bin/activate; export PS1='(fab) '; exec /bin/bash -i --norc"
LINUX
On Linux it is simpler:
/bin/bash -c ". .env/bin/activate; PS1='(fab) ' exec /bin/bash -i"
or
/bin/bash -c ". .env/bin/activate; export PS1='(fab) '; exec /bin/bash -i"
If the script you are calling does not export its variables (and I suppose it does not), so the variables it sets do not appear in the environment, then you could try something like this:
/bin/bash -c "PS1='(fab) ' exec /bin/bash --rcfile .env/bin/activate; "
I hope this helps!
Related
I would like to permanently activate a conda environment in my Docker image, so that the functions of the conda package can be used by the script given as an argument to the entrypoint.
This is the Dockerfile that I created.
FROM continuumio/anaconda3
RUN conda create -n myenv
RUN echo "source activate myenv" > ~/.bashrc
ENV PATH="/opt/conda/envs/myenv/bin:$PATH"
SHELL ["/bin/bash", "-c"]
ENTRYPOINT ["python3"]
It seems that the ~/.bashrc file is not sourced when I run the docker container. Am I doing something wrong?
Thank you
As a workaround, either use SHELL ["/bin/bash", "-i", "--login", "-c"]
-or-
edit the .bashrc file in the image so that it does not bail out when the shell is not interactive, by changing "*) return;;" to read "*) ;;"
With the first option, bash will complain about job control and ttys, but the error can be ignored.
Cause of the issue:
the .bashrc file contains the following command:
# If not running interactively, don't do anything
case $- in
*i*) ;;
*) return;;
esac
which causes bash to stop sourcing the file when the shell is not interactive (i.e. was not started with the -i flag).
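For the second workaround, a minimal sketch of the edit as a RUN step (it assumes GNU sed is available and that the .bashrc being sourced contains the stock stanza quoted above; the /root/.bashrc path is an assumption based on the build running as root):
# drop the early "return" so .bashrc keeps executing in non-interactive shells
RUN sed -i 's/\*) return;;/*) ;;/' /root/.bashrc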
Unfortunately, I haven't found a way for the conda stanza to be inserted into .bash_profile or .profile automatically instead of (or in addition to) .bashrc, as there doesn't seem to be an option to override or add to the list of what files conda init examines for modification.
How can I run a sourced bash script, then change directories, and then run a command, all within the same shell (using Python)? Is this even possible?
My Attempt:
subprocess.check_call(["env -i bash -c 'source ./init-build ARG'", "cd ../myDir", "bitbake myBoard"], shell =True)
I would make this for you, but I need to see the absolute paths. Here is an example
subprocess.check_call(["""/usr/bin/env bash -c "cd /home/x/y/tools && source /home/x/y/venv/bin/activate && python asdf.py" >> /tmp/asdf.txt 2>&1"""], shell=True)
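Adapted to the paths in the question (init-build, ARG, ../myDir and bitbake myBoard are the question's placeholders), the idea is to chain all three steps inside one bash -c string, so the sourced environment and the cd survive until the final command; that single string is what you pass to subprocess.check_call with shell=True:
bash -c "source ./init-build ARG && cd ../myDir && bitbake myBoard"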
I'm using Vagrant to set up a box with python, pip, virtualenv, virtualenvwrapper and some requirements. A provisioning shell script adds the required lines for virtualenvwrapper to .bashrc. It does a very basic check that they're not already there, so that it doesn't duplicate them with every provision:
if ! grep -Fq "WORKON_HOME" /home/vagrant/.bashrc; then
echo 'export WORKON_HOME=/home/vagrant/.virtualenvs' >> /home/vagrant/.bashrc
echo 'export PROJECT_HOME=/home/vagrant/Devel' >> /home/vagrant/.bashrc
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> /home/vagrant/.bashrc
source /home/vagrant/.bashrc
fi
That seems to work fine; after provisioning is finished, the lines are in .bashrc, and I can ssh to the box and use virtualenvwrapper.
However, virtualenvwrapper doesn't work during provisioning. After the section above, this next checks for a pip requirements file and tries to install with virtualenvwrapper:
if [[ -f /vagrant/requirements.txt ]]; then
    mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt
fi
But that generates:
==> default: /tmp/vagrant-shell: line 50: mkvirtualenv: command not found
If I try and echo $WORKON_HOME from that shell script, nothing appears.
What am I missing to have those environment variables available, so virtualenvwrapper will run?
UPDATE: Further attempts... it seems that doing source /home/vagrant/.bashrc has no effect in my shell script - I can put echo "hello" in the .bashrc file, and that isn't output during provisioning (but it is if I run source /home/vagrant/.bashrc when logged in).
I've also tried su -c "source /home/vagrant/.bashrc" vagrant in the shell script but that is no different.
UPDATE 2: Removed the $BASHRC_PATH variable, which was confusing the issue.
UPDATE 3: In another question I got the answer as to why source /home/vagrant/.bashrc wasn't working: the first part of the .bashrc file prevented it from doing anything when run "not interactively" in that way.
The Vagrant script provisioner will run as root, so its home dir (~) will be /root. In your script, if you define BASHRC_PATH=/home/vagrant, then I believe your steps will work: appending to, then sourcing from /home/vagrant/.bashrc.
Update:
Scratching my earlier idea ^^ because BASHRC_PATH is already set correctly.
As an alternative we could use .profile or .bash_profile. Here's a simplified example which sets environment variable FOO, making it available during provisioning and after ssh login:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"
  $prov_script = <<SCRIPT
if ! grep -q "export FOO" /home/vagrant/.profile; then
  sudo echo "export FOO=bar" >> /home/vagrant/.profile
  echo "before source, FOO=$FOO"
  source /home/vagrant/.profile
  echo "after source, FOO=$FOO"
fi
SCRIPT
  config.vm.provision "shell", inline: $prov_script
end
Results
$ vagrant up
...
==> default: Running provisioner: shell...
default: Running: inline script
==> default: before source, FOO=
==> default: after source, FOO=bar
$ vagrant ssh -c 'echo $FOO'
bar
$ vagrant ssh -c 'tail -n 1 ~/.profile'
export FOO=bar
I found a solution, but I don't know if it's the best. It feels slightly wrong as it's repeating things, but...
I still append those lines to .bashrc, so that virtualenvwrapper will work if I ssh into the machine. But, because source /home/vagrant/.bashrc appears to have no effect during the running of the script, I have to explicitly repeat those three commands:
if ! grep -Fq "WORKON_HOME" $BASHRC_PATH; then
echo 'export WORKON_HOME=$HOME/.virtualenvs' >> $BASHRC_PATH
echo 'export PROJECT_HOME=$HOME/Devel' >> $BASHRC_PATH
echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $BASHRC_PATH
fi
WORKON_HOME=/home/vagrant/.virtualenvs
PROJECT_HOME=/home/vagrant/Devel
source /usr/local/bin/virtualenvwrapper.sh
(As an aside, I also realised that during vagrant provisioning $HOME is /root, not the /home/vagrant I was assuming.)
The .bashrc in the Ubuntu box does not work for this. You have to create a .bash_profile and add:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
As mentioned in your other Q, Vagrant prohibits interactive shells during provisioning - apparently, only for some boxes (need to reference this though). For me, this affects the official Ubuntu Trusty and Xenial boxes.
However, you can simulate an interactive bash shell using sudo -H -u USER_HERE bash -i -c 'YOUR COMMAND HERE'
Answer taken from: https://stackoverflow.com/a/30106828/4186199
This has worked for me installing Ruby via rbenv and Node via nvm when provisioning the Ubuntu/trusty64 and xenial64 boxes.
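Applied to the provisioning script from this question, a sketch would be to wrap the virtualenvwrapper call like this (the vagrant user and the paths are the ones from the question):
if [[ -f /vagrant/requirements.txt ]]; then
    sudo -H -u vagrant bash -i -c "mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt"
fi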
I'm trying to set up my /etc/rc.local to automatically start up a process on reboot as another user. For some reason, the .bashrc for this user does not seem to be getting initialized.
Here's the command I added to /etc/rc.local:
sudo su -l batchuser -c "/home/batchuser/app/run_prod.sh &"
this didn't work, so I also tried this:
sudo su -l batchuser -c ". /home/batchuser/.profile; /home/batchuser/app/run_prod.sh &"
run_prod.sh just starts up a Python script. The Python script fails because it references modules that are on a Python path which gets set in the .bashrc.
EDIT: it works when I do this
sudo su -l batchuser -c "export PYTHONPATH=/my/python/path; /home/batchuser/app/run_prod.sh &"
Why does this work and not the statement above? How come the .bashrc is not getting initialized?
I have run into this same problem. I can't fully explain the behavior, but I ended up doing this type of thing:
sudo PYTHONPATH=$PYTHONPATH the_command
or more specifically for your case,
sudo PYTHONPATH=$PYTHONPATH su -l batchuser -c "/home/batchuser/app/run_prod.sh &"
Does that work for you? If it does, you may find it doesn't return immediately like you expect it to. You may need to move the & outside the quotes so it applies to the sudo command.
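For reference, a sketch of the rc.local line that combines the question's working PYTHONPATH export with the suggestion to move the & outside the quotes (the paths are the ones from the question):
sudo su -l batchuser -c "export PYTHONPATH=/my/python/path; /home/batchuser/app/run_prod.sh" &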
I'm trying to run a command that I've installed in my home directory on a remote server. It's already been added to my $PATH in .bash_profile. I'm able to use it when logged in remotely via a normal ssh session, but Fabric doesn't seem to be pulling in my $PATH. Thus, I've tried adding it to my $PATH using Fabric's path context manager like so:
def test_path():
    print('My env.path setting: %(path)s' % env)
    with path('/path/to/sources/drush'):
        run('echo $PATH')
        run('drush')
Fabric responds with:
Executing task 'test_path'
My env.path setting:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
out:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/path/to/sources/drush
out:
run: drush
out: /bin/bash: drush: command not found
out:
Fatal error: run() received nonzero return code 127 while executing!
Requested: drush
Executed: /bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
Aborting.
Thanks for looking...
The problem is in the way the PATH variable gets set - there is an additional space character at the end of it:
/bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
^HERE - the extra space is between the last two escaped quotes, \" \"
The last directory in the search path is interpreted by bash as "/path/to/sources/drush " (with a trailing space) - an invalid directory.
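You can reproduce the effect of that quoting directly in a shell (a sketch of the generated export, run outside Fabric):
export PATH="$PATH:"/path/to/sources/drush" "
echo "${PATH##*:}|"    # prints "/path/to/sources/drush |" - note the trailing space before the |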