I'm trying to run a command that I've installed in my home directory on a remote server. It's already been added to my $PATH in .bash_profile. I'm able to use it when logged in remotely via a normal ssh session, but Fabric doesn't seem to be pulling in my $PATH. Thus, I've tried adding it to my $PATH using Fabric's path context manager like so:
from fabric.api import env, path, run

def test_path():
    print('My env.path setting: %(path)s' % env)
    run('echo $PATH')
    with path('/path/to/sources/drush'):
        run('echo $PATH')
        run('drush')
Fabric responds with:
Executing task 'test_path'
My env.path setting:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
out:
run: echo $PATH
out: /usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/path/to/sources/drush
out:
run: drush
out: /bin/bash: drush: command not found
out:
Fatal error: run() received nonzero return code 127 while executing!
Requested: drush
Executed: /bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
Aborting.
Thanks for looking...
The problem is in the way the PATH variable gets set - there is an additional space character at the end of it:
/bin/bash -l -c "export PATH=\"\$PATH:\"/path/to/sources/drush\" \" && drush"
^HERE
The last directory in the search path is interpreted by bash as "/path/to/sources/drush " (trailing space) - an invalid directory.
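Until that quoting bug is fixed upstream, one workaround is to build the export yourself with Fabric's prefix context manager instead of path. A minimal sketch, assuming Fabric 1.x and the directory from the question:

from fabric.api import prefix, run

def test_path_workaround():
    # export PATH manually; prefix() prepends this to each run() command,
    # sidestepping path()'s broken quoting
    with prefix('export PATH="$PATH:/path/to/sources/drush"'):
        run('drush')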
I need to execute mysqldump from within a django function. I can do so easily enough from the terminal command line, but when I try to run it from within the python script, I get an error:
sh: mysqldump: command not found
when running the following:
import os
from datetime import date

filestamp = date.today()
dumpcmd = "mysqldump -u root appdb > appdb%s.out" % (filestamp)
os.system(dumpcmd)
I think the problem has something to do with the PATH in either the Django application or Eclipse, but I can't figure out why mysqldump can't be found from within the Django app when it can be from the command line / virtualenv.
Make sure mysqldump is in your path:
$ whereis mysqldump; echo $PATH
mysqldump: /usr/bin/mysqldump /usr/share/man/man1/mysqldump.1.gz
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
and/or use mysqldump and mysql inside Python:
import subprocess

subprocess.Popen('mysqldump -h localhost -P 3306 -u root appdb > appdb.sql', shell=True)
Or use the full path in the Python statement, e.g. /usr/bin/mysqldump.
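For example, a minimal sketch using the location whereis reported above; the argument list avoids the shell entirely, so PATH never matters (names are the ones from this thread, adjust as needed):

import subprocess
from datetime import date

# absolute path to mysqldump, so the child process needs no PATH lookup;
# the redirect is replaced by writing stdout to the file from Python
with open('appdb%s.out' % date.today(), 'w') as out:
    subprocess.check_call(['/usr/bin/mysqldump', '-u', 'root', 'appdb'],
                          stdout=out)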
I'm pretty new to Mac, sorry if this is something very simple.
I can run my JavaScript file through the terminal using the command:
casperjs myfile.js
However, I want to execute this command through python script.
This is what I've got:
import os
import subprocess

pathBefore = os.getcwd()
os.chdir("path/to/javascript/")
cmd_output = subprocess.check_output("casperjs click_email_confirm_link.js", shell=True)
os.chdir(pathBefore)
print cmd_output
which returns /bin/sh: casperjs: command not found
As you can see, changing the working dir doesn't work.
I can't figure out how to make /bin/sh recognise casperjs; any help would be much appreciated.
Thanks!
EDIT: This is how my code looks now.
.bash_profile environment variable:
export PATH=$PATH:/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
.profile environment variable:
export PHANTOMJS_EXECUTABLE="/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs"
import subprocess
from subprocess import CalledProcessError

try:
    CASPER = '/usr/local/bin/casperjs'
    SCRIPT = 'path/to/javascript/click_email_confirm_link.js'
    params = CASPER + ' ' + SCRIPT
    stdout_as_string = subprocess.check_output(params, shell=True)
    print stdout_as_string
except CalledProcessError as e:
    print e.output
which returns error:
Fatal: [Errno 2] No such file or directory; did you install phantomjs?
I solved my issue by typing these three lines in the terminal:
sudo ln -s /path/to/bin/phantomjs /usr/local/share/phantomjs
sudo ln -s /path/to/bin/phantomjs /usr/local/bin/phantomjs
sudo ln -s /path/to/bin/phantomjs /usr/bin/phantomjs
and using this Python code:
commands = '''
pwd
cd ..
pwd
cd shared/scripts/javascript
pwd
/usr/local/Cellar/casperjs/1.1.3/libexec/bin/casperjs click_email_confirm_link.js
'''
process = subprocess.Popen('/bin/bash', stdin=subprocess.PIPE, stdout=subprocess.PIPE)
out, err = process.communicate(commands.encode('utf-8'))
print(out.decode('utf-8'))
Also, the following files were edited like this:
vi .bash_profile
export PHANTOMJS_EXECUTABLE=/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
export PATH=$PATH:/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
vi .bashrc + source .bashrc
export PHANTOMJS_EXECUTABLE=/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
export PATH="/usr/local/Cellar/phantomjs/2.1.1/bin:$PATH"
vi .profile
export PHANTOMJS_EXECUTABLE=/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs
export PATH="$PATH:/usr/local/Cellar/phantomjs/2.1.1/bin/phantomjs"
I'm using Vagrant to set up a box with python, pip, virtualenv, virtualenvwrapper and some requirements. A provisioning shell script adds the required lines for virtualenvwrapper to .bashrc. It does a very basic check that they're not already there, so that it doesn't duplicate them with every provision:
if ! grep -Fq "WORKON_HOME" /home/vagrant/.bashrc; then
    echo 'export WORKON_HOME=/home/vagrant/.virtualenvs' >> /home/vagrant/.bashrc
    echo 'export PROJECT_HOME=/home/vagrant/Devel' >> /home/vagrant/.bashrc
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> /home/vagrant/.bashrc
    source /home/vagrant/.bashrc
fi
That seems to work fine; after provisioning is finished, the lines are in .bashrc, and I can ssh to the box and use virtualenvwrapper.
However, virtualenvwrapper doesn't work during provisioning. After the section above, this next checks for a pip requirements file and tries to install with virtualenvwrapper:
if [[ -f /vagrant/requirements.txt ]]; then
    mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt
fi
But that generates:
==> default: /tmp/vagrant-shell: line 50: mkvirtualenv: command not found
If I try and echo $WORKON_HOME from that shell script, nothing appears.
What am I missing to have those environment variables available, so virtualenvwrapper will run?
UPDATE: Further attempts... it seems that doing source /home/vagrant/.bashrc has no effect in my shell script - I can put echo "hello" in the .bashrc file, and that isn't output during provisioning (but is if I run source /home/vagrant/.bashrc when logged in).
I've also tried su -c "source /home/vagrant/.bashrc" vagrant in the shell script but that is no different.
UPDATE 2: Removed the $BASHRC_PATH variable, which was confusing the issue.
UPDATE 3: In another question I got the answer as to why source /home/vagrant/.bashrc wasn't working: the first part of the .bashrc file prevented it from doing anything when run "not interactively" in that way.
The Vagrant script provisioner will run as root, so its home dir (~) will be /root. In your script, if you define BASHRC_PATH=/home/vagrant/.bashrc, then I believe your steps will work: appending to, then sourcing from, /home/vagrant/.bashrc.
Update:
Scratching my earlier idea ^^ because BASHRC_PATH is already set correctly.
As an alternative we could use .profile or .bash_profile. Here's a simplified example which sets environment variable FOO, making it available during provisioning and after ssh login:
Vagrantfile
Vagrant.configure(2) do |config|
  config.vm.box = "hashicorp/precise32"

  $prov_script = <<SCRIPT
if ! grep -q "export FOO" /home/vagrant/.profile; then
  sudo echo "export FOO=bar" >> /home/vagrant/.profile
  echo "before source, FOO=$FOO"
  source /home/vagrant/.profile
  echo "after source, FOO=$FOO"
fi
SCRIPT

  config.vm.provision "shell", inline: $prov_script
end
Results
$ vagrant up
...
==> default: Running provisioner: shell...
default: Running: inline script
==> default: before source, FOO=
==> default: after source, FOO=bar
$ vagrant ssh -c 'echo $FOO'
bar
$ vagrant ssh -c 'tail -n 1 ~/.profile'
export FOO=bar
I found a solution, but I don't know if it's the best. It feels slightly wrong as it's repeating things, but...
I still append those lines to .bashrc, so that virtualenvwrapper will work if I ssh into the machine. But, because source /home/vagrant/.bashrc appears to have no effect during the running of the script, I have to explicitly repeat those three commands:
if ! grep -Fq "WORKON_HOME" $BASHRC_PATH; then
    echo 'export WORKON_HOME=$HOME/.virtualenvs' >> $BASHRC_PATH
    echo 'export PROJECT_HOME=$HOME/Devel' >> $BASHRC_PATH
    echo 'source /usr/local/bin/virtualenvwrapper.sh' >> $BASHRC_PATH
fi
WORKON_HOME=/home/vagrant/.virtualenvs
PROJECT_HOME=/home/vagrant/Devel
source /usr/local/bin/virtualenvwrapper.sh
(As an aside, I also realised that during vagrant provisioning $HOME is /root, not the /home/vagrant I was assuming.)
The .bashrc in the Ubuntu box does not work; you have to create a .bash_profile and add:
if [ -f ~/.bashrc ]; then
    . ~/.bashrc
fi
As mentioned in your other question, Vagrant prohibits interactive shells during provisioning - apparently only for some boxes (I still need a reference for this). For me, this affects the official Ubuntu Trusty and Xenial boxes.
However, you can simulate an interactive bash shell using sudo -H -u USER_HERE bash -i -c 'YOUR COMMAND HERE'
Answer taken from: https://stackoverflow.com/a/30106828/4186199
This has worked for me installing Ruby via rbenv and Node via nvm when provisioning the Ubuntu/trusty64 and xenial64 boxes.
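If your provisioning helper happens to be written in Python, the same simulated-interactive-shell trick is easy to drive with subprocess; a sketch, where the user name and the mkvirtualenv call are placeholders rather than anything from the answers above:

import subprocess

# bash -i reads ~/.bashrc, so virtualenvwrapper's shell functions exist
subprocess.check_call([
    'sudo', '-H', '-u', 'vagrant', 'bash', '-i', '-c',
    "mkvirtualenv 'myvirtualenv' -r /vagrant/requirements.txt",
])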
How do you define a custom prompt to use when activating a Python virtual environment?
I have a bash script for activating a virtualenv I use when calling specific Fabric commands. I want the shell prompt to say something like "(fab)" so I can easily distinguish it from other shells I have open. Following this example, I've tried:
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
/bin/bash -c ". .env/bin/activate; PS1='(fab) '; exec /bin/bash -i"
but there's no change to the prompt. What am I doing wrong?
The prompt is set in the virtualenv's activate script (located in the bin folder under the virtualenv). If you only want to change the prompt some times, you could set an environment variable before calling activate (make sure to clear it in the corresponding deactivate file). If you simply want the prompt to be different all the time, you can do that right in activate at the line that looks like
set "PROMPT=(virtualenvname) %PROMPT%"
If you're using virtualenvwrapper, you could do all of this in the postactivate and postdeactivate scripts as well.
I couldn't find any way to do this via a script executed as a child process. Calling a separate bash process seems to forget any previously set PS1. However, it turned out to be trivial if I just sourced the script:
#!/bin/bash
script_dir=`dirname $0`
cd $script_dir
. .env/bin/activate
PS1="(fab) "
It appears the
exec /bin/bash -i
is resetting the PS1 variable. When I run
export PS1="foo "; bash
it resets it too. Curiously, when I look into the bash sources (shell.c and variables.c) it appears to use
set_if_not ("PS1", primary_prompt);
to init it. But I'm not exactly sure what happens between this and main(). Giving up.
I tried it on Cygwin and on Linux (RedHat CentOS) as well, and found a solution for both.
CYGWIN
After some investigation I found that the problem is that PS1 is set by /etc/bash.bashrc, which overrides the PS1 environment variable, so you need to prevent this file from being run, using:
/bin/bash -c ". .env/bin/activate; PS1='(fab) ' exec /bin/bash -i --norc"
or
/bin/bash -c ". .env/bin/activate; export PS1='(fab) '; exec /bin/bash -i --norc"
LINUX
It works much simpler:
/bin/bash -c ". .env/bin/activate; PS1='(fab) ' exec /bin/bash -i"
or
/bin/bash -c ". .env/bin/activate; export PS1='(fab) '; exec /bin/bash -i"
If the script you are calling does not export the variables (and I suppose it does not) and the variables it sets do not appear in the environment, then you could try something like this:
/bin/bash -c "PS1='(fab) ' exec /bin/bash --rcfile .env/bin/activate; "
I hope I could help!
I'm currently executing a python file with runuser and redirecting the output to a file. This is the command:
runuser -l "user" -c "/path/python-script.py parameter1 > /path/file.log &"
This runs the Python script correctly but creates an empty log file. If I run without the redirect:
runuser -l "user" -c "/path/python-script.py parameter1 &"
it runs the Python script correctly and all of the script's output flows to the screen. The output from the Python script is produced with print, which writes to stdout.
I don't understand why the output from the Python script is not dumped to the file. File permissions are correct; the log file is created, but not filled.
But, if I remove the "parameter1", then the error message reported by the python script is correctly dumped to the log file:
runuser -l "user" -c "/path/python-script.py > /path/file.log &"
The error message is produced with print too, so I don't understand why one message is dumped and the others are not.
Maybe runuser interprets "parameter1" as a command or something. But the parameter is correctly passed to the script, as I can see with ps:
/usr/bin/python /path/python-script.py connect
I've tried adding 2>&1 but it still doesn't work.
Any idea?
I encountered a similar problem in startup scripts, where I needed to log output. In the end I came up with the following:
USR=myuser
PRG=myprog
WKD="/path/to/workdir"
BIN="/path/to/binary"
ARG="--my arguments"
PID="/var/run/myapp/myprog.pid"
su -l $USR -s /bin/bash -c "exec > >( logger -t $PRG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
Handy, as you also have the PID of the process available. The example writes to syslog, but if it should write to a file, use cat instead:
LOG="/path/to/file.log"
su -l $USR -s /bin/bash -c "exec > >( cat > $LOG ) 2>&1 ; cd $WKD; { $BIN $ARG & }; echo \$! > $PID "
It starts a new shell and ties all output to the command inside the exec redirection.
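Driven from Python instead of a shell one-liner, roughly the same pattern might look like this sketch; the paths and user name are placeholders, and note the recorded PID belongs to runuser itself, not to the Python script it launches:

import subprocess

# send both stdout and stderr of the child to the log file
with open('/path/file.log', 'a') as log:
    proc = subprocess.Popen(
        ['runuser', '-l', 'user', '-c', '/path/python-script.py parameter1'],
        stdout=log, stderr=subprocess.STDOUT)

with open('/var/run/myapp/myprog.pid', 'w') as pidfile:
    pidfile.write(str(proc.pid))  # PID of runuser, not the Python script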