I've written a shell script that does several things, such as...
defines environment variables
activates a Python virtual environment
The issue I am having is that any commands I put after the second step are not recognized.
I activate the environment with the following line:
bash --rcfile "<PATH_TO_VIR_ENV>/bin/activate" -i
The way I use my shell script is to first log in to the Linux server and then run
bash ./myScript.sh
This activates my virtual environment but doesn't do the things I would like it to do afterwards, for example sourcing my .bashrc file so that I can use the aliases stored there.
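In other words, what I am after is something like a single combined rcfile (a sketch, reusing the <PATH_TO_VIR_ENV> placeholder; note that everything after the bash ... -i line only runs once that interactive shell exits):
cat > /tmp/combined_rc <<'EOF'
source ~/.bashrc
source "<PATH_TO_VIR_ENV>/bin/activate"
EOF
bash --rcfile /tmp/combined_rc -i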
Thank you for your time in advance!
For example, if I run setstate.py, the shell prompt would go from
~/Desktop $
to
(customstate) ~/Desktop $
sort of like in Anaconda when you activate an environment. For example, something like:
import shellstate
shellstate.set_state("custom_state")
print('set state to custom state')
You can't. That would be a security breach.
The shell is one process; your Python program is another.
What you call "anaconda when you activate an environment" is something else: you don't run another process, you run a command in the shell, by sourcing a shell script. (I don't know Anaconda well, but it is something like source activate environment, which is a shell command, not a Python program.)
Any "state" (or any other internal change of your shell) has to be triggered by a shell command; it can't come from a command in another process.
I'm using the command docker run -e GRB_WLSACCESSID=xxxxxxx to set environment variables for Gurobi authorization. The OS of the container is Ubuntu 16.04. This works fine if I log in to the container via SSH interactively and read the environment variables with os.getenv() in Python.
But when I add this container as a remote SSH interpreter in PyCharm and execute the Python code from PyCharm, I can't get the environment variables.
I eventually found that the environment variables created by docker run -e can only be read by an interactive shell. This can be validated by executing ssh root@x.x.x.x env versus interactively executing env after logging in to the container: the former outputs less.
One possible solution is to write some configuration manually after the container is created, e.g., set the variables in /etc/environment.
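For instance, this first approach could be a one-off command after the container is up (a sketch; the container name and value are placeholders):
docker exec mycontainer bash -c 'echo "GRB_WLSACCESSID=xxxxxxx" >> /etc/environment'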
The other possible solution is to add the variables manually in PyCharm's run configuration (Edit Configurations).
Is there a more elegant solution? :(
I finally understood what the relevant answers meant.
It means that, in the remote VM or container, you create a shell script named mypython as a Python wrapper, with the following content (the -l flag makes bash behave as a login shell, so the wrapper sees the environment an interactive login would see):
#!/bin/bash -l
/path/to/interpreter/bin/python "$@"
Here /path/to/interpreter/bin/python is the path to the Python interpreter. For a conda interpreter, it might look like /root/miniconda3/envs/py37/bin/python.
The script mypython should be placed in the same directory as the python binary, i.e., /root/miniconda3/envs/py37/bin/mypython.
Then make mypython executable:
chmod +x /root/miniconda3/envs/py37/bin/mypython
Alternatively, the two steps above can be done with the following commands:
echo '#!/bin/bash -l
/root/miniconda3/envs/py37/bin/python "$@"' > /root/miniconda3/envs/py37/bin/mypython
chmod +x /root/miniconda3/envs/py37/bin/mypython
Finally, add the SSH interpreter in PyCharm, making sure the interpreter path is /root/miniconda3/envs/py37/bin/mypython.
And the problem is solved.
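As a quick check (assuming the same host and paths as above), a non-interactive SSH call through the wrapper should now print the variable:
ssh root@x.x.x.x /root/miniconda3/envs/py37/bin/mypython -c 'import os; print(os.getenv("GRB_WLSACCESSID"))'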
I have a bash file, it works fine when executed from terminal.
#!/bin/bash
source activate tensorflow_p36
python /home/ec2-user/abc/wsgi.py
Note: tensorflow_p36 is a built-in conda environment and does not need to be activated from a specific env/bin directory; it can be activated from any directory. I think this is a feature of the Amazon Deep Learning AMIs.
If I run this bash script with sudo, it doesn't activate the virtual environment and runs in the default Python environment instead. The Python file can only run in that virtual environment.
I have tried all three alternatives (rc.local, a .conf file, an init.d config), and also tried using crontab as suggested elsewhere. I have also tried using supervisord to add this bash script as a program.
When the program is run by any of these methods, I always get the same import errors, because it is using the default Python 3 environment, which doesn't have the required dependencies.
I am working on Amazon CentOS (Deep Learning AMI). Can someone please suggest a method to run this script every time the system restarts?
In the rc.local, instruct root to run it as you:
su --command /path/to/bash/file --login grimlock
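A sketch of the whole file, assuming the script lives at /home/ec2-user/start_wsgi.sh and the login user is ec2-user (both placeholders):
#!/bin/sh
# /etc/rc.local -- run the startup script as a normal user at boot
su --login ec2-user --command '/home/ec2-user/start_wsgi.sh'
exit 0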
You can also run it from your personal crontab:
( crontab -l; printf '@reboot /path/to/bash/file\n' ) | crontab -
If you don't have a crontab yet, there will be an error message from crontab -l, but it's harmless:
crontab: no crontab for ec2-user
You just need to do this once, and the job will execute as yourself once the system comes up.
Try replacing source with .:
. activate tensorflow_p36
python /home/ec2-user/abc/wsgi.py
Also check that the script file is executable (chmod +x /path/to/file).
If one defines which version of Python to use in a bash script, it would be:
export PYTHON="/path/python/python-3.5.1/bin/python"
But for Python virtualenvs, one executes these commands on the command line:
cd /path/pathto/virtualenv
source bin/activate
cd another_directory
How does one "enter" a Python virtualenv in a bash script? What is the standard approach here?
We have to distinguish two cases here:
You want to use/call python (or python-based tools) in your bash script, but python or those tools should be taken from and run in a virtualenv
You want a script that, amongst other things, lets the shell from which you call it enter the virtualenv, so that you can interactively call python (or python-based tools) inside the virtualenv
Case 1: Using a virtualenv inside a script
How does one "enter" a Python virtualenv in a bash script?
Just like on the interactive bash command line:
source /path/to/the/virtual_env/bin/activate
What is the standard approach here?
The standard approach is not to enter the virtualenv in a bash script. Instead, call python and/or the python-based commands you want to use by their full path. To make this easier and less repetitive, you can use aliases and variables.
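For example (a sketch; the paths are placeholders):
#!/bin/bash
# Call the virtualenv's interpreter and tools by their full paths; no activation needed.
VENV=/path/to/the/virtual_env
"$VENV/bin/python" my_script.py
"$VENV/bin/pip" list   # python-based tools work the same way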
Case 2: Activating a virtualenv in an interactive bash session by calling a script
There already is such a script. It's called activate and it's located in the bin directory of the virtualenv. You have to source it rather than calling it like a normal command. Only then will it run in the same session instead of in a subshell, and thus only then can it make modifications to the session that won't be lost due to the subshell terminating at the end of the script.
So just do:
source /path/to/the/virtual_env/bin/activate
in your interactive shell session.
But what if you want to do more than the activate script does? You can put
source /path/to/the/virtual_env/bin/activate
into a shell script. But, due to the reason mentioned above, it won't have much effect when you call your script normally. Instead, source your script to use it from an interactive session.
Thus:
Content of my_activate.sh
#!/bin/bash
# Do something
# ...
# then
source /path/to/the/virtual_env/bin/activate
# Do more stuff
# ...
and in your interactive session
source my_activate.sh
I recommend using virtualenvwrapper. It provides some useful tools for managing your virtual environments.
pip install --user virtualenvwrapper
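Note that after installing, virtualenvwrapper has to be loaded into your shell before mkvirtualenv and workon exist; typically something like this goes in your ~/.bashrc (the script's location varies with how you installed it):
export WORKON_HOME=$HOME/.virtualenvs
source ~/.local/bin/virtualenvwrapper.sh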
When you create the virtual environment, you specify which version of python should be used in the environment.
mkvirtualenv -p /usr/local/bin/python2.6 myproject.2.6
mkvirtualenv -p /usr/local/bin/python3.3 myproject.3.3
Then, "enter" the environment with the workon command.
workon myproject.2.6
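Inside the environment, python resolves to the interpreter chosen at creation time, and deactivate leaves it again:
workon myproject.2.6
python --version   # should report the 2.6 interpreter picked above
deactivate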
Here are a few steps to follow. One thing you can do is:
export PYTHON="/path/pathto/virtualenv/python"
Use this variable in your .bashrc. Or you can do something like this:
vim ~/.bashrc
Go to the end and add:
alias python=/path/pathto/virtualenv/python
source ~/.bashrc
I've done a fair bit of bash scripting, but very little batch scripting on Windows. I'm trying to activate a Python virtualenv, run a Python script, then deactivate the virtualenv when the script exits.
I've got a folder called env, which is my virtualenv, and a folder called work, which contains my scripts.
This is what I've got so far:
%~dp0env\Scripts\activate.bat
python %~dp0work\script.py
deactivate
However, when I run the script, it activates the virtualenv and then stops; it never gets to the second line to run the Python script. Is there a way to "source" the activate script, so that the rest of the batch file runs as if I'd called activate.bat from the command line?
I'd say you just need to prepend 'call' to your activate.bat invocation, to ensure that the current batch file is resumed after activate is executed:
call %~dp0env\Scripts\activate.bat
Consider doing the same for deactivate.bat. Furthermore, if you want to ensure that the current cmd.exe environment is not polluted by a call to your batch file, consider wrapping your commands in a setlocal/endlocal command pair.
I made a .lnk file that points to cmd /k "path/to the/script/activate.bat", and it works (see CMD parameters & options).
I suppose you just want to perform the same commands in Windows as you would in a Linux Bash shell. When I want to start a virtualenv, I am actually in its top directory, and the Linux command would be source bin/activate.
It is no problem to simulate this behaviour on Windows. Personally, I've put a batch file named activate.bat somewhere on the PATH environment variable, like this:
:: activate.bat
@echo off
REM source bin/activate
if "%1" == "bin/activate" (
    if not EXIST "%CD%\Scripts\activate.bat" goto notfound
    set WRAPEX=Scripts\activate.bat
) ELSE (
    set WRAPEX=%*
)
call %WRAPEX%
goto :eof

:notfound
echo Cannot find the activate script -- aborting.
goto :eof