When I run source venv/bin/activate on the command line, it activates the virtualenv. However, when I run it via a shell script (./run.sh), I don't see the virtualenv being activated. Similar scripts used to work for me in the past, but I am not sure what I am missing now. I am running this on a Mac.
#! /bin/bash
source venv/bin/activate
(venv) 8c859072374671e:my-project tee78$
When you run source inside a script, the script runs in a new shell process. Environment changes made there won't be reflected in the parent shell.
$ cat run.sh
#! /bin/bash
source venv/bin/activate
If you need that to happen, source the script itself:
source run.sh
Also, you won't need the shebang line :)
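The difference is easy to demonstrate in a terminal. This sketch uses a throwaway variable (MARKER, a made-up name) in place of a real virtualenv, but the mechanism is the same one activate relies on:

```shell
# A child shell gets a copy of the environment; its changes are discarded.
export MARKER=parent
bash -c 'MARKER=child'      # runs in a child process
echo "$MARKER"              # still prints: parent

# Sourcing runs the same commands in the *current* shell, so they stick.
echo 'MARKER=sourced' > /tmp/set_marker.sh
source /tmp/set_marker.sh
echo "$MARKER"              # now prints: sourced
```

This is exactly why ./run.sh cannot activate a virtualenv for you: the activation happens in the child shell and dies with it.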
I have a python file which uses the prettytable. I also have a bash script that opens and runs it using xQuartz. When I open xQuartz and run the file from there, it works as expected, however when I try to run it using the script, it is unable to find my prettytable module. What might be going on?
bash script line:
xterm -geometry 55x24+000+000 -hold -e /bin/bash -l -c "python3 server.py 1"
running python3 server.py 1 on xQuartz terminal is fine. It also works if I run xterm from the mac terminal and do the same.
As pointed out by @DiegoTorresMilano, you may be running a different version of Python 3 depending on what's in your ~/.bash_profile or ~/.bashrc. This is possible if you have more than one version of Python 3 installed.
An interactive non-login bash session sources only your ~/.bashrc, while a login shell sources ~/.bash_profile instead (and not ~/.bashrc, unless your ~/.bash_profile sources it explicitly, as many setups do). When you run bash with the -l option, you tell bash to behave as though it were a login shell; the "Invocation" section of the bash man page confirms that a login shell sources ~/.bash_profile but not ~/.bashrc. So if the two files set up PATH differently, the -l shell can resolve python3 to a different interpreter than your interactive terminal does.
What you should try is running python3 --version in an interactive xQuartz terminal. This will give you output something like Python x.y.z (for example, Python 3.8.5). Then you can run that specific python version by using pythonx.y in your bash script (for example, if the output of your python3 --version was Python 3.8.5, you should use python3.8 in your bash script).
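To see whether the two shell types actually diverge on your machine, you can print PATH from each kind of shell and compare (python3 is just the example from the question; any command lookup works the same way):

```shell
# A plain child shell inherits $PATH unchanged from the caller:
bash -c 'echo "child PATH: $PATH"'

# A login shell (-l) sources ~/.bash_profile first, which may prepend
# or replace entries, so its $PATH can differ:
bash -l -c 'echo "login PATH: $PATH"'
```

If the two differ where python3 lives, the xterm-launched script will pick up a different interpreter than your interactive terminal does, which is when pinning python3.8 (or whatever your version is) helps.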
I am working on a project where I need to run my script daily at a specific time, using crontab/cron to run a shell script. In the shell script I want to activate a virtualenv, run some commands with the Python interpreter, and deactivate it. I have tried
#!/bin/bash
bash
source virtual_env/bin/activate
cd src
python script.py
but it didn't work for me.
Note: I can activate my virtualenv and use the /home/bin/virtual_env/python interpreter manually, but I want to do it via the shell script.
Often there is no need to activate a virtual environment:
#!/usr/bin/env sh
PYTHON_BIN='/path/to/virtual_environment/bin/python'
cd 'src' || exit 1
"${PYTHON_BIN}" 'script.py'
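For cron specifically, the same idea applies: skip activation entirely and call the venv's interpreter by absolute path in the crontab entry. The paths and schedule below are placeholders; adjust them to your layout:

```shell
# crontab -e entry: run the script every day at 06:30 with the venv's
# python. cron provides a minimal environment, so use absolute paths
# everywhere and redirect output to a log file for debugging.
# 30 6 * * * /home/user/virtual_env/bin/python /home/user/src/script.py >> /tmp/script.log 2>&1
```

The interpreter inside a virtualenv already knows where its site-packages live, so no activate step is needed.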
I needed a module for a shell script I've written in Python, so I used pipenv to install it. I can run the command fine using:
~$ pipenv run python3 foo
Now, if I want to just run ~$ foo on the command line (fish shell on MacOS, homebrew installed), how do I invoke the pipenv environment in the shebang of my Python script? Or is there a better way?
As documented at https://pipenv.readthedocs.io/en/latest/, you need to activate the virtual environment first. This will spawn another shell with the virtual environment activated:
$ pipenv shell
so that you can run
$ python foo
to execute your script. Then you can use
#!/usr/bin/env python
on the first line of your script and make the script executable (chmod +x foo.py) so that you can run
$ ./foo
If the location of that script is part of your PATH environment variable, you should now be able to run
$ foo.py
If you don't like the extension, you will have to remove it from the script's filename too.
With pipenv-shebang you can run your script with
pipenv-shebang PATH/SCRIPT
or you could insert the shebang
#!/usr/bin/env pipenv-shebang
to run it with just PATH/SCRIPT.
Creating a wrapper file like the one below works for me, though it is a little hacky:
import subprocess

if __name__ == '__main__':
    subprocess.run(['pipenv', 'run', 'foo'])
Environment
Windows Subsystem for Linux with Serial Communication to a GPS.
Adafruit GPS connected to an Arduino Nano, which is connected to COM10. In Windows Subsystem for Linux this is equivalent to /dev/ttyS10.
Requirements: pyserial
I have written a simple script to read information from the GPS module:
import serial

def select_sentence():
    """This function sends serial data to the GPS module to display only GPGGA and GPRMC"""

def read_gps():
    ser = serial.Serial("/dev/ttyS10", 9600)
    while True:
        print(ser.readline().decode('utf-8'))

if __name__ == "__main__":
    select_sentence()
    read_gps()
In the virtualenv I chose Python 3, and when I executed the script I got a PermissionError for the serial port /dev/ttyS10, so I ran sudo chmod 666 /dev/ttyS10 to be able to use the script in the virtualenv.
However, is there an alternative to the chmod on /dev/ttyS10 mentioned above, in order to avoid the PermissionErrors?
I am aware that even in a virtualenv, when one uses sudo the packages installed in the virtualenv are not considered; instead sudo looks at your global pip packages.
When you activate a virtualenv (by source venv/bin/activate or similar), that basically just tells your shell: "hey, when you search for a command, look in venv/bin before you look anywhere else", by updating the $PATH environment variable. That way, when you run a command like python, your shell sees and runs the python in venv/bin instead of in /usr/bin or wherever. That copy of Python is configured to look in venv/lib for packages rather than /usr/lib, so you can use the packages in your virtualenv instead of the ones installed globally.
However, when you run a program with sudo, it ignores $PATH. Why does it do that? Because in the historical days of *nix, it was common to have sudo set up so that users could execute only specific commands with it, like (say) sudo iftop [1], so that anyone could check what the network was being used for, but still nobody could run sudo rm -rf /*. If sudo respected the user's $PATH, you could just copy /bin/rm to ~/bin/iftop, add ~/bin to your $PATH, then run sudo iftop – but you would actually be running rm as root!
So, sudo ignores $PATH by default. But you can still execute specific programs by giving sudo the full path to the program, so you can execute the Python in your virtualenv as root by running something like sudo ./venv/bin/python (assuming your virtualenv is called venv). That will make you root while still having access to the packages in your virtualenv, like pyserial.
[1]: I don't actually know of any command that would be set up like this; this is a bad example, sorry.
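The full-path trick can be checked without sudo. This sketch creates a throwaway venv (the path /tmp/demo_venv is chosen here just for the demo) and shows that calling the interpreter by absolute path selects the venv regardless of what $PATH says:

```shell
# Create a minimal venv (--without-pip keeps it fast and dependency-free)
python3 -m venv --without-pip /tmp/demo_venv

# Calling the interpreter by full path selects that venv's environment:
/tmp/demo_venv/bin/python -c 'import sys; print(sys.prefix)'
# prints the venv path (/tmp/demo_venv) -- the same mechanism
# `sudo ./venv/bin/python` relies on, since sudo ignores your $PATH anyway
```

Nothing on $PATH pointed at the demo venv, yet the interpreter still reports the venv as its prefix; that is why handing sudo the explicit path works.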
Also make sure you created the virtualenv without sudo, since creating it as root may cause permission issues when you later use the virtual environment without sudo. If that's the case, run the command below:
sudo chown -R your_username:your_username path/to/virtualenv/
Then you can grant your user read permission on /dev/ttyS10 and run the Python script as that user.
NOTE: You also want to add a shebang line at the top of your Python script with the path to the Python interpreter that sits in your env. Then you will be able to call the script without naming the interpreter.
#!/usr/bin/env python
See more on that SO Answer: Should I put #! (shebang) in Python scripts, and what form should it take?
Here is my workaround on bash. Put this in an executable file on your PATH (e.g. vesudo):
#!/bin/bash
if [ -z "$VIRTUAL_ENV" ]; then
echo "Error: Virtual environment not found" >&2
exit 1
fi
_args=''
for _a in "$@"; do
_a="${_a//\\/\\\\}"
_args="$_args \"${_a//\"/\\\"}\""
done
sudo bash <<_EOF
source "$VIRTUAL_ENV/bin/activate"
$_args
_EOF
The logic is simple: escape the input arguments, run a privileged subshell, source the virtual environment, and pass the arguments to the subshell.
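The escaping step can be sanity-checked on its own. The sample arguments below are arbitrary; each backslash is doubled first, then embedded double quotes are escaped, and every argument is re-wrapped in quotes:

```shell
# Same escaping loop as in vesudo, run on three sample arguments
_args=''
for _a in 'plain' 'with space' 'quote"inside'; do
  _a="${_a//\\/\\\\}"
  _args="$_args \"${_a//\"/\\\"}\""
done
echo "$_args"
# -> "plain" "with space" "quote\"inside"
```

The result is a single string that a second shell can safely re-parse back into the original arguments, which is what the privileged heredoc subshell does.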
Example usage:
~/tmp$ source .venv/bin/activate
(.venv) ~/tmp$ which python
/home/omer/tmp/.venv/bin/python
(.venv) ~/tmp$ vesudo which python
/home/omer/tmp/.venv/bin/python
(.venv) ~/tmp$ which pip
/home/omer/tmp/.venv/bin/pip
(.venv) ~/tmp$ vesudo which pip
/home/omer/tmp/.venv/bin/pip
(.venv) ~/tmp$ vesudo python -c 'import sys; print(sys.argv)' it works 'with spaced arguments' as well
['-c', 'it', 'works', 'with spaced arguments', 'as', 'well']
(.venv) ~/tmp$ vesudo echo '$HOME'
/root
I put this script in a repo for convenience.
Add an alias on your Linux machine:
# ~/.bash_aliases
alias spython='sudo $(printenv VIRTUAL_ENV)/bin/python3'
NOTE: make sure you have the virtual env activated.
Then run your Python script with the spython command :)
spython file.py
I tried to make a cron job in crontab that runs a Python script on AWS EC2. My Python script includes a module that is only available for Python 3.
Using the following command I changed the EC2 default Python interpreter from python2.7 to python3.4:
source /home/ec2-user/venv/python34/bin/activate
and then using pip install I installed the required module for python3.4. So now the default interpreter is python3.4, and when I run the script in the ec2-user directory using the following command:
python test.py
the program runs without any problem (so I am sure the module is installed correctly).
But when I assign the Python file to a cron job:
* * * * * python test.py
It does not work. Checking the mail, the error is:
"No module found named 'xxxxx'"
But as I said it worked fine outside of the cron.
I was wondering if you can help me with this problem. I appreciate your time and information.
You have to make a shell script that changes to the script's directory, activates the virtual environment, and then runs the script.
Example:
#!/bin/bash
cd "$YOUR_DIR"
. venv/bin/activate
python3.4 test.py
Then you call this script in cron with
/bin/bash /.../script.sh
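A matching crontab entry might look like the line below; the path, log file, and schedule are placeholders to adapt:

```shell
# crontab -e: run the wrapper script every day at 07:00. Redirect output
# to a log file, since cron otherwise only mails errors to you.
# 0 7 * * * /bin/bash /home/ec2-user/script.sh >> /home/ec2-user/cron.log 2>&1
```

Because cron jobs do not read your shell profile, the wrapper script is what guarantees the right virtualenv (and therefore the right module set) is in effect.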
What you could do additionally is
chmod +x test.py
and add/update first line to:
#!/usr/bin/env python3.4
This way you can just run the Python script with ./test.py
Create a file named 'user_cron.sh':
#!/bin/bash
cd '/root/my_new_project_python'
. my_project_venv/bin/activate
python3 main.py
Then set up the cron job using crontab -e.