Environment
Windows Subsystem for Linux with Serial Communication to a GPS.
Adafruit GPS connected to an Arduino Nano, which is connected to COM10. In Windows Subsystem for Linux this is equivalent to /dev/ttyS10.
Requirements: pyserial
I have written a simple script to read information from the GPS module:
import serial

def select_sentence():
    """This function sends serial data to the GPS module to display only GPGGA and GPRMC"""

def read_gps():
    ser = serial.Serial("/dev/ttyS10", 9600)
    while True:
        print(ser.readline().decode('utf-8'))

if __name__ == "__main__":
    select_sentence()
    read_gps()
In the virtualenv I chose Python 3, and when I executed the script I got a PermissionError for the serial port /dev/ttyS10, so I ran sudo chmod 666 /dev/ttyS10 to be able to use the script in the virtualenv.
However, is there an alternative to the above-mentioned chmod of the serial device in order to avoid the PermissionErrors?
I am aware that even in the virtualenv, when one uses sudo, the packages installed in the virtualenv are not considered; instead sudo looks for your global pip packages.
When you activate a virtualenv (by source venv/bin/activate or similar), that basically just tells your shell: "hey, when you search for a command, look in venv/bin before you look anywhere else", by updating the $PATH environment variable. That way, when you run a command like python, your shell sees and runs the python in venv/bin instead of in /usr/bin or wherever. That copy of Python is configured to look in venv/lib for packages rather than /usr/lib, so you can use the packages in your virtualenv instead of the ones installed globally.
However, when you run a program with sudo, it ignores $PATH. Why does it do that? Because in the historical days of *nix, it was common to have sudo set up so that users could execute only specific commands with it, like (say) sudo iftop1, so that anyone could check what the network was being used for, but still nobody could run sudo rm -rf /*. If sudo respected the user's $PATH, you could just copy /bin/rm to ~/bin/iftop, add ~/bin to your $PATH, then run sudo iftop – but you would actually be running rm as root!
So, sudo ignores $PATH by default. But you can still execute specific programs by giving sudo the full path to the program, so you can execute the Python in your virtualenv as root by running something like sudo ./venv/bin/python (assuming your virtualenv is called venv). That will make you root while still having access to the packages in your virtualenv, like pyserial.
1: I don't actually know of any command that would be set up like this, this is a bad example, sorry.
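For example, with the GPS script above, a minimal sketch would be (assuming the virtualenv is called venv and the script is saved as gps_read.py, both hypothetical names):
# run the venv's interpreter directly under sudo
sudo ./venv/bin/python gps_read.py
# sanity check: should print the venv's interpreter path, not /usr/bin/python
sudo ./venv/bin/python -c 'import sys; print(sys.executable)'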
Also make sure you created the virtualenv without the sudo command, since that can cause permission issues when using the virtual env without sudo later. If that's the case, run the command below:
sudo chown -R your_username:your_username path/to/virtualenv/
Then you can grant your user read permission on /dev/ttyS10 and run the Python script as that user.
NOTE: You may also want to add a shebang line to the top of your Python script with the path to the Python interpreter that sits in your env. That way you will be able to call the script without naming the interpreter.
#!/usr/bin/env python
See more in this SO answer: Should I put #! (shebang) in Python scripts, and what form should it take?
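Putting the shebang advice into practice, a minimal sketch (the filename is hypothetical; with #!/usr/bin/env python the virtualenv must be activated so that its bin/ directory is first on PATH, otherwise point the shebang directly at the venv's interpreter, e.g. #!/home/your_username/venv/bin/python):
chmod +x gps_read.py   # make the script executable
./gps_read.py          # the shebang selects the interpreter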
Here is my workaround on bash. Put this in an executable file on your PATH (e.g. vesudo):
#!/bin/bash
if [ -z "$VIRTUAL_ENV" ]; then
echo "Error: Virtual environment not found" >&2
exit 1
fi
_args=''
for _a in "$@"; do
_a="${_a//\\/\\\\}"
_args="$_args \"${_a//\"/\\\"}\""
done
sudo bash <<_EOF
source "$VIRTUAL_ENV/bin/activate"
$_args
_EOF
The logic is simple: Escape input arguments, run a privileged subshell, source virtual environment and pass arguments to the subshell.
Example usage:
~/tmp$ source .venv/bin/activate
(.venv) ~/tmp$ which python
/home/omer/tmp/.venv/bin/python
(.venv) ~/tmp$ vesudo which python
/home/omer/tmp/.venv/bin/python
(.venv) ~/tmp$ which pip
/home/omer/tmp/.venv/bin/pip
(.venv) ~/tmp$ vesudo which pip
/home/omer/tmp/.venv/bin/pip
(.venv) ~/tmp$ vesudo python -c 'import sys; print(sys.argv)' it works 'with spaced arguments' as well
['-c', 'it', 'works', 'with spaced arguments', 'as', 'well']
(.venv) ~/tmp$ vesudo echo '$HOME'
/root
I put this script in a repo for convenience.
Add an alias on your Linux machine:
# ~/.bash_aliases
alias spython='sudo $(printenv VIRTUAL_ENV)/bin/python3'
NOTE: make sure you have the virtual env activated.
Run the Python script with the spython command :)
spython file.py
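A quick sanity check (a sketch; it assumes the virtualenv is activated so that $VIRTUAL_ENV is set):
spython -c 'import sys; print(sys.executable)'   # should print <your venv>/bin/python3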
After installing GDAL and Fiona via Homebrew, I can no longer access anything found in the /usr or /usr/local paths via the terminal. Prior to these installs, I've been accessing anything I care to using python3.9 -m ... such as python3.9 -m pip install ... or python3.9 -m jupyter notebook
Here is a copy of what my terminal looks like when trying to open jupyter notebook
my_name@name-MacBook-Pro ~ % python3.9 -m jupyter notebook
/opt/homebrew/opt/python@3.9/bin/python3.9: No module named jupyter
When I echo the PATH, this is what is returned:
my_name@name-MacBook-Pro ~ % echo $PATH
/opt/homebrew/lib:/opt/homebrew/lib:/opt/homebrew/bin:/Library/Frameworks/Python.framework/Versions/3.9/bin:/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/camelot/ext/ghostscript/9.53.3_1/lib:/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/camelot/ext/ghostscript/9.53.3_1/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
I'm no expert on the CLI or terminal for Mac, but I am not familiar with the path /opt/homebrew/opt/python@3.9/bin/python3.9, or with any of the paths above being main paths in general, or something that I would have set up.
It would appear that I somehow set the default path to some library within homebrew. That's my guess anyways.
I can temporarily fix the issue by executing PATH="/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin:$PATH" and export PATH. However, if I restart terminal or the computer, the issue re-occurs.
How do I permanently change the default paths back to reflect the paths shown when I run the above, which are the same as what's shown when I execute sudo nano /etc/paths, namely:
/opt/homebrew/bin
/usr/local/bin
/usr/bin
/bin
/usr/sbin
/sbin
Thanks in advance.
For those who come to this looking for a solution, I was able to back-track to where the issue first began.
Somehow, I wrote to my machine's .zshrc file. All I had to do was open it up via the terminal, delete the text that was mistakenly written, and save.
nano ~/.zshrc
delete the mistakenly-written lines
ctrl+X to exit
source .zshrc to make the changes stick
Exit the terminal
done!
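For reference, the kind of mistakenly-written entry to look for is a PATH export that pushes library directories (such as /opt/homebrew/lib) ahead of the standard binary directories. The lines below are only a hypothetical illustration of a bad line and a saner replacement, not the exact contents of the poster's file:
# bad: prepends a library (not a binary) directory to PATH
export PATH="/opt/homebrew/lib:$PATH"
# saner: only binary directories, matching /etc/paths
export PATH="/opt/homebrew/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"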
Hi, I am trying to run a Python script as sudo from inside my virtual environment.
When I have activated my virtual environment I would normally use python somescript.py, and my script starts up with the correct version of Python and everything.
When I use sudo python somescript.py I load up the wrong Python install, which is not the one from my environment.
How do I resolve this?
The activate script sets some environment variables (and defines some functions, ...) which facilitate invoking Python (and its tools). One way (more like a workaround) of achieving your goal would be for those variables to be carried across the [man7]: sudo(8) session. For that, you need to:
Pass the -E flag to sudo
PATH needs to be carried manually ([StackExchange.Unix]: How to make `sudo` preserve $PATH?)
All in all:
sudo -E env PATH=${PATH} python somescript.py
Output (works for simple commands):
(py_venv_pc064_03.05.02_test0) [cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q061715573]> python3 -c "import sys, os; print(\"EXE: {0:s}\nPATH: {1:s}\n\".format(sys.executable, os.environ[\"PATH\"]))"
EXE: /home/cfati/Work/Dev/VEnvs/py_venv_pc064_03.05.02_test0/bin/python3
PATH: /home/cfati/Work/Dev/VEnvs/py_venv_pc064_03.05.02_test0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
(py_venv_pc064_03.05.02_test0) [cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q061715573]> sudo python3 -c "import sys, os; print(\"EXE: {0:s}\nPATH: {1:s}\n\".format(sys.executable, os.environ[\"PATH\"]))"
EXE: /usr/bin/python3
PATH: /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin
(py_venv_pc064_03.05.02_test0) [cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q061715573]> sudo -E env PATH=${PATH} python3 -c "import sys, os; print(\"EXE: {0:s}\nPATH: {1:s}\n\".format(sys.executable, os.environ[\"PATH\"]))"
EXE: /home/cfati/Work/Dev/VEnvs/py_venv_pc064_03.05.02_test0/bin/python3
PATH: /home/cfati/Work/Dev/VEnvs/py_venv_pc064_03.05.02_test0/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
The one way that never fails in these kinds of situations is using the (Python) executable's full path. But since that's just a symlink, you'd probably want to preserve the environment anyway:
sudo -E env PATH=${PATH} /somePath/someFolder/myEnvironment/bin/python somescript.py
I think this is answered here: https://askubuntu.com/questions/234758/how-to-use-a-python-virtualenv-with-sudo
The issue is almost certainly that when you run sudo, the virtualenv
environment variables, aliases, functions, etc aren't being carried
over.
The solution would be to explicitly run the virtual environment's
Python executable with sudo. For example if your virtualenv is
./AwesomeProject, then you could run sudo ./AwesomeProject/bin/python
to use the script with the virtualenv with root privileges.
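For instance (a sketch, assuming the script sits next to the environment):
sudo ./AwesomeProject/bin/python somescript.py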
I am trying to deploy a Python webservice (with Flask) that uses CNTK in a Docker container. I use an Ubuntu base image from Microsoft that is supposed to contain all the necessary and correct programs and libraries to run CNTK.
The script works locally (on Windows) and also when I run the container and start a bash from the command line with
docker exec -it <container_id> bash
and start the script from "within the container".
An important addition is that the Python script uses two precompiled modules that are *.pyd files for Windows and *.so files for Linux. So for the Docker image I replaced the former with the latter in order to run the script from within the container.
The problems start when I start the script with a CMD in the Dockerfile. The creation of the image shows no problems. But when I start the container with
docker run -p 1234:80 app
I get the following error:
...
ImportError: libpython3.5m.so.1.0: cannot open shared object file: No such file or directory
It seems like the library is missing. But (I repeat) when I run the script from within a bash running in the container (which should only have the container's libraries as far as I can see), everything works fine. I can even look up the library with
ldd $(which python)
And the file is definitely in the folder. So the question is why python can't find its dependency when running the docker container.
It gets even weirder when I try to give the path to the library explicitly by writing it into the environment variable:
ENV LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/root/anaconda3/pkgs/python-3.5.2-0/lib/"
Then it seems the library is found, but it is not accepted as correct:
ImportError: dynamic module does not define init function (initcython_bbox)
"cython_bbox" is the name of one of the *.pyd / *.so file/library that is to be imported. This is apparantly a typical error for these kinds of filetypes. But I don't have any experience with them.
I am also not at the point (in my personal development) where I can compile my own files from foreign sources or create the Docker image itself on my own. I rely on the parts I got from Microsoft. But I would be open to suggestions.
I also already tried to install the library anew inside my Dockerfile after importing the base image with
RUN apt-get install -y libpython3.5
But it provoked the same error as when I put the path in the environment variable.
I am really eager to know what goes wrong here. Why does everything run smoothly "inside the container" but not when the script is autostarted at container initialization with CMD?
For additional info I add the Dockerfile:
# Use an official Python runtime as a parent image
FROM microsoft/cntk:2.5.1-cpu-python3.5
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN apt-get update && apt-get install -y python-pip
RUN pip install --upgrade pip
# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt
# Make port 80 available to the world outside this container
EXPOSE 80
# Run app.py when the container launches
CMD ["python", "test.py"]
The rest of the project is a pretty straightforward Flask webapp that runs without problems when I comment out all imports of the actual CNTK project. It is the CNTK Object Detection with Faster R-CNN, by the way, as it can be found in the CNTK git repository.
EDIT:
I found out what the actual problem is, yet I still have no way to solve it. The thing is that when I start bash with "docker exec" it runs a script at startup that activates an anaconda environment with python3.5 and all the neat libraries. But when CMD just starts python this is done by the standard Bourne shell "sh" which (as I tried out) runs with python2.7.
So I need a way either to start my container with bash (including its autostart scripts) or somehow activate the environment on startup in another way.
I looked up the script and it basically checks if bash is the current shell, sets some environment variables and activates the environment.
if [ -z "$BASH_VERSION" ]; then
echo Error: only Bash is supported.
elif [ "$(basename "$0" 2> /dev/null)" == "activate-cntk" ]; then
echo Error: this script is meant to be sourced. Run 'source activate-cntk'
else
export PATH="/cntk/cntk/bin:$PATH"
export LD_LIBRARY_PATH="/cntk/cntk/lib:/cntk/cntk/dependencies/lib:$LD_LIBRARY_PATH"
source "/root/anaconda3/bin/activate" "/root/anaconda3/envs/cntk-py35"
cat <<MESSAGE
************************************************************
CNTK is activated.
Please checkout tutorials and examples here:
/cntk/Tutorials
/cntk/Examples
To deactivate the environment run
source /root/anaconda3/bin/deactivate
************************************************************
MESSAGE
fi
I tried dozens of things, like linking sh to bash
RUN ln -fs /bin/bash /bin/sh
or using bash as ENTRYPOINT.
I have found a workaround that works for now.
First I manually link python to python3 in my environment:
RUN ln -fs /root/anaconda3/envs/cntk-py35/bin/python3.5 /usr/bin/python
Then I add the environment libraries to the Library-Path:
ENV LD_LIBRARY_PATH "/cntk/cntk/lib:/cntk/cntk/dependencies/lib:$LD_LIBRARY_PATH"
And to be sure I add all important folders to PATH:
ENV PATH "/cntk/cntk/bin:$PATH"
ENV PATH "/root/anaconda3/envs/cntk-py35/bin:$PATH"
I then have to install my python packages again:
RUN pip install flask
And can finally just start my script with:
CMD ["python", "app.py"]
I have also found this Git repository doing pretty much the same thing I did, and they also need to start their environment. It made me realize that I need to learn how to write better Dockerfiles. I think this is the correct way to do it, i.e. using a shell script as ENTRYPOINT
ENTRYPOINT ["/app/run.sh"]
which activates the environment, installs additional Python packages (this could be a problem) and starts the actual app.
#!/bin/bash
source /root/anaconda3/bin/activate root
pip install easydict
pip install azure-ml-api-sdk==0.1.0a9
pip install sanic
python3 /app/app.py
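One detail worth noting: with the exec-form ENTRYPOINT above, run.sh has to carry the executable bit inside the image, e.g. (a sketch; do this before building, or add an equivalent RUN step to the Dockerfile):
chmod +x run.sh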
I had a problem where python was not finding modules installed by pip while in the virtualenv.
I have narrowed it down, and found that when I call python when my virtualenv is activated, it still reaches out to /usr/bin/python instead of /home/liam/dev/.virtualenvs/noots/bin/python.
When I use which python in the virtualenv I get:
/home/liam/dev/.virtualenvs/noots/bin/python
When I look up my $PATH variable in the virtualenv I get:
bash: /home/liam/dev/.virtualenvs/noots/bin:/home/liam/bin:/home/liam/.local/bin:/home/liam/bin:/home/liam/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin: No such file or directory
and yet when I actually run python it goes to /usr/bin/python
To make things more confusing to me, if I run python3.5 it grabs python3.5 from the correct directory (i.e. /home/liam/dev/.virtualenvs/noots/bin/python3.5)
I have not touched /home/liam/dev/.virtualenvs/noots/bin/ in anyway. python and python3.5 are still both linked to python3 in that directory. Traversing to /home/liam/dev/.virtualenvs/noots/bin/ and running ./python, ./python3 or ./python3.5 all work normally.
I am using virtualenvwrapper if that makes a difference; however, the problem seemed to occur only recently, long after installing virtualenv and virtualenvwrapper.
My problem was that I recently moved my project with the virtualenv to another location, and because of this the activate script had the wrong VIRTUAL_ENV path.
$ cat path_to_your_env/bin/activate
... # some declarations
VIRTUAL_ENV="/path_to_your_env/bin/python" # <-- THIS LINE
export VIRTUAL_ENV
... # some declarations
To fix this, just update VIRTUAL_ENV in the activate script.
Also, you may need to fix the first line of your bin/pip so that it points to the real Python path.
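A sketch of what the fix looks like after moving the environment (the new location below is hypothetical; in a standard virtualenv, VIRTUAL_ENV points at the environment's root directory):
# in path_to_your_env/bin/activate
VIRTUAL_ENV="/new/location/path_to_your_env"
export VIRTUAL_ENV
# first line (shebang) of path_to_your_env/bin/pip
#!/new/location/path_to_your_env/bin/python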
As tdelaney suggested in the comments, I ran alias and found that I had previously aliased python to /usr/bin/python3.5 in my .bashrc.
I removed that alias from my .bashrc, ran unalias python and source ~/.bashrc, and the problem was solved.
If you don't get the program that which says you should get, you need to look higher up the chain than the platform executor. Shells typically have a way to alias commands, and on most unixy shells you can just enter alias to see which commands have been remapped. Then it's just a matter of going to the config files for your shell and removing the alias.
Sometimes people alias python to try to sort out which python they should be using. But there are usually other, better ways. On my linux machine, for example, python3 is in the path but is a symlink to the real python I am using.
td@mintyfresh ~ $ which python3
/usr/bin/python3
td@mintyfresh ~ $ ls -l /usr/bin/python3
lrwxrwxrwx 1 root root 9 Feb 17 2016 /usr/bin/python3 -> python3.4
td@mintyfresh ~ $
This is nice because non-shell programs running python get the same one I do and virtual environments work naturally.
On Cygwin, I still have a problem even after I created a symlink pointing /usr/bin/python to F:\Python27\python.exe. Here, after source env/Scripts/activate, which python is still /usr/bin/python.
After a long time, I figured out a solution. Instead of using virtualenv env, you have to use virtualenv -p F:\Python27\python.exe env, even though you have created the symlink.
I'm currently having the same problem. Virtualenv was created in Windows, now I'm trying to run it from WSL.
In the virtualenv I renamed python.exe to python3.exe (as I only have the python3 command in WSL). In $PATH my virtualenv folder is first, and there is no alias for python. Yet which python3 returns /usr/bin/python3, where /usr/bin/python3 is a symlink python3 -> python3.6; I suppose that doesn't matter for resolution order.
Had the exact same problem.
I ran:
virtualenv -p /venv/bin/python3 env
and got a "permission denied" error.
So I tried:
sudo chmod 777 -R /venv/bin
which python and print(sys.executable) were not agreeing for me. This meant that with an active virtualenv, pip install <package> would install to the virtualenv, but running python would use the base install.
I eventually got around this by running
virtualenv -p \path\to\python.exe --always-copy <venvName>
I'm not sure if specifying the path to the original python is really necessary, but can't hurt. According to the man page:
--copies, --always-copy try to use copies rather than symlinks, even when symlinks are the default for the platform (default: False)
I'm using windows powershell with msys64.
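After recreating the environment with --always-copy, a quick way to confirm the two now agree (a sketch):
which python
python -c "import sys; print(sys.executable)"   # both should point into the new venv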
I recently came across this in a cron script at my place of work:
/bin/bash -c "[[ -s $HOME/.pythonbrew/etc/bashrc ]] && source $HOME/.pythonbrew/etc/bashrc && pythonbrew use 2.6.7 && pythonbrew venv use someapp && python /opt/someapp/bin/someapp.py"
This is for a system-wide (multi-user) installation of Pythonbrew.
It works. But please tell me there's a better way.
Addendum
To clarify what I'm looking for: I'd like a one-line command to run my script through a virtualenv tied to pythonbrew. With virtualenv alone, I could do something like this:
/opt/someapp/venv/bin/python /opt/someapp/bin/someapp.py
What I don't want is another script to run my script (like that cron command above).
I believe it can be done by using the Python binary directly from your pythonbrew virtual environment.
By default it's in ~/.pythonbrew/venvs/Python-<version>/<name of venv>/bin/python
But I think you can change the path with an environment variable.
So just change the first half of the line you added to reference the pythonbrew virtual environment's Python binary, and it should work.
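Putting that together with the paths from the question, the cron line would collapse to something like this (a sketch; the exact venv path depends on where pythonbrew is installed, and a system-wide installation lives under its own prefix rather than ~/.pythonbrew):
~/.pythonbrew/venvs/Python-2.6.7/someapp/bin/python /opt/someapp/bin/someapp.py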
On the first line of your Python script, add a shebang (#!) followed by the path to your target Python. Then make the script executable. It can then be executed directly from the command line (crontab, another bash script, whatever).
make a virtual env in your temp dir:
$ cd /tmp
$ virtualenv venv
the path to your python in that venv is /tmp/venv/bin/python
Using an editor create a simple script containing all of the following:
#!/tmp/venv/bin/python
print("hello world")
Save it in your home directory as "mypyscript.py"
make it executable:
$ chmod 755 mypyscript.py
Now you should be able to execute it using the filename directly on the command line:
$ ./mypyscript.py
hello world
Do this for your someapp.py, substituting the relevant path to your Python, and that should work.
The trick turned out to be locating the pythonbrew virtualenv's python binary. Mark's answer pointed me in the right direction. But here's a complete rundown for future reference:
With pythonbrew installed, I did the following (as root on the server):
pythonbrew install 2.6.6
pythonbrew switch 2.6.6
pythonbrew venv create --no-site-packages myapp
I had a pip freeze file, so I set up my virtualenv using that:
/usr/local/pythonbrew/venvs/Python-2.6.6/myapp/bin/pip install -r /tmp/requirements.pip
Now my python binary can be found at /usr/local/pythonbrew/venvs/Python-2.6.6/myapp/bin/python. So to run my script:
/usr/local/pythonbrew/venvs/Python-2.6.6/myapp/bin/python /opt/myapp/bin/myapp.py
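With that, the cron entry no longer needs the bash wrapper; a sketch (the schedule shown is arbitrary):
0 * * * * /usr/local/pythonbrew/venvs/Python-2.6.6/myapp/bin/python /opt/myapp/bin/myapp.py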