I'm writing a program that uses some cryptography for a class. Since I'm low on time, I'd like to go with Python for this assignment. The issue that I run into is that the code must be able to work on the Linux machines at the school. We are able to SSH into those machines and run the code, but we aren't allowed to install anything. I'm using the Cryptography library for Python:
pip install cryptography
Is there a straightforward way that I can include this with my .py file such that the issue of not being able to install the library on the Linux machines won't be a problem?
You have a few options:
virtualenv
Install into a virtualenv (assuming the virtualenv command is installed):
$ cd projectdir
$ virtualenv venv
$ source venv/bin/activate
(venv)$ pip install cryptography
(venv)$ vim mycode.py
(venv)$ python mycode.py
The trick is that you install into a local virtual environment, which does not require root privileges.
tox
tox is a great tool. After investing a bit of time, you can easily create multiple virtualenvs.
This assumes you have tox installed on your system.
$ tox-quickstart
$ ...accept all defaults
$ vim tox.ini
The tox.ini may look like:
[tox]
envlist = py27
skipsdist = true
[testenv]
commands = python --version
deps =
    cryptography
Then run (with any virtualenv deactivated):
$ tox
It will create a virtualenv in the directory .tox/py27.
Activate it (still being in the same dir):
$ source .tox/py27/bin/activate
(py27)$ pip freeze
cryptography==1.2.2
... and a few more ...
Install into --user python profile
While this allows installing without root privileges, it is not recommended, as it soon ends in one big mess.
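For completeness, such an install looks like this:
$ pip install --user cryptography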
EDIT (in reaction to MattDMo's comment):
If one user has two projects with conflicting requirements (e.g. different package versions), a --user installation will not work, as the packages live in one scope shared across all of that user's projects.
With virtualenvs you can keep each virtualenv inside its project folder and feel free to destroy, recreate, or modify any of them without affecting any other project.
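For example (project names and package versions here are only illustrative):
$ cd ~/projectA && virtualenv venv && . venv/bin/activate
(venv)$ pip install 'cryptography==1.2.2'
(venv)$ deactivate
$ cd ~/projectB && virtualenv venv && . venv/bin/activate
(venv)$ pip install 'cryptography==2.1.4'
(venv)$ deactivate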
Virtualenvs have no problem with "piling up": if you can find your project folder, you will be able to find and manage the related virtualenv(s) in it.
Use of virtualenv has become the de facto recommended standard. I remember numerous examples starting by creating a virtualenv, but I cannot remember a single case using $ pip install --user.
I use Windows. My tox.ini:
[tox]
envlist =
    docs
min_version = 4
skipsdist = True
allowlist_externals = cd
passenv =
    HOMEPATH
    PROGRAMDATA
basepython = python3.8

[testenv:docs]
changedir = docs
deps =
    -r subfolder/package_name/requirements.txt
commands =
    bash -c "cd ../subfolder/package_name/ && pip install ."
Within a tox environment, I install some additional packages. I do it as above in tox's commands.
However, tox installs the library into WSL's Python instance path:
/home/usr/.local/lib/python3.8/site-packages
not into tox's environment directory for libraries:
.tox/env/lib/site-packages
I trigger tox inside a Python virtual environment. If I run it in Docker, everything works fine.
Besides WSL, I have two Python versions installed on Windows: 3.8 and the main one, 3.10. The virtual environment that I'm using for tox is based on 3.8, but I tested with 3.10 and get the same result. In the system PATH variable they appear in this order, at the top of the list:
C:\Users\UserName\AppData\Local\Programs\Python\Python310\Scripts
C:\Users\UserName\AppData\Local\Programs\Python\Python310
C:\Users\UserName\AppData\Local\Programs\Python\Python38\Scripts
C:\Users\UserName\AppData\Local\Programs\Python\Python38\
As I found in the tox documentation:
Name or path to a Python interpreter which will be used for creating
the virtual environment, first one found wins. This determines in
practice the Python for what we’ll create a virtual isolated
environment.
So:
I'm trying to understand and fix how tox picks up the path to the Python interpreter, so that it installs libraries inside the tox environment's lib folder.
You should install additional Python packages via the deps directive; see the example in our documentation:
https://tox.wiki/en/latest/config.html#tox-ini
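A rough sketch of what that could look like for the tox.ini in the question (paths are the asker's; {toxinidir} is tox's built-in substitution for the directory containing tox.ini, and whether the local package installs cleanly this way depends on the project):
[testenv:docs]
changedir = docs
deps =
    -r {toxinidir}/subfolder/package_name/requirements.txt
    {toxinidir}/subfolder/package_name/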
P.S.: I am one of the tox maintainers
Found a solution. Silly mistake:
Use python instead of bash to install libraries.
commands =
    python -m pip install [path_to_library]
As simple as that.
What is the proper way to install Python programs into the shell's namespace via Poetry? With setuptools, you can use $ pip install -e path/to/project and then invoke $ project. However, I have not found a way to do that with Poetry; instead I need to use $ poetry run project.
To clarify, I want the following behavior:
$ poetry <some command> path/to/project
$ project
<output of project>
I guess your goal is to install your CLI and make changes to it without having to reinstall it?
As you've probably noticed, pip install -e doesn't yet support PEP 518-based projects like the ones you get with Poetry.
Poetry's solution for this is of course to let you execute installed libs or CLI tools with poetry run as you described, or to activate the virtualenv that Poetry creates containing your project in a new shell with poetry shell, which is similar to an editable installation as long as you stay inside that shell.
Of course you can still install the project as usual with pip install . and mess around with the files at the installed location, though this isn't a recommended workflow.
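For reference, assuming the project declares a console script named project (e.g. under [tool.poetry.scripts] in pyproject.toml), the two workflows mentioned above look roughly like this:
$ poetry install        # install the project and its entry points into Poetry's virtualenv
$ poetry run project    # run the entry point through Poetry
$ poetry shell          # or spawn a new shell with that virtualenv activated
(venv) $ project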
When installing Python packages through pip in a Dockerfile, such as:
pip install --trusted-host pypi.python.org -r requirements.txt
with requirements.txt, e.g. as:
python-dotenv>=0.15.0
psycopg2>=2.8.6
sqlalchemy>=1.3.22
numpy>=1.19.0
rasterio>=1.1.8
pandas>=1.1.5
geopandas>=0.8.1
matplotlib>=3.3.0
seaborn>=0.11.0
I recently saw this warning:
WARNING: Running pip as the 'root' user can result in broken permissions
and conflicting behavior with the system package manager.
It is recommended to use a virtual environment
instead: https://pip.pypa.io/warnings/venv
Hence my 'naive' question:
would it make sense to set up a virtual environment, or to install Python packages as a non-root user (root being the default in Docker), as one would normally do on a local computer?
So far I have never cared about that, because I'm inside a Docker container which by definition hosts a single application, so I think it's perfectly OK that these packages are installed globally. Hopefully I cannot break anything on my local machine.
Honestly? It doesn't much matter.
Using the root user during the build of your container is generally necessary and expected. The warning from pip is "running pip as root could screw up the packages your OS programs depend on" - but there's no OS in your container.
If you drop to a less privileged user at the end of your build or during your docker run, installing the packages as root won't have hurt you any. Practically, a container is a single process (your Python application) that has a view of the filesystem different from the root system - very much like what a virtualenv tries to accomplish.
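For example (user and image names are placeholders, not from the question), you can drop privileges either at the end of the Dockerfile or at run time:
# last line of the Dockerfile
USER nobody
# or when starting the container
$ docker run --user 1000:1000 myimage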
As @Paul Becotte said in his answer, there is no risk in installing your packages globally in your container, but you are seeing this warning because pip doesn't care whether you are running inside a container or not.
The general Python good practice is to create a virtual environment as an unprivileged user: $ python -m venv .venv
Then activate it: $ source .venv/bin/activate
And then install your packages with your $ pip install -r requirements.txt command.
You can totally adapt that to the Docker build syntax:
# use "." instead of "source": the default shell for RUN and CMD is /bin/sh, which has no "source" builtin
RUN python -m venv /absolute/path/to/venv
RUN . /absolute/path/to/venv/bin/activate && pip install -r requirements.txt
CMD . /absolute/path/to/venv/bin/activate && python /path/to/your/app.py
However, the general Docker good practice is to run with an unprivileged user. So you have multiple choices here:
Live with the warning, install your Python packages globally without a venv, drop the privileges, and run your container as a normal user
Create a venv to get rid of the warning, install your packages in it, and run your container as root (not a good practice, but remapping the root user may be acceptable)
Create a venv, install the packages, and run the container as a normal user, which is probably the best option if your application doesn't need to run as root in your container (a rough sketch of this follows below)
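A rough Dockerfile sketch of that last option (base image, user name, and file names are placeholders, not taken from the question):
FROM python:3.10-slim
# create an unprivileged user and switch to it
RUN useradd --create-home appuser
USER appuser
WORKDIR /home/appuser/app
# create the venv and put its bin/ first on PATH, so later RUN/CMD lines need no "activate"
RUN python -m venv /home/appuser/venv
ENV PATH="/home/appuser/venv/bin:$PATH"
COPY --chown=appuser requirements.txt .
RUN pip install -r requirements.txt
COPY --chown=appuser . .
CMD ["python", "app.py"]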
It's not best practice to run pip as the root user or to install packages globally. The best practice is to create a virtual environment within the project and install every package there.
If you install globally, it will actually affect other projects: you might not need the packages you installed for project A in the project B that you are about to start, and if you remove all the packages, project A will break because it can no longer access them.
Have a virtual environment for each of your projects, and with that you can rest assured that your projects will not affect each other.
I have a package that I am developing for a local server. I would like to have the current stable release importable in a Jupyter notebook using import my_package and the current development state importable (for end-to-end testing and stuff) with import my_package_dev, or something like that.
The package is version controlled with git. The master branch holds the stable release, and new development work is done in the develop branch.
I currently pulled these two branches into two different folders:
my_package/  # tracks master branch of repository
    setup.py
    requirements.txt
    my_package/
        __init__.py
        # other stuff

my_package_dev/  # tracks develop branch of repository
    setup.py
    requirements.txt
    my_package/
        __init__.py
        # other stuff for dev branch
My setup.py file looks like this:
from setuptools import setup

setup(
    name='my_package',  # or 'my_package_dev' for the dev version
    # metadata stuff...
)
I can pip install my_package just fine, but I have been unable to get anything to link to the name my_package_dev in Python.
Things I have tried
pip install my_package_dev
Doesn't seem to overwrite the existing my_package, but doesn't seem to make my_package_dev available either, even though pip says it finishes OK.
pip install -e my_package_dev
makes an egg and puts the development package path in easy-install.pth, but I cannot import my_package_dev, and my_package is still the old content.
Adding a file my_package_dev.pth to the site-packages directory and filling it with /path/to/my_package_dev
causes no visible change. Still does not allow me to import my_package_dev.
Thoughts on a solution
It looks like the best approach is going to be to use virtual environments, as discussed in the answers.
With pip install you install packages by the name given in setup.py's name attribute. If you have installed both and run pip freeze, you will see both packages listed. Which code is available depends on how they are included in the Python path.
The issue is that those two packages both contain just a Python module named my_package; that is why you cannot import my_package_dev (it does not exist).
I would suggest you have a working copy for each version (without modifying the package name) and use virtualenv to keep the environments isolated (one virtualenv for the stable version and another for dev).
You could also use pip's editable install to keep the environment updated with the working copies.
Note: renaming my_package_dev's my_package module directory to my_package_dev will also work, but it will be harder to merge changes from one version to the other.
The answer provided by Gonzalo got me on the right track: use virtual environments to manage two different builds. I created the virtual environment for the master (stable) branch with:
$ cd my_package
$ virtualenv venv # make the virtual environment
$ source venv/bin/activate
(venv) $ pip install -r requirements.txt # install everything listed as a requirement
(venv) $ pip install -e . # install my_package dynamically so that any changes are visible right away
(venv) $ sudo venv/bin/python -m ipykernel install --name 'master' --display-name 'Python 3 (default)'
And for the develop branch, I followed the same procedure in my my_package_dev folder, giving it a different --name and --display-name value.
Note that I needed to use sudo for the final ipykernel install command because I kept getting permission denied errors on my system. I would recommend trying without sudo first, but for this system it needed to be installed system-wide.
Finally, to switch between which version of the tools I am using, I just have to select Kernel -> Change kernel and choose Python 3 (default) or Python 3 (develop). The import stays the same (import my_package), so nothing in the notebook has to change.
This isn't quite my ideal scenario since it means that I will then have to re-run the whole notebook any time I change kernels, but it works!
I've created a folder and initialized a virtualenv instance in it.
$ mkdir myproject
$ cd myproject
$ virtualenv env
When I run (env)$ pip freeze, it shows the installed packages as it should.
Now I want to rename myproject/ to project/.
$ mv myproject/ project/
However, now when I run
$ . env/bin/activate
(env)$ pip freeze
it says pip is not installed. How do I rename the project folder without breaking the environment?
You need to adjust your install to use relative paths. virtualenv provides for this with the --relocatable option. From the docs:
Normally environments are tied to a specific path. That means that you cannot move an environment around or copy it to another computer. You can fix up an environment to make it relocatable with the command:
$ virtualenv --relocatable ENV
NOTE: ENV is the name of the virtual environment and you must run this from outside the ENV directory.
This will make some of the files created by setuptools or distribute use relative paths, and will change all the scripts to use activate_this.py instead of using the location of the Python interpreter to select the environment.
Note: you must run this after you've installed any packages into the environment. If you make an environment relocatable, then install a new package, you must run virtualenv --relocatable again.
I believe "knowing why" matters more than "knowing how". So, here is another approach to fix this.
When you run . env/bin/activate, it actually executes the following commands (using /tmp for example):
VIRTUAL_ENV="/tmp/myproject/env"
export VIRTUAL_ENV
However, you have just renamed myproject to project, so that command now points to a path that no longer exists.
That is why it says pip is not installed: you haven't installed pip in the system's global environment, and your virtualenv's pip is not sourced correctly.
If you want to fix this manually, this is the way:
With your favorite editor (e.g. Vim), modify /tmp/project/env/bin/activate, usually at line 42:
VIRTUAL_ENV='/tmp/myproject/env' => VIRTUAL_ENV='/tmp/project/env'
Modify /tmp/project/env/bin/pip in line 1:
#!/tmp/myproject/env/bin/python => #!/tmp/project/env/bin/python
After that, activate your virtual environment env again, and you will see that your pip is back.
NOTE: As @jb. points out, this solution only applies to easily (re)created virtualenvs. If an environment takes several hours to install, this solution is not recommended.
Virtualenvs are great because they are easy to make and switch around; they keep you from getting locked into a single configuration. If you know the project requirements, or can get them, make a new virtualenv:
Create a requirements.txt file
(env)$ pip freeze > requirements.txt
If you can't create the requirements.txt file, check env/lib/pythonX.X/site-packages before removing the original env.
Delete the existing (env)
deactivate && rm -rf env
Create a new virtualenv, activate it, and install requirements
virtualenv env && . env/bin/activate && pip install -r requirements.txt
Alternatively, use virtualenvwrapper to make things a little easier, as all virtualenvs are kept in a centralized location:
$(old-venv) pip freeze > temp-reqs.txt
$(old-venv) deactivate
$ mkvirtualenv new-venv
$(new-venv) pip install -r temp-reqs.txt
$(new-venv) rmvirtualenv old-venv
I always install virtualenvwrapper to help out. From the shell prompt:
pip install virtualenvwrapper
There is a way documented in the virtualenvwrapper documentation: cpvirtualenv.
This is what you do. Make sure you are out of your environment and back to the shell prompt. Type in this with the names required:
cpvirtualenv oldenv newenv
And then, if necessary:
rmvirtualenv oldenv
To go to your newenv:
workon newenv
You can fix your issue by following these steps:
rename your directory
rerun this: $ virtualenv ..\path\renamed_directory
virtualenv will correct the directory associations while leaving your packages in place
$ scripts/activate
$ pip freeze to verify your packages are in place
An important caveat: if you have any static path dependencies in script files in your virtualenv directory, you will have to change those manually.
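One way to spot such leftover paths (assuming a Unix-like shell; the directory names are placeholders) is to grep the environment for the old name:
$ grep -rl "old_dir_name" renamed_directory/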
Yet another way to do it that worked for me many times without problems is virtualenv-clone:
pip install virtualenv-clone
virtualenv-clone old-dir/env new-dir/env
Run this inside your project folder:
cd bin
sed -i 's/old_dir_name/new_dir_name/g' *
Don't forget to deactivate and activate the environment again.
In Python 3.3+ with built-in venv
As of Python 3.3, the virtualenv package is built into Python as the venv module. There are a few minor differences, one of which is that the --relocatable option has been removed. As a result, it is normally best to recreate a virtual environment rather than attempt to move it. See this answer for more information on how to do that.
What is the purpose behind wanting to move rather than just recreate any virtual environment? A virtual environment is intended to manage the dependencies of a module/package with the venv so that it can have different and specific versions of a given package or module it is dependent on, and allow a location for those things to be installed locally.
As a result, a package should provide a way to recreate the venv from scratch. Typically this is done with a requirements.txt file and sometimes also a requirements-dev.txt file, and even a script to recreate the venv in the setup/install of the package itself.
One part that may give headaches is that you may need a particular version of Python as the executable, which is difficult to automate, if not already present. However, when recreating an existing virtual environment, one can simply run python from the existing venv when creating the new one. After that it is typically just a matter of using pip to reinstall all dependencies from the requirements.txt file:
From Git Bash on Windows:
python -m venv mynewvenv
source mynewvenv/Scripts/activate
pip install -r requirements.txt
It can get a bit more involved if you have several local dependencies from other locally developed packages, because you may need to update local absolute paths, etc. - though if you set them up as proper Python packages, you can install from a git repo, and thus avoid this issue by having a static URL as the source.
virtualenv --relocatable ENV is not a desirable solution. I assume most people want the ability to rename a virtualenv without any long-term side effects.
So I've created a simple tool to do just that. The project page for virtualenv-mv outlines it in a bit more detail, but essentially you can use virtualenv-mv just like you'd use a simple implementation of mv (without any options).
For example:
virtualenv-mv myproject project
Please note, however, that I just hacked this up. It could break under unusual circumstances (e.g. symlinked virtualenvs), so please be careful (back up what you can't afford to lose) and let me know if you encounter any problems.
An even easier solution that worked for me: just copy the site-packages folder of your old virtual environment into a new one.
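A rough sketch of that approach (the Python version in the paths is a placeholder, and console scripts with hard-coded shebang lines may still need reinstalling):
$ virtualenv newenv
$ cp -r oldenv/lib/pythonX.X/site-packages/. newenv/lib/pythonX.X/site-packages/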
Using Visual Studio Code (vscode), I just opened the ./env folder in my project root, and did a bulk find/replace to switch to my updated project name. This resolved the issue.
Confirm the result with which python.
If you are using a conda env,
conda create --name new_name --clone old_name
conda remove --name old_name --all # or its alias: `conda env remove --name old_name`