Let me first outline my desired solution and then ask a specific question about how to achieve this state.
I'm soon starting two coding projects in Python. I've used Python before, but never on such big projects. My ideal scenario would be a setup where I can run virtual environments and different Python versions for various projects. Some research pointed me to virtualenv / virtualenvwrapper and pyenv. It seems that with pyenv-virtualenv or pyenv-virtualenvwrapper there is a nice way to specify the virtual environment and Python version for a specific project.
Question: Once I've set up a virtual environment and Python version for a specific project, how easily could I switch to a newer Python version at a later stage? Let's say I've started project A with Python 3.4, and a year from now I would like to move everything to Python 3.6. Is this possible in a neat way?
Sure:
$ rm -r my-python-3.4-env
$ virtualenv -p python3.6 my-python-3.6-env
$ source my-python-3.6-env/bin/activate
In other words, each virtual environment is just a folder with the necessary files in it. You "activate" an environment with the source .../activate command (in the case of virtualenv), and you leave it just as easily. To switch to a different Python version you simply create a new environment with the desired Python executable and activate it.
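Since you mention pyenv: with the pyenv-virtualenv plugin the switch looks much the same (a sketch; the version number, environment name and project path are illustrative):
$ pyenv install 3.6.5
$ pyenv virtualenv 3.6.5 project-a-3.6
$ cd ~/code/project-a
$ pyenv local project-a-3.6
The last command writes a .python-version file, so the environment is selected automatically whenever you enter that directory (provided pyenv-virtualenv's shell hook is set up).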
What you want to be careful about is keeping your installation repeatable: if you depend on external modules (which modern projects typically do), you don't want to install each dependency by hand. Instead, automate it. For instance, create a setuptools setup.py file that lists your dependencies, and have it install them into your new environment automatically:
$ source my-python-3.6-env/bin/activate
(my-python-3.6-env) $ python setup.py develop
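For reference, such a setup.py can be as small as this (a minimal sketch; the package name and the dependencies are placeholders, not taken from any real project):

from setuptools import setup, find_packages

setup(
    name="project-a",          # placeholder name
    version="0.1",
    packages=find_packages(),
    install_requires=[         # illustrative dependencies
        "requests>=2.0",
        "click",
    ],
)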
Related
My question is: do I have to install Django every single time in my virtual environment in order to run my Python files? And is this taking up a bunch of space on my machine? My project also uses matplotlib, and every virtual environment I create asks me to install the matplotlib module too. It's getting annoying. Do I have to do this every time?
I'm new to Django. I wanted to run some Python files in Django, but they weren't working, so after some research I found out I needed to run my PyCharm project in a virtual environment in order to run these Python files.
My folders look like this: pycharmProjects -> my project
I enter pycharmProjects and set up the virtual environment using "pipenv shell". Then I run "python3 manage.py runserver". It turns out I must install Django in the virtual environment before the files run.
The short answer is no, you don't have to use a virtual environment at all and can install your dependencies globally instead. However, you will soon find that it causes a lot of issues. The main reason to create a virtual environment is to give you control over your dependencies and to prevent bugs caused by projects getting their wires crossed with each other.
Short answer: yes.
If you create a virtualenv, you have to install all the packages that your program needs.
Long answer:
You could install Django system-wide and then create a virtualenv with the option
--system-site-packages; then Django would be used from your globally installed Python.
(Or you could install everything just in your global Python, but I personally don't think this is good practice.)
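For example, to create an environment with that option (the name env is arbitrary):
$ virtualenv --system-site-packages env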
If you work with many different projects, I think you will avoid a lot of trouble if you use one virtualenv per project.
Trouble meaning: one project breaks because a pip install for another project changed the version of a package to one the first project can't handle.
I would recommend creating a requirements.txt file for each project that lists the dependencies. Then you can populate a fresh virtualenv with the following command:
pip install -r requirements.txt
If you have requirements.txt files, you can recreate virtualenvs rather quickly when going back to an old project, and you can delete virtualenvs whenever you run out of disk space. If you want to be on the safe side, type pip freeze > pipfreeze.txt prior to deleting the virtualenv, and use pip install -r pipfreeze.txt if you want to create one with the same modules and the same versions.
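Putting that workflow together (a sketch; the environment and file names are arbitrary):
$ pip freeze > pipfreeze.txt   # snapshot the exact versions first
$ deactivate
$ rm -r env                    # reclaim the disk space
# later, to recreate the environment:
$ virtualenv env
$ source env/bin/activate
$ pip install -r pipfreeze.txt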
You also might want to look at direnv or autoenv if you're working on a Linux-like system.
These automatically switch to the required virtualenv when you change into a project's working directory.
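With direnv, for instance, a one-line .envrc in the project root is enough (a sketch; layout python3 is direnv's built-in helper that creates and activates a venv for that directory):
# .envrc
layout python3
Then run direnv allow once to authorize the file.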
I've been looking around but was not able to find a definitive answer...
So here's my question.
I come from a JavaScript background. I'm trying to pick up Python now.
In JavaScript, the basic practice would be to npm install (or use yarn).
This installs the required modules locally, inside a specific project.
Now, for Python, I've figured out that pip install is the package manager.
I can't seem to figure out how to install packages specific to a project (like JavaScript does it).
Instead, it's all global. I've found the --user flag, but that's not really what I'm looking for.
I've come to the conclusion that this is just a completely different scheme and I shouldn't approach it the way I did with JavaScript.
However, I can't really find a good document explaining why this method was favored.
It may be just my problem, but I can't help thinking about how I'm constantly bloating my global pip folder with modules that I'm only ever gonna use once for some single project.
Thanks.
A.) Anaconda (the simplest): just download Anaconda, which comes with a lot of Python modules preinstalled, and it also has code editors. You can create multiple module collections with the GUI.
B.) Venv = virtual environments (if you need something light, with specific packages for each project).
macOS terminal commands:
Install the tooling (note: the venv module used below ships with Python 3, so this step is only needed if you want the separate virtualenv package):
pip install virtualenv
Set up the venv (inside the base project folder):
python3 -m venv thenameofyourvirtualenvironment
Activate the venv:
source thenameofyourvirtualenvironment/bin/activate
Deactivate the venv:
deactivate
While it is activated you can install specific packages, e.g.:
pip -q install bcrypt
C.) Use Docker. It is great if you want to go in depth and have a solid, reproducible setup, but it can get complicated.
Pip is a program used to manage packages for a Python distribution. You usually have one system distribution, which is by default managed by pip. When you do pip install scipy, you install the scipy package into your system Python. Every time you import scipy after that, it will work, because your system Python has it.
Project-specific distributions are accomplished by using virtual environments. python -m venv env creates a copy of the system Python interpreter together with pip, setuptools and a couple of other essential tools. In other words, a virtual environment created this way starts out empty.
To use the created virtual environment, run source env/bin/activate. After that, every time you invoke the python command it will use the activated interpreter, and when you install packages using pip, they will be installed into the virtual environment rather than into your system Python. To use the system Python again, run deactivate.
Such usage is actually preferred for projects, because some user applications may rely on the system Python and its packages, so installing or updating things there can be potentially dangerous.
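A quick end-to-end illustration of that isolation (the environment name env is arbitrary):
$ python3 -m venv env
$ source env/bin/activate
(env) $ pip install scipy
(env) $ python -c "import scipy; print(scipy.__file__)"   # path points inside env/, not the system
(env) $ deactivate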
Further reading: venv documentation
I have a Python library that I want to help out with by fixing some issues. I just don't know how to test my changes, given the complexity of how Python/pip installs libraries.
I have the library installed with pip, and I can run Python code that uses it via a from <library> import *. But now that I want to make changes to it, I pulled the code with git and plan to branch off to work on my changes. That's fine. I will then open a pull request to merge the changes, given that the tests pass.
But after I make a change, how do I get my modified version into Python so I can test it? Can pip install my custom/modified version of the library?
I have looked around and haven't successfully found an answer to this but perhaps I'm not looking in the right spot.
Can pip install my custom/modified version of the library?
Yes.
There are various ways of approaching this question. A common solution is the use of Python virtual environments. This allows you to create an isolated Python environment that does not share the same packages as your system Python install. You can then install things into it (such as your modified Python library) to test it out.
To get started, you need the virtualenv tool. It is probably available as a package for your distribution, but you can also install it using pip. Once you have it, run the following in the same directory as your code:
virtualenv .venv
This creates a virtualenv named .venv. You can call it anything you want, but naming it .venv (or anything else starting with a .) means it won't clutter up the output of ls in your workspace.
Next, you need to activate the virtualenv:
. .venv/bin/activate
This modifies your $PATH to place the virtualenv at the front of the list of directories. Now when you type python or pip, you'll be using the virtualenv version.
If your code has a setup.py file, you can install it like this:
pip install -e .
The -e means you want to perform an "editable" install, which means python will use the code "in place", and any changes you make will be immediately visible to the code you use for testing.
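A quick way to confirm the editable install took effect (mylib is a placeholder for your library's import name):
python -c "import mylib; print(mylib.__file__)"
The printed path should point into your git checkout rather than into site-packages.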
When you're done, you can run:
deactivate
This will remove the changes that activate made to your environment.
For more information:
Pipenv & Virtual Environments discusses a higher level tool for managing virtual environments.
Virtualenvwrapper is another take on a higher level management tool.
I've been trying to switch my coding to Linux.
I have ironed out most of my issues, but one last thing I have not been able to find any explanation for is the "virtualization" of make and bash commands.
I have installed PyCharm, which from what I have seen virtualizes everything.
However, when I am cloning repositories from GitHub, the instructions require building some code using make, then installing it, and later on using bash to build dependencies.
I am running the commands in the PyCharm terminal, but instead of installing into the venv, they install the files into /usr/xxx instead.
How do I tell PyCharm to use bash and make in a similar way to pip, so the setup stays inside the virtual environment?
Edit:
One of the projects in question is gym-gazebo which requires:
git clone https://github.com/erlerobot/gym-gazebo (the build steps are described in its INSTALL.md)
Then make and make install, which install it under the system root.
Later on there is also
bash setup_kinetic.bash
Which also uses root folders and not the venv.
I was able to install it, but it is not virtualized the way it should be when compared with coding on Windows.
Basically, per the answers from Evert and CharlesDuffy, what I'm looking for is containerization, since the libraries I'm looking at are C-based with a Python wrapper (something like that).
Docker, Singularity and Conda are some of the solutions.
I'm aware of how to use virtualenv for isolating Python dependencies in a long-running script, like a Flask or Twisted app. But I've been sort of puzzled about how you're supposed to go about this for a script intended to be invoked from the command line.
Suppose I wanted to make a CLI tool for interacting with some API, perhaps using Click or docopt. Obviously you don't want to have to source venv/bin/activate every time you want to use this tool. But I'd assume that it's still best to use virtualenv to keep a clean environment even beyond development.
Sorry for the newbie question, but...what are you supposed to do to package up a script so it can be cleanly used in this manner? (I'm more used to RubyGems, and am still figuring out Pip and VirtualEnv.)
In general, if a package you've installed in a virtualenv provides a binary command line script, say in ~/.virtualenv/bin/, you can symlink it into ~/bin/ (or wherever on your path you'd like to put local scripts).
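For example (a sketch; the tool name mytool and the paths are illustrative):
ln -s ~/.virtualenv/bin/mytool ~/bin/mytool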
There are a couple projects that aim to solve this issue:
pipsi, the pip script installer -- amounts to doing the virtualenv creation and symlinking for you
pipx, pip for executable binaries
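For example, with pipx (httpie here is just an illustration of a CLI package):
pip install --user pipx
pipx install httpie
Each tool gets its own virtualenv, and its command (http in this case) lands on your PATH.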
An excellent article on virtualenv by Dabapps would make it clear for you:
http://www.dabapps.com/blog/introduction-to-pip-and-virtualenv-python/
As for invoking it from a CLI script:
1. cd to your project root
2. env/bin/python your_main_file.py (assuming your virtualenv is named env)
This way you don't need to source the virtualenv every time; a virtualenv's interpreter can be run directly by its path.
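A related trick (a sketch; the interpreter path is illustrative): point the script's shebang at the virtualenv's interpreter, and the script can then be executed directly:
#!/home/user/project/env/bin/python
# the rest of your_main_file.py runs under the venv's interpreter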
Each virtualenv has its own Python site-packages, built-in modules, and Python interpreter. So virtualenv is meant to be used at the project level, not at the "package by package" level. It isolates a collection of Python modules and possible dependencies. Each virtualenv has its own location where pip will install packages. In theory, virtualenv should not be necessary, but in practice it's nice to have a way to keep different "environments" with different versions of Python modules and Python interpreters. I don't know whether Ruby has something similar that would allow you to have different "sets" of gems for different projects.
People who use plain virtualenv add aliases to their .bashrc, for example:
alias workonawesomeproject="source ~/venv/awesomeproject/bin/activate"
They would activate the virtualenv with the alias
workonawesomeproject
To leave a virtualenv you use the command deactivate
An easier way to deal with virtualenvs is to use virtualenvwrapper
pip install virtualenvwrapper
Add these lines to your .bashrc (or other shell initialization file)
export WORKON_HOME=$HOME/venv # this directory is your choice
export PROJECT_HOME=$HOME/src # this directory is your choice
source /usr/local/bin/virtualenvwrapper.sh # leave this alone
If you just modified your .bashrc, make sure to source it:
source ~/.bashrc
Then to make a new virtualenv you simply run
mkvirtualenv awesomeproject
To use that virtualenv
workon awesomeproject
To deactivate that virtualenv
deactivate
Virtualenvwrapper Docs:
http://virtualenvwrapper.readthedocs.org/en/latest/install.html