I am working to set up a Django project on EC2 with an Ubuntu 14.04 LTS instance. I want to write my code using Python 3 and Django. I've been advised that the best way to do this is to use a virtualenv. Following
https://robinwinslow.co.uk/2013/12/26/python-3-4-virtual-environment/
I tried:
~$ pyvenv-3.4 djenv
which appears to create a virtualenv (please see screenshot). Now I have two questions:
1) What folder should I place my Django project in? I'm thinking within the djenv folder. In other words I'd run:
/home/ubuntu/djenv$ django-admin.py startproject testproject
2) Where should I init a git repository? I'm assuming I'd do it in the same location, i.e. run
/home/ubuntu/djenv$ git init
from within the djenv folder.
Does this seem correct or is there a better way to do this?
Your project source code should be entirely separate from your virtualenv in the file system. If they are in the same place, as you suggest, then you will end up checking libraries into your git repository needlessly, which will take up extra space and end up causing problems.
Once you have activated a virtualenv you can run Python and use all the libraries in it. You don't need any connection in the file system.
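For example, a layout along the lines of the question (the directory names here are just illustrative, and this assumes pip is available inside the env) could look like:

$ pyvenv-3.4 /home/ubuntu/envs/djenv                 # the virtualenv lives under ~/envs
$ source /home/ubuntu/envs/djenv/bin/activate
(djenv) $ pip install django
(djenv) $ mkdir -p /home/ubuntu/projects && cd /home/ubuntu/projects
(djenv) $ django-admin.py startproject testproject   # the project lives under ~/projects
(djenv) $ cd testproject && git init                 # the repo covers only the project code

The virtualenv itself is then never checked into git; only the project directory is.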
You should store a pip requirements file in your git repo somewhere that describes how to install the relevant dependencies into your virtualenv, so you can re-create it on another machine.
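In practice that usually means keeping a requirements.txt under version control and recreating the environment from it; roughly:

$ pip freeze > requirements.txt      # inside the activated virtualenv, record what's installed
$ pip install -r requirements.txt    # on another machine, in a fresh virtualenv, reinstall it all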
On my machine my projects are in /home/me/projects/«project» and my virtualenvs are in /home/me/envs/«envname». I use virtualenvwrapper which makes things easy.
Create an environment
$ mkvirtualenv test
New python executable in test/bin/python
Installing Setuptools......done.
Installing Pip.........done.
Activate it
$ workon test
Python now refers to the one in my environment. It has its own site-packages etc.
$ which python
/Users/joe/Envs/test/bin/python
If we run it and look at the paths, they point to the virtualenv. This is where it looks for packages (lots removed from my path for simplicity).
$ python
Python 2.7.5 (default, Mar 9 2014, 22:15:05)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.0.68)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/Users/joe/Envs/test/lib/python27.zip', '/Users/joe/Envs/test/lib/python2.7', '/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7', '/Users/joe/Envs/test/lib/python2.7/site-packages']
>>>
After installing miniconda, my python modules stopped working, throwing ModuleNotFoundError. From what I can tell, miniconda changed my default environment settings. I checked both .bash_profile and .bashrc and updated the files to give conda the lowest priority. This fixed my default python version but didn't fix any of the broken modules.
Next I checked my PYTHONPATH with python3 -c "import sys;print(sys.path)". I discovered that the PYTHONPATH consisted entirely of conda python paths instead of the python version I had called. For reference, my default python version should be 3.8 (now set in .bashrc), and the conda version is 3.9.
['', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python39.zip', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/lib-dynload']
I then manually changed my PYTHONPATH in the .bashrc file to include the appropriate library paths. After reloading .bashrc:
['', '/Users/Ghoti/venv/3.8/lib/python3.8/site-packages', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/site-packages', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python39.zip', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9', '/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.9/lib/python3.9/lib-dynload']
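For reference, the change in .bashrc was an export along these lines (the exact path is specific to my setup):

export PYTHONPATH="/Users/Ghoti/venv/3.8/lib/python3.8/site-packages:$PYTHONPATH"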
My modules now work! However, I haven't been able to figure out how to stop the conda 3.9 libraries from being appended to my PYTHONPATH. In addition, my printed python version is wrong.
Ghoti$ python --version
Python 3.9.6
Ghoti$ which python
/Users/Ghoti/venv/3.8/bin/python
I was able to "fix" my ModuleNotFoundError problem. However, the solution is only temporary. If I ever need to switch python version/environment, I'll have to go through the process again. I'd like to figure out what is overriding my PYTHONPATH, causing it to call conda 3.9 libraries, and fix the python version irregularity. I've considered that there might be a script/process running in the background, but I haven't found any related to conda/miniconda. I've also been looking for a python setting/config file. No luck. Any suggestions on where I should look?
Edit - Did some more digging. It looks like my version 3.8 python executable was entirely overwritten, and the only existing python installation that is version 3.9.6 is in my "/usr/bin". The two conda environments have versions "3.9.12" and "3.8.13". I feel more confident the issue isn't due to conda, but unsure what could have caused the problem.
Final Edit
I don't think the problem was miniconda. I did start having problems within a few days of using miniconda, and I originally assumed that it just took me a while to notice the issues. However, I now think that my virtual environment was created using a shared Python. Problems were noticed on the same day that I connected to the network. The shared Python version changed, and that broke my environment. I don't have a solution to salvage the broken environment, but rebuilding it from scratch shouldn't take too long.
Sounds like you only want to use conda when you explicitly need it; in other words, the default Python should be the system Python.
If that's the case, you should disable the auto-activation of the base environment:
conda config --set auto_activate_base false
<restart shell>
Now you'll need to explicitly activate the conda environment before you can use the conda Python:
$ python
Python 3.10.6 (main, Aug 11 2022, 13:49:25) [Clang 13.1.6 (clang-1316.0.21.2.5)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
$ conda activate base
(base) $ python
Python 3.9.12 (main, Jun 1 2022, 06:36:29)
[Clang 12.0.0 ] :: Anaconda, Inc. on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>>
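To drop back to the system Python afterwards:

(base) $ conda deactivate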
I had Python version 2.7.3 and I wanted to learn Django, so I installed Django version 1.8.2 on my Ubuntu 12.04.
invictus@invictus:~/bin$ python
Python 2.7.3 (default, Sep 26 2013, 20:08:41)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>> django.VERSION
(1, 8, 2, 'final', 0)
Then I read that the best way to work with Django is to work on Python version 3.3, so I installed Python version 3.3.6 on my system, where py is a symbolic link pointing to /opt/python3.3/bin/python3.3
invictus@invictus:~/bin$ py
Python 3.3.6 (default, Jun 21 2015, 16:13:35)
[GCC 4.9.2] on linux
Type "help", "copyright", "credits" or "license" for more information.
When I try to import django here I get an error:
>>> import django
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'django'
I see Django got installed in my Python 2.7 directory.
>>> import django
>>> django
<module 'django' from '/usr/local/lib/python2.7/dist-packages/django/__init__.pyc'>
How can I use this Django installation with my 3.3.6 version? My default Python version is 2.7.3.
What are the possible workarounds here?
Yes, each Python version has its own folder with installed packages. You'll have to install Django separately for Python 3.3. The same is true for every package that is not available by default.
(If you're using Python 3, why not go for the latest and greatest, 3.4?)
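A rough sketch of installing it for the 3.3 interpreter (assuming pip has been installed for that interpreter, e.g. via get-pip.py) would be:

$ py -m pip install Django==1.8.2          # py is the symlink to /opt/python3.3/bin/python3.3
$ py -c "import django; print(django.VERSION)"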
As some of the comments said, you should be using a virtualenv to isolate your environments. You would do it like this:
1) Ensure you have virtualenv installed. On Ubuntu, for instance, that would be the virtualenv package.
2) Create a new, empty environment. You choose which Python version it will use like this:
virtualenv -p /usr/bin/python3.4 env
3) That created an env folder. Activate the newly created environment:
. env/bin/activate
This updates your paths so now, when you run python or pip from this shell, they will execute in the context of your virtualenv.
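A quick sanity check (the exact paths depend on where you created env):

$ which python
/home/you/myproject/env/bin/python
$ which pip
/home/you/myproject/env/bin/pip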
4) Upgrade pip inside the virtualenv (optional)
pip install -U pip
5) Install whatever packages you need. The recommended way is to have a requirements.txt file at the root of your project. You would install them this way:
pip install -r myproject/requirements.txt
That's it. Use the pip command as usual. As long as you're working with the virtualenv active, your python command will only see the modules you explicitly install in it.
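For example, after installing Django inside the activated virtualenv, it should resolve to the env's site-packages rather than the system ones (the path shown is illustrative):

(env) $ pip install django
(env) $ python -c "import django; print(django.__file__)"
/home/you/myproject/env/lib/python3.4/site-packages/django/__init__.py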
6) Don't forget to re-run . env/bin/activate in every new shell. If you think you'll probably forget, you can add this to your manage.py:
import sys

if __name__ == "__main__":
    if not hasattr(sys, 'real_prefix'):
        sys.stderr.write('Running outside of any virtualenv - did you forget to activate one?\n')
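Note that sys.real_prefix is set by the classic virtualenv tool; environments created with the standard library's venv (pyvenv) don't have it and instead set sys.base_prefix, so in that case you would check sys.prefix != sys.base_prefix instead.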
What are the benefits?
You have an isolated environment for every project (no conflicts).
You may use different versions of the same module in different projects.
System updates will not break your project.
You are not polluting your system with unmanaged files.
You never run stuff as root, which means both added isolation, and the possibility of running your project without having root access to the system.
As long as you keep your requirements.txt up to date (using pip freeze), you can rebuild the virtualenv on another system and it will work.
[edit: using requirements.txt]
That's just a file that has pip install specifications, one per line. It allows you to rebuild the virtualenv from scratch easily. You can generate it from your current virtualenv using:
pip freeze > requirements.txt
So the idea is just to remember to re-run this command every time you change your environment (installing, removing or upgrading some package).
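The file itself is plain text, one requirement per line; an illustrative requirements.txt might contain:

Django==1.8.2
Flask==0.10.1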
Attempting to finally make the jump to Python 3, but am running into some issues with virtualenvwrapper. I start out by creating the virtual environment like so:
mkvirtualenv -p /usr/local/bin/python3 projectname
which yields:
Running virtualenv with interpreter /usr/local/bin/python3
Using base prefix '/usr/local/Cellar/python3/3.3.3/Frameworks/Python.framework/Versions/3.3'
New python executable in projectname/bin/python3.3
Also creating executable in projectname/bin/python
Installing setuptools, pip...done.
So far, so good. I check the python console to make sure that the environment is looking at the correct interpreter and all that and it is. Here's where sadness happens (while the virtualenv is active):
pip install flask claims to be successful, but alas:
Python 3.3.3 (default, Jan 2 2014, 13:26:32)
[GCC 4.2.1 Compatible Apple LLVM 5.0 (clang-500.2.79)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import flask
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'flask'
Here's the issue:
$ pip show flask
---
Name: Flask
Version: 0.10.1
Location: /usr/local/lib/python3.3/site-packages
Requires: Werkzeug, Jinja2, itsdangerous
Unless I'm completely misunderstanding virtualenv/wrapper and their respective magics (which I very well could be), it seems like pip install is installing Flask globally rather than to the site-packages within my virtualenv, and thus the virtualenv is ignoring it.
Any clues what's going on here/how to fix it? Am I wrong in assuming that virtualenvwrapper is ready for prime time with Python 3? Clean solutions where I don't have to mangle my .bashrc or manually set environment variables are preferable. I'm hoping there's a way to do this through the APIs provided by virtualenv and virtualenvwrapper.
Thanks!
I had problems with pip installing packages globally instead of in the activated virtualenv too. Have a look at pip installing in global site-packages instead of virtualenv for the question (and the answer).
Basically, the solution consisted of modifying the shebang of the pip scripts within the virtualenv as they pointed to the wrong python installation (global instead of in the virtualenv). Just change the shebang to point to the correct location and you're set.
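For example, something along these lines (paths are illustrative; check the ones on your own machine):

$ head -1 "$VIRTUAL_ENV/bin/pip"
#!/usr/local/bin/python3.3

Edit that first line so it points at the interpreter inside the virtualenv instead, e.g.:

#!/Users/you/.virtualenvs/projectname/bin/python3.3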
Note: credit should go to Chase Ries who came up with the solution.
I had the same issue. It appears to be resolved as of Virtualenv 1.11.4.
I am using Ubuntu 12.04 64-bit, and started learning Python today. (I tried to install a pirated version of MATLAB but failed...)
I have a linear programming problem to solve, and I want to use lp_solve module for Python.
I tried for 1~2 hours to find the download file and install the module.
I am not sure if I downloaded the right thing, and I have not been able to install it so far.
How can I install this?
There is no download link on http://lpsolve.sourceforge.net/, and it tells me to run the command
python setup.py install
but there is no setup.py file anywhere, including in the lpsolve source I downloaded.
If you know where to download it and how to install it, could you walk me through it, step by step?
I am not sure about the version of my Python.
Thank you.
Adding a few more details to the answer provided by dnozay.
Download the following two files from http://sourceforge.net/projects/lpsolve/files/lpsolve/
lp_solve_5.5.2.0_dev_ux64.tar.gz - contains the .so files
lp_solve_5.5.2.0_Python2.5_exe_ux64.tar.gz - contains the Python wrapper scripts for lpsolve, which help to invoke the native library from the .so files.
Unzip the downloaded files; each extracted directory will contain an lpsolve55.so file, though at different locations.
Specify the paths to the lpsolve55.so file in each directory by setting the following two environment variables:
export LD_LIBRARY_PATH=/usr/local/lib:/home/xxx/lp_solve_dev/
export PYTHONPATH=/home/xxx/usr/lib/python2.5/site-packages
To test if lpsolve is configured as expected:
[xx-xxxx@ip-xx-x-x-xx ~]$ python
Python 2.7.9 (default, Apr 1 2015, 18:18:03)
[GCC 4.8.2 20140120 (Red Hat 4.8.2-16)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from lpsolve55 import *
>>> lpsolve()
lpsolve Python Interface version 5.5.0.9
using lpsolve version 5.5.2.0
Usage: ret = lpsolve('functionname', arg1, arg2, ...)
P.S.: make sure you have installed python-dev (if not, type sudo apt-get install python-dev at the command line) before you do all this.
The download link is:
http://sourceforge.net/projects/lpsolve/, or
http://sourceforge.net/projects/lpsolve/files/lpsolve/ for the files tab.
Once you have it installed, you may need to tweak your PYTHONPATH.
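For example (the path is only an illustration of wherever the lpsolve55 Python bindings ended up on your system):

export PYTHONPATH=$PYTHONPATH:/home/xxx/usr/lib/python2.5/site-packages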
You also may want to look into cvexp:
http://pypi.python.org/pypi/cvexp
I'm having some strange issues with PyGTK in "virtualenv". gtk does not import in my virtualenv, while it does import in my global python install. (I wasn't having this particular issue last week, guessing some software update upset something.)
Is there a good way to resolve this behavior?
Shown here: importing gtk globally,
tom@zeppelin:~$ python
Python 2.7.1+ (r271:86832, Sep 27 2012, 21:12:17)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gtk
>>> gtk
<module 'gtk' from '/usr/lib/pymodules/python2.7/gtk-2.0/gtk/__init__.pyc'>
and then failing to import gtk,
tom@zeppelin:~$ workon py27
(py27)tom@zeppelin:~$ python
Python 2.7.1+ (r271:86832, Sep 27 2012, 21:12:17)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named gtk
Unfortunately, this has broken my ipython --pylab environment: http://pastebin.com/mM0ur7Hc
UPDATE:
I was able to fix this by adding symbolic links as suggested by grepic / this thread: Python: virtualenv - gtk-2.0
with a minor difference, namely that my "cairo" package was located in /usr/lib/pymodules/python2.7/cairo/ rather than in /usr/lib/python2.7/dist-packages/cairo.
SECOND UPDATE:
I also found it useful to add the following lines to my venv/bin/activate:
export PYTHONPATH=$PYTHONPATH:/home/tom/.virtualenvs/py27/lib/python2.7/dist-packages
export PYTHONPATH=$PYTHONPATH:/home/tom/.virtualenvs/py27/lib/python2.7/dist-packages/gtk-2.0
export PYTHONPATH=$PYTHONPATH:/usr/lib/pymodules/python2.7/gtk-2.0
(I suspect that one or more of these is unnecessary, but I've been fiddling around with this for too long and have decided to stop investigating -- my setup now works and so I'm satisfied.)
Problem solved! Thanks everyone.
Try creating your virtual environment with the --system-site-packages flag.
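For example, with virtualenvwrapper as in the question (mkvirtualenv passes unrecognized options through to virtualenv):

$ mkvirtualenv --system-site-packages py27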
So gtk normally lives in a place like /usr/lib/python2.7/dist-packages which is in your Python path in your global environment, but not in your virtual environment.
You may wish to just add the path to gtk manually with something like
import sys
sys.path.append("/usr/lib/python2.7/dist-packages/gtk-2.0")
You could also change the Python path when you activate the virtual environment. Open up venv/bin/activate. It's a scary-looking file, but at the end you can just put:
export PYTHONPATH=$PYTHONPATH:/my/custom/path
Save that and the next time you activate the virtual environment with:
source venv/bin/activate
your custom path will be on the Python path. You can verify this with
echo $PYTHONPATH
An alternative approach, suggested in Python: virtualenv - gtk-2.0, is to go into your virtualenv directory, add a 'dist-packages' directory, and create symbolic links to the gtk packages you were using previously:
mkdir -p venv/lib/python2.7/dist-packages/
cd venv/lib/python2.7/dist-packages/
For GTK2:
ln -s /usr/lib/python2.7/dist-packages/glib/ glib
ln -s /usr/lib/python2.7/dist-packages/gobject/ gobject
ln -s /usr/lib/python2.7/dist-packages/gtk-2.0* gtk-2.0
ln -s /usr/lib/python2.7/dist-packages/pygtk.pth pygtk.pth
ln -s /usr/lib/python2.7/dist-packages/cairo cairo
For GTK3:
ln -s /usr/lib/python2.7/dist-packages/gi gi
Full disclosure: I feel that both these solutions are somewhat hackish, which is ok given that you say the question is urgent. There is probably a 'proper' way to extend a virtual environment so let us know if you eventually discover the better solution. You may have some luck with http://www.virtualenv.org/en/latest/index.html#creating-your-own-bootstrap-scripts
Another way to do this is to create a .pth file in your virtualenv's site-packages directory, e.g. in <virtualenv>/lib/python2.7/site-packages/dist-packages.pth:
/usr/lib/python2.7/dist-packages/
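One way to create it (adjust the virtualenv path for your own setup):

$ echo "/usr/lib/python2.7/dist-packages/" > ~/.virtualenvs/py27/lib/python2.7/site-packages/dist-packages.pth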
This fixed the issue I was having with the apt-get installed version of pycairo.
If you want to include links to the relevant system Python gtk-2.0 packages in the virtualenv, you can just use pip to install ruamel.venvgtk:
pip install ruamel.venvgtk
You don't have to import anything; the links are set up during installation.
This is especially handy if you are using tox; in that case you only need to include the dependency (for tox):
deps:
    pytest
    ruamel.venvgtk
and a newly set-up python2.7 environment will have the relevant links included before the tests are run.
It is now possible to resolve this using vext. Vext allows you to install packages in a virtualenv that individually access your system packages. To access PyGTK, do the following:
pip install vext
pip install vext.pygtk