I've installed Python Numpy on Debian using...
apt-get install python-numpy
But when I run the Python shell I get the following...
Python 2.7.10 (default, Sep 9 2015, 20:21:51)
[GCC 4.9.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import numpy
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named numpy
When I view the contents of /usr/local/lib/python2.7/site-packages/ I notice numpy is not listed.
If I install it via pip, i.e. pip install numpy, it works just fine. However, I want to use the apt-get method. What am I doing wrong?
Other:
echo $PYTHONPATH
/usr/local/lib/python2.7
dpkg -l python-numpy...
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-===============================================-============================-============================-====================================================================================================
ii python-numpy 1:1.8.2-2 amd64 Numerical Python adds a fast array facility to the Python language
Python 2.7.10
['', '/usr/local/lib/python2.7', '/usr/local/lib/python27.zip', '/usr/local/lib/python2.7/plat-linux2', '/usr/local/lib/python2.7/lib-tk', '/usr/local/lib/python2.7/lib-old', '/usr/local/lib/python2.7/lib-dynload', '/usr/local/lib/python2.7/site-packages']
which -a python...
/usr/local/bin/python
/usr/bin/python
echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
As you can tell from your which result, the python you are running when just typing python is /usr/local/bin/python.
It's a python you probably installed there yourself, as Debian will never put anything in /usr/local by itself (except for empty directories).
How? Well, by running pip for instance. As a rule, you should never use pip outside of a virtualenv, because it will install stuff on your system that your package manager will not know about. And maybe break stuff, like what you see on your system.
So, if you run /usr/bin/python, it should see the numpy package you installed using your package manager.
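A quick way to confirm this diagnosis (nothing here is specific to your setup, just standard interpreter introspection):
$ python -c "import sys; print(sys.executable)"
$ /usr/bin/python -c "import numpy; print(numpy.__version__, numpy.__file__)"
The first command shows which interpreter a bare python resolves to (here, /usr/local/bin/python); the second should print the apt-installed numpy 1.8.2 living under /usr/lib/python2.7/dist-packages.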
How to fix it? Well, I would clear anything in /usr/local (beware, it will definitely break stuff that relies on things you installed locally). Then I would apt-get install python-virtualenv, and always work with a virtualenv.
$ virtualenv -p /usr/bin/python env
$ . env/bin/activate
(env)$ pip install numpy
(env)$ python
>>> import numpy
>>>
That way, packages will be installed in the env directory. You do all this as a regular user, not root. And your different projects can have different environments with different packages installed.
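To double-check that you really are inside the environment, a minimal sanity check (illustrative, not specific to this project):
(env)$ python -c "import sys; print(sys.prefix)"   # should print the path of the env directory
(env)$ pip show numpy                              # Location: should point inside env/.../site-packages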
Related
After activating a Python virtual environment at the terminal with source ./venv/bin/activate and running python3 in the venv, python3 doesn't seem to have the venv's packages on its path.
(venv) d@MBP-2020 scrapers % ls venv/lib/python3.11/site-packages | grep "pandas"
pandas
pandas-1.5.2.dist-info
(venv) d@MBP-2020 scrapers % pip list | grep "pandas"
pandas 1.5.2
(venv) d@MBP-2020 scrapers % python3
Python 3.11.0 (v3.11.0:deaf509e8f, Oct 24 2022, 14:43:23) [Clang 13.0.0 (clang-1300.0.29.30)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import pandas as pd
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pandas'
>>> import sys; print(sys.path)
['', '/Library/Frameworks/Python.framework/Versions/3.11/lib/python311.zip', '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11', '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/lib-dynload', '/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages']
I thought activating the venv would put the venv-installed packages on the right path for the python3 executable? Do I have to manually add the site-packages directory to the path somehow?
What is the right workflow for accessing packages installed in venvs with Python?
Your pip does not match the python3 you are running; they belong to different Python installations.
You can use whereis pip and whereis python3 (or which -a) to check where your pip and python3 really link to.
If you want to use the venv's Python,
run venv/bin/python; that is the interpreter whose site-packages directory your grep searched, and it is the environment where pandas is installed.
You can use pyenv or conda to manage multiple versions.
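A quick way to see the mismatch directly (standard commands only; the framework path is the one from your sys.path output):
(venv) % which -a python3 pip
(venv) % python3 -c "import sys; print(sys.executable)"
(venv) % pip --version        # reports which Python this pip is bound to
If sys.executable is the /Library/Frameworks/... interpreter rather than .../venv/bin/python3, the shell is still resolving python3 outside the venv (for example via an alias or a stale command hash); opening a new shell, running hash -r, or calling venv/bin/python3 explicitly should fix it.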
I recently added Python 3.9 to my Ubuntu installation.
I can run Python 3.9 code that uses an installed package from bash, and I can also run Python 3.9 with an installed module from a bash script using a shebang to select the Python 3.9 environment; fairly standard stuff.
But when I try running an installed script from https://github.com/gitbls/imon, the installed package is not found.
Initially I thought this was fixed by changing the imon bash script shebang to reference the new Python,
i.e.
old
#!/usr/bin/python3
new
#!/usr/bin/python3.9
I also added these lines to the imon script to verify which version of Python is in the environment:
import sys
print("sys.version:", sys.version)
When I run the /usr/local/bin/imon script, it reports a 'no module' error.
$ sudo /usr/local/bin/imon --nosyslog --instance my_instance_name
sys.version: 3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0]
Traceback (most recent call last):
File "/usr/local/bin/imon", line 11, in <module>
from icmplib import ping, multiping
ModuleNotFoundError: No module named 'icmplib'
The thing that is really throwing me is that the output shows the imon script is honouring the shebang and loading Python 3.9, yet it will not find the module, despite my own script with identical code being able to load it.
My test exercises below verify and demonstrate that I have Python 3.9 installed and that a bash script can load Python 3.9 and the icmplib module.
$ python3.9
Python 3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> print (sys.version)
3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0]
>>> import icmplib
i.e. the icmplib package has been installed and can be used from the Python shell.
I created a shell script and made it executable
vim test_script.sh
#!/usr/bin/python3.9
import sys
print (sys.version)
import icmplib
print("icmplib.__version__:", icmplib.__version__)
chmod u+x test_script.sh
The output from this shows the script is running Python 3.9 and can access the installed module.
$ ./test_script.sh
3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0]
icmplib.__version__: 3.0.3
#EDIT START--------------------------------------------------------------
I added tests as suggested by @furas in the comments.
print("sys.path:", sys.path)
print("icmplib.__file__:", icmplib.__file__)
These lines were added to both test_script.sh and /usr/local/bin/imon.
The results below reveal that the local test_script.sh accesses a directory which the imon script run as root does not:
'/home/m/.local/lib/python3.9/site-packages',
Oddly, both the script run locally and the script run as root include the path
'/usr/lib/python3/dist-packages'
(I don't know why python3 rather than python3.9 appears in sys.path; this seems odd to me.)
Taking another suggestion from @furas, I tried pip-installing the module as root:
$ sudo python3.9 -m pip install icmplib
Collecting icmplib
Using cached icmplib-3.0.3-py3-none-any.whl (30 kB)
Installing collected packages: icmplib
Successfully installed icmplib-3.0.3
This looked helpful. Unfortunately, running the imon script as root still resulted in the NameError (note that the debug print references icmplib before it has been imported, which is why a missing module now surfaces as a NameError rather than a ModuleNotFoundError).
$ sudo /usr/local/bin/imon --nosyslog --instance imon-bmt1
sys.version: 3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0]
sys.path: ['/usr/local/bin', '/usr/lib/python39.zip', '/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload', '/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages']
Traceback (most recent call last):
File "/usr/local/bin/imon", line 8, in <module>
print("icmplib.__file__:", icmplib.__file__)
NameError: name 'icmplib' is not defined
I am now 18hrs awake and brain fading. :(
#EDIT END--------------------------------------------------------------
#EDIT: the section below was written before the edit above.
when I run test_script.sh as root, I can replicate the error experienced by imon.
$ ./test_script.sh
3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0]
icmplib.__version__: 3.0.3
sys.path: ['/home/m/f_projs/internet_monitor',
'/usr/lib/python39.zip',
'/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload',
'/home/m/.local/lib/python3.9/site-packages',
'/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages']
icmplib.__file__: /home/m/.local/lib/python3.9/site-packages/icmplib/__init__.py
$ sudo /usr/local/bin/imon --nosyslog --instance imon-bmt1
sys.version: 3.9.14 (main, Sep 7 2022, 23:43:29)
[GCC 9.4.0]
sys.path: ['/usr/local/bin',
'/usr/lib/python39.zip',
'/usr/lib/python3.9', '/usr/lib/python3.9/lib-dynload',
'/usr/local/lib/python3.9/dist-packages', '/usr/lib/python3/dist-packages']
Traceback (most recent call last):
File "/usr/local/bin/imon", line 8, in <module>
print("icmplib.__file__:", icmplib.__file__)
NameError: name 'icmplib' is not defined
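The two sys.path listings above differ exactly in the user-site entry ('/home/m/.local/lib/python3.9/site-packages'). One way to make that difference explicit, using only the standard site module:
$ python3.9 -c "import site; print(site.getusersitepackages())"
$ sudo python3.9 -c "import site; print(site.getusersitepackages())"
The first prints the invoking user's per-user directory (where pip run as a normal user, or pip install --user, puts packages); the second prints root's, which is why a module installed only into ~/.local/lib/python3.9/site-packages is invisible when the script runs under sudo.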
My environment: Ubuntu 20.04 with Python 3.9.14.
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.4 LTS
Release: 20.04
Codename: focal
$ which python3
/usr/bin/python3
$ which python3.9
/usr/bin/python3.9
$ python3.9 --version
Python 3.9.14
$ python3 --version
Python 3.8.10
I'm marking this as 'answered' because I've realised I broke my python3 and python3.9 install beyond repair while attempting to fix this issue.
Now I have a completely different problem.
Most of the original problem appears to be my failure to use 'sudo pip install X' (or 'sudo python3.9 -m pip install X') when installing packages for a script that runs as root. I normally use a virtualenv for my local projects.
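For a script that has to run as root, one tidy alternative (a sketch only; /opt/imon-venv is just an illustrative path) is to give it its own virtual environment and point the shebang at that interpreter, so it no longer depends on anyone's per-user site-packages:
$ sudo python3.9 -m venv /opt/imon-venv
$ sudo /opt/imon-venv/bin/pip install icmplib
# then change the shebang in /usr/local/bin/imon to:
#!/opt/imon-venv/bin/python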
I was uninstalling Python 3.9 and cleaning up, with a view to reinstalling it.
sudo apt remove python3.9
#now check what packages remain to be removed manually.
dpkg --list | grep python3.9
#this showed a short list of packages to be cleaned up.
#nb: 'rc' = package has been removed, but configuration files remain
sudo apt-get remove --auto-remove python3.9-minimal
sudo apt-get remove --auto-remove python3.9-venv
sudo apt-get remove --auto-remove python3.9
sudo dpkg -P libpython3.9-minimal
sudo dpkg -P python3.9-minimal
sudo dpkg -P python3.9-venv
#then manually delete the python3.9 directories previously listed in sys.path,
i.e.
sudo rm -rf /usr/lib/python3.9
sudo rm -rf /usr/local/lib/python3.9/dist-packages
My new problem is due to accidentally deleting a directory on sys.path which 'apt-get remove' and 'sudo dpkg -P' didn't clean up:
sudo rm -rf /usr/lib/python3/dist-packages
Hindsight screams that this will destroy the python3 install. Bitter experience just now tells me it is nearly impossible to repair, since the apt-get tools depend on a working python3 install.
I think the only way to fix this is to copy across a full '/usr/lib/python3/dist-packages' directory from a clean install.
new SO question for my new problem > (how to repair python3 install on ubuntu after rm -rf /usr/lib/python3/dist-packages?)
Hopefully this post will be useful and a warning to others in future.
I am using an Ubuntu 16 machine. I want to use Python 3, and I installed it. However, I have to use the command python3, otherwise it runs Python 2.7.
I installed the pycrypto library using pip install pycrypto, but when I try to import from pycrypto using python3 I get this error:
>>> from Crypto.Cipher import AES
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'Crypto'
I do not face the same problem in Python 2.7 (the import works fine). What is the problem? How do I solve it?
UPDATE:
I tried pip3 and this is the result:
x@x-VirtualBox:~$ sudo -H pip3 install pycrypto
Requirement already satisfied: pycrypto in /usr/local/lib/python3.6/dist-packages (2.6.1)
x@x-VirtualBox:~$ python3
Python 3.5.2 (default, Nov 23 2017, 16:37:01)
[GCC 5.4.0 20160609] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from Crypto.Cipher import AES
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named 'Crypto'
>>>
Apparently, you have 2 Python3 instances installed:
Python3.6:
Which is the one you want to use
Where pycrypto is installed (by pip3)
Python3.5.2:
Which is launched by python3 command
So, you are launching the wrong Python interpreter, most likely because python3 points to /usr/bin/python3 (you can check that by typing which python3 in your shell), which is Python3.5.2, and whose path is in the ${PATH} env var.
From your pip paths, it seems like Python3.6 is installed under /usr/local (and the executable would be /usr/local/bin/python3), so you can either:
Launch the Python3 executable by its full path (/usr/local/bin/python3, as stated above)
Add /usr/local/bin to ${PATH} before /usr/bin, and then simply launch Python3.6 by typing python3 in your shell - but I'd advise against that
There are other methods (e.g. creating an alias), but I guess you got the idea
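A couple of one-liners that make the situation concrete (standard commands only; /usr/local/bin/python3 is the path inferred above):
$ which -a python3
$ python3 -c "import sys; print(sys.version); print(sys.executable)"
$ /usr/local/bin/python3 -c "from Crypto.Cipher import AES; print(AES)"
If the last command imports cleanly while plain python3 does not, the diagnosis is confirmed: the package is fine, and the shell is simply launching the other interpreter.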
#EDIT0:
Some more info, as requested in the comments. This has nothing to do with Python; it's all just Ubuntu stuff:
To list packages: use apt or dpkg
To check Python2.7 (or any other version): use which (as above)
You don't need to uninstall Python3.5. Multiple versions can coexist safely
If you want to make one the default, make an alias (like I did at the end of the example below), and if you want it to be persistent, place it in your profile file (e.g. .profile, .bashrc, .bash_profile)
Examples (on my VM):
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q050526408]> apt list python python3
Listing... Done
python/xenial-updates,now 2.7.12-1~16.04 amd64 [installed]
python3/xenial,now 3.5.1-3 amd64 [installed]
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q050526408]> which python
/usr/bin/python
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q050526408]> ll /usr/bin/python
lrwxrwxrwx 1 root root 7 mar 12 16:25 /usr/bin/python -> python2*
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q050526408]> dpkg -S /usr/bin/python3 /usr/bin/python2
python3-minimal: /usr/bin/python3
python-minimal: /usr/bin/python2
[cfati@cfati-ubtu16x64-0:~/Work/Dev/StackOverflow/q050526408]> alias python3=/usr/local/bin/python3
As you are using python3:
pip3 install pycrypto
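This works as long as pip3 really belongs to the python3 you launch. An invocation that cannot pick the wrong interpreter is to run pip as a module of that interpreter:
python3 -m pip install pycrypto
This guarantees the package lands in the site-packages of whatever python3 resolves to.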
Hi, I'm using Ubuntu release 12.10 (quantal) 32-bit with Linux kernel 3.5.0-21-generic. I'm trying to get IPython's history to work. I've set it up using pythonbrew and a virtual environment, in which I use pip to install IPython. Currently, when I start up IPython in a terminal I get:
WARNING: IPython History requires SQLite, your history will not be saved
Python 2.7.3 (default, Nov 8 2012, 18:25:10)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
Searching on the warning in the first line, I found this issue report, so I went back and installed the following:
sudo apt-get install libsqlite0 libsqlite0-dev libsqlite3-0 libsqlite3-dev
and then removed and reinstalled pysqlite using pip
pip uninstall pysqlite
pip install pysqlite
After that I thought I would check the installation by importing the module:
Python 2.7.3 (default, Nov 8 2012, 18:25:10)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sqlite3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/.pythonbrew/pythons/Python-2.7.3/lib/python2.7/sqlite3/__init__.py", line 24, in <module>
from dbapi2 import *
File "/home/me/.pythonbrew/pythons/Python-2.7.3/lib/python2.7/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: No module named _sqlite3
So now it seems the file _sqlite3.so can't be found. That's when I found this SO question. Either it doesn't exist or it's not in my PYTHONPATH environment variable. Searching for the file, I get:
$ locate _sqlite3.so
/home/me/Desktop/.dropbox-dist/_sqlite3.so
/home/me/epd/lib/python2.7/lib-dynload/_sqlite3.so
/usr/lib/python2.7/lib-dynload/_sqlite3.so
So the file is there, but when I looked in my python path:
import sys
for p in sys.path:
    print p
none of the above paths that contain _sqlite3.so were contained in my PYTHONPATH. For giggles, I added the path /usr/lib/python2.7/lib-dynload to my PYTHONPATH in a terminal and then tried to import sqlite3 again:
Python 2.7.3 (default, Nov 8 2012, 18:25:10)
[GCC 4.7.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path.append("/usr/lib/python2.7/lib-dynload")
>>> import sqlite3
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/me/.pythonbrew/pythons/Python-2.7.3/lib/python2.7/sqlite3/__init__.py", line 24, in <module>
from dbapi2 import *
File "/home/me/.pythonbrew/pythons/Python-2.7.3/lib/python2.7/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ImportError: /usr/lib/python2.7/lib-dynload/_sqlite3.so: undefined symbol: PyUnicodeUCS4_DecodeUTF8
Uh oh. Now I'm completely stuck. Can anyone help me out? I've also read in a few places that I may have to rebuild Python. I have no idea how to do this in pythonbrew. Can anyone point me in the right direction?
I've also read in a few places that I may have to rebuild Python.
This is correct. SQLite is part of the standard library, and is built when you compile Python. There are a few 'optional' parts of the standard library, which Python will simply skip (with minimal warning, unfortunately) if the dependencies are missing at build time, and sqlite is one of these.
You should be able to just install libsqlite3-dev, then rebuild Python, and you should be set. Keep an eye on the build messages, as they do report which modules they are skipping due to missing dependencies.
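Once rebuilt, a one-line check confirms the module is actually there (works for whichever interpreter you point it at):
$ python -c "import sqlite3; print(sqlite3.sqlite_version)"
If that prints a version rather than raising ImportError, IPython's history has what it needs.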
Thanks to minrk for pointing me in the right direction. All I had to do was rebuild Python. I've outlined the steps below for those who are using pythonbrew. Note that I had already installed the libsqlite3-dev package up in the question section.
First, with the proper version of Python and the virtual environment loaded up, run the command:
$ pip freeze -l > requirements.txt
This gives us a text file list of all of the pip packages that have been installed in the virtual environment for this particular python release in pythonbrew. Then we remove the version of python from pythonbrew and reinstall it (this is the "rebuild python" step):
$ pythonbrew uninstall 2.7.3
$ pythonbrew install 2.7.3
After that, we switch over to the newly installed python version 2.7.3 and create a new virtual environment (which I've called "sci"):
$ pythonbrew switch 2.7.3
$ pythonbrew venv create sci
$ pythonbrew venv use sci
Ideally you should be able to run the command:
$ pip install -r requirements.txt
and, according to this, pip should reinstall all the modules that you had in the virtual environment before we clobbered that version of Python (2.7.3). It didn't work for me for whatever reason, so I manually installed all of the modules using pip individually.
$ ipython --pylab
Python 2.7.3 (default, Jan 5 2013, 18:48:27)
Type "copyright", "credits" or "license" for more information.
IPython 0.13.1 -- An enhanced Interactive Python.
? -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help -> Python's own help system.
object? -> Details about 'object', use 'object??' for extra details.
and IPython history works!
What worked for me (using osx + homebrew + brewed python):
# Reinstall Python 2.7 with sqlite
brew remove python
brew install readline sqlite gdbm --universal
brew install python --universal --framework
# Reinstall iPython with correct bindings
pip uninstall ipython
pip install ipython
And you should be good to go.
You should rebuild your python with sqlite support
sudo apt-get install libsqlite3-dev
wget https://www.python.org/ftp/python/2.7.15/Python-2.7.15.tgz
tar -xvf Python-2.7.15.tgz
cd Python-2.7.15
./configure
make
sudo make install
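If you want to be sure SQLite made it into this build before recreating any environments, the compile step says so explicitly. Assuming you captured the output (for example with make 2>&1 | tee make.log), a quick check is:
grep -A5 "necessary bits" make.log
Python 2.x prints a message along the lines of "Python build finished, but the necessary bits to build these modules were not found:" followed by the skipped optional modules; _sqlite3 should not be among them once libsqlite3-dev is installed.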
Recreate your virtual environment and you should be good to go
rmvirtualenv venv
mkvirtualenv -p python2 venv
workon venv
pip install -r requirements.txt
# or
pip install ipython
This warning appears on macOS when python is installed with pyenv. By default it installs python without sqlite. These commands reinstall python with sqlite support:
pyenv uninstall 3.7
CFLAGS="-I$(xcrun --show-sdk-path)/usr/include" pyenv install 3.7
I'm having some strange issues with PyGTK in "virtualenv". gtk does not import in my virtualenv, while it does import in my global python install. (I wasn't having this particular issue last week, guessing some software update upset something.)
Is there a good way to resolve this behavior?
Shown here: importing gtk globally,
tom@zeppelin:~$ python
Python 2.7.1+ (r271:86832, Sep 27 2012, 21:12:17)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gtk
>>> gtk
<module 'gtk' from '/usr/lib/pymodules/python2.7/gtk-2.0/gtk/__init__.pyc'>
and then failing to import gtk,
tom@zeppelin:~$ workon py27
(py27)tom@zeppelin:~$ python
Python 2.7.1+ (r271:86832, Sep 27 2012, 21:12:17)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import gtk
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named gtk
Unfortunately, this has broken my ipython --pylab environment: http://pastebin.com/mM0ur7Hc
UPDATE:
I was able to fix this by adding symbolic links as suggested by grepic / this thread: Python: virtualenv - gtk-2.0
with a minor difference, namely that my "cairo" package was located in /usr/lib/pymodules/python2.7/cairo/ rather than in /usr/lib/python2.7/dist-packages/cairo.
SECOND UPDATE:
I also found it useful to add the following lines to my venv/bin/activate:
export PYTHONPATH=$PYTHONPATH:/home/tom/.virtualenvs/py27/lib/python2.7/dist-packages
export PYTHONPATH=$PYTHONPATH:/home/tom/.virtualenvs/py27/lib/python2.7/dist-packages/gtk-2.0
export PYTHONPATH=$PYTHONPATH:/usr/lib/pymodules/python2.7/gtk-2.0
(I suspect that one or more of these is unnecessary, but I've been fiddling around with this for too long and have decided to stop investigating; my setup now works and so I'm satisfied.)
Problem solved! Thanks everyone.
Try creating your virtual environment with the --system-site-packages flag.
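For reference, the flag is passed when the environment is created (the interpreter path here is only an example):
$ virtualenv --system-site-packages -p /usr/bin/python2.7 venv
$ . venv/bin/activate
(venv)$ python -c "import gtk; print(gtk.__file__)"
With system site-packages enabled, the env can see distribution-installed modules such as pygtk while still keeping its own pip-installed packages separate, so the last command should resolve gtk to its system location.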
So gtk normally lives in a place like /usr/lib/python2.7/dist-packages which is in your Python path in your global environment, but not in your virtual environment.
You may wish to just add the path to gtk manually with something like
import sys
sys.path.append("/usr/lib/python2.7/dist-packages/gtk")
You could also extend the module search path when you activate the virtual environment. Open up venv/bin/activate. It's a scary-looking file, but at the end you can just put:
export PYTHONPATH=$PYTHONPATH:/my/custom/path
Save that, and the next time you activate the virtual environment with:
source venv/bin/activate
your custom path will be on the Python path. You can verify this with
echo $PYTHONPATH
An alternative approach, suggested in Python: virtualenv - gtk-2.0, is to go into your virtualenv directory, add a 'dist-packages' directory, and create symbolic links to the gtk packages you were using previously:
mkdir -p venv/lib/python2.7/dist-packages/
cd venv/lib/python2.7/dist-packages/
For GTK2:
ln -s /usr/lib/python2.7/dist-packages/glib/ glib
ln -s /usr/lib/python2.7/dist-packages/gobject/ gobject
ln -s /usr/lib/python2.7/dist-packages/gtk-2.0* gtk-2.0
ln -s /usr/lib/python2.7/dist-packages/pygtk.pth pygtk.pth
ln -s /usr/lib/python2.7/dist-packages/cairo cairo
For GTK3:
ln -s /usr/lib/python2.7/dist-packages/gi gi
Full disclosure: I feel that both these solutions are somewhat hackish, which is ok given that you say the question is urgent. There is probably a 'proper' way to extend a virtual environment so let us know if you eventually discover the better solution. You may have some luck with http://www.virtualenv.org/en/latest/index.html#creating-your-own-bootstrap-scripts
Another way to do this is to create a .pth file in your virtualenv's site-packages dir,
e.g.
(in <virtualenv>/lib/python2.7/site-packages/dist-packages.pth)
/usr/lib/python2.7/dist-packages/
This fixed the issue I was having with the apt-get-installed version of pycairo.
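Creating that file is a one-liner (shown for the same dist-packages path; adjust it to whatever system directory holds your packages):
echo "/usr/lib/python2.7/dist-packages/" > <virtualenv>/lib/python2.7/site-packages/dist-packages.pth
Any directory listed in a .pth file under site-packages is appended to sys.path at interpreter start-up, which is why this also picks up apt-installed extension modules such as pycairo.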
If you want to include links to the system Python's relevant gtk-2.0 packages in the virtualenv, you can just use pip to install ruamel.venvgtk:
pip install ruamel.venvgtk
You don't have to import anything; the links are set up during installation.
This is especially handy if you are using tox; in that case you only need to include the dependency (for tox):
deps:
pytest
ruamel.venvgtk
and a newly set up Python 2.7 environment will have the relevant links included before the tests are run.
It is now possible to resolve this using vext. Vext allows you to install packages in a virtualenv that individually access your system packages. To access PyGTK, do the following:
pip install vext
pip install vext.pygtk