After installing mechanize, I don't seem to be able to import it.
I have tried installing from pip, easy_install, and via python setup.py install from this repo: https://github.com/abielr/mechanize. All of this to no avail, as each time I enter my Python interactive I get:
Python 2.7.3 (default, Aug 1 2012, 05:14:39)
[GCC 4.6.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import mechanize
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named mechanize
>>>
The installations I ran previously reported that they had completed successfully, so I expect the import to work. What could be causing this error?
In my case, it was a permission problem. The package was somehow installed with root read/write permission only, so other users simply could not read it!
I had the same problem: script with import colorama was throwing an ImportError, but sudo pip install colorama was telling me "package already installed".
My fix: run pip without sudo: pip install colorama. Then pip agreed it needed to be installed, installed it, and my script ran. Or, even better, use python -m pip install <package>. The benefit of this is that, since you are executing the specific version of Python that you want the package in, pip will unequivocally install the package into the "right" Python. Again, don't use sudo in this case: you get the package in the right place, but possibly with (unwanted) root permissions.
My environment is Ubuntu 14.04 32-bit; I think I saw this before and after I activated my virtualenv.
I was able to correct this issue with a combined approach. First, I followed Chris' advice, opened a command line, and typed 'pip show packagename'.
This provided the location of the installed package.
Next, I opened python and typed 'import sys', then 'sys.path' to show where my python searches for any packages I import. Alas, the location shown in the first step was NOT in the list.
Final step: I typed sys.path.append('package_location_seen_in_step_1'). You can optionally repeat step two to see that the location is now in the list.
Test step, try to import the package again... it works.
The downside? It is temporary, and you need to add it to the list each time.
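Putting those steps together, a minimal sketch (the appended path is a hypothetical example of what pip show might report, and mechanize is just the example package):
import sys
print(sys.path)                                            # step 2: the location from pip show is missing here
sys.path.append('/usr/local/lib/python2.7/dist-packages')  # step 3: hypothetical location from step 1
import mechanize                                           # test step: works, but only for this session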
It's a Python path problem.
In my case, I have Python installed in:
/Library/Frameworks/Python.framework/Versions/2.6/bin/python,
and there is no site-packages directory within that python2.6.
The package (SOAPpy) I installed via pip is located at
/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/
Since that site-packages directory is not in the Python path, all I did was add it to PYTHONPATH permanently:
Open up Terminal
Type open ~/.bash_profile
In the text file that pops up, add this line at the end:
export PYTHONPATH=$PYTHONPATH:/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/
Save the file, restart the Terminal, and you're done
The Python import mechanism works, really, so either:
your PYTHONPATH is wrong,
your library is not installed where you think it is, or
you have another library with the same name masking this one (a quick diagnostic for all three follows).
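A sketch of that diagnostic (Python 3; mechanize is only the example name):
import sys
import importlib.util
print(sys.executable)   # which interpreter is actually running
print(sys.path)         # where it looks for modules (this reflects PYTHONPATH)
spec = importlib.util.find_spec("mechanize")
print(spec.origin if spec else "not found on sys.path")  # an unexpected path here means a same-named module is masking the real one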
I had been banging my head against my monitor on this until a young, hip intern told me the secret is to run "python setup.py install" inside the module directory.
For some reason, running the setup from there makes it just work, most likely because setup.py resolves package paths relative to the current working directory.
To be clear, if your module's name is "foo":
[burnc7 (2016-06-21 15:28:49) git]# ls -l
total 1
drwxr-xr-x 7 root root 118 Jun 21 15:22 foo
[burnc7 (2016-06-21 15:28:51) git]# cd foo
[burnc7 (2016-06-21 15:28:53) foo]# ls -l
total 2
drwxr-xr-x 2 root root 93 Jun 21 15:23 foo
-rw-r--r-- 1 root root 416 May 31 12:26 setup.py
[burnc7 (2016-06-21 15:28:54) foo]# python setup.py install
<--snip-->
If you try to run setup.py from any other directory by calling out its path, you end up with a borked install.
DOES NOT WORK:
python /root/foo/setup.py install
DOES WORK:
cd /root/foo
python setup.py install
I encountered this while trying to use keyring which I installed via sudo pip install keyring. As mentioned in the other answers, it's a permissions issue in my case.
What worked for me:
Uninstalled keyring:
sudo pip uninstall keyring
I used sudo's -H option and reinstalled keyring:
sudo -H pip install keyring
In PyCharm, I fixed this issue by changing the project interpreter path.
File -> Settings -> Project -> Project Interpreter
File -> Invalidate Caches… may be required afterwards.
I couldn't get my PYTHONPATH to work properly. I realized adding export fixed the issue:
(did work)
export PYTHONPATH=$PYTHONPATH:~/test/site-packages
vs.
(did not work)
PYTHONPATH=$PYTHONPATH:~/test/site-packages
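The difference is that without export the variable stays local to the shell and never reaches the Python process. A quick check from inside Python (a sketch, using the path from above):
import os, sys
print(os.environ.get("PYTHONPATH"))                       # None if the variable was set but not exported
print(any("test/site-packages" in p for p in sys.path))   # True only when PYTHONPATH reached the interpreter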
This problem can also occur with a relocated virtual environment (venv).
I had a project with a venv set up inside the root directory. Later I created a new user and decided to move the project to this user. Instead of moving only the source files and installing the dependencies freshly, I moved the entire project along with the venv folder to the new user.
After that, the dependencies that I installed were getting added to the global site-packages folder instead of the one inside the venv, so the code running inside this env was not able to access those dependencies.
To solve this problem, just remove the venv folder and recreate it, like so:
$ deactivate
$ rm -rf venv
$ python3 -m venv venv
$ source venv/bin/activate
$ pip install -r requirements.txt
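To confirm whether code is really running inside the venv (rather than falling back to the global site-packages), a quick check like this helps (a sketch, Python 3.3+):
import sys
print(sys.prefix)                      # should point inside the venv folder when it is active
print(sys.base_prefix)                 # the base interpreter the venv was created from
print(sys.prefix != sys.base_prefix)   # True when running inside a virtual environment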
Something that worked for me was:
python -m pip install --user {package name}
The command does not require sudo. This was tested on macOS Mojave.
In my case I had run pip install Django==1.11 and it would not import from the python interpreter.
Browsing through pip's commands I found pip show which looked like this:
> pip show Django
Name: Django
Version: 1.11
...
Location: /usr/lib/python3.4/site-packages
...
Notice the location says '3.4'. I found that the python command was linked to python2.7:
/usr/bin> ls -l python
lrwxrwxrwx 1 root root 9 Mar 14 15:48 python -> python2.7
Right next to that I found a link called python3, so I used that. You could also change the link to point to python3.4; that would fix it, too.
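A quick way to see the mismatch from inside the interpreter you are actually running (a sketch):
import sys
print(sys.version)   # e.g. 2.7.x here, while pip show reported .../python3.4/site-packages
print(sys.path)      # the python3.4 site-packages directory does not appear in this list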
In my case it was a problem with a missing __init__.py file in the module that I wanted to import in a Python 2.7 environment.
Python 3.3+ has implicit namespace packages that allow it to create packages without an __init__.py file.
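For reference, the minimal layout that imports cleanly on Python 2.7 (the names are just examples):
+ mypackage
- __init__.py
- module_a.py
On Python 2.7 the __init__.py is required (an empty file is enough); only on Python 3.3+ can the directory be treated as an implicit namespace package without it.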
Had this problem too... the package was installed on Python 3.8.0, but VS Code was running my script using an older version (3.4).
Fix in terminal:
py <filename>.py
Make sure you're installing the package for the right Python version.
I had colorama installed via pip and I was getting "ImportError: No module named colorama"
So I searched with "find", found the absolute path and added it in the script like this:
import sys
sys.path.append("/usr/local/lib/python3.8/dist-packages/")
import colorama
And it worked.
I had just the same problem, and updating setuptools helped:
python3 -m pip install --upgrade pip setuptools wheel
After that, reinstall the package, and it should work fine :)
The thing is, the package gets built incorrectly if setuptools is outdated.
If the other answers mentioned do not work for you, try deleting your pip cache and reinstalling the package. My machine runs Ubuntu 14.04, and the cache was located under ~/.cache/pip. Deleting this folder did the trick for me.
Also, make sure that you do not confuse pip3 with pip. What I found was that a package installed with pip was not available to python3, and vice versa.
I had a similar problem (on Windows), and the root cause in my case was ANTIVIRUS software! It has an "Auto-Containment" feature that wraps the running process in some kind of virtual machine.
Symptoms are: pip install somemodule works fine in one command-line window, and import somemodule fails when executed from another process with the error
ModuleNotFoundError: No module named 'somemodule'
In my case (an Ubuntu 20.04 VM on a Windows 10 host), I had a disordered situation with many versions of Python installed and shared libraries scattered across various points of the file system (installed with pip in many places). I'm referring to the Python 3.8.10 version.
After many tests, I found a suggestion searching with Google (sorry, I no longer have the link). This is what I did to resolve the problem:
From a shell session on the Ubuntu 20.04 VM (inside my home directory, /home/hduser in my case), I started a Jupyter Notebook session with the command "jupyter notebook".
Then, with Jupyter running, I opened an .ipynb file to issue commands.
First: pip list gave me the list of installed packages, and sympy was not present (although I had installed it with the "sudo pip install sympy" command).
Finally, with the command !pip3 install sympy (inside the Jupyter notebook session) I solved the problem.
Now, with !pip list, the package sympy is present and working.
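A more robust variant of the same fix is to install into the exact interpreter the notebook kernel is running, from a notebook cell (a sketch; the ! prefix is IPython's shell escape and only works inside a notebook or IPython session):
import sys
!{sys.executable} -m pip install sympy   # installs into the kernel's own Python, whichever that is
import sympy
print(sympy.__version__)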
In my case, I assumed a package was installed because it showed up in the output of pip freeze. However, just the site-packages/*.dist-info folder is enough for pip to list it as installed despite missing the actual package contents (perhaps from an accidental deletion). This happens even when all the path settings are correct, and if you try pip install <pkg> it will say "requirement already satisfied".
The solution is to manually remove the dist-info folder so that pip realizes the package contents are missing. Then a fresh install should re-populate anything that was accidentally removed.
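To confirm you are in this state before deleting anything, a quick check along these lines can help (a sketch; somepackage is a hypothetical name, Python 3):
import importlib.util, subprocess, sys
# pip, reading only the *.dist-info metadata, still believes the package is installed...
ok = subprocess.run([sys.executable, "-m", "pip", "show", "somepackage"],
                    stdout=subprocess.DEVNULL).returncode == 0
print(ok)                                        # True in the broken state
# ...but the import machinery cannot find the actual module files any more
print(importlib.util.find_spec("somepackage"))   # None in the broken state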
When you install via easy_install or pip, is it completing successfully? What is the full output? Which python installation are you using? You may need to use sudo before your installation command, if you are installing modules to a system directory (if you are using the system python installation, perhaps). There's not a lot of useful information in your question to go off of, but some tools that will probably help include:
echo $PYTHONPATH and/or echo $PATH: when importing modules, Python searches the directories listed in PYTHONPATH (a :-delimited list), while PATH determines which python executable runs in the first place. Import problems are often due to the right directory being absent from these lists.
which python, which pip, or which easy_install: these will tell you the location of each executable. It may help to know (a quick cross-check follows this list).
Use virtualenv, as @JesseBriggs suggests. It works very well with pip to help you isolate and manage the modules and environment for separate Python projects.
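A sketch of that cross-check: confirm that the python you run and the pip you run agree on which interpreter they belong to.
import sys
print(sys.executable)   # the interpreter that "python" actually starts
print(sys.path)         # the directories it searches for imports
# Compare sys.executable with the interpreter reported by "pip --version";
# if they differ, pip is installing packages for a different Python than the one you are running.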
I had this exact problem, but none of the answers above worked. It drove me crazy until I noticed that sys.path was different after I had imported from the parent project. It turned out that I had used importlib to write a little function in order to import a file not in the project hierarchy. Bad idea: I forgot that I had done this. Even worse, the import process mucked with the sys.path--and left it that way. Very bad idea.
The solution was to stop that, and simply put the file I needed to import into the project. Another approach would have been to put the file into its own project, as it needs to be rebuilt from time to time, and the rebuild may or may not coincide with the rebuild of the main project.
I had this problem with 2.7 and 3.5 installed on my system trying to test a telegram bot with Python-Telegram-Bot.
I couldn't get it to work after installing with pip and pip3, with sudo or without. I always got:
Traceback (most recent call last):
File "telegram.py", line 2, in <module>
from telegram.ext import Updater
File "$USER/telegram.py", line 2, in <module>
from telegram.ext import Updater
ImportError: No module named 'telegram.ext'; 'telegram' is not a package
Reading the error message carefully told me that Python was looking in the current directory for telegram.py. And sure enough, I had a script lying there called telegram.py, and that was what Python loaded when I called import.
Conclusion: make sure you don't have a file named <package>.py in your current working directory when trying to import that package. (And read the error message thoroughly.)
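A one-liner that makes this kind of shadowing obvious (a sketch, using telegram as the example name):
import telegram
print(telegram.__file__)   # if this points at ./telegram.py, a local file is shadowing the installed package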
I had a similar problem using Django. In my case, I could import the module from the Django shell, but not from a .py which imported the module.
The problem was that I was running the Django server (and therefore executing the .py) from a different virtualenv than the one in which the module had been installed.
The shell instance, on the other hand, was running in the correct virtualenv; hence why it worked.
This Works!!!
This often happens when a module is installed to an older version of Python or to another directory; no worries, as the solution is simple.
- Import the module from the directory in which it is installed.
You can do this by first importing the Python sys module and then appending the path in which the module is installed:
import sys
sys.path.append("directory in which module is installed")
import <module_name>
Most of the possible cases have already been covered in the other solutions; I'm just sharing my case. It happened to me that I installed a package in one environment (e.g. X) while I was importing the package in another environment (e.g. Y). So, always make sure that you import the package from the environment in which you installed it.
For me it was a matter of ensuring the version of the module aligned with the version of Python I was using. I built the image on a box with Python 3.6, then injected it into a Docker image that happened to have 3.7 installed, and then banged my head while Python told me the module wasn't installed...
36m for Python 3.6: bsonnumpy.cpython-36m-x86_64-linux-gnu.so
37m for Python 3.7: bsonnumpy.cpython-37m-x86_64-linux-gnu.so
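A quick way to check, from inside the interpreter, which extension-module tag it expects (a sketch):
import sys, sysconfig
print(sys.version_info[:2])                     # the Python actually running, e.g. (3, 7)
print(sysconfig.get_config_var("EXT_SUFFIX"))   # e.g. '.cpython-37m-x86_64-linux-gnu.so'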
I know this is a super old post but for me, I had an issue with a 32 bit python and 64 bit python installed. Once I uninstalled the 32 bit python, everything worked as it should.
I solved an issue where the same libraries were working fine in one project (A), but importing those same libraries in another project (B) caused an error. I am using PyCharm as my IDE on Windows.
So, after trying many potential solutions and failing to solve the issue, I did these two things (deleted the "venv" folder and reconfigured the interpreter):
1 - In project (B) there was a folder named "venv", located under External Libraries/. I deleted that folder.
2 - Step 1 (deleting the "venv" folder) causes an error in the Python interpreter configuration, and a message is shown at the top of the screen saying "Invalid python interpreter selected for the project", with a "configure python interpreter" link. Select that link and it opens a new window. There, in the "Project Interpreter" drop-down list, a red line shows the previous, invalid interpreter. Open the list and select the Python interpreter (in my case, Python 3.7). Press "Apply" and "OK" at the bottom and you are good to go.
Note: the issue was most likely that the virtual environment of my project (B) was not recognizing the already installed and working libraries.
(Before responding with a 'see this link' answer, know that I've been searching for hours and have probably read it all. I've done my due diligence, I just can't seem to find the solution)
That said, I'll start with my general setup and give details after.
Setup: On my desktop, I have a project that I am running in PyCharm, Python 3.4, using a virtual environment. In the cloud (AWS), I have an EC2 instance running Ubuntu. I'm not using a virtual environment in the cloud. The cloud machine has both Python 2.7 and Python 3.5 installed.
[Edit] I've switched to a virtual environment on my cloud machine and am installing from a setup distribution (still broken).
Problem: On my desktop, both within PyCharm and from the command line (within the virtual environment, using workon project), I can run a particular file called "do_daily.py" without any issues. However, if I try to run the same file on the cloud server, I get the famous import error.
[Edit] Running directly from the command line on the remote server:
python3 src/do_daily.py
File "src/do_daily.py", line 3, in <module>
from src.db_backup import dev0_backup as dev0bk
ImportError: No module named 'src.db_backup'
Folder Structure: My folder structure for the specific import is (among other things):
+ project
+ src
- __init__.py
- do_daily.py
+ db_backup
- __init__.py
- dev0_backup.py
Python Path: (echo $PYTHONPATH)
/home/ubuntu/automation/Project/src/tg_servers:/home/ubuntu/automation/Project/src/db_backup:/home/ubuntu/automation/Project/src/aws:/home/ubuntu/automation/Project/src:/home/ubuntu/automation/Project
Other stuff:
print(sys.executable) = /usr/bin/python3
print(sys.path) = gives me all the above plus a bunch of default paths.
I have run out of ideas and would appreciate any help.
Thank you,
SteveJ
SOLUTION
Clearly the accepted answer is the most comprehensive and represents the best approach to the problem. However, for those seeing this later: I was able to solve my specific problem a little more directly.
(From within the virtual environment), both add2virtualenv and creating .pth files did work. What I was missing is that I had to add all the packages: src, db_backup, pkgx, y, z, etc.
I have created a github repository (https://github.com/thebjorn/pyimport.git), and tested the code on a freshly created AWS/Ubuntu instance.
First the installs and updates I did (installing and updating pip3):
ubuntu#:~$ sudo apt-get update
ubuntu#:~$ sudo apt install python3-pip
ubuntu#:~$ pip3 install -U pip
Then get the code:
ubuntu#:~$ git clone https://github.com/thebjorn/pyimport.git
My version of do_daily.py imports dev0_backup, contains a function that tells us it was called, and has a __main__ section (for calling it with -m or by filename):
ubuntu#ip-172-31-29-112:~$ cat pyimport/src/do_daily.py
from __future__ import print_function
from src.db_backup import dev0_backup as dev0bk

def do_daily_fn():
    print("do_daily_fn called")

if __name__ == "__main__":
    do_daily_fn()
The setup.py file points directly to the do_daily_fn:
ubuntu#ip-172-31-29-112:~$ cat pyimport/setup.py
from setuptools import setup

setup(
    name='pyimport',
    version='0.1',
    description='pyimport',
    url='https://github.com/thebjorn/pyimport.git',
    author='thebjorn',
    license='MIT',
    packages=['src'],
    entry_points={
        'console_scripts': """
            do_daily = src.do_daily:do_daily_fn
        """
    },
    zip_safe=False
)
Install the code in dev mode:
ubuntu#:~$ pip3 install -e pyimport
I can now call do_daily in a number of ways (notice that I haven't done anything with my PYTHONPATH).
The console_scripts entry in setup.py makes it possible to call do_daily by just typing its name:
ubuntu#:~$ do_daily
do_daily_fn called
Installing the package (in dev mode or otherwise) makes the -m flag work out of the box:
ubuntu#:~$ python3 -m src.do_daily
do_daily_fn called
You can even call the file directly (although this is by far the ugliest way and I would recommend against it):
ubuntu#:~$ python3 pyimport/src/do_daily.py
do_daily_fn called
Your PYTHONPATH should contain /home/ubuntu/automation/Project and likely nothing below it.
There is every reason to use a virtualenv in production and never install any packages into the system Python explicitly. The system Python is for running the OS-provided software written in Python; don't mix it with your deployments.
A few questions here.
From which directory are you running your program?
Did you try to import the db_backup module inside of src/__init__.py?
From IDLE, I tried to run a script with a newly installed scrapy 1.0.3.
I'm using a script from a friend for whom it worked (but on Windows; I'm on a Mac).
From the import of scrapy on the first line, I get this error when running the program:
ImportError: No module named twisted.persisted.styles
The full traceback, if it's helpful, is:
Traceback (most recent call last):
File "/Users/eliasfong/tutorial/tutorial/spiders/medspider.py", line 1, in <module>
import scrapy
File "/Library/Python/2.7/site-packages/scrapy/__init__.py", line 27, in <module>
from . import _monkeypatches
File "/Library/Python/2.7/site-packages/scrapy/_monkeypatches.py", line 20, in <module>
import twisted.persisted.styles # NOQA
ImportError: No module named twisted.persisted.styles
Any suggestions on how to tackle this problem?
Just try to force an update of twisted:
pip install twisted --upgrade
That worked for me with Python 3.4 and Scrapy==1.1.0rc1.
Either Twisted is installed on your Mac (I highly doubt it, since it's not a standard library) and for whatever reason the IDE (I'm assuming that's what you mean, since you typed "idle") or the terminal you are in doesn't have your updated environment variables, meaning it doesn't understand where your default Python libraries are (again, I highly doubt it), or you simply do not have Twisted installed on your Mac. If it's not installed, you have a couple of options:
The easiest way to install a python package is through pip.
If that is not an option, you can try Homebrew, which is another package manager for Macs. It offers an easy way to install packages correctly.
If that still is not an option for you, or you simply don't want to attempt that, you can download twisted directly from here (the .bz2, since you're on a Mac), click on it and it should unzip for you. Then just run setup.py and it should install it in the correct location on your Mac.
If that still doesn't work and you have decent knowledge of Unix, use the "locate" command in the terminal to find out where your dist-packages directory is, put the source for twisted in there directly, and then attempt to import twisted in your IDE or in the Python interpreter to verify that it is installed.
Note: if you're still having problems after it is installed, try restarting your IDE or messing with some settings to make sure your IDE has the right environment and Python path. Hope that helps!
It could be related to having installed Python without bzip2. I had the same error and this helped me, see the accepted answer here:
Installing Twisted through pip broken on one server
Had this exact thing on FreeBSD. Solution (as root/sudo):
chmod -R go+rX /usr/local/lib/python2.7/site-packages
Some directory permissions weren't set up right on install.
I'm trying to start using the pygame module but I can't get it to run. I'm using Mountain Lion with Python 2.7 and MacPorts, but I also installed some science and math modules using Anaconda before I ever discovered and started using MacPorts. Note that my MacPorts was just updated before I started any of the following. I initially tried to just use:
sudo port install py27-game
which looked like it worked and set everything up without a problem. But, when I go into the Python interpreter from the command line and type:
import pygame
I get the response:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named pygame
So then I went in and did:
import sys
print sys.path
which gave:
['', '/Users/trav/anaconda/lib/python27.zip', '/Users/trav/anaconda/lib/python2.7',
'/Users/trav/anaconda/lib/python2.7/plat-darwin',
'/Users/trav/anaconda/lib/python2.7/plat-mac',
'/Users/trav/anaconda/lib/python2.7/plat-mac/lib-scriptpackages',
'/Users/trav/anaconda/lib/python2.7/lib-tk',
'/Users/trav/anaconda/lib/python2.7/lib-old',
'/Users/trav/anaconda/lib/python2.7/lib-dynload',
'/Users/trav/anaconda/lib/python2.7/site-packages',
'/Users/trav/anaconda/lib/python2.7/site-packages/PIL',
'/Users/trav/anaconda/lib/python2.7/site-packages/setuptools-0.6c11-py2.7.egg-info']
So, I'm guessing that because I used the Anaconda setup when I initially installed NumPy, SciPy, and Matplotlib, this has caused MacPorts to clash with it somehow in the path.
Should I just remove the Anaconda package? If so, how can I go about removing its dependencies when I do that?
Ok, so I figured out the problem, and it was my path. I went in and removed the anaconda package with:
rm -r ~/anaconda
Then, I used macports to basically reinstall the whole scipy stack with:
sudo port install py27-wxpython py27-numpy py27-matplotlib py27-scipy py27-ipython
This took some time to compile, and when it was finished I went in on the command line and used:
sudo port select --set python python27
After that I opened my interpreter and imported all my scientific computing needs without a problem, as well as pygame, which I had installed earlier with MacPorts. I hope this helps someone else in the future. ALSO: when you remove packages like Anaconda, make sure to close your terminal and then re-open it, or it will still try to use the Anaconda dependencies, which are no longer there. I had MacPorts set up already, so after removing Anaconda the MacPorts path became the default.
One last edit: in order to get all of this to run correctly, and to allow me to run the scripts from within Emacs with all the imported modules, I had to switch from my normal Emacs editor to Aquamacs so that the correct path was used from within the Emacs environment, or I could have just run Emacs from the terminal with /Application/Emacs.app/Contents/MacOS/Emacs.
PySide is installed successfully, and it works perfectly, but I can't find a way to import the shiboken module. I found the discussion about the feature request to expose shiboken functions through a Python module (http://bugs.pyside.org/show_bug.cgi?id=902), and the issue is marked as resolved; it was implemented in January 2012, if I understood correctly.
Even so, after the installation of PySide 1.1.1, when I try:
>>> import shiboken
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: No module named shiboken
I get an ImportError exception. How can I install the shiboken python module?
Looks like someone forgot to update cmake: bugs-PYSIDE-55.
However, I've just compiled shiboken-1.1.2, and the issue seems to be fixed.
I believe that under ideal circumstances ekhumoro's answer is totally correct; unfortunately I was not that lucky, and the binary packages still didn't allow the use of the shiboken Python module. I had to compile it manually, but that part became tricky too, as it didn't work with the default instructions found on their homepage, probably because I'm using Ubuntu 12.04, though I'm not sure why else.
As the target was usage from within a virtualenv, I followed these instructions:
export PYSIDESANDBOXPATH=/path/to/my/virtualenv
export PATH=$PYSIDESANDBOXPATH/bin:$PATH
export PYTHONPATH=$PYSIDESANDBOXPATH/lib/python2.6/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH=$PYSIDESANDBOXPATH/lib:$LD_LIBRARY_PATH
export PKG_CONFIG_PATH=$PYSIDESANDBOXPATH/lib/pkgconfig:$PKG_CONFIG_PATH
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$PYSIDESANDBOXPATH -DCMAKE_BUILD_TYPE=Debug -DENABLE_ICECC=0
make
make install
sudo ldconfig
The first problem here was that after it was compiled and the installation began, when it wanted to install the shiboken Python module, this happened:
-- Installing: .../lib/python2.7/site-packages/shiboken.so
-- Removed runtime path from .../lib/python2.7/site-packages/shiboken.so
Then I found somewhere that I should add this parameter to cmake:
-DCMAKE_SKIP_RPATH:BOOL=YES
Now the installation was successful, but when I tried to import shiboken in python, this happened:
import shiboken
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: libshiboken.so: cannot open shared object file: No such file or directory
Google revealed that the issue is caused by $LD_LIBRARY_PATH not containing the path where those libs are located. First of all, Ubuntu 12 (and I think 10 and 11 also) does not use the $LD_LIBRARY_PATH environment variable anymore, so it was not even set.
So the path was incorrect, because I tried to join that unset variable with a path:
export LD_LIBRARY_PATH=$PYSIDESANDBOXPATH/lib:$LD_LIBRARY_PATH
It treated them as two regular strings and just joined them together. The snippet below shows how to join them safely to avoid such trouble. But that didn't solve the problem either, as running ldconfig still didn't update anything, so the import in Python failed again.
The final solution (found with Google too :) ) was to create a new file in /etc/ld.so.conf.d/, put the contents of $LD_LIBRARY_PATH there, and run ldconfig after that. So here is the final install script, which worked as expected:
#!/usr/bin/env bash
export PYSIDESANDBOXPATH=/path/to/my/virtualenv
export PATH="$PYSIDESANDBOXPATH/bin${PATH:+:$PATH}"
export PYTHONPATH="$PYSIDESANDBOXPATH/lib/python2.7/site-packages${PYTHONPATH:+:$PYTHONPATH}"
export LD_LIBRARY_PATH="$PYSIDESANDBOXPATH/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
export PKG_CONFIG_PATH="$PYSIDESANDBOXPATH/lib/pkgconfig${PKG_CONFIG_PATH:+:$PKG_CONFIG_PATH}"
mkdir build
cd build
cmake .. -DCMAKE_INSTALL_PREFIX=$PYSIDESANDBOXPATH -DCMAKE_SKIP_RPATH:BOOL=YES -DCMAKE_BUILD_TYPE=Debug -DENABLE_ICECC=0
make
make install
sudo sh -c "echo $LD_LIBRARY_PATH > /etc/ld.so.conf.d/shiboken.conf"
sudo ldconfig
That's all, it cost me several hours to figure out, hope this will save someone else :)
Here's how I compiled shiboken.pyd on Windows from source code, tested with PySide-1.1.2 + Qt 4.8.4 + MSVC 2010.
First, manually download shiboken-1.1.2.tar.bz2 and extract it. Then compile it this way (you might need to set up a virtualenv):
python setup.py build --openssl=C:\dev\OpenSSL\1.0.0j\bin --qmake=C:\Qt\4.8.4\bin\qmake.exe
After it finished, I got shiboken.pyd at:
PySide-1.1.2\pyside_install\py2.7-qt4.8.4-32bit-release\lib\site-packages\shiboken.pyd
P.S.
However, shiboken.pyd was missing in "PySide-1.1.2\build\lib", from where files would be installed to site-packages. This explains why I couldn't get shiboken.pyd by compiling PySide from pip using:
pip install PySide --install-option="--openssl=C:\dev\OpenSSL\1.0.0j\bin" --install-option="--qmake=C:\Qt\4.8.4\bin\qmake.exe"
By the way, on Mac OS X, if you install PySide using MacPorts, "import shiboken" will also fail, because it is installed into the wrong location ("/opt/local/lib/python2.7/site-packages" instead of "/opt/local/Library/Frameworks/Python.framework/Version/2.7/lib/python2.7/site-packages"). Adding "/opt/local/lib/python2.7/site-packages" to PYTHONPATH will solve the issue.
There are definitely bugs in the pyside-setup scripts. I hope Digia can send somebody to fix PySide before the project goes dead.