Install m2crypto on a virtualenv without system packages - python

I have created a virtual environment without the system packages using Python's virtualenv on Ubuntu and installed m2crypto in it, but when I start a shell and try to import M2Crypto I get the following error:
ImportError: /home/imediava/.virtualenvs/myenv/local/lib/python2.7/site-packages/M2Crypto/__m2crypto.so: undefined symbol: SSLv2_method
Outside the environment I run into the same problem unless I install python-m2crypto with apt-get on Ubuntu. I know that I could create the environment with the system packages, but I would prefer not to.
Is there any way to create a virtual environment without the system packages and then install m2crypto with pip, without running into the SSLv2_method error?

There seems to be a regression bug from an earlier version of M2Crypto.
After placing M2Crypto's source in your virtualenv, you can patch it with the diff below.
Download the source tarball and untar it:
tar -xzf M2Crypto-0.21.1.tar.gz
This creates the directory M2Crypto-0.21.1, which contains the SWIG directory.
In SWIG you'll find _ssl.i, the file to be patched. In the same directory create a file called _ssl.i.patch (for example with the nano editor) and paste the complete diff listed below into it.
Next run patch _ssl.i _ssl.i.patch to merge the patch into the code. (Afterwards you may remove the patch file if you want.)
Finally issue the commands:
python setup.py build
followed by:
python setup.py install
to install manually.
diff code:
--- SWIG/_ssl.i 2011-01-15 20:10:06.000000000 +0100
+++ SWIG/_ssl.i 2012-06-17 17:39:05.292769292 +0200
@@ -48,8 +48,10 @@
%rename(ssl_get_alert_desc_v) SSL_alert_desc_string_long;
extern const char *SSL_alert_desc_string_long(int);
+#ifndef OPENSSL_NO_SSL2
%rename(sslv2_method) SSLv2_method;
extern SSL_METHOD *SSLv2_method(void);
+#endif
%rename(sslv3_method) SSLv3_method;
extern SSL_METHOD *SSLv3_method(void);
%rename(sslv23_method) SSLv23_method;
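Putting the steps together, the whole sequence looks roughly like this (run it with your virtualenv activated so python setup.py install targets its site-packages; _ssl.i.patch is the file you created from the diff above):
tar -xzf M2Crypto-0.21.1.tar.gz
cd M2Crypto-0.21.1/SWIG
# create _ssl.i.patch here containing the diff above, then:
patch _ssl.i _ssl.i.patch
cd ..
python setup.py build
python setup.py install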

You can install this lib in your global environment and then just copy it from your global site-packages into the virtualenv's site-packages.
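For example, on a default Ubuntu layout that copy would look roughly like this (the paths are illustrative; adjust the Python version and virtualenv location to yours):
cp -r /usr/lib/python2.7/dist-packages/M2Crypto ~/.virtualenvs/myenv/lib/python2.7/site-packages/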

M2Crypto 0.22.3 (the current version on PyPI) fixes this problem, so the simplest solution is now simply:
pip install --upgrade M2Crypto
M2Crypto 0.22.3 was released from martinpaljak's GitHub repository rather than from the original M2Crypto repository.

I had the same problem with the current release (M2Crypto-0.22.5). The latest RC build worked for me.
pip install M2Crypto==0.22.6rc4

There is a patch slated for 0.26.1.
In the meantime, you can clone the repo, apply the patch, and install from source.
git clone https://gitlab.com/m2crypto/m2crypto.git
(
cd m2crypto
git checkout 0.25.1
curl 'https://gitlab.com/m2crypto/m2crypto/merge_requests/117.diff' | git apply -v
)
pip install -U ./m2crypto

Related

How to resolve CMake Error: Could not find a package configuration file provided by "boost_python3"

I tried to install the lanelet2 library according to the github installation guide at https://github.com/fzi-forschungszentrum-informatik/Lanelet2.
When I perform catkin build I get the following error:
Errors << lanelet2_python:cmake /home/student/catkin_ws/logs/lanelet2_python/build.cmake.000.log
CMake Error at /usr/lib/x86_64-linux-gnu/cmake/Boost-1.71.0/BoostConfig.cmake:117 (find_package):
Could not find a package configuration file provided by "boost_python3"
(requested version 1.71.0) with any of the following names:
boost_python3Config.cmake
boost_python3-config.cmake
Add the installation prefix of "boost_python3" to CMAKE_PREFIX_PATH or set
"boost_python3_DIR" to a directory containing one of the above files. If
"boost_python3" provides a separate development package or SDK, be sure it
has been installed.
My OS is Ubuntu 20.04 with ROS noetic. The build is performed inside a venv with Python Version 3.8.10.
The command python is pointing to python3. I've also installed the following dependencies:
sudo apt-get install ros-noetic-rospack ros-noetic-catkin ros-noetic-mrt-cmake-modules
sudo apt-get install libboost-dev libeigen3-dev libgeographic-dev libpugixml-dev libpython3-dev libboost-python-dev python3-catkin-tools
Does someone have an idea how to resolve this error?
See neutrinoyu's comment at https://github.com/ethz-asl/kalibr/issues/368#issuecomment-651726289
/kalibr/Schweizer-Messer/numpy_eigen/cmake/add_python_export_library.cmake:89
change
list(APPEND BOOST_COMPONENTS python3)
to
list(APPEND BOOST_COMPONENTS python)
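If you are unsure which Boost CMake component names your system actually provides, you can list the Boost config directories shipped by the distro packages; on Ubuntu 20.04 with Boost 1.71 the Python component is typically the version-suffixed boost_python38 rather than boost_python3 (the path below is the same one shown in the error message):
ls /usr/lib/x86_64-linux-gnu/cmake/ | grep -i boost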

nrfutil - "ImportError: No module named main" on Nixos

I'm using the tool nrfutil, which is implemented in Python. To be able to use it under NixOS I was using a default.nix file that installs nrfutil into a venv. This worked very well for some time. (The last build on the build server, which uses Nix inside an Alpine container, successfully built the software I'm working on 11 days ago.) When I do exactly the same thing now (i.e. restart the CI server build without changes), the build fails complaining about pip being broken:
$ nix-shell
New python executable in /home/matthias/source/tbconnect/bootloader/.venv/bin/python2.7
Not overwriting existing python script /home/matthias/source/tbconnect/bootloader/.venv/bin/python (you must use /home/matthias/source/tbconnect/bootloader/.venv/bin/python2.7)
Installing pip, wheel...
done.
Traceback (most recent call last):
File "/home/matthias/source/tbconnect/bootloader/.venv/bin/pip", line 6, in <module>
from pip._internal.main import main
ImportError: No module named main
To me it seems that the module main should exist:
$ ls -l .venv/lib/python2.7/site-packages/pip/_internal/main.py
-rw-r--r-- 1 matthias matthias 1359 10月 15 12:27 .venv/lib/python2.7/site-packages/pip/_internal/main.py
I'm not very familiar with the Python ecosystem, so I don't know how to investigate further. Does somebody have a pointer on where to continue debugging? How does Python resolve modules? Why doesn't it find the module that appears to be present?
This is my default.nix that I use to install pip:
with import <nixpkgs> {};
with pkgs.python27Packages;
stdenv.mkDerivation {
name = "impurePythonEnv";
buildInputs = [
automake
autoconf
gcc-arm-embedded-7
# these packages are required for virtualenv and pip to work:
#
python27Full
python27Packages.virtualenv
python27Packages.pip
# the following packages are related to the dependencies of your python
# project.
# In this particular example the python modules listed in the
# requirements.txt require the following packages to be installed locally
# in order to compile any binary extensions they may require.
#
taglib
openssl
git
stdenv
zlib ];
src = null;
shellHook = ''
# set SOURCE_DATE_EPOCH so that we can use python wheels
SOURCE_DATE_EPOCH=$(date +%s)
virtualenv --no-setuptools .venv
export PATH=$PWD/.venv/bin:$PATH
#pip install nrfutil
pip help
# the following is required to build micro_ecc_lib_nrf52.a in the SDK
export GNU_INSTALL_ROOT="${gcc-arm-embedded-7}/bin/"
unset CC
'';
}
I replaced pip install nrfutil with pip help to make sure the problem is not the package I'm trying to install itself.
I'm still using Python 2.7 because nrfutil is not yet ready for Python 3.
(Replacing python27 with python37 did not change the error I get when trying to start pip.)
NixOS version used locally is 19.09. Nix in the CI docker container is nixos/nix:latest which is the nix package manager on Alpine Linux.
Update:
Actually, it works when I replace the call to pip install nrfutil with python2.7 -m pip install nrfutil. This confuses me even more: python2.7 is exactly the binary in the shebang of pip:
[nix-shell:~/source/tbconnect/bootloader]$ type python2.7
python2.7 is /home/matthias/source/tbconnect/bootloader/.venv/bin/python2.7
[nix-shell:~/source/tbconnect/bootloader]$ type pip
pip is /home/matthias/source/tbconnect/bootloader/.venv/bin/pip
[nix-shell:~/source/tbconnect/bootloader]$ head --lines 2 .venv/bin/pip
#!/home/matthias/source/tbconnect/bootloader/.venv/bin/python2.7
# -*- coding: utf-8 -*-
Update 2:
I found out that another way to fix the problem is to edit .venv/bin/pip. This script tries the following import:
from pip._internal.main import main
This is, I think, the new module path introduced in pip 19.3, but I still have pip 19.2. When I change this line to:
from pip._internal import main
Running pip by typing pip then works.
The thing is I have no idea why the pip script is trying to load the new module path while NixOS still has the old version of pip.
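If you just want to script this workaround until the underlying issue is resolved, a one-line sed over the generated wrapper does the same edit (the path is the venv location used in this question; adjust to yours):
sed -i 's/from pip._internal.main import main/from pip._internal import main/' .venv/bin/pip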
I also opened an issue for NixOS on GitHub: https://github.com/NixOS/nixpkgs/issues/71178
I got your shell derivation to work by dropping python27Packages.pip:
(nix-shell) 2d [azul:/tmp/lixo12333] $
>>> pip list
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7. More details about Python 2 support in pip, can be found at https://pip.pypa.io/en/latest/development/release-process/#python-2-support
Package Version
---------------- -------
behave 1.2.6
Click 7.0
crcmod 1.7
ecdsa 0.13.3
enum34 1.1.6
future 0.18.2
intelhex 2.2.1
ipaddress 1.0.23
libusb1 1.7.1
linecache2 1.0.0
nrfutil 5.2.0
parse 1.12.1
parse-type 0.5.2
pc-ble-driver-py 0.11.4
piccata 1.0.1
pip 19.3.1
protobuf 3.10.0
pyserial 3.4
pyspinel 1.0.0a3
PyYAML 4.2b4
setuptools 41.6.0
six 1.12.0
tqdm 4.37.0
traceback2 1.4.0
virtualenv 16.4.3
wheel 0.33.6
wrapt 1.11.2
(nix-shell) 2d [azul:/tmp/lixo12333] $
and my default.nix
with import <nixpkgs> {};
with pkgs.python27Packages;
stdenv.mkDerivation {
name = "impurePythonEnv";
buildInputs = [
automake
autoconf
gcc-arm-embedded-7
# these packages are required for virtualenv and pip to work:
#
python27Full
python27Packages.virtualenv
# the following packages are related to the dependencies of your python
# project.
# In this particular example the python modules listed in the
# requirements.txt require the following packages to be installed locally
# in order to compile any binary extensions they may require.
#
taglib
openssl
git
stdenv
zlib ];
src = null;
shellHook = ''
# set SOURCE_DATE_EPOCH so that we can use python wheels
SOURCE_DATE_EPOCH=$(date +%s)
virtualenv .venv
export PATH=$PWD/.venv/bin:$PATH
pip install nrfutil
#pip help
# the following is required to build micro_ecc_lib_nrf52.a in the SDK
export GNU_INSTALL_ROOT="${gcc-arm-embedded-7}/bin/"
unset CC
'';
}

How to install dependencies from requirements.txt in a Yocto recipe for a local Python project

What I need:
I want my Yocto project to build a package for my Python project with all dependencies included. The project has to run out of the box on the resulting read-only SD card image.
The build should simply install all requirements, in the required versions, into the package.
What I tried without luck:
Calling pip in do_install():
"pip/pip3 is not found", even though it is in RDEPENDS. This would still be my preferred approach.
With inherit pypi:
When trying inherit pypi, it also tries to fetch my local sources (my Python project) from PyPI, and I always have to copy the requirements into the recipe. This is not my preferred way.
Calling pip in pkg_postinst():
It tries to install the modules on first boot and fails, because the system has no internet connection and is read-only. It must run out of the box without any installation at first boot, so this does its work too late.
A further requirement:
There should be no need to change anything in the recipes when something changes in requirements.txt.
Background information
I'm working with Yocto Rocko in a Linux environment.
There is no pip installed on the host system; I want to use the pip installed via RDEPENDS on the target system.
Building the Package (only this recipe) with:
bitbake myproject
Building the whole sdcard image:
bitbake myProject-image-base
The recipe:
myproject.bb (relevant lines):
RDEPENDS_${PN} = "python3 python3-pip"
APP_SOURCES_DIR := "${#os.path.abspath(os.path.dirname(d.getVar('FILE', True)) + '/../../../../app-sources')}"
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI = " \
file://${APP_SOURCES_DIR}/myProject \
...
"
inherit allarch # tried also with pypi and setuptools3 for the pypi way.
do_install() { # Line 116
install -d -m 0755 ${D}/myProject
cp -R --no-dereference --preserve=mode,links -v ${APP_SOURCES_DIR}/myProject/* ${D}/myProject/
pip3 install -r ${APP_SOURCES_DIR}/myProject/requirements.txt
# Tried also python ${APP_SOURCES_DIR}/myProject/setup.py install
}
# Tried also this, but it's no option because the data MUST be included in the Package:
# pkg_postinst_${PN}() {
# #!/bin/sh -e
# pip3 install -r /myProject/requirements.txt
# }
FILES_${PN} = "/myProject/*"
Expected result:
The modules listed in requirements.txt should be installed into the myProject package, so that the Python app runs directly on the resulting read-only SD card image.
Resulting errors:
With pip, I get:
| /*/tmp/work/*/myProject/0.1.0-r0/temp/run.do_install: 116: pip3: not found
| WARNING: exit code 127 from a shell command.
| ERROR: Function failed: do_install ...
When using pypi:
404 Not Found
ERROR: myProject-0.1.0-r0 do_fetch: Fetcher failure for URL: 'https://files.pythonhosted.org/packages/source/m/myproject/myproject-0.1.0.tar.gz'. Unable to fetch URL from any source.
=> But it should not fetch myProject, since it is local and not available anywhere remotely.
Any ideas? What would be the best way to get a ready-to-use SD card image without needing to change recipes when requirements.txt changes?
You should use RDEPENDS_${PN} to take care of your dependencies for your app in the recipe.
For example, assuming your Python app needs the aws-iot-device-sdk-python module, you should add it to RDEPENDS in the recipe. In your case, it would look like this:
RDEPENDS_${PN} = "python3 \
python3-pip \
python3-aws-iot-device-sdk-python \
"
Here's the link showing the Python modules supported by OpenEmbedded Layer.
https://layers.openembedded.org/layerindex/branch/master/layer/meta-python/
If the modules you need are not there, you will likely need to create recipes for the modules.
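If you want to avoid maintaining the module list in two places, a rough way to generate the RDEPENDS names from requirements.txt is shown below (this simple mapping just strips version pins and prefixes python3-; OpenEmbedded recipe names don't always match PyPI names, so verify against the layer index):
sed -e 's/[<>=!;].*//' -e 's/_/-/g' -e 's/^/python3-/' requirements.txt | tr '\n' ' '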
My newest findings:
Yocto/BitBake seems to suppress installing the requirements, because that would break automatic dependency resolution and could lead to conflicts.
Reason: the modules required by setup.py would not be stored as independent packages, but as part of my package. BitBake would therefore not know about these modules, which could conflict with other packages that require the same modules in different versions.
What was in my recipe:
MY_INSTALL_ARGS = "--root=${D} \
--prefix=${prefix} \
--install-lib=${PYTHON_SITEPACKAGES_DIR} \
--install-data=${datadir}"
do_install() {
PYTHONPATH=${PYTHON_SITEPACKAGES_DIR} \
${STAGING_BINDIR_NATIVE}/${PYTHON_PN}-native/${PYTHON_PN} setup.py install ${MY_INSTALL_ARGS}
}
If I execute this outside of BitBake as python3 setup.py install ${MY_INSTALL_ARGS}, everything is installed correctly, but inside the recipe no requirements are installed.
There is a --no-deps parameter, but I didn't find where it is set.
I think there could be one way to still pull the requirements out of setup.py:
1. Find out where to disable --no-deps in the openembedded/poky layer for easy_install.
2. Create a separate PYTHON_SITEPACKAGES_DIR.
3. Install this separate PYTHON_SITEPACKAGES_DIR, e.g. in the home directory, as a private Python modules dir.
This way, no Python module would trigger a conflict.
Since I do not have the time to experiment with this, I'll now define one recipe per requirement.
Did you try installing pip?
Debian:
apt-get install python-pip
apt-get install python3-pip
CentOS:
yum install python-pip

ImportError: cannot import name _imaging

I installed Pillow, and afterwards when I try to do:
from PIL import Image
I get the following error:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 61, in <module>
ImportError: cannot import name _imaging
However, if I import these separately, everything is fine, i.e.:
import _imaging
import Image
Do you know what the problem might be?
I had the same problem and I solved that by upgrading this package using the command below:
pip install -U Pillow
This also happens if you built Pillow on one OS and then copied the contents of site-packages to another. For example, if you are creating an AWS Lambda deployment package, this is the error you will see when running the Lambda function. If that's the case, then Pillow needs to be installed on an Amazon Linux instance and you have to use the resulting site-packages in your deployment package. See instructions and details here:
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html
I ran into this problem as well. It can happen if you have PIL installed, then install Pillow on top of it.
Go to /usr/local/lib/python2.7/dist-packages/ and delete anything with "PIL" in the name (including directories). If the Pillow .egg file is there you might as well delete that too.
Then re-install Pillow.
substitute "python2.7" for the version of python you're using.
What is your version of pillow?
Pillow >= 2.1.0 no longer supports import _imaging. Please use from PIL.Image import core as _imaging instead. Here's the official documentation.
I have got the same error with Python 3.6. Upgrading Pillow did the job for me.
sudo python3.6 -m pip install Pillow --upgrade
For other Python versions, use your version instead of 3.6.
This can happen if you're trying to run Pillow that was installed on a Mac in a Linux environment (for example, building an AWS Lambda function on a Mac and then deploying it to a Linux runtime).
To make sure you're installing it for the right platform do the following:
pip3 install --platform manylinux1_x86_64 --only-binary=:all: Pillow
The --only-binary=:all: is required when specifying --platform and the platform itself can be found by looking at https://pypi.org/project/Pillow/7.2.0/#files (for example) - the platform is the last part of the filename e.g. win32, manylinux1_x86_64, manylinux1_i686 etc.
This avoids the need to be running Linux to install the Linux build of Pillow.
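For a Lambda-style deployment package this is typically combined with a target directory, since pip needs somewhere to put the foreign-platform wheels (the ./package directory name here is just an example):
pip3 install --platform manylinux1_x86_64 --only-binary=:all: --target ./package Pillow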
This may be a niche solution but I was able to fix this problem on Pycharm by going to file->settings->python interpreter and clicking the upgrade symbol next to the pillow package.
For Pillow to work, PIL must live under /usr/local/lib/python2.7 (or python3)/dist-packages/.
In dist-packages, PIL should be a folder, not a stray PIL.py file.
sudo apt-get update
pip install Pillow
Note that PIL != PiL (the name is case-sensitive).
I had the same problem when I tried to deploy a Lambda package. The thing is that you have to build the package emulating the Lambda architecture/runtime that you are going to use, otherwise you'll get cannot import name _imaging. There are two ways of solving this:
1 - Spin up an EC2 Amazon Linux instance (I will only cover this one).
2 - Use Docker.
Short solution
Install Python 3 on an Amazon Linux 2 instance. (It must be the same Python 3.x you plan to use in Lambda.)
Install a virtual environment under the ec2-user home directory.
Activate the environment, and then install Boto 3.
Install Pillow.
Create a ZIP archive with the contents of the library (PIL and Pillow.libs).
Add your function code to the archive.
Update your Lambda (AWS CLI).
Long solution
If Python 3 isn't already installed, then install the package using the yum package manager.
`$ sudo yum install python3 -y`
Create a virtual environment under the ec2-user home directory
The following command creates the app directory with the virtual environment inside of it. You can change my_app to another name. If you change my_app, make sure that you reference the new name in the remaining resolution steps.
`$ python3 -m venv my_app/env`
Activate the virtual environment and install Boto 3.
Attach an AWS Identity and Access Management (IAM) role to your EC2 instance with the proper permission policies so that Boto 3 can interact with the AWS APIs. For other authentication methods, or for quick use, you can set your credentials with $ aws configure; see the documentation (you will need this in step 7).
3.1 Activate the environment by sourcing the activate file in the bin directory under your project directory.
`$ source ~/my_app/env/bin/activate`
3.2. Make sure that you have the latest pip module installed within your environment.
$ pip install pip --upgrade
3.3 Use the pip command to install the Boto 3 library within our virtual environment.
`pip install boto3`
Install libraries with pip.
$ pip install Pillow
4.1 Deactivate the virtual environment.
`$ deactivate`
Create a ZIP archive with the contents of the library.
Change directory to where pip installed the packages; it should be something like ~/my_app/env/lib/python3.x/site-packages.
IMPORTANT: the key here is to zip the files inside site-packages into your Lambda package. (I only used PIL and Pillow.libs to save space, but you can zip everything.)
5.1 Zip everything that's inside the PIL folder:
`zip -r9 PIL.zip ./PIL/`
5.2 Add Pillow.libs to your ZIP:
`zip -gr PIL.zip Pillow.libs`
Add your function code to the archive.
You can do this in the console if it is just one file of code, but I recommend doing it in this step. If you don't have your code yet, just create a file using vi or nano and save it with the name your Lambda handler will use (in this case lambda_function.py).
`zip -g PIL.zip lambda_function.py`
Update your Lambda (AWS CLI).
If you haven't created a Lambda function yet, do it now before updating the function from the AWS CLI, and make sure you have the right permissions to update Lambda from the AWS CLI.
Change LAMBDAFUNCTIONNAME to your function name:
aws lambda update-function-code --function-name LAMBDAFUNCTIONNAME --zip-file fileb://PIL.zip
Getting out of the first loop of hell
Go to your Lambda console and test your code; make sure you use the same runtime/Python version you used on the EC2 instance.
Quick solution: import PyQt5 as well, and you will not get that error message.
import PyQt5
from PIL import ImageGrab
As some other answers have alluded to, this can happen when you build Pillow on MacOS and try to import PIL in another OS like some Amazon Linux flavor.
My exact use-case was to package imagehash as a Lambda layer which includes pillow as a dependency. The following guideline has worked great for me for all python packages.
Install the SAM CLI (see the SAM installation guide).
Create your python script with the lambda handler defined
Create your template.yml file with your Lambda function defined. Your CodeUri should be the relative path to your python script.
Add the package you are trying to create a layer for to your requirements.txt.
Run the following SAM command sam build -t path_to_template
You will now have the following directory .aws-sam/build/{Logical ID Of Lambda Function}. Inside you will see that your python packages and their dependencies have been installed just as if you ran pip download package and unzipped the wheel files.
Now the Python files have been prepped by SAM specifically for Lambda, and you can continue creating your Lambda layer as desired (see Configuring Lambda Layers).
Since I use AWS SAM CLI already for running Lambda functions locally, this has been the easiest method for me to create my layers.
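For reference, a compressed shell sketch of those steps (the file names are just examples; sam build performs the per-function dependency install):
pip3 install aws-sam-cli          # one way to get the SAM CLI
echo "imagehash" >> requirements.txt
sam build -t template.yml
ls .aws-sam/build/                # one folder per function, packages installed inside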
Just uninstall pillow:
pip uninstall pillow
then install pillow again:
pip install pillow
works great
I'm using Flask with Google App Engine. I have the module Pillow installed via this command:
pip install -t lib pillow
I fixed this error by defining PIL in my app.yaml file:
libraries:
- name: PIL
version: latest
Solution
pip uninstall PIL
pip uninstall Pillow
pip install Pillow

Installing gevent in virtualenv

I am just starting with virtualenv, but I am trying to install gevent within a virtualenv environment (I am running Windows). When I use PIP from virtualenv, I get this error:
MyEnv>pip install gevent
Downloading/unpacking gevent
Running setup.py egg_info for package gevent
Please provide path to libevent source with --libevent DIR
The package index has MSIs and EXEs for installing on Windows (http://pypi.python.org/pypi/gevent/0.13.7), but I don't know how to install those into a virtualenv environment (or if that is even possible). When I try pip install gevent-0.13.7.win32-py2.7.exe from the virtualenv prompt, I get an error as well:
ValueError: ('Expected version spec in', 'D:\\Downloads\\gevent-0.13.7.win32-py2.7.exe', 'at', ':\\Downloads\\gevent-0.13.7.win32-py2.7.exe')
Does someone know how to do this?
Pip doesn't support installing binary packages yet. If you want to install from a binary package you have to use easy_install:
easy_install gevent-0.13.7.win32-py2.7.exe
Microsoft Windows XP [Wersja 5.1.2600]
(C) Copyright 1985-2001 Microsoft Corp.
Z:\>virtualenv z:\venv\gevent-install
New python executable in z:\venv\gevent-install\Scripts\python.exe
Installing distribute..................................................................................................
............................................................................................done.
Installing pip.................done.
Z:\>venv\gevent-install\Scripts\activate
(gevent-install) Z:\>easy_install c:\python\packages\gevent-0.13.7.win32-py2.7.exe
Processing gevent-0.13.7.win32-py2.7.exe
creating 'c:\docume~1\pdobro~1\ustawi~1\temp\easy_install-b5nj3i\gevent-0.13.7-py2.7-win32.egg' and adding 'c:\docume~1
pdobro~1\ustawi~1\temp\easy_install-b5nj3i\gevent-0.13.7-py2.7-win32.egg.tmp' to it
creating z:\venv\gevent-install\lib\site-packages\gevent-0.13.7-py2.7-win32.egg
Extracting gevent-0.13.7-py2.7-win32.egg to z:\venv\gevent-install\lib\site-packages
Adding gevent 0.13.7 to easy-install.pth file
Installed z:\venv\gevent-install\lib\site-packages\gevent-0.13.7-py2.7-win32.egg
Processing dependencies for gevent==0.13.7
Searching for greenlet
Reading http://pypi.python.org/simple/greenlet/
Reading http://bitbucket.org/ambroff/greenlet
Reading https://github.com/python-greenlet/greenlet
Best match: greenlet 0.3.4
Downloading http://pypi.python.org/packages/2.7/g/greenlet/greenlet-0.3.4-py2.7-win32.egg#md5=9941aa246358c586bb274812e
130629
Processing greenlet-0.3.4-py2.7-win32.egg
creating z:\venv\gevent-install\lib\site-packages\greenlet-0.3.4-py2.7-win32.egg
Extracting greenlet-0.3.4-py2.7-win32.egg to z:\venv\gevent-install\lib\site-packages
Adding greenlet 0.3.4 to easy-install.pth file
Installed z:\venv\gevent-install\lib\site-packages\greenlet-0.3.4-py2.7-win32.egg
Finished processing dependencies for gevent==0.13.7
(gevent-install) Z:\>
See Can I install Python Windows packages into virtualenvs? Another option is to install from source; you can do this with pip, but it requires setting up a compiler and build environment, which is much harder than the simple command above.
From the error message, it would appear you need libevent source code. I would imagine you need to go a step further and compile/install libevent system-wide so pip can find it.
I would start by downloading the latest stable source from http://libevent.org/.
Compile and install it using instructions in the README: https://github.com/libevent/libevent#readme
To compile it on Windows, you'll need to use GNU-style build utilities like make and autoconf. I recommend http://www.mingw.org/.
Once you've installed libevent system-wide, I imagine pip will find it and proceed with gevent installation.
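A rough sketch of the usual autotools build from an MSYS/MinGW shell, assuming the default install prefix (follow the README above for the authoritative steps):
./configure
make
make install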
In the MSI for gevent-0.13.7 there's an option to select an alternate installation point. Point it to the root dir of your particular virtual environment (just above where /Lib and /Scripts are located). That should install it correctly.
You also need to make sure greenlet is installed. For that you can use Piotr's suggested method of running easy_install on the .exe.
