Cannot import snappy in Python

I use the package named python-snappy. This package requires the snappy library, so I downloaded and installed snappy successfully with the following commands:
./configure
make
sudo make install
When I import snappy, I receive the errors:
from _snappy import CompressError, CompressedLengthError, \
ImportError: libsnappy.so.1: cannot open shared object file: No such file or directory
I'm using Python 2.7, snappy, python-snappy and Ubuntu 12.04
How can I fix this problem?
Thanks

Traditionally you might have to run the ldconfig utility to update your /etc/ld.so.cache (or equivalent as appropriate to your OS). Sometimes it might be necessary to add new entries (paths) to your /etc/ld.so.conf.
Basically the shared object (so) loaders on many versions of Unix (and probably other Unix-like operating systems) use a cache to help resolve their base filenames into actual files to be loaded (usually mmap()'d). This is roughly similar to the intermittent need to run hash -r or rehash in your shell after adding things to directories in your PATH.
Usually you can just run ldconfig with no arguments (possibly after adding your new library's path to your /etc/ld.so.conf text file). Good Makefiles will do this for you during make install.
Here's a little bit more info: http://linux.101hacks.com/unix/ldconfig/
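For example, if make install dropped libsnappy.so.1 under /usr/local/lib (the usual default prefix), the fix might look roughly like this (a sketch; check where the library actually landed first):
$ sudo sh -c 'echo /usr/local/lib >> /etc/ld.so.conf.d/local.conf'  # register the path if it isn't already listed
$ sudo ldconfig                                                     # rebuild the shared-object cache
$ ldconfig -p | grep snappy                                         # confirm libsnappy.so.1 now resolves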

You can install python-snappy and libsnappy1 from the Ubuntu repos:
$ sudo apt-get install libsnappy1 python-snappy
You should not have to download anything.

The following worked for me:
$ conda install python-snappy
Then in my code I used:
import snappy
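As a quick sanity check once the import succeeds, a round-trip like this should work (a minimal sketch; compress and decompress are the basic python-snappy calls):
import snappy
data = snappy.compress(b"hello snappy")             # compress raw bytes
assert snappy.decompress(data) == b"hello snappy"   # round-trip back to the original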

Here is what worked for me with, for example, Anaconda Python:
Download snappy from GitHub, and also download the python-snappy source; extract both archives.
In the google-snappy folder:
$ ./configure
$ make
$ sudo make install
Then in the python-snappy folder:
$ python setup.py build   # here I still get the import _snappy error
$ python setup.py install # after this, the import works

Related

Python Deployment Package with SKLEARN, PANDAS and NUMPY issue?

I am a newbie with AWS & Python, trying to implement a simple ML recommendation system using an AWS Lambda function for self-learning. I am stuck on packaging the combination of sklearn, numpy and pandas. If I combine any two libraries, e.g. (pandas and numpy) or (numpy and scipy), the package works fine and deploys perfectly. But because this is an ML system I need sklearn (which pulls in scipy, pandas and numpy), and that combination does not work; I get the error below in the AWS Lambda test.
What I have done so far:
I built my deployment package from within a python3.6 virtualenv, rather than directly from the host machine (this assumes python3.6, virtualenv and awscli are already installed/configured, and that the Lambda function code is in the ~/lambda_code directory):
cd ~ (We'll build the virtualenv in the home directory)
virtualenv venv --python=python3.6 (Create the virtual environment)
source venv/bin/activate (Activate the virtual environment)
pip install sklearn pandas numpy (Install the dependencies; note that pip takes space-separated package names)
cp -r ~/venv/lib/python3.6/site-packages/* ~/lambda_code (Copy all installed packages into root level of lambda_code directory. This will include a few unnecessary files, but you can remove those yourself if needed)
cd ~/lambda_code
zip -r9 ~/package.zip . (Zip up the lambda package)
aws lambda update-function-code --function-name my_lambda_function --zip-file fileb://~/package.zip (Upload to AWS)
After that I get this error:
"errorMessage": "Unable to import module 'index'"
and
START RequestId: 0e9be841-2816-11e8-a8ab-636c0eb502bf Version: $LATEST
Unable to import module 'index': Missing required dependencies ['numpy']
END RequestId: 0e9be841-2816-11e8-a8ab-636c0eb502bf
REPORT RequestId: 0e9be841-2816-11e8-a8ab-636c0eb502bf Duration: 0.90 ms Billed Duration: 100 ms Memory Size: 128 MB Max Memory Used: 33 MB
I have tried this on an EC2 instance as well, but without success. I googled and read multiple blogs and solutions, but nothing worked.
Please help me out on this.
You are using Python 3.6, so
pip3 install numpy
should be used. Give it a try.
You need to make sure all the dependent libraries AND the Python file containing your function are all in one zip file in order for it to detect the correct dependencies.
So essentially, you will need to have numpy, pandas and your own files all in one zip file before you upload it. Also make sure that your code refers to the local files (in the same unzipped directory) as dependencies. If you have done that already, the issue is probably how your included libraries get referenced. Make sure you are able to use the included libraries as a dependency by getting the correct relative path on AWS once it's deployed to Lambda.
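For example, a minimal sketch of what index.py at the zip root might look like (the handler name and the tiny pandas/numpy usage are placeholders; the point is that the libraries are imported from the same unzipped directory):
# index.py -- sits at the zip root next to the unpacked numpy/, pandas/, sklearn/ directories
import numpy as np
import pandas as pd

def handler(event, context):
    # trivial use of the bundled libraries to prove they import correctly
    df = pd.DataFrame({"x": np.arange(3)})
    return {"rows": int(len(df))}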
So like Wai kin chung said, you need to use pip3 to install the libraries.
To figure out which Python version is the default, you can type:
which python
or
python -V
So in order to install with python3 you need to type:
python3 -m pip install sklearn pandas numpy --user
Once that is done, you can make sure that the packages are installed with:
python3 -m pip freeze
This will show all the Python libraries installed for your python3 interpreter.
Once you have the libraries, you would want to continue with your regular steps. Of course, you would first want to delete everything that you previously placed in ~/venv/lib/python3.6/site-packages/*.
cd ~/lambda_code
zip -r9 ~/package.zip .
If you're running this on Windows (like I was), you'll run into an issue with the libraries being compiled on an incompatible OS.
You can use an Amazon Linux EC2 instance, or a Cloud9 development instance to build your virtualenv as detailed above.
Or, you could just download the pre-compiled wheel files as discussed on this post:
https://aws.amazon.com/premiumsupport/knowledge-center/lambda-python-package-compatible/
Essentially, you need to go to the project page on https://pypi.org and download the files named like the following:
For Python 2.7: module-name-version-cp27-cp27mu-manylinux1_x86_64.whl
For Python 3.6: module-name-version-cp36-cp36m-manylinux1_x86_64.whl
Then unzip the .whl files to your project directory and re-zip the contents together with your lambda code.
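If it helps, pip itself can fetch manylinux wheels directly instead of downloading them by hand from pypi.org; a sketch for Python 3.6 (the package names and the wheels/ directory are just examples):
$ pip3 download numpy pandas scikit-learn --only-binary=:all: --platform manylinux1_x86_64 --implementation cp --python-version 36 --abi cp36m -d wheels/
$ cd wheels && unzip -o '*.whl' -d ~/lambda_code   # .whl files are zip archives; unpack them next to your handler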
Was having a similar problem on Ubuntu 18.04.
Solved the issue by using python3.7 and pip3.7.
It's important to use pip3.7 when installing the packages, e.g. pip3.7 install numpy or pip3.7 install numpy --user
To install python3.7 and pip3.7 on Ubuntu you can use deadsnakes/ppa
sudo add-apt-repository ppa:deadsnakes/ppa
sudo apt-get update
sudo apt-get install python3.7
curl https://bootstrap.pypa.io/get-pip.py -o /tmp/get-pip.py
python3.7 /tmp/get-pip.py
This solution should also work on Ubuntu 16.04.
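With python3.7 and its pip in place, installing the Lambda dependencies would then look something like this (a sketch; adjust the package list to what your function actually needs):
python3.7 -m pip install --user numpy pandas scikit-learn
python3.7 -m pip freeze | grep -i -E 'numpy|pandas|scikit'   # confirm they landed in the 3.7 site-packages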

'bz2 module is not available' when installing Pandas with pip in a Python virtual environment

I am going through this post Numpy, Scipy, and Pandas - Oh My!, installing some python packages, but got stuck at the line for installing Pandas:
pip install -e git+https://github.com/pydata/pandas#egg=pandas
I changed 'wesm' to 'pydata' for the latest version, and the only other difference to the post is that I'm using pythonbrew.
I found this post, related to the error, but where is the Makefile for bz2 mentioned in the answer? Is there another way to resolve this problem?
Any help would be much appreciated. Thanks.
You need to build python with BZIP2 support.
Install the following package before building python:
Red Hat/Fedora/CentOS: yum install bzip2-devel
Debian/Ubuntu: sudo apt-get install libbz2-dev
Extract python tarball. Then
configure;
make;
make install
Install pip using the new python.
Alternative:
Install a binary Python distribution using yum or apt that was built with BZIP2 support.
See also: ImportError: No module named bz2 for Python 2.7.2
I spent a lot of time on the internet and only got partial answers everywhere. Here is what you need to do to make it work. Follow every step.
sudo apt-get install libbz2-dev (thanks to Freek Wiekmeijer for this).
Now you also need to build Python with bz2 support; the previously installed Python won't work. To do that, do the following:
Download a stable Python version from https://www.python.org/downloads/source/, then extract the gzipped source tarball. You can use wget https://python-tar-file-link.tgz to download it and tar -xvzf python-tar-file.tgz to extract it in the current directory.
Go inside the extracted folder, then run the following commands one at a time:
./configure
make
make install
This builds a Python binary with bz2 support, using the libbz2-dev package you installed earlier.
Since this Python doesn't have pip installed, the idea is to create a virtual environment with the above-built Python and then install pandas there with pip.
You will see a python binary in the same directory. Just create a virtual environment:
./python -m venv myenv (create myenv in the same directory or elsewhere; it's your choice)
source myenv/bin/activate (activate virtual environment)
pip install pandas (install pandas in the current environment)
That's it. Now with this environment, you should be able to use pandas without error.
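As an extra check, before creating the virtual environment you can confirm the freshly built interpreter really has bz2 support (a minimal sketch, run from the build directory):
./python -c "import bz2; print(bz2.decompress(bz2.compress(b'ok')))"   # succeeds and prints the round-tripped bytes only if bz2 was built in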
pyenv
I noticed that installing Python using source takes a long time (I am doing it on i7 :/ ); especially the make and make test...
A simpler and shorter solution was to install another version of Python (I used Python 3.7.8) with pyenv; install it using these steps.
It not only solved the problem of running multiple Python instances on the same system but also maintains my virtual environments without virtualenvwrapper (which turned buggy on my newly set up ubuntu-20.04).

pip-installed uWSGI ./python_plugin.so error

I've installed uWSGI using pip and start it up with an XML to load my application. The XML config contains <plugin>python</plugin>. On my new server it leads to an error:
open("./python_plugin.so"): No such file or directory [core/utils.c line 3321]
!!! UNABLE to load uWSGI plugin: ./python_plugin.so: cannot open shared object file: No such file or directory !!!
I can find the .c and the .o versions:
sudo find / -name 'python_plugin.c'
/srv/www/li/venv/build/uwsgi/build/uwsgi/plugins/python/python_plugin.c
/srv/www/li/venv/build/uwsgi/plugins/python/python_plugin.c
sudo find / -name 'python_plugin.o'
/srv/www/li/venv/build/uwsgi/build/uwsgi/plugins/python/python_plugin.o
/srv/www/li/venv/build/uwsgi/plugins/python/python_plugin.o
sudo find / -name 'python_plugin.so'
But no .so is found. My previous system had a uwsgi install through apt-get, but that's really old (and I'm quite sure it normally uses the pip-installed uwsgi, but maybe not for shared objects?)
Some background info:
Ubuntu 12.0.4 LTS
Python 2.7 (virtualenv)
I've installed uWSGI in my venv, using the normal pip install uwsgi (no sudo)
So I'm a tad clueless :( I can't be the only person in the world to have this, right? Should I compile the .so objects myself? (If so, how?) Or is there another great solution?
Distros should package uWSGI in a modular way, with each feature as a plugin. But when you install using language-specific ways (pip, gem...), the relevant language is embedded, so you do not need to load the plugin.
For anyone that is having trouble with this, basically you need to remove lines that state your plugin from your configuration files if you change from a distro package to a pypi or gem install. I was previously using the Ubuntu/Debian package for uwsgi, but it was old so I upgraded to use pip instead.
So, in my configuration .ini file, I had the following line:
plugin = python
Removing that line fixes the problem.
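For reference, a minimal .ini for a pip-installed uWSGI might then look something like this (the socket and module values are placeholders for your own app; note there is no plugin line):
[uwsgi]
socket = 127.0.0.1:3031
module = myapp.wsgi:application
master = true
processes = 2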
Maybe you forgot this command
$ apt-get install uwsgi-plugin-python
If you're using Python3, try this command instead:
$ apt-get install uwsgi-plugin-python3
Install all available plugins: sudo apt-get install uwsgi-plugins-all
As of 6/2018 the yum package name was updated from uwsgi-plugin-python to uwsgi-plugin-python2 https://src.fedoraproject.org/rpms/uwsgi/pull-request/4#
The new install command is therefore yum install uwsgi-plugin-python2

no module named zlib

First, please bear with me. I have a hard time telling others my problem, and this is a long thread...
I am using pythonbrew to run multiple versions of python in Ubuntu 10.10.
For installing pythonbrew and how it works, please refer to the link below:
http://www.howopensource.com/2011/05/how-to-install-and-manage-different-versions-of-python-in-linux/
After reading a couple stackoverflow threads, I finally found the file called Setup under this directory: ~/.pythonbrew/pythons/Python-2.7.1/lib/python2.7/config
In this Setup file I see
# Andrew Kuchling's zlib module.
# This require zlib 1.1.3 (or later).
# See http://www.gzip.org/zlib/
# zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
I uncommented the last line, then ran python -v again. However, I received the same error when I tried to import zlib, so I guess I have to do something to install zlib into the lib.
But I am clueless about what I need to do. Can someone please direct me in the right direction??? Thank you very much!
I am doing this because I want to use different versions of Python in the different virtualenvs I create.
When I did virtualenv -p python2.7 I received no module named zlib.
jwxie518@jwxie518-P5E-VM-DO:~$ virtualenv -p python2.7 --no-site-packages testenv
Running virtualenv with interpreter /home/jwxie518/.pythonbrew/pythons/Python-2.7.1/bin/python2.7
Traceback (most recent call last):
File "/usr/local/lib/python2.6/dist-packages/virtualenv.py", line 17, in <module>
import zlib
ImportError: No module named zlib
EDIT
I have to install 2.7.1 by appending --force.
I am developing Django, and I need some of these missing modules, for example sqlite3, and to create my virtualenv I definitely need zlib. If I just use the system default (2.6.6), I have no problem.
To do this with system default, all I need to do is
virtualenv --no-site-packages testenv
Thanks!
(2nd edit)
I installed 3.2 also and I tested it without problem, so I guess my problem comes down to how to install the missing module(s).
jwxie518@jwxie518-P5E-VM-DO:~$ virtualenv -p python3.2 testenv
Running virtualenv with interpreter /home/jwxie518/.pythonbrew/pythons/Python-3.2/bin/python3.2
New python executable in testenv/bin/python3.2
Also creating executable in testenv/bin/python
Installing distribute..................................................................................................................................................................................................................................................................................................................................done.
Installing pip...............done.
jwxie518@jwxie518-P5E-VM-DO:~$ virtualenv -p python3.2 --no-site-packages testenv
Running virtualenv with interpreter /home/jwxie518/.pythonbrew/pythons/Python-3.2/bin/python3.2
New python executable in testenv/bin/python3.2
Also creating executable in testenv/bin/python
Installing distribute..................................................................................................................................................................................................................................................................................................................................done.
Installing pip...............done.
Sounds like you need to install the devel package for zlib; you probably want to do something like:
# Ubuntu 12/14/16/18/20.04+
sudo apt-get install zlib1g-dev
Instead of using pythonbrew, you might want to consider just compiling by hand; it's not very hard. Just download the source, then configure, make, make install. You'll want to at least set --prefix to somewhere, so it gets installed where you want.
./configure --prefix=/opt/python2.7 + other options
make
make install
You can check what configuration options are available with ./configure --help and see what your system python was compiled with by doing:
python -c "import sysconfig; print sysconfig.get_config_var('CONFIG_ARGS')"
The key is to make sure you have the development packages installed for your system, so that Python will be able to build the zlib, sqlite3, etc modules. The python docs cover the build process in more detail: http://docs.python.org/using/unix.html#building-python.
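Once the development headers are installed and Python is rebuilt, a quick check that the optional modules made it in (a sketch; the path follows the --prefix example above, and the print statement matches the Python 2 syntax used here):
/opt/python2.7/bin/python -c "import zlib, bz2, sqlite3; print zlib.ZLIB_VERSION"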
By default, when you configure the Python source, the zlib module is disabled, so you can enable it using the --with-zlib option when you configure it. So it becomes:
./configure --with-zlib
In the case I hit, I found that modules were missing after make, so I did the following:
install zlib-devel
make and install python again.
After running configure, you can change the config option in the file Modules/Setup as below:
zlib zlibmodule.c -I$(prefix)/include -L$(exec_prefix)/lib -lz
Or you can uncomment the zlib line as-is.
I had a lot of problems making a virtual environment (venv) as described in the tensorflow installation guide.
Most of the commands listed in this post didn't help me either, so if this is also your case, this is what I did:
pip3 install --user pipenv
pip install virtualenv
Installs the dependencies to create a virtual environment
mkdir myenv
Makes a new directory called myenv but you can call it whatever you want e.g. mynewenv
cd myenv
Or whatever you call your directory so: cd [your_directory_name]
virtualenv -p /usr/bin/python3 venv
Creates a virtual environment called venv in the folder myenv. You can call your virtual environment whatever you like, e.g. virtualenv [v_env_name]
source ./venv/bin/activate
Activates the virtual environment. Note that if you choose a different virtual environment name, the command should be written as: source ./[v_env_name]/bin/activate
deactivate
Deactivates the virtual environment.
Note: I am using Python 3.6.6 & Ubuntu 18.04
source for the commands
After you install the missing zlib dev package, you can also use pythonbrew to uninstall and then reinstall the version of Python you wanted; it seems to pick up the new package and compile with the correct capabilities. This way you can keep using pythonbrew and don't have to do the compilation yourself (though it isn't that difficult).
Similar to the other answers here, on CentOS or RHEL run:
sudo yum install zlib-devel
The --with-zlib solutions shown here seem to be missing headers that Python 3.9 and up needs to link (in my case).
The easiest solution I found is given in the python.org devguide:
sudo apt-get build-dep python3.6
If that package is not available for your system, try reducing the minor version until you find a package that is available in your system’s package manager.
I tried explaining the details on my blog.
My objective was to create a new Django project from the command line in Ubuntu, like so:
django-admin.py startproject mysite
I have python2.7.5 installed. I got this error:
ImportError: No module named zlib
For hours I could not find a solution, until now!
Here is a link to the solution -
http://doc.biblissima-condorcet.fr/loris-setup-guide-ubuntu-debian
I followed and executed the instructions in Section 1.1 and it is working perfectly! It is an easy solution.

Unable to install Python without sudo access

I extracted, configured and ran make for the installation package on my server.
However, I could not use make install. I get the error
[~/wepapps/python/Python-2.6.1]# make install
/usr/bin/install -c python /usr/local/bin/python2.6
/usr/bin/install: cannot create regular file `/usr/local/bin/python2.6': Permission denied
make: *** [altbininstall] Error 1
I ran chmod +x on the folder:
chmod +x Python-2.6.1
I still get the same error.
How can I run make install without sudo access?
How can I install to a path under my home directory?
mkdir /home/masi/.local
cd Python-2.6.1
make clean
./configure --prefix=/home/masi/.local
make
make install
Then run using:
/home/masi/.local/bin/python
Similarly, if you have scripts (e.g. CGI) that require your own user version of Python, you have to tell them explicitly:
#!/home/masi/.local/bin/python
instead of using the default system Python which “#!/usr/bin/env python” will choose.
You can alter your PATH setting to make just typing “python” from the console run that version, but it won't help for web apps being run under a different user.
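For example, one way to do that for your own shell is to append this to ~/.bashrc (a sketch; the path matches the --prefix used above):
export PATH="$HOME/.local/bin:$PATH"   # put the user-installed python ahead of the system one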
If you compile something that links to Python (eg. mod_wsgi) you have to tell it where to find your Python or it will use the system one instead. This is often done something like:
./configure --prefix=/home/masi/.local --with-python=/home/masi/.local
For other setup.py-based extensions like MySQLdb you simply have to run the setup.py script with the correct version of Python:
/home/masi/.local/bin/python setup.py install
As of 2020, pyenv is the best choice for installing Python without sudo permission, provided the system has the necessary build dependencies.
# Install pyenv
$ curl https://pyenv.run | bash
# Follow the instruction to modify ~/.bashrc
# Install the latest Python from source code
$ pyenv install 3.8.3
# Check installed Python versions
$ pyenv versions
# Switch Python version
$ pyenv global 3.8.3
# Check where Python is actually installed
$ pyenv prefix
/home/admin/.pyenv/versions/3.8.3
# Check the current Python version
$ python -V
Python 3.8.3
Extending bobince's answer: there is an issue if you don't have the readline development package installed on your system and you don't have root access.
When Python is compiled without readline, your arrow keys won't work in the interpreter. However, you can install the readline standalone package as follows: Adding Readline Functionality Without Recompiling Python
On the other hand, if you prefer to compile python using a local installation of readline, here's how.
Before doing as bobince was telling, compile and install readline. These are the steps to do so:
wget ftp://ftp.cwru.edu/pub/bash/readline-6.2.tar.gz
tar -zxvf readline-6.2.tar.gz
cd readline-6.2
./configure --prefix=$HOME/.local
make
make install
Then, add this line to your .bash_profile script:
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/.local/lib
Last, but not least, execute the following command
export LDFLAGS="-L$HOME/.local/lib"
I hope this helps someone!
You can't; not to /usr, anyway. Only superusers can write to those directories. Try installing Python to a path under your home directory instead.
