How can I install the FreeImage library in a StarCluster cluster so that it can be used with the scikit-image module?
I set up a cluster on AWS using StarCluster and I want to run a script that requires loading .jp2 images with the scikit-image module, which can be done with the FreeImage library. The command to do this is:
skimage.io.imread("path/to/image.jp2", plugin='freeimage'). This works when I run it on my machine.
I have installed scikit-image in my cluster using the Python packages plugin in the StarCluster config file, as indicated in the StarCluster documentation:
[plugin pypackages]
setup_class = starcluster.plugins.pypkginstaller.PyPkgInstaller
packages = networkx, scikit-learn, scikit-image
I also installed the following packages into my cluster, following the instructions in the documentation:
[plugin pkginstaller]
SETUP_CLASS = starcluster.plugins.pkginstaller.PackageInstaller
PACKAGES = libfreeimage3, libfreeimage-dev
But when I run skimage.io.imread("path/to/image.jp2", plugin='freeimage') in the cluster, I get the following error message:
RuntimeError: Could not find a FreeImage library in any of:
/usr/local/lib/python2.7/dist-packages/skimage/io/_plugins
/lib
/usr/lib
/usr/local/lib
/usr/lib
I am using OS X.
I was able to solve this by updating the Ubuntu installation on the Starcluster AMI to Ubuntu 14.04.
The problem was that StarCluster's AMIs are currently using Ubuntu 13, which apparently is no longer supported. This means that installing packages through apt-get no longer works.
I was able to create an AMI with Ubuntu 14.04 by following the instructions in this video: https://www.youtube.com/watch?v=2RBupgpi_ec. Once I did that, I was able to install libfreeimage3 and libfreeimage-dev as described in the question without problems.
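Once libfreeimage3 and libfreeimage-dev are present on the nodes, a quick sanity check (a sketch; the image path is a placeholder) is:
import skimage.io

# select the FreeImage plugin explicitly; this should fail if the library still can't be found
skimage.io.use_plugin('freeimage')
img = skimage.io.imread("path/to/image.jp2", plugin='freeimage')
print(img.shape)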
Related
I need to use the sksparse.cholmod package; however, PyCharm does not let me install it, as it can't seem to find it.
I found the sksparse package on GitHub and downloaded it, but I do not know how to add a package downloaded from the internet into a conda environment. So my first question would be: can you download a package from GitHub and add it to your conda environment, and how do you do this?
As I did not know how to do the above, I instead saved the package within my project and thought I could simply import sksparse.cholmod. However, the line in my code that says import sksparse.cholmod as sks has no errors with it, so I assumed that meant this was OK, but when I try to run my file I get this error:
import sksparse.cholmod as sks
ModuleNotFoundError: No module named 'sksparse.cholmod'
If I have downloaded the package into my project why can't it be found, yet there are no errors when importing?
The cholmod file is a .pyx file, which I've been told should not be a problem.
Please could anyone help; I am reasonably new to Python and I am looking for a straightforward solution that won't be time-consuming.
It was an issue with Windows; I was able to fix this using the instructions at this link:
https://github.com/EmJay276/scikit-sparse
We must follow these steps precisely:
(This was tested with an Anaconda 3 installation and Python 3.7.)
Install these requirements in order:
'''
conda install -c conda-forge numpy         # tested with v1.19.1
conda install -c anaconda scipy            # tested with v1.5.0
conda install -c conda-forge cython        # tested with v0.29.21
conda install -c conda-forge suitesparse   # tested with v5.4.0
'''
Download Microsoft Build Tools for C++ from https://visualstudio.microsoft.com/de/visual-cpp-build-tools/ (tested with 2019, should work with 2015 or newer)
Install Visual Studio Build Tools
Choose Workloads
Check "C++ Buildtools"
Keep standard settings
Run ''' pip install git+https://github.com/EmJay276/scikit-sparse '''
Test ''' from sksparse.cholmod import cholesky '''
Use the versions stated for numpy etc.; for scipy, however, I installed the latest version and it worked fine.
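Once everything is installed, a minimal end-to-end check (a sketch; the matrix values are arbitrary) is:
import numpy as np
from scipy.sparse import csc_matrix
from sksparse.cholmod import cholesky

# small symmetric positive-definite matrix in CSC format
A = csc_matrix(np.array([[4.0, 1.0], [1.0, 3.0]]))
factor = cholesky(A)
x = factor(np.array([1.0, 2.0]))  # solves A x = b
print(x)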
I installed opencv-python on Ubuntu under WSL, after setting up a venv using virtualenvwrapper (I use WSL in Visual Studio Code).
When running this code (which appears in one of the articles of this OCR guide):
import argparse
import cv2

# parse the image path from the command line
ap = argparse.ArgumentParser()
ap.add_argument("-i", "--image", required=True)
args = vars(ap.parse_args())

# load the image and try to display it in a window
image = cv2.imread(args["image"])
cv2.imshow("I", image)
cv2.waitKey(0)  # needed so the window is actually rendered and stays open
with this command in the terminal:
python script.py --image temp.png
I get:
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/home/ben123/.local/bin/.virtualenvs/ocr_venv/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb.
The interpreter in VS Code is the correct one (the one from the venv), and when I type pip list I get:
Package Version
------------- --------
numpy 1.22.2
opencv-python 4.5.5.62
pip 22.0.3
setuptools 60.6.0
wheel 0.37.1
I would appreciate any help at this point, since I have spent so much time and gotten nowhere.
Things I tried:
following this guide to install it. Gave the same error.
following an older guide from this site, which was much more complicated and didn't work either.
uninstalling opencv-python and installing opencv-python again / opencv-contrib-python / opencv-python-headless / opencv-contrib-python-headless (only one of them at a time)
following this thread because it has similar problem
literally reset my wsl several times just to make sure I don't have multiple pythons/ opencv versions that mess this up.
tried installing (to a wsl venv) opencv directly with the official documentation
Tried to give up on wsl completely and install opencv using anaconda but even that didn't work.
Just delete cv2.imshow from your code. Your OS is running without a graphical display and can't show the image.
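If you still need the result, a headless-friendly sketch is to write the image to a file instead of opening a window (the output filename is just an example):
import cv2

image = cv2.imread("temp.png")
# save to disk instead of calling cv2.imshow, which needs a working display
cv2.imwrite("output.png", image)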
I had the same error in a completely different context.
Found that the problem was a PyQt5 installation in my virtual environment.
Check if you have a PyQt package in the path
/home/ben123/.local/bin/.virtualenvs/ocr_venv/lib/python3.8/site-packages/
if so, remove it
$ pip uninstall <PyQT package installed>
example:
$ pip uninstall PyQt5
Then reinstall opencv-python
$ pip uninstall opencv-python
$ pip install opencv-python
Hope that works!
To display graphical output from WSL, you need to configure X11-related components.
For example, you can use MobaXterm for the graphical display.
Uninstalling opencv-python and installing the headless variant instead worked for me:
$ pip uninstall opencv-python
$ pip install opencv-python-headless
I'm trying to load an existing Azure workspace in RStudio on an Azure Compute Instance, as shown in this link: https://azure.github.io/azureml-sdk-for-r/. But after installing the azuremlsdk package, when I run azuremlsdk::install_azureml() I get this error:
Attempting uninstall: certifi
  Found existing installation: certifi 2016.9.26
ERROR: Cannot uninstall 'certifi'. It is a distutils installed project and thus we cannot accurately determine which files belong to it, which would lead to only a partial uninstall.
Error: Error installing package(s): 'azureml-sdk==1.10.0', 'numpy', 'pandas'
Referring to this link: https://learn.microsoft.com/en-us/azure/machine-learning/how-to-troubleshoot-environments, I tried to fix this error by running conda remove certifi from the terminal of that Compute Instance and from its Jupyter Notebook. But no luck.
Does anyone have any experience resolving this issue? Please help.
Azure ML has issues with Python versions and its dependency packages; make sure you are using a Python version between 3.5 and 3.8 when installing these.
While installing azureml, it will resolve all the dependency packages and install them; in this process version conflicts can occur with packages like pandas and numpy across different pip versions.
From your stack trace it looks like the error happens when packages like pandas and numpy are installed along with the azureml-train-automl-client package, so try installing them beforehand, checking which versions are compatible with your Python version.
Check the Azure ML documentation for installing the additional Azure ML packages.
If you investigate them, azureml-train-automl requires some data science packages, including pandas, numpy, and scikit-learn.
Kindly run the following commands in your conda environment:
pip install azureml-train-automl
pip install --upgrade azureml-train-automl
pip show azureml-train-automl
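To confirm which version actually resolved in your environment and what it pulls in, a quick check (a sketch using pkg_resources) is:
import pkg_resources

# print the installed azureml-train-automl version and its direct requirements
dist = pkg_resources.get_distribution("azureml-train-automl")
print(dist.version)
for req in dist.requires():
    print(req)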
It seems that the Python SDK installation conflicts with itself when using Python 3.6 (the default). I was able to install the SDK for Python 3.7:
azuremlsdk::install_azureml(conda_python_version = '3.7')
I'm having trouble installing packages and using them in PyCharm. I've followed various threads (I'm new to Macs and seem to have tried everything) and now I'm stuck.
In this case, I want to use the package xgboost.
I have brew installed, after launching a terminal using Rosetta:
%brew install xgboost
Warning: xgboost 1.3.3 is already installed and up-to-date.
It appears installed OK here:
/opt/homebrew/Cellar/xgboost
I also have Python installed here:
/opt/homebrew/Cellar/python@3.9
But no matter how I configure an interpreter in PyCharm, I can't seem to get the package recognised.
Where have I gone wrong?
I am very unsure exactly how, but I've got this working.
Following: https://abbasegbeyemi.me/blog/homebrew-python-apple-m1
I changed the order of elements in my path:
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/opt/homebrew/bin
then a new interpreter in Pycharm using:
/usr/local/Cellar/python@3.9/3.9.2_2/bin/python3.9
Now I can install packages just using pip in PyCharm and it works.
This has been 6 hours of pain - warning to anyone who isn't well versed in macs, setting up an M1 for python dev was a complete nightmare for me.
Docs: https://xgboost.readthedocs.io/en/latest/build.html
Pre-built binary wheel for Python
If you are planning to use Python, consider installing XGBoost from a pre-built binary wheel, available from Python Package Index (PyPI). You may download and install it by running
# Ensure that you are downloading one of the following:
# * xgboost-{version}-py2.py3-none-manylinux1_x86_64.whl
# * xgboost-{version}-py2.py3-none-win_amd64.whl
pip3 install xgboost
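After installing the wheel, a quick sanity check (a sketch with random data, not taken from the docs) confirms that the compiled core loads:
import numpy as np
import xgboost as xgb

print(xgb.__version__)

# train a tiny booster on random data just to exercise the native library
X = np.random.rand(20, 3)
y = np.random.randint(2, size=20)
dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=5)
print(booster.predict(dtrain)[:3])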
I installed Pillow, and afterwards I want to do:
from PIL import Image
I get the following error:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 61, in <module>
ImportError: cannot import name _imaging
However, if I import these separately, everything is fine, i.e.:
import _imaging
import Image
Do you know what the problem might be?
I had the same problem and I solved that by upgrading this package using the command below:
pip install -U Pillow
This also happens if you built Pillow on one OS and then copied the contents of site-packages to another one. For example, if you are creating an AWS Lambda deployment package, that's the error you will face when running the Lambda function. If that's the case, then Pillow needs to be installed on an Amazon Linux instance and you have to use the resulting site-packages in your deployment package. See instructions and details here:
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html
I ran into this problem as well. It can happen if you have PIL installed, then install Pillow on top of it.
Go to /usr/local/lib/python2.7/dist-packages/ and delete anything with "PIL" in the name (including directories). If the Pillow .egg file is there you might as well delete that too.
Then re-install Pillow.
substitute "python2.7" for the version of python you're using.
What is your version of pillow?
Pillow >= 2.1.0 no longer supports import _imaging. Please use from PIL.Image import core as _imaging instead. Here's the official documentation.
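In code, the replacement looks like this (a minimal sketch; printing the module is only there to show the import succeeded):
from PIL import Image
from PIL.Image import core as _imaging  # Pillow >= 2.1.0 replacement for "import _imaging"
print(_imaging)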
I have got the same error with Python 3.6. Upgrading Pillow did the job for me.
sudo python3.6 -m pip install Pillow --upgrade
Probably for other python versions use your version instead of 3.6.
This can happen if you're trying to run Pillow installed on a Mac in a Linux environment (for example, building an AWS Lambda on a Mac, then deploying it to a Linux runtime).
To make sure you're installing it for the right platform do the following:
pip3 install --platform manylinux1_x86_64 --only-binary=:all: Pillow
The --only-binary=:all: is required when specifying --platform and the platform itself can be found by looking at https://pypi.org/project/Pillow/7.2.0/#files (for example) - the platform is the last part of the filename e.g. win32, manylinux1_x86_64, manylinux1_i686 etc.
This avoids the need to be running Linux to install the Linux build of Pillow.
This may be a niche solution, but I was able to fix this problem in PyCharm by going to File -> Settings -> Python Interpreter and clicking the upgrade symbol next to the Pillow package.
For Pillow to work, PIL must be under /usr/local/lib/python2.7 (or python3)/dist-packages/.
In dist-packages, PIL should be present as a folder.
sudo apt-get update
pip install Pillow
PIL != PiL (the name is case-sensitive)
I had the same problem when I tried to deploy a Lambda package. The thing is that you have to precompile the package emulating the Lambda architecture/runtime that you are going to use; otherwise you'll get cannot import name _imaging. There are two ways of solving this:
1 - Spin up an EC2 Amazon Linux instance (I will only cover this part).
2 - Use Docker.
Short solution
Install Python 3 on an Amazon Linux 2 instance (it must be the same Python 3.x you plan to use in Lambda).
Install a virtual environment under the ec2-user home directory.
Activate the environment, and then install Boto 3.
Install Pillow.
Create a ZIP archive with the contents of the library (PIL and Pillow.libs).
Add your function code to the archive.
Update your Lambda (AWS CLI).
Long solution
If Python 3 isn't already installed, then install the package using the yum package manager.
`$ sudo yum install python3 -y`
Create a virtual environment under the ec2-user home directory
The following command creates the app directory with the virtual environment inside of it. You can change my_app to another name. If you change my_app, make sure that you reference the new name in the remaining resolution steps.
`$ python3 -m venv my_app/env`
Activate the virtual environment and install Boto 3
Attach an AWS Identity and Access Management (IAM) role to your EC2 instance with the proper permissions policies so that Boto 3 can interact with the AWS APIs. For other authentication methods, see the documentation; for quick use you can set your credentials using $ aws configure (you will need this in step 7).
3.1 Activate the environment by sourcing the activate file in the bin directory under your project directory.
`$ source ~/my_app/env/bin/activate`
3.2. Make sure that you have the latest pip module installed within your environment.
$ pip install pip --upgrade
3.3 Use the pip command to install the Boto 3 library within our virtual environment.
`pip install boto3`
Install libraries with pip.
$ pip install Pillow
4.1 Deactivate the virtual environment.
`$ deactivate`
Create a ZIP archive with the contents of the library.
Change directory to where pip installed the packages; it should be something like /my_app/env/lib/python3.x/site-packages.
IMPORTANT: the key here is to zip the files inside site-packages into
your Lambda package (I only used PIL and Pillow.libs to save space, but you can
zip everything).
5.1 Zip everything that's inside the PIL folder.
`zip -r9 PIL.zip ./PIL/`
Add the Pillow.libs folder to your ZIP:
`zip -gr PIL.zip Pillow.libs`
Add your function code to the archive.
You can do this in the console if it is just one file of code, but I recommend doing it at this step. If you don't have your code, just create a file using vi or nano and save it with the name your Lambda handler will use (in this case we will use lambda_function.py; a minimal sketch is shown after these steps).
`zip -g PIL.zip lambda_function.py`
Update your Lambda (AWS CLI).
If you haven't created a Lambda function, do it now before updating the function from the AWS CLI, and make sure that you have the right permissions to update Lambda from the AWS CLI.
Change LAMBDAFUNCTIONNAME to your function name:
aws lambda update-function-code --function-name LAMBDAFUNCTIONNAME --zip-file fileb://PIL.zip
Getting out of the first loop of hell
Go to your Lambda console and test your code; make sure you use the same runtime/Python version you used on the EC2 instance.
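For reference, a minimal lambda_function.py for the zip step above could look like this (a sketch just to confirm that the compiled _imaging extension loads in the Lambda runtime; the handler body is illustrative):
from PIL import Image

def lambda_handler(event, context):
    # create a tiny in-memory image; if the C extension had failed to load,
    # the import above would already have raised
    img = Image.new("RGB", (8, 8), color=(255, 0, 0))
    return {"size": img.size, "mode": img.mode}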
Quick solution: import PyQt5 as well, and you will not get that error message.
import PyQt5
from PIL import ImageGrab
As some other answers have alluded to, this can happen when you build Pillow on macOS and try to import PIL in another OS like some Amazon Linux flavor.
My exact use case was to package imagehash as a Lambda layer, which includes Pillow as a dependency. The following guideline has worked great for me for all Python packages.
Install the SAM CLI (SAM Installation).
Create your python script with the lambda handler defined
Create your template.yml file with your Lambda function defined. Your CodeUri should be the relative path to your python script.
Add the package you are trying to create a layer for to your requirements.txt.
Run the following SAM command sam build -t path_to_template
You will now have the following directory .aws-sam/build/{Logical ID Of Lambda Function}. Inside you will see that your python packages and their dependencies have been installed just as if you ran pip download package and unzipped the wheel files.
Now, the python files have been prepped by SAM specifically for Lambda and you can continue with creating your Lambda Layer as desired. Configuring Lambda Layers
Since I use AWS SAM CLI already for running Lambda functions locally, this has been the easiest method for me to create my layers.
Just uninstall pillow:
pip uninstall pillow
then install pillow again:
pip install pillow
works great
I'm using Flask with Google App Engine. I have the module Pillow installed via this command:
pip install -t lib pillow
I fixed this error by defining PIL in my app.yaml file:
libraries:
- name: PIL
version: latest
Solution
pip uninstall PIL
pip uninstall Pillow
pip install Pillow