I want to create an airflow DAG to transfer files to cloud storage but I'm running into a problem importing Google Cloud libraries.
Libraries I want to use:
from airflow.providers.google.cloud.operators.gcs import GCSCreateBucketOperator, GCSDeleteBucketOperator
from airflow.providers.google.cloud.transfers.gcs_to_local import GCSToLocalFilesystemOperator
from airflow.providers.google.cloud.transfers.local_to_gcs import LocalFilesystemToGCSOperator
The error I got:
pkg_resources.ContextualVersionConflict: (protobuf 4.21.9 (/opt/anaconda3/lib/python3.9/site-packages), Requirement.parse('protobuf<4.0.0dev'), {'google-cloud-secret-manager'})
I tried pip install googleapis-common-protos --upgrade to fix the problem, but the same error persists.
In your virtual env, you can try installing the Apache Airflow package with the gcp extra to prevent dependency conflicts:
Example with pip:
requirements.txt file
apache-airflow[gcp]==2.4.2
pip command :
pip install -r requirements.txt
You can also use another Python package manager, like pipenv with a Pipfile:
apache-airflow = { version = "==2.4.2", extras = ["gcp"] }
In any case, I really recommend using a virtual env to isolate the packages for your current project and to prevent conflicts between installed packages.
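Once the gcp extra is installed, the imports from the question should resolve. For illustration, here is a minimal DAG sketch using those operators; the bucket name, file paths, connection ID and schedule are assumptions, not values from the question:

from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.gcs import GCSCreateBucketOperator
from airflow.providers.google.cloud.transfers.local_to_gcs import LocalFilesystemToGCSOperator

with DAG(
    dag_id="upload_to_gcs_example",  # hypothetical DAG id
    start_date=datetime(2022, 1, 1),
    schedule=None,                   # Airflow 2.4+ argument name
    catchup=False,
) as dag:
    # Create the target bucket (name is an assumption)
    create_bucket = GCSCreateBucketOperator(
        task_id="create_bucket",
        bucket_name="my-example-bucket",
        gcp_conn_id="google_cloud_default",
    )

    # Upload a local file into that bucket (paths are assumptions)
    upload_file = LocalFilesystemToGCSOperator(
        task_id="upload_file",
        src="/tmp/data.csv",
        dst="data/data.csv",
        bucket="my-example-bucket",
        gcp_conn_id="google_cloud_default",
    )

    create_bucket >> upload_file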
I am following along with the O'Reilly Head First Python (2nd Edition) course.
At one point you create a webapp and deploy it to PythonAnywhere (chapter 5).
The webapp uses two functions, imported from a module created earlier.
The module is called vsearch.py. I also created a readme.txt and a setup.py and used setuptools to create a source distribution file using:
python3 setup.py sdist
The code of the setup.py read as follows:
from setuptools import setup
setup(
name = "vsearch",
version = "1.0",
description = "The Head First Python Seach Tools",
author = "HF Python 2e",
author_email = "hfpy2e#gmail.com",
url = "headfirstlabs.com",
py_modules = ["vsearch"],
)
The source distribution builds without errors and creates a file called vsearch-1.0.tar.gz.
The file then gets uploaded to pythonanywhere and installed via console using:
python3 -m pip install vsearch-1.0.tar.gz --user
Console outputs:
15:36 ~/mysite $ python3 -m pip install vsearch-1.0.tar.gz --user
Looking in links: /usr/share/pip-wheels
Processing ./vsearch-1.0.tar.gz
Building wheels for collected packages: vsearch
Running setup.py bdist_wheel for vsearch ... done
Stored in directory: /home/Mohr/.cache/pip/wheels/85/fd/4e/5302d6f3b92e4057d341443ed5ef0402eb04994663282c12f7
Successfully built vsearch
Installing collected packages: vsearch
Found existing installation: vsearch 1.0
Uninstalling vsearch-1.0:
Successfully uninstalled vsearch-1.0
Successfully installed vsearch-1.0
Now when I try to run my webapp I get the following error:
2020-03-24 16:18:14,592: Error running WSGI application
2020-03-24 16:18:14,592: ModuleNotFoundError: No module named 'vsearch'
2020-03-24 16:18:14,593: File "/var/www/mohr_eu_pythonanywhere_com_wsgi.py", line 16, in <module>
2020-03-24 16:18:14,593: from vsearch4web import app as application # noqa
2020-03-24 16:18:14,593:
2020-03-24 16:18:14,593: File "/home/Mohr/mysite/vsearch4web.py", line 3, in <module>
2020-03-24 16:18:14,593: from vsearch import search4letters
Judging from this error, I assume that "vsearch" cannot be found because it was installed as "vsearch-1.0". However, when I try to change this line to:
from vsearch-1.0 import search4letters
I rightfully get a syntax error since I cannot address modules this way. So what can I do about this? When creating the module in the beginning, I added a version number to the setup.py file because, according to the lecture, it is good practice. Setuptools then automatically creates the source distribution file with the "-1.0" at the end. Also, when installing it using the command shown above, it apparently gets installed as "vsearch-1.0", which in turn I am unable to reference in my Python code because of the invalid syntax.
Am I doing something wrong? Is there a way to import this under another namespace? Is there a way to reference "vsearch-1.0" in my Python code without getting a syntax error?
There are different python3 versions installed on PythonAnywhere. When you install something using python3 -m pip or pip3, you use the default python3, which probably does not match the Python version setting of your web app. Use python3.7 and pip3.7, or python3.6 and pip3.6, etc. for --user installations to be sure.
pip install --user (with emphasized --user) installed the package into your user directory: /home/Mohr/.local/lib/pythonX.Y/site-packages/.
To run your WSGI application, you probably use a virtual environment in which user-installed modules are not available. To use modules in the venv, you have to install everything in the venv. So activate the venv in a terminal and install the module with the venv's pip:
pip install vsearch-1.0.tar.gz
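If you want to double-check what the web app actually sees, a small diagnostic like this (run with the same interpreter, or inside the same venv, that the web app is configured to use) prints the interpreter and where the module is loaded from:

import sys

print(sys.executable)    # which Python interpreter is running
print(sys.version)       # its version, to compare with the web app's setting

import vsearch
print(vsearch.__file__)  # where vsearch was actually installed

Note that the module is still imported as plain vsearch; the -1.0 in vsearch-1.0.tar.gz is only the distribution's version number, not part of the import name.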
I'm using Python 3.6.5 and I've built my application in a .py file (it works fine), and I can run it from cmd perfectly. Now I want to build an exe file from it.
I tried cx_Freeze and py2exe, but they didn't work for me.
Lastly, I tried PyInstaller; it looks like it completed successfully, but when I run my exe file I get the following error, which I have tried many ways to solve without success:
ImportError: Failed to import the Cloud Storage library for Python. Make sure to install the "google-cloud-storage" module.
I'm quite sure that I've installed the google-cloud-storage and requests packages (I checked many times with pip install google-cloud-storage and pip install google-resumable-media[requests], and it says the requirements are already satisfied).
What I've tried:
- adding the google folders from site-packages to the dist folder
- installing grpcio (it was already installed)
- editing the contents of the google hook files (see the hook sketch below)
But I didn't find any solution.
What should I do to solve this problem?
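For illustration only, here is a hedged sketch of what the hook-file route mentioned above can look like; the file name hook-google.cloud.storage.py, the hooks directory, and whether this actually resolves the error are assumptions, not a confirmed fix:

# hook-google.cloud.storage.py -- hypothetical extra hook, used via --additional-hooks-dir
from PyInstaller.utils.hooks import collect_data_files, copy_metadata

# google-cloud-storage checks its installed distribution metadata at import time,
# so bundle that metadata plus any package data files into the frozen app.
datas = copy_metadata("google-cloud-storage") + collect_data_files("google.cloud.storage")

Rebuild with something like pyinstaller --additional-hooks-dir=hooks your_script.py and test the exe again.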
I am making a very simple API call to the Google Vision API, but it keeps giving me an error that the 'google.oauth2' module is not found. I've pip installed all the dependencies. To check this, I've imported the google.oauth2 module in command-line Python, and it works there. Please help me with this.
There are multiple reasons for this:
Check whether you have installed the dependencies in only one place or in multiple places. Try to install them only in the source library folder.
If the above doesn't solve it, uninstall all Google packages from your local machine, delete the lib folder in your app folder, create it again, and then execute:
pip install -t lib google-auth google-auth-httplib2 google-api-python-client --upgrade
Hope this solves your problem!
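If the packages are vendored into a local lib folder like this, the application also has to find that folder at import time. A minimal check, assuming the lib directory sits next to the script (the path handling is an assumption about your layout):

import os
import sys

# Make the vendored packages in ./lib importable before google.oauth2 is imported
sys.path.insert(0, os.path.join(os.path.dirname(os.path.abspath(__file__)), "lib"))

from google.oauth2 import service_account  # should now resolve from ./lib
print(service_account.__file__)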
I installed Pillow, and afterwards when I do:
from PIL import Image
I get the following error:
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 61, in <module>
ImportError: cannot import name _imaging
However, if I import these separately, everything is fine, i.e.:
import _imaging
import Image
Do you know what the problem might be?
I had the same problem and I solved that by upgrading this package using the command below:
pip install -U Pillow
This also happens if you built Pillow in one OS and then copied the contents of site-packages to another one. For example, if you are creating an AWS Lambda deployment package, that's the error you will face when running the Lambda function. If that's the case, then Pillow needs to be installed on an Amazon Linux instance and you have to use the resulting site-packages in your deployment package. See instructions and details here:
http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html
I ran into this problem as well. It can happen if you have PIL installed, then install Pillow on top of it.
Go to /usr/local/lib/python2.7/dist-packages/ and delete anything with "PIL" in the name (including directories). If the Pillow .egg file is there you might as well delete that too.
Then re-install Pillow.
substitute "python2.7" for the version of python you're using.
What is your version of pillow?
Pillow >= 2.1.0 no longer supports import _imaging. Please use from PIL.Image import core as _imaging instead. Here's the official documentation.
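As a quick sanity check along those lines, this prints the installed Pillow version and uses the newer import path; the getattr fallback is only there because very old releases exposed the version under different names:

import PIL

# Recent Pillow exposes __version__; older releases used PILLOW_VERSION or VERSION
print(getattr(PIL, "__version__", getattr(PIL, "PILLOW_VERSION", "unknown")))

# Replacement for the removed top-level "import _imaging"
from PIL.Image import core as _imaging
print(_imaging)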
I have got the same error with Python 3.6. Upgrading Pillow did the job for me.
sudo python3.6 -m pip install Pillow --upgrade
For other Python versions, use your version number instead of 3.6.
This can happen if you're trying to run Pillow installed on a Mac in a Linux environment (for example, building an AWS Lambda on a Mac and then deploying it to a Linux runtime).
To make sure you're installing it for the right platform do the following:
pip3 install --platform manylinux1_x86_64 --only-binary=:all: -t <target-dir> Pillow
The --only-binary=:all: is required when specifying --platform (pip also requires installing into a target directory, hence -t), and the platform itself can be found by looking at https://pypi.org/project/Pillow/7.2.0/#files (for example) - the platform is the last part of the filename, e.g. win32, manylinux1_x86_64, manylinux1_i686, etc.
This avoids the need to be running Linux to install the Linux build of Pillow.
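If you are unsure which platform tag a given interpreter actually accepts, the packaging library (which ships alongside modern pip and setuptools, so this assumes it is available) can list them:

from packaging import tags

# Print the first few wheel tags this interpreter accepts, most specific first,
# e.g. cp39-cp39-manylinux_2_17_x86_64
for tag in list(tags.sys_tags())[:10]:
    print(tag)

Run it with the interpreter that will actually import Pillow (the Linux side, in this case) to see which manylinux tags it supports.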
This may be a niche solution, but I was able to fix this problem in PyCharm by going to File -> Settings -> Python Interpreter and clicking the upgrade symbol next to the Pillow package.
For Pillow to work, PIL must be under /usr/local/lib/python2.7 (or python3)/dist-packages/ as a PIL package directory, not a bare PIL.py file.
sudo apt-get update
pip install Pillow
PIL != PiL
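A quick way to confirm where PIL is actually being loaded from (just a diagnostic, not a fix):

import PIL

# Should print a path ending in .../dist-packages/PIL (a package directory, not a PIL.py file)
print(PIL.__path__)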
I had the same problem when I tried to deploy a Lambda package. The thing is that you have to precompile the package emulating the Lambda architecture/runtime you are going to use, otherwise you'll get cannot import name _imaging. There are 2 ways of solving this:
1 - Spin up an EC2 Amazon Linux instance (I will only cover this part).
2 - Use Docker.
Short solution
Install Python 3 on an Amazon Linux 2 instance (it must be the python3.X you plan to use in Lambda).
Create a virtual environment under the ec2-user home directory.
Activate the environment, and then install Boto 3.
Install Pillow.
Create a ZIP archive with the contents of the library (PIL and Pillow.libs).
Add your function code to the archive.
Update the Lambda function (AWS CLI).
Long solution
1. If Python 3 isn't already installed, then install the package using the yum package manager.
`$ sudo yum install python3 -y`
2. Create a virtual environment under the ec2-user home directory.
The following command creates the app directory with the virtual environment inside of it. You can change my_app to another name. If you change my_app, make sure that you reference the new name in the remaining resolution steps.
`$ python3 -m venv my_app/env`
3. Activate the virtual environment and install Boto 3.
Attach an AWS Identity and Access Management (IAM) role to your EC2 instance with the proper permissions policies so that Boto 3 can interact with the AWS APIs. For other authentication methods, see the documentation. For quick use, you can set your credentials with $ aws configure (you will need this in step 7).
3.1 Activate the environment by sourcing the activate file in the bin directory under your project directory.
`$ source ~/my_app/env/bin/activate`
3.2. Make sure that you have the latest pip module installed within your environment.
`$ pip install pip --upgrade`
3.3 Use the pip command to install the Boto 3 library within your virtual environment.
`pip install boto3`
4. Install libraries with pip.
`$ pip install Pillow`
4.1 Deactivate the virtual environment.
`$ deactivate`
5. Create a ZIP archive with the contents of the library.
Change directory to where pip installed the packages. It should be something like /my_app/env/lib/python3.x/site-packages.
IMPORTANT: the key here is to zip the files inside site-packages into your Lambda package (I only used PIL and Pillow.libs to save space, but you can zip everything).
5.1 ZIP everything that's inside the PIL folder.
`zip -r9 PIL.zip ./PIL/`
5.2 Add the Pillow.libs folder to your ZIP:
`zip -gr PIL.zip Pillow.libs`
6. Add your function code to the archive.
You can do this in the console if it's just one file of code, but I recommend doing it in this step. If you don't have your code yet, just create a file using vi or nano and save it with the name that your Lambda handler will use (in this case we'll use lambda_function.py; a minimal example handler is sketched at the end of this answer).
`zip -g PIL.zip lambda_function.py`
7. Update the Lambda function (AWS CLI).
If you haven't created a Lambda function yet, do it now. Before updating the function from the AWS CLI, make sure you have the right permissions to update Lambda from the AWS CLI.
Change LAMBDAFUNCTIONNAME to your function name:
`aws lambda update-function-code --function-name LAMBDAFUNCTIONNAME --zip-file fileb://PIL.zip`
Getting out of the first loop of hell
Go to your Lambda console and test your code; make sure you use the same runtime/Python version you used on the EC2 instance.
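For completeness, here is a minimal sketch of the lambda_function.py mentioned in step 6; it just exercises the packaged Pillow so that a wrong-platform build fails immediately (the return payload shape is an assumption):

# lambda_function.py - minimal handler to verify the bundled Pillow imports
from PIL import Image

def lambda_handler(event, context):
    # Creating an image touches the compiled _imaging extension,
    # so this fails fast if a wrong-platform build was packaged.
    img = Image.new("RGB", (8, 8), color=(255, 0, 0))
    return {"statusCode": 200, "body": "Pillow OK, image size: %s" % (img.size,)}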
Quick solution: import PyQt5 as well, and you will not get that error message.
import PyQt5
from PIL import ImageGrab
As some other answers have alluded to, this can happen when you build Pillow on macOS and try to import PIL on another OS, like some Amazon Linux flavor.
My exact use case was to package imagehash as a Lambda layer, which includes Pillow as a dependency. The following guideline has worked great for me for all Python packages.
Install the SAM CLI (see SAM Installation).
Create your Python script with the Lambda handler defined.
Create your template.yml file with your Lambda function defined. Your CodeUri should be the relative path to your Python script.
Add the package you are trying to create a layer for to your requirements.txt.
Run the following SAM command: sam build -t path_to_template
You will now have the following directory: .aws-sam/build/{Logical ID Of Lambda Function}. Inside you will see that your Python packages and their dependencies have been installed just as if you ran pip download package and unzipped the wheel files.
Now the Python files have been prepped by SAM specifically for Lambda, and you can continue with creating your Lambda layer as desired (see Configuring Lambda Layers).
Since I use AWS SAM CLI already for running Lambda functions locally, this has been the easiest method for me to create my layers.
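After publishing the layer, a tiny test handler like this (purely illustrative) confirms that both imagehash and its Pillow dependency resolve from the layer inside Lambda:

# lambda_function.py - assumes the imagehash/Pillow layer is attached to the function
from PIL import Image
import imagehash

def lambda_handler(event, context):
    # Hash a small in-memory image just to prove both libraries load
    img = Image.new("L", (16, 16))
    return {"hash": str(imagehash.average_hash(img))}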
Just uninstall pillow:
pip uninstall pillow
then install pillow again:
pip install pillow
works great
I'm using Flask with Google App Engine. I have the module Pillow installed via this command:
pip install -t lib pillow
I fixed this error by defining PIL in my app.yaml file:
libraries:
- name: PIL
version: latest
Solution
pip uninstall PIL
pip uninstall Pillow
pip install Pillow