Install gdal==2.3.3 into a venv with pip - python

I am using Anaconda with Python 3.7.4 and working in VS Code. I am currently building a Flask web app, mostly with Bokeh, that I am deploying on Google App Engine (gcloud). To work with GIS I need to install geopandas, which requires gdal, fiona, rtree, shapely, pyproj and numpy, among a few others. I am working in a virtual environment, so I can install the .whl files directly with pip install [file.whl] and everything works locally with no problem. I also created the environment variable for gdal_data and added it to the PATH variable as well.
Ever since I installed geopandas, deploying the app fails: Google throws an error saying gdal-config was not found. I tried to dig into it with my limited knowledge of dependencies and deployment. What I figured out was the following:
conda supersedes gdal 3.0.4 and installs gdal 2.3.3 instead.
pip does not have this version, and that is where the problem starts.
As far as I understand it, Google uses my requirements.txt to install the libraries from my virtual environment into their cloud environment, so an error is thrown when pip cannot find the gdal 2.3.3 version that I pass in my requirements.txt and that I installed manually.
Also, fiona, one of the pillars for building GIS plots, is not compatible with the gdal version that conda insists on superseding.
I have read a lot and spent a good amount of time dealing with this error. There is a lot of info, mostly for Linux, but I could not find anything to help me out.
If someone out there could help me that would be appreciated.

I had the same issue when installing gdal:
...
main.gdal_config_error: [Errno 2] No such file or directory: 'gdal-config': 'gdal-config' ...
The problem is that the underlying Docker container does not have the C libraries required to run this version of gdal, so you cannot use the default App Engine environment for your application.
The solution is to create a custom runtime (i.e. a Docker container) to run your App Engine instance. There is another Stack Overflow post which explains exactly how to do this.
The most important step is to include:
sudo apt-get install gdal-bin python-gdal
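A minimal sketch of such a custom runtime follows, assuming the App Engine flexible environment, a Flask entry point in main.py that exposes app, and gunicorn listed in requirements.txt as the server (those last two are assumptions about your project, not requirements of the approach). The app.yaml just switches to a custom runtime:
runtime: custom
env: flex
and a Dockerfile installs the GDAL C libraries before pip runs, so gdal-config is available when gdal and fiona are built:
# Dockerfile (sketch), based on Google's Python base image for App Engine flexible
FROM gcr.io/google-appengine/python
# GDAL C libraries; libgdal-dev provides the gdal-config binary pip is looking for
RUN apt-get update && apt-get install -y gdal-bin libgdal-dev
RUN virtualenv /env -p python3.7
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
CMD gunicorn -b :$PORT main:app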

Related

No module named 'mlflow.sklearn'; 'mlflow' is not a package

I installed mlflow on my Windows machine with
pip install mlflow
followed by other dependent libraries: pandas, numpy, sklearn.
I ran the wine quality model tutorial from the given link:
https://www.mlflow.org/docs/latest/tutorials-and-examples/tutorial.html
I am getting the below error.
import mlflow.sklearn
ModuleNotFoundError: No module named 'mlflow.sklearn'; 'mlflow' is not a package
I thought it might be some firewall issue, so I tried it on my personal system, and it's still the same error.
What mistake am I making here? Or is there some library-related issue I am facing?
Please make sure you are installing the package for the correct Python installation (check your PATH).
For example, if you have different versions of Python installed on your computer, installing packages can get a little tricky; it's therefore recommended to use a virtual environment to keep packages for different installations properly organised on your computer.
Since you are using a conda environment, I would suggest using conda install mlflow with an appropriate channel instead of pip install mlflow, i.e. conda install -c conda-forge mlflow.
For more details, please check https://anaconda.org/conda-forge/mlflow.
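A minimal sketch of that approach, assuming a fresh environment (the name mlflow-env and the Python version are just examples):
conda create -n mlflow-env python=3.7
conda activate mlflow-env
conda install -c conda-forge mlflow
python -c "import mlflow.sklearn"
If the last command returns without an error, the import problem is resolved.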

Azure Function Core Tools does not link with my Conda environment

I've got a working repository of code running on Azure in a function app (version 3). However, I can't get it running locally. The reason for this is (I think) that Azure Core Tools doesn't link to my Conda environment.
When I run 'func start', the output first shows the following:
(myenv) C:\mypath\__app__>func start
Found Python version 3.8.0 (py).
This is odd because in myenv (my conda environment) I have installed Python 3.8.13, which is the version I need. This difference in versions leads to multiple modules not being found.
Error message:
Exception: ModuleNotFoundError: No module named 'azure.identity'.
I installed Core Tools v3
After reproducing this on our end, we were able to make it work by installing azure-identity in your environment.
pip install azure-identity
Also, make sure you install the packages in the requirements.txt file by running the command
pip install -r requirements.txt
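For example, with the conda environment activated first (so the packages land in the interpreter that func start picks up), the sequence might look like this:
(myenv) C:\mypath\__app__>pip install azure-identity
(myenv) C:\mypath\__app__>pip install -r requirements.txt
(myenv) C:\mypath\__app__>func start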

Tensorflow: Installing from source - ImportError: No module named pywrap_tensorflow_internal

I'm using Ubuntu 16.04.3 LTS and followed the steps defined in the documentation here. I'm only building for CPU.
I've managed to follow all the steps successfully until I reach the section Install the pip package, which states:
Invoke pip install to install that pip package. The filename of the .whl file depends on your platform. For example, the following command will install the pip package
for TensorFlow 1.2.1 on Linux:
$ sudo pip install /tmp/tensorflow_pkg/tensorflow-1.2.1-py2-none-any.whl
Problem 1: However, I've not been able to find any .whl file. Where can I find this file?
Problem 2: When I try to import tensorflow, I get the following error:
ImportError: No module named pywrap_tensorflow_internal
Problem 3: If I try to import tensorflow from any other directory, I receive the error
ImportError: No module named tensorflow
It seems it can't find the path to tensorflow. How do I change that?
It seems that something has gone wrong in your pip setup, and such issues are sometimes hard to track down because the original pip is system-wide, which can cause unexpected problems such as dependency conflicts.
It is a good idea to use Anaconda. It is a Python data science platform: essentially a large, extendable bundle of Python packages together with a virtual environment tool called conda. You can create many isolated Python environments and install or update the packages you need in each. Almost every package that can be found on PyPI using pip can also be found in Anaconda.
You can also use pip to install packages that Anaconda does not contain, since pip itself is one of the packages included. By default, all environments and packages live in your /home/(..user..) directory if you install without sudo.
For example (no need to worry about package dependencies):
[chain#ChainFedora Project]$ conda install tensorflow
Fetching package metadata .........
Solving package specifications: .
Package plan for installation in environment /home/chain/anaconda3:
The following NEW packages will be INSTALLED:
backports.weakref: 1.0rc1-py36_0
libprotobuf: 3.2.0-0
markdown: 2.6.8-py36_0
protobuf: 3.2.0-py36_0
tensorflow: 1.2.1-py36_0
Proceed ([y]/n)?
It's very easy to get started with the conda cheat sheet.
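As a minimal sketch of that workflow (the environment name tf-env and the Python version are just examples):
conda create -n tf-env python=3.6
source activate tf-env
conda install tensorflow
python -c "import tensorflow as tf; print(tf.__version__)"
If the last command prints a version number, the pywrap_tensorflow_internal problem from the source build is behind you.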

How to resize an image in Python without using Pillow

My Django site is hosted on Azure.
It allows users to upload photos. I need a way for the system to resize, and possibly rotate, photos.
Seems simple, and I tried to use the Pillow library, but while it works locally it will not deploy to Azure for a number of reasons. I can be specific if needed, but this is well documented, for example here.
I even tried building a wheel of Pillow and deploying that, but Azure refuses to load it, saying it is the wrong platform (even though I matched the Python 2.7 version and 32-bit). I tried to upload 64-bit versions as well, and nothing works. So at this point I just want to leave Pillow behind me and ask for another way to achieve this in Python without Pillow. Is there any other way to do this?
Notes of things I tried:
1) Installing Pillow the normal way gives this familiar error message:
ValueError: zlib is required unless explicitly disabled using --disable-zlib, aborting
2) I then created a wheel by doing: pip wheel Pillow --wheel-dir=requirements
This however yields the following error in the pip.log:
Pillow-3.4.2-cp27-cp27m-win32.whl is not a supported wheel on this platform.
Pillow-4.1.1-cp27-cp27m-win32.whl is not a supported wheel on this platform.
I am certain that I'm running Python 2.7 on a 32-bit platform, so I'm not sure why it's complaining.
After days wasted, I've discovered the reason why Pillow isn't installing. It's not that the wheel is incompatible with the platform, but rather that pip is too old.
Azure is using pip version 1.5.6 at the moment - shame on them. This version doesn't recognise wheels.
Here is how I fixed this:
Go to the Kudu DebugConsole:
https://[site_name].scm.azurewebsites.net/DebugConsole
Activate your VirtualEnv:
env\Scripts\activate
Note how old the version number is if you run pip --version.
Now upgrade this by running:
python -m pip install -U pip
Note that you cannot upgrade the default pip in D:\Python27, as you don't have access to it, but you can upgrade your local pip inside of the virtual environment.
Now run pip --version to ensure you are running the latest version (i.e. >=9.0.1).
Now inside of requirements.txt you can tell pip to look for wheels in specific folders by adding a line at the top such as:
--find-links requirements (which means it will search the requirements folder).
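For example, the top of requirements.txt might then look like this (the entries below the first line are just placeholders for whatever your project actually depends on):
--find-links requirements
Django
Pillow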
Here is how you create the Pillow wheel. You can run this locally or on the Kudu Console. If you run it locally, ensure your Python version matches what you use on Azure (2.7 or 3.X) and make sure you use a 32-bit version (the Azure default).
pip install wheel (Only if you don't have wheel installed)
pip wheel Pillow --wheel-dir=requirements
This will copy two files into your requirements folder: Pillow-X.whl and olefile-X.whl. Ensure these are added to your source control if you are deploying via git push. Push these to the server.
Now in the Kudu DebugConsole you can test the .whl files are there (after deploying) and test the installing by running:
pip install --no-index -r requirements.txt
This should now work and install Pillow!
When deploying, pay close attention to whether it says Found compatible virtual environment. or Creating python 2-7 virtual environment.. The former is what you want. If you see the latter, it means the deploy has blasted your env folder and reset you back to pip 1.5.6. I don't know why it does this sometimes, but try to make as few changes to the env folder as possible after deploying (i.e. just upgrade pip and that's it) to avoid this.
I can't help you much with installing Pillow on the Azure platform.
But my days of manually resizing images and doing other such work are long gone.
I have been using thumbor https://thumbor.org/ for quite some time.
Just set up a secured instance of it and use it to resize, crop and manage your images dynamically.
Hope it helps.
There is another SO thread, Microsoft Azure Django Python setup error Pillow, which deals with a similar issue about installing Pillow on Azure. I think my answer there is helpful for resolving your issue. If you have any concerns about my solution, please feel free to let me know.

Have MySQLdb installed, works outside of virtualenv but inside it doesn't exist. How to resolve?

I'm using the most recent versions of all software (Django, Python, virtualenv, MySQLdb) and I can't get this to work. When I run "import MySQLdb" at the Python prompt from outside the virtualenv, it works; inside, it says "ImportError: No module named MySQLdb".
I'm trying to learn Python and Linux web development. I know that it's easiest to use SQLite, but I want to learn how to develop larger-scale applications comparable to what I can do in .NET. I've read every blog post on Google and every post here on StackOverflow, and they all suggest that I run "sudo pip install mysql-python", but it just says "Requirement already satisfied: mysql-python in /usr/lib/pymodules/python2.7".
Any help would be appreciated! I'm stuck over here and don't want to throw in the towel and just go back to doing this on Microsoft technologies because I can't even get a basic dev environment up and running.
If you have created the virtualenv with the --no-site-packages switch (the default), then system-wide installed additions such as MySQLdb are not included in the virtual environment packages.
You need to install MySQLdb with the pip command installed with the virtualenv. Either activate the virtualenv with the bin/activate script, or use bin/pip from within the virtualenv to install the MySQLdb library locally as well.
Alternatively, create a new virtualenv with system site-packages included by using the --system-site-packages switch.
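For that second option, a minimal sketch (the environment name is just an example):
virtualenv --system-site-packages newenv
source newenv/bin/activate
python -c "import MySQLdb"
If the last command exits without an ImportError, the system-wide MySQLdb is visible inside the environment.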
source $ENV_PATH/bin/activate
pip uninstall MySQL-python
pip install MySQL-python
this worked for me.
I went through the same problem, but using pip from the virtualenv didn't solve it, as I got this error:
error: could not delete '/Library/Python/2.7/site-packages/_mysql.so': Permission denied
Earlier I had installed the package by sudo pip install mysql-python
To solve this, copy the files /Library/Python/2.7/site-packages/MySQL_python-1.2.5-py2.7.egg-info and /Library/Python/2.7/site-packages/_mysql* to ~/v/lib/python-2.7/site-packages and include /usr/local/mysql/lib in the DYLD_LIBRARY_PATH environment variable.
For the second step I am doing export DYLD_LIBRARY_PATH=/usr/local/mysql/lib in ~/.profile.
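Spelled out as commands, those two steps look roughly like this (adjust the destination path to match your own virtualenv layout):
cp -R /Library/Python/2.7/site-packages/MySQL_python-1.2.5-py2.7.egg-info ~/v/lib/python-2.7/site-packages/
cp /Library/Python/2.7/site-packages/_mysql* ~/v/lib/python-2.7/site-packages/
echo 'export DYLD_LIBRARY_PATH=/usr/local/mysql/lib' >> ~/.profile
source ~/.profile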
