How to update pip on the Streamlit hosting platform - python

I have created a very small web application which uses a machine learning model in its backend, and I tried to host it on the Streamlit hosting platform.
Initially I got an error saying the modules were not found, so I listed the modules in requirements.txt.
But when Streamlit tries to install the modules and reaches TensorFlow, it throws an error asking me to update pip, since it is using an older version of pip.
I have verified twice that I am running the latest version of pip on my local computer, so how can I make Streamlit use the latest version of pip to install TensorFlow successfully?
I am new to Python and machine learning, so please explain in an easy-to-understand way.
Any response is most welcome.

Try adding the following to your code and redeploying it to Streamlit Hosting:
!pip install --upgrade pip
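Note that the ! prefix only works inside a notebook. If your app is a plain Python script, a minimal sketch of the same idea (assuming pip is available to the interpreter running the app) is to shell out with the standard library:
import subprocess
import sys

# Upgrade pip for the same interpreter that is running this script.
subprocess.check_call([sys.executable, "-m", "pip", "install", "--upgrade", "pip"])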

Related

Anaconda installation messed up the existing Python packages

For my project requirements I had installed Python packages like Jira and Bitbucket to connect to the servers using their API wrappers. After I installed Anaconda, all the existing packages stopped working. Now I'm getting a "No module found" error for both Bitbucket and Jira, although the modules are already installed.
For example:
pip3 install bitbucket-python
tells me that the requirement is already satisfied. But when I run the code I get:
No module named "bitbucket" found
The same code was running fine 2 days ago before I installed anaconda.
Please help.
Now you have different versions of Python and pip on your system, and you have to make sure you are running the correct version of the command. On my system, with conda installed and activated, pip3 refers to the pip of the system Python, and pip to the pip of the active conda Python.
Try this: pip install bitbucket-python
You can use the which command to see which binary is getting executed. For example, on my system:
dhananjay@ideapad:~$ which python
/home/dhananjay/.conda/bin/python
dhananjay@ideapad:~$ which pip
/home/dhananjay/.conda/bin/pip
dhananjay@ideapad:~$ which pip3
/usr/bin/pip3
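If you prefer to check from inside Python rather than the shell, a quick sketch (just a diagnostic, not part of the fix) is to print which interpreter is running and which pip belongs to it:
import subprocess
import sys

# The interpreter actually executing this code (conda env vs. system Python).
print(sys.executable)

# The pip that belongs to that same interpreter.
subprocess.check_call([sys.executable, "-m", "pip", "--version"])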

Install gdal==2.3.3 into a venv with pip

I am using Anaconda with Python 3.7.4 and working in VS Code. I am currently creating a Flask webapp, mostly with Bokeh, that I am deploying on Google App Engine (gcloud). In order to work with GIS I need to install geopandas, which requires gdal, fiona, rtree, shapely, pyproj, numpy, among a few others. I am working in a virtual environment, so I can install the .whl files directly with pip install [file.whl] and it works locally with no problem. I also created the environment variable for gdal_data and added it to the PATH variable as well. Since I installed geopandas, deploying the app has Google throwing me an error that gdal-config is not found. I tried to dig into it with my limited knowledge of dependencies and deployment. What I figured out is the following:
conda supersedes gdal 3.0.4 and installs gdal 2.3.3.
pip does not have this version, and that is where the problem starts.
As far as I understand it, Google will use my requirements.txt to install the libraries from my virtual environment into their cloud environment, so an error is thrown once pip cannot find the gdal 2.3.3 version that I pass in requirements.txt and that I installed manually.
Also, fiona, which is one of the pillars needed to build GIS plots, is not compatible with the gdal version that conda insists on superseding.
I have read a lot and spent a good amount of time dealing with this error. There is a lot of info, mostly for Linux, but I could not find anything to help me out.
If someone out there could help me that would be appreciated.
I had the same issue when installing gdal:
...
main.gdal_config_error: [Errno 2] No such file or directory: 'gdal-config': 'gdal-config' ...
The problem is that the underlying Docker container does not have the required C libraries for running this version of gdal, so you cannot use the default App Engine environment to run your application.
The solution is then to create a custom runtime (i.e. a Docker container) to run your App Engine instance. There is another Stack Overflow post which explains exactly how to do this.
The most important step is to include:
sudo apt-get install gdal-bin python-gdal
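As a quick sanity check (a hedged sketch, not part of the original answer), you can confirm from Python whether the gdal-config tool that the build complains about is actually present in a given environment:
import shutil

# Prints the path to gdal-config if the GDAL C tooling is installed,
# or None if it is missing (which is what triggers the error above).
print(shutil.which("gdal-config"))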

No module named "fastai"

I'm trying to use fastai to figure out an optimal learning rate for my neural network. Everything else is working fine; I'm just not quite getting the accuracy I want. So I'm trying to use the following lines of code to optimize my learning rate:
learn.lr_find()
learn.sched.plot_lr()
So I pip installed fastai and everything seemed to install correctly, and into the correct directory, but every time I try to import fastai, I can't. I included pictures of my command prompt and the error message. Thank you all in advance for the help, I really appreciate it. If I didn't provide enough info, just let me know; I'm new to asking questions on here.
Error message (screenshot)
Installation of package (screenshot)
From the screenshots provided, it seems like you didn't install fastai properly and were installing PyTorch instead. fastai comes with a PyTorch build, so you don't need to install PyTorch separately.
You can install fastai with pip using the following command:
pip install fastai
If you are using Python 3.x, you might sometimes need to use the pip3 command instead:
pip3 install fastai
Another problem might be a Python mismatch. Maybe you've installed both versions of Python and your distro can't tell which one should grab the package. So make sure to use the correct Python and pip versions.
I had a similar issue where I installed fastai with pip. I got the 'No module named fastai' error whenever I tried to import fastai, but if I did "pip freeze | grep fastai" in the terminal, it showed that fastai was clearly installed.
The solution for me was to download anaconda3, enter an anaconda environment, and reinstall fastai with conda using step 6 of fastai's setup instructions for using AWS EC2 instances.
Useful links:
https://docs.anaconda.com/anaconda/install/linux/
https://course.fast.ai/start_aws.html
conda install -c fastai fastai
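Whichever route you take (pip or conda), a quick hedged check that the interpreter you actually run can see the package, and from where, looks like this:
import fastai

# If the import succeeds, show the version and the file it was loaded from,
# which reveals which environment the package really lives in.
print(fastai.__version__, fastai.__file__)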
I would recommend waiting for the new release of fastai; they are currently working on version 2, expected in July.
Learner.lr_find(start_lr=1e-07, end_lr=10, num_it=100, stop_div=True, show_plot=True, suggestions=True)

Flask app can't find google module

I'm using a virtual environment to run a flask app. When I run pip freeze, I get the following:
google-api-core==0.1.1
google-auth==1.2.1
google-cloud-core==0.28.0
google-cloud-speech==0.30.0
google-gax==0.15.16
googleapis-common-protos==1.5.3
However, during run time, I get the following error:
from google.cloud import speech
ModuleNotFoundError: No module named 'google'
I'm using the google speech APIs. They work just fine when I run them locally. I don't understand why the app can't find the modules even though they're listed as installed. Can someone suggest a fix? I've tried doing pip install google, and it downloaded a bunch of other stuff, but still no fix.
So there are a lot of places where the error could be coming from. Could you please provide more details?
For example, which python version are you using? Python 2 or 3? If you are calling the wrong interpreter you need to type
python2 -m pip install
or
python3 -m pip install
accordingly.
Secondly, are you using conda? If so, you need to use
conda install
instead of pip install. You can find out by typing which python in your terminal.
Third, are you sure you installed the google module correctly? If not, try using
pip install google --user
and see if that works.
Lastly, are you installing the correct package? Because I believe for the Speech API you need to do:
pip install --upgrade google-api-python-client
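As a hedged diagnostic (assuming the google-cloud-speech package from your pip freeze output is the one you expect to use), you can also ask the running interpreter where the google.cloud namespace package resolves from:
import google.cloud

# Namespace packages can resolve from unexpected locations; the search path
# shows which site-packages directory is actually being used.
print(list(google.cloud.__path__))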
Well, removing the virtual environment and reinstalling all the dependencies worked.
It might be easier to add the path to the modules you already installed to your flask.py:
import sys
# Point the interpreter at the directory where the modules were actually installed.
sys.path.append("/home/ubuntu/.local/lib/python3.6/site-packages/")
import google
import gspread
That worked like magic for me on AWS.

How to resize an image in Python without using Pillow

My Django site is hosted on Azure.
It allows for users to upload photos. I need a way for the system to resize, and possibly rotate photos.
Seems simple, and I tried to use the Pillow library, but while it works locally it will not deploy to Azure for a number of reasons. I can be specific if needed, but this is well documented, like here.
I even tried building a wheel of Pillow and deploying that, but Azure refuses to load it, saying it is the wrong platform (even though I matched the Python 2.7 version - and 32-bit). I tried to upload 64-bit versions as well, and nothing works. So at this point I just want to leave Pillow behind me and ask for another way to achieve this in Python without Pillow. Is there any other way to do this?
Notes of things I tried:
1) Installing Pillow the normal way gives this familiar error message:
ValueError: zlib is required unless explicitly disabled using --disable-zlib, aborting
2) I then created a wheel by doing: pip wheel Pillow --wheel-dir=requirements
This however yields the following error in the pip.log:
Pillow-3.4.2-cp27-cp27m-win32.whl is not a supported wheel on this platform.
Pillow-4.1.1-cp27-cp27m-win32.whl is not a supported wheel on this platform.
I am certain that I'm running Python 2.7 on a 32-bit platform, so I'm not sure why it's complaining.
After days wasted, I've discovered the reason why Pillow isn't installing. It's not because the wheel is incompatible with the platform, but rather that pip is too old.
Azure is using pip version 1.5.6 at the moment - shame on them. This version doesn't recognise wheels.
Here is how I fixed this:
Go to the Kudu DebugConsole:
https://[site_name].scm.azurewebsites.net/DebugConsole
Activate your VirtualEnv:
env\Scripts\activate
Note how old the version number is if you run pip --version.
Now upgrade this by running:
python -m pip install -U pip
Note that you cannot upgrade the default pip in D:\Python27, as you don't have access to it, but you can upgrade your local pip inside of the virtual environment.
Now run pip --version to ensure you are running the latest version (i.e. >=9.0.1).
Now inside of requirements.txt you can tell pip to look for wheels in specific folders by adding a line at the top such as:
--find-links requirements (which means it will search the requirements folder).
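For example, the top of requirements.txt might then look something like this (the Pillow line is just a placeholder for whatever wheels you build; only the --find-links line is the step described above):
--find-links requirements
Pillow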
Here is how you create the Pillow wheel. You can run this locally or on the Kudu Console. If you run it locally, ensure your Python version matches what you use on Azure (2.7 or 3.x), and by default make sure you use a 32-bit version.
pip install wheel (Only if you don't have wheel installed)
pip wheel Pillow --wheel-dir=requirements
This will copy two files into your requirements folder: Pillow-X.whl and olefile-X.whl. Ensure these are added to your source control if you are deploying via git push. Push these to the server.
Now in the Kudu DebugConsole you can check that the .whl files are there (after deploying) and test the install by running:
pip install --no-index -r requirements.txt
This should now work and install Pillow!
When deploying, pay close attention to whether it says "Found compatible virtual environment." or "Creating python 2-7 virtual environment." The former is what you want, but if you see the latter it means the deploy has blasted your env folder and reset you back to pip 1.5.6. I don't know why it does this sometimes, but try to make as few changes to the env folder as possible after deploying (i.e. just upgrade pip and that's it) to avoid this.
I can't help you much with installing Pillow on the Azure platform, but my days of manually resizing and doing other such work are long gone.
I have been using thumbor (https://thumbor.org/) for quite some time.
Just set up a secured instance of it and use it to resize, crop and manage your images dynamically.
Hope it helps.
There is another SO thread, Microsoft Azure Django Python setup error Pillow, which covers a similar issue about installing Pillow on Azure. I think my answer there is helpful for resolving your issue. If you have any concerns about my solution, please feel free to let me know.
