Python serverless: ModuleNotFoundError

I'm trying to use the Serverless Framework with a Python project.
I created a hello world example that I run in offline mode. It works well, but when I try to import a Python package I get ModuleNotFoundError.
Here is my serverless.yaml file:
service: my-test
frameworkVersion: "3"

provider:
  name: aws
  runtime: python3.8

functions:
  hello:
    handler: lambdas.hello.hello
    events:
      - http:
          path: /hello
          method: get

plugins:
  - serverless-python-requirements
  - serverless-offline
In lambdas/hello.py:
import json
import pandas

def hello(event, context):
    body = {
        "message": "hello world",
    }
    response = {"statusCode": 200, "body": json.dumps(body)}
    return response
In my Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]
pandas = "*"

[requires]
python_version = "3.8"
To run it, I use the command $ sls offline start.
Then when I query http://localhost:3000/dev/hello in Postman, I get the ModuleNotFoundError.
If I remove the line import pandas from hello.py, it works.
I don't understand why I get this error, as serverless-python-requirements is supposed to read the Pipfile, and pandas is in my Pipfile.
How can I use pandas (or any other Python package) in my lambdas with the Serverless Framework in offline mode?

The serverless-python-requirements plugin bundles your dependencies and packages them for deployment. It only takes effect when you run sls deploy.
From the plugin page:
The plugin will now bundle your python dependencies specified in your requirements.txt or Pipfile when you run sls deploy
Read more about Python packaging here: https://www.serverless.com/blog/serverless-python-packaging
Since you are running your service locally, this plugin will not be used; your dependencies need to be installed locally.
Perform the steps below to make it work:
Create a virtual environment in your serverless directory.
Install the plugin: serverless plugin install -n serverless-offline
Install pandas using pip.
Run sls offline start.
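A minimal command sequence along those lines, sketched with pipenv since the question already has a Pipfile (activating the virtualenv is what makes pandas importable for sls offline; adapt the steps if you use plain venv and pip):

$ cd my-test
$ pipenv install      # installs pandas from the Pipfile into a local virtualenv
$ pipenv shell        # activate the virtualenv so the offline handler can import pandas
$ sls offline start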

Your lambda function doesn't have the pandas module installed.
You need to use the serverless-python-requirements plugin: https://www.serverless.com/plugins/serverless-python-requirements. To use it you need Docker on your machine, and you need to create a requirements.txt file in your service listing the packages your lambda needs.
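A minimal sketch of that setup (the dockerizePip option matters for pandas, which ships compiled code that must be built for the Lambda environment):

requirements.txt:

pandas

serverless.yaml:

custom:
  pythonRequirements:
    dockerizePip: true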

Related

Python Lambda missing dependencies when set up through Amplify

I've been trying to configure an Amplify project with a Python-based Lambda backend API.
I have followed the tutorials, creating an API through the AWS CLI and installing all the dependencies through pipenv.
When I cd into the function's directory, my Pipfile looks like this:
name = "pypi"
url = "https://pypi.python.org/simple"
verify_ssl = true
[dev-packages]
[packages]
src = {editable = true, path = "./src"}
flask = "*"
flask-cors = "*"
aws-wsgi = "*"
boto3 = "*"
[requires]
python_version = "3.8"
And when I run amplify push, everything works and the Lambda function gets created successfully.
Also, when I run the deploy pipeline from the Amplify Console, I see in the build logs that my virtualenv is created and my dependencies are downloaded.
Something else that was done based on GitHub issues (otherwise the build would definitely fail) was adding the following to amplify.yml:
backend:
  phases:
    build:
      commands:
        - ln -fs /usr/local/bin/pip3.8 /usr/bin/pip3
        - ln -fs /usr/local/bin/python3.8 /usr/bin/python3
        - pip3 install --user pipenv
        - amplifyPush --simple
Unfortunately, from the Lambda's logs (both dev and prod), I see that it fails to import every dependency that was installed through pipenv. I added the following to index.py:
import os
os.system('pip list')
And I saw that NONE of my dependencies were listed, so I was wondering whether the Lambda was running in the virtualenv that was created or just using the default Python.
How can I make sure that my Lambda is running the virtualenv as defined in the Pipfile?
Lambda functions do not run in a virtualenv. Amplify uses pipenv to create a virtualenv and download the dependencies. Then Amplify packages those dependencies, along with the lambda code, into a zip file which it uploads to AWS Lambda.
Your problem is either that the dependencies are not packaged with your function or that they are packaged with a bad directory structure. You can download the function code to see exactly how the packaging went.
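If you want to verify this from inside the function itself, here is a minimal sketch (the handler name is illustrative; LAMBDA_TASK_ROOT is the standard Lambda environment variable pointing at the directory the deployment zip was extracted to):

import os

def handler(event, context):
    # Dependencies packaged with the function should sit alongside
    # index.py at the task root; if they are missing from this listing,
    # the zip was built without them.
    root = os.environ.get("LAMBDA_TASK_ROOT", ".")
    return {"files": sorted(os.listdir(root))}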

How does deploying and running a Python script on an Azure resource work?

I'm very new to DevOps, so this may be a very silly question. I'm trying to deploy a Python web-scraping script to an Azure Web App using GitHub Actions. The script is meant to run for a long period of time, as it analyzes websites word by word for hours, and it logs the results to .log files.
I know a bit about how GitHub Actions works; I know that I can trigger jobs when I push to the repo, for instance. However, I'm a bit confused as to how one runs an app or a script on an Azure resource (like a VM or Web App). Does this process involve SSH-ing into the resource and then automatically running the CLI command "python main.py" or "docker-compose up", or is there something more sophisticated involved?
For better context, this is the workflow file inside my workflows folder:
on: [push]

env:
  AZURE_WEBAPP_NAME: emotional-news-service  # set this to your application's name
  WORKING_DIRECTORY: '.'  # set this to the path of the working directory inside your GitHub repository; defaults to the repository root
  PYTHON_VERSION: '3.9'
  STARTUP_COMMAND: 'docker-compose up --build -d'  # set this to the startup command required to start the gunicorn server; by default it is empty

name: Build and deploy Python app

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    environment: dev
    steps:
      # checkout the repo
      - uses: actions/checkout@master
      # setup python
      - name: Setup Python
        uses: actions/setup-python@v1
        with:
          python-version: ${{ env.PYTHON_VERSION }}
      # setup docker compose
      - uses: KengoTODA/actions-setup-docker-compose@main
        with:
          version: '1.26.2'
      # install dependencies
      - name: python install
        working-directory: ${{ env.WORKING_DIRECTORY }}
        run: |
          sudo apt install python${{ env.PYTHON_VERSION }}-venv
          python -m venv --copies antenv
          source antenv/bin/activate
          pip install setuptools
          pip install -r requirements.txt
          python -m spacy download en_core_web_md
      # Azure login
      - uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - uses: azure/appservice-settings@v1
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          mask-inputs: false
          general-settings-json: '{"linuxFxVersion": "PYTHON|${{ env.PYTHON_VERSION }}"}'  # general configuration settings as key-value pairs
      # deploy web app
      - uses: azure/webapps-deploy@v2
        with:
          app-name: ${{ env.AZURE_WEBAPP_NAME }}
          package: ${{ env.WORKING_DIRECTORY }}
          startup-command: ${{ env.STARTUP_COMMAND }}
      # Azure logout
      - name: logout
        run: |
          az logout
Most of the script above was taken from https://github.com/Azure/actions-workflow-samples/blob/master/AppService/python-webapp-on-azure.yml.
Is env.STARTUP_COMMAND the "SSH and then run the command" part that I was thinking of, or is it something else entirely?
I also have another question: is there a better way to view logs from that Python script running inside the Azure resource? The only way I can think of is to SSH into it and then type cat whatever.log.
Thanks in advance!
Instead of using STARTUP_COMMAND: 'docker-compose up --build -d', you can use the startup file name:
startUpCommand: 'gunicorn --bind=0.0.0.0 --workers=4 startup:app'
or
startUpCommand: 'startup.txt'
The first form tells gunicorn where to find the app object (here, app inside startup.py). By default, Azure App Service looks for the Flask app object in a file named app.py or application.py; if your code doesn't follow this pattern, you need to customize the startup command. Django apps may not need customization at all. For more information, see How to configure Python on Azure App Service - Customize startup command.
Also, because the python-vscode-flask-tutorial repository contains the same startup command in a file named startup.txt, you could specify that file in the parameter rather than the command itself, using startUpCommand: 'startup.txt'.
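For reference, such a startup.txt contains nothing but the command line itself; a sketch, assuming the Flask app object is named app in application.py:

gunicorn --bind=0.0.0.0 --workers=4 application:app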

Problem with command heroku run -a <name of app> pipenv run upgrade

I have built an app with Python Flask and I am following these steps to deploy it:
Deploying to Heroku (takes 7 minutes)
Install heroku (if you don't have it yet):
$ npm i heroku -g
Log in to Heroku on the command line (if you have not already):
$ heroku login -i
Create an application (if you don't have one already):
$ heroku create <your_application_name>
Environment variables (takes 2 minutes)
Now navigate to your Heroku dashboard and look for your application settings; we have to manually add our environment variables into Heroku.
You cannot create a .env file on Heroku; instead you need to manually create all the variables under your project settings.
Open your .env file and copy and paste each variable (FLASK_APP, DB_CONNECTION_STRING, etc.) to Heroku.
Deploying your database to Heroku (takes 3 minutes)
Your local MySQL database now has to be hosted in the cloud. There are plenty of services that provide MySQL database hosting, but we recommend JawsDB because it has a free tier, it's simple, and it's 100% integrated with Heroku.
Go to your Heroku project dashboard and look to add a new Heroku add-on.
Look for JawsDB MySQL and add it to your project (it may ask for a credit card, but you will not be charged as long as you remain within the 5 MB database size, enough for your demo).
Once JawsDB is added to your project, look for the connection string inside your JawsDB dashboard, something like:
mysql://tqqa0ui0cga32nxd:eqi8nchjbpwth82v@c584md9egjnm02sk.5btxwkvyhwsf.us-east-1.rds.amazonaws.com:3306/45fds423rbtbr
Copy the connection string and create a new environment variable in your project settings.
Run migrations on Heroku: after your database is connected, you have to create the tables and structure. You can do that by running the pipenv run upgrade command on the production server like this:
$ heroku run -a=<your_app_name> pipenv run upgrade
:warning: Note: Notice that you have to replace <your_app_name> with your application name; you also have to be logged into Heroku in your terminal (you can do that by typing heroku login -i).
Push to the Heroku codebase
Commit and push to Heroku; make sure you have added and committed your changes before pushing:
$ git push heroku main
That is it!
Now the problem is when I run this command:
heroku run -a=<your_app_name> pipenv run upgrade
the response is:
bash: pipenv: command not found
This is my Pipfile:
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[dev-packages]

[packages]
flask = "*"
sqlalchemy = "*"
flask-sqlalchemy = "*"
flask-migrate = "*"
flask-swagger = "*"
psycopg2-binary = "*"
python-dotenv = "*"
mysql-connector-python = "*"
flask-cors = "*"
gunicorn = "*"
mysqlclient = "*"
flask-admin = "*"
cloudinary = "*"
flask-login = "*"
pipenv = "*"

[requires]
python_version = "3.8"

[scripts]
start="flask run -p 3000 -h 0.0.0.0"
init="flask db init"
migrate="flask db migrate"
upgrade="flask db upgrade"
deploy="echo 'Please follow this 3 steps to deploy: https://github.com/4GeeksAcademy/flask-rest-hello/blob/master/README.md#deploy-your-website-to-heroku' "
These are the commands I run before deploying:
$ pipenv install
$ mysql -u root -e "CREATE DATABASE example"
$ pipenv run init
$ pipenv run migrate
$ pipenv run upgrade
If I don't run the upgrade on Heroku, this is what the Heroku release log shows:
sqlalchemy.exc.DatabaseError: (mysql.connector.errors.DatabaseError) 2003 (HY000): Can't connect to MySQL server on 'localhost' (111)
(Background on this error at: http://sqlalche.me/e/14/4xp6)
It seems the pipenv tool is missing or not on your PATH. You may install it using:
$ pip install pipenv
If pipenv is already installed, check the PATH variable and whether you can locate pipenv using $ which pipenv. Have a look at the docs regarding PATH:
https://pipenv-fork.readthedocs.io/en/latest/advanced.html
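If pipenv simply does not exist on the Heroku dyno, a hedged workaround is to run the command that the upgrade entry in your [scripts] section wraps, since the Heroku Python buildpack installs the Pipfile packages (including flask and flask-migrate) directly into the slug's environment:

$ heroku run -a=<your_app_name> flask db upgrade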

Serverless: Using a private Python package as a dependency

I have a Python Serverless project that uses a private Git repo (on GitHub).
My requirements.txt file looks like this:
itsdangerous==0.24
boto3>=1.7
git+ssh://git@github.com/company/repo.git#egg=my_alias
The project configuration mainly looks like this:
plugins:
  - serverless-python-requirements
  - serverless-wsgi

custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: true
    dockerSsh: true
When I deploy using this command:
sls deploy --aws-profile my_id --stage dev --region eu-west-1
I get this error:
Command "git clone -q ssh://git#github.com/company/repo.git /tmp/pip-install-a0_8bh5a/my_alias" failed with error code 128 in None
What am I doing wrong? I suspect either the way I configured my SSH key for GitHub access or the configuration of the serverless package.
The only way I managed to sort this issue out was:
Configure the SSH key WITH NO PASSPHRASE, following the steps here.
In serverless.yml, I added the following:
custom:
  wsgi:
    app: app.app
    packRequirements: false
  pythonRequirements:
    dockerizePip: true
    dockerSsh: true
    dockerSshSymlink: ~/.ssh
Notice I added dockerSshSymlink to point to the location of the SSH files on my local machine: ~/.ssh.
In requirements.txt, I added my private dependency like this:
git+ssh://git@github.com/my_comp/my_repo.git#egg=MyRepo
All works.
Although not recommended, have you tried using sudo sls deploy --aws-profile my_id --stage dev --region eu-west-1?
This error can also be caused by using the wrong password or SSH key.
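As a quick diagnostic sketch (not part of either answer's setup), you can check the key and the clone separately before involving Docker:

$ ssh -T git@github.com            # should greet you by username if the key authenticates
$ pip install -r requirements.txt  # reproduces the git clone locally, outside the Docker build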

App Engine Flexible - requirements.txt include GCP repository

I am trying to set up an application running in a Python 3 App Engine flexible environment. I have an app.yaml file:
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT application:app

runtime_config:
  python_version: 3
I have a requirements.txt listing some packages my app needs:
Flask==0.12
gunicorn==19.7.1
...
I also have a common-functions package that is located in a GCP Source Repository (git). I don't want to host it publicly on PyPI. Is it possible to still include it as a requirement? Something like:
git+https://source.developers.google.com/p/app/r/common
Using the above asks for a username and password when I try it on my local machine, even though I have a credential helper set up:
git config credential.helper gcloud.sh
You can add -i http://yourhost.com --trusted-host yourhost.com flags to your requirements.txt file.
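A sketch of what that looks like at the top of requirements.txt (yourhost.com stands in for wherever you host a private package index; pip accepts these global options as lines in a requirements file):

--index-url http://yourhost.com/simple
--trusted-host yourhost.com
Flask==0.12
gunicorn==19.7.1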
