I created a Lambda function using Serverless in private subnets of a non-default VPC. I want to restart the app server of an Elastic Beanstalk application at a scheduled time, using boto3; here is the reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html
The problem is that when I run the function locally, it runs and restarts the application server. But when I deploy with sls deploy, it does not work, and I get a null response back when I test it from the Lambda console.
Here is the code:
import json
from logging import log
from loguru import logger
import boto3
from datetime import datetime
import pytz


def main(event, context):
    try:
        client = boto3.client("elasticbeanstalk", region_name="us-west-1")
        applications = client.describe_environments()
        current_hour = datetime.now(pytz.timezone("US/Eastern")).hour
        for env in applications["Environments"]:
            applicationname = env["EnvironmentName"]
            if applicationname == "xxxxx-xxx":
                response = client.restart_app_server(
                    EnvironmentName=applicationname,
                )
                logger.info(response)
        print("restarted the application")
        return {"statusCode": 200, "body": json.dumps("restarted the instance")}
    except Exception as e:
        logger.exception(e)


if __name__ == "__main__":
    main("", "")
Here is the serverless.yml file:
service: beanstalk-starter
frameworkVersion: '2'

provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  profile: xxxx-admin
  region: us-west-1
  memorySize: 512
  timeout: 15
  vpc:
    securityGroupIds:
      - sg-xxxxxxxxxxx # open on all ports for inbound
    subnetIds:
      - subnet-xxxxxxxxxxxxxxxx # private
      - subnet-xxxxxxxxxxxxxxxx # private

plugins:
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: non-linux

functions:
  main:
    handler: handler.main
    events:
      - schedule: rate(1 minute)
Response from the Lambda console:
The area below shows the result returned by your function execution. Learn more about returning results from your function.
null
Any help would be appreciated! Let me know what I'm missing here!
To solve this, I had to attach these two permissions to my AWS Lambda role from the AWS Management Console. You can also set the permissions in the serverless.yml file.
AWSLambdaVPCAccessExecutionRole
AWSCodePipeline_FullAccess
(Make sure you grant a role only the least privileges it needs.)
Thank you.
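If you prefer to script this instead of clicking through the console, here is a minimal hedged sketch with boto3 (the role name is an assumption; use the execution role that Serverless generated for the function):

import boto3

iam = boto3.client("iam")

# Assumed role name; replace it with the Lambda execution role Serverless created.
role_name = "beanstalk-starter-dev-us-west-1-lambdaRole"

for policy_arn in (
    "arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
    "arn:aws:iam::aws:policy/AWSCodePipeline_FullAccess",
):
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)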
I have Kinesis streams set up locally with LocalStack, and a Lambda (in Python) set up locally with Serverless Offline. I cannot set up an event source mapping between them because of 404 and 500 errors.
Kinesis is set up with Docker-compose:
version: '3'
services:
  localstack:
    container_name: "localstack"
    image: localstack/localstack:latest
    environment:
      - DEFAULT_REGION=eu-central-1
      - SERVICES=kinesis
      - DOCKER_HOST=unix:///var/run/docker.sock
    ports:
      - "4566:4566"            # LocalStack Gateway
      - "4510-4559:4510-4559"  # external services port range
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
Streams are set up with boto3:
import boto3

if __name__ == '__main__':
    client = boto3.client(
        "kinesis",
        region_name="eu-central-1",
        endpoint_url="http://localhost:4566"
    )
    client.create_stream(StreamName="audience_events_local", ShardCount=1)
    client.create_stream(StreamName="audience_events_local_cache", ShardCount=1)
Lambda functions are set up with Serverless Offline: serverless offline --stage=local. Relevant part of serverless.yml:
serverless-offline:
  httpPort: 3000    # HTTP port to listen on
  lambdaPort: 3002  # Lambda HTTP port to listen on
I try to set up event sources with:
import boto3


def get_kinesis_stream_arns() -> list[str]:
    client = boto3.client(
        "kinesis",
        region_name="eu-central-1",
        endpoint_url="http://localhost:4566"
    )
    return [
        client.describe_stream(StreamName=stream_name)["StreamDescription"]["StreamARN"]
        for stream_name in ["audience_events_local", "audience_events_local_cache"]
    ]


def create_event_sources(stream_arns: list[str]) -> None:
    client = boto3.client(
        "lambda",
        region_name="eu-central-1",
        endpoint_url="http://localhost:4566"
    )
    for arn in stream_arns:
        # example:
        # arn:aws:kinesis:eu-central-1:000000000000:stream/audience_events_local
        # -> function_name = audience-events-local
        function_name = arn.split("/")[-1].replace("_", "-")
        client.create_event_source_mapping(
            EventSourceArn=arn,
            FunctionName=function_name,
            MaximumRetryAttempts=2
        )


if __name__ == '__main__':
    stream_arns = get_kinesis_stream_arns()
    print("Stream ARNs:", stream_arns)
    create_event_sources(stream_arns)
However, I get errors:
if I use endpoint_url="http://localhost:4566" in create_event_sources, I get botocore.exceptions.ClientError: An error occurred (500) when calling the CreateEventSourceMapping operation (reached max retries: 4):
if I use endpoint_url="http://localhost:3002", I get botocore.exceptions.ClientError: An error occurred (404) when calling the CreateEventSourceMapping operation: Not Found
How can I fix this?
I have gone through many different questions just like this one on here and still have not figured out what the problem is.
I have two APIs:
Classroom API
Invites API
Both APIs use FastAPI with Mangum for the handler. I am using the SAM CLI to build and run them locally via Docker. The Classrooms API works and runs just fine locally; there is no issue with the build, running locally, etc. I copied the working Classrooms API, rebuilt it as the Invites API, and tried running it in the exact same way. I am getting the issue below, and I am struggling to understand what the problem is because it is an exact replica of a working example.
user#my-device-name invites-api % sam local start-api
Mounting Function at http://127.0.0.1:3000$default [X-AMAZON-APIGATEWAY-ANY-METHOD]
You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. You only need to restart SAM CLI if you update your AWS SAM template
2022-04-18 19:56:23 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
Invoking app.handler (python3.9)
Skip pulling image and use local one: public.ecr.aws/sam/emulation-python3.9:rapid-1.46.0-x86_64.
Mounting /Users/user/Documents/Projects/prj/apis/invites-api/.aws-sam/build/Function as /var/task:ro,delegated inside runtime container
START RequestId: 77cc785c-1499-4575-8bbc-0d60d49a01ca Version: $LATEST
Traceback (most recent call last): Unable to import module 'app': No module named 'app'
END RequestId: 77cc785c-1499-4575-8bbc-0d60d49a01ca
REPORT RequestId: 77cc785c-1499-4575-8bbc-0d60d49a01ca Init Duration: 0.88 ms Duration: 521.27 ms Billed Duration: 522 ms Memory Size: 128 MB Max Memory Used: 128 MB
No Content-Type given. Defaulting to 'application/json'.
Classroom API structure
classroom-api/
  src/
    app/
      models, controllers, schemas, etc.
      __init__.py   <-- handler is defined here
    requirements.txt
  template.yaml
Classroom __init__.py
import time

from mangum import Mangum
from fastapi import FastAPI, Request
from fastapi.exceptions import RequestValidationError
from fastapi.encoders import jsonable_encoder
from fastapi.responses import JSONResponse

from app.controller.api import api_router

app = FastAPI(title="Classrooms API", root_path="/api/v1", openapi_url="/openapi.json")

... (app exceptions and middleware defined)

app.include_router(api_router)

handler = Mangum(app)
Classroom template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: AWS Sam Template

Globals:
  Function:
    Timeout: 30

Resources:
  Function:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: "Classrooms-API-V1"
      MemorySize: 128
      CodeUri: src
      Handler: app.handler
      Runtime: python3.9
      Events:
        Api:
          Type: HttpApi
          Properties:
            ApiId: !Ref Api
  Api:
    Type: AWS::Serverless::HttpApi

Outputs:
  ApiUrl:
    Description: URL of your API endpoint
    Value:
      Fn::Sub: 'https://${Api}.execute-api.${AWS::Region}.${AWS::URLSuffix}/'
Invites API structure
invites-api/
  src/
    app/
      models, controllers, schemas, etc.
      __init__.py   <-- handler is defined here
    requirements.txt
  template.yaml
Invites API __init__.py
import time

from mangum import Mangum
from fastapi import FastAPI, Request
from fastapi.exceptions import RequestValidationError
from fastapi.encoders import jsonable_encoder
from fastapi.responses import JSONResponse

from app.controller.api import api_router

app = FastAPI(title="Invites API", root_path="/api/v1", openapi_url="/openapi.json")

... (app exceptions and middleware defined)

app.include_router(api_router)

handler = Mangum(app)
Invites API template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: AWS Sam Template

Globals:
  Function:
    Timeout: 30

Resources:
  Function:
    Type: AWS::Serverless::Function
    Properties:
      FunctionName: "Invite-API-V1"
      MemorySize: 128
      CodeUri: src
      Handler: app.handler
      Runtime: python3.9
      Events:
        Api:
          Type: HttpApi
          Properties:
            ApiId: !Ref Api
  Api:
    Type: AWS::Serverless::HttpApi

Outputs:
  ApiUrl:
    Description: URL of your API endpoint
    Value:
      Fn::Sub: 'https://${Api}.execute-api.${AWS::Region}.${AWS::URLSuffix}/'
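Since both templates declare Handler: app.handler with CodeUri: src, the runtime has to be able to import an app package from the root of the built bundle. A hedged diagnostic (run from invites-api/src; paths are assumed from the structure above) is to try that import directly:

# Run from invites-api/src; if this also raises "No module named 'app'",
# the problem is in the package layout rather than in SAM itself.
from app import handler  # assumes __init__.py defines `handler = Mangum(app)`

print(handler)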
I am creating a k8s deployment, service, and ingress using the k8s Python API. The deployment uses the minimal-notebook container to create a Jupyter notebook instance.
After creating the deployment, how can I read the token for my minimal-notebook pod using the k8s Python API?
You would need to get the pod logs, and extract the token.
Given that the pod is already running
k get pods
NAME       READY   STATUS    RESTARTS   AGE
mininote   1/1     Running   0          17m

k get pod mininote -o json | jq '.spec.containers[].image'
"jupyter/minimal-notebook"
you could do this:
[my pod's name is mininote and it is running in the default namespace]
import re

from kubernetes.client.rest import ApiException
from kubernetes import client, config

config.load_kube_config()

pod_name = "mininote"
namespace_name = "default"

try:
    api = client.CoreV1Api()
    response = api.read_namespaced_pod_log(name=pod_name, namespace=namespace_name)
    match = re.search(r'token=([0-9a-z]*)', response)
    print(match.group(1))
except ApiException as e:
    print('Found exception in reading the logs:')
    print(e)
running:
> python main.py
174c891b5db325b2aec283df90525c68ab02b02e3a565da5
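Since the question creates the pod through a Deployment, the pod name is generated rather than fixed. A small hedged addition is to look the pod up by label first (the label selector below is an assumption; use whatever labels your Deployment's pod template sets):

from kubernetes import client, config

config.load_kube_config()
api = client.CoreV1Api()

# Assumed label; match it to the labels in your Deployment's pod template.
pods = api.list_namespaced_pod(namespace="default", label_selector="app=minimal-notebook")
pod_name = pods.items[0].metadata.name
print(pod_name)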
How do I push my app (using python-flask + redis) to gcr.io and deploy it to Google Kubernetes Engine (using a YAML file)?
I also want to set an environment variable for my app:
import os

import redis
from flask import Flask
from flask import request, redirect, render_template, url_for
from flask import Response

app = Flask(__name__)

redis_host = os.environ['REDIS_HOST']
app.redis = redis.StrictRedis(host=redis_host, port=6379, charset="utf-8", decode_responses=True)

# Be super aggressive about saving for the development environment.
# This says save every second if there is at least 1 change. If you use
# redis in production you'll want to read up on the redis persistence
# model.
app.redis.config_set('save', '1 1')


@app.route('/', methods=['GET', 'POST'])
def main_page():
    if request.method == 'POST':
        app.redis.lpush('entries', request.form['entry'])
        return redirect(url_for('main_page'))
    else:
        entries = app.redis.lrange('entries', 0, -1)
        return render_template('main.html', entries=entries)


# Route that handles the POST and redirects back to the main page
@app.route('/clear', methods=['POST'])
def clear_entries():
    app.redis.ltrim('entries', 1, 0)
    return redirect(url_for('main_page'))


# used when running with Docker on localhost
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=5000)
Posting this answer as a community wiki to set out more of a baseline approach to the question rather than to give a specific solution addressing the code included in the question.
Feel free to edit/expand.
This topic could be quite wide, considering that it could be addressed in many different ways (as described in the question, by using Cloud Build, etc.).
Addressing this question specifically on two parts:
Building the image and sending it to GCR.
Using the newly built image in GKE.
Building the image and sending it to GCR.
Assuming that your code and your whole Docker image run correctly, you can build and tag the image in the following manner and then send it to GCR:
gcloud auth configure-docker
adds the Docker credHelper entry to Docker's configuration file, or creates the file if it doesn't exist. This will register gcloud as the credential helper for all Google-supported Docker registries.
docker tag YOUR_IMAGE gcr.io/PROJECT_ID/IMAGE_NAME
docker push gcr.io/PROJECT_ID/IMAGE_NAME
After that you can go to the:
GCP Cloud Console (Web UI) -> Container Registry
and see the image you've uploaded.
Using newly built image in GKE
To run the image mentioned earlier you can either:
Create the Deployment in the Cloud Console (Kubernetes Engine -> Workloads -> Deploy)
A side note!
You can also add the environment variables of your choosing there (as asked in the question).
Or create it with a YAML manifest similar to the one below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: amazing-app
  labels:
    app: amazing-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: amazing-app
  template:
    metadata:
      labels:
        app: amazing-app
    spec:
      containers:
      - name: amazing-app
        image: gcr.io/PROJECT-ID/IMAGE-NAME # <-- IMPORTANT!
        env:
        - name: DEMO_GREETING
          value: "Hello from the environment"
Please take a specific look at the following part:
env:
- name: DEMO_GREETING
  value: "Hello from the environment"
This part will create an environment variable inside of each container:
$ kubectl exec -it amazing-app-6db8d7478b-4gtxk -- /bin/bash -c 'echo $DEMO_GREETING'
Hello from the environment
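For the app in the question, the same env block would set REDIS_HOST. A minimal hedged sketch of reading it defensively inside the Flask app (the error message is only illustrative):

import os

# Read the variable provided via the Deployment's env section and fail with a
# clear message if it is missing, instead of a bare KeyError at import time.
redis_host = os.environ.get("REDIS_HOST")
if redis_host is None:
    raise RuntimeError("REDIS_HOST is not set; define it under the container's env in the Deployment manifest")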
Additional resources:
Cloud.google.com: Container registry: Docs: Pushing and pulling
Cloud.google.com: Build: Docs: Deploying builds: Deploy GKE
I have a rather simple test app:
import redis
import os
import logging

log = logging.getLogger()
log.setLevel(logging.DEBUG)


def test_redis(event, context):
    redis_endpoint = None
    if "REDIS" in os.environ:
        redis_endpoint = os.environ["REDIS"]
        log.debug("redis: " + redis_endpoint)
    else:
        log.debug("cannot read REDIS config environment variable")
        return {
            'statusCode': 500
        }

    redis_conn = None
    try:
        redis_conn = redis.StrictRedis(host=redis_endpoint, port=6379, db=0)
        redis_conn.set("foo", "boo")
        redis_conn.get("foo")
    except:
        log.debug("failed to connect to redis")
        return {
            'statusCode': 500
        }
    finally:
        del redis_conn

    return {
        'statusCode': 200
    }
which I have deployed as an HTTP endpoint with Serverless:
#
# For full config options, check the docs:
#    docs.serverless.com
#
service: XXX

plugins:
  - serverless-aws-documentation
  - serverless-python-requirements

custom:
  pythonRequirements:
    dockerizePip: true

provider:
  name: aws
  stage: dev
  region: eu-central-1
  runtime: python3.6
  environment:
    # our cache
    REDIS: xx-xx-redis-001.xxx.euc1.cache.amazonaws.com

functions:
  hello:
    handler: hello/hello_world.say_hello
    events:
      - http:
          path: hello
          method: get
          # private: true # <-- Requires clients to add API keys values in the `x-api-key` header of their request
          # authorizer: # <-- An AWS API Gateway custom authorizer function
  testRedis:
    handler: test_redis/test_redis.test_redis
    events:
      - http:
          path: test-redis
          method: get
When I trigger the endpoint via API Gateway, the Lambda just times out after about 7 seconds.
The environment variable is read properly, and no error message is displayed.
I suppose there is a problem connecting to Redis, but the tutorials are quite explicit, so I am not sure what the problem could be.
The fix might require setting up a NAT, and I am not sure how to accomplish that with Serverless.
I ran into this issue as well. For me, there were a few problems that had to be ironed out:
The Lambda needs VPC permissions.
The ElastiCache security group needs an inbound rule from the Lambda security group that allows communication on the Redis port. I thought they could just be in the same security group.
And the real kicker: I had turned on encryption in transit. This meant that I needed to pass ssl=True to the client (for example redis.StrictRedis(..., ssl=True)). The redis-py docs mention that ssl_cert_reqs needs to be set to None for use with ElastiCache, but that didn't seem to be true in my case; I did, however, need to pass ssl=True.
It makes sense that ssl=True needed to be set, but the connection was just timing out, so I went round and round trying to figure out what the problem with the permissions/VPC/SG setup was.
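For reference, a hedged sketch of the connection that ended up working in this scenario with in-transit encryption enabled (host and port are placeholders taken from the question):

import redis

# ssl=True is required once in-transit encryption is enabled on the cluster;
# ssl_cert_reqs=None is sometimes suggested for ElastiCache but was not needed here.
redis_conn = redis.StrictRedis(
    host="xx-xx-redis-001.xxx.euc1.cache.amazonaws.com",  # placeholder endpoint
    port=6379,
    db=0,
    ssl=True,
)
redis_conn.set("foo", "boo")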
Try having the Lambda in the same VPC and security group as your ElastiCache cluster.