Serverless framework can't connect to existing REST API in AWS - python

Hello, and thanks in advance for the help.
I'm trying to connect my serverless file to an existing REST API in AWS, but when I deploy it fails with the message:
CREATE_FAILED: ApiGatewayResourceOtherversion (AWS::ApiGateway::Resource)
Resource handler returned message: "Invalid Resource identifier specified
Here is the configuration in my serverless file and the API in the cloud
service: test-api-2
frameworkVersion: '3'
provider:
  name: aws
  region: us-east-1
  runtime: python3.8
  apiGateway:
    restApiId: 7o3h7b2zy5
    restApiRootResourceId: "/second"
functions:
  hello_oscar:
    handler: test-api/handler.hello_oscar
    events:
      # every Monday at 03:15 AM
      - schedule: cron(15 3 ? * MON *)
      #- sqs: arn:aws:sqs:region:XXXXXX:MyFirstQueue
    package:
      include:
        - test-api/**
  get:
    handler: hexa/application/get/get.get_information
    memorySize: 128
    description: Test function
    events:
      - http:
          path: /hola
          method: GET
          cors: true
    package:
      include:
        - hexa/**
  other_version:
    handler: other_version/use_other.another_version
    layers:
      - xxxxxxxxx
    runtime: python3.7
    description: Uses other version of python3.7
    events:
      - http:
          path: /other_version
          method: POST
          cors: true
    package:
      include:
        - other_version/**
  diferente:
    handler: other_version/use_other.another_version
    layers:
      - xxxxxxxxxxxxxx
    runtime: python3.8

In the example serverless.yml, where you have the restApiRootResourceId property set to /second, you should instead set it to the root resource ID, which is shown in your screenshot as bt6nd8xw4l.
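For reference, a minimal sketch of how the provider block could look with that change (the IDs come from the question and the screenshot; the restApiResources mapping is an assumption about how the existing /second resource would be attached):

provider:
  name: aws
  region: us-east-1
  runtime: python3.8
  apiGateway:
    restApiId: 7o3h7b2zy5
    # ID of the API's root ("/") resource, not a path
    restApiRootResourceId: bt6nd8xw4l
    # Assumption: map existing child resources by path to their resource IDs
    # so new methods attach under them instead of trying to re-create them
    restApiResources:
      /second: xxxxxxxxxx  # hypothetical ID of the existing /second resource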

Related

Why do I see only 100 maximum total concurrent requests when the limit is 1000?

[Graphic: Invocations]
By default, Amazon says there can be 1000 concurrent instances on the free plan. But when I ran a high-load stress test with multiple requests at the same time, only 100 instances were allocated. Why does this happen? I expected all 1000 instances to be allocated on the free plan.
Here is the config file:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Globals:
  Function:
    Timeout: 50
    MemorySize: 3000
    Tracing: Active
  Api:
    TracingEnabled: true
Resources:
  InferenceFunction:
    Type: AWS::Serverless::Function # More info about Function Resource: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#awsserverlessfunction
    Properties:
      PackageType: Image
      Architectures:
        - x86_64
      Events:
        Inference:
          Type: Api # More info about API Event Source: https://github.com/awslabs/serverless-application-model/blob/master/versions/2016-10-31.md#api
          Properties:
            Path: /predict
            Method: post
    Metadata:
      Dockerfile: Dockerfile
      DockerContext: ./app
      DockerTag: python3.9-v1
  ApplicationResourceGroup:
    Type: AWS::ResourceGroups::Group
    Properties:
      Name:
        Fn::Join:
          - ''
          - - ApplicationInsights-SAM-
            - Ref: AWS::StackName
      ResourceQuery:
        Type: CLOUDFORMATION_STACK_1_0
  ApplicationInsightsMonitoring:
    Type: AWS::ApplicationInsights::Application
    Properties:
      ResourceGroupName:
        Fn::Join:
          - ''
          - - ApplicationInsights-SAM-
            - Ref: AWS::StackName
      AutoConfigurationEnabled: 'true'
    DependsOn: ApplicationResourceGroup
Outputs:
  # ServerlessRestApi is an implicit API created out of Events key under Serverless::Function
  # Find out more about other implicit resources you can reference within SAM
  # https://github.com/awslabs/serverless-application-model/blob/master/docs/internals/generated_resources.rst#api
  InferenceApi:
    Description: API Gateway endpoint URL for Prod stage for Inference function
    Value: !Sub "https://${ServerlessRestApi}.execute-api.${AWS::Region}.amazonaws.com/Prod/predict/"
  InferenceFunction:
    Description: Inference Lambda Function ARN
    Value: !GetAtt InferenceFunction.Arn
  InferenceFunctionIamRole:
    Description: Implicit IAM Role created for Inference function
    Value: !GetAtt InferenceFunctionRole.Arn
Here is the Python AWS Lambda handler:
import json
from analyzer.model import Model

model_file = '/opt/ml/model'
model = Model(model_file, device="cpu")

def lambda_handler(event, context):
    sample = event['body']
    result = model(sample)
    return {
        'statusCode': 200,
        'body': json.dumps(
            {
                "predicted_label": result.value
            }
        )
    }
Here is the Dockerfile:
FROM public.ecr.aws/lambda/python:3.9
COPY . .
RUN python3.9 -m pip install -r requirements.txt -t .
CMD ["app.lambda_handler"]
I am guessing that this is a relatively new AWS account with minimal usage and billing history. You are therefore experiencing reduced concurrency, per the documentation:
New AWS accounts have reduced concurrency and memory quotas. AWS raises these quotas automatically based on your usage. You can also request a quota increase.
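As a quick sanity check, a small boto3 sketch (assuming credentials for the account in question) can read the account-level quota that the stress test is hitting:

import boto3

# Minimal sketch: print the account-level Lambda concurrency quotas.
client = boto3.client("lambda")
settings = client.get_account_settings()
limits = settings["AccountLimit"]
print("Total concurrent executions:", limits["ConcurrentExecutions"])
print("Unreserved concurrent executions:", limits["UnreservedConcurrentExecutions"])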

AWS SAM - Key Error via EventBridge but not over S3 PUT - WHY?

I created a SAM app with the following template and wanted to test the EventBridge functionality, so that every time I upload an object to the EventBucket, the function is triggered. But I keep getting the error:
[ERROR] KeyError: 'Records'
When I set up a normal S3 trigger instead, the events are recognized. I know that is probably the right way to trigger the function, but as I said, I want to test EventBridge. Can somebody tell me what I did wrong? Why is my event not recognized when I trigger it via EventBridge, but it is when triggered "directly" by a PUT to the S3 bucket?
Thanks for your help!
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ## S3 bucket
  EventBucket:
    Type: AWS::S3::Bucket
    Properties:
      NotificationConfiguration:
        EventBridgeConfiguration:
          EventBridgeEnabled: True
  ## S3 buckets to receive 3 different
  DestinationBucket:
    Type: AWS::S3::Bucket
  SourceBucket:
    Type: AWS::S3::Bucket
  # Enforce HTTPS only access to S3 bucket #
  BucketForJSONPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      Bucket: !Ref EventBucket
      PolicyDocument:
        Statement:
          - Action: s3:*
            Effect: Deny
            Principal: "*"
            Resource:
              - !Sub "arn:aws:s3:::${EventBucket}/*"
              - !Sub "arn:aws:s3:::${EventBucket}"
            Condition:
              Bool:
                aws:SecureTransport: false
  ## Lambda function
  MyFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: src/
      Handler: app.lambda_handler
      Description: Transform Twitter data
      Runtime: python3.9
      MemorySize: 128
      Timeout: 3
      Policies:
        - S3CrudPolicy:
            BucketName: !Ref EventBucket
        - S3ReadPolicy:
            BucketName: !Ref SourceBucket
        - S3CrudPolicy:
            BucketName: !Ref DestinationBucket
      Environment:
        Variables:
          Bucket1: !Ref SourceBucket
          Bucket2: !Ref DestinationBucket
          Bucket3: !Ref EventBucket
      Layers:
        - !Ref TwitterLayer
      Events:
        Trigger:
          Type: EventBridgeRule
          Properties:
            Pattern:
              source:
                - "aws.s3"

Restart ElasticBeanstalk app server on schedule

I created a Lambda function using Serverless in private subnets of a non-default VPC. I want to restart the app server of an Elastic Beanstalk application at a scheduled time. I used boto3; here is the reference: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/elasticbeanstalk.html
The problem is that when I run the function locally, it runs and restarts the application server. But when I deploy using sls deploy, it does not work and I get a null response back when I test it from the Lambda console.
Here is the code:
import json
from logging import log
from loguru import logger
import boto3
from datetime import datetime
import pytz

def main(event, context):
    try:
        client = boto3.client("elasticbeanstalk", region_name="us-west-1")
        applications = client.describe_environments()
        current_hour = datetime.now(pytz.timezone("US/Eastern")).hour
        for env in applications["Environments"]:
            applicationname = env["EnvironmentName"]
            if applicationname == "xxxxx-xxx":
                response = client.restart_app_server(
                    EnvironmentName=applicationname,
                )
                logger.info(response)
                print("restarted the application")
        return {"statusCode": 200, "body": json.dumps("restarted the instance")}
    except Exception as e:
        logger.exception(e)

if __name__ == "__main__":
    main("", "")
Here is the serverless.yml file:
service: beanstalk-starter
frameworkVersion: '2'
provider:
  name: aws
  runtime: python3.8
  lambdaHashingVersion: 20201221
  profile: xxxx-admin
  region: us-west-1
  memorySize: 512
  timeout: 15
  vpc:
    securityGroupIds:
      - sg-xxxxxxxxxxx # open on all ports for inbound
    subnetIds:
      - subnet-xxxxxxxxxxxxxxxx # private
      - subnet-xxxxxxxxxxxxxxxx # private
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux
functions:
  main:
    handler: handler.main
    events:
      - schedule: rate(1 minute)
Response from lambda console:
null
Any help would be appreciated! Let me know what I'm missing here!
To solve this, I had to attach these two permissions to my AWS Lambda role from the AWS Management Console. You can also set the permissions in the serverless.yml file.
AWSLambdaVPCAccessExecutionRole
AWSCodePipeline_FullAccess
(Make sure you use the least privileges when granting permissions to a role.)
Thank you.
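For reference, a rough sketch of how those permissions could be declared in serverless.yml instead (framework v2 syntax; the managed-policy ARNs are the standard AWS ones, and the Elastic Beanstalk statement is an assumption about the minimal actions this particular handler calls):

provider:
  name: aws
  runtime: python3.8
  # Attach the same managed policies to the generated Lambda role
  iamManagedPolicies:
    - arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole
    - arn:aws:iam::aws:policy/AWSCodePipeline_FullAccess
  # Or, for least privilege, grant only what the handler actually uses (assumption)
  iamRoleStatements:
    - Effect: Allow
      Action:
        - elasticbeanstalk:DescribeEnvironments
        - elasticbeanstalk:RestartAppServer
      Resource: "*"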

Unable to load Python dependencies with Serverless Framework for AWS Lambda: Error: STDOUT

This should be fairly straightforward (I think). I've been using Serverless Framework for the past several months without much issue and have been able to load packages such as pandas and numpy, but I recently tried loading email and sklearn and received the below message when I try to deploy the stack.
Error --------------------------------------------------
Error: STDOUT:
STDERR: Python was not found but can be installed from the Microsoft Store: https://go.microsoft.com/fwlink?linkID=2082640
at C:\Users\schuy\node_modules\serverless-python-requirements\lib\pip.js:325:13
at Array.forEach (<anonymous>)
at installRequirements (C:\Users\schuy\node_modules\serverless-python-requirements\lib\pip.js:312:28)
at installRequirementsIfNeeded (C:\Users\schuy\node_modules\serverless-python-requirements\lib\pip.js:556:3)
at ServerlessPythonRequirements.installAllRequirements (C:\Users\schuy\node_modules\serverless-python-requirements\lib\pip.js:635:29)
at ServerlessPythonRequirements.tryCatcher (C:\Users\schuy\node_modules\bluebird\js\release\util.js:16:23)
at Promise._settlePromiseFromHandler (C:\Users\schuy\node_modules\bluebird\js\release\promise.js:547:31)
at Promise._settlePromise (C:\Users\schuy\node_modules\bluebird\js\release\promise.js:604:18)
at Promise._settlePromise0 (C:\Users\schuy\node_modules\bluebird\js\release\promise.js:649:10)
at Promise._settlePromises (C:\Users\schuy\node_modules\bluebird\js\release\promise.js:729:18)
at _drainQueueStep (C:\Users\schuy\node_modules\bluebird\js\release\async.js:93:12)
at _drainQueue (C:\Users\schuy\node_modules\bluebird\js\release\async.js:86:9)
at Async._drainQueues (C:\Users\schuy\node_modules\bluebird\js\release\async.js:102:5)
at Immediate.Async.drainQueues [as _onImmediate] (C:\Users\schuy\node_modules\bluebird\js\release\async.js:15:14)
at processImmediate (internal/timers.js:456:21)
at process.topLevelDomainCallback (domain.js:137:15)
I retried deploying the stack with just numpy, pandas, and datetime and had no issues or errors, but adding email or sklearn produces this error message.
Any idea how to resolve this so I can load those packages into my Lambda function with Serverless Framework?
Edit
As requested, the YAML file. It has worked with other dependencies/packages I've used before.
service: new-process-5
# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"
resources:
  Resources:
    aaaaincomingcsv:
      Type: 'AWS::S3::Bucket'
      Properties: {}
    aaaaprocessedsalestotal:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: aaaa-processed-salestotalv5
    aaaaprocessedwinloss:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: aaaa-processed-winlossgroupedv5
    aaaaemployeesstargetotal:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: aaaa-employees-stagetotalv5
    aaaaemployeesalespivot:
      Type: 'AWS::S3::Bucket'
      Properties:
        BucketName: aaaa-employees-salespivotv5
provider:
  name: aws
  runtime: python3.8
  region: us-east-1
  profile: serverless-admin
  timeout: 500
  memorySize: 128
  iamRoleStatements:
    - Effect: "Allow"
      Action:
        - "s3:*"
      Resource: "*"
functions:
  csv-processor:
    handler: handler.featureengineering
    events:
      - s3:
          bucket: aaaaincomingcsvv5
          event: s3:ObjectCreated:*
          rules:
            - suffix: .csv
custom:
  pythonRequirements:
    dockerizePip: true
plugins:
  - serverless-python-requirements
Additional edit
I corrected the missing indentation under custom and pythonRequirements, but I am still receiving the error message.
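Since the error comes from the plugin shelling out to a local Python on Windows, here is a hedged sketch of the relevant block, assuming the usual serverless-python-requirements options (pythonBin is only needed if a suitable interpreter is not on PATH, and the path shown is hypothetical):

custom:
  pythonRequirements:
    dockerizePip: true  # build the requirements inside Docker rather than with the local pip
    # pythonBin: C:/Python38/python.exe  # hypothetical path; points the plugin at an explicit interpreter
plugins:
  - serverless-python-requirements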

Running Apache Beam python pipelines in Kubernetes

This question might seem like a duplicate of this.
I am trying to run an Apache Beam Python pipeline using Flink on an offline Kubernetes instance. However, since I have user code with external dependencies, I am using the Python SDK harness as an external service, which is causing the errors described below.
The Kubernetes manifest I use to launch the Beam Python SDK:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: beam-sdk
spec:
  replicas: 1
  selector:
    matchLabels:
      app: beam
      component: python-beam-sdk
  template:
    metadata:
      labels:
        app: beam
        component: python-beam-sdk
    spec:
      hostNetwork: True
      containers:
        - name: python-beam-sdk
          image: apachebeam/python3.7_sdk:latest
          imagePullPolicy: "Never"
          command: ["/opt/apache/beam/boot", "--worker_pool"]
          ports:
            - containerPort: 50000
              name: yay
---
apiVersion: v1
kind: Service
metadata:
  name: beam-python-service
spec:
  type: NodePort
  ports:
    - name: yay
      port: 50000
      targetPort: 50000
  selector:
    app: beam
    component: python-beam-sdk
When I launch my pipeline with the following options:
beam_options = PipelineOptions([
    "--runner=FlinkRunner",
    "--flink_version=1.9",
    "--flink_master=10.101.28.28:8081",
    "--environment_type=EXTERNAL",
    "--environment_config=10.97.176.105:50000",
    "--setup_file=./setup.py"
])
I get the following error message (within the Python SDK service):
NAME READY STATUS RESTARTS AGE
beam-sdk-666779599c-w65g5 1/1 Running 1 4d20h
flink-jobmanager-74d444cccf-m4g8k 1/1 Running 1 4d20h
flink-taskmanager-5487cc9bc9-fsbts 1/1 Running 2 4d20h
flink-taskmanager-5487cc9bc9-zmnv7 1/1 Running 2 4d20h
(base) [~]$ sudo kubectl logs -f beam-sdk-666779599c-w65g5
2020/02/26 07:56:44 Starting worker pool 1: python -m apache_beam.runners.worker.worker_pool_main --service_port=50000 --container_executable=/opt/apache/beam/boot
Starting worker with command ['/opt/apache/beam/boot', '--id=1-1', '--logging_endpoint=localhost:39283', '--artifact_endpoint=localhost:41533', '--provision_endpoint=localhost:42233', '--control_endpoint=localhost:44977']
2020/02/26 09:09:07 Initializing python harness: /opt/apache/beam/boot --id=1-1 --logging_endpoint=localhost:39283 --artifact_endpoint=localhost:41533 --provision_endpoint=localhost:42233 --control_endpoint=localhost:44977
2020/02/26 09:11:07 Failed to obtain provisioning information: failed to dial server at localhost:42233
caused by:
context deadline exceeded
I have no idea what the logging or artifact endpoint (etc.) is. By inspecting the source code, it seems that the endpoints have been hard-coded to be located at localhost.
(You said in a comment that the answer to the referenced post is valid, so I'll just address the specific error you ran into in case someone else hits it.)
Your understanding is correct; the logging, artifact, etc. endpoints are essentially hardcoded to use localhost. These endpoints are meant to be only used internally by Beam and are not configurable. So the Beam worker is implicitly assumed to be on the same host as the Flink task manager. Typically, this is accomplished by making the Beam worker pool a sidecar of the Flink task manager pod, rather than a separate service.
