Define resource_defs in dagster job sensor - python

I am trying to build a sensor for the execution of a pipeline/graph. The sensor checks at different intervals and executes a job containing different ops. The job requires some resource_defs and config. In the official documentation I don't see how I can define resource_defs for the job triggered by a sensor. A small hint would be great.
Question: where or how do I define resource_defs in a sensor? Do I even have to define them? It isn't mentioned in the official documentation:
https://docs.dagster.io/concepts/partitions-schedules-sensors/sensors
### defining Job
@job(
    resource_defs={"some_API_Module": API_module, "db_Module": db},
    config={key: value}
)
def job_pipeline():
    op_1()  ## API is used as a required resource
    op_2()  ## db is used as a required resource
### defining sensor that triggers the Job
@sensor(job=job_pipeline)
def job_pipeline_sensor(context):
    ### some calculation
    yield RunRequest(run_key="", run_config={key: value})

It seems like you may be missing the step of building your run config from within the sensor. The configuration you pass to a RunRequest should contain the resource configuration you want to run the job with, and it will look exactly like the run configuration you'd build in the Launchpad. Something like:
### defining sensor that triggers the Job
@sensor(job=job_pipeline)
def job_pipeline_sensor(context):
    ### some calculation
    run_config = {
        "ops": {
            "op_1": {
                "config": {
                    "key": "value"
                },
            },
            "op_2": {
                "config": {
                    "key": "value",
                },
            },
        },
        "resources": {
            "some_API_Module": {
                "config": {"key": "value"}
            },
            "db_Module": {
                "config": {"key": "value"}
            },
        },
    }
    yield RunRequest(run_key="<unique_value>", run_config=run_config)
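For completeness, here is a minimal sketch of how the job-side definitions line up with the "resources" section of that run config. It is hedged: the job is reduced to a single db resource, the names and config fields are placeholders, and it follows the op/job/resource/sensor decorators from the linked docs rather than your exact modules.

from dagster import RunRequest, job, op, resource, sensor

@resource(config_schema={"connection_string": str})
def db_resource(init_context):
    # Placeholder resource: a real one would return a client built from this config.
    return init_context.resource_config["connection_string"]

@op(required_resource_keys={"db"})
def op_2(context):
    context.log.info(f"using db: {context.resources.db}")

@op
def op_1():
    return 1

# resource_defs live on the job, not on the sensor.
@job(resource_defs={"db": db_resource})
def job_pipeline():
    op_1()
    op_2()

@sensor(job=job_pipeline)
def job_pipeline_sensor(context):
    # The sensor only supplies run config; the "db" key here must match resource_defs.
    yield RunRequest(
        run_key=None,
        run_config={"resources": {"db": {"config": {"connection_string": "postgres://..."}}}},
    )

The sensor itself never defines resource_defs; it only provides the config that those resources read at run time.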

Related

PynamoDB dynamic names for MapAttribute

I'm currently a bit stuck. I have to create a model of our DynamoDB table using PynamoDB, and here's what I have (part of a bigger model, exported from a real user in the DB):
packs = [
{
"exos": {
"4": {
"start": 1529411739,
"session": 1529411739,
"finish": 1529417144
},
"5": {
"start": 1529417192,
"session": 1529417192
}
},
"id": 10
},
{
"exos": {
"32": {
"start": 1529411706,
"session": 1529411706
}
},
"id": 17
}
]
Here's what I got so far:
class ExercisesMapAttribute(MapAttribute):
    pass

class PackMapAttribute(MapAttribute):
    exos = ExercisesMapAttribute()
    id = NumberAttribute()

class TrainUser(Model):
    class Meta:
        table_name = 'TrainUser'
        region = 'eu-west-1'
    # [...] Other stuff removed for clarity
    packs = ListAttribute(of=PackMapAttribute())
I think I have it right up until the "4":, "5": etc., as those are exercise names.
The idea is that this structure represents a list of packs, and inside each pack a list of exercises with their id and more info. I need a way to map the current DB JSON to classes I can then use with PynamoDB. Is it even possible?
Thanks!
I've tried DynamicMapAttribute but I can't figure out if it will work with my configuration.
I also don't know how a custom attribute could be written to overcome this.
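As a hedged sketch of one possible direction (untested against this table): an untyped MapAttribute accepts arbitrary keys, so exos could hold the dynamic "4", "5", "32", ... entries as a plain dict while the rest of the structure stays typed:

from pynamodb.attributes import ListAttribute, MapAttribute, NumberAttribute
from pynamodb.models import Model

class PackMapAttribute(MapAttribute):
    # Untyped MapAttribute: stores whatever keys the item has ("4", "5", "32", ...),
    # each mapping to its start/session/finish timestamps as a plain dict.
    exos = MapAttribute()
    id = NumberAttribute()

class TrainUser(Model):
    class Meta:
        table_name = 'TrainUser'
        region = 'eu-west-1'
    packs = ListAttribute(of=PackMapAttribute())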

AWS WAF CDK Python How to change rule action

Here is my Python CDK code, which creates two rules, "AWS-AWSManagedRulesCommonRuleSet" and "AWS-AWSManagedRulesAmazonIpReputationList".
Each rule has child rules whose Rule Action I can change to Count. The question is how can I add this to my code? I didn't find any good explanation for those child rules.
I added some changes but it still doesn't work; I get this error:
Resource handler returned message: "Error reason: You have used none or multiple values for a field that requires exactly one value., field: RULE, parameter: Rule (Service: Wafv2, Status Code: 400, Request ID: 248d9235-bd01-49f4-963b-109bac2776c5, Extended Request ID: null)" (RequestToken: 8bb5****-****-3e95-****-
8e336ae3eed4, HandlerErrorCode: InvalidRequest)
the code:
class PyCdkStack(core.Stack):
    def __init__(self, scope: core.Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        web_acl = wafv2.CfnWebACL(
            scope_=self, id='WebAcl',
            default_action=wafv2.CfnWebACL.DefaultActionProperty(allow={}),
            scope='REGIONAL',
            visibility_config=wafv2.CfnWebACL.VisibilityConfigProperty(
                cloud_watch_metrics_enabled=True,
                sampled_requests_enabled=True,
                metric_name='testwafmetric',
            ),
            name='Test-Test-WebACL',
            rules=[
                {
                    'name': 'AWS-AWSManagedRulesCommonRuleSet',
                    'priority': 1,
                    'statement': {
                        'RuleGroupReferenceStatement': {
                            'vendorName': 'AWS',
                            'name': 'AWSManagedRulesCommonRuleSet',
                            'ARN': 'string',
                            "ExcludedRules": [
                                {"Name": "CrossSiteScripting_QUERYARGUMENTS"},
                                {"Name": "GenericLFI_QUERYARGUMENTS"},
                                {"Name": "GenericRFI_QUERYARGUMENTS"},
                                {"Name": "NoUserAgent_HEADER"},
                                {"Name": "SizeRestrictions_QUERYSTRING"}
                            ]
                        }
                    },
                    'overrideAction': {
                        'none': {}
                    },
                    'visibilityConfig': {
                        'sampledRequestsEnabled': True,
                        'cloudWatchMetricsEnabled': True,
                        'metricName': "AWS-AWSManagedRulesCommonRuleSet"
                    }
                },
            ]
        )
The Cfn* constructs are a one-to-one mapping to the CloudFormation resources, so you can simply check the docs for AWS::WAFv2::WebACL.
For an example of how to exclude rules in CloudFormation, see below. Note that object keys need to start with a lowercase letter in order for CDK to process them.
{
    "name": "AWS-AWSBotControl-Example",
    "priority": 5,
    "statement": {
        "managedRuleGroupStatement": {
            "vendorName": "AWS",
            "name": "AWSManagedRulesBotControlRuleSet",
            "excludedRules": [
                {
                    "name": "CategoryVerifiedSearchEngine"
                },
                {
                    "name": "CategoryVerifiedSocialMedia"
                }
            ]
        }
    },
    "visibilityConfig": {
        "sampledRequestsEnabled": true,
        "cloudWatchMetricsEnabled": true,
        "metricName": "AWS-AWSBotControl-Example"
    }
}
This actually sets the two mentioned rules to Count mode. See https://docs.aws.amazon.com/waf/latest/developerguide/web-acl-rule-group-settings.html#web-acl-rule-group-rule-to-count. Note that it says:
Rules that you alter like this are described as being excluded rules in the rule group. If you have metrics enabled, you receive COUNT metrics for each excluded rule. This change alters how the rules in the rule group are evaluated.
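Translated into the Python dict form that the question already uses for CfnWebACL's rules list, the same example would look roughly like the sketch below. This is untested and only illustrates the shape: keys start lowercase, managedRuleGroupStatement replaces the RuleGroupReferenceStatement/ARN combination, and a rule that references a rule group takes overrideAction rather than action.

# Sketch of one entry for the rules=[...] list of wafv2.CfnWebACL
bot_control_rule = {
    'name': 'AWS-AWSBotControl-Example',
    'priority': 5,
    'statement': {
        'managedRuleGroupStatement': {
            'vendorName': 'AWS',
            'name': 'AWSManagedRulesBotControlRuleSet',
            # These child rules are evaluated in Count mode instead of their default action.
            'excludedRules': [
                {'name': 'CategoryVerifiedSearchEngine'},
                {'name': 'CategoryVerifiedSocialMedia'},
            ],
        },
    },
    'overrideAction': {'none': {}},
    'visibilityConfig': {
        'sampledRequestsEnabled': True,
        'cloudWatchMetricsEnabled': True,
        'metricName': 'AWS-AWSBotControl-Example',
    },
}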

How to push the data from dynamodb through stream

Below is the json file
[
{
"year": 2013,
"title": "Rush",
"actors": [
"Daniel Bruhl",
"Chris Hemsworth",
"Olivia Wilde"
]
},
{
"year": 2013,
"title": "Prisoners",
"actors": [
"Hugh Jackman",
"Jake Gyllenhaal",
"Viola Davis"
]
}
]
Below is the code to push to DynamoDB. I have created a bucket named testjsonbucket, and moviedataten.json is the file where I saved the JSON above. I created a DynamoDB table with year (Number) as the primary partition key and title (String) as the primary sort key.
import json
from decimal import Decimal

import boto3

s3 = boto3.resource('s3')
obj = s3.Object('testjsonbucket', 'moviedataten.json')
body = obj.get()['Body'].read()

#def lambda_handler(event,context):
#    print(body)

def load_movies(movies, dynamodb=None):
    if not dynamodb:
        dynamodb = boto3.resource('dynamodb')
    table = dynamodb.Table('Movies')
    for movie in movies:
        year = int(movie['year'])
        title = movie['title']
        print("Adding movie:", year, title)
        table.put_item(Item=movie)

def lambda_handler(event, context):
    movie_list = json.loads(body, parse_float=Decimal)
    load_movies(movie_list)
I want to push the data from DynamoDB into Elasticsearch.
I have created an Elasticsearch domain: https://xx.x.x.com/testelas
I have gone through this link: https://aws.amazon.com/blogs/compute/indexing-amazon-dynamodb-content-with-amazon-elasticsearch-service-using-aws-lambda/
I also clicked Manage Stream.
My requirement:
Any change in DynamoDB has to be reflected in Elasticsearch.
This Lambda just writes the document to DynamoDB, and I would not recommend adding code to this Lambda to push the same object to Elasticsearch as well: a Lambda function should perform a single task, and pushing the same document to Elasticsearch should be driven by a DynamoDB stream.
What if Elasticsearch is down or unavailable? How would you handle that in this Lambda?
What if you want to disable this in the future? You would need to modify the Lambda instead of controlling it from the AWS API or console; with a stream, you just disable the stream when required, with no changes to the Lambda code above.
What if you want to push only modified or TTL-expired items to Elasticsearch?
So create a DynamoDB stream that pushes the document to another Lambda that is responsible for pushing it to Elasticsearch; with this option you can also push both the old and the new item.
You can also look into this article, which describes another approach: data-streaming-from-dynamodb-to-elasticsearch
For the above approach, look at this GitHub project: dynamodb-stream-elasticsearch.
const { pushStream } = require('dynamodb-stream-elasticsearch');
const { ES_ENDPOINT, INDEX, TYPE } = process.env;

function myHandler(event, context, callback) {
  console.log('Received event:', JSON.stringify(event, null, 2));
  pushStream({ event, endpoint: ES_ENDPOINT, index: INDEX, type: TYPE })
    .then(() => {
      callback(null, `Successfully processed ${event.Records.length} records.`);
    })
    .catch((e) => {
      callback(`Error ${e}`, null);
    });
}

exports.handler = myHandler;
DynamoDB has a built-in feature (DynamoDB Streams) that will handle the stream part of this question.
When you configure it, you have the choice of the following stream view types:
KEYS_ONLY — Only the key attributes of the modified item.
NEW_IMAGE — The entire item, as it appears after it was modified.
OLD_IMAGE — The entire item, as it appeared before it was modified.
NEW_AND_OLD_IMAGES — Both the new and the old images of the item.
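If the table was created without a stream, it can also be enabled afterwards. A minimal boto3 sketch (hedged: it reuses the Movies table name from the question and assumes the caller has dynamodb:UpdateTable permission):

import boto3

client = boto3.client('dynamodb')

# Turn on the stream so every INSERT/MODIFY/REMOVE is published
# with both the new and the old image of the item.
client.update_table(
    TableName='Movies',
    StreamSpecification={
        'StreamEnabled': True,
        'StreamViewType': 'NEW_AND_OLD_IMAGES',
    },
)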
This will produce an event that looks like the following
{
"Records":[
{
"eventID":"1",
"eventName":"INSERT",
"eventVersion":"1.0",
"eventSource":"aws:dynamodb",
"awsRegion":"us-east-1",
"dynamodb":{
"Keys":{
"Id":{
"N":"101"
}
},
"NewImage":{
"Message":{
"S":"New item!"
},
"Id":{
"N":"101"
}
},
"SequenceNumber":"111",
"SizeBytes":26,
"StreamViewType":"NEW_AND_OLD_IMAGES"
},
"eventSourceARN":"stream-ARN"
},
{
"eventID":"2",
"eventName":"MODIFY",
"eventVersion":"1.0",
"eventSource":"aws:dynamodb",
"awsRegion":"us-east-1",
"dynamodb":{
"Keys":{
"Id":{
"N":"101"
}
},
"NewImage":{
"Message":{
"S":"This item has changed"
},
"Id":{
"N":"101"
}
},
"OldImage":{
"Message":{
"S":"New item!"
},
"Id":{
"N":"101"
}
},
"SequenceNumber":"222",
"SizeBytes":59,
"StreamViewType":"NEW_AND_OLD_IMAGES"
},
"eventSourceARN":"stream-ARN"
},
{
"eventID":"3",
"eventName":"REMOVE",
"eventVersion":"1.0",
"eventSource":"aws:dynamodb",
"awsRegion":"us-east-1",
"dynamodb":{
"Keys":{
"Id":{
"N":"101"
}
},
"OldImage":{
"Message":{
"S":"This item has changed"
},
"Id":{
"N":"101"
}
},
"SequenceNumber":"333",
"SizeBytes":38,
"StreamViewType":"NEW_AND_OLD_IMAGES"
},
"eventSourceARN":"stream-ARN"
}
]
}
As you're already familiar with Lambda, it makes sense to use a Lambda function to consume the records, iterate through them, and transform them into the Elasticsearch format before adding them to your index.
When doing this, make sure that you iterate through every record, as there may be multiple per event depending on your configuration.
For more information on the Lambda side of this, check out the Tutorial: Using AWS Lambda with Amazon DynamoDB streams page.
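As a rough sketch of that consumer Lambda (hedged: the ES_ENDPOINT environment variable, the movies index name, and the document-id scheme are assumptions, request signing against the Elasticsearch domain is omitted, and the typed NewImage is indexed as-is rather than unmarshalled):

import json
import os
import urllib.parse
import urllib.request

ES_ENDPOINT = os.environ['ES_ENDPOINT']  # e.g. https://xx.x.x.com
INDEX = 'movies'                         # assumed index name

def handler(event, context):
    # A single stream event can carry several records, so iterate over all of them.
    for record in event['Records']:
        keys = record['dynamodb']['Keys']
        # Build a deterministic document id from the table's key attributes.
        doc_id = urllib.parse.quote(f"{keys['year']['N']}-{keys['title']['S']}")
        url = f"{ES_ENDPOINT}/{INDEX}/_doc/{doc_id}"
        if record['eventName'] == 'REMOVE':
            req = urllib.request.Request(url, method='DELETE')
        else:
            # INSERT and MODIFY both carry the new item in NewImage.
            # NewImage is in DynamoDB's typed format; a real implementation would
            # usually convert it (e.g. with boto3's TypeDeserializer) before indexing.
            doc = record['dynamodb']['NewImage']
            req = urllib.request.Request(
                url,
                data=json.dumps(doc).encode('utf-8'),
                headers={'Content-Type': 'application/json'},
                method='PUT',
            )
        urllib.request.urlopen(req)
    return f"Processed {len(event['Records'])} records."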

How to share data in `AWS Step Functions` without passing it between the steps

I use AWS Step Functions and have the following workflow:
initStep - a Lambda function handler that gets some data and sends it to SQS for an external service.
import json
import os
from datetime import datetime

import boto3

activity = os.getenv('ACTIVITY')
queue_name = os.getenv('QUEUE_NAME')

def lambda_handler(event, context):
    event['my_activity'] = activity
    data = json.dumps(event)
    # Retrieving a queue by its name
    sqs = boto3.resource('sqs')
    queue = sqs.get_queue_by_name(QueueName=queue_name)
    queue.send_message(MessageBody=data,
                       MessageGroupId='messageGroup1' + str(datetime.time(datetime.now())))
    return event
validationWaiting - an activity that waits for an answer from the external service that includes the data.
complete - a Lambda function handler that uses the data from initStep.
import boto3

def lambda_handler(event, context):
    email = event['email'] if 'email' in event else None
    data = event['data'] if 'data' in event else None
    client = boto3.client(service_name='ses')
    to = email.split(', ')
    message_container = {'Subject': {'Data': 'Email from step functions'},
                         'Body': {'Html': {
                             'Charset': "UTF-8",
                             'Data': """<html><body>
                                 <p>""" + data + """</p>
                                 </body></html>"""
                         }}}
    destination = {'ToAddresses': to,
                   'CcAddresses': [],
                   'BccAddresses': []}
    # from_addresses (the sender) is defined elsewhere, e.g. via an environment variable
    return client.send_email(Source=from_addresses,
                             Destination=destination,
                             Message=message_container)
It does work, but the problem is that I'm sending the full data from initStep to the external service just to pass it later to complete. Potentially more steps could be added.
I believe it would be better to share it as some sort of global data (for the current Step Function execution); that way I could add or remove steps and the data would still be available to all of them.
You can make use of InputPath and ResultPath. In initStep you would only send the necessary data to the external service (probably along with some unique identifier of the Execution). In the ValidationWaiting step you can set the following properties (in the State Machine definition):
InputPath: What data will be provided to GetActivityTask. You probably want to set it to something like $.execution_unique_id, where execution_unique_id is a field in your data that the external service uses to identify the Execution (to match it with the specific request made during initStep).
ResultPath: Where the output of the ValidationWaiting activity will be saved in the data. You can set it to $.validation_output and the JSON result from the external service will be present there.
This way you can send the external service only the data it actually needs, and you won't lose access to any data that was in the input previously (before the ValidationWaiting step).
For example, you could have following definition of the State Machine:
{
"StartAt": "initStep",
"States": {
"initStep": {
"Type": "Pass",
"Result": {
"executionId": "some:special:id",
"data": {},
"someOtherData": {"value": "key"}
},
"Next": "ValidationWaiting"
},
"ValidationWaiting": {
"Type": "Pass",
"InputPath": "$.executionId",
"ResultPath": "$.validationOutput",
"Result": {
"validationMessages": ["a", "b"]
},
"Next": "Complete"
},
"Complete": {
"Type": "Pass",
"End": true
}
}
}
I've used Pass states for initStep and ValidationWaiting to simplify the example (I haven't run it, but it should work). The Result field is specific to the Pass state and is equivalent to the result of your Lambda functions or activities.
In this scenario the Complete step would get the following input:
{
"executionId": "some:special:id",
"data": {},
"someOtherData": {"value": key"},
"validationOutput": {
"validationMessages": ["a", "b"]
}
}
So the result of the ValidationWaiting step has been saved into the validationOutput field.
Based on the answer of Marcin Sucharski, I've come up with my own solution.
I needed to use Type: Task since initStep is a Lambda that sends to SQS.
I didn't need InputPath in ValidationWaiting, only ResultPath, which stores the data received in the activity.
I work with the Serverless framework; here is my final solution:
StartAt: initStep
States:
  initStep:
    Type: Task
    Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:init-step
    Next: ValidationWaiting
  ValidationWaiting:
    Type: Task
    ResultPath: $.validationOutput
    Resource: arn:aws:states:#{AWS::Region}:#{AWS::AccountId}:activity:validationActivity
    Next: Complete
    Catch:
      - ErrorEquals:
          - States.ALL
        ResultPath: $.validationOutput
        Next: Complete
  Complete:
    Type: Task
    Resource: arn:aws:lambda:#{AWS::Region}:#{AWS::AccountId}:function:complete-step
    End: true
Here is a short and simple solution with InputPath and ResultPath. My Lambda Check_Ubuntu_Updates returns a list of instances ready to be updated. This list of instances is received by the Notify_Results step, which then uses this data. Remember that if you have several ResultPath values in your Step Function and you need more than one input in a step, you can use InputPath only with $.
{
"Comment": "A state machine that check some updates systems available.",
"StartAt": "Check_Ubuntu_Updates",
"States": {
"Check_Ubuntu_Updates": {
"Type": "Task",
"Resource": "arn:aws:lambda:us-east-1:#############:function:Check_Ubuntu_Updates",
"ResultPath": "$.instances",
"Next": "Notify_Results"
},
"Notify_Results": {
"Type": "Task",
"InputPath": "$.instances",
"Resource": "arn:aws:lambda:us-east-1:#############:function:Notify_Results",
"End": true
}
}
}
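To illustrate how the two paths meet on the Lambda side, here is a hedged sketch; the handler bodies are hypothetical and only the input/output shapes matter:

def check_ubuntu_updates_handler(event, context):
    # Stand-in for the real check. Because the state sets "ResultPath": "$.instances",
    # this return value is merged into the execution state under the "instances" key.
    return ['i-0123456789abcdef0', 'i-0fedcba9876543210']

def notify_results_handler(event, context):
    # With "InputPath": "$.instances" this function receives only that list,
    # not the whole execution state.
    instances = event
    print(f"Instances ready to be updated: {instances}")
    return {'notified': len(instances)}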

Google fit data via google python api libraries

I'm using this python library from google but I can't figure out what to use for the 'body' argument. Is there an example body that I can draw from to create the dict that this tool will need?
Here is the code that I'm using:
flow = client.flow_from_clientsecrets(
    workLaptop,
    scope='https://www.googleapis.com/auth/fitness.activity.read',
    redirect_uri='oauth:code:from:somehwere')

auth_uri = flow.step1_get_authorize_url()
webbrowser.open_new(auth_uri)

auth_code = "a;ldskjfa;lsdkfja;ldsfkja;lsdkfjaldgha;"
credentials = flow.step2_exchange(auth_code)
http_auth = credentials.authorize(httplib2.Http())

service = discovery.build('fitness', 'v1', http_auth)
fitData = service.users().dataset().aggregate(userId='me', body=body).execute()
It's all fine until the part where I need to define the body. Here is the body that I'm trying:
body = {
"aggregateBy": [
{
"dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
"dataTypeName": "com.google.step_count.delta"
},
],
"bucketByActivitySegment": {
"minDurationMillis": "A String", # Only activity segments of duration longer than this is used
},
"endTimeMillis": "1435269600000000000",
"bucketBySession": {
"minDurationMillis": "10", # Only sessions of duration longer than this is used
},
"bucketByActivityType": {
"minDurationMillis": "10", # Only activity segments of duration longer than this is used
},
"startTimeMillis": "1435183200000000000", # required time range
"bucketByTime": { # apparently oneof is not supported by reduced_nano_proto
"durationMillis": "10",
},
}
What is wrong with my body dict? Here is the error code:
https://www.googleapis.com/fitness/v1/users/me/dataset:aggregate?alt=json returned "Internal Error">
Here is an example of the object in the API explorer:
Although I'm not 100% au fait with the Google API for Google Fit, there are definitely some issues with your JSON body request in the first instance.
For example:
body = {
"aggregateBy": [
{
"dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
"dataTypeName": "com.google.step_count.delta"
},
],
"bucketByActivitySegment": {
"minDurationMillis": "A String", # Only activity segments of duration longer than this is used
},
"endTimeMillis": "1435269600000000000",
"bucketBySession": {
"minDurationMillis": "10", # Only sessions of duration longer than this is used
},
"bucketByActivityType": {
"minDurationMillis": "10", # Only activity segments of duration longer than this is used
},
"startTimeMillis": "1435183200000000000", # required time range
"bucketByTime": { # apparently oneof is not supported by reduced_nano_proto
"durationMillis": "10",
},
}
Should actually be this:
body = {
"aggregateBy": [
{
"dataSourceId": "derived:com.google.step_count.delta:com.google.android.gms:estimated_steps",
"dataTypeName": "com.google.step_count.delta"
}
],
"bucketByActivitySegment": {
"minDurationMillis": "A String" # Only activity segments of duration longer than this is used
},
"endTimeMillis": "1435269600000000000",
"bucketBySession": {
"minDurationMillis": "10" # Only sessions of duration longer than this is used
},
"bucketByActivityType": {
"minDurationMillis": "10" # Only activity segments of duration longer than this is used
},
"startTimeMillis": "1435183200000000000", # required time range
"bucketByTime": { # apparently oneof is not supported by reduced_nano_proto
"durationMillis": "10"
}
}
JSON-based REST services are really unforgiving about extra commas where they should not be; they render the string un-JSON-able, which will lead to a 500 failure. Give that a try in the first instance ;)
Not an expert myself, but I have been playing with the API for a number of days. Here's a sample from my OAuth playground.
Sample response
From what I understand, your "endTimeMillis": "1435269600000000000" is not properly defined, as it's in nanoseconds. For it to be in milliseconds, change it to "1435269600000".
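For example, a hedged sketch of building that time range in epoch milliseconds (reusing the body and service objects from above; the dates themselves are arbitrary):

from datetime import datetime, timezone

start = datetime(2015, 6, 25, tzinfo=timezone.utc)
end = datetime(2015, 6, 26, tzinfo=timezone.utc)

# The API expects epoch milliseconds as strings, not nanoseconds.
body["startTimeMillis"] = str(int(start.timestamp() * 1000))  # "1435190400000"
body["endTimeMillis"] = str(int(end.timestamp() * 1000))      # "1435276800000"

response = service.users().dataset().aggregate(userId='me', body=body).execute()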
