Goal
To run AWS CDK Python unit tests that validate configuration properties
Problem
While pytest correctly creates the cdk.App object, the app does not pick up the context values from cdk.json.
Conditions
Tested tests/cdk.json
Tested tests/unit/cdk.json
Tested values in ./cdk.json
Workaround
Force context within pytest file (ref)
e.g.:
import aws_cdk as cdk
from aws_cdk import assertions

from mycdk.mycdk_stack import MyStack  # adjust the import to your project layout

TEST_CIDR = "10.1.0.0/20"
TEST_CONTEXT = {
    "cidr": TEST_CIDR,
}


def test_config():
    app = cdk.App(context=TEST_CONTEXT)
    # app = cdk.App()  # This fails to get values from cdk.json
    stack = MyStack(app, "mycdk")
    template = assertions.Template.from_stack(stack)
    # Verify cdk.json values
    template.has_resource_properties("AWS::EC2::VPC", {
        "CidrBlock": TEST_CIDR
    })
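If you would rather keep the values in cdk.json instead of duplicating them in the test, a minimal sketch is to load the file yourself and feed its context to the app (this assumes the tests run from the project root, where cdk.json lives):

import json
from pathlib import Path

import aws_cdk as cdk


def load_cdk_context(path: str = "cdk.json") -> dict:
    # Read the "context" block that the CDK CLI would normally inject
    return json.loads(Path(path).read_text()).get("context", {})


def test_config_from_cdk_json():
    app = cdk.App(context=load_cdk_context())
    # ... build the stack and assert on it as above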
Related
I have a project created with openapi-generator; a python-flask application is generated from the OpenAPI spec. I am using the following command to generate my app.
openapi-generator generate -i OpenAPI/api.yaml -g python-flask --skip-validate-spec
But this command overwrites my already-implemented controllers with the defaults. Is there a way to:
Skip the controllers that were not updated in this iteration and generate only the new ones?
If not, skip generation of the controller Python files entirely?
I have already used .openapi-generator-ignore to ignore the controllers directory,
but I am looking for something better.
I'm working on a project that could help you generate OpenAPI documentation easily, without having to execute or configure anything beyond running your server. Here it is: flask-toolkits.
The implementation works through your view functions and is really flexible. It's inspired by FastAPI's OpenAPI setup.
from typing import Optional

from flask import Flask
from flask_toolkits import AutoSwagger, APIRouter, Body, Header, Query
from flask_toolkits.responses import JSONResponse

app = Flask(__name__)
auto_swagger = AutoSwagger()
app.register_blueprint(auto_swagger)

router = APIRouter("email", __name__, url_prefix="/email")


@router.post("/read", tags=["Email Router"])
def get_email(
    token: int = Header(),
    id: int = Body(),
    name: Optional[str] = Body(None),
    race: Optional[str] = Query(None)
):
    return JSONResponse({"id": id, "name": name})


app.register_blueprint(router)

if __name__ == "__main__":
    app.run()
go to http://localhost:5000/docs
and here you go
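To exercise the /email/read route itself, you could post to it directly. This is only a sketch: it assumes flask-toolkits maps Header(), Body(), and Query() parameters to the HTTP headers, JSON body, and query string respectively, and that the server is running locally:

import requests

resp = requests.post(
    "http://localhost:5000/email/read",
    headers={"token": "123"},          # Header() parameter
    json={"id": 1, "name": "alice"},   # Body() parameters
    params={"race": "human"},          # Query() parameter
)
print(resp.json())  # expected: {"id": 1, "name": "alice"}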
I need to access defines and variables from the Kamailio configuration file in my Python script. So far, I am able to access only the global variables, through self.my_var = int(KSR.pv.get("$sel(cfg_get.my_group.my_var)")), where the variable is defined in the configuration file as my_group.my_var = 664. How can I access the definitions (to know whether #!ifdef MY_DEFINE succeeds or not)? Or at least the configuration file itself, so I can read them myself?
I found nothing about this in the official documentation; even $sel(cfg_get.my_group.my_var) I found elsewhere.
UPDATE: These values are normally not available at runtime (they are consumed at preprocessing time), so only the native configuration can use them, like this:
#!define WITH_SIPTRACE
#!substdef "!SERVER_ID!654!g"
...
request_route {
#!ifdef WITH_SIPTRACE
xlog("L_NOTICE", "$C(yx)========= server_id=SERVER_ID NEW $rm $pr:$si:$sp$C(xx)\n");
#!endif
...
Can this same behavior be achieved in Python?
UPDATE 2: Found a partial answer (below) in KSR.kx.get_def() and KSR.kx.get_defn().
You should not change defines; there is no mechanism for that.
To make parameters configurable, use a database or the curl module, plus an in-memory cache via the htable module.
Here is an htable optimization for auth, for example:
AUTH WITH CACHING
# authentication with password caching using htable
modparam("htable", "htable", "auth=>size=10;autoexpire=300;")
modparam("auth_db", "load_credentials", "$avp(password)=password")
route[AUTHCACHE] {
if($sht(auth=>$au::passwd)!=$null) {
if (!pv_auth_check("$fd", "$sht(auth=>$au::passwd)", "0", "1")) {
auth_challenge("$fd", "1");
exit;
}
} else {
# authenticate requests
if (!auth_check("$fd", "subscriber", "1")) {
auth_challenge("$fd", "0");
exit;
}
$sht(auth=>$au::passwd) = $avp(password);
}
# user authenticated - remove auth header
if(!is_method("REGISTER|PUBLISH"))
consume_credentials();
}
Here is how to get info from the database using SQL:
https://kamailio.org/docs/modules/5.0.x/modules/avpops.html#avpops.f.avp_db_query
avp_db_query("select password, ha1 from subscriber where username='$tu'",
"$avp(i:678);$avp(i:679)");
As an open-source sample of such an optimization, you can look at the code of the kazoo project.
Here is a solution I dislike, because it adds extra variables for each interesting definition.
On the configuration side:
####### Custom Parameters #########
/*
Is there a better method to access these flags (!define)?
For the constants (!substdef) there are KSR.kx.get_def(), KSR.kx.get_defn().
*/
#!ifdef WITH_OPTIONA
my_group.option_a = yes
#!else
my_group.option_a = no
#!endif
#!ifdef WITH_OPTIONB
my_group.option_b = yes
#!else
my_group.option_b = no
#!endif
On the Python side:
# Methods of the KEMI class (the surrounding class definition is omitted here)
def __init__(self):
    self.my_domain = KSR.kx.get_def("MY_DOMAIN")
    self.server_id = KSR.kx.get_defn("SERVER_ID")
    assert self.my_domain and self.server_id
    self.flags = None
    self.initialized = False

def real_init(self):
    '''
    Object data is initialized here, dynamically,
    because __init__ and child_init are called too early.
    '''
    def read_flags():
        flags = {}
        for i in ['option_a', 'option_b']:
            value = bool(int(KSR.pv.get("$sel(cfg_get.my_group.{})".format(i))))
            if value:
                flags[i[len('option_'):]] = True
        return flags

    self.flags = read_flags()
    self.initialized = True
Update: Partial solution found: it covers !substdef constants, but not !define ones.
Update: KSR.kx.ifdef() and KSR.kx.ifndef() have been added on the master branch, but are not yet exported in an official release (the latest being 5.5.2).
See kemix: added KSR.kx.ifdef() and KSR.kx.ifndef().
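With those exports, the flag handling above could be simplified to direct checks. A minimal sketch, assuming KSR.kx.ifdef() returns a truthy value when the symbol was defined at preprocessing time:

# KSR is provided by the Kamailio app_python3 runtime; the symbol names are
# the ones used in the configuration above.
def read_flags():
    flags = {}
    for name in ("WITH_OPTIONA", "WITH_OPTIONB"):
        if KSR.kx.ifdef(name):
            flags[name] = True
    return flags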
I'm trying to create a Glue ETL job using boto3, with the script below. I want to create it as type=Spark, but the script below creates a job of type=Python Shell. It also doesn't disable bookmarks. Does anyone know what I need to add to make it a Spark job and disable bookmarks?
script:
response = glue_assumed_client.create_job(
Name='mlxxxx',
Role='Awsxxxx',
Command={
'Name': 'mlxxxx',
'ScriptLocation': 's3://aws-glue-scripts-xxxxx-us-west-2/xxxx',
'PythonVersion': '3'
},
Connections={
'Connections': [
'sxxxx',
'spxxxxxx',
]
},
Timeout=2880,
MaxCapacity=10
)
To create Spark jobs, you have to set the name of the command to 'glueetl', as described below. If you are not running a Python shell job, you need not specify the Python version in the Command parameters.
response = client.create_job(
Name='mlxxxyu',
Role='Awsxxxx',
Command={
'Name': 'glueetl',  # <-- set the command name to glueetl to create a Spark job
'ScriptLocation': 's3://aws-glue-scripts-xxxxx-us-west-2/xxxx'
},
Connections={
'Connections': [
'sxxxx',
'spxxxxxx',
]
},
Timeout=2880,
MaxCapacity=10
)
Regarding job bookmarks: they are disabled by default, so if you don't specify a bookmark parameter, the created job will have bookmarks disabled.
If you want to disable bookmarks explicitly, you can do so in the DefaultArguments[1] as shown below.
response = client.create_job(
Name='mlxxxyu',
Role='Awsxxxx',
Command={
'Name': 'glueetl',
'ScriptLocation': 's3://aws-glue-scripts-xxxxx-us-west-2/xxxx'
},
DefaultArguments={
'--job-bookmark-option': 'job-bookmark-disable'
},
Timeout=2880,
MaxCapacity=10
)
See the documentation.
Command (dict) -- [REQUIRED] The JobCommand that executes this job.
Name (string) -- The name of the job command. For an Apache Spark ETL job, this must be glueetl. For a Python shell job, it must be pythonshell.
You may reset the bookmark by using the function
client.reset_job_bookmark(
JobName='string',
RunId='string'
)
where JobName is required. It can be obtained from response['Name'] returned by create_job().
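For example, to clear the bookmark of the job created above (a sketch; the region is an assumption and the job name is the placeholder from the question):

import boto3

glue_client = boto3.client('glue', region_name='us-west-2')  # region is an assumption

# create_job() returns {'Name': '<job name>'}, so response['Name'] can be used here
job_name = 'mlxxxyu'
glue_client.reset_job_bookmark(JobName=job_name)  # RunId is optional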
Actually, we built a web app, and from there we pass variables to Terraform like below:
terraform apply -input=false -auto-approve -var ami="%ami%" -var region="%region%" -var icount="%count%" -var type="%instance_type%"
The problem here is that the backend block does not support variables, and I need to pass those values from the app as well.
To resolve this, I found suggestions that we need to create backend.tf before execution.
But I cannot figure out how to do it; if anyone has any examples regarding this, please help me.
Thanks in advance.
I need to create the backend.tf file from Python using the variables below,
and I need to set key="${profile}/tfstate, substituting the profile for each profile.
I am thinking of using a git repo: with git we create the files, pull the values, then commit again and execute.
Please help me with some examples and ideas.
The code is like below.
My main.tf:
terraform {
  backend "s3" {
    bucket  = "terraform-007"
    key     = "key"
    region  = "ap-south-1"
    profile = "venu"
  }
}

provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}

resource "aws_instance" "VM" {
  count         = var.icount
  ami           = var.ami
  instance_type = var.type
  tags = {
    Environment = "${var.env_indicator}"
  }
}
vars.tf:
variable "aws_profile" {
  default     = "default"
  description = "AWS profile name, as set in ~/.aws/credentials"
}

variable "aws_region" {
  type        = "string"
  default     = "ap-south-1"
  description = "AWS region in which to create resources"
}

variable "env_indicator" {
  type        = "string"
  default     = "dev"
  description = "What environment are we in?"
}

variable "icount" {
  default = 1
}

variable "ami" {
  default = "ami-54d2a63b"
}

variable "bucket" {
  default = "terraform-002"
}

variable "type" {
  default = "t2.micro"
}
output.tf:
output "ec2_public_ip" {
  value = ["${aws_instance.VM.*.public_ip}"]
}

output "ec2_private_ip" {
  value = ["${aws_instance.VM.*.private_ip}"]
}
Since the configuration for the backend cannot use interpolation, we have used a configuration by convention approach.
The terraform for all of our state collections (microservices and other infrastructure) use the same S3 bucket for state storage and the same DynamoDB table for locking.
When executing terraform, we use the same IAM role (a dedicated terraform only user).
We define the key for the state via convention, so that it does not need to be generated.
key = "platform/services/{name-of-service}/terraform.tfstate"
I would avoid a process that results in changes to the infrastructure code as it is being deployed, to ensure maximum understandability for the engineers reading and maintaining the code.
EDIT: Adding key examples
For the user service:
key = "platform/services/users/terraform.tfstate"
For the search service:
key = "platform/services/search/terraform.tfstate"
For the product service:
key = "platform/services/products/terraform.tfstate"
I am trying to write an Ember CLI application that talks to a REST API developed using Django REST Framework.
I tried ember-django-adapter as the data adapter for the Ember application; however, I cannot find sample code showing how to configure it and write a model that uses this adapter. Can someone please help?
This is the EDA code: https://github.com/dustinfarris/ember-django-adapter.
Also, all I did on the Ember app side was to create a new app and change the config as recommended here: http://dustinfarris.com/ember-django-adapter/configuring/:
if (environment === 'development') {
  ENV.APP.API_HOST = 'http://localhost:8000';
}

if (environment === 'production') {
  ENV.APP.API_HOST = 'https://api.myproject.com';
  ENV.APP.API_NAMESPACE = 'v2';
}
But this doc doesn't say how to configure the data adapter for Ember! Please let me know if there is a way to make Ember.js and django-rest-framework talk.
Thanks.
Before using Ember Data, I would suggest you create a basic AJAX call using jQuery.
Step 1 (basic AJAX call with jQuery):
route.js:
model() {
  return Ember.$.getJSON("/api/v1/foo");
}
Step 2 (Create the model foo with the correct adapter using ActiveModelAdapter):
models/foo.js:
import DS from 'ember-data';

var attr = DS.attr;

export default DS.Model.extend({
  bar: attr('string'),
  isTest: attr('boolean')
});
adapters/foo.js:
import DS from 'ember-data';
import config from 'shippo-frontend/config/environment'; // adjust to your app's module name

export default DS.ActiveModelAdapter.extend({
  namespace: 'api/v1',
  host: config.APP.API_HOST // the API_HOST set in config/environment.js
});
Step 3 (replace your jQuery call by the Ember-data call):
route.js:
model() {
  return this.get('store').findAll('foo');
}
Notes:
active-model-adapter allows you to transform your snake_case keys into camelCase keys: https://github.com/ember-data/active-model-adapter
If you need to do other modifications on the data coming from Django, create a serializers/foo.js and play around with the payload.