Goal
To run AWS CDK Python unit tests that validate configuration properties
Problem
pytest creates the cdk.App object correctly, but the app does not pick up context values from cdk.json.
Conditions
Tested tests/cdk.json
Tested tests/unit/cdk.json
Tested values in ./cdk.json
Workaround
Force the context within the pytest file (ref), e.g.:
import aws_cdk as cdk
from aws_cdk import assertions
from mycdk.mycdk_stack import MyStack  # adjust the import to your project layout

TEST_VALUE = "10.1.0.0/20"
TEST_CONTEXT = {
    "cidr": TEST_VALUE,
}

def test_config():
    app = cdk.App(context=TEST_CONTEXT)
    # app = cdk.App()  # This fails to get values from cdk.json

    stack = MyStack(app, "mycdk")
    template = assertions.Template.from_stack(stack)

    # Verify cdk.json values
    template.has_resource_properties("AWS::EC2::VPC", {
        "CidrBlock": TEST_VALUE,
    })
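If you would rather not duplicate values from cdk.json inside the test, one option is to read the file's "context" block yourself and hand it to cdk.App. A minimal sketch, assuming a helper of my own naming (load_cdk_context is not part of the CDK API):

```python
import json

def load_cdk_context(path="cdk.json"):
    """Return the "context" mapping from a cdk.json file (empty dict if absent)."""
    with open(path) as f:
        return json.load(f).get("context", {})

# In the test you could then write something like:
#   app = cdk.App(context=load_cdk_context("../../cdk.json"))
# so the asserted values and the deployed values come from the same file.
```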
I am working on a project with TwinCAT and an AMR, using Python as the communication medium between the two systems. I have an issue with waiting for a variable to change value: I have a variable of type BOOL and want to perform a certain action when that variable changes. Can someone help me with this?
P.S. I have also tried a change notification on the variable.
import pyads

PLC = pyads.Connection('127.0.0.1.1.1', pyads.PORT_SPS1)
PLC.open()

StnF = PLC.read_by_name('GVL.AGVgotoStnF', pyads.PLCTYPE_BOOL)
print(StnF)

# read_by_name returns a Python bool, not the string 'TRUE'
if StnF:
    ArrStnF = PLC.write_by_name('GVL.iPosAGV', 3, pyads.PLCTYPE_INT)
    print(ArrStnF)
You're looking for notifications. The pyads documentation gives an example of how to do this:
import pyads
from ctypes import sizeof

# define the callback which extracts the value of the variable
def callback(notification, data):
    contents = notification.contents
    var = next(map(int, bytearray(contents.data)[0:contents.cbSampleSize]))

plc = pyads.Connection('127.0.0.1.1.1', pyads.PORT_SPS1)
plc.open()

attr = pyads.NotificationAttrib(sizeof(pyads.PLCTYPE_INT))

# add_device_notification returns a tuple of notification_handle and
# user_handle which we just store in handles
handles = plc.add_device_notification('GVL.integer_value', attr, callback)

# To remove the device notification just use the del_device_notification
# function.
plc.del_device_notification(*handles)
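Since you want to act only when the BOOL actually changes (not on every read or notification), it helps to remember the previous value and react to the transition. Here is a PLC-free sketch of that edge-detection logic; the EdgeDetector name is mine, and in practice you would call update() from inside the pyads callback:

```python
class EdgeDetector:
    """Remembers the last value and reports rising/falling edges."""

    def __init__(self):
        self._last = None

    def update(self, value):
        """Return "rising", "falling", or None when the value did not change."""
        edge = None
        if self._last is not None and value != self._last:
            edge = "rising" if value else "falling"
        self._last = value
        return edge

# Example: only the transitions are reported, repeated samples are ignored.
detector = EdgeDetector()
for sample in [False, False, True, True, False]:
    if detector.update(sample) == "rising":
        print("variable changed to TRUE -> perform the action here")
```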
We built a web app, and from there we pass variables to Terraform like below:
terraform apply -input=false -auto-approve -var ami="%ami%" -var region="%region%" -var icount="%count%" -var type="%instance_type%"
The problem here is that the backend block does not support variables, and I need to pass those values from the app as well.
To resolve this, I found suggestions that we need to create backend.tf before execution.
But I cannot figure out how to do it; if anyone has any examples of this, please help me.
Thanks in advance.
I need to create the backend.tf file from Python using the variables below,
and need to replace key="${profile}/tfstate
so that the key is substituted for each profile.
I am thinking of using a Git repo: create the files via Git, pull the values, commit again, and execute.
Please help me with some examples and ideas.
The code is like below.
My main.tf is like below:
terraform {
  backend "s3" {
    bucket  = "terraform-007"
    key     = "key"
    region  = "ap-south-1"
    profile = "venu"
  }
}

provider "aws" {
  profile = var.aws_profile
  region  = var.aws_region
}

resource "aws_instance" "VM" {
  count         = var.icount
  ami           = var.ami
  instance_type = var.type
  tags = {
    Environment = var.env_indicator
  }
}
vars.tf is like:
variable "aws_profile" {
  default     = "default"
  description = "AWS profile name, as set in ~/.aws/credentials"
}

variable "aws_region" {
  type        = string
  default     = "ap-south-1"
  description = "AWS region in which to create resources"
}

variable "env_indicator" {
  type        = string
  default     = "dev"
  description = "What environment are we in?"
}

variable "icount" {
  default = 1
}

variable "ami" {
  default = "ami-54d2a63b"
}

variable "bucket" {
  default = "terraform-002"
}

variable "type" {
  default = "t2.micro"
}
output.tf is like:
output "ec2_public_ip" {
  value = ["${aws_instance.VM.*.public_ip}"]
}

output "ec2_private_ip" {
  value = ["${aws_instance.VM.*.private_ip}"]
}
Since the configuration for the backend cannot use interpolation, we have used a configuration by convention approach.
The terraform for all of our state collections (microservices and other infrastructure) use the same S3 bucket for state storage and the same DynamoDB table for locking.
When executing terraform, we use the same IAM role (a dedicated terraform only user).
We define the key for the state via convention, so that it does not need to be generated.
key = "platform/services/{name-of-service}/terraform.tfstate"
I would avoid a process that changes the infrastructure code as it is being deployed, to ensure maximum understandability for the engineers reading and maintaining it.
EDIT: Adding key examples
For the user service:
key = "platform/services/users/terraform.tfstate"
For the search service:
key = "platform/services/search/terraform.tfstate"
For the product service:
key = "platform/services/products/terraform.tfstate"
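If you do decide to generate backend.tf from Python, as the question asks, a small string template is enough; no Git round-trip is needed. A hedged sketch, assuming the render_backend/write_backend helpers, the bucket, and the key convention above are placeholders to adapt:

```python
from pathlib import Path

# Doubled braces are literal braces in str.format templates.
BACKEND_TEMPLATE = """\
terraform {{
  backend "s3" {{
    bucket  = "{bucket}"
    key     = "platform/services/{profile}/terraform.tfstate"
    region  = "{region}"
    profile = "{profile}"
  }}
}}
"""

def render_backend(profile, bucket="terraform-007", region="ap-south-1"):
    """Render a backend.tf body with the per-profile state key."""
    return BACKEND_TEMPLATE.format(profile=profile, bucket=bucket, region=region)

def write_backend(profile, directory="."):
    """Write backend.tf next to main.tf before running terraform init."""
    path = Path(directory) / "backend.tf"
    path.write_text(render_backend(profile))
    return path
```

An alternative that avoids generated files entirely is Terraform's partial backend configuration: leave the backend block empty of per-profile values and supply them at init time, e.g. terraform init -backend-config="key=platform/services/venu/terraform.tfstate" -backend-config="profile=venu".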
I am trying to write an Ember CLI application that talks to a REST api developed using Django-Rest-Framework.
I tried to use ember-django-adapter as the data adapter for the Ember application; however, I cannot find sample code showing how to configure it and write a model that uses this data adapter. Can someone please help?
This is the EDA code https://github.com/dustinfarris/ember-django-adapter.
Also, all I did on the Ember app side was create a new app and change the config as recommended here http://dustinfarris.com/ember-django-adapter/configuring/:
if (environment === 'development') {
ENV.APP.API_HOST = 'http://localhost:8000';
}
if (environment === 'production') {
ENV.APP.API_HOST = 'https://api.myproject.com';
ENV.APP.API_NAMESPACE = 'v2';
}
but this doc doesn't say how to configure the data adapter for Ember! Please let me know if there is a way to make Ember.js and django-rest-framework talk.
Thanks.
Before using Ember Data, I would suggest creating a basic AJAX call using jQuery.
Step 1 (basic AJAX call with jQuery):
route.js:
model() {
return Ember.$.getJSON("/api/v1/foo");
}
Step 2 (Create the model foo with the correct adapter using ActiveModelAdapter):
models/foo.js:
import DS from 'ember-data';

var attr = DS.attr;

export default DS.Model.extend({
  bar: attr('string'),
  isTest: attr('boolean')
});
adapters/foo.js:
import DS from 'ember-data';
import config from 'shippo-frontend/config/environment';

export default DS.ActiveModelAdapter.extend({
  namespace: 'api/v1',
  host: config.APP.API_HOST
});
Step 3 (replace your jQuery call by the Ember-data call):
route.js:
model() {
return this.get('store').findAll('foo');
}
Notes:
active-model-adapter allows you to transform your snake_case keys into camelCase keys. https://github.com/ember-data/active-model-adapter
If you need to do other modifications on the data coming from Django, create a serializers/foo.js and play around with the payload.
I'm trying to understand how Django is setting keys for my views. I'm wondering if there's a way to get all the saved keys from Memcached, something like a cache.all(). I've been trying to find the key with cache.has_key('test') but still can't figure out how the view keys are being named.
UPDATE: The reason I need this is that I need to manually delete parts of the cache but don't know the key values Django is setting for my cache_view key.
For RedisCache you can get all available keys with:
from django.core.cache import cache
cache.keys('*')
As mentioned there is no way to get a list of all cache keys within django. If you're using an external cache (e.g. memcached, or database caching) you can inspect the external cache directly.
But if you want to know how to convert a django key to the one used in the backend system, django's make_key() function will do this.
https://docs.djangoproject.com/en/1.8/topics/cache/#cache-key-transformation
>>> from django.core.cache import caches
>>> caches['default'].make_key('test-key')
u':1:test-key'
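Under the hood, Django's default KEY_FUNCTION simply joins the prefix, version, and key with colons. A pure-Python sketch of that transformation (mirroring django.core.cache.utils' default behavior; the function name here is mine):

```python
def default_key_func(key, key_prefix, version):
    """Mimic Django's default cache-key transformation: "<prefix>:<version>:<key>"."""
    return "%s:%s:%s" % (key_prefix, version, key)

# With no KEY_PREFIX and the default version of 1, this yields the
# ":1:test-key" value seen in the shell session above.
print(default_key_func("test-key", "", 1))
```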
For debugging, you can temporarily switch to LocMemCache instead of PyMemcacheCache:
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
'LOCATION': 'unique-snowflake',
}
}
then inspect the in-memory store directly:
from django.core.cache.backends import locmem
print(locmem._caches)
In my setup with Django 3.2, there is a method to get a "raw" Redis client from which you can get the keys:
from django.core.cache import cache
cache.get_client(1).keys()
You can use http://www.darkcoding.net/software/memcached-list-all-keys/ as explained in How do I check the content of a Django cache with Python memcached?
The Memcached documentation recommends that instead of listing all the cache keys, you run memcached in verbose mode and see everything that gets changed. You should start memcached like this
memcached -vv
and then it will print the keys as they get created/updated/deleted.
For Redis Backend
I'm going to add this answer because I landed on this SO question searching for exactly the same thing but using a different cache backend. Also, with Redis in particular, if you are using the same Redis server for multiple applications, you will want to scope your cache keys with the KEY_PREFIX option; otherwise you could end up seeing cache keys from another application.
My answer applies if you have set up KEY_PREFIX in your settings.py and are using either redis_cache.RedisCache or django.core.cache.backends.redis.RedisCache,
e.g.
CACHES = {
    "default": {
        "BACKEND": "redis_cache.RedisCache",
        "LOCATION": "redis://localhost:6379",
        "KEY_PREFIX": "my_prefix",
    },
}
or
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://localhost:6379",
        "KEY_PREFIX": "my_prefix",
    },
}
redis_cache.RedisCache
from django.conf import settings
from django.core.cache import cache
cache_keys = cache.get_client(1).keys(
f"*{settings.CACHES['default']['KEY_PREFIX']}*"
)
django.core.cache.backends.redis.RedisCache
Doing some tests shows that Django's built-in RedisCache may already be scoped, but in my case I'm doing it to be explicit. Note that calling .keys("*") will also return keys that belong to Celery tasks.
from django.conf import settings
from django.core.cache import cache
cache_keys = cache._cache.get_client().keys(
f"*{settings.CACHES['default']['KEY_PREFIX']}*"
)
Bonus: Deleting all app keys
If you want to clear the cache for your specific app instead of ALL the keys in Redis, you'll want to use the prior technique and then call cache.delete_many(cache_keys) instead of cache.clear(), as the Django docs warn that cache.clear() will remove ALL keys in your cache, not just the ones created by your app.
You can use memcached_stats from: https://github.com/dlrust/python-memcached-stats. This package makes it possible to view the memcached keys from within the python environment.
If this is not too out of date: I had a similar issue because I had to iterate over the whole cache. I managed it by doing the following (pseudocode) whenever I add something to my cache:
# create the cache's key list if it does not exist
if not my_cache.get("keys"):
    my_cache.set("keys", [])

# add to my cache
my_cache.set(key, value)

# add key to keys
if key not in my_cache.get("keys"):
    keys_list = my_cache.get("keys")
    keys_list.append(key)
    my_cache.set("keys", keys_list)
Hope this helps.
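Wrapped up as a small class, that bookkeeping is easier to reuse and to test. A sketch against a plain dict standing in for the cache backend (KeyTrackingCache is my name, not a Django API):

```python
class KeyTrackingCache:
    """Cache wrapper that records every key it sets under a "keys" entry."""

    def __init__(self, backend=None):
        # Any mapping with dict-like get/setdefault works as the backend.
        self._backend = backend if backend is not None else {}

    def set(self, key, value):
        self._backend[key] = value
        keys = self._backend.setdefault("keys", [])
        if key not in keys:
            keys.append(key)

    def get(self, key):
        return self._backend.get(key)

    def all_keys(self):
        return list(self._backend.get("keys", []))

cache = KeyTrackingCache()
cache.set("views/home", "<html>...")
cache.set("views/about", "<html>...")
print(cache.all_keys())
```

Note this is not atomic: with a real shared cache, two processes updating the "keys" list concurrently can race, which is one reason backend-level key listing (as in the other answers) is usually preferable.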
Ref:
https://lzone.de/blog/How-to%20Dump%20Keys%20from%20Memcache
https://github.com/dlrust/python-memcached-stats
import re
import telnetlib

key_regex = re.compile(r"ITEM (.*) \[(.*); (.*)\]")
slab_regex = re.compile(r'STAT items:(.*):number')

class MemcachedStats:

    def __init__(self, host='localhost', port='11211'):
        self._host = host
        self._port = port
        self._client = None

    @property
    def client(self):
        if self._client is None:
            self._client = telnetlib.Telnet(self._host, self._port)
        return self._client

    def command(self, cmd):
        ' Write a command to telnet and return the response '
        self.client.write("{}\n".format(cmd).encode())
        res = self.client.read_until('END'.encode()).decode()
        return res

    def slab_ids(self):
        ' Return a list of slab ids in use '
        slab_ids = slab_regex.findall(self.command('stats items'))
        slab_ids = list(set(slab_ids))
        return slab_ids

    def get_keys_on_slab(self, slab_id, limit=1000000):
        cmd = "stats cachedump {} {}".format(slab_id, limit)
        cmd_output = self.command(cmd)
        matches = key_regex.findall(cmd_output)
        keys = set()
        for match_line in matches:
            keys.add(match_line[0])
        return keys

    def get_all_keys(self):
        slab_ids = self.slab_ids()
        all_keys = set()
        for slab_id in slab_ids:
            all_keys.update(self.get_keys_on_slab(slab_id))
        return list(all_keys)

def main():
    m = MemcachedStats()
    print(m.get_all_keys())

if __name__ == '__main__':
    main()
There are some weird workarounds you can do to get all keys from the command line, but there is no way to do this with memcached inside of Django. See this thread.