I was using django-environ to manage environment variables and everything was working fine; recently I moved to django-configurations.
My settings class inherits from configurations.Configuration, but I am having trouble getting values from the .env file. For example, retrieving DATABASE_NAME gives the following error:
TypeError: object of type 'Value' has no len()
I know the code below returns a values.Value instance instead of a string, but I am not sure why it does so. The same is the case with every other env variable.
My .env file is as follows:
DEBUG=True
DATABASE_NAME='portfolio_v1'
SECRET_KEY='your-secrete-key'
My settings.py file is as follows:
...
from configurations import Configuration, values
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': values.Value("DATABASE_NAME", environ=True),
        ...
I have verified that my .env file exists and is on a valid path.
I spent some more time resolving the above issue and found what was missing.
Prefixing .env variables is mandatory in django-configurations as the default behavior.
When dealing with dict keys, we have to provide the environ_name kwarg to the Value instance.
NOTE: .env variables should be prefixed with DJANGO_ even if you provide environ_name. If you want to override the prefix, you have to provide environ_prefix. i.e.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': values.Value(environ_name="DATABASE_NAME"),  # provide DJANGO_DATABASE_NAME='portfolio_v1' in .env file
Other use cases are:
VAR = values.Value()                         # works, provided DJANGO_VAR='var_value'
VAR = values.Value(environ_prefix='MYSITE')  # works, provided MYSITE_VAR='var_value'
CUSTOM_DICT = {
    'key_1': values.Value(environ_required=True),    # doesn't work: no attribute name to derive the env var from
    'key_2': values.Value(environ_name='other_key'), # works if DJANGO_other_key='value_2' is provided in .env
}
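For reference, a matching .env for the cases above might look like this (values are illustrative):
DJANGO_VAR='var_value'
MYSITE_VAR='var_value'
DJANGO_other_key='value_2'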
You are using django-configurations in the wrong way.
See the source code of the Value class:
class Value:
    @property
    def value(self):
        ...

    def __init__(self, default=None, environ=True, environ_name=None,
                 environ_prefix='DJANGO', environ_required=False,
                 *args, **kwargs):
        ...
So the first positional argument is the default value, not the name of the environment variable, and the environment variable in your .env file should start with DJANGO_.
Then, to get the actual value, you can use the value property, so your settings file should look like:
...
from configurations import Configuration, values
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': values.Value("DEFAULT_VAL").value,
        # No need for environ=True since it is the default
        ...
Well, I have a Django model that stores the name of a music band.
Trouble arises when a band called '✝✝✝ (Crosses)' is saved to a TextField.
This is the error:
django.db.utils.OperationalError: (1366, "Incorrect string value: '\\xE2\\x9C\\x9D\\xE2\\x9C\\x9D...' for column 'album_name' at row 1")
But here it gets weird: I have another table that stores the band info in a JsonField, and the same name '✝✝✝ (Crosses)' is stored there correctly. That JsonField used to be a TextField that stored json.dumps(dict_with_band_info), so the database holds something like
{ "name": "✝✝✝ (Crosses)" ...}. To repeat: this was a TextField before, and it works as expected.
So why does attempting to add "name": "✝✝✝ (Crosses)" raise that error in one TextField but not in the other table? I'm using pdb.set_trace() to inspect the values before calling save().
I would like to stress again that the error never appeared even when the JsonField in my band info table was a TextField, yet it does appear for the band_name TextField, exactly at instance.save(). From this I deduce that my text fields are ready to receive Unicode, because the JsonField in the band info table shows "✝✝✝ (Crosses)" just fine. So why does the save step choke on UTF-8 only for the band name TextField?
The only difference I see is in how I call the models.
When I save the band info, I call the model like this:
from bands.models import BandInfo
from apis import music_api as api

# Expected to be a dict
band_info = api.get_band_info(song="This is a trick", singer="chino moreno")[0]
band = BandInfo()
band.band_info = band_info  # {'name': '✝✝✝ (Crosses)'}
band.save()
and when I save the band_name:
def save_info(Table, data: dict):
    instance_ = Table(
        name=data['name']  # '✝✝✝ (Crosses)'
    )
    instance_.save()
then in another file:
from apis import music_api as api
from bands import snippets
from bands.models import Tracks

track_info = api.get_track_info(song="This is a trick", singer="chino moreno")[0]
snippets.save_info(Tracks, track_info)
Using: Python 3.9.1
Django 3.1.7
MySQL Workbench 8 with the community installation
Well, I hope I'm making an obvious mistake.
MySQL's utf8 permits only the Unicode characters that can be represented with 3 bytes in UTF-8. If you have MySQL 5.5 or later you can change the column encoding from utf8 to utf8mb4. This encoding allows storage of characters that occupy 4 bytes in UTF-8.
To do this, set the charset option to utf8mb4 in the OPTIONS dict of the DATABASES setting in the Django settings file.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'my_db',
        'USER': 'my_user',
        'PASSWORD': 'my_pass',
        'HOST': 'my.host',
        'OPTIONS': {
            'charset': 'utf8mb4'  # This is the important line
        }
    }
}
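Note that the charset option affects new connections; tables and columns already created as utf8 also need converting. A minimal sketch of doing that in a Django migration (the app label, dependency, and table name are hypothetical):
# bands/migrations/0002_utf8mb4.py -- path and names are hypothetical
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [('bands', '0001_initial')]  # hypothetical dependency

    operations = [
        migrations.RunSQL(
            # convert the table and its text columns to utf8mb4
            "ALTER TABLE bands_tracks CONVERT TO CHARACTER SET utf8mb4 "
            "COLLATE utf8mb4_unicode_ci;",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]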
I'm new to the concept of using a global variable to spawn new DB sessions. I want to import the settings.py file into manage.py to call a method that loads environment variables using the dotenv helper library. However, since the variables are not yet initialized at the moment settings.py is imported into manage.py (the .env file name is yet to be read from sys.argv), the engine construction line cannot stay "in plain sight" (unindented at module level). I wonder if settings.py is the right place for such a global variable? I'm used to settings.py containing only dictionaries of basic types, no objects or classes.
Right now, init_db_globs looks like this:
import os

from dotenv import load_dotenv
from sqlalchemy import create_engine, orm

# set the default db and .env file name
def init_db_globs(db_name='db', env_file_name='.env'):
    # refer to global vars
    global engine, session
    # connection string template
    template_uri = 'postgresql://{}:{}@{}:{}/{}'
    # load env vars
    load_dotenv(env_file_name)
    # load database parameters from env vars
    dbs = {
        'db': {
            # 'var': os.getenv('DB_VAR'),
        },
        'db_test': {
            # 'var': os.getenv('DB_VAR_TEST'),
        }
    }
    # pick the requested parameter set
    db = dbs[db_name]
    uri = template_uri.format(db['user'], db['password'], db['host'], db['port'], db['name'])
    engine = create_engine(uri)
    # create a configured "Session" class
    Session = orm.sessionmaker(bind=engine)
    # init global session
    session = Session()

# create global vars
session = None
engine = None
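For reference, here is a minimal sketch of how manage.py calls this, assuming the .env file name is read from sys.argv (the argument handling shown is hypothetical):
# manage.py (sketch; argument handling is hypothetical)
import sys

import settings

# read the .env file name from the command line, defaulting to '.env'
env_file = sys.argv[1] if len(sys.argv) > 1 else '.env'
settings.init_db_globs(env_file_name=env_file)

# from here on, settings.engine and settings.session are initialized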
How should this be improved?
Actually, we built a webapp from which we pass variables to Terraform
like below:
terraform apply -input=false -auto-approve -var ami="%ami%" -var region="%region%" -var icount="%count%" -var type="%instance_type%"
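From Python, issuing that call can look like this rough sketch (illustrative only, not our exact code):
import subprocess

def run_apply(ami, region, count, instance_type):
    # build the terraform command with -var flags and run it
    cmd = [
        'terraform', 'apply', '-input=false', '-auto-approve',
        '-var', 'ami={}'.format(ami),
        '-var', 'region={}'.format(region),
        '-var', 'icount={}'.format(count),
        '-var', 'type={}'.format(instance_type),
    ]
    subprocess.run(cmd, check=True)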
The problem here is that the backend configuration does not support variables, and I need to pass those values from the app as well.
To resolve this, I found suggestions that we should create backend.tf before execution.
But I cannot figure out how to do it; if anyone has any examples regarding this, please help me.
Thanks in advance.
I need to create the backend.tf file from Python using the variables below.
And I need to replace key="${profile}/tfstate
for each profile; the profile needs to be replaced.
I am thinking of using a git repo: with git we create the files, pull the values, commit again, and execute.
Please help me with some examples and ideas.
The code is like below. My main.tf:
terraform {
  backend "s3" {
    bucket  = "terraform-007"
    key     = "key"
    region  = "ap-south-1"
    profile = "venu"
  }
}

provider "aws" {
  profile = "${var.aws_profile}"
  region  = "${var.aws_region}"
}

resource "aws_instance" "VM" {
  count         = var.icount
  ami           = var.ami
  instance_type = var.type
  tags = {
    Environment = "${var.env_indicator}"
  }
}
My vars.tf:
variable "aws_profile" {
  default     = "default"
  description = "AWS profile name, as set in ~/.aws/credentials"
}

variable "aws_region" {
  type        = "string"
  default     = "ap-south-1"
  description = "AWS region in which to create resources"
}

variable "env_indicator" {
  type        = "string"
  default     = "dev"
  description = "What environment are we in?"
}

variable "icount" {
  default = 1
}

variable "ami" {
  default = "ami-54d2a63b"
}

variable "bucket" {
  default = "terraform-002"
}

variable "type" {
  default = "t2.micro"
}
My output.tf:
output "ec2_public_ip" {
  value = ["${aws_instance.VM.*.public_ip}"]
}

output "ec2_private_ip" {
  value = ["${aws_instance.VM.*.private_ip}"]
}
Since the configuration for the backend cannot use interpolation, we have used a configuration by convention approach.
The terraform for all of our state collections (microservices and other infrastructure) use the same S3 bucket for state storage and the same DynamoDB table for locking.
When executing terraform, we use the same IAM role (a dedicated terraform only user).
We define the key for the state via convention, so that it does not need to be generated.
key = "platform/services/{name-of-service}/terraform.tfstate"
I would avoid a process that results in changes to the infrastructure code as it is being deployed, to ensure maximum understandability for the engineers reading and maintaining the code.
EDIT: Adding key examples
For the user service:
key = "platform/services/users/terraform.tfstate"
For the search service:
key = "platform/services/search/terraform.tfstate"
For the product service:
key = "platform/services/products/terraform.tfstate"
I'd like to use Bonobo to move data from one Postgres database to another on different services. I have the connections configured and would like to use one during extraction and one during loading.
Here is my testing setup:
source_connection_config_env = 'DEV'
source_connection_config = get_config(source_connection_config_env)
target_connection_config_env = 'TRAINING'
target_connection_config = get_target_connection_config(target_connection_config_env)
...
def get_services(**options):
    # which connection to use is passed in via options
    connection = options.get('connection')
    if connection == 'source':
        return {
            'sqlalchemy.engine': create_postgresql_engine(**{
                'host': source_connection_config.source_postres_connection['HOST'],
                'name': source_connection_config.source_postres_connection['DATABASE'],
                'user': source_connection_config.source_postres_connection['USER'],
                'pass': source_connection_config.source_postres_connection['PASSWORD'],
            })
        }
    if connection == 'target':
        return {
            'sqlalchemy.engine': create_postgresql_engine(**{
                'host': target_connection_config.target_postres_connection['HOST'],
                'name': target_connection_config.target_postres_connection['DATABASE'],
                'user': target_connection_config.target_postres_connection['USER'],
                'pass': target_connection_config.target_postres_connection['PASSWORD'],
            })
        }
I'm not sure where the best place to change connections is, or how to actually go about it.
Thanks in advance!
As far as I understand, you want to use both source and target connection in the same graph (I hope I got this right).
So you cannot have this conditional, as it would return only one.
Instead, I'd return both, named differently:
def get_services(**options):
    return {
        'engine.source': create_postgresql_engine(**{...}),
        'engine.target': create_postgresql_engine(**{...}),
    }
And then use different connections in the transformations:
graph.add_chain(
    Select(..., engine='engine.source'),
    ...,
    InsertOrUpdate(..., engine='engine.target'),
)
Note that service names are just strings; there is no convention or naming pattern enforced. The 'sqlalchemy.engine' name is just the default, but you don't have to stick with it as long as you configure your transformations with the names you actually use.
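Putting it together, a minimal end-to-end sketch might look like this (the query and table name are placeholders):
import bonobo
from bonobo_sqlalchemy import Select, InsertOrUpdate

def get_graph():
    graph = bonobo.Graph()
    graph.add_chain(
        # read rows from the source database...
        Select('SELECT * FROM source_table', engine='engine.source'),
        # ...and upsert them into the target database
        InsertOrUpdate(table_name='target_table', engine='engine.target'),
    )
    return graph

if __name__ == '__main__':
    bonobo.run(get_graph(), services=get_services())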
For simplicity, I think I need to rewrite this as just one statement:
config = {'webapp2_extras.jinja2': {'template_path': 'templates',
                                    'filters': {
                                        'timesince': filters.timesince,
                                        'datetimeformat': filters.datetimeformat},
                                    'environment_args': {'extensions': ['jinja2.ext.i18n']}}}
config['webapp2_extras.sessions'] = \
    {'secret_key': 'my-secret-key'}
Then I want to know where to put it if I use multiple files with multiple request handlers. Should I just put it in one file and import it into the others? Since the session key is a secret, what are your recommendations for handling it via source control? Should I always change the secret before or after committing to source control?
Thank you
Just add 'webapp2_extras.sessions' to your dict initializer:
config = {'webapp2_extras.jinja2': {'template_path': 'templates',
                                    'filters': {
                                        'timesince': filters.timesince,
                                        'datetimeformat': filters.datetimeformat},
                                    'environment_args': {'extensions': ['jinja2.ext.i18n']}},
          'webapp2_extras.sessions': {'secret_key': 'my-secret-key'}}
This would be clearer if the nesting were explicit, though:
config = {
    'webapp2_extras.jinja2': {
        'template_path': 'templates',
        'filters': {
            'timesince': filters.timesince,
            'datetimeformat': filters.datetimeformat
        },
        'environment_args': {'extensions': ['jinja2.ext.i18n']},
    },
    'webapp2_extras.sessions': {'secret_key': 'my-secret-key'}
}
I would recommend storing those in a datastore Entity for more flexibility and caching them in the instance memory at startup.
You could also consider having a config.py file excluded from the source control, if you want to get things done quickly.
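For example, a minimal sketch of that quick approach (the file and variable names are illustrative):
# config.py -- excluded from source control (add it to .gitignore)
SESSION_SECRET_KEY = 'replace-me-with-a-long-random-string'
and then, in the module that builds the webapp2 config:
from config import SESSION_SECRET_KEY

config['webapp2_extras.sessions'] = {'secret_key': SESSION_SECRET_KEY}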