For simplicity, I think I need to rewrite this as just one statement:
config = {'webapp2_extras.jinja2': {'template_path': 'templates',
                                    'filters': {
                                        'timesince': filters.timesince,
                                        'datetimeformat': filters.datetimeformat},
                                    'environment_args': {'extensions': ['jinja2.ext.i18n']}}}
config['webapp2_extras.sessions'] = \
    {'secret_key': 'my-secret-key'}
Then I want to know where to put this config if I use multiple files with multiple request handlers. Should I just put it in one file and import it into the others? Since the session secret key is sensitive, what is your recommendation for handling it in source control? Should I always change the secret before or after committing?
Thank you
Just add 'webapp2_extras.sessions' to your dict initializer:
config = {'webapp2_extras.jinja2': {'template_path': 'templates',
                                    'filters': {
                                        'timesince': filters.timesince,
                                        'datetimeformat': filters.datetimeformat},
                                    'environment_args': {'extensions': ['jinja2.ext.i18n']}},
          'webapp2_extras.sessions': {'secret_key': 'my-secret-key'}}
This would be clearer if the nesting were explicit, though:
config = {
    'webapp2_extras.jinja2': {
        'template_path': 'templates',
        'filters': {
            'timesince': filters.timesince,
            'datetimeformat': filters.datetimeformat
        },
        'environment_args': {'extensions': ['jinja2.ext.i18n']},
    },
    'webapp2_extras.sessions': {'secret_key': 'my-secret-key'}
}
I would recommend storing those values in a datastore Entity for more flexibility and caching them in instance memory at startup.
You could also consider keeping a config.py file excluded from source control, if you want to get things done quickly.
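For example, here is a minimal sketch of the second approach (the module name secret_settings.py and the constant name are illustrative, not part of the original answer): keep the sensitive values in a module listed in .gitignore, and import it wherever the config dict is built.

# secret_settings.py -- excluded from source control via .gitignore (illustrative name)
SESSION_SECRET_KEY = 'my-secret-key'

# settings.py -- the shared, committed module that builds the config dict
import secret_settings

config['webapp2_extras.sessions'] = {'secret_key': secret_settings.SESSION_SECRET_KEY}

This also addresses the multi-file question: define config once in a single module and import that module from every file that constructs a webapp2.WSGIApplication or needs the settings.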
I am using a JSON config file with the Tortoise ORM setup.
This is a sample entry from the working file:
"apps": {
"my_app": {
"models": ["my_models.users", "my_models.teams"],
"default_connection": "default",
}
},
my_models is the folder, and users.py and teams.py are files inside it.
I would like to specify only the parent folder my_models in the config file, rather than enumerating every submodule under that directory (my_models.users, my_models.teams, etc.).
On larger projects this gets pretty ugly and difficult to maintain.
The question is: "Is there a way to specify only the folder name instead?"
For example with a wildcard, making the configuration element look something like:
"apps": {
"my_app": {
"models": ["my_models.*"],
"default_connection": "default",
}
},
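As far as I know, Tortoise expects explicit module paths rather than wildcards, so one workaround (an illustrative sketch, not something from the question) is to expand the package into a module list at runtime with the standard library's pkgutil and pass that list to Tortoise instead of hard-coding it:

# Illustrative sketch: discover every module directly inside the my_models package.
import pkgutil

import my_models


def discover_models(package):
    # Enumerate the modules directly inside the package, e.g. "my_models.users".
    return [
        f"{package.__name__}.{name}"
        for _, name, is_pkg in pkgutil.iter_modules(package.__path__)
        if not is_pkg
    ]


TORTOISE_ORM = {
    # Connection settings omitted; reuse whatever the JSON file already defines.
    "apps": {
        "my_app": {
            "models": discover_models(my_models),
            "default_connection": "default",
        }
    },
}

If the configuration must stay in a JSON file, a small startup script could run the same discovery and rewrite the "models" list before the app loads it.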
I'm trying to write a highly modular Python logging system (using the logging module) and include information from the trace module in the log message.
For example, I want to be able to write a line of code like:
my_logger.log_message(MyLogFilter, "this is a message")
and have it include the trace of where the "log_message" call was made, instead of the actual logger call itself.
I almost have the following code working except for the fact that the trace information is from the logging.debug() call rather than the my_logger.log_message() one.
import logging


class MyLogFilter(logging.Filter):
    def __init__(self):
        self.extra = {"error_code": 999}
        self.level = "debug"

    def filter(self, record):
        for key in self.extra.keys():
            setattr(record, key, self.extra[key])
        return True  # a filter must return a truthy value or the record is dropped


class myLogger(object):
    def __init__(self):
        fid = logging.FileHandler("test.log")
        formatter = logging.Formatter('%(pathname)s:%(lineno)i, %(error_code)i, %(message)s')
        fid.setFormatter(formatter)
        self.my_logger = logging.getLogger(name="test")
        self.my_logger.setLevel(logging.DEBUG)
        self.my_logger.addHandler(fid)

    def log_message(self, lfilter, message):
        xfilter = lfilter()
        self.my_logger.addFilter(xfilter)
        log_funct = getattr(self.my_logger, xfilter.level)
        log_funct(message)


if __name__ == "__main__":
    logger = myLogger()
    logger.log_message(MyLogFilter, "debugging")
This is a lot of trouble to go through in order to make a simple logging.debug() call, but in reality I will have many different versions of MyLogFilter at different logging levels, each containing a different value of the "error_code" attribute, and I'm trying to keep the log_message() call as short and sweet as possible because it will be repeated numerous times.
I would appreciate any information about how to do what I want, or, if I'm completely off on the wrong track, what I should be doing instead.
I would like to stick to the internal Python modules "logging" and "trace" if possible, rather than using any external solutions.
"...or, if I'm completely off on the wrong track, what I should be doing instead."
My strong suggestion is that you view logging as a solved problem and avoid reinventing the wheel.
If you need more than the standard library's logging module provides, what you probably want is something like structlog (pip install structlog).
Structlog will give you:
data binding
cloud native structured logging
pipelines
...and more
It will handle most local and cloud use cases.
Below is one common configuration that writes colorized logging to stdout and plain logging to a .log file, and that can be extended further to log to, for example, AWS CloudWatch.
Notice the included StackInfoRenderer processor: it adds stack information to every logging call that passes a truthy stack_info value (this also exists in the stdlib's logging, by the way). If you only want stack info for exceptions, pass something like exc_info=True on those logging calls instead.
main.py
from structlog import get_logger
from logging_config import configure_local_logging

configure_local_logging()

logger = get_logger()
logger.info("Some random info")
logger.debug("Debugging info with stack", stack_info=True)

try:
    assert 'foo' == 'bar'
except Exception as e:
    logger.error("Error info with an exc", exc_info=e)
logging_config.py
import logging
import logging.config

import structlog


def configure_local_logging(filename=__name__):
    """Provides a structlog colorized console renderer and a plain file renderer."""
    timestamper = structlog.processors.TimeStamper(fmt="%Y-%m-%d %H:%M:%S")
    pre_chain = [
        structlog.stdlib.add_log_level,
        timestamper,
    ]

    logging.config.dictConfig({
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "plain": {
                "()": structlog.stdlib.ProcessorFormatter,
                "processor": structlog.dev.ConsoleRenderer(colors=False),
                "foreign_pre_chain": pre_chain,
            },
            "colored": {
                "()": structlog.stdlib.ProcessorFormatter,
                "processor": structlog.dev.ConsoleRenderer(colors=True),
                "foreign_pre_chain": pre_chain,
            },
        },
        "handlers": {
            "default": {
                "level": "DEBUG",
                "class": "logging.StreamHandler",
                "formatter": "colored",
            },
            "file": {
                "level": "DEBUG",
                "class": "logging.handlers.WatchedFileHandler",
                "filename": filename + ".log",
                "formatter": "plain",
            },
        },
        "loggers": {
            "": {
                "handlers": ["default", "file"],
                "level": "DEBUG",
                "propagate": True,
            },
        },
    })

    structlog.configure_once(
        processors=[
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            timestamper,
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.stdlib.ProcessorFormatter.wrap_for_formatter,
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )
Structlog can do quite a bit more than this. I suggest you check it out.
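To illustrate the "data binding" bullet above, here is a short sketch (an illustrative addition with made-up field names): bind context once, and it is attached to every subsequent call on that logger.

from structlog import get_logger

logger = get_logger()

# bind() returns a new logger that carries these key/value pairs on every call.
log = logger.bind(request_id="abc123", user="alice")
log.info("request received")           # includes request_id and user
log.info("request completed", ms=42)   # per-call fields can still be added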
It turns out the missing piece of the puzzle is the "traceback" module rather than the "trace" one. It's simple enough to parse the output of traceback to pull out the source filename and line number of the ".log_message()" call.
If my logging needs become any more complicated, I'll definitely look into structlog. Thank you for that information, as I'd never heard of it before.
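For reference, a minimal sketch (an illustrative addition, not from the original poster) of that traceback idea: call a helper from inside log_message() and walk back past the helper and log_message() itself to reach the real caller.

import traceback


def caller_location(depth=2):
    # extract_stack() ends with this helper's own frame; with depth=2 we skip
    # this helper and log_message() and land on the code that called log_message().
    frame = traceback.extract_stack()[-1 - depth]
    return frame.filename, frame.lineno

On Python 3.8+, the stdlib logging calls also accept a stacklevel argument (e.g. stacklevel=2), which shifts %(pathname)s and %(lineno)d to an outer frame and may remove the need for manual parsing.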
I am struggling to understand how to configure pysaml2 so that the AuthnContext is added to my request.
I have an SP, and I need the following element included when the client performs the login request:
<samlp:RequestedAuthnContext>
    <saml:AuthnContextClassRef>
        urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
    </saml:AuthnContextClassRef>
</samlp:RequestedAuthnContext>
I have tried everything I could, and I believe it is possible to add this to my requests, because in https://github.com/IdentityPython/pysaml2/blob/master/src/saml2/samlp.py I can see:
AUTHN_PASSWORD = "urn:oasis:names:tc:SAML:2.0:ac:classes:Password"
AUTHN_PASSWORD_PROTECTED = \
"urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport"
I just do not know how to reference it. I have a simple configuration like this:
"service": {
"sp": {
"name": "BLABLA",
"allow_unsolicited": true,
"want_response_signed": false,
"logout_requests_signed": true,
"endpoints": {
"assertion_consumer_service": ["https://mywebste..."],
"single_logout_service": [["https://mywebste...", "urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect"]]
},
"requestedAuthnContext" : true
}
}
Does anyone know how to add the above configuration?
I struggle to understand how to build the config dictionary, even after reading their docs. Any ideas?
I am happy to add "PasswordProtectedTransport" directly in the code if the config does not allow it, but I am not sure how to do that either.
Thanks,
R
At some point your client calls create_authn_request(...) (or prepare_for_authenticate(...), or prepare_for_negotiated_authenticate(...)).
You should pass the extra argument requested_authn_context.
The requested_authn_context is an object of type saml2.samlp.RequestedAuthnContext that contains the wanted AuthnContextClassRef:
...
from saml2.saml import AUTHN_PASSWORD_PROTECTED
from saml2.saml import AuthnContextClassRef
from saml2.samlp import RequestedAuthnContext
requested_authn_context = RequestedAuthnContext(
    authn_context_class_ref=[
        AuthnContextClassRef(AUTHN_PASSWORD_PROTECTED),
    ],
    comparison="exact",
)

req_id, request = create_authn_request(
    ...,
    requested_authn_context=requested_authn_context,
)
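If you go through the client rather than calling create_authn_request directly, the keyword argument should be forwarded along, as described above. A hedged sketch (sp_config and idp_entity_id are placeholders for your own configuration, not values from the question):

from saml2.client import Saml2Client

client = Saml2Client(config=sp_config)
req_id, info = client.prepare_for_authenticate(
    entityid=idp_entity_id,
    requested_authn_context=requested_authn_context,
)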
I'd like to use Bonobo to move data from one Postgres database to another on different services. I have the connections configured and would like to use one during extraction and one during loading.
Here is my testing setup:
source_connection_config_env = 'DEV'
source_connection_config = get_config(source_connection_config_env)
target_connection_config_env = 'TRAINING'
target_connection_config = get_target_connection_config(target_connection_config_env)

...

def get_services(**options):
    connection = options.get('connection')  # intended to be 'source' or 'target'
    if connection == 'source':
        return {
            'sqlalchemy.engine': create_postgresql_engine(**{
                'host': source_connection_config.source_postres_connection['HOST'],
                'name': source_connection_config.source_postres_connection['DATABASE'],
                'user': source_connection_config.source_postres_connection['USER'],
                'pass': source_connection_config.source_postres_connection['PASSWORD']
            })
        }
    if connection == 'target':
        return {
            'sqlalchemy.engine': create_postgresql_engine(**{
                'host': target_connection_config.target_postres_connection['HOST'],
                'name': target_connection_config.target_postres_connection['DATABASE'],
                'user': target_connection_config.target_postres_connection['USER'],
                'pass': target_connection_config.target_postres_connection['PASSWORD']
            })
        }
I'm not sure where the best place to change connections is, or how to actually go about it.
Thanks in advance!
As far as I understand, you want to use both the source and the target connection in the same graph (I hope I got that right).
So you cannot have this conditional, as it would return only one of them.
Instead, I'd return both, named differently:
def get_services(**options):
    return {
        'engine.source': create_postgresql_engine(**{...}),
        'engine.target': create_postgresql_engine(**{...}),
    }
And then use different connections in the transformations:
graph.add_chain(
    Select(..., engine='engine.source'),
    ...,
    InsertOrUpdate(..., engine='engine.target'),
)
Note that service names are just strings; there is no convention or naming pattern enforced. The 'sqlalchemy.engine' name is just the default, but you don't have to stick to it as long as you configure your transformations with the names you actually use.
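To show how the two named engines get used end to end, here is a hedged sketch of the wiring (the query and table names are placeholders; Select and InsertOrUpdate are used as in the chain above):

import bonobo
from bonobo_sqlalchemy import Select, InsertOrUpdate


def get_graph(**options):
    graph = bonobo.Graph()
    graph.add_chain(
        # Read rows from the source database and write them to the target database.
        Select('SELECT * FROM my_source_table', engine='engine.source'),
        InsertOrUpdate('my_target_table', engine='engine.target'),
    )
    return graph


if __name__ == '__main__':
    parser = bonobo.get_argument_parser()
    with bonobo.parse_args(parser) as options:
        bonobo.run(get_graph(**options), services=get_services(**options))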
The pydocumentdb.document_client.DocumentClient object has a CreateCollection() method, defined here.
When creating a collection with this method, one needs to specify the database link (already known), the collection (I don't know how to reference it if it hasn't been made yet), and options.
Parameters that I would like to control when creating the collection are:
name of collection
type of collection (fixed size vs. partitioned)
partition keys
RU value
Indexing policy (or at least be able to create a default template somewhere and automatically copy it to the newly created one)
Enums for some of these parameters seem to be defined here, but I don't see any potentially useful HTTP headers in http_constants.py, and I don't see where RUs come into play or where a cohesive "Collection" object would be passed as a parameter.
You could refer to the sample source code here and the REST API reference here.
import pydocumentdb
import pydocumentdb.errors as errors
import pydocumentdb.document_client as document_client

config = {
    'ENDPOINT': 'https://***.documents.azure.com:443/',
    'MASTERKEY': '***'
}

# Initialize the Python DocumentDB client
client = document_client.DocumentClient(config['ENDPOINT'], {'masterKey': config['MASTERKEY']})

databaseLink = "dbs/db"

coll = {
    "id": "testCreate",          # collection name
    "indexingPolicy": {          # custom indexing policy (defaults apply if omitted)
        "indexingMode": "lazy",
        "automatic": False
    },
    "partitionKey": {            # presence of a partition key makes the collection partitioned
        "paths": [
            "/AccountNumber"
        ],
        "kind": "Hash"
    }
}

collection_options = {'offerThroughput': 400}  # RU/s provisioned for the collection

client.CreateCollection(databaseLink, coll, collection_options)
Hope it helps you.