When setting up a Pyramid app and adding settings to the Configurator, I'm having trouble understanding how to access information from the request, like request.session. I'm completely new to Pyramid and I've searched all over for information on this, but found nothing.
What I want to do is access information in the request object when sending out exception emails on production. I can't access the request object, since it's not global in the __init__.py file when creating the app. This is what I've got now:
import logging
import logging.handlers
from logging import Formatter

# This runs inside main() in __init__.py, where config is the Configurator:
config.include('pyramid_exclog')
logger = logging.getLogger()
gm = logging.handlers.SMTPHandler(('localhost', 25), 'email@email.com', ['email@email.com'], 'Error')
gm.setLevel(logging.ERROR)
logger.addHandler(gm)
This works fine, but I want to include information about the logged in user when sending out the exception emails, stored in session. How can I access that information from __init__.py?
Attempting to make request a global variable, or to somehow store a pointer to the "current" request globally (which is what you'd be doing by subscribing to the NewRequest event), is not a good idea: a Pyramid application can have more than one thread of execution, so more than one request can be active within a single process at the same time. The approach may appear to work during development, when the application runs in single-threaded mode and only one user accesses it, but it will produce really funny results when deployed to a production server.
Pyramid has a pyramid.threadlocal.get_current_request() function which returns the thread-local request; however, the docs state that:
This function should be used extremely sparingly, usually only in unit testing code. It's almost always usually a mistake to use get_current_request outside a testing context because its usage makes it possible to write code that can be neither easily tested nor scripted.
which suggests that the whole approach is not "pyramidic" (same as pythonic, but for Pyramid :)
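That said, if you accept the docs' caveat, the piece that would connect get_current_request() to your SMTPHandler is a logging filter. A minimal sketch, assuming the logged-in user is kept under a 'user' session key (that key and the addresses are placeholders):

import logging
import logging.handlers
from pyramid.threadlocal import get_current_request

class RequestContextFilter(logging.Filter):
    # Attach the session user (if any) to every record passing through.
    def filter(self, record):
        request = get_current_request()  # None when no request is active
        record.user = request.session.get('user') if request else '-'
        return True

handler = logging.handlers.SMTPHandler(('localhost', 25), 'app@example.com',
                                       ['admin@example.com'], 'Error')
handler.setLevel(logging.ERROR)
handler.addFilter(RequestContextFilter())
handler.setFormatter(logging.Formatter('user=%(user)s\n\n%(message)s'))
logging.getLogger().addHandler(handler)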
Possible other solutions include:
look at the exclog.extra_info parameter, which will include the environ and params attributes of the request in the log message
registering exception views would allow completely custom processing of exceptions (see the first sketch after this list)
using WSGI middleware, such as WebError#error_catcher or Paste#error_catcher, to send emails when an exception occurs
if you want to log not only exceptions but possibly other non-fatal information, maybe just writing a wrapper function would be enough:
if int(request.POST['donation_amount']) >= 1000000:
    send_email("Wake up, we're rich!", authenticated_userid(request))
Maybe someone here has a good idea on how to solve my issue. I have a REST API project driven by FastAPI. Every incoming request comes with a hash in the header. I am looking for a simple solution to write this hash as an extra parameter to the logs, without adding it by hand every time. I first came up with the idea of writing a middleware that stores the hash in a Logger object, and then later using that logger's log() function, which adds the hash automatically. But this only works for my own log messages: log messages from, for example, exceptions or from libraries I use don't have the extra parameter.
I solved a similar problem using the structlog library. It allows you to set global context variables that are output to every log line.
Note that this only applies to logs emitted by structlog; special configuration is required to output the stdlib logging logs with these context variables as well.
Here is a demo FastAPI application I wrote that generates a request-id for every incoming HTTP request and adds that request ID to every log line emitted during the request.
In this example I have configured the stdlib logging logs to be processed by structlog, so that they also get the request-id parameter added to them.
https://gitlab.com/sagbot/structlog-demo
Reference files in demo:
demo/route.py - A bunch of demo routes. Note that the /long route emits both stdlib logs and structlog logs, and both come out with the request-id property.
demo/configure-logging.py - This function configures both structlog and stdlib logging. Note that the "special configuration" required to show the context variables in stdlib logs lives here, in the custom formatters added to the logging.config.dictConfig(...) call.
demo/middlewares/request_logging/middleware.py - This is the middleware that generates the request-id and adds it to the logs. You can alter it so that, instead of generating the value, it extracts it from the hash header of the incoming request, as you wanted.
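For reference, a minimal sketch of the same pattern, reading the hash from an assumed X-Request-Hash header instead of generating an id (the header name, route, and field names are my assumptions, not the demo's):

import uuid
import structlog
from fastapi import FastAPI, Request

structlog.configure(
    processors=[
        structlog.contextvars.merge_contextvars,  # merge bound context into every event
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer(),
    ]
)
logger = structlog.get_logger()
app = FastAPI()

@app.middleware("http")
async def bind_request_hash(request: Request, call_next):
    structlog.contextvars.clear_contextvars()
    # Fall back to a generated id when the header is missing.
    request_hash = request.headers.get("X-Request-Hash", uuid.uuid4().hex)
    structlog.contextvars.bind_contextvars(request_hash=request_hash)
    return await call_next(request)

@app.get("/ping")
async def ping():
    logger.info("handled ping")  # this log line carries request_hash
    return {"ok": True}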
I simply want to receive notifications from dropbox that a change has been made. I am currently following this tutorial:
https://www.dropbox.com/developers/reference/webhooks#tutorial
The GET method is done, verification is good.
However, when trying to mimic their implementation of POST, I am struggling because of a few things:
I have no idea what redis_url means in the process_user function of the tutorial.
I can't actually verify whether anything is really being sent from Dropbox.
Also, any advice on how I can debug? I can't print anything from my program, since it has to run on a site rather than in an IDE.
Redis is a key-value store; it's just a way to cache your data throughout your application.
For example, the access token received after the OAuth callback is stored:
redis_client.hset('tokens', uid, access_token)
only to be used later in process_user:
token = redis_client.hget('tokens', uid)
(code from https://github.com/dropbox/mdwebhook/blob/master/app.py as suggested by their documentation: https://www.dropbox.com/developers/reference/webhooks#webhooks)
The same goes for per-user delta cursors that are also stored.
There are plenty of resources on how to install Redis, for example:
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-redis
In this case your redis_url would be something like:
"redis://localhost:6379/"
There are also hosted solutions, e.g. http://redistogo.com/
A possible workaround would be to use a database for this purpose.
As for debugging, you could use the logging facility for Python; it's thread-safe and can write its output to a file stream, and it should give you plenty of information if used properly.
More info here:
https://docs.python.org/2/howto/logging.html
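A minimal sketch of that, assuming the web process can write to /tmp (the path and message are placeholders):

import logging

logging.basicConfig(
    filename='/tmp/webhook.log',  # any path the web process can write to
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(message)s',
)

logging.debug('webhook hit')  # use this instead of print()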
I am using Flask-RESTful to build an API. I have a number of model classes with methods that may raise custom exceptions (for example, an AuthFailed exception on my User model class). I am using the custom error handling documented here to handle this (so that when auth fails, an appropriate response is sent). So far so good. However, I notice that when the exception is raised, although the correct JSON response and status are sent back, I still get a traceback, which is not ideal. Usually, if I handle an error (outside of Flask) with a try-except block, the except can catch the error and handle it (preventing the traceback). So what is the correct approach here? Am I misunderstanding how to use the errors feature?
Unfortunately for you, it is handled this way "by design" in Flask-RESTful's errors functionality.
The exceptions that are thrown are logged, and the corresponding response defined in the errors dict is returned.
However, you can change the level of log output by modifying the log level of Flask's logger like this:
import logging
from flask import Flask

app = Flask(__name__)
app.logger.setLevel(logging.CRITICAL)
I think you would actually have to set it to CRITICAL, because as far as I know these errors are still logged even at level ERROR.
Furthermore, both Flask and Flask-RESTful are open source. After looking at the code, I found the function of a Flask app that is responsible for adding the exception traceback to the log (Flask version 0.11.1).
Of course, you could just create your own app class (extending the original Flask class) that overrides this method (or a caller of it) and does something else instead. However, be careful when updating your Flask version if you rely on undocumented internals like this.
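A hedged sketch of that subclassing idea; one candidate hook is Flask.log_exception(), which is what writes the traceback, so overriding it changes (or silences) that output:

import logging
from flask import Flask

class QuietFlask(Flask):
    def log_exception(self, exc_info):
        # Replace the full traceback with a single-line error record.
        self.logger.error('Unhandled exception: %r', exc_info[1])

app = QuietFlask(__name__)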
I have a Pyramid application which uses a number of customizations, on the request object in particular, and I would like to be sure that these settings are actually configured, and configured correctly.
For example, I have the following (simplified for brevity):
config = Configurator()
config.add_request_method(lambda self: portal_object, name="portal", property=True)
config.set_default_permission('view')
config.add_request_method(auth.get_user, 'user', reify=True)
If these things are not set in the configuration, the application is either not going to work or going to be completely open.
The things I would be interested to test would be:
the portal property that I want to set on the request is the one I passed when configuring the application
by default, my views have a permission set (so unauthenticated users have a forbidden access)
my requests always have a user property, and this property is cached.
So far, I have tried to produce a "real" Pyramid request, which involves copy/pasting code from pyramid.router (not cool :( ). Although I haven't tried, I guess it would also work if I set up something like WebTest, but then I would be testing the whole stack, which I'm not interested in at the moment (the views, especially, are already tested separately).
What are my possibilities to test my application's configuration, and (hopefully) only this?
How about moving the configuration part into a separate function and creating a unit test against that function?
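A minimal sketch of that idea, using apply_request_extensions() (Pyramid 1.6+) to exercise the request methods without the full router (the function and variable names are assumptions):

from pyramid.config import Configurator
from pyramid.interfaces import IDefaultPermission
from pyramid.request import Request, apply_request_extensions

def configure_request_api(config, portal_object):
    # The configuration under test, moved out of main().
    config.add_request_method(lambda request: portal_object,
                              name='portal', property=True)
    config.set_default_permission('view')

def test_portal_and_default_permission():
    portal = object()
    config = Configurator()
    configure_request_api(config, portal)
    config.commit()
    # The default permission is registered as a utility on the registry.
    assert config.registry.queryUtility(IDefaultPermission) == 'view'
    # Build a bare request and apply the configured extensions to it.
    request = Request.blank('/')
    request.registry = config.registry
    apply_request_extensions(request)
    assert request.portal is portal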
App engine "modules" are a new (and experimental, and confusingly-named) feature in App Engine: https://developers.google.com/appengine/docs/python/modules. Developers are being urged to convert use of the "backends" feature to use of this new feature.
There seem to be two ways to start an instance of a module: to send a HTTP request to it (i.e. at http://modulename.appname.appspot.com for the appname application and modulename module), or to call google.appengine.api.modules.start_module().
The Simple Way
The simple way to start an instance of a module would seem to be to create an HTTP request. However, in my case this results in only two outcomes, neither of which is what I want:
If I use the name of the backend that my application defines, i.e. http://backend.appname.appspot.com, the request is properly routed to the backend and properly denied (because backend access is defined by default to be private).
Anything else results in the request being routed to the sole frontend instance of the default module, even using random character strings as module names, such as http://sdlsdjfsldfsdf.appname.appspot.com. This even holds for made-up instance IDs such as in the case of http://99.sdlsdjfsldfsdf.appname.appspot.com, etc. And of course (this is the problem) for the actual name of my module as well.
Starting via the API
The documentation says that calling start_module() with the name of a module and version should cause the specified version of the specified module to start up. However, I'm getting an UnexpectedStateError whenever I call this function with valid arguments.
The Unfortunate State of Affairs
Because I can't get this to work, I'm wondering if there is some subtlety that the documentation might not have mentioned. My setup is pretty straightforward, so perhaps this is a widespread problem to which someone has found a solution.
It turns out that versions cannot be purely numeric. The problem seems to have been happening because our module's version was "1" and not (for example) "v1".
With modules, they changed the terminology around a little bit. What used to be "backends" are now "basic scaling" or "manual scaling" instances.
"Automatic scaling" and "basic scaling" instances start when they process a request, while "manual scaling" instances run constantly.
Generally to start an instance you would send an HTTP request to your module's URL.
start_module() seems to be of limited use: mainly for starting modules with "manual scaling" instances, or for restarting modules that have been stopped with stop_module().
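For completeness, a small sketch of the call under discussion (the module and version names are placeholders; note the non-numeric version, per the accepted fix above):

from google.appengine.api import modules

# Starts the given version of the module; raises UnexpectedStateError
# if that version is already running.
modules.start_module('backend', 'v1')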
You can add:
login: admin
to the handler for your backend. This way an admin user can call your backend and trigger it to run. With login: admin, URLFetch requests issued from elsewhere in your app (i.e. from a frontend) can also trigger your backend.
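A hedged sketch of that URLFetch trigger, as it might run in a frontend handler (the /start path is an assumption):

from google.appengine.api import modules, urlfetch

# Resolve the module's hostname instead of hard-coding appspot URLs.
hostname = modules.get_hostname(module='backend')
urlfetch.fetch('http://%s/start' % hostname)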