Probably I don't quite understand how logging really works in Python. I'm trying to debug a Flask + SQLAlchemy app (without flask_sqlalchemy) that mysteriously hangs on some queries, but only when run under Apache, so I need proper logging to get meaningful information. The Flask application comes with a nice logger + handler by default, but how do I get SQLAlchemy to use the same logger?
The "Configuring Logging" section in the SQLAlchemy just explains how to turn on logging in general, but not how to "connect" SQLAlchemy's logging output to an already existing logger.
I've been looking at Flask + sqlalchemy advanced logging for a while with a blank, expressionless face. I have no idea if the answer to my question is even in there.
EDIT: Thanks to the answer given, I now know that I can have two loggers use the same handler. Now, of course, my Apache error log is littered with hundreds of lines of echoed SQL calls. I'd like to log only error messages to the httpd log and divert all lower-level stuff to a separate logfile; see the code below. However, I still get every debug message in the httpd log. Why?
if app.config['DEBUG']:
    # Make logger accept all log levels
    app.logger.setLevel(logging.DEBUG)
    for h in app.logger.handlers:
        # restrict logging to /var/log/httpd/error_log to errors only
        h.setLevel(logging.ERROR)

    if app.config['LOGFILE']:
        # configure debug logging only if logfile is set
        debug_handler = logging.FileHandler(app.config['LOGFILE'])
        debug_handler.setLevel(logging.DEBUG)
        app.logger.addHandler(debug_handler)

        # get logger for SQLAlchemy
        sq_log = logging.getLogger('sqlalchemy.engine')
        sq_log.setLevel(logging.DEBUG)

        # remove any preconfigured handlers there might be
        # (iterate over a copy so removing entries doesn't skip any)
        for h in list(sq_log.handlers):
            sq_log.removeHandler(h)
            h.close()

        # Now, SQLAlchemy should not have any handlers at all. Let's add one
        # for the logfile
        sq_log.addHandler(debug_handler)
You cannot make SQLAlchemy and Flask use the same logger, but you can make them write to the same place by adding a common handler. This article may also be helpful: https://www.electricmonk.nl/log/2017/08/06/understanding-pythons-logging-module/
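For illustration, a minimal sketch of that idea, assuming the Flask app object from the question and a hypothetical log path:

import logging

# one handler shared by both loggers (the path is hypothetical)
shared_handler = logging.FileHandler('/var/log/myapp/app.log')
shared_handler.setLevel(logging.DEBUG)

app.logger.addHandler(shared_handler)
logging.getLogger('sqlalchemy.engine').addHandler(shared_handler)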
By the way, if you want to correlate all the log lines belonging to a single request, you can set a unique name for the current thread before the request and add threadName to your logging formatter.
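A rough sketch of that thread-name trick, assuming the Flask app and the debug_handler from the question above are in scope:

import logging
import threading
import uuid

@app.before_request
def tag_request_thread():
    # give the handling thread a per-request name so its log lines can be grouped
    threading.current_thread().name = uuid.uuid4().hex[:8]

debug_handler.setFormatter(
    logging.Formatter('%(asctime)s [%(threadName)s] %(levelname)s %(name)s: %(message)s'))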
Answer to my question at EDIT: I still had echo=True set on create_engine, so what I saw was all the additional output on stderr. echo=False stops that, but the logger still logs at level DEBUG.
Clear all corresponding handlers created by SQLAlchemy:
logging.getLogger("sqlalchemy.engine.Engine").handlers.clear()
The code above should be called after the engine is created.
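A short sketch of that ordering, with a hypothetical DB_URL and the debug_handler from further up:

import logging
from sqlalchemy import create_engine

engine = create_engine(DB_URL, echo=True)  # echo=True makes SQLAlchemy add a console handler of its own

# drop SQLAlchemy's handler again and keep only our own
logging.getLogger("sqlalchemy.engine.Engine").handlers.clear()
logging.getLogger("sqlalchemy.engine").addHandler(debug_handler)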
Related
I'm building a website using Flask and I'm now in the process of adding some logging to it for which I found these docs. The basic example is as follows:
if not app.debug:
    import logging
    from themodule import TheHandlerYouWant
    file_handler = TheHandlerYouWant(...)
    file_handler.setLevel(logging.WARNING)
    app.logger.addHandler(file_handler)
after which you can log using app.logger.error('An error occurred'). This works fine, but apart from the fact that I do not see any advantage over the regular Python logging module, I also see a major downside: if I want to log outside of a request context (for example when running some code from a cron job), I get errors because I'm using app outside of the request context.
So my main question: why would I use the Flask logger at all? What is the reason that it was ever built?
The Flask logger uses the "generic" Python logging machinery; it's a logging.getLogger(name) logger like any other.
The logger exists so that Flask app and views can log things that happen during execution. For example, it will log tracebacks on 500 errors during debug mode. The configuration example is there to show how to enable these logs, which are still useful in production, when you are not in debug mode.
Having an internal logger is not unique to Flask; it's the standard way to use logging in Python. Every module defines its own logger, but the configuration of logging is only handled by the last link in the chain: your project.
You can also use app.logger for your own messages, although it's not required. You could also create a separate logger for your own messages. In the end, it's all Python logging.
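To make the cron-job concern concrete, here is a small sketch (names are illustrative): a plain module-level logger works with or without an app or request context, and it feeds into the same logging tree that app.logger lives in.

import logging

log = logging.getLogger(__name__)  # ordinary stdlib logger, no Flask required

def nightly_cleanup():
    # callable from a cron script, no app/request context needed
    log.info("running nightly cleanup")

# the project-level logging configuration (handlers, levels) applies to this
# logger and to app.logger alike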
In my Django project, I'm using suds, which logs to a predetermined logger (suds.client) and doesn't let me configure a different one for its messages. I'm trying to get around this by adding one of the handlers I have defined in settings.py. In the code that uses suds:
logging.basicConfig(level=logging.INFO)
suds_logger = logging.getLogger('suds.client')
suds_logger.setLevel(logging.INFO)
suds_logger.addHandler('my_handler')
suds_logger.propagate = False
This is obviously incorrect (because I'm just passing the string naming the handler and not the handler itself) and results in this error:
ERROR:my_handler:'str' object has no attribute 'level'
It seems like all I have to do is correctly reference the 'my_handler' handler from settings and I should be good to go. So, what is the correct syntax?
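One hedged sketch of the usual wiring: addHandler() needs a logging.Handler object, not a string, and in Django the handler name only means something inside the LOGGING dict in settings.py, where you can point the suds.client logger at it directly (the handler details below are placeholders):

# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'my_handler': {
            'class': 'logging.StreamHandler',  # placeholder; keep your real handler config
            'level': 'INFO',
        },
    },
    'loggers': {
        'suds.client': {
            'handlers': ['my_handler'],
            'level': 'INFO',
            'propagate': False,
        },
    },
}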
When setting up a Pyramid app and adding settings to the Configurator, I'm having issues understanding how to access information from request, like request.session and such. I'm completely new at using Pyramid and I've searched all over the place for information on this but found nothing.
What I want to do is access information in the request object when sending out exception emails on production. I can't access the request object, since it's not global in the __init__.py file when creating the app. This is what I've got now:
import logging
import logging.handlers
from logging import Formatter
config.include('pyramid_exclog')
logger = logging.getLogger()
gm = logging.handlers.SMTPHandler(('localhost', 25), 'email@email.com', ['email@email.com'], 'Error')
gm.setLevel(logging.ERROR)
logger.addHandler(gm)
This works fine, but I want to include information about the logged in user when sending out the exception emails, stored in session. How can I access that information from __init__.py?
Attempting to make request a global variable, or to somehow store a pointer to the "current" request globally (if that's what you're going to try by subscribing to the NewRequest event), is not a terribly good idea: a Pyramid application can have more than one thread of execution, so more than one request can be active within a single process at the same time. The approach may appear to work during development, when the application runs in single-threaded mode and just one user accesses it, but it produces really funny results when deployed to a production server.
Pyramid has a pyramid.threadlocal.get_current_request() function which returns the thread-local request variable; however, the docs state that:
"This function should be used extremely sparingly, usually only in unit testing code. It's almost always usually a mistake to use get_current_request outside a testing context because its usage makes it possible to write code that can be neither easily tested nor scripted."
This suggests that the whole approach is not "pyramidic" (like "pythonic", but for Pyramid :)
Possible other solutions include:
- look at the exclog.extra_info parameter, which includes the environ and params attributes of the request in the log message
- register exception views, which allows completely custom processing of exceptions (see the sketch after this list)
- use WSGI middleware, such as WebError#error_catcher or Paste#error_catcher, to send emails when an exception occurs
- if you want to log not only exceptions but possibly other non-fatal information, maybe just writing a wrapper function would be enough:
if int(request.POST['donation_amount']) >= 1000000:
    send_email("Wake up, we're rich!", authenticated_userid(request))
I'm running my app on the GAE development server, with app-engine-patch to run Django.
One of my views is buggy, so I want to log everything that happens.
I added this in myapp.views:
import logging
LOG_FILENAME = '/mylog.txt'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
and my function is:
def function(string):
    logging.debug('new call')
    # do stuff
    # logging.debug('log stuff')
My problem is that I can't find the log. When I run my app I get no errors, but the log is not created.
I also tried various paths: /mylog.txt, mylog.txt, c:\mylog.txt, c:\complete\path\to\my\app\mylog.txt, but it doesn't work.
On the other hand I tried to create and run a separate test:
#test.py
import logging
LOG_FILENAME = '/mylog.txt'
logging.basicConfig(filename=LOG_FILENAME,level=logging.DEBUG)
logging.debug('test')
And the log is created without problems: c:\mylog.txt
I'm not familiar with logging, so I don't know if there might be some issues with Django or App Engine.
Thanks
You can't write to files on App Engine, so any attempt to log to a text file is doomed to failure. Log output will appear in the SDK console during development, or in the Logs console in production.
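In other words, drop the filename and just rely on the handlers App Engine already sets up; a sketch:

import logging

# no filename: records go to the existing root handlers, which the SDK console captures
logging.getLogger().setLevel(logging.DEBUG)

def function(string):
    logging.debug('new call')  # shows up in the SDK console / Logs viewer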
I am guessing the problem is that you put your log configuration in a view. A general rule of thumb for Django logging is to set up the log configuration in your settings.py.
By default, dev_appserver suppresses the debug log. To turn it on, run it with the --debug option. Note that dev_appserver itself uses the same logger, and will spew lots of debugging stuff you might not care about.
Your logging will print to stderr.