In a Flask application, I would like to configure two different file loggers:
One for HTTP access logs (access.log), which will log stuff like:
1.2.3.4 - [11/Jun/2018 09:51:13] "GET /some/path HTTP/1.1" 200 -
One for application logs (my_app.log), which will keep logs defined by me in my code when I'm using current_app.logger.info('some message'):
2018-06-08 15:08:50,083 - flask.app - INFO - some message
What should my configuration look like to achieve this? Here is what I tried, without success:
# content of "run.py" :
app = Flask(__name__)
app.logger.removeHandler(default_handler)
# Define 'my_app.log' :
handler = logging.FileHandler('my_app.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter(app.config['LOGGING_FORMAT'])
handler.setFormatter(formatter)
app.logger.addHandler(handler)
# Define 'access.log' :
access_handler = logging.getLogger('werkzeug')
access_handler = logging.FileHandler('access.log')
access_handler.setLevel(logging.DEBUG)
access_handler.setFormatter(app.config['LOGGING_FORMAT'])
app.logger.addHandler(access_handler)
# Then register my blueprints:
app.register_blueprint(some_blueprint, url_prefix='/')
....
And I run it with python3 run.py. With this config, the only things that get logged are the HTTP access logs, and they end up in the my_app.log file.
What's wrong with my config?
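For reference, a minimal sketch of one way to get the intended split. It assumes werkzeug's default 'werkzeug' logger name and that app.config['LOGGING_FORMAT'] holds a format string; note that the attempt above overwrites the werkzeug logger variable with a FileHandler, passes a raw string (not a logging.Formatter) to setFormatter, and attaches the access handler to app.logger instead of the werkzeug logger:
import logging
from flask import Flask

app = Flask(__name__)

# Application logs -> my_app.log, via app.logger
app_handler = logging.FileHandler('my_app.log')
app_handler.setLevel(logging.INFO)
app_handler.setFormatter(logging.Formatter(app.config['LOGGING_FORMAT']))
app.logger.addHandler(app_handler)

# HTTP access logs -> access.log, via the 'werkzeug' logger (not app.logger)
access_logger = logging.getLogger('werkzeug')
access_handler = logging.FileHandler('access.log')
access_logger.addHandler(access_handler)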
I have a Python Flask micro-service in which we're using Python's logging library. I want to add a request session ID, or some other unique ID per session, to the logs, so I can easily grep the logs for a particular request.
We have one micro-service in Java in which we have done the same thing using this. I want to do the same thing in Python. How can I do this?
Also, we already have lots of log calls in our existing code; I want to achieve this without rewriting them (or with only a small modification, like passing some argument to the log function).
Currently our logging format is like this:
formatter = logging.Formatter('[%(asctime)s] - [%(threadName)s] [%(thread)d] - %(levelname)s in %(module)s at %(lineno)d: %(message)s')
Either use an adapter or add a filter to your logger. An adapter would wrap your logger, while with a filter you would be modifying the LogRecord object.
Using the adapter would work somewhat like this:
import logging
# either use Flask's built-in app.logger, or replace it with your own logger
request_id = 123 # implement however you prefer
app.logger = CustomAdapter(app.logger, {'request_id': request_id})
and use %(request_id)s somewhere in your formatter. app.logger is the standard logger that Flask provides.
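CustomAdapter is not defined above; a minimal sketch of what it could look like, assuming the standard library's logging.LoggerAdapter (the class name and merge behavior here are illustrative):
import logging

class CustomAdapter(logging.LoggerAdapter):
    """Inject the adapter's extra dict (e.g. request_id) into every record."""
    def process(self, msg, kwargs):
        # Merge self.extra into the call's 'extra' so the formatter can
        # reference %(request_id)s on the resulting LogRecord.
        kwargs["extra"] = {**self.extra, **kwargs.get("extra", {})}
        return msg, kwargs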
I found this way to add request IDs to Flask logs in this blog post from McPolemic. I adapted it a bit here.
You will need to implement your own logging.Filter (RequestIdFilter) that adds the req_id attribute to the record. Then, the Formatter (log_formatter) will pick the req_id up from the record and print it.
Since the req_id is stored in flask.g.request_id, the same ID is used throughout a request.
# server.py
import uuid
import logging

import flask
from flask import Flask


def get_request_id():
    if getattr(flask.g, 'request_id', None):
        return flask.g.request_id

    new_uuid = uuid.uuid4().hex[:10]
    flask.g.request_id = new_uuid
    return new_uuid


class RequestIdFilter(logging.Filter):
    # This is a logging filter that makes the request ID available for use in
    # the logging format. Note that we're checking if we're in a request
    # context, as we may want to log things before Flask is fully loaded.
    def filter(self, record):
        record.req_id = get_request_id() if flask.has_request_context() else ''
        return True


logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)

# The StreamHandler responsible for displaying
# logs in the console.
sh = logging.StreamHandler()
sh.addFilter(RequestIdFilter())

# Note: the "req_id" param name must be the same as in
# RequestIdFilter.filter
log_formatter = logging.Formatter(
    fmt="%(module)s: %(asctime)s - %(levelname)s - ID: %(req_id)s - %(message).1000s"
)
sh.setFormatter(log_formatter)
logger.addHandler(sh)

app = Flask(__name__)


@app.route("/")
def hello():
    logger.info("Hello world!")
    logger.info("I am a log inside the /hello endpoint")
    return "Hello World!"


if __name__ == "__main__":
    app.run()
Then, run this server:
python server.py
Finally, run this inside your Python console:
>>> import requests
>>> r = requests.get("http://localhost:5000/"); r.text
'Hello World!'
>>> r = requests.get("http://localhost:5000/"); r.text
'Hello World!'
And the logs will show:
server: 2022-08-01 15:35:26,201 - INFO - ID: 2f0e2341aa - Hello world!
server: 2022-08-01 15:35:26,202 - INFO - ID: 2f0e2341aa - I am a log inside the /hello endpoint
127.0.0.1 - - [01/Aug/2022 15:35:26] "GET / HTTP/1.1" 200 -
server: 2022-08-01 15:35:30,698 - INFO - ID: 1683ba71d9 - Hello world!
server: 2022-08-01 15:35:30,698 - INFO - ID: 1683ba71d9 - I am a log inside the /hello endpoint
127.0.0.1 - - [01/Aug/2022 15:35:30] "GET / HTTP/1.1" 200 -
Note that the IDs changed between requests but remained the same within a single request.
I'd like to use logging in my Flask app by simply calling logging.getLogger('myapi') in each of the files where I need to log.
There should be a single place to define the handler and format for this "global" application logger. This works, but Flask is also continually logging its own logs in the default format. These logs only exist when I import the library fbprophet. I would like to prevent Flask from logging these extra, unformatted, duplicate logs.
(Flask also has a werkzeug logger, which is fine and can stay.)
Code:
import sys
import logging
import fbprophet
from flask import Flask, jsonify
from werkzeug.serving import run_simple
# Set up my custom global logger
log = logging.getLogger('myapi')
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('*** %(asctime)s %(levelname)s %(message)s'))
log.addHandler(handler)
log.setLevel(logging.DEBUG)
def create_app(config={}):
    ''' Application factory to create and configure the app '''
    app = Flask('myapi', instance_relative_config=False)
    app.config.from_mapping(config)

    log.info('TEST INFO')
    log.debug('TEST DEBUG')

    @app.route('/health')
    def health():
        log.info('Health OK')
        return 'OK'

    return app
if __name__ == '__main__':
dev_config = {'SECRET_KEY': 'dev', 'DEBUG': False}
app = create_app(dev_config)
run_simple('localhost', 5000, app)
Output:
I'm expecting to see only the logs prefixed with ***. The duplicates in the default LEVEL:logger:message format only appear when I import fbprophet.
* Serving Flask app "main.py" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
*** 2019-06-05 14:17:56,702 INFO TEST INFO # good log
INFO:myapi:TEST INFO # bad dupe log
*** 2019-06-05 14:17:56,702 DEBUG TEST DEBUG
DEBUG:myapi:TEST DEBUG
*** 2019-06-05 14:17:56,703 INFO TEST INFO
INFO:myapi:TEST INFO
*** 2019-06-05 14:17:56,703 DEBUG TEST DEBUG
DEBUG:myapi:TEST DEBUG
*** 2019-06-05 14:18:10,405 INFO Health OK
INFO:myapi:Health OK
127.0.0.1 - - [05/Jun/2019 14:18:10] "GET /health HTTP/1.1" 200 - # good werkzeug api log
INFO:werkzeug:127.0.0.1 - - [05/Jun/2019 14:18:10] "GET /health HTTP/1.1" 200 - # bad dupe log
More explanation:
I've tried setting the app's logger too, but I don't want to have to call current_app.logger from other modules.
I tried disabling Flask's logger with logging.getLogger('flask.app').handlers.clear() but this also doesn't work.
When importing fbprophet, I get the below console errors (from prophet):
ERROR:fbprophet:Importing matplotlib failed. Plotting will not work.
ERROR:fbprophet:Importing matplotlib failed. Plotting will not work.
*** 2019-06-05 14:29:06,488 INFO TEST INFO
INFO:myapi:TEST INFO
I thought this could be causing the issue, so I fixed the errors following this. But Flask is still logging the extra logs.
import plotly
import matplotlib as mpl
mpl.use('TkAgg')
import fbprophet
Summary:
Looking for formatted global logging in Flask with no duplicate logs. I just need my global logging.getLogger('myapi') and the werkzeug API logs.
I've had the same problem and spent hours resolving it. Actually, it's not even related to Flask (I figured this out after a few hours).
Even in a simple script you'll get the duplicated log output.
The only working solution seems to be to add logger.propagate = False on your own logger. This prevents the record from being passed on to the handlers of higher-level (ancestor) loggers, i.e. the ones set up by Prophet (even if I don't see exactly where this hierarchy resides).
Found in this answer: https://stackoverflow.com/a/55877763/3615718
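Applied to the setup in the question, it is a one-line change (a sketch; 'myapi' is the logger name used above):
import logging

log = logging.getLogger('myapi')
log.propagate = False  # stop records from bubbling up to ancestor/root handlers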
You can search through all the registered loggers and change their settings as needed. It sounds like fbprophet is probably setting its own logger instance, so hopefully it will get set to the level you want if you do something like this:
for logger_name in logging.root.manager.loggerDict:
    print(f"found a logger:{logger_name}")
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.ERROR)
    if logger_name == 'myapi':
        logger.setLevel(logging.INFO)
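Note that logging.root.manager.loggerDict only contains loggers that have already been created, so a loop like this should run after the imports that create them (e.g. after import fbprophet).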
I am logging to a log file in a Flask application under gunicorn and nginx using the following setup:
import logging
from logging.handlers import RotatingFileHandler

def setup_logging():
    stream_handler = logging.StreamHandler()
    formatter = logging.Formatter('[%(asctime)s][PID:%(process)d][%(levelname)s][%(name)s.%(funcName)s()] %(message)s')
    stream_handler.setFormatter(formatter)
    stream_handler.setLevel("DEBUG")
    logging.getLogger().addHandler(stream_handler)

    file_handler = RotatingFileHandler("log.txt", maxBytes=100000, backupCount=10)
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(pathname)s:%(lineno)d]'
    ))
    file_handler.setLevel("DEBUG")
    logging.getLogger().addHandler(file_handler)

    logging.getLogger().setLevel("DEBUG")
Then I initialize logging prior to creating the app:
setup_logging()

def create_app(config_name):
    app = Flask(__name__)
Then in modules I am logging using:
import logging
logger = logging.getLogger(__name__)
x = 2
logger.debug('x: {0}'.format(x))
Logging works OK on my local machine - both to stdout and log.txt
However when I run the application on a remote server nothing gets written to log.txt. I have deployed as a user with read and write permission on log.txt on the remote system.
I have tried initializing the app on the remote server with DEBUG = True; still nothing is written to the log file. The only way I can view any logs is by looking at the /var/log/supervisor/app-stdout---supervisor-nnn.log files, but these don't show all logging output.
Using the answer from HolgerShurig here (Flask logging - Cannot get it to write to a file), on the server log file I get only named-logger output (i.e. no output from module-level logging):
2017-10-21 00:32:45,125 - file - DEBUG - Debug FILE
Running the same code on my local machine I get:
2017-10-21 08:35:39,046 - file - DEBUG - Debug FILE
2017-10-21 08:35:42,340 - werkzeug - INFO - * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
2017-10-21 08:38:46,236 [MainThread ] [INFO ] 127.0.0.1 - - [21/Oct/2017 08:38:46] "GET /blah/blah HTTP/1.1" 200 -
I then changed the logging config to:
def setup_logging(app):
    stream_handler = logging.StreamHandler()
    formatter = logging.Formatter('[%(asctime)s][PID:%(process)d][%(levelname)s][%(lineno)s][%(name)s.%(funcName)s()] %(message)s')
    stream_handler.setFormatter(formatter)
    stream_handler.setLevel(Config.LOG_LEVEL)
    app.logger.addHandler(stream_handler)

    file_handler = RotatingFileHandler(Config.LOGGING_FILE, maxBytes=Config.LOGGING_MAX_BYTES, backupCount=Config.LOGGING_BACKUP_COUNT)
    file_handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(pathname)s:%(lineno)d]'
    ))
    file_handler.setLevel(Config.LOG_LEVEL)

    loggers = [app.logger]
    for logger in loggers:
        logger.addHandler(file_handler)

    app.logger.setLevel(Config.LOG_LEVEL)
    app.logger.debug('this message should be recorded in the log file')
and call this just after creating the Flask app:
setup_logging(app)
In each module I am still using:
import logging
logger = logging.getLogger(__name__)
# for example
def example():
    logger.debug('debug')
    logger.info('info')
    logger.warning('warn')
When I run the application on the server with
gunicorn manage:app
The only thing printed in the log.txt file is
2017-10-21 02:48:32,982 DEBUG: this message should be recorded in the log file [in /../../__init__.py:82]
But locally the MainThread entries are shown as well.
Any ideas?
If your configuration works on your local machine but not on your remote server, then your problem is most likely about permissions on the file or the directory where the logfile resides.
Here's something that could help you.
Besides that, here's a Gist which could give another perspective regarding logging configuration for Flask applications.
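Besides the permissions angle, when running under gunicorn a common pattern is to route Flask's logger through gunicorn's own handlers, so that app.logger output lands wherever gunicorn's --error-logfile and --log-level point. A sketch, assuming gunicorn's standard 'gunicorn.error' logger:
import logging
from flask import Flask

app = Flask(__name__)

# Reuse gunicorn's handlers and level for the app logger.
gunicorn_logger = logging.getLogger('gunicorn.error')
app.logger.handlers = gunicorn_logger.handlers
app.logger.setLevel(gunicorn_logger.level)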
I'm building a Python application that also has a web interface, built with the Flask web framework.
It runs on Flask's internal server in debug/dev mode, and in production it runs on Tornado as a WSGI container.
This is how I've set up my logger:
import sys
import logging
import logging.handlers

log_formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(message)s')
file_handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=5 * 1024 * 1024, backupCount=10)
file_handler.setFormatter(log_formatter)
console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(log_formatter)
log = logging.getLogger('myAppLogger')
log.addHandler(file_handler)
log.addHandler(console_handler)
To add my logger to the Flask app I tried this:
app = Flask('system.web.server')
app.logger_name = 'myAppLogger'
But the logs still go to Flask's default log handler, and in addition, I haven't found how to customize the log handlers for the Tornado web server either.
Any help is much appreciated,
thanks in advance
AFAIK, you can't change the default logger in Flask. You can, however, add your handlers to the default logger:
app = Flask('system.web.server')
app.logger.addHandler(file_handler)
app.logger.addHandler(console_handler)
Regarding my comment above - "Why would you want to run Flask in tornado ...", ignore that. If you are not seeing any performance hit, then clearly there's no need to change your setup.
If, however, in future you'd like to migrate to a multithreaded container, you can look into uwsgi or gunicorn.
I managed to do it with multiple handlers, each doing its own thing, so that ERROR logs do not also show up in the INFO log and end up as duplicated information:
app.py
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask

app = Flask(__name__)
# Set format that both loggers will use:
formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")
# Set logger A for known errors
log_handler = RotatingFileHandler('errors.log', maxBytes=10000, backupCount=1)
log_handler.setFormatter(formatter)
log_handler.setLevel(logging.INFO)
a = logging.getLogger('errors')
a.addHandler(log_handler)
# Set logger B for bad exceptions
exceptions_handler = RotatingFileHandler('exceptions.log', maxBytes=10000, backupCount=1)
exceptions_handler.setFormatter(formatter)
exceptions_handler.setLevel(logging.ERROR)
b = logging.getLogger('exceptions')
b.addHandler(exceptions_handler)
...
whatever_file_where_you_want_to_log.py
import logging
import traceback
# Will output known error messages to 'errors.log'
logging.getLogger('errors').error("Cannot connect to database, timeout")
# Will output the full stack trace to 'exceptions.log', when trouble hits the fan
logging.getLogger('exceptions').error(traceback.format_exc())
How can I view log messages on Google Cloud? (https://console.cloud.google.com/logs)
This is what I see in the terminal when I run dev_appserver.py (locally running):
INFO 2016-05-16 14:00:45,118 module.py:787] default: "GET /static/images/contact.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,128 module.py:787] default: "GET /static/images/email.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,136 module.py:787] default: "GET /static/images/phone.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,487 basehandler.py:19] entering basehandler.py
INFO 2016-05-16 14:00:45,516 module.py:787] default: "GET /static/images/logo-349x209.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,562 requesthandlers.py:26] entering requesthandlers.py
INFO 2016-05-16 14:00:45,563 app.py:28] entering app.py
INFO 2016-05-16 14:00:45,563 app.py:198] Using development database
Both application log messages and request logging are displayed.
However, when I view the logs of the same code deployed, I can only see the requests being logged:
The code I'm using to generate application log messages is something like:
import logging
logger = logging.getLogger("someLogger")
logger.info("entering app.py")
But I've also tried using logging.info(...) directly with the same results.
I've tried finding an answer to this in various resources but have come up empty-handed; most refer to how to set the log level when developing locally.
I'm guessing that I need to enable some setting in order to view application logs on Google Cloud Logs.
Resources that I've looked at:
https://cloud.google.com/logging/docs/view/logs_viewer
https://cloud.google.com/appengine/docs/python/logs/
How to change the logging level of dev_appserver
How do I write to the console in Google App Engine?
Google App Engine - Can not find my logging messages
https://docs.python.org/3/howto/logging.html
App Engine groups the logs by request. You need to expand the log entry using the triangle/pointer on the left of the request in the 'new' GAE log viewer.
Personally I prefer using the old GAE log viewer, but I am unsure how much longer it will be around:
https://appengine.google.com/logs?app_id=s~xxx
(This viewer shows request + logs and allows log expansion)
An easy way to integrate Google Cloud Platform logging into your Python code is to create a subclass of logging.StreamHandler. This way logging levels will also match those of Google Cloud Logging, enabling you to filter based on severity. This solution also works within Cloud Run containers.
Also you can just add this handler to any existing logger configuration, without needing to change current logging code.
import json
import logging
import os
import sys
from logging import StreamHandler
from flask import request
class GoogleCloudHandler(StreamHandler):
    def __init__(self):
        StreamHandler.__init__(self)

    def emit(self, record):
        msg = self.format(record)
        # Get project_id from Cloud Run environment
        project = os.environ.get('GOOGLE_CLOUD_PROJECT')

        # Build structured log messages as an object.
        global_log_fields = {}
        trace_header = request.headers.get('X-Cloud-Trace-Context')
        if trace_header and project:
            trace = trace_header.split('/')
            global_log_fields['logging.googleapis.com/trace'] = (
                f"projects/{project}/traces/{trace[0]}")

        # Complete a structured log entry, including the trace fields.
        entry = dict(severity=record.levelname, message=msg,
                     **global_log_fields)
        print(json.dumps(entry))
        sys.stdout.flush()
A way to configure and use the handler could be:
def get_logger():
    logger = logging.getLogger(__name__)
    if not logger.handlers:
        gcp_handler = GoogleCloudHandler()
        gcp_handler.setLevel(logging.DEBUG)

        gcp_formatter = logging.Formatter(
            '%(levelname)s %(asctime)s [%(filename)s:%(funcName)s:%(lineno)d] %(message)s')
        gcp_handler.setFormatter(gcp_formatter)
        logger.addHandler(gcp_handler)
    return logger
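Hypothetical usage inside a Flask view (the route and message here are illustrative):
from flask import Flask

app = Flask(__name__)
logger = get_logger()

@app.route('/')
def index():
    # Printed as one structured JSON line that Cloud Logging parses, with
    # severity INFO and, when a trace header is present, the trace ID.
    logger.info('Handling request')
    return 'OK'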