How can I view log messages on Google Cloud?: https://console.cloud.google.com/logs
This is what I see in the terminal when I run dev_appserver.py (running locally):
INFO 2016-05-16 14:00:45,118 module.py:787] default: "GET /static/images/contact.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,128 module.py:787] default: "GET /static/images/email.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,136 module.py:787] default: "GET /static/images/phone.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,487 basehandler.py:19] entering basehandler.py
INFO 2016-05-16 14:00:45,516 module.py:787] default: "GET /static/images/logo-349x209.png HTTP/1.1" 304 -
INFO 2016-05-16 14:00:45,562 requesthandlers.py:26] entering requesthandlers.py
INFO 2016-05-16 14:00:45,563 app.py:28] entering app.py
INFO 2016-05-16 14:00:45,563 app.py:198] Using development database
Both application log messages and request logging are displayed.
However, when I view the logs of the same deployed code, I can only see the requests being logged.
The code I'm using to generate application log messages is something like:
import logging
logger = logging.getLogger("someLogger")
logger.info("entering app.py")
But I've also tried using logging.info(...) directly with the same results.
I've tried finding an answer to this in various resources but have come up empty-handed; most refer to how to set the log level when developing locally.
I'm guessing that I need to enable some setting in order to view application logs on Google Cloud Logs.
Resources that I've looked at:
https://cloud.google.com/logging/docs/view/logs_viewer
https://cloud.google.com/appengine/docs/python/logs/
How to change the logging level of dev_appserver
How do I write to the console in Google App Engine?
Google App Engine - Can not find my logging messages
https://docs.python.org/3/howto/logging.html
App Engine groups the logs by request. You need to expand the log using the triangle/pointer on the left of the request in the 'new' GAE log viewer.
Personally I prefer using the old GAE log viewer, but I am unsure how much longer it will be around:
https://appengine.google.com/logs?app_id=s~xxx
(This viewer shows request + logs and allows log expansion)
An easy way to integrate Google Cloud Platform logging into your Python code is to subclass logging.StreamHandler. This way logging levels will also match those of Google Cloud Logging, enabling you to filter based on severity. This solution also works within Cloud Run containers.
You can also just add this handler to any existing logger configuration, without needing to change current logging code.
import json
import logging
import os
import sys
from logging import StreamHandler

from flask import request


class GoogleCloudHandler(StreamHandler):
    def __init__(self):
        StreamHandler.__init__(self)

    def emit(self, record):
        msg = self.format(record)
        # Get project_id from Cloud Run environment
        project = os.environ.get('GOOGLE_CLOUD_PROJECT')
        # Build structured log messages as an object.
        global_log_fields = {}
        trace_header = request.headers.get('X-Cloud-Trace-Context')
        if trace_header and project:
            trace = trace_header.split('/')
            global_log_fields['logging.googleapis.com/trace'] = (
                f"projects/{project}/traces/{trace[0]}")
        # Complete a structured log entry, including the trace fields
        # so Cloud Logging can correlate the line with its request.
        entry = dict(severity=record.levelname, message=msg,
                     **global_log_fields)
        print(json.dumps(entry))
        sys.stdout.flush()
A way to configure and use the handler could be:
def get_logger():
    logger = logging.getLogger(__name__)
    if not logger.handlers:
        gcp_handler = GoogleCloudHandler()
        gcp_handler.setLevel(logging.DEBUG)
        gcp_formatter = logging.Formatter(
            '%(levelname)s %(asctime)s [%(filename)s:%(funcName)s:%(lineno)d] %(message)s')
        gcp_handler.setFormatter(gcp_formatter)
        logger.addHandler(gcp_handler)
        # Without this the logger's effective level stays at WARNING
        # and DEBUG/INFO records never reach the handler.
        logger.setLevel(logging.DEBUG)
    return logger
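A minimal usage sketch (assuming the class and factory above live in the Flask app's module; the route is hypothetical):

    from flask import Flask

    app = Flask(__name__)
    log = get_logger()

    @app.route('/')
    def index():
        # The handler reads request headers, so log inside a request context.
        log.info('handling index')  # emitted as one JSON line that Cloud Logging parses
        return 'OK'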
I am trying to set up Application Insights (AI) similar to how it is being done here.
I am using Python 3.9.13 and the following packages: opencensus==0.11.0, opencensus-ext-azure==1.1.7, opencensus-context==0.1.3
My code looks something like this:
import logging
import time

from opencensus.ext.azure.log_exporter import AzureLogHandler

# create the logger
app_insights_logger = logging.getLogger(__name__)
# set the handler
app_insights_logger.addHandler(AzureLogHandler(
    connection_string='InstrumentationKey=00000000-0000-0000-0000-000000000000'))
# set the logging level
app_insights_logger.setLevel(logging.INFO)
# this prints 'logging level = 20'
print('logging level = ', app_insights_logger.getEffectiveLevel())
# try to log an exception
try:
    result = 1 / 0
except Exception:
    app_insights_logger.exception('Captured a math exception.')
    app_insights_logger.handlers[0].flush()
    time.sleep(5)
However, the exception does not get logged; I tried adding the explicit flush as mentioned in this post.
Additionally, I tried adding the instrumentation key as mentioned in the docs; when that didn't work, I tried the entire connection string (the one with the ingestion key).
So:
How can I debug if my app is indeed sending requests to Azure?
How can I check on the Azure portal if it is a permission issue?
You can set the severity level before logging any kind of telemetry information to Application Insights.
Note: by default, the root logger is configured with WARNING severity. If you want to log at other severities, you have to set the level explicitly, e.g. logger.setLevel(logging.INFO).
I am using the below code to log telemetry information to Application Insights:
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

AI_conn_string = '<Your AI Connection string>'
handler = AzureLogHandler(connection_string=AI_conn_string)

logger = logging.getLogger()
logger.addHandler(handler)

# by default the root logger is configured with WARNING severity.
logger.warning('python console app warning log in AI ')

# setting severity for information level logging.
logger.setLevel(logging.INFO)
logger.info('Test Information log')
logger.info('python console app information log in AI')

try:
    logger.warning('python console app Try block warning log in AI')
    result = 1 / 0
except Exception:
    logger.setLevel(logging.ERROR)
    logger.exception('python console app error log in AI')
Results: the warning and information logs, and the error log, all show up in AI.
How can I debug if my app is indeed sending requests to Azure?
We cannot directly debug whether the telemetry data is sent to Application Insights or not, but we can observe the sending process, for example with a telemetry processor as sketched below.
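A sketch, using the handler attached in the question's code; add_telemetry_processor (as I understand opencensus-ext-azure) registers a callback that runs on every telemetry envelope before export:

    # Print each telemetry envelope before it is exported, to confirm
    # the handler is actually producing and shipping items.
    def debug_processor(envelope):
        print('Exporting envelope:', envelope.name)
        return True  # returning False would drop the item

    app_insights_logger.handlers[0].add_telemetry_processor(debug_processor)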
How can I check on the Azure portal if it is a permission issue?
The instrumentation key and connection string are what carry the permission to access the Application Insights resource.
I'd like to use logging in my Flask app by simply calling logging.getLogger('myapi') in each of the files where I need to log.
There should be a single place to define the handler and format for this "global" application logger. This works, but Flask is also continually logging its own logs in the default format. These logs only exist when I import the library fbprophet. I would like to prevent Flask from logging these extra, unformatted, duplicate logs.
(Flask also has a werkzeug logger, which is fine and can stay.)
Code:
import sys
import logging

import fbprophet
from flask import Flask, jsonify
from werkzeug.serving import run_simple

# Set up my custom global logger
log = logging.getLogger('myapi')
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('*** %(asctime)s %(levelname)s %(message)s'))
log.addHandler(handler)
log.setLevel(logging.DEBUG)


def create_app(config={}):
    ''' Application factory to create and configure the app '''
    app = Flask('myapi', instance_relative_config=False)
    app.config.from_mapping(config)

    log.info('TEST INFO')
    log.debug('TEST DEBUG')

    @app.route('/health')
    def health():
        log.info('Health OK')
        return 'OK'

    return app


if __name__ == '__main__':
    dev_config = {'SECRET_KEY': 'dev', 'DEBUG': False}
    app = create_app(dev_config)
    run_simple('localhost', 5000, app)
Output:
I'm expecting to see only the logs prefixed with ***. The unformatted ones (LEVEL:name:message) only appear when I import Facebook Prophet.
* Serving Flask app "main.py" (lazy loading)
* Environment: development
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Debugger is active!
*** 2019-06-05 14:17:56,702 INFO TEST INFO # good log
INFO:myapi:TEST INFO # bad dupe log
*** 2019-06-05 14:17:56,702 DEBUG TEST DEBUG
DEBUG:myapi:TEST DEBUG
*** 2019-06-05 14:17:56,703 INFO TEST INFO
INFO:myapi:TEST INFO
*** 2019-06-05 14:17:56,703 DEBUG TEST DEBUG
DEBUG:myapi:TEST DEBUG
*** 2019-06-05 14:18:10,405 INFO Health OK
INFO:myapi:Health OK
127.0.0.1 - - [05/Jun/2019 14:18:10] "GET /health HTTP/1.1" 200 - # good werkzeug api log
INFO:werkzeug:127.0.0.1 - - [05/Jun/2019 14:18:10] "GET /health HTTP/1.1" 200 - # bad dupe log
More explanation:
I've tried setting the app's logger too, but I don't want to have to call current_app.logger from other modules.
I tried disabling Flask's logger with logging.getLogger('flask.app').handlers.clear() but this also doesn't work.
When importing fbprophet, I get the below console errors (from prophet):
ERROR:fbprophet:Importing matplotlib failed. Plotting will not work.
ERROR:fbprophet:Importing matplotlib failed. Plotting will not work.
*** 2019-06-05 14:29:06,488 INFO TEST INFO
INFO:myapi:TEST INFO
I thought this could be causing the issue, so I fixed the errors following this. But Flask is still logging the extra logs.
import plotly
import matplotlib as mpl
mpl.use('TkAgg')
import fbprophet
Summary:
Looking for formatted global logging in Flask with no duplicate logs. I just need my global logging.getLogger('myapi') and the werkzeug API logs.
I've had the same problem and spent hours resolving it. Actually it's not even related to Flask (I figured this out after a few hours).
Even in a simple script you'll get the duplicated log output.
The only working solution seems to be to add logger.propagate = False on your own logger. This prevents log records from being passed to the handlers of higher-level (ancestor) loggers, i.e. the root handler that gets installed as a side effect of importing fbprophet (even if I don't see where this hierarchy resides exactly); see the sketch after the link below.
Found in this answer: https://stackoverflow.com/a/55877763/3615718
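Applied to the logger from the question, that looks like this (a sketch; 'myapi' is the question's logger name):

    import logging
    import sys

    log = logging.getLogger('myapi')
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(logging.Formatter('*** %(asctime)s %(levelname)s %(message)s'))
    log.addHandler(handler)
    log.setLevel(logging.DEBUG)
    # Do not pass records up to the root logger, whose handler
    # (installed when fbprophet is imported) prints the duplicates.
    log.propagate = False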
You can search through all the registered loggers and change their settings as needed. It sounds like fbprophet is probably setting up its own logger instances, so hopefully they will get set to the level you want if you do something like this (run it after fbprophet has been imported, since the loggers only exist once they have been created):
for logger_name in logging.root.manager.loggerDict:
    print(f"found a logger: {logger_name}")
    logger = logging.getLogger(logger_name)
    logger.setLevel(logging.ERROR)
    if logger_name == 'myapi':
        logger.setLevel(logging.INFO)
In a Flask application, I would like to configure two different file loggers:
One for HTTP access logs (access.log), which will log stuff like:
1.2.3.4 - [11/Jun/2018 09:51:13] "GET /some/path HTTP/1.1" 200 -
One for application logs (my_app.log), which will keep logs defined by me in my code when I'm using current_app.logger.info('some message'):
2018-06-08 15:08:50,083 - flask.app - INFO - some message
What should my configuration look like to achieve this? Here is what I tried, without success:
# content of "run.py" :
app = Flask(__name__)
app.logger.removeHandler(default_handler)
# Define 'my_app.log' :
handler = logging.FileHandler('my_app.log')
handler.setLevel(logging.INFO)
formatter = logging.Formatter(app.config['LOGGING_FORMAT'])
handler.setFormatter(formatter)
app.logger.addHandler(handler)
# Define 'access.log' :
access_handler = logging.getLogger('werkzeug')
access_handler = logging.FileHandler('access.log')
access_handler.setLevel(logging.DEBUG)
access_handler.setFormatter(app.config['LOGGING_FORMAT'])
app.logger.addHandler(access_handler)
# Then register my blueprints:
app.register_blueprint(some_blueprint, url_prefix='/')
....
And I run it with python3 run.py. With this config, the only thing that gets logged is the HTTP access log, and it ends up in the my_app.log file.
What's wrong with my config?
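For reference, a minimal sketch of a configuration that does separate the two streams: the access-log handler goes on the 'werkzeug' logger rather than on app.logger, and setFormatter receives a Formatter object rather than a raw format string (the format strings below are illustrative assumptions, not the app's LOGGING_FORMAT):

    import logging
    from flask import Flask
    from flask.logging import default_handler

    app = Flask(__name__)
    app.logger.removeHandler(default_handler)

    # Application logs -> my_app.log (what current_app.logger writes)
    app_handler = logging.FileHandler('my_app.log')
    app_handler.setLevel(logging.INFO)
    app_handler.setFormatter(logging.Formatter(
        '%(asctime)s - %(name)s - %(levelname)s - %(message)s'))
    app.logger.addHandler(app_handler)
    app.logger.setLevel(logging.INFO)

    # HTTP access logs -> access.log (werkzeug logs requests on its own logger)
    access_handler = logging.FileHandler('access.log')
    access_handler.setFormatter(logging.Formatter('%(message)s'))
    access_logger = logging.getLogger('werkzeug')
    access_logger.addHandler(access_handler)
    access_logger.setLevel(logging.DEBUG)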
I'm building a Python application that also has a web interface, using the Flask web framework.
It runs on Flask's internal server in debug/dev mode, and in production it runs on Tornado as a WSGI container.
This is how I've set up my logger:
import sys
import logging
import logging.handlers

log_formatter = logging.Formatter('%(asctime)s [%(levelname)s] %(message)s')

file_handler = logging.handlers.RotatingFileHandler(LOG_FILE, maxBytes=5 * 1024 * 1024, backupCount=10)
file_handler.setFormatter(log_formatter)

console_handler = logging.StreamHandler(sys.stdout)
console_handler.setFormatter(log_formatter)

log = logging.getLogger('myAppLogger')
log.addHandler(file_handler)
log.addHandler(console_handler)
To add my logger to the Flask app I tried this:
app = Flask('system.web.server')
app.logger_name = 'myAppLogger'
But the logs still go to the Flask default log handler, and in addition I haven't found how to customize the log handlers for the Tornado web server either.
Any help is much appreciated,
thanks in advance
AFAIK, you can't change the default logger in Flask. You can, however, add your handlers to the default logger:
app = Flask('system.web.server')
app.logger.addHandler(file_handler)
app.logger.addHandler(console_handler)
Regarding my comment above - "Why would you want to run Flask in tornado ..." - ignore that. If you are not seeing any performance hit, then clearly there's no need to change your setup.
If, however, you'd like to migrate to a multithreaded container in the future, you can look into uWSGI or Gunicorn.
I managed to do it with multiple handlers, each doing their own thing, so that ERROR logs do not also show up in the INFO log and end up as duplicate info, grrr:
app.py
import logging
from logging.handlers import RotatingFileHandler

from flask import Flask

app = Flask(__name__)

# Set format that both loggers will use:
formatter = logging.Formatter("[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s")

# Set logger A for known errors
log_handler = RotatingFileHandler('errors.log', maxBytes=10000, backupCount=1)
log_handler.setFormatter(formatter)
log_handler.setLevel(logging.INFO)
a = logging.getLogger('errors')
a.addHandler(log_handler)

# Set logger B for bad exceptions
exceptions_handler = RotatingFileHandler('exceptions.log', maxBytes=10000, backupCount=1)
exceptions_handler.setFormatter(formatter)
exceptions_handler.setLevel(logging.ERROR)
b = logging.getLogger('exceptions')
b.addHandler(exceptions_handler)
...
whatever_file_where_you_want_to_log.py
import logging
import traceback
# Will output known error messages to 'errors.log'
logging.getLogger('errors').error("Cannot connect to database, timeout")
# Will output the full stack trace to 'exceptions.log', when trouble hits the fan
logging.getLogger('exceptions').error(traceback.format_exc())
I'd like to add some kind of application logging to a REST API I'm developing with the fantastic Python Eve framework. The goal is to provide auditing features in the application, so application logging is my first choice for it.
I've followed the steps detailed in this section of the official documentation, but I haven't had any luck. Probably I'm missing something:
from eve import Eve
import logging


def log_every_get(resource, request, payload):
    # custom INFO-level message is sent to the log file
    app.logger.info('We just answered to a GET request!')


app = Eve()
app.on_post_GET += log_every_get

if __name__ == "__main__":
    # enable logging to 'app.log' file
    handler = logging.FileHandler('<chrootedfolder>/app.log')
    # set a custom log format, and add request
    # metadata to each log line
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(filename)s:%(lineno)d] -- ip: %(clientip)s, '
        'url: %(url)s, method:%(method)s'))
    # the default log level is set to WARNING, so
    # we have to explicitly set the logging level
    # to INFO to get our custom message logged.
    app.logger.setLevel(logging.INFO)
    # append the handler to the default application logger
    app.logger.addHandler(handler)
    app.run()
To give you some more info about the environment where I'm running my REST API: it's running under uWSGI (chrooted to a specific folder), which is behind NGINX (acting as a proxy).
Hope you can help somehow, guys.
Thanks in advance.
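One environment detail worth checking: under uWSGI the module is imported rather than run as a script, so the if __name__ == "__main__": block above never executes and the handler is never attached. A sketch that configures logging at import time instead (same names as above; the request-metadata fields are dropped here because they require extra context on each record):

    from eve import Eve
    import logging

    app = Eve()

    # Runs at import time, so it also takes effect under uWSGI,
    # where __name__ != "__main__" and app.run() is never called.
    handler = logging.FileHandler('<chrootedfolder>/app.log')
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s [in %(filename)s:%(lineno)d]'))
    app.logger.setLevel(logging.INFO)
    app.logger.addHandler(handler)

    if __name__ == "__main__":
        app.run()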