I have a Flask API and I am attempting to implement logging. Everything works fine on my local machine, but nothing seems to work once the app is deployed on AWS Elastic Beanstalk.
I have a file logger.py that creates and configures the logger. It looks like this:
import logging
import os

if not os.path.isdir(os.environ.get('LOG_PATH', 'log')):
    os.mkdir(os.environ.get('LOG_PATH', 'log'))

# Configure logger
logger = logging.getLogger(__name__)
formatter = logging.Formatter('[%(asctime)s] %(levelname)s in %(module)s: %(message)s')

debug_file_handler = logging.FileHandler(f'{os.environ.get("LOG_PATH", "log")}/debug.log')
info_file_handler = logging.FileHandler(f'{os.environ.get("LOG_PATH", "log")}/info.log')
error_file_handler = logging.FileHandler(f'{os.environ.get("LOG_PATH", "log")}/error.log')

debug_file_handler.setLevel(logging.DEBUG)
info_file_handler.setLevel(logging.INFO)
error_file_handler.setLevel(logging.ERROR)

debug_file_handler.setFormatter(formatter)
info_file_handler.setFormatter(formatter)
error_file_handler.setFormatter(formatter)

logger.addHandler(debug_file_handler)
logger.addHandler(info_file_handler)
logger.addHandler(error_file_handler)
I import this file into my app factory and add the handlers to my app.
# Configure logger
app.logger.removeHandler(default_handler)
app.logger.addHandler(debug_file_handler)
app.logger.addHandler(info_file_handler)
app.logger.addHandler(error_file_handler)
Then, still in the app factory, I have an @app.before_request hook so that logging occurs on every request (for testing purposes).
@app.before_request
def before_request():
    if request.method == 'GET':
        app.logger.debug(f'{request.method} {request.base_url}: parameters {dict(request.args)}')
    else:
        app.logger.debug(f'{request.method} {request.base_url}: parameters {request.json}')
This all works fine on my local machine without a problem. However, when I deploy to AWS EB, nothing is written to the log files. The log files appear to be created and present when I pull them down via the CLI or GUI, but nothing is written to them. I've followed a few tutorials and have added to my .ebextensions, but am still having no luck. Currently my .ebextensions includes a logging.config that looks like this:
files:
  "/opt/elasticbeanstalk/tasks/bundlelogs.d/applogs.conf":
    mode: "000777"
    owner: root
    group: root
    content: |
      /tmp/*.log
  "/opt/elasticbeanstalk/tasks/taillogs.d/applogs.conf":
    mode: "000777"
    owner: root
    group: root
    content: |
      /tmp/*.log
The log files appear to be included when I request the logs (both tail and full), but they are completely empty even after making multiple requests. I've also seen a few answers on here saying I need to include a chown wsgi:wsgi command in this file, but that has not worked either. I've also tried writing the logs to /var/app/current/log, and that doesn't work either. The log files are always created, but never written to.
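For reference, the chown suggestion I tried was along these lines in .ebextensions (a sketch, not my exact file; note that on the Amazon Linux 2 platform the application user is webapp, so wsgi:wsgi may be the wrong owner to begin with):

container_commands:
  01_chown_logs:
    # hypothetical command name; the path matches the /tmp/*.log location above
    command: "chown wsgi:wsgi /tmp/*.log"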
For context, the Elastic Beanstalk platform is Python 3.8 running on Amazon Linux 2.
Any help would be greatly appreciated.
Turns out all I needed to do was set the level of the logger itself to logging.DEBUG so I could see debug logs. The handlers were already at DEBUG/INFO/ERROR, but a logger's own level defaults to WARNING, so lower-level records were discarded before they ever reached the handlers.
In __init__.py:
app.logger.setLevel(logging.DEBUG)
And since I name my app src when I configure it, I can grab this same logger in other modules with:
import logging
logger = logging.getLogger('src')
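For completeness, a minimal sketch of what the relevant part of the app factory looks like after the fix (the handlers are the ones defined in logger.py above):

import logging

from flask import Flask
from flask.logging import default_handler

from logger import debug_file_handler, info_file_handler, error_file_handler

def create_app():
    app = Flask('src')
    # Swap Flask's default stderr handler for the file handlers
    app.logger.removeHandler(default_handler)
    app.logger.addHandler(debug_file_handler)
    app.logger.addHandler(info_file_handler)
    app.logger.addHandler(error_file_handler)
    # Without this, the logger stays at the default WARNING level and
    # DEBUG/INFO records are dropped before they reach the handlers
    app.logger.setLevel(logging.DEBUG)
    return app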
I have a tornado application and a custom logger method. My code to build and use the custom logger is the following:
def create_logger():
    """
    This function creates the logger functionality to be used throughout the Python application
    :return: bool - true if successful
    """
    # Configuring the logger
    filename = "PythonLogger.log"
    # Change the current working directory to the logs folder, so that the log file is written in it.
    os.chdir(os.path.normpath(os.path.normpath(os.path.dirname(os.path.abspath(__file__)) + os.sep + os.pardir + os.sep + os.pardir + os.sep + 'logs')))
    # Create the logs file
    logging.basicConfig(filename=filename, format='%(asctime)s %(message)s', filemode='w')
    # Creating the logger
    logger = logging.getLogger()
    # Setting the threshold of the logger to NOTSET (log everything)
    logger.setLevel(logging.NOTSET)
    logger.log(0, 'The logger is initialized')
    return True
def log_info_message(msg):
    """
    Utility for message logging with code 20
    :param msg:
    :return:
    """
    return logging.getLogger().log(20, msg)
In the code, I initialize the logger and already write a message to it before the Tornado application initialization:
if __name__ == '__main__':
    # Logger initialization
    create_logger()
    # First log message
    log_info_message('Initiating Python application')
    # Starting Tornado
    tornado.options.parse_command_line()
    # Specifying what app exactly is being started
    server = tornado.httpserver.HTTPServer(test.app)
    server.listen(options.port)
    try:
        if 'Windows_NT' not in os.environ.values():
            server.start(0)
        tornado.ioloop.IOLoop.instance().start()
    except KeyboardInterrupt:
        tornado.ioloop.IOLoop.instance().stop()
Then, let's say the get method of my HTTP request handler is as follows (only the interesting lines):
class API(tornado.web.RequestHandler):
    def get(self):
        self.write('Get request ')
        logging.getLogger("tornado.access").log(20, 'Hello')
        logging.getLogger("tornado.application").log(20, '1')
        logging.getLogger("tornado.general").log(20, '2')
        log_info_message('Received a GET request at: ' + datetime.datetime.now().strftime("%d-%b-%Y (%H:%M:%S.%f)"))
What I see is a difference between local testing and testing on the server.
A) Locally, I can see the log message from the first script run, and the log messages for requests (after initializing the Tornado app), in my log file along with Tornado's own logs.
B) On the server, I only see the first message. I don't see my log messages when GET requests are accepted, and I don't even see the messages produced by Tornado's own loggers, except when there's an error. I guess that means Tornado is somehow re-initializing the logger and making mine and its three loggers write to some other file (though somehow that doesn't apply when errors happen??).
I am aware that Tornado uses its own three loggers, but I would like to use mine as well, alongside Tornado's, with all of them writing into the same file. Basically, reproduce the local behaviour on the server, and of course keep it when some error happens.
How could I achieve this?
Thanks in advance!
P.S.: if I give the logger a name, say logging.getLogger('Example'), and change the log_info_message function to return logging.getLogger('Example').log(20, msg), Tornado's loggers fail and raise errors. So that option breaks its own loggers...
It seems the only problem was that, on the server side, Tornado was setting the minimum level for a log message to be written to the log file higher (a minimum of 40 was required). So logging.getLogger().log(20, msg) would not write to the log file, but logging.getLogger().log(40, msg) would.
I would like to understand why, so if anybody knows, your knowledge would be more than welcome. For the time being, that solution is working, though.
tornado.log defines options that can be used to customise logging via the command line (check tornado.options); one of them is logging, which defines the log level used. You are likely using this on the server and setting it to error.
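For example, if the server's launch command sets that option to error, everything below level 40 is dropped (a sketch; server.py stands in for however the app is actually started):

# INFO records such as .log(20, msg) are suppressed with:
python server.py --logging=error

# and come back with:
python server.py --logging=info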
When debugging logging, I suggest you create a RequestHandler that logs or returns the structure of the existing loggers by inspecting the root logger. Once you see the structure, it is much easier to understand why it works the way it works.
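A minimal sketch of such a handler (the URL you mount it at is up to you; note that loggerDict also contains PlaceHolder objects, hence the isinstance check):

import logging
import tornado.web

class LoggerTreeHandler(tornado.web.RequestHandler):
    def get(self):
        # Describe the root logger first, then every named logger
        root = logging.getLogger()
        lines = ['root: level=%s handlers=%r' % (logging.getLevelName(root.level), root.handlers)]
        for name, logger in sorted(logging.Logger.manager.loggerDict.items()):
            if isinstance(logger, logging.Logger):
                lines.append('%s: level=%s propagate=%s handlers=%r' % (
                    name, logging.getLevelName(logger.level), logger.propagate, logger.handlers))
        self.set_header('Content-Type', 'text/plain')
        self.write('\n'.join(lines))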
I'm struggling to find out why my log entries are being duplicated in Cloud Logging.
I use a custom dummy handler that does nothing, and I'm also using a named logger.
Here's my code:
import google.cloud.logging
import logging

class MyLogHandler(logging.StreamHandler):
    def emit(self, record):
        pass

# Setting up App Engine's logging
client = google.cloud.logging.Client()
client.get_default_handler()
client.setup_logging()

# Setting up my custom logger/handler
my_handler = MyLogHandler()
logging.getLogger('my_logger').addHandler(my_handler)
logging.getLogger('my_logger').setLevel(logging.DEBUG)
# note that I'm logging to the 'my_logger' logger here, not to the root logger
logging.getLogger('my_logger').debug('Why is this message being duplicated?')
In the first place, I think this message shouldn't even show up in Cloud Logging, because I'm using a named logger called 'my_logger' and Cloud Logging is attached to the root logger only, but anyway...
The above code is imported into my app.py which bootstraps a Flask app on app engine.
Here is a screenshot of the issue:
This question describes a similar issue: Duplicate log entries with Google Cloud Stackdriver logging of Python code on Kubernetes Engine
But I tried every workaround suggested in that thread and none of them worked either.
Is there something I'm missing here? Thanks in advance.
import logging

from google.cloud.logging.handlers import AppEngineHandler

root_logger = logging.getLogger()
# use the GCP AppEngineHandler only, in order to prevent logs
# from also being written to STDERR
root_logger.handlers = [handler for handler in root_logger.handlers
                        if isinstance(handler, AppEngineHandler)]
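As for why the named logger's message showed up in Cloud Logging at all: records logged to 'my_logger' propagate up to the root logger by default, so they reach whatever handlers are attached there. If the root logger ends up with both a Cloud Logging handler and a stream handler writing to STDERR (which App Engine ingests as log entries too), each record appears twice; stripping the root logger down to the AppEngineHandler alone removes the second path.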
I'd like to add some kind of application logging to a REST API I'm developing with the fantastic Python Eve framework. The goal is to provide auditing features in the application, so application logging is my first choice for it.
I've followed the steps detailed in this section of the official documentation, but I haven't had any luck. Probably I'm missing something:
from eve import Eve
import logging

def log_every_get(resource, request, payload):
    # custom INFO-level message is sent to the log file
    app.logger.info('We just answered a GET request!')

app = Eve()
app.on_post_GET += log_every_get

if __name__ == "__main__":
    # enable logging to 'app.log' file
    handler = logging.FileHandler('<chrootedfolder>/app.log')
    # set a custom log format, and add request
    # metadata to each log line
    handler.setFormatter(logging.Formatter(
        '%(asctime)s %(levelname)s: %(message)s '
        '[in %(filename)s:%(lineno)d] -- ip: %(clientip)s, '
        'url: %(url)s, method:%(method)s'))
    # the default log level is set to WARNING, so
    # we have to explicitly set the logging level
    # to INFO to get our custom message logged.
    app.logger.setLevel(logging.INFO)
    # append the handler to the default application logger
    app.logger.addHandler(handler)
    app.run()
To give you some more info about the environment where I'm running my REST API: it runs under uWSGI (chrooted to a specific folder), which sits behind NGINX (acting as a proxy).
Hope you can help somehow guys.
Thanks in advance.
I am trying to set up logging with CherryPy on my OpenShift Python 3.3 app. The 'appserver.log' file only updates until the actual server starts; after that, nothing gets added to the log file. I have read and followed (as far as I know) the documentation at the links below. Still no logging.
CherryPy server errors log
http://docs.cherrypy.org/dev/refman/_cplogging.html
My python code snippet:
def run_cherrypy_server(app, ip, port=8080):
    import os
    from cherrypy import wsgiserver
    from cherrypy import config
    # log.screen: Set this to True to have both "error" and "access" messages printed to stdout.
    # log.access_file: Set this to an absolute filename where you want "access" messages written.
    # log.error_file: Set this to an absolute filename where you want "error" messages written.
    appserver_error_log = os.path.join(os.environ['OPENSHIFT_HOMEDIR'], 'python', 'logs', 'appserver_error.log')
    appserver_access_log = os.path.join(os.environ['OPENSHIFT_HOMEDIR'], 'python', 'logs', 'appserver_access.log')
    config.update({
        'log.screen': True,
        'log.error_file': appserver_error_log,
        'log.access_file': appserver_access_log
    })
    server = wsgiserver.CherryPyWSGIServer(
        (ip, port), app, server_name='www.cherrypy.example')
    server.start()
The 'appserver_error.log' and 'appserver_access.log' files actually get created in the proper OpenShift Python directory. However, there is no logging information in either file.
Everything runs fine, but no logging.
Any ideas what I am doing wrong?
The WSGI server itself does not do any logging. The CherryPy engine (which controls process startup and shutdown) writes to the "error" log, and only CherryPy applications (that use CherryPy's Request and Response objects) write to the access log. If you're passing your own WSGI application, you'll have to do your own logging.
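For example, a minimal sketch of doing your own access logging at the WSGI layer (the logger name myapp.access and the middleware itself are made up for illustration); wrap the app before handing it to CherryPyWSGIServer:

import logging
import time

access_log = logging.getLogger('myapp.access')  # hypothetical logger name

class AccessLogMiddleware:
    """Wraps a WSGI app and logs one line per request."""
    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        start = time.time()
        captured = {}

        def capturing_start_response(status, headers, exc_info=None):
            # Remember the status line so we can log it after the app runs
            captured['status'] = status
            return start_response(status, headers, exc_info)

        result = self.app(environ, capturing_start_response)
        access_log.info('%s %s -> %s (%.1f ms)',
                        environ.get('REQUEST_METHOD'),
                        environ.get('PATH_INFO'),
                        captured.get('status'),
                        (time.time() - start) * 1000)
        return result

# usage: server = wsgiserver.CherryPyWSGIServer((ip, port), AccessLogMiddleware(app), ...)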
I want my Flask app to have different behaviors when it is being run on localhost and when it is being hosted online. How can I detect from a flask app when it is on localhost and when it is deployed?
You'll want to look at the configuration handling section of the docs, most specifically, the part on dev / production. To summarize here, what you want to do is:
Load a base configuration which you keep in source control, with sensible defaults for anything that needs a value; those defaults should be what makes sense for production, not for development.
Load an additional configuration from a path discovered via an environment variable, which provides the environment-specific settings (e.g. the database URL).
An example in code:
from __future__ import absolute_import

from flask import Flask

from . import config  # This is our default configuration

app = Flask(__name__)

# First, set the default configuration
app.config.from_object(config)

# Then, load the environment-specific information
app.config.from_envvar("MYAPP_CONFIG_PATH")

# Setup routes and then ...

if __name__ == "__main__":
    app.run()
See also: The docs for Flask.config
Here is one way of doing it. The key is comparing the current root URL, flask.request.url_root, against a known URL value you want to match.
Excerpt taken from github repo https://github.com/nueverest/vue_flask
from flask import Flask, request

def is_production():
    """ Determines if app is running on the production server or not.

    Get Current URI.
    Extract root location.
    Compare root location against developer server value 127.0.0.1:5000.
    :return: (bool) True if code is running on the production server, and False otherwise.
    """
    root_url = request.url_root
    developer_url = 'http://127.0.0.1:5000/'
    return root_url != developer_url
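Note that request.url_root is only available inside a request context, so is_production() has to be called from within a view (or with a test request context). A hypothetical usage sketch:

from flask import abort

@app.route('/debug-info')  # made-up endpoint, for illustration only
def debug_info():
    if is_production():
        abort(404)  # hide the endpoint when deployed
    return 'running locally'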