How to collect error messages from stderr using a daemonized uwsgi? - python

I run my uwsgi with --daemonize=~/uwsgi.log.
I use Flask. In my Flask app, if I print a message to stdout, it shows up in uwsgi.log. If I print to stderr, uwsgi.log won't show the message. How can I get uwsgi to collect messages from stderr?
The main problem is that I cannot get uwsgi.log to collect the exception traceback after I catch an exception in my Flask app.

Flask is catching your exceptions; make sure you set PROPAGATE_EXCEPTIONS in its config.
from flask import Flask

application = Flask(__name__)
application.config['PROPAGATE_EXCEPTIONS'] = True

@application.route('/')
def hello_world():
    return 'Hello World!'
uWSGI logging can be set with
--logto /var/log/uwsgi/app.log
or with the --logto2 flag if you want to separate stdout from stderr.
There is also the possibility of configuring logger plugins (forwarding to syslog, etc.); however, these plugins have to be compiled into uWSGI.
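If the traceback still does not make it into the log, a workaround is to push it through Python's logging explicitly instead of relying on raw stderr. Here is a minimal sketch of that idea (the handler setup and the risky_operation() helper are illustrative assumptions, not part of the original app):

import logging
import sys

from flask import Flask

application = Flask(__name__)

# Attach a handler that writes to stdout, which --daemonize redirects
# into uwsgi.log alongside uWSGI's own messages.
handler = logging.StreamHandler(sys.stdout)
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
application.logger.addHandler(handler)
application.logger.setLevel(logging.INFO)

@application.route('/')
def index():
    try:
        risky_operation()  # hypothetical function that may raise
    except Exception:
        # exception() logs at ERROR level and appends the full traceback
        application.logger.exception('caught an exception')
        return 'error', 500
    return 'ok'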


APScheduler does not run scheduled tasks: Flask + uWSGI

I have an application on Flask and uWSGI with a job store in SQLite. I start the scheduler along with the application, and add new tasks through add_job when certain URLs are visited.
I can see that the tasks are saved correctly in the job store, and I can view them through the API, but they do not execute at the appointed time.
A few important details:
uwsgi.ini
processes = 1
enable-threads = true
__init__.py
scheduler = APScheduler()
scheduler.init_app(app)
with app.app_context():
    scheduler.start()
main.py
scheduler.add_job(
    id='{}{}'.format(test.id, g.user.id),
    func=pay_day,
    args=[test.id, g.user.id],
    trigger='interval',
    minutes=test.timer,
)
in service.py
def pay_day(tid, uid):
    with scheduler.app.app_context():
        ...  # some code here
Interesting behavior: if you create a task by visiting the URL and restart the application after that, the task will be executed. But if the application keeps running and one of the users creates a task by visiting the URL, that task will not run until the application is restarted.
I don't get any errors or exceptions, even in the scheduler logs.
I'm out of ideas about how to make it work or what I did wrong. I need a hint.
uWSGI employs some tricks which disable the Global Interpreter Lock and with it, the use of threads which are vital to the operation of APScheduler. To fix this, you need to re-enable the GIL using the --enable-threads switch. See the uWSGI documentation for more details.
I know that you had enable-threads = true in uwsgi.ini, but try enabling it from the command line as well, as shown below.
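For example (the socket address and module name here are placeholders, not taken from the question):

uwsgi --ini uwsgi.ini --enable-threads --socket 0.0.0.0:8001 -w app:app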

Sentry logging integration prevents sentry events being sent (Python)

I have a Python 3 Flask server running through WSGI. I have a config file, imported by my API code, that handles environment variables, and I set up Sentry in this file. This is my code, which is set up exactly as described in the Sentry docs: https://docs.sentry.io/platforms/python/logging/
if sentry_dsn is not None:
    sentry_logging = LoggingIntegration(
        level=logging.INFO,
        event_level=logging.CRITICAL,
    )
    LOG.debug(f"Initialising sentry for environment {sentry_environment}")
    sentry_sdk.init(
        sentry_dsn,
        environment=sentry_environment,
        integrations=[sentry_logging],
    )
else:
    LOG.warn("Sentry key not set up")
The problem is that this does not send any events to Sentry, either from exception logs or from uncaught exceptions. I know that the DSN is correct, because if I set up Sentry like this, all uncaught exceptions as well as error and exception logs are sent to my Sentry project:
if sentry_dsn is not None:
    LOG.debug("Initialising sentry")
    sentry_sdk.init(sentry_dsn, environment=sentry_environment)
else:
    LOG.warn("Sentry key not set up")
I've tried the setup with debug=True in the Sentry init, and the logs confirm that Sentry initialises and sets up the integrations. But when an event occurs that should be reported, nothing is logged or recorded by Sentry.
Any help would be appreciated
It turns out that Flask was intercepting the exceptions, because the default Sentry integrations exclude Flask. This meant that nothing was reaching Sentry. I added the Flask integration according to the Sentry docs (https://docs.sentry.io/platforms/python/flask/), and Sentry messages were sent as I expected.
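For reference, a minimal sketch of the combined setup (the DSN string and environment value below are placeholders):

import logging

import sentry_sdk
from sentry_sdk.integrations.flask import FlaskIntegration
from sentry_sdk.integrations.logging import LoggingIntegration

sentry_logging = LoggingIntegration(
    level=logging.INFO,            # breadcrumbs: INFO and above
    event_level=logging.CRITICAL,  # events: CRITICAL and above
)

sentry_sdk.init(
    "https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    environment="production",                          # placeholder environment
    integrations=[sentry_logging, FlaskIntegration()],
)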

How to use the logging module in Python with gunicorn

I have a flask-based app. When I run it locally, I run it from the command line, but when I deploy it, I start it with gunicorn with multiple workers.
I want to use the logging module to log to a file. The docs I've found for this are https://docs.python.org/3/library/logging.html and https://docs.python.org/3/howto/logging-cookbook.html .
I am confused about the correct way to use logging when my app may be launched with gunicorn. The docs address threading but assume I have control of the master process. Points of confusion:
Will logger = logging.getLogger('myapp') return the same logger object in different gunicorn worker threads?
If I am attaching a logging FileHandler to my logger in order to log to a file, how can I avoid doing this multiple times in the different workers?
My understanding - which may be wrong - is that if I just call logger.setLevel(logging.DEBUG), my messages will still pass through the root logger, which may have a higher default logging level and may ignore debug messages, so I also need to call logging.basicConfig(level=logging.DEBUG) for my debug messages to get through. But the docs say not to call logging.basicConfig() from a thread. How can I correctly set the root logging level when using gunicorn? Or do I not need to?
This is my typical Flask/Gunicorn setup. Note that Gunicorn is run via supervisor.
wsgi_web.py. Note ProxyFix to get a client's real IP address from Nginx.
from werkzeug.contrib.fixers import ProxyFix
from app import create_app
import logging
gunicorn_logger = logging.getLogger('gunicorn.error')
application = create_app(logger_override=gunicorn_logger)
application.wsgi_app = ProxyFix(application.wsgi_app, num_proxies=1)
Edit February 2020: for newer versions of werkzeug, use the following and adjust the parameters to ProxyFix as necessary:
from werkzeug.middleware.proxy_fix import ProxyFix
from app import create_app
import logging
gunicorn_logger = logging.getLogger('gunicorn.error')
application = create_app(logger_override=gunicorn_logger)
application.wsgi_app = ProxyFix(application.wsgi_app, x_for=1, x_host=1)
Flask application factory create_app
from flask import Flask

def create_app(logger_override=None):
    app = Flask(__name__)
    if logger_override:
        # working solely with the flask logger
        app.logger.handlers = logger_override.handlers
        app.logger.setLevel(logger_override.level)
        # OR, working with multiple loggers
        # for logger in (app.logger, logging.getLogger('sqlalchemy')):
        #     logger.handlers = logger_override.handlers
        #     logger.setLevel(logger_override.level)
    # more
    return app
The Gunicorn command (4th line) within the supervisor conf; note that the --log-level parameter has been set to info in this instance, and that the X-REAL-IP header is passed to --access-logformat.
[program:web]
directory = /home/paul/www/example
environment = APP_SETTINGS="app.config.ProductionConfig"
command = /home/paul/.virtualenvs/example/bin/gunicorn wsgi_web:application -b localhost:8000 --workers 3 --worker-class gevent --keep-alive 10 --log-level info --access-logfile /home/paul/www/logs/admin.gunicorn.access.log --error-logfile /home/paul/www/logs/admin.gunicorn.error.log --access-logformat '%%({X-REAL-IP}i)s %%(l)s %%(u)s %%(t)s "%%(r)s" %%(s)s %%(b)s "%%(f)s" "%%(a)s"'
user = paul
autostart=true
autorestart=true
Each worker is an isolated process with its own memory, so you can't really share the same logger across different workers.
Your code runs inside these workers because the master process only cares about managing the workers.
The master process is a simple loop that listens for various process
signals and reacts accordingly. It manages the list of running workers
by listening for signals like TTIN, TTOU, and CHLD. TTIN and TTOU tell
the master to increase or decrease the number of running workers.
In Gunicorn itself, there are two main run modes:
Sync
Async
So this is different from threading; this is multiprocessing.
However, since Gunicorn 19, a threads option can be used to process requests in
multiple threads. Using threads assumes use of the gthread worker.
With this in mind, the logging code will be written once and will be invoked multiple times, each time a new worker is created. You can use the Singleton pattern to ensure the same logger instance is used inside the same worker.
For configuring the logger itself, you just need to follow the normal process of setting the root logger level and the levels of the individual loggers.
basicConfig() won't affect the root handler if it's already set up:
This function does nothing if the root logger already has handlers configured for it.
To set the level on the root logger explicitly, do:
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
Then the level can be set on the handler or on the logger itself.
handler = logging.handlers.TimedRotatingFileHandler(log_path, when='midnight', backupCount=30)
handler.setLevel(min_level)
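Putting those pieces together, a minimal per-worker setup might look like the sketch below (the logger name and log path are assumptions). Because each Gunicorn worker is its own process, module-level setup like this runs once per worker, and the handlers guard prevents duplicate handlers if the function is called more than once within a worker:

import logging
import logging.handlers

def get_logger(name='myapp', log_path='/tmp/myapp.log'):
    # getLogger() already returns a per-process singleton for a given name,
    # so repeated calls inside one worker reuse the same logger object.
    logger = logging.getLogger(name)
    if not logger.handlers:  # avoid attaching duplicate handlers
        handler = logging.handlers.TimedRotatingFileHandler(
            log_path, when='midnight', backupCount=30)
        # %(process)d makes it possible to tell workers apart in a shared file
        handler.setFormatter(logging.Formatter(
            '%(asctime)s %(process)d %(levelname)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.DEBUG)
    return logger

Calling get_logger() from application code then returns the worker-local logger without re-attaching handlers.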
You can check this similar answer for logging-related details:
Set logging levels
More resources:
http://docs.gunicorn.org/en/stable/design.html

How to set up logging with Cherrypy?

I am trying to set up logging with CherryPy on my OpenShift Python 3.3 app. The appserver.log file only updates until the actual server starts; then nothing more gets added to the log file. I have read and followed (as far as I know) the documentation at the links below. Still no logging.
CherryPy server errors log
http://docs.cherrypy.org/dev/refman/_cplogging.html
My python code snippet:
import os

def run_cherrypy_server(app, ip, port=8080):
    from cherrypy import wsgiserver
    from cherrypy import config

    # log.screen: Set this to True to have both "error" and "access" messages printed to stdout.
    # log.access_file: Set this to an absolute filename where you want "access" messages written.
    # log.error_file: Set this to an absolute filename where you want "error" messages written.
    appserver_error_log = os.path.join(os.environ['OPENSHIFT_HOMEDIR'], 'python', 'logs', 'appserver_error.log')
    appserver_access_log = os.path.join(os.environ['OPENSHIFT_HOMEDIR'], 'python', 'logs', 'appserver_access.log')
    config.update({
        'log.screen': True,
        'log.error_file': appserver_error_log,
        'log.access_file': appserver_access_log
    })

    server = wsgiserver.CherryPyWSGIServer(
        (ip, port), app, server_name='www.cherrypy.example')
    server.start()
The appserver_error.log and appserver_access.log files actually get created in the proper OpenShift python directory. However, there is no logging information in either of the files.
Everything runs fine, but no logging.
Any ideas what I am doing wrong?
The WSGI server itself does not do any logging. The CherryPy engine (which controls process startup and shutdown) writes to the "error" log, and only CherryPy applications (that use CherryPy's Request and Response objects) write to the access log. If you're passing your own WSGI application, you'll have to do your own logging.
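As an illustration, "doing your own logging" for a plain WSGI app can be as small as a middleware wrapper. This is a sketch under that assumption (the logger name and log format are made up, and this is not CherryPy API):

import logging

access_log = logging.getLogger('myapp.access')  # assumed logger name

class LoggingMiddleware(object):
    """Wrap a WSGI app and log one line per request."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        def logging_start_response(status, headers, exc_info=None):
            # Log method, path, and response status for each request.
            access_log.info('%s %s -> %s',
                            environ.get('REQUEST_METHOD'),
                            environ.get('PATH_INFO'),
                            status)
            return start_response(status, headers, exc_info)
        return self.app(environ, logging_start_response)

# e.g. server = wsgiserver.CherryPyWSGIServer((ip, port), LoggingMiddleware(app))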

Disable console messages in Flask server

I have a Flask server running in standalone mode (using app.run()). But I don't want any messages in the console, like
127.0.0.1 - - [15/Feb/2013 10:52:22] "GET /index.html HTTP/1.1" 200 -
...
How do I disable verbose mode?
You can set the level of the Werkzeug logger to ERROR; in that case, only errors are logged:
import logging
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)
Here is a fully working example tested on OS X, Python 2.7.5, Flask 0.10.0:
from flask import Flask
app = Flask(__name__)

import logging
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
This solution lets you keep your own prints and stack traces while suppressing the information-level logs from Flask, such as 127.0.0.1 - - [15/Feb/2013 10:52:22] "GET /index.html HTTP/1.1" 200:
from flask import Flask
import logging
app = Flask(__name__)
log = logging.getLogger('werkzeug')
log.disabled = True
None of the other answers worked correctly for me, but I found a solution based on Peter's comment. Flask apparently no longer uses the logging module for its startup messages and has switched to the click package. By overriding click.echo and click.secho, I eliminated Flask's startup message from app.run().
import logging

import click
from flask import Flask

app = Flask(__name__)
log = logging.getLogger('werkzeug')
log.setLevel(logging.ERROR)

def secho(text, file=None, nl=None, err=None, color=None, **styles):
    pass

def echo(text, file=None, nl=None, err=None, color=None, **styles):
    pass

click.echo = echo
click.secho = secho

@app.route("/")
def hello():
    return "Hello World!"

if __name__ == "__main__":
    app.run()
Between setting the logging level to ERROR and overriding the click methods with empty functions, all non-error log output should be prevented.
To suppress the Serving Flask app ... banner:
import os

os.environ['WERKZEUG_RUN_MAIN'] = 'true'
app.run()
In case you are using a gevent WSGI server, set log to None:
import gevent.pywsgi

gevent_server = gevent.pywsgi.WSGIServer(("0.0.0.0", 8080), app, log=None)
@Drewes's solution works most of the time, but in some cases I still tended to get werkzeug logs. If you really don't want to see any of them, I suggest disabling the logger like this:
from flask import Flask
import logging
app = Flask(__name__)
log = logging.getLogger('werkzeug')
log.disabled = True
app.logger.disabled = True
For me it failed when abort(500) was raised.
Late answer, but I found a way to suppress EACH AND EVERY console message (including the ones displayed during an abort(...) error).
import os
import logging
logging.getLogger('werkzeug').disabled = True
os.environ['WERKZEUG_RUN_MAIN'] = 'true'
This is basically a combination of the answers given by Slava V and Tom Wojcik
Another reason you may want to change the logging output is for tests, redirecting the server logs to a log file.
I couldn't get the suggestions above to work either; it looks like the loggers are set up as part of the app starting. I was able to get it working by changing the log levels after starting the app:
# ... (in setUpClass)
server = Thread(target=lambda: app.run(host=hostname, port=port, threaded=True))
server.daemon = True
server.start()
wait_for_boot(hostname, port)  # curls a health check endpoint

log_names = ['werkzeug']
app_logs = map(lambda logname: logging.getLogger(logname), log_names)
file_handler = logging.FileHandler('log/app.test.log', 'w')
for app_log in app_logs:
    for hdlr in app_log.handlers[:]:  # remove all old handlers
        app_log.removeHandler(hdlr)
    app_log.addHandler(file_handler)
Unfortunately the * Running on localhost:9151 line and the first health check are still printed to standard out, but when running lots of tests it cleans up the output a ton.
"So why log_names?", you ask. In my case there were some extra logs I needed to get rid of. I was able to find which loggers to add to log_names via:
from flask import Flask
app = Flask(__name__)
import logging
print(logging.Logger.manager.loggerDict)
Side note: It would be nice if there was a flaskapp.getLogger() or something so this was more robust across versions. Any ideas?
Some more key words: flask test log remove stdout output
thanks to:
http://code.activestate.com/lists/python-list/621740/ and
How to change filehandle with Python logging on the fly with different classes and imports
I spent absolute ages trying to get rid of these response logs with all the different solutions, but as it turns out it wasn't Flask/Werkzeug but the Gunicorn access logs being dumped on stderr.
The solution was replacing the default access log handler with a NullHandler by adding this block to the Gunicorn config file:
logconfig_dict = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "INFO"},
        "null": {"class": "logging.NullHandler"},
    },
    "loggers": {
        "gunicorn.error": {"level": "INFO", "propagate": False, "handlers": ["console"]},
        "gunicorn.access": {"level": "INFO", "propagate": False, "handlers": ["null"]},
    },
}
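Note that logconfig_dict is a config-file setting; assuming the block lives in, say, a gunicorn.conf.py (the filename here is just an example), it is picked up with gunicorn -c gunicorn.conf.py app:app.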
Somehow none of the above options, including .disabled = True, worked for me.
The following did the trick, though:
logging.getLogger('werkzeug').setLevel(logging.CRITICAL)
Using the latest versions as of November 2021 under Python 3.7.3:
pip3 list | grep -E "(connexion|Flask|Werkzeug)"
connexion2 2.10.0
Flask 2.0.2
Werkzeug 2.0.2
Here's the answer to disable all logging, including on werkzeug version 2.1.0 and newer:
import flask.cli
flask.cli.show_server_banner = lambda *args: None
import logging
logging.getLogger("werkzeug").disabled = True
My current workaround to silence Flask as of 2022-10.
It disables the logging and the startup banner:
class WSServer:
    ...

    @staticmethod
    def _disabled_server_banner(*args, **kwargs):
        """
        Mock for the show_server_banner method to suppress spam to stdout
        """
        pass

    def _run_server(self):
        """
        Winds up the server.
        """
        # disable general logging
        log = logging.getLogger('werkzeug')
        log.setLevel(logging.CRITICAL)

        # disable start-up banner
        from flask import cli
        if hasattr(cli, "show_server_banner"):
            cli.show_server_banner = self._disabled_server_banner
        ...
A brute-force way to do it, if you really don't want anything logged to the console besides print() statements, is logging.basicConfig(level=logging.FATAL). This disables all logs below FATAL status. It would not disable printing, but yeah, just a thought :/
EDIT:
I realized it would be selfish of me not to put a link to the documentation I used :)
https://docs.python.org/3/howto/logging.html#logging-basic-tutorial
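A minimal sketch of that approach (illustrative only):

import logging

# Raise the root threshold so every record below FATAL/CRITICAL is dropped.
logging.basicConfig(level=logging.FATAL)

logging.getLogger('werkzeug').error('suppressed')  # below FATAL, not shown
print('print() output is unaffected')              # still reaches the console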
The first point: according to the official Flask documentation, you shouldn't run a Flask application using app.run() in production. The best solution is using uwsgi, which lets you disable the default Flask logs with the --disable-logging flag.
For example:
uwsgi --socket 0.0.0.0:8001 --disable-logging --protocol=http -w app:app
