How to disable the logging of Uvicorn?

I am working on a FastAPI app served by Uvicorn. I want to disable Uvicorn's own logging; I only need the logs produced by my own server code.
I referred to this blog and implemented the logging.

You could change the log level to get only the messages you need; there are several possible options:
uvicorn main:app --log-level critical
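If you start Uvicorn programmatically rather than from the CLI, the same option can be passed to uvicorn.run. A minimal sketch, assuming the app object lives in main.py (the accepted levels are critical, error, warning, info, debug, and trace):

import uvicorn

if __name__ == "__main__":
    # Only messages at CRITICAL and above from uvicorn's loggers get through.
    uvicorn.run("main:app", log_level="critical")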

I think I had the same issue.
To disable a logger, you must first find the loggers that should be disabled. I did this by following this Stack Overflow post and this GitHub issue.
For me it was enough to disable just two of uvicorn's loggers:
import logging
# ....CODE....
uvicorn_error = logging.getLogger("uvicorn.error")
uvicorn_error.disabled = True
uvicorn_access = logging.getLogger("uvicorn.access")
uvicorn_access.disabled = True
At first I tried the answer provided by @Sanchouz, but this didn't work out for me. Furthermore, setting propagate = False is regarded by some as bad practice (see this). As I wanted to do it programmatically, I couldn't test the answer provided by @funnydman.
Hope this helps anyone, thinklex.

Problem definition
I had a similar problem and I found a solution. In my case, I created a small website with FastAPI to launch web scrapers in separate processes. I also created a wrapper class for loggers from the logging module. My problem: when I started my app inside a Docker container with a uvicorn ... command that included some settings for logging into a file, all logging from any web scraper went into both the scraper's separate log file and the server's log file. I have a lot of stuff to log, so this was quite a problem.
Short answer
When you get your logger, just set its propagate property to False, like this:
logger = logging.getLogger(logger_name)
logger.propagate = False
Long answer
At first I spent some time debugging the internals of the logging module and found a function called callHandlers, which loops through the handlers of the current logger and its parents. I wrongly assumed that the root logger was responsible for the problem, but after some more testing it turned out that the root logger didn't actually have any handlers. That means one of uvicorn's loggers was responsible, which makes sense, also considering Thinklex's solution. I tried his solution too, but it doesn't fit my case, because it disables uvicorn's logging completely, and I don't want that, so I'll stick with preventing propagation on my loggers.
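To illustrate the mechanics, here is a minimal sketch of how a record travels up to parent handlers via callHandlers and how propagate = False stops it (the logger names are illustrative):

import logging

# Parent logger with its own handler, similar to uvicorn's loggers.
parent = logging.getLogger("app")
parent.addHandler(logging.StreamHandler())

# Child logger with a separate handler, like a per-scraper log.
child = logging.getLogger("app.scraper")
child.addHandler(logging.StreamHandler())

child.warning("emitted twice: by the child's handler, then the parent's")

child.propagate = False
child.warning("emitted once: only by the child's own handler")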

Related

Python switch a logger off by default

In Python, I want to optionally log from a module, but have this logging off by default, enabled with a function call. (The output from this file will be very spammy, so it is best off by default.)
I want my code to look something like this:
log = logging.getLogger("module")
log.switch_off()
---
import module
module.log.switch_on()
I can't seem to find an option to disable a logger.
Options considered:
- Using filters: I think this is a bit confusing for the client.
- Setting a level higher than the one I use to log (e.g. logging.CRITICAL): I don't like that we could inadvertently throw log lines into normal output if we use that level.
- Using a flag and adding ifs.
- Requiring the client to exclude our log events. See logging config.
There are two pieces at play here. Python has logging.Logger objects and logging.Handler objects that work together to serve you logging information. Loggers handle the logic of collecting logging information and deciding whether records should be passed to the associated handlers: if the level of a log record is less severe than the level set on the logger, the record is not passed on.
Handlers have the same feature, and since handlers are the last link between log records and the final output, that is where you would likely want to disable the interaction. To accomplish this, and to avoid having logs inadvertently emitted elsewhere, you can add a new logging level to your application:
logging.addLevelName(logging.CRITICAL + 1, "DISABLELOGGING")
Note: This only maps the name to value for purposes of formatting, so you will need to add a member to the logging module as well:
logging.DISABLELOGGING = logging.CRITICAL + 1
Setting it to a value higher than CRITICAL ensures that no normal log event will pass and be emitted.
Then you just need to set your handler to the level you defined:
handler.setLevel(logging.DISABLELOGGING)
and now there should be no logs that pass the handler, and therefore no output shown.
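Putting the pieces together, here is a minimal sketch of the switch_off()/switch_on() API the question asks for, built on this approach (the module-level handler and the free functions are assumptions for illustration):

import logging

logging.DISABLELOGGING = logging.CRITICAL + 1
logging.addLevelName(logging.DISABLELOGGING, "DISABLELOGGING")

log = logging.getLogger("module")
_handler = logging.StreamHandler()
log.addHandler(_handler)

def switch_off():
    # No record can exceed DISABLELOGGING, so the handler drops everything.
    _handler.setLevel(logging.DISABLELOGGING)

def switch_on():
    # NOTSET on a handler means "emit everything the logger passes along".
    _handler.setLevel(logging.NOTSET)

switch_off()
log.critical("suppressed")  # not shown
switch_on()
log.critical("visible")     # shown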

How can I combine the python logging cookbook example of logging network events with a logging.config.listen thread?

Theoretically this should be simple. Taking the example from the logging cookbook here:
https://docs.python.org/3/howto/logging-cookbook.html#sending-and-receiving-logging-events-across-a-network
I want to add the ability to change the logging configuration on the fly. I simply added:
import logging
import logging.config

logging.config.dictConfig(...)  # set up the root logger
config_thread = logging.config.listen()
config_thread.start()
tcpserver = LogRecordSocketReceiver()
and on startup, this works fine with the provided example of sending log events across the network to the socket receiver.
However, the problem occurs once I send in a new configuration. After that the log server won't produce any more logging messages. That happens even though each handleLogRecord() call gets a new instance of the logger through logging.getLogger().
Any ideas as to what I'm missing?
You need to ensure that the configuration dictionary has disable_existing_loggers set to False. Otherwise, when a new configuration is applied, all existing loggers are disabled and stop producing output.
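A minimal sketch of such a configuration dictionary (the handler and level choices are illustrative):

import logging.config

config = {
    "version": 1,
    # Keep loggers that already exist (e.g. those fetched in
    # handleLogRecord) from being silenced by this reconfiguration.
    "disable_existing_loggers": False,
    "handlers": {
        "console": {"class": "logging.StreamHandler"},
    },
    "root": {"level": "INFO", "handlers": ["console"]},
}
logging.config.dictConfig(config)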

File descriptor doesn't update for logging in Django

We use Python (2.7)/Django (1.8.1) and Gunicorn (19.4.5) for our web application, and supervisor (3.0) to monitor it. I have recently encountered two issues with logging:
1. Django was logging into the previous day's logs (we have log rotation enabled).
2. Django was not logging anything at all.
The first scenario is understandable: log rotation changed the file, but Django's file handle was not updated.
The second scenario was fixed when I restarted the supervisor process, which again led me to believe the file descriptor was not updated in the Django process.
I came across this SO thread, which states:
Each child is an independent process, and file handles in the parent may be closed in the child after a fork (assuming POSIX). In any case, logging to the same file from multiple processes is not supported.
So I have a few questions:
1. My Gunicorn setup has 4 child processes; if one of them fails while writing to a log file, will the other child processes be unable to use it? And how do I debug these kinds of scenarios?
2. Personally I find debugging errors in the Python logging module difficult. Can someone point out how to debug errors such as this, or is there a way I can monkey-patch logging so that it does not fail silently? (Kindly read the update section.)
3. I have seen Django's own log rotation cause issue type 1 as explained above, as opposed to a script scheduled via cron. So which is preferable?
Note: The logging config is not the problem; I have already spent a fair amount of time ruling that out. Also, if the config were the issue, Django would not write log files even after a process restart.
Update:
For my second question, I see that the logging module provides a raiseExceptions option for handling failures, although enabling it is discouraged in a production environment. Documentation here. So now my question becomes: how do I set this in Django?
I felt like closing this question. It seems a bit awkward and stupid after 2 months, but I guess being stupid is part of learning, and I want this to serve as a reference for people who stumble across it.
Scenario 1: Django, when using TimedRotatingFileHandler, sometimes seems not to update the file descriptor and hence writes to old log files unless we restart supervisor. We have yet to find the reason for this behaviour and will update this answer if we do. For now we are using WatchedFileHandler and the logrotate utility to rotate the logs.
Scenario 2: This is the stupid part. When logging with string formatting, I forgot to supply enough arguments, which is why the logger was erring. But this error did not get propagated. When testing locally, I found that the logging module was actually raising that error, but silently, and any log calls after it in the module were not getting printed. Lessons learned from this scenario:
If there is a problem with logging, first check that the string formatting does not err.
Use log.debug('example: {msg}'.format(msg=msg)) instead of log.debug('example: %s', msg), so that a formatting error surfaces at the call site rather than being swallowed inside the logging module.
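For reference, a minimal sketch of the WatchedFileHandler setup mentioned in scenario 1 (the logger name and file path are illustrative); unlike TimedRotatingFileHandler, it re-opens the file whenever an external tool such as logrotate swaps it out:

import logging
from logging.handlers import WatchedFileHandler

logger = logging.getLogger("myapp")
# WatchedFileHandler checks the file's device and inode on every emit
# and re-opens the file if logrotate has moved it in the meantime.
handler = WatchedFileHandler("/var/log/myapp/app.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)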

Set global minimum logging level across all loggers in Python/Django

At the time of invocation of my application, I'd like to be able to read in (from config, the command line, or an environment variable) the minimum logging level across ALL of my loggers, and then globally set the logging module to respect this MIN_LOGGING_LEVEL.
For example, something like this (which doesn't exist): logging.setLevel(logging.INFO)
I understand that I can individually, on the loggers and on handlers, set the logging levels, as well as create filters for handlers to respect things like the DEBUG flag. However, what I really want is to be able to change the minimum log level across all loggers at invocation time (and depending on environment, for example).
Do I have to roll this myself and construct the configuration dynamically at runtime to accomplish this? Or is there a better way or pattern that I'm not seeing?
You can use logging.disable - the following ensures logging.INFO and below are disabled:
logging.disable(logging.INFO)
From the documentation:
Its effect is to disable all logging calls of severity lvl and below, so that if you call it with a value of INFO, then all INFO and DEBUG events would be discarded, whereas those of severity WARNING and above would be processed according to the logger’s effective level.
To undo it later, you can call:
logging.disable(logging.NOTSET)
As an alternative to using the disable function, you can override the default level setting of WARNING as follows:
import logging
logging.getLogger().setLevel(logging.INFO) # choose your level here
I wouldn't call this a global solution per se, as it requires a file-level declaration, but it's useful IMO for simple debugging during development, before a more cohesive project hierarchy has been established.
I just ran into something similar, but I went in a different direction. I ended up with...
import logging
logging.root.setLevel(logging.DEBUG)
Python loggers work hierarchically, so setting the level on the root logger affects all downstream loggers that inherit from it.
In a few of the other examples here, logging.getLogger() also returns the root logger, so that code is essentially the same as this.
Given the hierarchical nature of loggers, the new information here is that you don't need to set the level globally, on the root: you can be selective while still affecting multiple modules/loggers.
If you use log = logging.getLogger(__name__) to retrieve your logger in each module, and you configure logging inside a package's __init__.py file, you can set the level on that package's logger, affecting all loggers inside the package.
- package_a
  - __init__.py
  - module_a.py
  - module_b.py
- package_b
  - __init__.py
  - module_c.py
  - module_d.py
Say you have packages and modules set up, as is shown above.
import logging
logging.getLogger('package_a').setLevel(logging.DEBUG)
Running this code configures module_a.py and module_b.py at the DEBUG level, while module_c and module_d remain unaltered.
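For completeness, a minimal sketch of what module_a.py would contain for this to work (names taken from the tree above):

# module_a.py
import logging

log = logging.getLogger(__name__)  # resolves to "package_a.module_a"

def do_work():
    # Emitted, because this logger inherits the DEBUG level that was
    # set on its parent logger "package_a".
    log.debug("debug message from module_a")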
I think what you're looking for is this:
logging.getLogger().setLevel(logging.INFO)

Advantages of logging vs. print() + logging best practices

I'm currently working on the 1.0.0 release of the pyftpdlib module. This new release will introduce some backward-incompatible changes, in that certain APIs will no longer accept bytes but unicode. While I'm at it, as part of this breakage, I was contemplating the possibility of getting rid of my logging functions, which currently use the print statement, and using the logging module instead.
As of right now, pyftpdlib delegates logging to 3 functions:
import sys

def log(s):
    """Log messages intended for the end user."""
    print s

def logline(s):
    """Log commands and responses passing through the command channel."""
    print s

def logerror(s):
    """Log traceback outputs occurring in case of errors."""
    print >> sys.stderr, s
A user who wants to customize the logs (e.g. write them to a file) is supposed to just overwrite these 3 functions, as in:
>>> from pyftpdlib import ftpserver
>>>
>>> def log2file(s):
...     open('ftpd.log', 'a').write(s)
...
>>> ftpserver.log = ftpserver.logline = ftpserver.logerror = log2file
Now I'm wondering: what benefits would getting rid of this approach and using the logging module instead bring?
From a module vendor's perspective, how exactly am I supposed to expose logging functionality in my module?
Am I supposed to do this:
import logging
logger = logging.getLogger("pyftpdlib")
...and state in my docs that "logger" is the object that is supposed to be used if the user wants to customize how logs behave?
Is it legitimate to deliberately set a predefined output format, as in:
FORMAT = '[%(asctime)s] %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger('pyftpdlib')
...?
Can you think of a third-party module I can take cues from, where the logging functionality is exposed and consolidated as part of the public API?
Thanks in advance.
Libraries (an FTP server or client library) should never initialize the logging system. So it is fine to instantiate a logger object and point at logging.basicConfig in the documentation (or provide a function along the lines of basicConfig with fancier output, and let the user choose among logging configuration strategies: plain basicConfig or the library-provided configuration).
Frameworks (e.g. Django) or servers (an FTP server daemon) should initialize the logging system to a reasonable default and allow for customization of the logging system configuration.
Typically, libraries should just create a NullHandler, which is simply a do-nothing handler. The end user or application developer who uses your library can then configure the logging system. See the section Configuring Logging for a Library in the logging documentation for more information. In particular, see the note which begins
It is strongly advised that you do not add any handlers other than NullHandler to your library's loggers.
In your case I would simply add a NullHandler, as per the logging documentation:
import logging
logging.getLogger('pyftpdlib').addHandler(logging.NullHandler())
Edit: The logging implementation sketched out in the question seems perfectly reasonable. In your documentation, just mention logger and discuss or point users to the Logger.setLevel and Handler.setFormatter methods for customising the output from your library. Rather than using logging.basicConfig(format=FORMAT), you could consider using logging.config.fileConfig to manage the settings for your output, document the configuration file somewhere in your documentation, and again point the user to the logging module documentation for the format expected in that file.
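To make the recommended split concrete, here is a minimal sketch of the library side versus the application side (the format string is illustrative):

import logging

# Library side (inside pyftpdlib): attach only a NullHandler and never
# configure handlers, formats, or levels on behalf of the user.
logger = logging.getLogger("pyftpdlib")
logger.addHandler(logging.NullHandler())

# Application side (the user's script): configure output as desired.
logging.basicConfig(format="[%(asctime)s] %(message)s", level=logging.INFO)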
Here is a resource I used to make a customizable logger. I didn't change much; I just added an if statement and pass in whether I want to log to a file or just to the console.
Check this Colorer out. It's really nice for colorizing the output, so DEBUG looks different from WARN, which looks different from INFO.
The logging module bundles a lot of nice functionality, like SMTP logging and rotating file logging (so you can keep a few old log files instead of creating hundreds of them every time something goes wrong).
If you ever want to migrate to Python 3, using the logging module will remove the need to change your print statements.
Logging is awesome, depending on what you're doing. I've only lightly used it before, to see where I am in a program (if this function is running, color the output this way), but it has significantly more power than a regular print statement.
You can look at Django (just create a sample project) and see how it initializes the logging subsystem.
There is also a contextual logger helper that I wrote some time ago: this logger automatically takes the name of the module/class/function it was initialized from. This is very useful for debug messages, where you can see at a glance which module emits the messages and how the call flow goes.
