How to log a warning without creating a logger in Python?

How does one log a warning in Python without creating a logger? I also want to make sure it actually DOES log properly.
Is:
import logging
logging.warning('warning')
enough?

If I understand correctly, you want logging without all the hassle of setting up a logger. If you don't mind installing a third-party package, have a look at loguru.
It is ready to use out of the box without boilerplate:
from loguru import logger
logger.warning("That's it, beautiful and simple logging!")
Check the docs for more info.

Yes:
import logging
logging.warning('warning with real information from your code')
output:
WARNING:root:warning with real information from your code
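To see why this is enough, here is a minimal sketch of what the module-level helper does: the first call implicitly runs basicConfig(), which attaches a StreamHandler to the root logger with the default format and WARNING level.

```python
import logging

# Module-level helpers are enough: the first call implicitly runs
# basicConfig(), attaching a StreamHandler (stderr) to the root logger.
logging.warning('warning')    # prints "WARNING:root:warning" to stderr

# The implicit setup leaves the root logger configured:
assert logging.getLogger().handlers
```

If you want to control the destination or format, call basicConfig yourself before the first logging call; it is a no-op once the root logger already has handlers.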

How to disable the logging of Uvicorn?

I am working on FastAPI with Uvicorn. I want to disable the logging done by Uvicorn; I only need the logs emitted by my own server code.
I referred to this blog and implemented the logging.
You could change the log level to get only the messages you need; there are a bunch of possible options:
uvicorn main:app --log-level critical
I think I had the same issue.
To disable a logger one must first find the loggers that shall be disabled. I did this by following this stackoverflow post and this GitHub issue.
For me it worked to disable just two of uvicorn's loggers:
import logging
# ....CODE....
uvicorn_error = logging.getLogger("uvicorn.error")
uvicorn_error.disabled = True
uvicorn_access = logging.getLogger("uvicorn.access")
uvicorn_access.disabled = True
At first I tried the answer provided by @Sanchouz, but this didn't work out for me. Furthermore, setting propagate = False is regarded by some as bad practice (see this). As I wanted to do it programmatically, I couldn't use the answer provided by @funnydman.
Hope this helps, Thinklex.
Problem definition
I had a similar problem and found a solution. In my case, I created a small website with FastAPI to launch web scrapers in separate processes. I also created a wrapper class for loggers from the logging module. My problem was: when I started my app inside a Docker container with a uvicorn ... command, which includes some settings for logging into a file, all logging from any web scraper would go into both the scraper's separate log file and the server's log file. I have a lot of stuff to log, so it was quite a problem.
Short answer
When you get your logger, just set its propagate property to False, like this:
logger = logging.getLogger(logger_name)
logger.propagate = False
Long answer
At first I spent some time debugging the internals of the logging module, and I found a function called callHandlers, which loops through the handlers of the current logger and its parents. I wrongly assumed that the root logger was responsible for the problem, but after some more testing it turned out that the root logger didn't actually have any handlers. That means one of uvicorn's loggers was responsible, which makes sense, also considering Thinklex's solution. I tried his solution too, but it doesn't fit my case, because it disables uvicorn's logging completely, and I don't want that, so I'll stick with preventing propagation on my loggers.
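The propagation mechanics described above can be sketched without uvicorn at all; the logger names below are made up purely for illustration:

```python
import io
import logging

# A parent logger with a handler attached, standing in for the
# server's configured logger; "demo.parent" is a made-up name.
stream = io.StringIO()
parent = logging.getLogger("demo.parent")
parent.addHandler(logging.StreamHandler(stream))

child = logging.getLogger("demo.parent.child")
child.warning("first")     # callHandlers walks up and finds parent's handler

child.propagate = False
child.warning("second")    # the record no longer reaches the parent

print(stream.getvalue())   # only "first" was captured
```

The child needs no handler of its own for the first message to appear: callHandlers walks the parent chain and uses every handler it finds, which is exactly why one library's records can land in another's log file.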

Is there any way to stop logger in the AWS Iot Python SDK?

In the AWSIoTPythonSDK, in MqttCore.py, there is a logger defined:
class MqttCore(object):
    _logger = logging.getLogger(__name__)
Every time, "subscribe timed out" is getting printed in the console. Is there any way to forcefully stop the logger of the IoT SDK? I know it is not recommended, but I badly need to stop it forcefully.
I don't know if it's recommended, but I did it:
sdk_logger = logging.getLogger('AWSIoTPythonSDK.core.protocol.mqtt_core')
sdk_logger.setLevel(logging.WARNING)
You can see an example here where they sent it to a different file:
https://docs.aws.amazon.com/code-samples/latest/catalog/python-iotthingsgraph-camera.py.html
This is a very simple explanation of how logging works in general when you import a module (if you need it): https://www.youtube.com/watch?v=jxmzY9soFXg&ab_channel=CoreySchafer
credit: How do I define a different logger for an imported module in Python?
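If the goal is to get the SDK's messages out of the console rather than silence them entirely, the logger can be redirected to its own file. A sketch, assuming the same logger name as above (the file name and format are my own choices):

```python
import logging

# The SDK's logger, addressed by its module path as in the answer above.
sdk_logger = logging.getLogger('AWSIoTPythonSDK.core.protocol.mqtt_core')
sdk_logger.setLevel(logging.DEBUG)
sdk_logger.propagate = False          # keep its records off the console

# Send everything the SDK logs into its own file instead.
handler = logging.FileHandler('iot_sdk.log')
handler.setFormatter(
    logging.Formatter('%(asctime)s %(name)s %(levelname)s %(message)s'))
sdk_logger.addHandler(handler)
```

This keeps the SDK's diagnostics available for debugging while your console stays clean.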

How can I send the output of a Logger object in python to a file?

I am using the logging library in Python. I am using several logger objects to output the information. I got to the point where the output is too much, and I need to send it to a log file instead of the console.
I can't find a function that configures the output stream of the logger object, and the basicConf function doesn't seem to do the trick.
This is what I have:
import logging # Debug logging framework
logging.basicConfig(filename='loggggggmeee.txt')
logger = logging.getLogger('simulation')
logger.setLevel(logging.INFO)
#--------------- etc-------------------#
logger.info('This is sent to the console')
Any ideas? Thanks!
Try adding a file location like here; it will solve your problem:
import logging # Debug logging framework
logging.basicConfig(filename='c:\\loggggggmeee.txt')
logger = logging.getLogger('simulation')
logger.setLevel(logging.INFO)
#--------------- etc-------------------#
logger.info('This is sent to the console')
By the way, if you give a file name without a specific location, the file will be created in the current working directory...
I really recommend reading the Logging HOWTO; it's very simple and useful.
Also, the Cookbook has lots of useful examples.
Another good tip about naming loggers, from the docs:
a good convention to use when naming loggers is to use a module-level
logger, in each module which uses logging, named as follows:
logger = logging.getLogger(__name__)
This means that logger names
track the package/module hierarchy, and it’s intuitively obvious where
events are logged just from the logger name.
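If basicConfig still seems to have no effect (for example because some other code configured the root logger first, which makes a later basicConfig call a no-op), a handler can also be attached directly to your logger. A sketch, with the file name chosen here just for illustration:

```python
import logging

logger = logging.getLogger('simulation')
logger.setLevel(logging.INFO)

# A FileHandler attached directly to this logger works regardless of
# how (or whether) the root logger was configured.
file_handler = logging.FileHandler('simulation.log')
file_handler.setFormatter(
    logging.Formatter('%(levelname)s:%(name)s:%(message)s'))
logger.addHandler(file_handler)

logger.info('This is sent to the file')
```

This is also the route to take when different loggers in the same program should write to different files.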

Advantages of logging vs. print() + logging best practices

I'm currently working on 1.0.0 release of pyftpdlib module.
This new release will introduce some backward incompatible changes, in
that certain APIs will no longer accept bytes but unicode.
While I'm at it, as part of this breakage, I was contemplating the
possibility of getting rid of my logging functions, which currently use the
print statement, and using the logging module instead.
As of right now pyftpdlib delegates the logging to 3 functions:
def log(s):
    """Log messages intended for the end user."""
    print s
def logline(s):
    """Log commands and responses passing through the command channel."""
    print s
def logerror(s):
    """Log traceback outputs occurring in case of errors."""
    print >> sys.stderr, s
The user willing to customize logs (e.g. write them to a file) is
supposed to just overwrite these 3 functions as in:
>>> from pyftpdlib import ftpserver
>>>
>>> def log2file(s):
...     open('ftpd.log', 'a').write(s)
...
>>> ftpserver.log = ftpserver.logline = ftpserver.logerror = log2file
Now I'm wondering: what benefits would getting rid of this approach
and using the logging module instead bring?
From a module vendor perspective, how exactly am I supposed to
expose logging functionalities in my module?
Am I supposed to do this:
import logging
logger = logging.getLogger("pyftpdlib")
...and state in my doc that "logger" is the object which is supposed
to be used in case the user wants to customize how logs behave?
Is it legitimate to deliberately set a pre-defined output format, as in:
FORMAT = '[%(asctime)s] %(message)s'
logging.basicConfig(format=FORMAT)
logger = logging.getLogger('pyftpdlib')
...?
Can you think of a third-party module I can take cues from where the logging functionality is exposed and consolidated as part of the public API?
Thanks in advance.
Libraries (e.g. an FTP server or client library) should never initialize the logging system.
So it's OK to instantiate a logger object and point at logging.basicConfig in the
documentation (or provide a function along the lines of basicConfig with fancier output,
and let the user choose among logging configuration strategies: plain basicConfig or the
library-provided configuration).
Frameworks (e.g. Django) and servers (an FTP server daemon)
should initialize the logging system to a reasonable
default and allow for customization of the logging system configuration.
Typically, libraries should just create a NullHandler, which is simply a do-nothing handler. The end user or application developer who uses your library can then configure the logging system. See the section Configuring Logging for a Library in the logging documentation for more information. In particular, see the note which begins
It is strongly advised that you do not add any handlers other than NullHandler to your library's loggers.
In your case I would simply create a logging handler, as per the logging documentation:
import logging
logging.getLogger('pyftpdlib').addHandler(logging.NullHandler())
Edit: The logging implementation sketched out in the question seems perfectly reasonable. In your documentation, just mention logger and discuss or point users to the setLevel and setFormatter methods for customising the output from your library. Rather than using logging.basicConfig(format=FORMAT) you could consider using logging.config.fileConfig to manage the settings for your output, document the configuration file somewhere in your documentation, and again point the user to the logging module documentation for the format expected in that file.
Here is a resource I used to make a customizable logger. I didn't change much, I just added an if statement, and pass in whether or not I want to log to a file or just the console.
Check this Colorer out. It's really nice for colorizing the output so DEBUG looks different than WARN which looks different than INFO.
The logging module bundles a heck of a lot of nice functionality, like SMTP logging and file rotation logging (so you can keep a couple of old log files without making hundreds of them every time something goes wrong).
If you ever want to migrate to Python 3, using the logging module will remove the need to change your print statements.
Logging is awesome depending on what you're doing. I've only lightly used it before, to see where I am in a program (if this function is running, color the output this way), but it has significantly more power than a regular print statement.
You can look at Django (just create a sample project) and see how it initializes its logging subsystem.
There is also a contextual logger helper that I wrote some time ago - this logger automatically takes the name of the module/class/function it was initialized from. This is very useful for debug messages, where you can see right away which module emits the messages and how the call flow goes.

Logging every step and action by Python in a Large Script

I have created a large Python script, and now I need a logger for it. I have input steps, prompts, function calls, while loops, etc. in the script.
The logger also has to log successful operations.
I couldn't find a suitable answer, so I'm searching the internet again and wanted to ask you too.
What's your opinion?
Thanks
There's a module logging in the standard library. Basic usage is very simple; in every module that needs to do logging, put
logger = logging.getLogger(__name__)
and log with, e.g.,
logger.info("Doing something interesting")
logger.warning("Oops, something's not right")
Then in the main module, put something like
logging.basicConfig(level=logging.INFO)
to print all logs with a severity of INFO or worse to standard error. The module is very configurable, see its documentation for details.
