Why is my error not being written to my logfile? - python

I have an existing program (a Python executable), but when I run it, it sometimes stops for no apparent reason. To see exactly what goes wrong and when, I decided to add logging to the application.
Below is my code. The file log.txt is being created, which is good. For testing I then caused an error on purpose by changing the database password. From experience I know this should raise a mysql.connector.errors exception, and it does in my terminal. But when I open the log file, it is empty...
My aim is to catch all the different types of errors, which is why I catch the base Exception class.
What I expected to see in log.txt was for example:
2021-09-16 09:04:29,827 ERROR __main__ 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
This is my code:
import logging
from multiprocessing import Pool, cpu_count

logging.basicConfig(filename='log.text', level=logging.DEBUG,
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')
logger = logging.getLogger(__name__)

def main():
    mp_list = get_mp_list()  # <-- get the list created
    pool = Pool(min(cpu_count(), len(mp_list)))  # <-- create a pool
    results = pool.imap_unordered(process_mp, mp_list)  # <-- create the pool.imap object
    while True:
        try:
            result = next(results)  # <-- run the program
        except StopIteration:
            break
        except Exception as err:  # <-- write to the logfile in case of error
            logger.error(err)
            raise

if __name__ == '__main__':
    main()

You seem to be using multiprocessing, and if multiple processes log to the same file it might not work, for the reasons described in the logging documentation. Basically, you need a different strategy to ensure that only one process writes to a particular file. The details are too long to reproduce here; refer to the linked documentation.
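A minimal sketch of the cookbook's QueueHandler/QueueListener strategy, adapted to the code above (process_mp here is a stand-in for the asker's real worker function):
import logging
import logging.handlers
from multiprocessing import Pool, Queue, cpu_count

def worker_init(queue):
    # Each worker process sends its records to the shared queue
    # instead of writing to the log file directly.
    root = logging.getLogger()
    root.addHandler(logging.handlers.QueueHandler(queue))
    root.setLevel(logging.DEBUG)

def process_mp(item):  # stand-in worker
    logging.getLogger(__name__).info('processing %s', item)
    return item

if __name__ == '__main__':
    queue = Queue()
    fh = logging.FileHandler('log.text')
    fh.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(name)s %(message)s'))
    # The listener thread in the main process is the only writer to the file.
    listener = logging.handlers.QueueListener(queue, fh)
    listener.start()
    with Pool(cpu_count(), initializer=worker_init, initargs=(queue,)) as pool:
        for result in pool.imap_unordered(process_mp, range(4)):
            pass
    listener.stop()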

Add a filemode and it should work then; with filemode="w" the file is opened fresh on every run:
logging.basicConfig(filename='log.text', level=logging.DEBUG, filemode="w",
                    format='%(asctime)s %(levelname)s %(name)s %(message)s')

Related

Writing custom log files in Databricks Repos using the logging package

I would like to capture custom metrics as a notebook runs in Databricks, and I would like to write them to a file using the logging package. The code below seems to run fine, but it never writes to the file. How do you achieve this on Databricks Runtime 9.1?
Also note that I am running this in Repos, so I have to write it explicitly to a location. Furthermore, this code runs perfectly fine when run from my workspace.
import logging

logger = logging.getLogger('server_logger')
logger.setLevel(logging.INFO)
fh = logging.FileHandler('/dbfs/tmp/my_log.log')
fh.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.warning(f'starting to log the process')
Perhaps the /dbfs/tmp directory doesn't exist, or you don't have write access to it. Changing the log filename to just my_log.log, it works as expected:
~/SO-logging-misc$ python so_74519222.py
~/SO-logging-misc$ more my_log.log
2022-11-21 14:33:22 - WARNING - starting to log the process
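If a missing directory is the cause, creating it before attaching the handler should fix it (a small sketch; /dbfs/tmp is the path from the question):
import logging
import os

log_dir = '/dbfs/tmp'
os.makedirs(log_dir, exist_ok=True)  # FileHandler does not create missing directories
fh = logging.FileHandler(os.path.join(log_dir, 'my_log.log'))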

logging to a file from bonobo etl

I have written a bonobo script to extract some data, and I would like to use Python's logging module to write status messages to a file while my job runs. I've done the following:
import logging
from datetime import date

# INFO is a settings dict defined elsewhere in the script.
logging.basicConfig(filename=INFO["LOGFILE_PATH"] + r'\bonobo_job_' + date.today().isoformat(),
                    filemode='a',
                    format='%(name)s - %(levelname)s - %(message)s')
If I simply run the script in PyCharm, it logs to the file as I would expect. But if I run it from the command line with the bonobo run command, it ignores the filename and logs to stdout. How do I fix this? Is there a flag or environment variable I need to set somewhere?
Okay, I figured it out. For some reason basicConfig doesn't work (presumably because bonobo run configures the root logger first, and basicConfig does nothing when handlers are already installed). I had to use getLogger and add a FileHandler. So in main I did this:
logger = logging.getLogger('bonobo_logger')
ch = logging.FileHandler(logfilename)
formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)
Then in every node in my graph where I wanted to do logging, I called:
logger = logging.getLogger('bonobo_logger')
and used the logger object to write out all messages. If anyone knows a better way of doing it, please let me know.
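If the problem really is that bonobo pre-installs handlers on the root logger, another option on Python 3.8+ is basicConfig's force=True argument, which removes the existing handlers first so the original configuration takes effect (a sketch under that assumption):
import logging
from datetime import date

logging.basicConfig(filename='bonobo_job_' + date.today().isoformat(),
                    filemode='a',
                    format='%(name)s - %(levelname)s - %(message)s',
                    force=True)  # Python 3.8+: discard handlers installed earlier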

Python: How to color logs while printing to a file?

A little context to what I am doing: I am running some Python scripts through a different programming language on an industrial controller. Since I am not running the scripts directly, I can't watch print or log statements in a terminal, so I need to send the detailed logs to a log file.
Since we log a lot of information when debugging, I wanted a way to color the log file, the way coloredlogs colors logs printed to the terminal. I looked at coloredlogs, but it appears it can only produce colored log files when they are viewed in Vim. Does anyone know a way to write colored logs to a file with Python that can be opened with a program such as WordPad? (Maybe a .rtf file.)
One solution is to use the Windows PowerShell Get-Content cmdlet to print a file that contains ANSI escape sequences, which color the log.
For example:
import coloredlogs
import logging
# Create a logger object.
logger = logging.getLogger(__name__)
# Create a filehandler object
fh = logging.FileHandler('spam.log')
fh.setLevel(logging.DEBUG)
# Create a ColoredFormatter to use as formatter for the FileHandler
formatter = coloredlogs.ColoredFormatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
# Install the coloredlogs module on the root logger
coloredlogs.install(level='DEBUG')
logger.debug("this is a debugging message")
logger.info("this is an informational message")
logger.warning("this is a warning message")
logger.error("this is an error message")
logger.critical("this is a critical message")
When opening a Windows PowerShell you can use Get-Content .\spam.log to print the logs in color.

Python logging does not produce newline after each log (Linux)

I have used the same Python script on Windows, where it worked fine and produced several log lines each time it was run. The problem is that when I ran the script on Linux, the logging put all of the logs onto one line.
I have tried adding \n in different places, such as in the formatter and in each message itself.
This is how the logging is set up:
import logging

# This is the setup of the logging system for the program.
logger = logging.getLogger(__name__)
# Sets the name of the log file to 'login.log'
handler = logging.FileHandler(config.path['logger'])
# Sets up the format of the log file: Time, Function Name, Error Level (e.g. Warning Info, Critical),
# and then the message that follows the format.
formatter = logging.Formatter('%(asctime)-5s %(funcName)-20s %(levelname)-10s %(message)s')
handler.setFormatter(formatter)
logger.addHandler(handler)
# Sets the lowest level of messages to info.
logger.setLevel(logging.INFO)
And here is how each log is made:
logger.warning('%-15s' % client + ' Failed: Logout Error')
Thanks in advance
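One possible explanation (a guess, since the question doesn't say how the file is being viewed): logging terminates each record with \n, and some Windows editors, notably older Notepad, only render \r\n line breaks, so a file written on Linux looks like a single long line there. StreamHandler and its subclass FileHandler expose a terminator attribute (Python 3.2+) that can be set explicitly:
import logging

handler = logging.FileHandler('login.log')  # filename taken from the question's comment
handler.terminator = '\r\n'  # write CRLF so Windows editors render the line breaks
logger = logging.getLogger(__name__)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.warning('client1         Failed: Logout Error')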

Python logging module logs on Mac, but not Linux

I am experiencing an issue with the logging module in my app. I am working in Eclipse against the LDT Python (Py 2.7) interface (rather than PyDev) on my MacBook Pro. The logging module works through Eclipse; however, when I transfer my app over to a RHEL5 machine with Python 2.7, nothing is logged at all. It does not throw any exceptions; it just logs nothing to console or file (it creates the file, though).
Code:
import datetime
import logging

# Initialize logging
log = logging.getLogger('pepPrep')
# Log to stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# Log to file
logname = 'pepPrep.' + datetime.datetime.now().strftime("%Y%m%d_%H:%M") + '.log'
filelog = logging.FileHandler(logname)
filelog.setLevel(logging.DEBUG)
# set a format
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handlers to use this format
console.setFormatter(formatter)
filelog.setFormatter(formatter)
# add the handlers to the 'pepPrep' logger
log.addHandler(console)
log.addHandler(filelog)
log.info('This is a test')
log.debug('This is a test2')
Any pointers on how I can make this work?
The default threshold for logging is WARNING, so INFO and DEBUG messages are not output by default. To see them, add e.g.
logging.getLogger().setLevel(logging.DEBUG)
to get DEBUG and INFO messages.
You can confirm this is your problem by doing
log.warning('This is a test3')
before adding that setLevel call, and confirming that the warning is actually output.
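Putting that together, a minimal corrected version of the question's setup (setting the level on the 'pepPrep' logger itself works just as well as setting it on the root):
import logging

log = logging.getLogger('pepPrep')
log.setLevel(logging.DEBUG)  # without this, the root logger's WARNING threshold applies
console = logging.StreamHandler()
console.setLevel(logging.INFO)
filelog = logging.FileHandler('pepPrep.log')
filelog.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
console.setFormatter(formatter)
filelog.setFormatter(formatter)
log.addHandler(console)
log.addHandler(filelog)
log.info('This is a test')
log.debug('This is a test2')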
