If I run my script manually, I see runlog.log populated with messages. When I run the script via cron, there is no output in my log file, but I do see the messages in /var/spool/mail/root, so I know the script is running.
file_handler = logging.FileHandler("runlog.log", "a")
file_handler = logging.StreamHandler()
file_handler.setFormatter(default_formatter)
logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(file_handler)
...
logger.info("No new files")
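Two things in the snippet above are worth checking: the second assignment overwrites file_handler with a StreamHandler before addHandler is ever called, and the relative path "runlog.log" resolves against cron's working directory (often $HOME or /), not the script's directory, while anything written to stderr is mailed to root. A minimal sketch of a cron-safe setup; the formatter and the temp-directory path here are assumptions for illustration, a real cron job would hard-code an absolute path:

```python
import logging
import os
import tempfile

# Illustrative absolute path; a real cron job would use something like
# /var/log/myjob/runlog.log so cron's working directory is irrelevant.
log_path = os.path.join(tempfile.mkdtemp(), "runlog.log")

# Assumed formatter (the original default_formatter is not shown in the post).
default_formatter = logging.Formatter("%(asctime)s %(levelname)s %(message)s")

file_handler = logging.FileHandler(log_path, "a")
file_handler.setFormatter(default_formatter)

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
logger.addHandler(file_handler)  # the file handler actually gets attached

logger.info("No new files")
```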
I have a Python script with its own logging configuration that is called with subprocess.call from a main script. I would like the subprocess to create the log file and write to it, as configured in the subprocess's code. In addition, I would like the logs to also appear in the main process's console output.
The following works on macOS, but does not work on a Linux box with Python 3.10.4.
Main Code
import subprocess
import logging

log_format = '%(asctime)s :: %(name)s :: %(levelname)-8s :: %(message)s'
log_streamhandler = logging.StreamHandler()
logging.basicConfig(format=log_format, level=logging.INFO,
                    handlers=[log_streamhandler])
logger = logging.getLogger(__name__)

if __name__ == '__main__':
    code = subprocess.call(["python3", "my_subprocess.py"])
    logger.info(f'Subprocesses completed with code {code}')
my_subprocess.py
import sys
import logging

if __name__ == '__main__':
    log_format = '%(asctime)s :: %(name)s :: %(levelname)-8s :: %(message)s'
    log_filehandler = logging.FileHandler('my_subprocess.log')
    log_streamhandler = logging.StreamHandler(stream=sys.stdout)
    logging.basicConfig(format=log_format, level=logging.INFO,
                        handlers=[log_filehandler, log_streamhandler])
    logger = logging.getLogger(__name__)
    logger.info('Starting subprocess.')
    logger.info('Completed subprocess.')
I create the log file from the subprocess because it also needs to be able to run on its own and create the log file. Several such subprocesses are called by the main process. How can I restructure this so that it keeps these features and runs on Linux as expected?
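One plausible culprit, assuming the Linux box resolves the bare "python3" to a different interpreter or buffers the child's stdout: launch the child with the parent's own interpreter (sys.executable) and with unbuffered output (-u), leaving stdout/stderr uncaptured so the child inherits the parent's console. The throwaway child script below is only a stand-in for my_subprocess.py:

```python
import os
import subprocess
import sys
import tempfile

# Hypothetical stand-in for my_subprocess.py, configured as in the question.
child_src = """\
import logging, sys
log_format = '%(asctime)s :: %(name)s :: %(levelname)-8s :: %(message)s'
logging.basicConfig(format=log_format, level=logging.INFO,
                    handlers=[logging.FileHandler('my_subprocess.log'),
                              logging.StreamHandler(stream=sys.stdout)])
logging.getLogger(__name__).info('Starting subprocess.')
"""

workdir = tempfile.mkdtemp()
child_path = os.path.join(workdir, "child.py")
with open(child_path, "w") as f:
    f.write(child_src)

# sys.executable is the interpreter running the parent, so parent and child
# agree on every platform; -u disables output buffering; not passing
# stdout/stderr lets the child write straight to the parent's console.
code = subprocess.call([sys.executable, "-u", child_path], cwd=workdir)
```

The child still owns its log file (created relative to its working directory), so it keeps working when run standalone.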
I've set my Python program to log output, but although it logs correctly to the console, it does not write the time, log level, and other details to the file.
Program:
import time
import logging
from logging.handlers import RotatingFileHandler

logFileName = 'logs.log'
logging.basicConfig(level=logging.INFO,
                    format='%(levelname)s %(asctime)s - %(message)s',
                    datefmt='%d-%b-%y %H:%M:%S')
log = logging.getLogger(__name__)
handler = RotatingFileHandler(logFileName, maxBytes=2000, backupCount=5)
log.addHandler(handler)
log.setLevel(logging.INFO)

if __name__ == '__main__':
    while True:
        log.info("program running")
        time.sleep(1)
Output to console:
INFO 05-May-22 23:20:54 - program running
INFO 05-May-22 23:20:55 - program running
INFO 05-May-22 23:20:56 - program running
INFO 05-May-22 23:20:57 - program running
INFO 05-May-22 23:20:58 - program running
INFO 05-May-22 23:20:59 - program running
INFO 05-May-22 23:21:00 - program running
Simultaneous output to file logs.log:
program running
program running
program running
program running
program running
program running
program running
How to make the full output go to the log file?
You can set a Formatter on the RotatingFileHandler separately:
handler.setFormatter(logging.Formatter(fmt='%(levelname)s %(asctime)s - %(message)s', datefmt='%d-%b-%y %H:%M:%S'))
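Put together, a minimal sketch of the fixed setup (reusing the question's format strings) might look like this:

```python
import logging
from logging.handlers import RotatingFileHandler

fmt = '%(levelname)s %(asctime)s - %(message)s'
datefmt = '%d-%b-%y %H:%M:%S'

# Console output via the root logger, as in the question.
logging.basicConfig(level=logging.INFO, format=fmt, datefmt=datefmt)

log = logging.getLogger(__name__)
handler = RotatingFileHandler('logs.log', maxBytes=2000, backupCount=5)
# The file handler needs its own Formatter; without one it falls back to
# bare "%(message)s", which is why the file showed only "program running".
handler.setFormatter(logging.Formatter(fmt=fmt, datefmt=datefmt))
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("program running")
```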
The approaches from "logger configuration to log to file and print to stdout" did not work for me, so:
I want the logging details to go both to prog_log.txt and to the console. I have:
# Debug settings
import logging

# logging.disable()  # when the program is ready, uncomment this line to silence all logging
logger = logging.getLogger('')
logging.basicConfig(level=logging.DEBUG,
                    filename='prog_log.txt',
                    format='%(asctime)s - %(levelname)s - %(message)s',
                    filemode='w')
logger.addHandler(logging.StreamHandler())
logging.debug('Start of Program')
The above writes the properly formatted logging details into prog_log.txt, but it prints unformatted logging messages to the console, duplicated one more time with each re-run of the program, like below.
In [3]: runfile('/Volumes/GoogleDrive/Mon Drive/MAC_test.py', wdir='/Volumes/GoogleDrive/Mon Drive/')
Start of Program
Start of Program
Start of Program  # after the third run
Any help welcome ;)
I need your help solving this problem. I am using Python's TimedRotatingFileHandler to append logs to a file for 24 hours and then rotate it. Rotation works as expected, but every time I run my script, only part of the output ends up in the log file. For example, if I have 10 log statements, each run only logs the first 5, and then nothing more is written. Appending works correctly, though. I am not sure what is happening. Here is the Python code:
# Configuring logger
LOG = logging.getLogger(__name__)

def config_Logs():
    # format the log entries
    global LOG
    formatter = logging.Formatter('[%(asctime)s] : {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s')
    handler = TimedRotatingFileHandler('collecting_logs.log',
                                       when='h', interval=24, backupCount=0)
    handler.setFormatter(formatter)
    LOG.addHandler(handler)
    LOG.setLevel(logging.DEBUG)
I am calling config_Logs() at the beginning of the script.
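Without more of the script it is hard to be sure, but two usual suspects fit "only the first few statements appear": config_Logs() being called more than once (so duplicate handlers compete over the same file and its rotation) and records still sitting in a buffer when the process exits. A sketch that guards against both; the loop is illustrative, not from the question:

```python
import logging
from logging.handlers import TimedRotatingFileHandler

LOG = logging.getLogger(__name__)

def config_Logs():
    if LOG.handlers:   # guard: a second call must not attach a duplicate handler
        return
    formatter = logging.Formatter(
        '[%(asctime)s] : {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s')
    handler = TimedRotatingFileHandler('collecting_logs.log',
                                       when='h', interval=24, backupCount=0)
    handler.setFormatter(formatter)
    LOG.addHandler(handler)
    LOG.setLevel(logging.DEBUG)

config_Logs()
config_Logs()          # a repeated call is now harmless

for i in range(10):
    LOG.debug('log statement %d', i)

logging.shutdown()     # flush and close all handlers before the script exits
```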
I'm trying to understand why the same code works on OS X but not on Windows.
import logging
import os

from apscheduler.executors.pool import ProcessPoolExecutor
from apscheduler.schedulers.blocking import BlockingScheduler

logger_name = 'gic_scheduler'
logger = logging.getLogger(logger_name)
logger.setLevel(logging.INFO)

ch = logging.StreamHandler()
ch.setLevel(logging.INFO)
fh = logging.FileHandler(filename=os.path.join(tmp_folder, 'scheduler.log'), encoding='utf-8')
fh.setLevel(logging.DEBUG)
logger.addHandler(ch)
logger.addHandler(fh)

executor_logger = logging.getLogger('apscheduler.executors.default')
executor_logger.setLevel(logging.DEBUG)
executor_logger.addHandler(ch)
executor_logger.addHandler(fh)

executors = {'default': ProcessPoolExecutor(5)}
scheduler = BlockingScheduler(executors=executors, logger=logger)
scheduler.add_job(go, 'interval', seconds=5)
scheduler.start()
In particular, no output is produced by the 'apscheduler.executors.default' logger. I dug into the third-party library that uses this logger and printed out logger.handlers: on OS X the handlers are there, but on Windows the list is empty. Any ideas why?
def run_job(job, jobstore_alias, run_times, logger_name):
    """Called by executors to run the job. Returns a list of scheduler events to be dispatched by the scheduler."""
    events = []
    logger = logging.getLogger(logger_name)
    print logger_name
    print logger.handlers
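This asymmetry is what the process start methods would predict: on OS X and Linux, worker processes were forked and so inherited the parent's logger objects, while on Windows they are spawned as fresh interpreters, and handlers attached in the parent simply never exist in the worker, hence the empty logger.handlers. The usual cure is to configure logging inside each worker process. A sketch using the standard-library pool, which apscheduler's ProcessPoolExecutor wraps; logger names and formats here are illustrative, not the poster's:

```python
import logging
from concurrent.futures import ProcessPoolExecutor

def init_worker_logging():
    # Runs once in every worker process. Under the "spawn" start method
    # (the Windows default) workers inherit nothing configured in the
    # parent, so the handlers must be attached here.
    worker_logger = logging.getLogger('gic_scheduler')
    handler = logging.FileHandler('scheduler.log', encoding='utf-8')
    handler.setFormatter(logging.Formatter('%(asctime)s %(name)s %(message)s'))
    worker_logger.addHandler(handler)
    worker_logger.setLevel(logging.DEBUG)

def job(n):
    logging.getLogger('gic_scheduler').info('job %d ran', n)
    return n * 2

if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2, initializer=init_worker_logging) as pool:
        print(list(pool.map(job, range(3))))
```

The same idea applies to the 'apscheduler.executors.default' logger: attach its handlers in code that runs inside the worker process, not only in the parent.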