I've got a custom Django admin command, and I want to capture its log output when the command runs and make it available for download as a separate file, similar to the "Console Output" functionality in Jenkins. The command is invoked via django-after-response, and I'm running under uWSGI.
At the beginning of the admin command, I do this:
import logging
from tempfile import NamedTemporaryFile

deploy_log = NamedTemporaryFile()
formatter = logging.Formatter("%(asctime)-15s %(levelname)-8s %(message)s")
file_handler = logging.FileHandler(deploy_log.name)
file_handler.setFormatter(formatter)
file_handler.setLevel(logging.INFO)
logging.getLogger('').addHandler(file_handler)
Then at the end of the admin command:
logging.getLogger('').removeHandler(file_handler)
The problem I'm running into is that when there are multiple 'deploys' running simultaneously, the deploy_log for one thread will have entries from other threads. How do I avoid this?
I believe I have found the solution. I had to add the following to my uWSGI vassal ini file:
enable-threads = true
Now the log files are not getting jumbled together.
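For stricter isolation, one alternative worth mentioning (my own sketch, not part of the fix above) is to give each deploy's handler a logging.Filter that only passes records created by the current thread. logging stamps every LogRecord with the emitting thread's id in record.thread, so the filter is a one-line comparison:

import logging
import threading
from tempfile import NamedTemporaryFile

class CurrentThreadFilter(logging.Filter):
    """Pass only records emitted by the thread that created this filter."""

    def __init__(self):
        super().__init__()
        self._thread_id = threading.get_ident()

    def filter(self, record):
        return record.thread == self._thread_id

deploy_log = NamedTemporaryFile()
file_handler = logging.FileHandler(deploy_log.name)
file_handler.addFilter(CurrentThreadFilter())  # drop records from other deploys
logging.getLogger('').addHandler(file_handler)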
I would like to capture custom metrics as a notebook runs in Databricks, and write them to a file using the logging package. The code below seems to run fine, but it never writes to the file. How do you achieve this on Databricks Runtime 9.1?
Also note that I am running this in Repos, so I have to explicitly write to a location; the same code runs perfectly fine when run from my workspace.
import logging

logger = logging.getLogger('server_logger')
logger.setLevel(logging.INFO)
fh = logging.FileHandler('/dbfs/tmp/my_log.log')
fh.setLevel(logging.INFO)
formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.warning('starting to log the process')  # plain string; the f-prefix had no placeholders
Perhaps the /dbfs/tmp directory doesn't exist, or you don't have write access to it. If I change the log filename to just my_log.log (a relative path), it works as expected:
~/SO-logging-misc$ python so_74519222.py
~/SO-logging-misc$ more my_log.log
2022-11-21 14:33:22 - WARNING - starting to log the process
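If the directory is the problem, creating it up front should make the original /dbfs/tmp path work too. A minimal sketch using the paths from the question:

import logging
import os

os.makedirs('/dbfs/tmp', exist_ok=True)  # create the target directory if it is missing
fh = logging.FileHandler('/dbfs/tmp/my_log.log')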
I have written a bonobo script to extract some data, and I would like to use Python's logging module to write status messages to a file while the job runs. I've done the following:
import logging
from datetime import date

logging.basicConfig(filename=INFO["LOGFILE_PATH"] + r'\bonobo_job_' + date.today().isoformat(),
                    filemode='a',
                    format='%(name)s - %(levelname)s - %(message)s')
If I simply run the script in PyCharm, it logs to the file as I would expect. But if I run it from the command line with the bonobo run command, it ignores the filename and logs to stdout. How do I fix this? Is there a flag or environment variable I need to set somewhere?
Okay, I figured it out. For some reason, basicConfig doesn't work under the bonobo runner. I had to use getLogger and add a FileHandler. So in main I did this:
logger = logging.getLogger('bonobo_logger')
logger.setLevel(logging.INFO)  # without an explicit level, INFO records can be filtered out
fh = logging.FileHandler(logfilename)
formatter = logging.Formatter('%(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
Then in every node in my graph where I wanted to do logging, I called:
logger = logging.getLogger('bonobo_logger')
and used the logger object to write out all messages. If anyone knows a better way of doing it, please let me know.
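One likely explanation for basicConfig being ignored (my guess; I have not checked bonobo's source): basicConfig is a no-op when the root logger already has handlers, and the bonobo run command presumably installs its own before your code executes. On Python 3.8+ the force flag sidesteps this without wiring handlers manually; a sketch with a hypothetical filename:

import logging

# force=True (Python 3.8+) removes existing root handlers before applying
# this configuration, so a runner's earlier setup no longer wins
logging.basicConfig(
    filename='bonobo_job.log',  # hypothetical; build yours from LOGFILE_PATH as above
    filemode='a',
    format='%(name)s - %(levelname)s - %(message)s',
    force=True,
)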
I am facing a slightly strange issue where the logger does not pick up the configured timestamp format (asctime) in the log message the first time it is initialized. By default it prints the log time in UTC, and I am not sure why.
The snippet below is from /proj/req_proc.py, the Python code that uWSGI starts, which initializes the logger. The log_config.yaml contains a formatter definition for the timestamp format.
import logging
import logging.config
import os

import yaml

def setup_logging(default_path='log_config.yaml',
                  default_level=logging.INFO):
    path = default_path
    if os.path.exists(path):
        with open(path, 'rt') as f:
            config = yaml.safe_load(f)  # safe_load avoids the unsafe-Loader warning
        logging.config.dictConfig(config)
Below is the snippet from my launch script that starts the uWSGI process.
uwsgi -M --processes 1 --threads 2 -s /tmp/uwsgi.sock --wsgi-file=/proj/req_proc.py --daemonize /dev/null
Is there any specific behavior, either in the Python logger or in uWSGI, that picks up the UTC time format by default? When I restart my uWSGI process, it picks up the correct/expected timestamp format configured in log_config.yaml.
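One thing worth ruling out, independent of the YAML config (an assumption, not a confirmed diagnosis): logging.Formatter renders %(asctime)s with time.localtime by default, so a process whose environment lacks timezone information can end up stamping what looks like UTC. Pinning the converter makes the choice explicit:

import logging
import time

formatter = logging.Formatter('%(asctime)s %(levelname)-8s %(message)s')
formatter.converter = time.localtime  # or time.gmtime to standardize on UTC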
I suspect that the uwsgi module is somehow hijacking Python's logging module. Setting the log level, the logger name and logging itself all work, but trying to modify the format, even with something basic like:
logging.basicConfig(level=logging.NOTSET, format='[%(process)-5d:%(threadName)-10s] %(name)-25s: %(levelname)-8s %(message)s')
logger = logging.getLogger(__name__)
has no effect.
Update: Here's a way to override uWSGI's default logging configuration:
import logging

# remove uWSGI's default logging configuration; this can be removed in
# more recent versions of uWSGI
root = logging.getLogger()
for handler in root.handlers[:]:  # map() is lazy in Python 3, so use explicit loops
    root.removeHandler(handler)
for filt in root.filters[:]:
    root.removeFilter(filt)
logger = logging.getLogger(__name__)
logging.basicConfig(
    level=logging.INFO,
    format='%(levelname)-8s %(asctime)-15s %(process)4d:%(threadName)-11s %(name)s %(message)s'
)
I am experiencing an issue with the logging module in my app. I am working in Eclipse with the LDT Python (Py 2.7) interface (rather than PyDev) on my MacBook Pro. The logging module works through Eclipse; however, when I transfer my app over to RHEL5 with Python 2.7, logging does not seem to work at all. It throws no exceptions, but it logs nothing to console or file (it does create the file, though).
Code:
import datetime
import logging

# Initialize logging
log = logging.getLogger('pepPrep')
# Log to stderr
console = logging.StreamHandler()
console.setLevel(logging.INFO)
# Log to file
logname = 'pepPrep.' + datetime.datetime.now().strftime("%Y%m%d_%H:%M") + '.log'
filelog = logging.FileHandler(logname)
filelog.setLevel(logging.DEBUG)
# set a format
formatter = logging.Formatter('%(name)-12s: %(levelname)-8s %(message)s')
# tell the handlers to use this format
console.setFormatter(formatter)
filelog.setFormatter(formatter)
# add the handlers to the 'pepPrep' logger
log.addHandler(console)
log.addHandler(filelog)
log.info('This is a test')    # logger methods are lowercase: info(), not INFO()
log.debug('This is a test2')
Any pointers on how I can make this work?
The default threshold for logging is WARNING, so INFO and DEBUG messages are not output by default. To do so, add e.g.
logging.getLogger().setLevel(logging.DEBUG)
to get DEBUG and INFO messages.
You can confirm this is your problem by doing
log.warning('This is a test3')
before adding that setLevel, and confirming that the warning is actually output.
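The mechanics are easy to see in isolation: a logger with no level of its own inherits its effective level from its ancestors, and the root logger defaults to WARNING. A minimal sketch:

import logging

log = logging.getLogger('pepPrep')
print(log.getEffectiveLevel())   # 30, i.e. WARNING, inherited from the root logger
logging.getLogger().setLevel(logging.DEBUG)
print(log.getEffectiveLevel())   # 10, i.e. DEBUG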
I'm trying to configure logging for a Django app using the Python logging module. I have placed the following bit of configuration code in my Django project's settings.py file:
import logging
import logging.handlers
import os
date_fmt = '%m/%d/%Y %H:%M:%S'
log_formatter = logging.Formatter(u'[%(asctime)s] %(levelname)-7s: %(message)s (%(filename)s:%(lineno)d)', datefmt=date_fmt)
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
max_bytes = 1024 * 1024  # 1 MB; renamed from "bytes" to avoid shadowing the builtin
if not os.path.exists(log_dir):
    os.makedirs(log_dir)
handler = logging.handlers.RotatingFileHandler(log_name, maxBytes=max_bytes, backupCount=7)
handler.setFormatter(log_formatter)
handler.setLevel(logging.DEBUG)
logging.getLogger().setLevel(logging.DEBUG)
logging.getLogger().addHandler(handler)
logging.getLogger(__name__).info("Initialized logging subsystem")
At startup, I get a couple of Django-related messages, as well as "Initialized logging subsystem", in the log files, but then all the log messages end up going to the web server logs (/var/log/apache2/error.log, since I'm using Apache) and use the standard log format (not the formatter I configured). Am I configuring logging incorrectly?
Kind of anti-climactic, but it turns out a third-party app installed in the project had its own logging configuration that was overriding the one I set up (it modified the root logger, for some reason, which is not very kosher for a Django app!). I removed that code and everything works as expected.
See this other answer. Note that settings.py is usually imported twice, so you should avoid creating multiple handlers. Better logging support is coming to Django in 1.3 (hopefully), but for now you should ensure that if your setup code is called more than once, there are no adverse effects.
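A minimal guard sketch for the double-import problem, assuming handler is the RotatingFileHandler built in the question's snippet: check the root logger's existing handlers before attaching another one.

import logging
import logging.handlers

root = logging.getLogger()
# settings.py can be imported twice; only attach the handler the first time
if not any(isinstance(h, logging.handlers.RotatingFileHandler) for h in root.handlers):
    root.addHandler(handler)  # "handler" comes from the setup code above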
I'm not sure why your logged messages are going to the Apache logs, unless you've (somewhere else in your code) added a StreamHandler to your root logger with sys.stdout or sys.stderr as the stream. You might want to print logging.getLogger().handlers to confirm it contains what you expect.
I used this with success (although it does not rotate):
# in settings.py
import logging
logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s %(levelname)s %(funcName)s %(lineno)d '
           '\033[35m%(message)s\033[0m',
    datefmt='[%d/%b/%Y %H:%M:%S]',
    filename='/tmp/my_django_app.log',
    filemode='a',
)
I'd suggest trying an absolute path, too.
My guess is that logging stops when Apache forks the process. After that happens, because all file descriptors are closed during daemonization, the logging system tries to reopen the log file and, as far as I understand, uses the relative file path:
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")
But there is no "current directory" once the process has been daemonized. Try using an absolute log_dir path. Hope that helps.
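For example (a sketch, assuming settings.py sits at the project root), anchoring PROJECT_DIR to the settings file itself makes the resulting log path absolute regardless of the daemon's working directory:

import os

PROJECT_DIR = os.path.dirname(os.path.abspath(__file__))  # absolute, independent of cwd
log_dir = os.path.join(PROJECT_DIR, "var", "log", "my_app")
log_name = os.path.join(log_dir, "nyrb.log")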