I have an application written in Python that logs errors to a log file; I am using the Python logging module.
Example:
import logging
logging.basicConfig(filename=r'\logs\filename.log',
                    level=logging.DEBUG,
                    format='%(asctime)s - %(levelname)s - %(message)s')
logging.error('error connecting to server')
This is working fine and the log file is logging the errors, which means the last error is logged on the last line of the file.
Is there some kind of setting where I can tell the logging module to always write at the top of the file, so that the last error is always on the first line?
It is very inefficient to write to the front of a file, as others have said. Every single write would have to first seek to the front of the file and then insert the new data before the existing data. The underlying I/O of your operating system is designed to make appends cheap.
That said, if you insist on doing it, look at the docs here: Logging Handlers. You can implement your own version of logging.FileHandler that seeks to the beginning before each write. Then you could call addHandler() on your logger and pass it an instance of your class. If you only care about the most recent 10 or so log entries, you could even truncate the file before you write.
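For illustration, here is a minimal sketch of such a handler, assuming you only care about the newest handful of entries; the class name PrependFileHandler and its max_entries parameter are inventions of this example, not part of the logging module:

import logging
from collections import deque

class PrependFileHandler(logging.FileHandler):
    # Keeps the most recent entries in memory and rewrites the whole file,
    # newest entry first, on every emit. Only sensible for small, bounded logs.
    def __init__(self, filename, max_entries=10, **kwargs):
        super().__init__(filename, mode='w', **kwargs)
        self._entries = deque(maxlen=max_entries)

    def emit(self, record):
        try:
            self._entries.appendleft(self.format(record))
            self.stream.seek(0)
            self.stream.truncate()
            self.stream.write('\n'.join(self._entries) + '\n')
            self.flush()
        except Exception:
            self.handleError(record)

handler = PrependFileHandler('filename.log')
handler.setFormatter(logging.Formatter('%(asctime)s - %(levelname)s - %(message)s'))
logging.getLogger().addHandler(handler)
logging.error('error connecting to server')  # now appears on the first line

Every emit rewrites the entire file, which is exactly the cost described above, so keep max_entries small.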
Related
I'm using the Python logging module to log what's going on in my application, both to a file and to the terminal.
My code is:
import logging
logging.basicConfig(level=logging.DEBUG,
                    handlers=[  # stream=sys.stdout,
                        logging.FileHandler("debug.log", 'a'),
                        logging.StreamHandler()
                    ],
                    format='%(asctime)s %(levelname)s %(message)s')
logging.info("This is a LOG INFO message")
Now, as soon as the program runs, it saves the log to a file.
I would like to know how the logging module saves this info to the file. Does it open -> write -> close the file every time a logging line is called? Or does it keep the file open until the program ends?
I'm asking because if my software crashes for some reason or the PC reboots, is the already written log file safe, or could it be corrupted?
As the name suggests, the handlers you have set here stream their log messages to the specified sinks. The Logging Handler documentation states that it:
"[...] sends logging output to streams such as sys.stdout, sys.stderr or any file-like object (or, more precisely, any object which supports write() and flush() methods)."
Looking at the source code, and considering this GitHub Gist (since you apparently could use a logging instance with a context manager if you really wanted to), I would assume that your intuition was correct (regarding open -> write -> close).
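If you would rather check this against your own interpreter than take anyone's word for it, a quick probe (the handler variable fh is just a local name used in this sketch) is to log a record and then inspect the stream object the FileHandler writes through:

import logging

fh = logging.FileHandler("debug.log", 'a')
logging.basicConfig(level=logging.DEBUG,
                    handlers=[fh, logging.StreamHandler()],
                    format='%(asctime)s %(levelname)s %(message)s')

logging.info("probe record")

# fh.stream is the file object the handler writes to; looking at it right
# after the emit shows how the handler is managing the underlying file.
print(fh.stream.closed)
print(fh.stream.name)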
How would you dynamically change the file that logs are written to in Python, using the standard logging package?
I have a single-process, multi-threaded application that processes tasks for specific logical bins. To simplify debugging and searching the logs, I want each bin to have its own separate log file. Due to memory usage and scaling concerns, I don't want to split the process into multiple processes whose output I could otherwise easily redirect to separate logs. However, by default, Python's logging package only outputs to a single location, either stdout/stderr or some other single file.
My question is similar to this question, except I'm not trying to change the logging level, just the logging output destination.
You will need to create a different logger for each thread and configure each logger with its own file.
You can call something like this function in each thread, with the appropriate bin_name:
import logging

def create_logger(bin_name, level=logging.INFO):
    # One FileHandler per bin: each bin writes to its own <bin_name>.log file.
    handler = logging.FileHandler(f'{bin_name}.log')
    handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
    bin_logger = logging.getLogger(bin_name)
    bin_logger.setLevel(level)
    bin_logger.addHandler(handler)
    return bin_logger
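A hedged usage sketch, assuming one worker thread per bin (the bin names and the process_bin function here are made up for illustration, not part of the question):

import threading

def process_bin(bin_name):
    log = create_logger(bin_name)  # writes to <bin_name>.log
    log.info("started processing bin %s", bin_name)
    # ... process the tasks for this bin ...

threads = [threading.Thread(target=process_bin, args=(name,))
           for name in ("orders", "payments", "shipping")]
for t in threads:
    t.start()
for t in threads:
    t.join()

Note that logging.getLogger(bin_name) returns the same logger object every time it is called with the same name, so if a bin can be processed more than once you may want to guard against adding duplicate handlers in create_logger.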
I have another Python problem related to logging. I want to have a message logged to my log file no matter which log level I'm using. For example, I always want to print the time when the program was last executed and also the git version hash of the script.
Is there a way to log something to the log file no matter what log level is set?
Thanks in advance!
I'm trying to write a Python script that will parse a logfile produced by another daemon. This is being done on Linux. I want to be able to parse the log file reliably.
In other words, periodically, we run a script that reads the log file, line by line, and does something with each line. The logging script would need to see every line that may end up in the log file. It could run, say, once per minute via cron.
Here's the problem that I'm not sure exactly how to solve. Since the other process has a write handle to the file, it could write to the file at the same time that I am reading from it.
Also, every so often we would want to clear this logfile so its size does not get out of control. But the process producing the log file has no way to clear it on its own; the only option is regularly stopping the process, truncating or deleting the file, and then restarting it. (I feel like logrotate has some method of doing this, but I don't know whether logrotate depends on the daemon being aware, or whether it actually closes and restarts daemons, etc. Not to mention I don't want other logs rotated, just this one specific log, and I don't want this script to require other possible users to set up logrotate.)
Here are the problems:
Since the logger process could write to the file while I already have an open file handle, I feel like I could easily miss records in the log file.
If the logger process were to decide to stop, clear the log file, and restart, and the log analyzer didn't run at exactly the same time, log entries would be lost. Similarly, if the log analyzer caused the logger to stop logging while it analyzes, information that arrives while the logger daemon isn't listening would also be lost.
If I were to use a method like "note the size of the file since last time and seek there if the file is larger", what would happen if, for some reason, between runs, the logger reset the logfile but then ended up logging even more than it contained last time? E.g., we execute a log analysis run and get 50 log entries, so we set a mark that we have read 50 entries. Next time we run, we see 60 entries, but all 60 are brand new; the file had been cleared and restarted since the last run. We would end up seeking to entry 51 and missing 50 entries! Either way, it doesn't solve the problem of needing to periodically clear the log.
I have no control over the logger daemon. (Imagine we're talking about something like syslog here. It's not syslog, but the same idea: a fairly critical process holds a logfile open.) So I have no way to change its logging method. It starts at init time, opens a log file, and writes to it. We want to be able to clear that logfile AND analyze it, making sure we get every log entry through the Python script at some point.
The ideal scenario would be this:
The log daemon runs at system init.
Via cron, the Python log analyzer runs once per minute (or once every 5 minutes, or whatever is deemed appropriate).
The log analyzer collects every single line from the current log file and immediately truncates it, causing the log file to be blanked out. Python keeps the original contents in a list.
The logger then continues to go about its business with the now blank file. In the meantime, Python can continue to parse the entries at its leisure from the list in memory.
I've very, very vaguely studied FIFOs, but am not sure whether that would be appropriate. In that scenario the log analyzer would run as a daemon itself, while the original logger writes to a FIFO. I have very little knowledge in this area, however, and don't know whether it would be a solution or not.
So I guess the question really is twofold:
How do I reliably read EVERY entry written to the log from Python, including if the log grows, is reset, etc.?
How, if possible, do I truncate a file that has an open write handle? (Ideally, this would be something I could do from Python; I could do something like logfile.readlines(); logfile.truncate() so that no entries would get lost. But it seems like, unless the logger process was well aware of this, it'd end up causing more problems than it solves.)
Thanks!
I don't see any particular reason why you should not be able to read a log file created by syslogd. You say you are using some process similar to syslog, and that the process keeps your log file open? Since you are asking for ideas, I would recommend that you use syslog! http://pic.dhe.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpc1%2Fhsyslog.html
It is working anyway, so use it. An easy way to write to the log is the logger command:
logger "MYAP: hello"
In a Python script you can do it like this:
import os
os.system('logger "MYAP: hello"')
Also remember you can actually configure syslogd. http://pic.dhe.ibm.com/infocenter/tpfhelp/current/index.jsp?topic=%2Fcom.ibm.ztpf-ztpfdf.doc_put.cur%2Fgtpc1%2Fconstmt.html
Also, about your problem with empty logs: syslog does not clear logs. There are other tools for that; on Debian, for example, logrotate is used. In that scenario, if your log is empty, you can check the backup file created by logrotate.
Since it looks like your problem is with the logging tool, my advice would be to use syslog for logging and another tool for rotating the logs. Then you can easily parse the logs. And if by any chance (I don't know if it is even possible with syslog) you miss some data, remember you will get it in the next iteration anyway ;)
Another idea would be to copy your logfile and work with the copy...
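A minimal sketch of that copy idea, assuming the daemon writes to /var/log/mydaemon.log (the paths here are made up for illustration): snapshot the file first, then parse the snapshot while the daemon keeps appending to the original.

import shutil

LOG_PATH = "/var/log/mydaemon.log"        # assumed daemon log path
SNAPSHOT = "/tmp/mydaemon.log.snapshot"   # point-in-time copy we parse

# Copy the live log; the daemon can keep writing to the original meanwhile.
shutil.copy2(LOG_PATH, SNAPSHOT)

with open(SNAPSHOT) as f:
    for line in f:
        # parse each line here at your leisure
        print(line.rstrip())

This does not by itself solve the truncation race described in the question, but it keeps the parsing work off the live file.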
My Python script writes a large number of log lines to a text file, line by line, using the Python built-in logging library, and in my Delphi-powered Windows program I want to effectively read all newly added lines.
While the Python script is logging to the file, my Windows program will keep a read-only file handle to that log file. I'll use the Windows API to get notified when the log file is changed; once the file is changed, it'll read the newly appended lines.
I'm new to Python; do you see any possible problems with this approach? Does the Python logging library lock the entire log file? Thanks!
It depends on the logging handler you use, of course, but as you can see from the source code, logging.FileHandler does not currently create any file locks. By default, it opens files in 'a' (append) mode, so as long as your Windows calls can handle that, you should be fine.
As ʇsәɹoɈ commented, the standard FileHandler logger does not lock the file, so your approach should work. However, if for some reason you cannot keep your lock on the file, then I'd recommend having your other app open the file periodically, record the position it has read to, and then seek back to that point later. I know the Linux DenyHosts program uses this approach when dealing with log files that it has to monitor for a long period of time. In those situations, simply holding a lock isn't feasible, since directories may move, the file may get rotated out, etc. It does complicate things, though, in that you then have to store the filename + read position in persistent state somewhere.
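A hedged sketch of that record-the-position approach (the paths and the single-integer state file are made up for illustration); it persists the byte offset between runs and starts over from the beginning if the file has shrunk, e.g. after a rotation:

import os

LOG_PATH = "/var/log/mydaemon.log"      # assumed path being monitored
STATE_PATH = "/var/tmp/mydaemon.pos"    # persisted read position

def read_new_lines(log_path=LOG_PATH, state_path=STATE_PATH):
    try:
        with open(state_path) as f:
            offset = int(f.read())
    except (OSError, ValueError):
        offset = 0

    # If the file is now smaller than the saved offset, it was truncated
    # or rotated out; fall back to reading from the beginning.
    if os.path.getsize(log_path) < offset:
        offset = 0

    with open(log_path) as f:
        f.seek(offset)
        lines = f.readlines()
        new_offset = f.tell()

    with open(state_path, "w") as f:
        f.write(str(new_offset))
    return lines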