Someone inadvertently moved the open log file used by a Python program.
The program uses the logging module with a TimedRotatingFileHandler. When the time came to roll over the file, this error was output:
Traceback (most recent call last):
File "/python_root/lib/logging/handlers.py", line 78, in emit
self.doRollover()
File "/python_root/lib/logging/handlers.py", line 338, in doRollover
os.rename(self.baseFilename, dfn)
OSError: [Errno 2] no such file or directory
Logged from file logtest.py, line 16
The error was repeated on each subsequent attempt to log something. The logged messages did not go into the old (moved) log file.
This reproduces the problem (if you move the log file :))
import time
import logging
from logging import handlers

f = logging.Formatter("%(asctime)s %(message)s")
h = handlers.TimedRotatingFileHandler(
    "testlog", when='s', interval=5, backupCount=10)
h.setFormatter(f)
logger = logging.getLogger('test')
logger.setLevel(logging.INFO)
logger.addHandler(h)

logger.info("LOGTEST started")
for i in range(10):
    time.sleep(5)
    logger.info("Test logging " + str(i))
My concern here is that subsequent log messages are lost. What I'd like to achieve is, in ascending order of preference:
An exception that exits.
An exception I can catch and handle.
The logger displays the error, but subsequent messages go to the old log file.
The logger displays this error, but opens a new log file and continues as normal.
I've skimmed the docs/cookbook for relevant hooks, but nothing's popped out at me. Pointers there are equally welcome.
Thanks for your help,
Jonathan
Exceptions raised in doRollover are passed to the handleError method of the handler. You can define a subclass and override this method to handle the error however you like.
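For example, here is a minimal sketch (the class name and recovery strategy are mine, not a standard recipe) that reopens the log file when a rollover fails, which gets you your fourth preference: the error is reported once and logging then continues to a fresh file at the original path:
import logging
from logging import handlers

class ReopeningTimedRotatingFileHandler(handlers.TimedRotatingFileHandler):
    def handleError(self, record):
        # Default behaviour first: print the traceback to stderr
        # (if logging.raiseExceptions is set), so the failure is visible.
        super().handleError(record)
        # Then try to reopen self.baseFilename so subsequent records go
        # to a fresh file. The record that triggered the error is lost.
        try:
            if self.stream:
                self.stream.close()
            self.stream = self._open()
        except OSError:
            pass  # still can't open the file; keep reporting via handleError
Using this in the reproduction script above is just a matter of swapping handlers.TimedRotatingFileHandler for ReopeningTimedRotatingFileHandler.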
I have never used logging before and am new to Python. My mentor mandated that the script must write a log file, so I tried to do it in my code following his template. The following are excerpts from my code where logging is used:
logfilepath = r"C:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\odfshistory\CSV_ODFS_Probe_Quality-%Y-%m-%d.log"
log_file_name = datetime.now().strftime(logfilepath)
print(log_file_name)
logging.basicConfig(
    filename=log_file_name,
    level=logging.DEBUG,
    format='[Probe Data Quality] %(asctime)s - %(name)s %(levelname)-7.7s %(message)s'  # can you explain this Tenzin?
)
def process_dirconfig_file(config_file_from_sysarg):
    logging.info('started processing config file' + (config_file_from_sysarg))
    dirconfig_file_Pobj = Path(config_file_from_sysarg)
    try:
        if Path.is_file(dirconfig_file_Pobj):
            try:
                if ("db_string" not in parseddict or parseddict["db_string"] == ""):
                    raise Exception(f"Error: Your config file is missing 'db connection string' for db connection. Please edit config file to section[db string] and db string key val pairs of form: db_string = <db string>")
                    #print(f"Error: Your config file is missing 'error directory' for file processing")
            except Exception as e:
                raise Exception(e)  # logging.exception(e) puts it into log file
                logging.exception(e)
            return parseddict
        else:
            raise Exception(f"Error: No directory config file. Please create a config file of directories to be used in processing")
    except Exception as e:
        raise Exception(e)
        logging.exception(e)
if __name__ == '__main__':
    try:
        startTime = datetime.now()
        parse_dict = process_dirconfig_file(dirconfig_file)
        db_instance = dbhandler(parse_dict["db_string"])
        odfs_tabletest_dict = db_instance['odfs_tester_history_files']
        odf_history_from_csv_to_dbtable(db_instance)
        print(datetime.now() - startTime)
    except Exception as e:
        logging.exception(e)
Why does this structure give me this error? Of course most of my code is edited out of this for brevity. The code works; the logging is what's breaking it. I'll also note that a bunch of my functions are used in for loops, but I wouldn't think that would be an issue. It's not like log files can't work in for loops. And no, I haven't done any threading.
Also, how can I make it create a new log file for each run? Right now the logging info gets appended to the same log file.
Error:
[Probe Data Quality] 2020-07-02 09:21:04,217 - root INFO started processing config fileC:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\venv\odfs_tester_history_dirs.ini
[Probe Data Quality] 2020-07-02 09:21:04,373 - root INFO started processing config fileC:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\venv\odfs_tester_history_dirs.ini
[Probe Data Quality] 2020-07-02 09:21:04,420 - root ERROR [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log' -> 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\error\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log'
Traceback (most recent call last):
File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 277, in <module>
odf_history_from_csv_to_dbtable(db_instance)
File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 251, in odf_history_from_csv_to_dbtable
csv.rename(trg_path.joinpath(csv.name))
File "C:\ProgramData\Anaconda3\lib\pathlib.py", line 1329, in rename
self._accessor.rename(self, target)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log' -> 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\error\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log'
It looks like odf_history_from_csv_to_dbtable is trying to rename the log file (into an error directory, judging by the target path) while the logger still has it open, hence the self._accessor.rename(self, target) failure. You might need to move that logic somewhere else, or rethink it so the move happens after your main script has finished with the file.
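If the file really does need to be moved while the script is running, the handle has to be released first. A minimal sketch of that idea (release_log_handlers is a hypothetical helper, not part of your code; logging.shutdown() is the blunter built-in alternative that closes every handler):
import logging

def release_log_handlers():
    # Close and detach any FileHandlers on the root logger, so Windows
    # will allow the underlying file to be renamed or moved.
    root = logging.getLogger()
    for h in list(root.handlers):
        if isinstance(h, logging.FileHandler):
            h.close()            # releases the OS file handle
            root.removeHandler(h)

# release_log_handlers()
# csv.rename(trg_path.joinpath(csv.name))  # now the rename can succeed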
As far as having a unique name for each run of the script:
from uuid import uuid4

run_id = uuid4()
# note the rf prefix: a raw f-string, so \U and friends in the path aren't treated as escape sequences
logfilepath = rf"C:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\odfshistory\CSV_ODFS_Probe_Quality-%Y-%m-%d-{run_id}.log"
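As in your original snippet, the %Y-%m-%d placeholders still need strftime to fill them in:
from datetime import datetime
log_file_name = datetime.now().strftime(logfilepath)  # expands %Y-%m-%d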
I am trying to save all output (stdout and all errors) of a cell to a file. To save stdout, I am using the following:
import sys
old_stdout = sys.stdout
sys.stdout = open('test.txt', 'w')
print("Hello World! ")
In this case, the output is not displayed and gets saved in the file as expected. To save errors, I used:
#Doesn't work
sys.stderr = open('error.txt','w')
print(a) #Should raise NameError
When I run this cell, I get the error in the notebook, and not in the file as expected:
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
<ipython-input-5-de3efd936845> in <module>()
1 #Doesn't work
----> 2 sys.stderr = open('error.txt','w')
3 print("Test")
4 print(a)
NameError: name 'sys' is not defined
I would like this saved in a file and not shown in the notebook. What is the correct code for this?
I think that the problem here is that IPython kernels spawned for the notebook use a ZMQInteractiveShell instance, which catches the errors before they make it to stderr, in order to send the error information to the various potential frontends (consoles, Jupyter notebooks, etc). See ipykernel/ipkernel.py#L397-L413 for the code which catches exceptions, then InteractiveShell._showtraceback for the base implementation (print to sys.stderr), and ZMQInteractiveShell._showtraceback for that used by notebook kernels (send stderr-channel messages over zmq to frontends).
If you're not bothered about getting exact stderr output, you can take advantage of IPython's existing error logging, which logs errors to a StreamHandler with the prefix "Exception in execute request:". To use this, set the IPython log level and alter the provided handler's stream:
import logging
import sys
my_stderr = sys.stderr = open('errors.txt', 'w') # redirect stderr to file
get_ipython().log.handlers[0].stream = my_stderr # log errors to new stderr
get_ipython().log.setLevel(logging.INFO) # errors are logged at info level
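With this in place, an uncaught exception in a later cell should be appended to errors.txt by that handler, prefixed with "Exception in execute request:" (the notebook will still display the error too, since the zmq messages are sent regardless).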
Alternatively, to get your shell errors to print directly to a file without alteration, you can monkey-patch the _showtraceback method to print traceback to file as well as zmq message queues:
import sys
import types

# ensure we don't do this patch twice
if not hasattr(get_ipython(), '_showtraceback_orig'):
    my_stderr = sys.stderr = open('errors.txt', 'w')  # redirect stderr to file

    # monkeypatch!
    get_ipython()._showtraceback_orig = get_ipython()._showtraceback

    def _showtraceback(self, etype, evalue, stb):
        my_stderr.write(self.InteractiveTB.stb2text(stb) + '\n')
        my_stderr.flush()  # make sure we write *now*
        self._showtraceback_orig(etype, evalue, stb)

    get_ipython()._showtraceback = types.MethodType(_showtraceback, get_ipython())
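A quick way to check the patch (cell boundaries shown as comments; errors.txt is the file opened above):
# cell 1: trigger an uncaught exception
undefined_name  # NameError: traceback is written to errors.txt as well as the notebook

# cell 2: inspect what was captured
with open('errors.txt') as f:
    print(f.read())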
The execution takes place from Robot Framework, where Test.py has been imported as a library and testLog() is being executed, which in turn imports Logger.py and calls LogMessage().
Test.py
import Logger
def testLog():
Logger.LogMessage("This is the first line of the log file.")
Logger.LogMessage("This is the second line of the log file.")
Logger.LogMessage("This is the third line of the log file.")
Logger.py
import logging

def LogMessage(message):
    LOG_FILENAME = "C://Log_Details.log"
    logger = logging.getLogger()
    logFileHandler = logging.FileHandler(LOG_FILENAME)
    logger.addHandler(logFileHandler)
    logger.warning(message)  # the call that writes the message (presumably elided from the original excerpt)
Log_Details.log
This is the first line of the log file.
This is the second line of the log file.
This is the second line of the log file.
This is the third line of the log file.
This is the third line of the log file.
This is the third line of the log file.
The message log section in RIDE logs each line just once during execution, but the file named Log_Details.log prints them multiple times, i.e. the 1st line gets logged once while the 2nd gets logged twice, and so on.
You get 1x message 1, 2x message 2 and 3x message 3.
That is because you perform your logging setup as part of your LogMessage function, and in there you add a file log handler every time you log a message... so after the first call you have 1 handler that logs your message once, after the second call you have 2 handlers that log your message twice, and so on.
To avoid that, configure your logger only once.
Just move your logging config to a function that you call once when you start your script, and from then on you can just use:
import logging
log = logging.getLogger(__name__)
log.info('smth')
whenever you feel like logging in any other file in your application.
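A minimal sketch of that split, assuming setup_logging and the file path are placeholders you would adapt (basicConfig is also a no-op if handlers are already configured, which guards against accidental double setup):
import logging

def setup_logging(log_filename="C://Log_Details.log"):
    # Configure the root logger exactly once, at program start.
    logging.basicConfig(
        filename=log_filename,
        level=logging.INFO,
        format="%(asctime)s %(name)s %(levelname)s %(message)s",
    )

# Test.py (or whatever Robot Framework runs first) calls setup_logging()
# once; LogMessage then shrinks to a plain logging call:
def LogMessage(message):
    logging.getLogger(__name__).info(message)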
Brief:
base.py:
import logging
from logging.handlers import TimedRotatingFileHandler
import os

slogFile = os.path.join(os.getcwd(), 'LOGS', 'App_Debug.log')
if not os.path.isdir(os.path.join(os.getcwd(), 'LOGS')):
    os.mkdir(os.path.join(os.getcwd(), 'LOGS'))

logging.basicConfig(filename=slogFile, level=logging.DEBUG)
logging.basicConfig(format='%(asctime)s %(message)s')

logger = logging.getLogger("myApp")
fmt = logging.Formatter(fmt='%(asctime)s %(message)s')
size = 1024 * 1024 * 1  # 1 MB file size
hdlr = logging.handlers.RotatingFileHandler(filename=slogFile, mode='w', maxBytes=size, backupCount=5, encoding=None, delay=0)
hdlr.setFormatter(fmt)
logger.addHandler(hdlr)
app_main1.py:
import base
base.logger.debug('xxx')
app_main2.py:
import base
base.logger.debug('xxx')
Note: my application also uses another module that records its logs into the same file.
I am getting this error:
Traceback (most recent call last):
File "C:\Python27\lib\logging\handlers.py", line 78, in emit
self.doRollover()
File "C:\Python27\lib\logging\handlers.py", line 141, in doRollover
os.rename(self.baseFilename, dfn)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process
Logged from file app_main1.py, line 59
Could you explain this to me?
I want to back up the log file when it reaches the maximum (1 MB) size.
You probably have the log file open in some other program, which is why it can't be renamed. This could be one of your programs, or an anti-virus or full-text indexer operating on disk files.
I got the same error while developing a Flask application. To solve the problem, I had to change the environment variable from "FLASK_DEBUG=1" to "FLASK_DEBUG=0". The reason is that turning debugging on leads to threading errors. I got the solution after reading this blog.
My guess is that since you imported base.py twice, the RotatingFileHandler is set up twice, and therefore the file is accessed by two processes.
I had a similar problem today due to this reason. Details here.
I would suggest not importing base.py, but creating a child logger in app_main1.py and app_main2.py. You can refer to the documentation here.
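A rough sketch of that child-logger layout (assuming "myApp" stays the parent logger name and the handler attachment from base.py runs exactly once, in whichever script is the real entry point):
# app_main1.py
import logging

logger = logging.getLogger("myApp.app_main1")  # child of "myApp"
logger.debug('xxx')  # propagates up to the handlers attached to "myApp"
Because child loggers propagate records to their ancestors' handlers by default, only the entry point needs to attach the RotatingFileHandler, so only one open handle on the log file exists per process.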
So I exported some unit tests from the Selenium IDE to Python. Now I'm trying to debug something, and I've noticed that Selenium uses the logging module. There is one particular line in selenium.webdriver.remote.remote_connection that I would really like to see the output of. It is:
LOGGER.debug('%s %s %s' % (method, url, data))
At the top of the file is another line that reads:
LOGGER = logging.getLogger(__name__)
So where is this log file? I want to look at it.
In your unit test script, place
import logging
logging.basicConfig(filename=log_filename, level=logging.DEBUG)
where log_filename is a path to wherever you'd like the log file written to.
Without the call to logging.basicConfig, or some such call to set up a logging handler, the LOGGER.debug command does nothing: there is no log file until you configure one.
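Putting it together, a minimal sketch of such a test script (selenium.log, the Firefox driver, and the test body are assumptions for illustration):
import logging
import unittest
from selenium import webdriver

# must run before the tests so selenium's LOGGER.debug output is captured
logging.basicConfig(filename="selenium.log", level=logging.DEBUG)

class ExampleTest(unittest.TestCase):
    def test_open_page(self):
        driver = webdriver.Firefox()      # assumes a working geckodriver setup
        driver.get("http://example.com")  # remote_connection's debug lines
        driver.quit()                     # now land in selenium.log

if __name__ == "__main__":
    unittest.main()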