Brief:
base.py:
import logging
import os
from logging.handlers import RotatingFileHandler

slogFile = os.path.join(os.getcwd(), 'LOGS', 'App_Debug.log')

if not os.path.isdir(os.path.join(os.getcwd(), 'LOGS')):
    os.mkdir(os.path.join(os.getcwd(), 'LOGS'))

logging.basicConfig(filename=slogFile, level=logging.DEBUG, format='%(asctime)s %(message)s')

size = 1024 * 1024  # 1 MB file size

logger = logging.getLogger("myApp")
fmt = logging.Formatter(fmt='%(asctime)s %(message)s')
hdlr = RotatingFileHandler(filename=slogFile, mode='w', maxBytes=size, backupCount=5, encoding=None, delay=0)
hdlr.setFormatter(fmt)
logger.addHandler(hdlr)
app_main1.py:
import base
base.logger.debug('xxx')
app_main2.py:
import base
base.logger.debug('xxx')
NOTE: my application also uses another module that records its logs to the same file.
I am getting this error:
Traceback (most recent call last):
File "C:\Python27\lib\logging\handlers.py", line 78, in emit
self.doRollover()
File "C:\Python27\lib\logging\handlers.py", line 141, in doRollover
os.rename(self.baseFilename, dfn)
WindowsError: [Error 32] The process cannot access the file because it is being used by another process
Logged from file app_main1.py, line 59
Could you explain this to me?
I want to back up the log file when it reaches the maximum size (1 MB).
You probably have the log file open in some other program, which is why it can't be renamed. This could be one of your programs, or an anti-virus or full-text indexer operating on disk files.
I got the same error while developing a Flask application. To solve the problem, I had to change the environment variable from
"FLASK_DEBUG=1" to "FLASK_DEBUG=0". The reason is that turning debugging on leads to threading errors. I found the solution after reading this blog.
My guess is that since base.py is imported by both app_main1.py and app_main2.py, the RotatingFileHandler is set up in each process, so the same log file is opened by two processes at once.
I had a similar problem today due to this reason. Details here.
I would suggest not importing base.py, but creating a child logger in app_main1.py and app_main2.py. You can refer to the documentation here.
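A minimal sketch of that idea (the per-script file name is my own choice, not from the question): each entry script gets a child logger of "myApp" and its own rotating file, so no two processes ever try to rename the same log during rollover:

# app_main1.py -- sketch only
import os
import logging
from logging.handlers import RotatingFileHandler

os.makedirs('LOGS', exist_ok=True)

logger = logging.getLogger("myApp.app_main1")  # child of the "myApp" logger
handler = RotatingFileHandler('LOGS/app_main1.log', maxBytes=1024 * 1024, backupCount=5)
handler.setFormatter(logging.Formatter('%(asctime)s %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug('xxx')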
Related
I have never used logging before and am new to Python. My mentor mandated that the script must write a log file, so I tried to do it in my code following his template. The following are excerpts from my code where logging is used:
logfilepath = r"C:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\odfshistory\CSV_ODFS_Probe_Quality-%Y-%m-%d.log"
log_file_name = datetime.now().strftime(logfilepath)
print(log_file_name)
logging.basicConfig(
filename=log_file_name,
level=logging.DEBUG,
format='[Probe Data Quality] %(asctime)s - %(name)s %(levelname)-7.7s %(message)s' #can you explain this Tenzin?
)
def process_dirconfig_file(config_file_from_sysarg):
    logging.info('started processing config file ' + config_file_from_sysarg)
    dirconfig_file_Pobj = Path(config_file_from_sysarg)
    try:
        if Path.is_file(dirconfig_file_Pobj):
            try:
                if "db_string" not in parseddict or parseddict["db_string"] == "":
                    raise Exception("Error: Your config file is missing 'db connection string' for db connection. Please edit config file to section [db string] and db string key val pairs of form: db_string = <db string>")
                #print(f"Error: Your config file is missing 'error directory' for file processing")
            except Exception as e:
                logging.exception(e)  # puts it into the log file; must run before the raise, or it is never reached
                raise
            return parseddict
        else:
            raise Exception("Error: No directory config file. Please create a config file of directories to be used in processing")
    except Exception as e:
        logging.exception(e)
        raise
if __name__ == '__main__':
    try:
        startTime = datetime.now()
        parse_dict = process_dirconfig_file(dirconfig_file)
        db_instance = dbhandler(parse_dict["db_string"])
        odfs_tabletest_dict = db_instance['odfs_tester_history_files']
        odf_history_from_csv_to_dbtable(db_instance)
        print(datetime.now() - startTime)
    except Exception as e:
        logging.exception(e)
Why does this structure give me this error? Of course, most of my code is edited out of this for brevity. The code works; the logging is what's breaking it. I'll also note that a bunch of my functions are used in for loops, but I wouldn't think that would be an issue. It's not as if log files cannot work in for loops. And no, I haven't done any threading.
Also, how can I make it create a new log file for each run? Right now the logging info gets appended to the log file.
Error:
[Probe Data Quality] 2020-07-02 09:21:04,217 - root INFO started processing config fileC:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\venv\odfs_tester_history_dirs.ini
[Probe Data Quality] 2020-07-02 09:21:04,373 - root INFO started processing config fileC:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\venv\odfs_tester_history_dirs.ini
[Probe Data Quality] 2020-07-02 09:21:04,420 - root ERROR [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log' -> 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\error\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log'
Traceback (most recent call last):
File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 277, in <module>
odf_history_from_csv_to_dbtable(db_instance)
File "C:/Users/sys_nsgprobeingestio/Documents/dozie/odfs/odfshistory3.py", line 251, in odf_history_from_csv_to_dbtable
csv.rename(trg_path.joinpath(csv.name))
File "C:\ProgramData\Anaconda3\lib\pathlib.py", line 1329, in rename
self._accessor.rename(self, target)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log' -> 'C:\\Users\\sys_nsgprobeingestio\\Documents\\dozie\\odfs\\odfshistory\\error\\CSV_ODFS_Probe_Quality-2020-07-02-09-21-04.log'
Looks like odf_history_from_csv_to_dbtable is trying to rename the log file while the logger is still using it. You might need to move that logic somewhere else, or rethink it so it happens after your main script runs. The self._accessor.rename(self, target) frame shows it is trying to move the log into an error directory.
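If the rename really must target the live log file, one hedged workaround (a sketch, not tested against your full script) is to release the file first; logging.shutdown() flushes and closes every handler the module knows about:

import logging

logging.shutdown()  # flush and close all handlers, releasing the .log file on Windows
csv.rename(trg_path.joinpath(csv.name))  # csv and trg_path as in your traceback

After logging.shutdown() the module should not be used for further logging in that run, so this only fits at the very end of the script.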
As far as having a unique name for each run of the script:

from uuid import uuid4

run_id = uuid4()
logfilepath = rf"C:\Users\sys_nsgprobeingestio\Documents\dozie\odfs\odfshistory\CSV_ODFS_Probe_Quality-%Y-%m-%d-{run_id}.log"

Note the rf prefix: the raw part keeps the backslashes literal (a plain f-string would fail on the \U escape in C:\Users), and the %Y-%m-%d part is still expanded by the strftime call from the question.
I have around 15 different Python scripts for an application, 10 of which include logging for debugging purposes. There is a main script, say "hutextract.py", which is run with a filename as an argument. Previously the log file name was fixed ("test.log"). Now I want to create the log file with the same name as the input file (minus the extension). My "hutextract.py" code, where "randpage" and "mandate" are other Python scripts:
from randpage import genrand
from mandatebase import formext
...# importing other scripts, functions
import sys
import logging

file_name = sys.argv[1]

def return_log_name():
    return ".".join(file_name.split("/")[-1].split(".")[:-1]) + ".log"

log_file_name = return_log_name()
logging.basicConfig(filename=log_file_name, level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(message)s")
In randpage.py, mandatebase.py and the other scripts, logging is also included:
import logging
from hutextract import return_log_name
log_file_name = return_log_name()
logging.basicConfig(filename=log_file_name, level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(message)s")
This creates an obvious error: running hutextract.py with an argument imports the other scripts (and their functions), which in turn import return_log_name from hutextract.py for logging purposes:
Traceback (most recent call last):
File "hutextract.py", line 3, in <module>
from randpage import genrand
File "/home/src/randpage.py", line 3, in <module>
from mandate_final import sigext
File "/home/src/mandate_final.py", line 5, in <module>
from hutextract import return_log_name
File "/home/src/hutextract.py", line 3, in <module>
from randpage import genrand
ImportError: cannot import name 'genrand'
How do I do the logging inside all the module scripts so that the log file is saved with the same name as the input filename given as an argument?
The error you have provided is because of a circular import. You can see the circle in the traceback: hutextract.py -> randpage.py -> mandate_final.py -> hutextract.py.
Now to the logging. You should call logging.basicConfig(...) only once across multiple scripts (in the starting script), because this line of code configures the so-called root logger of the logging module. The root logger is created when you first import the logging module and lives in the global scope, so it is always available; just use it like logging.debug('message') wherever and whenever you need it.
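A minimal sketch of that arrangement, using the file names from the question:

hutextract.py (the entry point) configures the root logger once, before importing the rest:

import sys
import logging

file_name = sys.argv[1]
log_file_name = ".".join(file_name.split("/")[-1].split(".")[:-1]) + ".log"
logging.basicConfig(filename=log_file_name, level=logging.DEBUG, format="%(asctime)s:%(levelname)s:%(message)s")

from randpage import genrand  # imported only after logging is configured

randpage.py (and the other modules) call neither basicConfig nor anything from hutextract.py; they just use the root logger:

import logging

def genrand():
    logging.debug("this goes to the file configured by the entry point")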
I am trying to enable my python logging using the following:
#!/usr/bin/env python
# -*- coding: utf-8 -*-
import logging
import logging.config
import os

test_filename = 'my_log_file.txt'

try:
    logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
except Exception as e:
    # try to set up a default logger
    logging.error("No loggingpy.conf to parse", exc_info=e)
    logging.basicConfig(level=logging.WARNING, format="%(asctime)-15s %(message)s")

test1_log = logging.getLogger("test1")
test1_log.critical("test1_log crit")
test1_log.error("test1_log error")
test1_log.warning("test1_log warning")
test1_log.info("test1_log info")
test1_log.debug("test1_log debug")
I would like to use a loggingpy.conf file to control the logging like the following:
[loggers]
keys=root
[handlers]
keys=handRoot
[formatters]
keys=formRoot
[logger_root]
level=INFO
handlers=handRoot
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(test_filename,)
[formatter_formRoot]
format=%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s
datefmt=
class=logging.Formatter
Here I am trying to route the logging to the file named by the local "test_filename". When I run this, I get:
ERROR:root:No loggingpy.conf to parse
Traceback (most recent call last):
File "logging_test.py", line 8, in <module>
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)
File "/usr/lib/python2.7/logging/config.py", line 85, in fileConfig
handlers = _install_handlers(cp, formatters)
File "/usr/lib/python2.7/logging/config.py", line 162, in _install_handlers
args = eval(args, vars(logging))
File "<string>", line 1, in <module>
NameError: name 'test_filename' is not defined
CRITICAL:test1:test1_log crit
ERROR:test1:test1_log error
WARNING:test1:test1_log warning
Reading the docs, it seems that the "args" value in the config is eval'd in the context of the logging package's namespace rather than the context in which fileConfig is called. Is there any decent way to get logging to behave this way through a configuration file, so I can configure a dynamic log filename (usually something like "InputFile.log") but still keep the flexibility of a logging config file to change it?
Even though it's an old question, I think it is still relevant. An alternative to the solutions mentioned above is to use logging.config.dictConfig(...) and manipulate the dictionary.
MWE:
log_config.yml
version: 1
disable_existing_loggers: false
formatters:
  default:
    format: "%(asctime)s:%(name)s:%(process)d:%(lineno)d %(levelname)s %(message)s"
handlers:
  console:
    class: logging.StreamHandler
    formatter: default
    stream: ext://sys.stdout
    level: DEBUG
  file:
    class: logging.FileHandler
    formatter: default
    filename: "{path}/service.log"
    level: DEBUG
root:
  level: DEBUG
  handlers:
    - file
    - console
example.py
import logging.config
import sys

import yaml

log_output_path = sys.argv[1]

with open("log_config.yml") as f:
    log_config = yaml.safe_load(f)  # safe_load avoids the unsafe default loader

log_config["handlers"]["file"]["filename"] = log_config["handlers"]["file"]["filename"].format(path=log_output_path)
logging.config.dictConfig(log_config)

logging.debug("test")
Executable as follows:
python example.py .
Result:
service.log file in current working directory contains one line of log message.
Console outputs one line of log message.
Both state something like this:
2016-06-06 20:56:56,450:root:12232:11 DEBUG test
You could place the filename in the logging namespace with:
logging.test_filename = 'my_log_file.txt'
Then your existing loggingpy.conf file should work
You should be able to pollute the logging namespace with anything you like (within reason; I wouldn't try logging.config = 'something') in your module, and that should make it referenceable by the config file.
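Put together, a minimal sketch (reusing the loggingpy.conf from the question):

import logging
import logging.config

logging.test_filename = 'my_log_file.txt'  # visible to the config's eval, which runs against vars(logging)
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)

logging.getLogger("test1").warning("this ends up in my_log_file.txt")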
The args statement is parsed using eval in _install_handlers in logging/config.py, so you can put code into args.
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(os.getenv("LOG_FILE","default_value"),)
Now you only need to populate the environment variable.
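For example, you can populate it from the script itself before parsing the config (a sketch; os is visible to the eval because the logging module itself imports os):

import os
import logging.config

os.environ["LOG_FILE"] = "my_log_file.txt"  # any dynamically computed name
logging.config.fileConfig('loggingpy.conf', disable_existing_loggers=False)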
This is very hacky, so I wouldn't recommend it. But if for some reason you don't want to add to the logging namespace, you can pass the log file name as a command-line argument and then use sys.argv[1] to access it (sys.argv[0] is the script name).
[handler_handRoot]
class=FileHandler
level=INFO
formatter=formRoot
args=(sys.argv[1],)
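This would then be invoked as, for example:

python logging_test.py my_log_file.txt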
Someone inadvertently moved the open log file used by a Python program.
The program uses the logging module with a TimedRotatingFileHandler. When the time came to roll over the file, this error was output:
Traceback (most recent call last):
File "/python_root/lib/logging/handlers.py", line 78, in emit
self.doRollover()
File "/python_root/lib/logging/handlers.py", line 338, in doRollover
os.rename(self.baseFilename, dfn)
OSError: [Errno 2] no such file or directory
Logged from file logtest.py, line 16
The error was repeated on each subsequent attempt to log something. The logged messages did not go into the old (moved) log file.
This reproduces the problem (if you move the log file :))
import time
import logging
from logging import handlers

f = logging.Formatter("%(asctime)s %(message)s")
h = handlers.TimedRotatingFileHandler(
    "testlog", when='s', interval=5, backupCount=10)
h.setFormatter(f)
logger = logging.getLogger('test')
logger.setLevel(logging.INFO)
logger.addHandler(h)

logger.info("LOGTEST started")
for i in range(10):
    time.sleep(5)
    logger.info("Test logging " + str(i))
My concern here is that subsequent log messages are lost. What I'd like to achieve is, in ascending order of preference:
An exception that exits.
An exception I can catch and handle.
The logger displays the error, but subsequent messages go to the old log file.
The logger displays this error, but opens a new log file and continues as normal.
I've skimmed the docs/cookbook for relevant hooks, but nothing's popped out at me. Pointers there are equally welcome.
Thanks for your help,
Jonathan
Exceptions raised in doRollover are passed to the handleError method of the handler. You can define a subclass and override this method to do whatever you want to handle the error.
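A minimal sketch of option 4 (the class name and recovery strategy are my own; the default handleError just prints the traceback to stderr): reopen the handler's file so subsequent messages land in a fresh log rather than being lost. The record that triggered the failure is still dropped here; re-emitting it would be a possible refinement.

import logging
from logging.handlers import TimedRotatingFileHandler

class ReopeningTimedRotatingFileHandler(TimedRotatingFileHandler):
    def handleError(self, record):
        # Called when emit() -- including doRollover() -- raises.
        # Close the stale stream and reopen baseFilename, creating a
        # fresh file if the old one was moved away.
        try:
            if self.stream:
                self.stream.close()
            self.stream = self._open()
        except Exception:
            super().handleError(record)  # fall back to the default stderr report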
So I exported some unit tests from the Selenium IDE to Python. Now I'm trying to debug something, and I've noticed that Selenium uses the logging module. There is one particular line in selenium.webdriver.remote.remote_connection that I would really like to see the output of. It is:
LOGGER.debug('%s %s %s' % (method, url, data))
At the top of the file is another line that reads:
LOGGER = logging.getLogger(__name__)
So where is this log file? I want to look at it.
In your unit test script, place
import logging
logging.basicConfig(filename=log_filename, level=logging.DEBUG)
where log_filename is a path to wherever you'd like the log file written to.
Without the call to logging.basicConfig, or some similar call that sets up a logging handler, the LOGGER.debug command does nothing.
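If the full debug output is too noisy, a hedged refinement (the logger name follows from the question's __name__) is to keep the root at WARNING and raise only the remote-connection logger:

import logging

logging.basicConfig(filename="selenium.log", level=logging.WARNING)
# Let just the '%s %s %s' % (method, url, data) messages through at DEBUG:
logging.getLogger("selenium.webdriver.remote.remote_connection").setLevel(logging.DEBUG)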