We're using Python with Django 1.10. We have more than 1000 tests, and when running all of them we get tons of logs to stdout.
It mainly hurts during deployments: we create a Docker instance and run all our tests (with python manage.py test).
I would like to somehow print only errors when running all tests.
Is there a way to do such a thing?
Perhaps create a test-specific test_settings.py that overrides the log level to ERROR when the tests are run.
For example, if the main settings.py contains:
LOGGING = {
    ...
    'loggers': {
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
        }
    }
}
Then you could create a test_settings.py that overrides the log level.
from settings import *
LOGGING['loggers']['myapp']['level'] = 'ERROR'
And then specify the test_settings when you run your tests.
python manage.py test --settings test_settings
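A common variation on the same idea (a sketch, not from the original answer) is to keep a single settings module and lower the log level only when the test command is detected:

    # at the bottom of settings.py: drop log noise when running "manage.py test"
    import sys

    if 'test' in sys.argv:
        LOGGING['loggers']['myapp']['level'] = 'ERROR'

This avoids a second settings file, at the cost of making settings.py depend on how the process was started.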
There are a lot of similar questions on SO, but I couldn't find one that exactly fits my situation.
I'm setting up logging in my app factory like so:
__init__.py
import os
from logging.config import dictConfig

from flask import Flask

from definitions import APP_NAME  # APP_NAME lives in definitions (see foo.py below)

LOG_FOLDER = f'{os.path.dirname(os.path.abspath(__file__))}/logs'

def create_app(test_config=None):
    # Set up logging: make the log folder if it doesn't already exist
    try:
        os.makedirs(LOG_FOLDER)
        print("created logs folder")
    except OSError:
        print("log folder already exists")

    dictConfig({
        "version": 1,
        "handlers": {
            "fileHandler": {
                "class": "logging.handlers.RotatingFileHandler",
                "formatter": "myFormatter",
                "filename": f"{LOG_FOLDER}/flask.log",
                "maxBytes": 500,
                "backupCount": 5
            },
            "werkzeugFileHandler": {
                "class": "logging.handlers.RotatingFileHandler",
                "formatter": "myFormatter",
                "filename": f"{LOG_FOLDER}/werkzeug.log",
                "maxBytes": 500,
                "backupCount": 5
            },
            "console": {
                "class": "logging.StreamHandler",
                "formatter": "myFormatter"
            }
        },
        "loggers": {
            APP_NAME: {
                "handlers": ["fileHandler", "console"],
                "level": "INFO",
            },
            "werkzeug": {
                "level": "INFO",
                "handlers": ["werkzeugFileHandler", "console"],
            }
        },
        "formatters": {
            "myFormatter": {
                "format": "[%(asctime)s] {%(pathname)s:%(lineno)d} %(levelname)s - %(message)s"
            }
        }
    })

    # create and configure the app
    app = Flask(__name__, instance_relative_config=True)
<remainder omitted>
And accessing the logger in my other classes like so:
foo.py
from flask import Flask
from definitions import APP_NAME
app = Flask(APP_NAME)
app.logger.info("blah")
But when it comes time for RotatingFileHandler to rename flask.log to flask.log.1, I get this error I've seen in numerous SO posts:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\user\\project_root\\logs\\flask.log' -> 'C:\\Users\\user\\project_root\\logs\\flask.log.1'
I am running the flask server locally in development mode, using the flask run cli command.
Another thing to note: while the Flask server is running, I am unable to modify (i.e. delete or rename) the log files manually, so the mere act of having the server running seems to lock the files against modification. Is it wrong to initialise the logging in __init__.py, or is there something I'm missing?
I think this is a duplicate of PermissionError when using python 3.3.4 and RotatingFileHandler, but I'm reposting my answer:
I spent half a day on this, as no previous answer resolved my issue.
My working solution is to use https://pypi.org/project/concurrent-log-handler/ instead of RotatingFileHandler. In multi-threaded scenarios such as a Flask app, a PermissionError is raised when a log file that has reached its maximum size is rotated.
Install pypiwin32 to get rid of the "No module named win32con" error.
Thanks go to https://www.programmersought.com/article/43941158027/
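For illustration, the swap is a one-line change in the question's dictConfig; a sketch, assuming concurrent-log-handler is installed (the werkzeug handler would change the same way):

    "fileHandler": {
        # drop-in replacement for RotatingFileHandler that locks the file
        # around writes/rotation so concurrent threads and processes don't race
        "class": "concurrent_log_handler.ConcurrentRotatingFileHandler",
        "formatter": "myFormatter",
        "filename": f"{LOG_FOLDER}/flask.log",
        "maxBytes": 500,
        "backupCount": 5
    },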
Try changing the delay parameter to True in your handler (in my case I used TimedRotatingFileHandler).
https://stackoverflow.com/a/69378029/15036810
I'm trying to run a scheduled task using Django-Q. I followed the docs, but it's not running.
Here's my config:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'db_cache_table',
    }
}
Q_CLUSTER = {
    'name': 'DjangORM',
    'workers': 4,
    'timeout': 90,
    'retry': 120,
    'queue_limit': 50,
    'bulk': 10,
    'orm': 'default'
}
Here's my scheduled task:
Nothing is executing. Please help.
I also had problems getting scheduled tasks processed in the first place, but finally found a workflow.
I run django-q on a Windows machine, using the Django ORM as a broker.
Before talking about the execution routine I came up with, let's quickly check out my modules first, starting with ..
settings.py:
Q_CLUSTER = {
    "name": "austrian_energy_monthly",
    "workers": 1,
    "timeout": 10,
    "retry": 20,
    "queue_limit": 50,
    "bulk": 10,
    "orm": "default",
    "ack_failures": True,
    "max_attempts": 1,
    "attempt_count": 0,
}
.. and my folder structure:
As you can see, the folder of my Django project is inside the src folder. Further, there's a folder for the app I created for this project, which is simply called "app". Inside the app folder I have another folder called "cron", which includes the following files and functions related to the scheduling:
tasks.py
I do not use the schedule() method provided by django-q; instead I create the Schedule table objects directly (see: django-q official schedule docs):
from django.utils import timezone

from austrian_energy_monthly.app.cron.func import create_text_file
from django_q.models import Schedule

Schedule.objects.create(
    func="austrian_energy_monthly.app.cron.func.create_text_file",
    kwargs={"content": "Insert this into a text file"},
    hook="austrian_energy_monthly.app.cron.hooks.print_result",
    name="Text file creation process",
    schedule_type=Schedule.ONCE,
    next_run=timezone.now(),
)
Make sure you assign the "right" dotted path to the "func" keyword. Just using "func.create_text_file" didn't work out for me, even though these files live in the same folder. The same goes for the "hook" keyword.
(NOTE: I've set up my project as a development package via setup.py, so that I can call it from everywhere inside my src folder.)
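If you prefer the documented helper over creating the table object by hand, django_q.tasks.schedule() expresses the same thing; a minimal sketch using the same dotted paths as above:

    from django_q.models import Schedule
    from django_q.tasks import schedule

    # one-off schedule; extra keyword arguments are passed through to the function
    schedule(
        "austrian_energy_monthly.app.cron.func.create_text_file",
        name="Text file creation process",
        hook="austrian_energy_monthly.app.cron.hooks.print_result",
        schedule_type=Schedule.ONCE,
        content="Insert this into a text file",
    )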
func.py:
Contains the function called by the schedule table object.
def create_text_file(content: str) -> str:
    with open("copy.txt", "w") as file:
        file.write(content)
    return "Created a text file"
hooks.py:
Contains the function called after the scheduled process finished.
def print_result(task):
    print(task.result)
Let's now see how I managed to get the executions running with the example files described above:
First I scheduled the "Text file creation process". For that I used "python manage.py shell" and imported the tasks.py module (you could probably schedule everything via the admin page as well, but I haven't tested that yet):
You can now see the scheduled task, with a question mark in the success column, on the admin page (tab "Scheduled tasks", as in your picture):
After that I opened a new terminal and started the cluster with "python manage.py qcluster", resulting in the following output in the terminal:
The successful execution can be inspected by looking at "13:22:17 [Q] INFO Processed [ten-virginia-potato-high]", alongside the hook's print statement "Created a text file" in the terminal. You can also check it on the admin page, under the tab "Successful Tasks", where you should see:
Hope that helped!
Django-q doesn't support Windows. :)
In my code I have the following for a verbose mode and a non-verbose mode; I'm reading from a logDict object.
I expect that in verbose mode "DEBUG MODE: test debug" and "DEBUG MODE: test error" are written to the console and "[uuid] [date] [etc] test error" is written only to a file, and that in non-verbose mode nothing is printed to the console but "test error" is still written to the file.
First, here is my dictConfig:
LOGGING_DICT = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            # we have a uuid for the log so we know what process it came from
            'format': '[{0}][%(asctime)s][%(name)s][%(levelname)s] : %(message)s'.format(logger_id),
            'datefmt': "%Y-%m-%d %H:%M:%S",
        }
    },
    'loggers': {
        'root': {
            'handlers': ['console'],
            'level': 'DEBUG',
        },
        'script_A': {
            'handlers': ['timed_rotate_file'],
            'level': 'INFO',
        },
    },
    'handlers': {
        'timed_rotate_file': {
            'filename': 'logs/weekly_tool.log',
            'level': 'INFO',
            'formatter': 'simple',
            'class': 'logging.handlers.TimedRotatingFileHandler',
            'encoding': 'utf8',
            # Used to configure when backups happen: seconds, minutes, w0, w1 (Monday, Tuesday), ...
            'when': 'midnight',  # daily backup
            # This is used to configure rollover (7 = weekly files if when = daily or midnight)
            'backupCount': 7,
        }
    }
}
And now the script that calls it:
import logging
from logging.config import dictConfig

from helpers.logging_config import LOGGING_DICT
...

def main():
    logger.debug("test debug")
    logger.error("test error")

if __name__ == "__main__":
    if args.verbose:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
        stream_handler = logging.StreamHandler()
        formatter = logging.Formatter("DEBUG MODE: %(message)s")
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(logging.DEBUG)
        logger.addHandler(stream_handler)
    else:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
What I get instead is the following:
~$ python script_A.py
~$ (No output, as expected)
~$ python script_A.py -v
~$ DEBUG MODE: test error
Why is the test debug message not printed to the console? Clearly the stream handler is being invoked, but its level is either not being set correctly or is being ignored.
When I print logger.level in the middle of the script I get 20, which is what I expect given the dictConfig. The handler's level, however, is set separately; does that mean it is being ignored? (What is the point of setLevel in a python logging handler?) <-- I'm looking at this as well, but my issue is flipped: in my dictConfig the settings are stricter than what I actually want to print, which means that if I reset the log level of the logger I get from dictConfig, things I don't want in my file will be printed there. Can I circumvent this?
I figured this out on my own. As in the post I linked above, I have to reset the logger's level:
if __name__ == "__main__":
    if args.verbose:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
        stream_handler = logging.StreamHandler()
        formatter = logging.Formatter("DEBUG MODE: %(message)s")
        stream_handler.setFormatter(formatter)
        stream_handler.setLevel(logging.DEBUG)
        logger.addHandler(stream_handler)
        logger.setLevel(logging.DEBUG)
    else:
        dictConfig(LOGGING_DICT)
        logger = logging.getLogger("script_A")
I thought that doing this would also change the file handler's level, but for some reason that doesn't happen. If anyone knows why, I would love to hear how the internals work this out.
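For what it's worth, this matches the stdlib's documented behaviour: the logger's level only decides which records are passed on to its handlers, and each handler then applies its own level independently, so logger.setLevel() never touches handler levels. A minimal sketch (the logger name is arbitrary):

    import logging

    logger = logging.getLogger("demo")
    logger.setLevel(logging.DEBUG)   # the logger now passes DEBUG records to its handlers

    handler = logging.StreamHandler()
    handler.setLevel(logging.INFO)   # ...but this handler still drops anything below INFO
    logger.addHandler(handler)

    logger.debug("dropped by the handler")  # passes the logger's check, filtered by the handler
    logger.info("emitted")                  # passes both checks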
I am trying to get a rotating log file for a GUI application I am writing with Python 3.3.4 and PyQt4.
I have the following snippet of code in my main script:
import logging
from logging.handlers import RotatingFileHandler

import resources

logger = logging.getLogger('main.test')

def main():
    logger.setLevel(logging.DEBUG)
    fh = RotatingFileHandler(resources.LOG_FILE_PATH, maxBytes=500, backupCount=5)
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    fh.setFormatter(formatter)
    logger.addHandler(fh)
    logger.info('main')
I have maxBytes set low so that I can verify the rotation is working correctly, which it is not. I am getting the following error whenever the log should be rotated:
Traceback (most recent call last):
File "C:\Python33\lib\logging\handlers.py", line 73, in emit
self.doRollover()
File "C:\Python33\lib\logging\handlers.py", line 176, in doRollover
self.rotate(self.baseFilename, dfn)
File "C:\Python33\lib\logging\handlers.py", line 116, in rotate
os.rename(source, dest)
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\\Users\\myuser\\.logtest\\test.log.1'
And nothing is logged. Any help is much appreciated.
Thank you
I spent half a day on this, as no previous answer resolved my issue.
My working solution is to use https://pypi.org/project/concurrent-log-handler/ instead of RotatingFileHandler. In multi-threaded scenarios such as a Flask app, a PermissionError is raised when a log file that has reached its maximum size is rotated.
Install pypiwin32 to get rid of the "No module named win32con" error.
Thanks go to https://www.programmersought.com/article/43941158027/
Instead of adding the handler to the logger object, you can specify the handler directly in basicConfig(). If you add a RotatingFileHandler to the logger object, one object might have the log file open while another simultaneously tries to rename it, which throws the PermissionError.
The code below seems to work pretty well.
import logging
from logging.handlers import RotatingFileHandler

import resources

logging.basicConfig(
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    handlers=[RotatingFileHandler(filename=resources.LOG_FILE_PATH, maxBytes=500, backupCount=5)],
)
logger = logging.getLogger('main.test')

def main():
    logger.setLevel(logging.DEBUG)
    logger.info('main')
In my case it happens only on Windows. To solve it I changed the delay parameter to True for my TimedRotatingFileHandler log handler.
Docs -> https://docs.python.org/3/library/logging.handlers.html#logging.handlers.TimedRotatingFileHandler
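A minimal sketch of that change (the filename and rotation arguments here are illustrative):

    from logging.handlers import TimedRotatingFileHandler

    # delay=True defers opening the log file until the first record is emitted,
    # so the process doesn't hold the file open (and locked, on Windows) from startup
    handler = TimedRotatingFileHandler(
        "app.log", when="midnight", backupCount=7, delay=True
    )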
You cannot specify the same filename in both basicConfig() and RotatingFileHandler(). I had this same issue; removing the filename parameter from basicConfig() fixed it.
In my case (Windows Server 2016 + IIS + FastCGI + Flask) I finally fixed it by turning off file indexing in the log folder (see this how-to).
Source: https://stackoverflow.com/a/22467917/9199668
Btw, it had been working correctly for months... I have no idea why...
Check that the file isn't being kept open by e.g. Windows file indexing, anti-virus or other software. Files that are open can't be renamed.
I changed the application to use dictConfig and created a separate file that holds the dictionary configuration. At the top of my main application I have:
import logging.config

from log.logger import LOGGING

logging.config.dictConfig(LOGGING)
logger = logging.getLogger('testlogging')
Then in log.logger I have:
import logging
import sys

import resources

LOGGING = {
    "version": 1,
    "handlers": {
        "fileHandler": {
            "class": "logging.handlers.RotatingFileHandler",
            "formatter": "myFormatter",
            "filename": resources.LOG_FILE_PATH,
            "maxBytes": 100000,
            "backupCount": 5
        },
        "console": {
            "class": "logging.StreamHandler",
            "formatter": "myFormatter"
        }
    },
    "loggers": {
        "aoconnect": {
            "handlers": ["fileHandler", "console"],
            "level": "DEBUG",
        }
    },
    "formatters": {
        "myFormatter": {
            "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"
        }
    }
}
This all seems to work pretty well.
In my case the log file was full; after removing the server.log file it worked.
LOGS_DIR = os.path.join(BASE_DIR, 'logs')

LOGGING = {
    'version': 1,
    'handlers': {
        'log_file': {
            'level': 'INFO',
            'class': 'logging.handlers.RotatingFileHandler',
            'filename': os.path.join(LOGS_DIR, 'server.log'),
            'backupCount': 10,  # keep at most 10 log files
            'maxBytes': 5*1024*1024,  # 5242880 bytes (5MB)
        },
    },
    'loggers': {
        'django': {
            'handlers': ['log_file'],
            'propagate': True,
            'level': 'INFO',
        },
    },
}
I'm logging to syslog fine, but I can't work out how to specify the 'tag'. The logging currently produces this:
Mar 3 11:45:34 TheMacMini Unknown: INFO FooBar
but I want that 'Unknown' to be set to something. eg:
Mar 3 11:45:34 TheMacMini Foopybar: INFO FooBar
If I use logger from the command line it can be controlled via the -t option...
$ logger -t Foopybar FooBar && tail -1 /var/log/system.log
Mar 3 12:05:00 TheMacMini Foopybar[4566]: FooBar
But when logging from Python I don't seem to be able to specify the tag:
import logging
logging.info("FooBar")
That just gives me the 'Unknown' tag shown at the top. I've defined this spec:
LOGGING = {
    'version': 1,
    'formatters': {
        'simple': {
            'format': '%(levelname)s %(message)s'
        },
    },
    'handlers': {
        'syslog': {
            'address': '/var/run/syslog',
            'class': 'logging.handlers.SysLogHandler',
            'facility': 'local2',
            'formatter': 'simple'
        }
    },
    'loggers': {
        '': {
            'handlers': ['syslog'],
            'level': 'INFO',
        }
    }
}
How do I specify the tag so it's not always "Unknown"?
Simple Way of Tagging Log Messages
Do this:
logging.info("TagName: FooBar")
and your message will be tagged! You just need to start all your messages with "TagName: ". This is, of course, not very elegant.
Better Solution
Set up your logger:
import logging
import logging.handlers

log = logging.getLogger('name')

address = ('log-server', logging.handlers.SYSLOG_UDP_PORT)
facility = logging.handlers.SysLogHandler.LOG_USER
h = logging.handlers.SysLogHandler(address, facility)

# syslog treats everything up to the first ':' in the message as the tag
f = logging.Formatter('TagName: %(message)s')
h.setFormatter(f)
log.addHandler(h)
And use it:
log.info('FooBar')
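Applied to the dictConfig in the question, that amounts to a one-line formatter change; a sketch:

    'formatters': {
        'simple': {
            # syslogd reads the text before the first ':' as the tag
            'format': 'Foopybar: %(levelname)s %(message)s'
        },
    },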
I'm adding this just for the sake of completeness, even though #sasha's answer is absolutely correct.
If you happen to log messages to syslog directly using syslog.syslog, you can set the tag using the syslog.openlog function:
import syslog
syslog.openlog('foo')
syslog.syslog('bar')
Going back to the Linux shell:
$ tail -f /var/log/syslog
Sep 7 07:01:58 dev-balthazar foo: bar