rsyslog template "eating" the first part of a message - python

I'm logging messages to syslog with Python's SysLogHandler. The problem is that startswith combined with a template seems to "eat" the beginning of the logged string.
Rsyslogd is version 8.4.2, Python 2.7.9 (same behaviour on 2.7.11). It does not seem to happen on rsyslogd 7.x with Python 2.7.4 however.
Example:
#!/usr/bin/env python
import logging
from logging.handlers import SysLogHandler
my_fmt = logging.Formatter('%(name)s:%(message)s', '%Y-%m-%d %H:%M:%S')
foo_handler = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_LOCAL5)
foo_handler.setLevel(logging.INFO)
foo_handler.setFormatter(my_fmt)
foo = logging.getLogger('foo')
foo.setLevel(logging.INFO)
foo.addHandler(foo_handler)
foo.propagate = False
foo.info("This is foo")
With this rsyslog configuration:
$template myt,"%TIMESTAMP:::date-rfc3339%%msg:::sp-if-no-1st-sp%%msg:::drop-last-lf%\n"
if $syslogfacility-text == "local5" then {
    if $msg startswith "foo" then {
        action(type="omfile" file="/var/log/foo.log" template="myt")
    } else {
        action(type="omfile" file="/var/log/bar.log" template="myt")
    }
    stop
}
Produces the following:
==> /var/log/bar.log <==
2016-06-29T17:29:55.330941+01:00 is foo
Notice the missing 'This' in the message.
Conversely, removing the use of the template in the rsyslog config file results in:
==> /var/log/bar.log <==
Jun 29 18:19:40 localhost foo:This is foo
Removing %msg:::sp-if-no-1st-sp% from the template does not seem to help either.

The solution appears to be:
Use $syslogtag startswith instead of $msg startswith
In the Python source, separate the logger name from the rest of the string with a space: logging.Formatter('%(name)s: %(message)s', '%Y-%m-%d %H:%M:%S')
I am unsure why this wasn't a problem on 2.7.4; if anyone finds the reason, please post a comment below.
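For illustration, a minimal sketch of the adjusted Python side (the same script as above, with only the formatter changed so that rsyslog can parse 'foo:' as the syslog tag and a $syslogtag startswith "foo" filter matches):
#!/usr/bin/env python
import logging
from logging.handlers import SysLogHandler

# Note the space after the colon: 'foo:' is then parsed as the syslog tag
# and the message body keeps its first word.
my_fmt = logging.Formatter('%(name)s: %(message)s', '%Y-%m-%d %H:%M:%S')

foo_handler = SysLogHandler(address='/dev/log', facility=SysLogHandler.LOG_LOCAL5)
foo_handler.setLevel(logging.INFO)
foo_handler.setFormatter(my_fmt)

foo = logging.getLogger('foo')
foo.setLevel(logging.INFO)
foo.addHandler(foo_handler)
foo.propagate = False
foo.info("This is foo")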

Related

How to create new log file for each run of tests in pytest?

I have created a pytest.ini file with:
addopts = --resultlog=log.txt
This creates a log file, but I would like to create a new log file every time I run the tests.
I am new to pytest, so pardon me if I have missed anything while reading the documentation.
Thanks
Note
The --result-log argument is deprecated and scheduled for removal in version 6.0 (see Deprecations and Removals: Result log). A possible replacement implementation is discussed in issue #4488, so watch out for the next major version bump: the code below will stop working with pytest==6.0.
Answer
You can modify the resultlog in the pytest_configure hookimpl. Example: put the code below in the conftest.py file in your project root dir:
import datetime

def pytest_configure(config):
    if not config.option.resultlog:
        timestamp = datetime.datetime.strftime(datetime.datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.resultlog = 'log.' + timestamp
Now if --result-log is not passed explicitly (so you have to remove addopts = --resultlog=log.txt from your pytest.ini), pytest will create a log file ending with a timestamp. Passing --result-log with a log file name will override this behaviour.
Answering my own question.
As hoefling mentioned, --result-log is deprecated, so I had to find a way to do it without that flag. Here's how I did it:
conftest.py
from datetime import datetime
import logging

log = logging.getLogger(__name__)

def pytest_assertrepr_compare(op, left, right):
    """This function will print a log message every time an assert fails."""
    log.error('Comparing Foo instances: vals: %s != %s \n' % (left, right))
    return ["Comparing Foo instances:", " vals: %s != %s" % (left, right)]

def pytest_configure(config):
    """Create a log file if log_file is not mentioned in the *.ini file."""
    if not config.option.log_file:
        timestamp = datetime.strftime(datetime.now(), '%Y-%m-%d_%H-%M-%S')
        config.option.log_file = 'log.' + timestamp
pytest.ini
[pytest]
log_cli = true
log_cli_level = CRITICAL
log_cli_format = %(message)s
log_file_level = DEBUG
log_file_format = %(asctime)s [%(levelname)8s] %(message)s (%(filename)s:%(lineno)s)
log_file_date_format = %Y-%m-%d %H:%M:%S
test_my_code.py
import logging

log = logging.getLogger(__name__)

def test_my_code():
    ...  # test code
You can get a separate log for each pytest run by naming the log file after the time the test execution starts.
pytest tests --log-file $(date '+%F_%H:%M:%S')
This creates a new log file for each test run, with the timestamp as its name.
$(date '+%F_%H:%M:%S') is the bash command that prints the current timestamp in Date_Hour:Min:Sec format.
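If you prefer to stay in Python rather than shell out, a hedged equivalent sketch (the 'tests' directory is a placeholder for your own test path) invokes pytest programmatically with the same timestamped --log-file:
# Sketch: programmatic equivalent of the shell command above.
from datetime import datetime
import pytest

timestamp = datetime.now().strftime('%Y-%m-%d_%H:%M:%S')
pytest.main(['tests', '--log-file', 'log.' + timestamp])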

How to configure rsyslog for use with SysLogHandler logging class?

In order to write log messages of "myapp" into /var/log/local5.log, I use SysLogHandler.
problem
"myapp" runs well, no error, but nothing gets logged, /var/log/local5.log remains empty.
logging configuration
Relevant parts of the logging configuration file:
handlers:
  mainHandler:
    class: logging.handlers.SysLogHandler
    level: INFO
    formatter: defaultFormatter
    address: '/dev/log'
    facility: 'local5'
loggers:
  __main__:
    level: INFO
    handlers: [mainHandler]
logging test
Here is how I try to write a log in the main script of "myapp":
import logging.config
import yaml

with open('myconfig.yml') as f:
    logging.config.dictConfig(yaml.load(f))

log = logging.getLogger(__name__)
log.info("Starting")
I have added some sys.stderr.write() to /usr/lib/python3.4/logging/handlers.py to see what's happening and I get:
$ myapp
[SysLogHandler._connect_unixsocket()] Sucessfully connected to socket: /dev/log
[SysLogHandler.emit()] called
[SysLogHandler.emit()] msg=b'<174>2016/04/23 07:17:00.453 myapp: main: Starting\x00'
[SysLogHandler.emit()] msg sent to unix socket (no OSError)
rsyslog configuration
/etc/rsyslog.conf (relevant parts; TCP and UDP syslog receptions are disabled):
$ModLoad imuxsock # provides support for local system logging
$ModLoad imklog # provides kernel logging support
[...]
$IncludeConfig /etc/rsyslog.d/*.conf
/etc/rsyslog.d/40-local.conf:
local5.* /var/log/local5.log
rsyslog test
According to lsof output, it looks like rsyslogd is listening to /dev/log (or am I wrong?):
# lsof | grep "/dev/log"
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/1000/gvfs
Output information may be incomplete.
rsyslogd 28044 syslog 0u unix 0xffff8800b4b9b100 0t0 3088160 /dev/log
in:imuxso 28044 28045 syslog 0u unix 0xffff8800b4b9b100 0t0 3088160 /dev/log
in:imklog 28044 28046 syslog 0u unix 0xffff8800b4b9b100 0t0 3088160 /dev/log
rs:main 28044 28047 syslog 0u unix 0xffff8800b4b9b100 0t0 3088160 /dev/log
I won't paste the whole rsyslogd -N1 output since it's a bit long, but here are the lines mentioning "local":
# rsyslogd -N1 | grep local
rsyslogd: version 7.4.4, config validation run (level 1), master config /etc/rsyslog.conf
3119.943361369:7f39080fc780: cnf:global:cfsysline: $ModLoad imuxsock # provides support for local system logging
3119.944034769:7f39080fc780: rsyslog/glbl: using '127.0.0.1' as localhost IP
3119.946084095:7f39080fc780: requested to include config file '/etc/rsyslog.d/40-local.conf'
3119.946135638:7f39080fc780: config parser: pushed file /etc/rsyslog.d/40-local.conf on top of stack
3119.946432390:7f39080fc780: config parser: resume parsing of file /etc/rsyslog.d/40-local.conf at line 1
3119.946678298:7f39080fc780: config parser: reached end of file /etc/rsyslog.d/40-local.conf
3119.946697644:7f39080fc780: Decoding traditional PRI filter 'local5.*'
3119.946723904:7f39080fc780: symbolic name: local5 ==> 168
3119.949560475:7f39080fc780: PRIFILT 'local5.*'
3119.949675782:7f39080fc780: ACTION 0x224cda0 [builtin:omfile:/var/log/local5.log]
3119.953397587:7f39080fc780: PRIFILT 'local5.*'
3119.953806713:7f39080fc780: ACTION 0x224cda0 [builtin:omfile:/var/log/local5.log]
rsyslogd: End of config validation run. Bye.
I don't understand what I am missing. The rsyslog documentation matching the version I use (7.4.4) seems outdated and I can't find my way around it; I'm not sure it's the right place to look for a fix anyway.
EDIT:
It's not possible to define a custom facility like "myapp" (even if it's referenced in rsyslog.conf), so I changed to the 'local5' facility.
Cause of the problem
I finally found out that I had previously created /var/log/local5.log with an inappropriate owner and group (root:root). They were inappropriate because /etc/rsyslog.conf explicitly says the owner and group should be syslog:adm:
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
Unfortunately, the other log files rsyslog is supposed to manage (like auth.log) were also root:root, so, seen from ls -lah, mine did not look any different from the others... (they are also empty; I wonder why such a non-functional configuration is installed by default).
Also unfortunately, rsyslog does not log any error about this (or at least I haven't found where it would).
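As an illustration (not part of the original post), a sketch of one way to fix the ownership, aligning the file with the defaults shown above; it must run as root, and simply deleting the file and letting rsyslog recreate it would work just as well:
# Sketch: give /var/log/local5.log the owner/group/mode that /etc/rsyslog.conf
# declares, so that rsyslog can write to it. Must be run as root.
import grp
import os
import pwd

path = "/var/log/local5.log"
uid = pwd.getpwnam("syslog").pw_uid   # $FileOwner syslog
gid = grp.getgrnam("adm").gr_gid      # $FileGroup adm
os.chown(path, uid, gid)
os.chmod(path, 0o640)                 # $FileCreateMode 0640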
Some more details that could be useful to finish rsyslog configuration
As a side note, rsyslog expects messages in a specific format, and when a message doesn't match it, rsyslog adds some information by default (timestamp, hostname); it's possible to change that. Anyway, from my Python script I decided to only send the bare message and let rsyslog format the output. So finally, the relevant parts of my logging configuration file are:
formatters:
  rsyslogdFormatter:
    format: '%(filename)s: %(funcName)s: %(message)s'
handlers:
  mainHandler:
    class: logging.handlers.SysLogHandler
    level: INFO
    formatter: rsyslogdFormatter
    address: '/dev/log'
    facility: 'local5'
loggers:
  __main__:
    level: INFO
    handlers: [mainHandler]
And I added a customized template in /etc/rsyslog.conf:
$template MyappTpl,"%$now% %timegenerated:12:23:date-rfc3339% %syslogtag%%msg%\n"
and accordingly modified /etc/rsyslog.d/40-local.conf:
local5.* /var/log/local5.log;MyappTpl
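For completeness, a self-contained sketch of the same Python-side setup using an inline dict instead of the YAML file (same formatter, handler and logger names as above; purely illustrative):
# Self-contained equivalent of the YAML configuration above (illustrative only).
import logging
import logging.config

LOGGING = {
    'version': 1,
    'formatters': {
        'rsyslogdFormatter': {
            'format': '%(filename)s: %(funcName)s: %(message)s',
        },
    },
    'handlers': {
        'mainHandler': {
            'class': 'logging.handlers.SysLogHandler',
            'level': 'INFO',
            'formatter': 'rsyslogdFormatter',
            'address': '/dev/log',
            'facility': 'local5',
        },
    },
    'loggers': {
        '__main__': {
            'level': 'INFO',
            'handlers': ['mainHandler'],
        },
    },
}

logging.config.dictConfig(LOGGING)
log = logging.getLogger(__name__)
log.info("Starting")  # ends up in /var/log/local5.log with the MyappTpl template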
I also want to mention that the documentation provided by the matching package (rsyslog-doc for ubuntu) matches the installed version, of course, and provides hints I hadn't found in the online documentation.

Unexpected python logger output when using several handlers with different log levels

I am trying to log data to stderr and into a file. The file should contain all log messages, and to stderr should go only the log level configured on the command line. This is described several times in the logging howto - but it does not seem to work for me. I have created a small test script which illustrates my problem:
#!/usr/bin/env python
import logging as l
l.basicConfig(level=100)
logger = l.getLogger("me")
# ... --- === SEE THIS LINE === --- ...
logger.setLevel(l.CRITICAL)
sh = l.StreamHandler()
sh.setLevel(l.ERROR)
sh.setFormatter(l.Formatter('%(levelname)-8s CONSOLE %(message)s'))
logger.addHandler(sh)
fh = l.FileHandler("test.dat", "w")
fh.setLevel(l.DEBUG)
fh.setFormatter(l.Formatter('%(levelname)-8s FILE %(message)s'))
logger.addHandler(fh)
logger.info("hi this is INFO")
logger.error("well this is ERROR")
In the 5th code line (the marked one) I can use either logger.setLevel(l.CRITICAL) or logger.setLevel(l.DEBUG). Both results are unsatisfying.
With logger.setLevel(l.CRITICAL) I get ...
$ python test.py
$ cat test.dat
$
Now with logger.setLevel(l.DEBUG) I get ...
$ python test.py
INFO:me:hi this is INFO
ERROR CONSOLE well this is ERROR
ERROR:me:well this is ERROR
$ cat test.dat
INFO FILE hi this is INFO
ERROR FILE well this is ERROR
$
In one case I see nothing anywhere; in the other I see everything everywhere, and one message is even displayed twice on the console.
Now, I get where the ERROR CONSOLE and ERROR FILE outputs come from; those I expect. I don't get where the INFO:me... or ERROR:me... outputs are coming from, and I would like to get rid of them.
Things I already tried:
Creating a filter as described here: https://stackoverflow.com/a/7447596/902327 (does not work)
Emptying handlers from the logger with logger.handlers = [] (also does not work)
Can somebody help me out here? It seems like a straightforward requirement and I really don't seem to get it.
You can set the logger's level to DEBUG, set propagate to False (the INFO:me:... and ERROR:me:... lines come from records propagating up to the root logger, whose handler basicConfig() installed), and then set the appropriate level on each handler.
import logging as l
l.basicConfig()
logger = l.getLogger("me")
# ... --- === SEE THIS LINE === --- ...
logger.setLevel(l.DEBUG)
logger.propagate = False
sh = l.StreamHandler()
sh.setLevel(l.ERROR)
sh.setFormatter(l.Formatter('%(levelname)-8s CONSOLE %(message)s'))
logger.addHandler(sh)
fh = l.FileHandler("test.dat", "w")
fh.setLevel(l.INFO)
fh.setFormatter(l.Formatter('%(levelname)-8s FILE %(message)s'))
logger.addHandler(fh)
logger.info("hi this is INFO")
logger.error("well this is ERROR")
Output:
~$ python test.py
ERROR CONSOLE well this is ERROR
~$ cat test.dat
INFO FILE hi this is INFO
ERROR FILE well this is ERROR
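A further sketch, not from the original answer: since the duplicated INFO:me:.../ERROR:me:... lines only exist because basicConfig() put a handler on the root logger, simply not calling basicConfig() gives the desired behaviour without touching propagate (assuming no other code configures the root logger):
import logging as l

logger = l.getLogger("me")
logger.setLevel(l.DEBUG)              # let all records reach the handlers

sh = l.StreamHandler()
sh.setLevel(l.ERROR)                  # console shows ERROR and above only
sh.setFormatter(l.Formatter('%(levelname)-8s CONSOLE %(message)s'))
logger.addHandler(sh)

fh = l.FileHandler("test.dat", "w")
fh.setLevel(l.DEBUG)                  # file gets everything
fh.setFormatter(l.Formatter('%(levelname)-8s FILE %(message)s'))
logger.addHandler(fh)

logger.info("hi this is INFO")        # file only
logger.error("well this is ERROR")    # console and file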

Python, paramiko, invoke_shell and ugly characters

When I run the Python code below:
import workflow
import console
import paramiko
import time
strComputer = 'server.com'
strUser = 'user'
strPwd = 'passwd'
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=strComputer, username=strUser, password=strPwd)
channel = client.invoke_shell()
channel.send('ls\n')
time.sleep(3)
output=channel.recv(2024)
print(output)
#Close the connection
client.close()
print('Connection closed.')
I get the desired output mixed with ugly characters:
Last login: Thu Jun 19 23:37:55 2014 from 192.168.0.10
ls
user#server:~$ ls
[0m[01;34mbin[0m Rplots1.pdf
[01;32mbtsync[0m Rplots.pdf
btsync.conf~ [01;31mrstudio-server-0.95.265-amd64.deb[0m
[01;31mbtsync_glibc23_x64.tar[0m screen.vba
[01;34mbudget[0m [01;34mshiny[0m
[01;3
Connection closed.
Can anyone explain to me what is going on, and how to get pretty output instead?
Thanks
Those are terminal color codes used by ls to highlight directories, executable files, etc. You can call /bin/ls (or, on some distributions, ls --color=never) explicitly to avoid aliases and get un-colored output.
The colors are defined using those cryptic codes like [0m[01;34m.
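Alternatively (not part of the original answer), you can keep the colored ls and strip the ANSI escape sequences on the Python side after receiving the data; a minimal sketch reusing the channel from the question's script:
# Sketch: remove ANSI color/escape sequences from the captured bytes.
import re

# Matches CSI sequences such as ESC[0m or ESC[01;34m.
ansi_escape = re.compile(br'\x1b\[[0-9;]*[A-Za-z]')

output = channel.recv(2024)   # bytes, as in the question's code
clean = ansi_escape.sub(b'', output).decode('utf-8', errors='replace')
print(clean)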

Easy way to suppress output of fabric run?

I am running a command on the remote machine:
remote_output = run('mysqldump --no-data --user=username --password={0} database'.format(password))
I would like to capture the output, but not have it all printed to the screen. What's the easiest way to do this?
It sounds like the Managing output section of the Fabric documentation is what you're looking for.
To hide the output from the console, try something like this:
from __future__ import with_statement
from fabric.api import hide, run, get

with hide('output'):
    run('mysqldump --no-data test | tee test.create_table')
    get('~/test.create_table', '~/test.create_table')
Below are the sample results:
No hosts found. Please specify (single) host string for connection: 192.168.6.142
[192.168.6.142] run: mysqldump --no-data test | tee test.create_table
[192.168.6.142] download: /home/quanta/test.create_table <- /home/quanta/test.create_table
Try this if you want to hide everything from the log and avoid fabric throwing exceptions when a command fails:
from __future__ import with_statement
from fabric.api import env, run, hide, settings

env.host_string = 'username@servernameorip'
env.key_filename = '/path/to/key.pem'

def exec_remote_cmd(cmd):
    with hide('output', 'running', 'warnings'), settings(warn_only=True):
        return run(cmd)
After that, you can check commands result as shown in this example:
import sys

cmd_list = ['ls', 'lss']
for cmd in cmd_list:
    result = exec_remote_cmd(cmd)
    if result.succeeded:
        sys.stdout.write('\n* Command succeeded: ' + cmd + '\n')
        sys.stdout.write(result + "\n")
    else:
        sys.stdout.write('\n* Command failed: ' + cmd + '\n')
        sys.stdout.write(result + "\n")
This will be the console output of the program (note that there are no log messages from fabric):
* Command succeeded: ls
Desktop espaiorgcats.sql Pictures Public Videos
Documents examples.desktop projectes scripts
Downloads Music prueba Templates
* Command failed: lss
/bin/bash: lss: command not found
For fabric==2.4.0 you can hide output using the following logic:
from fabric import Connection

conn = Connection(host="your-host", user="your-user")
result = conn.run('your_command', hide=True)
result.stdout.strip()  # here you can get the output
As other answers allude, fabric.api no longer exists (as of writing, fabric==2.5.0, 8 years after the question was asked). However, the next most recent answer here implies that providing hide=True to every .run() call is the only/accepted way to do it.
Not satisfied with that, I went digging for a reasonable equivalent of a context where I can specify it only once. It feels like there should still be a way using an invoke.context.Context, but I didn't want to spend any longer on this, and the easiest way I could find was invoke.config.Config, which we can access via fabric.config.Config without needing any additional imports.
>>> import fabric
>>> c = fabric.Connection(
... "foo.example.com",
... config=fabric.config.Config(overrides={"run": {"hide": True}}),
... )
>>> result = c.run("hostname")
>>> result.stdout.strip()
'foo.example.com'
As of Fabric 2.6.0, the hide argument to run is not available.
Expanding on suggestions by @cfillol and @samuel-harmer, using a fabric.Config may be a simpler approach:
>>> import fabric
>>> conf = fabric.Config()
>>> conf.run.hide = True
>>> conf.run.warn = True
>>> c = fabric.Connection(
... "foo.example.com",
... config=conf
... )
>>> result = c.run("hostname")
This way no command output is printed and no exception is thrown on command failure.
As Samuel Harmer also pointed out in his answer, it is possible to manage output of the run command at the connection level.
As of version 2.7.1:
from fabric import Config, Connection

connection = Connection(
    host,
    config=Config(overrides={
        "run": {"hide": "stdout"}
    }),
    ...
)
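Tying this back to the original question, a hedged usage sketch (the host name and credentials are placeholders): the command's output is still captured on the returned result even though nothing is echoed to the terminal.
# Usage sketch; "db.example.com" is a placeholder host.
from fabric import Config, Connection

connection = Connection(
    "db.example.com",
    config=Config(overrides={"run": {"hide": "stdout"}}),
)

result = connection.run("mysqldump --no-data --user=username database")
remote_output = result.stdout   # captured, but not printed to the terminal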
