I want to log raw bytes, but if I change the file mode in FileHandler from "w" to "wb", the logger fails with an error no matter what data I pass to it: string or bytes.
logging.getLogger("clientIn").error(b"bacd")
Traceback (most recent call last):
File "/usr/lib/python3.4/logging/__init__.py", line 980, in emit
stream.write(msg)
TypeError: 'str' does not support the buffer interface
Call stack:
File "<string>", line 1, in <module>
File "/usr/lib/python3.4/multiprocessing/spawn.py", line 106, in spawn_main
exitcode = _main(fd)
File "/usr/lib/python3.4/multiprocessing/spawn.py", line 119, in _main
return self._bootstrap()
File "/usr/lib/python3.4/multiprocessing/process.py", line 254, in _bootstrap
self.run()
File "/usr/lib/python3.4/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/serj/work/proxy_mult/proxy/connection_worker_process.py", line 70, in __call__
self._do_work(ipc_socket)
File "/home/serj/work/proxy_mult/proxy/connection_worker_process.py", line 76, in _do_work
logging.getLogger("clientIn").error("bacd")
Message: 'bacd'
I need a way to adapt the logging module to binary data.
The easiest solution would be to convert the bytes to a string representation and log that.
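For example, a minimal sketch of that approach; the logger name and the sample payload are just illustrative:
import logging

logging.basicConfig(filename='client.log', level=logging.DEBUG)

payload = b'bacd'
# Either decode, if you know (or can assume) the encoding ...
logging.getLogger('clientIn').error(payload.decode('latin-1'))
# ... or log an unambiguous text representation of the raw bytes.
logging.getLogger('clientIn').error('raw frame: %r', payload)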
Another possible way is to customize your logging. The documentation is a starting point, but you will need to look at examples of how other people have done it. Personally, I have only gone as far as using a slightly customized record, handler and formatter to let my logger use a SQLite backend.
There are multiple things you may need to modify (sorry for not being more specific, but I am also still a beginner when it comes to Python's logging module):
LogRecord - if you inherit from it, you will see that __init__(...) specifies an argument msg of type object. As the documentation states, msg is the event description message, possibly a format string with placeholders for variable data. Imho, if msg were supposed to be just a string, it would not have been typed as object. This is a place where you can investigate further, including the use of args. Inheriting is not really necessary in many cases, and a simple namedtuple would do just fine.
LoggerAdapter - this deals with the contextual information of a message, which can contain arbitrary data (from what I understand). You will need a custom adapter to work with that.
In addition you will probably have to use a custom Formatter and/or Handler. Worst case, you will have to use some arbitrary string message while passing the extra data (binary or otherwise) alongside it, as sketched right after this list.
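For instance, here is a hedged sketch of that worst case: keep a normal text message and pass the raw bytes alongside it via extra, with a small custom handler that appends them to a binary file (the attribute name raw and the file names are hypothetical, not part of the logging API):
import logging

class RawBytesHandler(logging.Handler):
    """Appends the bytes attached to a record (if any) to a binary file."""

    def __init__(self, path, level=logging.NOTSET):
        super().__init__(level)
        self._fp = open(path, 'ab')

    def emit(self, record):
        # Keys passed via extra={...} become attributes on the LogRecord.
        raw = getattr(record, 'raw', None)
        if isinstance(raw, bytes):
            self._fp.write(raw)
            self._fp.flush()

    def close(self):
        self._fp.close()
        super().close()

logger = logging.getLogger('clientIn')
logger.addHandler(RawBytesHandler('client_in.bin'))
logger.error('received frame', extra={'raw': b'bacd'})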
Here is a quick and dirty example, where I use a namedtuple to hold the extra data. Note that I was unable to pass just the extra data without an actual message, but you might be able to get around this issue if you implement an actual custom LogRecord. Also note that I am omitting the rest of my code, since this is just a demonstration of the customization:
from collections import namedtuple
import logging

# SummaryWriter comes from the PyTorch utilities mentioned below
from torch.utils.tensorboard import SummaryWriter

TensorBoardLogRecord = namedtuple('TensorBoardLogRecord', 'dtime lvl src msg tbdata')
TensorBoardLogRecordData = namedtuple('tbdata', 'image images scalar scalars custom_scalars')

class TensorBoardLoggerHandler(logging.Handler):
    def __init__(self, level=logging.INFO, tboard_dir='./runs') -> None:
        super().__init__(level)
        self.tblogger = SummaryWriter(tboard_dir)

    def emit(self, record: TensorBoardLogRecord) -> None:
        # For debugging, print record.__dict__ to see how the record is structured.
        # If the record contains TensorBoard data, add it to TB and flush.
        if hasattr(record, 'args'):
            # TODO Do something with the arguments
            ...

class TensorBoardLogger(logging.Logger):
    def __init__(self, name: str = 'TensorBoardLogger', level=logging.INFO, tboard_dir='./runs') -> None:
        super().__init__(name, level)
        self.handler = TensorBoardLoggerHandler(level, tboard_dir)
        self.addHandler(self.handler)
    ...

logging.setLoggerClass(TensorBoardLogger)
logger = logging.getLogger('TensorBoardLogger')
logger.info('Some message', TensorBoardLogRecordData(None, None, 10000, None, None))
What I am trying to do is add the ability (still a work in progress) for the logger to actually write a TensorBoard log entry (in my case via the PyTorch utilities module) that can be visualized with the tool inside the web browser. Yours doesn't need to be that complicated. This "solution" is mostly for the case where you can't find a way to override the msg handling.
I also found this repository - visual-logging - which uses the facilities of the Python logging module to handle images. Following the code provided by the repo, I was able to get
<LogRecord: TensorBoardLogger, 20, D:\Projects\remote-sensing-pipeline\log.py, 86, "TensorBoardLogRecord(image=None, images=None, scalar=1, scalars=None, custom_scalars=None)">
{'name': 'TensorBoardLogger', 'msg': TensorBoardLogRecord(image=None, images=None, scalar=1, scalars=None, custom_scalars=None), 'args': (), 'levelname': 'INFO', 'levelno': 20, 'pathname': 'D:\\Projects\\remote-sensing-pipeline\\log.py', 'filename': 'log.py', 'module': 'log', 'exc_info': None, 'exc_text': None, 'stack_info': None, 'lineno': 86, 'funcName': '<module>', 'created': 1645193616.9026344, 'msecs': 902.6343822479248, 'relativeCreated': 834.2068195343018, 'thread': 6508, 'threadName': 'MainThread', 'processName': 'MainProcess', 'process': 16208}
by just calling
logger = TensorBoardLogger(tboard_dir='./LOG')
logger.info(TensorBoardLogRecord(image=None, images=None, scalar=1, scalars=None, custom_scalars=None))
where I changed TensorBoardLogRecord to be
TensorBoardLogRecord = namedtuple('TensorBoardLogRecord', 'image images scalar scalars custom_scalars')
As you can see, the msg is my object TensorBoardLogRecord, which confirms both my statement above and the statement in the documentation: as long as you customize your logging properly, you can log whatever you want. In the case of the repo I've pointed to, the author is using images, which are numpy objects. Ultimately, though, those images are read from image files, so binary data is involved there as well.
Related
I am developing a script to create a record in a model of an Odoo instance. I need to run this model's methods on specific records. In my case, the method I need to run on a specific record doesn't take any parameter (just self). I want to know how I can run the method on a specific record of the model through an xmlrpc call from the client to the Odoo server. Below is how I tried to call the method and pass the id of a specific record.
xmlrpc_object.execute('test_db', user, 'admin', 'test.test', 'action_check_constraint', [record_id])
action_check_constraint checks some constraints on each record of the model and, if all the constraints pass, changes the state of the record or raises validation errors. But the above method call through xmlrpc raises the error below:
xmlrpc.client.Fault: <Fault cannot marshal None unless allow_none is enabled: 'Traceback (most recent call last):\n File "/home/ibrahim/workspace/odoo13/odoo/odoo/addons/base/controllers/rpc.py", line 60, in xmlrpc_1\n response = self._xmlrpc(service)\n File "/home/ibrahim/workspace/odoo13/odoo/odoo/addons/base/controllers/rpc.py", line 50, in _xmlrpc\n return dumps((result,), methodresponse=1, allow_none=False)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 968, in dumps\n data = m.dumps(params)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 501, in dumps\n dump(v, write)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 523, in __dump\n f(self, value, write)\n File "/usr/local/lib/python3.8/xmlrpc/client.py", line 527, in dump_nil\n raise TypeError("cannot marshal None unless allow_none is enabled")\nTypeError: cannot marshal None unless allow_none is enabled\n'>
> /home/ibrahim/workspace/scripts/automate/automate_record_creation.py(328)create_record()
Can anyone help with the correct and best way of calling a model's method (with no parameter except self) on a specific record through an xmlrpc client to the Odoo server?
That error is raised because the xmlrpc library does not allow None as a return value by default. But you can change that behaviour by simply allowing it.
The following line is from Odoo's external API documentation, extended to allow None as a return value:
models = xmlrpc.client.ServerProxy(
'{}/xmlrpc/2/object'.format(url), allow_none=True)
For more information about xmlrpc.client.ServerProxy, look into the Python documentation.
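A minimal sketch of the full call, following the Odoo external API documentation; the URL, database, credentials and record id below are placeholders, while the model and method names are the ones from the question:
import xmlrpc.client

url, db, username, password = 'http://localhost:8069', 'test_db', 'admin', 'admin'
record_id = 42  # id of the specific record to act on

common = xmlrpc.client.ServerProxy('{}/xmlrpc/2/common'.format(url), allow_none=True)
uid = common.authenticate(db, username, password, {})

models = xmlrpc.client.ServerProxy('{}/xmlrpc/2/object'.format(url), allow_none=True)

# The inner list holds the record ids that become `self` on the server side.
models.execute_kw(db, uid, password, 'test.test',
                  'action_check_constraint', [[record_id]])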
You can get this error if action_check_constraint does not return anything (i.e. it returns None by default).
Try to run the server with the log-level option set to debug_rpc_answer to get more details.
After a lot of searching and trying, I first used this fix to solve the error, but I think that fix is not best practice. Then I found OdooRPC, which does the same job but handles the above case, so there is no such error for model methods that return None. Using OdooRPC solved my problem, and I did what I needed to do with xmlrpc in Odoo.
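For completeness, a hedged sketch of the same call through OdooRPC; the host, credentials and record id are placeholders, and the model and method names are the ones from the question:
import odoorpc

odoo = odoorpc.ODOO('localhost', port=8069)
odoo.login('test_db', 'admin', 'admin')

# browse() returns a recordset; methods that return None are handled fine here.
record = odoo.env['test.test'].browse(42)
record.action_check_constraint()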
So I have tried to write a small config file for my script, which should specify an IP address, a port, and a URL that is built via interpolation from the former two variables. My config.ini looks like this:
[Client]
recv_url : http://%(recv_host):%(recv_port)/rpm_list/api/
recv_host = 172.28.128.5
recv_port = 5000
column_list = Name,Version,Build_Date,Host,Release,Architecture,Install_Date,Group,Size,License,Signature,Source_RPM,Build_Host,Relocations,Packager,Vendor,URL,Summary
In my script I parse this config file as follows:
config = SafeConfigParser()
config.read('config.ini')
column_list = config.get('Client', 'column_list').split(',')
URL = config.get('Client', 'recv_url')
If I run my script, this results in:
Traceback (most recent call last):
File "server_side_agent.py", line 56, in <module>
URL = config.get('Client', 'recv_url')
File "/usr/lib64/python2.7/ConfigParser.py", line 623, in get
return self._interpolate(section, option, value, d)
File "/usr/lib64/python2.7/ConfigParser.py", line 691, in _interpolate
self._interpolate_some(option, L, rawval, section, vars, 1)
File "/usr/lib64/python2.7/ConfigParser.py", line 716, in _interpolate_some
"bad interpolation variable reference %r" % rest)
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
I have tried debugging, which gave me one more line of error output:
...
ConfigParser.InterpolationSyntaxError: bad interpolation variable reference '%(recv_host):%(recv_port)/rpm_list/api/'
Exception AttributeError: "'NoneType' object has no attribute 'path'" in <function _remove at 0x7fc4d32c46e0> ignored
Here I am stuck. I don't know where this _remove function is supposed to be... I tried searching for what the message is supposed to tell me, but quite frankly I have no idea. So...
Is there something wrong with my code?
What does '< function _remove at ... >' mean?
There was indeed a mistake in my config.ini file. I did not treat the s at the end of %(...)s as a necessary syntax element, but it is required: it is the string conversion type, just as in %-style string formatting.
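For reference, here is the corrected [Client] section with the trailing s added to both interpolation references (the remaining options stay unchanged):
[Client]
recv_url : http://%(recv_host)s:%(recv_port)s/rpm_list/api/
recv_host = 172.28.128.5
recv_port = 5000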
My .ini file for starting the Python Pyramid server had a similar problem.
And to use a variable from the .env file, I needed to add the following: %%(VARIABLE_FOR_EXAMPLE)s
But I ran into other problems, and I solved them with this: How can I use a system environment variable inside a pyramid ini file?
I managed to read a module generic's value with cocotb without a problem, but I can't manage to write it.
My VHDL generic is:
...
generic (
...
C_M00_AXI_BURST_LEN : integer := 16;
...
)
I can read it in cocotb:
self.dut.log.info("C_M00_AXI_BURST_LEN 0x{:x}".format(
int(self.dut.c_m00_axi_burst_len)))
But if I try to change it:
self.dut.c_m00_axi_burst_len = 32
I get this Python error:
Send raised exception: Not permissible to set values on object c_m00_axi_burst_len
File "/opt/cocotb/cocotb/decorators.py", line 197, in send
return self._coro.send(value)
File "/usr/local/projects/axi_pattern_tester/vivado_ip/axi_pattern_tester_1.0/cocotb/test_axi_pattern_tester_v1_0.py", line 165, in axi4_master_test
dutest.print_master_generics()
File "/usr/local/projects/axi_pattern_tester/vivado_ip/axi_pattern_tester_1.0/cocotb/test_axi_pattern_tester_v1_0.py", line 86, in print_master_generics
self.dut.c_m00_axi_burst_len = 32
File "/opt/cocotb/cocotb/handle.py", line 239, in __setattr__
return getattr(self, name)._setcachedvalue(value)
File "/opt/cocotb/cocotb/handle.py", line 378, in _setcachedvalue
raise TypeError("Not permissible to set values on object %s" % (self._name))
Is there a way to do it using GHDL as the simulator?
In fact, user1155120, Paebbels and scary_jeff answered the question: it's not possible.
But it is possible to handle the configuration differently to solve this problem. The VHDL generic value can be set in the Makefile by adding the "-g" option to the SIM_ARGS parameter:
SIM_ARGS+=-gC_M00_AXI_BURST_LEN=16
This value can then be read from the cocotb "dut" object like any other signal and used as a simulation parameter:
C_M00_AXI_BURST_LEN = int(dut.C_M00_AXI_BURST_LEN.value)
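For context, a hedged sketch of where that option sits in a cocotb Makefile; the file and entity names are guesses based on the paths in the question, and the include line differs between cocotb versions:
SIM ?= ghdl
TOPLEVEL_LANG ?= vhdl

VHDL_SOURCES += $(PWD)/hdl/axi_pattern_tester_v1_0.vhd
TOPLEVEL = axi_pattern_tester_v1_0
MODULE = test_axi_pattern_tester_v1_0

# Override the generic at elaboration/run time instead of writing to it from Python
SIM_ARGS += -gC_M00_AXI_BURST_LEN=16

include $(shell cocotb-config --makefiles)/Makefile.sim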
I'm trying to make a simple program to check and show unread messages, but I have a problem while trying to get the subject and sender address.
For the sender I've tried this method:
import email
m = server.fetch([a], ['RFC822'])
#a is variable with email id
msg = email.message_from_string(m[a], ['RFC822'])
print msg['from']
from email.utils import parseaddr
print parseaddr(msg['from'])
But it didn't work. I was getting this error:
Traceback (most recent call last):
File "C:/Users/ExampleUser/AppData/Local/Programs/Python/Python35-32/myprogram.py", line 20, in <module>
msg = email.message_from_string(m[a], ['RFC822'])
File "C:\Users\ExampleUser\AppData\Local\Programs\Python\Python35-32\lib\email\__init__.py", line 38, in message_from_string
return Parser(*args, **kws).parsestr(s)
File "C:\Users\ExampleUser\AppData\Local\Programs\Python\Python35-32\lib\email\parser.py", line 68, in parsestr
return self.parse(StringIO(text), headersonly=headersonly)
TypeError: initial_value must be str or None, not dict
I also used this:
print(server.fetch([a], ['BODY[HEADER.FIELDS (FROM)]']))
but the result was like:
defaultdict(<class 'dict'>, {410: {b'BODY[HEADER.FIELDS ("FROM")]': b'From: "=?utf-8?q?senderexample?=" <sender#example.com>\r\n\r\n', b'SEQ': 357}, 357: {b'SEQ': 357, b'FLAGS': (b'\\Seen',)}})
Is there a way to repair the first method, or to make the result of the second look like:
Sender Example <sender#example.com>
?
I also don't know how to get the email subject, but I guess it works the same way as the sender, just with other arguments. So the only thing I need is those arguments.
You should start by reviewing various IMAP libraries which are available for Python and use one which fits your needs. There are multiple ways of fetching the data you need in IMAP (the protocol), and by extension also in Python (and its libraries).
For example, the most straightforward way of getting the data you need in the IMAP protocol is by fetching the ENVELOPE item. You will still have to decode the RFC 2047 encoding of the non-ASCII data (that's the =?utf-8?q?... bit you're seeing), but at least it saves you from parsing the RFC 5322 header structure with its multiple decades of compatibility syntax rules.
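To illustrate, a minimal sketch assuming the IMAPClient library (which the defaultdict-style fetch() result in the question suggests) and an already connected, logged-in server object with a folder selected:
from email.header import decode_header, make_header

def sender_and_subject(server, uid):
    """Fetch the ENVELOPE of one message and return (sender, subject) strings."""
    envelope = server.fetch([uid], ['ENVELOPE'])[uid][b'ENVELOPE']

    # Addresses come pre-parsed; from_ is a tuple of Address namedtuples.
    addr = envelope.from_[0]
    # The display name may still be RFC 2047 encoded, so decode it.
    name = str(make_header(decode_header(addr.name.decode()))) if addr.name else ''
    sender = '{} <{}@{}>'.format(name, addr.mailbox.decode(), addr.host.decode()).strip()

    # The subject is bytes and may also be RFC 2047 encoded.
    subject = str(make_header(decode_header(envelope.subject.decode()))) if envelope.subject else ''
    return sender, subject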
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement:
Traceback (most recent call last):
File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit
msg = self.format(record)
File "/usr/lib/python2.6/logging/__init__.py", line 648, in format
return fmt.format(record)
File "/usr/lib/python2.6/logging/__init__.py", line 436, in format
record.message = record.getMessage()
File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage
msg = msg % self.args
TypeError: not all arguments converted during string formatting
I'm only starting to use Python's logging module, so maybe I am overlooking something obvious. I'm not sure if the stack trace is useless because I am using greenlets, or if this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
Rather than editing installed Python code, you can also find the errors like this:
def handleError(record):
    raise RuntimeError(record)

handler.handleError = handleError
where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
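For context, here is a self-contained sketch of how that looks end to end; the handler, logger name and the deliberately malformed call are all illustrative:
import logging

def handleError(record):
    raise RuntimeError(record)

handler = logging.StreamHandler()
handler.handleError = handleError

logger = logging.getLogger('example')
logger.addHandler(handler)

# Bad call: one extra argument but no placeholder for it, so formatting fails.
# Instead of the unhelpful traceback from emit(), a RuntimeError is now raised
# right here, pointing at the offending line.
logger.error('value:', 42)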
The logging module is designed to stop bad log messages from killing the rest of the code, so the emit method catches errors and passes them to a method handleError. The easiest thing for you to do would be to temporarily edit /usr/lib/python2.6/logging/__init__.py, and find handleError. It looks something like this:
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2],
                                      None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                record.filename, record.lineno))
        except IOError:
            pass  # see issue 5971
        finally:
            del ei
Now temporarily edit it. Inserting a simple raise at the start should ensure the error gets propagated up through your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
This is not really an answer to the question, but hopefully it will help other beginners with the logging module, like me.
My problem was that I had replaced all occurrences of print with logging.info, so a valid line like print('a', a) became logging.info('a', a), when it should have been logging.info('a %s' % a) instead.
This was also hinted at in How to traceback logging errors?, but it doesn't come up easily when searching.
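For reference, a small illustration of the usual fix: let the logging module do the (lazy) formatting instead of passing extra arguments without placeholders:
import logging

logging.basicConfig(level=logging.INFO)
a = 42
logging.info('a %s', a)    # lazy formatting, applied only if the message is actually emitted
logging.info('a %s' % a)   # also works, but formats even when the message is filtered out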
Alternatively you can create a formatter of your own, but then you have to include it everywhere.
import logging

class DebugFormatter(logging.Formatter):
    def format(self, record):
        try:
            return super(DebugFormatter, self).format(record)
        except:
            # Dump the raw record fields so the offending call site can be found
            print "Unable to format record"
            print "record.filename:", record.filename
            print "record.lineno:", record.lineno
            print "record.msg:", record.msg
            print "record.args:", record.args
            raise

FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s'
formatter = DebugFormatter(FORMAT)

handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)

logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
I had the same problem.
Such a traceback can also arise from a wrong attribute name in the format string. So when creating a format for a log file, double-check the attribute names against the Python documentation: https://docs.python.org/3/library/logging.html#formatter-objects