Can someone explain the marked line of this method from /usr/lib/python2.7/logging/__init__.py?
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """
    Implementation of showwarnings which redirects to logging, which will first
    check to see if the file parameter is None. If a file is specified, it will
    delegate to the original warnings implementation of showwarning. Otherwise,
    it will call warnings.formatwarning and will log the resulting string to a
    warnings logger named "py.warnings" with level logging.WARNING.
    """
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno, line)
        logger = getLogger("py.warnings")
        if not logger.handlers:
            logger.addHandler(NullHandler())
        logger.warning("%s", s)  # <------ I don't understand this line
Why is the last line not this:
logger.warning(s)
Because logger.warning("%s", s) follows the documented calling convention.
See https://docs.python.org/2/library/logging.html
Logger.warning(msg, *args, **kwargs)
Logs a message with level WARNING on this logger. The arguments are interpreted as for debug().
In other words, msg is treated as a printf-style format string and the remaining arguments fill its placeholders, so the warning text is passed as an argument rather than being used as the format string itself. This convention is well known; it is the same pattern the C *printf() family of functions uses.
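To make the convention concrete, here is a minimal sketch (the logger name "demo" is just an example):

import logging

logging.basicConfig(level=logging.WARNING)
logger = logging.getLogger("demo")

# A warning text that happens to contain a %-style conversion specifier.
s = "disk is 90%s full"

# Documented convention: the first argument is the format string and the
# warning text is passed as an argument, so its contents are never
# interpreted as formatting directives.
logger.warning("%s", s)

# This also works here, because logging only applies %-formatting when extra
# arguments are supplied, but it uses the warning text as the format string,
# which becomes fragile the moment someone adds an argument to the call.
logger.warning(s)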
I have a dilemma writing a context/resource manager wrapper for a file object which passes Pylint, for use with with: Do I put the wrapped open call in __init__, or in __enter__?
Conceptually, this is a class that wraps open by accepting either a filename, the string -, or a file object (such as sys.stdin) and does 'the right thing'. In other words, if it's a filename other than -, it opens the file and manages it as a resource; otherwise, it either chooses a default file object (expected to be sys.stdin or sys.stdout) if the filename is -, or if it is a file object, uses that file object unchanged.
I see two possibilities, neither of which is working out right now: the wrapped open goes in the __init__ constructor, or it goes in the __enter__ context-management method. If I put it in the __init__ constructor, as examples on SO suggest, Pylint -- which I must pass -- fails the open call with R1732: Consider using 'with' for resource-allocating operations (consider-using-with). If I put the open in the __enter__ method, Pylint is happy, but I am not sure this is correct practice; moreover, in one use case I need the file object in order to initialize a base class (and Pylint won't let me call the base class constructor in the __enter__ method).
Some example code is in order. Here is code that opens in the constructor:
class ManagedFile:
    '''Manage a file, which could be an unopened filename, could be
    '-' for stdin/stdout, or could be an existing filehandle'''

    def __init__(self, file_in, handle_default, open_kwargs):
        ''' Open a file if given a filename, or handle_default if -,
        or if it's a file object already, just pass it through.
        :param file_in: Valid filename, '-', or file-like object
        :param handle_default: What to return if file is '-'
        :param open_kwargs: Dictionary of options to pass to open() if used
        '''
        self.file_handle = None
        self.file_in = None
        if isinstance(file_in, io.IOBase):
            self.file_handle = file_in
        elif isinstance(file_in, str):
            if file_in is None or file_in == "-":
                self.file_handle = handle_default
            else:
                self.file_handle = open(file_in, **open_kwargs)
            self.file_in = file_in
        else:
            raise TypeError('File specified must be string or file object')

    def __enter__(self):
        self.file_handle.__enter__()
        return self.file_handle

    def __exit__(self, err_type, err_value, traceback):
        self.file_handle.__exit__(err_type, err_value, traceback)
        self.file_handle.close()
        self.file_in = None

    def handle(self):
        '''Return handle of file that was opened'''
        return self.file_handle
And here is how I would do it with the open call in the __enter__ method:
class ManagedFile:
    '''Manage a file, which could be an unopened filename, could be
    '-' for stdin/stdout, or could be an existing filehandle'''

    def __init__(self, file_in, handle_default, open_kwargs):
        ''' Open a file if given a filename, or handle_default if -,
        or if it's a file object already, just pass it through.
        :param file_in: Valid filename, '-', or file-like object
        :param handle_default: What to return if file is '-'
        :param open_kwargs: Dictionary of options to pass to open() if used
        :return: Managed file object
        '''
        self.managed = False
        self.file_handle = None
        self.open_kwargs = {'mode': 'r'}
        self.file_in = None
        if isinstance(file_in, io.IOBase):
            self.file_handle = file_in
        elif isinstance(file_in, str):
            if file_in is None or file_in == "-":
                self.file_handle = handle_default
            else:
                self.file_in = file_in
                self.open_kwargs = open_kwargs
                self.managed = True
        else:
            raise TypeError('File specified must be string or file object')

    def __enter__(self):
        if self.managed:
            self.file_handle = open(self.file_in, **self.open_kwargs)
        self.file_handle.__enter__()
        return self.file_handle

    def __exit__(self, err_type, err_value, traceback):
        self.file_handle.__exit__(err_type, err_value, traceback)
        self.file_handle.close()
        self.managed = False
        self.file_in = None

    def handle(self):
        '''Return handle of file that was opened'''
        return self.file_handle
I've pored over many SO questions and answers, but haven't been able to triangulate the exact answer I'm looking for, in particular one that accounts for why Pylint flags an error when I ostensibly do the right thing.
Undoubtedly I've committed some Python errors in this code, so any other ancillary corrections would be welcome. Other ideas are also welcome, but please don't get too fancy on me.
Ideally the class would itself behave as a full-fledged file object, but right now I'm focusing on something simple: Something that just manages a file handle (i.e. a reference to a regular file object). Extra gratitude if someone can provide some hints on turning it into a file object.
Python version is 3.8.10; platform is Linux.
If you don't need this to be a class, something like this feels a lot simpler.
import contextlib
import io
import sys

def open_or_stdout(file_or_path, mode):
    """
    Returns a context manager with either the file path opened,
    the file object passed through or standard output
    """
    if isinstance(file_or_path, io.IOBase):
        # Input is already a file-like object. Just pass it through
        return file_or_path
    if file_or_path == "-":
        return contextlib.nullcontext(sys.stdout)
    return open(file_or_path, mode=mode)
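Usage would then look something like this (a sketch; the file name is just an example):

with open_or_stdout("results.txt", "w") as out:
    out.write("this goes to a regular file\n")

with open_or_stdout("-", "w") as out:
    out.write("this goes to sys.stdout\n")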
If you want to stick with the class, I think you need to return the value of self.file_handle.__enter__() in your implementation of __enter__() rather than self.file_handle.
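In code, that change would look something like this:

    def __enter__(self):
        # Return whatever the wrapped file's __enter__ returns (for ordinary
        # files that is the file object itself).
        return self.file_handle.__enter__()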
I'm checking a list of around 3000 Telegram chats to retrieve the number of chat members in each chat using the get_chat_members_count method.
At some point I'm hitting a flood limit and getting temporarily banned by Telegram BOT.
Traceback (most recent call last):
  File "C:\Users\alexa\Desktop\ico_icobench_2.py", line 194, in <module>
    ico_tel_memb = bot.get_chat_members_count('#' + ico_tel_trim, timeout=60)
  File "C:\Python36\lib\site-packages\telegram\bot.py", line 60, in decorator
    result = func(self, *args, **kwargs)
  File "C:\Python36\lib\site-packages\telegram\bot.py", line 2006, in get_chat_members_count
    result = self._request.post(url, data, timeout=timeout)
  File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 278, in post
    **urlopen_kwargs)
  File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 208, in _request_wrapper
    message = self._parse(resp.data)
  File "C:\Python36\lib\site-packages\telegram\utils\request.py", line 168, in _parse
    raise RetryAfter(retry_after)
telegram.error.RetryAfter: Flood control exceeded. Retry in 85988 seconds
The python-telegram-bot wiki gives a detailed explanation and example on how to avoid flood limits here.
However, I'm struggling to implement their solution and I hope someone here has more knowledge of this than myself.
I have literally copied and pasted their example and can't get it to work, no doubt because I'm new to Python. I'm guessing I'm missing some definitions, but I'm not sure which. Here is the code, and after that the first error I'm receiving. Obviously the TOKEN needs to be replaced with your token.
import telegram.bot
from telegram.ext import messagequeue as mq

class MQBot(telegram.bot.Bot):
    '''A subclass of Bot which delegates send method handling to MQ'''
    def __init__(self, *args, is_queued_def=True, mqueue=None, **kwargs):
        super(MQBot, self).__init__(*args, **kwargs)
        # below 2 attributes should be provided for decorator usage
        self._is_messages_queued_default = is_queued_def
        self._msg_queue = mqueue or mq.MessageQueue()

    def __del__(self):
        try:
            self._msg_queue.stop()
        except:
            pass
        super(MQBot, self).__del__()

    @mq.queuedmessage
    def send_message(self, *args, **kwargs):
        '''Wrapped method would accept new `queued` and `isgroup`
        OPTIONAL arguments'''
        return super(MQBot, self).send_message(*args, **kwargs)

if __name__ == '__main__':
    from telegram.ext import MessageHandler, Filters
    import os
    token = os.environ.get('TOKEN')
    # for test purposes limit global throughput to 3 messages per 3 seconds
    q = mq.MessageQueue(all_burst_limit=3, all_time_limit_ms=3000)
    testbot = MQBot(token, mqueue=q)
    upd = telegram.ext.updater.Updater(bot=testbot)

    def reply(bot, update):
        # tries to echo 10 msgs at once
        chatid = update.message.chat_id
        msgt = update.message.text
        print(msgt, chatid)
        for ix in range(10):
            bot.send_message(chat_id=chatid, text='%s) %s' % (ix + 1, msgt))

    hdl = MessageHandler(Filters.text, reply)
    upd.dispatcher.add_handler(hdl)
    upd.start_polling()
The first error I get is:
Traceback (most recent call last):
  File "C:\Users\alexa\Desktop\z test.py", line 34, in <module>
    testbot = MQBot(token, mqueue=q)
  File "C:\Users\alexa\Desktop\z test.py", line 9, in __init__
    super(MQBot, self).__init__(*args, **kwargs)
  File "C:\Python36\lib\site-packages\telegram\bot.py", line 108, in __init__
    self.token = self._validate_token(token)
  File "C:\Python36\lib\site-packages\telegram\bot.py", line 129, in _validate_token
    if any(x.isspace() for x in token):
TypeError: 'NoneType' object is not iterable
The second issue I have is how to use wrappers and decorators with get_chat_members_count.
The code I have added to the example is:
    @mq.queuedmessage
    def get_chat_members_count(self, *args, **kwargs):
        return super(MQBot, self).get_chat_members_count(*args, **kwargs)
But nothing happens and I don't get my count of chat members. I'm also not saying which chat I need to count, so it's not surprising I'm getting nothing back, but where am I supposed to put the Telegram chat id?
You are getting this error because MQBot receives an empty token. For some reason, it does not raise a descriptive exception but instead crashes unexpectedly.
So why is token empty? It seems that you are using os.environ.get incorrectly. os.environ is a dictionary, and its get method lets you access the dict's contents safely. According to the docs:
get(key[, default])
Return the value for key if key is in the dictionary, else default. If default is not given, it defaults to None, so that this method never raises a KeyError.
Judging from your question, in the line token = os.environ.get('TOKEN') you are passing the token itself as the key. Instead, you should pass the name of the environment variable that contains your token.
You can fix this either by rewriting that part as token = '<your actual token>' or by setting the environment variable correctly and reading it from os.environ.get under the correct name.
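A minimal sketch of the two options (the token value is a placeholder):

import os

# Option 1: hard-code the token for a quick test (placeholder value shown).
# token = '123456789:ABCdefGhIJKlmNoPQRstuVWXyz'

# Option 2: set the environment variable first (set TOKEN=... on Windows,
# export TOKEN=... on Unix) and then read it by its *name*:
token = os.environ.get('TOKEN')

if token is None:
    raise RuntimeError('TOKEN environment variable is not set')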
In my application, I'm using logging.captureWarnings(True) to make sure any DeprecationWarning gets logged in the normal application log.
This works well, but results in logs like:
WARNING [py.warnings] c:\some\path...
It seems from the documentation that:
If capture is True, warnings issued by the warnings module will be
redirected to the logging system. Specifically, a warning will be
formatted using warnings.formatwarning() and the resulting string
logged to a logger named 'py.warnings' with a severity of WARNING.
So that is all to be expected. But I'd like to change the logger associated with such warnings (to use one my application provides, so that when looking at the logs one can tell where the DeprecationWarning comes from).
Is there a way to change the associated logger?
I just did some more investigation and found a perfect way to achieve that:
Looking at the source code for logging.captureWarnings():
def captureWarnings(capture):
    """
    If capture is true, redirect all warnings to the logging package.
    If capture is False, ensure that warnings are not redirected to logging
    but to their original destinations.
    """
    global _warnings_showwarning
    if capture:
        if _warnings_showwarning is None:
            _warnings_showwarning = warnings.showwarning
            warnings.showwarning = _showwarning
    else:
        if _warnings_showwarning is not None:
            warnings.showwarning = _warnings_showwarning
            _warnings_showwarning = None
It seems one can just change warnings.showwarning to point to another callable that will do whatever logging job you want (or anything else for that matter).
The expected prototype for warnings.showwarning seems to be:
def _show_warning(message, category, filename, lineno, file=None, line=None):
    """Hook to write a warning to a file; replace if you like."""
    if file is None:
        file = sys.stderr
    try:
        file.write(formatwarning(message, category, filename, lineno, line))
    except IOError:
        pass  # the file (probably stderr) is invalid - this warning gets lost.
It seems logging.captureWarnings() actually sets the callable to logging._showwarning:
def _showwarning(message, category, filename, lineno, file=None, line=None):
    """
    Implementation of showwarnings which redirects to logging, which will first
    check to see if the file parameter is None. If a file is specified, it will
    delegate to the original warnings implementation of showwarning. Otherwise,
    it will call warnings.formatwarning and will log the resulting string to a
    warnings logger named "py.warnings" with level logging.WARNING.
    """
    if file is not None:
        if _warnings_showwarning is not None:
            _warnings_showwarning(message, category, filename, lineno, file, line)
    else:
        s = warnings.formatwarning(message, category, filename, lineno, line)
        logger = getLogger("py.warnings")
        if not logger.handlers:
            logger.addHandler(NullHandler())
        logger.warning("%s", s)
When I have lots of different modules using the standard python logging module, the following stack trace does little to help me find out where, exactly, I had a badly formed log statement:
Traceback (most recent call last):
  File "/usr/lib/python2.6/logging/__init__.py", line 768, in emit
    msg = self.format(record)
  File "/usr/lib/python2.6/logging/__init__.py", line 648, in format
    return fmt.format(record)
  File "/usr/lib/python2.6/logging/__init__.py", line 436, in format
    record.message = record.getMessage()
  File "/usr/lib/python2.6/logging/__init__.py", line 306, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting
I'm only starting to use Python's logging module, so maybe I am overlooking something obvious. I'm not sure whether the stack trace is useless because I am using greenlets, or whether this is normal for the logging module, but any help would be appreciated. I'd be willing to modify the source, anything to make the logging library actually give a clue as to where the problem lies.
Rather than editing installed python code, you can also find the errors like this:
def handleError(record):
    raise RuntimeError(record)

handler.handleError = handleError
where handler is one of the handlers that is giving the problem. Now when the format error occurs you'll see the location.
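For example, if you are not sure which handler is involved, something like this patches them all (a sketch; the root logger is just a guess at where your handlers live):

import logging

def handleError(record):
    # Blow up with the offending record instead of printing and carrying on,
    # so the traceback points at the bad logging call.
    raise RuntimeError(record)

# Patch every handler on the root logger; adjust to whichever logger your
# application actually attaches handlers to.
for h in logging.getLogger().handlers:
    h.handleError = handleError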
The logging module is designed to stop bad log messages from killing the rest of the code, so the emit method catches errors and passes them to a method handleError. The easiest thing for you to do would be to temporarily edit /usr/lib/python2.6/logging/__init__.py, and find handleError. It looks something like this:
def handleError(self, record):
    """
    Handle errors which occur during an emit() call.

    This method should be called from handlers when an exception is
    encountered during an emit() call. If raiseExceptions is false,
    exceptions get silently ignored. This is what is mostly wanted
    for a logging system - most users will not care about errors in
    the logging system, they are more interested in application errors.
    You could, however, replace this with a custom handler if you wish.
    The record which was being processed is passed in to this method.
    """
    if raiseExceptions:
        ei = sys.exc_info()
        try:
            traceback.print_exception(ei[0], ei[1], ei[2],
                                      None, sys.stderr)
            sys.stderr.write('Logged from file %s, line %s\n' % (
                             record.filename, record.lineno))
        except IOError:
            pass  # see issue 5971
        finally:
            del ei
Now temporarily edit it. Inserting a simple raise at the start should ensure the error gets propagated up through your code instead of being swallowed. Once you've fixed the problem, just restore the logging code to what it was.
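The temporary edit would look something like this (handleError is called from inside emit()'s except clause, so a bare raise re-raises the original formatting error):

def handleError(self, record):
    raise  # temporary hack: re-raise the original error instead of printing it
    # ... leave the rest of the original method below; it is simply never reached ...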
It's not really an answer to the question, but hopefully it will help other beginners with the logging module, like me.
My problem was that I replaced all occurrences of print with logging.info, so a valid line like print('a', a) became logging.info('a', a), when it should have been logging.info('a %s' % a) (or, using logging's own deferred formatting, logging.info('a %s', a)) instead.
This was also hinted at in "How to traceback logging errors?", but it doesn't come up easily when searching.
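To spell the mistake out (a minimal sketch):

import logging
logging.basicConfig(level=logging.INFO)

a = 42

# Wrong: 'a' is taken as the format string and the extra argument has no
# placeholder to bind to, which is what triggers "TypeError: not all
# arguments converted during string formatting" inside the handler.
logging.info('a', a)

# Right: give the argument a placeholder ...
logging.info('a %s', a)
# ... or format the string yourself before handing it to logging:
logging.info('a %s' % a)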
Alternatively you can create a formatter of your own, but then you have to include it everywhere.
class DebugFormatter(logging.Formatter):
    def format(self, record):
        try:
            return super(DebugFormatter, self).format(record)
        except:
            print "Unable to format record"
            print "record.filename ", record.filename
            print "record.lineno ", record.lineno
            print "record.msg ", record.msg
            print "record.args: ", record.args
            raise

FORMAT = '%(levelname)s %(filename)s:%(lineno)d %(message)s'
formatter = DebugFormatter(FORMAT)
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
handler.setFormatter(formatter)
logger = logging.getLogger(__name__)
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
I had the same problem. Such a traceback can also arise from a wrong attribute name in the log format, so when creating a format for a log file, check the attribute names against the Python documentation: https://docs.python.org/3/library/logging.html#formatter-objects
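For example, a misspelled attribute name fails in the same place, inside the handler's format() call, and is reported through handleError just like the error above (the typo 'mesage' is deliberate):

import logging

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(levelname)s %(mesage)s'))  # typo: should be %(message)s
logging.getLogger().addHandler(handler)

# The formatting error (KeyError: 'mesage') surfaces only when a record is
# emitted, and logging reports it via handleError rather than raising it here.
logging.getLogger().warning('hello')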
I'm issuing lots of warnings in a validator, and I'd like to suppress everything in stdout except the message that is supplied to warnings.warn().
I.e., now I see this:
./file.py:123: UserWarning: My looong warning message
some Python code
I'd like to see this:
My looong warning message
Edit 2: Overriding warnings.showwarning() turned out to work:
def _warning(
        message,
        category=UserWarning,
        filename='',
        lineno=-1):
    print(message)

...

warnings.showwarning = _warning
warnings.warn('foo')
There is always monkeypatching:
import warnings

def custom_formatwarning(msg, *args, **kwargs):
    # ignore everything except the message
    return str(msg) + '\n'

warnings.formatwarning = custom_formatwarning
warnings.warn("achtung")
Monkeypatch warnings.showwarning() with your own custom function.
Use the logging module instead of warnings.
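For example, with a message-only format the logging module prints exactly what the question asks for (a minimal sketch):

import logging

logging.basicConfig(format='%(message)s', level=logging.WARNING)
logging.warning('My looong warning message')   # prints just: My looong warning message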
Here's what I'm doing to omit just the source code line. This is by and large as suggested by the documentation, but it was a bit of a struggle to figure out what exactly to change. (In particular, I tried in various ways to keep the source line out of showwarning but couldn't get it to work the way I wanted.)
# Force warnings.warn() to omit the source code line in the message
formatwarning_orig = warnings.formatwarning
warnings.formatwarning = lambda message, category, filename, lineno, line=None: \
    formatwarning_orig(message, category, filename, lineno, line='')
Just passing line=None would cause Python to use filename and lineno to figure out a value for line automagically, but passing an empty string instead fixes that.