Getting error "Maximum recursion depth exceeded while calling a Python object" - python

I just received an error and I don't understand what is causing it.
Traceback (most recent call last):
File "C:\Users\utils.py", line 657, in script
logger.warn('Wopsiy! No word found!')
File "C:\Users\utils.py", line 30, in warn
sys.stdout.write("{}{} {}".format(self.__timestamp(), '[' + self.name + '] -', colored(text, "yellow")))
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 40, in write
self.__convertor.write(text)
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 141, in write
self.write_and_convert(text)
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 166, in write_and_convert
self.write_plain_text(text, cursor, start)
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 174, in write_plain_text
self.wrapped.write(text[start:end])
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 40, in write
self.__convertor.write(text)
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 141, in write
self.write_and_convert(text)
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 169, in write_and_convert
self.write_plain_text(text, cursor, len(text))
File "C:\Program Files\Python36\lib\site-packages\colorama\ansitowin32.py", line 174, in write_plain_text
self.wrapped.write(text[start:end])
As far as I can tell, it has something to do with the logger that I created myself, which looks like this:
Utils class
from datetime import datetime
from termcolor import cprint, colored
import sys
import colorama

class Logger:
    def __init__(self, name):
        colorama.init()
        self.name = name

    @staticmethod
    def __timestamp():
        timestamp = str(datetime.now().strftime("[%H:%M:%S.%f")[:-3] + "]")
        return timestamp

    def warn(self, text):
        sys.stdout.write("{}{} {}".format(self.__timestamp(), '[' + self.name + '] -', colored(text, "yellow")))
        sys.stdout.write("\n")
        sys.stdout.flush()
I also put together a simplified version of the code that uses it:
import time

from utils import Logger

logger = Logger('Script')

def main():
    logger = Logger("product_info")
    word = ['Nope', 'There', 'Is', 'No', 'Word']
    while True:
        try:
            for _ in infinity():
                if 'Hello' in word:
                    print('WORKS!!')
                else:
                    logger.warn('Wopsiy! No word found!')
                    time.sleep(1)
        except Exception as err:
            print(err)
            time.sleep(1)
            continue
The problem is that after a while it gives me "maximum recursion depth exceeded while calling a Python object", but I only see it because I print the exception in except Exception as err:; in the console it produces the traceback shown at the top.
I have no idea what the cause of it is.
Edit
from datetime import datetime
from termcolor import cprint, colored
import sys
import colorama

colorama.init()

class Logger:
    def __init__(self, name):
        self.name = name

    @staticmethod
    def __timestamp():
        timestamp = str(datetime.now().strftime("[%H:%M:%S.%f")[:-3] + "]")
        return timestamp

    def warn(self, text):
        sys.stdout.write("{}{} {}".format(self.__timestamp(), '[' + self.name + '] -', colored(text, "yellow")))
        sys.stdout.write("\n")
        sys.stdout.flush()

As I understood from the discussion in the comments to the question, you may create multiple instances of the Logger class during the execution of your script. Each creation of a Logger invokes colorama.init(). Each call to colorama.init() forces Colorama to replace sys.stdout and sys.stderr streams with colorama-wrapped versions of them.
After more and more calls to colorama.init your streams turn into fat onions of lots of (uselessly repeated) colorama wrapper layers, and a single call to print has to get passed recursively from layer to layer until it reaches the actual sys.stdout.
When the number of layers exceeds the maximum allowed stack depth, you get your exception. This situation is also referenced in this open colorama issue.
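A toy sketch makes the layering concrete. The class below is not colorama's real wrapper, just a stand-in that forwards every write one layer down, so each "init" adds one stack frame to every subsequent write:

```python
import io
import sys

class WrappedStream:
    """Toy stand-in for colorama's stream wrapper (not the real class)."""
    def __init__(self, stream):
        self.wrapped = stream

    def write(self, text):
        self.wrapped.write(text)  # forward one layer down: one extra stack frame

base = io.StringIO()
stream = base
for _ in range(50):        # imagine 50 Logger() constructions, each calling init()
    stream = WrappedStream(stream)
stream.write("hello")      # still works, but traverses 50 frames
print(base.getvalue())     # hello

deep = stream
for _ in range(sys.getrecursionlimit()):
    deep = WrappedStream(deep)
try:
    deep.write("boom")     # more layers than the interpreter's stack allows
except RecursionError:
    print("maximum recursion depth exceeded")
```

With more layers than the recursion limit, a single write blows the stack, which is exactly the exception reported above.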
The easiest way to fix the problem would be to move colorama.init() out of the Logger constructor and call it once at module level instead:
import colorama
colorama.init()

Related

Why do we get a NameError when trying to use the SharedMemoryManager (python 3.8) as a replacement for the BaseManager?

Python 3.8 introduces new shared memory features. We are trying to use the SharedMemoryManager, and a NameError is thrown.
I thought that we might do something wrong in our complex scenario, so I broke it down using python documentation snippets.
try:
    # python >= 3.8
    from multiprocessing.managers import SharedMemoryManager as Manager
except ImportError:
    # python < 3.8
    from multiprocessing.managers import BaseManager as Manager

class MathsClass:
    def add(self, x, y):
        return x + y

    def mul(self, x, y):
        return x * y

class MyManager(Manager):
    pass

MyManager.register('Maths', MathsClass)

if __name__ == '__main__':
    with MyManager() as manager:
        maths = manager.Maths()
        print(maths.add(4, 3))  # prints 7
        print(maths.mul(7, 8))  # prints 56
This is pretty much taken from the multiprocessing docs (except for the fallback import) and works fine in python 3.7 but throws the following error in python 3.8:
Traceback (most recent call last):
File "scripts/debug_shared_memory_issue.py", line 21, in <module>
maths = manager.Maths()
File "/usr/lib/python3.8/multiprocessing/managers.py", line 740, in temp
token, exp = self._create(typeid, *args, **kwds)
File "/usr/lib/python3.8/multiprocessing/managers.py", line 625, in _create
id, exposed = dispatch(conn, None, 'create', (typeid,)+args, kwds)
File "/usr/lib/python3.8/multiprocessing/managers.py", line 91, in dispatch
raise convert_to_error(kind, result)
multiprocessing.managers.RemoteError:
---------------------------------------------------------------------------
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/managers.py", line 210, in handle_request
result = func(c, *args, **kwds)
File "/usr/lib/python3.8/multiprocessing/managers.py", line 1312, in create
if hasattr(self.registry[typeid][-1], "_shared_memory_proxy"):
NameError: name 'self' is not defined
---------------------------------------------------------------------------
The release notes of Python 3.8 and the documentation say that SharedMemoryManager is "A subclass of BaseManager", so we expected it to work as a drop-in replacement. But that doesn't seem to be the case. What are we doing wrong? Looking at the current 3.8 branch of CPython, there don't seem to be any relevant changes. In Python 3.9 there is an explicit self argument in the create function, but 3.9 is still a work in progress, so we'd rather not use it in production software.
Thanks for your help!
It was a bug fixed in python/cpython#142566c (v3.9.0a1).
You can patch it, following python/cpython/blob/v3.9.0a1/Lib/multiprocessing/managers.py#L1269-L1277:
import sys

from multiprocessing.managers import SharedMemoryManager as Manager

if sys.version_info < (3, 9):
    from multiprocessing.managers import Server, SharedMemoryServer

    def create(self, c, typeid, /, *args, **kwargs):
        if hasattr(self.registry[typeid][-1], "_shared_memory_proxy"):
            kwargs['shared_memory_context'] = self.shared_memory_context
        return Server.create(self, c, typeid, *args, **kwargs)

    SharedMemoryServer.create = create

Exception thrown when running tf.app.run()

I am toying around with flags at the moment and came across some weird behavior when using tf.app.run(). The following code snippet should simply print the string given via the command line.
import tensorflow as tf

# command line flags
tf.app.flags.DEFINE_string('mystring', 'Hello World!',
                           '''String to print to console.''')

FLAGS = tf.app.flags.FLAGS

def main():
    print(FLAGS.mystring)

if __name__ == '__main__':
    tf.app.run()
During execution, this error is thrown:
Traceback (most recent call last):
  File "", line 1, in <module>
    runfile('/path/flags.py', wdir='/path')
  File "/home/abc/anaconda3/envs/tensorflow/lib/python3.5/site-packages/spyder/utils/site/sitecustomize.py", line 710, in runfile
    execfile(filename, namespace)
  File "/home/abc/anaconda3/envs/tensorflow/lib/python3.5/site-packages/spyder/utils/site/sitecustomize.py", line 101, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "/path/flags.py", line 19, in <module>
    tf.app.run()
  File "/home/abc/anaconda3/envs/tensorflow/lib/python3.5/site-packages/tensorflow/python/platform/app.py", line 126, in run
    _sys.exit(main(argv))
TypeError: main() takes 0 positional arguments but 1 was given
...which is strange because I do not give a single argument to main(). However, if I add an underscore def main(_):, it works without any errors.
I couldn't find any documentation where this use of the underscore is described. Does anybody know what happens here? Thank you!
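The TypeError itself has nothing to do with TensorFlow; it is ordinary Python behavior when a function that takes no parameters is called with one argument. A minimal sketch:

```python
def main():
    print("no args expected")

try:
    main(["argv0"])   # tf.app.run() always calls main with one argument (argv)
except TypeError as exc:
    print(exc)        # main() takes 0 positional arguments but 1 was given

def main(_):          # accept (and ignore) the argv argument instead
    print("works")

main(["argv0"])       # prints: works
```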
The error message I see in the PyCharm IDE when I execute your code is clearer.
Traceback (most recent call last):
File "D:/PycharmProjects/TensorFlow/self.py", line 30, in <module>
tf.app.run()
File "D:\\Anaconda\envs\tensorflow\lib\site-packages\tensorflow\python\platform\app.py",
line 48, in run
_sys.exit(main(_sys.argv[:1] + flags_passthrough))
TypeError: main() takes 0 positional arguments but 1 was given
_sys.exit(main(_sys.argv[:1] + flags_passthrough)) is trying to call our main method with one argument.
This is the run method in app.py
A stripped down version of the run method can be used to test.
import tensorflow as tf
import sys as _sys
from tensorflow.python.platform import flags

# command line flags
tf.app.flags.DEFINE_string('mystring', 'Hello World!',
                           '''String to print to console.''')

FLAGS = tf.app.flags.FLAGS

def run(main=None, argv=None):
    """Runs the program with an optional 'main' function and 'argv' list."""
    f = flags.FLAGS

    # Extract the args from the optional `argv` list.
    args = argv[1:] if argv else None

    # Parse the known flags from that list, or from the command
    # line otherwise.
    # pylint: disable=protected-access
    flags_passthrough = f._parse_flags(args=args)
    # pylint: enable=protected-access

    main = main or _sys.modules['__main__'].main

    print(_sys.argv[:1])

    # Call the main function, passing through any arguments
    # to the final program.
    #_sys.exit(main(_sys.argv[:1] + flags_passthrough))

    # Call the main function with no arguments
    #_sys.exit(main())

def main():
    print(FLAGS.mystring)

if __name__ == '__main__':
    #tf.app.run()
    run()
print(_sys.argv[:1]) prints ['D:/PycharmProjects/TensorFlow/self.py'], since argv[0] is the script name passed to the interpreter.
Maybe you can find the answer in this link, which explains how app.py runs.
You can also define your main function as def main(argv=None): ..., or, as you did, as def main(_): ...; either way, main accepts the argument it is given, which makes it work.
I had a similar problem when using cProfile and calling the script with
python -m cProfile train.py
It seems the problem was that tf.app.run called main inside cProfile, which wasn't prepared for argument passing. In my case the solution was to specify main explicitly in tf.app.run():
tf.app.run(main=main)
Don't forget to add the fake argument to main, like this: def main(_):.

python multiprocessing pickling/manager/misc error (from PMOTW)

I'm having some trouble getting the following code to run in Eclipse on Windows. The code is from Doug Hellmann:
import random
import multiprocessing
import time

class ActivePool:
    def __init__(self):
        super(ActivePool, self).__init__()
        self.mgr = multiprocessing.Manager()
        self.active = self.mgr.list()
        self.lock = multiprocessing.Lock()

    def makeActive(self, name):
        with self.lock:
            self.active.append(name)

    def makeInactive(self, name):
        with self.lock:
            self.active.remove(name)

    def __str__(self):
        with self.lock:
            return str(self.active)

def worker(s, pool):
    name = multiprocessing.current_process().name
    with s:
        pool.makeActive(name)
        print('Activating {} now running {}'.format(name, pool))
        time.sleep(random.random())
        pool.makeInactive(name)

if __name__ == '__main__':
    pool = ActivePool()
    s = multiprocessing.Semaphore(3)
    jobs = [
        multiprocessing.Process(
            target=worker,
            name=str(i),
            args=(s, pool),
        )
        for i in range(10)
    ]
    for j in jobs:
        j.start()
    for j in jobs:
        j.join()
    print('Now running: %s' % str(pool))
I get the following error, which I assume is due to some pickling issue with passing in pool as an argument to Process.
Traceback (most recent call last):
File "E:\Eclipse_Workspace\CodeExamples\FromCodes\CodeTest.py", line 50, in <module>
j.start()
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\process.py", line 105, in start
self._popen = self._Popen(self)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\context.py", line 223, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\context.py", line 322, in _Popen
return Popen(process_obj)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
reduction.dump(process_obj, to_child)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\connection.py", line 939, in reduce_pipe_connection
dh = reduction.DupHandle(conn.fileno(), access)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\connection.py", line 170, in fileno
self._check_closed()
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\connection.py", line 136, in _check_closed
raise OSError("handle is closed")
OSError: handle is closed
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\spawn.py", line 99, in spawn_main
new_handle = reduction.steal_handle(parent_pid, pipe_handle)
File "C:\Users\Bob\AppData\Local\Programs\Python\Python36-32\lib\multiprocessing\reduction.py", line 87, in steal_handle
_winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
PermissionError: [WinError 5] Access is denied
A similar question's answer seems to suggest that I initialize pool with a function call at the top level, but I don't know how to apply that to this example. Do I initialize ActivePool in worker? That seems to defeat the spirit of Hellmann's example.
Another answer suggests I use __getstate__ and __setstate__ to remove unpicklable objects and reconstruct them when unpickling, but I don't know a good way to do this with proxy objects like Manager, and I actually don't know what the unpicklable object is.
Is there any way I can make this example work with minimal changes? I really wish to understand what is going on under the hood. Thanks!
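One way to find the unpicklable attribute (a hypothetical diagnostic, not part of Hellmann's example) is to try pickling each instance attribute individually. In the sketch below a threading.Lock stands in for the unpicklable member:

```python
import pickle
import threading

class Holder:
    """Stand-in for ActivePool: one picklable and one unpicklable attribute."""
    def __init__(self):
        self.active = [1, 2, 3]       # a plain list pickles fine
        self.lock = threading.Lock()  # lock objects cannot be pickled

h = Holder()
for name, value in vars(h).items():
    try:
        pickle.dumps(value)
        print(name, "is picklable")
    except TypeError as exc:
        print(name, "is NOT picklable:", exc)
```

Running the same loop over the real ActivePool instance points straight at the attribute that makes Process(args=(s, pool)) fail on Windows, where child processes are spawned by pickling their arguments.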
Edit - Problem Solved:
The pickling issue was pretty obvious in hindsight. ActivePool's __init__ stored a Manager() object, which appears to be unpicklable. The code runs normally, as in Hellmann's example, if we remove self.mgr and initialize the list proxy object in one line:
def __init__(self):
    super(ActivePool, self).__init__()
    self.active = multiprocessing.Manager().list()
    self.lock = multiprocessing.Lock()
Comment: The 'join()' was in the Hellmann example, but I forgot to add it to the code snippet. Any other ideas?
I'm running Linux and it works as expected; Windows behaves differently. Read understanding-multiprocessing-shared-memory-management-locks-and-queues-in-pyt
To determine which parameter of args=(s, pool) raises the error, remove one and use it as a global instead. Change:
def worker(s):
    ...

args=(s,),
Note: There is no need to protect a multiprocessing.Manager().list() with a Lock(); that is not the culprit of your error.
Question: Is there any way I can make this example work with minimal changes?
Your __main__ process terminates, so all started processes die at an unpredictable point of execution. Simply add a .join() at the end to make __main__ wait until all processes are done:
for j in jobs:
    j.join()
print('EXIT __main__')
Tested with Python: 3.4.2

Is it possible to get lines from Python tracebacks in eggs?

In Python Packaging: Hate, hate, hate everywhere, Armin says:
[...] Python tracebacks no longer included the source lines with the traceback. However there was no technical limitation for why it should not be able to show the correct line numbers there. It was just a bug in the Python interpreter.
I'm seeing exactly this issue for eggs in my virtualenv:
Traceback (most recent call last):
File "/users/example/venv/current/bin/my_script", line 37, in <module>
sys.exit(demo.scripts.foo.main())
File "build/bdist.linux-x86_64/egg/example/demo/scripts/my_script.py", line 90, in main
File "build/bdist.linux-x86_64/egg/example/demo/lib/bar.py", line 18, in func_x
File "build/bdist.linux-x86_64/egg/example/demo/lib/bar.py", line 55, in func_y
AttributeError: 'tuple' object has no attribute 'sort'
Since this is a known bug, are there workarounds? Is there an issue in the Python bug tracker (I can't find one)?
This is a proof of concept:
import os
import sys
import traceback
import linecache

def recurse(depth=10):
    if depth:
        recurse(depth - 1)
    os.path.join(None, None)  # deliberately raise an exception

def locate_filename(filename):
    def generate_segments():
        # yield ever-shorter path suffixes, longest first
        parts = filename.split(os.sep)
        for i in xrange(len(parts) - 1, 0, -1):
            yield os.sep.join(parts[i:])
    for segment in generate_segments():
        for path in sys.path:
            candidate = os.path.join(path, segment)
            if os.path.exists(candidate):
                return candidate

try:
    recurse()
except:
    _, _, tb = sys.exc_info()
    for filename, lineno, functionname, _ in traceback.extract_tb(tb):
        print filename, lineno, functionname
        relocated_filename = locate_filename(filename)
        if relocated_filename:
            print linecache.getline(relocated_filename, lineno)
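The key building block above is the standard library's linecache, which fetches a source line given a path and a 1-based line number. A minimal (Python 3) sketch using a temporary file:

```python
import linecache
import os
import tempfile

# write a tiny "module" to disk, then fetch its second line by number
with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
    f.write("a = 1\nb = 2\nc = 3\n")
    path = f.name

line = linecache.getline(path, 2)  # returns the line including its newline
print(line)                        # b = 2

linecache.clearcache()
os.unlink(path)
```

Once a traceback's relative egg path has been relocated to a real file on sys.path, the same call recovers the missing source line.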

Jython and the xml.sax parser - strange error

I am just getting started with Python/Jython and the SAX parser (xml.sax). I wrote a simple content handler as a test.
from __future__ import with_statement
import sys

from xml.sax import make_parser, handler
from xml.sax.handler import ContentHandler

class CountingHandler(ContentHandler):
    def __init__(self):
        self.counter = 0

    def startElement(self, name, attrs):
        self.counter += 1

def main(argv=sys.argv):
    parser = make_parser()
    h = CountingHandler()
    parser.setContentHandler(h)
    with open(argv[1], "r") as input:
        parser.parse(input)
When I run this on some documents (not all), I get an error:
Traceback (most recent call last):
File "src/sciencenetworks/xmltools.py", line 93, in <module>
sys.exit(main())
File "src/sciencenetworks/xmltools.py", line 88, in main
parser.parse(input)
File "/amd.home/home/staudt/workspace/jython/Lib/xml/sax/drivers2/drv_javasax.py", line 141, in parse
self._parser.parse(JyInputSourceWrapper(source))
File "/amd.home/home/staudt/workspace/jython/Lib/xml/sax/drivers2/drv_javasax.py", line 90, in resolveEntity
return JyInputSourceWrapper(self._resolver.resolveEntity(pubId, sysId))
File "/amd.home/home/staudt/workspace/jython/Lib/xml/sax/drivers2/drv_javasax.py", line 75, in __init__
if source.getByteStream():
AttributeError: 'unicode' object has no attribute 'getByteStream'
When I look into the source code of drv_javasax.py, it seems that input is not recognized as a file-like object, even though it is one.
Any ideas on how to fix this?
I think it's this bug: http://bugs.jython.com/issue1488. Fixed in Jython 2.5.2-b1: http://www.jython.org/latest.html
When you insert print type(input) after your with statement, what do you see?
When you revert to old-style "try/finally" code instead of "with", does it work for all files?
What is different between files that work and files that don't work?
What happens if you change the name input to something that doesn't shadow a built-in function?
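For comparison, the same kind of handler runs fine under CPython's built-in expat driver; a quick sketch using xml.sax.parseString instead of a file:

```python
import xml.sax
from xml.sax.handler import ContentHandler

class CountingHandler(ContentHandler):
    """Counts the start tags seen while parsing."""
    def __init__(self):
        super().__init__()
        self.counter = 0

    def startElement(self, name, attrs):
        self.counter += 1

h = CountingHandler()
xml.sax.parseString(b"<root><a/><b/></root>", h)
print(h.counter)  # 3: root, a and b
```

If this works on the same documents that fail under Jython, that supports the theory that the problem is in Jython's drv_javasax.py driver rather than in the handler.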
