if __name__ == "__main__": failing; why won't __main__ call my classes? - python

I have been simplifying this code and I am at a loss. Simply put, when the file runs, the class (and by extension the methods inside it) never seems to get called. The error is something I have never come across, and I would appreciate some clarity on it if someone could provide it.
class Server_Design:
    def __init__(self):
        self.intro_input()

    def intro_input(self):
        self.host = input('Host: ')
        self.port = input('Port: ')
        print("y")

if __name__ == "__main__":
    Server_Design()
COMMAND LINE OUTPUT:
[SpyderKernelApp] ERROR | Exception in message handler:
Traceback (most recent call last):
File "C:\Users\ ----\anaconda3\lib\site-packages\spyder_kernels\comms\frontendcomm.py", line 164, in poll_one
asyncio.run(handler(out_stream, ident, msg))
File "C:\Users\ ----\anaconda3\lib\site-packages\nest_asyncio.py", line 33, in run
task = asyncio.ensure_future(main)
File "C:\Users\ ----\anaconda3\lib\asyncio\tasks.py", line 677, in ensure_future
raise TypeError('An asyncio.Future, a coroutine or an awaitable is '
TypeError: An asyncio.Future, a coroutine or an awaitable is required
[SpyderKernelApp] ERROR | Exception in message handler:
Traceback (most recent call last):
repeating indefinitely

This error was fixed in Spyder 5.2.0 (released in November 2021). Please update to a more recent version by following the instructions posted here.

Related

Python3.8 generic thread error "Exception ignored in thread started by..."

The program below needs to import os to work correctly.
from threading import Thread
# import os

class Downloader(Thread):
    #classmethod
    def __init__(self):
        super().__init__(self)

    #classmethod
    def run(self):
        os.path.join("aas", "sadas")

def main():
    Downloader().start()

if __name__ == "__main__":
    main()
In Python 3.8, the above program prints a generic error message for any unhandled error that occurs in the thread. The same is true for Python 3.9.
Exception ignored in thread started by: <bound method Thread._bootstrap of <Downloader(Thread-1, started 123145545740288)>>
Traceback (most recent call last):
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/threading.py", line 890, in _bootstrap
self._bootstrap_inner()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/threading.py", line 934, in _bootstrap_inner
self._invoke_excepthook(self)
TypeError: invoke_excepthook() takes 1 positional argument but 2 were given
In Python 3.6, by contrast, it prints the actual error.
Exception in thread Thread-1:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "threads.py", line 10, in run
os.path.join("aas", "sadas")
NameError: name 'os' is not defined
I am having a hard time debugging my threads; the best I can do is guess the line number and the error. The other posts with the same error message deal with the specific exception that caused it, so they are not useful here.
Why did this change in Python 3.8? Is there any way to display a proper error message?
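One workaround, as a minimal sketch rather than an explanation of the 3.8 behaviour change: catch and print the traceback inside run() itself, so the real error is visible regardless of what the interpreter's thread excepthook does. The sketch also drops the custom __init__, since super().__init__(self) passes self as Thread's group argument.
import traceback
from threading import Thread
# os is still deliberately not imported, so run() raises NameError as in the question

class Downloader(Thread):
    def run(self):
        try:
            os.path.join("aas", "sadas")
        except Exception:
            # print the real traceback (NameError: name 'os' is not defined) before re-raising
            traceback.print_exc()
            raise

if __name__ == "__main__":
    Downloader().start()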

Different behaviour in normal and debug run in threaded Virtualbox

I'm encountering a weird error when running the script in PyCharm's debug mode or inside a Flask app. I have narrowed my code down to the following:
import virtualbox
import threading

class ThreadExecutor(threading.Thread):
    def __init__(self):
        super().__init__()

    def run(self):
        vbox = virtualbox.VirtualBox()

if __name__ == '__main__':
    th = ThreadExecutor()
    th.start()
Running this as a module produces no errors and executes perfectly fine, but in debug mode it produces the following error message:
Connected to pydev debugger (build 181.5087.37)
Exception in thread Thread-6:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\site-packages\vboxapi\__init__.py", line 449, in __init__
None)
pywintypes.com_error: (-2147221008, 'CoInitialize has not been called.', None, None)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Program Files\Python36\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "C:/Users/.../dev/debug.py", line 13, in run
vbox = virtualbox.VirtualBox()
File "C:\Program Files\Python36\lib\site-packages\virtualbox\library_ext\vbox.py", line 22, in __init__
manager = virtualbox.Manager()
File "C:\Program Files\Python36\lib\site-packages\virtualbox\__init__.py", line 143, in __init__
self.manager = vboxapi.VirtualBoxManager(mtype, mparams)
File "C:\Program Files\Python36\lib\site-packages\vboxapi\__init__.py", line 991, in __init__
self.platform = PlatformMSCOM(dPlatformParams)
File "C:\Program Files\Python36\lib\site-packages\vboxapi\__init__.py", line 455, in __init__
print("Warning: CoInitializeSecurity failed: ", oXctp);
NameError: name 'oXctp' is not defined
Going into vboxapi\__init__.py we find:
try:
    pythoncom.CoInitializeSecurity(None,
                                   None,
                                   None,
                                   pythoncom.RPC_C_AUTHN_LEVEL_DEFAULT,
                                   pythoncom.RPC_C_IMP_LEVEL_IMPERSONATE,
                                   None,
                                   pythoncom.EOAC_NONE,
                                   None)
except:
    _, oXcpt, _ = sys.exc_info();
    if isinstance(oXcpt, pythoncom.com_error) and self.xcptGetStatus(oXcpt) == -2147417831: # RPC_E_TOO_LATE
        print("Warning: CoInitializeSecurity was already called");
    else:
        print("Warning: CoInitializeSecurity failed: ", oXctp);
What's causing this error? Does sys.exc_info() behave differently inside a thread inside a debugger?
There seem to be some known, unresolved issues in the interaction between threading and virtualbox; see here. I would suggest using multiprocessing instead, as I have not experienced any of the previous issues with it.
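A minimal sketch of that suggestion, assuming the same virtualbox (pyvbox) package from the question; the Thread subclass is simply replaced with a worker process (create_vbox is just a placeholder name):
import multiprocessing
import virtualbox

def create_vbox():
    # runs in a separate process, which avoids the thread-related issues above
    vbox = virtualbox.VirtualBox()

if __name__ == '__main__':
    p = multiprocessing.Process(target=create_vbox)
    p.start()
    p.join()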

debugging errors in python multiprocessing

I'm using the Pool function of the multiprocessing module in order to run the same code in parallel on different data.
It turns out that on some data my code raises an exception, but the precise line in which this happens is not given:
Traceback (most recent call last):
File "my_wrapper_script.py", line 366, in <module>
main()
File "my_wrapper_script.py", line 343, in main
results = pool.map(process_function, folders)
File "/usr/lib64/python2.6/multiprocessing/pool.py", line 148, in map
return self.map_async(func, iterable, chunksize).get()
File "/usr/lib64/python2.6/multiprocessing/pool.py", line 422, in get
raise self._value
KeyError: 'some_key'
I am aware of multiprocessing.log_to_stderr(), but it seems to be useful mainly when concurrency issues arise, which is not the case here.
Any ideas?
If you're using a new enough version of Python, you'll actually see the real exception get printed prior to that one. For example, here's a sample that fails:
import multiprocessing

def inner():
    raise Exception("FAIL")

def f():
    print("HI")
    inner()

p = multiprocessing.Pool()
p.apply(f)
p.close()
p.join()
Here's the exception when running this with python 3.4:
multiprocessing.pool.RemoteTraceback:
"""
Traceback (most recent call last):
File "/usr/local/lib/python3.4/multiprocessing/pool.py", line 119, in worker
result = (True, func(*args, **kwds))
File "test.py", line 9, in f
inner()
File "test.py", line 4, in inner
raise Exception("FAIL")
Exception: FAIL
"""
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "test.py", line 13, in <module>
p.apply(f)
File "/usr/local/lib/python3.4/multiprocessing/pool.py", line 253, in apply
return self.apply_async(func, args, kwds).get()
File "/usr/local/lib/python3.4/multiprocessing/pool.py", line 599, in get
raise self._value
Exception: FAIL
If using a newer version isn't an option, the easiest thing to do is to wrap your worker function in a try/except block that will print the exception prior to re-raising it:
import multiprocessing
import traceback

def inner():
    raise Exception("FAIL")

def f():
    try:
        print("HI")
        inner()
    except Exception:
        print("Exception in worker:")
        traceback.print_exc()
        raise

p = multiprocessing.Pool()
p.apply(f)
p.close()
p.join()
Output:
HI
Exception in worker:
Traceback (most recent call last):
File "test.py", line 11, in f
inner()
File "test.py", line 5, in inner
raise Exception("FAIL")
Exception: FAIL
Traceback (most recent call last):
File "test.py", line 18, in <module>
p.apply(f)
File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 244, in apply
return self.apply_async(func, args, kwds).get()
File "/usr/local/lib/python2.7/multiprocessing/pool.py", line 558, in get
raise self._value
Exception: FAIL
You need to implement your own try/except block in the worker. Depending on how you want to organize your code, you could log to stderr as you mention above, log to some other place like a file, return some sort of error code or even tag the exception with the current traceback and re-raise. Here's an example of the last technique:
import traceback
import multiprocessing as mp

class MyError(Exception):
    pass

def worker():
    try:
        # your real code here
        raise MyError("boom")
    except Exception, e:
        e.traceback = traceback.format_exc()
        raise

def main():
    pool = mp.Pool()
    try:
        print "run worker"
        result = pool.apply_async(worker)
        result.get()
    # handle exceptions you expect
    except MyError, e:
        print e.traceback
    # re-raise the rest
    except Exception, e:
        print e.traceback
        raise

if __name__ == "__main__":
    main()
It returns
run worker
Traceback (most recent call last):
File "doit.py", line 10, in worker
raise MyError("boom")
MyError: boom

Custom python traceback or debug output

I have a traceback print and want to customize the last part of it.
What: the error occurred in another process, and the traceback lies there (as is the case in multiprocessing).
Problem: I want to have the full traceback and error report.
Similar to this code:
>>> def f():
        g()
>>> def g():
        raise Exception, Exception(), None ## my traceback here
>>> f()
Traceback (most recent call last):
File "<pyshell#14>", line 1, in <module>
f()
File "<pyshell#8>", line 2, in f
g()
File "<pyshell#11>", line 2, in g
raise Exception, Exception(), None ## my traceback starts here
my traceback appears here
my traceback appears here
Exception
Impossible "Solutions": subclass and mock-object
>>> from types import *
>>> class CostomTB(TracebackType):
        pass
Traceback (most recent call last):
File "<pyshell#125>", line 1, in <module>
class CostomTB(TracebackType):
TypeError: Error when calling the metaclass bases
type 'traceback' is not an acceptable base type
>>> class CostomTB(object):
        pass
>>> try: zzzzzzzzz
    except NameError:
        import sys
        ty, err, tb = sys.exc_info()
        raise ty, err, CostomTB()
Traceback (most recent call last):
File "<pyshell#133>", line 5, in <module>
raise ty, err, CostomTB()
TypeError: raise: arg 3 must be a traceback or None
I am using python 2.7.
I guess you want the full traceback stack. See this, which has very good examples of the Python logging module.
If anything is still unclear, see the logging documentation.
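For illustration, a small sketch of the kind of thing that answer points at: logging.exception() logs a message together with the full traceback of the exception currently being handled.
import logging

logging.basicConfig(level=logging.DEBUG)

def g():
    raise Exception("boom")

def f():
    g()

try:
    f()
except Exception:
    # logs "something went wrong" followed by the full f() -> g() traceback
    logging.exception("something went wrong")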
You mentioned a separate process: if your problem is to capture the traceback in process A and show it in process B, as if the exception had actually been raised in the latter, then I'm afraid there is no clean way to do it.
I would suggest serializing the traceback in process A, sending it to process B, and from there raising a new exception that includes the former in its description. The result is somewhat longer output, but it carries information about both processes' stacks.
In the following example there aren't really two separate processes, but I hope it makes my point clearer:
import traceback, StringIO

def functionInProcessA():
    raise Exception('Something happened in A')

class RemoteException(Exception):
    def __init__(self, tb):
        Exception.__init__(self, "Remote traceback:\n\n%s" % tb)

def controlProcessB():
    try:
        functionInProcessA()
    except:
        fd = StringIO.StringIO()
        traceback.print_exc(file=fd)
        tb = fd.getvalue()
        raise RemoteException(tb)

if __name__ == '__main__':
    controlProcessB()
Output:
Traceback (most recent call last):
File "a.py", line 20, in <module>
controlProcessB()
File "a.py", line 17, in controlProcessB
raise RemoteException(tb)
__main__.RemoteException: Remote traceback:
Traceback (most recent call last):
File "a.py", line 12, in controlProcessB
functionInProcessA()
File "a.py", line 4, in functionInProcessA
raise Exception('Something happened in A')
Exception: Something happened in A
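The same idea with an actual worker process, as a rough sketch (the worker function and its message are placeholders, not from the question): the child formats its traceback and returns it, and the parent re-raises it wrapped in the answer's RemoteException.
import multiprocessing
import traceback

class RemoteException(Exception):
    def __init__(self, tb):
        Exception.__init__(self, "Remote traceback:\n\n%s" % tb)

def worker():
    try:
        raise Exception('Something happened in the worker')
    except Exception:
        # return the formatted traceback so it survives the trip
        # back to the parent process
        return traceback.format_exc()

if __name__ == '__main__':
    pool = multiprocessing.Pool(1)
    tb = pool.apply(worker)
    pool.close()
    pool.join()
    if tb is not None:
        raise RemoteException(tb)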

SystemExit and NameError issues with exiting

def main():
    try:
        print "hardfart"
        return 0
    except:
        return 1

if __name__ == '__main__':
    exit(main())
Can one kind programmer tell me why this spits out the following error on exit?
Traceback (most recent call last):
File "C:/Apps/exp_exit.py", line 9, in <module>
exit(main())
File "C:\Apps\python2.7.2\lib\site.py", line 372, in __call__
raise SystemExit(code)
SystemExit: 0
This is causing an error on exit in a project that's set up similarly. For that project, after using gui2exe to compile an exe, when closing the program I get this related error:
Traceback (most recent call last):
File "checkHDBox.py", line 303, in <module>
NameError: name 'exit' is not defined
So if exit is generating this error, how do I exit then? And if I create an exception handler for exit, doesn't that replace the default action that python takes with the exit function?
Thanks.
Edit:
I think this answers my own question.
The traceback here is from IDLE; from other sources I've read, I think this is its default behaviour.
Traceback (most recent call last):
File "C:/Apps/exp_exit.py", line 9, in <module>
exit(main())
File "C:\Apps\python2.7.2\lib\site.py", line 372, in __call__
raise SystemExit(code)
SystemExit: 0
The traceback here was fixed by using sys.exit() instead of exit(0):
Traceback (most recent call last):
File "checkHDBox.py", line 303, in <module>
NameError: name 'exit' is not defined
You exit a program by raising SystemExit. This is what exit() does. Someone has incorrectly written an exception handler that catches all exceptions. This is why you only catch the exceptions you can handle.
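As a small sketch of that advice applied to the script in the question (same Python 2 code, only the exit call and the except clause change): use sys.exit(), which is always available even in a frozen exe, and catch Exception rather than using a bare except, so SystemExit can propagate normally.
import sys

def main():
    try:
        print "hardfart"
        return 0
    except Exception:  # not a bare except, so SystemExit is not swallowed
        return 1

if __name__ == '__main__':
    sys.exit(main())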
