How to abbreviate traceback in Jupyter Notebook? - python

I documented an XML-API with Jupyter Notebook, so documentation and specification cannot drift apart.
This works great.
As the API also has to handle invalid input, Jupyter Notebook shows - correctly - the traceback.
The traceback is very verbose - I'd like to abbreviate / shorten it - ideally, only the last line should be shown.
request
server.get_licenses("not-existing-id")
Current printout in Jupyter Notebook
---------------------------------------------------------------------------
Fault Traceback (most recent call last)
<ipython-input-5-366cceb6869e> in <module>
----> 1 server.get_licenses("not-existing-id")
/usr/lib/python3.9/xmlrpc/client.py in __call__(self, *args)
1114 return _Method(self.__send, "%s.%s" % (self.__name, name))
1115 def __call__(self, *args):
-> 1116 return self.__send(self.__name, args)
1117
1118 ##
/usr/lib/python3.9/xmlrpc/client.py in __request(self, methodname, params)
1456 allow_none=self.__allow_none).encode(self.__encoding, 'xmlcharrefreplace')
1457
-> 1458 response = self.__transport.request(
1459 self.__host,
1460 self.__handler,
/usr/lib/python3.9/xmlrpc/client.py in request(self, host, handler, request_body, verbose)
1158 for i in (0, 1):
1159 try:
-> 1160 return self.single_request(host, handler, request_body, verbose)
1161 except http.client.RemoteDisconnected:
1162 if i:
/usr/lib/python3.9/xmlrpc/client.py in single_request(self, host, handler, request_body, verbose)
1174 if resp.status == 200:
1175 self.verbose = verbose
-> 1176 return self.parse_response(resp)
1177
1178 except Fault:
/usr/lib/python3.9/xmlrpc/client.py in parse_response(self, response)
1346 p.close()
1347
-> 1348 return u.close()
1349
1350 ##
/usr/lib/python3.9/xmlrpc/client.py in close(self)
660 raise ResponseError()
661 if self._type == "fault":
--> 662 raise Fault(**self._stack[0])
663 return tuple(self._stack)
664
Fault: <Fault 1: 'company id is not valid'>
Desired output
Fault: <Fault 1: 'company id is not valid'>

As it turns out, this is built into IPython, so you don't need to install or update anything.
Just put a single cell at the top of your notebook and run %xmode Minimal as its only input. You can also see the documentation with %xmode?, or a lot of other magic-command documentation with %quickref.

The following solution, using sys.excepthook, works in a plain Python REPL...
code
import sys
def my_exc_handler(type, value, traceback):
    print(repr(value), file=sys.stderr)

sys.excepthook = my_exc_handler
1 / 0
bash
❯ python3.9 main.py
ZeroDivisionError('division by zero')
... but unfortunately not in Jupyter Notebook - I still get the full traceback.
When I have a look at Python's documentation...
When an exception is raised and uncaught
... maybe the "uncaught" is the problem. If I had to guess, I'd say Jupyter Notebook catches all exceptions itself, then does the formatting and printing on its own.
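That guess matches how IPython works: it installs its own exception machinery, which is why sys.excepthook is ignored. Besides %xmode Minimal, IPython also lets you register your own handler via set_custom_exc. A minimal sketch (the handler name is my own, and the commented registration line assumes it runs inside a live IPython session):

```python
import sys

def short_exc_handler(shell, etype, value, tb, tb_offset=None):
    # Print only the final "ExceptionType: message" line of the traceback.
    print("{}: {}".format(etype.__name__, value), file=sys.stderr)
    return []  # structured traceback for IPython to record (nothing here)

# In a notebook cell (only works when IPython is actually running):
# get_ipython().set_custom_exc((Exception,), short_exc_handler)
```

After registration, any uncaught exception in a cell is routed to the handler instead of IPython's default verbose formatter.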

Related

How to debug "OSError: -9" saving a tiff via libtiff and pillow?

I have a TIFF file that I cannot manipulate with Pillow. I know Pillow uses libtiff, but the error messages are insufficient to debug the issue:
image = PIL.Image.open(open("input.tif", "rb"))
image.save("out.tiff")
I instantly get the following (truncated) traceback:
PIL/Image.py in _ensure_mutable(self)
616 def _ensure_mutable(self):
617 if self.readonly:
--> 618 self._copy()
619 else:
620 self.load()
PIL/Image.py in _copy(self)
609
610 def _copy(self):
--> 611 self.load()
612 self.im = self.im.copy()
613 self.pyaccess = None
PIL/TiffImagePlugin.py in load(self)
1055 def load(self):
1056 if self.tile and self.use_load_libtiff:
-> 1057 return self._load_libtiff()
1058 return super().load()
1059
PIL/TiffImagePlugin.py in _load_libtiff(self)
1159
1160 if err < 0:
-> 1161 raise OSError(err)
1162
1163 return Image.Image.load(self)
OSError: -9
Is there a way to enable more useful debugging information so I have some idea where the failure occurred? To be clear, the error happens on the call to image.save, which does the following to manipulate the file, using libtiff:
decoder = Image._getdecoder(
    self.mode, "libtiff", tuple(args), self.decoderconfig
)
...
...
decoder.decode(...)
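One hedged starting point (not a full answer): Pillow's modules report diagnostics through the standard logging framework, so raising the log level sometimes surfaces more detail about where the libtiff call fails:

```python
import logging

# Pillow logs through stdlib logging; DEBUG level exposes its internal messages.
logging.basicConfig(level=logging.DEBUG)
logging.getLogger("PIL.TiffImagePlugin").setLevel(logging.DEBUG)
```

With this enabled, re-running image.save may print tag-by-tag progress from the TIFF plugin before the OSError is raised.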

Jupyter notebook - rpy2 - Cannot find R libraries

I am currently trying to use both R and Python in the same Jupyter Notebook. I successfully installed rpy2; if I try to write something in R (putting %%R at the beginning) everything works, but as soon as I try to use a library, the following error appears:
R[write to console]: Error in library(name of the package) : there is no package
called - name of the package -
If I try to use the same library in RStudio (not in Jupyter), everything works.
This is the code that is giving me trouble:
import os
os.environ['R_HOME'] = r'C:/PROGRA~1/R/R-40~1.0'
os.environ['path'] += r';C:/PROGRA~1/R/R-40~1.0\bin;'
%load_ext rpy2.ipython
%%R
library(readr)
After this last line the following error appears:
R[write to console]: Error in library(readr) : there is no package called 'readr'
Error in library(readr) : there is no package called 'readr'
---------------------------------------------------------------------------
RRuntimeError Traceback (most recent call last)
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\ipython\rmagic.py
in eval(self, code)
267 # Need the newline in case the last line in code is a comment.
--> 268 value, visible = ro.r("withVisible({%s\n})" % code)
269 except (ri.embedded.RRuntimeError, ValueError) as exception:
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\robjects\__init__.py
in __call__(self, string)
415 p = rinterface.parse(string)
--> 416 res = self.eval(p)
417 return conversion.rpy2py(res)
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\robjects\functions.py
in __call__(self, *args, **kwargs)
196 kwargs[r_k] = v
--> 197 return (super(SignatureTranslatedFunction, self)
198 .__call__(*args, **kwargs))
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\robjects\functions.py
in __call__(self, *args, **kwargs)
124 new_kwargs[k] = conversion.py2rpy(v)
--> 125 res = super(Function, self).__call__(*new_args, **new_kwargs)
126 res = conversion.rpy2py(res)
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\rinterface_lib\conversion.py
in _(*args, **kwargs)
43 def _(*args, **kwargs):
---> 44 cdata = function(*args, **kwargs)
45 # TODO: test cdata is of the expected CType
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\rinterface.py
in __call__(self, *args, **kwargs)
623 if error_occured[0]:
--> 624 raise embedded.RRuntimeError(_rinterface._geterrmessage())
625 return res
RRuntimeError: Error in library(readr) : there is no package called
'readr'
During handling of the above exception, another exception occurred:
RInterpreterError Traceback (most recent call last)
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\ipython\rmagic.py
in R(self, line, cell, local_ns)
762 else:
--> 763 text_result, result, visible = self.eval(code)
764 text_output += text_result
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\ipython\rmagic.py
in eval(self, code)
271 warning_or_other_msg = self.flush()
--> 272 raise RInterpreterError(code, str(exception),
273 warning_or_other_msg)
RInterpreterError: Failed to parse and evaluate line
'library(readr)\n'. R error message: "Error in library(readr) : there
is no package called 'readr'"
During handling of the above exception, another exception occurred:
PermissionError Traceback (most recent call
last) in
----> 1 get_ipython().run_cell_magic('R', '', 'library(readr)\n')
~\anaconda3\envs\Cattolica2020\lib\site-packages\IPython\core\interactiveshell.py
in run_cell_magic(self, magic_name, line, cell)
2379 with self.builtin_trap:
2380 args = (magic_arg_s, cell)
-> 2381 result = fn(*args, **kwargs)
2382 return result
2383
in R(self, line, cell, local_ns)
~\anaconda3\envs\Cattolica2020\lib\site-packages\IPython\core\magic.py
in <lambda>(f, *a, **k)
185 # but it's overkill for just that one bit of state.
186 def magic_deco(arg):
--> 187 call = lambda f, *a, **k: f(*a, **k)
188
189 if callable(arg):
~\anaconda3\envs\Cattolica2020\lib\site-packages\rpy2\ipython\rmagic.py
in R(self, line, cell, local_ns)
782 print(e.err)
783 if tmpd:
--> 784 rmtree(tmpd)
785 return
786 finally:
~\anaconda3\envs\Cattolica2020\lib\shutil.py in rmtree(path,
ignore_errors, onerror)
735 # can't continue even if onerror hook returns
736 return
--> 737 return _rmtree_unsafe(path, onerror)
738
739 # Allow introspection of whether or not the hardening against symlink
~\anaconda3\envs\Cattolica2020\lib\shutil.py in _rmtree_unsafe(path,
onerror)
613 os.unlink(fullname)
614 except OSError:
--> 615 onerror(os.unlink, fullname, sys.exc_info())
616 try:
617 os.rmdir(path)
~\anaconda3\envs\Cattolica2020\lib\shutil.py in _rmtree_unsafe(path,
onerror)
611 else:
612 try:
--> 613 os.unlink(fullname)
614 except OSError:
615 onerror(os.unlink, fullname, sys.exc_info())
PermissionError: [WinError 32] The process cannot access the file because it is
being used by another process:
'C:\Users\User\AppData\Local\Temp\tmp82eo8sb4\Rplots001.png'
I also tried to verify whether the library directory is the same for Jupyter and RStudio, and I obtain the same two directories in both:
[1] "C:/Users/User/Documents/R/win-library/4.0"
[2] "C:/Program Files/R/R-4.0.0/library"
I am currently using R 4.0.0 and Python 3.8.3.
The exception RRuntimeError normally just forwards to Python an exception that R itself generated during execution.
The error message says that R does not find the package. If you are really sure that both RStudio and Jupyter use the very same R installation, the difference between the two will come from RStudio being instructed to look for installed R packages in more directories than the R started from Jupyter is.
Run the following in RStudio to find out where readr is loaded from:
library(dplyr)
as_data_frame(installed.packages()) %>%
    filter(Package == "readr") %>%
    select(Package, LibPath)
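As a cross-check, you can compare the library search paths directly. The following is a sketch using only base-R functions; run it both in RStudio and inside a %%R cell in Jupyter, and compare the output (the path shown in the commented line is taken from the question and may differ on your machine):

```r
.libPaths()   # directories this R session searches for installed packages

# If the path containing readr shows up in RStudio but not in Jupyter,
# prepend it explicitly in the Jupyter session, e.g.:
# .libPaths(c("C:/Users/User/Documents/R/win-library/4.0", .libPaths()))
```

If the two sessions report different sets of paths, that difference explains why library(readr) succeeds in one and fails in the other.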

How to debug a CommClosedError in Dask Gateway deployed in Kubernetes

I have deployed dask_gateway 0.8.0 (with dask==2.25.0 and distributed==2.25.0) in a Kubernetes cluster.
When I create a new cluster with:
cluster = gateway.new_cluster(public_address = gateway._public_address)
I get this error:
Task exception was never retrieved
future: <Task finished coro=<connect.<locals>._() done, defined at /home/jovyan/.local/lib/python3.6/site-packages/distributed/comm/core.py:288> exception=CommClosedError()>
Traceback (most recent call last):
File "/home/jovyan/.local/lib/python3.6/site-packages/distributed/comm/core.py", line 297, in _
handshake = await asyncio.wait_for(comm.read(), 1)
File "/cvmfs/sft.cern.ch/lcg/releases/Python/3.6.5-f74f0/x86_64-centos7-gcc8-opt/lib/python3.6/asyncio/tasks.py", line 351, in wait_for
yield from waiter
concurrent.futures._base.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jovyan/.local/lib/python3.6/site-packages/distributed/comm/core.py", line 304, in _
raise CommClosedError() from e
distributed.comm.core.CommClosedError
However, if I check the pods, the cluster has actually been created: I can scale it up, and everything seems fine in the dashboard (I can even see the workers).
Yet I cannot get the client:
> client = cluster.get_client()
Task exception was never retrieved
future: <Task finished coro=<connect.<locals>._() done, defined at /home/jovyan/.local/lib/python3.6/site-packages/distributed/comm/core.py:288> exception=CommClosedError()>
Traceback (most recent call last):
File "/home/jovyan/.local/lib/python3.6/site-packages/distributed/comm/core.py", line 297, in _
handshake = await asyncio.wait_for(comm.read(), 1)
File "/cvmfs/sft.cern.ch/lcg/releases/Python/3.6.5-f74f0/x86_64-centos7-gcc8-opt/lib/python3.6/asyncio/tasks.py", line 351, in wait_for
yield from waiter
concurrent.futures._base.CancelledError
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jovyan/.local/lib/python3.6/site-packages/distributed/comm/core.py", line 304, in _
raise CommClosedError() from e
distributed.comm.core.CommClosedError
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
~/.local/lib/python3.6/site-packages/distributed/comm/core.py in connect(addr, timeout, deserialize, handshake_overrides, **connection_args)
321 if not comm:
--> 322 _raise(error)
323 except FatalCommClosedError:
~/.local/lib/python3.6/site-packages/distributed/comm/core.py in _raise(error)
274 )
--> 275 raise IOError(msg)
276
OSError: Timed out trying to connect to 'gateway://traefik-dask-gateway:80/jhub.0373ea68815d47fca6a6c489c8f7263a' after 100 s: connect() didn't finish in time
During handling of the above exception, another exception occurred:
OSError Traceback (most recent call last)
<ipython-input-19-affca45186d3> in <module>
----> 1 client = cluster.get_client()
~/.local/lib/python3.6/site-packages/dask_gateway/client.py in get_client(self, set_as_default)
1066 set_as_default=set_as_default,
1067 asynchronous=self.asynchronous,
-> 1068 loop=self.loop,
1069 )
1070 if not self.asynchronous:
~/.local/lib/python3.6/site-packages/distributed/client.py in __init__(self, address, loop, timeout, set_as_default, scheduler_file, security, asynchronous, name, heartbeat_interval, serializers, deserializers, extensions, direct_to_workers, connection_limit, **kwargs)
743 ext(self)
744
--> 745 self.start(timeout=timeout)
746 Client._instances.add(self)
747
~/.local/lib/python3.6/site-packages/distributed/client.py in start(self, **kwargs)
948 self._started = asyncio.ensure_future(self._start(**kwargs))
949 else:
--> 950 sync(self.loop, self._start, **kwargs)
951
952 def __await__(self):
~/.local/lib/python3.6/site-packages/distributed/utils.py in sync(loop, func, callback_timeout, *args, **kwargs)
337 if error[0]:
338 typ, exc, tb = error[0]
--> 339 raise exc.with_traceback(tb)
340 else:
341 return result[0]
~/.local/lib/python3.6/site-packages/distributed/utils.py in f()
321 if callback_timeout is not None:
322 future = asyncio.wait_for(future, callback_timeout)
--> 323 result[0] = yield future
324 except Exception as exc:
325 error[0] = sys.exc_info()
/cvmfs/sft.cern.ch/lcg/views/LCG_96python3/x86_64-centos7-gcc8-opt/lib/python3.6/site-packages/tornado/gen.py in run(self)
1131
1132 try:
-> 1133 value = future.result()
1134 except Exception:
1135 self.had_exception = True
~/.local/lib/python3.6/site-packages/distributed/client.py in _start(self, timeout, **kwargs)
1045
1046 try:
-> 1047 await self._ensure_connected(timeout=timeout)
1048 except (OSError, ImportError):
1049 await self._close()
~/.local/lib/python3.6/site-packages/distributed/client.py in _ensure_connected(self, timeout)
1103 try:
1104 comm = await connect(
-> 1105 self.scheduler.address, timeout=timeout, **self.connection_args
1106 )
1107 comm.name = "Client->Scheduler"
~/.local/lib/python3.6/site-packages/distributed/comm/core.py in connect(addr, timeout, deserialize, handshake_overrides, **connection_args)
332 backoff = min(backoff, 1) # wait at most one second
333 else:
--> 334 _raise(error)
335 else:
336 break
~/.local/lib/python3.6/site-packages/distributed/comm/core.py in _raise(error)
273 error,
274 )
--> 275 raise IOError(msg)
276
277 backoff = 0.01
OSError: Timed out trying to connect to 'gateway://traefik-dask-gateway:80/jhub.0373ea68815d47fca6a6c489c8f7263a' after 100 s: Timed out trying to connect to 'gateway://traefik-dask-gateway:80/jhub.0373ea68815d47fca6a6c489c8f7263a' after 100 s: connect() didn't finish in time
How do I debug this? Any pointer would be greatly appreciated.
I already tried increasing all the timeouts, but nothing changed:
os.environ["DASK_DISTRIBUTED__COMM__TIMEOUTS__CONNECT"]="100s"
os.environ["DASK_DISTRIBUTED__COMM__TIMEOUTS__TCP"]="600s"
os.environ["DASK_DISTRIBUTED__COMM__RETRY__DELAY__MIN"]="1s"
os.environ["DASK_DISTRIBUTED__COMM__RETRY__DELAY__MAX"]="60s"
I wrote a tutorial about the steps I took to deploy dask gateway, see https://zonca.dev/2020/08/dask-gateway-jupyterhub.html.
I am quite sure this was working fine a few weeks ago, but I cannot identify what changed...
You need to use compatible versions of dask and dask-distributed everywhere.
I believe this is an error related to an upgrade in the communications protocol for distributed. See https://github.com/dask/dask-gateway/issues/316#issuecomment-702947730
These are the pinned versions of the dependencies for the docker images as of Nov 10, 2020 (in conda environment.yml compatible format):
- python=3.7.7
- dask=2.21.0
- distributed=2.21.0
- cloudpickle=1.5.0
- toolz=0.10.0
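To confirm that the client, scheduler, and workers really agree, distributed provides Client.get_versions(check=True), which raises on mismatches. The core of that comparison can be sketched in plain Python; the function name and the report data below are made up for illustration:

```python
def find_version_mismatches(versions):
    """Given {component: {package: version}}, return the packages whose
    reported version differs between any two components."""
    packages = set()
    for pkgs in versions.values():
        packages.update(pkgs)
    mismatched = {}
    for pkg in sorted(packages):
        seen = {comp: pkgs.get(pkg) for comp, pkgs in versions.items()}
        if len(set(seen.values())) > 1:
            mismatched[pkg] = seen
    return mismatched

# Hypothetical report: the notebook has 2.25.0, the gateway image has 2.21.0.
report = find_version_mismatches({
    "client": {"dask": "2.25.0", "distributed": "2.25.0"},
    "scheduler": {"dask": "2.21.0", "distributed": "2.21.0"},
})
print(sorted(report))  # → ['dask', 'distributed']
```

Any package listed in the report needs to be pinned to the same version in the notebook image and in the gateway's scheduler/worker images.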

Stop at exception in my, not library code

I'm developing an app using the Python library urllib, and it sometimes raises exceptions due to not being able to access a URL.
However, the exception is raised almost six levels down the standard library stack:
/home/user/Workspace/application/main.py in call(path)
11 headers={'content-type': 'application/json'},
12 data=b'')
---> 13 resp = urllib.request.urlopen(req) ####### THIS IS MY CODE
14 return json.loads(resp.read().decode('utf-8'))
/usr/lib/python3.4/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
159 else:
160 opener = _opener
--> 161 return opener.open(url, data, timeout)
162
163 def install_opener(opener):
/usr/lib/python3.4/urllib/request.py in open(self, fullurl, data, timeout)
461 req = meth(req)
462
--> 463 response = self._open(req, data)
464
465 # post-process response
/usr/lib/python3.4/urllib/request.py in _open(self, req, data)
479 protocol = req.type
480 result = self._call_chain(self.handle_open, protocol, protocol +
--> 481 '_open', req)
482 if result:
483 return result
/usr/lib/python3.4/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
439 for handler in handlers:
440 func = getattr(handler, meth_name)
--> 441 result = func(*args)
442 if result is not None:
443 return result
/usr/lib/python3.4/urllib/request.py in http_open(self, req)
1208
1209 def http_open(self, req):
-> 1210 return self.do_open(http.client.HTTPConnection, req)
1211
1212 http_request = AbstractHTTPHandler.do_request_
/usr/lib/python3.4/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1182 h.request(req.get_method(), req.selector, req.data, headers)
1183 except OSError as err: # timeout error
-> 1184 raise URLError(err)
1185 r = h.getresponse()
1186 except:
URLError: <urlopen error [Errno 111] Connection refused>
I usually run the code in ipython3 with the %pdb magic turned on, so in case there is an exception I can inspect it immediately. However, for this I have to go down the stack six levels to get to my code.
Is there a way to make my app crash pointing at my code directly?
I would go with modifying the code:
try:
    resp = urllib.request.urlopen(req)
except Exception as e:
    raise RuntimeError(e)
That way:
%pdb moves you to your code,
the original exception is preserved as the argument of the "secondary" exception.
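On Python 3 this chaining can also be made explicit with raise ... from, which keeps the original exception reachable as __cause__ and prints both tracebacks. A small sketch, where the fetch helper and the fabricated ConnectionRefusedError are mine, standing in for the real urlopen call:

```python
def fetch(url):
    try:
        # stand-in for urllib.request.urlopen(url) failing
        raise ConnectionRefusedError(111, "Connection refused")
    except OSError as e:
        # %pdb stops here, in our own frame; the original error rides along
        raise RuntimeError("request to {} failed".format(url)) from e

try:
    fetch("http://localhost:1/")
except RuntimeError as e:
    print(type(e.__cause__).__name__)  # → ConnectionRefusedError
```

The "The above exception was the direct cause of..." banner in the printed traceback comes from exactly this mechanism.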
You may also monkeypatch the urllib.request.urlopen() function:
import urllib.request

class MonkeyPatchUrllib(object):
    def __enter__(self):
        self.__urlopen = urllib.request.urlopen
        urllib.request.urlopen = self

    def __exit__(self, exception_type, exception_value, traceback):
        urllib.request.urlopen = self.__urlopen

    def __call__(self, *args, **kwargs):
        try:
            return self.__urlopen(*args, **kwargs)
        except Exception as e:
            raise RuntimeError(e)
Any time an exception is raised in a urlopen() call within the context manager's scope:
with MonkeyPatchUrllib():
#your code here
%pdb will move you only 1 level away from your code.
[EDIT]
With sys.exc_info() it is possible to preserve a more verbose context of the original exception (like its traceback).
pdb has only incremental frame positioning (moving up or down the list of frames).
To get the feature you want, you can try trepan (github repository). It has an IPython extension here. You then use the command frame -1 once the exception shows up:
Frame (absolute frame positioning)
frame [thread-Name*|*thread-number] [frame-number]
Change the current frame to frame frame-number if specified, or the current frame, 0, if no frame number specified.
If a thread name or thread number is given, change the current frame to a frame in that thread. Dot (.) can be used to indicate the name of the current frame the debugger is stopped in.
A negative number indicates the position from the other or least-recently-entered end. So frame -1 moves to the oldest frame, and frame 0 moves to the newest frame. Any variable or expression that evaluates to a number can be used as a position, however due to parsing limitations, the position expression has to be seen as a single blank-delimited parameter. That is, the expression (5*3)-1 is okay while (5 * 3) - 1 isn't.
Once you are in the desired frame, you can use edit to modify your code.
You may find the command backtrace useful too as it gives a stack trace with the less recent call at the bottom.
trepan depends on uncompyle6 available here.
pydb provides a similar feature but was unfortunately not ported to Python3.
Otherwise, you may decide to be patient and wait for improvements. In IPython/core/debugger.py:
"""
Pdb debugger class.
Modified from the standard pdb.Pdb class to avoid including readline, so that
the command line completion of other programs which include this isn't damaged.
In the future, this class will be expanded with improvements over the standard pdb.
[...]
"""
It can be done with some hacking. These docs show how you can turn on post-mortem debugging with the following code in the entry point:
import sys
from IPython.core import ultratb

sys.excepthook = ultratb.FormattedTB(mode='Verbose',
                                     color_scheme='Linux', call_pdb=1)
Stepping through this hook after an exception is raised shows that we need to tinker with the debugger method. Unfortunately I can see no better way to do this other than to copy the entire method and modify it where needed (I tried modifying self.tb but traceback objects are read only and can't be used with copy.deepcopy). Here's a demo:
import json
import sys
from IPython.core import debugger, ultratb
from IPython.core.display_trap import DisplayTrap

class CustomTB(ultratb.FormattedTB):
    def debugger(self, force=False):
        if force or self.call_pdb:
            if self.pdb is None:
                self.pdb = debugger.Pdb(
                    self.color_scheme_table.active_scheme_name)
            # the system displayhook may have changed, restore the original
            # for pdb
            display_trap = DisplayTrap(hook=sys.__displayhook__)
            with display_trap:
                self.pdb.reset()
                # Find the right frame so we don't pop up inside ipython itself
                if hasattr(self, 'tb') and self.tb is not None:
                    etb = self.tb
                else:
                    etb = self.tb = sys.last_traceback

                # only modification is here -----+
                #                                |
                #                                V
                while self.tb is not None and '/lib/python3' not in self.tb.tb_next.tb_frame.f_code.co_filename:
                    self.tb = self.tb.tb_next

                if etb and etb.tb_next:
                    etb = etb.tb_next
                self.pdb.botframe = etb.tb_frame
                self.pdb.interaction(self.tb.tb_frame, self.tb)

        if hasattr(self, 'tb'):
            del self.tb

sys.excepthook = CustomTB(mode='Verbose',
                          color_scheme='Linux', call_pdb=1)

def foo():
    bar()

def bar():
    json.dumps(json)

foo()
As you can see it stops searching through the traceback when it's about to reach library code. Here's the result:
TypeErrorTraceback (most recent call last)
/Users/alexhall/Dropbox/python/sandbox3/sandbox.py in <module>()
40 json.dumps(json)
41
---> 42 foo()
global foo = <function foo at 0x1031358c8>
/Users/alexhall/Dropbox/python/sandbox3/sandbox.py in foo()
35
36 def foo():
---> 37 bar()
global bar = <function bar at 0x103135950>
38
39 def bar():
/Users/alexhall/Dropbox/python/sandbox3/sandbox.py in bar()
38
39 def bar():
---> 40 json.dumps(json)
global json.dumps = <function dumps at 0x10168b268>
global json = <module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>
41
42 foo()
/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py in dumps(obj=<module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>, skipkeys=False, ensure_ascii=True, check_circular=True, allow_nan=True, cls=None, indent=None, separators=None, default=None, sort_keys=False, **kw={})
228 cls is None and indent is None and separators is None and
229 default is None and not sort_keys and not kw):
--> 230 return _default_encoder.encode(obj)
global _default_encoder.encode = <bound method JSONEncoder.encode of <json.encoder.JSONEncoder object at 0x10166e8d0>>
obj = <module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>
231 if cls is None:
232 cls = JSONEncoder
/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/encoder.py in encode(self=<json.encoder.JSONEncoder object>, o=<module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>)
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
chunks = undefined
self.iterencode = <bound method JSONEncoder.iterencode of <json.encoder.JSONEncoder object at 0x10166e8d0>>
o = <module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>
global _one_shot = undefined
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/encoder.py in iterencode(self=<json.encoder.JSONEncoder object>, o=<module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>, _one_shot=True)
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
_iterencode = <_json.Encoder object at 0x1031296d8>
o = <module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>
258
259 def _make_iterencode(markers, _default, _encoder, _indent, _floatstr,
/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/encoder.py in default(self=<json.encoder.JSONEncoder object>, o=<module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>)
178
179 ""
--> 180 raise TypeError(repr(o) + " is not JSON serializable")
global TypeError = undefined
global repr = undefined
o = <module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'>
181
182 def encode(self, o):
TypeError: <module 'json' from '/Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py'> is not JSON serializable
> /Users/alexhall/Dropbox/python/sandbox3/sandbox.py(40)bar()
38
39 def bar():
---> 40 json.dumps(json)
41
42 foo()
ipdb> down
> /Users/alexhall/.pyenv/versions/3.5.0/lib/python3.5/json/__init__.py(230)dumps()
228 cls is None and indent is None and separators is None and
229 default is None and not sort_keys and not kw):
--> 230 return _default_encoder.encode(obj)
231 if cls is None:
232 cls = JSONEncoder
ipdb>
Basically the full traceback is still printed out but ipdb starts at your own code. If you enter the down command you find yourself in a library frame.
I think the answer is no.
pdb stops at the exception and shows you the stack.
Why would it be useful to hide the real source of the exception?
If it worked as you seem to be requesting and hid the six layers of stack, how would you work out what to fix?
If this is still not on topic, please add to your question.
urllib can raise a lot of exceptions.
You need to put a try block around the call into urllib and figure out how to handle the exceptions, for example:
try:
    resp = urllib.request.urlopen(req)
except URLError as e:
    # analyse e to figure out the detail
    ...
Certainly under Python 2's urllib lots of other exceptions are thrown; I'm not sure about Python 3's urllib.
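In Python 3, the relevant classes live in urllib.error, and HTTPError is a subclass of URLError, so it has to be checked first. A hedged sketch of such analysis (describe_url_error is a made-up helper name, not part of urllib):

```python
from urllib.error import HTTPError, URLError

def describe_url_error(e):
    # HTTPError carries an HTTP status code and must be matched before
    # its parent class URLError.
    if isinstance(e, HTTPError):
        return "HTTP {}".format(e.code)
    if isinstance(e, URLError):
        return "network failure: {}".format(e.reason)
    return repr(e)

print(describe_url_error(URLError("Connection refused")))
# → network failure: Connection refused
```

In a real handler, e.reason often wraps the underlying OSError (such as the Errno 111 from the question), which you can inspect further.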

pymongo error when writing

I am unable to do any writes to a remote mongodb database. I am able to connect and do lookups (e.g. find). I connect like this:
conn = pymongo.MongoClient(db_uri,slaveOK=True)
db = conn.test_database
coll = db.test_collection
But when I try to insert,
coll.insert({'a':1})
I run into an error:
---------------------------------------------------------------------------
AutoReconnect Traceback (most recent call last)
<ipython-input-56-d4ffb9e3fa79> in <module>()
----> 1 coll.insert({'a':1})
/usr/lib/python2.7/dist-packages/pymongo/collection.pyc in insert(self, doc_or_docs, manipulate, safe, check_keys, continue_on_error, **kwargs)
410 message._do_batched_insert(self.__full_name, gen(), check_keys,
411 safe, options, continue_on_error,
--> 412 self.uuid_subtype, client)
413
414 if return_one:
/usr/lib/python2.7/dist-packages/pymongo/mongo_client.pyc in _send_message(self, message, with_last_error, command, check_primary)
1126 except (ConnectionFailure, socket.error), e:
1127 self.disconnect()
-> 1128 raise AutoReconnect(str(e))
1129 except:
1130 sock_info.close()
AutoReconnect: not master
If I remove the slaveOK=True (setting it to its default value of False), then I can still connect, but the reads (and writes) fail:
AutoReconnect Traceback (most recent call last)
<ipython-input-70-6671eea24f80> in <module>()
----> 1 coll.find_one()
/usr/lib/python2.7/dist-packages/pymongo/collection.pyc in find_one(self, spec_or_id, *args, **kwargs)
719 *args, **kwargs).max_time_ms(max_time_ms)
720
--> 721 for result in cursor.limit(-1):
722 return result
723 return None
/usr/lib/python2.7/dist-packages/pymongo/cursor.pyc in next(self)
1036 raise StopIteration
1037 db = self.__collection.database
-> 1038 if len(self.__data) or self._refresh():
1039 if self.__manipulate:
1040 return db._fix_outgoing(self.__data.popleft(),
/usr/lib/python2.7/dist-packages/pymongo/cursor.pyc in _refresh(self)
980 self.__skip, ntoreturn,
981 self.__query_spec(), self.__fields,
--> 982 self.__uuid_subtype))
983 if not self.__id:
984 self.__killed = True
/usr/lib/python2.7/dist-packages/pymongo/cursor.pyc in __send_message(self, message)
923 self.__tz_aware,
924 self.__uuid_subtype,
--> 925 self.__compile_re)
926 except CursorNotFound:
927 self.__killed = True
/usr/lib/python2.7/dist-packages/pymongo/helpers.pyc in _unpack_response(response, cursor_id, as_class, tz_aware, uuid_subtype, compile_re)
99 error_object = bson.BSON(response[20:]).decode()
100 if error_object["$err"].startswith("not master"):
--> 101 raise AutoReconnect(error_object["$err"])
102 elif error_object.get("code") == 50:
103 raise ExecutionTimeout(error_object.get("$err"),
AutoReconnect: not master and slaveOk=false
Am I connecting incorrectly? Is there a way to specify connecting to the primary replica?
AutoReconnect: not master means that your operation is failing because the node on which you are attempting to issue the command is not the primary of a replica set, where the command (e.g., a write operation) requires that node to be a primary. Setting slaveOK=True just enables you to read from a secondary node, where by default you would only be able to read from the primary.
MongoClient is automatically able to discover and connect to the primary if the replica set name is provided to the constructor with replicaSet=<replica set name>. See "Connecting to a Replica Set" in the PyMongo docs.
As an aside, slaveOK is deprecated, replaced by ReadPreference. You can specify a ReadPreference when creating the client or when issuing queries, if you want to target a node other than the primary.
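Both options can also be expressed in the connection URI itself, which MongoClient parses. A sketch with made-up hostnames and replica-set name, for illustration only:

```python
# Hypothetical hosts and replica-set name; substitute your own deployment's.
db_uri = ("mongodb://host1.example.com:27017,host2.example.com:27017"
          "/test_database?replicaSet=rs0&readPreference=secondaryPreferred")

# conn = pymongo.MongoClient(db_uri)  # discovers the primary and routes writes to it
print(db_uri.split("?")[1])  # → replicaSet=rs0&readPreference=secondaryPreferred
```

With replicaSet set, writes go to whichever node is currently primary, and readPreference controls which nodes serve queries.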
I don't know whether it's related to this topic or not, but when I searched for the exception below, Google led me to this question. Maybe it'll be helpful.
pymongo.errors.NotMasterError: not master
In my case, my hard drive was full.
You can also figure that out with the df -h command.
