How can I debug pserve using Eclipse? - python

I'm getting started with Pyramid development on Windows. I have Python 2.7 installed and used virtualenv to create a nice sandbox for my Pyramid app. I also installed PyDev 2.4 on Eclipse Indigo and created a separate PyDev interpreter just for my virtualenv, so it should have access to all the directories.
I set up a new debug configuration.
Project: testapp (the only project in the workspace)
Main module: ${workspace_loc:testapp/Scripts/pserve-script.py}
Args: development.ini
Working dir: Other: ${workspace_loc:testapp/testapp}
When I hit Debug, the output is:
pydev debugger: starting
Starting server in PID 2208.
Unhandled exception in thread started by
Traceback (most recent call last):
  File "C:\Tools\eclipse-cpp-indigo-SR1-incubation-win32-x86_64\eclipse\plugins\org.python.pydev.debug_2.3.0.2011121518\pysrc\pydevd.py", line 200, in __call__
    self.original_func(*self.args, **self.kwargs)
TypeError: ThreadedTaskDispatcher object argument after ** must be a mapping, not tuple
(the same traceback is repeated for several worker threads, with their output interleaved)
serving on http://0.0.0.0:6543
Even though it says the server is running, it's not. Nothing is listening on that port.
Any idea on how to fix this? Debugging certainly isn't necessary, but I like having a fully set up development environment. Thanks!

Pyramid includes remarkably good debug support in the form of the debug toolbar.
To enable it, make sure the line
pyramid.includes = pyramid_debugtoolbar
in your development.ini isn't commented out. It doesn't support Eclipse breakpoints, but it gives you almost everything else you'd want.
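For reference, a minimal sketch of the relevant part of development.ini, assuming the app from the question is named testapp (the other keys are illustrative, not taken from the question):

```ini
[app:main]
use = egg:testapp
pyramid.reload_templates = true
pyramid.includes = pyramid_debugtoolbar
```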

I haven't looked into that error, but in hard-to-debug environments the remote debugger (http://pydev.org/manual_adv_remote_debugger.html) can usually be used. That way it works somewhat like pdb: you add code to set a breakpoint, and until that point your program runs as usual.

Pyramid's pserve uses multiple threads, as Fabio suggests might be the case. I found I could make breakpoints work by monkey-patching the ThreadedTaskDispatcher before invoking pserve:
# Allow attaching PyDev to the web app
import sys; sys.path.append('..../pydev/2.5.0-2/plugins/org.python.pydev.debug_2.4.0.201208051101/pysrc/')

# Monkey-patch the thread task dispatcher, so it sets up the tracer in the worker threads
from waitress.task import ThreadedTaskDispatcher

_prev_start_new_thread = ThreadedTaskDispatcher.start_new_thread

def start_new_thread(ttd, fn, args):
    def settrace_and_call(*args, **kwargs):
        import pydevd; pydevd.settrace(suspend=False)
        return fn(*args, **kwargs)
    from thread import start_new_thread
    start_new_thread(settrace_and_call, args)

ThreadedTaskDispatcher.start_new_thread = start_new_thread
Note, I also tried:
pydevd.settrace(..., trace_only_current_thread=False)
But this either makes the app unusably slow, or doesn't work for some other reason.
Having done the above, when run, the app will automatically register itself with the PyDev debug server running locally. See:
http://pydev.org/manual_adv_remote_debugger.html
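The wrap-and-delegate pattern used in that patch can be demonstrated with the stdlib alone. Here Dispatcher is a stand-in for waitress's ThreadedTaskDispatcher, and the appended "setup" marker stands in for the pydevd.settrace(suspend=False) call:

```python
import threading

# Stand-in for waitress's ThreadedTaskDispatcher; the real patch wraps its
# start_new_thread method in exactly the same way.
class Dispatcher:
    def start_new_thread(self, fn, args):
        t = threading.Thread(target=fn, args=args)
        t.start()
        return t

calls = []

_prev_start_new_thread = Dispatcher.start_new_thread

def start_new_thread(dispatcher, fn, args):
    def setup_and_call(*a):
        calls.append("setup")  # pydevd.settrace(suspend=False) goes here in the real patch
        return fn(*a)
    return _prev_start_new_thread(dispatcher, setup_and_call, args)

Dispatcher.start_new_thread = start_new_thread

t = Dispatcher().start_new_thread(lambda tag: calls.append(tag), ("work",))
t.join()
print(calls)  # -> ['setup', 'work']
```

Every thread spawned through the patched method now runs the setup hook before its real work, which is why breakpoints start to fire inside waitress worker threads.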

How to always enable debug in a Python Cmd2 App?

I am using the Cmd2 module in Python (version 1.0.2) to build a command-line interface (CLI).
After I run the program (so that I am inside my custom CLI), if I want debugging to be enabled so that it shows stack traces on errors, I have to manually run "set debug true" from the CLI.
What I want is a way to automatically set the "debug" flag to true every time the CLI is invoked. I know I can pass scripts to the CLI that include setting debug as the first step, but I want interactive sessions to also have this behavior.
Is there any way to change the default value for debug in Cmd2?
The cmd2 docs about settings say:
Settings
Settings provide a mechanism for a user to control the behavior of a cmd2 based application. A setting is stored in an instance attribute on your subclass of cmd2.Cmd and must also appear in the cmd2.Cmd.settable dictionary. Developers may set default values for these settings and users can modify them at runtime using the set command.
So, to enable the debug setting by default, you just have to set the debug attribute of your cmd2.Cmd object to True. For example, if this is the app:
import cmd2

class App(cmd2.Cmd):
    @cmd2.with_argument_list()
    def do_spam(self, args):
        raise Exception("a sample exception")
you just have to do
app = App()
app.debug = True
Now, if I run the app from the command line, debug will be enabled by default.
Full Python code:
import cmd2

class App(cmd2.Cmd):
    @cmd2.with_argument_list()
    def do_spam(self, args):
        raise Exception("a sample exception")

if __name__ == '__main__':
    import sys
    app = App()
    app.debug = True
    sys.exit(app.cmdloop())
Input:
spam
Output:
Traceback (most recent call last):
  File "[...]\venv\lib\site-packages\cmd2\cmd2.py", line 1646, in onecmd_plus_hooks
    stop = self.onecmd(statement, add_to_history=add_to_history)
  File "[...]\venv\lib\site-packages\cmd2\cmd2.py", line 2075, in onecmd
    stop = func(statement)
  File "[...]\venv\lib\site-packages\cmd2\decorators.py", line 69, in cmd_wrapper
    return func(cmd2_app, parsed_arglist, **kwargs)
  File "[...]/main.py", line 7, in do_spam
    raise Exception("a sample exception")
Exception: a sample exception
EXCEPTION of type 'Exception' occurred with message: 'a sample exception'

redis Python catching exception that doesn't inherit from BaseException

We're seeing exceptions in our log like the following:
ERROR Exception ignored in: <function Connection.__del__ at 0x7f9b70a5cc20>
Traceback (most recent call last):
  File "/app/.heroku/python/lib/python3.7/site-packages/redis/connection.py", line 537, in __del__
  File "/app/.heroku/python/lib/python3.7/site-packages/redis/connection.py", line 667, in disconnect
TypeError: catching classes that do not inherit from BaseException is not allowed
According to the Redis source code, the offending line is the except in the following snippet:
try:
    if os.getpid() == self.pid:
        shutdown(self._sock, socket.SHUT_RDWR)
    self._sock.close()
except socket.error:
    pass
Which would indicate that socket.error doesn't inherit from BaseException. However, as far as I can tell (based on the docs and the mro class method), socket.error does inherit from BaseException.
Why is this happening? What can I do to prevent it?
By the way, our code doesn't call Redis directly. We are using Redis Queue (rq), which is implemented using Redis.
This would happen if you didn't close the redis client explicitly. That's why you saw __del__ in the traceback.
I'm not using rq, but I'll take Celery as an example; the same idea applies to rq.
# tasks/__init__.py
from celery.signals import worker_shutdown
from celeryapp import redis_client

@worker_shutdown.connect  # this signal fires when the worker is about to shut down
def cleanup(**kwargs):
    redis_client.close()  # without this you may see the error

# celeryapp.py
from celery import Celery
import redis

app = Celery(config_source="celeryconfig")
redis_client = redis.StrictRedis()

Parallel: Import a python file from sibling folder

I have a directory tree
working_dir\
    main.py
    my_agent\
        my_worker.py
    my_utility\
        my_utils.py
Code in each file is as follows
""" main.py """
import os, sys
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))
from my_agent.my_worker import MyWorker
import ray
ray.init()
workers = [MyWorker.remote(i) for i in range(10)]
ids = [worker.get_id.remote() for worker in workers]
# print(*ids, sep='\n')
print(*ray.get(ids), sep='\n')
""" my_worker.py """
from my_utility import my_utils
import ray

@ray.remote
class MyWorker():
    def __init__(self, id):
        self.id = id

    def get_id(self):
        return my_utils.f(self.id)
""" my_utils.py """
def f(id):
    return '{}: Everything is fine...'.format(id)
Here's a part of the error message I received
Traceback (most recent call last):
  File "/Users/aptx4869/anaconda3/envs/p35/lib/python3.5/site-packages/ray/function_manager.py", line 616, in fetch_and_register_actor
    unpickled_class = pickle.loads(pickled_class)
  File "/Users/aptx4869/anaconda3/envs/p35/lib/python3.5/site-packages/ray/cloudpickle/cloudpickle.py", line 894, in subimport
    __import__(name)
ImportError: No module named 'my_utility'

Traceback (most recent call last):
  File "main.py", line 12, in <module>
    print(*ray.get(ids), sep='\n')
  File "/Users/aptx4869/anaconda3/envs/p35/lib/python3.5/site-packages/ray/worker.py", line 2377, in get
    raise value
ray.worker.RayTaskError: ray_worker (pid=30025, host=AiMacbook)
Exception: The actor with name MyWorker failed to be imported, and so cannot execute this method
If I remove all statements related to ray, the above code works fine. Therefore, I boldly guess the reason is that ray runs each actor in a new process and sys.path.append only works in the main process. So I added the following code to my_worker.py:
import os, sys
sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), '..')))
But it still does not work: the same error message shows up. I've run out of ideas; what should I do?
You are correct about what the issue is.
In your example, you modify sys.path in main.py in order to be able to import my_agent.my_worker and my_utility.my_utils.
However, this path change is not propagated to the worker processes, so if you were to run a remote function like
@ray.remote
def f():
    # Print the PYTHONPATH on the worker process.
    import sys
    print(sys.path)

f.remote()
You would see that sys.path on the worker does not include the parent directory that you added.
The reason that modifying sys.path on the worker (e.g., in the MyWorker constructor) doesn't work is that the MyWorker class definition is pickled and shipped to the workers. Then the worker unpickles it, and the process of unpickling the class definition requires my_utils to be imported, and this fails because the actor constructor hasn't had a chance to run yet.
There are a couple possible solutions here.
Run the script with something like
PYTHONPATH=$(dirname $(pwd)):$PYTHONPATH python main.py
(from within working_dir/). That should solve the issue because in this case the worker processes are forked from the scheduler process (which is forked from the main Python interpreter when you call ray.init()), and so the environment variable will be inherited by the workers (this doesn't happen for sys.path, presumably because it is not an environment variable).
It looks like adding the lines
parent_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
os.environ["PYTHONPATH"] = parent_dir + ":" + os.environ.get("PYTHONPATH", "")
in main.py (before the ray.init() call) also works for the same reason as above.
Consider adding a setup.py and installing your project as a Python package so that it's automatically on the relevant path.
The new "Runtime Environments" feature, which didn't exist at the time of this post, should help with this issue: https://docs.ray.io/en/latest/handling-dependencies.html#runtime-environments. (See the working_dir and py_modules entries.)
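The environment-variable inheritance that the PYTHONPATH solutions above rely on can be sketched with the stdlib alone (DEMO_PYTHONPATH and the path are made-up values for illustration):

```python
import os
import subprocess
import sys

# Environment variables set in the parent are inherited by child processes,
# which is why setting PYTHONPATH reaches ray's workers while an in-process
# sys.path.append() does not.
os.environ["DEMO_PYTHONPATH"] = "/tmp/demo_pkgs"

child_output = subprocess.run(
    [sys.executable, "-c", "import os; print(os.environ['DEMO_PYTHONPATH'])"],
    capture_output=True, text=True,
).stdout.strip()

print(child_output)  # -> /tmp/demo_pkgs
```

A bare `sys.path.append(...)` in the parent would be invisible to the child for the same reason in reverse: it only mutates the parent interpreter's in-memory state.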

Emitting Signals on dbus using Python-dbus

I want a simple script to enable or disable an external monitor on my netbook. I am running Fedora 17 with XFCE as my desktop. It looks like I should be able to use Python and python-dbus to toggle the Active setting on and off. My problem is that I can't figure out how to emit a signal so that the new setting takes effect. Unfortunately, Python is not a language that I use often. The code that I have in place is:
import dbus

item = 'org.xfce.Xfconf'
path = '/org/xfce/Xfconf'
channel = 'displays'
base = '/'
setting = '/Default/VGA1/Active'

bus = dbus.SessionBus()
remote_object = bus.get_object(item, path)
remote_interface = dbus.Interface(remote_object, "org.xfce.Xfconf")

if remote_interface.GetProperty(channel, setting):
    remote_interface.SetProperty(channel, setting, '0')
    remote_object.PropertyChanged(channel, setting, '0')
else:
    remote_interface.SetProperty(channel, setting, '1')
    remote_object.PropertyChanged(channel, setting, '0')
It is failing and kicking out:
Traceback (most recent call last):
  File "./vgaToggle", line 31, in <module>
    remote_object.PropertyChanged(channel, setting, '0')
  File "/usr/lib/python2.7/site-packages/dbus/proxies.py", line 140, in __call__
    **keywords)
  File "/usr/lib/python2.7/site-packages/dbus/connection.py", line 630, in call_blocking
    message, timeout)
dbus.exceptions.DBusException: org.freedesktop.DBus.Error.UnknownMethod: Method "PropertyChanged" with signature "sss" on interface "(null)" doesn't exist
I spent a bit of time searching and I am not finding many python examples doing anything close to this. Thanks in advance.
PropertyChanged is a signal, not a method. The services you are communicating with are responsible for emitting signals; PropertyChanged should fire whenever the value of a property on the respective object or interface has changed.
That happens implicitly when you call remote_interface.SetProperty(...), so you should not need to "call" the signal like a method.
If you are interested in receiving the signals, you will need to set up a glib main loop and call connect_to_signal on your proxy object, passing it a callback method to invoke.

cannot override sys.excepthook

I try to customize behavior of sys.excepthook as described by the recipe.
in ipython:
import pdb, sys, traceback

def info(type, value, tb):
    traceback.print_exception(type, value, tb)
    pdb.pm()

sys.excepthook = info
>>> x[10] = 5
-------------------------------------------------
Traceback (most recent call last):
File "<ipython console>", line 1, in <module>
NameError: name 'x' is not defined
>>>
pdb.pm() is not being called. It seems that sys.excepthook = info doesn't work in my python 2.5 installation.
Five years after you wrote this, IPython still works this way, so I guess a solution might be useful to people googling this.
IPython replaces sys.excepthook every time you execute a line of code, so your overriding of sys.excepthook has no effect. Furthermore, IPython doesn't even call sys.excepthook, it catches all exceptions and handles them itself before things get that far.
To override the exception handler whilst IPython is running, you can monkeypatch over the shell's showtraceback method. For example, here's how I override it to give what looks like an ordinary Python traceback (because I don't like how verbose IPython's are):
import sys
import traceback
import IPython

def showtraceback(self, *args, **kwargs):
    traceback_lines = traceback.format_exception(*sys.exc_info())
    del traceback_lines[1]
    message = ''.join(traceback_lines)
    sys.stderr.write(message)

IPython.core.interactiveshell.InteractiveShell.showtraceback = showtraceback
This works in both the normal terminal console and the Qt console.
ipython, which you're using instead of the normal Python interactive shell, traps all exceptions itself and does NOT use sys.excepthook. Run it as ipython -pdb instead of just ipython, and it will automatically invoke pdb upon uncaught exceptions, just as you are trying to do with your excepthook.
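For contrast, in a plain (non-IPython) interpreter the hook from the question does run. A stdlib-only sketch, invoking the hook by hand with a live exception so the process survives (the hook omits the question's pdb.pm() call):

```python
import sys
import traceback

seen = []

def info(etype, value, tb):
    # Cut-down version of the question's hook, without pdb.pm().
    seen.append(etype.__name__)
    traceback.print_exception(etype, value, tb)

sys.excepthook = info

# sys.excepthook only fires for *uncaught* exceptions, so to observe it
# without killing the process we call it manually with a caught one:
try:
    x[10] = 5  # NameError, mirroring the question
except NameError:
    sys.excepthook(*sys.exc_info())

print(seen)  # -> ['NameError']
```

Run as a script (so the NameError escapes to the top level), the hook fires automatically; it is only IPython's own exception trap that bypasses it.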
Expanding on Chris's answer, you can use another function as a decorator to add your own functionality to Jupyter's showtraceback:
from IPython.core.interactiveshell import InteractiveShell
from functools import wraps
import traceback
import sys

def change_function(func):
    @wraps(func)
    def showtraceback(*args, **kwargs):
        # extract exception type, value and traceback
        etype, evalue, tb = sys.exc_info()
        if issubclass(etype, Exception):
            print('caught an exception')
        else:
            # otherwise run the original hook
            value = func(*args, **kwargs)
            return value
    return showtraceback

InteractiveShell.showtraceback = change_function(InteractiveShell.showtraceback)
raise IOError
sys.excepthook won't work in ipython. I think the recommended way of hooking to exceptions is to use the set_custom_exc method, like this:
from IPython import get_ipython

ip = get_ipython()

def exception_handler(self, etype, evalue, tb, tb_offset=None):
    print("##### Oh no!!! #####")  # your handling of the exception here
    self.showtraceback((etype, evalue, tb), tb_offset=tb_offset)  # standard IPython printout

ip.set_custom_exc((Exception,), exception_handler)  # register your handler

raise Exception("##### Bad things happened! #####")  # see it in action
See the docs for more details: https://ipython.readthedocs.io/en/stable/api/generated/IPython.core.interactiveshell.html#IPython.core.interactiveshell.InteractiveShell.set_custom_exc
See this SO question and make sure there isn't something in your sitecustomize.py that prevents debugging in interactive mode.
