How to always enable debug in a Python Cmd2 App?

I am using the Cmd2 module in Python (version 1.0.2) to build a command-line interface (CLI).
After I run the program (so that I am inside my custom CLI), if I want debugging to be enabled so that it shows stack traces on errors, I have to manually run "set debug true" from the CLI.
What I want is a way to automatically set the "debug" flag to true every time the CLI is invoked. I know I can pass scripts to the CLI that include setting debug as the first step, but I want interactive sessions to also have this behavior.
Is there any way to change the default value for debug in Cmd2?

The cmd2 docs about settings say (emphases mine):
Settings
Settings provide a mechanism for a user to control the behavior of a cmd2 based application. A setting is stored in an instance attribute on your subclass of cmd2.Cmd and must also appear in the cmd2.Cmd.settable dictionary. Developers may set default values for these settings and users can modify them at runtime using the set command.
So, to enable the debug setting by default, you just have to set the debug attribute of your cmd2.Cmd object to True. For example, if this is the app:
import cmd2

class App(cmd2.Cmd):
    @cmd2.with_argument_list
    def do_spam(self, args):
        raise Exception("a sample exception")
you just have to do
app = App()
app.debug = True
Now, if I run the app from the command line, debug will be enabled by default.
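Equivalently, you can bake the default into the subclass itself, so that every instance starts with debug enabled; a minimal sketch (same App as above):

import cmd2

class App(cmd2.Cmd):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Enable stack traces on errors for every session;
        # users can still run `set debug false` at runtime.
        self.debug = True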
Full Python code:
import cmd2

class App(cmd2.Cmd):
    @cmd2.with_argument_list
    def do_spam(self, args):
        raise Exception("a sample exception")

if __name__ == '__main__':
    import sys
    app = App()
    app.debug = True
    sys.exit(app.cmdloop())
Input:
spam
Output:
Traceback (most recent call last):
  File "[...]\venv\lib\site-packages\cmd2\cmd2.py", line 1646, in onecmd_plus_hooks
    stop = self.onecmd(statement, add_to_history=add_to_history)
  File "[...]\venv\lib\site-packages\cmd2\cmd2.py", line 2075, in onecmd
    stop = func(statement)
  File "[...]\venv\lib\site-packages\cmd2\decorators.py", line 69, in cmd_wrapper
    return func(cmd2_app, parsed_arglist, **kwargs)
  File "[...]/main.py", line 7, in do_spam
    raise Exception("a sample exception")
Exception: a sample exception
EXCEPTION of type 'Exception' occurred with message: 'a sample exception'

Related

Customize log format in python Azure Functions

I am writing many Python Azure Functions. I want every line in logs to be prefixed with invocation-id from context to segregate and correlate the logs easily.
I know there are multiple ways to do this for a normal/stand-alone Python application. Here, however, the Azure Functions runtime provides the environment that invokes my code. I don't want to (or would prefer not to):
mess around with existing handlers/formatters registered by Azure Function runtime or
write my own handlers/formatters
(because whatever is registered by default sends the logs to Azure Log Analytics workspace and powers my dashboards etc)
E.g. the following code:
import logging
from azure import functions as func

def main(msg: func.QueueMessage, ctx: func.Context) -> None:
    logging.info('entry')
    logging.info('invocation id of this run: %s', ctx.invocation_id)
    logging.debug('doing something...')
    logging.info('exit with success')
will produce logs like:
entry
invocation id of this run: 111-222-33-4444
doing something...
exit with success
what I want instead is:
(111-222-33-4444) entry
(111-222-33-4444) invocation id of this run: 111-222-33-4444
(111-222-33-4444) doing something...
(111-222-33-4444) exit with success
I've seen some docs on Azure; they seem useless.
You can use a LoggerAdapter to do this, as shown by the following runnable program:
import logging

class Adapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        return '(%s) %s' % (self.extra['context'], msg), kwargs

def main(msg, ctx):
    logger = Adapter(logging.getLogger(), {'context': ctx})
    logger.info('entry')
    logger.info('invocation id of this run: %s', ctx)
    logger.debug('doing something ...')
    logger.info('exit with success')

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG, format='%(message)s')
    main('hello', '111-222-33-4444')
Obviously I've removed the Azure references so that I can run it locally, but you should get the gist. The preceding script prints
(111-222-33-4444) entry
(111-222-33-4444) invocation id of this run: 111-222-33-4444
(111-222-33-4444) doing something ...
(111-222-33-4444) exit with success
Update: If you don't want to or can't use LoggerAdapter, you can subclass Logger as documented here, or use a Filter as documented here. In the latter case you'd still have to attach the filter to all loggers (or handlers, which would be easier) of interest, as sketched below.
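For completeness, a minimal sketch of the Filter route (the ContextFilter name is mine, not from the logging docs); it prefixes the message itself, so the existing formatters stay untouched:

import logging

class ContextFilter(logging.Filter):
    """Prefix every record's message with the invocation id (hypothetical helper)."""
    def __init__(self, invocation_id):
        super().__init__()
        self.invocation_id = invocation_id

    def filter(self, record):
        # Mutates the record; attach to a single handler to avoid double-prefixing
        record.msg = '(%s) %s' % (self.invocation_id, record.msg)
        return True

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG, format='%(message)s')
    for handler in logging.getLogger().handlers:
        handler.addFilter(ContextFilter('111-222-33-4444'))
    logging.info('entry')  # -> (111-222-33-4444) entry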

Pytest with logging and Click fails with ValueError only within Docker

I have a library I'm working on which makes use of Click; it's contained within a Docker image. I'm trying to test it with pytest, using click.testing.CliRunner. I use logging to write logs, and in pyproject.toml I've specified that these logs should be emitted. When an exception is raised in my code, and only within Docker, I get the following exception from Click:
    except Exception as e:
        if not catch_exceptions:
            raise
        exception = e
        exit_code = 1
        exc_info = sys.exc_info()
    finally:
        sys.stdout.flush()
>       stdout = outstreams[0].getvalue()
E       ValueError: I/O operation on closed file.

/opt/conda/lib/python3.8/site-packages/click/testing.py:434: ValueError
I've managed to minimally reproduce this issue. My code looks something like this:
import logging, click

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

@click.command()
@click.argument('value')
def main(value):
    logger.info(value)
    raise RuntimeError()
My tests look like this:
import pytest
from click.testing import CliRunner
from main import main

def test_main():
    runner = CliRunner()
    runner.invoke(main, ['hello'], catch_exceptions=False)
    assert True
And my pyproject.toml is:
[tool.pytest.ini_options]
log_cli = true
log_level = "INFO"
Removing the logging, the CliRunner, or pytest (i.e. running test_main directly) does not trigger the ValueError, and the RuntimeError is the only exception raised. Running this outside of a Docker container also does not raise the ValueError.
How can I avoid this error?
This code is available on a GitHub repo for reproduction. I reproduced this issue in a continuum/miniconda3 container.

Unit testing tornado applications: How to improve the display of error messages

I am using unittest to test a tornado app with several handlers, one of which raises an exception. If I run the following test code with python test.py:
# test.py
import unittest
import tornado.web
import tornado.testing

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write('Hello World')  # handler works correctly

class HandlerWithError(tornado.web.RequestHandler):
    def get(self):
        raise Exception('Boom')  # handler raises an exception
        self.write('Hello World')

def make_app():
    return tornado.web.Application([
        (r'/main/', MainHandler),
        (r'/error/', HandlerWithError),
    ])

class TornadoTestCase(tornado.testing.AsyncHTTPTestCase):
    def get_app(self):
        return make_app()

    def test_main_handler(self):
        response = self.fetch('/main/')
        self.assertEqual(response.code, 200)  # test should pass

    def test_handler_with_error(self):
        response = self.fetch('/error/')
        self.assertEqual(response.code, 200)  # test should fail with error

if __name__ == '__main__':
    unittest.main()
the test output looks like:
ERROR:tornado.application:Uncaught exception GET /error/ (127.0.0.1)
HTTPServerRequest(protocol='http', host='localhost:36590', method='GET', uri='/error/', version='HTTP/1.1', remote_ip='127.0.0.1', headers={'Connection': 'close', 'Host': 'localhost:36590', 'Accept-Encoding': 'gzip'})
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/web.py", line 1332, in _execute
    result = method(*self.path_args, **self.path_kwargs)
  File "test.py", line 13, in get
    raise Exception('Boom')  # handler raises an exception
Exception: Boom
ERROR:tornado.access:500 GET /error/ (127.0.0.1) 19.16ms
F.
======================================================================
FAIL: test_handler_with_error (__main__.TornadoTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/tornado/testing.py", line 118, in __call__
    result = self.orig_method(*args, **kwargs)
  File "test.py", line 33, in test_handler_with_error
    self.assertEqual(response.code, 200)  # test should fail with error
AssertionError: 500 != 200
----------------------------------------------------------------------
Ran 2 tests in 0.034s

FAILED (failures=1)
However, I would expect unittest to report an Error for the second test, instead of a failing assertion. Moreover, the fact that the traceback for the 'Boom' exception appears before the unittest test report and does not include a reference to the failing test function makes it difficult to find the source of the exception.
Any suggestions on how to handle this situation?
Thanks in advance!
EDIT
What I find unexpected is that test_handler_with_error actually gets as far as the assertEqual assertion, instead of raising the error. For example, the following code does not execute the self.assertEqual statement, and consequently reports an ERROR instead of a FAIL in the test output:
# simple_test.py
import unittest

def foo():
    raise Exception('Boom')
    return 'bar'

class SimpleTestCase(unittest.TestCase):
    def test_failing_function(self):
        result = foo()
        self.assertEqual(result, 'bar')

if __name__ == '__main__':
    unittest.main()
You can disable logging and only the test reports will appear:
logging.disable(logging.CRITICAL)
You can put that, for example, in your TestCase subclass or in the test runner (see the sketch below).
More info: How can I disable logging while running unit tests in Python Django?
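For example, a minimal sketch (the class name is mine) that applies this to the test case from the question:

import logging
import tornado.testing

class QuietTornadoTestCase(tornado.testing.AsyncHTTPTestCase):
    @classmethod
    def setUpClass(cls):
        # Silence tornado.application / tornado.access output during these tests
        logging.disable(logging.CRITICAL)

    @classmethod
    def tearDownClass(cls):
        # Restore logging for whatever runs after this class
        logging.disable(logging.NOTSET)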
Keep in mind that CI/CD systems actually use a normalized report (e.g. JUnit XML) and then present it in a more readable/elegant way. More info:
Python script to generate JUnit report from another testing result
How to output coverage XML with nosetests?
This is expected behavior. Your test itself asserts that the return code is HTTP 200, and since this is a formal assert that is false, the outcome is a "failure" instead of an "error". You can suppress logs as mentioned in kwaranuk's answer, but then you lose the information about what actually caused the HTTP 500 error.
Why does your code reach the assert, instead of throwing? It's because your test code does not call HandlerWithError.get. Your test code begins an asynchronous HTTP GET operation with an HTTP client provided by the AsyncHTTPTestCase class. (Check the source code of that class for details.) The event loop runs until HandlerWithError.get receives the request over a localhost socket, and responds on that socket with an HTTP 500. When HandlerWithError.get fails, it doesn't raise an exception into your test function, any more than a failure at Google.com would raise an exception: it merely results in an HTTP 500.
Welcome to the world of async! There's no easy way to neatly associate the assertion error and the traceback from HandlerWithError.get().
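That said, if you want the expected server-side traceback tied to the test that triggers it, tornado.testing.ExpectLog can scope (and suppress) the log output within a single test. A sketch, with the assertion changed to expect the 500 (my suggestion, not from the original answers):

import tornado.testing

class TornadoTestCase(tornado.testing.AsyncHTTPTestCase):
    def test_handler_with_error(self):
        # Suppress the "Uncaught exception" log line while the block runs;
        # the context manager fails the test if the message never appears.
        with tornado.testing.ExpectLog('tornado.application', 'Uncaught exception'):
            response = self.fetch('/error/')
        self.assertEqual(response.code, 500)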

Getting settings and config from INI file for Pyramid functional testing

In a real Pyramid app, the functional-testing setup from the docs (http://docs.pylonsproject.org/projects/pyramid//en/latest/narr/testing.html) does not work:
class FunctionalTests(unittest.TestCase):
    def setUp(self):
        from myapp import main
        app = main({})
Exception:
Traceback (most recent call last):
  File "C:\projects\myapp\tests\model\task_dispatcher_integration_test.py", line 35, in setUp
    app = main({})
  File "C:\projects\myapp\myapp\__init__.py", line 207, in main
    engine = engine_from_config(settings, 'sqlalchemy.')
  File "C:\projects\myapp\ve\lib\site-packages\sqlalchemy\engine\__init__.py", line 407, in engine_from_config
    url = options.pop('url')
KeyError: 'url'
The reason is trivial: an empty dictionary is passed to main, whereas when the real app runs (from __init__.py), main gets settings pre-filled with values from the [app:main] section of development.ini / production.ini:
settings {'ldap_port': '4032', 'sqlalchemy.url': 'postgresql://.....}
Is there some way of reconstructing settings easily from an .ini file for functional testing?
pyramid.paster.get_appsettings is the only thing you need:
from pyramid.paster import get_appsettings
settings = get_appsettings('test.ini', name='main')
app = main(settings)
That test.ini can include all the settings of another .ini file easily like this:
[app:main]
use = config:development.ini#main
and then you only need to override the keys that change (I guess you'd rather test against a separate DB):
[app:main]
use = config:development.ini#main
sqlalchemy.url = postgresql://....
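Putting the pieces together, a minimal functional-test sketch (myapp and the '/' route are placeholders from the question, not real names):

import unittest
from pyramid.paster import get_appsettings
from webtest import TestApp

class FunctionalTests(unittest.TestCase):
    def setUp(self):
        from myapp import main  # placeholder package from the question
        settings = get_appsettings('test.ini', name='main')
        self.testapp = TestApp(main({}, **settings))

    def test_root(self):
        # hypothetical route; use one your app actually serves
        self.testapp.get('/', status=200)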
In case anyone else doesn't get @antti-haapala's answer right away:
Create a test.ini filled with:
[app:main]
use = config:development.ini#main
(Actually, this step is not necessary: you could also keep your development.ini and use it instead of test.ini in the following code. A separate test.ini might, however, be useful if you want separate settings for testing.)
In your tests.py add:
from pyramid.paster import get_appsettings
settings = get_appsettings('test.ini', name='main')
and replace
app = TestApp(main({}))
with
app = TestApp(main(global_config=None, **settings))
Relevant for this answer was the following comment: Pyramid fails to start when webtest and sqlalchemy are used together
Actually, you don't need to import get_appsettings; just add the parameters like this:
class FunctionalTests(unittest.TestCase):
    def setUp(self):
        from myapp import main
        settings = {'sqlalchemy.url': 'sqlite://'}
        app = main({}, **settings)
Here is the source: functional test; it is in the second code block, line 31.
Yes there is, though how easy it is is subject to debate.
I am using the following py.test fixture to make the --ini option passed on the command line available to the tests. However, this approach is limited to the py.test test runner, as other test runners do not have such flexibility.
Also, my test.ini has special settings, like disabling outgoing mail and instead printing it to the terminal and a test-accessible backlog.
import os

import pytest
from pyramid.paster import get_appsettings, setup_logging

@pytest.fixture(scope='session')
def ini_settings(request):
    """Load INI settings for test run from py.test command line.

    Example:

        py.test yourpackage -s --ini=test.ini

    :return: A dictionary representing the key/value pairs in an ``app`` section within the file represented by ``config_uri``
    """
    if not getattr(request.config.option, "ini", None):
        raise RuntimeError("You need to give --ini test.ini command line option to py.test to find our test settings")

    # Unrelated, but if you need to poke standard Python ConfigParser do it here
    # from websauna.utils.configincluder import monkey_patch_paster_config_parser
    # monkey_patch_paster_config_parser()

    config_uri = os.path.abspath(request.config.option.ini)
    setup_logging(config_uri)
    config = get_appsettings(config_uri)

    # To pass the config filename itself forward
    config["_ini_file"] = config_uri
    return config
Then I can set up the app (note that pyramid.paster.bootstrap parses the config file again):
@pytest.fixture(scope='session')
def app(request, ini_settings):
    """Initialize WSGI application from INI file given on the command line.

    TODO: This can be run only once per testing session, as SQLAlchemy does some stupid shit on import, leaks globals and if you run it again it doesn't work. E.g. trying to manually call ``app()`` twice::

        Class <class 'websauna.referral.models.ReferralProgram'> already has been instrumented declaratively

    :return: WSGI application instance as created by ``Initializer.make_wsgi_app()``.
    """
    from pyramid.paster import bootstrap
    if not getattr(request.config.option, "ini", None):
        raise RuntimeError("You need to give --ini test.ini command line option to py.test to find our test settings")
    data = bootstrap(ini_settings["_ini_file"])
    return data["app"]
Furthermore, setting up a functional test server:
import threading
import time
from wsgiref.simple_server import make_server
from urllib.parse import urlparse

from pyramid.paster import bootstrap
import pytest
from webtest import TestApp

#: The URL where the WSGI server runs and from where the Selenium browser loads the pages
HOST_BASE = "http://localhost:8521"

class ServerThread(threading.Thread):
    """Run WSGI server on a background thread.

    Pass in a WSGI app object and serve pages from it for the Selenium browser.
    """

    def __init__(self, app, hostbase=HOST_BASE):
        threading.Thread.__init__(self)
        self.app = app
        self.srv = None
        self.daemon = True
        self.hostbase = hostbase

    def run(self):
        """Open WSGI server to listen to HOST_BASE address."""
        parts = urlparse(self.hostbase)
        domain, port = parts.netloc.split(":")
        self.srv = make_server(domain, int(port), self.app)
        try:
            self.srv.serve_forever()
        except Exception:
            # We are a background thread, so we have problems interrupting tests in the case of error
            import traceback
            traceback.print_exc()
            # Failed to start
            self.srv = None

    def quit(self):
        """Stop test webserver."""
        if self.srv:
            self.srv.shutdown()
@pytest.fixture(scope='session')
def web_server(request, app) -> str:
    """py.test fixture to create a WSGI web server for functional tests.

    :param app: py.test fixture for constructing a WSGI application
    :return: localhost URL where the web server is running.
    """
    server = ServerThread(app)
    server.start()

    # Wait a semi-random time to allow the SocketServer to initialize itself.
    # TODO: Replace this with a proper event telling that the server is up.
    time.sleep(0.1)
    assert server.srv is not None, "Could not start the test web server"

    host_base = HOST_BASE

    def teardown():
        server.quit()

    request.addfinalizer(teardown)
    return host_base
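A hypothetical test using the web_server fixture could then look like this (the '/' route is a placeholder; any HTTP client would do):

import urllib.request

def test_front_page(web_server):
    # web_server is the localhost URL returned by the fixture above
    response = urllib.request.urlopen(web_server + "/")
    assert response.getcode() == 200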

How can I debug pserve using Eclipse?

I'm getting started with Pyramid development on Windows. I have Python 2.7 installed. I used virtualenv to create a nice sandbox for my Pyramid app. I also installed PyDev 2.4 on Eclipse Indigo and created a separate PyDev interpreter just for my virtualenv, so it should have access to all the directories.
I set up a new debug configuration.
Project: testapp (the only project in the workspace)
Main module: ${workspace_loc:testapp/Scripts/pserve-script.py}
Args: development.ini
Working dir: Other: ${workspace_loc:testapp/testapp}
When I hit Debug, the output is:
pydev debugger: starting
Starting server in PID 2208.
Unhandled exception in thread started by
Traceback (most recent call last):
  File "C:\Tools\eclipse-cpp-indigo-SR1-incubation-win32-x86_64\eclipse\plugins\org.python.pydev.debug_2.3.0.2011121518\pysrc\pydevd.py", line 200, in __call__
    self.original_func(*self.args, **self.kwargs)
TypeError: ThreadedTaskDispatcher object argument after ** must be a mapping, not tuple
(the same traceback is printed several more times, interleaved across threads)
serving on http://0.0.0.0:6543
Even though it says the server is running, it's not. Nothing is listening on that port.
Any idea on how to fix this? Debugging certainly isn't necessary, but I like having a fully set up development environment. Thanks!
Pyramid includes remarkably good debug support in the form of the debug toolbar.
Make sure that the line
pyramid.includes = pyramid_debugtoolbar
in your development.ini isn't commented out. It doesn't support Eclipse breakpoints, but it gives you almost everything else you'd want.
I haven't run into that error, but on difficult-to-debug environments the remote debugger (http://pydev.org/manual_adv_remote_debugger.html) can be used (that way it works somewhat like pdb: you add code to set a breakpoint, and until that point your program runs as usual).
Pyramid's pserve seems to use multiple threads, as Fabio suggests might be the case. I found I could make breakpoints work by monkey-patching the ThreadedTaskDispatcher before invoking pserve:
# Allow attaching PyDev to the web app
import sys; sys.path.append('..../pydev/2.5.0-2/plugins/org.python.pydev.debug_2.4.0.201208051101/pysrc/')

# Monkey-patch the thread task dispatcher, so it sets up the tracer in the worker threads
from waitress.task import ThreadedTaskDispatcher

_prev_start_new_thread = ThreadedTaskDispatcher.start_new_thread

def start_new_thread(ttd, fn, args):
    def settrace_and_call(*args, **kwargs):
        import pydevd; pydevd.settrace(suspend=False)
        return fn(*args, **kwargs)
    from thread import start_new_thread  # Python 2 low-level thread API
    start_new_thread(settrace_and_call, args)

ThreadedTaskDispatcher.start_new_thread = start_new_thread
Note, I also tried:
set_trace(..., trace_only_current_thread=False)
but this either makes the app unusably slow, or doesn't work for some other reason.
Having done the above, the app will, when run, automatically register itself with the PyDev debug server running locally. See:
http://pydev.org/manual_adv_remote_debugger.html
