I have just started to use peewee in Python. When I save table data using the .save() function, an error occurs on that line and control never reaches the next line.
I just want to know how to find out what the error is. I have narrowed it down to the line shown below:
try:
    with database.transaction():
        driver = Driver()
        driver.person = person
        driver.qualification = form.getvalue('qualification')
        driver.number = form.getvalue('phone')
        driver.license = form.getvalue('issu')
        driver.audited_by = 0
        print "this line prints"
        driver.save()
        print "this one does not print"
    print "Success"
except:
    print "Error"
Using print statements I was able to figure out that the error is in the line driver.save(). But how do I check what exactly the error is?
Peewee logs queries at the DEBUG level to the peewee namespace, so you just have to configure logging as desired. Per the docs:
import logging
logger = logging.getLogger('peewee')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)
This is described in the peewee documentation on logging.
In the future, you should also include the traceback when you're asking for help debugging an error. The traceback tells you, as best as it can, exactly what went wrong.
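For the code in the question, the quickest way to see the real error is to make the bare except: print the traceback instead of a generic message. A minimal sketch, reusing the database and Driver names from the question:

import traceback

try:
    with database.transaction():
        driver = Driver()
        driver.audited_by = 0
        driver.save()
    print "Success"
except Exception:
    traceback.print_exc()  # prints the full traceback of whatever save() raised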
If you want to do some debugging, you can check out pdb (or ipdb if you use iPython):
https://docs.python.org/2/library/pdb.html
Is there a way to show the errors that occurred on the server during a StaticLiveServerTestCase directly in the test feedback? That is, when some server function call raises an error and the page just doesn't show up, the test execution by default has no knowledge of the server error. Is there some way to pass that output on to the testing thread?
Preferably these errors would show up in the same place as errors raised directly in the test code. If this isn't (easily) possible, though, what's the next best way to quickly see those server errors?
Thanks!
Code (as requested):
class TestFunctionalVisitor(StaticLiveServerTestCase):
    def setUp(self):
        self.browser = webdriver.Firefox()

    def tearDown(self):
        self.browser.quit()

    def test_visitor(self):
        self.browser.get(self.live_server_url)
        self.assertEqual(self.browser.title, "Something")
        ...

class Home(TemplateView):
    template_name = 'home.html'

    def get_context_data(self):
        context = {}
        MyModel = None
        context['my_models'] = MyModel.objects.all()
        return context
This has been significantly altered to make it simple and short. But when MyModel is None and the view tries to call objects.all() on it, the server returns a 500 error, yet all I get from the test output is the "Something" not in self.browser.title failure, when I'd like to see the NoneType has no... error in the test output.
To see the errors immediately, run the test in DEBUG mode:
from django.test.utils import override_settings

@override_settings(DEBUG=True)
class DjkSampleTestCase(StaticLiveServerTestCase):

    # fixtures = ['club_app_phase01_2017-01-09_13-30-19-169537.json']

    reset_sequences = True
But one should also configure logging of server-side errors, either via a custom django.core.handlers.base.BaseHandler handle_uncaught_exception() implementation or via Sentry.
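For the logging route, a minimal sketch of a LOGGING setting that writes unhandled server-side errors to the console (Django routes the traceback of every unhandled view exception to the django.request logger); the handler name and level here are just one possible choice:

# settings.py (or a test settings override) - an assumed, minimal configuration
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'handlers': {
        'console': {'class': 'logging.StreamHandler'},
    },
    'loggers': {
        # receives the traceback of every unhandled view exception (500s)
        'django.request': {
            'handlers': ['console'],
            'level': 'ERROR',
            'propagate': True,
        },
    },
}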
I usually override the default logger using:
import logging
logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(levelname)s -%(filename)s:%(lineno)d - %(message)s')
This will send the log output to stderr, which shows up in your terminal. You can even do:
logging.debug('My var %s', var)
I only do this for debugging; if you want to use logging for non-debugging things, I'd suggest creating custom loggers.
More details about logging:
https://docs.djangoproject.com/en/1.10/topics/logging/
https://docs.python.org/3/library/logging.html
This is exactly why it is recommended to have many more unit tests than integration and UI/end-to-end tests. Aside from other things, the latter don't give you specific feedback, and you often need more time debugging and investigating why a UI test failed. When a unit test fails, on the other hand, it is usually a failure or an exception pointing you to a specific line in the code - you get the "What went wrong" answer right away.
In other words, the point is: Cover this particular problem with unit tests leaving your UI test as is.
To help you gather more information about the "Something" not in self.browser.title failure, turn logging on and log in as much detail as possible. You may also use the built-in error reporting and, for instance, let Django send you an email on a 500 error. In other words, collect all the details and troubleshoot the failure manually.
I guess the title says it all, but I'll elaborate.
In non-Django programs (even in non-web projects) I would like to get stack traces with:
Regular file and line number information, code of surrounding lines and scope identification (name of function and whatnot).
Local scope variables (just their names and repr() would be great)
Is there a library? A visual python debugger I could provide a plugin for? How could I go about getting this stack trace?
You can check out the traceback module in the Python documentation and the examples there.
import sys, traceback

def run_user_code(envdir):
    source = raw_input(">>> ")
    try:
        exec source in envdir
    except:
        print "Exception in user code:"
        print '-'*60
        traceback.print_exc(file=sys.stdout)
        print '-'*60

envdir = {}
while 1:
    run_user_code(envdir)
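The traceback module doesn't print the local variables the question asks about; for those you can walk the traceback object yourself and read f_locals from each frame. A minimal sketch, with an artificial exception just to demonstrate:

import sys

def print_trace_with_locals():
    # walk the traceback of the exception currently being handled
    tb = sys.exc_info()[2]
    while tb is not None:
        frame = tb.tb_frame
        code = frame.f_code
        print "%s:%s in %s()" % (code.co_filename, tb.tb_lineno, code.co_name)
        for name, value in frame.f_locals.items():
            print "    %s = %r" % (name, value)
        tb = tb.tb_next

try:
    x = 1
    raise ValueError("boom")
except Exception:
    print_trace_with_locals()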
I would like the first line of my method to be:
print "this method was called from " + filename_and_linenumber_of_code_that_called_it
Is it possible to throw an exception, catch it immediately, and print a stack trace when a method is called?
When I just want to make the code crash at some point to see the traceback, I just put "crash" in the code. Because it's not defined, the code will crash there and I will see the traceback on Django's exception page. If, in addition, I use the runserver_plus command provided by django-extensions (which requires the werkzeug package), then I get an AJAX shell at each frame of the stack trace.
I understand your problem and I'm going to propose a professional method for dealing with this kind of problem. What you are trying to do is called "debugging", and there are tools for that.
Quickstart:
run pip install ipython ipdb
replace the print statement in your code with import ipdb; ipdb.set_trace()
execute your code under runserver; it will pause and open a Python shell where you can enter the command "up" to go to the previous stack frame (the code that called your code). Type l if you want to see more lines of code.
Longer start: well, actually I wrote an overview of tools that help with debugging Python and Django.
I disagree with the other answers that propose adding more elaborate print statements. You want to be a good developer: you want to use a debugger. Be it werkzeug, pdb/ipdb, or a GUI, it doesn't matter as long as you can use it.
No need to throw an exception to view the stack. I have this nice function (it's not perfect, but I think it works) that may help you:
import inspect

def log(error):
    frame, filename, ln, fn, lolc, idx = inspect.getouterframes(inspect.currentframe())[1]
    print "Error: " + error + " " + filename, ln, fn
It prints the message followed by the name of the file that the parent function is in, then the line number of the call in this file, and then the name of the function. I hope it'll help you :)
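For example (illustrative usage; the file name and line number in the output are made up):

def save_record():
    log("save failed")  # prints something like: Error: save failed /path/to/app.py 12 save_record

save_record()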
This is CPython specific:
import sys

def func():
    frm = sys._getframe(1)
    print 'called from %s, line %s' % (frm.f_code.co_filename, frm.f_lineno)

def test():
    func()  # line 8

test()
Prints:
called from /path/to/script.py, line 8
A debugger like pdb can be helpful. Refer to the snippet below.
def f4():
    print "in f4"

def f3():
    import pdb
    pdb.set_trace()
    f4()
    print "in f3"

def f2():
    f3()
    print "in f2"

def f1():
    f2()
    print "in f1"

f1()
Once you enter the pdb console, type the up command to jump to the caller function.
I have been coding a lot in Python of late. And I have been working with data that I haven't worked with before, using formulae never seen before and dealing with huge files. All this made me write a lot of print statements to verify if it's all going right and identify the points of failure. But, generally, outputting so much information is not a good practice. How do I use the print statements only when I want to debug and let them be skipped when I don't want them to be printed?
The logging module has everything you could want. It may seem excessive at first, but use only the parts you need. I'd recommend using logging.basicConfig to set the logging level and send output to stderr, together with the simple log methods: debug, info, warning, error and critical.
import logging, sys
logging.basicConfig(stream=sys.stderr, level=logging.DEBUG)
logging.debug('A debug message!')
logging.info('We processed %d records', len(processed_records))
A simple way to do this is to call a logging function:
DEBUG = True

def log(s):
    if DEBUG:
        print s

log("hello world")
Then you can change the value of DEBUG and run your code with or without logging.
The standard logging module has a more elaborate mechanism for this.
Use the built-in logging module instead of printing.
You create a Logger object (say logger), and then after that, whenever you insert a debug print, you just put:
logger.debug("Some string")
You can use logger.setLevel at the start of the program to set the output level. If you set it to DEBUG, it will print all the debugs. Set it to INFO or higher and immediately all of the debugs will disappear.
You can also use it to log more serious things, at different levels (INFO, WARNING and ERROR).
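A minimal sketch of that setup (the logger name my_app is just an example):

import logging

logger = logging.getLogger('my_app')
logger.addHandler(logging.StreamHandler())
logger.setLevel(logging.DEBUG)  # change to logging.INFO to silence the debug output

logger.debug("Some string")               # shown only while the level is DEBUG
logger.info("Processed %d records", 42)   # still shown once the level is raised to INFO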
First off, I will second the nomination of Python's logging framework. Be a little careful about how you use it, however. Specifically: let the logging framework expand your variables; don't do it yourself. For instance, instead of:
logging.debug("datastructure: %r" % complex_dict_structure)
make sure you do:
logging.debug("datastructure: %r", complex_dict_structure)
because while they look similar, the first version incurs the repr() and string-formatting cost even if debug logging is disabled; the second version avoids it. Similarly, if you roll your own, I'd suggest something like:
def debug_stdout(sfunc):
    print(sfunc())

debug = debug_stdout
called via:
debug(lambda: "datastructure: %r" % complex_dict_structure)
which will, again, avoid the overhead if you disable it by doing:
def debug_noop(*args, **kwargs):
    pass

debug = debug_noop
The overhead of computing those strings probably doesn't matter unless they're either 1) expensive to compute or 2) the debug statement is in the middle of, say, an n^3 loop or something. Not that I would know anything about that.
I don't know about others, but I used to define a "global constant" (DEBUG) and then a global function (debug(msg)) that would print msg only if DEBUG == True.
Then I would write my debug statements like:
debug('My value: %d' % value)
...then I picked up unit testing and never did this again! :)
A better way to debug the code is by using the clrprint module.
It prints colorful output, but only when you pass the parameter debug=True:
from clrprint import *
clrprint('ERROR:', information, clr=['r', 'y'], debug=True)
I'm running Django 1.0 and I'm close to deploying my app. As such, I'll be changing the DEBUG setting to False.
With that being said, I'd still like to include the stacktrace on my 500.html page when errors occur. By doing so, users can copy-and-paste the errors and easily email them to the developers.
Any thoughts on how best to approach this issue?
Automatically log your 500s, that way:
You know when they occur.
You don't need to rely on users sending you stacktraces.
Joel recommends even going so far as automatically creating tickets in your bug tracker when your application experiences a failure. Personally, I create a (private) RSS feed with the stacktraces, urls, etc. that the developers can subscribe to.
Showing stack traces to your users on the other hand could possibly leak information that malicious users could use to attack your site. Overly detailed error messages are one of the classic stepping stones to SQL injection attacks.
Edit (added code sample to capture traceback):
You can get the exception information from the sys.exc_info() call, while formatting the traceback for display is done with the traceback module:
import traceback
import sys

try:
    raise Exception("Message")
except:
    type, value, tb = sys.exc_info()
    print >> sys.stderr, type.__name__, ":", value
    print >> sys.stderr, '\n'.join(traceback.format_tb(tb))
Prints:
Exception : Message
  File "exception.py", line 5, in <module>
    raise Exception("Message")
As @zacherates says, you really don't want to display a stacktrace to your users. The easiest approach to this problem is what Django does by default if you have yourself and your developers listed in the ADMINS setting with email addresses: it sends an email to everyone in that list with the full stack trace (and more) every time there is a 500 error with DEBUG = False.
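A minimal sketch of the relevant settings (the addresses and host are placeholders):

# settings.py
DEBUG = False
ADMINS = (('Dev Team', 'dev-team@example.com'),)

# email delivery must also be configured for the error mails to actually go out, e.g.:
EMAIL_HOST = 'smtp.example.com'
SERVER_EMAIL = 'django-errors@example.com'  # the From: address used for error mails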
If you want to show the exceptions that are generated on your template (500.html), then you can write your own 500 view, grabbing the exception and passing it to your 500 template.
Steps:
# In views.py:
import sys, traceback

from django.http import HttpResponseServerError
from django.template import RequestContext, loader

def custom_500(request):
    t = loader.get_template('500.html')
    print sys.exc_info()
    type, value, tb = sys.exc_info()
    return HttpResponseServerError(t.render(RequestContext(request, {
        'exception_value': value,
        'value': type,
        'tb': traceback.format_exception(type, value, tb),
    })))
# In Main urls.py:
from django.conf.urls.defaults import *
handler500 = 'project.web.services.views.custom_500'
# In Template(500.html):
{{ exception_value }}{{value}}{{tb}}
more about it here: https://docs.djangoproject.com/en/dev/topics/http/views/#the-500-server-error-view
You could call sys.exc_info() in a custom exception handler. But I don't recommend that. Django can send you emails for exceptions.
I know this is an old question, but these days I would recommend using a service such as Sentry to capture your errors.
On Django, the steps to set this up are incredibly simple. From the docs:
Install Raven using pip install raven
Add 'raven.contrib.django.raven_compat' to your settings.INSTALLED_APPS.
Add RAVEN_CONFIG = {"dsn": YOUR_SENTRY_DSN} to your settings.
Then, on your 500 page (defined in handler500), pass the request.sentry.id to the template and your users can reference the specific error without any of your internals being exposed.
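A rough sketch of such a handler500 (this assumes the Raven middleware attaches a sentry dict to failed requests, as its docs describe; treat the attribute and key names as assumptions):

# views.py
from django.shortcuts import render

def handler500(request):
    # request.sentry is assumed to be set by raven.contrib.django on errors
    context = {'sentry_id': getattr(request, 'sentry', {}).get('id', '')}
    return render(request, '500.html', context, status=500)

# urls.py
handler500 = 'myapp.views.handler500'

The 500.html template can then show {{ sentry_id }} to the user as an error reference.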