Think about this scenario:
I debug my Django project and I step through the code (in and out). The debugger sometimes enters Django libraries or other external libraries.
Does anyone know how to prevent the debugger from entering external code? Or, at least, is there a 'big' step out that gets the debugger back to the project code?
Does anyone know how to prevent the debugger from entering external code?
Yes, Dmitry Trofimov knows;
(...) add modules you don't want to trace to the dict DONT_TRACE in <pycharm-distr>/helpers/pydev/pydevd.py
That is a hacky solution (...)
If you want this feature to be less hacky you can vote on it by visiting issue
PY-9101 Implement "Do not step into the classes" option for Python debugger
Those using pdb might be interested to know that pdb has such a feature:
Starting with Python 3.1, the Pdb class has a new argument called skip -
class pdb.Pdb(completekey='tab', stdin=None, stdout=None, skip=None, nosigint=False)
The skip argument, if given, must be an iterable of glob-style module name patterns. The debugger will not step into frames that originate in a module that matches one of these patterns. [1]

[1] Whether a frame is considered to originate in a certain module is determined by the __name__ in the frame globals.
The example given in the docs shows how to skip Django's packages -
import pdb; pdb.Pdb(skip=['django.*']).set_trace()
Everything looks the same to the debugger; it can't distinguish between your code and Django's code – it's all Python. So it will step through everything. However, if you want to stop it from drilling down that low, you'll have to start “stepping over” lines of code instead of “stepping into” them.
According to the PyCharm docs, you'll want to use F8 whenever you see a line of code that looks like it could be a gateway into Django's internals. If you accidentally find yourself inside Django's source code, you can hit Shift+F8 until you're out of it.
Related
I know how to use PyCharm's debugger, but that has only deepened my curiosity about how it manages to be so tightly coupled to the Python interpreter.
Does CPython have some sort of interpreter hooks buried in itself, or does PyCharm somehow copy the source code, instrument it, and then exec it?
Thanks to @unholySheep I was able to go from the GitHub source of PyDev.Debugger back to sys.settrace, which led to the Python Module of the Week post on settrace.
Once the tracing script has the stack frame, it is likely a non-trivial task to inspect the frame's contents and/or use code/exec/eval to run "watch" expressions in context. As for breakpoints, that part is comparatively trivial: it is just a matter of matching the frame's line number and file path.
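To make the mechanism concrete, here is a minimal settrace sketch (the tracer function and the toy add_up example are mine, not taken from PyDev.Debugger): the interpreter hands the trace function a live frame for every 'line' event, which is exactly what a breakpoint check or a watch expression needs:

import sys

WATCHED_FILE = __file__  # a real debugger would keep a list of (file, line) breakpoints

def tracer(frame, event, arg):
    # settrace fires 'call', 'line', 'return' and 'exception' events;
    # a debugger mostly reacts to 'line' events.
    if event == "line" and frame.f_code.co_filename == WATCHED_FILE:
        print("line", frame.f_lineno, "in", frame.f_code.co_name, "locals:", frame.f_locals)
    return tracer  # returning the tracer keeps tracing inside nested calls

def add_up(numbers):
    total = 0
    for n in numbers:
        total += n
    return total

sys.settrace(tracer)
add_up([1, 2, 3])
sys.settrace(None)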
I am analyzing an existing Python application that runs to hundreds of lines of code. Adding a log statement per line to capture the flow and understand the runtime processing is painful, and the current application logging is very poor, consisting of bare print statements.
For support purposes these are not enough, since it is difficult to understand what happened without reading the code.
What is the best way to change these non-standard logs into at least something like:
Class Name - Method Name - Error Details - additional details
With even small modifications I run the risk of breaking the flow if I am not careful.
Please let me know which logging mechanism would be best for this application.
I would advise you to run python -h in a command prompt and see which possibilities you have (e.g. python -v gives verbose output about the import statements in your code). That way you might find a means of getting more information without needing to modify your source code. Obviously I don't know whether the information you get from python -v is what you're looking for.
I think decorators are probably your best option, so that you touch the existing code as little as possible.
The first link below redirects standard stdout to the standard Python logging module, so the output will have the format you want if you specify it in the logger's configuration.
https://wiki.python.org/moin/PythonDecoratorLibrary#Redirects_stdout_printing_to_python_standard_logging.
https://wiki.python.org/moin/PythonDecoratorLibrary#Logging_decorator_with_specified_logger_.28or_default.29
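A minimal sketch along the lines of the second link (a logging decorator); the decorator name, format string and divide example are mine, not taken verbatim from the wiki page:

import functools
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format="%(name)s - %(levelname)s - %(message)s",
)

def logged(func):
    """Log entry, result and exceptions of the wrapped function."""
    log = logging.getLogger(func.__module__)

    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        log.debug("calling %s args=%r kwargs=%r", func.__name__, args, kwargs)
        try:
            result = func(*args, **kwargs)
        except Exception:
            log.exception("error in %s", func.__name__)
            raise
        log.debug("%s returned %r", func.__name__, result)
        return result
    return wrapper

@logged
def divide(a, b):
    return a / b

divide(10, 2)

Applied this way, you only touch the existing code at the decoration points (or in one central place that wraps the methods you care about).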
Sometimes when developing with open source software, you need to read its source code (especially Zope/Plone). A lot of the time I need to add print statements, or debugger calls (import pdb), or comment out try/except clauses, you name it.
Sometimes I have a lot of files open while trying to track down an issue, and sometimes I forget to remove these print/debug alterations.
So, my question is: how do you keep yourself organized when doing this? Do you write "TODOs" alongside the modifications and search for them later? Do you keep everything open in your editor and, when you find what you were looking for, just revert the files (this approach isn't useful when you're chasing a really big problem that takes days, since you need to turn off your computer and come back the next day)? Or do you not worry about it at all, since print statements in a development environment are nothing to worry about?
I'm using Vim. I'm just interested to know how other programmers treat this issue.
I used to run into that problem a lot. Now, as part of my check-in process, I run a find/grep script combo that looks for my debugging statements. The only caveat is that I must keep my added debugging statements consistent so grep can find them all.
something like this:
## pre-checkin_scan.bin
find . -name "*.py" -exec grep -H --file=/homes/js/bin/pre-checkin_scan_regexp_list.grep {} \;
## pre-checkin_scan_regexp_list.grep
## (The first pattern is to ignore Doxygen comments)
^##[^#]
pdb
^ *print *( *" *Dbg
^ *print *( *" *Debug
^ *debug
In the case of my own projects, the source code is always in version control. Before committing, I always check the graphical diff so that I can see what has changed, what the commit message should be and whether I can split it up into smaller commits. That way, I almost always recognize temporary garbage like print statements. If not, I usually notice it shortly afterwards and can do an uncommit if I haven't yet pushed (works for a DVCS like git or bzr, not with Subversion).
Concerning problems that take multiple days, it's just the same thing. I don't commit until the problem is solved and then look at the diff again.
A text editor that allows editing within the graphical diff view would be really helpful in these cases, but I'm mostly using Eclipse, which doesn't support that.
Well, +1 for starting this discussion. Yes, this sometimes happens to me too: I leave those pdb calls in and commit the code to the central code base (git). I use Emacs, so before committing I usually search each file for pdb, but checking every file is hectic. So now, before committing, I check the diff very carefully. I am still looking for a better way to resolve this issue.
I also develop Python with Vim. I have never had to substantially modify the source code for debugging. I do sometimes put in debugging print statements, and I have the habit of putting "# XXX" after every one. Then, when I want to remove them (before a commit), I just search for the XXX and delete those lines.
For exceptions, I have arranged to run my code in the Vim buffer with an external interpreter that is set up to automatically enter the debugger on any uncaught exception. Then I'm placed automatically in the code at the point where the exception occurred. I use a modified debugger that can also signal Vim (actually GTK gVim) to open the source at that line.
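The answer doesn't show the setup itself, but the "enter the debugger on any uncaught exception" part can be sketched with a custom excepthook (the hook name and the boom example are mine; the part that signals Vim/Gvim is omitted):

import pdb
import sys
import traceback

def debug_on_uncaught(exc_type, exc_value, exc_traceback):
    # Print the usual traceback, then drop into a post-mortem pdb session
    # in the frame where the exception was raised.
    traceback.print_exception(exc_type, exc_value, exc_traceback)
    pdb.post_mortem(exc_traceback)

sys.excepthook = debug_on_uncaught

def boom():
    return 1 / 0  # uncaught ZeroDivisionError -> lands us in pdb inside boom()

boom()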
Caught exceptions should report meaningful errors, anyway. It is considered by many to be bad practice to do things like:
try:
    ...   # some code
except:
    pass  # "handle" everything
since you probably aren't actually handling every possible error case. By avoiding the bare except you also keep the automatic debugging working, because a swallowed exception never reaches the excepthook.
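A short sketch of the narrower alternative (the read_config example is mine): catch only the cases you can actually handle and let everything else propagate to the excepthook/debugger:

def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # The one case we genuinely know how to handle; anything else
        # (PermissionError, UnicodeDecodeError, ...) propagates and can
        # still trigger the automatic post-mortem debugging described above.
        return ""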
I can give you three suggestions:
Do not remove debugger statements. By this, I mean leave them in, but make them conditional on being in debug mode:
# Set this to True to enable Debug code
XYZ_Debug = False
if XYZ_Debug:
    do_debugging()
Oh, and if the debugging code is just to print things out, you should get familiar with logging (PyMOTW). If you are using logging, you could:
import logging

# Set this to True to enable debug output
XYZ_Debug = False

logging.basicConfig()  # make sure a handler exists so the output is actually visible
log = logging.getLogger("XYZ")
log.setLevel(logging.DEBUG if XYZ_Debug else logging.INFO)

log.debug("debug output")
Put the same unique tag (in a comment) after each line, or near each block:
do_debug_code() # XYZZY
I then use Emacs' Ibuffer feature: mark all Python buffers, then search for occurrences of this tag. Using some combination of find/grep/sed, as in the other answers, would work as well.
If you are using Mercurial and know Mercurial Queues (or might want to learn them), maintain the debug code as a patch in your queue. When you are ready for "production", or to push the current changes, pop the patch containing the debug code and go. You could achieve something like this outside of version control with diff and patch.
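The MQ workflow from that last suggestion looks roughly like this (the patch name is just an example):

hg qnew debug-prints    # start a new MQ patch to hold the debug edits
hg qrefresh             # fold the current debug changes into the patch
hg qpop                 # drop the debug patch before pushing "production"
hg qpush                # re-apply it when you need the debug code again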
I'm debugging a script that I'm writing, and the result of executing a statement from pdb does not make sense, so my natural reaction is to try to trace it with pdb.
To paraphrase:
Yo dawg, I like python, so can you put my pdb in my pdb so I can debug while I debug?
It sounds like you're looking for something listed fairly prominently in the docs, which is the set of methods that let you programmatically invoke the debugger on expressions, code in strings, or functions:
http://docs.python.org/library/pdb.html#pdb.run
http://docs.python.org/library/pdb.html#pdb.runeval
http://docs.python.org/library/pdb.html#pdb.runcall
I use these when I'm already at the pdb prompt (generally having gotten there by encountering a well-placed pdb.set_trace() statement) and want to test out, for example, variations on some method calls that aren't called in my source but which I can call right in the current context, manually.
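For instance (the normalize helper here is made up), you can step into a call that your source never actually makes:

import pdb

def normalize(text, upper=False):
    # A hypothetical helper we want to poke at under the debugger.
    text = text.strip()
    return text.upper() if upper else text.lower()

# runcall starts the debugger and stops on the first line of normalize(),
# with the arguments you supplied; type 'c' to let it run to completion.
result = pdb.runcall(normalize, "  Hello  ", upper=True)
print(result)

# runeval does the same for a single expression evaluated in the given scopes:
# pdb.runeval("normalize('  Hi  ')", globals(), locals())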
If that's not what you were looking for, do you simply want the "step" command instead of the "next" command at the prompt? (It's unclear what you really want here. An example might help.)
I'm setting up a web application to use IronPython for scripting various user actions, and I'll be exposing various business objects ready to be accessed by the script. I want to make it impossible for the user to import the CLR or other assemblies, in order to keep the script's capabilities simple and restricted to the functionality I expose in my business objects.
How do I prevent the CLR and other assemblies/modules from being imported?
This prevents imports of both Python modules and .NET objects, so it may not be exactly what you want (I'm relatively new to Python, so I might be missing some things as well):
Set up the environment.
Import anything you need the user to have access to.
Either prepend the following to their script, or execute it:
__builtins__.__import__ = None  # stops imports working
reload = None  # stops reloading working (specifically, stops them reloading
               # builtins and getting back an unbroken __import__!)
then execute their script.
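Putting the three steps together, a minimal sketch in plain Python (the exposed math module, the business_api placeholder and the inline user_script are all illustrative; in IronPython you would drive this through the hosting API, and restoring __import__ afterwards is my addition):

import builtins

# 1. Set up the environment: import what the user is allowed to touch
#    and hand it to the script explicitly.
import math

exposed_globals = {
    "math": math,              # whitelisted module
    "business_api": object(),  # placeholder for your exposed business objects
}

# 2. Break the import machinery before running the user's script.
#    (Like the __builtins__.__import__ = None line above, this affects the
#    whole interpreter, so put it back afterwards.)
real_import = builtins.__import__
builtins.__import__ = None

# 3. Execute their script.
user_script = """
print(math.sqrt(16))   # allowed: math was handed in explicitly
import os              # blocked: __import__ is gone
"""

try:
    exec(user_script, exposed_globals)
except TypeError as exc:
    print("import blocked:", exc)
finally:
    builtins.__import__ = real_import  # restore normal behaviour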
You'll have to search the script for the imports you don't want them to use, and reject the script in toto if it contains any of them.
Basically, just reject the script if it contains Assembly.Load, import or AddReference.
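A naive sketch of that check (a plain substring scan; a real implementation would want to tokenize the script rather than match strings, since e.g. the word "import" can appear inside string literals):

FORBIDDEN = ("Assembly.Load", "AddReference", "import")

def is_script_allowed(script_text):
    # Reject the whole script if any forbidden token appears anywhere in it.
    return not any(token in script_text for token in FORBIDDEN)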
You might want to implement the protection using Microsoft's Code Access Security. I myself am not fully aware of its workings (or how to make it work with IPy), but it's something which I feel you should consider.
There's a discussion thread on the IPy mailing list which you might want to look at. The question asked is similar to yours.
If you'd like to disable certain built-in modules I'd suggest filing a feature request over at ironpython.codeplex.com. This should be an easy enough thing to implement.
Otherwise you could either look at Importer.cs and disallow the import there, or simply delete ClrModule.cs from IronPython and re-build (and potentially remove any references to it).
In case anyone still comes across this thread from Google (like I did):
I managed to disable 'import clr' in Python scripts by commenting out the line
//[assembly: PythonModule("clr", typeof(IronPython.Runtime.ClrModule))]
in ClrModule.cs, but I'm not convinced this is a full solution for preventing unwanted access, since you will still need to override things like the file builtin.