When using pdb to debug Python code, I often wish that commands such as next, return, and until would show the time it takes to run until the next time pdb breaks.
Is it possible to change a setting or write a plugin to make this happen?
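A minimal sketch of one way this could be done, by subclassing pdb.Pdb (do_next, do_return, do_until, interaction and message are real pdb.Pdb hooks; the timing wiring itself is my assumption, not a built-in setting):

import pdb
import time

class TimingPdb(pdb.Pdb):
    _t0 = None

    def _start_timer(self):
        self._t0 = time.time()

    def do_next(self, arg):
        self._start_timer()
        return super().do_next(arg)

    def do_return(self, arg):
        self._start_timer()
        return super().do_return(arg)

    def do_until(self, arg):
        self._start_timer()
        return super().do_until(arg)

    def interaction(self, frame, traceback):
        # Called each time the debugger regains control; report elapsed time
        if self._t0 is not None:
            self.message("elapsed: %.3fs" % (time.time() - self._t0))
            self._t0 = None
        super().interaction(frame, traceback)

Using TimingPdb().set_trace() in place of breakpoint() would then print the elapsed time after each next, return, or until.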
Related
My main Python code calls functions I have written, which are stored in separate .py files for a better overview. To improve my code, I want the program to start from the beginning and stop at a defined position where I need to make a repair. After it stops, I want access to the local variables at the point where the code halted. That is, I want to take some of the code that ran before the halt and experiment with it in the console. After testing in the console, I want to correct my original code and run the program again.
Example:
Suppose the following line does not execute as you expect:
if a.find('xyz')==-1:
Therefore you stop the program just before:
breakpoint()
if a.find('xyz')==-1:
Now you want to find out why exactly this line is not working as you expected. Maybe the problem is the variable a, or the string xyz, or the way the find command is applied? So I would now enter a.find('xyz') in the console, then vary and adjust the command. After a few tests in the console I find out that the correct command must be a.find('XYZ'). Now I correct the line in my original code and restart the program.
But this is not possible, because the halt command breakpoint() or pdb.set_trace() prevents me from using the console. Instead I end up in debug mode, where I can only step through the code line by line or display variables.
How can I debug my code as desired?
The following workarounds also do not help:
sys.exit()
I stop the code with sys.exit(). The main problem with this method is that I have no access to the variables if the code stopped in another file. I also cannot see where the program stopped: if I have several sys.exit() calls distributed across a large code base, I do not know which one was hit. I can define individual outputs with sys.exit('position1'), sys.exit('position2'), but I still have to find the file manually and scroll to the given position.
cells
I define cells with #%%. Then I can run these cells separately, but I cannot run all cells from the beginning until the end of a given cell.
Spyder 5.2.2
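One way to get the kind of console access described above (a sketch, not part of the original question; code.interact is a standard-library function) is to replace breakpoint() with a call that opens a plain interactive interpreter in the current scope:

import code

a = "some text with XYZ in it"   # stand-in for the question's variable

# Instead of breakpoint(): open an interactive console that sees the current
# local and global names; leave it with Ctrl-D (Ctrl-Z then Enter on Windows)
code.interact(local={**globals(), **locals()})

if a.find('xyz') == -1:
    print("not found")

Alternatively, after a normal breakpoint() you can type interact at the (Pdb) prompt, which opens a similar interpreter with the frame's variables available.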
I have a script which I want to run automatically on a daily basis, at the same time each day. I tried to use Windows Task Scheduler, but no luck so far. FYI, I can run the same script from the console without any issue.
I tried
where python
C:\Users\name\Anaconda3\python.exe
C:\Users\name\AppData\Local\Programs\Python\Python38\python.exe
C:\Users\name\AppData\Local\Microsoft\WindowsApps\python.exe
and in Task Scheduler I set:
Program/script: "C:\Users\name\AppData\Local\Programs\Python\Python38\python.exe"
Add arguments: Nomura_Daily_PnL_Check.py
Start in: "C:\Users\name\Jobs\scripts_need_to_run_daily"
When the scheduled time comes, literally nothing happens. No error, no output, nothing!
What is wrong in this process?
Try entering the values without the double quotes, like:
Program/script: C:\Users\name\AppData\Local\Programs\Python\Python38\python.exe
Add arguments: Nomura_Daily_PnL_Check.py
Start in: C:\Users\name\Jobs\scripts_need_to_run_daily
Also, first try to run a simple program, say a Python script that writes to a file, so that you know whether the task itself is failing or whether there is some issue with your program.
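A minimal sketch of such a test script (the log file name task_check.log is my assumption, purely for illustration): if the scheduled task launches Python at all, a timestamp line is appended next to the script.

import datetime
import os

# Write a timestamp next to this script so you can tell whether the task ran at all
log_path = os.path.join(os.path.dirname(os.path.abspath(__file__)), "task_check.log")
with open(log_path, "a") as f:
    f.write("ran at %s\n" % datetime.datetime.now().isoformat())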
I'm using PyCharm and playing with the profiler it has built in. I've keyed in on some areas where my code can be optimized, but I was wondering if there was a way to step through the code and see how long each line took to execute as I stepped through, without having to rerun all my code in the profiler.
I think the closest you could do is put a breakpoint,
then open up the debugger and enter console mode,
and execute the statement as:
started = time.time(); my_function(); print("Took %0.2fs" % (time.time() - started))
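As a small extension of the same idea (a sketch; my_function is the same placeholder name as above, and time.perf_counter is a standard-library timer), a reusable helper you could paste into the debug console:

import time

def timed(fn, *args, **kwargs):
    # Run fn once and report how long the call took
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    print("Took %0.4fs" % (time.perf_counter() - start))
    return result

# Example: timed(my_function)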
We have been running a script on partner's computer for 18 hours. We underestimated how long it would take, and now need to turn in the results. Is it possible to stop the script from running, but still have access to all the lists we are building?
We need to add additional code to the one we are currently running that will use the lists being populated right now. Is there a way to stop the process, but still use (what has been generated of) the lists in the next portion of code?
My partner was using Python interactively.
Update
We were able to print the results and copy-and-paste them after interrupting the program with Ctrl-C.
Well, the OP doesn't seem to need an answer anymore, but I'll answer anyway for anyone else coming across this.
While it is true that stopping the program will discard all data from memory, you can still save it first: you can attach a debug session and save whatever you need before you kill the process.
Both PyCharm and PyDev support attaching their debugger to a running python application.
See here for an explanation how it works in PyCharm.
Once you've attached the debugger, you can set a breakpoint in your code and the program will stop when it hits that line the next time. Then you can inspect all variables and run some code via the 'Evaluate' feature. This code may save whatever variable you need.
I've tested this with PyCharm 2018.1.1 Community Edition and Python 3.6.4.
In order to do so, I ran this code, which I saved as test.py,
import collections
import time

data = collections.deque(maxlen=100)
i = 0
while True:
    data.append(i % 1000)
    i += 1
    time.sleep(0.001)
via the command python3 test.py from an external Windows PowerShell instance.
Then I opened that file in PyCharm and attached the debugger. I set a breakpoint at the line i += 1 and it halted right there. Then I evaluated the following code fragment:
import json

with open('data.json', 'w') as ofile:
    json.dump(list(data), ofile)
And found all entries from data in the json file data.json.
Follow-up:
This even works in an interactive session! I ran the very same code in a jupyter notebook cell and then attached the debugger to the kernel. Still having test.py open, I set the breakpoint again on the same line as before and the kernel halted. Then I could see all variables from the interactive notebook session.
I don't think so. Stopping the program should also release all of the memory it was using.
edit: See Swenzel's comment for one way of doing it.
A fairly large Python program I wrote runs, but sometimes, after running for minutes or hours, at a moment that is not easily reproducible, it hangs and outputs nothing to the screen.
I have no idea what it is doing at that moment, or in what part of the code it is.
How can I run this in a debugger or something to see which lines of code the program is executing at the moment it hangs?
It's too large to put "print" statements all over the place.
I did:
python -m trace --trace /usr/local/bin/my_program.py
but that gives me so much output that I can't really see anything, just millions of lines scrolling on the screen.
Best would be if I could send some signal to the program with "kill -SIGUSR1" or something, and at that moment the program would drop into a debugger and show me the line it stopped at and possibly allow me to step through the program then.
I've tried:
pdb usr/local/bin/my_program.py
and then:
(Pdb) cont
but what do I do to see where I am when it hangs?
It doesn't throw an exception; it just seems to wait for something, possibly in an infinite loop.
One more detail: when the program hangs and I press ^C and then (not sure if that is necessary), the program continues normally, without throwing any exception and without giving me any hint on the screen as to why it stopped.
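A sketch of the signal-handler idea wished for above (my own illustration of that idea, assuming a Unix platform where SIGUSR1 exists; not something the question confirms was tried):

import pdb
import signal

def debug_handler(signum, frame):
    # Drop into pdb in whatever frame was executing when the signal arrived
    pdb.Pdb().set_trace(frame)

signal.signal(signal.SIGUSR1, debug_handler)

With this registered near the start of the program, kill -SIGUSR1 <pid> should land you at a (Pdb) prompt showing the line currently being executed, from which you can inspect variables or step further.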
This could be useful to you. I usually do
>>> import pdb
>>> import program2debug
>>> pdb.run('program2debug.test()')
I usually add a -v option to my programs, which enables tons of print statements explaining what I'm doing in detail. When you write a program in the future, consider doing the same before it gets thousands of lines big.
You could try running it in debug mode in an IDE like pydev (eclipse) or pycharm. You can break the program at any moment and get to its current execution point.
No program is ever too big to put print statements all over the place. You need to read up on the logging module and insert lots of logging.debug() statements. This is just a better form of print statement that outputs to a file, and can be turned off easily in production software. But years from now, when you need to modify the code, you can easily turn it all back on and get the benefit of the insight of the original programmer.
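A minimal sketch of that logging setup (the file name and format string are my assumptions): logging.debug() calls stay in the code and can be silenced later simply by raising the level.

import logging

logging.basicConfig(
    filename="my_program.log",   # assumed log file name
    level=logging.DEBUG,         # raise to logging.WARNING to silence debug output in production
    format="%(asctime)s %(levelname)s %(name)s: %(message)s",
)

logging.debug("entering main loop")   # example of a statement you would sprinkle around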