I would like to set a debugging command (like import ipdb; ipdb.set_trace()) that would start a debugger in Jupyter (I would have to run an HTTP server).
Does anybody know of something like this?
Context: I have long-running tasks that are processed by a scheduler (not in interactive mode). I would like to be able to debug such a task while running it the same way.
I need to run code "detached" (not interactively), and when some
error is detected I would like to start a debugger. That's why I've been
thinking about a remote debugger, a Jupyter notebook, or something similar. So, by
default there is no debugging session, which is why I think the PyCharm remote
debugger is not an option.
Contrary to what you seem to assume here, you do not actually need to run the code in a "debugging session" to use remote debugging.
Try the following:
Install pydevd in the Python environment for your "detached" code:
pip install pydevd
At the places in that code where you would otherwise have used pdb.set_trace, write:
import pydevd; pydevd.settrace('your-debugger-hostname-or-ip')
Now whenever your code hits the pydevd.settrace instruction, it will attempt to connect to your debugger server.
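For reference, settrace also accepts an explicit port and a suspend flag; a minimal sketch (the host and port here are placeholders for the machine and port your debug server listens on):

import pydevd

# Placeholder host/port: the machine where the PyDev/PyCharm debug server
# listens. suspend=True pauses execution at this line once connected.
pydevd.settrace('192.0.2.10', port=5678, suspend=True)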
You may then launch the debugger server from within Eclipse PyDev or PyCharm and have the "traced" process connect to it, ready for debugging. Read here for more details.
It is, of course, up to you to decide what to do in case of a connection timeout: you can either have your process wait for the debugger forever in a loop, or give up at some point. Here is an example that works for me (I ran the service on a remote Linux machine, connected to it via SSH with remote port forwarding, and launched the local debug server via Eclipse PyDev under Windows):
import pydevd

def wait_for_debugger(ex, retries=10):
    print("Bam. Connecting to debugger now...")
    while True:
        try:
            pydevd.settrace()
            break
        except SystemExit:
            # pydevd raises a SystemExit on connection failure for some reason
            retries -= 1
            if not retries:
                raise ex
            print(".. waiting ..")

def main():
    print("Hello")
    world = 1
    try:
        raise Exception
    except Exception as ex:
        wait_for_debugger(ex)

main()
It seems you should start the local debug server before enabling port forwarding, though. Otherwise settrace hangs indefinitely, apparently believing it has "connected" when it really hasn't.
There also seems to be a small project named rpcpdb with a similar purpose; however, I couldn't get it to work out of the box, so I can't comment much (I am convinced that stepping through code in an IDE is way more convenient anyway).
I have a Docker container running Python code on an Ubuntu 20 image; the host is also Ubuntu 20.
Intermittently, the container just gets stuck/freezes.
Logs stop being added to the console, and the container's status stays "running".
Even when I try to kill the process that runs the Python code inside the container, it has no effect; the process does not die.
Restarting the container solves it.
I put code into my service that listens for a specific signal; when I send the signal, it should print the stack trace for me (a sketch of that mechanism follows), but as mentioned, the process does not respond to my signals...
Does anyone have an idea what is causing this or how I can debug it?
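For context, that kind of signal-based stack dump can be done with the standard library's faulthandler module; a minimal sketch, with SIGUSR1 as an arbitrary choice of signal:

import faulthandler
import signal

# Dump the tracebacks of all threads to stderr whenever SIGUSR1 arrives.
# Trigger it from the host with: kill -USR1 <pid>
faulthandler.register(signal.SIGUSR1, all_threads=True)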
The problem was that the code called requests.post without setting a timeout; the server was probably unavailable or had changed address (Docker's internal network), and the call just waited there forever.
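For illustration, a minimal sketch of the fix (the URL and payload are placeholders). With a (connect, read) timeout, the call raises an exception instead of blocking forever:

import requests

try:
    # timeout=(connect, read): fail after 3.05 s if no connection is made,
    # or after 30 s with no response data, instead of hanging indefinitely.
    resp = requests.post("http://internal-service/api", json={"key": "value"},
                         timeout=(3.05, 30))
    resp.raise_for_status()
except requests.exceptions.RequestException:
    pass  # log and retry or fail fast, rather than freezing the container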
How can I spawn a pdb-like debugger in a Splunk application (meaning: an application made for and run by Splunk)?
I have no control over the python process itself, so simply putting import pdb; pdb.set_trace() in the code will just result in the web app crashing.
I guess the ideal solution would be to
either run the Python part of Splunk manually, so I have control over it (I tried this, but it didn't work correctly; the mongodb daemon wasn't starting, among other things)
or use the good old import pdb; pdb.set_trace() breakpoint but attach to the process somehow, so I'm able to interact with the debugger (I tried gdb, but nothing worked as expected; perhaps I didn't use it correctly)
One way to debug might be a remote debugger, like remote-pdb.
It behaves similarly to pdb. You can set a breakpoint, then configure the interface and a TCP port where the debugger will listen.
from remote_pdb import RemotePdb
RemotePdb('127.0.0.1', 4444).set_trace()
After that you can simply connect to the debugger using telnet: telnet 127.0.0.1 4444
More info:
https://pypi.org/project/remote-pdb/
I'm in the strange position of being both the developer of a Python utility for our project and its tester.
The app is ready, and now I want to write a couple of black-box tests that connect to the server where it resides (the server itself is the product that we commercialize) and launch the Python application.
The Python app allows minimal command-line scripting (some parameters automatically launch functions that would otherwise require user interaction at the main menu). For the remaining user interactions, I usually use bash syntax like this:
./app -f <<< $'parameter1\nparameter2\n\n'
And finally I redirect everything to >/dev/null.
If I do manual checks at the command line on the server (where I connect via SSH), everything works smoothly. The app runs for 30 seconds, and after 30 seconds I'm correctly returned to the prompt.
Now to the black-box testing part. Here I'm also using Python (the Py.test framework), but the test code resides on another machine.
The test runner machine will connect to the Server under test via Paramiko libraries. I've already used this a lot in scripting other functionalities of the product, and it works quite well.
But the problem in this case is that the app under test that I wrote in Python uses the ncurses library in its normal behaviour ("import curses"), and this apparently fails when launched from the Py.test script:
import paramiko
...
client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.load_system_host_keys()
client.connect(myhost, username=myuser, password=mypass, timeout=mytimeout)
client.exec_command("./app -f <<< $'parameter1\nparameter2\n\n' >/dev/null")
Regardless of the redirection to /dev/null, .exec_command() prints this to standard error, with a message about the curses initialization:
...
File "/my/path/to/app", line xxx, in curses_screen
scr = curses.initscr()
...
_curses.error: setupterm: could not find terminal
and finally the Py.test script fails because the app execution crashed.
Is there some conflict between curses (used by the app under test) and paramiko (used by the test script)? As I said, if I connect manually via SSH to the server where the app resides and run the command line manually with the silent redirection to /dev/null, it works as I would expect.
ncurses really would like to do input/output to a terminal. /dev/null is not a terminal, and some terminal I/O mode changes will fail in that case. Occasionally someone connects the I/O to a socket, and ncurses will (usually) work in that situation.
In your environment, besides the lack of a terminal, it is possible that TERM is unset. That will make setupterm fail.
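One way around both issues may be to ask paramiko for a pseudo-terminal, which also supplies a TERM value (vt100 by default). A minimal sketch using the asker's command, with placeholder credentials; note the doubled backslashes so bash, not Python, interprets the \n escapes:

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("myhost", username="myuser", password="mypass")

# get_pty=True requests a pseudo-terminal for this command, so
# curses.initscr() finds a terminal and TERM is set for setupterm.
stdin, stdout, stderr = client.exec_command(
    "./app -f <<< $'parameter1\\nparameter2\\n\\n' >/dev/null",
    get_pty=True,
)
exit_status = stdout.channel.recv_exit_status()  # wait for the app to finish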
Related questions:
Setupterm could not find terminal, in Python program using curses
wrong error from curses.wrapper if curses initialization fails
The issue I'm facing right now:
I deploy Python code on a remote host via SSH
the scripts are passed some arguments and must be run by a specific user
the PyCharm run/debug configuration that I create connects through SSH as a different user (I can't connect with the user that actually runs the scripts)
I want to remote-debug this code via PyCharm... I managed to do all the configuration; I just get permission errors.
Is there any way to run/debug the scripts as a specific user (like sudo su - user)?
I've read about specifying some Python Interpreter options in PyCharm's remote/debug configuration, but didn't manage to get a working solution.
If you want an easy and more flexible way to get into the PyCharm debugger, rather than necessarily having a one-click "play" button in PyCharm, you can use the debug server functionality. I've used this in situations where running some Python code isn't as simple as running python ....
See the Remote debug with a Python Debug Server docs for more details, but here's a rough summary of how it works:
Upload & install remote debugging helper egg on your server (On OSX, these are found under /Applications/PyCharm.app/Contents/debug-eggs)
Set up a remote debug server run configuration: click the run configuration drop-down menu, select Edit configurations..., hit the + button, and choose Python remote debug.
The details entered here (somewhat confusingly) tell the remote server running the Python script how to connect to your laptop's PyCharm instance.
set Local host name to your laptop's IP address
set port to any free port that you can use on your laptop (e.g. 8888)
Now follow the remaining instructions in that dialog box: copy-paste the import and pydevd.settrace(...) statements into your code, specifically where you want your code to "hit a breakpoint" (see the sketch after these steps). This is basically the PyCharm equivalent of import pdb; pdb.set_trace(). Make sure the changed code is synced to your server.
Hit the bug button (next to play; this starts the PyCharm debug server), and run your Python script just like you'd normally do, under whatever user, environment etc. When the breakpoint is hit, PyCharm should drop into debug mode.
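For illustration, the pasted snippet typically looks something like the following (the module name varies by PyCharm version; the IP and port are placeholders for your laptop's address and the port from the run configuration):

import pydevd_pycharm

# 203.0.113.5 = the laptop running PyCharm's debug server; 8888 = the port
# from the run configuration. Execution pauses here once connected.
pydevd_pycharm.settrace('203.0.113.5', port=8888,
                        stdoutToServer=True, stderrToServer=True)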
I (finally) have this working with an SSH remote forward open, like so:
ssh -R 5678:localhost:5678 user@<remotehost>
Then start the script in this SSH session. The Python script on the remote host must connect to localhost:5678, and of course your local PyCharm debugger must listen on 5678 (or whatever port you choose).
I'm running a local web service on Ubuntu on localhost:8090, written with bottle.py.
The connection uses SSL.
If I execute the main.py file from Nautilus or the terminal and connect to https://localhost:8090 everything works fine.
When I execute it from a link to the file, a .sh script, or a .desktop file, the server starts fine, but when I browse to the address Firefox says "The connection to localhost:8090 was interrupted while the page was loading".
$ telnet 127.0.0.1 8090 gives this:
Trying 127.0.0.1...
Connected to 127.0.0.1...
Escape character is '^]'.
Connection closed by foreign host.
$ sudo netstat -ntlupp | grep 8090 gives this:
tcp 0 0 127.0.0.1:8090 0.0.0.0:* LISTEN
iptables is at its default configuration.
I have the feeling something blocks the connection when the server is executed "indirectly" (link, script, or .desktop file), since when I click the file directly or run it through the terminal it works fine.
I don't have a clue where to prevent it from blocking the connection, though. Any help is greatly appreciated.
Any workaround will do, even just pretending the file is being run directly by the user.
Thanks in advance
Watch the server logs.
The major difference between these methods of invocation is probably the current working directory.
I think that it is unlikely that the network configuration is involved in what you are observing.
Depending on the complexity of your web application it might be that a Python import fails if the main script is not run from the right directory. This would trigger a Python exception, which might lead to an immediate connection reset. I have not worked with bottle, but other Python web frameworks distinguish a development mode in which Python tracebacks are shown in the browser, and a production mode in which an HTTP error is sent to the client.
This is what you should do in order to debug your issue: run your server from a terminal (cd to the right directory, then run python application.py). Carefully watch stdout and stderr of that server process while connecting to the web application with your browser.
Ok, problem solved.
It was indeed the current working directory not being the same as that of the Python file running the WSGI server.
If I run the .sh script or the link from the same directory, everything works fine, and if I add a cd command to the script, everything works smoothly.
Thanks for the help Jan-Philip!
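For reference, an equivalent in-script fix is to switch to the script's own directory at startup; a minimal sketch, assuming the server lives in main.py:

import os

# Make relative paths (certificates, static files, anything resolved via
# the working directory) behave the same regardless of how the script is
# launched: change to the directory containing this file.
os.chdir(os.path.dirname(os.path.abspath(__file__)))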