I am using SSH to connect to a Linux-based remote server, and on that server I have run ipython from the terminal. The point is that I want to interrupt the current operation, but I cannot do that at all. I have tried pressing i twice, as well as the shortcut described on this site (Ctrl+m, i), but neither worked.
I have looked here and here, but they were of no use.
There seems to be some confusion in your question – clarified in the comments – as to whether you are referring to terminal IPython or the IPython Notebook. The two are quite different beasts and do not have the same shortcuts/capabilities.
The docs you point to are old; the up-to-date version for the notebook interface is here. i,i and Ctrl-m,i are shortcuts for the Classic Notebook interface (there is now also a JupyterLab interface), when run in a browser. Almost none of the notebook interface's shortcuts apply to the terminal. The notebook interface is a two-to-three-process system: you are not asking your computer to kill the computation directly, you are asking the interface to stop it.
When you run IPython at the terminal, you are executing the CLI interface and your code in the same process, so many keystrokes are actually shortcuts of your terminal, over which IPython has limited control. Thus the way to interrupt a computation is Ctrl-C (soft interrupt) or Ctrl-\ (forced termination). (And in fact, when you press i,i in a notebook, it sends a network request asking for the equivalent of Ctrl-C to be delivered to your computation.)
Now if the computation is done in C (as in NumPy, for example), it cannot be easily interrupted. Python will receive a "please stop as soon as you can", but will get its first chance to do so only once NumPy (or your C routine) has returned. The only remaining option is to kill the process using the kill <pid> command, but this will not just stop your computation – it will most likely kill the whole IPython session itself.
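The distinction matters because Ctrl-C simply sends SIGINT, which Python turns into a KeyboardInterrupt at the next bytecode boundary. A minimal sketch of that mechanism – here the process delivers SIGINT to itself to simulate pressing Ctrl-C during a pure-Python loop:

```python
import os
import signal

# Simulate Ctrl-C: deliver SIGINT to our own process in the middle of a
# pure-Python loop.  Python raises KeyboardInterrupt at the next bytecode
# boundary, so the loop stops almost immediately.
def long_computation():
    try:
        for i in range(1_000_000):
            if i == 10:
                os.kill(os.getpid(), signal.SIGINT)  # "press" Ctrl-C
    except KeyboardInterrupt:
        return "interrupted"
    return "finished"

print(long_computation())
```

If the loop body were instead one long call into a C extension, the KeyboardInterrupt would only surface after that call returned – which is exactly why kill <pid> becomes the fallback.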
You may also try Ctrl-Z (if your terminal supports it), which should pause the process and put it in the background. I am not sure how that would behave in an SSH session, though.
I have been facing an issue with automated execution of my script on one of the VMs. I have automated the functionality of saving a document, which is a standard Windows UI. I have tried various technologies/tools such as AutoIt, Python and Sikuli, but the script halts if the VM is minimized. It works perfectly fine if the VM is open via RDP and I can watch the execution at runtime. But if I minimize the RDP window, the script halts at the 'Save As' dialog box, and none of the send-keys commands (Ctrl+S or Enter) work via the AutoIt script. Please help with some solution so that the script executes successfully even in minimized mode.
The reason why your script fails when it is executed over a minimized RDP session is quite simple. GUI automation/testing tools need an unlocked, active desktop – otherwise the operating system decides it does not need to actually render GUI operations (which is time consuming), since no user can see the rendered graphical user interface anyway. And programs normally don't communicate via GUIs ...
This is why QF-Test and other GUI automation/testing tools often have a note in their FAQs describing this kind of problem. For example, FAQ 14 in the case of QF-Test; see https://www.qfs.de/qf-test-handbuch/lc/manual-en-faq.html
As described in FAQ 14, on Windows 10 or Windows Server 2016, in the case of an RDP connection you need to modify the registry. Go to
HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client
and add a new value
RemoteDesktop_SuppressWhenMinimized as DWORD having the value 2
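The same value can be added non-interactively with a .reg file (a sketch assembled from the key and value above; double-check the path against your system before importing it):

```
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Terminal Server Client]
"RemoteDesktop_SuppressWhenMinimized"=dword:00000002
```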
After restarting, you will then be able to minimize RDP connections. However, disconnecting or closing the RDP connection will probably still result in a failure.
You could try running tscon.exe RDP-Tcp#0 /dest:console as admin, as mentioned here. This will disconnect your RDP session but should leave all GUI programs running normally on the VM. I have personally used this with AutoIt on a VM and it worked fine. Of course, you will not be able to monitor your script as it runs, so this may or may not work for you.
I am working on some very lengthy calculations (8 hours). While these calculations were running, I was working on something else in Chrome. Something went wrong on that website and Chrome shut down, taking with it the tab where my Jupyter notebook file was running. Now I have started it back up, and the icon still indicates the program is running (it shows the hourglass icon), but I am not sure whether this is actually true; if it isn't, I would like to restart the program as quickly as I can.
Hope you guys can help! Thanks!
I have just tested this on locally running Jupyter 4.4.0.
Cells submitted for execution will complete as usual (assuming no exception occurs) as long as the kernel is still alive. Once the computation is done, you can continue working in the notebook as usual. All changes to that kernel session are preserved; for example, if you define a function or save your result in a variable, it will be available later. If the kernel is doing intensive computation, you can check your system monitor: a python process consuming lots of CPU means it is probably still running.
If you have unsaved changes to your notebook – for example new code or cells – they will be lost. The code in them still seems to be executed, though, if it was set to run (Ctrl+Enter).
If you open localhost:8888 in a browser again, you should be able to see if the kernel is running (e.g. the hourglass icon). The running/idle detection seems to work fine upon reconnect.
However, the new browser session never gets updates from other sessions. This means that everything sent by the running code to the standard output (e.g. with print) after the disconnect is irretrievably lost, but you can still see what it printed before you got disconnected, assuming it was (auto-)saved. Once the kernel is done and you run cells from this new session, your browser will correctly get updates and display output as usual. Apparently (#641, #1150, #2833; thanks @unutbu) it is still not fixed, as Jupyter's architecture would require a huge rework for that to function.
You can also attach a console with jupyter console --existing your-kernel-session-uuid, but it will not respond until the kernel is idle.
I have an IPython notebook where I am running a process that takes a very long time. I am using IPython's %R magic in much of it, so I can't easily convert the notebook to a Python script.
Is there a way I can open my notebook, run all, and then close my browser and disconnect from the terminal and still have the notebook running in the background that I can connect to later?
I see information on Stack Exchange about keeping the kernel alive, but I'm confused as to how this interfaces with the actual code running within a notebook.
I'm using tmux. To detach from a running jupyter-notebook, type Ctrl-C, then use the tmux detach command: Ctrl-B followed by d.
link for tmux setup : https://stackoverflow.com/a/42505000/7358899
From my home PC using PuTTY, I SSH'ed into a remote server and ran a Python program that takes hours to complete and prints things as it runs. After a while my internet disconnected, and I had to close PuTTY, re-open it and SSH back in. If I type top, I can see the Python program still running in the background with its PID. Is there a command I can use to basically re-attach to that process and see it printing its output again?
Thanks
As noted, best practice is to use screen or tmux (before starting the program, so you do not need to ask this question).
But you can also attach to a running process with a debugger such as gdb (alluded to here as ddd, a wrapper for gdb), as well as with strace (see this question). That's better than nothing – but gdb and strace would not reattach the program's output to your new terminal (though this question suggests a way). At least strace could give you some clues about what the program was attempting to print.
Things to try:
nohup, or
screen
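The reason nohup helps: when your SSH connection drops, the shell sends SIGHUP to its child processes, and the default action is to terminate. nohup simply starts the program with SIGHUP ignored. A sketch of the same idea from inside Python (POSIX only):

```python
import os
import signal

# nohup's core trick: ignore SIGHUP so losing the controlling terminal
# does not kill the process.  We install the ignore handler, then send
# ourselves a SIGHUP to show that we survive it.
signal.signal(signal.SIGHUP, signal.SIG_IGN)
os.kill(os.getpid(), signal.SIGHUP)   # would normally terminate us
print("still alive after SIGHUP")
```

screen and tmux go further: they keep the program attached to their own pseudo-terminal, so you can also reconnect and see its output later.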
I'm trying to hook up my Vim so I can send commands to a running IPython instance. There are scripts for this, but they are outdated. I'm trying to write a new one.
My main stumbling block right now is the proper way to make IPython listen for incoming network connections in the background (i.e. in a different thread; other solutions are welcome) while executing the received commands in the main thread. Earlier scripts did not execute commands in the main thread and would regularly crash, for instance, matplotlib.
I see that twisted provides a ThreadedSelectReactor, but I'm at a loss as to how to use it properly with IPython.
Update
A scenario example would be:
2 windows open: one is a terminal running IPython, the other is Vim where you are editing a Python script. You select a line in Vim and hit C-Enter; Vim sends the line to the IPython instance, which executes it and prints the result in the IPython terminal, just as if you had copy/pasted the line over yourself.
(Matlab users know how useful this functionality can be.)
Paul Ivanov did this a few months ago, using IPython's zmq interface. It's called vim-ipython.
I get the impression that IPython has moved to using zmq as its messaging protocol; at least in the 0.11 version I am running, zmq support is available.
Using zmq (ZeroMQ), the whole message-passing problem is largely reduced to getting your Vim instance to communicate over zmq, which as far as I know should not be that hard (zmq has been ported to a wide variety of platforms).
Look into this blog: http://ipythonzmq.blogspot.com/
and of course: http://www.zeromq.org/
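To make the round trip concrete, here is a minimal sketch of the request/reply pattern involved, using stdlib sockets rather than zmq so it stays dependency-free (with zmq you would use a REQ/REP socket pair instead; the helper names here are made up for illustration):

```python
import socket
import threading

# Sketch of the editor -> interpreter round trip: a listener accepts one
# line of code, exec()s it, and sends back the repr of a result variable.
def run_demo(line):
    ready = threading.Event()
    box = {}

    def serve_once():
        srv = socket.socket()
        srv.bind(("127.0.0.1", 0))            # pick any free port
        srv.listen(1)
        box["port"] = srv.getsockname()[1]
        ready.set()
        conn, _ = srv.accept()
        ns = {}
        exec(conn.recv(4096).decode(), ns)    # run the received line
        conn.sendall(repr(ns.get("result")).encode())
        conn.close()
        srv.close()

    t = threading.Thread(target=serve_once)
    t.start()
    ready.wait()                              # wait until the port is known
    cli = socket.socket()
    cli.connect(("127.0.0.1", box["port"]))
    cli.sendall(line.encode())                # "Vim" sends the selected line
    reply = cli.recv(4096).decode()
    cli.close()
    t.join()
    return reply

print(run_demo("result = 2 + 2"))
```

vim-ipython builds the same round trip on IPython's zmq channels instead, which additionally handle framing and multiple clients for you.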