I am working on some very lengthy calculations (8 hours). While they were running, I was working on something else in Chrome. Something went wrong on that website and Chrome shut down, taking the tab with my running Jupyter notebook with it. I have now started it back up and the icon still indicates the program is running (it shows the hourglass), but I am not sure whether that is actually true; if it is not, I would like to restart the calculation as quickly as I can.
Hope you guys can help! Thanks!
I have just tested this on a locally running Jupyter 4.4.0.
Cells submitted for execution will complete as usual (assuming no exception occurs) as long as the kernel is still alive. Once that computation is done, you can continue working in the notebook as before. All state in that kernel session is preserved: if you define a function or store your result in a variable, it will still be available later. If the kernel is doing intensive computation, you can check your system monitor: a python process consuming lots of CPU means it is probably still running.
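For example, one quick way to check this from the same machine (my own suggestion, and it assumes the third-party psutil package is installed) is to look for a python process with high CPU usage:

```python
# Rough sketch: list python processes and sample their CPU usage.
# Requires the third-party psutil package (pip install psutil).
import psutil

for proc in psutil.process_iter(['pid', 'name']):
    name = (proc.info['name'] or '').lower()
    if 'python' in name:
        # cpu_percent(interval=1) samples usage over one second
        print(proc.info['pid'], name, proc.cpu_percent(interval=1), '%')
```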
If you have unsaved changes to your notebook, for example new code or new cells, they will be lost. The code in them still seems to be executed, though, if it had already been submitted to run (Ctrl+Enter).
If you open localhost:8888 in a browser again, you should be able to see if the kernel is running (e.g. the hourglass icon). The running/idle detection seems to work fine upon reconnect.
However, the new browser session never gets updates from other sessions. This means that everything the running code sends to standard output (e.g. with print) after the disconnect is irretrievably lost, but you can still see what it printed before you were disconnected, assuming the notebook was (auto-)saved. Once the kernel is done and you run cells from this new session, your browser will correctly receive updates and display output as usual. Apparently (#641, #1150, #2833; thanks @unutbu) this is still not fixed, because Jupyter's architecture would need a major rework for it to work.
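As a precaution for future long runs (my own suggestion, not something Jupyter does for you), you can write progress to a file instead of relying only on print, so the output survives a disconnect; the file name and loop below are just placeholders:

```python
# Sketch only: log progress to disk so it survives a browser disconnect.
with open('progress.log', 'a') as log:        # hypothetical file name
    for step in range(1_000_000):
        result = step * step                  # stand-in for the real work
        if step % 100_000 == 0:
            log.write(f'step {step}: result={result}\n')
            log.flush()                       # force it out to disk
```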
You can also attach a console with jupyter console --existing your-kernel-session-uuid, but it will not respond until the kernel is idle.
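To find the right kernel session to attach to, one option (assuming you can still run a cell, i.e. before the kernel gets busy or after it frees up) is to ask the kernel for its own connection file; the %connect_info magic prints similar information:

```python
# Run inside the notebook: prints the path of the kernel's connection file,
# whose name contains the uuid you pass to `jupyter console --existing`.
from ipykernel import get_connection_file
print(get_connection_file())   # e.g. .../jupyter/kernel-<uuid>.json
```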
Related
I am completely new to data science and the Jupyter notebook world. Is there any way to pick up from where I left off without re-running the whole notebook?
i.e. I did some operations on a dataset and got a final_data. Whenever I want to use that final_data after shutting down and reopening the notebook, I get NameError: name 'final_data' is not defined. How do I solve this?
I think that's just how the Jupyter Notebook works: you have to run the whole notebook again. I go to Cell > Run All after opening my notebook.
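One common workaround (not required by Jupyter itself, and the names here are only illustrative) is to save the expensive result to disk at the end of the heavy cell, so that after a restart you can load it back instead of recomputing everything:

```python
# Sketch: persist an expensive result so it survives a kernel restart.
import pickle

final_data = {'rows': 3}          # stand-in for the real expensive result

# At the end of the expensive cell:
with open('final_data.pkl', 'wb') as f:
    pickle.dump(final_data, f)

# In a fresh kernel session, instead of recomputing:
with open('final_data.pkl', 'rb') as f:
    final_data = pickle.load(f)
```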
It depends on whether you are using Google Colab or a plain Jupyter notebook. I think you are using plain Jupyter; however, I will describe my experience with both.
For Google Colab: the runtime is destroyed after a period of inactivity, and you have to run everything again from the start.
For Jupyter Notebook: if the notebook tab is open in your browser, the results will not be cleared over time (due to inactivity) as long as you do not close the tab itself. If you shut down or restart your PC, you will lose the progress (the state built up by the cells that have already run), and there is no easy way to save that progress.
The best approach is to keep the Jupyter notebook tab open; even if you put your computer to sleep (by closing the lid of your laptop), the data will be retained, as long as you do not restart or shut down your PC.
I created a very simple test that launches and closes a piece of software I was testing, using the Python Nose test platform, to track down a bug in the start-up sequence of the software I was working on.
The test was set up so that it would launch and close the software about 1,500 times in a single execution.
A few hours later, I discovered that the test was no longer able to launch the software after around 300 iterations. It was timing out while waiting for the process to start. As soon as I logged back in, the test started launching the process without any problem and all the tests started passing again as well.
This is quite puzzling to me. I have never seen this behavior before, and it never happened on Windows either.
I am wondering if there is some sort of power-saving state in which macOS waits for currently running processes to finish and prevents new processes from starting.
I would really appreciate it if anybody could shed light on this confusion.
I was running Python 2.7.x on High Sierra.
I am not aware of any state in which the system flat-out denies new processes while old ones are still running.
However, I can easily imagine a situation in which a process may hang because of some unexpected dependency on e.g. the window server. For example, I once noticed that rsvg-convert, a command-line SVG-to-image converter, running in an SSH session, had different fonts available to it depending on whether I was also simultaneously logged in on the console. This behavior went away when I recompiled the SVG libraries to exclude all references to macOS specific libraries...
This is a strange thing that I've noticed only on one particular computer. If I leave the Jupyter Notebook page inactive for a while (without closing the browser page or putting the computer to sleep) and come back to it, the kernel appears to be completely unresponsive, but it doesn't say "Kernel Dead" either. Restarting the kernel from the Jupyter Notebook does nothing, so I always end up having to close the command window from which Jupyter Notebook was run, and it goes without saying that all the variables are lost.
Whenever this happens, any activity on the Jupyter Notebook page following its "death" is not logged in the command window that runs it.
I tried searching around on GitHub, which might have been a more appropriate channel, but with no luck.
I'd be happy to provide more info. Thanks!
This might not be a helpful answer, as it does not address the root cause of the problem but rather presents a workaround. The kernel is an independent process from the browser, so you can close the browser tab, reopen it (e.g. http://localhost:8888/), and it will connect to the still-running kernel without any problems.
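If you have forgotten the exact URL or token, the `jupyter notebook list` command shows the running servers; programmatically, a sketch like the one below should work (this assumes the classic `notebook` package, pre-version 7, which is an assumption about your setup):

```python
# Sketch, assuming the classic `notebook` package is installed:
# list running notebook servers, including their URLs and tokens.
from notebook import notebookapp

for server in notebookapp.list_running_servers():
    print(server['url'], server.get('token', ''))
```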
I am using SSH to connect to a Linux-based remote server, and on that server I have run ipython from the terminal it gives me. The point is that I want to interrupt the current operation, but I cannot do that at all. I have tried pressing i twice, and the instructions provided on this website (using Ctrl + m, i), but it did not work.
I have looked here and here, but they were of no use.
There seems to be some confusion in your question (clarified in the comments) as to whether you are referring to terminal IPython or the IPython Notebook. The two are quite different beasts and do not have the same shortcuts/capabilities.
The docs you point to are old, and the up-to-date version for the notebook interface is here. i,i and Ctrl-m,i are shortcuts for the Classic Notebook interface (there is now also a JupyterLab interface) when run in a browser. Almost none of the notebook interface's shortcuts apply to the terminal. The notebook interface is a 2-to-3-process system: you are not asking your computer to kill the computation directly, you are asking the interface to stop it.
When you run IPython at the terminal you are executing the CLI interface and your code directly in the same process, so many shortcuts will actually be shortcuts of your terminal that IPython has limited control over. Thus the way to interrupt a computation is Ctrl-C (soft interrupt) or Ctrl-\ (forceful termination). (And in fact, when you press i,i in a notebook, it sends a network request asking for a Ctrl-C to be delivered to your computation.)
Now if the computation is being done in C (as in NumPy, for example), it cannot easily be interrupted. Python will receive a "please stop as soon as you can", but its first occasion to act on it will only come once NumPy (or your C routine) has finished. The only remaining option is to kill the process with the kill <pid> command, but this will not only stop your computation, it will most likely kill the whole IPython session itself.
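Here is a minimal sketch of mine (not from the docs) of the difference: a pure-Python loop notices Ctrl-C almost immediately, while a single long C-level call such as a NumPy eigenvalue computation only lets the interrupt through once it returns. Printing the pid first also tells you what to `kill` as a last resort:

```python
import os
import numpy as np

print('IPython/kernel pid:', os.getpid())   # what you would `kill <pid>`

try:
    # Pure-Python loop: Ctrl-C interrupts it between iterations.
    total = 0
    for i in range(100_000_000):
        total += i
except KeyboardInterrupt:
    print('interrupted at i =', i)

# One big C-level call: Ctrl-C is only acted on after it finishes.
a = np.random.rand(2000, 2000)
eigenvalues = np.linalg.eigvals(a)
print('done, first eigenvalue:', eigenvalues[0])
```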
You may also try Ctrl-Z (if your terminal supports it), which should pause the process and put it in the background. Not sure how that would behave in an SSH session, though.
I'm using an EC2 spot instance (connecting from my Windows machine to an Ubuntu instance) to run a function that is well beyond my laptop's capabilities. The kernel-busy dot has been filled for hours. Previously, I would just listen to my laptop, since it was obvious when something was running as opposed to the notebook being stuck. Is there any way I can tell now?
If I type something like 1+1 in the cell below my function, it also turns into an asterisk, but I can open a new notebook and have zero issues running simple commands there.
This is because, although each notebook can have many cells, it has only one kernel, so the commands in the other cells are queued until the first cell finishes its task. When you open a new notebook, that notebook gets its own kernel, so it can run other simple commands quickly without taking part in whatever is consuming so much CPU.
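You can see this for yourself with a trivial experiment (illustrative only; the sleep just stands in for the heavy function). Put these in two separate cells of the same notebook: the second shows [*] until the first finishes, while the same 1 + 1 in a brand-new notebook returns instantly.

```python
# Cell 1: stands in for the long-running computation
import time
time.sleep(600)    # keeps this notebook's single kernel busy for 10 minutes

# Cell 2: submitted while Cell 1 is running, it just queues behind it
1 + 1
```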