Is the kernel busy in IPython Notebook?

I'm using an EC2 spot instance (an Ubuntu instance accessed from my Windows laptop) to run a function that was well beyond my laptop's capabilities. The kernel-busy dot has been filled for hours. Previously, I could just listen to my laptop's fan: it was obvious when something was running as opposed to IPython Notebook being stuck. Is there any way I can tell now?
If I try something like 1+1 in the cell below my function, it also turns into an asterisk, but I can open a new notebook and have zero issues running simple commands there.

This is because, although each notebook has multiple cells, it has only one kernel, so commands in the other cells are queued until the first cell finishes its task. When you open a new notebook, that notebook gets its own kernel, so it can run simple commands quickly without waiting on whatever is consuming so much CPU.
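If you want evidence that the busy kernel is actually making progress, one simple pattern (a sketch with made-up names, not a built-in Jupyter feature) is to have the long-running function write a heartbeat to a file that you can watch with `tail -f` in a terminal, or read from a second notebook:

```python
import time
from pathlib import Path

def long_job(n_steps, progress_path="progress.log"):
    """Toy stand-in for a long computation that records a heartbeat.

    While the cell only shows a busy asterisk, the progress file
    (an arbitrary name, not a Jupyter convention) tells you whether
    the kernel is still doing work or has genuinely stalled.
    """
    log = Path(progress_path)
    for step in range(n_steps):
        time.sleep(0.01)  # the real work would happen here
        log.write_text(f"step {step + 1}/{n_steps} at {time.time():.0f}\n")
    return "done"

long_job(5)
```

As a cruder alternative, a `python` process pegging a CPU core in `top` or your system monitor is usually a good sign the kernel is still computing rather than stuck.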

Related

Reconnect jupyter notebook to existing kernel

I am somewhat confused about the frontend/backend separation of Jupyter Notebook. I am working on a Linux server and suddenly encountered a memory error, which was very unlikely given the available memory. That led me to realize that the kernels never completely shut down, as reported here: Memory leakage due to presumably unclosed kernels. The solution there is simply to kill all the processes, but since I don't want to rerun my notebook from the beginning, it would be handy to reconnect the notebook to the existing kernel, which still holds all the variables and data that were already calculated.
I tried it with
jupyter lab --existing kernel-XXkernelnrXX.json
but it just brings me to the JupyterLab start page, where I can open a new file. When I open the file I want to reconnect, a new kernel is started (which I can see in htop), and %connect_info executed in the notebook reports a different kernel. I also tried clicking "Reconnect to kernel" in the menu bar, and "Select kernel for: YX.ipynb", but neither works. So how can I connect the visual notebook frontend to an already-running kernel again?
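For what it's worth, even if the notebook frontend cannot reattach its output areas, you can still talk to a running kernel programmatically with jupyter_client and, for example, have it pickle its expensive variables to disk. The sketch below is illustrative: it starts a throwaway kernel so it is self-contained, but in the question's scenario you would instead call `load_connection_file("kernel-XXkernelnrXX.json")` on a `BlockingKernelClient`; the variable name `final_data` is just an example.

```python
from jupyter_client import KernelManager

# Start a throwaway kernel for illustration. To reach an EXISTING
# kernel instead, do:
#   from jupyter_client.blocking import BlockingKernelClient
#   client = BlockingKernelClient()
#   client.load_connection_file("kernel-XXkernelnrXX.json")
km = KernelManager()
km.start_kernel()
client = km.client()
client.start_channels()
client.wait_for_ready(timeout=60)

# Ask the kernel to persist a variable so a fresh frontend can load it;
# `final_data` stands in for whatever was expensively computed.
client.execute("final_data = [1, 2, 3]")
client.get_shell_msg(timeout=60)
client.execute("import pickle; pickle.dump(final_data, open('final_data.pkl', 'wb'))")
reply = client.get_shell_msg(timeout=60)
print(reply["content"]["status"])

client.stop_channels()
km.shutdown_kernel()
```

The connection files live in the directory reported by `jupyter --runtime-dir`. This does not restore the notebook UI's link to the kernel, but it lets you rescue computed state without rerunning everything.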

Do I have to restart the kernel every time I run a Jupyter notebook?

I am completely new to data science and the Jupyter Notebook world. Is there any way to start from where I left off without rerunning the whole notebook?
i.e. I did some operations on a dataset and got a final_data result. Whenever I want to use that final_data after shutting down and reopening, I get NameError: name 'final_data' is not defined. How do I solve this?
I think that's just how Jupyter Notebook works: you have to run the whole notebook again. I go to Cell > Run All after opening my notebook.
It depends on whether you are using Google Colab or a plain Jupyter notebook. I think you are using plain Jupyter, but I will describe my experience with both.
For Google Colab: the runtime is destroyed after a period of inactivity, and you have to run everything again from the start.
For Jupyter Notebook: as long as the notebook tab stays open in your browser, the results will not be cleared over time due to inactivity. If you shut down or restart your PC, you will lose the progress (the cells that were already run), and there is no easy way to save that state.
The best approach is to keep the Jupyter notebook tab open. Even if you put your computer to sleep (by closing the laptop lid), the data will be retained unless you restart or shut down your PC.
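Another practical option, rather than keeping the tab open forever, is to checkpoint expensive results to disk and reload them after a kernel restart. A minimal sketch using pickle (the file and variable names are just examples; IPython's %store magic offers similar convenience):

```python
import pickle

# Stand-in for the expensive result computed earlier in the notebook.
final_data = {"rows": 1000, "cols": 20}

# At the end of the expensive cells, save the result once...
with open("final_data.pkl", "wb") as f:
    pickle.dump(final_data, f)

# ...then, after a restart, reload it instead of recomputing:
with open("final_data.pkl", "rb") as f:
    final_data = pickle.load(f)

print(final_data)
```

For large tabular data, format-specific savers (e.g. pandas' to_parquet/read_parquet) are usually faster and safer than pickle, but the checkpoint-and-reload idea is the same.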

Why IPython Console in Spyder4 updates constantly when connecting to remote kernel on server?

I followed the instructions on the official Spyder page and successfully connected to a remote kernel via SSH. However, I have a problem: my IPython Console refreshes every 2-3 seconds, adding In [1]: lines indefinitely. I suppose it is refreshing some state on the server, but could someone explain what this is, why it happens, and whether I can turn it off so that the console behaves the same as when I work on my local kernel (without endlessly adding empty lines)?
The solution can be found in scott-8's post here: https://github.com/spyder-ide/spyder/issues/10240#issuecomment-543913159
A copy of the answer:
I don't know about the issue above, but here is what solved my issue, for anyone reading: instead of running python -m spyder_kernels.console and connecting to the kernel, quit the kernel after running it. Then restart it with python -m spyder_kernels.console -f kernel-xxxxx.json, specifying the file that was just created in the runtime directory, and connect to that. This fixed my problem for some reason.

Chrome crashed, is my Jupyter notebook still running?

I am working on some very lengthy calculations (8 hours). While they were running, I was doing something else in Chrome. Something went wrong on that website and Chrome shut down, taking the tab my Jupyter notebook was running in with it. I have now started it back up, and the tab icon still indicates the program is running (it shows the hourglass icon), but I am not sure whether this is actually true; if it isn't, I would like to restart the computation as quickly as I can.
Hope you guys can help! Thanks!
I have just tested this on a locally running Jupyter 4.4.0.
Cells already submitted for execution will complete as usual (assuming no exception occurs) as long as the kernel is still alive. After that computation is done, you can continue working in the notebook as usual. All changes to that kernel session are preserved: for example, if you defined a function or saved your result in a variable, it will still be available later. If the kernel is doing intensive computation, you can check your system monitor: a python process consuming lots of CPU means it is probably still running.
If you had unsaved changes to your notebook, for example new code or cells, they will be lost. The code in them still seems to be executed, though, if it was already set to run (Ctrl+Enter).
If you open localhost:8888 in a browser again, you should be able to see if the kernel is running (e.g. the hourglass icon). The running/idle detection seems to work fine upon reconnect.
However, the new browser session never gets updates from other sessions. This means that everything sent by the running code to standard output (e.g. with print) after the disconnect is irretrievably lost, but you can still see what it printed before you got disconnected, assuming it was (auto-)saved. Once the kernel is done and you run cells from this new session, your browser will correctly get updates and display output as usual. Apparently (#641, #1150, #2833; thanks @unutbu) this is still not fixed, because Jupyter's architecture would require a huge rework for it to work.
You can also attach a console with jupyter console --existing your-kernel-session-uuid, but it will not respond until the kernel is idle.

atom.io, hydrogen and ipython remote (ssh) kernel

So I recently discovered the atom.io text editor, and soon after, the amazing Hydrogen. So far so good: Hydrogen starts an IPython kernel if one isn't already running and then gives you real-time interaction, just like the IPython notebook.
Good, but does anyone know whether Hydrogen can be set up to work with a remote IPython kernel? The way I thought this could work is to have a local kernel act as a communication layer between Hydrogen and the remote kernel, but I don't see any way to do this. Do you have any idea how to set this up?
Hydrogen has been able to connect to remote kernels, running locally or on a server, since version 0.12.
Check out our documentation for more info.
