I have a jupyter notebook where an executed cell gives the following error:
IOPub data rate exceeded...
I understand this is an option:
jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10
However, I would really prefer to just set this along with my import statements and other notebook settings instead of tweaking configuration files or the command line when starting notebooks. Is there an easy way to do this?
I would recommend creating an alias, jn, to launch Jupyter notebook with these settings every time. You set it up once and never have to tweak the command line again.
On a UNIX system, run in a terminal:
alias jn="jupyter notebook --NotebookApp.iopub_data_rate_limit=2147483647"
To enable this alias in every session, append that line to your shell config file, by default ~/.bash_profile.
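For instance, you can append it in one step (a sketch; substitute ~/.bashrc or ~/.zshrc if that is the file your shell actually reads):

```shell
# Append the alias to the shell startup file
# (assuming ~/.bash_profile is what your login shell sources)
echo 'alias jn="jupyter notebook --NotebookApp.iopub_data_rate_limit=2147483647"' >> ~/.bash_profile
```

Then open a new terminal (or run source ~/.bash_profile) and launch with jn.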
If you run Windows, look up the equivalent of shell aliases (for example doskey macros or PowerShell functions).
Assume I have a script 'test.py'. To run this in Jupyter terminal I can simply put
python test.py
However, what if I wish to run it on a K8s cluster rather than with plain Python? Is there any way to execute it with a specific kernel from the terminal? For example,
python --pyspark_k8s test.py
The above line is for illustration purposes :-)
Thanks
I am placing a .py file in the Jupyter Notebook profile_default/startup folder. It has some problems, and I want to debug it and see the logs. Is there any way to see the output or logs generated by that file? I am on Windows 10. There are a few methods in that .py file. I am using the keyboard module to generate a hotkey whenever the notebook starts, but it isn't working for me. Please suggest a good way to at least debug that file. My ipykernel is 5.3.4 and ipython is 7.16.1.
The easiest way I've found so far is to fire up ipython from your console.
Assuming the error is not specific to how Jupyter starts the kernel, you will see the error with the stacktrace before the ipython shell starts.
If the error is specific to the Jupyter kernel, use the jupyter kernel command. It will start the kernel you specify and print a log line if there's an error.
I am authoring a Jupyter notebook on my local machine that will eventually be run on a remote server (which is running Ubuntu). Every time I need to make a change I must export the notebook as a .py file and then call it from the command line of the server.
I'd like to be able to run this on the fly, calling one command that takes the current .ipynb file and executes it on the command line as if it were a .py, showing all the print statements and output you'd expect if the .py were run. I thought nbconvert might do the trick, using something like the following command:
jupyter nbconvert --to script --execute nbconvert_test.ipynb
As it turns out, this does not convert the .ipynb to a .py file executed on the command line as I would like; rather, it creates a new file called nbconvert_test.py in the same directory, which I would then have to run in a separate command. I'd really like to avoid creating that file every time I make even a small change, and to skip the extra step on the command line.
Any help is appreciated!
You can send the jupyter nbconvert output to standard output and pipe that to python.
jupyter nbconvert --to script --execute --stdout test_nbconvert.ipynb | python
A workaround is a small shell script that has three parts:
converting the notebook
executing created script
removing the script
create a file runnb.sh
#!/bin/sh
# First argument is the notebook you would like to run
notebook="$1"
scriptname="$(basename "$notebook" .ipynb)".py
jupyter nbconvert --to script --execute "$notebook" && python "$scriptname"
rm "$scriptname"
use as such:
$ ./runnb.sh nbconvert_test.ipynb
EDIT:
According to this answer, this command should do just fine: jupyter nbconvert --execute test_nbconvert.ipynb (just leave out the --to flag).
With the boar package, you can run your notebook from within Python code, using:
from boar.running import run_notebook
outputs = run_notebook("nbconvert_test.ipynb")
For more information, see:
https://github.com/alexandreCameron/boar/blob/master/USAGE.md
I had a similar error:
xyz.py not found for rm command
Then I ran the script inside the conda virtual environment and it worked.
conda activate
I'm using Jupyter Notebook to code in Python 2.
I'm invoking it as:
c:\python27\scripts\jupyter-notebook --no-browser
At the same time I use IPython console, launched with:
c:\python27\scripts\ipython
The problem I have is that Jupyter history is saved and is mixed with IPython history.
I don't want Jupyter Notebook history at all - is there a way to disable it, while retaining IPython history?
Platform: win32
Update:
I have tried the suggested digest-setting approach.
But when I add c.Session.digest_history_size = 0 to the config, restart the notebook, run print 'next test' in some cell, then restart the separate IPython, the first thing I get after pressing Up is print 'next test'.
How can I get rid of it?
See this Jupyter issue on Github for the origin of this solution.
The Introduction to IPython Configuration discusses configuration scripts located in your home directory at ~/.ipython/profile_default/. That is the relevant directory for the default profile; other similar directories appear if one creates other profiles.
Inside that directory one can include the file ipython_config.py, which will run on every use of IPython. However, the file ipython_kernel_config.py runs upon invocation of an IPython kernel, not when invoking the IPython interpreter itself. One can test this by running ipython kernel --debug.
Jupyter notebooks use this style of kernel invocation. Therefore including a script ipython_kernel_config.py in the directory ~/.ipython/profile_default/ (assuming the default profile) with the following lines:
# Configuration file for ipython-kernel.
c = get_config()
c.HistoryManager.enabled = False
should disable the history manager completely when using that style of kernel invocation. Jupyter calls will therefore no longer populate the command history.
Incidentally, the file history.sqlite in that same directory is the command history. So, deleting it or moving it to a different filename will clear the command history buffer.
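For example, to clear it from a shell (a sketch, assuming the default profile directory; the file is moved aside rather than deleted, so nothing is lost):

```shell
# Move the IPython history database aside; a fresh, empty one
# is created on the next IPython start.
hist="$HOME/.ipython/profile_default/history.sqlite"
if [ -f "$hist" ]; then
    mv "$hist" "$hist.bak"
fi
```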
I want to run an IPython notebook server at a predefined URL like: http://localhost:8888/my_notebook.
I've tried
$ ipython notebook --ip=localhost/my_notebook
which didn't work.
Can anyone please help?
You can set the configurable value from command line like this:
ipython notebook --NotebookApp.base_url='my_notebook'
Alternatively, you can set it in a config file for your profile. You can read more about it here.
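If you go the config-file route, the same option can live in your notebook config file (a sketch, assuming the default location ~/.jupyter/jupyter_notebook_config.py):

```python
# ~/.jupyter/jupyter_notebook_config.py
c = get_config()

# Serve the notebook under http://localhost:8888/my_notebook
c.NotebookApp.base_url = '/my_notebook'
```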