I am working in a Jupyter notebook with the IRkernel.
When I use a Python kernel, I usually have the following instructions at the top of my notebooks:
%load_ext autoreload
%autoreload 2
If I modify source code that the notebook imports or uses, then the functions and pieces of code in the notebook that depend on it are automatically updated.
Is there an equivalent for this in a Jupyter R notebook?
My notebook uses a local package. I would like to be able to edit the package and have the modifications automatically loaded in the notebook.
In short? Unless Jupyter does something that is impossible in base R, the answer is no. R cannot dynamically reload packages for editing the way Python does. The recommended method in R is to modify the package, reinstall it, and often run R CMD check. I am not sure how Jupyter handles this, but it is also the approach that the RStudio user experience is built around.
Hadley has a great (free!) book on how to develop packages in R. I am almost certain he mentions this workflow somewhere in the "Getting started" section.
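For reference, a minimal sketch of that modify/install/check cycle from a shell, assuming the package source lives in a directory called mypackage (a hypothetical path):
R CMD INSTALL mypackage
R CMD check mypackage
After reinstalling, restart the R kernel (or detach and re-attach the package) so the notebook picks up the new code.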
Is there a possibility to run a Jupyter Notebook in one environment and then call a .py file (from within the notebook) that lives in another environment, without pulling it over the way an ordinary import does?
Example:
from PythonScript1 import FunctionFromScript
Edit:
Because I see my problem is described unclearly here, some further details and the background of my question:
I want to run a MATLAB file from a Jupyter notebook, but this only works under a condition that does not allow me to use TensorFlow in the same notebook (using matlab.engine and having TensorFlow installed at the same time is not possible).
My idea was to have the TensorFlow model in one .py file that runs in an Anaconda environment (and a separate directory) designed for it, while I have a notebook in another Anaconda environment to call the MATLAB code.
You can also use SOS kernels in Jupyter Lab. SOS allows you to run multiple kernels in the same notebook and pass variables between the kernels. I was able to run Python and R kernels in a single notebook using SOS. You can use two Python kernels in your case - one with TF and one without.
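As a rough illustration only (the kernel names below are placeholders, and the %use/%get magic syntax is my assumption of how SoS subkernels usually exchange variables, so check the SoS documentation for the exact form), the two cells might look something like this:
# cell 1 - run in the subkernel from the env that has matlab.engine (placeholder kernel name)
%use python3_matlab
result = run_matlab_step()         # placeholder for your MATLAB-calling code

# cell 2 - run in the subkernel from the env that has tensorflow (placeholder kernel name)
%use python3_tf
%get result --from python3_matlab
prediction = run_tf_model(result)  # placeholder for your TensorFlow code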
P.S. I am not affiliated with SOS and am not promoting it. It worked for me and I thought I'd suggest this option.
No, it is not possible, because you can't have two interpreters in the same notebook. You can have two virtual environments and execute the notebook with one or the other, but you can't do it with both.
If you are talking about running a module written for another version of the Python interpreter, it depends on the versions' compatibility.
I found a solution to my problem. If I build my .py script as a Flask app, then I can run it in a different environment (and directory) than my Jupyter Notebook. The only difference is that I can't call the function directly; I have to access the server and exchange my data with GET and POST requests. Thanks for the help everyone!
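For anyone wanting a concrete picture, here is a minimal sketch of that setup; the route name, port, and model function are placeholders rather than the actual code from the question. The script below runs in the environment and directory that have the model:
# service.py - runs in the environment/directory that has the model
from flask import Flask, request, jsonify

app = Flask(__name__)

def my_model(values):
    # placeholder standing in for the real TensorFlow model
    return sum(values)

@app.route("/predict", methods=["POST"])
def predict():
    data = request.get_json()                       # data posted from the notebook
    return jsonify({"output": my_model(data["x"])})

if __name__ == "__main__":
    app.run(port=5000)
The notebook, running in the other environment, then sends its data over HTTP instead of importing the function:
import requests

response = requests.post("http://localhost:5000/predict", json={"x": [1, 2, 3]})
print(response.json())   # {'output': 6}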
I am looking for a tool/extension that helps you write Python docstrings in Jupyter notebooks.
I normally use VS Code, where the autodocstring extension automatically generates templates (e.g. the Sphinx or NumPy template) for docstrings. Is there an equivalent to this in Jupyter notebooks?
I have been looking online for a long time now, but have trouble finding it.
Run this in a notebook cell:
%config IPCompleter.greedy=True
Then press Tab where you want to autocomplete.
(Extracted from a Reddit post.)
To make use of autocomplete without pressing Tab or Shift+Tab, follow this.
However, I do not think there is an autodocstring extension for Jupyter notebooks like the one in VS Code that you mentioned.
I used this answer to build a requirements.txt file for my Jupyter notebook. I then put both the ipynb and requirements.txt file in a Git repo and published it to Binder. As I understand, this should give me an interactive version of my notebook which I can share with people for them to play around with.
The published Binder can be found here.
Does anyone know why the interactive bit is not showing? Specifically, the sliders.
You need to enable the extension that allows the sliders, namely ipywidgets. There is an example of using ipywidgets presently here among the Binder example template repos. (If that repo gets altered, I was talking about this specific commit.)
Right now the extension gets enabled separately for JupyterLab vs the classic interface. If you just want to have your launches default to the classic interface, you can leave out the JupyterLab-related enabling.
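For example (treat this as a sketch rather than the exact recipe for that repo), requirements.txt gains a line for the widgets package, and a postBuild file can enable the classic-notebook extension explicitly:
# requirements.txt
ipywidgets

# postBuild (run by Binder after the environment is built)
jupyter nbextension enable --py widgetsnbextension --sys-prefix
On older JupyterLab versions the lab extension had to be installed separately, e.g. with jupyter labextension install @jupyter-widgets/jupyterlab-manager.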
The fact is that the official Jupyter documentation, under "Motivating examples", states:
Equation numbering and referencing will be available in a future version of the Jupyter notebook.
I know there is a lot of discussion about this topic. There are some people who claim to solve this issue with some workarounds.
But for an ordinary user it is hard to understand the workarounds, or how dirty or useful the hacks really are.
So my questions are:
What does "available in a future version" mean? Does it mean something like "next month/year" or something like "probably never, because it is too hard"?
Are any of the workarounds provided on the Internet safe for human consumption? I mean, are they worth it? It is possible to use Sphinx or something else for creating tutorials; that would be more work, but would it be more work than implementing some hacks, installing plug-ins, and so on?
Note: To some this could seem like a question requiring an opinion-based answer, but I am pretty sure it is not. Any advice can help me (or other users) make a good or bad decision.
I believe that essentially all information relevant to this question can be found in this long Github issue thread.
The conversation there has been ongoing for (at this moment) 8 years and is still active. Important highlights:
You can very simply turn on numbering by executing a cell with the following content:
%%javascript
MathJax.Hub.Config({
    TeX: { equationNumbers: { autoNumber: "AMS" } }
});
There is an extension for equation numbering.
Developer minrk has suggested that this extension is the right approach and could be merged into master (but the functionality would be turned off by default).
To install the extension via pip:
pip install jupyter_contrib_nbextensions
To install the extensions via Anaconda:
conda install -c conda-forge jupyter_contrib_nbextensions
After installing via one of the methods above, enable the extension:
jupyter contrib nbextension install --user
jupyter nbextension enable equation-numbering/main
Here is a working example, to be entered in a markdown cell:
\begin{equation*}
\mathbf{r} \equiv \begin{bmatrix}
y \\
\theta
\end{bmatrix}
\label{eq:vector_ray} \tag{1}
\end{equation*}
Vector **r** is defined by equation $\eqref{eq:vector_ray}$
It's self-explanatory, but here are some details:
\label : a name describing the equation
\tag : the label appearing next to the equation; it can be a number or letters
\eqref : a reference to the labeled equation
This will be rendered as a numbered equation (1), with the \eqref reference displayed as a link pointing back to it.
Go to your Jupyter Notebook editor (I am using Anaconda right now), open the Edit menu, and choose the last item, 'nbextensions config'. It opens a page where you can see a list of extensions, one of which is "Equation Auto Numbering". Enable it and restart your notebook. You will see that a button appears at the top of your notebook for resetting the numbering of equations. You will need to press that button every now and then.
To automatically load numpy as np when I start up the Jupyter Notebook, I've modified the ipython_config.py in my IPython profile:
c.InteractiveShellApp.exec_lines = [
    'import numpy as np',
]
This works great. When I start up a notebook, in the first cell I can immediately call all of the numpy library via np. However, if I'm sharing this notebook via a gist or some other method, these imports are not explicitly shown. This is suboptimal, as it makes clear reproducibility impossible.
My question: Is there a way that I could automatically populate the first cell of a new Notebook with the code that I'm importing? (Or some other similar way to document the imports that are occurring for the Notebook).
I'd be OK with removing the exec_lines option and pre-populating the code that I have to run myself or some other solution that gets at the main idea: clear reproducibility of the code that I'm initially importing in the Notebook.
Edit
A deleted answer that might be helpful to people landing here: I found jupyter_boilerplate, an installable notebook extension that "Adds a customizable menu item to Jupyter (IPython) notebooks to insert boilerplate snippets of code". It would allow one to easily create a starting code snippet that could be filled in.
Sidenote to MLavoie because "comments disabled on deleted / locked posts / reviews"
Yes, you are right that:
While this link may answer the question, it is better to include the essential parts of the answer here and provide the link for reference. Link-only answers can become invalid if the linked page changes. - From Review – MLavoie Jul 8 '16 at 17:27
But, as you'll notice, this is a widget to be installed, so there isn't relevant code to paste here. It was unhelpful to delete the above answer.
Almost automatically:
%load startup.py
Put the import/config code in a version-controlled file on your PYTHONPATH and %load it into the first cell.
This has the advantage of allowing you to use different startup code without tweaking your startup config, and notebooks remain portable: send the notebook and startup file to other users and they can run it without touching their own config.
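For instance, startup.py could be a small, version-controlled file along these lines (the exact contents are up to you; the numpy lines simply mirror the config from the question):
# startup.py - shared imports/settings, %load-ed into the first notebook cell
import numpy as np

np.set_printoptions(precision=4)  # any other shared setup goes here
Running %load startup.py replaces the cell's contents with these lines, so the imports end up written out explicitly in the shared notebook.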
Create a notebook that contains the preparations you want and use that as a template. That is, copy it to a new file and open it.