Jupyter Notebook malfunctioning - python

I am running Julia and Python in Jupyter Notebook. I like the UI, the auto-indentation, and the other features (I'm a beginner). However, while running Julia today, the entire notebook just appears blank: the code blocks are not colour coded and there is no auto-indentation. When I open a Python notebook or change the kernel from Julia to Python, everything works as usual. I have attached pictures here for comparison.
I don't know what has changed. How do I fix this?
EDIT: Adding some code for reference:
Julia
using Plots

function lattice(N)
    lat = zeros(N, N)
    for i in 1:N
        for j in 1:N
            if rand() < 0.5
                lat[i, j] = 1
            else
                lat[i, j] = -1
            end
        end
    end
    return lat
end
Python
import numpy as np

def ran_ini_cond(L):
    spins = np.zeros((L, L))
    for i in range(L):
        for j in range(L):
            spins[i, j] = np.random.choice([1, -1])
    print('initial lattice')
    return spins
These are two different versions of the same program I am trying to run on Julia and Python. The code works, but a lot of the IJulia and/or Jupyter notebook features are disabled for Julia (as seen in the pictures).
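As a first check (an assumption on my part, not a confirmed fix), it can be worth confirming that Jupyter is picking up the kernelspec that IJulia installed rather than a stale one. A minimal Python sketch using jupyter_client, which ships with Jupyter:

from jupyter_client.kernelspec import KernelSpecManager

# Print every kernel Jupyter knows about and the directory holding its kernel.json.
for name, path in KernelSpecManager().find_kernel_specs().items():
    print(name, '->', path)

If the Julia entry points somewhere unexpected, rebuilding IJulia and restarting Jupyter is a common next step.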

Related

Issues with accessing PyObjects after writing

I am trying to do some fairly simple list manipulation, using a Jupyter notebook that calls a DLL function. I'd like my Jupyter notebook/Python code to pass in a Python list to a C++ function, which modifies the list, and then I'd like the Python code to be able to access the new list values.
I can actually read (in Jupyter) the items that were not edited by the C++ code, so there must be some issue with how I'm writing, but every example I can find looks just like my code. When I try to access the item in the list that the C++ code writes, my Jupyter kernel dies with no explanation; I've tried to run the same Python code in the terminal, and the terminal session just exits, again with no explanation.
Running on Windows 10, environment with Python 3.9.2. Here's the Python:
import os
import ctypes
import _ctypes
# Import the DLL
mydll = ctypes.cdll.LoadLibrary(*path to DLL*)
# Set up
data_in = [3,6,9]
mydll.testChange.argtypes = [ctypes.py_object]
mydll.testChange.restype = ctypes.c_float
mydll.testChange(data_in)
# Returns 0.08
After running this and closing the DLL, running data_in[1] returns 6, data_in[2] returns 9, and data_in[0] causes my kernel to die.
C code for the DLL:
float testChange(PyObject *data_out) {
    Py_SetPythonHome(L"%user directory%\\anaconda3");
    Py_Initialize();
    PyList_SetItem(data_out, 0, PyLong_FromLong(1L));
    return 0.08;
}
I can also insert a number of print statements in this code that show that I can read out all three items in the DLL both before and after the call to PyList_SetItem using calls like PyLong_AsLong(PyList_GetItem(data_out, 1)). It's not clear to me that any reference counts need changing or anything like that, but perhaps I misunderstand the idea. Any ideas you all have would be greatly appreciated.
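As a hedged aside, one way to sidestep CPython object handling in the DLL entirely is to pass a plain C array instead of a Python list. The export name testChangeArr and its signature below are hypothetical, and the DLL path stays elided as in the question:

import ctypes

# A C array of three longs that the DLL can modify in place.
data = (ctypes.c_long * 3)(3, 6, 9)

def call_dll(dll_path):
    mydll = ctypes.cdll.LoadLibrary(dll_path)
    # Hypothetical export: float testChangeArr(long *data, int n)
    mydll.testChangeArr.argtypes = [ctypes.POINTER(ctypes.c_long), ctypes.c_int]
    mydll.testChangeArr.restype = ctypes.c_float
    return mydll.testChangeArr(data, len(data))

# After call_dll(...) returns, list(data) shows whatever the C side wrote.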

Intel Vtune cannot find python source file

This is an old problem, as demonstrated in https://community.intel.com/t5/Analyzers/Unable-to-view-source-code-when-analyzing-results/td-p/1153210. I have tried all the listed methods, none of them works, and I cannot find any more solutions on the internet. Basically, VTune cannot find the custom Python source file no matter what is tried. I am using the most recent version as of writing. Please let me know whether there is a solution.
For example, consider the following program.
def myfunc(*args):
    # Do a lot of things.
    ...

if __name__ == '__main__':
    # Do something and call myfunc.
    myfunc()
Call this script main.py. Now take the newest VTune version (I am using Ubuntu 18.04), run vtune-gui, and do a basic hotspots analysis. You will not find any information on this file. However, a huge pile of information on Python and its other code is shown (related to your Python environment). In theory, you should be able to find the source of main.py as well as the cost of each line in that script. However, that is simply not happening.
Desired behavior: I would really like to find the source file and function in the top-down view (or any view, really). Any advice is welcome.
VTune offers full support for profiling Python code, and the tool should be able to display the source code in your Python file as you expect. Could you please check whether the function you are expecting to see in the VTune results ran long enough?
Just to confirm that everything is working fine, I wrote a matrix multiplication code as shown below (don't worry about the accuracy of the code itself):
def matrix_mul(X, Y):
    result_matrix = [[1 for i in range(len(X))] for j in range(len(Y[0]))]
    # iterate through rows of X
    for i in range(len(X)):
        # iterate through columns of Y
        for j in range(len(Y[0])):
            # iterate through rows of Y
            for k in range(len(Y)):
                result_matrix[i][j] += X[i][k] * Y[k][j]
    return result_matrix
Then I called this function (matrix_mul) on my Ubuntu machine with large enough matrices so that the overall execution time was in the order of few seconds.
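For reference, a minimal driver sketch for the function above; the matrix size is an assumption and should be tuned so the run lasts a few seconds on your machine:

import random

if __name__ == '__main__':
    N = 300  # assumed size; increase until the run takes a few seconds
    X = [[random.random() for _ in range(N)] for _ in range(N)]
    Y = [[random.random() for _ in range(N)] for _ in range(N)]
    matrix_mul(X, Y)  # matrix_mul as defined above, in the same module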
I used the below command to start profiling (you can also see the VTune version I used):
/opt/intel/oneapi/vtune/2021.1.1/bin64/vtune -collect hotspots -knob enable-stack-collection=true -data-limit=500 -ring-buffer=10 -app-working-dir /usr/bin -- python3 /home/johnypau/MyIntel/temp/Python_matrix_mul/mat_mul_method.py
Now open the VTune results in the GUI and, under the Bottom-up tab, group by "Module / Function / Call-stack" (or whatever grouping you prefer).
You should be able to see the module (mat_mul_method.py in my case) and the function matrix_mul. If you double-click, VTune should be able to load the sources too.

Is there a way to call and store data defined in a python script in to julia?

I have Python code which generates a weighted random graph. I want to use the weights generated in that code in a different Julia program. I am able to run the Python code from Julia using PyCall, but I am unable to get any of the data from the graph. Is there any way to do that?
The variable 'wt' stores the edge data in the Python code.
When I print 'wt' in the Python code, it prints the nodes between which each edge is present and the corresponding weights.
This gives me the required graph. I want to access 'wt' in Julia. How can I do that?
Python code
wt = G.edges.data('weight')
print(wt)
Julia code
using PyCall
y = py"exec(open('wtgraph.py').read())"
For your example it would be something like this (you didn't provide the complete code):
using PyCall
py"""
import something as G
def py_function(x):
    return G.edges.data('weight')
"""
wt = py"py_function"('weight')

How to get interactive R output in Jupyter (IPython, rpy2), e.g. for a progress bar?

I am trying to use the built-in R progress bar (txtProgressBar) with the %%R magic in Jupyter. While it produces a nice animation when executed in the R console or RStudio, it does not produce the desired output in Jupyter (notebook or lab) with the rpy2 extension; instead, it prints all the steps at once after finishing (which makes the progress bar useless). Two questions:
How could I make it work?
If it is not possible yet, how do I approach implementing this functionality on the rpy2 side (I already know how to make the interactive output/widgets on the Jupyter/IPython side)?
Here is a simple snippet of a progress bar from rfunction.com:
%%R
SEQ <- seq(1,100)
pb <- txtProgressBar(1, 100, style=3)
TIME <- Sys.time()
for (i in SEQ) {
    Sys.sleep(0.02)
    setTxtProgressBar(pb, i)
}
For the folks new to rpy2: It needs to be installed with pip install rpy2 and the magic needs to be loaded in Jupyter with %load_ext rpy2.ipython.
Edit: The workaround I use for now is to manually invoke the code via robjects.r:
from rpy2.robjects import r
r("""
SEQ <- seq(1,100)
pb <- txtProgressBar(1, 100, style=3)
TIME <- Sys.time()
for (i in SEQ) {
    Sys.sleep(0.02)
    setTxtProgressBar(pb, i)
}
""")
However, this is not ideal; I would prefer to keep all the benefits of rpy2's R magic.
There should be a way to achieve this, as the R magic is calling robjects.r() (as you are in your workaround).
In short, the following is happening when you submit an %%R jupyter cell for evaluation.
1. Parameters on the %%R line are evaluated, and any setup prior to the evaluation of the R code is done (e.g., using a local converter, converting input parameters, etc.).
2. The R code in the rest of the %%R cell is evaluated as a string of code in the R "Global Environment".
3. Exit setup is run and the results are returned.
The second step is essentially a call to the R C API which, because of the GIL, is the only activity happening in that process. However, rpy2 defines default callbacks that reroute R's printing to the terminal/console through Python's own print(), which is why you see the prints as the code is running in your call to robjects.r().
I am seeing that the R magic is caching the R output, and while there is an attribute cache_display_data that should control this, it is not used. This is a bug, both for the reason you are asking about on Stack Overflow and because an R code block that prints a lot would use more memory than needed (and could even exhaust all RAM). I do not know whether it has always been present or was introduced during code refactoring; it is now tracked here: https://bitbucket.org/rpy2/rpy2/issues/543
Edit: The fix is now in the repository, and will be part of rpy2-3.0.3 (likely released today).
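For reference, a minimal sketch of the console-callback rerouting mentioned above, as one way to stream R output while it is produced. It targets rpy2 3.x, and the exact module path is an assumption that may differ between versions:

import rpy2.rinterface_lib.callbacks as callbacks

def stream_to_stdout(text):
    # Forward R console output to Python's stdout immediately.
    print(text, end='', flush=True)

# Replace the default console-write callback (assumed rpy2 3.x layout).
callbacks.consolewrite_print = stream_to_stdout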

matplotlib.pyplot/pylab not updating figure while isinteractive(), using ipython -pylab [duplicate]

This question already has answers here:
pylab.ion() in python 2, matplotlib 1.1.1 and updating of the plot while the program runs
(2 answers)
Closed 8 years ago.
There are a lot of questions about matplotlib, pylab, pyplot, ipython, so I'm sorry if you're sick of seeing this asked. I'll try to be as specific as I can, because I've been looking through people's questions and looking at documentation for pyplot and pylab, and I still am not sure what I'm doing wrong. On with the code:
Goal: plot a figure every .5 seconds, and update the figure as soon as the plot command is called.
My attempt at coding this follows (running on ipython -pylab):
import time
ion()

x = linspace(-1, 1, 51)
plot(sin(x))
for i in range(10):
    plot([sin(i+j) for j in x])
    # see **
    print i
    time.sleep(1)
print 'Done'
It correctly plots each line, but not until it has exited the for loop. I have tried forcing a redraw by putting draw() where ** is, but that doesn't seem to work either. Ideally, I'd like to have it simply add each line, instead of doing a full redraw. If redrawing is required however, that's fine.
Additional attempts at solving:
just after ion(), tried adding hold(True) to no avail.
for kicks tried show() for **
The closest answer I've found to what I'm trying to do was at plotting lines without blocking execution, but show() isn't doing anything.
I apologize if this is a straightforward request, and I'm looking past something so obvious. For what it's worth, this came up while I was trying to convert matlab code from class to some python for my own use. The original matlab (initializations removed) which I have been trying to convert follows:
for i = 1:time
    plot(u)
    hold on
    pause(.01)
    for j = 2:n-1
        v(j) = u(j) - 2*u(j-1)
    end
    v(1) = pi
    u = v
end
Any help, even if it's just "look up this_method" would be excellent, so I can at least narrow my efforts to figuring out how to use that method. If there's any more information that would be useful, let me know.
from pylab import *
import time

ion()
tstart = time.time()                 # for profiling
x = arange(0, 2*pi, 0.01)            # x-array
line, = plot(x, sin(x))
for i in arange(1, 200):
    line.set_ydata(sin(x + i/10.0))  # update the data
    draw()                           # redraw the canvas
    pause(0.01)
print 'FPS:', 200/(time.time()-tstart)
ioff()
show()
########################
The above worked nicely for me. I ran it in the Spyder editor in pythonxy 2.7.3 under Windows 7.
Note the pause() statement following draw(), followed by ioff() and show().
The second answer to the question you linked provides the answer: call draw() after every plot() to make it appear immediately; for example:
import time
ion()

x = linspace(-1, 1, 51)
plot(sin(x))
for i in range(10):
    plot([sin(i+j) for j in x])
    # make it appear immediately
    draw()
    time.sleep(1)
If that doesn't work... try what they do on this page: http://www.scipy.org/Cookbook/Matplotlib/Animations
import time
ion()

tstart = time.time()                 # for profiling
x = arange(0, 2*pi, 0.01)            # x-array
line, = plot(x, sin(x))
for i in arange(1, 200):
    line.set_ydata(sin(x + i/10.0))  # update the data
    draw()                           # redraw the canvas
print 'FPS:', 200/(time.time()-tstart)
The page mentions that the line.set_ydata() function is the key part.
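As an aside, a sketch of the same animation written against the current pyplot API, where plt.pause() both redraws the canvas and processes GUI events, so no backend-specific threading flags are needed:

import numpy as np
import matplotlib.pyplot as plt

plt.ion()
x = np.arange(0, 2 * np.pi, 0.01)
line, = plt.plot(x, np.sin(x))
for i in range(1, 200):
    line.set_ydata(np.sin(x + i / 10.0))  # update the data in place
    plt.pause(0.01)                       # redraw and handle GUI events
plt.ioff()
plt.show()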
I had the exact same problem with IPython running on my Mac (Enthought distribution of Python 2.7, 32-bit, on a MacBook Pro running Snow Leopard).
I got a tip from a friend at work: run IPython from the terminal with the following arguments:
ipython -wthread -pylab
This works for me. The above python code from "Daniel G" runs without incident, whereas previously it didn't update the plot.
According to the ipython documentation:
[-gthread, -qthread, -q4thread, -wthread, -pylab:...] They provide threading support for the GTK, Qt (versions 3 and 4) and WXPython toolkits, and for the matplotlib library.
I don't know why that is important, but it works.
hope that is helpful,
labjunky
Please see the answer I posted here to a similar question. I could generate your animation without problems using only the GTKAgg backend. To do this, you need to add these lines to your script (I think before importing pylab):
import matplotlib
matplotlib.use('GTKAgg')
and also install PyGTK.
