I am running Jupyter notebooks through a Docker container, with my files, notebooks, etc. stored inside the container. One day in class I decided to try installing the jupyterthemes package, because who doesn't like more colors. I opened a new ipynb and followed the instructions from this site: https://github.com/dunovank/jupyter-themes
But it was basically just this:
!pip install jupyterthemes
!jt -t chesterish
The theme does not appear immediately, and the directions suggest restarting the notebook or refreshing the browser. This is where the problems start: after I refresh, or close and restart the notebook, it no longer works and just displays a large "500 : Internal Server Error" on the page. After trying to reload the home page of my notebook (this is locally hosted through Docker and run in Chrome, by the way), the Jupyter window in Chrome displays nothing at all.
So I go back to the terminal and Docker and shut down the container. Then I try to restart the same container, hoping it will work now. I start it as I usually would, docker start -ai container_name, but it is not successful. It displays these errors every time:
Executing the command: jupyter notebook
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py",
line 528, in get
value = obj._trait_values[self.name]
KeyError: 'allow_remote_access'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/lib/python3.6/site-
packages/notebook/notebookapp.py", line 869, in _default_allow_remote
addr = ipaddress.ip_address(self.ip)
File "/opt/conda/lib/python3.6/ipaddress.py", line 54, in ip_address
address)
ValueError: '' does not appear to be an IPv4 or IPv6 address
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/opt/conda/bin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/opt/conda/lib/python3.6/site-
packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/opt/conda/lib/python3.6/site-
packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/opt/conda/lib/python3.6/site-
packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 1629, in initialize
self.init_webapp()
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 1379, in init_webapp
self.jinja_environment_options,
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 158, in __init__
default_url, settings_overrides, jinja_env_options)
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 251, in init_settings
allow_remote_access=jupyter_app.allow_remote_access,
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 556, in __get__
return self.get(obj, cls)
File "/opt/conda/lib/python3.6/site-packages/traitlets/traitlets.py", line 535, in get
value = self._validate(obj, dynamic_default())
File "/opt/conda/lib/python3.6/site-packages/notebook/notebookapp.py", line 872, in _default_allow_remote
for info in socket.getaddrinfo(self.ip, self.port, 0, socket.SOCK_STREAM):
File "/opt/conda/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -2] Name or service not known
So I can no longer access the Docker container at all, or the files and notebooks within it. I have two questions:
Can I somehow restore my docker container or at least retrieve the materials within?
and
Why did this error occur during the theme installation, and how could I change the theme without breaking my Jupyter server or Docker container? I have built new containers and tried again with exactly the same results.
Any advice about how to get files out of a not-running Docker container, or about compatibility issues between Docker, Jupyter, and the theme package and how to solve them, would be much appreciated. For the time being I can work from a new container and keep up with schoolwork, but it would be nice to eventually get my stuff back from that container and learn how to change my theme successfully if I want.
So I have an answer to half the question: we found a way to copy and export all the files from my broken, not-running Docker container. The files are kind of 'invisible' when the container isn't running, so it took some trickery to find where they are located and what path to use to reach them from the terminal.
I'm running Docker on a MacBook, and the location of the files in a new container we made was container:./home/jovyan/.
I also made a folder called 'Dump' on my desktop to transfer the container contents to. After messing around with new 'fake' containers, we found a way to pull files from a not-running one. I used
docker cp container_name:./home/jovyan/. ./Dump
where container_name is your container and Dump is where you want the files to go. /home/jovyan/ was the highest-level path I could target and it pulled everything I had out of the container, but if you know more folder and file names you can be more specific and extract particular things, as in the example below.
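For example, a more targeted copy could look like this (the work subfolder and notebook name here are only placeholders for whatever actually lives in your container):
docker cp container_name:/home/jovyan/work/my_notebook.ipynb ./Dump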
This is probably pretty simple for most experienced programmers, but as a newbie the hard part was finding where Docker stored my container files and what path to use. /home/jovyan/. worked on my Mac but could be different for you. If you have a broken container, just make a new test container with a recognizable file in it and experiment until you figure out how to pull that file out. Opening a new terminal window inside the test Jupyter notebook helped me see how Docker was labeling my paths.
Still wondering how to actually install those themes, though. I don't think it'll work with Docker and Jupyter; there's probably just too much incompatibility already.
If you still have the issue, this is what fixed it for me:
1. In the Docker terminal, make sure you are using bash, so that the prompt starts with "(base)". I just typed bash and then the prompt looked right.
2. conda install -c conda-forge jupyterthemes (or use pip install jupyterthemes if you're not using Anaconda).
3. conda update jupyterthemes (it found updates for some other packages, which apparently were necessary).
4. jt -t monokai -f fira -fs 10 -nf ptsans -nfs 11 -N -kl -cursw 2 -cursc r -cellw 95% -T (or another setting; this one was copied from Ashraf Khan on Kaggle).
5. Hard refresh the page (Chrome on Windows: Ctrl+F5).
I think step 3 was the key here, but I'm not sure. In any case, it works now.
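One extra tip, in case a theme change ever leaves the pages looking broken again: jupyterthemes also has a reset option that restores the default styling. Run it the same way as the jt command above, then hard refresh:
jt -r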
Related
I am new to Python and new to Vagrant. My team runs their project using a Vagrant VM and IDEs of their own choosing; I chose PyCharm because I've used some JetBrains products in the past.
I'd really like to be able to visually debug the project as it runs: set a breakpoint, view the values of variables, etc.
PyCharm has a help section (and the related articles above it):
https://www.jetbrains.com/help/pycharm/configuring-product-to-work-on-the-vm.html
I've done all of them, but under Project -> Project Interpreter, the Path Mappings seem to list all the shared folders between the host machine and the Vagrant VM. Those shared folders are:
one directory up from the actual project
my vagrant directory
my home directory
I don't think it is pointing to the project or its dependency libraries correctly.
I also get a yellow message at the bottom that says Python packaging tools not found.
If I hit debug, I get the following output in a terminal:
bash: line 0: cd: /vagrant/app: No such file or directory
pydev debugger: process 2032 is connecting
Connected to pydev debugger (build 191.6183.50)
Traceback (most recent call last):
File "/home/vagrant/.pycharm_helpers/pydev/pydevd.py", line 1741, in
<module>
main()
File "/home/vagrant/.pycharm_helpers/pydev/pydevd.py", line 1735, in main
globals = debugger.run(setup['file'], None, None, is_module)
File "/home/vagrant/.pycharm_helpers/pydev/pydevd.py", line 1135, in run
pydev_imports.execfile(file, globals, locals) # execute the script
IOError: [Errno 2] No such file or directory: '/vagrant/app/__main__.py'
It has also opened a tab for 'pydev.py' that contains:
Remote file /home/vagrant/.pycharm_helpers/pydev/pydevd.py is mapped to the
local path C:\Users\<my username>\vagrant\.pycharm_helpers\pydev\pydevd.py
and can't be found. You can continue debugging, but without the source. To
fix that you can do one of the following:
How can I set up PyCharm on my host machine to debug code running on the Vagrant VM?
I am trying to get a Jupyter notebook up and running on Arch. I have tried installing the jupyter package. I also tried following the Arch wiki and installed jupyter-notebook, jupyter-nbconvert, and python-ipywidgets. Lastly, I tried using the pip instructions. All three fail and give:
➜ ~ jupyter notebook
Traceback (most recent call last):
File "/usr/sbin/jupyter-notebook", line 11, in <module>
sys.exit(main())
File "/usr/lib/python3.6/site-packages/jupyter_core/application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "/usr/lib/python3.6/site-packages/traitlets/config/application.py", line 657, in launch_instance
app.initialize(argv)
File "<decorator-gen-7>", line 2, in initialize
File "/usr/lib/python3.6/site-packages/traitlets/config/application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "/usr/lib/python3.6/site-packages/notebook/notebookapp.py", line 1368, in initialize
self.init_webapp()
File "/usr/lib/python3.6/site-packages/notebook/notebookapp.py", line 1188, in init_webapp
self.http_server.listen(port, self.ip)
File "/usr/lib/python3.6/site-packages/tornado/tcpserver.py", line 142, in listen
sockets = bind_sockets(port, address=address)
File "/usr/lib/python3.6/site-packages/tornado/netutil.py", line 197, in bind_sockets
sock.bind(sockaddr)
OSError: [Errno 22] Invalid argument
It seems that the other reports related to this involve a different error with sock.bind. I am not sure if the issue is related or not. Any guidance on the matter would be appreciated.
When using socket.bind() you can usually pass it either a hostname or an IP address. In particular, since we only listen on the local loopback address, we can pass either localhost, or 127.0.0.1 (for IPv4), or ::1 (for IPv6).
In theory both should be identical, but in practice there are a number of systems where one (or the other) is problematic. It can be a firewall or antivirus seeing the binding as suspicious, or a strange network configuration. While you probably should still investigate why socket.bind() refuses localhost in your case, you can configure the Jupyter notebook server to bind directly to 127.0.0.1, either by running jupyter notebook --ip=127.0.0.1 or by changing the notebook server configuration with the equivalent long-form option c.NotebookApp.ip = '127.0.0.1'.
Also, if this is widespread on Arch (when installed via the Arch repos), I would suggest contacting the Arch package maintainer about a custom patch that switches the default value to 127.0.0.1.
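For reference, a minimal sketch of the config-file route (this assumes the default config location; create the file first with jupyter notebook --generate-config if it does not exist yet):
# ~/.jupyter/jupyter_notebook_config.py
c.NotebookApp.ip = '127.0.0.1'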
I have a Django website running, and any updates I make to the source code don't take effect.
(The reason I'm changing the file is that one line of code is generating an error. What's weird is that I commented out the line that causes the error, but the old code still runs and thus still causes the error. django.log still shows that line causing the error, yet it also shows it commented out now. So the error log reflects my new source code, but the application itself isn't executing the new code.)
I am very new to Django, so I don't really know what's going on here (it's not my website; I got thrown onto this project for work).
Researching around for this, I have already tried to restart apache:
$ sudo apachectl restart
$ sudo service apache2 restart
and I've also tried to touch the wsgi.py file:
$ touch wsgi.py
and I have even deleted the .pyc file. Nothing has worked and the old line of code is still executing, even though the logs show it commented out.
Not sure where else to check or what else I'm missing.
Whichever service you are using, do a full stop and a full start (i.e., not just restart).
sudo service apache2 stop
sudo service apache2 start
If you are using uwsgi or gunicorn, you will have to do the same for them. Some init scripts, when issued a restart, do not restart the master worker process, which can leave a cached compiled version of your file in memory (with the old code).
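For example, if gunicorn is run as a service under the same init system (the service name gunicorn here is a guess; substitute whatever yours is called):
sudo service gunicorn stop
sudo service gunicorn start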
With the help of #2ps I was able to figure out my problem. When I tried to stop Apache, the website was still up.
I realized there's another IP address for the website, so I'm guessing the first one must redirect to the second one?
Either way, I reopened SSH in the other IP address, restarted Apache and the source code updated immediately!
UPDATE:
As per #VidyaSagar's request, I'm providing more info, as it seems to be a weird fluke with Django. My original problem was that a certain line of code was causing an error. I commented out this line, deleted the .pyc file, and restarted Apache. Another error occurred (as expected, given the code). So then I uncommented that line back to how it was previously, again deleted the .pyc and restarted Apache, and the system worked like normal. It seems Django just wanted me to do a fresh recompile of the file?
Django version: 1.7.4
Traceback of django.log
ERROR Internal Server Error: /upload/
Traceback (most recent call last):
File "/home/company/app/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/company/app/app/geo_app/views.py", line 306, in upload
shutil.make_archive(kml_dir, 'zip', root_dir=kml_dir)
File "/usr/lib/python2.7/shutil.py", line 521, in make_archive
save_cwd = os.getcwd()
OSError: [Errno 2] No such file or directory
ERROR Internal Server Error: /upload/
Traceback (most recent call last):
File "/home/company/app/lib/python2.7/site-packages/django/core/handlers/base.py", line 111, in get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/company/app/app/geo_app/views.py", line 306, in upload
# shutil.make_archive(kml_dir, 'zip', root_dir=kml_dir)
File "/usr/lib/python2.7/shutil.py", line 521, in make_archive
save_cwd = os.getcwd()
OSError: [Errno 2] No such file or directory
For those of you with cPanel, if you go under "Setup Python App" and click "Restart" it should update. Saved me about 5 times.
When I run the fully_connected_feed.py code:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/tutorials/mnist/fully_connected_feed.py
I get an error:
Traceback (most recent call last):
File "C:/Users/AppData/Local/Continuum/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist/fully_connected_feed.py", line 277, in <module>
tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\platform\app.py", line 43, in run
sys.exit(main(sys.argv[:1] + flags_passthrough))
File "C:/Users/AppData/Local/Continuum/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist/fully_connected_feed.py", line 222, in main
run_training()
File "C:/Users/AppData/Local/Continuum/Anaconda3/Lib/site-packages/tensorflow/examples/tutorials/mnist/fully_connected_feed.py", line 120, in run_training
data_sets = input_data.read_data_sets(FLAGS.input_data_dir, FLAGS.fake_data)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\mnist.py", line 211, in read_data_sets
SOURCE_URL + TRAIN_IMAGES)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\contrib\learn\python\learn\datasets\base.py", line 142, in maybe_download
gfile.Copy(temp_file_name, filepath)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\lib\io\file_io.py", line 316, in copy
compat.as_bytes(oldpath), compat.as_bytes(newpath), overwrite, status)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\contextlib.py", line 66, in __exit__
next(self.gen)
File "C:\Users\AppData\Local\Continuum\Anaconda3\lib\site-packages\tensorflow\python\framework\errors_impl.py", line 469, in raise_exception_on_not_ok_status
pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors_impl.OutOfRangeError: Read fewer bytes than requested
How do I resolve this issue?
After doing the following, I was able to run the script without errors. The key to getting it to work for me was that the installed version of tensorflow has to match the tutorial code; otherwise there were exceptions (although the exception I got at first was different from yours).
After installing tensorflow, check the version. The details of this step may differ if you installed it with pip or some other method:
$ conda list tensorflow
# packages in environment at /Users/agr/miniconda3/envs/tensorflow:
#
tensorflow 0.11.0 py35_0 conda-forge
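If you did not install through conda, a quick way to check the version regardless of installer is from Python itself:
import tensorflow as tf
print(tf.__version__)  # should match the tag you check out below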
Clone the git repo
$ git clone https://github.com/tensorflow/tensorflow.git
Inspect the available tags and check out the release matching your install:
$ cd tensorflow
$ git tag -l -n1
...
$ git checkout v0.11.0
Run script!
$ cd examples/tutorials/mnist/
$ python fully_connected_feed.py
The key point is to run the script from here, not from the copy at the link you posted in the original question.
TL;DR
Something else is altering your files as you create them. Find the process and stop it.
Research
I've just run the demo on Windows 10 with Python 3.5 and tensorflow 0.12.0 with no errors, so it must be something about your environment.
Looking at the actual line of the error, you are failing to read the required number of bytes from the open file. Going further up the stack, you can see that CopyFile is actually trying to read all the bytes of the file into a string in this function. It starts by finding out the current file size and then tries to read exactly that many bytes.
The problem is that the file size at the start of this process doesn't match the size by the end of the copy. In other words, something else has altered your file.
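To make the failure mode concrete, here is a rough Python sketch of that stat-then-read pattern (the real code lives in TensorFlow's file I/O layer; this is only an illustration):
import os

def copy_all_bytes(path):
    # Ask for the file size first, then try to read exactly that many bytes,
    # which is roughly what the copy routine in the traceback does.
    expected = os.path.getsize(path)
    with open(path, 'rb') as f:
        data = f.read(expected)
    # If another process truncates or replaces the file between the size
    # check and the read, fewer bytes come back than were requested --
    # the same condition reported as "Read fewer bytes than requested".
    if len(data) < expected:
        raise IOError('Read fewer bytes than requested')
    return data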
What next?
Your best bet is to try to find out what else is accessing your file. I suggest you use the techniques explained here to see what else has the file open as you are running the copy.
I encountered the same problem on Windows 2012 Server.
As suggested in the previous post, I downloaded and launched Process Monitor, then set the filter "Path contains mnist". The datasets were downloaded and unpacked correctly while running the code both from Spyder and from Jupyter.
I suspect there is a race condition in the library code, i.e. missing synchronization between the downloading and unpacking operations. Because Process Monitor introduced additional delays, the datasets were successfully downloaded before the next operation started, so the hazardous behavior was not observed.
I am trying to write a fairly simple function (or so I thought).
I want a user to specify a path to a checked-out git repo on their computer (a repo that requires authentication).
I then want to do a git pull on that folder to grab any changes.
My code looks like this:
import git

# GIT_PATH is the path to the already checked-out repository
repo = git.cmd.Git(GIT_PATH)
repo.pull()
This works perfectly on my Linux machine; it never asks for any credentials (I'm guessing because ssh-agent has already unlocked the key and supplies credentials when my script needs them).
On Windows, however, I can't get it to work. I have installed PuTTY and added the key to Pageant. I can check out the repo using TortoiseGit, for example, and it works perfectly fine, but if I execute the code above on the Windows machine, I get:
Traceback (most recent call last):
File "test.py", line 2, in <module>
repo = git.repo.base.Repo.clone_from(GIT_PATH, "tmp")
File "C:\Python34\lib\site-packages\git\repo\base.py", line 849, in clone_from
return cls._clone(Git(os.getcwd()), url, to_path, GitCmdObjectDB, progress,
**kwargs)
File "C:\Python34\lib\site-packages\git\repo\base.py", line 800, in _clone
finalize_process(proc)
File "C:\Python34\lib\site-packages\git\util.py", line 154, in finalize_proces
s
proc.wait()
File "C:\Python34\lib\site-packages\git\cmd.py", line 309, in wait
raise GitCommandError(self.args, status, self.proc.stderr.read())
git.exc.GitCommandError: 'git clone -v ssh://git#git.path/repo tmp' returned with exit code 128
stderr: 'Cloning into 'tmp'...
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password).
fatal: Could not read from remote repository.
Please make sure you have the correct access rights
and the repository exists.
Edit: I would like to add that I am not married to GitPython. If anyone knows of another library that would solve my problem, that would work as well.
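One thing worth trying (a sketch, not a confirmed fix): command-line git on Windows only talks to Pageant if SSH goes through PuTTY's plink, so pointing the GIT_SSH environment variable at plink.exe before GitPython spawns git may let it use the key you already loaded into Pageant. The plink path below is a placeholder for wherever PuTTY is installed on your machine:
import os
import git

# Point git at PuTTY's plink so it can use the key loaded in Pageant.
# Hypothetical install path; adjust it to your system.
os.environ['GIT_SSH'] = r'C:\Program Files\PuTTY\plink.exe'

repo = git.cmd.Git(GIT_PATH)  # GIT_PATH as in the snippet above
repo.pull()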