I am using TensorFlow in a Jupyter notebook. I have no problems using TensorFlow itself, but when I try to display TensorBoard using the "show_graph" function (referenced here: Simple way to visualize a TensorFlow graph in Jupyter?), all it shows is a blank space. My classmates who ran the same code had no problem opening TensorBoard.
My system is Win 10. I am new to Jupyter notebook and TensorFlow, so I don't know how to troubleshoot this.
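For reference, following that answer I call the helper roughly like this (a sketch of the usage, not my exact cell; show_graph is the function defined in the linked answer):
import tensorflow as tf

# show_graph is defined as in the linked answer; it renders the graph in an inline HTML frame
show_graph(tf.get_default_graph().as_graph_def())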
I'm trying to run the Speech Recognition Project by Udacity (you can check the whole program here) on Google Colab, but I have a problem because Google Colab doesn't support TensorFlow 1.x anymore. I've run it before and it ran smoothly without any issues, but now it doesn't.
Here are some things I have tried so far:
Attempt 1
Using tensorflow.compat.v1 as instructed in TensorFlow's TF2 migration guide (a minimal sketch of this approach is shown after the list below):
it worked for training the model
but it failed when predicting with the model
Attempt 2
I also tried converting the whole project using !tf_upgrade_v2, but there was no change (the .py and .ipynb files still contain the same code).
Attempt 3
Installing specific versions of tensorflow (1.15), keras (2.0.5), and h5py (2.7.0) in Colab;
this worked too, but when I ran prediction the model gave no transcription.
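For reference, the compat.v1 shim from Attempt 1 looks roughly like this (a minimal sketch; the rest of the project's code is unchanged):
# Run the project's TF1-style code on the TF2 runtime that Colab provides
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()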
I really need help figuring out how to fix this so I can run the project again.
I recently published my Jupyter notebook on GitHub. All the visualisations in the notebook are visible except the Folium heatmap. The error says to turn on File > Trust in the Jupyter notebook.
I tried doing that and then pushed the notebook, but the error still persists. I have attached an image for reference. Thanks in advance for the help.
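(Side note: the File > Trust action can also be applied from a terminal with jupyter trust <notebook-file> before committing; the notebook filename here is just a placeholder.)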
I am using Jupyter notebook on my PC. As the length of my code grows, I want to hide and show sections of the Python code based on the heading markings in Jupyter notebook, like Google Colab does.
Is there any Python package to install in the environment specifically for this functionality? Similar to the screenshots below.
Hidden code based on the headings: https://i.stack.imgur.com/kQSMG.png
Expanded Python code: https://i.stack.imgur.com/20HNw.png
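If it helps to narrow the answer down, one package I have seen mentioned for this (not yet tried, so this is only a guess on my part) is jupyter_contrib_nbextensions with its Collapsible Headings extension, which would be installed and enabled roughly like this:
pip install jupyter_contrib_nbextensions
jupyter contrib nbextension install --user
jupyter nbextension enable collapsible_headings/main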
I have been working through TensorFlow's example tutorial on Colab, Basic regression: Predict fuel efficiency, which can be found here: https://www.tensorflow.org/tutorials/keras/regression.
I've been trying to run this example in a Jupyter notebook via Anaconda so that I can also run it offline.
The code from the link works fine on Google Colab, but when I try to run it in a Jupyter notebook I get a vector of NaNs.
A run on Google Colab yields:
Running the example on Google Colab
A run on Jupyter yields:
Running the example on Jupyter Lab
The code for building the sequential model (this is where the problem might come from):
import tensorflow as tf
from tensorflow.keras import layers

# horsepower_normalizer is the Normalization layer adapted earlier in the tutorial
horsepower_model = tf.keras.Sequential([
    horsepower_normalizer,
    layers.Dense(units=1)
])
horsepower_model.summary()
The rest of the code can be found at the link I attached above:
https://www.tensorflow.org/tutorials/keras/regression
What the model has to do:
This model takes two arrays from the dataset, MPG and Horsepower, and uses single-variable linear regression to predict MPG from Horsepower.
Therefore, the model introduces a Dense layer with a single unit, as one can see here:
layers.Dense(units=1)
The output of the model should be the values of the MPG, based on the horsepower values.
Then we will be able to compare it with the values of the real MPG that we've got from the dataset.
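For completeness, this is roughly how the tutorial sets up and trains the single-variable model around the snippet above (a condensed sketch reconstructed from the tutorial, assuming a recent TF 2.x where the Normalization layer lives directly under tf.keras.layers; older versions expose it under tf.keras.layers.experimental.preprocessing):
import numpy as np
import tensorflow as tf

# Normalization layer adapted to the raw Horsepower values
horsepower = np.array(train_features['Horsepower'])
horsepower_normalizer = tf.keras.layers.Normalization(input_shape=[1,], axis=None)
horsepower_normalizer.adapt(horsepower)

# Compile and fit: predict MPG from Horsepower with mean absolute error as the loss
horsepower_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error')

history = horsepower_model.fit(
    train_features['Horsepower'], train_labels,
    epochs=100, verbose=0, validation_split=0.2)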
Thank you in advance.
EDIT: I have uploaded my notebook:
https://www.udrop.com/Jk3/Horsepower.ipynb
I see you are using different models in the two environments: in Colab you used a model with only 5 parameters, whereas in the notebook you are using a dense model with 944 parameters.
Try running the model with fewer parameters in the notebook first, or try running the same model in both (maybe the more complex model was not trained completely).
EDIT 1:
I have created a Jupyter notebook and tried the code.
On my system it works fine; try running this notebook on your system: Jupyter Notebook. If it works, then there may be a syntax error somewhere in your code.
EDIT 2:
Update tensorflow:
pip install --ignore-installed --upgrade tensorflow
Use this, or any other command that works for your setup.
Putting an extra set of brackets around each horsepower value fixes the issue for me:
horsepower = np.array([[h] for h in train_features['Horsepower']])
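If it helps, an equivalent way to get the same (n, 1) shape without the list comprehension (train_features here is the training DataFrame from the tutorial):
import numpy as np

# reshape(-1, 1) turns the 1-D array of shape (n,) into a column of shape (n, 1)
horsepower = np.array(train_features['Horsepower']).reshape(-1, 1)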
Any time I train a neural network (using TensorFlow with Keras) and attempt to plot (with matplotlib) the loss history of the fitted model, the kernel dies. I do not think it is the code, because running different code from different validated sources (links below) causes the same problem.
Also, it appears to be specific to TensorFlow and matplotlib. If I fit a scikit-learn model and then plot, it works fine.
Example links:
https://github.com/chrisalbon/notes/blob/master/docs/deep_learning/keras/visualize_loss_history.ipynb
https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
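For reference, the pattern that kills the kernel is essentially this (a minimal sketch with placeholder data and layer sizes, not my actual project code):
import numpy as np
import matplotlib.pyplot as plt
from tensorflow import keras

# Tiny placeholder dataset and model
X = np.random.rand(100, 4)
y = np.random.rand(100)

model = keras.Sequential([
    keras.layers.Dense(8, activation='relu', input_shape=(4,)),
    keras.layers.Dense(1)
])
model.compile(optimizer='adam', loss='mse')
history = model.fit(X, y, epochs=5, verbose=0)

# Plotting the loss history is the step at which the kernel dies
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()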
Tried fixes:
Restarting the computer
Removing Keras and matplotlib
Rolling back matplotlib to a previous version (3.0.2)
Updating Python 3.6 to 3.7.1
Uninstalling Python and Anaconda and re-installing them
Running the code in a different browser (Safari and Chrome)
I believe it has something to do with my installation. I sent the notebook and data to someone else with the exact same computer and setup, and it worked fine.
I've also tried running the .py file through the command line to retrieve errors, but nothing happens (no error, no indication that the file is running). Other .py files run, though.
Current versions
OS - Mojave v10.14.5
Python - 3.7.1
Matplotlib - 3.0.3
Keras - 2.2.4
TensorFlow - 1.13.1
After trial and error, the issue appears to stem from a bug in TensorFlow. I'm not sure of the full specifics of what is causing it, but when TensorFlow is rolled back to 1.11 the issue no longer occurs. So for anyone who is also experiencing this problem, you may want to try rolling back TensorFlow.
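A pinned install along these lines should do it (adjust to whatever 1.11 build is available for your platform):
pip install tensorflow==1.11.0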