I'm trying to run the Speech Recognition Project by Udacity (you can check the whole program here) on Google Colab, but I have a problem because Google Colab no longer supports TensorFlow 1.x. I've run the project before and it went smoothly without any issues, but now it doesn't.
Here are some things I have tried so far:
Scene 1
Using tensorflow.compat.v1, as instructed in TensorFlow's TF2 migration guide (sketched below):
training the model worked,
but prediction failed.
Scene 2
I also tried converting the whole project with !tf_upgrade_v2, but there was no change (the .py and .ipynb files still contain the same code).
Scene 3
Installing specific versions of tensorflow (1.15), keras (2.0.5), and h5py (2.7.0) in Colab (also sketched below):
this worked too, but when I ran prediction the model gave no transcription.
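Roughly what the Scene 1 and Scene 3 attempts looked like in Colab (a sketch; the exact notebook cells differ from this):

# Scene 1 (sketch): run the TF1 project through the TF2 compatibility layer
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

# Scene 3 (sketch): pin the old versions in a Colab cell before any imports
!pip install tensorflow==1.15 keras==2.0.5 h5py==2.7.0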
I really need help fixing this so that I can run the project.
I am attempting to use YOLOv5 in my Python code for a Kaggle competition. The only catch is that the competition does not allow internet access. I already have the yolov5 files loaded into the Kaggle notebook, but I am unsure how to call it in my code.
I've tried loading it in the following way but keep getting errors.
import torch
torch.save('yolov5x6.pt', 'yolov5')
yolov5x6_model = torch.load('yolov5')
YOLOv5 is a follow-up version of YOLO, which was originally built on Darknet, a neural-network library written in C by pjreddie.
It is an object-detection model that can be trained to recognise objects in images or videos.
If you just want to detect everyday objects, you can simply run inference on images/videos using Python together with the pretrained weights and config file. You will find these files under the pretrained checkpoints section at the following link (a sketch of loading them offline follows the link).
https://github.com/ultralytics/yolov5
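Since the competition disallows internet access, one option is to load the weights locally through torch.hub (a sketch; the /kaggle/input paths are placeholders for wherever your copy of the repo and the .pt file actually live):

import torch

# Point torch.hub at the local clone of the yolov5 repo instead of GitHub.
model = torch.hub.load('/kaggle/input/yolov5', 'custom',
                       path='/kaggle/input/yolov5/yolov5x6.pt',
                       source='local')

results = model('some_image.jpg')  # run inference on an image
results.print()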
After working through TensorFlow's example tutorial on Colab, Basic regression: Predict fuel efficiency (https://www.tensorflow.org/tutorials/keras/regression), I've been trying to run the example in a Jupyter notebook via Anaconda so that I can also run it offline.
The code at that link works fine on Google Colab, but when I try to run it in a Jupyter notebook I get a vector of NaNs.
A run on Google Colab yields sensible predictions (screenshot: "Running the example on google colab"), whereas a run on Jupyter yields NaNs (screenshot: "Running the example on jupyter lab").
The code for building the sequential model (which is where the problem might come from):
horsepower_model = tf.keras.Sequential([
horsepower_normalizer,
layers.Dense(units=1)
])
horsepower_model.summary()
The rest of the code can be found in this link that I have attached above:
https://www.tensorflow.org/tutorials/keras/regression
What the model has to do:
This model takes two arrays from the dataset, MPG and Horsepower, and uses single-variable linear regression to predict MPG from Horsepower.
To do this, the model introduces a Dense layer with a single unit, as seen here:
layers.Dense(units=1)
The output of the model should be the predicted MPG values based on the horsepower values.
We can then compare them with the real MPG values from the dataset.
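For context, the surrounding setup in the tutorial looks roughly like this (a sketch from memory; train_features and train_labels are the splits built earlier in the tutorial, and the exact code is at the link above):

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# Normalization layer adapted to the single Horsepower feature.
horsepower = np.array(train_features['Horsepower'])
horsepower_normalizer = layers.Normalization(input_shape=[1,], axis=None)
horsepower_normalizer.adapt(horsepower)

horsepower_model = tf.keras.Sequential([
    horsepower_normalizer,
    layers.Dense(units=1)
])

horsepower_model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.1),
    loss='mean_absolute_error')

history = horsepower_model.fit(
    train_features['Horsepower'], train_labels,
    epochs=100, validation_split=0.2, verbose=0)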
Thank you in advance.
EDIT: I have uploaded my notebook:
https://www.udrop.com/Jk3/Horsepower.ipynb
I see you are using different models in the two environments.
In Colab you used a model with only 5 parameters,
whereas in the notebook you are using a dense model with 944 parameters.
Try running the model with fewer parameters in the notebook first,
or
try running the same model in both (maybe the more complex model was not trained completely).
EDIT 1:
I have created a Jupyter notebook and tried the code.
On my system it works fine, so try running that Jupyter notebook (linked) on your system.
If it works, then there may be a syntax error somewhere in your code.
EDIT 2:
Update TensorFlow:
pip install --ignore-installed --upgrade tensorflow
Use this, or any other command that works for your installation.
Putting an extra set of brackets around each horsepower value fixes the issue for me:
horsepower = np.array([[h] for h in train_features['Horsepower']])
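Equivalently (a small sketch, assuming train_features is the tutorial's pandas DataFrame), you can reshape the column into a 2-D array, since Dense layers expect input of shape (num_samples, num_features):

import numpy as np

# Turn the 1-D Horsepower column into a column vector of shape (n, 1).
horsepower = np.array(train_features['Horsepower']).reshape(-1, 1)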
Any time I train a neural network (using TensorFlow with Keras) and attempt to plot (with matplotlib) the loss history of the fitted model, the kernel dies. I do not think the code itself is at fault, because running different code from different validated sources (links below, with a minimal repro sketch after them) causes the same problem.
It also appears to be specific to TensorFlow and matplotlib: if I fit an sklearn model and then plot, it works fine.
Example links:
https://github.com/chrisalbon/notes/blob/master/docs/deep_learning/keras/visualize_loss_history.ipynb
https://machinelearningmastery.com/tutorial-first-neural-network-python-keras/
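A minimal repro in the spirit of those examples (a sketch; the data and layer sizes are purely illustrative):

import numpy as np
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.layers import Dense

# Tiny synthetic binary-classification problem, just enough to get a loss history.
X = np.random.rand(200, 4)
y = np.random.randint(0, 2, size=(200,))

model = Sequential([Dense(8, activation='relu', input_shape=(4,)),
                    Dense(1, activation='sigmoid')])
model.compile(optimizer='adam', loss='binary_crossentropy')
history = model.fit(X, y, epochs=10, verbose=0)

# The kernel dies at this plotting step.
plt.plot(history.history['loss'])
plt.xlabel('epoch')
plt.ylabel('loss')
plt.show()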
Tried fixes:
Restarting the computer
Removing keras and matplotlib
Rolling back matplotlib to a previous version (3.0.2)
Updating Python 3.6 to 3.7.1
Uninstalling Python and Anaconda from the computer and re-installing them
Running the code in different browsers (Safari and Chrome)
I believe it has something to do with my installation. I sent the notebook and data to someone else with the exact same computer and setup, and it worked fine for them.
I've also tried running the .py file from the command line to surface errors, but nothing happens (no error, no indication that the file is running). Other .py files run, though.
Current versions
OS - Mojave v10.14.5
Python - 3.7.1
Matplotlib - 3.0.3
Keras - 2.2.4
TensorFlow - 1.13.1
After trial and error the issue appears to stem from a bug in TensorFlow. I'm not sure of the full specifics to what is causing the issue, but when TensorFlow is rolled back to 1.11 the issue no longer occurs. So for anyone who is also experiencing this problem, you may want to try rolling back TensorFlow.
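For example, with pip that would be roughly the following (a sketch; adjust to however you manage environments):
pip install tensorflow==1.11.0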
I am having an issue with Keras where my processor seems to get bogged down while working through examples.
For instance, with the IMDB dataset (exercise 3.4.1 in Deep Learning with Python by Chollet, if anyone knows the book), running the script:
import keras
from keras.datasets import imdb
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
produces output that looks something like:
[=====>...] - ETA: 59s 16105472/17464789
which updates increasingly slowly as the numbers get larger and approach completion.
I'm assuming my installation of Keras/TensorFlow/CUDA/cuDNN is to blame, but I'm curious whether you know of anything obvious that would solve the issue.
I'm running Ubuntu Linux with an NVIDIA GTX 1080, and Keras/TensorFlow (GPU)/CUDA/cuDNN (maybe; that assumes I installed everything correctly, which is probably not accurate).
Thanks!
This progress bar is shown during the first initial download and should not be present on subsequent imports of the data.
There might be several issues that cause this to slow down and/or fail:
Your internet connection is unstable.
There is an issue with the serving of the file. Maybe the repository server serves a corrupt file? (You could try to force download from another repository, see How to setup pip to download from mirror repository by default? )
Your local disk or a previously partial download is corrupt: you can try deleting the partially downloaded file at ~/.keras/datasets/imdb.npz (a sketch of this follows the list).
Check whether your hard disk is full.
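A small sketch for clearing a possibly corrupt cached copy so that Keras downloads it again (assuming the default cache location):

import os

# Keras caches downloaded datasets under ~/.keras/datasets/.
cached = os.path.expanduser('~/.keras/datasets/imdb.npz')
if os.path.exists(cached):
    os.remove(cached)  # the next imdb.load_data() call will re-download the file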
At WWDC 2017, Apple announced support for trained ML models and for converting Caffe models to Apple's mlmodel format. So I downloaded a model trained with Caffe and tried to convert it to an mlmodel by following Apple's coremltools Python documentation. Below is the code:
import coremltools

coreml_model = coremltools.converters.caffe.convert(
    ('oxford102.caffemodel', 'deploy.prototxt'),
    predicted_feature_name='class_labels.txt')
coreml_model.save('oxford102.mlmodel')
But every time I click Run, Python crashes. Could anyone give me some help? Thanks.
Python quit unexpectedly
I had the same issue. You have to use a specific Python version; take a look at this: https://github.com/hollance/MobileNet-CoreML (a sketch of such a setup follows).
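A sketch of that kind of setup, assuming (as early coremltools releases required) that the converter needs Python 2.7; the environment name is just a placeholder:

conda create -n coreml python=2.7
source activate coreml
pip install coremltools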