I can't import anything from keras if I import it from tensorflow.
I installed tensorflow 2.0 with pip install tensorflow, and while I'm able to write something like:
import tensorflow as tf
from tensorflow import keras
model = keras.Sequential()
if I try to import Sequential from keras:
import tensorflow as tf
from tensorflow import keras
from keras import Sequential
I get Unresolved reference 'keras'.
I've looked into every other post I could find, and the information is contradictory: some say you have to install Keras separately, others say you just need to install TensorFlow.
So far I've tried:
from tensorflow.python import keras
from tensorflow.contrib import keras
import tensorflow.keras as keras
from tensorflow.keras import Sequential
Plus a bunch of combinations of the above; none of these work.
Sorry if it's a dumb question but I've never struggled so much with a simple import before.
Edit: Additional info: I'm on Ubuntu 18.04, with PyCharm and a Python 3.6 virtual environment.
Answer:
It is actually a PyCharm bug!
Link here: https://youtrack.jetbrains.com/issue/PY-38220
I tried the snippet of code proposed by #AYI here:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
example_model = Sequential()
example_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(100, 100, 1)))
example_model.add(MaxPooling2D((2, 2)))
example_model.add(Flatten())
example_model.summary()
and it actually runs normally, despite the warning and error displayed by PyCharm!
Importing this way should work for you: from tensorflow.keras.xxx import xxx
Example of how to import Sequential in tensorflow 2.0:
from tensorflow.keras.models import Sequential
good luck~
Here is the Demo:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten
example_model = Sequential()
example_model.add(Conv2D(64, (3, 3), activation='relu', padding='same', input_shape=(100, 100, 1)))
example_model.add(MaxPooling2D((2, 2)))
example_model.add(Flatten())
example_model.summary()
Related
I am using Jupyter Notebooks on VSCode to create a U-Net.
Here is a quick snippet of my code that generates the error:
# PREPARE U-NET MODEL
from tensorflow.keras import Input, Model
from tensorflow.keras.backend import clear_session
from tensorflow.keras.layers import Activation, Add, BatchNormalization, Concatenate, Convolution2DTranspose, MaxPool2D, SeparableConv2D
from tensorflow.math import reduce_mean
With the new update, Pylance is now integrated into Jupyter notebooks. However, it gives me an error saying that tensorflow.math cannot be resolved. Obviously, I did not somehow install TensorFlow without its math module.
The specific error given is Pylance(reportMissingImports).
Can you try
from tensorflow._api.v2.math import reduce_mean
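If importing through the private _api path feels brittle, a workaround that Pylance usually resolves is to call the function through the top-level tf namespace instead of importing it from the submodule. A minimal sketch, assuming TensorFlow 2.x:
import tensorflow as tf

x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
mean_all = tf.math.reduce_mean(x)           # mean over every element -> 2.5
mean_rows = tf.math.reduce_mean(x, axis=1)  # per-row means -> [1.5, 3.5]
print(mean_all.numpy(), mean_rows.numpy())
As in the PyCharm case above, the linter warning does not necessarily mean the import fails at runtime.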
I have been trying to run a machine learning training program on an HPC cluster using MobaXterm for a while now and have been getting
ImportError: cannot import name 'Adam' from 'keras.optimizers'
and similar errors when I run the main file, which should train a model and then output a file of trained weights. I am making sure to import the package named in the error with the line from keras.optimizers import Adam, so it's a mystery why this won't go away.
Someone in another thread suggested tensorflow.keras.optimizers instead of keras.optimizers, but that just gives me the alternative error:
ValueError: Could not interpret optimizer identifier: <tensorflow.python.keras.optimizer_v2.adam.Adam object at 0x2aab0e2dd828>
Interestingly, the program, which is almost unedited from a GitHub download, runs perfectly on my computer locally and also works great on Google Colab. The issues only appear as soon as I send it to the cluster. I wonder if anyone has experience with this kind of thing and knows what I should be paying attention to. Thanks in advance!
Edit: I realized it may be helpful to show all the imports I'm doing at the beginning of the file; they are here:
from __future__ import print_function
import numpy as np
import os
import skimage.io as io
import skimage.transform as trans
import numpy as np
from keras.models import *
from keras.layers import *
from keras.optimizers import * #I have tried commenting out this line but still face the same error
from keras.callbacks import ModelCheckpoint, LearningRateScheduler
from keras import backend as keras
from keras.preprocessing.image import ImageDataGenerator
import glob
from keras.optimizers import Adam
It was initially suggested that I check my package versions. My Keras version was for some reason causing issues, so I did pip uninstall keras and changed all my imports from, for example:
from keras.callbacks import
to:
from tensorflow.keras.callbacks import
and this change fixed the problem.
I had a similar problem and I simply replaced this:
from keras.optimizers import Adam
With this:
from tensorflow.keras.optimizers import Adam
To deal with this error in newer versions of TensorFlow, we can skip importing Adam. We do not have to explicitly import the optimizer; we can just pass its name as a string:
model.compile(optimizer="adam", loss='mse')
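For context, a minimal sketch of both options, assuming TensorFlow 2.x and tf.keras throughout: the string identifier, or an explicit instance built from tensorflow.keras so the optimizer and the model come from the same Keras implementation, which is typically what the "Could not interpret optimizer identifier" error is complaining about when the two are mixed:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import Adam

model = Sequential([Dense(1, input_shape=(4,))])

# Option 1: pass the optimizer by name, with default hyperparameters
model.compile(optimizer="adam", loss="mse")

# Option 2: pass an explicit instance, e.g. for a custom learning rate
model.compile(optimizer=Adam(learning_rate=1e-4), loss="mse")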
I have TensorFlow, an NVIDIA GPU (CUDA)/CPU, Keras, and Python 3.7 on Linux Ubuntu.
I followed all the steps according to this tutorial:
https://www.youtube.com/watch?v=dj-Jntz-74g
When I run the following code:
# What version of Python do you have?
import sys
import tensorflow.keras
import pandas as pd
import sklearn as sk
import tensorflow as tf
print(f"Tensor Flow Version: {tf.__version__}")
print(f"Keras Version: {tensorflow.keras.__version__}")
print()
print(f"Python {sys.version}")
print(f"Pandas {pd.__version__}")
print(f"Scikit-Learn {sk.__version__}")
gpu = len(tf.config.list_physical_devices('GPU'))>0
print("GPU is", "available" if gpu else "NOT AVAILABLE")
I get these results:
Tensor Flow Version: 2.4.1
Keras Version: 2.4.0
Python 3.7.10 (default, Feb 26 2021, 18:47:35)
[GCC 7.3.0]
Pandas 1.2.3
Scikit-Learn 0.24.1
GPU is available
However, I don't know how to run my Keras model on the GPU. When I run my model and check nvidia-smi -l 1, GPU usage is almost 0% during the run.
from keras import layers
from keras.models import Sequential
from keras.layers import Dense, Conv1D, Flatten
from sklearn.metrics import mean_squared_error
import matplotlib.pyplot as plt
from sklearn.metrics import r2_score
from keras.callbacks import EarlyStopping
model = Sequential()
model.add(Conv1D(100, 3, activation="relu", input_shape=(32, 1)))
model.add(Flatten())
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="linear"))
model.compile(loss="mse", optimizer="adam", metrics=['mean_squared_error'])
model.summary()
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=70)
history = model.fit(partial_xtrain_CNN, partial_ytrain_CNN, batch_size=100, epochs=1000,
                    verbose=0, validation_data=(xval_CNN, yval_CNN), callbacks=[es])
Do I need to change any part of my code, or add something, to force it to run on the GPU?
For TensorFlow to work on the GPU, there are a few steps to be done, and they are rather involved.
First of all, compatibility of these frameworks with NVIDIA GPUs is much better than with other vendors, so you will have fewer problems if the GPU is an NVIDIA one; it should be in this list.
The second thing is that you need to install all of the requirements, which are:
1- The latest version of your GPU driver
2- A CUDA installation, as shown here
3- Anaconda; add Anaconda to the environment while installing.
After completing all the installations, run the following commands in the command prompt.
conda install numba & conda install cudatoolkit
Now, to assess the results, use this code:
from numba import jit, cuda
import numpy as np
# to measure exec time
from timeit import default_timer as timer

# normal function to run on cpu
def func(a):
    for i in range(10000000):
        a[i] += 1

# function optimized to run on gpu
@jit(target="cuda")
def func2(a):
    for i in range(10000000):
        a[i] += 1

if __name__ == "__main__":
    n = 10000000
    a = np.ones(n, dtype=np.float64)
    b = np.ones(n, dtype=np.float32)

    start = timer()
    func(a)
    print("without GPU:", timer() - start)

    start = timer()
    func2(a)
    print("with GPU:", timer() - start)
Parts of this answer are from here, which you can read for more.
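As a side note, if the goal is specifically to check whether Keras/TensorFlow (rather than Numba) is using the GPU, a quicker sanity check is to list the visible devices and turn on device-placement logging. A minimal sketch, assuming TensorFlow 2.x:
import tensorflow as tf

print(tf.config.list_physical_devices('GPU'))  # should list at least one GPU
tf.debugging.set_log_device_placement(True)    # log which device each op runs on

a = tf.random.normal((1000, 1000))
b = tf.random.normal((1000, 1000))
c = tf.matmul(a, b)  # the placement log should report GPU:0 if the GPU is used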
I found a solution to my question.
I think the problem was an incompatibility between the NVIDIA driver, cuDNN, and TensorFlow: I have a new NVIDIA graphics card (RTX 3060) in my laptop, which uses the NVIDIA Ampere architecture, and it was probably not compatible with the rest of the stack.
Instead, I referred to these links to download the 21.02 Docker container and then mounted it. In this container, which is provided by NVIDIA, everything is tested and should give good performance.
https://docs.nvidia.com/deeplearning/frameworks/tensorflow-wheel-release-notes/tf-wheel-rel.html
https://docs.nvidia.com/deeplearning/frameworks/tensorflow-release-notes/rel_21-02.html#rel_21-02
Also, to install Docker on Linux, you can follow the procedure explained here:
https://towardsdatascience.com/deep-learning-with-docker-container-from-ngc-nvidia-gpu-cloud-58d6d302e4b2
In google Colab I've written an Ipython notebook where I build a neural network model, fetch the data from my google drive and train the model.
My code runs without errors and trains the model, but I do not see any improvement when I use the Colab GPU vs. the default CPU. Am I making correct use of the GPU, or can TensorFlow not use the GPU of Google Colab?
Some snippets of the code that could relate to this question:
import tensorflow as tf
print(tf.__version__)
device_name = tf.test.gpu_device_name()
if device_name != '/device:GPU:0':
    raise SystemError('GPU device not found')
print('Found GPU at: {}'.format(device_name))
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, BatchNormalization, Flatten, Dense, TimeDistributed, ReLU, ConvLSTM2D, Activation, Dropout, Reshape
Result:
2.0.0-alpha0
Found GPU at: /device:GPU:0
Building the model:
with tf.device("/gpu:0"):
    model = Sequential()

    #layer1
    model.add(
        TimeDistributed(
            TimeDistributed(
                Conv2D(
                    filters=4, kernel_size=(1,10), strides=(1,10), data_format="channels_last"
                )
            ), input_shape=(40, 5, 7, 100, 1), name="LLConv"
        )
    )
    model.add(TimeDistributed(BatchNormalization(axis=4), name="LBNtes"))
    model.add(TimeDistributed(ReLU(), name="LRelu"))
    #print(model.output_shape)#(None, 40, 5, 7, 10, 4)

    #layer2
    model.add(
        TimeDistributed(
            ConvLSTM2D(
                filters=4, kernel_size=(7,3), strides=(1,1), data_format="channels_last", return_sequences=True
            ), name="LConvLST"
        )
    )
    model.add(TimeDistributed(BatchNormalization(axis=4), name="LBN2"))
    model.add(TimeDistributed(Activation("tanh"), name="Ltanh"))
    #print(model.output_shape)#(None, 40, 5, 1, 8, 4)

    model.add(Reshape((40, 5, 8, 4), name="reshape"))

    #layer3
    model.add(
        ConvLSTM2D(
            filters=1, kernel_size=(4,4), strides=(1,1), data_format="channels_last", name="GConvLSTM", return_sequences=True
        )
    )
    model.add(BatchNormalization(axis=3, name="GBN"))
    model.add(Activation("tanh", name="Gtanh"))
    #print(model.output_shape)#(None, 40, 2, 5, 1)

    model.add(TimeDistributed(Flatten()))
    #print(model.output_shape)#(None, 40, 10)
    model.add(Flatten())

    #layer4
    model.add(Dense(10, name="GDense"))
    model.add(BatchNormalization(axis=-1))
    model.add(ReLU())
    model.add(Dropout(0.5))

    #layer5
    model.add(Dense(1, activation="linear"))

    model.compile(
        loss=tf.keras.losses.MeanSquaredError(),
        optimizer=tf.keras.optimizers.Nadam(lr=0.001, decay=1e-6),
        metrics=['mae', 'mse'],
    )
    #model.summary()
Training the model:
EPOCHS = 300
BATCH_SIZE = 15
with tf.device("/gpu:0"):
    history = model.fit(train_features, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE, validation_data=(test_features, test_labels))
Make sure that you have tensorflow-gpu installed.
Try this in a new Colab notebook first, with the GPU kernel enabled.
# Uninstall tensorflow first
!pip uninstall tensorflow -y
# Install tensorflow-gpu (stable version)
!pip install tensorflow-gpu # stable
import tensorflow as tf
# Check version
print(tf.__version__)
from tensorflow.python.client import device_lib
device_lib.list_local_devices()
References
How to upgrade tensorflow with GPU on google colaboratory
https://www.tensorflow.org/install/pip?lang=python3
https://www.tensorflow.org/install/gpu#pip_package
UPDATE: It looks like you no longer need to install tensorflow-gpu in Colab: when you select a GPU runtime, the environment installs tensorflow-gpu under the hood, according to this video: Using GPUs in TensorFlow, TensorBoard in notebooks, finding new datasets, & more! (#AskTensorFlow).
If you try to update tensorflow by running pip install tensorflow-gpu, the binary you install may not be tuned for the GPU hardware that Colaboratory provides. Instead, you should use the tensorflow version that comes bundled with Colab.
Currently, this version is 1.15, but you can switch to version 2.x by running %tensorflow_version 2.x. At some point in the future, TensorFlow 2.x will become the default.
For more information, see https://colab.research.google.com/notebooks/tensorflow_version.ipynb
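For reference, a minimal sketch of what such a Colab cell might look like (the %tensorflow_version magic exists only in Colab and must run before TensorFlow is imported):
%tensorflow_version 2.x
import tensorflow as tf

print(tf.__version__)
print(tf.test.gpu_device_name())  # e.g. '/device:GPU:0' on a GPU runtime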
I'm having some trouble with tensorflow-gpu 1.6.0.
I'm doing the final assignment of the "Bayesian Methods in Machine Learning" class on Coursera.
https://www.coursera.org/learn/bayesian-methods-in-machine-learning
When I run the code on the GPU with tensorflow-gpu (pip install tensorflow-gpu), Python crashes, but if I run the same code on the CPU with the standard tensorflow (pip install tensorflow), the code runs fast without errors or crashes. Obviously, I uninstalled the GPU version before I installed the standard version, and vice versa.
About the Python crash, the debugger shows this message:
Unhandled exception at 0x00007FFDAB4DB79E (ucrtbase.dll) in python.exe
This is the starter code:
import numpy as np
import matplotlib.pyplot as plt
from IPython.display import clear_output
import tensorflow as tf
import GPy
import GPyOpt
import keras
from keras.layers import Input, Dense, Lambda, InputLayer, concatenate, Activation, Flatten, Reshape
from keras.layers.normalization import BatchNormalization
from keras.layers.convolutional import Conv2D, Deconv2D
from keras.losses import MSE
from keras.models import Model, Sequential
from keras import backend as K
from keras import metrics
from keras.datasets import mnist
from keras.utils import np_utils
from tensorflow.python.framework import ops
from tensorflow.python.framework import dtypes
import utils
import os
%matplotlib inline
sess = tf.InteractiveSession()
K.set_session(sess)
latent_size = 8
vae, encoder, decoder = utils.create_vae(batch_size=128, latent=latent_size)
sess.run(tf.global_variables_initializer())
vae.load_weights('CelebA_VAE_small_8.h5')
K.set_learning_phase(False)
latent_placeholder = tf.placeholder(tf.float32, (1, latent_size))
decode = decoder(latent_placeholder)
This code causes a Python crash when it is executed on the GPU but NOT on the CPU:
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i+1)
    image = sess.run(decode, feed_dict={latent_placeholder: np.random.normal([0]*latent_size, [1]*latent_size)[:, np.newaxis].T})[0]  ### YOUR CODE HERE
    plt.imshow(np.clip(image, 0, 1))
    plt.axis('off')
Additional Information:
python version 3.6.4
tensorflow 1.6.0
tensorflow-gpu 1.6.0
cuDNN 7.1.1 for CUDA 9.0
CUDA 9.0 with patch 1 and 2
GPU 1080ti with driver 391.01
You can find the python notebook and the weights on wetransfer:
https://wetransfer.com/downloads/59b9011823d38c204b5ef5a2b58f5e8e20180311201808/32c900
I found the issue. cuDNN 7.1.1 doesn't work yet with tensorflow-gpu. I downgraded cuDNN to 7.0.5 and now the code works as expected.
If you have an issue like mine, you have to downgrade cuDNN!
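If it helps, here is a minimal sketch (using the TF 1.x API matching the versions above) for confirming that ops actually land on the GPU after swapping cuDNN, by turning on device-placement logging:
import tensorflow as tf

config = tf.ConfigProto(log_device_placement=True)
with tf.Session(config=config) as sess:
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [1.0, 1.0]], name='b')
    print(sess.run(tf.matmul(a, b)))  # the placement log should show the matmul on gpu:0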