I got this error after installing TensorFlow.js. Previously this program was working fine. Could it be a version conflict? I'm really curious as to what's causing it.
Thanks in advance.
File ~\OneDrive\Masaüstü\Bitirme Proje\neural_network(sinir_ağları).py:61
model = build_model()
File ~\OneDrive\Masaüstü\Bitirme Proje\neural_network(sinir_ağları).py:29 in build_model
model = keras.Sequential([
File C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\trackable\base.py:205 in _method_wrapper
result = method(self, *args, **kwargs)
File C:\ProgramData\Anaconda3\lib\site-packages\keras\utils\traceback_utils.py:67 in error_handler
raise e.with_traceback(filtered_tb) from None
File C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py:3331 in saver_with_op_caching
return tf.__internal__.tracking.TrackableSaver(
AttributeError: module 'tensorflow.compat.v2.__internal__.tracking' has no attribute 'TrackableSaver'
I was planning to convert my model with TensorFlow.js and run it on the web, but after installing TensorFlow.js I started getting this error in the program.
Update Keras with pip install 'keras>=2.9.0'; the fix landed in keras-team/keras commit af70910, which changed:
- self._trackable_saver = saver_with_op_caching(self)
+ self._checkpoint = tf.train.Checkpoint(root=weakref.ref(self))
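If you want to confirm that your installed versions actually line up before (or after) upgrading, a quick check (a minimal sketch, not specific to this project):
import tensorflow as tf
import keras
# Keras >= 2.9 no longer calls tf.__internal__.tracking.TrackableSaver, which newer TF removed,
# so the TF and Keras versions need to match (e.g. TF 2.9 with Keras 2.9).
print(tf.__version__, keras.__version__)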
I had to create another environment in Jupyter and reinstall all the libraries from scratch, since it seemed all the libraries had started having issues, not just TensorFlow. It is now working with no errors.
After training, I saved both the whole Keras model and the weights only, using
model.save_weights(MODEL_WEIGHTS) and model.save(MODEL_NAME)
The model and the weights were saved successfully with no error.
I can load the weights using model.load_weights and they are good to go, but when I try to load the saved model via load_model, I get an error.
File "C:/Users/Rizwan/model_testing/model_performance.py", line 46, in <module>
Model2 = load_model('nasnet_RS2.h5',custom_objects={'euc_dist_keras': euc_dist_keras})
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 419, in load_model
model = _deserialize_model(f, custom_objects, compile)
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 321, in _deserialize_model
optimizer_weights_group['weight_names']]
File "C:\Users\Rizwan\AppData\Roaming\Python\Python36\site-packages\keras\engine\saving.py", line 320, in <listcomp>
n.decode('utf8') for n in
AttributeError: 'str' object has no attribute 'decode'
I never got this error before and I used to be able to load models successfully. I am using Keras 2.2.4 with the TensorFlow backend, on Python 3.6.
My code for training is:
from keras_preprocessing.image import ImageDataGenerator
from keras import backend as K
from keras.models import Model, load_model
from keras.layers import Dense
from keras.applications.nasnet import NASNetMobile
from keras.callbacks import ReduceLROnPlateau, TensorBoard, ModelCheckpoint, EarlyStopping
import pandas as pd

MODEL_NAME = "nasnet_RS2.h5"
MODEL_WEIGHTS = "nasnet_RS2_weights.h5"
def euc_dist_keras(y_true, y_pred):
    return K.sqrt(K.sum(K.square(y_true - y_pred), axis=-1, keepdims=True))
def main():
    # Here, we initialize the "NASNetMobile" model type and customize the final
    # feature regressor layer.
    # NASNet is a neural network architecture developed by Google.
    # This architecture is specialized for transfer learning, and was discovered via Neural Architecture Search.
    # NASNetMobile is a smaller version of NASNet.
    model = NASNetMobile()
    model = Model(model.input, Dense(1, activation='linear', kernel_initializer='normal')(model.layers[-2].output))
    # model = load_model('current_best.hdf5', custom_objects={'euc_dist_keras': euc_dist_keras})
    # This model will use the "Adam" optimizer.
    model.compile("adam", euc_dist_keras)
    lr_callback = ReduceLROnPlateau(monitor='val_loss', factor=0.2, patience=5, min_lr=0.003)
    # This callback will log model stats to TensorBoard.
    tb_callback = TensorBoard()
    # This callback will checkpoint the best model at every epoch.
    mc_callback = ModelCheckpoint(filepath='current_best_mem3.h5', verbose=1, save_best_only=True)
    es_callback = EarlyStopping(monitor='val_loss', min_delta=0, patience=4, verbose=0, mode='auto', baseline=None, restore_best_weights=True)
    # This is the train DataSequence.
    # These are the callbacks.
    # callbacks = [lr_callback, tb_callback, mc_callback]
    callbacks = [lr_callback, tb_callback, es_callback]
    train_pd = pd.read_csv("./train3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    test_pd = pd.read_csv("./val3.txt", delimiter=" ", names=["id", "label"], index_col=None)
    # train_pd = pd.read_csv("./train2.txt", delimiter=" ", header=None, index_col=None)
    # test_pd = pd.read_csv("./val2.txt", delimiter=" ", header=None, index_col=None)
    # model.summary()
    batch_size = 32
    datagen = ImageDataGenerator(rescale=1. / 255)
    train_generator = datagen.flow_from_dataframe(dataframe=train_pd,
                                                  directory="./images", x_col="id", y_col="label",
                                                  has_ext=True,
                                                  class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    valid_generator = datagen.flow_from_dataframe(dataframe=test_pd, directory="./images", x_col="id", y_col="label",
                                                  has_ext=True, class_mode="other", target_size=(224, 224),
                                                  batch_size=batch_size)
    STEP_SIZE_TRAIN = train_generator.n // train_generator.batch_size
    STEP_SIZE_VALID = valid_generator.n // valid_generator.batch_size
    model.fit_generator(generator=train_generator,
                        steps_per_epoch=STEP_SIZE_TRAIN,
                        validation_data=valid_generator,
                        validation_steps=STEP_SIZE_VALID,
                        callbacks=callbacks,
                        epochs=20)
    # We save the model.
    model.save_weights(MODEL_WEIGHTS)
    model.save(MODEL_NAME)

if __name__ == '__main__':
    # freeze_support() here if program needs to be frozen
    main()
For me the solution was downgrading the h5py package (in my case to 2.10.0); apparently reverting only Keras and TensorFlow to the correct versions was not enough.
I downgraded my h5py package with the following command:
pip install 'h5py==2.10.0' --force-reinstall
Then I restarted my IPython kernel and it worked.
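To confirm the downgrade actually took effect in the restarted kernel, a quick sanity check:
import h5py
print(h5py.__version__)  # should now report 2.10.0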
For me it was also the h5py version, which was newer than the one in my previous build; I fixed it by pinning it to 2.10.0.
Downgrade the h5py package with the following command to resolve the issue:
pip install h5py==2.10.0 --force-reinstall
I had the same problem and solved it by passing compile=False to load_model (this skips restoring the optimizer state, which is where the decode error is raised), then compiling the model again myself:
model_ = load_model('path to your model.h5', custom_objects={'Scale': Scale()}, compile=False)
sgd = SGD(lr=1e-3, decay=1e-6, momentum=0.9, nesterov=True)
model_.compile(loss='categorical_crossentropy', optimizer=sgd, metrics=['accuracy'])
Another option is to save in the TF format instead of h5py, with save_format='tf'. In my case:
model.save_weights("NMT_model_weight.tf",save_format='tf')
This is probably due to the model having been saved with a different version of Keras. I got the same problem when loading, from Keras 2.2.6, a model generated by tensorflow.keras (which corresponds roughly to Keras 2.1.6 for TF 1.12, I think).
You can load the weights with model.load_weights and then re-save the complete model from the Keras version you want to use, as sketched below.
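A minimal sketch of that re-save workaround; build_model stands in for whatever code reconstructs the same architecture as the original model, and the file names are placeholders:
# rebuild the architecture with the Keras version you actually use
model = build_model()
# weight files generally load across versions even when whole-model files do not
model.load_weights('old_model_weights.h5')
# re-save the complete model so load_model works from this version onwards
model.save('model_resaved.h5')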
The solution that worked for me was:
pip3 uninstall keras
pip3 uninstall tensorflow
pip3 install --upgrade pip
pip3 install tensorflow
pip3 install keras
I still kept getting this error with tensorflow==2.4.1, h5py==2.1.0, and Python 3.8 in my environment.
What fixed it was downgrading the Python version to 3.6.9.
Downgrading Python, TensorFlow, Keras, and h5py resolved the issue for me:
python -> 3.6.2
pip install tensorflow==1.3.0
pip install keras==2.1.2
pip install 'h5py==2.10.0' --force-reinstall
I'm getting a BrokenPipeError when I try to run sentiment analysis with Hugging Face. It returns [Errno 32] Broken pipe. Is there any way to rewrite the next(iter(train_data_loader)) code?
Link with the complete code: https://colab.research.google.com/drive/1wBXKa-gkbSPPk-o7XdwixcGk7gSHRMas?usp=sharing
The relevant code is:
def create_data_loader(df, tokenizer, max_len, batch_size):
    ds = GPReviewDataset(
        reviews=df.content.to_numpy(),
        targets=df.sentiment.to_numpy(),
        tokenizer=tokenizer,
        max_len=max_len
    )
    return DataLoader(
        ds,
        batch_size=batch_size,
        num_workers=4
    )
Followed by the code below:
BATCH_SIZE = 16
train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
val_data_loader = create_data_loader(df_val, tokenizer, MAX_LEN, BATCH_SIZE)
test_data_loader = create_data_loader(df_test, tokenizer, MAX_LEN, BATCH_SIZE)
Followed by
data = next(iter(train_data_loader))
data.keys()
The error is raised on the data = next(iter(train_data_loader)) line:
BrokenPipeError: [Errno 32] Broken pipe
One of the reasons for this issue might be the OS. When you're running on Windows, you should not set num_workers, because the PyTorch DataLoader's worker processes often fail there (see also the sketch after the snippet below). By default num_workers is 0, which works on Windows.
DataLoader(
ds,
batch_size=batch_size,
num_workers=0 # should be zero on Windows
)
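If you do want worker processes on Windows, a minimal sketch of the usual alternative (names taken from the question's code): guard the entry point, which Windows' spawn-based multiprocessing requires, and create and iterate the loader inside it:
if __name__ == '__main__':
    train_data_loader = create_data_loader(df_train, tokenizer, MAX_LEN, BATCH_SIZE)
    data = next(iter(train_data_loader))
    print(data.keys())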
I am trying to convert a TensorFlow graph (a .pb file) into a .mlmodel:
import tfcoreml

coreml_model = tfcoreml.convert(
    tf_model_path='optimized_model.pb',
    mlmodel_path='FaceImages.mlmodel',
    output_feature_names=['final_result'],
    input_name_shape_dict={'ResizeBilinear': {'images': None, 'size': {None, None}}},
    minimum_ios_deployment_target='13')
but I am getting the following error:
/usr/local/lib/python3.6/dist-packages/coremltools/converters/nnssa/frontend/tensorflow/graphdef_to_ssa.py
in load_tf_graph(graph_file)
21 with tf.io.gfile.GFile(graph_file, "rb") as f:
22 graph_def = tf.compat.v1.GraphDef()
---> 23 graph_def.ParseFromString(f.read())
24
25 # Then, we import the graph_def into a new Graph and returns it
DecodeError: Error parsing message
Could anybody help with this, please?
Here is the Colab project with the TensorFlow model and the associated conversion code:
https://colab.research.google.com/drive/1S7nf7pnX15UuswFZaTih5pHhfDFwG5Xa
Have you checked that the version of TensorFlow you are using is compatible with those libraries? This is just a guess, but you could try running
!pip install tensorflow --upgrade
at the top of the notebook to see if it resolves the issue.
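To narrow down whether the problem is the converter or the .pb file itself, you can also reproduce the failing parse step directly in TensorFlow (this mirrors the code shown in the traceback; the path is the one from the question):
import tensorflow as tf

graph_def = tf.compat.v1.GraphDef()
with tf.io.gfile.GFile('optimized_model.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())  # raises DecodeError if the file is not a valid frozen GraphDef
print(len(graph_def.node), 'nodes parsed')
If this fails too, the .pb file is likely corrupted or not a frozen graph; if it succeeds, the issue is more likely in tfcoreml or the arguments passed to convert.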
param_test1 = {'n_estimators': range(20, 81, 10)}
gsearch1 = GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1,
                                                             min_samples_split=500,
                                                             min_samples_leaf=50,
                                                             max_depth=8,
                                                             max_features='sqrt',
                                                             subsample=0.8,
                                                             random_state=10),
                        param_grid=param_test1,
                        scoring='roc_auc',
                        n_jobs=4,
                        iid=False,
                        cv=5)
gsearch1.fit(train[predictors], train[target])  # getting error on this line
Error:
PermissionError: [WinError 32] The process cannot access the file because it is being used by another process: 'C:\Users\xx\AppData\Local\Temp\joblib_memmapping_folder_xx\xx-xx-xx.pkl'
Windows 10
Python 3.6.4
sklearn.__version__
Out[26]: '0.20.2'
Please let me know how to fix this error.
It appears to be a common problem. Googling the error (which is always good practice :-) ) led here, where they offer this workaround:
"Thanks for the details. For now I'm using Parallel with max_nbytes=None."
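For context, max_nbytes=None tells joblib not to memmap large arrays to temporary files, which is what creates the locked .pkl files on Windows. GridSearchCV does not expose that parameter directly, so a hedged workaround sketch (not the only possible fix) is to switch the joblib backend around the fit call, e.g. to threads, which avoids the memmapped temp files entirely:
# scikit-learn 0.20 uses its vendored joblib; on newer versions use `from joblib import parallel_backend`
from sklearn.externals.joblib import parallel_backend

with parallel_backend('threading'):
    gsearch1.fit(train[predictors], train[target])
If you are calling joblib yourself rather than through scikit-learn, you can pass it directly, e.g. Parallel(n_jobs=4, max_nbytes=None).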