I am working on an image classification problem with multiple classes, and I am following the siamese face recognition sample here. I have saved the processed data in .npy format and I use a Lambda layer in the siamese model. It shows an error in <lambda>:
distance_euclid = Lambda( lambda tensors : K.abs( tensors[0] - tensors[1] ))( [output_x1 , output_x2] )
AttributeError: module 'tensorflow.python.keras' has no attribute 'abs'
Here are the package versions I'm using:
keras 2.3.1
python 3.6.10
tensorflow 2.1.0
abs should be imported from tf.keras.backend (tf.keras.backend.abs); it seems you are importing from tensorflow.python.keras. In your imports, change the tensorflow.python.keras line to tf.keras.backend.
In addition, don't forget to upgrade tensorflow:
pip install -U tensorflow
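For reference, here is a minimal sketch of the corrected import and Lambda call; output_x1 and output_x2 are assumed to be the outputs of the two siamese branches, as in the question:
# Import the backend from tf.keras rather than tensorflow.python.keras
from tensorflow.keras import backend as K
from tensorflow.keras.layers import Lambda
# Element-wise absolute difference between the two branch outputs
distance_euclid = Lambda(lambda tensors: K.abs(tensors[0] - tensors[1]))([output_x1, output_x2])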
Related
When I try to run the EfficientNetV2 model
I get this error message:
AttributeError: module 'tensorflow.keras.applications' has no attribute 'efficientnet_v2'
TensorFlow version: tensorflow-gpu 2.6
The import is incorrect and needs to be updated. It might have worked in older Keras versions, but the internal per-network modules inside keras.applications are no longer exposed, so the correct import would be:
keras.applications.EfficientNetV2S
Or if you use tf.keras:
tf.keras.applications.EfficientNetV2S
For future reference, always check the documentation; for EfficientNetV2S the link is here.
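For reference, a minimal usage sketch, assuming a TensorFlow/Keras release that actually ships the EfficientNetV2 applications (they are not available in every 2.x version):
import tensorflow as tf
# Build EfficientNetV2S with ImageNet weights; pass weights=None to train from scratch
model = tf.keras.applications.EfficientNetV2S(weights="imagenet")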
Install efficientnet in your env:
!pip install keras-efficientnet
Then you can import the model as:
import efficientnet.tfkeras as efc
done...
You can use the 'efc' prefix for B0-B7.
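For example, a minimal sketch of loading one of the B0-B7 variants through that prefix (assuming the installed package exposes efficientnet.tfkeras as in the import above):
import efficientnet.tfkeras as efc
# Load EfficientNetB0 with ImageNet weights via the 'efc' prefix
model = efc.EfficientNetB0(weights="imagenet")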
I'm trying to convert a Keras model (Thumbs.h5) into an ONNX model on Google Colab; however, I am getting an "AttributeError: module 'tensorflow.python.keras' has no attribute 'applications'" error when I run the code.
My code:
from tensorflow.python.keras import backend as K
from tensorflow.python.keras.models import load_model
import onnx
import keras2onnx
onnx_model_name = 'fish-resnet50.onnx'
model = load_model('model-resnet50-final.h5')
onnx_model = keras2onnx.convert_keras(model, model.name)
onnx.save_model(onnx_model, onnx_model_name)
What I've tried:
Updating keras with !pip install keras --upgrade (already updated)
Running it locally in a Jupyter notebook on my M1 Mac (12.4), which gives the same error
Pointers or solutions greatly appreciated.
As per the documentation:
keras2onnx has been tested on Python 3.5 - 3.8, with tensorflow 1.x/2.0 - 2.2
So install a compatible version of TensorFlow.
Also, keras2onnx is no longer under active development, so use tf2onnx instead, as per its documentation.
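For reference, a minimal sketch of the tf2onnx route, assuming tf2onnx is installed (pip install tf2onnx) and using the file names from the question:
import tensorflow as tf
import tf2onnx
# Load the trained Keras model and convert it directly to ONNX
model = tf.keras.models.load_model('model-resnet50-final.h5')
onnx_model, _ = tf2onnx.convert.from_keras(model, output_path='fish-resnet50.onnx')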
I cloned this repository/documentation https://huggingface.co/EleutherAI/gpt-neo-125M
I get the error below whether I run it on Google Colab or locally. I also installed transformers using
pip install git+https://github.com/huggingface/transformers
and made sure the configuration file is named as config.json
5 tokenizer = AutoTokenizer.from_pretrained("gpt-neo-125M/",from_tf=True)
----> 6 model = AutoModelForCausalLM.from_pretrained("gpt-neo-125M",from_tf=True)
/usr/local/lib/python3.7/dist-packages/transformers/file_utils.py in __getattr__(self, name)
AttributeError: module transformers has no attribute TFGPTNeoForCausalLM
Full code:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M",from_tf=True)
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M",from_tf=True)
transformers-cli env results:
transformers version: 4.10.0.dev0
Platform: Linux-4.4.0-19041-Microsoft-x86_64-with-glibc2.29
Python version: 3.8.5
PyTorch version (GPU?): 1.9.0+cpu (False)
Tensorflow version (GPU?): 2.5.0 (False)
Flax version (CPU?/GPU?/TPU?): not installed (NA)
Jax version: not installed
JaxLib version: not installed
Using GPU in script?:
Using distributed or parallel set-up in script?:
Both Colab and my local setup have TensorFlow 2.5.0.
Try it without the from_tf=True flag, like below:
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neo-125M")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-neo-125M")
from_tf expects the pretrained_model_name_or_path (i.e. the first parameter) to be a path to load saved Tensorflow checkpoints from.
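For illustration, a minimal hedged sketch of what from_tf=True is meant for; the local directory below is hypothetical and would need to contain config.json plus TensorFlow weights (tf_model.h5), and the architecture must have a TF implementation in your transformers version:
from transformers import AutoModelForCausalLM
# Hypothetical local directory with config.json and tf_model.h5;
# from_tf=True converts the TensorFlow weights into a PyTorch model.
model = AutoModelForCausalLM.from_pretrained("./my-tf-checkpoint", from_tf=True)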
My solution was to first edit the source code to remove the line that adds "TF" in front of the class name: the correct transformers class is GPTNeoForCausalLM, but somewhere in the source code a "TF" was manually prepended to it.
Secondly, before cloning the repository you must run
git lfs install.
This link helped me install git lfs properly https://askubuntu.com/questions/799341/how-to-install-git-lfs-on-ubuntu-16-04
I cloned https://github.com/matterport/Mask_RCNN.git and tried to run demo.ipynb
With these two lines,
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
I got the following error
~\Desktop\Neuer Ordner\Mask RCNN\Mask_RCNN-master\mrcnn\model.py in log2_graph(x)
339 def log2_graph(x):
340 """Implementation of Log2. TF doesn't have a native implementation."""
--> 341 return tf.log(x) / tf.log(2.0)
342
343
AttributeError: module 'tensorflow' has no attribute 'log'
I followed the solutions as per https://github.com/matterport/Mask_RCNN/issues/1797
like downgrading TensorFlow to 1.13.1 and using tf.math.log(), but nothing helped.
I also tried running some log functions directly, and they returned results! I don't know where the problem is.
>>> z=tf.log(x)
>>> with tf.Session() as sess: print(z.eval())
...
[ -inf -0.6931472 0. 1.609438 ]
After
pip3 install -r drive/mrcnn/Mask_RCNN-master/requirements.txt,
you can run !pip3 install 'tensorflow==1.14.0'
and then restart the runtime.
You can use TensorFlow version 1.14.0
by installing it with this command:
!pip install 'tensorflow==1.14.0'
and then restart the runtime.
Make sure the TensorFlow version in use is 1.14 by running this command:
import tensorflow
print(tensorflow.__version__)
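If downgrading is not an option, the edit the question already mentions (replacing tf.log with tf.math.log in mrcnn/model.py) looks like this; a sketch only, since the asker reports mixed results with it, but tf.math.log exists in both TF 1.x and 2.x:
def log2_graph(x):
    """Implementation of Log2. TF doesn't have a native implementation."""
    # tf.log was removed in TF 2.x; tf.math.log works in both 1.x and 2.x
    return tf.math.log(x) / tf.math.log(2.0)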
I am trying to get Python + deepwater + tensorflow to run on RHEL 6.7. Using conda, I have installed python 3.6.0, tensorflow 1.1.0 and also gcc 4.8.5. TF is working fine.
I have installed the following libraries using pip install: h2o-3.11.0.3904-py2.py3-none-any.whl and h2o-3.11.0-py2.py3-none-any.whl.
I tried to run the following example from the h2o tutorial
import h2o
from h2o.estimators.deepwater import H2ODeepWaterEstimator
h2o.init()
train = h2o.import_file("https://h2o-public-test-data.s3.amazonaws.com/bigdata/laptop/mnist/train.csv.gz")
features = list(range(0,784))
target = 784
train[target] = train[target].asfactor()
model = H2ODeepWaterEstimator(epochs=100, activation="Rectifier", hidden=[200,200], ignore_const_cols=False,
mini_batch_size=256, input_dropout_ratio=0.1, hidden_dropout_ratios=[0.5,0.5], stopping_rounds=3,
stopping_tolerance=0.05, stopping_metric="misclassification", score_interval=2, score_duty_cycle=0.5,
score_training_samples=1000, score_validation_samples=1000, nfolds=5, gpu=False, seed=1234, backend="tensorflow")
model.train(x=features, y=target, training_frame=train)
The following exception is thrown
Exception: Unable to initialize the native Deep Learning backend: Cannot find TensorFlow native library for OS: linux, architecture: x86_64. See https://github.com/tensorflow/tensorflow/tree/master/java/README.md for possible solutions (such as building the library from source).
Is there anything else that I am missing? Would I need to build the bits from scratch for this platform?