Why can't I open a .h5 file in Python? - python

I am trying to open an .h5 file, but I am getting an OSError.
import sys
sys.path.append('..')
from unet3d.training import load_old_model
import tables
from train_model import config
model_file=config["model_file"] #config["model_file"] = os.path.abspath("mc_seg_model.h5")
hdf5_file=config["val_data_file"] #config['val_data_file'] = os.path.abspath("../data/val_data.h5")
model = load_old_model(model_file)
The load_old_model function is as follows:
import math
from functools import partial
import pdb

from keras import backend as K
from keras.callbacks import ModelCheckpoint, CSVLogger, LearningRateScheduler, ReduceLROnPlateau, EarlyStopping
from keras.models import load_model
import tensorflow_addons as tfa


def load_old_model(model_file):
    # pdb.set_trace()
    print("Loading pre-trained model")
    # the dice coefficient losses/metrics below are defined elsewhere in unet3d
    custom_objects = {'dice_coefficient_loss': dice_coefficient_loss, 'dice_coefficient': dice_coefficient,
                      'weighted_dice_coefficient': weighted_dice_coefficient,
                      'weighted_dice_coefficient_loss': weighted_dice_coefficient_loss}
    try:
        # from keras_contrib.layers import InstanceNormalization
        from tensorflow_addons.layers import InstanceNormalization  # tensorflow_addons, imported as tfa above
        custom_objects["InstanceNormalization"] = InstanceNormalization
    except ImportError:
        pass
    try:
        return load_model(model_file, custom_objects=custom_objects)
    except ValueError as error:
        if 'InstanceNormalization' in str(error):
            raise ValueError(str(error) + "\n\nPlease install keras-contrib to use InstanceNormalization:\n"
                             "'pip install git+https://www.github.com/keras-team/keras-contrib.git'")
        else:
            raise error
When I try to load the model, it throws the following OSError ('Input/output error').
2021-06-16 14:31:38.354199: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic library libcudart.so.10.1
Traceback (most recent call last):
File "draft.py", line 35, in <module>
model = load_old_model(model_file)
File "../unet3d/training.py", line 50, in load_old_model
return load_model(model_file, custom_objects=custom_objects)
File "/share/apps/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/save.py", line 182, in load_model
return hdf5_format.load_model_from_hdf5(filepath, custom_objects, compile)
File "/share/apps/anaconda3/lib/python3.7/site-packages/tensorflow/python/keras/saving/hdf5_format.py", line 173, in load_model_from_hdf5
model_config = f.attrs.get('model_config')
File "/share/apps/anaconda3/lib/python3.7/_collections_abc.py", line 660, in get
return self[key]
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "/share/apps/anaconda3/lib/python3.7/site-packages/h5py/_hl/attrs.py", line 81, in __getitem__
attr.read(arr, mtype=htype)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5a.pyx", line 355, in h5py.h5a.AttrID.read
File "h5py/_proxy.pyx", line 58, in h5py._proxy.attr_rw
OSError: Unable to read attribute (file read failed: time = Wed Jun 16 14:31:42 2021
, filename = '/data/kfernando/brats20/demo_task3_mcmc/mc_seg_model.h5', file descriptor = 4, errno = 5, error message = 'Input/output error', buf = 0x56126c096440, total read size = 30352, bytes this sub-read = 30352, bytes actually read = 18446744073709551615, offset = 16384)
Can someone please tell me what is causing this error?

Based on your comments about a successful h5py open/close, it appears you have a valid HDF5 file. There are two more issues to investigate: 1) problems reading the attribute data, or 2) errors in the TensorFlow load_model() function. I can't help with TF. However, here is a bit of code that recursively descends the data hierarchy and outputs all attributes and values. See below:
import h5py

def get_all_attrs(name, h5_obj):
    if isinstance(h5_obj, h5py.Group):
        print('\n{} is a Group'.format(name))
    elif isinstance(h5_obj, h5py.Dataset):
        print('\n{} is a Dataset'.format(name))
    print('number of attributes:', len(h5_obj.attrs.keys()))
    for k in h5_obj.attrs.keys():
        print('{} => {}'.format(k, h5_obj.attrs[k]))

# file_path is the path to your HDF5 file (mc_seg_model.h5 in your case)
with h5py.File(file_path, 'r') as h5r:
    print('number of root level attributes:', len(h5r.attrs.keys()))
    for k in h5r.attrs.keys():
        print('{} => {}'.format(k, h5r.attrs[k]))
    h5r.visititems(get_all_attrs)
Run this with your TF file. It might find an error reading one of the attributes. Example output from my test file looks like this:
number of root level attributes: 2
OS => Windows
User => Me
Base_Group is a Group
number of attributes: 2
Date => today
Time => now
Base_Group/default is a Dataset
number of attributes: 2
attr1 => 1.0
attr2 => 22.2
Group1 is a Group
number of attributes: 0
Group1/default1 is a Dataset
number of attributes: 0
This should help determine the source of the error. If h5py can read the attributes, you need to investigate the TF load_model() function. If you get an error reading the attributes... well, that's your problem, but I don't know how to identify the root cause.
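You can also try reading the exact root-level attribute that the Keras loader failed on in your traceback ('model_config'). A minimal sketch, assuming model_file is the same mc_seg_model.h5 path defined in your script:
import h5py
with h5py.File(model_file, 'r') as h5r:
    model_config = h5r.attrs.get('model_config')  # the same attribute read that raised the OSError
    print(type(model_config))
    print(model_config[:200] if model_config is not None else 'model_config attribute not found')
If this reproduces the same OSError, the problem is with the file (or the filesystem it sits on) rather than with TensorFlow.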

Related

Hugging Face Spaces failing to build using fast.ai because of Resampling on module PIL.Image

I keep getting the below error when I try to access my model on Hugging Face Spaces. I am building my model in a Kaggle notebook, exporting it to a .pkl file, adding it to my Spaces repo, and git pushing to HF Spaces. Below is the ImageDataLoaders call that I am using, as I suspect the error is coming from here.
dls = ImageDataLoaders.from_folder(path, valid_pct = 0.2, item_tfms=Resize(460), batch_tfms=aug_transforms(size=224, min_scale=0.75))
Here is the error I am getting.
Traceback (most recent call last):
File "app.py", line 5, in <module>
learn = load_learner('new_model.pkl')
File "/home/user/.local/lib/python3.8/site-packages/fastai/learner.py", line 428, in load_learner
try: res = torch.load(fname, map_location=map_loc, pickle_module=pickle_module)
File "/home/user/.local/lib/python3.8/site-packages/torch/serialization.py", line 712, in load
return _load(opened_zipfile, map_location, pickle_module, **pickle_load_args)
File "/home/user/.local/lib/python3.8/site-packages/torch/serialization.py", line 1049, in _load
result = unpickler.load()
File "/home/user/.local/lib/python3.8/site-packages/torch/serialization.py", line 1042, in find_class
return super().find_class(mod_name, name)
AttributeError: Custom classes or functions exported with your `Learner` not available in namespace.\Re-declare/import before loading:
Can't get attribute 'Resampling' on <module 'PIL.Image' from '/home/user/.local/lib/python3.8/site-packages/PIL/Image.py'>
Here is my full app.py code.
from fastai.vision.all import *
import gradio as gr
import skimage

learn = load_learner('new_model.pkl')

categories = ('deer', 'elk', 'moose')

def classify_image(img):
    img = PILImage.create(img)
    pred, idx, probs = learn.predict(img)
    return dict(zip(categories, map(float, probs)))

image = gr.inputs.Image(type='pil', shape=(192,192))
label = gr.outputs.Label()

intf = gr.Interface(fn=classify_image, inputs=image, outputs=label)
intf.launch(inline=False)

Convert Detectron2 model to torchscript

I want to convert the Detectron2 'COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml' model to TorchScript.
I used torch.jit.trace with a TracingAdapter.
My code is given below.
import cv2
import numpy as np
import torch
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.engine import DefaultPredictor
from detectron2.modeling import build_model
from detectron2.export.flatten import TracingAdapter
import os
ModelPath='/home/jayasanka/working_files/create_torchsript/model.pt'
with open('savepic.npy', 'rb') as f:
    image = np.load(f)
#-------------------------------------------------------------------------------------
cfg = get_cfg()
cfg.merge_from_file(model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml"))
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1 # your number of classes + 1
cfg.MODEL.WEIGHTS = os.path.join(cfg.OUTPUT_DIR, ModelPath)
cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.60 # set the testing threshold for this model
predictor = DefaultPredictor(cfg)
I used the TracingAdapter and trace functions. I don't have much idea of the concept behind them.
# im = cv2.imread(image)
im = torch.tensor(image)

def inference_func(model, image):
    inputs = [{"image": image}]
    return model.inference(inputs, do_postprocess=False)[0]

wrapper = TracingAdapter(predictor, im, inference_func)
wrapper.eval()
traced_script_module = torch.jit.trace(wrapper, (im,))
traced_script_module.save("torchscript.pt")
It gives the error given below.
Traceback (most recent call last):
File "script.py", line 49, in <module>
traced_script_module= torch.jit.trace(wrapper, (im,))
File "/home/jayasanka/anaconda3/envs/vha/lib/python3.7/site-packages/torch/jit/_trace.py", line 744, in trace
_module_class,
File "/home/jayasanka/anaconda3/envs/vha/lib/python3.7/site-packages/torch/jit/_trace.py", line 959, in trace_module
argument_names,
File "/home/jayasanka/anaconda3/envs/vha/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/home/jayasanka/anaconda3/envs/vha/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1039, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/jayasanka/anaconda3/envs/vha/lib/python3.7/site-packages/detectron2/export/flatten.py", line 294, in forward
outputs = self.inference_func(self.model, *inputs_orig_format)
File "script.py", line 44, in inference_func
return model.inference(inputs, do_postprocess=False)[0]
File "/home/jayasanka/anaconda3/envs/vha/lib/python3.7/site-packages/yacs/config.py", line 141, in __getattr__
raise AttributeError(name)
AttributeError: inference
Can you help me figure this out?
Is there any other method to do this easily?
Change it to:
def inference(model, inputs):
    # use do_postprocess=False so it returns ROI mask
    inst = model.inference(inputs, do_postprocess=False)[0]
    return [{"instances": inst}]

# image is the np.ndarray loaded from savepic.npy above
assert isinstance(image, np.ndarray)
image_tensor = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
wrapper = TracingAdapter(predictor, inputs=[{"image": image_tensor}], inference_func=inference)
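With that wrapper, the trace and save calls from your question can be reused. A sketch, assuming the same predictor and image as above (TracingAdapter flattens the dict inputs, so the traced module is called with just the image tensor):
wrapper.eval()
# trace with the flattened input, i.e. the image tensor itself
traced_script_module = torch.jit.trace(wrapper, (image_tensor,))
traced_script_module.save("torchscript.pt")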

Using PyTorch to utilise DBpedia - keyerror: content disposition

I am currently trying to download data from the torchtext.datasets module and it is not working.
Here is the code that I have written (taken from https://analyticsindiamag.com/multi-class-text-classification-in-pytorch-using-torchtext/):
import torch
import torchtext
from torchtext.datasets import text_classification
import os
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
import time
from torch.utils.data.dataset import random_split
import re
from torchtext.data.utils import ngrams_iterator
from torchtext.data.utils import get_tokenizer
ngrams = 2
batch_size = 16
if not os.path.isdir('./.data'):
    os.mkdir('./.data')
train_dataset, test_dataset = text_classification.DATASETS['DBpedia'](root='./.data', ngrams=ngrams, vocab=None)
It produces the following error:
Traceback (most recent call last):
File "/Users/aidanpayne/Desktop/Scripts/Python/Neural Networks/text_classification_model.py", line 19, in <module>
train_dataset, test_dataset = text_classification.DATASETS['DBpedia'](root='./.data', ngrams=ngrams, vocab=None)
File "/Users/aidanpayne/opt/anaconda3/lib/python3.8/site-packages/torchtext/datasets/text_classification.py", line 237, in DBpedia
return _setup_datasets(*(("DBpedia",) + args), **kwargs)
File "/Users/aidanpayne/opt/anaconda3/lib/python3.8/site-packages/torchtext/datasets/text_classification.py", line 117, in _setup_datasets
dataset_tar = download_from_url(URLS[dataset_name], root=root)
File "/Users/aidanpayne/opt/anaconda3/lib/python3.8/site-packages/torchtext/utils.py", line 100, in download_from_url
return _process_response(response, root, filename)
File "/Users/aidanpayne/opt/anaconda3/lib/python3.8/site-packages/torchtext/utils.py", line 53, in _process_response
d = r.headers['content-disposition']
File "/Users/aidanpayne/opt/anaconda3/lib/python3.8/site-packages/requests/structures.py", line 54, in __getitem__
return self._store[key.lower()][1]
KeyError: 'content-disposition'
If anyone can help, that would be great!

Problem encountered in MMDetection. KeyError: 'mask_detectionDataset is not in the dataset registry'

I tried to train my model with MMDetection; however, an error like "KeyError: 'mask_detectionDataset is not in the dataset registry'" keeps showing up.
I've added my dataset to __init__.py in \mmdetection\mmdet\datasets and used the @DATASETS.register_module() decorator, but the problem isn't solved.
When I try to run __init__.py directly in \mmdetection\mmdet\datasets, it shows "attempted relative import with no known parent package"; I'm wondering why.
Here is my code:
# -*- coding: utf-8 -*-
"""
Created on Sat Nov 27 00:55:00 2021
#author: daish
"""
import mmcv
from mmcv import Config
from mmdet.apis import set_random_seed
import os
cfg = Config.fromfile('F:/Project/mmdetection/configs/swin/mask_rcnn_swin-t-p4-w7_fpn_1x_coco.py')
# Modify dataset type and path
cfg.dataset_type = 'mask_detectionDataset'
cfg.data_root = 'F:/Project/dataset/'
cfg.data.test.type = 'mask_detectionDataset'
cfg.data.test.data_root = 'F:/Project/dataset/'
cfg.data.test.ann_file = 'test.txt'
cfg.data.test.img_prefix = 'images'
cfg.data.train.type = 'mask_detectionDataset'
cfg.data.train.data_root = 'F:/Project/dataset/'
cfg.data.train.ann_file = 'train.txt'
cfg.data.train.img_prefix = 'images'
cfg.data.val.type = 'mask_detectionDataset'
cfg.data.val.data_root = 'F:/Project/dataset/'
cfg.data.val.ann_file = 'val.txt'
cfg.data.val.img_prefix = 'images'
# modify num classes of the model in box head
cfg.model.roi_head.bbox_head.num_classes = 3
cfg.model.roi_head.mask_head.num_classes = 3
# We can still use the pre-trained Mask RCNN model though we do not need to
# use the mask branch
cfg.load_from = 'https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_tiny_patch4_window7_224.pth'
# Set up working dir to save files and logs.
cfg.work_dir = './swin/mask_rcnn_swin-t-p4-w7_fpn_1x'
# The original learning rate (LR) is set for 8-GPU training.
# We divide it by 8 since we only use one GPU.
cfg.optimizer.lr = 0.02 / 8
cfg.lr_config.warmup = None
cfg.log_config.interval = 10
# Change the evaluation metric since we use customized dataset.
cfg.evaluation.metric = 'mAP'
# We can set the evaluation interval to reduce the evaluation times
cfg.evaluation.interval = 12
# We can set the checkpoint saving interval to reduce the storage cost
cfg.checkpoint_config.interval = 12
# Set seed thus the results are more reproducible
cfg.seed = 0
set_random_seed(0, deterministic=False)
cfg.gpu_ids = range(1)
# We can initialize the logger for training and have a look
# at the final config used for training
print(f'Config:\n{cfg.pretty_text}')
from mmdet.datasets import build_dataset
from mmdet.models import build_detector
from mmdet.apis import train_detector
# Build dataset
datasets = [build_dataset(cfg.data.train)]
# Build the detector
model = build_detector(
    cfg.model, train_cfg=cfg.get('train_cfg'), test_cfg=cfg.get('test_cfg'))
# Add an attribute for visualization convenience
model.CLASSES = datasets[0].CLASSES
# Create work_dir
mmcv.mkdir_or_exist(os.path.abspath(cfg.work_dir))
train_detector(model, datasets, cfg, distributed=False, validate=True)
Below is the error:
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 105, in spawn_main
exitcode = _main(fd)
File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 114, in _main
prepare(preparation_data)
File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 225, in prepare
_fixup_main_from_path(data['init_main_from_path'])
File "D:\anaconda3\envs\openmmlab\lib\multiprocessing\spawn.py", line 277, in _fixup_main_from_path
run_name="__mp_main__")
File "D:\anaconda3\envs\openmmlab\lib\runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "D:\anaconda3\envs\openmmlab\lib\runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "D:\anaconda3\envs\openmmlab\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "F:\Project\config.py", line 70, in <module>
datasets = [build_dataset(cfg.data.train)]
File "D:\anaconda3\envs\openmmlab\lib\site-packages\mmdet\datasets\builder.py", line 80, in build_dataset
dataset = build_from_cfg(cfg, DATASETS, default_args)
File "D:\anaconda3\envs\openmmlab\lib\site-packages\mmcv\utils\registry.py", line 44, in build_from_cfg
f'{obj_type} is not in the {registry.name} registry')
KeyError: 'mask_detectionDataset is not in the dataset registry'
Maybe add a custom_imports key in your config?
custom_imports = dict(
    imports=['mmdet.datasets.mask_detectionDataset'],
    allow_failed_imports=False)
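Alternatively, you can make sure the module that defines the dataset class is imported in your script before build_dataset is called, so that its @DATASETS.register_module() decorator actually runs. A sketch, assuming the class lives in mmdet/datasets/mask_detectionDataset.py:
# importing the module executes the @DATASETS.register_module() decorator,
# which adds 'mask_detectionDataset' to the DATASETS registry
import mmdet.datasets.mask_detectionDataset  # noqa: F401

from mmdet.datasets import build_dataset
datasets = [build_dataset(cfg.data.train)]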

ValueError: as_list() is not defined on an unknown TensorShape

I am working on the example from this web page, and here is what I got after this:
>>> jobs_train, jobs_test = jobs_df.randomSplit([0.6, 0.4])
>>> zuckerberg_train, zuckerberg_test = zuckerberg_df.randomSplit([0.6, 0.4])
>>> train_df = jobs_train.unionAll(zuckerberg_train)
>>> test_df = jobs_test.unionAll(zuckerberg_test)
>>> from pyspark.ml.classification import LogisticRegression
>>> from pyspark.ml import Pipeline
>>> from sparkdl import DeepImageFeaturizer
>>> featurizer = DeepImageFeaturizer(inputCol="image", outputCol="features", modelName="InceptionV3")
>>> lr = LogisticRegression(maxIter=20, regParam=0.05, elasticNetParam=0.3, labelCol="label")
>>> p = Pipeline(stages=[featurizer, lr])
>>> p_model = p.fit(train_df)
and this appeared:
2018-06-08 20:57:18.985543: I tensorflow/core/platform/cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2 FMA
INFO:tensorflow:Froze 376 variables.
Converted 376 variables to const ops.
Using TensorFlow backend.
Using TensorFlow backend.
INFO:tensorflow:Froze 0 variables.
Converted 0 variables to const ops.
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/spark/python/pyspark/ml/base.py", line 64, in fit
return self._fit(dataset)
File "/opt/spark/python/pyspark/ml/pipeline.py", line 106, in _fit
dataset = stage.transform(dataset)
File "/opt/spark/python/pyspark/ml/base.py", line 105, in transform
return self._transform(dataset)
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar/sparkdl/transformers/named_image.py", line 159, in _transform
File "/opt/spark/python/pyspark/ml/base.py", line 105, in transform
return self._transform(dataset)
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar/sparkdl/transformers/named_image.py", line 222, in _transform
File "/opt/spark/python/pyspark/ml/base.py", line 105, in transform
return self._transform(dataset)
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_spark-deep-learning-0.1.0-spark2.1-s_2.11.jar/sparkdl/transformers/tf_image.py", line 142, in _transform
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 211, in map_rows
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 132, in _map
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 66, in _add_shapes
File "/tmp/spark-74707b69-e8c9-498b-b0f2-b38828e5ad21/userFiles-ca1eb7cf-9785-441d-a098-54b62380bcee/databricks_tensorframes-0.2.8-s_2.11.jar/tensorframes/core.py", line 35, in _get_shape
File "/home/sulistyo/tensorflow/lib/python3.6/site-packages/tensorflow/python/framework/tensor_shape.py", line 900, in as_list
raise ValueError("as_list() is not defined on an unknown TensorShape.")
ValueError: as_list() is not defined on an unknown TensorShape.
Please kindly help, thanks.
Use the following to read images and create your training & testing sets
from pyspark.sql.functions import lit
from sparkdl.image import imageIO
img_dir = "/PATH/TO/personalities/"
jobs_df = imageIO.readImagesWithCustomFn(img_dir + "/jobs",decode_f=imageIO.PIL_decode).withColumn("label", lit(1))
zuckerberg_df = imageIO.readImagesWithCustomFn(img_dir + "/zuckerberg", decode_f=imageIO.PIL_decode).withColumn("label", lit(0))
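With these two DataFrames, the rest of the pipeline from the question should work unchanged; a sketch reusing the same split, union, and Pipeline code as above:
# p is the same DeepImageFeaturizer + LogisticRegression pipeline defined in the question
jobs_train, jobs_test = jobs_df.randomSplit([0.6, 0.4])
zuckerberg_train, zuckerberg_test = zuckerberg_df.randomSplit([0.6, 0.4])
train_df = jobs_train.unionAll(zuckerberg_train)
test_df = jobs_test.unionAll(zuckerberg_test)
p_model = p.fit(train_df)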
