Displaying an image from a PyTorch model - python

Having read a paper about demoiréing images, I want to see how effective the method is. Since the whole dataset is 100 GB, I only used about 1 GB worth of data to train a new model. In the code below, I'm trying to display the image produced by the model. However, the image color is either messed up or grayscale, and the result bears no visual resemblance to its source, which is a moiré-affected image. So I want to know whether the small training dataset made the model behave this way, or whether I'm just not displaying the image properly.
Example (images omitted): source, from_model, other
The code that I used to try to display it:
import numpy as np
import os
import math
import torch
from tqdm import tqdm
from utils import MoirePic
from torch.utils.data import DataLoader
from torchvision.io import read_image
from PIL import Image
from torchvision import transforms
import matplotlib.pyplot as plt
def psnr(img1, img2):
    mse = np.mean((img1 - img2) ** 2)
    if mse == 0:
        return 100
    return 10 * math.log10(1 / mse)


def Test():
    device = "cpu"
    root = './Train_Data2'
    dataset = MoirePic(os.path.join(root, 'source'),
                       os.path.join(root, 'target'))
    test_loader = DataLoader(dataset=dataset, batch_size=1, drop_last=False)
    model = torch.load('./moire_best.pth', map_location=torch.device('cpu'))
    model.eval()
    loop = tqdm(enumerate(test_loader), total=len(test_loader), leave=False)
    psnr_all = 0
    for idx, (data, target) in loop:
        with torch.no_grad():
            output = model(data).cpu()
        transform = transforms.ToPILImage()
        img = transform(output[0])
        img.show()
        print(psnr(output[0].numpy(), target[0].numpy()))


Test()
The PSNR I got between the two is 19.55170616098589.
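In case the conversion itself is the problem, this is a minimal sketch of how the display step could be checked in isolation (show_output is just a throwaway helper name; the assumption is that the model outputs a CHW float tensor meant to lie in [0, 1], so anything outside that range is clamped before conversion):
import numpy as np
import matplotlib.pyplot as plt
from torchvision import transforms

def show_output(output_tensor):
    # Display a single CHW float tensor as an image, clamping it to [0, 1] first
    # so out-of-range values cannot produce broken colors in ToPILImage.
    out = output_tensor.detach().cpu().clamp(0, 1)
    img = transforms.ToPILImage()(out)
    plt.imshow(np.asarray(img))
    plt.axis('off')
    plt.show()

# e.g. inside the test loop above: show_output(output[0])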
My trained model - https://drive.google.com/file/d/1xuCX7A48MvJU4V3BkvwFLjgccOE2_eBi/view?usp=sharing
The link to the paper : https://paperswithcode.com/paper/moire-photo-restoration-using-multiresolution
The link to the implementation: https://github.com/ZhengJun-AI/MoirePhotoRestoration-MCNN

Related

How to prune an existing tensorflow/keras model trained on imagenet

I am trying to prune InceptionNetV3 from Keras, trained on ImageNet. Right now I am using a tensorflow-datasets subset of ImageNet for pruning. Currently my pruned models do not work and return garbage data when tested on the same dataset they were pruned on. How do I prune without losing all accuracy? Here is my code:
Imports:
import logging
import tempfile
from pathlib import Path
import tensorflow as tf
from tensorflow import keras
import numpy as np
import tensorflow_datasets as tfds
from cv2 import cv2 # Pylint now views cv2 as a library
import tensorflow_model_optimization as tfmot
All of these imports are up to date; I'm currently using Python 3.10.1.
Here is the code I am using to prune the model.
v2_path = 'C:\\temp\\imagenet_v2'
inception_image_size = (299, 299)
image_count = 5
batch_size = 512
epochs = 4
dataset = tfds.load(name='imagenet_v2', split='test', data_dir=v2_path)
numpy_dataset = tfds.as_numpy(dataset)
layer_count = 313
count = [1]
def main():
    v2_full_path = 'C:\\temp\\imagenet_v2\\downloads\\extracted\\TAR_GZ.s3-us-west-2_image_image-match-frequ8MN_35JZFrGeoTI82aIgjNtpWbosMu7yp_w5ODXJynw.tar.gz\\imagenetv2-matched-frequency-format-val'
    dataset_train = tf.keras.utils.image_dataset_from_directory(directory=v2_full_path,
                                                                 image_size=inception_image_size,
                                                                 label_mode='categorical')

    inception_model = tf.keras.applications.InceptionV3(weights='imagenet',
                                                        pooling='avg',
                                                        input_shape=(299, 299, 3))

    def apply_pruning_to_dense(layer):
        count[0] += 1  # Python throws a fit if I use a normal variable, but doesn't mind layer_count
        if layer_count - count[0] < 5:
            return tfmot.sparsity.keras.prune_low_magnitude(layer)
        return layer

    model_for_pruning = tf.keras.models.clone_model(
        inception_model,
        clone_function=apply_pruning_to_dense,
    )

    inception_model = tf.keras.applications.InceptionV3(weights="imagenet")

    logdir = tempfile.mkdtemp()
    callbacks = [
        tfmot.sparsity.keras.UpdatePruningStep(),
        tfmot.sparsity.keras.PruningSummaries(log_dir=logdir),
    ]

    model_for_pruning.compile(loss='categorical_crossentropy',
                              optimizer=keras.optimizers.SGD(learning_rate=1e-3),
                              metrics=['accuracy'])
    model_for_pruning.fit(dataset_train,
                          batch_size=batch_size,
                          epochs=epochs,
                          callbacks=callbacks,
                          use_multiprocessing=True)

    save_test_model(inception_model, ".tflite")
    save_test_model(model_for_pruning, "_prune.tflite")
When I run the model through model_for_pruning.fit(...), the accuracy is only around 1%-2%, though it used to be around 0.16% per epoch. I fixed that by adding label_mode='categorical' when obtaining the dataset, which leads me to believe that the issue is somehow with either my dataset or how I use it.
The resulting pruned TensorFlow Lite model has a 0% accuracy rating when tested against the imagenet_v2 subset, while the unpruned one gets around a 40% accuracy rating.
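For reference, a minimal sketch of the TFLite export step with the pruning wrappers stripped first (strip_pruning comes from the same tfmot package as above; model_for_pruning is the fitted model from main(), and the output filename is a placeholder):
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Remove the pruning wrappers, keeping only the (now sparse) weights.
final_model = tfmot.sparsity.keras.strip_pruning(model_for_pruning)

# Convert the stripped model, not the wrapped one, to TensorFlow Lite.
converter = tf.lite.TFLiteConverter.from_keras_model(final_model)
tflite_model = converter.convert()
with open('inception_prune.tflite', 'wb') as f:
    f.write(tflite_model)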

How to create a function that returns outputs from tensorflow 1.x model given input?

It might look like a silly question, but I have only worked with TF 2.x with eager execution, so I have no idea how TF 1.x works. When I run the code twice (the code given below works fine the first time), it throws an error. That makes some sense, because it uses static graphs and all.
Let us suppose we have a code like:
def predict(my_input):
    # process the input, access the global `tf1.x` model and return result
    return result
QUESTION:
Where exactly am I supposed to put this predict function, or how should I structure predict so that I can use it with a serving package like Flask / FastAPI, etc.?
This code is extracted from the test_model.py file of the DPED paper
! git clone https://github.com/aiff22/DPED
%cd ./DPED
CODE
import imageio
from PIL import Image
import numpy as np
import tensorflow as tf
from models import resnet
import utils
import os
import sys
import matplotlib.pyplot as plt
# process arguments
phone, dped_dir, test_subset, iteration, resolution, use_gpu = ('iphone_orig', 'dped/', 'full', 'all', 'orig', 'true')
tf.compat.v1.disable_v2_behavior() # disable tf 2.x workings
# get all available image resolutions
res_sizes = utils.get_resolutions()
# get the specified image resolution
IMAGE_HEIGHT, IMAGE_WIDTH, IMAGE_SIZE = utils.get_specified_res(res_sizes, phone, resolution)
# disable gpu if specified
config = tf.compat.v1.ConfigProto(device_count={'GPU': 0}) if use_gpu == "false" else None
# create placeholders for input images
x_ = tf.compat.v1.placeholder(tf.float32, [None, IMAGE_SIZE])
x_image = tf.reshape(x_, [-1, IMAGE_HEIGHT, IMAGE_WIDTH, 3])
# generate enhanced image
enhanced = resnet(x_image)
with tf.compat.v1.Session(config=config) as sess:
    # load pre-trained model
    saver = tf.compat.v1.train.Saver()
    saver.restore(sess, "models_orig/" + phone)

    # ######### this is the part which reads the image path and returns the results as enhanced_image #########
    image = np.float16(np.array(Image.fromarray(imageio.imread(path))
                                .resize([res_sizes[phone][1], res_sizes[phone][0]]))) / 255
    image_crop = utils.extract_crop(image, resolution, phone, res_sizes)
    image_crop_2d = np.reshape(image_crop, [1, IMAGE_SIZE])

    # get enhanced image
    enhanced_2d = sess.run(enhanced, feed_dict={x_: image_crop_2d})
    enhanced_image = np.reshape(enhanced_2d, [IMAGE_HEIGHT, IMAGE_WIDTH, 3])
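What I have in mind is something along these lines, a rough sketch reusing the variables defined above (x_, enhanced, res_sizes, phone, resolution, config, IMAGE_SIZE, IMAGE_HEIGHT, IMAGE_WIDTH), where the graph is built and the checkpoint restored once at import time and only sess.run happens per request, but I am not sure whether this is the right structure:
# Build the graph and restore the checkpoint once, at module import time.
sess = tf.compat.v1.Session(config=config)
saver = tf.compat.v1.train.Saver()
saver.restore(sess, "models_orig/" + phone)

def predict(path):
    # Read an image from `path`, preprocess it and run the already-restored graph.
    image = np.float16(np.array(Image.fromarray(imageio.imread(path))
                                .resize([res_sizes[phone][1], res_sizes[phone][0]]))) / 255
    image_crop = utils.extract_crop(image, resolution, phone, res_sizes)
    image_crop_2d = np.reshape(image_crop, [1, IMAGE_SIZE])
    enhanced_2d = sess.run(enhanced, feed_dict={x_: image_crop_2d})
    return np.reshape(enhanced_2d, [IMAGE_HEIGHT, IMAGE_WIDTH, 3])

# A Flask / FastAPI handler would then just call predict(path) per request.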

'NoneType' object has no attribute 'register_forward_hook' error

I have been getting the error 'NoneType' object has no attribute 'register_forward_hook' when I run my code. I believe there is something wrong with my function, but I'm not sure what the problem is. If not, is there a better way to get the feature vectors? I am trying to extract feature vectors from a pretrained ResNet model. Help would be much appreciated.
Here is the code.
import torch
import torch.nn as nn
import torchvision.models as models
import numpy as np
from torch.autograd import Variable
from torchvision import datasets, transforms,models
import torch.nn.functional as F
import torch.nn as nn
import torchvision.utils as vutils
from io import open
import os
from PIL import Image
import sys
import models.resnet as ResNet
import models.senet as SENet
import torchvision.models as models
import pickle
import pandas as pd
import sklearn.metrics
import matplotlib.pyplot as plt
from models import resnet, resnet50_ferplus_dag, resnet50_ft_dag, resnet50_scratch_dag, senet, senet50_ferplus_dag, senet50_ft_dag, senet50_scratch_dag, vgg_face_dag, vgg_m_face_bn_dag
model = resnet50_ft_dag.resnet50_ft_dag(weights_path='Weights/resnet50_ft_dag.pth') # RESNET MS1M
layer = model._modules.get('avgpool')
model.eval()
scaler = transforms.Scale((224, 224))
normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
to_tensor = transforms.ToTensor()
def get_vector(image):
    # Load image
    img = Image.open(image)
    # Create Pytorch variable with transformed image
    t_img = Variable(normalize(to_tensor(scaler(img))).unsqueeze(0))
    # Create a vector of zeros that will hold our feature vector
    # Output size of 2048
    my_embedding = torch.zeros(2048)

    # Define a function that will copy the output of a layer
    def copy_data(m, i, o):
        my_embedding.copy_(o.data)

    # Attach that function to our selected layer
    h = layer.register_forward_hook(copy_data)
    # Run model on transformed image
    model(t_img)
    # Detach our copy function from the layer
    h.remove()
    # Return the feature vector
    return my_embedding
get_vector('C:/Users/Public/Documents/DIN_Image/average_images/' + list_dir[2])
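For reference, the same hook pattern runs without this error when the chosen layer actually exists, for example with torchvision's stock resnet50 (a minimal sketch below, not the resnet50_ft_dag model above); if model._modules.get('avgpool') returns None for the VGGFace model, the available module names can be listed with model._modules.keys():
import torch
import torchvision.models as models
import torchvision.transforms as transforms
from PIL import Image

resnet = models.resnet50(pretrained=True)
resnet.eval()
layer = resnet.avgpool   # exists on torchvision's resnet50, so it is not None

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def get_vector(image_path):
    t_img = preprocess(Image.open(image_path).convert('RGB')).unsqueeze(0)
    my_embedding = torch.zeros(2048)

    def copy_data(m, i, o):
        my_embedding.copy_(o.data.reshape(-1))   # avgpool output is [1, 2048, 1, 1]

    h = layer.register_forward_hook(copy_data)
    with torch.no_grad():
        resnet(t_img)
    h.remove()
    return my_embedding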

ModuleNotFoundError: No module named 'samples.coco'

Could someone help me with the error raised in the following file within a Mask R-CNN project:
test_model.py
Someone with experience in instance segmentation could perhaps help me with this error, which occurred while running on Google Colab
(settings: TensorFlow 1.13.1 and Keras 2.1.6).
import os
import sys
import random
import math
import numpy as np
import skimage.io
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
import cv2
import time
from mrcnn.config import Config
from datetime import datetime
# Root directory of the project
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
import mrcnn.model as modellib
from mrcnn import visualize
# import coco config
sys.path.append(os.path.join(ROOT_DIR, "samples/coco")) # To find local version
# import coco
from samples.coco import coco
# from pycocotools.coco import COCO
# Directory to save logs and the trained model
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
# Local path to trained weights file
COCO_MODEL_PATH = os.path.join(MODEL_DIR ,"mask_rcnn_shapes_0080.h5")
# Download COCO trained weights from Releases if needed
if not os.path.exists(COCO_MODEL_PATH):
    utils.download_trained_weights(COCO_MODEL_PATH)
    print("cuiwei***********************")
# Directory of images to run detection on
IMAGE_DIR = os.path.join(ROOT_DIR, "/content/gdrive/My Drive/Fish-characteristic-measurement/Complete_code/images")
class ShapesConfig(Config):
    """Configuration for training on the toy shapes dataset.
    Derives from the base Config class and overrides values specific
    to the toy shapes dataset.
    """
    # Give the configuration a recognizable name
    NAME = "shapes"

    # Train on 1 GPU and 8 images per GPU. We can put multiple images on each
    # GPU because the images are small. Batch size is 8 (GPUs * images/GPU).
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1

    # Number of classes (including background)
    NUM_CLASSES = 1 + 80  # background + 1 shapes

    # Use small images for faster training. Set the limits of the small side
    # the large side, and that determines the image shape.
    IMAGE_MIN_DIM = 704
    IMAGE_MAX_DIM = 1024

    # Use smaller anchors because our image and objects are small
    RPN_ANCHOR_SCALES = (8 * 6, 16 * 6, 32 * 6, 64 * 6, 128 * 6)  # anchor side in pixels

    # Reduce training ROIs per image because the images are small and have
    # few objects. Aim to allow ROI sampling to pick 33% positive ROIs.
    TRAIN_ROIS_PER_IMAGE = 400

    # Use a small epoch since the data is simple
    STEPS_PER_EPOCH = 100

    # use small validation steps since the epoch is small
    VALIDATION_STEPS = 50


# import train_tongue
# class InferenceConfig(coco.CocoConfig):
class InferenceConfig(ShapesConfig):
    # Set batch size to 1 since we'll be running inference on
    # one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
    GPU_COUNT = 1
    IMAGES_PER_GPU = 1
config = InferenceConfig()
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights(COCO_MODEL_PATH, by_name=True)
# COCO Class names
# Index of the class in the list is its ID. For example, to get ID of
# the teddy bear class, use: class_names.index('teddy bear')
class_names = ['BG','sfish_eye']
#'fish_knife','fish_eye','fish_pupil','fish_body','sfish_knife','sfish_eye','sfish_pupil','sfish_body'
# Load a random image from the images folder
count = os.listdir(IMAGE_DIR)
for i in range(0, len(count)):
    path = os.path.join(IMAGE_DIR, count[i])
    if os.path.isfile(path):
        file_names = next(os.walk(IMAGE_DIR))[2]
        image = skimage.io.imread(os.path.join(IMAGE_DIR, count[i]))
        # Run detection
        results = model.detect([image], verbose=1)
        r = results[0]
        visualize.display_instances(count[i], image, r['rois'], r['masks'], r['class_ids'],
                                    class_names, r['scores'])
ERROR PRESENTED BY THE ABOVE CODE:
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-14-815378d5aae5> in <module>
29 sys.path.append(os.path.join(ROOT_DIR, "samples/coco")) # To find local version
30 # import coco
---> 31 from samples.coco import coco
32 # from pycocotools.coco import COCO
33
ModuleNotFoundError: No module named 'samples.coco'
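For reference, a minimal sketch of the import-path fix (the clone location /content/Mask_RCNN is an assumption; adjust it to wherever the Mask_RCNN repository actually lives in the Colab runtime):
import os
import sys

# Assumption: the Mask_RCNN repository was cloned here.
MASK_RCNN_DIR = '/content/Mask_RCNN'

sys.path.append(MASK_RCNN_DIR)                                   # for the mrcnn package
sys.path.append(os.path.join(MASK_RCNN_DIR, 'samples', 'coco'))  # coco.py lives here

import coco   # plain import, now that samples/coco is on sys.path
print(coco.CocoConfig)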

resizing images from 64x64 to 224x224 for the VGG model

Can we resize an image from 64x64 to 256x256 without affecting the resolution? Is there a way to add zeros on the new rows and columns of the resized output? I'm working on VGG, and I get an error when feeding in my 64x64 input image because VGGFace is a pretrained model that expects an input size of 224.
code:
from keras.models import Model, Sequential
from keras.layers import Input, Convolution2D, ZeroPadding2D, MaxPooling2D, Flatten, Dense, Dropout, Activation
from PIL import Image
import numpy as np
from keras.preprocessing.image import load_img, save_img, img_to_array
from keras.applications.imagenet_utils import preprocess_input
from keras.preprocessing import image
import matplotlib
matplotlib.use('TkAgg')
import matplotlib.pyplot as plt
# from sup5 import X_test, Y_test
from sklearn.metrics import roc_curve, auc
from keras.models import Model, Sequential
from keras.layers import Input, Convolution2D, ZeroPadding2D, MaxPooling2D, Flatten, Dense, Dropout, Activation
from PIL import Image
import numpy as np
from keras.preprocessing.image import load_img, save_img, img_to_array
from keras.applications.imagenet_utils import preprocess_input
from keras.preprocessing import image
import matplotlib.pyplot as plt
# from sup5 import X_test, Y_test
from sklearn.metrics import roc_curve, auc
from keras.applications.vgg16 import VGG16
from keras.preprocessing import image
from keras.applications.vgg16 import preprocess_input
import numpy as np
model = VGG16(weights='imagenet', include_top=False)
from keras.models import model_from_json
vgg_face_descriptor = Model(inputs=model.layers[0].input,
                            outputs=model.layers[-2].output)
# import pandas as pd
# test_x_predictions = deep.predict(X_test)
# mse = np.mean(np.power(X_test - test_x_predictions, 2), axis=1)
# error_df = pd.DataFrame({'Reconstruction_error': mse,
# 'True_class': Y_test})
# error_df.describe()
from PIL import Image
def preprocess_image(image_path):
    img = load_img(image_path, target_size=(224, 224))
    img = img_to_array(img)
    img = np.expand_dims(img, axis=0)
    img = preprocess_input(img)
    return img


def findCosineSimilarity(source_representation, test_representation):
    a = np.matmul(np.transpose(source_representation), test_representation)
    b = np.sum(np.multiply(source_representation, source_representation))
    c = np.sum(np.multiply(test_representation, test_representation))
    return 1 - (a / (np.sqrt(b) * np.sqrt(c)))


def findEuclideanDistance(source_representation, test_representation):
    euclidean_distance = source_representation - test_representation
    euclidean_distance = np.sum(np.multiply(euclidean_distance, euclidean_distance))
    euclidean_distance = np.sqrt(euclidean_distance)
    return euclidean_distance
vgg_face_descriptor = Model(inputs=model.layers[0].input, outputs=model.layers[-2].output)
# for encod epsilon = 0.004
epsilon = 0.16
# epsilon = 0.095
retFalse,ret_val, euclidean_distance = verifyFace(str(i)+"test.jpg", str(j)+"train.jpg", epsilon)
verifyFace1(str(i) + "testencod.jpg", str(j) + "trainencod.jpg")
Error :
ValueError: operands could not be broadcast together with
remapped shapes [original->remapped]:
(512,14,14)->(512,newaxis,newaxis) (14,14,512)->(14,newaxis,newaxis)
and requested shape (14,512)
I'm not sure exactly what you mean; here is my solution for you.
First method: if I understand you correctly, to pad with zero values you need to use numpy.pad on each channel of the image.
I use this image as an example; its shape is 158x84x3.
import numpy as np
import cv2
from matplotlib import pyplot as mlt
image = cv2.imread('zero.png')
shape = image.shape
add_x = int((256-shape[0])/2)
add_y = int((256-shape[1])/2)
temp_img = np.zeros((256,256,3),dtype = int)
for i in range(3):
    temp_img[:, :, i] = np.pad(image[:, :, i], ((add_x, add_x), (add_y, add_y)),
                               'constant', constant_values=(0))
mlt.imshow(temp_img)
With this code I can add padding to the picture and get a result like this.
Now its shape is 256x256x3, as you wanted.
Another method is to use Image from the Pillow library. With it, you can resize the picture without losing too much information, using very simple code.
from PIL import Image
image = Image.fromarray(image)
img = image.resize((256, 256), Image.BILINEAR)
mlt.imshow(img)
That code will give you this result.
Hope my answer helps you solve the problem!
I think the best way to solve your problem is not to resize the image but rather to load the model specifying the input shape of your images.
Assuming you are using Keras:
model = VGG16(weights=..., include_top=False, input_shape=(64,64,3))
include_top has to be set to False in order to change the input shape, which means you will need to do some sort of training yourself.
If you need include_top to be True, resizing the input image is the best way to proceed, but a network trained on 224x224 images is probably not going to perform well on upscaled 64x64 images.
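For example, a minimal sketch of that approach (the 10-class head and the layer sizes are placeholders, not something taken from the question):
from keras.applications.vgg16 import VGG16
from keras.layers import Dense, Flatten
from keras.models import Model

# Base convolutional stack accepting 64x64 RGB inputs, pretrained weights kept.
base = VGG16(weights='imagenet', include_top=False, input_shape=(64, 64, 3))

# New classification head, trained on your own 64x64 data.
x = Flatten()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(10, activation='softmax')(x)

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
# model.fit(...) on the 64x64 images to train the new top layers.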
I think you mean resizing (resolution) without increasing the size (amount of data), and as far as I'm aware the answer is no, because making the resolution bigger literally means a higher pixel count. You could increase the resolution without increasing the file size too much, though; there are plenty of programs, websites, and utilities for lightweight photo resizing, and maybe you could use a service like that from your code.
