Crop the image using a TensorFlow ImageDataGenerator preprocessing function with OpenCV - Python

I created a function for cropping eye images, because the outer part of each eye image is black and contains unnecessary pixels. But if I use this function when loading the images with OpenCV, the large resolution allocates too much memory. I want to crop the images with OpenCV inside the TensorFlow ImageDataGenerator preprocessing function.
'crop_image' is my function and it works properly.
train is a path: train > Normal and Abnormal folders where the images are stored.
def preprocessing_fun(train):
    img = cv2.imread(train, cv2.IMREAD_COLOR)
    img = crop_image(img)
    if img is not None:
        image = cv2.resize(img, (512, 512))
        img = cv2.addWeighted(image, 4, cv2.GaussianBlur(image, (0, 0), 224/10), -4, 128)
    return img

img_datagen = ImageDataGenerator(rescale=1./255,
                                 preprocessing_function=preprocessing_fun)

training_gen = img_datagen.flow_from_directory(train, target_size=(224, 224),
                                               color_mode='rgb', batch_size=32)
After using this it finds the number of images but shows an error:
SystemError: returned NULL without setting an error
How can I solve this problem?

Your cv2.imread is not reading an image inside your function, because cv2.imread can only read one image at a time from a single file path.
For example, if train in your function is a list of paths, you could read them with a for loop (for image in train: ...), something like the sketch below.
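A minimal sketch of that suggestion, assuming train is a list of image file paths and reusing the question's crop_image helper (everything else here is illustrative, not the questioner's actual code):

import cv2

processed = []
for image_path in train:                        # assumes train is a list of file paths
    img = cv2.imread(image_path, cv2.IMREAD_COLOR)
    if img is None:                             # skip files OpenCV could not read
        continue
    img = crop_image(img)                       # cropping helper from the question
    img = cv2.resize(img, (512, 512))
    processed.append(img)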

Related

How can I use the frames generated from my webcam as the input to my Keras model?

I want to replace img_path with the frames generated from OpenCV:
img = image.load_img(img_path, target_size=(224, 224))
How can I rewrite it?
I am assuming that the image.load_img() function that you are using is the one from keras_utils package.
As it is noted in the documentation, load_img() accepts path to the image as the first parameter and returns:
Returns:
A PIL Image instance.
It is not mentioned in the question, but if you read the frames from the camera using OpenCV, they should already be numpy arrays which you can pass directly to your model. Of course, you should resize them to (224, 224) first (how to resize the image using opencv).
However, if you want to have PIL images (to have the same type as the one returned by load_img()), you need to convert your OpenCV frames (numpy arrays) to PIL images. Follow this question and answer by #ZdaR to do this conversion:
import cv2
import numpy as np
from PIL import Image
img = cv2.imread("path/to/img.png")
# You may need to convert the color.
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
im_pil = Image.fromarray(img)
# For reversing the operation:
im_np = np.asarray(im_pil)
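Putting those pieces together, here is a minimal sketch of feeding a single webcam frame to a Keras model instead of using load_img. The variable model and the 224x224 input size are assumptions taken from the question; adjust the preprocessing to whatever your model actually expects:

import cv2
import numpy as np

cap = cv2.VideoCapture(0)
ret, frame = cap.read()                                # frame is a BGR numpy array
cap.release()

if ret:
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # Keras models usually expect RGB
    frame = cv2.resize(frame, (224, 224))              # match the model's input size
    batch = np.expand_dims(frame.astype(np.float32) / 255.0, axis=0)  # add a batch dimension
    preds = model.predict(batch)                       # 'model' is your already-loaded Keras model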

How can I build a face detection application by implementing my own CNN and training it from scratch?

The application only takes some images and draws rectangles around the faces that are present in them.
I used some CNNs made in Python with Keras for image classification (Face and NonFace); here are the types of images that I used: link. My only problem is that I don't quite understand how to use a CNN to detect MULTIPLE faces in an image. I tried to check every region of an image, but that takes too much time and it's not good at all. Can I train a model to actually return the location of my face, or do I have to write an algorithm that searches for the faces?
Thanks, any help would be much appreciated.
Have a look at the YOLO algorithm; it uses a grid on the image and outputs bounding boxes, giving you the location of each object.
You can detect all the faces in an image using MTCNN. The code for that is shown below:
from mtcnn import MTCNN
import os
import cv2

detector = MTCNN()
dest_dir = r'C:\Temp\people\storage\cropped'   # specify where to save the cropped faces
filename = r'C:\Temp\people\storage\34.png'    # specify the full path to the input image
try:
    img = cv2.imread(filename)        # filename must be the full path to the image
    shape = img.shape                 # will raise an exception if the image was not read properly
    data = detector.detect_faces(img)
    if data == []:
        print('no faces were detected for file ', filename)
    else:
        for i, faces in enumerate(data):
            box = faces['box']        # [x, y, width, height] of the detected face
            if box != []:
                box[0] = 0 if box[0] < 0 else box[0]   # clip negative coordinates to 0
                box[1] = 0 if box[1] < 0 else box[1]
                cropped_img = img[box[1]:box[1] + box[3], box[0]:box[0] + box[2]]
                fname = os.path.split(filename)[1]
                index = fname.rfind('.')
                fileid = fname[:index]
                fext = fname[index:]
                fname = fileid + '-' + str(i) + fext   # e.g. 34-0.png, 34-1.png, ...
                save_path = os.path.join(dest_dir, fname)
                cv2.imwrite(save_path, cropped_img)
except Exception as e:
    print('an error occurred: ', e)
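One detail worth noting if you adapt this snippet: cv2.imread returns images in BGR channel order, while the mtcnn detector is typically used with RGB images, so converting before detection may improve results. A minimal adjustment, reusing img and detector from above:

img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # OpenCV reads BGR; convert to RGB for detection
data = detector.detect_faces(img_rgb)            # the returned boxes still index into the original img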

How to find a file/image in a given data set in a Python-OpenCV image processing project?

I have a data set of images in an image processing project. I want to input an image and scan through the data set to recognize the given image. What module/library/approach (e.g. ML) should I use to identify my image in my Python-OpenCV code?
To find exactly the same image, you don't need any kind of ML. The image is just an array of pixels, so you can check if the array of the input image equals that of an image in your dataset.
import glob
import cv2
import numpy as np

# Read in source image (the one you want to match to others in the dataset)
source = cv2.imread('test.jpg')
# Make a list of all the images in the dataset (I assume they are images in a directory)
filelist = glob.glob(r'C:\Users\...\Images\*.JPG')
# Loop through the images, read them in and check if an image is equal to your source
for file in filelist:
    img = cv2.imread(file)
    if np.array_equal(source, img):
        print("%s is the same image as source" % (file))
        break

OpenCV Error: The operation is neither 'array op array'

I want to superimpose a given set of images of the same size (the AT&T facial images database). I have written the code to do so, which works as follows:
I have assigned the locations of the images (to start, I am considering only 4 images).
imstack is used to read one image (as a base image) over which the layover (superimposition) will take place.
A for loop runs through all the images and adds them to the base image (imstack). This addition is done with the addWeighted() function, using the current image (im) and the base image (imstack) with alpha values of 0.5 each.
After the loop has run to completion (all the images are superimposed on the base image), I try to display the updated imstack as 'compiledimg' using imshow().
Further, I added the option to save the 'compiledimg' file by pressing 's'.
To fix the error, I have tried resizing the image after every iteration so that the addWeighted() function receives images with the same dimensions. First, imstack (before entering the for loop) is resized to set a firm base for the first image with the required size, which I have taken as (97 (rows), 113 (columns)).
I don't understand why the addWeighted function is not working, because I am using the resize function to make sure that the size is kept the same after each iteration. Also, I tried to superimpose just two of the images and it worked perfectly fine; however, it does not work when I try to use addWeighted() on the third image.
Say I used addWeighted on two images img1 and img2 and stored the result in img3. Now when I try to use addWeighted() on img3 and img4 I get the error, even when I have used the resize function on img3.
Note: The initial size of the images is (97 (rows), 113 (columns)), hence I am trying to keep the same image size.
import cv2
import numpy as np
import os

fnames = ['~/Downloads/1.pgm', '~/Downloads/2.pgm', '~/Downloads/3.pgm']
imstack = cv2.imread('~/Downloads/4.pgm')
for path in fnames:
    im = cv2.imread(os.path.expanduser(path))
    im = cv2.resize(im, (97, 113))
    imstack = cv2.addWeighted(imstack, 0.5, im, 0.5, 0)
    imstack = cv2.resize(imstack, (97, 113))

cv2.imshow('compiledimg', imstack)
k = cv2.waitKey(0) & 0xFF
if k == 27:
    cv2.destroyAllWindows()
elif k == ord('s'):
    cv2.imwrite('compiledimg.pgm', imstack)
    cv2.destroyAllWindows()
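As the question itself notes, addWeighted needs both inputs to have the same size, channel count, and type; this error is typically raised when they differ. A minimal sketch of enforcing that before every call, using the question's file names (note that cv2.imread does not expand '~', so os.path.expanduser is applied to every path here; this is only an illustrative sketch, not a verified fix for the exact files above):

import os
import cv2

paths = ['~/Downloads/4.pgm', '~/Downloads/1.pgm', '~/Downloads/2.pgm', '~/Downloads/3.pgm']
imstack = cv2.imread(os.path.expanduser(paths[0]))    # expanduser: cv2.imread does not expand '~'
imstack = cv2.resize(imstack, (97, 113))              # make the base image the target size up front
for path in paths[1:]:
    im = cv2.imread(os.path.expanduser(path))
    im = cv2.resize(im, (97, 113))                    # both inputs now have identical size and type
    imstack = cv2.addWeighted(imstack, 0.5, im, 0.5, 0)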

Passing captured video frame (numpy array object) from webcam feed to caffe model

I am a beginner in Caffe and I am trying to use the ImageNet model for object classification. My requirement is to use it with a webcam feed and detect objects from that feed. For this, I use the following code:
cap = cv2.VideoCapture(0)
while(True):
    ret, frame = cap.read()  # frame is of type numpy array
    # frame = caffe.io.array_to_datum(frame)
    img = caffe.io.load_image(frame)
Obviously this does not work since caffe.io.load_image expects an image path.
As you can see, I also tried using caffe io.py's array_to_datum function (I got it from this Stack Overflow question) and passed the frame to caffe.io.load_image, but this too does not work.
How can I pass the captured video frames from the webcam directly to caffe.io.load_image?
And if that is not possible, then what is the way to load the frame into caffe.io? Please help. Thanks in advance.
caffe.io.load_image does not do much. It only does the following :
Read image from disk (given the path)
Make sure that the returned image has 3 dimensions (HxWx1 or HxWx3)
(see source caffe.io.load_image)
So it does not load the image into your model; it's just a helper function that loads an image from disk. To load an image into memory, you can load it however you like (from disk, from webcam, etc.). To load the network, feed the image into it, and do inference, you can do something like the following:
# Load pre-trained network
net=caffe.Net(deployprototxt_path, caffemodel_path, caffe.TEST)
# Feed network with input
net.blobs['data'].data[0,...] = frame
# Do inference
net.forward()
# Access prediction
probabilities = net.blobs['prob'].data
Make sure the frame dimensions match the expected input dimensions as specified in the deploy.prototxt (see example for CaffeNet)
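For instance, a minimal sketch of that shape check, assuming the webcam frame from the question and a network whose 'data' blob is laid out as N x C x H x W (mean subtraction and channel ordering are deliberately omitted here because they depend on how your model was trained):

import cv2
import numpy as np

# the expected input shape (N, C, H, W) is read from the 'data' blob defined in deploy.prototxt
_, channels, height, width = net.blobs['data'].data.shape
img = cv2.resize(frame, (width, height)).astype(np.float32)  # resize the webcam frame to the network's size
img = img.transpose(2, 0, 1)              # HxWxC -> CxHxW, the layout Caffe expects
net.blobs['data'].data[0, ...] = img      # add mean subtraction / channel swap per your training setup
output = net.forward()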
If you are reading the camera frames using OpenCV, you need to re-order the color space from OpenCV's default (BGR) to Caffe's input order (RGB), then convert all values to single-precision float:
# load caffe model
net = caffe.Net(model_def,      # defines the structure of the model
                model_weights,  # contains the trained weights
                caffe.TEST)     # use test mode (e.g., don't perform dropout)

cap = cv2.VideoCapture(1)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 720)

while(True):
    ret, frame = cap.read()
    image = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    # you can now pass image to Caffe
    net.blobs['data'].data[...] = image
    # forward pass, obtain detections
    detections = net.forward()['detection_out']
