Processing images for machine learning - Python

I have started doing medical image analysis for a project.
In this project I have images of human kidneys with and without stones. The aim is to predict whether a given new image contains a stone or not.
I chose a KNN classifier for the classification, but I do not understand the image processing part. I have some knowledge of segmentation and can convert an image into an array for processing, but I need some pointers to understand the overall process.
Image - https://i.stack.imgur.com/9FDUM.jpg

For image classification I would recommend using pre-trained neural networks such as ResNet.
Frameworks like TensorFlow provide a good API for re-training pre-trained networks for a different use case.
You can follow this tutorial:
https://www.tensorflow.org/hub/tutorials/image_retraining
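As a rough illustration, here is a minimal sketch of that re-training idea with TensorFlow Hub. The feature-extractor URL and the 224x224 input size are just one possible choice, and the binary head matches the stone/no-stone task above:

    import tensorflow as tf
    import tensorflow_hub as hub

    # Sketch: reuse a frozen pre-trained feature extractor and train only
    # a new binary head (stone vs. no stone). Swap in whichever TF Hub
    # backbone you prefer.
    feature_extractor = hub.KerasLayer(
        "https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4",
        trainable=False)  # keep the pre-trained weights fixed

    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(input_shape=(224, 224, 3)),
        feature_extractor,
        tf.keras.layers.Dense(1, activation="sigmoid"),  # stone / no stone
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, epochs=5)  # train_ds: your labelled kidney images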

Image processing converts digital images into a format that is easier for a computer to compute statistics on.
Images do not always contain only the necessary information; there is noise, and a lot of the background detail in an image is not required for a specific purpose.
The goal of processing an image is to extract the region of interest from the whole image.
Along with this, various enhancements are applied to the image so that the resulting features are useful for drawing inferences.
Processing an image typically combines enhancement techniques, such as histogram equalization, with segmentation, and the result is then used to extract features. Doing this preprocessing generally yields better features.
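For example, a minimal preprocessing sketch with OpenCV (the file name and the 128x128 size are placeholder assumptions) that enhances an image, crudely segments the region of interest, and flattens it into a feature vector a KNN classifier can consume:

    import cv2
    import numpy as np

    # Minimal sketch, assuming a greyscale kidney scan on disk.
    img = cv2.imread("kidney.jpg", cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (128, 128))           # fixed size -> consistent feature length
    img = cv2.equalizeHist(img)                 # enhancement: histogram equalization
    _, mask = cv2.threshold(img, 0, 255,
                            cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # crude segmentation
    roi = cv2.bitwise_and(img, img, mask=mask)  # keep only the region of interest
    features = roi.flatten() / 255.0            # 1-D feature vector for KNN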
Image processing is a vast topic in itself; I recommend reading papers on it via Google Scholar.

Related

Is there a time series ML model (like TimesFormer) which extracts features from 4 channel input images?

Regular TimesFormer takes 3-channel input images, while I have 4-channel (RGBD) images. I am struggling to find a TimesFormer (or a similar model) that takes 4-channel input images and extracts features from them.
Does anybody know of such a model? Preferably, I would like to find a pretrained model with weights.
MORE CONTEXT:
I am working with RGBD video frames and have a multiclass classification problem at the end. My videos are fairly long, between 2 and 4 minutes, so classical time-series models don't work for me. So my inputs are RGBD frames/images from the video, and at the end I would like to get a class prediction.
My idea was to divide the problem into 2 stages:
Extract features from the video into a smaller dimension with a TimesFormer-like model. Result: I would get a new data representation (dataset).
Train a classification network on the new data to get a class prediction.
As of Jan 2023, I don't think there's any readily available TimeSformer model/code that works on 4-channel RGBD images.
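That said, a common workaround (a sketch under assumptions, not an official TimeSformer feature) is to take a pre-trained 3-channel model and inflate its patch-embedding convolution to 4 channels, reusing the RGB filters and initialising the depth channel from their mean. The `patch_embed.proj` attribute path below follows common ViT/timm-style implementations and may differ in your port:

    import torch
    import torch.nn as nn

    def inflate_to_rgbd(proj: nn.Conv2d) -> nn.Conv2d:
        # Build a 4-channel copy of a 3-channel patch-embedding conv,
        # reusing the pre-trained RGB filters and initialising the new
        # depth channel with their mean.
        new_proj = nn.Conv2d(4, proj.out_channels,
                             kernel_size=proj.kernel_size,
                             stride=proj.stride,
                             padding=proj.padding,
                             bias=proj.bias is not None)
        with torch.no_grad():
            new_proj.weight[:, :3] = proj.weight
            new_proj.weight[:, 3:] = proj.weight.mean(dim=1, keepdim=True)
            if proj.bias is not None:
                new_proj.bias.copy_(proj.bias)
        return new_proj

    # Hypothetical usage on a TimeSformer-style model:
    # model.patch_embed.proj = inflate_to_rgbd(model.patch_embed.proj)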
Alternatively, if you are looking for Vision Transformers that can also work with depth (RGBD data), you can find the full list of state-of-the-art approaches with corresponding code (wherever available) here.
A good approach to start with is DepthFormer: Exploiting Long-Range Correlation and Local Information for Accurate Monocular Depth Estimation. You can find the pre-trained models for this approach here.
If you're looking for 3D CNN-based object detectors that can work on RGBD data, RGB-D Salient Object Detection via 3D Convolutional Neural Networks is a good one to start with. Code and pre-trained models can be found here.
Since I don't fully understand your exact problem statement or requirements, I've proposed a few things that I thought could be helpful.

TensorFlow: should I crop objects from the images for better accuracy?

I am new to TensorFlow and I could not find an answer to my question.
I am trying to make a simple program that recognises the type of van in a picture. I downloaded about 100 pictures for each category for my dataset.
My question is: should I crop the pictures so only the van is visible in the picture?
Or should I use the original picture with the background for better accuracy?
The short answer is yes, cropping will help, but there is a lot more to consider when asking that question. For example, when you use this model, what will the images look like? Will someone manually crop images before feeding them to the model, or will people be taking these photos on a phone through an app? A core concept of machine learning is to keep the images in the production environment as close to the training data as possible, so that your performance in production doesn't degrade.
If you're just trying to learn, I would highly recommend building a network on the MNIST or LEGO bricks dataset before trying your own images; if you get stuck on either, there are a lot of great resources available :). Also, consider setting aside 10 images as a test set so that you can evaluate model performance. Third, TensorFlow has a built-in image data generator that will greatly improve your model's performance on a small dataset like this: it can scale, rotate, flip, and zoom your images, which often produces big improvements in model accuracy.
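For instance, a minimal sketch with Keras' ImageDataGenerator (the directory layout, one sub-folder per van category, and the parameter values are assumptions):

    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Random rotations, zooms and flips generate new variants of each
    # photo on the fly; validation_split holds some images out for testing.
    datagen = ImageDataGenerator(
        rescale=1.0 / 255,
        rotation_range=15,       # small random rotations
        zoom_range=0.2,          # random zoom in/out
        horizontal_flip=True,    # vans can face either way
        validation_split=0.1)    # hold out ~10% as a test/validation set

    train_gen = datagen.flow_from_directory(
        "vans/", target_size=(224, 224), subset="training")
    val_gen = datagen.flow_from_directory(
        "vans/", target_size=(224, 224), subset="validation")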
Good luck!

How to use a TensorFlow model for predicting my own images

I've just started with TensorFlow. I wrote a program that uses the Fashion-MNIST dataset to train a model and then predicts labels using 'test_images', and it's working well so far.
But I am curious how I can use my own image of a shoe or a shirt for prediction, because all the test images have shape 28x28. How can I do this?
The task you are engaged in is data preparation and preprocessing. Given a directory of images, one of the things you must do is tag the images; for this task I recommend labelImg.
If you also need the input to have a specific size, as in your example, you can use digital image processing software. The OpenCV library has image-resizing tools that work for this.
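As a rough sketch (the file name is a placeholder and `model` is assumed to be your already-trained Fashion-MNIST Keras model), resizing your own photo down to the 28x28 greyscale format the model expects might look like this:

    import cv2
    import numpy as np

    img = cv2.imread("shoe.jpg", cv2.IMREAD_GRAYSCALE)
    img = cv2.resize(img, (28, 28))        # match the training resolution
    img = 255 - img                        # Fashion-MNIST items are light on a dark
                                           # background; invert if your photo isn't
    img = img.astype("float32") / 255.0    # same scaling as the training data
    img = img.reshape(1, 28, 28)           # add the batch dimension
    prediction = model.predict(img)        # model: your trained Keras model
    print(np.argmax(prediction))           # index of the most likely class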

Siamese Network For Image Similarity

Where can I find details on implementing Siamese networks to perform image similarity and retrieve the most similar image from a dataset?
It is difficult to get a large number of images for all the classes, so only a few images, e.g. 10 for some classes, are available for most of the classes.
SIFT and ORB seem to perform poorly on some classes.
My project is to differentiate between license plates based on the states of the UAE. I have uploaded a few example images.
When there is little training data, no matter how annoying it sounds, the best approach is usually to collect more. Deep networks are infamously data-hungry, and their performance is poor when data is scarce. That said, there are approaches that might help you:
Transfer learning
Data augmentation
In transfer learning, you take an already trained deep net (e.g. ResNet50) that was trained for some other task (e.g. ImageNet), fix all its network weights except for the weights in the last few layers, and train it on your task of interest.
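A minimal sketch of that idea in Keras, freezing the whole ImageNet-trained ResNet50 backbone and training only a new classifier head (`num_states` and the 224x224 input size are assumptions):

    import tensorflow as tf

    num_states = 7  # placeholder: number of license-plate classes
    base = tf.keras.applications.ResNet50(
        include_top=False, weights="imagenet",
        input_shape=(224, 224, 3), pooling="avg")
    base.trainable = False  # fix the pre-trained weights

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.Dense(num_states, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])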
Data augmentation slightly modifies your training data in some predictable way. In your case you could rotate an image by a small angle, apply a perspective transformation, scale the image intensities, or slightly change the colors. You apply a different set of these operations with different parameters every time you use a particular training image; this way you generate new training examples, enlarging your training set.
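For example, a sketch of those operations with OpenCV (the file name and the parameter ranges are assumptions; each call with fresh random parameters yields a new training example):

    import cv2
    import numpy as np

    img = cv2.imread("plate.jpg")
    h, w = img.shape[:2]

    angle = np.random.uniform(-5, 5)                    # small random rotation
    M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    aug = cv2.warpAffine(img, M, (w, h))

    alpha = np.random.uniform(0.8, 1.2)                 # intensity scaling
    aug = cv2.convertScaleAbs(aug, alpha=alpha, beta=0)

    src = np.float32([[0, 0], [w, 0], [0, h], [w, h]])  # slight perspective warp
    jitter = np.random.uniform(-0.03 * w, 0.03 * w, src.shape).astype(np.float32)
    P = cv2.getPerspectiveTransform(src, src + jitter)
    aug = cv2.warpPerspective(aug, P, (w, h))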

How to prepare an image dataset for training a model?

I have a project that uses a deep CNN to classify a parking lot. My idea is to classify every space according to whether there is a car in it or not. My question is: how do I prepare my image dataset to train my model?
I have downloaded the PKLot dataset for training, including negative and positive images.
Should I convert all my training images to grayscale? Should I resize all my training images to one fixed size? (But if I resize my training images to one fixed size, I have both landscape and portrait images.) Thanks :)
This is a fairly broad question, since every image processing pipeline takes a different approach to extracting features. In your parking lot example, however, you would probably need RGB-to-grayscale conversion and size normalization, among other image processing techniques.
A great starting point is this tutorial: http://www.scipy-lectures.org/advanced/image_processing/
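A minimal sketch of those two steps with OpenCV (file name and target size are placeholder assumptions):

    import cv2

    img = cv2.imread("parking_space.jpg")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # RGB -> grayscale
    fixed = cv2.resize(gray, (64, 64))            # one fixed size for every image;
                                                  # pad instead if distorting portrait
                                                  # vs. landscape crops is a concern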
First detect the cars present in the image and obtain their size and alignment, then segment and label the parking lot using a suitable fixed size and alignment.
Since you want to use the PKLot dataset to train your model and test on real data, the best approach is to make both datasets similar and homogeneous: normalized, fixed-size, grayscale, with parameterized shapes. Then you can use the Scale-Invariant Feature Transform (SIFT) as a basic method for image feature extraction (see the sketch after this list). The exact definition of a feature often depends on the problem or the type of application; since features are the starting point and main primitives for subsequent algorithms, the overall algorithm will often only be as good as its feature detector. Depending on your problem, you can use these types of image features:
Corners / interest points
Edges
Blobs / regions of interest points
Ridges
...
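A minimal SIFT sketch with OpenCV (the file name is a placeholder; SIFT has been in the main opencv-python package since 4.4, while older releases need opencv-contrib-python):

    import cv2

    gray = cv2.imread("parking_space.jpg", cv2.IMREAD_GRAYSCALE)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(gray, None)
    # descriptors is an (n_keypoints, 128) array you can feed into a
    # bag-of-features or keypoint-matching pipeline.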
