Clever image augmentation - random zoom out - Python

I'm building a CNN to identify facial keypoints. I want to make the net more robust, so I thought about applying some zoom-out transforms, because most pictures have the keypoints in roughly the same locations and the net doesn't learn much.
My approach:
I want the augmented images to keep the original image size, so I apply MaxPool2d and then random (unequal) padding until the original size is reached.
First question
Is this going to work with simple average padding or zero padding? I'm sure it would be even better if I made the padding look more like a background, but is there a simple way to do that?
Second question
The keypoints are the target vector; they come as a row vector of 30. I'm getting confused with the logic needed to transform them to the smaller space.
Generally, if an original point was at (x=5, y=7), it transforms to (x=2, y=3). I'm not completely sure about this, but everything I've checked manually so far has been correct. But what should I do if two keypoints end up in the same new pixel? I can't feed the network fewer target values.
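For concreteness, the mapping I think is needed looks roughly like this (just a sketch; the function name and the factor-2 example are illustrative, and I floor the result to get integer pixels):
import numpy as np

def zoom_out_keypoints(kps, k, pad_left, pad_top):
    # Scale the (x, y) keypoints down by the zoom factor, then shift them by
    # the left/top padding that restores the original image size.
    kps = np.asarray(kps, dtype=np.float32)
    return np.floor(kps / k) + np.array([pad_left, pad_top], dtype=np.float32)

print(zoom_out_keypoints([[5, 7]], k=2, pad_left=0, pad_top=0))   # -> [[2. 3.]]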
That's it. I would be happy to hear your thoughts.

I suggest using torchvision.transforms.RandomResizedCrop as part of your Compose pipeline, which will give you random zooms AND resize the resulting images to a standard size. This avoids the issues in both of your questions.
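A minimal sketch of what that could look like (the output size and scale range here are placeholders, not values from your setup); note that torchvision transforms operate on the image only, so the 30 keypoint targets would still need to be remapped with the same crop parameters, e.g. by drawing the crop box yourself via RandomResizedCrop.get_params and the functional API:
import torchvision.transforms as T

train_transform = T.Compose([
    # Take a random crop covering 50-100% of the image area (a "zoom"),
    # then resize it back to a fixed 96x96 so every sample has the same shape.
    T.RandomResizedCrop(size=96, scale=(0.5, 1.0)),
    T.ToTensor(),
])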

Related

CV2 resizing with CNN

I am using CV2 to resize various images with different dimensions (e.g. 70×300, 800×500, 60×50) to a specific dimension of 200×200 pixels. Later, I feed the pictures to a CNN algorithm to classify the images. (My understanding is that pictures must all have the same size when fed into a CNN.)
My questions:
1- How are low-resolution pictures converted into higher-resolution ones, and how are high-resolution pictures converted into lower-resolution ones? Will this affect the information stored in the pictures?
2- Is it good practice to use this approach with a CNN? Or is it better to pad zeros to the end of the image to get the desired resolution? I have seen many researchers pad the end of a file with zeros when trying to detect malware files, so that all files have a common dimension. Does this mean that padding is more accurate than resizing?
Using interpolation. https://chadrick-kwag.net/cv2-resize-interpolation-methods/
Definitely, resizing is a lossy process and you'll lose information.
Both are okay and are used depending on the need; resizing is equally applicable. If your CNN fails just because the images were resized, it is probably badly overfitted. Resizing also acts as a very light form of regularization; it's advisable to apply further augmentation schemes to the images before CNN training.
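As a rough illustration (the file name and target size are placeholders), the usual cv2 recipe is to pick the interpolation based on whether you are shrinking or enlarging:
import cv2

def resize_to_square(img, size=200):
    # INTER_AREA is generally recommended for shrinking, INTER_CUBIC for enlarging.
    h, w = img.shape[:2]
    interp = cv2.INTER_AREA if (h > size or w > size) else cv2.INTER_CUBIC
    return cv2.resize(img, (size, size), interpolation=interp)

img = cv2.imread('example.jpg')          # placeholder file name
resized = resize_to_square(img, 200)     # note: the aspect ratio is not preserved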

Stitching four images with OpenCV + Python

Goal:
For the past two weeks I've been trying to figure out how to convert the following image:
To one that looks like this (may not match exactly, as this image was taken at a different time):
Lens Correction (necessary?):
The first thing I noticed is that simply slicing the image and overlaying the four parts wouldn't work perfectly, as the curvature of certain lines does not match. For instance, the mid-court line bends left in the second slice and bends right in the third slice. This bending looks like a barrel distortion so I tried using both a parameterized lens correction function (passing k1, k2, and k3 to OpenCV) and using lensfun. Since the lensfun database does not include my camera make or model (it's an AXIS camera) and I do not know the make or model of the lens (it's manufactured as part of the camera), I wrote a small script to dump test images using various lenses with various parameters, then skimmed through the thousands of output images until I found one that looked like it had relatively straight lines:
This correction was done using the "Samyang 12mm f/2.8 Fish-Eye ED AS NCS" lens with a "Canon EOS 10D" camera in lensfun. It's probably not perfect, but I figured it was close enough to move on to step two.
Once the lens distortion was corrected, the second issue is that the same line in two slices was pointing in different directions, which should be corrected with a simple perspective transform. So I began a long quest to figure out the proper parameters for this perspective transform.
Failed Attempts:
1. Using SciPy
I started by writing a cost function to judge the "quality" of a given set of parameters (overlapped pixels should match) and applying SciPy's solver to figure it out. I made several tweaks to my cost function (applying a Gaussian blur, scaling down the image, gray scaling the image, using the Sobel operator to get a gradient, looking only at the pixels on either side of a "seam" after overlapping instead of the whole overlap region, etc) but it always failed to find a good solution. The results looked worse than the original camera image most of the time:
2. Using math
When that failed I tried applying math to compute the proper perspective transform. I know the FOV of the camera (from the spec sheet), I know the image width and height, I know the sensor size (from the spec sheet), and using a protractor I measured the angles between the lenses. Using the pinhole model I then calculated the expected (x,y) values of points on the image plane and what transform would be necessary to correct them. The results looked better than SciPy, but were still dismal.
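(For reference, the pinhole relation used in this step: with a horizontal FOV and an image width in pixels, the focal length in pixels is f = (w/2) / tan(FOV/2), and a point at angle theta off the optical axis lands at roughly x = f*tan(theta); the numbers below are made up, not the camera's actual specs.)
import math

fov_deg, width_px = 90.0, 1920                     # made-up spec-sheet values
f_px = (width_px / 2) / math.tan(math.radians(fov_deg) / 2)

theta_deg = 25.0                                    # angle of a point off the optical axis
x_px = f_px * math.tan(math.radians(theta_deg))     # its horizontal image-plane coordinate
print(f_px, x_px)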
3. Using OpenCV's Stitcher
After this I tried using OpenCV's built-in Stitcher class. However, it failed to stitch together slices 2 and 3 due to insufficient overlap between the images (and about 10% of the time it even failed to stitch together slices 1 and 2, presumably because of the non-deterministic nature of RANSAC). Even when it did succeed, the stitch wasn't that great:
4. Using ORB and OpenCV's findHomography
Most recently I tried using ORB with a mask (only looking for features in the overlap region) and OpenCV's findHomography function to create a custom version of the Stitcher. While the matches seemed promising, the resulting stitch was still sub-optimal:
I'm beginning to suspect that my methodology (slice -> lens correct -> perspective transform -> overlay) is flawed and there's a better way to do this.
5. Updated ORB / findHomography
I updated my feature detection to eliminate any matches where the Y coordinates differed drastically (e.g. matching the white of the table to the white of the lights). After doing this my number of matched features fell from ~110 to ~55, but the homography was improved significantly. Here's the stitch that results for slices 1/2 and 2/3 with the update:
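Roughly, the pipeline for a pair of adjacent slices looks like this (the ORB settings, the 30-pixel Y threshold and the output canvas width are placeholder choices, not the exact ones I used):
import cv2
import numpy as np

def stitch_pair(img_left, img_right, mask_left, mask_right, max_dy=30):
    # Detect ORB features only inside the overlap masks.
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_left, mask_left)
    kp2, des2 = orb.detectAndCompute(img_right, mask_right)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)

    # Drop matches whose Y coordinates differ drastically
    # (e.g. the white of the table matched to the white of the lights).
    matches = [m for m in matches
               if abs(kp1[m.queryIdx].pt[1] - kp2[m.trainIdx].pt[1]) < max_dy]

    # Homography mapping the right slice into the left slice's frame.
    src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp the right slice and paste the left slice on top (crude overlay).
    canvas = cv2.warpPerspective(img_right, H,
                                 (img_left.shape[1] * 2, img_left.shape[0]))
    canvas[:, :img_left.shape[1]] = img_left
    return canvas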
Until someone can tell me that I'm going about this all wrong, I'm going to keep pursuing this strategy with the following added step:
Slice image
Lens correct each slice
Perspective transform slice 2 or 3 so that the side line is horizontal and the mid-court line is vertical
Use ORB + match filtering + findHomography to iteratively align and then stitch adjacent slices
Ultimately, when it's all said and done, I want to try to compute a mapping from input pixels to output pixels so that we're not doing all of this complex work (lens correction, ORB, findHomography, etc.) per frame. We'll do it once per camera, save the mapping to a file somewhere, and then we can map the input video to an output video frame by frame in real time using cv2.remap.
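In code, the once-per-camera / per-frame split would look roughly like this; how map_x and map_y get filled (by composing the lens correction and the homographies) is the part done only once, and the sizes and file names here are placeholders:
import cv2
import numpy as np

# Once per camera: build a lookup of which input pixel each output pixel samples from.
out_h, out_w = 1080, 3840                            # placeholder output size
map_x = np.zeros((out_h, out_w), dtype=np.float32)
map_y = np.zeros((out_h, out_w), dtype=np.float32)
# ... fill map_x[y, x], map_y[y, x] with input coordinates here ...
np.savez('camera_mapping.npz', map_x=map_x, map_y=map_y)

# Per frame, in real time: just apply the saved mapping.
maps = np.load('camera_mapping.npz')
frame = cv2.imread('frame.png')                      # placeholder for one video frame
stitched = cv2.remap(frame, maps['map_x'], maps['map_y'],
                     interpolation=cv2.INTER_LINEAR)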
Note:
The second image I posted showing the "expected output" comes directly from the camera in question. It can be configured to return the first image at 30 fps, or the second image at 10 fps. We wish to perform the stitching off-camera on a more powerful computer so we can get 30 fps but still have the single image.
AXIS provides an SDK for doing the stitching off-camera, but this SDK is Windows-only, while most of our tech stack is Linux and most of our development machines are Mac OS. I have used a Windows computer to try to look into the stitching SDK they provide; however, I had no luck getting it to compile and run. Their sample code kept throwing errors, and I've never had any luck getting Visual Studio or C++ to play nicely for me.
My suggestion is to train an autoencoder. Use the first image as input and the second one as an output, as in a denoising autoencoder:
Note that you may lose resolution if you create a bottleneck that is too small in the middle layer.
Also, variational autoencoders produce a latent vector but work on the same principle.
You can adapt this code:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, BatchNormalization, Activation,
                                     UpSampling2D, MaxPooling2D, Reshape)
from tensorflow.keras.optimizers import SGD

input_shape = (28, 28, 1)            # placeholder: 28x28 grayscale inputs; the final Reshape relies on this
learning_rate, momentum = 0.01, 0.9  # placeholder hyperparameters

denoise = Sequential()
denoise.add(Conv2D(20, (3, 3), padding='valid', input_shape=input_shape))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(UpSampling2D(size=(2, 2)))
denoise.add(Conv2D(20, (3, 3), kernel_initializer='glorot_uniform'))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(Conv2D(20, (3, 3), kernel_initializer='glorot_uniform'))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(MaxPooling2D(pool_size=(3, 3)))
denoise.add(Conv2D(4, (3, 3), kernel_initializer='glorot_uniform'))
denoise.add(BatchNormalization())
denoise.add(Activation('relu'))
denoise.add(Reshape((28, 28, 1)))    # 14*14*4 == 28*28*1 elements, so this reshape is valid

sgd = SGD(learning_rate=learning_rate, momentum=momentum, nesterov=False)
denoise.compile(loss='mean_squared_error', optimizer=sgd, metrics=['accuracy'])
denoise.summary()

# x_train_noisy are the input images and x_train the target images.
denoise.fit(x_train_noisy, x_train,
            epochs=50,
            batch_size=30, verbose=1)

Finding the transformation between two matched shapes

Given two images with similar blobs, is there a simple way to find the transformation between them? As an example, I have two images like the following:
The right is the output of a neural network, while the left is an approximate truth (from a shape perspective only). I'm looking to find the transformation to move the left image to best match the position and orientation of the right. In this case, a rotation of some 150-160 degrees CC, and a translation up and right.
This seems to be a shape matching problem with some added constraints, but I'm wondering if there is a way to do it without having to perform a bunch of test transformations/sliding window. Most of the examples I've found have been for classification, and the positional ones are not rotation tolerant.
Ideas I have had so far: I've looked at Hu moments and OpenCV's matchShapes, which seem like they would give me the similarity (and mirroring, which is a possibility in the data and thus desirable), but I'm not sure how to use them without still using some sort of window. Another option would be SIFT or another feature-based approach, but I don't think it would work particularly well given the low information content of the data and the dissimilarity of the shapes (a Hough transform as a base?). Another brute-force method would be to calculate the difference in the centroids, move the left image over the right, and then rotate until I find the orientation with the maximum Jaccard index (or use the moments to find the rotation?), but that's the same kind of thing I'm trying to avoid (and it would always be a bit off given the inaccuracy of the NN predictions).
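One way to avoid the window entirely, assuming each image really is a single dominant blob, is to use the image moments directly: align the centroids and take the rotation from the orientation of the second-order central moments (this gives the angle only up to 180 degrees, so the two candidate rotations would still be compared, e.g. by Jaccard index). A rough sketch, with placeholder file names:
import cv2
import numpy as np

def blob_pose(mask):
    # Centroid and principal-axis orientation of a binary blob from its moments.
    m = cv2.moments(mask, binaryImage=True)
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    theta = 0.5 * np.arctan2(2 * m['mu11'], m['mu20'] - m['mu02'])
    return (cx, cy), theta

left_mask = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)    # placeholder file names
right_mask = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

(cx1, cy1), t1 = blob_pose(left_mask)
(cx2, cy2), t2 = blob_pose(right_mask)

# Rotate the left blob to the right blob's orientation (ambiguous by 180 degrees,
# and the sign may need flipping depending on coordinate conventions),
# then translate its centroid onto the right blob's centroid.
angle = np.degrees(t2 - t1)
M = cv2.getRotationMatrix2D((cx1, cy1), -angle, 1.0)
M[0, 2] += cx2 - cx1
M[1, 2] += cy2 - cy1
aligned = cv2.warpAffine(left_mask, M, (right_mask.shape[1], right_mask.shape[0]))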
My first instinct is just to make a neural network to do it, but I feel like there is a better answer that I'm just missing.

Python photo mosaic with abstractly shaped mosaics

Image mosaics use a set of predefined square images to build a larger image (example here).
There are a lot of solutions and it's quite trivial to achieve this effect. However, it becomes much harder with the following constraints:
The shape of the original mosaics is abstract. Any convex polygon could do.
Each mosaic can only be used once.
There is no need for the mosaics to be absolutely packed (i.e. occupying 100% of the canvas), but they should be as packed as possible without overlapping.
I'm trying to automate the ancient art of tessellation, specifically the Opus palladianum technique.
My idea is to use simulated annealing or some other heuristic to optimize the position and rotation of each irregular mosaic tile, swapping two of them in each iteration, and trying to minimize some energy function that reflects both the similarity to the target image and the "packedness" of the tiles.
I'm trying to achieve this in Python; any ideas and help would be greatly appreciated.
Example:
I expect you could probably use a GA (genetic algorithm) with a "non-overlapping" constraint to do this job.
The parameters for each individual (each convex polygon) are:
initial position
rotation
(size ?)
Your fitness function would then be built to give the best score to an individual when its polygon does not overlap the others (and sits close to the other individuals); a rough sketch is given below.
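For instance, using shapely (the tile representation, the canvas term, and the weights here are all assumptions; the image-similarity term from the question would be added on top):
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate

def place(tile, x, y, angle):
    # Rotate a tile about its centroid, then move it to (x, y).
    return translate(rotate(tile, angle, origin='centroid'), x, y)

def fitness(genes, tiles, canvas):
    # Higher is better: reward covered canvas area, punish pairwise overlap
    # and any part of a tile that spills outside the canvas. Weights are assumed.
    placed = [place(t, *g) for t, g in zip(tiles, genes)]
    covered = sum(p.intersection(canvas).area for p in placed)
    overlap = sum(placed[i].intersection(placed[j]).area
                  for i in range(len(placed)) for j in range(i + 1, len(placed)))
    outside = sum(p.difference(canvas).area for p in placed)
    return covered - 10.0 * overlap - 5.0 * outside

# Toy usage: two triangular tiles on a 10x10 canvas.
canvas = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
tiles = [Polygon([(0, 0), (2, 0), (1, 2)]), Polygon([(0, 0), (3, 0), (0, 3)])]
genes = [(2.0, 2.0, 15.0), (6.0, 6.0, -30.0)]     # (x, y, rotation in degrees) per tile
print(fitness(genes, tiles, canvas))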
You may watch this video and this one as examples.
Regards

Detect the location of an image within a larger image

How do you detect the location of an image within a larger image? I have an unmodified copy of the image. This image is then changed to an arbitrary resolution and placed randomly within a much larger image of arbitrary size. No other transformations are conducted on the resulting image. Python code would be ideal, and it would probably require libgd. If you know of a good approach to this problem you'll get a +1.
There is a quick and dirty solution, and that's simply sliding a window over the target image and computing some measure of similarity at each location, then picking the location with the highest similarity. Then you compare the similarity to a threshold, if the score is above the threshold, you conclude the image is there and that's the location; if the score is below the threshold, then the image isn't there.
As a similarity measure, you can use normalized correlation or sum of squared differences (aka L2 norm). As people mentioned, this will not deal with scale changes. So you also rescale your original image multiple times and repeat the process above with each scaled version. Depending on the size of your input image and the range of possible scales, this may be good enough, and it's easy to implement.
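A minimal sketch of that approach with OpenCV's matchTemplate (normalized correlation) plus a loop over candidate scales; the file names, scale range, step and acceptance threshold are placeholders:
import cv2
import numpy as np

scene = cv2.imread('large.png', cv2.IMREAD_GRAYSCALE)        # placeholder file names
template = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)

best_score, best_loc, best_scale = -1.0, None, None
for scale in np.linspace(0.2, 2.0, 37):                       # placeholder scale range
    t = cv2.resize(template, None, fx=scale, fy=scale)
    if t.shape[0] > scene.shape[0] or t.shape[1] > scene.shape[1]:
        continue
    result = cv2.matchTemplate(scene, t, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best_score:
        best_score, best_loc, best_scale = max_val, max_loc, scale

if best_score > 0.8:                                          # placeholder threshold
    print('found at top-left', best_loc, 'scale', best_scale, 'score', best_score)
else:
    print('probably not present (best score %.2f)' % best_score)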
A proper solution is to use affine invariants. Try looking up "wide-baseline stereo matching"; people have looked at this problem in that context. The methods that are used are generally something like this:
Preprocessing of the original image
Run an "interest point detector". This will find a few points in the image which are easily localizable, e.g. corners. There are many detectors, a detector called "harris-affine" works well and is pretty popular (so implementations probably exist). Another option is to use the Difference-of-Gaussians (DoG) detector, it was developed for SIFT and works well too.
At each interest point, extract a small sub-image (e.g. 30x30 pixels)
For each sub-image, compute a "descriptor", some representation of the image content in that window. Again, many descriptors exist. Things to look at are how well the descriptor describes the image content (you want two descriptors to match only if they are similar) and how invariant it is (you want it to be the same even after scaling). In your case, I'd recommend using SIFT. It is not as invariant as some other descriptors, but can cope with scale well, and in your case scale is the only thing that changes.
At the end of this stage, you will have a set of descriptors.
Testing (with the new test image).
First, you run the same interest point detector as in step 1 and get a set of interest points. You compute the same descriptor for each point, as above. Now you have a set of descriptors for the target image as well.
Next, you look for matches. Ideally, to each descriptor from your original image, there will be some pretty similar descriptor in the target image. (Since the target image is larger, there will also be "leftover" descriptors, i.e. points that don't correspond to anything in the original image.) So if enough of the original descriptors match with enough similarity, then you know the target is there. Moreover, since the descriptors are location-specific, you will also know where in the target image the original image is.
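Roughly what that looks like with OpenCV's SIFT implementation and a ratio test on the matches; the file names, ratio threshold and minimum match count are placeholders:
import cv2
import numpy as np

original = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)   # placeholder file names
target = cv2.imread('large.png', cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(original, None)
kp2, des2 = sift.detectAndCompute(target, None)

# Lowe's ratio test: keep a match only if it is clearly better than the runner-up.
matcher = cv2.BFMatcher()
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]

if len(good) >= 10:                                            # placeholder minimum
    pts = np.float32([kp2[m.trainIdx].pt for m in good])
    print('likely present; matched points centre around', pts.mean(axis=0))
else:
    print('probably not present (only %d good matches)' % len(good))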
You probably want cross-correlation. (Autocorrelation is correlating a signal with itself; cross correlating is correlating two different signals.)
What correlation does for you, over simply checking for exact matches, is that it will tell you where the best matches are and how good they are. The flip side is that, for a 2-D picture, it's something like O(N^3), and it's not that simple an algorithm. But it's magic once you get it to work.
EDIT: Aargh, you specified an arbitrary resize. That's going to break any correlation-based algorithm. Sorry, you're outside my experience now and SO won't let me delete this answer.
http://en.wikipedia.org/wiki/Autocorrelation is my first instinct.
Take a look at Scale-Invariant Feature Transforms; there are many different flavors that may be more or less tailored to the type of images you happen to be working with.
