Image mosaics use a set of predefined square images to build a larger image (example here).
There are a lot of solutions and it's quite trivial to achieve this effect. However, it becomes much harder with the following constraints:
The shapes of the tiles are arbitrary; any convex polygon could do.
Each mosaic can only be used once.
There is no need for the mosaics to be absolutely packed (i.e. occupying 100% of the canvas), but they should be as packed as possible without overlapping.
I'm trying to automate the ancient art of tessellation, specifically the Opus palladianum technique.
My idea is to use simulated annealing or some other heuristic to optimize the position and rotation of each irregular tile, swapping two in each iteration, trying to minimize some energy function that reflects both the similarity to the target image and the "packness" of the tiles.
I'm trying to achieve this in Python; any ideas or help would be greatly appreciated.
Example:
I expect you could probably use a GA (Genetic Algorithm) with a "non-overlapping" constraint to do this job.
Parameters for individual (each convex polygon) are:
initial position
rotation
(size ?)
And your fitness function should be built to give the best score to individuals whose polygons do not overlap (and are close to the other individuals).
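As a rough illustration of that fitness term, here is a minimal sketch using shapely (an assumption on my side; any polygon library would do). The `place` helper, the parameter layout and the weights are purely illustrative:

```python
# Minimal sketch of a packing-oriented fitness function, assuming shapely is available.
# An "individual" is one tile with parameters (x, y, angle); the fitness rewards covered
# area and penalizes pairwise overlap. Names and weights here are illustrative only.
from shapely.geometry import Polygon
from shapely.affinity import rotate, translate

def place(tile_coords, x, y, angle_deg):
    """Apply one individual's parameters (position + rotation) to a tile polygon."""
    poly = rotate(Polygon(tile_coords), angle_deg, origin='centroid')
    return translate(poly, xoff=x, yoff=y)

def fitness(placed_tiles, canvas_area, overlap_weight=10.0):
    """Higher is better: well-packed tiles with no overlaps."""
    covered = sum(t.area for t in placed_tiles)
    overlap = 0.0
    for i, a in enumerate(placed_tiles):
        for b in placed_tiles[i + 1:]:
            overlap += a.intersection(b).area
    return (covered - overlap_weight * overlap) / canvas_area
```

On top of this you could add a term comparing tile colors against the target image, as the question mentions.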
You may watch this video and this one as examples.
Related
I have an application where I am attempting structure from motion for multiple views from the same camera using OpenCV in Python. (OpenCV isn't a hard requirement but Python is right now). Example: I have 16 camera images to work with, with small motions between each frame. In the frames (200x200) I can track ~50 features and I'd like to estimate camera pose and 3D point locations for the features as accurately as possible.
I'm at the point where I'd like to use a stream of frames (from video) to refine the estimates of camera pose and 3D point positions since estimating pose and triangulating 3D points from two frames produces a lot of noise. I believe Bundle Adjustment is the next obvious direction to go in, but I'm not finding any obvious Python implementation of bundle adjustment to use. Many options (such as LevMarqSparse::bundleAdjust()) appear to be only partially completed or not fully adopted.
What is a good place to start? I guess I'm looking for a relatively simple Python bundle adjustment to prototype with and see if that is a direction I want to invest more time into.
Assuming you have a calibrated camera and an initial estimate of the intrinsic and extrinsic parameters, you could at first perform a simple bundle adjustment directly in Python.
For instance, you could use stacks of three images and compute 3D points from the feature points via homogeneous triangulation. A simple bundle adjustment as a first prototype can be built, for instance, with scipy.optimize.least_squares and the Trust Region Reflective non-linear optimization method. Check out this tutorial.
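A minimal sketch of such a prototype, assuming OpenCV is available for the projection step; the parameter layout (one Rodrigues vector and translation per camera, followed by the 3D points) and all names are illustrative:

```python
# Minimal bundle adjustment sketch with scipy.optimize.least_squares (Trust Region Reflective).
# Assumes a single calibrated camera with known intrinsic matrix K, and initial guesses for
# the camera poses and the 3D points. cam_idx/pt_idx say which camera/point each observation uses.
import numpy as np
import cv2
from scipy.optimize import least_squares

def reprojection_residuals(params, n_cams, n_pts, K, cam_idx, pt_idx, observed_2d):
    """Residuals = reprojected 2D points minus observed 2D points."""
    poses = params[:n_cams * 6].reshape(n_cams, 6)        # rvec (3) + tvec (3) per camera
    points_3d = params[n_cams * 6:].reshape(n_pts, 3)
    residuals = []
    for cam, pt, obs in zip(cam_idx, pt_idx, observed_2d):
        rvec, tvec = poses[cam, :3], poses[cam, 3:]
        proj, _ = cv2.projectPoints(points_3d[pt].reshape(1, 3), rvec, tvec, K, None)
        residuals.append(proj.ravel() - obs)
    return np.concatenate(residuals)

def bundle_adjust(initial_poses, initial_points, K, cam_idx, pt_idx, observed_2d):
    x0 = np.hstack([initial_poses.ravel(), initial_points.ravel()])
    result = least_squares(
        reprojection_residuals, x0, method='trf',          # Trust Region Reflective
        args=(len(initial_poses), len(initial_points), K, cam_idx, pt_idx, observed_2d))
    n_cams = len(initial_poses)
    return (result.x[:n_cams * 6].reshape(n_cams, 6),
            result.x[n_cams * 6:].reshape(-1, 3))
```

For larger problems, least_squares also accepts a jac_sparsity argument, which helps a lot given the sparse structure of bundle adjustment.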
Afterwards you can decide whether you want to implement or use a Levenberg-Marquardt optimization that can handle sparse Jacobians, or even derive the Jacobian analytically to improve convergence if desired.
So far, I don't think there is an adequate Python library that provides a potent, high-performance implementation of bundle adjustment.
Given two images with similar blobs, is there a simple way to find the transformation between them? As an example, I have two images like the following:
The right is the output of a neural network, while the left is an approximate ground truth (from a shape perspective only). I'm looking for the transformation that moves the left image to best match the position and orientation of the right: in this case, a rotation of some 150-160 degrees counter-clockwise and a translation up and to the right.
This seems to be a shape matching problem with some added constraints, but I'm wondering if there is a way to do it without having to perform a bunch of test transformations/sliding window. Most of the examples I've found have been for classification, and the positional ones are not rotation tolerant.
Ideas I have had so far:
Hu moments and OpenCV's matchShapes, which seem like they would give me the similarity (and mirroring, which is a possibility in the data and thus desirable), but I'm not sure how to use them without still using some sort of window.
SIFT or another feature-based approach, but I don't think it would work particularly well given the low information content of the data and the less similar shapes (Hough transform as a base?).
A brute-force method: calculate the difference in the centroids, move the left image over the right and then rotate until I find the orientation with the maximum Jaccard index (or use the moments to find the rotation?) - but that's the same kind of thing I'm trying to avoid (and it would always be a bit off given the inaccuracy of the NN predictions).
My first instinct is just to make a neural network to do it, but I feel like there is a better answer that I'm just missing.
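For reference, the moments-based version of that brute-force idea would look roughly like this (a sketch assuming binary single-channel images; the principal-axis angle is only defined modulo 180 degrees, so both candidate rotations may need checking, and all names are illustrative):

```python
# Estimate translation from the centroid difference and rotation from the
# principal-axis angle of each blob, both obtained from image moments.
import cv2
import numpy as np

def blob_pose(binary_img):
    """Return (centroid, principal-axis angle in radians) from image moments."""
    m = cv2.moments(binary_img, binaryImage=True)
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    # Orientation of the principal axis from second-order central moments.
    theta = 0.5 * np.arctan2(2 * m['mu11'], m['mu20'] - m['mu02'])
    return np.array([cx, cy]), theta

def estimate_transform(src_img, dst_img):
    """Rigid transform (rotation + translation) taking the src blob onto the dst blob."""
    (c_src, a_src), (c_dst, a_dst) = blob_pose(src_img), blob_pose(dst_img)
    angle_deg = np.degrees(a_dst - a_src)
    # Negative sign because of OpenCV's angle convention in image coordinates;
    # the axis is ambiguous by 180 degrees, so the flipped rotation may fit better.
    M = cv2.getRotationMatrix2D((float(c_src[0]), float(c_src[1])), -angle_deg, 1.0)
    M[:, 2] += c_dst - c_src   # move the source centroid onto the destination centroid
    return M                   # use with cv2.warpAffine(src_img, M, dst_img.shape[::-1])
```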
I am trying to obtain a radius and diameter distribution from some AFM (atomic force microscopy) measurements. So far I have been trying out Gwyddion, ImageJ and different workflows in MATLAB.
At the moment the best result I have found is to use Gwyddion, take the phase image, high-pass filter it, and then try edge detection with a Laplacian of Gaussian. The result is shown in figure 3. However, this image is still too noisy and doesn't really capture the edges of all the particles (some are merged together, others do not have a clear perimeter).
In the end I need an image which segments each of the spherical particles which I can use for blob detection/analysis to obtain size/radius information.
Can anyone recommend a different method?
I would definitely try granulometry; it was designed for something very similar. There is a good explanation of granulometry here, starting on page 158.
Granulometry performs consecutive openings of increasing size that erase patterns according to their dimensions: the bigger the pattern, the later it is erased. This gives you a curve that represents the distribution of pattern sizes in your image, which is exactly what you want.
However, it will not give you any information about positions inside the image. If you want a rough modelling of the blobs present in your image, you can take a look at the Ultimate Opening.
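As a starting point, a basic granulometry can be sketched in a few lines, e.g. with scikit-image (my assumption; Gwyddion and ImageJ have equivalents). Openings with disks of increasing radius progressively remove bright particles, and the drop in remaining intensity per radius gives the size distribution:

```python
# Minimal granulometry sketch, assuming a grayscale or binary particle image `img`
# (bright particles on a dark background).
import numpy as np
from skimage import morphology

def granulometry(img, max_radius=30):
    """Return (radii, remaining total intensity after opening with a disk of each radius)."""
    radii = np.arange(1, max_radius + 1)
    curve = np.array([
        morphology.opening(img, morphology.disk(r)).sum()
        for r in radii
    ], dtype=float)
    return radii, curve

# The negative discrete derivative of the curve (the "pattern spectrum")
# peaks at the dominant particle radii:
# radii, curve = granulometry(img)
# size_density = -np.diff(curve)
```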
Maybe you can use Avizo; it's a powerful piece of software for dealing with image problems, especially 3D data (CT).
Suppose that I have an array of sensors that allows me to come up with an estimate of my pose relative to some fixed rectangular marker. I thus have an estimate as to what the contour of the marker will look like in the image from the camera. How might I use this to better detect contours?
The problem I'm trying to overcome is that sometimes the marker is occluded, perhaps by a line cutting across it, so I'm left with two contours that, if merged, would yield the marker. I've tried morphological opening and closing to fix the problem, but it isn't robust to different lighting conditions.
One approach that I'm considering is to use the predicted contour, and perform a local convolution with the gradient of the image, to find my true pose.
Any thoughts or advice?
The obvious advantage of having a pose estimate is that it restricts the image region for searching your target.
Next, if your problem is occlusion, you need to model it explicitly rather than just trying to paper over it with image-processing tricks: add to your detector's objective function a term that expresses what your target may look like when partially occluded. This can be either an explicit "occluded appearance" model, or an implicit one - e.g. an algorithm that can recognize visible portions of the target independently of the whole.
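To illustrate the first point, restricting the contour search to the predicted marker region might be sketched like this (assuming OpenCV, known 3D marker corners, an estimated pose (rvec, tvec) and intrinsics K/dist; the threshold choice and margin are illustrative):

```python
# Project the predicted marker outline, pad it, and only search that ROI for contours.
import cv2
import numpy as np

def marker_roi_contours(gray, marker_corners_3d, rvec, tvec, K, dist, margin=20):
    # Project the predicted marker corners into the image.
    projected, _ = cv2.projectPoints(marker_corners_3d, rvec, tvec, K, dist)
    x, y, w, h = cv2.boundingRect(projected.astype(np.float32))
    # Pad the predicted region to allow for pose-estimate error.
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    x1, y1 = min(x + w + margin, gray.shape[1]), min(y + h + margin, gray.shape[0])
    roi = gray[y0:y1, x0:x1]
    # Contour search restricted to the ROI; the thresholding step is application-specific.
    _, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Shift contours back into full-image coordinates.
    return [c + np.array([x0, y0]) for c in contours]
```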
I would like to implement a Maya plugin (this question is independent of Maya) to create 3D Voronoi patterns, something like
I just know that I have to start from point sampling (I implemented the adaptive Poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was something different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
```python
from scipy.spatial import Voronoi

vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    # Skip ridges that extend to infinity (marked with a -1 vertex index).
    if all(v >= 0 for v in vpair):
        # Draw each edge of the (finite) ridge polygon.
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            create_line(v0.tolist(), v1.tolist())
```
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm)
Am I missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns?
I saw your question when you posted it but didn't have a real answer for you; however, since you still haven't gotten any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to what you got in your second image; that seems fine. I don't know how SciPy handles this, but the implementation I used flagged some edge ends as 'infinite' and provided the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
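For reference, SciPy marks the infinite parts with the vertex index -1, so keeping only the finite regions could be sketched like this (a rough sketch; clipping the infinite ridges to a bounding box instead is also possible but more involved):

```python
# Keep only Voronoi regions whose vertex lists contain no -1
# (SciPy's marker for a vertex at infinity).
import numpy as np
from scipy.spatial import Voronoi

def finite_regions(points):
    vor = Voronoi(points)
    regions = []
    for region_index in vor.point_region:
        region = vor.regions[region_index]
        if region and -1 not in region:           # -1 flags an infinite vertex
            regions.append(vor.vertices[region])  # coordinates of the region's vertices
    return vor, regions
```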
Points generation:
Poisson disc sampling is fine as sample data or for early R&D, but it's also the most boring option :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same center but different radii.
When you create a Voronoi diagram from them and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries of the 'ball', but if you scale the points of one sphere to create the second one, you should get a regular, ball-like mesh. You can also use similar points but add some random offset to control the level of surface irregularity (see the sketch after this list).
Take your computed diagram and, for each edge, create a few points along that edge - this will give you small areas building up the edges of the bigger areas. Play with random offsets again. Try to ignore edges that don't touch any infinite region to get a result similar to your image.
Get the points from both stages and compute the diagram once more.
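A small sketch of the two-sphere input from the first steps above (purely illustrative; the radii, point count and jitter are arbitrary):

```python
# Random points on a sphere, plus a scaled (and optionally jittered) copy at a second radius.
import numpy as np

def two_sphere_points(n=200, r_outer=1.0, r_inner=0.8, jitter=0.0, seed=0):
    rng = np.random.default_rng(seed)
    # Uniformly distributed directions on the unit sphere.
    dirs = rng.normal(size=(n, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    outer = dirs * r_outer
    # Scaled copy of the same points for the inner sphere, with optional random offset.
    inner = dirs * r_inner + rng.normal(scale=jitter, size=(n, 3))
    return np.vstack([outer, inner])
```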
Mesh generation:
Up to now this doesn't look like your target images. In fact it may be really hard to do with production quality (for a Maya plugin), but I see some tricks that may help.
What I would try first is to take all the edges and extrude a circle along each of them. You can modulate the circle size to make it slightly bigger at the ends. Then do a Boolean 'OR' (union) between all those meshes and apply some mesh smoothing at the end.
This approach may give you similar results, but you'll need to be careful at mesh intersections - they can get ugly and need special treatment.