How to draw shapes at non-integer locations in Python OpenCV? [duplicate] - python

I am plotting some geometry on an image. The problem is that the coordinates come out as floats, so I cannot plot them directly. As a workaround I am using floor() to truncate each coordinate to the nearest integer. This works fine in some cases, except that it shifts my plot a little bit.
x = 9.7
x' = floor(x) = 9   # the plot is now at 9 and not at 9.7 as desired (this is the 'shift')
But in cases where I need to draw multiple shapes at equal distances I am facing a problem: the shift introduced by floor() is not uniform for each shape.
It is just like quantization noise in digital communication.
Is there any way I can get around this problem?

This problem is actually quite a common one in computer graphics. Rasterisation is the problem domain you are currently struggling with. You may find Bresenham's line algorithm a good introduction to the topic. Here is a nice interactive site with a few different rasterisation algorithms.
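Worth noting for the OpenCV case specifically: the built-in drawing primitives can already rasterise at sub-pixel positions if you pass fixed-point coordinates together with the shift argument (coordinates are interpreted as integers with shift fractional bits). A minimal sketch:

import cv2
import numpy as np

img = np.zeros((120, 120, 3), dtype=np.uint8)

# 4 fractional bits -> 1/16-pixel resolution
shift = 4
x, y, r = 9.7, 50.25, 20.5
cv2.circle(img,
           (int(round(x * 2**shift)), int(round(y * 2**shift))),
           int(round(r * 2**shift)),
           (0, 255, 0), 1, cv2.LINE_AA, shift)

Combined with anti-aliased rendering (cv2.LINE_AA), this avoids the non-uniform shift without any manual rounding.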
Hope that helps!

Related

Calculating the normal between a straight line and a curved surface in three dimensions - python

For the time being I am asking for conceptual answers rather than code, as I have been racking my brain over how to do this and I am fairly new to Python (I only properly started about 3 days ago, but I have reached the point of being able to define the straight lines as classes and calculate dot products and the like using numpy arrays).
The problem I am trying to solve is as follows. I have a bunch of objects that I will describe as vectors in my code, since they are straight rigid cylinders. These cylinders interact with a non-smooth curved surface in three dimensions, which I think I can define as a point cloud, although I am still not entirely sure how I will go about this.
I need to calculate the normal to the surface at the point where the vector describing the cylinder intercepts the surface. I assume I can just find where cylinder[x,y,z] - surface[x,y,z] = 0, so I think that bit will be OK.
My main issue is that I am struggling to understand how best to create the 3D point cloud from a 2D image stack, and then how to calculate the normal. Once I have the normal, I can envision how to calculate Euler angles and so on.
Any help would be appreciated.
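Not a full answer, but for the normal itself one common technique, sketched here as an assumption about the setup rather than a confirmed recipe: once the surface is a point cloud, estimate the normal at the interception point by PCA over its k nearest neighbours; the normal is the direction of least variance.

import numpy as np

def estimate_normal(cloud, point, k=20):
    # cloud: (N, 3) array of surface points; point: (3,) query location
    d = np.linalg.norm(cloud - point, axis=1)
    neighbours = cloud[np.argsort(d)[:k]]
    centred = neighbours - neighbours.mean(axis=0)
    # The right-singular vector with the smallest singular value is the
    # direction of least variance, i.e. the estimated surface normal
    _, _, vt = np.linalg.svd(centred)
    return vt[-1]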

Get 3D coordinates from two 2D frames

I couldn't find a proper answer to my problem on the web, so I'll ask it here. Let's say we're given two 2D photos of the same place taken from slightly different angles. I've chosen a set of points (edge detection) and found the correspondences between them (which point on one photo is which on the other). Now I need to somehow find the world coordinates of these points in 3D.
For the last 5 hours I've read a lot about it, but I still can't understand what steps I should follow. I've tried to estimate the motion of the camera using the function recoverPose applied to an essential matrix and the two sets of points on each frame, but I can't understand what knowing the rotation and translation matrices that recoverPose returned actually gives me. What should I do in order to achieve my goal?
I also know the calibration matrix of my camera (I use the KITTI dataset). I've read the OpenCV documentation but still don't understand.
It's monocular vision.
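Hedging this as the usual monocular pipeline rather than the only one: recoverPose gives you the second camera's pose relative to the first, so you can treat camera 1 as [I|0], build camera 2's projection matrix from R and t, and triangulate the correspondences. The reconstruction is only up to scale, because the essential matrix fixes t only up to a scale factor. A sketch, where pts1 and pts2 are the (N, 2) float32 correspondence arrays and K is the calibration matrix:

import cv2
import numpy as np

E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

# Camera 1 sits at the origin; camera 2 at the recovered pose
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])

# Triangulate to homogeneous 4-vectors, then dehomogenise
pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
pts3d = (pts4d[:3] / pts4d[3]).T  # world coordinates, up to scale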

calculate particle size distribution from AFM measurements

I am trying to obtain a radius and diameter distribution from some AFM (Atomic force microscopy) measurements. So far I am trying out Gwyddion, ImageJ and different workflows in Matlab.
At the moment the best results I have found come from using Gwyddion: take the phase image, high-pass filter it, and then try an edge detection with 'Laplacian of Gaussian'. The result is shown in figure 3. However, this image is still too noisy and doesn't really capture the edges of all the particles (some are merged together, others do not have a clear perimeter).
In the end I need an image which segments each of the spherical particles which I can use for blob detection/analysis to obtain size/radius information.
Can anyone recommend a different method?
I would definitely try granulometry; it was designed for something really similar. There is a good explanation of granulometry here, starting at page 158.
The granulometry performs consecutive openings of increasing size that erase the different patterns according to their dimensions. The bigger the pattern, the later it is erased. It will give you a curve representing the distribution of pattern dimensions in your image, so exactly what you want.
However, it will not give you any information about positions inside the image. If you want a rough model of the blobs present in your image, you can take a look at the Ultimate Opening.
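If you want to prototype the idea outside Gwyddion, a hedged sketch of a pattern spectrum with OpenCV: open the binarised image with discs of increasing radius and record how much foreground area each opening removes; peaks in the resulting curve mark the dominant particle radii.

import cv2
import numpy as np

def granulometry(binary, max_radius=30):
    # binary: uint8 image, particles as foreground (255)
    remaining = [int(binary.sum())]
    for r in range(1, max_radius + 1):
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (2 * r + 1, 2 * r + 1))
        opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, se)
        remaining.append(int(opened.sum()))
    # Area removed between consecutive radii = size distribution
    return -np.diff(remaining)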
Maybe you can use Avizo; it's powerful software for dealing with image problems, especially 3D data (CT).

Pipeline to create Voronoi Meshes

I would like to implement a Maya plugin (this question is independent of Maya) to create 3D Voronoi patterns, something like the examples in my reference images.
I just know that I have to start from point sampling (I implemented the adaptive Poisson sampling algorithm described in this paper).
I thought that, from those points, I should create the 3D wireframe of the mesh by applying Voronoi, but the result was something different from what I expected.
Here are a few examples of what I get when handling the result from scipy.spatial.Voronoi like this (as suggested here):
from scipy.spatial import Voronoi

vor = Voronoi(points)
for vpair in vor.ridge_vertices:
    # Skip ridges with a vertex at infinity (marked by index -1)
    if all(x >= 0 for x in vpair):
        for i in range(len(vpair) - 1):
            v0 = vor.vertices[vpair[i]]
            v1 = vor.vertices[vpair[i + 1]]
            create_line(v0.tolist(), v1.tolist())  # Maya-side helper
The grey vertices are the sampled points (the original shape was a simple sphere):
Here is a more complex shape (an arm)
Am I missing something? Can anyone suggest the proper pipeline and algorithms I have to implement to create such patterns?
I saw your question when you posted it but didn't have a real answer for you; however, as you still haven't received any response, I'll at least write down some ideas. Unfortunately it's still not a full solution to your problem.
It seems to me you're mixing a few separate problems in this question, so it would help to break it down into a few pieces:
Voronoi diagram:
The diagram is by definition infinite, so when you draw it directly you should expect a mess similar to the one in your second image; this seems fine. I don't know exactly how SciPy handles this, but the implementation I've used flagged some edge ends as 'infinite' and provided the edge direction, so I could clip them at some distance myself. You'll need to check the exact data you get from SciPy.
In the 3D world you’ll almost always want to remove such infinite areas to get any meaningful rendering, or at least remove the area that contains your camera.
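For SciPy specifically: scipy.spatial.Voronoi marks a vertex at infinity with the index -1 in ridge_vertices, and ridge_points tells you which pair of input sites each ridge separates, which is enough to reconstruct a direction and clip the ridge yourself. A 2D sketch of that clipping (the bookkeeping in 3D is analogous, with ridges as polygons):

import numpy as np
from scipy.spatial import Voronoi

vor = Voronoi(sites)                      # sites: (N, 2) input points
center = vor.points.mean(axis=0)
clipped = []
for (p1, p2), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 not in ridge:
        continue                          # already a finite edge
    finite = vor.vertices[[v for v in ridge if v >= 0][0]]
    # The ridge is perpendicular to the segment between the two sites
    tangent = vor.points[p2] - vor.points[p1]
    tangent /= np.linalg.norm(tangent)
    normal = np.array([-tangent[1], tangent[0]])
    # Orient it away from the diagram centre, then clip at a fixed length
    midpoint = (vor.points[p1] + vor.points[p2]) / 2
    normal *= np.sign(np.dot(midpoint - center, normal))
    clipped.append((finite, finite + normal * 10.0))  # arbitrary length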
Points generation:
The Poisson disc is fine as sample data or for early R&D, but it's also the most boring one :). You'll need more ways to generate input points.
I tried to imagine the input needed for your ball-like example and I came up with something like this:
Create two spheres of points, with the same center but different radii (see the sketch after this list).
When you create a Voronoi diagram out of them and remove the infinite areas, you should end up with something like a football.
If you create both spheres randomly you'll get very irregular boundaries of the 'ball', but if you scale the points of one sphere to use them for the second one, you should get a regular, ball-like mesh. You can also use similar points but add some random offset to control the level of surface irregularity.
Take your computed diagram and, for each edge, create a few points along it - this will give you small cells building up the edges of the bigger cells. Play with random offsets again. Try to ignore edges that don't touch any infinite region to get a result similar to your image.
Get the points from both stages and compute the diagram once more.
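A numpy sketch of the two-sphere input from the first step (counts, radii, and jitter are arbitrary placeholders):

import numpy as np

rng = np.random.default_rng(0)

def sphere_points(n, radius, jitter=0.0):
    # Roughly uniform points on a sphere: normalised Gaussian directions,
    # plus an optional random offset to control surface irregularity
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return v * radius + rng.normal(scale=jitter, size=(n, 3))

outer = sphere_points(400, 1.0, jitter=0.02)
inner = outer * 0.8   # scaled copy of the same points -> regular cells
points = np.vstack([outer, inner])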
Mesh generation:
Up to now it didn’t look like your target images. In fact it may be really hard to do it with production quality (for a Maya plugin) but I see some tricks that may help.
What I would try first would be to get all my edges and extrude some circle along them. You may modulate circle size to make it slightly bigger at the ends. Then do Boolean ‘OR’ between all those meshes and some Mesh Smooth at the end.
This way may give you similar results but you’ll need to be careful at mesh intersections, they can get ugly and need some special treatment.
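Outside Maya, that extrude-and-union trick can be prototyped with the trimesh library (used here purely as a stand-in for Maya's API; the boolean step requires one of trimesh's optional engines, e.g. Blender or manifold3d, to be installed):

import numpy as np
import trimesh

# Hypothetical edge list: pairs of 3D endpoints from the clipped diagram
edges = [(np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])),
         (np.array([1.0, 0.0, 0.0]), np.array([1.0, 1.0, 0.0]))]

tubes = [trimesh.creation.cylinder(radius=0.05, segment=(a, b))
         for a, b in edges]
wire = trimesh.boolean.union(tubes)         # the Boolean 'OR' step
trimesh.smoothing.filter_laplacian(wire)    # the final smoothing pass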

Generating 3D (x,y,z) Point Clouds from Two 2D Stereo Images

I've been trying to read through the Stack Overflow questions about generating point clouds ((x,y,z) coordinates) from a left and right stereo image pair.
I haven't come to any definite solution, and I'm asking the community here for some help.
Problem statement: given two stereo images, generate a 3D (x,y,z) Cartesian point cloud from them, in a way that scales to a large set (thousands) of stereo image pairs.
My programming experience is mostly in MATLAB, but I've dabbled in Python; my C++ is limited, but I may be able to work in that as well.
Speed is a factor here, so the idea is to find a quick method of successively going through these pairs and generating the point clouds.
NOTE: I am not asking for the BEST method, to avoid comparative answers; I'm just asking for working solutions.
Thank you very much!
Edit: After being recommended the Stereo Block Matching implementation in OpenCV, I may steer away from this method, as there is wide variation in the texture of the topographical pictures I'm dealing with.
What you probably want is called "Stereo Block Matching". If you don't feel like writing it yourself, OpenCV has an implementation ready to go (even a CUDA accelerated version).
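For reference, a minimal OpenCV sketch of that approach, assuming already-rectified input images and a disparity-to-depth matrix Q obtained from cv2.stereoRectify (filenames and matcher parameters are placeholders):

import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matcher; numDisparities must be a multiple of 16
stereo = cv2.StereoBM_create(numDisparities=96, blockSize=15)
# compute() returns fixed-point disparities with 4 fractional bits
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

points3d = cv2.reprojectImageTo3D(disparity, Q)
mask = disparity > disparity.min()
cloud = points3d[mask]          # (N, 3) point cloud for this pair

Since each pair is processed independently, looping this over thousands of pairs (or switching to the CUDA-accelerated variant) is straightforward.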
