I want to calculate, in Python, the normal vectors to the chessboard, as shown in blue in the image below.
(image: chessboard with the desired normal vectors in blue)
Ideally each vector should be about 2x the width of the square it is in.
So far I've labelled the four corners of my chessboard and used cv2.findHomography to get the homography matrix that maps the original image onto a flat, top-down image.
(images: the "normal" view and the "flat" view)
I was hoping that I could multiply the flat-space normal vector, (0, 0, 1), by the inverse homography matrix to get the normal vectors in the original space, but this hasn't worked at all.
The paper I am following suggests an approach (the relevant equations are in two images in the original post), but I cannot see how to compute those values from the four corners of the chessboard. Is there something in cv2 that can help?
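For reference, here is a minimal sketch of the setup described above; the corner pixel coordinates and the flat-image size are placeholder values:

```python
import cv2
import numpy as np

# Placeholder pixel coordinates of the four labelled chessboard corners
# (top-left, top-right, bottom-right, bottom-left) in the original image.
img_corners = np.array([[412, 158], [1041, 173], [1178, 693], [258, 672]], dtype=np.float32)

# The same corners in the "flat" target image, here an 800x800 square.
flat_corners = np.array([[0, 0], [800, 0], [800, 800], [0, 800]], dtype=np.float32)

# Homography mapping the original image onto the flat image.
H, _ = cv2.findHomography(img_corners, flat_corners)

# The attempted step from the question: push (0, 0, 1) through the inverse
# homography. Note that H is a 2D-to-2D planar mapping in homogeneous
# coordinates, so (0, 0, 1) is just the homogeneous flat-space origin; mapping
# it back yields an image point, not a 3D normal direction, which is presumably
# why this did not work.
n_flat = np.array([0.0, 0.0, 1.0])
p_back = np.linalg.inv(H) @ n_flat
p_back /= p_back[2]   # dehomogenise
print(p_back)
```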
How can I programmatically dilate the contour of a given shape? Currently, I'm able to find and draw the contour of a shape using Python + OpenCV, but I want to draw an offset of that contour in a raster image (like in the image below).
By offset, I mean expanding or shrinking the original shape, not just scaling it. The following is an example of that result:
Based on your sketch, you don't seem to be interested in just scaling up the original contour, so I guess the simplest way to approach this is morphological region growing: with dilation you can expand the region of your original contour in the image domain and then recover the newly created contour. A rough sketch of this is shown after the link below.
Have a look at this implementation: Scale contours up/grow outward
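Here is a minimal sketch of the dilation approach, assuming the shape is a white blob on a black background ("shape.png" is a placeholder filename, and the kernel size controls the offset distance):

```python
import cv2
import numpy as np

# Assumes a binary image: white shape on a black background.
mask = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

# Grow the region by roughly `offset` pixels in every direction
# (use cv2.erode instead to shrink the shape).
offset = 15
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (2 * offset + 1, 2 * offset + 1))
dilated = cv2.dilate(mask, kernel)

# Recover the newly created, offset contour from the dilated mask
# (OpenCV 4.x returns (contours, hierarchy)).
contours, _ = cv2.findContours(dilated, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

# Draw the offset contour on top of the original mask for visualisation.
vis = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
cv2.drawContours(vis, contours, -1, (0, 0, 255), 2)
cv2.imwrite("offset_contour.png", vis)
```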
Using Python, I want to check whether a circle has the same radius at all 360 degrees. I have segmented a circular shape from an image and now want to check whether its radius is the same in every direction around the full 360 degrees.
Can someone kindly explain how to do that?
In Python, OpenCV is a strong choice for an imaging library. You'd want to fit a contour to the circle and use moments to check its properties. You can fit an ellipse and see how closely the major and minor axes of the ellipse match; a short sketch is shown after the links below.
See this tutorial/docs for details.
Also, this question should help.
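Here is a minimal sketch along those lines, assuming the segmented shape is a white blob in a binary mask ("circle_mask.png" is a placeholder filename):

```python
import cv2
import numpy as np

# Assumes a binary mask with the segmented shape in white.
mask = cv2.imread("circle_mask.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
cnt = max(contours, key=cv2.contourArea)   # largest blob

# Fit an ellipse and compare its axes: for a perfect circle the ratio is 1.0.
(cx, cy), (d1, d2), angle = cv2.fitEllipse(cnt)
print("axis ratio:", min(d1, d2) / max(d1, d2))

# Alternative check: spread of the distances from the centroid to every
# contour point (a constant distance means a constant radius).
M = cv2.moments(cnt)
centre = np.array([M["m10"] / M["m00"], M["m01"] / M["m00"]])
radii = np.linalg.norm(cnt.reshape(-1, 2) - centre, axis=1)
print("radius min/max/std:", radii.min(), radii.max(), radii.std())
```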
I have a set of images that I need to use for training a network. I want to simulate a lens flare effect and chromatic aberration on the images. I have tried to find a function for this in OpenCV, scikit-image, and other Python image libraries, but with no luck. How can I simulate these effects on my images? A rough idea or code would be useful. The images are in JPEG format.
It depends on what kind of lens flare you are trying to achieve. One option: create, e.g., a hexagon mask and overlay multiple partially transparent instances of it between the start and end points of a flare axis. The hexagons should be at least slightly bigger "in the sun's direction" and spaced more or less equally apart. The user should be able to click the start and end points of that axis in the picture and, e.g., use the mouse to rotate the axis, zoom it in/out, and define the number of flare elements to be added.
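A rough, non-interactive sketch of that hexagon-overlay idea (the filename, element count, sizes, and transparency are all placeholder choices; the mouse interaction is left out):

```python
import cv2
import numpy as np

def add_hexagon_flare(img, start, end, n_elements=6, alpha=0.25):
    """Overlay translucent hexagons along the axis from `start` to `end`
    (pixel coordinates), growing slightly towards `end` (the sun)."""
    out = img.astype(np.float32)
    start, end = np.array(start, float), np.array(end, float)
    for i in range(n_elements):
        t = i / max(n_elements - 1, 1)          # position along the flare axis
        centre = (1 - t) * start + t * end      # roughly equally spaced centres
        radius = 15 + 25 * t                    # slightly bigger towards the sun
        # Build a filled hexagon around the centre.
        angles = np.deg2rad(np.arange(0, 360, 60))
        pts = np.stack([centre[0] + radius * np.cos(angles),
                        centre[1] + radius * np.sin(angles)], axis=1).astype(np.int32)
        overlay = np.zeros(img.shape, dtype=np.uint8)
        cv2.fillConvexPoly(overlay, pts, (255, 255, 255))
        # Partially transparent blend of the hexagon onto the image.
        out += alpha * overlay.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)

img = cv2.imread("input.jpg")                   # placeholder filename
flared = add_hexagon_flare(img, start=(120, 80), end=(520, 400))
cv2.imwrite("flare.jpg", flared)
```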
For chromatic aberration, I would split the RGB components, apply slightly different scaling factors, and merge back. Depending on whether you want to simulate a flint or crown effect, the factors will be increasing or decreasing.
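A minimal sketch of that channel-split-and-rescale idea (the scale factors and filenames are placeholder values):

```python
import cv2
import numpy as np

def chromatic_aberration(img, r_scale=1.01, b_scale=0.99):
    """Scale the red and blue channels by slightly different factors and merge
    back, which shifts the channels apart towards the image borders. Swapping
    which factor is above 1 flips the direction of the fringing."""
    h, w = img.shape[:2]
    b, g, r = cv2.split(img)

    def rescale(channel, s):
        # Resize the channel, then centre-crop or centre-pad back to (h, w).
        ch = cv2.resize(channel, None, fx=s, fy=s, interpolation=cv2.INTER_LINEAR)
        nh, nw = ch.shape[:2]
        if s >= 1.0:
            y0, x0 = (nh - h) // 2, (nw - w) // 2
            return ch[y0:y0 + h, x0:x0 + w]
        out = np.zeros_like(channel)
        y0, x0 = (h - nh) // 2, (w - nw) // 2
        out[y0:y0 + nh, x0:x0 + nw] = ch
        return out

    return cv2.merge([rescale(b, b_scale), g, rescale(r, r_scale)])

img = cv2.imread("input.jpg")                   # placeholder filename
cv2.imwrite("aberrated.jpg", chromatic_aberration(img))
```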
To be concrete: I am given 2D numerical data, shown as line plots below. There are peaks on top of a background average level (with small vibrations). We want to find the pair of values (x1, x2) if a peak drops back down to the average, or just (x1) if the line doesn't come back to the average.
There are thousands of such 2D datasets.
What is the right statistical or machine-learning algorithm to find x1 and x2 above without plotting?
Note that this is not an exact answer.
I seriously have no idea what you are trying to do, but I can suggest a way. Assume that there is only one peak in the graph and that you have all the 2D point data, i.e. (X1, Y1), ..., (Xn, Yn).
Try calculating the differences between the Y values of adjacent points. If you compute Y(n-1) - Y(n), the minimum of these differences marks the steepest rise, i.e. the starting point of the peak; similarly, the maximum marks where the peak is dropping back down.
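A minimal sketch of that difference-based idea in NumPy (the data here is synthetic, just to make the example self-contained):

```python
import numpy as np

# Synthetic stand-in for one curve: small vibrations around an average level
# plus a single peak (replace x, y with the real data).
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 500)
y = rng.normal(0.0, 0.02, x.size)
y[230:270] += np.hanning(40) * 3.0

diffs = y[:-1] - y[1:]            # Y(n-1) - Y(n) for adjacent points

i_start = np.argmin(diffs)        # steepest rise -> start of the peak -> x1
i_end = np.argmax(diffs)          # steepest drop -> end of the peak   -> x2
print("x1 =", x[i_start], "x2 =", x[i_end])

# If the line never comes back down to the average, the largest positive
# difference will only be noise-sized; comparing it against the background
# noise level is one way to decide whether to report x2 at all.
```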
I'm looking for an algorithm (pseudocode is OK, or any readable source code; the implementation will probably be in Python) to recognise a partial beam / perspective rectangle in a low-resolution image and give its angle from horizontal. The algorithm should run in real time at 30 Hz, so preferably something analytic or a convex optimization. And, most importantly, it should be robust. The beam can be in any orientation and its width can vary, but the angle should be estimated to sub-degree accuracy. Robustness is the priority.
For now let's assume at least some of the beam is visible, but in the application the beam could be partially occluded.
Here is a sample of example data with true angle 78.8253 degrees (it's binary data with noise).
I've tried a Gaussian blur, then a threshold, and then OLS in u-v image coordinates. This is a beginning, but not the way to go: the cut-off corners and edges bias the angle.
Does anyone know of a good, robust, and fast way to do this? Thanks.
Edit 1: Hough Transform
Applying a Hough Transform after the Gaussian blur + threshold and averaging the top 5 peaks in the transform is a big improvement! But it still seems biased by the asymmetry in the cut-off. Are there any candidates that take this cut-away into account?
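A minimal sketch of that blur + threshold + Hough pipeline, assuming a single-channel image with the beam brighter than the background ("beam.png", the kernel size, and the Hough parameters are placeholder choices; here the transform is run on the Canny edges of the thresholded image, which is one common variant):

```python
import cv2
import numpy as np

img = cv2.imread("beam.png", cv2.IMREAD_GRAYSCALE)   # placeholder filename

blur = cv2.GaussianBlur(img, (5, 5), 0)
_, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Standard Hough transform on the edges of the thresholded beam:
# rho step 1 px, theta step 0.1 degrees, vote threshold 50.
edges = cv2.Canny(binary, 50, 150)
lines = cv2.HoughLines(edges, 1, np.deg2rad(0.1), 50)

# OpenCV returns the lines sorted by accumulator votes (strongest first);
# average the angles of the top 5 peaks. Note that this naive average breaks
# near the 0/180 degree wrap-around.
thetas = lines[:5, 0, 1]                  # theta of the line normal, in radians
angles = 90.0 - np.rad2deg(thetas)        # line angle measured from horizontal
print("estimated angle:", angles.mean())
```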