I have a lot of pictures of a PCB taken with an X-ray camera. I want to measure the amount of solder in the holes. I thought about using Python for that task, as I am most familiar with it, but I have no idea where to start. I looked at OpenCV and scikit-image but am a little bit lost about how to approach my problem.
I attached a detail from one image where you can see a single joint. Every original picture has 8 of those joints.
I thought about this workflow:
find the walls and upper/lower boundaries of the hole
fit a rectangle or even better a cylinder inside the boundaries
measure the area of the rectangle/cylinder
find the solder in the hole
fit a rectangle or cylinder in the solder
measure the area
I am already stuck at the first part of the job. My problem is that the edges are very indistinct. I tried some preprocessing (changing the contrast and sharpness of the image), but it didn't help that much.
Does anyone have a tip on where I can start reading about this type of feature detection?
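As a starting point, here is a minimal sketch (not a complete solution) of a common threshold-and-measure approach in OpenCV. The file name `joint.png`, the smoothing kernel size, and the noise cutoff are assumptions, and the threshold polarity may need inverting depending on whether solder shows up dark or bright in the X-ray:

```python
import cv2

# Assumed file name for the attached single-joint detail image
img = cv2.imread("joint.png", cv2.IMREAD_GRAYSCALE)

# Light smoothing so the indistinct edges produce fewer spurious contours
blur = cv2.GaussianBlur(img, (5, 5), 0)

# Otsu picks a global threshold automatically; use THRESH_BINARY_INV instead
# if the solder is darker than the surrounding material
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Measure the area of each connected blob in pixels
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    if area > 50:  # arbitrary cutoff to ignore tiny noise blobs
        x, y, w, h = cv2.boundingRect(c)
        print(f"blob at ({x},{y}), size {w}x{h}: {area:.0f} px")
```

With the hole boundary and the solder region each segmented this way, the ratio of their pixel areas would approximate the fill without needing an explicit rectangle or cylinder fit.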
I'm beginning to work on a project with OpenCV (in Python), and I'm trying to figure out the best way to tackle the problem I'm facing. I'm trying to get the area of an item in an image... but the surface area I'm looking for is on a 3D object.
So I found this while searching for calculating the area of a contour, but as you can see, this is only for a 2D object. For example, how could I find the area of the red question marks on the sphere in this image? Or the size of the rash on this baby's face (which is most certainly not 2-dimensional)? Is there a way to find the depth of the surface in the image, then use the high color gradient difference to find contours and calculate the difference based on the depths?
I found this deep learning paper (and associated PyTorch library), but was wondering if there was another way that I was missing...
Thanks to all for any ideas / replies.
I need some advice on a computer vision project that I am working on. I am trying to extract a corner in the image below. The edge I'm searching for is marked yellow in the right image. The edge detection keeps failing because the edge is too blurred in the middle.
I run this process with OpenCV and Python.
I started by removing the white dots with a threshold method. After that, a big median blur (kernel size 31-53). After that, an adaptive threshold method to separate the areas left and right of the corner. But the separation is always bad because the edge is barely visible.
Is there some other way to extract this edge, or do I have to try with a better camera?
Thanks for your help.
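For reference, a rough sketch of the pipeline described above. The file name, the brightness cutoff for the white dots, the median kernel size, and the adaptive block size are all placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)  # assumed file name

# 1. Suppress the bright white dots by clamping them to the background level
dots = img > 200                      # assumed brightness cutoff for the dots
cleaned = img.copy()
cleaned[dots] = np.median(img)

# 2. Large median blur to wash out texture (kernel size must be odd)
blurred = cv2.medianBlur(cleaned, 41)

# 3. Adaptive threshold to split the regions on either side of the soft edge
#    (maxValue=255, mean method, blockSize=101, C=2)
mask = cv2.adaptiveThreshold(blurred, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                             cv2.THRESH_BINARY, 101, 2)
```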
First, do you have another dataset? It is hard to discuss this from just one input image.
A couple of things you can do:
The best option is to change the camera or imaging technique so you get a better, clearer edge.
When that is hard to do, try model-based fitting, provided your images are repeatable across the whole set. I can observe some circles on the right and two sharp straight-line edges on the left; your wanted soft red edge lies in the middle of those two apparent features. That can be treated as a model: because those two regions are easy to detect, you can then apply other techniques only to the pixels in between them. Those techniques include, but are not limited to, histogram equalization, high-pass filtering, or even a wavelet transform.
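A hedged sketch of this "enhance only the in-between region" idea. The file name and the ROI coordinates are placeholders; in practice the ROI would come from detecting the circles on the right and the straight edges on the left:

```python
import cv2

img = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)

# Assumed ROI lying between the two easily detected features
x0, x1, y0, y1 = 200, 400, 100, 300
roi = img[y0:y1, x0:x1]

# Local contrast enhancement (CLAHE) often brings out a faint edge better
# than global histogram equalization
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(roi)

# Simple high-pass: subtract a heavily blurred copy to emphasise the soft edge
lowpass = cv2.GaussianBlur(enhanced, (0, 0), 15)
highpass = cv2.subtract(enhanced, lowpass)
```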
The worst way is to use parameter fitting. What you want to segment is neither a strong edge nor a smooth plane, so you could tweak the Canny edge detector to also pick up edges that are not so strong. I do not recommend this method; only try it if you really have no other choice and no other images.
The last way is to use a deep-learning-based method to train a model that automatically segments this part. This might work, but it needs hundreds if not thousands of images and labels.
Regards
Shenghai Yuan
Here's what I'm trying to mimic: https://www.youtube.com/watch?v=exXD6wJLJ6s
This guy is separating the video input into a grid of many squares and analyzing each region to know what's going on in that specific region.
For me, it seemed like he was finding the dominant color of each grid cell. So I tried getting dominant colors of an image with the k-means method and it worked well. (I'm trying to 'divide and conquer' the problem by addressing it from the smallest part.)
However, I have no idea how to get the dominant color for each grid cell of an image. I think I should iterate through each grid square, but how?
Furthermore, it seems almost impossible for me to do the above task on a video. Can the same algorithm (detecting the dominant color in an image region) also apply to real-time detection on a video? Wouldn't it be too sluggish?
I'm really new to OpenCV and I'm basically just following whatever tutorials that seem to be related to my project.
To sum up: I got the dominant color from the image following the tutorial below, and now I want to do this for each grid cell of an image/video.
https://www.pyimagesearch.com/2014/05/26/opencv-python-k-means-color-clustering/
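Here is a minimal sketch of iterating over grid cells and clustering each one. The linked tutorial uses scikit-learn's KMeans; this version uses cv2.kmeans so it stays OpenCV-only, but either works. The file name, the 4x4 grid, and k=3 are assumptions:

```python
import cv2
import numpy as np

def dominant_color(cell, k=3):
    """Return the BGR colour of the largest k-means cluster in one grid cell."""
    samples = cell.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 10, 1.0)
    _, labels, centers = cv2.kmeans(samples, k, None, criteria, 3,
                                    cv2.KMEANS_RANDOM_CENTERS)
    counts = np.bincount(labels.flatten())
    return centers[np.argmax(counts)].astype(np.uint8)

img = cv2.imread("frame.png")        # or a frame grabbed with cv2.VideoCapture
rows, cols = 4, 4                    # assumed grid layout
h, w = img.shape[:2]
for r in range(rows):
    for c in range(cols):
        cell = img[r * h // rows:(r + 1) * h // rows,
                   c * w // cols:(c + 1) * w // cols]
        print(r, c, dominant_color(cell))
```

For real-time video, running k-means on every full-resolution cell of every frame will probably be sluggish; downscaling each cell (or the whole frame) before clustering, processing every Nth frame, or replacing k-means with a per-cell mean or median colour are the usual ways to keep it fast.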
This is what I've done so far:
I drew the grid on MSPaint:
So I recently took a few hundred photographs of the solar eclipse using a solar filter. All the photos contain a close to pure black background with a very bright near-white solar crescent, usually somewhere near the center of the photograph. All the photos are taken at the same zoom.
What I want to do is programmatically crop the sun out of each of the photos so they can be overlaid programmatically onto a canvas in the correct solar positions they would have appeared in the sky, according to the exif data.
The first step would be to programmatically identify the center of each crescent. The radius is constant, so that is one less step that needs to be done programmatically. I imagine for earlier photos where the sun is nearly complete this will be easier, and accuracy will decrease as the crescent gets smaller.
I wanted to use Python for this, but am open to other suggestions if there is a better tool. Can anyone point me in a good direction to get started on this project?
Thanks
OpenCV has a Hough Circle Transform that can detect circles and arcs.
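A minimal sketch of cv2.HoughCircles for this, assuming the radius is roughly known (it is stated to be constant across the photos); the file name and the radius band are placeholders:

```python
import cv2
import numpy as np

img = cv2.imread("eclipse.jpg", cv2.IMREAD_GRAYSCALE)  # assumed file name
blur = cv2.GaussianBlur(img, (9, 9), 2)

# minDist is large because there is only one sun per photo; a tight
# minRadius/maxRadius band around the known radius helps on thin crescents
circles = cv2.HoughCircles(blur, cv2.HOUGH_GRADIENT, dp=1, minDist=1000,
                           param1=100, param2=20,
                           minRadius=180, maxRadius=220)  # assumed radius band
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)
    print("sun centre:", (x, y), "radius:", r)
```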
There is an old discussion here:
I am trying to find a repeatable process to find the coordinates of grid intersection points from an image. The image is a montage of many smaller images. Each 'tile' of the montage has inconsistent contrast, so my naive methods are failing (the tile boundary is being selected). A small example:
I have had minor advances from the ideas explained in How to remove convexity defects in a Sudoku square? and Grid detection in matlab
However, the grid lines are NOT necessarily straight over the entire image, so I cannot approximate them as a grid of straight lines. I am familiar with ImageJ and Gatan DigitalMicrograph software, if anyone knows of a simple solution there. Otherwise MATLAB or Python/OpenCV would be useful.
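Not from this thread, but in the spirit of the linked Sudoku question, one common trick is to extract near-horizontal and near-vertical line masks with long, thin morphological kernels and intersect them; unlike a straight-line Hough fit, it tolerates gently curved grid lines. The file name, kernel lengths, and threshold block size are assumptions:

```python
import cv2

img = cv2.imread("montage.png", cv2.IMREAD_GRAYSCALE)

# Dark grid lines -> white in the binary mask; swap to THRESH_BINARY if the
# grid is brighter than the background (blockSize=51, C=5)
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY_INV, 51, 5)

# Keep only structures that are long and thin in each direction
horiz_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (41, 1))
vert_kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (1, 41))
horizontal = cv2.morphologyEx(binary, cv2.MORPH_OPEN, horiz_kernel)
vertical = cv2.morphologyEx(binary, cv2.MORPH_OPEN, vert_kernel)

# Where both masks overlap is (approximately) a grid intersection
crossings = cv2.bitwise_and(horizontal, vertical)
n, _, _, centroids = cv2.connectedComponentsWithStats(crossings)
print(centroids[1:])   # intersection coordinates; entry 0 is the background
```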
My first idea: write a script to chop your image into tiles, and apply some contrast normalization such as CLAHE to each one. Then reassemble the tiles using the Stitching plugin with the Linear Blending option on, to avoid the sharp tile lines. After that, segmenting the grid will become much easier; see ImageJ's Segmentation page for an introduction.
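If an OpenCV route is preferred, a rough Python equivalent of the per-tile contrast normalization step might look like the sketch below (the seamless reassembly is left to the Stitching plugin's Linear Blending mentioned above, so this version will still show seams). The file name and the 256-pixel tile size are assumptions:

```python
import cv2

img = cv2.imread("montage.png", cv2.IMREAD_GRAYSCALE)
tile = 256                         # assumed tile size of the montage
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))

# Normalize contrast tile by tile so the inconsistent tiles match better
out = img.copy()
for y in range(0, img.shape[0], tile):
    for x in range(0, img.shape[1], tile):
        out[y:y + tile, x:x + tile] = clahe.apply(img[y:y + tile, x:x + tile])

cv2.imwrite("montage_clahe.png", out)
```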
This is the kind of image analysis problem that is better discussed on the ImageJ Forum where people can throw ideas and script snippets back and forth, to converge on a solution.