So I recently took a few hundred photographs of the solar eclipse using a solar filter. All the photos contain a near-pure-black background with a very bright, near-white solar crescent, usually somewhere near the center of the photograph. All the photos were taken at the same zoom.
What I want to do is programmatically crop the sun out of each of the photos so they can be overlaid onto a canvas in the correct solar positions they would have appeared in the sky, according to the EXIF data.
The first step would be to programmatically identify the center of each crescent. The radius is constant, so that is one less step that needs to be done programmatically. I imagine for earlier photos where the sun is nearly complete this will be easier, and accuracy will decrease as the crescent gets smaller.
I wanted to use Python for this, but am open to other suggestions if there is a better tool. Can anyone point me in a good direction to get started on this project?
Thanks
OpenCV has a Hough Circle Transform that can detect circles and arcs.
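For example, a minimal sketch with cv2.HoughCircles, constraining the search to your known radius (the filename and radius value here are placeholders you would replace):

import cv2

# Placeholder filename; the radius is constant across your photos, so measure
# it once from a full-disk photo and hard-code it.
img = cv2.imread('eclipse_0001.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.medianBlur(gray, 5)  # suppress sensor noise in the near-black sky

KNOWN_RADIUS = 120  # px, placeholder value

circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                           minDist=gray.shape[0],   # expect one sun per frame
                           param1=100, param2=20,   # low param2 tolerates thin crescents
                           minRadius=KNOWN_RADIUS - 5,
                           maxRadius=KNOWN_RADIUS + 5)
if circles is not None:
    x, y, r = circles[0][0]
    print(f'sun center: ({x:.1f}, {y:.1f}), radius {r:.1f}')

Because the accumulator only needs edge points voting at the known radius, even a thin crescent's arc can in principle still localize the center of the full disk.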
There is an old discussion here:
I'm currently a little stuck on a problem that sounds easier than it is (at least for me):
Let's say you have satellite images taken from LEO that show an approximately 1000 km wide area (the optical axis of the camera is more or less perpendicular to the ground). There is no additional location data stored in the image, so there is no way of directly extracting the position from which the image was taken.
What I want to do is write a program (in Python) that can find the location the image was taken from by matching it against a map of Earth. This should happen automatically (more or less in real time), for the purpose of calculating the orbit of the satellite taking the images.
I've no problem calculating the orbit, once I have location data (even if it's very noisy), using a technique based on an Extended Kalman Filter.
Matching a satellite image to a map of Earth using just the image data, on the other hand... I honestly don't even know where to start.
I know this is an incredibly unspecific question and not related to a specific problem, but maybe someone could point me in the right direction...
EDIT:
Just to give you an idea how unprocessed images from LEO look, I included a few reasonably good images taken over one orbit of Earth.
The images were taken with a NIR camera. The resolution of the images I included is only 640x480 (by mistake!), but the actual image resolution should be around 4K.
These images have some artifacts because they were taken through a thick glass window on the ISS, so there are some reflections in them.
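For the matching itself, one conventional starting point is local-feature matching between a frame and reference map tiles. A minimal sketch with ORB (the filenames are placeholders, and whether NIR imagery matches a visible-light map well is an untested assumption):

import cv2

frame = cv2.imread('frame.png', cv2.IMREAD_GRAYSCALE)     # placeholder filenames
tile = cv2.imread('map_tile.png', cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(frame, None)  # des1/des2 are None if nothing is found
kp2, des2 = orb.detectAndCompute(tile, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(f'{len(matches)} matches against this tile')
# Many strong matches suggest the frame shows this tile's area; a RANSAC
# homography (cv2.findHomography) would then localize the frame within the tile.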
I want to find the point where a laser light touches a vertical plane, read off the corresponding measurement using OpenCV, and display that measurement. I searched for resources on this use case but unfortunately could not find anything. I will attach an image to clarify my use case.
the image:
If you have a vertical plane and a laser light, and you point the laser at any spot on the vertical plane, then using a camera and OpenCV it should be possible to detect that particular point, read the value on the vertical plane that the laser touched in millimeters, and display the value on the screen.
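A minimal sketch of one common approach, assuming the laser dot is the brightest spot in the frame; PIXELS_PER_MM is a hypothetical calibration constant you would measure once by imaging a ruler or checkerboard on the plane:

import cv2

PIXELS_PER_MM = 4.0  # hypothetical calibration constant, measure for your setup

img = cv2.imread('plane.jpg')  # placeholder filename
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
gray = cv2.GaussianBlur(gray, (9, 9), 0)    # smooth sensor noise so one spot wins
_, _, _, max_loc = cv2.minMaxLoc(gray)      # location of the brightest pixel
x_px, y_px = max_loc
print(f'laser dot at ({x_px / PIXELS_PER_MM:.1f} mm, {y_px / PIXELS_PER_MM:.1f} mm)')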
I am working on a project where I am doing OCR on text on a label. My job is to deskew the image to make it readable with tesseract.
I have been using this approach, which grayscales and thresholds the picture, gets the coordinates of the black pixels, draws a minAreaRect around them, and then corrects the skew by the skew angle of this rectangle. This works on images with a plain background, but not on images with a background, like the presented image. There, it calculates a skew angle of 0.0 and does not rotate the image. (Expected result: 17°)
I suspect this happens because there are black pixels in the background. Because of them, the minAreaRect spans the whole picture, leading to a skew angle of 0.
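For reference, the approach described above looks roughly like this (the filename is a placeholder; note that the minAreaRect angle convention changed around OpenCV 4.5, so the sign handling may need adjusting for your version):

import cv2
import numpy as np

img = cv2.imread('label.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]
coords = np.column_stack(np.where(thresh > 0)).astype(np.float32)  # all foreground pixels
# With background clutter, coords spans the whole frame and the angle comes out 0.
angle = cv2.minAreaRect(coords)[-1]
angle = -(90 + angle) if angle < -45 else -angle
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D((w // 2, h // 2), angle, 1.0)
rotated = cv2.warpAffine(img, M, (w, h), flags=cv2.INTER_CUBIC,
                         borderMode=cv2.BORDER_REPLICATE)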
I tried background removal, but couldn't find a method that works well enough to leave only the label with the text.
Another approach I tried was clustering the pixels with k-means. But even when choosing a good k manually, the cluster with the text still contained parts of the background.
Not to mention that I would still need another method to go through all the clusters and apply some heuristic to determine which cluster is text and which is background, which would cost a lot of runtime.
What is the best way to deskew an image that has background?
You can try deep-learning-based natural scene text detection methods. With these you can get a rotated bounding box for each piece of text. Based on those, compute a rotated bounding rectangle covering all the boxes, then use that rectangle's 4 corners to correct the image.
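A minimal sketch of that last correction step, assuming you already have each detection as a 4x2 array of corner points (whichever detector produced them):

import cv2
import numpy as np

def order_corners(pts):
    # order the 4 points as top-left, top-right, bottom-right, bottom-left
    s = pts.sum(axis=1)
    d = np.diff(pts, axis=1).ravel()  # y - x
    return np.array([pts[np.argmin(s)], pts[np.argmin(d)],
                     pts[np.argmax(s)], pts[np.argmax(d)]], dtype=np.float32)

def correct_with_text_boxes(img, boxes):
    # boxes: list of 4x2 arrays, the rotated text-box corners from any detector
    pts = np.vstack(boxes).astype(np.float32)
    rect = cv2.minAreaRect(pts)                # rotated rect covering all boxes
    corners = order_corners(cv2.boxPoints(rect))
    w, h = int(rect[1][0]), int(rect[1][1])    # may come out swapped; check the angle
    dst = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]], np.float32)
    M = cv2.getPerspectiveTransform(corners, dst)
    return cv2.warpPerspective(img, M, (w, h))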
RRPN_plusplus
Based on the sample image, RRPN_plusplus seems to do quite well on extreme angles.
EAST
Pyimagesearch has a tutorial with the EAST scene text detector, though I'm not sure EAST will do well with extreme angles.
https://www.pyimagesearch.com/2018/08/20/opencv-text-detection-east-text-detector/
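The core of the tutorial's forward pass boils down to something like this (the model file is the frozen EAST graph linked from the tutorial; 'label.jpg' is a placeholder):

import cv2

net = cv2.dnn.readNet('frozen_east_text_detection.pb')
img = cv2.imread('label.jpg')
# EAST requires input dimensions that are multiples of 32
blob = cv2.dnn.blobFromImage(img, 1.0, (320, 320),
                             (123.68, 116.78, 103.94), swapRB=True, crop=False)
net.setInput(blob)
scores, geometry = net.forward(['feature_fusion/Conv_7/Sigmoid',
                                'feature_fusion/concat_3'])
# scores is a text-confidence map; geometry encodes a rotated box per location,
# which the tutorial decodes and filters with non-maximum suppression.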
Image from https://github.com/argman/EAST.
These should help you find more recent and better repos and methods:
https://github.com/topics/scene-text-detection
https://paperswithcode.com/task/scene-text-detection
https://paperswithcode.com/task/curved-text-detection
You could use a fast cross-platform command like
deskew32 -o out1.png -a 20 -f b1 -g c Sdgqm.png
Or, for more complex cases, combine it with dewarping, though that will need a third step, as the auto thresholding is not both upper and lower
dewarping mmrnt.png square.png 0 0
I have a lot of pictures of a PCB taken with an X-ray camera. I want to measure the amount of solder in the holes. I thought about using Python for that task, as I am most familiar with it, but I have no idea where to start. I looked at OpenCV and scikit-image but am a little lost about how to approach my problem.
I attached a detail from one image where you can see a single joint. Every original picture has 8 of those joints.
I thought about this workflow:
find the walls and upper/lower boundaries of the hole
fit a rectangle or even better a cylinder inside the boundaries
measure the area of the rectangle/cylinder
find the solder in the hole
fit a rectangle or cylinder in the solder
measure the area
I am already stuck at the first part of the job... My problem is that the edges are really sketchy. I tried some preprocessing (changing the contrast and sharpness of the image), but it didn't help much.
Does anyone have a tip on where I can start reading about this type of feature detection?
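One hedged starting point for the solder steps of the workflow above, assuming the solder shows up as a distinctly darker or brighter region in the X-ray so a global Otsu threshold can separate it ('joint.png' is a placeholder for one cropped joint):

import cv2

img = cv2.imread('joint.png', cv2.IMREAD_GRAYSCALE)   # placeholder: one cropped joint
img = cv2.GaussianBlur(img, (5, 5), 0)                # denoise before thresholding
# Depending on whether solder is darker or brighter than the board, the mask
# may need inverting (THRESH_BINARY_INV instead of THRESH_BINARY).
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
solder_px = sum(cv2.contourArea(c) for c in contours)
print(f'solder area: {solder_px:.0f} px^2')           # scale with a known mm-per-pixel factor

For the sketchy edges, an edge-preserving denoiser such as cv2.bilateralFilter may work better than contrast and sharpness adjustments.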
I am working on a script that detects the bottom of a cooking pot, given a picture from above at a slightly slanted angle. This is a tough task due to lighting, lack of edges, and the glare of the metal pot.
This is a sample image I am working with:
img = cv2.imread('img.jpg')
gray_img = cv2.cvtColor(img,cv2.COLOR_BGR2GRAY)
The bottom of the pot is visible, but hard to detect.
So far, I was able to produce this image using:
th2 = cv2.adaptiveThreshold(gray_img,255,cv2.ADAPTIVE_THRESH_MEAN_C,cv2.THRESH_BINARY,11,2)
And this image using:
edges = cv2.Canny(img,30,60)
The solution seems intuitive since in both images the base can be detected easily, but I am not able to figure out the implementation.
My intuition tells me that I should cluster white pixels starting from the center of the image, and outline the border of the segmentation generated by the clustering. I am not quite sure how to go about that. Any advice would be greatly appreciated.
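A sketch of that idea, under assumptions: the base covers the image center, and the kernel size and filename are guesses. It grows a region from the center on a cleaned-up threshold image, then traces that region's border:

import cv2
import numpy as np

img = cv2.imread('img.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
th = cv2.adaptiveThreshold(gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                           cv2.THRESH_BINARY, 11, 2)

# Close small gaps so the base becomes one connected blob (kernel size is a guess).
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
closed = cv2.morphologyEx(th, cv2.MORPH_CLOSE, kernel)

h, w = closed.shape
mask = np.zeros((h + 2, w + 2), np.uint8)      # floodFill needs a padded mask
seed = (w // 2, h // 2)                        # assume the base covers the center
cv2.floodFill(closed, mask, seed, 128)
region = mask[1:-1, 1:-1] * 255                # pixels reached from the seed

contours, _ = cv2.findContours(region, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cv2.drawContours(img, contours, -1, (0, 0, 255), 2)   # outline the detected base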
WORK IN PROGRESS
What is your overall goal? Do you want to detect something at the bottom? Count bubbles or something like that? Or do you want to check for defects?
As you already mentioned, it's hard because of the reflections, so you could start by using less direct light and maybe a diffuse source.
No single light bulb, no direct sun; maybe use a white canvas, or stretch a thin piece of white cloth between the current light source and the camera.