Raspberry Pi: remove fisheye effect from image [closed] - python

I have a Raspberry Pi and I capture images with a Python script. The captured image has a fisheye distortion, and I want a normal (undistorted) image instead. How do I remove the fisheye effect from the image, or convert it into a normal one? I looked at some OpenCV pages but couldn't find a solution. Please help. Thanks in advance.

At this LINK you can find a useful tutorial for camera calibration in OpenCV.
It explains the chessboard method for correcting fisheye distortion.
After calibration you can undistort every picture taken by the camera.
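Below is a minimal sketch of that workflow using OpenCV's standard chessboard calibration; the file names (calib_*.jpg, fisheye_capture.jpg) and the 9x6 inner-corner board size are assumptions, and for very wide-angle lenses the cv2.fisheye module may be a better fit.

import glob
import cv2
import numpy as np

# Assumed inner-corner count of the printed chessboard and calibration shots.
board_size = (9, 6)
objp = np.zeros((board_size[0] * board_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:board_size[0], 0:board_size[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
gray = None
for path in glob.glob("calib_*.jpg"):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, board_size)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the camera matrix and distortion coefficients once...
ret, mtx, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# ...then undistort any picture taken with the same camera.
img = cv2.imread("fisheye_capture.jpg")
cv2.imwrite("undistorted.jpg", cv2.undistort(img, mtx, dist))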

Related

How to crop image based on contents (Python & OpenCV)? [closed]

I'm currently working on a personal project: identifying the products in a scanned image of a store catalog.
As you can see in the image, there are no separating lines between the products, so using Hough lines to locate them won't really solve the problem!
Tesseract works really well for extracting the content; the only problem I'm facing is finding the products automatically. I don't want to crop the image manually, but rather detect each product, crop it together with its text description and price, and then extract the content with OCR.
I have tried many image processing techniques but still nothing works (I'm using Python and OpenCV).
Thanks in advance :)
The problem you have is usually called background removal, or alternatively foreground extraction. In this example it might actually be relatively easy, as the background is mostly in shades of the same color. My recommendation would be to look at the GrabCut algorithm, which is described here: https://docs.opencv.org/3.4.3/d8/d83/tutorial_py_grabcut.html
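A minimal GrabCut sketch along the lines of that tutorial; the file name catalog_page.jpg and the rectangle around one product region are placeholders you would adapt to your catalog.

import cv2
import numpy as np

# Placeholder input and a rough rectangle (x, y, w, h) around one product region.
img = cv2.imread("catalog_page.jpg")
rect = (50, 50, 400, 600)

mask = np.zeros(img.shape[:2], np.uint8)
bgd_model = np.zeros((1, 65), np.float64)  # internal models required by grabCut
fgd_model = np.zeros((1, 65), np.float64)

# GrabCut initialised from the rectangle: pixels outside it count as background.
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Keep pixels labelled (probable) foreground and black out the rest.
fg = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 1, 0).astype("uint8")
cv2.imwrite("product_foreground.png", img * fg[:, :, np.newaxis])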

How to detect a moving object in a short sequence of images using opencv? [closed]

I have a short sequence of grayscale images (20 images), which are consecutive frames collected from a camera. I need to identify the location of a small moving object in each image. I have tried background subtraction and can identify the moving pixels in each image. How can I proceed to detect the exact location of that moving object?
You can try it this way.
(1) Find the object with template matching in the first frame.
https://docs.opencv.org/3.0-beta/doc/tutorials/imgproc/histograms/template_matching/template_matching.html?highlight=template%20matching
(2) Track the object with CamShift or another tracking algorithm.
https://github.com/opencv/opencv/blob/master/samples/python/camshift.py
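A minimal sketch of that two-step idea; the template file object_template.png, the frame file names, and the use of a plain intensity histogram for CamShift (it is usually run on the hue channel of a colour image) are all assumptions.

import cv2
import numpy as np

# Assumed inputs: a small cut-out of the object and the 20 grayscale frames.
template = cv2.imread("object_template.png", cv2.IMREAD_GRAYSCALE)
frames = [cv2.imread(f"frame_{i:02d}.png", cv2.IMREAD_GRAYSCALE) for i in range(20)]
h, w = template.shape

# Step 1: locate the object in the first frame with template matching.
res = cv2.matchTemplate(frames[0], template, cv2.TM_CCOEFF_NORMED)
_, _, _, max_loc = cv2.minMaxLoc(res)
x, y = max_loc
track_window = (x, y, w, h)

# Step 2: track it through the remaining frames with CamShift, using an
# intensity histogram of the initial region as the model.
roi_hist = cv2.calcHist([frames[0][y:y + h, x:x + w]], [0], None, [256], [0, 256])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

for frame in frames[1:]:
    back_proj = cv2.calcBackProject([frame], [0], roi_hist, [0, 256], 1)
    rot_rect, track_window = cv2.CamShift(back_proj, track_window, criteria)
    print("object window:", track_window)  # (x, y, w, h) of the tracked object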

Use of OCR to read node graphs [closed]

I would like to code a program that uses a phone's camera to read a node graph, runs Dijkstra's algorithm on it, and displays the shortest path.
Could pytesseract or other Python OCR tools read node graphs (such as the one in the image; real use would be on printed ones) and give me enough information to get the node coordinates and the letters next to them, as well as the positions of the edges, which nodes they connect, and the numbers next to them?
Any help would be much appreciated.
https://i.stack.imgur.com/RUZwA.jpg
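One way to start, sketched below under heavy assumptions: find the node circles with cv2.HoughCircles, then run pytesseract on a small crop around each circle to read its label. The file name, circle radii, and crop size are placeholders, and recovering the edges and their weights would still need separate line detection.

import cv2
import numpy as np
import pytesseract

# Placeholder input: a photo or scan of the printed node graph.
img = cv2.imread("graph.jpg")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)

# Detect the node circles; the radius range is a guess for a printed graph.
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
                           param1=100, param2=30, minRadius=10, maxRadius=60)

nodes = []
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        # OCR a small region around the circle to read the node letter.
        pad = int(r * 1.5)
        crop = gray[max(0, y - pad):y + pad, max(0, x - pad):x + pad]
        label = pytesseract.image_to_string(crop, config="--psm 10").strip()
        nodes.append({"label": label, "center": (int(x), int(y))})

print(nodes)  # node letters and coordinates; edges still need to be detected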

How to extract blood veins from the following images using Python and OpenCV? [closed]

I am trying to enhance the blood vein part of the forearm image shown below.
After grayscale conversion I got the following image.
Then I applied histogram equalization and got a better result, as shown below.
Based on the histogram-equalized image, I need to enhance the blood veins in the original image. For that I first need to extract the veins from the histogram-equalized image, but I am not getting any result. Can anyone suggest a proper method to enhance the blood veins in the original infrared image? I am using Python + OpenCV for image processing.
The area of interest to extract is shown below.
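One common starting point for this kind of infrared vein image, sketched below with a placeholder file name and parameter values: local contrast enhancement with CLAHE, then an inverted adaptive threshold (veins are darker than the surrounding tissue) and a morphological clean-up.

import cv2

# Placeholder input: the original infrared forearm image.
gray = cv2.imread("forearm_ir.png", cv2.IMREAD_GRAYSCALE)

# Tile-based histogram equalization (CLAHE) handles uneven illumination better
# than the global equalization tried in the question.
clahe = cv2.createCLAHE(clipLimit=3.0, tileGridSize=(8, 8))
enhanced = clahe.apply(gray)

# Veins are darker than the surrounding skin, so an inverted adaptive threshold
# marks them as foreground; block size and offset are guesses to tune.
veins = cv2.adaptiveThreshold(enhanced, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                              cv2.THRESH_BINARY_INV, 35, 5)

# Remove speckles and close small gaps along the vessels.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
veins = cv2.morphologyEx(veins, cv2.MORPH_OPEN, kernel)
veins = cv2.morphologyEx(veins, cv2.MORPH_CLOSE, kernel)

cv2.imwrite("veins_mask.png", veins)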

I have a PNG image and am wondering how to efficiently convert it to PBM in Python [closed]

Is there a simple way to convert PNG to PBM?
ocrad requires a PBM image, not PNG.
Check out Pillow. According to the docs, it supports PBM, PGM, and PPM formats out of the box. The following code should get you started:
from PIL import Image
im = Image.open("myfile.png")
# Convert to bilevel mode "1" first: Pillow writes an actual PBM (P4) file only
# for bilevel images; other modes would be written out as PGM/PPM data instead.
im.convert("1").save("myfile.pbm")
