Segmentation of Signature from Image with varying background colour - python

I am trying to extract the signature from a PAN card, which is an ID card in India, using OpenCV and Python. I have tried connected components, but the varying background interferes with it too much.
This is the sample input
This is the required output
I followed this project as an initial starting point, but I am not able to extract the signature properly because of the gradient in the background.
Thanks in advance for your time. If I have missed anything, or over- or under-emphasized a specific point, let me know in the comments.

Adaptive thresholding on the blue component seems to do the job easily.
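A minimal sketch of that idea, assuming an OpenCV 4.x install; the input path, block size and offset C are placeholders to tune per image:

    # Sketch: adaptive thresholding on the blue channel of the card image.
    # "pan_card.jpg" and the threshold parameters are assumptions to adjust.
    import cv2

    img = cv2.imread("pan_card.jpg")
    blue = img[:, :, 0]                  # OpenCV loads images as BGR, so index 0 is blue
    blue = cv2.medianBlur(blue, 5)       # light denoising before thresholding

    # Invert the threshold so the dark signature strokes become white foreground
    mask = cv2.adaptiveThreshold(
        blue, 255,
        cv2.ADAPTIVE_THRESH_GAUSSIAN_C,  # local Gaussian-weighted mean
        cv2.THRESH_BINARY_INV,
        25,                              # neighbourhood size (must be odd)
        10                               # constant subtracted from the local mean
    )
    cv2.imwrite("signature_mask.png", mask)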

Related

Image segmentation python opencv

I want to ask for some advice about the procedure I should implement for image segmentation with OpenCV in Python.
I have this kind of image, and my goal is to detect the white fiber, like here.
Does anyone have a suggestion for the image processing steps I should follow?
Since I can see that the object's color is different from the background, I found this guide helpful. The concept is the following (see the sketch after this list):
1. apply RGB filters to your image,
2. grab contours using OpenCV, then
3. apply some handcrafted conditions to them so as to fit your desired output, and finally
4. produce the box.
If all of your images share the same color patterns, this should work. If not, it will prove noisy.
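A minimal sketch of those four steps, assuming OpenCV 4.x; the file name, color bounds and area cutoff are placeholders that depend on the actual images:

    # 1) RGB filter, 2) contours, 3) handcrafted conditions, 4) bounding boxes.
    import cv2
    import numpy as np

    img = cv2.imread("fibers.jpg")                       # hypothetical input

    # 1. Keep near-white pixels (the fiber is lighter than the background)
    mask = cv2.inRange(img, np.array([200, 200, 200]), np.array([255, 255, 255]))

    # 2. Grab contours (OpenCV 4 return signature; 3.x returns three values)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # 3. Handcrafted condition: ignore tiny blobs
    candidates = [c for c in contours if cv2.contourArea(c) > 500]

    # 4. Produce a box around each remaining contour
    for c in candidates:
        x, y, w, h = cv2.boundingRect(c)
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 0, 255), 2)

    cv2.imwrite("fibers_boxed.png", img)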

Objects Extraction from an Image using Python

I'm trying to implement a Python program to remove the background and extract the object/objects in the foreground from any given static image (as shown in the attached images). This is similar to the "iPhone X Portrait Effect" or "Bokeh Effect". However, instead of blurring the background, it needs to be removed completely.
In short, I want to extract objects from any given image and also create a mask. Below are the examples of both respectively:
Object Extraction:
Image Mask:
I have heard about Google's DeepLab somewhere, but I don't know how to get started with it.
Can someone help me, please?
Any step-by-step tutorial would be much appreciated.
Thanks in advance!
This is a really hard task; there is a reason there is basically no good software that does this already, and even in Photoshop it is a struggle. I would advise you to start with OpenCV and its built-in face detection, which you may need to adapt to work with animals if that is your goal.
Resources:
facial detection:
https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_objdetect/py_face_detection/py_face_detection.html#face-detection
object detection:
https://docs.opencv.org/2.4/modules/contrib/doc/facerec/facerec_tutorial.html
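If it helps as a starting point, the face detection tutorial linked above boils down to roughly the following; the input file is a placeholder and the cascade is the stock frontal-face model bundled with the opencv-python package:

    # Face detection with the stock Haar cascade, following the linked tutorial.
    import cv2

    img = cv2.imread("photo.jpg")                        # hypothetical input
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # cv2.data.haarcascades points at the cascade files shipped with opencv-python
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    faces = cascade.detectMultiScale(gray, 1.3, 5)       # scale factor, min neighbours

    for (x, y, w, h) in faces:
        cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)

    cv2.imwrite("faces.png", img)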
First, you should collect data (images) of the objects you want to detect or extract in the foreground, and create a mask for each of them.
Then, using TensorFlow, you can train an instance segmentation model on your own dataset (ref: instance_segmentation).
After getting the mask, you can extract the foreground.
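That last step is just an element-wise masking operation. A minimal sketch, assuming the model already produced a single-channel mask with 255 on the foreground; the file names are placeholders:

    # Given a binary mask from the segmentation model, cut out the foreground.
    import cv2

    img = cv2.imread("input.jpg")
    mask = cv2.imread("predicted_mask.png", cv2.IMREAD_GRAYSCALE)   # 255 = foreground

    # Keep only the masked pixels; the rest becomes black
    foreground = cv2.bitwise_and(img, img, mask=mask)
    cv2.imwrite("foreground.png", foreground)

    # Or make the background transparent instead of black
    bgra = cv2.cvtColor(img, cv2.COLOR_BGR2BGRA)
    bgra[:, :, 3] = mask
    cv2.imwrite("foreground_transparent.png", bgra)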

Aruco marker detection with 360 camera?

Recently I have been playing with the 360fly HD camera and wondering whether ArUco markers can be detected in real time. The first thing that came to my mind is to convert the fisheye image into a perspective image first and then perform the detection on the perspective image (I am going to try it and will update my results here later).
Converting a fisheye image into a panoramic, spherical or perspective projection
Hugin HowTo: Convert 360 Image to Cropped Flat Panoramic Image
I am not an expert in this field. Has anyone done this before? Is this something that can be achieved by calibrating the camera differently, such as correcting the camera matrix and the distortion coefficients?
If I am heading in the wrong direction, please let me know.
I was able to get a better understanding during the process.
First, I want to say that a 360 (fisheye, spherical, whatever you call it) image is NOT distorted. I was tricked by my intuition and thought the image was distorted based on what it looks like. No, it is not distorted. Please read enter link description here for more information.
Next, I tried both 360fly cameras and neither works. Every time I tried to access the camera with OpenCV, it automatically powered off and switched to storage mode. I guess the 360fly dev team purposely implemented this switching behavior to prevent "hacking" of their products. But I have seen people successfully hack the 360fly, so it is definitely workable.
Finally, I was able to detect ArUco markers with the Ricoh Theta V (the Theta S should also work). It is very developer friendly and I was able to make it run on my first attempt. You just have to select the right camera and let the code run. The only problem is the range, which is limited as expected (about 6 ft), and the Ricoh camera is kind of expensive ($499).
Click here to view a successful detection.
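For reference, a minimal detection loop looks roughly like this; it assumes OpenCV 4.7+ with the aruco module (older contrib builds use cv2.aruco.detectMarkers instead), and the camera index and dictionary are guesses:

    # Live ArUco detection sketch. Camera index 1 and DICT_6X6_250 are assumptions.
    import cv2

    cap = cv2.VideoCapture(1)                            # pick the index of the 360/Theta camera
    aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_6X6_250)
    detector = cv2.aruco.ArucoDetector(aruco_dict, cv2.aruco.DetectorParameters())

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        corners, ids, _ = detector.detectMarkers(gray)
        cv2.aruco.drawDetectedMarkers(frame, corners, ids)
        cv2.imshow("aruco", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break

    cap.release()
    cv2.destroyAllWindows()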

How to solarize an image

I am trying to make a function that will solarize an image, but before I can do that I need to know how to solarize an image.
According to Wiki:
"the image recorded on a negative or on a photographic print is wholly or partially reversed in tone. Dark areas appear light or light areas appear dark"
I am running a python framework which allows me to access all the pixels in an image and their RGB values. From there does anyone have any suggestions on how to achieve this?
Here's some good information on how that effect is digitally recreated
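In practice, solarizing means inverting every pixel value at or above a chosen threshold. A minimal sketch with Pillow; the threshold of 128 is a common default, not a fixed rule:

    # Solarize: invert pixel values above a threshold.
    from PIL import Image, ImageOps

    img = Image.open("photo.jpg")                        # hypothetical input
    ImageOps.solarize(img, threshold=128).save("solarized.jpg")

    # The same rule written out for a single channel value, if you are
    # looping over raw RGB pixels yourself:
    def solarize_value(v, threshold=128):
        return 255 - v if v >= threshold else v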

How to find a fix number of (almost) fixed proportion rectangles with opencv?

I am writing simple fly-tracking software and I would love some input from OpenCV experts.
The image I have looks pretty much like:
I used to do tracking using k-means and PIL/NumPy, but I rewrote everything to use blob detection in OpenCV. Tracking works OK, but I would also like to automate the division of ROIs.
What I need to do is find each of the 32 grooves that appear in the picture, where the flies live. See the black rectangle on the image as an example of what I mean.
I think cornerHarris may be what I need, but how do I select only the grooves and not every single rectangle found in the image? All the grooves have proportions of roughly 10:1.
Thanks!
I don't think cvCornerHarris is even close to what you need.
A much better start would be to experiment with the demo available at OpenCV-2.3.0/samples/cpp/squares.cpp. This technique uses Canny(), dilate() and findContours().
Right out of the box, this demo outputs:
I believe that with a few tweaks here and there you can have your party started.
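A rough Python sketch of that squares.cpp idea, with an extra aspect-ratio check to keep only the roughly 10:1 grooves; the Canny thresholds and area cutoff are placeholders:

    # Canny -> dilate -> findContours, then keep contours whose bounding box is ~10:1.
    import cv2
    import numpy as np

    img = cv2.imread("flies.png")                        # hypothetical input
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    edges = cv2.Canny(gray, 50, 150)
    edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))

    # OpenCV 4 return signature; 3.x returns three values
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        ratio = max(w, h) / max(1, min(w, h))
        if 8 <= ratio <= 12 and cv2.contourArea(c) > 1000:   # roughly 10:1 and large enough
            cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imwrite("grooves.png", img)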
