minMaxLoc function - How to access the variables? - python

I have a question; some of you might find it a stupid one, but I am stuck on something and hope you can help me.
I am using the following function in order to find the location of the brightest and the darkest pixel of an image:
darkest, brightest, dark_loc, bright_loc = cv2.minMaxLoc(img)
Now I need the values from dark_loc and bright_loc (the x and y coordinates, in separate variables a and b) in order to calculate the distance from the centre of the image. I need those values in order to track a laser beam on a screen. Could you please help me extract the values?
Thanks in advance!
Regards,
Vassil
P.S.: I am not sure if it's of any relevance but I am doing this using OpenCV on an RPi 2B.
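
In case it helps, cv2.minMaxLoc returns a plain 4-tuple, and the two *_loc entries are (x, y) tuples that can be unpacked directly. A minimal sketch (the filename frame.png and the choice of the brightest point are just placeholders for your setup):

import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)  # minMaxLoc needs a single-channel image

# minMaxLoc returns (minVal, maxVal, minLoc, maxLoc); the *_loc values are (x, y) tuples
darkest, brightest, dark_loc, bright_loc = cv2.minMaxLoc(img)

# Unpack the brightest pixel's coordinates into separate variables
a, b = bright_loc  # a = x (column), b = y (row)

# Distance of the brightest pixel from the centre of the image
h, w = img.shape[:2]
dist = np.hypot(a - w / 2.0, b - h / 2.0)
print(a, b, dist)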

Related

Image Processing in Python- Is there a general solution to this problem?

I have been going over this for days now and have hit a road block as I am too scared to try out my hypothesis.
I would like to find out the number of grayed rectangular boxes in this image. However, I am not sure how I can do that. I was thinking of two ways:
i. Getting the areas of the connected components, calculating their median, and counting the components whose area falls within a certain percentile band (may sound pretty strange; a rough sketch of this appears below).
ii. Training a machine learning model to find the similar boxes in the image and count them.
However, I would like the approach to be more general, so that the solution can also fit other images I need to process.
Here is my source Image:
Any sort of help/suggestions and even solutions would be greatly appreciated.
Thanks in advance!
Maybe you are losing a lot of image information with the filtering... Do you have an unfiltered source image too? I suppose an ML approach would work pretty nicely then.
I noticed you could achieve better resolution if your camera were rotated 90 degrees (if you can affect this).
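Not a full answer, but a rough sketch of approach (i) with OpenCV's connected components; the threshold step and the 25th to 75th percentile band are assumptions you would tune for your image (boxes.png is a placeholder filename):

import cv2
import numpy as np

img = cv2.imread("boxes.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Binarise the image; depending on whether the boxes are darker or lighter than
# the background you may need THRESH_BINARY_INV instead
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# Label connected components and collect their areas (label 0 is the background)
num_labels, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)
areas = stats[1:, cv2.CC_STAT_AREA]

# Keep only components whose area lies within a percentile band around the median
lo, hi = np.percentile(areas, [25, 75])  # the band is an assumption, tune per image
box_count = int(np.sum((areas >= lo) & (areas <= hi)))
print(box_count)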

How do you move the mouse to all of the blue pixels on the screen's x y values averaged

I am trying to make a DIY Virtual Reality kit, and am not proficient in Python. But as far as I can tell, Python is the only way to achieve the following: the program takes each frame from the webcam feed, gets the blue pixels, averages all of the XY positions of those pixels, and moves the mouse there. In OpenCV and Python.
I've done a million Google searches and cannot find what I need. I spent around 11 hours searching the OpenCV documentation and found nothing. I just need this program done. I'm running Python 3.7.3 with OpenCV 2.
Any help would be much appreciated. I am fine with people giving me the full code as long as it is understandable and legible.
Thank you.
*Edit, I am using Windows x86.
Convert your image to HSV. HSV is generally better for detecting specific coloured areas. This SO link deals with the same issue as you described and has useful links in it. After you detect the blue pixels with the inRange function, use the mask to get an average of the coordinate values (np.argwhere is useful for getting coordinates; then use np.mean over axis=0).
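Putting that comment together, a minimal sketch might look like the following; the HSV bounds for blue are rough guesses, and pyautogui is assumed for the mouse move since OpenCV itself cannot control the cursor:

import cv2
import numpy as np
import pyautogui  # assumption: used only for the mouse move; OpenCV cannot move the cursor

cap = cv2.VideoCapture(0)

# Rough HSV range for blue; tune these bounds for your lighting and camera
lower_blue = np.array([100, 150, 50])
upper_blue = np.array([130, 255, 255])

while True:
    ok, frame = cap.read()
    if not ok:
        break

    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_blue, upper_blue)

    coords = np.argwhere(mask > 0)  # rows of (y, x) for every blue pixel
    if coords.size:
        y_mean, x_mean = coords.mean(axis=0)
        pyautogui.moveTo(int(x_mean), int(y_mean))  # assumes frame and screen share coordinates

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()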

How to make a color gradient-weighted distance image in OpenCV (Python)

How do I make a color gradient-weighted distance in OpenCV for Python? In all the research I have done I only find the gradient distance without the color information. Does anyone know the path to it? Many thanks in advance.
Image source.
*The objective is not doing the distance transformation alone, but doing it with the color information (in grayscale).
I found the solution based on these papers:
https://www.researchgate.net/publication/257690553_Automatic_image_segmentation_of_nuclear_stained_breast_tissue_sections_using_color_active_contour_model_and_an_improved_watershed_method
https://www.researchgate.net/publication/260755668_A_color_and_shape_based_algorithm_for_segmentation_of_white_blood_cells_in_peripheral_blood_and_bone_marrow_images
It was just a merge of the two techniques, thank you all.
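For anyone landing here later, one common way to get a gradient-weighted (geodesic) distance in Python is to use the gradient magnitude as a cost image and accumulate costs from a seed. This is only a sketch of that idea, not the papers' exact method; it assumes scikit-image is available and uses a placeholder seed and filename:

import cv2
import numpy as np
from skimage import graph  # assumption: scikit-image is available for the geodesic distance

img = cv2.imread("cells.png", cv2.IMREAD_GRAYSCALE)  # placeholder filename

# Gradient magnitude of the grayscale image (Sobel)
gx = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
gy = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)
grad = np.hypot(gx, gy)

# Use the gradient as a cost image: crossing strong edges is expensive,
# so the accumulated cost behaves like a gradient-weighted distance
cost = 1.0 + grad / (grad.max() + 1e-9)

seed = (img.shape[0] // 2, img.shape[1] // 2)  # placeholder seed; in practice use marker pixels
mcp = graph.MCP(cost)
distance_map, traceback = mcp.find_costs([seed])

# Scale to 0-255 for viewing
out = cv2.normalize(distance_map, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("gradient_weighted_distance.png", out)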

how to count points based on the projection of a set of coordinates

I have some data about a set of coordinates, like {(x=1, y=2), (x=3, y=4), ...}. I projected these coordinates onto the picture. Then I would like to count these points in different positions. My idea is below:
First, divide the picture into several parts based on 10-pixel boxes.
Second, count the points inside each pixel box.
I am new to this area, and I use Python a lot. I think this may be a computer graphics problem.
I am not asking for a code solution. I just want to know which libraries or approaches are relevant.
Anyone who could give me a hint would be highly appreciated. Thanks.
Sure, your approach seems fine. You simply want to count the number of pixels in different image regions that you placed, correct?
I answered a question recently (with Python) that was giving an indication if there was a black pixel inside an image region. It could be easily modified to count pixels instead of simply finding one. Check it out here and modify your question or post a new one if you have code problems working it out.
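For the 10-pixel grid described in the question, numpy alone is enough: np.histogram2d bins the projected coordinates into cells and returns the per-cell counts. A small sketch with made-up points and an assumed 100x100 picture:

import numpy as np

# Example projected coordinates as (x, y) pairs
points = np.array([(1, 2), (3, 4), (12, 15), (14, 18)], dtype=float)

w, h = 100, 100  # assumed picture size in pixels
cell = 10        # 10-pixel boxes, as in the question

# Bin the points into a grid of 10x10-pixel cells and count the points per cell
x_edges = np.arange(0, w + cell, cell)
y_edges = np.arange(0, h + cell, cell)
counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=[x_edges, y_edges])

# counts[i, j] is the number of points whose x falls in column i and y in row j
print(counts.T)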

image alignment with set points - python

I've been looking for a way to align at least two images in Python.
There are some very nice scripts out there, but I'd like a script that accepts the coordinates of a point in one image and uses these coordinates to align the second (or more) images.
I've been reading about phase correlation algorithms, but I'd appreciate some help on how to use the coordinates of a user-defined point.
Thank you!
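One simple way to use user-defined points is to pick the same features in both images and fit an affine transform from them; with exactly three pairs cv2.getAffineTransform determines it exactly, and for more pairs cv2.estimateAffinePartial2D gives a least-squares fit. A sketch with placeholder filenames and point coordinates:

import cv2
import numpy as np

ref = cv2.imread("reference.png")  # placeholder filenames
mov = cv2.imread("moving.png")

# The same three features clicked by the user in both images (placeholder coordinates)
pts_ref = np.float32([[120, 80], [400, 95], [260, 310]])
pts_mov = np.float32([[135, 102], [415, 110], [272, 330]])

# Three point pairs determine an affine transform exactly;
# with more pairs, cv2.estimateAffinePartial2D gives a least-squares fit instead
M = cv2.getAffineTransform(pts_mov, pts_ref)

# Warp the second image onto the reference frame
aligned = cv2.warpAffine(mov, M, (ref.shape[1], ref.shape[0]))
cv2.imwrite("aligned.png", aligned)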
