I've been asked to convert some older MATLAB code (for which we no longer have a license) to Python using the OpenCV (cv2) library. It all has to do with SURF operations generating points and descriptors in order to eventually mosaic two images together using gdal.
The two functions are:
points = points.selectStrongest(50000);
T = estimateGeometricTransform(points1, points2, 'affine','MaxDistance',0.5);
That is: selectStrongest() and estimateGeometricTransform().
Other parts I have already figured out: reading images with imread(), setting up the SURF object, and gathering points and descriptors using detectAndCompute(). I'm guessing, though, that I need to first select the "strongest" points before computing descriptors.
Please let me know if there is anything else I can add to clear this up. Essentially, I just don't know the python equivalent to those two functions. Perhaps it's a series of python/cv2 functions that would be necessary to recreate these?
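For context, here is a rough sketch of what I imagine the cv2 equivalent might look like (the file names, the 50000 cut-off and the 0.5 threshold are just taken from the MATLAB snippet above; SURF needs an opencv-contrib build with the nonfree modules enabled):

import cv2
import numpy as np

surf = cv2.xfeatures2d.SURF_create()  # requires opencv-contrib with nonfree modules
img1 = cv2.imread("left.tif", cv2.IMREAD_GRAYSCALE)    # placeholder file names
img2 = cv2.imread("right.tif", cv2.IMREAD_GRAYSCALE)

# Rough selectStrongest(50000) equivalent: keep the keypoints with the highest response
kp1 = sorted(surf.detect(img1, None), key=lambda k: k.response, reverse=True)[:50000]
kp2 = sorted(surf.detect(img2, None), key=lambda k: k.response, reverse=True)[:50000]
kp1, des1 = surf.compute(img1, kp1)
kp2, des2 = surf.compute(img2, kp2)

# Match descriptors, then estimate an affine transform with RANSAC,
# roughly mirroring estimateGeometricTransform(..., 'affine', 'MaxDistance', 0.5)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
T, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC,
                                  ransacReprojThreshold=0.5)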
I am trying to build up an algorithm to detect some objects and track them over time. My input data is a tif multi-stack file, which I read as a np array. I apply a U-Net model to create a binary mask and then identify the coordinates of single objects using scipy.
Up to here everything more or less works, but I just cannot get my head around the tracking. I have a dictionary where the keys are frame numbers and the values are lists of tuples; each tuple contains the coordinates of one object.
Now I have to link the objects together, which on paper seems pretty simple. I was hoping there was a function or a package to do so (ideally something similar to TrackMate or M2track in ImageJ), but I cannot find anything like that. I am considering writing my own nearest-neighbor tool, but I'd like to know whether there is a less painful way (and I would also like to be able to use more advanced metrics).
The other option I considered is using cv2, but this would require converting the data into a format cv2 likes, which would significantly slow down the code. In addition, I would like to keep the data as close as possible to the original input, so no cv2 for me.
I solved it using trackpy.
http://soft-matter.github.io/trackpy/v0.5.0/
trackpy properly reads multi-stack TIFF files (OpenCV can't).
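For example (a minimal sketch; the toy detections dictionary, the 15-pixel search_range and the memory value are placeholders to adapt to your data):

import pandas as pd
import trackpy as tp

# Hypothetical detections: {frame_number: [(x, y), ...]} as described in the question
detections = {0: [(10.0, 12.0), (40.0, 42.0)],
              1: [(11.0, 13.0), (41.0, 41.0)]}

# trackpy expects a DataFrame with 'x', 'y' and 'frame' columns
rows = [{"frame": f, "x": x, "y": y}
        for f, coords in detections.items()
        for x, y in coords]
features = pd.DataFrame(rows)

# Link detections across frames; search_range is the maximum displacement (in pixels)
# between consecutive frames, memory lets an object disappear for a few frames
tracks = tp.link(features, search_range=15, memory=3)
# 'tracks' now has a 'particle' column with a persistent ID for each object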
Currently, I use MATLAB extensively for analyzing experimental scientific data (mostly time traces and images). However, again and again I keep running into fundamental problems with the MATLAB language and I would like to make the switch to Python. One MATLAB feature is holding me back, however: its ability to add datatips to plots and images.
For a line plot the datatip is a window next to one of the data points that shows its coordinates. This is very useful to quickly see where datapoints are and what their value is. Of course this can also be done by inspecting the vectors that were used to plot the line, but that is slightly more cumbersome and becomes a headache when trying to analyze loads of data. E.g. let's say we quickly want to know for what value of x, y=0.6. Moving the datatip around will give a rough estimate very quickly.
For images, the datatip shows the x and y coordinates, but also the greyscale value (called index by MATLAB) and the RGB color. I'm mainly interested in the greyscale value here. Suppose we want to know the coordinates of the bottom tip of the pupil of the cat's eye. A datatip lets you simply click that point and copy the coordinates (either manually or programmatically). Alternatively, one would have to write some image processing script to find this pixel location. For a one-time analysis that is not worthwhile.
The plotting library for Python that I'm most familiar with, and that is commonly called the most flexible, is matplotlib. An old Stack Overflow question seems to indicate that this can be done using mpldatacursor, and another module seems to be mplcursors. These libraries do not seem to be compatible with Spyder, however, limiting their usability. Also, I imagine many Python programmers would be using a feature like datatips, so it seems odd to have to rely on a third-party module.
Now on to the actual question: Is there any module (or simple piece of code that I could put in my personal library) to get the equivalent of MATLAB's datatips in all figures generated by a python script?
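For reference, this is roughly how mplcursors would be used; as far as I can tell it needs an interactive backend (e.g. %matplotlib qt in Spyder) rather than inline plots:

import matplotlib.pyplot as plt
import mplcursors
import numpy as np

x = np.linspace(0, 1, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(2 * np.pi * x))

# Clicking a point pops up an annotation with its (x, y) values,
# roughly like a MATLAB datatip; hover=True shows it on mouse-over instead
mplcursors.cursor(ax, hover=False)
plt.show()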
I have a short video of a moving object. For some reason I need to estimate the object's movement distance between two specific frames. It does not need to be exact.
Does anyone know how I can do this in python and opencv or any image processing library?
Thanks
I don't know about OpenCV (there is surely something similar), but you can easily do it with the scikit-image module. I do it on a regular basis to align frames.
See this example here:
https://scikit-image.org/docs/dev/auto_examples/transform/plot_register_translation.html#sphx-glr-auto-examples-transform-plot-register-translation-py
EDIT: I found something very similar in opencv here
https://docs.opencv.org/4.2.0/db/d61/group__reg.html
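A minimal sketch of the scikit-image approach (in recent versions register_translation has been renamed to phase_cross_correlation; the toy frames below just stand in for your two video frames):

import numpy as np
from skimage.registration import phase_cross_correlation

# Toy data: frame_b is frame_a shifted by (5, -3) pixels
rng = np.random.default_rng(0)
frame_a = rng.random((200, 200))
frame_b = np.roll(frame_a, shift=(5, -3), axis=(0, 1))

# Estimated (row, col) translation between the two frames
shift, error, diffphase = phase_cross_correlation(frame_a, frame_b)
distance = np.hypot(*shift)   # rough movement distance in pixels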
I need to copy images from the 'Asset' folder in Windows 10, which holds automatically downloaded background images. Some of these images will never be displayed and at some point get deleted. To make sure I have seen all the new images before they are deleted, I have created a Python script that copies these images into a different folder. To be efficient, I need a way to compare two images so that only the new ones are copied. All I need is a function that takes two images and compares them with a simple approach, to tell whether or not the two images are visually identical. A simple test would be to take an image file, copy it, and compare the copy and the original, in which case the function should be able to tell that those are the same images.
How can I compare two images in Python? I need a simple and efficient way to do it. Several answers I have read are a bit complicated.
I encountered a similar problem before. I used PIL.Image.tobytes() to convert each image to a bytes object, then called hash() on the bytes objects and compared the hash values.
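Something along these lines (a minimal sketch; the paths are placeholders):

from PIL import Image

def same_image(path_a, path_b):
    # True if the two files decode to identical pixel data
    a = Image.open(path_a)
    b = Image.open(path_b)
    return a.size == b.size and hash(a.tobytes()) == hash(b.tobytes())

If you need hashes that survive between runs of the script, hashlib (e.g. hashlib.md5(a.tobytes())) is a better fit than the built-in hash(), which is randomized per interpreter session.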
Compare two images in python
Option 1:
Use the ImageChops module; it contains a number of arithmetical image operations, called channel operations (“chops”). These can be used for various purposes, including special effects, image compositions, algorithmic painting, and more.
Example:
ImageChops.difference(image1, image2) ⇒ image
Returns the absolute value of the difference between the two images.
out = abs(image1 - image2)
Option 2:
Scikit-image is an image processing toolbox for SciPy.
In scikit-image, use compare_ssim to compute the mean structural similarity index between two images (in recent versions of scikit-image it has been renamed to structural_similarity in skimage.metrics).
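A minimal sketch of both options (the file names are placeholders; the images must have the same size for the SSIM comparison):

from PIL import Image, ImageChops
from skimage.metrics import structural_similarity
import numpy as np

img1 = Image.open("a.png").convert("L")   # placeholder file names
img2 = Image.open("b.png").convert("L")

# Option 1: pixel-wise absolute difference; an empty bounding box means
# the difference image is all black, i.e. the images are identical
diff = ImageChops.difference(img1, img2)
identical = diff.getbbox() is None

# Option 2: mean structural similarity index (1.0 means identical)
score = structural_similarity(np.asarray(img1), np.asarray(img2))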
References:
Python Compare Two Images
How do I transform a binary image with one single mask in it (whose values are one) into a polygon in Python? My goal is to calculate the inner angles of this mask and the orientation of the contour lines. I assume I have to transform the mask into a polygon before I can use other libraries that do these calculations for me. I would rather not use OpenCV for this transformation since I have faced problems installing it in a Windows 64/Spyder environment. Thanks for any help!
While you can surely write your own code, I suggest having a look at libraries like AutoTrace or potrace. They should already do most of the work; just run them via the command line and read the resulting vector output.
If you want to do it yourself, try to find the rough outline and then apply an algorithm to smooth the outline.
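A scikit-image sketch of the do-it-yourself route (no OpenCV needed; the small toy mask below stands in for your binary image):

import numpy as np
from skimage import measure

# Toy binary mask standing in for the real image (values 0 and 1)
mask = np.zeros((100, 100), dtype=np.uint8)
mask[30:70, 20:80] = 1

# Trace the outline at the 0.5 level; each contour is an (N, 2) array
# of (row, col) vertices, i.e. already a polygon
contours = measure.find_contours(mask, level=0.5)
polygon = max(contours, key=len)          # keep the longest outline

# Optionally simplify the polygon before measuring angles and orientations
simplified = measure.approximate_polygon(polygon, tolerance=1.0)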
Related:
Simplified (or smooth) polygons that contain the original detailed polygon
How to intelligently degrade or smooth GIS data (simplifying polygons)?