I'm trying to generate jigsaw puzzle pieces using Python as part of a project I'm interested in. There are libraries such as OpenCV and Pillow that allow splitting an image into square pieces, and of course they allow masking, but I want something that lets me input custom cut shapes, perhaps as a Bezier curve or similar. Is there a good library for this, or is it necessary to write my own, perhaps outputting the Bezier shapes to a PNG mask and using that? Thanks
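To make the PNG-mask idea concrete, here is a rough sketch of what I had in mind, using matplotlib's Path to rasterise a closed outline containing cubic Bezier segments and Pillow to cut the piece out of a source image (the control points and file names below are made up for illustration):

import numpy as np
from matplotlib.path import Path
from PIL import Image

# Made-up outline for one piece: a square whose top edge carries a
# cubic Bezier "tab"; real cut shapes would come from your own curve data.
verts = [
    (20, 20),                             # start at bottom-left
    (180, 20),                            # bottom-right
    (180, 180),                           # top-right
    (120, 180), (120, 230), (100, 230),   # first Bezier segment of the tab
    (80, 230), (80, 180), (20, 180),      # second Bezier segment, ending at top-left
    (20, 20),                             # close the outline
]
codes = [Path.MOVETO, Path.LINETO, Path.LINETO,
         Path.CURVE4, Path.CURVE4, Path.CURVE4,
         Path.CURVE4, Path.CURVE4, Path.CURVE4,
         Path.CLOSEPOLY]
piece = Path(verts, codes)

# Rasterise the outline into a boolean mask by testing every pixel centre.
w, h = 200, 256
ys, xs = np.mgrid[0:h, 0:w]
mask = piece.contains_points(np.column_stack([xs.ravel(), ys.ravel()])).reshape(h, w)

# Use the mask as the alpha channel of the source image region.
src = Image.open("photo.png").convert("RGBA").resize((w, h))   # placeholder file
src.putalpha(Image.fromarray((mask * 255).astype(np.uint8), mode="L"))
src.save("piece.png")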
Currently, I use MATLAB extensively for analyzing experimental scientific data (mostly time traces and images). However, again and again I keep running into fundamental problems with the MATLAB language and I would like to make the switch to Python. One feature of MATLAB is holding me back, however: its ability to add datatips to plots and images.
For a line plot the datatip is a window next to one of the data points that shows its coordinates. This is very useful to quickly see where datapoints are and what their value is. Of course this can also be done by inspecting the vectors that were used to plot the line, but that is slightly more cumbersome and becomes a headache when trying to analyze loads of data. E.g. let's say we quickly want to know for what value of x, y=0.6. Moving the datatip around will give a rough estimate very quickly.
For images, the datatip shows the x and y coordinates, but also the greyscale value (called index by MATLAB) and the RGB color. I'm mainly interested in the greyscale value here. Suppose we want to know the coordinates of the bottom tip of the pupil of the cat's eye. A datatip allows you to simply click that point and copy the coordinates (either manually or programmatically). Alternatively, one would have to write some image processing script to find this pixel location. For a one-time analysis that is not worthwhile.
The plotting library for Python that I'm most familiar with, and that is commonly called the most flexible, is matplotlib. An old Stack Overflow question seems to indicate that this can be done using mpldatacursor, and another module for this seems to be mplcursors. These libraries do not seem to be compatible with Spyder, however, which limits their usability. Also, I imagine many Python programmers would use a feature like datatips, so it seems odd to have to rely on a third-party module.
Now on to the actual question: Is there any module (or simple piece of code that I could put in my personal library) to get the equivalent of MATLAB's datatips in all figures generated by a python script?
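For reference, the basic mplcursors usage I have seen looks like the sketch below; whether the interactive annotation actually appears seems to depend on the matplotlib backend, which may be the Spyder problem mentioned above:

import numpy as np
import matplotlib.pyplot as plt
import mplcursors

x = np.linspace(0, 2 * np.pi, 200)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x))

# Hovering over (or clicking) the line attaches an annotation showing the
# data coordinates of the nearest point, much like a MATLAB datatip.
mplcursors.cursor(hover=True)
plt.show()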
I work with a huge library of components (STEP files) that are currently used in various products. My goal is to identify parts with great similarity in order to unify them. At the moment I can think of two solutions:
Compare certain properties of the 3D data with a suitable Python library, e.g. identify parts with similar volume and dimensions.
Convert the STEP files to JPEG and compare the images with one of the many image processing libraries.
Both have their pitfalls.
Is there a library that can handle STEP files, or do you know a better way to solve the problem?
You are underestimating the complexity of this project. Once the STEP geometry is loaded, taking dimensions on it (apart from bounding box extents) can be really cumbersome. Very different parts can have the same volume, and by comparing bitmaps you completely ignore the hidden part of the geometry.
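That said, if you still want a quick first-pass filter on simple properties, a rough sketch could look like the following. It assumes the STEP files have already been converted to meshes (e.g. STL), since trimesh does not read STEP directly, and the tolerances are placeholders:

import glob
import trimesh

# Record a simple geometric signature (volume and sorted bounding-box extents)
# for every pre-converted mesh.
signatures = {}
for path in glob.glob("converted/*.stl"):
    mesh = trimesh.load(path)
    signatures[path] = (mesh.volume, sorted(mesh.bounding_box.extents))

def similar(a, b, vol_tol=0.05, ext_tol=1.0):
    # Placeholder tolerances: 5 % on volume, 1 mm on each extent.
    (va, ea), (vb, eb) = a, b
    return (abs(va - vb) / max(va, vb) < vol_tol
            and all(abs(x - y) < ext_tol for x, y in zip(ea, eb)))

paths = list(signatures)
for i, p in enumerate(paths):
    for q in paths[i + 1:]:
        if similar(signatures[p], signatures[q]):
            print("candidate duplicates:", p, q)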
I am using Windows 10, Python and C#. I want to calculate the circumference of human body parts (belly, biceps, etc.) using a point cloud or 3D scans such as .stl, .obj, .ply. Right now I can get the point cloud of a human body with Kinect v2, and I have scanned 3D human bodies in .stl, .obj and .ply formats.
I need some ideas and information about this. I don't know how to analyse the data I have or how to calculate what I want.
Here is an example of what I am trying to do, but it doesn't need to be as robust as that; it's for a school assignment. Maybe you can give me some ideas about how to achieve my goal. Thank you for your help.
https://www.youtube.com/watch?time_continue=48&v=jOvaZGloNRo
I get the 3D scanned object with Kinect v2 and use PCL to convert it into a point cloud.
I don't know about using PCL with Python or C#. In general you are looking at the following steps:
Filtering the points down to the region of interest
Segmenting the shape
Extracting the parameters
If you're interested in Python only, then OpenCV might be the best option. You can also develop the core logic in C++ and wrap it for Python or C#. C++ also has some nice UI libraries (Qt, nanogui). See the following details for achieving the objective with PCL.
Filtering
CropBox or PassThrough can be used for this. It will give results similar to those shown in the image, assuming the reference frame has been chosen properly. If not, the point cloud can easily be transformed first.
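Since the Python bindings for PCL vary a lot, here is a plain NumPy sketch of the same pass-through idea; the loader and the z limits are placeholders that depend on how your scan is oriented:

import numpy as np

points = np.loadtxt("body_scan.xyz")        # placeholder: (N, 3) array of XYZ coordinates

# Pass-through filter: keep only points whose z value lies in the slab of
# interest (e.g. the height at which the bicep should be measured).
z_min, z_max = 1.10, 1.15                   # placeholder limits in metres
slab = points[(points[:, 2] >= z_min) & (points[:, 2] <= z_max)]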
Segmenting the shape
Assuming you want an average circumference, you might need to experiment with Circle 2D, Circle 3D and Cylinder models. More details regarding usage and API are here. The method chosen can be a simple SAC (Sample Consensus) approach like RANSAC (Random SAC), or a more advanced method like LMEDS (Least Median of Squares) or MLESAC (Maximum Likelihood Estimation SAC).
Extracting the parameters
All models have a radius field, which can be used to find the circumference with the standard formula (2*pi*r).
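As a minimal sketch of these last two steps without PCL, a circle can be least-squares fitted to the filtered slab from the snippet above and the circumference taken as 2*pi*r. This is the simple algebraic (Kasa) fit rather than RANSAC, so it assumes outliers were already filtered out:

import numpy as np

def fit_circle(xy):
    # Algebraic least-squares circle fit: solve x^2 + y^2 = 2*cx*x + 2*cy*y + c.
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones(len(x))])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx ** 2 + cy ** 2)

cx, cy, r = fit_circle(slab[:, :2])         # project the slab onto the XY plane
print("estimated circumference:", 2 * np.pi * r)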
Disclaimer: Please note that these models assume the cross-section is a circle, not an ellipse, and that the cylinders are right circular cylinders. So if the measured object (arm or bicep) is not circular, the computed value might not be close to the ground truth in extreme cases.
I am trying to detect a vehicle in an image (actually a sequence of frames in a video). I am new to OpenCV and Python and work under Windows 7.
Is there a way to get horizontal edges and vertical edges of an image and then sum up the resultant images into respective vectors?
Is there Python code or a function available for this?
I looked at this and this but could not figure out how to do it.
You may use the following image for illustration.
EDIT
I was inspired by the idea presented in the following paper (sorry if you do not have access).
Betke, M.; Haritaoglu, E. & Davis, L. S. (2000). Real-time multiple vehicle detection and tracking from a moving vehicle. Machine Vision and Applications, 12, 69-83.
I would take a look at the squares example for OpenCV, posted here. It uses Canny and then does a contour find to return the sides of each square. You should be able to modify this code to get the horizontal and vertical lines you are looking for. Here is a link to the documentation for the Python call of Canny. It is rather helpful for all-around edge detection. In about an hour I can get home and give you a working example of what you want.
Do some reading on Sobel filters.
http://en.wikipedia.org/wiki/Sobel_operator
You can basically get vertical and horizontal gradients at each pixel.
Here is the OpenCV function for it.
http://docs.opencv.org/modules/imgproc/doc/filtering.html?highlight=sobel#sobel
Once you have these filtered images, you can collect statistics column- and row-wise, decide whether a column or row contains an edge, and get its location.
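A minimal sketch of that idea with OpenCV's Sobel filter, collapsing the gradient magnitudes into one vector per column and one per row (the file name is a placeholder):

import cv2
import numpy as np

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)   # placeholder frame from the video

grad_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)    # responds to vertical edges
grad_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)    # responds to horizontal edges

# Sum the absolute responses into 1D profiles: peaks in vertical_profile mark
# candidate vertical edges (one value per column), peaks in horizontal_profile
# mark candidate horizontal edges (one value per row).
vertical_profile = np.abs(grad_x).sum(axis=0)
horizontal_profile = np.abs(grad_y).sum(axis=1)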
Typically geometrical approaches to object detection are not hugely successful as the appearance model you assume can quite easily be violated by occlusion, noise or orientation changes.
Machine learning approaches typically work much better in my opinion and would probably provide a more robust solution to your problem. Since you appear to be working with OpenCV, you could take a look at Cascade Classifiers, for which OpenCV provides Haar wavelet and local binary pattern feature based classifiers.
The link I have provided is to a tutorial with very complete steps explaining how to create a classifier with several prewritten utilities. Basically you will create a directory with 'positive' images of cars and a directory with 'negative' images of typical backgrounds. A utility, opencv_createsamples, can be used to create training images warped to simulate different orientations and average intensities from a small set of images. You then use the utility opencv_traincascade, setting a few command-line parameters to select different training options, which outputs a trained classifier for you.
Detection can be performed using either the C++ or the Python interface with this trained classifier.
For instance, using Python you can load the classifier and perform detection on an image getting back a selection of bounding rectangles using:
import cv2
image = cv2.imread('path/to/image')                      # frame to search
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)           # the classifier works on a grayscale image
cc = cv2.CascadeClassifier('path/to/classifierfile')     # the trained cascade file
objs = cc.detectMultiScale(gray)                         # (x, y, w, h) bounding rectangles
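Assuming the detections come back as the usual (x, y, w, h) rectangles, they can then be visualised with something like:

for (x, y, w, h) in objs:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)   # draw each detection
cv2.imwrite("detections.png", image)                               # placeholder output file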
I am trying to use OpenCV and Python to stitch together several hundred puzzle piece images into one large, complete image. All of the images are digitized and are in a PNG format. The pieces were originally from a scan and extracted into individual pieces, so they have transparent backgrounds and are each a single piece. What is the process of comparing them and finding their matches using OpenCV?
The plan is that the images and puzzle pieces will always be different, and this Python program will take a scan of all the pieces laid out, crop out the pieces (which it does now), and reassemble the puzzle.
If this is a small fun project that you are trying to do, you can compare image histograms or use SIFT/SURF. I don't think there is an implementation of SIFT/SURF in the Python API. If you can find a compatible equivalent, you can do it.
Comparing images is very much dependent on the data set that you have. Some techniques work better than others.
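For instance, histogram comparison with OpenCV could look roughly like the sketch below; the file names and bin counts are placeholders, and the alpha channel is used as a mask so the transparent background does not skew the histograms:

import cv2

def piece_histogram(path):
    img = cv2.imread(path, cv2.IMREAD_UNCHANGED)        # BGRA, keeping the alpha channel
    mask = img[:, :, 3].copy()                          # opaque pixels only
    hsv = cv2.cvtColor(cv2.cvtColor(img, cv2.COLOR_BGRA2BGR), cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1], mask, [30, 32], [0, 180, 0, 256])
    return cv2.normalize(hist, hist).flatten()

h1 = piece_histogram("piece_001.png")                   # placeholder file names
h2 = piece_histogram("piece_002.png")
print(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))      # 1.0 means identical histograms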