How to compute head pose estimation? - Python

So I would like to make a game in Python where you control the character with your head.
But all of the tutorials I have found so far don't seem to cover the 2D case. I just want to know the position of the face and its angle. Are there any tutorials/modules that can do this without too much coding?

The short answer is no: there is no fully generic way to do head pose estimation. This tutorial on the subject gives a good explanation of the current limitations and constraints. The biggest problem you will have is getting a robust, diverse, well-sampled dataset, covering a variety of conditions, suitable to your use case.

Related

Measuring sea urchins, struggling with detection and accuracy in OpenCV and potentially looking for alternative method

I am a marine biology PhD candidate with minimal experience with Python. I need to measure the diameters of a large number of urchins frequently, which I would like to automate, and also to improve accuracy (it's tricky to physically measure urchins).
I have been advised to use OpenCV and have been trying to adapt the code from this pyimagesearch blog post. I have not found it to be highly effective, for two reasons:
Accuracy is unlikely to be high enough (based on the small sample I have been able to test so far). This is alluded to in the blog post, which notes the method is not ideal for round objects.
I am also picking up many incorrect/inappropriate detections (not sure about the terminology, but see these images for examples). Basically, it picks up not only the full urchin but also hundreds of individual points on the urchins. I have tried increasing the kernel size, but this has not made any difference, and I cannot work out how to fix it.
I suspect there is an easy fix for the spurious detections, and I would appreciate it if anyone could point me in the right direction. If there is a more accurate way of doing this altogether, I would also like to know about it.
We need to know the size of the urchins' shells, not the spines. So ideally I would like to measure just the shell; if I have to measure the spines as well, I can subtract a constant (the average spine length for urchins of a given size), which is workable but would reduce accuracy further.
Any assistance would be appreciated.
Thanks in advance.

Edge clipping algorithm

I am trying to implement this method of edge detection on 3D meshes, and I got fairly impressive results, but it's far from good.
The proposed thinning algorithm returns funky branched edges that aren't really useful. The fault could be in my reproduction of the algorithm in Python.
https://s1.postimg.org/87w1ul2zxb/Screen_Shot_20171009161230.png
So I guess there are three ways to solve this right now:
- find a way to clip those short branches;
- find my error in the algorithm; or
- try another thinning algorithm.
So far I haven't been able to do any of these.
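For the first option, short branches can be clipped by repeatedly deleting skeleton endpoints (pixels with exactly one 8-connected neighbour). A minimal NumPy sketch of this naive spur pruning on a toy binary skeleton — note the caveat in the comments, and the toy skeleton itself is just an illustration:

```python
import numpy as np

def prune_spurs(skel, iterations):
    """Iteratively delete endpoint pixels (exactly one 8-connected
    neighbour) to clip short branches off a binary skeleton.

    Caveat: naive pruning also shortens the *open ends* of genuine
    edges by up to `iterations` pixels; protecting pixels anchored at
    junctions would avoid that.
    """
    skel = skel.astype(bool).copy()
    for _ in range(iterations):
        padded = np.pad(skel, 1)
        # Count each pixel's 8 neighbours via shifted copies of the grid.
        nb = sum(np.roll(np.roll(padded, dy, 0), dx, 1)
                 for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                 if (dy, dx) != (0, 0))[1:-1, 1:-1]
        endpoints = skel & (nb == 1)
        if not endpoints.any():
            break
        skel &= ~endpoints
    return skel

# Toy example: a horizontal edge with a short 2-pixel branch hanging off.
skel = np.zeros((7, 9), dtype=bool)
skel[3, 1:8] = True   # main edge
skel[4, 4] = True     # spur pixel 1
skel[5, 4] = True     # spur pixel 2
pruned = prune_spurs(skel, iterations=2)
print(pruned.sum())
```

Two iterations remove the two-pixel spur; longer spurs need more iterations, at the cost of eating further into open-ended edges.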

TensorFlow Image Object Location

This is a fairly straightforward question, but I am new to the field. Using this tutorial I have a great way of detecting certain patterns or features. However, the images I'm testing are large, and the feature I'm looking for often occupies only a small fraction of the image. When I run the model on the entire picture the classification is bad, but when the image is zoomed in and cropped the classification is good.
I've considered writing a script that breaks an image into many smaller images and runs the test on all of them (time isn't a huge concern). However, this still seems inefficient and less than ideal. I'm wondering about suggestions for the best, but also easiest to implement, solution.
I'm using Python.
This may seem to be a simple question, which it is, but the answer is not so simple. Localization is a difficult task and requires much more legwork than classifying an entire image. There are a number of different tools and models that people have experimented with. One is R-CNN, which looks at many regions in a manner not too dissimilar to what you suggested. Alternatively, you could look at a model such as YOLO or TensorBox.
There is no one answer to this, and this gets asked a lot! For example: Does Convolutional Neural Network possess localization abilities on images?
The term you want to look for in research papers is "localization". If you want a quick-and-dirty solution (and time isn't critical), then sliding windows is definitely a good first step. I hope this gets you going in your project and you can progress from there.
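The sliding-window idea from the question can be sketched in a few lines. The classifier below is a stand-in stub (in the real pipeline it would be the trained TensorFlow model's prediction), and the window size, stride, and synthetic image are illustrative assumptions:

```python
import numpy as np

def sliding_windows(image, win, stride):
    """Yield (x, y, crop) for every window position over a 2-D image."""
    h, w = image.shape[:2]
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            yield x, y, image[y:y + win, x:x + win]

def score(crop):
    """Stand-in 'classifier': scores a crop by mean intensity. In the
    real pipeline this would call the trained model on the crop."""
    return float(crop.mean())

# Synthetic large image with a small bright "feature" at rows 60-69,
# columns 90-99.
image = np.zeros((128, 128), dtype=np.float32)
image[60:70, 90:100] = 1.0

# Classify every window and keep the best-scoring position.
bx, by, _ = max(sliding_windows(image, win=16, stride=8),
                key=lambda t: score(t[2]))
print(bx, by)
```

With overlapping windows (stride smaller than the window), the best window localizes the feature without zooming manually; R-CNN-style models automate and greatly speed up this region search.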

Scan Matching Algorithm giving wrong values for translation but right value for rotation

I already posted this on robotics.stackexchange, but got no relevant answer.
I'm currently developing a SLAM software on a robot, and I tried the Scan Matching algorithm to solve the odometry problem.
I read this article: "Metric-Based Iterative Closest Point Scan Matching for Sensor Displacement Estimation".
I found it really well explained, and I strictly followed the formulas given in the article to implement the algorithm.
You can see my implementation in Python here:
ScanMatching.py
The problem I have is that, during my tests, the right rotation was found, but the translation was completely wrong, with extremely high values.
Do you guys have any idea what the problem in my code could be?
Otherwise, should I post my question on the Mathematics Stack Exchange?
The ICP part should be correct, as I have tested it many times, but the least-squares minimization doesn't seem to give good results.
As you'll notice, I used bigfloat.BigFloat values in many places, because sometimes the values were too big for the maximum float.
I don't know if you have already solved this issue.
I didn't read the full article, but I noticed it is rather old.
IMHO (I'm not an expert here), I would try combining specific algorithms: feature detection and description to get a point cloud, a descriptor matcher to relate points, and bundle adjustment to get the roto-translation matrix.
I am myself going to try sba (http://users.ics.forth.gr/~lourakis/sba/), or more specifically cvsba (http://www.uco.es/investiga/grupos/ava/node/39/), because I'm on OpenCV.
If you have enough CPU/GPU power, give the AKAZE feature detector and descriptor a chance.
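One way to sidestep an unstable least-squares minimization entirely (this is not the article's metric-based method, but a standard alternative): once ICP has matched the points, the rigid transform can be recovered in closed form with the SVD-based Kabsch method, which needs no big-float arithmetic. A 2D sketch on synthetic data:

```python
import numpy as np

def rigid_transform_2d(src, dst):
    """Closed-form least-squares rigid transform (rotation R,
    translation t) mapping 2-D point set src onto dst (Kabsch/SVD)."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # Cross-covariance of the centred point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection solution (determinant -1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Synthetic "scan": rotate by 30 degrees, translate by (0.5, -1.2).
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
t_true = np.array([0.5, -1.2])
src = np.random.default_rng(0).uniform(-5, 5, size=(100, 2))
dst = src @ R_true.T + t_true

R, t = rigid_transform_2d(src, dst)
print(np.rad2deg(np.arctan2(R[1, 0], R[0, 0])), t)
```

If this closed-form solution also produces a huge translation on your matched pairs, the problem is likely in the correspondences rather than the minimization itself.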

Can difflib be used to make a plagiarism detection program?

I am trying to figure this out: can the difflib library in Python be used to make some kind of plagiarism detection program? If so, how?
I hope someone can help me answer this question.
It could be used, but you're going to face all the same general issues found in automated plagiarism detection. It might give you a bit of a head start on implementing some of the algorithms you need, but I don't think it will take you very far.
The short answer is yes.
The long answer is that it will be a lot of work, and you'll probably find that you'd be better off using another language or an off-the-shelf tool, given the vast number of sources you're likely to be comparing against.
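As a starting point, difflib.SequenceMatcher already gives a similarity ratio between two texts; comparing word sequences rather than raw characters is a common first step (the sample sentences below are just illustrations):

```python
import difflib

def similarity(a, b):
    """Ratio in [0, 1] of matching subsequences between two texts,
    computed over words rather than raw characters."""
    return difflib.SequenceMatcher(None, a.split(), b.split()).ratio()

original  = "the quick brown fox jumps over the lazy dog"
copied    = "the quick brown fox leaps over the lazy dog"
unrelated = "completely different sentence about something else"

print(similarity(original, copied))     # high: only one word changed
print(similarity(original, unrelated))  # low: no shared words
```

A real detector would add normalization (case, punctuation, stemming) and an efficient way to compare against many sources, which is where difflib alone stops scaling.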
