What is a PyTorch snapshot and a Dlib model? (Running Hopenet)

I am trying to make Hopenet run on my computer, using the code on GitHub.
This is my forked code, with test_hopenet.py updated to Python 3.
I installed all required libraries, using pip install "pillow<7" because of some old requirements.
python code/test_on_video_dlib.py --snapshot PATH_OF_SNAPSHOT --face_model PATH_OF_DLIB_MODEL --video PATH_OF_VIDEO --output_string STRING_TO_APPEND_TO_OUTPUT --n_frames N_OF_FRAMES_TO_PROCESS --fps FPS_OF_SOURCE_VIDEO
It looks like I am missing some basic understanding which is supposed to be obvious:
What is a SNAPSHOT? Where do I get one?
What is a Dlib Model? Where do I get the right one?
Again - I just want to make this code run, but I can't understand the instructions.

For the dlib model you can use:
https://github.com/davisking/dlib-models/mmod_human_face_detector.dat.bz2
For the snapshot path you can use hopenet_robust_alpha1.pkl, downloadable from the link "300W-LP, alpha 1, robust to image quality" under the Pretrained Models section of Nathaniel Ruiz's README.md.
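To demystify the two terms: the "snapshot" is just a file of saved PyTorch weights for the trained Hopenet network, and the Dlib model is a pretrained CNN face detector used to locate faces before head-pose estimation. A minimal sketch of how the script consumes the two files (file names are the ones mentioned above; the loading details may differ slightly from the actual test_on_video_dlib.py):

import torch
import dlib

# the "snapshot" is a saved state_dict of the trained Hopenet weights
saved_state_dict = torch.load('hopenet_robust_alpha1.pkl', map_location='cpu')

# the dlib model is a CNN face detector (decompress the .bz2 file first)
cnn_face_detector = dlib.cnn_face_detection_model_v1('mmod_human_face_detector.dat')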


Trouble setting up libraries imported from github

I'm trying to clone a repo to my machine to test changes for a pull request.
The repo in question is a clone of PyTorch, and I want to add something to one of the files to fix an issue. I figured out how to clone the repo, but I can't figure out how to import the PyTorch libraries when I write a test file that contains something like:
import torch
x = torch.rand(5, 3)
print(x)
Where am I supposed to create a test.py file? How do I add PyTorch (specifically my cloned version of PyTorch) to the list of dependencies for Python to run with? I tried just creating a test.py file at the same level as the cloned repo, but I get the error message
"no module named torch.version". I am using VS Code.
I'm new to using git and not extremely familiar with the structure of libraries like this. I tried looking through github, stack overflow and the pytorch docs but was unable to find an explanation.
Make sure your currently selected interpreter doesn't contain PyTorch, then, before import torch, add the following code:
import sys
sys.path.append("path/to/the-folder-that-contains-pytorch")  # replace with the actual path to your clone
Please have a try.
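To confirm that the cloned copy, not an installed one, is actually being imported, you can print the module's file path (a quick sanity check, not part of the original answer; the path is a hypothetical placeholder for your clone):

import sys
sys.path.insert(0, "path/to/pytorch-clone")  # hypothetical path to your clone

import torch
print(torch.__file__)  # should point inside your clone, not site-packages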

ModuleNotFoundError: No module named 'gensim.models.wrappers'

I am trying to use the LDA Mallet model, but I am facing a "No module named 'gensim.models.wrappers'" error.
I have gensim installed, and gensim.models.LdaMulticore works properly.
The Java Development Kit is installed.
I have already downloaded mallet-2.0.8.zip and unzipped it on the C:\ drive.
This is the code I am trying to use:
import os
from gensim.models.wrappers import LdaMallet  # this import is what fails

os.environ.update({'MALLET_HOME': r'C:/mallet-2.0.8/'})  # point gensim at the MALLET install
mallet_path = r'C:/mallet-2.0.8/bin/mallet'  # path to the mallet binary
Does anyone know what is wrong here? Many thanks!
If you've installed the latest Gensim, 4.0.0 (as of late March 2021), you'll find that the LdaMallet wrapper has been removed, along with a number of other tools that simply wrapped external tools/APIs.
You can see the note in the Gensim migration guide at:
https://github.com/RaRe-Technologies/gensim/wiki/Migrating-from-Gensim-3.x-to-4#15-removed-third-party-wrappers
If the use of that tool is essential to your project, you may be able to:
- install an older version of Gensim, such as 3.8.3 (see the sketch after this list) - though of course you'd then be missing the latest fixes & optimizations on any other Gensim models you're using
- extract the ldamallet.py source code from that older version & update/move it to your own code for private use - dealing with whatever issues arise
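For the first option, after pinning the older release with pip install "gensim==3.8.3", a minimal sketch of the wrapper usage (corpus and id2word are assumed to be an existing bag-of-words corpus and Dictionary from your pipeline; they are not defined here):

import os
from gensim.models.wrappers import LdaMallet  # only available in gensim < 4.0

os.environ.update({'MALLET_HOME': r'C:/mallet-2.0.8/'})
mallet_path = r'C:/mallet-2.0.8/bin/mallet'

# train a MALLET LDA model on an existing corpus/dictionary
lda_mallet = LdaMallet(mallet_path, corpus=corpus, num_topics=10, id2word=id2word)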
I had the same issue with Gensim's wrapper for MALLET but didn't want to downgrade. There is this new wrapper that seems to do the job pretty well.
https://github.com/maria-antoniak/little-mallet-wrapper/blob/master/demo.ipynb

How to enable non-free algorithms (SIFT, SURF) of OpenCV for Python?

I compiled the opencv-master by adding extra modules from opencv_contrib-master with CMake based on the following guide:
http://pravo-u.ru/blog/%D1%81%D0%B1%D0%BE%D1%80%D0%BA%D0%B0-opencv-4-1-0-pre-%D0%B4%D0%BB%D1%8F-python3-%D0%B2-windows-7-x64-%D1%81-%D0%B8%D1%81%D0%BF%D0%BE%D0%BB%D1%8C%D0%B7%D0%BE%D0%B2%D0%B0%D0%BD%D0%B8%D0%B5%D0%BC-cmake/
These videos also show the same procedure:
https://www.youtube.com/watch?v=By-PKbWDZNk
https://www.youtube.com/watch?v=fIpTks0G2m0
The CMake compilation completed correctly. In the folder C:\Program Files (x86)\Python37-32\Lib\site-packages\~v2 there is a cv2.cp37-win32.pyd file.
I am just a little bit confused about the last step: which files do I need to copy where, and what settings do I need to change?
Can anybody help me with that?
Python in VS Code cannot see cv2 yet.
My built OpenCV is here: D:\OpenCV\opencvbuild
I use Windows 7.
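Assuming the built cv2.cp37-win32.pyd has been placed on the interpreter's path (e.g. in site-packages), a minimal check that the build is visible and the non-free modules were compiled in (a generic verification sketch, not from the original thread; xfeatures2d requires the contrib modules and the OPENCV_ENABLE_NONFREE CMake flag):

import cv2

print(cv2.__version__)
# raises an error if the non-free modules were not built in
sift = cv2.xfeatures2d.SIFT_create()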

Communications between Blender and Tensorflow

I have a simulation inside Blender that I would like to control externally using the TensorFlow library. The complete process would go something like this:
while True:
    state = observation_from_blender()
    action = find_action_using_tensorflow_neural_net(state)
    take_action_inside_blender(action)
I don't have much experience with the threading or subprocess modules and so am unsure as to how I should go about actually building something like this.
Rather than mess around with TensorFlow connections and APIs, I'd suggest you take a look at the OpenAI Universe Starter Agent[1]. The advantage here is that as long as you have a VNC session open, you can connect a TF-based system to do reinforcement learning on your actions.
Once you have a model constructed via this, you can focus on actually building a lower-level API for the two things to talk to each other.
[1] https://github.com/openai/universe-starter-agent
Thanks for your response. Unfortunately, trying to get Universe working with my current setup was a bit of a pain. I'm also on a fairly tight deadline so I just needed something that worked straight away.
I did find a somewhat DIY solution that worked well for my situation using the pickle module. I don't really know how to convert this approach into proper pseudocode, so here is the general outline:
Process 1 - TensorFlow:
    load up TF graph
    wait until pickled state arrives
while not terminated:
    Process 2 - Blender:
        run simulation until next action required
        pickle state
        wait until pickled action arrives
    Process 1 - TensorFlow:
        unpickle state
        feedforward state through neural net, find action
        pickle action
        wait until next pickled state arrives
    Process 2 - Blender:
        unpickle action
        take action
This approach worked well for me, but I imagine there are more elegant, lower-level solutions. I'd be curious to hear of any other approaches that achieve the same goal.
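A rough sketch of what each side's "pickle and wait" step can look like with files on disk (the file names, polling interval, and helper names are illustrative, not from the original post):

import os
import pickle
import time

def send(obj, path):
    # write to a temp file first so the reader never sees a half-written pickle
    tmp = path + '.tmp'
    with open(tmp, 'wb') as f:
        pickle.dump(obj, f)
    os.replace(tmp, path)

def receive(path):
    # poll until the other process has written the file, then consume it
    while not os.path.exists(path):
        time.sleep(0.01)
    with open(path, 'rb') as f:
        obj = pickle.load(f)
    os.remove(path)
    return obj

The Blender process would call send(state, 'state.pkl') and then receive('action.pkl'); the TensorFlow process does the reverse.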
I did this to install TensorFlow with the Python version that comes bundled with Blender.
Blender version: 2.78
Python version: 3.5.2
First of all you need to install pip for Blender's bundled Python. For that I followed the instructions in this link: https://blender.stackexchange.com/questions/56011/how-to-use-pip-with-blenders-bundled-python. (What I did was drag the python3.5m icon into the terminal and then type the path '~/get-pip.py' after it.)
Once you have pip installed, you are all set up to install third-party modules and use them with Blender.
Navigate to the bin folder inside the '/home/path to blender/2.78/' directory. To install TensorFlow, drag the python3.5m icon into the terminal, then drag the pip3 icon into the terminal, and append install tensorflow.
I got an error mentioning that the module lib2to3 was not found. Even installing the module didn't help, as I got the message that no such module exists. Fortunately, I had Python 3.5.2 installed on my machine, so navigate to /usr/lib/python3.5 and copy the lib2to3 folder from there to /home/user/blender/2.78/python/lib/python3.5, then run the installation command for TensorFlow again. The installation should complete without any errors. Make sure that you test the installation by importing tensorflow.
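Put together, the steps above amount to something like the following two commands (the Blender install path is illustrative for a Linux setup of Blender 2.78; substitute your own):

/home/user/blender/2.78/python/bin/python3.5m ~/get-pip.py
/home/user/blender/2.78/python/bin/pip3 install tensorflow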

Uncertain how to run Bazel: Tensorflow Inception Retrain New Categories Tutorial Python

I am experiencing difficulty using Bazel, more specifically with the workspace requirements. I am not sure how to interpret the following command from the TF Retrain Inception tutorial:
bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos
I am able to build the "retrain.py" file. This is the output from my build:
r#r-VirtualBox:~/Desktop/sf_tensorflow-master/tensorflow/examples/image_retraining$ bazel build retrain.py
INFO: Found 1 target...
INFO: Elapsed time: 0.200s, Critical Path: 0.01s
But I am uncertain how to proceed with the next step. Is "bazel-bin" a folder somewhere? Is it included in the TensorFlow examples folder? Or is this something I have to generate?
Also, the way in which "retrain" is referenced in the command makes me think it is a folder and no longer a Python file. Did it change through this process?
I would greatly appreciate a more detailed breakdown of what Bazel is doing in this command and how I adjust it to run the code on my own directory of images.
Thanks!
I believe that to run this example you need to have built TensorFlow from source. It appears you're running some Linux distro; you can find the Bazel build instructions here. Once you have that installed, you can pull the bleeding-edge TensorFlow from here, and here are the "building from source" instructions. Assuming you've done all of that, if you navigate to the root of the project, i.e.
cd tensorflow
you should see 5 bazel symlinks in the directory listing, one of which is bazel-bin.
Now just download the images as directed and build with
bazel build -c opt --copt=-mavx tensorflow/examples/image_retraining:retrain
Once the build is finished you can run the retraining
bazel-bin/tensorflow/examples/image_retraining/retrain --image_dir ~/flower_photos
Note that both of the above commands need to be run from the root of the project (tensorflow/ in this case) and the tutorial assumes you have put the flower images in your home dir.
Hope this helps.
