Where is the model directory in pocketsphinx-python?

I'm trying to make a simple speech recognition program in Python using Sphinx. I installed it using pip in CMD, then I installed PocketSphinx in the same way. The tutorial I'm following says I need to include the model directories for PocketSphinx, but I don't know where the directory is. How do I find it, and am I doing something wrong?

If you are using pocketsphinx-python installed via pip and following example code similar to that provided on the package's GitHub page, you may find a few code changes are needed.
Here's what's currently in the README (as of March 11, 2018):
from os import path
from pocketsphinx.pocketsphinx import *
from sphinxbase.sphinxbase import *
MODELDIR = "pocketsphinx/model"
DATADIR = "pocketsphinx/test/data"
# Create a decoder with certain model
config = Decoder.default_config()
config.set_string('-hmm', path.join(MODELDIR, 'en-us/en-us'))
config.set_string('-lm', path.join(MODELDIR, 'en-us/en-us.lm.bin'))
config.set_string('-dict', path.join(MODELDIR, 'en-us/cmudict-en-us.dict'))
This not-yet-accepted pull request describes some changes which may help those of us using pip and working on our Python code outside the downloaded module's directory (at least in a *nix/Mac environment; I haven't tested on Windows). Here's a diff snippet; the key idea is to use path.dirname(pocketsphinx.__file__) to get the base directory in which to look for the model directory:
-MODELDIR = "pocketsphinx/model"
-DATADIR = "pocketsphinx/test/data"
+import pocketsphinx;
+POCKETSPHINXDIR = path.dirname(pocketsphinx.__file__)
+MODELDIR = path.join(POCKETSPHINXDIR, "model")
+DATADIR = path.join(POCKETSPHINXDIR, "data")
(Small note: I took the liberty of fixing a typo in the spelling of POCKETSPHINXDIR, so this code isn't exactly the same as the pull request.)
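Putting the pieces together, a minimal end-to-end sketch might look like this. It is hedged: the relative layout under model/ differs between a GitHub checkout and a pip install, so list MODELDIR first and adjust the en-us paths if yours differs; the goforward.raw file name is taken from the package's README examples.
import os
from os import path

import pocketsphinx
from pocketsphinx.pocketsphinx import Decoder

POCKETSPHINXDIR = path.dirname(pocketsphinx.__file__)
MODELDIR = path.join(POCKETSPHINXDIR, "model")
DATADIR = path.join(POCKETSPHINXDIR, "data")
print(os.listdir(MODELDIR))  # check what the pip package actually ships

# Paths below assume the pip layout (en-us/, en-us.lm.bin, cmudict-en-us.dict
# directly under MODELDIR); adjust if your listing shows something else.
config = Decoder.default_config()
config.set_string('-hmm', path.join(MODELDIR, 'en-us'))
config.set_string('-lm', path.join(MODELDIR, 'en-us.lm.bin'))
config.set_string('-dict', path.join(MODELDIR, 'cmudict-en-us.dict'))
decoder = Decoder(config)

# Decode one of the bundled test recordings
with open(path.join(DATADIR, 'goforward.raw'), 'rb') as f:
    decoder.start_utt()
    decoder.process_raw(f.read(), False, True)
    decoder.end_utt()
print('Hypothesis:', decoder.hyp().hypstr)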

Go to the location where your Python is installed and look for the following directory inside it (this path is for a Windows installation):
Lib\site-packages\speech_recognition\pocketsphinx-data
The default model is en-US; however, there are a few other language models that you can download from here:
https://sourceforge.net/projects/cmusphinx/files/Acoustic%20and%20Language%20Models/
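If you'd rather not browse manually, a quick sketch (assuming the SpeechRecognition package is importable as speech_recognition) prints the same path on any platform:
import os
import speech_recognition

# pocketsphinx-data sits next to the installed speech_recognition package
print(os.path.join(os.path.dirname(speech_recognition.__file__), "pocketsphinx-data"))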

It may be late to answer now, but for newcomers: the Python module has some convenience functions:
from pocketsphinx import get_model_path, get_data_path
print(get_model_path())
print(get_data_path())
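For example, here is a short sketch feeding these paths into the Decoder configuration from the README snippet above (the en-us file names are assumed to match the model bundled with the pip package):
from os import path
from pocketsphinx import get_model_path
from pocketsphinx.pocketsphinx import Decoder

model_path = get_model_path()
config = Decoder.default_config()
config.set_string('-hmm', path.join(model_path, 'en-us'))
config.set_string('-lm', path.join(model_path, 'en-us.lm.bin'))
config.set_string('-dict', path.join(model_path, 'cmudict-en-us.dict'))
decoder = Decoder(config)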

Building aiortc library (Python) from Yocto

I am not sure if this is the right place to ask this, but I'll try anyway. I have to integrate the Python aiortc library in an embedded system which uses Yocto for building the entire environment.
Because there is no recipe for this library, I've generated one using pipoe, following this tutorial.
Using the command pipoe --package aiortc --version 0.9.28 --python python3, I generated a few .bb files inside a custom layer, such as python3-aioice_xx.xx.bb, python3-aiortc_xx.xx.bb, python3-cffi_xx.xx.bb and so on (I think those are dependencies).
Now I want to build this recipe with bitbake python3-aiortc to check that everything is in order. It seems to proceed well and find all the required files until the following error occurs, and I don't know how to address it. Can someone help me?
I think that some relevant lines are:
ERROR: python3-aiortc-0.9.28-r0 do_configure: Function failed: do_configure
ERROR: Do not try to fetch `cffi>=1.0.0' for building. Please add its native recipe to DEPENDS.
ERROR: python3-pyee-7.0.1-r0 do_configure: Function failed: do_configure
Have a look at the complete log that I have linked for further info.
===EDIT===
Added the python3-aiortc recipe content.
SUMMARY = "An implementation of WebRTC and ORTC"
HOMEPAGE = "https://github.com/aiortc/aiortc"
AUTHOR = "Jeremy Lainé <jeremy.laine#m4x.org>"
LICENSE = "BSD"
LIC_FILES_CHKSUM = "file://LICENSE;md5=907b5e856b2e6bcd8a3cc8d338a6166f"
SRC_URI = "https://files.pythonhosted.org/packages/1a/34/d9c8e19b4d5157721a5b77750116c6bb6355f1d85b92e7de491269b9ee51/aiortc-0.9.28.tar.gz"
SRC_URI[md5sum] = "50dc651d643b16c95b0e1ad259baeb51"
SRC_URI[sha256sum] = "4a41122e043a75c93a80dbb6d884b6f7cf27b774ebdef226d819a2c3a997c550"
S = "${WORKDIR}/aiortc-0.9.28"
RDEPENDS_${PN} = "python3-aioice python3-av python3-cffi python3-crc32c python3-cryptography python3-pyee python3-pylibsrtp"
inherit setuptools3
Your recipe depends on other recipes. I am guessing you don't have them.
If you have them, you probably need to add them to IMAGE_INSTALL_append in local.conf.
If you don't have them, you can try to search for them in OpenEmbedded Index, for example:
https://layers.openembedded.org/layerindex/branch/master/recipes/?q=cffi
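As a sketch of the kind of change the do_configure error is asking for (the exact recipe names are assumptions; check the index above for what your layers actually provide):
# In the python3-aiortc / python3-pyee recipes: build-time (native) dependencies
DEPENDS += "python3-cffi-native python3-setuptools-native"

# In conf/local.conf: make the runtime packages part of the image
IMAGE_INSTALL_append = " python3-aiortc python3-aioice python3-cffi"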

PyInstaller & Pymeasure : NotImplementedError

I am currently experiencing difficulties using PyInstaller on code that relies on the Pymeasure library. The program works fine from the prompt, but not when started from the executable generated by PyInstaller.
Here is a simple example of code that works from the prompt but not when frozen:
import visa
from pymeasure.instruments.keithley import Keithley2000, Keithley2400
rm = visa.ResourceManager()
list_available = rm.list_resources()
print(list_available)
keithley = Keithley2400("GPIB1::23")
keithley.apply_current() # Sets up to source current
keithley.source_current_range = 10e-3 # Sets the source current range to 10 mA
keithley.compliance_voltage = 10 # Sets the compliance voltage to 10 V
keithley.source_current = 0 # Sets the source current to 0 mA
keithley.enable_source() # Enables the source output
keithley.measure_voltage() # Sets up to measure voltage
keithley.ramp_to_current(5e-3) # Ramps the current to 5 mA
print(keithley.voltage) # Prints the voltage in Volts
keithley.shutdown() # Ramps the current to 0 mA and disables output
Running the executable fails with a NotImplementedError.
Please note that I have PyVISA 1.9.1 installed.
Why do I get this error, and how do I fix it?
You need to make sure you include the package metadata for PyVISA in your PyInstaller project. PyInstaller has a utility hook for that job; create a hook-pyvisa.py hook file (if you don't already have one) with:
from PyInstaller.utils.hooks import copy_metadata
datas = copy_metadata("pyvisa")
and tell PyInstaller about it with the --additional-hooks-dir command-line switch. See the documentation on how hooks work for more details.
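For example, if the hook file sits next to your script (the script name below is just a placeholder):
$ pyinstaller --additional-hooks-dir=. my_measurement_script.py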
pymeasure relies on the pyvisa.__version__ attribute to determine whether you have installed the correct version of that project. But pyvisa.__version__ defaults to "unknown" unless it can locate its metadata files, which give pkg_resources the information needed to retrieve the version.
You can verify that PyVISA was installed correctly by importing it yourself and testing the __version__ attribute:
import pyvisa
print("PyVisa version", pyvisa.__version__)
Are you sure you are connected to the instrument? Your code references "GPIB1::23", but your print(list_available) returns "GPIB1::24".

Is there any way I can download the pre-trained models available in PyTorch to a specific path?

I am referring to the models that can be found here: https://pytorch.org/docs/stable/torchvision/models.html#torchvision-models
As @dennlinger mentioned in his answer, torch.utils.model_zoo is called internally when you load a pre-trained model.
More specifically, the method torch.utils.model_zoo.load_url() is called every time a pre-trained model is loaded. Its documentation mentions:
The default value of model_dir is $TORCH_HOME/models, where $TORCH_HOME defaults to ~/.torch.
The default directory can be overridden with the $TORCH_HOME environment variable.
This can be done as follows:
import torch
import torchvision
import os
# Suppose you want to store the pre-trained ResNet weights under models\resnet
os.environ['TORCH_HOME'] = 'models\\resnet'  # set the environment variable before loading the model
resnet = torchvision.models.resnet18(pretrained=True)
I came across the above solution by raising an issue in PyTorch's GitHub repository:
https://github.com/pytorch/vision/issues/616
This led to an improvement in the documentation, i.e., the solution mentioned above.
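Alternatively, here is a sketch (not from the linked issue) that passes model_dir explicitly instead of overriding $TORCH_HOME, reusing the resnet101 URL quoted further down:
import torch.utils.model_zoo as model_zoo
import torchvision

# Download (or reuse) the weights in a directory of your choice
state_dict = model_zoo.load_url(
    'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth',
    model_dir='models/resnet')

model = torchvision.models.resnet101(pretrained=False)
model.load_state_dict(state_dict)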
Yes, you can simply copy the URLs and use wget to download them to the desired path. Here's an illustration:
For AlexNet:
$ wget -c https://download.pytorch.org/models/alexnet-owt-4df8aa71.pth
For Google Inception (v3):
$ wget -c https://download.pytorch.org/models/inception_v3_google-1a9a5a14.pth
For SqueezeNet:
$ wget -c https://download.pytorch.org/models/squeezenet1_1-f364aa15.pth
For MobileNetV2:
$ wget -c https://download.pytorch.org/models/mobilenet_v2-b0353104.pth
For DenseNet201:
$ wget -c https://download.pytorch.org/models/densenet201-c1103571.pth
For MNASNet1_0:
$ wget -c https://download.pytorch.org/models/mnasnet1.0_top1_73.512-f206786ef8.pth
For ShuffleNetv2_x1.0:
$ wget -c https://download.pytorch.org/models/shufflenetv2_x1-5666bf0f80.pth
If you want to do it in Python, then use something like:
In [11]: from six.moves import urllib
# resnet 101 host url
In [12]: url = "https://download.pytorch.org/models/resnet101-5d3b4d8f.pth"
# download and rename the file to `resnet_101.pth`
In [13]: urllib.request.urlretrieve(url, "resnet_101.pth")
Out[13]: ('resnet_101.pth', <http.client.HTTPMessage at 0x7f7fd7f53438>)
P.S.: You can find the download URLs in the respective Python modules of torchvision.models.
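Once downloaded, the weights can be loaded into the matching architecture with torch.load; a minimal sketch using the file saved by the urlretrieve call above:
import torch
import torchvision

model = torchvision.models.resnet101(pretrained=False)
model.load_state_dict(torch.load("resnet_101.pth"))
model.eval()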
There is a script available that will output a list of URLs across the entire package.
From within the pytorch/vision repository, execute the following:
python scripts/collect_model_urls.py .
# ...
# https://download.pytorch.org/models/swin_v2_b-781e5279.pth
# https://download.pytorch.org/models/swin_v2_s-637d8ceb.pth
# https://download.pytorch.org/models/swin_v2_t-b137f0e2.pth
# https://download.pytorch.org/models/vgg11-8a719046.pth
# https://download.pytorch.org/models/vgg11_bn-6002323d.pth
# ...
TL;DR: No, it is not possible directly, but you can easily adapt it.
I think what you want to do is to look at torch.utils.model_zoo, which is internally called when you load a pre-trained model:
If we look at the code for the pre-trained models, for example AlexNet here, we can see that it simply calls the previously mentioned model_zoo function, but without the saved location. You can either modify the PyTorch source to specify this (that would actually be a great addition IMO, so maybe open a pull request for it), or simply adapt the code in the second link to your own liking (save it to a custom location under a different name) and manually insert the relevant location there.
If you want to regularly update PyTorch, I would heavily recommend the second method, since it doesn't involve directly altering PyTorch's code base, which could break during updates.

How to properly write cross-references to external documentation with intersphinx?

I'm trying to add cross-references to external APIs in my documentation, but I'm facing three different behaviors.
I am using Sphinx (1.3.1) with Python (2.7.3), and my intersphinx mapping is configured as:
{
    'python': ('https://docs.python.org/2.7', None),
    'numpy': ('http://docs.scipy.org/doc/numpy/', None),
    'cv2': ('http://docs.opencv.org/2.4/', None),
    'h5py': ('http://docs.h5py.org/en/latest/', None),
}
I have no trouble writing a cross-reference to numpy API with :class:`numpy.ndarray` or :func:`numpy.array` which gives me, as expected, something like numpy.ndarray.
However, with h5py, the only way I can have a link generated is if I omit the module name. For example, :class:`Group` (or :class:`h5py:Group`) gives me Group but :class:`h5py.Group` fails to generate a link.
Finally, I cannot find a way to write a working cross-reference to OpenCV API, none of these seems to work:
:func:`cv2.convertScaleAbs`
:func:`cv2:cv2.convertScaleAbs`
:func:`cv2:convertScaleAbs`
:func:`convertScaleAbs`
How to properly write cross-references to external API, or configure intersphinx, to have a generated link as in the numpy case?
In addition to the detailed answer from @Gall, I've discovered that intersphinx can also be run as a module:
python -m sphinx.ext.intersphinx 'http://python-eve.org/objects.inv'
This outputs nicely formatted info. For reference: https://github.com/sphinx-doc/sphinx/blob/master/sphinx/ext/intersphinx.py#L390
I gave another try at understanding the content of an objects.inv file, and hopefully this time I inspected numpy and h5py instead of only OpenCV's.
How to read an intersphinx inventory file
Despite the fact that I couldn't find anything useful about reading the content of an objects.inv file, it is actually very simple with the intersphinx module.
from sphinx.ext import intersphinx
import warnings


def fetch_inventory(uri):
    """Read a Sphinx inventory file into a dictionary."""
    class MockConfig(object):
        intersphinx_timeout = None  # type: int
        tls_verify = False

    class MockApp(object):
        srcdir = ''
        config = MockConfig()

        def warn(self, msg):
            warnings.warn(msg)

    return intersphinx.fetch_inventory(MockApp(), '', uri)


uri = 'http://docs.python.org/2.7/objects.inv'

# Read inventory into a dictionary
inv = fetch_inventory(uri)
# Or just print it
intersphinx.debug(['', uri])
File structure (numpy)
After inspecting numpy's inventory, you can see that the keys are domains:
[u'np-c:function',
u'std:label',
u'c:member',
u'np:classmethod',
u'np:data',
u'py:class',
u'np-c:member',
u'c:var',
u'np:class',
u'np:function',
u'py:module',
u'np-c:macro',
u'np:exception',
u'py:method',
u'np:method',
u'np-c:var',
u'py:exception',
u'np:staticmethod',
u'py:staticmethod',
u'c:type',
u'np-c:type',
u'c:macro',
u'c:function',
u'np:module',
u'py:data',
u'np:attribute',
u'std:term',
u'py:function',
u'py:classmethod',
u'py:attribute']
You can see how to write your cross-reference by looking at the content of a specific domain. For example, py:class:
{u'numpy.DataSource': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.DataSource.html#numpy.DataSource',
u'-'),
u'numpy.MachAr': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.MachAr.html#numpy.MachAr',
u'-'),
u'numpy.broadcast': (u'NumPy',
u'1.9',
u'http://docs.scipy.org/doc/numpy/reference/generated/numpy.broadcast.html#numpy.broadcast',
u'-'),
...}
So here, :class:`numpy.DataSource` will work as expected.
h5py
In the case of h5py, the domains are:
[u'py:attribute', u'std:label', u'py:method', u'py:function', u'py:class']
and if you look at the py:class domain:
{u'AttributeManager': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/attr.html#AttributeManager',
u'-'),
u'Dataset': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/dataset.html#Dataset',
u'-'),
u'ExternalLink': (u'h5py',
u'2.5',
u'http://docs.h5py.org/en/latest/high/group.html#ExternalLink',
u'-'),
...}
That's why I couldn't make it work like the numpy references. So a good way to format them is :class:`h5py:Dataset`.
OpenCV
OpenCV's inventory object seems malformed. Where I would expect to find domains, there are actually 902 function signatures:
[u':',
u'AdjusterAdapter::create(const',
u'AdjusterAdapter::good()',
u'AdjusterAdapter::tooFew(int',
u'AdjusterAdapter::tooMany(int',
u'Algorithm::create(const',
u'Algorithm::getList(vector<string>&',
u'Algorithm::name()',
u'Algorithm::read(const',
u'Algorithm::set(const'
...]
and if we take the first one's value:
{u'Ptr<AdjusterAdapter>': (u'OpenCV',
u'2.4',
u'http://docs.opencv.org/2.4/detectorType)',
u'ocv:function 1 modules/features2d/doc/common_interfaces_of_feature_detectors.html#$ -')}
I'm pretty sure it is then impossible to write OpenCV cross-references with this file...
Conclusion
I thought intersphinx generated the objects.inv based on the content of the documentation project in a standard way, which seems not to be the case.
As a result, it seems that the proper way to write cross-references is API dependent and one should inspect a specific inventory object to actually see what's available.
An additional way to inspect the objects.inv file is with the sphobjinv module.
You can search local or even remote inventory files (with fuzzy matching). For instance with scipy:
$ sphobjinv suggest -t 90 -u https://docs.scipy.org/doc/scipy/reference/objects.inv "signal.convolve2d"
Remote inventory found.
:py:function:`scipy.signal.convolve2d`
:std:doc:`generated/scipy.signal.convolve2d`
Note that you may need to use :py:func: and not :py:function: (I'd be happy to know why).
How to use OpenCV 2.4 (cv2) intersphinx
Inspired by @Gall's answer, I wanted to compare the contents of the OpenCV & numpy inventory files. I couldn't get sphinx.ext.intersphinx.fetch_inventory to work from ipython, but the following does work:
curl http://docs.opencv.org/2.4/objects.inv | tail -n +5 | zlib-flate -uncompress > cv2.inv
curl https://docs.scipy.org/doc/numpy/objects.inv | tail -n +5 | zlib-flate -uncompress > numpy.inv
numpy.inv has lines like this:
numpy.ndarray py:class 1 reference/generated/numpy.ndarray.html#$ -
whereas cv2.inv has lines like this:
cv2.imread ocv:pyfunction 1 modules/highgui/doc/reading_and_writing_images_and_video.html#$ -
So presumably you'd link to the OpenCV docs with :ocv:pyfunction:`cv2.imread` instead of :py:function:`cv2.imread`. Sphinx doesn't like it though:
WARNING: Unknown interpreted text role "ocv:pyfunction".
A bit of Googling revealed that the OpenCV project has its own "ocv" sphinx domain: https://github.com/opencv/opencv/blob/2.4/doc/ocv.py -- presumably because they need to document C, C++ and Python APIs all at the same time.
To use it, save ocv.py next to your Sphinx conf.py, and modify your conf.py:
import os
import sys

sys.path.insert(0, os.path.abspath('.'))
import ocv

extensions = [
    'ocv',
]
intersphinx_mapping = {
    'cv2': ('http://docs.opencv.org/2.4/', None),
}
In your rst files you need to say :ocv:pyfunc:`cv2.imread` (not :ocv:pyfunction:).
Sphinx prints some warnings (unparseable C++ definition: u'cv2.imread') but the generated html documentation actually looks ok with a link to http://docs.opencv.org/2.4/modules/highgui/doc/reading_and_writing_images_and_video.html#cv2.imread. You can edit ocv.py and remove the line that prints that warning.
The accepted answer no longer works in the new version (1.5.x) ...
import requests
import posixpath
from sphinx.ext.intersphinx import read_inventory
uri = 'http://docs.python.org/2.7/'
r = requests.get(uri + 'objects.inv', stream=True)
r.raise_for_status()
inv = read_inventory(r.raw, uri, posixpath.join)
Stubborn fool that I am, I used 2to3 and the Sphinx deprecated APIs chart to revive @david-röthlisberger's ocv.py-based answer so it'll work with Sphinx 2.3 on Python 3.5.
The fixed-up version is here:
https://gist.github.com/ssokolow/a230b27b7ea4a31f7fb40621e6461f9a
...and the quick version of what I did was:
1. Run 2to3 -w ocv.py && rm ocv.py.bak
2. Cycle back and forth between running Sphinx and renaming functions to their replacements in the chart. I believe these were the only changes I had to make in this step:
   - Directive now has to be imported from docutils.parsers.rst.
   - Replace calls to l_(...) with calls to _(...) and remove the l_ import.
   - Replace calls to env.warn with calls to log.warn, where log = sphinx.util.logging.getLogger(__name__).
Then, you just pair it with this intersphinx definition and you get something still new enough to be relevant for most use cases:
'cv2': ('https://docs.opencv.org/3.0-last-rst/', None)
For convenience, I made a small extension for aliasing intersphinx cross references. This is useful as sometimes the object inventory gets confused when an object from a submodule is imported from a package's __init__.py.
See also https://github.com/sphinx-doc/sphinx/issues/5603
###
# Workaround of
# "Intersphinx references to objects imported at package level can't be mapped."
#
# See https://github.com/sphinx-doc/sphinx/issues/5603
intersphinx_aliases = {
    ("py:class", "click.core.Group"): ("py:class", "click.Group"),
    ("py:class", "click.core.Command"): ("py:class", "click.Command"),
}


def add_intersphinx_aliases_to_inv(app):
    from sphinx.ext.intersphinx import InventoryAdapter
    inventories = InventoryAdapter(app.builder.env)

    for alias, target in app.config.intersphinx_aliases.items():
        alias_domain, alias_name = alias
        target_domain, target_name = target
        try:
            found = inventories.main_inventory[target_domain][target_name]
            try:
                inventories.main_inventory[alias_domain][alias_name] = found
            except KeyError:
                print("could not add to inv")
                continue
        except KeyError:
            print("missed :(")
            continue


def setup(app):
    app.add_config_value("intersphinx_aliases", {}, "env")
    app.connect("builder-inited", add_intersphinx_aliases_to_inv)
To use this, I paste the above code in my conf.py and add aliases to the intersphinx_aliases dictionary.

Matlab cannot call Python code that imports statsmodels

This question concerns Matlab 2014b, Python 3.4 and Mac OS 10.10.
I have the following Python file tmp.py:
from statsmodels.tsa.arima_process import ArmaProcess
import numpy as np


def generate_AR_time_series():
    arparams = np.array([-0.8])
    maparams = np.array([])
    ar = np.r_[1, -arparams]
    ma = np.r_[1, maparams]
    arma_process = ArmaProcess(ar, ma)
    return arma_process.generate_sample(100)
I want to call generate_AR_time_series from Matlab so I used:
py.tmp.generate_AR_time_series()
which gave a vague error message
Undefined variable "py" or class "py.tmp.generate_AR_time_series".
To look into the problem further, I tried
tmp = py.eval('__import__(''tmp'')', struct);
which gave me a detailed but still obscure error message:
Python Error:
dlopen(/opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so, 2): Symbol
not found: __gfortran_stop_numeric_f08
Referenced from: /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so
Expected in: /Applications/MATLAB_R2014b.app/sys/os/maci64/libgfortran.3.dylib
in /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so
I can call the function from within Python just fine, so I guess the problem is with Matlab. From the detailed message, it seems that a symbol is expected in the Matlab installation path, but of course the Matlab installation path does not contain it, since these are third-party libraries for Python.
How can I solve this problem?
Edit 1:
libgfortran.3.dylib can be found in a lot of places:
/Applications/MATLAB_R2014a.app/sys/os/maci64/libgfortran.3.dylib
/Applications/MATLAB_R2014b.app/sys/os/maci64/libgfortran.3.dylib
/opt/local/lib/gcc48/libgfortran.3.dylib
/opt/local/lib/gcc49/libgfortran.3.dylib
/opt/local/lib/libgcc/libgfortran.3.dylib
/Users/wdg/Documents/MATLAB/mcode/nativelibs/macosx/bin/libgfortran.3.dylib
Try:
setenv('DYLD_LIBRARY_PATH', '/usr/local/bin/');
For me, using the setenv approach from within MATLAB did not work. Also, MATLAB modifies the DYLD_LIBRARY_PATH variable during startup to include necessary libraries.
First, you have to find out which version of gfortran scipy was linked against: in Terminal.app, enter otool -L /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so and look for 'libgfortran' in the output.
It worked for me to copy $(MATLABROOT)/bin/.matlab7rc.sh to my home directory and change the line LDPATH_PREFIX='' in the mac section (around line 195 in my case) to LDPATH_PREFIX='/opt/local/lib/gcc49', or whatever path to libgfortran you found above.
This ensures that /opt/local/lib/gcc49/libgfortran.3.dylib is found before the MATLAB version, but leaves other paths intact.
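In short, the steps look like this (a sketch; the gcc49 directory is just the example found above, use whatever otool reports on your machine, and replace the MATLAB root with your own):
# 1. See which libgfortran scipy's extension links against
otool -L /opt/local/Library/Frameworks/Python.framework/Versions/3.4/lib/python3.4/site-packages/scipy/special/_ufuncs.so | grep gfortran

# 2. Copy MATLAB's startup settings file to your home directory
cp /Applications/MATLAB_R2014b.app/bin/.matlab7rc.sh ~/

# 3. In the Mac section of ~/.matlab7rc.sh, change LDPATH_PREFIX='' to the
#    directory found in step 1, e.g.:
#    LDPATH_PREFIX='/opt/local/lib/gcc49'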
